I'm really getting tired of people with the C-hacker syndrome constantly bashing C++, talking about the numerous "problems" in C++, and boasting about how much better C is. I don't have a problem with Java or Haskell people criticizing C++, because they have good arguments, but the C people doing so are just nuts. Their bashing is all based on misinformation, a failure to understand the inner workings of C++, and plain FUD.
Let me debunk here a few myths they spread about C++.
For starters, let's get over with the "C++ produces huge executables" claim, ie. the common misconception that even a single "hello world" program gets a ridiculous executable size when done in C++. This is actually based on the behavior of some C++ compilers back in the early 90's, which had to statically link the entire C++ library into each executable. This hasn't been so for well over a decade, yet the myth still persists.
Let's make a comparison, using the gcc 4.1.2 compiler on a Linux system. A very simple C program:
#include <stdio.h>

int main(void)
{
    printf("Hello\n");
    return 0;
}
Let's compile this with "gcc -O3 -s". The size of the executable? 5836 bytes.
Let's convert that to C++:
#include <cstdio>

int main()
{
    std::printf("Hello\n");
    return 0;
}
We compile it with "g++ -O3 -s". The size of the executable? 5892 bytes. A whopping 56 bytes larger. Yeah, C++ really produces enormous binaries compared to C.
Ok, maybe I "cheated" above? Maybe I should print the text in the C++ way instead of "cheating" and using the C function? Fine:
#include <iostream>

int main()
{
    std::cout << "Hello\n";
    return 0;
}
The size of the binary? 5904 bytes. A whopping 68 bytes larger than the C version. Sure, it's larger, but the increase is negligible; it's not "huge" by any definition of the word. (When these people talk about "huge", they mean something like a 200-kilobyte size increase.)
So please, just let this one rest, will you?
This is probably the most common misconception (sometimes even a deliberate "misconception") C hackers have about C++. Some of them probably honestly believe that C++ code simply produces a slower program than the equivalent C code would, without ever having tried it in practice.
I can say from long experience that this claim is pure BS. For example, I recently made a C++ version of the great ISAAC random number generator. I encapsulated all the ugly details inside a class with a simple, clean, easy-to-use public interface (much easier than the original C library's). I have measured the speed of the original C library and of my C++ version, and there just is no speed difference at all. They are both equally fast.
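A minimal sketch of the kind of encapsulation meant here. The C-side names and signatures below (isaac_ctx, isaac_init(), isaac_next()) are stand-ins of my own for illustration, not the actual ISAAC API:

// Stand-ins for the C library's context struct and functions; the real
// ISAAC implementation would be linked in instead. These names and
// signatures are illustrative assumptions, not the actual library API.
struct isaac_ctx { unsigned state[256]; unsigned index; };
void isaac_init(isaac_ctx* ctx, unsigned seed) { /* ... */ }
unsigned isaac_next(isaac_ctx* ctx) { /* ... */ return 0; }

// The C++ wrapper: all the ugly details are hidden inside the class,
// behind a simple and clean public interface.
class IsaacRNG
{
    isaac_ctx mContext;

 public:
    explicit IsaacRNG(unsigned seed) { isaac_init(&mContext, seed); }
    unsigned next() { return isaac_next(&mContext); }
};

int main()
{
    IsaacRNG rng(12345);          // no context structs to juggle by hand
    unsigned value = rng.next();
}

Because the forwarding functions are defined inside the class, the compiler trivially inlines them: the wrapper compiles down to the same direct calls the C code would make, which is exactly why there is no speed difference.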
Most C hackers, for some reason, have the notion that if you use C++, you must use classes, and if you use classes, you must use inheritance, and if you use inheritance, you must use virtual functions, and virtual functions are enormously slower than regular C functions.
This chain of deductions is completely and absolutely false. None of those features is mandatory and, most importantly, even if you use classes and inheritance, there's nothing in the resulting code that would make it slower than using regular functions. (My C++ version of the ISAAC library above, where I encapsulated the library inside a class, is a perfect example of this.)
The speed penalty of virtual functions compared to regular functions exists, but it is greatly exaggerated. C hackers make it sound like calling a virtual function could be tens, if not hundreds, of times slower than calling a regular function. In reality a virtual function call takes just a few clock cycles more than a regular call (sure, even those few clock cycles can have an impact in some cases, but in general the difference is negligible).
C hackers also make it sound like if you use a class, all its functions are automatically virtual (as they are, for example, in Java), with the attendant speed penalty. This is, of course, simply not true. A member function is virtual only if you explicitly declare it so, and a class with no virtual functions at all is, in its underlying implementation, almost completely equivalent to a C struct, with no penalties whatsoever.
Note that even inheritance does not automatically imply virtual functions or speed penalties. A class inherited from another class, when there are no virtual functions anywhere, is no different from a C struct containing the same elements as the base class plus the new elements of the derived class. There are no size or speed penalties.
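A small illustration of these points. The exact sizes are implementation-dependent, so take the numbers in the comments as typical rather than guaranteed:

#include <cstdio>

struct CPoint { int x, y; };                // plain C struct

class Point                                 // class, no virtual functions
{
    int x, y;

 public:
    Point(int px, int py): x(px), y(py) {}
    int getX() const { return x; }          // member functions add nothing
    int getY() const { return y; }          // to the size of the object
};

class Point3D: public Point                 // inheritance, still no virtuals
{
    int z;

 public:
    Point3D(int px, int py, int pz): Point(px, py), z(pz) {}
    int getZ() const { return z; }
};

int main()
{
    // Typically prints "8 8 12": the class is laid out exactly like the
    // struct, and the derived class is just base members plus new ones.
    std::printf("%u %u %u\n",
                unsigned(sizeof(CPoint)),
                unsigned(sizeof(Point)),
                unsigned(sizeof(Point3D)));
}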
This, too, is one of the most common and persistent myths. They make it sound like anything involving templates is cryptic, extremely hard to understand, completely unreadable, and all in all a huge mess. Again, this is a complete exaggeration.
Let's look at an example:
int tableA[] = { 5, 4, 10, 2, 56, 3, 2 };
double tableB[] = { 4.2, 6, 9.34, 1, 5.25, 5, 2 };

std::sort(tableA, tableA + 7);
std::sort(tableB, tableB + 7);
This defines two tables (one containing integers and the other doubles), and then sorts both of them. (The std::sort() function takes a range of values, that is, a pointer to the first value and a pointer to the value after the last one to be sorted, which is what eg. "tableA + 7" does, as any C programmer knows.)
What does this have to do with templates? Well, surprise surprise, the above code is template code. The std::sort() function is a template function, and an instance of it will be created for every used type (in this case two types, int and double).
So where is the huge mess? Where is the cryptic code and unreadability?
In fact, implement that same code in C, using its qsort() function, and then tell me the C version is less cryptic. I would dare to say that the template code above makes the program simpler and easier to read than the equivalent C version. We didn't even have to specify how the elements of the arrays are compared, because the function uses a default comparator when none is given explicitly (though it is possible to specify a custom comparator if so desired, as shown below).
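For instance, sorting in descending order just means giving a comparator as a third argument; here the standard std::greater from <functional> does the job:

#include <algorithm>
#include <functional>

int main()
{
    int tableA[] = { 5, 4, 10, 2, 56, 3, 2 };

    // Same call as before, but with an explicit comparator as the
    // third argument: the table is sorted in descending order.
    std::sort(tableA, tableA + 7, std::greater<int>());
}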
Moreover, the above code is probably faster than the equivalent C code, because everything (including the comparison of the elements) can be inlined by the compiler. For example, when the compiler generates the code which sorts the integer table, it can most probably emit a simple inline integer comparison wherever two elements need to be compared. With the C version it has to push the values onto the stack and perform a function call through a user-defined function pointer.
(Quite ironically, this is the exact reverse of the "C++ is slow" misconception: Now it's the C version which has additional overhead because of extraneous function calls, something which the C++ version doesn't have to do.)
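For comparison, here is a sketch of the same sort done the C way with qsort() (using std::qsort as C++ exposes it); the comparison happens through a function pointer which the compiler generally cannot inline:

#include <cstdlib>

// qsort() knows nothing about the element type, so the caller must
// supply a comparison function, called through a pointer for every
// single comparison:
int compareInts(const void* a, const void* b)
{
    int va = *static_cast<const int*>(a);
    int vb = *static_cast<const int*>(b);
    return (va < vb) ? -1 : (va > vb) ? 1 : 0;
}

int main()
{
    int tableA[] = { 5, 4, 10, 2, 56, 3, 2 };
    std::qsort(tableA, 7, sizeof(int), compareInts);
}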
Ok, but how about writing, for example, a template function yourself? Well, I really can't see what's so "messy" about it. Sure, you can create messy template code if you must, but simple template functions are pretty straightforward: the only difference from a regular function is that some types have been abstracted. For example:
// Regular function:
int min(int a, int b)
{
    return a < b ? a : b;
}

// Generic version of that function, using templates:
template<typename Type>
Type min(Type a, Type b)
{
    return a < b ? a : b;
}
What's so "messy" and "unreadable" about that? I really can't understand. The only difference is that now we don't fix the type of the parameters and the return value to 'int', but instead we use a generic type (in this case named 'Type').
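Using the template version looks exactly like using the regular function; the compiler simply instantiates one version per used type:

#include <cstdio>

template<typename Type>
Type min(Type a, Type b) { return a < b ? a : b; }

int main()
{
    std::printf("%d\n", min(5, 3));      // instantiates min<int>
    std::printf("%g\n", min(4.2, 1.5));  // instantiates min<double>
}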
Ok, what about using template classes? I still can't understand what's so "messy" about them. For example:
std::vector<int> someVector;
someVector.push_back(5);
int value = someVector[0];
So the definition of 'someVector' says, inside the angle brackets, that it uses the type 'int'. So what? Is that a "huge mess" and "completely unreadable"? Honestly?
Sure, you can create really messy template code. However, the C hackers make it sound like all template code is messy, and that just isn't true. (And it's not like you couldn't create really messy code with the kind-of C "equivalent" of templates, ie. preprocessor macros. The same goes for the error messages, another common complaint.)
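To illustrate the point about macros: the classic C way of writing a "generic" min is a macro, and it comes with the well-known double-evaluation trap, which the template version simply doesn't have:

#include <cstdio>

// The C way of writing a "generic" min:
#define MIN(a, b) ((a) < (b) ? (a) : (b))

// The C++ way:
template<typename Type>
Type min(Type a, Type b) { return a < b ? a : b; }

int main()
{
    int i = 5;
    // The macro evaluates its winning argument twice: ++i runs two
    // times here, so this prints 7 instead of the expected 6.
    std::printf("%d\n", MIN(++i, 10));

    int j = 5;
    // The template evaluates each argument exactly once: prints 6.
    std::printf("%d\n", min(++j, 10));
}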
There are roughly two distinct types of C++-hating C hackers: those who despise object-oriented programming to death (based solely on prejudice, resistance to change, and FUD spread by fellow hackers), and those who consider it a great tool but think that C++ somehow does it "wrong" and that it's much better done in C, using the tools offered by C.
The first type is simply misled by prejudice. Notions like "abstraction", "data hiding" and "inheritance" are swearwords to them. For instance, they simply can't understand the concept of data hiding, ie. preventing member variables (mainly in C structs) from being accessed by outside code; they can't see why that would ever be a good thing, and they oppose the idea to death.
The majority of these C hackers are self-taught programmers (and I use the term "programmer" in a rather loose sense here) who have never actually studied the field of science called programming. They have formed strong prejudices against everything they perceive as academic and theoretical, things they believe are removed from practical programming.
However, it's the second type of C hacker which really amazes me: those who honestly believe that it's better to implement object-oriented programming in C than in C++. They somehow have the notion that C++ does it the "wrong" way, probably because of the misconceptions about the speed and efficiency of code generated by C++ compilers (as described earlier).
Basically they want to always be "in control" of everything. They don't like the idea of the compiler creating logic, data structures and code "behind the scenes", without their full control of the process.
The irony in this is that when they create their own "object-oriented programming" implementation in C, that implementation will inevitably be clumsy, very hard to understand and use, and often less efficient and/or more memory-hungry than a clean C++ implementation would be.
For example, I have seen C implementations of virtual functions where they were implemented by adding function pointers to structs. In other words, every virtual function increased the size of the struct by the size of a function pointer: the more functions, the larger the struct. This of course makes the struct larger and less efficient than the equivalent C++ class would be, which is rather ironic.
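A sketch of the difference. The exact object layout is implementation-defined, but the following is what typical compilers do:

#include <cstdio>

// Hand-rolled "virtual functions" in C style: every single instance
// carries one function pointer per "virtual" function.
struct CShape
{
    double (*area)(const CShape*);
    void (*draw)(const CShape*);
    void (*move)(CShape*, double, double);
    double x, y;
};

// The C++ way: each object carries only one hidden pointer to the
// class's shared vtable, no matter how many virtual functions there are.
class Shape
{
 protected:
    double x, y;

 public:
    virtual ~Shape() {}
    virtual double area() const = 0;
    virtual void draw() const = 0;
    virtual void move(double dx, double dy) = 0;
};

int main()
{
    // Typically prints "40 24" on a 64-bit system: the C struct pays
    // for three function pointers per object, the C++ class for one.
    std::printf("%u %u\n",
                unsigned(sizeof(CShape)), unsigned(sizeof(Shape)));
}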
The "C++ is too complicated" claim is based on the all-too-common notion that if something has a lot of features, you must learn to use all of those features, and thus the more features something has, the more complicated it is.
Another closely related notion is that additional features always add to the overall complexity (even if those features were added in order to make the usage simpler).
Neither notion is true. You don't have to learn all the features of C++ if you don't want to. As for "more features make the language more complicated", I'd say it's the exact opposite: the extra features C++ adds (compared to C) make programming simpler, not more complicated. And not only simpler, but safer and sometimes even more efficient.
And it's precisely safety where C++ excels over C every time. While you can, of course, write completely unsafe code in C++, the language offers the tools to write safe code, something which C doesn't. Moreover, this safe code is usually much simpler to use than the equivalent unsafe C alternative.
Just think about std::string and its member functions compared to C strings and the C string functions: when used properly, std::string is enormously safer than the C equivalent. Using std::string is also usually much simpler and easier than the C equivalent. (Depending on the code it can, amusingly enough, also be a lot faster.)
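A typical illustration (a sketch; the C version is written the way such code all too commonly appears in practice):

#include <string>
#include <cstring>

// The C way: the caller must get the buffer size right, or the
// concatenation silently overflows the buffer.
void greetC(char* buffer, const char* name)
{
    std::strcpy(buffer, "Hello, ");
    std::strcat(buffer, name);      // overflow if the name doesn't fit
}

// The C++ way: the string grows as needed, so no overflow is possible,
// and the code is shorter and clearer to boot.
std::string greetCpp(const std::string& name)
{
    return "Hello, " + name;
}

int main()
{
    char buffer[16];                // hope the name fits...
    greetC(buffer, "world");
    std::string greeting = greetCpp("world");
}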
"Class member functions should be avoided because they increase the size of the class. Instead, regular functions should be created, which take the class as parameter."
Some people, even today, still have this odd misconception that adding a member function to a class will increase its size. The more member functions it has, the larger the class, and the more memory each instance of that class will consume.
Of course this is not true, and never has been. A member function is, internally, just like a regular function. It's only its scope, at the language level, that is defined by putting it inside the class definition. (It also means that the function can be called on an object without having to pass that object explicitly as a parameter.)
If you add a virtual function to a class which previously had none, that will increase the size of the class by one pointer. However, this only happens with the first virtual function. After that you can add any amount of virtual functions without the class growing any further. If a class already has virtual functions, there's no need to avoid adding more; it will not make any difference.
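This is easy to verify. The exact sizes are implementation-dependent, but the relations in the comments below are what typical compilers produce:

#include <cstdio>

class ManyFunctions            // many member functions, no virtuals
{
    int value;

 public:
    int get() const { return value; }
    void set(int v) { value = v; }
    void increment() { ++value; }
    void decrement() { --value; }
};

class OneVirtual               // the first virtual adds one pointer
{
    int value;

 public:
    virtual ~OneVirtual() {}
};

class ManyVirtuals             // further virtuals add nothing
{
    int value;

 public:
    virtual ~ManyVirtuals() {}
    virtual void f() {}
    virtual void g() {}
    virtual void h() {}
};

int main()
{
    // Typically prints "4 16 16" on a 64-bit system: member functions
    // cost nothing, and only the first virtual function grows the class.
    std::printf("%u %u %u\n",
                unsigned(sizeof(ManyFunctions)),
                unsigned(sizeof(OneVirtual)),
                unsigned(sizeof(ManyVirtuals)));
}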
"Using templates increases code size and wastes memory and thus they should be avoided."
This is a rather odd claim when one thinks about what the alternative would be. If, for example, you have to implement the same function for integers and doubles, there are exactly two alternatives: create a single template function, or write two functions, one taking integers and another taking doubles. How exactly does the template version "increase code size" compared to the alternative? The only thing the template version does is help avoid code repetition.
And anyway, even when, for example, class code is duplicated for every used type, the size increase is negligible on modern systems. Moreover, duplicating the code (instead of some alternative using void pointers or such) usually makes the code more efficient, because the compiler can optimize it for each specific type, instead of generating one generic routine which accesses the data through void pointers or whatever.
In fact, using templates sometimes reduces code size instead of increasing it, when compared to a templateless alternative. This is because the compiler can better optimize the code on a per-type basis (creating smaller code for types which can be optimized more than others).
Moreover, template functions, and especially template classes, have the peculiar feature that if a certain function or member function is never called anywhere, that function is never even instantiated: nothing of it will end up in the object file, and thus there will be absolutely no trace of it in the final executable. Regular functions, by contrast, all end up in the object file, and it's up to the linker to remove code which is never called. Many linkers don't even bother, and thus code which is never called ends up increasing the size of the final executable.
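A small demonstration of this. An uninstantiated member function is never code-generated, which is why the following compiles and links even though neverCalled() refers to a function that doesn't exist anywhere:

#include <cstdio>

template<typename Type>
class Container
{
    Type value;

 public:
    void set(Type v) { value = v; }

    // This member refers to a function that is defined nowhere, yet
    // the program builds fine, because the member is never called and
    // thus never instantiated:
    void neverCalled() { nonExistentFunction(value); }
};

int main()
{
    Container<int> c;
    c.set(5);          // only set() gets instantiated
    std::printf("ok\n");
}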
Template code often also makes it easier to use memory efficiently. For example, std::vector uses memory much more efficiently than the equivalent container in, for instance, Java.
This is another one of those misconceptions which is based solely on historical situations (somewhere in the early and mid 90's), with complete disregard for the current situation.
The misconception (which many C hackers, and by extension sadly even some C++ programmers, have) is that the Standard Template Library (named like that for historical reasons, although since 1998 it's simply part of the C++ standard library) is unreliable and shouldn't be used in portable code, if at all.
This notion comes from the fact that in the early and mid 90's, when C++ (and its libraries) had still not been officially standardized, there was great variation in which parts of the then-unofficial STL collection different compilers implemented, and how. Relying on some feature of the STL with one compiler could mean that your program might not compile with a different compiler.
However, what was formerly known as the STL was officially standardized in 1998, and all modern compilers have had a stable implementation of it for a long, long time. Not using the STL for fear that some compiler from the 90's might not support all of its features is completely foolish.
Another similar prejudice is that the STL is inefficient and shouldn't be used for that reason. In other words, data containers and algorithms should be implemented explicitly rather than relying on what the standard library offers.
Again, this might have been somewhat true with some compilers in the early 90's. It hasn't been true for about a decade. Most STL implementations are about as efficient as they can get. For instance, try writing a generic sorting algorithm which clearly beats std::sort() in most tests with all possible datatypes. I'll wait. (I won't hold my breath, though.)
(Many compilers suffer from relatively inefficient memory allocation, which makes some of the STL containers slowish for that reason. However, that's not a fault of the STL, but of the low-level C library memory allocator implementation, and all programs suffer from the same problem unless they implement their own specialized memory allocation scheme. Yes, including all C programs; it affects them equally.)