

C++ FQA: Defective?

There's a somewhat popular website named C++ FQA Lite whose main purpose seems to be to bash C++ in any way it can.

Rather than going through the entire site, I'll just comment on its "Defective C++" subpage. As it says, it's supposed to be a page that "summarizes the major defects of the C++ programming language." If this summary of the worst defects of the language doesn't stand up to scrutiny, I don't think it's even necessary to go through anything else.

General

One could, rather ironically, say that the arguments on this page are themselves quite "defective," because most of them are dishonest and fallacious. It relies on tactics like exaggerating minor nuisances out of all proportion, inventing flaws that aren't there, and repeating the same argument over and over.

It also doesn't seem able to decide whether it's talking about "defects" from the point of view of a beginner programmer or from the point of view of a gigantic multi-million-line project developed by a big company.

Let's see the individual arguments.

No compile time encapsulation

In naturally written C++ code, changing the private members of a class requires recompilation of the code using the class. When the class is used to instantiate member objects of other classes, the rule is of course applied recursively.

Yes, and?

This makes C++ interfaces very unstable - a change invisible at the interface level still requires to rebuild the calling code, which can be very problematic when that code is not controlled by whoever makes the change. So shipping C++ interfaces to customers can be a bad idea.

Yes, binary incompatibility between different versions of a library can in some cases cause problems when you are implementing libraries. This problem isn't limited to C++; it happens with many compiled languages. It can also happen for entirely different reasons, and libraries break backwards compatibility all the time. (For example, a public interface might change because the old one was deemed untenable.)

But I would ask: how much does this really happen, and why should the average programmer (most of whom do not work on libraries that are distributed to others) be concerned?

I can speak from experience. I have been programming in C++ professionally for almost 10 years, and as a hobby for much longer than that. Guess how many times I have encountered this "defect" in any way, shape or form.

(Sure, it probably happens to some people. It has yet to happen to me. It doesn't seem like a very common problem in practice.)
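(And for the cases where it does matter, C++ has a well-known workaround: the pimpl idiom, where the private members live behind a single pointer. A minimal sketch, assuming a hypothetical Widget class:)

    // widget.h -- the private members are hidden behind a pointer,
    // so changing them does not force client code to recompile.
    #include <memory>

    class Widget {
    public:
        Widget();
        ~Widget();                   // defined in widget.cpp, where Impl is complete
        void draw();
    private:
        struct Impl;                 // defined only in widget.cpp
        std::unique_ptr<Impl> impl;
    };

The header never changes when Impl does, so client code only has to relink instead of recompile.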

Well, at least when all relevant code is controlled by the same team of people, the only problem is the frequent rebuilds of large parts of it.

And this is a major defect because...?

Outstandingly complicated grammar

"Outstandingly" should be interpreted literally, because all popular languages have context-free (or "nearly" context-free) grammars, while C++ has undecidable grammar. If you like compilers and parsers, you probably know what this means. If you're not into this kind of thing, there's a simple example showing the problem with parsing C++: is AA BB(CC); an object definition or a function declaration? It turns out that the answer depends heavily on the code before the statement - the "context". This shows (on an intuitive level) that the C++ grammar is quite context-sensitive.

And this should concern the average programmer how, exactly? How is this a "major defect"?
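(To be fair, the ambiguity in the quote is real. Here it is spelled out; which meaning AA BB(CC); gets depends entirely on what CC happens to be:)

    struct AA { AA(int) {} };

    namespace case1 {
        struct CC {};
        AA BB(CC);        // CC is a type: this declares a function BB
    }                     // taking a CC and returning an AA

    namespace case2 {
        int CC = 42;
        AA BB(CC);        // CC is a variable: this defines an object BB
    }                     // constructed from CC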

In practice, this means three things. First, C++ compiles slowly (the complexity takes time to deal with).

Now, this is a perfect example of a practically non-existent "defect" blown completely out of proportion. The writer repeats this argument over and over (even devoting, as we will see, entire sections to this argument alone).

Why exactly should the average programmer be concerned if a C++ program "compiles slowly"? How is this a "major defect"?

Sure, if a gigantic multi-million-line project takes an hour to compile, that can be a nuisance. But how many average C++ programmers are involved in such projects (and even among those, for how many is it a real annoyance)? To the average programmer it makes little difference whether the program compiles in 1 second or 10 seconds.

(Besides, compilers and computers have gotten pretty fast, so the complaint is less and less valid as time passes.)

Second, when it doesn't compile, the error messages are frequently incomprehensible (the smallest error which a human reader wouldn't notice completely confuses the compiler).

How frequently is "frequently"? Do you have any actual statistics? Any actual measurement of people getting "incomprehensible" error messages?

(Sure, C++ is admittedly not an optimal first language for a beginner. It takes experience and knowledge both to program in C++ and to easily understand what the compiler is saying. However, to the average experienced C++ programmer the vast majority of error messages are quite clear. Besides, the "complexity" of the error messages has nothing to do with the grammar not being context-free, except perhaps in a very few obscure cases.)

And three, parsing C++ right is very hard, so different compilers will interpret it differently, and tools like debuggers and IDEs periodically get awfully confused.

Any actual examples? Exactly which compilers will interpret the program differently? (And how exactly does the syntax not being context-free cause this?)

And if parsing C++ is "very hard", why exactly should the average programmer be concerned?

No way to locate definitions

This entire section exists solely to repeat, once again, the claim that C++ programs "compile slowly."

Even if they do, so what? Why should the average programmer be concerned?

No run time encapsulation

Yes, you can access arrays out of bounds, and the language standard does not require compilers to add bounds checks. You can dereference pointers that point to invalid locations, and the standard does not require the compiler to detect that either.

For some projects, that can be a completely valid reason to choose a language where no such things can happen.
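(That said, checked access has always been there for those who want it; a minimal sketch using std::vector::at:)

    #include <stdexcept>
    #include <vector>

    void example() {
        std::vector<int> v{1, 2, 3};
        // v[10] would be unchecked: undefined behaviour, no diagnostic required.
        try {
            int x = v.at(10);   // checked: throws std::out_of_range
            (void)x;
        } catch (const std::out_of_range&) {
            // handle the error instead of corrupting memory
        }
    }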

No binary implementation rules

It's hard to discern what the actual argument here is.

Yes, because C++ is a so-called "unsafe" language, some memory corruption bugs can be difficult to debug. That can indeed be a big problem at times.

However, this section doesn't seem to be arguing only that. It talks about "binary implementation rules" and how C is much "easier" in this regard, and about standard ABIs... It feels like even the writer himself can't decide what his argument is here, and is just writing down whatever comes to mind. Some coherent rewriting would be in order.

Having programmed in C++ professionally for about 10 years (and longer as a hobby), the lack of a standardized ABI has been a problem for me exactly zero times. I don't even remember when the last time was that I had to deal with core dumps directly (even though I do most of my development on Linux). There have been a few times when a memory corruption bug was really difficult to track down, but so far those have been quite rare. (I'm not claiming they wouldn't be more frequent for a beginner programmer.)

No reflection

Some languages implement reflection, others don't. So what?

Listing features that some programming languages offer and that others (such as C++) don't doesn't make for much of an argument.

(I also love how he accuses C++ of having a grammar that's way too complicated, but later wants even more features piled on top of it.)

Besides, he argues that reflection makes serialization easier. Not true: Objective-C, for example, has full support for reflection, yet serialization there is not much easier than it is in C++.

How many average programmers find the lack of reflection a major problem?

Very complicated type system

In C++, we have standard and compiler-specific built-in types, structures, enumerations, unions, classes with single, multiple, virtual and non-virtual inheritance, const and volatile qualifiers, pointers, references and arrays, typedefs, global and member functions and function pointers, and templates, which can have specializations on (again) types (or integral constants), and you can "partially specialize" templates by pattern matching their type structure (for example, have a specialization for std::vector<MyRetardedTemplate<T> > for arbitrary values of T), and each template can have base classes (in particular, it can be derived from its own instantiations recursively, which is a well-known practice documented in books), and inner typedefs, and... We have lots of kinds of types.

Yes. So what?

(I find it funny how sometimes C++ has too many features and sometimes it doesn't have enough. Anything goes, as long as it can be made to sound like a bad thing.)
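(For reference, the "derived from its own instantiations" practice the quote alludes to is the curiously recurring template pattern, CRTP. A minimal sketch:)

    #include <iostream>

    // CRTP: the derived class passes itself as the template
    // argument of its own base class, enabling static dispatch.
    template <typename Derived>
    struct Shape {
        void describe() const {
            static_cast<const Derived*>(this)->name();  // no virtual call
        }
    };

    struct Circle : Shape<Circle> {
        void name() const { std::cout << "circle\n"; }
    };

    int main() {
        Circle c;
        c.describe();  // prints "circle"
    }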

Naturally, representing the types used in a C++ program, say, in debug information, is not an easy task. A trivial yet annoying manifestation of this problem is the expansion of typedefs done by debuggers when they show objects (and compilers when they produce error messages - another reason why these are so cryptic). You may think it's a StringToStringMap, but only until the tools enlighten you

Some debuggers are better than others, and compilers have gotten a lot better at producing informative error messages. (Usually, when you get an error caused by wrong usage of a standard template, skipping to the first message that refers to your own code gives you something very clear and simple.)

The types being complicated is not a major problem. How about mentioning the strengths of the type system instead of concentrating exclusively on some debugger information?

But wait, there's more! C++ supports a wide variety of explicit and implicit type conversions, so now we have a nice set of rules describing the cartesian product of all those types, specifically, how conversion should be handled for each pair of types. For example, if your function accepts const std::vector<const char*>& (which is supposed to mean "a reference to an immutable vector of pointers to immutable built-in strings"), and I have a std::vector<char*> object ("a mutable vector of mutable built-in strings"), then I can't pass it to your function because the types aren't convertible.

They are different, independent classes. There's no practical difference between those two types and having, say, two classes named "vector_of_char" and "vector_of_const_char". Just because the names look similar doesn't automatically make the classes related; their implementations could be completely different. (The implementation of std::vector could, in fact, be completely different for those two types.)

How exactly do you suggest template classes become related when some of their type parameters are implicitly convertible? How exactly would the code work? How would you express that in the template definition?
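(The practical answer, of course, is that you don't need the types to be related; a minimal sketch of the usual workarounds, with hypothetical names:)

    #include <vector>

    // Option 1: make the function a template, so it accepts a vector
    // of anything usable as a read-only string.
    template <typename T>
    void print_all(const std::vector<T>& v) { /* ... */ }

    // Option 2: build a const view explicitly; each char* element
    // converts implicitly to const char*.
    void demo(const std::vector<char*>& mutable_strings) {
        std::vector<const char*> const_view(mutable_strings.begin(),
                                            mutable_strings.end());
        print_all(const_view);
    }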

Very complicated type-based binding rules

This entire section repeats the same argument about long error messages.

How about some new arguments?

Defective operator overloading

C++ operator overloading has all the problems of C++ function overloading (incomprehensible overload resolution rules), and then some. For example, overloaded operators have to return their results by value - naively returning references to objects allocated with new would cause temporary objects to "leak" when code like a+b+c is evaluated. That's because C++ doesn't have garbage collection, since that, folks, is inefficient. Much better to have your code copy massive temporary objects and hope to have them optimized out by our friend the clever compiler.

Can you show me a C++ compiler that doesn't implement return value optimization?

Besides, if you are so worried about using operator overloading on your gigantic matrices, then don't use operator overloading on your gigantic matrices. It's not like anybody forces you to.
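(For context, here is the by-value operator the quote complains about; any compiler from the last couple of decades elides the copies via return value optimization, and since C++11 the result can be moved anyway. A minimal sketch, with a hypothetical Matrix type:)

    #include <cstddef>
    #include <vector>

    struct Matrix {
        std::vector<double> data;
    };

    Matrix operator+(const Matrix& a, const Matrix& b) {
        Matrix result = a;                      // one copy of the data
        // (sketch assumes both operands have the same size)
        for (std::size_t i = 0; i < result.data.size(); ++i)
            result.data[i] += b.data[i];
        return result;                          // NRVO/move: no extra copy
    }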

This section seems to be complaining that many features which may be useful for some things can also be misused, and that you need to know how a feature works in order to know whether it's efficient. Well, duh.

You could use the exact same argument to claim that, for example, a linked list is a defective data container.

Defective exceptions

This could be a good idea in some cases if C++ exceptions were any good. They aren't, and can't be - as usual, because of another C++ "feature", the oh-so-efficient manual memory management. If we use exceptions, we have to write exception-safe code - code which frees all resources when the control is transferred from the point of failure (throw) to the point where explicit error handling is done (catch). And the vast majority of "resources" happens to be memory, which is managed manually in C++. To solve this, you are supposed to use RAII, meaning that all pointers have to be "smart" (be wrapped in classes freeing the memory in the destructor, and then you have to design their copying semantics, and...).

This is a perfect example of trying to make something sound like a bad thing when in fact it isn't. Making your functions exception-safe by using RAII actually makes them simpler, safer and easier to read. Writing raw allocations and deallocations is going back to C, where functions are long, complicated, error-prone and full of gotos (to handle all those pesky deallocations in case of an error).

It's also a perfect example of dishonest exaggeration. ("And the vast majority of "resources" happens to be memory, which is managed manually in C++." The vast majority of code inside functions does not allocate or deallocate memory at all. Besides, if you are managing all your memory manually in your C++ program, you are doing it wrong.)
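(To make the RAII point concrete, a minimal sketch of what exception-safe code actually looks like in practice:)

    #include <fstream>
    #include <memory>
    #include <vector>

    void process() {
        auto buffer = std::make_unique<int[]>(1024);  // freed automatically
        std::vector<int> results;                     // likewise
        std::ofstream log("out.txt");                 // closed automatically

        // ... work that may throw ...
        // Whatever path we leave this function by, including an
        // exception, every resource above is released. No gotos needed.
    }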

Exception safe C++ code is almost infeasible to achieve in a non-trivial program.

Now this is just outright bullshit, pure and simple.

At the bottom line, throw/catch are about as useful as longjmp/setjmp (BTW, the former typically runs faster, but its mere existence makes the rest of the code run slower, which is almost never acknowledged by C++ aficionados).

It seems that as we get to the end of the page, the amount of utter bullshit only increases.

Comparing C++ exceptions to longjmp/setjmp is a dishonest distortion, backed up by the bullshit claim that it's "infeasible" to write exception-safe code.

The mere existence of exception support does not make the rest of the code run slower. C++ exceptions were designed precisely so that if no exceptions are thrown, the rest of the code is not affected in any way (even with exception support compiled in). This was actually one of the biggest priorities of the standardization committee when creating the C++98 standard.

Duplicate facilities

This is again inventing flaws that aren't there.

But let's entertain for a moment the idea that the claim is completely true. How exactly is this a "major defect" of the language? You have more than one way of doing the same thing (even though in practice there are differences that often make one way better than another in a particular situation). How exactly is that a "defect"?

No high-level built-in types

The author is now just repeating himself to no end.

Cryptic multi-line or multi-screen compiler error messages, debuggers that can't display the standard C++ types and slow build times

*sigh*

There's also outdated information:

you can't initialize std::vector with {1,2,3} or initialize an std::map with something like {"a":1,"b":2} or have large integer constants like 3453485348545459347376
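(Indeed: since C++11 the first two work out of the box. Arbitrary-precision integer literals, admittedly, are still not built in, but that is equally true of C and most other compiled languages.)

    #include <map>
    #include <string>
    #include <vector>

    // Valid since C++11:
    std::vector<int> v{1, 2, 3};
    std::map<std::string, int> m{{"a", 1}, {"b", 2}};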

And some really strange inventions:

the ability to throw anything, but without the ability to catch it later

WTF?
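(C++ has had a catch-all handler since before standardization; a minimal sketch:)

    #include <iostream>

    int main() {
        try {
            throw 42;        // you can indeed throw anything...
        } catch (...) {      // ...and catch it with the catch-all handler
            std::cout << "caught something\n";
        }
    }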

And what's this:

However, the most costly problem with having no new high-level built-in types is probably the lack of easy-to-use containers.

The further we get into the page, the more bullshit comes up. He's just pulling these things out of his ass.

Manual memory management

I find it quite hilarious that he could have presented some actual good arguments related to this, but instead decides to present utter bullshit:

Manual memory management is incompatible with features such as exceptions & operator overloading, and makes working with non-trivial data structures very hard

Defective metaprogramming facilities

They compile forever.

*sigh*

What does it tell you when someone has to keep repeating the same argument over and over?

Unhelpful standard library

Bullshit.

Defective inlining

More bullshit.

His argument seems to be that since some people abuse inlining in situations where it offers no advantage, the feature itself is somehow defective.
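(For context: inline is only a request, and compilers are free to ignore it or to inline functions that never asked for it. A minimal sketch:)

    // The keyword is a hint (and an ODR exemption), not a command;
    // whether the calls below are actually inlined is up to the optimizer.
    inline int square(int x) { return x * x; }

    int twice_squared(int x) { return square(square(x)); }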

And what do you know:

making the code compile more slowly

Repeating the same argument over and over doesn't make it any more valid, nor is it a good form of argumentation. It only shows that you are running out of arguments and have to keep recycling the old ones.

Implicitly called & generated functions

And we close with even more bullshit, and invented flaws.

(I especially like how inconsistent he is. For example, he advocates garbage collection, which means a large amount of code generated behind the scenes to interact with a runtime environment containing even more background code you have no control over. Yet when the C++ compiler generates code, it's somehow a "major defect". He's just inventing these "defects" as he goes.)
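(For reference, the implicitly generated functions in question are the special member functions: default constructor, copy/move constructors, copy/move assignment, and destructor. Since C++11 you can control their generation explicitly; a minimal sketch:)

    struct Widget {
        Widget() = default;                          // keep the generated default ctor
        Widget(const Widget&) = delete;              // forbid the generated copy ctor
        Widget& operator=(const Widget&) = delete;   // forbid generated copy assignment
    };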

And what do you know, once again:

Implicit generation of functions is problematic because it slows compilation down

This has just become ridiculous.

Conclusion

The vast majority of that page is just dishonest exaggeration, accentuating the negative while ignoring the positive, outright fabricating flaws that aren't there, and repeating the same arguments over and over.

Even in the few instances where a good argument could have been made, the author chose to argue for something ridiculous instead.

The author seems to have an unhealthy obsession with "slow compile times". Yes, keep pounding on that argument. Maybe it will become valid if you repeat it a few dozen times more.

