In November 2003, i was chided about the General coding standards for the VTP software. In short, it was claimed that the approach to C++ coding therein was behind the times, out of step with current C++ practice, particularly as described in the popular Effective C++ books by Scott Meyers.
In December, i spent a few days reading the books cover to cover, and wrote up some observations and criticisms.
Let me first say that the books, generally speaking, are wonderful. The bulk of the content is excellent explanation of difficult C++ concepts. Not only is it written clearly, but with a charming style that makes some otherwise insanely dry material enjoyable reading.
However, there are plenty of controversial items.
It says to replace:
#define ASPECT_RATIO 1.653
with:
const double ASPECT_RATIO = 1.653;
While this may be good advice in some cases, it would be really nice if Meyers mentioned why this works without causing linker conflicts. In theory, putting such a 'const' definition in a header file, which is included from multiple source files, should produce a linker error from multiple definitions of the symbol. From my own testing, it appears that it works because the compiler treats module-scoped 'const' variables as 'static' to the scope of the module! This is a baffling and unintuitive behavioral quirk. It makes Item 1's suggestion feasible, but it is left unexplained.
Furthermore, the loss of efficiency of the const variable over the #define is not just the memory consumed by the variable, but that memory multiplied by the number of modules that include it. This is potentially significant bloat that should be weighed against the (small) benefit of using a const variable over a #define, and Meyers does not mention it at all.
Item 2 advises preferring the iostream operators to printf/scanf. As far as i can tell, this is largely bogus. The entire point of the formatted I/O functions printf/scanf is to allow explicit, efficient control of how values are converted and interpreted. The << and >> operators do have means of formatting their output, but they are little-known and provably inefficient, so they are effectively a useless step backwards.
Meyers completely misses this point, and glosses right over the extremely inefficient overhead of the <fstream> methods, which go ~20 levels deep and allocate numerous chunks of memory - just to define a stream. In the real world, you can't even open a file without the huge body of <fstream> code doing obscure things like checking your 'locale' settings and allocating strings for the values you might use for commas in dates and times (!) If anyone doubts this, simply use a debugger sometime to step all the way through the standard C++ library opening a stream to read a file. I promise you'll be sufficiently horrified to never use <fstream> again.
Meyers' points about using new/delete for class objects are fine, but he fails to mention that malloc/free for non-class objects are inherently more efficient and therefore always preferable.
A C++ zealot might argue that a programmer should use "operator new" and "operator delete" in place of malloc and free (as described in MEC++ Item 8). Meyers doesn't make that argument here, nor in MEC++ Item 8, so i'm left to assume that the superior efficiency of malloc/free is uncontested.
This is all much ado about nothing, dealing with the pain of catching cases where 'new' fails. Since only large memory allocations are likely to fail, and 'new' should only be used on class objects, which are rarely large allocations, this isn't a real issue. Code can simply check the return of 'malloc' and avoid this whole C++ silliness.
Both of these are useless, since they're based on the bogus Item 10.
The (single) argument here for writing a custom 'new' operator is bogus. It's correct that 'new' is inefficient and sometimes needs an alternative. However, calling that replacement 'new' just leads to countless pitfalls. The obvious solution of calling the replacement allocator something unambiguously different from 'new' seems to elude Meyers.
Thankfully it does not say to define these for all classes. It has a note of sanity: "For some classes, it's more trouble than it's worth to implement copy constructors and assignment operators." I couldn't agree more.
Fortunately, Meyers contradicts the sweeping generalization of this item's title with an example of where you wouldn't prefer initialization. But he doesn't go far enough. Since the only argument presented for using initialization over assignment refers only to class objects, it's clear that this doesn't apply to built-in types. Too many people abuse the member initialization list by including members of built-in types. This results in code which, as Meyers correctly says, is "error-prone in the short term and difficult to maintain in the long term" - not to mention unattractive.
This is a well-described gotcha with initialization at construction time, which, although Meyers doesn't say so, is clearly an argument against the practice. The gotcha is avoided neatly by using normal assignment in the constructor to control the order of operations.
Actually, this item contains a lot of strong arguments for a case that Meyers doesn't make: unless a class must have an explicit assignment operator (e.g. if it has pointers to dynamically allocated content, and the class will be used with value assignment), it's better to omit the copy constructor completely. Otherwise, you open up all the pitfalls mentioned here: the ease of forgetting to copy a member, trouble with assignment of base classes, etc.
This item feels a bit incomplete, because it only gives examples of data members which are built-in types. How are we to interpret this advice ("everything is a function") in the case of members that are pointers, references, or compound objects that we don't want to do assignment with? This is especially puzzling in light of Item 30, which points out that it's a bad idea to simply make the pointer members private and then provide public access methods to them. One must conclude that this item's advice is either just wrong, or is missing a clause: "...only for built-in types."
The only problem here is that it seems to contradict Item 20. Reading both of these items, the reader is left wondering how a member of a class type is supposed to be contained (i.e. "layered") without exposing it directly or duplicating every one of its members.
There is much less controversial material in this second book, More Effective C++.
Fascinating. This item informed me that there are ways to get the C++ library stream classes to do formatted printing a la printf. Long-winded, inefficient ways, but ways nonetheless. It is surprising that no C++ programmer i've ever asked has known of the existence of these (setw, ios_base::precision, etc.). Perhaps they are a late addition to the standard library.
This doesn't affect my criticism of EC++ Item 2, however, since as Meyers himself says, "the iostream library generally suffers in comparison with stdio, because stdio usually results in executables that are both smaller and faster..." In fact he includes benchmark figures that show this is more than a generality; it is a certainty, with an enormous performance difference.
I found it interesting that this item specifically describes a situation in which it says to omit the copy constructor and assignment operator, as "the compiler-generated copy constructor" does all the right things. This contrasts with some C++ religions which insist on providing a copy constructor and assignment operator for every single class, due to irrational fear of the compiler's default methods.