A rant against flow control macros

Date: January 6, 2005 / year-entry #5
Tags: other
Orig Link: https://blogs.msdn.microsoft.com/oldnewthing/20050106-00/?p=36783
Comments: 53
Summary:I try not to rant, but it happens sometimes. This time, I'm ranting on purpose: to complain about macro-izing flow control. No two people use the same macros, and when you see code that uses them you have to go dig through header files to figure out what they do. This is particularly gruesome when...

I try not to rant, but it happens sometimes. This time, I'm ranting on purpose: to complain about macro-izing flow control.

No two people use the same macros, and when you see code that uses them you have to go dig through header files to figure out what they do.

This is particularly gruesome when you're trying to debug a problem with some code that somebody else wrote. For example, say you see a critical section entered and you want to make sure that all code paths out of the function release the critical section. It would normally be as simple as searching for "return" and "goto" inside the function body, but if the author of the program hid those operations behind macros, you would miss them.

HRESULT SomeFunction(Block *p)
{
 HRESULT hr;
 EnterCriticalSection(&g_cs);
 VALIDATE_BLOCK(p);
 MUST_SUCCEED(p->DoSomething());
 if (andSomethingElse) {
  LeaveCriticalSection(&g_cs);
  TRAP_FAILURE(p->DoSomethingElse());
  EnterCriticalSection(&g_cs);
 }
 hr = p->DoSomethingAgain();
Cleanup:
 LeaveCriticalSection(&g_cs);
 return hr;
}

[Update: Fixed missing parenthesis in code that was never meant to be compiled anyway. Some people are so picky. - 10:30am]

Is the critical section leaked? What happens if the BLOCK fails to validate? If DoSomethingElse fails, does DoSomethingAgain get called? What's with that unused "Cleanup" label? Is there a code path that leaves the "hr" variable uninitialized?

You won't know until you go dig up the header file that defined the VALIDATE_BLOCK, TRAP_FAILURE, and MUST_SUCCEED macros.
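
For illustration only, suppose the definitions turned out to be something like this (I'm making these up; the real ones could just as easily do something completely different, which is the point):

#define VALIDATE_BLOCK(p) \
 do { if (!(p)) { hr = E_INVALIDARG; goto Cleanup; } } while (0)

#define MUST_SUCCEED(expr) \
 do { hr = (expr); if (FAILED(hr)) goto Cleanup; } while (0)

#define TRAP_FAILURE(expr) \
 do { if (FAILED((expr))) return E_FAIL; } while (0)

With these particular definitions the "unused" Cleanup label gets used after all, hr is always set before it is returned, and TRAP_FAILURE bails out of the function entirely. Change the definitions and the answers to every one of those questions change with them.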

(Yes, the critical section question could be avoided by using a lock object with destructor, but that's not my point. Note also that this function temporarily exits the critical section. Most lock objects don't support that sort of thing, though it isn't usually that hard to add, at the cost of a member variable.)
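
(For the curious, a bare-bones sketch of such a lock object, with the extra member variable, might look like the following. The class name is made up and it assumes <windows.h>; it is an illustration, not production code.)

class CritSecHolder {
public:
 CritSecHolder(CRITICAL_SECTION *pcs) : m_pcs(pcs), m_cHeld(0) { Enter(); }
 ~CritSecHolder() { while (m_cHeld > 0) Leave(); } // release whatever is still held
 void Enter() { EnterCriticalSection(m_pcs); m_cHeld++; }
 void Leave() { m_cHeld--; LeaveCriticalSection(m_pcs); }
private:
 CRITICAL_SECTION *m_pcs;
 int m_cHeld; // the extra member variable: how many times we currently hold the lock
};

The function above would then declare "CritSecHolder lock(&g_cs);" at the top and call lock.Leave() / lock.Enter() around the DoSomethingElse call; no matter which return or goto is taken, the destructor releases the critical section the right number of times.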

When you create a flow-control macro, you're modifying the language. When I fire up an editor on a file whose name ends in ".cpp" I expect that what I see will be C++ and not some strange dialect that strongly resembles C++ except in the places where it doesn't. (For this reason, I'm pleased that C# doesn't support macros.)

People who still prefer flow-control macros should be sentenced to maintaining the original Bourne shell. Here's a fragment:

ADDRESS	alloc(nbytes)
    POS	    nbytes;
{
    REG POS	rbytes = round(nbytes+BYTESPERWORD,BYTESPERWORD);

    LOOP    INT	    c=0;
	REG BLKPTR  p = blokp;
	REG BLKPTR  q;
	REP IF !busy(p)
	    THEN    WHILE !busy(q = p->word) DO p->word = q->word OD
		IF ADR(q)-ADR(p) >= rbytes
		THEN	blokp = BLK(ADR(p)+rbytes);
		    IF q > blokp
		    THEN    blokp->word = p->word;
		    FI
		    p->word=BLK(Rcheat(blokp)|BUSY);
		    return(ADR(p+1));
		FI
	    FI
	    q = p; p = BLK(Rcheat(p->word)&~BUSY);
	PER p>q ORF (c++)==0 DONE
	addblok(rbytes);
    POOL
}

Back in its day, this code was held up as an example of "death by macros", code that relied so heavily on macros that nobody could understand it. What's scary is that by today's standards, it's quite tame.

(This rant is a variation on one of my earlier rants, if you think about it. Exceptions are a form of nonlocal control flow.)


Comments (53)
  1. SynMan says:

    I guess macros were superseded by inline, templates, constants, RTTI, etc. The only valid use I can think of is some of the maps in MFC (or when you don’t have advanced C++ functions)

    Yours

  2. Sriram says:

    Raymond - you and Paul Graham will never get along. :-) Have you tried Lisp macros?

  3. Sriram says:

    Also, I’ve dug through the Rotor sources and they use a lot of macros all over the place – especially in exception handling, etc.

  4. Adrian says:

    This is an excellent point, but to be honest, I don’t see this kind of nonsense much any more. (The Message Map macros in ATL and MFC bother me, though–they hide too much.)

    Obscuring the code flow is bad, but filigree code flows are at least as bad. It seems most OO programmers have forgotten the lessons of structured programming, as if the two are incompatible. Premature returns, breaks, continues, gotos, setjmps/longjmps, exceptions — sometimes these have their place, but those times are much rarer than most programmers appear to believe.

  5. Avid Reader says:

    What are you looking at the source code for?

    " Pah, source-level debugging! Once the optimizer’s done with your code source-level debugging is useless."

    http://weblogs.asp.net/oldnewthing/archive/2004/11/11/255800.aspx

    Great post!

  6. WHAT?! You just mentioned C#?!?! How dare you!! This is NOT a .NET blog!!!!!

    :-/

  7. Gerson Kurz says:

    ATLBASE.H, #define IMPL_THUNK – nuff said ;)

    A nice use of macros using the same header twice: (yes I can hear you screaming already ;)

    step 1:

    #ifdef STRINGARRAY_BUILD
    #undef DECLARE_STRING
    #define DECLARE_STRING(__ID,__string) \
        { TEXT(#__ID), &__ID },
    #else
    #ifdef CREATESTRINGS_BUILD
    #define DECLARE_STRING(__ID,__string) \
        LPCTSTR __ID = __string;
    #else
    #define DECLARE_STRING(__ID,__string) \
        extern LPCTSTR __ID;
    #endif
    #endif

    step 2: loadsa DECLARE_STRING macros in header.h

    step 3: in header.cpp

    #include "header.h"

    #define STRINGARRAY_BUILD

    #include "header.h"

    neat, eh?

    On a totally unrelated note, does everyone know the difference between the __TEXT and TEXT macros? No, they are NOT equal. Try:

    #define xyz "123"

    LPCTSTR a = TEXT(xyz);

    LPCTSTR b = __TEXT(xyz);

    works fine UNLESS you have UNICODE #defined ;)

  8. Senor Coconut says:

    Do

    #define unless(x) if(!(x))

    #define until(x) while(!(x))

    count?

    James Curran: I’d guess that

    #define LOOP while(1) {

    #define DONE break;

  9. asdf says:

    Macros are great. My only complaint against them is that nobody has added an extension to swallow up the first { or ; token that comes after the usage of the macro. I use unless() and until() and I also use:

    template <class T, unsigned S> char (& lengthof(T (&)[S])) [S];

    #define lengthof(x) (sizeof(lengthof(x)))

    #define widthof(x) sizeof(x[0])

    #define numbits(x) (sizeof(x)*CHAR_BIT)

    And now for something even more evil^H^H^H^Huseful: I have this custom build tool that allows me to embed perl code right in the middle of source code and it acts just like PHP or ASP except you generate C/C++ code on the fly instead of HTML.

  10. Raymond Chen says:

    Gerson, asdf: Noting of course that your macros aren’t flow control macros.

    Avid Reader: You’re changing the subject. This is about reading code not stepping through it with a debugger.

  11. Mike Dunn says:

    Raymond: Whatever you do, don’t look at the sample code in the WMP Format SDK. Example of an evil macro [this will probably come out bad in HTML, but oh well]:

    #define CORg(hResult) \
        do \
        { \
            hr = (hResult); \
            if (FAILED(hr)) \
            { \
                goto Error; \
            } \
        } \
        while (fFalse)

    Note the use of a local variable called "hr" (which must be an HRESULT that you have to declare) and the goto to a specially-named label.
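
    So a caller ends up looking something like this (just a sketch; SomeCall and SomeOtherCall are stand-ins, but the hr variable and the Error label have to be spelled exactly that way):

    HRESULT DoStuff()
    {
        HRESULT hr = S_OK;        // the macro silently assumes a local named "hr"
        CORg( SomeCall() );       // on failure, jumps to the Error label below
        CORg( SomeOtherCall() );
    Error:                        // ...and a label named "Error"
        return hr;
    }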

  12. Jon Wiswall says:

    Side note… RAII kind of moots the original problem you’re pointing out. The critical section acquisition should have been performed by a C++ object with a destructor. If there was

    CCriticalSectionLockGrant cs_grant(g_cs);

    cs_grant.Acquire();

    at the top of the function, one could reasonably assume that the grant would function properly no matter how weirdly control flow ended up progressing. Flow-control code is ugly at the best of times. Deeply-nested "if (SUCCEEDED(..)) { if (SUCCEEDED(..)) { } }" statements are just as hard to read and maintain as "linear looking" code with macros.

    Compilers and editors nowadays handle long descriptive identifiers just fine. If you must use flow-control macros (and some of us do), put what they do right in the name. My group has, for example, IFNOTNTSUCCESS_EXIT to replace the typical "if (!NT_SUCCESS(status = …)) return status;" pattern. Coupled with destructors, the code appears to run in a straight line with appropriate error code handling. The names of the macros describe exactly what they do.

    (Further, not to antagonize Raymond, but perhaps a better source browser is in order if looking up the definitions of macros is too hard. What happens when you have to go figure out what that Block::DoSomething function did?)

  13. Vorn says:

    …I’m gonna go hide in the corner now. This code is evil beyond belief.

    Vorn

  14. autist0r says:

    If you like macros, look at the OpenSSL (http://www.openssl.org) code. :p

  15. Mike Swaim says:

    Pro*C anyone?

    (It’s an Oracle precompiler that lets you embed SQL directly in your C/C++ code. It can generate really interesting code if you make a mistake.)

  16. Anon says:

    (Yes, the critical section question could be avoided by using a lock object with destructor, but that’s not my point. Note also that this function temporarily exits the critical section. Most lock objects don’t support that sort of thing, though it isn’t usually that hard to add, at the cost of a member variable.)

    As a related aside, "Unlock" objects can be useful in this situation too (at the cost of an unnecessary Lock / Unlock if an exceptional code path is taken). The idea is to unlock in the constructor & re-lock in the destructor.

    I suppose it would be less readable to a newcomer than LeaveCriticalSection / EnterCriticalSection though.
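
    A bare-bones sketch of such an "Unlock" object, assuming a plain CRITICAL_SECTION (the class name is made up):

    class CritSecUnlock {
    public:
     CritSecUnlock(CRITICAL_SECTION *pcs) : m_pcs(pcs)
      { LeaveCriticalSection(m_pcs); }  // unlock in the constructor
     ~CritSecUnlock()
      { EnterCriticalSection(m_pcs); }  // re-lock in the destructor
    private:
     CRITICAL_SECTION *m_pcs;
    };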

  17. K Biel says:

    Raymond,

    It seems obvious to me that the macros in the bourne shell code were used to emulate the look of a bourne shell script. Not that I am recommending this approach, but it doesn’t really support your point as it is a special case.

  18. Byron Ellacott says:

    As people pointed out in your rant against exceptions, cleanup code is what try…finally was introduced to handle. ;)

    However, I would agree that functions will generally be easier to understand if they only return at one point, and if that point is at the bottom of the function.

    And macros are an excellent way to make code harder to read. C++ really doesn’t need to be made harder to read. Any worse, and it’d be perl. ;)

  19. James Curran says:

    Hmmm… I’ve been going through that Bourne excerpt, and trying to figure out what the macros stand for. Here are some of my guesses:

    #if _some_compile_indicating_macro

    #define REG register

    #else

    #define REG

    #endif

    #define IF if(

    #define THEN ) {

    #define FI }

    #define WHILE while (

    #define DO ) {

    #define OD }

    #define ADR(x) (&(x))

    LOOP/POOL throw me, because there seems to be no loop control around them. My only guess is that they are defined as:

    #define LOOP {

    #define POOL }

    with the intention of putting other flow control before it, ie:

    while (x < 100)

    LOOP /* etc */ POOL

    So, not only are they abusing C to create this "language", they are even abusing the rules of *that* language!

  20. Tim Smith says:

    Exceptions aside, there is a HUGE difference between having to look up what DoSomething does and looking up what those MACROs do.

    Here we are talking about flow control. DoSomething will "do something" and execution will continue at the next line of code. However, with those MACROS, all bets are off. You cannot make any assumptions about the macros, since they can have not only parameter side effects but also flow-control side effects.

    (Doh, email server down. I can’t spell check my babble)

  21. Raymond Chen says:

    Thanks, Tim, for articulating what I inadvisedly left unwritten.

    Often you don’t care what the functions do. You’re just following the flow of execution trying to answer questions like "Does every code path that executes line X also eventually execute line Y?" or its converse "Is it ever possible to be at line Y without having gone through line X?" In such cases, you don’t care what DoSomething does – as long as it returns. But if there is a macro that hides flow control, you’re going to miss stuff.

  22. Scorponok says:

    "It seems obvious to me that the macros in the bourne shell code were used to emulate the look of a bourne shell script. Not that I am recommending this approach, but it doesn’t really support your point as it is a special case."

    Isn’t that a bit of a silly reason to have all the macros? Bourne shell scripting and C/C++ are going to be quite different, so why would you try to force one to look like the other? It’ll only end in tears when you suddenly run into something that you can’t hammer into place like that.

  23. Brian says:

    Agreed. The Bourne shell "script" is more weird and misguided than evil. It’s like they were jealous of the people who were writing C compilers in C and wanted to write a Bourne shell in a Bourne script. This is the next best thing! (though it’s a huge gap from best to next best)

  24. James says:

    I think ‘assert’ is a useful flow control macro. I’d argue that MFC and ATL are full of flow control macros that some people find useful, too.

    This seems like another of those stupid "you can do anything with macros, therefore they are evil" posts. The same people tend to think things like "goto is evil, except when you use it for X, Y and Z" (where the latter are things they consider idiomatic like handling errors in C programs by jumping to the end of the function), "C++ is bloated and inefficient compared to C" (because they don’t really understand the code they’re writing) and "exceptions lead to unmanageable software" (because they’re generally clueless about the disciplines that exceptions require – the same people find themselves debugging resource leaks and crashes whilst blaming the diagnostic output).

    Guys, newsflash: it’s just a tool. Use common sense. That C# doesn’t have an appropriate mechanism for syntactic manipulation is a weakness. That C++ has a dangerous-and-difficult-to-validate mechanism is a different weakness.

  25. Jonathan Pryor says:

    Is the primary complaint against control flow macros, or against *poorly named* control flow macros?

    For example, I find that the GLib g_return family of macros tends to clarify function assertion logic.

    Compare:

    void SomeFunction (void *handle, int arg1, int arg2)
    {
        if (handle == NULL)
            return;
        if (arg1 <= 0 || arg1 > SOME_CONST)
            return;
        if (arg2 == INVALID_ARG)
            return;
        // …
    }

    to:

    void SomeFunction (void *handle, int arg1, int arg2)
    {
        g_return_if_fail (handle != NULL);
        g_return_if_fail (arg1 > 0 && arg1 <= SOME_CONST);
        g_return_if_fail (arg2 != INVALID_ARG);
        // …
    }

    As more assertions are added, I find that the macros clarify understanding.

    See: http://developer.gnome.org/doc/API/2.0/glib/glib-Warnings-and-Assertions.html#id2797042

    The primary issue is one of naming. It can be difficult, but it’s necessary that clear, concise, and *useful* names be used for everything (constants, functions, macros). A function called DoSomething() is equally unhelpful when debugging, and invocation of member functions or modification of global data can have ramifications that can be as subtle as control-flow macros. The need to dig through header or source code while debugging isn’t limited to macros.

    As always, the solution is clarity and restraint. If the macros clarify things, they can be useful, but as the Bourne Shell example shows, they can also be a hindrance.

  26. James, the title of the blog entry is "A rant against flow control macros", not "A rant against macros". Raymond is not denouncing macros completely – only their abuse in the area of flow control. Surely you’re not condoning the sample Bourne shell code?

    IMHO, C# doesn’t need macros. Anything you can do with a macro, you can do by simply coding a helper method or two. The inlining performed by the JIT compiler will generally alleviate any efficiency concerns, provided you know how to help the JIT compiler inline methods.

    A few blog entries ago, I asked about the new C++ standard (and thanks for the answer Andreas). I asked this because I’m interested in what improvements have been made to the language to counter abuse such as that demonstrated in Raymond’s post.

    It’s frustrating for me, primarily a C# developer, to switch to C++ (as I have currently to write a game) because there are so many pitfalls and language nuances that should be ironed out.

  27. mschaef says:

    Gerson, I like that example.

    I did something sorta-similar in a project I was on several years ago. It was coded in C, and I had a couple object-like pieces of code that had to be essentially duplicated for two different data types. I ended up using the preprocessor to compile source files multiple times. Something like this (more or less):

    #ifdef TYPE_DISCRETE
    # define GLOBAL(x) discrete_##x
    typedef int value_type;
    #else
    # define GLOBAL(x) not_discrete_##x
    typedef float value_type;
    #endif

    And then, globally exported symbols were treated like so:

    value_type GLOBAL(read_sensor)()

    {

    /* … logic goes here #ifdef DISCRETE can be used to add special case code to particular instances, if necessary.. */

    }

    Compiling the same source file with -DDISCRETE and without produced two separate object modules.

    It seemed like a funky hack at first, but it turned out to be pretty easy to explain, worked nicely with our source debugger, and saved an immense amount of time tracking down duplicate bugs in the two code paths.

    More to the point of control flow macros, on the same project, I had to implement a state machine from a spec that was written in terms of numbered transitions and messages. I handled that by mapping transitions to a set of macros; and mapped messages to functions. It ended up being relatively easy to conform to the spec since the logic could be matched up quite closely.

    C-style Macros have a lot of problems, but they really have their moments…

  28. Memet says:

    I would agree with Jonathan Pryor’s remark about poorly named control flow macros. I, for example, have opted to use a project-wide TRY/CATCH macro system that hides tedious stuff like propagating exceptions and translating C++ exceptions (internal error reporting system) to COM errors at interface boundaries.

    It’s not so complicated: once you look at one function, you’ve pretty much seen all functions:

    METHOD_BEGIN



    METHOD_END | METHOD_END_NOTHROW | METHOD_END_COM | METHOD_END_INJECT_STATEMENT(x)

    as you might guess, METHOD_END gets used about 98% of the time. Also, the catch doesn’t return anything, so returns are perfectly visible.

    I find that these macros are something that the language just doesn’t do well: that is ‘decorating’ function calls with code. Too often, in big projects, I see something else that infuriates me: random regions of code where hresults/bools/ints are returned from functions, while other places return void with exceptions. The idea, normally, with these macros is that they should be simple and deter programmers from wanting to add their own little version of things each time they have to deal with an error.
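
    (Roughly speaking, METHOD_BEGIN/METHOD_END expand to something along these lines – a simplified sketch with made-up names, not our exact definitions:)

    #define METHOD_BEGIN \
        try {

    #define METHOD_END \
        } catch (const BaseException&) { \
            throw;                          /* already the internal error type, let it propagate */ \
        } catch (const std::exception& e) { \
            throw BaseException(e.what());  /* translate into the internal error type */ \
        }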

    What do you guys think? What does Microsoft do for example for ‘enforcing’ internal error policies when dealing with big projects?

  29. Well everyone can go ahead and continue sticking their collective heads in the sand w.r.t. exceptions, but it’s not possible to write reliable general purpose software that uses them for general error control flow. (Note the two caveats: general purpose software like libraries and platforms, and general error control flow meaning error paths which people expect to recover from as opposed to error paths from which you do not expect to recover.)

    So this leaves you with a choice. Do you prefer to see all the error checking and gotos, and therefore also the quality problem associated with them (bug counts tend to track bytes of code written by humans; abstractions like macros and functions have the desired behavior of decreasing the bugs/bytes-of-source-code) or the convoluted logical evaluation paths that people do to avoid goto and/or macros to help the error control flow paths.

    I’ll counter-rant Raymond, as a person who’s had to debug a bunch of source code that follows Raymond’s preferred mode for the past few years, and I’ll take IFFAILED_EXIT() over the nested ifs (and general lack of error handling) that pervades this source code.

    Jon’s point was good and really central (if not obvious) – a large part of the problem is the lack of adoption of RAII. If we could get people to agree to RAII, then we’re debating whether it’s better to write:

    hr = Foo();
    if (SUCCEEDED(hr)) {
        hr = Bar();
        if (SUCCEEDED(hr)) {
            hr = Baz();
        }
    }
    return hr;

    or…

    hr = Foo();
    if (FAILED(hr)) goto Exit;
    hr = Bar();
    if (FAILED(hr)) goto Exit;
    hr = Baz();
    if (FAILED(hr)) goto Exit;
    hr = S_OK;
    Exit:
    return hr;

    or:

    IFFAILED_EXIT(Foo());
    IFFAILED_EXIT(Bar());
    IFFAILED_EXIT(Baz());
    Exit:
    return hr;

    Well, my vote is clear. The goal that people who are drawn to exceptions like moths to the flame have is the simple straightforward source code. I wish there was a better approach that didn’t have the fundamental problems with exceptions, but I’ll personally take the control flow macros over (a) convoluted control flow like nested ifs or (b) the repeated possibility of mistakes in the error checking logic.

    I agree with the earlier poster here – the real issue at hand is poorly named macros that influence control flow. The thing I hate about exceptions is that I can’t see that:

    x = Foo(a, b, c);

    has like 6 opportunities for exit paths that you can’t see. Its apparent simplicity is just a big fat lie. At least with:

    IFFAILED_EXIT(Foo(a, b, c, &x));

    (when you know that there aren’t exceptions in play) there is a single nonlinear (e.g. goto Exit) control flow path here. Maybe I should have named the macro IFFAILED_GOTO_EXIT()? I didn’t feel the extra 5 characters added any value.

    To me personally, the really compelling factor that makes the macro usage attractive is that you can build tracing into the infrastructure. At this point, I consider calling a central function when an error condition is first detected (I tend to call this "origination") a mandatory basic requirement because it means that you can set a breakpoint there in a debugger and often simply diagnose problems which derive from an untested error condition. If it wasn’t for this utility, I would tend to agree that the difference between:

    IFFAILED_EXIT(Foo(a, b, c, &x));

    and

    if (FAILED(hr = Foo(a, b, c, &x))) {
        ReportFailurePropagation(hr);
        goto Exit;
    }

    is a wash. Except that the visibility/accessibility of the point of the control flow makes people ever so tempted to not use RAII.

    (this is one of the reasons I hate people catching and rethrowing exceptions. There was a point in time when turning on "break on exceptions" in the debugger was useful and you would very often break in only when something interesting was going bad. Since they’ve evolved into a generalized communication mechanism, they’ve stopped being useful to find when… well… exceptional situations have occurred.)

  30. Absolutely, proper design and code reviews will help to solve that. If you find yourself repeating constructs such as try / catch blocks all over the place then you need to re-think your design. eg. Wrap the invocation in a new method and handle the errors in that method. If you’re doing the same thing all the time (as using a macro implies) then this is fine.

    What I’m failing to see is why you would possibly want to wrap exception catching up in a macro. Programmers will blindly use the macro, probably without understanding what the macro does. Either that, or they won’t use it or will write their own. That is exactly Raymond’s point. The programmer has to go and look at the macro to determine whether it is relevant to their particular context.

    You can no longer just look at the code and understand it. You have to go off and look at several different macros, understand each one, keep them all in your head, and then make sense of the whole code segment utilising the macros.

    I suppose you could argue that using methods / functions instead of macros yields little difference. You still have to know what each method does. I’m no C++ expert but I guess one of the major advantages of methods / functions is that they are easier to debug.

    The legendary Bourne Shell macros were, in fact, designed to make the code emulate Algol 68 as much as possible. Steve Bourne was an Algol fanatic, and he felt that it was a failing of C that a single block construct was used (curly braces) instead of one for each different type of block (case/esac, if/fi, etc.). The macros, and resulting code, were his little political statement.

    Another, not so well known, annoyance about the Bourne Shell code is the fact that it does not use malloc()/free() for memory management, instead directly calling the sbrk() system call and doing its own heap management.

    The so-called "Bournegol C" was the inspiration for the modern IOCCC.

    Peter van der Linden mentions this and other hilarious/scary historical tidbits in "Expert C Programming: Deep C Secrets". (If you don’t know the case where arrays and pointers are NOT equal, btw, you need to pick up this book now.)

  32. Moi says:

    I wonder what Raymond thought of the MFC exception macros. I’m not brave enough to ask!

    Raymond – just tell your compiler to produce something with the macros expanded. Problem solved.

  33. Raymond Chen says:

    If you’re reading somebody else’s code, you don’t want to have to compile it first just to see what it does. (And sending C/C++ code through the preprocessor tends to make it less readable rather than more. Try it.)

  34. JCAB says:

    IMHO, the crux of the problem is that many of these macros essentially modify the language you’re programming in (Bourne Macros being a glaring example of this). So, even if the C++ compiler compiles it just fine, it just is not C++.

    This can be a good thing in some cases, for some purposes, your mileage may vary. And you definitely don’t need macros to make a new language for your C++ compiler (check Boost Spirit, for instance). But I do believe it’s essential to realize, and advertise, that it is not C++ any more, so learning C++ is not enough to be proficient in it.

    I recently had to ream a young new programmer for over-using macros (for instance, he used a macro called "Nothing" instead of NULL), for no better reason than he liked the resulting language better. He just didn’t see (and still doesn’t see) that the language wasn’t quite C++ anymore.

  35. Memet says:

    Kent Boogaart: the point of (well designed) control flow macros isn’t to mimic functions, it’s to avoid retyping grammatical constructs over and over, for example:

    catch( SomeException& e )

    { ReportSomeCriticalError(); }

    catch( SomeOtherException& w )

    { WarnUserButDontReportError(); }

    catch( … )

    { ShutdownSystem(); }

    With big projects with many programmers onboard, these generally tend to start off as :

    catch(…)

    { /* remember to do something */ }

    during initial development, and usually get transferred to:

    catch(…)

    { ShutdownSystem(); }

    during the final phases (if you’re lucky).

    I know, people are going to say proper code reviews should solve that, but IMHO a programmer needs all the help the language can muster when dealing with boilerplate code.

    Bottom line, IMO, bad code is just bad code. You can abuse pretty much everything if you put your mind to it.

  36. Petr Kadlec says:

    I hate when macros are used to force the C language to behave as a compiler of some completely unrelated metadata- (or whatever) language. As an example I would consider various macros in Windows DDK, even those MFC message maps, or (which is where I have fought with those for the first time) macros in MS Flight Simulator SDK.

    Not only are you forced to learn another language and use it in the middle of your C source; there is also a problem when you want to use another programming language. In that case, you have to analyze the macros, understand what they do, and then reimplement them in a totally different way in the other language. If the SDK used either normal language constructs or a special high-level tool, this would not be a problem.

  37. Moi says:

    "Try it"

    I do. If I’m looking for a bug, for example, without running the code through a debugger, I find it easier to look at the real code than code with macros in (there’s no way I could keep every single #define in my head).

  38. mschaef says:

    "just tell your compiler to produce something with the macros expanded. Problem solved. "

    It’s too bad there aren’t editors that provide ways to do this interactively. Maybe it isn’t all that useful in most code (or perhaps it would encourage overuse of macros), but the ability to hit a key and toggle back and forth between editable source and a view with expanded macros could be pretty useful.

    "I hate when macros are used to force the C language to behave as a compiler of some completely unrelated metadata- (or whatever) language. "

    I dunno. Given the choice between reading through a set of macros to understand a metadata language, versus reading through a custom written metadata compiler, I’d rather deal with the macros. I’d also wager that the macro/C-compiler combo is more likely to be robust than a special tool.

    "Not only you are forced to learn another language and use it in middle of your C source; "

    The Lisp folks (who admittedly have much more powerful macros than C) have had good luck over the years embedding custom languages into Lisp. To some extent, it makes sense: If you can take tricky logic in your code and map it into a language that more closely matches the problem, that could end up being a more readable solution than a bunch of complex, probably tricky, logic written in C (or Lisp). Isn’t that what the whole DSL movement is aiming for?

  39. K Biel says:

    Scorponok: "Isn’t that a bit of a silly reason to have all the macros? Bourne shell scripting and C/C++ are going to be quite different, so why would you try to force one to look like the other? It’ll only end in tears when you suddenly run into something that you can’t hammer into place like that."

    I didn’t say that it was a good idea to make C/C++ look like another language. Instead I was pointing out that the bourne code does not support Raymond’s point. Rather than obscure or hide the control flow, it actually made it quite explicit, but in the form of a bourne shell script. You and I might look at it and think it ridiculous, but someone who writes shell scripts all day long would recognize it immediately.

    Let me state again, I am not endorsing the bourne source code. I was just trying to point out that it does not support Raymond’s point about how C macros can obscure control flow.

  40. K Biel says:

    Oops, I meant "flow control" above when my fingers typed "control flow".

  41. JCAB wrote "I recently had to ream a young new programmer for over-using macros (for instance, he used a macro called "Nothing" instead of NULL), for no better reason than he liked the resulting language better. He just didn’t see (and still doesn’t see) that the language wasn’t quite C++ anymore."

    Actually you are encouraged to just use 0 instead of "#define NULL 0" by Bjarne Stroustrup in "The C++ Programming Language" :) (Personally I prefer to use NULL because it makes it easier to read and understand).

  42. CW says:

    Anyone still using message crackers from windowsx.h?

    switch (msg) {

    HANDLE_MSG(hwnd, WM_COMMAND, OnCommand);

    HANDLE_MSG(hwnd, WM_PAINT, OnPaint);



    default: …

    }

  43. Memet says:

    (Sigh, I just lost my entire post, I’m retyping)

    Kent Boogaart:

    The reason why we’ve opted to use macros across the project is because we wanted a unified exception handling system.

    What that means is that any exception thrown in a particular system should be a well known exception, preferably derived from a base class. So the macros handle either defined program failures, or catch all failures (which generally indicate there was a problem that we didn’t account for – which is bad).

    Also, exceptions aren’t meant to be used as control flow logic, they are meant to interrupt normal flow because something serious has occurred.

    A good example is std::map::find(), which returns std::map::end() if it can’t find the required element; it does *not* raise an exception. So it is wrong to compare exceptions to code like this:

    hr = SomeOperation();
    if( FAILED( hr ) )
        hr = DoSomeOtherOperation();
    else
        hr = DoSomethingCool();

    Here, we have a clear case of flow control based on the outcome of some function. On the other hand, operations like ‘new’ need to be atomic from a syntactic perspective, and not raising an exception when an object fails instantiation is not an option.

    In the same vein, in the project we’re in, components do not raise exceptions when their output is something that can be recovered from, but when it cannot be, they do. And at that point, having a unified exception handling architecture really gives cohesion to the system since now you can log what’s happening in the app using, for example, a single point of logging in your base exception class.

    Michael Grier:

    The problem is not that we are in a blissful state of denial, it’s that C++ needs exceptions to be robust. C++ classes could not be instantiated properly without exceptions. When that observation sets in, you are faced with either using C++ as an enhanced C syntax compiler, and using COM (or something else) as your object model, or using C++ as an object oriented language and interfacing with COM as you would with any other C API. Even the #import code generates _com_error exceptions when wrapping around COM objects. It’s just a matter of principle, and for those who want to use C++ as an OOP, then exceptions are the way to go.

    (I’m sure I forgot to say something from what I had originally written, ah well)

  44. Re: memet:

    C++ is very useful over C without dragging exceptions into the mix. Exceptions are an experiment which has gone awry. They’re the new snake oil. They work well in functional languages since there aren’t side effects to be rolled back. Even people who believe in RAII are rarely actually prepared to make every possible error control path actually have all the rollback necessary.

    The point of the other blog entry Raymond references is that:

    a = b + c;

    doesn’t clearly have any control flow when in fact it may have 3 (or 2 more if you consider failures in destructors for temporaries which terminate the application to be control flow).

    MUST_SUCCEED(Foo());

    is not better. There’s some statement of intent but it’s not clear what the ramifications are. But then I’ve also seen this style:

    check << Foo();

    check << Bar();

    is that clear either? No, and for the same reason. I forget where I saw this but the rationale given by the team was that it’s an idiom used in their source and when you get used to it, it’s clear.

    My point here at the end is that this actually has nothing to do with the use of macros. Hidden control flow is bad. It causes a lot of errors.

    If we can agree that all of:

    if (FAILED(hr = Foo())) goto Exit;
    if (FAILED(hr = Bar())) goto Exit;

    and

    if (SUCCEEDED(hr = Foo())) {
        hr = Bar();
    }
    return hr;

    and

    IF_FAILED_GOTO_EXIT(Foo());

    IF_FAILED_GOTO_EXIT(Bar());

    are reasonably clear (assuming that the macros do the expected thing… do you verify that SUCCEEDED() does what you expect?) then we can have a more interesting debate about:

    a. How important RAII is

    b. When do you split functions vs. nest conditions

    I think that RAII is very important and I further think that if you can adopt a reasonable style you can prevent arbitrary new functions created only because the previous style created artificial source bloat and statement nesting. I like the use of macros because except for the annoying extra text on each source line, you get control flow that looks very much like the "exception-based ideal".

    (I alluded to this but on my team when we started a recent project, we didn’t mandate use of the error checking macros. So what happened? A lot of little error-path-cleanup code snippets occurred which of course didn’t get all the error paths covered. Thus we decided to disallow the explicit checks and "goto Exit;".)

  45. AC says:

    Michael Grier: I don’t consider your IF_FAILED_GOTO_EXIT as a good practice. I guess you’ve debugged a lot of deep nested ifs, but that’s also bad practice. Good code shouldn’t be too nested. If you introduce some coding discipline, you can always do something like

    void DoSomething( …

    if ( cond1 ) return;

    someth1();

    if ( cond2 ) return;

    mainThing

    etc. Instead of the above code some people would produce two nested ifs, and when the "main thing" has any logic, there are more nestings and the code is hard to follow. Making "early returns" also has the good effect of looking (to me) more like a mathematical expression where you first exclude the trivial cases.

    Very often, when I get the code from somebody else and rewrite the "multiple nesting" to the above presented principle (does anybody know if there’s a name for it?), I easily discover that the original author didn’t cover some cases. Multiple ifs make it hard to see uncovered cases. Using a lot of returns stimulates covering all the cases.

  46. Memet says:

    Michael: I agree that hidden control flow is evil, but I also think cleanup code shouldn’t be ‘cleanup’ code at all.

    I think we would all agree that duplicate code is the worst evil of all, correct? Well, in an ideal OO world where initialization is acquisition (with all the corollaries that apply), there should never be ‘cleanup code’. In my opinion, cleanup code can never not be duplicate code. Anything that goes out of scope should ‘internally’ do all operations it requires to free its resources (i.e. execute its destructor). That has the advantage of localizing functionality. That also has the advantage of providing a language-wise transactional environment where I should not need to know what all possible exit paths are because the owners of the resources clean themselves up.

    My favorite example of what I hate is creating an ATL ASP object using the wizard. This is the code that gets generated:

    hr = lpContext->get_Request(&lpRequest);
    if(FAILED(hr))
    {
        lpContext.Release();
        return hr;
    }
    // Get Response Object Pointer
    hr = lpContext->get_Response(&lpResponse);
    if(FAILED(hr))
    {
        lpContext.Release();
        lpRequest.Release();
        return hr;
    }
    // Get Server Object Pointer
    hr = lpContext->get_Server(&lpServer);
    if(FAILED(hr))
    {
        lpContext.Release();
        lpRequest.Release();
        lpResponse.Release();
        return hr;
    }
    // Get Application Object Pointer
    hr = lpContext->get_Application(&lpApplication);
    if(FAILED(hr))
    {
        lpContext.Release();
        lpRequest.Release();
        lpResponse.Release();
        lpServer.Release();
        return hr;
    } // ad infinitum

    Aside from the fact that the wizard generates object code but still uses functional cleanup (which I find weird), you can’t seriously be advocating that this (cleanup as required) is good practice, can you?

    With exceptions, and a proper OO model, hidden control flow becomes irrelevant as you don’t need cleanup anyway. It’s only when you mix and match API and OOP that it becomes tricky, but that brings me back to my original post’s point.

    All in all though, I don’t mean to drag you into a flame war about OOP and exceptions. My point is just that I personally find that C++ doesn’t really help you or provide you with tools for dealing with method encapsulation in a way that avoids code duplication, and that is what I originally said macros were useful for.

  47. Purplet says:

    quote :

    Macros are great. My only complaint against them is that nobody has added an extension to swallow up the first { or ; token that comes after the usage of the macro.

    #define SOMETHING(A) do { something(A); } while(0)

    it swallows the ; and works ok if used in loops, ifs, etc :)
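
    The wrapper is what keeps it safe next to an else, e.g.:

    // without the do/while(0), the ";" after SOMETHING(x) would
    // close the if statement and leave the else dangling
    if (cond)
        SOMETHING(x);
    else
        somethingElse();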

  48. Paul Spendlove says:

    Michael Grier:

    "C++ is very useful over C without dragging exceptions into the mix. Exceptions are an experiment which has gone awry. They’re the new snake oil. They work well in functional languages since there aren’t side effects to be rolled back. Even people who believe in RAII are rarely actually prepared to make every possible error control path actually have all the rollback necessary."

    Michael, why not read Bjarne Stroustrup’s "Appendix E" to the "C++ Programming Language (3rd ed)"? It puts forward the views of the designer of the language on exception safety. It has been freely available on the web for more than 4 years by now.

    I think it’s something you haven’t read yet. Why? Because you refer to RAII taking effort to implement and involving "rollback". But RAII is actually quite easy to implement and has *nothing* to do with whether you offer rollback semantics.

    As well as defining what RAII actually means in this document, Stroustrup also points out why exceptions were added to C++ – to deal with errors in object construction. There are alternative ways to deal with such errors (eg by providing a separate error-checked "init()" function). Stroustrup covers the alternatives, and explains why he finds exceptions superior. I find his arguments convincing. You are at liberty to differ, but it’s surely worth at least checking out what Stroustrup has to say.

    RAII is really not hard to do. It generally requires no more than a good reference-counted templated pointer – such classes are widely and freely available, eg from the boost project. You frequently don’t need to roll back every single operation, you just need to recover back to some higher level. Generally, simply releasing resources on the way back up the call chain is completely sufficient. When you *do* need to group stuff together in a transaction, you create a transaction object whose destructor rolls back, and give it a scope surrounding the members of the transaction.
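
    A minimal sketch of such a transaction object (the names here are invented purely for illustration):

    class Transaction {
    public:
        explicit Transaction(void (*pfnRollback)())
            : m_pfnRollback(pfnRollback), m_committed(false) { }
        ~Transaction() { if (!m_committed) m_pfnRollback(); }  // roll back on any early exit, including exceptions
        void Commit() { m_committed = true; }                  // call once every member of the transaction has succeeded
    private:
        void (*m_pfnRollback)();
        bool m_committed;
    };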

    In conclusion, if people wish not to use C++ exceptions, it’s up to them. But if they are going to talk about exceptions in C++ and use terms such as RAII, they’ll be able to make their points more convincingly if they first read what the designer of the language has to say on the topic.

    PS RAII = "Resource Acquisition Is Initialisation", for those who were wondering.

Comments are closed.

