
Elements of Great Programming

Week of March 24, 2000

A series of aphorisms, anecdotes, and general observations which will hopefully prove helpful.

No line of Windows GUI code was ever written twice. This sounds ridiculous, but it's true. All code everywhere is cribbed. Turn the coin around: remember all that talk about reusability? Well it's always been with us, even if we didn't realize it. Anyone starting a new Windows GUI application from scratch has to be out of their mind. And if they're not, they soon will be. Windows applications have a standard form - this form should be easily reproducible from project to project. The Windows API is simply too large to be able to master by heart - you have to continually refer to online documentation, and having code snippets around where the trick has already been done before is vital. As Brian W. Kernighan once quipped, 'let the other guy do it' - meaning if it's already been written, don't write it again, don't re-invent the wheel. This is the age of information technology - it's not important to memorize a lot of stuff, just to remember where the information is located when you need it. Keep your head clear of unnecessary details such as the parameter order for CreateWindowEx and remember instead where you last did that 'DialogBox with an icon in the caption bar trick' and you'll be far better off.

Stick to C. If you're not programming a web server, there's only one way to go, and there always has been. No other technology comes close. Back in the old days Brian Kernighan was brave enough to benchmark C against assembler, concluding that C meant an approximate 20% drop-off in efficiency and speed compared with 'real coding', and his judgment was that this was acceptable. We're talking about a program which, if written properly in native assembler, might take 100 milliseconds to run, but written in C might take 120 milliseconds. Or a disk image which might be 10,240 bytes when written in native assembler, but in C would be 12,288 bytes.

This is a small factor to deal with, and a small cost to pay compared with the great benefits. C code is largely portable - Steve Johnson's estimate was that it was 94% portable: an empirical result, and from before the advent of GUI systems to be sure, but telling nonetheless. Look for example at the development process for NT drivers and you will understand. If it's good enough for Cutler, it's good enough for you.

By way of contrast, no 4GL or CASE tool has ever offered anything approaching the efficiency and speed of C. And this includes the monster C++, which often enough finishes behind other language implementations you would assume would be even slower. C++ fanatics will deny these facts, but fanatics are always prone to denying facts. C++ fanatics will argue that C++ and C are the same. This of course is not true - and if it were, why then are they touting C++? A C++ program will at best be somewhat bloated and slower than the corresponding C program, even if the code is written as pure C, because the C++ build process involves so many 'behind the scenes' constructs and code snippets you don't really want to have on disk and don't really want your processor to bother about. No disrespect to Bjarne (ha), but don't forget the origin of his language: he wanted to impress his new colleagues at Bell Labs with a demo program written in Simula 67, and it was too slow. The real bite comes, however, when you start to see things as Bjarne would want you to see them. Not only class thinking, but Bjarne's own rather unique kind of class thinking. Don't try to compare concepts between Smalltalk and C++, for example, unless you really are desperate to go out of your mind.

This is not a diatribe against C++, which should reasonably die of its own accord (the echoes of jubilant programmers are almost upon us now) but against relying on high level tools - including C++ - when they are not necessary.

Windows programming is quite straightforward and - yes - easy once you get the hang of it. Spend the time to really get the hang of it and you will 'know'. You will know that as a latter day IT Paul Bunyan you can beat the pants off any Delphi, VB, Java, MFC, or other 4GL programmer not only with a better release module, but with a faster time to market.

Debugging your own code is far easier than debugging someone else's. Have you read the complete MFC source and memorized it? No? Then how can you hope to understand what is wrong with your MFC application when it goes south? By way of comparison, your Windows application coded in C is all there in front of you - a simple, elegant structure - and when something goes wrong you should already understand, on an intuitive level, exactly what the problem is and how to correct it.

Take it easy. Writing a good program is like making love to a beautiful woman (female programmers out there, sorry for the lack of analogy for your sakes). Nurture it. Don't rush. Beginner know-it-alls always complain that the compiler or the linker is screwing up. How many times have you seen these 'premature ejaculators' rush to get the code into the 'machine' only to come to you with that genuinely perplexed look on their faces and say, 'It's unbelievable! The code is perfect - no compiler warnings or errors, and it won't work!' You have to wonder what they expect you to reply. They are genuinely perplexed. They don't have a clue. And weeks later, when they finally discover what went wrong, they never seem to learn their lesson anyway. They miss the great point: that if they had taken it slowly, and checked their coding every step of the way, they would never have gotten into that mess at all. One LISP expert estimated that 95% of the development process was debugging. Depends on how you see it. If this is interpreted to mean that after you've finished coding your application you're only 1/20th of the way there, then this must be wrong. If you have to sit and wonder, perplexed, what the dickens went wrong with your wizard app, then you are doing something fundamentally wrong.

Each time you add a new code snippet, check it - immediately. Develop a construction angle so you can do this right from the start. This naturally means you must have pieces of the user interface in place already, but that is exactly what you must do. Never leave anything to chance. And of course always work with only a copy of your archived source code - never commit anything to an archive until you know the code is ready, steady, bug free, and sound. Check each and every statement for proper performance, or at least each and every logical step.

Never do two things at once. If you are adding code to a project, add one idea at a time and test it thoroughly before moving on. Thinking 'oh I can do this too while I'm at it' will inevitably lead to your destruction. Sooner or later you'll become even more lax and the next thing you know you'll be back at the raw beginner stage all over again.

So it finally works - goodbye! If there is an attitude out there that irks, enrages, and inspires holy wrath, this is it. It's working, so I can take a break, and after that, I can move on. Take a break, to be sure, and guard that precious source that really works with your life - but never, never think that you are through. You've only just begun.

How much more you are going to work on your project depends on its paradigms. At the very least you'll have to hone the code now that you've got the logic working correctly. At the very best you'll find all the 'inroads' that were not apparent when you were thinking in terms of your application logic, when you didn't have the perspective of understanding the domain from both sides of the fence, so to speak. Now that you understand the domain in computerese, see if you can make it even clearer in the same language. This is not a question of documentation - it's a question of seeing things from the perspective of the processor. 'Why is he doing things that way? I know what he wants now, but wouldn't it be a lot smarter to do things this way instead?' Most developers with any seasoning at all will have countless stories to relate on this subject. If you have such experiences in your glorious past, keep remembering them.

Alan Feuer once wrote a book called The C Puzzle Book. Alan taught C at Bell Labs. The purpose of his book was to implant the idea in every budding C programmer's head that there are three stages in one's linguistic development: learning the syntax; understanding and utilizing the side effects; and developing one's own style. But with apologies to Alan, he missed the fourth and most important: knowing your compiler. This sounds like anathema, but it must be remembered, and it becomes the immediately recognizable hallmark of any truly great programmer. Compilers and build environments are written by human beings. They are not perfect. They vary greatly from one version to another. It's almost impossible to corroborate the results of a compiler in any satisfactory, mathematically robust fashion. Today things are down to the pre-fab cookie-cutter stage: you slap a back end on a front end and away you go. But there is so much more to it than that.

It doesn't take a profound understanding of C to see that a ternary expression, or even a nest of ternary expressions, is merely the equivalent of an if-else statement. But do you have any guarantee that your compiler manufacturer sees things that way? Several very popular C compilers for the PC have been flawed in this very fashion - using the sometimes more readable ternary expression has resulted in hundreds of bytes of unnecessary object code. And should this code be in a 'tight loop', i.e. a computationally intensive loop - think of the additional performance write-offs... No, once you have mastered your own style of coding platform-independent code, it's time to start looking at the tools you're using to turn this code into something that works on your target machine.

A certain very popular compiler for the PC had a bad bug for years in its implementation of lseek. This seems very weird, but it was true. If you didn't know the workaround, you suddenly found distortion on your monitor screen. (Again, this is 100% true.) Relying blindly on another programmer to correctly and efficiently code the functions you will be calling at runtime is just a way of passing the buck - and passing a less than acceptable product to your customer.

In the world of Windows there are thousands of caveats that only the seasoned guru can ever know. If your domain is Windows programming, it's time you started learning them, and the only way is to code more and more and more applications and stumble upon them yourself. The documentation is barely sufficient at best - you will invariably have to test things empirically to see how they really work.

What's the fastest and most efficient way of flood filling a rectangle in Windows? No answer is given here, although it is very well known - but suffice it to say that you would never have an inkling if you hadn't been around the code of Windows for some time. Code that has been written by programmers such as yourself - some better, some far worse, than others. Whose code snippet are you diving into now? Get to know your compiler, your development environment, the code, and the programmers you are inadvertently cooperating with.

A rather remarkable Norwegian (no, not Bjarne) was once contracted by a major European national agency to write a very elaborate disk duplexing system. The tools available were atrocious. No assembler, only a benevolent perversion of the abhorred Pascal. The build environment did allow for a sort of assembler output, but there was no assembler to assemble it with, and anyway the code was interspersed with so many curlicues that assembling it would have been impossible. Yet this stubborn Norwegian compiled every code snippet in his system to assembler output and studied it meticulously until he got it just right. He learned how good his compiler was; he learned where its strengths and weaknesses were; and he left the documentation in the higher level language, which is how he worked. The applications written on top of his code were prone to crashes and inexplicable behavior from time to time, but his system never faltered, not a single time, and - needless to say - it was extremely compact and hyper-efficient. You might find it unreasonable to compile everything you do to assembler first, especially if you don't understand the native assembler anyway, but do check your object module sizes. Barring the modification of symbolic address (read: variable) names, a reduction in object module size should mean that you have improved your application.

Locally static variables mean a certain overhead - this will not bloat your application but will affect the size of your intermediate object modules. Inline string constants do the same. Far better to declare them as globally (or at least locally) static arrays. In some few cases with some compilers this will mean a reduction not only in the size of your object modules, but in the size and efficiency of your application itself.

I don't have to worry - the optimizer will fix everything. If you ever hear anyone saying this or the equivalent, or sense this philosophy in their code - avoid them like the plague. First part of the therapeutic cure: read Al Aho's 'dragon books' - the compiler volumes Principles of Compiler Design and Compilers: Principles, Techniques, and Tools. One is actually as good as the other, but if you can make it through both you're far better off.

Don't go thinking that these are books for your night table - unless you're working on becoming an incurable insomniac. If you can read and understand 10 pages a day in either volume you're making fantastic progress (and each volume runs to many hundreds of pages, so be prepared). The idea is not to be able to build your own compiler, but to really and intuitively understand what a compiler actually does. And to understand the concept of 'optimizing' as well. Al goes on about this at great length, deploring the misconception that optimizers actually 'optimize' at all, even deploring the word itself ('improver' would be more like it). Al is quick to point out as well that there is never any guarantee that a given optimization algorithm will actually work - some will in fact produce the opposite effect: more bloated and inefficient code.

Again - optimizers are programs written by human beings, who in this particular circumstance cannot have the foggiest idea what your application is all about. They can only deal in generalities - some better than others. There are of course classic situations which optimizers can detect and improve, mostly situations where the programmer has really flubbed up too, such as loop optimization schemes where the same poor variable is initialized each and every time through the loop and thereafter never touched. Yes, an optimizer can be intelligent enough to recognize this programming mistake and correct it by moving the variable assignment out of the loop to the code immediately preceding it - but think about this now: if you leave your code with a bad loop, then this is the documentation you are passing on to the next generation of developers who will deal with your code. Actually it would be better if an optimizing pass simply pointed out where the inefficiencies lie - then you could correct your source - as it should be corrected - and leave a far better document for posterity.

Never rely on an optimizer. All the 'optimization' that is to be done should be done already in your source.

Understand your source. This sounds like a truism but it is significant. If you are running a 32-bit operating system on an Intel machine, and the function you are calling is returning an int to you, where is that int when the function returns? Do you know? You should. Are you creating functions with argument lists a mile long? Or are you using pointers to structures, which might be a more convenient and more efficient way to manipulate your data? Do you realize that in the Ferrari world of C there is one thing that hampers speed more than anything else - 'changing gears'? The function call itself, in a context where speed is paramount, is the most expensive thing you can have (this is true of course of all languages, but if you are a true artist you will appreciate this factor and consider it continually in your coding).

Do you really need to call that function? You have to decide this from context to context, but you will learn when the creation of that function was necessary or at least convenient in the early stages of the application development, only to become redundant as time goes on and the application approaches maturity.

If all a function does is make a single call to another function, it is a redundant 'middle man', so eliminate it. You might have liked this 'division of labor' in the beginning - Update_Data perhaps, as time goes on, does nothing but invalidate your client rectangle - so get rid of it. There are two approaches here, neither of which is totally satisfactory. You can comment out your function Update_Data and #define it instead - thus breaking one of the cardinal rules of good C programming: all macros must be exclusively UPPER-CASE. Another approach, far more satisfactory, is to first #define UPDATE_DATA, then comment out your Update_Data function, then watch for the compiler errors and change the appropriate function calls. Whichever alternative you choose, get that middle man out of there as soon as you see he's doing nothing but passing the job on to someone else.

Remember what's involved in a function call. No matter where you are in your code, you have a humungoid set of register variables (not just the ones you've declared, silly) and automatic (local) variables of your own, and you have a stack pointer (or two); all of this must be saved (read: PUSHed) onto the stack before your call can be made. Once the local environment is stored away, you start PUSHing the function arguments for your call onto the same stack. When all that is done the actual call is made. And now the reverse process must begin: the called function must retrieve all those function arguments from the stack, initialize automatic (local) variables of its own, and then and only then can it get to work. And when it is finished, it must store the 'answer' somewhere where the calling function can find it - and then and only then return - and whoa, but we're still not out of the woods! Now the calling function must POP its complete local state back into place exactly as it was prior to the call - all the automatic (local) variables you know about, all the secret compiler-generated temporary variables you don't know about (unless you've read your dragon books properly, that is) - as well as retrieve the 'answer' (return value), if any, from the called function. It's quite easy to see that the computation involved in a function call can easily exceed the value of the call itself, the lines of code necessary to implement it far outreaching what you thought was your function in the first place. The answer to this is not to write everything 'inline' (or do anything else Bjarne says is good - he's invariably wrong about everything) but to remember: if all your function is doing is passing the buck on somewhere else, eliminate it.

No matter how well you've honed your code, you will always be able to go back and improve on it more. This is not theoretically verifiable - quite the contrary, at face value it seems impossible. And yet it is a real empirical truth, borne out by countless man-years of programming. Is there a point to it? Yes and no. If the improvements can in effect go on forever, when in heaven's name are you going to finish your project? That's the one obvious reflection. The other, perhaps not as immediately self-evident, is that your code will never suffer from you giving it one last going-over. Another very important conclusion is that you never have to accept bloat. You've just added a new whiz-bang function to your app but the image size almost doubles. Whoa. What happened? Are you sure you coded the new function as well as you could have? You are? Is it integrated into the overall scheme of things as well as it should be? Has its introduction changed the topology of your program logic? Have you seen all of these possibilities and made adjustments for them? OK, now go back over your old code. Sure you're not missing something there? Invariably you will find something. Sometimes it will take longer than others, but it will always raise its head sooner or later. In fact, it's not uncommon to find that an application, with a new function added, actually increases in efficiency and decreases in disk image size precisely because you subject the old code to this reassessment process.

Programming is an art, packaging a talent. Anyone can write code - even carpenters, so to speak. But not everyone really appreciates the beauty (if there is such a beauty) in finalizing an application. If you're not so inclined, maybe a programming career is not for you and you should look elsewhere. If you already understand this, then a great number of the points brought up in this dissertation will be obvious to you. One can only hope they still enrich your career experience in some way.

Be merciless with your applications. You should subject your own applications to the most gruesome tests. One IT department manager was famous for always inflicting what he called 'monkey tests' on anything that was to be passed into production. He would start by raising his keyboard above his head and slamming it down on the floor. He didn't expect anything intelligent to happen with the application - but he expected it to not crash or behave in any unexpected way. After that he would mercilessly bang away with his fists at the keyboard, trying to do all in his power to upset the application, make it spout out faulty data and/or crash. And only if the application survived the 'monkey test' was it considered finished and ready for production.

Consider doing the same with your own applications. Now surely you don't have to destroy your keyboards and pointer devices all the time, but make sure in a similar way that the unexpected is in fact expected. And this is not necessarily a question of implementing exception handling - it can simply be a case of knowing what your application is to do and making sure it does nothing else.

Being intimately acquainted with the SAA/CUA/CUI principles of design can be of immense value here. If you don't have access to these seminal documents, contact IBM and get them. There are a number of far-reaching principles which all your applications should observe, the most important of them in this context perhaps being the so-called 'Forgiveness Principle'. Never let your end user fall off a cliff - always give him the chance to retreat. It's the classic 'Are you REALLY sure?' situation. Look again at how standard, seasoned Windows applications work and you will recognize this all the way through. How does 'FORMAT C:' work? What about quitting a word processor without saving changes? There are countless examples. Make sure your applications join the herd.

You like playing with your debugger? Ever considered being a bus driver instead? How many so-called developers can put a sparkle in the eyes of their colleagues with their agility at juggling all their debugger windows, but never get anything into production? Countless. Make sure you're not among them. Two developers, side by side in a Win32 programming course, worked entirely differently. One wrote the sloppiest code imaginable, then went straight to the debugger - and when the trainer was asked to come around and help, this whiz kid did all he could to prevent the trainer from getting a good look at the source. How was the trainer expected to help? Good question. Across the aisle, and from the same company, sat a rather quiet chap who never once all week ran anything in debug mode and never needed the trainer to come and help. He coded carefully and invariably had lots of time on his hands when he had finished his exercises, and he and the rest of the class waited for the debugger whiz kid to give up. Throughout the week the whiz kid never completed a single exercise. Yet he remained convinced, despite his obvious complete lack of progress, that he was head and shoulders above all his classmates, precisely because he could run the debugger as if it were a game of Asteroids or something. Perhaps that was his calling - to sit all day at an arcade, dropping coins in a slot and wearing out a joystick - for it surely was not programming. If you find yourself instinctively turning to your debugger as soon as you've finished coding your application, or a part thereof - slam on the brakes. You should know your code well enough to almost never need a debugger at all. If something is not right with your application, you should know immediately where the error is. Only in the most dire of circumstances should you have to resort to one.

Why do we have them then, you might ask? For one, they generate revenues - many millions for the ISVs who market them. For two, not that many programmers are that good. For three, there are of course situations where a meticulous look at what is going on in the code is necessary to warrant its mathematical robustness. But generally you do not need a debugger. Once again, read it slowly: generally you do not need a debugger. If you find yourself leaning on it like a crutch, go back to school and discipline your development technique again - for example with the lessons to be culled from this treatise. You should be sure enough of your code when you write it not to have to run a debugger to check it - you should be able to proceed directly to running your application and verifying that it works properly - and even passes the 'monkey test'.

Style is everything. Not true? Then consider the following code snippet:

switch (message) {
	case WM_COMMAND:
		switch (LOWORD(wParam)) {
			case IDM_FILE_EXIT:
	/* * */

And compare it with this:

switch (message) {
case WM_COMMAND:
	switch (LOWORD(wParam)) {
	case IDM_FILE_EXIT:
	/* * */

See wherein the difference lies and why the latter is better than the former? Code is a lot of nested, blocked stuff. Only so much can be on your screen at any one time - even if you're running the most monstrous screen resolution in this solar system. And if you are, only so much is going to be in your peripheral vision too. So don't start indenting when you don't need to. You'll find your code running off the right of your window and making things generally less manageable.

The converse is also true: don't split up lines for what some doofus has told you is 'readability' so you have to scroll up and down too much. A very early C guru once claimed that any C function with more than 50 lines of code was written in error, and would refuse any program submissions with such functions in them. Now it is not probable that this principle is still valid, but its spirit is - your code should be under your direct observation at all times.

There are countless mechanisms you can implement to make this easier on yourself:

Store your functions in alphabetical order. If you have a dedicated source file with a generic type of function within, make the functions easier to locate by putting them in alphabetical order. You cannot overestimate the 'confusion factor' when developing an application. Your goal is to have your thoughts in your fingertips - with no delays in between. If countless seconds are wasted scrambling around in a source file ('where is that &%&£ function anyway?') then you'll be distracted, and that stroke of genius you were just about to be blessed with will evade you. Storing your resources in alphabetical order according to type, and in alphabetical order within type, can also be a great boon.

Store your functions in dedicated files. Goes hand in hand with the previous mechanism. You can have all your message handlers in one file (provided it's really necessary to have dedicated message handler functions at all), you can have all your dialog box handlers in one file, etc. Establishing a general construction system and sticking to it, so you know intuitively where anything would be located, even when you're coming back to an application after half a year away, is the alpha and omega of a precision programmer.

Got your menu completed in your resource file? Fine - stick to that order in your WM_COMMAND switch. Using the same order here, where alphabetic is simply not applicable, is the best way to survive. Using intelligent command ID macros is another part of it. If the command is 'Exit' on the file menu, then the macro should be IDM_FILE_EXIT and nothing else - the 'IDM' of course standing for 'menu ID', the 'FILE' denoting it's on the File 'POPUP', and EXIT of course being the command itself. Capitals throughout, underscores to separate the various elements, and always use the complete command name when it is at all feasible - this will make it so much easier to correlate your code with your resources as time goes on. A typical layout might be as follows:

case WM_COMMAND:
	switch (LOWORD(wParam)) {
	case IDM_FILE_NEW:
		/* * */
		break;
	case IDM_FILE_OPEN:
		/* * */
		break;
	case IDM_FILE_SAVE:
		/* * */
		break;
	case IDM_FILE_EXIT:
		/* * */
		break;

	case IDM_EDIT_UNDO:
		/* * */
		break;

And so forth - providing a single empty line between 'POPUP' groups of commands, making the code easy to traverse and control with a minimum of effort.

Ever heard of <windowsx.h>? Forget you did. There is no limit to the amount of cursing that should be directed at this atrocity. Surely you've read countless supposed gurus spouting its benefits, but don't believe a word of it. Consider the following:

  • The next time you update your environment - have you any guarantee that these magical macros are the same?
  • Have you never heard of all the bugs already in that file? In that case - in what cave have you been hiding?
  • Do you realize you would now have to learn the Windows API all over again?
  • The whole scheme is extremely inefficient. Just give it a good browse if you dare. Windows and type casting are infamous by now - a lot of the work of this monster involves just that - and who knows at what expense to the victimized application. Every message implies a dedicated function to handle it - and we've already discussed the overhead of function calls and why it's only smart to avoid them when possible - but no such provision here, no no. The calamity approaches the ridiculous when you realize that every function is expected to provide its own return value, and that if it cannot, the monster goes through all the rigmarole of type casting all your arguments back again!

So avoid <windowsx.h> like the plague that it is. When message handling has to do something non-trivial, it will normally be relegated to a function of its own. And only you can know which of the four arguments, if any, need to be passed. Nothing is stopping you from using your own macro system - just don't tread in the tracks of the idiots who wrote <windowsx.h> when you do it. And normally, 'non-trivial' in this context means one or more of the following: the code is just too lugubrious; it's going to be called from somewhere else too, and sending a message to yourself is not applicable in those circumstances; or the message code simply gets a bit too unwieldy.

Never forget that Windows is a layered monster just like MS-DOS was. Many gurus have likened the MS-DOS API to a 'quilt', meaning that there are several ways of skinning the cat, all interweaving in one another. If you've ever tried to track down the Ctrl+C handler in the interrupt vector table you'll know what this means. The Windows API is much the same. Almost everything is ultimately message based - so when it is possible, you should use the SendMessage call (or one of its variants, depending on the circumstances) rather than a higher level API which will just turn around and do the same thing.

When is this applicable and when is it not applicable? Consider the following situation: your dialog is about to disappear, and you have to get the all-important text out of edit control 101. You have at least four ways to skin this cat:

char s[ENUFF_CHARS];

GetDlgItemText(hDlg, 101, s, sizeof(s));
GetWindowText(GetDlgItem(hDlg, 101), s, sizeof(s));
SendMessage(GetDlgItem(hDlg, 101), WM_GETTEXT, sizeof(s), (LPARAM)s);
SendDlgItemMessage(hDlg, 101, WM_GETTEXT, sizeof(s), (LPARAM)s);

All of these are valid functions, none are macros, and all accomplish the exact same thing - hopefully. The question now becomes - which do you use? And do you use the same idea 'across the board'?

Well, here might be the considerations:

  • Alternatives two and three (GetWindowText and SendMessage) involve two API calls and therefore are probably more costly, and unnecessarily so as well.
  • Alternative four (SendDlgItemMessage) offers no benefit, because it is not a macro for SendMessage (and if it were, it would still mean using two APIs, not one, anyway).
  • Alternative one is the most natural and barring exceptional circumstances the most efficient way to skin this cat.

Another way of expressing the same thing:

  • Avoid using multiple ways of skinning the same cat. This will only lead to an excess of 'stubs' in your executable image. 'Stubs' are the actual names of the functions you will call, paired with their respective DLLs. Not only do these stubs take real estate in your image, but the actual 'jump table' used to get to them will naturally increase as well.
  • If you can use SendMessage instead of another API which otherwise would not be used, and if your SendMessage call does not require using any further API calls, then use it. SendMessage is found in all Windows applications - it needs to be mentioned only once among your image's 'stubs' and only once in your 'jump table', and using it again costs no overhead at all above the actual code of the call itself.

You hate Hungarian notation too? So what? It's upon us, and perhaps none of us really like it, but it's pervasive throughout the Windows API, so by letting it get to you and fighting it you only make things worse for yourself. Some golden rules to apply:

  • Global variables are named with a descriptive lower case prefix and a name beginning with a single upper case character. E.g. lpString for a global LPSTR; hWnd for a global HWND; and so forth.
  • Local variables should have completely lower case names, where the generic 'Windows data type' might be sufficient. E.g. lpstr for a local LPSTR, hwnd for a local HWND; and so forth.
  • Function arguments should follow the same scheme as global variables.
  • Don't overdo it - make it easy on yourself. E.g. lpstrString is totally unnecessary. Even a fool - even you - can see it pertains to a string, so the classic 'lp' as a prefix is completely sufficient.
  • Don't criticize too much - where you see 'consistent inconsistencies', accept them. E.g. the UINT in the window procedure is de facto message rather than Message - leave it alone.
  • 'b' as a prefix for both 'BOOL' and 'BYTE' is obviously going to get you into trouble, so do something about it. Try 'f', standing for 'flag', as a possible alternative.
  • Use the types Microsoft does unless it means total disaster. If Redmond falls flat on its face, it's hardly going to affect you anyway. Otherwise you're in constant jeopardy: they might change things at a moment's notice and expect your code to be compliant already. So make sure it is.
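
The rules above, sketched in a few lines. LPSTR and HWND are typedef'd locally only so the sketch compiles outside a Windows build, and Frob is a hypothetical function:

```c
/* Stand-ins for the Windows types; real code gets these
   from <windows.h>. */
typedef char *LPSTR;
typedef void *HWND;

LPSTR lpString;                 /* global: lower case prefix plus a
                                   name with an initial capital      */
HWND  hWnd;                     /* global HWND                       */

HWND Frob(HWND hWndOwner)       /* argument: named like a global     */
{
    LPSTR lpstr = lpString;     /* local: all lower case - the
                                   generic type name is sufficient   */
    HWND  hwnd  = hWndOwner;    /* local HWND                        */
    (void)lpstr;
    return hwnd;
}
```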

Efficiency is always an issue. If you've read your Al Aho, then you will know that except in unusual circumstances, less code also means faster code. And it stands to reason. Keep this in mind. So where can you start looking for possible areas to clean up?

  • WM_CREATE. About the only time handling this message is absolutely necessary is when you're writing a screen saver and don't have access to the startup code. In almost all other possible cases it's going to be a drag on your application. There will be that much more code hovering there 'in the stratosphere' for a message that can only occur once. You'll know when it's 'absolutely necessary', and if you can't point out exactly how and why, then you don't need it. Once you've created your main window, go on to do everything else you need to do in the same startup function before turning things over to your message pump.
  • You don't need separate InitApplication and InitInstance functions anymore. In fact, you can take everything a step further and really reap the benefits: there is absolutely no reason to ever check the return value from your main RegisterClass call - if it fails, then your subsequent CreateWindowEx call must necessarily fail too. And forget that nonsense about returning a BOOL value out of it all - returning the window handle of your main window is more than sufficient, and often much more applicable as well, especially if you're for example running an accelerator table.
  • Don't fill in common dialog structures on the fly unless you can demonstrably prove that it is more efficient. Most often coders waste kilobytes of image size filling in these monstrosities in dedicated functions, and the irony of course is that most of this data should be persistent - so where are you going to store it all? Can you make things even more difficult for yourself? You might have one field - normally at most two fields - which cannot be initialized already in your source. Having your compiler work hard is a far cry from having your application try it. See that you do things that way. And if you really have to create and fill in a common dialog structure on the fly, don't go initializing each and every field that needs it to zero - use the far more efficient ZeroMemory instead. Your application will be most grateful you did.
  • We don't use INI files anymore, so why translate every WritePrivateProfile* call into a separate RegSetValueEx? Nope - back in those days things were tough. INI files were about as good as you could get - unless you took a step back and beyond and devised your own system (which you should have, but no point pursuing that now). Today everything with few exceptions is to be stored in the Registry, and normally there's a great advantage in doing so, too. Every user gets their own settings, and all you have to do to ensure this is to always navigate under HKCU. But having said that, there is no reason to have all these descriptive strings anymore, and to moreover divide everything up either. There are a number of efficiency points here, and some pertain to your application, and some pertain to the system. The system as a whole does not benefit from having excess keys in its Registry. Each and every Registry key, in addition to the data it must hold, has an overhead of 260 bytes, so go figure. And every time you add a nice mile-long descriptive string to a value you only increase the overall system bloat. And why? Do you want your users mucking about like that with your application's Registry settings, then calling you in the middle of the night and complaining that your application does not work properly? Where do all these settings come from, anyway? One would hope you didn't have them spread helter-skelter all around your application; they should ideally be in the same place... meaning a system-wide application structure for persistent settings has got to be the way to go. You don't need to read these in individually, parse them, change them from character strings to integers, or anything of the sort - you simply do a 'swish - swish' - once at startup, and again at exit. Everything gets stored - that's right - binarily. You need but one storage and one retrieval call instead of that Chinese army you had before. 
Look at the way the MS Office applications are evolving - most of WinWord's settings are under a single key in binary format. Look at NT's task manager settings - again the same thing. Drop all that INI nonsense and make life easy on yourself.
  • Are you storing your resources correctly? Doofus question, you think perhaps, but run through it again - are you? Hopefully you're not using the MFC toolbar construct, but storing the glyphs to your toolbar as a simple DIB or BMP. Yet - have you tried compressing this file? And don't laugh, but sometimes, with especially small toolbar glyph files, it actually pays to not compress them. And all of this will have a direct corresponding effect on your executable image size. The classic boner is the MFC splash screen bitmap, which weighs in at nearly 200KB yet could be compressed to less than half that size (and note that the compression involved - run length encoding, or RLE - involves no change in the quality of the image) - but even so, what developer in their right mind would want a 100KB splash bitmap hanging around? Wake up - if your corporation is producing applications one after the other, why don't you have a standard splash mechanism, much like ShellAbout works for Microsoft applets? Not to mention icons - Microsoft wants you to store icons in all possible formats, but is this reasonable? How many different aspects of Windows can handle them anyway? One thing is for sure at the time of writing - Windows cannot consistently handle icons with more than 16 colors. So don't pressure the system and bloat your application. As for small 16x16 icons - use your own judgment. If your small icon will be prevalent, and the system's 'stretch' of it looks ugly, then yes, create your own. If not - then don't. Can't be simpler than that. An icon should only take 766 bytes - remember that.
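
The 'one swish' settings scheme from the Registry point above can be sketched as follows. On Windows the backend would be a single RegSetValueEx / RegQueryValueEx pair with REG_BINARY under one HKCU key; here a memory buffer stands in for the Registry value so the sketch is self-contained, and the SETTINGS fields are purely illustrative:

```c
#include <string.h>

/* All persistent settings in one place - one system-wide
   application structure, stored as a single binary blob. */
typedef struct {
    int  x, y, cx, cy;          /* last window position and size */
    int  fStatusBar;            /* visibility flag               */
    char szLastFile[260];       /* most recent document          */
} SETTINGS;

static unsigned char g_blob[sizeof(SETTINGS)];  /* stand-in for the
                                                   Registry value   */

void SaveSettings(const SETTINGS *ps)   /* once, at exit    */
{
    memcpy(g_blob, ps, sizeof *ps);     /* one 'swish' out  */
}

void LoadSettings(SETTINGS *ps)         /* once, at startup */
{
    memcpy(ps, g_blob, sizeof *ps);     /* one 'swish' in   */
}
```

No parsing, no string-to-integer conversions, no descriptive value names bloating the Registry - one storage call and one retrieval call.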

Cut out the middlemen in your project. This is going to hurt a lot of heads out there, but it can't be helped - it's the awful truth. There are a lot of castaways in the IT industry. Some estimates are that 95% of all IT employees are doing nothing at all except sucking up humungoid salaries larger than yours by a geometrical factor. At some companies it's easy to see within minutes that there are at least nine fat cats for every productive programmer, at some companies it's obvious the number could be ten times as many.

If you're in charge of the coding, try to establish direct contact with your future end users as soon as possible, and roll over all the heads you have to in order to get there. Believe it - they're dying to meet you too, and they're just as smothered by a thick reprehensible layer of well salaried ne'er-do-wells as you are. As for the people in between - well, they're most likely occupied day and night with finding new ways to ensure that you do not meet - so get to it! Your end users need you as much as you need them.

Don't believe in, don't rely on, the specs you've been given - they were most likely put together by the same bunch of middle echelon jerks who are trying to ruin everything for you right now - talk to your end users instead, they're the only people who really know. They might not know computerese - and you should of course not assume they do - but they know their own jobs, most are probably very proud of the work they have been doing, and it's to help them with their jobs that you've been assigned to your project.

Show them prototypes of how their new application or system will look - they will understand intuitively if everything's all right and in place, and if it's not you'll quickly be able to adjust things with their guidance, especially if you've developed a good and flexible prototyping system (yes, this might mean carrying around a laptop and drawing dialogs on the fly). Ask them to give you a guided tour and show you how they're working now - you need to know this, see this, and learn it well.

And finally, when all is said and done, take a few days and go over to their place of work and volunteer - work right beside them. As one of them. Not only will this impress the pants off them, but they'll loosen up. A lot of 'ordinary people' are literally taught and whipped into feeling awed in the presence of programmers and to clam up - when you work side by side with them, they will understand that they can talk to you, that you will listen (and you'd better, once they open up), and you will gain invaluable insights into how your application or system is actually running. Normally they've been stepped on so mercilessly by their and your bosses during the negotiations that this behavior cannot but be expected. If there are changes to be made, you will be able to get most of them into production within a week instead of waiting for your two bureaucracies to grind through it over a year or two and countless brothel visits and wet dinner parties.

Never forget - your end users are real people too, and they deserve your consideration, your devoted attention, and your respect - no matter what it takes, no matter how many idiots in your and their organization you have to step over on your way there.

DLLs. Most good systems use their own DLLs - this is a fact of life and most often a good fact as well. (Many systems of course use DLLs when they shouldn't and then no, it's not a good fact then.) But the general rule is: if you are about to code the same frikkin function a second time - stop, and consider making a DLL instead, or incorporating it into an existing DLL.

Check around MSDN sample code, and see how many times you see the function CenterWindow. Under 100? Check again. The code is almost always identical - or nearly so - and the result is of course always the same - or at least it should be. Now check how many whiz-bang applications you have from Microsoft that center all their dialogs on their parent windows, and then finally check the Windows API for the function. Find it?
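
Since the API exports no such function, here is what the eternally rewritten CenterWindow boils down to - a candidate for your own DLL. Only the arithmetic is shown, with the RECT layout declared locally so the sketch is self-contained; on Windows you would fill the RECT with GetWindowRect and apply the result with SetWindowPos, and the name CenterOrigin is our own:

```c
/* Stand-in for the Windows RECT; real code gets it
   from <windows.h>. */
typedef struct { long left, top, right, bottom; } RECT;

/* Compute the top left corner that centers a cx-by-cy
   window on the rectangle rc. */
void CenterOrigin(const RECT *rc, long cx, long cy,
                  long *px, long *py)
{
    *px = rc->left + (rc->right  - rc->left - cx) / 2;
    *py = rc->top  + (rc->bottom - rc->top  - cy) / 2;
}
```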

Using helpful static libraries is a thing of the past - we're living in multiprocessing, multithreaded times now. You not only want to centralize functionality, you want to minimize RAM use.

Your functions must of course be re-entrant. They should store no variables which can affect thread behavior. They can perfectly well initialize on behalf of a client application, but should be careful about doing things on a per-thread basis. Yes, there are mechanisms for this sort of thing - you get special messages in your entry point - but experience has shown that Microsoft is doing a lot of things behind the scenes in there, things they never have and never will document sufficiently, and if you can avoid per-thread services, do so. (For the record, there are special hidden critical sections behind every entry point in every DLL you hook up with and every DLL you write - Microsoft will never acknowledge this openly, but, 'off the record' - yep, they're there.)

Corporations that have a well delineated area of application development can benefit from having two separate DLLs per project - one that contains corporation-wide well documented functions, and one that contains functions germane to a particular system destined for market. Some companies might want to divide this even further, as the complexity of a project increases - delegating some particular assignment to some particular group and having them present their functionality in DLL form. Fine - the way to start is for this team to produce a 'dummy' DLL which sends back an innocuous reply or service and serves to hold the system in place for the time being - and as the main modules are completed, more and more of the DLL functionality can be tested. This is fine. Not everything has to be squeezed into the same two DLLs. But the principle remains - when a system, rather than a corporation, needs to use the same functionality over and over again - put it in a system-wide DLL. When something comes along that is so darned good that the entire corporation - and all the systems marketed by it - can benefit by its functionality - put it in a corporate wide DLL. And keep these demarcations as clear as possible. Don't clutter DLLs which other projects have to use with code they don't need and don't invite redundancy by duplicating the same code in multiple DLLs - this is as much of a boner as writing CenterWindow as many times as Microsoft does.

Always use DLL code rather than runtime code if you can help it. Of course, never link anything statically - not even the standard C runtime (your environment should provide a thread-safe DLL form - if it doesn't, change your environment). Functions such as strlen, strcpy, strcmp and sprintf are mostly a thing of the past - the Windows DLLs contain variants of all these functions which have been around for quite some time and are a lot faster and more efficient too. For string manipulation use lstrlen, lstrcpy, lstrcat, and lstrcmp(i). For string formatting, as long as it doesn't involve floating point, use wsprintf.

Avoiding redundancy also includes resource management. All your applications and systems using the same dialog scripts, the same cursors, the same splash screens and about boxes? Look at what Microsoft has done with ShellAbout. With their icon searching dialog (undocumented officially but still there nonetheless). With their 'Run' dialog (yes, it is shared - if you don't know how, look again). Do the same.

Keep your DLLs in your own directories. Read this again: keep your DLLs in your own directories. This means never using the SYSTEM or system32 or the equivalent directories under any circumstances whatsoever. Let's review this matter - no sarcasm intended: 'SYSTEM' or the 'system' in 'system32' stands for 'operating system', meaning 'Microsoft', meaning that's their area, their domain, and your stuff is out of line getting anywhere near. Again - there is no excuse - ever. Read it again: there is no excuse - ever. And if you are collaborating with a corporation that does not abide by this rule, and are in a position to influence things, then by all means do so. Get them out of there, where they don't belong.

9 2 5. Ideally programming should be a nine to five job. Microsoft and many Japanese corporations don't believe this, expecting programmers to install futons in their offices and work unbelievable hours. But you should fight this. Not only will you ruin your health, but you will seldom reap any benefits by being hopped up on caffeine (or heaven forbid even worse) for days on end without sleep. The good ideas, the solutions, will come to you in time. See it as a dragon in a cave. The dragon is in there, you can't go in after it, you have to wait for it to come out to kill it, and if you're not there, day in and day out - no matter how terrible and unproductive you feel - you're not going to be around when it finally comes out, and you won't get to kill it. You won't get the great solution, the unbelievable hack.

Startup code. A lot of environments today, eager to get on the flimsy C++ bandwagon, give you - without ever disclosing so officially - a lot of junk you won't need if you build your applications in a stable way with C. Investigate this code if you can find it and re-write it, eliminating all the contingencies for Bjarne's nightmare. You should save several kilobytes per image file. If you are working in a Windows environment, then know that your real entry point takes no arguments at all and has no return value, expecting you to call the Windows equivalent of exit - ExitProcess - right before the final right brace. If you need the command line call - what else - GetCommandLine, and if you need your HINSTANCE call GetModuleHandle. Make sure your environment understands it is to use your startup code and not its own and then spread your wings and fly.
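
A sketch of what such a startup replacement amounts to. The kernel32 calls here are given stub definitions with their rough ANSI signatures so the sketch compiles outside a Windows build, and Startup and Run are hypothetical names - the real entry point symbol is whatever you tell your linker to use:

```c
/* Stand-ins for the Windows types and the three kernel32
   exports; in a real build these come from <windows.h>. */
typedef void *HINSTANCE;
typedef char *LPSTR;

static int g_nExitCode = -1;

static HINSTANCE GetModuleHandle(LPSTR lpName)   /* stub */
{
    (void)lpName;
    return (HINSTANCE)1;
}

static LPSTR GetCommandLine(void)                /* stub */
{
    static char szCmd[] = "app.exe /x";
    return szCmd;
}

static void ExitProcess(unsigned int uExitCode)  /* stub: records the
                                                    code, doesn't exit */
{
    g_nExitCode = (int)uExitCode;
}

static int Run(HINSTANCE hInstance, LPSTR lpCmdLine)
{
    /* Your WinMain-equivalent: register class, create window,
       run the message pump. */
    (void)hInstance; (void)lpCmdLine;
    return 0;
}

/* The real entry point: no arguments, no return value. */
void Startup(void)
{
    ExitProcess((unsigned int)Run(GetModuleHandle(0),
                                  GetCommandLine()));
}   /* ExitProcess right before the final right brace */
```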

Copyright © Radsoft. All rights reserved.