Date: | June 15, 2004 / year-entry #235 |
Tags: | history |
Orig Link: | https://blogs.msdn.microsoft.com/oldnewthing/20040615-00/?p=38873 |
Comments: | 28 |
Summary: | Once your average GUI program picks itself up off the ground, control begins at your WinMain function. The second parameter, hPrevInstance, is always zero in Win32 programs. Certainly it had a meaning at some point? Of course it did. In 16-bit Windows there was a function called GetInstanceData. This function took an HINSTANCE, a pointer,... |
Once your average GUI program picks itself up off the ground, control begins at your WinMain function. The second parameter, hPrevInstance, is always zero in Win32 programs. Certainly it had a meaning at some point?

Of course it did.

In 16-bit Windows there was a function called GetInstanceData. This function took an HINSTANCE, a pointer, and a length, and copied memory from that instance into your current instance. (It's sort of the 16-bit equivalent to ReadProcessMemory, with the restriction that the second and third parameters had to be the same.)

(Since 16-bit Windows had a common address space, the GetInstanceData function was really nothing more than a hmemcpy, and many programs relied on this and just used raw hmemcpy instead of using the documented API. Win16 was actually designed with the possibility of imposing separate address spaces in a future version - observe flags like GMEM_SHARED - but the prevalence of tricks like hmemcpy'ing your previous instance reduced this potential to an unrealized dream.)

This was the reason for the hPrevInstance parameter to WinMain. If hPrevInstance was non-NULL, then it was the instance handle of a copy of the program that was already running. You could use GetInstanceData to copy data from it and get yourself up off the ground faster. For example, you might want to copy the main window handle out of the previous instance so you could communicate with it.

Whether hPrevInstance was NULL or not told you whether you were the first copy of the program. Under 16-bit Windows, only the first instance of a program registered its classes; second and subsequent instances continued to use the classes that were registered by the first instance. (Indeed, if they tried, the registration would fail since the class already existed.) Therefore, all 16-bit Windows programs skipped over class registration if hPrevInstance was non-NULL.

The people who designed Win32 found themselves in a bit of a fix when it came time to port WinMain: What to pass for hPrevInstance? The whole module/instance thing didn't exist in Win32, after all, and separate address spaces meant that programs that skipped over reinitialization in the second instance would no longer work. So Win32 always passes NULL, making all programs believe that they are the first one.

And amazingly, it actually worked.
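(Purely for illustration, here is a rough sketch of the classic 16-bit pattern described above. It is not code from the article; the global g_hwndMain and the RegisterAppClasses helper are invented for the example, and a real Win16 program would of course need the rest of its startup boilerplate.)

    static HWND g_hwndMain;   /* previous instance keeps its copy at the same address */

    int PASCAL WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                       LPSTR lpszCmdLine, int nCmdShow)
    {
        if (hPrevInstance) {
            /* A copy is already running: pull its main window handle out of its
               data segment so we can talk to it, and skip class registration,
               which would fail anyway. */
            GetInstanceData(hPrevInstance, (BYTE *)&g_hwndMain, sizeof(g_hwndMain));
        } else {
            /* First instance: register the window classes. */
            if (!RegisterAppClasses(hInstance)) return 0;
        }
        /* ... create the main window, run the message loop ... */
        return 0;
    }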
Comments (28)
Comments are closed. |
I didn’t get into Windows programming until 1995 so I missed the whole Win16 era. The more I read about it, the more amazed I am that anything worked. ;) (That’s not a flame, just an observation of how every program on the machine had to play nice together to make the system work.)
The sort of contortions MS went through back in the day is also the reason why we have DLL Hell today. Back in the 16-bit days, memory and hard disk space were expensive enough that re-use of code via DLLs made sense. The idea was that you only stored a single copy of the DLL on the disk, and a single copy in memory at runtime.
Sadly, we are stuck with this outdated thinking today, even though memory and disk space are so cheap it’s completely silly to worry anymore. If every Windows app simply stored all the DLLs it needs (aside from true Windows DLLs) in a directory that either contains the executable or is a sub-directory of the directory with the EXE, then all the DLL Hell would go away.
Think about it – you wouldn’t worry if another app installed a new version of the DLL since your code would always load the version you installed with it. Also, if you wanted to move an application, simply moving the whole directory tree it lives in would move everything needed to run it at the same time.
Instead, programmers still act like it’s 1984 and we need to save precious disk and memory space by re-using an existing copy of a DLL.
Oh well….
Peter is right on. Starting as early as Windows 98, additional OS capabilities were available to encourage side-by-side installation, but it still hasn’t caught on today.
http://msdn.microsoft.com/ms811700.aspx
There are other good articles in that same part of MSDN
How did the hPrevInstance parameter work in real mode? Did all the DS’s live in low memory or were they pageable to high memory the way code was?
The first time I used Windows was Windows 3.0 in a machine with 1 MB of RAM. Later that week I got the boss to spend $$$ to add 4 MB to it and I tried enhanced mode.
They made fun of me for learning about Windows in 1989. :)
As they say, premature optimisation is the root of all evil. Several of the things done now are due to premature space or time optimisations done in the very first versions of Windows.
IMHO the first versions of Windows were an absolutely excellent fit with the constraints of the machines at the time (no hard drives, two floppy drives, low-res graphics, 512KB of memory).
But the optimisations done were only good for a generation or two at which point they became liabilities.
And Mike, you don’t realise just how hard it was doing non-trivial programs back then. You had to work with 64KB segments, worry about the stack, data and code segments pointing in different places, had to thunk all your callbacks (which ensured the correct segments were set up before calling the actual code) and you usually had all sorts of DOS-level crud running as well (which later became vxds).
And some trivia I read was that by changing to the "pascal" calling convention, Windows 1.0 was 9% smaller than it would have been with the C calling convention. Of course that meant you had to worry about which calling convention was used all over the place. Details at http://weblogs.asp.net/oldnewthing/archive/2004/01/02/47184.aspx
I have to admit I never had a machine with expanded memory so I never learned how Windows used it…
I like DLL’s. I think they are one of the coolest things in Windows.
"DLL Hell" came about because people (including Microsoft) were not disciplined enough.
You can avoid DLL Hell by giving a DLL a new name whenever its interface changes, or its behavior changes in a way that breaks things.
And never install an older version of a DLL than already exists on the system.
Note that you can keep the name the same when you add new functions.
Anonymous Coward wrote:
> As they say, premature optimisation is the
> root of all evil. Several of the things done
> now are due to premature space or time
> optimisations done in the very first
> versions of Windows.
Er… no. Optimization that is essential to get something to work on the hardware of the time is NOT premature optimization.
Optimization before you measure performance, however, IS premature optimization.
Please learn the difference.
Raymond: Regarding DLL Hell, it’s not even close to solved. While you have the trivial case of side-by-side DLLs solved, the entire OLE situation is still just like the old days. Let’s say I want to run IE 5.01’s web browser control because I know that works. I can’t guarantee that as an ISV. Let’s say, well, I want to guarantee any older version of an ActiveX control. No can do.
Now while I don’t know if .NET fixes anything like this with components, I know that right now it’s just as bad as it ever was with COM components.
This is not my area of expertise so I will leave it to others.
Peter Montgomery: It’s tough to have more than one COM object with the same CLSID on the same system, so your cure for DLL Hell runs into a problem there.
njkayaker: Access violations also happen because people aren’t disciplined enough — not to mention resource leaks, ferry accidents in the Pacific, traffic accidents, and people getting the HWND and HDC arguments to ReleaseDC() transposed when they don’t have STRICT defined, etc. etc. etc.
Sure, if people were more disciplined, these things wouldn’t happen. If Grandma had wheels, she’d be a train. That’s not a terribly profound or useful observation.
Aarrgghh: Sure, if people were more disciplined, these things wouldn’t happen. If Grandma had wheels, she’d be a train. That’s not a terribly profound or useful observation.
I worked on software at DEC for 15 years. You can take an application written on a VAX for VAX/VMS 1.0 (released in 1977) and run it unchanged on the latest version of the operating system (assuming that it’s running on the same architecture hardware). It’s always amazed me that Microsoft has never been able to duplicate that feat.
It’s all a matter of what the priority is.
"If the priorities are keeping enough cruft in to run 27 year old applications, then those are terrible priorities."
Well, I don’t think they’re 27 years old, but some of them are certainly pretty old apps. And I think there are a very large number of enterprise customers who would disagree with your priority assessment. Raymond has covered this in a previous blog entry (sorry, I’m too lazy to dig out the link), but there are a huge number of custom apps inside enterprise customers that were originally written for DOS or early Windows versions, which they still rely on for line-of-business work. We have a tough time selling them a new version of Windows if it breaks their business-critical applications.
I should clarify what I meant: there is nothing wrong with optimisations per se, unless they show through in your API design or other areas that affect future source or binary compatibility. It is the latter I was referring to.
I don’t think Microsoft has historically done a particularly good job of API design. However they did pick backwards compatibility as an important priority, hence the Win16 API ending up in Win32 rather than a cleaner design for NT.
And as for claims about running old binaries, try Visicalc from http://www.bricklin.com/history/vcexecutable.htm
It works fine for me on my XP Pro machine.
"I don’t think Microsoft has historically done a particular good job of API design."
Well, it’s certainly much better than the old Mac API…
"If the priorities are keeping enough cruft in to run 27 year old applications, then those are terrible priorities. "
That’s odd, since one of Microsoft’s defining characteristics (to me) has been an obsessive focus on certain kinds of backwards compatibility. Just witness VisiCalc, posted above, as an example.
FWIW, I think that this is a mark of a high-quality systems software provider, and signifies that they understand the economic realities of their customers. IBM has done a lot of work on backwards compatibility with their mainframe line. Part of the design work for the original System/360 machines included work on emulating older hardware.
One of the things that helped VMS’s upward compatibility work was that the loader helped enforce it. Each image had in it the version of each shared image (another name for a DLL) it was linked against and its version-matching criteria. Linker options allowed you to specify whether any version was acceptable, only an exact version match was acceptable, or any version which was greater than or equal to the version you linked against was acceptable. The default was to accept the version you linked against or newer.
So if you linked an application against V1.2.3 of a shared image and tried to run it against V1.2.1, the loader would abort and display a message explaining why.
Given Dave Cutler’s involvement in both operating systems, I’ve never understood why this wasn’t part of NT.
Barry Tannenbaum wrote:
> So if you linked an application against V1.2.3
> of a shared image and tried to run it against
> V1.2.1, the loader would abort and display a
> message explaining why.
>
> Given Dave Cutler’s involvement in both
> operating systems, I’ve never understood why
> this wasn’t part of NT.
I’d assume that it’s because apps which update system DLLs were supposed to replace those DLLs with same-version-or-higher. So theoretically, you’d never get v1.2.1 on the system if you were expecting v1.2.3
Being able to add functionality to a DLL without invoking DLL Hell is only really true in non-COM DLLs. A COM consumer maintains function entry-point references based on memory offsets. If a COM DLL is recompiled with new functions added, the offsets will often change and cause older consumers to stop working.
This is one of the reasons (according to Don Box) for ubiquitous metadata in .NET. Metadata-based discovery allows for more flexible referencing by assembly consumers, which in turn allows the developer to run old consumers with new assemblies that, traditionally, would have broken binary compatibility.
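(To make the vtable-offset point concrete, here is a hedged sketch in C-style COM; the IWidget interface and its DoSomething method are invented names, not anything from the comment above.)

    #include <windows.h>
    #include <objbase.h>

    /* A consumer compiled against this layout calls DoSomething through
       vtable slot 3 from then on. */
    typedef struct IWidgetVtbl {
        /* The three IUnknown slots always come first. */
        HRESULT (STDMETHODCALLTYPE *QueryInterface)(void *This, REFIID riid, void **ppv);
        ULONG   (STDMETHODCALLTYPE *AddRef)(void *This);
        ULONG   (STDMETHODCALLTYPE *Release)(void *This);
        /* Slot 3: the interface's own first method. */
        HRESULT (STDMETHODCALLTYPE *DoSomething)(void *This);
    } IWidgetVtbl;

    typedef struct IWidget { IWidgetVtbl *lpVtbl; } IWidget;

    /* If a later build inserts a new method ahead of DoSomething, old consumers
       still call through slot 3 and land on the wrong function. Adding methods
       only at the end (ideally as a new interface with a new IID) keeps the
       existing slots stable. */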
We use COM DLLs, which are used only by our applications. When we register them, we simply don’t register the full path, only the relative file name. Actually, we put an environment variable in place of the path, in the InProcServer32 registry key value. Windows automatically expands the environment variable at runtime with the environment of the executable, which we’ve set up in WinMain.
This allows the user to install multiple versions of our application in different directories without conflict. The only rule is that a COM object cannot ever change DLL name without changing CLSID, because it would ‘break’ the previous registration and break older versions of the app. CoCreateInstance will create our COM objects from the DLL in the same directory as the executable. It’s my understanding there is versioning for COM DLLs nowadays, but we’ve been doing this for years.
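(A minimal sketch of the mechanics being described, under the assumption that the value is stored as REG_EXPAND_SZ so Windows will expand it; the CLSID, the MYAPP_DIR variable, and the DLL name are all placeholders invented for the example.)

    #include <windows.h>
    #include <string.h>

    /* Registration time: store an expandable path for the (placeholder) CLSID. */
    void RegisterServerPath(void)
    {
        HKEY hk;
        const char *value = "%MYAPP_DIR%\\mywidget.dll";   /* hypothetical DLL name */
        if (RegCreateKeyExA(HKEY_CLASSES_ROOT,
                "CLSID\\{00000000-0000-0000-0000-000000000000}\\InprocServer32",
                0, NULL, 0, KEY_SET_VALUE, NULL, &hk, NULL) == ERROR_SUCCESS) {
            RegSetValueExA(hk, NULL, 0, REG_EXPAND_SZ,
                           (const BYTE *)value, (DWORD)strlen(value) + 1);
            RegCloseKey(hk);
        }
    }

    /* Run time, early in WinMain: point the variable at the EXE's directory so
       CoCreateInstance loads the copy of the DLL that shipped with this EXE. */
    void PointVariableAtExeDirectory(void)
    {
        char dir[MAX_PATH];
        if (GetModuleFileNameA(NULL, dir, MAX_PATH)) {
            char *slash = strrchr(dir, '\\');
            if (slash) *slash = '\0';
            SetEnvironmentVariableA("MYAPP_DIR", dir);
        }
    }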
{quote}
"I don’t think Microsoft has historically done a particular good job of API design."
Well, it’s certainly much better than the old Mac API…
{/quote}
And nowhere near as nice as the new Mac API :P
I always put my WinMain as follows :–
<pre>int APIENTRY _tWinMain(HINSTANCE hInstance, HINSTANCE, LPTSTR lpCmdLine, int nCmdShow)</pre>
Notice how the 2nd parameter is not named :-)
Holy crap! I didn’t know that weblogs.asp.net didn’t support the <pre> tag :-)
Lots of people used hPrevInstance as kind of a boolean flag, as if it were "bPrevInstanceExists." Sure, in Win32 you don’t need instance-to-instance copying via GetInstanceData any more, but there are still times when you want your app to be the only one running. IIRC, the API docs recommend that you jump through a hoop involving a shared, memory-mapped file — if the share exists, then another instance must be running. Not nearly as easy as the old way….
…whoops… make that "there are still times when you want your app to be the only instance running."
One way to ensure that only one instance of your app is running is to use a mutex. Create the mutex at startup and release it at the end. If the mutex already exists, then bring the existing app to the foreground (assuming it’s a desktop app).
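(A minimal sketch of that approach, assuming a desktop app; the mutex name and the window class name are made up for the example.)

    #include <windows.h>

    int APIENTRY WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                         LPSTR lpCmdLine, int nCmdShow)
    {
        /* The name just has to be unique to this application. */
        HANDLE hMutex = CreateMutexA(NULL, FALSE, "MyApp.SingleInstance");
        if (hMutex && GetLastError() == ERROR_ALREADY_EXISTS) {
            /* Another instance already created the mutex: hand off to it and exit.
               The window class name here is hypothetical. */
            HWND hwnd = FindWindowA("MyAppMainWindowClass", NULL);
            if (hwnd) SetForegroundWindow(hwnd);
            CloseHandle(hMutex);
            return 0;
        }
        /* ... normal startup: register classes, create the window, run the
           message loop ... */
        if (hMutex) CloseHandle(hMutex);
        return 0;
    }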