Date: | November 1, 2004 / year-entry #380 |
Tags: | history |
Orig Link: | https://blogs.msdn.microsoft.com/oldnewthing/20041101-00/?p=37433 |
Comments: | 26 |
Summary: | Back in the days of 16-bit Windows, the difference was significant. In 16-bit Windows, memory was accessed through values called "selectors", each of which could address up to 64K. There was a default selector called the "data selector"; operations on so-called "near pointers" were performed relative to the data selector. For example, if you had... |
Back in the days of 16-bit Windows, the difference was significant. In 16-bit Windows, memory was accessed through values called "selectors", each of which could address up to 64K. There was a default selector called the "data selector"; operations on so-called "near pointers" were performed relative to the data selector. Important: Near pointers are always relative to a selector, usually the data selector.

The GlobalAlloc function allocated a selector that could be used to access the amount of memory you requested. (If you asked for more than 64K, then something exciting happened, which is not important here.) You could access the memory in that selector with a "far pointer". A "far pointer" is a selector combined with a near pointer. (Remember that a near pointer is relative to a selector; when you combine the near pointer with an appropriate selector, you get a far pointer.)

Every instance of a program and DLL got its own data selector, known as the HINSTANCE, which I described in an earlier entry. The default data selector for code in a program executable was the HINSTANCE of that instance of the program; the default data selector for code in a DLL was the HINSTANCE of that DLL. Therefore, the meaning of a near pointer depended on which module was using it.

The memory referenced by the default selector could be turned into a "local heap" by calling the LocalInit function. Initializing the local heap was typically one of the first things a program or DLL did when it started up. (For DLLs, it was usually the only thing it did!) Once you have a local heap, you can call LocalAlloc to allocate memory from it. The LocalAlloc function returned a near pointer relative to the default selector, so if you called it from a program executable, it allocated memory from the executable's HINSTANCE; if you called it from a DLL, it allocated memory from the DLL's HINSTANCE.

If you were clever, you realized that you could use LocalAlloc to allocate from memory other than HINSTANCEs.
All you had to do was change your default selector to the selector for some memory you had allocated via GlobalAlloc, call the LocalAlloc function, then restore the default selector. This gave you a near pointer relative to something other than the default selector, which was a very scary thing to have, but if you were smart and kept careful track, you could keep yourself out of trouble.

Observe, therefore, that in 16-bit Windows, the LocalAlloc and GlobalAlloc functions were completely different! LocalAlloc returned a near pointer, whereas GlobalAlloc returned a selector. Pointers that you intended to pass between modules had to be in the form of "far pointers" because each module had a different default selector. If you wanted to transfer ownership of memory to another module, you had to use GlobalAlloc, since that permitted the recipient to call GlobalFree to free it. (The recipient couldn't use LocalFree, since LocalFree operates on the local heap - which would be the local heap of the recipient, not the same as your local heap.)

This historical difference between local and global memory still has vestiges in Win32. If you have a function that was inherited from 16-bit Windows and it transfers ownership of memory, it will take the form of an HGLOBAL. The clipboard functions are a classic example of this. If you put a block of memory onto the clipboard, it must have been allocated via GlobalAlloc, because you are transferring the memory to the clipboard, and the clipboard will call GlobalFree when it no longer needs the memory. Memory transferred via STGMEDIUM takes the form of HGLOBALs for the same reason.

Even in Win32, you have to be careful not to confuse the local heap with the global heap. Memory allocated from one cannot be freed on the other. The functional differences have largely disappeared; the semantics are pretty much identical by this point. All the weirdness about near and far pointers disappeared with the transition to Win32.
But the local heap functions and the global heap functions are nevertheless two distinct heap interfaces.

I'm going to spend the next few entries describing some of the features of the 16-bit memory manager. Even though you don't need to know them, having some background may help you understand the reason behind the quirks of the Win32 memory manager. We saw a little of that today, where the mindset of the 16-bit memory manager established the rules for the clipboard.

[Raymond is currently on vacation; this message was pre-recorded.]
Comments (26)
Comments are closed.
It seems that MS can’t win this one.
Break backwards compatibility and they are the 800-lb gorilla throwing its weight around and expecting the industry to kowtow to it like the evil monopolist it apparently is.
Make it easy to migrate and they’re short-sighted and sacrificing the long term for the short.
What to do… what to do…
" the 16-bit subsystem had to be much more tightly coupled to the 32-bit one. (E.g., sending messages between between them had to work as transparently as feasible.) "
How tightly coupled are Win16 and Win32 in Windows NT-derived OS’s? For example, is Win16 just a translation layer into Win32 or a completely separate implementation?
(Also, do you plan on touching on things like __AHINCR and other issues related to huge pointers?)
"Even in Win32, you have to be careful not to confuse the local heap from the global heap. Memory allocated from one cannot be freed on the other. The functional differences have largely disappeared; the semantics are pretty much identical by this point.
Interesting. I always thought that LocalAlloc and GlobalAlloc were identical functions, but the above suggests that there are some differences. I’m guessing that they both use NtAllocateHeap (not sure of the name) etc. on NT, and I always thought: what else do the heap functions do, other than call NtAllocateHeap and handle various parameter flags, that would make LocalAlloc and GlobalAlloc different?
I think that it would have been good if Win32 had only one heap function set, with source code macros to help out 16-bit programs. Were there any run-time differences that would make this a pain? I also understand however that Microsoft has to make it easy to migrate and sometimes has to relax ideology a little.
Another example of the evolutionary cruft in Windows. I responded to a previous article and said that non-versioned shared DLLs were a (semi) good idea at the time but have evolved into an environment where they are both useless and harmful.
Here’s another example that hopefully will be less controversial: There’s no way we could have gotten to this result, but now that there’s room and horsepower to spare, wouldn’t it be better all around if 16-bit programs ran in essentially an entirely distinct OS from 32-bit programs, with the only overlap being that needed to synchronize access to system resources (hopefully in a way that favors new apps)?
WoW is a significant part of the way there. But why oh why do we even have to *have* a LocalAlloc in Win32? So people can recompile their 16-bit apps into 32 bit apps without changing the source code? If so, this was a learning of the wrong lesson. The hard part of doing this port was *never* the cut-and-paste operation of replacing obsolete functions with new ones. It was always about the conflicts that this caused. Even if you wanted to make it easy for people to maintain 2 versions, a compile-time macro would have been vastly superior.
Moving along to 64 bit Windows (which there’s still hope for)… Argh! How much hair can I tear out before I’m bald? Where to start… how about: Why do the 64 bit versions of system files reside in (e.g.) C:\Windows\System32 (really… I’ve heard the excuses and am not convinced)?
If you’re the new guy on the block, you have to bend over backwards to make it easy for people to port to your new platform. And 16-bit interop was also critical to allow people to switch over piecemeal instead of all-at-once. Consequently, the 16-bit subsystem had to be much more tightly coupled to the 32-bit one. (E.g., sending messages between them had to work as transparently as feasible.)
Some may argue that decisions were made to trade off long term benefits for short term gains, but what if you don’t even know whether you’re going to *have* a long term? Think of all the products that died in the short term.
The near/far pointer madness was just the tip of the 16 bit iceberg (or should I say, the tip of the Win3.0 BurgerMeister). Initializing multiple local heaps was something I did before breakfast before tackling the tough issues.
I recall something about there being distinctions between pagelocked and non-pagelocked memory that often caused out-of-memory problems – sometimes caused when too many DLLs declared their code segments as locked – I no longer remember all the gory details. The memory allocator would try to move that memory below the DOS address boundary (1M), and once that got used up there was no more left. I had to write my own loader routines to allocate all the DOS memory before loading DLLs, then free all the DOS memory after the DLL had loaded. Or something like that. Heck, that was almost 15 years ago.
And then the distinctions between global atoms and local atoms and 16-bit usage-count overflows…
I know that MSFT has to be backwardly compatible to the greatest extent possible, but at some point it’s worth drawing a line and dropping support for some of the more archaic bits that are of interest only to archeologists. Does anyone really need to use LocalAlloc any more? I hope not…
How about Microsoft include its Virtual PC technology in Longhorn, and include virtual versions of Windows 3.1 and 9x? This way, the applications requiring backward compatibility can be run in a separate environment, while keeping the core OS free of compatibility shims.
Sure you can run Windows 3.1 programs inside a Virtual PC but programs inside the Virtual PC wouldn’t be able to interop with programs outside it. E.g., your 32-bit program won’t be able to access windows that belong to 16-bit processes. Maybe you don’t care but I suspect others do. (Like, say, people who use screen readers or virtual desktop managers.)
Since 16-bit processes do not pass HLOCALs among themselves, LocalAlloc can be slightly more efficient than GlobalAlloc. Sure LocalAlloc could have been #define’d to GlobalAlloc, but that would have made every LocalAlloc allocation pay the cost of HGLOBAL virtualization. Don’t make 99% of the callers pay the cost of something that only 1% of the callers actually need.
Raymond wrote: "If you asked for more than 64K, then something exciting happened, which is not important here."
Not important, but maybe interesting?
With regards to moving 16-bit into a VPC environment: I guess most people who use 16-bit software would rather just use Windows XP then… too many resources would be spent on this VPC, and if you manage to set up some bridge between the VPC environment and your host (for applications such as screen readers), then it will have perf. implications, and the bridge will also, in all likelihood, introduce some bugs which some customers will run into.
But another option is to let users disable the WOW subsystem. (I really hope that the CSRSS (aka Win32) subsystem is not dependent on the WOW (aka Win16) environment!!) For example, the POSIX environment can be disabled (in fact, it is no longer shipped with Windows XP, but can be downloaded from MS – see SFU (and SFU is a lot better than the POSIX subsystem which shipped earlier :))).
I expected to find the WOW subsystem at this registry key, HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems, but it wasn’t there :/. I always thought it ran as a separate subsystem, side-by-side with CSRSS, POSIX and OS/2…?
[PS. Quite ironic that I’m writing a comment here about reliability (which is what you should get by removing WOW), and got a BSOD in the middle of the comment :). Seems like my mouse driver caused some problems (or why would i8042prt!I8xQueueCurrentMouseInput+60 give a STATUS_ILLEGAL_INSTRUCTION exception? OCA didn’t know about it, unfortunately…) — I really hope that the introduction of userland device drivers in "Longhorn" (see some WinHEC 2004 slide about gfx drivers) will give positive results for system reliability, and that the version of Windows after Longhorn will move more device drivers into userland. Sorry about the digression!]
I find these trips back into the dark ages of programming enormously interesting, despite being far too young to have ever used them. Well, not too young as such, but I was but a school kid playing games. Still, these remind me of learning history at school, about the English civil war and Magna Carta etc.: i.e., why the world is the way it is today.
WOW (and thus ntvdm) runs as a Win32 process, not as a subsystem.
Doesn’t GlobalAlloc give you a huge pointer – i.e., if you allocate 128K of memory, you get a far pointer sel:offset? Sel is actually the first of two consecutive selectors.
The selector Sel would have a base address pointing to the base of your 128K, and a limit of 128K. The next selector at sel+__AHINCR would have a base address 64K into the array and a size of (I guess) 64K. __AHINCR was a symbol which the Windows kernel exported.
So if you are 386 aware you can access it using sel and a 32 bit offset. Otherwise you must access it as huge pointer, with 16 bit offsets and add n*__AHINCR to the pointer to step between n 64K pages. Of course, the 16 bit code will be slow, because you need to keep reloading different selectors into a segment register to seek around in the array.
So the algorithm for 286 windows was to allocate n consecutive selectors for an allocation of n*64K. I guess on the 386 someone figured out that it cost nothing to set the limit on the first selector to the total size of the array, and it allowed 386 aware applications to access a huge pointer without a speed penalty.
Oops, I meant "add n*__AHINCR to the segment part of the pointer" to access a byte at n*64K.
I’d be all for Virtual PC providing the compatibility environment in the future. In fact I suspect at times that is why Virtual PC was purchased.
But isn’t it a rather expensive fix in terms of complexity and resources? We’re talking about running an instance of a whole computer for each environment.
In some of the MSDN pages that are essentially tutorials on coding DLLs and threads, examples use LocalAlloc. From this discussion it does sound good if LocalAlloc is no longer necessary. Perhaps those examples can be updated?
I already answered this in an earlier comment.
http://weblogs.asp.net/oldnewthing/archive/2004/11/01/250610.aspx#250944
LocalAlloc is preferred since it doesn’t have the 16-bit compatibility goo that GlobalAlloc is forced to carry.
11/3/2004 5:20 PM Raymond Chen
> I already answered this in an earlier
> comment.
Hmm yes, you did say that LocalAlloc is slightly more efficient than GlobalAlloc. But I thought one of the points of this discussion was that LocalAlloc can be dangerous? If an ordinary C programmer (who didn’t program in Win16) has an ordinary pointer and passes it to a function that has an ordinary declaration, they can still get caught the same way as happened in Win16, right? One solution is for every programmer to check every function call, and not pass a LocalAlloc’ed pointer to a different DLL’s function. A different solution is to use GlobalAlloc instead of LocalAlloc (even though this solution doesn’t work well in 1993). Did I understand correctly?
You’re confusing Win16 (the topic of this article) with Win32. In Win32, LocalAlloc is preferable. The HINSTANCE problem doesn’t exist in Win32 since there are no near pointers.
Win32 has no near pointers? I’d have said it has no far pointers. The number of "FAR" decorators in the Win32 headers puts the lie to that.
By "near pointer" I mean "a pointer that changes meaning depending on which DLL in a process is using it". In Win32, a pointer has the same meaning throughout a process.
Yes, you can also say that all pointers are "near" in the sense that there are no selectors any more. But the fact that there are no segments means that you don’t have to worry about context-sensitive pointers in a process.
(Of course you do have the new problem of context-sensitive pointers across processes.)
The difference is too small….
Analyzed the different ways of allocating dynamic memory under Windows.