Date: December 15, 2004 / year-entry #424
Tags: history
Orig Link: https://blogs.msdn.microsoft.com/oldnewthing/20041215-00/?p=37003
Comments: 14
Summary: The performance of the syscall trap gets a lot of attention. I was reminded of a meeting that took place between Intel and Microsoft over fifteen years ago. (Sadly, I was not myself at this meeting, so the story is second-hand.) Since Microsoft is one of Intel's biggest customers, their representatives often visit Microsoft to...
The performance of the syscall trap gets a lot of attention.

I was reminded of a meeting that took place between Intel and Microsoft over fifteen years ago. (Sadly, I was not myself at this meeting, so the story is second-hand.)

Since Microsoft is one of Intel's biggest customers, their representatives often visit Microsoft to show off what their latest processor can do, lobby the kernel development team to support a new processor feature, and solicit feedback on what sort of features would be most useful to add.

At this meeting, the Intel representatives asked, "So if you could ask for only one thing to be made faster, what would it be?"

Without hesitation, one of the lead kernel developers replied, "Speed up faulting on an invalid instruction."

The Intel half of the room burst out laughing. "Oh, you Microsoft engineers are so funny!" And so the meeting ended with a cute little joke.

After returning to their labs, the Intel engineers ran profiles against the Windows kernel, and lo and behold, they discovered that Windows spent a lot of its time dispatching invalid instruction exceptions. How absurd!

Was the Microsoft engineer not kidding around after all? No, he wasn't. It so happens that on the 80386 chip of that era, the fastest way to get from V86-mode into kernel mode was to execute an invalid instruction! Consequently, Windows/386 used an invalid instruction as its syscall trap.

What's the moral of this story? I'm not sure. Perhaps it's that when you create something, you may find people using it in ways you had never considered.
Comments (14)
Comments are closed.
Considering that the majority of x86 processors produced must be used for running Windows, it always surprised me that the two did not work together more closely to optimise performance.
There was a rumour somewhere that Dave Cutler was heavily involved in the design of AMD64, so perhaps that is a more Windows-centric architecture than x86 ever was.
I initially thought that dev must be a pain in the ass to work with because he goes after the wrong problems, but then I realized that with his solution you’ve sped up syscalls and faults with the same effort, so it’s a win in general.
Engineers can sometimes be short-sighted; instead of asking for a faster way to get into kernel mode, they ask for their hack to be made faster.
So what was the aftermath of this? I presume that Microsoft has switched over to a newer syscall scheme; that should be easy because the scheme is wrapped in kernel32.dll. If so, what is it and when was the change made?
I think I remember reading about this particular syscall trap in Andrew Schulman’s "Unauthorized Windows 95" book. It was a few years ago now, though.
Nate: I don’t know. I wasn’t at the meeting 15 years ago and I don’t work on syscall traps. I was just re-telling an amusing story.
asdf: The dev in question is actually a really nice guy, but he does have a playful sense of humor as well.
Was the SYSENTER instruction introduced as an abstraction for that?
Kristoffer, the reason they wanted the "hack" to be made faster was that all the code up to that point used INT instructions; MS-DOS and the BIOS calls used that convention. When you do an INT instruction inside a virtual-86 machine, it naturally needs to somehow invoke the protected mode operating system. On the 386, calling out of a V86 box was an expensive operation, and it happened a lot, since nearly all the code that users ran was DOS apps.
Virtual-86 mode was really Intel’s answer to a much uglier problem. Intel had made it easy to get into protected mode (just flip a status bit) but there was no way to get OUT of protected mode, which was important in the 286 era (circa 1987 or so) because there was no V86 support in the CPU and all the existing apps were real mode MS-DOS. No protected-mode OS (say, OS/2 or Windows/286) was going to launch without some sort of support for existing apps.
The solution to get from protected mode back to real mode was to create a triple-fault condition that would cause the processor to reset itself and head back to the BIOS reset vector, where it would eventually make it to some OS code that would start running the real mode apps. I had understood that Gordon Letwin figured that out, but there are some other credits for it here:
http://www.x86.org/productivity/triplefault.htm
I thought the 286 era was circa 1983 or so, and Intel’s market was still dominated more by embedded systems than by general purpose computers. Intel made RMX and other vendors or users (industrial users, makers of machines) could make other OSes, there was no need to worry about existing apps on the new processors, etc. Of course PC makers and Microsoft had different ideas, and for their purposes they used tools that they saw available, and they used those tools in ways that Intel didn’t imagine.
Invalid instructions are still used as a kind of syscall in DOS emulation inside NTVDM, AFAIK.
PingBack from http://www.xxeo.com/archives/2006/06/24/more-x86-lore-illegal-instructions-and-the-286-protected-mode.html
PingBack from http://blog.meesqa.com/2009/04/18/more-side-effects-that-become-features/