Date: | February 11, 2004 / year-entry #56 |
Tags: | history |
Orig Link: | https://blogs.msdn.microsoft.com/oldnewthing/20040211-00/?p=40663 |
Comments: | 50 |
Summary: | The DirectX video driver interface for Windows 95 had a method that each driver exposed called something like "DoesDriverSupport(REFGUID guidCapability)" where we handed it a capability GUID and it said whether or not that feature was supported. There were various capability GUIDs defined, things like GUID_CanStretchAlpha to ask the driver whether it was capable of stretching... |
The DirectX video driver interface for Windows 95 had a method that each driver exposed called something like "DoesDriverSupport(REFGUID guidCapability)" where we handed it a capability GUID and it said whether or not that feature was supported. There were various capability GUIDs defined, things like GUID_CanStretchAlpha to ask the driver whether it was capable of stretching a bitmap with an alpha channel.

There was one driver that returned TRUE when you called DoesDriverSupport(GUID_XYZ), but when DirectDraw tried to use that capability, it failed, and in a pretty spectacular manner. So one of the DirectDraw developers called the vendor and asked them, "So does your card do XYZ?"

Their response: "What's XYZ?"

It turns out that their driver's implementation of DoesDriverSupport was something like this:

BOOL DoesDriverSupport(REFGUID guidCapability)
{
    return TRUE;
}

In other words, whenever DirectX asked, "Can you do this?" they answered, "Sure, we do that," without even checking what the question was. (The driver must have been written by the sales department.)

So the DirectDraw folks changed the way they queried for driver capabilities. One of the developers went into his boss's office, took a network card, extracted the MAC address, and then smashed the card with a hammer.

You see, this last step was important: The GUID generation algorithm is based on a combination of time and space. When you ask CoCreateGuid to create a new GUID, it encodes the time of your request in the first part of the GUID and information that uniquely identifies your machine (the network card's MAC address, which is required to be unique by the standards that apply to network cards). By smashing the network card with a hammer, he prevented that network card from ever being used to generate a GUID.

Next, he added code to DirectDraw so that when it starts up, it manufactures a random GUID based on that network card (which, by virtue of its having been destroyed, can never be validly created) and passes it to DoesDriverSupport. If the driver says, "Sure, we do that," DirectDraw says, "Aha! Caught you! I will not believe anything you say from now on."
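A rough sketch of the probe, for readers who want to see the mechanics spelled out. This is illustrative code rather than the actual DirectDraw source: the MAC bytes are placeholders (the destroyed card's real address is obviously not published), the function-pointer plumbing stands in for the real driver interface, and version-bit and seeding details are glossed over. The point is simply that the bogus GUID carries the destroyed card's node address plus freshly randomized time fields, so no honest driver can ever have seen it before:

// Sketch only - not the actual DirectDraw code.
#include <windows.h>
#include <stdlib.h>
#include <string.h>

// Placeholder bytes; the real (destroyed) card's MAC is not public.
static const BYTE g_rgbSmashedNicMac[6] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };

GUID ManufactureImpossibleCapabilityGuid()
{
    GUID guid;
    BYTE* pb = reinterpret_cast<BYTE*>(&guid);
    // Randomize the time and clock-sequence fields (bytes 0-9).
    for (int i = 0; i < 10; i++)
        pb[i] = static_cast<BYTE>(rand());
    // Node field (bytes 10-15) = the MAC of the card that no longer exists.
    memcpy(pb + 10, g_rgbSmashedNicMac, 6);
    return guid;
}

// If the driver claims to support a capability that cannot possibly exist,
// stop believing anything it says.
BOOL ShouldTrustDriverCaps(BOOL (*DoesDriverSupport)(REFGUID))
{
    GUID guidBogus = ManufactureImpossibleCapabilityGuid();
    return !DoesDriverSupport(guidBogus);
}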
Comments (50)
Comments are closed.
Great story! I thought it was going to end in another driver-specific hack, like in the stdcall article.
Although I still think he should have taken the hammer to the vendor’s driver developers.
That’s hilarious!
Although I think the hammer could have been employed as a good LART for those driver developers. ;)
I understand that GUIDs no longer make use of the network card, due to action by daft people in the USA who thought that their soul was being given away each time a GUID was made.
Why did Microsoft give in and remove the “G” from GUIDs?
The Platform SDK states:
"For security reasons, it is often desirable to keep ethernet/token ring addresses on networks from becoming available outside a company or organization. In Windows XP/2000, the UuidCreate function generates a UUID that cannot be traced to the ethernet/token ring address of the computer on which it was generated. It also cannot be associated with other UUIDs created on the same computer."
So, how is this done (hash of the real GUID?), and how can you guarantee that the GUID generated won’t collide with another one generated somewhere else?
If this is a bug in the driver (and clearly these sorts of people should not be writing drivers. If there is one thing I know about drivers it is that attention to detail and documentation is a 100% must), why didn’t Microsoft just inform the driver developers to FIX the bug, instead of putting in a hack like this? I’m sure it does not cost that much performance wise, but it’s the principle of the thing that bothers me I guess.
How are GUIDs created in computers without NICs?
Can this special GUID occur on one of those?
microsoft probably did tell the manufacturer to fix the bug, but remember: the reason microsoft even found out about this problem was that the driver had been released and was in the wild, and people had it on their machines. and games were failing to run, and people didn’t know why, and they were presumably calling support at the game developer and at microsoft.
saying ‘we don’t want to put in hacks to fix other people’s bugs’ is all well and good, but in the end, it’s the bottom line that matters. a customer who has to call support is less likely to buy the next product than a customer who just has everything work right, out of the box, hacks and all.
not to mention that the customer who just has everything work is actually happier…
I guess it’s the general principle of ‘if one driver does it, maybe others do.’ You can’t catch ALL the driver developers that do this damn stupid thing.
Ah, but what about WHQL, you say? Unfortunately, you can’t force driver developers to go through WHQL (except for inclusion on the Windows distribution media), and some manufacturers (video chips beginning with A and N spring to mind) appear to actively work around WHQL. A previous version of the drivers for my video card always crashed the system (with no bugcheck) if the Pocket PC 2002 Emulator was run. Running the same driver with Driver Verifier enabled worked with no problems. My guess is that they somehow detected that Verifier was enabled and throttled back on some operations.
It doesn’t help that MS requiring digital signatures for drivers is seen as Big Brotherish in some quarters (e.g. Slashdot, The Inquirer, other gutter IT press).
As for GUIDs, Windows 2000 and later generate version 4 GUIDs, which simply use CryptGenRandom to generate a 128-bit random number. Six bits are fixed to mark the version and variant, so there's roughly a 1 in 2^122 chance that another randomly generated GUID will match a given one. This is pretty unlikely: the 1 in 76,000,000 chance of winning the new Euro Lottery jackpot is better than 1 in 2^27.
See http://www.codeproject.com/netcf/PPCGuidGen.asp for more information on GUID generation.
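For the curious, here is a rough sketch of the version 4 layout the comment above describes: 122 random bits plus six fixed version/variant bits. This is only an illustration of the format, not how UuidCreate actually does it internally.

// Illustration of the version 4 GUID layout - not the UuidCreate implementation.
#include <windows.h>
#include <wincrypt.h>

BOOL MakeRandomGuid(GUID* pguid)
{
    HCRYPTPROV hProv;
    if (!CryptAcquireContext(&hProv, NULL, NULL, PROV_RSA_FULL, CRYPT_VERIFYCONTEXT))
        return FALSE;
    // Fill all 16 bytes with cryptographically random data.
    BOOL fOk = CryptGenRandom(hProv, sizeof(*pguid), reinterpret_cast<BYTE*>(pguid));
    CryptReleaseContext(hProv, 0);
    if (!fOk)
        return FALSE;

    // Stamp the six non-random bits:
    // top four bits of Data3 = 0100 (version 4) ...
    pguid->Data3 = static_cast<USHORT>((pguid->Data3 & 0x0FFF) | 0x4000);
    // ... and top two bits of Data4[0] = 10 (the RFC 4122 variant).
    pguid->Data4[0] = static_cast<BYTE>((pguid->Data4[0] & 0x3F) | 0x80);
    return TRUE;
}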
oh, one other thing: i remember that in the days when GUID generation was tied to NICs, the "fake" NIC used by dial-up networking would sometimes get picked up as the MAC address to use. so look at CLSID_DirectInput, for example — the guid ends in 0x444553540000, which is "DEST ", which was the fake MAC address (IIRC). so there are GUIDs sprinkled ALL through the platform SDK ending in 444553540000 — grepping, i see stuff from comdef, dinput, mmreg, ksmedia, recguids, shldisp and others.
so there were a bunch of "GUIDs" generated that had a major part of the uniqueness algorithm being non-unique….
The special GUID can’t occur because it’s a version 1 GUID, not a version 4 GUID as the randomly-generated ones are.
3 comments:
1. I think MS is going overboard with GUIDs. Why use a GUID for everything? What's wrong with using a simple integer #define in this case?
2. >>The driver must have been written by the sales department.
This is a good one.
3. Why generate a random GUID at run-time? Can't you just define a sure-fail GUID, like GUID_Give_Me_Winning_Lottery_#, and then query the driver for that? If it says yes, then you caught it. No need to waste a network card.
This is EXACTLY the same technique we used to generate the GUID for the item level properties ACEs in Exchange – we took a network card and destroyed it.
Btw, UuidCreate doesn't encode the MAC address directly in the GUID – instead it runs the 128-bit GUID through MD5, and returns that as the result to remove any personally identifiable information in it. As a result, UUIDs in COM aren't actually universally unique.
UuidCreateSequential uses the old algorithm, but it can disclose personal information.
Andreas, please see:
http://hegel.ittc.ukans.edu/topics/internet/internet-drafts/draft-l/draft-leach-uuids-guids-01.txt
for a discussion of how UUIDs are generated.
Great story, Raymond! I keep telling you, you need to compile these anecdotes into a book! :)
B.Y.: Why use GUID for everything? Because if you used integers, then people would have to come to Microsoft and get an integer assigned to them to ensure that it was unique.
For example, in Windows 3.1, each driver needed a unique integer and you had to ask Microsoft to assign you one. Lots of people got lazy and just "made up a number" without obtaining one from Microsoft. Then the inevitable collisions started happening…
Not to mention the conspiracy theories that emerge from having to get integers from Microsoft. "Microsoft gets the inside track on all new technologies. Microsoft now can steal your ideas before you even go to market!"
It’s funny someone mentioned GUIDs ending in 0x444553540000. I just had to un-install the Macromedia Flash player from my system to do some testing, and its GUID ends in 0x444553540000 as well. So I guess Microsoft isn’t the only company that let that get out into the world.
(Look in your registry under ShockwaveFlash)
MAC addresses are, unfortunately, not guaranteed to be unique in the real world. Many ethernet cards allow you to set your MAC address to whatever you like. Software that relies on your ethernet card having a guaranteed unique MAC address is therefore flawed, because this is not a safe assumption in the real world. It would definitely result in a security vulnerability if you based your security around this at all (which is, I think, not the case in this article).
destroy the network card?
i know on windows it isn’t so common, but in the real world spoofing ethernet mac addrs is a common feature of drivers. why destroy the nic? i hope at least it was a stupid old isa nic or something…
This blog is hilarious! Very informative postings with a sense of humor.
Instead of using difficult-to-remember GUIDs or easy-to-conflict integers, why not use a namespace of dotted strings? Something like DoesDriverSupport("com.microsoft.directx.capability.CanStretchAlpha")?
btw, here is the Flash Player’s GUID: {D27CDB6E-AE6D-11cf-96B8-444553540000}
:-)
runtime: Strings can’t be embedded into structures (unless you decide on a maximum length) and they are harder to compare than GUIDs, and you have cultural issues too. For example, the string U+00E9 (lowercase e with acute accent) and the string U+0065 U+0301 (lowercase e followed by combining acute accent) are the same thing, but a naive strcmp would declare them different.
One of my biggest mistakes was using a string-based notation for DEV_BROADCAST_USERDEFINED instead of a GUID.
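A small illustration of the comparison pitfall Raymond describes above, as a sketch assuming a Windows build environment: an ordinal comparison sees the precomposed and decomposed forms of "é" as different strings, while a linguistic comparison such as CompareStringW should treat the two canonically equivalent forms as equal.

// Sketch: ordinal vs. linguistic comparison of canonically equivalent strings.
#include <windows.h>
#include <wchar.h>
#include <stdio.h>

int main()
{
    const wchar_t precomposed[] = L"\u00E9";   // U+00E9: e with acute accent
    const wchar_t decomposed[]  = L"e\u0301";  // U+0065 + U+0301 combining acute

    // Ordinal (code-unit) comparison: the strings differ.
    printf("wcscmp says: %s\n",
           wcscmp(precomposed, decomposed) == 0 ? "equal" : "different");

    // Linguistic comparison: canonically equivalent strings should compare equal.
    int result = CompareStringW(LOCALE_USER_DEFAULT, 0,
                                precomposed, -1, decomposed, -1);
    printf("CompareStringW says: %s\n",
           result == CSTR_EQUAL ? "equal" : "different");
    return 0;
}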
"It doesn’t help that MS requiring digital signatures for drivers is seen as Big Brotherish in some quarters"
It doesn’t worry me that drivers should be signed. It worries me that an entity whose interests may not align with those seeking driver signing controls the process. If it were done by a neutral party, I’m sure fewer people would have a problem with it.
I’m not saying anything has happened. It may well never happen. I just don’t see why we should take the chance.
My biggest gripe with driver signing is that it seems to do nothing for quality control. I’ve downloaded some nvidia graphics drivers that are certified and have been a lot worse than random nvidia latest-build drivers. Similarly, I’ve downloaded a hard disk controller driver from Windows Update that slowed my system so much (over 10 minutes to boot!) that I had to immediately roll back my system to before I installed it.
When Microsoft is signing (and distributing!) crap like that, it makes me question what benefit driver signing is giving me.
2/11/2004 8:17 AM Mike Dimmick
> Ah, but what about WHQL, you say?
> Unfortunately, you can’t force driver
> developers to go through WHQL (except for
> inclusion on the Windows distribution media),
What difference does it make if a driver goes through WHQL or not, or is included on the Windows distribution media or not? I’ve used two blue-screening video drivers that were included in the Windows 2000 distribution CD with Microsoft listed as the provider and digital signatures from Microsoft, and one crashing (though not blue-screening) driver from Microsoft’s Windows Update for Windows XP.
Regarding the non-uniqueness of MAC addresses, spoofing is probably not the only reason. I’ve read a rumor that some hardware manufacturers made more LAN cards than they had MAC addresses for, and intentionally reused MAC addresses. Unfortunately no name was specified in the rumor. But surely it is not inconceivable — when famous software companies can show their contempt for standards and distribute products that don’t comply, why not hardware companies?
We had some (very cheap) ethernet cards back around 1994 that did not have unique MAC addresses.
Figuring that one out was, ummm, amusing.
Very odd behaviour when you only get some of the ethernet packets destined for your card.
It is details like this, repeated over and over again in the codebase, that have so entrenched Windows — Sure, we do that…
OK, didn’t know you could define your own capabilities. I thought it was a list of predefined ones.
I don’t suppose the company that made this wonderful display driver went on to create video compression drivers? I had a run-in with a video codec recently that always answered "yes" to any decompression query… Decompress 41-bit RGB to 17-bit RGB? Sure, we do that!
One of the reasons that Visual Studio .NET and 2003 are a backward step (from VS6) in terms of source code control provider support (via SCC interface), is that the VS developer decided NOT to believe what the SCC providers told them about capabilities of any particular product.
Instead, VS does things like create temp project, add a temp file, delete a temp file etc. As you might imagine, this slows down the whole interface. Indeed, various providers had to implement code to detect this was happening and not really add files (in some SCM tools all actions are recorded).
I hope this is fixed in Whidbey.
At present, Microsoft signs the drivers based on evidence presented by the manufacturer’s own testing, using log files generated by the WHQL test suite. They don’t actually perform their own testing.
However, if a driver is WHQL signed, there’s some evidence that the manufacturer has actually performed some testing on the driver, rather than just shipping it. I prefer to use drivers that have been through the WHQL process.
The WHQL tests aren’t exhaustive, however. They don’t include profiling – ensuring that the drivers are ‘quick enough’, whatever that might mean – and they cannot catch every possible violation, only the things that WHQL thought of (presumably including any issues reported to MS).
Faulty drivers are, IMO, the #2 cause of the bad reputation Windows has for reliability. The #1 cause is faulty hardware.
More information on WHQL at http://www.microsoft.com/whdc/hwtest/default.mspx.
WHQL is pointless because it’s not a requirement, and if drivers are certified they don’t have to be on Windows Update, so they’re hard to find.
Nobody buys NVidia shit anymore, we all moved to ATI for a reason.
NV = the Dead, and they’re cheaters (con artists) in order to get that better benchmark.
WHQL should require certified drivers to be on Windows Update or no cert for them. WHQL should also not permit any cheating in drivers for benchmarking improvements.
Actually, I got an nVidia card this year because I’d had a previous bad experience with an ATI (bad RAM on the card) and friends had had bad experiences with the Matrox G200 and later cards. My old Matrox Millennium II was actually very reliable.
Frankly they’re as bad as each other. I was going to say ‘cutting corners’ but that implies saving money or time. Instead they appear to be investing money and time in deliberately circumventing the driver model to get fractions of extra performance.
The latest build of nVidia’s drivers seems to be OK on this Riva TNT2 system, but an earlier version suffered from corrupted icons and toolbar buttons, where the correct mask was used but the wrong image. Downgrading back to the versions on the XP CD fixed the problem. A colleague with a RADEON card (can’t remember the model) had problems with context menus not appearing until mousing over them (each item appeared as you moved the pointer over it) if the Fade menu transition effect was selected. That seems to have gone in the latest build, but now he occasionally gets stray window frames on the second monitor.
It’s currently being reported that drivers for AMD64 are few and far between. If the manufacturers were following the DDK and the HAL’s model, building an AMD64 or any 64-bit driver would consist of modifying the places that make assumptions about pointer sizes and recompiling. However, few driver manufacturers appear to have bothered to get their drivers to work properly in Physical Address Extension (PAE) mode on x86, which presents a 64-bit physical address to the driver, or on non-PC systems with non-standard HALs (where simply grabbing a physical memory address for DMA doesn’t work). A number have problems with 64-bit PCI and PCI-X.
At this stage I must confess to being mostly a GUI programmer, on Pocket PCs a lot of the time. I’ve simply picked this stuff up from other resources, such as The NT Insider (http://www.osronline.com).
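A contrived illustration of the pointer-size assumption mentioned in the comment above (not taken from any real driver): stuffing a pointer into a 32-bit integer survives an x86 build but silently truncates addresses when recompiled for AMD64, whereas the pointer-sized integer types do not.

#include <windows.h>

void RememberBuffer(void* pBuffer)
{
    // Bug: assumes a pointer fits in 32 bits. "Works" on x86,
    // silently truncates the address in a 64-bit (e.g. AMD64) build.
    DWORD dwCookieBad = (DWORD)(ULONG_PTR)pBuffer;

    // Fix: ULONG_PTR / DWORD_PTR grow with the pointer size, so the same
    // source recompiles cleanly for 64-bit targets.
    ULONG_PTR cookieGood = (ULONG_PTR)pBuffer;

    (void)dwCookieBad;
    (void)cookieGood;
}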
5x dets = teh sshit.
I’m on 4x Detonators or something or other for stability.
They’re putting too much crap into the drivers, i.e. BLOAT and FEATURE CREEP. Before I had an ATI pre GeForce, then I got a GF2, then GF4, but the FX, no way would I touch that hair dryer.
They blew it.
A market needs at least 3 players to drive prices down and compete.
I plan to jump to ATI for future PCI Express cards. I’m willing to give them a chance again.
On that point their NForce chipset is OK, except I won’t trust my data to their RAID chips.
NVidia also had a hoopla with DX9 specs.
"Jack": Please choose a less offensive handle, please don’t misrepresent yourself as a Microsoft employee, and please curb the foul language. You’re free to voice your opinion, but be civil about it. Otherwise I’m going to have to delete your comments.
2/12/2004 3:24 AM Mike Dimmick:
> At present, Microsoft signs the drivers
> based on evidence presented by the
> manufacturer’s own testing,
And Microsoft identifies itself as the provider, in the .inf files and in the panels that are displayed when you ask for driver information. Microsoft is asking for the blame. You say that Microsoft wasn’t responsible for all of these bad drivers, but in the cases that I’ve observed, Microsoft said they were.
> Faulty drivers are, IMO, the #2 cause of the
> bad reputation Windows has for reliability.
Agreed. But we don’t agree on #1.
#1 is Windows causing deletion of massive amounts of data on hard disks. When Windows 95 fdisk was used as directed to create partitions, it created overlapping partitions and the time bomb effect was obvious to everyone except Microsoft. Microsoft eventually released a patch for IDE disks for Windows 95 A, but Microsoft never released a patch for SCSI disks and Microsoft never released a patch for Windows 95 B. Windows 95 isn’t the only Windows product that causes massive destruction of data, it was just the worst.
XP still has problems handling disks, Microsoft has developed patches for some of the problems, and Knowledge Base articles aren’t letting customers download patches unless we pay for support calls. (The KB articles say that fees can be canceled in some situations, but the situations aren’t broad enough. We’ve discussed this elsewhere in Mr. Chen’s blog.)
These failures are not the bad hardware that you called #1; they only start out looking like bad hardware. Change to another vendor’s SCSI disk, change to another vendor’s PCMCIA-SCSI adapter, change to another vendor’s notebook PC, change language versions of Windows 95, change between Windows 95 A and Windows 95 B, and see that the result always comes up the same. Get a dump of the partition table and extended partition chain and you’ll see the overlapping logical drives. This is Windows.
2/12/2004 4:42 AM Jack Mayhoff:
> Nobody buys NVidia shit anymore, we all
> moved to ATI for a reason.
Your ATI and my ATI must be on different planets. A big manufacturer used ATI chips in desktop PCs, which actually really underwent some degree of testing by subcontractors to the big manufacturer, though apparently not by ATI or Microsoft. We were getting new drivers more than once a week, some of them certified and digitally signed but still not working. The PC manufacturer sent someone to Canada to work together with ATI and they still couldn’t make it work properly.
And then I bought a notebook PC with an ATI chip, because the PC was highly portable, small size and batteries that ran for 5 hours when new, and the price was cheap. The ATI chip was every bit as regrettable as I expected, but the PC serves most of its expected purpose.
Norman,
If you call MS support for a hotfix you will not be charged for the support call.
I promise.
I know because I’m one of the Microsoft folks who sends out hotfixes, and we ALWAYS ensure that the customer was either not charged for the call, or that we refund the customer’s money.
(Note that this is only true if the entire support case is to get a hotfix. If you’re sent a hotfix and require further troubleshooting charges may apply.)
I am glad I am not a coder for Microsoft, and not for any of the usual "M$ SuXX0Rz!" reasons you see floating around… http://weblogs.asp.net/oldnewthing/archive/2004/02/11/71307.aspx…
So, what’s the net effect of failing that test? Does that DirectX layer assume that the PC has an old VGA card, and emulate everything? (I guess that would be a big motivation for the video card company to fix their driver)
This is from memory, but I think it just assumes FALSE to any query for optional features. (So the driver does run, but none of its enhancements work.)
2/13/2004 10:33 AM Matthew:
> If you call MS support for a hotfix you will
> not be charged for the support call.
>
> I promise.
Matthew’s promise was dishonored by his colleague in Microsoft Japan.
Matthew and I had some correspondence by e-mail. Temporarily I was encouraged by the assertion that a telephone call to Microsoft Japan would bring different results than web pages of Microsoft US and Japan, previous e-mail correspondence with Microsoft US, and paper mail to Microsoft US including registered letters. But when I made the call, the result was the same. Microsoft Japan’s support personnel absolutely refused to take a request for hotfixes without prior payment for a support call. He did not even let me give the Knowledge Base article numbers (or Windows product codes, of which two include OEM but that shouldn’t matter for hotfixes right?).
I wasn’t charged for the call — because I refused to pay a fee to open a support case for the hotfixes. I didn’t get the hotfixes.
Regarding the possibility of getting hotfixes through unofficial channels, well, consider how many millions of customers need the same hotfixes and deserve access to the same channels. I haven’t decided yet what to think of the possibility.
On the matter of bugs for which Microsoft does not yet have hotfixes, even Matthew confirmed that a support call with advance payment is required. Sometimes Microsoft says that the advance payment can be refunded if Microsoft decides that it was Microsoft’s fault. So I asked: Elsewhere in Mr. Chen’s blog, Mr. Chen seemed agreeable to the idea that a Microsoft bug was a Microsoft bug even though Mr. Chen couldn’t reproduce it. I asked Matthew if this would result in half of a refund.
Of course most of this isn’t Matthew’s fault, but the facts are still the facts.
(By the way I usually use "Mr." or "Ms." and family names, but don’t want to take a chance on misremembering Matthew’s family name.)
PingBack from http://dewb.wordpress.com/2005/09/25/using-a-hammer-to-guarantee-a-mathematical-result/
PingBack from http://p10.hostingprod.com/@asciiarmor.com/blog/2005/01/04/javautiluuid-mini-faq/
PingBack from http://blog.tomtebo.org/2004/02/13/the_driver_must_have_been_written_by_the_sales_department/