Why not just block the apps that rely on undocumented behavior?

Date: December 24, 2003 / year-entry #176
Tags: history
Orig Link: https://blogs.msdn.microsoft.com/oldnewthing/20031224-00/?p=41363
Comments: 47
Summary: Because every app that gets blocked is another reason for people not to upgrade to the next version of Windows. Look at all these programs that would have stopped working when you upgraded from Windows 3.0 to Windows 3.1. HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Compatibility Actually, this list is only partial. Many times, the compatibility fix is made inside...

Because every app that gets blocked is another reason for people not to upgrade to the next version of Windows. Look at all these programs that would have stopped working when you upgraded from Windows 3.0 to Windows 3.1.

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Compatibility

Actually, this list is only partial. Many times, the compatibility fix is made inside the core component for all programs rather than targeting a specific program, as this list does.
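
(As a minimal illustration, not part of the original article: the little C sketch below enumerates the per-module entries under that key. It prints only each value's name, type, and size, since the flag format itself isn't described here.)

    #include <windows.h>
    #include <stdio.h>

    /* Minimal sketch: list the per-module entries under the
       compatibility key mentioned above.  Error handling kept short. */
    int main(void)
    {
        HKEY key;
        LONG rc = RegOpenKeyExA(HKEY_LOCAL_MACHINE,
            "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Compatibility",
            0, KEY_READ, &key);
        if (rc != ERROR_SUCCESS) {
            fprintf(stderr, "RegOpenKeyExA failed: %ld\n", rc);
            return 1;
        }

        for (DWORD i = 0;; i++) {
            char  name[256];
            DWORD nameLen = sizeof(name), dataLen = 0, type;
            /* Passing NULL for the data buffer just reports each value's size. */
            rc = RegEnumValueA(key, i, name, &nameLen, NULL, &type, NULL, &dataLen);
            if (rc != ERROR_SUCCESS) break;   /* ERROR_NO_MORE_ITEMS ends the loop */
            printf("%-24s type=%lu size=%lu\n",
                   name, (unsigned long)type, (unsigned long)dataLen);
        }

        RegCloseKey(key);
        return 0;
    }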

(The Windows 2000-to-Windows XP list is stored in your C:\WINDOWS\AppPatch directory, in a binary format to permit rapid scanning. Sorry, you won't be able to browse it easily. I think the Application Compatibility Toolkit includes a viewer, but I may be mistaken.)

Would you have bought Windows XP if you knew that all these programs were incompatible?

It takes only one incompatible program to sour an upgrade.

Suppose you're the IT manager of some company. Your company uses Program X for its word processor and you find that Program X is incompatible with Windows XP for whatever reason. Would you upgrade?

Of course not! Your business would grind to a halt.

"Why not call Company X and ask them for an upgrade?"

Sure, you could do that, and the answer might be, "Oh, you're using Version 1.0 of Program X. You need to upgrade to Version 2.0 for $150 per copy." Congratulations, the cost of upgrading to Windows XP just tripled.

And that's if you're lucky and Company X is still in business.

I recall a survey taken a few years ago by our Setup/Upgrade team of corporations using Windows. Pretty much every single one has at least one "deal-breaker" program, a program which Windows absolutely must support or they won't upgrade. In a high percentage of the cases, the program in question was developed by their in-house programming staff, and it's written in Visual Basic (sometimes even 16-bit Visual Basic), and the person who wrote it doesn't work there any more. In some cases, they don't even have the source code any more.

And it's not just corporate customers. This affects consumers too.

For Windows 95, my application compatibility work focused on games. Games are the most important factor behind consumer technology. The video card that comes with a typical computer has gotten better over time because games demand it. (Outlook certainly doesn't care that your card can do 20 bajillion triangles a second.) And if your game doesn't run on the newest version of Windows, you aren't going to upgrade.

Anyway, game vendors are very much like those major corporations. I made phone call after phone call to the game vendors trying to help them get their game to run under Windows 95. To a one, they didn't care. A game has a shelf life of a few months, and then it's gone. Why would they bother to issue a patch for their program to run under Windows 95? They already got their money. They're not going to make any more off that game; its three months are over. The vendors would slipstream patches and lose track of how many versions of their program were out there and how many of them had a particular problem. Sometimes they wouldn't even have the source code any more.

They simply didn't care that their program didn't run on Windows 95. (My favorite was the one that tried to walk me through creating a DOS boot disk.)

Oh, and that Application Compatibility Toolkit I mentioned above. It's a great tool for developers, too. One of the components is the Verifier: If you run your program under the verifier, it will monitor hundreds of API calls and break into the debugger when you do something wrong. (Like close a handle twice or allocate memory with GlobalAlloc but free it with LocalAlloc.)
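
(As an illustration, not from the original article, here is a minimal sketch of the kinds of mistakes described above. The bugs are deliberate; run the program under the Verifier with a debugger attached and each one should trigger a stop, whereas without the Verifier they typically go unnoticed.)

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Deliberate mistake #1: closing the same handle twice. */
        HANDLE h = CreateEventW(NULL, FALSE, FALSE, NULL);
        if (h != NULL) {
            CloseHandle(h);
            CloseHandle(h);               /* double close */
        }

        /* Deliberate mistake #2: allocating with GlobalAlloc
           but freeing with LocalFree. */
        HGLOBAL mem = GlobalAlloc(GMEM_FIXED, 64);
        if (mem != NULL) {
            LocalFree((HLOCAL)mem);       /* mismatched allocator */
        }

        puts("Done - without the Verifier these bugs usually go unnoticed.");
        return 0;
    }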

The new application compatibility architecture in Windows XP carries with it one major benefit (from an OS development perspective): See all those DLLs in your C:\WINDOWS\AppPatch directory? That's where many of the compatibility changes live now. The compatibility workarounds no longer sully the core OS files. (Not all classes of compatibility workarounds can be offloaded to a compatibility DLL, but it's a big help.)


Comments (47)
  1. asdf says:

    "Outlook certainly doesn’t care that your card can do 20 bajillion triangles a second". I want to see if that statement still holds in 2010.

    Is the verifier a stand-alone local machine app or do I have to run that stuff over a serial line to another computer?

  2. Phaeron says:

    People are already becoming indifferent to triangle speeds — TnL is so fast in general now that the video card manufacturers are putting more emphasis on shader capabilities and overall quality/frame rate. This is good. I think the main reason why 3D hasn’t yet taken off for non-gaming purposes is that most video drivers blue screen the machine every time you try to do something in a non-game fashion.

    I sympathize with the amount of work AppCompat involves. I know some people who wrote commercial games around the late Windows 3.1 era, when Win32 and especially Win32s had just appeared. Some of the 3.1 games they wrote later conflicted badly with Windows 95 by doing things such as, oh… reassociating the SCR and LNK filename extensions. Nowadays getting a patch out the door requires the whole nine yards with regards to in-house compatibility and regression testing, which is why it’s so hard to get a patch for older games on newer OSes. People are NOT happy when a released patch creates more bugs than it fixes.

    What I wonder, though, is how the AppCompat group deals with copy protection. It must be really annoying to figure out how to patch a SafeDisc’ed or otherwise "protected" application. Some of them go so far as to partially disassemble and rethread prologs of system entry points at runtime!

  3. Raymond Chen says:

    The Verifier runs on the machine being debugged (so you don’t need a serial cable or anything). I strongly recommend it to everybody.

  4. Dan Maas says:

    Raymond’s story is a really good example of why "low quality rules" in the software market. I don’t mean that in a derogatory way!

    What I mean is, Microsoft could lean on vendors to increase the quality of their products by removing sketchy code. Or, Microsoft could increase the "quality" of Windows (as measured by software purists) by omitting these compatibility hacks. But both of these options lead to worse outcomes, as Raymond explains. Thus, low quality rules.

    (send that to the next clueless journalist who laments the bugginess of most consumer software in writing…)

  5. Christian says:

    I very much appreciate all the work you and others do to make new Windows versions work.

    The Application Compatibility Toolkit is a great thing (especially LUARedirectFS for those stupid programs that only run as administrator) and it also feels absolutely right: have the ugly compatibility glue inside a separate component; for Windows XP Embedded you can even strip it away.

    I wanted to say that the company I work for always uses the newest version of Office and Windows and that I radically try to convince my boss that we can’t use software or hardware that comes with idiotic custom setup systems (which are not scriptable), with programs that only work as admin, or with software that has an extremely bloated design (like UPS’s software that wants to install a strange database system on the computer. We use their webpage instead, and I called their support often and angrily until someone told me how to use all the necessary features web-based only).

    My boss also wants desktops to be locked down (no admin access) so he supports these things as far as possible and we are now running a system where each computer can be wiped and reinstalled in only 1 hour fully automated and no one has admin-pws.

    If it weren’t for my commitment to this project, much of the software we use would have stopped the project.

    But by upgrading to new versions (e.g. Corel Draw), messing with the setups, messing with components of the software, and using the Application Compatibility Toolkit, it works now!

    I so much wish that every customer would make one angry phone call a day to every company that doesn’t support limited user access, so that these companies suffer as much as possible.

    I hate them so much that I think boycotting is not enough.

    And whenever possible we use software from Microsoft (or open source software). I wish that e.g. MS had a widely accepted replacement for Acrobat Reader.

    It’s great that XP contains stuff like Media Player, fax, or scanning software out of the box so that we are not so often forced to throw money at a company which only focuses on their product and feature bloat instead of things like security.

    What I hate most is InstallShield: They took the MS-Installer-technology and put their stupid scripting engine inside. So you have the disadvantages of both worlds.

    Administrative installs would be supported by MSI, but with InstallShield you only get a "please run setup.exe instead of the msi-file".

    And please try to keep MS from making the same mistakes that others make: e.g. more custom setup systems, using undocumented features, placing icons in too prominent places, and so on…

  6. Peter Torr says:

    I just lost a rather large comment with a "the viewstate may be invalid" error. I’ll write it again, but in a much shorter way <grrr>.

    Basically I was wondering how the OSS community will deal with this. In theory the problem will be worse because it’s much easier for people to rely on OS behaviour (who needs a debugger when you can view the source?) and they can even "tweak" core parts of it to make their own flavour of the OS (or application) to meet their needs. How do you patch / upgrade a system that is essentially in an unknown state?

    Then I said something about how my group (VBA / ActiveScript / VSA) tries to address this with customisation technologies, but it’s still really hard because then you have to make sure your OM doesn’t break between versions (like, say, Office 97 to Office 2000). It’s gotta be orders of magnitude easier than dealing with the "customers made arbitrary source code changes" mode of customisation though.

  7. TRS-80 says:

    Re OSS, I think you’ll find that these sorts of hacks are very uncommon, and when they do exist, they’re less likely to break. They’re uncommon because if you have the source, you can add the required functionality yourself and submit the patch upstream (or just distribute your own version of the library/program/whatever with your program), or as you say, you can just look at the source yourself. They’re less likely to break because of versioned libraries (yes, versioning does exist under Win32 and COM, but it’s less well done) and you still have the source and so can check out the problem and fix it.

    As for patching/upgrading a system in an unknown state, you simply reinstall the entire application over its old files. If a program has "tweaked" parts of the OS, it should just have its own version of those files. Anyway, because the application is open source, you can simply allow people to just download the whole new version (whereas commercial software obviously can’t do that for patches) and install it.

  8. Raymond Chen says:

    Okay, so consider somebody who, say, writes their own ext3 defragmenter. In order to do this, they decide that they need to access some internal data structure, so they add a new syscall, "get_open_file_mapping_table" that returns the internal structure and include a custom ext3 driver with their defrag program.

    Suppose later the maintainers of ext3 want to change that internal structure (say, to support remote file shadowing). What do they do?

    "You can simply allow people to just download the whole new version" – so every time you upgrade you might have to re-download all of the programs that you had downloaded for the previous version of linux?

    And what if the person that wrote the ext3 defragmenter wants to sell their program for money instead of being open source?

  9. Peter Torr says:

    TRS, the problem is *exactly* that you can look at the source. Rather than relying on the "black box" approach to APIs (this is its documented interface with acceptable inputs and expected outputs) you turn it into a "white box" approach where you figure out that if you set such-and-such magic bit, or internal data structure, you can make it do something special.

    If the change is localised to your application, you can’t submit the changes upstream because the change isn’t in the OS, it’s in your app. Even if you change the OS, what’s the likelihood that your special-case hack is going to be accepted by the maintainer? Who does the compatibility testing on it for all the existing apps in the world?

    If you build your own custom version of the library, you have to worry about incorporating and re-shipping security patches, etc. Anyone remember the double-free bug in the GZIP libraries, and the umpteen million places that had to be fixed because every vendor had their own "flavour" of GZIP baked into their own custom libraries?

    Finally, people DO NOT want to re-write their apps when they upgrade their OS, just as Raymond says they don’t want to re-purchase them. It’s too risky, too expensive, and too time consuming. Then you’ve got stuff like WebSphere, which I’m pretty sure IBM doesn’t give away ;-)

    I really see this as a problem for the long-term success of OSS software. Once it matures and really takes hold in enterprises, we’ll see what happens. I already hear there is some reluctance to move to the 2.6 kernel…

  10. Michael says:

    Phaeron: Copy protection causes at least three different problems.

    First, copy protected applications usually detect if a debugger is running and then crash. This means it can be extremely difficult to find out why the application isn’t working.

    Second, sometimes the copy protection itself fails on a new OS, because, like you said, some of them mess with the prologs of APIs, which can change when we switch compilers. Other copy protection schemes depend on undocumented OS behavior, for example, what the system does when read failures occur on a CD.

    Finally, there are some copy protection schemes that actually require a system driver. This can cause problems if you’re trying to run the application under a limited user account.

  11. Dan Maas says:

    Regarding the hypothetical ext3 defragmenter: you’d have a LOT of trouble getting Linus to include a kernel patch that basically exports internal filesystem structures to user space. It’s against the design philosophy of UNIX. If Linux DID include such an interface, they’d make sure it stays binary-compatible forever. It’s a cardinal sin for the kernel to break binary compatibility with user-space.

    And yes, library version management in the OSS world is *horrible*. Many times libraries make backwards- or forwards-incompatible changes WITHOUT changing the library’s name or version number. (embarrassingly, this happens often with the standard C library and GCC’s compiler libraries…). One time the glibc maintainers introduced a change that even broke applications that were 100% statically linked!

    It’s also common in the OSS world for people to deliberately look for ways to "clean up" public APIs, knowing full well it will break client applications.

    It’s up to the Linux distributor to clean up the resulting mess. Most common distros like Redhat don’t do a very good job – you often are forced to update tens of libraries just to get the latest version of some program to run. Debian is much better in this regard – their commitment to compatibility is on par with Microsoft’s.

    (going back to the defragmenter- nobody in the UNIX world really seems to want one, they figure it’s easier just to transfer the filesystem to a fresh disk. If you wanted off-line defragmentation, you would do it by un-mounting the filesystem and then have a user-space program that reads and writes directly on the block device, without kernel intervention. Ideally of course, you design filesystems that don’t NEED defragmentation :)

  12. Dan Maas says:

    I just want to clarify my first two paragraphs since they may seem to contradict each other. The kernel and user-space libraries are totally different animals in the Linux world. You have to differentiate because they are maintained by separate groups.

    The kernel itself does an excellent job with compatibility. Libraries vary, but most don’t do very well. Unfortunately some of the core system libraries are among the worst. (glibc, libgcc, Berkeley DB, etc)

    (there are noises being made about distributions of Linux using C libraries other than glibc, partly for this reason)

  13. Raymond Chen says:

    (I didn’t mean to pick on defragmenters; it was just an interesting class of programs that often is at odds with the filesystem designers.)

    So suppose some company wanted to access those internal structures (to do their hypothetical ext3 defragmenter) but Linus won’t let them. What do they do? Do they write a driver that grovels into kernel space to find the structures? Do they just say, "Oh well, I guess we can’t do this" and move on?

    I’m not trying to pick on linux. I’m genuinely curious how these app/driver compat nightmares are handled in the linux world. (Last time I hacked on linux it was 0.99pl13, that’s how long I’ve been away from it.)

  14. asdf says:

    (I could be wrong but) I don’t think 64 bit linux (itanium and amd) can run 32 bit (x86) apps. Maybe that would give you an idea about what linux people would do when something drastically changes and causes old apps not to work. Windows even uses the LLP64 model (ptrs and long longs are 64 bit) as opposed to LP64 (longs, ptrs, and long longs are 64 bit) on linux, though I’m not sure what that is supposed to say about backwards compatibility (I personally like ILP64 myself).

  15. Dan Maas says:

    App/driver compatibility in Linux generally goes like this: for important "core" interfaces, like the interfaces for implementing and using Ethernet drivers, there is always a published API. The kernel<->application API is kept binary-compatible basically forever (versioned interfaces are used if necessary). The kernel<->driver API is much less stable, but the kernel maintainers will mostly keep your driver in sync, if it is part of the mainline kernel. (I wrote a driver for Linux 2.4 that was carried forward to Linux 2.6 without any effort on my part, since it is distributed with the mainline kernel).

    Things get a little less stable in the more "dimly lit" areas of functionality like drivers for exotic hardware. Without the glaring eyes of thousands of core kernel developers, compatibility mistakes sometimes slip through. (I’m guilty of breaking the kernel<->application binary API of my isochronous FireWire driver…)

    Going back to defragmentation- you’d have a very hard time making a successful product that grovels through internal Linux kernel structures. You’d have to get Linus to agree to implement an API for it (unlikely unless it’s generic, elegant, stable, and portable).

    Remember Linus doesn’t accept code into the kernel based on what features users want – it’s not marketing-driven. It’s all about what Linus believes is best for the kernel. Commercial vendors in the Linux world understand and work with this. (it took a very long time for SGI to convince Linus to accept their port of the XFS filesystem to Linux, since it required some extensive changes to core kernel code)

    You *could* write a kernel module (which is like a DLL for the kernel) that does anything you want, but it would be tied to a specific kernel version. Users don’t like that. If your code isn’t included in the mainline Linus kernel, it will never be widely adopted, and you won’t get the benefits of automatic maintenance to keep it in sync with kernel changes.

  16. Dan Maas says:

    On the user-space library side of things, the way compatibility is *supposed* to work is that libraries have two version numbers – a "major" API version and an incremental "minor" version. If you make a backwards-compatible change to a library, you increment the "minor" version but use the same "major" version. You only change the "major" version if the API changes in a backwards-incompatible way.

    On a typical Linux system you’ll see libraries like "/usr/lib/libz.so.1.1.3". The major version is 1, the minor version is 1.3 (why they use two digits for minor versions, I don’t know). Any application compiled against libz major version 1 should work with any libz library with major version 1.

    But – and this is a big but – there is nothing holding developers to this system. Some library maintainers introduce incompatible changes without changing the major version number. Or, they change the major version number so often that getting all your applications to use the same library is impossible.
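
    (Not part of the original comment, just a minimal sketch of how that binding looks in practice, assuming a Linux system with zlib installed: the program asks only for the major version and lets the dynamic linker supply whatever minor release is present. Build with something like cc this_file.c -ldl.)

        #include <stdio.h>
        #include <dlfcn.h>

        int main(void)
        {
            /* Ask for major version 1 only; the loader supplies whatever
               1.x minor release is installed (1.1.3, 1.2.x, ...). */
            void *lib = dlopen("libz.so.1", RTLD_NOW);
            if (!lib) {
                fprintf(stderr, "dlopen: %s\n", dlerror());
                return 1;
            }

            const char *(*version)(void) =
                (const char *(*)(void))dlsym(lib, "zlibVersion");
            if (version)
                printf("loaded zlib %s via libz.so.1\n", version());

            dlclose(lib);
            return 0;
        }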

    Personally, I think dynamic linking is generally overrated :).

  17. Raymond Chen says:

    So basically if your program/driver grovels into internal structures, and you can’t convince Linus to make the structures/interfaces public, your options are just to (1) suck it up and not write your program/driver after all, or (2) use those internal structures/interfaces knowing that Linus will probably break you sooner or later, leaving your customers high and dry.

    This is basically the same as Windows, except that in case (1) add "Complain that Microsoft is being anti-competitive by actively preventing you from writing the program you want to write," and in case (2) add "Complain that Microsoft is being anti-competitive by actively breaking your program".

  18. asdf says:

    Almost, but not quite. Under linux, most vendors would probably release the source code to the kernel module that lets you access the structs (even super-secretive commercial videocard driver writers release the code to this part). Then if something changes you would catch it with a compiler error, a compile-time assert via the typedef char foo[!!(expr)]; trick, or a check in the module init function. All bets are off if the user downloads one of those binary-only installation packages. The other main difference between linux and windows is that Microsoft wants you to upgrade windows (hell, they even retire software and don’t make it easy to save stuff off windowsupdate to disks).
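
    (A minimal sketch, not from the original comment, of the compile-time assert trick mentioned above; the macro name and structure are illustrative, and this uses the negative-array-size variant so the build genuinely fails when the expression is false.)

        /* If the expression is false the array gets a negative size
           and the compiler rejects the translation unit. */
        #define COMPILE_TIME_ASSERT(expr) \
            typedef char compile_time_assert_failed[(expr) ? 1 : -1]

        struct on_disk_record {   /* hypothetical struct shared with a kernel module */
            unsigned int inode;
            unsigned int flags;
        };

        /* Breaks the build if someone changes the structure's size. */
        COMPILE_TIME_ASSERT(sizeof(struct on_disk_record) == 8);

        int main(void) { return 0; }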

    I would just let the app break (or be prevented from running if it corrupts the disk); who knows what else the app might do. Whenever I write code to hack features into some library, I am fully aware that it could change in an updated version and make sure my code is isolated from their stuff. But then again, I statically link everything; I would never think about hacking into code that I load at runtime unless I really have to (then I check for version numbers). I’m just curious why Microsoft would alter their stuff to get the disk defragmenter software to work again; I’m sure the disk defragmenter company expected it to break and would have offered a patch or a new version (though I don’t know if it would be free or not).

    My guess is that most PC developers think of windows as: 3.1, 95-ME, 32-bit NT-2003, and amd/intel 64-bit (and they ignore the NT ones for non-x86 based computers and windows CE). If it works in whatever OS they’re testing against (say windows 2000) they assume it to work in another OS in that category (say windows XP). Microsoft making broken programs work just adds to that assumption.

    (What’s up with this .Text viewstate error all the time?)

  19. smidgeonsoft says:

    Warning! — Microsoft Applications & Microsoft OS salvo

    I will start by asking a rhetorical question: When was the last time that a new version of the OS broke a Microsoft application? If this happens almost never, then has providence granted the programmers in the applications area with virtue and omniscience concerning what can and cannot be done with the OS? And, if this is so, then why does it seem that with every new release of the OS, the new "look and feel" of GUI gadgets, for example, shows up first in Microsoft Word and Excel? In other words, who is driving the innovation in Windows? The applications area, the OS area, both in tandem? (Another rhetorical question: What is and is not part of the operating system?)

    I understand and accept that the playing field is not level when it comes to developing products that are competitive with Microsoft’s own offerings. But, I suggest that this topic on "compatibility fixes" and the statement, "It takes only one incompatible program to sour an upgrade", is sour grapes. This is one way that successful competitors can level the playing field.

  20. Raymond Chen says:

    The Office group can be a lot more adventuresome in their UI because they don’t have to publish an SDK to permit everybody else on the planet to use their new UI. They can decide in Office 2004 that the Office 2003 UI was crap and get rid of it entirely, and they don’t have to worry about breaking other programs that are using their UI (because nobody else is).

    Whereas if the OS team decided to make a cool new control, soon there would be thousands of apps that are using that new control so it can never go away. (Even if – after the passage of time – most people have lost interest in the control and it *should* be retired. Like the non-dropdown combo box, for example.)

    When was the last time an OS broke a Microsoft application? Check out the app compatibility database – you’ll find entries for Word, Excel, Visio… All of those programs broke at one point or another and had to be worked around.

    On the OS team, the Office programs are in the same category as the other major productivity programs: PageMaker, PhotoShop, CorelDraw, etc. They are in the "top priority" compatibility bucket, because these programs are "deal breakers" for millions of customers.

    Having watched the evolution of Office from the outside, I have to say that over the years they have improved their compatibility efforts considerably. The Office programs used to be awful and we’d swear at them daily, but now they’re quite good at playing by the rules. (Though just yesterday I ran across a place where they broke the rules – for IExtractIcon::GetIconLocation. Sigh. Compatibility work is never done.)

    From the OS team’s point of view, the Office programs are treated the same as the other "major vendor" productivity programs. We do not play favorites.

  21. Mike Hearn says:

    I should add that Dan Maas is overly critical of glibc – the static link breakage was due to fixing a serious bug in the loader. Unfortunately static binaries built with an old version of ld were broken and this triggered the bug.

    That highlights a fundamental difference between Windows and Linux – fixing bugs takes priority over keeping apps working, as the apps are all open source too and so will eventually be fixed.

    That sort of stuff is rare though. glibc generally preserves binary compatibility extremely well, but it will not work around broken apps.

  22. Raymond Chen says:

    "the apps are all open source too": What about the internal programs written by a company to run their business? (In the Windows world, that would be lots of internally-developed VB programs, for example.) Those aren’t open source, are they. Is it the company’s responsibility to rewrite their apps to track the new OS?

  23. TRS-80 says:

    Why do they have to track the new OS? There are still lots of people out there running Linux 2.0, 2.2, because it just works. There’s also millions of desktops still running Win98 as well. But it doesn’t matter if you run an old (major) version of Linux (keeping up to date with security of course), whereas Microsoft has to force those desktops through the upgrade mill to keep revenues up.

    So yeah, if you need to track the new OS (because of features or whatever, and not just because you’re being raped by Licensing 6.0 and so need to upgrade to make it appear worthwhile), then yes, you’ll have to rewrite bits to accommodate new interfaces etc. – they don’t somehow magically integrate themselves into your programs under Windows or Linux.

    Anyway, internal software is where open source makes the most sense, since you’re not locked into one vendor and don’t lose the source when they go bust etc. I can’t find the links at the moment, but there are several articles on just this point.

  24. Raymond Chen says:

    The internal software I was thinking of is stuff written by the company for the company itself. This isn’t something that the company is going to release to the open source community. Google.com for example hasn’t released the source to their secretive PageRank code, even though they run on linux systems.

  25. Rodrigo Strauss says:

    Why did the Windows team take so long to launch App Verifier? I know it’s not an easy task, but I think it’s VERY important. I develop drivers and I can’t live without Driver Verifier.

    And, do you have any idea of Verifier’s future with the :gasp: WinFX? :-)

  26. Srdjan says:

    I noticed one interesting thing related to Linux: a couple of posts above mention something like ‘Linus this’ or ‘Linus that’, ‘You have to convince Linus’, etc. What will happen to the Linux kernel when Linus (inevitably) is not around anymore? It seems that the stability of the Linux kernel relies a lot on one man.

  27. Kyle Hamilton says:

    "Linus this" or "Linus that", "You have to convince Linus", etc.

    Linus has been doing his best to only be part of the developmental kernel — he has historically passed kernel maintenance of stable releases to other people (his ‘lieutenants’). I expect that all of his lieutenants are indoctrinated into his way of thinking, and thus any of them (Alan Cox, before he took a sabbatical to get his MBA… or whatever the guy’s name is that’s doing the 2.6 series…) would be able to take care of the problem, in the same way that Linus would.

    [The same problem was faced by the Internet, when Jon Postel — the IANA — died. People had a hard time separating the function of the IANA and RFC editor from the person — but eventually, life went on. We still have an IANA. We still have an RFC Editor. We still have the IETF, and the entire organization scheme of the Internet.]

    Open Source (‘white box’) versus Closed Source (‘black box’)

    One of the tenets of good software design is that you don’t create software that behaves differently with frobbed bits, or with undocumented interfaces. If you do, then you open yourself up to the use of that undocumented behavior — and in many ways, you’re responsible for it.

    Granted, end-users tend to want to do things that systems were never designed for, and application programmers are the ones who are typically asked to make it happen. [though the same issues could be said for network admins and DBAs.] In the closed-source world, you can’t change the interfaces (and interface updates are almost impossible to come by except at major releases of the OS — service packs are somewhat excepted, such as the API updates in NT4 SP3 and SP5) just by emailing the developer, and asking for a change to the API/ABI to support something that you need to get done… so you end up hacking around it.

    In the open-source world, you can do several things to reduce the cost of implementing your request: 1) you can do the development work yourself, and submit it to the developer (via a patch or such) for inclusion in later versions of the API. 2) You can have a meaningful dialog with the developer of the API you’re using, telling them the specs of what it is you need, and seeing if they can help you figure out an easy way to fix it. 3) You can look at the code and see if there’s a way to do it without involving the developer at all. [Unfortunately, #3 is the only thing you can really do in the closed-source world, and it is much less desirable than the other two, because it doesn’t change the core issue of the default code not doing what you want and/or need.]

    Most software engineers are aware that you only modify the core interfaces without input from the original developers if you’re doing research, or are actively extending what you’re modifying to such a degree that you are essentially creating a new project [and are willing to take on the support burden of that new project — see the various clustering systems for Linux, as examples].

    And as for ‘in-house development’ — yes, it’s the company’s responsibility to track the changes of the OS, if they rely on undocumented behavior at all. The company that developed the software in use is responsible for making sure it works with the upgrades — that’s why fees are charged and willingly paid for maintenance contracts, for software that is critical to the company.

  28. Michael Bacarella says:

    There are many reasons to do formal Operating System software releases, but few of them are technically practical.

    Imagine, starting with Windows NT 3.51, if on-demand, individual components were upgraded as required. This could be because you encountered a bug, or because it’s a recommended security update, or because you just like being cutting edge. As Microsoft continued to build and improve the OS, updates would find their way to your system based on your "stability" settings (aggressive settings get the new filesystem as soon as released, conservative settings won’t adopt it until it’s considered stable).

    Over time your system would evolve into exactly what you needed it to do. If your applications depended on functionality staying broken, your system upgrades around it, or installs compatibility layers, or whatever. The other 99% of your system can be safely upgraded to Windows 2000’s level of functionality while your one broken killer app sits on Windows NT 3.51 compatibility DLLs. When you finally replace the broken app, you can move the other 1% forward too.

    What this means for the end user is that they can stop caring about version numbers and their system always works and they always have the newest versions of everything they want (with exceptions pointed out).

    It would also mean you could toss away IHV driver discs since you’re only a button press away from getting the newest drivers on your system.

    What it would mean for Microsoft is that instead of major release cycles every few years, they can make smaller incremental changes year round with CDs mastered every year or so for people who are looking to adopt anew–so that they don’t have to download the first release and install 8 years worth of updates to get current.

    Marketing is simplified, you only need to sell "Windows". XP, 2000, NT, whatever.

    It also means the subscription system they’ve had a hardon for. $50/year for a fluidly upgradable system is a better sell than $150/3 years for a drastic buy/reinstall/upgrade cycle with few updates in between.

    Software is a service. It’s silly for Microsoft to continue to treat it as a product.

  29. Raymond Chen says:

    It also means that instead of four supported Windows XP software configurations (Windows XP Home, XP Home SP1, XP Pro, XP Pro SP1) you have 4*K^N, if there are N components each with K upgrade aggressiveness levels. Let’s say that N=4 and K=3, so you now have 4*81 = 324 configurations, 81 times as many as today. So testing each patch will now take 81 times as long. Is that what you want? (Because it would be a bummer if somebody said, "Stupid Microsoft. I have Windows XP Pro SP1 with Component1=Component2=conservative, but Component3=normal and Component4=aggressive, and the patch doesn’t work. How dare they not run a thorough test on this specific configuration!")

  30. Raymond Chen says:

    Commenting on this article has been closed.

  33. Ambassador says:

    One should always assert the pre- and sometimes post-conditions in the code so that any problem that arises during debug can be caught; it saves a lot of reputation. The assert macro is defined in the ISO 9899 standard, under Program Diagnos …

Comments are closed.

