Running old programs in a virtual machine doesn’t necessarily create a good user experience

Date: October 5, 2005 / year-entry #293
Tags: other
Orig Link: https://blogs.msdn.microsoft.com/oldnewthing/20051005-09/?p=33903
Comments: 37
Summary:Many people suggest solving the backwards compatibility problem by merely running old programs in a virtual machine. This only solves part of the problem. Sure, you can take a recalcitrant program and run it in a virtual machine, with its own display, its own hard drive, its own keyboard, etc. But there are very few...

Many people suggest solving the backwards compatibility problem by merely running old programs in a virtual machine. This only solves part of the problem.

Sure, you can take a recalcitrant program and run it in a virtual machine, with its own display, its own hard drive, its own keyboard, etc. But there are very few types of programs (games being a notable example) where running them in that manner yields a satisfying experience. Because most programs expect to interact with other programs.

Since the virtual machine is running its own operating system, you can't easily share information across the virtual machine boundary. For example, suppose somebody double-clicks a .XYZ file, and the program responsible for .XYZ files is set to run in a virtual machine.

  • Start the virtual machine.
  • Log an appropriate user on. Hopefully, the user has an account in the virtual machine image, too. And of course the user will have to type their password in again.
  • Once the system has logged the user on, transfer the file that the user double-clicked into the virtual machine's hard drive image somehow. It's possible that there are multiple files involved, all of which need to be transferred, and the identities of these bonus files might not be obvious. (Your word processor might need your spelling exceptions list, for example.)
  • Run the target program with the path to the copied file as its command line argument.
  • The program appears on the virtual machine operating system's taskbar, not on the main operating system's taskbar. Alt+Tab turns into a big mess.
  • When the user exits the target program, the resulting file needs to be copied back to the main operating system. Good luck dealing with conflicts if somebody changed the file in the main operating system in the meanwhile.

The hassle with copying files around can be remedied by treating the main operating system's hard drive as a remote network drive in the virtual machine operating system. But that helps only the local hard drive scenario. If the user double-clicks a .XYZ file from a network server, you'll have to re-map that server in the virtual machine. In all cases, you'll have to worry about the case that the drive letter and path may have changed as a result of the mapping.
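
The mechanics of that drive mapping aren't exotic. As a hypothetical illustration (not from the original article; the machine and share names are invented), a helper running inside the guest operating system could map the host's drive with an ordinary WNet call, which is exactly where the drive-letter-and-path remapping problem shows up:

```c
#include <windows.h>
#include <winnetwk.h>
#pragma comment(lib, "mpr.lib")

/* Hypothetical sketch: inside the guest OS, map the host's drive (exposed
 * as the invented share \\HOSTPC\C$) to a drive letter. Whatever the host
 * called C:\Docs\report.xyz now lives under H:\ inside the guest, so every
 * path handed across the boundary has to be rewritten. */
BOOL MapHostDrive(void)
{
    NETRESOURCE nr = {0};
    nr.dwType       = RESOURCETYPE_DISK;
    nr.lpLocalName  = TEXT("H:");             /* drive letter inside the guest */
    nr.lpRemoteName = TEXT("\\\\HOSTPC\\C$"); /* the host's drive, seen as a share */

    DWORD err = WNetAddConnection2(&nr, NULL /* password */, NULL /* user */, 0);
    return err == NO_ERROR;
}
```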

And that's just the first problem. Users will expect to be able to treat that program in the virtual machine as if it were running on the main operating system. Drag-and-drop and copy/paste need to work across the virtual machine boundary. Perhaps they get information via e-mail (and their e-mail program is running in the main operating system) and they want to paste it into the program running in the virtual machine. International keyboard settings wouldn't be synchronized; changing between the English and German keyboards by tapping Ctrl+Shift in the main operating system would have no effect on the virtual machine keyboard.
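
To see why copy/paste doesn't just work, consider how a Windows program normally pastes text. The sketch below (not code from the post) uses the standard clipboard API; those calls reach only the clipboard of the operating system the program is actually running in, so a program inside the guest never sees what the user copied on the host unless somebody builds a bridge.

```c
#include <windows.h>

/* Minimal sketch: pasting text the ordinary Win32 way. OpenClipboard talks
 * to the clipboard of whichever operating system this program runs in, so
 * inside a virtual machine it sees only the guest's clipboard. */
void PasteText(HWND hwndOwner)
{
    if (OpenClipboard(hwndOwner)) {
        HANDLE hData = GetClipboardData(CF_UNICODETEXT);
        if (hData) {
            LPCWSTR pszText = (LPCWSTR)GlobalLock(hData);
            if (pszText) {
                /* ... use pszText ... */
                GlobalUnlock(hData);
            }
        }
        CloseClipboard();
    }
}
```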

Isolating the program in a virtual machine means that it doesn't get an accurate view of the world. If the program creates a taskbar notification icon, that icon will appear in the virtual machine's taskbar, not on the main taskbar. If the program tries to use DDE to communicate with Internet Explorer, it won't succeed, because Internet Explorer is running in the main operating system, not in the virtual machine. And woe unto a program that tries to FindWindow and then SendMessage to a window running in the other operating system.
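
For concreteness, here is the sort of FindWindow/SendMessage pattern that paragraph describes, as a hypothetical sketch (the window class name and message are invented, not taken from any real program):

```c
#include <windows.h>

/* Hypothetical sketch: poke a companion program by window class name.
 * Both calls assume the target window lives in the same window manager;
 * if the companion runs inside a virtual machine, FindWindow returns NULL
 * because that window exists only in the guest's window manager. */
void PokeCompanionApp(void)
{
    HWND hwnd = FindWindow(TEXT("XyzAppMainWindow"), NULL);
    if (hwnd) {
        /* A private message the companion app is assumed to understand. */
        SendMessage(hwnd, WM_APP + 1, 0, 0);
    }
}
```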

If the program uses OLE to host an embedded Excel spreadsheet, you will have to install Excel in the virtual machine operating system, and when you activate the object, Excel will run in the virtual machine rather than running in the main operating system. Which can be quite confusing if a copy of Excel is also running in the main operating system, since Excel is a single-instance program. Yet somehow you got two instances running that can't talk to each other. And running a virus checker in a virtual machine won't help keep your main operating system safe.

As has already been noted, the virtual machine approach also doesn't do anything to solve the plug-in problem. You can't run Internet Explorer in the main operating system and an Internet Explorer plug-in in a virtual machine. And since there are so many ways that programs on the desktop can interact with each other, you can think of each program as just another Windows plug-in.

In a significant sense, a virtual machine is like having another computer. Imagine if the Windows compatibility story was "Buy another computer to run your old programs. Sharing information between the two computers is your own problem." I doubt people would be pleased.

For Windows 95, we actually tried this virtual machine idea. Another developer and I got Windows 3.1 running in a virtual machine within Windows 95. There was a Windows 3.1 desktop with Program Manager, and inside it were all your Windows 3.1 programs. (It wasn't a purely isolated virtual machine though. We punched holes in the virtual machine in order to solve the file sharing problem, taking advantage of the particular way Windows 3.1 interacted with its DPMI host.) Management was intrigued by this capability but ultimately decided against it because it was a simply dreadful user experience. The limitations were too severe, the integration far from seamless. Nobody would have enjoyed using it, and explaining how it works to a non-technical person would have been nearly impossible.


Comments (37)
  1. Travis Owens says:

    Off topic but on the subject of virtual machines:

    Applications like DosBox show that Microsoft didn’t cover all the bases with their backwards support.

    I find it surprising that 2000 and XP didn’t map DOS sound into DirectX audio, so virtual machines like DosBox are still needed at times.

    I wonder if Vista will finally take full DOS functionality seriously, or will I still have to run DosBox just to get proper DOS sound?

  2. Travis,

    I’m sorry to have to tell you that improving Windows support for 16 bit DOS applications is NOT high on the list of features. 16 bit Windows multimedia apps should continue to work, but we’re not investing a whole lot of effort into DOS application support.

    Never mind the fact that we can’t even begin to support them on 64bit platforms.

  3. kalleboo says:

    An interesting implementation is the Classic Environment in MacOS X, where they’ve done a very good job of bridging the old and the new. The classic MacOS sees the network through a fake network adapter, the programs show up on the OS X dock, and applications can pass high-level events between each other. Copy/paste and drag and drop are seamless.

  4. Frederik Slijkerman says:

    Interesting, because the virtual machine approach is how Mac OS Classic is supported on OS X. It only proves Raymond’s point, because Apple has never cared much about backwards compatibility.

  5. Jack Mathews says:

    I would say that the Classic interface has pretty much proved that a virtual machine CAN be done right. All of your arguments against hold no water because while you "tried" the virtual machine route, someone did it and got it working well.

  6. Mike Dunn says:

    Didn’t OS/2 take the VM approach as well? You could run Win 3.1 apps and they looked & ran like Win 3.1 apps, similar to OS9 apps on OSX.

  7. Doug says:

    Running new programs in a VM sandbox is where all of the isolation problems become a benefit. I hope that IE running in a VM sandbox where it will not infect the rest of the system will make it into Vista.

  8. Stu says:

    LarryOsterman: Never mind the fact that we can’t even begin to support them on 64bit platforms.

    I’ve never understood why Microsoft refuses to support 16-bit apps on Win64. VMWare and Virtual PC both prove that 16-bit apps can be run (on a virtual CPU) in a 64-bit environment. Why not just run WOW on WOW64? (WOWOW?)

    Raymond: Most of your arguments against virtual machines also apply to remote desktop. Vista solves most of them for RDP, such as file extension mapping, seamless windows/system tray, clipboard synchronisation, and probably keyboard synchronisation, so they must be solvable for VMs too.

    You don’t have to use the stock OS in the VM; you can add all sorts of custom drivers, etc. to achieve most of the synchronisation. WinOS/2 and MacOS X’s Classic environment both do this, and VirtualPC and VMWare both provide drivers to achieve some synchronisation, such as drag and drop, clipboard and keymap synchronisation. (I remember drag and drop being possible on SoftWindows 95 for PowerMac, and there the two OSes aren’t even remotely similar, so it’s a solved problem.)

  9. kbiel says:

    That’s funny, OS/2 had a very successful (some would say too successful) VM implementation for running 16-bit Windows programs. It didn’t solve all of the problems you mentioned, but it worked well enough that I felt more comfortable running my Windows programs in OS/2 rather than in Windows itself.

    I know, I know, things are different these days, but I don’t believe the VM solution is as bad as you say it is, just expensive.

  10. Stu says:

    That was Longhorn Server rather than Vista… See here for a list of confirmed features: http://www.brianmadden.com/content/content.asp?id=500

  11. It depends on what you’re dragging and dropping. File names are not difficult to marshal. Excel spreadsheet cells are a little harder.

    Notice that all of the examples people cite for usable VMs punched holes in the VM boundary. Thus proving my point. Merely running them in a VM isn’t good enough.

  12. James Risto says:

    I am involved in a virtualization project right now, and this topic seems to gather more of what I call "holy grail" thinking than most other technologies. In other words, geez, if I can use virtual machines, I will solve everything. Wrong. The only savings is MAYBE on hardware, if you can load up enough VMs on a box. But you don’t save on memory; that’s not virtualized. As far as TCO of a machine, you don’t save on licensing, patching, or support effort. But management believes that the smaller it is, the cheaper. Blade servers = small = cheaper. VMs = invisible = even cheaper. Wrong.

  13. Jack Mathews says:

    Punching holes is one thing. The way Windows did it is a veritable screen door.

    Most VM hole punching is done in a measured manner. The "host" controls all data in, and fishes for any data to come out, and only specific data. Certain small things are marshalled, and that’s it. You need a taskbar icon? You need drag and drop? You need a clipboard? Ok, you have those. And ONLY those.

    With Windows, they share the Window Manager. Any 16-bit window can get access to any control in any application. I mean, that’s just horrible. And that’s the difference.

  14. ATZ Man says:

    One man’s "punching through the VM boundary" is another man’s "implementing an interpreter."

    Chen, your experience with implementing a VM for running Win3.x apps under Win95 is interesting. Of course ultimately Win95 did run those apps, so your experience simply proves that some ways of running down-level apps are worse than other ways. Your approach was more like the way one might run a cellphone app on one’s desktop to validate it. The cellphone simulator is going to try to make the app’s environment and the tester’s experience of it as close as possible to the environment and experience provided by the real cellphone. We don’t want that verisimilitude when we want to run a DOS 3.3 app under 64-bit Windows Vista on Itanium. So the approach to take is more like the JVM approach (start with an interpreter and then optimize) than the simulator approach.

  15. Travis Owens says:

    LarryOsterman,

    I appreciate your bottom line answer, I have no problem keeping with DosBox. I’d rather not have to wonder if I’ll find multimedia support in an upcoming version of Windows as I’ve been wondering that for ~5yrs now but at least I know it’s not a priority and can just get over it.

  16. "I’ve never understood why Microsoft refuses to support 16-bit apps on Win64."

    When an x64 CPU switches into Long Mode, you lose access to Virtual 8086 mode (which is how 16-bit apps were emulated on 32-bit Windows). This means you can’t execute 16-bit code.

    IIRC, you can’t get out of long mode without resetting the CPU.

  17. Puckdropper says:

    Myron A. Semack, I remember such a problem with 286 or 386 processors. I think it was a 286. You couldn’t switch modes of the CPU without resetting it. What some programs did was save the state of the CPU before changing mode and put the CPU back in that state after the process was done.

  18. Yeah, that was the 286. As I recall, switching out of protected mode was extremely slow (like several milliseconds).

  19. Stu says:

    Myron A. Semack: When an x64 CPU switches into Long Mode, you lose access to Virtual 8086 mode (which is how 16-bit apps were emulated on 32-bit Windows). This means you can’t execute 16-bit code.

    Then explain how VMWare and VirtualPC can continue to function on a 64-bit OS (as a 32-bit app). VMWare already works on Linux x86_64 (I use it) and has a beta for Win64. It does not do CPU emulation. How does it continue to be able to run 16-bit code even on a 64-bit OS?

  20. James Day says:

    It sounds and has sounded for a while as though virtual machines are the way to go. Even with their limitations, they work for more cases than the other compatibility routes and that’s important when the base platform – Windows – has its current very short product lifecycles. I don’t care so much if something doesn’t work in a new OS if it includes a virtual machine and old OS version as well, so I can still run the applications.

  21. Nate says:

    Even if 64-bit Windows could not run 16-bit code directly, they could just emulate it in software. So what if it runs 15x slower? You do not need to run Lotus 1-2-3 at 3 GHz.

  22. "Then explain how VMWare and VirtualPC can continue to function on a 64-bit OS (as a 32-bit app). VMWare already works on Linux x86_64 (I use it) and has a beta for Win64. It does not do CPU emulation. How does it continue to be able to run 16-bit code even on a 64-bit OS?"

    VMWare must be virtualizing the 16-bit stuff. The lack of Virtual 8086 mode when you’re in Long Mode is a documented limitation of the x64 ISA.

  23. kbiel says:

    Raymond,

    You said, "Notice that all of the examples people cite for usable VMs punched holes in the VM boundary. Thus proving my point."

    But your example of using a VM for backwards compatibility was just that: a VM with holes punched through the VM boundary. So, just what is your point?

  24. This entry was a response to people saying, "Why doesn’t Microsoft just include a copy of Virtual PC with Windows?" In order to get something usable, you have to start punching holes.

  25. Arlie Davis says:

    Are any of you paying attention? NT supported a fairly good 16-bit DOS virtual machine for *years* and still does on 32-bit platforms. NTVDM.EXE. The support is so transparent that most people don’t even realize there is anything different about 16-bit apps running on NT.

    Raymond’s talking about a fully isolated virtual machine, such as VMware. Approaches like NTVDM are highly integrated virtual machines. Most API calls result in thunked calls to the "real" APIs. This is not at all the approach of the fully-virtualized box that VMware (and Virtual PC, etc.) provide. Apple’s OS9 support is analogous to Microsoft’s NTVDM support — both are virtual machines that make every effort to integrate the virtual machine’s environment into the "real" machine.

    Further, Microsoft, please don’t waste any more time supporting real-mode 16-bit audio hardware. It’s 2005! Anyone who needs this stuff already has it, and I would rather have WinFS than SoundBlaster emulation.

    Backward compatibility can be the best thing in the world, and can be the worst thing in the world. But there comes a time to say, ENOUGH! It’s time to move on!

  26. James Day says:

    So punch some holes. This isn’t about purity of concept, it’s about keeping the applications running. Not many holes, and not as many as you suggested. This isn’t full integration, it’s about keeping things working. Vista with Virtual PC and XP and ME and NT and 98 and 95 and 3.1 and MS-DOS would be nice from my point of view, assuming they actually worked in the virtual machine. And assuming that this was in the most basic home user package as well as the rest.

  27. ken lubar says:

    There are some great examples of VMs that worked well and were very useful. Back a long time ago, VM370 (an old time sharing system) gave each user his or her own virtual IBM 370. Complete with console lights and switches.

    In that virtual machine you could load and run any operating system you wanted–DOS (the original), TSO (a time sharing system) or even another copy of VM370. I think you could nest the virtual machines at least 7 layers deep. It was quite cool and really did work seamlessly.

    On occasion, I really would like to fire up another machine for testing or to deploy an application that potentially can muck up my primary machine.

    Boy do I feel old writing about the 370.

  28. James Day says:

    Ken,

    Don’t worry about writing about the 370. I rewrote Microsoft’s FORTRAN 66 floating point library for the Z80 to use an AM9511ADC math coprocessor chip as a university project (don’t think Microsoft ever knew about this). Lots of us "old" people around. :)

  29. Jonathan Wilson says:

    I think that the best possible way to run old apps would be to do something similar to what WINE does that translates API calls. (Only better, because Microsoft could use actual Windows source code to make it happen.)

    So, old apps would run on a "virtual" layer that presents the old API calls whilst new apps see a totally new API.

    Both upper-level APIs would talk through to the kernel and drivers and such.

  30. Nick says:

    > I’ve never understood why Microsoft refuses to support 16-bit apps on Win64.

    > When an x64 CPU switches into Long Mode, you lose access to Virtual 8086 mode (which is how 16-bit apps were emulated on 32-bit Windows). This means you can’t execute 16-bit code.

    > IIRC, you can’t get out of long mode without resetting the CPU.

    w3.x (in enhanced mode) & w9x use Virtual 8086 mode to run DOS apps; NT does not. Thus 64-bit NT should be able to emulate 8086 real mode just as easily as 32-bit NT does. 16-bit Windows apps are another story, but they shouldn’t be impossible to run in a similar way with thunking.

  31. John Topley says:

    Wasn’t IBM’s OS/370 the first commercially available VM operating system? There’s a fascinating section in Andrew Schulman’s old "Unauthorized Windows 95" book that talks about how large chunks of the Windows 3x/9x line can indirectly trace their VM technology back to IBM’s work in 1972.

  32. Leif says:

    Several people here have cited Mac OS X’s Classic environment as a refutation of Raymond’s point, but I think Classic, as it exists, actually SUPPORTS Raymond’s argument. If you think Classic proves that a VM compatibility layer can be done right, clearly you haven’t used Classic enough.

    Most long-time Mac users that I know (myself included) HATE Classic. It takes forever to start (as Mac OS 9 boots within the VM), so that one quickly learns to avoid starting Classic apps, and winces every time one is started by accident. Apps running under Classic often don’t look or behave quite right, and are never as smooth as they are in their native environment. Mac OS X users yearn for Carbonized versions of their old apps, precisely because Classic simply isn’t good enough. If Classic was wonderful, the long and well-publicized delay in Carbonizing QuarkXPress would have been a non-issue.

    However, I actually disagree with Raymond, because I also think that Classic could have been done a lot better than it was. Classic seems to have been done at, or close to, the device driver level, which seems to be a mistake. Running old Mac apps under X, I don’t care about extensions or device drivers or all the rest of it; I just want to run the app itself. How hard would it be to create a lightweight, per-process emulation layer that essentially Carbonizes old apps on the fly? You could thunk many Classic OS/Toolbox calls to the native Carbon framework, and use Switcher/MultiFinder-style kludgery to handle the rest. There would be nothing to boot, and Classic apps would enjoy memory protection from one another (*). I’m going to try it…

    (*) The official mantra is that this is "impossible", to which I propose the following thought experiment: Could Classic be tweaked such that two instances of it ran side-by-side? Can Classic and the OS X port of MOL be run side-by-side?

  33. Nate says:

    "How hard would it be to create a lightweight, per-process emulation layer that essentially Carbonizes old apps on the fly?"

    It would be a compatibility nightmare. Reading The Old New Thing, we learn the hell Raymond has gone through keeping new versions of Windows compatible with old programs. This would be hundreds of times harder with Classic Mac OS apps.

    The Win32 API is ugly, but the Classic Mac OS API makes Win32 look like a work of art. Classic Mac OS apps require the various Mac toolbox structures to be laid out in certain ways, often write into the system heap and do very "yucky" things more reminiscent of DOS applications. There is no way that a transparent Classic compatibility environment could have been made.

  34. Good Point says:

    Leif said:

    Most long-time Mac users that I know (myself included) HATE Classic. It takes forever to start (as Mac OS 9 boots within the VM), so that one quickly learns to avoid starting Classic apps, and winces every time one is started by accident.

    But if you make backwards compatibility somewhat functional but undesirable, the users will eventually want to stop using it and upgrade their applications.

    Microsoft could learn a thing or two. Then modern Windows wouldn’t be living with decisions made in the Windows 3.1 (or earlier) era. And it wouldn’t have to be hacked to run bug-riddled applications targeted at older platforms that just somehow worked.

  35. "w3.x (in enhanced mode) & w9x use Virtual 8086 mode to run DOS apps; NT does not."

    Some quick Googling says otherwise.

    http://en.wikipedia.org/wiki/Virtual_DOS_machine

  36. Mark Steward says:

    Nick is partially correct – NT does use Virtual 8086 mode to run 16-bit apps on the IA-32, but http://www.microsoft.com/resources/documentation/windowsnt/4/workstation/reskit/en-us/archi.mspx?pf=true#E5PAE says it depends on architecture. A bit further up on this page:

    "The Win16 NTVDM can also run 16-bit applications. On RISC processors, NTVDM emulates all Intel x86 instructions in addition to providing a virtual hardware environment."

    I assume this is carried through to Windows XP, as the public symbols for ntvdm.exe imply that it can still be built with proper CPU simulation (the lack of CamelCase suggesting it’s originally from a separate project).

    I see no technical reason not to continue WOW16 into Win64, but I’d surmise that it’s no longer a major force in selling Windows, so Microsoft are leaving it to 3rd parties, as with so many other projects that just don’t tip the balance any more.

    It would be nice in the VALUEADDUNSUPP folder, maybe…

  37. Anonymous Coward says:

    Raymond wrote:

    > This entry was a response to people saying, "Why doesn’t Microsoft just include a copy of Virtual PC with Windows?" In order to get something usable, you have to start punching holes.

    I fully agree that using a VM to run old apps is generally a bad idea. It has its place, I think more on the server than on the client, but it’s a limited role.

    Having said that, I think the real reason Microsoft doesn’t include a copy of Virtual PC with Windows is because it can SELL it separately. Ever notice that Microsoft tends to "integrate" things into the OS that other companies are selling, but rarely bundles anything that MS is able to sell separately?

Comments are closed.

