Date: | August 20, 2007 / year-entry #305 |
Tags: | history |
Orig Link: | https://blogs.msdn.microsoft.com/oldnewthing/20070820-00/?p=25513 |
Comments: | 25 |
Summary: | Prerequisites: Moderate to advanced understanding of the window and dialog managers. When you're implementing a control, you need to be aware that you aren't necessarily being hosted inside a dialog box. One commenter suggested handling WM_KEYDOWN and closing the dialog box as a way to prevent multi-line edit controls from eating the Enter key. But... |
Prerequisites: Moderate to advanced understanding of the window and dialog managers. When you're implementing a control, you need to be aware that you aren't necessarily being hosted inside a dialog box. One commenter suggested handling WM_KEYDOWN and closing the dialog box as a way to prevent multi-line edit controls from eating the Enter key. This leads to a related topic brought up by another comment:
I remarked that the authors of the edit control back in 1981 didn't follow the above guidance. Probably¹ because back in the days when the edit control was first written, the window manager was still in a state of flux and its design hadn't settled down. You can't blame the edit control for not following guidance that didn't exist.

The edit control implements the ES_WANTRETURN style. What's more interesting is how the edit control implemented the absence of that style. This is ugly no matter how you slice it, and it violates so many principles of control design it isn't funny. For one thing, the way it detects whether it is hosted inside a dialog is fragile and can be tricked into guessing wrong. Next, its mimicry of the dialog manager's handling of the Enter key is imperfect.

As I noted, all these mistakes are obvious in retrospect, but when the control was first written, these mistakes might not¹ even have been mistakes. (For example, nested dialogs didn't appear on the scene until Windows 95.)

Why haven't these mistakes been fixed? Well, how can you prove that there aren't any programs that rely on the mistakes? One thing you quickly learn in application compatibility is that a bug, once shipped, gains the status of a feature, because you can be pretty sure that some program somewhere relies on it. (I've seen a plugin that relies on a memory leak in Explorer, for example.) This goes double for core controls like the edit control. Any change to the edit control must be undertaken with a great deal of trepidation, because your change affects pretty much every single Windows program on the entire planet. With that high a degree of risk, the prudent choice is often to let sleeping dogs lie.

Nitpicker's Corner

¹Note weasel words. This is my educated guess as to what happened based on personal observation and thought. It is not a statement of the official position of Microsoft Corporation, and this guess may ultimately prove incorrect.
Comments (25)
Comments are closed. |
>I’ve seen a plugin that relies on a memory leak in Explorer, for example
How?
Not to demand anything, but that’s quite a feat that should be enshrined somewhere.
[And indeed it is. In my book. Appendix A. -Raymond]
Hehe, it made my day to see a leak discussion end with a plug (I know, my pun is quite artificial).
You note that the edit control has behaviour related to the way it was implemented in 1981.
I wonder whether the WPF edit control has been implemented fresh and will address the issues that you raised?
Common controls 6 may have been a good time to fix it since it was opt-in.
In the installation program of Win3.1 there was a stage where you could manually edit system.ini, win.ini, or maybe even both, via a multi-line edit control.
If you hit enter inside this control, it would send the enter key to the OK button and the installation would unexpectedly continue.
Too bad if you wanted to add lines to the INI-file.
The solution was to press control-enter, which would insert a newline into the control, and not trigger the OK button.
Later, in Win95, I used the same method in other programs, where I wanted to add newlines inside a multi-line edit control in some window, where I could see the OK button had a heavy frame around it, so it would catch the enter key if it got pressed while that window had focus.
Code that relies on bugs deserves to be broken.
Period.
Depend on a memory leak? When you REQUIRE another app to malfunction for your app to work right, you just KNOW you are asking to get shut down, and frankly, you also know you deserve it.
If you had no clue, you accept that you screwed up and fix it asap.
Sadly, most people just whine that it is someone else’s fault.
I could seriously stand to see a few breaking changes for the better, so long as it is clearly documented. Vista has a lot of problems and some of them are mind-numbingly stupid (try tab and shift-tabbing through a file selection dialog one day, the path is not symmetrical), but others make good sense.
Depend on weak security? Too bad, so sad. You want to run on my machine? Fix your app, you probably didn’t need that editor to format my hd anyways. At least now I get a warning.
You want to install drivers on my machine? Signing doesn’t seem that odious a problem to me, as long as I can do something really stupid like load them anyways if I really want to (and have to work at it).
Has this behaviour changed to be more sane for 64-bit versions of Windows running 64-bit apps (where there is no backwards compatibility story)?
So, just for reference: the "right" way to do this would be to make WM_GETDLGCODE report that it didn’t want ENTER, thereby letting the parent dialog procedure capture and process it?
I’m a little curious why it wasn’t done that way, since that seems a *lot* easier. Still, I guess that answer is lost in the mists of time…
So, you write a lot of sample programs that rely on historically enshrined bugs, then?
Surely 64-bit Windows has compatibility requirements, but I can’t agree with the apparent Microsoft position that these requirements are absolute, and that all bugs are to be enshrined for posterity, in case anyone relies on them. It makes sense for Microsoft, since you avoid pissing off existing customers, but at the price of making life more difficult for every future developer (while the mistakes of past developers are covered by the infinite goodwill checks Microsoft will cash). I’m sure it makes good business sense, because if there’s one thing you can’t accuse Microsoft of it’s having bad business sense, but let’s just say I’m glad I’m not a Microsoft developer.
Xepol, I agree in part, but sometimes people make mistakes and you end up with programs that only happen to work because of a bug (or undocumented feature), not on purpose but because it worked and was never detected.
Years later that program may no longer be maintained. The source could even be lost. If it’s a popular/important program then it makes sense for MS to do what they have to do (within reason) to keep it working.
With programs that are still being updated it obviously makes sense to try to contact the vendor and get them to fix it in their next update. I would hope most vendors would want to do so, if they still supported that product. Even then, though, MS have to worry about people who may have the old version and not update, and then blame the new version of Windows when it suddenly starts to crash.
Regarding 64-bit Windows, anyone who has looked at it will quickly realise that MS have gone out of their way to allow people to re-compile code as 64-bit with the minimum amount of changes. In my opinion they went too far but maybe there are good reasons behind it that I’m not aware of. Either way, there hasn’t been a break from backwards compatibility with 64-bit. Probably just as well since if a program compiles and seems to run then people may not check every single codepath for bugs.
(An example is how you don’t even have to change "System32" to "System64" when you recompile your code, which shouldn’t have had either string hardcoded in the first place. This results in a confusing and (IMO) messy system of virtual/substituted folders with strange names. The 64-bit binaries are in a folder whose name ends in "32" while the 32-bit binaries are in a folder whose name ends in "64", which is backwards, and your process will see different things depending on whether it is 32-bit or 64-bit.)
1981??? Really? Don’t you mean 91?
Xepol: "Code that relies on bugs deserves to be broken.
Period."
You’re right, but as Raymond and others have said many times, if you replace your operating system (let’s say Windows 98) with Windows XP, or XP with Windows Vista, or install a Microsoft patch, and your programs STOP WORKING, who are you going to blame?
You’ll blame Microsoft, and the new operating system will not get very many sales, and will get a bad reputation for breaking applications that worked "fine" under the old operating system. (You, as the customer, won’t know that the program that worked "fine" was relying on bad behavior that is now fixed.)
Bugs that are depended on by programs gain the status of features… as someone once said.
Thank you, thank you, thank you. I have long been asked why I refuse to touch Windows for any development purpose, and you have eloquently encapsulated the entire reason within a single example. Bravo, and my appreciation.
Leo — you’ll also see weird things in the registry, of course. HKEY_LOCAL_MACHINE\Software is no longer there for a 32-bit process; it’s now at Software\Wow6432Node (whose name makes a *lot* more sense than SysWOW64).
But here’s the (IMO dumb) part of that: you’ll *also* see weird things if you try to connect to the registry of a 64-bit Server 2003 box from a (remote) 32-bit Server 2003 box. You’ll see the 32-bit registry under Software, and you’ll have no way to get at the 64-bit parts.
This does *not* happen if you connect from 32-bit XP, or (obviously 32-bit) 2000 Pro; from those OSes, you get the "correct" Software key, as you’d see it on the local machine with 64-bit regedit. It only happens from 32-bit Server 2003.
This has bugged me ever since I first saw it, but I never bothered to look up why it happened. I assume the 64-bit 2k3 is looking at the SMB/CIFS session from the 32-bit 2k3, realizing that it’s 2k3, then finding out its register-size (somehow), then changing its behavior accordingly. Why it doesn’t do this with XP or 2000 is a mystery, though; if it was a back-compat thing, it should do it with all OSes (especially ones that can only run 32-bit).
But I haven’t done a lot of searching either.
My point was that these apps that depend on the bug have to be recompiled anyways and not every API that existed in 32-bit still exists in 64-bit, does it?
Transitioning is a great time to fix up APIs, remove ugly hacks, and change stuff to remove the potential for ugly hacks in the future.
If they’ve recompiled on 64-bit that means a few things:
1. They have the source code.
2. They’re going to have to modify other stuff anyways (like the "banned" functions).
3. If they weren’t following the documentation and a hack was added to windows to work around their usage, this is the perfect time to force them to actually follow the documentation.
Since all 3 of those means they’re *actively* working on the code base, they can fix it to be proper as soon as the problem is discovered.
The same thing applies when you actually write an app specific hack in windows. Version it to that specific version of the app that relies on the bug. (or you can base it on the version of the DLL they say they link to) Remove the hack if the version becomes newer. Because that means the developer had to touch the code and update stuff, so it is the most perfect time to make them fall in line.
If people are allowed to write crap code without being punished, they’ll continue to write crap code.
I don’t see how keeping around buggy code from 1981 is in any way, shape, or form a good thing. Does this mean Microsoft never removes app specific hacks or that they always leave in "backwards compatibility cruft"?
A different perspective on what Rosyna is saying:
When Windows 95 came along, Microsoft made a huge, arguably heroic effort to maintain bugwards-compatibility with every application ever released. For instance, there is the famous story of how Windows 95 recognized Sim City and used a memory manager implementation that allowed writes to recently-freed memory to avoid destabilising it. The point of this, of course, was that using old software with the new system would be stable, so people wouldn’t say “Windows 95 is unstable – it crashes every time I run Sim City, which worked fine in Windows 3.11”.
However, the situation with the 64-bit transition is different. In this case, software must be recompiled and a new version released to be using the 64-bit API. Therefore, when an application crashes, it is *the new version of the app* that is seen as unstable. People will latch on to the last thing they changed, and attribute differences in behaviour to it.
Over in Mac OS X land, this sort of break was made at the x86 transition, and can be expected at the 64-bit transition in Leopard. Additionally, smaller versions happen with each OS release: if an application is linked against the Mac OS X 10.3 SDK, buggy behaviours from 10.3 and earlier are retained, but if it is linked against the 10.4 SDK, fixed behaviours are used. This means all existing software retains the old behaviours, but new apps benefit from less crufty API.
@Ahruman: It also means new apps can’t run on older versions of OS X. That drives me nuts — I use Panther and can’t run half the simplest apps out there. The main reason I’m on Panther is because everyone stopped supporting Jaguar.
Situations like this really make you appreciate the lengths Microsoft goes to as a user. As a developer, I can certainly appreciate breaking changes to fix bugs, but I’ve yet to see anyone come up with a versioning scheme that actually works in the real world. (Look at what happened when Microsoft tried to fix that with .NET.)
[disclaimer – I work for Microsoft]
Some perspective – first of all, I’ve recently started playing Total Annihilation (the best RTS ever written) again, even though all the documentation is about installing on Windows 98. I’m sure there are backwards compatibility tweaks that enable it to run – without those, I’d have to just throw the CD’s away, because the company (and the source code) are long gone.
On the flip side, I use Company Q XYZ for work. I bought XYZ 2 in May of this year, even with the warning that it wouldn’t run on Vista, because the chatter at the time was that XYZ 2 would be patched to run on Vista.
Now Company Q has released XYZ 3, which runs on Vista, and I’m betting no patch is forthcoming. Company Q wants me to upgrade for $300, which is insane.
Before I started reading Raymond’s blog, I may have grumbled about Microsoft breaking backwards compatibility. But after appreciating his perspective, I blame Company Q for being caught by surprise by an OS that was in late beta for eight months and the biggest buzz on the planet.
But I’m sure that most folks who installed Vista and found XYZ 2 didn’t work blamed Microsoft. And there are plenty of comments all over the ‘net to the effect of “Vista has compatibility issues, don’t upgrade”
That’s what happens when you break backwards compatibility. Not “oh, silly third party program coder should fix its product”, but the standard “blame Microsoft”.
@Random Reader: actually, that’s a different issue. It’s entirely possible to build code for OS X 10.3, or 10.2, or 10.1 on an OS X 10.4 system. The same will be possible from Leopard, too. However, to do so requires the developer to avoid using new functionality, or to jump through hoops to use it if available but provide alternatives if it is not. Exactly the same applies to a Windows developer targeting Vista; if they use Vista-specific features, their software will not run on XP.
@Philo
I agree that it is entirely Company Q‘s fault in this situation; the fact that there are ‘don’t update’ comments tends to stem more from the fact that there are people depending on the product for their careers, and like you, will not, or perhaps cannot, spend the money to get the new version. These people will not update to Vista until the product they use for their livelihood works.
Again, this is no fault of Microsoft, but the fact is that people are still running Windows 95/98 because Product X doesn’t work anywhere else.
@Ahruman: It’s a little different in Apple’s case, though. The Windows API doesn’t use a versioning scheme that ties an application to a platform version — features are available or not, and that’s it. On OS X, even the same features get versioned to the platform level, so by default developers end up tying their apps to the latest version unless they jump through hoops from the beginning. You don’t even have to change anything, simply recompiling has unexpected consequences.
Random Reader: … unless you compile to a specific SDK, which you should.
But that’s my point — if you do that, you can’t get bugfixes. (And because of Apple’s versioning defaults, there’s no downlevel compatibility for most applications, and therefore fewer potential users.)
IOW, versioning allows bugfixes but causes user grief. You can’t have your cake and eat it too :(
@Ahruman: "It’s entirely possible to build code for OS X 10.3, or 10.2, or 10.1 on an OS X 10.4 system. The same will be possible from Leopard, too. However, to do so requires the developer to avoid using new functionality, or to jump through hoops to use it if available but provide alternatives if it is not. Exactly the same applies to a Windows developer targeting Vista; if they use Vista-specific features, their software will not run on XP."
Not necessarily true. I use Delphi 2007 for Win32 development, and write applications that use the new Vista Task Dialogs and File Open/Save dialogs when run on Vista, and the old style MessageDlg and File Open/Save dialogs when running on Win2K/XP/98/95/ME.
@KenW: That’s what he meant with the "jump through hoops" bit — doing dynamic calls or whatever else is necessary to activate features without creating a hard dependency.