Date: October 14, 2005 / year-entry #306
Tags: code
Orig Link: https://blogs.msdn.microsoft.com/oldnewthing/20051014-19/?p=33763
Comments: 23
Summary: The window manager and GDI objects as a general rule will automatically destroy objects created by a process when that process terminates. (The window manager also destroys windows when their owner threads exit.) Note, however, that this is a safety net and not an excuse for you to leak resources in your own program with...
The window manager and GDI objects as a general rule will automatically destroy objects created by a process when that process terminates. (The window manager also destroys windows when their owner threads exit.) Note, however, that this is a safety net and not an excuse for you to leak resources in your own program with the attitude of "Oh, it doesn't matter, the window manager will clean it up for me eventually." Since it's a safety net, you shouldn't use it as your primary means of protection.

For one thing, leaving junk behind to be cleaned up is just plain sloppy. It suggests that your program is too lazy (or stupid) to keep track of its own resources and has abdicated this responsibility to the safety net. It's like throwing your clothes on the floor because you know your mother will eventually come by to pick them up and put them away.

For another thing, this clean-up happens inside the window manager, and no other window manager activity will occur until the clean-up is complete. If you leaked hundreds or thousands of objects, the system will seem visually unresponsive because the window manager is busy. (The system is still running, though. Operations that do not rely on the user interface, such as computation-intensive operations or network activity, will still proceed normally while the window manager is cleaning up.)

Why didn't the window manager optimize the "massive clean-up" scenario? Because when you design a system, you focus on optimizing the case where people are using your system responsibly and in the manner intended. You don't want to reward the people who are abusing you. Imagine what kind of message you'd be sending if you designed the system so that people who abuse the system get better performance than people who follow the rules!
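For illustration, here is a minimal sketch (the paint handler and the pen are hypothetical, not from the original article) of the bookkeeping being asked for: every GDI object you create is handed back as soon as you are done with it, rather than left for the safety net.

#include <windows.h>

// Hypothetical paint handler: the pen we create is deleted
// before we return, instead of being leaked for the safety net.
void OnPaint(HWND hwnd)
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);

    HPEN hpen = CreatePen(PS_SOLID, 1, RGB(255, 0, 0));
    if (hpen) {
        HGDIOBJ hpenOld = SelectObject(hdc, hpen);
        MoveToEx(hdc, 0, 0, NULL);
        LineTo(hdc, 100, 100);
        SelectObject(hdc, hpenOld); // deselect before deleting
        DeleteObject(hpen);         // give the object back immediately
    }

    EndPaint(hwnd, &ps);
}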
Comments (23)
Comments are closed.
The garbage collector is designed to be very efficient at cleaning up behind you, and that is its purpose. The managed programming model is to put the burden of object lifetime management on the garbage collector, which gladly accepts this responsibility.
The window manager was not designed to be very efficient at cleaning up behind you, because that’s not its purpose. The unmanaged programming model is to put the burden of object lifetime management on the application.
Use each tool as it was intended.
Well, a secure operating system is intended to clean up leaked resources (to make them available for other programs), so relying on that is not using it in a way other than intended.
Also, the application programmer can save a lot of time and use it to get to market before the competition and/or improve the program in other ways. And I find it unlikely that users will blame the application if the system becomes a little slower after it has been closed; they will most probably blame the 160 gigabytes of useless software they've installed, or spyware, or in the end Windows itself (thus supporting fast resource reclamation ultimately becomes your problem and not the application programmer's).
The last thing a user will evaluate about a program is how long it takes to close.
"You don’t want to reward the people who are abusing you."
While I agree that you don't necessarily want to encourage poor programming habits, in the long run this doesn't exactly punish the developer, because if they are taking advantage of the system in this way, they don't really care or know anyway. What it does is punish the end user who needs to use the software.
Kat, that’s not at all true. Users notice when their system regularly hangs immediately after closing a particular app. What they don’t know is why it happens. (I’m sure there are other potential reasons as well.)
"For one thing, leaving junk behind to be cleaned up is just plain sloppy"
Counterarguments:
– Why would I risk slowing down my exit code by swapping in code and/or data, knowing that the system must do the work, too?
– Why should every developer spend time writing, debugging and optimizing cleanup code, if Microsoft could do it correctly once, and Microsoft’s work would be easier because the OS knows that there is nothing running anymore in the application?
– Why would I spend time writing and debugging cleanup code if I could also spend that time on features, usability, whatever?
And yes, I do know that there often are valid arguments in favor of cleaning up things, but leaving junk behind to be cleaned up is not always sloppy.
When your component is loaded, used, and unloaded, you need to clean up everything when you’re unloaded. You have to write this cleanup code anyway.
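As a hedged illustration of that point (the module-global brush and its name are invented for this sketch): a component that creates objects at load time must destroy them at unload time, because the process, and therefore the termination safety net, outlives the component.

#include <windows.h>

static HBRUSH g_hbrBackground; // hypothetical module-global GDI object

BOOL WINAPI DllMain(HINSTANCE hinst, DWORD dwReason, LPVOID pvReserved)
{
    switch (dwReason) {
    case DLL_PROCESS_ATTACH:
        // (Real code might defer this to an explicit init export
        // to avoid doing work under the loader lock.)
        g_hbrBackground = CreateSolidBrush(RGB(192, 192, 192));
        break;
    case DLL_PROCESS_DETACH:
        // The process keeps running after we unload, so the
        // process-termination safety net will not save us here.
        if (g_hbrBackground) {
            DeleteObject(g_hbrBackground);
            g_hbrBackground = NULL;
        }
        break;
    }
    return TRUE;
}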
Would you have a run-down of GDI table sizes and desktop heap, on a per OS basis? (including x64 and Vista of course…)
As Raymond pointed out, if your component is being loaded and unloaded a lot in the same process context, cleanup is required.
Also, Windows programming in general is full of software that can’t run longer than 24 hours because of resource leaks.
Read this post by Larry O. Learn from how he got burned by doing something that was "just good enough".
http://blogs.msdn.com/larryosterman/archive/2005/10/14/481137.aspx
My favourite text editor leaks GDI resources slowly over time. After a really long editing session, the window manager slows to a crawl. (Closing and reopening the offending program solves that issue.)
Developers should clean up their resources as they use them. Waiting until the end would result in even more ill-behaved programs that slow the system while they are running.
I'll be happy when eventually all programming is done in a managed, garbage-collected environment and these issues become a thing of the past.
When I check out of a good hotel, I only pack up what I want to take with me. I don’t expect to have to clean up. That is the job of the maid.
I am stunned to see so many people defending resource leaks.
Please let me know what programs you work on so I will know what to stay away from.
I mean, come on people, you requested a limited OS resource (GDI object, USER object, memory, etc). Are you seriously so lazy that you can’t even bother to give it back when you are done with it?
It is terribly annoying when a program that has 200 megs in leaked resources (like Firefox 1.0.x) closes and has to page all of that in just to release it. I have no idea whether that's the fault of the program's resource freeing or of the system's safety net kicking in, though.
It's much more annoying trying to do other things while the program that leaked all over my RAM gets paged in and out in the first place, of course. Resource leaks of any sort are usually the only things that have me genuinely complaining to developers, rather than just requesting or complimenting.
Remember, managed memory isn't a panacea; you can leak resources just as badly in a managed environment through bad scoping, broken pools, or incorrect teardown. Firefox is built on a managed framework on top of C++ and was proof of that.
Actually, Firefox is quite fast and hasn't given me a problem even once. (It's funny, because I first switched from IE to Opera over a leak problem which happened on Win2000 only – and is by now probably solved.)
Second, I think there is a double misunderstanding between defenders and attackers of leaked resources. Of course resources allocated temporarily must be freed; otherwise your program has a limited life-span before crashing and slows the system to a crawl.
The question, however, arises in multiple facets, I think, for those "always on" resources whose lifetime should effectively end only when the program ends: dynamically loaded plugins/scripts, configuration data, main data structures, etc.
Third, a funny thought I had yesterday.
Let's suppose, for whatever reason, that my program has some absurd data structure like a map<string, map<string, vector<int>>> allocated at startup (or during program use, but never released because it is always in use – not all programs are of the file/new/open/save kind). If the structure is quite complex, releasing its memory could be quite expensive (all the string, vector and map destructors, which are in any case useless; all those heap operations; etc.). There is a high chance that not deallocating it means the OS just VirtualFrees some number of memory blocks, which has the potential of being really, really faster. Also, if for whatever reason the structure has been paged out to disk, there is no need to page in memory that is about to be freed anyway. There is the problem of non-executed destructors, but this is hardly a problem if brain was used when programming (a destructor should only contain instructions which release other allocated resources). (A sketch of this trick appears after this comment.)
And fourth, I don't really think it matters, for a big portion of the software market, if the PC reacts badly immediately after closing a program. For example, I guess that for videogames it's a non-problem. For 3D modeling software the same (and believe me, 3dsmax surely leaves something behind to be released, but that is the smallest of its problems).
That said, from a programmer's point of view, even leaks that don't compromise program functionality (always-on resources) should be fixed. From a real-life point of view, spending less money/time on development, or adding a new feature, or fixing *real* bugs may be wiser.
So my opinion: given infinite time/budget, even the smallest, most inconsequential resource leak should be fixed. In real life, things may change depending on a thousand factors.
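A minimal sketch of the "third thought" above (the names are made up): keep the big structure behind a pointer and deliberately skip its destructor at exit, letting process teardown reclaim the pages in bulk. Note this only works for process-private memory, not for window-manager or GDI handles.

#include <map>
#include <string>
#include <vector>

typedef std::map<std::string, std::map<std::string, std::vector<int> > > BigTable;

// Hypothetical: the structure lives for the entire run of the program.
static BigTable* g_table;

int main()
{
    g_table = new BigTable;
    // ... fill and use g_table for the lifetime of the program ...

    // Deliberately no "delete g_table": running thousands of string,
    // vector and map destructors (possibly paging the data back in)
    // buys nothing here, since the OS reclaims the whole heap at exit.
    return 0;
}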
You mean this is bad:
concordance::~concordance()
{
//I’m too lazy to write this now.
}
[I know it is… ;-)]
I’m willing to bet not all resource leaks are as easy to spot as my code example above. Is there anything a programmer/developer can do to make sure he’s getting everything?
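One common answer, sketched minimally here (the wrapper is hypothetical, not a Windows API): let a C++ object own the handle so that the cleanup is written once and runs on every code path. Task Manager's "GDI Objects" column is also handy for spotting handles that never come back.

#include <windows.h>

class GdiObject // hypothetical minimal RAII wrapper for a GDI handle
{
public:
    explicit GdiObject(HGDIOBJ h) : m_h(h) { }
    ~GdiObject() { if (m_h) DeleteObject(m_h); }
    HGDIOBJ get() const { return m_h; }
private:
    HGDIOBJ m_h;
    GdiObject(const GdiObject&);            // non-copyable
    GdiObject& operator=(const GdiObject&); // non-copyable
};

// Usage: the pen is deleted no matter how the scope is left.
void Draw(HDC hdc)
{
    GdiObject pen(CreatePen(PS_SOLID, 1, RGB(0, 0, 255)));
    HGDIOBJ old = SelectObject(hdc, pen.get());
    // ... draw ...
    SelectObject(hdc, old);
} // ~GdiObject runs here and calls DeleteObject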
"You don’t want to reward the people who are abusing you."
So what about all those abusive apps you've rewarded in the name of back-compatibility?
They still run, but the apps that follow the rules run better.
I think people are misinterpreting the sense of "reward" here. I mean reward relative to using the system as designed. You never want to be in the position where doing the wrong thing gives you an advantage. SimCity did the wrong thing, but it was not rewarded with any advantage over a program that does the right thing.
It's far from clear that there's any benefit to having a long-winded one-shot cleanup in the shutdown phase of an /application/. Aside from the fact that Windows arbitrarily punishes the user if you don't do it, why should you?
Raymond, IMO there's no way to achieve your goal here. It's always going to be cheaper for me to let the OS discard all my long-term resources automatically than for me to free 99.9% of them by painstaking means and then let the OS finish off. We can guess that this will be true by looking at a smaller system. Suppose I have a fixed-size allocator, e.g. because I use thousands of small fixed-size structures. Your argument says that at the end of the program I should individually free all the remaining allocations. But actually it's always going to be faster to just throw the whole lot away (see the sketch after this comment), and scaled up that means it's faster to let the OS throw all my allocations away too.
Of course you can arbitrarily punish behaviour that you don't like, but the people who then get hurt are users. The guys who wrote Unreal Tournament don't care that you saved, say, 20 man-days by doing it your way; they certainly aren't going to spend extra man-days on their project to help you out. But the user who exits UT2k4 and finds her machine unresponsive for 5 seconds is going to blame Microsoft; after all, Microsoft sold her a supposedly multitasking OS.
This sort of laziness in the interface code is what causes people to assert that e.g. BeOS is "faster". It’s not faster, benchmarks confirm that, but it feels faster because the designers optimised to keep the UI responding. Users appreciate that.
In 1995 it might have been impressive that your UI clean-up doesn’t hang RC5 calculations, or (taking an example from early Mac OS) that your menu drawing code doesn’t freeze TCP/IP networking. This is 2005, desktop computers have come a long way in 10 years, it’s time UI house-keeping didn’t hang the UI either. Add this to Vista’s bug list Raymond, I guarantee it won’t be the least important thing "fixed" before they ship it.
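To make the fixed-size-allocator argument above concrete (a sketch with invented names): if the allocator carves its nodes out of large slabs, throwing everything away costs one free() per slab, no matter how many nodes are live.

#include <cstdlib>

// Hypothetical fixed-size pool: nodes are carved out of big slabs.
struct Slab {
    Slab* next; // slabs are chained so they can be freed in one pass
    // node storage follows the header
};

struct Pool {
    Slab* slabs; // list of slabs
    // free list, node size, etc. omitted
};

// Tearing down node by node costs O(live nodes); tearing down the
// whole pool costs O(slabs) - that is what "throw the whole lot
// away" means.
void PoolDestroy(Pool* pool)
{
    Slab* s = pool->slabs;
    while (s) {
        Slab* next = s->next;
        std::free(s); // one call releases thousands of nodes at once
        s = next;
    }
}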
I think people are still missing my point. I’m not talking about cleaning up memory on the heap (which can be cleaned up at one go by destroying the heap). I’m talking about systemwide shared resources like window handles. It’s not like the system can just "destroy the window manager" when your process exits.
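The contrast Raymond is drawing shows up directly in the APIs (a hedged sketch; the handle list is hypothetical): a process-private heap really can be discarded in one call, but window handles live in the window manager's systemwide tables and must be retired one at a time.

#include <windows.h>

// Process-private memory: one call discards every allocation in it.
void DiscardHeap(HANDLE heap)
{
    HeapDestroy(heap); // everything allocated from it is gone at one go
}

// Systemwide window-manager objects: there is no bulk-teardown call;
// each handle is destroyed individually.
void DiscardWindows(HWND* windows, int count)
{
    for (int i = 0; i < count; ++i)
        DestroyWindow(windows[i]);
}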
I think you may be misunderstanding your market.
Garbage-collected environments seem to be targeted at LOB developers (Cobol, Java, .Net) who want to deal with the platform as a simplified abstraction. So just as Joe Cobol saw the computer as a decimal-and-text machine, a .Net guy wants to leave these kinds of cleanup as an exercise for the underlying platform (i.e. "magic").
I don’t mean that as a criticism, more of an observation of the cat that’s been let out of the bag. Sort of inevitable in the "give ’em an inch…" vein. It is a question of the expectation level that has been set.
So, it seems that we (the programmers) should do the cleanup… So now I’ll ask:
How do you Unlock a LockResource? Is it safe to free a Resource that has been Locked?
Is it safe to use FreeResource (it’s deprecated)? Because it would be much easier to use a single function instead of a collection of five. If yes/no, what are the differences between FreeResource and DestroyAcceleratorTable/DestroyObject/…?
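For what it's worth, a sketch of the documented Win32 pattern (the resource name is invented; per the documentation, UnlockResource and FreeResource are 16-bit leftovers that are unnecessary for module resources, while DestroyAcceleratorTable and friends apply to objects you *create*, such as with CreateAcceleratorTable, not to loaded resources):

#include <windows.h>

// Hypothetical resource name for illustration.
void ReadMyResource(HINSTANCE hinst)
{
    HRSRC hrsrc = FindResource(hinst, TEXT("MYDATA"), RT_RCDATA);
    if (!hrsrc) return;

    HGLOBAL hres = LoadResource(hinst, hrsrc);
    if (!hres) return;

    DWORD cb = SizeofResource(hinst, hrsrc);
    const void* p = LockResource(hres);
    if (p) {
        // ... use cb bytes at p ...
    }
    // No UnlockResource/FreeResource needed in Win32: module resource
    // data is mapped with the image and goes away when the module does.
}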