Date: | September 27, 2005 / year-entry #281 |
Tags: | code |
Orig Link: | https://blogs.msdn.microsoft.com/oldnewthing/20050927-13/?p=34023 |
Comments: | 20 |
Summary: | If you try to do too much, you can find yourself in trouble. For example, if your destructor hands a reference to itself to other functions, those functions might decide to call your IUnknown::AddRef and IUnknown::Release methods as part of their internal operations. Consider: ULONG MyObject::Release() { LONG cRef = InterlockedDecrement(&m_cRef); if (cRef == 0)... |
If you try to do too much, you can find yourself in trouble.
For example, if your destructor hands a reference to itself
to other functions,
those functions might decide to call your
IUnknown::AddRef and IUnknown::Release methods as part of their internal operations. Consider:

ULONG MyObject::Release()
{
  LONG cRef = InterlockedDecrement(&m_cRef);
  if (cRef == 0) {
    delete this;
  }
  return cRef;
}

MyObject::~MyObject()
{
  if (m_fNeedSave) Save();
}

That doesn't look so scary now, does it? The object saves itself when destructed.
However, the Save method does more than just write the object out:

HRESULT MyObject::Save()
{
  CComPtr<IStream> spstm;
  HRESULT hr = GetSaveStream(&spstm);
  if (SUCCEEDED(hr)) {
    CComQIPtr<IObjectWithSite, &IID_IObjectWithSite> spows(spstm);
    if (spows) spows->SetSite(this);
    hr = SaveToStream(spstm);
    if (spows) spows->SetSite(NULL);
  }
  return hr;
}

On its own, this looks pretty normal. Get a stream and save to it, setting ourselves as its site in case the stream wants to get additional information about the object as part of the saving process. But in conjunction with the fact that we call it from our destructor, we have a recipe for disaster. Watch what happens when the last reference is released: Release decrements the reference count to zero and performs "delete this"; the destructor calls Save; SetSite(this) causes the stream to AddRef us, raising the reference count from zero back to one; and when SetSite(NULL) releases that reference, the count returns to zero and "delete this" runs a second time, from inside the first destruction.
Destructing the object a second time tends to result in widespread mayhem. If you're lucky, you'll crash inside the recursive destruction and identify the source, but if you're not lucky, the resulting heap corruption won't be detected for quite some time, at which point you'll just be left scratching your head.
Therefore, at a minimum, you should assert in your AddRef method that you aren't incrementing the reference count from zero:

ULONG MyObject::AddRef()
{
  assert(m_cRef != 0);
  return InterlockedIncrement(&m_cRef);
}

This would catch the "case of the mysteriously double-destructed object" much earlier in the game, giving you a fighting chance of identifying the problem. But once you've isolated the problem, what can you do about it? We'll look into that next time.
Comments (20)
Comments are closed.
It looks like you should bump the reference count in the destructor before the Save() call.
Something like:
MyObject::~MyObject()
{
assert(m_cRef == 0);
m_cRef = 1; // Set refcount back to valid state for other function calls.
if (m_fNeedSave) Save();
}
You can't do the cleanup in the destructor, because you cannot guarantee that the external object, now that it has a reference, won't hold onto it for a while; meanwhile, "delete this" is going to continue tearing down your object regardless of any now-outstanding references.
C++ object destructors are very sensitive functions.
According to the Holy Standard, Chapter 3.8, Verse 1, after the destructor has started running, the object is officially dead. Any and all operations on it as a whole are considered necrophilia and may be punishable by Undefined Behavior.
I don’t think the change to AddRef would work. When the object is first constructed, the reference count should be 0 and AddRef should be called at some point (probably via QueryInterface) to increment the reference count. If this is not done, then the reference will never go to 0 and the object is not deleted. I guess that is one reason why ATL destructors are not virtual and the FinalRelease method is provided.
Wow, that is pretty freakish. The best solution is probably to avoid doing anything other than memory deallocation in your destructor, and make sure it only gets called once. To do that, in Release() if the reference count has gone to zero, set a flag that says you’re in cleanup. If that flag is set, don’t initiate cleanup again. So, you’d end up with a Release() that looks something like this:
ULONG MyObject::Release()
{
LONG cRef, cCleanup;
cRef = InterlockedDecrement(&m_cRef);
if (cRef == 0) {
// Constructor sets m_cCleanup to zero
cCleanup = InterlockedIncrement(&m_cCleanup);
if (cCleanup == 1) {
if (m_fNeedSave) Save();
delete this;
}
}
return cRef;
}
Now, we can be certain that we only deallocate ourselves once.
My guess would be to create an internal (COM) object to pass to the save method – if we must call the save method.
Somehow serialize our data, pass it to this temporary object and then call the SaveToStream on that object.
I ran across this with a COM-like environment (Reference counted objects, interface-based accessor, just without MS’s overhead… It was for a set of all inproc DLLs so marshalling wasn’t necessary, also we were trying for cross-platform).
Anyways, we had a strange sequence of strong references that caused the situation above. Our solution was in our implementation of the Release() method to set the reference count to something out-of-range (like -100) immediately prior to calling "delete this".
This way any AddRef()/Release() would not have any effect (we only cared if the reference count == 0).
Not pretty, but it worked…
Calling functions that could fail from a destructor is a bad idea, not only in COM. A Save() function falls in this category.
I think I would just find a better way to save the object rather than doing it in the destructor. Especially if the code required to save it involved Addref’ing the object being deleted. If the SetSite didn’t require an interface to ‘this’, then there’d be no problem here. I think, regardless of any type of refcnt hacks, handing out an interface pointer to ‘this’ from the destructor is a Very Bad Thing.
VBScript needed to be designed around this exact issue.
http://blogs.msdn.com/ericlippert/archive/2004/12/22/330276.aspx
http://blogs.msdn.com/ericlippert/archive/2004/12/29/344074.aspx
ULONG CMyObject::Release(void)
{
assert(0 != m_cRef); // Somebody calling us twice?
if (1 == m_cRef) // Time to save, we’re about to be destroyed
Save();
// Save() could increase the counter
LONG cRef = InterlockedDecrement(&m_cRef);
if (0 == cRef)
delete this;
return cRef;
}
I’ve done some research into C++ destructors (on my own geek time), and using AddRef() in the destructor is just bad business.
Classes with virtual methods use a vtable to determine which methods to call at runtime. In most C++ implementations, the vtable pointer is actually changed as destruction proceeds from the most derived class down to the base class; this ensures that base-class destructors calling virtual methods do not access methods or fields of derived classes that have already been destroyed. This means that even if you somehow manage to prevent the memory from being reclaimed by the memory manager, virtual calls made during a base-class destructor will only reach that class's own methods.
My favorite is a component that pushed a message loop during the final release. The component in question will remain nameless…
All the discussions do prove that "COM object destructors are very sensitive functions".
At least they're every bit as sensitive as ordinary constructors and destructors.
What about NOT saving in the destructor? I find it cleaner and more flexible to have an explicit call to save rather than an automatic save. Ok, sometimes it’s not your choice…
PingBack from http://www.antcassidy.com/code/?p=44
PingBack from http://www.antcassidy.com/code/?p=8