Date: February 21, 2006 / year-entry #66
Tags: code
Orig Link: https://blogs.msdn.microsoft.com/oldnewthing/20060221-09/?p=32203
Comments: 16
Summary: In 16-bit Windows, every thread (or "task" as it was called then) had a message queue, end of story. In the transition to 32-bit Windows, this model broke down because Win32 introduced the concepts of "worker threads" and "console applications", neither of which had much need for messaging. Creating a queue for every thread in...
In 16-bit Windows, every thread (or "task" as it was called then) had a message queue, end of story. In the transition to 32-bit Windows, this model broke down because Win32 introduced the concepts of "worker threads" and "console applications", neither of which had much need for messaging. Creating a queue for every thread in the system would have been quite a waste, so the window manager deferred creating the input queue for a thread until that thread actually needed an input queue. That way, threads that didn't use the GUI didn't have to pay for something they weren't using. But once you send a message or peek for a message or create a window or do anything else that requires a message queue, poof, a message queue would be created just in time to accommodate your request. As far as you knew, the queue was always there.

The create-on-demand queue model worked out great: Queues were created only when they were needed, threads that didn't need message queues didn't get one, and nobody knew the difference.

There was only one catch: The PostThreadMessage function. Making the PostThreadMessage function create a queue on demand would have let any thread force a queue into existence in another thread and flood it with messages the target never asked for. To close that hole, PostThreadMessage fails if the target thread doesn't have a message queue.
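To make the failure mode concrete, here is a minimal sketch (the worker function is hypothetical; it relies on the documented behavior that PostThreadMessage returns FALSE when the target thread has no message queue):

    #include <windows.h>
    #include <stdio.h>

    /* A worker thread that never calls any message-queue function,
       so the window manager never creates a queue for it. */
    static DWORD WINAPI QueuelessWorker(void *arg)
    {
        (void)arg;
        Sleep(5000);            /* pretend to do non-GUI work */
        return 0;
    }

    int main(void)
    {
        DWORD tid;
        HANDLE h = CreateThread(NULL, 0, QueuelessWorker, NULL, 0, &tid);

        /* The post fails because the target thread has no queue;
           GetLastError() typically reports ERROR_INVALID_THREAD_ID. */
        if (!PostThreadMessage(tid, WM_APP, 0, 0))
            printf("PostThreadMessage failed, error %lu\n", GetLastError());

        WaitForSingleObject(h, INFINITE);
        CloseHandle(h);
        return 0;
    }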
Comments (16)
Comments are closed. |
What effect would this ‘attack’ have?
As far as I can tell it could cause the process to run out of address space, but could anything worse happen?
Also, interesting to note that even when programmers were ‘trusted’ this was considered too dangerous.
*In 16-bit Windows, every thread (or "task" as it was called then)*
Huh? I thought Win16 didn’t support threads. A quick Google search brings up http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnwui/html/msdn_styles32.asp which indicates the same thing.
So, what was the point of this again?
When an app starts up, if its entry point is WinMain (and is therefore not a console app) it could get a message queue automatically. Console apps and new threads could ask for one if they need one.
What’s wrong with that?
It sounds to me that using PostThreadMessage to send messages to a thread which would otherwise not have a queue would be harmless, like knocking on the door of a deaf person. Your average thread will never have a message queue, so it will be blissfully unaware of those thousands of messages waiting for it.
If a thread then checks the queue only to find thousands of messages waiting, it’s no different than posting thousands of messages right after the thread created its own queue. It’s not like you can’t post millions of messages to any thread’s queue after it’s been created.
The only way I can see this as a possible attack is if you could use up multiple megabytes of a process’s address space with messages that will never be read. In that case it would be safe to just limit the number of messages in a not-yet-read queue to a few hundred.
Has anybody run into the race condition of trying to post a message to a queue that hasn’t yet been created?
Stu, programmers were trusted only in the Win16 environment. PostThreadMessage doesn’t exist there, so it wasn’t an issue.
I never really understood how to correctly handle message queues in worker threads. In an application I’m currently maintaining, there’s a main GUI thread that creates windows, receives the user input, and of course processes messages with the classic GetMessage loop. Then there’s another thread that never directly interacts with the GUI, but sends messages to the main thread with SendMessage or PostMessage. So even this thread has its queue, but it never looks at it. What are the side effects? This thread creates no windows, so it should never get window messages, so if the system broadcasts messages this thread should not cause hangs. But am I still consuming system resources because the queue slowly fills up for some reason?
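A minimal sketch of the arrangement that comment describes (the window handle and the private message name are hypothetical): the worker never reads its own queue; it only posts to a window owned by the GUI thread.

    #include <windows.h>

    #define WM_APP_PROGRESS (WM_APP + 1)  /* hypothetical private message */

    static HWND g_hwndMain;               /* created by the GUI thread */

    /* Worker thread: never touches its own queue, just posts to the
       GUI thread's window. PostMessage needs only a valid HWND. */
    static DWORD WINAPI Worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i <= 100; i += 10) {
            Sleep(100);  /* simulate a unit of work */
            PostMessage(g_hwndMain, WM_APP_PROGRESS, (WPARAM)i, 0);
        }
        return 0;
    }

    /* GUI thread: the classic message loop the comment describes. */
    void RunMessageLoop(void)
    {
        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }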
Why doesn’t PostThreadMessage just fail silently when there is no message queue? If the destination thread never reads any messages, what difference would it make whether the queue exists or not? All messages posted would just be ignored anyway.
I’d like to add that PostThreadMessage was one of the first functions that was allowed to fail because of a nonexistent message queue. All the other “old” functions were called by 16-bit code on the assumption that a message queue exists for each “task”. If their semantics had suddenly changed (for example, if the message queue had to be created explicitly), it would have broken many innocent applications.
PostThreadMessage is another story. Anyone who calls it is aware of threads already, and therefore aware of message queues and the fact that a queue is not necessarily there. So it’s perfectly safe to fail because of a missing message queue: whoever calls PostThreadMessage should not assume that a message queue is always there.
Derek:
"Failing silently contributes to nearly-impossible-to-track-down bugs. Imagine that one thread (t1) posts a message to a thread (t2) which needs that message, but for whatever reason hasn’t created its queue. Then t2 reads from its queue (thereby creating the queue), and proceeds to process messages."
I hadn’t thought of that scenario. Yeah, that would result in lost messages. But what is the thread sending the message supposed to do when there is no message queue (yet)? Just keep re-posting the message until the queue is created?
It seems to me that the whole concept of tying a message queue to a thread was a bad idea from the start. Threads should be separate from message queues. A thread can create a message queue and pass its handle to another thread for posting messages. Having all of these byzantine rules about when a queue is created automatically (and when it isn’t) just leads to a world of pain and confusion.
Poster, by no means am I saying the current system is perfect. The fact that thread messages are routinely discarded by secondary message loops is evidence enough of that. But silent failure is typically worse than loud obnoxious failure. At least in the scenario I gave, the failure gives the developers a direction to proceed. e.g. The second thread could do a PeekMessage as soon as it starts.
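A sketch of that PeekMessage suggestion, following the pattern the PostThreadMessage documentation recommends (the event and function names are hypothetical): the receiving thread forces its queue into existence, then signals that it is safe to post.

    #include <windows.h>

    static HANDLE g_hQueueReady;   /* manual-reset event; hypothetical name */
    static DWORD  g_receiverTid;

    static DWORD WINAPI Receiver(void *arg)
    {
        MSG msg;
        (void)arg;

        /* Force the system to create this thread's message queue. */
        PeekMessage(&msg, NULL, WM_USER, WM_USER, PM_NOREMOVE);
        SetEvent(g_hQueueReady);   /* now it is safe to post to us */

        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            /* handle thread messages posted with PostThreadMessage */
        }
        return 0;
    }

    void StartReceiverAndPost(void)
    {
        g_hQueueReady = CreateEvent(NULL, TRUE, FALSE, NULL);
        CloseHandle(CreateThread(NULL, 0, Receiver, NULL, 0, &g_receiverTid));

        /* Sender side: wait until the queue exists, then post safely. */
        WaitForSingleObject(g_hQueueReady, INFINITE);
        PostThreadMessage(g_receiverTid, WM_APP, 0, 0);
    }

A manual-reset event is used so that any sender that waits after the signal still sees the signaled state.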
This doesn’t make sense to me!
First of all, it’s obvious that Raymond’s logical explanation comes, what, 15 years after the function was designed. So his assumptions probably reflect today’s mindset, not that of 15 or 20 years ago.
Furthermore, this is an optimization trick (creating queues on demand), and whatever benefit you get from it is clearly NOT by design. For example, consider the case where each thread gets a queue by default: now calling PostThreadMessage on a thread in another process would actually post the message, and you couldn’t do anything about it. The fact that they decided to create queues on demand, and not to do it for PostThreadMessage, doesn’t translate into "we did it to protect threads from message-flooding attacks."
That, I don’t buy, regardless of who is selling it. I personally think that even if PostThreadMessage was designed in the way Raymond explained, they probably didn’t have "avoiding message-flood attacks" in mind; there are MANY ways (and 15 years ago even more) to do all sorts of things to other processes, threads and windows, including broadcasting timer messages with nasty params (I think some of you know where I’m going with this.)
Cheers,
Ash
Ashod, RC didn’t say the design was WISE. He said it was CORRECT. There’s a big difference.
Poster, because if you post to a message queue which doesn’t exist, clearly there’s a problem.
Failing silently contributes to nearly-impossible-to-track-down bugs. Imagine that one thread (t1) posts a message to a thread (t2) which needs that message, but for whatever reason hasn’t created its queue. Then t2 reads from its queue (thereby creating the queue), and proceeds to process messages. For whatever reason, it needed that message. Maybe it was a file (or log) handle, or an address, or a memory location with info it needs, or whatever else. Since it didn’t get that message, it may lead to unexpected behavior.
T1 *knows* it sent the message successfully, and t2 *knows* it didn’t get there. Now the developers are cursing Microsoft and trying to work around the "bug" that could have been avoided if the post had simply reported failure.
If creating a message queue in the destination thread (without it knowing) is a problem, then why is it possible to kill window-less timers of *any* thread session-wide? I think that’s far more dangerous than flooding the message queue of a thread that just doesn’t process the messages (there’s a maximum number of queued messages, IIRC).
It seems pretty logical to me: a thread should not be allowed to create the queue of another thread.
It’s just rude.
"If a thread then checks the queue only to find thousands of messages waiting, it’s no different than posting thousands of messages right after the thread created its own queue. It’s not like you can’t post millions of messages to any thread’s queue after it’s been created.
The only way I can see this as a possible attack is if you could use up multiple megabytes of a process’s address space with messages that will never be read. In that case it would be safe to just limit the number of messages in a not-yet-read queue to a few hundred."
The aim is not to protect against malicious attacks, but against involuntary ones.
I guess there are programs that send messages to all threads of the system, thinking it’s fine because threads discard messages they don’t handle.
But this guess would be wrong for worker threads.
And slowly, if such messages were sent at regular intervals, the thread queues would grow.
Thomas: killing a timer of another thread is obviously evil… Every programmer is aware of that.
I think Alex Blekhman means that *PostMessage* (not PostThreadMessage) was one of the first functions.