Date: | September 2, 2005 / year-entry #250 |
Tags: | other |
Orig Link: | https://blogs.msdn.microsoft.com/oldnewthing/20050902-00/?p=34333 |
Comments: | 44 |
Summary: | Accuracy is how close you are to the correct answer; precision is how much resolution you have for that answer. Suppose you ask me, "What time is it?" I look up at the sun, consider for a moment, and reply, "It is 10:35am and 22.131 seconds." I gave you a very precise answer, but not... |
Accuracy is how close you are to the correct answer; precision is how much resolution you have for that answer. Suppose you ask me, "What time is it?" I look up at the sun, consider for a moment, and reply, "It is 10:35am and 22.131 seconds." I gave you a very precise answer, but not a very accurate one. Meanwhile, you look at your watch, one of those fashionable watches with notches only at 3, 6, 9 and 12. You furrow your brow briefly and decide, "It is around 10:05." Your answer is more accurate than mine, though less precise.

Now let's apply that distinction to some of the time-related functions in Windows. The GetTickCount function returns a value with millisecond precision, but its accuracy is typically much worse, because the underlying counter advances only at each timer tick, which on typical hardware fires every 10 to 55 milliseconds.

If you're looking for high accuracy, then you'd be better off playing around with the QueryPerformanceCounter and QueryPerformanceFrequency functions, which usually give you both accuracy and precision, though at a cost in performance. What those functions actually use as their time source varies from machine to machine, and on multiprocessor machines or machines with buggy hardware you may have to do additional work to get consistent results.
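As a rough illustration, here is a minimal sketch that samples both timers around an arbitrary 25ms sleep; the exact figures it prints will vary from machine to machine.

<code>
#include <windows.h>
#include <stdio.h>

int main()
{
    // Cheap but coarse: GetTickCount reports milliseconds, yet it only
    // advances at each timer tick.
    DWORD tickBefore = GetTickCount();

    // Finer-grained: QueryPerformanceCounter, whose unit is reported by
    // QueryPerformanceFrequency and differs between machines.
    LARGE_INTEGER freq, qpcBefore, qpcAfter;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&qpcBefore);

    Sleep(25);  // stand-in for the operation being timed

    QueryPerformanceCounter(&qpcAfter);
    DWORD tickAfter = GetTickCount();

    printf("GetTickCount elapsed: %lu ms\n", tickAfter - tickBefore);
    printf("QueryPerformanceCounter elapsed: %.3f ms\n",
           (qpcAfter.QuadPart - qpcBefore.QuadPart) * 1000.0 / freq.QuadPart);
    return 0;
}
</code>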
Comments (44)
Comments are closed. |
So, I’m assuming the WaitForSingleObject and WaitForMultipleObjects also use GetTickCount for timing? If I want a timeout of say 30ms, is it going to use GetTickCount and end up timing out somewhere between 10 and 55ms?
Mosat, it’s not going to use GetTickCount, but GetTickCount and WaitForSingleObject use the same timer with the same resolution.
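For illustration, here is a quick sketch (it creates an event that is never signaled, so every wait times out) that shows how the actual wait gets rounded to the timer tick:

<code>
#include <windows.h>
#include <stdio.h>

int main()
{
    // Manual-reset event that nobody ever signals, so every wait times out.
    HANDLE event = CreateEvent(NULL, TRUE, FALSE, NULL);
    if (!event) return 1;

    LARGE_INTEGER freq, before, after;
    QueryPerformanceFrequency(&freq);

    for (int i = 0; i < 5; i++) {
        QueryPerformanceCounter(&before);
        WaitForSingleObject(event, 30);   // ask for a 30ms timeout
        QueryPerformanceCounter(&after);
        printf("requested 30ms, waited %.1f ms\n",
               (after.QuadPart - before.QuadPart) * 1000.0 / freq.QuadPart);
    }

    CloseHandle(event);
    return 0;
}
</code>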
There is a LOT to windows timing.
Tons of methods with tons of tradeoffs…
We yack about it on this email list for midi developers:
http://groups.google.com/group/mididev
multimedia timers, directmusic timers, NT waitable timers, yada yada yada…
I wish it was all simpler…:(
…Steve
I had a problem with broken performance data on a four-way server that had two of its processors running at one speed, and two at another speed. We discovered that QueryPerformanceCounter and QueryPerformanceFrequency gave different results depending on which of the pairs of processors they ran on, so the perfmon data was mostly garbage. That was eight years ago so it’s probably fixed now.
What a typical M$ post by Mr Smug-I’m-So-Much-Better-Than-You Chen. Boils down to: ‘you can’t have both accuracy and precision’.
Err why not? It’s not like they’re mutually exclusive, in fact, in almost all fields of endeavor there’s a high degree of correlation! How about actually implementing a method to measure time both accurately and precisely, instead of patronizing us by explaining something every 12 year old highschooler is taught in science class?
I’ve just read the information about the RDTSC instruction, and Intel specifies that on Pentium M and similar processors the value is not incremented at a constant rate (and that’s logical, since the frequency changes depending on the CPU usage). Does Windows (2k, XP) in QueryPerformanceCounter take the RDTSC values on Pentium M or not? What’s more important for this Windows function — a correlation to seconds or a correlation to CPU clocks? I’d prefer the latter.
I’m sorry, Huh, that you find today’s entry insulting. But you’d be surprised how many people don’t understand the difference. Search Google Groups for “GetTickCount” and you’ll see lots of confusion. (Oh, and where did I say you can’t get both precision and accuracy? QueryPerformanceCounter usually gets you both, but at a cost in performance.)
About 4 years ago, I had some interesting time-warped results from the QueryPerf APIs on an IBM ThinkPad A21p. This had a processor with SpeedStep, which I think was relatively new back then. The QueryPerf counters would change speed according to which of its speedstep power modes the processor was in… This made it hard to get meaningful results. :)
(The System info control panel applet would also report a different clock speed depending on the processor power mode too.)
…and to the aptly named: "Huh" try reading the article, and then try engaging your brain. Enjoy the novelty of this sensation.
You accuse Raymond of saying ‘you can’t have both accuracy and precision’ but in fact he says nothing of the sort. The precision of the QueryPerf stuff is, as he says, variable, and from that you appear to have concluded, incorrectly, that the precision must be low.
In fact, it’s very high – it’s always orders of magnitude better than any of the other APIs. In my experience its precision has always been at least as good as microsecond order, and it’s often orders of magnitude better in the cases where it can use RDTSC.
Huh, Raymond did not say "you can’t have both accuracy and precision". He actually said that the 2 concepts were different. I’m afraid that the topics discussed here are probably too complex for someone of your limited comprehension skills.
Huh: In most fields, people are trained not to report any more precision than they have accuracy. If a physicist can only measure time to plus or minus 55 ms, he says so.
Raymond is explaining that a function like GetTickCount doesn’t necessarily have millisecond precision just because it returns a value in milliseconds. That time measurement has to come from somewhere, and unfortunately computers on which Windows runs don’t have a standard high-accuracy, high-precision time clock that is also quick to access. So different APIs have made different tradeoffs, and this IS worth reporting on.
I wonder why no one ever mentions the timeGetTime function in winmm.dll. It’s probably the simplest way to get the time, and it’s quite precise (~5ms), although I’m not sure about its accuracy.
Interestingly, MSDN says that in Win95 the resolution is 1ms, while in NT/2000 it can be 5ms or more. How can that be?
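One way to see the resolution for yourself is to watch how timeGetTime advances. The sketch below (which assumes winmm.lib is linked) also calls timeBeginPeriod to request 1ms resolution; the default period on NT-based systems may be coarser, which could account for the 5ms figure.

<code>
#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>
#pragma comment(lib, "winmm.lib")

int main()
{
    // Request 1ms timer resolution for the duration of the measurement.
    timeBeginPeriod(1);

    DWORD last = timeGetTime();
    for (int i = 0; i < 10; i++) {
        DWORD now;
        do { now = timeGetTime(); } while (now == last);  // spin until the value changes
        printf("timeGetTime advanced by %lu ms\n", now - last);
        last = now;
    }

    timeEndPeriod(1);  // always pair timeEndPeriod with timeBeginPeriod
    return 0;
}
</code>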
About a year back, I spent 2 weeks studying nothing but timers and counters in Windows (for a borderline multimedia project).
What Raymond says here is 100% accurate (and precise!). I don’t see any smugness in it. I think we just have to jump on Huh and beat him up in a proper democratic fashion.
Huh, you should try being nicer.
"About 4 years ago, I had some interesting time-warped results from the QueryPerf APIs on an IBM ThinkPad A21p. This had a processor with SpeedStep, which I think was relatively new back then. The QueryPerf counters would change speed according to which of its speedstep power modes the processor was in… This made it hard to get meaningful results. :)"
Sadly, this same thing happens today (on XP x64) with the Cool’n’Quiet feature of AMD Athlon 64 processors (a SpeedStep equivalent). I’ve had to yank out all the QueryPerformanceCounter calls in my apps to work around this.
"I’ve had to yank out all the QueryPerformanceCounter calls in my apps to work around this."
Which method have you reverted to? Good old GetTickCount()?
what about
<code>
__declspec(naked) unsigned long GetCounter()
{
__asm {
    rdtsc  // low 32 bits of the timestamp counter land in EAX, the return register
    ret    // a naked function must supply its own return sequence
}
}
</code>
And by the way, keep in mind you most likely do not need a lot of precision with timing.
"Which method have you reverted to? Good old GetTickCount()?"
Yes. The resolution isn’t as good, but at least it increments at a constant rate.
I didn’t try timeGetTime; that might work too, provided it isn’t based on RDTSC.
autist0r: how about measuring? it would be nice to have at least a precision of a millisecond..
QueryPerformanceCounter is nice for this.. but the downside is:
http://support.microsoft.com/default.aspx?scid=KB;EN-US;Q274323&
anybody have another solution?
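One mitigation I've seen suggested (not necessarily what that KB article recommends) is to pin the thread to a single processor while sampling the counter. Roughly, with ReadCounterOnOneCpu as a made-up helper name:

<code>
#include <windows.h>
#include <stdio.h>

// Pin the calling thread to CPU 0 while reading the counter, so successive
// readings come from the same processor's time source.
LONGLONG ReadCounterOnOneCpu(void)
{
    HANDLE thread = GetCurrentThread();
    DWORD_PTR oldMask = SetThreadAffinityMask(thread, 1);  // CPU 0 only

    LARGE_INTEGER counter;
    QueryPerformanceCounter(&counter);

    if (oldMask) SetThreadAffinityMask(thread, oldMask);   // restore old affinity
    return counter.QuadPart;
}

int main()
{
    printf("%I64d\n", ReadCounterOnOneCpu());
    return 0;
}
</code>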
I too have had problems with QueryPerformanceCounter(). On some el cheapo overclocked PCs, the LSBs were garbage and would sometimes go back in time! Needless to say, most apps do not like going back in time. :D
The Multimedia timers are also interesting Windows timers. The MM timers sleep using WaitForSingleObject() or whatever, but wake up early. Then they spin in a busy loop until the actual wake-up time, so they can fire events very precisely.
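Roughly, the idea would look something like this sketch (my own reconstruction of the technique, with WaitUntil as a made-up helper; the real multimedia timer code surely differs):

<code>
#include <windows.h>

// Sleep until shortly before the deadline, then busy-wait on the
// high-resolution counter so the wakeup lands close to the target.
void WaitUntil(LONGLONG deadline, LONGLONG freq)
{
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);

    // Coarse wait: leave a couple of milliseconds for the spin phase
    // to absorb scheduler slop.
    LONGLONG remainingMs = (deadline - now.QuadPart) * 1000 / freq;
    if (remainingMs > 2) Sleep((DWORD)(remainingMs - 2));

    // Fine wait: spin until the deadline actually arrives.
    do {
        QueryPerformanceCounter(&now);
    } while (now.QuadPart < deadline);
}

int main()
{
    LARGE_INTEGER freq, start;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    WaitUntil(start.QuadPart + freq.QuadPart / 100, freq.QuadPart);  // ~10ms from now
    return 0;
}
</code>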
Chris – how do you know MM timers spin?
Inquiring minds wanna know…
Thanks for that informative post, very interesting.
Richard
"Raymond is explaining that a function like GetTickCount doesn’t necessarily have millisecond precision just because it returns a value in milliseconds."
I think you mean "[…] GetTickCount doesn’t necessarily have millisecond *accuracy* just because it returns a value in milliseconds."
:)
I implemented a kernel driver that connects to my USB webcam, and determines with very high accuracy (but no precision) the tick count by looking at the relative position of the sun. Of course, this method doesn’t work so well at night time, but I’m working on a lunar based mod.
</joke>
Have a nice friday night everyone.
The problem with that, memet, is that you can’t really have an accuracy that’s higher than your precision — there’s no way that a system which reports things to the nearest hour could have millisecond accuracy, because the nearest hour is usually more than a millisecond away.
(Meanwhile, your webcam system ought to be working fine at night, with a +/- six hour — or eight in winter at high latitudes — precision. All it needs to measure is "dark")
I bring that up not to harsh on your joke, but because there’s an important point there: what your system has that’s valuable is a lack of drift — over really long periods of time, the fractional error is quite tiny (assuming the system stays running, of course!).
Observation: a number for accuracy is nearly meaningless, unless one also specifies what length of time one’s measuring for.
Consider, for instance, a hypothetical timer that’s usually off by 5%, and reports with a precision of 5ms. For a 2.0s time, it’s probably off by 100ms. But, for anything under 0.1s or so, it’s accurate to +/- 5ms. That’s very different from a timer that’s off by 100ms no matter how short the measurement time is.
And so that brings me back to Raymond’s original post: He hasn’t really told us all that much about the accuracy of GetTickCount. Is that 10-55ms expected error still valid if I’m timing something that only lasts 20ms? What if I measure something that takes 20 minutes? Is this a sort of random error that effectively gets added to all measurements regardless of size, or is it a typical value of the drift over an "average" measurement time?
If you can reliably repeat an operation enough times, you can get as much accuracy as you need out of GetTickCount, even better than 1ms.
As for the actual behavior of GetTickCount: If you watch the output for a while, you’ll see that it will sit at one value and then jump up by 10 or 11ms. You’ll see even bigger jumps periodically, but that’s for a different reason.
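For example, something along these lines, where DoOperation is a hypothetical stand-in for whatever is being measured:

<code>
#include <windows.h>
#include <stdio.h>

// Hypothetical operation to be timed; substitute the real work here.
static void DoOperation(void)
{
    volatile int x = 0;
    for (int i = 0; i < 1000; i++) x += i;
}

int main()
{
    const int iterations = 100000;

    DWORD before = GetTickCount();
    for (int i = 0; i < iterations; i++) DoOperation();
    DWORD after = GetTickCount();

    // The total error is still on the order of one timer tick, but it is
    // spread across all the iterations, so the per-iteration figure can be
    // much finer than one millisecond.
    printf("%.4f ms per iteration\n", (double)(after - before) / iterations);
    return 0;
}
</code>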
At http://developer.nvidia.com/object/timer_function_performance.html there is an article empirically comparing timing methods on a performance point of view.
The QueryPerformance* functions fail unless the supplied LARGE_INTEGER is DWORD aligned. Shouldn’t that be documented? I wasted a couple of hours once trying to figure this out… :)
Actually I would say that "precision" is the wrong term to use here. The dictionary.com definition is:
pre·ci·sion
The state or quality of being precise; exactness.
1. The ability of a measurement to be consistently reproduced.
2. The number of significant digits to which a value has been reliably measured.
Neither of these definitions really seem to be what Raymond is talking about. I would stick with the term "resolution" rather than precision, since for the very reasons Raymond mentions, the values returned by these functions are neither accurate nor precise.
This article does raise an interesting point though; in maths and physics (or any of the sciences for that matter), for example, it is basically *illegal* to quote any of your figures in more precision than you have accuracy. It is not so in computing – a point worth re-iterating.
Friday, September 02, 2005 3:50 PM by Huh
> I really should have my reproductive organs
> ground to hamburger so as to prevent any
> possibility of inflicting my corrupted DNA
> on the gene pool.
Nah, that’s excessive. All you have to do is patronize yourself by learning something every 12 year old highschooler is taught in mathematics class. It’s called logic. And in fact if you learn to think logically, then you might even learn enough to work on computers.
Next you’ll need to keep your reproductive organs in order to put something like this on display:
"My wristwatch is more accurate than Windows. You guys get ticks every 55ms or more or less and you don’t even know if it’s more or less. The only reason Microsoft even gives you the time of day is because NTP is enabled by default in Windows XP. Well, my wristwatch is around 300km from the nearest transmitter of atomic clock synchronized time signals, so due to the speed of light my wristwatch is around 1 millisecond behind. Permanently. Always. And how many milliseconds does it take for NTP to get you the time of day from Microsoft?"
Back in the days of windows 3.1 I used the 8253 timer chip directly and got about 840 nanoseconds of resolution since the clock was 1.19MHz. I found that I could read the timer of a 33MHz 486 in 40uSec. On a periodic event this offset is pretty constant so it washes out. When I ported the code to windows 98, surprise! The timer code still worked. I haven’t tried porting this to WinNT, and haven’t had the need. But another project on WinNT showed that the multimedia timer (MMTIMER.DLL) was pretty stable, about +/- 100 usec jitter with a 1 msec interrupt/callback on a 486 33MHz.
Anyways, I know that the QueryPerfFreq() API can return different freq’s, I just have never seen anything different than 1.19MHz (BTW, that’s where the 55 ms DOS INT 8 tick comes from, 840 ns * 64K ticks = 55 ms)
Actually it was accurate to within about two minutes when I read it.
I would like to point out that the other ‘Huh’ posting was not made by me.
That I have to even say this speaks volumes about how selectively the ‘no impersonation’ rule is enforced around here: no surprise there.
I deleted the fake ‘Huh’ posting.
The "no impersonation" rule is enforced when people point out a violation. How am I supposed to know who the real ‘Huh’ is?
Monday, September 05, 2005 3:55 PM by oldnewthing
> I deleted the fake ‘Huh’ posting.
How do you know it was fake? When I lived in Toronto there were two Norman Diamonds in the phone book. (There’s only one in Ome now though.)
And don’t forget, in the country where your ancestors emigrated from, one Hu is considering rehabilitating another Hu who was purged or something like that. Maybe one who was transliterated differently needs rehabilitating too, how do you think, huh?
Can you clarify the concluding sentence of the post? Do *we* have to do additional work for multiprocessors or ‘buggy hardware’ or is this handled by Windows?
Why did the other poster have to yank the QPC code from his app? If you call QueryPerformanceCounter and Frequency properly for short periods, you shouldn’t have any problems, right?
"Why did the other poster have to yank the QPC code from his app?"
I discovered that on CPUs with variable-speed clocks (e.g. Athlon 64 with its Cool’n’Quiet feature enabled) the actual frequency of the performance counter is equal to the current CPU clock frequency — it is NOT a fixed number.
For example, on a 1.8 GHz Athlon 64 running at full speed the performance counter ticks 1,800,000,000 times per second. But when the load is light, the processor switches to 1.0 GHz, and the performance counter ticks only 1,000,000,000 times per second. As you can probably imagine, this behavior breaks any code that attempts to convert performance counter values into time figures, e.g.:
count_before = QPC;
… long operation …
count_after = QPC;
seconds_elapsed = (count_after - count_before) / QPF;
QPF appears to always return the maximum processor frequency (1,800,000,000), so even if the frequency stays at 1.0 GHz during the entire operation, seconds_elapsed will still be off.
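Here is a minimal sketch of that pattern with a GetTickCount cross-check added; on the machines described above, the two figures disagree noticeably whenever the CPU clock drops during the operation.

<code>
#include <windows.h>
#include <stdio.h>

int main()
{
    LARGE_INTEGER freq, before, after;
    QueryPerformanceFrequency(&freq);   // on the hardware described above,
                                        // this reports the maximum CPU clock

    DWORD tickBefore = GetTickCount();
    QueryPerformanceCounter(&before);

    Sleep(2000);                        // stand-in for the long operation

    QueryPerformanceCounter(&after);
    DWORD tickAfter = GetTickCount();

    double qpcSeconds  = (double)(after.QuadPart - before.QuadPart) / freq.QuadPart;
    double tickSeconds = (tickAfter - tickBefore) / 1000.0;

    printf("QPC says %.3f s, GetTickCount says %.3f s\n", qpcSeconds, tickSeconds);
    return 0;
}
</code>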
"How do you know it was fake?"
Because somebody posting as ‘Huh’ said so. If you guys are going to start an impersonation war then I’ll just turn off comments.
Elias: All pointers must be properly aligned (unless explicitly notated to the contrary). That’s just a fundamental rule of the language (6.2.3.2.7), like "Don’t use memory after freeing it".
Steve Hazel: ok, so I cannot *personally* vouch for whether the MM timers spin, but I worked at a Seattle startup with some ex-Microsoft devs who worked in the MM group.
PingBack from http://blogs.msdn.com/oldnewthing/archive/2006/02/15/532524.aspx
I recently wanted to add some performance measurements to an application. To avoid duplicating code everywhere I needed to make measurements, I coded up a small helper class.