Date: October 30, 2006 / year-entry #367
Tags: tips/support
Orig Link: https://blogs.msdn.microsoft.com/oldnewthing/20061030-01/?p=29193
Comments: 22
Summary: This is sort of the reverse of Why is my CPU usage hovering at 50%?, but the answer is the same. When I run a CPU-intensive task, the CPU percentage used by that process never goes above 50%, and the rest is reported as idle. Is there some setting that I set inadvertently which is...
This is sort of the reverse of Why is my CPU usage hovering at 50%?, but the answer is the same.
My psychic powers tell me that you have a single processor with hyperthreading enabled. (Because if you had a dual processor machine, you probably would have mentioned it in your question.) And my psychic powers tell me furthermore that the program in question is single-threaded, or at least has only one thread that is doing CPU-intensive work. Therefore, that thread is being run by one of the hyperthreading units of the CPU, and the other one isn't doing anything. That's why you can't get more than 50% CPU usage.
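As a quick sanity check, you can ask Windows how many logical processors it sees. A minimal sketch using GetSystemInfo (the closing arithmetic is just the 100/N observation above):

```cpp
#include <windows.h>
#include <stdio.h>

int main()
{
    SYSTEM_INFO si;
    GetSystemInfo(&si); // dwNumberOfProcessors counts *logical* processors,
                        // so a hyperthreaded single CPU reports 2
    printf("Logical processors: %lu\n", si.dwNumberOfProcessors);
    printf("One CPU-bound thread maxes out at about %lu%%\n",
           100 / si.dwNumberOfProcessors);
    return 0;
}
```

On the hyperthreaded machine described above, this prints 2 and 50%.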
Comments (22)
Comments are closed.
There is a workaround for this problem available: go to BIOS and disable hyperthreading :)
When I saw the title, I thought the same thing! Your psychic powers must be spreading to other people. Another variant of this is "Why can’t I get it to use more than 25% of the CPU?"… from a customer with a dual core/dual hyperthreaded system (2 physical + 2 logical = 4 "cores")
That gives me a good opportunity for a digression, to ask all the readers here: I still use VC 6 and it can’t get much above 50% CPU :) What are your experiences? Is any newer VS (e.g. 2005) capable of compiling on both cores to be twice as fast?
VS 2005 can compile two different projects at the same time, if they belong to the same solution, but one project is tied to one core.
Xcode from Apple on the other hand is able to just hand out single files to each core which really speeds up those Universal-Binary compiles (each file is compiled separately for PowerPC and Intel).
<<VS 2005 can compile two different projects at the same time, if they belong to the same solution, but one project is tied to one core.>>
Which leads to problems if you never bothered to mark the project dependencies right, as I have seen in a project some time ago :-)
> Is there some setting that I set inadvertently which is preventing the program from using more than half of the CPU?
Yes, the setting is probably called "hyperthreading". Turn it off in your BIOS, and your single CPU-intensive thread will be able to use 100% of the CPU.
:-P
(Unless it actually *is* an SMP or dual-core machine. Then, turn off the second CPU (or the second core), and you’ll be able to use 100% of the CPU.)
During long compiles I use the 50% "unused" CPU to read email, surf, edit more source code. It’s nice not having the machine totally bogged down doing one thing.
Ignore this if it’s obviously way off topic – as a user, the only time I look at my CPU usage is in the task manager when I’m wondering if one program I’m running is hogging resources, and I zap it if it’s seemingly slowing other, more important (email) stuff up. Should I look at the overall CPU usage? Should I care? I hope the answer is no, don’t worry, we’ll sort it, it’s not relevant to you…
Which is why multithreading, producer/consumer models, and distributed algorithms are good ideas. Now that we have HT/Multicore systems being more and more standard, we need programs with a higher degree of ||ism.
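To make the producer/consumer suggestion concrete, here is a minimal Win32 sketch (the ring size, item values, and the -1 sentinel convention are all made up for illustration): two consumer threads pull items off a shared queue, so a second core or hyperthreading unit actually has something to do.

```cpp
#include <windows.h>
#include <stdio.h>

// Shared state: a ring of work items guarded by a critical section,
// plus a semaphore counting how many items are ready to consume.
CRITICAL_SECTION g_lock;
HANDLE g_itemsReady;
int g_queue[10];
int g_head = 0, g_tail = 0;

DWORD WINAPI Consumer(LPVOID)
{
    for (;;) {
        WaitForSingleObject(g_itemsReady, INFINITE); // wait for an item
        EnterCriticalSection(&g_lock);
        int item = g_queue[g_head++ % 10];
        LeaveCriticalSection(&g_lock);
        if (item < 0) return 0;          // -1 is our "no more work" sentinel
        printf("processed %d\n", item);  // stands in for real CPU work
    }
}

int main()
{
    InitializeCriticalSection(&g_lock);
    g_itemsReady = CreateSemaphore(NULL, 0, 10, NULL);

    HANDLE threads[2];
    for (int i = 0; i < 2; ++i)
        threads[i] = CreateThread(NULL, 0, Consumer, NULL, 0, NULL);

    // Produce eight items, then one sentinel per consumer.
    int items[10] = { 1, 2, 3, 4, 5, 6, 7, 8, -1, -1 };
    for (int i = 0; i < 10; ++i) {
        EnterCriticalSection(&g_lock);
        g_queue[g_tail++ % 10] = items[i];
        LeaveCriticalSection(&g_lock);
        ReleaseSemaphore(g_itemsReady, 1, NULL);
    }

    WaitForMultipleObjects(2, threads, TRUE, INFINITE);
    return 0;
}
```

With two consumers doing the heavy lifting, the reported usage can finally climb past the 50% ceiling the original question ran into.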
Somewhere on a forum last week there was a discussion about how, on Linux and OS X, full usage of one CPU shows as 100%. If you have a quad-CPU machine and all four CPUs are used full blast, it shows 400%.
I’ve been thinking about this and it’s simple, clear and makes sense; Windows’ task manager should do this as well. It’s scalable and easy to understand.
For simple users with simple apps, it doesn’t freak them out. At worst it makes them feel like their machine is over-performing.
For servers it is more ‘guessable’ than… uh, what is 1/8 of 100% again?
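For illustration, a toy sketch of the two reporting conventions, assuming per-CPU busy fractions are already in hand (the values here are made up):

```cpp
#include <stdio.h>

int main()
{
    double perCpu[] = { 1.0, 1.0, 0.0, 0.0 }; // two CPUs flat out, two idle
    int n = sizeof(perCpu) / sizeof(perCpu[0]);

    double sum = 0.0;
    for (int i = 0; i < n; ++i) sum += perCpu[i];

    printf("Unix-style total:    %.0f%% (out of %d%%)\n", sum * 100, n * 100);
    printf("Windows-style total: %.0f%%\n", sum * 100 / n);
    return 0;
}
```

The same workload reads as 200% in the one convention and 50% in the other.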
Ulric: I agree. Great idea.
Programmers: You guys may be interested in this:
http://www.xoreax.com/support_faq.htm#q212
"IncrediBuild can take advantage of multiple CPU/Core machines by allowing each CPU/Core to build a file at the same time."
Is VS seriously only single threaded? This hasn’t been fixed in the newest versions?
gmake is multithreaded and seems to work fine – I always thought VS seemed to be remarkably good, but then I’ve never used it on a SMP machine.
At work, I have a Dual-Xeon with hyperthreading, for a total of 4 logical processors. Our (rather large) product uses a build system that can scale across multiple CPUs, but beyond 2 it would actually lengthen build time. I think once you’re past a certain point, disk access time becomes the limitation, especially during the linking phase. I ended up disabling HT.
Wendy: a thread is only hogging the system if it’s maxed out at 100%. Even then, Windows uses priority boosts to ensure that other applications that are running get CPU time.
If many programs are doing something very extensive with the disk, though, Windows can struggle – it’s not that great at partitioning disk time. You won’t generally see these programs with high values in the CPU Usage column, because they’ll generally be blocking waiting for data from the disk. Also, large amounts of disk access will often cause Windows to increase the file system cache working set at the cost of process working sets; when a thread that could do work wakes up, it often has to wait for code or data to be paged back in before it can actually do anything useful.
Jonathan: whether compiling is disk- or CPU-bound will depend on how large the source code is and how large your system’s memory is. If you’ve built recently and the source code fits in RAM, it will probably be CPU-bound.
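On the "zap it" question: rather than killing a CPU-hungry process outright, you can demote its priority so it only soaks up cycles nothing else wants. A minimal sketch using SetPriorityClass (the actual work is left as a placeholder comment):

```cpp
#include <windows.h>
#include <stdio.h>

int main()
{
    // Demote ourselves; a background number-cruncher started this way
    // yields the CPU to normal-priority (interactive) applications.
    if (SetPriorityClass(GetCurrentProcess(), BELOW_NORMAL_PRIORITY_CLASS))
        printf("Now running at below-normal priority.\n");

    // ... the CPU-intensive work would go here: it still uses idle
    // cycles but backs off when the user is doing something else.
    return 0;
}
```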
The proper answer to the above question would be:
Because no one bothers to write multi-threaded applications yet.
Archangel: gmake is not multithreaded. It can fork multiple copies of itself, but that’s multiple processes, not multiple threads.
VS is similar: it spawns separate processes to compile, link, etc. It could spawn multiple simultaneous instances of, for example, the compiler, but that’s a technique that works better on Unix-like platforms where per-process overhead is lower.
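For the curious, a rough sketch of that spawn-a-process-per-file approach on Win32 (the cl.exe command lines and file names are purely illustrative, and error handling is pared down):

```cpp
#include <windows.h>

int main()
{
    // Hypothetical command lines -- one compiler instance per source file.
    const char* cmds[2] = { "cl.exe /c a.cpp", "cl.exe /c b.cpp" };
    HANDLE procs[2];

    for (int i = 0; i < 2; ++i) {
        STARTUPINFOA si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        char cmd[64];
        lstrcpynA(cmd, cmds[i], 64); // CreateProcess may scribble on the buffer
        if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
            return 1;                // error handling elided for brevity
        CloseHandle(pi.hThread);
        procs[i] = pi.hProcess;
    }

    // Both children run concurrently; the scheduler can put one on each core.
    WaitForMultipleObjects(2, procs, TRUE, INFINITE);
    for (int i = 0; i < 2; ++i) CloseHandle(procs[i]);
    return 0;
}
```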
I dunno – something long running like a compilation would probably be just fine under Windows.
"it shows 100% for one CPU usage. If you have a quad CPU and all for CPU are used full blast, it shows 400%."
…
"For servers it is more ‘guessable’ than .. ho what is 1/8 of 100% again?"
Those that work with computers should be able to work out any 2^n rather quickly.
I can see how having n * 100% available would be useful when you have a high enough n that precision becomes an issue. By then, however, I’d imagine that you’d want to have n different counters so that you can see the difference between 8 processors at 25% and 2 processors at 100%.
A dual-processor machine can’t run at 400%, though; its maximum is 200%. Unless I don’t understand what you’re saying?
But you can’t tell that difference with Windows’ setup when CPU usage is at 25% on a quad-CPU machine either. Is it 100% of one CPU, or is it 25% of all four, or 50% of two of them? Most of the time, it’s probably 100% of one, but that’s hardly infallible.
(Especially with HT — oftentimes I’ll pull up a remote screen on one of our dual-CPU hyperthreaded servers (4 virtual CPUs), and see that CPU usage is at "25%". That’s rarely 100% on one virtual CPU; most of the time it varies between 60% and 80% on one virtual CPU, and 20%-40% on its pair. Always half of the total for that pair, though. I have a feeling it’s due to the way HT works, but I’m not quite sure on that.)
It’s quad logical cores. Whether that is 4 CPUs, 2 dual-core CPUs, 1 dual-core HT CPU, or 1 quad-core CPU, you’ll have to look at the hardware information to tell. This also means that the task is using at least 4 threads (under Windows, the total CPU usage is still reported up to 100% – so you could have 2 tasks each using 150% in the Unix-style scheme, while the overall usage is reported as 75%).
I would like taskmgr’s graph to compensate for SpeedStep®/Cool’n’Quiet™.