Date: November 23, 2006 / year-entry #393
Tags: tips, support
Orig Link: https://blogs.msdn.microsoft.com/oldnewthing/20061123-14/?p=28923
Comments: 27
Summary: A placebo setting that has been getting a lot of play in recent years is that of QoS bandwidth reservation. The setting in question sets a maximum amount of bandwidth that can be reserved for QoS. I guess one thing people forgot to notice is the word "maximum". It doesn't set the amount of reserved...
A placebo setting that has been getting a lot of play in recent years is that of QoS bandwidth reservation. The setting in question sets a maximum amount of bandwidth that can be reserved for QoS. I guess one thing people forgot to notice is the word "maximum". It doesn't set the amount of reserved bandwidth, just the maximum. Changing the value will in most cases have no effect on your download speed, since the limit kicks in only if you have an application that uses QoS in the first place.

QoS, which stands for "quality of service", is a priority scheme for network bandwidth. A program can request a certain amount of bandwidth, say for media streaming, and when the program accesses the network, up to that much bandwidth is guaranteed to be available to the program. The setting in question controls how much bandwidth can be claimed for high-priority network access.

If no program is using QoS, then all your bandwidth is available to non-QoS programs. What's more, even if there is a QoS reservation active, if the program that reserved the bandwidth isn't actually using it, then the bandwidth is available to non-QoS programs.

Consider this analogy: A restaurant seats 100 people, and it has a policy of accepting reservations for at most twenty percent of those seats. This doesn't mean that twenty seats are sitting empty all the time. If ten people have made reservations for dinner at 8pm, then ninety seats are available for drop-in customers at that time. The twenty percent policy just means that once twenty people have made reservations for dinner at 8pm, the restaurant won't accept any more reservations.

Here's an example with made-up numbers: Suppose you are downloading a large file over your 720kbps connection. Since there is nothing else using the network, your download proceeds at 720kbps. Now suppose you fire up a program that uses QoS, say, for streaming media. (I don't know whether Windows Media Player uses QoS.) You connect to a streaming media source, and the media player does some math and determines that in order to give you smooth playback, it needs a minimum of 100kbps. (If it gets more, then great, but it needs at least that much to avoid dropouts.) The program places a reservation of that amount through QoS. With a default maximum reservation of twenty percent, or 144kbps on this connection, the 100kbps request is within the limit, so the reservation is granted: the media player gets its guaranteed 100kbps, and the remaining 620kbps goes to your download.

Now you hit pause on the media player to answer the phone. Even though the media player has a 100kbps reservation, it's not using it, so all 720kbps of bandwidth is devoted to your download. You get off the phone and unpause the media player. Bandwidth is once again divided 100kbps for the media player and 620kbps for the download.

Now, sure, you can set your QoS maximum reservation to zero. This means that when the media player asks for a guarantee of 100kbps, QoS will tell it, "Sorry, no can do." The media player will still play the streaming media, but since it no longer has a guarantee of bandwidth, there may be stretches where the download consumes most of the network bandwidth and the streaming media gets only 50kbps. Result: dropped frames, stuttering, or pauses for buffering.

So tweak this value all you want, but understand what you're tweaking.
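(Editor's aside: the article doesn't say which interface a program would use to place such a reservation. For the curious, here is a minimal sketch, not taken from the article, of what a request like the media player's 100kbps reservation might look like through the Winsock 2 Generic QoS (GQoS) interface on Windows XP. The FLOWSPEC fields, service types, and SIO_SET_QOS control code are real; the specific numbers are illustrative only, and the socket is assumed to have been created over a QoS-capable service provider after WSAStartup.)

/* Minimal sketch (not from the article): asking QoS for roughly 100kbps of
   guaranteed receive bandwidth via the Winsock 2 Generic QoS interface.
   The numbers are illustrative; the socket "s" is assumed to have been
   created over a QoS-capable service provider. */
#include <winsock2.h>
#include <qos.h>
#include <string.h>
#include <stdio.h>

#pragma comment(lib, "ws2_32.lib")

int request_streaming_reservation(SOCKET s)
{
    QOS qos;
    DWORD bytesReturned;

    memset(&qos, 0, sizeof(qos));

    /* Receive direction: this is the 100kbps the media player needs. */
    qos.ReceivingFlowspec.TokenRate          = 12500;              /* bytes/sec, ~100kbps */
    qos.ReceivingFlowspec.TokenBucketSize    = 12500;              /* allow ~1 second of burst */
    qos.ReceivingFlowspec.PeakBandwidth      = QOS_NOT_SPECIFIED;
    qos.ReceivingFlowspec.Latency            = QOS_NOT_SPECIFIED;
    qos.ReceivingFlowspec.DelayVariation     = QOS_NOT_SPECIFIED;
    qos.ReceivingFlowspec.ServiceType        = SERVICETYPE_GUARANTEED;
    qos.ReceivingFlowspec.MaxSduSize         = QOS_NOT_SPECIFIED;
    qos.ReceivingFlowspec.MinimumPolicedSize = QOS_NOT_SPECIFIED;

    /* Send direction: no particular guarantee requested. */
    qos.SendingFlowspec             = qos.ReceivingFlowspec;
    qos.SendingFlowspec.TokenRate   = QOS_NOT_SPECIFIED;
    qos.SendingFlowspec.ServiceType = SERVICETYPE_BESTEFFORT;

    qos.ProviderSpecific.buf = NULL;
    qos.ProviderSpecific.len = 0;

    /* Hand the request to the QoS provider.  If it exceeds the configured
       maximum reservable bandwidth, this is where "Sorry, no can do" comes back. */
    if (WSAIoctl(s, SIO_SET_QOS, &qos, sizeof(qos),
                 NULL, 0, &bytesReturned, NULL, NULL) == SOCKET_ERROR)
    {
        printf("QoS reservation refused: %d\n", WSAGetLastError());
        return -1;
    }
    return 0;   /* reservation in place; unused bandwidth is still available to others */
}

Whether Windows Media Player actually does anything like this is exactly the open question in the article; the point of the sketch is only that a reservation is a request the provider can refuse, and that it takes nothing away from other programs while the socket sits idle.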
Comments (27)
Comments are closed.
i think there is a common belief that Windows Update uses QoS, which would explain why people think it would make a difference, as Windows Update downloads are fairly frequent.
I assume that Windows Update actually uses BITS, which does the opposite of QoS (ie. only uses available bandwidth, meaning that normal traffic is to BITS as QoS traffic is to normal traffic).
I still think that downloader applications like BitTorrent and GetRight (etc) will simply lie and pretend to be VoIP applications, which will make all QoS work completely redundant. It’s based on everyone being honest, which from your work on the Start menu, you know is untrue.
It's only really controllable to the extent that
a) you control what software is running on your pc
b) you control what software is downloading/uploading data at any one time
c) you control the network infrastructure and know where it has QoS switched on.
So in a closed corporate environment (one that almost certainly bans installing BitTorrent clients, etc.) it can be very effective, and I believe it's how my current company has consistent quality on our desk IP phones despite some other non-realtime apps using huge amounts of LAN bandwidth.
For a home user, it's probably useful only as far as the network socket on the back of their PC. Once the traffic leaves your house it's out of your control. So you can control whether your media player steals bandwidth from your VoIP client, but you can't stop Johnny down the road's illegal movie downloads from slowing you down, let alone congestion out on the big wide net.
Stu: BITS is a form of traffic shaping, but it may not actually use any part of XP's built-in QoS. It's just lowest-priority, instead of the default highest-priority that most people think of when they think of QoS. Modules that use XP's QoS support can reserve such a low-to-lowest priority for background downloading, although by default non-QoS traffic is automatically set to Lowest, so there's not a lot to be gained by doing that normally. (Non-QoS traffic should typically be Normal or Low, but I guess they had a good reason for doing it that way.)
Good whitepaper here:
http://www.microsoft.com/technet/prodtechnol/windows2000serv/plan/qosover8.mspx
There's a discussion of limiting BITS here:
http://blogs.msdn.com/oldnewthing/archive/2005/01/27/361595.aspx
I remember this issue – it became a seriously contentious topic in computer enthusiast circles. Even today, you will see "tweak" guides that extol the virtues of changing this setting, all the while painting MS programmers as buffoons.
I even tried this tweak, until I realized how crazy it would be if MS were depriving billions of Internet users of 20% of their bandwidth.
The solution to this issue? More blog entries from experts like you! In 2001, Microsoft neither refuted nor explained this setting, and the story took on a life of its own.
P.S. The myth lives on in a slightly different form today as the "remove WinXP SP2’s maximum TCP session" placebo setting.
Good post Raymond, people should know that QoS is not what takes their bandwidth.
It is those idiots writing communication protocols with as much as 25% overhead.
For example PPPoE encapsulation takes ~25% of your bandwidth in both directions (up/down).
Let me explain the math using 512/128kbps ADSL (PPPoE) link:
64KB/sec is the theoretical maximum download speed for a 512kbps link. Divide 64 by 1.25 and you get 51.2KB/sec — that is your top download speed no matter what tweaks you use.
Your 128kbps upload translates to a theoretical maximum of 16KB/sec. Divide again by 1.25 and it is only 12.8KB/sec you really have.
Try to upload more than that and your download speed plummets, because ACK packets do not reach the destination fast enough, resulting in waits, timeouts, and resends. The Windows TCP/IP stack is of course stupid enough not to offer any form of packet prioritization based on the application or protocol used.
Luckily there are third-party drivers like http://www.cfosspeed.de. If only that were part of Windows…
"The myth lives on in a slightly different form today as the "remove WinXP SP2’s maximum TCP session" placebo setting."
You obviously don't know much about p2p, do you?
That tweak has a real effect on p2p. Those applications use half-open connections to search for file sources. The more they can open, the faster they acquire sources. If you have a limit, downloads will take longer to start and to reach full speed.
Not only can't you open more than 10 such connections, but your application's networking gets slowed down if you exceed the limit. You actually have to play it safe by opening no more than 8 such connections so as not to trigger the slowdown.
Having that limit at 10 connections is not nice; there should be a registry setting, or the limit should be raised to at least 50.
Anyway, RAW sockets are a much bigger threat than half-open connections, and yet they are allowed and abused by many exploits.
Is this really the case, or is it just a case of recalibrating the sensors and rotating the shield harmonics? I’ve seen this claim on a number of sites, but always from the same sorts of sources that would talk about the QoS conspiracy.
How do you submit topic suggestions? I thought there was a link somewhere, but I can’t find it…
Anyway, this article accuses XP of wasting $25 billion worth of energy:
http://www.treehugger.com/files/2006/11/how_windows_xp.php
It specifically cites "XP letting applications override the sleep function" as a cause, and I was wondering if Raymond could shed any light on that…
Me, I blame people for leaving their damn PCs and screens on around the clock, not to mention aircon, lighting, heating etc, etc, grumble, grumble…
Yes, with some apps and usage patterns that put a heavy stress on TCP. For example, a certain P2P "system" (which has many implementations) doesn't keep permanent TCP connections open; it only opens them when needed. If you are downloading a significant number of files (think well into the hundreds) with a significant number of peers (tens or hundreds of thousands in total), then the 10 half-open connection limit can really be a burden, since:
many peers fail (they have gone offline, and the stupid firewall doesn't return an RST)
Right now, my average number of half-open connections is about 50. The number of new connections opened per second is about 200.
It's also worth mentioning that Vista RTM's TCP stack dies under such conditions (until reboot), and transfers (direct downloads) are significantly slower than on XP (SP2 with a 256K window size set in the registry). It reminds me of the state of TCP in Win95 :)
Igor: "Anyway, RAW sockets are much bigger threat than half-open connections and yet they are allowed and abused by many exploits."
Amazing – I didn’t think anybody still paid attention to Steve Gibson’s hysterical rants. You do realize that the whole raw sockets scare was a crock, right?
Sceptic said: "Is this really the case"
Source — official eMule documentation:
http://www.emule-project.net/home/perl/help.cgi?l=1&rm=show_topic&topic_id=120
Max. Half Open Connections
This setting became necessary with the connection throttling of Windows XP SP2. This update in XP will only allow 10 half open connections and then starts parking further connections in a queue which is only slowly processed. This leads to timeout and other undesired effects in eMule.
If eMule is running on XP SP2 do not set this value any higher than 9. Although there are patches to increase this hardcoded value in XP, it is not recommended to patch such critical parts in Windows. The only effect this setting does have is that eMule acquires sources a bit slower right after startup. This will subside after the sources have been found.
In other operating systems like Windows 2000 or the obsolete Windows 95/ME set this value to 50.
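(Editor's aside: to make the quoted setting concrete, here is a minimal sketch, not eMule's actual code, of the general pattern behind a "max half-open connections" option: the application throttles its own outgoing non-blocking connect() attempts so it never has more than a handful in flight at once, staying under XP SP2's limit of 10. The names and the cap of 8 are illustrative, and WSAStartup is assumed to have been called.)

/* Sketch only (not eMule's code): throttle outgoing connection attempts so
   that at most MAX_HALF_OPEN non-blocking connect()s are in progress at once,
   staying under XP SP2's limit of 10 half-open connections. */
#include <winsock2.h>

#pragma comment(lib, "ws2_32.lib")

#define MAX_HALF_OPEN 8        /* illustrative; the eMule docs say 9 or less on SP2 */

static struct {
    SOCKET s;
    int    in_use;
} g_attempts[MAX_HALF_OPEN];

/* Start a non-blocking connect if a slot is free.  Returns 0 if the attempt
   was started, -1 if all slots are busy (caller queues the peer and retries
   later) or the attempt failed immediately. */
int begin_connect(const struct sockaddr_in *peer)
{
    int i;
    for (i = 0; i < MAX_HALF_OPEN && g_attempts[i].in_use; i++)
        ;
    if (i == MAX_HALF_OPEN)
        return -1;                              /* throttled: too many half-open */

    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (s == INVALID_SOCKET)
        return -1;

    u_long nonblocking = 1;
    ioctlsocket(s, FIONBIO, &nonblocking);      /* make connect() non-blocking */

    if (connect(s, (const struct sockaddr *)peer, sizeof(*peer)) == SOCKET_ERROR &&
        WSAGetLastError() != WSAEWOULDBLOCK) {
        closesocket(s);                         /* immediate failure */
        return -1;
    }

    g_attempts[i].s = s;
    g_attempts[i].in_use = 1;
    return 0;
}

/* Poll the outstanding attempts.  A completed or failed attempt frees its
   slot, which is what lets the next queued peer be tried. */
void poll_attempts(void)
{
    fd_set connected, failed;
    struct timeval poll_now = { 0, 0 };
    int i, active = 0;

    FD_ZERO(&connected);
    FD_ZERO(&failed);
    for (i = 0; i < MAX_HALF_OPEN; i++) {
        if (g_attempts[i].in_use) {
            FD_SET(g_attempts[i].s, &connected);  /* success shows up as writable */
            FD_SET(g_attempts[i].s, &failed);     /* failure shows up in the except set */
            active++;
        }
    }
    if (active == 0)
        return;

    if (select(0, NULL, &connected, &failed, &poll_now) <= 0)
        return;

    for (i = 0; i < MAX_HALF_OPEN; i++) {
        if (!g_attempts[i].in_use)
            continue;
        if (FD_ISSET(g_attempts[i].s, &connected)) {
            /* Connected: hand g_attempts[i].s off to the transfer code (not shown). */
            g_attempts[i].in_use = 0;
        } else if (FD_ISSET(g_attempts[i].s, &failed)) {
            closesocket(g_attempts[i].s);         /* peer unreachable; slot freed */
            g_attempts[i].in_use = 0;
        }
    }
}

The patches mentioned in the quoted documentation raise the stack-wide limit itself; an application-side setting like the one sketched here only controls how aggressively new attempts are issued, which is why staying at 8 or 9 avoids tripping SP2's throttle.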
Aaron said: "You do realize that the whole raw sockets scare was a crock, right?"
Or was it?!?
If I may quote Steve Gibson whose work both as a programmer and as a network specialist I admire:
"When those insecure and maliciously potent Windows XP machines are mated to high-bandwidth Internet connections, we are going to experience an escalation of Internet terrorism the likes of which has never been seen before."
Now you tell me Aaron, which part of his prediction hasn’t happened yet?!? Don’t tell me you haven’t heard about zombie nets, about cyber-extortion, etc?
Pfft, yeah, because if you couldn’t open raw sockets, there’d be no other way to spoof IP traffic or generate DoS attacks.
[The myth lives on in a slightly different form today as the "remove WinXP SP2’s maximum TCP session" placebo setting.]
I'm no expert, but from what I've seen, the limit on half-open connections imposed by SP2 *can* actually have a significant impact on any application similar to a peer-to-peer program.
Essentially, since the system can only have 10 half-open connections, it has to wait for responses from hosts that may not answer very quickly. On networks like Gnutella or BitTorrent there are sometimes tens of thousands of peers, and allowing only 10 simultaneous connection attempts can create a real bottleneck.
I would certainly be interested if Raymond has any insight into this, including why Microsoft implemented this change. It’s supposed to help stop worms from spreading, but that seems like "too little too late" and a poor reason to cripple the TCP stack.
In any case, pass the sugar pills.
Igor said:
If I may quote Steve Gibson whose work both as a programmer and as a network specialist I admire:
"When those insecure and maliciously potent Windows XP machines are mated to high-bandwidth Internet connections, we are going to experience an escalation of Internet terrorism the likes of which has never been seen before."
Before putting faith in GRC you might want to wonder why such a 'security expert' is generally absent from just about every single security forum, website, or consortium other than grc.com. This is also the man who has been quoted as saying
"First of all, I never told anybody about winpcap. I took the position that it was impossible … I set up a deliberate disinformation campaign from the beginning, in order to persuade the script kiddies that I had come into contact with, from believing that there was another approach to this; and it was only because I had to defend myself against articles like Thomas' that I finally was forced to acknowledge that there was other ways to do this.
I didn’t want to say that… …I agree that information should be kept as…away from people as much as possible. …it’s standard practice in security, as certainly you well know, when you divulge a vulnerability, the people who might be able to exploit it have the opportunity. It’s what happens every day in security."
I dunno about Raymond, but I can tell you the details: It was based on some work HP labs did on virus throttles, which won Best Paper at the ACSAC conference in 2002. You can get the original report at http://www.hpl.hp.com/techreports/2002/HPL-2002-172R1.html, and the followup on performance details at http://www.hpl.hp.com/techreports/2003/HPL-2003-103.html. What MS did in XP SP2 is a (uhh, sorry Raymond :-) rather badly-done 30-second hack on the original design.
""When those insecure and maliciously potent Windows XP machines are mated to high-bandwidth Internet connections, we are going to experience an escalation of Internet terrorism the likes of which has never been seen before."
Now you tell me Aaron, which part of his prediction hasn’t happened yet?!?"
The one where he said this would actually all be down to raw sockets.
So why does my 3Mbit/sec ADSL (using PPPoE) connection consistently download at around 350 kbytes/sec (from e.g. kernel.org, or some other server that has the upstream bandwidth to support it), when 3Mbit/sec "naively" translates to 384 kbytes/sec? That’s a heck of a lot less than 25% overhead; it’s closer to 10%.
(Now that bandwidth number includes the TCP, IP, and Ethernet header sizes. But that’s only a few tens of bytes (14+20+20) per each frame’s maximum of 1514; it can’t add 15% to that 10% number.)
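(Working through that commenter's own numbers: 350/384 ≈ 0.91, i.e. roughly a 9% shortfall, while the per-frame headers come to (14 + 20 + 20) / 1514 ≈ 3.6%, so headers alone cannot stretch that into anything like 25%.)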
The claim that PPPoE takes ~25% of your bandwidth is wrong. PPPoE typically has an 8-byte overhead. With Ethernet's standard 1500-byte MTU (think cable internet), this means that with PPPoE framing the MTU is 1492. That's 0.53% overhead, not 25%.
Aaron said:
"Before putting faith in GRC…"
As for Gibson, his tool SPINRITE saved my bottom several times. So Aaron, when you match that with your skills and tools I will start having the same amount of faith in your opinions and judgement.
About PPPoE, perhaps I am wrong — I am simply reporting what I observe on my own (and some other) connections. If the overhead were smaller, then I could reach a 64KB/sec download on my 512kbps connection (which is connected at that speed and is not VBR), yet I get only 52KB/sec maximum. For upload, instead of 16KB/sec I get 12KB/sec. If it is not protocol overhead, then I would really like to know what it could be.
Anyway, there are several protocol layers involved and each one has its own overhead added to the payload:
IP: 20 bytes
PPP: 2 bytes
PPPoE: 6 bytes
MAC: 18 bytes
RFC1483: 10 bytes
Igor: Add all those up sometime, and notice that you get 56 bytes. Even adding a 20-byte TCP header, 76/1500 is nowhere *NEAR* 25%.
Your "overhead" is coming from somewhere else.
My guess would be you use a downloader program (web browser?) that gives you the speed over the entire lifetime of the download, instead of over the last N seconds. This is obviously not the instantaneous speed, which is what you need to look at when doing overhead calculations.
To get the instantaneous speed, run gkrellm (or possibly task manager if you use XP). This will include the TCP, IP, and Ethernet overhead, but not PPP or PPPoE (unless the PPP/PPPoE client is running on the machine you run gkrellm on; then I’m not sure what number it gives you).
Igor Said:
Aaron said:
"Before putting faith in GRC…"
As for Gibson, his tool SPINRITE saved my bottom several times. So Aaron, when you match that with your skills and tools I will start having the same amount of faith in your opinions and judgement.
So because he wrote a decent tool that has absolutely nothing to do with network security, you trust his opinions regarding network security? Would that be true if he had written a decent media player or card game?
BryanK said:
"My guess would be you use a downloader program"
For HTTP/FTP downloading I use FlashGet, and no matter how fast the server I am downloading from is, I can't get past 52KB/sec.
I have no viruses or spyware on my machine and nothing runs in the background. Automatic updates are disabled. Traffic is handled by a cheap ADSL modem/router which I intend to replace.
Paul said:
"So because he wrote a decent tool that has absolutely nothing to do with network security…"
No, because of that tool I trust that he is not a useless troll like some people who like to discuss things on the Internet.
I have read his articles about the DoS attacks that hit his server(s) carefully, and IMO he showed a more than adequate level of expertise in both network protocols and security while defending himself against those attacks.
The point is that he experienced all those weaknesses first-hand, unlike many of you here. I believe most of you sit behind corporate firewalls while someone else administers your network and your PC. So give Gibson a break. Or at least write a tool as useful as his if you want me to respect your opinion.