There’s going to be an awful lot more overclocking out there

Date: November 7, 2006 / year-entry #376
Tags: other
Orig Link: https://blogs.msdn.microsoft.com/oldnewthing/20061107-02/?p=29103
Comments: 39
Summary:Last year, I told the story of overclocking being the source of a lot of mysterious crashes and that some of those overclocked machines were overclocked at the store. These machines came from small, independent shops rather than the major manufacturers. Well it looks like that's about to change. Gateway's FX530 Desktop can be ordered...

Last year, I told the story of overclocking being the source of a lot of mysterious crashes and that some of those overclocked machines were overclocked at the store. These machines came from small, independent shops rather than the major manufacturers.

Well, it looks like that's about to change.

Gateway's FX530 Desktop can be ordered overclocked by the manufacturer.

Just say no to DIY overclocking and let us do it for you! We'll factory overclock your Intel® quad-core processor.4 Yep, you read that right: factory overclock, which is something that most other major PC manufacturers don't do.

We live in interesting times.


Comments (39)
  1. Darren Winsper says:

    Factory overclock?  Is that when they overclock the factory?  Sounds like a health and safety nightmare to me ;)

    In any case, I have no problem with individual people overclocking, so long as they understand the risks and promise not to complain about buggy software :)  Gateway’s scheme is dishonest as it doesn’t seem to state anywhere that overclocking can introduce subtle bugs/problems that they simply cannot test for.

  2. JS says:

    Maybe they’re trying to capture some of the "enthusiast" market. You know, the people that basically would never be caught dead buying Gateway computers.

  3. Puckdropper says:

    From the Ars Technica newspost that I read first:


    Gateway’s machines not only come overclocked straight from the factory, they also come (surprisingly) complete with a Gateway limited warranty for one year. That’s right—if and when that Gateway-overclocked processor fails, they will cover the replacement for you.


    That makes the story even more interesting.

  4. John Goewert says:

    Hrm… an interesting way to get the faux enthusiasts.

    I wonder if they just overclock a 2.709 GHz processor to 2.71

  5. dispensa says:

    > I wonder if they just overclock a 2.709 GHz processor to 2.71

    No, they're doing some serious overclocking last I looked. They were explicitly trying to avoid the appearance of just doing it for marketing.

    I wonder if the "warranty" offer will extend to the kind of impossible single-bit errors that happen with an overclocked chip. They could have a lot of refunds on their hands in a year or so. :)

    [Sure, the warranty probably covers those errors – but good luck proving that your Windows crashes are caused by overclocking. -Raymond]
  6. Daniel Garlans says:

    I’m surprised Intel would allow them to do this; deliberately misusing their hardware, and potentially making Intel look bad when the chips start failing or getting really unstable.

    Plus, you can’t forget about all the people who are going to jump back onto the "M$ SUCKS" bandwagon when the hidden single bit errors start causing everyone’s apps to crash, thus making Windows look unstable.

    I'd be curious to find out if any of the contracts between Gateway and its suppliers mention anything about making sure to actually use the hardware within its specified tolerances.

  7. James says:

    Re: Mr. Garlans’ comments–any such contracts between Gateway and its suppliers might very well be considered anticompetitive, and so illegal. After all, the (pseudo-)economic argument goes, what incentive would Gateway have to sell defective computers?

  8. If the machine has the "Intel Inside" logo (and is thus subsidized by Intel marketing funds) then I would think the contracts with Intel would forbid overclocking. If they don't, Intel has been very foolish, since flaky behavior could obviously harm Intel's brand.

    I'm pretty sure that the courts would not hold such a restriction to be anti-competitive. After all, nobody is required to accept Intel's marketing money.

  9. Mark says:

    And for a few dollars more, they will even pre-spill coffee on your keyboard!

  10. James says:

    Re: Mr. Tappan’s comments– I apologize for this off-topic comment, but in fact there have been lawsuits based, essentially, on the premise that companies are required to accept supplier X’s marketing money.

    (One example I remember in particular is that Microsoft used to offer marketing money to OEMs whose computers would boot up within 30 seconds.)

  11. sergio says:

    Right now, when something crashes and a user sends a report, he is usually not able to see any response to the crash. And he blames Microsoft. But what would happen if he were able to get a report (from the OS or from Microsoft) saying "your crash displays symptoms of the processor running faster than its allowed speed"?

    [It takes a human being an hour or more to make that determination, based on reading source code and working through all the possibilities, brainstorming over lunch with colleagues, etc. Multiply that by all the crash reports that come in, and it quickly becomes impractical. -Raymond]
  12. Robert says:

    Most video card manufacturers cherry-pick the chips and overclock them from the factory. Asus also has an option in the BIOS, usually enabled by default, to auto-overclock the system.

    > If the machine has the "Intel Inside" logo (and is thus subsidized by Intel marketing funds) then I would think the contracts with Intel would forbid overclocking. If they don't, Intel has been very foolish, since flaky behavior could obviously harm Intel's brand.

    ATI and nVidia (as 3dfx did) constantly preselect components for factory-overclocking manufacturers (just as they cherry-pick the chips for reviews). Intel knows exactly which 1.5GHz chip would run at 3.0GHz, because most of them are just 3GHz parts rebranded as 1.5GHz to satisfy market demand. Overclocking sells well on review sites.

  13. Bilbo says:

    Since Intel knows which processors failed, at what frequency they failed, and what section of the CPU failed, I'm sure they could share that with Gateway.  Toss in a little device driver that throttles down the CPU a little more aggressively per temperature than usual, or perhaps the motherboard/BIOS timing is changed to avoid the flaky timing, or even a device driver that works around the offending microcode. A 3GHz processor marked as 1.5GHz because it failed the 2GHz test… suddenly becomes a 3GHz processor with the "average" performance of a 2.5GHz one.

  14. e.thermal says:

    Having done hardware support for years (although just for friends and family now), I can say many a crash is caused by subtle hardware failures.  I remember back in the day of the Intel DX4 chips, the DX4-75s being overclocked to 100MHz and 125MHz.  Many subtle problems would appear. For example, one machine came into our tech area from a customer.  It did everything perfectly, everything, except that if you put a number in cell I9 in Excel it would add a 1 onto the end.  No other cell did this.  A quick CPU swap fixed the problem, and slowing the original CPU down to stock settings made the problem disappear as well.

  15. marty says:

    So will MS remove this product from the Hardware Compatibility List? There's still an HCL, right?

    Presumably when MS asks the CPU manufacturer whether they will guarantee the datasheet specs at the overclocked rate, the answer will be NO. Seems cut-and-dried to me.

  16. I kind of like sergio's idea. It shouldn't require a human to analyze the crash dump.

    Why can’t Watson do the same kind of CPU frequency detection that CPU-Z and others do, and if the system is overclocked by more than X% and the crash doesn’t fall into a known/common bucket then bring them to an online page which suggests overclocking may be the problem?

    [I answered this question in the comments to the linked article. -Raymond]
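For illustration only, here is a minimal sketch of the sort of frequency check being discussed, assuming a Windows machine and the Microsoft compiler (for the __rdtsc intrinsic). It counts time-stamp-counter ticks over roughly a tenth of a second of wall-clock time and converts that to MHz; it assumes the TSC ticks at the core clock rate, which is not guaranteed on every processor, and it is not a description of anything Watson actually does.

```c
/* Sketch: approximate the CPU's current clock speed the way tools like
 * CPU-Z do, by timing the time-stamp counter against the high-resolution
 * performance counter.  Assumes the TSC runs at core speed. */
#include <windows.h>
#include <intrin.h>
#include <stdio.h>

static double MeasureMHz(void)
{
    LARGE_INTEGER freq, start, now;
    unsigned __int64 tscStart, tscEnd;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    tscStart = __rdtsc();

    /* Busy-wait for about 100ms of wall-clock time. */
    do {
        QueryPerformanceCounter(&now);
    } while (now.QuadPart - start.QuadPart < freq.QuadPart / 10);

    tscEnd = __rdtsc();

    /* Ticks divided by elapsed seconds, expressed in MHz. */
    double seconds = (double)(now.QuadPart - start.QuadPart) / freq.QuadPart;
    return (double)(tscEnd - tscStart) / seconds / 1e6;
}

int main(void)
{
    printf("Measured clock speed: ~%.0f MHz\n", MeasureMHz());
    return 0;
}
```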
  17. Anony Moose says:

    Satire is quickly becoming reality:

    AMD Announces Athlon Extreme OC Processor

    http://www.bbspot.com/News/2001/01/athlon_extreme.html

    "We’re giving the overclockers what they want.  Gone are the days when you get tiny overclocking improvements.  We’re talking 10-20 times the rated clock speed without any additional cooling on the Extreme OC," said VP of Marketing Hank Potter.  "The first chips are rated for 75 MHz, but we expect to have the 50 MHz available shortly."

    AMD has kept the engineering specs top secret, but did hint that the Athlon Extreme OC is very, very similar to the standard Athlon.

    I wonder how that would work.   ;)

    (Also, that's a satire site, so it ought to be fairly obvious exactly how it would work, if it weren't just a joke.)

    (In the real world, "overclocking" is stupid. Either the chip can handle it, in which case it's "running at full clock speed" and ought to be standard, or the chip can't handle it, in which case it's going to break. Factory overclocking is either silly marketing, or it's a neat way to persuade people to buy products designed to break down quickly.)

  18. Jamie says:

    I went through the ordering process to see what the additional cost would be for the overclock option. I quickly discovered you cannot order this option through the web interface; you must talk with a sales associate.

    My guess is it'll be a few hundred dollars. There's no way it will scale linearly the way the high-end graphics card market does, or else it would be a $1k option.

    Monetarily this is a good thing for Gateway. The extra dough for overclocking is almost free money. It also defeats the purpose of overclocking, IMO.

  19. Monker says:

    > In the real world, "overclocking" is stupid. Either the chip can handle it, in which case it's "running at full clock speed" and ought to be standard, or the chip can't handle it, in which case it's going to break.

    No, it's not stupid. Your reasoning is right but based on a misconception. When you produce microprocessors, you produce them all equal, and only the cream can reach the highest frequencies (let's say, for the sake of argument, 3GHz). The others are rated below that, on the basis of testing and other parameters. There is a problem, however: the market. If you spend a few hours in a hardware shop, you'll notice that not many people buy top-of-the-line processors; most buy the average ones. So say you start out able to reach 3GHz with only 10% of your processors, and you price those at $1000 while the 2GHz ones are priced at $200. Then either you improve your yields (and they do improve) or demand for the 3GHz ones stays low, and you find yourself doing what? You know the answer: selling 3GHz parts at $200, rated as 2GHz ones. Many processors out there are simply rebadged high-performing ones. What is needed is LUCK (above all) and knowledge of which processors are most likely to be the lucky ones (depending on how microprocessor wafers are produced – for example, low-frequency models of the exact same core as the top-performing ones are more likely to overclock than completely different cores).

    I was a proud owner of a Celeron 300A (300MHz). Like most of them, it ran at 450MHz comfortably – it came from the same wafers as the P2/450 and had half the cache, with a very good chance of reaching that 50% overclock. It was perfectly stable for the five years I used it – my longest stretch without a CPU upgrade – and I sold it to a friend who used it for another two years.

  20. Monker says:

    > AMD Announces Athlon Extreme OC Processor

    Both the Athlon FX and the Intel Pentium Extreme were targeted mainly at overclockers (with money to spend).

  21. foxyshadis says:

    BryanK> OTOH, the kind of single-bit errors that happen when overclocking a CPU may not happen when overclocking a GPU.  I’m not sure what the GPU looks like inside; perhaps it’s immune to those problems.  (I kinda doubt it, though, especially with vertex programs and such.)

    GPU errors usually manifest themselves as distortions, broken textures, jumpiness, and other video-related problems, since until recently cards were pretty much input-only. For most people that was acceptable in moderation. Now that they can return physics data and offload real CPU processing, I expect this is going to cause a lot more crashes as games (and window managers!) start getting "impossible information" back from the GPU.

  22. Wang-Lo. says:

    I want my Turbo switch back.

    If I have a weird problem I can switch to nominal speed and re-test before calling the help desk.

    -Wang-Lo.

  23. BryanK says:

    BFG has been doing this with video cards for a *long* time.  They also provide a lifetime warranty (which I haven’t read the fine print on, so it may not really be "forever").

    OTOH, the kind of single-bit errors that happen when overclocking a CPU may not happen when overclocking a GPU.  I’m not sure what the GPU looks like inside; perhaps it’s immune to those problems.  (I kinda doubt it, though, especially with vertex programs and such.)

  24. Miral says:

    The RAM in my PC doesn’t even work properly at the recommended BIOS voltage — it has to be raised to what my BIOS considers unsafe levels before it can achieve optimum speed.  So that’s kinda overclocking by design :)

  25. Regarding using Watson to detect overclocking, yeah I didn’t read the other link.

    However, I think it’s a bit of a cop-out to say that we can’t do this just because a few false positives would cause an outcry against Microsoft.

    First, don’t we already do this with drivers? I doubt we can tell with 100% certainty that a driver has caused a bugcheck, but we blame drivers anyway in certain circumstances.

    Second, the page the user gets redirected to could continue the feedback loop. Aside from having neutral wording to avoid a backlash, why not have a button like, "But I don’t overclock, or even know what that means!". Then we classify those Watson hits differently, both for debugging the issue and improving the logic to detect overclocking.

    You will, of course, run into people who are dishonest or who had their machines overclocked unknowingly. Nothing's perfect, but in the aggregate I think it'd be beneficial.

  26. Sarath says:

    But Intel is still sticking with, and popular for, 32-bit technology. On the other hand, AMD is taking advantage of 64-bit processors.

  27. Btw, Tony: In Vista, the kernel guys added code to do some level of detection of single-bit errors, so some of the "your memory is going bad" problems actually can be, and are, diagnosed.

  28. Mike Dimmick says:

    The people promoting overclocking seem to be assuming that there are significantly more chips produced at the high end than are actually sold, and therefore that many chips are marked down.

    Chip production just isn’t that predictable. The masks have to be very precisely aligned for the features to all reach the designed size, and that’s practically impossible. Statistically, we would expect the clock rates achieved by a sample of processors from the same production line to follow a bell curve: fat in the middle, shallow at each end. The bottom end of the curve is filled with parts that didn’t even make the lowest marketed rating – these are simply recycled. This leaves a lot at the low end, declining to very few at the top end.

    To maximise their profit, the manufacturer wants to sell each part at the highest price point that it qualifies for. So they’re not very likely to mark down specific parts, unless they’re already suffering from a glut of inventory of top-end parts and a paucity of mid-level parts. Even if they are, they’re more likely to change the prices for each speed bin, increasing the price of the mid-level part and reducing the price of the top-end part, rather than relabelling the parts, which is very hard to do.

    I am assuming that all the parts are coming from the same production line. For the Core 2 Duo, Intel are selling the slower-clocked lines with 2MB cache and the faster ones with 4MB cache. I fully expect that all parts are manufactured with 4MB cache, with 2MB of the cache being disabled on the lower lines. It’s possible that the arithmetic parts of the CPU cores were capable of going faster, but that some of the cache didn’t work properly, so you *might* be OK when you overclock it. On the whole, though, it’s not clever.
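As a purely illustrative aside, the bell-curve argument can be made concrete with a toy simulation. Every number below (the mean, the spread, the bin labels) is invented for the sake of the example and reflects no real yield data; the point is only that with a normal distribution of maximum stable clocks, the top bin ends up sparsely populated while the low bins do not.

```c
/* Toy yield simulation: draw hypothetical dies whose maximum stable clock
 * is normally distributed (made-up parameters) and count how many fall
 * into each marketing speed bin. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define PI 3.14159265358979

/* Standard normal deviate via the Box-Muller transform. */
static double gauss(void)
{
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
}

int main(void)
{
    const double bins[] = { 1.86, 2.13, 2.40, 2.66, 2.93 };  /* GHz labels */
    int counts[6] = { 0 };   /* counts[0] = didn't make the lowest bin */
    int i, b;

    srand(1);
    for (i = 0; i < 100000; i++) {
        double maxClock = 2.2 + 0.35 * gauss();   /* hypothetical die */
        for (b = 4; b >= 0; b--)
            if (maxClock >= bins[b])
                break;
        counts[b + 1]++;      /* b == -1 means the part is recycled */
    }

    printf("recycled: %d\n", counts[0]);
    for (b = 0; b < 5; b++)
        printf("sold as %.2f GHz: %d\n", bins[b], counts[b + 1]);
    return 0;
}
```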

  29. Igor says:

    Many people simply do not understand that all CPU dies come from the same wafers where the amount of bad crop is minimal. If it weren’t that way, CPU manufacturing would be too expensive.

    Some of those dies will work at a higher frequency with a lower voltage than the others (those from the center of the wafer, for example). They will be packaged as top-of-the-line products costing $350-$1000.

    Others will get a lower speed rating, be locked down at binning by using capacitors that connect the critical clock path to the ground plane, thus weakening the frequency response at higher frequencies (and causing the chip to fail if overclocked too much), and be sold for less money.

    The latest Core 2 Duo generation is simply an amazing overclocker. Not that I need it, but I can get 2.8GHz from a 1.8GHz chip without even raising the core voltage. No overheating, no crashes. Rock solid.

    On Raymond's question of how to detect overclocking — the easiest way is to parse the CPU brand string and extract the rated frequency so you can compare it with the measured one (the one Windows stores in the registry under the ~MHz key). Read Intel application note AP-485 for details.
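A minimal sketch of the check Igor describes, assuming the Microsoft compiler (for the __cpuid intrinsic) and a brand string that ends in the customary "x.xxGHz" form described in Intel's AP-485 application note. The 5% threshold and the near-absence of error handling are arbitrary choices for the example, not part of any real detection logic.

```c
/* Sketch: read the rated frequency from the CPUID brand string
 * (leaves 0x80000002-0x80000004) and compare it with the speed Windows
 * measured at boot, stored under the ~MHz registry value.
 * Link with advapi32.lib for the registry calls. */
#include <windows.h>
#include <intrin.h>
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char brand[49] = { 0 };   /* 48 characters plus a terminator */
    int regs[4];
    int i;

    /* The brand string comes back 16 bytes per CPUID leaf. */
    for (i = 0; i < 3; i++) {
        __cpuid(regs, 0x80000002 + i);
        memcpy(brand + 16 * i, regs, sizeof(regs));
    }

    /* Rated speed: the string conventionally ends in "... x.xxGHz". */
    double ratedGHz = 0.0;
    char *ghz = strstr(brand, "GHz");
    if (ghz != NULL) {
        char *p = ghz;
        while (p > brand && (isdigit((unsigned char)p[-1]) || p[-1] == '.'))
            p--;
        ratedGHz = atof(p);
    }

    /* Speed Windows recorded at boot. */
    HKEY key;
    DWORD mhz = 0, cb = sizeof(mhz);
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
            "HARDWARE\\DESCRIPTION\\System\\CentralProcessor\\0",
            0, KEY_READ, &key) == ERROR_SUCCESS) {
        RegQueryValueExA(key, "~MHz", NULL, NULL, (LPBYTE)&mhz, &cb);
        RegCloseKey(key);
    }

    printf("Rated: %.2f GHz, measured at boot: %lu MHz\n",
           ratedGHz, (unsigned long)mhz);
    if (ratedGHz > 0.0 && mhz > ratedGHz * 1000.0 * 1.05)
        printf("Running more than 5%% above the rated speed.\n");
    return 0;
}
```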

  30. mitch says:

    I overclocked my Intel Hyper-Threading 3.0GHz to 3.6GHz. My system doesn't crash and doesn't seem to have any problem with that, so it seems overclocking sometimes goes right.

  31. Dean Harding says:

    The problem with overclocking is that it’s a numbers game. For one person, you might not see a crash in 99.9% of your days. And when a crash does happen, 99.9% of the time, it’s pretty benign. So you might be able to run for years and years without any problem.

    But then, multiply that 0.001% times 100 million users, and Microsoft gets to see crashes every 5 minutes. As Larry says, "one in a million is next Tuesday."

    http://www.jumbojoke.com/is_999_good_enough_36.html

  32. Norman Diamond says:

    Multiply that by all the crash reports that come in, and it quickly becomes impractical.

    Thank you very much for this information.  When people assert that I’m doing something wrong because no one else gets the crashes I get, now I can refute them.  Crashes are so numerous that it isn’t even practical for Microsoft to analyse the amount of their own reports that come in.  Thank you.

    (No complaint here.  No sarcasm here.  Thank you.)

  33. SuperKoko says:

    Modern processors are "overclocked" even at the manufacturer's own frequencies (at least for AMD).

    For example, my brother has an AMD Athlon 2400+ CPU, normally running at 2GHz. He had to underclock it to 1.6GHz, because it gets too hot and shuts down as soon as some CPU-intensive task runs for more than a few seconds.

    The CPU fan is huge, very fast, and there is enough thermal paste between the fan and the CPU.

    I guess that at this stage he could send it back to AMD.

    On the other hand, I remember that my good old 90MHz Pentium once had its CPU fan stop turning for several weeks (I can only guess… the computer had become silent during that time), and I didn't notice it at all until I opened the computer for an unrelated reason. So, even without a working fan, the CPU had no problem.

    So, IMHO, nowadays all processors are pushed up to their upper working frequency… and sometimes above.

    I would not like to use a computer with a higher overclock.

  34. Norman Diamond says:

    Thursday, November 09, 2006 2:33 PM by Aaron

    And I think Intel is probably a little better at finding said chip problems at their factory than you are on your home PC.

    Well, 1/1 of the time anyway (0.9999736 of the time).

    Though notice that after Intel overcame their initial garbage response to that bug, they issued bugfix releases without charge.

    Of course overclocking is still silly.  It’s still asking for more trouble not less.

  35. Aaron says:

    It’s kind of funny how many people try to justify overclocking in the general sense by saying "it worked for me!"  That’s testimonial evidence, the same kind used by people selling 6-second abs, coffee enemas, and get-rich-quick schemes.  Yes, it *does* work for some people – maybe a lot of people – but that doesn’t make it a good idea.

    And the notion that Intel or AMD would intentionally underclock perfectly good chips and rebrand them as slower models to sell at a cheaper price is simply ludicrous.  It makes no economic sense even as a market-segmenting scheme with price differences of $20-$50, but a price difference of several hundred bucks leads to one inescapable conclusion: the cheaper chips were the factory rejects, underclocked/rebranded because they failed a bunch of tests at the higher clock speed.  And I think Intel is probably a little better at finding said chip problems at their factory than you are on your home PC.

    Anyway, the real speed bottlenecks are memory, video, and disk.  Yes, cache takes care of a lot of that, but let’s not throw Amdahl’s Law out the window here.  I think a lot of people OC just because they can (or believe they can), not because there’s any significant performance improvement.  That’s basically the punchline of the BBSpot joke.

    I would have assumed that this was a joke if it weren’t posted on Gateway’s site for all to see.  Sad.

  36. jsm says:

    "Yes, it *does* work for some people – maybe a lot of people – but that doesn’t make it a good idea."

    In your opinion.

    And yes it does make economic sense to "rebrand" some higher-performing chips as lower-end models, IF it’s true that most of the chips coming off a given production line have similar performance characteristics.  Now, if this premise is not true, then how is it possible that so many people are able to overclock without problems?

    And the performance difference (which really can be substantial for some CPUs) is only half the story:  it's the price/performance ratio that really matters.  The proof of this lies in the fact that the price/performance ratio for a given line of CPUs almost always increases as a function of performance (rather than remaining flat).

    Finally, consider that selling PCs is typically a very low-margin business, and such vendors go to great lengths to reduce their other costs (such as support) as much as possible.  Gateway has apparently concluded that the increased sales and/or increased margins from overclocked systems will outweigh, on average, all additional support costs they will incur as a result of overclocking.  Considering how thin their margins probably are, that’s a pretty strong statement.

  37. James says:

    "First, don’t we already do this with drivers? I doubt we can tell with 100% certainty that a driver has caused a bugcheck, but we blame drivers anyway in certain circumstances."

    Yep… even when the "driver" in question is NTFS.SYS…

    Larry: Are you sure that memory error detection is new to Vista? I’ve seen that error from XPSP2 before; probably related to KB315223? (I’m not quite sure *how*, since the machine in question has non-ECC RAM and passed long memory soak testing afterwards, but…)

  38. Norman Diamond says:

    Saturday, November 11, 2006 4:01 PM by James

    > Larry: Are you sure that memory error detection is new to Vista?

    Vista’s boot menu includes a memory tester which might or might not be copied from popular Linux boot menus.  This part of it is new to Vista from some points of view.

    > I've seen that error from XPSP2 before; probably related to KB315223? (I'm not quite sure *how*, since the machine in question has non-ECC RAM and passed long memory soak testing afterwards, but…)

    Some single-bit errors in RAM yield "impossible" results which can still be detected as "impossible" even though the reason is just a guess.  Cosmic rays can yield single-bit errors even in non-ECC RAM, e.g. a 4 might change to a 5 though it doesn’t change to a parity error.

  39. James says:

    "Some single-bit errors in RAM yield "impossible" results which can still be detected as "impossible" even though the reason is just a guess.  Cosmic rays can yield single-bit errors even in non-ECC RAM, e.g. a 4 might change to a 5 though it doesn’t change to a parity error."

    Which would explain why I got what appeared to be a ‘false positive’ memory error detected: probably a rogue driver corrupting something within Windows, which then *assumes* – wrongly, in this case – the corruption was a hardware issue. I wonder if Vista is more accurate about diagnosing this – and if so, how? (Better write-protection of certain pages would distinguish the two, presumably, assuming you avoid aliasing.)

    Given other recent problems, I’d point the finger at the Realtek 8139 driver which ships with XP; until I replaced that, I couldn’t even log on with Verifier options enabled!

Comments are closed.

