Date: | December 2, 2004 / year-entry #408 |
Tags: | history |
Orig Link: | https://blogs.msdn.microsoft.com/oldnewthing/20041202-00/?p=37153 |
Comments: | 32 |
Summary: | The story behind the 55ms timer tick rate goes all the way back to the original IBM PC BIOS. The original IBM PC used a 1.19MHz crystal, and 65536 cycles at 1.19MHz equals approximately 55ms. (More accurately, it was more like 1.19318MHz and 54.92ms.) But that just pushes the question to another level. Why 1.19...MHz,... |
The story behind the 55ms timer tick rate goes all the way back to the original IBM PC BIOS. The original IBM PC used a 1.19MHz crystal, and 65536 cycles at 1.19MHz equals approximately 55ms. (More accurately, it was more like 1.19318MHz and 54.92ms.)

But that just pushes the question to another level. Why 1.19...MHz, then? With that clock rate, 2^16 ticks equals approximately 3600 seconds, which is one hour. (If you do the math it's more like 3599.59 seconds.) [Update: 4pm, change 2^32 to 2^16; what was I thinking?]

What's so special about one hour? The BIOS checked once an hour to see whether the clock had crossed midnight. When it did, it needed to increment the date. Making the hourly check happen precisely when a 16-bit tick count overflowed saved a few valuable bytes in the BIOS.

Another reason for the 1.19MHz clock speed was that it was exactly one quarter of the original CPU speed, namely 4.77MHz, which was in turn 4/3 times the NTSC color burst frequency of 3.58MHz. Recall that back in these days, personal computers sent their video output to a television set. Monitors were for the rich kids. Using a timer related to the video output signal saved a few dollars on the motherboard. Calvin Hsia has another view of the story behind the 4.77MHz clock.

(Penny-pinching was very common at this time. The Apple ][ had its own share of penny-saving hijinks.)
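If you want to check the arithmetic yourself, here is a minimal sketch in C. It assumes the usual 14.31818MHz master crystal divided by 12 for the timer input; the variable names are just for illustration.

    #include <stdio.h>

    int main(void)
    {
        double pit_hz = 14318180.0 / 12.0;  /* timer input: 14.31818MHz / 12, about 1.19318MHz */
        double tick_s = 65536.0 / pit_hz;   /* one BIOS tick = 2^16 timer-input cycles */
        double hour_s = 65536.0 * tick_s;   /* 2^16 ticks = 2^32 timer-input cycles */

        printf("tick length: %.4f ms\n", tick_s * 1000.0);      /* ~54.9254 ms */
        printf("tick rate:   %.4f per second\n", 1.0 / tick_s); /* ~18.2065 */
        printf("2^16 ticks:  %.2f s\n", hour_s);                /* ~3599.59 s, i.e. roughly one hour */
        return 0;
    }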
Comments (32)
It only checked the time once an hour? So if I turned on my computer at 11:50 PM, it wouldn’t know to update the date until 12:50 AM. So if I looked at the date at 12:05 AM or 12:30 AM, it would be wrong. Is this true?
G. Man
I believe that is true, but remember, the original PC didn’t have a battery-operated clock as more recent PCs do. It reset to midnight (on, as I recall, 1-Jan-1982) when you turned it on. You then had to set the time & date manually, if you cared about them being right. Most people didn’t bother, so most of the time, the clock was wrong anyway.
"So if I turned on my computer at 11:50 PM, it wouldn’t know to update the date until 12:50 AM."
Obviously you do some work to set the initial conditions so that the check occurs on the hour.
Actually, the original PC had a 14.318180 MHz crystal. This frequency was chosen because it’s useful for generating composite video for the CGA adapter.
This frequency was divided by 3 in hardware for the CPU clock (the 8088 wanted a clock that was high for 1/3 of a cycle, and low for 2/3 of a cycle). The CPU clock was further divided by 4 in hardware to feed the Intel 8253 timer chip, which couldn’t run at 4.77 MHz.
Someone at IBM (in their short-sighted wisdom) decided to program the clock chip to divide the input frequency by 65536 (the maximum possible), then interrupt the CPU. This resulted in about 18.2 interrupts per second (or about 55 milliseconds between interrupts).
At this frequency, there would be 1573042 interrupts per day. The original IBM BIOS got it wrong, and reset the counter to zero after 1573040 ticks. Various clone vendors got it wrong and programmed the timer chip to interrupt after 65535 ticks instead of 65536 ticks. Generally, people didn’t notice these issues because the crystals weren’t that accurate anyway.
Anyone who used the BIOS calls (INT 1Ah) instead of the DOS calls had to convert these weird tick counts to seconds themselves. Many programmers had simply memorized the number 18.2 ticks per second (which is just an approximation), resulting in yet more error.
Overall, life would have been better if IBM had originally programmed the timer to interrupt after 59659 hardware ticks. This would have produced an interrupt 20 times per second, which would have been more convenient for everyone, with very little change in performance. (Isn’t 20/20 hindsight wonderful?)
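Jim Lyon's 20-per-second suggestion boils down to loading a different divisor into the timer chip. As a rough sketch only: the ports and the 0x36 control word below are the standard PC sequence for counter 0, and outb is just a placeholder for whatever port-output routine your environment provides.

    #define PIT_CTRL 0x43   /* 8253 control/mode register */
    #define PIT_CH0  0x40   /* 8253 counter 0 (the BIOS tick) */

    /* Placeholder for the platform's port-output routine. */
    extern void outb(unsigned short port, unsigned char value);

    /* Load a new divisor into counter 0. A divisor of 0 means 65536,
       which is what the BIOS used to get the 18.2Hz / 55ms tick. */
    void set_timer_divisor(unsigned int divisor)
    {
        outb(PIT_CTRL, 0x36);                 /* counter 0, lobyte/hibyte, mode 3 */
        outb(PIT_CH0, divisor & 0xFF);        /* low byte first */
        outb(PIT_CH0, (divisor >> 8) & 0xFF); /* then high byte */
    }

    /* set_timer_divisor(59659) would give 1193182 / 59659, about 20.0 ticks per second. */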
It’s silly, but I’m hoping the timing will work out to make it possible for some vendor to offer a 4.77GHz ’25th Anniversary Edition’ PC in 2006.
It’s amazing how far and how fast we’ve come. The thought of CPUs in the single digit MHz range or having to use a TV because a monochrome monitor was too expensive is laughable by today’s standards. I guess in 20 years I’ll be flabbergasted that I ever used a computer under a terahertz. :)
I remember being totally stoked when I got a monitor for my C=128. No more fuzzy TV images for me, and I got EIGHTY COLUMNS! ;)
As Jim Lyon pointed out, the original PC had no real-time clock chip, so upon booting, a stock PC had no idea what time it was. It was common in those early days to start your AUTOEXEC.BAT with a DATE command, which would ask the user to input the date and time. By default the date was set to 1/1/1980 (not 1982), which is a Tuesday, a fact I’ll never forget because of seeing "Tuesday, January 1, 1980" displayed about a million times.
I am 98% sure that Mr. Hsia’s account of the reason behind 55ms is correct, and the fact that 2^16 (not 2^32) ticks works out to ABOUT an hour is coincidence. I never worked at IBM, but I did work at Compaq and Dell in the early days, and this was the common wisdom amongst that circle of engineers. Also, I (hazily) remember an interview in Byte with an IBMer from the original PC design team, and I believe he confirmed the origins of 4.77 and 1.19 as being convenient divisions of the 14.318 MHz crystal already necessary to drive the video circuitry.
I make it 3599.59 seconds too:
x = 1.19318 MHz = 1.19318 × 10^6 ticks/s
y = 2^32 ticks = 4.294967296 × 10^9 ticks
y / x = (4.294967296 / 1.19318) × 10^3 s ≈ 3599.59 s
How do people get that 2^16 ticks makes an hour?
2^16 ticks of the 55ms timer makes an hour. According to Google: (1 hour) / (2^16) = 54.9316406 milliseconds
Interesting story, but mostly this explains why DOS ran the timer at 55ms, not why Windows 95 did. Of course, by the time Windows 95 came around we had to deal with everyone and their brother reprogramming the clock for their own purposes and assuming that no one else was doing the same thing in some other VM out of their sight … thus the VTD was born, but that’s another story.
Why does NT run the timer at 10ms when you’ve got one processor, but at 15ms when you’ve got multiple processors (physical or virtual)?
The 1.19..MHz was not chosen because of the division ratio to 1 hour; that’s just a coincidence. Nor was it chosen because of cheapness – in large quantities it doesn’t matter whether you order a 1.19 or a 1.234 MHz crystal. It was chosen simply because 3.579545 MHz (and its sub/multiples) is one of the most common crystal frequencies in the world (even today), and it’s simply easier to get: order a 3.579545 MHz crystal from a manufacturer and you get it the next day; order a 3.500000 MHz crystal and you have to wait something like 5-7 weeks for delivery. 3.579545 MHz is the color subcarrier frequency of NTSC. Divide by 31 and you get roughly 115200 (the RS232 baud rate). Multiply by 4 and you get 14.318180, the main clock source in the IBM PC (Jim Lyon is correct). Read about the NTSC color subcarrier to understand why this frequency was chosen (the short story: because of its low interference with the monochrome TV signal).
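To see how the frequencies in that comment hang together, here is a small C sketch (note that the divide-by-31 figure only lands near 115200, which is within what a UART will tolerate; the names are purely illustrative).

    #include <stdio.h>

    int main(void)
    {
        double colorburst = 3579545.0; /* NTSC color subcarrier, Hz */

        printf("master clock: %.0f Hz\n", colorburst * 4.0);        /* 14318180 (14.318MHz) */
        printf("CPU clock:    %.0f Hz\n", colorburst * 4.0 / 3.0);  /* ~4772727 (4.77MHz) */
        printf("timer clock:  %.0f Hz\n", colorburst * 4.0 / 12.0); /* ~1193182 (1.19MHz) */
        printf("divide by 31: %.0f Hz\n", colorburst / 31.0);       /* ~115469, close to 115200 baud */
        return 0;
    }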
What is the significance of 2^32 cycles? Does it have something to do with Win32 using 32 bits for addresses?
@ashwin: 2^32 has been a common value for far longer than win32 has existed. 16 bits can only cover 64k of RAM, and 32 is the next larger number in that sequence. Notice that nobody ever talks about 24 bit processors.
Vorn
AFAIK 80286 had 24 bit physical address space – you could address up to 16MB of physical RAM in protected mode :-)
Raymond: The link to Calvin Hsia is a relative link, so when I read this entry on Bloglines.com it went to a 404 page.
Since when is it illegal to use relative links on a web page?
"Since when is it illegal to use relative links on a web page?"
I believe the announcement was made last Thursday. :)
This is really a crazy world. How can anybody understand all this crazy stuff all around? It’s so meaningless, but in one way it’s fantastic!
The early Amigas had a similar arrangement. The oscillator runs at ~28 MHz, the pixel clock runs at half or a quarter of that depending on screen mode, the processor runs at a quarter of that, the memory bus runs at an eighth, the colour clock runs at some other fraction I don’t remember, and the "E-clock" used by 6800-style chips runs at a fortieth. The programmable timers are in two CIA chips connected to the E-clock, and the OS sets one of them to a sensible 1/50 second (or the best approximation it can manage, anyway). A lot of regular interrupt handlers are connected to the vertical blank interrupt, though, making them dependent on the screen mode. Some people made the mistake of using that for music play-routines.
Sorry, this is really quite irrelevant, isn’t it?
@Ben
I’d been resisting mentioning the Amiga, but now that you’ve broken the ice I’ll carry on… The odd thing about the Amiga was that while everything was based on multiples of the NTSC colour clock, the PAL/SECAM colour clock is almost 1MHz faster than NTSC, so the 7.14MHz CPU clock seemed a bit arbitrary for us Europeans. We could have easily got away with running at 8MHz (like the Atari ST), but for some reason those crazy hippies didn’t think of it. (Even the A1200 ran at 14MHz, long after saving a couple of pennies on a crystal was relevant.)
Mat: There’s a delicate balance between audio, floppy, and video DMA timing that a significant change to the custom chips’ clock frequency will upset. As for the CPU, in general a 68000 hits the memory bus every 4 processor cycles, so as long as the video display is set to low-res (140ns pixel clock) with <=4 bitplanes or high-res (70ns pixel clock) with <=2 bitplanes then Agnus and the 68000 use alternate memory cycles and the CPU hardly ever has to wait. If the processor frequency is just slightly higher then the connection from the custom chips to the CPU bus becomes more complicated for no gain: the CPU just spends more time waiting.
The A1200, however, is a botch job. Until you add some fast memory (for non-Amiga-users that’s memory that isn’t controlled by the custom chips), the CPU is running at about half speed thanks to memory contention. The underclocking of the CPU really makes little difference. You really need to expand it to get it running reasonably fast, and since it only has one expansion slot you may as well get an accelerator rather than just expanding the memory.
That’s pretty bad that bloglines didn’t bother to consider relative links.
It’s just a heads up. I’ll whine to Bloglines about it. :-)
12/2/2004 5:46 PM Vorn
> Notice that nobody ever talks about 24 bit
> processors.
Hmm. Question to the history channel, were there any? I used a machine where 24-bit integer arithmetic required bignum algorithms on two 12-bit words, but I don’t recall any native 24-bit words.
12/2/2004 10:25 PM Qbeuek
> AFAIK 80286 had 24 bit physical address
> space – you could address up to 16MB of
> physical RAM in protected mode :-)
The address space is separate from the integer value space. The 80286 still provided machine instructions to do arithmetic on 8-bit or 16-bit values. Subsequent versions added 32-bit and 64-bit but not 24-bit operands.
Regarding address spaces though, the IBM 360 also had a 24-bit physical address space. The top 8 bits of a register did not participate in addressing, so software used them for other things, mostly the single top bit but some used others among those bits too. The size of the address space didn’t change whether protection was used or not. There actually were some models without protection — that was kind of like having multiple concurrent users logged into Windows 95 or 98 and letting them corrupt each other’s memory besides just letting them corrupt the kernel’s memory.
There have been a ton of native 24-bit processors. It’s an extremely common word size for audio and video processors.
Accuracy is how close you are to the correct answer; precision is how much resolution you have.