Date: November 6, 2006 / year-entry #374
Tags: other
Orig Link: https://blogs.msdn.microsoft.com/oldnewthing/20061106-01/?p=29123
Comments: 62
Summary: Representatives from the IT department of a major worldwide corporation came to Redmond and took time out of their busy schedule to give a talk on how their operations are set up. I was phenomenally impressed. These people know their stuff. Definitely a world-class operation. One of the tidbits of information they shared with us...
Representatives from the IT department of a major worldwide corporation came to Redmond and took time out of their busy schedule to give a talk on how their operations are set up. I was phenomenally impressed. These people know their stuff. Definitely a world-class operation.

One of the tidbits of information they shared with us is some numbers about the programs they have to support. Their operations division is responsible for 9,000 different install scripts for their employees around the world. That was not a typo. Nine thousand.

This highlighted for me the fact that backwards compatibility is crucial for adoption in the corporate world. Do the math. Suppose they could install, test, and debug ten programs each business day (in my opinion, a very optimistic estimate). Even at that rate, it would take them three years to get through all their scripts.

This isn't a company that bought some software ten years ago and doesn't have the source code. They have the source code for all of their scripts. They have people who understand how the scripts work. They are not just on the ball; they are all over the ball. And even then, it would take them three years to go through and check (and possibly fix) each one.

Oh, did I mention that four hundred of those programs are 16-bit?
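For concreteness, here is that back-of-the-envelope math as a small sketch; the ten-per-day rate is from the post, while the figure of roughly 250 business days per year is my own assumption.

```python
scripts = 9000
scripts_per_day = 10                        # the post's "very optimistic estimate"

business_days = scripts / scripts_per_day   # 900 business days of install/test/debug work
years = business_days / 250                 # assume ~250 business days per year

print(f"{business_days:.0f} business days, about {years:.1f} years")
# -> 900 business days, about 3.6 years -- the "three years" in the post, give or take
```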
Comments (62)
Comments are closed. |
Shouldn’t this be reason enough to use some sort of technology (virtualization, emulation, whatever) to support Win16 on x64 OSes?
Maybe, Kope R., but who can say for sure without seeing their programs in action and knowing how they’re being used?
Virtualisation is great but it isn’t magic.
This is why I’m uncomfortable when MS announces some wonderful feature or API for Windows XP (or Vista, now).
The fact is that the market penetration of Windows XP is about 2% – if that. Over 90% of PC users use Windows, full stop. Not Windows XP or 98 or 95 or 2000 or NT: just "Windows". If you don’t believe me, ask a random PC user.
Application software vendors have to follow a simple rule: however tempting and time-saving a feature is, unless it’s available for Windows, it doesn’t exist.
I live in the "third world" and indeed Windows 98 is still very common here, but I think that 2% for XP is way too low. Does anyone have hard data on that?
I think Joseph means that only 2% of people would specifically say they’re using "XP", whereas 90% of people would say they are using "Windows" without qualifying which version.
9,000 scripts?!?
Man… that’s a lot of scripts.
I just hope it’s not just hype and that those aren’t just login scripts :)
You know… "we have 9,000 scripts installed all over the world and our staff knows how they work".
I believe it. My company has fewer than two hundred employees in three locations, and we easily have over 30 installer packages. Imagine when your company is in over two hundred locations with tens if not hundreds of thousands of employees.
Did anyone notice this is post #999999?
It’s very difficult to get a market penetration/usage share that can be trusted. However, you can find statistics on the OS reported by the web browser when hitting websites monitored by a number of companies. For example, http://marketshare.hitslink.com/report.aspx?qprid=2 (shows 84.6% Windows XP) and http://www.onestat.com/html/aboutus_pressbox46-operating-systems-market-share.html (86.8%).
These statistics can only, of course, go by what the browser reported (it could be lying/had its User-Agent string changed) and only the sites that those suppliers monitor.
However, these statistics have, I believe, had duplicate IP addresses removed, which will disproportionately favour minority systems behind NATs (since any number of Windows XP systems behind a NAT with a single public IP address will be counted as one system, while of course any number of Macs behind the same NAT will also be counted as one, leaving a real statistic of (say) 4:1 being reported as 1:1).
400 16-bit scripts? Sounds like a job for GW-BASIC!
Oh wait…
This is the reason that Windows is accused of being bloatware so often. I really wish we could just draw a line in the sand today and refactor the whole thing, maintaining compatibility only with apps currently shipping. As this post so aptly illustrates, that ain’t ever gonna happen :(
"I really wish we could just draw a line in the sand today and refactor the whole thing, maintaining compatibility only with apps currently shipping."
Shipping by whom? Every one of those 9K scripts / apps is currently shipping. They’ve got the code; they can do maintenance. And as we’ve seen recently, even current apps can still take advantage of old behavior.
"Even at that rate, it would take them three years to get through all their scripts."
Tell me about it. I am a lone developer who has developed 250K lines of production VB6 code. Do the math on that!
There was a time when I placed a lot of trust in Microsoft. Fool me once, shame on you. Fool me twice, shame on me!
I know that this is not the point of the blog but I have a related question…
“…install, test and debug ten programs each business day…”
Looks like they try to control everything from an all-powerful control center, which, of course, has, and can only have, very limited resources.
Is centralized IT management the only way, or do people just feel better this way? Why can’t big companies work like a bunch of smaller ones? Why do they end up with an overstressed central management?
We are developers; we tend to split complexity into manageable and distributable routines. Can’t they do the same?
What’s odd is that, every time someone suggests adding another script language, people bring this up as if it were relevant. You can be backwards compatible and still add new stuff. It’s only when you change current stuff that you get in trouble.
"It’s very difficult to get a market penetration/usage share that can be trusted. However, you can find statistics on the OS reported by the web browser when hitting websites monitored by a number of companies."
While those are interesting results, I’ll be the first to point out that relying on web browser stats as a basis for OS usage is very unreliable. This is especially the case given that many machines aren’t even connected to the internet, either because it’s entirely outside the scope of the job or because there’s too much risk of an old OS that no longer receives security updates being hit with a worm (and too few resources to bother firewalling and trying workarounds for every last exploit).
Of course, if one is using an OS that old, one is unlikely to update that machine any time soon. But eventually those machines will die and the new machines will simply not work with the old OS. Virtualization may be the answer for many companies. One can only hope it’s able to fill in where OS backwards compatibility falters.
"Um, conservation of mass. Breaking up the IT department into a bunch of smaller IT departments doesn’t increase the amount of resources. It just spreads it out differently."
I assumed that they might have trouble with those scripts because they want to test (and not only test) them all in a centralized fashion, to make sure they work in lots of scenarios, including some not really needed, and so they end up doing extra work compared with more localized (and better-informed) testing or other operations. Since this is a big company, I assumed those 9,000 scripts aren’t one flat domain but are split across specific places and/or needs. At each place you have some people who know them better. If you have 100 places, with 3 people in each who can work on this daily, and they generally need 3 days per problem, you can ideally do 100 a day :).
But, well, they know their problems better. Maybe the problems they have can’t really be localized. Maybe they do this localization very well from the centralized IT center. Maybe they are simply doing the optimal work considering how much that company wants to invest in IT, and if there were more spending they would have no problem scaling their current model and getting to 100 a day.
If you give us more details on how they do their work and where their restrictions come from, we might find that they really are doing "a world-class operation" :).
"If you have 100 places, 3 people who can do this on daily basis in each, and they generally need 3 days for each problem you can ideally do 100 a day"
You may as well just hire 300 people for your central location. Even if you "re-used" workers and didn’t hire new ones at each location, you’re taking these other workers off other projects for 3 months at 100 per day… That’s kind of significant.
This is why I wish the 64-bit transition had been used as an opportunity to clean up Windows and switch legacy users over to some sort of virtualization: we see no benefit and considerable cost to much of that backwards compatibility (e.g., imagine how different the world would be if Microsoft weren’t afraid of breaking compatibility for things which are known to produce insecurity and instability), but we have to deal with it for the sake of very conservative large companies. I’d really prefer a world in which Microsoft allowed people to virtualize bizarre legacy code and instead used each operating system release to fix the most pressing design problems from the previous release.
Even with tremendous attention paid to backward compatibility, some things will inevitably break, particularly when security becomes more stringent. E.g., going from XP RTM to XP SP2, going from Server anything to Server 2003, and from *anything* to Windows Vista. And the bulk of the burden of finding ways to get these apps to work falls on the sysadmins, rather than developers. Changing system configuration (e.g., an IniFileMapping or an ACL change) is a much more efficient process than changing source code. Even in cases where you have the source for these older programs and an available developer, they were usually built with tools that are not as readily available as Visual Studio 2005.
(Can I mention here that LUA Buglight works on 16-bit as well as 32-bit apps? http://blogs.msdn.com/aaron_margosis/archive/2006/08/07/LuaBuglight.aspx)
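To make the configuration-over-code point above concrete: IniFileMapping is the registry mechanism that redirects the old profile APIs (GetPrivateProfileString and friends) for a named .ini file into the registry. Below is a minimal sketch of adding such a mapping; the ini name, section, and target key are hypothetical, and the key layout is from my reading of the documentation, so treat it as illustrative rather than authoritative.

```python
import winreg

# IniFileMapping redirects GetPrivateProfileString/WritePrivateProfileString
# calls for a named .ini file into the registry instead of the file system.
MAPPING_ROOT = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\IniFileMapping"

def map_ini_section(ini_name, section, target):
    r"""Redirect one section of an .ini file into the registry.

    ini_name -- e.g. "myapp.ini"  (hypothetical)
    section  -- e.g. "Settings"   (hypothetical)
    target   -- e.g. r"USR:Software\MyCompany\MyApp"  (USR: = per-user hive)
    """
    subkey = MAPPING_ROOT + "\\" + ini_name
    # Writing under HKLM requires administrative rights.
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, subkey,
                            0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, section, 0, winreg.REG_SZ, target)

# Hypothetical usage (run elevated):
# map_ini_section("myapp.ini", "Settings", r"USR:Software\MyCompany\MyApp")
```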
It is however really annoying that 16-bit apps won’t run on Win64. I run into that one occasionally, simply because older apps frequently used 16-bit installers (even when the app itself was 32-bit).
Thus far I’ve been able to work around it by installing on a Win32 system and then copying the results back to my Win64 system, but that only works for simple apps that don’t have to do any registration or anything.
Ok, so V86 mode doesn’t work on X64; that just means that code has to be run with CPU emulation instead. But it still ought to be executable.
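One cheap way to triage a pile of old setup programs before they ever reach a Win64 machine is to check whether the setup EXE is a 16-bit NE image rather than a 32/64-bit PE. A rough sketch follows; the file path is just a placeholder, and anything that isn't an NE image is simply treated as "not 16-bit Windows":

```python
import struct

def is_16bit_windows_exe(path):
    """Return True if the file looks like a 16-bit NE executable (won't run on Win64)."""
    with open(path, "rb") as f:
        dos_header = f.read(64)
        if len(dos_header) < 64 or dos_header[:2] != b"MZ":
            return False                 # not a DOS/Windows executable at all
        # Offset 0x3C of the DOS header holds the offset of the next header:
        # "NE" for 16-bit Windows, "PE\0\0" for 32/64-bit Windows.
        (e_lfanew,) = struct.unpack_from("<I", dos_header, 0x3C)
        f.seek(e_lfanew)
        return f.read(2) == b"NE"

# Hypothetical usage:
# print(is_16bit_windows_exe(r"C:\old-apps\SETUP.EXE"))
```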
Yeah, Microsoft would be like Apple, and nobody would be using Windows, either.
Miral: True, but Microsoft has done tremendous amounts of work to solve this problem. They even have 32-bit emulation of several popular 16-bit installers (old InstallShield, ACME). I was amazed that our own 16-bit InstallShield setup launcher worked on 64-bit – I wanted to use this excuse to throw it out.
http://support.microsoft.com/kb/282423
[quote]
It is however really annoying that 16-bit apps won’t run on Win64. I run into that one occasionally, simply because older apps frequently used 16-bit installers (even when the app itself was 32-bit).
[/quote]
Yes. For example, as far as I know, the e-banking program Bank of China distributes to our accounting department still uses a 16-bit installer built for Win3.1, so it just won’t install anywhere under the "Program Files" directory.
I can tell what the problem is just from its appearance, but the other technical staff in my company don’t expect it, so they can’t solve the installer error and leave it until I come back from vacation to install it.
Speaking of 16-bit applications, didn’t Windows NT already contain x86 emulation for them on non-x86 architectures (back when it still ran on MIPS, Alpha, and PowerPC)? Couldn’t this be used on AMD64, too?
I’m confused — just yesterday I ran dosbox on Linux on an AMD64 CPU under a 64-bit kernel (so the CPU came up in 64-bit mode). And the 16-bit DOS programs it was running were working fine.
Now, it’s possible that it only worked because it was a 32-bit process (at the time, I wasn’t running the 64-bit userspace stuff). But if that’s the case, then couldn’t Windows do the same thing — run 16-bit stuff under a set of 32-bit pages?
Or does dosbox do this emulation completely in software? I suppose that’s the other possibility.
[Virtualization is not seamless. -Raymond]
Betcha I could run [random windows OS] under XEN 3.0 on a newer AMD chip and have it be seamless. The [random windows OS] probably wouldn’t even notice.
What irritates me with x64 is that you can’t run console applications in full-screen mode. Why is that so, Raymond?
God, how I dislike hyper-productive VB developers; I wish VB never existed. A few days ago we optimized a piece of code that does 3D reconstruction to take less than 30 seconds. Then we gave a DLL to the VB programmers who work on the GUI part. One of them alone managed to slow it down to 60 seconds — months of our GPGPU research trashed in one day.
When we checked his code, it turned out that he had used the (otherwise very fast) MatrixTranspose() function twice as a shortcut to pad a 2D array before calling our function, instead of writing some padding code on his own.
BryanK: DosBox does its emulation entirely in software. Which is why my old DOS CAD program that ran fine natively on a 33MHz 486 was just horribly painfully slow on my 200MHz PPro Windows machine under DosBox. (It’s just about back to usability on a 1.1GHz machine, but still just maintaining a 10-frame-per-second video buffer at 800×600 resolution nearly pegs the CPU.)
What do these 9,000 scripts do? Are some redundant?
How involved are they? When do they run?
Are they really that competent if they locked themselves in in this manner?
Stephan
Perhaps they should spend some of the time they currently use to support the 9,000 scripts to replace the smallest apps (i.e., the low-hanging fruit) with websites.
This problem of operating system incompatibility will never arise. Ever.
Mac OS vs. Windows regarding backwards compatibility: http://realitydistortionfieldtheory.blogspot.com/2006/10/macos-x-vs-windows-investmentadoption.html
Q: Going from the 8080 (8-bit) to the 8086/80286 (16-bit) to the 80386/80486 (32-bit), etc., have the underlying assembly instructions changed (expanded beyond the initial instruction set) in ways that really require the additional word size?
Unless they upgrade their servers?
I would ask: why would a company with 9K programs change their OS? If it’s working for them now, just as the scripts are working, just leave it alone. :)
Congratulations :)
http://linux.slashdot.org/article.pl?sid=06/11/14/1557209
The previous place I worked was stuck on an old level of Access, because newer levels broke an important program / suite. I have no idea what the problem was, I was working in another area.
What industry was this company in?
Ah, 16-bit Windows applications. I know Wine (a sort of Windows API compatibility layer for Linux and other Unix-like systems) supports 16-bit applications (though that code’s probably suffering from major bit-rot by now, and I’m not sure how well it worked anyway).
Which leaves the interesting question: if someone managed to port Wine to Windows, could it run 16-bit applications on 64-bit versions of Windows? (After all, it can run them on Linux, and that doesn’t have much/any native 16-bit support AFAIK…)
It’s already available for Windows…
http://www.winehq.com/site/download
To anyone who wishes Windows would stop putting so much effort into backwards compatibility: just stop using Windows! There are other operating systems (which I won’t mention ;) ) that don’t force you to have a backwards compatible system if you don’t want it, cutting the bloat right out.
IBM did something a few years ago when they created the AS/400. They made the box backwards compatible with their System/38. Later they moved the AS/400 from 32-bit to 64-bit. Virtually all applications moved as is… it was really pretty amazing stuff, and they don’t get enough credit for improving the machines while protecting backwards compatibility.
Definition:
"Backward Compatibility":
The ability of newer technology to work with older technology without any modification.
The physical world analogy to this is automobiles.
Assume you have two types of automobiles, CAR-A and CAR-B. And assume you have two types of fuel, FUEL-A and FUEL-B.
Let’s assume that CAR-B can only run on FUEL-B. Let’s also assume that CAR-A can run on FUEL-A and FUEL-B at the same time. Let’s assume that by design CAR-A has separate combustion chambers and fuel tanks for the two fuels it runs on.
By definition CAR-A is backward compatible with CAR-B’s fuel, because it can run on FUEL-B.
With this new car and the new fuel, I’m seeing that people are upset that when CAR-A runs on FUEL-B it doesn’t get as good of mileage. Of course it doesn’t; it’s not the new and improved fuel, it’s the old fuel! Also, I’m seeing that people are expecting to be able to meaningfully mix both FUEL-A and FUEL-B concurrently in CAR-A’s combustion chamber and fuel tanks. They’re upset that they’re told they can’t.
Nobody indicated, or even suggested, that it was a wise move to expect CAR-A to run on FUEL-A and FUEL-B combined in the same combustion chamber. This could be a very dangerous (explosion) or unproductive (gumming up the engine) combination.
Furthermore, when running on FUEL-A the owner has more options available than when running on FUEL-B. Now the owners of CAR-A are demanding that they have all the same options available regardless of which fuel they’re using, and they want to have no knowledge of which fuel they’re running on. As well, they really, really want all the benefits of FUEL-A even though they may be using FUEL-B.
Also, people have heard horror stories about this new car when running on self-produced FUEL-B. They hear there’s sometimes unexpected maintenance. They fear this because they have deployed over 9,000 different fuels, none of which come from a third-party refiner. They want all the benefits of the FUEL-A, CAR-A combination, but with no third party to assist, they fear the cost of having to reformulate their FUELs.
In short, they don’t want to incur the cost of change but want to reap its benefits, and are complaining about the cost. Perhaps they should have thought of that before creating so many different in-house FUELs.
Let’s translate this rant to computer speak.
Translation Key:
[gumming up the engine]=slowing down/locking the processor
[combustion chamber]=kernel/user space
[get as good of mileage]=perform as well
[fuel tanks]=process spaces
[physical] = software
[automobiles] = operating systems
[car]=operating system
[fuel]=application
[run on] = execute
[runs on] = executes
[running on] = executing
[explosion]=[crash/gp/core dump/corruption of data]
[owner]=user
[vehicles]=computers
[refiners]=vendors
[reformulate]=change the code in
Translated to computer speak: (please pardon any minor inaccuracies)
Assume you have two types of operating systems, operating system-A and operating system-B. And assume you have two types of applications, application-A and application-B.
Let’s assume that operating system-B can only execute application-B. Let’s also assume that operating system-A can execute application-A and application-B at the same time. Let’s assume that by design operating system-A has separate kernel/user spaces and process spaces for the two applications it executes.
By definition operating system-A is backward compatible with operating system-B’s applications, because it can execute application-B.
With this new operating system and the new application, I’m seeing that people are upset that when operating system-A executes application-B it doesn’t perform as well. Of course it doesn’t; it’s not the new and improved application, it’s the old application! Also, I’m seeing that people are expecting to be able to meaningfully mix both application-A and application-B concurrently in operating system-A’s kernel/user space and process space. They’re upset that they’re told they can’t.
Nobody indicated, or even suggested, that it was a wise move to expect operating system-A to execute application-A and application-B combined in the same kernel/user space. This could be a very dangerous (crash/gp/core dump/corruption of data) or unproductive (slowing down/locking the processor) combination.
Furthermore, when executing application-A the owner has more options available than when executing application-B. Now the owners of operating system-A are demanding that they have all the same options available regardless of which application they’re using, and they want to have no knowledge of which application they’re running. As well, they really, really want all the benefits of application-A even though they may be using application-B.
Also, people have heard horror stories about this new operating system when executing self-produced application-B. They hear there’s sometimes unexpected maintenance. They fear this because they have deployed over 9,000 different applications, none of which come from a third-party vendor. They want all the benefits of the application-A, operating system-A combination, but with no third party to assist, they fear the cost of having to change the code in their applications.
In short, they don’t want to incur the cost of change but want to reap its benefits, and are complaining about the cost. Perhaps they should have thought of that before creating so many different in-house applications.
Can someone tell me the substantive difference in the scenarios and expectations?
They look identical to me, yet I’m not sure you can find anyone in their right mind who would suggest this is reasonable for cars and fuels. Yet there are a plethora of people suggesting it for applications and operating systems.
This 9,000-script/app company should have seen the writing on the wall for 64-bit Windows in 1997. So they’ve had a decade to change from 16-bit to 32-bit, or better yet 64-bit… I saw it, as I was helping a company automate their conversion to 32-bit…
As well, since 16-bit never truly worked splendidly on Win32, even worse support, or none at all, should have been foreseen in 64-bit Windows.
By their optimistic estimate, with 9,000 scripts and 10 converted and tested per day, had they started in 1997 (they had ample opportunity before that) they could have been fully over to Win32 by the year 2000 and then fully up to 64-bit by late 2007/early 2008. These dates assume they chose late 2004/early 2005 as their date of adoption of 64-bit Windows. This is by all standards the definition of a late adopter of technology.
Waiting a decade after the last recommended conversion date to begin thinking about it is anything but "on the ball". The fact is they’re in panic mode because they kept their eyes closed for a decade.
"See ya, wouldn’t wanna be ya" is all I can say to them. They brought this crisis on themselves via poor planning. And the only reason Redmond granted an audience has to do with $$, not any idealistic notion that this company’s notion of backward compatibility has vast technical merit.
The corporation is probably IBM.
> God, how I dislike hyper-productive
> VB developers, I wish VB never
> existed. Few days ago we optimized
> a piece of code that does 3D
> reconstruction to take less than
> 30 seconds. Then we gave a DLL to
> VB programmers which work on the
> GUI part. One of them alone managed
> to slow it down to 60 seconds —
> months of our GPGPU research
> trashed in one day.
God, how I dislike guys named Igor, I wish no Igors had ever been born. They all make idiotic, generalized posts to web forums in an effort to sound smarter than they are, when they should really be doing their god**am work instead.
""IBM did something a few years ago when they created the AS/400. They bad the box backwards compatible to their System 38. Later they moved the AS/400 from 32 bit to 64 bit. Virtually all applications moved as is…it was really pretty amazing stuff and they don’t get enough credit for improving the machines while protecting backwards compatiblity.""
IBM == Virtualization gods.
That’s their solution for backward compatibility, even between hardware architectures. Instead of porting the OS from old hardware to new hardware, you just run it in a VM.
The amazing thing is that they did it and it runs well.
We have hardware from the late ’90s, running software from the ’80s, with a few tape drives from the late ’70s (recently retired, though). Now THAT is backward compatibility. And it’s not just ‘recompile’ or anything like that. We have probably half a dozen different languages… JCL, REXX, assembly, etc., etc. I don’t really know everything involved. Hell, I am pretty sure that we have a programmer who has done a bit of direct machine code for us also. (No kidding.)
Virtualization is really the key for long-term backward compatibility. Not just 3 years or 5 years, but upwards of 20 or 30 years.
If it’s not seamless, then it’s just going to have to be Microsoft’s job to make it seamless. I know it’s possible, just that it’s going to be very difficult.
PingBack from http://penk.wordpress.com/2006/11/15/%e6%9c%ac%e6%97%a5%e6%9b%b8%e7%b1%a4-167/
I see echoes of US Steel — a top-notch organization with smart, dedicated people — a company where excellent operations support a business which has failed to keep current and relevant. I don’t doubt the effectiveness of the IT group to maintain 9,000 scripts. I doubt and question the necessity of the operation they support. The IT organization should instead be questioning the processes they support and/or providing answers to the questions no one wants to ask.
When I was still working IT (got outsourced…yay), I was responsible for maintaining a couple thousand programs, scripts, and components. Some of these were over 30 years old!!!! The bulk were 20 years old. There were several hundred other drones like me who had their own code fiefs to maintain and expand. Backward compatibility is of HUGE importance. No company wants to or can afford to rewrite or recompile the billions of lines of code they invested in over the years. The company I was at was steadily updating their systems in a continuous process. They are so huge, we are talking a 10-year-long process with hundreds of coders working on it. The company I was at is long established, over 100 years old. New companies have the advantage of writing all their systems from scratch in the latest and greatest new code of the month. In time, though, they too will be where these old companies are.
It actually seems like a bit of a joke for Microsoft to be talking about the importance of backwards compatibility; you know, those old C-shell scripts from the 70’s still run just fine these days on “other” platforms! Given the fact that (a) there isn’t anything approaching a proper scripting language in the MS lineup (sure, you’ve got WSH but that’s mainly based around manipulating random OLE objects that tend to change with the wind), and (b) the fact that the command line arguments for things as simple as “del” change between releases, how on earth can you claim to have backwards compatibility?
As always, things are layered – I have no expectation of kernel-mode code being API compatible with a new version, let alone ABI. On the other hand, I do expect the “trimmings” such as scripts to function for a very long time. And ABI doesn’t really apply here, because… they’re scripts, not binaries!
Concepts for Microsoft:
1. API != ABI
2. Tighter integration into system ::= maintenance needed
3. At the core layers, ABI compatibility is unnecessary and API compatibility desirable but infrequently achieved (how many XP drivers are ABI compatible with Vista?). But if you’re writing drivers, then presumably you’re able to recompile…
4. At the trivial layers (e.g. scripts), virtually *no* maintenance should be needed.
The major issue is of keeping the same syntax for the same semantics; changing the syntax, for whatever reason, will break older code. Sure, it may seem cleaner, or “more intuitive”; but whenever you make a change like that there are always repercussions.
Pete Appleton, MCP (who avoids Windows when possible for these sorts of reasons)
Progress in languages (e.g., Python/Ruby), programming techniques (unit tests, etc.), and dev environments/debugging would, I think, justify rewriting those 9,000 scripts. Use the current scripting language to retrofit Python and then, with the help of all our slick modern techniques and tools, you’ll have those 9,000 scripts rewritten by lunch time, just in time for a nap.
Wine is actively developed and works surprisingly well – well enough that we use it in a production setting, running an OpenTV compiler for Windows (produces the code for interactive satellite television) under Wine on Linux. That’s an app that, if it breaks, LOTS of people notice. So Wine is up to business use.
Compatibility with Office is easy; the hard part is compatibility with the zillion little apps written ten years ago that your business utterly relies on and which you can’t even find the developer for any more. Wine is now running these much more often than not. These days I’m more surprised when Wine doesn’t run a Windows app well than when it does.
Also, rather than running several Windows VMs, you can run several wineserver processes, one for each Windows app. So you get compatibility-layer (rather than VM) speeds and Unix separation of processes.
Concerning the statistic quoted of a 2% penetration of WinXP into the market… Obviously there are going to be huge uncertainties over what statistics people are citing – are you talking about the OS on systems shipped from manufacturing in the last year, or the last 5 years; are you talking about what’s left the manufacturers or what has actually made it to the desktop? (One of my clients, through some funny-peculiar licensing, buys boxes locally in about 50 countries with <whatever> on them, pulls and vapes the hard drives, re-images with a customised version of Win2k, then deploys to the desktop, pumping station, ship or whatever. Is that a sale of WinXP or a deployment of Win2k?)
What I suspect is the meaning of the 2% statistic is that only 2% of systems out there are using APIs and services that are *only* available to WinXP. Most systems don’t need it, and won’t get re-written to be XP-only because that would only reduce the developer’s potential market.
It was 2004 before 50% of our clients had moved to Win2k and/or XP; but of course our application suite will continue to run on Win95. For bringing data in (on floppy), processing it, and outputting data to email or floppy, we don’t need other services. (We do have products in development that will require .NET, but that’s not fit for going out into the field yet.)
J.D.:
> These dates assume they chose late 2004/early 2005 as their date of adoption of 64-bit Windows. This is by all standards the definition of a late adopter of technology.
Late 2004 is considered a *late* adopter of 64-bit? The first Opterons were *just* out in late 2004, and IIRC XP x64 wasn’t even shipping yet. (Yes, Itanium had been out for a few years, but with really slow emulation of 32-bit instructions, they’re not really worth mentioning.) And when stuff like Exchange 2003 isn’t even available for 64-bit *now* (because as we all know, having a kernel-mode filesystem driver in a mail server is a good idea), that’s going to put off adoption even longer for people that use Exchange.
And by your definition, everybody that’s still on FF 1.x is a "late adopter" of browsers: FF2’s been out for three weeks already, get to it! (I should also note that it’s much easier to update a browser than an OS, so a lot more people have probably already moved to FF2 than have moved to an x64 OS.)
Now, I won’t disagree that they should have moved to 32-bit much earlier. Anybody still compiling 16-bit stuff now is way too far behind. (OTOH, the "scripts" mentioned may be anything, too; hopefully many of them don’t require any specific word-length.) But expecting them to move to a 64-bit OS (which may not have even been out yet), on a brand-new processor, and still call that a "late adoption", is missing something.
Perhaps you meant they’d be a late adopter if they moved in 2006/2007 (i.e. around now)? But then that would put your target for conversion-to-64-bit off until about 2009/2010.
I was witness to an IBM employee telling a person on the other end of the phone how to patch a running AS/400 system.
He was giving memory addresses and hex values to enter in at those addresses while meandering about his house getting ready to throw a party…
I don’t remember much of the party though…
I can only guess that it was a good one.
They usually were…
In any case IBM has some veritable geniuses working for them, as well as a lot of average schmoes…
In the words of 4chan, this is just 1 short of 1MGET, so it’s 999999GET.
Also, backwards compatibility is essential due to the vast amounts of software designed for an older day and age. 64-bit Windows cutting off 16-bit apps isn’t the first such move; Windows 95 was the first to do something similar by killing Windows 2.x apps and older. Seriously, find a copy of Windows 2.03 and try to run the applications provided with it on 95. "Windows cannot run this application. Please contact the software vendor for an updated version" comes to mind.
"Their operations division is responsible for 9,000 different install scripts for their employees around the world."
BrianK> […] Late 2004 is considered a *late* adopter of 64-bit?
BrianK> The first Opterons were *just* out in late 2004,
BrianK> and IIRC XP x64 wasn’t even shipping yet.
BrianK> […]
————————————————————
As was pointed out, the Opterons were not the first generation of 64-bit processors. On the subject of "not the first", Windows XP 64-bit was not the first generation of 64-bit Windows. Many things about the conversion to 64-bit are not the first, nor will the conversion be the last that’s seen in this regard. [I’m betting 1 cherry pixie stick that prototype 128-bit OSs being used with sluggish 64-bit emulation will exist no later than 2010… Any takers?]
——————- BEGIN: INTERMISSION ——————–
And for those who plead "But it takes so very long to completely upgrade an organization of the size we’re talking about". You’re absolutely right! That’s why advanced planning and early initial scouting of potential problems is a necessity to stay relevant and supported in the ever changing proprietary world of MS Windows.
——————- END: INTERMISSION ——————–
————————————————————
Elaborating on the definition of "Late Adoption":
————————————————————
As well, here’s the reasoning behind the madness of my claim that <for any large corporation with tons of scripts to maintain> "[2004/2005] is by all standards [the] definition of a late adopter of technology." I meant this from an infrastructure and planning standpoint, not from a desktop user’s standpoint. Since the impact of a conversion project is only beginning to be realized by its completion date, the date to measure "late adoption" by is the completion date, not the start date. By this measure, waiting 4 years to begin considering a change, with your completion date happening at roughly the 7-year mark, is "late adoption" of a "new" technology. "Late adoption" is not necessarily risking irrelevance, though as we agree, the company in question has jeopardized their *relevance* by not moving away from 16-bit much sooner.
As well, being a "late adopter" is not meant as a pejorative. It’s meant in the context of when people/organizations of similar size/interest BEGIN converting scripts to be compatible with a newer rev of an OS. This is not the same crowd as net surfers or gamers (albeit gamers tend to be "early adopters" within home-user circles).
————————————————————
NEW AND IMPROVED BROWSER ANALOGIES:
————————————————————
Since browser technology changes at least twice as fast as OS technology, this seems a better analogy, and closer to the scenario Raymond posted.
A web designer waits until 2 years after the last corporate-wide browser upgrade to begin thinking about testing and revising the HTML templates used for producing their intranet. Now let’s say that during the initial look the designer discovers that they used a template for EVERYTHING that results in unusable web pages on the newest browser(s). Now instead of just verifying pages, they know for sure that they have to revamp everything. Now let’s say it’ll take 2 years to complete. So the company draws a line in the sand and decides they’re only migrating to the browser version they’ve verified their issues and workarounds with. So 2 years after their original HTML pages/templates were found to be unsupported, they will be installing a 2-year-old browser with which they have now established a "new HTML coding standard". (They need to wait so that their web apps stay usable until the conversion is done.)
Does that sound like a designer/company who is a "late adopter"?
This scenario could lead someone to that conclusion. The conclusion it shouldn’t lead someone to is that this company has become irrelevant. If it works for their needs, they’ve planned migrations to "new" (to them) technologies.
This is UNLIKE the 9000 script scenario in the following regards:
1) This company did NOT wait until they had absolutely no choice, but to upgrade.
2) This company planned for the upgrade.
3) This company allocated resources to investigate the potential problems before conversion.
Also taking home users and browsers as an example, the way I see browser technology adopted is as follows: [From Earliest to Latest Adoption]
————————————————————
Adoption point: type of person
————————————————————
Beta/Alpha: Technology producers [for the browser in question]
Beta: Content producers (web designers and their ilk)
On/near release: Technophiles and technology companies
+6 mo: Corporations with a modest interest in using the web for business [indirectly]
+1 yr: Family, friends, and associates of the above individuals
+2-3 yr: People on a budget
3-5 yr: People who are blissfully unaware of the newest browser and find their current version suits their needs
~5-10 yr: People who only upgrade software when they upgrade hardware, when it dies
Never: Those with no interest in or ownership of computers [just for completeness’ sake]
Somewhere between the +6mo and +1 year mark is a fuzzy range that I’d call "Late adopter" of browser technology.
————————————————————
Miscellaneous Stuff/Conclusion
————————————————————
And Brian, just so you know, ad-hominem seldom furthers any understanding. Please refrain from it when posting replies where you actually want to discuss things professionally. It’ll help that process immensely.
As well, you’ve seen above that I clarified the assumptions I made about the subject matter. For example: why early versions of 64-bit are valid to consider; what I meant by the phrase "late adopter"; and relevant examples using browsers and similar situations, in addition to my real opinions on what constitutes a late adopter of a browser for a home user.
[ And yes, I’m a budding author… that’s why I write so much… :-P ]
> And Brian, just so you know, ad-hominem seldom furthers any understanding.
Ad-hominem is a logical fallacy, nothing more. It’s also one that I did not make. Basically, you can try to convince me based on either what you’re saying, or who you are. If you try to convince me by what you’re saying, and I come back and try to refute that argument by pointing out something about you, it’s a logical fallacy, because your argument didn’t depend on who you are.
I will agree that your argument was not based on your credentials. But I did not try to refute it based on that either; therefore, there was no ad-hominem fallacy.
Perhaps you mean that you took my comments personally, as some kind of insult? They were not intended that way. Upon re-reading, the only one that comes close to an "insult" IMO is the "get to it!" part, which I will admit sounds unclear. I probably should have just left that out, but I meant it to apply to the hypothetical user that hasn’t upgraded to FF 2 yet. Not you.
Now, I do see what you were saying. But using a term like "late adopter" in what is (to me!) a non-standard way doesn’t help that understanding either. Whenever I think of "late adopter", I think of somebody like my grandmother, who never upgrades anything until it dies. (Your "~5-10 years" category.) And it always implies that you’re measuring against start date of the conversion, not the end date. (*Especially* when the date mentioned just before the "late adopter" term is the start date.)
(I don’t do "planning" well. ;-))
Now it is possible that I’m the only one that thinks of the term that way; I don’t know. It doesn’t really matter anymore either.
PingBack from http://blogs.law.harvard.edu/hoanga/2006/11/16/why-compatibility-is-important-for-large-corps/
PingBack from http://www.eriknovales.com/blog/index.php/2007/12/20/even-when-you-wrote-the-operating-system-you-still-have-to-work-around-other-peoples-bugs/