If you have to ask, you’re probably doing something wrong

Date: March 1, 2007 / year-entry #75
Tags: other
Orig Link: https://blogs.msdn.microsoft.com/oldnewthing/20070301-00/?p=27803
Comments: 79
Summary:If you have to ask about various operating system limits, you're probably doing something wrong. If you're nesting windows more than 50 levels deep or nesting menus more than 25 levels deep or creating a dialog box with more than 65535 controls, or nesting tree-view items more than 255 levels deep, then your user interface...

If you have to ask about various operating system limits, you're probably doing something wrong.

If you're nesting windows more than 50 levels deep, or nesting menus more than 25 levels deep, or creating a dialog box with more than 65535 controls, or nesting tree-view items more than 255 levels deep, then your user interface design is in serious need of a rethink, because you just created a usability nightmare.

If you have to ask about the maximum number of threads a process can create or the maximum length of a command line or the maximum size of an environment block or the maximum amount of data you can store in the registry, then you probably have some rather serious design flaws in your program.

I'm not saying that knowing the limits isn't useful, but in many cases, if you have to ask, you can't afford it.

Nitpicker's corner

Notice that I said "probably". In my experience, probably 90% of the people who ask what the limit is have either bumped into it or are considering a design that will. The fact that you folks can come up with suggestions for the other 10% doesn't invalidate my point.


Comments (79)
  1. John Topley says:

    Can’t you ask for no other reason than that you’re interested?

  2. DavidE says:

    I disagree on the maximum length of a command line. There shouldn’t be a maximum length, but traditionally it has been smaller on Windows than on Unix/Linux, and that causes problems in some cases.

  3. vince says:

    How do I use more than 640k of RAM?…

  4. mschaef says:

    "traditionally it has been smaller on Windows than on Unix/Linux"

    Unix sometimes needs more command line length than Windows does: unlike Windows, Unix does file wildcard expansion in the shell and passes a complete list of files down to the process being invoked, via the command line.  I’ve been in a few situations where this has caused problems.

  5. Kevin I says:

    I do agree in many cases. I remember one situation at my last company where we made a shell application which loaded sub-screens through a common interface (they were COM DLLs).

    We hit the GDI object limit! People would open up a number of "Order" screens in the shell, and it would get slow. We found a few controls which were leaking GDI objects and patched that up, but performance still degraded the longer someone used the application.

    The only way we found the GDI one, though, was because we hit it a number of times (due to users opening up order screens which have many GDI objects in each), so we weren’t asking if we could afford it, we knew we couldn’t!  We added metrics to our error handling to report what it was at, and I think we talked about not letting the user open up another screen if they had more than some number, but I’m not sure if we actually did that.  Because even over about 5k it dogs down Windows as a whole very badly, but the user just wants their queue of orders open to work through them one at a time, so they don’t care that it is dog slow for a bit as they work through the screens. That was one limit I remember hitting and causing some pain :)

    With the CAB (application block), and our current project which is our own similar implementation from my last company, I think about that limit and realize we’ll need some sort of cap on that as well.

  6. I have hit the command line limit when trying to link programs before.  It is really easy to hit 8k when you have a large program with a lot of .obj files.  Especially if you put all of your files in a directory called shared_debug_obj or something like that. This was using make from the command line, not Visual Studio.

  7. Jules says:

    ""traditionally it has been smaller on Windows than on Unix/Linux"

    Unix sometimes needs more command line length than Windows does: unlike Windows, Unix does file wildcard expansion in the shell and passes a complete list of files down to the process being invoked, via the command line.  I’ve been in a few situations where this has caused problems."

    Bear in mind that some of us use Unix-ish tools under Windows (e.g. Cygwin).  The same issues can occur under Windows, if you’re using the right (wrong) tools.

  8. Dave says:

    Unix does file wildcard expansion in the shell and passes a complete list of files down to the process being invoked, via the command line.  I’ve been in a few situations where this has caused problems.

    Especially if there are files named like "-rf"; the shell blindly passes those on to the poor command, which has no idea they were meant to be file names. Hilarity ensues.

  9. Diego says:

    WRT the maximum length of the command line, Rob Pike (Plan 9’s creator) said that it was stupid that today’s Unix boxes couldn’t pass more than X KB of parameters on boxes with GBs of RAM and dynamic memory allocation available.

    In the Linux world there’s a patch around that will be merged some day, that does dynamic memory allocation for command line parameter passing.

  10. David says:

    Chances are if you’re really asking what the limit is, then you’re not exploring. If you’re asking why limit X exists for a particular ability, then you’re obviously not using it as intended. However, philosophical differences on usage can cause differences of opinion, but in general I agree. Limitations should basically be hardware limitations; that way, when hardware improves, so does the software (i.e. 64-bit, multi-core systems, etc.)

  11. In most cases, Raymond is right when he says "If you have to ask, you’re probably doing something wrong."

  12. Tim Meadowcroft says:

    Like IE has a maximum URL length of just over 2K characters (http://support.microsoft.com/kb/208427) on the basis that "you should be using POST not GET". But of course that’s no use whatsoever when you’re say embedding a dynamically generated image (I have an ASP page that generates PNG charts that falls over this limit regularly requiring all sorts of ugly workarounds) within a generated page.

    More likely, if you have to ask, you’re not doing things the way Microsoft thought you’d do it.

  13. Mihai says:

    <<Especially if you put all of your files in a directory called shared_debug_obj or something like that.>>

    Link (and other tools) can take a file containing a list of .obj files.

    Just enumerate all the .obj files, one per line, in a file (let’s say ToLink.lst) and call link /whatever_parameters_you_want @ToLink.lst

  14. I’ve hit an operating system limit once thanks to a really bad batch file I wrote – it created something like 256 sub-directories then fell over – only THEN did I realise just how bad a script it was!

  15. richard says:

    I’m with John Topley, I ask because I am curious.

    One of the first things I do whenever I get something new is to explore its limits.

  16. denis bider says:

    Raymond – I disagree, at least for the case of threads. Flow-Based Programming isolates individual and small components of a program so that they can be executed concurrently. If the OS threads were lightweight, and if it were possible to create very many threads, then an application could use the OS scheduler and simply use a thread for every object. Don’t tell me this is a bad idea, because it isn’t – it is used to good effect in Erlang.

    But it so happens that Windows doesn’t support this model natively because its threads are too expensive, and one has to write one’s own scheduler to support the Flow-Based Programming paradigm.

    So this, at minimum, is one example where the question being asked – how expensive are Windows threads, what’s the maximum number I can create? – is legitimate, and the answer – they’re expensive, so not very many – is a deficiency of Windows, not the developer asking the question.

    Not everything fits into a narrow view.

  17. Denis, in your scenario you probably want to use fibers, not threads – a fiber is a lightweight thread, roughly equivalent to *nix pthreads.

  18. denis bider says:

    "More likely, if you have to ask, you’re not doing things the way Microsoft thought you’d do it."

    Aye.

    And then Raymond Chen comes at you and pronounces that if you’re not doing it the way he and his colleagues envisioned, then you’re doing it wrong. Somewhat arrogant.

  19. KJK::Hyperion says:

    Jules: the Cygwin limits for the command line are even worse. Cygwin reports a maximum length of 8192 characters, while Windows allows up to 64 KB (32768 characters)

    Jeremy: all Microsoft development tools, nmake included, support response files for that reason. Response files virtually remove any limit on command line length

    Diego: Windows, technically, has no limit on command line length. The parent process just brutally copies it into the child process by remotely allocating memory and overwriting it. It is never copied in kernel-mode except for the small temporary buffer used to transfer private virtual memory across processes. The limit is in the length field, a byte count 16 bits in width
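
    A rough sketch of the arithmetic KJK::Hyperion describes, under the assumption (as on the NT line) that the command line travels in a UNICODE_STRING whose Length field is a 16-bit count of bytes; the snippet is illustrative, not a definitive statement of the implementation:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* UNICODE_STRING.Length is a USHORT counting bytes, so the largest
           even value it can hold is 0xFFFE bytes of UTF-16 text. */
        USHORT maxBytes = 0xFFFE;
        printf("ceiling: about %u WCHARs\n", (unsigned)(maxBytes / sizeof(WCHAR)));
        return 0;
    }

    That works out to the 32767-character figure quoted elsewhere in this thread.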

  20. KJK::Hyperion says:

    denis: you are basically asking why you can’t use Windows threads to implement what amounts to a glorified form of setjmp/longjmp, and that question should answer itself. No operating system has "lightweight" threads, because what you want is special-purpose threads and an operating system can’t give you that

    What Larry said: use fibers. They are a glorified form of setjmp/longjmp that’s officially supported and sanctioned. Microsoft SQL Server has its own scheduler and only ever creates one thread per CPU, and uses fibers exclusively. Also, Raymond has already demonstrated flow-oriented programming with fibers on this blog
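
    A minimal sketch of the fiber approach Larry and KJK::Hyperion describe, using the documented ConvertThreadToFiber/CreateFiber/SwitchToFiber calls; error handling is omitted, and a real scheduler would juggle many fibers rather than one:

    #include <windows.h>
    #include <stdio.h>

    static LPVOID g_mainFiber;

    static VOID CALLBACK WorkerFiber(LPVOID param)
    {
        (void)param;
        for (int i = 0; i < 3; i++) {
            printf("worker step %d\n", i);
            SwitchToFiber(g_mainFiber);   /* cooperative yield back to the "scheduler" */
        }
        SwitchToFiber(g_mainFiber);       /* a fiber routine must never return */
    }

    int main(void)
    {
        g_mainFiber = ConvertThreadToFiber(NULL);
        LPVOID worker = CreateFiber(0, WorkerFiber, NULL);
        for (int i = 0; i < 3; i++)
            SwitchToFiber(worker);        /* run the worker until it yields */
        DeleteFiber(worker);
        return 0;
    }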

  21. Dave says:

    Tim Meadowcroft: if I understand you correctly, you’re putting the actual data of an image in a URL?  If that’s the case, then it’s one of the strangest things I’ve heard of as far as putting data in URLs.

  22. denis bider says:

    "Denis, in your scenario you probably want to use fibers, not threads – a fiber is a lightweight thread, roughly equivalent to *nix pthreads."

    Thanks. I looked into fibers, but in this case I still need to write my own scheduler, and I had already given up on preserving a component’s stack between activations by the time I considered fibers.

    Fibers also play foul with thread-local storage, which some library code uses, and then you can’t use that library code, and then you have to investigate in advance whether a certain library you want to use makes use of thread-local storage or not… So that’s also a strong reason why I decided not to use fibers.

    I know there’s fiber local storage in recent Windows versions, but most code that uses local storage assumes TLS. If TLS worked with fibers that would make the case for fibers much more attractive in general, I think.
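
    A small sketch of the fiber-local storage denis mentions (FlsAlloc and friends, available in recent Windows versions); the point is that a library written against TlsGetValue will not see these per-fiber values, which is exactly the compatibility problem he describes:

    #include <windows.h>

    static DWORD g_flsIndex = FLS_OUT_OF_INDEXES;

    /* Call on each fiber that needs its own state. */
    void SetPerFiberState(void *state)
    {
        if (g_flsIndex == FLS_OUT_OF_INDEXES)
            g_flsIndex = FlsAlloc(NULL);      /* no cleanup callback in this sketch */
        FlsSetValue(g_flsIndex, state);
    }

    void *GetPerFiberState(void)
    {
        return FlsGetValue(g_flsIndex);       /* the value set on *this* fiber only */
    }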

  23. KJK::Hyperion says:

    Larry: all current implementations of pthreads follow the N:N model, because 1:N and N:M have been tried and dismissed on the grounds of, uh, sucking. Fibers are, actually, a perfect equivalent of ucontext, a seriously underused part of the standard:

    <http://www.opengroup.org/onlinepubs/009695399/basedefs/ucontext.h.html>

    with the difference that ucontext has no kind of thread affinity, so as not to introduce a dependency on pthreads (I guess, or maybe they’re just unrelated)

  24. Aidan says:

    If you have to ask about various operating system limits

    MAX_PATH notwithstanding.

    (and no, it can’t be changed without a time machine)

  25. KJK::Hyperion says:

    BryanK: implementing pre-emption in user mode is always going to be a bad idea in any case… blocking system calls, sudden onset of reentrance issues, extremely expensive context switches, and a very coarse clock resolution. User-mode implementations of pthreads were (and will always be) totally doomed to failure, at least without the use of proprietary extensions

  26. Miles Archer says:

    The counter example that comes to mind is the 2GB (or 3GB) VM limit.

    But Raymond said "probably" not "always".

  27. John Hensley says:

    "MAX_PATH not withstanding."

    If you have to ask about MAX_PATH, ask whether you’re misusing it.

    (I’ve seen more than one coder who declares any string buffer, used for any purpose, with length MAX_PATH. As if it’s some kind of metaphysical limit.)
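
    One hedged sketch of the alternative John is hinting at: rather than assuming MAX_PATH fits everything, ask the API how big the buffer needs to be (GetCurrentDirectory supports this query pattern; not every path API does):

    #include <windows.h>
    #include <stdlib.h>

    wchar_t *CurrentDirectoryAlloc(void)                 /* caller frees the result */
    {
        DWORD needed = GetCurrentDirectoryW(0, NULL);    /* chars, incl. the null */
        if (needed == 0) return NULL;
        wchar_t *buf = (wchar_t *)malloc(needed * sizeof(wchar_t));
        if (buf != NULL && GetCurrentDirectoryW(needed, buf) == 0) {
            free(buf);
            buf = NULL;
        }
        return buf;
    }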

  28. Steve Loughran says:

    I’m going to be subversive here and argue that if you hard-code limits in your app, be they 16, 65535 (max IPv4 ports), 255 (max path in non-Unicode filesystems), etc., then you are encoding assumptions.

    Those assumptions may or may not be valid. When they are found to be invalid you either have buffer overflows or users unable to achieve their goals. Even if the assumptions are valid at the time you write the app, by the time your app is retired, it may not be.

    Storing stuff in fixed-size arrays may have been a valid approach back in DOS or Win 3.1 days, but nowadays you should use dynamic data structures for all your core system elements.

    [If somebody posts the question, “What is the maximum number of IPv4 ports I can create in my application? I start running into trouble after around 60,000,” your response is probably going to be “What the heck are you doing that needs 60,000 ports?” -Raymond]
  29. Mikkin says:

    I guess old habits die hard.  I started programming in the bad old days of the 8086 and 6502.  In such environments you always have to ask because you often can’t afford it.  

    There is a limit, therefore I want to know what it is.  Assuming that as long as you are doing something reasonable you will never hit a limit is just alien to my way of thinking.

  30. KristofU says:

    We have implemented our own exclusive locks, based on event HANDLEs.

    A lot of objects aggregate these locks, and on application startup we consume about 17000 handles because of it.

    So we would be interested in the maximum number of event handles, although we have never had any real problems.

  31. Cooney says:

    Especially if there are files named like "-rf"; the shell blindly passes those on to the poor command, which has no idea they were meant to be file names. Hilarity ensues.

    that’s why God invented --.

    And then Raymond Chen comes at you and pronounces that if you’re not doing it the way he and his colleagues envisioned, then you’re doing it wrong. Somewhat arrogant.

    Of course, if he prevents you from foisting a 50 level menu structure on your users, then your users will sing his praises.

    Those assumptions may or may not be valid. When they are found to be invalid you either have buffer overflows or users unable to achieve their goals. Even if the assumptions are valid at the time you write the app, by the time your app is retired, it may not be.

    And if you avoid all assumptions, your code will drag ass. Code the common case and refactor when need be. Building castles in the air is mostly wasted effort.

    I’m going to be subversive here and argue that if you hard-code limits in your app, be they 16, 65535 (max IPv4 ports), 255 (max path in non-Unicode filesystems), etc., then you are encoding assumptions.

    What’s the 64k thing doing in there? IPv4 is ubiquitous, and it’ll be years before that’s even close to changing. The limit is inherent in the protocol – did it change in IPv6?

  32. BryanK says:

    The 65535-port limit isn’t a property of IP, it’s a property of UDP and TCP.  (The port number is stored in the UDP or TCP header, not the IP header.)

    I’m not sure whether (UDP/TCP)-over-IPv6 uses a different header structure than (UDP/TCP)-over-IPv4, but I’d hope they both use the same headers.  If so, then IPv6 still has the same limit on TCP or UDP port numbers.
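
    For reference, the 16-bit fields BryanK is talking about; the layout below is the standard UDP header (TCP’s port fields are the same width), shown only to make the 65535 ceiling concrete:

    #include <stdint.h>

    struct udp_header {          /* 8-byte UDP header */
        uint16_t source_port;    /* 0..65535 */
        uint16_t dest_port;      /* 0..65535 */
        uint16_t length;
        uint16_t checksum;
    };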

  33. JamesNT says:

    I’m beginning to believe that Raymond Chen and myself are the only ones here with a degree in computer science.  

    There are always limits.  Some limits are in place because that’s how things were in 1995 and those limits must remain for back compat reasons.  Other reasons for limits are to prevent one thing from DOMINATING the rest of the system (think fixed buffer lengths).  Other reasons for limits are because of the hardware itself.

    Few limits are arbitrary as many of you are assuming.  If you do your research (or get a degree as Chen and I did) then you’ll know why certain limits are in place and you won’t go to a public blog and make morons of yourselves.

    James

  34. Cody says:

    > And then Raymond Chen comes at you and pronounces that if you’re not doing it the way he and his colleagues envisioned, then you’re doing it wrong. Somewhat arrogant.

    Of course, if he prevents you from foisting a 50 level menu structure on your users, then your users will sing his praises.

    Untrue.  If the evils of the world are not unleashed before they are realized by the world due to the action of one man, that one man will receive no praise because the world does not know he prevented those evils.

  35. Anonymous says:

    Why can’t I allocate more memory than the system has?

  36. JerryJVL says:

    I think there might be a bias in the demographics of the audience of this site, which makes the statement more true than it is in general. I can imagine a newbie developer needing to ask because they genuinely don’t know that these limits are so high you should not hit them unless you are doing something wrong. Not knowing whether the maximum number of threads is 4, 4k or 4M can make the question relevant in that case.

  37. BryanK says:

    Anonymous:  You can.  (Well, probably — it depends on how much physical memory the system actually has, compared to the amount of virtual memory that can be used by your process.)  You’ll just bring the system to its knees when you try to touch all this virtual memory, because the system is thrashing the page file…  ;-)

    (Yes, yes, I know, you meant "more virtual memory than the system has".  Or perhaps you meant "more address space than the processor has".  Either way, I say this:  :-P)
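
    A hedged sketch of the distinction BryanK is drawing: reserving address space costs essentially nothing physical; it is committing pages and then touching them that sends the machine into the pagefile:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Reserve 1 GB of address space; no RAM or pagefile is consumed yet. */
        void *p = VirtualAlloc(NULL, 1024u * 1024 * 1024, MEM_RESERVE, PAGE_NOACCESS);
        printf("reserved at %p\n", p);    /* NULL on a 32-bit process without room */
        if (p != NULL)
            VirtualFree(p, 0, MEM_RELEASE);
        return 0;
    }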

  38. steveg says:

    JamesNT: I’m beginning to believe that Raymond Chen and myself are the only ones here with a degree in computer science.

    #define MAX_COMP_SCI_DEGREE 2

  39. Dean Harding says:

    denis: You probably wouldn’t have much luck with fibers anyway. The main reason for the "limit" on the number of threads (not only in Windows but in any OS) is stack space. All threads require stack space of their own. The default size on Windows is 1MB. 1MB * 2,000 threads = 2GB = the limit.

    If you’re trying to implement a flow-based programming model using one "actual" thread per object then, as Raymond said, you’re doing something wrong.

    Just because you think of something "logically" as a thread, doesn’t mean that is how you would implement it. A TCP/IP connection can be "logically" thought of as a hard-wired connection between two hosts, but that’s not how you’d implement it.
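
    A sketch of Dean’s arithmetic, plus the knob that changes it: a thread’s stack reservation can be set at creation time (STACK_SIZE_PARAM_IS_A_RESERVATION), though whether shrinking it is safe depends entirely on what the thread actually does:

    #include <windows.h>
    #include <stdio.h>

    static DWORD WINAPI TinyWorker(LPVOID param)
    {
        (void)param;          /* keep locals small; only 64 KB of stack reserved */
        return 0;
    }

    int main(void)
    {
        unsigned defaultReserve = 1024 * 1024;   /* typical 1 MB default */
        printf("2 GB of address space / 1 MB stacks = about %u threads\n",
               (unsigned)((2048u * 1024 * 1024) / defaultReserve));

        HANDLE h = CreateThread(NULL, 64 * 1024, TinyWorker, NULL,
                                STACK_SIZE_PARAM_IS_A_RESERVATION, NULL);
        if (h) { WaitForSingleObject(h, INFINITE); CloseHandle(h); }
        return 0;
    }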

  40. Dave says:

    I haven’t wanted to exceed these limits, but I *have* sometimes wanted to reduce them to small values so that I can make sure the program works in low-resource situations and I’m not dribbling handles behind me. I know of tools and techniques to limit some resources like system memory. Can many of these other limits be reduced for testing as well?

  41. mastmaker says:

    @JamesNT

    Not just going elitist with your degree nonsense, you are guilty of trying to pull Raymond into it.

    You can roll a joint with your degree and smoke it, for all I care.

  42. BryanK says:

    > Especially if there are files named like "-rf";

    That’s what the -- option is for (at least in GNU’s rm, and most of the other GNU tools).  It says "everything after this is a filename, not an option, even if it looks like an option".  "rm -- *" is what you should use.  ;-)

    > a fiber is a lightweight thread, roughly equivilant to *nix pthreads.

    Well, pthreads on Linux, at least, have been "full" kernel threads (with full preemptive multi-tasking) for a *long* time now.  They work just like Windows threads, they just have a lower creation cost.

    (Now, the Sun JRE on Solaris didn’t use Solaris’s native threading support for quite some time.  They used the Solaris "green threads" library instead, which was cooperative.  I’m not sure whether Sun ever changed this, but the JRE at the time was around version 1.0 or 1.1.)

    Anyway, I assume you meant "roughly equivalent to most *nix cooperative threading libraries", right?  I say that because fibers have to explicitly yield (like processes did in Win 3.1), and do not forcibly preempt each other.

  43. Anony Moose says:

    2000 threads where each thread uses just one millisecond of processor time (assuming no overhead, which is obviously not entirely accurate) means that each thread works for one millisecond then spends the next 1999 milliseconds waiting around. If one of those threads is your UI thread, you’ve kinda hurt application responsiveness.

    2000 threads that mostly spend their time waiting on I/O probably means a more efficient implementation is possible.

    No, really, "RAM used" or even "entries in a fixed length table used" aren’t the only relevant issues.

    If you have 2000 threads then chances are you’re either doing something wrong, or you’d understand the entire threading system (and probably the rest of the OS) so deeply that you wouldn’t actually need to ask what the maximum thread limit actually is.

    As for 17000 handles used up in a custom lock system…. yeeep!  I’m not sure I want to ever see that particular system.

  44. Ross Bemrose says:

    "Like IE has a maximum URL length of just over 2K characters (http://support.microsoft.com/kb/208427) on the basis that ‘you should be using POST not GET’. But of course that’s no use whatsoever when you’re say embedding a dynamically generated image (I have an ASP page that generates PNG charts that falls over this limit regularly requiring all sorts of ugly workarounds) within a generated page.

    More likely, if you have to ask, you’re not doing things the way Microsoft thought you’d do it."

    You may find this page interesting: http://www.boutell.com/newfaq/misc/urllength.html

    Unfortunately, HTTP doesn’t actually specify a maximum URI length, which caused everyone to come up with their own limit.  In theory, this is supposed to be managed by the server itself, but Microsoft has chosen to step in and attempt to stop users from making their own mistakes.

    P.S. If your URIs are really long, yes, you should rethink how you’re doing it.  There are other options out there, such as cookies or sessions.  In fact, sessions would be ideal for this sort of thing, because then the data isn’t constantly being passed back and forth between the client and server.

  45. Dean Harding says:

    One article without a nitpicker’s corner and look what happens.

    At least the second half of that comment was actually quite useful ;)

  46. Drak says:

    Just out of curiosity: what are the chances of a normal program (no infinite recursion mistakes) running out of stack space?

  47. Tim Meadowcroft says:

    re:IE max URL length of 2080 chars

    Dave: No, I’m not passing the image data in the url GET, but the parameters for generating the image (it’s a little graph-generator, so I pass the points to plot, labels for the axes, labels for data points etc.) which can easily exceed 2,000 chars for an interesting chart.

    The idea is that the containing page assembles data from various data sources, can lay out interesting data in textual form, and can embed dynamically generated images to illustrate the data (eg my bug reports page has charts of the last 90 days rates of bugs being opened, those being closed, and the net outstanding).

    Ross Bemrose: And yes, I could (and have to) store all this in session variables – and what a nasty hack that is. So much for items operating purely on their inputs, and of course it makes it very ugly embedding multiple images in a page, or testing the graph code by itself, or to publish the component for other projects for re-use. Another option is for the outer page to generate the images and save them as files and then quote the url for accessing those images and clean them up again sometime later – again, a nasty hack that obscures a sensible information flow.

    IIS/HTML/HTTP all support the longer URL, and I’m more than happy to write a long url, but IE has a ridiculously short and arbitrary limit which I’m yet to hear justified.

    But the point was not this one issue – but the general point, that some limits are just plain daft because, I assume, the developers couldn’t see how a tool would be used, but failed to see that their understandable lack of foresight is not a good enough reason to impose an arbitrary limit where none is justified or required (some claim the only 3 valid values for any system limit are 0, 1, and infinity, although I’d also allow the maximum value that can be stored in a word on a given platform).

    While Raymond can claim "you’re probably doing it wrong", it’s only the "probably" disclaimer that saves the comment. And I know he’s talking about O/S limits, but it’s Microsoft who’ve muddied the O/S-application divide (is IE part of the O/S?).

  48. Puckdropper says:

    >Why can’t I allocate more memory than the system has?

    That’s actually a valid question…  Why can’t I allocate more memory to a Virtual PC than my system has RAM?  I *know* it will be slow, I *know* it’s not advisable, but you see I’ve only got 512MB of RAM in my laptop, and I want to play with Vista.  From what I’ve read, it requires 512MB to install, but will run with less.  Oh well, I’ve got a more powerful desktop I can play with…

  49. KJK::Hyperion says:

    Tim: rest assured, you are definitely talking about the operating system. The internal architecture of Internet Explorer is far from rocket science, and it’s very easy to tell apart the operating system components from the application from the front-end once you’ve worked with it a little. What you experience is a WinInet limitation, and WinInet is on the operating system side of things

    KristofU: you are doing something wrong. Custom locks should never require more than an O(1) number of operating system locks. Critical sections in Windows Vista and later use exactly one kernel lock per process, to give you an idea

  50. Norman Diamond says:

    It took a while to figure out what I was doing wrong when I tried to open four Internet Explorer windows on one computer running Windows 98.  The limit was three[*].  What I was doing wrong was using Windows 98, though not by choice.

    It also took a while to figure out what I was doing wrong when using the ftp command in a DOS prompt window and doing an mget *.  The limit is 511.  When an ftp server had 580 files in one directory, Windows didn’t fall apart, it just silently pretended to complete successfully.

    I still haven’t figured out what the maximum length of a pathname is when right-clicking and selecting to delete a file.  If the pathname were really too long then Internet Explorer would have refused to save the web page in the first place, though of course not saying why it was refusing.  But yeah, if I think about it, I can still figure out what I’m doing wrong.

    [* I saw a higher limit in a foreign language version of Windows 98, but that didn’t help the one I had to use.]

    Thursday, March 01, 2007 9:26 PM by Puckdropper

    > Why can’t I allocate more memory to a Virtual PC than my system has RAM?

    To the best of my understanding, Virtual PC doesn’t permit that RAM to be paged.  When the CPU is executing instructions in the guest machine, the RAM is really still in RAM.  The guest OS can do its own paging within that area.

    [Sigh. One article without a nitpicker’s corner and look what happens. -Raymond]
  51. Dean Harding says:

    Actually, I just noticed that denis was talking about 64-bit Windows, where the stack is not the limiting factor.

    He said that after 37,000 threads, the system starts "thrashing and becoming unresponsive". Um, if you think creating 37,000 threads is a valid solution to *any* problem, then you’ve got more to worry about than just "limitations" in Windows…

    I think it’s probably time for Windows to introduce another "arbitrary" limit — create more than 5,000 threads (or so) and you simply get an ERROR_TOO_MANY_THREADS (or something) from CreateThread…

  52. Dr Pizza says:

    MAX_PATH is the devil.  It’s totally crap that there are APIs hither and yon that are restricted to MAX_PATH paths, and can’t use \\?\ escapes.

  53. Steve says:

    @Drak

    Pretty good, depending. The default stack size used in stuff created by Visual Studio is 1MB but you can tune it down and many apps do. It’s up to the dev to understand the stack requirements for his code and adjust accordingly. To follow the logical direction of your question, yes, you theoretically could set your default stack size to 512K and get twice as many threads in the process, but why?

  54. Xavi says:

    "Just out of curiosity: what are the chances of a normal program (no infinite recursion mistakes) running out of stack space?"

    Anything that uses alloca() is potentially a candidate, like W2A character conversion.

    This macro is implemented with alloca(); large strings (BSTRs can carry binary data) might blast the stack.

    However, no problem, the world is full of limits.

  55. Hayden says:

    @Mihai, KJK::Hyperion: Ha! And then you give LINK.EXE a file to read, and lo! it barfs if the file contains more than 128KB of text without a newline. Even if that text is whitespace-separated filenames. WTF?

    Some build tools (well, jam.exe) generate submit files without embedded newlines. And why shouldn’t they?

  56. C Gomez says:

    The arrogance of most commenters is that they inject the words: "It is wrong to even think about such limits."

    I don’t think Raymond would ever feel you shouldn’t explore and research some of those limits for the pure academia of it.  But the reality is, most applications are pretty simple… and should be getting simpler.

    It goes along with a lot of the stupid things developers do and then blame MSFT.

  57. Mark Richards says:

    I agree with Raymond in principle, that often limits are not meant to be ever reached, but I have to disagree with certain limits that others have discussed here:  The max length of a command line and the max length of a URL in IE (and possibly other browsers).

    The problem is that there isn’t a good reason for limiting these features; on one hand URLs shouldn’t be too long because users type them in, except that now URLs are generated by machines that don’t mind typing long meaningless strings.  Sure, there are memory considerations, but if a client process (IE) doesn’t have enough memory to encode the URL, it’s not likely to have enough left to render the page.  Where I work we like long URLs because they let our users bookmark the resulting page, something that you can’t do with POST.

    The maximum command line length is also a major problem, at times, for Java developers who want to load a lot of JAR files onto their classpath using a .bat file.  I’ve worked on large projects where I needed to rename my JAR files to one-character names and put them all in one directory before I could fit them on the classpath, which is entered on the command line.  Now you may tell me to use shorter names, or fewer JAR files, but I think this is dodging the issue; there’s no usability reason for having a hard-coded limit on command line length, so any limit is arbitrary and thus annoying.  Sure, maybe the limit has sound technical reasons, but that doesn’t make it less of a problem.  I ended up having to resort to a custom program to launch my Java app so that the long path names wouldn’t be a problem.

    Some limits make sense, most should never be reached, but some limits are bugs, deficiencies in the underlying system; no two ways about it.

  58. John Hensley says:

    It isn’t the "OS/application" divide that’s muddy, Tim. It’s your appreciation of what modern applications need from an OS. We’re pretty far past the days when some device drivers and a command prompt were enough.

  59. Marco M. says:

    Drak: "Just out of curiosity: what are the chances of a normal program (no infinite recursion mistakes) running out of stack space?"

    It depends on your definition of "normal program".

    There are two main reasons for stack overflow: either you allocated too many (or too big) variables on the stack, or you recursed too much. Overflowing without one of those is... well, possible, but if you overflow the stack with only non-recursive calls, none of which allocates an array on the stack, you have big problems.

    Since a typical Windows stack is 1MB, you should be quite safe from overflowing it for local variables (everyone should ask himself something before putting an array of some thousands of structures in local variables).

    Recursion is commonly used to navigate complex data structures (the most basic structure requiring recursion is the tree). Navigating a *balanced* tree (like an RB-tree) is unlikely to overflow. For example, a typical tree node is 12 bytes wide (on 32-bit systems); adding the return pointer and some local variables, let’s assume it amounts to 64 bytes per call. You can then go 2^20/2^6 = 2^14 = 16384 levels deep, which on a balanced tree in a 2-3GB-limited machine cannot happen. On unbalanced trees you can, however, overflow quite easily if, for example, the tree has degenerated into a list (given the above values, a degenerate tree of 16K elements suffices to overflow).
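
    A small illustration of Marco’s point, under his assumption of roughly 64 bytes of stack per call: the recursion depth of a tree walk is the tree’s height, so a balanced tree stays shallow while a degenerate (list-shaped) tree recurses once per node:

    #include <stddef.h>

    struct node { struct node *left, *right; int value; };

    /* Height of the tree == maximum recursion depth of this walk:
       about log2(n) when balanced, but n when degenerated into a list.
       At ~64 bytes per frame, a 1 MB stack allows roughly 16384 frames. */
    size_t tree_height(const struct node *n)
    {
        if (n == NULL) return 0;
        size_t l = tree_height(n->left);
        size_t r = tree_height(n->right);
        return 1 + (l > r ? l : r);
    }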

  60. Mikkin says:

    JerryJVL >> I can imagine a newbie developer needing to ask because they genuinely don’t know that these limits are so high you should not hit them unless you are doing something wrong.

    I wonder how you know the limits are so high if you never asked.  Is it divine inspiration, or do you just assume there are no limits unless proven otherwise by catastrophic failure?  As an old fart who has been programming since long before newbies invented the term "developer," I have seen numerous iterations of operating systems where limits were expanded beyond any reasonable need, much to everyone’s relief, only to find that after a few iterations of Moore’s law they became legacy constraints that had to be worked around.

    Raymond is right that anyone pushing these limits on nesting windows and menus is doing the wrong thing.  But asking whether there *are* any practical limits is not wrong, it is responsible and prudent.  Sorry if this seems like picking a nit, but being cognizant of limits and tolerances is one of the distinguishing characteristics of engineering.  You have to ask.

  61. Gabe says:

    IE actually has a good reason for limiting the length of a URL: security. The longer the URL, the more room there is for XSS attacks and such. Making the URL shorter makes some attacks impossible or that much more difficult.

  62. Norman Diamond says:

    > One article without a nitpicker’s corner and look what happens.

    Then take another look at this:

    > If you have to ask, you’re probably doing something wrong

    OK, the title wasn’t nitpicking, it was just mildly offensive without nitpicking.  Do you really think it would be better to match mild offences with informationless mild offences instead of pointing out what made it that way?

    [The whole point of the title is to be catchy. To do that, one tends to overstate things for effect. Welcome to the craft of writing. -Raymond]
  63. Mikkin says:

    > one tends to overstate things for effect.

    I totally get it Raymond.  The charm and wit of your prose is a large part of the reason your blog is on my daily sweep. Keep it up.

    [Too bad other people don’t get it. Maybe I should just ignore those people. -Raymond]
  64. Dean Harding says:

    Maybe I should just ignore those people.

    YES! In fact, do it even more – just to annoy them :)

  65. SuperKoko says:

    From Raymond Chen:

    > The maximum command line length for the CreateProcess function is 32767 characters.

    I use Windows 98 SE

    I’ve tested a program creating a child process with 50MiB of command line arguments.

    This command line argument was created that way:

    char* p=(char*)malloc(n+1);
    p[n]='\0';
    for(unsigned i=0;i<=(n-8);i+=8) {
      sprintf(p+i,"%08X",i);
    }
    for(unsigned i=0;i<n/100;i++) p[100*i]=' ';

    On the other side, the child process simply created a file and wrote the entire command line (the third parameter of WinMain) into it.

    This results in a 50MB file.

    Using a hex editor, I verified that the contents of this file are "right".

    CreateProcess(argv[1],p,NULL,NULL,FALSE,0,NULL,NULL,psi,ppi)

    Note: argv[1] is the name of the child process executable file… I call the parent program that way:

    Parent.exe child.exe

    Am I hallucinating?

    [I think you know the answer. You’re just showing off. -Raymond]
  66. 640k says:

    Why aren’t all types of (in-memory) storage limited only by RAM+swap? That’s a design flaw in the OS! Fixed-size storage is a thing of the ’90s.

    [The limiting factor isn’t RAM or swap. It’s address space. -Raymond]
  67. Rune says:

    Ross mentioned: "P.S. If your URIs are really long, yes, you should rethink how you’re doing it."

    The limit also affects mailto: URLs. Now, the useful thing to know about mailto: is that you can specify the message body within.

    However… Keeping the mailto: tag below the 2K limit is a… Well… PITA!

    There’s probably some COM object or other nonsense I could use, which brings us to the next point someone made… XSS attack? Well, if the alternative is using some COM object, I’d rather face the possibility of an exploitable URI.

    Raymond: I won’t mention Desktop Heap Size this time. I’m just very grateful 64-bit Windows addressed the issue and amply so. (we faced problems both on the client and server side — you can only have a limited number of services running…)

    Rune

  68. Mark says:

    Asking about the limit is very different from actually using the limit. I may want to know the max length of a command line because I want to preallocate a buffer large enough for it – what, would Microsoft rather I guess that number and potentially slap a shorter max-length limit on it?

  69. There are actually some good reasons for having all your 60K+ TCP ports open: if you look at Jabber servers or other IM servers, they do it so that clients behind a firewall can keep their connection open awaiting incoming calls. So server load is quite low (assuming most clients are idle) but port utilisation tops out. Want to host 70K users? Get another machine (real or VMware).

    For the command line, one complaint is that it doesn’t help Java take a long classpath. Agreed. But maybe the problem there is Java classpath setup, and the fact that java.exe doesn’t take a response file, unlike, say, javac.exe? Java 6 lets you add entire directories to the CP in one go, but it does still use the CLASSPATH env variable, and then there is the whole endorsed directory mess. It’s hard to blame MS for the screwed-up command line tools of a key competitor.

    What you can do is complain about the awfulness of Win9x bat files, which is one of the key reasons why the Ant team no longer supports Win9x: too many bugreps came in about ant.bat on win9x, and nobody wants to keep a win9x vmware image alive to replicate the problems.

    -steve

    ps, yes I do have a degree in Computer Science. I studied under Milner. But I also ship things that work :)

  70. Dean Harding says:

    Steve: Why would you need to use more than one port for a Jabber server? One port per user seems a little strange to say the least…

  71. Richard Crist says:

    The best comment above is:

    Thursday, March 01, 2007 10:20 AM by vince

    How do I use more than 640k of RAM?…

    My (Richard Crist) comment:

    Innovation is made by those who push the limits.  Granted…the innovations are most likely made by those that understand why the limits were there to begin with.

  72. Igor says:

    "nesting tree-view items more than 255 levels deep"

    Hmm, so there is no way to create more than 255 nested folders in Windows?

  73. Alex Cohn says:

    Yet another example: http://www.experts-exchange.com/Programming/Languages/CPP/Q_22420157.html The guy complains that "HLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\GDIProcessHandleQuota to 50000 but it still craps out at around 10000".

  74. UFies.org says:

    Nice post by Raymond Chen about pushing operating system limits, in saying that if you’re pushing the OS limits, nesting…

  75. Internet Explorer has no problem creating an array of 65,536 elements. var array = []; for(var…

  76. If you have to ask, you’re probably doing something wrong…

Comments are closed.

