Applications and DLLs don’t have privileges; users do

Date: August 18, 2006 / year-entry #281
Tags: other
Orig Link: https://blogs.msdn.microsoft.com/oldnewthing/20060818-14/?p=30053
Comments: 75
Summary:I can't believe you people are actually asking for backdoors. If an end user can do it, then so can a bad guy. In response to the requirement that all drivers on 64-bit Windows be signed, one commenter suggested adding a backdoor that permits unsigned drivers, using some "obscure registry key". Before somebody can jump...

I can't believe you people are actually asking for backdoors. If an end user can do it, then so can a bad guy.

In response to the requirement that all drivers on 64-bit Windows be signed, one commenter suggested adding a backdoor that permits unsigned drivers, using some "obscure registry key". Before somebody can jump up and shout "security through obscurity!", the commenter adds this parenthetical: "(that no application has privileges to do by default)".

What does that parenthetical mean? How do you protect a registry key from an application? And if applications don't have privileges to modify a key, then who does?

The Windows security model is based on identity. Applications don't have privileges. Users have privileges. If an application is running in your user context, then it can do anything you can, and that includes setting that "obscure registry key". (This is a variation on "Your debugging code can be a security hole".) Same goes for DLLs. There's no such thing as something only an individual program/library can read/write to or do. You can't check the "identity of the calling library" because you can't trust the return address. Coming up with some other "magic encryption key" like the full path to the DLL won't help either, because a key that anybody can guess with 100% accuracy isn't much of a key.
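
To see where privileges actually live, ask the process token directly. Here is a minimal sketch using real Win32 calls (OpenProcessToken, GetTokenInformation, ConvertSidToStringSid); the point is that the token records a user, and nothing in it identifies the application:

#include <windows.h>
#include <sddl.h>
#include <stdio.h>

int main(void)
{
    HANDLE hToken;
    BYTE buf[sizeof(TOKEN_USER) + SECURITY_MAX_SID_SIZE];
    DWORD cb;
    LPWSTR sid;

    if (OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &hToken)) {
        if (GetTokenInformation(hToken, TokenUser, buf, sizeof(buf), &cb) &&
            ConvertSidToStringSidW(((TOKEN_USER *)buf)->User.Sid, &sid)) {
            // The token names a user; no field anywhere says which
            // program or DLL is doing the asking.
            wprintf(L"this process runs as user SID %s\n", sid);
            LocalFree(sid);
        }
        CloseHandle(hToken);
    }
    return 0;
}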

Yes, UNIX has setuid, but that still doesn't make applications security principals. Even in UNIX, permissions are assigned to users, not to applications.
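
To make the setuid point concrete, here is a hedged sketch of a setuid helper on UNIX; the owner name and path are hypothetical, while getuid and geteuid are the real POSIX calls. Even here, the extra rights belong to the file's owning user, never to the program as such:

/* A setuid helper, assuming the binary is owned by a dedicated user
 * and installed with "chown secretowner prog; chmod u+s prog".
 * The path and user name are made up for illustration. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* real uid = whoever ran us; effective uid = the file's owner */
    printf("real uid %d, effective uid %d\n",
           (int)getuid(), (int)geteuid());

    /* this open is checked against the owner's permissions, so a
     * file readable only by "secretowner" becomes reachable -- the
     * privilege belongs to that user, not to this program */
    FILE *f = fopen("/home/secretowner/data", "r");
    if (f)
        fclose(f);
    return 0;
}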

That's one of the reasons I get so puzzled when I hear people say, "Windows should let me do whatever I want with my system", while simultaneously saying, "Windows should have used ACLs to prevent applications from doing whatever they want with my system." But when you are running an application, the application is you. If you can do it, then an application can do it because the application is you.

Some people want to extend the concept of security principal to a chunk of code. "This registry key can be written to only by this function." But how could you enforce this? Once you let untrusted code enter a process, you can't trust any return addresses any more. How else could you identify the caller, then?

"Well, the DLL when it is created is given a magic cookie that it can use to prove its identity by passing that cookie to these 'super-secure functions'. For example,

// SECRET.DLL - a DLL that protects a secret registry key
HANDLE g_hMagicCookie;

// this function is called by means to be determined;
// it tells us the magic cookie to use to prove our identity.
void SetMagicCookie(HANDLE hMagicCookie)
{
 g_hMagicCookie = hMagicCookie;
}

and then the program can use the magic cookie to prove that it is the caller. For example, you could have RegSetValueWithCookie(g_hMagicCookie, hkey, ...), where passing the cookie means 'It's me calling, please give me access to that thing that only I have access to.'"

That won't stop the bad guys for long. They just have to figure out where the DLL saves that cookie and read it, and bingo, they're now you.

// bad-guy program

int CALLBACK WinMain(...)
{
 // call some random function from SECRET.DLL
 // so it gets loaded and the magic cookie gets
 // initialized.
 SomeFunctionFromSECRETDLL();

 // experimentation tells us that SECRET.DLL
 // keeps its magic cookie at address 0x70131970
 HANDLE hMagicCookie = *(HANDLE*)0x70131970;
 RegSetValueWithCookie(hMagicCookie, hkey, ...);

 return 0;
}

Ta-da, we now have a program that writes to that registry key that SECRET.DLL was trying to protect. It does it by merely waiting for SECRET.DLL to receive its magic cookie, then stealing that cookie.

"Well, sure, but if I combine that with the check-the-return-address technique, then that'll stop them."

No, that doesn't stop anybody. All the bad guy has to do is change the RegSetValueWithCookie(hMagicCookie, hkey, ...) to code that hunts for a trusted address inside SECRET.DLL and cooks up a fake stack so that when control reaches RegSetValueWithCookie, everything in memory looks just like a legitimate call to the function, except that the attacker got to pass different parameters.
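
For concreteness, the doomed return-address check might look something like this sketch. SECRET.DLL and RegSetValueChecked are hypothetical; _ReturnAddress is a real MSVC intrinsic and GetModuleInformation is a real psapi.h function:

#include <windows.h>
#include <psapi.h>
#include <intrin.h>
#pragma intrinsic(_ReturnAddress)

static BOOL CallerIsInsideSecretDll(void *ret)
{
    MODULEINFO mi;
    HMODULE hSecret = GetModuleHandleW(L"SECRET.DLL");
    if (!hSecret || !GetModuleInformation(GetCurrentProcess(),
                                          hSecret, &mi, sizeof(mi)))
        return FALSE;
    return (BYTE *)ret >= (BYTE *)mi.lpBaseOfDll &&
           (BYTE *)ret <  (BYTE *)mi.lpBaseOfDll + mi.SizeOfImage;
}

BOOL RegSetValueChecked(HANDLE hMagicCookie, HKEY hkey)
{
    // Futile: an attacker in the same process can JMP here instead of
    // CALLing, or cook up a fake stack frame whose return address
    // points into SECRET.DLL, exactly as described above.
    if (!CallerIsInsideSecretDll(_ReturnAddress()))
        return FALSE;
    // ... here the cookie and hkey would be used for the actual write
    (void)hMagicCookie; (void)hkey;
    return TRUE;
}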

You can come up with whatever technique you want, it won't do any good. Once untrusted code has been granted access to a process, the entire process is compromised and you cannot trust it. Worst case, the attacker just sets a breakpoint on RegSetValueWithCookie, waits for the breakpoint to hit, then edits the stack to modify the parameters and resumes execution.
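
Here is a sketch of that worst case, using only the documented debug APIs, which are available to any program running as the same user; the target process id and function address are hypothetical:

#include <windows.h>

void BreakpointAttack(DWORD pid, LPVOID funcAddr)
{
    BYTE orig, int3 = 0xCC;
    SIZE_T n;
    DEBUG_EVENT ev;
    HANDLE hProc;

    DebugActiveProcess(pid);                 // same user, so permitted
    hProc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);

    ReadProcessMemory(hProc, funcAddr, &orig, 1, &n);   // save old byte
    WriteProcessMemory(hProc, funcAddr, &int3, 1, &n);  // plant int 3
    FlushInstructionCache(hProc, funcAddr, 1);

    while (WaitForDebugEvent(&ev, INFINITE)) {
        if (ev.dwDebugEventCode == EXCEPTION_DEBUG_EVENT &&
            ev.u.Exception.ExceptionRecord.ExceptionAddress == funcAddr) {
            // The target is frozen at the top of the function; its
            // parameters sit on the stack or in registers. Read the
            // thread context, overwrite the parameters, restore the
            // saved byte, back up the instruction pointer, resume.
            // Nothing inside the target can detect or veto any of it.
        }
        ContinueDebugEvent(ev.dwProcessId, ev.dwThreadId, DBG_CONTINUE);
    }
}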

That's why code is not a security principal.

Corollary: Any security policy that says "Applications cannot do X without permission from the user" is flawed from conception. The application running as the user is the user. It's one thing to have this rule as a recommendation, even a logo requirement, but it's another thing to enforce this rule in the security subsystem.


Comments (75)
  1. Yuliy says:

    Permission from the user can be granted in a few different ways. One of which is requiring the user to enter their password again. Another is popping up a system modal dialog that doesn’t accept events generated by another application, but only from the console (but that’s tricky, because of applications like VNC or terminal services: what exactly is "the console"?).

    Still another approach, for extremely dangerous settings, is to actually enable the change only after a restart, and then, very early in the user’s login process, pop up a message notifying the user: "This setting has changed from its old value of ___ to a new value of ___. Do you wish to Accept the new value or Restore the setting to its previous value?"

  2. Carlos says:

    The problem is that the Windows and UNIX security models are ass-backwards.  The operating systems go to great lengths to protect themselves from me, but do nothing to protect my data from the programs I run.

    Re-architecting Windows with a capability security model would be impracticable, but .Net is an excellent step in the right direction.

  3. Peter Ritchie says:

    "Any security policy that says "Applications cannot do X without permission from the user" is flawed from conception."  Isn’t that basically the out-of-the-box policy for IE downloads, installing unsigned drivers, etc?

  5. Adam says:

    Carlos > How is an OS supposed to protect your data from the programs that access them?

    If you edit a Word .doc file with Notepad, then it’ll end up corrupted. How is the OS supposed to guard against that? What exactly are you proposing?

  6. Richard Gadsden says:

    “If you edit a Word .doc file with Notepad, then it’ll end up corrupted. How is the OS supposed to guard against that? What exactly are you proposing?”

    The OS should (a) know that the Word .doc file belongs to Word and (b) not let Notepad access that file.

    [I can see the error messages now. “Outlook cannot embed the OLE object Party.doc. That document can be opened only by Microsoft Word.” Or “Explorer cannot show the properties of the file Party.doc. That document can be opened only by Microsoft Word.” Or even better, “OpenOffice cannot open Party.doc. That document can only be opened by Microsoft Word.” -Raymond]
  7. Andy C says:

    "Any security policy that says "Applications cannot do X without permission from the user" is flawed from conception"

    But that’s only true because of the assumptions in current OS design, surely? An entirely managed-code OS like Singularity can enforce those kinds of policies, can’t it? Likewise you could build an infrastructure on Windows that used some form of hardware separation (similar to user/kernel mode) to prevent rogue native-code applications from acting like this; it’s just that we currently don’t.

  8. Todd Greer says:

    Richard Gadsden:

    Now you’ve got the OS unduly interfering with what I want it to do. As the user, if I want to edit a .doc file with Notepad, I expect the OS to refrain from interfering[1]. Yes, some sorts of programs should have a more limited set of (user configurable) privileges, but others, such as general-purpose editors, should not.

    [1] To get a less filtered view, I do occasionally open unusual files in Notepad, though never yet a .doc file.

  9. mastmaker says:

    @Richard Gadsden

    Ha! Then you would end up with a Visual Basic OS. Or an AOL OS. Imagine a flight where you are prohibited from carrying a bottle of water….oh…I am sorry…THAT has already been done.

    I can’t begin to count the ways in which that suggestion is gaseous.

  10. Chet says:

    @mastmaker

    What are you going on about?

  11. Carlos says:

    “If you edit a Word .doc file with Notepad, then it’ll end up corrupted. How is the OS supposed to guard against that? What exactly are you proposing?”

    If notepad screws up a file that I chose to open then that’s my problem.  If notepad has permission to corrupt or delete all of my documents (which it does) then that’s a flaw in the OS security model.

    “And what’s the difference between a configuration file and a document?”

    A configuration file lives in the app’s per-user data directory.  So the app can access it without further permission.  (But it can’t access any other app’s data directory.)

    A document is a file explicitly chosen by the user through a secure file open dialog or windows explorer (which would also update the MRU list).

    Apps are only allowed to access their own private data and documents chosen by the user.

  12. Todd Greer says:

    "There’s no such thing as something only an individual program/library can read/write to or do."

    You are absolutely correct with regard to libraries (at least native ones). However, having data that only individual programs can read or write is a solved problem. In unix, you create a user to own the data, make the data only readable and writable by its owner, and set the program to be setuid to that user.

    I don’t know what the equivalent to the setuid bit is in Windows, but given Windows’s rich security capabilities, I’m sure there is a way to achieve the same result.

    Naturally, root/Administrative Users can still access the data, but having full access is what such accounts are for.

    If there is a legitimate need for a particular library to have such an altered set of privileges, just wrap it in a process and apply the same technique. That said, I suspect that there are very few situations where this should actually be done, and you still have to make sure that the now trusted process doesn’t permit itself to be misused, but it is certainly possible to isolate code this way.

    Note that I do agree with the overall point of the post, just not certain details.

  13. Adrian says:

    VMS had (has?) the concept of associating privileges with applications as well as with users or processes.

    For example, you could give the Backup program READALL priv (allowing it to read any file, regardless of ownership and ACL).  Then you would put an ACL on Backup to only allow your system operators to access it.

    When the loader loads a trusted app, it grants the associated privileges to the process.  When the application terminates, the privileges are revoked.  That kept the operators from starting Backup, hitting Ctrl+Y, and ending up with a privileged process.  Typically the tape drive was physically secured in a locked room, so operators couldn’t just back up and then grab the tape.

    Maybe there’s a flaw in the design, but I know that VMS was once considered very secure.

  14. Chris Becke says:

    [So if you open an album in Picasa it shouldn’t show up in your Start menu’s “recent documents” list? Because that list is stored in HKCU\Software\Microsoft… which your proposal places off-limits to Picasa. And what’s the difference between a configuration file and a document? -Raymond]

    Yes. My argument is badly worded and ill thought out. Because I find it fruitless spending too much time thinking about that which will not happen.

    With regards to my Picasa example, I got an early beta version that had a bug that caused it to write “discovered” image file names into HKCU. This is why I picked on it. I’d rather that writing to the root of HKCU were impossible for an application installed with default app permissions.

    I don’t think that [HKCU\Software\Microsoft…] need necessarily be off limits to applications. Not all of it, for certain. Just as users can be granted access to some subkeys and locked out of others, applications could be granted sufficient access – by default – for the shell component (hosted as it is in each application’s process space) to read and write its recent documents list.

    Perhaps I should not have mentioned configuration files at all. But for the purpose of arguing about the hypothetical OS protecting config files while allowing access to documents, protected “config” files would be those stored in the application’s Program Files folder. “Documents” would be the files stored in the user’s “My Documents” folder.

    [But you still have the DLL problem. Process A loads B.DLL (e.g. Wordpad loads a media player). Does Process A gain the ability to modify B.DLL’s settings? If you say No, then B.DLL can’t update its settings. If you say Yes, then bad guys can load B.DLL (thereby gaining “Allowed to modify B.DLL’s settings” permission) and then start partying on its settings. -Raymond]
  15. Todd Greer says:

    Adam: It actually might be a good idea to have the option of refusing to run any untrusted[1] code. This would be some sort of user preference, domain policy, or some such. Many users would have no problem with this. I could see it being good for some enterprise users, for example. It would of course be disabled on any developer’s computer, and it might be too disruptive to even have on by default.

    [1] I’m not talking about code signing; code could be considered trusted by virtue of the user having been asked by the OS.

  16. mastmaker says:

    @chet

    Just venting steam while trying to be funny. Apparently, I am not so successful. Otherwise, ‘I’ would be chosen to replace Jay Leno instead of Conan O’Brien. :-)

    I still can’t get over the monstrosity of that suggestion. (As Raymond surmised) Microsoft would be stripped down to underwear to pay the penalty for Anti-trust case on THAT one.


  18. Adam says:

    Interesting. So if I open foo.doc in MS Word via the secure open file dialog, Word is fine to read and write foo.doc, but can’t write (or overwrite) a backup file, or an autosave file, in the same folder? Interesting.

    Although, Word could keep the backups/autosaves in its own cache folder instead of next to the real file. But then you have a problem if the user wants to rename/move/delete the original file. How do you associate the backup with the original? Or make sure you remove backups in good time? It’s not like the user can see it right there next to their original document when they move/delete the original.

    HTML editors would be tricky too. At the moment if you add an image to a web page you’re working on the editor just sticks it in an "images" directory next to the HTML file, so that if you delete the original the version you put in the HTML file is still there. Now you’re saying that as well as selecting the images to include, the user will have to confirm saving every single image they’re about to add to the web page?

    And when they re-open the page, they’ll have to confirm opening every single image file included in the page to view it properly?

    Wow, what about music players? Open a playlist and be prompted to open every single mp3 file referenced in it? Or do you want to keep a copy of each mp3 in the playlist? What if you have multiple playlists with the same file in? Note – even if this can be done in bulk, what about nested playlists? "All" contains references to each "Artist" playlist. When you open "All", you’ll have to be prompted to confirm all the sub-playlists to open before they can be read to figure out which mp3 files to confirm.

    I dunno – there just seem to be a lot of problems associated with what you’re proposing. And I’m sure I haven’t listed even a tiny fraction of them.

  19. nksingh says:

    Most suggestions here are exactly like saying that we should have a dictatorship because we don’t like our current president…  (I’ve sometimes felt myself believing that way over the past 5 years).

    Nevertheless, in UNIX people achieve a similar thing by having restricted user accounts for programs like apache (I think IIS does this too).  This isn’t a general purpose solution, though, for applications that users wish to run directly.  

    Frankly, the easiest security model is one in which the user has full access to her computer so she can install or remove things at will (with maybe some simple protections like hiding system files so that she doesn’t make terrible mistakes).  Entering a gazillion passwords is not my idea of an easy experience.  I think we need to stop trying to protect computers against malware through technology but instead use legal means to target these people.  If people start going to jail for producing evil software, it would probably put a dent in the amount of stuff out there.

  20. XRay says:

    > "Any security policy that says "Applications cannot do X without permission from the user" is flawed from conception"

    Have the setting kernel-protected. Always. Leave it unprotected only in safe mode or whatever other boot mode only the user can select.

    That said, I’m very much against the decision to allow only signed drivers. Cheap hardware companies (you know, the ones behind those very cheap Bluetooth dongles) are cut out. Freeware is cut out of the driver market. Open source too. Not a nice way to push RDP in place of VNC…

  21. Erzengel says:

    If the application is you, what’s to keep applications from modifying XP’s pin/recently used menu?

  22. Paul says:

    @ Yuliy : "Permission from the user can be granted in a few different ways. One of which is requiring the user to enter their password again

    "

    This defeats the whole point of a single, secured sign-on process in the first place, also what if the user is using some other authentication mechanism like a smart-card?

    Also, not only will users get sick of the constant prompting to confirm security details, they will stop assigning any special consideration to the process and will blindly enter their credentials regardless of who or what asks for them, and regardless of when or where it asks.

  23. dave says:

    I think the ‘access by application only’ thing can be solved in principle.

    Assume there are such things as “application SIDs”, i.e. SIDs that identify applications.

    First approximation:

    When a new process is created to run an image, the image file on disk is checksummed (MD5, say). The result is looked up in some Big Database which maps from checksum to application SID. If found, the process token is modified to include the application SID.

    Naturally, resources can then have security descriptors which mention the SID in the usual way.

    This isn’t too terribly different in principle from the use of role-specific SIDs like INTERACTIVE, BATCH, etc.

    In practice, it’ll have to be slightly different, since NtCreateProcess doesn’t take a file, it takes a section handle. But those are mere details.

    The Big Database is constructed by management operation: “I allow this application program to run with identifier XYZ”.  There are some logistical problems to deal with, in that you probably don’t know the SID until after you’ve “installed” the application in that manner, so you can’t put XYZ into the SD of any object. But such things can be dealt with.

    The real question here is, does this make the system more secure?  It requires the admin (the XP Home admin, even) to make decisions about what apps can be trusted. The database needs to be adjusted every time an application is updated, which means the user is going to get conditioned into pressing “OK” without reading anything, just as they are now.

    [And then all I have to do is inject some code into this application (since it merely added the application SID to the token, my SID is still there and I have debug privileges over my own applications) and I have effectively “stolen” the application SID. No MD5 hash hacking necessary. (And the “dialog box fatigue” issue you mention is a real problem too.) -Raymond]
  24. Tom says:

    Wow.  There certainly are some strange requests in the comments today!

    As Raymond points out, security is associated with the account (i.e. the user), not to applications or processes.  This means that you have the same privileges / permissions from the start of your session to the end, and so do all of the programs you run.  Applications can, of course, drop privilege when running, but they cannot gain privileges that are not already assigned to the account.  

    The problem with this system is that you don’t want to use those privileges all the time.  For example, if you are a power user and you’re browsing the web, you’d probably prefer that the "software install" privilege be dropped so that malicious websites can’t install software without your asking.  But can you imagine what such a system that asked you about using privileges would look like every time you tried to do something?  It would look like Vista Beta 1.  In other words, there would be so many dialog boxes about permissions and access that you’d tear your hair out before finishing the latest install of QuickBooks.

    One solution is available from http://www.desktopstandard.com/PolicyMakerApplicationSecurity.aspx (which I have no affiliation with; I found the link from Aaron Margosis’ MSDN blog at http://blogs.msdn.com/aaron_margosis/ ).  This program runs as an NT service that will grant privileges to processes (keyed by application name and verified by MD5 hashes) when certain users run them.  This allows the user to run with the least privilege at all times and be granted temporary privilege when running certain processes.  According to the manufacturer’s website, you can even prevent children of the "blessed" process from inheriting its privileges.

    The catch is that this is still prone to the same problems as Raymond has already mentioned.  For example, instead of using some hidden registry key for protection, you’re now depending on the MD5 hash of the application.  While this may be harder to crack, recent research shows that it is clearly not impossible.

    I’m not sure what the best solution to this problem is.  In the UNIX world it’s easy to switch to root and run some sensitive task; in Windows, not so much.  Fast user switching helps, but only if the task that requires privileges can be completed while running with privilege.  Otherwise, switching between the users is a real PITA.  And, yes, I know the ‘runas’ utility helps, but somehow it doesn’t seem to work as well as switching to root does in UNIX (at least to me).

  25. Nick says:

    Why is the answer to every security problem “Just add another dialog box”? -Raymond

    Any security policy that says “Applications cannot do X without permission from the user” is flawed from conception. -Raymond

    (And the “dialog box fatigue” issue you mention is a real problem too.) -Raymond

    Reading this post and the comments makes me curious… what do you think of the security model in Vista?  Asking permission to do this and that gets annoying pretty quick, and does seem somewhat problem prone. I suppose when the little “Yes/No” box pops up, other applications are stymied somewhat from messing with the dialog, but is it that secure? All it takes is one person to figure out how to send a keystroke through the ‘fog’ and the box is nullified.

    I agree that bouncing between user and root in *nix is easier than in Windows, but still a pain. Too often I find myself just doing an su – and then staying as root until I’m done with whatever it was I was working on. Typing sudo 500 times an hour gets old real fast as well.

    I work in IT at the moment and am constantly bouncing around my network and hardware settings, registry, filesystem, etc. If I try to run as a regular user, my work is doubled just to get into the right security context.  The solution? I run as an admin and think before I click. Maybe that will bite me one day, but it hasn’t yet.

    [“What do you think of the new security model in Vista?” Ask me in ten years. -Raymond]
  26. bmm6o says:

    “Any security policy that says “Applications cannot do X without permission from the user” is flawed from conception. The application running as the user is the user.”

    This contains the underlying assumption that the user trusts the software.  I don’t know about the average user, but I get a little lump in my stomach every time I run a program I downloaded from the internet.  Even if I trust that Winamp (e.g.) won’t intentionally mess up my system, how can I be sure there aren’t any bugs that cause data loss?  Wouldn’t it be great if I could give it read-only access to my mp3 files, and restrict it from doing anything else?  I have much more faith that Windows implements and enforces file ACLs correctly, and I’d like to leverage that.

    I think that’s all that people want, and I don’t think it’s unreasonable.  And you’re right, Raymond, that manually doing this in Windows would put an excessive burden on the user.  They would have to create a new account with appropriate rights, create a batch file to RunAs before launching the app, remember to create an appropriate ACL for each new mp3 file they rip from CD, etc.  But just because nobody’s proposed a good UI for it, it doesn’t follow that it wouldn’t be a desirable outcome.

    [Perhaps Restricted Tokens will get you most of what you want. It’s not file-by-file but it may be close enough. -Raymond]
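
    A minimal sketch of the restricted-token approach Raymond mentions, using the real CreateRestrictedToken and CreateProcessAsUser calls; error handling and the choice of SIDs to restrict are elided:

    #include <windows.h>

    BOOL RunWithFewerRights(LPWSTR cmdline)
    {
        HANDLE hTok, hLess;
        STARTUPINFOW si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        BOOL ok = FALSE;

        if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ALL_ACCESS, &hTok))
            return FALSE;

        // DISABLE_MAX_PRIVILEGE strips every privilege but SeChangeNotify.
        if (CreateRestrictedToken(hTok, DISABLE_MAX_PRIVILEGE,
                                  0, NULL, 0, NULL, 0, NULL, &hLess)) {
            // No special privilege needed: this is a restricted copy of
            // our own primary token, so the child is still "us", only weaker.
            ok = CreateProcessAsUserW(hLess, NULL, cmdline, NULL, NULL,
                                      FALSE, 0, NULL, NULL, &si, &pi);
            if (ok) { CloseHandle(pi.hProcess); CloseHandle(pi.hThread); }
            CloseHandle(hLess);
        }
        CloseHandle(hTok);
        return ok;
    }
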
  27. "Of course this only works if you can trust a process, which you can never really do in Windows/Unix.  One way Singularity accomplishes this is by disallowing any form of code loading after the process starts.  Basically, no LoadLibarys.  If you want to load a DLL, that must be specified in your manifest.  Now you can authenticate an in-storage program and trust that it won’t change."

    In other words, there is no extensibility possible for applications running on Singularity. Doesn’t strike me as too useful, then!

  28. Chris Becke says:

    I don’t know… I still *want* permissions to be assigned to applications.

    Now, DLLs within an application – that’s another story. I can see that it would be impossible to enforce module-based permissions.

    That said, when I download and run some random application from the internet – say, for example, Google Picasa – I think it should be confined to reading/writing exclusively under HKCU\Software\Google… And within Program Files, it should again be stuck within its install-script-declared folder, “Program Files\Google…”

    I would like an OS where *all* software *has* to be installed via install scripts that are executed by a trusted, OS-supplied install tool, i.e. MSI.

    And the applications/software subsequently installed would be entirely confined – by default permissions – to their own program folders and registry keys. OS keys (HKCR) would be read-only; other applications’ keys would be entirely hidden. I don’t think there’s any reason for application X to know of application Y’s presence on the machine, never mind have the ability to screw with its settings.

    I am a developer. Management occasionally come to me and ask me to do… distasteful things. So far I have held them off. But for as long as “don’t screw with other applications or the OS via undocumented means” remains a not very well documented guideline, the temptation exists.

    Me – I want the OS to partition applications, and keep them partitioned. At least their files and registry keys. That way, when I want to uninstall something, I can, and be sure its cleanly gone.

    The process boundary “protection” the os offers is all for naught when apps can directly go and poke at the config files, code files, and registry settings of other apps.

    [So if you open an album in Picasa it shouldn’t show up in your Start menu’s “recent documents” list? Because that list is stored in HKCU\Software\Microsoft… which your proposal places off-limits to Picasa. And what’s the difference between a configuration file and a document? -Raymond]
  29. theofour says:

    Actually, I think users should be able to do almost everything they want to do; it’s just that hazardous operations should be made difficult to complete (by asking the user to confirm passwords, insert PINs and PUKs, and supply other passwords/keys).

    You do not need an application to cause user unhappiness from having those rights – users will almost always do something they later regret. So technically users need to be protected from themselves. That is why versioning filesystems (filesystems that version your files, making them recoverable) and system-restore functionality are so useful. The only drawback is that these features require a lot of space on your disks (and that space is not cheap, at least not yet). On the other hand, systems that never lose data can be very effective when battling piracy, since you couldn’t remove the evidence.

    It is just too bad that I wouldn’t like that kind of attack on my privacy, so running computers with no versioning data storage, with root/administrator rights, and with manual confirmation of (almost) any application trying to run or to access any libraries or other applications, will remain the best solution for me.

  30. Raymond today has a discussion up about the folly of trying to set security with a granularity of per-DLL. …

  31. Adam says:

    Chris> "I would like a OS where *all* software *has* to be installed via install scripts that are executed by a trusted OS supplied install tool. i.e. MSI."

    What about batch files/Monad scripts? Should you be required to write an installer for your batch file and then execute it before being able to /run/ the script? Every time you modify the file? Even while you’re in the process of developing it? (How do you tell the difference between a "finished" script and one under construction? Is there a difference? For most of my scripts there isn’t.)

    Or do you want the scripts you write to not be able to access any of your files? (Why are you writing the script again?)

  32. El Guapo says:

    Sorry if this has been said before, but:

    How the heck do you explain CAS (CODE ACCESS SECURITY) in .NET?

    The whole principle of CAS is exactly the OPPOSITE of what you just asserted!

    [This was already explained yesterday. -Raymond]
  33. El Guapo says:

    Oh, sorry, I will go read it.

  34. James says:

    I am to the point I’m ready to pull my hair out.  I cannot believe all of you are having such a horrible time with the simple concept Mr. Chen is trying to convey to you.

    THE PROGRAM RUN BY THE USER IS RUN UNDER THE SAME SECURITY CONTEXT AS THAT USER.

    If a user with ADMINISTRATOR RIGHTS starts up notepad, then notepad HAS ADMINISTRATOR RIGHTS!  It’s that simple.  And the same goes for any other stupid program the user gets from download.com.

    What I am seeing is that many of you want to log on as god but still be able to download all the crap you want in utmost safety.  Or, you want Windows to protect the user from every application out there BUT YOURS!!  Which raises the question of why in the hell I should trust you and your little app.

    Please stop and think about what you are typing before you type it.  Right now I am having a horrible time thinking any of you have a degree in computer science – and if you do you obviously skipped the entire semester of your Operating Systems class (and possibly even compiler construction).

    The answer isn’t to create more dialog boxes for users to ignore.

    The answer isn’t for MS to create special back doors just for you.

    The answer is for MS to continue to spend millions of dollars every year in research and usability labs to figure out the best way to move forward – which is what they are doing.

    Of course, there’s always the conspiracy theory that many of you are open source/slashdotters who are trying everything you can to knock a hole in Mr. Chen’s wall.

    On a different note, users logging on with admin rights explains 99.99% of all security problems and crashes that Windows has endured since Windows 2000.  And the reason why users have to log on with admin rights is because of poorly written third-party apps (like yours).  Thank God (and Mr. Chen) that MS is going to do something about that in Vista – despite it not being their problem to begin with.

    James

  35. Carlos says:

    @Adam: I didn’t go into detail but the problems you note are easily surmountable.

    When applications are security principals you can associate permissions with them.  So a web server would have access to “wwwroot” and its subdirectories.  You would give media players permission to *read* all media files (identified by extension).  Playlists are part of the app’s data so don’t require any permissions.

    If you opened a web project for editing you would check a box saying “and all files in this directory and subdirectories”.

    If you open a Word file in a read-only directory it already puts its “~df*.tmp” files in the temp directory.  This hasn’t caused me any problems.

    [So the “secure file open” dialog would have an “and all files in this directory and subdirectories” checkbox? What file extensions will you allow Notepad to open? Why is the answer to every security problem “Just add another dialog box”? -Raymond]
  36. Carlos says:

    “So the ‘secure file open’ dialog would have an ‘and all files in this directory and subdirectories’ checkbox?”

    Yes.

    “What file extensions will you allow Notepad to open?”

    None by default.  Notepad can only open files that the user has selected in the secure dialog (whatever extension they have).

    “Why is the answer to every security problem ‘Just add another dialog box’?”

    The question “should this application be allowed to access that file/API/service/whatever” is not one that can be answered by a computer (or an OS vendor).  Ultimately, someone has to make a policy decision.  And if you’re going to the ask the user, making it an implicit part of the file-open dialog (or windows explorer) is very unobtrusive, since it’s something they have to do anyway.

    Of course, home users can install random software with arbitrary permissions and screw up their computers if they want to.  But that’s no reason to deny security to knowledgeable users and administrators.

    [Notepad can’t open any files by default. Hm. Okay, well I guess there’s no point taking this any further – we fundamentally disagree. -Raymond]
  37. Reuven Lax says:

    Take a look at the security model used in Singularity, out of Microsoft Research.  In this model, the application being run _is_ a security principal.  The security principal for any instance of an application contains the history leading up to that execution (kinda like Java/.Net stack inspection).  So, an instance of Word may have the following principal:

    logon.exe@ntdev\raymondc+explorer.exe+winword.exe

    Meaning that you logged on as raymondc, started explorer which then started word.  The ACL on a file now becomes a regexp.  Say .*logon.exe@raymondc.*winword.exe.*

    This means that only raymondc can access this file, and only through winword.exe at least indirectly (I’m assuming there aren’t any more logon.exe@otheruser entries in the principal).

    Of course this only works if you can trust a process, which you can never really do in Windows/Unix.  One way Singularity accomplishes this is by disallowing any form of code loading after the process starts.  Basically, no LoadLibrarys.  If you want to load a DLL, that must be specified in your manifest.  Now you can authenticate an in-storage program and trust that it won’t change.

  38. Dean Harding says:

    > In other words, there is no extensibility possible for applications running on Singularity. Doesn’t strike me as too useful, then!

    IPC is essentially free in Singularity (it almost has to be), so to implement extensibility you run your plugins in separate processes.

  39. Mike Hearn says:

    This isn’t correct – *most* UNIXes have the user at the center of the security system, but SELinux and AppArmor change this completely. They allow you to assign fine-grained privileges to applications and not users. Binaries are tagged with extended attributes identifying which security context they should run in (or, in AppArmor, they are identified by file paths).

    The system originally governed only the standard UNIX syscalls and APIs, so you could for instance say that Apache is only allowed to listen on port 80, only allowed to read files marked in a special fashion, only allowed to fork, and so on. Through “userspace object managers”, servers like DBUS (the central RPC router) and the X server (the graphics/windowing subsystem) can *also* perform security checks by examining the context that the remote process is running as.

    This allows you to do things like say “No applications are allowed to take screenshots, except the official screenshot application”.

    So you can get very fine grained security like this, at the application level. CoreForce provides something similar for Windows, but not quite as good.

    Now this is quite good at limiting what hackers can do, but it is different from stopping spyware/malware/untrusted software, because by definition an application has to specify the minimal set of permissions it needs. So a malware program could just say it needs the ability to call CreateRemoteThread on explorer, and the operating system would have to believe it.

    Using this kind of process-centric security system is a step in the right direction, however… it allows you to say “We can have magic registry key XYZ and only the official Microsoft Control Center program can change it”; then you can also say “The Microsoft Control Center program can only be remotely controlled by accessibility programs/debuggers signed by us”, and now it is possible for an end user to set the registry key but not a program. I’m simplifying, but you get the idea, right?

    [“… only by programs signed by Microsoft.” Yeah that’ll go over really well. -Raymond]
  40. John Smith says:

    Tiny Firewall 2005 does pretty much what some are asking for. It basically sandboxes all processes on the machine and allows fine grained per process per user control.

    I used one of their first versions a couple of years ago and even then it was fairly stable and didn’t cause many problems, but it was a bit of a pain to set up properly.

    I have no idea how effective it is against an application that is designed specifically to thwart it, but it works well against normal processes.

    VMWare + regmon + filemon are a good combo for testing out new applications.

  41. Archangel says:

    Well, I’m flattered to have had one of my posts linked to…

    I don’t believe "Windows should let me do whatever I want" and "Windows should use ACLs to protect files, not sneaky tricks" are as orthogonal as all that. The obvious solution is implemented (apparently badly, but hey) in Vista as UAC – then you can do whatever you want, given a wee jump through a hoop, but important bits of the OS are still protected against malicious applications, at least within reason.

    As far as signing all drivers on 64-bit Windows – my opinion is not to require it. I’m told it costs money – I like the idea that I can buy a piece of hardware without having to pay extra to get the driver signed by Microsoft. None of the drivers on my current system are signed by anyone, and it seems to work fine – I’m not at all convinced that it’s a necessary evil in any way.

  42. Norman Diamond says:

    > Yes, UNIX has setuid, but that still doesn’t make applications security principals. Even in UNIX, permissions are assigned to users, not to applications.

    OK.  A user who isn’t allowed to log in and isn’t ordinarily impersonatable by any real world user is still a user, OK, let’s continue.

    > But when you are running an application, the application is you.

    Not if you’re running a setuid application.  In such a case the application runs as the user who owns the application – a user whom you can’t even impersonate by other means (unless you’re already root).

    > Any security policy that says "Applications cannot do X without permission from the user" is flawed from conception.

    Recent Vista betas give quite a different impression.  Sometimes your employer decides to do a good job of tackling some issue, and they seem to be proceeding pretty well on this one.  (It would still be a good idea to add setuid in addition though.)

  43. yet another open source/slashdotter says:

    > Of course, there’s always the conspiracy theory that many of you are open source/slashdotters who are trying everything you can to knock a hole in Mr. Chen’s wall.

    Quite a racist comment.

    >> Or, you want Windows to protect the user from every application out there BUT YOURS!!

    Effectively I don’t want Windows to protect me from anything at all. But since MS is going to allow only signed drivers, and since I use many unsigned drivers on my system (without any problem – heck, I even wrote one of them), I’d like a way to bypass the protection at will.

    A different boot mode. A different setting at Windows installation time (press F6 or something). A different x64 edition entirely. Something to be confirmed at next reboot (before applications start).

    Why can’t you selectively enable/disable the rule for different drivers, when you can do it at a global level?

    Of course some way around it exists (otherwise no one could develop drivers on/for Vista x64?!), maybe involving having to use checked builds.

    >> The answer is for MS to continue to spend millions of dollars every year in research and usability labs to figure out the best way to move forward – which is what they are doing.

    They spent millions of dollars researching that, for security reasons, it’s good for Outlook to zap every attachment with an exe or xml or whatever other extension?

    They spent millions of dollars so that MSN could automagically delete a file it received just because it PERCEIVES it as dangerous. Gosh, I’M the one who decides what’s dangerous and what’s not.

  44. Mike Dimmick says:

    yos/s: you’re not up to date. There will be a boot mode option in Vista/x64 to enable unsigned drivers. The desktop will then be ‘stamped’ to indicate that the system is booted in this mode – the desktop background will be overlaid with some text indicating this, as occurs in safe mode and for the betas. I don’t think this was implemented in Beta 2 but should be in the next public release.

    There won’t be any way for an application to enable this – it will have to be selected from the boot menu every time. To do otherwise would compromise the driver signing program.

    Source: http://www.osronline.com/article.cfm?article=465 (requires login)

  45. Tyler Reddun says:

    People keep talking about setuid as a solution, but it’s not; it’s a security hole. Having an app that anyone can run automatically take on higher privileges than yourself is asking for your system to be compromised (example: you’re running IE as a regular user, a hack lets it run code, it runs the setuid app and hacks in, and now it owns your box).

    setuid is the bane of UNIX systems, a workaround to allow users to change their passwords because root owns that file. It’s a very bad choice for a security feature.

    Windows at least has a "Run As…" on every application; this way you can start up an app as an administrator while in a normal user account (I use it mostly for installing new applications). It’s not automatic and you have to be willing to put up with a password dialog, but it’s there, and it works.
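
    For example, assuming a local Administrator account exists, a limited user can start an elevated command prompt with:

    runas /user:Administrator cmd.exe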

  46. OrsoYoghi says:

    > Looks like part of the Secure Audio Path to me.

    I have the same opinion.

    It looks like another of those “own the user while making him believe we’re doing him a favour” moves…

    Driver signing is not protecting me from getting garbage in my system. It’s protecting my system from me, and this is the wrong way around, yet another time in history.

  47. Nar says:

    > Explorer cannot show the properties of the file Party.doc. That document can be opened only by Microsoft Word.

    Apps can drop privileges, right? So Word can say it only wants write permission to Word documents, the user’s settings, etc., and can’t later be coerced into taking over the OS by a macro it loads, or into corrupting something by a bug. This is good, in the sense that users don’t want any program they load to have arbitrary access to everything available to them.

    The only thing the user wants Word to open is the documents the user asks it to open. So, have Vista/Explorer hand Word permissions on a file-by-file basis as the user opens them. If Outlook wants Office to open a file, have it use the same Vista API, and automatically give Office its own privs to that file at the same time. Then, if Vista wants to keep a list of ‘recent documents’, then it’s a Vista implementation detail, not something each app has to manage (and manage to screw up) itself.

    [“The OS” is a pretty vague term. Could you be more specific? Are you referring to DLLs (which run in-process and therefore are under the same security constraints as the host process)? Or out-of-proc services? Or kernel-mode components? Given that nearly all software runs in user-mode in a single process, how do you distinguish between “application running in-proc” and “OS code running in-proc”? If Word drops privileges and then calls an OS function provided in the form of an in-proc DLL, then that OS function also runs with reduced privileges and may not be able to accomplish what it was being asked to do. Maybe you consider this a feature. -Raymond]
  48. Dewi Morgan says:

    There was a similar case when everyone outside Microsoft said "outbound protection in a software firewall is a good thing" and MS said "No, it is pointless and can trivially be subverted" (qv Jesper’s blog: he mentioned it in his "Security Myths").

    BOTH were right.

    If you install a firewall with outbound protection, then you’ll block most malware from getting out, stop most badly behaved software from dialling home without asking, and stop most adware from showing ads. You’d be in a tiny minority of users who take this step, so the programmers don’t take the time to work around it.

    But if MS had introduced it in XP by default, then every piece of malware would have outbound-protection-dodging. Every DLL for adware and dialling home would have it. People would dodge the firewall by default, as the firewall would be part of their development environment by default.

    Not until they had altered the Windows core significantly could MS include undodgable outbound protection, as they allegedly have in Vista, for drivers, or something. They had to do this because it was very clear that applications which provided this even as broken functionality were very, very popular, as they filled a need.

    The "application ACL" is similar: we need third-party ACL programs (such as CoreForce and TinyFW, ibid: anyone know of any others? Especially free ones?) that do this task, as well as they can under the current system. As third-party apps, these would work to protect the users from badly-behaved apps, because they would not be part of the default dev environment for those apps.

    Microsoft can’t do it themselves, yet, as their ACL would become part of the default malware development environment, so they’d need to do it "properly", in a non-dodgable manner, rather than just "as best as possible". This would require a total rewrite of windows security, and there is no business case for that, until a lot of users start to use third-party application ACLs, and clamour for their introduction in Windows.

    So, Raymond is RIGHT to say per-application ACLs just won’t work as a solution for MS at the moment, but at the same time that doesn’t mean that per-application ACLs aren’t needed, as well as can be written under the current system.

  49. Chris Becke says:

    I really can’t think of any clean way to get B.DLL to save its settings when loaded by application A. Well I can. Some kind of IPC. B.DLL cannot save its settings. Process A can load B.DLL. B.DLL will IPC to an instance of ProcessB – probably ServiceB, and *that* will save settings. What a pain, I agree.

    Perhaps I want too much when I want applications to be partitioned.

    Well, I just made the mistake of installing Splinter Cell: Chaos Theory. It told me to restart, which can only mean it’s installed a rogue device driver. I *really* do wish for more intrusive dialogs when setup programs try to install – probably unsigned – drivers behind my back.

  50. James says:

    @yet another open source/slashdotter:

    >>Effectively I don’t want Windows to protect me from anything at all.

    —————————-

    The problem we have is that people like you say that until some trojan gets on your machine (right after you click OK on something) and then you bitch about the insecurity of Windows.

    =============================================

    >>A different boot mode. A different setting at Windows installation time (press F6 or something). A different x64 edition entirely. Something to be confirmed at next reboot (before applications start).

    Why you cannot selectively enable/disable the rule for different drivers, you can do it at a global level.

    ———————————

    We already have drivers that do that.  Some vendors click OK for you when Windows notifies you that the driver is unsigned.  The only thing MS is trying to do is get rid of the unscrupulous ones.

    =========================================

    >>They spent millions of dollars researching that for security reasons it’s good for outlook to zap every attachment with exe or xml or whatever else extension ?

    ——————————-

    Why yes, they did.  Clicking on an EXE sent as an email attachment is like trying a new food recommended by someone you have never met before.  Yeah, I wouldn’t either.

    ===========================================

    >>They spent millions of dollars so that MSN could automagically delete a file it received just because it PERCEIVES it as dangerous.

    ————————————

    Not because it perceives, but because statistics plainly show that the majority of such files cause the majority of the problems in question.

    =====================================

    >>Gosh I’M the one who decides what’s dangerous and what not.

    ————————————-

    Yes you are.  The problem is you’re in the minority.  There is a term for people like you: casualties of war.  And I am sorry you have to deal with all this.

    James

  51. Mike Hearn says:

    ["… only by programs signed by Microsoft." Yeah that’ll go over really well. -Raymond]

    I know, I know.

    Still. The world we live in today is VERY different to that of even 5 years ago.

    I remember back when XP was about to come out, people were kicking up a fuss about whether online update would be on by default or not (MS will control every computer in the world, etc). 5 years on we’re all just damn glad it is. Nobody thought about botnets, trojans, adware like Aurora back then.

    I think if you started locking down the system (for those who wanted it) such that CreateRemoteThread, driver loading, debugging etc only worked for applications from verified authors, and such that the operating system protected itself from malicious software via a combination of mandatory access control and signing programs …. I think people would be a lot more sympathetic to that now than perhaps they would have once been. I know this is true for me.

    What’s the alternative? Organised crime is winning this one, they are wiping the floor with us. I don’t see any alternative to gradually locking the system down. I wish there was :(

  52. KJK::Hyperion says:

    Todd: Windows doesn’t have "setuid" at all. The creation of identities (token objects) is an extremely sensitive operation that can only be initiated by a system process, and it has to pass through an authentication package. While it’s possible to create an authentication package that can just synthesize identities out of thin air (one does exist; it’s included in the Windows port of the CVS server), as a general rule you have to provide credentials (typically username and password). In Windows Vista, though, that has changed: there must be some sort of "setuid" facility, because the Task Scheduler no longer requires a password when running jobs as another user (only if you need to access remote resources through integrated authentication, which is how it should have been since forever), and all services now automatically get their own SID synthesized from the service’s short name.

    Everyone else: per-application SIDs do exist, although you have to configure them yourself. The implementation (Safer), while powerful (you can match the path, the checksum, the digital signature, etc.), is purely advisory, meaning it can be bypassed on the first run (but once inside the sandbox there’s no way out, because the kernel guarantees that), and the per-application SID part may or may not be implemented (it’s undocumented but public; see SaferObjectRestrictedSidsInverted and SaferObjectRestrictedSidsAdded and surroundings in <winsafer.h>).
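
    A minimal sketch of launching a program under one of those Safer levels, using the real winsafer.h calls (SaferCreateLevel, SaferComputeTokenFromLevel); error handling elided:

    #include <windows.h>
    #include <winsafer.h>

    BOOL RunConstrained(LPWSTR cmdline)
    {
        SAFER_LEVEL_HANDLE hLevel;
        HANDLE hTok;
        STARTUPINFOW si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        BOOL ok = FALSE;

        if (!SaferCreateLevel(SAFER_SCOPEID_USER, SAFER_LEVELID_CONSTRAINED,
                              SAFER_LEVEL_OPEN, &hLevel, NULL))
            return FALSE;

        // Derive a reduced token for the current user at that level.
        if (SaferComputeTokenFromLevel(hLevel, NULL, &hTok, 0, NULL)) {
            ok = CreateProcessAsUserW(hTok, NULL, cmdline, NULL, NULL,
                                      FALSE, 0, NULL, NULL, &si, &pi);
            if (ok) { CloseHandle(pi.hProcess); CloseHandle(pi.hThread); }
            CloseHandle(hTok);
        }
        SaferCloseLevel(hLevel);
        return ok;
    }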

    The major downside is severe inconvenience that can sometimes stop the application from working. Filesystem and registry ACLs have to be set manually to let the application at least access its configuration files and registry keys, but that’s relatively easy, since those ACLs are permanent. There are, however, many volatile, hard-coded ACLs that will limit your application left and right – notably, the credentials storage will deny access to any sandboxed application, meaning you won’t be able to save your MSN Messenger password, for one. Did I mention that SSPI (the component providing "integrated authentication" for network protocols) will not give sandboxed processes any credentials, barring you from any network drive and SSL servers? (… and did I mention the default GUI for Safer doesn’t even support configurable sandboxing?)

    (… and did I mention that the temporary directory, now that you are a sandboxed process, is read-only? Did you know? But most importantly: did your parent process know?)

    Personally, I have written my own sandboxing tool, it’s called iam ("I Am", sorta like the opposite of "whoami"…), it’s command-line with a help text several Bibles long, and requires a rocket science degree to use (or just run "iam -typical cmd"… after setting %TMP% and %TEMP% to a writable directory, of course):

    <http://spacebunny.xepher.net/hack/iam/>

    Myria: I swear if you use the "pwn" word once more, I’m going to slap you with a wet noodle

  53. Myria says:

    Driver signing is a horrible mistake.  It does not prevent rootkits.  All you need for a rootkit on the vast majority of systems, even in Vista64, is to NtCreateFile \Device\Harddisk0\Partition0, NtWriteFile 512 bytes, and NtShutdownSystem to reboot.  System pwned.  You could do things like not reboot and wait for the user to, or bugcheck the system to act like Windows crashed.

    Unless Microsoft wants to restrict raw disk access to kernel drivers, this will never be fixed.  And if Microsoft does that, the first thing Symantec will do is make a (signed) driver that allows raw disk access to user mode, and then rootkits can copy that driver, since rootkit authors don’t particularly care about copyright law.  Imagine Symantec Ghost having to exist entirely as a kernel driver, since granting *any* raw disk access to user mode breaks driver signing.

    Rather than preventing Administrator from doing, well, administrative things, Microsoft should be doing all they can to prevent trojans and/or shellcode from getting to Administrator level.  They have made great strides on this in Vista.

    I suspect DRM is the real reason for driver signing, since its anti-rootkit security is dubious at best.  If you disable driver signing (either with F8 at startup or with test signing), Windows Media Player refuses to play protected songs.  Looks like part of the Secure Audio Path to me.

    That brings up another point.  Test signing is something else you can enable from user mode then reboot the system to take over.

    Raymond is completely correct in his post.  If you are going to have driver signing, nothing available to an Administrator user-mode program should be able to disable driver signing.  Of course, the MBR and BOOTLDR are two such things, but I think you get the point…

    Melissa

  54. And now, let us all sit back and watch as Ken, a person who has never seen a single line of Windows source code, tells the person who works on said source code all day every day how said source code works.

    This should be fun.

    James

  55. Ken Hagan says:

    I think Raymond is wrong on a (significant) technicality.

    There’s no such thing as the user’s privileges. Access rights and privileges are bound to a thingy called a token, and this is nearly always built (by the kernel) in a standard way based on the user’s identity. (Except for fresh logins, the standard way is to clone the token of the parent process.) However, it might be clearer if we think of it as the “current instance” of the user. If you log in interactively, your token will have INTERACTIVE privileges. If not, your token won’t, and you can use INTERACTIVE in file or registry ACLs to exploit this difference. The kernel’s restricted tokens allow privileges and groups within a token to be selectively withdrawn (which is the secret behind the Safer APIs noted by KJK::Hyperion). So it *is* technically possible to run a process with fewer privileges than you normally have. Since restricted tokens are a kernel concept, this is as bullet-proof as Windows itself.
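
    (The documented route to such a reduced token is CreateRestrictedToken; a minimal sketch, dropping every droppable privilege and eliding error handling:)

    #include <windows.h>

    // Clone the current token minus its privileges and run a process
    // under the result - "fewer privileges than you normally have".
    BOOL RunWithFewerPrivileges(LPWSTR pszCommandLine)
    {
        HANDLE hToken, hRestricted;
        STARTUPINFOW si = { sizeof(si) };
        PROCESS_INFORMATION pi;

        if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ALL_ACCESS, &hToken))
            return FALSE;
        BOOL fOk = CreateRestrictedToken(hToken, DISABLE_MAX_PRIVILEGE,
                                         0, NULL, 0, NULL, 0, NULL,
                                         &hRestricted);
        CloseHandle(hToken);
        if (!fOk) return FALSE;

        fOk = CreateProcessAsUserW(hRestricted, NULL, pszCommandLine,
                                   NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi);
        CloseHandle(hRestricted);
        return fOk;
    }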

    Ideally, to use this in the real world, you would want to populate your token with several groups, each of which would represent a trust distinction that you cared about and any or all of which could be restricted in the token. You’d then configure file and registry ACLs to grant or deny access based on those trust groups, rather than more conventional aliases. As far as I know, Windows lacks a convenient UI for setting up the groups in the token.

    This is the closest the Windows XP kernel gets to supporting the notion of trusted or untrusted *code*. (Vista uses the technique to enforce its “limited administrator” feature.) IE attempts something similar with its zones, a trust level based on *where* code comes from, and the infamous signed/unsigned distinction, a trust level based on *who* code comes from, but it doesn’t use kernel-level facilities and so it can be circumvented by bugs or malicious code in the DLLs involved. However, I think the concepts that IE is reaching for are valid ones and ought to be better supported. (Interesting to read about SELinux and AppArmor.)

    [True, it depends on what the definition of “you” is, and as you noted, by “you” I meant “the token”. Change the token and you change the process’s identity, but the identity is still at the token level (which is assigned to an entire process), not the code level. -Raymond]
  56. Mike Dimmick says:

    Mike Hearn:

    "I think if you started locking down the system (for those who wanted it) such that CreateRemoteThread, driver loading, debugging etc only worked for applications from verified authors, and such that the operating system protected itself from malicious software via a combination of mandatory access control and signing programs …. I think people would be a lot more sympathetic to that now than perhaps they would have once been. I know this is true for me."

    Yeah, I had that thought too, but it would basically mean that you would have to prevent debuggers from being scriptable. Right now kd (from the Windows Debugging Tools) effectively accepts a hacking script on the command line. See for example how adplus.vbs works.

    The debugger would have to be restricted because otherwise an attacker can simply debug an application running with a token that does have the required privileges or permissions. I saw this approach used to ‘silently’ open a listening inbound port without Windows Firewall prompting to unblock, because it did so by creating a remote thread in one of the processes that is permitted by default.
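
    (The classic shape of that attack, sketched and heavily abbreviated – note that the kernel still performs an access check at OpenProcess, so this only works against processes your token can already open; the DLL path is a placeholder:)

    #include <windows.h>
    #include <wchar.h>

    // Write a DLL path into the target process and start a remote
    // thread on LoadLibraryW, so the target loads our code and its
    // per-application firewall permissions now apply to us.
    BOOL InjectDll(DWORD dwPid, LPCWSTR pszDllPath)
    {
        HANDLE hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, dwPid);
        if (!hProcess) return FALSE; // kernel access check happens here

        SIZE_T cb = (wcslen(pszDllPath) + 1) * sizeof(WCHAR);
        LPVOID pRemote = VirtualAllocEx(hProcess, NULL, cb,
                                        MEM_COMMIT, PAGE_READWRITE);
        WriteProcessMemory(hProcess, pRemote, pszDllPath, cb, NULL);
        HANDLE hThread = CreateRemoteThread(hProcess, NULL, 0,
            (LPTHREAD_START_ROUTINE)GetProcAddress(
                GetModuleHandleW(L"kernel32"), "LoadLibraryW"),
            pRemote, 0, NULL);
        if (hThread) CloseHandle(hThread);
        CloseHandle(hProcess);
        return hThread != NULL;
    }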

    I don’t know whether Windows Vista’s firewalls between processes running in the same window station but with different privilege levels extend to stopping a low-privileged process from debugging a high-privileged one. I would hope that this is covered.

  57. Mike Hearn says:

    Mike Dimmick:

    Yes, the debugger APIs are a backdoor around any security system, which is why I am always annoyed when I see ZoneAlarm pester my friends every 10 seconds with pointless popups – they add no real security, because evading them is so easy.

    I think the trick is to ensure only a real, physical person interacting via hardware can install software. If you have a scriptable debugger installed, well, that’s a back-door, and the user should be aware of that. But most people won’t have one, so "mass market" exploits like bots or adware won’t be shipping kd scripts anytime soon. If the adware can install kd and then use a script, then clearly we lose, but if only humans can install software (such that the operating system recognises it as being signed and therefore grants it a higher level of privilege), then you can work your way up from here to start getting serious security.

    The Vista UAC prompts are some way towards this now, but there is no well-defined concept of what installation is, so there’s still a lot of work to do…

  58. OrsoYoghi says:

    > The problem we have is that people like you say that until some trojan gets on your machine (right after you click OK on something) and then you bitch about the insecurity of Windows.

    Actually, I see more people bitching about stupid protections than about trojans and so on. That said, adding the annoyance of having to zip the exe before sending it by mail/IM does not add anything to security. At least a dialog box could be there.

    >> We already have drivers that do that.  Some vendors click OK for you when Windows notifies you that the driver is unsigned.  The only thing MS is trying to do is get rid of the unscrupulous ones.  

    Boot modes cannot be driven remotely by an application through messages. Unless someone does the boot sector trick mentioned, but in that case they might as well just install a patched kernel32 anyway.

    >> Why yes, they did.  Clicking on an EXE sent as an email attachment is like trying a new food recommended by someone you have never met before.  Yeah, I wouldn’t either.

    Is it different from unzipping the exe and trying it? When your agent at a 12-hour timezone offset needs the patch as soon as he logs on again, having to remember to zip it (or change the extension to ".ohmygodwhatanannoyance_rename_as_exe") is, well, just plain stupid.

  59. Andy C says:

    Mike Hearn:

    "I think the trick is to ensure only a real, physical person interacting via hardware can install software"

    I suspect a lot of large corporate system administrators would have something to say about that.

  60. Whiz Kid says:

    Most of the suggestions in this thread are stupid, even compared to the current MS OS.

  61. Alexey Lavnikov says:

    The idea that “application is the user” is not well thought out. The user, in the current understanding, is a security account which has a set of permissions enforced by the OS.

    Why can’t a single user have several security accounts with different sets of permissions (like IE zones)? Let the user decide under which of these security accounts each application is to run. One of them could be the default one, with a limited set of permissions (like the built-in guest account, but with a persisted registry).

    In this case, the application is not always the user, but only when the user decides so…
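
    (A manual version of this already exists, of course: create a second, weaker account and run the application under it, which is what runas does. A minimal sketch – the account name and password are placeholders:)

    #include <windows.h>

    // Run a program under a different, more limited account; the
    // process then *is* that other "instance" of you.
    BOOL RunAsLimitedAccount(LPWSTR pszCommandLine)
    {
        STARTUPINFOW si = { sizeof(si) };
        PROCESS_INFORMATION pi;
        return CreateProcessWithLogonW(
            L"alice-limited", L".", L"placeholder-password",
            LOGON_WITH_PROFILE, NULL, pszCommandLine,
            0, NULL, NULL, &si, &pi);
    }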

    [And then when Windows Vista tries to do what you suggest, people are upset because of all the elevation prompts. You just can’t win. -Raymond]
  62. Adam says:

    Mike > “The debugger would have to be restricted because otherwise an attacker can simply debug an application running with a token that does have the required privileges or permissions.”

    Mike > “Yes the debugger APIs are a backdoor around any security system,”

    WTF?!? You can debug a process that has a different security descriptor than you? Wha…? Why…? Huh…?

    That’s … not even broken. It was never whole to begin with!

    *flabbergasted*

    Mike > “you would have to prevent debuggers from being scriptable”

    How would you do this? What’s to stop someone writing their own scriptable debugger? If they’re installing other software (adware, bots, etc…) they can install that.

    [I would recommend people actually verify what the rules are regarding who can debug processes before jumping to conclusions. Notice that Mike said “the required privileges or permissions.” -Raymond]
  63. C++ guy says:

    Raymond said:

    I get so puzzled when I hear people say, "Windows should let me do whatever I want with my system", while simultaneously saying, "Windows should have used ACLs to prevent applications from doing whatever they want with my system."

    Ok, programmers should know better.

    But users?  I think users have a very clear concept of "what I did" vs "what the application did."  I typed in a URL.  Internet Explorer installed spyware for me.

    Fixing this would require a security overhaul that would dwarf the XP -> Vista security overhaul.  Ain’t gonna happen.

    But whoever does it will make a lot of money. ;-)

  64. Adam says:

    [I would recommend people actually verify what the rules are regarding who can debug processes before jumping to conclusions. Notice that Mike said "the required privileges or permissions." -Raymond]

    Ah – I was falling into the trap of assuming that all the posters here are non-/.-karma-whore-ish enough to only post things like "you can do X" if they’ve actually done it.

    Most of the time, on this blog, it’s a pretty good assumption.

    "You should be able to do X" I always take with a pin^H^H^Htub of salt – you get all kinds of madness posted along those lines. :)

  65. Ken says:

    "And now, let us all sit back and watch as Ken, a person who has never seen a single line of Windows source code, tells the person who works on said source code all day everyday how said source code works."

    I doubt Raymond does work on this part of Windows every day, but that’s hardly the point since Raymond doesn’t disagree with my nit-pick. (Neither do I think IE’s concept of zones is arcane knowledge that only the Raymonds of this world can be expected to understand.)

    The point is that there’s a distinction between the user and a token with that user’s SID sitting in it. Whoever designed the "Safer" APIs and Vista’s new dialogs thinks it can be exploited.

  66. Nar says:

    >If Word drops privileges and then calls an OS function provided in the form of an in-proc DLL, then that OS function also runs with reduced privileges and may not be able to accomplish what it was being asked to do.

    If it was IIS, and it was asking to write to C:\Windows, and being denied, that’s a good thing. I’d say the ‘secure’ file-open API would have to be a system call into kernel mode. By default, many kinds of apps might have a profile with read access to computer settings/files, and read/write access to user settings/files and a working set of files. The working set would be provided by Explorer (or whatever) when it calls the secure file-open API, and the API would tell the app its working set and temporarily grant it the working-set privileges that the calling app had.

    The same API might allow asking for new permissions. Some, like creating temp files, might be in the app’s profile and automatically granted or blocked; some might be grantable by the kinds of apps that give it a working set; and some would fall back on the user, or get the app recognized as a problem and killed.

    [I’m not sure what your remarks have to do with the sentence you quoted. -Raymond]
  67. Ben Last says:

    A bit of an aside, but the Symbian OS (especially as of version 9) does have privileges (known as capabilities) assigned to applications and DLLs.  Those capabilities are granted when the application or DLL’s installation file is digitally signed.  Since a mobile device usually doesn’t have the idea of separate users, and can potentially be picked up and used by anyone, identity-based privileges don’t work in the same way.

    At the very least, it’s a different approach.

    [I wonder what happens if an application tries to load a DLL which has different privileges. -Raymond]
  68. Mike Hearn says:

    Andy C – yes, good point, but managed desktops are a whole different kettle of fish. Assume that whatever restrictions prevent some random program from installing other software can be lifted by an administrator in a secure way?

  69. Mike Hearn says:

    Adam – we were talking about firewalls and other ‘security systems’ that try to police programs on the basis of who they are. The problem is not that you can debug some process with a different kernel security context (you can’t…); the problem is that other programs try to hand out permissions that they can’t enforce (like ZoneAlarm).

  70. Adam says:

    Mike: Sorry, it looked like you were replying more to Mike’s post, where he was talking about using a debugger to make another app with more *kernel* privs do something that it would not otherwise be allowed to do. The "any" in "debugger APIs are a backdoor around any security system" kind of helped with that, as did the fact that that clause came *before* the example of a user-mode access control system.

    Thx for clarification.

  71. Barry Kelly says:

    I think that anyone looking at this problem should read the "Capability Myths Demolished" paper, here:

    http://srl.cs.jhu.edu/pubs/SRL2003-02.pdf

    It changed the way I think about ACLs versus Capability systems. I now think Windows has the wrong security model.

  72. Michiel says:

    What exactly is the problem with applying security to sensitive applications? Sure, you cannot load just any DLL anymore from such a process. That is a good thing! If I signed A.EXE and installed it using an application-ACL, and it tries to load B.DLL, which I didn’t sign, Windows should terminate the process.

    Yes, this means Word or Explorer or ActiveX hosts won’t get such application-ACLs. Good, they shouldn’t be messing with the OS in the first place. Any open-ended application should be considered potentially insecure.

    However, it does solve the original problem. You can have a registry key that is changeable by any user, but only via permitted applications. (They will of course use the OS DLLs, but those should have superset ACLs. The process-ACL is just the common subset of the application-ACL and the DLL-ACLs.)

    [Yes, this means Firefox won’t get such application-ACLs either. Hope that’s okay. -Raymond]
  73. Yuhong Bao says:

    So why not use ACLs for the obscure key to require real (in Vista with UAC, elevated) administrator privileges to modify it?
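
    (What that lockdown would look like, as a minimal sketch – an administrators-only, protected DACL applied via SDDL; the key path is a placeholder:)

    #include <windows.h>
    #include <sddl.h>

    // "D:P(A;;KA;;;BA)" = protected DACL granting KEY_ALL_ACCESS to
    // Builtin\Administrators only; everyone else is denied by omission.
    BOOL LockDownObscureKey(void)
    {
        PSECURITY_DESCRIPTOR pSD;
        BOOL fOk = FALSE;
        if (ConvertStringSecurityDescriptorToSecurityDescriptorW(
                L"D:P(A;;KA;;;BA)", SDDL_REVISION_1, &pSD, NULL)) {
            HKEY hkey;
            if (RegOpenKeyExW(HKEY_LOCAL_MACHINE,
                    L"SOFTWARE\\Example\\ObscureKey", 0,
                    WRITE_DAC, &hkey) == ERROR_SUCCESS) {
                fOk = RegSetKeySecurity(hkey, DACL_SECURITY_INFORMATION,
                                        pSD) == ERROR_SUCCESS;
                RegCloseKey(hkey);
            }
            LocalFree(pSD);
        }
        return fOk;
    }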

    [And what’s to stop an elevated application from screwing with the key, then? That’s the whole point of the article. -Raymond]
  74. Yuhong Bao says:

    [And what’s to stop an elevated application from screwing with the key, then? That’s the whole point of the article. -Raymond]

    Yeah, but this is the best you can do, and it is sorely needed, because not everyone can afford a VeriSign key, nor do they even want it to be required. I am thinking of open source here.

  75. Matt says:

    [And then when Windows Vista tries to do what you suggest, people are upset because of all the elevation prompts. You just can’t win. -Raymond]

    In the context of the ‘Run as Restricted’ stuff you linked to before: how about the converse – nicer/automatic privilege *degradation*, not escalation…

    What would be nice is for some reasonable* default restricted mode to be available when installing an application in Vista, for apps that declare they can function fine in this restricted mode (off the top of my head: it needs nothing but access to its own registry key, it only needs access to an app-specific temp directory, and when using the Internet (if it even needs to) it only needs access to a shortlist of named sites).

    A good example for this sort of thing is the innumerable widgets which are cropping up everywhere.

    When I install one of these (using the usual Vista privilege escalation route), it would be nice to know if the app has declared itself ‘Restricted Compliant’. It is then installed as such, and any attempt to execute the app automatically runs it as the severely restricted user (all this happens transparently to me, of course).

    Now you have this system, and – if you have defined the restricted capabilities well enough – it is possible to allow, as an admin, the installation of these types of programs, but not others, without a sysadmin coming over (or, better, doing the install for you remotely).

    This would then create a reason for developers to code their apps to require only low privileges from the word go, since they would benefit from the halo of "well-behaved program" status (if you can pick between umpteen different widgets to display the time, you may be more likely to go with the ones marked thusly).

    Obviously this is a (very) big piece of work which would require vast amounts of effort and developer buy-in. As such, it may be that the effort expended to make it happen would be more productive elsewhere. But still, this _would_ improve things, since as more apps begin to use this mode (or shades of this mode – hey, let’s not run before we can walk), there would be increasing pressure on the other apps to toe the line (since this is a ‘feature’ most IT departments would be happy to have on their checklist) and do it too.

    Whenever I see Unix installation descriptions that say "create a user with the following privileges, set the daemon to run as this user", I wince a little – since no matter how effective this is, it is never going to work in broad strokes across the Windows ecosystem unless it is automatic and transparent to the end user (apart from, on install, the warm glow the user gets seeing the ‘Safe App’ logo someone in your marketing department dreams up :)

    I know a lot of apps won’t run in this mode, including any that need to read from the filesystem except in their own area.

    I think one element you would have to provide to get this widespread would be an API for a file open/save dialog (one that is incorruptible in the same way the privilege escalation one is) that allows opening and writing to a file (so, basically, an _Edit File_ dialog).

    No need for this dialog to request your password; you are explicitly okaying it editing a file, after all. (If you need temp files, either provide an API for the temp file to be written to the same location in a controlled manner, or just have the app deal with it in its own sandbox – pros and cons to each.)
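
    (Purely hypothetical sketch – no such API exists; every name here is invented just to make the idea concrete. The dialog would run in a trusted broker outside the sandbox and hand back a handle scoped to the one file the user picked:)

    #include <windows.h>

    // Invented broker call: shows the trusted "edit file" dialog and
    // returns a handle to whatever the user chose, without ever
    // exposing a reusable path to the sandboxed caller.
    HANDLE BrokeredEditFileDialog(HWND hwndOwner);

    void SaveDocument(HWND hwndOwner, const void *pvData, DWORD cbData)
    {
        HANDLE hFile = BrokeredEditFileDialog(hwndOwner);
        if (hFile != INVALID_HANDLE_VALUE) {
            DWORD cbWritten;
            WriteFile(hFile, pvData, cbData, &cbWritten, NULL);
            CloseHandle(hFile);
        }
    }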

    This is not perfect (I came up with it in 5 minutes, so of course it won’t be), but it shows you one way to use the existing RunAs functionality (which should be a secure base) while automating the tedious (and frankly impossible for the average user) process of setting this all up.

    * I know what is reasonable will always be debatable, but several things are definite: no registry access to anything it didn’t create, and significantly restricted file system and network privileges.

    ** Or, to be more exact, were _told_ at install time.


