Date: | May 10, 2006 / year-entry #162 |
Tags: | other |
Orig Link: | https://blogs.msdn.microsoft.com/oldnewthing/20060510-06/?p=31243 |
Comments: | 48 |
Summary: | If changing a setting requires administrator privileges in the first place, then any behavior that results cannot be considered a security hole because in order to alter the setting, attackers must already have gained administrative privileges on the machine, at which point you've already lost the game. If attackers have administrative privileges, they're not going... |
If changing a setting requires administrator privileges in the first place, then any behavior that results cannot be considered a security hole because in order to alter the setting, attackers must already have gained administrative privileges on the machine, at which point you've already lost the game. If attackers have administrative privileges, they're not going to waste their time fiddling with some setting and leveraging it to gain even more privileges on the system. They're already the administrator; why go to more work to get what they already have? One reaction to this is to try to "secure" the feature by asking, "Well, can we make it harder to change that setting?" For example, in response to the Image File Execution Options key, Norman Diamond suggested "only allowing the launching of known debuggers." But this solution doesn't actually solve anything. What would a "known debugger" be?
Besides, it doesn't matter how much you do to make the Image File Execution Options key resistant to unwanted tampering. If the attacker has administrative privileges on your machine, they won't bother with Image File Execution Options anyway. They'll just install a rootkit and celebrate the addition of another machine to their robot army. Thus is the futility of trying to stop someone who already has obtained administrative privileges. You're just closing the barn door after the horse has bolted. |
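For the curious: the setting in question is the "Debugger" value under the Image File Execution Options key. A minimal sketch of how a program that already has administrative privileges would set it; the target executable and debugger path here are hypothetical.

#include <windows.h>

// Sketch: point the IFEO "Debugger" value for a (hypothetical)
// target.exe at a (hypothetical) debugger. This fails unless the
// caller already has administrative privileges, because the key
// lives under HKLM.
int main(void)
{
    static const char debugger[] = "C:\\Tools\\mydebugger.exe";
    HKEY hKey;
    LONG rc = RegCreateKeyExA(HKEY_LOCAL_MACHINE,
        "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\"
        "Image File Execution Options\\target.exe",
        0, NULL, 0, KEY_SET_VALUE, NULL, &hKey, NULL);
    if (rc != ERROR_SUCCESS)
        return 1;   // access denied for non-administrators
    rc = RegSetValueExA(hKey, "Debugger", 0, REG_SZ,
        (const BYTE *)debugger, sizeof(debugger));
    RegCloseKey(hKey);
    return rc == ERROR_SUCCESS ? 0 : 1;
}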
Comments (48)
Comments are closed. |
I assume the situation is different if the machine’s real admin changes a setting that inadvertently lets an attacker in, right? I don’t think this would apply to IFEO, but it may apply to certain other things. Not sure whether that means anything as far as security is concerned, though — dumb admins can always screw up a system.
Regarding the signing stuff: Do the same principles apply to driver signing? Specifically, if you install your own root cert in the CA store on a machine, will drivers show a warning if they’re signed with a cert that chains to yours? Or does the code-signing cert have to chain to one of a hardcoded list?
(Or don’t you know, since you work on the shell and not the kernel? If not, do you know who might?)
"If changing a setting requires administrator privileges in the first place, then any behavior that results cannot be considered a security hole because in order to alter the setting, attackers must already have gained administrative privileges on the machine, at which point you’ve already lost the game."
Not necessarily. It may be that a privileged service that accepts connections from untrusted users has a bug that does not allow arbitrary code execution, but does allow an attacker to execute existing code that they shouldn’t be able to reach, in order to change such a setting.
Example:
if (!userIsAdmin) {
    return EACCES;
}
changePrivilegedSetting();
If userIsAdmin() is a function, not a variable, then the bare name userIsAdmin above decays to a non-null function pointer, so the test !userIsAdmin is always false, meaning that the user will never be denied permission to call changePrivilegedSetting().
As noted, that’s only an example; there are other ways of achieving the same result. It doesn’t have to be a service that accepts remote connections; a setuid-root program with a similar bug could do the same thing. Also, it doesn’t have to be a function-call/function-pointer mix-up – there are other coding errors that could creep into a privileged program and have the same effect, e.g. reversing a privilege test so that only non-admin users can execute the sensitive code.
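For instance, the reversed-test variant might look like this, reusing the hypothetical names from the example above:

// Inverted check: administrators are rejected and everyone else
// falls through to the sensitive call.
if (userIsAdmin()) {
    return EACCES;
}
changePrivilegedSetting();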
Maybe the code to change the setting fits within the required size for your exploit payload, but a full attack doesn’t.
Hello, security hole.
Adam, in your case, changing the setting does not require "administrator privileges in the first place": the programmer just thinks it does. The problem is actually privilege elevation (it allows an attacker to change something they shouldn’t), and is solved by correct validation.
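For illustration, a sketch of what correct validation might look like on Win32, letting the OS answer the membership question rather than hand-rolling it (this assumes the service can check the caller's token):

#include <windows.h>

// Sketch: ask the OS whether the calling thread's effective token is
// a member of BUILTIN\Administrators, instead of a hand-rolled check.
BOOL CallerIsAdmin(void)
{
    SID_IDENTIFIER_AUTHORITY ntAuthority = SECURITY_NT_AUTHORITY;
    PSID adminsSid = NULL;
    BOOL isMember = FALSE;
    if (AllocateAndInitializeSid(&ntAuthority, 2,
            SECURITY_BUILTIN_DOMAIN_RID, DOMAIN_ALIAS_RID_ADMINS,
            0, 0, 0, 0, 0, 0, &adminsSid)) {
        // NULL means: use the caller's own (impersonation) token.
        if (!CheckTokenMembership(NULL, adminsSid, &isMember))
            isMember = FALSE;
        FreeSid(adminsSid);
    }
    return isMember;
}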
Look who’s speaking. The company that purposefully cripples its OS to "mitigate the spread of malware".
Hello 10 half-open outbound connection limit, hello no-raw-sockets, hello DNS client that ignores .hosts for MS addresses…
And hello 10-lines-of-C-code that disable these "protections". On XP, you’ve already lost. Leave it alone, please :)
And I don’t like Vista’s direction: you’re still supposed to run installers as admin, having to trust every company out there. That may close some security holes, but it won’t stop malware spread by "traditional" means.
Mark:
Sorry I didn’t make it clear; if the setting I’m talking about has a system-enforced ACL (or equivalent) that only allows Admin users to change it, then an attacker could leverage a lack of correct validation in an elevated-privilege program to gain more privileges on the system.
Not all security holes are code injection bugs.
dimmik: This is an ACDSee failing, not a Windows failing. It’s perfectly possible to write an app that a restricted user can install – after all, installation consists of copying files, and changing / creating registry settings. Those are activities that any user can do.
Installing to your "Program Files" directory, or setting system-wide registry settings, that’s going to require admin privilege.
Adam: If there’s a program running with admin-level privilege and allowing ordinary users to execute inappropriate admin-level actions, then that program is the flaw, not the setting it sets.
:’): Show me those "10-lines-of-code".
> Hello 10 half-open outbound connection limit, hello no-raw-sockets, hello DNS client that ignores .hosts for MS addresses…
> And hello 10-lines-of-C-code that disable these "protections". On XP, you’ve already lost. Leave it alone, please :)
Really? You can disable all of those as a limited user? Do tell me how!
"It may be that a priveliged service that accepts connections from untrused users has a bug that…"
Then you have a bug that is a security hole. But that’s not the topic for today. As I noted, it’s a setting that only administrators can change. Your counter-example is a setting that non-administrators can change (due to a flaw).
The topic for today is, "I have a setting that only administrators can change. This setting can take values that are insecure." My point is that this is not a bug in the setting.
Alun Jones: "It’s perfectly possible to write an app that a restricted user can install (…) Installing to your "Program Files" directory, or setting system-wide registry settings, that’s going to require admin privilege"
As far as I understand, MSFT still promotes that all installs go to "Program Files". Of course any user can run anything he downloads or copies, and the simplest way for a user to avoid fetching an administrator or getting permissions is still to simply install the app somewhere in his own "My Documents" :)
E.g. if my grandmother gets some game, and I haven’t given her an admin password, she’ll probably be able to install the game to her documents.
There is a danger that some malware running without privileges could then infect an executable installed that way. But the ability for a user to install something for himself is a good one. MSFT probably thinks that a sandbox environment is confusing for a normal user, but I believe that sandboxing is the future. Why shouldn’t the user be allowed to install his own apps, and why shouldn’t they be nicely sandboxed, so that one can’t infect another, none can infect the system, and each can create and modify files only in its own incarnation of My Documents?
mikeb, KnownDlls isn’t for security purposes. It’s there for efficiency. Since almost every single process will use them, it makes sense to load them into memory at a fixed location and not have to worry about searching for them, loading them, and possibly rebasing them every time you start a process.
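A quick sketch that lists the configured KnownDLLs; the key is readable by everyone but writable only by administrators:

#include <windows.h>
#include <stdio.h>

// Sketch: enumerate the machine's KnownDLLs list from the registry.
int main(void)
{
    HKEY hKey;
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
            "SYSTEM\\CurrentControlSet\\Control\\"
            "Session Manager\\KnownDLLs",
            0, KEY_QUERY_VALUE, &hKey) != ERROR_SUCCESS)
        return 1;
    for (DWORD i = 0; ; i++) {
        char name[256], data[MAX_PATH];
        DWORD cchName = sizeof(name), cbData = sizeof(data), type;
        if (RegEnumValueA(hKey, i, name, &cchName, NULL, &type,
                (BYTE *)data, &cbData) != ERROR_SUCCESS)
            break;
        if (type == REG_SZ)
            printf("%s = %s\n", name, data);
    }
    RegCloseKey(hKey);
    return 0;
}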
Hmmm, only processes running with admin privileges ever had access to \Device\PhysicalMemory, but starting with W2K3, MS took away access to \Device\PhysicalMemory in the name of making the OS "more secure".
Pot. Kettle. Black.
I agree that if there are vulnerabilities, then these settings can be used as stepping stones. But is that a fault of the setting? Should all useful settings be removed because somebody might set one incorrectly?
Is "Image File Execution Options" useful enough to justify its continued existence? You tell me.
‘As far as I understand, MSFT still promotes that all installs go to "Program Files". Of course any user can run anything he downloads or copies, and the simplest way for a user to avoid fetching an administrator or getting permissions is still to simply install the app somewhere in his own "My Documents" ‘
OneClick (or ClickOnce) applications by default do not install into Program Files. That’s nice because my users can update an app as limited users.
"99% of users run as admin because they view security warnings as a pointless nuisance, they want those warnings to go away…"
99% of Windows developers don’t understand proper security and ask, "everybody else requires admin, so why not I?"
How many developers know what a Limited User is? How many of us use it??
I’ve met devs who refuse to fix crashes because they figure Windows crashes so often anyway, it’s just easier to blame Windows.
99% of statistics are useless. But as somebody else said, don’t underestimate a developer’s will to undermine your computer’s security.
Adam:
The question here is not whether the attacker can gain more privileges on the system, it is whether he can do so without the collusion of an administrative user. Which, in this case, he has – albeit unknowingly – by virtue of that user’s failure to test the code properly.
And from my perspective, this security hole actually *is* a code injection bug: the original code to change the setting has no bug. It is additional code injected between the original code and the end user which has the bug. Blaming the original code for the end result is simply unreasonable. Just because the attacker didn’t inject the code himself doesn’t change the mechanics of the attack.
But that verification code might also have been presented to the user on a developer group, for example, by the attacker. It certainly looks like an innocent mistake (we’ve all done that!), so if he’s caught he need only apologise.
This leads me to some thoughts on open source, but I won’t go into it here.
It’s interesting that Microsoft seems to be expending quite a bit of its resources in Vista trying to solve some security issues using some of the techniques Raymond indicates are fatally flawed.
In Vista you’ll find:
1) requiring GUI verification of some accesses that are restricted to Admins (due to Vista’s LUA/UAC policies)
2) *All* drivers on 64-bit Vista will need to be signed with a certificate (a PIC) that can be obtained only after purchasing another certificate from Verisign for about $500 (or is it $300?). As an alternative, the driver can be signed in the WHQL program which has similar costs associated with identity certificates in addition to other testing costs. In fairness, it appears that Microsoft might be re-evaluating the PIC program.
In fact, Microsoft already uses some of these techniques in WinXP/Win2003, such as using registry keys to designate KnownDlls, and signing and other enforcement in Windows File Protection to try to ensure that valid binaries are in place.
So, while all of these techniques may be ultimately fatally flawed (until the Trusted Computing Platform arrives), they must have some utility and benefit, or Microsoft is wasting a lot of people’s time and effort.
I’m not necessarily saying that having a ‘known good debugger’ list is a good idea, but Raymond’s post seems to imply that any security solution must be perfect even to be considered. If that’s the standard, then we might as well toss out our computers right here and now, or at least disconnect them from the ‘net.
Sometimes just raising the bar is indeed worthwhile.
mikeb, it’s not just raising the bar that’s worthwhile. Sometimes you can leave the bar where it is, and simply remind people how high that bar happens to be.
Assuming that your system has a known and unfixable security flaw, it will increase customer confidence if you move the same flaw to another location and then explain why it is every bit as secure as it is going to get. There are three results.
1. You have responded. Even though your activity has no real effect, you have demonstrated that you are listening. This is the single most important thing to do.
2. You have explained. Those who care about your explanation will look to people they trust to understand it, and get a confirmation there.
3. Your detractors will say "this doesn’t change anything!" because it really doesn’t.
However, #3 only hurts you if you *omit* #1 or #2. Action without explanation leaves the public uncertain, and they will latch onto your detractor as the authority. Explanation without action sounds like an excuse, and the public will latch onto your detractor because you really haven’t changed anything.
But if you have taken action and explained that action, your detractors are just whining. The public has what they want: you have done something about it, and they are satisfied with the rationale for what you’ve done. So the detractor doesn’t actually do any damage; he merely advertises that you didn’t really HAVE to change or explain anything, which makes you look even better.
As Oscar Wilde said, living well is the best revenge.
Raymond Says:
> The topic for today is, "I have a setting that
> only administrators can change. This setting
> can take values that are insecure." My point is
> that this is not a bug in the setting.
Not a bug, but still something that hackers can exploit. And hence, if it’s reasonably possible for someone to do so, it is not wise to create a setting like this.
Raymond —
My impression of much of the security work for Vista — e.g. signed drivers only on x64 — is that it’s effectively trying to demote the horribly overused Administrator to the level of a more normal user and then add a privileged level of mandatory access control beneath. My question is, why didn’t Microsoft do that literally? E.g. hoist the entire system up and put a lightweight privileged monitor beneath? Or, hoist up user space, virtualize the disk, and establish a protected data store for the kernel and its configuration that only the kernel (or another operating system, but not administrative tools) could access? Many of the massive headaches, like not being able to disable driver signing through boot.ini because that could be modified by programs running as administrator, would disappear, since you could introduce a ‘Hyperadministrator’ able to manage these options. Also, the concern that someone could find a gap in the current, ad-hoc enforcement would seem to diminish greatly.
Raymond> "…in order to alter the setting, attackers must already have gained administrative privileges on the machine…"
"The topic for today is, "I have a setting that only administrators can change. This setting can take values that are insecure." My point is that this is not a bug in the setting."
Respectfully, I must disagree with that particular conclusion[0]. An attacker can and will exploit multiple bugs/vulnerabilities, in series, to 0wn your box.
Forgive the non-Windows example and slight misrepresentations, but when a single vulnerability is discovered on a UNIX system, it hardly ever allows a remote attacker to gain a root shell on its own. For this reason, you will normally see the UNIX die-hard fanboys give one of the following two mitigations:
a) Hah, it only allows a remote user to get shell access with the account of the running process. apache/ftpd/cvsd runs with limited privileges, so that’s not a problem like it would be with a Windows system, where such services run as the local system.
b) Hah, it only allows a local user to elevate their privs to r00t. We carefully monitor the people who are allowed to log on to our server, even as limited users, so that’s not a problem like it would be with a Windows system, where everyone needs to be a domain administrator to get work done.
Unfortunately, because each is announced separately, the fanboys consistently fail to see that, combined, one of each of the above vulnerabilities is fatal, and that you need to keep on top of *all* of them to keep your system secure.
If you have a setting which makes something else insecure, attackers will try to leverage it *in conjunction with other vulnerabilities* to help them 0wn your box.
[0] I agree with everything else. Security by obscurity must concede to the almighty Google, and once someone has Admin, you’re toast.
Definitely keep Image File Execution Options.
1. The whole point of Administrator/root is to be dangerous. Sometimes it’s the only way to get things done.
2. Remove this and somebody will discover another way to do the same thing. Never underestimate the ‘Net’s knowledge-spreading ability. Blackhats use the Net very effectively – in many ways.
3. Removing this closes ONE hole. Fixing the underlying problems – everybody runs with Admin privs, and the ease of obtaining Admin from a Limited user – will close MANY holes.
I’m still not sure where this "list of debuggers" is. If I’m reading you correctly (and I’m probably not), you’re saying that the way to answer the question "Is this a valid debugger?" is to do a treewalk of the administrator’s Start menu to see if anything there is a shortcut to that same program. (What if you don’t have permission to access the administrator’s start menu? Does that mean you can’t debug anything? Does this mean that the administrator can’t "clean up" the Start menu by getting rid of rarely-used programs? What if the administrator’s start menu is redirected to another server that is unavailable?)
*All* useful settings? Of course not.
But, if an option has an insecure setting, then I’d say that that *particular* setting of that *particular* option *should* be at least considered a "security issue", and looked at for possible removal from the next release.
On reflection, the risk may be worth the functionality, especially if individual admins are given enough background to make an informed decision themselves. And admins are at least somewhat likely to read the big warning dialogs that pop up when they select that setting :)
But to leave an insecure setting in just because it’s useful? Must … resist … cheapshot … IE ….
*sucks teeth* Tricky one. :)
I presume you mean the "debugger" subkey that you’ve written about before.[0]
Considering how open to abuse that particular setting is (something you pointed out yourself in the previous article), I’d say that having a good look at alternative ways of getting the same results would be strongly encouraged.
Ideas:
1) You could enable that setting only under a Windows debugging kernel. Yes, you won’t be running the target programs fully "in the wild", but it’d keep a lot of people safe.
2) Where program A calls program B, and you need to debug program B without modifying it, then:
2a) If you can wait to attach a debugger to program B until after it reads data from program A, you could run program A under a debugger, pause it after it starts program B, and then attach another debugger to program B while it’s waiting for data from A?
2b) If you have to attach a debugger to B before it reads any information from A, can’t you just start B straight from the debugger?
3) Can you set up a Windows debugger to follow the child on CreateProcess()? If so, could that be used to debug B?
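For what it’s worth, the Win32 debug API does support idea 3: passing DEBUG_PROCESS (rather than DEBUG_ONLY_THIS_PROCESS) to CreateProcess makes the caller a debugger of the new process and of everything it spawns. A rough sketch, with "a.exe" as a placeholder:

#include <windows.h>
#include <stdio.h>

// Sketch of idea 3: launching A with DEBUG_PROCESS makes us a
// debugger of A and of every process A creates (such as B).
int main(void)
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    if (!CreateProcessA("a.exe", NULL, NULL, NULL, FALSE,
            DEBUG_PROCESS, NULL, NULL, &si, &pi))
        return 1;
    DEBUG_EVENT ev;
    while (WaitForDebugEvent(&ev, INFINITE)) {
        if (ev.dwDebugEventCode == CREATE_PROCESS_DEBUG_EVENT)
            printf("now debugging pid %lu\n", ev.dwProcessId);
        if (ev.dwDebugEventCode == EXIT_PROCESS_DEBUG_EVENT &&
                ev.dwProcessId == pi.dwProcessId)
            break;  // A itself has exited
        ContinueDebugEvent(ev.dwProcessId, ev.dwThreadId, DBG_CONTINUE);
    }
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}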
Of course, there may be good reasons why none of these (off-the-top-of-my-head) ideas would work, why no more secure alternative to IFEO covers *all* of its functionality, and why the functionality *is* vital enough to keep.
*That doesn’t stop those risks existing though, and it doesn’t mean that IFEO couldn’t be used as part of an exploit.*
In answer to your question, I don’t know. I’ve done a reasonable amount of (ATL/COM/C++) development on Windows and never needed it. But I’ve not delved into driver programming, or some of the nastier corners of MFC/the Win32 API either. My guess would be to keep that one. It’s the kind of thing that seems like it wouldn’t have been put there in the first place unless there was a real need for it. But that doesn’t mean that I’d keep all such settings. It’s definitely a case-by-case type of decision.
[0] http://blogs.msdn.com/oldnewthing/archive/2005/12/19/505449.aspx
Alun Jones> This is an ACDSee failing, not a Windows failing.
Maybe.
But it looks like it’s the common way for any Windows app to install itself.
They (the apps) look for %ProgramFiles%, which points to, say, "C:\Program Files", and try to put all the necessary files there. And, of course, they fail because the user is not an admin.
OK, maybe it’s a failure of almost every app’s installer. But it seems to be a very common design flaw. ;)
But – why not point %ProgramFiles% to, say, "C:\SomeUser\Program Files" and %WINDIR% to "C:\SomeUser\Windows" and so on? And keep something like %GlobalProgramFiles% pointing to the privileged folder.
It would be transparent for apps and for users, and they would have no need to care about admin privileges.
My wife has no idea what folders and files are – she knows she has photos and she has a viewer, and nothing else. An exaggeration, of course, but close to reality.
How (and why) would I explain to her that she has to change the default "C:\Program Files" to something else?
And what if app wants to write into registry?
Sooner or later you’d have apps that install to %GlobalProgramFiles%. What do you do then?
Norman: I think a list of debuggers that’s modifiable by Administrator to protect a registry key that’s modifiable by Administrator adds so little protection it’s not worth doing. And I think that was Raymond’s point.
It doesn’t even stop the limited users: once you have one debugger, you can run any other debugger under it. And, since you can recreate a program’s functionality in VBA, or copy the program and edit it, should debugging programs be a controlled resource?
I find having a debugger as important as a programming language. The ability to change the behaviour of a program without rewriting it is (to me) an essential part of an OS. I often use it to force programs to install on my limited profile. (Ugh and I hate OLE’s obsession with HKLM…)
I know many companies have policies against running your own code, but until Windows has a comprehensive way of stopping users running their own code (appsec is trivial to break), that’ll only stop the good guys.
If your concern is somebody using IFEO to make a system act strangely, perhaps a better solution is to create a notification for when IFEO is in effect. Then the jokers will be caught, and the serious hackers will be using more than one technique anyway.
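A sketch of that notification idea: the program (or a logon script) could check the IFEO key for a "Debugger" value at startup and complain if one is set; "myapp.exe" below stands in for the real image name.

#include <windows.h>
#include <stdio.h>

// Sketch: warn at startup if an IFEO "Debugger" value is in effect
// for this program.
void WarnIfIfeoDebuggerSet(void)
{
    HKEY hKey;
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
            "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\"
            "Image File Execution Options\\myapp.exe",
            0, KEY_QUERY_VALUE, &hKey) != ERROR_SUCCESS)
        return;  // no IFEO entry for us at all
    char debugger[MAX_PATH];
    DWORD cb = sizeof(debugger), type;
    if (RegQueryValueExA(hKey, "Debugger", NULL, &type,
            (BYTE *)debugger, &cb) == ERROR_SUCCESS && type == REG_SZ)
        fprintf(stderr, "Warning: IFEO debugger in effect: %s\n",
            debugger);
    RegCloseKey(hKey);
}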
Norman: Whoops, sorry, I thought you were advocating it from a security point of view. I agree that administrators, like developers ("no programmer would be stupid enough to…"), shouldn’t be trusted with their own systems.
It’s always risky when changing a dangerous setting becomes an everyday procedure. So to protect the admin against operator error, I’d instead suggest a property dialog that changes IFEO for you. It could then warn (from a system-protected list of debuggers) if there’s a problem. And perhaps IFEO should only be writeable by SYSTEM. How’s that?
How is IFEO any more dangerous than the HKLM ‘Run’ key, or the document editor associations in ‘Classes’, or any of the dozens of other ways to obfuscate an application’s presence once it’s got administrative privileges?
(AFAIK, the privilege check for actually attaching the debugger to a process is completely independent and sound — you need rights with respect to the process.)
Mr. Steward, thank you for posting your second followup so quickly, so I could read it before complaining about your first one.
Nonetheless, as a minor security measure it still increases the chance of a hack being discovered rather than remaining undiscovered. Just a bit.
For limited users it still helps too. Limited users would be allowed to debug their own programs. But if an administrator or hacker set the limited user’s options to use Solitaire as a debugger but the administrator or hacker forgot to update the list of known debuggers then the user would get a warning. The user would know they’ve been oddly administrated.
> It’s always risky when changing a dangerous
> setting becomes an everyday procedure.
Well sure, but Visual Studio isn’t one of the programs that I reinstall every day. Nearly every day I wish for bug fixes, but they don’t come, and reinstallation won’t help. I wouldn’t mind if installation of Visual Studio automatically updated the list of known debuggers. I don’t mind whether this update includes a dialog box or not – but I hope the dialog would be more understandable than a simple "Do you wish to update your JIT settings" (yes or no) without even saying which product is asking, what kind of JIT settings are affected, and what the change is.
I don’t understand when the "XYZ is not a registered debugger. Do you want to use it anyway?" dialog box is supposed to be displayed. Is Regedit supposed to display it when you set the value in IFEO? Is the RegSetValueEx function supposed to display the message? (What if RegSetValueEx is being performed from a service or a driver?) Is it supposed to be displayed when the target program is run? (What if the process is being run as a service? Where do you display the dialog box?)
I think it’s important to distinguish between "security" and "safety" in this matter. I completely agree with Raymond here.
With any system that cares about security, there is a line in the sand you can draw between privilege levels. Any program with access above that line can use its privilege to become god, no matter which particular privilege level they actually have.
For example, consider being in the Administrators group, or having any of the SeTakeOwnershipPrivilege, SeTcbPrivilege, SeRestorePrivilege, SeLoadDriverPrivilege, or SeDebugPrivilege privileges. If you have any of those rights, you can elevate your privileges in some way to become kernel.
Even though we generally separate powers like this, they are all effectively equivalent. It means nothing for security.
So why bother? It’s because security isn’t the only concern. Safety is another important one. This separation makes it more difficult to accidentally break something.
NT disables privileges by default even if you have them. Obviously, it does nothing for security, but it does a lot for safety.
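For example, here is a sketch of the dance a program must do to turn on a privilege it already holds, using SeDebugPrivilege:

#include <windows.h>

// Sketch: even a token that already *holds* SeDebugPrivilege must
// explicitly enable it before the privilege has any effect.
BOOL EnableDebugPrivilege(void)
{
    HANDLE hToken;
    TOKEN_PRIVILEGES tp;
    if (!OpenProcessToken(GetCurrentProcess(),
            TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &hToken))
        return FALSE;
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    if (!LookupPrivilegeValueA(NULL, "SeDebugPrivilege",
            &tp.Privileges[0].Luid)) {
        CloseHandle(hToken);
        return FALSE;
    }
    // AdjustTokenPrivileges can "succeed" without granting anything,
    // so also check GetLastError for ERROR_NOT_ALL_ASSIGNED.
    BOOL ok = AdjustTokenPrivileges(hToken, FALSE, &tp, 0, NULL, NULL)
              && GetLastError() == ERROR_SUCCESS;
    CloseHandle(hToken);
    return ok;
}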
Vista x86-64 driver signing is another example of missing the point. It will anger developers and maybe users, but one thing it will not do is stop rootkits. First of all, kernel-mode rootkits are few and far between (other than as copy protection schemes or drivers that hide cheat programs from online games). Almost all soldiers of robot armies are infected with a user-mode "rootkit" of some kind, not a kernel rootkit.
Second, if a bad program is already running as elevated Administrator, driver signing is not going to stop it from getting into the kernel if it *really* wants to.
A program running as elevated Administrator can overwrite ntldr (whatever it’s called in Vista) and reboot the system. No more driver signing check.
A response would be to block writing to those files. In that case, raw open \Device\Harddisk0\Partition1 and write to the sectors containing ntldr.
A response to that would be to block raw disk access. Fine. Create a 512 byte rootkit loader, put it in a file called rootkit.bin, and use bcdedit to add a new legacy OS entry for the file. Set the default option to that and timeout to zero, then reboot.
There are even tricks you can do without rebooting, but I won’t go into them here.
Microsoft should be concentrating on ways of preventing unelevated programs from becoming elevated. Vista does a somewhat good job of this already with UAP, but it’s not perfect. This feature is definitely a step in the right direction, unlike driver signing.
It’s complete futility to try to prevent anyone above the line in the sand from taking over the system. The only way to do something like this is to require all programs that access any kind of protected data to be signed. What scares me is that Microsoft appears to be headed in that direction. I don’t think it will be long before executing as anything but verifiable .NET requires a signature.
Melissa
Submitted as a separate response so it will be easy to delete if too much teasing is still considered offensive ^_^
Sunday, May 14, 2006 2:58 AM by Myria
> I completely agree with Raymond here.
You see that, Mr. Chen? You even distinguish between "security" and "safety" like a girl.
dimmik: a lot of Windows applications use COM components, both to construct their UIs and as general-purpose utilities.
COM registration /can/ be written into HKEY_CURRENT_USER\Software\Classes, but this often isn’t done, even though the feature was first introduced in Windows 2000. You can also have registration-free COM, but this seems to be a complex feature of Windows Installer, and no-one understands Windows Installer (yes, this is a gross over-simplification, but I think the number of people who actually understand Windows Installer is tiny, and I’m not one of them).
File associations can also be written to HKEY_CURRENT_USER\Software\Classes, and they override the local-machine settings.
Basically, few applications actually support per-user installs.
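For reference, a per-user file association is just a couple of registry writes under HKEY_CURRENT_USER\Software\Classes; a sketch with a hypothetical ".foo" extension, ProgID, and install path:

#include <windows.h>
#include <string.h>

// Sketch: a per-user file association written entirely under
// HKEY_CURRENT_USER\Software\Classes, overriding the machine-wide one.
static LONG SetDefaultValue(const char *subkey, const char *value)
{
    HKEY hKey;
    LONG rc = RegCreateKeyExA(HKEY_CURRENT_USER, subkey, 0, NULL, 0,
        KEY_SET_VALUE, NULL, &hKey, NULL);
    if (rc != ERROR_SUCCESS)
        return rc;
    rc = RegSetValueExA(hKey, NULL, 0, REG_SZ,
        (const BYTE *)value, (DWORD)strlen(value) + 1);
    RegCloseKey(hKey);
    return rc;
}

int main(void)
{
    SetDefaultValue("Software\\Classes\\.foo", "ExampleApp.Document");
    SetDefaultValue(
        "Software\\Classes\\ExampleApp.Document\\shell\\open\\command",
        "\"C:\\SomeUser\\ExampleApp\\app.exe\" \"%1\"");
    return 0;
}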
Norm, I don’t think you realize that in Windows NT, every process is basically a debugger of any process it starts. An NT process starts out as more or less an empty space (unless you fork).
The creating process actually allocates memory inside the new process and writes to it, and it even sets the initial values of the CPU registers for the new process’s thread. The creating process is effectively a debugger of the new process.
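A sketch of that point: an ordinary parent exercising debugger-like powers over a child it creates, with no "known debugger" list consulted anywhere ("child.exe" is a placeholder):

#include <windows.h>

// Sketch: create a child suspended, then inspect and rewrite its
// thread context, exactly as a debugger would.
int main(void)
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    CONTEXT ctx;
    if (!CreateProcessA("child.exe", NULL, NULL, NULL, FALSE,
            CREATE_SUSPENDED, NULL, NULL, &si, &pi))
        return 1;
    ctx.ContextFlags = CONTEXT_FULL;
    GetThreadContext(pi.hThread, &ctx);   // read the child's registers
    // ...a real parent could also WriteProcessMemory here...
    SetThreadContext(pi.hThread, &ctx);
    ResumeThread(pi.hThread);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}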
Adding "known debuggers" is no better than WinSafer (group policy) and is trivial to break, as someone mentioned.
Melissa
Friday, May 19, 2006 4:10 AM by Myria
> The creating process actually allocates
> memory inside the new process and writes to
> it,
You’re right, I didn’t know that. Assuming too much, as usual, I figured that the kernel would initiate a stub in the new process and let the parent process continue with its own operations, and that the parent would interfere with the child only when it was designed to do so, using the handles returned.
Now I’m wondering: even if inspection of the IFEO key is done by the parent process, and even if the debugging role is handed off from the parent process to the designated debugger, could the code that reads IFEO still warn the user? The SHELLEXECUTEINFO structure has an hwnd which is designed for user notifications, but the SECURITY_ATTRIBUTES structure doesn’t.
> Adding "known debuggers" is no better than
> WinSafer (group policy) and is trivial to
> break
Sure it’s trivial to break deliberately, but John Robbins didn’t mention the possibility of adding Solitaire to a list of debuggers in WinSafer ^_^