M-A's technology blog

Monday 19 February 2007

Rationale behind UAC

I don't have any "inside" knowledge. Everything written in this post comes straight out of my head; this is what I think the rationale behind UAC is. All terms and acronyms have been explained in previous posts. It is not an endorsement of Microsoft's design and implementation; it is simply an analysis of its behaviour and a deduction of the intended solutions.

Why MIC?
MIC permits a separation of access to objects, whether live or permanent. This comes in two parts.

First, permanent objects with a higher level are protected from lower-level processes, from a write perspective (the default policy is no-write-up). You may say that this is not secure. True, except that it is at least secure enough for the protected audio process. Coupled with the dual token, MIC actually helps security, because it is a layer of protection orthogonal to the token's SIDs. Think of it as a multiplier for your SID privileges. Why didn't they leverage group SIDs, simply defining a group SID for every level and putting deny ACEs for the higher levels in the DACL? In a way, it is actually implemented close to that: the SACL contains a label whose SID has a variable ending that is the IL. The only difference between this "variable" SID and defining specific IL group SIDs is that with group SIDs every higher level would have to be denied individually, instead of just setting one level. MIC's design permits much finer granularity than group SIDs; six levels are currently defined, but there is room for thousands of intermediate values. Deny group SIDs wouldn't scale that well...
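
To see what that "variable ending" looks like in practice, here is a minimal sketch (C++, Vista-era Win32 SDK) that reads the integrity level out of the current process token; the level is just the last sub-authority of the label SID. Error handling is kept to a minimum.

    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main()
    {
        HANDLE hToken;
        if (!OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &hToken))
            return 1;

        // Ask for the required size, then fetch the TOKEN_MANDATORY_LABEL structure.
        DWORD cb = 0;
        GetTokenInformation(hToken, TokenIntegrityLevel, NULL, 0, &cb);
        TOKEN_MANDATORY_LABEL* pLabel = (TOKEN_MANDATORY_LABEL*)malloc(cb);
        if (pLabel && GetTokenInformation(hToken, TokenIntegrityLevel, pLabel, cb, &cb))
        {
            // The IL is the last sub-authority of the label SID (S-1-16-x).
            PSID sid = pLabel->Label.Sid;
            DWORD level = *GetSidSubAuthority(sid, *GetSidSubAuthorityCount(sid) - 1);
            printf("Integrity level: 0x%04lX\n", level);  // 0x2000 = Medium, 0x3000 = High
        }
        free(pLabel);
        CloseHandle(hToken);
        return 0;
    }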

The second part is that permanent objects get an intrinsic IL too. This provides the "protection" that is needed for the registry and the file system on Vista. The dual-token scheme (explained later) is non-persistent; ILs are. You can view it as a dynamic remapping of your access rights to resources. On Windows, there is already some generic restriction done per user: with Terminal Services (fast user switching), \??\ is remapped per session, but there is no per-process granularity. MIC permits a more granular distinction even for a single token with the same group SIDs in it, within the same session.
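
And here is the other half of the picture, the label on a permanent object. A sketch, assuming a scratch file C:\test\low.txt (the path is just an example) that you are allowed to relabel: it stamps a Low, no-write-up integrity label on the file through an SDDL string.

    #include <windows.h>
    #include <sddl.h>
    #include <aclapi.h>

    int main()
    {
        // ML = mandatory label ACE, NW = no-write-up policy, LW = Low mandatory level.
        PSECURITY_DESCRIPTOR pSD = NULL;
        if (!ConvertStringSecurityDescriptorToSecurityDescriptorW(
                L"S:(ML;;NW;;;LW)", SDDL_REVISION_1, &pSD, NULL))
            return 1;

        PACL pSacl = NULL;
        BOOL present = FALSE, defaulted = FALSE;
        GetSecurityDescriptorSacl(pSD, &present, &pSacl, &defaulted);

        // LABEL_SECURITY_INFORMATION touches only the integrity label part of the SACL.
        DWORD err = SetNamedSecurityInfoW((LPWSTR)L"C:\\test\\low.txt", SE_FILE_OBJECT,
                                          LABEL_SECURITY_INFORMATION,
                                          NULL, NULL, NULL, pSacl);
        LocalFree(pSD);
        return (err == ERROR_SUCCESS) ? 0 : 1;
    }

After that, a Low IL process can still read the file but can no longer write to it, whatever the DACL says.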

Why doesn't MIC cross the computer boundary?
MIC can't cross the computer boundary because, as far as I can tell, SMB/NTLM does not carry it, and I think it is not part of the token. (In fact, I'm not sure; it may be part of the token, to be verified.) It is part of the active process. It's part of objects too, but since the token re-created on the server isn't the same as the client's token, the IL can't be determined.

MIC can't cross the computer boundary for a very simple reason. The IL on the token (or on the active process, whatever it is) is set for this process and can only be lowered. When you access a remote resource, your token is not duplicated; you are network-logged-on to the remote computer, so your token there is actually a different one. Since there is no IL concept in the logon procedure (the IL is normally inherited from, and possibly lowered by, the parent process), the network logon is always at High IL. Furthermore, network shares are served entirely in kernel space, and if Process Explorer is right that the IL is a process property and not a token property, there is no MIC concept there either. (I'm not 100% sure about this; it needs verification. I now think the IL is in fact a token attribute, even though the documentation makes it look like a process attribute.)

Why dual-token?
MIC is not a true security scheme. It is not sufficient on its own to really protect against processes running as the standard user, because it is an inheritance-only scheme. The dual token alone is not sufficient either, because a process running as a standard user still has access to other processes running as the same user (object ownership is the key here). The same applies to permanent objects. The dual token doesn't protect from shatter attacks either. So what the dual token brings is a personality that can be upgraded, which is not possible with MIC: an IL can only be lowered, never increased. With the dual token, your level can be increased by gaining access to your original token. That original token is the one used for the DACL verification once consent.exe has been executed.
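
A sketch of that one-way street, assuming notepad.exe as a harmless victim (error checks omitted for brevity): you can copy your token, push the IL on the copy down to Low and spawn a child with it, but there is no equivalent call to push it back up; going up is exactly what the consent.exe dance and the linked original token are for.

    #include <windows.h>
    #include <sddl.h>

    int main()
    {
        HANDLE hToken = NULL, hLowToken = NULL;
        OpenProcessToken(GetCurrentProcess(),
                         TOKEN_DUPLICATE | TOKEN_QUERY | TOKEN_ADJUST_DEFAULT |
                         TOKEN_ASSIGN_PRIMARY, &hToken);
        DuplicateTokenEx(hToken, 0, NULL, SecurityImpersonation, TokenPrimary, &hLowToken);

        // S-1-16-4096 is the Low mandatory level SID.
        PSID pLowSid = NULL;
        ConvertStringSidToSidW(L"S-1-16-4096", &pLowSid);

        // Lowering the IL on our own copy of the token is allowed...
        TOKEN_MANDATORY_LABEL tml = {0};
        tml.Label.Attributes = SE_GROUP_INTEGRITY;
        tml.Label.Sid = pLowSid;
        SetTokenInformation(hLowToken, TokenIntegrityLevel, &tml,
                            sizeof(tml) + GetLengthSid(pLowSid));

        // ...and the child then runs at Low IL; it can no longer write to Medium IL objects.
        STARTUPINFOW si = { sizeof(si) };
        PROCESS_INFORMATION pi = {0};
        CreateProcessAsUserW(hLowToken, L"C:\\Windows\\System32\\notepad.exe",
                             NULL, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi);

        LocalFree(pLowSid);
        CloseHandle(hToken);
        CloseHandle(hLowToken);
        return 0;
    }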

The original token is really the normal token you'd get when UAC is off. The secondary token is a token with many privileges removed and with a deny-only Administrators SID added. What this means is that a resource which explicitly denies access to administrators stays denied even with your standard-user-equivalent token. This was done to make sure administrators continue to be correctly denied the resources they were already being denied (users' secrecy; I personally deny SYSTEM access to some files to make antivirus software stop bothering me, when I'm stuck with one, which doesn't happen often).
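
You can actually see the pair from code. A small sketch: ask the token whether it is the limited flavour and, if so, grab a handle to the linked original token.

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        HANDLE hToken;
        OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &hToken);

        TOKEN_ELEVATION_TYPE type;
        DWORD cb = 0;
        GetTokenInformation(hToken, TokenElevationType, &type, sizeof(type), &cb);
        // TokenElevationTypeDefault: no split token (plain standard user, or UAC off).
        // TokenElevationTypeLimited: this is the filtered token; a full one is linked to it.
        // TokenElevationTypeFull:    this is the original administrator token.
        printf("Elevation type: %d\n", (int)type);

        if (type == TokenElevationTypeLimited)
        {
            TOKEN_LINKED_TOKEN linked = {0};
            if (GetTokenInformation(hToken, TokenLinkedToken, &linked, sizeof(linked), &cb))
            {
                printf("Linked (original) token handle: %p\n", linked.LinkedToken);
                CloseHandle(linked.LinkedToken);
            }
        }
        CloseHandle(hToken);
        return 0;
    }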

I talked about remapping in MIC. Well, with the dual token, the global \??\ root is actually different for each token, as if they were in different sessions. What it means in practice is that if you map a remote drive with the standard version of your token, you won't see it in the administrative version of your token. This completes the security of the namespace. It will also bother people.

Why does dual-token cross the computer boundary?
Because of two things. Remember that when you access a network resource, you are actually network-logged-on. Since a logon is done, UAC still kicks in, on the server rather than on the client. So if UAC is disabled on the server, you will get your normal administrator token. Network logon could have been exempted from this scheme, but it wasn't, for a good reason. An old trick is the file:////localhost/c$ access (note: I hate Blogger's automatic reformatting). If a user could chain through a local share to get administrative access to the file system back, there would be no point to the dual token. This is another place where MIC alone couldn't have handled it.

Why UAC?
UAC is mainly the UI that makes MIC and the dual token work. It's bothersome, and the whole thing is mainly done to make ISVs follow the "recommended" behaviour. In fact, forcing ISVs' hand is a good thing; bothering users is not.

Why virtualization?
It's a solution to the previous solutions' problems. At the same time, Microsoft wanted to make legacy apps run better on Vista. Without virtualization, users would have been hammered with more popups than you can imagine. Since virtualization may activate while you are on the standard user token, admin-only resources look like they are accessible and UAC doesn't kick in. This behaviour will cause trouble for many people, but I think Microsoft felt obliged to do it because otherwise it would have been unmanageable for users running with the standard user token. Since Microsoft's goal is to force ISVs to make their applications behave well in a standard user environment, UAC was a must for them and virtualization was an unavoidable patch.
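
If you want to know whether that patch is active for a given process, the token will tell you. A minimal sketch:

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        HANDLE hToken;
        OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &hToken);

        // TokenVirtualizationEnabled: is file/registry virtualization active right now?
        DWORD enabled = 0, cb = 0;
        GetTokenInformation(hToken, TokenVirtualizationEnabled,
                            &enabled, sizeof(enabled), &cb);
        printf("Virtualization is %s\n", enabled ? "enabled" : "disabled");

        CloseHandle(hToken);
        return 0;
    }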

Note that applications with a correct manifest, as well as x64 executables, won't get virtualized. This is to stop virtualization somewhere. So what if virtualization is a problem for your app, or you want to stop virtualization for a specific third-party application? Simply place a correct manifest beside the executable, if the executable doesn't already have a manifest embedded in it. Otherwise it simply won't work, i.e. you have to update the embedded manifest with some resource-editing tool like VS2005. It is the trustInfo section that needs to be added; yes, the section that used to BSOD Windows XP. :)
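
For reference, here is roughly what such an external manifest could look like, assuming your target is someapp.exe and the file is saved as someapp.exe.manifest next to it (the file name and the asInvoker choice are just examples):

    <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
      <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
        <security>
          <requestedPrivileges>
            <!-- asInvoker: run with the caller's token; other values are
                 highestAvailable and requireAdministrator. -->
            <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
          </requestedPrivileges>
        </security>
      </trustInfo>
    </assembly>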

I'd like to thank an anonymous friend for pointing out some errors before posting, but since he's not confident in what I'm saying, he'll remain anonymous. :)
