This implies to me that we, as web developers, make some assumptions about trust with the user agent. We assume any user agent with the correct authentication tokens is trustworthy. However, this is obviously not the case. My thought is that when we issue tokens we should look at other factors which isolate one computer from another. The obvious one would be location.
For example, when an HTTP request comes in, we need to know where it came from in order to send a reply. Since we know the IP address of the originating computer (or at least its primary upstream NAT address), why not lock all session-based authentication to it? In the example above this would have caused the authentication tokens to fail from the compromised browser, because the token was being used from an invalid location. This assumes that your authentication tokens are protected against tampering, but I would really hope they are anyway.
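A minimal sketch of the idea, assuming tokens are HMAC-signed server-side (the key name and token layout here are illustrative, not any particular framework's API): the client IP is baked into the signed payload when the token is issued, so a token replayed from a different address fails validation.

```python
import hmac
import hashlib

# Hypothetical server-side secret; in a real deployment this would be
# securely stored and rotated, not hard-coded.
SECRET_KEY = b"server-side-secret"

def issue_token(user_id: str, client_ip: str) -> str:
    """Issue a session token bound to the client's IP address."""
    payload = f"{user_id}|{client_ip}"
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def validate_token(token: str, request_ip: str) -> bool:
    """Reject the token if the signature fails or the IP has changed."""
    try:
        user_id, bound_ip, sig = token.rsplit("|", 2)
    except ValueError:
        return False
    payload = f"{user_id}|{bound_ip}"
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    # compare_digest guards against timing attacks on the signature check
    return hmac.compare_digest(sig, expected) and bound_ip == request_ip
```

A stolen token presented from a different IP would then be rejected even though its signature is still valid, forcing re-authentication.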
It wouldn’t be 100% foolproof (what is?), but it would reduce the attack vectors considerably. To compromise a user, the attacker would have to be using the cookies from the same computer (if they can do that, they could probably steal the cookies straight off the hard drive), from behind the same NAT (not much to do about this case), or have a compromised router someplace. If they did have a router upstream from either Google or the victim, they could capture the cookies en route in a regular request anyway. One simple change would eliminate a lot of issues.
Would this cause any problems for users? I don’t think so. Changes to an IP address tend to correspond to physical changes anyway: a different computer, a different wireless LAN connection, and so on. In this context it actually makes sense to a user to re-authenticate when they move. That physical aspect is important in helping users understand why they are being asked to authenticate a new access token.