This exploit uses the history API, which lets JavaScript change the URL in the browser's URL bar to another URL on the same origin without triggering a full page load. That same-origin restriction has always been in place, because it would obviously be a huge vulnerability to let any web page pretend to be a different website.
Changing window.location is different: it allows you to change the browser URL bar to any URL (including google.com, etc.), but it actually causes the browser to do a normal page load of the new URL, just like if the user had clicked a link to the new URL. Thus there is no spoofing vulnerability exposed by the window.location feature.
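The distinction can be sketched with a small model. This is not browser code; it is a pure stand-in (names are mine) for the check the browser performs when a page calls `history.pushState`:

```javascript
// Minimal model of why pushState cannot spoof another site: the browser
// rejects any target URL that resolves to a different origin. In a real
// browser, pushState throws a SecurityError for cross-origin targets;
// here we simply report whether the origins match.
function canPushState(currentOrigin, targetUrl) {
  const target = new URL(targetUrl, currentOrigin);
  return target.origin === currentOrigin;
}

console.log(canPushState("https://example.com", "/accounts/login")); // true
console.log(canPushState("https://example.com", "https://google.com/")); // false
```

So `pushState` can silently rewrite the path on the current site, but never swap in a different domain; `window.location` accepts any URL precisely because it performs a real navigation.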
> One of such defenses I uncovered during testing is using javascript to check if window.location contains the legitimate domain. These detections may be easy or hard to spot and much harder to remove, if additional code obfuscation is involved.
In the example, a nefarious page opens a new window and then uses its handle to that opened window (the window object returned by "window.open()") to manipulate it. No window.opener is required, and the two windows don't even need to be on the same site for this to work.
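A sketch of that flow, with a stub standing in for the browser so it runs outside a page (in a real page you would call `window.open` directly; the URLs are hypothetical):

```javascript
// `fakeBrowser` is a stand-in for the real window.open API.
const fakeBrowser = {
  open(url) {
    const win = { location: url };
    return win; // the returned handle is all the opener needs
  },
};

// The nefarious page opens a legitimate-looking window...
const popup = fakeBrowser.open("https://example.com/article");

// ...and later, once the user has stopped paying attention, navigates it.
// A cross-origin opener may not *read* the popup, but it may navigate it.
popup.location = "https://evil.example/fake-login";
console.log(popup.location); // "https://evil.example/fake-login"
```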
Google's page isn't just about abusing "window.opener", it's generally about abusing JavaScript-accessible connections between windows/tabs. Additionally, the site it's in isn't about browser bugs, but bugs in Google's websites. Google knows these issues are fixable, but they consider them to be flaws in browsers rather than in their websites.
That's why Google says it's still easy to exploit the other vectors for the attack. A website can protect itself by clearing "window.opener", but similar attacks are possible that a website can't prevent. e.g. A window can't prevent the window that opened it from later forcing it to navigate to a different URL.
I thought this was sensationalist the last time it came up, and I still do.
This is an attack which targets people who are carefully checking the link URL before clicking, but who then ignore the actual content of their URL bar. That has to be a pretty limited group, right? And this is far from the only way to spoof a link in JavaScript, so to really make this impossible would mean disabling swaths of functionality used widely across the web, i.e. not gonna happen.[0]
And it's counterproductive. Since the birth of the web we've been trying to drill into people's skulls not to trust anything except what the URL bar says after "https:". We need to avoid anything that would give users any other impression.
That said, there is a useful message here, not "this is a problem with JavaScript" but "this is another reason you must personally validate the domain name before entering any personal information."
[0] On a large scale, that is. Obviously some people here are comfortable with disabling swaths of JavaScript across the web.
> The newly opened tab can then change the window.opener.location to some phishing page.
This is true, and is a vulnerability I have been looking at for a while now, though I've not actually seen it exploited yet in the real world. For anyone interested, there are some pretty interesting exploits involving pages where an auth token is in the querystring and thus sent in the referer field by the browser. Also, consider what happens when you use an alert() in javascript to yank context back to the now attacker controlled tab...
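For completeness, the usual mitigation on the opening side looks roughly like this. The one-line `window` stub lets the sketch run outside a browser; in a real page, `window.open` is the native API, and for plain links `rel="noopener noreferrer"` achieves the same thing:

```javascript
// Stub of the browser API so this sketch is runnable standalone.
const window = {
  open: (url, target, features) => ({ opener: {}, location: url }),
};

// Opening with the "noopener" feature (and nulling .opener as a fallback
// for older browsers) severs the reverse channel tab-nabbing relies on.
function safeOpen(url) {
  const w = window.open(url, "_blank", "noopener");
  if (w) w.opener = null;
  return w;
}

const w = safeOpen("https://example.com/user-link"); // hypothetical URL
console.log(w.opener); // null
```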
> Or execute some JavaScript on the opener-page on your behalf…
Not true: this implies the "attacker" can run JavaScript in the context of the original page. They can only run JavaScript after redirecting the original page to one they control, so it's not as if they can run code on the facebook.com domain, which would be a _huge_ exploit.
Fortunately most browsers prevent you from pasting JavaScript URIs in the URL bar these days.
It's a little surprising Apple overlooked not one but two fairly obvious major holes: allowing JavaScript URIs, and the lack of same-origin policy. I wonder how many other applications are similarly vulnerable.
Since you're opening a new window from another domain and writing arbitrary HTML into it, I wonder if this vulnerability could be used to bypass cross-domain restrictions...
The behavior of the browser after appending a '#' to location.href differs depending on the current URL.
The state of the browser after making such a change therefore tells you something about what the URL was before the change.
Specifically, it tells you whether the change amounted to nothing more than appending a fragment ('#' plus some suffix) to the current URL.
Normally, this information is only available when the current URL satisfies the same-origin policy with respect to the executing code. In most browsers, that means you can only inspect URLs on the same domain as the executing code.
The article shows code that exploits this to try to guess your Facebook username.
This is interesting because 1) it allows brute-force attacks to recover potentially sensitive information, 2) it may lead to new discoveries that make the technique more efficient or expand what is possible, and 3) the behavior is working as designed.
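The probe can be modeled purely (function names and the username list are mine, for illustration): assigning `currentUrl + "#"` to a frame's location is a no-op navigation exactly when the frame is already at that URL; otherwise it triggers a real load the attacker can observe, e.g. via onload or timing.

```javascript
// Pure model of the oracle: does appending "#" to the guess leave the
// frame where it is? Fragment-only changes don't cause navigation.
function probeRevealsMatch(framesActualUrl, guessUrl) {
  const strip = (u) => u.split("#")[0];
  return strip(framesActualUrl) === strip(guessUrl);
}

// Brute-forcing a Facebook username then reduces to iterating guesses:
const guesses = ["alice", "bob", "carol"]; // illustrative list
const actual = "https://www.facebook.com/bob"; // what the frame really shows
const hit = guesses.find((g) =>
  probeRevealsMatch(actual, "https://www.facebook.com/" + g)
);
console.log(hit); // "bob"
```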
Somehow my customer had been tricked into changing:
"https://" : "http://"
to:
"https://customersite.com" : "http://www.customersite.com"
This makes the JavaScript load from http://www.customersite.comgoogle-analytics.com, which then redirects to dxwebhost.com/l.js for the actual script. It looks like that JavaScript file then uses a CSS vulnerability to look up the user's browsing history and asynchronously send it off to the third-party site.
So if you happen to be looking at your network traffic and notice your browser history is being sent off to a strange site, check out the Google Analytics tracking code.
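A reconstruction of the trick, assuming the site used a ga.js loader of roughly this shape, where the ternary supplies everything before "google-analytics.com" (`isHttps` stands in for the usual protocol check):

```javascript
// Before tampering, the ternary chose only a protocol/host prefix.
// After tampering, a full hostname is substituted, so the concatenation
// yields a domain the attacker actually controls.
const isHttps = false; // stand-in for ('https:' === document.location.protocol)
const src =
  (isHttps ? "https://customersite.com" : "http://www.customersite.com") +
  "google-analytics.com/ga.js";
console.log(src); // "http://www.customersite.comgoogle-analytics.com/ga.js"
// The registrable domain here is "comgoogle-analytics.com" -- a lookalike
// under the attacker's control, which serves the redirect to dxwebhost.com.
```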
Looks like this will be great for reflected XSS attacks. Even advanced users will not be able to notice there's something weird going on outside of the domain name part of the URL. Perfect!
Basically any page on a website with this vulnerability can be used to show a fake login page, and the user won't even notice they're not on /login but on some weird path plus ?_sort=somejavascript.
Not that it's hard to clean up the URL via the history API once you have script access to the page through XSS, but without this there's still a short period where the full URL is shown, which might provoke suspicion.
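That cleanup step is a one-liner. The stub makes the sketch runnable outside a browser; in a real page this is just a call to `history.replaceState` (the payload URL shown is hypothetical):

```javascript
// Stub of the browser's history object for a standalone demo.
const history = {
  url: "/search?_sort=%3Cscript%3E...%3C%2Fscript%3E", // payload-bearing URL
  replaceState(_state, _title, url) { this.url = url; },
};

// One call hides the injected query string from the URL bar:
history.replaceState(null, "", "/login");
console.log(history.url); // "/login"
```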
If a malicious hacker can insert a script into a trusted page, security is pretty much completely broken and you have other worries. The fact that you can make links on this page point to other malicious pages seems like a small problem, as most people won't even check the domain before clicking the link.
I would think some users are likely to check the address bar after clicking the link. But my dad probably wouldn't notice anything.
1. You're talking about a pop-up / new tab that somebody clicked while already on an attack site...
2. then loading google.com in this pop-up / new tab and, waiting a few seconds, and then changing the location of the pop-up / new tab
The vulnerability in question consists of landing on google.com, logging in while still on google.com, and then google.com sending you directly to an attack site after you finish logging in.
The user could come from an official-looking email (a known and largely unavoidable problem), or from a link that somebody pasted (e.g. "click here to view my spreadsheet on Google Sheets") into a comment on a different trusted site. The "trusted site" is important to note, because as you might have deduced, no scripts or malicious intent are required on the part of the other site; just a link. In your example, the other site would have to either A) be compromised itself, to facilitate the script, or B) be malicious itself.
Which of the workflows would you be more likely to fall for, your example or mine?
This seems to be explained in better detail by https://bo0om.ru/chrome-and-safari-uxss. Working via Google Translate, the claim seems to be that using MHTML and XSLT allows you to bypass the sandboxing rules and inject JavaScript that bypasses the same-origin policy.
This article is incorrect about this being a vector for executing JavaScript (the same origin policy prevents that), but the phishing potential from redirecting the opener page to a fake URL is definitely cause for concern.
I've known about it since I encountered window.history.back(). I figured if JS could find out my last page, then it could probably read my entire history. Since then my browser doesn't save any history.
Edit: I've never really thought of it as an exploit though.
I don't understand the testcase they provide. It opens a window at https://ssl.comodo.com/ and sends a message to it with `postMessage`. However, the whole point of postMessage is to provide cross-origin communication. Continuing, the message they send is:
Apparently https://ssl.comodo.com/ used to then proceed to execute that code. However, that is not a vulnerability in the browser, but in the website. Am I missing something? Was Chromodo mangling the `messageEvent.origin` property and thereby defeating same-origin checks in JavaScript? Seems far-fetched.
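For reference, the check the receiving page is supposed to make looks roughly like this (function names are illustrative): validate `event.origin` before acting on the payload, and never execute the payload as code.

```javascript
// Build a postMessage handler bound to one trusted origin.
function makeMessageHandler(trustedOrigin, act) {
  return function onMessage(event) {
    if (event.origin !== trustedOrigin) return; // drop untrusted senders
    act(event.data); // treat the payload as data, never eval it
  };
}

// Simulated delivery with plain objects standing in for MessageEvents:
let received = null;
const handler = makeMessageHandler("https://trusted.example", (d) => { received = d; });
handler({ origin: "https://evil.example", data: "alert(1)" }); // ignored
handler({ origin: "https://trusted.example", data: "hello" }); // accepted
console.log(received); // "hello"
```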
Good point. What it passes via URL parameter is URL-encoded JSON, which is parsed using JSON.parse(), so it shouldn't expose an attack vector.
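A quick illustration of why that matters: JSON.parse rejects anything that is not pure JSON, so code smuggled into the parameter throws instead of executing (unlike eval):

```javascript
// Code disguised as a parameter is not valid JSON, so parsing throws.
let threw = false;
try {
  JSON.parse("alert(document.cookie)");
} catch (e) {
  threw = true;
}
console.log(threw); // true

// Whereas a well-formed payload parses into inert data:
const data = JSON.parse('{"page": 2}');
console.log(data.page); // 2
```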
The alternative, creating about:blank windows and moving DOM elements into them, comes with a wealth of glitches and restrictions.