XSS Attacks: The Next Wave (snyk.io)
88 points by tkadlec | 2017-06-08 | 47 comments




The shift toward the client doing the heavy lifting via libraries such as React is driving an increase in vulnerabilities.

Developers getting into React don't always realize that all of that code executes on the client, and that any input validation and authentication they come up with also has to exist on the server that stores the data.


Developers who don't understand such basic elements of how their application actually works have no chance of creating a secure web app.

This has been the case since the advent of client-side JavaScript. Validation can/should occur on the client side, but it MUST occur on the server side. These aren't new issues caused by new JavaScript libraries - they are problems that might be new to some developers.
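A minimal sketch of the server-side half (Express; the route and field names are hypothetical), re-checking the value regardless of what the client claims to have validated:

    // Hypothetical Express endpoint: never assume the client already validated.
    const express = require('express');
    const app = express();
    app.use(express.json());

    app.post('/comments', (req, res) => {
      const text = typeof req.body.text === 'string' ? req.body.text.trim() : '';
      if (text.length === 0 || text.length > 2000) {
        return res.status(400).json({ error: 'Invalid comment' });
      }
      // ...store the validated value; client-side checks are only a UX nicety.
      res.status(201).end();
    });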

Completely agree! The post actually alludes to that a bit towards the end.

> Single Page Apps increase the amount of client side logic and user input processing. This makes them more likely to be vulnerable to DOM-based XSS, which, as previously mentioned, is very difficult for website owners to detect.

The more significant the work we do on the client, the more interesting it becomes as an attack vector.


Snyk's done some analysis on that aspect specifically too: https://snyk.io/blog/77-percent-of-sites-use-vulnerable-js-l...

React by default has pretty good XSS protection. That being said, "don't trust the client" has been something developers have struggled with ever since we started writing client/server software.

Not just pretty good: to render markup unescaped you have to reach for `dangerouslySetInnerHTML`.
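A minimal JSX illustration (component names are hypothetical) of the default behaviour versus the escape hatch:

    // Interpolated values are escaped automatically, so any markup in `text`
    // renders as inert characters rather than HTML.
    function Comment({ text }) {
      return <p>{text}</p>;
    }

    // Rendering raw HTML requires this deliberately awkward opt-in.
    function RawComment({ html }) {
      return <p dangerouslySetInnerHTML={{ __html: html }} />;
    }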

I think that's a common misconception; heavyweight frameworks usually handle the common problems pretty well. Think of Rails, which makes input validation easy, makes hand-written SQL (and with it SQL injection) almost obsolete, and handles CSRF protection mostly transparently.


While those kinds of "junior developer confused by client vs server" vulnerabilities may be more common, the XSS vulnerabilities described in the article are likely being reduced by libraries like React. You really have to go through some contortions (including manipulating a property called dangerouslySetInnerHTML) to create the kind of insidious XSS vulnerabilities that were commonplace in server-rendered code a few years ago.

It used to be very easy for even experienced developers to accidentally forget to escape a variable somewhere. It took framework developers a while to realize that "escape" should be the default, and now we're at "escape by default and make the developer sign forms in triplicate to override". Which is healthy, I think.


It seems to me to be the exact opposite. If all of the data going from server to client arrives as JSON produced by a serializer (so you're not building the JSON by hand), it should be correctly escaped, and there's little room for traditional XSS: the only remaining vector is manual DOM building by concatenating strings, which you generally don't do in React. CSRF attacks I would believe, but not XSS with React.
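For reference, the "remaining vector" in plain DOM code looks roughly like this (the `user` and `list` variables are hypothetical):

    // Vulnerable: attacker-controlled text is concatenated into markup and
    // parsed as HTML.
    const item = '<li>' + user.displayName + '</li>';
    list.insertAdjacentHTML('beforeend', item);

    // Safe: the value is treated as text, which is effectively what JSX
    // interpolation does for you.
    const li = document.createElement('li');
    li.textContent = user.displayName;
    list.appendChild(li);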

IIRC the GitHub Open Source Survey noted that the people surveyed were more likely to trust OSS software in terms of security because of the transparency with vulnerabilities and the community surrounding it.

This article mentions increased use of OSS libs as a rising source of XSS. I'm really not sure what's worse - OSS that can be fixed and audited easily or proprietary software that's closed and lacking visibility.


OSS is no silver bullet - you still have to do your due diligence to have a secure system. OSS just gives you an option to "fix it yourself".

Just recently I was reading a library and stumbled upon this interesting crypto tidbit [0] ("XXX get some random bytes instead"). Maybe a paid engineer would've designed it better but history is full of counter-examples (see CVE-2017-5689 [1]).

[0]: https://github.com/nitram509/macaroons.js/blob/master/src/ma...

[1]: https://www.cve.mitre.org/cgi-bin/cvename.cgi?name=2017-5689


> OSS just gives you an option to "fix it yourself".

I would also say that generally speaking you also get more eyes on your source code so you increase the likelihood that someone will find the flaw more quickly (although you could also say it's easier for bad actors to locate flaws to exploit too).


Well, we are comparing apples and oranges here, because this small open-source repo almost certainly has fewer people looking at it than Intel has engineers working on ME.

Who said this is a small open-source repo? Node.js has one of the most active OSS communities on the web, with many contributors and developers looking at the code, consuming it, working on security, and fixing bug reports daily.

Also, a single company imposes limitations - you've got blinders on, and your project isn't open for those with a different perspective to come in and take a look and notice something. I honestly think that fresh, open, and global perspective is truly key to the success of OSS.


Large communities of open source developers are no panacea; look at Shellshock or the various OpenSSL vulnerabilities. Those bugs stayed present for years in heavily used software...

A large community of devs focused on security would indeed be good for a project's security, but security is not always their number one priority.


Yes, my point is that we're just throwing anecdotes around here, picking examples that suit the argument. It's not proven that one model is better than the other; otherwise we'd all just use the best one and be done with it.

> your project isn't open for those with a different perspective to come in and take a look and notice something.

Yes, but consider that a malicious party can also do this kind of analysis. For the record, I'm not advocating for closed software - on the contrary - but merely pointing out that the matter is more complex than it looks on the surface.


I don't think the "many eyes make all bugs shallow" style of approach is one people should be relying on for their security. Ever since Shellshock (which was present in a very popular open source program for 25 years, 1989 -> 2014) there has been more effort applied to open source libs (e.g. the Internet Bug Bounty programme), but that still covers a vanishingly small percentage of libraries.

What I'd say is that given an equal amount of security effort an open source lib is more likely to have higher security, however by far and away the most important factor here is the amount of security effort employed and that is not generally correlated with the software being open source.


What a closed-source development team provides over OSS is some control over the quality and training of the developers allowed to commit to the codebase (e.g. the company can mandate that all developers have been trained in how to avoid common XSS issues), control over the processes to be followed when committing code, and control over the security tests to be carried out.

Of course as a consumer of software that doesn't help too much 'cause you don't know which companies do a good job and which ones just say they do a good job...

Open source is better in that you can audit it easily. However, let's be honest: how many users of open source software are actually able to audit the libraries they use...

So neither option is particularly great at the moment (IMO)


"Single Page Apps increase the amount of client side logic and user input processing. This makes them more likely to be vulnerable to DOM-based XSS, which, as previously mentioned, is very difficult for website owners to detect."

Hmmm...assuming your back end has all the requisite validation and other security in place, how can a SPA cause an XSS? Are there any purely client side attack vectors (XSS or otherwise) that need to be considered if your back end is fully protected?


Reflected XSS attacks

Imagine a link like this:

https://example.com/login?vulnerable-param=evilcredentialste...

If I can convince a user to click that, and then log in, I can steal their username, password or anything else. Basically anything they do in that window after clicking that link can be compromised.
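A hedged sketch of the client-side pattern that makes such a link work (the parameter, element, and function names are hypothetical):

    // The page echoes a query parameter into the DOM as HTML.
    const params = new URLSearchParams(location.search);
    const msg = params.get('error') || '';

    // Vulnerable sink: a crafted link such as
    //   /login?error=<img src=x onerror=stealCredentials()>
    // would execute the attacker's script in the victim's session.
    document.querySelector('#login-error').innerHTML = msg;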


Yes, but that gets passed to the server.

It may get logged by the server but if it's designed to be parsed client-side, there may not be any server-side code examining or sanitizing that value before the SPA gets to it.

What about https://example.com/login#vulnerable-fragment

Yes, as I commented elsewhere in this thread, that would be fine.

Wouldn't evilCredentialStealingJavascript() have to be stored on the server in the first place...?

It needs to be echoed into somewhere on the page. Not necessarily stored on the server.

No.

DOM-based XSS is when JavaScript running on the client takes data from a "source" (URL parameter, DOM content, cookie, LocalStorage, etc.), manipulates it, and then passes it to a "sink" without properly escaping it. Examples of "sources" and "sinks"[1].

I've reported DOM-based XSS on a website that parses user-generated comments for URLs then converts the comment by adding hyperlink markup to the URL. It was done insecurely, so I managed to use combinations of spaces and other HTML attribute delimiters to inject an "onMouseOver" attribute and collect a bounty (about $2000 IIRC). In my case, the payout was large because the data was stored on the server (therefore it was persistent XSS), but with URL fragments, it's possible for the server to never see the content that is passed to the "source".

[1] https://docs.google.com/spreadsheets/d/1Mnuqkbs9L-s3QpQtUrOk...
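A hypothetical sketch of the insecure linkification pattern described above (the regex and markup are illustrative, not the actual site's code):

    // Wrap detected URLs in an anchor tag without encoding them for an
    // HTML-attribute context.
    function linkify(comment) {
      return comment.replace(/https?:\/\/[^ <]+/g,
        url => '<a href="' + url + '">' + url + '</a>');
    }

    // Depending on how the matching handles quotes and whitespace-like
    // delimiters (tabs, newlines), a crafted "URL" can break out of the href
    // and smuggle in an attribute such as onmouseover.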


Yes, a very common DOM-based XSS vector is against location.hash, which is never passed to the server. Versions of Adobe RoboHelp keep getting pwned by this. The article is kind of wrong that attacks against the URL won't be detected by the server, since a decent WAF will detect this.

>a decent WAF will detect this.

Nope, nope, and nope. In a DOM-based attack via a GET request, an attacker can place the payload after a hash (the pound sign, i.e. the fragment/anchor reference): http://foobar.whatever/foo?bar=tender#<XSS VECTOR>

No browser sends the # or anything after it to the server, so the only way to detect this attack is to have some active script in the DOM send document.location back to the server. Of course, if the attacker knows about that and can get to the DOM before that script runs, well, it's over.
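A minimal sketch of that fragment case (the element id is hypothetical): everything after the # stays in the browser, so neither server logs nor a WAF ever see the payload.

    // Reads the fragment and writes it into the page as HTML - a classic
    // DOM-based XSS sink that the server never observes.
    const section = decodeURIComponent(location.hash.slice(1));
    document.querySelector('#toc').innerHTML = 'Viewing: ' + section;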


That is what I said in the first sentence of the post you are replying to. If it is not clear from that, I am referring to the non-fragment part of the URL.

I do not believe any entity in the world has statistics strong enough to make predictions like the expected percentage change in XSS year over year. Everyone claiming to have those statistics has thoroughly confounded their analyses by relying heavily on applications that have been made available to specific tools and companies. But the modal web application deployed on the Internet is the one that has had no security testing whatsoever.

Be very suspicious of articles like these.


> Lastly, “DOM-based XSS” attacks occur purely in the browser when client-side JavaScript echoes back a portion of the URL onto the page.

This Google Doc has tracked almost all "sinks" and "sources" for DOM-based XSS[1]. They aren't by any means limited to the URL (usually accessed by the `document.location` object).

[1] https://docs.google.com/spreadsheets/d/1Mnuqkbs9L-s3QpQtUrOk...


You're right, I tried to keep this section as brief as I could. DOM-based XSS can happen from any source, but the hardest-to-detect (and very common) variant uses the fragment (the part after the #) to inject the payload, which is never sent to the server.

> AngularJS version 1, used at that point by approx. 30% of all websites

:) Something got quoted wrong there.


I think they misread a chart from their source, which shows 0.3% market share for AngularJS: https://w3techs.com/technologies/history_overview/javascript...

awkward typo there! Fixed now.

Why can't we tackle XSS in the browser, by preventing javascript from executing in the <body> (or anywhere other than <head> for that matter)? There is an old memory protection technique of designating the stack & heap (data portions of memory) as non-executable. It seems like a similar idea should apply to the web, where the DOM is effectively a "data" portion, and separate out all executable javascript into a separate section. I know this breaks things like `onclick=` attributes, but can't those be replaced with event listeners? Of course it would be opt-in by setting an attribute somewhere in the DOM (e.g. <body non-executable="true">)

This seems like a fairly obvious idea to me, but I'm not a frontend developer, so I'm looking for someone to tell me why this doesn't already exist :)


This already exists. It's called Content Security Policy. https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP
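A minimal sketch of the idea as Express middleware (the header value is illustrative, not a recommendation): with no 'unsafe-inline', inline script blocks and onclick= handlers are blocked, and only scripts from the allow-listed origins run.

    const express = require('express');
    const app = express();

    app.use((req, res, next) => {
      res.setHeader(
        'Content-Security-Policy',
        "default-src 'self'; script-src 'self' https://cdn.example.com; object-src 'none'"
      );
      next();
    });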

And it is an almighty pain in the arse to set up

Unless you can influence an organisation at a pretty high level it is often impossible to write a useful CSP.

To take a really degenerate example, media sites tend to have so many third-party JS integrations (maps, multiple analytics providers, ad systems etc etc) that you can't write a useful, security-improving CSP :/

Which means talking to marketing about their preferred analytics tool, asking the business if they really want these ad networks etc etc.


I'm interested in the topic, but found the article quite disappointing. It doesn't really go into the technical details of why we're seeing a new wave of XSS vulns.

What I learned only recently: with many modern JavaScript frameworks, many of the assumptions you may have had about XSS in the past are obsolete. The strategies that worked in the past - properly escaping untrusted input - don't necessarily work any more if you're using something like AngularJS.
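A hedged sketch of why (payloads vary by AngularJS 1.x version; this is illustrative only): HTML-escaping doesn't touch the {{ }} characters, so escaped user content dropped inside an ng-app page can still be interpolated as an Angular expression (client-side template injection).

    // Classic HTML-escaping leaves {{ and }} untouched...
    const escapeHtml = s => s.replace(/[&<>"']/g, c => ({
      '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;'
    }[c]));

    const userComment = "{{constructor.constructor('alert(1)')()}}";
    const rendered = '<div class="comment">' + escapeHtml(userComment) + '</div>';

    // ...so if `rendered` ends up inside an element AngularJS compiles, the
    // framework may evaluate the expression even though no raw HTML survived.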


This article was very much about the data we've collected and our analysis of it, as opposed to our opinions as to why - we had to keep it to a reasonable length! So we kept that section short in the end. I do plan follow-up posts that share my theories as to why it's happening, and I think a best-practices guide that discusses template-related XSS is a good idea. In the meantime, you can check out this related post: https://snyk.io/blog/type-manipulation/
