We Give a F*** How the Site Loads (dt.deviantart.com)
127 points by namenotrequired | 2013-08-13 | 43 comments




All the more reason to compress your CSS.

It would not have helped in this case. She mentions that the change required them to rename a file that was referenced in the stylesheet.

Probably won't save you if you set a class name of 'ass'.

Clearly the correct answer is encryption. /s

> That's right. The almighty F-word was breaking how some stylesheets were loading for deviants who were accessing the site from computers with overly sensitive system-wide profanity filters installed. These users' browsers likely stopped parsing the stylesheet entirely upon reaching the word in the stylesheet, leading to a fairly ugly and/or broken page.

Wait what? What filters are checking CSS files, and why?


They probably check everything. Makes sense, since naughty words could be hidden in image URLs inside CSS or in JavaScript.

Imagine what happens when someone names their content folder 'assets' and it gets rewritten to 'a<star><star>ets'.

(edit: why does double * get escaped away on here.)


Markdown parser turning your emphasis into <strong>

>(edit: why does double * get escaped away on here.)

* text * italicizes, like so: text.

(Putting spaces around the * avoids this behaviour).



This was years ago, but I remember I was unable to test one of our vendor API calls locally because it had to hit something like api.association-of-commercial-something-or-others.com... turns out the corporate filter didn't like the word "ass".


My favourite stupid auto-sanitisation is "consbreastutional"

Also known as the clbuttic mistake.
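
Both artifacts come from filters that replace substrings with no regard for word boundaries. A minimal sketch in JavaScript of how that plays out (the word list here is purely illustrative, not any real filter's configuration):

    // Naive substring replacement with no word-boundary check -- exactly
    // the behaviour that mangles innocent words. Word list is illustrative.
    const replacements = [
      ["ass", "butt"],
      ["tit", "breast"],
    ];

    function naiveFilter(text) {
      return replacements.reduce(
        (out, [bad, clean]) => out.split(bad).join(clean),
        text
      );
    }

    console.log(naiveFilter("classic"));        // "clbuttic"
    console.log(naiveFilter("constitutional")); // "consbreastutional"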

Probably an overzealous filter that looks at everything going over HTTP and stops passing the connection whenever it encounters a naughty word.

Unsurprisingly, these types of filters don't have much understanding of what will actually be displayed to the user versus not, but then again they probably don't need to in order to satisfy their target audience.
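
To make that failure mode concrete, here is a rough sketch of such an inline filter as a TCP relay (assuming Node.js; the upstream host, port, and word list are placeholders, and a real product would also buffer across chunk boundaries rather than inspecting chunks independently):

    const net = require("net");

    const BLOCKED = Buffer.from("fuck");   // the bytes the filter objects to
    const UPSTREAM_HOST = "example.com";   // hypothetical origin being fronted
    const UPSTREAM_PORT = 80;

    net.createServer((client) => {
      const upstream = net.connect(UPSTREAM_PORT, UPSTREAM_HOST);

      // Relay the request upstream untouched, but inspect every chunk of the
      // response. The filter has no idea whether a chunk is headers, CSS, or
      // an image -- if the blocked word shows up, both sockets get dropped
      // mid-response, which is roughly the "broken stylesheet" symptom.
      client.pipe(upstream);
      upstream.on("data", (chunk) => {
        if (chunk.includes(BLOCKED)) {
          client.destroy();
          upstream.destroy();
          return;
        }
        client.write(chunk);
      });

      upstream.on("error", () => client.destroy());
      client.on("error", () => upstream.destroy());
    }).listen(8080);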


There are several ways the user might be exposed to the contents of the CSS file, but the most straightforward is probably the content property - http://www.w3schools.com/cssref/pr_gen_content.asp .

On a related note, I always wondered if CSS selectors (classes and IDs) had any SEO weight, considering they are present in the HTML code. Just in case, for some time I used to carefully choose semantic names for my selectors, which, in the end, is not a bad practice for code readability anyway.

In deviantART's case, the F-word was inside a CSS comment, thus supposedly never displayed to the user. So it's weird to block a file for that case.


    The irony here is that we didn't have to do anything to fix this bug (well, we did have to rename an image file that had a vulgar name!).

I don't think that it's the actual CSS, but rather the linked content which triggered the filters.

.fword:before { content: "fuck"; }

<span class="fword"></span>


Anti-filter method:

    .fword:before { content: "fu"; }
    .fword:after { content: "ck"; }

    <span class="fword"></span>

I learned a LONG time ago to never write profanities in any part of my work. My boss told me a story where some "strong" language showed up in a projected demo of one of his projects, and it stuck with me.

I have the same policy. Also test data should always be professional.

You don't want this happening to you: http://www.snopes.com/business/consumer/bastard.asp



I coded in an alert with "WTF?!" as the text in an unlikely logic path once. I like having something unique so I know where it came from. It came up in a demo in front of the (govt) client. Luckily they had a chuckle at my expense, but it could have been so much worse. I've gone back to either contextual errors or just "foo" for the lazy ones.

I built out some web servers running Apache for a dev team once. I just set them up with an index.html that said "FOO", to prove that you could point a browser at them and get "FOO" back.

A week later they were configured behind a load balancer VIP, so they wound up on a public IP address. That was fine, except now you could reach them from the real world. The mistake happened because that IP address had previously been part of a pool for the public website, and it was still configured on the networking gear doing dynamic DNS for the front-end website. As soon as the IP came back up, the DNS appliance decided that cluster was 'back in service' and started handing out its IP address in response to www.<a large internet retailer>.com.

The result was a sev1 ticket titled "customers reporting 'foo' on the website". Imagine how much worse it could have been if I hadn't kept it professional and used some more controversial text there...

The video in this post is worth the price of admission alone. I love how ASP is the least "badass" of all...

This is funny. I've had to 'strongly encourage' our devs and designers to keep it clean/professional when designing mockups, test apps, etc. It's so easy to send a file around internally, give it a quick look over and shoot it off to the client without realising there is some dodgy paragraph copy or an awkward photo staring back at you. The humour isn't worth it!

Although I still find myself typing console.log("WHY THE FFFUUU") on annoying JS bugs...


I implemented a policy to only use the most bland test data while testing my apps, and it's saved me a few times when, for whatever reason, there was an accidental leakage of test data through to the client (whether it sneaks into a slideshow, or a test email is accidentally sent to a production email address, or...). I've been around development long enough to know that at some point, this type of leakage WILL occur, and it will be embarrassing when a customer sees a dummy entry for "Dickhead Buttface" or similar.

While it's excruciating and boring to use plain test data, you'll be glad you did it when you have an accidental leak between environments.


First thought: HTTPS sitewide would probably solve this problem. I'm pretty sure most filtering happens at the network level, not within the browser. And if the browser on a public computer has been augmented to read your HTTPS traffic, don't use it.

Another reason to use HTTPS (though I guess IT can install a cert for transparent MITM SSL proxying). Imagine a user whose TCP connection shuts down as soon as the 4 bytes 'fuck' pass through it (say, only on port 80). This should also happen randomly for high-entropy binary data (e.g. compressed text or images) - about once every 4 billion bytes. I guess it's not that likely except for extremely large files.
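
The arithmetic behind that estimate, as a quick sketch (the 100 MB download size below is just an illustrative number):

    // For uniformly random bytes, a fixed 4-byte sequence starts at any given
    // offset with probability 256^-4, i.e. you expect to see it roughly once
    // every 256^4 bytes (overlaps change this only negligibly).
    const p = 1 / 256 ** 4;
    console.log(256 ** 4);           // 4294967296, ~4.3 billion
    // Chance of hitting it at least once in a 100 MB compressed download:
    const n = 100 * 1024 * 1024;
    console.log(1 - (1 - p) ** n);   // ~0.024, i.e. about a 2.4% chance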

Not can. Will.

And home based filters do the same, or work at the browser level.

So https doesn't help at all.


I always wondered how many IT departments and filters actually bother to do this...

Search for SSL on this page: http://www.dsd.gov.au/infosec/top-mitigations/top35mitigatio...

I imagine DOD requirements are the same.


On every serious project I've ever worked on, all comments get removed from all exposed front-end code (CSS, JavaScript, HTML) via LESS, minification, etc.

I mean, there's no reason to expose ugly stuff like that... developers WILL swear in comments, so just make sure it doesn't get out the front door...
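
A minimal sketch of that step, in case it isn't obvious what the build pipeline is doing (the regex below is the naive version; real minifiers also handle edge cases such as comment-like text inside strings):

    // Strip /* ... */ comments from CSS before it ships. In a real project
    // this happens inside the LESS/minification pipeline; this naive regex
    // is only meant to show the idea.
    function stripCssComments(css) {
      return css.replace(/\/\*[\s\S]*?\*\//g, "");
    }

    const css = `
    /* TODO: this is a terrible hack, clean it up */
    .header { color: #333; }
    `;
    console.log(stripCssComments(css)); // comment is gone; only the rule remains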


Here at Grooveshark, we ran into a bug once where users in some countries couldn't perform searches. It took a while to figure out, but eventually we discovered that it was a problem with the name of the API call the site makes to get search results.

Originally the call was named 'getSearchResults'. Over time, we made some non-backwards-compatible changes to the search call, and following our usual naming standards, named the new call 'getSearchResultsEx'.

Additionally, for ease of debugging, we add the method name as a GET param when we hit the API endpoint. It makes it much easier to find the API request you need to examine when scrolling through Firebug's network tab, since they all hit the same PHP script.

It turns out that in some countries, you can't visit a url that contains the word 'sex'...and the filter is case insensitive.
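
The check that presumably trips here is nothing fancier than a case-insensitive substring match over the whole URL, query string included. A sketch (the URL, function name, and word list are hypothetical, not Grooveshark's actual endpoint or any real filter's list):

    // Case-insensitive substring match over the full URL -- so the method
    // name in the query string is enough to trigger it.
    function urlLooksNaughty(url) {
      return ["sex"].some((word) => url.toLowerCase().includes(word));
    }

    console.log(urlLooksNaughty(
      "http://example.com/api.php?method=getSearchResultsEx"
    )); // true -- "getSearchResultsEx" lower-cases to "...resultsex"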


The good ole' days of expertsexchange!

Which fucking countries?

It was a while ago. I want to say the main one was Egypt, but one of our community reps would probably remember better than I.

Can you ask? I'm curious.

> But we can promise if you're browsing deviantART at the public library, our swearing won't stop you from using the site.

If your public library censors profane words, you should fucking riot.


In May of this year, my development team went on a 1-month work-cation in Panama.

The internet was fast enough, but we quickly discovered that we couldn't access google analytics (google.com/ANALytics) without a VPN or socks proxy back in the USA.

I suspect text-based web content filters tend to have fairly high false-positive rates in general.


One of our clients once complained that one of their web fonts wasn't loading.

The file in question was blocked by their stupid proxy because its name was "brandSansExtended", which happens to contain the substring "sex".

