This obviously should not be added to the list of trusted CAs in any browser, and these certs should not be used on the public web. Unfortunately, neither should many certificate authorities be trusted.
The spec talks about both securing the CDM with sandboxing and preventing fingerprinting, amongst other security and privacy issues that should be addressed.
The spec also says that if the CDM isn't sandboxed, the user needs to be warned and prompted before it's allowed to execute:
> if a user agent chooses to support a Key System implementation that cannot be sufficiently sandboxed or otherwise secured, the user agent should ensure that users are fully informed and/or give explicit consent before loading or invoking it.
LGTM recently came by my project offering to integrate with GitHub. It's found a decent number of bugs other linters and my test suite didn't find, and it didn't require any setup at all; you just open your project on their website.
Their staff also reacted really quickly when I asked them questions or reported bugs.
For the most part, their only false positives are things like unreachable code, which I sometimes left in for future-proofing. I eventually did it their way, since dead code could plausibly confuse a third-party reader.
The only alert I've ever suppressed was a string search for "example.com", which could have matched "example.com.untrusted-attacker.com". It was just code for displaying a reminder, not anything security-critical, so I didn't think it was worth fixing.
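For anyone curious, the pattern behind that alert looks roughly like the sketch below (hypothetical code, not my actual project): a plain substring search for "example.com" also matches attacker-controlled hostnames such as "example.com.untrusted-attacker.com", whereas parsing the URL and comparing the hostname itself does not.

```python
from urllib.parse import urlparse

def is_example_host_naive(url: str) -> bool:
    # Naive substring check: this also matches
    # "https://example.com.untrusted-attacker.com/", which is what
    # the linter warns about.
    return "example.com" in url

def is_example_host_strict(url: str) -> bool:
    # Parse the URL and compare the hostname exactly, or as a proper
    # dot-delimited suffix, so superstrings of the trusted name don't match.
    host = urlparse(url).hostname or ""
    return host == "example.com" or host.endswith(".example.com")

print(is_example_host_naive("https://example.com.untrusted-attacker.com/"))   # True  (false match)
print(is_example_host_strict("https://example.com.untrusted-attacker.com/"))  # False
print(is_example_host_strict("https://example.com/reminder"))                 # True
```

In my case the check only gated a reminder message, so the naive version stayed; for anything security-relevant the stricter hostname comparison is the safer choice.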
Just thinking out loud.
What happens, let's say, if someone malicious buys youtube.vg and puts an SSL certificate on it? Will they be able to collect the ID?
I guess so?