> The routine update, it turns out, is no longer so routine
Is there a rare case where we shouldn't update because the update could contain a malicious payload? If the update gets served over plaintext HTTP I would treat it as suspicious and might even block it from connecting at all. I run the risk of having outdated software, but that can be addressed by keeping the software on a machine that's not connected to the Internet in any way, so it can't really do anything or talk to a C2 server (if someone does decide to execute a 0day against the software or inject malicious code via a rogue update).
@cyberlab: “Is there the rare case that we shouldn't update because the update could contain a malicious payload?”
You don't ever update your security “computers” from some third-party outsourcer. What you do is have your own people constantly probing their own systems for potential vulnerability and patching it themselves.
* jeez ..SolarWinds run their stuff on FTP and "active directory". It's got to be a joke.
* “computers” .. not allowed to use the 'W' word ;]
Plaintext transport doesn't matter if at least one part of the payload chain is cryptographically protected/verified.
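For what it's worth, here is a minimal sketch (Python, third-party `cryptography` package) of what "cryptographically verified" can look like on the client side: the vendor's public key is pinned in the installer, and a payload fetched over plain HTTP is rejected unless its detached signature checks out. The Ed25519 choice and the function name are illustrative assumptions, not how any particular vendor does it.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def update_is_authentic(payload: bytes, signature: bytes, pinned_pubkey: bytes) -> bool:
        """True only if `signature` is a valid Ed25519 signature over `payload`
        made by the key whose raw 32-byte public half was pinned at build time."""
        key = Ed25519PublicKey.from_public_bytes(pinned_pubkey)
        try:
            key.verify(signature, payload)
            return True
        except InvalidSignature:
            return False

As long as the pinned key and the verification code themselves aren't compromised, the transport doesn't have to be trusted. Of course, with SolarWinds the tampering happened on the vendor's side of that line, before signing, which is exactly why this alone wasn't enough.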
If you have a machine that's air-gapped and its only IO is strictly humans (read: keyboard/screen, not USB or other electronic means) then your weak point is the human, so center your security around that. You can look at security of lottery machines to get a good idea how that's handled.
But if you're updating the machine with updates, then it doesn't really fit that criteria, soooo....
“A ‘Worst Nightmare’ cyberattack” that we all... just take in stride? Either the consequences are themselves clandestine, or cyberattacks aren’t as meaningful as our headlines would indicate.
An attack directed at your own govt is potentially a nightmare for the average individual. If the wrong information is stolen it could be used much farther down the road. Your govt may find itself at a disadvantage at a critical moment.
In a sense it's only not a nightmare if you aren't paying attention.
A worst nightmare for normal people would be something like a foreign country hacking all the US nuclear missiles and launching them against themselves. You know, an actual war caused by it, not just some minor intelligence advantage with no particular direct effect on normal people.
Clandestine I think. Immediate reaction steps to this from CISA were pretty unprecedented; a govt-wide unpluggening on a Sunday night of a specific vendor doesn’t happen a lot.
We can compare and contrast with the effects of NotPetya, which caused widespread obvious economic damage (e.g. Maersk shipping and Merck losses) - due to the number of affected companies, Solarwinds had the potential to be worse, but I'm not sure if you can be more destructive than that without it being obviously visible.
>I'm not sure if you can be more destructive than that without it being obviously visible.
I don't know if the damage was greater than NotPetya, but you definitely can have something more destructive without it being immediately apparent. If you lose credit card numbers and PII from your customers you HAVE to report it to the public, but there are different rules for the loss of incredibly valuable intellectual property.
Perhaps the damage is just not visible yet. The sand has not been tossed in the gearbox. The blueprints have not been built. Maybe it's a precursor event to a longer decline.
The attacks have more similarities than differences.
First, to correct a common misconception, NotPetya definitely wasn't ransomware run amok - it was designed to look like the previously popular Petya ransomware, but the actual ransom and decryption key processing mechanism was removed as that wasn't its purpose. It was masquerading as ransomware, but it wasn't ransomware, it just destroys data by encrypting it with a non-recoverable key.
Just like SolarWinds, NotPetya was also a targeted supply chain attack - it was deployed through updates from a previously hacked accounting/tax software company, "Intellect Service", to all their customers in Ukraine, which also included many multinational companies that had their finance depts file tax reports in Ukraine; and just like SolarWinds, NotPetya is attributed to the Russian government.
The main difference is that, as you say, it seems that Solarwinds was (at least at the stage it was detected) used only for espionage, while NotPetya was designed for pure destruction.
Definitely correct in that NP originated as a supply chain attack on that vendor in Kiev - I had forgotten, good catch.
NP, as Maersk and co experienced, was definitely rware (a variant, sure) run amok however. It’s industry consensus that the attacker either a) didn’t think of the possible global blast radius or b) thought of the blast radius but didn’t plan for how bad it would get.
In a sense, SW might reflect a more mature approach: consider the network spread, use a different exploit and intent - spyware for espionage vs rware variant for destruction.
That said, very different exploits and intents were used.
The Biden administration just announced sanctions against the Russian government for their (presumed) responsibility for the SolarWinds attack. They're not shrugging this one off.
(Which is itself a bit odd. The US has argued in other contexts for "cyber norms" which would allow pure espionage operations, but put more restrictions on attacks. And so far as anyone can tell so far, SolarWinds was a pure espionage operation -- using tools that could be repurposed to do something else, but you could say that about a lot of operations in this sphere, including US operations that our government wants everyone else to shrug off. Yet here are the sanctions. I expect the "norms" push, to the extent that the current administration still wants to pursue it, will take some kind of a hit...)
It’s nice how they equivocate over the ease of entry and their security policies:
There was another unsettling report about passwords. A security researcher in Bangalore, India, named Vinoth Kumar told NPR that he had found the password to a server with SolarWinds apps and tools on a public message board and the password was: "solarwinds123." Kumar said he sent a message to SolarWinds in November and got an automated response back thanking him for his help and saying the problem had been fixed.
When NPR asked SolarWinds' vice president of security, Brown, about this, he said that the password "had nothing to do with this event at all, it was a password to a FTP site." An FTP site is what you use to transfer files over the Internet. He said the password was shared by an intern and it was "not an account that was linked to our active directory."
How a VP of security can ignore a privesc risk like that is pretty inexcusable. Every vuln falls on a risk mgmt spectrum, but that’s a really nonsense answer to give. Weak PW mgmt on an FTP server that you let interns set should raise some areas of interest.
I don't think they ignored it? Says it was addressed. I think it was OK to dispel the implication that the elaborate supply chain attack was allowed in the first place by sloppy pw practices. You don't want that to be the takeaway, with other companies then thinking "well, our pw management practice is really good, so we don't have to worry about being a victim so much".
They did ignore it. I think it was over a year actually. I don't feel like looking it up right now but it was also discussed in another submission. Whatever the actual number was it was VERY long.
You don't get this kind of attack because you had an exposed FTP server.
The attackers implanted malicious code into their code base, learning the tooling, processes and responsibilities of the personnel.
They then reverse engineered the protocol and used it in their backdoor so it would look basically the same as regular communications.
The issue is that we blindly trust 3rd party software that is used by hundreds of companies. This makes SolarWinds a prime target, one that is worth the effort taken in this case.
>You don't get this kind of attack because you had an exposed FTP server
This kind of attack needs an entry point, and an exposed FTP server provides the potential for one. Whether it actually was the entry point is a separate matter; willfully ignoring one unlocked door means there are likely to be others.
Initial access is part of the day-to-day these days.
You can't cover all entry points; it's only a matter of time before someone makes a mistake. The fact that the adversary showed these extreme levels of proficiency and dedication tells me that the vast majority of companies would have fallen for this. In fact, the backdoor was running for months on targets like Microsoft, gov agencies, and security companies like Malwarebytes.
These companies know a thing or two about security.
Today we work with an "assume breach" mentality that assumes you are already compromised.
> You don't get this kind of attack because you had an exposed FTP server.
Leaking the extremely weak login credentials to your update server, through a public GitHub repo, is not exactly a glowing endorsement of how seriously security was taken at SolarWinds.
With stuff like that being a thing, who knows where else they cut corners/got lazy.
> this makes SolarWinds a prime target, one that is worth the efforts taken in this case.
A prime target, yet apparently could still not be bothered to put in some minimum effort to protect themselves.
Yeah I mean I’ve read the technical FEYE writeups, and yeah once they had a foothold it got fairly f’ing complex (hope those guys don’t end up in my CI/CD ever, yikes).
That said, the foothold for this stuff more often than not comes from keys on public GitHub repos (or things like that: simple misses that are enough to throw the door open).
I hear things like “intern” and “totally unrelated” and those are the dog whistles of policy/policy-enforcement failures behind these sorts of initial ways in.
So many sexy hacks start from admin123 passwords unfortunately
The password was in a file that was committed to github IIRC.
I think the intern that posted the file and the person making that statement both did not realize that someone had given the user write access, which in my opinion was the actual mistake, not the FTP or the password.
Something to remember is that these sorts of things happen in many organizations.
You should always verify the data you get, be careful with complex supply chains, and avoid binaries you didn't build yourself. Don't skip these things just because "that's the way it's always been done." Ten years ago very little internet traffic was encrypted; security still means progress, not the status quo.
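As a concrete (if low-tech) example of "verify the data you get", here's a sketch of pinning the SHA-256 of a third-party binary you've vetted once and refusing anything whose digest has drifted. The expected digest is something you'd record yourself or take from the vendor's published checksums; nothing here is specific to any real vendor's process.

    import hashlib
    from pathlib import Path

    def matches_pinned_digest(path: Path, expected_sha256_hex: str) -> bool:
        """Compare a file's SHA-256 against the digest recorded when it was vetted."""
        actual = hashlib.sha256(path.read_bytes()).hexdigest()
        return actual == expected_sha256_hex.lower()

Worth noting this wouldn't have caught the SolarWinds update itself, since the malicious code was inserted before signing and publishing, so the "official" hashes matched; it mostly protects against tampering downstream of the vendor.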
> An FTP site is what you use to transfer files over the Internet.
Afaik that was the update server, and the password originally came from a public GitHub repo where it was stored in plaintext since at least June 2018 [0]
This article is a case of [a lot of] Monday morning quarterback[s]. Except for Mandia, I wouldn't allow any other exec(s) to speak about this, publicly, as to the why and how. Side note: I bet Bejtlich wishes he was still in that team ;-)
I fear that you're being downvoted for pointing out the even bigger threat of our national media's exceedingly more dangerous acts of propaganda, presumably because readers either don't recognize it or are wilfully blind to it because it reflects their own biases.
The SolarWinds hack was a devastating attack on our sovereignty.
Yes, let's assassinate a world leader of the country with the most nukes in the world based on unfounded claims of election interference and a hack.
Do you have any idea what America did in Russia in the immediate aftermath of the fall of the Soviet Union? The 1996 election in Russia was majorly "interfered" with by Clinton (https://archive.is/R7i5u), not to mention the fact that most Russian hacking activities are done using NSA backdoors which were leaked.
I'm surprised and disappointed to see such flagrant ivory tower imperialism on HN.
Your cited article is interesting but does not support your assertion of major interference IMO. Policy maneuvers for political purposes are the norm for all countries right or wrong.
Really? A government saying that they would ensure no "negative stories would come out", literally puppetting international institutions to pay out money - billions - for an election campaign, sent US Government agents to be embedded into the Yeltsin campaign right as he was violating every law on the books and calling in favours from the mafia and oligarchs, and very likely also used intelligence agencies to help, all of that isn't major interference?
I have a family member affected by those "targeted ads" and can tell you this is hardly comparable to news agencies writing favorably or not about someone. Some people are seriously fucked up because of this. Just look what happened on January 6th.
For the record, I think that both are election interference. But there's a far cry between publishing fake news and advertising it to some demographics, and funnelling IMF money into campaigns by the billions while embedding foreign advisors into a criminal election campaign.
I agree completely that assassination, beyond moral abhorrence, would work out as well as most U.S. foreign policy - horrifically. The last foreign policy success I'm aware of was the Marshall Plan.
I think it's a mistake to minimize Russian influence, when it suffices to compare the unmitigated disaster of U.S. actions - both government and private sector, and at enormous scale - in post-Soviet Russia.
I was speaking to some Russians at a party yesterday. They lamented at how Russian and American media were mirrors of each other.
Russia uses America as the core of their propaganda, just the same as the DNC has been using Russia as their boogeyman since they needed a narrative to apply to Trump.
As someone who formerly worked for Crowdstrike and has performed the APT-29 demo countless times, I will say attribution is bullshit and people frankly have no real data to point fingers.
> By design, the hack appeared to work only under very specific circumstances. Its victims had to download the tainted update and then actually deploy it. That was the first condition. The second was that their compromised networks needed to be connected to the Internet, so the hackers could communicate with their servers.
Yea, wow, thanks NPR. Hard hitting stuff right there. Those are “very specific circumstances” that just happen to generally apply to a huge percentage of hacks.
I normally appreciate some stories on NPR, but you’re right. This is a narrative piece.
That's a perfectly valid paragraph. In a decent environment, outbound internet access should be restricted to only the hosts / networks / ports that require it. Especially for server environments. Many servers running the backdoored Orion probably tried to beacon but failed for that reason. (And I'd assume the backdoor would probably first verify outbound internet access so that the failed beacon doesn't generate a firewall/ACL deny event that a security team might detect.)
Plus, the article is written to condense technical information into something that's as layman-friendly as possible. The specific malicious update has to be downloaded, and also installed, and also running on a server which can reach out to anything on the internet. Their point is that there are only going to be so many servers that both use this software and meet those conditions, and that's in part why the backdoor took so long to identify. This is maybe a little obvious to people with infosec knowledge, but definitely not obvious to their target audience.
The article timing is interesting, but I don't think a coincidence is that unlikely. If you read the whole article, it covers enough that I could see it taking months to make.
I don't think coordination with the government is that unlikely, either, or perhaps just a pragmatic editorial decision ("everyone knows sanctions are likely going to be placed sometime in the next few months, and maybe we should wait until then so we can include those details in the story"). Both of those scenarios are more likely than a coincidence, probably - but, either way, I think your post seems overly cynical in general.
I whitelist my networks (by port and host name, at least until TLS 1.3 removes the visible SNI) and my top-denies list is very interesting. They have access to bits and pieces of the internet (especially PyPI, damn you runtime downloads) and all connections to an allowed port will initially succeed (they will just be closed after the SNI or IP check fails).
Nonetheless the article was correct that the hack did not need bizarre or rare circumstances to take effect.
Not exactly sure what they meant, but maybe just that it's often hard to install a single Python package if the machine can't reach PyPI. If you download a tarball or wheel of some Python package on a different computer and transfer it to the isolated computer, there's a pretty good chance the package is going to have at least one third-party sub-dependency, in which case trying to install it with pip or setup.py will cause it to try to install more packages from PyPI.
Docker's a decent way to address this, since you install the dependencies when building the image rather than when running the container, so you can build the image elsewhere or through CI and run the container on any server without having to allow access to package management repo servers.
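To make the sub-dependency point above concrete, here's a rough sketch (standard library only) that walks the dependencies an installed Python distribution declares; running it against almost any real package shows why copying a single wheel to an air-gapped machine is rarely enough. It only sees what's already installed locally, so treat it as an illustration rather than a resolver.

    import importlib.metadata as md
    import re

    def declared_deps(dist_name, seen=None):
        """Recursively collect the dependency names a distribution declares."""
        seen = set() if seen is None else seen
        for req in (md.requires(dist_name) or []):
            # Keep only the project name; drop version specifiers, extras and
            # environment markers (e.g. "urllib3 (>=1.21); extra == 'socks'").
            m = re.match(r"[A-Za-z0-9._-]+", req)
            if not m or m.group(0).lower() in seen:
                continue
            name = m.group(0)
            seen.add(name.lower())
            try:
                declared_deps(name, seen)
            except md.PackageNotFoundError:
                pass  # declared but not installed here, so we can't recurse
        return seen

    # e.g. print(declared_deps("requests")), assuming requests is installed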
I worked for Crowdstrike. We did this hack as our primary demo. Dmitri is on the Atlantic Council along with the rest of the Obama team.
There is no more evidence to suggest Russia was involved than there was that Dmitri himself was for political reasons. And Shawn Henry will fire you for saying this out loud.
Let me introduce you to PTECH and see if you still think Solarwinds was worse. Warning, a conspiracy rabbit hole lies this way, proceed with caution, lest your view of the world be challenged.
Given that audio and video are much less likely than written works to include references, and are harder to reference themselves, I think they are poor evidence of anything contentious, such as a conspiracy theory.
The first and most important skill needed for the internet to be a net gain for a person's understanding is the ability to quickly develop and modify heuristics to filter out bad data. Filtering out good data is fine, since the truth repeats itself in many formulations, but believing bad data, or even spending too long noticing it, is bad, and then you are worse off than just using a library and books.
Like, say, that backdoor someone wrote an article about recently which ran from RAM and had a sophisticated self-destruct mechanism that erased all traces if anyone tried to dump its memory? I wonder how many companies had exploits like that which they either didn't notice or didn't have the sophistication to actually catch and dump.
There are defences for this: if one controls/monitors every app in the system for network access, then as soon as any unusual network access is triggered, it is investigated and blocked.
In my home Windows setup, only Windows Defender, Firefox and Chrome are allowed outgoing internet access on a regular basis. Everything else is blocked.
Windows Update is only allowed when I'm in the mood for it (~once a year). Anyone can do this easily by controlling svchost.exe's internet access with the Windows Firewall app.
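In the same spirit, a rough read-only complement to those firewall rules: list the processes that currently hold established outbound TCP connections and flag anything not on a personal allowlist. This is just a sketch using the third-party psutil package; the allowlist names are illustrative, and on most systems you'll need elevated privileges to see other users' PIDs.

    import psutil

    ALLOWED = {"firefox.exe", "chrome.exe", "MsMpEng.exe"}  # illustrative allowlist

    def unexpected_talkers():
        """Return (process name, remote ip, remote port) for established outbound
        TCP connections owned by processes that aren't on the allowlist."""
        suspicious = set()
        for conn in psutil.net_connections(kind="tcp"):
            if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr or conn.pid is None:
                continue
            try:
                name = psutil.Process(conn.pid).name()
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue
            if name not in ALLOWED:
                suspicious.add((name, conn.raddr.ip, conn.raddr.port))
        return suspicious

    if __name__ == "__main__":
        for name, ip, port in sorted(unexpected_talkers()):
            print(f"{name} -> {ip}:{port}")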
> If one controls/monitors every app in the system for network access
Quis custodiet ipsos custodes? Who monitors the monitor?
What happens when it is compromised, loopholed through, gets its inputs tampered with, etc.? For a home setup and its threat model, this sounds like a simple, workable plan. When you're dealing with attacks of the level of sophistication described in the OP, trusting trust [1] becomes complicated and difficult.
I think you mean "blocked, then investigated", which it sounds like you're doing. A company running a large variety of software - particularly third party - needs to have staffing sufficient to such investigations.
If you want to stay unnoticed you need to remove the backdoor and all the traces after you are done with what you wanted to do (steal some information, damage something, ...), sure.
How much value have SolarWinds shareholders lost because of this? If it is not the number one incentive for investors to fix, then there won't be a change in business practices. This is why GDPR in the EU has (some) teeth.
To me the "worst nightmare" was a story in the NYT about a hypothetical concerted attack against healthcare infrastructure, transit and more. Sadly I can't seem to find the link but it was a few years ago...
I agree with the parallels with aviation regulation; there needs to be something forcing a supplier's hand to solve this. The way to protect against supply chain attacks is to invest in a security-hardened build system (e.g. don't build releases on dev workstations, do them on build farms with build software that is the only thing able to access the release signing keys). This costs too much for most companies, so if they don't have the obligation to build it, they'll do features instead.
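To make the "only the build farm can touch the signing keys" idea a bit more concrete, here's a minimal sketch of the signing step (Python, third-party `cryptography` package); the key path is hypothetical, the key is assumed to be Ed25519, and in practice you'd want it in an HSM or secrets manager rather than a file.

    from pathlib import Path
    from cryptography.hazmat.primitives import serialization

    # Hypothetical location, present only on the build farm, never on dev machines.
    KEY_PATH = Path("/etc/release-signing/ed25519.pem")

    def sign_artifact(artifact: Path) -> Path:
        """Write a detached Ed25519 signature next to the built artifact."""
        key = serialization.load_pem_private_key(KEY_PATH.read_bytes(), password=None)
        sig_path = artifact.with_suffix(artifact.suffix + ".sig")
        sig_path.write_bytes(key.sign(artifact.read_bytes()))
        return sig_path

The point of the comment above stands: dev workstations never see KEY_PATH, only the build farm does, so a leaked laptop can't produce a validly signed release.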
Which they did. The problem of hardening your build infrastructure against someone who has admin access for months is... non-trivial.
This boils down to the question of whether average companies should be including the Russian intelligence services in their threat model. To paraphrase James Mickens' great USENIX paper: if your threat model includes the SVR, you're going to be SVR'd upon.
First, you do not need to be the Russian intelligence services to pull off this attack. Given prevailing trends in the vulnerabilities market, this sort of attack would cost at most $1M to pull off, which puts it within the capabilities of maybe ~50,000,000 individuals worldwide, let alone organizations. If the SVR is anything like the CIA they are probably running at least 1,000 programs of similar scale simultaneously, so it is not as if the attack was supported by the full weight of the Russian intelligence services.
Second, a company's threat model should include entities that want to attack them. Given that they are claiming the SVR wanted to and did attack them, it would be ridiculous to not include them since that would be empirical evidence that they are an actual threat actor. Even if we were to ignore empirical evidence any company like SolarWinds that sells to wide swaths of government agencies in critical capacities should absolutely be including foreign intelligence services in their threat models and should probably be required to demonstrate effectiveness against attacks funded to at least the $100M level since only at that level does it start to actually get problematic for state actors to run operations.
I'm not convinced about the arguments of cost. There are a whole lot of presumptions in that chain of reasoning. The initial vector seems to not require any high-priced vulnerabilities, but a simple authorization bypass, e.g., bypassing 2FA. Which could well have been a root account. From there, get the keys that Duo depends on, then you own the whole thing.
I have always argued against doing what I call "defense by presumed motive". The logic would have been: "OK, UNC2452 wants to access DHS hacker's email, so I'll go after SolarWinds." Better to spend your energy on basic security principles.
If you want to see the difference between just throwing money at the problem and an actual tier 1 threat, compare this with the Chinese iPhone 0day from fall of 2019. Probably a multimillion dollar exploit, with terrible quality and no opsec on the C2 side. Just spending money doesn't get you the kind of expertise that's needed to pull off something like this.
It's a bit like arguing Bill Gates is a serious threat to any naval power because he can afford to buy a nuclear attack sub; there's more to it than that.
> But as CrowdStrike's decryption program chewed its way through the zeroes and ones, Meyers' heart sank. The crime scene was a bust. It had been wiped down
That's a lot of words to say, we don't know who did it. I had a quick look but couldn't find anything, why are the fingers being pointed at Russia?
Tools can be stolen. In fact, if we're claiming whoever did this was a super-genius, they would have stolen or spoofed the tools they used to point at someone else. Unless they were Russia being so clever they were pretending to be someone pretending to be Russia!
Edit: Your link shows Kaspersky Labs making the claim that this was the FSB. Yet the West also claims Kaspersky is controlled by the FSB! Well, you could say "they should know". Or maybe they want to humor the West so their ban will be lifted. Or maybe they aren't controlled by the FSB at all. But if the West can't figure that out, how do they expect to figure out the true origin of the hack?
The reply from ipsin was much more helpful. This list is mostly full of articles in general about the Solarwinds attack and others including "Supernova malware clues link Chinese threat group Spiral to SolarWinds server hacks (ZDNet)".
The wiki page on the attack speculates that an Office 365 account was hacked. Presumably it was an admin's account, and from there I could see them probing until they found credentials for the build system.
No idea if it is related, but a SAML implementation security issue was disclosed the same (or very close) day that the SolarWinds attack became public knowledge. Maybe that gave them access to the admin account?
Two things: are they unsure the attack could have worked the way they describe, or are they unsure the attack actually happened that way?
From the viewpoint of the public it would be important to know what made this attack possible, and how to defend against it, even if the actual attack was accomplished some other way.
I guess what I'm asking is, do they know how to repeat this attack?
Tim Brown, VP of Security at SolarWinds, said:
“We check code out of source code control, have a TeamCity environment to kick off the build, and here the attacker looked for Orion to be built and swapped a file. It was a transient virtual machine and that’s hard to detect,” he said.
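One mitigation that keeps coming up for "a file was swapped during the build" is reproducible builds: rebuild the same tagged source on an independent machine and diff the artifact digests. Here's a hedged sketch of the comparison half; the hard part, making the build actually deterministic, is its own project, and the directory names are placeholders.

    import hashlib
    from pathlib import Path

    def digests(build_dir: Path) -> dict:
        """Map each file under a build output directory to its SHA-256 hex digest."""
        return {
            str(p.relative_to(build_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(build_dir.rglob("*")) if p.is_file()
        }

    def differing_files(official: Path, independent: Path) -> list:
        """Files whose digests differ between two supposedly identical builds."""
        a, b = digests(official), digests(independent)
        return sorted(k for k in a.keys() | b.keys() if a.get(k) != b.get(k))

A swapped DLL in the official build output would show up here, provided the independent rebuild isn't running on the same compromised infrastructure.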
Network monitoring software is a key part of the backroom operations we never see. [...] By its very nature, it touches everything — which is why hacking it was genius.
This is frustrating to read, since plenty of people did in fact warn that these kinds of systems were easy targets.
Yea... not sure I'd call it genius. I think if you asked anyone that is a little knowledgeable what the juiciest target for a nation state to hack would be, a large portion of people would have said something like SolarWinds.
It seems like SolarWinds should have known better themselves as well. There is no way that their upper management didn't know that they would be an amazing target for a hack. Supply chain attacks are not that new. Their lax security seems extremely negligent.
Ninety-nine times out of a hundred, defenders call attacks "genius" as a way of subverting accountability. What makes this particular incident pernicious is that it already had a built-in deflection of accountability --- the responsibility for ensuring that SolarWinds was fit for purpose was diffuse; hundreds of giant companies with large security teams all believed it was someone else's job to verify that SolarWinds could safely deliver its functionality.
I've worked with people who don't operate this way, and who take continuous flack from CIOs for spending resources on verification for COTS IT management tools. But those teams are, in my experience, very rare --- and the SolarWinds hack provides further evidence of that view.
It's not a perfect predictor, but a reasonable rule of thumb: if you've never heard of a vendor's security team, chances are they barely have one. That's obviously true of... most vendors! So you should be careful when you select one for a role as sensitive as fleetwide agent-based monitoring, where a vulnerability or a software supply chain fuckup is going to create mass compromise. This seems so clear to me that it barely counts as insight.
Also their security team can just be a subgroup of coders who have some idea how their software executes.
IMHO most sane vendors who want you to install something on your machine make it open source and use existing tools as much as possible. Doing it this way also decreases chances of some "temporary fix" changes on even otherwise secure software. Companies optimize for money, management tries to align with company values and engineers often just have to follow it. It's inevitable what trade-offs will be made unless there's some direct negative impact. For everybody selling their time and not being heavily invested, ignoring black swans and basically "eating tons of sugar" is the natural move.
"A former security adviser at the IT monitoring and network management company SolarWinds Corp. said he warned management of cybersecurity risks and laid out a plan to improve it that was ultimately ignored.
In a 23-page PowerPoint presentation reviewed by Bloomberg News, Ian Thornton-Trump recommended to company executives in 2017 that SolarWinds appoint a senior director of cybersecurity, and said he told them that “the survival of the company depends on an internal commitment to security.”
The following month, he terminated his relationship with the company, saying he believed its leadership wasn’t interested in making changes that would have “meaningful impact.”"
I was just playing with Grafana a few days ago, their cloud version. When you install the agent, it opens up ports with unprotected metrics on a public IP. It opens up ports on your production systems without any info about it whatsoever, because it is in theory a push agent. Why would you do that?
Support response:
"Regarding your second message about port 12345 on the Grafana Agent -- the HTTP server only exposes the Agent's internal metrics and provides an API for the Agent's status. The gRPC server is used for agents to communicate with one another, if the scraping service is used. It does not allow anyone to get access to their metrics.
That said, we understand the concern and upon review will look at documenting that agents listen on 0.0.0.0 by default, and that you can change it by setting http_listen_address and grpc_listen_address in the Agent's server config to 127.0.0.1: (...)"
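For anyone wanting to check their own hosts for that kind of surprise, a rough sketch (third-party psutil package) that lists TCP sockets in LISTEN state bound to all interfaces rather than loopback; process names may show as "unknown" without elevated privileges.

    import psutil

    def wildcard_listeners():
        """Return (process name, port) for listeners bound to 0.0.0.0 or ::."""
        found = set()
        for conn in psutil.net_connections(kind="tcp"):
            if conn.status != psutil.CONN_LISTEN or not conn.laddr:
                continue
            if conn.laddr.ip not in ("0.0.0.0", "::"):
                continue
            try:
                name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                name = "unknown"
            found.add((name, conn.laddr.port))
        return sorted(found)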
Also, sometimes I feel like I'm the only person in the world who is not comfortable in running tons of untrusted docker containers. If I put a link to a binary here and tell you to run it, I doubt any HN reader would. But 400MB of binaries? No worries, I have just the right tools to run them as root.
Not sure what your comment about docker containers is trying to say. In docker you need to specify port mappings before ports are exposed outside the container, so this wouldn't happen without you explicitly knowing. Containers are by default untrusted and access to network and filesystem is mediated. In fact, if you linked to a docker container I'd have less qualms running than a binary.
Yes, you would, and that is reasonable. It is assumed that it is a safely contained environment that cannot maliciously affect your host system. Yet every virtualization and container solution we've seen so far has been exploited.
"But is there a vulnerability at the moment that can be exploited?" "None that we know of"
Containers also often get generous access rights and internet access, even assuming the container solution itself is intact.
Security of the system inside is a whole big attack surface that you have little control over (unless you meticulously analyze each full fucking Docker system image every time you run them), and it necessarily affects your systems, because otherwise you would have no use for the container.
The filesystem is restricted, but the network generally is not. You can have a malicious container connect to the rest of your network. The ports don't need to be open either; it can phone home (like most bots do).
Certain data centers restrict access to the outside network (i.e. the Internet). With the now-popular public cloud it's actually hard to place such a restriction, because the majority of tooling, including official images, expects to be able to connect outside.
A common way to dodge accountability is to exaggerate the size of the enemy and his cleverness: goodness, he had 1,000 developers working on it.
Nonetheless, there are some things that are kind of impressive. Inserting their own code into the build process without touching any file.
But the real level of skill, I think, is the operational discipline exercised by the attackers. For example, waiting two weeks before doing anything, erasing traces of what they did, and targeting very specific sites.
Security is not my line of work, but back in the day I spent a while exploring the field (I especially enjoyed reverse engineering / subverting software).
Why is this 'operational discipline' remarkable? I'd expect it to be common sense (including waiting for two weeks), especially if I were a state actor and had spent a whole bunch of money on it. Or do you mean that the vast majority of hacking operations don't even bother to do this?
The article vaguely describes the build system being compromised. Have any details been published to indicate what build systems they were running and what the exploits were there?
>And so we are fairly broadly deployed software and where we enjoy administrative privileges in customer environments.
There is a lot of talk about shoring up security practices by many of the people quoted here. But something that would be hard to admit is that maybe they should not have administrative privileges in customer environments. Maybe they should not install agents on your machine. They would never recommend you to do so with anyone else, except them of course, because you can trust them.
And you would think that a bit of analysis would be done on software and the company that built it for something that you install and give it full administrative control.
I don't like the Solarwinds Linux agent. When I last looked, there was still an sh syntax error in the cron job it installed (look for a file named 1 in your root directory) and I couldn't reach anyone who could understand my bug report. It also frequently exhausted the space in its log partition. I replaced the agent with SNMP 3 read only access. I don't believe I'm authorized to describe my employer's current monitoring posture.
I am curious about the "compiler" attack they are mentioning. It looks like they compromised the compiler used to build the code. Any more technical info on this aspect?
I think they compromised TeamCity, or at least I've heard a lot of mentions of TeamCity related to this hack; I could be completely wrong. But it's possible they just got admin access to the TeamCity build server and added their code just before the build was deployed. So not really compromising the compiler per se.
I think the most important thing the SolarWinds hack has revealed is that the massive pile of paperwork, full of security controls, that has to be filled out to accredit systems for government use is fairly useless. It's the digital equivalent of the Great Wall of China. Designed by bureaucrats, impressive in size, a massive effort, and ultimately not going to stop the Mongols anyway. Security paperwork is not security.
More important, I think, is that the months and months it takes to usher things through the process force things to be out of date, which in itself creates security problems.
An actual audit of the source code + running it in an instrumented live test environment to capture behavior is far better.
They didn’t say “completely useless,” they said “fairly useless.”
If you look at this case even briefly, you should come to the conclusion that the “security paperwork” is fairly useless.
An FTP server compromised because of a terrible password policy? No suspicious activity alerts of any kind? Executives who (based on their comments) are clearly ignorant of what makes software actually secure?
What is the paperwork able to prevent, if it can’t prevent such fundamental problems?
Exactly. It's wonderful that there are a few thousand controls for things like password length. But things that cause massive security nightmares, like "does the CTO care about security?" or "has a pentest team reviewed and audited the source?", are fundamental and should be triggered as any new system touches more and more systems.
"Does this software touch literally every other system on the network?" should be a question that triggers a much more rigorous and deeply technical evaluation and review.
But the current processes don't work that way; they are purely paperwork drills that often demonstrably make systems less safe.
From what I've read so far I haven't been able to gather how Solarwinds could have prevented this? In other words what were the critical failures in their defenses? Or is this kept non-public on purpose?