
Seems like an argument about system-driven versus component-driven risk analyses - they both have their place, and they're not mutually exclusive. Risk-based approaches aren't about removing all risk, nor about paying attention only to the highest-priority risks. Instead, they're about managing and tracking risk at acceptable levels, based on threat models and the risk appetites of stakeholders, and implementing appropriate mitigations.

https://www.ncsc.gov.uk/collection/risk-management/introduci...
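
As a rough sketch of what "managing and tracking risk at acceptable levels" can look like in practice - the entries, levels, and appetite threshold below are invented purely for illustration:

    # Minimal illustration of tracking residual risk against a stakeholder
    # risk appetite; all entries and thresholds are made up.

    RISK_APPETITE = "medium"   # highest residual level stakeholders will accept
    LEVELS = ["low", "medium", "high", "critical"]

    register = [
        # (risk, residual level after mitigations, mitigation)
        ("unpatched internet-facing service", "high", "weekly patch window"),
        ("stale offboarding of contractor accounts", "medium", "quarterly access review"),
        ("single-region deployment", "low", "documented manual failover"),
    ]

    for risk, residual, mitigation in register:
        acceptable = LEVELS.index(residual) <= LEVELS.index(RISK_APPETITE)
        status = "within appetite" if acceptable else "needs further mitigation or sign-off"
        print(f"{risk}: residual={residual} ({status}); mitigation: {mitigation}")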




That's not a good approach to risk management, because we don't know all of the specific risks. Even if we knew them, we wouldn't know the likelihood of each one occurring. Even if we knew that, we wouldn't know the harm it would cause. And even if we knew that, we wouldn't know how much it would cost to make the risk go away.

I think a better approach to risk management is to flip it: think of the key parts of the system and figure out ways to make the system more robust, add redundancy, and build in some strategic slack.


My point is that risk is central to systems management. If you look at earlier standard texts on the subject, e.g., Nemeth or Frisch, the concept of risk is all but entirely missing. I have numerous disagreements with Google, but one place where I agree is that the term SRE, site reliability engineer, puts the notion of managing for stability front and centre and inherently acknowledges the principle of risk. I've since heard from others that this is in fact how the practice is presented and taught there.

Quibbling over whether the proper term is risk management or risk reduction rather spectacularly misses the forest for the trees.


I'm not trying to dismiss risk calculations. I appreciate them and the challenges involved, having worked on tools supporting risk calculations in the corporate space.

I feel this thread is getting out of hand. I initially replied to explain why, in general, the kind of thinking that leaves you unsatisfied with reductions, as opposed to eliminations, of concerns is common among programmers - because it's a sound heuristic. Reducing is good, but eliminating is better.


You're betraying your bias by insinuating I suggested throwing out risk mitigation. I advocated for streamlined risk mitigation by highlighting the risk of unnecessary complexity.

Very true. A risk-based analysis is the proper approach.

On the cybersecurity side, risk management is pretty tangible. It's technology governance, and security teams essentially act as a licensing body for tech in an organization, and provide intelligence about existential threats to the status quo of the line of business. Success is anticipating attempts on the org, and demonstrating how they were deflected or mitigated. There's very little that is vague about it. Just this week I discovered a new technique that some malware is using to bypass most sensors - we manage risk very concretely. I know portfolio risk managers who operate on instantaneous feedback about the P&L of their models and opportunity costs.

Where I disagree with the article is that I think the author is seeing an opportunity to frame ideological concerns that exploit uncertainty by calling it risk and equating it to disciplines that he doesn't realize have very concrete competencies and performance metrics. Also, we have technology and economic solutions for our climate impact already. I'm still of the view that if your plan doesn't work unless you take over the planet and deprive entire nations of people of their freedoms, it's an objectively evil plan, and somehow that makes me a counter-revolutionary denialist.


>"Risk = probability * impact" doesn't work when you throw security into the mix.

In my experience, that's primarily because cybersecurity people know neither the probability nor the impact of the things which they are calling risks, which moreover means they don't know how to order them from a prioritization perspective.
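
To make that concrete, here's a toy sketch (hypothetical risk entries, not from any real register) of how "risk = probability * impact" ranks items, and how wide uncertainty in either input makes the resulting priority order nearly arbitrary:

    # Illustrative only: hypothetical risk entries.
    # Shows how "risk = probability * impact" ranks items, and how uncertainty
    # in either input can flip the ranking entirely.

    risks = [
        # (name, probability range per year, impact range in arbitrary cost units)
        ("credential stuffing", (0.3, 0.6), (10_000, 50_000)),
        ("ransomware via supply chain", (0.01, 0.2), (100_000, 5_000_000)),
        ("laptop theft", (0.1, 0.3), (5_000, 20_000)),
    ]

    for name, (p_lo, p_hi), (i_lo, i_hi) in risks:
        best_case = p_lo * i_lo
        worst_case = p_hi * i_hi
        print(f"{name}: score ranges from {best_case:,.0f} to {worst_case:,.0f}")

    # Depending on which end of each range you believe, "ransomware via supply
    # chain" scores anywhere from 1,000 to 1,000,000 -- so the priority order
    # says more about your assumptions than about the risks themselves.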


> I've increasingly come to view systems operations / SRE as a risk management exercise, where the goal is to reduce the odds of a catastrophic failure.

s/reduce/find an appropriate level for/

It's a common misconception that risk management and risk reduction are synonyms. Risk management is about finding the right level of risk given external factors. Sometimes that means maintaining the current level of risk or even increasing it in favour of other properties.
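
A concrete illustration from the SRE side of that "find an appropriate level" framing is the error-budget idea; the numbers below are invented for the sketch:

    # Sketch of an error budget: the SLO defines how much unreliability is
    # acceptable, and risk management means spending that budget deliberately
    # (e.g. on risky deploys) rather than minimising failures at any cost.

    slo = 0.999                      # target availability agreed with stakeholders
    period_minutes = 30 * 24 * 60    # a 30-day window
    error_budget = (1 - slo) * period_minutes   # ~43 minutes of allowed downtime

    downtime_so_far = 12             # minutes of downtime observed this window
    remaining = error_budget - downtime_so_far

    print(f"error budget: {error_budget:.0f} min, remaining: {remaining:.0f} min")

    # If 'remaining' is healthy, taking on more risk (faster releases, chaos
    # experiments) is a legitimate choice; if it's exhausted, you dial risk down.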


I think that approach could be explained by the inherent difficulty in modelling risk for complex systems.

I touch on both these perspectives in my book 'Risk-First Software Development'. You can read it all online at https://riskfirst.org if you want.

Hope this is useful to you.


Yeah, I agree with you. Those were perhaps bad examples on my part. The thing is, when you do a risk analysis, digital systems have more risks than well-organised and trained humans with paper and pencils.

Maybe we will eventually bring more automation in this domain.


I'm not trying to discount the value of risk avoidance; they are both important and should both be used, but mitigation should always take priority of the two.

1) When you have neither, you should focus on risk mitigation first. 2) Having a great and complex risk avoidance policy in place is a good thing, but it doesn't mean you can get by with a lesser mitigation system.


Risk management is nothing to do with removing the risk from the company, indeed increasing risk is often acceptable. It's about removing risk from your department.

People conflate risk (the likelihood of an event) and hazard (the amount of harm if the event happens), and I think it degrades our conversations. The issue with software is that hazard has quite a large range, due to class attacks (hitting all instances of something at once, like a software update poisoned with malware), and I don't think even software developers understand the scale of a worst-case scenario, let alone most politicians.
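
A toy sketch of keeping the two quantities separate, with hazard scaled by the number of instances a single class attack can hit at once (all figures hypothetical):

    from dataclasses import dataclass

    # Toy model keeping likelihood and hazard separate, rather than collapsing
    # them into a single "risk" number too early. All figures are hypothetical.

    @dataclass
    class Threat:
        name: str
        likelihood: float          # probability of the event in a given year
        harm_per_instance: float   # hazard: damage if it hits one instance
        instances_hit: int         # class attacks hit every instance at once

        def hazard(self) -> float:
            return self.harm_per_instance * self.instances_hit

        def expected_harm(self) -> float:
            return self.likelihood * self.hazard()

    lost_laptop = Threat("lost laptop", likelihood=0.3,
                         harm_per_instance=5_000, instances_hit=1)
    poisoned_update = Threat("poisoned software update", likelihood=0.01,
                             harm_per_instance=5_000, instances_hit=50_000)

    for t in (lost_laptop, poisoned_update):
        print(f"{t.name}: likelihood={t.likelihood}, hazard={t.hazard():,.0f}, "
              f"expected harm={t.expected_harm():,.0f}")

    # The poisoned update is far less likely, but its hazard is tens of
    # thousands of times larger -- exactly the distinction that gets lost
    # when likelihood and hazard are conflated.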

Not that I really disagree with you, but I find that "risks", "constraints", "end-users", "systems administrators", "testers", and "the business" are six separate forces, each pulling in their own direction, and you can't put them all first.

This comment expresses a misunderstanding. Merely not believing hard enough is not why threat modelling has failed. There is an incentive mechanism and this description of it can help them re-orient their strategy.

The counterfactual premise of threat modelling is that a business wants responsibility for mitigating or remediating risk without direct compensation, instead of a method to manage and transfer it. A technologist is just happy to solve problems, so they don't see this open loop as a source of value.


Risk management is the product. Surely you agree that a product that reduces risk is worth something, right?

It's about risk/threat minimization

You are completely missing the point. Risk analysis is a balancing act:

Pros: open source, white hats can easily inspect the source

Cons: open source, black hats can easily inspect the source

You don’t get to just delete the cons you don’t like, even if, like me, you think the pro outweighs the con.

