
The way I'd try to design a fail-safe solution would be not to design it in the first place. No one is forcing us to build these things. But government contracts are lucrative and contractors exist.



You, like everyone else who has these ideas every few years when this government overreach bubbles up to the surface again, have made the mistake of believing, first, that the government is responsible enough to manage this kind of program (it isn't and never will be), and second, that engineering can stop engineering.

If you can make it, someone can break it, and will. And companies are not responsible; that's not how this works.

Also, a "weakness of an individual part" is a break, and once it exists the entire system collapses. You're essentially arguing for security by obscurity.


I was obviously being rhetorical.

"Fixable in practice" is really not a sufficient rebuttal when the people tasked with actually building this stuff have given clear and specific reasons why building this stuff safely is impossible.

That the government thinks it's totally fine for the government to have the power to listen in on everything is not especially surprising. The problem is that most people don't actually trust the government with this power.


It does, but it makes sense to explore fail-safe designs.

Depending on a lack of incompetence in dangerous systems works until it doesn't. To the extent that these things can simply halt when incompetently managed, they should.

If anyone disagrees, I'd like to know why they think the next hundred years are going to be so much freer of political shortsightedness and corruption than the last hundred.


Those better fail-safe designs still have a long way to go before they can be commercialized; regulatory approval is just one step. It would take a lot of R&D and investment to build a truly fail-safe plant. Unfortunately, no one wants to put any money into it, so we just stick with cheap, aging, risky designs.

Yes, there are newer designs that are supposed to be passively safe. But of course the money and government approval needed to build them, especially an as-yet-untested design, are all locked behind political doors and hurdles.

I mean, that's akin to proposing a situation where a power plant is its own backup power plant. Neither I nor any other human can personally generate electricity when the system fails, but if I fail to design the system in such a way that something else will be capable of handling the failure with reasonably high reliability, I probably should not be designing systems.

I don't necessarily disagree with your theory on design, but every complex war machine ... has some level of controls that, if used improperly, could be fatal. They are inherently dangerous systems.

Designing fail-safes for such things has to strike a balance between the added complexity (and the new failure modes it brings) and the safety it provides.

In the military there are plenty of "don't touch that, let the person who knows how to do it touch it" rules, and I suspect that is often the right design.


Perhaps if you can't afford to build a safe system, you should reconsider if you should be in the business of building unsafe ones?

Sure, you can come up with a solution to any objection, but the problem is the failure modes you haven't thought of. The overall point is that any safety measure brings its own dangers along, and you have to be very careful when adding things to make sure that you've solved more problems than you've added. It's not at all obvious.

If there are no inherently safe options, then that tends to preclude engineering in safety. I really don't think you know what you are talking about.

I am suggesting that other engineering disciplines produce much safer products without open source and open design. How that is achieved is another question. Bridges are not safe because of any government agency but because mechanical engineers design them to be safe.

If the risk of such a thing failing is so large, this seems like a point against your argument. We MUST be able to reduce the risk of such systems failing, and do so provably. If we cannot reduce the cost, we must reduce the risk, such that the calculation makes the system affordable again.

This all misses the point of my first post.

You could have a much safer design available, but if the 'less safe' design is more cost-effective and meets all specifications, it would still be the preferred design. That means any problem is with the specifications/requirements. The next question is whether a standard of safety that would require choosing the fail-safe (or at least fail-safer) design would cause statistical murder.


You always have to consider threat models like these. If you don't, you're not doing your job well.

If you're trying to imply that considering these threat models is a reason not to come up with a proper fail-safe system, then the very same argument applies to creating these charging stations and letting people use them in the first place.


It's easier to fail-safe something than to make it perfect.

Even better, when you fail-safe something, you plan for the (unknown) future.

That's why we have circuit breakers, hydraulic and electric fuses, pressure relief valves, etc.: no one thinks they can know everything that can go wrong in the future (with catastrophic consequences) and plan for it.
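
The same principle shows up in software as the circuit-breaker pattern: trip to a known-safe default after repeated faults, instead of trying to enumerate every failure in advance. A minimal sketch (the class and names here are hypothetical, not any particular library's API):

    # Hypothetical sketch of a fail-safe wrapper: after repeated faults it
    # "trips" and serves a known-safe default, rather than assuming we can
    # predict every way the wrapped operation might go wrong.
    class CircuitBreaker:
        def __init__(self, max_failures=3, safe_default=None):
            self.max_failures = max_failures   # trip threshold, like a fuse rating
            self.safe_default = safe_default   # known-safe fallback value
            self.failures = 0
            self.tripped = False               # tripped = stop trying, fail safe

        def call(self, func, *args, **kwargs):
            if self.tripped:
                return self.safe_default       # already tripped: serve the fallback
            try:
                result = func(*args, **kwargs)
                self.failures = 0              # a healthy call resets the counter
                return result
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.tripped = True        # trip, like a breaker or relief valve
                return self.safe_default

    # Usage: breaker = CircuitBreaker(safe_default=0.0); breaker.call(read_sensor)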


Sure, but that's out of safety necessity, not because it's the optimal approach. If building bridges, power stations, cranes, etc. were essentially free outside of drawing/planning them, and they could be deployed outside of "production" infrastructure, I'm sure that industry would be light-years ahead of where it is.

Building them quicker and cheaper without compromising safety is an unsolved problem though.

The engineers don't necessarily have enough power for that to work. And the current engineers on site aren't necessarily those at fault for a disaster that was caused by incorrect design or construction 30 years ago.

What I would like to see is mandatory escrow of a large percentage of profits from highly-profitable, possibly-disaster-prone industries. If you go disaster-free for 20 years you get your escrowed money with interest. If not, you lose (an appropriate amount of) it.


That's how safety-critical devices are already built, so yes. We have standardized probabilities of failure from the unexpected (e.g. SIL [0]), because mitigating 100% of risk is somewhere between impractical and impossible.

[0] https://en.wikipedia.org/wiki/Safety_integrity_level
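
For a concrete sense of those standardized bands: IEC 61508's low-demand mode assigns each SIL a range of average probability of dangerous failure on demand (PFDavg): SIL 1 is 10^-2 to 10^-1, SIL 2 is 10^-3 to 10^-2, SIL 3 is 10^-4 to 10^-3, and SIL 4 is 10^-5 to 10^-4. A minimal sketch (hypothetical helper, just encoding those published bands):

    # Hypothetical helper: map an average probability of dangerous failure
    # on demand (low-demand mode, IEC 61508) to its SIL level, or None if
    # the PFD is too high to qualify for any level.
    def sil_level(pfd_avg):
        bands = [
            (4, 1e-5, 1e-4),   # SIL 4: 1e-5 <= PFDavg < 1e-4
            (3, 1e-4, 1e-3),   # SIL 3
            (2, 1e-3, 1e-2),   # SIL 2
            (1, 1e-2, 1e-1),   # SIL 1
        ]
        for level, low, high in bands:
            if low <= pfd_avg < high:
                return level
        return None

    print(sil_level(5e-4))   # -> 3 (roughly one dangerous failure per 2000 demands)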

