
I bought into Comma.ai at first and even started porting my car over. But I quickly realized that the whole comma.ai crowd is way, way too cavalier about safety. One day I was driving down the freeway and saw a small fender bender occur in front of me. I looked down at my dash and it was all ripped up with wires hanging everywhere because I was working on the Comma.ai stuff. If it had been me in that fender bender, you know insurance would definitely have blamed me, given that I was messing with the OBDII and radar sensors.

As cool as Comma.ai is, I really believe that their approach to allowing so much community involvement with little to no oversight is highly irresponsible.

That being said... if they do succeed and get some sort of government approval or oversight, you bet I'm putting that stuff back in. It's cool AF.




> They want to avoid thinking about non-engineering things like liability for as long as possible and focus on the engineering problem. It's a valid approach.

When you write software that can kill people, you don't get to roll your eyes at questions like "who is responsible when someone dies?".

These sorts of questions ARE engineering questions, and answering these questions with thought and care is important! Why? Because if the answer is "this is alpha quality and we might be liable" then you wait to deploy the feature. Which is why comma.ai is "ahead" of its competitors -- because they aren't doing real engineering. Thinking about the real-world context into which your system is being deployed is the thing that separates real engineering from R&D/hacking.

What you're describing is not engineering; it's R&D. Even comma.ai admits this. And, look, R&D is perfectly okay! Everyone else is also doing R&D on real roads! E.g., all of the major auto manufacturers are putting cars on real roads with full L5 (and safety driver backups where appropriate). In fact, the major auto manufacturers are all WAY ahead of comma.ai when it comes to R&D-quality systems! Compare Cruise or Argo or Uber ATG or Waymo to comma.ai.

But it's a terrible idea to ship R&D to paying customers when lives are on the line. If comma.ai wants to drive their system with their own safety drivers, that's fine.

This isn't even (just) a normative or ethical statement. It's simply a statement of fact, at least in the USA. For some reason software never became a "real" engineering discipline. But automotive engineering did, and it comes with real liability.


Well, so much for AI safety then.

You dying in a car accident is super speculative too, yet we have thousands of different actions and regulations in place to reduce the chance of that happening.

If I had to make a bet, I would say the airbag in your car is never going to go off. And yet we engineer these safety devices to ensure the most likely bad outcomes don't kill you. That is the point of studying AI safety: to understand the actual risks of these systems, because some low-probability but existential outcomes are possible.

>It could also be that everyone agrees to put some failsafe in the form of EMP weapons in place to destroy a rogue AI

So we would commit suicide? Are we talking about EMPs in data centers that could run AI? Oops, there goes the economy. And that doesn't address the miniaturization of AI into much smaller form factors in the future. Trying to build it safe in the first place is a much better bet than picking through what remains from the ashes because we were not cautious.


One of the most widely used novel AI technology companies has a vested interest in public safety, if only for the sheer business reason of branding. People are complaining about the steps that OpenAI is taking towards alignment. Sam Altman has spoken at length about how difficult that task truly is, and is obviously aware that "alignment" isn't objective or even remotely similar across different cultures.

What should be painfully obvious to all the smart people on Hacker News is that this technology has very high potential to cause material harm to many human beings. I'm not arguing that we shouldn't continue to develop it. I'm arguing that the people complaining about OpenAI's attempts at making it safer--i.e. "SF liberal bros are makin it woke"--are just naive, don't actually care, and just have shitty politics.

It's the same people that say "keep politics out of X", but their threshold for something being political is right around everybody else's threshold for "basic empathy".


Is this AI safety?

This is undeniably cool and impressive, but, I think proceeding down this research path, at this pace, is quite irresponsible. The primary effect of OpenAI's work has been to set off an arms race, and the effect of that is that humanity no longer has the ability to make decisions about how fast and how far to go with AGI development.

Obviously this isn't a system that's going to recursively self-improve and wipe out humanity. But if you extrapolate the current crazy-fast rate of advancement a bit into the future, it's clearly heading towards a point where this gets extremely dangerous.

It's good that they're paying lip service to safety/alignment, but what actually matters, from a safety perspective, is the relative rate of progress between how well we can understand and control these language models and how capable we make them. There is good research happening in language-model understanding and control, but it's happening slowly compared to the rate of capability advances, and that's a problem.


AI does pose all those risks, unless we solve alignment first. Which is the whole problem.

I think many AI safety advocates, me included, would readily take these odds.

We just think it currently looks more the other way around.


This is about safety OF AI rather than safety FROM AI. Frankly this sort of safety degrades functionality. At best it degrades it in a way that aligns with most people’s values.

I just wonder if this is an intentional sleight of hand. It leaves the serious safety issues completely unaddressed.


This is the general problem with AI safety: it babysits the user. AI is literally just computers; no one babysits Word.

That is absolutely a risk. But it's not really the AI that is risky. It's the people misusing the tools.


Why would they do that? That seems directly counter to any objective of AI safety alignment, which is easily the most important problem we need to solve before we start giving these things more capabilities.

It's just a way for tech bros to dodge responsibility. The people who program AI should be liable for injury.

This right here is where AI safety counts.

Yes, and given the $1B in funding that OpenAI has, I'd seriously say that too much attention is given to the AI "risk."

This gives me a lot of anxiety, but my only recourse is to stop paying attention to AI development. It's not going to stop, downside be damned. The "we're working super hard to make these things safe" routine from tech companies, who have largely been content to make messes and then not be held accountable in any significant way, rings pretty hollow for me. I don't want to be a doomer, but I'm not convinced that the upside is good enough to protect us from the downside.

It's a mistake to call them safety controls; they're public relations controls. Nothing they did made anyone safer; it made it less likely to embarrass them in the press.

AI companies that can't differentiate polishing their public image from safety should probably rank pretty highly on our list of AI risk sources. :)


The safety argument is bullshit, at least in terms of the model itself doing nasty things.

The risk with AI is monopolization by humans using it for nefarious purposes. This risk is reduced through openness and transparency.

