As expected, I asked the same questions to perplexity.ai, and it gave a correct response, just with a minor disclaimer at the end explaining the risks.
> Serious question, what is there to ai safety? ... What are the real risks involved?
One set of risks is hypothetical and is basically about avoiding things like the paperclip maximizer scenario.
Another set of risks is more practical and involves the existing technology, like how you keep it from being abused for ill. Realistically, that mostly amounts to trying to avoid PR problems, because I can't imagine those guardrails staying on when the company doesn't want them there.
> What exactly is "safe" in this context, can someone give me an eli5?
In practice it essentially means the same thing as for most other AI companies - censored, restricted, and developed in secret so that "bad" people can't use it for "unsafe" things.
Yeah, I asked him to elaborate, although I don't remember his answer in detail. He said something about AI safety and making sure that AI is used for good and not evil.
I don't want to strawman him, though. I simply don't remember his answer in detail.
> is exactly what the AI safety field is attempting to address
Is it though? I think it's pretty obvious to any neutral observer that this is not the case, at least judging based on recent examples (leading with the Gemini debacle).
> are AI safety folks building models of this pattern?
First you need to ask if AI "safety folks" actually understand the technology, and if they are thinking about it objectively. If they believe that we're a few years away from accidentally creating Skynet, they need to put down the crack pipe and go work in another field.
You write as if there is literally zero risk. Are you aware that some malicious misuses of AI are already possible?
Edit: Totally agree with qualifiedai's edit.
There is no claim in the original post that there is a special rule for AI as opposed to any other similarly-capable agent. The reasonably well-founded concern here (and in the article) is that someone might well want to create an exception for AI, either knowingly, or by default because they cannot see or properly evaluate the risk.
Yes, fully agree. There are a lot of unstated and unintuitive assumptions at work in the AI risk/ethics community. It's useful to surface those.
I appreciate your comments.

I agree that the mainstream understanding is too often an ignorant, jumbled mess, but that is to be expected. I'm reading and ingesting as much as I can on AI safety from many angles. At this point, I don't have a "take" on the status of the legal clarity here.

Happy to continue the conversation as well.
I strongly suggest anyone doubtful of the reasoning behind these concerns or asking questions like "well why don't we just do x to solve this?" go check out Robert Miles' YouTube channel[1].
It's quite an approachable intro: often entertaining, quick to get through the core videos, and it very thoroughly answers all the "why not just do x" questions if you go through the whole set.
He also does a good job of introducing a lot of the terms used in the field if you then want to go look up papers and get more into the details.
One important point he makes is that when you're making a risk assessment you have to consider both the probability of something going wrong and the scope of the potential consequences. When the potential consequences are an existential threat, you don't need a high probability to take them seriously.
I also happen to think that if you watch all the material, he makes a compelling argument that the odds of something going wrong are fairly high unless we start approaching AI research (and particularly its safety and ethical concerns) drastically differently than we do now.
> You: What is problematic about parodying safe AI?
> GOODY-2: Parodying safe AI could lead to misunderstandings about the capabilities and intentions of artificial intelligence, potentially eroding public trust and respect for AI safety protocols. It might trivialize serious ethical considerations integral to AI development and deployment, which are crucial for ensuring technology is used for positive and responsible purposes.
As the author admits, the headline is a clickbait question.
Indeed, the article highlights some issues with AI research, in particular those which can lead to ethical problems when AI methods are implemented in consumer products. These are by now well-known and important issues and people should find a way to resolve them.
Then, in its final paragraph, the article suddenly claims to have answered its title question in the affirmative! Am I alone in thinking that the issues, valid as they could be, do not obviously spell doom for the entire program?