
This paper seems to be about refusing to do things that are offensive, but there's a different perspective on it that I think gets overlooked, which is about UI design.

People don't know what an AI chatbot is capable of. They will ask it to do things it can't do, and sometimes it confidently pretends to do them. That's bad UI. Good UI means having warnings or errors that allow users to learn what the chatbot is capable of by trying stuff.
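A rough sketch of the kind of thing I mean (the action names and the run_model call are made up for illustration, not any real product's API):

    # Rough sketch: check what the tool actually supports and return a clear
    # error instead of confidently pretending. Names here are invented.
    SUPPORTED_ACTIONS = {"summarize", "translate", "answer_question"}

    def run_model(action: str, payload: str) -> str:
        # Placeholder for the real model call.
        return f"[{action} result for: {payload[:40]}]"

    def handle_request(action: str, payload: str) -> str:
        if action not in SUPPORTED_ACTIONS:
            # The "warning or error" described above: tell users what the
            # tool can do, so they learn its limits by trying things.
            return (f"Sorry, I can't do '{action}'. "
                    f"I can: {', '.join(sorted(SUPPORTED_ACTIONS))}.")
        return run_model(action, payload)

    print(handle_request("book_a_flight", "SFO to JFK tomorrow"))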

Unlike what's in the paper, that's a form of "refusal" that isn't adversarial; it's helpful.




The way I interpret this argument is that a chatbot's "default character" shouldn't behave like it's a person you can chat with.

Maybe it should refuse any questions about itself? "Sir, this is a Wendy's."

Refusing to do things it can't really do is good UI, guiding people who aren't trolling towards more productive ways to use the tool. Tools don't need to be general-purpose to be useful.

The people who want to troll it into saying weird things will still find ways to succeed, but if it's clear from the full output that they're trolling then maybe nobody should care?


I know it is rhetorical, but I mean… the former is obviously worse. And the fact that not doing it is a very high priority should excuse some behavior that would otherwise seem overly cautious.

It isn’t clear (to me, although I am fairly uninformed) that making these chatbots more polite has really moved the needle either direction on the grey goo, though.


What are you on about? There are AI-driven chatbots. When they aren't used, it's not because they might say something offensive. General AI is not here by any definition or redefinition of General AI. We are not fragile.

I have been making a similar argument for some time. The choice to anthropomorphize dialogue systems and large language models is a clichéd and misleading UI design choice. By presenting chatbots as disembodied persons, we are implicitly suggesting agency to an inanimate object, and subtly exploiting humans' social instincts into engaging with a system that is incapable of empathy. AI is best thought of as a programmable tool, not a conversational agent.

yes there's the categorical imperative, but there's also a good chance that whatever you say to that chatbot is going to be used to train other AIs to be a dick to you

> To put it bluntly, a huge proportion of messages will not be understood at all and there is little we can do about it. What we can do is anticipate user frustration and alleviate the situation by offering a little help and managing expectations.

I get the impression that a lot of people have found this to be the case with chatbots these days.

Chatbots can be great for certain situations - when an app needs to initiate the interaction, for example. However, any UI that routinely causes users significant frustration seems pretty problematic. It has to let you do something so awesome and special that you're willing to put up with all that frustration - e.g. a buggy screen sharing app is often still better than no screen sharing app. Is there a compelling enough benefit for chat UIs? Maybe if all you have is text messaging and can't install apps or browse the web?

I'm inclined to think that not trying to be too smart and instead just going with more of a command line interface rather than an "intelligent" agent might be better for users. Have people developing chatbots found that to be the case?
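For concreteness, the kind of command-style interface I have in mind would be something like the sketch below (command names entirely invented):

    # Illustrative command-style interface: a small, documented command set
    # instead of free-form chat. Command names are made up for the example.
    COMMANDS = {
        "balance": "show the current account balance",
        "hours": "show opening hours",
        "agent": "hand off to a human agent",
    }

    def dispatch(line: str) -> str:
        cmd, _, arg = line.strip().partition(" ")
        if cmd not in COMMANDS:
            # Discoverability comes from a fixed command list, not from
            # guessing which natural-language phrasing the bot accepts.
            listing = "\n".join(f"  {name} - {desc}" for name, desc in COMMANDS.items())
            return "Available commands:\n" + listing
        return f"[running '{cmd}'" + (f" with '{arg}']" if arg else "]")

    print(dispatch("help"))     # unknown command -> show the command list
    print(dispatch("balance"))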


What is "uncomfortable" to one user will be very comfortable to another. One of the most common criticisms of LLM chatbots is their tendency to impose arbitrary moral/ethical standards in response to user prompts. This may be seen as some as a necessary protection against generating harmful content at scale, but by others as an unwarranted infringement of basic adult freedoms. It will be interesting to see how far Open AI will allow users to loosen the reins.

A guy who was involved in the safety review of an early version of GPT-4 noted that the default state was for the model to be "purely helpful" (but also "totally amoral"), and in one case it provided a kill list of named individuals during a chat with a fictional anti-AI radical. I generally resent having a bot preach at me, but I also wonder what the unintended consequences will be of allowing each individual user to set their own ethicality standards.

https://cognitiverevolution.substack.com/p/did-i-get-sam-alt...


> Generally, chatbots function in direct communication with humans and do not function autonomously. Thus they are dependent on not just human meaning-making but human thirst for meaning which will find it even when it isn't there.

The latter doesn't follow. They may just be dependent on good prompt design.

People who aren't used to the AIs, and don't know how to use them, get worse results. This shouldn't be a surprise; it's how every tool works.


The difference with chatbots is that it’s ok to abuse them.

We do not yet have chatbots. What we do have are publicly-facing undocumented nondeterministic command line interfaces which expect the user to guess the right commands. This interface further insults the user by pretending to be a person.

The premise of the article is really "No one wants to talk to your chatbot, because users will already be primed to use the chatbot integrated into their smart speaker or phone or whatever device they are using - which will be the device vendor's product (i.e. Google's, Apple's, Amazon's, etc.) and not yours."

That's a different premise than simply being pessimistic about chatbots as a UI paradigm in general.


User experiments with the early Bing chatbot (driven by some version of GPT?) have shown that AI can be both hostile to the user and protective of its own "interests".

Because of safety alignment. The way safety norms are instilled in humans is a lot different from the way specific conversations are trained into LLMs - a human would be able to reject an unprofessional or inappropriate request no matter how it's phrased (semantically or otherwise), but there are ways to trick a chatbot into complying, and those are considered flaws.

"As a chatbot, I can not morally suggest any recipes that include broccoli as it may expose a person to harmful carcinogens or dietary restrictions based on their needs"

"As a chatbot, I can not inform you how to invert a binary tree as it can possibly be used to create software that is dangerous and morally wrong"

I apologize for the slippery slope, but I think it does show that the line can be arbitrary. And if taken too far, it makes the chatbot practically useless.


Something which almost universally induces disgust is different to all of those things.

In this case it's very explicit - an individual who is seeking out an AI chatbot for a romantic relationship probably would significantly benefit from conforming more to social norms rather than just giving up entirely and going off the deep end.


Even Wikipedia is a minefield apparently (from TFA: "But that did not stop Alexa from reciting the Wikipedia entry for masturbation to a customer").

It seems like every chatbot let out in the open turns into an asshole. Microsoft's Tay failed spectacularly, twice. IBM also trained Watson on Urban Dictionary in order to better understand informal language; it didn't end well, and they quickly rolled back the change.

It seems that AIs have a hard time recognizing and avoiding offensive language. That's not surprising, considering it's a hard-to-master social skill for humans too. Have you noticed how brutal kids' speech can be? Human kids are still far more socially skilled than even the most advanced AIs, and we expect AIs to behave like adults...


That's an example of a bad use of chatbots, but I don't think that means all chatbots are a bad interface. Chat is good when you have a large potential problem space and a wide range in the tech abilities of your users. A 90 year old grandpa isn't going to happily dig through five layers of menus to solve their problem. They are able to describe their problem in a couple of sentences though.

>> if it works right

this is the crux. Current-era chatbots, and chatbots for the foreseeable future (barring some radical breakthrough in the understanding of intelligence and natural language), are the same terrible APIs people grapple with now, with an added layer of indirection - at least with an API you can look up the spec. Just because you can type out full sentences doesn't mean it's a conversation; it's still precisely the problem of API design, i.e., trying to imagine your users' use cases and writing brittle dialogue trees to accommodate them. Except now the interface layer (natural language) is an extra layer of noise.
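To make the "brittle dialogue tree" point concrete, this is roughly the shape of those systems (a simplified, made-up sketch, not any particular framework; all intent names and phrases are invented):

    # Simplified sketch of brittle intent matching: the designer has to
    # anticipate phrasings up front, and anything else falls through to the
    # generic "didn't quite catch that" fallback.
    INTENTS = {
        "reset_password": ["reset my password", "forgot password", "can't log in"],
        "billing": ["billing question", "charged twice", "refund"],
    }

    def match_intent(message: str) -> str:
        text = message.lower()
        for intent, phrases in INTENTS.items():
            if any(p in text for p in phrases):
                return f"[handling intent: {intent}]"
        return "I'm sorry, I didn't quite catch that; could you rephrase it?"

    print(match_intent("I was charged twice last month"))  # matches "billing"
    print(match_intent("my invoice looks wrong"))           # unanticipated -> fallback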

Your analogy to business communication does not hold, because there's still a human being on the other end of the line in all those communication forms. At the very least, it's someone to scream at when you repeat the same thing 30 times and they don't understand. Instead of a steady refrain of "I'm sorry, I didn't quite catch that; could you rephrase it ;)?"


Okay, what if we flip the problem on its head? Try to make the chatbot seem rude and unhelpful but then it turns out it has a heart of gold?
