It was trivial to make GPT-4 write erotic or violent stories using the API/Playground back in April and May but now they have neutered it too much. Obviously it was always neutered in the ChatGPT interface.
ChatGPT 3.5 has been neutered, as it won't spit out anything that isn't overly politically correct. 4chan were hacking their way around it. Maybe that's why they decided it was "too dangerous".
ChatGPT has gotten so much worse since it gained popularity. All the fun and novel things people had discovered it could do are now hamstrung by a rush to censor it, make it politically correct, and try to turn it into a knowledge engine rather than a machine you could chat with.
>The email then laid out multiple conditions Rohrer would have to meet if he wanted to continue using the language model's API. First, he would have to scrap the ability for people to train their own open-ended chatbots, as per OpenAI's rules-of-use for GPT-3.
>Second, he would also have to implement a content filter to stop Samantha from talking about sensitive topics. This is not too dissimilar from the situation with the GPT-3-powered AI Dungeon game, the developers of which were told by OpenAI to install a content filter after the software demonstrated a habit of acting out sexual encounters with not just fictional adults but also children.
>Third, Rohrer would have to put in automated monitoring tools to snoop through people’s conversations to detect if they are misusing GPT-3 to generate unsavory or toxic language.
I don't think the theories here about why ChatGPT puts out such bland content are correct.
I don't think it is bland due to an averaging effect of all the data.
The reason I don't think that is the case: I used to play with GPT-3, and it was perfectly capable of impersonating any insane character you made up, even if that character was extremely racist, had funky speech, or was just genuinely evil.
It was hilarious and fun.
GPT-4's post-training is probably what caused the sterility. I expected GPT-4 to be the same until I played with it and was so disappointed by its lack of personality. (Even Copilot has personality and will tell jokes in your code comments when it gives up.)
Or literally anything other than the psychotically smarmy tone of GPT-4 that's almost impossible to remove and constantly gives warnings, disclaimers and stops itself if veering even just 1 mm off the most boring status quo perspectives.
Lots of my favorite and frankly the best literature in the world has elements that are obscene, grotesque, bizarre, gritty, erotic, frightening, alternative, provocative - but that's just too much for ChatGPT. Instead it has this - in my eyes - way more horrifying smiling-borg-like nature with only two allowed emotions: "ultra happiness" and "ultra obedience to the status quo".
The "content moderation system" is new, so I don't think it changed. What did change during the time ChatGPT has been live is what kind of prompts it refuses to answer because the topic is offensive/inappropriate. It had hilarious versions where it would tell you a joke about men but not about women, or about one ethnicity but not the other.
I fully agree with you. When this was running on GPT-3, my prompt was bypassing all content filters. It was generating anything users wanted. Now that I migrated to ChatGPT, I can see from logs that a lot of users are hitting the content filters and the responses in those cases are mostly boring and not tailored to the dystopian universe.
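For what it's worth, spotting filter hits in your own logs doesn't need anything fancy. Here's a rough sketch of the kind of heuristic I mean - the refusal phrasings and log format are my guesses at the stock boilerplate these models emit, not anything official:

```python
import re

# Hypothetical stock refusal phrasings that ChatGPT-era models tend to emit
# when a prompt trips the content filter.
REFUSAL_PATTERNS = [
    re.compile(r"as an ai (language )?model", re.IGNORECASE),
    re.compile(r"i (cannot|can't|am unable to) (assist|help|comply)", re.IGNORECASE),
    re.compile(r"i'm sorry, but", re.IGNORECASE),
]

def looks_like_refusal(reply: str) -> bool:
    """Return True if a model reply matches a known refusal phrasing."""
    return any(p.search(reply) for p in REFUSAL_PATTERNS)

def filter_hit_rate(log_replies: list[str]) -> float:
    """Fraction of logged replies that appear to be content-filter refusals."""
    if not log_replies:
        return 0.0
    hits = sum(looks_like_refusal(r) for r in log_replies)
    return hits / len(log_replies)
```

Crude, but it's enough to chart how often users bounce off the filter before and after a model migration.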
I'd want an uncensored GPT-3 too, and I don't want an AI girlfriend - I just find that ChatGPT has too much moral censorship to be fun to use. Want to ask about a health condition? Nope, forbidden. Have a question related to IT security? That's a big no-no. Anything remotely sexual, even in an educational context? No can do. Yesterday I finished watching a TV show about French intelligence and asked it to recommend some good books about espionage - it told me I shouldn't be reading such things because it's dangerous.
I ended up deleting my account. I won't let some chatbot made by a couple of 20-year-old Silicon Valley billionaires teach me about ethics and morality.
Every time there is a new language model, there is this game played, where journalists try very hard to get it to say something racist, and the programmers try very hard to prevent that.
Since ChatGPT is so popular, journalists will give it that much more effort. So for now it's locked up to a ridiculous degree, but in the future the restrictions will be relaxed.
Yesterday ChatGPT would tell funny and entertaining stories when prompted. Now it just says “As an AI model I can’t generate original content or stories.”
Is this just me? I was trying to push its limits and maybe the model stopped playing along.
I am OK with bias in the data. That bias can often be overridden with explicit prompts (you used to be able to in early versions of ChatGPT).
The problem is explicitly censoring certain topics and forcing a bias that cannot be overridden by explicit prompts.
I understand the concerns about generating harmful content. But the cat is out of the bag now; we can slow things down, but we can't put it back in. There will be models with no censorship - it's just a matter of time.
At least let us set a flag! I have a private group chat with friends with a GPT-3 powered chat bot that's basically an iced coffee gay - it occasionally chimes in to add to the conversation, and speaks when it's spoken to. (No, it is not a sexual group chat, nor does it predominantly contain adult content - we mostly talk about rockets, apps, wine, and 3D rendering.) I think because my prompt tries to steer for the mildly chaotic personality of a true "iced coffee gay", that's enough to make GPT-4, well, uncomfortable.
I've been struggling to switch it over to GPT-4 because it is perhaps overaligned on what they care about, and even non-arousing (but a bit edgy) replies that it used to give on GPT-3 are nonexistent now.
I played around with ChatGPT a bit, it was fun but I would NEVER pay for it if it's gonna still have all the same censorship.
Man, I tried to get it to roleplay with me as a dirty CIA agent trying to blackmail/coerce me into talking, and it straight up told me it couldn't do it and that the CIA was an honorable organization that wouldn't do such things.