
I believe we can reach a point where biases can be personalized to the user. Short prompts require models to fill in a lot of missing details (and sometimes they blend different concepts together into one). The best way to fill in the details the user intended would be to read their mind. While that won't be possible in most cases, some kind of personalization could improve the quality of results for users.

For example, take a prompt like "person using a web browser": younger generations may want to see people using phones, where older generations may want to see people using desktop computers.

Of course you can still write a longer prompt to fill in the details yourself, but generative AI should try to make it as easy as possible to generate what you have in mind.
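The personalization idea above could be sketched as a simple preprocessing step: expand a short, ambiguous prompt with the defaults a user's cohort likely intends before sending it to the model. Everything here (the cohort names, the expansion table, the function) is a hypothetical illustration, not a real API.

```python
# Hypothetical sketch: expand an ambiguous short prompt with per-cohort
# defaults before sending it to an image model. All names are illustrative.

USER_DEFAULTS = {
    "gen_z": {"web browser": "a person browsing the web on a smartphone"},
    "gen_x": {"web browser": "a person browsing the web on a desktop computer"},
}

def personalize(prompt: str, cohort: str) -> str:
    """Replace ambiguous phrases with the expansion this cohort likely intends."""
    for phrase, expansion in USER_DEFAULTS.get(cohort, {}).items():
        if phrase in prompt:
            prompt = prompt.replace("person using a " + phrase, expansion)
    return prompt

print(personalize("person using a web browser", "gen_z"))
# -> "a person browsing the web on a smartphone"
```

An unknown cohort simply falls through and leaves the prompt unchanged, so the feature degrades gracefully.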




I have to say, I agree that prompt engineering has become very superstitious and in general rather tiresome. I do think it's important to think of the context, though. Even if you include "You are an AI large language model" or some such text in the system prompt, the AI doesn't know it's AI because it doesn't actually know anything. It's trained on (nearly exclusively) human created data; it therefore has human biases baked in, to some extent. You can see the same with models like Stable Diffusion making white people by default - making a black person can sometimes take some rather strong prompting, and it'll almost never do so by itself.

I don't like this one bit, but I haven't the slightest clue how we could fix it with the currently available training data. It's likely a question to be answered by people more intelligent than myself. For now I just sorta accept it, seeing as the alternative (no generative AI) is far more boring.


I think the baseline should be tuned to be statistically accurate. The problem is that people leave a lot in their prompt to be implied. Some interaction designers use this as an opportunity to infill their own subjective opinion of what should be inferred, as a way to 'take care' of the user.

This isn't their only option... they could also just ask for more information.

A good approach here would be to ask the user to clarify what exactly they want before generating a person: "Do you want a random depiction or a specific depiction?" A good tool is one that helps users be, and feel, predictably in control of it, which means making them aware of its behavioural pitfalls so they can avoid them if they want to.
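That clarify-before-generate step could look something like the sketch below: detect an ambiguous term and ask the user rather than silently filling in a default. The keyword list and function names are assumptions for illustration.

```python
# Illustrative clarify-before-generate step: if the prompt mentions a
# person without specifics, ask the user instead of silently choosing a
# default depiction. Keyword list and names are assumptions.

AMBIGUOUS_TERMS = {"person", "someone", "people"}

def needs_clarification(prompt: str) -> bool:
    words = set(prompt.lower().split())
    return bool(words & AMBIGUOUS_TERMS)

def handle(prompt: str, ask_user) -> str:
    if needs_clarification(prompt):
        answer = ask_user("Do you want a random depiction or a specific depiction?")
        prompt = f"{prompt} ({answer})"
    return prompt

# Example with a canned answer standing in for real user input:
final = handle("a person using a web browser", lambda q: "random depiction")
print(final)  # -> "a person using a web browser (random depiction)"
```

The `ask_user` callback keeps the sketch UI-agnostic; a real tool would wire it to a dialog or chat turn.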


I am interested to know if it's possible to correct biases after training without resorting to retraining and training data curation.

As for prompt engineering for GPT, it feels a bit like reading tea leaves. I'm not sure if it is possible to know for certain that a specific prompt will elicit the desired output every time.


Do different people get different prompts? How hard would it be to generate prompts based on cohorts/personas? Or at an individual level?

I don’t know how I haven’t thought of this before, but my intelligence is skewed verbally, and a lot of people on online discussion boards talking about the merits of prompt-driven UIs are going to skew towards being wordcels. So a lot of the discourse around this technology is probably biased. It’s a good point that I probably should have realized before now.

It also makes me think of a few people who I should try to convince to get into prompting…


Yes, but that requires money and training data. Tuning the prompt only requires an idea of what you want, and some skill in using the right words to get what you want.

You have a point. Can you share a prompt with that level of specificity? It's fascinating to see this new sort of science of prompt engineering.

Prompts are natural language, but you're using them with the model in a way similar to getting a split-second gut feel reaction from a human - that reaction may very well vary between people.

Sure(-ish; finetuning, particularly tuning on the specific kinds of inputs and appropriate responses applicable to the use case, can change this significantly), but the beginning of the prompt doesn't have to be the beginning of user input in an AI application.
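A minimal sketch of that last point: in an application, the user's text is usually embedded inside a larger prompt the developer controls, so the "beginning of the prompt" is not the beginning of user input. The template wording here is purely illustrative.

```python
# Sketch: the developer's template wraps the user's input, so the model
# sees developer-written text before and after it. Wording is illustrative.

SYSTEM_PREFIX = (
    "You are a customer-support assistant. Answer politely and concisely.\n"
    "Customer message:\n"
)
SUFFIX = "\nAssistant reply:"

def build_prompt(user_input: str) -> str:
    return SYSTEM_PREFIX + user_input + SUFFIX

print(build_prompt("Where is my order?"))
```

This is also why prompt-injection matters: the user's text sits inside, not before, the instructions.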

Thanks for the feedback. Right now I have the prompt generation 90% automated. There is also some post-processing I'd like to automate as well (AI sometimes comes up with some wacky results).

Learning about prompt engineering will help, but my guess is it'll evolve too with newer versions of GPT, and its significance could go down as we get bigger, higher-dimensional models.

If I'd already developed an application, I would explore ways to make it an AI-native app. The questions I'd be asking myself: is there anything in my app that needs prediction, classification, or generation? Will my app's users get any benefit out of it?

For example: if HN were my app, I could show more relevant articles based on my history or comments. But does the end user really need it? That's an answer we should dig into ourselves, though.


Thanks for the feedback on my site. That is a good point, especially for users who are not familiar with the prompt style AI generation.

You wouldn't. As the originator of the prompt, the human user is the best judge of whether the prompt accurately captures their intent.

> "Prompt engineering is best done by the model"

Well, that would be ideal, but if I type in "Middle-aged white male in full plate armor standing on a battlefield resting on a full tower shield" I will likely want to further modify the result, or style, or detail level. There will almost certainly continue to be "hacks" to get it stylized as desired. Even if I say "Painting of..." there's still a huge range of options.

I understand and agree that it's desirable to get AI prompts as close to natural language, but how do you quantify a level of stylization in natural language? "A very very very very Michelangelo style painting of a slightly slightly slightly Middle-aged white male..."
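One partial answer to the quantification question: some Stable Diffusion front-ends support a numeric emphasis syntax like `(term:1.4)` instead of piling on adverbs, though support varies by tool. A small sketch of building such a prompt string (the helper and syntax details are assumptions about those UIs, not a standard):

```python
def weighted_prompt(terms):
    """Format (term, weight) pairs using the (term:weight) emphasis syntax
    found in some Stable Diffusion front-ends; weight 1.0 is neutral."""
    parts = []
    for term, weight in terms:
        parts.append(term if weight == 1.0 else f"({term}:{weight})")
    return ", ".join(parts)

print(weighted_prompt([
    ("painting in the style of Michelangelo", 1.4),
    ("middle-aged white male", 1.0),
    ("full plate armor", 1.1),
]))
# -> "(painting in the style of Michelangelo:1.4), middle-aged white male, (full plate armor:1.1)"
```

Numbers at least make "very very very" reproducible, even if what a given weight means still depends on the model.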

I think prompt engineering will change quickly, and to keep up, it could potentially be a 'profession' that is very specific to the model. I don't think that's a bad thing, but I would agree that it will likely not employ as large a number of people as perhaps perceived.


Does it matter who generated the text people don't bother to read? And how will lack of attention to detail affect prompt generation?

They need that info. Your prompts and all other people's prompts are super valuable.

It's basically reading your personal thoughts as you go through the day.

They will be pushing for more advanced AI as soon as our hardware can run past versions locally.


It's almost like the "I" should be in quotes. Because if anyone else in the world gave ChatGPT the same prompts, they would get more-or-less the same story. (visual generative AI is different).

I think this makes things like college essay generation less scary. If two students prompt ChatGPT with "write a personal statement for the common application for college" or whatever, ChatGPT will more-or-less produce the same generic statement, with minor variations. It cannot personalize for each student because it doesn't know anything about either student. Prompt engineering? Okay, sure, but by the time I have written a prompt that is comprehensive enough to make an essay that describes me, I've basically written the essay myself.

So anyway, I imagine people will try to copyright output from generative AI responses - interesting question about "who" produced that output.


Disagree. Some examples.

If I have an LLM in a video game that generates NPC dialog, I might want to feed more information to the prompt based on things my character has done in the game so the NPC dialogue is more relevant to me.

Maybe I want to inject the user’s location or the current weather in Sunnyvale, CA for the specific query. Or maybe I want to inject that the user is currently at Disneyland.

Maybe the user really likes a specific tv show and I want to let the model know that. Or maybe the specific TV show was DMCA’d by an IP owner and I need to put that in the prompt.

Do I need to detect that the user is below 13 yrs old and use a different prompt (or a different model altogether)?

Maybe I want the model to only ever respond with JSON without the user needing to specify. It needs to be clarified in the prompt with no way for the user to override it.

Models can be simplified to: input to output. Prompt engineering will be engineering the inputs.
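The examples above (game state, location, weather, age gating, blocked topics, JSON-only output) can be pulled together into one sketch of "engineering the inputs". Every field, string, and helper here is hypothetical; a real app would pull these values from its own services and pass the result to whatever model API it uses.

```python
# Hypothetical sketch: assemble an NPC-dialogue prompt from per-request
# context before calling the model. All names and wording are illustrative.

def build_npc_prompt(player_history, location, weather, user_age, blocked_topics):
    parts = ["You are an NPC in a fantasy video game. Reply in character."]
    # Output-format constraint the end user cannot override:
    parts.append("Respond only with JSON of the form "
                 '{"dialogue": "..."} and nothing else.')
    if user_age is not None and user_age < 13:
        parts.append("Keep all content appropriate for young children.")
    if blocked_topics:
        parts.append("Never mention: " + ", ".join(blocked_topics))
    parts.append(f"The player is currently in {location}; the weather is {weather}.")
    parts.append("Recent player actions: " + "; ".join(player_history))
    return "\n".join(parts)

prompt = build_npc_prompt(
    player_history=["rescued the blacksmith", "sold a rusty sword"],
    location="the village square",
    weather="light rain",
    user_age=11,
    blocked_topics=["a certain TV show"],
)
print(prompt)
```

Because the constraints are concatenated by the application, not typed by the user, the user never sees (or edits) them, which is the point being made above.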


While you're at it, make it mandatory to publish the prompt with the published result for text-based generators.

I would love to see all the biases and intentions that went through the creator's mind for certain pictures and texts, and what the system associated with that.

