Hacker News

> I do however suspect that if you just add an ever so tiny (intelligent) human check to the mix, the use and outcome of any such tools will become so much better. I suspect that will be true for a long time into the future as well.

I love this paragraph. I think that generative AI companies, especially OpenAI, have completely dropped the ball when it comes to their marketing.

The narrative (which these companies encourage and are often responsible for) is that AI is intelligent and will replace humans in the near future. So is it really a surprise when people do things like this?

LLMs don’t shine as independent agents. They shine when they augment our skills. Microsoft has the right idea by calling everything “copilot”, but unfortunately OpenAI drives the narrative, not Microsoft.




It's also a better company strategy to sell augmentation rather than replacement. Like, advertise that customers can get twice as much done, not that they can get the same amount done with half the effort.

If somebody spends $10M on labor, then at best you can charge $10M to replace their labor costs. Let's say that's 1,000 people.

If you instead argue that those people are now 2x as efficient, you can sell the company on the idea of paying for 2,000 seats when it grows.


That assumes the company actually needs 2x the work.

If not, the argument becomes:

a) Get rid of 1000 people

b) Get rid of 500 people by making the other 500 2x as efficient.

Option (a) is clearly better.


Option A is not clearly better because it assumes you can actually completely get rid of the 1000 people.

I don't think we've seen much evidence of this.

If anything, I do believe I've seen evidence that option B is more realistic and doable.

It always baffles me, though, how people pick what to believe with zero supporting evidence, just because it sounds better.


It's not zero context; this thread is about using AI to augment human productivity vs. AI to replace humans.

The supporting evidence is just the math. (A) If I sell you a product that makes your employees twice as productive, then my revenue scales with your employee count. (B) If I sell you a product that eliminates your employees, then my _maximum_ revenue is your current labor spend. With (A) my revenue has no cap, while with (B) it's capped at your current headcount. I also didn't invent this approach, so there are other people who think this too.
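The arithmetic above can be sketched with made-up numbers (the per-seat price and headcounts below are purely illustrative, not from the thread):

```python
# Hypothetical comparison of the two pricing models:
# (A) augmentation: revenue scales with the customer's headcount over time.
# (B) replacement: revenue is capped at the customer's headcount when you
#     replaced their labor, no matter how much the business later grows.

PRICE_PER_SEAT = 100  # illustrative monthly price per employee


def augmentation_revenue(current_headcount: int) -> int:
    """(A) Each employee uses the tool, so revenue tracks headcount."""
    return current_headcount * PRICE_PER_SEAT


def replacement_revenue(headcount_at_sale: int) -> int:
    """(B) The replaced workforce is fixed at the time of sale."""
    return headcount_at_sale * PRICE_PER_SEAT


# Customer has 1,000 employees today and grows to 2,000 later.
revenue_a_later = augmentation_revenue(2000)   # grows with the customer
revenue_b_later = replacement_revenue(1000)    # stuck at the original cap
```

With (A), `revenue_a_later` is 200,000 versus 100,000 for (B): the augmentation seller's revenue doubled with the customer, while the replacement seller's stayed at the original headcount's worth.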

It's not that (B) is bad; it's just that (A) is better. It's similar to, say, selling people a cable subscription without ads: it's just better (more revenue) to sell them both a subscription and ads.


You've flipped A and B.

Current employee cost may be higher than revenue scaling with employees.


I've been using a chocolate factory analogy around this. These companies are making damn fine chocolate, without a doubt. Maybe even some of the best chocolate in the world. But they got tired of selling just chocolate and so started marketing their chocolate as cures for cancer, doctors, farmers, and all sorts of things that aren't... well... chocolate. Some people are responding by saying that the chocolate tastes like shit and others are true believers trying to justify the fact that they like the chocolate by defending the outrageous claims. But at the end of the day, it's just chocolate and it is okay to like it even if the claims don't hold up. So can't we just enjoy our chocolate without all the craziness? This seems to be a harder ask than I've expected.

If a chocolate factory was making deceptive claims about curing cancer, then -- regardless of chocolate quality -- I think a lot of people would very reasonably:

1. Stop eating that chocolate

2. Preface every recommendation of that chocolate with a clear disclaimer

I don't think it would be ethical to continue recommending the chocolate, only mentioning its benefits and being silent about the drawbacks.


The chocolate is useful though. So personally I preface it. But I don't know how to accurately communicate "Hey, the chocolate is tasty, but doesn't cure cancer, can we stop saying it does" without it being interpreted as "chocolate is horrible and will summon a random number of parrots who will drop paperclips on you until you die." I'm weirded out by how hard this is to communicate.

If the chocolate didn't have such utility I'd fully agree with you; that's the only slight disagreement I have. I definitely agree it is unethical to sell the chocolate in this way, or to oversell it in any way. Likewise, I think it is unethical to deny its tastiness and exaggerate your dislike for it.

