
But that has already happened. There is no way to distinguish between human and machine-written text.



I expect it would be much harder to judge whether text (instead of audio/speech) has been written by a machine

Computers can now produce text that is indistinguishable from (badly) human-produced text. Arguably they always could; it’s less bad now than ever.

The “Turing test”/Chinese Room school of thought would assume that as the text becomes better, it is at some point good evidence of reasoning behind the scenes. I never bought that: there should be “easier” ways (than reasoning) if you just want to produce human-level text output.


Isn't this exactly the same problem as detecting whether text was AI-generated, which is largely agreed to be impossible?

If that's the case then they have little use. We'll be reading AI-created text from now on and we won't even know it. I'm not sure how I feel about that.

Human written text has been indistinguishable from machine written text for a very long time. We've still managed to maintain chains of trust to discern legitimate messages with decent success rates.
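
As a rough illustration of what such a chain of trust looks like in practice, here is a minimal sketch of signing and verifying a message with Ed25519 via Python's cryptography package; the keys and message are placeholders, and a real setup would distribute the public key over some already-trusted channel:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The author signs the message with a private key only they hold.
    private_key = Ed25519PrivateKey.generate()
    message = b"This text is vouched for by a known sender, however it was produced."
    signature = private_key.sign(message)

    # Anyone holding the matching public key can check the message wasn't forged.
    public_key = private_key.public_key()
    try:
        public_key.verify(signature, message)
        print("signature valid: trust the sender, not the prose style")
    except InvalidSignature:
        print("signature invalid: reject the message")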

Yeah but that’s not the point. The point is whether you can tell that an AI wrote it and not a human.

It is also worth mentioning that AI development is specifically optimizing to reduce the difference between human- and AI-generated text. In the end it is simply an arms race that the detection tools won't win, as there won't be a conclusive difference left to detect.

That’s not true, though! It’s written for humans to read, not for machines to parse, and any human reading this will realize what they’re supposed to be.

If you can detect AI-generated text you can detect human-"generated" text, and vice-versa. But currently, there is no way to do either.

If text generation were done with an adversarial AI, it would by definition be impossible to detect with AI, but the output would still not necessarily be at a human level of quality.

In that sense, only a human is able to detect writing that's not at human quality.


Whether text is AI generated or not doesn’t matter. It’s whether we can detect if the text is low or high quality that matters.

I doubt that, looking at how bad the SOTA in detecting generated text is, or conversely, how human-like the generated text has become.

In which case it really doesn't matter whether the text was written by a human author or was AI-generated. The effect would be the same if an AI detector (mis)classifies the content as generated.

Does that warrant the author to be fired though?


> Detect if a passage is written by human or AI. If you can't get in, it means we reached AGI.

So we reached AGI decades ago, then? Text generators aren't rocket science, and carefully cherry-picked results from even a simple Markov chain will be indistinguishable from human writing.
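
For the skeptical, here is a minimal sketch of such a word-level Markov chain generator in Python; the corpus file and chain order are placeholder assumptions, and with enough cherry-picking its short outputs can pass for human prose:

    import random
    from collections import defaultdict

    def build_chain(text, order=2):
        # Map each run of `order` consecutive words to the words observed after it.
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, length=50):
        # Random-walk the chain to emit up to `length` words.
        state = random.choice(list(chain.keys()))
        out = list(state)
        while len(out) < length:
            followers = chain.get(tuple(out[-len(state):]))
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    corpus = open("corpus.txt").read()  # any plain-text corpus (placeholder path)
    print(generate(build_chain(corpus)))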


If generator output is truly indistinguishable from human output, then who cares? We've won.

It reminds me of this xkcd: https://xkcd.com/810/

> But I feel like the technology to write fake text is inevitably going to outpace the ability to detect it.

Counterintuitively, this isn't always true! For example, spam detectors have outpaced spam generation now for decades.
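
For context, those classic spam filters are simple statistical classifiers. Below is a minimal sketch of a naive Bayes filter with scikit-learn; the training messages are invented placeholders, and a real filter would be trained on a large labelled corpus:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Tiny invented training set; real filters learn from many labelled messages.
    messages = ["win a free prize now", "meeting moved to 3pm",
                "claim your reward click here", "lunch tomorrow?"]
    labels = ["spam", "ham", "spam", "ham"]

    # Bag-of-words counts feeding a multinomial naive Bayes classifier.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(messages, labels)

    print(model.predict(["free reward, click now"]))  # expected: ['spam']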


As far as I understand, computers cannot detect auto-generated text, because if they could, the developers would have an obvious way to improve the generators.

But since most of my reading time goes to books and whitepapers, I can distinguish a great piece of human work from anything else, though only in the disciplines I am interested in. It is as easy as detecting a robot on the phone.


It would definitely be hard to pick out the AI-generated paragraphs from the real ones.

I ran the supposedly AI-generated text through an AI detector; it's not hard to do.

0% probability it was AI-generated.
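
For anyone curious how such detectors are typically built, a common heuristic is to score text by its perplexity under a language model, on the theory that machine-generated text looks unusually predictable to the model. Here is a minimal sketch using GPT-2 via Hugging Face transformers; the cutoff of 20 is an arbitrary illustration, not a calibrated threshold, and this is not necessarily how the detector used above works:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text):
        # Mean cross-entropy of the text under GPT-2, exponentiated.
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss
        return torch.exp(loss).item()

    score = perplexity("The passage you want to check goes here.")
    # Low perplexity *suggests* machine generation; 20 is only an illustrative cutoff.
    print(score, "likely machine-generated" if score < 20 else "likely human-written")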

Bad human writing exists too.


Indistinguishable from human writers.
