
The same technology used to write the bullshit will be used to spot it.
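This is roughly the idea behind perplexity-based detectors: score a passage with the same kind of language model that could have written it, and treat text the model finds suspiciously predictable as a weak "machine-written" signal. A minimal sketch, assuming a Python environment with torch and transformers installed; the GPT-2 choice and the threshold are illustrative placeholders, not a calibrated detector:

    # Sketch: flag text that a language model finds "too predictable".
    # Low perplexity under the model is only a weak hint, not proof.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Average per-token perplexity of `text` under GPT-2."""
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            out = model(enc.input_ids, labels=enc.input_ids)
        return torch.exp(out.loss).item()

    def looks_generated(text: str, threshold: float = 30.0) -> bool:
        # Assumed threshold; it would have to be calibrated on known
        # human-written and model-written samples before trusting it.
        return perplexity(text) < threshold

    print(looks_generated("The quick brown fox jumps over the lazy dog."))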



A detector for such techniques would make a wonderful standard tool for all forums everywhere.

Run all media through it. Assign a "bullshit rating" to stuff.


Wow, I wonder if it would make a good bullshit detector?

Sites already appear to be able to detect that.

I bet there already is a way to detect that.

I suspect that the point of this technology is the false positives it will produce, thereby sowing further discord and distrust. They will find signals everywhere, if they even bother to filter for them at all, and will claim this is evidence that the next election is being stolen.

Wonder if there’s a market for an automated “bullshit deck” detector, the inverse of this project.

I can already imagine the innovation:

> Type over this text to prove that you are a computer.

> Human detected. Shoo, shoo!


Thus another surveillance tool is born.

You might be able to use this same technology to counteract this.

If you generate content yourself, you would have a baseline for testing an AI that spots actual fake content. You could use videos and pictures like these to train a learning AI to spot discrepancies, then report the findings in detail (a rough sketch of the idea follows below).

Makes me wonder if there is a future in forensics for this type of technology.
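If you control the generator, you can label your own output and train a detector on it, as suggested above. A toy sketch of that loop using scikit-learn; the handful of snippets below are placeholders standing in for real corpora of human-written and self-generated samples:

    # Toy sketch: train a classifier on known-real vs. self-generated text.
    # In practice this needs thousands of samples and stronger features.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    real_texts = [
        "Rolled back the driver and the crash went away, for what it's worth.",
        "We hiked that trail in June; bring more water than you think you need.",
        "The second season dragged, but the finale mostly landed for me.",
        "Tin the soldering iron tip before every session or it oxidizes fast.",
    ]
    fake_texts = [
        "In conclusion, this topic is very important and has many aspects.",
        "This product is amazing and I would recommend it to everyone always.",
        "Overall, many people think many things about this interesting subject.",
        "The weather today is a day with weather that is happening outside.",
    ]

    detector = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    detector.fit(real_texts + fake_texts,
                 [0] * len(real_texts) + [1] * len(fake_texts))

    # Probability (per this toy model) that a new snippet is generated.
    snippet = "Overall, this topic has many important aspects to consider."
    print(detector.predict_proba([snippet])[0][1])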


https://www.google.com/url?sa=t&source=web&rct=j&url=http://...

It's opinion, but there are definitely people out there who support this approach, such as Neil Postman. It's called crap detection.


I wonder if some sort of clipboard tool to detect and warn of sneaky-looking things might be even more convenient.
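One guess at what "sneaky-looking things" might mean is invisible or look-alike Unicode: zero-width characters, or Cyrillic letters standing in for Latin ones. Under that assumption, the check reduces to a small scan over pasted text; hooking it to the actual clipboard (e.g. via pyperclip) is left out here, and the character sets are illustrative, not exhaustive:

    # Sketch of a paste-time warning for deceptive Unicode: zero-width
    # characters and non-ASCII letters that can imitate ASCII ones.
    import unicodedata

    ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

    def suspicious_chars(text: str):
        findings = []
        for i, ch in enumerate(text):
            if ch in ZERO_WIDTH:
                findings.append((i, "zero-width character"))
            elif ord(ch) > 127 and unicodedata.category(ch).startswith("L"):
                findings.append((i, f"non-ASCII letter: {unicodedata.name(ch, 'UNKNOWN')}"))
        return findings

    sample = "pay\u200bpal.com with a Cyrillic \u0430"
    for pos, why in suspicious_chars(sample):
        print(f"offset {pos}: {why}")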

Or smart detection. :)

If they replace the reader with good computer vision technology which scans documents from cameras watching your desk, this could go really far...

Trivial solution: make a tool that pumps tons of fake data to the point where no algorithm can separate the signal from the noise.

I would bet money that this technique is already well known and has been used by certain intelligence agencies.

I’d love to see a blind test to see what we really can distinguish. I think I’d detect generated text at better than chance, but I’m not 100% sure.
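A blind test like that is easy to run on yourself: shuffle labelled human and generated snippets, hide the labels while you guess, then compare your hit rate to the 50% you would get by chance. A minimal sketch, with placeholder snippets standing in for real samples:

    # Minimal self-administered blind test: present shuffled snippets,
    # record guesses, compare accuracy to chance (50%).
    import random

    human_snippets = ["...human-written sample 1...", "...human-written sample 2..."]
    generated_snippets = ["...model-written sample 1...", "...model-written sample 2..."]

    items = [(s, "human") for s in human_snippets] + \
            [(s, "generated") for s in generated_snippets]
    random.shuffle(items)

    correct = 0
    for text, truth in items:
        print("\n" + text)
        guess = input("human or generated? ").strip().lower()
        correct += (guess == truth)

    print(f"\n{correct}/{len(items)} correct "
          f"({100 * correct / len(items):.0f}% vs 50% by chance)")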

Since it likely just monitors for some keywords, this seems very plausible.

It's like a high-tech version of profiling.

I wonder if this could be used to detect shills and astroturfers.
