
> Omegle was always a horny and abusive place and a place that was so very skin colour sensitive

So they had PHP or Python code that did skin detection and altered the code branches according to melanin level? That's interesting; can you elaborate on why they did such a thing?




> But I really wonder what the authorities were doing - if paedophiles were using the platform, sounds like it would have been easy to identify and trap them

You didn't need an account to use Omegle, and when it matched you with someone, the chat/video went peer-to-peer, directly between your computer and theirs. Not much to go on if you're trying to identify the person on the other end.


> How did I know she was non-white before clicking the link?

Facial recognition's track record precedes it: the darker the skin, the higher the false-positive rate. https://duckduckgo.com/?hps=1&q=facial+recognition+skin+colo...

Surveillance abettors in general (and LEO specifically) haven't been overly put off by that feature.


>Perhaps if they were dealing with highly sensitive information then it makes sense to make those kinds of disclosures, but for novelty headshot generator app?

Such a model can be used to generate nudes of you.


> Until uncensored models are generally available, these novelty models will always be less-than.

The most popular generative model on HuggingFace at the time of this comment is Pygmalion 6B, a model that I believe is fine-tuned on top of Alpaca to generate porn. I couldn't find the data source, though, so I don't know what kind of data went into it. And Facebook's "leaked" LLaMA, while not fine-tuned for conversation, carries several warnings about its potential for offensive content.

If I read the instructions correctly, mlc-ai is loading "plain" Alpaca, which is great for conversation but, as you noticed, rather conservative. I don't think this is a bad idea - perhaps it's better if we don't inflict racist AI on unsuspecting users. Try shopping around for other models.

Edit: I repeated your experiment with other models (using another library; a rough sketch is below). They had no objections to generating offensive-yet-unfunny jokes.

[1] https://huggingface.co/PygmalionAI/pygmalion-6b
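
If you want to repeat the comparison yourself, a minimal sketch with the Hugging Face transformers library looks roughly like this. The model name simply comes from [1], the prompt and generation settings are arbitrary, and a 6B model needs a fair amount of RAM/VRAM:

    # Minimal sketch: load an alternative chat model and generate a reply.
    # Requires transformers + accelerate; swap model_name for whichever
    # checkpoint you want to compare against mlc-ai's "plain" Alpaca.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "PygmalionAI/pygmalion-6b"  # see [1]
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

    prompt = "Tell me a joke about programmers."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=100, do_sample=True)
    print(tokenizer.decode(output[0], skip_special_tokens=True))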


> I believe they're using some automated filtering (see e.g. https://www.wired.com/story/tumblr-porn-ai-adult-content/). I assume it's easier to train an AI to recognise pornography generally, rather than a specific kind -- and now that I think about it, the process of training an AI to recognise child porn sounds extremely unpleasant and legally dubious.

Quite the opposite - it's much easier to allow pornography and ban only child pornography than it is to create an automated system to detect pornography.

PhotoDNA makes it easy to find matches, which can then be used to uncover the networks of people posting child pornography; newly discovered images are then added back to the database. It doesn't require any computer vision at all.
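
PhotoDNA itself is proprietary, but the matching step is conceptually just a nearest-neighbour lookup over perceptual hashes. A rough stand-in sketch using the open-source imagehash library (not PhotoDNA; the file names and distance threshold are placeholders):

    # Rough sketch of perceptual-hash matching. imagehash's phash stands in
    # for PhotoDNA here; "known_*.png" are placeholder files representing a
    # database of previously identified images.
    from PIL import Image
    import imagehash

    known_hashes = [imagehash.phash(Image.open(p))
                    for p in ["known_1.png", "known_2.png"]]

    def is_match(path, max_distance=5):
        h = imagehash.phash(Image.open(path))
        # Hamming distance between 64-bit hashes: a small distance means
        # "visually the same picture" despite re-encoding, resizing, etc.
        return any(h - known <= max_distance for known in known_hashes)

    print(is_match("uploaded.jpg"))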

By contrast, banning all pornography does require computer vision of some sort, and that's much more difficult, as evidenced by how terrible the new Tumblr NSFW content detector is.


> I assumed that these are reverse engineered from legitimately illegal and problematic porn of known origin

I would assume these were engineered by taking the perceptual hash values, using distance from the hash values in the DB as an error function, starting with an innocuous image, and iterating to a collision for each target.
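
A crude sketch of what that iteration could look like, assuming you already have some arbitrary target hash in hand; this is illustrative hill climbing only, and published collision attacks are considerably more sophisticated:

    # Crude hill climbing: randomly perturb pixels of an innocuous RGB image
    # and keep changes that move its perceptual hash closer to a target hash.
    import random
    from PIL import Image
    import imagehash

    def approach_target(img, target_hash, steps=100_000, tolerance=4):
        px = img.load()
        best = imagehash.phash(img) - target_hash  # Hamming distance = error
        for _ in range(steps):
            x, y = random.randrange(img.width), random.randrange(img.height)
            old = px[x, y]
            px[x, y] = tuple(min(255, max(0, c + random.randint(-32, 32)))
                             for c in old)
            dist = imagehash.phash(img) - target_hash
            if dist < best:
                best = dist          # keep the perturbation
            else:
                px[x, y] = old       # revert it
            if best <= tolerance:
                break
        return img, best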


> Would be super creepy though.

You're not thinking evil enough. Provide app for free, then use face recognition to allow people to pay to look better naked through the app.


>Google’s Vision API doesn’t even flag one of the images as violating

We have used AWS's Rekognition API for moderation on our dating platform, covering over 200,000 images per month. As far as nudity detection is concerned, Rekognition performs very well.

I tested it against the Hell Girl image by TB Choi and it detects the nudity[1] and also detects the weapons under general object/scene detection[2] (a sketch of the call is below).

But I would warn against using Rekognition for anything related to gender, as it is quite biased and performs poorly for people with darker skin. I have raised concerns about the bias in the Rekognition data set with the AWS team, and other media outlets have covered it at length.

With that being said, I feel sad that we are in a state where such beautiful art has to be moderated, whereas applications exploiting children are given a free run.

[1]:https://imgur.com/a/FyJ5V56 [2]:https://imgur.com/a/LlfS7wO
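
The moderation check itself is a single call to Rekognition's moderation endpoint; a minimal boto3 sketch, with the bucket, key, and confidence threshold as placeholders:

    # Minimal sketch of an image-moderation check with AWS Rekognition.
    import boto3

    rekognition = boto3.client("rekognition")

    def moderation_labels(bucket, key, min_confidence=80):
        resp = rekognition.detect_moderation_labels(
            Image={"S3Object": {"Bucket": bucket, "Name": key}},
            MinConfidence=min_confidence,
        )
        # Each label carries Name (e.g. "Explicit Nudity"), ParentName
        # and a Confidence score.
        return [(l["Name"], l["Confidence"]) for l in resp["ModerationLabels"]]

    print(moderation_labels("profile-uploads", "users/12345/photo.jpg"))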


> How specifically could that play out in this instance?

As AI is famously bad at matching dark-skinned individuals [0], all photos of Black kids now get flagged as CSAM, whether or not anything in them is even dubious at all.

[0] https://www.theverge.com/2020/6/24/21301759/facial-recogniti...


>Porn and AI

This comment reminds me of a disturbing one I read a few months back on an anonymous underwater basket weaving forum.

The poster was discussing those expensive "real doll" (?) sex robots that creep me out. Anyway, he went on to mention that he discovered the app "face app" could take these things from "uncanny valley" to essentially photorealistic. Makes sense given what the app is supposed to do, but the real mind fuck was that he said it also works on underage dolls… which, again, I was surprised to learn are apparently legal in most places.

So AI/ML can take an inanimate object and create a highly illegal, fool-most-humans-level fake image from it. That seems pretty scary to me, and we might not be far from an ML model that can essentially produce endless amounts of illegal content.

What are the implications of that? How do you even begin to police/filter that? Then I guess there’s the question of if we even should, since none of it’s real anyways.

Porn and AI is gonna get real messy.


> People are constantly trying to come up with ways to work around facial recognition technology using everything from rigged hats (if you’re out in public) to heavy pixelation (if you’re online).

Why does "online" necessarily involve facial images? I've used this image[0] for several years. I did use a cropped and fuzzed version of this[1], for a while. And other images, for other personas.

0) https://s3.amazonaws.com/keybase_processed_uploads/7c420e0c6...

1) https://russia.wcs.org/Portals/32/images/News/Vladimir%20Ars...


> Doc: From what I can tell, it only starts looking in the rooms and looking at individual people if they are reported for something.

https://darknetdiaries.com/transcript/93/

If I were Kik, I would also write a blog post about using something like this. Many, many things point at Kik only doing the bare minimum though. (If you're the type who supports moderation, apparently they're already doing too much according to much of HN.)


> it feels optimized for porn

So I was relatively unimpressed with the first few images; they looked incredibly unrealistic. But then, after reading this message, I decided to add "topless" to the notes just to see how crazy the result would be.

I was actually impressed: it returned an image that looks genuinely lifelike once I gave it an adult request, which reinforces the idea that this was optimized for porn.

For those curious, this was the result I was provided. NSFW of course, topless female gender (ai generated): https://generated.photos/human-generator/64e6c26e190809000fb...


> In the case of Tumblr, this would be a reviewer going to search, typing in something like 'tits’ and finding porn.

I just don't get it. With the current state of image-processing ML, hacking together a horizontally scalable NSFW-detection service is a matter of days.

Then you just have to mark all your NSFW content with a flag in the database and stop showing it to iOS devices. That will make people think a bit harder the next time they choose a new mobile phone.
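
A bare-bones version of that pipeline, assuming an off-the-shelf open NSFW classifier from Hugging Face (the model name is only an example and its label names may differ between checkpoints) plus a flag column in the images table:

    # Bare-bones sketch: score an uploaded image with an off-the-shelf NSFW
    # classifier and set a flag in the database; the serving layer can then
    # filter flagged rows for whichever clients need a sanitized feed.
    import sqlite3
    from transformers import pipeline

    # Example open NSFW classifier; any similar checkpoint would do, and the
    # label it emits ("nsfw" here) may differ between models.
    classifier = pipeline("image-classification",
                          model="Falconsai/nsfw_image_detection")

    def flag_if_nsfw(db, image_id, image_path, threshold=0.8):
        scores = {r["label"]: r["score"] for r in classifier(image_path)}
        is_nsfw = scores.get("nsfw", 0.0) >= threshold
        db.execute("UPDATE images SET nsfw = ? WHERE id = ?",
                   (int(is_nsfw), image_id))
        db.commit()
        return is_nsfw

    db = sqlite3.connect("media.db")  # assumes a table images(id, path, nsfw)
    flag_if_nsfw(db, 42, "uploads/42.jpg")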


> Maybe I actually know something about it.

If you do, then you'd agree that with the right setup ML can do this with very high precision. We're talking about a highly customized system trained for exactly this one purpose, not some chatbot.

> You whined at me a little while ago

You're the one whining here buddy-- remember this is about a law about to be forced on you that you find inconvenient ;) I find it suboptimal but in some sense it might be better than nothing.

> this was about "abuse, not just porn".

These are related. If you have this material, you obtained it from somewhere, even if you didn't make it yourself. With some police work, that may lead to a dark-web exchange marketplace and the actual producers.

That said, yes, there's a difference. The EU law being discussed is probably better suited to countering real-time abuse than, say, Apple's algorithm.

> Because this isn't just about stupid bullshit like your embarrassing disease. It's about people ending up in prison.

I actually agree with these two sentences, but not in the way you probably intended.

> The Stasi were not a child-friendly institution.

I was waiting for Hitler to get invoked in a discussion about using tech to combat and prevent child abuse facilitated by tech; I was not disappointed.


> everyone using them would almost immediately start using it to generate nude images of the people they know in real life and other very scummy things.

Even if this were true, I don't see what is scummy about it. Is it scummy for you to imagine people naked? If it's not scummy for you to do it, why is it scummy for you to use a computer to do it?

It's not like you're getting actual photographs. It's all made up.


> My only hope is that this extreme enshittification of online images will make people completely lose trust in anything they see online, to the point where we actually start spending time outside again.

Well, in a weird way, it will provide cover. You could post nudes of yourself online and just explain it away as bad actors using AI.


> It's immediately and self-evidently obvious that no end-user in 2007 consented to photos of their 2007 era teenage self being used to train an AI how to identify an emo kid.

I can think of worse things than that which might be hidden away for public scraping.


>> An engineer can quit if the project is unethical.

Is creating face recognition software unethical? Your answer isn't really important; what matters is that different people will classify it differently. I thought it was creepy as f* when Facebook started wanting to automatically tag people in photos. But if that's all the tech was for, it may well be ethical, if creepy to some. And yet, face recognition is really all that was used in this case - matching up porn images with social media ones.

