>> An engineer can quit if the project is unethical.
Is creating face recognition software unethical? Your answer is not really important; what matters is that different people will classify it differently. I thought it was creepy as f* when facebook started wanting to automatically tag people in photos. But if that's all the tech was for, it may well be ethical, if creepy to some. And yet, face recognition is really all that was used in this case: matching up porn images with social media ones.
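For what it's worth, that matching step is trivially accessible now. Here's a minimal sketch using the open-source face_recognition Python library (the file names are hypothetical stand-ins):

```python
# Minimal sketch of matching a known face against an unlabeled photo,
# using the open-source face_recognition library.
# File names are hypothetical stand-ins.
import face_recognition

# Encode the face in a labeled photo (e.g. a social media profile picture).
known_image = face_recognition.load_image_file("profile_photo.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode every face found in an unlabeled photo.
unknown_image = face_recognition.load_image_file("unlabeled_photo.jpg")
unknown_encodings = face_recognition.face_encodings(unknown_image)

# Check each detected face against the known encoding.
for encoding in unknown_encodings:
    if face_recognition.compare_faces([known_encoding], encoding)[0]:
        print("Probable match found")
```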
>“He said because they are media it may be ok but the average joe and company should not get any ideas of doing the same thing.”
Or else what?
How would they even know this was done?
All facial recognition tech benefits from plausible deniability. You could just as easily say a face was recognized by an individual who saw the photo online and sent an anonymous tip. They have no leverage.
> Does having a machine analyze your face invade your privacy any more than having humans analyze your face?
Seriously? Yes, of course it does, by a gigantic amount; this should be maddeningly self-evident. Human beings do not, for instance, have instant recall of hundreds of millions of names and faces.
> What exactly is the argument against AI-generated child porn?
As in, something generated by a photorealistic image generator you're building a business around?
The fact that it is a serious crime in many jurisdictions, and that even where it isn't, photorealistic child porn images that get noticed anywhere are going to result in uncomfortable conversations for everyone involved in the process of establishing that they aren't evidence of a crime.
> For example, if next month you developed a model that could produce extremely high quality video clips from text and reference images, you did a small, gated beta release with no PR, and one of your beta testers immediately uses it to make e.g. highly realistic revenge porn.
As I understand it, revenge porn is seen as problematic because it can lead to ostracization in certain social groups. Would it not be better to regulate such discrimination? The concept of discrimination is already recognized in law. This would equally solve for revenge porn created with a camera. The use of AI is ultimately immaterial here. It is the human behaviour that results from witnessing the material that is the concern.
> There should be a human in the loop to separate medical images from exploitative ones.
No, there really should not. I would not want a facebook employee to look at my pictures. I don't use their services, but the thought is pretty off-putting. The idea that these companies have to police content is what is wrong.
There are other ways to get to offenders here. An environment that takes good care of kids will spot it. Not some poor fella that needs to look at private images.
> No matter what, putting images into your machine then selling the output generated with them and not compensating the original creators is going to be seen as problematic. Machines aren't people.
What about a company where you submit images and it tells you which faces are in them?
> I like the feature that automatically tags people in pictures.
Yeah, when it first launched I thought "this is what facial recognition is for," completely oblivious to the privacy concerns.
In a world where privacy wasn't exploited:
A photo album that automatically finds you in other people's photos is really awesome. Imagine you went to a concert and your phone died; there would likely be dozens of photos with you in the frame. You could amass a treasure trove of unexpected perspectives of your memories, from complete strangers.
Fun ideas that are completely incompatible with reality...
> pretend that any misuse was entirely separate to you is at best naive.
No, it is naive to pretend facial recognition is worth anything when creating tools to defeat it is a mere academic exercise of reading papers and implementing the algorithms described in them.
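To make that concrete: the fast gradient sign method (Goodfellow et al., 2014), one of the simplest published evasion techniques, fits in a dozen lines of PyTorch. This is only a sketch; model, image, and true_label are assumed placeholders, not an attack on any particular system:

```python
# Sketch of the fast gradient sign method (FGSM): perturb an image just
# enough that a classifier (e.g. a face-recognition model) gets it wrong.
# `model`, `image`, and `true_label` are hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel in the direction that increases the loss,
    # then clamp back to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```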
> Making and distributing porn of somebody using their likeness without their consent is unethical behavior
How do you propose we define likeness here? Sure, if it's shared and claims to be of a named celebrity, that must meet the bar, but how closely must a fake resemble the original person before it becomes unethical?
If a person finds a celebrity attractive and bases a fictitious character on them, is that a problem? What if the haircut is the same, or the eyes and nose are similar? If it goes to court, how personal can the defendant get to show discrepancies between the fake porn and the actual person?
Don't get me wrong, I really wish people wouldn't use tech for this kind of stupid shit. I just don't think we will ever be able to draw clear and predictable lines around what does and does not break the law.
> You don't want to be "that guy" constantly demanding that your friends pull down any picture that you show up in, a request your friends will probably ignore anyway, and which will greatly diminish your personal likability.
Mostly because it doesn't work. I've always been that guy, and people still constantly take my picture. There's a lot of money in convincing people that taking and publicly sharing pictures of oneself and everything one comes into contact with is unbelievably fulfilling.
> I don't really think there's any way around it and I am in fact surprised that people haven't already made a public "search by face" engine. Correlate this data with Facebook or some other all-seeing collections and the technology to dynamically identify every individual that enters a building via surveillance footage is already there. The only thing holding it back now is a) political correctness and b) the practical difficulty of extracting all of this data at a large scale into a format that can be easily cross-referenced. Both of these are permeable and temporary restrictions, and access to the data is already a non-issue for some actors.
> While the difference between innocent images and something explicit is easy for a human to identify, I’m not sure I’d trust AI to understand that nuance.
I recall a story from several years ago where someone was getting film developed at a local drugstore, and the employee reported them for CP because of bath photos. This was definitely a thing with normal, everyday humans before computers.
> > ... people could still use it to generate extremely offensive images with little effort. I think people see that as profoundly dangerous.
> Do they see it as dangerous? Or just offensive?
I won't speak to whether something is "offensive", but I think that having underlying biases in image classification or generation has very worrying secondary effects, especially given that organizations like law enforcement want to do things like facial recognition. It's not a perfect analogue, but I could easily see some company pitching a sketch-artist-replacement service that generates images based on someone's description. The potential for inherent bias makes that kind of thing worrying, especially since the people in charge of buying it are unlikely to notice or care about the caveats.
It does feel like a little bit of a stretch, but at the same time we've also seen such things happen with image classification systems.
> She wants to make sure at the very least that some unaffiliated AI project isn't just stealing these people's work, names, likenesses, etc., and selling it.
Sounds like thinly veiled Luddism to me; why would it be necessary for AI porn to steal anybody’s likeness? I’m sure 100% generated content is fine without any need for that.
It’s hard to drum up any sympathy for such an exploitative industry (exploitative for both the participants and the customers). Am I supposed to care that pornographers’ ability to profit off their unhealthy and addictive product is in jeopardy?
> Seems a way over engineered way to activate the display and is pretty much in place for future use.
This is a very generous reading. A more parsimonious one is that they are actively doing facial recognition to get age and gender, and are arguing this is legal because they don't store the faces after acting on them, as the quoted marketing material suggests.
(I would expect laws banning unconsented facial recognition to ban it regardless of whether the faces end up in a database afterwards.)
> Does anyone seriously think the FAANG corporate boards will ever in their lives have to undergo this?
Yes, me. All of them will already be used to being hounded by paparazzi, and it would be very human for them to mistake that for normal. Two of them (Facebook and Apple) already use face recognition in their products.