If you know how to build a bug-free system with content classification rules that are properly implemented for 2 billion people in every conceivable circumstance, I think Facebook (or any other big social media company) would pay you a lot of money to implement that. Like, an unbelievably large sum of money.
I would trust Zuckerberg to know better than most whether something is easy or hard to implement at Facebook. Also, considering that they make money from data, it might be very expensive to throw away everybody's data.
There's almost certainly already a micro-economy of exactly that sort of thing happening at companies like Facebook.
People sit around in offices, running daily stand-ups about project planning, assessing focus-group reports on eye-tracking data tied to Like buttons and accelerometer metrics around ringtone events.
I'm really curious how the average Facebook engineer feels about this data mining, and at what point they think they should stop building the tools that allow Facebook to do it.
That would be nice. What is Facebook AI's take on the ethical use of its research?
Facebook probably pays through the nose for AI research and probably wants an ROI. Facebook makes money by building better user models and spamming targeted ads. Some of them are getting scarily good.
Facebook has one of the strongest AI research groups in the world, led by Yann LeCun. They could build models that are quite good at finding all of these posts and similar variants.
And it doesn't need to be perfect. If you're an employee who may face severe repercussions if you mention unions, are you going to risk your job based on whether the filter catches your post?
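A crude version of such a filter is trivial to sketch. This is a hypothetical toy, not anything Facebook actually runs, but it illustrates the point: the filter doesn't need to be accurate to create a chilling effect, because the cost of a missed catch falls on the employee, not on the classifier.

```python
# Toy sketch (hypothetical): a crude keyword flagger for "sensitive" posts.
# Even something this simple makes posting feel risky, since the poster
# can't know in advance whether their wording slips through.

SENSITIVE_TERMS = {"union", "unionize", "organize", "collective bargaining", "strike"}

def flag_post(text: str) -> bool:
    """Return True if the post mentions any sensitive term."""
    lowered = text.lower()
    return any(term in lowered for term in SENSITIVE_TERMS)

if __name__ == "__main__":
    posts = [
        "Anyone else think we should unionize?",
        "Great quarter, team!",
    ]
    for post in posts:
        print(flag_post(post), "-", post)
```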
Probably. I'm going to set up all my data so that, any time it's parsed by Facebook, there'll be a middle finger waiting for their supercomputer with just two words: "Process this."
That seems like a useful engineering problem for Facebook to try to solve, rather than this bullshit rocket science optimization stuff that adds yet more complexity for only incremental gains.
I doubt it would need to be used that much, and the results are questionable and even malicious at times. So I'm not sure that makes the people working on AI at Facebook that great...
Imagine how many billions of Referrer headers Giphy gets every day. Imagine all the AI training material that comes in when people try to turn a search term into an image. $400mil for a human-curated dataset that pays dividends in ongoing harvestable data over time is chump change for Facebook.