Ah, but it'll encourage innovation and automation. It also forces people to get a better education and greater skills, or become trapped in poverty and never advance socially.
That only works if society has the nerve to actually trap people in poverty and restrict them socially when they don't get education or skills. That idea is politically unpopular with a lot of people.
Charles Stross's Saturn's Children and Neptune's Brood include a theme of robot slaves. Both are told from the perspective of a robot, with many variations on free-will overrides. Stross wrote, http://www.antipope.org/charlie/blog-static/2013/07/crib-she..., "A society that runs on robot slaves who are, nevertheless, intelligent by virtue of having a human neural connectome for a brain, is a slave society."
On visual effects, http://www.theverge.com/2015/5/8/8572317/ex-machina-movie-vi..., "The hardest stuff to track is when someone is not moving. Because nobody is not moving, they’re moving but really, really subtly ... It was the first film I’ve ever done where we did not put a single greenscreen up ... she had to be mechanically plausible ... things like the muscles contracting properly, and the various pipes and wiring having just a tiny amount of jiggle."
Well, I think the important thing is not whether there is slavery, but whether there is suffering. If the robots are programmed to be happy as slaves, voluntary slaves, is it so bad?
You can 'program' humans that way too. Doesn't mean they aren't slaves and it doesn't mean it isn't bad.
And anyway, I suspect your use of the word "suffering" implies a rather narrow definition, one that counts only material suffering as important.
Turing machines aren't self-aware. And in my opinion, it will be a century before a computer can truly mimic self-awareness. And even then it's mimicry.
In order to 'reprogram' humans, you need to expose them to huge amounts of suffering and mental manipulation to achieve the desired outcome.
With machines, assuming you could build their personality from the ground up, you wouldn't have such quagmires. In essence it would be the difference between hacking down a tree and pouring cement (or using nanomachines) to build a home. One of them is more wasteful than the other.
However, things get weird if you can only train them and/or clone their neural nets. If you create a mental model and cut and paste a copy onto your USB 11.0, you get the same problems the teleporter in Star Trek does. Namely: which one is the original, and does copying constitute a death/rebirth?
Honestly, I think this might be the practical "moral way" to deal with AI, assuming they don't have a singularity-style bootstrap and leave us behind. We should treat it like subordinate mutualism: only create new ones as needed, treat them as well as possible, motivate them with happiness/reward rather than threat, and make sure they live out the remainder (this one needs work) of their lives comfortably if they cannot be adapted to new uses.
Next question: is implementing mortality (a time-limited lifespan) murder? I'd argue that mortality would be important to allow the development of a society in its early stages.
>"A society that runs on robot slaves who are, nevertheless, intelligent by virtue of having a human neural connectome for a brain, is a slave society.."
Why the bloody hell would you do that!? There's no point in putting a human mind in a slave-machine when you could just write a wholly adequate artificial inference engine.
The New York Review of Books has a much better article, "How robots & algorithms are taking over", on "The Glass Cage: Automation and Us" by Nicholas Carr.[1] That's about the future of work. Computers are taking over because they're so incredibly cheap, and now they're getting smarter. Anything a computer can do, it can now do cheaper than a human. Usually much cheaper.
Yes, people have worried about this before, all the way back to the 1920s.
This time, though, the skill sets of machines are getting much closer to those of humans. The list of things machines can't do gets shorter every year. "There is about a 50 percent chance that programming, too, will be outsourced to machines within the next two decades." The Glass Cage cites a skill study which analyzes job categories by difficulty of automation.
"Peak factory worker" was reached in the US in 1977. Manufacturing output continued to climb. (There was a drop in output after 2008, but output has totally recovered. Without an increase in employment.) The next big event to watch for is "peak office workers". At some point, the number of people needed in offices will start to decline.
This is getting more attention now that automation is finally cutting into the chattering classes. The "college equals lifetime middle class employment" concept that sustained the middle class in the developed world is dead.
We have plenty of productive capacity and lots of capital, yet can't figure out how to structure society to cope with that. That's the real problem.
Just one question: finance is the pioneer of commercial applications of digital machines. Not talking HFT here at all. Algorithms in trading stocks/options. Algorithms and programming in valuation in private equity. Algorithms in insurance and lending. Automation is everywhere. How come jobs in the sector only grow, both at the big employers and at new ones?
How about an example closer to the HN crowd: BI. Such a proliferation in self-service BI. You previously had to wait two weeks for IT to update the cube; now you can do your analysis yourself in 50% of those cases. Corporate BI and reporting jobs should be declining because of this automation, right? How come the sector is exploding?
Demand and investment are closely related. If there are no obvious winners, or there's evidence that the market can grow, it makes sense that people will invest in new offerings. Not every BI stack is based around Microsoft's and Oracle's offerings, but we don't quite have a Salesforce of BI yet that dominates 60%+ of a possibly nearly-saturated market.
The financial crisis broke out in the fall of 2008. Employment in finance peaked in 2006, and it has yet to reclaim 2005 levels. I would not be terribly surprised to see that chart turn down again before it does.
Observers of the financial services industry talk about how the industry has a lot of "self-generated work". If the legal environment of the industry were rolled back to, say, 1980, the industry would be smaller and much more boring. From 1934, when the SEC was created, to 1980, the US financial industry was highly regulated, boring, and had few crises. It also created no billionaires.
> Yes, people have worried about this before, all the way back to the 1920s. This time, though, the skill sets of machines are getting much closer to those of humans.
Instead of "this time" I prefer "over time", because there isn't a new technology "this time" that is suddenly different that is worth arguing about. What's happening is that machines get a little more capable every day in various ways, while humans do not, and it keeps going and going....
Whatever the controversial comparison of machines and humans may be today, by tomorrow, machines will have moved forward more than people have, and even moreso the next day, and again the day after that, with no obvious reason it should ever stop.
Unless this phenomenon changes, it's just a question of time.
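To make that concrete, here's a toy back-of-the-envelope sketch. The starting levels and daily growth rates are made-up assumptions, not forecasts; the point is just that with any positive daily improvement rate against a flat human baseline, the crossover always happens, and the rate only changes when, not whether:

```python
# Toy illustration of the compounding argument above, not a forecast:
# the capability levels and growth rates below are invented for the example.

def days_until_crossover(machine_level: float, human_level: float,
                         daily_growth: float) -> int:
    """Days until a compounding machine level exceeds a flat human level."""
    days = 0
    while machine_level < human_level:
        machine_level *= 1.0 + daily_growth
        days += 1
    return days

# Even starting at 1% of the human level, any positive rate gets there.
for rate in (0.0001, 0.001, 0.01):
    print(f"daily growth {rate:.2%}: "
          f"{days_until_crossover(1.0, 100.0, rate)} days")
```

At 0.01% daily growth the crossover takes roughly 126 years; at 1% daily growth, a bit over a year. Either way it arrives.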
The point of labor is to turn raw materials and energy into useful things and perform services. When robots do this, those who own raw materials and energy will have increasing power.
I would invest in farm land, mines, water and solar panels. In the future we will have automated mining and prospecting robots that are tied to automated foundries that are tied to 3d printing and other forms of automated manufacturing. Within a small area you could have a fully vertically integrated brick factory or metal chair factory.
In the Soviet Union they once built an iron-smelting factory out in the middle of nowhere, near a mountain with a big iron deposit. The engineers said they should build it near a population center, like in the West. It was very inefficient in the end, because once the deposit ran out they had to ship the iron and the people in to work there. In the future, the factory will be containerized and will move itself around to wherever is most convenient, and that doesn't have to mean near a population center, because it will be fully automated.
Farmland and mines (or the minerals in them) seem to be limited worldwide and may be good investments (though we may improve artificial ways to produce food that are less dependent on land).
Water is a regional thing (for some places it's less of a problem). Not really sure how one effectively "invests" in water.
Solar is entirely different. I think solar has a great future, but as an investment, I'm not so sure. It seems highly likely that we will need lots more power in the future, and much of it does seem likely to come from solar. But current solar technologies are still relatively young, inefficient, and (relative to future technologies) expensive. Investing in solar at this point would be like investing in a car manufacturer in the early 1900s.
> We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.
The author points out that the meaning of this statement hinges on the word "we". Hasn't the world already arrived at a dystopian future where AI systems simply maximize revenue and ad clicks for their owners? Whereas "good" AI would be employed "for the people", i.e. to maximize social welfare while taking into account externalities (poverty and global warming).
The fear is of some future AI which acts only to further its own pathological interests... but how would that be different from present-day corporate bureaucracies, with business processes increasingly driven by weak (but rapidly improving) AI? It seems to me that the owners of such systems are the only people who would fare differently under the two scenarios. For the vast majority of the world, the experience would be the same in both cases: exploitation by autonomous AI, or exploitation by bureaucracy. Changing the parameters that control the way either one operates is equally difficult.
>The fear is of some future AI which acts only to further its own pathological interests... but how would that be different from present-day corporate bureaucracies, with business processes increasingly driven by weak (but rapidly improving) AI?
It's not that different at all, which is precisely why we need to avoid both options on this dichotomy. Also, I think you meant to reply to the thread about the other NYRB article above.
Namely that the main character is projecting his own views of what is considered patriarchal onto another being that's been trapped in a toxic 'relationship'.
The most important factor I think people are overlooking is the fact that it is HUMANS who have to put in EFFORT and TRY to CREATE these robots. They don't just spawn from an old female computer. Humans have to put in the work to create these artificial intelligences. We have 100% control over the future of AI.