Bill Joy wrote about this back in 2000. The essay was titled 'Why the Future Doesn't Need Us' and offers a very (in my mind) depressing view of the future.
One of his worries is that whatever positive things we can do with new technology are vastly outnumbered by the negative things we can do with them. Bad actors can be few and far between but still destroy the world.
It's interesting that Bill is worried about genetic engineering, nanotechnology and robotics. Sam specifically calls out AI and synthetic biology.
There are a lot of recurring themes between these two articles, and both propose a similar solution: proceed cautiously.
You're right, but I fear a future where such technology can be abused. I haven't carefully considered all the positives such technology will bring into our world. I was startled by the potential for abuse and by how far technology has come.
Many technologies have the potential to be evil (I could decide tomorrow that I want to drive my car at high speed into a shopping mall, but more than likely won't). I think this is certainly more a positive thing than not; there will be abuses from time to time, but the power here lies in the potential.
Ah yes, Bill's essay on the dangers of the future. The first time I read it I was struck not by the message but by the tone. It's similar to the tone of someone who buys their first (and perhaps only) gun in the USA and, realizing how straightforward it was, suddenly grasps that anyone could just walk up and shoot them, because they could easily get their own gun too.
The world is a big and generally ambivalent place. It has been in our (by which I mean humanity's) power to render it uninhabitable for over 50 years now, and yet still we haven't. We may at some point, but so far so good.
But as adults, you have to choose. You can choose not to drive on the freeways, knowing that at any time you could be mowed down by a drunk driver or someone texting, or you can go into it with your eyes open and your precautions in place. It is important to know that you can be killed while driving in order to respect what needs to be done (and not done) when you are behind the wheel of a car. If you don't respect that, you die.
So it is with the world at large. And the great "information hiding"[1] campaign notwithstanding, it's important to know how technology can kill you so as to know when it is likely to.
I do believe that at some point we'll be able to talk to machines directly with our brains. We already do that today with sound and visual images; eliminating the ears and eyes in the path is a matter of understanding the API. A friend of mine points out that the first person to become part of a computer will be able to outperform everyone else, and if they are not a nice person, they will prevent others from getting the same advantage.
But what is their advantage? They can make more money than you and me? Lots of people already have that. They can write code faster? Better? Sure, there are megalomaniacs, and we need to watch out for those folks and shut them down, but there are bad drivers too.
We can run away, but it doesn't change what is. The older you get, it seems, the more you recognize the futility of that.
[1] Somewhere in the mid-'90s it occurred to people that you could just learn all this dangerous stuff, and they have been on a mission ever since to carefully remove information from the system. The trick is to do it slowly and carefully to avoid the Streisand effect, but it continues to this day. Chemistry sets are a good exemplar.
Although there is still a lot we can do to mitigate the damage we have caused, and we should do it, the article is right. The future will be a lot bleaker in many ways due to our stupidity, and especially the stupidity of those who think technology will save us.
There is a significant contingent of influential people who disagree. "Why the future doesn't need us" (https://www.wired.com/2000/04/joy-2/), Ray Kurzweil, etc.
This is qualitatively different from what the Luddites faced; it concerns all of us and touches the essence of what makes us human. This isn't the kind of technology that has the potential to make our lives better in the long run; it will almost surely be used for more harm than good. Not only are these models trained on the collectively created output of humanity, but the key application areas are to subjugate, control, and manipulate us. I agree with you that this will not happen immediately, because of the very real complexities of physical manufacturing, but if this part of the process isn't stopped in its tracks, the resulting progress is unlikely to be curtailed. At the very least, I fundamentally think that the use of all of our data and output to train these models is unethical, especially if the output is not freely shared and made available.
I'll openly admit that I am an annoying pessimist, but I don't think the solution is going to come in the form of anything that a book from 1995 could possibly recognize as progress.
To be blunt, what I mean is that a lot of the problems we have nowadays come from the mere existence of modern technology, not just from the way people live. It's a bit hard for me to admit because I used to be an avid technologist (and am currently employed as a computer engineer), but it gets harder and harder to deny every day. No one in the world gets to choose what technology is the next logical step, and the decision to pursue its development is ultimately an arms race with the entire world as competitors. It is just not possible for any philosophy, economic system, or culture to change that fact. The good side-effects are inseparable from the bad. If you have nuclear energy, you have atomic bombs and nuclear power plants. If you have fast computer networking, you have on-demand entertainment and surveillance networks. If you can store information forever, you can instantly know someone's worst act and they can remember yours.
Technology can create new problems by itself. The most obvious ones are Risks of Blowing Ourselves Up (or existential risks). Drexlerian nanotechnology? Gray goo accident. Strong AI? Skynet, only it'll kill us all before we even realise it. Biological research? Pandemic disease from the lab. And of course the more mundane World War III and environmental collapse, from which it is unclear our civilization can ever recover. The difference with the last two is that they can run on current technology.
We could also create a universe that we wouldn't like at all, like Robin Hanson's vision of the future after mind uploading.
We could also modify ourselves for the sake of competitiveness alone, or pleasure alone, or anything that doesn't encompass all that we want to do and be.
Sure, technology is only a catalyst. But it's one hell of a catalyst. Aim a knife in the wrong direction, and you might hurt someone. Aim a nuclear warhead in the wrong direction, and you might obliterate a whole city. In both cases, the root of the problem is that you aimed in the wrong direction. But technology makes quite a difference in terms of consequences. By the way, we're already aware of that to some extent: nuclear warheads tend to have more security around them than knives.
It's always possible to imagine bad and worse scenarios. Fear is always available, and can be applied to any situation.
Believing that technology is the tool of an "evil" system, or that it facilitates "evil" is not healthy for the believer or anyone else.
Naturally, we are all aware of the power of tools, especially modern "smart" systems. How can we trust anything? We can imagine things being as bad as our imaginations allow!
I've decided, even at the risk of being naive, that I must not fear the machine, fear the network, fear anything.
We living beings all basically want the same outcome: happiness.
Having convenient tools that help us actualize that outcome is a good thing. Just because we can imagine some deep, dark conspiracy of "evil" behind the scenes, deceiving and manipulating us, doesn't mean that there is any such thing.
Nor, of course, that there isn't. Since we can't know, let's just enjoy all of our cool gadgets while being happy...
Although, I'm also beginning to feel as though the best decision will be exiting the information super-highway :)
If y'all haven't read it, The Joy Makers is a pretty fun sci-fi exploration of technological routes to happiness, and possible downsides to "plugging in".
As I said in a previous post, technology doesn't change us, it empowers us. Unfortunately that power doesn't usually reside in the hands of the people who would use it best, but in the hands of the people who want it most and have the least regard for the consequences of attaining it.
Your misgivings about the future strike me as wise, given the power of the technology in play and the quality of the people using it.
There's a seemingly reasonable fallacy embedded in the argument that so far technology has been able to dig us out of the holes that it itself digs: That past performance is indicative of the future. Of course, the past often does predict the future -- yet, we can't bank on it, because there are sometimes qualitative shifts, and exceptions.
The problem with hoping that technology will solve climate change before it happens is that we're gambling something we can't afford to lose. So, it may well be the case that we do solve the problem -- but what if we don't? There are limits to technology, after all.
To me, the danger in placing excessive faith in technology is that it can bypass our critical thinking. For example, right now our own technology (nuclear weapons, or biological weapons as that technology develops) is the greatest threat to humanity's existence. So while technology is a great thing, it may well be our species' undoing rather than its savior -- at least until (and unless) we survive long enough to colonize other planets or enhance our own morality such that we can handle the responsibility our technology demands.
All this choice guarantees is that new technology will always be used for bad things first. It holds no sway over whether someone will do something bad with technology; after all, it's not just "good people" who are capable of advancing it. See the atomic bomb vs. the atomic power plant.
What's important is how we prepare for and handle inevitable change. Hoping no negative change comes about if we just stay the same is a far worse game.
Cheap, ubiquitous wireless Internet, dirt-cheap bit storage, and low-power miniaturized electronics (computers, cameras) have ruined us. I truly don't see a way to a future that isn't pretty damn dystopic, though it may be a relatively comfortable dystopia, at least for some and in some places.
Definitely a good excuse to just allow the technology to come into existence, then. "Hey, bad things already exist, so let's just add more of them! Don't wanna be against progress, right?"
Joking aside, I agree. It's too bad, though, that we know a thing (this or anything else, technological or not) that could be used for good and for improving ourselves will almost always be diverted toward something bad...
No, we are building the tools that eventually could take us to a tech dystopia (and not only tech at that point). New tech can be used for bad things. How do we prevent that? Maybe that's the question we ought to try and find an answer to.
A lot of things are technologically feasible, and in many cases can't realistically be prevented ahead of time, yet are still considered socially unacceptable or even made illegal. Just because we can do something, that doesn't mean we should. This principle has never been more relevant than in the use of technology.
But the advance of technology also inevitably advances the capabilities of murderers, rapists, and parking meter scofflaws.
The question is, does it advance their capabilities more than it advances the capabilities of those who oppose them? Is there any way to influence that balance?
Or should we just say "any technology can be used for good or evil, so don't even bother considering the question, let someone else worry about it"?
Technology makes certain kinds of bad acts more possible.
I think I was a bit shocked by that article back in the day.