* We have no idea how to measure intelligence, and we have no way of deciding whether thing X is more or less intelligent than thing Y. (The article makes this point.) Therefore, superintelligence is perhaps a bogus concept. I know it seems implausible, but perhaps there can never be something significantly more intelligent than us.
* Nevertheless, mere human-like intelligence, if made faster, smaller, and more energy-efficient, could cause the same runaway scenario that people worry about: imagine a device that can simulate N humans at a speed M times faster than real time, where those simulated humans designed and built the device and can improve its design and manufacture (a toy sketch of the compounding is below).
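To make the compounding concrete, here is a minimal sketch under invented assumptions -- the team size, speedup, and per-generation gain are made-up numbers, not estimates:

```python
# Toy model of the speed-based runaway scenario above. All numbers are
# invented for illustration: N simulated humans running at M times real
# time, where each finished design generation multiplies M by `gain`.

def runaway(n_humans=100, speedup=10.0, gain=1.5,
            human_years_per_generation=50.0, generations=10):
    """Print wall-clock years elapsed after each design generation."""
    elapsed = 0.0
    for g in range(1, generations + 1):
        # N workers at M-times speed compress `human_years_per_generation`
        # of work into this much real time:
        elapsed += human_years_per_generation / (n_humans * speedup)
        speedup *= gain  # the improved design runs faster
        print(f"generation {g}: {elapsed:.3f} wall-clock years, "
              f"next speedup {speedup:.0f}x")

runaway()
```

Because the per-generation wall-clock time shrinks geometrically, the cumulative time converges to a small bound; that convergence is the whole "runaway" intuition.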
In general, I see a lot of merit in the arguments on both sides of this discussion.
> In general, I see a lot of merit in the arguments on both sides of this discussion.
That's great news. One side is alarmist about a potential human extinction event, the other side is not. Even a small chance that the alarmists are right means we should take their view seriously, right?
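The implicit argument here is a plain expected-value calculation; a two-line sketch with made-up numbers (both the probability and the stakes are assumptions, not estimates):

```python
# Expected-value form of the "even a small chance" argument. Both numbers
# are invented purely for illustration.
p_alarmists_right = 1e-4   # assumed tiny probability they are right
lives_at_stake = 8e9       # rough order of magnitude of humanity
print(p_alarmists_right * lives_at_stake)  # 800000.0 expected lives lost
```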
No, not if there's no ground to do so. Chemtrailers believe that there are gigantic conspiracies to subdue humans (and ultimately worsen the condition of humanity as a whole), but we shouldn't take them seriously.
There are so many criticisms and debunkings of the Singularity theory that there's not even a debate anymore.
I would say that debating the scientific likelihood of a singularity is sterile, because its proponents don't provide any argument on that plane. The debate is mostly philosophical and/or theological nowadays.
I've suffered through this, waiting all along for it to engage with any of the arguments from MIRI, Bostrom and co., yet it only managed to paint a few strange windmills for itself to charge at.
So it talks about Singularists and how misguided they are, says AI is a misnomer that should be called EI, namedrops cybernetics and systems thinking, and reminisces about how we lost our way.
It is a bad manifesto, a boring and toothless essay.
It has a good point, but alas I forgot it as I tried to keep reading.
Anyway, let's just take this sentence: "We can measure the ability for systems to adapt creatively, as well as their resilience and their ability to use resources in an interesting way." - This is so broad and universal that it's either meaningless or false, and there is absolutely no argument supporting it. I highlighted this because it directly contradicts the alarmist thinking: if we were able to measure creativity and resilience in general, we could train AIs to get a higher creativity score, and furthermore we could then control them.
And it's also interesting that this claim goes counter to a claim a bit earlier about how unknowable and messy things are going to be.
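To spell out the point about a measurable creativity score becoming a control lever: any scalar metric we can compute is automatically a training objective. A minimal sketch, assuming a stand-in metric (the function and all its numbers are invented for illustration):

```python
import random

def creativity_score(params):
    # Stand-in for a hypothetical measurable "creativity" metric;
    # no such general metric exists, which is the commenter's point.
    return -sum((p - 3.0) ** 2 for p in params)

# Toy hill-climbing: once the score is computable, it is optimizable.
params = [0.0, 0.0]
for _ in range(2000):
    candidate = [p + random.gauss(0, 0.1) for p in params]
    if creativity_score(candidate) > creativity_score(params):
        params = candidate

print(params)  # converges near [3.0, 3.0], the score's optimum
```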
It also somehow picks corporations as the perfect model for a superintelligence, which is convenient, but by doing so sidesteps all the real arguments about how a self-perfecting machine superintelligence is not bound by slow components. And somehow it also ignores the reality of Samsung, AIG, Microsoft, Oracle, Shell, BP and how they all got away with almost everything. (And other companies that are quite successful, despite competition. And how much we have to work to keep them sort of aligned with our laws and goals.)
The best argument against a hard takeoff is that it's hard to imagine that so many S-curves can be combed through in so little time, given real-world resource constraints. However, Yudkowsky did an analysis of that: https://intelligence.org/files/IEM.pdf and sure, it's just a step toward more questions, more hard-to-imagine things, but not something that should be dismissed just because our mind throws up its hands and says "I don't see how, it's very complex, so unlikely, let's go shopping".
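The S-curve worry can also be made concrete. A minimal sketch, assuming (purely for illustration, not taken from the linked paper) that capability follows a chain of logistic curves and that each completed curve shortens the timescale of the next:

```python
import math

def logistic_duration(timescale, frac=0.99):
    # Time for a logistic curve x(t) = 1 / (1 + e^(-t / timescale)) to go
    # from (1 - frac) to frac of its ceiling.
    return 2 * timescale * math.log(frac / (1 - frac))

total, timescale, compression = 0.0, 10.0, 0.5
for curve in range(1, 9):
    d = logistic_duration(timescale)
    total += d
    print(f"S-curve {curve}: {d:6.2f} years (cumulative {total:6.2f})")
    timescale *= compression  # faster systems traverse the next curve sooner

# The cumulative time converges to a finite bound even for many curves,
# which is why "too many S-curves" alone doesn't rule out a fast takeoff.
```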
And this essay is exactly that. It goes on and on about those blind Singularists, and completely misses the point.
In general, I am in agreement with you. The whole thing has started to resemble religious wars in recent years, and it's quite hilarious and sad to observe.
This part, however, I can't stand behind:
> Anyway, let's just take this sentence: "We can measure the ability for systems to adapt creatively, as well as their resilience and their ability to use resources in an interesting way." - This is so broad and universal that it's either meaningless or false, and there is absolutely no argument supporting it. I highlighted this because it directly contradicts the alarmist thinking: if we were able to measure creativity and resilience in general, we could train AIs to get a higher creativity score, and furthermore we could then control them.
It's true that such generalist statements basically mean nothing. But your rebuttal doesn't take into account chaos theory -- where it's generally accepted that most living systems live on the brink of chaos yet are very stable and manage to swing back even after big interferences. Not sure what a "living system" is, don't ask. :D
I do agree with you that your highlight contradicts the alarmists though. Not everything swings out of control by the gentlest of touches. In fact, most of the universe doesn't seem to be that way. There's always a lot of critical mass that must be accumulated before a cataclysm-like event occurs.
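A toy bistable system illustrates that "critical mass" intuition: perturbations below a threshold decay back to the stable state, and only a push past the threshold tips the system into a new regime. The dynamics here are invented purely for illustration:

```python
# Toy bistable system: stable fixed points at 0 and 3, unstable
# threshold at 1. Small kicks swing back; big kicks tip it over.

def step(x, dt=0.01):
    # dx/dt = -x * (x - 1.0) * (x - 3.0)
    return x + dt * (-x * (x - 1.0) * (x - 3.0))

for kick in (0.8, 1.2):            # below vs. above the threshold
    x = kick
    for _ in range(5000):
        x = step(x)
    print(f"perturbation {kick}: settles near {x:.2f}")
# perturbation 0.8 -> settles near 0.00 (swings back)
# perturbation 1.2 -> settles near 3.00 (regime shift past critical mass)
```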
---
All of the above said, I don't think it's serious or scientific to discard the alarmists simply because the guys/girls with the most PR make ridiculous or non-scientific statements. Behind them are probably thousands of people who are more systematic and have better arguments but aren't interviewed by mainstream media.
Where do I stand on the spectrum of this? The so-called singularity is possible. BUT, we are a very long way from it. We're going to be clawing our way to it, inch by inch, for centuries, if not millennia. That's what I think is most likely.
And by the time it occurs, IMO we will be living in a cyberpunk-like future -- very well articulated in the "Ghost in the Shell" anime movies and series, by the way -- where the line between man and machine will already have been blurred quite severely.
> I know it seems implausible, but perhaps there can never be something significantly more intelligent than us.
The human brain is easily overwhelmed by complexity. Present even a very intelligent person with a problem that has sufficiently many interacting parts, and at some point they are completely stumped. Obviously it's difficult to judge this from a position in which you suffer from the same problem, but objectively the situation is similar to a problem one would present to an animal to test its intelligence: we can easily "see through" the complexity, but the animal cannot combine even two or three interacting parts to find a solution.
Although we have no real objective measurement for intelligence, this does not mean that there is no way to rank the intelligence of objects that are sufficiently far apart in terms of dealing with complexity. A human is clearly smarter than a dog.
People have developed tools that allow us to deal with this complexity, but we are still severely limited in dealing with very complex problems, simply because our brains haven't evolved to deal with complexity.
Theoretically, there is absolutely no reason to think that a computer or another organism could not exist that is capable of much more raw processing power and uses it in a way similar to how we process information. I find it sort of arrogant to claim that we are the pinnacle of intelligence when we are so clearly limited in an ability that we understand to be one of the core components of intelligence.
There is a well-developed theory for the measurement of intelligence.
And even if there wasn't, even a child can rank-order the intelligence of different animals, or different humans that they personally know.
It's true that trying to quantify the intelligence of creatures more intelligent than any we've seen before would constitute extrapolating beyond the dataset; this doesn't imply there is nothing beyond the dataset. Imagining that the current dataset limits the possible scope of outcomes is simple anchoring bias.