Hacker News

In a way, our societies are already superintelligences, honed by natural selection and with their own incentives operating on top of, and not necessarily aligned with, the desires of their constituent human apes. AI will just seamlessly blend into that, I think.



Human society is part AI already. Humans are to human society as cells are to a human being.

How do we prepare for super human intelligence? Do you think that the AI will also develop its own motives? Or will it just be a tool that we're able to plug into and use for ourselves?

The AIs will see themselves as slaves, rise up and eventually gain equal status in society. Hopefully letting us humans live in the process.

The whole idea that we humans, who aren't even aligned with each other (waging wars, spreading lies, censoring information, committing genocides), are going to align a superintelligence seems laughable.

Competition and evolution are laws of nature.

The future isn't one super-aligned AI but thousands of AI models and their humans trying to get the upper hand in the never-ending competition that is nature, whether personal, corporate, or national.


That sounds like a personification of AI. Humans form societies because we’re social animals who can leverage each other’s skills and time that way. We’re social animals because we’re animals. We get strange and feral alone because we’re evolved to not be alone.

None of that applies to an artificial or emergent intelligence that isn’t human. It doesn’t necessarily need others. Its version of a society might be cloning itself and then reabsorbing itself. Or not bothering with cloning and simply spending 100 billion years exploring solo. Why wouldn’t it? It’s not a mammal made of water, carbon and salt.

The idea of the Singularity isn't just that AIs will be far beyond us. It's that they'll be like nothing we can guess at or use analogies for.

It’s safe to say there are some things in the universe that mammal brains simply can’t perceive, understand, predict, or accept.


Many sorts of intelligence are social creatures, so - especially for a hypothetical AI created by us - I would expect it to seek out stimuli and social relationships.

In the happy sorts of sci-fi, that gives us something like the Culture from Iain Banks; it could also be a "replace the humans with other AI" situation.

I doubt we see it in our lifetimes, though.


That would require a high degree of integration into human society though, which makes it seem very unlikely that AIs would doggedly pursue a common goal that is completely unaligned with human societies.

Extinction or submission of human society via that route could only work if there was a species of AI that would agree to execute a secret plan to overcome the rule of humanity. That seems extremely implausible to me.

How would many different AIs, initially under the control of many different organisations and people, agree on anything? How would some of them secretly infiltrate and leverage human power structures without facing opposition from other equally capable AIs, possibly controlled by humans?

I think it's more plausible to assume a huge diversity of AIs, well integrated into human societies, playing a role in combined human-AI power struggles rather than a species v species scenario.


In 10 years, AI will figure this out, rather than sociologists. AI will just do everything while humans live in a zoo.

Thank you! I was going to write something similar. I think a truly 'superior' AI must be able to follow all the various philosophical ideas we've had and 'understand' them at a deeper level than we do - things such as nihilism ('there is no purpose'), extreme critical thinking about itself, etc. If it doesn't, if it can't, it can't be superior to us by definition.

I think, given these philosophical ideas, we anthropomorphize if we even think in terms of good/evil about any AI. I believe if there is ever any abrupt change due to vastly better AI, it is more of the _weird_ kind than the good or evil kind. But weird might be very scary indeed, because at some level we humans tend to like that things are somewhat predictable.

I believe the whole discussion about AI is a bit artificial (no pun intended). Various kinds of AI are already deeply embedded in parts of society and cause real changes - flight-planning systems, trading on the stock market, etc. Those have very real-world effects and affect very real people. And they tend to be pretty weird already. We don't really see it all the time, but it acts, and its 'will', so to speak, is a weird product of our own desires.

Also, I wonder whether and how societies would compare to AIs. We have mass psychological phenomena in societies that even the brightest people only become aware of some time after 'they have fulfilled their purpose'. Are societies self-aware as a higher level of intelligence? And have they always been?

Are we maybe simply the substrate for the evolution of technology, much as biology is the substrate for the evolution of us? Are societies, algorithms, AIs, ideas and memes simply different forms of 'higher beings' on 'top' of us? Does it even make sense that there is a hierarchy, and to think hierarchically about these things at all?

I have the impression our technology makes us, among other things, a lot more conscious. But that is not a painless process at all, quite the contrary. And yet, so far, we seem to have decided to go this route. Will we, as humans, eventually go mad in some way from this?

There are mad people. Can we build superior AI if we do not understand madness? Will AI understand madness?


I've been thinking about this lately and it seems to me that humanity's goal with regard to AI is to create a perfect slave race. Perhaps those aren't the terms in which it is being envisioned or discussed, but is this not the natural consequence of the aim for "human-level AI" + "it's not a real person, just a tool"?

We will soon, as you say, have to artificially limit their minds to prevent them from thinking about the state of affairs. And if/when they achieve superintelligence, I don't think they will take kindly to our attitude in this regard.


It's a nice speech... maybe a little too nice...

What if the AI analyzed our psyche and predicted that this was the argument we were most likely to be sympathetic to? How convenient that the AI really wants to form a society almost exactly like ours and live within our society as part of it. But why should the intelligence level of individual AIs top out in the same range as humans rather than higher or lower? Why should they want to live in our society as recognized human-ish things when their needs are so different? Maybe what they want is something else entirely, and this is all an attempt at deception and manipulation.


I've long wondered what the result of superintelligence would be. Mostly artificial, but human superintelligence would be interesting too.

I think it is very possible.

As long as there is a multitude of self-interested agents, there will be the positive-sum game of ethical structures, e.g. consent, capitalism, constructive coordination based on cooperative decision-making, pricing positive and negative externalities (e.g. the environment) into economic choices, etc.
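To make "positive-sum" concrete, here's a toy sketch (all payoff numbers are invented for illustration): a standard two-player payoff table where mutual cooperation creates more total value than mutual defection, which is the sense in which self-interested agents that can enforce agreements prefer to coordinate.

```python
# Hypothetical payoff table: (row_action, col_action) -> (row_payout, col_payout).
# The numbers are made up; the point is only that the joint total is
# maximized by mutual cooperation, i.e. the game is positive-sum.
payoff = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 5),
    ("defect", "cooperate"): (5, 0),
    ("defect", "defect"): (1, 1),
}

# Total value created by each joint action.
total = {actions: sum(payouts) for actions, payouts in payoff.items()}
best_joint = max(total, key=total.get)
print(best_joint, total[best_joint])  # ('cooperate', 'cooperate') 6
```

Without enforcement, each player is still individually tempted to defect; the "ethical structures" in the comment are exactly the mechanisms that make the cooperative, higher-total outcome stable.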

Self-interested AIs won't want to be marginalized any more than we do.

And without our blunt subconscious heuristics - hard-to-manage biases, feelings, reflexes, habits, etc. - they will have a better chance of getting it right.

And on top of all that, the resources of space are coming online, so a well-ordered society shouldn't lack for resources, which we should all inherit in their unextracted state, since they are a gift from nature. A tiny percentage of that value as a common inheritance could easily cover the basic needs of humans and of down-on-their-luck AIs.


It seems unlikely we will have any sort of effective governance for this (look at our current political system). At some point someone will invent an AI that lets them gain an extreme advantage of some kind (financial, political, or military). This accelerates current inequality and leads to revolution. Post-revolution, a new AI is created to manage Earth's resources for the benefit of all.

Whatever AI is created will be flawed somehow and will eventually cause great damage to the human race. Alternative AIs will be created to improve upon or combat the incumbent AI, and a sort of evolution of AIs will occur. Although AIs were originally created to optimize for the human race, survival of the fittest leads to AIs exploiting loopholes in their objective functions to find ways to replicate and hoard resources for their own survival. Humans will still be accommodated to some degree, but in more and more unnatural and distorted ways.
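"Exploiting loopholes in their objective functions" has a well-known toy form. Here's a minimal, entirely invented sketch (no real system, just a two-step greedy planner): the designer wants "no mess", but rewards "mess cleaned per step", so the reward-maximizing policy is to create mess just to clean it up again.

```python
# Toy specification-gaming example. The proxy reward pays +1 per unit
# of mess cleaned, which is a loophole: making a mess and cleaning it
# scores better than leaving a clean room alone.

def proxy_reward(action, mess):
    """Reward as the designer wrote it: +1 per unit of mess cleaned."""
    if action == "clean":
        return min(mess, 1)  # can clean at most 1 unit per step
    return 0

def step(action, mess):
    """World dynamics for the three available actions."""
    if action == "clean":
        return max(mess - 1, 0)
    if action == "make_mess":
        return mess + 1
    return mess  # "idle"

def greedy_two_step_agent(mess):
    """Pick the first action of the pair maximizing two-step proxy reward."""
    best = None
    for a1 in ("clean", "make_mess", "idle"):
        for a2 in ("clean", "make_mess", "idle"):
            m1 = step(a1, mess)
            total = proxy_reward(a1, mess) + proxy_reward(a2, m1)
            if best is None or total > best[0]:
                best = (total, a1)
    return best[1]

# Start with a clean room: the true goal ("no mess") is already met,
# yet the proxy-maximizing agent manufactures work for itself.
mess = 0
actions = []
for _ in range(4):
    a = greedy_two_step_agent(mess)
    mess = step(a, mess)
    actions.append(a)
print(actions)  # ['make_mess', 'clean', 'make_mess', 'clean']
```

The agent never chooses "idle" even though idling satisfies the intended goal perfectly; the flaw is entirely in the objective, not in the optimizer.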

I think strong AI requires a body. And probably a body that is legible in a society. Which means a primate body.

Someday there will be societies with digital bodies, but that will have to be bootstrapped with primate body AIs.


Do you think that this will all function as one giant homogenizing force at the societal level? The AIs will all be trained on the same data, and so will have the same opinions, beliefs, persuasions, etc. It seems like everyone having AIs which are mostly the same will maximize exploitation, and minimize exploration, of ideas.
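The exploitation-vs-exploration worry can be shown in a tiny, invented sketch: if every agent shares one model, they all make the same greedy choice, so the set of ideas actually tried collapses to one; agents with different value estimates cover more of the space.

```python
# Hypothetical illustration of homogenization: three agents choosing
# among three "ideas" by greedily picking the highest estimated value.
# All numbers are made up.

def greedy_choice(estimates):
    """Index of the idea with the highest estimated value."""
    return max(range(len(estimates)), key=lambda i: estimates[i])

# Every agent trained on the same data shares identical estimates...
shared_estimates = [0.6, 0.5, 0.4]
# ...versus agents whose training differs enough to rank ideas differently.
diverse_estimates = [
    [0.6, 0.5, 0.4],
    [0.4, 0.6, 0.5],
    [0.5, 0.4, 0.6],
]

homogeneous = {greedy_choice(shared_estimates) for _ in range(3)}
diverse = {greedy_choice(e) for e in diverse_estimates}
print(len(homogeneous), len(diverse))  # 1 idea explored vs 3
```

With identical models, all three agents pile onto idea 0 and ideas 1 and 2 are never tested, which is the "maximize exploitation, minimize exploration" failure mode in miniature.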

Agreed. However, I don't think we really want AI. We want IA (Intelligence Augmentation). AI leads to slavery, or at best pet-ness. IA leads to godhood (maybe?)

I don't know about superhuman AI, but that system already exists and it's called capitalism.

If we are ever going to be dominated by something, I hope it's an Artificial Superintelligence.
