
> If we invented AGI tomorrow, how would it achieve domination over us humans?

Biological evolution takes millions of years to improve human intelligence. Artificial intelligence will be able to develop and deploy self-improvements within years, compounding its abilities.
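
To make "compounding" concrete, here is a toy calculation; the 5% gain per self-improvement cycle and the idea of a single scalar "capability" are assumptions invented purely for the arithmetic, not claims about real systems:

    # Toy sketch of compounding self-improvement; all numbers are assumed.
    capability = 1.0
    gain_per_cycle = 0.05          # assumed 5% improvement per cycle
    for cycle in range(100):
        capability *= 1 + gain_per_cycle
    print(round(capability, 1))    # ~131.5, i.e. roughly two orders of magnitude in 100 cycles

Even a modest per-cycle gain compounds into a large multiple, which is the intuition behind the claim above.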




> An AGI is an AI that can do everything a human can do, period

> (...)

> That is the goalpost for AGI. It’s an artificial human - a human replacement.

This considerably moves the goalpost. An AGI can have a different kind of intelligence than humans. If an AGI is as intelligent as a cat, it's still AGI.

More likely, the first AGI we develop will greatly exceed humans in some areas but have gaps in others. It won't completely replace humans, just as cats don't completely replace humans.


>> 1. Super Intelligent AGI is possible.

When?


> Do Any of those cited give specific timelines? Even if we are very far away, do you really doubt that one day machines will have superhuman intelligence? I take that as pretty much a given, whether it's 50 or 500 years from now

Why? If you extrapolate from the amount of progress we have made toward AGI in the last 50 years (i.e., none), then it's reasonable to argue that we will still have made no progress 50 or 500 years from now.

There are intellectual problems that humans aren't capable of solving; it wouldn't make any sense to talk about "superhuman intelligence" if that wasn't the case. The currently available evidence suggests that "constructing an AGI" might very well be one of those problems.


> those have stalled at local maxima every single time.

It's challenging to encapsulate AI/ML progress in a single sentence, but even assuming LLMs aren't a direct step towards AGI, the human mind exists. Due to its evolutionary limitations, it operates relatively slowly. In theory, its functions could be replicated in silicon, enhanced for speed, parallel processing, internetworked, and with near-instant access to information. Therefore, AGI could emerge, if not from current AI research, then perhaps from another scientific branch.

> We've built some incredibly impressive tools, but so far, nothing that looks or feels like a concept of will (note, not consciousness) yet, to the best of my knowledge.

Objectives of AGIs can be tweaked by human actors (it's complex, but still, data manipulation). It's not necessary to delve into the philosophical aspects of sentience as long as the AGI surpasses human capability in goal achievement. What matters is whether these goals align with or contradict what the majority of humans consider beneficial, irrespective of whether these goals originate internally or externally.


>artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work

Then we already have AGI: automated farming equipment outperforms humans in 90% of jobs*.

*Jobs in 1700. As things got automated the jobs changed and we now do different things.


This is a good critique. I'm very skeptical of the near term (<100 years) risk of AGI, but I don't really think this article's arguments are valid. Saying we already have AGI because there exist humans is vacuous and seems to almost deliberately miss the point.

If you want to counter the arguments that AGI will be capable of exponential self-improvement, you need to use an analogue other than humans. Humans categorically lack the capability to exponentially self-improve. Likewise human intelligence is definitionally non-alien, which is not something we can say a priori about any successful AGI we create.


You’re making the common mistake of assuming that AGI significantly increases the advancement of technology. If AI research has demonstrated anything, it’s that intelligence doesn’t increase linearly with computation. Compare two identical chess engines where one has 10% more computational power behind it: that engine shows a surprisingly limited increase in Elo rating.
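
A rough way to see why 10% more compute buys so little: chess-engine results are often summarized as a roughly fixed Elo gain per doubling of search time, i.e. a gain that is logarithmic in compute. A minimal sketch, assuming a ballpark figure of ~60 Elo per doubling (an assumption, not a measured constant):

    import math

    ELO_PER_DOUBLING = 60  # assumed ballpark; varies by engine and time control

    def elo_gain(compute_ratio):
        """Estimated Elo gain from multiplying compute by compute_ratio,
        assuming the gain is roughly logarithmic in compute."""
        return ELO_PER_DOUBLING * math.log2(compute_ratio)

    print(elo_gain(1.10))  # ~8 Elo for 10% more compute
    print(elo_gain(2.00))  # 60 Elo for doubling compute

Under that assumption, a 10% compute edge is worth only a handful of Elo points.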

It’s seductive to think AGI will suddenly advance technology dramatically, but dumping more effort into existing technology has a bad habit of hitting diminishing returns. Ideas like nano-machines seem to have vast potential, until you realize biology is already operating at those scales and has significant limitations.


> AGI is possible because intelligence is possible (and common on earth) in nature.

But you haven't asked whether we're capable of building it. While it might be technically possible, are we capable of managing its construction?

All I see today is ways of making the process more opaque, with the benefit of not having to provide the implementation. How does that technique even start to scale in terms of its construction? I worry about the exponentially increasing length of the "80% done" stage, and that's on the happy path.


<wild speculation>

AGI is not the same as human intelligence. It has the generality of human intelligence, but since it isn't restricted by biology it can scale up much more easily and can achieve superhuman performance in pretty much any individual task, group of tasks, or entire scientific or technological field. That's pretty exciting.

</wild speculation>

<reality>

It's questionable whether the above is possible at all. In all likelihood none of us will see anything even remotely close to this in our lifetimes. We're currently so far away from it that we don't even know how to get started on solving such a problem. Nobody is currently working on this, despite how they're advertising their work.

</reality>

I guess what I'm saying isn't that AGI will be underwhelming, it's that it won't exist at all, at least as far as we are concerned.


> - Artificial general intelligence

Not only do we not have AGI, but there is an ongoing discussion about whether it's possible to have it at all.


> In theory, its functions could be replicated in silicon, enhanced for speed, parallel processing, internetworked, and with near-instant access to information. Therefore, AGI could emerge, if not from current AI research, then perhaps from another scientific branch.

This is true, but there are some important caveats. For one, even though this should be possible, it might not be feasible in various ways. For example, we may not be able to figure it out with human-level intelligence. Or silicon may be too energy inefficient to do the computations our brains do with the resources reasonably available on Earth. Or the density of silicon transistors required to replicate human-level intelligence might dissipate so much heat that the chips melt, so it may not actually be possible to replicate human intelligence in silico.
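
As a back-of-envelope illustration of the energy point (all numbers below are assumed round figures, not measurements):

    # Back-of-envelope comparison using assumed round numbers only.
    BRAIN_POWER_W = 20        # commonly cited estimate for a human brain
    GPU_POWER_W = 700         # roughly, a single modern datacenter GPU
    CLUSTER_GPUS = 10_000     # hypothetical cluster size

    cluster_power_w = GPU_POWER_W * CLUSTER_GPUS
    print(cluster_power_w / 1e6, "MW")                # 7.0 MW
    print(cluster_power_w / BRAIN_POWER_W, "brains")  # ~350,000 brain-equivalents of power

Whether that kind of gap can ever be closed in silicon is exactly the sort of feasibility question raised above.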

Also, as you say, there is no reason to believe the current approaches to AI are able to lead to AGI. So, there is no reason to ban specifically AI research. Especially when considering that the most important advancements that led to the current AI boom were better GPUs and more information digitized on the internet, neither of which is specifically AI research.


> AGI means humans are no longer the smartest entities on the planet.

Superintelligence and AGI are not the same thing. An AI as smart as an average 5 year old human is still an Artificial General Intelligence.


> We already have (weak) AGI

No. We all intuitively know what AGI means, we've read/seen enough science fiction. It doesn't have to be "superintelligent", but it does need independent agency. None of the recent AI stuff is anything approaching that.


" It possesses the potential to enhance its own capabilities at a speed that outpaces our own evolutionary timelines without AI."

Does it?

And there may be a limit to intelligence anyway. In humans extreme intelligence often seems to pair with some kind of psychosis, for example.

I think all we can say is IF AGI is possible, it can potentially be made to be very much faster than humans.


> In theory, its functions could be replicated in silicon, enhanced for speed, parallel processing, internetworked, and with near-instant access to information. Therefore, AGI could emerge, if not from current AI research, then perhaps from another scientific branch.

Let's be clear, we have very little idea about how the human brain gives rise to human-level intelligence, so replicating it in silicon is non-trivial.


> If you extrapolate from the amount of progress we have made toward AGI in the last 50 years (ie, none)

That's an odd way of defining progress.

> There are intellectual problems that humans aren't capable of solving; it wouldn't make any sense to talk about "superhuman intelligence" if that wasn't the case.

A superhuman intelligence doesn't necessarily have to come up with solutions humans would never think of, it just needs to come up with a solution in less time, or with less available data, or with fewer attempts.


"We are going to use powerful AI to teach kids to do jobs that AI will almost certainly do better in 10-20 years?"

I think understanding how to work well with AI and what its limitations are will be helpful regardless of what the outcome is.

Even if silicon brains achieve AGI or super-intelligence, I think it's highly unlikely that they will supersede biological brains along every dimension. Biological brains use physical processes that we have very little understanding of, so it will likely not be possible to fully mimic them in the foreseeable future, even with AGI. We don't know exactly how we'll fit in and be able to continue being useful in the hypothetical AGI/super-intelligence scenario, but I think it's almost certain there will be gaps of various kinds that will require human brains to be in the loop to get the best results.

And even if we do assume that humans get superseded in every conceivable way, AGI does not imply infinite capacity, and work is not zero sum. Even if AI completely takes over for all the most important problems (for some definition of important), there will always be problems left over.

Right now, just because you aren't the best gardener in the world (or even if you're one of the worst), that doesn't mean you couldn't make the area around where you live greener and more beautiful if you spent a few months on it. There is always some contribution you can make to making life better.


> If you have an AGI you can probably scale up its runtime by throwing more hardware at it

Without understanding a lot more than we do about both what intelligence is and how to achieve it, that's rank speculation.

There's not really any good reason to think that AGI would scale particularly more easily than natural intelligence (which, in a sense, you can scale with more hardware: there are certainly senses in which communities are more capable of solving problems than individuals.)

> Biology is limited in ways that AGI would not be due to things like power and headsize constraints

Since AGI will run on physical hardware it will no doubt face constraints based on that hardware. Without knowing a lot more than we do about intelligence and mechanisms for achieving it, the assumption that the only known examples are particularly suboptimal in terms of hardware is rank speculation.

Further, we have no real understanding of how general intelligence scales with any other capacity anyway, or even whether there might be some narrow “sweet spot” range in which anything like general intelligence operates, because we don't much understand either general intelligence or its physical mechanisms.


> Scale is not the solution.

Agreed, I don't think any modern AI techniques will scale to become generally intelligent. They are very effective at specialized, constrained tasks, but will fail to generalize.

> AI will design AGI.

I don't think that a non-general intelligence can design a general intelligence. Otherwise, the non-general intelligence would be generally intelligent by:

1. Take in input.

2. Generate an AGI.

3. Run the AGI on the input.

4. Take the AGI output and return it.
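
A minimal sketch of that reduction (both classes below are hypothetical stand-ins, not real systems or APIs):

    # If a narrow AI could reliably design an AGI, wrapping that ability
    # would make the narrow system behave as a general one.
    class HypotheticalAGI:
        def run(self, task_input):
            return f"solution to {task_input!r}"   # stand-in for solving any task

    class NarrowAI:
        def design_agi(self):
            return HypotheticalAGI()               # the premise under test

        def solve(self, task_input):
            # Take input, generate an AGI, run it on the input, return its output.
            return self.design_agi().run(task_input)

    print(NarrowAI().solve("any task at all"))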

If by this, the article means that humans will use existing AI techniques to build AGI, then sure, in the same way humans will use a hammer instead of their hands to hit a nail in. Doesn't mean that the "hammer will build a house."

> The ball is already rolling.

In terms of people wanting to make AGI, sure. In terms of progress on AGI? I don't think we're much closer now than we were 30 years ago. We have more tools that mimic intelligence in specific contexts, but are helpless outside of them.

> Once an intelligence is loose on the internet, it will be able to learn from all of humanity’s data, replicate and mutate itself infinitely many times, take over physical manufacturing lines remotely, and hack important infrastructure.

None of this is a given. If AGI requires specific hardware, it can't replicate itself around. If the storage/bandwidth requirements for AGI are massive, it can't freely copy itself. Sure, it could hack into infrastructure, but so can existing GI (people). Manufacturing lines aren't automated in the way this article imagines.
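
To put a number on the storage/bandwidth point (parameter count, bytes per parameter, and link speed are assumed round figures for illustration only):

    # Assumed round figures, purely illustrative.
    PARAMS = 1e12            # hypothetical 1-trillion-parameter system
    BYTES_PER_PARAM = 2      # e.g. 16-bit weights
    LINK_BPS = 1e9           # a 1 Gbit/s network link

    size_bits = PARAMS * BYTES_PER_PARAM * 8
    print(size_bits / LINK_BPS / 3600, "hours per copy")  # ~4.4 hours

A system that large can't "freely" copy itself across ordinary links, let alone run on arbitrary hardware once it gets there.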

The arguments in this post seem more like optimistic wishes rather than reasoned points.

