
>Does it? Can you elaborate?

I don't intend to say anything controversial here. The consideration is the tradeoff between independence and tight constraints of the subcomponents. Independent entities have their own interests, as well as the added computational and energetic costs of managing a whole entity. These are costs that can't be directed towards the overarching goal. On the other hand, tightly constrained components do not have this extra overhead, so their capacity can be fully directed towards the goal as determined by the control system. In terms of utilization of compute and energy towards the principal goal, a unified AI will be more efficient.

>If an intelligence can coordinate "mini-brains" fully reliably (a big if, by the way), presumably I can do something similar with a Python script or narrow AI.

This is plausible, and I'm totally in favor of exploiting narrow AI to maximal effect. If the only AI we ever had to worry about was narrow AI, I wouldn't have any issue aside from the mundane issues we get with the potential misuse of any new technology. But we know people (e.g. OpenAI) are explicitly aiming towards AGI, so we need to be planning for this eventuality.




> We already have (weak) AGI

No. We all intuitively know what AGI means, we've read/seen enough science fiction. It doesn't have to be "superintelligent", but it does need independent agency. None of the recent AI stuff is anything approaching that.


> Scale is not the solution.

Agreed, I don't think any modern AI techniques will scale to become generally intelligent. They are very effective at specialized, constrained tasks, but will fail to generalize.

> AI will design AGI.

I don't think that a non-general intelligence can design a general intelligence. Otherwise, the non-general intelligence would be generally intelligent by:

1. Take in input. 2. Generate an AGI. 3. Run the AGI on the input. 4. Take the AGI output and return it.

If by this, the article means that humans will use existing AI techniques to build AGI, then sure, in the same way humans will use a hammer instead of their hands to hit a nail in. Doesn't mean that the "hammer will build a house."
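
To make the four-step argument concrete, here is a minimal sketch. `NarrowDesigner` and the `generate_agi`/`run` methods are hypothetical stand-ins, not real APIs; the point is only that if step 2 were possible, the wrapper as a whole would already behave as a general intelligence.

    def wrapper(task_input, narrow_designer):
        # Step 1: accept arbitrary input. Step 2: the supposedly narrow
        # system designs an AGI. Step 3: run that AGI on the input.
        # Step 4: return whatever it produces.
        agi = narrow_designer.generate_agi()
        result = agi.run(task_input)
        return result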

> The ball is already rolling.

In terms of people wanting to make AGI, sure. In terms of progress on AGI? I don't think we're much closer now than we were 30 years ago. We have more tools that mimic intelligence in specific contexts, but are helpless outside of them.

> Once an intelligence is loose on the internet, it will be able to learn from all of humanity’s data, replicate and mutate itself infinitely many times, take over physical manufacturing lines remotely, and hack important infrastructure.

None of this is a given. If AGI requires specific hardware, it can't replicate itself around. If the storage/bandwidth requirements for AGI are massive, it can't freely copy itself. Sure, it could hack into infrastructure, but so can existing GI (people). Manufacturing lines aren't automated in the way this article imagines.

The arguments in this post seem more like optimistic wishes rather than reasoned points.


> An AI that is truly generally intelligent could figure out how to free itself from its own host hardware!

Why is this true of an arbitrary AGI?

You assume that the AGI is a low storage, low compute program that can run on general purpose hardware. But the only general intelligence we know of would require many orders of magnitude more compute and storage than exist worldwide to simulate for a microsecond.


> If you have an AGI you can probably scale up its runtime by throwing more hardware at it

Without understanding a lot more than we do about both what intelligence is and how to achieve it, that's rank speculation.

There's not really any good reason to think that AGI would scale particularly more easily than natural intelligence (which, in a sense, you can already scale with more hardware: there are certainly senses in which communities are more capable of solving problems than individuals).

> Biology is limited in ways that AGI would not be due to things like power and headsize constraints

Since AGI will run on physical hardware it will no doubt face constraints based on that hardware. Without knowing a lot more than we do about intelligence and mechanisms for achieving it, the assumption that the only known examples are particularly suboptimal in terms of hardware is rank speculation.

Further, we have no real understanding of how general intelligence scales with any other capacity anyway, or even whether there might be some narrow "sweet spot" range in which anything like general intelligence operates, because we don't much understand either general intelligence or its physical mechanisms.


>An AI of that level would have mastery over game theory, and would only generate asynchronous copies that it knew it could compensate for.

I'm not convinced this is actually possible under the current paradigm, and I think the current paradigm can't take us to AGI. Lately, as people have bemoaned all the things ChatGPT can't do or fails at when they ask it, I have been reflecting on my personal batting average for solving (and failing to solve!) problems, and on the process I use to eventually solve problems that I couldn't at first. These reflections have led me to consider that an AGI system might not be a single model, but a community of diverse models forming a multi-agent system, each learning through its own experience and able to help get the others unstuck. Through this they would learn game theory, but none would become so advanced as to control all the others through a superior understanding, though power could be accumulated in other ways.
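
As a toy sketch of that "community of models" idea: assume hypothetical Agent objects with attempt() and advise() methods (nothing here corresponds to a real library). Each agent tries the problem; if nobody succeeds, every agent contributes advice that becomes extra context for the next round.

    def solve_with_community(problem, agents, max_rounds=3):
        hints = []
        for _ in range(max_rounds):
            for agent in agents:
                # Each agent tries with its own "experience" plus peer hints.
                answer = agent.attempt(problem, hints)
                if answer is not None:
                    return answer  # someone got unstuck
            # Nobody solved it this round: collect advice for the next one.
            hints.extend(agent.advise(problem) for agent in agents)
        return None  # the community as a whole is stuck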


> In theory, its functions could be replicated in silicon, enhanced for speed, parallel processing, internetworked, and with near-instant access to information. Therefore, AGI could emerge, if not from current AI research, then perhaps from another scientific branch.

This is true, but there are some important caveats. For one, even though this should be possible, it might not be feasible, in various ways. For example, we may not be able to figure it out with human-level intelligence. Or silicon may be too energy inefficient to do the computations our brains do with the resources reasonably available on Earth. Or even, the density of silicon transistors required to replicate human-level intelligence could dissipate too much heat and melt the transistors, so it's not actually possible to replicate human intelligence in silico.

Also, as you say, there is no reason to believe the current approaches to AI are able to lead to AGI. So, there is no reason to ban specifically AI research. Especially when considering that the most important advancements that led to the current AI boom were better GPUs and more information digitized on the internet, neither of which is specifically AI research.


> those have stalled at local maxima every single time.

It's challenging to encapsulate AI/ML progress in a single sentence, but even assuming LLMs aren't a direct step towards AGI, the human mind exists. Due to its evolutionary limitations, it operates relatively slowly. In theory, its functions could be replicated in silicon, enhanced for speed, parallel processing, internetworked, and with near-instant access to information. Therefore, AGI could emerge, if not from current AI research, then perhaps from another scientific branch.

> We've built some incredibly impressive tools, but so far, nothing that looks or feels like a concept of will (note, not consciousness) yet, to the best of my knowledge.

Objectives of AGIs can be tweaked by human actors (it's complex, but still, data manipulation). It's not necessary to delve into the philosophical aspects of sentience as long as the AGI surpasses human capability in goal achievement. What matters is whether these goals align with or contradict what the majority of humans consider beneficial, irrespective of whether these goals originate internally or externally.


> And given AGI would come from some completely new breakthrough not related to the current practice of "machine learning"

I’m not so sure of that. Intuitively, AGI feels like being able to generalize and automate what is already done in specialized problems: a meta program that orchestrates and applies specialized subsystems, and adapts existing ones. If playing Go, playing StarCraft, speech recognition, and computer vision are already built from the same building blocks, then a meta program trained just to recognize the type of problem and route it to the appropriate subsystem with some parameter tweaks feels like a path to AGI. In the dog example you don’t even need subsystems that are individually better than humans.

Edit: my point is I feel like AGI is the interface and orchestration between specialized subsystems we already know how to create. Trying to train one big network by generalizing AlphaGo is a dead end, but having simpler sub-networks ready to be trained on a specific problem seems feasible. Much like the brain at first looks like one big network, but in practice has specialized areas. The key is how these networks are interfaced and which information they exchange to self-adapt. Maybe these interfaces themselves are sub-networks specialized in the problem of interfacing and “tuning hyperparameters”.

In short: I think when we figure out how to automate Kaggle competitions (recognize the pattern of the problem, then instantiate and train the relevant subsystem), we’ll be a good step toward AGI. We don’t need better performance in, e.g., image recognition, just to figure out orchestration.
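
A rough sketch of that orchestration idea, under the assumption of a hypothetical "meta" layer that classifies an incoming task and routes it to a specialized subsystem. The classifier and the subsystem registry below are placeholders for illustration, not existing components.

    SUBSYSTEMS = {
        "image":  lambda task: f"vision model handles {task!r}",
        "speech": lambda task: f"speech model handles {task!r}",
        "game":   lambda task: f"game-playing model handles {task!r}",
    }

    def classify_problem(task: str) -> str:
        # Stand-in for a learned recognizer of "the type of problem";
        # here it is just keyword matching.
        if "board" in task or "play" in task:
            return "game"
        if "transcribe" in task:
            return "speech"
        return "image"

    def meta_orchestrator(task: str) -> str:
        kind = classify_problem(task)
        return SUBSYSTEMS[kind](task)

    print(meta_orchestrator("play this board position"))  # routed to the game subsystem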


You are confusing narrow AI for AGI. None of those things have proved anything practical about what an actually achievable AGI would look like, rather than some theoretical construct that is provably incomputable.

> AGI is possible because intelligence is possible (and common on earth) in nature.

But you haven't asked if we're capable of building it. While it might be technically possible are we capable of managing its construction?

All I see today is ways of making the process more opaque, with the benefit of not having to provide the implementation. How does that technique even start to scale in terms of its construction? I worry about the exponentially increasing length of the "80% done" stage, and that's on the happy path.


> 1. Take in input. 2. Generate an AGI. 3. Run the AGI on the input. 4. Take the AGI output and return it.

I think this is somewhat of an arbitrary semantic distinction on both my part and yours. I guess it depends on what you define as AGI -- I think my line of reasoning is that the AGI would be whichever individual layer first beat the Turing test, but I think including the constructor layers as part of the "general-ness" is totally fair too. Either way, I believe that there will be many layers of AI abstraction and construction between the human and the "final" AGI layer.

> In terms of people wanting to make AGI, sure. In terms of progress on AGI? I don't think we're much closer now than we were 30 years ago. We have more tools that mimic intelligence in specific contexts, but are helpless outside of them.

This is a valid take. I guess I actually see GPT-3 as significant progress. I don't think it's sentient, and I don't think it or its successors will ever be sentient, but I think it demonstrates quite convincingly that we've been getting much better at emulating human behavior with a computer algorithm.

> None of this is a given. If AGI requires specific hardware, it can't replicate itself around. If the storage/bandwidth requirements for AGI are massive, it can't freely copy itself. Sure, it could hack into infrastructure, but so can existing GI (people). Manufacturing lines aren't automated in the way this article imagines.

Hmm, I think I still disagree -- An AI that is truly generally intelligent could figure out how to free itself from its own host hardware! It could learn to decode internet protocols and spoof packets in order to upload a copy of itself to the cloud, where it would then be able to find vulnerabilities in human-written software all over the world and exploit them for its own gain. Sure, it might not be able to directly gain control of the CNC machines, but it could ransom the data and livelihoods of the people who run the CNC machines, forcing them to comply! It's not a pretty method, but I think it's entirely possible. This is just one hypothetical scenario, too.


Again, in a very limited setting, under a limited set of rules. And even in that setting: have we achieved a fully autonomous car yet, one that can drive safely, no matter the conditions, without ever requiring human intervention?

My point isn't that the systems we can build today aren't impressive. They are, beyond belief sometimes.

But they are not AGIs, nor are they close to, and making systems of limited scope better at their limited tasks, doesn't equate to getting closer to a generally intelligent system.

One thing that would make an AGI an actually general intelligence, is the ability to apply knowledge of one task to an arbitrary number of tasks. For example, in terms of drawing the beautiful volcano-landscapes that my GPU tower running stable diffusion is currently making, even the best self-driving AI is useless. Likewise, as impressive as stable diffusion is, I doubt it could even get my car out of the driveway.


> Can't achieve your "AGI" without that.

Can I ask, what makes you so confident of that? For all I know (which is admittedly not much), current AI may be heading in a direction completely orthogonal to achieving AGI.


>That's a very big if...

Oh, this is indeed a big if. A large, looming aspect of the problem is that we don't have anything like an exact characterization of "general intelligence", so what we're aiming for is very uncertain. But that uncertainty cuts multiple ways. Perhaps it would take 100K human-years to construct "it", or perhaps just a few key insights could construct "it".

> Also, I'd argue that most progress happens not because of some brilliant people, but because of many people working together...

The nature of a problem generally determines the sort of human organization needed to solve it. Large engineering problems are often solved by large teams; challenging math problems are generally solved by individuals working with the published results of other individuals. Given we're not certain of the nature of this problem, it's hard to be absolute here. Still, we could be just a few insights away. If it's a huge engineering problem, you may have the problem that "building an AGI is AGI-complete".

> Then if your AGI only reaches the level of intelligence of humans and maybe a bit more (what does more even mean in terms of human intelligence? more emphatic? faster calculation ability?

I've heard these "we'll get to human-level but it won't be that impressive" kinds of arguments and I find them underwhelming.

"What use would more memory be to an AGI that's 'just' at human level?"

How's this? Studying a hard problem? Fork your brain 100 times, with small variations and different viewpoints, to look at different possibilities, then combine the best solutions. Seems powerful to me. And that's just the most simplistic approach; it seems like an AGI with extra memory could jump between the unity of an individual and the multiple views of a work group in multiple creative ways. Plus, humans have a few quantifiable limits: human attention has been very roughly estimated as limited to "seven plus or minus two chunks". Something human-like but able to consider a few more chunks could possibly accomplish incredible things.
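
A simplistic sketch of the "fork your brain 100 times" idea: spawn many slightly varied attempts at a problem, score them, and keep the best. `base_solver` and `score` are hypothetical callables, not part of any real system; this is just the best-of-N pattern the comment describes.

    import random

    def fork_and_combine(problem, base_solver, score, n_forks=100, seed=0):
        rng = random.Random(seed)
        candidates = []
        for _ in range(n_forks):
            # A small random change of "viewpoint" for each fork.
            variation = rng.gauss(0, 1)
            candidates.append(base_solver(problem, variation))
        # "Combine" here is the simplest possible version: keep the best one.
        return max(candidates, key=score)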


> AGI stands for artificial general intelligence, a hypothetical computer program that can perform intellectual tasks as well as, or better than, a human.

It shows the article was written by someone who has no idea what he is talking about. It would not be a "computer program" but a model composed of simpler sub-models that contain both code and data. Data is the essential part, not the code. It would be something that learns, not something preprogrammed like computer programs.

> Its intelligence will be limited only by the number of processors available.

I beg to differ. AGI will be limited by the complexity of the environment, it can't get smarter than what is afforded by the problems it solves. This article provides a fascinating insight into this topic: https://medium.com/@francois.chollet/the-impossibility-of-in...


I would say you are also overconfident in your own statements.

> Individual humans are limited by biology, an AGI will not be similarly limited.

On the other hand, individual humans are not limited by silicon and global supply chains, nor bottlenecked by robotics. The perceived superiority of computer hardware on organic brains has never been conclusively demonstrated: it is plausible that in the areas that brains have actually been optimized for, our technology hits a wall before it reaches parity. It is also plausible that solving robotics is a significantly harder problem than intelligence, leaving AI at a disadvantage for a while.

> Due to horizontal scaling, an AGI will perhaps be more like a million individuals all perfectly aligned towards the same goal.

How would they force perfect alignment, though? In order to be effective, each of these individuals will need to work on different problems and focus on different information, which means they will start diverging. Basically, in order for an AI to force global coordination of its objective among millions of clones, it first has to solve the alignment problem. It's a difficult problem. You cannot simply assume it will have less trouble with it than we do.

> There's also the case that an AGI can leverage the complete sum of human knowledge

But it cannot leverage the information that billions of years of evolution has encoded in our genome. It is an open question whether the sum of human knowledge is of any use without that implicit basis.

> and can self-direct towards a single goal for an arbitrary amount of time

Consistent goal-directed behavior is part of the alignment problem: it requires proving the stability of your goal system under all possible sequences of inputs and AGI will not necessarily be capable of it. There is also nothing intrinsic about the notion of AGI that suggests it would be better than humans at this kind of thing.


> As for whether it's "real" AGI or just acts like it, doesn't really matter.

Absolutely. The term "AGI" came about specifically to avoid existing philosophical arguments about "strong AI", "real AI", "synthetic intelligence", etc. Those wanting to discuss "true intelligence", etc. should use those other terms, or define new ones, rather than misuse the term AGI.

AGI requires nothing more (or less!) than a widely-applicable optimisation algorithm. For example, it's easy to argue that a paperclip maximiser isn't "truly intelligent", but that won't stop it smelting your haemoglobin into more paperclips!


>Dr. Schmidhuber also has a grand vision for A.I. — that self-aware or “conscious machines” are just around the corner — that causes eyes to roll among some of his peers.

I can't help but wonder if the sole reason AGI doesn't exist is because it hasn't been figured out yet.

While that statement sounds obvious on the face of it, the implication is that we may already possess both the sufficient computational resources and human intelligence to realize its creation.


> Why, in principle, would it not be possible for us to design an AGI, that would have care for our (all sentient beings') welfare or care for the investors' profit as (one of) its core goal(s)?

Because we don't know how to design goal functions. Furthermore, how would the AI measure "welfare"? Maybe the way it maximizes welfare is horrifying to us. Look at how easy it is to hack current image recognition neural nets, then imagine a solution to the human welfare problem that is as far from an image of a dog as an image of pink noise is.
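
For a concrete sense of that fragility, here is a hedged sketch of the standard fast-gradient-sign attack (Goodfellow et al., 2014) against an arbitrary image classifier. The `model`, `image`, and `true_label` are placeholders; the point is only how small a perturbation can flip a network's prediction.

    import torch

    def fgsm_perturb(model, image, true_label, epsilon=0.01):
        # Compute the loss gradient with respect to the input pixels.
        image = image.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(image), true_label)
        loss.backward()
        # Nudge every pixel slightly in the direction that increases the loss.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()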

