
My thoughts on AGI (at least in the sense of being indistinguishable from interaction with a human) are the same as my thoughts on extraterrestrial life: I'll believe it only when I see it (or at least when provided with proof that the mechanism is understood). This extrapolation on a sample size of one is something I don't understand. How is the fact that machine learning can do specific stuff better than humans different in principle than the fact that a hand calculator can do some specific stuff better than humans? On what evidence can we extrapolate from this to AGI?

We haven't found life outside this planet, and we haven't created life in a lab, therefore n=1 for assessing probability of life outside earth (which means we can't calculate a probability for this yet). Likewise, we haven't created anything remotely like animal intelligence (let alone human) and we have no good theory regarding how it works, so n=1 for existing forms of general intelligence.

Note that I'm not saying there can be no extraterrestrial life or that we will never develop AGI, just that I haven't seen any evidence at this point in time that any opinions for or against their possibility are anything more than baseless speculation.




I don't think the goalposts for AGI have ever moved; it's defined rather well by the Turing test. AGI must be general, capable of reasoning at least as well as a human in any domain of inquiry, at the same time. Showing better-than-human reasoning in certain domains is trivial, and has been happening since at least Babbage's difference engine.

However, while AI has been overtaking human reasoning on many specific problems, we are still very far from any kind of general intelligence that could conduct itself in the world or in open-ended conversation with anything approaching human (or basically any multicellular organism) intelligence.

Furthermore, it remains obvious that even our best specialized models require vastly more training (number of examples plus time) and energy than a human or animal to reach similar performance, where comparison is possible. This may be due to the 'hidden' learning encoded in every living being today by millions of years of evolution, but it may also be that we are missing some fundamental advances in the act of learning itself.


I think quite the opposite: the vast difference between living beings' ability to learn how to exist in their environment (including other living beings and sometimes social structures) and the very limited successes of even modern AI still shows that we are very far away from AGI.

We couldn't create an ant-level AI right now; imagining we are close to a human-level one is pretty absurd.

And if we're talking about intelligence, human exceptionalism is hard to argue against (in the context of Earth; no point in speculating about alien life). There are very few creatures on Earth trying to build AIs, and I for one would not consider something to be an AGI if it couldn't even understand the concept.


IMO AGI will never exist, period, let alone exceed human intelligence. Sure, we'll have increasingly powerful pattern recognition, but that's all it will ever be.

That's not even close to true. Humans don't have the ability to exponentially amplify their own intelligence. It's not too farfetched to imagine that AGI just might have such a capability.

My reasoning is based on two things. For one, what we know about brains in general and the amount of time it has taken to learn these things (relatively little - nothing concrete about memory or computation). For the other, the obvious limitations in all publicly shown AI models, despite their ever-increasing sizes, and the limited nature of the problems they are trying to solve.

It seems to me extremely clear that we are attacking the problem from two directions - neuroscience to try to understand how the only example of general intelligence works, and machine learning to try to engineer our way from solving specific problems to creating a generalized problem solver. Both directions are producing some results, but slowly, and with no ability to collaborate for now (no one is taking inspiration from actual neural networks in ML, despite the naming; and there is no insight from ML that could be applicable in formulating hypotheses about living brains).

So I can't imagine how anyone really believes that we are close to AGI. The only way I can see that happen is if the problem turns out to be much, much simpler than we believe - if it turns out that you can actually find a simple mathematical model that works more or less as well as the entire human brain.

I wouldn't hold my breath for this, since evolution has had almost a billion years to arrive at complex brains, while basic computation started with the first unicellular organisms (even organelles inside the cell and nucleus implement simple algorithms to digest and reproduce, and even unicellular organisms tend to have some amount of directed movement and environmental awareness).

This is all not to mention that we currently have no way of teaching an AI the vast amount of human common-sense knowledge that is likely baked into our genes, and it's hard to tell how much that will matter for true AGI.

And even then, we shouldn't forget that there is no obvious way to go from approximately human-level AGI to the kinds of sci-fi super-super-human AGIs that some AI catastrophists imagine. There isn't even any fundamental reason to assume that it is even possible to be significantly more intelligent than a human, in a general sort of way (there is also no reason to assume that you can't be!).


If you measure AGI by its ability to be human, the only thing you'll be able to find is a human.

That means you will never believe in the consciousness of an intelligence that doesn't try to be human, whether it's a god, animal, machine, or alien. Until you expand your definition, that is.


When we talk about AGI, everyone always takes it for granted that an AGI would be human-like. But if you look at the complexity of the brain, and how poor our attempts to emulate it have been so far, I think it's a virtual certainty that the first successful attempts at AGI will create non-human intelligence. In many ways, I expect that our creations will find us as unrelatable as we find dolphins or other highly intelligent animals.

I've been interested in AGI for a long time. But I've never been on board with the speculation about AGIs making multiple orders of magnitude improvement in intelligence.

That is total speculation, and also it's not necessary to assume so much to have a similar outcome. It seems much easier to imagine an AI that is 2 or 3 times smarter than a human, and has very fast transfer of knowledge with other compatible beings.

I think that's enough for them to take over, if there are enough of them.

But anyway it seems obvious to me that we absolutely should avoid trying to build fully autonomous digital creatures that compete with us. We should rather aim carefully for something more like a Star Trek computer, without any real autonomy or necessarily fully generalized skills or cognition or animal-like characteristics/drives.


If we actually create an AGI, it will view us much like we view other animals/insects/plants.

People often get wrapped up in an AGI's incentive structure and what intentions it will have, but IMO we have just as much chance of controlling it as wild rabbits have of controlling humans.

It will be a massive leap in intelligence, likely with concepts and ways of understanding reality that we either never considered or aren't capable of grasping. Again, that's *if* we make an AGI, not these LLM machine learning algorithms being paraded around as AI.


I doubt there will be AGI in our lifetime. Maybe some breakthrough happens but it won't be even close to human intelligence.

Even if, say, we are close to “AGI”, it's going to be something significantly different from human and animal intelligence. One of the most obvious differences, which I don't see people bring up:

1. ChatGPT can only respond to an input. If you left it alone it would literally do nothing. It cannot spontaneously generate thoughts or choose what stimuli to respond to.

This suggests that the human brain is more like a dynamical system: you switch it on and it keeps going, and there is no hard line between learning and inference as there is in deep learning. ChatGPT and the like feel very much like digital systems: they can only respond to the inputs provided, and there is a clear demarcation between the learning stage and the inference stage.
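A toy sketch of that contrast (purely illustrative - the names and numbers are made up, and this is not how any real model is implemented):

    import random

    # Regime 1: a "digital" model in the sense above - a frozen function that
    # does nothing unless an input arrives, and never updates itself afterwards.
    def frozen_model(prompt):
        return "response to: " + prompt  # weights fixed once training ended

    # Regime 2: a crude "dynamical system" - internal state keeps evolving on
    # every tick whether or not there is any input, and there is no separate
    # training phase; the system changes as it runs.
    class TickingAgent:
        def __init__(self):
            self.state = 0.0

        def step(self, stimulus=None):
            drive = stimulus if stimulus is not None else random.uniform(-1, 1)
            self.state = 0.9 * self.state + 0.1 * drive  # evolves regardless of input
            return self.state

    agent = TickingAgent()
    for _ in range(5):
        agent.step()              # keeps "ticking" with no input at all

    print(frozen_model("hello"))  # only ever acts when prompted
    print(agent.state)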

Note: it is still possible that these AI systems will reach human-like performance on a variety of tasks. But they will seem very weird and different from the intelligent systems we are exposed to in nature.


I'm surprised by the number of "is AGI even possible" comments here and would love to hear more.

I personally think AGI is far off, but always assumed it was an inevitability.

Obviously, humans are sentient with GI, and various other animals range from close-ish to humans to not-even-close, but still orders of magnitude more capable than any machine.

I.e., GI is a real thing, in the real world. It's not time travel, immortality, etc.

I certainly understand the religious perspective. If you believe humans/life are special, created by some higher power, and that the created cannot create, then AGI is not possible. But given the number of "is AGI possible?" comments, I assume not all are religiously based (HN doesn't seem to be a highly religious cohort to me).

What are the common secular arguments against AGI?

Are people simply doubting the more narrow view that AGI is possible via ML implemented on existing computing technology? Or the idea in general?

While the article does focus on current ML trajectory and "digital" solutions, its core position is mostly focused on a "new approach" and AI creating AGI.

I'd consider a scenario where highly advanced but non-sentient ML algorithms figure out how to devise a new technology (be that digital or analog, inorganic or organic) that leads to AGI as an outcome consistent with this article.

Is that viable in 20 years' time? No idea, but on an infinite timescale it certainly seems more possible than not to me, given that we all already exist as black-box MVPs that just haven't been reverse-engineered yet.


If humans can solve problems that require more computational resources than exist in the universe, then AGI is not possible. I have run one experiment to demonstrate this.

<wild speculation>

AGI is not the same as human intelligence. It has the generality of human intelligence, but since it isn't restricted by biology it can scale up much more easily and can achieve superhuman performance in pretty much any individual task, group of tasks, or entire scientific or technological field. That's pretty exciting.

</wild speculation>

<reality>

It's questionable whether the above is possible at all. In all likelihood none of us will see anything even remotely close to this in our lifetimes. We're currently so far away from it that we don't even know how to get started on solving such a problem. Nobody is currently working on this, despite how they're advertising their work.

</reality>

I guess what I'm saying isn't that AGI will be underwhelming; it's that it won't exist at all, at least as far as we are concerned.


We don't even know if AGI is possible. Let's not mince words here: nothing, and I do mean nothing, not a single, solitary model on offer by anyone, anywhere, right now has a prayer of becoming AGI. That's just... not how it's going to be done. What we have right now are fantastically powerful, interesting, and neato-to-play-with pattern recognition programs that can take their understanding of patterns and reproduce more of them given prompts. That's it. That is not general intelligence of any sort. It's not even really creative; it's analogous to creative output, but it isn't outputting in order to say something - it's simply taking samples of all the things it has seen previously and making something new with as few "errors" as possible, whatever that means in context. This is not intelligence of any sort, period, paragraph.
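As a purely illustrative toy (nothing like a modern model, just to make "reproducing previously seen patterns" concrete), even a bigram Markov generator recombines familiar fragments into "new" output without meaning anything by it:

    import random
    from collections import defaultdict

    # Learn which word follows which in a tiny corpus.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()
    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)

    # Generate by repeatedly sampling a word that has followed the current one.
    word, output = "the", ["the"]
    for _ in range(8):
        nexts = follows[word]
        word = random.choice(nexts) if nexts else random.choice(corpus)
        output.append(word)

    print(" ".join(output))  # recombines seen fragments; nothing is "meant" by it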

I don't know to what extent OpenAI and their compatriots are actually trying to bring forth artificial life (or at least consciousness), versus how much they're just banking on how cool AI is as a term in order to funnel even more money to themselves chasing the pipe dream of building it, and at this point, I don't care. Even the products they have made do not hold a candle to the things they claim to be trying to make, but they're more than happy to talk them up like there's a chance. And I suppose there is a chance, but I really struggle to see ChatGPT turning into Skynet.


Well, opinions differ. I'd put the probability of there being AGI - as in computers able to think as well as a human in most ways - by 2300 at close to 100%.

Things we have now are getting closer, e.g. DeepMind's Gato and GPT-3.


No, it’s proof that human intelligence is possible. Since we don’t know how that works either, we can’t really draw any conclusions about the viability of replicating it artificially (if that’s the definition of AGI).

You're right, the existence of humans strongly suggests that artificial general intelligence is possible.

But the existence of DALL-E 2, which is not AGI and nonetheless produces beautiful art, does not convince me that software will be writing reliable software before AGI happens.


This seems like a weak argument to me. We don't really understand how human intelligence works yet so how can we claim that computers will never realize similar intelligence? We don't know for a fact that human intelligence depends on these things.

I'm personally skeptical that we'll see AGI any time soon but I don't think we know enough to say this definitively.

