
I think the most accurate answer is that we just don't know. Since we really don't know how an AGI could work, we have no idea which of the advances we've made are getting us closer, if at all. Is it just an issue of faster GPUs? Is the work done on deep learning advancing us? I don't think we'll know until we actually reach AGI, and can see in hindsight what was important, and what was a dead end.

I do take exception to some of the specific statements you make though, which make it sound like the only real progress has been on the hardware side. There's been plenty of research done, and lots of small and even large advances (from figuring out which activation functions work well, à la ReLU, all the way to GANs, which were invented a few years ago and show amazing results). Also, the idea that "just applied statistics" won't get us to AGI is IMO strongly mistaken, especially if you consider all the work done in ML so far to be "just" applied statistics. I'm not sure why, conceptually, that wouldn't be enough.




I would definitely argue there's been substantial progress towards AGI. We've been seeing increasingly impressive AI results from increasingly general architectures. What criteria for progress towards AGI do you have that aren't being met?

Can someone please explain what has happened in ML or AI that makes AGI closer? Whilst some practical results (image processing) have been impressive, the underlying conceptual frameworks have not really changed for 20 or 30 years. We're mostly seeing quantitative improvements (size of data, GPGPU), not qualitative insights.

ML in general is just applied statistics. That's not going to get you to AGI.

Deep Learning is just hand-crafted algorithms for very specific tasks, like computer vision, highly parameterised and tuned using a simple metaheuristic.

All we've done is achieve the "preprocessing" step of extracting features automatically from some raw data. It's super-impressive because we're so early in the development of computing, but we are absolutely nowhere near AGI. We don't even have any insights as to where to begin to create intelligence rather than these preprocessing steps. Neuroscience doesn't even understand the basics of how a neuron works, but we do know that neurons are massively more complex than the trivial processing units used in Deep Learning.
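To make that gap concrete, here's a rough sketch (the models, names, and parameter values are purely illustrative, not a claim about any specific system):

    import numpy as np

    # The "trivial processing unit" of deep learning: a weighted sum plus ReLU.
    def artificial_neuron(x, w, b):
        return max(0.0, np.dot(w, x) + b)

    # Even a deliberately over-simplified biological model, the leaky
    # integrate-and-fire neuron, is a dynamical system with internal state
    # that evolves over time -- and real neurons are far richer than this.
    def lif_neuron(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
        v, spikes = 0.0, []
        for i_t in input_current:
            v += dt * (i_t - v) / tau    # leaky integration of membrane potential
            if v >= v_thresh:            # fire and reset at threshold
                spikes.append(True)
                v = v_reset
            else:
                spikes.append(False)
        return spikes

    spike_train = lif_neuron([1.5] * 100)  # constant drive yields a spike train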

Taking the other side for a moment: even if we're, say, 500 or 1,000 years out from AGI (I'd guess < 500), you could argue that such a period is the blink of an eye on the evolutionary scale, so discussion is fine, but let's not lose any sleep over it just yet.

What I find most frustrating about this debate is that a lot of people are once again massively overselling ML/DL, and that's going to cause disappointment and funding problems in the future. Industry and academia are both to blame, and it's this kind of nonsense that holds science back.


It's not AGI. It's machine learning. It's unknown whether it's possible to ever achieve AGI using machine learning techniques.

It's still not that obvious. There has been a lot of interesting stuff in this current iteration of "AI", but the overall approach could still end up being a dead end with respect to AGI itself.

It's an old discussion, and while a few of the deep learning results are really impressive I don't think any of them have fundamentally changed that discussion, yet.


None of these come close to AGI, and combining them in any number of constellations and at increasing complexity still misses a qualitative step.

All those machine learning tricks have been available for the longest time. The two orders of magnitude speed-up that we have received (courtesy of the gaming industry) should be seen as such a qualitative step, and yet we are no closer to a general AI than we were before that speed-up took place. If anything, we've learned how incredibly hard the problem really is, and the decade-old predictions of how close we are have already slipped significantly.


I don't know. It sounds more like the current approaches don't get us AGI. The machine learning tools are not enough. But that doesn't mean AGI is intrinsically hard. Maybe you just need a couple of different things in tandem, like OpenCog does.

I'm super curious why you were optimistic about AGI in the first place?

It seems to me that a majority of the performance gains in ML are a result of using better hardware to run brute-force statistics with larger, more complex models, while the algorithms themselves have been improving at only a nominal rate.


Except that there's no such thing as the field of "AGI" (or if there is, it's a subfield of philosophy). Asking modern ML and Deep Learning researchers for their thoughts on AGI is like asking the Wright brothers about a Mars mission.

Sure, they're the closest thing we have to "experts", but there's not just one, but likely 5 or 10 more field-changing, if not world-changing, leaps we need to make before the technologies we have will resemble anything like AGI.

And that's ignoring the people who ascribe deity-like powers to some potential AGI. Air gap the computer and control the inputs and outputs. We can formally prove what a system is capable of. That fixes the problem.


Right. I think one reason that the definition of AGI is so contentious is that we're not that close to it. All of the current benchmarks are interesting, but I don't see how we would ever use them to declare that AGI has been reached. And quite frankly, I don't think we'd even care about most of them if we had a truly intelligent system.

For me, if you hook up an AI with no training to a vehicle and it drives at a human level in arbitrary scenarios, I'd consider it to be AGI. It seems obvious to me that we're not very close to this.


As someone who has worked in the field of AI/ML for quite a while now, the problem with current AGI predictions is that ML hasn't done anything new since the 80s (or arguably earlier).

At the end of the day, all ML is using gradient descent to do some sort of non-linear projection of the data onto a latent space, then doing some relatively simple math in this latent space to perform some task.
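A toy sketch of that picture (all names and numbers here are illustrative, mine, not from any particular library): a hand-rolled two-layer network where gradient descent learns the non-linear projection, and the "simple math" on top is just a linear model.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 8))                # toy inputs
    y = np.sin(X.sum(axis=1, keepdims=True))     # toy non-linear target

    W1 = rng.normal(scale=0.1, size=(8, 32)); b1 = np.zeros(32)
    W2 = rng.normal(scale=0.1, size=(32, 1)); b2 = np.zeros(1)

    lr = 0.1
    for _ in range(2000):
        H = np.maximum(0.0, X @ W1 + b1)  # non-linear projection onto a latent space
        pred = H @ W2 + b2                # "simple math" there: plain linear regression
        err = pred - y
        # gradient descent on mean squared error, backpropagated by hand
        gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
        dH = (err @ W2.T) * (H > 0)
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    print(float((err ** 2).mean()))  # MSE falls as the latent projection improves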

Personally I think the limits of this technique are far better than I would have thought 10 years ago.

However we are only near AGI if this is in fact how intelligence works (or can work) and I don't believe we've seen any evidence of this. And there are some very big assumptions baked into this approach.

Essentially all we've done is push the basic model proposed by linear regression to its absolute limits, but, as impressive as the results are, I'm not entirely convinced this will get us over the line to AGI.


My opinion is that it is a very long-term goal. Compare it to the autonomous car: getting the first 90% of a completely autonomous car is much easier than the last 10%. With AGI it is probably the same. Deep learning has gained traction in recent years, but we are using almost the same algorithms as 20 years ago, just on better hardware (plus some tweaks to make them work with bigger models). But I think we know almost nothing about how to achieve AGI. Just compare our deep learning models with the models we have of our brains: they are a subset of what we know about our brains. And I don't think we know a lot about our brains; we know very little.

We are a very long way from even having 0.0001% of the compute required to produce a weak AGI.

like everything, it will take far more resources and time than we can predict today with our best estimates, partly because that's just how these things turn out, and mostly because a true AGI will likely require billions of neural networks all adapting and swapping neurons and communication pathways amongst themselves, and training themselves at the same time as they are doing useful work.

we have no idea how to do anything like this at scale, yet.


I’m surprised at the number of comments greatly downplaying the likelihood of AGI in the coming decades or saying that it is impossible.

The rate of progress in AI technologies seems absolutely incredible to me. Sure, there is tons of hype and noise to weed through. And much of the technology and research hasn’t yet found commercial applications. But the progress in recent decades is objectively incredible.

We are reaching a point where all games are basically falling to AI: Go, no-limit hold ’em (poker), StarCraft, Dota, and so on.

In other domains, we have GPT-3 and AlphaFold. And I’m sure there are many developments I’m not aware of.

From what I can tell, GPT-3 is mainly significant for challenging the notion that increased scale can’t be the primary factor in significant AI advances. The jury is still out, but my understanding is that it demonstrated that modest tweaks to existing algorithms, combined with a massive increase in model size, can result in large performance improvements. GPT-3 had quite limited “memory”, which is one of the reasons it struggled with coherency. How would a model 10x-100x larger than GPT-3, with significantly enhanced memory, perform? What if it were trained to predict the next frame of video in addition to the next sequence of text?
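For context, the objective being scaled up is essentially just next-token prediction. A toy sketch of that loss (the token ids, shapes, and names are my own illustration, not anything from OpenAI):

    import numpy as np

    # Hypothetical token ids standing in for a tokenized text corpus.
    tokens = np.array([2, 0, 1, 3, 1, 0])
    vocab_size = 4

    # Stand-in for a language model's per-step outputs; a real transformer
    # produces these from the preceding context.
    logits = np.random.default_rng(0).normal(size=(len(tokens), vocab_size))

    def next_token_loss(logits, tokens):
        # Cross-entropy of predicting token t+1 from the model's output at step t.
        step_logits, targets = logits[:-1], tokens[1:]
        log_probs = step_logits - np.log(np.exp(step_logits).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(targets)), targets].mean()

    print(next_token_loss(logits, tokens))  # scale changes; the objective doesn't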

No one knows how close we are. It will likely happen quickly, without many people knowing it is coming. It will seem “impossible” to most even as the last few pieces are falling into place. Major technological paradigm shifts usually happen unexpectedly, as history shows us.

I’m rambling a bit (If I had more time / skill I would have written a shorter comment ;). Suffice to say I’m surprised by the number of confident proclamations against AGI. Our brains are amazing. But they aren’t magic.


I'm not attempting to be a smart ass, but you should back up your speculation with facts and references. Is the best research a secret?

Your post implied that you have some special insight into research groups that are doing AGI research and are making steady progress. I have done plenty of Google searches and not run across any notable AGI progress.

The one reference you provided was for a deep learning implementation, which is what you seemed to be dismissing.


I don't entirely disagree that we're progressing towards AGI. For me, the most disappointing part of machine learning is the complete lack of systems that can extrapolate.

I’m not an AGI fanboy. I agree that the current line of inquiry (i.e. deep learning) won’t get us there. I think neurosymbolic reasoning is needed. That work is still nascent, and worse, we don’t have great ways to connect our current paradigm to it.

Not in the field, so genuine question: what is the evidence/theory to support the notion that deep learning is at all a reasonable route towards AGI? As I understand it, this is nothing like how actual neurons work - and since they are the only "hardware" which has ever demonstrated general intelligence, hoping for AGI from current computational neural networks feels like a stretch, at best.

Four things are needed to truly build advanced AIs (read: deep learning, deep reinforcement learning): new algorithms, complex data sets, advanced GPU-based computing (optimally GPU, in any case), and an open community.

Actually we have no idea what the constituent parts of AGI are.

What you mention is the current state of the art for narrow AI projects like classification and segmentation - which is basically 100% of machine/deep learning currently, but it is not generalizable yet.

As an example, the pre-eminent biologically inspired computing researcher Richard Granger is skeptical (and I agree) that parallel silicon will be able to scale to the flexibility that we see in biological learning (aka general intelligence).

Based on what I see so far from OpenAI I don't see them getting to AGI. They haven't stated it as an explicit goal, I think because they don't have a pathway (nobody does by the way).


Asking this openly, even on HN, is going to get a lot of unqualified answers. So I'll preface by saying that I have a recent PhD in deep RL and am pretty well versed in the cutting-edge developments of ML.

I think your question has two angles. First, do LLMs have potential for AGI? I think that's an emphatic no. There is nothing special about this generative model vs., say, something like sampling from a mixture of Gaussians. Much better generations, and super impressive that it works, sure, but there is no mechanism for it to improve itself, let alone change its "prime directive". See Sam Altman's claim that RL with human feedback is where most of the gains are now.
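To spell out that comparison (a toy sketch; the distributions and names are my own illustration): mechanically, both amount to drawing samples from a fixed, learned distribution, with no loop that updates the model itself.

    import numpy as np

    rng = np.random.default_rng(0)

    # Sampling from a fitted mixture of Gaussians...
    def sample_gmm(weights, means, stds):
        k = rng.choice(len(weights), p=weights)   # pick a component
        return rng.normal(means[k], stds[k])      # then sample from it

    # ...and sampling a token from an LM's output distribution are, at this
    # level, the same operation: draw from a fixed learned p(x).
    def sample_token(next_token_probs):
        return rng.choice(len(next_token_probs), p=next_token_probs)

    x = sample_gmm([0.3, 0.7], [0.0, 5.0], [1.0, 2.0])
    t = sample_token([0.1, 0.2, 0.6, 0.1])
    # Neither sampler contains any mechanism for improving its own parameters.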

At a higher level, there is the concern that, at the pace we are going, we can extrapolate that AGI is around the corner. My take on this is that basically everything made possible in the last 20 years is because of GPUs and better tooling. Specifically, the recent hype is because of how democratized things are getting. We have kids using ChatGPT to do homework. This is very disruptive from a societal perspective, but we are essentially at the peak of the rate at which this technology is being adopted. The growth rate will decelerate, and it will stop being news once society has learned to adapt to the new technology.

However, from a technical perspective, these concerns are like looking at children playing with Legos and worrying they will build the next nuclear bomb. Hypothetically feasible, but there are so many fundamental gaps that clearly separate the two that the real concern would be when we see Manhattan Projects succeeding. From an outsider's perspective, DeepMind and AlphaZero, or OpenAI and LLMs, seem like Manhattan Projects, but the fact that these companies have spent billions with no returns yet should say a lot about the utility of these models.

