
We don't even know if AGI is possible. Let's not mince words here: nothing, and I do mean nothing, not a single, solitary model on offer by anyone, anywhere, right now has a prayer of becoming AGI. That's just... not how it's going to be done. What we have right now are fantastically powerful, interesting, and neato-to-play-with pattern recognition programs that can take their grasp of patterns and reproduce more of them given prompts. That's it. That is not general intelligence of any sort, and it's not even really creative. It's analogous to creative output, but it isn't outputting anything in order to say something; it's simply taking samples of all the things it has seen previously and making something new with as few "errors" as possible, whatever that means in context. This is not intelligence of any sort, period, paragraph.

I don't know to what extent OpenAI and their compatriots are actually trying to bring forth artificial life (or at least consciousness), versus how much they're just banking on how cool AI is as a term in order to funnel even more money to themselves chasing the pipe dream of building it, and at this point, I don't care. Even the products they have made do not hold a candle to the things they claim to be trying to make, but they're more than happy to talk them up like there's a chance. And I suppose there is a chance, but I really struggle to see ChatGPT turning into Skynet.




No. Stop reading reddit.com/r/futurology or that _awful_ article by waitbutwhy. Sure, it's a possibility, but we're still making baby steps and tiny tools, pastiches of intelligence as opposed to genuine intelligence or consciousness.

People who ask questions such as this often don't consider that it remains eminently possible that AGI is simply impossible for us to build. Also remember that anything an AI can do in the future, a human plus an AI can probably do better. Right now at least they're just tools we use, and they will remain so for the foreseeable future.


AGI is currently as likely as teleportation, time travel or warp drives. You can write a computer program to do just about anything. Artificial "General" intelligence is simply not a thing. We're not even making progress toward it.

Either I don't understand what AGI is or OpenAI doesn't understand what AGI is. Given the way GPT-3, 4, 5, or even 1000 works, it can't be possible for AGI to be achieved that way. How could we say that we've got AGI when the "intelligence" is a synthesis of existing knowledge, without ever understanding what any of it means?

If real AGI is to be achieved, it can only be through a completely different paradigm (don't ask me which), and definitely not through machine learning.

My opinion is that this whole machine learning thing will make achieving AGI even more difficult, since most resources have shifted their focus to it instead of trying to achieve real artificial intelligence (if that's even possible anyway).


There's no such thing as AGI in my opinion. There is no way to create a "conscious" machine. We might be able to come up with some reasonably impressive imitations, but nothing that is conscious or actually thinking like a human.

From the website:

"About OpenAI: OpenAI is a non-profit AI research company, discovering and enacting the path to safe artificial general intelligence."

At any rate, the bigger question is whether AGI is even possible. Why does no one even take a stab at answering this question? We have all these well funded research institutes that just assume AGI is possible, and we could just be throwing all the money down a hole.


I doubt there will be AGI in our lifetime. Maybe some breakthrough happens but it won't be even close to human intelligence.

IMO AGI will never exist, period, let alone exceed human intelligence. Sure, we'll have increasingly powerful pattern recognition, but that's all it will ever be.

There has been next to ZERO progress towards genuine AGI despite a never-ending deluge of AI articles; that's normally the cause of scepticism.

After several decades and a much-hyped last few years we have fake cleverness - impressively so in both cases - but nothing more.


<wild speculation>

AGI is not the same as human intelligence. It has the generality of human intelligence but since it isn't restricted by biology it can scale up much easier and can achieve superhuman performance in pretty much any individual task, group of tasks, or entire scientific or technological fields. That's pretty exciting.

</wild speculation>

<reality>

It's questionable whether the above is possible at all. In all likelihood none of us will see anything even remotely close to this in our lifetimes. We're currently so far away from it that we don't even know how to get started on solving such a problem. Nobody is currently working on this, despite how they're advertising their work.

</reality>

I guess what I'm saying isn't that AGI will be underwhelming, it's that it won't exist at all, at least as far as we are concerned.


There is no evidence that building an AGI is actually possible, or, even if it is possible, that it will be better than humans at inventing anything. Let me know when AGI researchers build something as smart as a lab mouse. Until then this is just idle speculation and totally useless as an input to setting public policies.

Of course limited AI systems (without the "G") will continue to improve and eliminate jobs that don't involve advanced reasoning or creativity.


No one can explain why AGI is impossible because you can't prove a negative. But so far there is still no clear path to a solution. We can't be confident that we're on the right track towards human-level intelligence until we can build something roughly equivalent to, let's say, a reptile brain (or pick some other similar target if you prefer).

If you have an idea for a technical approach then go ahead and build it. See what happens.


Yes, I don't think AGI (which is an entirely ill-defined concept, but put that aside) will happen until AI is embodied in the physical world.

I'm talking about the efforts to really reproduce human-like intelligence. I'm not saying that AI isn't a field; I am saying that if you want human-like intelligence, the AGI people are the furthest along, or at least the most serious about it.

Did you really look into AGI, for example the past conferences or those projects, and conclude that it is just valueless holistic mumbo-jumbo?

That is so unfair and inaccurate, I can't see how you can possibly be evaluating things rationally if you really came to that conclusion.


But we actually understand so little about how the brain works, let alone how intelligence emerges from it.

What I am saying is that AGI may be impossible, yet people are so convinced that it's just around the corner, given enough hardware and clever enough software.

"It's just around the corner," and we don't even know whether it is possible at all.


I may really only be speaking for myself here, but I have very sincere doubts that anyone who has done moderately serious work on "AI" and wrangled with the nitty-gritty details of it all really has huge expectations for AGI. I mean, if it comes during my lifetime, hurrah! But personally I'm not gonna sit around waiting for it or depend on it for anything. That being said, is the current AI tech as we have it useless? Of course not. Things like protein folding and AlphaGo are still huge leaps forward in tech; it'd be kind of silly to treat AGI as the only thing worth achieving.

This is no AGI. An AGI is supposed to be the cognitive equivalent of a human, right? The "AI" being pushed out to people these days can't even count.

AGI is still a long way off. The history of AI goes back 65 years and there have been probably a dozen episodes where people said "AGI is right around the corner" because some program did something surprising and impressive. It always turns out human intelligence is much, much harder than we think it is.

I saw a tweet the other day that sums up the current situation perfectly: "I don't need AI to paint pictures and write poetry so I have more time to fold laundry and wash dishes. I want the AI to do the laundry and dishes so I have more time to paint and write poetry."


Then you grossly misunderstand how far along AI is. AGI is not even a remote possibility with current techniques and implementations (and, I would contend, it's entirely impossible with digital logic). It's just a massive amount of statistics that was computationally infeasible with available hardware until recently.

We don't have a baseline understanding of consciousness or intuition to a degree that we could even begin to replicate it.


I think the current misuse of AI is about as likely to produce AGI as a toddler's finger painting is to produce a Mona Lisa, and the whole AGI drama is blown way out of proportion. Right now the state of the field is such that no one can even begin to contemplate how to create the very basic underpinnings of anything remotely resembling AGI. That's how fundamental this problem still is.

That's not to say that there's no way humanity can be fucked over by the more pedestrian "garden variety" AI that is within our technical capabilities.

It’s to say that AGI is a nebulous, unobtainable red herring which only serves to detract from the more immediate issues.

