
No. Stop reading reddit.com/r/futurology or that _awful_ article by waitbutwhy. Sure, it's a possibility, but we're still taking baby steps and building tiny tools: pastiches of intelligence as opposed to genuine intelligence or consciousness.

People who ask questions like this often don't consider that it remains entirely possible that AGI is something we simply cannot build. Also remember that anything an AI can do in the future, a human plus an AI can probably do better. Right now, at least, they're just tools we use, and they will remain so for the foreseeable future.




AGI is currently as likely as teleportation, time travel, or warp drives. You can write a computer program to do just about anything, but artificial "general" intelligence is simply not a thing. We're not even making progress toward it.

I'd say AGI is very much possible, because general intelligence already exists (for example: humans), and unless there is a soul or some other kind of "literal magic" involved (and I'd be doubtful even then), there is no reason we couldn't one day build AGI.

It also seems like, unless something goes very wrong with the exponential growth curve in computational capacity humanity has enjoyed, we'll be able to straight up simulate a human brain by 2100 (probably well before).
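A rough back-of-envelope sketch of that claim. Every number below is an assumption plugged in for illustration: whole-brain emulation cost estimates span many orders of magnitude, and the doubling time is a guess at the historical trend continuing.

```python
# Back-of-envelope check of the "simulate a brain before 2100" claim.
# Assumptions (illustrative, not established facts):
#   - sustained compute of a large cluster today: ~1e17 FLOP/s
#   - compute doubling time: ~2 years, if the historical trend holds
#   - whole-brain emulation cost: literature estimates span roughly
#     1e16..1e22 FLOP/s depending on the level of biological detail
import math

CURRENT_FLOPS = 1e17
DOUBLING_YEARS = 2.0

for brain_flops in (1e16, 1e18, 1e22):
    doublings = max(math.log2(brain_flops / CURRENT_FLOPS), 0)
    print(f"target {brain_flops:.0e} FLOP/s: "
          f"~{doublings * DOUBLING_YEARS:.0f} years of doubling needed")
# Even the pessimistic 1e22 target lands around ~33 years out under
# these assumptions, which is where "well before 2100" comes from --
# and also why the answer swings wildly with the exponents chosen.
```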


There is no evidence that building an AGI is actually possible, or that, even if it is possible, it will be better than humans at inventing anything. Let me know when AGI researchers build something as smart as a lab mouse. Until then this is just idle speculation, and totally useless as an input to setting public policy.

Of course limited AI systems (without the "G") will continue to improve and eliminate jobs that don't involve advanced reasoning or creativity.


I don’t think AGI is necessarily impossible, but I’m not convinced that it’s possible to achieve in a way that gets around the constraints of human intelligence. The singularity idea is basically the assumption that AGI will scale the same way computers have scaled over the past several decades, but if AGI turns out to require special hardware and years of training the same way we do, it’s not obvious that it’s going to be anything more remarkable than we are.

Even if AGI is achievable, we already have plenty of human intelligence. It remains to be seen whether AGI will offer anything beyond what we can already do.

Not any time soon. The way I see it, there's no defined path to AGI. We just don't have the know-how right now. Eventually we might be able to do it: human intelligence evolved from nothing, so there's got to be a way to do it artificially.

There is the very slim possibility of emergence, that is, the idea that if we unite enough simple random systems they can give rise to an intelligent one. Given how fast computer systems have become, there might be something there. But that's more wishful thinking than anything else.


Something can be possible in principle while still not being technically feasible.

I agree our knowledge is currently lacking, but I see no reason why it will never catch up.

There are fundamental limits on cognition. For one, our universe limits the amount of energy available for computation. And plenty of problems can be fully solved, to the point where being ever more intelligent no longer matters (beyond a certain level, two AGIs will always draw at chess). Another limit is practical: the AGI needs to communicate with humans (if we manage to keep control of it), so it may need to dumb itself down so we can understand it.
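The "fully solved" point can be made concrete with a toy game. The sketch below is my illustration, not the commenter's: tic-tac-toe stands in for chess (whose game-theoretic value is conjectured, not proven, to be a draw), and minimax shows that two optimal players always reach the same fixed outcome.

```python
# Minimax value of tic-tac-toe: once both players play optimally, the
# result is pinned to the game's fixed game-theoretic value (a draw),
# no matter how much "smarter" either player gets.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Value with 'player' to move: +1 = X wins, -1 = O wins, 0 = draw."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0
    other = "O" if player == "X" else "X"
    vals = [value(board[:i] + player + board[i + 1:], other)
            for i, c in enumerate(board) if c == "."]
    return max(vals) if player == "X" else min(vals)

print(value("." * 9, "X"))  # 0 -- optimal vs. optimal is always a draw
```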

Even an AGI merely as smart as the smartest human will greatly outrun us: it can duplicate itself and focus on many things in parallel. Then the improved bandwidth between AGIs will do the rest (humans are stuck with letters, formulas, and coffee breaks).

Manually deployed atom bombs and malware can already wreck us. No difference with autonomous (cyber)weapons.


I'm curious: is there anyone here who thinks:

1. Human-level (or greater) AI is possible.

2. The development of AGI can be prevented, or even slowed by more than, let's say, a decade.

If there is, I'd love to hear your thoughts on how (2) can be achieved.


We don't even know if AGI is possible. Let's not mince words here: nothing, and I do mean nothing, not a single, solitary model on offer by anyone, anywhere, right now has a prayer of becoming AGI. That's just... not how it's going to be done. What we have right now are fantastically powerful, interesting, and neat-to-play-with pattern-recognition programs that can take their understanding of patterns and reproduce more of them given prompts. That's it. That is not general intelligence of any sort. It's not even really creative; it's analogous to creative output, but it isn't outputting in order to say something. It's simply taking samples of all the things it has seen previously and making something new with as few "errors" as possible, whatever that means in context. This is not intelligence of any sort, period, paragraph.
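That "reproduce patterns given prompts" framing can be demonstrated at miniature scale. The character-bigram sampler below is a deliberately crude stand-in of my own (nothing like a real transformer): it has no goals and no understanding, yet its output mimics the statistics of its training text.

```python
# A character bigram model: count which symbol tends to follow which,
# then sample from those counts. Pattern statistics alone produce
# plausible-looking output with no intelligence involved.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the rat "
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1                      # observed transition counts

def sample(start="t", n=40, rng=random.Random(0)):
    out = [start]
    for _ in range(n):
        nxt = counts[out[-1]]              # distribution over next chars
        out.append(rng.choices(list(nxt), weights=list(nxt.values()))[0])
    return "".join(out)

print(sample())  # fluent-looking gibberish assembled purely from statistics
```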

I don't know to what extent OpenAI and their compatriots are actually trying to bring forth artificial life (or at least consciousness), versus how much they're just banking on how cool AI is as a term in order to funnel even more money to themselves chasing the pipe dream of building it, and at this point, I don't care. Even the products they have made don't hold a candle to the things they claim to be trying to make, but they're more than happy to talk them up like there's a chance. And I suppose there is a chance, but I really struggle to see ChatGPT turning into Skynet.


I doubt there will be AGI in our lifetime. Maybe some breakthrough happens, but it won't be even close to human intelligence.

Those aren't exactly the questions I'm raising; I have no doubt that there exists some way to produce AGI. My concern is that it doesn't seem like the right question to ask, since history suggests that humans are much better at building specialized devices first, and when it comes to AI risk, the only AGI that really matters is the first one built.

I might have misunderstood your post, though.


I have had a similar hunch: that while humans have created, and will continue to create, powerful AI tools, AGI is not achievable without consciousness.

I am also fascinated with the idea of conscious realism: the idea that consciousness itself is fundamental and that the material world arises from consciousness, and not the other way around. If true, AGI will not be realizable within the current framework being pursued, which may be humanity's saving grace.


> AGI is possible because intelligence is possible (and common on earth) in nature.

But you haven't asked whether we're capable of building it. While it might be technically possible, are we capable of managing its construction?

All I see today are ways of making the process more opaque, with the benefit of not having to provide the implementation ourselves. How does that technique even start to scale in terms of construction? I worry about the exponentially increasing length of the "80% done" stage, and that's on the happy path.


If AGI is possible, it has already happened. If even AI experts put it 100-1000 years out, where some human monkeys banging on digital typewriters could eventually create it, then, in the vastness of space, time, military contracts, alien intelligences, and random Boltzmann brains, it must already have become reality multiple times.

If AGI is impossible, it will never happen. We already know that perfectly intelligent AGIs are not physically possible: per DeepMind's foundational theoretical framework, optimal compression is non-computable, and besides that, it is not possible for an inference machine to know all of its universe (unless it is bigger than the universe by at least 1 bit, AKA it is the universe).
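For reference, the non-computability claim here is the standard uncomputability of Kolmogorov complexity (the quantity behind "optimal compression" in AIXI-style frameworks); the usual argument, paraphrased by me:

```latex
% Sketch (standard result): optimal compression, i.e. Kolmogorov
% complexity $K$, is not computable.
Suppose a total computable $f$ satisfied $f(x) = K(x)$ for every string
$x$. Consider the program $P_n$: ``enumerate strings $x$ in order and
print the first one with $f(x) > n$.'' Encoding $n$ costs $O(\log n)$
bits, so $P_n$ is a description of its output $x_n$ of length at most
$\log_2 n + c$. But then
\[
  n < K(x_n) \le \log_2 n + c ,
\]
which is false for large $n$. Hence no such $f$ exists.
```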

That leaves being more intelligent than all of humanity. To accomplish that, by Shannon's own estimates, there is currently not enough information available in datasets and on the internet. Chinese efforts to artificially increase the intelligence of babies are still in their infancy too (the substrate of AGI is irrelevant under computationalism, unless it absolutely needs to run on the IBM 5100).

So until that time arrives, we will have to make do with being smarter than/indistinguishable from a human on all economic tasks. We're already there for some subset of humanity; you may even be part of that subset, if you believed this post was written by a human.


I am a professional AI practitioner, and I feel that I understand the field well enough to see multiple possible paths toward actually creating AGI. They are certainly out of my own personal reach and/or skill set right now, but that doesn't mean they're impossible. And yeah, "AI will create AGI" is kind of purposefully vague, but I think it's still valid. I think the flaws we unconsciously introduce into AI through our biases as human beings are what hold it back, so the more layers of stable, semi-stochastic abstraction we can place between ourselves and the final product, the more likely the model will be able to optimize itself to a place where it is truly free of the shortcomings of being "designed".

Then you grossly misunderstand how far along AI is. AGI is not even a remote possibility with current techniques and implementations (and, I would contend, is entirely impossible with digital logic). It's just massive amounts of statistics that were computationally infeasible on available hardware until recently.

We don't have a baseline understanding of consciousness or intuition to a degree that we could even begin to replicate it.


IMO AGI will never exist, period, let alone exceed human intelligence. Sure, we'll have increasingly powerful pattern recognition, but that's all it will ever be.

This doesn't mention AGI, which seems to be the prerequisite for this being a possibility. Despite impressive advances in "weak" AI, strong AI is not a simple extension of weak AI, and it's hard to tell whether it will arrive within our lifetime.

I don't think AGI is merely likely, I think it is inevitable. We can make specialized neural networks that do specific tasks quite well, and there's nothing stopping us from chaining those together. We have the pieces to make neural networks that can train on new data, creating new layers atop previous networks. We can even train those layers on the data generated by the actions of the network itself. The pieces seem to be present; the tooling for putting them together seems to be lacking for the time being (see the sketch below). I expect to see AGI in my lifetime, artificial superintelligence shortly thereafter, and then the event horizon of the singularity.
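A minimal sketch of what "chaining specialized networks" could look like, in plain numpy. The two frozen "specialists" below are random stand-ins rather than real pretrained models, and the shapes, data, and training target are all made up for illustration:

```python
# Two frozen "specialist" networks feed a small trainable combiner head;
# only the head is trained on new data, layering atop prior networks.
import numpy as np

rng = np.random.default_rng(0)
Wa = rng.normal(size=(4, 8))   # frozen weights of specialist A (stand-in)
Wb = rng.normal(size=(4, 8))   # frozen weights of specialist B (stand-in)

def features(x):
    """Concatenate the outputs of the two frozen specialists."""
    return np.concatenate([np.tanh(x @ Wa), np.tanh(x @ Wb)], axis=1)

# Toy "new data": the label depends on the first two input coordinates.
X = rng.normal(size=(256, 4))
y = (X[:, :1] + X[:, 1:2] > 0).astype(float)

H = features(X)                             # specialists are frozen, so
W_head = np.zeros((16, 1))                  # features can be precomputed
for _ in range(500):                        # plain gradient descent
    p = 1 / (1 + np.exp(-H @ W_head))       # sigmoid output of the head
    W_head -= 0.5 * H.T @ (p - y) / len(X)  # logistic-loss gradient step

print("train accuracy:", float(((p > 0.5).astype(float) == y).mean()))
```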
