AGI is still a long way off. The history of AI goes back 65 years and there have been probably a dozen episodes where people said "AGI is right around the corner" because some program did something surprising and impressive. It always turns out human intelligence is much, much harder than we think it is.
I saw a tweet the other day that sums up the current situation perfectly: "I don't need AI to paint pictures and write poetry so I have more time to fold laundry and wash dishes. I want the AI to do the laundry and dishes so I have more time to paint and write poetry."
Robots are garbage at manipulating objects, and it's the software that's lacking much more than the hardware.
Let's say AGI is 10 and ASI is 11.
They're saying we can't even get this dial cranked up to 3, so we're not anywhere close to 10 or 11. You're right that folding laundry doesn't need 11, but that's not relevant to their point.
Human flight, resurrection (cardiopulmonary resuscitation machines), doubling human lifespans, instantaneous long-distance communication: all of these things were once dismissed as simple pipe dreams.
We have people walking around for weeks with no heartbeat.
They're tied to a machine, sure, but that machine itself is a miracle compared to anything in any foundational religious text, including ones as recent as Doreen Valiente's and Gerald Gardner's Wiccan writings.
But had those people been lying around with no heartbeat for a week or three before they were hooked up to the machine? If they had, then yes, that would actually be resurrection. But what you're describing doesn't sound like it.
That "tweet" loses its veneer once you see that we value what has Worth as a collective treasure, and the more Value is produced the better - while engaging in producing something of value is (hopefully, but not necessarily) a good exercise in intelligent (literal sense) cultivation.
So, yes, if algorithms strict or loose could one day produce Art, and Thought, and Judgement, of Superior quality: very welcome.
Do not miss that the current world, and our lives, are increasingly complex to manage, and Aids would be welcome. The situation is much more complex than that wish for leisure or even "sport" (literal sense).
> we value what has Worth as a collective treasure, and the more Value is produced the better ... So, yes, if algorithms strict or loose could one day produce Art, and Thought, and Judgement, of Superior quality: very welcome.
Except that's not how we value the "worth" of something. If "Art, and Thought, and Judgement" -- be they of "Superior quality" or not -- could be produced by machines, they'd be worth a heck of a lot less. (Come to think of it, hasn't that process already begun?)
Also, WTF is up with the weird capitalisations? Are you from Germany, or just from the seventeenth century?
The issue I have with all of these discussions is how vague everyone always is.
“Art” isn’t a single thing. It’s not just pretty pictures. AI can’t make art.
And give a good solid definition for thought which doesn’t depend on lived experiences while we’re at it. You can’t. We don’t have one.
> “Art” isn’t a single thing. It’s not just pretty pictures
And this is why it was capitalized as "Art", proper Art.
> AI can’t make art
Not really: "we may not yet have AI that makes art". But if a process that creates, that generates (proper sense) art is fully replicated, anything that can run that process can make Art.
> And give a good solid definition for [T]hought
The production of ideas which are truthful and important.
> which doesn’t depend on lived experiences while we’re at it. You can’t
Yes, we can abstract from instances to patterns and rules. But it matters only relatively: if the idea is clear - and ideas can be very clear to us - we do not need to describe them in detail; we just look at them.
> “AGI” as well
A process of refinement of the ideas composing a world model according to truthfulness and completeness.
That’s not a real thing. There’s no single definition for what art is as it’s a social construct. It depends on culture.
> anything that can run that process can make art
Again without a definition of art, this makes no sense. Slime mold can run processes, but it doesn’t make art as art is a human cultural phenomenon.
> the production of ideas that are truthful and important
What do “ideas” and “important” mean?
To an LLM, there are no ideas. We humans are personifying them and creating our own ideas.
What is “important,” again, is a cultural thing.
If we can’t define it, we can’t train a model to understand it
> yes we can abstract from instances to patterns and rules.
What? Abstraction is not defining.
> we do not need to describe them in detail
“We” humans can, yes.
But machines can not because thought, again, is a human phenomenon.
> world model
Again, what does this mean?
Magic perfect future prediction algorithm?
We’ve had soothsayers for thousands of years /s
It seems to me that you’ve got it in your head that since we can make a computer generate understandable text using statistics that machines are now capable of understanding deeply human phenomena.
I’m sorry to break it to you, but we’re not there yet. Maybe one day, but not now (I don’t think ever, as long as we’re relying on statistics)
It’s hard enough for us to describe deeply human phenomena through language to other humans.
Do us all a favour and never again keep assumptions in your head: your misunderstanding was off the scale. Do not guess.
Back to the discussion from its origin: a poster defends the idea that the purpose of AI would be to enable leisure and possibly sport (through alleviating menial tasks) - not to produce cultural output. The reply was, first, that since cultural output has value, it is welcome from any source (provided the Value is real), and second, that our needs go beyond menial tasks, given that we have a large deficit in proper thought and proper judgement.
The literal sentence was «yes, if algorithms strict or loose could one day produce Art, and Thought, and Judgement, of Superior quality: very welcome», which refers to the future, so it cannot be interpreted as claiming that the possibility is available now.
You have brought LLMs to the topic when LLMs are irrelevant (and you have stated that you «personify[] them»!). LLMs have nothing to do with this branch of discussion.
You see the things that are said as «vague», and you miss definitions for things: but we have very clear ideas instead. We just do not bring the full textual explosion of all those ideas into our posts.
Now: you have a world in front of you; of that world you create a mental model; the mental model can have a formal representation; details of that model can be insightful for the truthful prosecution of the model itself: that is Art, or Thought, or Judgement, according to the different qualities of said detail. The truthful prosecution of the model has Value and is Important - if only given the cost of the consequences of actions taken under inaccurate models.
> Except that's not how we value the "worth" of something
In that case, are you sure your evaluation is proper? If a masterpiece is there, and it /is/ a masterpiece (beyond appearances), why would its source change its nature and quality?
> Come to think of it, hasn't that process already begun?
Please present relevant examples: I have already observed in the past that simulations of the art made by X cannot merely look similar; they require the process, the justification, the meanings that led X to produce the originals. The style of X is not just thickness of lines, temperature of colours and flatness of shades: it is in the meanings that X wanted to express and convey.
> WTF is up with the weird capitalisations?
Platonic terms - the Ideas in the Hyperuranium. E.g. "This action is good, but what is Good?".
Faking thinking isn't “Thinking”. Art is supposed to have some thought behind it; therefore, “art” created by faking thinking isn't “Art”. Should be utterly fucking obvious.
> Platonic terms - the Ideas in the Hyperuranium.
Oh my god, couldn't you please try to come off as a bit more pretentious? You're only tying yourself into knots with that bullshit; see your failure to recognise the simple truth above. Remember: KISS!
No, CRConrad, no. You misunderstood what was said.
Putting capital initials on those words was exactly to mean "if we get to the Real Thing". You are stating that in order to get Art, Thinking and Judgement we need Proper processes: and nobody said differently! I wrote that «if algorithms strict or loose could one day produce Art, and Thought, and Judgement, of Superior quality [this will be] very welcome». There is nothing in there that implies that "fake thinking" will produce A-T-J (picked at the time of writing as the most important possible results I could see); there is an implicit statement that Proper processes (i.e. "real thinking") could be artificially obtained, once we find out how.
Of course the implementation of a mockery of "thought" will not lead to any Real A-T-J (the capitals were for "Real"); but if we manage to implement it, then we will obtain Art, and Thought, and Judgement - and this will be a collective gain, because we need more and more of them. Regardless of whether the source has more carbon or more silicon in it.
«Faking thinking» is not "implementing thinking". From a good implementation of thinking you get the Real Thing - by definition. That we are not there yet does not mean it will not come.
(Just a note: with "Thought" in the "A-T-J" I meant "good insight". Of course good thinking is required to obtain that and the rest - say, "proper processes", as it is indifferent whether it spawns from an algorithmic form or a natural one.)
> KISS
May I remind you of Einstein's "As simple as possible, but not oversimplified".
> only
Intellectual instruments can of course be quite valid and productive if used well - the whole of a developed mind comes from their use and refinement. You asked about the capitals; I told you what they are (for when you see them in the wild).
> see your failure to recognise
Actually, that was a strawman on your side out of misunderstanding...
> You are stating that in order to get Art, Thinking and Judgement we need Proper processes
Well yeah, but no -- I was mostly parodying your style; what I actually meant could be put as: in order to get art, thinking and judgement we need proper processes.
(And Plato has not only been dead for what, two and a half millennia?, but before that, he was an asshole. So screw him and all his torch-lit caves.)
> «Faking thinking» is not "implementing thinking".
Exactly. And all the LLM token-regurgitating BS we've seen so far, and which everyone is talking about here, is just faking it.
> May I remind you of Einstein's "As simple as possible, but not oversimplified".
Yup, heard it before. (Almost exactly like that; I think it's usually rendered as "...but not more" at the end.) And what you get out of artificial "intelligence" is either oversimplified or, if it's supposed to be "art", usually just plain kitsch.
> > see your failure to recognise
> Actually, that was a strawman on your side out of misunderstanding...
Nope, the imaginary "strawman" you see is a figment of your still over-complicating imagination.
You have stated: «Faking thinking isn't “Thinking”. Art is supposed to have some thought behind it; therefore, “art” created by faking thinking isn't “Art”. Should be utterly fucking obvious».
And nobody said differently, so you have attacked a strawman.
> And all the LLM token-regurgitating BS we've seen so far, and which everyone is talking about here ... And what you get out of artificial "intelligence" is either oversimplified or, if it's supposed to be "art", usually just plain kitsch
But the post you replied to did not speak about LLMs. Nor did it speak about current generative engines.
You replied to a «if algorithms strict or loose could one day produce Art, and Thought, and Judgement, of Superior quality» - which has nothing to do with LLMs.
You are not understanding the posts. Make an effort. You are vividly proving the social need to obtain, at some point, intelligence from somewhere.
The posts you replied to in this branch never stated that current technologies are intelligent. They stated that if one day we implement synthetic intelligence, it will not be «to fold laundry and wash dishes» so that people have more time «to paint and write poetry» (original post): it will be because we need more intelligence spread through society. You are proving it...
It’s harder than we thought, so we leveraged machine learning to grow it rather than creating it symbolically. The leaps in the last 5 years are far beyond anything in the prior half century, and they make predictions of near-term AGI much more than a “boy who cried wolf” scenario to anyone really paying attention.
I don’t understand how your second paragraph follows. It just seems to be whining that text and art generative models are easier than a fully fledged servant humanoid, which seems like a natural consequence of training data availability and deployment cost.
> I don’t understand how your second paragraph follows. It just seems to be whining that text and art generative models are easier than a fully fledged servant humanoid, which seems like a natural consequence of training data availability and deployment cost.
No, it's pointing out that "text and art generative models" are far less useful [1] than machines that would be just a little smarter at boring ordinary work, to relieve real normal people from drudgery.
I find it rather fascinating how one could not understand that.
___
[1]: At least to humanity as a whole, as opposed to Silicon Valley moguls, oligarchs, VC-funded snake-oil salesmen, and other assorted "tech-bros" and sociopaths.
> No, it's pointing out that "text and art generative models" are far less useful [1] than machines that would be just a little smarter at boring ordinary work, to relieve real normal people from drudgery.
That makes no sense. Is AlphaFold less useful than a minimum-wage worker because AlphaFold can't do dishes? The past decades of machine learning have revealed that the visual-spatial capacities that are commonplace in humans are difficult to replicate artificially. That doesn't mean the things AI can do well are necessarily less useful than the simple hand-eye coordination that is beyond its current means. Intelligence and usefulness aren't a single dimension.
AGI does look like an unsolved problem right now, and a hard one at that. But I think it is wrong to think that it needs an AGI to cause total havoc.
I think my dyslexic namesake Prof Stuart Russell got it right: humans won't need an AGI to dominate and kill each other. Mosquitoes have killed far more people than war. Ask yourself how long it will take us to develop a neural network as smart as a mosquito, because that's all it will take.
It seems so simple, as the beastie only has about 200,000 neurons. Yet I've been programming for over 4 decades, and for most of them it was evident that neither I nor any of my contemporaries were remotely capable of emulating it. That's still true, of course. Never in my wildest dreams did it occur to me that repeated applications of machine learning could produce something I couldn't: a mosquito brain. Now that looks imminent.
Now I don't know what to be more scared of: an AGI, or an artificial mosquito swarm run by Pol Pot.
But then, haven't we reached that point already with the development of nuclear weapons? I'm more scared of a lunatic (whether of North Korean, Russian, American, or any other nationality) being behind the "nuclear button" than an artificial mosquito swarm.
The problem is that strong AI is far more multipolar than nuclear technology, and the ways in which it might interact with other technologies to create emergent threats are very difficult to foresee.
And to be clear, I'm not talking about superintelligence, I'm talking about the models we have today.
I think AGI in the near future is pretty much inevitable. I mean, you need the algos as well as the compute, but there are so many of the best and brightest trying to do that just now.
That statement is extremely short-sighted. You don't need AI to do laundry and dishes; you need expensive robotics. In fact, both already exist in a cheapened form: a washing machine and a dishwasher. They already take 90% of the work out of it.