
That's like complaining about artificial, non-organic flight being a fantasy before the Wright brothers.

Nope.

Before the Wright brothers, we knew it was scientifically possible to suspend objects heavier than air in an air current. For example, kites and balloons.

The only examples we have of "intelligence" are organic in nature.

Since we have no examples to the contrary, for all we know, "life" and "intelligence" could be somehow interrelated. And we don't currently know how to engineer either one.




I guess there is also a point which is that it isn't 'artificial' intelligence - it is just intelligence.

Like we don't say that planes artificially fly; they just fly. I mean, technically it's non-natural flight, but it's interesting to think about.


> But we're still at the point where we don't understand intelligence as well as we understood aerodynamics when building the first planes

Actually, I'd say that our understanding of intelligence is right about at the level of aerodynamics at the dawn of heavier than air flight:

https://youtu.be/Sp7MHZY2ADI

https://youtu.be/gN-ZktmjIfE

I mean, we could quibble about exactly where we are pre- or post-Wright Flyer, but given the amount of AI research that amounts to brute-force flailing about in search of incremental improvements, disagreements on the importance of "biological plausibility" and so on, it's pretty clear that, roughly speaking, AI is currently somewhere in the equivalent of the Lilienthal-Langley-Wright-Curtiss continuum (i.e. 1890-1910-ish) and still prior to the most important theoretical breakthroughs. IOW, AI has not in my opinion yet achieved an equivalent to aerodynamics' Prandtl lifting-line theory: https://en.m.wikipedia.org/wiki/Lifting-line_theory
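For a sense of what such a breakthrough looks like, the headline result of lifting-line theory is a compact, predictive closed-form expression for induced drag (a standard textbook statement, summarized here for context):

```latex
C_{D,i} = \frac{C_L^2}{\pi \, e \, AR}
```

where C_L is the lift coefficient, AR the wing aspect ratio, and e the span efficiency factor (e = 1 for an elliptical lift distribution). AI arguably has no comparably compact law yet.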


I like how Max Tegmark frames it by considering birds and flight, and how we built planes that can also fly. It turns out flight, as an action on its own, is not so difficult if you throw enough energy behind a particularly shaped collection of parts. Intelligence, as a behavior observable in the world, appears to be similar in that it emerges in stages, at scale, as we are discussing.

And just like inventing airplanes led us to say 'not all things that fly are birds', we may now start saying 'not all things that display intelligence are us'

(where intelligence means >= human-level intelligence, and "us" could be swapped out with alive/sentient/conscious/etc., at least for now and perhaps in all cases)


For one, flying is a relatively simple physical phenomenon, and once we had a firm grasp on the principles of lift, we were able to design our own solutions.

Intelligence is, in comparison, an incredibly complex phenomenon that seems a lot harder to separate from the inner workings of the biology and evolutionary history. It feels to me like there is so much implicit hubris embodied in the industry and reflected in your comment. Obviously there has been an increasing overlap between neuroscience and AI, and the approaches do strive to understand and mimic the biological structure of the brain. What is not clear to me is why we have largely collectively decided that we don’t need to incorporate understanding of how that structure came to be.


I used to think the same way, especially given that I have a biology background. It is indeed true that if you look at the internals of ChatGPT, it is doing a lot of basic and, dare I say, dumb, suboptimal things.

However, I find the analogy to flying compelling. Heavier-than-air flight was thought to be decades away just days before the Wright brothers' flight, because of how simple, and seemingly dumb, the engineering was relative to evolution's design for making a bird fly. There is a TED talk on this. It took us well over a generation longer to build robots that can fly the way a bird does. That's how much longer it took us to understand biological flight. Yet by that time we had already made machines that fly faster than any and all birds. We didn't need to mimic biology's design to get something that outclasses it.

Could be the same with AI. We might be able to use engineering methods that are dumb relative to how biology sculpts intelligence, and yet still create an AI that outperforms all humans on all intelligence tasks. We may well create AGI that we end up coexisting with for over a generation before we figure out how intelligence in the biological brain works.

And because we keep comparing current AI to how biological intelligence works, I am concerned the same thing will happen: a few days before AGI is released, most people will still be saying it is decades away.

For me at least, this consideration makes "centuries away" look off and "decades away" seem far more likely... Which is in my lifetime. Yikes!


I'm sorry, but AGI vs Alchemy is a poor analogy with almost no parallels. I see this as a rather blatant strawman argument.

Flying machines vs. AGI makes a much better analogy. Natural flying machines (birds) existed in nature before artificial ones (planes) were invented. Artificial manned flight was speculated about as a possibility for centuries with no macroscopic working examples, and was heavily criticized as a transportational panacea/fantasy, seen as clearly impossible by people at the time.

In fact, the analogy extends startlingly far. People doubted the possibility of manned flying machines for the longest time, with arguments strikingly similar to yours, e.g. 'We have no proof that such artificial flying contraptions can exist!'. Explaining away natural flight as a supernatural magic only accessible to birds is also eerily similar to the constantly retreating dualistic arguments against a mechanical brain.

Even your criticisms of current AI have analogues[2]. Long ago, even in ancient times, there is evidence of small "toy" birds that might have flown much like paper planes. Many similar toy examples existed around the 19th/20th century too, yet many people still vigorously doubted the possibility of a flying machine that could carry a human.

Let's compare a paraphrase of common arguments against heavier-than-air human flight, with a paraphrase of your argument:

Of course we have natural examples of flight, as we see birds all around us. But there is no more evidence for human flight than there is for Bigfoot, leprechauns or space aliens. Sure, we have little toy examples of flying machines, but they're very limited -- the idea of full heavier-than-air human flight requires a leap of faith.

Vs.

[Of course we have natural examples of AI, I, the author presumably am one.] But there is no more evidence for AGI than there is for Bigfoot, leprechauns or space aliens. Sure, we have little toy examples of AI, but they're very limited -- the idea of full AGI requires a leap of faith.

[1] (Obviously I'm not referring to fusion or anything like that, because people of that era wouldn't have recognized that as transmutation.)

[2] To be fair, I think it's safe to say old-school non-probabilistic AI is dead. But just because one path ends doesn't mean there aren't a thousand others constantly exploring new ideas. Indeed, recent advances in deep learning are incredibly promising.


You're falling into the same naturalistic fallacy the likes of Clement Ader fell into: thinking you had to imitate nature to get flight (or AI) done. I trust the underlying principles of intelligence are much simpler than current implementations.

It's probably not going to be easy, though.


Huh? We have flying cars. They are called planes, and they take a lot of maintenance to keep from falling out of the sky.

But let's change the question up a bit... There are billions and billions of flying intelligences on this planet. Birds, insects, even mammals. Nature has already created that. We've created things that are even better at flying fast and carrying more weight. So simply looking at 'flying cars', saying they didn't happen, and concluding AI can't happen is, at the least, very ignorant.

If nature can create something randomly, we can create something directed in a shorter period of time (well, we don't really have another 4 billion years to try). AGI is an eventuality.


"You might as well argue that since heavier-than-air-flight is a search problem over an effectively infinite, high-dimensional landscape of possible machines - and that since it took evolution billions of years to produce birds, we have little chance of stumbling upon a working design for a wing."

Tired analogies between AI and flight are dead ends. Until we have identified AI-side components of the analogy that correspond to "air", "wing", "lift" &c., the analogy is empty and unproductive.

IOW these analogies neither get us off the ground nor do they take us anywhere.


That’s… not a refutation of my point.

AI/flight analogies are tired, but the OP's argument amounts to the equivalent of proclaiming, before the Wright brothers, ‘there’s an inherent inability for humans to ever conceive of a way to engineer heavier-than-air flight’.

It’s a ‘man was never meant to fly, therefore heavier-than-air flight is impossible’ argument.


> Reading this is like hearing "there is no evidence that heavier-than-air flight is even possible" being spoken, by a bird.

This is a vivid bit of rhetoric to underscore the point, but if you think about it for any length of time it starts to fall apart really, really quickly. The Wright brothers, and the dozens if not hundreds of forgotten inventors who came before them, drew upon the physics of what they observed in heavier-than-air flight to create the winged shape we know today that reliably generates lift, and then set about constructing it. That's not what OpenAI is doing. We still do not have a very solid understanding of where our own intelligence emerges from, apart from having particularly large brains relative to our body size. So, to borrow your metaphor, it is indeed like a bird saying there's no evidence that heavier-than-air flight is possible, because the bird lives in a world without an atmosphere upon which to glide.


> It is difficult to imagine anything resembling a "general" intelligence that isn't attached to a life-form[...]

How is that argument any stronger than these arguments that (non-animal) heavier-than-air flight is impossible?

https://www.xaprb.com/blog/flight-is-impossible/

I think these are actually stronger arguments, because I can think of at least three significant material-science advantages that birds have even over modern technology, which would, from a physics point of view, let me entertain the idea that a 19th-century physicist could believe you'd need to be made of flesh to fly. I can't think of any comparable case for intelligence being restricted to biology.


That quoted sentence argues that we do not have AGI now. I have no counterargument to that: we do not have AGI now. The sentence, on the other hand, fails to argue that it is impossible to develop AGI.

Someone before the invention of the aeroplane could have said: Our only known good model for flying is birds and insects. It took billions of years of the universe churning at random to accidentally generate birds and insects. We have no clue how to replicate that scale.

And yet we know that it's not impossible to create flying machines.


You could make basically the same argument for why humans will never invent heavier-than-air flight - because avian flight 'evolved based on a need to survive and reproduce', and 'artificial flight doesn't have darwinian pressures'.

But humans did make artificial flying machines.

I don't mean to be unkind, but your argument doesn't have close to the level of certainty that we should bet our species on.

No one knows where the roadblocks to AGI are, or what the timelines might be; but there is what seems like huge progress happening recently in an area which might eventually lead there. While no one knows the path, many intelligent people have thought about this without finding any theoretical roadblocks. Please don't publicly dismiss the concerns as 'science fiction' without a little more thought.


The question that comes up for me at this point is whether there is much that is dispensable about humans when it comes to exhibiting intelligent behavior (running on 100 watts, no less). It turned out that for abstracting useful flight dynamics based on birds, there was a rather simple rule: lift > weight. Sure, reducing it to that simple formula may not help you build a machine that maneuvers as well as birds and insects do, but we didn't need that for flight. We just wanted to cross an ocean in less than 6 weeks.
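That "lift > weight" rule can be sketched with the standard lift equation; the Wright-Flyer-style numbers below are rough, assumed figures for illustration only, not measured values:

```python
# Minimal sketch of the "lift > weight" condition for steady flight,
# using the standard lift equation: L = 0.5 * rho * v^2 * S * C_L.

RHO = 1.225  # air density at sea level, kg/m^3
G = 9.81     # gravitational acceleration, m/s^2

def lift(v, wing_area, c_l, rho=RHO):
    """Lift force in newtons at airspeed v (m/s) for a wing of
    area wing_area (m^2) and lift coefficient c_l."""
    return 0.5 * rho * v**2 * wing_area * c_l

def can_fly(mass, v, wing_area, c_l):
    """True if lift at speed v meets or exceeds weight."""
    return lift(v, wing_area, c_l) >= mass * G

# Rough, assumed Wright-Flyer-like figures: ~340 kg all-up mass,
# ~47 m^2 of wing, C_L ~ 1.0 at high angle of attack, ~13 m/s airspeed.
print(can_fly(mass=340, v=13, wing_area=47, c_l=1.0))   # True
print(can_fly(mass=340, v=6, wing_area=47, c_l=1.0))    # False: too slow
```

Nothing here says anything about maneuvering like a bird; it only captures the one inequality that mattered for crossing an ocean.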

Whether the flight analogy carries on to intelligence, in my mind, depends on how many of our subsystems are 1) indispensable for intelligence, and 2) reasonably computationally reducible.

From neurotransmitters to ganglion cells to hormones and bacteria in the gut, we have found a lot of subsystems that contribute to our ability to make the diverse, everyday decisions that the ideal AI we are discussing would have to make. The cortex actually seems like one of the most orderly and therefore reducible parts of the apparatus. The hormonal system that regulates emotion-based decision making may be far more difficult to abstract and less efficient to model. And there are many, many other systems. Could it be that without the details of those subsystems, our AI behaves in less-than-optimal ways, the same way a human would? How much can we get away with reducing biology to simpler rules while maintaining general intelligence?

It is possible, I suppose, that all those biological dependencies are merely hampering an ideal algorithm for generalized intelligence of which we are only crude approximations -- a powerful and simple algorithm we could finally free of biological constraints -- but it's too late to get into the probability of that hypothesis! In any case, it's not clear to me how that kind of nonhuman intelligence would serve us.


The steps to learning to make aircraft didn't come from understanding how birds flap their wings. I can't imagine that the kind of intelligence we consider general will come from studying how humans biologically think.

> doesn't mean it's not possible

In general, that which nature has demonstrated can usually be replicated. A bird (flight), a floating log (ships), a fish (submarines), an asteroid (space travel), etc. Nature has demonstrated intelligence: a human.

However, just like nature has not demonstrated superluminal travel, it has not demonstrated super-intelligence; so that is still a question.


I like the flight analogy too. Extending it further, we failed to build flying machines by blindly copying bird design when we had no idea how bird flight actually worked, and we successfully built heavier-than-air powered flying machines when we understood how bird flight worked and realised the power-to-weight ratio etc. was such that we'd need a completely different design to get humans into the air. I believe it'll be a similar case with AI and neuroscience - we're unlikely to get general AI until we have a pretty solid understanding of the way human intelligence works, but when we do we'll probably find that the artificial form of intelligence will have to be designed differently due to inherent constraints.

There is a large discussion to be had and I do believe that I have some novel points to make on the subject, but what I would like to do at this point is note that just because you can't imagine another way that this could be done, it doesn't mean that it can't be done another way. You called this a fact, and that's a strong claim that I urge you to retract.

http://en.wikipedia.org/wiki/Argument_from_ignorance

To approach this from a different perspective, imagine somebody saying the following at about 1890:

"This ignores the fact that if we ever do make a heavier-than-air flying machine it will either be via (a) reverse-engineering of birds or (b) some kind of evolutionary algorithm."

That would have turned out to be false.

