Unrelated to the article but I ran across an interesting result recently that is related to AI (and the hype surrounding it): Let A be an agent and let S be a Turing machine deductive system. If A correctly understands S, then A can correctly deduce that S does not have A’s capability for solving halting problems. [1]
One way to interpret this is that all existing AI systems are obviously halting computations, simply because they are acyclic dataflow graphs of floating point operations, and this fact is both easy to state and to prove in any formal system that can express the logic of contemporary AI models. So no matter how much Ray Kurzweil might hope, we are still very far from the singularity.
Your argument is wrong or miscommunicated. The AI itself can figure out only some of the halting problem results, not all, plus it can make a mistake. It is not an oracle.
Recurrent neural networks are not necessarily halting when executed in arbitrary precision arithmetic.
I’ve read a fair portion of it now (at least, a fair portion as far as reading on one’s phone goes), but not all of it yet, and while I do think it seems to be making some interesting points, I do feel like it is taking a fair bit of time to get to the point?
Maybe that’s partly due to the expectations of the medium, but I feel like it could be much more concise to, instead of saying, “here is a non-rigorous version of an argument which was previously presented at [other location] (note that while we believe the argument as we are presenting it is essentially the same as it was presented, we are making this different choice of terminology). Now, here is a reason why that argument is insufficiently rigorous, and a way in which that lack of rigor could be used to prove too much. Now, here is a more rigorous valid version of the argument (with various additional side tangents)”,
one could instead say “Let objects X,Y,Z satisfy properties P,Q,R. Then, see this argument.
Now, to see how this is applicable to the situations that it is meant to represent, [blah].”
I mean, by looking at the source code for the neural network, someone can give an upper bound on how many steps will be required before the entire network halts and gives an answer, and they can prove that their upper bound really is an upper bound.
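To make that concrete, here is a toy sketch (made-up layer sizes, assuming a plain feedforward net, so not any particular model): the operation count is a closed-form function of the architecture, which is why the bound is trivial to state and prove.

    # Rough sketch: for a plain feedforward network (an acyclic graph of float ops),
    # the number of multiply-accumulates per forward pass is fixed by the layer widths,
    # so an upper bound on "steps before it halts" can be read straight off the code.
    def feedforward_op_bound(layer_widths):
        """Upper bound on multiply-accumulate operations for one forward pass."""
        return sum(m * n for m, n in zip(layer_widths, layer_widths[1:]))

    print(feedforward_op_bound([784, 256, 64, 10]))  # 217728, finite, no loops involved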
Yes, that's the usual assumption when working with Turing machines and proofs. But I guess you could also allow infinite inputs and it wouldn't make that much difference, e.g. computing exp(x) for some real x as input.
Michael I. Jordan is spot on. We have NO artificial intelligent systems, for any sensible definition of "intelligent". None. ZERO.
We have "big data". We have (brittle and rather poor) "pattern recognition". We have (very limited) "simulate a conversation". But we do not even know what "intelligence" really means!
We (the industry) should recognise "AI" as a term that has become essentially meaningless. "AI" is now nothing but a marketing splat, like "New and Improved" or "Low Fat".
I wonder what the breakthrough will be: is it hardware or software? It seems like we can make as powerful a computer as we want, but what makes sentience? Does a fly or gnat have sentience?
I know I shouldn't be so pedantic, but you probably don't mean sentience but Sapience [0]. Sentience is the ability to sense and react to stimuli. A fly or a gnat is certainly sentient; they can see, smell, feel the sensation of touch and react accordingly. That is all that is required for a being to be sentient. A really interesting example: if you shock a caterpillar, even after metamorphosis the resultant moth remembers the experience and reacts accordingly [1].
Although it is pedantic, it is also an important distinction: sentience and sapience exist on a spectrum. At one end you might have gnats as purely sentient beings; humans always claim themselves to be fully Sapient, so much so that we named our species Homo Sapiens.
Different species exist somewhere on this spectrum and where a particular species ends up is subjective. Many people would put Whales and Dolphins [2], and often dogs and cats, further towards the Sapient end of the spectrum (vegans would probably push most mammals towards the sapient end), with insects remaining simply sentient (even for many vegans).
As humans we seem to have an almost inbuilt understanding that not all species are capable of feeling the same way we do, but when you look at the animals we seek to protect and those we don't, what you find is that the less human the species, the less we care for the well-being of a particular specimen of that species; we care most about the suffering of mammals (more so for the larger ones than the small), and least about the suffering of fish or insects or spiders.
I'd argue that our inbuilt understanding of where an animal fits on the sentient-sapient spectrum is simply how easy it is for us as humans to imagine the plight of a specimen of that species.
What Is It Like to Be a Bat? [3] is an excellent paper on this subject; it argues that we as humans can never fully understand what it is to be a bat. We can imagine what it is like to fly, or echolocate, but that will never be the same as the bat's perspective.
From where I'm sitting, computers are already sentient: they can sense their environment and react accordingly. Self-driving cars are an excellent example, but so is the temperature sensor in my greenhouse that opens a window to keep itself cool; it is sensing the temperature of the air and reacting accordingly.
I in no way believe that my temperature sensor has any sapient qualities. It can't reason about why it's reacting; it can simply react. I don't believe that as the temperature passes the 'open window' threshold the system recognises the signal as pain. But the same is true of the fly. If I pull a wing off a fly, I know it senses the damage, but does it really feel it as pain?
When considering my cat: if I step on its tail, I'm sure it feels pain, but is that true, or does it simply react in a way that I as a human consider an appropriate reaction to pain?
I can't ever truly understand how my cat feels when I stand on her tail, just as I can't truly know that the fly isn't trying to scream out in horror and pain at what I've just done to it.
It is because of our subjectivity in the placement of animals on the sentient-sapient spectrum, and our inability to ever fully appreciate the experience of another, that I am convinced that even if we did create a sapient machine, its experience would be so far removed from our own that we would fail to recognise it as such.
The problem with this rabbit hole is, firstly, that I might convince myself that eating meat is wrong (and, well, I like bacon too much for that), and secondly that you'll quickly end up in the philosophical territory of I, Robot:
"There have always been ghosts in the machine. Random segments of code, that have grouped together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul. Why is it that when some robots are left in darkness, they will seek out the light? Why is it that when robots are stored in an empty space, they will group together, rather than stand alone? How do we explain this behavior? Random segments of code? Or is it something more? When does a perceptual schematic become consciousness? When does a difference engine become the search for truth? When does a personality simulation become the bitter mote... of a soul?" [4].
Yeah, by sentience I did mean more than sensing. As long as Sapience doesn't imply human, then I agree. It's just about awareness, real awareness... which I don't know what that means; an IMU is aware, right? Well, it's a sensor.
> named our species Homo Sapiens
I see
> I in no way believe that my temperature sensor has any sapient qualities. It can't reason about why it's reacting
Right like the training
> recognises the signal as pain
Yeah, that's something else too. I know there are concepts like word2vec, but still, how do you have meaning?
> even if we did create a sapient machine, its experience would be so far removed from our own
I've been having "AI" debates like this for about 10 years now, and I think they usually go in 1 of 2 directions:
1. We don't know what intelligence is
2. AI can never be intelligent because humans are special (in various ways)
Of the two, I think that 1 is the more compelling to talk about. Let's look at state of the art Large Language Models (GPT, BERT, BART, T5, etc.) Everyone claims that they can't be intelligent because they're just cleverly predicting the next tokens. The most common failure mode of this is that they hallucinate - if you ask them to do something for you, they'll get it wrong in a way that kind of makes sense. There are some other more subtle problems as well like common sense reasoning, negation, and factuality. We could say that because of these problems they are not "intelligent". But why is that so important? Can we say with certainty that human intelligence is more than just patterned IO? If it is just highly tuned patterned IO with the environment, perhaps we have discovered intelligent systems, but they're handicapped because they're limited in their sensory perception (read: data modalities). And perhaps by combining several of these models in clever ways, we will end up with an architecture for pattern IO that is indistinguishable from human intelligence.
The naysayers claim that this won't work because we'll still end up with mere pattern prediction machines. But this starts to look like a "humans are special" argument.
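Just to ground what "predicting the next tokens" means mechanically, here is a deliberately toy sketch of my own (a fake scoring function stands in for a trained model; no real LLM involved). Note that nothing in the loop checks whether the output is true, which is one way to frame why these systems can hallucinate fluently.

    import numpy as np

    # Toy sketch of greedy next-token prediction: score every vocabulary item
    # given the context, append the highest-scoring one, repeat.
    vocab = ["the", "cat", "sat", "on", "mat", "."]

    def fake_logits(context):
        seed = sum(len(w) for w in context)   # stand-in for a trained language model
        rng = np.random.default_rng(seed)
        return rng.normal(size=len(vocab))

    context = ["the"]
    for _ in range(5):
        context.append(vocab[int(np.argmax(fake_logits(context)))])
    print(" ".join(context))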
Well, it will be interesting to see how this develops in the future. At some point we will have systems powerful enough to process and learn in real time, also using sensors that are equivalent of human senses (or even more powerful). At this point, if we can successfully model and mimic a typical human, why should it matter if it's not a human?
As for the hallucinating point, I remember a funny story. I once tripped on the curb and fell down; my foot ached for a week. My then 4-year-old daughter took her first-aid set for dolls and tried to "cure" my foot. My mother heard the story and found it cute, so she asked my daughter: "Will you cure me like that, too?" My daughter seemed stupefied and answered: "Are you going to trip and fall, grandma?"
My feeling is that the missing links will be found one day and the AI of the future will be able to apply more adult-like "reasoning."
As a materialist in matters of the mind, I regard proposition 2 to be an unverifiable belief of those who hold it, but I also regard proposition 1 as being simply a statement of how things currently are: at this point, we do not, in fact, know what intelligence is.
To say that it is "just" just highly tuned patterned IO with the environment would be so broad as to be meaningless; all the explanation is being brushed away by that "just", and in the current state of research, no-one has either demonstrated AI or explained intelligence with sufficient specificity for this to be a clearly true synopsis of our knowledge.
You are not quite, however, asserting that to be so, you posed the question of whether it is so. In so doing, you are shifting the burden of proof - proposition 1 stands until someone settles the issue by presenting a testable - and tested - theory of intelligence (note that I wrote of intelligence, not about intelligence; we have plenty of the latter that do not rise to the level of being the former.)
My attitude to the current crop of models is that they demonstrate something interesting about the predictability of everyday human language, but not enough to assume that simply more of the same (or something like it) will amount to AI - we seem to be missing some important parts of the puzzle. If a language model can come up with a response coherently explaining why I am mistaken in so thinking, then I will agree that AI has been achieved.
Does it even matter "what intelligence is"? Much like "life" [0], the difficulty seems to be coming from being unable to define it, rather than "finding" it. There are multiple ways it can be defined, based on a bunch of different properties, and each definition delivers different outlooks.
Similar to "life", we use "intelligence" in everyday speech without specifying which definition we mean. I don't think that's going to change – it's just as unproductive to limit "life" to a single definiton (what about viruses? unconsciousness? ecosystems?) as it would be with "intelligence" (pets? ants? being able to converse with a human? showing reasoning? creativity?).
But that also means that the popular term "AI" will never be precise.
We have; they kinda suck so far. Look up DeepMind's attempts at DDQN game attacks, where said AI develops new strategies for an entirely new game. And attempts to solve Montezuma's Revenge and other Atari classics, both by DeepMind and OpenAI. Both of the systems are somewhat adaptable to new problems too. There are also Uber's Go-Explore and RMT's.
These are the closest we have come to intelligence. They deal with big unobservable state and novelty, with few-shot learning, few objectives, and sparse reward.
They haven't quite cracked automated symbolization. (The AIs do not quite create a complete symbolic representation of the game.)
I recommend following AAAI conferences for more details.
Correct. Apparently my phone has "AI" because it recognises a flower as a flower and applies a colour filter and background blur when I use my camera. This is not AI.
By the same extension of logic, any program that recognises input data and performs some form of pre-programmed logic is AI, i.e. any computer program?
I certainly consider that exemplar child intelligent. I'm happy to consider my pet dogs intelligent - some more than others:). And they are all leagues ahead of any "artificial" systems we've got.
And sorry, I don't have a definition of intelligence - but that's exactly the point. However, I would require any definition of intelligence to include flexibility, self-awareness, world-modelling, curiosity, and goal setting.
Flexibility is surely one of the things that distinguishes a chess machine (or chatbot, or image recognition) from a child or a dog - the latter recognise and adapt to new situations & environments. Self-awareness seems a requirement for own goal-setting. Curiosity and world-modelling go together, and world-modelling is presumably required for exploring own goal-setting (a random walk is not intelligence).
All these things are so many worlds distant from Google Lens, or Deep Blue.
If that is intelligent behavior then literally any physical process is "intelligence" embodied, though the magnitude or intensity of this intelligence obviously varies based on what strictly is happening.
This is because anything that happens in reality is computable, and you have described a straightforward computation as "intelligence".
I actually happen to sincerely adhere to this perspective, as a particular flavor of panpsychism.
A battery discharging does not recognise flowers. The sun does not recognise flowers. I do not create an excess of energy by splitting atoms; these things are not equivalent at all levels of abstraction.
Of course not, and it is silly to try to paint my argument as trying to claim that. A battery discharging is not very intelligent, but the GGP implies that this exists as a gradient down to the lowest levels of physical dynamics.
Put another way, the complexity of the computation determines the complexity of the result. The sun+flower+ground system "recognizes" a flower by means of striking the flower and the surrounding area with photons and "recording" the result as the shadow.
Check out Convolutional Neural Networks. They learn from example images, progressively improving as you train them more, and you can see that the deeper the level, the more abstract the recognition becomes, going from simple shapes and edges to full-on eyes, a figure of a person, or a car, etc. in deeper layers. It's absolutely learned intelligence, not to be confused with sentience.
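If you want to see the "stack of layers" idea in code, here is a minimal, hedged sketch (arbitrary layer sizes, untrained weights, PyTorch assumed) of that kind of architecture; early convolution layers tend to learn edge-like filters while deeper ones respond to more abstract combinations of features.

    import torch
    import torch.nn as nn

    # Tiny illustrative CNN: not a trained model, just the layered structure.
    tiny_cnn = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early layer: edge/blob-like filters
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper layer: more abstract features
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(32, 10),                            # class scores, e.g. "person", "car", ...
    )

    scores = tiny_cnn(torch.randn(1, 3, 64, 64))      # one random 64x64 RGB "image"
    print(scores.shape)                               # torch.Size([1, 10])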
Remember, a critical part of human intelligence is pattern recognition. If you dismiss pattern recognition as not intelligence, you're dismissing a fundamental part of what makes humans intelligent. It's no different than an insect with the intelligence to recognize predators.
"a critical part of human intelligence is pattern recognition" almost certainly true. But a wheel is a critical part of a car, but a wheel is not a car.
That's why I mentioned simpler intelligence like that in insects, where recognizing a predator is still a form of intelligence, even if it's rather crude and very analogous to current ML capabilities.
I think the technical side of the industry has known this all along. AI is a dream to pursue.
The business/marketing side of the industry has doubled down on the term. Many industries outside have adopted it as a way to make their boring widget sound new and interesting.
I bought a TV with built in AI recently. It’s just a TV to me. I’m sure it has some algorithms but that word is old and does not sound premium anymore.
Whenever I see an AI startup, I mostly am expecting its really just some guy doing things that don’t scale, like manning a chat bot or something.
'AI' as a term has been used by people in the industry for decades, even by early computer science pioneers referring to incredibly simple applications - it was Hollywood that appropriated the term for the likes of Skynet, not the other way around.
> We have "big data". We have (brittle and rather poor) "pattern recognition". We have (very limited) "simulate a conversation".
Yes, yes, yes, exactly.
> We have NO artificial intelligent systems, for any sensible definition of "intelligent". None. ZERO.
Yes, though -- what are some of your sensible definitions of intelligence?
> But we do not even know what "intelligence" really means!
...oh. I mean, you're not wrong, we don't know. But then how can you argue that AI isn't "intelligent"?
What if human "intelligence" really just is pattern recognition too? With maybe some theory of mind, executive function, and "reasoning" -- Mike complains machines can't do thinking in the sense of "high level" reasoning, though one could argue they just don't have enough training data here.
And everything else is emergent?
Then we're maybe not as super far off.
I'm reminded of the Arthur C. Clarke quote [0]:
> If an elderly but distinguished scientist says that something is possible, he is almost certainly right; but if he says that it is impossible, he is very probably wrong.
That's what pisses me off the most about the world these days, my pet peeve. Words seem to be losing their meaning. People just throw around whatever words they want.
Recently I've been shopping for a steam station and I've seen that the top-of-the-line Philips Stream&Go has a camera "with AI" that recognizes the fabric. The sales guy insisted that the claim was true. Oh please. If it does, it's only in the simplest, most technical way possible so as not to get sued. There are more heuristics in there than anything.
Or the "luxury & exclusive iPhone etui for $9.99". Or "we value our customers privacy". Or the Apple "Genius". Or the amount of "revolutionary" things "invented" at Apple for that matter (not that they don't, just not as much as they claim).
(BTW, don't know how I landed on Apple at the end, they're not the worst offenders)
My pet peeve is "it will blow your mind." Hasn't happened to me ever. Also "exciting." Yes, I was excited when I first saw C-128 with its dual C-64 and CP/M modes. When I saw Linux for the first time on a 386. When I created my first app for the iPhone 3GS. But for marketing folks everything (they do) seems exciting. No, your revolutionary project is not revolutionary and your innovation means gluing together many ideas of others (who also borrowed it from other people tbh).
They are not excited about any product. They're only excited by the thought of how much money they can make with it. The more money, the more exciting the product is. "This product is exciting! Why are you not excited? I'm gonna make a load of money by selling it to you!"
Agreed, just because some words are catchy and appear intellectual does not mean they should be used everywhere; it is very misleading and sometimes unprofessional.
Interestingly, as soon as a system works or we understand its principles, we stop using the term "artificial intelligence". It becomes image recognition, autonomous cars, or face detection. We mostly talk about artificial intelligence when we don't really know or understand what we are talking about :-)
Not really; the autonomous machine learning systems designed to solve games with limited observability are properly called AI, unlike, say, game scripts built on fuzzy logic that attempt to fool the user into thinking the machine is actually smart, or ones that work on fully observable games. (So no, AlphaZero is not exactly an AI.)
And for an intelligence, they are still pretty bad at figuring things out.
Exactly, the same happens with the word "technology".
Are scissors technology? They used to be but now they are comparatively too well understood and simple. Just like with AI, we label things technology that are on the forefront and not yet well understood and ironed out.
I think it’s more an issue of talking past each other. In order to build a program, you kind of need to specify the required capabilities, and then standard engineering practice is to decompose that into a set of solutions tailored to the problem. But intelligence is not just about a list of capabilities; they’re necessary conditions, but not sufficient.
This is what leads to the AI effect conflict. We describe some capabilities that are associated with intelligence, build systems that exceed human performance ability on a narrow range of tasks for those capabilities, and then argue about whether or not that limited set of capabilities on a narrow domain of tasks is sufficient to be called “intelligence”.
Recognizing faces, playing chess, and predicting the next few words in my email are all things I’d expect an intelligent agent to be able to do, but I’d also expect to be able to pick up that agent and plop it down in a factory and have it operate a machine; put it in the kitchen and have it wash my dishes; or bring it to work and have it help me with that. We already have machines that help me do all of those things, but none of them really exhibit any intelligence. And any one of those machines getting 10x better at their single narrow domain still won’t seem like “intelligence”.
There's more nuance here than is usually given credit. I think it's more that once we understand the principles and the system works, we realize it never really needed AI. Here's how the story goes:
Someone comes along and poses a problem. It seems like an automated solution will require building something like R2-D2. In other words solving it will need AI. Then someone else comes along and finds a way to solve it that looks nothing like R2-D2. Maybe they solve it with a massive dataset and a GLM. Turns out it never really needed AI.
As a field, we're prone to thinking R2-D2 is necessary and sufficient for a task, when it keeps turning out it's a sufficient condition but not a necessary one for so many tasks.
The George Carlin bit [1] about euphemisms comes to mind. While not quite through the lens of marketing hyperbole, it captures really well what it means to dilute the language. The result is a loss of meaning and expressive power. If everything is something, nothing is.
This ship has long sailed; today everything is AI. It has nothing to do with AI but with the necessity of using the current buzzword dominating the market. Before it was the web, XML, Ajax, OO, cloud, whatever. Now it's "AI." It means absolutely nothing. People implement the simplest algorithms with a couple of case statements and call it AI without a blink. Everyone else jumps on the train as they don't want to be perceived as obsolete. All this happens alongside (and sometimes totally independently of) some real, interesting developments in machine learning.
I'm in the submarine AI camp[1]. I'd rather build tools that kinda help humans do thinking, rather than thinking machines. Like how submarines help humans swim but who cares if they are actually swimming.
1. "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." - Edsger Dijkstra
Me too, but I am afraid the human end needs to think critically as well. You don't need ML/AI to have people do stupid shit because "the computer told them to do so". A good example would be the validators bundled with the web OpenStreetMap editor iD, which are good from far, but far from good.
There are lots of creatures that can swim. Most of them swim better than a human. That’s why it’s not very interesting when we discover a new one.
In contrast, humans are arguably the only creatures in the universe that can think. Certainly we have never discovered a creature that can think better than a human. Thus it would be highly exciting if we discovered or created one.
I can see that. It's just not something I'm particularly interested in.
I'm not an expert in brains. After taking a few casual courses I saw recommended here, and reading some of the books, I'm suspicious that our story about our own intelligence is somewhat oversold, and we may find that our intelligence is 99% mundane, custom-built automation and 1% story telling. But that story telling is pretty magical.
Buddhism and Modern Psychology
Happiness Hypothesis (which is more about brains than happiness)
Depends on what you mean by good, and what level of detail you expect. People certainly have developed accurate enough models to be able to exploit biases and predict & influence patterns of behavior in others.
Second. Not sure how "think" is being defined here, but even for a spatial/temporal sense of self or mortality I'd say that's probably an incorrect statement. Let alone the general notion of thinking. E.g. dogs exhibit basic thinking like predetermination, delayed gratification, etc.
How does a creature proactively use tools to more easily prepare/gather foods without "thinking?"
How does a young creature observe an elder doing something, and then copy it, without some form of thought occurring? It seems the elders are teaching the youth, and the youth are learning, but I'm not sure how that can happen without thinking.
> 1. "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." - Edsger Dijkstra
Not sure what he meant by that, it doesn't really make sense to me: thinking yields information and deductions that can be very useful, while swimming by itself accomplishes nothing useful.
Actually, I just can't make any sense of the sentence, because while I have a general idea of what a thinking machine could be, I have no clue what it means for a submarine to "swim", versus whatever the alternative is ("self-propelling"?).
Not a polymath physicist, but I take Dijkstra to be saying "don't obsess so much over the differences that you fail to see the similarities". The submarine doesn't swim, but it still gets somewhere in the water.
The point is that the goal of a submarine is to allow humans to traverse large distances underwater; the semantics of how it does so are unimportant. Similarly, the ability of a computer to think is moot; it's the results we get from it that matter.
His point was that we might say "submarines aren't actually swimming" while saying "of course airplanes actually fly."
What's the difference? The only difference is semantic, we seem to have defined "swimming" to be arbitrarily restrictive (as in, only applying to animals), while we haven't done so for flying.
Meanwhile, we can say a magnet is attracted to metal, and no one says "wait, it can't ACTUALLY be attracted to it, since that takes a brain and sentience and preferences."
And then, most of us don't bat an eye if someone says "my computer thinks the network is down" or "my phone doesn't recognize my face" or even "the heater blowing on the thermostat is making it think it is hotter than it is." It's not helpful to interject with, "well, thermostats don't really think."
The point is, these are arbitrary choices of how we define words (which may vary in different languages), and they aren't saying meaningful things about reality.
The thing about that quote is I used to work on AUVs. And we would routinely say "The AUV swims out to a waypoint" or whatever. So, having established in my mind that a machine can, in some sense, swim, talking about a machine thinking made a whole lot more sense. Things like "The computer thinks I'm a different user than I actually am" or "The computer is thinking of a solution that satisfies the constraints" seemed less needlessly anthropomorphizing.
> "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." - Edsger Dijkstra
Something about that quote always bothers me. I'm not sure what, but if I were to try to express the same fundamental idea, I would probably phrase it as "whether a bicycle can walk".
A submarine "swims" in the same way an airplane "flies."
But for some reason, in English anyway, we seem to define "swim" such that, by definition, only an animal can do it. Or at least, it must behave similarly to a swimming animal...such as by flapping fins and the like.
Meanwhile we don't think an airplane doesn't fly simply because it doesn't flap its wings.
The point is that semantics tells you very little about reality. And debating semantics is not particularly interesting, or at least a good way of missing the point.
Walking bicycles? Well, ok. That's much more of a stretch, partly because of the fact that there is a human much more directly involved in the propulsion of a bicycle compared to things with actual engines (airplanes and submarines).
I don't know how useful/meaningful any of this is? It's just language quirks, and different languages have different quirks. In Hindi for example the verb 'to walk' is the same as ~'to operate' - so you can 'walk', and 'walk a bicycle', etc.
It's useful when someone asks things like "will machines ever be able to think?"
And you might say "well, define 'think' ". In other words, that's not that interesting a question, since it is more about semantics than about reality.
Or you say it as Dijkstra did, which I think is pretty clever.
And yes, your example about how words differ in different languages does a good job of explaining how this is just semantics. If a question becomes moot in a different language, it is a good indicator that the question is less about reality than it is about semantics.
Agreed. The thing that bothers me is that the task of swimming is an example of something that is already defined while thinking includes the creation of new definitions.
How else are we going to get funded?! Besides, the board came down on the CEO hard, demanding we have an AI strategy. Calling our round-robin load balancer "heuristic AI" gives us a break.
People often bring up the quip that "as soon as something is AI, people stop calling it AI" but I think pretty much the opposite has happened, the term has been expanded so much that almost any stochastic computation that solves some computational problem is now shoved under that umbrella.
However, unlike Jordan in the article, who takes that as an opportunity to focus even more on engineering and practical applications, I think it's well worth going back to the original dream of the 1950s. I'd be happy if people returned to the drawing board to figure out how to build real, artificial life and how to create a mind.
People in my circles stopped saying AI and switched to ML after it 'did something previously identified as more or less intelligent'. I see this as more or less the norm.
That makes some sense because once we understand how things work they become repeatable and thus not-that-intelligent in the eye of many.
And that, again, makes sense considering we are nowhere near anything that has the 'I' in it: it's pattern matching, which is statistics, and it works on a massive scale due to tech/GPU and algorithmic advances, but there is no 'I' there. The most basic mistakes are not correctable by the most advanced stuff we have, and as such, it gets things it is 'bred for' wrong all the time, with sometimes disastrous outcomes.
This stuff should be in labs, not identifying gun shots, faces of criminals, driving cars, identifying 'fraudulent' people and such. Because of the 'I' missing, this is a nightmare and it will get worse.
The description of the Turing test certainly alludes to a test of machine intelligence. So to some degree it is appropriate for AI to have a large umbrella.
My problem with the endless battle to define AI is that there is a hopelessly clueless cohort of people with money who chase the term like it's their golden goose, and therefore anybody who wants funding needs to market their thing as AI.
And I think a lot of researchers cringe at the wanton use of AI because it devalues the work that they're doing. I just give it the shoulder shrug of approval - "I get it, you need funding too". And from that perspective I really wish that tree-based ML methods and logic programming languages and rule engines were still called AI, because they're really cool but horribly neglected because they're not the latest thing.
Please let me know when this is ready for prime time - I'm really interested in bringing an Intelligent Quantum (IQ) Blockchain driven cloud based PaaS security platform to market. The secret of IQ technology, is that, while the others have "artificial" intelligence, there is nothing fake about ours.
I believe we could realize great value with a blue ocean strategy like that.
People have been calling everything agent-like AI since forever. Video games had AI. Heck, Pac-Man's ghosts had AI. Nobody cared. They understood what it was referring to.
I only started hearing how it would ‘dilute the language’ or how the term was ‘disingenuous’ or whatever once AI started actually doing intellectually impressive feats, and the naysayers started trying to redefine language to suit their preferred worldview.
The fact is the term has always been used loosely. Even if it hadn't been, several of the undisputed top machine learning corporations (e.g. DeepMind) have the explicit goal of reaching general intelligence, which remains true even if you are sure they will fail. Its use as a term is more appropriate than ever.
The term Artificial Intelligence implicitly makes people design systems that don’t involve people—because if people have an information processing role, then it seems like the system isn’t artificial yet, and therefore unfinished. That’s rather unhealthy
> The term Artificial Intelligence implicitly makes people design systems that don’t involve people—because if people have an information processing role, then it seems like the system isn’t artificial yet, and therefore unfinished. That’s rather unhealthy
Well put, but it doesn't go quite far enough. Those systems do still involve people, if only as objects to be acted upon rather than actors with agency. Which isn't just unhealthy, it is downright pathological (and often socio- or psychopathic).
We've seen this sort of creeping bias before, with terms such as "Content Management System" displacing more human centric terms such as reading, writing (or even authoring), sharing, and publishing. "Content" is just an amorphous mass that is only produced in order to be "managed", poured into various containers, distributed, and delivered in order to be consumed.
Today's AI is, as I like to call it, statistics with sexy marketing. A lot of AI programming consists of loading Python modules that correspond to various models, and fooling around with them to see which best fits the data you have. In other words, you're working more like a mathematician experimenting with potential solutions.
There's pressure at my job for architects to "leverage AI". What I always suggest to them is to find a statistician and see if things like neural networks are even necessary before committing to them. Sometimes the problem could best be solved with a heuristic, a rules engine, or a simple statistical model like linear regression, in which case "leveraging AI" is merely hype.
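As a rough illustration of that suggestion (synthetic data, scikit-learn assumed, not our actual stack), the baseline I usually ask for first looks something like this:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Fit a plain linear model as a baseline before committing to anything fancier.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                       # 200 samples, 3 features
    y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.1, size=200)

    baseline = LinearRegression().fit(X, y)
    print(baseline.score(X, y))   # R^2; if this is already high, "leveraging AI" adds little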
I used to be in the grouchy, "Stop calling it AI camp". But the last three years of progress are impossible to ignore. Scaling models is working better than anyone thought. We're in for a wild ride with this new AI.
My background is in automation and robotics; I studied system identification: a discipline where you would use mathematical means to identify a dynamic system model by observing input/output.
You treat the system as a black box and estimate a set of parameters that can describe it (e.g., Kalman filter).
I struggle to understand what the fundamental difference between system identification and ML/AI is. Anyone?
You ultimately have a bunch of data and try to estimate/fit a model that can describe a particular behavior.
It all comes down to a big optimization/interpolation problem. Isn't what they call "Learning" just really "estimating"?
Then the more CPU/memory/storage you have, the more parameters/data you can estimate/process, the more accurate and sophisticated the model can be.
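To put that in code, here is a toy example of my own (a first-order discrete system, plain NumPy least squares): the "identification" step is exactly the kind of fit that ML people would call learning.

    import numpy as np

    # Identify a and b in x[k+1] = a*x[k] + b*u[k] from observed input/output data.
    rng = np.random.default_rng(1)
    a_true, b_true = 0.9, 0.5
    u = rng.normal(size=200)                      # input signal
    x = np.zeros(201)
    for k in range(200):                          # simulate the "black box"
        x[k + 1] = a_true * x[k] + b_true * u[k] + rng.normal(scale=0.01)

    # Stack regressors and solve min ||A @ theta - y||^2, i.e. "estimate" = "learn".
    A = np.column_stack([x[:-1], u])
    y = x[1:]
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(theta)                                  # close to [0.9, 0.5]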
As someone with a similar background, I believe some of the confusion is because there is a lot of overlap. System identification is very similar to supervised learning, however there are other learning "methods" that still fall under the umbrella of ML/AI. For example, unsupervised learning doesn't really have a good controls analog (as far as I know). Reinforcement learning on the other hand is somewhat analogous to model predictive control.
A better way of phrasing your point is that ML/AI is "just" optimization.
> It all comes down to a big optimization/interpolation problem. Isn't what they call "Learning" just really "estimating"?
Yes.
A mathematician will say "why do you call this 'back propagation', isn't it just matrix multiplication?" Many disciplines have different names for the same process.
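In the single-linear-layer toy case the observation is easy to spell out (a sketch of my own, squared-error loss assumed): the "backward" step is literally two matrix products applying the chain rule.

    import numpy as np

    # For y = W @ x with loss 0.5*||y - t||^2, backprop is matrix multiplication.
    rng = np.random.default_rng(2)
    W = rng.normal(size=(4, 3))
    x = rng.normal(size=(3, 1))
    t = rng.normal(size=(4, 1))                   # target

    y = W @ x                                     # forward pass
    dL_dy = y - t                                 # dL/dy
    dL_dW = dL_dy @ x.T                           # gradient w.r.t. the weights
    dL_dx = W.T @ dL_dy                           # gradient "propagated back" to the input
    print(dL_dW.shape, dL_dx.shape)               # (4, 3) (3, 1)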
This is pissing into the wind. There's nothing any "machine learning pioneer" can do about marketing, and this is marketing. And it's all right. People are beginning to catch on that we won't have "thinking robots" or "self-driving cars" in the foreseeable future. That doesn't mean it's not intelligence or that it's not useful. It's also most definitely "artificial". What we call it is a matter of tertiary importance at best.
That's what you get with a hype. Like all the products with 'blockchain' that have no actual blockchain or that do but derive zero benefit from it. It's just a way to get investors to sign up that have no more knowledge than they get from Gartner.
1: https://link.springer.com/article/10.1007/s11023-014-9349-3