> but calling it AI still doesn't make sense to me.
That's the core problem of AI: no matter what progress is made, it's instantly declared "not AI anymore," while the goalpost of what counts as AI is continually pushed further out. The real issue, of course, is that we don't know how intelligence actually works, so it's impossible to set a fixed goalpost for when AI is truly achieved.
> This is why people hate AI and the people who use it.
For me personally, I just hate that they don't stop talking about it!
I think focusing on the AI-shit is missing my point: why do you even care about this? What is really at stake in this? Is it really just the idea that there is some kind of injustice being done? An injustice to whom?
If you spend your life doing all the hard work it takes to excel at something, the rewards of that hard work are intrinsic to your new mastery. What someone does or doesn't call you because of your skill is about the least important thing in the entire pursuit.
I too am quick to point out how annoying the AI stuff is, but I think if we are going to defeat them we need to hold on to and affirm shared human capacity more than we need to protect titles.
It can be many problems, but I think it is certainly an AI problem. A big one. It really shows that asking a computer for something is fundamentally a bad idea. Computers are for, well, computing. They are not for answering history questions or whatever.
>Yes, we do. It's just that _some_ people like to pretend we don't understand them
I generally enjoy your comments on AI even though I usually disagree with your takes. But this is just preposterous. When people use the term "understand," they mean it in the sense of a mechanistic understanding of how features of the trained network result in features of the output. This means being able to predict (and potentially set strong bounds on) the behavior of the system in the wide range of scenarios it will encounter. We are nowhere near this level of insight into how these systems construct their output.
I doubt I'm telling you anything you didn't already know here, so your comment is extremely puzzling.
>At least with AI, you don't have a system that has been fine-tuned by eons of evolution to manipulate people at least partly against their own interests.
As a smart person, I actually find this remarkably insulting. Not only do I have little desire to manipulate others, I have little ability. I'm much better with computers than with manipulating people.
>>It gets things wrong enough that I have to manually check everything it does and correct it, to the point where I might as well just do it myself in the first place.
I have had personal experience with this, and I've heard the same from others. These AI things often suggest wrong or buggy code. If you begin your work by assuming the AI is suggesting correct code, you can spend hours, even days, debugging in the wrong place. Secondly, when you do find the bug in the AI-generated code, it can't seem to fix or even modify it, because it has lost the context in which it generated that code in the first place. Thirdly, the AI itself can interpret your questions in a way you didn't mean.
As of now AI generated code is not for serious work of any kind.
My guess is that a whole new paradigm of programming is needed, where you will more or less talk to the AI in a programming language itself, somewhat like Lisp. I mean a proper programming language at a very abstract level, which can be interpreted in only one possible way, and hence is not subject to interpretation.
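To make the idea concrete, here is a toy sketch of what "one possible meaning" could look like. Everything here is invented for illustration (the tuple grammar and operator names are not any real system's interface): requests are Lisp-style nested expressions with a fixed grammar, so each one evaluates to exactly one thing, unlike a natural-language prompt.

```python
# A toy, invented "request language": nested (op, *args) tuples with a
# fixed grammar, so every request has exactly one interpretation.
def evaluate(expr):
    """Evaluate a Lisp-style (op, *args) tuple; bare numbers evaluate to themselves."""
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    vals = [evaluate(a) for a in args]
    if op == "sum":
        return sum(vals)
    if op == "max":
        return max(vals)
    if op == "mul":
        out = 1
        for v in vals:
            out *= v
        return out
    raise ValueError(f"unknown op: {op!r}")

# ("sum", 1, ("mul", 2, 3)) can only ever mean 1 + (2 * 3):
print(evaluate(("sum", 1, ("mul", 2, 3))))  # 7
```

The point is not the arithmetic; it's that ambiguity is ruled out by the grammar itself, which is exactly what English prompts can't give you.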
> My only problem with it is that it doesn't understand the meaning of the words it's translating.
I think that's a fallacy that will haunt AI forever (or, more likely, will be the definitive civil rights struggle ca. March 25, 2035 6:25:45am to March 25, 2035 6:25:48am)
We tend to move the goalpost whenever AI makes advances. Where many people would have considered chess a pretty good measure of at least some aspect of intelligence, it seems like mundane number crunching once you know how it works.
It may be that we really mean consciousness when we say "intelligence", although if we ever find an easy formulation that creates a perfect "illusion" of consciousness, it may end up having some strong effects on people's conception of themselves that I don't necessarily want to witness.
> The question for me is...if I have an AI system that outperforms humans empirically, why do I need to understand how it works to use it?
Because, for a start, if you don't have a clue how it works, it's pretty difficult to have any confidence that it will actually outperform humans empirically in the long run, or in a specific set of circumstances.
> Since AIs are smarter than humans, why bother teaching humans at all? Unless there is a point where a developed human brain could outsmart the AI, it seems to be a waste and emotionally guided.
The obscurity of the definition of AI is precisely what drives this. The term AI is so vague among tech circles that I find it basically laughable at this point.
If we're talking about sentient AI, then none of these concepts matter. We'll have WAY more interesting problems to deal with.
> It doesn't look like what people would think of as AI at all.
If by “people” you mean specifically the subset of people who have technical knowledge or who read HN, then sure, but in the wider population, anything that seems intelligent that’s done by a computer is AI. They don’t care if it’s using statistical techniques, a random number generator or a bunch of conditionals. Laypeople don’t know or care what happens under the hood.
I once heard a conversation between an old boss of mine and an investor where my boss was saying that the software did calculations X and Y automatically, and the investor responded with “so it’s AI”. My thought was “wait, what? No it’s not,” but my boss said something like “AI is anything that people perceive as intelligence that’s artificial”. Historically, expert systems were seen as AI and they weren’t necessarily statistics based. That’s why we have more technical terms like machine learning.
But I agree that laws should be far more specific about what they mean, and “you can use a computer for this” would be better, if that’s what they really mean.
> The only thing worse than having poor AI is having no AI at all
To the contrary, the main thing that wears me out is interacting with AI systems which change from one interaction to another and aren't predictable: text completion on mobile, search results on datasets, order of posts on forums, etc. If I use a computer, it's because it used to be predictable and dumb: I do action A, I get reaction B. With AI systems, if I do action A twice I generally get a different B for whatever unknown reason, and that makes me want to throw the device out the window in frustration.
Not sure if I agree with this. My computer can't do a lot of things, and only a small set of those would really be considered AI.
The chess example isn't great. We have long known chess is a deterministic game, and that a good enough algorithm on powerful enough hardware should beat or stalemate a grandmaster. The state space is big but finite. It's just a more complex form of checkers. This seemed to be more a research issue of a known solvable problem, whose state space needed to be shrunk to something we can calculate, than a true AI breakthrough.
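The point about deterministic, finite games can be shown on a much smaller one: exhaustive minimax plays tic-tac-toe perfectly with nothing resembling intelligence, only search. Chess is the same idea at a scale where the state space must be pruned rather than enumerated. A minimal sketch:

```python
# Minimax over a tiny deterministic game (tic-tac-toe): with a finite
# state space and perfect information, exhaustive search plays perfectly.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is completed, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value for X (+1 win, 0 draw, -1 loss), with `player` to move."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if "." not in board:
        return 0
    scores = []
    for i, cell in enumerate(board):
        if cell == ".":
            nxt = board[:i] + player + board[i + 1:]
            scores.append(value(nxt, "O" if player == "X" else "X"))
    return max(scores) if player == "X" else min(scores)

# Perfect play from the empty board is a draw:
print(value("." * 9, "X"))  # 0
```

Scaling this exact recursion to chess is infeasible, which is why chess engines add evaluation functions and pruning, but the principle is the same deterministic search, not a model of intelligence.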
I think AI as described in this article is a functioning, self-determined, self-learning, HAL-like system. This doesn't exist. It probably could with the right research and funding. It's not a problem of perception or arguing about what may or may not be AI, although the AI effect can be annoying.
I don't see the AI effect explaining why AI hasn't gotten farther. I think not understanding consciousness or the brain on a certain level is probably our biggest hurdle. Unlike chess, we don't know what we're trying to solve exactly.
> I struggle to see how anything we have today is “AI”.
Your struggle is inherent in the "AI effect".
> So you think we’ve done it?
Repeatedly.
> We’ve solved the “AI” problem.
Calling it "the" is as wrong as calling all medical science "the" problem of medicine.
Replace "AI" with "medicine" and see how ridiculous your words look.
> Rather than posturing, perhaps you could provide us with the definition of “AI” so we can all agree it’s here.
"""It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and discover which actions maximize their chances of achieving defined goals"""
Which is pretty close to the opening paragraph on Wikipedia, sans the recursion of the latter using the word "intelligence" to define "intelligence".
But you haven't put much effort into trying to understand, have you?