When a problem can be solved mindlessly - with a repeatable set of steps no matter the situation - imperative/declarative programming makes sense.
Most real world situations are unique and require unique solutions. That's where AI really shines. You just describe your target, attempt to solve the problem, and pay attention to how far off you were from your target. The learning happens naturally.
Neural networks are too complex - sometimes billions of parameters - for anyone to decide by hand what each neuron should do. We as a species have evolved to develop brains that are extremely adaptable. AI mimics our own natural learning process, and it's proven far more effective at solving unique problems.
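A minimal sketch of that describe-target / measure-error loop, in plain numpy (the toy model, data, and learning rate are all my own assumptions, just to make the loop concrete):

    import numpy as np

    # Toy data: inputs x and the targets we want to hit.
    x = np.array([0.0, 1.0, 2.0, 3.0])
    target = np.array([1.0, 3.0, 5.0, 7.0])  # secretly y = 2x + 1

    w, b = 0.0, 0.0            # start with a wrong guess
    lr = 0.05                  # learning rate
    for step in range(2000):
        pred = w * x + b       # attempt a solution
        err = pred - target    # how far off were we?
        # nudge the parameters downhill on the squared error
        w -= lr * 2 * np.mean(err * x)
        b -= lr * 2 * np.mean(err)

    print(w, b)  # converges near w=2, b=1

Nobody decided what w and b should be; the error signal did.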
The problem, I think, is that we don't actually know how to program an AI. Neural networks only sidestep the problem by being easier to program by trial and error; it's nothing more than emulating a different architecture and brute-forcing its programming.
IMHO it's not artificial intelligence, it's just intelligent scripting. A real AI, like a human, shouldn't be programmed for something specific but should adapt itself to whatever it encounters.
Also - one must remember that with AI the programmer stands on the sidelines and lets the data do the logic. It requires a fairly different frame of mind than traditional programming.
That's the point. If we had good and intuitive models of how a system works, we could simply write a program to calculate whatever result we were looking for. The purpose of AI is to look at data and find patterns that the human mind struggles to pick up, allowing us to make accurate predictions without understanding the underlying rules. In the early days of AI, when we were basically just practicing, we applied it to problems we already knew the answers to, because we understood the underlying rules and could easily see how close AIs got to accurately learning them. But now that AI has graduated to the level where it can be used for real-world applications, it is of course successfully doing its job: figuring out relations which are not only difficult to spot but also difficult to turn into an intuitive narrative.
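That "practice on problems where we know the rules" step is easy to picture; a throwaway sketch (the rule and the noise level are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, size=200)
    # Data generated from a rule we secretly know: y = 3x - 4 (plus noise).
    y = 3 * x - 4 + rng.normal(0, 0.5, size=200)

    # Fit a model that has never been told the rule.
    X = np.column_stack([x, np.ones_like(x)])
    (slope, intercept), *_ = np.linalg.lstsq(X, y, rcond=None)

    # Because we know the ground truth, we can grade the learner directly.
    print(slope, intercept)  # close to 3 and -4

With real-world data there is no known (3, -4) to grade against - which is exactly the point being made.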
Your logic is completely flawed. If there is a super AI with real intelligence that understands problems and codes it up for you, why wouldn't it be possible to go one step further and solve problems on its own? Why do you think that a human programmer has to feed a problem statement to the AI for it to work?
I don't really see your point. AI/ML can make programs the same way humans do: write a program that seems right but is very wrong, and keep iterating until it mostly works, most of the time.
A number of programmers prefer writing programs they do understand, and these programs can still do incredibly interesting and useful things: theorem proving and verification, finding optimal solutions to all kinds of problems, ...
AI as it is now is merely brute force: huge datasets, huge computers. Of course it yields results; it would be a shame if it didn't, given how many resources it uses.
Sure it is. AI is a computer reacting to its environment to solve a problem. It doesn't necessitate machine learning, neural networks, or what-have-you.
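For instance, a plain thermostat-style reflex agent fits that definition with no machine learning anywhere (a toy sketch; the environment and thresholds are made up):

    # A reflex agent: senses its environment, reacts by fixed rules.
    # No training, no neural nets - just condition/action pairs.
    def thermostat(temp_c: float) -> str:
        if temp_c < 18.0:
            return "heat on"
        if temp_c > 24.0:
            return "cooling on"
        return "idle"

    for reading in [15.2, 21.0, 27.5]:
        print(reading, "->", thermostat(reading))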
Why would AI program? Programming is a way to make hardware perform some task. Well, couldn't AI just go straight to performing the task, skipping the programming step, at least as we know it?
I read it as, people will direct AI to search for solutions, then use these refined solutions to search for more solutions. A bit like how we use libraries and packages, and improved languages, to enhance traditional programming practices.
Taking an extreme helicopter view I think I can see it, but on the other hand I'm not convinced it's a very interesting observation. Throughout history we have used machines to make more complicated machines.
Edit, with this quote:
Because you only have to provide Software 1.0 implementation for a small number of the core computational primitives (e.g. matrix multiply), it is much easier to make various correctness/performance guarantees.
This is true, but it raises another question which is (to me) comically hand-waved aside:
how do you make correctness guarantees about the output of the neural net? It's not addressed, probably because it's very hard to do.
It's like the NAND gates inside CPUs and GPUs. Since they are such simple building blocks, they are very easy to verify.
It does not follow that the business logic I write to run on these things is easy to verify.
The same goes for neural nets, but more so.
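The contrast is easy to make concrete: a 2-input NAND gate can be verified exhaustively in four checks, while even a tiny model over floats can only be spot-checked (a sketch; the stand-in "net" below is invented):

    import random
    from itertools import product

    def nand(a: int, b: int) -> int:
        return 1 - (a & b)

    # Exhaustive verification: the whole input space is 4 cases.
    for a, b in product([0, 1], repeat=2):
        assert nand(a, b) == (0 if (a, b) == (1, 1) else 1)
    print("NAND fully verified")

    def tiny_net(x: float) -> float:  # stand-in for a trained model
        return max(0.0, 0.5 * x + 0.1)

    # One float input already means billions of possible values;
    # all we can do is sample and test a property, not prove it.
    for _ in range(1000):
        x = random.uniform(-100, 100)
        assert tiny_net(x) >= 0.0  # a spot check, not a guarantee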
I'm not saying these new AI tools aren't useful, they are. But it's easy to misunderstand what they can do.
I think it is because AI, like intelligence, is undefined in a concrete sense. When someone creates a new algorithm that solves a problem of identification or logic, we call it AI. So when people say a thing will be better with better AI, you could say it would be better with more effective problem solving. What thing in life couldn't be made better with more effective problem solving...?
The focus on learning is misguided. Human brains don’t have to use the same mechanism for learning as they do for reasoning. Perhaps they do, but I don’t believe we have enough evidence either way.
The useful part of AI is the inference/output. If we achieve human-level AI inference performance, then we have an incredibly powerful tool that can revolutionise society. The learning algorithm, be it gradient descent, evolutionary mechanisms or something entirely new, doesn't matter. Whatever gets the job done.
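One way to see that the learning algorithm is a swappable part: random-mutation hill climbing reaches the same answer gradient descent would on a toy objective (entirely illustrative; nothing here is from the article):

    import random

    def loss(w: float) -> float:
        return (w - 3.0) ** 2   # minimum at w = 3, whichever learner we use

    # Evolutionary-style search: mutate, keep the child only if it's better.
    w = 0.0
    for _ in range(5000):
        child = w + random.gauss(0, 0.1)
        if loss(child) < loss(w):
            w = child
    print(w)  # ~3.0, no gradients anywhere - the inference result is what matters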
The challenging part of coding is converting domain knowledge into correct code.
An AI has no high-level understanding of the domain and the desired solution. It just guesses at what you might want and glosses over all the edge cases and exceptions that actually need to be considered and resolved - the issues that only come to light after a real intelligence studies the problem and works through the correct steps to implement a solution.
It's an iterative process, because the solution is arrived at only after attempting a solution, learning what you don't know or haven't considered about the problem, then resolving those ambiguities, and producing a revised and more correct solution.
When responsible programmers realize that they don't know what they don't know, they ask questions and apply the answers.
AI never realizes when it makes mistakes because it has no domain knowledge. It just pulls a best guess out of its ass and says, "Here you go. Check my work."
Because a human can understand from first principles, while current AIs are lazy and don't unless pressed. See, for example, the suggestions to make bleach smoothies, etc.
Being able to come up with solutions to assigned tasks that aren't grounded in something often referenced and memorizable is basically the most valuable use case for AI.
Simple example: I want to tell my robot to get my groceries (which include frozen foods), pick up my dry cleaning before the store closes, and drive my dog to her grooming salon, but only if it's not raining and the car is charged. The same sort of logic is needed to accomplish all this without my frozen food spoiling, without wasting a salon visit, and while making sure I have my suit for an interview tomorrow.
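Spelled out as ordinary code, that "same sort of logic" is nothing exotic - the hard part is everything the conditions leave out (the names and conditions below are all invented for illustration):

    from datetime import datetime, time

    def plan_errands(raining: bool, car_charged: bool,
                     store_closes: time, now: datetime) -> list[str]:
        plan = []
        # Dry cleaning has a hard deadline: before the store closes.
        if now.time() < store_closes:
            plan.append("pick up dry cleaning")   # the interview suit
        # Grooming trip only if it's dry out and the car has charge.
        if not raining and car_charged:
            plan.append("drive dog to grooming salon")
        # Frozen food goes last so it spends the least time in the car.
        plan.append("get groceries (frozen food last stop)")
        return plan

    print(plan_errands(raining=False, car_charged=True,
                       store_closes=time(18, 0),
                       now=datetime(2024, 5, 1, 16, 30)))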