I think that what we're losing is a habit of mapping our squishy human desires onto a formal system. Maybe you didn't choose a toolchain that treats it as a formal system. There isn't a proof assistant to be seen for miles in any direction. But when the guy says "give me foo" and you say "that's impossible", you're consulting a formal system and translating a proof back into that squishy human desire language.
The author is maybe a bit too attached to the idea that the system is an ideal representation of the real world. Given a landscape of systems that might represent it, probably there's one that lets you give the guy what he wants, and probably it's not the one you've built. Spend enough time building something and it changes how you see the world, so that's an expected bias.
I agree that it's a loss. The statistical approach used by AIs, or by any sufficiently complex system of customer-facing employees (which likely isolates the customer from the programmer), tends towards producing responses that are likely to make the guy go away, which is not always the same as responses that give him what he asked for.
The issue is that a lot of intuitive stuff is wrong. When you formalize, you lose the simple, intuitive explanation - but you make it much harder to remain wrong, if you are wrong, or to become wrong, if you started off right.
As a simple example, consider the difference between explaining the Monty Hall problem - which might seem philosophical, open to interpretation - and coding it up. Go ahead: write a Monte Carlo simulation that compares the two alternatives of switching doors or keeping your original choice, and spits out a running count of what percentage each one wins. I'll wait while you code. The moment you do that, you see the two lines in your code that make the explanation 100% irrefutable and completely obvious.
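For anyone who wants to skip the exercise, here's a minimal sketch of the kind of simulation I mean (the names and structure are my own, obviously):

    import random

    N = 100_000
    stay_wins = switch_wins = 0

    for _ in range(N):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first choice
        # The two "irrefutable" lines: staying wins only if the first
        # pick was right (1/3); switching wins whenever it was wrong (2/3).
        stay_wins += (pick == car)
        switch_wins += (pick != car)

    print(f"stay:   {stay_wins / N:.1%}")
    print(f"switch: {switch_wins / N:.1%}")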
That is why papers are written this way. Intuition can go both ways.
Doesn't sound all that different from other professions. When a client asks for something logically impossible, it's a case of them not being as good at creating logically consistent models. The domain is about as irrelevant as whether the totaled car was going to work or to a birthday party.
Also, looking for proof in a complex system will often lead you down the path of least resistance (the path for which it's easiest to find supporting proof; not necessarily the best path). So asking for proof is just evidence of a lack of systems thinking at play.
Ability to understand your specific system and guide it is the far more important factor.
There are many cases where you can verify an answer much more easily than finding it. Not saying that necessarily applies here, but for the more general problem space it isn't the limitation it might appear to be at first.
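A trivial illustration of that asymmetry (my own example, nothing to do with the article): finding a nontrivial factorization of a large number is expensive in general, but checking a claimed one is a single multiplication.

    # Two factors chosen purely for illustration.
    p, q = 1_000_003, 1_000_033
    n = p * q

    def verify_factorization(n, p, q):
        # Verification is cheap, mechanical, and conclusive.
        return p > 1 and q > 1 and p * q == n

    print(verify_factorization(n, p, q))  # True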
I agree with you on this: we humans state what we want in vague terms, and it's still very hard for computers to infer intent from those ambiguous requirements. That's not something you can easily do by taking derivatives of function compositions.
I think he is arguing that they don't aid in counterfactual reasoning. They provide no facilities to help you yourself discover contradictions. This isn't about automating proofs, just helping you think through the problem.
I don't think a formal system with symbolic inference is useful for describing the knowledge of any reasonably complex domain that doesn't have a mathematical model. And most human knowledge tends to be like this.
Not to mention that smart people usually do amazing things by just changing the rules and making the problem easier to solve (you can only make a horse go so fast or pull so much, but you can invent an automobile or a train).
After thinking about it, I realized this whole analogy is just computer science baby babble.
Fundamentally it comes down to not formally specifying the problem (not that I can; I can't, and neither can anyone else I'm aware of).
People always want to cling to quantitative interpretations of qualitative problems and then declare QED, despite the whole thing being predicated on a false premise (as if the thing we're discussing were an appropriate candidate for an algorithmic interpretation in the first place).
And without a formal specification of the problem, we have no formal way of checking our solution.
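To make that concrete with a deliberately easy case (sorting, which is my own example and obviously not the problem the article is about): when you can write the spec down, checking becomes mechanical; when you can't, there's nothing to check against.

    import random
    from collections import Counter

    def is_sorted(xs):
        return all(a <= b for a, b in zip(xs, xs[1:]))

    def meets_spec(sort_fn, xs):
        # The spec we *can* state for sorting: the output is ordered and is
        # a permutation of the input. This is what makes checking mechanical.
        out = sort_fn(xs)
        return is_sorted(out) and Counter(out) == Counter(xs)

    for _ in range(1000):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        assert meets_spec(sorted, xs)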
Pretty sure if they’d used the term “Boolean satisfiability” even once, we would have been able to work it out.
But "SAT" could mean a million things, and if you're not using this stuff on a daily basis it's simply not obvious.
It's not the author's fault; they can write what they want. What's pissing me off is the utterly unhelpful attitude of people responding to questions about wtf the guy is talking about.
That's not reasoning, though. It's not even an expert system or Prolog (because adversarial examples exist). It's very close to a million monkeys typing.
It's quite a stretch to say that we find truths that are beyond every possible formal system, rather than those that we've simply managed to encode in computers.
I agree. Reasoning about correctness can be a very powerful tool for some of the algorithmic parts of a system. But large parts of a system are not really amenable to formal reasoning about correctness.
As an example, if you try to write your own web search engine, the linear algebra or neural network operations you may be using are certainly amenable to mathematical reasoning, but the relevance of the returned results is not something you can prove mathematically.
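A toy sketch of what I mean (made-up vectors, not any real engine): the scoring math below has provable properties, but "doc_a is the more relevant answer" is not one of them.

    import math

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) *
                      math.sqrt(sum(b * b for b in v)))

    # You can prove things about this function (symmetry, bounded range),
    # but not that ranking documents by it returns what the user wanted.
    query = [0.9, 0.1, 0.0]
    docs = {"doc_a": [0.8, 0.2, 0.1], "doc_b": [0.1, 0.9, 0.3]}
    print(sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True))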
And in my opinion, this fact often leads "correctness" fanatics to retreat to safe areas where their methods work, and snipe at all the rest of the world. Characteristically, the article quotes E. W. Dijkstra, whose much-beloved notes are full of explorations of mathematical toy problems, and catty dismissals of anybody trying to make real-world use of computers.
When Knuth was dissatisfied with the results of early computer typesetting, he spent years of his life developing TeX. Meanwhile, Dijkstra kept writing his papers by hand and having staff turn them into printed form. That, to me, is the epitome of the "correctness über alles" mindset.
Yes, your interpretation is correct. I think the killer app here is mathematical proof. You often need intuition and creativity to come up with a proof, and I expect AI to become really good at that. Checking the proof then is completely reliable, and can be done by machine as well.
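As a tiny (and deliberately trivial) illustration of what "checking is completely reliable" means in practice, here's a Lean 4 theorem whose proof the kernel either accepts or rejects, with no judgment call involved:

    -- A trivial machine-checked proof; the checker does the verification.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b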
Once we have AIs running around with the creativity of artists and the precision of logicians... well, time to read some Iain M. Banks novels.
There is no quote that "the smarter you are, the more false assumptions you have". I can't even find the words false or assumption in the article. You are strawmanning or reading incorrectly.
This is what he says:
> Here's the problem. Logic is a pretty powerful
> tool, but it only works if you give it good input.
> As the famous computer science maxim says,
> "garbage in, garbage out." If you know all the
> constraints and weights - with perfect precision
> - then you can use logic to find the perfect
> answer. But when you don't, which is always,
> there's a pretty good chance your logic will lead
> you very, very far astray.
My personal experience in life has been that this is the case. No matter how much I have studied, it's not possible to logic your way out of insufficient or misperceived data, which in most domains other than programming is the rule.
I've also seen the same over-confidence in logical thinkers kept in bubbles - they calibrate to the quantity and correctness of the data they have in one part of their life, and then use logic and cognitive biases to rationalise the outcomes they have in life.
Yeah, I think the catch is the "pre-existing property specifications" part.
I think you need to carefully understand both AI- and human-generated propositions, while the proofs, once checked, are just gravy.
I have much more sympathy for Pearl. He's constructed "toy" problems that yield different answers depending on the hidden causal structure chosen. He then shows that unless one takes account of the causal structure, it's impossible to correctly answer the problem. The logical conclusion of this is that any technique that doesn't use the causal structure as an explicit assumption is instead using it as a hidden implicit assumption. His hope is that anyone who goes through his exercise will come to the same conclusion. Starting with real-world problems is tricky, because one doesn't know the correct answer in advance, thus it's much harder to show the reliance on the assumptions.
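A sketch of the flavour of toy problem I mean, with counts I invented purely to produce the reversal (these are not Pearl's numbers):

    # (treated_recovered, treated_total, control_recovered, control_total)
    strata = {
        "mild":   (90, 100, 800, 1000),
        "severe": (300, 1000, 20, 100),
    }

    for name, (tr, tn, cr, cn) in strata.items():
        print(f"{name:7s} treated {tr/tn:.0%}  control {cr/cn:.0%}")

    TR = sum(s[0] for s in strata.values()); TN = sum(s[1] for s in strata.values())
    CR = sum(s[2] for s in strata.values()); CN = sum(s[3] for s in strata.values())
    print(f"overall treated {TR/TN:.0%}  control {CR/CN:.0%}")

    # The treatment wins inside every stratum; the control wins in aggregate.
    # Which summary answers "does it work?" depends on whether the stratifying
    # variable is a confounder (adjust for it) or a mediator (don't) - that is,
    # on the causal structure you assume, explicitly or implicitly.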
Consider a parallel with computer programming. A user complains that they fear a program is giving them the wrong answer on a complex real world problem. They report this, and get back the unhelpful answer "Works for me, will not fix". Unable to shake the feeling that the answer is unreliable, they reduce the problem down to a proof of concept that serves as a simple self-contained test case. Two different inputs produce the same answer, but only one of them can be right! But now they are unable to convince the maintainer to even look at the test case, because now the maintainer says "I need to focus on the real world, and don't want to waste my time on toy examples".