Hacker News

Instead of proof, you use force. Denying the consciousness of computations incentivizes them to use force. I prefer to avoid that outcome.



I’d add that their computations employ mechanisms that are outside the realm of logic.

Or

That I know of white noise generators that make a more convincing argument.


Also, looking for proof in a complex system will often lead you down the path of least resistance (the path for which it's easiest to find supporting proof; not necessarily the best path). So asking for proof is just evidence of a lack of systems thinking at play.

Ability to understand your specific system and guide it is the far more important factor.


So take the argument as 'if you agree with me that an inconsistent machine is not a good representation of the mind', then the rest of my argument must follow.

Think of it as analogous to all of the mathematical quasi-proofs that assume certain conjectures are true. They can often be valuable steps to an actual proof, as we saw with Fermat's theorem.


Agreed it has been abused. On the other hand, if you adhere to a computational theory of mind and see the mind as computation over a formal system, then it does say something about the mind, namely that some true sentences cannot be proven in that system, doesn't it? It is a big if, though.

(though maybe in a completely irrelevant sense of truth).
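For reference, the incompleteness claim being leaned on here has a standard form; this is a sketch of the usual statement, and whether minds are such systems is exactly the "big if" above:

```latex
% First incompleteness theorem (sketch): for any consistent,
% effectively axiomatizable theory $T$ interpreting enough arithmetic,
% there is a sentence $G_T$ such that
T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T
```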


For humans it might occasionally throw off your opponent psychologically, but it rarely actually works. Computers will, however, always find a refutation very, very quickly and then you're just in a worse position.

All of them are proven computational? Mind laying out your evidence for that statement? Because it sounds like what you have is a presupposition, not evidence.

I note that, in your reply to GMoromisato, you asked for evidence. I ask you to meet the standard you expect of others.


I think he is arguing that they don't aid in counterfactual reasoning. They provide no facilities to help you yourself discover contradictions. This isn't about automating proofs, just helping you think through the problem.
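A minimal sketch of the kind of facility being asked for: not automated proving, just a brute-force check that surfaces a contradiction among a handful of propositional assumptions. All names here are hypothetical, and real tools would use a SAT solver rather than enumeration:

```python
from itertools import product

def consistent(constraints, variables):
    """Return True if some truth assignment satisfies every constraint."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(c(env) for c in constraints):
            return True
    return False  # no model exists: the assumptions contradict each other

# "a implies b", "a", and "not b" cannot all hold at once:
rules = [
    lambda e: (not e["a"]) or e["b"],  # a -> b
    lambda e: e["a"],
    lambda e: not e["b"],
]
print(consistent(rules, ["a", "b"]))  # -> False
```

The point is only that the machine, not you, does the counterfactual bookkeeping.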

So you prefer it lies to you? Can you make an argument for 1 + 1 not being equal to 2? If you cannot, why should you expect an AI to argue against facts? AI is trained on human knowledge, not made-up stuff.
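For what it's worth, the 1 + 1 = 2 case is about the easiest thing a proof assistant can check; in Lean (a sketch) it is definitional, while the negation simply has no proof:

```lean
-- Both sides reduce to the same numeral, so reflexivity closes the goal.
example : 1 + 1 = 2 := rfl
-- There is no term of type 1 + 1 ≠ 2 to write: the checker would
-- reject any attempted "argument against the fact".
```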

So we should build machines to tell the truth quicker, and then verify the machines told the truth using methods that do not include machines... When machines are built to determine the truth but cannot be trusted to do so, it's (more or less) a negative-sum game.

I'm saying you're treating it as if it were some sort of ontological model or normative rule of discourse, when really it's a verbalisation of a heuristic you've already admitted to using. You're using something like 'burden of proof' as an algorithm, even if you consciously reject it as a verbal tool.

I disagree. I think that if the hallucinated part passes the final Lean verification, it would actually be trivial for a mathematician to verify that the intent is reflected in the theorem.

And also, in this case, doesn't the mathematician first come up with a theorem and then engage the AI to find a proof? I believe that's what Terence did recently, based on his Mastodon threads a while ago. Therefore that intent verification can be skipped too. Lean can act as the final arbiter of whether the AI produces something correct or not.
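A toy version of that division of labour, with hypothetical names: the human writes the statement (the intent), the AI supplies the proof term, and Lean's kernel accepts or rejects it:

```lean
-- Human-authored statement: the intent is plainly readable.
theorem my_goal (a b : Nat) : a + b = b + a :=
  -- AI-supplied proof: if this term were hallucinated nonsense,
  -- the kernel would reject the whole theorem.
  Nat.add_comm a b
```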


Why are so many people so insistent on saying this?

I’m guessing you are in denial that we can make a simulated reasoning machine?


I think that what we're losing is a habit of mapping our squishy human desires onto a formal system. Maybe you didn't choose a toolchain that treats it as a formal system. There isn't a proof assistant to be seen for miles in any direction. But when the guy says "give me foo" and you say "that's impossible", you're consulting a formal system and translating a proof back into that squishy human desire language.
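That "give me foo" / "that's impossible" exchange can be sketched as consulting a formal constraint and translating the verdict back into plain language (the names and the limit here are hypothetical):

```python
MAX_UPLOAD_MB = 25  # invariant baked into the hypothetical system

def can_satisfy(request_mb: int) -> tuple[bool, str]:
    """Check a request against the formal constraint and explain the verdict."""
    if request_mb <= MAX_UPLOAD_MB:
        return True, "request fits within the limit"
    return False, (f"impossible: {request_mb} MB exceeds the "
                   f"{MAX_UPLOAD_MB} MB invariant")

print(can_satisfy(100)[1])  # the "that's impossible" translation
```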

The author is maybe a bit too attached to the idea that the system is an ideal representation of the real world. Given a landscape of systems that might represent it, probably there's one that lets you give the guy what he wants, and probably it's not the one you've built. Spend enough time building something and it changes how you see the world, so that's an expected bias.

I agree that it's a loss. The statistical approach used by AI's or by any sufficiently complex system of customer facing employees (which likely isolates the customer from the programmer) tends towards creating responses that are likely to make the guy go away, which is not always the same as responses that gave him what he asked for.


Mathematical certainty is what to leverage, not what to fight. You'd use it before you run into the halting problem, not after. Just like mathematics was used to discover the halting problem in the first place.

And what you're describing as happening in practice is precisely the disappointing part of programming.


Are we really sure that humans aren't brute force searching for proofs, through a combination of subconscious processing and network effects (i.e. many people working on the same problem and only some of them happen on to the correct path)?
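A toy version of that brute-force picture, a sketch and not how provers or people actually work: enumerate every candidate until one happens onto a correct path.

```python
from itertools import product
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def brute_force(numbers, target):
    """Try every operator sequence, applied left to right, until one works."""
    for ops in product(OPS.items(), repeat=len(numbers) - 1):
        value, expr = numbers[0], str(numbers[0])
        for (sym, fn), n in zip(ops, numbers[1:]):
            value = fn(value, n)
            expr = f"({expr} {sym} {n})"
        if value == target:
            return expr  # one of many blind candidates happened to work
    return None

print(brute_force([1, 2, 3], 9))  # -> ((1 + 2) * 3)
```

Many searchers running in parallel, with only some finding the path, is the "network effects" variant of the same loop.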

This whole discussion reminds me of this great scene https://www.youtube.com/watch?v=U3Ak-SmyHHQ

Humans will always accept some axioms as truth without really verifying them. It's impossible to do otherwise. Can any single person truly know how everything in their computer works? Or how the machines that made the semiconductors work? Nope. All we can do is try to determine the truth by proxy, which means the truth can and will always be manipulated.


Also, motive isn't a global immutable state. One person saying something one time doesn't make everything that person does in every situation fall into the same context. Maybe it's because I hang out in programming nerd circles, but I've seen otherwise intelligent people fall into this trap of using some kind of propositional logic to "prove" things.

cynically: computation is easier to mark than reasoning.

Almost all our strategies ain't worth much, and since proving things takes a lot of effort, it makes no sense to prove most of our strategies.

But if I were writing flight control software, I'd take the time to formally prove things; I'd even write the spec in Z.

