
This comment reminds me of 'Fooled by Randomness' and 'The Black Swan' by Taleb. Logic rules.



Trying to apply logic to an illogical system is always going to be tricky.

I was going to say the same thing. You might also be interested in logical induction:

https://golem.ph.utexas.edu/category/2016/09/logical_uncerta...

https://intelligence.org/2016/09/12/new-paper-logical-induct...


Black box systems are incredibly common. By definition, there isn't anything readily apparent from the outside. You haven't shown that to be absurd, you only showed that if you can see enough of the interior of a black box, it is no longer a black box.

You complain that the author's logic will lead to creating complicated hidden-variable theories, but that's exactly what the author is advocating. While there will always be some ever more convoluted model to explain results, any given model is testable, whereas assuming there is nothing to model is not testable.


This is where we start to wander into problems with classical set logic.

No, I'm not joking. Fuzzy sets are pretty much required for any meaningful discussion of "failed", "foreseeable" etc, if they are to be at all useful concepts: http://chester.id.au/2012/04/09/review-drift-into-failure/

Reduce foreseeability and failure to a binary toggle and you destroy enormous amounts of information with high utility so that some syllogisms still work. Wasteful.
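A minimal sketch of the point about binary toggles versus fuzzy degrees (the incident names and scores here are invented for illustration):

```python
# Contrast a crisp (binary) classification of "foreseeable" with a fuzzy
# membership degree. The cutoff and scores are hypothetical.

def crisp_foreseeable(score: float) -> bool:
    # Binary toggle: everything above an arbitrary cutoff "was foreseeable".
    return score >= 0.5

def fuzzy_foreseeable(score: float) -> float:
    # Fuzzy membership: a degree of foreseeability in [0, 1], no cutoff.
    return max(0.0, min(1.0, score))

incidents = {"near_miss": 0.45, "known_defect": 0.90, "freak_event": 0.05}

for name, score in incidents.items():
    print(name, crisp_foreseeable(score), fuzzy_foreseeable(score))
```

The crisp version reports the near-miss and the freak event identically (both "not foreseeable"), which is exactly the information loss being complained about.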

Taleb is a very intelligent man, but AFAICT he does often reinvent existing concepts with much cooler names. "Antifragile" sounds awesome. "Robust" sounds boring.

Take "black swan", for example. Given the technology of the day, black swans simply didn't exist. Iain Banks called these "Outside Context Problems", one might also call them "paradigm-busters".

Anyhow. I should have padded out my original definition with the usual legalese about "reasonably foreseeable".


This shows an important concept: programs with well-thought-out rule systems are very useful and safe. Hypercomplex mathematical black boxes can produce outcomes that are absurd, or dangerous. There are the obvious risks of prejudice in decisions based on black-box outputs, but also: who would want to be in a plane where _anything_ was controlled by something this easy to game and this impossible to anticipate?

Me too. I believe the effect contains a logical recursion that is impossible to escape from. Maybe the randomness variable in it? It looks as if all validations and refutations of it will always appear logical. I don't know what to call it or compare it to, but it feels like this must be documented as a prime example of its category.

> I'll bet you a nickel something screwy is going on with the JMP Indirect logic

I think we're on the same page, except that you sound like you actually know what you're talking about and I'm just following my gut, also known as `guessing`.

Thanks for the link, that's a great reference!


Rationalist reasoning about things you don't actually know about in the real world not only leads you to guaranteed wrong conclusions, it makes you think you're right because you made up a bunch of math about it.

This is why people think computers are going to develop AGI and enslave them.


But then we're using logic similar to Pascal's Wager to make important decisions.

Humans also intuit a whole lot of theorems (loosely speaking) which are false. There is nothing prohibiting an algorithmic process from generating statements which are variously (and, from the POV of the algorithm, indistinguishably) true but unprovable, true and provable, and false.

> However, most of real-life is not as clear-cut. Deriving the truth of a statement may depend on multiple potentially faulty pieces of evidence which must be taken into account together. For this, one needs to assign probabilities.

This is what the fuzzy people want you to believe. The logicians have a better answer: for this you need more context. E.g. in programming you would add types and pre- and post-conditions, not a claim that this statement is 85% true, as the current AI hype pretends.
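The "add more context" approach can be sketched concretely. Instead of attaching a confidence percentage to a result, encode the assumptions as explicit, checkable pre- and post-conditions (the function here is just an illustrative example):

```python
# Sketch: contracts instead of confidence scores. The preconditions and
# postconditions either hold or the program fails loudly - no "85% true".

def integer_sqrt(n: int) -> int:
    # Precondition: n must be a non-negative integer.
    assert isinstance(n, int) and n >= 0, "precondition violated: n >= 0"
    r = int(n ** 0.5)
    # Float sqrt can be off by one for large n; correct it.
    while (r + 1) * (r + 1) <= n:
        r += 1
    while r * r > n:
        r -= 1
    # Postcondition: r is exactly the floor of the square root of n.
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r
```

The postcondition assertion is a checked statement about every output, which is a different kind of guarantee than a probability attached to the answer.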


Well put. This goes to the essential unknowability of causality (Gödel incompleteness) and the uncomputability of plans (Turing halting problem).

I bet on a stock and it went up; I sold it and made money. Just because betting on a stock made me money does not mean my reasoning was correct. Furthermore, there is no way to really know why the stock went up. Even if I repeat this across many, many trials, no amount of verification will prove me right.
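The stock point is easy to demonstrate with a toy simulation (all numbers invented): give enough traders pure coin-flip strategies, and a few will win every round through luck alone.

```python
# With 1024 traders each making 10 coin-flip bets, we expect about
# 1024 / 2**10 = 1 trader to win every single round by pure chance.
import random

random.seed(0)  # fixed seed so the sketch is deterministic

traders, rounds = 1024, 10
all_winners = sum(
    all(random.random() < 0.5 for _ in range(rounds))  # every bet "paid off"
    for _ in range(traders)
)
print(all_winners)
```

Any of those lucky traders could tell a confident causal story about their "system", and repeated success across those 10 trials would not make the story true.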

McKinsey is a real world arbitrage on this but dressed up in expensive ivy league educations, $500/hr consulting fees and fancy gobbledegook which awes management.

Munger, FWIW, says they never ever hire management consulting firms.


There is no quote that "the smarter you are, the more false assumptions you have". I can't even find the words false or assumption in the article. You are strawmanning or reading incorrectly.

This is what he says:

  > Here's the problem. Logic is a pretty powerful
  > tool, but it only works if you give it good input.
  > As the famous computer science maxim says, 
  > "garbage in, garbage out." If you know all the
  > constraints and weights - with perfect precision
  > - then you can use logic to find the perfect 
  > answer. But when you don't, which is always, 
  > there's a pretty good chance your logic will lead
  > you very, very far astray.

My personal experience in life has been that this is the case. No matter how much I have studied, it's not possible to logic your way out of insufficient or misperceived data, which in most domains other than programming is the rule.

I've also experienced the same over-confidence of logical thinkers kept in bubbles - they adjust to the quantity and correctness of data they have in one part of their life and then use logic and cognitive biases to rationalise the outcomes they have in life.


That and Fuzzy Logic.

Great question. A legend or brief description of the underlying logic / heuristic would be helpful.

Occam's razor at work. The simpler the logic, the less likely it is to contain unexpected faults, compared with a system that has more complex logic.

Notice how black swans are a labeling/categorization problem, whereas F=ma is a causal relationship. The first has no induction problem, because there is no swan-ness except in our minds. The second has no induction problem, because you model causal relationships and predict from them; better predictions means less wrong. The problem of induction is only one of "ultimate reality".

How does reasoning logically based on potentially unreliable information produce accurate, beneficial results? Especially when it's impossible to know the specific probability any given piece of information is true?
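One partial answer, sketched with invented numbers: treat each unreliable source as evidence with an assumed error rate and combine reports with Bayes' rule. The catch is exactly what the question raises - the reliabilities themselves are assumptions, not knowns.

```python
# Hypothetical numbers throughout. Two unreliable witnesses independently
# report that hypothesis H is true; combine their reports with Bayes' rule.

prior = 0.5                  # P(H) before any reports (assumed)
reliabilities = [0.7, 0.6]   # P(witness reports "true" | H); we also assume
                             # P(reports "true" | not H) = 1 - reliability

posterior = prior
for r in reliabilities:
    numerator = r * posterior
    posterior = numerator / (numerator + (1 - r) * (1 - posterior))

print(round(posterior, 3))  # -> 0.778
```

Two mediocre witnesses push the posterior to about 0.78, but shift the assumed reliabilities and the conclusion moves with them, which is the garbage-in, garbage-out worry in miniature.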

Fascinating thought! Makes sense though. Given n inputs and outputs, you could literally make an if-statement that solves nothing but gives you the 'right answer'. Actually understanding the fundamentals (often unseen) of a problem to then create an elegant, holistic solution that almost prophetically seems to predict the future is the current peak of human thought achievement, IMO.
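The if-statement point in miniature (a toy example, not anyone's real system): a lookup table can return the "right answer" for every case it has seen while understanding nothing, whereas a model of the actual relationship generalizes.

```python
# Memorization vs. understanding, for squaring numbers.

KNOWN = {1: 1, 2: 4, 3: 9}  # memorized input -> output pairs

def memorized_square(n: int) -> int:
    if n in KNOWN:  # the giant "if-statement" that solves nothing
        return KNOWN[n]
    raise ValueError("unseen input - no understanding to fall back on")

def principled_square(n: int) -> int:
    return n * n  # models the actual relationship, so it generalizes

print(memorized_square(3), principled_square(3))  # both give 9
print(principled_square(10))                      # 100; memorized version fails
```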
