Maybe I'm that far ahead of the curve, but I understood immediately what an algorithm that "takes a Bernoulli sequence with p != 1/2 and outputs another B. seq. with p = 1/2" is. Or perhaps the author chose a poor example.
If anything, it shows that this field is mature, and has progressed to a more general vocabulary. It's still light years behind other kinds of engineering, IMO.
It's not at all accessible to children, at least compared to what it could be. I've been writing code for money for 20 years and it took me a long time to understand that paragraph. It is concise, but it's not clear. And worse, there is no indication of just how important this clever little algorithm is.
The point is this: even if I'm the fastest coder on earth, write the most elegant software quickly and with a very low bug rate, and innovate and spread my creativity to the team, if I don't know things like the above I probably won't be able to work for Google... or ever be considered a "real" software engineer. Computer Science and Application Development overlap, but they are not the same thing, and often require different skills. A great front-end developer can understand performance and write efficient code regardless of whether he knows all the details of big O notation.
I think part of the problem is that if you know what a Bernoulli sequence is, there's no need to read that. It's a useful abstraction for building things at the next level, but not for teaching things AT that level.
(I have an EE degree and disagree that CS is particularly far behind it; it's not like they're solving problems in hardware that are just way beyond our ability to do in software, are they?)
I'm a developer, a good one people say, but I'm not a computer scientist, or any kind of scientist, nor a mathematician (and my CS degree won't change that)
I get code, and I get design, but I didn't get that example without the (very interesting) simplified explanation
Bought the Kindle edition (price is very reasonable), will let you know what I think of it for grown ups...
I'm with you--got a BS in EE&CS, and an MS in Software Engineering, but I can't really honestly say I'm an engineer, electrical, software, or otherwise, nor a computer scientist. Maybe a computer scientician... And while I did very well in my studies (many, many years ago) and am pretty highly compensated for my work with computers, I needed the explanation too.
> This algorithm works because no matter what the bias is, the odds of getting a Heads then Tails will always be exactly the same as the odds of getting a Tails then Heads.
That may be easy to state, but I don't see it at all as a self-evident Truth. Hence the need for jargon, in order to actually come up with proofs in the language of mathematics.
The odds of getting Heads on any given flip are always x%, regardless of the previous result. Likewise, the odds of getting Tails are always (100-x)%. So you will get HT with probability x% * (100-x)%, which is at most 25% (with a fair coin) but might be much lower. However small that probability is, it is exactly the same as the probability of getting TH, which is (100-x)% * x%.
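If it helps to see the argument run, here's a quick simulation of that trick (von Neumann's classic debiasing procedure) in Python. The bias value 0.3 is an arbitrary choice for the demo:

```python
import random

def biased_flip(p=0.3):
    """A biased coin: returns "H" with probability p, "T" otherwise."""
    return "H" if random.random() < p else "T"

def fair_flip(p=0.3):
    """Flip the biased coin twice; HT -> Heads, TH -> Tails, and
    discard HH or TT and try again. Both kept outcomes occur with
    probability p*(1-p), so the output is exactly 50/50."""
    while True:
        a, b = biased_flip(p), biased_flip(p)
        if a != b:
            return a  # "H" came from HT, "T" came from TH

# Even with a heavily biased coin, the output hovers around 0.5.
flips = [fair_flip(0.3) for _ in range(100_000)]
print(flips.count("H") / len(flips))
```

The price you pay is wasted flips: the more biased the coin, the more HH/TT pairs get thrown away before a usable pair turns up.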
Is something like this sufficient as a proof? It's so natural and easy, yet convincing and accurate, that I have hard time believing it. :) Even if it's just an explanation it's still awesome. How I wish I could find materials teaching math ideas without relying heavily on math language! (And no, because I'm not going to contribute to the field I don't really need to know that language... Except I need, because it's the only way to obtain what is useful to me, however uncomfortable it is for me...)
The difficulty with reading more "formal" proofs is that they try to make each statement as concise as possible, because that makes it easier to manipulate the results, and to hold more of the proof in your head at a time.
For perspective, consider the quadratic equation. This equation was first discovered in India in 628 AD. An English translation (from 1817) reads:
"To the absolute number multiplied by four times the [coefficient of the] square, add the square of the [coefficient of the] middle term; the square root of the same, less the [coefficient of the] middle term, being divided by twice the [coefficient of the] square is the value"
Or, as modern mathematicians would write, x=(sqrt(4ac+b^2)-b)/2a.
The equation is slightly different for two reasons. First, it applied to equations of the form ax^2+bx=c, as opposed to ax^2+bx+c=0, so the sign of 4ac is inverted. It also misses the second answer provided by the '±'.
The original paragraph form is probably easier for someone unfamiliar with anything beyond arithmetic; however, the modern form is far easier to reason about and use for those who invest the time to learn the notation.
> The original paragraph form is probably easier for someone unfamiliar with anything beyond arithmetic
Isn't this because of the fact that at the time (628 AD as you say) there wasn't that much to be familiar with beyond arithmetic? (I don't really know the history of maths, just curious.)
> and hold more of the proof in your head at a time
Is it really the case? I mean, no matter which notation you start with, you need to parse and evaluate it mentally to convert it into the terms you actually think in.
I can see that for maths 'natives' it is helpful, because their mental models are very closely tied to the formal notation. But for me mathematics is a foreign language, it seems that I think in slightly different terms and this succinct notation makes it harder for me to actually understand what it is really about. (I don't mean your example specifically - it's still well inside my comfort zone).
As a programmer I do struggle for "math literacy" because I have to. I need to be able to read proofs, understand some concepts to be able to use them and to know when I need them. The problem is it's not natural for me. Yesterday I found here on HN a book that just may be exactly what I'm looking for: http://greenteapress.com/thinkstats/html/index.html (didn't have time to read it yet, but it looks promising).
> who invest the time to learn the notation
I think I said it before but maybe it's worth noting explicitly. I am somewhat familiar with the notation; I consider myself 'literate'. But I'm far from fluent and I probably never will be. Similarly, I can read and write Cyrillic script (with pen and paper). But it takes me more than twice as long to read Russian in Cyrillic as it does to read the same text written with Latin letters. I don't need someone to translate Russian to English for me. It's just the character set that is a problem. I think it's very similar to what I experience with math.
Perhaps not self evident, but easily provable. Now, having to COME UP WITH this when all you know is p(a) = x; p(b) = 1 - x, and the fact that p(a)p(b) == p(b)p(a) is some awesome insight.
You don't even need the first part. As long as p(a) and p(b) are both constant, they can add to any number, not just 1. They could even be negative, which is impossible for a physical coin of course but my point is: you only need to know that there are two constants, and multiplication is commutative, and you're done :)
An example: if you have a die and you don't know exactly how many sides it has, or whether it's fair, you can still get a perfectly fair 50/50 outcome. Just count "1 followed by 2" vs. "2 followed by 1" and ignore every other result.
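A minimal sketch of that die trick, assuming nothing about the die except that each face has some fixed probability (the face count and weights below are made up for the demo):

```python
import random

def mystery_die_roll():
    """Stand-in for a die of unknown size and unknown bias.
    The faces and weights here are arbitrary demo values."""
    return random.choices([1, 2, 3, 4, 5], weights=[5, 1, 2, 2, 7])[0]

def fair_bit():
    """Roll twice; '1 then 2' -> 0, '2 then 1' -> 1, ignore all other pairs.
    P(1 then 2) = P(1)*P(2) = P(2)*P(1) = P(2 then 1), so 0 and 1 are
    equally likely no matter how the die is weighted."""
    while True:
        a, b = mystery_die_roll(), mystery_die_roll()
        if (a, b) == (1, 2):
            return 0
        if (a, b) == (2, 1):
            return 1

bits = [fair_bit() for _ in range(20_000)]
print(sum(bits) / len(bits))
```

The only property being used is that multiplication is commutative; the downside is that rare faces make you discard a lot of rolls before a usable pair shows up.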
Quantum mechanics: frustrating formal mathematicians for over a century.
You'd have to exercise caution when using (quasi-)probabilities less than zero or greater than one, because many of the classical properties of probability theory depend upon those conditions.
More specifically, jargon is wonderful once you join a community that uses it. Much like any other language, its lifespan is measured by how many speakers it has. As a term, it is not significantly different from "pidgin" or "creole" or "dialect" or "slang".
"I started writing Lauren Ipsum by looking for ideas that I understood well enough to explain to a nine-year-old child."
Three decades ago I started a company that created a word processor. As part of our usability testing, we gave our little 100 page manual to an eight year-old boy and told him to write a letter. A couple of hours later he finished and we pronounced our endeavor a success. Even our customers were happy with how easy it was to use.
For our next release we produced a real manual -- 400 pages long. We were inundated with complaints about how difficult our product was to use now. Lesson: keep the story simple. Make the advanced features emergent and discoverable.
Intel's Ivy Bridge chips use that technique of eliminating bias to ensure true random numbers for the RdRand instruction (it gets random data from thermal noise).
I really see programming as an art first and then a science, just like music.
Art comes from creating something new and stepping outside the box while science is about explaining some existing thing and defining the box to put it in. By studying music history we can see that new styles, harmonies and figures almost always came before the theory that explained them. This is much like new languages, frameworks or patterns coming out and then papers being written about them.
During the middle ages you would hear almost only intervals of an octave or a fifth. Then with classical music came the third and the sixth intervals and the second and seventh intervals only made it a few centuries later. Music as a theory evolved over thousands of years. Only recently did it explode with jazz trying to find every single way to bend that theory.
The brain cannot easily play with a concept it has no single-word name associated with. Right now CS is full of concepts described by mini-phrases and no clear hierarchy for all of these concepts. Music theory, by contrast, can be entirely described starting from the major chord. Every single concept (as far as I know) has its proper place in a tree structure with the major chord as its root element.
It's ironic how programmers dread spaghetti code yet ended up with a spaghetti computer science. It might also not help that the industry is plagued with code smells.
In the end, by comparing harmonies to data structures, melodies to code flows and multiple voices to concurrency, I believe it's mostly all the same to the brain. A musical score is nothing more than source code for the human brain to read and interpret.
On another page [1] the writer says: "Children in Finland go to school year round". Why would he say that? Did he just pick a random far-away country to say a strange (but untrue) thing about? Or does he really think that children in Finland go to school year round, even though a little googling would reveal otherwise?
In a word, yes. It's a running joke in the book, eg children in Finland start reading at 12 months. Children in Finland eat all their vegetables, etc. It's intended to poke fun at fretful American parents and the things they say to motivate children. I've gotten more outrage over this one sentence from Finns than anything else in the book. :/