> But if you can do it for dead-simple programs running on hardware you understand very well, you can do it for complex software as well.
Not necessarily when you're bounded by reality and finite amounts of time and energy. Just because you can count to 2^8 doesn't mean you can count to 2^128.
> It will happily execute this algorithm. For large numbers, it is slightly off on the arithmetic. When I asked it to double check, it did so using Python code. After that, it kept using Python code to perform the math. It was also able to reason intelligently about different outcomes if always picking a (or b) given different starting points.
Notice that you had to spot the error and had to prompt it to double-check. Lots of complicated things are going on here. Many (most?) humans will fail somewhere along this trajectory.
Did it double check the Python code to make sure it is correct (not just in the sense that it is valid, executable code, but that it is the correct check in the first place)? Or did you double check that its modified algorithm is correct? Fool me once and all that…
Upon reflection it appears as if you have a heuristic (algorithm?) that leverages logic, awareness, critical thinking, experience, a goal in mind, intuition, etc., to push towards better results.
“It was able to reason intelligently” ascribes qualities that I am skeptical are reasonable to attribute to this very narrow domain - what’s an example where it showed intelligent reasoning capabilities?
> You can write a perfect function that adds two 32 bit integers.
How? The problem of adding two 32-bit integers is itself imperfect, since you may at some point have big integers to sum, so any solution is inherently flawed too.
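For concreteness, here is a minimal Python sketch of what the quoted claim presumably means: a function whose correctness is relative to an explicit 32-bit contract (the name and assertions are illustrative, not from either comment):

```python
def add_i32(a: int, b: int) -> int:
    """Exact sum of two signed 32-bit integers.

    The contract is the point: given two inputs that each fit in 32
    bits, the sum always fits in 33 bits, so it can be returned
    exactly. The function is "perfect" relative to that contract;
    inputs outside the domain are a different specification.
    """
    assert -2**31 <= a < 2**31, "a does not fit in 32 bits"
    assert -2**31 <= b < 2**31, "b does not fit in 32 bits"
    return a + b
```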
> We trap on division because it is objectively a programmer error that has no obvious right answer.
Defining (2^31 - 1) + 1 = -2^31 is no more of an "obvious right answer" than defining N/0 = 42 for all N. It's just one that we computer engineers have been trained to accept because it is occasionally useful.
But it's still madness, and it discourages us from using tools that emit more correct code.
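To make the two semantics concrete, here is a Python sketch (the function names are mine) contrasting the wraparound that two's-complement hardware gives us with the trap-on-overflow behavior this comment argues for:

```python
def add_i32_wrapping(a: int, b: int) -> int:
    # What two's-complement hardware does: (2**31 - 1) + 1 wraps to -2**31.
    s = (a + b) & 0xFFFFFFFF            # wrap modulo 2**32
    return s - 2**32 if s >= 2**31 else s  # reinterpret as signed

def add_i32_checked(a: int, b: int) -> int:
    # The trap-on-overflow alternative: fail loudly at the point of error.
    s = a + b
    if not -2**31 <= s < 2**31:
        raise OverflowError(f"{a} + {b} does not fit in 32 bits")
    return s

print(add_i32_wrapping(2**31 - 1, 1))   # -2147483648: defined, rarely intended
try:
    add_i32_checked(2**31 - 1, 1)
except OverflowError as e:
    print(e)                            # caught where the overflow happened
```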
> A consistently not taken branch has very, very close to 0 overhead in a modern processor.
Except the half a dozen or so bytes it consumes in the cache.
Nevertheless, I concede that others have thought about this a lot more than I have and done so with hard data in front of them.
> Can you think of a (non-contrived) example where automatic promotion to float is going to cause a non-trivial error when computing (say) a household budget?
Having your share of holiday costs come out as NaN is fiddlier than getting an exception at the point where you actually divided by zero.
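A small Python illustration of the debugging-distance point (the holiday-budget numbers are invented; numpy appears only because it follows IEEE-754 quiet-NaN semantics, while plain Python floats raise at the division site):

```python
import numpy as np

costs = [250.0, 180.0, 0.0]
people = 0  # nobody ended up coming on this leg of the trip

# IEEE-754 semantics (numpy's default): the bad division happens here,
# but you only notice later when a NaN surfaces in a downstream total.
per_head = np.array(costs) / people    # RuntimeWarning; [inf inf nan]
print(per_head.sum())                  # nan, far from the original mistake

# Plain Python floats trap at the division site instead:
try:
    per_head = [c / people for c in costs]
except ZeroDivisionError:
    print("caught exactly where the division by zero happened")
```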
> it's criminal that you can write 1/2 and get values nowhere near one-half.
That's sort of ridiculous. Would he be more OK with the output 0.48? What about 0.51? Ironically, in the age of LLMs and non-deterministic output, maybe he would.
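For context, the quoted complaint is presumably about truncating integer division rather than rounding error; a quick Python sketch of the distinction:

```python
from fractions import Fraction

print(1 / 2)           # Python 3: 0.5 (true division)
print(1 // 2)          # 0: floor division, what C (and Python 2) gives for 1/2
print(Fraction(1, 2))  # exactly 1/2, for when 0.5 itself isn't good enough
```

"Nowhere near one-half" fits 0, not 0.48.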
If his solution doesn't work with arbitrary number-like objects (matrices at the very least), doesn't support voice input, can't send results via email and can't seamlessly resume the calculations after a hardware failure, it's far from overengineered.
> The way black box composition is done in modern software, your n=100 code (say, a component) gets reused into a another thing somewhere above, and now you're being iterated through m=100 times. Oops, now n=10k
That doesn't seem quite right: reusing the O(n^2) component m = 100 times costs 100 * 100^2 = 10^6 operations, which is far less than the 10,000^2 = 10^8 you'd pay if n itself had grown to 10k.
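A toy Python sketch of the two operation counts (quadratic_work is a made-up stand-in for the O(n^2) component):

```python
def quadratic_work(n: int) -> int:
    # Stand-in for an O(n^2) component: returns its operation count.
    return n * n

m, n = 100, 100
print(m * quadratic_work(n))    # 1_000_000: the component reused m times
print(quadratic_work(m * n))    # 100_000_000: what an actual n = 10k would cost
```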
> If there is one correct way, then software writers are fungible
I don't believe that follows.
For example, there is one 'correct way' to unscramble a Rubik's cube, but humans unscramble them in a much more roundabout manner.
Just because there is one correct way to do it doesn't imply that all Rubik's cube solvers are fungible.
This is because of the very high cognitive load of finding the correct way.
Math, especially number theory, is full of conjectures that are easy to state but take hundreds of years to resolve, and at its base computer science is math.
>>> Problem: Programmers with negative productivity cannot be represented on the same log scale.
This is similar to the problem of price-to-earnings ratio. The ratio goes asymptotic as earnings goes through zero. It would be better to quote earnings-to-price ratio. Another screwy reciprocal unit is miles per gallon for cars.
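A quick Python illustration of why the reciprocal is better behaved (the prices and earnings are invented):

```python
price = 100.0
for earnings in (10.0, 1.0, 0.1, -0.1, -1.0):
    pe = price / earnings   # explodes toward +/- infinity near zero earnings
    ep = earnings / price   # passes smoothly through 0 on the same data
    print(f"E={earnings:5.1f}  P/E={pe:8.1f}  E/P={ep:7.3f}")
```

The same trick fixes fuel economy: gallons per mile (or per 100 miles) is the quantity that adds and averages sensibly across cars, where miles per gallon does not.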
Software is a fine counterexample. It's made of bits, which cost nothing to create, and the value is in the order of the bits, not the bits themselves.
> Off: floating point numbers can be used to store integer values, so equality comparison might be perfectly valid in some cases.
Yeah, but then you're having to learn all the special cases for when it silently gives wrong answers, and hope to hell that you didn't miss any.
Much better to have consistency and behave the same way all the time, than to optimise for 3 keystrokes and introduce all sorts of special exceptions that the programmer must memorise.
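A few Python lines showing where the "integer values compare fine" special case silently runs out:

```python
# Float equality on integer values works, but only up to 2**53:
print(float(2**53) == 2**53)          # True: exactly representable
print(float(2**53 + 1) == 2**53 + 1)  # False: rounds to 2**53, silently
# And the general case stays treacherous:
print(0.1 + 0.2 == 0.3)               # False
```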
I'd take that a step further: if software work can be reduced to a positive integer, you're doing it wrong.