> Can you think of a (non-contrived) example where automatic promotion to float is going to cause a non-trivial error when computing (say) a household budget?
Having your share of holiday costs come out as NaN is fiddlier than getting an exception at the point where you actually divided by zero.
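A minimal sketch of that failure mode (the budget numbers are made up, and NumPy floats are used because plain Python raises ZeroDivisionError even for floats): the division by zero only warns, and the NaN surfaces later, far from the actual mistake.

```python
import numpy as np

total_cost = np.float64(1200.0)
people = np.float64(0.0)       # oops: nobody has signed up yet

share = total_cost / people    # inf, with only a RuntimeWarning
deposit = share - total_cost   # still inf, still no error
remainder = share - share      # inf - inf == nan
print(remainder)               # nan surfaces here, far from the real bug
```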
> But is a bad prediction better than none at all? Depends on how much you rely on it. The more you rely on it, the more it matters that it is accurate.
Sometimes (in an iterative process, for example) all you need is a starting point, so a bad prediction can be better than none at all. But some iterative processes get stuck in infinite loops when fed a bad value, so a bad prediction can also be worse than none at all: you end up in a non-halting process.
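To illustrate both outcomes, here's a minimal sketch using Newton's method on the textbook function f(x) = x^3 - 2x + 2 (my choice of example, not from the thread): one starting point converges quickly, the other cycles between 0 and 1 forever unless an iteration cap cuts it off.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method; max_iter is the only thing standing between a
    bad starting point and a non-halting loop."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError(f"no convergence from x0={x0}")

f = lambda x: x**3 - 2*x + 2
fp = lambda x: 3*x**2 - 2

print(newton(f, fp, -2.0))  # converges to the real root near -1.7693
print(newton(f, fp, 0.0))   # cycles 0 -> 1 -> 0 -> ...; raises after max_iter
```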
> it's criminal that you can write 1/2 and get values nowhere near one-half.
That's sort of ridiculous. Would he be more OK with an output of 0.48? What about 0.51? Ironically, in the age of LLMs and non-deterministic output, maybe he would.
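For reference, Python 3 settled exactly this argument by splitting the two meanings into separate operators:

```python
print(1 / 2)    # 0.5 -- true division, never "nowhere near one-half"
print(1 // 2)   # 0   -- floor division, the C-style behaviour being criticized
```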
>> However these are not commutative operations between different sets of finetuned data.
You mean they're not additive, right? Each weight set is destructive of another. But each one would be deterministic up to that point, yes? When you restore that state, the responses to an identical series of questions would always be the same?
> Of course, the Python function named 'min' could very well implement an infimum in this case, and it would be OK.
Using min(∅) = +inf and max(∅) = -inf would violate the otherwise valid constraint that min(x) ≤ max(x). I'd be a little uncomfortable with it for that reason.
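Python's min/max actually let you opt into that convention explicitly via the `default` argument (available since 3.4), which makes the broken invariant easy to see:

```python
inf = float("inf")

# min()/max() raise ValueError on an empty sequence by default...
# min([])   # ValueError: min() arg is an empty sequence

# ...but `default` lets you choose the infimum/supremum convention:
lo = min([], default=inf)    # +inf, the infimum of the empty set
hi = max([], default=-inf)   # -inf, the supremum of the empty set

assert lo > hi               # the usual min(x) <= max(x) invariant breaks
```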
> This is why, for D, I was determined to use DFA to catch 100% of the positives with 0% negatives. I knew this could be done because my compilers were using DFA in the optimization pass.
Is this really true? I thought this was impossible due to Rice's theorem: any non-trivial semantic property of programs is undecidable, so an analysis can be sound in one direction (no false negatives, say) only by tolerating false positives; it can't be exact on both counts.
> The nuance here is that we have to somehow be able to distinguish 0.5 - eps from 0.5 for very small epsilon.
If you ignore the problem, then the problem indeed goes away. The need to distinguish a very small epsilon exists because of the continuity assumption, and because of that same assumption you can't really solve it either.
> Then we can decide to go left if it tells us x < 0.5, and right if it tells us unsure or that x >= 0.5.
Now you've just moved the problem to deciding at which point you are unsure. As long as there is a decision to make, the issue persists; it's only if you always go left (or always go right) that the issue doesn't exist.
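A sketch of the point (the threshold and tolerance values are illustrative, not from the thread): the three-way comparison below still has a hard cutoff, just relocated from 0.5 to threshold - tol.

```python
def decide(x, threshold=0.5, tol=1e-9):
    """Three-way comparison: 'left', 'unsure', or 'right'."""
    if x < threshold - tol:
        return "left"
    if x < threshold + tol:
        return "unsure"
    return "right"

# The discontinuity hasn't gone away; it has moved to threshold - tol:
print(decide(0.5 - 2e-9))    # left
print(decide(0.5 - 1e-10))   # unsure -- crossing threshold - tol still flips the answer
```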
> Instead of actually trying to understand the source of the error, I just sped-run tweaking different values +/- 1 until I got the expected result.
That's perfectly valid when one knows a specific step or result must be positive or negative.
Not that much different from dimensional analysis, which speedruns you to the proper formula (at the cost of skimming over dimensionless constants).

Similarly, an interviewer was not impressed when they cut me short and started walking me through some step, and I pointed out that their result was obviously wrong because it was dimensionally inconsistent; if they hadn't cut me off, I'd have shown that the formula must be something like baz*foo/bar^2, and then we'd just have to figure out the constants.
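For anyone who hasn't used it as a debugging tool: a minimal sketch with the third-party `pint` units library (the kinetic-energy example is mine, not the interviewer's), showing a dimensionally wrong formula announcing itself before any constants are known.

```python
import pint  # third-party units library: pip install pint

ureg = pint.UnitRegistry()

m = 2.0 * ureg.kilogram
v = 3.0 * ureg.meter / ureg.second

wrong = m * v            # momentum-shaped, not energy-shaped
right = 0.5 * m * v**2   # kg*m^2/s^2, i.e. joules

print(wrong.dimensionality)   # [length] * [mass] / [time] -- can't be an energy
print(right.to(ureg.joule))   # 9.0 joule
```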
> you always just take the (positive) index modulo the size of the area.
That's something I'd like in a bunch of languages - a real modulo operator that always returns a value between 0 and n, even for negative inputs, rather than a remainder operator that's advertised as a modulo operator. Grrrrr!!!!!
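Python is one of the languages that does have a flooring `%`; the sketch below contrasts it with C-style remainder semantics and shows the usual portable workaround:

```python
print(-3 % 5)   # 2: Python's % is a flooring modulo, always in [0, n) for n > 0

# In C, C++, Java, Rust, etc., % truncates toward zero, so -3 % 5 == -3.
# The standard workaround in those languages, transcribed here:
def positive_mod(a, n):
    """a mod n in [0, n) for n > 0, even when a is negative."""
    return ((a % n) + n) % n

print(positive_mod(-3, 5))  # 2
```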
> Instead, the motivation he suggested was that you could run a computation in all the rounding modes, and if the results in all of them were reasonably close, you could be fairly certain that the result was accurate.
I actually thought that was the use case I was describing, though I would expect round-positive and round-negative to be enough. Don't the other rounding modes yield results within those bounds?
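Python doesn't expose the hardware rounding modes for binary floats, but the `decimal` module's contexts can sketch the same idea (the toy computation 1/3 * 3 and the deliberately low precision are my choices):

```python
from decimal import Decimal, localcontext, ROUND_FLOOR, ROUND_CEILING

def run(rounding):
    with localcontext() as ctx:
        ctx.prec = 6              # low precision so rounding error is visible
        ctx.rounding = rounding
        return (Decimal(1) / Decimal(3)) * 3

lo, hi = run(ROUND_FLOOR), run(ROUND_CEILING)
print(lo, hi)                     # 0.999999 1.00001 -- close, so trust the result
```

One caveat on the "within those bounds" question: once a computation subtracts intermediate results, rounding every operation in a single direction no longer strictly brackets the exact answer (interval arithmetic handles this by swapping which bound it rounds for each operand), so agreement across modes is a smoke test rather than a guarantee.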
Well, probs[x] can go negative when you decrease it, if I'm not mistaken.