We only ever have finite conversations in real life, and strictly speaking big-O notation is defined in terms of arbitrarily large input sizes.
So any application of big-O notation to this would require some generalisation and abuse of notation. It's a bit hard to formally argue which abuse is The Right One.
I don't think the limit is really an issue here. Most CS textbooks define big-O with the n → ∞ behaviour baked into the definition, so this is more an issue of different people using different definitions than an issue of abuse of notation.
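For reference, the standard textbook formulation (my paraphrase, not a quote from any particular book) is roughly:

    f(n) = O(g(n))  iff  there exist constants c > 0 and n0 such that
    f(n) <= c * g(n) for all n >= n0

The "for all n >= n0" clause is where the arbitrarily-large-input behaviour gets baked in.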
This. I loved that string concatenation example! I still don't understand how people can be so against big-O notation when it's such a simple idea, with so many applications.
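For anyone who skipped the article: I'm assuming the example was the usual one, where repeated concatenation in a loop is quadratic while a single join is linear. A rough Python sketch of that idea:

    # Quadratic: each += generally copies the whole string built so far,
    # so the total work is about 1 + 2 + ... + n = O(n^2) character copies.
    # (CPython sometimes optimizes += in place, but you shouldn't rely on it.)
    def build_quadratic(parts):
        s = ""
        for p in parts:
            s += p
        return s

    # Linear: join measures everything once, allocates once, copies once.
    def build_linear(parts):
        return "".join(parts)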
I think, though, that your argument depends very directly on hardware implementation concerns, which is exactly the kind of thing that big O notation is meant to abstract away.
You've taken an informal and sloppy summary of big-O notation, isolated a single English expression, and decided that a similar English expression, used in a different context with an entirely different meaning, is an accurate approximation of the formal definition. It isn't.
I use the concept of Big-O notation, informally, almost every day. Almost every time I'm writing a new non-trivial method, I ask myself how it will scale to large values of n. Sure, I don't sit down and formally prove anything, and I rarely spend a long time thinking about it. But knowing whether the function I'm about to write is O(n), O(n^2), or O(2^n), and understanding the implications of each, is something I'd consider fundamentally important.
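To make that concrete (my own toy examples, not the parent's code):

    # O(n): a single pass over the input.
    def contains(xs, target):
        return any(x == target for x in xs)

    # O(n^2): nested loops over the same input.
    def has_duplicate(xs):
        return any(xs[i] == xs[j]
                   for i in range(len(xs))
                   for j in range(i + 1, len(xs)))

    # Exponential: naive recursion that re-solves overlapping subproblems.
    def slow_fib(n):
        return n if n < 2 else slow_fib(n - 1) + slow_fib(n - 2)

At n = 1,000 the first is instant, the second is about a million comparisons, and the third will effectively never finish.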
I think you (and others) are misunderstanding the purpose of big-O notation. It is used to discuss algorithms, not their implementations.
That (Big O as slang) sounds like a horrible situation.
It feels analogous to the widespread use of "exponentially" to mean "a lot" or "quickly", which is a really bad, silly habit. The difference is that few physicists or mathematicians misuse "exponentially" in casual conversation, whereas you are claiming that software people deliberately misuse "Big O". I'm not sure I believe you, but either way this seems regrettable.
You seem to be missing the entire point of this kind of question. Yes, you are providing an upper bound, but as you yourself admit, your answer is useless. The point of big-O here is to give the tightest reasonable bound on an algorithm's running time so that algorithms can be compared.
Your approach is basically interview trolling.
I don't use big o in interviews. That's more the domain of big data than the web slots I typically fill. I'm more interested in the depth of understanding they display in the techs they claim to know.
Big-O notation is often used with simplifying assumptions, such as treating the addition of two numbers as constant time. You can't physically build a machine where addition between any two numbers is constant time, but that doesn't matter.
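You can see the gap between the model and the hardware with arbitrary-precision integers; a quick, unscientific sketch:

    import timeit

    # Python ints are arbitrary precision, so adding two d-digit numbers takes
    # time proportional to d. The "addition is O(1)" in most analyses is a
    # modelling assumption (fixed-width machine words), not a physical fact.
    small = 10 ** 1_000        # ~1,000 digits
    big = 10 ** 1_000_000      # ~1,000,000 digits

    print(timeit.timeit(lambda: small + small, number=1000))
    print(timeit.timeit(lambda: big + big, number=1000))    # noticeably slower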
In over a dozen years as a professional web developer, I've never once needed to use Big O notation outside of an interview. Sounds like this guy doesn't know how to interview, knows that he doesn't know, but still doesn't want to admit it. No wonder he can't find anyone.
Big-O notation is hardly "math geek" stuff. I'd not call this comment ridiculous, because I might have misunderstood it, but catch me in my cups and ask me for my true opinion and maybe I'd say that.
Big-O notation is pretty basic, and very useful. Combined with a memorized table of powers of two (you don't need all of them - I know up to 2^20, and that has proven sufficient - just enough to guess log N for a given value), you have a good chance of being able to make quick calculations about time and space requirements. Which, since we don't have infinitely fast computers with infinite amounts of memory, often comes in handy.
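For example (my numbers, not the parent's): 2^20 ≈ 1,048,576, so log2 of a million is about 20, and a binary search over a million items needs about 20 comparisons. In Python:

    import math

    # 2^10 ~ 10^3, 2^20 ~ 10^6, 2^30 ~ 10^9 is enough to eyeball log2(N).
    n = 1_000_000
    print(math.log2(n))       # ~19.9, so binary search: ~20 comparisons
    print(n * math.log2(n))   # ~2e7 comparisons for an n log n sort -- trivial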
I don't think he formally uses big-O, but he understands whether something he is writing is linear (iterating over an array), exponential, or O(1).
People have to get rid of their big hard-on for Big-O; it's a useful concept that takes a couple of hours to learn. It isn't some difficult thing that only the true macho programmers can know. I wish it were traditionally covered in introductory programming books, in the 'optimization & profiling' chapter, and then we wouldn't be having big fights about it.