I use the concept of Big-O notation, informally, almost every day. Almost every time I'm writing a new non-trivial method I ask myself how it will scale to large values of n. Sure, I don't sit down and formally prove anything, and I rarely even spend a long time thinking about it. But knowing whether the function I'm about to write is O(n), O(n^2) or O(2^n), and understanding the implications of each, is something I'd consider fundamentally important.
The idea behind Big-O notation is how the time generally varies as you increase the number of items for a given algorithm on a given data structure/data set. That is, if you plot a graph `y=f(x)` where `f(x)` is the time taken to perform the operation and `x` is the number of items you are performing it on, you can match the resulting curve to a polynomial or other mathematical expression; Big-O is the dominant term in that expression, without any associated scale factors (e.g. for 6x^3 + 2x^2 + 7 you have an O(n^3) algorithm).
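To make that concrete, here's a quick numerical sketch of my own (just an illustration, nothing formal):

```python
# The n^3 term dominates f(n) = 6n^3 + 2n^2 + 7 as n grows, which is why the
# whole thing is O(n^3). The ratio f(n) / n^3 settles near the constant
# factor 6, which Big-O deliberately throws away.
def f(n):
    return 6 * n**3 + 2 * n**2 + 7

for n in (10, 1_000, 100_000):
    print(n, f(n) / n**3)   # 6.207, 6.002..., 6.00002... -> approaching 6
```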
Sure, you can choose a large constant N such that, for a given n, the O(n^2) algorithm is numerically as cheap as or cheaper than the O(1) one, but as you vary n, the O(1) cost should approximate a flat `y=N` line, while the O(n^2) cost should approximate a parabola and will land both below and above N as n varies.
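A toy illustration of that crossover, with completely made-up cost constants:

```python
# Made-up cost models, purely for illustration: a constant-time operation with
# a large fixed cost vs. a quadratic one with a tiny constant factor.
def cost_constant(n):
    return 10_000          # O(1): a flat line, regardless of n

def cost_quadratic(n):
    return 0.01 * n * n    # O(n^2): cheap for small n, a parabola for large n

for n in (100, 1_000, 10_000):
    print(n, cost_constant(n), cost_quadratic(n))
# at n=100 the quadratic is 100x cheaper, at n=1,000 they tie,
# and by n=10,000 the quadratic is 100x more expensive
```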
Big O notation doesn't come up explicitly in conversation often, but the principles come up constantly. You need a fundamental understanding of the time complexity of algorithms to write good software of non-trivial complexity in most domains.
Big O is important. I actually have to frequently understand whether my function runs in logarithmic or linear or quadratic or exponential time relative to some value.
I can see an issue if you're asking for a weird big O which really isn't useful in practice, like "oh, we were expecting this weird algorithm which is O(log log n), not something O(log n)", or having to calculate the big O of one of those weird algorithms.
Also, Big-O notation is a theoretical construct for thinking about complexity that works well for academic proofs but glosses over many real-world considerations. Those ignored constant factors can be large, and datasets are not always large enough to amortize their cost.
You don't necessarily need to be able to sit down and formally write out the exact amortized big-O complexity (or little-o, big/little omega, or theta), but equally it's good to have some idea of how something would scale.
If you have a good idea of how things scale, being able to express exactly how they scale with succinct and clear notation is useful. Quite useful in fact.
That's why formal notation exists, because it is handy. Not because there is an eternal, global conspiracy among academics to keep up useless habits just to show off.
I worry about Big-O almost every day. I run an analytics startup and scaling to huge datasets is important. Sure, I could throw hardware at the problem, but even that has its problems and bottlenecks and would also inflate cost. Instead, I carefully handpick the algorithms used (where possible) to have reasonable time/space complexities for the tasks I want to perform. Not everything needs to be carefully tuned, of course, but for the bits that do, Big-O is a useful tool.
Before this, I've used Big-O when writing some hobby game engine code to make sure that my algorithms scaled nicely as the number of game objects increased.
I've also used Big-O to compare algorithms in back-end web development. Usually I didn't care, but there were a few cases where it mattered (one specific one I remember was sending out notification emails based on complex criteria, where the operation I was counting for the O(n) was the number of database queries).
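It was roughly this shape of difference (a hypothetical sketch; the table, column and function names are made up, and sqlite stands in for the real database):

```python
import sqlite3

def fetch_recipients_naive(conn, user_ids):
    # n round trips: one query per user (the classic "N+1 queries" shape)
    return [conn.execute("SELECT email FROM users WHERE id = ?", (uid,)).fetchone()
            for uid in user_ids]

def fetch_recipients_batched(conn, user_ids):
    # 1 round trip: fetch every recipient in a single query
    placeholders = ",".join("?" * len(user_ids))
    return conn.execute(
        f"SELECT email FROM users WHERE id IN ({placeholders})", user_ids).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(i, f"user{i}@example.com") for i in range(100)])
ids = list(range(100))
print(len(fetch_recipients_naive(conn, ids)), len(fetch_recipients_batched(conn, ids)))
```

Both versions are "linear" in CPU terms; the difference is whether the number of round trips grows with n.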
Big-O notation is hardly "math geek" stuff. I'd not call this comment ridiculous, because I might have misunderstood it, but catch me in my cups and ask me for my true opinion and maybe I'd say that.
Big-O notation is pretty basic, and very useful. Combined with a memorized table of powers of two (you don't need all of them - I know up to 2^20, and this has proven sufficient - just enough so you can guess log N for a given value) you have a good chance at being able to make quick calculations about time and space requirements for things. Which, since we don't have infinitely fast computers with infinite amounts of memory, often comes in handy.
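A quick sanity check of that trick (nothing fancy, just log2 of a few round numbers):

```python
import math

# 2^10 is about a thousand, 2^20 about a million, 2^30 about a billion,
# so log2 of a real-world N can be eyeballed to within a step or two.
for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"N = {n:>13,}   log2(N) ~ {math.log2(n):.1f}")
# e.g. a binary search over a million items takes about 20 comparisons
```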
I think this is just it, though. You don't necessarily need to be able to sit down and formally write out the exact amortized big-O complexity (or little-o, big/little omega, or theta), but equally it's good to have some idea of how something would scale.
I feel like self-taught developers who are serious just learn this by intuition because, frankly, if you're writing software where it matters then it very quickly becomes an obvious concern. If you're self-taught and it doesn't matter, then it doesn't matter!
With a formal background you may or may not use it, but I'd say the only difference is that knowing the formal notation makes talking about it with other programmers who also know that notation easier. Even then, it's not like algorithmic complexity is (at its heart) a particularly difficult concept when directly applied to a project. I always found it much harder as an abstract idea than when working with a specific algorithm.
You don't need to know how to prove a big-O to optimize an algorithm. Anyone who can think about what their code is doing will have at least some sense of how much it is doing.
Honestly, I don't think calculating the O(f(n)) is ever worth it outside of academia. Intuition is only slightly worse, and it is just so much time-cheaper.
I don't think he formally uses big-O, but he understands whether something he is writing is linear (iterating over an array), exponential, or O(1).
People have to get rid of their big hard-on for Big-O, a useful concept that takes a couple of hours to learn. It isn't a difficult thing that only the true macho programmers can know. I wish it were traditionally covered in starting programming books in the 'optimization & profiling' chapter; then we wouldn't be having big fights about it.
I guess I'm not buying that Big-O is a necessary requirement to understand how fast or slow an algorithm will be. It could be that someone understands and uses the process behind it without being at all familiar with the notation.
You don't "use" big-O, it's just a way of writing down different classes of (mathematical) functions; hence, "big-O-notation".
At all times when writing software, the engineer should know a) the time complexity of the code they are trying to write, b) the memory complexity of the program and, in many cases (slightly off-topic), c) the approximate times it takes to do certain simple tasks (this may be memory access, simple disk/network IO, database queries, etc.)
This is useful because it protects you from writing programs that "perform" unexpectedly badly (I'm confused why anyone would expect them to "perform" without thinking about it).
It may sound like a lot of work, but thankfully most things you program are really constructed from simpler things that have well-known complexities (simple loops, sorting, hashing, stuff like that).
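For example, a throwaway sketch of how the pieces compose; the most expensive piece sets the overall complexity:

```python
# The loop over the input and the hash-set checks are O(n) overall, the sort
# is O(n log n), so the sort dominates and the whole routine is O(n log n).
def dedupe_and_rank(scores):
    seen = set()
    unique = []
    for s in scores:             # simple loop: O(n) iterations
        if s not in seen:        # hashing: O(1) expected per lookup/insert
            seen.add(s)
            unique.append(s)
    unique.sort(reverse=True)    # comparison sort: O(n log n), the dominant term
    return unique

print(dedupe_and_rank([3, 1, 3, 7, 1, 9]))  # [9, 7, 3, 1]
```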
At our company (pretty straight-forward social web-app), we have surprisingly many pieces of code where the trivial implementation wouldn't work. We have a graph of different users and relations and want to maintain a transitive closure for each type of edge while also maintaining a bunch of indexes.
Even on the client side we have simple things, such as an autocomplete box for usernames. With many users, we can't just keep an array of all existing users and iterate through it on every keystroke. At that point you have many options to choose from (e.g. have an index on the server and request the filtered list after a short timeout, or build a simple index (search tree) on the client and use that for searching, roughly like the sketch below), and you couldn't make a decision without knowing about runtime complexity, constant time factors and memory usage.
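A minimal sketch of that client-side index option (illustrative names; a sorted list plus binary search stands in for a proper search tree):

```python
import bisect

class UsernameIndex:
    def __init__(self, usernames):
        # build once: O(n log n)
        self.sorted_names = sorted(name.lower() for name in usernames)

    def complete(self, prefix, limit=10):
        # per keystroke: O(log n) to find the prefix, plus O(k) for k matches,
        # instead of scanning the whole array every time
        prefix = prefix.lower()
        start = bisect.bisect_left(self.sorted_names, prefix)
        matches = []
        for name in self.sorted_names[start:start + limit]:
            if not name.startswith(prefix):
                break
            matches.append(name)
        return matches

index = UsernameIndex(["alice", "albert", "bob", "alina", "carol"])
print(index.complete("al"))  # ['albert', 'alice', 'alina']
```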
I originally wrote O(n^4) vs. O(n^2), which is something that actually happened to me two days ago, but thought I'd go more basic. I should've gone with a real-world example.
Oh my, another handwaving article about Big O Notation. Why does everyone and their mother feel the need to explain this thing in "plain English"?
"Big O notation seeks to describe the relative complexity of an algorithm [...]"
Relative to what? To the moon and the stars?
Big O is a notation to describe the growth rate of functions.
Therefore, you can also use it to describe the memory usage of an algorithm. Even if the author of the article thinks otherwise.
And so on and so forth.
Seriously, I understand the wish to explain this thing clearly and simply. But what is so hard about a clear and concise description using maths and some plots?
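The usual textbook definition fits on one line (my wording, not the article's):

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \in \mathbb{N}:\ |f(n)| \le c \cdot g(n) \quad \text{for all } n \ge n_0
```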
Or, in John McCarthy's words: "He who refuses to do arithmetic is doomed to talk nonsense."