The Big O means less
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.
For most of my development career, code review has emphasized looking for unnecessary complexity in algorithms.
The Big O notation goes something like this:
O(1), O(N), O(N^2), O(2^N), etc.
On the whole, developers would focus on creating algorithms that performed as few operations as possible.
if (a == 0) return false;
That's an O(1).
for (i = 0; i < N; i++) DoSomething(i);
That would be O(N).
From there, you would look for things like nested for loops or unanticipated recursion.
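A nested loop over the same collection is the classic O(N^2) pattern a reviewer would flag. As a hedged sketch (the pairwise collision check and its names are made up for illustration):

```cpp
#include <cstddef>

// O(N^2): every unit is compared against every other unit.
// A reviewer would flag this and ask whether sorting or a
// spatial grid could cut the comparison count down.
int CountCollidingPairs(const int* positions, size_t count, int radius)
{
    int pairs = 0;
    for (size_t i = 0; i < count; ++i)
        for (size_t j = i + 1; j < count; ++j)
        {
            int d = positions[j] - positions[i];
            if (d <= radius && -d <= radius)   // |d| <= radius
                ++pairs;
        }
    return pairs;
}
```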
The game has changed
However, in the past several years, our CPUs have gotten so fast that big O is not necessarily the biggest issue. Instead, it's memory access.
Our CPUs are really fast. Memory, however, remains pretty slow to deal with. Unfortunately, most developers are not trained to think about the cost of reading and writing to memory. In fact, many developers are outright hostile to using "old school" methods like fixed arrays and pre-allocation in favor of more elegant solutions.
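As a minimal sketch of what those "old school" methods buy you (the struct and its names are hypothetical): allocate a working buffer once up front and reuse it every frame, instead of constructing a fresh container each time.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical per-frame workspace: memory is reserved once and
// reused every frame. clear() resets the size but keeps the
// capacity, so after warm-up no frame triggers a heap allocation.
struct ScratchBuffer
{
    std::vector<int> items;

    explicit ScratchBuffer(size_t expectedMax) { items.reserve(expectedMax); }

    void BeginFrame() { items.clear(); }  // size -> 0, capacity untouched
};
```

The same idea underlies fixed arrays: pay the allocation cost once, at a time you control, rather than scattered through the frame.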
When O(N) is far slower than O(N^2)
Imagine you have a vector of units in your game. In fact, let's say you have all the units in your game in one big vector, but you only want to deal with a small subset of them at a time because some operation on them is expensive.
I have 5,000 robots in the world. I want to do an "expensive" operation on the robots that the player can "see". Many engineers will try to create a vector of just the robots they can see so that they can avoid doing the "expensive" set of operations on the rest.
Thus they might do something like this (forgive my crappy syntax, I'm lazy):
for (i = 0; i < robots.size(); i++)
{
    if (robots[i].IsVisible())
        visibleRobots.push_back(robots[i]);
}
Seems straightforward, right? But it turns out that might be one of the most performance-draining pieces of code in your game. That's because allocating and deallocating memory, on modern CPUs, is incredibly expensive.
Reading and writing memory is often far more performance intensive than doing any number of mathematical operations to them.
In the above example, you might even find that it's faster to just do the "expensive" operations on all the robots, whether they're on screen or not, rather than allocating a new vector. Or, at the very least, filtering out the robots with a series of inelegant if/then checks, which many new developers are loath to do because it looks hackish.
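Here's a hedged sketch of the two approaches side by side (the `Robot` type and its fields are invented for illustration): the first builds a temporary vector of visible robots each call, the second just branches in place and never allocates.

```cpp
#include <vector>

struct Robot { bool visible; int health; };  // hypothetical unit type

// Approach 1: gather the visible robots into a temporary vector,
// then operate on the subset. The std::vector here may heap-allocate
// (and then free) on every single call.
void UpdateWithTempVector(std::vector<Robot>& robots)
{
    std::vector<Robot*> visible;             // fresh allocation each call
    for (Robot& r : robots)
        if (r.visible)
            visible.push_back(&r);
    for (Robot* r : visible)
        r->health += 1;                      // the "expensive" operation
}

// Approach 2: the "inelegant" in-place check. No allocation at all,
// and the robots are walked in contiguous, cache-friendly order.
void UpdateInPlace(std::vector<Robot>& robots)
{
    for (Robot& r : robots)
        if (r.visible)
            r.health += 1;                   // the "expensive" operation
}
```

Both produce the same result; the difference is purely in how they touch memory, which is exactly the cost big O notation doesn't see.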
CPUs vs. Memory
The main point is that, increasingly, CPU power is cheap while memory management is expensive. Avoid allocating and deallocating memory whenever possible.