The Usual Complexity Assumptions

"The usual complexity assumptions" is a term referring to a set of conventions employed so often in complexity calculations that they are usually taken for granted.

For example, one often refers to bubble sort as an O(n²)-time operation when, in fact, it is not: it merely requires O(n²) comparison operations. If we were to compare two integers, a and b, using (for example) a Turing machine, we would, in the general case, need to scan their bits in order to find a difference, so a single comparison may take as long as O(log(min(|a|,|b|))) time, rather than O(1).
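To make the hidden cost concrete, here is a minimal sketch (not from the original text) of a comparison that, like a Turing machine, inspects one bit at a time; the function name and step counter are illustrative only. The number of steps grows with the bit length of the operands, which is exactly what the O(1) assumption hides.

```python
def compare_bits(a: int, b: int) -> tuple[int, int]:
    """Compare two non-negative integers bit by bit, most significant first.

    Returns (sign, steps): sign is -1, 0 or 1 as in cmp(a, b), and steps
    counts how many bit positions were examined before a difference was
    found -- up to the full bit length when the numbers are equal or differ
    only in a low-order bit.
    """
    n = max(a.bit_length(), b.bit_length())
    steps = 0
    for i in range(n - 1, -1, -1):
        steps += 1
        abit = (a >> i) & 1
        bbit = (b >> i) & 1
        if abit != bbit:
            return (-1 if abit < bbit else 1, steps)
    return (0, steps)
```

Comparing numbers that agree on all high bits, such as 2**64 and 2**64 + 1, forces the loop to walk all 65 bit positions, whereas under the usual assumptions the whole comparison counts as a single O(1) operation.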

Similarly, we casually refer to storing real numbers and performing operations on them, when, in fact, it is impossible to store a full representation of a general real number on any computer or any Turing machine.
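As a small illustration (mine, not the original author's): real machines store a nearby rational approximation rather than the real number itself, which is why even simple decimal values misbehave.

```python
from fractions import Fraction

# 0.1 has no finite binary representation, so the stored double is a
# nearby rational with a power-of-two denominator, not 1/10 itself.
print(Fraction(0.1))        # 3602879701896397/36028797018963968
print(Fraction(0.1) == Fraction(1, 10))  # False

# The accumulated rounding error is observable in ordinary arithmetic.
print(0.1 + 0.2 == 0.3)     # False
```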

"The usual complexity assumptions" therefore refers to the most commonly used (and least commonly explicitly specified) computational model. It defines "integers", "reals" and "arrays" as abstract data types, and also defines an execution model that uses them. This, in turn, means that certain operations are assumed to be atomic O(1)-time operations.

The following are the O(1) operations assumed:

Regarding integers

Note that bitwise operations other than shifts are not considered O(1), unless explicitly stated.
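For instance, population count (counting set bits) must be built out of the permitted O(1) primitives. The sketch below (an illustration of mine, using only arithmetic and shift-equivalent operations) shows that the result costs one loop iteration per bit, i.e. O(log x) assumed-O(1) operations, not O(1).

```python
def popcount(x: int) -> tuple[int, int]:
    """Count the set bits of a non-negative integer using only operations
    the model treats as O(1): remainder by 2 (the low bit) and division
    by 2 (a right shift).

    Returns (count, ops), where ops is the number of loop iterations --
    one per bit, so the total work is O(log x) even though every single
    operation inside the loop is 'free' under the usual assumptions.
    """
    count = ops = 0
    while x:
        count += x % 2   # low bit, via arithmetic rather than bitwise AND
        x //= 2          # right shift by one
        ops += 1
    return count, ops
```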

Regarding real numbers

Regarding arrays

Note that the use of an integer/real variable is modeled as the use of an array of size 1. Integer/real constants (i.e. program literals) behave like variables, except that, unlike regular arrays/variables, they are initialized and cannot be overwritten with new values.

Regarding flow control

Though anybody who has done any complexity calculation has probably been exposed to this computational model, its assumptions are rarely stated explicitly. They are repeated here because riddles often stretch them ad absurdum, so it is useful to keep them in mind.
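One classic way the assumptions get stretched (my illustrative sketch, not an example from the original text): treat a single unbounded integer as a bitset and "sort" with only additions and shifts, both of which the model counts as O(1).

```python
def bitset_sort(values):
    """'Sort' distinct non-negative integers using one big integer as a
    bitset.  Every operation used -- addition and shifts -- is O(1) under
    the usual assumptions, so the whole routine 'costs' O(n + max(values)).

    The stretch: each shift and addition secretly manipulates up to
    max(values) bits, so the real bit-level work is far larger.
    """
    mask = 0
    for v in values:
        mask += 1 << v   # set bit v; addition suffices since values are distinct
    out = []
    i = 0
    while mask:
        if mask % 2:     # low bit set => i was in the input
            out.append(i)
        mask //= 2       # right shift by one
        i += 1
    return out
```

Under the model this looks like a linear-time sort; charging the true cost of operating on O(max)-bit integers dissolves the paradox.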