31251 Lec 4.2: Big Oh Notation

This video introduces big Oh notation and its relatives, big Omega and big Theta. To make big Oh more intuitive, we go over the reasons why it is defined the way it is. The first key property of big Oh is that constant multiplicative factors are ignored: for both an algorithm that takes n^2 steps and one that takes 2n^2 steps, we would say the running time is O(n^2). The reason is that many unknown constant factors creep in on the way from pseudocode to a program actually running on a particular architecture. Greater accuracy in counting the steps in the pseudocode does not necessarily lead to better predictions of the actual running time. By ignoring constant factors in our analysis, we accept this uncertainty.
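As a small illustration of this first property (a sketch, not from the video; the helper `is_bounded` and the witnesses c and n0 are our own choices), one can check the big-Oh definition empirically: f(n) = 2n^2 is O(n^2) because, for the witnesses c = 2 and n0 = 1, f(n) <= c * n^2 for all n >= n0.

```python
def is_bounded(f, g, c, n0, n_max=10_000):
    """Empirically check f(n) <= c * g(n) for all n0 <= n <= n_max.

    This only samples a finite range, so it can refute a claimed bound
    but not prove one; it is just a sanity check on the definition.
    """
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

# f(n) = 2n^2 is O(n^2): the constant factor 2 is absorbed into c.
print(is_bounded(lambda n: 2 * n * n, lambda n: n * n, c=2, n0=1))  # True
```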

The second key property of big Oh is that small-size effects are ignored: we only care about the running time as the input size goes to infinity. We again illustrate this on the "contains duplicate" problem, showing that the double-for-loop solution is actually faster than a hash-table-based solution on inputs of size up to 400. Once the input size reaches ~50,000, however, the hash-table-based solution blows the double for loop out of the water. Big Oh ignores small-size effects to focus on the question: which algorithm do you want to use when the input size becomes really large?
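The two solutions discussed can be sketched as follows (in Python; the video does not specify a language, and the function names here are our own). The double for loop does O(n^2) comparisons but with very low per-step cost, while the hash-set version does expected O(n) work with a larger constant factor, which is why the quadratic version can win on small inputs.

```python
def contains_duplicate_quadratic(a):
    """O(n^2) comparisons, but tiny constant factors per step."""
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            if a[i] == a[j]:
                return True
    return False

def contains_duplicate_hash(a):
    """Expected O(n): each membership test and insert is O(1) on average."""
    seen = set()
    for x in a:
        if x in seen:
            return True
        seen.add(x)
    return False
```

Timing both with, say, `timeit` on inputs of a few hundred elements versus tens of thousands reproduces the crossover described above.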