# Big O, how do you calculate/approximate it?

But as they say, don't overdo it: premature optimization is the root of all evil, and optimization with no justified reason deserves that name as well.

```c
int sum(int* data, int N) {
    int result = 0;               // 1
    for (int i = 0; i < N; i++) { // 2
        result += data[i];        // 3
    }
    return result;                // 4
}
```

```
Number_Of_Steps = f(N)
```

```
Number_Of_Steps = f(data.length)
```

```
f(N) = C + ??? + C
```

```
f(N) = C + (C + C + ... + C) + C = C + N * C + C
```

1. Take away all the constants `C`.
2. From `f()` get the polynomial in its standard form.
3. Take the terms of the polynomial and sort them by their rate of growth.
4. Keep the one that grows bigger when `N` approaches infinity.

```
f(N) = 2 * C * N ^ 0 + 1 * C * N ^ 1
```

```
f(N) = 1 + N ^ 1
```

```
O(N)
```

```c
for (i = 0; i < 2 * n; i += 2) {  // 1
    for (j = n; j > i; j--) {     // 2
        foo();                    // 3
    }
}
```

```
f(N) = Summation(i from 1 to 2 * N / 2)( ... ) =
     = Summation(i from 1 to N)( ... )
```

```
f(N) = Summation(i from 1 to N)( Summation(j = ???)( ) )
```

```
f(N) = Summation(i from 1 to N)( Summation(j = 1 to (N - (i - 1) * 2))( C ) )
```

(We assume `foo()` is `O(1)` and takes `C` steps.)

```
f(N) = Summation(i from 1 to N / 2)( Summation(j = 1 to (N - (i - 1) * 2)) * ( C ) ) +
       Summation(i from 1 to N / 2)( C )
```

1. Summation(w from 1 to N)( C ) = N * C
2. Summation(w from 1 to N)( A (+/-) B ) = Summation(w from 1 to N)( A ) (+/-) Summation(w from 1 to N)( B )
3. Summation(w from 1 to N)( w * C ) = C * Summation(w from 1 to N)( w ) (C is a constant, independent of `w`)
4. Summation(w from 1 to N)( w ) = (N * (N + 1)) / 2

```
f(N) = Summation(i from 1 to N / 2)( (N - (i - 1) * 2) * ( C ) ) + (N / 2)( C )
f(N) = C * Summation(i from 1 to N / 2)( (N - (i - 1) * 2)) + (N / 2)( C )
f(N) = C * (Summation(i from 1 to N / 2)( N ) - Summation(i from 1 to N / 2)( (i - 1) * 2)) + (N / 2)( C )
f(N) = C * (( N ^ 2 / 2 ) - 2 * Summation(i from 1 to N / 2)( i - 1 )) + (N / 2)( C )

=> Summation(i from 1 to N / 2)( i - 1 ) = Summation(i from 1 to N / 2 - 1)( i )

f(N) = C * (( N ^ 2 / 2 ) - 2 * Summation(i from 1 to N / 2 - 1)( i )) + (N / 2)( C )
f(N) = C * (( N ^ 2 / 2 ) - 2 * ( (N / 2 - 1) * (N / 2 - 1 + 1) / 2) ) + (N / 2)( C )

=> (N / 2 - 1) * (N / 2 - 1 + 1) / 2 =
   (N / 2 - 1) * (N / 2) / 2 =
   ((N ^ 2 / 4) - (N / 2)) / 2 =
   (N ^ 2 / 8) - (N / 4)

f(N) = C * (( N ^ 2 / 2 ) - 2 * ( (N ^ 2 / 8) - (N / 4) )) + (N / 2)( C )
f(N) = C * (( N ^ 2 / 2 ) - ( (N ^ 2 / 4) - (N / 2) )) + (N / 2)( C )
f(N) = C * (( N ^ 2 / 2 ) - (N ^ 2 / 4) + (N / 2)) + (N / 2)( C )
f(N) = C * ( N ^ 2 / 4 ) + C * (N / 2) + C * (N / 2)
f(N) = C * ( N ^ 2 / 4 ) + 2 * C * (N / 2)
f(N) = C * ( N ^ 2 / 4 ) + C * N
f(N) = C * 1/4 * N ^ 2 + C * N
```

The BigOh is:

```
O(N²)
```

```c
int array[n];
```

```c
x = array[0];
```

```c
for (int i = 0; i < n; i++) {
    if (array[i] == numToFind) {
        return i;
    }
}
```

```c
for (int i = 0; i < n; i++) {
    for (int j = i; j < n; j++) {
        array[j] += 2;
    }
}
```

O(1) – determining if a number is even or odd; using a constant-size lookup table or hash table

O(log n) – finding an item in a sorted array with binary search

O(n) – finding an item in an unsorted list; adding two n-digit numbers

O(n²) – multiplying two n-digit numbers by a simple algorithm; adding two n×n matrices; bubble sort or insertion sort

O(n³) – multiplying two n×n matrices by a simple algorithm

O(cⁿ) – finding the (exact) solution to the traveling salesman problem using dynamic programming; determining if two logical statements are equivalent using brute force

O(n!) – solving the traveling salesman problem via brute-force search

O(nⁿ) – often used instead of O(n!) to derive simpler formulas for asymptotic complexity

• The worst case (usually the simplest to figure out, though not always very meaningful)
• The average case (usually much harder to figure out…)

```scheme
(define (fac n)
  (if (= n 0)
      1
      (* n (fac (- n 1)))))
```

which recursively computes the factorial of a given number.

1 * (n - 1) = O(n)

O((n / 2 + 1) * (n / 2)) = O(n² / 4 + n / 2) = O(n² / 4) = O(n²)

O(log N) < O(N) < O(N log N) < O(N²) < O(Nᵏ) < O(eᴺ) < O(N!)

1. Arithmetic operations (e.g., + or %).
2. Logical operations (e.g., &&).
3. Comparison operations (e.g., <=).
4. Structure-access operations (e.g., array indexing like A[i], or following a pointer with the -> operator).
5. Simple assignments, such as copying a value into a variable.
6. Calls to library functions (e.g., scanf, printf).

1. Assignment statements that do not involve function calls in their expressions.
2. Read statements.
3. Write statements that do not require function calls to evaluate arguments.
4. The jump statements break, continue, goto, and return expression, where expression does not contain a function call.

```c
for (i = 0; i < n - 1; i++) {
    small = i;
    for (j = i + 1; j < n; j++)
        if (A[j] < A[small])
            small = j;
    temp = A[small];
    A[small] = A[i];
    A[i] = temp;
}
```

```c
for (j = 0; j < n; j++)
    A[i][j] = 0;
```

```c
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        A[i][j] = 0;
```

```c
a = 0;
b = 1;
for (i = 0; i < n; i++) {
    tmp = b;
    b = a + b;
    a = tmp;
}
```

1 + 2 + 3 + … + n = n(n + 1) / 2 = O(n²)

```csharp
using System.Collections.Generic;

int nCmp = 0;
System.Random rnd = new System.Random();

// measure the number of comparisons required to sort a list of n integers
void DoTest(int n) {
    List<int> lst = new List<int>(n);
    for (int i = 0; i < n; i++)
        lst.Add(rnd.Next(0, 1000));

    // as we sort, keep track of the number of comparisons performed!
    nCmp = 0;
    lst.Sort(delegate (int a, int b) {
        nCmp++;
        return (a < b) ? -1 : ((a > b) ? 1 : 0);
    });

    System.Console.WriteLine("{0},{1}", n, nCmp);
}

// Perform the measurement for a variety of sample sizes.
// It would be prudent to check multiple random samples of each size,
// but this is OK for a quick sanity check.
for (int n = 0; n < 1000; n++)
    DoTest(n);
```

I don't know how to solve this programmatically, but the first thing people do is sample the algorithm for certain patterns in the number of operations done, say 4n² + 2n + 1. We have two rules:

1. If we have a sum of terms, the term with the largest growth rate is kept, with other terms omitted.
2. If we have a product of several factors, constant factors are omitted.

If we simplify f(x), where f(x) is the formula for the number of operations done (the 4n² + 2n + 1 explained above), we obtain the big-O value, O(n²) in this case. But this would have to account for Lagrange interpolation in the program, which may be hard to implement. And what if the real big-O value were O(2ⁿ), or we had something like O(xⁿ)? This algorithm probably wouldn't be programmable. But if someone proves me wrong, give me the code.

Great question!

If you're using Big O, you're talking about the worst case (more on what that means later). Additionally, there is capital theta for the average case and big omega for the best case.

Check out this site for a lovely formal definition of Big O: https://xlinux.nist.gov/dads/HTML/bigOnotation.html

f(n) = O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n.

Ok, so now what do we mean by "best-case" and "worst-case" complexities?

This is probably most clearly illustrated through examples. For example if we are using linear search to find a number in a sorted array then the worst case is when we decide to search for the last element of the array as this would take as many steps as there are items in the array. The best case would be when we search for the first element since we would be done after the first check.

The point of all these adjective-case complexities is that we're looking for a way to graph the amount of time a hypothetical program runs to completion in terms of the size of particular variables. However, for many algorithms you can argue that there is not a single time for a particular size of input. Notice that this contradicts the fundamental requirement of a function: any input should have no more than one output. So we come up with multiple functions to describe an algorithm's complexity. Now, even though searching an array of size n may take varying amounts of time depending on what you're looking for in the array and in proportion to n, we can create an informative description of the algorithm using the best-case, average-case, and worst-case classes.

Sorry this is so poorly written and lacks much technical information. But hopefully it'll make time complexity classes easier to think about. Once you become comfortable with these it becomes a simple matter of parsing through your program and looking for things like for-loops that depend on array sizes and reasoning based on your data structures what kind of input would result in trivial cases and what input would result in worst-cases.