Big O Exercises and Solutions

March 27, 2018 | Author: Alhaji Baturey | Category: Time Complexity, Logarithm, Integer (Computer Science), Control Flow, Theoretical Computer Science






1 Exercises and Solutions

Most of the exercises below have solutions, but you should try to solve them first. Each subsection with solutions follows the corresponding subsection with exercises.

1.1 Time complexity and Big-Oh notation: exercises

1. A sorting method with "Big-Oh" complexity O(n log n) spends exactly 1 millisecond to sort 1,000 data items. Assuming that the time T(n) of sorting n items is directly proportional to n log n, that is, T(n) = c n log n, derive a formula for T(n) given the time T(N) for sorting N items, and estimate how long this method will take to sort 1,000,000 items.

2. A quadratic algorithm with processing time T(n) = c n^2 spends T(N) seconds processing N data items. How much time will be spent processing n = 5,000 data items, assuming that N = 100 and T(N) = 1 ms?

3. An algorithm with time complexity O(f(n)) and processing time T(n) = c f(n), where f(n) is a known function of n, spends 10 seconds to process 1,000 data items. How much time will be spent to process 100,000 data items if f(n) = n and if f(n) = n^3?

4. Assume that each of the expressions below gives the processing time T(n) spent by an algorithm for solving a problem of size n. Select the dominant term(s) having the steepest increase in n and specify the lowest Big-Oh complexity of each algorithm.

   Expression                        Dominant term(s)   O(...)
   5 + 0.001n^3 + 0.025n
   500n + 100n^1.5 + 50n log10 n
   0.3n + 5n^1.5 + 2.5n^1.75
   n^2 log2 n + n(log2 n)^2
   n log3 n + n log2 n
   3 log8 n + log2 log2 log2 n
   100n + 0.01n^2
   0.01n + 100n^2
   2n + n^0.5 + 0.5n^1.25
   0.01n log2 n + n(log2 n)^2
   100n log3 n + n^3 + 100n
   0.003 log4 n + log2 log2 n
5. Prove that T(n) = a0 + a1 n + a2 n^2 + a3 n^3 is O(n^3) using the formal definition of the Big-Oh notation. Hint: find a constant c and a threshold n0 such that c n^3 >= T(n) for n >= n0.

6. The statements below show some features of "Big-Oh" notation for the functions f ≡ f(n) and g ≡ g(n). Determine whether each statement is TRUE or FALSE and correct the formula in the latter case.

   Statement                                              TRUE or FALSE?   If FALSE, the correct formula
   Rule of sums: O(f + g) = O(f) + O(g)
   Rule of products: O(f · g) = O(f) · O(g)
   Transitivity: if g = O(f) and h = O(f) then g = O(h)
   5n + 8n^2 + 100n^3 = O(n^4)
   5n + 8n^2 + 100n^3 = O(n^2 log n)

7. Algorithms A and B spend exactly TA(n) = 0.1 n^2 log10 n and TB(n) = 2.5 n^2 microseconds, respectively, for a problem of size n. Choose the algorithm which is better in the Big-Oh sense, and find a problem size n0 such that for any larger size n > n0 the chosen algorithm outperforms the other. If your problems are of size n <= 10^9, which algorithm will you recommend to use?

8. Algorithms A and B spend exactly TA(n) = 5 n log10 n and TB(n) = 25 n microseconds, respectively, for a problem of size n. Choose the algorithm which is better in the Big-Oh sense, and find a problem size n0 such that for any larger size n > n0 the chosen algorithm outperforms the other.

9. Algorithms A and B spend exactly TA(n) = cA n log2 n and TB(n) = cB n^2 microseconds, respectively, for a problem of size n. Find the best algorithm for processing n = 2^20 data items if algorithm A spends 10 microseconds to process 1,024 items and algorithm B spends only 1 microsecond to process 1,024 items.

10. One of two software packages, A or B, should be chosen to process very big databases, each containing up to 10^12 records. The average processing time of package A is TA(n) = 0.1 n log2 n microseconds, and the average processing time of package B is TB(n) = 5 n microseconds. Which package has better performance in the "Big-Oh" sense? Work out the exact conditions when these packages outperform each other.

11. One of two software packages, A or B, should be chosen to process data collections, each containing up to 10^9 records. The average processing time of package A is TA(n) = 0.001 n milliseconds, and the average processing time of package B is TB(n) = 500 sqrt(n) milliseconds. Which package has better performance in the "Big-Oh" sense? Work out the exact conditions when these packages outperform each other.

12. Software packages A and B of complexity O(n log n) and O(n), respectively, spend exactly TA(n) = cA n log10 n and TB(n) = cB n milliseconds to process n data items. During a test, the average time of processing n = 10^4 data items with packages A and B is 100 milliseconds and 500 milliseconds, respectively. Work out the exact conditions when one package actually outperforms the other, and recommend the best choice if up to n = 10^9 items should be processed.

13. Let the processing time of an algorithm of Big-Oh complexity O(f(n)) be directly proportional to f(n). Let three such algorithms A, B, and C have time complexity O(n^2), O(n^1.5), and O(n log n), respectively. During a test, each algorithm spends 10 seconds to process 100 data items. Derive the time each algorithm should spend to process 10,000 items.

14. Software packages A and B have processing time exactly TA(n) = 3 n^1.5 and TB(n) = 0.03 n^1.75, respectively. If you are interested in faster processing of up to n = 10^8 data items, which package should be chosen?

1.2 Time complexity and Big-Oh notation: solutions

1. Because the processing time is T(n) = c n log n, the constant factor is c = T(N) / (N log N), so that
   T(n) = T(N) · (n log n) / (N log N).
   The ratio of logarithms of the same base is independent of the base (see Appendix in the textbook), hence any appropriate base can be used in the above formula (say, base 10). Therefore, for n = 1,000,000 the time is
   T(1,000,000) = T(1,000) · (1,000,000 · log10 1,000,000) / (1,000 · log10 1,000) = 1 · (1,000,000 · 6) / (1,000 · 3) = 2,000 ms.

2. The constant factor is c = T(N) / N^2, so that T(n) = T(N) · n^2 / N^2 = n^2 / 10,000 ms, and T(5,000) = 2,500 ms.

3. The constant factor is c = T(1,000) / f(1,000) = 10 / f(1,000) seconds per item, so that T(n) = 10 · f(n) / f(1,000) seconds. If f(n) = n, then T(100,000) = 1,000 seconds. If f(n) = n^3, then T(100,000) = 10 · (100,000 / 1,000)^3 = 10^7 seconds.
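The arithmetic in solutions 1–3 follows a single rule: measure T(N) once, take c = T(N)/f(N), then scale by T(n) = T(N) · f(n)/f(N). A minimal sketch of that rule as code (the class name `ScalingRule` and the helper `estimate` are mine, not from the text):

```java
public class ScalingRule {
    // Estimate T(n) from a measured T(N), assuming T is directly proportional to f.
    static double estimate(double timeAtN, double bigN, double n,
                           java.util.function.DoubleUnaryOperator f) {
        return timeAtN * f.applyAsDouble(n) / f.applyAsDouble(bigN);
    }

    public static void main(String[] args) {
        // Solution 1: f(n) = n log n, T(1,000) = 1 ms  ->  T(1,000,000) = 2,000 ms
        double t1 = estimate(1.0, 1_000, 1_000_000, x -> x * Math.log10(x));
        // Solution 2: f(n) = n^2, T(100) = 1 ms  ->  T(5,000) = 2,500 ms
        double t2 = estimate(1.0, 100, 5_000, x -> x * x);
        System.out.println(t1 + " " + t2);  // prints: 2000.0 2500.0
    }
}
```

The base of the logarithm cancels in the ratio, which is why solution 1 can pick base 10 freely.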
4. Expression                        Dominant term(s)     O(...)
   5 + 0.001n^3 + 0.025n             0.001n^3             O(n^3)
   500n + 100n^1.5 + 50n log10 n     100n^1.5             O(n^1.5)
   0.3n + 5n^1.5 + 2.5n^1.75         2.5n^1.75            O(n^1.75)
   n^2 log2 n + n(log2 n)^2          n^2 log2 n           O(n^2 log n)
   n log3 n + n log2 n               n log3 n, n log2 n   O(n log n)
   3 log8 n + log2 log2 log2 n       3 log8 n             O(log n)
   100n + 0.01n^2                    0.01n^2              O(n^2)
   0.01n + 100n^2                    100n^2               O(n^2)
   2n + n^0.5 + 0.5n^1.25            0.5n^1.25            O(n^1.25)
   0.01n log2 n + n(log2 n)^2        n(log2 n)^2          O(n(log n)^2)
   100n log3 n + n^3 + 100n          n^3                  O(n^3)
   0.003 log4 n + log2 log2 n        0.003 log4 n         O(log n)

5. It is obvious that T(n) <= |a0| + |a1|n + |a2|n^2 + |a3|n^3. Thus if n >= 1, then T(n) <= c n^3 where c = |a0| + |a1| + |a2| + |a3|, so that T(n) is O(n^3).

6. Statement                                              TRUE or FALSE   Correct formula (if FALSE)
   Rule of sums: O(f + g) = O(f) + O(g)                   FALSE           O(f + g) = max{O(f), O(g)}
   Rule of products: O(f · g) = O(f) · O(g)               TRUE
   Transitivity: if g = O(f) and h = O(f) then g = O(h)   FALSE           if g = O(f) and f = O(h) then g = O(h)
   5n + 8n^2 + 100n^3 = O(n^4)                            TRUE
   5n + 8n^2 + 100n^3 = O(n^2 log n)                      FALSE           5n + 8n^2 + 100n^3 = O(n^3)

7. In the Big-Oh sense, the algorithm B is better. It outperforms the algorithm A when TB(n) <= TA(n), that is, when 2.5n^2 <= 0.1 n^2 log10 n. This inequality reduces to log10 n >= 25, or n >= n0 = 10^25. Thus for problems of size n <= 10^9, the algorithm of choice is A.

8. In the Big-Oh sense, the algorithm B is better. It outperforms the algorithm A when TB(n) <= TA(n), that is, when 25n <= 5 n log10 n, or log10 n >= 5, or n >= n0 = 100,000.

9. The constant factors for A and B are
   cA = 10 / (1024 · log2 1024) = 1/1024, and cB = 1 / 1024^2.
   Thus, to process 2^20 = 1024^2 items the algorithms A and B will spend
   TA(2^20) = (1/1024) · 2^20 · log2(2^20) = 20 · 2^10 = 20,480 microseconds, and
   TB(2^20) = (1/1024^2) · 2^40 = 2^20 = 1,048,576 microseconds,
   respectively. Because TB(2^20) > TA(2^20), the method of choice is A.

10. In the Big-Oh sense, the package B of linear complexity O(n) is better than the package A of O(n log n) complexity. The package B begins to outperform A when TA(n) >= TB(n), that is, when 0.1 n log2 n >= 5n. This inequality reduces to 0.1 log2 n >= 5, or n >= 2^50 ≈ 10^15. Thus for processing up to 10^12 data items, the package of choice is A.

11. In the Big-Oh sense, the package B of complexity O(n^0.5) is better than the package A of linear complexity O(n). The package B begins to outperform A when TA(n) >= TB(n), that is, when 0.001n >= 500 sqrt(n). This inequality reduces to sqrt(n) >= 5 · 10^5, or n >= 25 · 10^10. Thus for processing up to 10^9 data items, the package of choice is A.

12. In the Big-Oh sense, the package B of linear complexity O(n) is better than the package A of O(n log n) complexity. The tests allow us to derive the constant factors:
    cA = 100 / (10^4 · log10 10^4) = 1/400, and cB = 500 / 10^4 = 1/20.
    The package B begins to outperform A when TA(n) >= TB(n), so we must estimate the data size n0 that ensures (n/400) log10 n >= n/20. This inequality reduces to log10 n >= 400/20 = 20, or n >= 10^20. Thus for processing up to 10^9 data items, the package of choice is A.

13. Algorithm   Complexity    Time to process 10,000 items
    A           O(n^2)        T(10,000) = T(100) · 10,000^2 / 100^2 = 10 · 10,000 = 100,000 sec
    B           O(n^1.5)      T(10,000) = T(100) · 10,000^1.5 / 100^1.5 = 10 · 1,000 = 10,000 sec
    C           O(n log n)    T(10,000) = T(100) · (10,000 log 10,000) / (100 log 100) = 10 · 200 = 2,000 sec

14. In the Big-Oh sense, the package A is better. But the package B outperforms A when TB(n) <= TA(n), that is, when 0.03 n^1.75 <= 3 n^1.5. This inequality reduces to n^0.25 <= 3/0.03 = 100, or n <= 10^8. Thus for processing up to 10^8 data items, the package of choice is B.
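Solutions 7–14 all follow the same pattern: compare two concrete running-time functions and find the crossover size n0 beyond which the asymptotically better one wins. That crossover can be bracketed numerically by scanning powers of ten; the sketch below (class and helper names are mine, not from the text) reports the first decade at or past the true n0, so for solution 10 it prints 10^16 rather than the exact 2^50 ≈ 1.1 · 10^15:

```java
public class Crossover {
    // Smallest power of 10 at which tb(n) <= ta(n), scanning upward decade by decade.
    static double crossover(java.util.function.DoubleUnaryOperator ta,
                            java.util.function.DoubleUnaryOperator tb) {
        for (double n = 10; n <= 1e30; n *= 10) {
            if (tb.applyAsDouble(n) <= ta.applyAsDouble(n)) return n;
        }
        return Double.POSITIVE_INFINITY;
    }

    public static void main(String[] args) {
        // Solution 8: TB(n) = 25n overtakes TA(n) = 5 n log10 n exactly at n0 = 10^5.
        double n8 = crossover(n -> 5 * n * Math.log10(n), n -> 25 * n);
        // Solution 10: TB(n) = 5n overtakes TA(n) = 0.1 n log2 n at n0 = 2^50 ~ 1.1e15;
        // the decade scan lands on the next power of ten.
        double n10 = crossover(n -> 0.1 * n * (Math.log(n) / Math.log(2)), n -> 5 * n);
        System.out.println(n8 + " " + n10);  // prints: 100000.0 1.0E16
    }
}
```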
1.3 Recurrences and divide-and-conquer paradigm: exercises

1. Running time T(n) of processing n data items with a given algorithm is described by the recurrence
   T(n) = k · T(n/k) + c · n;  T(1) = 0.
   Derive a closed-form formula for T(n) in terms of c, n, and k, and determine the computational complexity of this algorithm in the "Big-Oh" sense. Hint: to have a well-defined recurrence, assume that n = k^m with the integer m = logk n.

2. Running time T(n) of processing n data items with another, slightly different algorithm is described by the recurrence
   T(n) = k · T(n/k) + c · k · n;  T(1) = 0.
   Derive a closed-form formula for T(n) in terms of c, n, and k, and determine the computational complexity of this algorithm in the "Big-Oh" sense. What value of k = 2, 3, or 4 results in the fastest processing with the above algorithm? Hint: to have a well-defined recurrence, assume that n = k^m with the integer m = logk n. You may need the relation logk n = ln n / ln k, where ln denotes the natural logarithm with base e = 2.71828...

3. Derive the recurrence that describes the processing time T(n) of the recursive method:

   public static int recurrentMethod( int[] a, int low, int high, int goal ) {
       int target = arrangeTarget( a, low, high, goal );
       if ( goal < target )
           return recurrentMethod( a, low, target - 1, goal );
       else if ( goal > target )
           return recurrentMethod( a, target + 1, high, goal );
       else
           return a[ target ];
   }

   The non-recursive method arrangeTarget() has linear time complexity T(n) = c · n, where n = high − low + 1, and returns an integer target in the range low <= target <= high. Output values of arrangeTarget() are equiprobable in this range: e.g., if low = 0 and high = n − 1, then every target = 0, 1, ..., n − 1 occurs with the same probability 1/n. The range of the input variables is 0 <= low <= goal <= high <= a.length − 1. Each recurrence involves the number of data items the method recursively calls itself on; time for performing elementary operations like if-else or return should not be taken into account in the recurrence.
   Hint: consider a call of recurrentMethod() for n = a.length data items, e.g., recurrentMethod( a, 0, a.length - 1, goal ), and analyse all recurrences for T(n) for different input arrays.

4. Derive an explicit (closed-form) formula for the time T(n) of processing an array of size n if T(n) is specified implicitly by the recurrence
   T(n) = (1/n) (T(0) + T(1) + · · · + T(n − 1)) + c · n;  T(0) = 0.
   Hint: you might need the n-th harmonic number Hn = 1 + 1/2 + 1/3 + · · · + 1/n ≈ ln n + 0.577 for deriving the explicit formula for T(n).

5. Determine an explicit formula for the time T(n) of processing an array of size n if T(n) relates to the average of T(n − 1), ..., T(0) as follows:
   T(n) = (2/n) (T(0) + · · · + T(n − 1)) + c;  T(0) = 0.
   Hint: you might need the equation 1/(1·2) + 1/(2·3) + · · · + 1/(n(n+1)) = n/(n+1) for deriving the explicit formula for T(n).

6. The obvious linear algorithm for exponentiation x^n uses n − 1 multiplications. Propose a faster algorithm and find its Big-Oh complexity in the case when n = 2^m by writing and solving a recurrence formula.
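Before solving a recurrence in closed form, it can help to check a conjecture numerically. The sketch below (helper names are mine) evaluates exercise 1's recurrence, T(n) = k·T(n/k) + c·n with T(1) = 0, directly for n = k^m and compares it with the candidate closed form c · n · logk n; the two agree up to floating-point rounding:

```java
public class RecurrenceCheck {
    // Evaluate T(n) = k*T(n/k) + c*n with T(1) = 0 straight from the recurrence.
    static double t(long n, long k, double c) {
        if (n == 1) return 0;
        return k * t(n / k, k, c) + c * n;
    }

    public static void main(String[] args) {
        long k = 3, n = 1;
        double c = 2.0;
        for (int m = 0; m <= 8; m++, n *= k) {
            double closed = c * n * (Math.log(n) / Math.log(k));  // c * n * log_k(n)
            System.out.println(n + ": recurrence=" + t(n, k, c) + " closed-form=" + closed);
        }
    }
}
```

For n = k^m both quantities equal c · m · k^m, matching the telescoping result derived in solution 1 below.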
1.4 Recurrences and divide-and-conquer paradigm: solutions

1. The closed-form formula can be derived either by "telescoping" or by guessing from a few values and then using math induction. (If you know difference equations in mathematics, you will easily notice that these recurrences are difference equations, and "telescoping" is their solution by substitution of the same equation for gradually decreasing arguments: n − 1 or n/k.)

   (a) Derivation with "telescoping": the recurrence T(k^m) = k · T(k^(m−1)) + c · k^m can be divided through by k^m and "telescoped" as follows:
       T(k^m) / k^m           = T(k^(m−1)) / k^(m−1) + c
       T(k^(m−1)) / k^(m−1)   = T(k^(m−2)) / k^(m−2) + c
       ...
       T(k) / k               = T(1) / 1 + c
       Summing these equalities, the intermediate terms cancel and T(1) = 0, so T(k^m) / k^m = c · m, that is, T(k^m) = c · m · k^m, or T(n) = c · n · logk n. The Big-Oh complexity is O(n log n).

   (b) Another version of telescoping: the like substitution can be applied directly to the initial recurrence:
       T(k^m)             = k · T(k^(m−1)) + c · k^m
       k · T(k^(m−1))     = k^2 · T(k^(m−2)) + c · k^m
       ...
       k^(m−1) · T(k)     = k^m · T(1) + c · k^m
       so that T(k^m) = c · m · k^m.

   (c) Guessing and math induction: let us construct a few initial values of T(n), starting from the given T(1) and successively applying the recurrence:
       T(1)   = 0
       T(k)   = k · T(1) + c · k = c · k
       T(k^2) = k · T(k) + c · k^2 = 2 · c · k^2
       T(k^3) = k · T(k^2) + c · k^3 = 3 · c · k^3
       This sequence leads to the assumption that T(k^m) = c · m · k^m. Let us explore it with math induction. (i) The base case T(1) ≡ T(k^0) = c · 0 · k^0 = 0 holds under our assumption. (ii) Let the assumption hold for l = 0, ..., m − 1. Then
       T(k^m) = k · T(k^(m−1)) + c · k^m = k · c · (m − 1) · k^(m−1) + c · k^m = c · (m − 1 + 1) · k^m = c · m · k^m,
       so the assumed relationship holds for all values m = 0, 1, ..., and T(n) = c · n · logk n.

2. The recurrence T(k^m) = k · T(k^(m−1)) + c · k^(m+1) telescopes as follows:
   T(k^m) / k^(m+1)       = T(k^(m−1)) / k^m + c
   T(k^(m−1)) / k^m       = T(k^(m−2)) / k^(m−1) + c
   ...
   T(k) / k^2             = T(1) / k + c
   Therefore, T(k^m) / k^(m+1) = c · m, or T(k^m) = c · k^(m+1) · m, so that T(n) = c · k · n · logk n. The complexity is O(n log n) because k is constant. The processing time T(n) = c · k · n · logk n can be rewritten as T(n) = c · (k / ln k) · n · ln n to give an explicit dependence on k. Because 2 / ln 2 = 2.8854, 3 / ln 3 = 2.7307, and 4 / ln 4 = 2.8854, the fastest processing is obtained for k = 3.

3. Because all the variants
   T(n) = T(0) + cn       if target = 0,
   T(n) = T(1) + cn       if target = 1,
   T(n) = T(2) + cn       if target = 2,
   ...
   T(n) = T(n − 1) + cn   if target = n − 1
   are equiprobable, the recurrence is T(n) = (1/n) (T(0) + ... + T(n − 1)) + c · n.

4. The recurrence suggests that nT(n) = T(0) + T(1) + · · · + T(n − 2) + T(n − 1) + cn^2. It follows that (n − 1)T(n − 1) = T(0) + · · · + T(n − 2) + c(n − 1)^2, and by subtracting the latter equation from the former one, we obtain the following basic recurrence: nT(n) − (n − 1)T(n − 1) = T(n − 1) + 2cn − c. It reduces to nT(n) = nT(n − 1) + 2cn − c, or T(n) = T(n − 1) + 2c − c/n. Telescoping results in the following system of equalities:
   T(n)     = T(n − 1) + 2c − c/n
   T(n − 1) = T(n − 2) + 2c − c/(n − 1)
   ...
   T(2)     = T(1) + 2c − c/2
   T(1)     = T(0) + 2c − c
   Because T(0) = 0, the explicit expression for T(n) is
   T(n) = 2cn − c (1 + 1/2 + · · · + 1/n) = 2cn − c · Hn ≈ 2cn − c ln n − 0.577c.

5. The recurrence suggests that nT(n) = 2(T(0) + T(1) + · · · + T(n − 2) + T(n − 1)) + cn. Because (n − 1)T(n − 1) = 2(T(0) + · · · + T(n − 2)) + c(n − 1), the subtraction of the latter equality from the former one results in the following basic recurrence: nT(n) − (n − 1)T(n − 1) = 2T(n − 1) + c. It reduces to nT(n) = (n + 1)T(n − 1) + c, or T(n)/(n + 1) = T(n − 1)/n + c/(n(n + 1)). Telescoping results in the following system of equalities:
   T(n)/(n + 1)   = T(n − 1)/n + c/(n(n + 1))
   T(n − 1)/n     = T(n − 2)/(n − 1) + c/((n − 1)n)
   ...
   T(2)/3         = T(1)/2 + c/(2·3)
   T(1)/2         = T(0)/1 + c/(1·2)
   Because T(0) = 0, the explicit expression for T(n) is
   T(n)/(n + 1) = c (1/(1·2) + 1/(2·3) + · · · + 1/(n(n + 1))) = c · n/(n + 1),
   so that T(n) = c · n.

   An alternative approach (guessing and math induction):
   T(0) = 0
   T(1) = (2/1) · 0 + c = c
   T(2) = (2/2) (0 + c) + c = 2c
   T(3) = (2/3) (0 + c + 2c) + c = 3c
   T(4) = (2/4) (0 + c + 2c + 3c) + c = 4c
   It suggests the assumption T(n) = cn, to be explored with math induction: (i) the assumption is valid for T(0) = 0; (ii) let it be valid for k = 1, ..., n − 1, that is, T(k) = kc for k = 1, ..., n − 1. Then
   T(n) = (2/n) (0 + c + 2c + ... + (n − 1)c) + c = (2/n) · ((n − 1)n/2) c + c = (n − 1)c + c = cn.
   The assumption is proven, and T(n) = cn.

6. A straightforward linear exponentiation needs n/2 = 2^(m−1) more multiplications to produce x^n after having already computed x^(n/2). But because x^n = x^(n/2) · x^(n/2), it can be computed with only one multiplication more than is needed to compute x^(n/2). Therefore, a more efficient exponentiation is performed as follows: x^2 = x · x, x^4 = x^2 · x^2, x^8 = x^4 · x^4, etc. Processing time for this more efficient algorithm corresponds to the recurrence T(2^m) = T(2^(m−1)) + 1; T(1) = 0, and by telescoping one obtains:
   T(2^m)     = T(2^(m−1)) + 1
   T(2^(m−1)) = T(2^(m−2)) + 1
   ...
   T(2)       = T(1) + 1
   T(1)       = 0
   that is, T(2^m) = m, or T(n) = log2 n. The "Big-Oh" complexity of such an algorithm is O(log n).
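The squaring chain in solution 6 can be written out for n = 2^m, counting multiplications explicitly; exactly m multiplications produce x^(2^m), versus 2^m − 1 for the naive method. A sketch (class and method names are mine, not from the text):

```java
public class FastPower {
    // Repeated squaring for n = 2^m: exactly m multiplications, i.e. O(log n),
    // following the chain x, x^2, x^4, x^8, ... from solution 6.
    static double power2m(double x, int m) {
        double result = x;
        for (int s = 0; s < m; s++) {
            result = result * result;   // one multiplication per squaring step
        }
        return result;
    }

    public static void main(String[] args) {
        // n = 2^4 = 16: four squarings instead of fifteen naive multiplications.
        System.out.println(power2m(2.0, 4));  // prints: 65536.0
    }
}
```

For general n the same idea (square, and multiply by x once more when the current exponent bit is odd) still uses O(log n) multiplications.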
1.5 Time complexity of code: exercises

1. Work out the computational complexity of the following piece of code:

   for( int i = n; i > 0; i /= 2 ) {
       for( int j = 1; j < n; j *= 2 ) {
           for( int k = 0; k < n; k += 2 ) {
               ... // constant number of operations
           }
       }
   }

2. Work out the computational complexity of the following piece of code:

   for( i = 1; i < n; i *= 2 ) {
       for( j = n; j > 0; j /= 2 ) {
           for( k = j; k < n; k += 2 ) {
               sum += ( i + j * k );
           }
       }
   }

3. Work out the computational complexity of the following piece of code assuming that n = 2^m:

   for( int i = n; i > 0; i-- ) {
       for( int j = 1; j < n; j *= 2 ) {
           for( int k = 0; k < j; k++ ) {
               ... // constant number C of operations
           }
       }
   }

4. Work out the computational complexity (in the "Big-Oh" sense) of the following piece of code and explain how you derived it using the basic features of the "Big-Oh" notation:

   for( int bound = 1; bound <= n; bound *= 2 ) {
       for( int i = 0; i < bound; i++ ) {
           for( int j = 0; j < n; j += 2 ) {
               ... // constant number of operations
           }
           for( int j = 1; j < n; j *= 2 ) {
               ... // constant number of operations
           }
       }
   }

5. Determine the average processing time T(n) of the recursive algorithm:

   int myTest( int n ) {
       if ( n <= 0 ) return 0;
       else {
           int i = random( n - 1 );
           return myTest( i ) + myTest( n - 1 - i );
       }
   }

   providing the algorithm random( int n ) spends one time unit to return a random integer value uniformly distributed in the range [0, n], whereas all other instructions spend a negligibly small time (e.g., T(0) = 0). Hints: derive and solve the basic recurrence relating T(n) in average to T(n − 1), ..., T(0). You might need the equation 1/(1·2) + 1/(2·3) + · · · + 1/(n(n+1)) = n/(n+1) for deriving the explicit formula for T(n).

6. Assume that the array a contains n values, that the method randomValue takes a constant number c of computational steps to produce each output value, and that the method goodSort takes n log n computational steps to sort the array. Determine the Big-Oh complexity for the following fragment of code, taking into account only the above computational steps:

   for( i = 0; i < n; i++ ) {
       for( j = 0; j < n; j++ )
           a[ j ] = randomValue( i );
       goodSort( a );
   }
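A practical way to sanity-check answers to exercises 1–4 is to instrument the loops with a counter and watch how the count grows with n. A sketch for exercise 1's triple loop (the counter and the class name are my additions):

```java
public class LoopCount {
    // Count how many times the innermost body of exercise 1's code executes.
    static long count(int n) {
        long ops = 0;
        for (int i = n; i > 0; i /= 2) {            // ~log2 n iterations
            for (int j = 1; j < n; j *= 2) {        // ~log2 n iterations
                for (int k = 0; k < n; k += 2) {    // n/2 iterations
                    ops++;                          // stands for the constant-time body
                }
            }
        }
        return ops;
    }

    public static void main(String[] args) {
        for (int n = 1 << 8; n <= 1 << 12; n *= 2) {
            // Expected growth ~ (n/2) * (log2 n)^2, i.e. O(n (log n)^2)
            double model = (n / 2.0) * Math.pow(Math.log(n) / Math.log(2), 2);
            System.out.println(n + ": counted=" + count(n) + " model~" + model);
        }
    }
}
```

Doubling n roughly doubles the count while the squared-logarithm factor creeps up slowly, which is the signature of O(n (log n)^2) growth.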
1.6 Time complexity of code: solutions

1. In the outer for-loop, the variable i keeps halving, so it goes round log2 n times. For each i, the next loop goes round m = log2 n times, because of the doubling of the variable j. For each j, the innermost loop by k goes round n/2 times. The loops are nested, so the bounds may be multiplied to give that the algorithm is O(n (log n)^2).

2. Let n = 2^k. Then the outer loop is executed k times, and the middle loop is executed k + 1 times. For each value j = 2^k, 2^(k−1), ..., 2^1, 2^0, the inner loop has a different execution count:

   j         Inner iterations
   2^k       1
   2^(k−1)   (2^k − 2^(k−1)) · 1/2
   2^(k−2)   (2^k − 2^(k−2)) · 1/2
   ...       ...
   2^1       (2^k − 2^1) · 1/2
   2^0       (2^k − 2^0) · 1/2

   In total, the number of inner/middle steps per pass of the outer loop is
   1 + (k · 2^k − (1 + 2 + ... + 2^(k−1))) · 1/2 = 1 + (k · 2^k − (2^k − 1)) · 1/2 = 1.5 + (k − 1) · 2^(k−1) ≡ (log2 n − 1) · n/2,
   which is O(n log n). Because the outer loop goes round k = log2 n times, the total complexity is O(n (log n)^2).

3. The outer for-loop goes round n times. For each i, the next loop goes round m = log2 n times, because of the doubling of the variable j, and for each j the innermost loop by k goes round j times, so that the two inner loops together go round 1 + 2 + 4 + ... + 2^(m−1) = 2^m − 1 ≈ n times. The loops are nested, so the bounds may be multiplied to give that the algorithm is O(n^2).

4. The first and second successive innermost loops have O(n) and O(log n) complexity, respectively. Thus the overall complexity of the innermost part is O(n). The outermost and middle loops have complexity O(log n) and O(n), respectively, so a straightforward (and valid) solution is that the overall complexity is O(n^2 log n). More detailed optional analysis gives a tighter bound: the outermost and middle loops are interrelated, and the total number of repetitions of the innermost part is
   1 + 2 + ... + 2^m = 2^(m+1) − 1,
   where m = log2 n is the smallest integer such that 2^(m+1) > n. Thus actually this code has quadratic complexity O(n^2). But selection of either answer, O(n^2 log n) or O(n^2), is valid.

5. The algorithm suggests that T(n) = T(i) + T(n − 1 − i) + 1. By summing this relationship for all the possible random values i = 0, 1, ..., n − 1, we obtain that on average nT(n) = 2(T(0) + T(1) + · · · + T(n − 2) + T(n − 1)) + n. Because (n − 1)T(n − 1) = 2(T(0) + ... + T(n − 2)) + (n − 1), the basic recurrence is nT(n) − (n − 1)T(n − 1) = 2T(n − 1) + 1, or nT(n) = (n + 1)T(n − 1) + 1, or T(n)/(n + 1) = T(n − 1)/n + 1/(n(n + 1)). Telescoping results in the following system of expressions:
   T(n)/(n + 1)   = T(n − 1)/n + 1/(n(n + 1))
   T(n − 1)/n     = T(n − 2)/(n − 1) + 1/((n − 1)n)
   ...
   T(2)/3         = T(1)/2 + 1/(2·3)
   T(1)/2         = T(0)/1 + 1/(1·2)
   Because T(0) = 0, the explicit expression for T(n) is
   T(n)/(n + 1) = 1/(1·2) + 1/(2·3) + · · · + 1/(n(n + 1)) = n/(n + 1),
   so that T(n) = n.

6. The inner loop has linear complexity cn, but the method called next, goodSort, is of higher complexity n log n. Because the outer loop is linear in n, the overall complexity of this piece of code is O(n^2 log n).
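Solution 5's result T(n) = n can also be seen by instrumenting myTest directly: each call with n > 0 makes one random() call and splits the remaining n − 1 items between the two recursive calls (i + (n − 1 − i) = n − 1), so the total count is exactly n on every run, not just on average. A sketch (class name, counter, and the use of java.util.Random in place of the exercise's random() are my assumptions):

```java
import java.util.Random;

public class MyTestCost {
    static final Random RNG = new Random();

    // Count the random() calls made by myTest(n); each call costs one time unit.
    static int cost(int n) {
        if (n <= 0) return 0;
        int i = RNG.nextInt(n);             // plays the role of random(n - 1): uniform on 0..n-1
        return 1 + cost(i) + cost(n - 1 - i);
    }

    public static void main(String[] args) {
        for (int trial = 0; trial < 5; trial++) {
            System.out.println(cost(50));   // prints: 50 on every trial
        }
    }
}
```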