ANALYSIS AND DESIGN OF ALGORITHMSAn algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output. We can also view an algorithm as a tool for solving a well-specified computational problem. Example: 1 2 3 4 5 6 7 get a positive integer from input if n > 10 print "This might take a while..." for i = 1 to n for j = 1 to i print i * j print "Done!" Analyzing an algorithm means predicting the resources that the algorithm requires. Occasionally, resources such as memory, communication bandwidth, or computer hardware are of primary concern, but most often it is computational time(or running time) that we want to measure. The efficiency or running time of an algorithm is stated as a function relating the input length(generally denoted by n) to the number of steps (time complexity generally denoted by T(n)) or storage locations (space complexity generally denoted by S(n)). Complexity is measured in form of function . Example: T(n) denote time complexity function of an algorithm where n is number of input. T(n)= n2+5n+100 = O(n2) means that time complexity of algorithm is order of n2,running time of algorithm increases more quickly than the algorithm having complexity of O(n). For quadratic equation type function complexity of algorithm is determined by degree of equation. Example: T(n)=n4+1000n3+1 In this equation complexity is determined by n4,since 4 is the largest degree of n. Note: O(n2) and O(n) are the set of functions which have same behavior at large input values. 5 n2 +n = O(n2) and also 1000 n2 = O(n2) since they exhibit same behavior at large values Notation for complexity O-notation(Big O) O(g(n)) = {f (n) : there exist positive constants c and n0 such that 0 ≤ f (n) ≤ cg(n) for all n ≥ n0} . g(n) is an asymptotic upper bound for f (n). If f (n) ∈ O(g(n)), we write f (n) = O(g(n)) Example: 2 n2 = O(n3), with c = 1 and n0 = 2. Examples of functions in O(n2): n2, n2 + n, n2 + 1000n, 1000 n2 + 1000n, Also, n, n/1000, n1.999, n2/ lg lg lg n Ω-notation Ω(g(n)) = { f (n) : there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f (n) for all n ≥ n0} . g(n) is an asymptotic lower bound for f (n). Example: √n = Ω (lg n), with c = 1 and n0 = 16. Examples of functions in Ω (n2): n2, n2 + n, n2 – n, 1000 n2 + 1000n, 1000 n2 − 1000n, Also, n3, n2.00001, n2 lg lg lg n, 22n θ-notation θ (g(n)) = { f (n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1g(n) ≤ f (n) ≤ c2g(n) for all n ≥ n0} . g(n) is an asymptotically tight bound for f (n). Example: n2/2 − 2n = _(n2), with c1 = 1/4, c2 = 1/2, and n0 = 8. Theorem: f (n) = θ (g(n)) if and only if f = O(g(n)) and f = Ω(g(n)) . Asymptotic notation in equations When on right-hand side: O(n2) stands for some anonymous function in the set O(n2). 2 n2+3n+1 = 2 n2+θ(n) means 2 n2+3n+1 = 2 n2+ f (n) for some f (n) ∈ θ(n). In particular, f (n) = 3n + 1. When on left-hand side: No matter how the anonymous functions are chosen on the left-hand side, there is a way to choose the anonymous functions on the right hand side to make the equation valid. Interpret 2 n2 + θ(n) = θ(n2) as meaning for all functions f (n) ∈ θ(n), there exists a function g(n) ∈ θ(n2) such that 2 n2 + f (n) = g(n) . Interpretation: • First equation: There exists f (n) ∈ θ(n) such that 2 n2+3n+1 = 2 n2+ f (n). 
• Second equation: For all g(n) ∈ θ(n) (such as the f(n) used to make the first equation hold), there exists h(n) ∈ θ(n^2) such that 2n^2 + g(n) = h(n).

Small o-notation
o(g(n)) = { f(n) : for all constants c > 0, there exists a constant n0 > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n0 }.
Another view, probably easier to use: f(n) = o(g(n)) when lim(n→∞) f(n)/g(n) = 0.
Examples: n^1.9999 = o(n^2), n^2/lg n = o(n^2), n^2 ≠ o(n^2) (just as 2 is not < 2), n^2/1000 ≠ o(n^2).

Small omega (ω) notation
ω(g(n)) = { f(n) : for all constants c > 0, there exists a constant n0 > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ n0 }.
Another view, again probably easier to use: f(n) = ω(g(n)) when lim(n→∞) f(n)/g(n) = ∞.
Examples: n^2.0001 = ω(n^2), n^2 lg n = ω(n^2), n^2 ≠ ω(n^2).

Comparisons of functions
Relational properties:
• Transitivity: f(n) = θ(g(n)) and g(n) = θ(h(n)) ⇒ f(n) = θ(h(n)). The same holds for O, Ω, o and ω.
• Reflexivity: f(n) = θ(f(n)). The same holds for O and Ω.
• Symmetry: f(n) = θ(g(n)) if and only if g(n) = θ(f(n)).
• Transpose symmetry: f(n) = O(g(n)) if and only if g(n) = Ω(f(n)); f(n) = o(g(n)) if and only if g(n) = ω(f(n)).
Comparisons:
• f(n) is asymptotically smaller than g(n) if f(n) = o(g(n)).
• f(n) is asymptotically larger than g(n) if f(n) = ω(g(n)).
A way to compare sizes of functions: O ≈ ≤, Ω ≈ ≥, θ ≈ =, o ≈ <, ω ≈ >. But unlike real numbers, where exactly one of a < b, a = b or a > b must hold, we might not be able to compare some functions. Example: n^(1+sin n) and n, since 1 + sin n oscillates between 0 and 2.

Standard notations and common functions
Monotonicity
• f(n) is monotonically increasing if m ≤ n ⇒ f(m) ≤ f(n).
• f(n) is monotonically decreasing if m ≥ n ⇒ f(m) ≥ f(n).
• f(n) is strictly increasing if m < n ⇒ f(m) < f(n).
• f(n) is strictly decreasing if m > n ⇒ f(m) > f(n).
Exponentials
Useful identities: a^(-1) = 1/a, (a^m)^n = a^(mn), a^m · a^n = a^(m+n).
A surprisingly useful inequality: for all real x, e^x ≥ 1 + x. As x gets closer to 0, e^x gets closer to 1 + x.
We can relate the rates of growth of polynomials and exponentials: for all real constants a and b such that a > 1, lim(n→∞) n^b/a^n = 0, which implies that n^b = o(a^n).
Logarithms
Notations: lg n = log_2 n (binary logarithm), ln n = log_e n (natural logarithm), lg^k n = (lg n)^k (exponentiation), lg lg n = lg(lg n) (composition).
Logarithm functions apply only to the next term in the formula, so that lg n + k means (lg n) + k, and not lg(n + k).
In the expression log_b a: if we hold b constant, the expression is strictly increasing as a increases; if we hold a constant, the expression is strictly decreasing as b increases.
Useful identities for all real a > 0, b > 0, c > 0 and n, where logarithm bases are not 1:
a = b^(log_b a), log_c(ab) = log_c a + log_c b, log_b(a^n) = n log_b a, log_b a = log_c a / log_c b, log_b(1/a) = −log_b a, log_b a = 1/(log_a b), a^(log_b c) = c^(log_b a).
Changing the base of a logarithm from one constant to another only changes the value by a constant factor, so we usually don't worry about logarithm bases in asymptotic notation. The convention is to use lg within asymptotic notation, unless the base actually matters.
Just as polynomials grow more slowly than exponentials, logarithms grow more slowly than polynomials: in n^b = o(a^n), substitute lg n for n and 2^a for a, implying that lg^b n = o(n^a).
Factorials
n! = 1 · 2 · 3 · … · n. Special case: 0! = 1.
We can use Stirling's approximation, n! = √(2πn) (n/e)^n (1 + θ(1/n)), to derive that lg(n!) = θ(n lg n).
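These growth-rate facts can also be checked numerically. The short C program below is an added illustration (it is not part of the original notes); it prints ratios that tend to 0 — the numeric face of lg^b n = o(n^a) and n^b = o(a^n) — and a ratio that tends to 1 (slowly), reflecting lg(n!) = θ(n lg n). Compile with the math library, e.g. gcc demo.c -lm (the file name is arbitrary).

#include <stdio.h>
#include <math.h>

/* Illustration: the first two ratios tend to 0, the last tends to 1. */
int main(void)
{
    for (double n = 16; n <= 1 << 20; n *= 16) {
        double lg_n = log2(n);
        double polylog_over_poly = pow(lg_n, 3) / sqrt(n);   /* (lg n)^3 / n^(1/2) -> 0 */
        double poly_over_exp     = exp2(3 * lg_n - n);       /* n^3 / 2^n          -> 0 */
        double stirling_ratio    = (lgamma(n + 1) / log(2)) / (n * lg_n); /* lg(n!)/(n lg n) -> 1 */
        printf("n=%8.0f  (lg n)^3/sqrt(n)=%.6f  n^3/2^n=%.3e  lg(n!)/(n lg n)=%.4f\n",
               n, polylog_over_poly, poly_over_exp, stirling_ratio);
    }
    return 0;
}

Only a few rows of output are needed to see which function eventually dominates, which is exactly the "substitute large values" method used in the questions that follow.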
Q 1. Consider the following three claims:
I. (n + k)^m = θ(n^m), where k and m are constants
II. 2^(n+1) = θ(2^n)
III. 2^(2n+2) = θ(2^n)
Which of these claims are correct?
(A) I and II (B) I and III (C) II and III (D) I, II and III
CS 2003
Ans. A
Explanation: Constants can be ignored in addition and multiplication, but not in powers, for large input values. So in claim I, n + k ≈ n, and the first claim is correct. Similarly 2^(n+1) = 2·2^n ≈ c·2^n, so the second claim is also correct. But 2^(2n+2) = 2^2 · 2^(2n) ≈ c·2^(2n), which is much larger than c·2^n for large values of n, so the third claim is wrong.

Q 2. Consider the following functions:
f(n) = 2^n, g(n) = n!, h(n) = n^(log n)
Which of the following statements about the asymptotic behavior of f(n), g(n) and h(n) is true?
(A) f(n) = O(g(n)); g(n) = O(h(n))
(B) f(n) = Ω(g(n)); g(n) = O(h(n))
(C) g(n) = O(f(n)); h(n) = O(f(n))
(D) h(n) = O(f(n)); g(n) = Ω(f(n))
CS2008
Ans. D
Explanation: To solve this type of problem, first arrange the functions in increasing order of complexity. Logic: to find the complexity order, take the log of the functions.
log(f(n)) = n log 2 = θ(n)
log(g(n)) = log(n!) = θ(n log n)
log(h(n)) = log n · log n = θ(log^2 n)
Now the order is θ(log^2 n) < θ(n) < θ(n log n), i.e. h(n) < f(n) < g(n). From this relation only option D is correct.

Q 3. Arrange the following functions in increasing order of complexity:
f1(n) = 2^n, f2(n) = n^(3/4), f3(n) = n(log n)^3, f4(n) = n^(log n), f5(n) = n^(2n), f6(n) = 2^(n^2)
Ans. f2(n) < f3(n) < f4(n) < f1(n) < f5(n) < f6(n)

Note: there are three methods to find which of two functions is larger asymptotically:
1. Substitute large values of n in both functions and compare.
2. Take the log of both functions and compare the results.
3. Find lim(n→∞) f(n)/g(n): if it is 0 then g(n) > f(n), and if it is infinite then f(n) > g(n).

Complexity of Iterative Algorithm
To find the complexity of an iterative algorithm, correctly analyze the behavior of the algorithm. A print statement or a conditional statement takes θ(1) units of time.

Q 4. Find the time complexity of the following iterative algorithm:
for(i=1; i<=n; i=i+1) { Print n }
Ans. θ(n)
Explanation: here the for loop executes n times, so n times θ(1) equals θ(n).

Q 5. Find the time complexity of the following iterative algorithm:
for(i=2; i<=n; i=i*2) { Print n }
Ans. θ(log2 n)
Explanation: here the value of i increases exponentially. At the first step the value of i is 2, then it is incremented to 4, then to 8, 16, 32, … . This decreases the number of times the loop executes. After k steps the value of i is 2^k, and the loop ends when 2^k becomes greater than n, so the loop runs while 2^k <= n, i.e. k <= log2(n). So the loop executes log2(n) times.

Q 6. Find the time complexity of the following iterative algorithm:
for(i=1; i<=n; i++) { for(j=1; j<=n; j++) { Print n } }
Ans. θ(n^2)
Explanation: here, for every value of i the inner loop is executed. The time complexity of the inner loop is θ(n), and n times θ(n) = θ(n^2).

Q 7. Find the time complexity of the following iterative algorithm:
for(i=1; i<=n; i++) { for(j=1; j<=i; j++) { Print n } }
Ans. θ(n^2)
Explanation: here the inner loop depends on the value of i. For i=1 the inner loop executes 1 time, for i=2 it executes 2 times, for i=3 it executes 3 times, …, and for i=n it executes n times. The total execution count of the inner loop is 1 + 2 + 3 + … + n = n(n+1)/2 = n^2/2 + n/2 = θ(n^2).

Q 8. Find the time complexity of the following iterative algorithm:
for(i=1; i<=n; i++) { for(j=1; j<=n; j=j*2) { Print n } }
Ans. θ(n log2 n)
Explanation: here, for every value of i the inner loop executes about log n times, so n times θ(log2 n) = θ(n log2 n).

Q 9. Find the time complexity of the following iterative algorithm:
for(i=1; i<=n; i++) { for(j=1; j<=i; j=j*2) { Print n } }
Ans. θ(n log2 n)
Explanation: here the inner loop depends on the value of i, and it executes log2(i) times for each value of i. For i=1 the inner loop executes log2 1 times, for i=2 it executes log2 2 times, for i=3 it executes log2 3 times, …, and for i=n it executes log2 n times. The total execution count of the inner loop is log2 1 + log2 2 + log2 3 + … + log2 n = log2(1·2·3·…·n) = log2(n!) = θ(n log2 n).
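As a sanity check on Q6–Q8, one can simply count how many times the innermost statement runs. The C sketch below is an added illustration, not a GATE solution; the value n = 1024 is an arbitrary choice. It counts the iterations of three of the nested-loop patterns above and compares them with the closed forms n^2, n(n+1)/2 and n(⌊log2 n⌋ + 1).

#include <stdio.h>
#include <math.h>

/* Count how many times the innermost "Print n" would execute. */
int main(void)
{
    int n = 1024;
    long c1 = 0, c2 = 0, c3 = 0;

    for (int i = 1; i <= n; i++)            /* Q6: inner loop runs n times        */
        for (int j = 1; j <= n; j++)
            c1++;

    for (int i = 1; i <= n; i++)            /* Q7: inner loop runs i times        */
        for (int j = 1; j <= i; j++)
            c2++;

    for (int i = 1; i <= n; i++)            /* Q8: j doubles, about log2(n) runs  */
        for (int j = 1; j <= n; j = j * 2)
            c3++;

    printf("Q6: %ld (n^2 = %d)\n", c1, n * n);
    printf("Q7: %ld (n(n+1)/2 = %d)\n", c2, n * (n + 1) / 2);
    printf("Q8: %ld (n*(floor(log2 n)+1) = %.0f)\n", c3, n * (floor(log2(n)) + 1));
    return 0;
}

The printed counts match the closed forms exactly, which is a quick way to check the summations used in the explanations above.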
For i=2 j=n/2 + n/4 +n/8+…………+ 1 For i=1 j=n/2 + n/4 +n/8+…………+ 1+0 To solve this equation let n=2k j=2K-1 +2K-2 +2K-3 +………. .Find Complexity of following Recurrence relation T(n)=T(n-1)+1 where T(1)=1 Ans. and itself.. . Iterative Method 2.j+=i). Tree Method Iterative method In this we reduce recurrence relation to a non recurrence relation. Q 11. with smaller arguments. + log2 n =log(1*2*3*………. C Explanation: execute loop step by step For i=n j=0 For i=n/2 j=0+n/2 For i=n/4 j=n/2+n/4 For i=n/8 j=n/2+n/4 +n/8 . Which one of the following is true? (A) val( j )=θ(logn) (B) val( j )=θ(√n) (C) val( j )=θ(n) (D) val( j )=θ(nlogn) CS2006 Ans. Total execution of inner loop is log2 1 +log2 2 + log2 3 + …………. Let val( j ) denotes the value stored in the variable j after termination of the for loop.*n) = log2 (n!)= θ (n log2 n). T(n)=θ(n) Explanation: T(n)=T(n-1) + 1 …….For i=n inner loop execute log2 n times. Examples: There are three methods to find complexity of recurrence relation 1.i/=2. Consider the following C –program fragment in which i.1 . ) b b . b > 1.1 = 1 + n .1 = n = θ(n) Q 12. T(n)= T(n-k) +k T(n)=T(n-(n-1))+n-1 = T(1) + n . f (n): b Case 1: f (n) = O(n log a-ε) for some constant ε> 0. Compare n log a vs. b b b Case 2: f (n) = θ( n log a lgk n). b b b b b Case 3: f (n) = θ(n log a+ε ) for some constant ε > 0 and f (n) satisfies the regularity condition a f ( n/b) ≤ c f (n) for some constant c < 1 and all sufficiently large n. T(n)=θ(n2) Explanation: T(n)=T(n-1) + n T(n)= (T(n-2) + n-1 ) + n = T(n-2) + 2n-1 T(n)=(T(n-3)+n-2) + 2n-1 = T(n-3) +3n-3 . ( f (n) is within a polylog factor of n log a. but not smaller. and f (n) > 0. where k ≥ 0. Simple case: k = 0 ⇒ f (n) = θ(n log a)⇒ T (n) = θ(n log a log2 n)....Find Complexity of following Recurrence relation T(n)=T(n-1) + n where T(1)=1 Ans.. . T(n)=2k*T(n-k) + 1 + 2 + 22 +23 +……+2k-1 T(n)= 2n-1*T(n-(n-1))+ 1 + 2 + 22 +23 +……+2n-2 = 2n-1*T(1) +2n-1 =2n-1 +2n-1 =2*2n-1 = 2n = θ(2n) Master method Used for many divide-and-conquer recurrences of the form T (n) = aT (n/b) + f (n) . .. ( f (n) is polynomially greater than n log a. . ( f (n) is polynomially smaller than n log a. T(n)=T(n-k) + kn .k(k-1)/2 T(n)=T(n-(n-1))+n-1 = T(1) + (n – 1)n + (n-1)(n-2)/2 = θ(n2) Q 13. where a ≥ 1.2 Ans.) Solution: T (n) = θ( n log a).Put value of T(n-1) in eq 1 T(n)= (T(n-2) + 1 ) + 1 = T(n-2) + 2 Where using eq 1 we can derive T(n-1)=T(n-2)+1 Now further reduce eq 2 T(n)=(T(n-3)+1) + 2 = T(n-3) +3 .) Solution: T (n) = θ(n log a lgk+1 n).Find Complexity of following Recurrence relation T(n)=2*T(n-1) + 1 where T(1)=1 .…. T(n)=θ(2n) Explanation: T(n)=2*T(n-1) + 1 T(n)=2* (2*T(n-2) + 1) + 1 = 22*T(n-2) + 1 +2 T(n)= 22* (2T(n-3)+1) ) + 1 +2 = 23*T(n-3) + 1 + 2 + 22 . here a=4. To reduce it first substitute n = 2k .Find complexity of following recursive relation using master method T(n)=T(n/2)+1 Ans. Now relation becomes T(2k)=2T(√2k)+ 1 or T(2k)=T(2k/2)+ 1 Now replace T(2k) to S(k). Find complexity of following recursive relation using master method T(n)=4T(n/2)+1 Ans. D Explanation: This problem can not be solved directly using master method since it is not in the form of T(n)=aT(n/b)+f(n) .now new relation is S(k)=2S(k/2)+ 2k here a=2. here a=1. so T(n) = θ(n) b b b 2 . where k≥ 0. greater one decides the complexity of recurrence relation. here a=2. so T(n) = θ(f(n))= θ(n2 ) b b b 2 Q 18. Note: If both f(n) and n log a are equal or dividing n log a from f(n) results in logkn. 
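The closed forms obtained by the iterative (expansion) method can also be cross-checked by evaluating a recurrence directly for small n, in the spirit of the hit-and-try note used later for Q22. The C sketch below is an added illustration (not part of the original notes): it evaluates T(n) = T(n-1) + n and T(n) = 2T(n-1) + 1, both with T(1) = 1, and prints them next to their closed forms n(n+1)/2 and 2^n - 1.

#include <stdio.h>

/* Evaluate two of the recurrences above directly and compare with closed forms:
 *   T(n) = T(n-1) + n,  T(1) = 1  ->  n(n+1)/2   (Theta(n^2))
 *   T(n) = 2T(n-1) + 1, T(1) = 1  ->  2^n - 1    (Theta(2^n))
 */
int main(void)
{
    long long t1 = 1, t2 = 1;               /* T(1) for both recurrences */
    for (int n = 2; n <= 20; n++) {
        t1 = t1 + n;                        /* T(n) = T(n-1) + n   */
        t2 = 2 * t2 + 1;                    /* T(n) = 2T(n-1) + 1  */
        printf("n=%2d  T1=%6lld (n(n+1)/2=%d)  T2=%8lld (2^n-1=%lld)\n",
               n, t1, n * (n + 1) / 2, t2, (1LL << n) - 1);
    }
    return 0;
}

Printing only a handful of values is usually enough to confirm or reject a guessed closed form before proving it by expansion.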
b=2 and f(n)=1 n log a = n log 1 = n 0 =1 n log a = f(n) so by case 2 of Master Method T(n) = θ(n log a log2 n)= θ(log2 n). Example: b b b Q 14. b=2 and f(n)=1 n log a = n log 4 = n 2 n log a ≠ f(n) so by case 3 of Master Method n log a < f(n).T(1) = 1 Which one of the following is true? (A) T(n) = θ(log log n) (B) T(n) = θ (log n) (C) T(n)= θ (√n) (D) T(n) = θ (n) CS2006 Ans. b 2 b b Q 16. b=2 and f(n)=n n log a = n log 2 = n 1 =n n log a = f(n) so by case 2 of Master Method T(n) = θ(n log a log2 n)= θ(n log2 n). b=2 and R(k)=1 k log a = k log 2 = k 1 =k=log n k log a ≠ R(k) so by case 1 of Master Method n log a > R(k). Consider the following recurrence: T(n)=2T(√n)+1. Find complexity of following recursive relation using master method T(n)=2T(n/2)+1 Ans. b=2 and f(n)=1 n log a = n log 2 = n 1 =n n log a ≠ f(n) so by case 1 of Master Method n log a > f(n). function in terms of k instead of 2k.Find complexity of following recursive relation using master method T(n)=2T(n/2)+n Ans. b b 2 b Q 15.first we have to reduce it in this form. then apply case 2. so T(n) = θ(n log a) = θ(n ) b b b 2 b Q 17. If case 2 is not applicable then check for which one is greater n log a or f(n). here a=2.Solution: T (n) = θ( f (n)). . analyze program and mainly focus on following If else conditions Loops Return statement and the arguments of the returning function. T(n)=T(n-1)+θ(1) Explanation: statement ―return 1. Q 20. There are log3 n full levels. the problem size is down to 1. } Ans.‖ will be executed once and statement ―return p(n1). Each level contributes ≤ cn. each node represents the cost of a single sub-problem somewhere in the set of recursive function invocations. We sum the costs within each level of the tree to obtain a set of per-level costs. How to write Recurrence relation for a recursive program To determine the complexity of recursive program. Find recurrence relation for following function void p(int n) { If(n<=1) return 1. else return p(n-1).Recursion Tree Method In a recursion tree. For this . By summing log3/2 n we got running time =c nlog3/2 n which is equals to θ (n lg n). Then we can apply any of the above method to find complexity of program. the recursion tree shows the cost at each level of Recursion. and after log3/2 n levels. Example: T (n) = T (n/3)+T (2n/3)+θ(n). void p(int n) { If(n<=1) return 1. By summing across each level. Terminating condition. Find recurrence relation for following function int i. and then we sum all the per-level costs to determine the total cost of all levels of the recursion. Recursion trees are particularly useful when the recurrence describes the running time of a divide-and-conquer algorithm.‖causes recursive calls to itself and decreasing value of n. first we should identify the recurrence relation. Q 19. else return(recursive(n-1)+recursive(n-1)).i<=n.i++) { sum= sum+i. } } Ans. The recurrence equation T(1)=1 T(n)=2*T(n-1) + n . T(n)=2k*T(n-k) + n +2(n-1)+22(n-2)+……+2k-1(n-(k-1)) T(n)= 2n-1*T(n-(n-1)) + n +2(n-1)+22(n-2)+……+2n-1(n-(n-2)) After solving series u will get T(n)= 2n+1 – n – 2 Note: In this type of question you can also use hit and try method Take n=2 then T(n)=2*T(1)+2=4 only option (a) evaluates to 4 at n=2 . d Explanation: recurrence relation for above program is T(n)=2*T(n-1)+1 So we can find time complexity using iterative method which is O(2n). T(n)=T(n-1)+n Explanation: statement ―return 1. } (a) O(n) (b) O(n log n) (c) O(n2) (d) O(2n) Ans. 
n >1 Evaluates to (a) 2n+1 – n – 2 (b) 2n – n CS2004 (c) 2n+1 – 2n – 2 (d) 2n+ n CS2004 Ans.Statement ―return p(n-1). Q 21. Q 22. Running time of for loop depends on argument n .‖ will be executed once but for loop will be executed each time function is called. . a Explanation: Using iterative method T(n)=2*T(n-1) + n T(n)=2* (2*T(n-2) + n-1) + n = 22*T(n-2) + n +2(n-1) T(n)= 22* (2T(n-3)+n-2) ) + n +2(n-1)= 23*T(n-3) + n +2(n-1)+22(n-2) .else { for(i=1.‖causes recursive calls to itself and decreasing value of n. Time complexity of following C function is (assume n >0) int recursive(int n) { if(n = = 1) return 1. Third Pass: (12458) (12458) (12458) (12458) (12458) (12458) (12458) (12458) Finally. algorithm compares the first two elements.O(n) Average case performance . Data structure -Array Worst case performance -O(n2) Best case performance. comparing each pair of adjacent items and swapping them if they are in the wrong order. and the algorithm can terminate. but our algorithm does not know if it is completed.O(n2) Worst case space complexity . algorithm does not swap them. First Pass: (51428) ( 1 5 4 2 8 ). Here. Swap since 5 > 4 ( 1 4 5 2 8 ) ( 1 4 2 5 8 ). O(1) auxiliary . Data structure . and swaps them. Find the minimum value in the list 2. Repeat the steps above for the remainder of the list (starting at the second position and advancing each time) Here is an example of this sort algorithm sorting five elements: 64 25 12 22 11 11 25 12 22 64 11 12 25 22 64 11 12 22 25 64 11 12 22 25 64 Note: In bubble sort number of comparison is O(n2) and number of swaps is also O(n2) but in selection sort number of is O(n2) and number of swaps is O(n). The algorithm needs one whole pass without any swap to know it is sorted.О(n²) Worst case space complexity .Array Worst case performance -О(n²) Best case performance .О(n) total. ( 1 5 4 2 8 ) ( 1 4 5 2 8 ). Swap it with the value in the first position 3. Second Pass: (14258) (14258) ( 1 4 2 5 8 ) ( 1 2 4 5 8 ).SORTING ALGORITHMS BUBBLE SORT Bubble Sort is a simple sorting algorithm. Now. Swap since 5 > 2 ( 1 4 2 5 8 ) ( 1 4 2 5 8 ). and sort the array from lowest number to greatest number using bubble sort algorithm. since these elements are already in order (8 > 5). Selection sort takes less time in comparison to bubble sort. In each step. elements written in bold are being compared. It works by repeatedly stepping through the list to be sorted.O(1) auxiliary SELECTION SORT The algorithm works as follows: 1.О(n²) Average case performance . Swap since 4 > 2 (12458) (12458) (12458) (12458) Now. Let us take the array of numbers "5 1 4 2 8". the array is already sorted. the array is sorted. Here is an example of this sort algorithm sorting five elements: 64 25 12 22 11 First Pass: 64 is already in array and insert 25 25 64 12 22 1125 is compared with 64 and swapped Second Pass: Insert 12 25 12 64 22 1125 is compared with 64 and swapped 12 25 64 22 1112 is compared with 25 and swapped Third pass: Insert 22 12 25 22 64 1122 is compared with 64 and swapped 12 22 25 64 1122 is compared with 25 and swapped 12 22 25 64 1122 is compared with 12 Fourth Pass: Insert 11 12 22 25 11 6411 is compared with 64 and swapped 12 22 11 25 6411 is compared with 25 and swapped 12 11 22 25 6411 is compared with 22 and swapped 11 12 22 25 6411 is compared with 12 and swapped Data structure . Merge the two sublists back into one sorted list.О(n2) Worst case space complexity . Here is an example of this sort algorithm sorting seven elements: Data structure . 
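Before the insertion-sort discussion continues below, here are compact C versions of the first two sorts described above. These are added illustrative sketches, not code from the original notes: bubble_sort uses the "one whole pass without any swap means sorted" early exit mentioned above, and selection_sort performs one swap per pass, which is why its number of swaps is O(n). The sample arrays are the ones used in the worked examples above.

#include <stdio.h>

static void bubble_sort(int a[], int n)
{
    for (int pass = 0; pass < n - 1; pass++) {
        int swapped = 0;
        for (int j = 0; j < n - 1 - pass; j++) {
            if (a[j] > a[j + 1]) {                   /* adjacent pair out of order */
                int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                swapped = 1;
            }
        }
        if (!swapped) break;                         /* one clean pass => sorted */
    }
}

static void selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++)              /* find minimum of a[i..n-1] */
            if (a[j] < a[min]) min = j;
        int t = a[i]; a[i] = a[min]; a[min] = t;     /* one swap per pass */
    }
}

int main(void)
{
    int x[] = {5, 1, 4, 2, 8}, y[] = {64, 25, 12, 22, 11};
    bubble_sort(x, 5);
    selection_sort(y, 5);
    for (int i = 0; i < 5; i++) printf("%d ", x[i]);
    printf("| ");
    for (int i = 0; i < 5; i++) printf("%d ", y[i]);
    printf("\n");
    return 0;
}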
Each insertion overwrites a single value: the value being inserted.INSERTION SORT To perform an insertion sort.Array Worst case performance -Θ(n log n) Best case performance . 4. O(1) auxiliary MERGE SORT Conceptually. then it is already sorted. Otherwise: 2. The ordered sequence into which the element is inserted is stored at the beginning of the array in the set of indices already examined. Sort each sublist recursively by re-applying merge sort.О(n) total. 3.О(n2) Best case performance . begin at the left-most element of the array and insert each element encountered into its correct position. If the list is of length 0 or 1.O(n) Average case performance .Θ(n log n) . a merge sort works as follows 1.Array Worst case performance . Divide the unsorted list into two sublists of about half the size. This is called the partition operation.Θ(n log n) Worst case space complexity . from the list. Suppose there is a procedure for finding a pivot element which splits the list into two sub-lists each of which contains at least one-fifth of the elements. B Explanation: Since pivot always partition array in 1/5 and 4/5. while all elements with values greater than the pivot come after it (equal values can go either way). and then recurrence relation for quicksort is . The steps are: 1. called a pivot. Pick an element. Recursively sort the sub-list of lesser elements and the sub-list of greater elements.Θ(n log n) Worst case space complexity .Array Worst case performance -Θ(n2) when pivot selected always at the end or start of array Best case performance . which are always sorted. Reorder the list so that all elements with values less than the pivot come before the pivot. The base cases of the recursion are lists of size zero or one.Average case performance .Θ(n) auxiliary QUICK SORT Quicksort sorts by employing a divide and conquer strategy to divide a list into two sub-lists.Θ(n log n) Average case performance . the pivot is in its final position. After this partitioning. Here is an example of this sort algorithm sorting nine elements: Elements in black box are selected as pivot. 3.Θ(n) auxiliary Q 23. Consider the Quicksort algorithm. Then (A) T (n) ≤ 2T (n /5) + n (B) T (n) ≤ T (n /5) + T (4n /5) + n (C) T (n) ≤ 2T (4n /5) + n (D) T (n) ≤ 2T (n /2) + n CS2008 Ans. Data structure . Let T(n) be the number of comparisons required to sort n elements. 2. So option B is correct. There is a one to one correspondence between elements of the array and nodes of the tree. Note: This type of recurrence relation generally results in complexity O(n log n) Example: T(n)=T(n/9) + T(8n/9) + n=O(n log n) T(n)=T(n/4) + T(3n/4) + n=O(n log n) Q 24. for sorting n elements. The function of Heapify is to let i settle down to a position (by swapping itself with the larger of its children. What is the worst case time complexity of the quick sort? (A) θ (n) (B) θ (n log n) (C) θ (n2 ) (D) θ (n2log n) CS2009 Ans. Step II: Build Heap Operation: Let n be the number of nodes in the tree and i be the key of a tree. When Heapify is called both the left and right subtree of the i are Heaps. B HEAP SORT The heap data structure is an array object which can be easily visualized as a complete binary tree.T(n)=T(n/5) + T(4n/5) + n. Step I: The user inputs the size of the heap(within a specified limit). All nodes of heap also satisfy the relation that the key value at each node is at least as large as the value at its children. the program uses operation Heapify. which is filled from the left upto a point. 
Step IV: The program executes Heapify(new root) so that the resulting tree satisfies the heap property.This operation calls . Step V: Go to step III till heap is empty How to create heap: lets take an array of elements 10 6 12 15 8 . The tree is completely filled on all levels except possibly the lowest. For this. whenever the heap property is not satisfied)till the heap property is satisfied in the tree which was rooted at (i). Step III: Remove maximum element: The program removes the largest element of the heap(the root) by swapping it with the last element. In quick sort.The program generates a corresponding binary tree with nodes having randomly generated key Values. the (n/4)th smallest element is selected as pivot using an O(n) time algorithm. 13.8.14.13. Data structure. Complexity of finding largest element in max heap O(1).14. After deleting heapify operation performed. Complexity of deleting largest element in max heap O(n log n).Θ(n log n) Average case performance . Where ceil function return lower integer value.8. Which one of the following array represents a binary max-heap? (A) {25.12} (D) {25. Sorting Using heap For sorting 5 element deletemax is called 5 times.Array Worst case performance -Θ(n log n) Best case performance . N time calling Heapify operation causes heap sort complexity to O(n log n) .Θ(n log n) Worst case space complexity .13.10. if c is index of child then ceil((c-1)/2).13.Similarly.10.16} CS2009 .14.In Array implementation of heap sort if p is index of parent then index of child is 2p+1 and 2p+2 (when array index starts from 0).12} (C) {25.14} (B) {25.10.12.8. Heapify operation take O(log n) time.12. Θ(1) auxiliary Statement for Linked Answer Questions: 25 & 26 Consider a binary max-heap implemented using an array.16.16. Q 25.16.Θ(n) total.8.10. 8} (B) {14.12.10} (C) {14.10} Ans. Q26.8.12.12. Only heap in C option is correct.10} (D) {14. C Explanation: A 25 B 25 C 14 25 12 16 14 13 16 13 10 13 10 16 10 8 12 13 10 8 12 D 25 14 12 13 10 8 16 Elements in bold are violating heap condition.8.Ans.13.13.13.8.10.13. perform delete operation as 12 25 16 Deletemax 14 16 13 13 10 8 12 14 16 heapify 14 12 10 8 13 10 8 deletemax 14 14 8 heapify 13 12 8 12 14 12 8 10 13 10 13 10 . What is the content of the array after two delete operations on the correct answer to the previous question? (A) {14. D Explanation: Using heap in option c in previous question .12. when sorting technique maintains relative order of repeated data then it is called stable sorting technique. 170 Complexity: O (n*k) where k is number of digits. 075. b Explanation: Let n be d digit number then nk have kd digits at max . 090. 170. 802. 024. Example: for Unordered list 170.RADIX SORT Algorithm for radix sort is as follows: 1. but otherwise keep the original order of keys. nk]. Quick sort can be implemented as a stable sort depending on how the pivot is handled.if a sorting algorithm is able to sort data available on secondary storage then its called external sorting algorithm. Heap sort is non stable. selection.066. for some k>0 which is independent of n. 3. 024. In comparison based sorting algorithm only merge sort is external sorting algorithm and all other comparison based algorithm is used for sorting data in main memory. . Group the keys based on that digit. 075. 002. 802. Merge and radix sort are non in-place. Bubble. 090. Counting based.066. insertion. 045. In-place and non in-place sort. 045. Internal and External sorting algorithm. Third pass: sort according to 100th digit 002. 
radix and bucket sort. Stable and non stable sort. 090. 2. 3. Q27. Bubble. insertion. Note: All key in this sort should have same no of digit. 024. and for same digit order is maintained. heap and quick sort are in-place. 075. 002. If we use Radix sot to sort n integers in range (nk/12. selection. Repeat the grouping process with each more significant digit. 802. merge.sorting algorithm which require extra memory to perform sorting is called non in-place sorting algorithm. Stability of selection sort depends on its implementation (how conditions are handled). so time taken would be θ(n*k*d) ≈ θ(nk) . 066 Second pass: sort according to 10th digit 002.counting. the time taken would be (a) θ(n) (b) θ(kn) (c) θ(n log n) (d) θ(n2) Ans. 090. 4. Take the least significant digit (or group of bits. 045. 066 First pass: sort according to unit digit 170. 075. 045. 2. 802. both being examples of radices) of each key. (This is what makes the LSD radix sort a stable sort). 024. Comparison based and counting based Comparison based. quick and heap sort all are comparison based sorting algorithms.bubble. insertion and merge sort are stable. CLASSIFICATION OF SORTING ALGORITHMS 1. size can change dynamically.g. array. array.... integer. structure. Non-primitive data structures are derived from primitive data structures. e. specified by an address — a bit string that can be itself stored in memory and manipulated by the program. the condition for ―stack full‖ is (a) (top1 = MAXSIZE/2) and (top2 = MAXSIZE/2+1) (b) top1+ top2 = MAXSIZE (c) (top1 = MAXSIZE/2) or (top2 = MAXSIZE) (d) top1 = top2 -1 CS2004 Ans. Data structures are generally based on the ability of a computer to fetch and store data at any place in its memory. structure. The array implementation aims to create an array where the first element (usually at the zerooffset) is the bottom. STACK A stack is a last in.g.g. hiding any items already on the stack. In heterogeneous data structures elements are of different types. Linear / Non-linear: Linear data structures maintain a linear relationship between their elements.DATA STRUCTURES a data structure is a particular way of storing and organizing data in a computer so that it can be used efficiently. A single array A[1. or initializing the stack if it is empty.. like matrices. The program must keep track of the size. Non-linear data structures do not maintain any linear relationship between their elements.g. 3. d Explanation: two stack using single Array is implemented as top1 top2 As element inserted top1 and top2 moves inward when stack is full then top1 =top2 -1 @ @ @ # # # # # # . Variables top1 and top 2 (topl< top 2) point to the location of the topmost element in each of the stacks. The two stacks grow from opposite ends of the array. Q28.g. That is. array. like lists.g. The pop operation removes an item from the top of the list. character. e. or the length of the stack. e.. Primitive / Non-primitive: Primitive data structures are basic data structure and are directly operated upon machine instructions. in a tree. A stack can have any abstract data type as an element. 2. e.MAXSIZE] is used to implement two stacks. Classification of data structure: 1. e. or results in an empty list. Static / Dynamic: In static data structures the size cannot be changed after the initial allocation. Homogeneous / Heterogeneous: In homogeneous data structures all elements are of the same type. The push operation adds to the top of the list. If the space is to be used efficiently. union. 
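Q28's "stack full" condition is easy to see in code. The sketch below is an added illustration (it uses 0-based indices 0..MAXSIZE-1 rather than 1..MAXSIZE as in the question): stack 1 grows from the left, stack 2 grows from the right, and the array is full exactly when top1 == top2 - 1.

#include <stdio.h>

#define MAXSIZE 8

static int A[MAXSIZE];
static int top1 = -1;          /* stack 1 grows from the left  */
static int top2 = MAXSIZE;     /* stack 2 grows from the right */

static int push1(int v) { if (top1 == top2 - 1) return 0; A[++top1] = v; return 1; }
static int push2(int v) { if (top1 == top2 - 1) return 0; A[--top2] = v; return 1; }

int main(void)
{
    int i = 0;
    while (push1(i) && push2(100 + i)) i++;          /* fill from both ends */
    printf("full when top1=%d, top2=%d (top1 == top2-1)\n", top1, top2);
    return 0;
}

Because fullness is detected only when the two tops meet, no space is wasted, which is the "space is to be used efficiently" requirement in the question.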
array[0] is the first element pushed onto the stack and the last element popped off. but is characterized by only two fundamental operations: push and pop. first out (LIFO) abstract data type and data structure. The stack itself can therefore be effectively implemented as a two-element structure in C. A pop either reveals previously concealed items. e. Complexity of insertion and deletion in stack is O(n). In dynamic data structures. 4. and returns this value to the caller.. p(1) p(2) p(3) p(4) p(5) p(5) p(4) p(5) p(3) p(4) p(5) p(2) p(3) p(4) p(5) prints 1 p(2) p(3) p(4) p(5) Void main() { p(5). else { sum = 0. Maximum stack needed would be of size 4 and content of stack would be f(0) f(1) . run this program for n=3. 1. } else { p(n-1). } } Q29.i++) sum +=foo(i). printf(―%d‖. double sum.0. n). To check maximum space needed by above program. b Explanation: Recursive calls are implemented using stacks. return sum. } Prints 2 p(3) p(4) p(5) Prints 3 p(4) p(5) Prints 4 p(5) Prints 5 Statement for Linked Answer Questions 29 & 30: Consider the following C-function: double foo (int n) { int i. The space complexity of the above function is: (a) 0(1) (b) 0(n) (c) 0(n!) (d) 0(n2) CS2005 Ans.0.top1 top2 APPLICATIONS OF STACK.i<n. for (i =0. In recursion:Consider the program: Void p(int n) { if(n = = 1) { printf(―1‖) return 1. } This program runs using stack. if (n==O) return 1. f(2) f(3) Similarly, for input n space complexity of above program will be O(n) Q30. Suppose we modify the above function foo() and store the values of foo (i), 0<=I<n, as and when they are computed. With this modification, the time complexity for function foo() is significantly reduced. The space complexity of the modified function would be: (a) 0(1) (b) 0(n) (c) 0(n2) (d) 0(n!) CS2005 Ans. b Explanation: when we store of foo(i),,then we have to store n values, so the space complexity will remain same O(n). Note : Storing values of foo(i) will reduce time complexity, because by doing so we need not to call function recursively, instead we can look up for store values. 2. Expression Conversion An algebraic expression is a legal combination of operands and operators. Operand is the quantity (unit of data) on which a mathematical operation is performed. Operand may be a variable like x, y, z or a constant like 5, 4,0,9,1 etc. Operator is a symbol which signifies a mathematical or logical operation between the operands. Example of familiar operators include +,-,*, /, ^ etc. An Algebraic Expression can be represented using three different notations: INFIX: e.g. x+y, 6*3 etc this way of writing the Expressions is called infix notation. PREFIX: In the prefix notation, as the name only suggests, operator comes before the operands, e.g. +xy, *+xyz etc. POSTFIX: In the postfix notation, the operator comes after the operands, e.g. xy+, xyz+* etc. Note: INFIX notations are not as simple as they seem specially while evaluating them. To evaluate an infix expression we need to consider Operators' Priority and Associativity. For example, will expression 3+5*4 evaluate to 32 i.e. (3+5)*4 or to 23 i.e. 3+(5*4). To solve this problem Precedence or Priority of the operators were defined. Operator precedence governs evaluation order. An operator with higher precedence is applied before an operator with lower precedence. As we know the precedence of the operators, we can evaluate the expression 3+5*4 as 23. But wait, it doesn't end here what about the expression 6/3*2? Will this expression evaluate to 4 i.e. (6/3)*2 or to 1 i.e. 
6/(3*2).As both * and the / have same priorities, to solve this conflict, we now need to use Associativity of the operators. Operator Associativity governs the evaluation order of the operators of same priority. For an operator with left-Associativity, evaluation is from left to right and for an operator with right-Associativity; evaluation is from right to left.* , /, +, - have left Associativity. So the expression will evaluate to 4 and not 1. Note: We use Associativity of the operators only to resolve conflict between operators of same priority. Due to the above mentioned problem of considering operators' Priority and Associativity while evaluating an expression using infix notation, we use prefix and postfix notations. Both prefix and postfix notations have an advantage over infix that while evaluating an expression in prefix or postfix form we need not consider the Priority and Associativity of the operators. E.g. x/y*z becomes */xyz in prefix and xy/z* in postfix. Q31. The following postfix expression, containing single digit operands and arithmetic operators + and *, is evaluated using a stack. 52*34+52**+ Show the contents of the stack. (i) After evaluating 5 2 * 3 4 + (ii) After evaluating 5 2 * 3 4 + 5 2 (iii) At the end of evaluation. CS2000 Ans. 5 2 * 3 4 4 3 10 + 5 5 7 10 2 2 5 7 10 (ii) * 10 7 10 * + 5 2 5 10 3 10 7 10 (i) 70 10 80 (iii) Infix to Postfix Conversion : In normal algebra we use the infix notation like a + b * c. The corresponding postfix notation is abc*+. The algorithm for the conversion is as follows : Scan the Infix string from left to right. Initialize an empty stack. If the scanned character is an operand, add it to the Postfix string. If the scanned character is an operator and if the stack is empty Push the character to stack. If the scanned character is an Operand and the stack is not empty, compare the precedence of the character with the element on top of the stack (topStack). If topStack has higher precedence over the scanned character Pop the stack else Push the scanned character to stack. Repeat this step as long as stack is not empty and topStack has precedence over the character. Repeat this step till all the characters are scanned. (After all characters are scanned, we have to add any character that the stack may have to the Postfix string.) If stack is not empty add topStack to Postfix string and Pop the stack. Repeat this step as long as stack is not empty. Return the Postfix string. Example : Let us see how the above algorithm will be implemented using an example. Infix String : a+b*c-d Initially the Stack is empty and our Postfix string has no characters. Now, the first character scanned is 'a'. 'a' is added to the Postfix string. The next character scanned is '+'. It being an operator, it is pushed to the stack. + Stack a Postfix String Next character scanned is 'b' which will be placed in the Postfix string. Next character is '*' which is an operator. Now, the top element of the stack is '+' which has lower precedence than '*', so '*' will be pushed to the stack. * + Stack ab Postfix String The next character is 'c' which is placed in the Postfix string. Next character scanned is '-'. The topmost character in the stack is '*' which has a higher precedence than '-'. Thus '*' will be popped out from the stack and added to the Postfix string. Even now the stack is not empty. Now the topmost element of the stack is '+' which has equal priority to '-'. So pop the '+' from the stack and add it to the Postfix string. 
The '-' will be pushed to the stack. Stack abc*+ Postfix String Next character is 'd' which is added to Postfix string. Now all characters have been scanned so we must pop the remaining elements from the stack and add it to the Postfix string. At this stage we have only a '-' in the stack. It is popped out and added to the Postfix string. So, after all characters are scanned, this is how the stack and Postfix string will be : abc*+dStack End result : Infix String : a+b*c-d Postfix String : abc*+dQ32. Assume that the operators +, -, x, are left associative and ^ is right associative. The order of precedence (from highest to lowest) is ^, x, +, -. The postfix expression corresponding to the infix expression a + b x c – d ^ e ^ f is (a) a b c x + d e f ^ ^(b) a b c x + d e ^ f ^ (c) a b + c x d – e ^ f ^ (d) - + a x b c ^ d e f CS2004 Ans. A Explanation: postfix expression can be evaluated using stack Step 1. First element is a which is an operand add it to list and second character is + operator push it to stack Postfix String + Postfix String Stack Step 2.next character is b add it to list. Next character is * operator . At top of stack there is + operator which is of low precedence than *, so push * to stack a * ab 2) Examine the next element in the input. 9) Reverse the output string. Postfix String is: a b c * + d e f ^ ^ Note: If top of stack and incoming operator are of same precedence and left associative then we first pop the operator and add it to list and push new incoming operator. Pop and discard the closing parenthesis. 3) If it is operand. here we check for associative property of ^ operator. At top of stack there is ^ operator. pop operators from stack and add them to output string until a closing parenthesis is encountered. push operator on stack.+ Postfix String Stack Step 3. at top of stack there is + which is higher precedence than . push operator on stack. . so push ^ to stack. ^ ^ abc*+def Postfix String Stack Now pop all stack content and add to list.pop * from stack and add it to list. add it to output string. 5) If it is an operator. . pop the remaining operators and add them to output string. push operator on stack. at top of stack there is * which is higher precedence than -.^ is right associative. iv) Else pop the operator from the stack and add it to output string. 7) If there is more input go to step 2 8) If there is no more input. Add d to list.operator which is of low precedence than ^. abc*+ ^ Stack abc*+d Postfix String Step 5. At top of stack there is . iii) If it has same or higher priority than the top of stack.Similarly. Next character is ^ operator . Converting Expression from Infix to Prefix using STACK In this algorithm we first reverse the input expression so that a+b*c will become c*b+a and then we do the conversion and then again the output string is reversed. Push e to list. so push ^ to stack. ii) If the top of stack is closing parenthesis. Add f to list.. Next character is ‗ – ‗ operator. Now . pop + from stack and add it to list and push – to stack Postfix String Stack Step 4. push it on stack. 6) If it is a opening parenthesis. Next character is ^ operator. Doing this has an advantage that except for some minor modifications the algorithm for Infix->Prefix remains almost same as the one for Infix->Postfix. 4) If it is closing parenthesis. then i) If stack is empty. repeat step 5. add c to list. Algorithm 1) Reverse the input string. 
the final Prefix Expression is +/*23-21*5-41 All the remaining conversions can easily be done using a Binary Expressions Tree. The following Figure shows an expression tree for above expression 2*3/(2-1)+5*(4-1). Reverse String )1-4(*5+)1-2(/3*2 Char Scanned ) 1 4 ( * 5 + ) 1 2 ( / 3 * 2 Stack Contents(Top on right) ) ) ))Empty * * + +) +) +)+)+ +/ +/ +/* +/* Empty 1 1 14 141414-5 14-5* 14-5* 14-5*1 14-5*1 14-5*12 14-5*1214-5*1214-5*12-3 14-5*12-3 14-5*12-32 14-5*12-32*/+ Prefix Expression(right to left) Reverse the output string : +/*23-21*5-41 So. left child and then right child) on the Binary Expression Tree we get prefix notation of the expression. Once we obtain the Expression Tree for a particular expression. When we run Pre-order traversal (visit root. and postfix) and evaluation become a matter of Traversing the Expression Tree.Example: Suppose we want to convert 2*3/(2-1)+5*(4-1). Note: A binary expression tree does not contain parenthesis. similarly an Post-order traversal . its conversion into different notations (infix. structure of tree itself decides order of the operations. prefix. root node contain the operator that is applied to result of left and right sub trees. the reason for this is that for evaluating an expression using expression tree. Binary Expression Tree An Expression Tree is a strictly binary tree in which leaf nodes contain Operands and nonleaf nodes contain Operator. Deque extends the notion of a queue. 2) Run in-order traversal on the tree. In a deque. QUEUE A queue is a pile in which items are added an one end and removed from the other. What will we get from an in-order Traversal (visit left child. Doing the Conversions with Expression Trees Prefix -> Infix The following algorithm works for the expressions whose infix form does not require parenthesis to override conventional precedence of operators. a queue is used when a sequence of activities must be done on a first-come.b c d k Explanation: to solve this type of question. 1) Create the Expression Tree from the prefix expression. for the expressions which do not contain parenthesis. In a sense. items can be added to or removed from either end of the queue. they join the end of the queue while the teller serves the customer at the head of the queue. Deque(): remove element from front end of queue. in-order traversal will definitely give infix notation of expression but expressions whose infix form requires parenthesis to override conventional precedence of operators can not be retrieved by simple in-order traversal. + / k * - d a b c Now prefix and postfix can easily be found using preorder and post order traversal of tree respectively. Q 33. right child and then root) will yield postfix notation.Write down postfix and prefix expression for infix expression a *((b-c)/d)+k Ans: postfix: a b c – d / * k + Prefix: + * a / . root and then right child)? Well. a deque is the more general abstraction of which the stack and the queue are just special cases. Prefix -> Postfix 1) Create the Expression Tree from the prefix expression. As customers arrive. first create binary expression tree.(visit left child. As a result. a queue is like the line of customers waiting to be served by a bank teller. Example: . first-served basis. In this respect. 2) Run post-order traversal on the tree. In queue there are two basic operation Enque(): add element at the rear end of queue. Q34. For deleting an element. In a queue. search through all elements for the one with the highest priority. 
first pop elements from s1 and push to s2 until s1 is empty. They provide an analogy to help one understand what a priority queue is: Sorted list implementation: Like a checkout line at the supermarket. then pop an element from s2(this is the first element in queue). Example: A linked list whose nodes contain two fields: an integer value and a link to the next node Linked lists are among the simplest and most common data structures.. For inserting new element. Priority Queue One can imagine a priority queue as a modified queue. the priority of each inserted element is monotonically increasing. thus. push it to s1 while s2 remains empty. append it to the end. the last element inserted is always the first retrieved. the highest-priority one is retrieved first.e. (O(log(n)) insertion time (can binary search for insertion position) if implemented using arrays. Since stack have only one end. There are a variety of simple. usually inefficient.A bounded queue is a queue limited to a fixed number of items and is implemented using array. O(n) insertion time if implemented using linked lists. the priority of each inserted element is monotonically decreasing. giving O(log n) performance for inserts and removals. Let we have two stack s1 and s2. To add an element. including stacks. O(n) get-next due to search) These implementations are usually inefficient. they provide an easy implementation for several important abstract data structures. thus. ways to implement a priority queue. a link) to the next record in the sequence. but when one would get the next element off the queue. LINKED LIST a linked list is a data structure that consists of a sequence of data records such that in each record there is a field that contains a reference (i. the first element inserted is always the first retrieved. (O(1) insertion time. Stacks and queues may be modeled as particular kinds of priority queues. inserting at one end and deletion at another end is not possible with one stack. The . but where important people get to "cut" in front of less important people. queues. To get the next element. What is the minimum number of stacks of size n required to implement a queue of size n? (a) One (b) Two (c) Three (d) Four CS2001 Ans. In a stack. priority queues typically use a heap as their backbone. To get better performance. b Explanation: queue can be implemented using minimum two stack. O(1) get-next time) Unsorted list implementation: Keep a list of elements as the queue. or finding a node that contains a given datum. Linked List implementation of stack The linked-list implementation is equally simple and straightforward. linked lists by themselves do not allow random access to the data. Thus. or locating the place where a new node should be inserted — may require scanning most of the list elements. Inserting a node: Inserting a node before an existing one cannot be done.principal benefit of a linked list over a conventional array is that the order of the linked items may be different from the order that the data items are stored in memory or on disk. Example: doubly-linked list whose nodes contain three fields: an integer value. otherwise it is said to be open or linear. 2. Deleting a node: To find and remove a particular node. a special value that is interpreted by programs as meaning "there is no such node". the number of comparisons needed to search a singly linked list of length n for a given element is . Q35. For that reason. the link forward to the next node. Linked list operations 1. In fact. 
linked lists allow insertion and removal of nodes at any point in the list. each node contains. On the other hand. A less common convention is to make it point to the first node of the list. the link field often contains a null reference. instead. with a constant number of operations. The two links may be called forward(s) and backwards. or any form of efficient indexing. or popped. and the link backward to the previous node. or next and previous). you have to locate it while keeping track of the previous node. besides the next-node link. and a node can only be inserted by becoming the new head node. in that case the list is said to be circular or circularly linked. many basic operations — such as obtaining the last node of the list. In a doubly-linked list. a stack linkedlist is much simpler than most linked-list implementations: it requires that we implement a linked-list where only the head node or element can be removed. a second link field pointing to the previous node in the sequence. In the worst case. Linear and circular lists In the last node of a list. one must again keep track of the previous element. To which node should p point such that both the operations enQueue and deQueue can be performed in constant time? (a) rear node (b) front node (c) not possible with a single pointer (d) node next to front CS2004 Ans. struct item * next. Q36. Consider the function f defined below struct item { int data.(a) log n (b)1 (c) log n – 1 (d) n CS2002 Ans. B Explanation: return statement in f either returns null value when linked list ends or check if data at current node is less than next node . if it is false then it will not check for right side operand. Q37. (a) Explanation: A circular liked list p If circular linked list used to represent a queue and we assign rear to list then rear . }. the function f returns 1 if and only if (A) The List is empty or has exactly one element. If in middle current node data is greater next node data then it will not execute next statement and returns 0. Note: We cannot apply binary search in linked list because we cannot find middle of linked list. } For a given linked list p.then it calls f recursively for next node. (B) The elements in the list are sorted in non-decreasing order of data value (C) The elements in the list are sorted in non-increasing order of data value (D) Not all elements in the list have the same data value CS2003 Ans. int f(struct item *p) { return ( (p = = NULL) || ( p -> next = = NULL) || ( ( p-> data <= p-> next ->data ) && f( p -> next ) ) ) . Note: in C && operator checks for left side operand first. A single variable p is used to access the Queue. since next element will be accessed when previous element is accessed. d Explanation: in singly link list searching start from first element to last element. A circularly linked list is used to represent a Queue. struct node *next.and loop terminates 6 7 5 p 6 6 q 5 7 7 p .4.3.6.7 (B) 2.2.3. The following C function takes a single-linked list of integers as a parameter and rearranges the elements of the list. But if we assign front to p then we cannot find last element in constant time since we cannot traverse singly linked list in reverse orders.4. B Explanation: list 1 2 3 4 5 6 7 CS2008.7.3. list 1 2 3 4 5 6 7 p q Step 2: q is not 0 so enter in while loop.after executing loop second time list 2 1 4 3 Now p-> next is NULL so q becomes null . q= list-> next.6. IT2005 Step 1: list is not null so assign p=list. p ->value= q-> value.2. while (q) { temp =p-> value. 
} } (A) 1.1 Ans.7 (C) 1. }.2.6.1.3.4.3.7.6 (D) 2.5.We can easily trace last and first element of queue. q= list-> next. * q.4.5. if( !list || !list -> next) return. Void rearrange( struct node *list ){ struct node *p.after executing loop second time list 2 1 4 3 Step 4: since q is not 0.5. What will be the contents of the list after the function completes execution? struct node { int value. rear is pointing last element and next element to the rear is first element of queue. p= q-> next. Q38. int temp. after executing loop first time list 2 1 3 4 5 p q Step 3: since q is not 0.7 in the given order. q ->value= temp.5. q=p?p-> next : 0.4.5.6. The function is called with the list containing the integers 1. p=list. A subtree of a tree T is a tree consisting of a node in T and all of its descendants in T. The number of leaves in such a tree with n internal nodes is: (a) nk (b) (n—1) k+ 1 (c) n(k— 1) + 1 (d)n(k— 1) CS 2005 Ans. Furthermore. has no parent. we take a 4-ary tree In this tree internal node is 3 and leaf node is 10. All other nodes can be reached from it by following edges or links. A binary tree consists of . or represent a separate data structure (which could be a tree of its own). a condition. The topmost node in a tree is called the root node. The height of a node is the length of the longest downward path to a leaf from that node. Note: there is always unique path traverses from the root to each node. at the top. In a complete k-ary tree. The root node will not have parents. the subtree corresponding to any other node is called a proper subtree. The depth of a node is the length of the path to its root. The height of the root is the height of the tree. Nodes that do not have any children are called leaf nodes. Lets. the children of each node have a specific order. labeled 2 and 6. The subtree corresponding to the root node is the entire tree. Binary Trees The simplest form of tree is a binary tree. The root node. It is the node at which operations on the tree commonly begin. in this diagram. Q39. They are also referred to as terminal nodes. it is not a tree. A node is a structure which may contain a value. and one parent. Trees store data in a hierarchical manner. but an acyclic connected graph where each node has zero or more children nodes and at most one parent node. c Explanation: take example of any complete k-ary tree. A simple unordered tree. An internal node or inner node is any node of a tree that has child nodes and is thus not a leaf node. Mathematically. Only option c satisfy the values. labeled 2. Each node in a tree has zero or more child nodes. the node labeled 7 has two children.Final values in list are 2 1 4 3 6 5 7 TREES A tree is a widely-used data structure that emulates a hierarchical tree structure with a set of linked nodes. every internal node has exactly k children. . or two children (left and right).> root Example : for above binary tree preorder traversal generates: 1 4 7 6 3 13 14 10 8 Q41 Draw all binary trees having exactly three nodes labeled A. respectively. or further nested.A. each of which are themselves binary trees This recursive definition uses the term "empty tree" as the base case Every non-empty node has two children. as we know preorder traversal is root.> right Example : for above binary tree inorder traversal generates: 1 3 4 6 7 8 10 13 14 2.. Preorder traversal: root . Inorder traversal: left . } Traversal in binary tree 1. Both the sub-trees are themselves binary trees.. 
struct node *left.> right Example : for above binary tree preorder traversal generates: 8 3 1 6 4 7 10 14 13 3. Consider the following nested representation of binary trees: (X Y Z) indicates Y and Z are the left and right sub stress.B. and zero.> left . Postorder traversal: left . In given sequence C is first so its root.> right .. So only option c is correct. of node X.1. Note that Y and Z may be NULL. In option b there is missing parentheses. There are Five possibilities C C C C C B A B A B A B B A A . left and right sub-trees. one. In option d (4 5) contains two element only. struct node *right. B and C on which Preorder traversal gives the sequence C. a node (called the root node) and 2. either of which may be empty. Q40. Which of the following represents a valid binary tree? (a) (1 2 (4 5 6 7)) (b) (1 (2 3 4) 5 6) 7) (c) (1 (2 3 4)(5 6 7)) (d) (1 (2 3 NULL) (4 5)) CS2000 Ans.> root . Representation of node in C Struct node { int item. left. 3. Or in other words: A binary tree is either: -an empty tree -consists of a node. called a root.. c Explanation: As per definition in question binary tree should have triplet like (X Y Z).. right traversal. In option a (4 5 6 7) contains for element. CS2002 Ans. 8. 4. By having these two sequences we can create unique binary search tree. 1 (B) 1. 2. And in inorder sequence left is visited first . 1. When the tree is traversed in pre-order and the values in each node printed out. 6. 4. Q42. 8. 3. 5 3 678 5 12 4 3 Similarly. 5 (D) 2. 5 (C) 2. 3. Binary Expression Tree –We have discussed binary expression tree earlier under the topic ―expression conversion‖. 1. 4. and inorder sequence of binary search tree (BST) is sorted data. and 4 is right subtree. 5. 5 IT2005 Ans. 4. A binary search tree contains the numbers 1. Explanation: pre-order sequence is given. the sequence of values obtained is 5. 7. 5 1234 678 Step 2: Now Draw Left subtree of root 5 Pre-order sequence is 3 1 2 4 In order sequence is 1 2 3 4 From preorder sequence we can find root for subtree which is 3 and by inorder sequence 1 2 is left subtree of root 3. so 1 2 3 4 form left subtree of root 5. 3.Note: You cannot draw unique binary tree only with either preorder or postorder sequence or both. 7. 2. 4. Similarly. 3. solve for 1 2 and 6 7 8. 6 7 8 forms right subtree of 5. 1. 8. 6. There should be a inorder sequence given in addition to either of them. 6. 7. 7. 3. 3. 8. 7. 6. 6. 6. 2. 6 1 2 4 7 8 APPLICATIONS OF BINARY TREES 1. 7. . 5. 2. 1 2 3 4 are at left of 5. the sequence obtained would be (A) 8. If the tree is traversed in post-order. Pre-order : 5 3 1 2 4 6 8 7 Inorder: 1 2 3 4 5 6 7 8 Step 1: in preorder sequence root is visited first so 5 is root of BST. 4. 8. Otherwise. the keys of all the nodes in the left sub-tree are less than that of the root. the keys of all the nodes in the right sub-tree are greater than that of the root. choose either its in-order successor node or its in-order predecessor node. Binary Search Tree – It is an ordered binary tree in which. Do not delete N. if the root is not equal to the value. If the searched value is not found before a null subtree is reached. Note: Each node (item in the tree) has a distinct key. Deletion in binary search tree There are three possible cases to consider: Deleting a leaf (node with no children): Deleting a leaf is easy.2. then delete R. the search is successful. 1. when every node have single child. Worst case searching in BST is O(n). 2. If the tree is null. "R". Similarly. Eventually. depending on the node's value. 
APPLICATIONS OF BINARY TREES
1. Binary Expression Tree: binary expression trees were discussed earlier under the topic "expression conversion".
2. Binary Search Tree: an ordered binary tree in which the keys of all nodes in the left sub-tree are less than the key of the root, the keys of all nodes in the right sub-tree are greater than that of the root, and the left and right sub-trees are themselves ordered binary trees. Each node (item in the tree) has a distinct key.
Note: inorder traversal of a binary search tree generates sorted data.

Searching: searching a binary tree for a specific value can be a recursive or an iterative process. We begin by examining the root node. If the tree is null, the value we are searching for does not exist in the tree. If the value equals the key at the root, the search is successful. Otherwise, if the value is less than the root we search the left subtree, and if it is greater than the root we search the right subtree. This process is repeated until the value is found or the indicated subtree is null; if a null subtree is reached, the item is not present in the tree. Searching takes time proportional to the height of the tree: O(log n) in the average case over all trees, but Ω(n) in the worst case, for example when every node has a single child and the tree degenerates into a chain.

Insertion: insertion begins as a search would begin. We examine the root and recursively insert the new node into the left subtree if the new value is less than the root, or into the right subtree if the new value is greater than or equal to the root. Eventually we reach an external node and add the value as its left or right child, depending on that node's value. This operation requires time proportional to the height of the tree in the worst case.

Deletion: there are three cases to consider.
Deleting a leaf (a node with no children) is easy, as we can simply remove it from the tree.
Deleting a node with one child: delete it and replace it with its child.
Deleting a node with two children: call the node to be deleted N. Do not delete N; instead choose either its in-order successor node or its in-order predecessor node, call it R, replace the value of N with the value of R, and then delete R (R has at most one child, so one of the first two cases applies).

Q43. The following numbers are inserted into an empty binary search tree in the given order: 10, 1, 3, 5, 15, 12, 16. What is the height of the binary search tree (the height is the maximum distance of a leaf node from the root)?
(a) 2 (b) 3 (c) 4 (d) 6 CS2004
Ans. b
Explanation: building the binary search tree for this sequence puts 10 at the root, 1 and 15 as its children, 3 as the right child of 1, 5 as the right child of 3, and 12 and 16 as the children of 15. The longest root-to-leaf path, 10-1-3-5, has length 3, so the height of the tree is 3.

Q44. Suppose ten distinct numbers, a permutation of 0 through 9 beginning with 7, are inserted one by one into an initially empty binary search tree. The binary search tree uses the usual ordering on natural numbers. What is the in-order traversal sequence of the resultant tree?
(A) 7 5 1 0 3 2 4 6 8 9 (B) 0 2 4 3 1 6 5 9 8 7 (C) 0 1 2 3 4 5 6 7 8 9 (D) 9 8 6 4 2 3 0 1 5 7 CS2003
Ans. C
Explanation: the in-order traversal of a binary search tree always gives the data in sorted order, regardless of the order of insertion.
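The search and insertion procedures described above translate almost line for line into C. A minimal sketch, reusing the struct node from earlier (the malloc-based allocation is an assumption of this sketch):

    #include <stdlib.h>

    struct node {
        int item;
        struct node *left;
        struct node *right;
    };

    /* returns the node containing key, or NULL if key is absent */
    struct node *bst_search(struct node *root, int key) {
        if (root == NULL || root->item == key)
            return root;
        if (key < root->item)
            return bst_search(root->left, key);
        return bst_search(root->right, key);
    }

    /* inserts key and returns the (possibly new) root of the subtree */
    struct node *bst_insert(struct node *root, int key) {
        if (root == NULL) {
            struct node *n = malloc(sizeof *n);
            n->item = key;
            n->left = n->right = NULL;
            return n;
        }
        if (key < root->item)
            root->left = bst_insert(root->left, key);
        else                          /* keys >= root go to the right */
            root->right = bst_insert(root->right, key);
        return root;
    }

Inserting the keys of Q43 with repeated calls to bst_insert and then running the inorder() function from the earlier sketch prints 1 3 5 10 12 15 16, the sorted order, which is exactly the property Q44 relies on.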
Huffman Code
Huffman coding is based on the frequency of occurrence of a data item (for example, a pixel value in an image). The principle is to use fewer bits to encode the data that occur more frequently. The algorithm is:
Initialization: put all nodes in an OPEN list, kept sorted by frequency at all times (e.g. ABCDE).
Repeat until the OPEN list has only one node left:
(a) from OPEN pick the two nodes having the lowest frequencies/probabilities, create a parent node for them, and delete the two children from OPEN;
(b) assign the sum of the children's frequencies/probabilities to the parent node and insert it into OPEN;
(c) assign code 0 and code 1 to the two branches of the tree.

Q45. Create a Huffman code for a 6-symbol alphabet with the following symbol frequencies: A = 1, B = 2, C = 4, D = 8, E = 16, F = 32.
Ans: create OPEN in increasing order of frequency: {A B C D E F}.
Step 1: A and B have the lowest frequencies; remove them from the list and create a new node p1 with frequency 1 + 2 = 3, with A as the left child of p1 and B as the right child. Insert p1 into the list and keep it sorted; OPEN becomes {p1 C D E F}.
Step 2: now remove p1 and C from OPEN and create a new node p2 with frequency 3 + 4 = 7, with p1 as the left child of p2 and C as the right child. Insert p2 into the list and keep it sorted; OPEN becomes {p2 D E F}.
Repeating the same step until only one element remains in OPEN gives the final tree, and the Huffman code of each symbol is read off by traversing from the root down to that symbol:
A = 00000, B = 00001, C = 0001, D = 001, E = 01, F = 1.
Note: in the example above no codeword is a prefix of any other codeword; for this reason a Huffman code is also called a Huffman prefix code.
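The OPEN-list procedure of Q45 can be sketched in C with plain arrays, merging the two smallest unmerged nodes at each step; run on the frequencies of Q45 it prints exactly the codes listed above. This is only a small illustrative sketch: the array names, the fixed 6-symbol alphabet and the linear scan used in place of a real priority queue are all assumptions of the sketch.

    #include <stdio.h>

    #define NSYM 6
    #define MAXN (2 * NSYM - 1)

    long freq[MAXN];
    int lchild[MAXN], rchild[MAXN], used[MAXN];

    /* index of the unused node with the smallest frequency */
    static int pick_min(int count) {
        int i, best = -1;
        for (i = 0; i < count; i++)
            if (!used[i] && (best == -1 || freq[i] < freq[best]))
                best = i;
        return best;
    }

    /* print the code of every leaf by walking down the tree */
    static void print_codes(int node, char *buf, int depth) {
        if (lchild[node] == -1) {              /* leaf: leaf index = symbol index */
            buf[depth] = '\0';
            printf("%c = %s\n", 'A' + node, buf);
            return;
        }
        buf[depth] = '0'; print_codes(lchild[node], buf, depth + 1);
        buf[depth] = '1'; print_codes(rchild[node], buf, depth + 1);
    }

    int main(void) {
        long f[NSYM] = {1, 2, 4, 8, 16, 32};   /* A..F from Q45 */
        char buf[MAXN];
        int i, count = NSYM;

        for (i = 0; i < NSYM; i++) { freq[i] = f[i]; lchild[i] = rchild[i] = -1; }

        /* NSYM - 1 merge steps, exactly as in the OPEN-list description */
        while (count < MAXN) {
            int a = pick_min(count); used[a] = 1;
            int b = pick_min(count); used[b] = 1;
            freq[count] = freq[a] + freq[b];
            lchild[count] = a; rchild[count] = b;
            count++;
        }
        print_codes(MAXN - 1, buf, 0);         /* last node created is the root */
        return 0;
    }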
AVL Trees
An AVL tree is a self-balancing binary search tree: in an AVL tree the heights of the two child subtrees of any node differ by at most one, so it is also said to be height-balanced. Search, insertion and deletion all take O(log n) time in both the average and the worst case, where n is the number of nodes in the tree prior to the operation. Insertions and deletions may require the tree to be rebalanced by one or more tree rotations.
The balance factor of a node is the height of its left subtree minus the height of its right subtree. A node with balance factor 1, 0 or -1 is considered balanced; a node with any other balance factor is considered unbalanced and requires rebalancing the tree. The balance factor is either stored directly at each node or computed from the heights of the subtrees.
Searching: searching in an AVL tree is performed exactly as in an unbalanced binary search tree.
Insertion: after inserting a node, it is necessary to check each of the node's ancestors for consistency with the rules of AVL. For each node checked, if the balance factor remains -1, 0 or +1 then no rotation is necessary; if the balance factor becomes +2 or -2 then the subtree rooted at this node is unbalanced. If insertions are performed serially, then after each insertion at most two tree rotations are needed to restore the entire tree to the rules of AVL. There are four cases which need to be considered, of which two are symmetric to the other two. Let P be the root of the unbalanced subtree, L the left child of P and R the right child of P. (In the figures of the original text, which are not reproduced here, the root shown is the node with balance factor +2 or -2 that violates the AVL condition, and the pivot is the node that becomes the root after the rotation.)
Right-Right case and Right-Left case: if the balance factor of P is -2, the right subtree outweighs the left subtree of the given node, and the balance factor of the right child R must be checked. If the balance factor of R is <= 0, a single left rotation with P as the root is needed. If the balance factor of R is +1, a double left rotation is needed: the first rotation is a right rotation with R as the root, the second a left rotation with P as the root.
Left-Left case and Left-Right case: if the balance factor of P is +2, the left subtree outweighs the right subtree of the given node, and the balance factor of the left child L must be checked. If the balance factor of L is >= 0, a single right rotation with P as the root is needed. If the balance factor of L is -1, a double right rotation is needed: the first rotation is a left rotation with L as the root, the second a right rotation with P as the root.
Deletion: if the node is a leaf, remove it. If the node is not a leaf, replace it with either the largest key in its left subtree (its in-order predecessor) or the smallest key in its right subtree (its in-order successor), and remove that node instead; the node found as a replacement has at most one subtree. After deletion, retrace the path from the parent of the replacement back up the tree to the root, adjusting the balance factors (and rotating) as needed.

Q46. What is the maximum height of any AVL tree with 7 nodes? Assume that the height of a tree with a single node is 0.
(A) 2 (B) 3 (C) 4 (D) 5 CS2009
Ans. B
Explanation: try to build an AVL tree of 7 nodes that is as tall as possible (the sparsest AVL tree of each height is the Fibonacci tree); with 7 nodes the greatest achievable height is 3. In general the maximum height of an AVL tree with n nodes is about 1.44 log2 n.

Q47. A weight-balanced tree is a binary tree in which, for each node, the number of nodes in the left sub-tree is at least half and at most twice the number of nodes in the right sub-tree. The maximum possible height (number of nodes on the path from the root to the furthest leaf) of such a tree on n nodes is best described by which of the following?
(a) log2 n (b) log4/3 n (c) log3 n (d) log3/2 n CS2001
Ans. d
Explanation: the balance condition means each subtree of a node contains at least one third of the nodes below it, so going one level down discards at least a third of the remaining nodes; the height is therefore at most about log3/2 n, and this bound is attainable.

B TREES
A B-tree is a specialized multiway tree designed especially for use on disk (secondary storage); it is commonly used in databases and file systems. The B-tree is a generalization of a binary search tree in that more than two paths diverge from a single node, and it is optimized for systems that read and write large blocks of data. In a B-tree each node may contain a large number of keys, and the number of subtrees of each node may also be large. A B-tree of order m (the maximum number of children for each node) is a tree which satisfies the following properties:
1. Every node has at most m children.
2. Every node (except the root) has at least ceil(m/2) children.
3. The root has at least two children if it is not a leaf node.
4. A non-leaf node with k children contains k-1 keys.
5. All leaves appear in the same level, and carry information.

Example B-tree: the following example (figure not reproduced) is a B-tree of order 5, so the maximum number of children a node can have is 5 and 4 is the maximum number of keys. By property 2, every node other than the root has at least ceil(5/2) = 3 children, which means at least 2 keys. In practice B-trees usually have orders a lot bigger than 5.

Inserting a New Item
When inserting an item, first do a search for it in the B-tree. If the item is not already in the B-tree, this unsuccessful search will end at a leaf. If there is room in this leaf, just insert the new item here; note that this may require that some existing keys be moved one position to the right to make room for the new item. If instead this leaf node is full, so that there is no room to add the new item, then the node must be "split", with about half of the keys going into a new node to the right of this one. The median (middle) key is moved up into the parent node; of course, if that node has no room, it may have to be split as well. Note that when adding to an internal node, not only might some keys have to be moved one position to the right, but the associated pointers have to be moved right as well. If the root node is ever split, the median key moves up into a new root node, which is how a B-tree increases in height. Let's work our way through an example similar to that given by Kruse.
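Before walking through the example, a node of such a B-tree might be declared as follows in C. This is only a sketch for reference during the example: the field names, the fixed order of 5 and the explicit key count are assumptions of the sketch, not part of the original text.

    #define ORDER 5                       /* maximum number of children */
    #define MAX_KEYS (ORDER - 1)          /* maximum number of keys     */

    struct btree_node {
        int nkeys;                              /* number of keys currently stored      */
        int key[MAX_KEYS];                      /* keys, kept in ascending order        */
        struct btree_node *child[ORDER];        /* child[i] holds keys less than key[i];
                                                   child[nkeys] holds the largest keys  */
        int is_leaf;                            /* 1 if the node has no children        */
    };

A full node therefore has 4 keys and 5 children, and splitting such a node sends its median key up to the parent, exactly as the walkthrough below describes.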
Insert the following letters into what is originally an empty B-tree of order 5: C N G A H E K Q M F W L T Z D P R X Y S Order 5 means that a node can have a maximum of 5 children and 4 keys. so we split it into 2 nodes. Note that in practice we just leave the A and C in the current node and place the H and N into a new node to the right of the old one. W. The first 4 letters get inserted into the same node. If the root node is ever split. with 2 keys in each of the resulting nodes. resulting in this picture When we try to insert the H. thus causing the tree to increase in height by one. so it splits. . delete the T. and Y are then added without any need of splitting: Finally. That way. R. we find its successor (the next item in ascending order). since this leaf has extra keys. Deleting an Item In the B-tree as we left it at the end of the last section. what we really have to do is to delete W from the leaf. which we already know how to do. Since T is not in a leaf. Of course. The letters P. the node with N. sending the median Q up to the parent. Since H is in a leaf and the leaf has more than the minimum number of keys. X. However.The insertion of D causes the leftmost leaf to be split. sending the median M up to form a new root node. Q. we first do a lookup to find H. by using this method. the parent node is full. In ALL cases we reduce deletion to a deletion in a leaf. and move W up to replace the T. this is easy. and R splits. D happens to be the median key and so is the one moved up into the parent node. This gives: Next. when S is added. which happens to be W. delete H. P. Note how the 3 pointers from the old parent node stay in the revised node that contains D and G. We move the K over where the H had been and the L over where the K had been. If this problem node had a sibling to its immediate left or right that had a spare key. the N P node would be attached via the pointer field to the right of M's new location. you immediately see that the parent node now contains only one key. let's combine the leaf containing F with the leaf containing A C. delete R. the S is moved over so that the W can be inserted in its proper place. the successor W of S (the last key in the node where the deletion occurred). this leaf does not have an extra key. Of course. Since in our example we have no way to borrow a key from a sibling. we can then borrow a key from the parent and move a key up from this sibling. and the X is moved up. This one causes lots of problems. In our specific case. In other words. the sibling to the right has an extra key. If the sibling node to the immediate left or right has an extra key. nor do the siblings to the immediate right or left. In our example. let's delete E.Next. This includes moving down the parent's key that was between those of these two leaves.) Finally. the tree shrinks in height by one. the leaf has no extra keys. We also move down the D. . the old left subtree of Q would then have to become the right subtree of M. is moved down from the parent. However. Although R is in a leaf. Although E is in a leaf. then we would again "borrow" a key. the deletion results in a node with only one key. In this case. we must again combine with the sibling. (Of course. In such a case the leaf has to be combined with one of these two siblings. This is not acceptable. and move down the M from the parent. G. Suppose for the moment that the right sibling (the node with Q X) had one more key in it somewhere to the right of Q. 
We would then move M down to the node with too few keys and move the Q up where the M had been. which is not acceptable for a B-tree of order 5. So. and move the D up to replace the C. move it up to the parent. Thus we borrow the M from the sibling. Let's consolidate with the A B node. Let's try to delete C from it We begin by finding the immediate successor. this leaves us with a node with too few keys. . But now the node containing F does not have enough keys. Since neither the sibling to the left or right of the node containing E has an extra key. However. Note that the K L node gets reattached to the right of the J.Another Example Here is a different B-tree of order 5. we must combine the node with one of these two siblings. which would be D. its sibling has an extra key. However. and bring the J down to join the F. 000) = 4 nodes to be accessed. B tree of even order Consider B tree of degree 4. larger capacity means large number of records so large values of n and also disc access is much slower than memory access. we traverse a path from the root to a leaf node. For a 4k byte disk block with a search-key size of 12 bytes and a disk pointer of 8 bytes. log2xn complexity is not acceptable because disk capacity is much larger than main memory. Left Biasing 15 21 Right Biasing 15 6 12 18 27 6 12 18 21 27 . B -trees are preferred to binary trees in databases because (a) Disk capacities are greater than memory capacities (b) Disk access is much slower than memory access (c) Disk data transfer rates are much less than memory data transfer rates (d) Disks are more reliable than memory CS2000 Ans. Right Biasing: Right Child has more key than left child.000. Since root is in usually in the buffer. so typically it takes only 3 or fewer disk reads. 15 6 21 can be inserted without any splitting but inserting 27 requires a splitting.Q48. If n =100. n is around 200. a look-up of 1 million search-key values may take log50(1. In this case if 21 is moved up. Left Biasing 6 15 21 27 6 Right Biasing 15 21 27 21 15 6 15 27 6 21 27 Inserting 12 does not require any split and leftmost node contain 6 12 15. • In processing a query. Left Biasing: Left Child has more key than right child. b Explanation: Self balancing binary search can be used for searching in main memory but in case of disk. even in large files. a. this path is no longer than log(n/2) K . First create B tree for 15 6 21 27 12 18. Maximum key=3 and minimum key 4/2 =2. Inserting 18 requires splitting of leftmost node in case of left biasing but in case of right biasing no splitting required. If there are K search key values in the file. where n is number of links possible in any given node. but here problem is in deciding whether 15 or 21 is out of 6 15 21 27 moved up. In this case if 15 is moved up. • This means that the path is not long. U Right Biasing B. In this all data is on leaf node. CS2003 Ans.T VXZ B+ Trees This is advance version of B tree. (A) (B) (C) (D) None of These. Figure below shows left biasing and right biasing of tree after insertion of G. Step 2: inserting H require a split. We can either use left biasing or right biasing. If a internal node splits then it will be same as in B tree. B Explanation: 2-3-4 tree means tree of having internal node(except root node) with minimum degree two and maximum degree 3.e. Consider the following 2-3-4 tree (i. . G will be inserted to node BHI and insertion of G requires two splitting. Left Biasing P L H. But if any of biasing is used then it should be used for whole tree.Q49. 
G is moved up and also G is copied to successor leaf (If right biasing is used ). a tree with a minimum degree of two) in which each data item is a letter.T VXZ B HI N Q. L U G P. If a leaf node splits then data that moves up will also copied into its successor node. The usual alphabetical ordering of letters is used in constructing the tree.G I N Q. What is the result of inserting G in above tree. Example: Create B+ tree for C N G A H E K Q M F W L T Z D P R X of order 5 Minimum number of keys=(n-1)/2=2 Maximum number of keys =n-1=4 Step1: C N G A can be inserted without splitting. If internal node split then key is not copied to successor because key is already at leaf node. N is moved up. Second split in internal node. D is moved up and Copied to successor leaf node.Step 3: E and K can be inserted without any split. First split in left most node. Step 9: insertion of require two split. K is moved up and copied to successor leaf node. Step 4: inserting Q causes split in rightmost node. Step 5: insert M F (no split) Step 6: insert W requires a split. Step 7: insert L T (no split) Step 8: inserting Z require split in rightmost node. . and the order of leaf nodes is the maximum number of data items that can be stored in it. in the sequence given below. The order of internal nodes is the maximum number of tree pointers in each node. 10. 2. The B+ . Q. 6. 1 The maximum number of times leaf nodes would get split up as a result of these insertions is (A) 2 (B) 3 (C) 4 (D) 5 CS2009 Ans. Insert 10 1 split st 6 3 10 3 6 10 Invalid state require a split 3 6 10 Insert 8 2 nd 6 8 split 6 6 8 10 3 3 6 8 10 Insert 4 Invalid state require a split .tree in which order of the internal nodes is 3. The following key values are inserted into a B+ . maximum 2 key can be inserted in internal nodes. C Explanation: order of leaf nodes is 2 so maximum 2 key can be inserted in leaf nodes and order of internal node is 3. and that of the leaf nodes is 2. Step 9: insert P R X (no split).tree is initially empty. 8.K is moved up. 3. 4. Let LASTPOST. Respectively.6 8 Insert 2 6 8 3 4 6 8 10 2 3 4 8 8 10 Invalid state require a split 3 split rd 6 4 th split 4 8 2 4 4 6 8 Invalid state require a split 2 4 5 6 8 10 5 6 8 10 Insert 1 6 4 8 2 1 4 5 6 8 10 Miscellaneous Questions Q50. Traversal Inorder Postorder Preorder Last node visited rightmost node in tree Root node rightmost node in tree First node visited leftmost node in tree leftmost node in tree Root node . b Explanation: In case of complete binary tree. of a complete binary tree. inorder and preorder traversal. LASTIN and LASTPRE denote the last vertex visited in a postorder. Which of the following is always true? (a) LASTIN = LASTPOST (b) LASTIN = LASTPRE (c) LASTPRE = LASTPOST (d) None of the above CS2000 Ans. Q55. Consider the following functions f(n) = 3n g (n) = 2nIog2 h(n) = n! Which of the following is true? (a) h(n) is 0 (f(n)) (b) h(n) is 0 (g(n)) (c) g(n) is not 0 (f(n)) (d) f(n) is 0(g(n)) Ans. Note: take example of one or two such tree. Which of the following statements is correct? (a) g(n) = O(f(n)) and f(n) = O(g(n)) (b) g(n) = O(f(n) ) and f(n) ≠ O(g(n)) (c) f(n) = O(g(n)) and g(n) ≠ O(f(n)) (d) f(n) = O(g(n)) and g(n) =O(f(n)) CS2001 Ans. Q53. b Explanation: n2 log n is monotonically larger than n(log n) CS2000 Q54. with each node having 0 or k children is (n – 1)(k – 1)/k + 1. What is the worst case complexity of sorting n numbers using randomized quicksort? (a) 0(n) (b) 0(n log n) (c) 0(n2) (d) 0(n!) CS2001 Ans. 
Randomized quicksort is an extension of quicksort where the pivot is chosen randomly. Q51. d Explanation: g(n) and f(n) are of same order and h(n) is larger than both. and put values in options.Note: to solve this type of question first draw example tree then try to prove the option incorrect. b Explanation: Worst case complexity of sorting n numbers using randomized quicksort is O(n log n). with each node having 0 or 3 children is: (a)(n-1)*2/3 +1 (b)3n . Q52. The running time of the following algorithm Procedure A(n) .1 (c) 2n (d)2n -1 CS2002 Ans. Ternary tree with 10 nodes have 7 leaf nodes in which each node having 0 or 3 children. Only option a satisfy results. The number of leaf nodes in a rooted tree of n nodes. a Explanation: General formula The number of leaf nodes in a rooted k-ary tree of n nodes. Let f(n) = n2 log n and g(n) = n(log n) be two positive functions of n. If you have two sorted array then merging of these array results in a sorted array of elements. C Explanation: in heap 7th smallest element can be at any level. 7th smallest element can be found in time (A) O(n log n) (B) O(n) (C) O(log n) (D) O(1) CS2003 Ans. and (iii) the result of merging B and c gives A? (A)2 (B) 30 (C)56 (d)256 CS2003 Ans. C Explanation: Binary search will reduce comparison in insertion sort Now in each pass causes O(log n) computation T(n)= log 1 + log 2 + log 3 + ………………+ log n =log n!=O(n log n) Q58.If n<=2 return(1) else return (A(n/2)). and there are only 8C3 ways to select 5 numbers from 8 distinct numbers. if it not already present in the set Which of the following data structure can be used for this purpose? . Is best described by (a) 0(n) (b) 0(log n (c) 0(log log n) (d) 0(1) Ans. b 2 b CS2002 Q56. Deletion of the smallest element II. How many distinct pair of sequences. If there are 8 distinct integer sorted in increasing order .if you select any five of them then these five element will already be sorted in ascending order and also remaining three element is also in sorted order. Insertion of an element. where n is the number of elements in the set I. O(7log n)=O(log n) Q59.. C Explanation: This question basically uses permutation and combination. To find 7th smallest element in heap we have to perform 7 delete operations. The usual O(n2)implementation of insertion sort to sort an array uses linear search to identify the position where an element is to be inserted into the already sorted part of the array. 8 C3 =8! / (3! * 5!)=(8*7*6*5!) / (3!*5!)=8*7=56 Q57. Let A be a sequence of 8 distinct integer sorted in ascending order. b Explanation: Recurrence relation for above algorithm is T(n)=T(n/2)+1 By master method n log a = n log 1=n0=1 f(n)= n log a so by case 2 of Master method complexity of algorithm is O(log n). Each delete operation will cause O(log n ) complexity. we use binary search to identify the position the worst case running time will be (A) remain O(n2) (B) become O(n(log n)2) (C) become O(n log n) (D) become O(n) CS2003 Ans. If instead. B and C are there such that (i) each is sorted in ascending order. So what you have to do is to select 5 elements from group of 8 elements. (ii) B has 5 and C has 3 elements. In heap with n elements with the smallest element at the root. A data structure is required for storing a set of integers such that each of the following operations can be done in O(log n) time. For m>=1 . The best data structure to check whether an arithmetic expression has balanced parentheses is a (a) queue (b) stack (c) tree (d) list CS2004 Ans. 
Consider the following C program segment struct CellNode . Assume that push and pop operations take X seconds each. Q61. Time elapsed between each operation is Y seconds First push these element then pop them. if it not already present in the set require O(log n) time. Insertion of an element. B Q62. B Explanation: in heap(min-heap)Complexity of deletion of the smallest element=O(1) Insertion of an element=O(log n) but it cannot search whether element present or not in heap in O(log n) time. Q60. In Balance binary search tree-deletion of the smallest element require traversing tree to its leftmost node which is O(log n) and deleting it will also require O(log n ) time. simultaneously calculate start and end time of each element . E D C B A 5X+4Y 5X+5Y Start time End time A X B A 2X+Y C B A 3X+2Y D C B A 4X+3Y D C B A 6X+6Y C B A 7X+7Y B A 8X+8Y A 9X+9Y Stack-Life of E=(5X+5Y )– (5X+4Y) = Y Stack-Life of D=(6X+6Y) –(4X+3Y) = 2X + 3Y Stack-Life of C=(7X+7Y) –(3X+2Y) = 4X + 5Y Stack-Life of B=(8X+8Y) –(2X+Y) = 6X + 7Y Stack-Life of A=(9X+9Y) –(X ) = 8X + 9Y Average stack life is= (Y + (2X + 3Y) +(4X + 5Y) +(6X + 7Y) +(8X + 9Y)) / 5 = (20X+25Y) / 5 =4X + 5Y Only option C satisfy these values. Let S be a stack of size n>=1. and then perform n pop operations. define the stack–life of m as the time elapsed from the end of push(m) to the start of the pop operation that removes m from S. Similarly. C Explanation: to solve this question take an stack of size five (n=5) elements A B C D E Push and pop operation require X seconds. suppose we push the first n natural numbers in sequence. and Y seconds elapse between the end of one such stack operation and the start of the next operation. The average stack-life of an element of this stack is (A) n(X+Y) (B) 3Y + 2X (C)n)(X+Y)-X (D) Y + 2X CS2003 Ans. Starting with the empty stack. Total complexity of deleting minimum element is O(log n).(A) (B) (C) (D) A heap can be used but not a balanced binary search tree A balanced binary search tree can be used but not a heap Both balanced binary search tree and heap can be used Neither balanced binary search tree nor heap can be used CS2003 Ans. 60. 25.> rightChild)). Explanation: take example of 4 distinct keys and try to make binary trees as many possible. 25. 23. 29 which one of the following sequences of keys can be the result of an in-order traversal of the tree T? (a) 9. 95. 50. 29 (c) 29. 22. 25. 60. } The value returned by the function DoSomething when a pointer to the root of a non-empty tree is passed as argument is (a) The number of leaf nodes in the tree (b) The number of nodes in the tree (c) The number of internal nodes in the tree (d) The height of the tree CS2004 Ans. 95 (d) 95. T produces the following sequence of keys 10. if (ptr ! = NULL) { if (ptr .> rightChild ! = NULL) value = max(value. 15. 15. } return (value). if (ptr . 50. 23. 25. 95 (b) 9. 27.> rightChild ! = NULL) value = max(value. 22. Simpler way to solve this question take an example of tree then run algorithm. if (ptr . 10. a Explanation: inorder traversal of binary search tree always produces sorted data. 25. 40.{ struct CelINode *leftchild. Larger of them is assigned to value. 22. b.> leftChild). 22.1 + DoSomething (ptr . d Explanation: if (ptr .> leftchild ! = NULL) value = 1 + DoSomething (ptr . 40. Postorder traversal of a given binary search tree. 27. 9.> rightChild)). 50. 9. 9. 29.> leftchild ! = NULL) value = 1 + DoSomething (ptr . 15. 23. 95. 15. 27. 23. Q64. 
How many distinct binary search trees can be created out of 4 distinct keys? (a) 5 (b) 14 (c) 24 (d) 42 CS2005 Ans. struct CelINode *rightChild. 40. 22.> leftChild). 50. } int DoSomething (struct CelINode *ptr) { int value = 0. 23. 60. 60. 10. 60. this statement increasing value by one every time a node is passed and call function recursively for left subtree. 40. 27. . 40. this statement check whether value calculated by first statement is larger or value of right subtree plus one is larger. Q63.1 + DoSomething (ptr . 50. 29 CS2005 Ans. 10. 27. int element. 15. 10. 7. 8. 2. 5. 8. Suppose T(n) =2T(n/2) +n. 8. 3. 3. 2. 3. d Explanation: First create heap from given data 10 10 10 8 5 Insert 1 8 5 insert 7 8 5 3 2 3 2 1 3 2 1 7 heapify 10 level order traversal gives : 10 8 7 3 2 1 5 8 7 3 2 1 5 Q66. the smallest element can be found in time (A) 0(n) (B) O(log n) (C) 0(log log n) (D) 0(1) CS2006 . In a binary max heap containing n numbers. 1. 3.Q65. Initially. 8. 8. 5. 5 (C) 10. 7. 7. Q67. 2. 2 Two new elements ‗1‘ and ‗7‘ are inserted in the heap in that order. 1. 3. T(0) =T(1)=1 Which one of the following is FALSE? (a) T(n)=O(n2) (b) T(n)=θ(n log n) (c) T(n)=Ω(n2) (d) T(n)=O(n log n) CS2005 Ans. 1. The level order traversal of the heap after the insertion of the elements is: (a) 10. 7. The levelorder traversal of the heap is given below: 10. b Explanation: according to master theorem case 2 T(n)=θ(n log n) Note : Case 2 of master theorem evaluates complexity in terms of θ() not O(). it has 5 elements. 5 (d) 10. So correct answer is b. 2. A Priority-Queue is implemented as a Max-Heap. 5 CS2005 Ans. 1 (b) 10. x) { push (S1. To be able to store any binary tree on n vertices the minimum size of X should be (A) log2 n (B) n (C) 2n+1 (D) 2n—1 CS2006 Ans. D Explanation: maximum space required by a binary tree is when it is forming a chain. An element in an array X is called a leader if it is greater than all elements to the right of it in X. the left child. A Explanation: in max heap we cannot find smallest element by root to leaf traversing. The best algorithm to find all leaders in an array (A) Solves it in linear time using a left to right pass of the array (B) Solves it in linear time using a right to left pass of the array (C) Solves it using divide and conquer in time θ (n log n) (D) Solves it in time θ(n2) CS2006 Ans. Which one of the following in place sorting algorithms needs the minimum number of swaps? (A) Quick sort (B) Insertion sort (C) Selection sort (D) Heap sort CS2006 Ans. return. B Q71. using two stacks S1 and S2.total space required for creating a chain of 3 element = 7 node = 2n-1. A scheme for storing binary trees in an array X is as follows. if any. In this example dashed node shows extra space that we need to store when creating a chain of 3 element . Q69. is given below: void insert (Q. Indexing of X starts at 1 instead of 0. is stored in X[2i] and the right child. if any. the root is stored at X[1].Ans. } void delete (Q) { if (stack—empty(S2)) then { if (stack—empty(S1)) then { print(‖Q is empty‖). } else while (! (stack—empty (S1))) . in X[2i+1]. Q68. B Q70. For a node stored at X[i]. x). instead we can search using linear search in level order traversing. An implementation of a queue Q. Which one of the following is true for all m and n? (A) n+m <x<2n and 2m<y<n+m (B) n+m <x<2n and 2m<y<2n (C) 2m< x<2n and 2m<y<n+m (D) 2m <x<2n and 2m<y<2n CS2006 Ans. Case 2. 
An item x can be inserted into a 3-ary heap containing n items by placing x in the location a[n] and pushing it up the tree to satisfy the heap property. } } Let n insert and m(<= n)delete operations be performed in an arbitrary order on an empty queue Let x and y be the number of push and pop operations performed respectively in the process. then recurrence relation for quick sort is T(n)=2T(n/2) + O(n) By master method T(n)= ) θ (n log n) Statement for Linked Answer Questions 73 & 74: A 3-ary max heap is like a binary max heap.{ x=pop (S1). nodes have 3 children. 3. 6. A Explanation: case 1: we perform all n insertion first then m deletion operation. For m delete operation m push and 2m pop operation performed. 8. 5. A 3-ary heap can be represented by an array as follows: The root is stored in the first location. a[0]. 1.x) } x=pop (S2). 9 (B) 9. 8. 1 (D) 9. 8.Them m delete operation call m pop(). 3. nodes in the next level. is stored from a[1] to a[3]. The median of n elements can be found in O(n)time. push(S2. To do this first we will push all n elements to S1. Q73. 5. then call remaining n-m insert operations. 5 (C) 9. 6. 3. Then first delete operation will pop all element from S1 and push them to S2.if we call one insert and then one delete operation alternatively. 6. D . 1 CS2006 Ans. The nodes from the second level of the tree from left to right are stored from a[4] location onward. Now for each delete operation one push and two pop operations are performed. in which median is selected as pivot? (A) θ(n) (B) θ (n log n) (C) θ (n2) (D) θ (n3) CS2006 Ans. 6. Q72. Which one of the following is a valid sequence of elements in an array representing 3ary max heap? (A) 1. 5. So total number of push is m+n and total number of pop operation is 2m. 8. Which one of the following is correct about the complexity of quick sort. from left to right. B Explanation : if median can be found in O(n) times and if we select median as pivot then partition operation break array in equal parts. 3.total no of push is2n and total no of pop operation is n+m. but instead of 2 children. 5. 6. 5. 8.ary max heap found in the above question. 7. Which one of the following is the sequence of items in the array representing the resultant heap? (A) 10. Suppose the elements 7.Explanation: create 3-ary heap for given data 1 9 9 9 3 5 6 6 3 1 5 3 6 8 5 6 8 8 9 8 5 1 3 1 A B C D In above trees only d satisfy max heap conditions. 9. 2. 1. 4. 6. 1 (C) 10. 8. 7. in that order. 8. 4. 4. 7. 2. 8. 6. 2. 9. 3. 5. 4 (B) 10. 3. Q74. C . 1. 2. 7. 5 CS2006 Ans. The maximum number of nodes in a binary tree of height h is: (A) 2h-1 (B)2h− 1− 1(C) 2h+1 − 1 (D) 2h+1 CS2007 Ans. 9. The height of a binary tree is the maximum number of edges in any root to leaf path. A Explanation: Q75. 6. 2. 10 and 4 are inserted. 3 (D) 10. into the valid 3. 9. 3. 1. where n= L+I. C Explanation: by the formula L=(K-1)n +1. Which of the following sorting algorithms has the lowest worst-case complexity? (A) Merge sort (B) Bubble sort (C) Quick sort (D) Selection sort CS2007 Ans. Consider the following C program segment where CellNode represents a node in a binary tree: struct CellNode { struct CellNOde *leftChild. and I = 10. The number of comparisons made in the execution of the loop for any n > 0 is: (A) floor (log2 n) + 1 (B) n (C) 2 * ceil (log n) (D) 2 * ceil(log n) + 1 Ans. B Q77. Let I be the number of internal nodes and L be the number of leaves in a complete n-ary tree.Q76. n. while (j <=n) j = j*2. A Q78. what is the value of n? 
(A) 3 (B) 4 (C) 5 (D) 6 CS2007 Ans. D Explanation: recurrence relation for above algorithm is T(n)=T(√n) + 1 By substitution and master method T(n)= θ(log log n) Q81. struct CellNode *rightChild. }. int element. j = 1. K=(L-1)/n + 1=(41-1)/10 +1=5 Q80. If L = 41. A complete n-ary tree is a tree in which each node has n children or no children. else return (DoSomething (floor(sqrt(n))) + n). } (A) θ(n2) (B) θ(n log n) (C) θ(log n) (D) θ(log log n) Ans. Q79. int GetValue (struct CellNode *ptr) { int value = 0. Consider the following segment of C-code: int j. The maximum number of binary trees that can be formed with three unlabeled nodes is: (A) 1 (B) 5 (C) 4 (D) 3 CS2007 Ans. if (ptr != NULL) { if ((ptr->leftChild == NULL) && (ptr->rightChild == NULL)) CS2007 CS2007 . D Explanation: solve question by taking example. What is the time complexity of the following recursive function: int DoSomething (int n) { if (n <= 2) return 1. return 0. } Let T (n)denote the number of times the for loop is executed by the program on input n. } return 1. and we know that inorder traversal of a binary search tree is sorted data . Consider the following C code segment: int IsPrime(n) { int i.…. by knowing inorder and postorder of tree we can uniquely determine a binary tree. You are given the postorder traversal.n.i<=sqrt(n). 3. 2. What is the time complexity of the most efficient algorithm for doing this? (A) θ (log n) (B) θ (n) (C) θ (n log n) (D) None of the above.n. Which takes O(n log n) time. of a binary search tree on the n elements 1. B Q84. We have a binary heap on n elements and wish to insert n more elements (not necessarily one after another) into this heap.i++). else value = value + GetValue(ptr->leftChild) + GetValue(ptr->rightChild). if(n%i == 0) { printf(―Not Prime\n‖). A Q83. 2. The minimum number of comparisons required to determine if an integer appears more than n/2 times in a sorted array of n integers is (A) θ (n) (B) θ (log n) (C) θ (log*n ) (D) θ (1) CS2008 Ans. Q85. } return(value). The total time required for this is (A) θ (log n) (B) θ (n) (C) θ (n log n) (D) θ ( n2 ) CS2008 . C Explanation: Since it‘s a binary search tree and postorder traversal is given..n.which is 1. Which of the following is TRUE? CS2007 √nand T √n√n(B) T nO√nand T n1 (C) T nOnand T n√n(D) None of the above (A) T nO CS2007 Ans. as the tree cannot be uniquely determined CS2008 Ans. P. Try to write algorithm for this question. C Q82... } The value returned by GetValue when a pointer to the root of a binary tree is passed as its argument is: (A) the number of nodes in the tree (B) the number of internal nodes in the tree (C) the number of leaf nodes in the tree (D) the height of the tree Ans.value = 1. You have to determine the unique binary search tree that has P as its postorder traversal.. for(i=2. ……. now also Y[4]<3 is true . 3. k. f (int Y[10] . Lop will never terminate. CS2008 Ans. int u. In next step k=(6+9) / 2= 7. heap already have n elements. else j = k + 1. j=9. 1. 9.in first step i=0 and j= 9 k=4. {. } while (Y [k] != x && i< j) . k=( i+j) /2. else j= k. int x) 2. Q87. On which of the following contents of Y and x does the program fail? (A) Y is{1 2 3 4 5 6 7 8 9 10} and x < 10 (B) Y is{1 3 5 7 9 11 13 15 17 19} and x < 1 (C) Y is{2 2 2 2 2 2 2 2 2 2} and x > 2 (D) Y is{2 4 6 8 10 12 14 16 18 20} and 2 < x < 20 and x is even CS2008 Ans. The program is erroneous. Y[4]<3 is true . i becomes 6. i becomes 7. 6. 
In next step k=(8+9) / 2=8.C Explanation: If you run this program for values in option C then while loop will never terminate. In next step k=(8+9) / 2=8. Let take x=3. then Y[4]<3 is true. (C) Change line 6 to: if (Y [k] < = x) i = k. In next step k=(4+9) / 2=6. i becomes 4. i becomes 8.Y[4]<3 is true . now also Y[4]<3 is true . Inserting n+1 th element in heap require θ(log(n+1))time Inserting n+2 th element in heap require θ(log(n+2))time Inserting n+3 th element in heap require θ(log(n+2))time . if(Y[ k]< x) i=k. Inserting n+n th element in heap require θ(log(n+n))time Total time in inserting n elements=log(n+1) + log(n+2) + log(n+3) +……………+ log(n+n) =log((n+1)* (n+2)* (n+3)*…………. In next step k=(7+9) / 2=8. } Q86. If(Y[ k]= =x) print f("x is in the array ") . j.Ans. The correction needed in the program to make it work properly is (A) Change line 6 to: if (Y [k] < x) i = k + 1. . C Explanation: inserting an element in heap take θ(log n) time. else print f (" x is not in the array ") . do { 5. (D) Change line 7 to: } while ((Y [k] = = x) & & (i < j)). else j = k − 1. 8. 10. i becomes 8. i=0. else j = k. (B) Change line 6 to: if (Y [k] < x) i = k − 1.* (n+n)) ≈ θ(log(nn)) = θ(n log n) Statement for Linked Answer Questions: 86 & 87 Consider the following C program that attempts to locate an element x in an array Y[ ] using binary search. . 4. A . . i becomes 8. 7. now also Y[4]<3 is true . What is the worst-case time complexity of the best known algorithm to delete the node x from the list? (A) O(n) (B) O(log2 n) (C) O(log n) (D) O(1) IT2004 Ans. A Explanation: in singly linked list we can not traverse back.. What is the number of nodes in the tree that have exactly one child? (A) 0 (B) 1 (C) (n − 1) /2 (D) n-1 CS2010 Ans. and doing this adjustment up to the root node (root node is at index 0) in order [(n—1)/2]. Q90. every node has an odd number of descendants.Explanation: in previous question loop will not terminate when i=j or i=j+1 . Every node is considered to be its own descendant. Option A shows most promising conditions. Let Q be the pointer to an intermediate node x in the list.e. An array of integers of size n can be converted into a heap by adjusting the heaps rooted at each internal node of the complete binary tree starting at the node [(n—1)/2]. A Q89. Q91.. Let P be a singly linked list. in the worst case? (A) θ(n) (B) θ (n log n) (C) θ (n2 ) (D) θ (n2 log n) CS2009 Ans.0. The time required to construct a heap in this manner is (A) 0(log n) (B) 0(n) (C) 0(n log log n) (D) 0(n log n) IT2004 Ans. itself) . Q88. and to reach its previous node time taken will be O(n). . What is the number of swaps required to sort n elements using selection sort. now we can perform delete operation. We should change code such that this condition will not arrive. In such tree there is exactly one node having one child (i. so to delete a arbitrary node first we should have access to its previous node.[(n— 3)/2]. B Explanation : we know that a binary tree have only two child and according to definition in question every node is its child. In a binary tree with n nodes. which is last node. B Explanation: Creating heap as stated above will require run loop only for (n+1)/2 times... Which one of the following binary trees has its in-order and preorder traversals as BCAD and ABCD. In the resulting tree. Which one of the following statements is FALSE? (A) f(n)+g(n)=O(h(n)+h(n)) (B) f(n)=O(h(n)) (C) h(n)=O(f(n)) (D) f(n)h(n)=O(g(n)h(n)) Ans. 
C Explanation: order of complexities for above functions is f(n) < g(n)=h(n) So option C is FALSE Q94. Number of nodes in right subtree=p Number of nodes in right subtree=n – p – 1 Then root will be = n – p – 1 +1=n .Q92. The first number to be inserted in the tree must be (A) p (B) p + 1 (C) n .g(n) and h(n) be functions defined for positive integers such that f(n) = O(g(n)). g(n) =O(h(n)). The numbers 1. C Q93. Let f(n). respectively? A A D A B A B C C B D D C C D B D C B A IT2004 Ans. C Explanation: In binary search tree left subtree have smaller element than root and right subtree have larger elements than root. the right subtree of the root contains p nodes. g(n) ≠ O(f (n)).p (D)n — p + 1 IT2005 Ans. 2. and h(n) = O(g(n)).p . n are inserted in a binary search tree in some order. In the figure root is node with balance factor +2 or -2 which is violating AVL tree conditions. a double right rotation is needed. Figure shown below describe rotation in four cases. If the node is not a leaf. After deletion. Pivot is the node that will become root after rotation. Left-Left case and Left-Right case: If the balance factor of P is +2. remove it. 2. then the left subtree outweighs the right subtree of the given node. If the balance factor of L is í1. B Explanation: Try to make AVL tree of maximum height of 7 node 1 1 -1 1 0 0 . adjusting the balance factors as needed. (A) 2 (B) 3 (C) 4 (D) 5 CS2009 Ans. The second is a left rotation with P as the root. retrace the path back up the tree (parent of the replacement) to the root.rotation is a right rotation with R as the root. The node that was found as a replacement has at most one subtree. Q46. What is the maximum height of any AVL-tree with 7 nodes? Assume that the height of a tree with a single node is 0. a right rotation is needed with P as the root. and the balance factor of the left child (L) must be checked. replace it with either the largest in its left subtree (inorder predecessor) or the smallest in its right subtree (inorder successor). If the balance factor of L is 0. and remove that node. Deletion: If the node is a leaf. The first rotation is a left rotation with L as the root. The second is a right rotation with P as the root. If there is room in this leaf. Note that this may require that some existing keys be moved one to the right to make room for the new item. 5. 3. this unsuccessful search will end at a leaf. The number of subtrees of each node. All leaves appear in the same level. The median (middle) key is moved up into the parent node. It is commonly used in databases and file systems. and carry information. not only might we have to move some keys one position to the right. just insert the new item here. The B-tree is a generalization of a binary search tree in that more than two paths diverge from a single node. The root has at least two children if it is not a leaf node. a B TREES A B-tree is a specialized multiway tree designed especially for use on disk (Secondary Storage). A B-tree of order m (the maximum number of children for each node) is a tree which satisfies the following properties: 1. Every node (except root) has at least m-1»2 children. then. the number of nodes in the let sub tree is at least half and at most twice the number of nodes in the right sub tree. In a B-tree each node may contain a large number of keys. The maximum possible height (number of nodes on the path from the root to the furthest leaf) of such a tree on n nodes is best described by which of the following? 
(a) log2 n (b) log4 n (c) log3 n (d) log2 n CS2001 Ans. then the node must be "split" with about half of the keys going into a new node to the right of this one.A weight-balanced tree is a binary tree in which for each node. Every node has at most m children. if that node has no room. This means that (except root node) all internal nodes have minimum (5-1)/2 =2 keys . In practice B-trees usually have orders a lot bigger than 5. Example B-Tree The following is an example of a B-tree of order 5.) Note that when adding to an internal node. Of course. each leaf node must contain at least 2 keys. 4. then it may have to be split as well. If instead this leaf node is full so that there is no room to add the new item. 2. may also be large. Inserting a New Item When inserting an item. If the item is not already in the B-tree. (Of course. but .0 Maximum height of AVL tree with n number of node is log n +1 Q47. According to condition 4. B-tree is optimized for systems that read and write large blocks of data. the maximum number of children that a node can have is 5 (so that 4 is the maximum number of keys). A non-leaf node with k children contains kí1 keys. first do a search for it in the B-tree. When Z is added. thus causing the tree to increase in height by one. All nodes other than the root must have a minimum of 2 keys. the rightmost leaf must be split. with 2 keys in each of the resulting nodes. The median item T is moved up into the parent node. Note that in practice we just leave the A and C in the current node and place the H and N into a new node to the right of the old one. we find no room in this node. so we split it into 2 nodes. The letters F. and Q proceeds without requiring any splits: Inserting M requires a split. the tree is kept fairly balanced. resulting in this picture When we try to insert the H. L. The first 4 letters get inserted into the same node. . K. W. Let's work our way through an example similar to that given by Kruse. Note that M happens to be the median key and so is moved up into the parent node. Note that by moving up the median key. the median key moves up into a new root node. Inserting E. moving the median item G up into a new root node. and T are then added without needing any split. Insert the following letters into what is originally an empty B-tree of order 5: C N G A H E K Q M F W L T Z D P R X Y S Order 5 means that a node can have a maximum of 5 children and 4 keys.the associated pointers have to be moved right as well. If the root node is ever split. Since H is in a leaf and the leaf has more than the minimum number of keys. the parent node is full. Since T is not in a leaf. delete H. we first do a lookup to find H. This gives: Next. D happens to be the median key and so is the one moved up into the parent node. In ALL cases we reduce deletion to a deletion in a leaf. The letters P. However. X. Q. what we really have to do is to delete W from the leaf. the node with N. which happens to be W. Note how the 3 pointers from the old parent node stay in the revised node that contains D and G. we find its successor (the next item in ascending order). and Y are then added without any need of splitting: Finally.The insertion of D causes the leftmost leaf to be split. so it splits. by using this method. That way. this is easy. delete the T. R. sending the median M up to form a new root node. Of course. sending the median Q up to the parent. P. when S is added. since this leaf has extra keys. 
We move the K over where the H had been and the L over where the K had been. and R splits. Deleting an Item In the B-tree as we left it at the end of the last section. . and move W up to replace the T. which we already know how to do. which is not acceptable for a B-tree of order 5. and move down the M from the parent. is moved down from the parent. we can then borrow a key from the parent and move a key up from this sibling. However. the old left subtree of Q would then have to become the right subtree of M. Suppose for the moment that the right sibling (the node with Q X) had one more key in it somewhere to the right of Q. In this case. this leaf does not have an extra key. G. In other words. the sibling to the right has an extra key. If this problem node had a sibling to its immediate left or right that had a spare key. delete R. (Of course. the leaf has no extra keys. We also move down the D. In our specific case. the deletion results in a node with only one key. So. Although E is in a leaf. We would then move M down to the node with too few keys and move the Q up where the M had been. Although R is in a leaf. the S is moved over so that the W can be inserted in its proper place. and the X is moved up. Of course. In such a case the leaf has to be combined with one of these two siblings. let's combine the leaf containing F with the leaf containing A C.) Finally. you immediately see that the parent node now contains only one key. This includes moving down the parent's key that was between those of these two leaves.Next. the N P node would be attached via the pointer field to the right of M's new location. let's delete E. then we would again "borrow" a key. Since in our example we have no way to borrow a key from a sibling. In our example. we must again combine with the sibling. This one causes lots of problems. the successor W of S (the last key in the node where the deletion occurred). This is not acceptable. If the sibling node to the immediate left or right has an extra key. nor do the siblings to the immediate right or left. the tree shrinks in height by one. . its sibling has an extra key. Since neither the sibling to the left or right of the node containing E has an extra key.Another Example Here is a different B-tree of order 5. and bring the J down to join the F. Let's consolidate with the A B node. which would be D. we must combine the node with one of these two siblings. and move the D up to replace the C. . Let's try to delete C from it We begin by finding the immediate successor. However. But now the node containing F does not have enough keys. this leaves us with a node with too few keys. Thus we borrow the M from the sibling. However. move it up to the parent. Note that the K L node gets reattached to the right of the J. a look-up of 1 million search-key values may take log50(1. First create B tree for 15 6 21 27 12 18. This means that the path is not long. If n =100. even in large files. Left Biasing 15 21 Right Biasing 15 6 12 18 27 6 12 18 21 27 . If there are K search key values in the file. Left Biasing 6 15 21 27 6 Right Biasing 15 21 27 21 15 6 15 27 6 21 27 Inserting 12 does not require any split and leftmost node contain 6 12 15. but here problem is in deciding whether 15 or 21 is out of 6 15 21 27 moved up.Q48. b Explanation: Self balancing binary search can be used for searching in main memory but in case of disk. 15 6 21 can be inserted without any splitting but inserting 27 requires a splitting. 
so typically it takes only 3 or fewer disk reads.000) = 4 nodes to be accessed.000. n is around 200. For a 4k byte disk block with a search-key size of 12 bytes and a disk pointer of 8 bytes. In this case if 15 is moved up. a. Inserting 18 requires splitting of leftmost node in case of left biasing but in case of right biasing no splitting required. where n is number of links possible in any given node. Since root is in usually in the buffer. In processing a query. log2xn complexity is not acceptable because disk capacity is much larger than main memory. larger capacity means large number of records so large values of n and also disc access is much slower than memory access. Left Biasing: Left Child has more key than right child. we traverse a path from the root to a leaf node. Maximum key=3 and minimum key 4/2 =2. In this case if 21 is moved up. this path is no longer than log(n/2) K . B -trees are preferred to binary trees in databases because (a) Disk capacities are greater than memory capacities (b) Disk access is much slower than memory access (c) Disk data transfer rates are much less than memory data transfer rates (d) Disks are more reliable than memory CS2000 Ans. B tree of even order Consider B tree of degree 4. Right Biasing: Right Child has more key than left child. a tree with a minimum degree of two) in which each data item is a letter.Q49. Figure below shows left biasing and right biasing of tree after insertion of G. Consider the following 2-3-4 tree (i. If a internal node splits then it will be same as in B tree.T VXZ B HI N Q. G will be inserted to node BHI and insertion of G requires two splitting. G is moved up and also G is copied to successor leaf (If right biasing is used ). But if any of biasing is used then it should be used for whole tree. Left Biasing P L H. The usual alphabetical ordering of letters is used in constructing the tree.G I N Q. Example: Create B+ tree for C N G A H E K Q M F W L T Z D P R X of order 5 Minimum number of keys=(n-1)/2=2 Maximum number of keys =n-1=4 Step1: C N G A can be inserted without splitting. U Right Biasing B. L U G P. CS2003 Ans. We can either use left biasing or right biasing. What is the result of inserting G in above tree.e. (A) (B) (C) (D) None of These. In this all data is on leaf node. Step 2: inserting H require a split. B Explanation: 2-3-4 tree means tree of having internal node(except root node) with minimum degree two and maximum degree 3. . If a leaf node splits then data that moves up will also copied into its successor node.T VXZ B+ Trees This is advance version of B tree. Second split in internal node. Step 7: insert L T (no split) Step 8: inserting Z require split in rightmost node.Step 3: E and K can be inserted without any split. . K is moved up and copied to successor leaf node. Step 5: insert M F (no split) Step 6: insert W requires a split. D is moved up and Copied to successor leaf node. If internal node split then key is not copied to successor because key is already at leaf node. First split in left most node. Step 4: inserting Q causes split in rightmost node. Step 9: insertion of require two split. N is moved up. and the order of leaf nodes is the maximum number of data items that can be stored in it. 8. The order of internal nodes is the maximum number of tree pointers in each node.tree in which order of the internal nodes is 3. 2. Step 9: insert P R X (no split). 4. in the sequence given below. 6. 
Insert 10 1 split st 6 3 10 3 6 10 Invalid state require a split 3 6 10 Insert 8 2nd split 6 8 6 3 6 8 10 3 6 8 10 Insert 4 Invalid state require a split . and that of the leaf nodes is 2.K is moved up. The B+ . The following key values are inserted into a B+ . Q. maximum 2 key can be inserted in internal nodes. C Explanation: order of leaf nodes is 2 so maximum 2 key can be inserted in leaf nodes and order of internal node is 3. 1 The maximum number of times leaf nodes would get split up as a result of these insertions is (A) 2 (B) 3 (C) 4 (D) 5 CS2009 Ans. 10.tree is initially empty. 3. of a complete binary tree. Traversal Inorder Postorder Preorder Last node visited rightmost node in tree Root node rightmost node in tree First node visited leftmost node in tree leftmost node in tree Root node . Let LASTPOST. Respectively.6 8 Insert 2 6 8 3 4 6 8 10 2 3 4 8 8 10 Invalid state require a split 3 split rd 6 4 th split Invalid state require a split 4 8 4 6 8 2 4 5 6 8 10 2 4 5 6 8 10 Insert 1 6 4 8 2 1 4 5 6 8 10 Miscellaneous Questions Q50. LASTIN and LASTPRE denote the last vertex visited in a postorder. Which of the following is always true? (a) LASTIN = LASTPOST (b) LASTIN = LASTPRE (c) LASTPRE = LASTPOST (d) None of the above CS2000 Ans. b Explanation: In case of complete binary tree. inorder and preorder traversal. What is the worst case complexity of sorting n numbers using randomized quicksort? (a) 0(n) (b) 0(n log n) (c) 0(n2) (d) 0(n!) CS2001 Ans.1 (c) 2n (d)2n -1 CS2002 Ans. Q52. d Explanation: g(n) and f(n) are of same order and h(n) is larger than both.Note: to solve this type of question first draw example tree then try to prove the option incorrect. Note: take example of one or two such tree. with each node having 0 or 3 children is: (a)(n-1)*2/3 +1 (b)3n . Let f(n) = n2 log n and g(n) = n(log n) be two positive functions of n. b Explanation: n2 log n is monotonically larger than n(log n) Q54. and put values in options. Only option a satisfy results. The number of leaf nodes in a rooted tree of n nodes. Q51. Q53. b Explanation: Worst case complexity of sorting n numbers using randomized quicksort is O(n log n). Q55. Ternary tree with 10 nodes have 7 leaf nodes in which each node having 0 or 3 children. Randomized quicksort is an extension of quicksort where the pivot is chosen randomly. The running time of the following algorithm Procedure A(n) CS2000 . Which of the following statements is correct? (a) g(n) = O(f(n)) and f(n) = O(g(n)) (b) g(n) = O(f(n) ) and f(n) O(g(n)) (c) f(n) = O(g(n)) and g(n) O(f(n)) (d) f(n) = O(g(n)) and g(n) =O(f(n)) CS2001 Ans. a Explanation: General formula The number of leaf nodes in a rooted k-ary tree of n nodes. Consider the following functions f(n) = 3n g (n) = 2nIog2 h(n) = n! Which of the following is true? (a) h(n) is 0 (f(n)) (b) h(n) is 0 (g(n)) (c) g(n) is not 0 (f(n)) (d) f(n) is 0(g(n)) Ans. with each node having 0 or k children is (n ± 1)(k ± 1)/k + 1. C Explanation: in heap 7th smallest element can be at any level. 8 C3 =8! / (3! * 5!)=(8*7*6*5!) / (3!*5!)=8*7=56 Q57. A data structure is required for storing a set of integers such that each of the following operations can be done in O(log n) time. If instead.if you select any five of them then these five element will already be sorted in ascending order and also remaining three element is also in sorted order. If you have two sorted array then merging of these array results in a sorted array of elements. and (iii) the result of merging B and c gives A? 
Q56. In a heap with n elements with the smallest element at the root, the 7th smallest element can be found in time (A) O(n log n) (B) O(n) (C) O(log n) (D) O(1). CS2003. Ans. D. Explanation: every ancestor of a node in a min-heap is smaller than the node, so the 7th smallest element has at most six ancestors and therefore lies within the first seven levels of the heap; those levels contain at most 2^7 - 1 = 127 nodes, a constant, and examining a constant number of nodes takes O(1) time. (Performing 7 delete-min operations, each O(log n), would also find it, but that only gives the weaker bound O(7 log n) = O(log n).)
Q. Let A be a sequence of 8 distinct integers sorted in ascending order. How many distinct pairs of sequences B and C are there such that (i) each is sorted in ascending order, (ii) B has 5 and C has 3 elements, and (iii) the result of merging B and C gives A? (A) 2 (B) 30 (C) 56 (D) 256. CS2003. Ans. C. Explanation: this question is basic counting. Choose which 5 of the 8 elements go into B; the remaining 3 form C. Because A is sorted, both chosen subsequences are automatically sorted, and merging two sorted sequences of these elements reproduces A. The number of such choices is 8C5 = 8C3 = 8!/(3!*5!) = (8*7*6)/3! = 56.
Q57. The usual O(n2) implementation of insertion sort to sort an array uses linear search to identify the position where an element is to be inserted into the already-sorted part of the array. If, instead, we use binary search to identify the position, the worst case running time will (A) remain O(n2) (B) become O(n(log n)2) (C) become O(n log n) (D) become O(n). CS2003. Ans. A. Explanation: binary search does reduce the number of comparisons, which now total log 1 + log 2 + ... + log n = log n! = O(n log n), but after the position is found up to O(n) elements still have to be shifted to make room for the inserted element. The shifting dominates, so the worst-case running time remains O(n2).
Q58. A data structure is required for storing a set of integers such that each of the following operations can be done in O(log n) time, where n is the number of elements in the set:
I. deletion of the smallest element;
II. insertion of an element if it is not already present in the set.
Which of the following data structures can be used for this purpose? (A) A heap can be used but not a balanced binary search tree (B) A balanced binary search tree can be used but not a heap (C) Both a balanced binary search tree and a heap can be used (D) Neither a balanced binary search tree nor a heap can be used. CS2003. Ans. B. Explanation: in a min-heap, finding the smallest element is O(1) and deleting it is O(log n), and insertion is O(log n), but a heap cannot check in O(log n) time whether an element is already present (membership testing takes O(n)). In a balanced binary search tree, the smallest element is the leftmost node, so finding and deleting it requires O(log n), and a search followed by an insertion, if the element is not already present, also requires O(log n). Hence only the balanced binary search tree supports both operations.
Q59. The best data structure to check whether an arithmetic expression has balanced parentheses is a (a) queue (b) stack (c) tree (d) list. CS2004. Ans. b. Explanation: parentheses must be closed in last-opened, first-closed order, which is exactly the LIFO discipline of a stack: push each opening bracket and pop-and-match on each closing bracket.
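To illustrate why a stack is the natural choice in Q59, here is a minimal balanced-brackets checker; the function name check_balanced, the fixed-size array used as the stack, and the test strings are our own simplifications, not part of the exam question.

#include <stdio.h>

/* Returns 1 if every opening bracket is closed by the matching
   bracket in LIFO order, which is exactly what a stack enforces. */
static int check_balanced(const char *s) {
    char stack[256];
    int top = -1;
    for (; *s; s++) {
        if (*s == '(' || *s == '[' || *s == '{') {
            if (top == 255) return 0;        /* too deep for this sketch */
            stack[++top] = *s;
        } else if (*s == ')' || *s == ']' || *s == '}') {
            char open = (*s == ')') ? '(' : (*s == ']') ? '[' : '{';
            if (top < 0 || stack[top--] != open) return 0;
        }
    }
    return top == -1;
}

int main(void) {
    printf("%d\n", check_balanced("(a+b)*[c-{d/e}]"));  /* prints 1 */
    printf("%d\n", check_balanced("(a+b]"));            /* prints 0 */
    return 0;
}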
Q60. Let S be a stack of size n >= 1. Starting with the empty stack, suppose we push the first n natural numbers in sequence and then perform n pop operations. Assume that push and pop operations take X seconds each, and that Y seconds elapse between the end of one stack operation and the start of the next. For m >= 1, define the stack-life of m as the time elapsed from the end of push(m) to the start of the pop operation that removes m from S. The average stack-life of an element of this stack is (A) n(X+Y) (B) 3Y + 2X (C) n(X+Y) - X (D) Y + 2X. CS2003. Ans. C. Explanation: take n = 5 and call the pushed elements A, B, C, D, E in push order; record when each push ends and when the matching pop starts:
Element | End of its push | Start of its pop | Stack-life
E       | 5X + 4Y         | 5X + 5Y          | Y
D       | 4X + 3Y         | 6X + 6Y          | 2X + 3Y
C       | 3X + 2Y         | 7X + 7Y          | 4X + 5Y
B       | 2X + Y          | 8X + 8Y          | 6X + 7Y
A       | X               | 9X + 9Y          | 8X + 9Y
Average stack-life = (Y + (2X + 3Y) + (4X + 5Y) + (6X + 7Y) + (8X + 9Y)) / 5 = (20X + 25Y) / 5 = 4X + 5Y. Only option (C) matches, since n(X+Y) - X = 5(X+Y) - X = 4X + 5Y.
Q62. Consider the following C program segment:
struct CellNode {
  struct CellNode *leftChild;
  int element;
  struct CellNode *rightChild;
};
int DoSomething (struct CellNode *ptr) {
  int value = 0;
  if (ptr != NULL) {
    if (ptr->leftChild != NULL)
      value = 1 + DoSomething(ptr->leftChild);
    if (ptr->rightChild != NULL)
      value = max(value, 1 + DoSomething(ptr->rightChild));
  }
  return (value);
}
The value returned by the function DoSomething when a pointer to the root of a non-empty tree is passed as argument is (a) the number of leaf nodes in the tree (b) the number of nodes in the tree (c) the number of internal nodes in the tree (d) the height of the tree. CS2004. Ans. d. Explanation: the first if computes one more than the value for the left subtree, the second if compares that with one more than the value for the right subtree and keeps the larger, and a leaf returns 0. This is exactly the height of the tree measured in edges.
Q63. How many distinct binary search trees can be created out of 4 distinct keys? (a) 5 (b) 14 (c) 24 (d) 42. CS2005. Ans. b. Explanation: the number of distinct binary search trees on n distinct keys is the Catalan number C(2n, n)/(n+1); for n = 4 this is 70/5 = 14. (Alternatively, take 4 distinct keys and enumerate the possible shapes.)
Q64. Postorder traversal of a given binary search tree T produces the following sequence of keys: 10, 9, 23, 22, 27, 25, 15, 50, 95, 60, 40, 29. Which one of the following sequences of keys can be the result of an in-order traversal of the tree T? (a) 9, 10, 15, 22, 23, 25, 27, 29, 40, 50, 60, 95; (b)-(d) the same twelve keys in other, non-sorted orders. CS2005. Ans. a. Explanation: an inorder traversal of a binary search tree always produces the keys in sorted order, so the answer must be the sorted sequence, option (a).
Q65. A Priority-Queue is implemented as a Max-Heap. Initially it has 5 elements, and the level-order traversal of the heap is 10, 8, 5, 3, 2. Two new elements, 1 and 7, are inserted into the heap in that order. The level-order traversal of the heap after the insertion of the elements is: (a) 10, 8, 7, 5, 3, 2, 1 (b) 10, 8, 7, 2, 3, 1, 5 (c) 10, 8, 7, 1, 2, 3, 5 (d) 10, 8, 7, 3, 2, 1, 5. CS2005. Ans. d. Explanation: insert 1 at the next free position (the left child of 5); since 1 < 5 nothing moves and the heap is 10, 8, 5, 3, 2, 1. Insert 7 at the next free position (the right child of 5); since 7 > 5 it is sifted up past 5, and 7 < 10 stops it there. The level-order traversal is now 10, 8, 7, 3, 2, 1, 5.
Q66. Suppose T(n) = 2T(n/2) + n, with T(0) = T(1) = 1. Which one of the following is FALSE? (a) T(n) = O(n2) (b) T(n) = Theta(n log n) (c) T(n) = Omega(n2) (d) T(n) = O(n log n). CS2005. Ans. c. Explanation: by case 2 of the Master theorem, T(n) = Theta(n log n). Consequently T(n) = O(n2), T(n) = Theta(n log n) and T(n) = O(n log n) are all true, while T(n) = Omega(n2) is false. Note: case 2 of the Master theorem gives the bound in terms of Theta, not just O.
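For Q66 it is easy to tabulate the recurrence and compare it with n log2 n directly; the small program below (our own, not part of the question) evaluates T(n) for a few powers of two.

#include <math.h>
#include <stdio.h>

/* T(n) = 2*T(n/2) + n with T(0) = T(1) = 1, evaluated for powers of two. */
static double T(long n) {
    if (n <= 1) return 1.0;
    return 2.0 * T(n / 2) + (double)n;
}

int main(void) {
    long n;
    for (n = 2; n <= 1L << 20; n <<= 4)
        printf("n = %8ld  T(n) = %12.0f  n*log2(n) = %12.0f\n",
               n, T(n), n * log2(n));
    return 0;
}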
Q67. In a binary max heap containing n numbers, the smallest element can be found in time (A) O(n) (B) O(log n) (C) O(log log n) (D) O(1). CS2006. Ans. A. Explanation: in a max heap the smallest element is one of the leaves, so it cannot be reached by walking a single root-to-leaf path; a linear scan over the elements (for example, over the level-order array) is required, which takes O(n) time.
Q68. An element in an array X is called a leader if it is greater than all elements to the right of it in X. The best algorithm to find all leaders in an array (A) solves it in linear time using a left-to-right pass of the array (B) solves it in linear time using a right-to-left pass of the array (C) solves it using divide and conquer in time Theta(n log n) (D) solves it in time Theta(n2). CS2006. Ans. B. Explanation: scan the array from right to left while maintaining the maximum seen so far; every element greater than that running maximum is a leader, so one pass suffices.
Q69. Which one of the following in-place sorting algorithms needs the minimum number of swaps? (A) Quick sort (B) Insertion sort (C) Selection sort (D) Heap sort. CS2006. Ans. C. Explanation: selection sort performs at most one swap per pass, i.e. at most n - 1 swaps in total, fewer than the other algorithms need in the worst case.
Q70. A scheme for storing binary trees in an array X is as follows. Indexing of X starts at 1 instead of 0. The root is stored at X[1]. For a node stored at X[i], the left child, if any, is stored in X[2i] and the right child, if any, in X[2i+1]. To be able to store any binary tree on n vertices the minimum size of X should be (A) log2 n (B) n (C) 2n+1 (D) 2^n - 1. CS2006. Ans. D. Explanation: the maximum space is required when the tree forms a chain that always goes right: such a chain of n nodes occupies positions 1, 3, 7, ..., 2^n - 1, so the array must have 2^n - 1 cells (for example, a right-going chain of 3 nodes needs 7 cells).
Q71. An implementation of a queue Q, using two stacks S1 and S2, is given below:
void insert(Q, x) {
  push(S1, x);
}
void delete(Q) {
  if (stack-empty(S2)) then {
    if (stack-empty(S1)) then {
      print("Q is empty");
      return;
    }
    else while (!(stack-empty(S1))) {
      x = pop(S1);
      push(S2, x);
    }
  }
  x = pop(S2);
}
Let n insert and m (<= n) delete operations be performed in an arbitrary order on an empty queue Q. Let x and y be the number of push and pop operations performed, respectively, in the process. Which one of the following is true for all m and n? (A) n + m <= x <= 2n and 2m <= y <= n + m (B) n + m <= x <= 2n and 2m <= y <= 2n (C) 2m <= x <= 2n and 2m <= y <= n + m (D) 2m <= x <= 2n and 2m <= y <= 2n. CS2006. Ans. A. Explanation: each insert performs exactly one push (onto S1). Each element that is ever deleted must first be transferred from S1 to S2, which costs one pop and one push per transferred element, and every delete then pops S2 once. Case 1: all n inserts followed by m deletes; the first delete transfers all n elements, so x = n + n = 2n pushes and y = n + m pops. Case 2: inserts and deletes alternate; each delete transfers only the single element just inserted, so x = n + m and y = 2m. In general m <= (elements transferred) <= n, giving n + m <= x <= 2n and 2m <= y <= n + m, which is option (A).
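The push/pop counts claimed in the explanation of Q71 can be checked mechanically. The harness below is our own instrumentation of the two-stack queue from the question: it tallies every push and pop and runs the two extreme schedules, so x and y can be compared against the bounds in option (A).

#include <stdio.h>

/* Counting version of the two-stack queue (test harness only). */
#define CAP 1024
static int s1[CAP], s2[CAP], t1 = -1, t2 = -1;
static long pushes = 0, pops = 0;

static void push(int *s, int *top, int v) { s[++*top] = v; pushes++; }
static int  pop (int *s, int *top)        { pops++; return s[(*top)--]; }

static void insert(int x) { push(s1, &t1, x); }
static int  del(void) {
    if (t2 < 0)                      /* S2 empty: move everything from S1 */
        while (t1 >= 0) push(s2, &t2, pop(s1, &t1));
    return pop(s2, &t2);             /* assumes the queue is non-empty */
}

static void run(const char *name, int n, int m, int alternate) {
    int i;
    t1 = t2 = -1; pushes = pops = 0;
    if (alternate)
        for (i = 0; i < n; i++) { insert(i); if (i < m) del(); }
    else {
        for (i = 0; i < n; i++) insert(i);
        for (i = 0; i < m; i++) del();
    }
    printf("%-12s n=%d m=%d  x=%ld  y=%ld\n", name, n, m, pushes, pops);
}

int main(void) {
    run("batched", 10, 6, 0);      /* expect x = 2n = 20, y = n + m = 16 */
    run("alternating", 10, 6, 1);  /* expect x = n + m = 16, y = 2m = 12 */
    return 0;
}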
Q72. The median of n elements can be found in O(n) time. Which one of the following is correct about the complexity of quicksort in which the median is selected as the pivot? (A) Theta(n) (B) Theta(n log n) (C) Theta(n2) (D) Theta(n3). CS2006. Ans. B. Explanation: if the median can be found in O(n) time and is chosen as the pivot, every partition splits the array into two equal parts, so the recurrence is T(n) = 2T(n/2) + O(n), and by the Master theorem T(n) = Theta(n log n).
Statement for Linked Answer Questions 73 & 74: A 3-ary max heap is like a binary max heap, but instead of 2 children, nodes have 3 children. A 3-ary heap can be represented by an array as follows: the root is stored in the first location, a[0]; the nodes in the next level, from left to right, are stored from a[1] to a[3]; the nodes of the second level of the tree, from left to right, are stored from location a[4] onward. An item x can be inserted into a 3-ary heap containing n items by placing x in the location a[n] and pushing it up the tree to satisfy the heap property.
Q73. Which one of the following is a valid sequence of elements in an array representing a 3-ary max heap? (A) 1, 3, 5, 6, 8, 9 (B) 9, 6, 3, 1, 8, 5 (C) 9, 3, 6, 8, 5, 1 (D) 9, 5, 6, 8, 3, 1. CS2006. Ans. D. Explanation: draw the 3-ary tree for each option and check that every parent is at least as large as each of its children. In (A) the root 1 is smaller than its children; in (B) the child 8 exceeds its parent 6; in (C) the child 5 exceeds its parent 3; only (D), root 9 with children 5, 6, 8 and with 3, 1 as children of 5, satisfies the max-heap condition.
Q74. Suppose the elements 7, 2, 10 and 4 are inserted, in that order, into the valid 3-ary max heap found in the above question. Which one of the following is the sequence of items in the array representing the resultant heap? (A) 10, 7, 9, 8, 3, 1, 5, 2, 6, 4 (B) 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 (C) 10, 9, 4, 5, 7, 6, 8, 2, 1, 3 (D) 10, 8, 6, 9, 7, 2, 3, 4, 1, 5. CS2006. Ans. A. Explanation: start from 9, 5, 6, 8, 3, 1. Inserting 7 at a[6] sifts it above its parent 5, giving 9, 7, 6, 8, 3, 1, 5. Inserting 2 at a[7] stays put, since its parent 6 is larger. Inserting 10 at a[8] sifts up past 6 and then past 9 to the root, giving 10, 7, 9, 8, 3, 1, 5, 2, 6. Inserting 4 at a[9] stays put, since its parent 9 is larger. The final array is 10, 7, 9, 8, 3, 1, 5, 2, 6, 4, which is option (A).
Q75. The height of a binary tree is the maximum number of edges in any root-to-leaf path. The maximum number of nodes in a binary tree of height h is: (A) 2^h - 1 (B) 2^(h-1) - 1 (C) 2^(h+1) - 1 (D) 2^(h+1). CS2007. Ans. C. Explanation: a full binary tree of height h has 1 + 2 + 4 + ... + 2^h = 2^(h+1) - 1 nodes.
Q76. The maximum number of binary trees that can be formed with three unlabeled nodes is: (A) 1 (B) 5 (C) 4 (D) 3. CS2007. Ans. B. Explanation: the number of binary tree shapes on n unlabeled nodes is the Catalan number; for n = 3 it is 5 (one balanced shape and four skewed ones).
Q77. Which of the following sorting algorithms has the lowest worst-case complexity? (A) Merge sort (B) Bubble sort (C) Quick sort (D) Selection sort. CS2007. Ans. A. Explanation: merge sort is O(n log n) in the worst case, whereas bubble, quick and selection sort are all O(n2) in the worst case.
Q78. A complete n-ary tree is a tree in which each node has n children or no children. Let I be the number of internal nodes and L be the number of leaves in a complete n-ary tree. If L = 41 and I = 10, what is the value of n? (A) 3 (B) 4 (C) 5 (D) 6. CS2007. Ans. C. Explanation: counting edges gives n*I = I + L - 1, i.e. L = I(n - 1) + 1, so n = (L - 1)/I + 1 = 40/10 + 1 = 5.
Q79. Consider the following segment of C code:
int j, n;
j = 1;
while (j <= n)
  j = j * 2;
The number of comparisons made in the execution of the loop for any n > 0 is: (A) floor(log2 n) + 1 (B) n (C) 2 * ceil(log2 n) (D) 2 * ceil(log2 n) + 1. CS2007. Ans. A. Explanation: j takes the values 1, 2, 4, ..., so the loop body runs floor(log2 n) + 1 times and the test is evaluated once more when it finally fails; of the given options, (A) is the intended bound.
Q80. What is the time complexity of the following recursive function?
int DoSomething (int n) {
  if (n <= 2)
    return 1;
  else
    return (DoSomething(floor(sqrt(n))) + n);
}
(A) Theta(n2) (B) Theta(n log n) (C) Theta(log n) (D) Theta(log log n). CS2007. Ans. D. Explanation: the recurrence is T(n) = T(sqrt(n)) + 1. Substituting n = 2^m gives S(m) = S(m/2) + 1 = Theta(log m), i.e. T(n) = Theta(log log n).
Q81. Consider the following C program segment, where CellNode represents a node in a binary tree:
struct CellNode {
  struct CellNode *leftChild;
  int element;
  struct CellNode *rightChild;
};
int GetValue (struct CellNode *ptr) {
  int value = 0;
  if (ptr != NULL) {
    if ((ptr->leftChild == NULL) && (ptr->rightChild == NULL))
      value = 1;
    else
      value = value + GetValue(ptr->leftChild) + GetValue(ptr->rightChild);
  }
  return (value);
}
The value returned by GetValue when a pointer to the root of a binary tree is passed as its argument is: (A) the number of nodes in the tree (B) the number of internal nodes in the tree (C) the number of leaf nodes in the tree (D) the height of the tree. CS2007. Ans. C. Explanation: value becomes 1 exactly at a leaf, and an internal node simply adds up the values returned for its two subtrees, so the function counts the leaf nodes.
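To see concretely that GetValue in Q81 counts leaf nodes, the fragment below builds a small test tree and calls the function unchanged; the make_node helper and the example tree are ours.

#include <stdio.h>
#include <stdlib.h>

struct CellNode {
    struct CellNode *leftChild;
    int element;
    struct CellNode *rightChild;
};

/* The function from Q81, reproduced apart from formatting. */
int GetValue(struct CellNode *ptr) {
    int value = 0;
    if (ptr != NULL) {
        if ((ptr->leftChild == NULL) && (ptr->rightChild == NULL))
            value = 1;
        else
            value = value + GetValue(ptr->leftChild) + GetValue(ptr->rightChild);
    }
    return value;
}

static struct CellNode *make_node(int e, struct CellNode *l, struct CellNode *r) {
    struct CellNode *p = malloc(sizeof *p);
    p->element = e; p->leftChild = l; p->rightChild = r;
    return p;
}

int main(void) {
    /* Tree: root 1 with children 2 and 3; node 2 has children 4 and 5.
       Leaves are 4, 5 and 3, so GetValue should return 3. */
    struct CellNode *root =
        make_node(1, make_node(2, make_node(4, NULL, NULL),
                                  make_node(5, NULL, NULL)),
                     make_node(3, NULL, NULL));
    printf("%d\n", GetValue(root));   /* prints 3 */
    return 0;
}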
Q82. Consider the following C code segment:
int IsPrime (int n) {
  int i;
  for (i = 2; i <= sqrt(n); i++)
    if (n % i == 0) {
      printf("Not Prime\n");
      return 0;
    }
  return 1;
}
Let T(n) denote the number of times the for loop is executed by the program on input n. Which of the following is TRUE? (A) T(n) = O(sqrt(n)) and T(n) = Omega(sqrt(n)) (B) T(n) = O(sqrt(n)) and T(n) = Omega(1) (C) T(n) = O(n) and T(n) = Omega(sqrt(n)) (D) None of the above. CS2007. Ans. B. Explanation: when n is prime the loop runs about sqrt(n) times, so T(n) = O(sqrt(n)); when n is even the loop exits after its first iteration, so the best case is constant and the only lower bound that holds for every input is Omega(1).
Q83. You are given the postorder traversal, P, of a binary search tree on the n elements 1, 2, ..., n. You have to determine the unique binary search tree that has P as its postorder traversal. What is the time complexity of the most efficient algorithm for doing this? (A) Theta(log n) (B) Theta(n) (C) Theta(n log n) (D) None of the above, as the tree cannot be uniquely determined. CS2008. Ans. B. Explanation: the tree is uniquely determined. The last element of P is the root, and because the keys are 1, 2, ..., n the inorder sequence is already known to be sorted. Scanning P once while keeping track of the allowed (minimum, maximum) range for the node being built (or using a stack) reconstructs the tree in Theta(n) time; building it from the inorder and postorder sequences also works but costs O(n log n) and is not the most efficient approach. Try to write the algorithm for this question.
Q84. The minimum number of comparisons required to determine if an integer appears more than n/2 times in a sorted array of n integers is (A) Theta(n) (B) Theta(log n) (C) Theta(log* n) (D) Theta(1). CS2008. Ans. B. Explanation: if some value occurs more than n/2 times in a sorted array it must occupy the middle position; binary-search for the first and last occurrences of the middle element and check whether the count exceeds n/2, which takes Theta(log n) comparisons.
Q85. We have a binary heap on n elements and wish to insert n more elements (not necessarily one after another) into this heap. The total time required for this is (A) Theta(log n) (B) Theta(n) (C) Theta(n log n) (D) Theta(n2). CS2008. Ans. B. Explanation: because the new elements need not be inserted one at a time, we can simply append all n of them to the array and run the bottom-up build-heap procedure on the 2n elements, which takes Theta(n) time. Inserting them one by one would cost log(n+1) + log(n+2) + ... + log(n+n) = log((n+1)(n+2)...(2n)), roughly log(n^n) = Theta(n log n), but that is not the most efficient method.
Statement for Linked Answer Questions 86 & 87: Consider the following C program that attempts to locate an element x in an array Y[] using binary search. The program is erroneous.
1. f (int Y[10], int x) {
2.   int i, j, k;
3.   i = 0; j = 9;
4.   do {
5.     k = (i + j) / 2;
6.     if (Y[k] < x) i = k; else j = k;
7.   } while ((Y[k] != x) && (i < j));
8.   if (Y[k] == x) printf("x is in the array");
9.   else printf("x is not in the array");
}
Q86. On which of the following contents of Y and x does the program fail? (A) Y is {1 2 3 4 5 6 7 8 9 10} and x < 10 (B) Y is {1 3 5 7 9 11 13 15 17 19} and x < 1 (C) Y is {2 2 2 2 2 2 2 2 2 2} and x > 2 (D) Y is {2 4 6 8 10 12 14 16 18 20} and 2 < x < 20 and x is even. CS2008. Ans. C. Explanation: take Y as in (C) and x = 3. Initially i = 0 and j = 9, so k = 4 and Y[4] < x, making i = 4. The next iterations give k = 6 (i becomes 6), k = 7 (i becomes 7), k = 8 (i becomes 8), and then k = 8 again with i stuck at 8. Since Y[k] != x and i < j (8 < 9) forever, the loop never terminates, so the program fails on the contents in (C).
Q87. The correction needed in the program to make it work properly is (A) change line 6 to: if (Y[k] < x) i = k + 1; else j = k - 1; (B) change line 6 to: if (Y[k] < x) i = k - 1; else j = k + 1; (C) change line 6 to: if (Y[k] <= x) i = k; else j = k; (D) change line 7 to: } while ((Y[k] == x) && (i < j)); CS2008. Ans. A. Explanation: the loop in the previous question stops making progress once i and j become adjacent, because setting i = k or j = k can leave the search interval unchanged. Moving i to k + 1 and j to k - 1, as in option (A), shrinks the interval on every iteration, so the loop always terminates. A conventional, clearly terminating binary search is sketched below for comparison.
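Q87 only asks which one-line change repairs the exam's program; for comparison, a conventional and clearly terminating binary search over Y[0..9] looks like the sketch below. This is a standard formulation written by us, not the exam's corrected program.

#include <stdio.h>

/* Standard binary search: the interval [lo, hi] shrinks on every
   iteration because lo moves to k+1 and hi to k-1, so it always terminates. */
static int contains(const int Y[], int nelem, int x) {
    int lo = 0, hi = nelem - 1;
    while (lo <= hi) {
        int k = lo + (hi - lo) / 2;
        if (Y[k] == x)      return 1;
        else if (Y[k] < x)  lo = k + 1;
        else                hi = k - 1;
    }
    return 0;
}

int main(void) {
    int Y[10] = {2, 2, 2, 2, 2, 2, 2, 2, 2, 2};
    /* The buggy program loops forever on this input with x = 3;
       the standard version simply reports that 3 is absent. */
    printf("%d\n", contains(Y, 10, 3));   /* prints 0 */
    printf("%d\n", contains(Y, 10, 2));   /* prints 1 */
    return 0;
}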
Q88. Let P be a singly linked list and let Q be the pointer to an intermediate node x in the list. What is the worst-case time complexity of the best known algorithm to delete the node x from the list? (A) O(n) (B) O(log2 n) (C) O(log n) (D) O(1). IT2004. Ans. D. Explanation: in a singly linked list we cannot traverse backwards, so reaching x's predecessor from the head would take O(n). The best known algorithm avoids the predecessor altogether: because x is an intermediate node it has a successor, so we copy the successor's data into x and then unlink and free the successor, which takes O(1) time.
Q89. An array of integers of size n can be converted into a heap by adjusting the heaps rooted at each internal node of the complete binary tree, starting at the node floor((n-1)/2) and doing this adjustment up to the root node (the root is at index 0), in the order floor((n-1)/2), floor((n-3)/2), ..., 0. The time required to construct a heap in this manner is (A) O(log n) (B) O(n) (C) O(n log log n) (D) O(n log n). IT2004. Ans. B. Explanation: the adjustment (sift-down) is run only at the roughly n/2 internal nodes, and a node at height h costs O(h); summing O(h) over all nodes of the tree gives O(n) in total, so this bottom-up construction is O(n), tighter than the naive O(n log n) bound.
Q90. In a binary tree with n nodes, every node has an odd number of descendants. Every node is considered to be its own descendant. What is the number of nodes in the tree that have exactly one child? (A) 0 (B) 1 (C) (n-1)/2 (D) n-1. CS2010. Ans. A. Explanation: if a node had exactly one child, its number of descendants would be 1 (itself) plus the child's descendant count; the child's count is odd, so the total would be even, contradicting the assumption that every node has an odd number of descendants. Hence no node has exactly one child.
Q91. What is the number of swaps required to sort n elements using selection sort, in the worst case? (A) Theta(n) (B) Theta(n log n) (C) Theta(n2) (D) Theta(n2 log n). CS2009. Ans. A. Explanation: selection sort makes at most one swap per pass of the outer loop, so even in the worst case it performs at most n - 1 swaps, which is Theta(n). (A small swap-counting sketch appears at the end of this section.)
Q92. The numbers 1, 2, ..., n are inserted into a binary search tree in some order. In the resulting tree, the right subtree of the root contains p nodes. The first number to be inserted in the tree must be (A) p (B) p + 1 (C) n - p (D) n - p + 1. IT2005. Ans. C. Explanation: the first number inserted becomes the root. In a binary search tree the right subtree of the root holds the values larger than the root and the left subtree the smaller ones, so the right subtree has p nodes and the left subtree has n - p - 1 nodes, which makes the root the value n - p.
Q93. Which one of the following binary trees has its in-order and preorder traversals as BCAD and ABCD, respectively? (The four candidate trees are given as figures.) IT2004. Explanation: the root is A, the first node of the preorder. The inorder BCAD places B and C in A's left subtree and D in its right subtree; within the left subtree the preorder visits B before C while the inorder also lists B before C, so C must be the right child of B. The correct option is therefore the tree with root A, left child B (whose right child is C), and right child D.
Q94. Let f(n), g(n) and h(n) be functions defined for positive integers such that f(n) = O(g(n)), g(n) is not O(f(n)), g(n) = O(h(n)), and h(n) = O(g(n)). Which one of the following statements is FALSE? (A) f(n) + g(n) = O(h(n) + h(n)) (B) f(n) = O(h(n)) (C) h(n) = O(f(n)) (D) f(n)h(n) = O(g(n)h(n)). Ans. C. Explanation: the assumptions say that f grows strictly more slowly than g, and that g and h grow at the same rate. Then (A), (B) and (D) all follow, but h(n) = O(f(n)) would imply g(n) = O(f(n)), contradicting the assumption that g(n) is not O(f(n)); so option (C) is FALSE.
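Returning to Q91, the number of swaps selection sort performs is easy to measure. The sketch below (function name and test data are ours) counts swaps for a reversed array and confirms the count never exceeds n - 1.

#include <stdio.h>

/* Selection sort with a swap counter: at most one swap per outer pass. */
static long selection_sort(int a[], int n) {
    long swaps = 0;
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[min]) min = j;
        if (min != i) {
            int t = a[i]; a[i] = a[min]; a[min] = t;
            swaps++;
        }
    }
    return swaps;
}

int main(void) {
    enum { N = 10 };
    int a[N];
    for (int i = 0; i < N; i++) a[i] = N - i;     /* 10, 9, ..., 1 */
    printf("swaps = %ld (never more than n - 1 = %d)\n",
           selection_sort(a, N), N - 1);
    return 0;
}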