MCQs: Asymptotic Notations, Worst/Average/Best Case Time Complexities, Analysis of Loops

1. What is the time complexity of fun()?

int fun(int n)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        for (int j = i; j > 0; j--)
            count = count + 1;
    return count;
}

  • Theta (n)
  • Theta (n^2)
  • Theta (n*Logn)
  • Theta (nLognLogn)

Ans: Theta (n^2)

Explanation

The time complexity can be calculated by counting the number of times the expression “count = count + 1;” is executed. The expression is executed 0 + 1 + 2 + 3 + … + (n-1) times.
Time complexity = Theta(0 + 1 + 2 + 3 + … + (n-1)) = Theta(n*(n-1)/2) = Theta(n^2)

2. Let W(n) and A(n) denote, respectively, the worst case and average case running time of an algorithm executed on an input of size n. Which of the following is ALWAYS TRUE? (GATE CS 2012)

  • A(n) = Omega(W(n))
  • A(n) = Theta(W(n))
  • A(n) = O(W(n))
  • A(n) = o(W(n))

Ans: A(n) = O(W(n))

Explanation

The worst case running time of an algorithm is always greater than or equal to its average case running time, so A(n) = O(W(n)).

3. Which of the following is not O(n^2)?

  • (15^10) * n + 12099
  • n^1.98
  • n^3 / (sqrt(n))
  • (2^20) * n

Ans: n^3 / (sqrt(n))

Explanation

The order of growth of n^3 / sqrt(n) is n^(2.5), which is higher than n^2, so it is not O(n^2). All other options are linear in n (or lower order).

4. Which of the given options provides the increasing order of asymptotic complexity of functions f1, f2, f3 and f4?

f1(n) = 2^n
f2(n) = n^(3/2)
f3(n) = nLogn
f4(n) = n^(Logn)
  • f3, f2, f4, f1
  • f3, f2, f1, f4
  • f2, f3, f1, f4
  • f2, f3, f4, f1

Ans: f3, f2, f4, f1

Explanation

f1(n) = 2^n
f2(n) = n^(3/2)
f3(n) = nLogn
f4(n) = n^(Logn)

Among the four, f3 = nLogn and f2 = n^(3/2) are the only polynomially bounded functions; f4 and f1 are super-polynomial. Since Logn grows slower than n^(1/2), f3 grows slower than f2, so f3 comes first and f2 next.

One way to compare f1 and f4 is to take Log of both functions. Order of growth of Log(f1(n)) is Θ(n) and order of growth of Log(f4(n)) is Θ(Logn * Logn). Since Θ(n) has higher growth than Θ(Logn * Logn), f1(n) grows faster than f4(n).


Following is another way to compare f1 and f4. Let us take a few values:
n = 32: f1 = 2^32, f4 = 32^5 = 2^25
n = 64: f1 = 2^64, f4 = 64^6 = 2^36
……………

Also see http://www.wolframalpha.com/input/?i=2^n+vs+n^%28log+n%29
Thanks to fella26 for suggesting the above explanation.

5. Consider the following program fragment for reversing the digits in a given integer to obtain a new integer. Let n = D1D2…Dm, where Di denotes the i-th digit of n.

 
int n, rev;
rev = 0;
while (n > 0)
{
    rev = rev*10 + n%10;
    n = n/10;
}

The loop invariant condition at the end of the ith iteration is: (GATE CS 2004)

  • n = D1D2…D(m-i) and rev = DmD(m-1)…D(m-i+1)
  • n = D(m-i+1)…D(m-1)Dm and rev = D(m-1)…D2D1
  • n != rev
  • n = D1D2…Dm and rev = DmD(m-1)…D2D1

Ans: n = D1D2…D(m-i) and rev = DmD(m-1)…D(m-i+1)

Explanation

We can verify it by taking an example like n = 54321 (so m = 5). After 2 iterations, rev would be 12 and n would be 543: n holds the leading m-i digits and rev holds the trailing i digits in reverse order.

6. Consider the following function

int unknown(int n) {
    int i, j, k = 0;
    for (i = n/2; i <= n; i++)
        for (j = 2; j <= n; j = j * 2)
            k = k + n/2;
    return k;
}

What is the return value of the function? (GATE CS 2013)

  • Theta(n^2)
  • Theta(n^2logn)
  • Theta(n^3)
  • Theta(n^3logn)

Ans: Theta(n^2logn)

Explanation

In the below explanation, ‘^’ is used to represent exponent:
The outer loop runs n/2 + 1, i.e., Theta(n) times.
The inner loop runs Theta(Logn) times (note that j is multiplied by 2 in every iteration).
So the statement “k = k + n/2;” runs Theta(nLogn) times.
Each execution increases the value of k by n/2.
So the final value of k is (n/2)*Theta(nLogn), which is Theta((n^2) * Logn).

7. The recurrence equation

T(1) = 1
T(n) = 2T(n - 1) + n, n ≥ 2

evaluates to

a. 2^(n+1) – n – 2
b. 2^n – n
c. 2^(n+1) – 2n – 2
d. 2^n + n

Ans: 2^(n+1) – n – 2

Explanation

If we draw the recursion tree, we can notice that the total work done is
T(n) = n + 2(n-1) + 4(n-2) + 8(n-3) + … + 2^(n-1) * 1

To solve this series, let us use the school trick: multiply T(n) by 2 and subtract after shifting terms.
2T(n) = 2n + 4(n-1) + 8(n-2) + 16(n-3) + … + 2^n
T(n) = n + 2(n-1) + 4(n-2) + 8(n-3) + … + 2^(n-1)

Subtracting, we get
2T(n) – T(n) = -n + (2 + 4 + 8 + … + 2^(n-1)) + 2^n
T(n) = -n + 2^(n+1) – 2   [applying the GP sum formula to 2 + 4 + … + 2^n]
     = 2^(n+1) – n – 2




An alternate way to solve is the hit-and-trial method.
Given T(n) = 2T(n-1) + n and T(1) = 1.

For n = 2, T(2) = 2T(2-1) + 2
= 2T(1) + 2
= 2*1 + 2 = 4

Now, putting n = 2 in all options, only the first option 2^(n+1) – n – 2 gives 4.

8. Consider the following three claims
I (n + k)^m = theta(n^m), where k and m are constants
II 2^(n + 1) = O(2^n)
III 2^(2n + 1) = O(2^n)
Which of these claims are correct? (GATE CS 2003)

  • I and II
  • I and III
  • II and III
  • I, II and III

Ans: I and II

Explanation

(I) (n + k)^m = n^m + c1*n^(m-1) + … + k^m = Theta(n^m), since k and m are constants.
(II) 2^(n+1) = 2 * 2^n = O(2^n).
(III) is false: 2^(2n+1) = 2 * 4^n, and 4^n is not O(2^n) because 4^n / 2^n = 2^n is unbounded.

9. Consider the following C code segment 
 

int f (int x)
{
    if (x < 1) return 1;
    else return (f(x-1) + g(x));
}

int g (int x)
{
    if (x < 2) return 2;
    else return (f(x-1) + g(x/2));
}

Of the following, which best describes the growth of f(x) as a function of x? 

  • Cubic
  • Quadratic
  • Exponential
  • Linear

Ans: Exponential

Explanation

f(n) = f(n-1) + g(n) —- (1)
g(n) = f(n-1) + g(n/2) —- (2)
Putting the value of g(n) from equation (2) into equation (1),
f(n) = 2*f(n-1) + g(n/2)
Since g(n/2) is positive, we can derive the inequality
f(n) > 2*f(n-1)
=> f(n) > 2*2*f(n-2)   [because f(n-1) > 2*f(n-2)]
and so on…
=> f(n) > 2^(n-1) * f(1)   (here ‘^’ denotes the exponent)
So f(n) = Omega(2^n), i.e., f(x) grows at least exponentially. Hence the correct answer is Exponential.

10. What is the time complexity of the following function fun()? Assume that log(x) returns the log value in base 2.

void fun(int n)
{
    int i, j;
    for (i=1; i<=n; i++)
        for (j=1; j<=log(i); j++)
            printf("GeeksforGeeks");
}

  • Θ(n)
  • Θ(nLogn)
  • Θ(n^2)
  • Θ(n^2(Logn))

Ans: Θ(nLogn)

Explanation

The time complexity of the above function can be written as Θ(log 1) + Θ(log 2) + Θ(log 3) + … + Θ(log n), which is Θ(log n!).

The order of growth of ‘log n!’ and ‘n log n’ is the same for large values of n, i.e., Θ(log n!) = Θ(n log n). So the time complexity of fun() is Θ(n log n).

The expression Θ(log n!) = Θ(n log n) can be easily derived from the following Stirling’s approximation (Stirling’s formula):
log n! = n log n – n + O(log n)

Option (B) is correct.

Happy Learning – If you require any further information, feel free to contact me.

Saurav Hathi