Let’s say we have a problem of size n. At each step of our algorithm (which we need to write), the problem shrinks to half of its previous size (n/2).

So at each step, the problem size halves:

| Step | Problem size |
| ---- | ------------ |
| 1 | n/2 |
| 2 | n/4 |
| 3 | n/8 |
| 4 | n/16 |

Eventually the problem cannot be reduced any further: the size reaches 1 (i.e. the problem is solved completely), the check condition fails, and the algorithm exits.

1. Let’s say the problem size reaches 1 at the k<sup>th</sup> step (i.e. after k operations):

   problem-size = 1

2. But we know that at the k<sup>th</sup> step, the problem size is:

   problem-size = n/2<sup>k</sup>

3. From 1 and 2:

   n/2<sup>k</sup> = 1, or

   n = 2<sup>k</sup>

4. Take log on both sides:

   log<sub>e</sub> n = k log<sub>e</sub> 2

   or

   k = log<sub>e</sub> n / log<sub>e</sub> 2

5. Using the change-of-base formula log<sub>x</sub> m / log<sub>x</sub> n = log<sub>n</sub> m:

   k = log<sub>2</sub> n

   or simply k = log n

Now we know that our algorithm performs at most log n steps, hence its time complexity is

O(log n)


A very simple code example illustrating the above is:

    for (int i = 1; i <= n; i = i * 2)
    {
        // perform some operation; i doubles each iteration,
        // so the loop runs O(log n) times
    }