Understanding Time Complexity: The Key to Efficient Algorithm Design

What is time complexity?

The time complexity of an algorithm quantifies the amount of time it takes to run as a function of the size of its input. It is typically estimated by counting how many times each statement in the algorithm executes.

Time complexity is usually expressed using Big O notation, which describes an upper bound on the growth of the running time. Because it is an upper bound, it gives a worst-case estimate of the time required.

Common time complexities include:

- O(1): Constant time complexity. The algorithm takes a constant amount of time, regardless of the input size.

An example is accessing an array element by its index.
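As a minimal sketch of this (class and method names are illustrative, not from the article), an array index lookup in Java takes the same time no matter how large the array is:

```java
public class ConstantTimeExample {
    // O(1): a single index lookup, independent of the array's length.
    static int getElement(int[] arr, int index) {
        return arr[index];
    }

    public static void main(String[] args) {
        int[] data = {10, 20, 30, 40, 50};
        System.out.println(getElement(data, 2)); // prints 30
    }
}
```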

- O(log n): Logarithmic time complexity. The time complexity grows logarithmically in proportion to the input size. An example is binary search.

Example: here the time increases logarithmically as the input doubles:

1 second to process 2 elements
2 seconds to process 4 elements
3 seconds to process 8 elements
4 seconds to process 16 elements

We can express this as log 16 = 4 (with base 2).

Another example: consider a sorted array of 16 elements and search for an element using binary search, taking the worst case.

The time complexity is O(log n); in this example O(log 16) = 4, assuming base 2, because the search space is halved at each step:

Halve the array of 16 elements to 8.
Then halve 8 elements to 4.
Then halve 4 elements to 2.
Then halve 2 elements to 1.
The last remaining element is the answer (or the target is not present).
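To make the halving concrete, here is a minimal iterative binary search sketch in Java (method and variable names are illustrative); on a sorted array of 16 elements it performs at most about log 16 = 4 halving steps:

```java
public class BinarySearchExample {
    // O(log n): the search range is halved on every iteration.
    static int binarySearch(int[] sorted, int target) {
        int low = 0, high = sorted.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;    // midpoint, written to avoid overflow
            if (sorted[mid] == target) {
                return mid;                      // found
            } else if (sorted[mid] < target) {
                low = mid + 1;                   // discard the left half
            } else {
                high = mid - 1;                  // discard the right half
            }
        }
        return -1;                               // not present
    }

    public static void main(String[] args) {
        int[] sorted = {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31};
        System.out.println(binarySearch(sorted, 21)); // prints 10
    }
}
```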

- O(n): Linear time complexity. The time complexity grows linearly with the size of the input.

An example is a simple loop that goes through each element in an array.
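A minimal sketch of such a loop in Java (names are illustrative): the loop body runs once per element, so the work grows in direct proportion to n:

```java
public class LinearTimeExample {
    // O(n): one pass over the array, visiting each element exactly once.
    static int sum(int[] arr) {
        int total = 0;
        for (int value : arr) {
            total += value;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3, 4, 5})); // prints 15
    }
}
```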

- O(n log n): Log-linear time complexity. This is commonly seen in efficient sorting algorithms like quicksort and mergesort.
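As a sketch of the O(n log n) pattern (a simple top-down mergesort in Java, names illustrative): the array is halved about log n times, and each level of halving does O(n) merging work:

```java
import java.util.Arrays;

public class MergeSortExample {
    // O(n log n): about log n levels of splitting, O(n) merge work per level.
    static void mergeSort(int[] a, int left, int right) {
        if (left >= right) return;               // 0 or 1 element: already sorted
        int mid = left + (right - left) / 2;
        mergeSort(a, left, mid);                  // sort the left half
        mergeSort(a, mid + 1, right);             // sort the right half
        merge(a, left, mid, right);               // combine the two sorted halves
    }

    static void merge(int[] a, int left, int mid, int right) {
        int[] tmp = new int[right - left + 1];
        int i = left, j = mid + 1, k = 0;
        while (i <= mid && j <= right) {
            tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        }
        while (i <= mid) tmp[k++] = a[i++];
        while (j <= right) tmp[k++] = a[j++];
        System.arraycopy(tmp, 0, a, left, tmp.length);
    }

    public static void main(String[] args) {
        int[] data = {5, 2, 9, 1, 7, 3};
        mergeSort(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data)); // [1, 2, 3, 5, 7, 9]
    }
}
```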

- O(n^2): Quadratic time complexity. The time complexity grows quadratically with the size of the input.

An example is a simple nested loop structure in algorithms like bubble sort.
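A minimal bubble sort sketch in Java (names illustrative): the nested loops perform roughly n * n comparisons in the worst case, which is where the quadratic growth comes from:

```java
import java.util.Arrays;

public class BubbleSortExample {
    // O(n^2): for each of the n passes, up to n-1 adjacent comparisons are made.
    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            for (int j = 0; j < a.length - 1 - i; j++) {
                if (a[j] > a[j + 1]) {            // swap out-of-order neighbours
                    int tmp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 1, 4, 2, 8};
        bubbleSort(data);
        System.out.println(Arrays.toString(data)); // [1, 2, 4, 5, 8]
    }
}
```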

- O(2^n) or O(n!): Exponential or factorial time complexity. The time complexity grows exponentially or factorially with the size of the input. These are seen in less efficient algorithms, or problems where the solution involves checking all possible permutations, such as the traveling salesman problem.
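The traveling salesman problem is too large to sketch here; as a smaller, self-contained illustration of exponential growth (my own example, not from the article), a naive recursive Fibonacci makes on the order of 2^n calls because each call spawns two more:

```java
public class ExponentialTimeExample {
    // O(2^n): each call branches into two further calls, so the call tree
    // roughly doubles in size every time n grows by one.
    static long naiveFib(int n) {
        if (n <= 1) return n;
        return naiveFib(n - 1) + naiveFib(n - 2);
    }

    public static void main(String[] args) {
        System.out.println(naiveFib(10)); // prints 55; naiveFib(50) would take far longer
    }
}
```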

In general, when evaluating algorithms, a lower time complexity is usually preferred as it indicates that the algorithm is more efficient in terms of its time usage.

