
Time Complexity

What is Time Complexity?

Time complexity is a key concept in computer science, particularly in data structures and algorithms (DSA). It measures how long an algorithm takes to run, and it is usually expressed as a function of the input size.

What Is Space Complexity?

In computer science, space complexity is a measure of the amount of memory an algorithm requires to solve a problem, expressed as a function of the input size.

Factors Affecting Time Complexity:

The time complexity of an algorithm can be affected by several factors, including:

  • Size of the input: The time an algorithm takes to solve a problem typically increases as the size of the input increases.
  • Data structures used: The choice of data structure can have a significant impact on time complexity. Indexed access into an array, for example, takes constant time, whereas reaching the same position in a linked list takes linear time.
  • Specific operations performed by the algorithm: Different operations can have different time complexities. For example, a linear search has a time complexity of O(n), while a binary search on sorted data has a time complexity of O(log n).

Other factors that can affect the time complexity of an algorithm include the use of recursion, the efficiency of the implementation, and the presence of nested loops or nested recursive calls.
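To make the search comparison above concrete, here is a minimal sketch contrasting linear search, O(n), with binary search on sorted data, O(log n). The function names are my own; binary search assumes the input is sorted in ascending order.

```cpp
#include <vector>
#include <cstddef>

// Linear search: checks every element in turn -> O(n) comparisons.
int linearSearch(const std::vector<int>& v, int target) {
    for (std::size_t i = 0; i < v.size(); i++)
        if (v[i] == target) return static_cast<int>(i);
    return -1; // not found
}

// Binary search: halves the sorted range each step -> O(log n) comparisons.
int binarySearch(const std::vector<int>& v, int target) {
    int lo = 0, hi = static_cast<int>(v.size()) - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2; // avoids overflow of (lo + hi)
        if (v[mid] == target) return mid;
        if (v[mid] < target) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1; // not found
}
```

Both functions return the index of the target (or -1), but on a sorted array of a million elements the binary search needs at most about 20 comparisons, while the linear search may need a million.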

Various types of Time Complexities:

The performance of an algorithm is usually described by one of the following common time-complexity classes:

  • Constant (O(1)): The algorithm needs the same amount of time to complete regardless of the input size.
  • Linear (O(n)): The running time grows linearly with the input size.
  • Logarithmic (O(log n)): The running time grows logarithmically with the input size.
  • Polynomial (O(n^k)): The running time grows as the input size raised to a constant power k.
  • Exponential (O(2^n)): The running time grows exponentially with the input size.
  • Factorial (O(n!)): The running time grows as the factorial of the input size.

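As a small illustration of the first two classes, here is a sketch (my own example) contrasting a constant-time operation with a linear-time one:

```cpp
#include <vector>
#include <cstddef>

// O(1): a single indexed access, independent of the container's size.
int firstElement(const std::vector<int>& v) {
    return v[0];
}

// O(n): the loop body runs once per element.
long long sumAll(const std::vector<int>& v) {
    long long total = 0;
    for (std::size_t i = 0; i < v.size(); i++)
        total += v[i];
    return total;
}
```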
What Are Asymptotic Notations?

Asymptotic notations are a class of mathematical notations used in computer science and mathematics to describe the behavior of a function as its input grows arbitrarily large. In the context of algorithms, they describe an algorithm’s growth rate.

Note: We compare space and time complexity using asymptotic analysis.

Types of Asymptotic Notations:

Asymptotic notations are classified into three types:

  • Big-Oh (O) notation: describes the upper bound of a function’s growth rate. It represents an algorithm’s worst-case time complexity.
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c * g(n) for all n ≥ n0 }
  • Omega (Ω) notation: describes the lower bound of a function’s growth rate. It represents an algorithm’s best-case time complexity.
Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ c * g(n) ≤ f(n) for all n ≥ n0 }
  • Theta (Θ) notation: describes a tight bound on a function’s growth rate. It represents an algorithm’s average-case time complexity.
Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0 }
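As a worked instance of the Big-Oh definition above (the function and constants here are my own choice, not from a specific algorithm):

```latex
% Claim: f(n) = 3n + 2 belongs to O(n).
% Pick c = 4 and n_0 = 2. Then for all n \ge n_0:
0 \le 3n + 2 \le 3n + n = 4n = c \cdot g(n), \qquad g(n) = n,
% so the definition is satisfied and f(n) = O(n).
```

Any larger c (or larger n0) would also work; the definition only requires that some such pair of constants exists.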
Example:

Let us take an example to see how one can determine the time complexity of a program.

Consider computing the sum of all the elements of a matrix:

#include <iostream>
using namespace std;

int main()
{
    int n = 3; // number of rows
    int m = 3; // number of columns
    int arr[][3] = { { 1, 2, 4 }, { 5, 6, 7 }, { 3, 0, 8 } };
    int sum = 0;

    // Iterating over all rows of the 2-D array
    for (int i = 0; i < n; i++) {

        // Adding each element of the ith row to the sum
        for (int j = 0; j < m; j++) {
            sum += arr[i][j];
        }
    }
    cout << sum << endl;
    return 0;
}

Using two nested loops, the program iterates through all the elements in the 2D array. The outer loop runs n times, and for each of its iterations the inner loop runs m times, so the innermost statement executes n*m times in total. As a result, the program’s time complexity is O(n*m).
