It’s hard to create efficient algorithms without understanding the time and space complexity of various operations. The concept of Big O notation helps programmers understand how quickly or slowly an algorithm will execute as the input size grows.

In this article, we’ll cover the basics of Big O notation, why it is used and how describe the time and space complexity of algorithms with example.

## Table Of Contents

## Why do we use Big O?

As software engineer, our goal is to solve any given problem. And there could be many solutions, so as good engineer, we need to choose the most efficient solution. So how can we determine which solution is more efficient than others? To know that, let’s first start with what efficiency is.

## What is efficiency?

Regarding programming, efficiency is all about **how much computation resources are used by an algorithm**. The least resources used, the more efficient algorithm is.

Generally, we only concern with only two types of resources:

- Memory space needed (Space Complexity)
- Time of the algorithm (Time complexity)

It seems like we do not need Big O to measure time complexity. We can use a stopwatch to calculate how long (milliseconds or seconds) an algorithm took to complete, and space complexity can be measured with KB and MB.

However, there is a huge problem with this kind of thinking.

For example, I have two computers with Intel i7 (16GB RAM) and Intel i3 (4GB RAM). So the runtime will not be the same when I run algorithm A on both devices. Obviously, the Intel i7 is more powerful than the i3 so it will solve the problem earlier.

But what if we have both computers with the same configurations? Still, the runtime will sometimes differ because other factors outside our control can influence the computer’s performance, such as concurrency and multi-processing.

To solve this issue, we analyze algorithms on paper using Big O notation for the following reasons.

- Big O notation can objectively describe the efficiency of code without the use of concrete units such as (milliseconds and seconds)
- Focus on how the time and space requirements scale in terms of the size of the Input.
- Prepare for the worst-case scenario.

Now you understand why we need the Big O let’s go over what Time and Space complexity.

## Time complexity

The time complexity of an algorithm is** the number of steps it takes to complete** its task. Time complexity is often used to compare different algorithms, as they all have different performance characteristics.

For example, one algorithm may run much more slowly than another, even if it appears that both should perform equally well on your problem set.

## Space complexity

Space complexity is the** amount of memory an algorithm uses**. It’s measured in bytes, and it’s also called memory usage.

**What causes the space complexity?**

- Variables
- Data Structures
- Function Call
- Allocations

Space complexity is more important for large data sets because they require much more memory to store than small ones do.

For example, if you have a list containing hundreds of thousands of numbers and you want to sort that list using bubble sort, then bubble sort will take up a lot of space since it must keep all those numbers in memory at once. However, if your list only has 3 items in it (1 being unique), then sorting won’t take up as much space because there are fewer items to store!

## What is Big O?

Big O notation is the way to measure how an algorithm’s running time or space requirements grow as the input size grows.

For example, one dentist takes 30 minutes to treat one patient. As her line of patients increases, the time it takes for her to treat all patients will scale linearly with the number of patients waiting in line.

This is because it always takes her a constant amount of time to treat each patient, which is 30 minutes. This gives us a general understanding of how long our dentist would take to treat 10 patients, 20 patients or even 100,000 patients.

This is because since we know that the dentist takes a constant amount of time, which is 30 minutes, to treat each patient, we can always calculate the time it would take for her to treat any number of patients by multiplying the number of patients times 30 minutes.

With this in mind, we can categorise her efficiency as being linear. Or, as we would say in Big O terms O(n), where n is equal to the number of patients and the time that it takes for her to finish her work scales linearly or proportionally with the number of patients, we can use the same technique to determine the efficiency of algorithms.

Let’s calculate dentist efficiency using a function.

```
var time = 0;
const patients = ['James', 'Robert', 'John', 'Patricia', 'Mary', 'Jennifer', 'Michael'];
function calculateTime() {
for(let i = 0; i < patients.length; i++){
time += 30;
}
console.log("Total Time 👉 ", time);
}
```

**Note:** I have created the function only to explain how Big O works with programming logic. That’s why I did not calculate time by multiplying the number of patients and time.

We created the `calculateTime`

function inside the function we wrote the `for`

loop, which iterates each element of a `patients`

array.

```
function calculateTime() {
for(let i = 0; i < patients.length; i++){ // 👉 O(n)
time += 30; // 👉 O(1)
}
console.log("Total Time 👉 ", time); // 👉 O(1)
}
```

The time complexity for the above function is `O(n)`

, but how I calculated the complexity of the function.

Let’s learn how to calculate Big O notation for any given algorithm.

## Big O Rules

First let’s go over the rules of calculating Big O.

- Worst Case
- Remove Constants
- Drop Non Dominants

**1. Worst Case: **

There are three types of analysis to analyze an algorithm.

- Best Case
- Average Case
- Worst Case

### 1. Best case

Best case means how fast algorithm can run for provided input.

```
let friends = ['Joey', 'Rachel', 'Phoebe', 'Ross', 'Monica', 'Chandler'];
function findJoey(){
for(let i = 0; i < friends.length; i++){
if(friends[i] === 'Joey'){
console.log("Found!");
break;
}
}
}
```

We created the function `findJoey`

to find “Joey” from the `friends`

array.

Since “Joey” is the first element of an array will only run once then it will break loop because we used the `break`

statement.

So we can say the best case for the function is **O(1)**.

### 2. Average case

It provides a prediction about the algorithm’s running time when the input is random.

```
let friends = ['Rachel', 'Phoebe', 'Ross', 'Joey', 'Monica', 'Chandler'];
function findJoey(){
for(let i = 0; i < friends.length; i++){
if(friends[i] === 'Joey'){
console.log("Found!");
break;
}
}
}
```

In the function we provide input array in which element “Joey” will be on random places it could be second, third or forth.

### 3. Worst case

Worst case means how slow/long algorithm can run for provided input.

```
let friends = ['Rachel', 'Phoebe', 'Ross', 'Monica', 'Chandler', 'Joey'];
function findJoey(){
for(let i = 0; i < friends.length; i++){
if(friends[i] === 'Joey'){
console.log("Found!");
break;
}
}
}
```

In the above function we placed “Joey” at the end of the array, so it need to iterate through whole array to find the element. So, the algorithm’s time complexity is** O(n).**

Even though, input is random we will always analyze algorithm’s worst case for Big O.

**2. Remove Constants**

While we are calculating Big O, we should not consider any constants. Let’s understand it using an example.

```
let friends = ['Rachel', 'Phoebe', 'Ross', 'Monica', 'Chandler', 'Joey'];
function findJoey(){
for(let i = 0; i < friends.length; i++){ // O(n)
console.log("Step 👉", i + 1); // O(1)
if(friends[i] === 'Joey'){ // O(1)
console.log("Found!"); // O(1)
}
}
}
```

Let’s sum the complexity of an above function.

```
Compelxity = O(n) + O(1) + O(1) + O(1)
= O(n) + O(3)
= O(n + 3)
```

After sum our function’s complexity is `O(n + 3)`

.

When calculating the efficiency of a function, constant time does not matter. This is because if our array were some crazy length, like 2 million, then these lines of code would have a negligible effect on the efficiency of the function as a whole. We still need to iterate through 2 million items in an array.

Let’s calculate it again.

```
Compelxity = O(n + 3)
= O(n)
```

So, after removing all the constants its just O(n) now.

However if the function has only constants then the complexity of the function is constant time.

```
function steps(){
console.log("Step 1"); // O(1)
console.log("Step 2"); // O(1)
console.log("Step 3"); // O(1)
}
```

If we calculate complexity of the function it will be `O(3)`

. Instead of saying it `O(3)`

we will call it `O(1)`

as we assume it will always take the same amount of time to run.

**3. Drop Non Dominants**

In Big O we have a growth hierarchy.

Don’t worry about other complexities, we learn everything in this article.

The chart shows the efficiency categories in order from good to bad. The first case of one is the best case and the last is the worst.

As we discussed earlier, when determining the efficiency of an algorithm in terms of Big O, we only care about the worst case.

```
let friends = ['Rachel', 'Phoebe', 'Ross', 'Monica', 'Chandler', 'Joey'];
function findJoey(){
for(let i = 0; i < friends.length; i++){ // O(n)
console.log("Step 👉", i + 1); // O(1)
if(friends[i] === 'Joey'){ // O(1)
console.log("Found!"); // O(1)
}
}
}
```

We calculated the function’s complexity, which is `O(n)`

because it’s a dominant and worst case than `O(1)`

.

Now that you understand how the Big O works let’s discuss major Big O complexities (time and space).

- O(1) – Constant time
- O(log n) – Logarithmic time
- O(n) – Linear time
- O(n log n) – Polynomial time
- O(n ^ 2) – Quadratic time
- O(2 ^ n) – Exponential time
- O(n!) – Factorial time
- O(m+n) or O(mn)

**Big-O Complexity Chart**

Let’s discuss Big O complexities.

## 1. O(1) – Constant time

Constant time complexity doesn’t grow. No matter how large the input size gets, the same number of steps is needed.

Since constants do not matter, we can use the number one `O(1)`

to represent constant time complexities.

But how could constant time come into existence? Let’s look at a very simple example.

**You have an array and want to print the fourth element of an array. **

```
let friends = ['Rachel', 'Phoebe', 'Ross', 'Monica', 'Chandler', 'Joey'];
function printFourthIndex(){
console.log(friends[3]); // 0(1)
}
```

This function takes the same amount of steps, no matter how large the array is. It won’t get more complicated just because the array has a millions elements. You still need to do the same steps. So the runtime complexity of this is constant.

## 2. O(log n) – Logarithmic time

You probably remember the logarithmic function from high school. It tells you how many times you have to multiply the base by itself to get the given number as a result.

Sounds complicated! let’s understand by an example.

Let’s take the number eight as an example. So we want to raise some number to some power to get an eight.

In computer science, we can always assume that the number we want to raise to sum power is two.

We can see that if we raise two to the power of three, we get the number that we’re looking for eight, so log base two of eight is three.

Now you know that how logarithmic works let’s move to the meaning of O(log n).

Let’s understand it by simple function.

```
// n = 8
function logNFunc(n){
while(n > 1){
n = Math.floor(n / 2);
}
}
```

This function’s time complexity is `O(log n)`

. let’s dig deeper to find out why.

So, when we pass a number into this function, it divides it by two or splits it in half, and then calls itself with the new half or divided number.

```
n = 8
Iteration 1 = 8 / 2 = 4
Iteration 2 = 4 / 2 = 2
Iteration 3 = 2 / 2 = 1
```

As you can see, we need to iterate only three times to get one. So we can say these three iterations are log iterations.

The good thing about logarithmic time is that** its growth is slowing down over time**. So the curve gets flatter and flatter as we go on. When you get to really high numbers the graph looks almost like a constant runtime complexity.

If a problem can be solved in logarithmic time, you cannot really consider it a problem. It is extremely efficient.

## 3. O(n) – Linear time

We already discussed the linear time complexity using the dentist’s example.

Linear time complexity essentially means that the growth is constant. You need to perform the same amount of steps to the input size.

For example, if our input size is eight we need to iterate it eight times.

```
function nFunc(n){
for(let i = 0; i < n; i++){
console.log("Step 👉", i + 1);
}
}
```

If a problem is solvable in linear time it is also** not really a challenge**. It is very easy to solve and the **runtime complexity is extremely desirable**.

## 4. O(n log n) – Pseduo-Linear time

If we combine the last two runtime complexities (logarithmic and linear), we get the pseudo-linear time. It is sometimes also called linearithmic.

This time complexity is often found in** divide-and-conquer algorithms**, such as merge sort, quick sort and heap sort.

Let’s understand it by a simple example.

```
// n = 4
function nLogNFunc(n){
let temp = n;
while(n > 1){
n = Math.floor(n/2);
for(let i = 0; i < temp; i++){
console.log("Step 👉", i + 1);
}
}
}
```

The function accepts one argument `n`

and we will assume n = 4.

First of all, we created a `temp`

variable and assign n to it to copy the value of `n`

to `temp`

.

Next, we have a while loop in which we are dividing the value of `n`

by two. In the next line we have for loop which iterates `temp`

times means `n`

times since the `temp`

variable is a copy of the `n`

.

It’s a bit confusing. Let’s visualize it.

As you can see the top-level loop’s time complexity is `O(log n)`

and the inner loop’s time complexity is `O(n)`

.

Let’s put it all together.

```
Time Complexity = O(n * log n);
= O(n log n);
```

As you can see, the function is not quite linear but since the logarithmic part grows slower and slower over time, it looks like one when we get to larger numbers.

This is the last runtime complexity that we will consider very efficient. If a problem is solvable in O(n log n) time, it is not really a challenge.

## 5. O(n ^ 2) – Quadratic time

Quadratic time O(n ^ 2) increases faster as the input grows.

Examples of quadratic runtime are inefficient sorting algorithms like the bubble sort, insertion sort or selection sort but also traversing a simple 2D array.

Let’s understand it by an example.

```
// n = 4
function nSquaredNFunc(n){
for(let j = 0; j < n; j++){
for(let k = 0; k < n; k++){
console.log(j, k);
}
}
}
```

The above function takes `n`

as an argument. And the top level (first) for loop will iterate until `n`

and for each iteration, it will also iterate nested for loop.

When we run the function (n = 4) as output it will print matrix like below.

It has 4 columns and 4 rows so let’s calculate the total steps taken to print this matrix.

```
Time Complexity = row * columns
= 4 * 4
= 16
4² = 16
```

So, if we take n = 8, the function will take 8 * 8 = 64 iterations to print the matrix, so we can say the complexity of the function is O(n ^ 2).

You can see that the growth is happening a lot faster here. It is still manageable, but quadratic runtime can be a challenge sometimes.

## 6. O(2 ^ n) – Exponential time

Exponential time `O(2 ^ n)`

increases extremely fast as the input grows.

Let’s understand it by following the function.

```
function fib(n){
if(n === 0){
return 0;
}
if(n === 1){
return 1;
}
return fib(n - 1) + fib(n - 2)
}
```

In the above function, we use recursion to calculate the Fibonacci sequence.

If we visualize the function it will look like below.

Here we can see that steps get double on every level. For example, on the first level, we need to call the `fib()`

function two times, And on the second level, we need to call the `fib()`

function four times. Fortunately, on a third level, we need to call the function only two times because of our small input.

Actually, the `fib()`

function’s complexity is `O(2 ^ (n - 1))`

, but if you remember we discussed earlier that we will remove the constants. So, we will ignore the `-1`

, which results in `O(2 ^ n)`

.

An O(2^n) function’s exponential growth curve starts shallow and then rises rapidly.

If it is possible, you should avoid exponential runtime complexity.

## 7. O(n!) – Factorial time

This time complexity is even less desirable than exponential time.

Let’s first go through the factorial’s definition.

`n! = n * (n - 1) * (n - 2) * ... * 3 * 2 * 1`

We multiply n by all natural numbers that are smaller than n except for zero.

One runtime complexity that is worse than that is of the form n to the power of n, but it rarely occurs. However, factorial time does occur.

For example, when you try to find all possible permutations of a given set of elements. Like in the following function.

```
function f(n){
if(n === 0){
return 0;
}
for(let i = 0; i < n; i++){
f(n - 1);
}
}
```

I think it goes without saying that you want to avoid factorial time whenever it is possible. We don’t consider this time complexity manageable.

## 8. O(m + n) or O(mn)

Until now we calculated complexity using only one input but what if we have multiple inputs.

```
function mn(m, n){
for(let i = 0; i < m; i++){
console.log("m => ", i);
}
for(let j = 0; j < n; j++){
console.log("n => ", j);
}
}
```

The above function accepts two inputs and the inputs can be different. So we cannot say that its **O(n + n) = O(2n) = O(n), **instead we should call it **O(m + n)** because input might not be the same.

```
function mn(m, n){
for(let i = 0; i < m; i++){
for(let j = 0; j < n; j++){
console.log(i, j);
}
}
}
```

In the above function, we have a nested loop so, this function’s complexity is **O(m * n) = O(mn)**.

We discussed the most common time complexities and how to calculate them.

Let’s explore complexities for common algorithms using following cheat sheet.

## Big O Notation Algorithm Complexity Cheat Sheet

**Common data structure (Average time complexity**)

Data Structure | Time Complexity | |||
---|---|---|---|---|

Access | Search | Insertion | Deletion | |

Array |
`Θ(1)` |
`Θ(n)` |
`Θ(n)` |
`Θ(n)` |

Stack |
`Θ(n)` |
`Θ(n)` |
`Θ(1)` |
`Θ(1)` |

Queue |
`Θ(n)` |
`Θ(n)` |
`Θ(1)` |
`Θ(1)` |

Singly-Linked List |
`Θ(n)` |
`Θ(n)` |
`Θ(1)` |
`Θ(1)` |

Doubly-Linked List |
`Θ(n)` |
`Θ(n)` |
`Θ(1)` |
`Θ(1)` |

Skip List |
`Θ(log(n))` |
`Θ(log(n))` |
`Θ(log(n))` |
`Θ(log(n))` |

Hash Table |
`N/A` |
`Θ(1)` |
`Θ(1)` |
`Θ(1)` |

Binary Search Tree |
`Θ(log(n))` |
`Θ(log(n))` |
`Θ(log(n))` |
`Θ(log(n))` |

Cartesian Tree |
`N/A` |
`Θ(log(n))` |
`Θ(log(n))` |
`Θ(log(n))` |

B-Tree |
`Θ(log(n))` |
`Θ(log(n))` |
`Θ(log(n))` |
`Θ(log(n))` |

Red-Black Tree |
`Θ(log(n))` |
`Θ(log(n))` |
`Θ(log(n))` |
`Θ(log(n))` |

Splay Tree |
`N/A` |
`Θ(log(n))` |
`Θ(log(n))` |
`Θ(log(n))` |

AVL Tree |
`Θ(log(n))` |
`Θ(log(n))` |
`Θ(log(n))` |
`Θ(log(n))` |

KD Tree |
`Θ(log(n))` |
`Θ(log(n))` |
`Θ(log(n))` |
`Θ(log(n))` |

**Common data structure (Worst time and space complexity)**

Data Structure | Time Complexity | Space Complexity | |||
---|---|---|---|---|---|

Access | Search | Insertion | Deletion | ||

Array |
`O(1)` |
`O(n)` |
`O(n)` |
`O(n)` |
`O(n)` |

Stack |
`O(n)` |
`O(n)` |
`O(1)` |
`O(1)` |
`O(n)` |

Queue |
`O(n)` |
`O(n)` |
`O(1)` |
`O(1)` |
`O(n)` |

Singly-Linked List |
`O(n)` |
`O(n)` |
`O(1)` |
`O(1)` |
`O(n)` |

Doubly-Linked List |
`O(n)` |
`O(n)` |
`O(1)` |
`O(1)` |
`O(n)` |

Skip List |
`O(n)` |
`O(n)` |
`O(n)` |
`O(n)` |
`O(n log(n))` |

Hash Table |
`N/A` |
`O(n)` |
`O(n)` |
`O(n)` |
`O(n)` |

Binary Search Tree |
`O(n)` |
`O(n)` |
`O(n)` |
`O(n)` |
`O(n)` |

Cartesian Tree |
`N/A` |
`O(n)` |
`O(n)` |
`O(n)` |
`O(n)` |

B-Tree |
`O(log(n))` |
`O(log(n))` |
`O(log(n))` |
`O(log(n))` |
`O(n)` |

Red-Black Tree |
`O(log(n))` |
`O(log(n))` |
`O(log(n))` |
`O(log(n))` |
`O(n)` |

Splay Tree |
`N/A` |
`O(log(n))` |
`O(log(n))` |
`O(log(n))` |
`O(n)` |

AVL Tree |
`O(log(n))` |
`O(log(n))` |
`O(log(n))` |
`O(log(n))` |
`O(n)` |

KD Tree |
`O(n)` |
`O(n)` |
`O(n)` |
`O(n)` |
`O(n)` |

**Sorting Algorithms**

Algorithm | Time Complexity | Space Complexity | ||
---|---|---|---|---|

Best | Average | Worst | Worst | |

Quicksort |
`Ω(n log(n))` |
`Θ(n log(n))` |
`O(n^2)` |
`O(log(n))` |

Mergesort |
`Ω(n log(n))` |
`Θ(n log(n))` |
`O(n log(n))` |
`O(n)` |

Timsort |
`Ω(n)` |
`Θ(n log(n))` |
`O(n log(n))` |
`O(n)` |

Heapsort |
`Ω(n log(n))` |
`Θ(n log(n))` |
`O(n log(n))` |
`O(1)` |

Bubble Sort |
`Ω(n)` |
`O(n^2)Θ/code>` |
`O(n^2)` |
`O(1)` |

Insertion Sort |
`Ω(n)` |
`O(n^2)Θ/code>` |
`O(n^2)` |
`O(1)` |

Selection Sort |
`Ω(n^2)` |
`Θ(n^2)` |
`O(n^2)` |
`O(1)` |

Tree Sort |
`Ω(n log(n))` |
`Θ(n log(n))` |
`O(n^2)` |
`O(n)` |

Shell Sort |
`Ω(n log(n))` |
`O(n(loΘ(n))^2)` |
`O(n(log(n))^2)` |
`O(1)` |

Bucket Sort |
`Ω(n+k)` |
`Θ(n+k)` |
`O(n^2)` |
`O(n)` |

Radix Sort |
`Ω(nk)` |
`Θ(nk)` |
`O(nk)` |
`O(n+k)` |

Counting Sort |
`Ω(n+k)` |
`Θ(n+k)` |
`O(n+k)` |
`O(k)` |

Cubesort |
`Ω(n)` |
`Θ(n log(n))` |
`O(n log(n))` |
`O(n)` |

## Conclusion

As you know, Big O notation is a helpful way to describe the performance of an algorithm. I hope this article will help you understand Big O. And using the complexity cheat sheet you can easily understand how fast the program will run in comparison to other algorithms and will also help to improve the code.

**Learn More: **

- Binary Search: A Beginner’s Guide
- Sliding Window Algorithm
- Asymptotic Notations: A Comprehensive Guide
- JavaScript Tutorials
- React JS Tutorials
- Node JS Tutorials
- CSS Tutorials
- TypeScript Tutorials
- Data Structure and Algorithm Tutorials

### What are the rules for calculating Big O Notation?

First let's go over the rules of calculating Big O Notation.

1. Worst Case

2. Remove Constants

3. Drop Non Dominants

### What are the common Big O complexities (time and space)?

The common Big O complexities (time and space) is as follows:

1. O(1) – Constant time

2. O(log n) – Logarithmic time

3. O(n) – Linear time

4. O(n log n) – Polynomial time

5. O(n ^ 2) – Quadratic time

6. O(2 ^ n) – Exponential time

7. O(n!) – Factorial time

8. O(m+n) or O(mn)