It’s hard to create efficient algorithms without understanding the time and space complexity of various operations. The concept of Big O notation helps programmers understand how quickly or slowly an algorithm will execute as the input size grows.

In this article, we’ll cover the basics of Big O notation, why it is used, and how to describe the time and space complexity of algorithms with examples.

## Table of Contents

## Why do we use Big O?

As software engineers, our goal is to solve any given problem. A problem can often be solved in many ways, so as good engineers, we need to choose the most efficient solution. How can we determine which solution is more efficient than the others? To answer that, let’s first define what efficiency is.

## What is efficiency?

When it comes to programming, efficiency is all about **how many computational resources an algorithm uses**. The fewer resources used, the more efficient the algorithm is.

Generally, we are concerned with only two types of resources:

- Memory needed (space complexity)
- Time taken (time complexity)

It may seem like we do not need Big O to measure time complexity. We could use a stopwatch to measure how long (in milliseconds or seconds) an algorithm takes to complete, and space complexity could be measured in KB and MB.

However, there is a huge problem with this kind of thinking.

For example, say I have two computers: one with an Intel i7 and 16GB RAM, and one with an Intel i3 and 4GB RAM. If I run algorithm A on both devices, the runtimes will not be the same. The Intel i7 machine is obviously more powerful than the i3, so it will finish sooner.

But what if both computers have identical configurations? The runtimes will still sometimes differ, because factors outside our control, such as concurrency and multi-processing, can influence the computer’s performance.

To solve this issue, we analyze algorithms on paper using Big O notation, for the following reasons:

- Big O notation can objectively describe the efficiency of code without using concrete units such as milliseconds or seconds.
- It focuses on how the time and space requirements scale with the size of the input.
- It prepares us for the worst-case scenario.

Now that you understand why we need Big O, let’s go over what time and space complexity are.

### Time complexity

The time complexity of an algorithm is **the number of steps it takes to complete** its task. Time complexity is often used to compare different algorithms, as they all have different performance characteristics.

For example, one algorithm may run much more slowly than another, even if it appears that both should perform equally well on your problem set.

### Space complexity

Space complexity is the **amount of memory an algorithm uses**. It’s measured in bytes, and it’s also called memory usage.

**What causes the space complexity?**

- Variables
- Data structures
- Function calls
- Allocations

Space complexity matters more for large data sets, because they require much more memory than small ones.

For example, if you have a list containing hundreds of thousands of numbers and you sort it with an algorithm like merge sort, the sort will need a significant amount of extra memory, because it allocates auxiliary arrays proportional to the input while merging. If your list only has 3 items, the extra memory needed is negligible because there are far fewer items to store!

## What is Big O?

Big O notation is a way to measure how an algorithm’s running time or space requirements grow as the input size grows.

For example, suppose a dentist takes 30 minutes to treat one patient. As her line of patients grows, the time it takes her to treat all of them scales linearly with the number of patients waiting in line.

This is because she always takes a constant amount of time, 30 minutes, to treat each patient. That gives us a general idea of how long she would take to treat 10 patients, 20 patients, or even 100,000 patients: we can always calculate the total time by multiplying the number of patients by 30 minutes.

With this in mind, we can categorise her efficiency as linear, or, in Big O terms, O(n), where n is the number of patients: the time it takes her to finish her work scales linearly (proportionally) with the number of patients. We can use the same technique to determine the efficiency of algorithms.

Let’s calculate the dentist’s efficiency using a function.

```
var time = 0;
const patients = ['James', 'Robert', 'John', 'Patricia', 'Mary', 'Jennifer', 'Michael'];

function calculateTime() {
  for (let i = 0; i < patients.length; i++) {
    time += 30;
  }
  console.log("Total Time 👉 ", time);
}
```

**Note:** I created this function only to explain how Big O works with programming logic. That’s why I did not calculate the time by simply multiplying the number of patients by 30.
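For completeness, the closed-form version the note refers to is a single multiplication, which takes constant time:

```
const patients = ['James', 'Robert', 'John', 'Patricia', 'Mary', 'Jennifer', 'Michael'];

// Constant-time version: one multiplication instead of a loop — O(1).
const totalTime = patients.length * 30;
console.log("Total Time 👉 ", totalTime); // 210
```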

We created the `calculateTime` function, and inside it we wrote a `for` loop that iterates over each element of the `patients` array.

```
function calculateTime() {
  for (let i = 0; i < patients.length; i++) { // 👉 O(n)
    time += 30; // 👉 O(1)
  }
  console.log("Total Time 👉 ", time); // 👉 O(1)
}
```

The time complexity of the above function is `O(n)`. But how did I calculate that?

Let’s learn how to calculate Big O notation for any given algorithm.

## Big O Rules

First, let’s go over the rules for calculating Big O.

- Worst Case
- Remove Constants
- Drop Non Dominants

**1. Worst Case**

There are three types of analysis for an algorithm:

- Best Case
- Average Case
- Worst Case

#### 1. Best case

Best case describes how fast the algorithm can run for a given input.

```
let friends = ['Joey', 'Rachel', 'Phoebe', 'Ross', 'Monica', 'Chandler'];

function findJoey() {
  for (let i = 0; i < friends.length; i++) {
    if (friends[i] === 'Joey') {
      console.log("Found!");
      break;
    }
  }
}
```

We created the `findJoey` function to find “Joey” in the `friends` array.

Since “Joey” is the first element of the array, the loop runs only once and then exits because of the `break` statement.

So we can say the best case for the function is **O(1)**.

#### 2. Average case

It provides a prediction about the algorithm’s running time when the input is random.

```
let friends = ['Rachel', 'Phoebe', 'Ross', 'Joey', 'Monica', 'Chandler'];

function findJoey() {
  for (let i = 0; i < friends.length; i++) {
    if (friends[i] === 'Joey') {
      console.log("Found!");
      break;
    }
  }
}
```

In this version, the element “Joey” could be anywhere in the input array: second, third, or fourth, for example.

#### 3. Worst case

Worst case means how slow/long algorithm can run for provided input.

```
let friends = ['Rachel', 'Phoebe', 'Ross', 'Monica', 'Chandler', 'Joey'];

function findJoey() {
  for (let i = 0; i < friends.length; i++) {
    if (friends[i] === 'Joey') {
      console.log("Found!");
      break;
    }
  }
}
```

In the above function we placed “Joey” at the end of the array, so the loop needs to iterate through the whole array to find the element. So, the algorithm’s time complexity is **O(n)**.

Even though the input is random, for Big O we always analyze the algorithm’s worst case.

**2. Remove Constants**

When calculating Big O, we should not include any constants. Let’s understand why with an example.

```
let friends = ['Rachel', 'Phoebe', 'Ross', 'Monica', 'Chandler', 'Joey'];

function findJoey() {
  for (let i = 0; i < friends.length; i++) { // O(n)
    console.log("Step 👉", i + 1); // O(1)
    if (friends[i] === 'Joey') { // O(1)
      console.log("Found!"); // O(1)
    }
  }
}
```

Let’s sum the complexity of the above function.

```
Complexity = O(n) + O(1) + O(1) + O(1)
           = O(n) + O(3)
           = O(n + 3)
```

After summing, our function’s complexity is `O(n + 3)`.

When calculating the efficiency of a function, constant time does not matter. If our array had some crazy length, like 2 million, those constant-time lines would have a negligible effect on the efficiency of the function as a whole; we would still need to iterate through 2 million items.

Let’s calculate it again.

```
Complexity = O(n + 3)
           = O(n)
```

So, after removing all the constants, it’s just `O(n)` now.

However, if a function contains only constant-time operations, then its complexity is constant time.

```
function steps() {
  console.log("Step 1"); // O(1)
  console.log("Step 2"); // O(1)
  console.log("Step 3"); // O(1)
}
```

If we calculate the complexity of this function, it will be `O(3)`. Instead of calling it `O(3)`, we call it `O(1)`, as we assume it will always take the same amount of time to run.

**3. Drop Non Dominants**

In Big O we have a growth hierarchy: the complexity classes can be ordered from most efficient (constant time) to least efficient (factorial time).

Don’t worry about the complexities we haven’t covered yet; we’ll go through all of them in this article.

As we discussed earlier, when determining the efficiency of an algorithm in terms of Big O, we only care about the worst case.

```
let friends = ['Rachel', 'Phoebe', 'Ross', 'Monica', 'Chandler', 'Joey'];

function findJoey() {
  for (let i = 0; i < friends.length; i++) { // O(n)
    console.log("Step 👉", i + 1); // O(1)
    if (friends[i] === 'Joey') { // O(1)
      console.log("Found!"); // O(1)
    }
  }
}
```

We calculated the function’s complexity as `O(n)`, because `O(n)` is the dominant term and grows worse than `O(1)`.

Now that you understand how Big O works, let’s discuss the major Big O complexities (time and space).

- O(1) – Constant time
- O(log n) – Logarithmic time
- O(n) – Linear time
- O(n log n) – Linearithmic time
- O(n ^ 2) – Quadratic time
- O(2 ^ n) – Exponential time
- O(n!) – Factorial time
- O(m+n) or O(mn)

**Big-O Complexity Chart**

## How to Calculate Time Complexity

Let’s discuss Big O complexities.

### 1. O(1) – Constant time

Constant time complexity doesn’t grow. No matter how large the input size gets, the same number of steps is needed.

Since constants do not matter, we use the number one, `O(1)`, to represent constant time complexities.

But how could constant time come into existence? Let’s look at a very simple example.

**You have an array and want to print the fourth element of the array.**

```
let friends = ['Rachel', 'Phoebe', 'Ross', 'Monica', 'Chandler', 'Joey'];

function printFourthIndex() {
  console.log(friends[3]); // O(1)
}
```

This function takes the same number of steps no matter how large the array is. It won’t get more complicated just because the array has a million elements; you still do the same steps. So the runtime complexity is constant.

### 2. O(log n) – Logarithmic time

You probably remember the logarithm from high school. It tells you how many times you have to multiply the base by itself to get a given number.

Sounds complicated? Let’s understand it with an example.

Let’s take the number eight as an example: we want to raise some number to some power to get eight.

In computer science, we can usually assume that the base we raise to some power is two.

If we raise two to the power of three, we get eight, the number we’re looking for, so log base two of eight is three.

Now that you know how logarithms work, let’s move on to the meaning of O(log n).
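As a quick sanity check, JavaScript’s built-in `Math.log2` computes exactly this:

```
// 2 raised to what power gives 8? Math.log2 answers that question.
console.log(Math.log2(8)); // 3
console.log(2 ** 3);       // 8
```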

Let’s understand it with a simple function.

```
// n = 8
function logNFunc(n) {
  while (n > 1) {
    n = Math.floor(n / 2);
  }
}
```

This function’s time complexity is `O(log n)`. Let’s dig deeper to find out why.

When we pass a number into this function, the loop repeatedly divides it by two, splitting it in half on every iteration until it reaches one.

```
n = 8
Iteration 1 = 8 / 2 = 4
Iteration 2 = 4 / 2 = 2
Iteration 3 = 2 / 2 = 1
```

As you can see, we need only three iterations to get down to one. Three is exactly log base two of eight, so we call these log iterations.

The good thing about logarithmic time is that **its growth slows down over time**. The curve gets flatter and flatter as we go on; at really high numbers the graph looks almost like a constant runtime complexity.

If a problem can be solved in logarithmic time, you cannot really consider it a problem. It is extremely efficient.

### 3. O(n) – Linear time

We already discussed the linear time complexity using the dentist’s example.

Linear time complexity essentially means that the growth rate is constant: you need to perform a number of steps proportional to the input size.

For example, if our input size is eight, we need to iterate eight times.

```
function nFunc(n) {
  for (let i = 0; i < n; i++) {
    console.log("Step 👉", i + 1);
  }
}
```

If a problem is solvable in linear time, it is also **not really a challenge**. It is very easy to solve, and the **runtime complexity is extremely desirable**.

### 4. O(n log n) – Linearithmic time

If we combine the last two runtime complexities (logarithmic and linear), we get linearithmic time. It is sometimes also called pseudo-linear or quasilinear time.

This time complexity often appears in efficient **divide-and-conquer sorting algorithms** such as merge sort and quicksort, and also in heapsort.
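As an illustration (a minimal sketch, not code from a library), here is merge sort: the array is split in half about log n times, and each level of splitting does O(n) work while merging, which gives O(n log n) overall.

```
function mergeSort(arr) {
  // Base case: arrays of length 0 or 1 are already sorted.
  if (arr.length <= 1) return arr;

  // Divide: split the array in half (this happens O(log n) times).
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));

  // Conquer: merging two sorted halves is O(n) work per level.
  const merged = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return merged.concat(left.slice(i)).concat(right.slice(j));
}

console.log(mergeSort([5, 2, 4, 1, 3])); // [1, 2, 3, 4, 5]
```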

Let’s understand it by a simple example.

```
// n = 4
function nLogNFunc(n) {
  let temp = n;
  while (n > 1) {
    n = Math.floor(n / 2);
    for (let i = 0; i < temp; i++) {
      console.log("Step 👉", i + 1);
    }
  }
}
```

The function accepts one argument `n`, and we will assume n = 4.

First, we create a `temp` variable and assign `n` to it, copying the value of `n` into `temp`.

Next, we have a while loop that divides the value of `n` by two on each pass. Inside it, a for loop iterates `temp` times, which means `n` times, since `temp` is a copy of the original `n`.

It’s a bit confusing. Let’s visualize it.

As you can see, the top-level (while) loop’s time complexity is `O(log n)` and the inner loop’s time complexity is `O(n)`.

Let’s put it all together.

```
Time Complexity = O(log n) * O(n)
                = O(n log n)
```

As you can see, the function is not quite linear but since the logarithmic part grows slower and slower over time, it looks like one when we get to larger numbers.

This is the last runtime complexity that we will consider very efficient. If a problem is solvable in O(n log n) time, it is not really a challenge.

### 5. O(n ^ 2) – Quadratic time

Quadratic time O(n ^ 2) increases faster as the input grows.

Examples of quadratic runtime are inefficient sorting algorithms like bubble sort, insertion sort, and selection sort, but also traversing a simple 2D array.
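For instance, a simple bubble sort (a standard sketch, not code from the article) compares neighbouring pairs on every pass, so for n elements it does roughly n * n comparisons:

```
function bubbleSort(arr) {
  // Outer loop runs n times ...
  for (let i = 0; i < arr.length; i++) {
    // ... and the inner loop runs up to n times per pass: O(n * n) = O(n^2).
    for (let j = 0; j < arr.length - i - 1; j++) {
      if (arr[j] > arr[j + 1]) {
        [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]]; // swap neighbours
      }
    }
  }
  return arr;
}

console.log(bubbleSort([5, 1, 4, 2, 3])); // [1, 2, 3, 4, 5]
```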

Let’s understand it by an example.

```
// n = 4
function nSquaredNFunc(n) {
  for (let j = 0; j < n; j++) {
    for (let k = 0; k < n; k++) {
      console.log(j, k);
    }
  }
}
```

The above function takes `n` as an argument. The top-level (first) for loop iterates `n` times, and for each of those iterations it runs the nested for loop in full.

When we run the function with n = 4, it prints a matrix like the one below.

The matrix has 4 rows and 4 columns, so let’s calculate the total steps taken to print it.

```
Total Steps = rows * columns
            = 4 * 4
            = 16
            = 4²
```

So, if we take n = 8, the function will take 8 * 8 = 64 iterations to print the matrix, so we can say the complexity of the function is O(n ^ 2).

You can see that the growth is happening a lot faster here. It is still manageable, but quadratic runtime can be a challenge sometimes.

### 6. O(2 ^ n) – Exponential time

Exponential time, `O(2 ^ n)`, increases extremely fast as the input grows.

Let’s understand it with the following function.

```
function fib(n) {
  if (n === 0) {
    return 0;
  }
  if (n === 1) {
    return 1;
  }
  return fib(n - 1) + fib(n - 2);
}
```

In the above function, we use recursion to calculate the Fibonacci sequence.

If we visualize the function it will look like below.

Here we can see that the number of calls doubles on every level. For example, on the first level we need to call the `fib()` function two times, and on the second level four times. Fortunately, on the third level we only need to call the function two times, because our input is small.

Strictly speaking, the `fib()` function’s complexity is `O(2 ^ (n - 1))`, but if you remember, we discussed earlier that we remove constants. So we ignore the `-1`, which results in `O(2 ^ n)`.

An O(2^n) function’s exponential growth curve starts shallow and then rises rapidly.

If it is possible, you should avoid exponential runtime complexity.
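One common way to escape exponential time here (a standard technique, not specific to this article) is memoization: cache each `fib(n)` result the first time it is computed, so every value is calculated only once, bringing the complexity down to O(n).

```
function fibMemo(n, cache = {}) {
  if (n <= 1) return n;
  // Return the cached value instead of recomputing the whole subtree.
  if (cache[n] !== undefined) return cache[n];
  cache[n] = fibMemo(n - 1, cache) + fibMemo(n - 2, cache);
  return cache[n];
}

console.log(fibMemo(10)); // 55
```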

### 7. O(n!) – Factorial time

This time complexity is even less desirable than exponential time.

Let’s first go through the factorial’s definition.

`n! = n * (n - 1) * (n - 2) * ... * 3 * 2 * 1`

We multiply n by all natural numbers that are smaller than n except for zero.
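The definition translates directly into code; for example:

```
function factorial(n) {
  let result = 1;
  // Multiply together every natural number from 2 up to n.
  for (let i = 2; i <= n; i++) {
    result *= i;
  }
  return result;
}

console.log(factorial(5)); // 120
```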

One runtime complexity that is even worse is of the form n to the power of n, but it rarely occurs in practice. Factorial time, however, does occur.

For example, when you try to find all possible permutations of a given set of elements. Like in the following function.

```
function f(n) {
  if (n === 0) {
    return 0;
  }
  for (let i = 0; i < n; i++) {
    f(n - 1);
  }
}
```

I think it goes without saying that you want to avoid factorial time whenever it is possible. We don’t consider this time complexity manageable.
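To make the permutation example concrete, here is a sketch (names like `permutations` are my own, not from the article) that generates every ordering of an array; since an array of n elements has n! orderings, the work grows factorially.

```
function permutations(arr) {
  // Base case: an empty or one-element array has exactly one ordering.
  if (arr.length <= 1) return [arr];
  const result = [];
  for (let i = 0; i < arr.length; i++) {
    // Fix arr[i] first, then permute the remaining n - 1 elements.
    const rest = arr.slice(0, i).concat(arr.slice(i + 1));
    for (const perm of permutations(rest)) {
      result.push([arr[i], ...perm]);
    }
  }
  return result;
}

console.log(permutations([1, 2, 3]).length); // 6 (= 3!)
```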

### 8. O(m + n) or O(mn)

Until now we calculated complexity using only one input, but what if we have multiple inputs?

```
function mn(m, n) {
  for (let i = 0; i < m; i++) {
    console.log("m => ", i);
  }
  for (let j = 0; j < n; j++) {
    console.log("n => ", j);
  }
}
```

The above function accepts two inputs, and the inputs can be different sizes. So we cannot say it’s **O(n + n) = O(2n) = O(n)**; instead we call it **O(m + n)**, because the inputs might not be the same size.

```
function mn(m, n) {
  for (let i = 0; i < m; i++) {
    for (let j = 0; j < n; j++) {
      console.log(i, j);
    }
  }
}
```

In the above function we have a nested loop, so this function’s complexity is **O(m * n) = O(mn)**.

We discussed the most common time complexities and how to calculate them.

## How to Calculate Space Complexity

Let’s learn how to calculate space complexity.

### 1. O(1) – Constant Space

This is when the space required by the algorithm does not increase with the size of the input data set.

```
function constantSpace(n) {
  let result = n * n;
  console.log(result);
}
```

In this example, no matter the size of `n`, the space used by the algorithm remains constant, because we only ever use two variables (`n` and `result`).

### 2. O(log n) – Logarithmic Space

When the space needed by the algorithm grows logarithmically with the size of the input data set.

```
function binarySearch(array, x, start = 0, end = array.length - 1) {
  if (start > end) return false; // not found
  let mid = Math.floor((start + end) / 2);
  if (array[mid] === x) return true;
  if (array[mid] < x) return binarySearch(array, x, mid + 1, end); // search right half
  return binarySearch(array, x, start, mid - 1); // search left half
}
```

In this recursive binary search example, the space complexity is `O(log n)`: each call halves the problem size, and every halving adds one frame to the call stack, so about log₂(n) frames can be active at once. (An iterative binary search would use only `O(1)` extra space.)

### 3. O(n) – Linear Space

When the space required by the algorithm increases linearly with the size of the input data set.

```
function linearSpace(n) {
  let array = [];
  for (let i = 0; i < n; i++) {
    array[i] = i;
  }
  console.log(array);
}
```

In this example, we’re creating an array of size `n`, so the space complexity is `O(n)`.

### 4. O(n^2) – Quadratic Space

When the space required by the algorithm is proportional to the square of the size of the input data set.

```
function quadraticSpace(n) {
  let matrix = [];
  for (let i = 0; i < n; i++) {
    matrix[i] = [];
    for (let j = 0; j < n; j++) {
      matrix[i][j] = i + j;
    }
  }
  console.log(matrix);
}
```

In this example, we’re creating a 2D matrix of size `n * n`, so the space complexity is `O(n^2)`.

## Quick Tips to Identify Big O Notation

Identifying the Big O notation of an algorithm can seem daunting at first, but with practice, it becomes more intuitive. Here are some quick tips to help you:

### 1. O(1) – Constant Time

If an algorithm performs a fixed number of operations regardless of the input size, it has a constant time complexity. An example of this is accessing an array element by its index.

`let value = array[index];`

### 2. O(n) – Linear Time

If the number of operations is proportional to the size of the input, the algorithm has a linear time complexity. An example of this is a simple for loop that traverses an array or a list.

```
for (let i = 0; i < array.length; i++) {
  // code
}
```

### 3. O(log n) – Logarithmic Time

In a logarithmic time complexity, the algorithm splits the problem size with each step. A common example is finding an element in a sorted array.

To keep it simple, let’s consider the operation of reducing the problem size by half with each step.

```
let i = n;
while (i > 1) {
  i = i / 2;
  // code
}
```

### 4. O(n^2) – Quadratic Time

If an algorithm performs operations in a pair of nested loops, it often has a quadratic time complexity. Bubble sort is an example of an algorithm with quadratic time complexity.

```
for (let i = 0; i < array.length; i++) {
  for (let j = 0; j < array.length - i - 1; j++) {
    // code
  }
}
```

### 5. O(n^3) – Cubic Time

If an algorithm performs operations in three nested loops, it has a cubic time complexity. An example is the naive algorithm for matrix multiplication.

```
for (let i = 0; i < matrix1.length; i++) {
  for (let j = 0; j < matrix2[0].length; j++) {
    for (let k = 0; k < matrix1[0].length; k++) {
      // code
    }
  }
}
```

### 6. O(2^n) – Exponential Time

If the number of operations doubles for each addition to the input size, the algorithm has an exponential time complexity. Recursive calculations of Fibonacci numbers often have this time complexity.

```
function fibonacci(n) {
  if (n <= 1) {
    return n;
  } else {
    return fibonacci(n - 1) + fibonacci(n - 2);
  }
}
```

Remember, these are general tips. The actual time complexity can depend on various factors, including the specific details of the algorithm and the nature of the input.

## Big O Notation Algorithm Complexity Cheat Sheet

**Common data structures (average time complexity)**

| Data Structure | Access | Search | Insertion | Deletion |
|---|---|---|---|---|
| Array | `Θ(1)` | `Θ(n)` | `Θ(n)` | `Θ(n)` |
| Stack | `Θ(n)` | `Θ(n)` | `Θ(1)` | `Θ(1)` |
| Queue | `Θ(n)` | `Θ(n)` | `Θ(1)` | `Θ(1)` |
| Singly-Linked List | `Θ(n)` | `Θ(n)` | `Θ(1)` | `Θ(1)` |
| Doubly-Linked List | `Θ(n)` | `Θ(n)` | `Θ(1)` | `Θ(1)` |
| Skip List | `Θ(log(n))` | `Θ(log(n))` | `Θ(log(n))` | `Θ(log(n))` |
| Hash Table | `N/A` | `Θ(1)` | `Θ(1)` | `Θ(1)` |
| Binary Search Tree | `Θ(log(n))` | `Θ(log(n))` | `Θ(log(n))` | `Θ(log(n))` |
| Cartesian Tree | `N/A` | `Θ(log(n))` | `Θ(log(n))` | `Θ(log(n))` |
| B-Tree | `Θ(log(n))` | `Θ(log(n))` | `Θ(log(n))` | `Θ(log(n))` |
| Red-Black Tree | `Θ(log(n))` | `Θ(log(n))` | `Θ(log(n))` | `Θ(log(n))` |
| Splay Tree | `N/A` | `Θ(log(n))` | `Θ(log(n))` | `Θ(log(n))` |
| AVL Tree | `Θ(log(n))` | `Θ(log(n))` | `Θ(log(n))` | `Θ(log(n))` |
| KD Tree | `Θ(log(n))` | `Θ(log(n))` | `Θ(log(n))` | `Θ(log(n))` |

**Common data structures (worst-case time and space complexity)**

| Data Structure | Access | Search | Insertion | Deletion | Space |
|---|---|---|---|---|---|
| Array | `O(1)` | `O(n)` | `O(n)` | `O(n)` | `O(n)` |
| Stack | `O(n)` | `O(n)` | `O(1)` | `O(1)` | `O(n)` |
| Queue | `O(n)` | `O(n)` | `O(1)` | `O(1)` | `O(n)` |
| Singly-Linked List | `O(n)` | `O(n)` | `O(1)` | `O(1)` | `O(n)` |
| Doubly-Linked List | `O(n)` | `O(n)` | `O(1)` | `O(1)` | `O(n)` |
| Skip List | `O(n)` | `O(n)` | `O(n)` | `O(n)` | `O(n log(n))` |
| Hash Table | `N/A` | `O(n)` | `O(n)` | `O(n)` | `O(n)` |
| Binary Search Tree | `O(n)` | `O(n)` | `O(n)` | `O(n)` | `O(n)` |
| Cartesian Tree | `N/A` | `O(n)` | `O(n)` | `O(n)` | `O(n)` |
| B-Tree | `O(log(n))` | `O(log(n))` | `O(log(n))` | `O(log(n))` | `O(n)` |
| Red-Black Tree | `O(log(n))` | `O(log(n))` | `O(log(n))` | `O(log(n))` | `O(n)` |
| Splay Tree | `N/A` | `O(log(n))` | `O(log(n))` | `O(log(n))` | `O(n)` |
| AVL Tree | `O(log(n))` | `O(log(n))` | `O(log(n))` | `O(log(n))` | `O(n)` |
| KD Tree | `O(n)` | `O(n)` | `O(n)` | `O(n)` | `O(n)` |

**Sorting algorithms**

| Algorithm | Best | Average | Worst | Space (worst) |
|---|---|---|---|---|
| Quicksort | `Ω(n log(n))` | `Θ(n log(n))` | `O(n^2)` | `O(log(n))` |
| Mergesort | `Ω(n log(n))` | `Θ(n log(n))` | `O(n log(n))` | `O(n)` |
| Timsort | `Ω(n)` | `Θ(n log(n))` | `O(n log(n))` | `O(n)` |
| Heapsort | `Ω(n log(n))` | `Θ(n log(n))` | `O(n log(n))` | `O(1)` |
| Bubble Sort | `Ω(n)` | `Θ(n^2)` | `O(n^2)` | `O(1)` |
| Insertion Sort | `Ω(n)` | `Θ(n^2)` | `O(n^2)` | `O(1)` |
| Selection Sort | `Ω(n^2)` | `Θ(n^2)` | `O(n^2)` | `O(1)` |
| Tree Sort | `Ω(n log(n))` | `Θ(n log(n))` | `O(n^2)` | `O(n)` |
| Shell Sort | `Ω(n log(n))` | `Θ(n(log(n))^2)` | `O(n(log(n))^2)` | `O(1)` |
| Bucket Sort | `Ω(n+k)` | `Θ(n+k)` | `O(n^2)` | `O(n)` |
| Radix Sort | `Ω(nk)` | `Θ(nk)` | `O(nk)` | `O(n+k)` |
| Counting Sort | `Ω(n+k)` | `Θ(n+k)` | `O(n+k)` | `O(k)` |
| Cubesort | `Ω(n)` | `Θ(n log(n))` | `O(n log(n))` | `O(n)` |

## Conclusion

And there you have it - a comprehensive guide to Big O Notation.

We've gone through the intricacies of this concept, understanding its importance in evaluating the efficiency of algorithms. We've seen how it helps us predict the behavior of our code, allowing us to make informed decisions when it comes to optimization.

Remember, the goal isn't always to achieve `O(1)`

but to understand the trade-offs we're making with each algorithmic choice.

As we continue to advance in our coding journey, let's keep Big O Notation as our constant companion, guiding us towards more efficient and effective programming.

Happy coding!

## Frequently Asked Questions

**What is the big O complexity?**

Big O complexity, also known as Big O notation, is a mathematical notation used in computer science to describe the worst-case performance or complexity of an algorithm in terms of time or space. It gives an upper limit on the time taken or space used by an algorithm as a function of the size of the input.

**Why is Big O notation called Big O?**

The term "Big O" comes from the notation itself, which uses the letter O (for "Order of") to describe the growth rate of a function. The notation was first introduced by German mathematician Paul Bachmann in 1894.

**What is Big O notation with example?**

Big O notation is used to classify algorithms by their runtime or space complexity. For example, an algorithm with a time complexity of O(n) would have a runtime that grows linearly with the size of the input. This could be a simple for loop that iterates over an array.

**What is Big O and small O notation?**

Big O notation provides an upper bound on the growth of a function, describing the worst-case scenario. Little o notation, on the other hand, provides a strict (non-tight) upper bound: f(n) = o(g(n)) means f grows strictly slower than g, so the bound is never attained asymptotically.

**How many types of Big O notation are there?**

There are several types of Big O notation, each representing a different complexity class. Some common ones include O(1) for constant time, O(log n) for logarithmic time, O(n) for linear time, O(n log n) for linearithmic time, O(n^2) for quadratic time, and O(2^n) for exponential time.

**Which Big O notation is faster?**

In terms of time complexity, O(1) is the fastest because it represents constant time, meaning the algorithm takes the same amount of time regardless of the input size. As you move towards O(2^n), the time complexity and thus the runtime of the algorithm increase.

**Is Big O notation constant?**

Big O notation can be constant, represented as O(1), which means the time or space complexity of an algorithm remains the same regardless of the size of the input.

**Is Big O notation upper or lower?**

Big O notation provides an upper bound on the time or space complexity of an algorithm, meaning it describes the worst-case scenario.

**Who created Big O notation?**

The Big O notation was first introduced by German mathematician Paul Bachmann in 1894. However, it was popularized by another mathematician, Donald Knuth, in his book "The Art of Computer Programming".

**Can Big O notation be negative?**

No, Big O notation cannot be negative. It describes the time or space complexity of an algorithm, which cannot be less than zero. A negative time or space complexity would not make sense in this context.
