What are Big O Notation and Time Complexity?

When you solve problems on an online judge, you have probably come across terms like Big O notation, O(n), O(1), O(n²), etc., or time complexity, especially when analyzing an algorithm. So what do they mean, and what are they used for? If you don’t have a clear picture of these terms and want to figure them out (and get a little nostalgic for your high school or college math along the way), this article is for you.

Time Complexity and Space Complexity

When trying to solve a problem, from sketching the flowchart to writing the actual code, there is usually more than one way to do it. Hence, understanding time complexity and space complexity is crucial for comparing the performance of different algorithms and choosing the best one for a particular problem.

In computer science, time complexity is the computational complexity that measures or estimates the time taken to run an algorithm.
Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, assuming that each elementary operation takes a fixed amount of time. In short, the time complexity of an algorithm quantifies the amount of time it takes to run as a function of the length of the input. Space complexity is defined in much the same way: it is the amount of space or memory taken by an algorithm as a function of the length of the input.

In real cases, running time and memory usage also depend on many factors such as hardware, processor, memory, and so on. But we don’t consider those factors when evaluating algorithms; we only consider how the number of operations grows with the size of the input.

Big O notation

In computer science, Big O notation is used to describe the worst-case scenario of an algorithm in terms of time complexity and/or space complexity, such as execution time or memory used. Put another way, Big O gives an upper bound on the amount of time (or space) an algorithm takes on inputs of a given size.

O(n)

O(n) describes an algorithm whose performance grows linearly, in direct proportion to the size of the input data set (remember linear functions :D). The most common O(n) scenario is a single for loop: the running time increases at most linearly with the size of the input. The example below illustrates an O(n) worst case in JavaScript:
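A minimal sketch using linear search (the helper name linearSearch is illustrative):

// Linear search: in the worst case (target absent) each of the
// n elements is compared exactly once, so the running time is O(n).
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i;
  }
  return -1;
}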

The best and worst cases make this clearer:
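Continuing the linearSearch sketch above:

const data = [3, 1, 4, 1, 5];
linearSearch(data, 3); // best case: match at index 0, a single comparison
linearSearch(data, 9); // worst case: no match, all n = 5 elements are checked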

O(1)

O(1), constant time: given an input of size n, the algorithm accomplishes its task in a single step, regardless of the number of items. Consider the example below:
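A minimal sketch (the name getFirst is illustrative):

// Array indexing takes one step whether the array holds three
// elements or a million, so this function runs in O(1).
function getFirst(arr) {
  return arr[0];
}

getFirst([1, 2, 3]);      // one step
getFirst(new Array(1e6)); // still one step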

O(n²)

O(n²), also called quadratic time, describes an algorithm whose performance is directly proportional to the square of the size of the input data set. This is common with algorithms that involve nested loops; deeper nesting leads to O(n³), O(n⁴), O(n⁵), and so on. Simple sorting algorithms like bubble sort, selection sort, and insertion sort run in O(n²). Take a look at the example below:
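A minimal bubble sort sketch, picked because it is one of the O(n²) sorts named above:

// Bubble sort: the nested loops perform roughly n × n comparisons
// in the worst case, so the running time is O(n²).
function bubbleSort(arr) {
  const a = [...arr]; // copy so the input stays untouched
  for (let i = 0; i < a.length; i++) {
    for (let j = 0; j < a.length - i - 1; j++) {
      if (a[j] > a[j + 1]) {
        [a[j], a[j + 1]] = [a[j + 1], a[j]]; // swap out-of-order neighbors
      }
    }
  }
  return a;
}

console.log(bubbleSort([5, 3, 8, 1])); // [1, 3, 5, 8]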

O(2ⁿ) and O(n!)

O(2ⁿ), exponential time, denotes an algorithm whose number of operations doubles with each addition to the input data set; this is common, for instance, when you visit every node of a full binary tree of depth n. It usually shows up in recursive functions, and the naive Fibonacci calculation is a classic example:
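A minimal sketch of the naive recursive version:

// Each call spawns two more calls, so the call tree roughly
// doubles with every increment of n: O(2ⁿ).
function fib(n) {
  if (n <= 1) return n;
  return fib(n - 1) + fib(n - 2);
}

console.log(fib(10)); // 55; fib(40) is already noticeably slow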

You can watch the recursive calls pile up by stepping through this code in Chrome’s debugger.

O(n!), factorial time, commonly shows up when generating permutations:
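A minimal sketch (the helper name permutations is illustrative):

// An n-element array has n! permutations, and this function builds
// every single one of them, so the running time is O(n!).
function permutations(arr) {
  if (arr.length <= 1) return [arr];
  const result = [];
  for (let i = 0; i < arr.length; i++) {
    const rest = [...arr.slice(0, i), ...arr.slice(i + 1)];
    for (const perm of permutations(rest)) {
      result.push([arr[i], ...perm]); // fix arr[i] first, permute the rest
    }
  }
  return result;
}

console.log(permutations([1, 2, 3]).length); // 6 = 3!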

O(log n) and O(n log n)

This case is a little harder to explain. O(log n), logarithmic time, is best illustrated by binary search: it selects the middle element of a sorted data set, essentially the median, and compares it against a target value. If the values match, it returns success; otherwise it discards the half that cannot contain the target and repeats. Because each step halves the remaining data, the running time grows only logarithmically with the input size.
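A minimal binary search sketch over a sorted array (the name binarySearch is illustrative):

// Each comparison discards half of the remaining elements,
// so at most about log₂(n) steps are needed: O(log n).
function binarySearch(sorted, target) {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}

console.log(binarySearch([1, 2, 4, 7, 9], 7)); // 3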
Closely related is O(n log n), quasilinear time: in many cases, an n log n running time is simply the result of performing an O(log n) operation n times. Binary tree sort, for example, creates a binary search tree by inserting each element of the n-sized array one by one. The average cases of quicksort, heapsort, and merge sort also run in O(n log n) time. Consider the quicksort example below:
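A minimal (non-in-place) quicksort sketch:

// Each level of recursion does O(n) work partitioning around a pivot,
// and on average there are O(log n) levels: O(n log n) on average.
function quickSort(arr) {
  if (arr.length <= 1) return arr;
  const [pivot, ...rest] = arr;
  const left = rest.filter((x) => x < pivot);
  const right = rest.filter((x) => x >= pivot);
  return [...quickSort(left), pivot, ...quickSort(right)];
}

console.log(quickSort([9, 2, 7, 4, 1])); // [1, 2, 4, 7, 9]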

How do I tell whether it is O(n), O(1), or O(n²)?

Look at the examples above and read the text carefully, and you will get the idea. There are also some general rules you can apply to make it easier to remember; a combined example follows the list below.

  • General Rule #1: Return statements, variable initializations, increments, assignments, and other simple statements each take O(1) time.
  • General Rule #2: The running time of a loop is at most the running time of the statements inside the loop multiplied by the number of iterations.
  • General Rule #3: The total running time of nested loops is the running time of the outer loop multiplied by that of the inner loop(s).
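Here is a small sketch that applies all three rules (the function example is purely illustrative):

function example(arr) {
  let sum = 0;                      // Rule #1: O(1)

  for (let i = 0; i < arr.length; i++) {
    sum += arr[i];                  // Rule #2: O(1) body × n iterations = O(n)
  }

  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      sum += arr[i] * arr[j];       // Rule #3: n outer × n inner = O(n²)
    }
  }

  return sum;                       // Rule #1: O(1); the O(n²) term dominates
}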