
Understandably, one of the most important factors when picking an algorithm is its performance. If you're using an inefficient algorithm, your performance will be subpar, no matter how optimized the rest of your code is. The main method for measuring algorithm performance is a concept known as time complexity. This concept can seem complicated at first, so we're going to break down exactly what it is, how it's represented, and give examples of the common complexities.
What is Time Complexity?
In simple terms, time complexity is a way to represent how the run-time of an algorithm increases with input size. Generally, input size has a large effect on an algorithm's performance. For example, when searching an array for a specific element, the complexity is usually equal to O(n). This means that the run-time depends linearly on n, the size of the input. As the number of elements increases, we have to search more elements to find the one we want, so the run-time of the algorithm grows.
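As a rough sketch of what that looks like in practice (the function name and values here are purely illustrative), a simple linear search may have to examine every one of the n elements before it finds the one it wants:
def linear_search(arr, target):
    # Worst case: the loop runs once per element, so run-time grows with len(arr)
    for i, element in enumerate(arr):
        if element == target:
            return i
    return -1  # target not present

print(linear_search([4, 8, 15, 16, 23, 42], 23))  # prints 4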
It should be said that, while asymptotic notation is a closely related concept, it's not the same as time complexity. You can think of asymptotic notation as how we represent time complexity, with time complexity as the underlying concept. Considering our previous example, O(n) is the asymptotic notation used to represent the time complexity of the search algorithm. We'll get into the types of time complexity next, and how they're represented using asymptotic notation.
What Factors Affect Time Complexity?
As previously mentioned, one of the main factors affecting time complexity is the input size. However, this isn't the only factor. The number of operations performed also greatly affects the complexity, as does the presence of nested loops, recursion, and even the hardware we're running on.
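As a rough illustration (the counting functions here are purely hypothetical), a single loop performs about n operations while a nested loop performs about n * n, which is why loop structure matters so much:
def count_single_loop(n):
    # One pass over the input: roughly n operations
    operations = 0
    for _ in range(n):
        operations += 1
    return operations

def count_nested_loops(n):
    # A loop inside a loop: roughly n * n operations
    operations = 0
    for _ in range(n):
        for _ in range(n):
            operations += 1
    return operations

print(count_single_loop(100))   # 100
print(count_nested_loops(100))  # 10000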
Common Cases of Time Complexity, With Examples
There are many different time complexities possible, each with its own notation. Big-O notation is used most commonly to represent the way complexity grows with input size. The notations you'll likely come across most frequently include O(1), O(n), O(log n), O(n log n), O(n²), O(n³), O(2ⁿ), and O(n!). We'll get into these next.
Constant Time O(1)
This complexity is known as constant time because it's not dependent on the input size at all. Therefore, the run-time won't change, no matter the input. Here are some examples of where O(1) is found:
- Checking whether an integer is odd or even.
- Checking whether an item is NULL.
- Accessing the element at a specific index in some sort of data structure.
For example, consider this code in Python:
def access_element(arr, index):
    return arr[index]
Here, we're trying to access the element of the arr array at the index index. Since looking up an element by its index takes the same amount of work regardless of how many elements the array contains, the time needed here is constant.
Linear Time O(n)
Another common complexity type is linear, or O(n). Here, the run-time increases proportionally with the input size. This will mostly be found in these situations:
- Obtaining the minimum or maximum value of an array.
- Performing an operation on each element.
- Printing all of the values within a list.
For example, we have this operation:
def sum_elements(arr):
    total = 0
    for element in arr:
        total += element
    return total
Here, we're summing up the values of the elements in an array. Since we perform the same operation on each element, the run-time increases as the list gets longer.
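A quick usage sketch (the lists here are arbitrary) makes the linear growth visible: doubling the number of elements roughly doubles the number of additions performed.
print(sum_elements([8, 3, 1, 7]))              # 19, after 4 additions
print(sum_elements([8, 3, 1, 7, 4, 6, 2, 5]))  # 36, after 8 additions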
Logarithmic Time O(log n)
When we say logarithmic time, we mean that the amount of work grows with the logarithm of the input size, typically because the problem is cut in half with each step. The binary search algorithm is a classic example. It works by taking a sorted list and a target element, then repeatedly halving the search area, comparing the target with the middle element. Since the list is sorted, comparing against the middle element tells us whether the target lies in the left or right half. The algorithm then repeats these steps to find the target in logarithmic time. This can be implemented as follows in Python:
def binary_search(arr, target):
    # Keep narrowing the search window [low, high] until the target is found
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            # Target is larger, so discard the left half
            low = mid + 1
        else:
            # Target is smaller, so discard the right half
            high = mid - 1
    return -1  # target not in the list
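As a usage sketch (the array here is an assumed example: sorted, with 7 elements and the value 16 at index 4), we can call the function like this:
arr = [2, 5, 8, 12, 16, 23, 38]  # assumed example array: sorted, 7 elements, 16 at index 4
print(binary_search(arr, 16))    # prints 4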
In this case, we're looking for the value 16, which is found at index 4. The time complexity of binary search is logarithmic, roughly log₂(n) comparisons. With 7 elements, the target is found after about 3 comparisons, since the base-2 logarithm of 7 is roughly 2.8.

Linearithmic Time O(n log n)
Linearithmic time is similar to logarithmic time, but with an extra linear dependency on the input size: the logarithmic work is repeated roughly once per element. Merge sort is a good example of such an algorithm. It works by a divide-and-conquer process, similar to binary search, but performs a merging step once all the subarrays have been fully divided. Consider the following code:
def merge_sort(arr):
    # Base case: a list of 0 or 1 elements is already sorted
    if len(arr) <= 1:
        return arr
    # Divide: split the list into two halves and sort each recursively
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Conquer: merge the two sorted halves back together in order
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
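As a usage sketch (the array here is an assumed 8-element example), calling the function returns a new, sorted list:
arr = [8, 3, 1, 7, 4, 6, 2, 5]  # assumed 8-element example array
print(merge_sort(arr))          # prints [1, 2, 3, 4, 5, 6, 7, 8]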
We have an array of 8 elements, which is split into smaller halves again and again until it can't be split any further. The halves are then merged back together, in order, to give a sorted array.

Quadratic Time O(n²)
Quadratic time is a kind of polynomial time, O(nᶜ) with c = 2, where the number of operations grows with the square of the input size. We can consider the bubble sort algorithm an example of this, where adjacent elements are compared and swapped over and over until the list is sorted. This is shown in the following code.
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]

arr = [8, 3, 1, 7, 4, 6, 2, 5]
bubble_sort(arr)
print(arr)
In the worst case, we must compare (and potentially swap) every adjacent pair on every pass through the list, so the number of operations grows quadratically.

Cubic Time O(n³)
Cubic time means that the complexity increases even faster than quadratic time. Therefore, we usually want to avoid higher-order polynomial complexities where we can. An example is the Floyd-Warshall algorithm, used for calculating the shortest path distances between every pair of vertices in a graph. This is shown in the code block next.
def floyd_warshall(graph):
    n = len(graph)
    # Copy each row so the original adjacency matrix isn't modified
    distances = [row[:] for row in graph]
    # Try every vertex k as an intermediate stop between every pair (i, j)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                distances[i][j] = min(distances[i][j], distances[i][k] + distances[k][j])
    return distances

inf = float('inf')
graph = [
    [0, inf, -2, inf],
    [4, 0, 3, inf],
    [inf, inf, 0, 2],
    [inf, -1, inf, 0]
]
shortest_paths = floyd_warshall(graph)
for row in shortest_paths:
    print(row)
Since we must use 3 nested for loops, each iterating over every vertex, the complexity is cubic: each loop depends on the input size, and all iterations must be completed. It should be noted that Floyd-Warshall is actually quite a fast example of a cubic-time algorithm, as long as it's used on a fairly typical graph.

Exponential Time O(2ⁿ)
Exponential here basically means that the number of operations roughly doubles each time the input size increases by one. We can illustrate this with the recursive Fibonacci calculation as follows:
def fibonacci(n):
    # Base cases: fibonacci(0) = 0, fibonacci(1) = 1
    if n <= 1:
        return n
    # Each call branches into two further calls, so the work grows exponentially with n
    return fibonacci(n - 1) + fibonacci(n - 2)
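A call for the 5th index, for instance, might look like this:
print(fibonacci(5))  # prints 5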
This simple example calculates the Fibonacci number at the 5th index, which is 5, as we can see in the output. As such, exponential algorithms aren't usable for large inputs, as the number of operations increases very quickly.

Factorial Time O(n!)
The factorial of a number is the product of every whole number from 1 up to and including that number. For example, the factorial of 3 is equal to 6, because 1 * 2 * 3 = 6. Since the growth of this complexity is huge, factorial algorithms are rarely used in everyday programming. However, generating all of the permutations of a string or list, that is, every possible ordering of its individual values, is an example of a process with factorial time. For example, consider this code:
def generate_permutations(elements):
    if len(elements) == 1:
        return [elements]
    permutations = []
    for i in range(len(elements)):
        remaining = elements[:i] + elements[i+1:]
        sub_permutations = generate_permutations(remaining)
        for perm in sub_permutations:
            permutations.append([elements[i]] + perm)
    return permutations

elements = [1, 2, 3]
permutations = generate_permutations(elements)
for perm in permutations:
    print(perm)
In this case, we're trying to find all possible permutations of the elements [1, 2, 3]. Recursively, each element is picked as the starting element, and the permutations of the remaining elements are calculated. We receive 6 permutations here because there are 3 elements, and 3! = 6.

Implications and Applications of Time Complexity
Understanding the time complexity of an algorithm is essential to maximizing its efficiency and scalability. As you might expect, algorithms with lower complexities are more efficient and easier to scale. Being able to identify the slowest, rate-determining algorithm helps us find the bottlenecks in a process and optimize those steps. As such, time complexity is important in designing algorithms and optimizing performance, and has applications in software engineering, database systems, machine learning, and the computational sciences.
Wrapping Up
Time complexity is closely related to asymptotic notation, which is used to represent it. The most common types of time complexity include O(1), O(n), O(log n), O(n log n), and polynomial complexities such as O(n²). Choosing the correct algorithm for your needs, as well as optimizing its performance, is crucial in making your code efficient and scalable. Understanding how time complexity is affected will help you design programs with better performance, and reduce the time needed to carry out operations on large volumes of data.