11.1 Definition of Big O Notation
Big O notation is a mathematical notation used to describe the upper bound of the runtime or resource consumption of an algorithm as a function of the input size. It helps determine how well an algorithm scales and how its performance changes as the data size increases.
Big O notation captures the most significant aspects of an algorithm, ignoring constants and less significant terms, so that we can concentrate on the algorithm's long-term behavior.
Main notations:
O(1) — Constant complexity:
- The runtime of the algorithm does not depend on the input size.
- Example: accessing an array element by index.
O(n) — Linear complexity:
- The runtime of the algorithm grows linearly with the input size.
- Example: a simple iteration over all array elements.
O(log n) — Logarithmic complexity:
- The runtime of the algorithm grows logarithmically with the input size.
- Example: binary search in a sorted array.
O(n^2) — Quadratic complexity:
- The runtime of the algorithm grows quadratically with the input size.
- Example: bubble sort, insertion sort.
O(2^n) — Exponential complexity:
- The runtime of the algorithm grows exponentially with the input size.
- Example: solving the knapsack problem by brute force.
Note. You will learn more about binary search, bubble sort, and the "knapsack problem" in the following lectures.
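To make the first two classes concrete, here is a minimal runnable sketch (Java is assumed here purely for illustration; the same ideas apply in any language). Binary search, bubble sort, and the exponential case get their own sketches in section 11.3.

```java
public class ComplexityBasics {
    // O(1): accessing an element by index takes the same time
    // no matter how large the array is.
    static int firstElement(int[] a) {
        return a[0];
    }

    // O(n): summing the elements visits each of the n elements once,
    // so the work grows linearly with the input size.
    static long sum(int[] a) {
        long total = 0;
        for (int x : a) {
            total += x;
        }
        return total;
    }

    public static void main(String[] args) {
        int[] data = {4, 8, 15, 16, 23, 42};
        System.out.println(firstElement(data)); // constant time: 4
        System.out.println(sum(data));          // linear time: 108
    }
}
```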
11.2 Interpretation of Big O Notation
How to interpret and use Big O notation?
Ignoring constants and less significant terms: Big O describes the growth order of a function, ignoring constants and less significant terms. For example, O(2n) and O(3n) are both interpreted as O(n).
Algorithm comparison: Big O allows us to compare algorithms by their asymptotic efficiency. For example, an algorithm with O(n log n) complexity is more efficient than one with O(n^2) for very large data sizes.
Worst-case analysis: Big O is usually used for analyzing the worst-case runtime of an algorithm, allowing us to evaluate its maximum complexity.
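As a small worked illustration of worst-case analysis, consider linear search (a sketch, not part of the original lecture): it performs at most n comparisons, and it hits that bound exactly when the target is in the last position or absent.

```java
public class LinearSearch {
    // Returns the index of target in a, or -1 if it is absent.
    // Best case: target is at index 0 (1 comparison).
    // Worst case: target is last or missing (n comparisons), hence O(n).
    static int indexOf(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == target) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] a = {5, 3, 9, 1, 7};
        System.out.println(indexOf(a, 7));  // 4: found in the last position
        System.out.println(indexOf(a, 42)); // -1: not found at all
    }
}
```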
Ignoring constants
Let's consider examples of ignoring constants and less significant terms:
Example 1. Consider two functions:
f(n) = 3n + 2
g(n) = 5n + 1
Both functions have linear complexity because the dominant term in each is n. Therefore, both are interpreted as O(n), despite the differences in coefficients and additional terms.
Example 2. Consider two functions:
f(n) = n^2 + 3n + 4
g(n) = 2n^2 + n
Both functions have quadratic complexity because the dominant term is n^2. Both expressions are interpreted as O(n^2), despite differences in the remaining terms and coefficients.
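These interpretations can be checked against the formal definition: f(n) = O(g(n)) if there exist constants c > 0 and n0 such that f(n) <= c * g(n) for all n >= n0. A worked verification for the two examples above (the constants chosen below are one valid choice, not the only one):

```latex
% Example 1: f(n) = 3n + 2 is O(n). Choose c = 5 and n_0 = 1:
%   3n + 2 <= 5n  <=>  2 <= 2n  <=>  n >= 1.
\[ 3n + 2 \le 5n \quad \text{for all } n \ge 1 \]

% Example 2: f(n) = n^2 + 3n + 4 is O(n^2). Choose c = 8 and n_0 = 1:
%   for n >= 1 we have 3n <= 3n^2 and 4 <= 4n^2, so
\[ n^2 + 3n + 4 \le n^2 + 3n^2 + 4n^2 = 8n^2 \quad \text{for all } n \ge 1 \]
```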
11.3 Algorithm Comparison
1. Comparing Algorithms on Very Large Data Sizes
Example 1:
- Algorithm A has a time complexity of O(n^2).
- Algorithm B has a time complexity of O(n log n).
For small values of n, algorithm A may run faster due to smaller constants, but for large values of n, algorithm B will be significantly faster because it grows as n log n rather than n^2.
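One way to see the crossover is to tabulate the idealized operation counts themselves. The sketch below (Java, used for illustration throughout this section) prints n^2 and n * log2(n) for growing n; it is pure arithmetic, not a benchmark of real algorithms.

```java
public class GrowthComparison {
    public static void main(String[] args) {
        // Compare the idealized operation counts n^2 and n * log2(n).
        for (int n = 10; n <= 1_000_000; n *= 10) {
            double quadratic = (double) n * n;
            double linearithmic = n * (Math.log(n) / Math.log(2));
            System.out.printf("n=%,9d  n^2=%,16.0f  n*log2(n)=%,12.0f  ratio=%,8.0f%n",
                    n, quadratic, linearithmic, quadratic / linearithmic);
        }
    }
}
```

For n = 10,000 the ratio is already about 750; by n = 1,000,000 it is about 50,000.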
Example 2:
- Algorithm X has a time complexity of O(n).
- Algorithm Y has a time complexity of O(1).
Asymptotically, algorithm Y is faster regardless of the size of n, because O(1) means its runtime does not grow with the input size.
2. Worst-case Analysis
Example 1:
Bubble sort has a time complexity of O(n^2) in the worst case, which occurs when the array is sorted in reverse order. In that case, every element has to be compared with, and possibly swapped past, every other element.
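A minimal sketch of this worst case (again assuming Java for illustration): on a reverse-sorted array of n elements, bubble sort performs n(n - 1)/2 comparisons, which is O(n^2).

```java
public class BubbleSortWorstCase {
    public static void main(String[] args) {
        // Reverse-sorted input: the worst case for bubble sort.
        int n = 100;
        int[] a = new int[n];
        for (int i = 0; i < n; i++) {
            a[i] = n - i;
        }

        long comparisons = 0;
        for (int i = 0; i < n - 1; i++) {
            for (int j = 0; j < n - 1 - i; j++) {
                comparisons++;
                if (a[j] > a[j + 1]) { // swap the adjacent out-of-order pair
                    int tmp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                }
            }
        }
        // For n = 100 this prints 4950, which is n * (n - 1) / 2.
        System.out.println("comparisons: " + comparisons);
    }
}
```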
Example 2:
Binary search has a time complexity of O(log n) in the worst case. Even then, the number of steps needed to find an element, or to conclude it is absent, grows only logarithmically with the size of the array, which is very efficient.
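A step counter makes this concrete: in the sketch below, even when the target is absent, which is a worst case for binary search, the loop runs only about log2(n) times.

```java
public class BinarySearchSteps {
    public static void main(String[] args) {
        // Sorted array of 1,000,000 even numbers: 0, 2, 4, ...
        int n = 1_000_000;
        int[] a = new int[n];
        for (int i = 0; i < n; i++) {
            a[i] = 2 * i;
        }

        int target = 999_999; // odd, so it is not in the array: a worst case
        int lo = 0, hi = n - 1, steps = 0;
        while (lo <= hi) {
            steps++;
            int mid = lo + (hi - lo) / 2; // avoids overflow of (lo + hi)
            if (a[mid] == target) {
                break;
            } else if (a[mid] < target) {
                lo = mid + 1;
            } else {
                hi = mid - 1;
            }
        }
        // Prints roughly 20, since log2(1,000,000) is about 19.9.
        System.out.println("steps: " + steps);
    }
}
```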
3. Impact on Performance and Scalability
Example 1:
If we have two algorithms for processing data, one with a time complexity of O(n^2) and another with O(n log n), and we increase the data size from 1,000 to 10,000 elements, the performance difference will be significant.
- The O(n^2) algorithm will perform approximately 100,000,000 operations for 10,000 elements.
- The O(n log n) algorithm will perform roughly 10,000 * log2(10,000), or about 130,000 operations, for 10,000 elements: about 750 times fewer.
Example 2:
Consider an algorithm that runs in O(2^n). If we increase the input size from 10 to 20 elements, the number of operations grows exponentially.
- For n = 10: 2^10 = 1,024 operations.
- For n = 20: 2^20 = 1,048,576 operations.
This shows how quickly exponential complexity becomes impractical even for moderate values of n.
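The blowup is easy to reproduce. The sketch below counts the 2^n include/exclude branches behind a brute-force subset enumeration, the same pattern as the brute-force knapsack solution mentioned in 11.1 (a simplified illustration, not a full knapsack solver).

```java
public class ExponentialGrowth {
    // Counts the subsets of an n-element set by recursive enumeration:
    // each element is either excluded or included, so the recursion
    // reaches 2^n leaves.
    static long countSubsets(int n) {
        if (n == 0) {
            return 1; // only the empty subset remains
        }
        // Branch on the n-th element: leave it out, or take it.
        return countSubsets(n - 1) + countSubsets(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(countSubsets(10)); // 1024
        System.out.println(countSubsets(20)); // 1048576
    }
}
```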