5.1 Definition of Dynamic Programming.
Dynamic Programming (DP) is an optimization method that solves complex problems by breaking them down into simpler subproblems. It is effective for problems that can be divided into overlapping subproblems with optimal substructure.
Working Principles:
1. Optimal Substructure:
A problem has an optimal substructure if its optimal solution can be constructed from the optimal solutions of its subproblems. This means that you can solve a big problem by solving several smaller subproblems.
2. Overlapping Subproblems:
A problem has overlapping subproblems if its subproblems are repeated multiple times in the process of solving the main problem. Dynamic programming effectively solves problems with overlapping subproblems by remembering the results of already solved subproblems (memoization).
3. Memoization and Tabulation:
Memoization (top-down): A recursive approach where the results of subproblems are stored in memory to avoid repeated calculations.
Tabulation (bottom-up): An iterative approach where the results of subproblems are calculated and stored in a table (usually an array) and used to compute the final result.
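As a minimal sketch of both approaches, here is the classic Fibonacci example; the names fib_memo and fib_tab are illustrative:

from functools import lru_cache

# Top-down: plain recursion, with results of subproblems cached (memoization).
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up: iteratively fill a table from the smallest subproblem upward (tabulation).
def fib_tab(n):
    if n < 2:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(fib_memo(30))  # 832040
print(fib_tab(30))   # 832040

Both versions solve each subproblem exactly once; they differ only in whether the order of computation is driven by recursion or by an explicit loop.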
Examples of problems solved using dynamic programming:
5.2 Knapsack Problem.
Problem:
You have a set of items, each with a weight and a value. You need to choose items for a backpack with a limited weight capacity so that the total value is maximized.
Important!
Unlike the fractional knapsack problem, which is solved well by a greedy algorithm, items in this version cannot be split: each item is either taken whole or left behind.
Solution:
A table is created where rows correspond to items and columns to possible backpack capacities. The value in cell (i, w) is the maximum total value achievable using the first i items with capacity w.
Time complexity: O(n * W), where n is the number of items and W is the backpack capacity.
Python code example:
def knapsack(weights, values, W):
    """0/1 knapsack: maximize total value within capacity W."""
    n = len(weights)
    # dp[i][w] = best value using the first i items with capacity w
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if weights[i - 1] <= w:
                # Either skip item i-1, or take it and add its value.
                dp[i][w] = max(dp[i - 1][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
            else:
                # Item i-1 does not fit at capacity w.
                dp[i][w] = dp[i - 1][w]
    return dp[n][W]
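A small usage sketch; the weights, values, and capacity below are made up for illustration:

weights = [2, 3, 4, 5]
values = [3, 4, 5, 6]
print(knapsack(weights, values, 5))  # 7: take the items with weights 2 and 3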
5.3 Shortest Path Problem.
Problem:
Find the shortest paths between all pairs of vertices in a weighted graph.
Solution:
The Floyd-Warshall algorithm uses a table whose rows and columns correspond to the vertices of the graph. Cell (i, j) holds the current shortest distance from vertex i to vertex j, refined by allowing more and more vertices as intermediate stops.
Time complexity: O(V^3), where V is the number of vertices.
Python code example:
def floyd_warshall(graph):
    """All-pairs shortest paths; graph is an adjacency matrix
    with float('inf') marking missing edges."""
    v = len(graph)
    # Copy the matrix so the input is not modified.
    dist = [row[:] for row in graph]
    # Allow each vertex k in turn as an intermediate point on paths i -> j.
    for k in range(v):
        for i in range(v):
            for j in range(v):
                dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
    return dist
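A usage sketch with a hypothetical 4-vertex graph, using float('inf') for missing edges:

INF = float('inf')  # no direct edge
graph = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
for row in floyd_warshall(graph):
    print(row)
# For example, the shortest path from vertex 1 to vertex 0 is
# 1 -> 2 -> 3 -> 0 with length 2 + 1 + 2 = 5, not the direct edge of 8.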
5.4 Comparing Dynamic Programming with Other Methods.
Comparison with Brute Force:
Time Complexity:
- Brute Force: often exponential (e.g., O(2^n) for the naive recursive Fibonacci).
- Dynamic Programming: reduces the complexity to polynomial (e.g., O(n) for Fibonacci).
Space Complexity:
- Brute Force: may use less memory, but the time cost is high.
- Dynamic Programming: may use more memory to store intermediate results (e.g., O(n) for Fibonacci with memoization), but is far faster.
Comparison with Greedy Algorithms:
Optimality:
- Greedy Algorithms: Don't always find a globally optimal solution, but are quicker and easier to implement.
- Dynamic Programming: Finds a globally optimal solution, but requires more computational resources.
Examples of Problems:
- Greedy Algorithms: Fractional knapsack problem, Minimum Spanning Tree (MST).
- Dynamic Programming: Integer knapsack problem, Longest Common Subsequence (LCS).
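To illustrate the first pair, here is a minimal greedy sketch for the fractional knapsack problem, assuming positive weights; the name fractional_knapsack is illustrative:

def fractional_knapsack(weights, values, W):
    # Greedy: take items in decreasing value/weight order, splitting the last one.
    items = sorted(zip(weights, values), key=lambda wv: wv[1] / wv[0], reverse=True)
    total = 0.0
    for w, v in items:
        if W <= 0:
            break
        take = min(w, W)        # the whole item, or the fraction that still fits
        total += v * (take / w)
        W -= take
    return total

print(fractional_knapsack([2, 3, 4], [3, 4, 5], 6))  # 8.25: items 0 and 1 whole, a quarter of item 2

Because items can be split, the locally best choice (highest value per unit of weight) is also globally optimal here; in the integer version of 5.2 that greedy reasoning breaks down, which is why dynamic programming is needed there.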