Optimization Algorithms

Last Updated: 5th January, 2026

Python makes it straightforward to implement algorithms for optimization problems, such as dynamic programming and greedy algorithms. Classic problems like the 0/1 Knapsack problem, shortest paths (Dijkstra's algorithm), and the Fibonacci sequence with memoization can all be solved with these approaches.

Greedy Algorithm

A Greedy Algorithm solves problems by making the best local choice at each step, hoping it leads to a globally optimal solution. It does not reconsider previous choices, so decisions are final as the algorithm progresses.

In Python, greedy algorithms are implemented using iterative or sorting-based approaches, depending on the problem. The main operations performed in a greedy algorithm are:

  • Choose: Select the option that looks best at the current step.
  • Include: Add the selected option to the solution set.
  • Repeat: Continue until the problem is fully solved.
  • Finalize: Return the complete solution, which is expected to be optimal.

Greedy strategies are widely used in coin change problems (where they are optimal only for canonical coin systems), activity selection, minimum spanning trees (Prim's and Kruskal's algorithms), Huffman encoding, and scheduling problems.

Time Complexity: Varies by problem, often O(n log n) or O(n²) depending on sorting or edge selection.
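For instance, making change greedily, as a minimal sketch. It assumes a canonical coin system such as (25, 10, 5, 1), for which the greedy choice happens to be optimal; for arbitrary denominations, greedy change-making can fail:

def greedy_change(amount, coins=(25, 10, 5, 1)):
    result = []
    for coin in coins:            # choose: largest denomination first
        while amount >= coin:     # include: take the coin while it still fits
            result.append(coin)
            amount -= coin
    return result                 # finalize: the complete set of coins

print(greedy_change(63))   # [25, 25, 10, 1, 1, 1]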

Example:

Input:

activities = [(1, 3), (2, 5), (4, 6), (6, 8)]
activities.sort(key=lambda x: x[1])   # greedy choice: earliest finish time first
last_end = 0
for s, e in activities:
    if s >= last_end:                 # include only if it starts after the last selected activity ends
        print(f"Selected: ({s},{e})")
        last_end = e

Output:

Selected: (1,3)
Selected: (4,6)
Selected: (6,8)

Dynamic Programming (DP) Algorithm

Dynamic Programming (DP) solves problems by breaking them into overlapping subproblems and storing their results to avoid redundant calculations. This is done using memoization (top-down) or tabulation (bottom-up) techniques. DP is especially useful for optimization problems where subproblems repeat.

In Python, DP can be implemented using arrays, dictionaries, or matrices to store intermediate results. The main operations performed in dynamic programming are:

  • Define Subproblems: Break the main problem into smaller, manageable subproblems.
  • Store Results: Save the solution of each subproblem to reuse later.
  • Combine Solutions: Use stored results to construct the solution to the original problem.
  • Return Optimal Solution: Retrieve the final answer efficiently.

Dynamic programming is widely used in Fibonacci sequence calculation, 0/1 Knapsack problem, Matrix Chain Multiplication, shortest path algorithms, and many combinatorial optimization problems.

Time Complexity: Reduces exponential recursion to polynomial time, depending on the number of subproblems.

Example:

Input:

dp = [0, 1] + [0] * 8          # table for the first 10 Fibonacci numbers
for i in range(2, 10):
    dp[i] = dp[i-1] + dp[i-2]  # each entry reuses the two stored subproblem results
print("Fibonacci:", dp)

Output:

Fibonacci: [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
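The same result can be computed top-down with memoization; a minimal sketch using functools.lru_cache to cache subproblem results:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # each subproblem is solved once and cached for reuse
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print("Fibonacci:", [fib(i) for i in range(10)])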

Divide and Conquer Algorithm

Divide and Conquer is a problem-solving strategy that splits a problem into smaller, independent subproblems, solves each recursively, and then combines their results to form the final solution. It efficiently handles complex problems by breaking them into manageable parts.

In Python, divide and conquer is implemented using recursive functions. The main operations performed in this strategy are:

  • Divide: Break the problem into smaller subproblems.
  • Conquer: Solve each subproblem independently, often recursively.
  • Combine: Merge the solutions of subproblems to form the final solution.
  • Return: Provide the final result after combining all subproblem solutions.

Divide and conquer is widely used in sorting algorithms (Merge Sort, Quick Sort), searching algorithms (Binary Search), numerical algorithms, and computational geometry problems.

Time Complexity: Typically O(n log n) for sorting algorithms like Merge and Quick Sort.

Example:

Input:

def merge_sort(arr):
    if len(arr) > 1:
        mid = len(arr) // 2
        L, R = arr[:mid], arr[mid:]
        merge_sort(L)   # divide: sort the left half
        merge_sort(R)   # divide: sort the right half
        i = j = k = 0
        while i < len(L) and j < len(R):   # combine: merge the sorted halves
            if L[i] <= R[j]:
                arr[k] = L[i]; i += 1
            else:
                arr[k] = R[j]; j += 1
            k += 1
        while i < len(L):                  # copy any leftovers from L
            arr[k] = L[i]; i += 1; k += 1
        while j < len(R):                  # copy any leftovers from R
            arr[k] = R[j]; j += 1; k += 1

arr = [38, 27, 43, 3, 9, 82, 10]
merge_sort(arr)
print(arr)

Output:

[3, 9, 10, 27, 38, 43, 82]
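Binary Search follows the same divide-and-conquer pattern, halving the search space at each step; a minimal recursive sketch on the sorted output above:

def binary_search(arr, target, lo=0, hi=None):
    if hi is None:
        hi = len(arr) - 1
    if lo > hi:
        return -1                  # target not present
    mid = (lo + hi) // 2           # divide at the midpoint
    if arr[mid] == target:
        return mid
    if arr[mid] < target:
        return binary_search(arr, target, mid + 1, hi)   # conquer the right half
    return binary_search(arr, target, lo, mid - 1)       # conquer the left half

print(binary_search([3, 9, 10, 27, 38, 43, 82], 27))   # prints 3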

Linear Programming Algorithm

Linear Programming (LP) is a mathematical optimization technique used to maximize or minimize a linear objective function subject to linear constraints (equalities or inequalities). It helps make optimal decisions in scenarios with limited resources.

In Python, LP problems can be solved using libraries such as PuLP or SciPy's scipy.optimize module. The main operations performed in linear programming are:

  • Define Objective Function: Specify what needs to be maximized or minimized.
  • Set Constraints: Express limitations as linear equations or inequalities.
  • Solve: Use algorithms like Simplex or interior-point methods to find the optimal solution.
  • Interpret Solution: Analyze the results to implement optimal decisions.

Linear programming is widely used in resource allocation, production planning, transportation optimization, scheduling problems, and financial modeling.

Time Complexity: Depends on the algorithm; Simplex is exponential in the worst case but fast in practice, while interior-point methods run in polynomial time.

Example:

Input:

from scipy.optimize import linprog

# Maximize z = 3x + 2y by minimizing -z (linprog minimizes by default)
c = [-3, -2]                 # negated objective coefficients
A = [[2, 1], [1, 2]]         # left-hand sides: 2x + y <= 20, x + 2y <= 20
b = [20, 20]                 # right-hand sides of the constraints
res = linprog(c, A_ub=A, b_ub=b)   # x, y >= 0 by default
print("Optimal values:", res.x.round(2), "Maximum Z:", round(-res.fun, 2))

Output:

Optimal values: [6.67 6.67] Maximum Z: 33.33
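The same model can be written more declaratively with PuLP; a minimal sketch, assuming the pulp package is installed (its bundled solver finds the same optimum, x = y ≈ 6.67 and z ≈ 33.33):

from pulp import LpMaximize, LpProblem, LpVariable, value

prob = LpProblem("maximize_z", LpMaximize)
x = LpVariable("x", lowBound=0)      # decision variables with x, y >= 0
y = LpVariable("y", lowBound=0)
prob += 3 * x + 2 * y                # objective: maximize z = 3x + 2y
prob += 2 * x + y <= 20              # constraint 1
prob += x + 2 * y <= 20              # constraint 2
prob.solve()
print("x =", value(x), "y =", value(y), "z =", value(prob.objective))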

Backtracking Algorithm

Backtracking is a problem-solving technique that explores all possible solutions recursively and abandons paths that violate constraints, effectively pruning invalid options. It systematically searches for solutions by building them step by step and backtracking when a dead-end is reached.

In Python, backtracking is implemented using recursive functions. The main operations performed in backtracking are:

  • Choose: Select a candidate option for the current step.
  • Explore: Recursively move forward with the chosen option.
  • Check Constraints: Abandon the path if constraints are violated.
  • Backtrack: Return to the previous step to try a different option.

Backtracking is widely used in combinatorial problems, constraint satisfaction problems, N-Queens, Sudoku, maze solving, and generating permutations or subsets.

Time Complexity: Exponential in the worst case (depends on the number of possibilities).

Example:

Input:

def backtrack(arr, path=None):
    if path is None:                         # avoid a mutable default argument
        path = []
    if not arr:
        print(path)                          # a complete subset
    else:
        backtrack(arr[1:], path)             # choice 1: exclude arr[0]
        backtrack(arr[1:], path + [arr[0]])  # choice 2: include arr[0]

backtrack([1, 2, 3])

Output:

[]
[3]
[2]
[2, 3]
[1]
[1, 3]
[1, 2]
[1, 2, 3]
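The subset generator above explores every path; the N-Queens problem shows the Check Constraints step actually pruning branches. A minimal sketch for n = 4, placing one queen per row:

def solve_queens(n, cols=()):
    if len(cols) == n:
        print(cols)              # one queen column per row
        return
    for c in range(n):
        # prune: skip any column attacked by an earlier queen
        # (same column, or same diagonal: row gap equals column gap)
        if all(c != pc and abs(c - pc) != len(cols) - r
               for r, pc in enumerate(cols)):
            solve_queens(n, cols + (c,))

solve_queens(4)   # prints (1, 3, 0, 2) and (2, 0, 3, 1)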