Algorithms
Think of algorithms as step-by-step instructions for solving problems. Most problems can be solved in several ways, but some ways take less time, involve less repetition, and require less work.
Each algorithm also has a best use case: the situation where it is the right tool for the job.
Sorting
Bubble Sort
Pros ✅
- Easy to implement: The algorithm is straightforward and ideal for teaching sorting concepts.
- In-place sorting: Requires only a constant amount of additional memory, $O(1)$.
- Stable sorting: Maintains the relative order of equal elements, which is useful in certain applications.
- Possible optimization for the best case: With an early-exit condition (stop when a pass makes no swaps), it reaches O(n) on nearly sorted data; see the sketch after the code example below.
Cons ❌
- Very inefficient for large datasets: Has average and worst-case time complexity of O(n²), making it impractical for large lists.
- Many swaps: Performs more swaps than more efficient algorithms such as insertion sort or quicksort.
- Not adaptive by default: Without the early-exit optimization, it runs every pass even if the list becomes sorted early.
- Rarely used in practice: Better-performing algorithms such as Merge Sort, Quick Sort, and Timsort are used in real applications.
Worst case: O(n²) | Best case: O(n)
YouTube - Bubble Sort by Michael Sambol
def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # After each pass, the largest remaining element "bubbles" to the end
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

sorted_arr = bubble_sort([64, 34, 25, 12, 22, 11, 90])
print(sorted_arr)  # [11, 12, 22, 25, 34, 64, 90]
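The early-exit optimization mentioned in the pros is a small change: track whether a pass swapped anything, and stop as soon as a full pass makes no swaps. A minimal sketch (the function name is my own):

def bubble_sort_early_exit(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:  # a full pass with no swaps means the list is sorted
            break
    return arr

print(bubble_sort_early_exit([11, 12, 22, 25, 34, 64, 90]))  # already sorted: one O(n) pass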
Insertion Sort
Pros ✅
- Simple and intuitive: The algorithm is easy to understand and implement.
- Better performance in the best case: Achieves O(n) time complexity on nearly sorted data.
- Stable sorting: Maintains the relative order of equal elements, which is useful in certain applications; a short stability demo follows the code example below.
Cons ❌
- Very inefficient for large datasets: Has average and worst-case time complexity of O(n²), making it impractical for large lists.
- Many element shifts: Each insertion may shift a long run of elements, so reverse-sorted input is the worst case.
- Rarely used alone in practice: Faster algorithms such as Merge Sort, Quick Sort, and Timsort are used in real applications (Timsort itself uses insertion sort for small runs).
Worst case: O(n²) | Best case: O(n)
YouTube - Insertion Sort by Michael Sambol
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift elements larger than key one position to the right
        while j >= 0 and key < arr[j]:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key  # drop key into its sorted position
    return arr

sorted_arr = insertion_sort([64, 34, 25, 12, 22, 11, 90])
print(sorted_arr)  # [11, 12, 22, 25, 34, 64, 90]
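To see what stability means concretely, here is the same shifting logic applied to (key, label) pairs, comparing keys only; the pairs are made-up data for illustration. Equal keys keep their original left-to-right order:

def insertion_sort_pairs(pairs):
    # Identical to insertion_sort, but compares only the key (first element)
    for i in range(1, len(pairs)):
        item = pairs[i]
        j = i - 1
        while j >= 0 and item[0] < pairs[j][0]:
            pairs[j + 1] = pairs[j]
            j -= 1
        pairs[j + 1] = item
    return pairs

print(insertion_sort_pairs([(2, 'a'), (1, 'b'), (2, 'c'), (1, 'd')]))
# [(1, 'b'), (1, 'd'), (2, 'a'), (2, 'c')]: equal keys stay in their original order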
Merge Sort
Pros ✅
- Stable sorting: Maintains the relative order of equal elements, which is useful in certain applications.
- Guaranteed O(n log n): Performance never degrades, regardless of the input order.
- Works beyond arrays: Merges sequentially, so it suits linked lists and external sorting of data too large for memory.
Cons ❌
- Not in-place sorting: Requires additional memory proportional to the size of the input array, O(n).
- Not adaptive: Performs the full O(n log n) work even on an already sorted input.
Worst case: O(n log n) | Best case: O(n log n)
YouTube - Merge Sort Algorithm by Michael Sambol
def merge_sort(arr):
    if len(arr) <= 1:  # base case: 0 or 1 elements are already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])   # sort each half recursively
    right = merge_sort(arr[mid:])
    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0
    # Repeatedly take the smaller head element of the two sorted halves;
    # <= keeps the merge stable by taking ties from the left half first
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One of the halves may still have leftover elements
    result.extend(left[i:])
    result.extend(right[j:])
    return result

sorted_arr = merge_sort([64, 34, 25, 12, 22, 11, 90])
print(sorted_arr)  # [11, 12, 22, 25, 34, 64, 90]
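In day-to-day Python you rarely write this yourself: the built-in sorted() and list.sort() use Timsort, a stable hybrid of merge sort and insertion sort with an O(n log n) worst case:

data = [64, 34, 25, 12, 22, 11, 90]
print(sorted(data))       # returns a new sorted list (Timsort under the hood)
data.sort(reverse=True)   # sorts in place, descending
print(data)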
Quick Sort
Pros ✅
- Fast in practice: Average-case O(n log n) with small constant factors, often the fastest comparison sort on real data.
- In-place variants: Classic partition schemes sort within the array using only O(log n) extra space for recursion (the simple version below is not in-place; an in-place sketch follows it).
- Cache-friendly: Works on contiguous slices of the array, which suits modern memory hierarchies.
Cons ❌
- Not stable by default: Equal elements may be reordered, which matters in certain applications.
- Worst-case O(n²): Poor pivot choices (for example, the first element of already sorted input) degrade it to quadratic time.
Worst case: O(n²) | Best case: O(n log n)
YouTube - Quick Sort Algorithm by Michael Sambol
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]  # middle element as pivot
    left = [x for x in arr if x < pivot]      # elements smaller than the pivot
    middle = [x for x in arr if x == pivot]   # elements equal to the pivot
    right = [x for x in arr if x > pivot]     # elements larger than the pivot
    return quick_sort(left) + middle + quick_sort(right)

sorted_arr = quick_sort([64, 34, 25, 12, 22, 11, 90])
print(sorted_arr)  # [11, 12, 22, 25, 34, 64, 90]
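The version above is easy to read but allocates new lists at every level of recursion. A minimal in-place sketch using the Lomuto partition scheme (the helper names are my own):

def quick_sort_in_place(arr, lo=0, hi=None):
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        p = partition(arr, lo, hi)
        quick_sort_in_place(arr, lo, p - 1)   # sort elements left of the pivot
        quick_sort_in_place(arr, p + 1, hi)   # sort elements right of the pivot
    return arr

def partition(arr, lo, hi):
    pivot = arr[hi]  # Lomuto scheme: last element as pivot
    i = lo
    for j in range(lo, hi):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]  # place the pivot in its final position
    return i

print(quick_sort_in_place([64, 34, 25, 12, 22, 11, 90]))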
Search
Linear Search
Pros ✅
- Simple and intuitive: The algorithm is easy to understand and implement.
- No preconditions: Works on unsorted data and needs no preprocessing or extra memory.
Cons ❌
- Slow for large datasets: O(n) average and worst case, since every element may need to be examined.
- Ignores structure: Cannot take advantage of sorted data the way binary search can.
Worst case: O(n) | Best case: O(1)
YouTube - Linear Search Algorithm by Bro Code
def linear_search(arr, target):
    # Scan the list from left to right until the target is found
    for i, value in enumerate(arr):
        if value == target:
            return i  # index of the first match
    return -1  # target not present

target = 25
index = linear_search([64, 34, 25, 12, 22, 11, 90], target)
print(index)  # 2
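For simple membership and index lookups, Python's built-ins perform the same linear scan: the in operator and list.index(), which raises ValueError when the element is missing:

arr = [64, 34, 25, 12, 22, 11, 90]
print(25 in arr)      # True (linear scan under the hood)
print(arr.index(25))  # 2
try:
    arr.index(99)
except ValueError:
    print("not found")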
Binary Search
Pros ✅
- Very fast: O(log n) worst case, halving the search interval at every step.
- Constant extra space: The iterative version needs only O(1) additional memory.
Cons ❌
- Requires sorted data: The array must already be sorted, and keeping it sorted has its own cost.
- Needs random access: On structures without O(1) indexing, such as linked lists, each midpoint lookup is expensive.
Worst case: O(log n) | Best case: O(1)
YouTube - Binary Search by Michael Sambol
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1   # target is in the right half
        else:
            right = mid - 1  # target is in the left half
    return -1  # target not present

target = 25
# Note: the input must already be sorted for binary search to work
index = binary_search([11, 12, 22, 25, 34, 64, 90], target)
print(index)  # 3
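The standard library's bisect module implements the same halving idea; bisect_left returns the leftmost insertion point, so you still have to check that the element there is actually the target:

from bisect import bisect_left

def binary_search_bisect(arr, target):
    i = bisect_left(arr, target)  # leftmost position where target could be inserted
    if i < len(arr) and arr[i] == target:
        return i
    return -1

print(binary_search_bisect([11, 12, 22, 25, 34, 64, 90], 25))  # 3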
Recursion
Pros ✅
- Simple and intuitive: Recursive solutions often mirror the problem's own definition, making the code easy to read.
- Natural fit for nested structures: Trees, divide-and-conquer, and backtracking problems map directly onto recursive calls.
Cons ❌
- Stack overflow risk: Deep recursion can exhaust the call stack; Python raises RecursionError at a default depth of about 1000.
- Call overhead: Each call adds a stack frame, so an iterative version is usually faster and uses less memory.
Complexity depends on the recursive algorithm; the factorial example below takes O(n) time and O(n) stack space.
YouTube - Recursion in 100 seconds by Fireship
def factorial(n):
    if n == 0:  # base case: 0! = 1
        return 1
    return n * factorial(n - 1)  # recursive case: n! = n * (n-1)!

result = factorial(5)
print(result)  # 120
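When a recursive function recomputes the same subproblems (the classic example is Fibonacci), functools.lru_cache caches results and turns exponential time into linear:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n  # base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155, instant with caching and painfully slow without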
BFS (Breadth-First Search)
Breadth-First Search (BFS) is an algorithm for traversing or searching tree or graph data structures. It starts at the root node and explores all neighboring nodes at the current level before moving to nodes at the next level.
Pros ✅
- Complete: If a path to the target exists, BFS will find it.
- Optimal for unweighted graphs: The first time it reaches a node, it has used the fewest possible edges (see the shortest-path sketch after the traversal example below).
- Simple to implement: A FIFO queue is the only data structure needed.
Cons ❌
- Memory usage: The queue can hold an entire level of the graph, which is costly in very large graphs.
- Not ideal for deep targets: Explores every node at each depth first, which is wasteful when the target is far from the start.
Time complexity: O(V + E) | Space complexity: O(V)
YouTube - BFS by William Fiset
from collections import deque

def bfs(graph, start):
    visited = set()
    queue = deque([start])
    visited.add(start)
    while queue:
        vertex = queue.popleft()  # FIFO: process nodes in order of discovery
        print(vertex, end=' ')
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                visited.add(neighbor)  # mark when enqueued to avoid duplicates
                queue.append(neighbor)

# Example usage: an undirected graph as an adjacency list
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F'],
    'D': ['B'],
    'E': ['B', 'F'],
    'F': ['C', 'E']
}
bfs(graph, 'A')  # A B C D E F
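Because BFS discovers nodes in order of distance, recording each node's predecessor yields a shortest path in an unweighted graph. A minimal sketch reusing the graph above (the function name is my own):

def bfs_shortest_path(graph, start, goal):
    parents = {start: None}  # doubles as the visited set
    queue = deque([start])
    while queue:
        vertex = queue.popleft()
        if vertex == goal:
            path = []  # walk the parent chain back to the start
            while vertex is not None:
                path.append(vertex)
                vertex = parents[vertex]
            return path[::-1]
        for neighbor in graph[vertex]:
            if neighbor not in parents:
                parents[neighbor] = vertex
                queue.append(neighbor)
    return None  # no path exists

print(bfs_shortest_path(graph, 'A', 'F'))  # ['A', 'C', 'F']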
DFS (Depth-First Search)
Depth-First Search (DFS) is an algorithm for traversing or searching tree or graph data structures. It explores as far as possible along each branch before backtracking.
Pros ✅
- Memory efficient: Tracks only the current branch plus visited nodes, typically less than BFS needs on wide graphs.
- Good for long paths: Can be more efficient than BFS when the solution lies deep in the graph.
- Useful for backtracking: Ideal for problems like maze solving and puzzle search that explore choices and undo them.
Cons ❌
- Doesn't guarantee shortest paths: The first path found may not have the fewest edges.
- Can get stuck in cycles: Visited nodes must be marked, or the traversal loops forever.
- Recursion depth: The recursive form below can hit Python's recursion limit on very deep graphs; an iterative sketch follows the example.
Time complexity: O(V + E) | Space complexity: O(V)
YouTube - DFS by William Fiset
def dfs(graph, start, visited=None):
    if visited is None:
        visited = set()
    visited.add(start)
    print(start, end=' ')
    for neighbor in graph[start]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)  # recurse before trying siblings

# Example usage: same adjacency list as before
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F'],
    'D': ['B'],
    'E': ['B', 'F'],
    'F': ['C', 'E']
}
dfs(graph, 'A')  # A B D E F C
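To avoid the recursion limit on very deep graphs, the same traversal can use an explicit stack; reversed() keeps the visit order of the recursive version (reusing the graph above):

def dfs_iterative(graph, start):
    visited = set()
    stack = [start]
    while stack:
        vertex = stack.pop()  # LIFO: always dives deeper before trying siblings
        if vertex not in visited:
            visited.add(vertex)
            print(vertex, end=' ')
            for neighbor in reversed(graph[vertex]):
                if neighbor not in visited:
                    stack.append(neighbor)

dfs_iterative(graph, 'A')  # A B D E F C, same order as the recursive version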
Dijkstra's Algorithm
Dijkstra's algorithm finds the shortest paths from a source vertex to the other vertices in a graph with non-negative edge weights.
Pros ✅
- Guarantees shortest paths: Finds the route with the lowest total weight.
- Efficient: A priority queue (min-heap) always expands the closest unsettled vertex next.
- Versatile: Used in network routing, mapping, and many other shortest-path problems.
Cons ❌
- Doesn't work with negative weights: All edge weights must be non-negative (Bellman-Ford handles negative weights).
- Implementation complexity: More involved than BFS or DFS.
Time complexity: O((V + E) log V) | Space complexity: O(V)
YouTube - Dijkstra's Algorithm by Computer Science Lessons
import heapq

def dijkstra(graph, start):
    # Start with every distance at infinity except the source
    distances = {node: float('infinity') for node in graph}
    distances[start] = 0
    pq = [(0, start)]  # min-heap of (distance, node)
    while pq:
        current_distance, current_node = heapq.heappop(pq)
        if current_distance > distances[current_node]:
            continue  # stale entry: a shorter path was already found
        for neighbor, weight in graph[current_node].items():
            distance = current_distance + weight
            if distance < distances[neighbor]:
                distances[neighbor] = distance  # relax the edge
                heapq.heappush(pq, (distance, neighbor))
    return distances

# Example usage: weighted undirected graph as nested dicts
graph = {
    'A': {'B': 1, 'C': 4},
    'B': {'A': 1, 'C': 2, 'D': 5},
    'C': {'A': 4, 'B': 2, 'D': 1},
    'D': {'B': 5, 'C': 1}
}
distances = dijkstra(graph, 'A')
print(distances)  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
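The function above returns only distances. To recover the actual route, record each node's predecessor when relaxing an edge and walk that chain backwards from the goal; a sketch with my own helper name, reusing the graph above:

def dijkstra_with_path(graph, start, goal):
    distances = {node: float('infinity') for node in graph}
    distances[start] = 0
    previous = {start: None}  # predecessor on the best known path
    pq = [(0, start)]
    while pq:
        current_distance, current_node = heapq.heappop(pq)
        if current_distance > distances[current_node]:
            continue
        for neighbor, weight in graph[current_node].items():
            distance = current_distance + weight
            if distance < distances[neighbor]:
                distances[neighbor] = distance
                previous[neighbor] = current_node
                heapq.heappush(pq, (distance, neighbor))
    path, node = [], goal  # assumes goal is reachable from start
    while node is not None:
        path.append(node)
        node = previous[node]
    return path[::-1], distances[goal]

print(dijkstra_with_path(graph, 'A', 'D'))  # (['A', 'B', 'C', 'D'], 4)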
Conclusion
Each algorithm has its specific place and purpose. The choice of the correct algorithm depends on the problem you're trying to solve and the characteristics of your data. Understanding the complexities and trade-offs of each algorithm is crucial for writing efficient and scalable code.
Remember:
- Use simple algorithms (like Bubble Sort) for small datasets or educational purposes
- More complex algorithms (like Quick Sort) are better for large datasets
- BFS and DFS are fundamental for graph problems
- Dijkstra's algorithm is the right choice for finding shortest paths in weighted graphs with non-negative weights
Constant practice and deep understanding of these algorithms will help you become a more efficient programmer capable of solving complex problems elegantly.