
Algorithms

· 10 min read

Think of algorithms as step-by-step instructions for solving a problem. Most problems can be solved in many ways, but some approaches take less time, avoid repeated work, and use fewer resources than others.

Each algorithm also has its best use case: the kind of problem and data it handles well.

Sorting


Bubble Sort

Bubble sort repeatedly steps through the list, compares adjacent elements, and swaps them when they are out of order, so larger values gradually "bubble" toward the end.

Pros ✅

  1. Easy to implement: The algorithm is straightforward and ideal for teaching sorting concepts.

  2. In-place sorting: Requires only a constant amount of additional memory, O(1).

  3. Stable sorting: Maintains the relative order of equal elements, which is useful in certain applications.

  4. Possible optimization for best case: with an early exit condition (stop when a pass makes no swaps), it achieves O(n) on nearly sorted data, as shown in the sketch after the basic implementation below.

Cons ❌

  1. Very inefficient for large datasets: Has average and worst-case time complexity of O(n²), making it impractical for large lists.

  2. Many swaps: Performs more swaps than necessary compared to more efficient algorithms like insertion sort or quicksort.

  3. Not adaptive by default: Without optimization, always goes through all iterations even if the list becomes sorted prematurely.

  4. Rarely used in practice: Algorithms with better performance, such as Merge Sort, Quick Sort, and Timsort, are used in real applications.

info

Worst case: O(n²) | Best case: O(n)

YouTube - Bubble Sort by Michael Sambol

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # After each pass, the largest remaining element settles at
        # position n - i - 1, so the inner range shrinks.
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

sorted_arr = bubble_sort([64, 34, 25, 12, 22, 11, 90])

print(sorted_arr)
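
As mentioned in pro #4, a swapped flag lets the loop exit as soon as a full pass makes no swaps. A minimal sketch of that optimization (bubble_sort_optimized is an illustrative name, not from the original code):

def bubble_sort_optimized(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:
            # A pass with no swaps means the list is sorted: O(n) best case.
            break
    return arr

print(bubble_sort_optimized([1, 2, 3, 4, 5]))  # exits after a single pass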

Insertion Sort

Insertion sort builds the sorted portion one element at a time, taking the next element and inserting it into its correct position among the elements already sorted.

Pros ✅

  1. Simple and intuitive: The algorithm is easy to understand and implement.

  2. Better performance in best case: Can achieve O(n) time complexity for nearly sorted data.

  3. Stable sorting: Maintains the relative order of equal elements, which is useful in certain applications.

Cons ❌

  1. Very inefficient for large datasets: Has average and worst-case time complexity of O(n²), making it impractical for large lists.

  2. Many element shifts: Each insertion may shift a long run of elements one position to the right, which makes reversed or random data expensive.

  3. Rarely used in practice: There are algorithms with better performance like Merge Sort, Quick Sort, and Tim Sort used in real applications.

info

Worst case: O(n²) | Best case: O(n)

YouTube - Insertion Sort by Michael Sambol

def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift elements greater than key one slot right,
        # then drop key into the gap.
        while j >= 0 and key < arr[j]:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

sorted_arr = insertion_sort([64, 34, 25, 12, 22, 11, 90])

print(sorted_arr)
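
To make pro #2 concrete, here is a hedged sketch that counts element shifts (insertion_sort_counting is a made-up name for this demo): nearly sorted input triggers almost no work, while reversed input shows the O(n²) behavior.

def insertion_sort_counting(arr):
    # Same algorithm, instrumented to count how many elements it shifts.
    shifts = 0
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and key < arr[j]:
            arr[j + 1] = arr[j]
            j -= 1
            shifts += 1
        arr[j + 1] = key
    return shifts

print(insertion_sort_counting([1, 2, 4, 3, 5]))  # nearly sorted: 1 shift
print(insertion_sort_counting([5, 4, 3, 2, 1]))  # reversed: 10 shifts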

Merge Sort

Merge sort divides the list in half, recursively sorts each half, and then merges the two sorted halves into one.

Pros ✅

  1. Stable sorting: Maintains the relative order of equal elements, which is useful in certain applications.

  2. Guaranteed O(n log n): Worst-case performance matches the average case, so there are no pathological inputs.

  3. Works beyond arrays: The sequential merge step suits linked lists and external sorting of data that does not fit in memory.

Cons ❌

  1. Not in-place sorting: Requires additional memory proportional to the size of the input array.

  2. Not adaptive: The basic algorithm performs the full O(n log n) work even when the input is already sorted.

info

Worst case: O(n log n) | Best case: O(n log n)

YouTube - Merge Sort Algorithm by Michael Sambol

def merge_sort(arr):
    # Split in half, sort each half recursively, then merge.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        # <= keeps the sort stable: on ties, take from the left half first.
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])   # one of these two is empty
    result.extend(right[j:])
    return result

sorted_arr = merge_sort([64, 34, 25, 12, 22, 11, 90])

print(sorted_arr)
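
A quick way to see pro #1 in action is to sort pairs by their first element only; equal keys then keep their input order. This variant and the sample data are illustrative, not part of the original post.

def merge_sort_by(arr, key):
    # Merge sort that compares key(x) only, so ties keep their input order.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort_by(arr[:mid], key)
    right = merge_sort_by(arr[mid:], key)
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if key(left[i]) <= key(right[j]):
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    return result + left[i:] + right[j:]

pairs = [(2, 'first'), (1, 'a'), (2, 'second')]
print(merge_sort_by(pairs, key=lambda p: p[0]))
# [(1, 'a'), (2, 'first'), (2, 'second')] - the equal keys kept their order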

Quick Sort

Quick sort picks a pivot element, partitions the list into elements smaller and larger than the pivot, and recursively sorts the partitions.

Pros ✅

  1. Fast in practice: Average-case O(n log n) with small constant factors; it often outperforms other O(n log n) sorts on arrays.

  2. In-place sorting: The classic partition-based version needs only O(log n) auxiliary stack space; the simple example below trades that for readability, and an in-place sketch follows it.

  3. Cache-friendly: Partitioning works through contiguous regions of the array, which makes good use of CPU caches.

Cons ❌

  1. Not stable by default: Equal elements may be reordered, which matters in certain applications.

  2. O(n²) worst case: Poor pivot choices (for example, always taking the first element of an already sorted list) degrade it to quadratic time.

info

Worst case: O(n²) | Best case: O(n log n)

YouTube - Quick Sort Algorithm by Michael Sambol

def quick_sort(arr):
    # Simple (not in-place) version: builds new lists around a pivot.
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)

sorted_arr = quick_sort([64, 34, 25, 12, 22, 11, 90])

print(sorted_arr)
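
Since pro #2 refers to the partition-in-place variant, here is a minimal sketch using the Lomuto partition scheme (quick_sort_in_place is an illustrative name); it swaps elements within the array instead of building new lists.

def quick_sort_in_place(arr, lo=0, hi=None):
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return arr
    pivot = arr[hi]          # Lomuto scheme: last element as pivot
    i = lo
    for j in range(lo, hi):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]  # pivot lands at its final index
    quick_sort_in_place(arr, lo, i - 1)
    quick_sort_in_place(arr, i + 1, hi)
    return arr

print(quick_sort_in_place([64, 34, 25, 12, 22, 11, 90]))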

Searching

Linear Search

Linear search scans the list element by element until it finds the target or reaches the end.

Pros ✅

  1. Simple and intuitive: The algorithm is easy to understand and implement.

  2. No preprocessing required: Works on unsorted data, and the best case is O(1) when the target is the first element.

Cons ❌

  1. Slow for large datasets: Checks up to n elements, O(n), in the worst case.

  2. Ignores structure in the data: Even on a sorted list it still scans linearly, where binary search would take O(log n).

info

Worst case: O(n) | Best case: O(1)

YouTube - Linear Search Algorithm by Bro Code

def linear_search(arr, target):
    # Scan left to right; return the index of the first match.
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1  # not found

target = 25

index = linear_search([64, 34, 25, 12, 22, 11, 90], target)

print(index)
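
A small illustrative variant (linear_search_all is a made-up name): the same scan, but collecting every matching index instead of stopping at the first.

def linear_search_all(arr, target):
    # Return the indices of every occurrence of target.
    return [i for i, value in enumerate(arr) if value == target]

print(linear_search_all([3, 1, 3, 2, 3], 3))  # [0, 2, 4]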

Binary Search

Binary search works on a sorted list, repeatedly comparing the target with the middle element and discarding the half that cannot contain it.

Pros ✅

  1. Very fast: O(log n) comparisons, since each step halves the remaining search range.

  2. Memory efficient: The iterative version uses only O(1) extra space.

Cons ❌

  1. Requires sorted data: The list must be sorted beforehand, which itself costs O(n log n).

  2. Needs random access: It is efficient on arrays; on a linked list the O(log n) advantage is lost.

info

Worst case: O(log n) | Best case: O(1)

YouTube - Binary Search by Michael Sambol

def binary_search(arr, target):
    # Assumes arr is sorted in ascending order.
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1   # target can only be in the right half
        else:
            right = mid - 1  # target can only be in the left half
    return -1  # not found

target = 25

# Binary search needs sorted input, so sort the example list first.
index = binary_search(sorted([64, 34, 25, 12, 22, 11, 90]), target)

print(index)
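
In practice, Python's standard library already implements binary search in the bisect module; bisect_left returns the position where the target would be inserted to keep the list sorted.

from bisect import bisect_left

def binary_search_bisect(arr, target):
    i = bisect_left(arr, target)            # insertion point for target
    if i < len(arr) and arr[i] == target:   # check it is actually there
        return i
    return -1

print(binary_search_bisect([11, 12, 22, 25, 34, 64, 90], 25))  # 3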

Recursion

Recursion is a technique where a function calls itself on a smaller instance of the problem until it reaches a base case.

Pros ✅

  1. Natural fit for self-similar problems: Trees, divide and conquer, and mathematical definitions map directly onto recursive code.

  2. Concise and readable: Often shorter and closer to the problem statement than the iterative equivalent.

Cons ❌

  1. Stack overflow risk: Each call consumes stack space, and deep recursion can exhaust it (Python's default recursion limit is 1000 frames).

  2. Call overhead and repeated work: Function calls cost more than loop iterations, and naive recursion can recompute the same subproblems, as the memoized Fibonacci sketch below shows.

info

factorial(n) — Time: O(n) | Space: O(n) call stack

YouTube - Recursion in 100 seconds by Fireship

def factorial(n):
    # Base case: 0! is 1. Assumes n is a non-negative integer.
    if n == 0:
        return 1
    return n * factorial(n - 1)

result = factorial(5)

print(result)
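
To illustrate the repeated-work con above: a naive recursive Fibonacci recomputes the same subproblems exponentially many times, while functools.lru_cache memoizes them. A minimal sketch:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Without the cache, fib(40) would make hundreds of millions of calls.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155, computed instantly thanks to memoization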

Graphs

Breadth-First Search (BFS)

Breadth-First Search (BFS) is an algorithm for traversing or searching tree or graph data structures. It starts at the root node and explores all neighboring nodes at the current level before moving to nodes at the next level.

Pros ✅

  1. Complete: Guarantees finding the shortest path in unweighted graphs.

  2. Optimal for unweighted graphs: Finds the path with the fewest edges.

  3. Simple to implement: Uses a queue data structure.

Cons ❌

  1. Memory usage: Can consume a lot of memory in very large graphs.

  2. Not ideal for long paths: Can be inefficient when the path is very long.

info

Time complexity: O(V + E) | Space complexity: O(V)

YouTube - BFS by William Fiset

from collections import deque

def bfs(graph, start):
    visited = set()
    queue = deque([start])
    visited.add(start)

    while queue:
        vertex = queue.popleft()  # FIFO: process nodes level by level
        print(vertex, end=' ')

        for neighbor in graph[vertex]:
            if neighbor not in visited:
                visited.add(neighbor)  # mark when enqueued to avoid duplicates
                queue.append(neighbor)

# Example usage
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F'],
    'D': ['B'],
    'E': ['B', 'F'],
    'F': ['C', 'E']
}

bfs(graph, 'A')
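
Pro #1 says BFS finds shortest paths in unweighted graphs; here is a minimal sketch (bfs_shortest_path is an illustrative name) that records each node's parent and walks back from the goal. It reuses the graph defined above.

from collections import deque

def bfs_shortest_path(graph, start, goal):
    parent = {start: None}   # how we first reached each node
    queue = deque([start])
    while queue:
        vertex = queue.popleft()
        if vertex == goal:
            path = []
            while vertex is not None:  # walk parents back to the start
                path.append(vertex)
                vertex = parent[vertex]
            return path[::-1]
        for neighbor in graph[vertex]:
            if neighbor not in parent:
                parent[neighbor] = vertex
                queue.append(neighbor)
    return None  # goal unreachable

print(bfs_shortest_path(graph, 'A', 'F'))  # ['A', 'C', 'F']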

Depth-First Search (DFS)

Depth-First Search (DFS) is an algorithm for traversing or searching tree or graph data structures. It explores as far as possible along each branch before backtracking.

Pros ✅

  1. Memory efficient: Uses less memory than BFS in very large graphs.

  2. Good for long paths: Can be more efficient when the path is very long.

  3. Useful for backtracking: Ideal for problems that require backtracking.

Cons ❌

  1. Doesn't guarantee shortest path: May not find the shortest path in unweighted graphs.

  2. Can get stuck in cycles: Requires marking visited nodes to avoid infinite cycles.

info

Time complexity: O(V + E) | Space complexity: O(V)

YouTube - DFS by William Fiset

def dfs(graph, start, visited=None):
    if visited is None:
        visited = set()

    visited.add(start)
    print(start, end=' ')

    # Recurse into each unvisited neighbor before backtracking.
    for neighbor in graph[start]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)

# Example usage
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F'],
    'D': ['B'],
    'E': ['B', 'F'],
    'F': ['C', 'E']
}

dfs(graph, 'A')
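
Deep graphs can hit Python's recursion limit with the version above; an explicit stack avoids that. A hedged sketch (dfs_iterative is an illustrative name), reusing the graph defined above:

def dfs_iterative(graph, start):
    visited = set()
    stack = [start]
    while stack:
        vertex = stack.pop()  # LIFO: dive deep before backtracking
        if vertex in visited:
            continue
        visited.add(vertex)
        print(vertex, end=' ')
        # Push neighbors in reverse so they are visited in listed order.
        for neighbor in reversed(graph[vertex]):
            if neighbor not in visited:
                stack.append(neighbor)

dfs_iterative(graph, 'A')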

Dijkstra's Algorithm

Dijkstra's algorithm finds the shortest paths from a source vertex to all other vertices in a graph with non-negative edge weights.

Pros ✅

  1. Guarantees shortest path: Finds the path with the lowest total weight.

  2. Efficient: Uses a priority queue to optimize the search.

  3. Versatile: Can be used in various types of routing problems.

Cons ❌

  1. Doesn't work with negative weights: Requires all edges to have non-negative weights.

  2. Implementation complexity: More complex than BFS or DFS.

info

Time complexity: O((V + E) log V) | Space complexity: O(V)

YouTube - Dijkstra's Algorithm by Computer Science Lessons

import heapq

def dijkstra(graph, start):
    # Tentative distance to every node; the source starts at 0.
    distances = {node: float('infinity') for node in graph}
    distances[start] = 0
    pq = [(0, start)]  # min-heap of (distance, node)

    while pq:
        current_distance, current_node = heapq.heappop(pq)

        # Skip stale entries: a shorter way here was already found.
        if current_distance > distances[current_node]:
            continue

        for neighbor, weight in graph[current_node].items():
            distance = current_distance + weight

            # Relax the edge if it improves the neighbor's distance.
            if distance < distances[neighbor]:
                distances[neighbor] = distance
                heapq.heappush(pq, (distance, neighbor))

    return distances

# Example usage
graph = {
    'A': {'B': 1, 'C': 4},
    'B': {'A': 1, 'C': 2, 'D': 5},
    'C': {'A': 4, 'B': 2, 'D': 1},
    'D': {'B': 5, 'C': 1}
}

distances = dijkstra(graph, 'A')
print(distances)
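
To recover the route itself rather than just the distances, track each node's predecessor while relaxing edges. A minimal sketch (dijkstra_path is an illustrative name), reusing the graph defined above:

import heapq

def dijkstra_path(graph, start, goal):
    distances = {node: float('infinity') for node in graph}
    previous = {node: None for node in graph}  # best-known predecessor
    distances[start] = 0
    pq = [(0, start)]
    while pq:
        current_distance, current_node = heapq.heappop(pq)
        if current_distance > distances[current_node]:
            continue  # stale heap entry
        for neighbor, weight in graph[current_node].items():
            distance = current_distance + weight
            if distance < distances[neighbor]:
                distances[neighbor] = distance
                previous[neighbor] = current_node
                heapq.heappush(pq, (distance, neighbor))
    # Walk the predecessor chain back from the goal.
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = previous[node]
    return path[::-1]

print(dijkstra_path(graph, 'A', 'D'))  # ['A', 'B', 'C', 'D'], total cost 4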

Conclusion

Each algorithm has its specific place and purpose. The choice of the correct algorithm depends on the problem you're trying to solve and the characteristics of your data. Understanding the complexities and trade-offs of each algorithm is crucial for writing efficient and scalable code.

Remember:

  • Use simple algorithms (like Bubble Sort) for small datasets or educational purposes
  • More complex algorithms (like Quick Sort) are better for large datasets
  • BFS and DFS are fundamental for graph problems
  • Dijkstra's algorithm is the right choice for shortest paths in graphs with non-negative edge weights

Constant practice and a deep understanding of these algorithms will help you become a more effective programmer, capable of solving complex problems elegantly.