Below is a breakdown of essential optimization techniques used in arrays, graphs, dynamic programming, and advanced data structures.
🔹 1. Preprocessing Techniques
📌 Use extra space to reduce time complexity.
✅ Prefix Sum (O(1) query, O(n) preprocessing)
- Problem: Given an array `nums[]`, compute the sum of elements between indices `[L, R]` efficiently.
- Optimization: Instead of recalculating the sum for every query, precompute a prefix sum array.
- Example:
```python
n = len(nums)
prefix = [0] * (n + 1)
for i in range(n):
    prefix[i + 1] = prefix[i] + nums[i]  # prefix[i+1] = sum of nums[0..i]
# Range-sum query: sum of nums[L..R] == prefix[R + 1] - prefix[L]
```
✅ Suffix Sum (used for right-to-left computations, analogous to prefix sum)
✅ Hashing (precompute frequencies/occurrences)
- Replaces O(N) linear scans with O(1) hash-table lookups.
✅ Sparse Tables (O(1) range queries, O(n log n) preprocessing)
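As a sketch of the hashing idea, consider the classic Two Sum lookup (the function name `two_sum` is illustrative, not from the text above):

```python
def two_sum(nums, target):
    """Return indices of two elements summing to target, else None."""
    seen = {}  # value -> index of its earlier occurrence
    for i, num in enumerate(nums):
        if target - num in seen:  # O(1) hash lookup instead of an O(N) scan
            return [seen[target - num], i]
        seen[num] = i
    return None
```

The inner membership test replaces a nested loop, turning the brute-force O(N²) pair search into a single O(N) pass.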
🔹 2. Two Pointers / Sliding Window
📌 Reduce nested loops by maintaining two moving pointers.
✅ Sliding Window
- Used in subarray problems (e.g., max sum, longest substring).
- Avoids unnecessary recomputation.
🔹 Example (find the longest substring with at most k distinct characters):
```python
def longest_substr_k_distinct(s, k):
    count = {}  # char -> frequency within the current window
    left = 0
    max_len = 0
    for right in range(len(s)):
        count[s[right]] = count.get(s[right], 0) + 1
        while len(count) > k:  # shrink window until <= k distinct chars
            count[s[left]] -= 1
            if count[s[left]] == 0:
                del count[s[left]]
            left += 1
        max_len = max(max_len, right - left + 1)
    return max_len
```
💡 Optimized from the O(N²) brute force to O(N) using a sliding window.
✅ Fast-Slow Pointers
- Detect cycles in linked lists (Floyd's algorithm).
- Find the middle of a list in O(N) time and O(1) space.
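The cycle-detection case can be sketched as follows (the `ListNode` class and `has_cycle` name are my own, minimal versions):

```python
class ListNode:
    def __init__(self, val=0):
        self.val = val
        self.next = None

def has_cycle(head):
    """Floyd's fast-slow pointer cycle detection: O(N) time, O(1) space."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next          # advances one step
        fast = fast.next.next     # advances two steps
        if slow is fast:          # pointers can only meet inside a cycle
            return True
    return False
```

If there is no cycle, the fast pointer reaches the end; if there is one, the fast pointer laps the slow one inside it.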
🔹 3. Greedy Optimization
📌 Make locally optimal choices that lead to a globally optimal solution.
✅ Activity Selection Problem (sorting + greedy)
✅ Huffman Encoding (min-heap optimization)
✅ Dijkstra's Algorithm (greedy + priority queue)
Example: Jump Game II (greedy O(N) vs. DP O(N²))
```python
def jump(nums):
    jumps = 0
    curr_end = 0   # end of the range reachable with `jumps` jumps
    farthest = 0   # farthest index reachable so far
    for i in range(len(nums) - 1):
        farthest = max(farthest, i + nums[i])
        if i == curr_end:      # must take another jump to go farther
            jumps += 1
            curr_end = farthest
    return jumps
```
💡 Optimized from O(N²) (DP) to O(N) using a greedy approach.
🔹 4. Dynamic Programming (Memoization & Tabulation)
📌 Reduce exponential time complexity by caching subproblem results.
✅ Memoization (top-down DP): recursion + cache
✅ Tabulation (bottom-up DP): iteration
Example: Fibonacci sequence (memoized DP O(N) vs. naive recursion O(2^N))
```python
def fib(n, memo=None):
    if memo is None:   # avoid sharing a mutable default argument across calls
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)  # cache results
    return memo[n]
```
💡 Avoids recomputation by caching previously computed results.
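The tabulation counterpart mentioned above can be sketched as follows (the name `fib_tab` is my own); it keeps only the last two values, so it also drops the cache down to O(1) space:

```python
def fib_tab(n):
    """Bottom-up (tabulation) Fibonacci: O(N) time, O(1) space."""
    if n <= 1:
        return n
    prev, curr = 0, 1  # fib(0), fib(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr  # slide the two-value window forward
    return curr
```

Bottom-up iteration also avoids Python's recursion-depth limit, which the memoized version can hit for large n.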
🔹 5. Binary Search Optimization
📌 Used in searching, finding lower/upper bounds, and optimization problems.
✅ Binary Search on Sorted Arrays
✅ Binary Search on the Answer (shrink a monotonic search space)
- Example: find the smallest divisor such that the sum of the rounded-up divisions is ≤ threshold.
```python
def smallestDivisor(nums, threshold):
    left, right = 1, max(nums)
    while left < right:
        mid = (left + right) // 2
        # (num + mid - 1) // mid is num / mid rounded up
        if sum((num + mid - 1) // mid for num in nums) > threshold:
            left = mid + 1   # divisor too small: sum exceeds threshold
        else:
            right = mid
    return left
```
💡 Optimized from O(N·M) brute force (trying every divisor up to M = max(nums)) to O(N log M) using binary search on the divisor.
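For the plain sorted-array case, the standard lower-bound pattern can be sketched as follows (the name `lower_bound` is my own; Python's `bisect.bisect_left` does the same thing):

```python
def lower_bound(arr, target):
    """First index i with arr[i] >= target, or len(arr) if none: O(log N)."""
    left, right = 0, len(arr)
    while left < right:
        mid = (left + right) // 2
        if arr[mid] < target:
            left = mid + 1   # everything up to mid is too small
        else:
            right = mid      # mid might be the answer; keep it in range
    return left
```

The same half-open `[left, right)` template underlies "binary search on the answer" problems such as `smallestDivisor` above.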
🔹 6. Graph Optimizations
📌 Avoid recomputation in traversal-heavy problems.
✅ Dijkstra's Algorithm (min-heap optimization): O(E log V)
✅ A* Algorithm (heuristic-guided Dijkstra)
✅ Union-Find with path compression: O(α(N)) ≈ O(1) amortized
✅ Topological Sorting (Kahn's Algorithm): O(V + E)
Example: Dijkstra's shortest paths with a priority queue
```python
import heapq

def dijkstra(graph, start):
    heap = [(0, start)]  # (distance, node)
    distances = {node: float('inf') for node in graph}
    distances[start] = 0
    while heap:
        curr_dist, node = heapq.heappop(heap)
        if curr_dist > distances[node]:  # skip stale heap entries
            continue
        for neighbor, weight in graph[node]:
            new_dist = curr_dist + weight
            if new_dist < distances[neighbor]:
                distances[neighbor] = new_dist
                heapq.heappush(heap, (new_dist, neighbor))
    return distances
```
💡 Optimized from O(V²) to O(E log V) using a priority queue.
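The Union-Find structure mentioned above can be sketched as follows (a minimal version; the class and method names are my own):

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        # Path compression (path halving): point nodes closer to the root
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        root_a, root_b = self.find(a), self.find(b)
        if root_a == root_b:
            return False          # already in the same component
        self.parent[root_a] = root_b
        return True
```

Compression flattens trees as a side effect of `find`, which is what drives the amortized cost toward O(α(N)), effectively constant.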
🔹 7. Bit Manipulation Optimization
📌 Optimize space and operations using bitwise tricks.
✅ Check if a number is a power of two: n > 0 and n & (n - 1) == 0
✅ Find the single non-repeating element: XOR all the numbers
✅ Count set bits (Brian Kernighan's algorithm)
Example: find the single non-duplicate number in O(N) time and O(1) space
```python
def singleNumber(nums):
    res = 0
    for num in nums:
        res ^= num   # XOR cancels out paired duplicates
    return res
```
💡 Optimized from O(N) extra space (hash map) to O(1) using XOR.
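Brian Kernighan's set-bit count from the list above can be sketched as follows (the name `count_set_bits` is my own):

```python
def count_set_bits(n):
    """Count 1-bits by repeatedly clearing the lowest set bit."""
    count = 0
    while n:
        n &= n - 1   # n - 1 flips the lowest set bit and all bits below it
        count += 1
    return count
```

The loop runs once per set bit rather than once per bit position, so sparse numbers are counted faster than with a naive shift-and-mask loop.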
🔥 Summary

| Optimization | Technique | Time Complexity Gain |
|---|---|---|
| Preprocessing | Prefix/Suffix Sum, Hashing | O(N) → O(1) per query |
| Sliding Window | Two Pointers | O(N²) → O(N) |
| Greedy | Activity Selection, Dijkstra | O(N²) → O(N log N) |
| Dynamic Programming | Memoization, Tabulation | O(2^N) → O(N) |
| Binary Search | Lower/Upper Bound | O(N) → O(log N) |
| Graph Algorithms | BFS, DFS, Dijkstra | O(V²) → O(E log V) |
| Bit Manipulation | XOR, Power of 2 | O(N) → O(1) |
🔹 Final Tip:
When solving DSA problems, always start with brute force, analyze its time complexity, and then apply one or more of these optimizations to improve performance.
🔥 Would you like a curated list of FAANG problems for each optimization technique? Let me know!