Assorted Notes

Posted on November 25, 2013

Last updated on September 26, 2015

This page serves as my notes on sorting and related subproblems. This is a Work in Progress.

1 Sorting Algorithms

1.1 Bubble Sort

Bubble Sort works by comparing adjacent pairs of elements in an array and swapping them so that the larger element “bubbles” through to the right. This means that after the first pass, the largest element is in its final position at the end of the array. We then repeat, this time placing the second-largest element in its place, and continue until there are no more elements to process. Because bubble sort makes the largest element “sink” to the right of the array into its final resting place, the algorithm is sometimes referred to as a sinking sort.

Java code:

public static void bubbleSort(int[] arr){
  for(int i = arr.length - 1; i >= 1; i--){
    for(int j = 0; j < i; j++){
      if(arr[j] > arr[j+1]) swap(arr, j, j+1);
    }
  }
}
Bubble sort
Runtime Description
\(O(n^2)\) Average case
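The listings on this page call a swap helper that is never shown; presumably it is something like the following (the class name SortUtil is mine):

```java
class SortUtil {
  // Exchange arr[a] and arr[b] in place.
  static void swap(int[] arr, int a, int b){
    int tmp = arr[a];
    arr[a] = arr[b];
    arr[b] = tmp;
  }
}
```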

1.2 Insertion Sort

Insertion sort works by looking at an element \(i\), and sifting it to the left until the element to the left is smaller than or equal to \(i\). It’s efficient on small and nearly sorted arrays. For example, when the array is sorted, the run time is linear in the input size. Since it’s good for smaller arrays, it can be used as a subroutine in a more complex sort such as quicksort: first use quicksort to partition the array into subgroups such that if a subgroup \(i\) is to the left of subgroup \(j\), all elements of \(i\) are smaller than or equal to every element of \(j\). Afterwards, invoke insertion sort on this array to sort it in nearly linear time.

public static void insertionSort(int[] arr){
  for(int i = 1; i < arr.length; i++){
    int j = i-1;
    while(j >= 0 && arr[j] > arr[j+1]){
      swap(arr, j, j+1);
      j--;
    }
  }
}
Insertion sort algorithm
Runtime Description
\(O(n^2)\) Average case
\(O(n)\) Sorted / nearly sorted

1.3 Selection Sort

Selection sort is similar to insertion sort. It works by placing the smallest value in \(x[0]\), then the next remaining one in \(x[1]\) and so forth.

public static void selectionSort(int[] arr){
  for(int i = 0; i < arr.length - 1; i++){
    for(int j = i+1; j < arr.length; j++){
      if(arr[i] > arr[j]) swap(arr, i, j);
    }
  }
}
Selection Sort
Runtime Description
\(O(n^2)\) Average case

1.4 Shell Sort

Shell Sort is a generalization of insertion sort – it works by using a decreasing sequence of gaps that ends in 1. For every \(g \in \mathrm{gaps}\), we \(g\)-sort the array, meaning that every subsequence of elements spaced \(g\) apart becomes sorted. We then take the next \(g\), and continue the process. The final step is with \(g=1\), meaning we do a standard insertion sort, by then on a nearly sorted array. The performance is highly dependent on the input distribution and the gap sequence used.

public static void shellSort(int[] arr){
  int[] gaps = {701, 301, 132, 57, 23, 10, 4, 1}; // you choose this
  for(int g : gaps){ // g-sort the array
    for(int i = g; i < arr.length; i++){
      int j = i;
      while(j >= g && arr[j] < arr[j-g]){
        swap(arr, j, j-g);
        j -= g;
      }
    }
  }
}
Shell Sort

You wouldn’t really use this sort in practice nowadays.

1.5 Quicksort

1.5.1 Idea

Quicksort is a recursive algorithm that uses the divide-and-conquer approach to split the problem into subproblems, solve them recursively, and then combine the solutions. Quicksort works by first partitioning the array around a value \(v\). This means shuffling the array so that all elements \(< v\) are on one side of the array, and all elements \(\geq v\) are on the other side. Once we have that, we know the final resting place for \(v\). It now remains to recursively sort the left and right partitions.

1.5.2 Partition function

The idea here is to return an index \(m\) such that everything at index \(\leq m\) is \(< val\), and everything at index \(> m\) is \(\geq val\). Use \(i\) to traverse the required bounds \([a,b]\). If \(arr[i] \geq val\), we are fine; nothing needs to be done. Otherwise, we increment \(m\) (so now it points to an element that's \(\geq val\)) and swap that element with \(arr[i]\) (so that \(m\) again points at an element \(< val\)).

// Partition the array `arr` between `a` and `b` such that
// m = partition(arr, val, a, b) &&
//   a <= i <= m --> arr[i] < val &&
//   m < i <= b --> arr[i] >= val
private static int partition(int[] arr, int val, int a, int b){
  int m = a-1;
  for(int i = a; i <= b; i++){
    if(arr[i] < val){
      m++;
      swap(arr, i, m);
    }
  }
  return m;
}
Partition function

1.5.3 Result

The final quicksort algorithm can be expressed as follows:

public static void quicksort(int[] arr){ quicksortAux(arr, 0, arr.length-1); }

private static void quicksortAux(int[] arr, int l, int r){
  if(l >= r) return; // Do nothing, at most one element
  int m = partition(arr, arr[l], l+1, r);
  swap(arr, l, m); // move the pivot into its final position
  quicksortAux(arr, l, m-1);
  quicksortAux(arr, m+1, r);
}
Quicksort Algorithm
Runtime Description
\(O(n\log{n})\) Average case
\(O(n^2)\) Worst case: when sorted/nearly sorted

There are various methods to get rid of the \(O(n^2)\) scenario, for example by randomly choosing the value to partition around, instead of always choosing the left bound of the sub-array we are currently sorting.
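A sketch of the randomized-pivot variant: before partitioning, swap a uniformly random element of the current range into the left bound. This is self-contained for completeness; the class name and the inlined partition/swap helpers are mine, following the same partition contract as above.

```java
import java.util.Random;

class RandomizedQuicksort {
  static final Random RNG = new Random();

  static void quicksort(int[] arr){ quicksortAux(arr, 0, arr.length - 1); }

  static void quicksortAux(int[] arr, int l, int r){
    if(l >= r) return;
    // randomization: any element of [l,r] becomes the pivot with equal probability
    swap(arr, l, l + RNG.nextInt(r - l + 1));
    int m = partition(arr, arr[l], l + 1, r);
    swap(arr, l, m); // pivot to its final position
    quicksortAux(arr, l, m - 1);
    quicksortAux(arr, m + 1, r);
  }

  // same contract as in the notes: afterwards arr[a..m] < val and arr[m+1..b] >= val
  static int partition(int[] arr, int val, int a, int b){
    int m = a - 1;
    for(int i = a; i <= b; i++){
      if(arr[i] < val){
        m++;
        swap(arr, i, m);
      }
    }
    return m;
  }

  static void swap(int[] arr, int a, int b){
    int t = arr[a]; arr[a] = arr[b]; arr[b] = t;
  }
}
```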

1.6 Mergesort

Mergesort also works recursively – it splits the array into two sub-arrays, mergesorts them recursively, and then merges the two sub-arrays into one. The key is the merge function, which given two sorted arrays, interleaves them together to produce a third sorted array.

public static int[] mergeSort(int[] arr){
  if(arr.length <= 1) return arr;
  int index = arr.length/2;
  int[] leftArray = Arrays.copyOfRange(arr, 0, index);
  int[] rightArray = Arrays.copyOfRange(arr, index, arr.length);
  return merge(mergeSort(leftArray), mergeSort(rightArray));
}

private static int[] merge(int[] a, int[] b){
  int aLen = a.length;
  int bLen = b.length;
  int[] res = new int[aLen + bLen];
  int aPtr = 0;
  int bPtr = 0;
  int resPtr = 0;
  while(aPtr < aLen && bPtr < bLen){
    if(a[aPtr] < b[bPtr]){
      res[resPtr++] = a[aPtr++];
    } else{
      res[resPtr++] = b[bPtr++];
    }
  }
  while(bPtr < bLen){ // we exhausted array a, copy the rest of b
    res[resPtr++] = b[bPtr++];
  }
  while(aPtr < aLen){ // we exhausted array b, copy the rest of a
    res[resPtr++] = a[aPtr++];
  }
  return res;
}
Mergesort with `merge` function
Runtime Description
\(O(n\log{n})\) Average, best, and worst cases

2 Related Algorithms

2.1 Binary Search

The following code returns the index of the first occurrence of target in A, or the insertion index if target is not part of the array.

public int searchInsert(int[] A, int target) {
  if(A.length == 0 || A[0] >= target) return 0;

  int l = 0;
  int r = A.length;

  while(r - l > 1){
    int mid = (r-l)/2 + l;
    if(A[mid] >= target) r = mid;
    else l = mid;
  }

  return r;
}
Binary Search

2.2 Binary Search Rotated Array

Suppose a sorted array has been rotated by an unknown offset \(k\). We can still binary search on this array.

public int search(int[] A, int target) {
  int l = 0;
  int r = A.length-1;

  while(l <= r){
    int mid = (r-l)/2 + l;

    if(A[mid] == target) return mid;

    if(A[l] <= A[mid]){ // left half in increasing order
      if(target < A[mid] && target >= A[l]) r = mid-1;
      else l = mid+1;
    } else{ // right half must be in increasing order
      if(target > A[mid] && target <= A[r]) l = mid+1;
      else r = mid-1;
    }
  }
  return -1;
}
Binary Search Rotated Array

2.3 Random elements

Suppose we wish to solve the following problem:

Given integers \(m\),\(n\) such that \(0 < m \leq n\), select \(m\) distinct numbers uniformly at random from the range \([0,n-1]\).

2.3.1 Algorithm S

Algorithm S is an algorithm by Donald Knuth to solve the above problem. It works by keeping track of how many elements we have left to choose (\(s\)) and how many candidates remain (\(r\)), and selects the current item \(i\) with probability \(\frac{s}{r}\). If there is nothing left to select, then \(\frac{s}{r} = 0\) and we don't select any more elements. It also can't select fewer than \(m\) elements, since when \(s=r\) every remaining element is selected with probability \(1\).

// Returns an array of $m$ distinct elements selected at random
// from the range [0..n-1]. This is Knuth's algorithm S.
// Pre: 0 < m <= n
// Uses O(n) time and O(m) space.
// Assumes a helper randint(a,b) returning a uniform integer in [a,b].
public static int[] getRandom1(int m, int n){
  int[] result = new int[m];
  int toSelect = m;
  int remaining = n;
  for(int i = 0; i < n; i++){
    // with probability (toSelect/remaining), select i
    if(randint(1,remaining) <= toSelect){
      result[toSelect-1] = i;
      toSelect--;
    }
    remaining--;
  }
  return result;
}
Random numbers using Algorithm S

2.3.2 By shuffling

We can also solve the required problem as follows: create an array with the elements \([0,n-1]\), shuffle it, and take the first \(m\) elements. The issue remains how to shuffle an array uniformly; Knuth's Algorithm P (the Fisher–Yates shuffle) does exactly that.

In pseudocode:

for i in [0..n-1]
  j = random integer from range [i,n-1]
  swap a[i] with a[j]

In Java, the Fisher–Yates shuffle looks like this:

public static void shuffle(int[] arr){
  for(int i = 0; i < arr.length; i++){
    swap(arr, i, randint(i, arr.length-1));
  }
}
Fisher–Yates Shuffle

We can now use the Fisher–Yates shuffle to get back \(m\) random elements:

// Returns an array of $m$ distinct elements selected at random
// from the range [0..n-1]. It uses Knuth's Algorithm P.
// Pre: 0 < m <= n
// Uses O(n) space and O(n) time (the shuffle itself only does O(m) work)
public static int[] getRandom2(int m, int n){
  int[] nums = new int[n];
  for(int i = 0; i < n; i++) nums[i] = i;
  for(int i = 0; i < m; i++){ // partial Fisher–Yates: only the first m slots matter
    int j = randint(i, n-1);
    swap(nums, i, j);
  }
  int[] numbers = Arrays.copyOfRange(nums, 0, m);
  Arrays.sort(numbers); // optional – sort the sample
  return numbers;
}
Random numbers using Algorithm P - $O(n)$ space and time

2.3.3 Random element from finite stream

We now wish to solve the following problem:

Given a finite stream of unknown length, select an element from it uniformly at random using \(O(1)\) space.

The problem would be simple in \(O(n)\) space where \(n\) is the eventual length of the stream. Simply save the elements, and when the stream finishes, get a random element from that array. The following algorithm solves this problem in \(O(1)\) space:

p = 1
e = null
for s in stream:
  with probability 1/p:
    replace e with s
  p = p + 1
return e

Proof:

For a stream of size \(1\), we replace \(e\) with \(s\) with probability \(\frac{1}{1} = 1\), and return \(s\). Otherwise, assume inductively that for a stream \(S\) of which we processed \(p\) elements so far, each of the \(p\) elements seen so far has \(\frac{1}{p}\) probability of being \(e\). Let a new element \(i\) arrive in the stream. We replace \(e\) with \(i\) with probability \(\frac{1}{p+1}\), so \(i\) has probability \(\frac{1}{p+1}\) of being selected. \(e\) is not replaced with probability \(\frac{p}{p+1}\), so all previous elements are selected with probability \(\frac{1}{p} \times \frac{p}{p+1} = \frac{1}{p+1}\) as required.
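A sketch of this in Java, modeling the stream as an Iterator (the class and method names are mine):

```java
import java.util.Iterator;
import java.util.Random;

class Reservoir {
  // Select one element uniformly at random from a finite stream, O(1) space.
  static Integer pick(Iterator<Integer> stream, Random rng){
    int p = 1;
    Integer e = null;
    while(stream.hasNext()){
      Integer s = stream.next();
      if(rng.nextInt(p) == 0) e = s; // replace e with probability 1/p
      p++;
    }
    return e;
  }
}
```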

3 Heaps

3.1 Heap construction
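A minimal sketch of the standard bottom-up construction (Floyd's method), which builds a binary min-heap in \(O(n)\) by sifting down every internal node, from the last one up. This assumes the usual array-embedded heap with the children of node \(i\) at \(2i+1\) and \(2i+2\); the class and method names are mine.

```java
class HeapBuild {
  // Bottom-up heap construction: sift down every internal node,
  // starting from the last one. O(n) total work.
  static void buildMinHeap(int[] a){
    for(int i = a.length/2 - 1; i >= 0; i--) siftDown(a, i, a.length);
  }

  static void siftDown(int[] a, int i, int n){
    while(2*i + 1 < n){
      int child = 2*i + 1;
      if(child + 1 < n && a[child+1] < a[child]) child++; // pick the smaller child
      if(a[i] <= a[child]) break; // heap property already holds here
      int t = a[i]; a[i] = a[child]; a[child] = t;
      i = child;
    }
  }
}
```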

3.2 Heap Problems

3.2.1 \(k\)-way merge

Give an \(O(n\log{k})\)-time algorithm to merge \(k\) sorted lists into one sorted list, where \(n\) is the total number of elements in all the input lists. (Question 6.5-8 from CLRS)

The solution is to use a min-heap of size \(k\). At the beginning, take the first element from each of the \(k\) lists and add them to the min-heap. Then, repeatedly extract the minimum from the heap, output it, and add to the heap the next element from the list from which the minimum came. If that list is exhausted, add nothing and keep extracting. Repeat until the heap is empty.
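The scheme above can be sketched with Java's PriorityQueue. The triple encoding (value, list index, position) and the names are mine:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

class KWayMerge {
  // Merge k sorted lists in O(n log k) using a min-heap of size <= k.
  static List<Integer> merge(List<List<Integer>> lists){
    // heap entries: {value, list index, position within that list}
    PriorityQueue<int[]> pq =
        new PriorityQueue<int[]>((x, y) -> Integer.compare(x[0], y[0]));
    for(int i = 0; i < lists.size(); i++)
      if(!lists.get(i).isEmpty())
        pq.add(new int[]{lists.get(i).get(0), i, 0});

    List<Integer> out = new ArrayList<Integer>();
    while(!pq.isEmpty()){
      int[] top = pq.poll(); // smallest remaining element overall
      out.add(top[0]);
      int li = top[1], pos = top[2] + 1;
      if(pos < lists.get(li).size()) // advance in the list it came from
        pq.add(new int[]{lists.get(li).get(pos), li, pos});
    }
    return out;
  }
}
```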

3.2.2 At-most \(k\) away sort

Sort an array of integers where every element is at most \(k\) indices away from its sorted position.

The solution is to use a min-heap. The smallest element must be located within the first \(k+1\) elements of the array. Insert the first \(k+1\) elements into the min-heap and extract the minimum – this is the smallest element overall, so output it. Now traverse the rest of the array, inserting the next element into the min-heap and then extracting the minimum. The result is a sorted sequence, in time \(O(n\log{k})\). Depending on \(k\), an insertion sort might be a better choice.
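A sketch of this using PriorityQueue (the names are mine; the heap holds a sliding window of at most \(k+1\) elements):

```java
import java.util.PriorityQueue;

class NearlySorted {
  // Sort in place, assuming every element is at most k positions away
  // from its sorted position. O(n log k) time, O(k) extra space.
  static void sortKAway(int[] a, int k){
    PriorityQueue<Integer> pq = new PriorityQueue<Integer>();
    int write = 0;
    for(int read = 0; read < a.length; read++){
      pq.add(a[read]);
      // once we've seen indices up to write+k, the minimum is final
      if(pq.size() > k) a[write++] = pq.poll();
    }
    while(!pq.isEmpty()) a[write++] = pq.poll(); // drain the tail
  }
}
```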

3.2.3 Top \(k\) elements from unsorted array

Given an array \(A\) of \(n\) integers, return the largest \(k\) elements.

An inefficient solution would be to sort the array for a runtime of \(O(n\log{n})\).

Another solution is to use a max-heap. Heap construction takes \(O(n)\) time, and extracting \(k\) elements takes \(O(k\log{n})\), for a total runtime of \(O(n+k\log{n})\).

We can do even better though. First, create a max-heap from the array in \(O(n)\) time, and initialize an empty auxiliary max-heap. Add the maximum of the first max-heap to the auxiliary heap. Now, repeatedly remove an element from the auxiliary max-heap, output it, and add to the auxiliary heap the two children (in the first heap) of the element just removed. This works because of the structure of the max-heap – if you have the top \(k-1\) elements, then the \(k\)'th largest element must be a child of one of these \(k-1\) elements. Since every element has at most two children, the auxiliary heap never holds more than about \(2k\) elements. This gives us a total running time of \(O(n + k\log{k})\).
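A sketch of the two-heap scheme: the array heap is built bottom-up, and the auxiliary heap orders indices into it by their value. All names here are mine.

```java
import java.util.PriorityQueue;

class TopK {
  // Largest k elements in O(n + k log k): build a max-heap of the array
  // in O(n), then repeatedly expand the best frontier node of a small
  // auxiliary heap whose size stays around 2k.
  static int[] topK(int[] a, int k){
    int[] heap = a.clone();
    for(int i = heap.length/2 - 1; i >= 0; i--) siftDown(heap, i);

    // auxiliary max-heap of indices into `heap`, ordered by heap value
    PriorityQueue<Integer> aux =
        new PriorityQueue<Integer>((x, y) -> Integer.compare(heap[y], heap[x]));
    aux.add(0); // the root is the global maximum

    int[] out = new int[k];
    for(int t = 0; t < k; t++){
      int idx = aux.poll();        // the (t+1)-th largest element
      out[t] = heap[idx];
      if(2*idx + 1 < heap.length) aux.add(2*idx + 1); // its children are the
      if(2*idx + 2 < heap.length) aux.add(2*idx + 2); // only new candidates
    }
    return out;
  }

  static void siftDown(int[] h, int i){
    int n = h.length;
    while(2*i + 1 < n){
      int c = 2*i + 1;
      if(c + 1 < n && h[c+1] > h[c]) c++;
      if(h[i] >= h[c]) break;
      int t = h[i]; h[i] = h[c]; h[c] = t;
      i = c;
    }
  }
}
```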

Apparently it is also possible to do this in \(O(n + k)\) by first creating a max-heap of the array, and then finding the \(k\)'th largest element in the max-heap using Frederickson's heap-selection algorithm in, surprisingly, only \(O(k)\) time. Once you have the \(k\)'th largest element, simply traverse the heap, keeping only the elements at least as large as it, in \(O(k)\) time, for a total runtime of \(O(n+k)\).

4 Numerical Algorithms

4.1 Prime factorization

The code below factors an integer into its prime factors. The result for an integer \(I\) is returned as a hashmap of \((P,e)\) pairs, where \(P^e\) appears in the prime factorization of \(I\).

private HashMap<Integer,Integer> getPrimeFactors(int A){
  HashMap<Integer, Integer> hm = new HashMap<Integer,Integer>();
  for(int i = 2; i*i <= A; i++){
    if(A % i == 0){
      int cnt = 0;
      while(A % i == 0){
        A /= i;
        cnt++;
      }
      hm.put(i, cnt);
    }
  }
  if(A != 1){ // what remains is prime
    hm.put(A, 1);
  }
  return hm;
}
Prime Factorization

4.2 All primes up to N - Sieve of Eratosthenes

The following uses \(O(N)\) memory to print all the primes up to \(N\) using the sieve of Eratosthenes.

public void primes(int N){
  boolean[] sieve = new boolean[N+1];
  Arrays.fill(sieve, 2, N+1, true); // assume everything >= 2 is prime
  for(int i = 2; i*i <= N; i++){
    if(sieve[i]){
      int mul = 2;
      while(i*mul <= N){
        sieve[i*mul] = false; // cross out the multiples of i
        mul++;
      }
    }
  }
  for(int i = 2; i <= N; i++){
    if(sieve[i]) System.out.println(i);
  }
}
Prime Sieve

4.3 Primality check by trial division

This is the simplest way to check if a number \(N\) is prime – just try dividing it by all numbers up to \(\sqrt{N}\).

public boolean isPrime(int N){
  if(N <= 1) return false;
  for(int i = 2; i*i <= N; i++){
    if(N % i == 0) return false;
  }
  return true;
}
Primality check – trial division

4.4 Fast exponentiation

We can find \(a^n\) in \(O(\log{n})\) multiplications exploiting the following:

\[ a^n = \left\{ \begin{array}{lr} a^{\frac{n}{2}} \times a^{\frac{n}{2}} & : n \textrm{ is even} \\ a^{\frac{n-1}{2}} \times a^{\frac{n-1}{2}} \times a & : n \textrm{ is odd} \end{array} \right. \]

static int fastPow(int a, int n){
  if(n == 0) return 1;
  int num = fastPow(a, n/2);
  num *= num;
  if(n % 2 != 0) num *= a;
  return num;
}
Fast exponentiation algorithm - $O(\log{n})$ multiplications

4.5 Horner’s method for polynomials

Suppose we need to evaluate the polynomial: \[ y = a_nx^n + a_{n-1}x^{n-1} + \ldots + a_1x + a_0 \] Evaluated naively, this requires roughly \(2n\) multiplications. Rearranging: \[ y = a_0 + x(a_1 + x(a_2 + \ldots + x(a_n))) \] we can evaluate \(y\) with roughly \(n\) multiplications.

static double horners(int[] a, int x){
  int n = a.length - 1;
  double y = a[n];
  for(int i = n-1; i >= 0; i--){
    y = y*x + a[i];
  }
  return y;
}
Horner's rule for polynomials

4.6 GCD

The GCD (Greatest Common Divisor) of two positive integers \(a\) and \(b\) is the greatest positive integer \(n\) such that \(n|a\) and \(n|b\). The solution is to use Euclid's algorithm.

static int gcd(int a, int b){
  if(b == 0) return a;
  return gcd(b, a % b);
}

4.7 Totient Function

For two positive integers \(a\),\(b\), we say \(a\) and \(b\) are coprime iff \(\mathrm{gcd}(a,b) = 1\). That is, there is no integer besides \(1\) that divides both numbers.

Given an integer \(n\), Euler's totient function \(\phi\) counts the positive integers up to \(n\) that are coprime to \(n\). We have that:

\[ \phi(n) = n \prod_{p|n}\left(1 - \frac{1}{p}\right) \]

where the \(p\)'s are the distinct prime factors of \(n\). A Java implementation:

public int totient(int n){
  int ret = n;
  for(int p : primeDivisors(n)){
    ret = ret - ret/p; // multiply by (1 - 1/p)
  }
  return ret;
}

public List<Integer> primeDivisors(int n){
  List<Integer> ret = new ArrayList<Integer>();
  int p = 2;
  while(p*p <= n){
    if(n % p == 0) ret.add(p);
    while(n % p == 0) n = n / p;
    p++;
  }
  if(n > 1) ret.add(n);
  return ret;
}
Euler's totient function

4.8 Computing Square Roots

Given a positive real number \(n\), find \(a\) such that \(a^2 = n\).

4.8.1 Binary Search

We can compute the square root using binary search as follows:

static double squareRootBS(double n){
  double precision = 0.001;
  double left = 0;
  double right = n;
  if(n < 1) right = 1; // fix for 0 < n < 1
  double guess = right/2;
  while(Math.abs(guess*guess - n) > precision){
    guess = (right-left)/2 + left;
    if(guess*guess < n) left = guess;
    else right = guess;
  }
  return guess;
}
Square root using binary search

4.8.2 Newton’s Method

Newton’s Method finds the root of the equation \(a^2 - n = 0\) using the iteration \[ x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \]

static double squareRootNewton(double n){
  double precision = 0.001;
  double guess = n/2;
  double val = guess*guess - n;
  while(Math.abs(val) > precision){
    guess = guess - val/(2*guess); // derivative is 2*a
    val = guess*guess - n;
  }
  return guess;
}
Newton's Method for Square Root

4.9 Computing n-th root of an integer

Suppose you need to compute the \(n\)-th root of an integer. One possibility is to binary search, as in the square root case above. If you can use library functions, then the \(n\)-th root of \(a\) is simply Math.pow(a, 1/(double)n). Now suppose you want to find the \(n\)-th root of an integer \(a\) and confirm that the root is itself an integer. Doing this by binary search is surprisingly error-prone: you will likely get the bounds wrong, or fail to handle overflow properly. An alternative using the library function:

static long nthRoot(long a, int n) {
  long r = Math.round(Math.pow(a, 1/(double)n)); // candidate integer root
  long res = 1;
  for(int i = 0; i < n; i++) res *= r;
  if(res == a) return r;
  // for very large a, floating-point error can make the candidate off by
  // one; checking r-1 and r+1 as well would guard against that
  return -1; // no integer root
}
nth integer root

4.10 Matrix Algebra

4.10.1 Matrix Multiplication

The following code multiplies two integer matrices \(A\),\(B\) in time \(O(n^3)\)

// Computes the matrix product AB
private static int[][] mul(int[][] A, int[][] B){
  int hA = A.length;
  int wA = A[0].length;
  int hB = B.length;
  int wB = B[0].length;
  if(wA == hB){
    int[][] res = new int[hA][wB];

    for(int i = 0; i < hA; i++){
      for(int j = 0; j < wB; j++){
        for(int k = 0; k < wA; k++){
          res[i][j] += A[i][k]*B[k][j];
        }
      }
    }
    return res;
  } else return null; // wrong dimensions
}
Integer Matrix Multiplication

4.10.2 Identity Matrix

The identity matrix \(I\) is the square matrix with ones along the diagonal and zeros elsewhere. Whenever the dimensions agree, \(AI=IA=A\).

public static int[][] identity(int n){
  int[][] res = new int[n][n];
  for(int i = 0; i < n; i++) res[i][i] = 1;
  return res;
}
Identity Matrix

4.10.3 Matrix Exponentiation

Given a square matrix \(A\), we can compute \(A^n\) using the same technique as for fast exponentiation of integers.

\[ A^n = \left\{ \begin{array}{lr} I & : n \textrm{ is zero} \\ A & : n \textrm{ is one} \\ A^{\frac{n}{2}} \times A^{\frac{n}{2}} & : n \textrm{ is even} \\ A^{\frac{n-1}{2}} \times A^{\frac{n-1}{2}} \times A & : n \textrm{ is odd} \end{array} \right. \]

Java implementation:

// computes the matrix power A^n
public static int[][] pow(int[][] A, int n){
  int hA = A.length;
  int wA = A[0].length;
  if(hA == wA){
    if(n == 0) return identity(hA);
    if(n == 1) return A;

    int[][] half = pow(A,n/2);
    int[][] res = mul(half,half);
    if(n % 2 == 1) res = mul(res,A);

    return res;
  } else return null;
}
Matrix Exponentiation

4.10.4 Application – Fibonacci

We can use matrix exponentiation to compute recurrences. For example, take the Fibonacci sequence, with \(F(0) = 0\), \(F(1) = 1\), and \(F(n) = F(n-1)+F(n-2)\). We can represent the recursion using matrices:

\[ \left( \begin{array}{c} F_{n+1} \\ F_{n} \end{array} \right)= \left( \begin{array}{cc} 1 & 1 \\ 1 & 0 \end{array} \right) \left( \begin{array}{c} F_n \\ F_{n-1} \end{array} \right)= \left( \begin{array}{cc} 1 & 1 \\ 1 & 0 \end{array} \right)^n \left( \begin{array}{c} 1 \\ 0 \end{array} \right) \]

We can translate this to code and get a logarithmic runtime for computing the recurrence:

public static int fib(int n){
  int[][] start = { {1}, {0} };
  int[][] T = { {1,1}, {1,0} };
  int[][] fibs = mul(pow(T,n), start);
  return fibs[1][0]; // the bottom entry is F(n)
}
Matrix Fibonacci

5 Graphs

Let \(G=(V,E), E \subseteq V \times V\) be a graph. The graph has \(|V|\) vertices and \(|E|\) edges.

5.1 Representing graphs in memory

We have three main ways of representing a graph in memory

  1. A boolean adjacency matrix \(M\) of size \(|V| \times |V|\), where \(M[i][j] \leftrightarrow (i,j) \in E\). We get \(O(1)\) time to determine whether \((i,j) \in E\), but at an \(O(V^2)\) space cost.
  2. An array of \((i,j) \in E\) tuples. Has space requirement \(O(E)\) but takes \(O(E)\) time to figure out the proposition \((i,j) \in E\).
  3. An array \(A\) of size \(|V|\). \(A[i]\) is a list of \((i,j) \in E\). This representation makes it possible to check whether \((i,j) \in E\) in a runtime proportional to the out-degree of \(i\). This is our preferred representation and will be used for the rest of these notes.

5.2 A graph in Java

Here is a simple class representing a (directed, unweighted) graph. It does no error checks, and expects well-formed input.

public class Graph {
  private int V;
  private int E;

  ArrayList<Integer>[] nodes;

  // a graph with V_ vertices and E_ edges
  public Graph(int V_, int E_) {
    V = V_;
    E = E_;
    nodes = (ArrayList<Integer>[]) new ArrayList[V];
    for(int i = 0; i < V; i++){
      nodes[i] = new ArrayList<Integer>();
    }
  }

  public ArrayList<Integer> adj(int a){
    return nodes[a];
  }

  // adds the edge (a,b) - directed graph
  public void addEdge(int a, int b){
    nodes[a].add(b);
  }

  public int V(){ return V; }
  public int E(){ return E; }

  public String toString() {
    String out = "";
    for(int i = 0; i < V; i++){
      out += (i + ": ");
      for(int j : nodes[i]) out += (j + " ");
      out += '\n';
    }
    return out;
  }
}
Java graph class

5.3 Depth-First Search

Depth-First Search (DFS) is a backbone for many graph algorithms. Idea in pseudocode:

  dfs(node):
    visit node
    for child in node.children:
      if child not visited:
        dfs(child)

Notice that we need a data-structure to keep track of which nodes have been visited. This can be a set, or just a boolean array. Now, the above looks correct, but we are missing something. Namely, we will only visit the connected component of the graph which includes node. Hence, the corrected algorithm in pseudocode is as follows:

  dfs_full(nodes):
    for node n in nodes:
      if n not visited:
        dfs(n)

  dfs(node):
    visit node
    for child in node.children:
      if child not visited:
        dfs(child)

5.3.1 Example – Number of connected components

To find the total number of connected components in a graph (for a directed graph, store each edge in both directions), we can use DFS. In Java:

public class DepthFirstSearch {
  boolean[] visited;
  int connected_components;

  // returns the number of connected components in g (1 if g is fully connected)
  public int DFS(Graph g){
    visited = new boolean[g.V()];
    connected_components = 0;
    dfs_full(g);
    return connected_components;
  }

  private void dfs_full(Graph g){
    for(int node = 0; node < g.V(); node++){
      if(!visited[node]){
        connected_components++; // each unvisited node starts a new component
        dfs(g, node);
      }
    }
  }

  private void dfs(Graph g, int vertex){
    visited[vertex] = true;
    for(int i : g.adj(vertex)){
      if(!visited[i]) dfs(g,i);
    }
  }
}
DFS connected components

5.3.2 Example – Check whether there is a path between two vertices

We can use DFS to check for the existence of a path between a start vertex \(s\) and a goal vertex \(g\). We could do the same using Union Find, but that would only give us a yes/no answer, whereas DFS can give us an actual path. Bear in mind that it will give us ANY path, not the shortest path; we will look at the other cases later. The implementation below uses an edgeTo array, where edgeTo[i] tells us from which vertex we arrived at \(i\) during the DFS traversal. Java code:

public class DepthFirstSearchPathFinding{
  boolean[] visited;
  int[] edgeTo;

  // Return null if no path exists, otherwise return a backward path from goal to source
  public ArrayList<Integer> hasPathTo(Graph g, int source, int goal){
    visited = new boolean[g.V()];
    edgeTo = new int[g.V()];
    edgeTo[source] = -1;
    boolean hasPath = dfsFindPaths(g,source,goal);
    if(hasPath){
      int idx = goal;
      ArrayList<Integer> route = new ArrayList<Integer>();
      while(edgeTo[idx] != -1){
        route.add(idx);
        idx = edgeTo[idx];
      }
      route.add(source);
      return route;
    }
    return null;
  }

  private boolean dfsFindPaths(Graph g, int vertex, int goal){
    visited[vertex] = true;
    if(vertex == goal) return true;
    boolean ret = false;
    for(int i : g.adj(vertex)){
      if(!visited[i]){
        edgeTo[i] = vertex;
        ret |= dfsFindPaths(g,i,goal);
      }
    }
    return ret;
  }
}
DFS path finding

5.4 Breadth-First Search

Breadth-First Search (BFS) is another traversal technique. It works by iteratively expanding a frontier of visited nodes. In pseudocode:

  for node n : nodes:
    if not visited n:
      initialize empty queue Q
      add n to Q
      mark n as visited

      while Q is non-empty:
        node top := pop an element from Q
        visit top
        for node child : children(top):
          if child not visited:
            mark child as visited
            add child to Q

Java implementation that can be adapted to various needs:

public void BFS(Graph g){
  boolean[] visited = new boolean[g.V()];

  for(int i = 0; i < g.V(); i++){
    if(!visited[i]){
      Queue<Integer> q = new LinkedList<Integer>();
      q.add(i);
      visited[i] = true;

      while(!q.isEmpty()){
        int top = q.poll();
        System.out.println("Visiting node: " + top);
        for(int child : g.adj(top)){
          if(!visited[child]){
            visited[child] = true;
            q.add(child);
          }
        }
      }
      System.out.println("Finished visiting a connected component...");
    }
  }
}
BFS with a Queue

Implementation Pitfall

You might be tempted not to mark visited[child] = true in the innermost for-loop, and to instead do all the visiting and marking right after the element is popped off the queue. Consider an undirected graph where node 1 is connected to nodes 2, 3, and 4, and node 3 is also connected to node 4, and assume we start our BFS from the node marked 1.

The incorrect BFS would proceed as follows:

visited = {}
Q = {1}
------- pop 1 off the queue
visited = {1}
Q = {2,3,4}
------- pop 2 off the queue
visited = {1,2}
Q = {3,4}
------- pop 3 off the queue
visited = {1,2,3}
Q = {4,4}

As you can see, we added 4 into the queue again, since it's connected to 3, but it's also connected to 1, which is where we started our BFS. Without the fix, we would run into issues with double-visiting certain vertices. This could instead be fixed by checking whether an element is visited right after popping it off the queue, but that spawns another problem – for huge graphs, our queue might run out of memory. Therefore, the approach in the Java code above is the one to use.

We can similarly use BFS to solve the problem of finding a path between two vertices. Moreover, for an unweighted graph, BFS gives us a shortest path in terms of the number of edges on the path.

5.5 Union Find

The Union Find algorithm is used to compute the connected components in a graph. Assume that we have a total of \(N\) nodes. We use an array \(P\) to store the representative of every node. A representative of a node \(a\) is a unique node that stands for the connected component that \(a\) belongs to. At the beginning of the algorithm, every node is its own component. As connections \((a,b)\) come in, we find the representatives of both endpoints, and if they are different (which happens exactly when \(a\) and \(b\) are in different groups), we merge the two components together. We can also compress the paths, so that after every find operation each node on the traversed path points directly to the representative of its group.

public class UnionFind {
  int[] P;
  int count; // number of components

  public UnionFind(int size) {
    P = new int[size];
    count = size;
    for(int i = 0; i < size; i++) P[i] = i;
  }

  public boolean connected(int a, int b){
    return find(a) == find(b);
  }

  public void union(int a, int b){
    int aRoot = find(a);
    int bRoot = find(b);
    if(aRoot == bRoot) return;
    P[aRoot] = bRoot;
    count--;
  }

  public int find(int a){
    int o = a;
    while(a != P[a]) a = P[a];
    // path compression: point every node on the path at the root
    while(o != a){
      int b = P[o];
      P[o] = a;
      o = b;
    }
    return a;
  }
}
Union Find

5.5.1 Application – Connected Components

Union Find can help us discover whether a graph is fully connected. Simply run union-find over the edges, and then check whether all the nodes have the same representative. Similarly, if you want the number of connected components, just count how many distinct representatives there are after running union-find. This technique is very similar to the DFS-based approach and has almost the same time complexity, but depending on the graph it might have a different space complexity – union-find does not need to build a full representation of the graph. It's also an online algorithm: at any time, a near-constant amortized-time operation tells us whether two vertices are in the same component or not.

5.6 Topological Sorting

Given a directed acyclic graph (DAG) \(G=(V,E)\), a topological sorting of \(G\) is a permutation \(p(V)\) of the vertices of \(G\) such that for every directed edge \((u,v) \in E\), \(u\) appears before \(v\) in \(p(V)\). Such an ordering always exists for a DAG.

5.6.1 Using a queue
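A sketch of the queue-based approach (Kahn's algorithm): repeatedly output a vertex with in-degree zero and remove its outgoing edges. The representation (vertex count plus edge list) and the names are mine:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

class TopoKahn {
  // Kahn's algorithm. Vertices are 0..n-1; edges[i] = {u, v} is an edge u -> v.
  static List<Integer> topoSort(int n, int[][] edges){
    List<List<Integer>> adj = new ArrayList<List<Integer>>();
    int[] indeg = new int[n];
    for(int i = 0; i < n; i++) adj.add(new ArrayList<Integer>());
    for(int[] e : edges){ adj.get(e[0]).add(e[1]); indeg[e[1]]++; }

    Queue<Integer> q = new ArrayDeque<Integer>();
    for(int u = 0; u < n; u++) if(indeg[u] == 0) q.add(u);

    List<Integer> order = new ArrayList<Integer>();
    while(!q.isEmpty()){
      int u = q.poll();
      order.add(u);
      for(int v : adj.get(u))
        if(--indeg[v] == 0) q.add(v); // v's last prerequisite is done
    }
    return order; // if order.size() < n, the graph had a cycle
  }
}
```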

5.6.2 Using DFS
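A sketch of the DFS-based approach: a vertex is appended once all of its descendants are finished, and the reversed finish order is a topological order. The representation (vertex count plus edge list) and the names are mine:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class TopoDFS {
  // DFS-based topological sort of a DAG with vertices 0..n-1.
  static List<Integer> topoSort(int n, int[][] edges){
    List<List<Integer>> adj = new ArrayList<List<Integer>>();
    for(int i = 0; i < n; i++) adj.add(new ArrayList<Integer>());
    for(int[] e : edges) adj.get(e[0]).add(e[1]);

    boolean[] visited = new boolean[n];
    List<Integer> finish = new ArrayList<Integer>();
    for(int u = 0; u < n; u++) if(!visited[u]) dfs(u, adj, visited, finish);

    Collections.reverse(finish); // reverse finishing order = topological order
    return finish;
  }

  static void dfs(int u, List<List<Integer>> adj, boolean[] visited, List<Integer> finish){
    visited[u] = true;
    for(int v : adj.get(u)) if(!visited[v]) dfs(v, adj, visited, finish);
    finish.add(u); // u finishes after everything reachable from it
  }
}
```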

5.7 All-Pairs Shortest Paths

Given a graph \(G=(V,E)\), we want to find the shortest distance between all pairs of vertices. The algorithm for this is the Floyd-Warshall algorithm, which runs in \(O(V^3)\) time. Suppose the graph is given as int[] a, int[] b and int[] dist, where for all valid \(i\) there is an edge from a[i] to b[i] of length dist[i]. There are \(n\) vertices numbered \(0 \ldots n-1\).

public int[][] floydWarshall(int n, int[] a, int[] b, int[] dist){
    // initialization
    final int mm = Integer.MAX_VALUE/2 - 1; // "infinity" that cannot overflow when doubled
    int[][] D = new int[n][n];
    for(int i = 0; i < n; i++) Arrays.fill(D[i],mm);
    for(int i = 0; i < n; i++) D[i][i] = 0;
    for(int i = 0; i < a.length; i++){
        D[a[i]][b[i]] = dist[i];
    }

    // Floyd-Warshall
    for(int k = 0; k < n; k++){
        for(int i = 0; i < n; i++){
            for(int j = 0; j < n; j++){
                D[i][j] = Math.min(D[i][j], D[i][k] + D[k][j]);
            }
        }
    }
    return D;
}

5.7.1 Floyd-Warshall Pitfalls

You can get bitten by some issues when implementing Floyd-Warshall:

  1. Overflow: if “no edge” is represented by Integer.MAX_VALUE, the sum D[i][k] + D[k][j] overflows. Use a sentinel that can safely be doubled, such as Integer.MAX_VALUE/2 - 1 above.
  2. Duplicate edges: if the input may repeat a pair (a[i], b[i]), initialize with the minimum of the given distances rather than blindly overwriting.
  3. Loop order: \(k\), the intermediate vertex, must be the outermost loop.

6 Stacks

6.1 Stack that supports \(O(1)\) maximum and minimum

Let's look at just the minimum case. We use an extra stack \(M\) whose top element is always the current minimum. When a new element comes in, we check if it's \(\leq\) the top of the minimum stack; if it is, we push it onto the minimum stack, and if it's not, we don't. When an element is popped from the main stack, we check if it's equal to the top of the minimum stack; if it is, we pop the minimum stack too. The case for the \(O(1)\) maximum is symmetric.
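A sketch of the minimum case in Java (the class and method names are mine):

```java
import java.util.ArrayDeque;
import java.util.Deque;

class MinStack {
  private final Deque<Integer> stack = new ArrayDeque<Integer>();
  private final Deque<Integer> mins = new ArrayDeque<Integer>(); // top = current min

  void push(int x){
    stack.push(x);
    if(mins.isEmpty() || x <= mins.peek()) mins.push(x); // new (or tied) minimum
  }

  int pop(){
    int x = stack.pop();
    if(x == mins.peek()) mins.pop(); // that minimum is gone now
    return x;
  }

  int min(){ return mins.peek(); } // O(1)
}
```
Note that ties are pushed onto the minimum stack too; otherwise popping one copy of a duplicated minimum would incorrectly discard it for the remaining copies.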

7 Combinatorics

Many combinatorics questions can easily be expressed recursively.

7.1 All subsets

Any computation that outputs the powerset of a set \(S\) necessarily takes at least \(2^{|S|}\) steps, since every element is either part of a given subset or not, and every subset is output. The simple Java code to output the powerset of an integer array:

public void powerset(int[] A){
  boolean[] mask = new boolean[A.length];
  powerset(A, mask, 0);
}

private void powerset(int[] A, boolean[] mask, int idx){
  if(idx == A.length){
    for(int i = 0; i < A.length; i++){
      if(mask[i]) System.out.print(A[i] + " ");
    }
    System.out.println();
  } else{
    for(boolean b : new boolean[]{true,false}){
      mask[idx] = b;
      powerset(A, mask, idx+1);
    }
  }
}

7.2 All \(k\)-element subsets

Similar idea, except we want our bitmask to have exactly \(k\) elements set to true, and all the rest set to false. We can keep track of how many more we need to set to true, and only print out a subset once we reach the required number. The following Java code accomplishes this:

public void subsetsK(int[] A, int k){
  boolean[] mask = new boolean[A.length];
  subsetsK(A, k, mask, 0);
}

private void subsetsK(int[] A, int left, boolean[] mask, int idx){
  if(left == 0){
    for(int i = 0; i < A.length; i++){
      if(mask[i]) System.out.print(A[i] + " ");
    }
    System.out.println();
  }
  else if(idx >= A.length) return;
  else{
    mask[idx] = true;
    subsetsK(A, left-1, mask, idx+1);
    mask[idx] = false;
    subsetsK(A, left, mask, idx+1);
  }
}
k-element subsets

7.3 All permutations

To generate all permutations of an array \(S\) of distinct elements, take the first element, and swap it with all other successive elements. Continue this process recursively. Note that this code only works for arrays of distinct elements.

public void permutations(String s){
  permutations(s.toCharArray(), 0);
}

private void permutations(char[] C, int idx){
  if(idx == C.length){
    System.out.println(new String(C));
  } else{
    for(int i = idx; i < C.length; i++){
      swap(C,idx,i);           // swap
      permutations(C, idx+1);  // recurse on the rest
      swap(C,idx,i);           // put back
    }
  }
}

private void swap(char[] C, int a, int b){
  char tmp = C[a];
  C[a] = C[b];
  C[b] = tmp;
}
All permutations

7.3.1 Iterative All Permutations via Next Permutation

We can construct all the permutations by starting with the least lexicographic permutation, and then calling next_permutation which should produce the next lexicographically greater permutation. The following algorithm is from Knuth Volume 4A, p320.

The lexicographical successor of a combinatorial pattern \(a_1 \ldots a_n\) is obtainable by a three-step procedure:

  1. Find the largest \(j\) such that \(a_j\) can be increased
  2. Increase \(a_j\) by the smallest feasible amount
  3. Find the lexicographically least way to extend the new \(a_1 \ldots a_j\) to a complete pattern.

To apply this, start off with the least lexicographical permutation \(a_1 \ldots a_n\).

  1. Visit the permutation \(a_1 \ldots a_n\)
  2. Set \(j = n-1\). If \(a_j \geq a_{j+1}\), decrease j repeatedly until \(a_j < a_{j+1}\). Terminate the algorithm if \(j=0\), since this is the lexicographically greatest permutation.
  3. Set \(l=n\). If \(a_j \geq a_l\), decrease \(l\) until \(a_j < a_l\). Swap \(a_l\) with \(a_j\).
  4. Reverse \(a_{j+1} \ldots a_n\)

Java implementation:

// Permutes A into the next lexicographical permutation and returns true
// If A is the last permutation, returns false
private boolean nextPermutation(int[] A){
  int j = A.length-2;
  while(j >= 0 && A[j] >= A[j+1]) j--;
  if(j < 0) return false;
  int l = A.length-1;
  while(A[j] >= A[l]) l--;
  swap(A, j, l);
  reverse(A, j+1, A.length-1);
  return true;
}

// Reverses A[l..r]. To reverse fully, call with l=0, r=A.length-1
private void reverse(int[] A, int l, int r){
  for(int i = l; i <= (l+r)/2; i++){
    swap(A, i, r+l-i);
  }
}

private void swap(int[] a, int idx1, int idx2){
  int tmp = a[idx1];
  a[idx1] = a[idx2];
  a[idx2] = tmp;
}
Next Permutation

7.4 All tuple subsets – Cartesian Product

Let \(S\) be a list of lists, for example \(S = [[1,2],[3],[4,5,6]]\). We want to generate all the tuple subsets of \(S\), that is, for the given example:

[ [1,3,4],
  [1,3,5],
  [1,3,6],
  [2,3,4],
  [2,3,5],
  [2,3,6] ]

This is very easy to generate recursively. Simply try all the possibilities for the first list, and then continue on recursively. Stop when \(S\) is exhausted. Java code that expects an array of strings:

public void cartesianProd(String[] Ss){
  cartesianProd(Ss, "", 0);
}

private void cartesianProd(String[] Ss, String buf, int idx){
  if(idx == Ss.length) System.out.println(buf);
  else{
    for(int i = 0; i < Ss[idx].length(); i++){
      cartesianProd(Ss, buf + Ss[idx].charAt(i), idx+1);
    }
  }
}
All tuple subsets

7.5 Example – All Valid Bracketings

A valid bracketing \(B\) of size \(N\) is a word of size \(2N\) from the alphabet \(\Sigma = \{\mathbf{(},\mathbf{)}\}\) such that the following BNF production rule is adhered to:

\[ \begin{aligned} B &= \epsilon \\ B &= \mathrm{( } B \mathrm{ ) } B \end{aligned} \]

The above is formal; intuitively we know what a valid bracketing is. For example ()(()) is valid but (())) is not. We can generate these recursively by keeping track of how many opening parentheses and how many closing parentheses we can still use up. If we still have an opening parenthesis available, we may use it. In addition, if we have used more opening parentheses than closing ones (i.e. fewer opening than closing ones remain), we may close one. Continue this process recursively. In Java:

public void allParentheses(int N){
  allParentheses(N, N, "");
}

private void allParentheses(int left, int right, String buf){
  if(left == 0 && right == 0) System.out.println(buf);
  else{
    if(left > 0) allParentheses(left-1,right,buf+"(");
    if(left < right) allParentheses(left,right-1,buf+")");
  }
}
All valid bracketings

8 Backtracking/DP/Recursion

8.1 Subset Sum

Given a set \(S\) of integers, does there exist a non-empty \(s \subseteq S\) such that \(\sum_{i \in s}i = 0\)?

8.1.1 Exponential via recursive backtracking

The exponential method is to simply generate all the subsets, and check whether any of them sum to \(0\). Here is a sample Java code to do it:

static boolean checkSubset(boolean[] mask, int[] a){
  int sum = 0;
  boolean empty = true;
  for(int i = 0; i < mask.length; i++){
    if(mask[i]){
      empty = false;
      sum += a[i];
    }
  }
  return !empty && (sum == 0);
}

static boolean subsetSumNaive(int[] a){
  boolean[] b = new boolean[a.length];
  return powerset(b, a, 0);
}

static boolean powerset(boolean[] s, int[] a, int n){
  if(n == a.length) return checkSubset(s, a);
  boolean[] opts = {true, false};
  boolean res = false;
  for(boolean b : opts){
    s[n] = b;
    res = res || powerset(s, a, n+1);
  }
  return res;
}
Subset Sum Naive

The checkSubset function checks whether the array mask filled with booleans corresponds to a subset of \(a\) that sums to 0. The recursion is started in subsetSumNaive, which calls the recursive function powerset.

8.1.2 Pseudo-Polynomial via Dynamic Programming

The problem can also be solved using a pseudo-polynomial time dynamic programming algorithm, as explained on this Wiki page.

Java Code:

// Is there a subset of `a` that sums to 0
// Pre: a has distinct elements.
static boolean subsetSum(int[] a){
  int negSum = 0; // sum of the negative elements
  int posSum = 0; // sum of the positive elements
  for(int i : a){
    if(i > 0) posSum += i;
    else negSum += i;
  }
  int total = posSum - negSum + 1;
  // q[i][s] == true iff some non-empty subset of a[0..i] sums to s+negSum
  boolean[][] q = new boolean[a.length][];
  for(int i = 0; i < a.length; i++){
    q[i] = new boolean[total];
    for(int s = 0; s < total; s++){
      q[i][s] = (a[i] == (s+negSum));
    }
  }
  for(int i = 1; i < a.length; i++){
    for(int s = 0; s < total; s++){
      int val = s+negSum;
      boolean cond = val-a[i] >= negSum && val-a[i] <= posSum;
      q[i][s] = q[i-1][s] || (a[i] == val) || (cond && q[i-1][val-a[i]-negSum]);
    }
  }
  return q[a.length-1][-negSum];
}
Pseudo-polynomial subset sum algorithm

8.2 N-Queens

Given an \(N \times N\) chessboard, print out all the ways you can place \(N\) queens onto the board such that no Queen attacks any other queen.

We can use a recursive backtracking solution – use an array \(q\) of \(N\) integers, where \(q[i] \in [0..N-1]\) is the column of the queen in row \(i\). A solution in Java:

// Check whether the queen in row n conflicts with
// the queens in the previous rows 0..n-1
static boolean isValid(int[] q, int n){
  for(int i = 0; i < n; i++){
    if(q[i] == q[n]) return false;          // same column
    if(q[i] - q[n] == n - i) return false;  // same diagonal
    if(q[n] - q[i] == n - i) return false;  // same anti-diagonal
  }
  return true;
}

static void printBoard(int[] q){
  for(int i = 0; i < q.length; i++){
    for(int j = 0; j < q.length; j++){
      if(q[i] == j) System.out.print("Q");
      else System.out.print("*");
    }
    System.out.println();
  }
  System.out.println();
}

static void enumerate(int n){
  int[] q = new int[n];
  enumerate(q, 0);
}

static void enumerate(int[] q, int i){
  if(i == q.length) printBoard(q);
  else{
    for(int j = 0; j < q.length; j++){
      q[i] = j;
      if(isValid(q, i)) enumerate(q, i+1);
    }
  }
}

public static void main(String[] args) {
  enumerate(8);
}
N-Queens in Java

8.3 Maximum value contiguous subsequence

Given an array \(a\) of integers, find the contiguous subsequence with the maximum sum.

Let \(Q[i]\) be the maximal sum over all contiguous subsequences ending at index \(i\). Then

\[ \begin{aligned} Q[0] &= a[0] \\ Q[i] &= \max{(Q[i-1]+a[i],a[i])} \end{aligned} \]

int maxContiguousSubsequence(int[] a){
  int[] q = new int[a.length];
  q[0] = a[0];
  int max = q[0];
  for(int i = 1; i < a.length; i++){
    q[i] = Math.max(q[i-1] + a[i], a[i]);
    max = Math.max(max,q[i]);
  }
  return max;
}
Maximum value contiguous subsequence

We can reduce the space complexity by noting that we only need the value of the previous subproblem:

int maxContiguousSubsequence(int[] a){
  int maxSoFar = a[0];
  int maxEndingHere = a[0];
  for(int i = 1; i < a.length; i++){
    maxEndingHere = Math.max(maxEndingHere + a[i], a[i]);
    maxSoFar = Math.max(maxSoFar, maxEndingHere);
  }
  return maxSoFar;
}
Maximum value contiguous subsequence

If we want to allow empty subsequences, we can modify the code as follows:

int maxContiguousSubsequence(int[] a){
  int maxSoFar = 0;
  int maxEndingHere = 0;
  for(int i = 0; i < a.length; i++){
    maxEndingHere = Math.max(maxEndingHere + a[i], 0);
    maxSoFar = Math.max(maxSoFar, maxEndingHere);
  }
  return maxSoFar;
}
Maximum value contiguous subsequence

8.4 Longest increasing contiguous subsequence

Given an array \(a\) of integers, find the longest contiguous increasing subsequence.

We denote \(Q[i]\) to be the length of the longest contiguous increasing subsequence ending at \(i\). Then \(Q[0] = 1\), and \(Q[i] = Q[i-1] + 1\) if \(a[i] > a[i-1]\), and \(1\) otherwise. The answer is \(\max_i{Q[i]}\). In Java:

static int longestIncreasingContiguousSubsequence(int[] a){
  int[] q = new int[a.length];
  q[0] = 1;
  int max = 1;
  for(int i = 1; i < a.length; i++){
    if(a[i] > a[i-1]) q[i] = q[i-1] + 1;
    else q[i] = 1;
    max = Math.max(max,q[i]);
  }
  return max;
}
Longest increasing contiguous subsequence

8.5 Minimal coin change

Given an array \(v\) of \(n\) coin denominations where \(v_1 < v_2 \ldots < v_n\) and \(v_1 = 1\), find the minimal number of coins needed to make change for \(C\) amount of money.

8.5.1 Recursive formulation

We can quickly formulate a recursive subformulation. Let \(M(j)\) be the least amount of coins needed to make change for \(j\) amount of money. Then:

\[ M(j) = \left\{ \begin{array}{lr} \infty & : j < 0 \\ 0 & : j = 0 \\ \min_{i}{M(j-v_i)} + 1 & : \textrm{otherwise} \end{array} \right. \]

We can quickly translate this to a recursive formulation:

static int naiveLeastChange(int[] v, int j){
  if(j == 0) return 0;
  int min = Integer.MAX_VALUE;
  for(int i = 0; i < v.length; i++){
    if(j - v[i] >= 0){
      min = Math.min(min, naiveLeastChange(v, j-v[i]));
    }
  }
  return 1 + min;
}
Minimal coin change Recursively - Slow

This is slow since the recursive calls recompute the same results.

8.5.2 Dynamic Programming

We use DP to cache the results as follows:

static int leastChange(int[] v, int j){
  int[] M = new int[j+1];
  M[0] = 0;
  for(int i = 1; i <= j; i++){
    int min = Integer.MAX_VALUE;
    for(int val : v){
      if(i >= val)
        min = Math.min(M[i-val], min);
    }
    min += 1;
    M[i] = min;
  }
  return M[j];
}
Minimum coin change DP

8.6 Number of ways to make change

Now, instead of the least number of coins, we want to find the total number of ways of making change using coins from \(C\) for a total value of \(V\). Assume that \(C\) includes a \(1\) so that we can always make change for any value.

We can solve this with DP: Let \(DP[i,v]\) be the number of ways to make change for value \(v\) using coins up to \(i\). Then, we have

\[ \begin{aligned} DP[i,0] &= 1 \; \forall i \\ DP[i,v] &= DP[i-1,v]+DP[i,v-C_i] & \textrm{if } v \geq C_i \\ DP[i,v] &= DP[i-1,v] & \textrm{otherwise} \end{aligned} \]

Java code:

static int waysToMakeChange(int[] C, int v){
  int N = C.length;
  int[][] DP = new int[N+1][v+1];

  for(int i = 0; i <= N; i++){
    DP[i][0] = 1;
  }

  for(int i = 1; i <= N; i++){
    for(int val = 1; val <= v; val++){
      if(C[i-1] <= val){
        DP[i][val] = DP[i-1][val] + DP[i][val-C[i-1]];
      } else{
        DP[i][val] = DP[i-1][val];
      }
    }
  }
  return DP[N][v];
}
Number of ways to make change

8.7 Longest Increasing Subsequence

Given an array \(a\) of integers, find the length of the longest strictly increasing subsequence of \(a\) (not necessarily contiguous).

Let \(L(j)\) be the length of the longest subsequence ending at index \(j\). Then

\[ L(j) = \max_{i < j \wedge a[i] < a[j]}{L(i)} + 1 \]

Java formulation:

static int longestIncreasingSubsequence(int[] a){
  // L[i] is the length of the longest increasing subsequence ending at i
  int[] L = new int[a.length];
  L[0] = 1;
  int best = 1;
  for(int i = 1; i < a.length; i++){
    int max = 0; // no smaller predecessor: subsequence of length 1
    for(int j = i - 1; j >= 0; j--){
      if(a[j] < a[i])
        max = Math.max(L[j], max);
    }
    L[i] = max + 1;
    best = Math.max(best, L[i]);
  }
  return best;
}
Longest increasing subsequence

The complexity is \(O(n^2)\).

8.8 Longest Common Subsequence

Let \(x\) and \(y\) be two sequences of lengths \(m\) and \(n\) respectively. The Longest Common Subsequence (LCS) of \(x\) and \(y\) is defined as a subsequence that occurs in both \(x\) and \(y\) and has maximal length over all other common subsequences.

Let \(c[i,j] = \operatorname{LCS}(x[1..i],y[1..j])\), that is, the length of the longest common subsequence of the prefixes \(x[1..i]\) and \(y[1..j]\). Then:

\[ c[i,j] = \left\{ \begin{array}{lr} 0 & : i = 0 \textrm{ or } j = 0 \\ c[i-1,j-1] + 1 & : x_i = y_j \\ \max{(c[i-1,j],c[i,j-1])} & : \textrm{otherwise} \\ \end{array} \right. \]

Dynamic programming solution in Java:

static int longestCommonSubsequence(int[] x, int[] y){
  int[][] c = new int[x.length+1][];
  for(int i = 0; i < x.length + 1; i++){
    c[i] = new int[y.length + 1];
    for(int j = 0; j < c[i].length; j++) c[i][j] = -1; // uninitialized
  }
  int longest = LCS(c, x, y, x.length, y.length);
  return longest;
}

static int LCS(int[][] c, int[] x, int[] y, int i, int j){
  if(c[i][j] == -1){
    if(i == 0 || j == 0) c[i][j] = 0;
    else if(x[i-1] == y[j-1]){
      c[i][j] = 1 + LCS(c, x, y, i-1, j-1);
    } else{
      c[i][j] = Math.max(LCS(c,x,y,i-1,j), LCS(c,x,y,i,j-1));
    }
  }
  return c[i][j];
}

static String printLCS(int[][] c, int[] x, int[] y, int i, int j){
  if(i == 0 || j == 0) return "";
  else if(x[i-1] == y[j-1]){
    return printLCS(c,x,y,i-1,j-1) + Integer.toString(x[i-1]);
  } else{
    if(c[i][j-1] > c[i-1][j]){
      return printLCS(c,x,y,i,j-1);
    } else{
      return printLCS(c,x,y,i-1,j);
    }
  }
}
Longest Common Subsequence DP

The above can also print out a possible LCS.

8.9 \(n\) choose \(k\) - Pascal’s Triangle

We know that \({n \choose k} = \frac{n!}{(n-k)!k!}\), although computing it that way is impractical since the factorials quickly overflow. We can instead compute \({n \choose k}\) using the recurrence: \[ {n \choose k} = {n-1 \choose k} + {n-1 \choose k-1} \] Think about it as follows: to choose \(k\) elements from \(n\), we can ignore the last element in our selection, and hence the total number of combinations without using the last element is \({n-1 \choose k}\). Or otherwise, we can pick the \(n\)’th element, and we are then left with \(k-1\) more elements to pick. Java code to compute the Pascal triangle table:

public long[][] nCk(){
  int max = 50;
  long[][] nCk = new long[max][max];
  for(int i = 0; i < max; i++){
    nCk[i][0] = 1;
    nCk[i][i] = 1;
  }
  for(int n = 2; n < max; n++){
    for(int k = 1; k < n; k++){
      nCk[n][k] = nCk[n-1][k-1] + nCk[n-1][k];
    }
  }
  return nCk;
}
Pascal's Triangle

The first couple of rows are:

1 1 
1 2 1 
1 3 3 1 
1 4 6 4 1 
1 5 10 10 5 1 
1 6 15 20 15 6 1 
1 7 21 35 35 21 7 1 
1 8 28 56 70 56 28 8 1 
1 9 36 84 126 126 84 36 9 1 
1 10 45 120 210 252 210 120 45 10 1 
1 11 55 165 330 462 462 330 165 55 11 1 

8.10 Knapsack problem

We have a knapsack of size \(S\) and a set of items, where the \(i\)’th item is a tuple \((v_i,w_i)\) – \(v_i\) is the value of the \(i\)th item, and \(w_i\) is its weight (or size). The goal is to fill your knapsack with items so as to maximize the total value of the elements in the knapsack, without going over the weight limit.

8.10.1 Unbounded knapsack

In the unbounded knapsack problem, we can select each item as many times as we want. Let \(B\) be a multiset of the items we choose. We want to find a \(B\) such that \[ \max{\sum_{i \in B} v_i} \] subject to \[ \sum_{i \in B}{w_i} \leq S \]

We can solve this with dynamic programming in time \(O(nS)\).

Let \(K[w]\) be the maximum value of the backpack using items weighing up to at most \(w\). Then, our solution is \(K[S]\), the value using at most the whole backpack. We then have: \[ K[w] = \left\{ \begin{array}{lr} 0 & : w = 0 \\ \max_{w_i \leq w}{K[w-w_i]+v_i} & : \textrm{otherwise} \\ \end{array} \right. \]

An optimisation is to divide all the weights by their greatest common divisor.

Java code:

TODO java code
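In the meantime, a minimal sketch of the \(K[w]\) recurrence above, with `V` and `W` as the value and weight arrays (the class and method names are mine):

```java
class UnboundedKnapsack {
  // K[w] = best value achievable with capacity w; items may be reused.
  static int unboundedKnapsack(int[] V, int[] W, int S){
    int[] K = new int[S+1]; // K[0] = 0
    for(int w = 1; w <= S; w++){
      for(int i = 0; i < V.length; i++){
        if(W[i] <= w)
          K[w] = Math.max(K[w], K[w - W[i]] + V[i]);
      }
    }
    return K[S];
  }
}
```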

8.10.2 0/1 Knapsack

In the 0/1 Knapsack problem, we can use each element at most once. For \(n\) elements, we could exhaustively search the \(O(2^n)\) solution space and find an answer.

Formally, let \(E \subseteq I\) be a subset of the element set. We want an \(E\) such that \[ \max{\sum_{i \in E}{v_i}} \] subject to \[ \sum_{i \in E}{w_i} \leq S \]

This again can be solved using dynamic programming in time \(O(nS)\). Let \(K[i,w]\) be the maximum value attainable using only elements up to \(i\) with total weight at most \(w\). The answer then is \(K[n,S]\). We have:

\[ \begin{aligned} K[i,0] &= 0 \; \forall i \\ K[0,w] &= 0 \; \forall w \\ K[i,w] &= K[i-1,w] & \textrm{if } w_i > w \\ K[i,w] &= \max{(K[i-1,w],\,K[i-1,w-w_i]+v_i)} & \textrm{if } w_i \leq w \end{aligned} \]

Java code: (\(V\) is the value array, \(W\) is the weight array, and \(S\) is the size of the backpack)

public static int knapsack(int[] V, int[] W, int S){
  int N = V.length;
  int[][] DP = new int[N+1][S+1];

  for(int i = 1; i <= N; i++){
    for(int w = 1; w <= S; w++){
      if(W[i-1] > w){
        DP[i][w] = DP[i-1][w];
      } else{
        DP[i][w] = Math.max(DP[i-1][w],DP[i-1][w-W[i-1]] + V[i-1]);
      }
    }
  }
  return DP[N][S];
}
0/1 Knapsack

Example problem to solve: spoj KNAPSACK

8.10.3 Fractional Knapsack – A greedy approach

Now suppose that you can pick a fractional part of an item; it’s not all-or-nothing. Then a greedy solution works – sort the items by value per unit weight, and take from the densest items first until you fill the knapsack.
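A short sketch of this greedy approach, sorting by density \(v_i/w_i\) (class and method names are mine; weights are assumed positive):

```java
import java.util.Arrays;
import java.util.Comparator;

class FractionalKnapsack {
  // Greedy: take items in decreasing order of value per unit weight,
  // taking a fraction of the last item if it doesn't fully fit.
  static double fractionalKnapsack(double[] V, double[] W, double S){
    Integer[] idx = new Integer[V.length];
    for(int i = 0; i < idx.length; i++) idx[i] = i;
    Arrays.sort(idx, Comparator.comparingDouble(i -> -V[i]/W[i]));

    double total = 0, cap = S;
    for(int i : idx){
      if(cap <= 0) break;
      double take = Math.min(W[i], cap); // whole item, or whatever still fits
      total += V[i] * (take / W[i]);
      cap -= take;
    }
    return total;
  }
}
```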

9 String Algorithms

9.1 Tries

A trie allows us to perform efficient lookup of dictionary words, and finding all the words that share a common prefix. A problem with the normal trie implementation is that it can take up a lot of memory if many of the strings are long.

Here is a simple implementation of a case insensitive trie that works for the alphabet [A..Z].

public class Trie {

  class TrieNode{
    TrieNode[] next;
    int value = -1; // a value of -1 signifies the word is not finished
    public TrieNode(){
      next = new TrieNode[26];
    }
  }

  TrieNode root;

  public Trie() {
    root = new TrieNode();
  }

  // Returns false if the key is already in the trie
  // Otherwise, inserts into the trie and returns true
  public boolean insert(String key){
    key = key.toUpperCase();
    char[] C = key.toCharArray();
    TrieNode node = root;
    int i = 0;
    while(i < C.length &&[C[i]-'A'] != null){
      node =[C[i]-'A'];
      i++;
    }
    if(i == C.length && node.value != -1) return false;

    while(i < C.length){[C[i]-'A'] = new TrieNode();
      node =[C[i]-'A'];
      i++;
    }
    node.value = 1; // mark as end of word

    return true;
  }

  // Returns null if no node matches key
  // Otherwise returns the last node reached by following key
  private TrieNode search(String key){
    key = key.toUpperCase();
    char[] C = key.toCharArray();
    TrieNode node = root;
    for(int i = 0; i < C.length; i++){
      int idx = C[i]-'A';
      if([idx] == null) return null;
      node =[idx];
    }
    return node;
  }

  public boolean contains(String key){
    TrieNode n = search(key);
    return n != null ? n.value != -1 : false;
  }

  // Returns all the elements of the trie
  public ArrayList<String> getAll(){
    ArrayList<String> res = new ArrayList<String>();
    getAll(root, "", res);
    return res;
  }

  private void getAll(TrieNode root, String buf, ArrayList<String> res){
    if(root.value != -1) res.add(buf); // end of the word
    for(int i = 0; i < 26; i++){
      char thisChar = (char)('A'+i);
      if([i] != null) getAll([i], buf+thisChar, res);
    }
  }

  // Returns all strings in the trie with prefix `prefix`
  public ArrayList<String> getAllWithPrefix(String prefix){
    ArrayList<String> res = new ArrayList<String>();
    TrieNode node = search(prefix);
    if(node != null){
      getAll(node, prefix.toUpperCase(), res);
    }
    return res;
  }
}
Trie Implementation

An example of the usage of the above trie:

public static void main(String[] args) {
  Trie trie = new Trie();
  String[] keys = {"she","sells","seashells","by","the","sea","shore","sea"};
  for(String s : keys) trie.insert(s);
  for(String s : trie.getAllWithPrefix("SE")){
    System.out.println(s);
  }
}
Trie Usage

The above code snippet prints out:

SEA
SEASHELLS
SELLS
9.2 Ternary Search Tries (TSTs)
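A ternary search trie stores one character per node with three children (less, equal, greater), using far less memory than a 26-way trie when keys are long. A minimal sketch supporting insert and contains (the class and method names are mine):

```java
class TST {
  private Node root;

  private static class Node {
    char c;
    Node left, mid, right; // less than c / next character / greater than c
    boolean isWord;
  }

  public void insert(String key){
    root = insert(root, key, 0);
  }

  private Node insert(Node node, String key, int i){
    char c = key.charAt(i);
    if(node == null){ node = new Node(); node.c = c; }
    if(c < node.c)                node.left  = insert(node.left,  key, i);
    else if(c > node.c)           node.right = insert(node.right, key, i);
    else if(i < key.length() - 1) node.mid   = insert(node.mid,   key, i+1);
    else                          node.isWord = true;
    return node;
  }

  public boolean contains(String key){
    Node node = root;
    int i = 0;
    while(node != null){
      char c = key.charAt(i);
      if(c < node.c) node = node.left;
      else if(c > node.c) node = node.right;
      else if(i < key.length() - 1){ node = node.mid; i++; }
      else return node.isWord;
    }
    return false;
  }
}
```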


9.3 Substring Search

Suppose we have a string \(S\) and we want to check whether a string \(T\) occurs as a substring within \(S\). The three main methods to do that are the naive algorithm, the Boyer Moore algorithm, and the Knuth-Morris-Pratt (KMP) algorithm.

9.3.1 Naive

The simplest bruteforce method is to first align \(T\) at the beginning of \(S\), and check whether all the characters match up. If not, move \(T\) over to the right by one character. Eventually, we’ll either find a match, or run out of possible alignments and conclude that \(T\) does not occur as a substring of \(S\). The following Java code returns null if \(T\) does not occur in \(S\), and otherwise returns the suffix of \(S\) starting at the match:

public String strStr(String haystack, String needle) {
  int N = haystack.length();
  int M = needle.length();
  int i;

  for(i = 0; i <= N-M; i++){
    int j = i;
    int k = 0;
    while(k < M && haystack.charAt(j) == needle.charAt(k)){
      j++;
      k++;
    }
    if(k == M) break; // full match at alignment i
  }
  if(i > N-M || N < M) return null;
  else return haystack.substring(i);
}
Naive Substring Search

9.3.2 Boyer Moore
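The full Boyer-Moore algorithm combines a bad-character rule and a good-suffix rule. As a sketch, the simpler Boyer-Moore-Horspool variant below keeps only the bad-character shift; the class name is mine and the code assumes extended-ASCII input:

```java
class Horspool {
  // Boyer-Moore-Horspool: compare the pattern right-to-left; on a mismatch,
  // shift so the last character of the window aligns with its rightmost
  // occurrence in the pattern (excluding the final position).
  static int horspool(String text, String pat){
    int N = text.length(), M = pat.length();
    if(M == 0) return 0;
    int[] shift = new int[256]; // assumes 8-bit characters
    java.util.Arrays.fill(shift, M);
    for(int i = 0; i < M - 1; i++)
      shift[pat.charAt(i)] = M - 1 - i;

    int i = 0;
    while(i <= N - M){
      int k = M - 1;
      while(k >= 0 && text.charAt(i + k) == pat.charAt(k)) k--;
      if(k < 0) return i;                 // full match at offset i
      i += shift[text.charAt(i + M - 1)]; // bad-character shift
    }
    return -1;
  }
}
```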


9.3.3 Knuth-Morris-Pratt (KMP)
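KMP precomputes a failure table over the pattern (the length of the longest proper prefix of each pattern prefix that is also its suffix), so the text pointer never moves backwards. A sketch, with the class name mine:

```java
class KMP {
  // Returns the index of the first occurrence of pat in text, or -1.
  static int kmp(String text, String pat){
    int N = text.length(), M = pat.length();
    if(M == 0) return 0;

    // fail[k] = length of the longest proper prefix of pat[0..k]
    // that is also a suffix of pat[0..k]
    int[] fail = new int[M];
    for(int k = 1, len = 0; k < M; ){
      if(pat.charAt(k) == pat.charAt(len)) fail[k++] = ++len;
      else if(len > 0) len = fail[len-1]; // fall back, don't advance k
      else fail[k++] = 0;
    }

    for(int i = 0, j = 0; i < N; ){
      if(text.charAt(i) == pat.charAt(j)){
        i++; j++;
        if(j == M) return i - M; // match found
      }
      else if(j > 0) j = fail[j-1]; // reuse the matched prefix
      else i++;
    }
    return -1;
  }
}
```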


9.4 Subsequence Search

Given strings \(S\) and \(T\), check whether \(T\) is a subsequence of \(S\). A string \(B\) is said to be a subsequence of a string \(A\) if we can remove characters from \(A\) and eventually make it equal to \(B\).

We can do this in linear time using a greedy approach. Find the first letter of \(B\) in \(A\), then find the second, etc… Java code:

public boolean isSubsequence(String s, String seq){
  if(seq.length() == 0) return true;
  int j = 0;
  for(int i = 0; i < s.length(); i++){
    if(s.charAt(i) == seq.charAt(j)) j++;
    if(j == seq.length()) return true;
  }
  return false;
}
String subsequence search

10 Bit Twiddling

Bit manipulation often comes in useful. This section presents a number of popular techniques using bit manipulation.

int setBit(int i, int bit){
  return (i | (1 << bit));
}
Set Bit

int getBit(int i, int bit){
  return (i >> bit) & 1;
}
Get Bit

// Pre: i > 0
boolean pow2(int i){
  return (i & (i-1)) == 0;
}
Check power of 2

// A has 2n+1 elements. Each element except one appears twice.
// Find the unique element that appears once.
int once(int[] A){
  int i = 0;
  for(int a : A) i ^= a;
  return i;
}
Single array occurrence

void powerset(int[] A){
  int N = 1 << A.length;
  for(int i = 0; i < N; i++){
    for(int b = 0; b < A.length; b++){
      if(((i >> b) & 1) == 1) System.out.print(A[b] + " ");
    }
    System.out.println();
  }
}
Powerset using a bitmask

11 Computational Geometry

11.1 Lines

11.1.1 Check if three points are colinear

To check if \(p_1 = (x_1,y_1)\), \(p_2 = (x_2,y_2)\), \(p_3 = (x_3, y_3)\) are colinear, we can check if the slopes of \(p_1p_2\) and \(p_2p_3\) are equal. We need to be careful not to divide by zero though, so it’s better to use multiplication. We check whether the following holds:

\[ (y_2-y_1) \times (x_3-x_2) = (y_3-y_2) \times (x_2 - x_1) \]

static boolean colinear(double x1, double y1, double x2, double y2, double x3, double y3) {
  double s1 = (y2 - y1) * (x3 - x2);
  double s2 = (y3 - y2) * (x2 - x1);
  return Math.abs(s1-s2) < 1e-5;
}
Check if three points are colinear