Table of Contents
- 1 Sorting Algorithms
- 2 Related Algorithms
- 3 Heaps
- 4 Numerical Algorithms
- 5 Graphs
- 6 Stacks
- 7 Combinatorics
- 8 Backtracking/DP/Recursion
- 8.1 Subset Sum
- 8.2 N-Queens
- 8.3 Maximum value contiguous subsequence
- 8.4 Longest increasing contiguous subsequence
- 8.5 Minimal coin change
- 8.6 Number of ways to make change
- 8.7 Longest Increasing Subsequence
- 8.8 Longest Common Subsequence
- 8.9 \(n\) choose \(k\) - Pascal’s Triangle
- 8.10 Knapsack problem
- 9 String Algorithms
- 10 Bit Twiddling
- 11 Computational Geometry
This page serves as my notes on sorting and related subproblems. This is a Work in Progress.
1 Sorting Algorithms
1.1 Bubble Sort
Bubble Sort works by comparing adjacent pairs of numbers in an array and swapping them so that the larger element “bubbles” through to the right. This means that after the first pass, the largest element is in its final position at the end of the array. We then repeat, this time placing the second-largest element in its place, and continue until there are no more elements to process. Because bubble sort makes the largest element “sink” to the right of the array to its final resting place, the algorithm is sometimes referred to as a sinking sort.
Java code:
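A minimal sketch:

```java
// Bubble sort: after pass p, the largest p+1 elements are in final position.
static void bubbleSort(int[] a) {
    for (int end = a.length - 1; end > 0; end--) {
        for (int i = 0; i < end; i++) {
            if (a[i] > a[i + 1]) {            // larger element bubbles right
                int t = a[i]; a[i] = a[i + 1]; a[i + 1] = t;
            }
        }
    }
}
```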
Runtime | Description |
---|---|
\(O(n^2)\) | Average case |
1.2 Insertion Sort
Insertion sort works by looking at an element \(i\), and sifting it to the left until the element to the left is smaller than or equal to \(i\). It’s efficient on small and nearly sorted arrays. For example, when the array is sorted, the run time is linear in the input size. Since it’s good for smaller arrays, it can be used as a subroutine in a more complex sort such as quicksort: first use quicksort to partition the array into subgroups such that if a subgroup \(i\) is to the left of subgroup \(j\), all elements of \(i\) are smaller than or equal to every element of \(j\). Afterwards, invoke insertion sort on this array to sort it in nearly linear time.
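A sketch of what this might look like in Java:

```java
// Sorts a in place by sifting each element left into the sorted prefix.
static void insertionSort(int[] a) {
    for (int i = 1; i < a.length; i++) {
        int v = a[i], j = i - 1;
        while (j >= 0 && a[j] > v) {  // shift larger elements to the right
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = v;
    }
}
```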
Runtime | Description |
---|---|
\(O(n^2)\) | Average case |
\(O(n)\) | Sorted / nearly sorted |
1.3 Selection Sort
Selection sort is similar to insertion sort. It works by placing the smallest value in \(x[0]\), then the smallest remaining value in \(x[1]\), and so forth.
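A sketch:

```java
// Sorts a in place by repeatedly selecting the smallest remaining element.
static void selectionSort(int[] a) {
    for (int i = 0; i < a.length - 1; i++) {
        int min = i;
        for (int j = i + 1; j < a.length; j++)
            if (a[j] < a[min]) min = j;
        int t = a[i]; a[i] = a[min]; a[min] = t;
    }
}
```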
Runtime | Description |
---|---|
\(O(n^2)\) | Average case |
1.4 Shell Sort
Shell Sort is a generalization of insertion sort – it works by using a strictly decreasing sequence of gaps. For every \(g \in \mathrm{gaps}\), we \(g\)-sort the array, meaning that every subsequence of elements \(g\) positions apart is sorted. We then take the next \(g\) and continue the process. The final step is with \(g=1\), which is a standard insertion sort. The performance is highly dependent on the input distribution and the gap sequence used.
You wouldn’t really use this sort in practice nowadays.
1.5 Quicksort
1.5.1 Idea
Quicksort is a recursive algorithm that uses the divide-and-conquer approach to split the problem into subproblems, solve them recursively, and then combine the solutions. Quicksort works by first partitioning the array around a value \(v\): shuffling the array so that all elements \(< v\) are on one side, and all elements \(\geq v\) are on the other. Once we have that, we know the final resting place of \(v\). It now remains to recursively sort the left and right partitions.
1.5.2 Partition function
The idea here is to return an index \(m\) such that everything at or below \(m\) is \(< val\), and everything above \(m\) is \(\geq val\). Use \(i\) to traverse the required bounds \([a,b]\). If \(a[i] \geq val\), we are all right and nothing needs to be done. Otherwise, we increment \(m\) (so it now points to an element that's \(\geq val\)) and swap that element with \(a[i]\) (so that everything up to \(m\) is again \(< val\)).
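A sketch, with `lo`/`hi` playing the role of the bounds \([a,b]\):

```java
// Partitions x[lo..hi]: afterwards x[lo..m] < val and x[m+1..hi] >= val.
static int partition(int[] x, int lo, int hi, int val) {
    int m = lo - 1;
    for (int i = lo; i <= hi; i++) {
        if (x[i] < val) {            // move small elements into the left block
            m++;
            int t = x[m]; x[m] = x[i]; x[i] = t;
        }
    }
    return m;
}
```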
1.5.3 Result
The final quicksort algorithm can then be expressed as follows:
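A sketch that pivots on the rightmost element and reuses the partition above:

```java
static void quicksort(int[] x, int lo, int hi) {
    if (lo >= hi) return;
    int m = partition(x, lo, hi - 1, x[hi]);        // x[lo..m] < pivot
    int t = x[m + 1]; x[m + 1] = x[hi]; x[hi] = t;  // pivot to its final place
    quicksort(x, lo, m);
    quicksort(x, m + 2, hi);
}
```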
Runtime | Description |
---|---|
\(O(n\log{n})\) | Average case |
\(O(n^2)\) | Worst case: sorted / nearly sorted input |
There are various methods to get rid of the \(O(n^2)\) scenario, for example by randomly choosing the value to partition around, instead of always choosing the left bound of the sub-array we are currently sorting.
1.6 Mergesort
Mergesort also works recursively – it splits the array into two sub-arrays, mergesorts them recursively, and then merges the two sub-arrays into one. The key is the `merge` function which, given two sorted arrays, interleaves them together to produce a third sorted array.
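A sketch that sorts `a[lo..hi)`:

```java
static void mergesort(int[] a, int lo, int hi) {
    if (hi - lo <= 1) return;
    int mid = lo + (hi - lo) / 2;
    mergesort(a, lo, mid);
    mergesort(a, mid, hi);
    int[] merged = new int[hi - lo];
    int i = lo, j = mid, k = 0;
    while (i < mid && j < hi)               // interleave the two sorted halves
        merged[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) merged[k++] = a[i++];
    while (j < hi)  merged[k++] = a[j++];
    System.arraycopy(merged, 0, a, lo, merged.length);
}
```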
Runtime | Description |
---|---|
\(O(n\log{n})\) | Average, best, and worst cases |
2 Related Algorithms
2.1 Binary Search
The following code returns the index of the first occurrence of `target` in `A`, or the insertion index if `target` is not part of the array.
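A sketch:

```java
// First occurrence of target in sorted A, or its insertion index.
static int binarySearch(int[] A, int target) {
    int lo = 0, hi = A.length;       // search the half-open range [lo, hi)
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (A[mid] < target) lo = mid + 1;
        else hi = mid;               // A[mid] >= target: answer is <= mid
    }
    return lo;
}
```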
2.2 Binary Search Rotated Array
Suppose a sorted array has been rotated by an unknown integer \(k\). We can still binary search on this array.
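A sketch, assuming distinct elements:

```java
// Returns an index of target in the rotated sorted array a, or -1.
static int searchRotated(int[] a, int target) {
    int lo = 0, hi = a.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == target) return mid;
        if (a[lo] <= a[mid]) {                 // left half is sorted
            if (a[lo] <= target && target < a[mid]) hi = mid - 1;
            else lo = mid + 1;
        } else {                               // right half is sorted
            if (a[mid] < target && target <= a[hi]) lo = mid + 1;
            else hi = mid - 1;
        }
    }
    return -1;
}
```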
2.3 Random elements
Suppose we wish to solve the following problem:
Given integers \(m\),\(n\) such that \(0 < m \leq n\), generate randomly (following a uniform distribution) \(m\) numbers from the range \([0,n-1]\).
2.3.1 Algorithm S
Algorithm S is an algorithm by Donald Knuth that solves the above problem. It works by keeping track of how many elements we still need to choose (\(s\)) and how many candidates remain (\(r\)), and selects the current item \(i\) with probability \(\frac{s}{r}\). If there is nothing left to select, then \(\frac{s}{r} = 0\) and we select no more elements. It also can't select fewer than \(m\) elements, since once \(s=r\) every remaining element is selected with probability \(1\).
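A sketch (the method name is mine):

```java
import java.util.Random;

// Selects m numbers from [0, n-1] uniformly at random (Knuth's Algorithm S).
static int[] selectSample(int m, int n, Random rnd) {
    int[] chosen = new int[m];
    int s = m;                              // elements still to select
    for (int i = 0, r = n; r > 0; i++, r--) {
        // rnd.nextInt(r) < s happens with probability s/r
        if (rnd.nextInt(r) < s) chosen[m - s--] = i;
    }
    return chosen;
}
```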
2.3.2 By shuffling
We can solve the required problem as follows: Get an array with elements \([0,n-1]\), shuffle it, and take the first \(m\) elements. The issue remains how to shuffle an array uniformly. We can use Algorithm P to perform the shuffle.
2.3.2.1 Algorithm P / Yates shuffle
In Java, the Yates shuffle looks like this:
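A sketch:

```java
import java.util.Random;

// Shuffles a uniformly at random, in place.
static void shuffle(int[] a, Random rnd) {
    for (int i = a.length - 1; i > 0; i--) {
        int j = rnd.nextInt(i + 1);   // uniform in [0, i]
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}
```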
We can now use the Yates shuffle to get back \(m\) random elements:
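For instance, reusing `shuffle` from above:

```java
import java.util.Arrays;
import java.util.Random;

// m uniform random numbers from [0, n-1], via a full shuffle.
static int[] randomSample(int m, int n, Random rnd) {
    int[] a = new int[n];
    for (int i = 0; i < n; i++) a[i] = i;
    shuffle(a, rnd);
    return Arrays.copyOf(a, m);   // the first m elements of the shuffle
}
```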
2.3.3 Random element from finite stream
We now wish to solve the following problem:
Given a finite stream of unknown length, select an element from it uniformly at random using \(O(1)\) space.
The problem would be simple in \(O(n)\) space where \(n\) is the eventual length of the stream. Simply save the elements, and when the stream finishes, get a random element from that array. The following algorithm solves this problem in \(O(1)\) space:
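A sketch over an `Iterator`:

```java
import java.util.Iterator;
import java.util.Random;

// Returns a uniformly random element of the (non-empty) stream in O(1) space.
static int pickRandom(Iterator<Integer> stream, Random rnd) {
    int e = stream.next();
    for (int p = 2; stream.hasNext(); p++) {
        int i = stream.next();
        if (rnd.nextInt(p) == 0) e = i;   // replace with probability 1/p
    }
    return e;
}
```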
2.3.3.1 Proof:
For a stream of size \(1\), we replace \(e\) with \(s\) with probability \(\frac{1}{1} = 1\), and return \(s\). Otherwise, assume inductively that for a stream \(S\) of which we processed \(p\) elements so far, each of the \(p\) elements seen so far has \(\frac{1}{p}\) probability of being \(e\). Let a new element \(i\) arrive in the stream. We replace \(e\) with \(i\) with probability \(\frac{1}{p+1}\), so \(i\) has probability \(\frac{1}{p+1}\) of being selected. \(e\) is not replaced with probability \(\frac{p}{p+1}\), so all previous elements are selected with probability \(\frac{1}{p} \times \frac{p}{p+1} = \frac{1}{p+1}\) as required.
3 Heaps
3.1 Heap construction
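A max-heap over an array can be built in \(O(n)\) time by sifting down every non-leaf node, starting from the last one. A sketch:

```java
// Builds a max-heap in place by sifting down from the last parent.
static void buildMaxHeap(int[] a) {
    for (int i = a.length / 2 - 1; i >= 0; i--) siftDown(a, i, a.length);
}

static void siftDown(int[] a, int i, int n) {
    while (true) {
        int l = 2 * i + 1, r = l + 1, largest = i;
        if (l < n && a[l] > a[largest]) largest = l;
        if (r < n && a[r] > a[largest]) largest = r;
        if (largest == i) return;
        int t = a[i]; a[i] = a[largest]; a[largest] = t;
        i = largest;                 // continue sifting from the child
    }
}
```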
3.2 Heap Problems
3.2.1 \(k\)-way merge
Give an \(O(n\log{k})\)-time algorithm to merge \(k\) sorted lists into one sorted list, where \(n\) is the total number of elements in all the input lists. (Question 6.5-8 from CLRS)
The solution is to use a min-heap of size \(k\). At the beginning, take the first element from each of the \(k\) lists and add them to the min-heap. Then, extract the minimum from the heap, and add to the heap the next element from the list from which the minimum element came. If that list is exhausted, extract another element from the heap and repeat. Repeat until all lists are exhausted.
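A sketch using Java's `PriorityQueue`:

```java
import java.util.List;
import java.util.PriorityQueue;

// Merges k sorted arrays in O(n log k).
// Heap entries are {value, list index, position within that list}.
static int[] kWayMerge(List<int[]> lists) {
    int n = 0;
    for (int[] l : lists) n += l.length;
    PriorityQueue<int[]> heap =
        new PriorityQueue<>((p, q) -> Integer.compare(p[0], q[0]));
    for (int i = 0; i < lists.size(); i++)
        if (lists.get(i).length > 0)
            heap.add(new int[]{lists.get(i)[0], i, 0});
    int[] out = new int[n];
    for (int k = 0; !heap.isEmpty(); k++) {
        int[] min = heap.poll();
        out[k] = min[0];
        int[] src = lists.get(min[1]);
        if (min[2] + 1 < src.length)       // advance in the source list
            heap.add(new int[]{src[min[2] + 1], min[1], min[2] + 1});
    }
    return out;
}
```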
3.2.2 At-most \(k\) away sort
Sort an array of integers where every element is at most \(k\) indices away from its sorted position.
The solution is to use a min-heap. The smallest element must be located within the first \(k+1\) elements of the array. Insert the first \(k+1\) elements into the min-heap and extract the minimum – this is the smallest element, so output it. Then traverse the rest of the array, inserting the next element into the min-heap and extracting the minimum. The result is a sorted sequence in time \(O(n\log{k})\). Depending on \(k\), an insertion sort might be a better choice.
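A sketch:

```java
import java.util.PriorityQueue;

// Sorts a where each element is at most k positions from its final place.
static void sortAlmostSorted(int[] a, int k) {
    PriorityQueue<Integer> heap = new PriorityQueue<>();
    int out = 0;
    for (int i = 0; i < a.length; i++) {
        heap.add(a[i]);
        if (heap.size() > k) a[out++] = heap.poll();  // minimum is now fixed
    }
    while (!heap.isEmpty()) a[out++] = heap.poll();
}
```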
3.2.3 Top \(k\) elements from unsorted array
Given an array \(A\) of \(n\) integers, return the largest \(k\) elements.
An inefficient solution would be to sort the array for a runtime of \(O(n\log{n})\).
Another solution is to use a max-heap. Heap construction takes \(O(n)\) time, and extracting \(k\) elements takes \(O(k\log{n})\), for a total runtime of \(O(n+k\log{n})\).
We can do even better though. First, create a max-heap from the array in \(O(n)\) time, and initialize an empty auxiliary max-heap. Peek at the maximum in the first max-heap and add it to the second heap. Now, repeatedly remove an element from the second max-heap, and add to that heap the two children (from the first heap) of the removed element. This works because of the structure of the max-heap – if you have the top \(k-1\) elements, then the \(k\)'th largest element must be a child of one of these \(k-1\) elements. Since every element has at most two children, the auxiliary heap is at most size \(2k\). This gives us a total running time of \(O(n + k\log{k})\).
Apparently it is also possible to do this in \(O(n + k)\) by first creating a max-heap of the array, and then finding the \(k\)'th largest element in the max-heap using Frederickson's heap selection algorithm in, surprisingly, only \(O(k)\) time. Once you have the \(k\)'th largest element, simply traverse the heap keeping only elements larger than or equal to it, in \(O(k)\) time, for a total runtime of \(O(n+k)\).
4 Numerical Algorithms
4.1 Prime factorization
The code below factors an integer into its prime factors. The result for an integer \(I\) is returned as a hashmap of \((P,e)\) pairs where \(P^e\) appears in the prime factorization of \(I\).
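A sketch:

```java
import java.util.HashMap;
import java.util.Map;

// Returns the prime factorization of n as (prime -> exponent) pairs.
static Map<Long, Integer> factorize(long n) {
    Map<Long, Integer> factors = new HashMap<>();
    for (long p = 2; p * p <= n; p++) {
        while (n % p == 0) {                   // divide out each prime fully
            factors.merge(p, 1, Integer::sum);
            n /= p;
        }
    }
    if (n > 1) factors.merge(n, 1, Integer::sum);  // leftover prime factor
    return factors;
}
```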
4.2 All primes up to N - Sieve of Eratosthenes
The following uses \(O(N)\) memory to print all the primes up to \(N\) using the sieve of Eratosthenes.
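A sketch:

```java
// Prints all primes up to n; composite[i] marks i as crossed out.
static void sieve(int n) {
    boolean[] composite = new boolean[n + 1];
    for (int i = 2; i <= n; i++) {
        if (!composite[i]) {
            System.out.println(i);
            for (long j = (long) i * i; j <= n; j += i)
                composite[(int) j] = true;   // cross out multiples of i
        }
    }
}
```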
4.3 Primality check by trial division
This is the simplest way to check if a number \(N\) is prime – just try dividing it by all numbers up to \(\sqrt{N}\).
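A sketch:

```java
// Primality check by trial division up to sqrt(n).
static boolean isPrime(long n) {
    if (n < 2) return false;
    for (long d = 2; d * d <= n; d++)
        if (n % d == 0) return false;
    return true;
}
```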
4.4 Fast exponentiation
We can find \(a^n\) in \(O(\log{n})\) multiplications exploiting the following:
\[ a^n = \left\{ \begin{array}{lr} a^{\frac{n}{2}} \times a^{\frac{n}{2}} & : n \textrm{ is even} \\ a^{\frac{n-1}{2}} \times a^{\frac{n-1}{2}} \times a & : n \textrm{ is odd} \end{array} \right. \]
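A sketch in Java:

```java
// Computes a^n in O(log n) multiplications by halving the exponent.
static long power(long a, int n) {
    if (n == 0) return 1;
    long half = power(a, n / 2);
    return (n % 2 == 0) ? half * half : half * half * a;
}
```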
4.5 Horner’s method for polynomials
Suppose we need to evaluate the polynomial: \[ y = a_nx^n + a_{n-1}x^{n-1} + \ldots + a_1x + a_0 \] This requires roughly \(2n\) multiplications. Rearranging: \[ y = a_0 + x(a_1 + x(a_2 + \ldots x(a_n))) \] We can evaluate \(y\) with roughly \(n\) multiplications:
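A sketch:

```java
// Evaluates a[n]*x^n + ... + a[1]*x + a[0] with n multiplications.
static double horner(double[] a, double x) {
    double y = 0;
    for (int i = a.length - 1; i >= 0; i--)
        y = y * x + a[i];    // peel off one nesting level per step
    return y;
}
```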
4.6 GCD
The GCD (Greatest Common Divisor) of two positive integers \(a\) and \(b\) is the greatest integer \(n\) such that \(n|a\) and \(n|b\). The solution is to use Euclid's algorithm.
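A sketch:

```java
// Euclid's algorithm: gcd(a, b) = gcd(b, a mod b).
static long gcd(long a, long b) {
    while (b != 0) {
        long t = a % b;
        a = b;
        b = t;
    }
    return a;
}
```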
4.7 Totient Function
For two positive integers \(a\),\(b\), we say \(a\) and \(b\) are coprime iff \(\mathrm{gcd}(a,b) = 1\). That is, there is no integer besides \(1\) that divides both numbers.
Given an integer \(n\), Euler’s totient function, \(\phi\), tells us the number of positive integers less than \(n\) that are coprime to \(n\). We have that:
\[ \phi(n) = n \prod_{p|n}\left(1 - \frac{1}{p}\right) \]
where \(p\)’s are the distinct prime factors of \(n\). A Java implementation:
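A sketch:

```java
// Euler's totient via the product over the distinct prime factors of n.
static long phi(long n) {
    long result = n;
    for (long p = 2; p * p <= n; p++) {
        if (n % p == 0) {
            result -= result / p;       // multiply result by (1 - 1/p)
            while (n % p == 0) n /= p;
        }
    }
    if (n > 1) result -= result / n;    // leftover prime factor
    return result;
}
```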
4.8 Computing Square Roots
Given a positive real number \(n\), find \(a\) such that \(n=a^2\)
4.8.1 Binary Search
We can compute the square root using binary search as follows:
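A sketch, to a caller-chosen precision:

```java
// Square root of n >= 0 by binary search on the answer.
static double sqrt(double n, double eps) {
    double lo = 0, hi = Math.max(1, n);   // sqrt(n) <= max(1, n)
    while (hi - lo > eps) {
        double mid = (lo + hi) / 2;
        if (mid * mid < n) lo = mid;
        else hi = mid;
    }
    return lo;
}
```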
4.8.2 Newton’s Method
Newton’s Method finds the root of the equation \(a^2 - n = 0\) using the iteration \[ x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \]
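With \(f(x) = x^2 - n\) the iteration simplifies to \(x' = (x + n/x)/2\). A sketch:

```java
// Newton's method for the square root of n > 0.
static double sqrtNewton(double n, double eps) {
    double x = Math.max(1, n);            // initial guess
    while (Math.abs(x * x - n) > eps)
        x = (x + n / x) / 2;
    return x;
}
```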
4.9 Computing n-th root of an integer
Suppose you need to compute the \(n\)-th root of an integer. One possibility is to do a binary search as above for the square root case. If you can use library functions, then the \(n\)-th root of \(a\) is simply `Math.pow(a, 1.0/n)` (note the `1.0` – integer division would yield an exponent of zero). Suppose you want to find the \(n\)-th root of an integer \(a\) and confirm that it is itself an integer. I almost guarantee you will make a mistake if you try to do it by binary search (you will get the bounds wrong, or most probably won't properly handle overflow). An alternative using the library function:
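A sketch (helper name `ipow` is mine):

```java
// Returns the exact integer n-th root of a, or -1 if there is none.
static long integerNthRoot(long a, int n) {
    long r = Math.round(Math.pow(a, 1.0 / n));
    // Math.pow may be off by one, so verify nearby candidates exactly.
    for (long c = Math.max(0, r - 1); c <= r + 1; c++)
        if (ipow(c, n) == a) return c;
    return -1;
}

// Overflow-safe c^n that saturates at Long.MAX_VALUE.
static long ipow(long c, int n) {
    long result = 1;
    for (int i = 0; i < n; i++) {
        if (result > Long.MAX_VALUE / Math.max(c, 1)) return Long.MAX_VALUE;
        result *= c;
    }
    return result;
}
```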
4.10 Matrix Algebra
4.10.1 Matrix Multiplication
The following code multiplies two integer matrices \(A\),\(B\) in time \(O(n^3)\):
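A sketch:

```java
// Multiplies an (n x m) matrix A by an (m x p) matrix B.
static int[][] multiply(int[][] A, int[][] B) {
    int n = A.length, m = B.length, p = B[0].length;
    int[][] C = new int[n][p];
    for (int i = 0; i < n; i++)
        for (int k = 0; k < m; k++)        // loop order is cache-friendly
            for (int j = 0; j < p; j++)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}
```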
4.10.2 Identity Matrix
The matrix \(I\) with ones along the diagonal and zeros elsewhere. Whenever the dimensions agree, \(AI=IA=A\).
4.10.3 Matrix Exponentiation
Given a square matrix \(A\), we can compute \(A^n\) using the same technique as for fast exponentiation of rational numbers.
\[ A^n = \left\{ \begin{array}{lr} I & : n \textrm{ is zero} \\ A & : n \textrm{ is one} \\ A^{\frac{n}{2}} \times A^{\frac{n}{2}} & : n \textrm{ is even} \\ A^{\frac{n-1}{2}} \times A^{\frac{n-1}{2}} \times A & : n \textrm{ is odd} \end{array} \right. \]
Java implementation:
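A sketch, reusing `multiply` from above:

```java
// A^n for a square matrix A, via repeated squaring.
static int[][] matrixPower(int[][] A, int n) {
    if (n == 0) return identity(A.length);
    if (n == 1) return A;
    int[][] half = matrixPower(A, n / 2);
    int[][] result = multiply(half, half);
    return (n % 2 == 0) ? result : multiply(result, A);
}

static int[][] identity(int n) {
    int[][] I = new int[n][n];
    for (int i = 0; i < n; i++) I[i][i] = 1;
    return I;
}
```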
4.10.4 Application – Fibonacci
We can use matrix exponentiation to compute recurrences. For example, take the Fibonacci sequence, with \(F(0) = 0\), \(F(1) = 1\), and \(F(n) = F(n-1)+F(n-2)\). We can represent the recursion using matrices:
\[ \left( \begin{array}{c} F_{n+1} \\ F_{n} \end{array} \right)= \left( \begin{array}{cc} 1 & 1 \\ 1 & 0 \end{array} \right) \left( \begin{array}{c} F_n \\ F_{n-1} \end{array} \right)= \left( \begin{array}{cc} 1 & 1 \\ 1 & 0 \end{array} \right)^n \left( \begin{array}{c} 1 \\ 0 \end{array} \right) \]
We can translate this to code and get a logarithmic runtime for computing the recurrence:
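A sketch, reusing `matrixPower` from above (ignoring overflow for large \(n\)):

```java
// F(n) in O(log n) time via exponentiation of [[1,1],[1,0]].
static long fibonacci(int n) {
    if (n == 0) return 0;
    int[][] M = {{1, 1}, {1, 0}};
    int[][] Mn = matrixPower(M, n);
    return Mn[0][1];   // M^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
}
```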
5 Graphs
Let \(G=(V,E), E \subseteq V \times V\) be a graph. The graph has \(|V|\) vertices and \(|E|\) edges.
5.1 Representing graphs in memory
We have three main ways of representing a graph in memory
- A boolean adjacency matrix \(M\) of size \(|V| \times |V|\), where \(M[i][j] \leftrightarrow (i,j) \in E\). We get \(O(1)\) time to determine whether \((i,j) \in E\), but at a large space cost.
- An array of \((i,j) \in E\) tuples. Has space requirement \(O(E)\) but takes \(O(E)\) time to figure out the proposition \((i,j) \in E\).
- An array \(A\) of size \(|V|\). \(A[i]\) is a list of \((i,j) \in E\). This representation makes it possible to check whether \((i,j) \in E\) in a runtime proportional to the out-degree of \(i\). This is our preferred representation and will be used for the rest of these notes.
5.2 A graph in Java
Here is a simple class representing a (directed, unweighted) graph. It does no error checks, and expects well-formed input.
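A minimal version:

```java
import java.util.ArrayList;
import java.util.List;

// A directed, unweighted graph as an array of adjacency lists.
class Graph {
    final List<List<Integer>> adj;

    Graph(int vertices) {
        adj = new ArrayList<>();
        for (int i = 0; i < vertices; i++) adj.add(new ArrayList<>());
    }

    void addEdge(int from, int to) { adj.get(from).add(to); }

    List<Integer> neighbours(int v) { return adj.get(v); }
}
```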
5.3 Depth-First Search
A backbone for many graph algorithms. The basic idea:
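A sketch, using the `Graph` class from above:

```java
// Visit everything reachable from node.
static void dfs(Graph g, int node, boolean[] visited) {
    visited[node] = true;
    for (int next : g.neighbours(node))
        if (!visited[next]) dfs(g, next, visited);
}
```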
Notice that we need a data structure to keep track of which nodes have been visited. This can be a set, or just a boolean array. Now, the above looks correct, but we are missing something: we will only visit the connected component of the graph which includes the start `node`. Hence, the corrected algorithm is as follows:
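```java
// Run DFS from every unvisited vertex, so all components are covered.
static void dfsAll(Graph g) {
    boolean[] visited = new boolean[g.adj.size()];
    for (int v = 0; v < visited.length; v++)
        if (!visited[v]) dfs(g, v, visited);
}
```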
5.3.1 Example – Number of connected components
To find the total number of connected components in a graph, we can use DFS. In Java:
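A sketch, reusing `dfs` from above:

```java
// Counts components: each unvisited start vertex begins a new component.
static int componentCount(Graph g) {
    int n = g.adj.size();
    boolean[] visited = new boolean[n];
    int components = 0;
    for (int v = 0; v < n; v++) {
        if (!visited[v]) {
            components++;
            dfs(g, v, visited);
        }
    }
    return components;
}
```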
5.3.2 Example – Check whether there is a path between two vertices
We can use DFS to check for the existence of a path between a start vertex \(s\) and goal vertex \(g\). We could do the same using Union Find, but that would only return a yes/no answer, whereas DFS can give us a path. Bear in mind that it will give us ANY path, not the shortest path – we will look at the other cases later. Here is an implementation in Java. It uses an `edgeTo` array, where `edgeTo[i]` tells us which vertex we arrived from to reach \(i\) during the DFS traversal. Java code:
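A sketch (method names are mine):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Returns a path from s to t, or null if none exists.
static List<Integer> findPath(Graph g, int s, int t) {
    int n = g.adj.size();
    int[] edgeTo = new int[n];
    boolean[] visited = new boolean[n];
    Arrays.fill(edgeTo, -1);
    dfsEdge(g, s, visited, edgeTo);
    if (!visited[t]) return null;
    List<Integer> path = new ArrayList<>();
    for (int v = t; v != s; v = edgeTo[v]) path.add(0, v);  // walk backwards
    path.add(0, s);
    return path;
}

static void dfsEdge(Graph g, int v, boolean[] visited, int[] edgeTo) {
    visited[v] = true;
    for (int next : g.neighbours(v)) {
        if (!visited[next]) {
            edgeTo[next] = v;       // remember where we came from
            dfsEdge(g, next, visited, edgeTo);
        }
    }
}
```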
5.4 Breadth-First Search
Breadth-First Search (BFS) is another traversal technique. It works by iteratively expanding a frontier of visited nodes. A Java implementation that can be adapted to various needs:
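A sketch:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// BFS from start; children are marked visited as they are enqueued.
static void bfs(Graph g, int start) {
    boolean[] visited = new boolean[g.adj.size()];
    Queue<Integer> queue = new ArrayDeque<>();
    visited[start] = true;
    queue.add(start);
    while (!queue.isEmpty()) {
        int v = queue.poll();
        // process v here
        for (int child : g.neighbours(v)) {
            if (!visited[child]) {
                visited[child] = true;   // mark on enqueue (see pitfall below)
                queue.add(child);
            }
        }
    }
}
```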
Implementation Pitfall
You might be tempted to not mark `visited[child] = true` in the innermost for-loop and instead do all the visiting and marking right after the element is taken off the queue. Consider the following undirected graph, and assume we start our BFS from the node marked 1.
The incorrect BFS would proceed as follows:
As you can see, we added `4` into the queue again since it's connected to `3`, but it's also connected to `1`, which is where we started off our BFS and we still weren't finished. Without the fix, we would run into issues with double-visiting certain vertices. This could also be fixed by popping an element off the queue and checking whether it's visited, but that spawns another problem – for huge graphs, our queue might run out of memory. Therefore, the approach in the Java code above is the one to use.
We can similarly use BFS to solve the problem of finding a path between two vertices. Moreover, for an unweighted digraph, BFS will give us the shortest path in terms of the number of edges on the path.
5.5 Union Find
The Union Find algorithm is used to compute the connected components in a graph. Assume that we have a total of \(N\) nodes. We use an array \(P\) to store the representative of each node. A representative of a node \(a\) is a unique node that represents the connected component that \(a\) belongs to. At the beginning of the algorithm, every node is its own component. As the connections come in, we find the representatives of both nodes, and if they are different (they will only be different if \(a\),\(b\) are in different groups), we merge the two components together. We can also compress the paths so that after every `find` operation, each node points directly to the representative of its group.
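A sketch:

```java
// Union-Find with path compression; P[i] is i's parent.
class UnionFind {
    final int[] P;

    UnionFind(int n) {
        P = new int[n];
        for (int i = 0; i < n; i++) P[i] = i;  // each node is its own group
    }

    int find(int a) {
        if (P[a] != a) P[a] = find(P[a]);      // compress the path
        return P[a];
    }

    void union(int a, int b) {
        int ra = find(a), rb = find(b);
        if (ra != rb) P[ra] = rb;              // merge the two components
    }
}
```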
5.5.1 Application – Connected Components
Union Find can help us discover whether a graph is fully connected. Simply run union-find, and then check whether all the nodes have the same representative. Similarly, if you want the number of connected components, just count how many distinct representatives there are after running union-find. This technique is very similar to the DFS-based approach and has almost the same time complexity, but depending on the graph, it might have a different space complexity – union-find does not need to build a full representation of the graph. It's also an online algorithm: at any time, we can ask in amortized near-constant time whether two vertices are in the same component.
5.6 Topological Sorting
Given a directed acyclic graph (DAG) \(G=(V,E)\), a topological sorting of \(G\) is a permutation of the vertices of \(G\), \(p(V)\), such that for every directed edge \(u \to v \in E\), \(u\) appears before \(v\) in \(p(V)\).
5.6.1 Using a queue
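One approach is Kahn's algorithm: repeatedly pull out a vertex with no remaining incoming edges. A sketch, using the `Graph` class from above:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

static List<Integer> topoSortQueue(Graph g) {
    int n = g.adj.size();
    int[] indegree = new int[n];
    for (int v = 0; v < n; v++)
        for (int w : g.neighbours(v)) indegree[w]++;
    Queue<Integer> queue = new ArrayDeque<>();
    for (int v = 0; v < n; v++)
        if (indegree[v] == 0) queue.add(v);
    List<Integer> order = new ArrayList<>();
    while (!queue.isEmpty()) {
        int v = queue.poll();
        order.add(v);
        for (int w : g.neighbours(v))
            if (--indegree[w] == 0) queue.add(w);
    }
    return order;   // shorter than n iff the graph has a cycle
}
```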
5.6.2 Using DFS
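Alternatively, run a DFS, appending each vertex only after all its out-neighbours are finished, then reverse the result. A sketch:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

static List<Integer> topoSortDfs(Graph g) {
    int n = g.adj.size();
    boolean[] visited = new boolean[n];
    List<Integer> order = new ArrayList<>();
    for (int v = 0; v < n; v++)
        if (!visited[v]) topoDfs(g, v, visited, order);
    Collections.reverse(order);
    return order;
}

static void topoDfs(Graph g, int v, boolean[] visited, List<Integer> order) {
    visited[v] = true;
    for (int w : g.neighbours(v))
        if (!visited[w]) topoDfs(g, w, visited, order);
    order.add(v);   // post-order: all of v's descendants are already placed
}
```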
5.7 All-Pairs Shortest Paths
Given a graph \(G=(V,E)\), we want to find the shortest distance between all pairs of vertices. The algorithm for this is the Floyd-Warshall algorithm, and it works in \(O(V^3)\) time. Suppose the graph is given as arrays `int[] a`, `int[] b` and `int[] dist`, where for all valid \(i\) the distance between `a[i]` and `b[i]` is `dist[i]`. There are \(n\) vertices numbered \(0 \ldots n-1\).
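A sketch:

```java
import java.util.Arrays;

// All-pairs shortest paths over the edge lists a, b, dist described above.
static long[][] floydWarshall(int n, int[] a, int[] b, int[] dist) {
    final long INF = Long.MAX_VALUE / 4;       // large, but safe to add
    long[][] D = new long[n][n];
    for (long[] row : D) Arrays.fill(row, INF);
    for (int i = 0; i < n; i++) D[i][i] = 0;
    for (int i = 0; i < a.length; i++)
        D[a[i]][b[i]] = Math.min(D[a[i]][b[i]], dist[i]);
    for (int k = 0; k < n; k++)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (D[i][k] + D[k][j] < D[i][j])
                    D[i][j] = D[i][k] + D[k][j];  // route through k
    return D;
}
```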
5.7.1 Floyd-Warshall Pitfalls
You can get bitten by some issues when implementing Floyd-Warshall:
- Remember to initialize `D[i][i] = 0` for all valid `i`.
- Make sure that the large value representing infinity is not too large – otherwise the computation `D[i][k] + D[k][j]` will overflow into negative numbers and the algorithm will return incorrect results.
- Make sure you are using the correct datatype – if the distances are large, you might prefer a `long`.
6 Stacks
6.1 Stack that supports \(O(1)\) maximum and minimum
Let’s look at just the minimum case. We use an extra stack \(M\) whose top element is the current minimum. When a new element comes in, we check whether it's \(\leq\) the top of the minimum stack; if it is, we push it onto the minimum stack, and if it's not, we don't. When an element is popped from the main stack, we check whether it's equal to the top of the minimum stack; if it is, we pop the minimum stack as well. The case for the \(O(1)\) maximum is symmetric.
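A sketch:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Stack with O(1) minimum, via an auxiliary stack of current minima.
class MinStack {
    private final Deque<Integer> stack = new ArrayDeque<>();
    private final Deque<Integer> mins = new ArrayDeque<>();

    void push(int x) {
        stack.push(x);
        if (mins.isEmpty() || x <= mins.peek()) mins.push(x);
    }

    int pop() {
        int x = stack.pop();
        if (x == mins.peek()) mins.pop();   // it was the current minimum
        return x;
    }

    int min() { return mins.peek(); }
}
```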
7 Combinatorics
Many combinatorics questions can easily be expressed recursively.
7.1 All subsets
Any computation that outputs the powerset of a set \(S\) will necessarily have complexity at least \(2^{|S|}\), since every element can either be part of a subset or not. Simple Java code to output the powerset of an integer array:
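A sketch:

```java
// Prints every subset of a by deciding, per element, in or out.
static void powerset(int[] a, boolean[] mask, int i) {
    if (i == a.length) {
        for (int j = 0; j < a.length; j++)
            if (mask[j]) System.out.print(a[j] + " ");
        System.out.println();
        return;
    }
    mask[i] = true;  powerset(a, mask, i + 1);   // include a[i]
    mask[i] = false; powerset(a, mask, i + 1);   // exclude a[i]
}
// start with: powerset(a, new boolean[a.length], 0);
```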
7.2 All \(k\)-element subsets
Similar idea, except we want our bitmask to have exactly \(k\) elements set to `true` and all the rest set to `false`. We can keep track of how many more we need to set to `true`, and only print out a subset once we reach the required number. The following Java code accomplishes this:
7.3 All permutations
To generate all permutations of an array \(S\) of distinct elements, take the first element, and swap it with all other successive elements. Continue this process recursively. Note that this code only works for arrays of distinct elements.
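A sketch:

```java
import java.util.Arrays;

// Prints all permutations of a (distinct elements) by swapping.
static void permutations(int[] a, int i) {
    if (i == a.length) {
        System.out.println(Arrays.toString(a));
        return;
    }
    for (int j = i; j < a.length; j++) {
        int t = a[i]; a[i] = a[j]; a[j] = t;   // place a[j] at position i
        permutations(a, i + 1);
        t = a[i]; a[i] = a[j]; a[j] = t;       // undo the swap
    }
}
```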
7.3.1 Iterative All Permutations via Next Permutation
We can construct all the permutations by starting with the lexicographically least permutation and then repeatedly calling `next_permutation`, which produces the next lexicographically greater permutation. The following algorithm is from Knuth Volume 4A, p. 320.
The lexicographical successor of a combinatorial pattern \(a_1 \ldots a_n\) is obtainable by a three-step procedure:
- Find the largest \(j\) such that \(a_j\) can be increased
- Increase \(a_j\) by the smallest feasible amount
- Find the lexicographically least way to extend the new \(a_1 \ldots a_j\) to a complete pattern.
To apply this, start off with the least lexicographical permutation \(a_1 \ldots a_n\).
- Visit the permutation \(a_1 \ldots a_n\)
- Set \(j = n-1\). If \(a_j \geq a_{j+1}\), decrease j repeatedly until \(a_j < a_{j+1}\). Terminate the algorithm if \(j=0\), since this is the lexicographically greatest permutation.
- Set \(l=n\). If \(a_j > a_l\), decrease \(l\) until \(a_l > a_j\). Swap \(a_l\) with \(a_j\).
- Reverse \(a_{j+1} \ldots a_n\)
Java implementation:
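A sketch (0-indexed, so the \(j=0\) termination test becomes \(j<0\)):

```java
// Transforms a into its lexicographic successor; false at the greatest one.
static boolean nextPermutation(int[] a) {
    int n = a.length, j = n - 2;
    while (j >= 0 && a[j] >= a[j + 1]) j--;    // step 1: find the pivot
    if (j < 0) return false;                   // already the greatest
    int l = n - 1;
    while (a[l] <= a[j]) l--;                  // step 2: rightmost a[l] > a[j]
    int t = a[j]; a[j] = a[l]; a[l] = t;
    for (int x = j + 1, y = n - 1; x < y; x++, y--) {  // step 3: reverse tail
        t = a[x]; a[x] = a[y]; a[y] = t;
    }
    return true;
}
```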
7.4 All tuple subsets – Cartesian Product
Let \(S\) be a list of lists, for example \(S = [[1,2],[3],[4,5,6]]\). We want to generate all the tuple subsets of \(S\), that is, for the given example:
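\([1,3,4], [1,3,5], [1,3,6], [2,3,4], [2,3,5], [2,3,6]\)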
This is very easy to generate recursively. Simply try all the possibilities for the first list, and then continue on recursively. Stop when \(S\) is exhausted. Java code that expects an array of strings:
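A sketch (method name is mine):

```java
// Prints the Cartesian product of the lists in s; chosen[i] is the pick
// from list i.
static void product(String[][] s, String[] chosen, int i) {
    if (i == s.length) {                  // S is exhausted: emit one tuple
        System.out.println(String.join(",", chosen));
        return;
    }
    for (String option : s[i]) {          // try every possibility for list i
        chosen[i] = option;
        product(s, chosen, i + 1);
    }
}
// start with: product(s, new String[s.length], 0);
```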
7.5 Example – All Valid Bracketings
A valid bracketing \(B\) of size \(N\) is a word of size \(2N\) from the alphabet \(\Sigma = \{\mathbf{(},\mathbf{)}\}\) such that the following BNF production rule is adhered to:
\[ \begin{aligned} B &= \epsilon \\ B &= \mathrm{( } B \mathrm{ ) } B \end{aligned} \]
The above is formal; intuitively we know what a valid bracketing is. For example `()(())` is valid but `(()))` is not. We can generate these recursively by keeping track of how many left (opening) parentheses we can still use up, and how many right (closing) parentheses we can still use up. If we still have an opening parenthesis, use it up. In addition, if the number of opening parentheses used so far is greater than the number of closing ones, we can close a parenthesis. Continue this process recursively. In Java:
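A sketch:

```java
// Prints all valid bracketings; open/close count how many of each remain.
static void brackets(StringBuilder sb, int open, int close) {
    if (open == 0 && close == 0) {
        System.out.println(sb);
        return;
    }
    if (open > 0) {                     // we may always open another one
        sb.append('(');
        brackets(sb, open - 1, close);
        sb.deleteCharAt(sb.length() - 1);
    }
    if (close > open) {                 // more opened than closed so far
        sb.append(')');
        brackets(sb, open, close - 1);
        sb.deleteCharAt(sb.length() - 1);
    }
}
// start with: brackets(new StringBuilder(), n, n);
```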
8 Backtracking/DP/Recursion
8.1 Subset Sum
Given a set \(S\) of integers, does there exist a non-empty \(s \subseteq S\) such that \(\sum_{i \in s}i = 0\)?
8.1.1 Exponential via recursive backtracking
The exponential method is to simply generate all the subsets, and check whether any of them sum to \(0\). Here is a sample Java code to do it:
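A sketch, using the names described below:

```java
// Tries every subset via a boolean mask; true if some non-empty one sums to 0.
static boolean subsetSumNaive(int[] a) {
    return powerset(a, new boolean[a.length], 0);
}

static boolean powerset(int[] a, boolean[] mask, int i) {
    if (i == a.length) return checkSubset(a, mask);
    mask[i] = true;
    if (powerset(a, mask, i + 1)) return true;
    mask[i] = false;
    return powerset(a, mask, i + 1);
}

static boolean checkSubset(int[] a, boolean[] mask) {
    int sum = 0;
    boolean nonEmpty = false;
    for (int i = 0; i < a.length; i++) {
        if (mask[i]) { sum += a[i]; nonEmpty = true; }
    }
    return nonEmpty && sum == 0;
}
```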
The `checkSubset` function checks whether the boolean array `mask` corresponds to a subset of \(a\) that sums to 0. The recursion is started in `subsetSumNaive`, which calls the recursive function `powerset`.
8.1.2 Pseudo-Polynomial via Dynamic Programming
The problem can also be solved using a pseudo-polynomial time dynamic programming algorithm, as explained on this Wiki page.
Java Code:
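A sketch that tracks the set of sums attainable by non-empty subsets, shifted by the most negative possible sum:

```java
// Pseudo-polynomial subset sum: reachable[s + offset] marks attainable sums.
static boolean subsetSumDP(int[] a) {
    int neg = 0, pos = 0;
    for (int x : a) { if (x < 0) neg += x; else pos += x; }
    int width = pos - neg + 1, offset = -neg;
    boolean[] reachable = new boolean[width];
    for (int x : a) {
        boolean[] next = reachable.clone();      // subsets without x
        next[x + offset] = true;                 // the subset {x} itself
        for (int s = 0; s < width; s++)
            if (reachable[s] && s + x >= 0 && s + x < width)
                next[s + x] = true;              // extend a subset with x
        reachable = next;
    }
    return reachable[offset];    // some non-empty subset sums to zero
}
```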
8.2 N-Queens
Given an \(N \times N\) chessboard, print out all the ways you can place \(N\) queens onto the board such that no Queen attacks any other queen.
We can use a recursive backtracking solution – Use an array \(q\) of \(N\) integers, where \(q[i] \in [0..N-1]\) represents which column the queen in row \(i\) is. A solution in Java:
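A sketch:

```java
import java.util.Arrays;

// q[i] is the column of the queen in row i; place one row at a time.
static void solve(int[] q, int row) {
    int n = q.length;
    if (row == n) {
        System.out.println(Arrays.toString(q));
        return;
    }
    for (int col = 0; col < n; col++) {
        boolean ok = true;
        for (int r = 0; r < row; r++)   // same column or same diagonal?
            if (q[r] == col || Math.abs(q[r] - col) == row - r) ok = false;
        if (ok) {
            q[row] = col;
            solve(q, row + 1);
        }
    }
}
// start with: solve(new int[n], 0);
```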
8.3 Maximum value contiguous subsequence
Given an array \(a\) of integers, find the contiguous subsequence with the maximum sum.
We can denote \(Q[i]\) to be the maximal sum over all windows ending at index \(i\). Then
\[ \begin{aligned} Q[0] &= a[0] \\ Q[i] &= \max{(Q[i-1]+a[i],a[i])} \end{aligned} \]
We can reduce the space complexity by noting that we only need the value of the previous subproblem:
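A sketch (assuming a non-empty array):

```java
// Kadane's algorithm: track the best window ending at the current index.
static int maxSubarray(int[] a) {
    int endingHere = a[0], best = a[0];
    for (int i = 1; i < a.length; i++) {
        endingHere = Math.max(endingHere + a[i], a[i]);  // extend or restart
        best = Math.max(best, endingHere);
    }
    return best;
}
```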
If we want to allow empty subsequences, start both running values at \(0\) and clamp the window sum at \(0\) instead of restarting it at \(a[i]\).
8.4 Longest increasing contiguous subsequence
Given an array \(a\) of integers, find the longest contiguous increasing subsequence.
We denote \(Q[i]\) to be the length of the longest increasing run ending at \(i\). Then \(Q[0] = 1\), and \(Q[i] = Q[i-1] + 1\) if \(a[i] > a[i-1]\), and \(1\) otherwise. The answer for the prefix ending at \(i\) is then \(\max{(\mathrm{ans}(i-1),Q[i])}\). In Java:
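A sketch, keeping only the previous \(Q\) value:

```java
// Length of the longest increasing contiguous run in a.
static int longestIncreasingRun(int[] a) {
    if (a.length == 0) return 0;
    int current = 1, best = 1;       // Q[i] and the running answer
    for (int i = 1; i < a.length; i++) {
        current = (a[i] > a[i - 1]) ? current + 1 : 1;
        best = Math.max(best, current);
    }
    return best;
}
```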
8.5 Minimal coin change
Given an array \(v\) of \(n\) coin denominations where \(v_1 < v_2 \ldots < v_n\) and \(v_1 = 1\), find the minimal number of coins needed to make change for \(C\) amount of money.
8.5.1 Recursive formulation
We can quickly formulate a recursive subformulation. Let \(M(j)\) be the least amount of coins needed to make change for \(j\) amount of money. Then:
\[ M(j) = \left\{ \begin{array}{lr} \infty & : j < 0 \\ 0 & : j = 0 \\ \min_{i}{M(j-v_i)} + 1 & : \textrm{otherwise} \end{array} \right. \]
We can translate this directly into recursive code:
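A sketch:

```java
// Direct recursive translation of M(j); exponential without caching.
static int minCoins(int[] v, int j) {
    if (j < 0) return Integer.MAX_VALUE / 2;  // "infinity"
    if (j == 0) return 0;
    int best = Integer.MAX_VALUE / 2;
    for (int coin : v)
        best = Math.min(best, minCoins(v, j - coin));
    return best + 1;
}
```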
This is slow since the recursive calls recompute the same results.
8.5.2 Dynamic Programming
We use DP to cache the results as follows:
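A sketch:

```java
// M[j] = least number of coins summing to exactly j.
static int minCoinsDP(int[] v, int C) {
    int[] M = new int[C + 1];                 // M[0] = 0
    for (int j = 1; j <= C; j++) {
        M[j] = Integer.MAX_VALUE / 2;
        for (int coin : v)
            if (coin <= j)
                M[j] = Math.min(M[j], M[j - coin] + 1);
    }
    return M[C];
}
```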
8.6 Number of ways to make change
Now, instead of the least number of coins, we want to find the total number of ways of making change using coins from \(C\) for a total value of \(V\). Assume that \(C\) includes a \(1\) so that we can always make change for any value.
We can solve this with DP: Let \(DP[i,v]\) be the number of ways to make change for value \(v\) using coins up to \(i\). Then, we have
\[ \begin{aligned} DP[i,0] &= 1 \; \forall i \\ DP[i,v] &= DP[i-1][v]+DP[i][v-C_i] \; \textrm{ if $v \geq C_i$} \\ DP[i,v] &= DP[i-1][v] \; \textrm{ otherwise} \end{aligned} \]
Java code:
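A sketch that collapses the two-dimensional table into one dimension, since row \(i\) only depends on row \(i-1\):

```java
// ways[v] = number of ways to make change for v; add one coin at a time.
static long countChange(int[] C, int V) {
    long[] ways = new long[V + 1];
    ways[0] = 1;                        // one way to make zero: no coins
    for (int coin : C)
        for (int v = coin; v <= V; v++)
            ways[v] += ways[v - coin];  // extend the ways for v - coin
    return ways[V];
}
```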
8.7 Longest Increasing Subsequence
Given an array \(a\) of integers, find the length of the longest strictly increasing subsequence of \(a\) (not necessarily contiguous).
Let \(L(j)\) be the length of the longest subsequence ending at index \(j\). Then
\[ L(j) = 1 + \max_{i < j \wedge a[i] < a[j]}{L(i)} \] where the maximum is taken to be \(0\) when no such \(i\) exists.
Java formulation:
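A sketch:

```java
// O(n^2) LIS: L[j] = longest strictly increasing subsequence ending at j.
static int lis(int[] a) {
    int n = a.length;
    int[] L = new int[n];
    int best = 0;
    for (int j = 0; j < n; j++) {
        L[j] = 1;
        for (int i = 0; i < j; i++)
            if (a[i] < a[j]) L[j] = Math.max(L[j], L[i] + 1);
        best = Math.max(best, L[j]);
    }
    return best;
}
```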
The complexity is \(O(n^2)\).
8.8 Longest Common Subsequence
Let \(x\) and \(y\) be two sequences of lengths \(m\) and \(n\) respectively. The Longest Common Subsequence (LCS) of \(x\) and \(y\) is defined as a subsequence that occurs in both \(x\) and \(y\) and has maximal length over all other common subsequences.
Let \(c[i,j] = \operatorname{LCS}(x[1..i],y[1..j])\), that is, the length of the longest common subsequence of the prefixes \(x[1..i]\) and \(y[1..j]\). Then:
\[ c[i,j] = \left\{ \begin{array}{lr} 0 & : i = 0 \textrm{ or } j = 0 \\ c[i-1,j-1] + 1 & : x_i = y_j \\ \max{(c[i-1,j],c[i,j-1])} & : \textrm{otherwise} \\ \end{array} \right. \]
Dynamic programming solution in Java:
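A sketch that also reconstructs one LCS by walking the table backwards:

```java
// Returns one longest common subsequence of x and y.
static String lcs(String x, String y) {
    int m = x.length(), n = y.length();
    int[][] c = new int[m + 1][n + 1];
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            c[i][j] = (x.charAt(i - 1) == y.charAt(j - 1))
                    ? c[i - 1][j - 1] + 1
                    : Math.max(c[i - 1][j], c[i][j - 1]);
    StringBuilder sb = new StringBuilder();
    for (int i = m, j = n; i > 0 && j > 0; ) {   // trace back one solution
        if (x.charAt(i - 1) == y.charAt(j - 1)) {
            sb.append(x.charAt(i - 1)); i--; j--;
        } else if (c[i - 1][j] >= c[i][j - 1]) i--;
        else j--;
    }
    return sb.reverse().toString();
}
```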
The above can also print out a possible LCS.
8.9 \(n\) choose \(k\) - Pascal’s Triangle
We know that \({n \choose k} = \frac{n!}{(n-k)!k!}\), although computing it via factorials is impractical since the factorials overflow quickly. We can instead compute \({n \choose k}\) using the recurrence: \[ {n \choose k} = {n-1 \choose k} + {n-1 \choose k-1} \] Think about it as follows: to choose \(k\) elements from \(n\), we can ignore the last element, and hence the total number of combinations not using the last element is \({n-1 \choose k}\). Or otherwise, we can pick the \(n\)’th element, and we are then left with \(k-1\) more elements to pick, giving \({n-1 \choose k-1}\). Java code to compute the Pascal's triangle table:
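A sketch:

```java
// Pascal's triangle: C[n][k] = C[n-1][k] + C[n-1][k-1].
static long[][] pascal(int rows) {
    long[][] C = new long[rows][];
    for (int n = 0; n < rows; n++) {
        C[n] = new long[n + 1];
        C[n][0] = C[n][n] = 1;              // edges of the triangle
        for (int k = 1; k < n; k++)
            C[n][k] = C[n - 1][k] + C[n - 1][k - 1];
    }
    return C;
}
```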
The first couple of rows are:
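```
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
```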
8.10 Knapsack problem
We have a knapsack of size \(S\) and a set of items, where the \(i\)’th item is a tuple \((v_i,w_i)\) – \(v_i\) is the value of the \(i\)'th item, and \(w_i\) is its weight (or size). The goal is to fill the knapsack with items so as to maximize the total value of its contents, without going over the weight limit.
8.10.1 Unbounded knapsack
In the unbounded knapsack problem, we can select each item as many times as we want. Let \(B\) be a multiset of the items we choose. We want to find a \(B\) such that \[ \max{\sum_{i \in B} v_i} \] subject to \[ \sum_{i \in B}{w_i} \leq S \]
We can solve this with dynamic programming in time \(O(nS)\).
Let \(K[w]\) be the maximum value of the backpack using items weighing up to at most \(w\). Then, our solution is \(K[S]\), the value using at most the whole backpack. We then have: \[ K[w] = \left\{ \begin{array}{lr} 0 & : w = 0 \\ \max_{w_i \leq w}{(K[w-w_i]+v_i)} & : \textrm{otherwise} \\ \end{array} \right. \]
An optimisation is to divide all the weights by their greatest common divisor.
Java code:
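A sketch:

```java
// K[w] = best value with weight budget w; each item reusable.
static int unboundedKnapsack(int[] v, int[] w, int S) {
    int[] K = new int[S + 1];
    for (int cap = 1; cap <= S; cap++) {
        K[cap] = K[cap - 1];               // at least as good as budget cap-1
        for (int i = 0; i < v.length; i++)
            if (w[i] <= cap)
                K[cap] = Math.max(K[cap], K[cap - w[i]] + v[i]);
    }
    return K[S];
}
```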
8.10.2 0/1 Knapsack
In the 0/1 Knapsack problem, we can use each element at most once. For \(n\) elements, we could exhaustively search the \(O(2^n)\) solution space and find an answer.
Formally, let \(E \subseteq I\) be a subset of the item set. We want an \(E\) such that \[ \max{\sum_{i \in E}{v_i}} \] subject to \[ \sum_{i \in E}{w_i} \leq S \]
This again can be solved using dynamic programming in time \(O(nS)\). Let \(K[i,w]\) be the maximum value attainable using only elements up to \(i\) with total weight at most \(w\). The answer then is \(K[n,S]\). We have:
\[ \begin{aligned} K[i,0] &= 0 \; \forall i \\ K[0,w] &= 0 \; \forall w \\ K[i,w] &= K[i-1,w] \textrm{ if } w_i > w \\ K[i,w] &= \max{(K[i-1,w],K[i-1,w-w_i]+v_i)} \textrm{ if } w_i \leq w \end{aligned} \]
Java code: (\(V\) is the value array, \(W\) is the weight array, and \(S\) is the size of the backpack)
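A sketch:

```java
// K[i][w] = best value using items 1..i with weight budget w.
static int knapsack01(int[] V, int[] W, int S) {
    int n = V.length;
    int[][] K = new int[n + 1][S + 1];
    for (int i = 1; i <= n; i++) {
        for (int w = 0; w <= S; w++) {
            K[i][w] = K[i - 1][w];            // skip item i
            if (W[i - 1] <= w)                // or take it exactly once
                K[i][w] = Math.max(K[i][w],
                        K[i - 1][w - W[i - 1]] + V[i - 1]);
        }
    }
    return K[n][S];
}
```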
Example problem to solve: spoj KNAPSACK
8.10.3 Fractional Knapsack – A greedy approach
Now suppose that you can pick a fractional part of an item – it's not all-or-nothing. Then a greedy solution works: sort the items by value per unit of weight, \(v_i/w_i\), and keep taking from the highest-ratio item until you fill the knapsack.
9 String Algorithms
9.1 Tries
A trie allows us to perform efficient lookup of dictionary words, and finding all the words that share a common prefix. A problem with the normal trie implementation is that it can take up a lot of memory if many of the strings are long.
Here is a simple implementation of a case-insensitive trie that works for the alphabet `[A..Z]`.
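A sketch (method names are mine, input assumed alphabetic):

```java
// Case-insensitive trie over [A..Z].
class Trie {
    private final Trie[] children = new Trie[26];
    private boolean isWord;

    void insert(String word) {
        Trie node = this;
        for (char c : word.toUpperCase().toCharArray()) {
            int i = c - 'A';
            if (node.children[i] == null) node.children[i] = new Trie();
            node = node.children[i];
        }
        node.isWord = true;
    }

    boolean contains(String word) {      // exact word lookup
        Trie node = find(word);
        return node != null && node.isWord;
    }

    boolean hasPrefix(String prefix) { return find(prefix) != null; }

    private Trie find(String s) {
        Trie node = this;
        for (char c : s.toUpperCase().toCharArray()) {
            node = node.children[c - 'A'];
            if (node == null) return null;
        }
        return node;
    }
}
```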
An example of the usage of the above trie:
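For instance, assuming the sketch above:

```java
Trie trie = new Trie();
trie.insert("car");
trie.insert("cart");
System.out.println(trie.contains("car"));   // inserted as a word
System.out.println(trie.contains("ca"));    // only a prefix, not a word
System.out.println(trie.hasPrefix("ca"));   // prefix of car and cart
```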
The above code snippet prints out:
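```
true
false
true
```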
9.2 Ternary Search Tries (TSTs)
TODO
9.3 Substring Search
Suppose we have a string \(S\) and we want to check whether a substring \(T\) occurs within \(S\). The three main methods to do that are the naive algorithm, the Boyer Moore algorithm, and the Knuth-Morris-Pratt (KMP) algorithm.
9.3.1 Naive
The simplest brute-force method is to first align \(T\) at the beginning of \(S\) and check whether all the characters match up. If not, move \(T\) over to the right by one character. Eventually, we'll either find a match or run out of possible alignments and conclude that \(T\) does not occur as a substring of \(S\). The following Java code returns `null` if \(T\) does not occur in \(S\), and otherwise returns the suffix of \(S\) which matched \(T\).
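A sketch:

```java
// Returns the suffix of S starting where T first matches, or null.
static String find(String S, String T) {
    for (int i = 0; i + T.length() <= S.length(); i++) {
        int j = 0;
        while (j < T.length() && S.charAt(i + j) == T.charAt(j)) j++;
        if (j == T.length()) return S.substring(i);   // full match at i
    }
    return null;
}
```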
9.3.2 Boyer Moore
TODO
9.3.3 Knuth-Morris-Pratt (KMP)
TODO
9.4 Subsequence Search
Given strings \(S\) and \(T\), check whether \(T\) is a subsequence of \(S\). A string \(B\) is said to be a subsequence of a string \(A\) if we can remove characters from \(A\) and eventually make it equal to \(B\).
We can do this in linear time using a greedy approach. Find the first letter of \(B\) in \(A\), then find the second, etc… Java code:
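A sketch:

```java
// Greedy linear-time check that T is a subsequence of S.
static boolean isSubsequence(String T, String S) {
    int j = 0;
    for (int i = 0; i < S.length() && j < T.length(); i++)
        if (S.charAt(i) == T.charAt(j)) j++;   // matched one more letter of T
    return j == T.length();
}
```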
10 Bit Twiddling
Bit manipulation often comes in useful. This section presents a number of popular techniques using bit manipulation.
11 Computational Geometry
11.1 Lines
11.1.1 Check if three points are collinear
To check if \(p_1 = (x_1,y_1)\), \(p_2 = (x_2,y_2)\), \(p_3 = (x_3, y_3)\) are collinear, we can check if the slopes of \(p_1p_2\) and \(p_2p_3\) are equal. We need to be careful not to divide by zero, so it's better to use multiplication. We will check if the following holds:
\[ (y_2-y_1) \times (x_3-x_2) = (y_3-y_2) \times (x_2 - x_1) \]
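A sketch:

```java
// True iff p1, p2, p3 lie on one line (cross-multiplied slope test).
static boolean collinear(long x1, long y1, long x2, long y2,
                         long x3, long y3) {
    return (y2 - y1) * (x3 - x2) == (y3 - y2) * (x2 - x1);
}
```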