# The Algorithm Design Manual: Chapter 4

4-1. The Grinch is given the job of partitioning 2n players into two teams of n players each. Each player has a numerical rating that measures how good he/she is at the game. He seeks to divide the players as unfairly as possible, so as to create the biggest possible talent imbalance between team A and team B. Show how the Grinch can do the job in $O(n \log n)$ time.

Solution
First sort the players by rating with any $O(n \log n)$ sorting algorithm, then put the n lowest-rated players on team A and the n highest-rated players on team B. This maximizes the talent imbalance between the teams.

4-3. Take a sequence of 2n real numbers as input. Design an $O(n \log n)$ algorithm that partitions the numbers into n pairs, with the property that the partition minimizes the maximum sum of a pair. For example, say we are given the numbers (1,3,5,9). The possible partitions are ((1,3),(5,9)), ((1,5),(3,9)), and ((1,9),(3,5)). The pair sums for these partitions are (4,14), (6,12), and (10,8). Thus the third partition has 10 as its maximum sum, which is the minimum over the three partitions.

Solution We can minimize the maximum sum by pairing the lowest element with the highest, the second lowest with the second highest, and so on. First we sort the numbers, which takes $O(n \log n)$, and then create the pairs, which takes $O(n)$.

Let S be the sequence with length 2n
sort S
for i := 1 to n
    pair[i] = (S[i], S[2n - i + 1])

(With 1-indexing, S[2n - i + 1] pairs S[1] with S[2n], S[2] with S[2n - 1], and so on.)

4-6. Given two sets $S_1$ and $S_2$ (each of size n), and a number x, describe an $O(n \log n)$ algorithm for finding whether there exists a pair of elements, one from $S_1$ and one from $S_2$, that add up to x. (For partial credit, give a $\Theta (n^2)$ algorithm for this problem.)

Solution We sort $S_2$ and then iterate through $S_1$. For each element $s_1$ a matching partner must satisfy $s_1 + s_2 = x \Leftrightarrow s_2 = x - s_1$, so we binary-search for $x - s_1$ in the sorted $S_2$, which takes $O(\log n)$ per element. In total we spend $O(n \log n)$ on the sort plus $O(n \log n)$ on the n searches, i.e. $O(n \log n)$ time.
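A short Python sketch of this approach (the function name `has_pair_sum` is my own):

```python
from bisect import bisect_left

def has_pair_sum(s1, s2, x):
    """Return True if some a in s1 and b in s2 satisfy a + b == x."""
    s2_sorted = sorted(s2)                  # O(n log n)
    for a in s1:                            # n iterations
        target = x - a
        i = bisect_left(s2_sorted, target)  # O(log n) binary search
        if i < len(s2_sorted) and s2_sorted[i] == target:
            return True
    return False
```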

4-7. Outline a reasonable method of solving each of the following problems. Give the order of the worst-case complexity of your methods.
(a) You are given a pile of thousands of telephone bills and thousands of checks sent in to pay the bills. Find out who did not pay.
(b) You are given a list containing the title, author, call number and publisher of all the books in a school library and another list of 30 publishers. Find out how many of the books in the library were published by each company.
(c) You are given all the book checkout cards used in the campus library during the past year, each of which contains the name of the person who took out the book. Determine how many distinct people checked out at least one book.

Solution (a) Sort both the bills and the checks by customer name, which takes expected $O(n \log n)$ time with quicksort, then scan the two sorted lists in parallel as in the merge step of mergesort; every bill without a matching check belongs to someone who did not pay. The scan takes $O(n)$ time.

(b) We can use a dictionary keyed by publisher to count the books. This takes $O(k \cdot n)$ time with $k = 30$ publishers, or $O(n)$ expected time with a hash table.

(c) We can sort the list by name and then iterate through it and count each different name. This takes about $O(n \log n) + O(n) = O(n \log n)$ time.

4-9. Give an efficient algorithm to compute the union of sets A and B, where $n = max(|A|,|B|).$ The output should be an array of distinct elements that form the union of the sets, such that no element appears more than once in the union.
(a) Assume that A and B are unsorted. Give an $O(n \log n)$ algorithm for the problem.
(b) Assume that A and B are sorted. Give an O(n) algorithm for the problem.

Solution
(a) We can create one array with every element of A and B. Afterwards we sort this array and iterate through it, emitting each item only if it differs from its predecessor. The scan takes $O(n)$, so in total we need $O(n \log n) + O(n) = O(n \log n).$

(b)

Let A have n elements and B have m elements, both sorted.

i := 1 # counter for A, A[1] is first element
j := 1 # counter for B
while i <= n or j <= m
    if j > m                  # B is exhausted
        add A[i] to union set
        i := i + 1

    else if i > n             # A is exhausted
        add B[j] to union set
        j := j + 1

    else if A[i] == B[j]      # same element, take one copy and advance both
        add A[i] to union set
        i := i + 1
        j := j + 1

    else if A[i] > B[j]       # B[j] is smaller, output it first
        add B[j] to union set
        j := j + 1

    else                      # A[i] < B[j]
        add A[i] to union set
        i := i + 1
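The same merge in runnable Python (names are mine); it also suppresses duplicates within a single input list:

```python
def sorted_union(a, b):
    """Merge two sorted lists into a sorted list of distinct elements, O(n)."""
    result, i, j = [], 0, 0
    while i < len(a) or j < len(b):
        if j >= len(b) or (i < len(a) and a[i] < b[j]):
            item = a[i]; i += 1            # a's head is next (or b is exhausted)
        elif i >= len(a) or b[j] < a[i]:
            item = b[j]; j += 1            # b's head is next (or a is exhausted)
        else:                              # a[i] == b[j]: take one copy
            item = a[i]; i += 1; j += 1
        if not result or result[-1] != item:   # skip duplicates
            result.append(item)
    return result
```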

4-12. Devise an algorithm for finding the k smallest elements of an unsorted set of n integers in $O(n + k \log n).$

Solution We can build a min-heap in $O(n)$ time using the bottom-up heapify procedure. Then we extract the minimum k times, at $O(\log n)$ per extraction, for $O(k \log n)$ total. Altogether this is $O(n + k \log n).$
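In Python this maps directly onto `heapq` (`heapify` is the O(n) bottom-up construction; `k_smallest` is my name):

```python
import heapq

def k_smallest(items, k):
    """Heapify in O(n), then pop the minimum k times in O(k log n)."""
    heap = list(items)
    heapq.heapify(heap)              # bottom-up heap construction, O(n)
    return [heapq.heappop(heap) for _ in range(k)]
```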

4-13. You wish to store a set of n numbers in either a max-heap or a sorted array. For each application below, state which data structure is better, or if it does not matter. Explain your answers.
(a) Want to find the maximum element quickly.
(b) Want to be able to delete an element quickly.
(c) Want to be able to form the structure quickly.
(d) Want to find the minimum element quickly.

Solution (a) They are both equally fast: in the max-heap the maximum is the first element, in the sorted array it is the last one, so either way it takes $O(1)$.
(b) A max-heap is here better because it takes only $O(\log n)$ instead of $O(n)$ for a sorted array.

(c) A heap can be formed in $O(n)$ time with bottom-up heapify, while sorting the array takes $O(n \log n)$, so the heap is better here.

(d) The minimum element in a sorted array is the first. In a max-heap every leaf could be the minimum element, therefore it takes $O(n).$

4-14. Give an $O(n \log k)$-time algorithm that merges k sorted lists with a total of n elements into one sorted list. (Hint: use a heap to speed up the elementary $O(kn)$– time algorithm).

Solution We keep a min-heap containing the current head of each of the k lists. Each round we extract the smallest head, append it to the output, and insert the next element of that list into the heap. The heap never holds more than k items, so each extraction and insertion takes $O(\log k)$, and with n rounds the algorithm takes $O(n \log k)$ time.
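A sketch with Python's `heapq`, keeping (value, list index, position) tuples so ties compare cleanly (function name is mine):

```python
import heapq

def merge_k_sorted(lists):
    """Merge k sorted lists in O(n log k) using a heap of current heads."""
    heap = [(lst[0], idx, 0) for idx, lst in enumerate(lists) if lst]
    heapq.heapify(heap)                        # at most k entries
    result = []
    while heap:
        value, idx, pos = heapq.heappop(heap)  # O(log k)
        result.append(value)
        if pos + 1 < len(lists[idx]):          # push that list's next element
            heapq.heappush(heap, (lists[idx][pos + 1], idx, pos + 1))
    return result
```

The standard library also ships this as `heapq.merge`.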

4-15. (a) Give an efficient algorithm to find the second-largest key among n keys. You can do better than $2n - 3$ comparisons.
(b) Then, give an efficient algorithm to find the third-largest key among n keys. How many key comparisons does your algorithm do in the worst case? Must your algorithm determine which key is largest and second-largest in the process?

Solution (a) There’s a faster method for constructing heaps which runs in $O(n)$. Build a max-heap, then call extract-max twice at $O(\log n)$ each; the second value extracted is the second-largest key.

(b) We can use the same method here as well: extract-max three times. The largest and second-largest keys are extracted along the way, so the algorithm does determine them in the process.

4-16. Use the partitioning idea of quicksort to give an algorithm that finds the median element of an array of n integers in expected $O(n)$ time.

Solution The partition step of quicksort does the work: it splits the array around a pivot into the elements smaller and larger than the pivot, which tells us the pivot's rank. If the rank is n/2 the pivot is the median and we are done; otherwise we recurse only into the side that must contain the median. Since the side we recurse into shrinks geometrically in expectation, this takes expected $O(n)$ time.
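A quickselect sketch in Python with a randomly chosen pivot (`quickselect` and `median` are my names):

```python
import random

def quickselect(lst, k):
    """Return the kth smallest element (k = 0 .. len-1), expected O(n)."""
    pivot = random.choice(lst)
    smaller = [x for x in lst if x < pivot]
    equal = [x for x in lst if x == pivot]
    larger = [x for x in lst if x > pivot]
    if k < len(smaller):                       # answer lies left of the pivot
        return quickselect(smaller, k)
    if k < len(smaller) + len(equal):          # the pivot itself
        return pivot
    return quickselect(larger, k - len(smaller) - len(equal))

def median(lst):
    return quickselect(lst, len(lst) // 2)
```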

4-19. An inversion of a permutation is a pair of elements that are out of order.
(a) Show that a permutation of n items has at most $n(n-1) / 2$ inversions. Which permutation(s) have exactly $n(n-1) / 2$ inversions?
(b) Let P be a permutation and Pr be the reversal of this permutation. Show that $P$ and $P^r$ have a total of exactly $n(n-1) / 2$ inversions.
(c) Use the previous result to argue that the expected number of inversions in a random permutation is $n(n-1) / 4.$

Solution (a) Let’s take for example the items 1..5. We get the maximum number of inversions by reversing this list, i.e. [5, 4, 3, 2, 1] or more generally [n, n-1, …, 1], since then every pair is out of order. Now we can count, for each element, the new inversions it contributes:

Sublist              Inversions                  #Inversions
1                                                     0
2, 1                 (2,1)                            1
3, 2, 1              (3, 2), (3, 1)                   2
4, 3, 2, 1           (4, 3), (4, 2), (4, 1)           3
5, 4, 3, 2, 1        (5, 4), (5, 3), (5, 2), (5,1)    4

Therefore we get at most $\sum_{i=1}^{n} (i-1) = \frac{n(n-1)}{2}$ inversions, and exactly the reversed permutation achieves this bound.

(b) Consider any pair of distinct elements. Reversing the permutation flips the relative order of every pair, so a pair is an inversion in P exactly when it is not an inversion in $P^r$. Hence each of the $\binom{n}{2} = \frac{n(n-1)}{2}$ pairs counts as an inversion in exactly one of P and $P^r$, and together they have exactly $\frac{n(n-1)}{2}$ inversions.

(c) Pair each permutation P with its reversal $P^r$. By (b) each such pair has $\frac{n(n-1)}{2}$ inversions in total, so the two permutations average $\frac{n(n-1)}{4}$ inversions. Since this pairing covers all permutations, the expected number of inversions of a random permutation is $\frac{n(n-1)}{4}.$

4-24. Let A[1..n] be an array such that the first $n - \sqrt{n}$ elements are already sorted (though we know nothing about the remaining elements). Give an algorithm that sorts A in substantially better than $n \log n$ steps.

Solution Sorting plus merging works quite nicely. We sort the remaining $\sqrt{n}$ elements, which takes $O(\sqrt{n} \log \sqrt{n}) = O(\sqrt{n} \log n)$ time, and then merge the old and new sorted runs, which takes $O(n)$ time. The total is $O(n)$, substantially better than $n \log n$.
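A sketch in Python, assuming exactly ⌊√n⌋ unsorted trailing elements (function name is mine):

```python
import math

def sort_mostly_sorted(a):
    """First n - sqrt(n) elements are sorted; sort the tail and merge, O(n)."""
    n = len(a)
    cut = n - int(math.isqrt(n))
    head, tail = a[:cut], sorted(a[cut:])   # O(sqrt(n) log n) for the tail
    # standard linear merge of two sorted lists
    result, i, j = [], 0, 0
    while i < len(head) and j < len(tail):
        if head[i] <= tail[j]:
            result.append(head[i]); i += 1
        else:
            result.append(tail[j]); j += 1
    return result + head[i:] + tail[j:]
```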

4-29.
Mr. B. C. Dull claims to have developed a new data structure for priority queues that supports the operations Insert, Maximum, and Extract-Max—all in $O(1)$ worst-case time. Prove that he is mistaken. (Hint: the argument does not involve a lot of gory details—just think about what this would imply about the $\Omega(n \log n)$ lower bound for sorting.)

Solution If Insert, Maximum, and Extract-Max were all possible in $O(1)$, we could use the following algorithm to sort any data.

A[1...n]
for i := 1 to n
    Insert(A[i])

for i := 1 to n
    A[i] = Maximum()
    Extract-Max()

This would sort the data in $O(2n) = O(n)$ time, contradicting the $\Omega(n \log n)$ lower bound for comparison-based sorting. Therefore at least one of the three operations must cost more than $O(1)$.

4-30. A company database consists of 10,000 sorted names, 40% of whom are known as good customers and who together account for 60% of the accesses to the database. There are two data structure options to consider for representing the database:
• Put all the names in a single array and use binary search.
• Put the good customers in one array and the rest of them in a second array. Only if we do not find the query name on a binary search of the first array do we do a binary search of the second array.
Demonstrate which option gives better expected performance. Does this change if linear search on an unsorted array is used instead of binary search for both options?

Solution Single array, binary search: $\log_2 10000 \approx 13.3$ comparisons.
Good and bad, binary search: $0.6 \cdot \log_2 4000 + 0.4 \cdot (\log_2 4000 + \log_2 6000) \approx 0.6 \cdot 12.0 + 0.4 \cdot 24.5 \approx 17.0$ comparisons.
The first variant is faster here.

Single array, unsorted (full scan in the worst case): $10000$
Good and bad, unsorted: $0.6 \cdot 4000 + 0.4 \cdot (4000 + 6000) = 6400$
So with linear search the answer changes: the second variant is far superior.

4-32. Consider the numerical 20 Questions game. In this game, Player 1 thinks of a number in the range 1 to n. Player 2 has to figure out this number by asking the fewest number of true/false questions. Assume that nobody cheats.
(a) What is an optimal strategy if n is known?
(b) What is a good strategy if n is not known?

Solution (a) Binary search: repeatedly ask whether the number is greater than the midpoint of the remaining range, which takes $\lceil \log_2 n \rceil$ questions.
(b) We first ask whether the number is at most 2. If not, we keep doubling the bound (at most 4, at most 8, and so on) until the answer is yes, which brackets the number after $O(\log n)$ questions. Then we binary-search inside that final interval.
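A sketch of the doubling strategy, modelling Player 1 as an oracle `is_leq(m)` that answers "is the number ≤ m?" (this framing is my own):

```python
def guess_number(is_leq):
    """Find the secret number given only an oracle is_leq(m) -> (secret <= m)."""
    hi = 1
    while not is_leq(hi):        # double until the number is bracketed
        hi *= 2
    lo = hi // 2 + 1
    while lo < hi:               # binary search inside (hi//2, hi]
        mid = (lo + hi) // 2
        if is_leq(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```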

4-35. Let M be an $n \times m$ integer matrix in which the entries of each row are sorted in increasing order (from left to right) and the entries in each column are in increasing order (from top to bottom). Give an efficient algorithm to find the position of an integer x in M, or to determine that x is not there. How many comparisons of x with matrix entries does your algorithm use in the worst case?

Solution

for j := 1 to m
    if M[1][j] <= x <= M[n][j]    # x can only lie in a column whose range covers it
        do binary search on column j
        if found return position

return Not found

This algorithm uses $O(m \log n)$ comparisons in the worst case.
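The column-wise binary search in runnable Python (0-indexed; `find_in_matrix` is my name):

```python
def find_in_matrix(m, x):
    """Rows and columns sorted ascending; binary-search each plausible column."""
    n = len(m)
    for j in range(len(m[0])):
        if m[0][j] <= x <= m[n - 1][j]:    # x fits this column's value range
            lo, hi = 0, n - 1              # binary search down column j
            while lo <= hi:
                mid = (lo + hi) // 2
                if m[mid][j] == x:
                    return (mid, j)
                if m[mid][j] < x:
                    lo = mid + 1
                else:
                    hi = mid - 1
    return None
```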

4-39. Design and implement a parallel sorting algorithm that distributes data across several processors. An appropriate variation of mergesort is a likely candidate. Mea- sure the speedup of this algorithm as the number of processors increases. Later, compare the execution time to that of a purely sequential mergesort implementation. What are your experiences?

Solution Non-threaded version:

from random import randint

def mergesort(lst):
    n = len(lst)
    if n < 2:
        return lst
    middle = n // 2    # integer division (Python 3)

    left = mergesort(lst[:middle])
    right = mergesort(lst[middle:])

    return merge(left, right)

def merge(left, right):
    result = []
    i = 0
    j = 0

    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1

    result += left[i:]    # append the rest
    result += right[j:]
    return result

unsorted = [randint(0, 150000) for i in range(10 ** 6)]
mergesort(unsorted)


Threaded version:

from random import randint
from multiprocessing import Process, Queue

def mergesort(lst):
    n = len(lst)
    if n < 2:
        return lst
    middle = n // 2    # integer division (Python 3)

    left = mergesort(lst[:middle])
    right = mergesort(lst[middle:])

    return merge(left, right)

def merge(left, right):
    result = []
    i = 0
    j = 0

    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1

    result += left[i:]    # append the rest
    result += right[j:]
    return result

def callMergesort(lst, q):
    q.put(mergesort(lst))

def main():
    unsorted = [randint(0, 150000) for i in range(10 ** 6)]
    middle = 5 * 10 ** 5

    q = Queue()

    p1 = Process(target=callMergesort, args=(unsorted[:middle], q))
    p2 = Process(target=callMergesort, args=(unsorted[middle:], q))

    p1.start()
    p2.start()

    # merge() is symmetric, so it doesn't matter which half arrives first
    merge(q.get(), q.get())

    p1.join()
    p2.join()

if __name__ == '__main__':
    main()

For small to medium inputs the threaded version is a bit slower than the non-threaded one, since process startup and data serialization cost time. For large inputs, however, the threaded version is faster.

4-45. Given a search string of three words, find the smallest snippet of the document that contains all three of the search words—i.e., the snippet with the smallest number of words in it. You are given the index positions where these words occur in the document, such as word1: (1, 4, 5), word2: (4, 9, 10), and word3: (5, 6, 15). Each of the lists is in sorted order, as above.

Solution We merge the three position lists into one list of (position, word) pairs sorted by position (a three-way merge works in linear time since the inputs are sorted). Then we sweep a window over this list: grow the right end until the window contains all three words, shrink from the left while it still does, and record the smallest window seen. Each position enters and leaves the window at most once, so the sweep is linear.
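A runnable sketch of this sweep (the sliding-window formulation and names are mine; positions are word indices in the document):

```python
def smallest_snippet(word_positions):
    """word_positions: one sorted position list per search word.
    Returns (start, end) of the shortest span containing all words."""
    events = sorted((pos, w) for w, lst in enumerate(word_positions) for pos in lst)
    need = len(word_positions)
    count = {}                   # word index -> occurrences inside the window
    best = None
    left = 0
    for pos, w in events:
        count[w] = count.get(w, 0) + 1
        while len(count) == need:    # window covers every word: record, then shrink
            start = events[left][0]
            if best is None or pos - start < best[1] - best[0]:
                best = (start, pos)
            lw = events[left][1]
            count[lw] -= 1
            if count[lw] == 0:
                del count[lw]
            left += 1
    return best
```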

# The Algorithm Design Manual: Chapter 3

3-1. A common problem for compilers and text editors is determining whether the parentheses in a string are balanced and properly nested. For example, the string ((())())() contains properly nested pairs of parentheses, while the strings )()( and ()) do not. Give an algorithm that returns true if a string contains properly nested and balanced parentheses, and false if otherwise. For full credit, identify the position of the first offending parenthesis if the string is not properly nested and balanced.

Solution:

def isBalanced(lst):
    stack = []
    for position, item in enumerate(lst):
        if item == "(":
            stack.append((item, position))
        else:
            if stack:    # there is a matching "("
                stack.pop()
            else:        # no more open parentheses
                return (False, position)

    if not stack:
        return True
    else:
        return (False, stack[0][1])    # first unmatched "("

print(")()( :", isBalanced(list(")()(")))
print("()) :", isBalanced(list("())")))
print("((()()))() :", isBalanced(list("((()()))()")))
print("(())(())()()()()(() :", isBalanced(list("(())(())()()()()(()")))

3-3. We have seen how dynamic arrays enable arrays to grow while still achieving constant-time amortized performance. This problem concerns extending dynamic arrays to let them both grow and shrink on demand.
(a) Consider an underflow strategy that cuts the array size in half whenever the array falls below half full. Give an example sequence of insertions and deletions where this strategy gives a bad amortized cost.
(b) Then, give a better underflow strategy than that suggested above, one that achieves constant amortized cost per deletion.

Solution: 3-3. (a) Assume the array is currently exactly half full. Deleting one element drops it below half full, so it is cut in half and becomes (almost) completely full. Inserting an element now forces it to expand again. Alternating a deletion and an insertion at this boundary costs $\Theta(n)$ per operation, which is a bad amortized cost.

(b) Instead of shrinking as soon as the array falls below half full, shrink it to half size only when it falls below one-quarter full. This gap between the grow threshold (full) and the shrink threshold (quarter full) guarantees linearly many cheap operations between any two resizes, which yields constant amortized cost per deletion.

3-4. Design a dictionary data structure in which search, insertion, and deletion can all be processed in O(1) time in the worst case. You may assume the set elements are integers drawn from a finite set 1, 2, .., n, and initialization can take O(n) time.

Solution: Luckily these are sets, so there are no duplicates. Therefore we can use an array of size n, which takes $O(n)$ to initialize. Search (A[i]) can be done in $O(1)$, as can insertion (A[i] = v). To delete we store a sentinel value such as None, which also works in $O(1).$
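A minimal sketch of this direct-address structure (class name is mine; a boolean flag plays the role of the sentinel):

```python
class DirectAddressSet:
    """O(1) search/insert/delete for integers from 1..n; O(n) initialization."""
    def __init__(self, n):
        self.present = [False] * (n + 1)   # index i records whether i is in the set

    def insert(self, x):
        self.present[x] = True

    def delete(self, x):
        self.present[x] = False            # sentinel: "not present"

    def search(self, x):
        return self.present[x]
```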

3-6. Describe how to modify any balanced tree data structure such that search, insert, delete, minimum, and maximum still take O(\log n) time each, but successor and predecessor now take O(1) time each. Which operations have to be modified to support this?

Solution: My idea is to take a balanced search tree and additionally thread a doubly linked list through the nodes in sorted order.

Each node gets a pointer to its entry in the doubly linked list. Let's check the operations still run in $O(\log n)$. Search, minimum, and maximum haven't really changed, so they still work fine. What about insert and delete? Assume 3 isn't inserted yet. We traverse the tree down to node 2 and see that 3 must become its right child. Now we have to update the doubly linked list: node 2's list entry has successor 4, so we splice the new entry for 3 in between 2 and 4, which is $O(1)$ extra work on top of the $O(\log n)$ tree insert. Deletion works the same way: unlink the node's list entry in $O(1)$. Successor and predecessor are then available directly through the doubly linked list in $O(1)$, so insert and delete are the operations that have to be modified.

3-8. Design a data structure to support the following operations:
• insert(x,T) – Insert item x into the set T .
• delete(k,T) – Delete the kth smallest element from T.
• member(x,T) – Return true iff x ∈ T .
All operations must take $O(\log n)$ time on an n-element set.

Solution: We use a balanced binary search tree and augment each node with two counters recording the number of nodes in its left and right subtrees.

For example, take a node holding 4 with left counter 2 and right counter 3: there are 2 nodes in its left subtree, i.e. smaller than 4, and 3 nodes in its right subtree, i.e. bigger than 4. What happens if we insert an item?
We do the standard insert traversal of a binary search tree, with the difference that in every node we pass through we add one to the counter of the side we descend into, because that subtree gains one node.

When we insert 1, for instance, besides adding the node we increase the left counter of 2 and the left counter of 4 by one, because we traveled through those nodes. So there's no problem inserting new nodes in $O(\log n)$.
The next method is member which is basically search in a binary search tree which also works with a complexity of $O(\log n).$
The last one is delete(k, T), which removes the kth smallest item. Finally we can use our counters: at each node, the left counter tells us whether the kth smallest item lies in the left subtree, is the node itself, or lies in the right subtree.
Example 1: We want the 3rd smallest item. We start at 4 and see that there are 3 items on the left, so the 3rd smallest is in its left subtree. Next we are at 2 and see 1 item on the left and 1 on the right; therefore we have to go right (1 left item + the node itself + 1 right item = 3). Now we have arrived at 3, which has no children, so 3 is our 3rd smallest item.

Example 2: We want the 4th smallest item. We start at 4 and see that there are exactly 3 items to its left. Therefore the 4th smallest item is 4 itself.

This is essentially a binary search guided by the counters, so locating the kth smallest item works in $O(\log n)$; the subsequent deletion is a standard tree delete that decrements the counters along the search path, also $O(\log n)$.

3-12. Suppose you are given an input set S of n numbers, and a black box that if given any sequence of real numbers and an integer k instantly and correctly answers whether there is a subset of input sequence whose sum is exactly k. Show how to use the black box O(n) times to find a subset of S that adds up to k.

Solution: First we feed the whole set S to the black box with target k. If it answers yes we can continue; otherwise no subset sums to k and we stop.
Next we query the black box with S minus its first element. If it still answers yes, that element is not needed and we delete it from S permanently; otherwise it belongs to the subset and we keep it. We repeat this for each element in turn, one black-box call per element, so after at most n + 1 calls S has shrunk to a subset that sums up to exactly k.

3-13. Let A[1..n] be an array of real numbers. Design an algorithm to perform any sequence of the following operations:
• Add(i,y) – Add the value y to the ith number.
• Partial-sum(i) – Return the sum of the first i numbers, i.e. $\sum_{j=1}^{i} A[j].$
There are no insertions or deletions; the only change is to the values of the numbers. Each operation should take $O(\log n)$ steps. You may use one additional array of size n as a work space.

Solution: The general idea is to use the workspace array to store precomputed partial sums. Instead of keeping a running sum for every index i, we maintain only $O(\log n)$ of them per query path, which guarantees $O(\log n)$ work for each operation.

function Add(i, y)
    A[i] = A[i] + y
    for each workspace entry whose range contains i
        workspace[j] = workspace[j] + y

This function takes $1 + \log n = O(\log n)$ time.

function Partial-sum(i)
    sum = workspace sum nearest to index i

    if that workspace index > i
        subtract A[j] for j from i+1 up to the workspace index
    else
        add A[j] for j from the workspace index + 1 up to i

    return sum

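One standard data structure that realizes this "logarithmically many stored sums" idea within a single extra array of size n is a Fenwick (binary indexed) tree — not literally the workspace scheme above, but it meets the same $O(\log n)$ bounds:

```python
class FenwickTree:
    """Add(i, y) and Partial-sum(i) in O(log n) with one extra array of size n."""
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)          # the one allowed workspace array

    def add(self, i, y):                   # 1-indexed
        while i <= self.n:
            self.tree[i] += y
            i += i & (-i)                  # climb to the next covering range

    def partial_sum(self, i):              # sum of A[1..i]
        total = 0
        while i > 0:
            total += self.tree[i]
            i -= i & (-i)                  # strip the lowest set bit
        return total
```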

3-17. A Caesar shift (see Section 18.6 (page 641)) is a very simple class of ciphers for secret messages. Unfortunately, they can be broken using statistical properties of English. Develop a program capable of decrypting Caesar shifts of sufficiently long texts.

Solution:

def countChars(s):
    myCount = dict()
    s = s.upper()    # don't have to distinguish between upper and lower case
    for c in s:
        if not c.isalpha():    # skip spaces, digits, punctuation and newlines
            continue

        if c in myCount:
            myCount[c] += 1
        else:
            myCount[c] = 1

    return myCount

def deCaesar(s):
    # relative frequencies of letters in English text
    letterProb = {'A': 0.08167, 'B': 0.01492, 'C': 0.02782, 'D': 0.04253,
                  'E': 0.12702, 'F': 0.02228, 'G': 0.02015, 'H': 0.06094,
                  'I': 0.06966, 'J': 0.00153, 'K': 0.00772, 'L': 0.04025,
                  'M': 0.02406, 'N': 0.06749, 'O': 0.07507, 'P': 0.01929,
                  'Q': 0.00095, 'R': 0.05987, 'S': 0.06327, 'T': 0.09056,
                  'U': 0.02758, 'V': 0.00978, 'W': 0.02360, 'X': 0.00150,
                  'Y': 0.01974, 'Z': 0.00074}

    counts = countChars(s)
    mostCommonCipher = max(counts, key=counts.get)
    mostCommonPlain = max(letterProb, key=letterProb.get)    # 'E'

    # the shift that maps the most common plaintext letter onto the
    # most common ciphertext letter
    return (ord(mostCommonCipher) - ord(mostCommonPlain)) % 26

txt = """UHVHDUFKHUV (EORRP (1985), EUBDQ & KDUWHU (1899), KDBHV (1989), VLPPRQ & FKDVH (1973)) KDYH VKRZQ LW WDNHV DERXW WHQ BHDUV WR GHYHORS HASHUWLVH LQ DQB RI D ZLGH YDULHWB RI DUHDV, LQFOXGLQJ FKHVV SODBLQJ, PXVLF FRPSRVLWLRQ, WHOHJUDSK RSHUDWLRQ, SDLQWLQJ, SLDQR SODBLQJ, VZLPPLQJ, WHQQLV, DQG UHVHDUFK LQ QHXURSVBFKRORJB DQG WRSRORJB. WKH NHB LV GHOLEHUDWLYH SUDFWLFH: QRW MXVW GRLQJ LW DJDLQ DQG DJDLQ, EXW FKDOOHQJLQJ BRXUVHOI ZLWK D WDVN WKDW LV MXVW EHBRQG BRXU FXUUHQW DELOLWB, WUBLQJ LW, DQDOBCLQJ BRXU SHUIRUPDQFH ZKLOH DQG DIWHU GRLQJ LW, DQG FRUUHFWLQJ DQB PLVWDNHV. WKHQ UHSHDW. DQG UHSHDW DJDLQ. WKHUH DSSHDU WR EH QR UHDO VKRUWFXWV: HYHQ PRCDUW, ZKR ZDV D PXVLFDO SURGLJB DW DJH 4, WRRN 13 PRUH BHDUV EHIRUH KH EHJDQ WR SURGXFH ZRUOG-FODVV PXVLF. LQ DQRWKHU JHQUH, WKH EHDWOHV VHHPHG WR EXUVW RQWR WKH VFHQH ZLWK D VWULQJ RI #1 KLWV DQG DQ DSSHDUDQFH RQ WKH HG VXOOLYDQ VKRZ LQ 1964. EXW WKHB KDG EHHQ SODBLQJ VPDOO FOXEV LQ OLYHUSRRO DQG KDPEXUJ VLQFH 1957, DQG ZKLOH WKHB KDG PDVV DSSHDO HDUOB RQ, WKHLU ILUVW JUHDW FULWLFDO VXFFHVV, VJW. SHSSHUV, ZDV UHOHDVHG LQ 1967. PDOFROP JODGZHOO UHSRUWV WKDW D VWXGB RI VWXGHQWV DW WKH EHUOLQ DFDGHPB RI PXVLF FRPSDUHG WKH WRS, PLGGOH, DQG ERWWRP WKLUG RI WKH FODVV DQG DVNHG WKHP KRZ PXFK WKHB KDG SUDFWLFHG:

HYHUBRQH, IURP DOO WKUHH JURXSV, VWDUWHG SODBLQJ DW URXJKOB WKH VDPH WLPH - DURXQG WKH DJH RI ILYH. LQ WKRVH ILUVW IHZ BHDUV, HYHUBRQH SUDFWLVHG URXJKOB WKH VDPH DPRXQW - DERXW WZR RU WKUHH KRXUV D ZHHN. EXW DURXQG WKH DJH RI HLJKW UHDO GLIIHUHQFHV VWDUWHG WR HPHUJH. WKH VWXGHQWV ZKR ZRXOG HQG XS DV WKH EHVW LQ WKHLU FODVV EHJDQ WR SUDFWLVH PRUH WKDQ HYHUBRQH HOVH: VLA KRXUV D ZHHN EB DJH QLQH, HLJKW EB DJH 12, 16 D ZHHN EB DJH 14, DQG XS DQG XS, XQWLO EB WKH DJH RI 20 WKHB ZHUH SUDFWLVLQJ ZHOO RYHU 30 KRXUV D ZHHN. EB WKH DJH RI 20, WKH HOLWH SHUIRUPHUV KDG DOO WRWDOOHG 10,000 KRXUV RI SUDFWLFH RYHU WKH FRXUVH RI WKHLU OLYHV. WKH PHUHOB JRRG VWXGHQWV KDG WRWDOOHG, EB FRQWUDVW, 8,000 KRXUV, DQG WKH IXWXUH PXVLF WHDFKHUV MXVW RYHU 4,000 KRXUV.

VR LW PDB EH WKDW 10,000 KRXUV, QRW 10 BHDUV, LV WKH PDJLF QXPEHU. (KHQUL FDUWLHU-EUHVVRQ (1908-2004) VDLG "BRXU ILUVW 10,000 SKRWRJUDSKV DUH BRXU ZRUVW," EXW KH VKRW PRUH WKDQ RQH DQ KRXU.) VDPXHO MRKQVRQ (1709-1784) WKRXJKW LW WRRN HYHQ ORQJHU: "HAFHOOHQFH LQ DQB GHSDUWPHQW FDQ EH DWWDLQHG RQOB EB WKH ODERU RI D OLIHWLPH; LW LV QRW WR EH SXUFKDVHG DW D OHVVHU SULFH." DQG FKDXFHU (1340-1400) FRPSODLQHG "WKH OBI VR VKRUW, WKH FUDIW VR ORQJ WR OHUQH." KLSSRFUDWHV (F. 400EF) LV NQRZQ IRU WKH HAFHUSW "DUV ORQJD, YLWD EUHYLV", ZKLFK LV SDUW RI WKH ORQJHU TXRWDWLRQ "DUV ORQJD, YLWD EUHYLV, RFFDVLR SUDHFHSV, HASHULPHQWXP SHULFXORVXP, LXGLFLXP GLIILFLOH", ZKLFK LQ HQJOLVK UHQGHUV DV "OLIH LV VKRUW, [WKH] FUDIW ORQJ, RSSRUWXQLWB IOHHWLQJ, HASHULPHQW WUHDFKHURXV, MXGJPHQW GLIILFXOW." DOWKRXJK LQ ODWLQ, DUV FDQ PHDQ HLWKHU DUW RU FUDIW, LQ WKH RULJLQDO JUHHN WKH ZRUG "WHFKQH" FDQ RQOB PHDQ "VNLOO", QRW "DUW". """

print(deCaesar(txt))


3-24. What is the best data structure for maintaining URLs that have been visited by a Web crawler? Give an algorithm to test whether a given URL has already been visited, optimizing both space and time.

Solution: An easy way is to use a hash table for the domains and a hash table for paths.

function testURL
    calculate hash of the domain
    look up in the hash table for domains
    if multiple entries:
        traverse the chain until the right domain is found

    if domain not found
        return Not found

    calculate hash of the path
    look up in the hash table for paths
    if multiple entries:
        traverse the chain until the right path is found

    if path not found
        return Not found

    return Found

3-28. You have an unordered array X of n integers. Find the array M containing n elements where Mi is the product of all integers in X except for Xi. You may not use division. You can use extra memory. (Hint: There are solutions faster than $O(n^2).$)

Solution: Here’s an algorithm which beats $O(n^2)$:
First we use our extra memory to store precomputed products of blocks of $\log n$ consecutive numbers (wrapping around cyclically), which takes $O(n \log n)$ time. Now each $M_i$ can be assembled from about $n / \log n$ precomputed blocks plus at most $\log n$ leftover elements of X, so computing all of M takes $O(n^2 / \log n)$ time in total — better than $O(n^2)$, as the hint demands.
Example:

X  = [6, 5, 3, 1, 7, 6, 2, 3]
Y  = [6*5*3, 5*3*1, 3*1*7, 1*7*6, 7*6*2, 6*2*3, 2*3*6, 3*6*5]

M1 = [Y[2] * Y[5] * X[8]]
M2 = [Y[3] * Y[6] * X[1]]
M3 = [Y[4] * Y[7] * X[2]]
M4 = [Y[5] * Y[8] * X[3]]
M5 = [Y[6] * Y[1] * X[4]]
M6 = [Y[7] * Y[2] * X[5]]
M7 = [Y[8] * Y[3] * X[6]]
M8 = [Y[1] * Y[4] * X[7]]
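For comparison, the well-known division-free $O(n)$ solution (a different technique from the block products above) uses prefix and suffix products:

```python
def products_except_self(x):
    """M[i] = product of all X[j] with j != i, without division, O(n)."""
    n = len(x)
    m = [1] * n
    running = 1
    for i in range(n):               # prefix pass: product of everything left of i
        m[i] = running
        running *= x[i]
    running = 1
    for i in range(n - 1, -1, -1):   # suffix pass: fold in everything right of i
        m[i] *= running
        running *= x[i]
    return m
```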

# The Algorithm Design Manual: Chapter 2

2-1. What value is returned by the following function? Express your answer as a function of n. Give the worst-case running time using the Big Oh notation.

function mystery(n)
r := 0
for i := 1 to n − 1 do
for j := i + 1 to n do
for k := 1 to j do
r := r + 1
return(r)

Solution: To find out what value is returned just represent this function mathematically and simplify it.

$\sum_{i=1}^{n-1}\sum_{j=i+1}^n\sum_{k=1}^{j} 1$
$= \sum_{i=1}^{n-1}\sum_{j=i+1}^n j$
$= \sum_{i=1}^{n-1} (\frac{n(n+1)}{2} - \frac{i(i+1)}{2})$
$= \frac{1}{2} \sum_{i=1}^{n-1} (n^2 + n - i^2 - i)$
$= \frac{1}{2} ( (n-1)n^2 + (n-1)n - \sum_{i=1}^{n-1} i^2 - \frac{(n-1)n}{2})$
$= \frac{1}{2} \left( (n^3 - n) - (\frac{n(n+1)(2n+1)}{6} - n^2) - \frac{n^2 - n}{2} \right)$
$= \frac{1}{2} \cdot \frac{1}{6} \left( 6n^3 - 6n - (2n^3 + n^2 + 2n^2 + n - 6n^2) - 3n^2 + 3n \right)$
$= \frac{1}{12} \left( 6n^3 - 6n - 2n^3 - n^2 - 2n^2 - n + 6n^2 - 3n^2 + 3n \right)$
$= \frac{1}{12} (4n^3 - 4n) = \frac{n^3 - n}{3}$

The complexity is $O(n^3).$

2-10. Prove that $n^3 - 3n^2 - n + 1 = \Theta(n^3).$
Solution: $O(n^3)$: for $c = 1$ and all $n \geq 1$ we have $n^3 - 3n^2 - n + 1 \leq c \cdot n^3$, since $-3n^2 - n + 1 \leq 0$.
$\Omega(n^3)$: for $c = \frac{1}{2}$ and all $n \geq 7$ we have $n^3 - 3n^2 - n + 1 \geq c \cdot n^3$, since $\frac{1}{2}n^3 \geq 3n^2 + n - 1$ there.

2-34. Assume that Christmas has n days. Exactly how many presents did my “true love” send me? (Do some research if you do not understand this question.)
Solution: I made this table of the first three days:

Day    Presents                        \sum of Presents
1       1                               1
2       1 + 2                           1 + 1 + 2
3       1 + 2 + 3                       1 + 1 + 2 + 1 + 2 + 3


We can break this down into sub steps. On the ith day we get $p_i = \sum_{k=1}^{i} k$ presents.
The total amount of presents is: $\sum_{i=1}^{n} p_i = \sum_{i=1}^{n} \sum_{k=1}^{i} k = \sum_{i=1}^{n} \frac{i^2+i}{2}$
This can be simplified as:
$= \frac{1}{2} \left( \sum_{i=1}^{n} i^2 + \sum_{i=1}^{n} i \right) = \frac{1}{2} \left( \frac{n(n+1)(2n+1)}{6} + \frac{3n^2 + 3n}{6} \right)$
$= \frac{1}{12} \left( 2n^3 + 3n^2 + n + 3n^2 + 3n \right)$
$= \frac{2n^3 + 6n^2 + 4n}{12} = \frac{n^3+3n^2+2n}{6}$

2-39. Prove the following identities on logarithms:
(a) Prove that $\log_a(xy) = \log_a x + \log_a y$
(b) Prove that $\log_a x^y = y \log_a x$
(c) Prove that $\log_a x = \frac{\log_b x}{\log_b a}$
(d) Prove that $x^{\log_b y} = y^{\log_b x}$
Solution:
(a) The first proof is straight forward: $a^{\log_a(x) + \log_a(y)} = a^{\log_a(x)} \cdot a^{\log_a(y)} = x \cdot y = a^{\log_a(xy)}$
(b) The trick here is to see that $x^y = \prod_{i=1}^{y} x$. Therefore we can use the identity from (a): $\log_a x^y = \log_a (\prod_{i=1}^{y} x) = \sum_{i=1}^{y} \log_a x = y \log_a x$
(c) Here we introduce a variable z and transform the equation until the desired term appears:
$\log_a x = z \Leftrightarrow a^{\log_a x} = a^z \Leftrightarrow x = a^z$
$\log_b x = \log_b a^z \Leftrightarrow \log_b x = z \log_b a$
$\frac{\log_b x}{\log_b a} = z = \log_a x$
(d) The last one is quite easy. We claim $x^{\log_b y} = y^{\log_b x}.$ Apply $\log_b$ to both sides and use the identity from (b): $\log_b y \cdot \log_b x = \log_b x \cdot \log_b y$, which is true, so the claim holds.

2-44. We have 1,000 data items to store on 1,000 nodes. Each node can store copies of exactly three different items. Propose a replication scheme to minimize data loss as nodes fail. What is the expected number of data entries that get lost when three random nodes fail?
Solution: My first idea was a kind of binary tree, which then evolved into the following scheme.

The idea is to store on each node its own main value plus copies of the values of its left and right child nodes. In the last row of nodes we have some free space where we can store item10, because it isn't backed up anywhere else yet.
So what happens if three nodes fail? There are various scenarios.
1. Three related nodes fail, e.g. Node 1, 2 and 3. Then Item11 and Item12 are completely lost. (Remember: Item10 is stored again further down.)
2. A node and its parent fail, e.g. Node 1 and 2. Here we just lose Item11.
3. A single node fails, e.g. Node 3. No problem whatsoever: we can retrieve Item12 from Node 1.

We have about 500 free slots in the last row of nodes, which could be used to back up half of the items a second time, reducing the expected loss even further.

2-46. You have a 100-story building and a couple of marbles. You must identify the lowest floor for which a marble will break if you drop it from this floor. How fast can you find this floor if you are given an infinite supply of marbles? What if you have only two marbles?
Solution: The first case, with an infinite supply of marbles, is easy: we just do a binary search over the floors, which needs at most $\lceil \log_2 100 \rceil = 7$ drops (and breaks at most 7 marbles).
The second case is more demanding, because once the first marble breaks we can only scan upwards floor by floor with the single marble left. A simple approach: drop the first marble in the middle of the interval, i.e. at the 50th floor. If it breaks, we work our way up from floor 1 to at most floor 49. If it survives, we can halve the remaining interval again, as long as the first marble stays intact. In the worst case (it breaks on the very first drop) this costs about 50 drops.
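The classic refinement for the two-marble case drops the first marble at decreasing gaps (14, 13, 12, …), so a break after k first-marble drops leaves at most 14 − k floors for the linear scan — a worst case of 14 drops instead of roughly 50. A sketch (the helper names are my own):

```python
def find_break_floor(breaks_at, top=100, first_gap=14):
    """Locate the lowest breaking floor with two marbles.

    breaks_at(f) -> True if a marble dropped from floor f breaks.
    Returns (floor, number_of_drops); floor is None if no floor breaks.
    """
    drops, prev, gap = 0, 0, first_gap
    f = min(gap, top)

    # First marble: jump upward in shrinking steps until it breaks.
    while True:
        drops += 1
        if breaks_at(f):
            break
        prev = f
        if f == top:
            return None, drops  # survives even the top floor
        gap = max(gap - 1, 1)
        f = min(f + gap, top)

    # Second marble: linear scan of the floors skipped by the last jump.
    for g in range(prev + 1, f):
        drops += 1
        if breaks_at(g):
            return g, drops
    return f, drops

worst = max(find_break_floor(lambda f, t=t: f >= t)[1] for t in range(1, 101))
print(worst)  # -> 14
```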

2-50. A Ramanujam number can be written two different ways as the sum of two cubes—i.e., there exist distinct a, b, c, and d such that $a^3 + b^3 = c^3 + d^3$. Generate all Ramanujam numbers where a, b, c, d < n.
Solution:

def RamanujamNumbersBF(n):
    numbers = []
    for a in xrange(0, n):
        for b in xrange(0, n):
            for c in xrange(0, n):
                for d in xrange(0, n):
                    if a != b and a != c and a != d and b != c and b != d and c != d:
                        if a ** 3 + b ** 3 == c ** 3 + d ** 3:
                            numbers.append((a, b, c, d))

    return numbers

def RamanujamNumbersDP(n):
    numbers = []
    Ds = dict()

    # Init table: mark every value as "not a cube"
    for v in xrange(0, n ** 3):
        Ds[v] = False

    # Fill table: map each cube to its root
    for d in xrange(0, n):
        Ds[d ** 3] = d

    for a in xrange(0, n):
        for b in xrange(0, n):
            for c in xrange(0, n):
                if a != b and a != c and b != c:
                    v = a ** 3 + b ** 3 - c ** 3  # candidate value for d^3

                    if 0 <= v < n ** 3 and Ds[v] is not False:
                        d = Ds[v]
                        if a != d and b != d and c != d:
                            numbers.append((a, b, c, d))

    return numbers

print "Brute Force"
print RamanujamNumbersBF(50)

print "Dynamic Programming"
print RamanujamNumbersDP(50)

The DP approach runs much faster than raw brute force, which you notice immediately: BF is $O(n^4)$, while the DP variant only takes $O(n^3)$ (plus the $O(n^3)$ table initialization).
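Both versions can in fact be beaten by hashing pair sums: enumerate only the ordered pairs $a < b$, group them by $a^3 + b^3$, and report every sum hit by at least two pairs. That needs only the $O(n^2)$ pairs themselves. A sketch of mine, not the book's solution:

```python
from collections import defaultdict

def ramanujanNumbersHashed(n):
    """All sums a^3 + b^3 (0 < a < b < n) reachable by at least two pairs."""
    pairs_by_sum = defaultdict(list)
    for a in range(1, n):
        for b in range(a + 1, n):
            pairs_by_sum[a ** 3 + b ** 3].append((a, b))
    return {s: ps for s, ps in pairs_by_sum.items() if len(ps) > 1}

print(ramanujanNumbersHashed(13))  # 1729 = 1^3 + 12^3 = 9^3 + 10^3
```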

# The Algorithm Design Manual: Chapter 1

Yeah, a new book series! I have had this book lying around for about two and a half years; I read about a quarter of it but never worked through it. So I decided to work through it and post interesting problems and solutions online.
The book consists of two parts. The first part covers algorithmic basics. The second part is a reference of different algorithmic problems and ways to solve them. I won't cover the second part, mainly because there are no exercises.

1-1. Show that a + b can be less than min(a, b).
Solution: For any $a, b < 0$: $a + b < \min(a,b)$. For example:
$a = -5, b = -3$: $a + b = -8 < \min(-5, -3) = -5.$

1-2. Show that a × b can be less than min(a, b).
Solution: For example, for $a = -5$ and $b = 3$ the result is $-5 \cdot 3 = -15 < \min(-5, 3) = -5.$

1-5. The knapsack problem is as follows: given a set of integers $S = \{s_1, s_2, \ldots, s_n\}$, and a target number T, find a subset of S which adds up exactly to T. For example, there exists a subset within $S = \{1,2,5,9,10\}$ that adds up to $T = 22$ but not $T = 23$.
Find counterexamples to each of the following algorithms for the knapsack problem. That is, give an S and T such that the subset selected by the algorithm does not fill the knapsack completely, even though such a solution exists.
(a) Put the elements of S in the knapsack in left to right order if they fit, i.e. the first-fit algorithm.
(b) Put the elements of S in the knapsack from smallest to largest, i.e. the best-fit algorithm.
(c) Put the elements of S in the knapsack from largest to smallest.
Solution:
(a) $S = \{1, 2\}, T = 2$
(b) $S = \{1, 2\}, T = 2$
(c) $S = \{2, 3, 4\}, T = 5$
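All three counterexamples can be checked with one greedy helper that adds elements in the given order while they fit (my own scaffolding, not from the book):

```python
def greedyFit(S, T):
    """Add elements of S (in the given order) as long as they still fit."""
    total, chosen = 0, []
    for s in S:
        if total + s <= T:
            total += s
            chosen.append(s)
    return chosen, total

print(greedyFit([1, 2], 2))     # (a)/(b): picks [1], total 1 != 2
print(greedyFit([4, 3, 2], 5))  # (c) largest first: picks [4], total 4 != 5
```

In each case the greedy choice blocks the exact solution ({2} for (a)/(b), {2, 3} for (c)).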

1-16. Prove by induction that $n^3 + 2n$ is divisible by 3 for all $n \ge 0$.
Solution: The base case is $n = 0$: $0^3 + 2 \cdot 0 = 0$, which is divisible by 3.
We can assume that this holds up to n. For n + 1 we get:
$(n+1)^3 + 2(n+1) = n^3 + 3n^2 + 3n + 1 + 2n + 2$
$= n^3 + 3n^2 + 5n + 3 = (n^3 + 2n) + 3(n^2 + n + 1).$
The first parenthesized term is divisible by 3 by the induction hypothesis, and the second is obviously divisible by 3; therefore the sum is divisible by 3 and the claim holds for n + 1.
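A quick brute-force check of the claim for small n:

```python
# n^3 + 2n is divisible by 3 for every n checked
assert all((n ** 3 + 2 * n) % 3 == 0 for n in range(1000))
print("holds for all n < 1000")
```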

1-26. Implement the two TSP heuristics of Section 1.1 (page 5). Which of them gives better-quality solutions in practice? Can you devise a heuristic that works better than both of them?
Solution:

from random import randint

class Point:
    def __init__(self, ID, x, y):
        self.ID = ID
        self.x = x
        self.y = y

    def distance(self, other):
        return ((self.x - other.x) ** 2 + (self.y - other.y) ** 2) ** 0.5

def closestPoint(P, visited, p):
    shortestDistance = float("Inf")
    shortestPoint = None

    for pi in P:
        if visited[pi.ID] == False:
            if p.distance(pi) < shortestDistance:
                shortestDistance = p.distance(pi)
                shortestPoint = pi

    return shortestPoint

def nearestNeighbour(P):
    visited = [False] * (len(P) + 1)  # IDs start with 1
    p = P[0]

    visited[0] = True  # index 0 is unused
    visited[1] = True  # first node

    finalPath = [p]

    while False in visited:
        pi = closestPoint(P, visited, p)

        if pi == None:  # last point
            break

        visited[pi.ID] = True

        p = pi
        finalPath.append(p)

    finalPath.append(P[0])  # close the tour

    return finalPath

def closestPair(P):
    n = len(P)

    vertexChain = [[p] for p in P]

    for i in xrange(1, n - 1):
        d = float("Inf")
        for i1, v1 in enumerate(vertexChain):
            for i2, v2 in enumerate(vertexChain):
                if i1 != i2:
                    s = v1[-1]
                    t = v2[-1]

                    if s.distance(t) <= d:
                        sm = i1
                        tm = i2
                        d = s.distance(t)

        vertexChain[sm] += vertexChain[tm]
        vertexChain.pop(tm)

    vertexChain[0] += vertexChain.pop(1)

    return vertexChain[0]

def getPathSum(P):
    # sum the Euclidean length of each segment (not the sqrt of the total)
    distance = 0
    for i in xrange(1, len(P)):
        distance += ((P[i].x - P[i - 1].x) ** 2 + (P[i].y - P[i - 1].y) ** 2) ** 0.5

    return distance

P_rect = [Point(1, 0, 0), Point(2, 0, 5), Point(3, 5, 0), Point(4, 5, 5)]
P_line = [Point(1, 0, 5), Point(2, 0, 0), Point(3, 0, 10), Point(4, 0, 15)]
P_rand = [Point(i, randint(0, 15), randint(0, 15)) for i in xrange(1, 11)]

print "**Rectangle**"
print "> NearestNeighbour: %i" % getPathSum(nearestNeighbour(P_rect))
print "> ClosestPair: %i" % getPathSum(closestPair(P_rect))

print "**Line**"
print "> NearestNeighbour: %i" % getPathSum(nearestNeighbour(P_line))
print "> ClosestPair: %i" % getPathSum(closestPair(P_line))

print "**Rand**"
print "> NearestNeighbour: %i" % getPathSum(nearestNeighbour(P_rand))
print "> ClosestPair: %i" % getPathSum(closestPair(P_rand))


nearestNeighbour creates shorter paths for small inputs. For mid-size and large inputs, closestPair produces shorter paths. However, nearestNeighbour is much faster. In conclusion, it depends on the application which heuristic is more suitable.

1-28. Write a function to perform integer division without using either the / or * operators. Find a fast way to do it.
Solution: A simple way to perform integer division is to subtract the divisor repeatedly and count the subtractions. This only works for positive numbers. (more in the comments)

def divideBySub(a, b):
    count = 0
    while a >= b:
        a -= b
        count += 1

    return count

If we can find a bigger divisor, we can speed this up. An easy way is to keep doubling the divisor as long as the doubled value still divides the dividend without remainder.

def divideBySubFaster(a, b):
    countB = 0

    # double b while the doubled divisor still divides a evenly
    while a % (b + b) == 0:
        b += b
        countB += 1

    result = divideBySub(a, b)

    # undo the doublings of b by doubling the result
    for i in xrange(0, countB):
        result += result

    return result

This works quite nicely and can speed up the process substantially.
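An even faster route (O(log a) iterations instead of O(a/b)) is binary long division with shifts — assuming shifts are fair game, since the exercise only bans / and *. A sketch of mine:

```python
def divideByShift(a, b):
    """Integer division of non-negative a by positive b using shifts only."""
    q = 0
    shift = 0

    # Find the largest b * 2^shift that still fits into a.
    while (b << (shift + 1)) <= a:
        shift += 1

    # Subtract shifted copies of b, building the quotient bit by bit.
    while shift >= 0:
        if (b << shift) <= a:
            a -= b << shift
            q |= 1 << shift
        shift -= 1

    return q

print(divideByShift(100, 5))  # -> 20
print(divideByShift(7, 2))    # -> 3
```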