The Algorithm Design Manual: Chapter 3

3-1. A common problem for compilers and text editors is determining whether the parentheses in a string are balanced and properly nested. For example, the string ((())())() contains properly nested pairs of parentheses, which the strings )()( and ()) do not. Give an algorithm that returns true if a string contains properly nested and balanced parentheses, and false if otherwise. For full credit, identify the position of the first offending parenthesis if the string is not properly nested and balanced.
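No solution is worked out for this one in the text; a standard approach scans the string while tracking the nesting depth. A minimal sketch (positions reported 0-based):

```python
def balanced(s):
    """Return (True, None) if the parentheses in s are balanced and
    properly nested, else (False, i) where i is the position of the
    first offending parenthesis."""
    depth = 0
    for i, ch in enumerate(s):
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:          # a ')' with no matching '('
                return (False, i)
    if depth > 0:                  # some '(' was never closed
        # find the first unmatched '(' with a second pass
        stack = []
        for i, ch in enumerate(s):
            if ch == '(':
                stack.append(i)
            elif ch == ')':
                stack.pop()        # safe: depth never went negative
        return (False, stack[0])
    return (True, None)
```

The counter alone decides balance; the stack of positions is only needed to report which opening parenthesis is left dangling.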


3-3. We have seen how dynamic arrays enable arrays to grow while still achieving constant-time amortized performance. This problem concerns extending dynamic arrays to let them both grow and shrink on demand.
(a) Consider an underflow strategy that cuts the array size in half whenever the array falls below half full. Give an example sequence of insertions and deletions where this strategy gives a bad amortized cost.
(b) Then, give a better underflow strategy than that suggested above, one that achieves constant amortized cost per deletion.

Solution: 3-3. (a) Suppose the array is exactly half full. Deleting one element drops it below half full, so the array shrinks to half its size and is now (nearly) full. Inserting a single element then forces it to expand again. Alternating insertions and deletions at this boundary triggers an O(n) resize on every operation, so the amortized cost is Θ(n).

(b) Instead of shrinking it to half its size, I would shrink it by only a quarter of its size (to 3/4 of the old capacity). After such a shrink the array is only about two-thirds full, so we have a buffer: Θ(n) further insertions or deletions are needed before the next resize, which avoids shrinking and expanding in too short periods and gives O(1) amortized cost per deletion.
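A minimal sketch of the buffered shrink rule from (b): grow by doubling when full, shrink capacity by a quarter when the array falls below half full. The class name and minimum capacity are illustrative.

```python
class ShrinkableArray:
    def __init__(self):
        self.capacity = 4
        self.size = 0
        self.data = [None] * self.capacity

    def _resize(self, new_capacity):
        new_data = [None] * new_capacity
        new_data[:self.size] = self.data[:self.size]
        self.data, self.capacity = new_data, new_capacity

    def append(self, x):
        if self.size == self.capacity:            # full: double
            self._resize(2 * self.capacity)
        self.data[self.size] = x
        self.size += 1

    def pop(self):
        self.size -= 1
        x = self.data[self.size]
        self.data[self.size] = None
        if self.size < self.capacity // 2 and self.capacity > 4:
            self._resize(3 * self.capacity // 4)  # shrink by a quarter
        return x
```

After a shrink the load factor is about 2/3, safely between the grow and shrink thresholds.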

3-4. Design a dictionary data structure in which search, insertion, and deletion can all be processed in O(1) time in the worst case. You may assume the set elements are integers drawn from a finite set 1, 2, .., n, and initialization can take O(n) time.

Solution: Luckily these are sets, so there are no duplicates, and the elements are integers drawn from 1, 2, ..., n. Therefore we can use an array of size n, which takes O(n) time to initialize. Search (checking A[i]) can be done in O(1), and so can insertion (A[i] = v). To delete, we overwrite the entry with a reserved sentinel value (e.g. EMPTY), which also works in O(1).
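A minimal sketch of this array-based dictionary, using a sentinel object in place of the reserved value:

```python
EMPTY = object()  # sentinel marking an absent / deleted entry

class ArrayDictionary:
    """Elements are drawn from {1, ..., n}; slot i-1 holds element i."""
    def __init__(self, n):
        self.slots = [EMPTY] * n              # O(n) initialization

    def insert(self, i):
        self.slots[i - 1] = i                 # O(1)

    def search(self, i):
        return self.slots[i - 1] is not EMPTY  # O(1)

    def delete(self, i):
        self.slots[i - 1] = EMPTY             # O(1)
```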

3-6. Describe how to modify any balanced tree data structure such that search, insert, delete, minimum, and maximum still take O(\log n) time each, but successor and predecessor now take O(1) time each. Which operations have to be modified to support this?

Solution: My idea is to take a balanced search tree and additionally thread all elements through a doubly linked list in sorted order, which provides the predecessors and successors.

Each node gets a pointer to its entry in the doubly linked list. Let's check that everything still runs in O(\log n). Search, minimum, and maximum haven't really changed, so they still work fine. What about insert and delete? Assume 3 isn't inserted yet. We traverse the tree down to node 2 and see that the node for 3 must become its right child. Now we have to fix the doubly linked list:
node 2 points to list entry 2, whose successor is list entry 4. So we link 2 to 3, and 3 to both 2 and 4, and we're done. Deletion works the same way, by splicing the entry out of the list; both fix-ups cost O(1) on top of the O(\log n) tree operation. Successor and predecessor are then available through the doubly linked list in O(1), and we're done.
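The insert fix-up described above can be sketched as follows. For brevity this uses an unbalanced BST, since only the O(1) list splicing is the point; rebalancing is omitted.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None
        self.prev = self.next = None   # doubly linked list in sorted order

def insert(root, key):
    if root is None:
        return Node(key)
    parent, node = None, root
    while node is not None:
        parent = node
        node = node.left if key < node.key else node.right
    new = Node(key)
    if key < parent.key:
        parent.left = new
        # new node's in-order successor is its parent
        new.next, new.prev = parent, parent.prev
    else:
        parent.right = new
        # new node's in-order predecessor is its parent
        new.prev, new.next = parent, parent.next
    if new.prev is not None:
        new.prev.next = new
    if new.next is not None:
        new.next.prev = new
    return root

def find(root, key):
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root
```

After a `find`, successor and predecessor are single pointer dereferences (`node.next`, `node.prev`), i.e. O(1).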

3-8. Design a data structure to support the following operations:
• insert(x,T) – Insert item x into the set T .
• delete(k,T) – Delete the kth smallest element from T.
• member(x,T) – Return true iff x ∈ T .
All operations must take O(\log n) time on an n-element set.

Solution: We use a basic binary search tree for this and add two counters to each node, counting the number of nodes in its left and right subtrees.

To make this concrete, take a node 4 with counters (2, 3). The 2 indicates that there are 2 nodes in its left subtree, i.e. smaller than 4, and the 3 indicates that there are 3 nodes in its right subtree, i.e. bigger than 4. What happens if we insert an item?
We do the standard insert traversal of a binary search tree, with the difference that at each node we pass we add one to the counter of the side we descend into, because that subtree will contain one more node once the new item is added.

For example, when we insert 1, besides adding the new node we increase the left counter of 2 by one and the left counter of 4 by one, because we traveled this way. So there's no problem inserting new nodes in O(\log n).
The next method is member, which is basically search in a binary search tree and also runs in O(\log n).
The last one is delete(k, T), which removes the kth smallest item. Finally we can use our counters: at each node, the left counter tells us whether the kth smallest item is in the left subtree, is the node itself, or is in the right subtree.
Example 1: We want the 3rd smallest item. We start at 4 and see that there are 3 items on its left, i.e. the 3rd smallest item is among its left children. Next we are at 2 and see that there is 1 item on its left and 1 on its right, so we have to go right (1 left + the item itself + 1 right item = 3). Now we have arrived at 3, which has no children, so 3 is our 3rd smallest item.

Example 2: We want the 4th smallest item. We start at 4 and see that there are 3 items to the left. Therefore the 4th smallest item is 4 itself.

This is basically a binary search guided by the counters, so it also runs in O(\log n).
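A minimal sketch of the counter-augmented tree with the kth-smallest walk (insertion without rebalancing, for brevity):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None
        self.left_count = self.right_count = 0  # nodes in each subtree

def insert(node, key):
    if node is None:
        return Node(key)
    if key < node.key:
        node.left_count += 1        # left subtree gains one node
        node.left = insert(node.left, key)
    else:
        node.right_count += 1       # right subtree gains one node
        node.right = insert(node.right, key)
    return node

def kth_smallest(node, k):
    """1-based rank query guided by the left counters."""
    if node.left_count + 1 == k:
        return node.key
    if k <= node.left_count:
        return kth_smallest(node.left, k)
    return kth_smallest(node.right, k - node.left_count - 1)
```

`delete(k, T)` would use the same walk to locate the kth node, then do an ordinary BST deletion while decrementing the counters on the way down.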

3-12. Suppose you are given an input set S of n numbers, and a black box that if given any sequence of real numbers and an integer k instantly and correctly answers whether there is a subset of input sequence whose sum is exactly k. Show how to use the black box O(n) times to find a subset of S that adds up to k.

Solution: First we feed the whole set S into the black box. If it returns yes we can continue; otherwise no subset sums to k and we stop.
The next step is to test our set without its first element. If the black box still returns yes, we can delete that element from our set; otherwise we know it is needed and keep it. We do this once for each element, so after at most n + 1 calls S has shrunk to a subset that sums up to exactly k.
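The elimination loop can be sketched as follows. Since we have no real black box, `subset_sum_exists` stands in for it here, implemented naively just so the sketch runs:

```python
def subset_sum_exists(nums, k):
    """Stand-in for the black box: does some subset of nums sum to k?"""
    reachable = {0}
    for x in nums:
        reachable |= {s + x for s in reachable}
    return k in reachable

def find_subset(S, k):
    """O(n) black-box calls: drop each element iff a subset summing
    to k still exists without it."""
    if not subset_sum_exists(S, k):
        return None
    kept = list(S)
    i = 0
    while i < len(kept):
        candidate = kept[:i] + kept[i + 1:]
        if subset_sum_exists(candidate, k):
            kept = candidate       # element not needed: discard it
        else:
            i += 1                 # element required: keep it
    return kept
```

Each element is queried exactly once, so the black box is used n + 1 times in total.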

3-13. Let A[1..n] be an array of real numbers. Design an algorithm to perform any sequence of the following operations:
• Add(i,y) – Add the value y to the ith number.
• Partial-sum(i) – Return the sum of the first i numbers, i.e. \sum_{j=1}^{i} A[j].
There are no insertions or deletions; the only change is to the values of the numbers. Each operation should take O(\log n) steps. You may use one additional array of size n as a work space.

Solution: The general idea is to use the workspace to store sums over ranges of A. Instead of keeping a running sum for every index i, each operation only has to touch O(\log n) of these range sums, which guarantees O(\log n) time per operation.

Each operation therefore takes 1 + \log n = O(\log n) time.

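The scheme above can be sketched as a binary indexed (Fenwick) tree, one standard way to maintain exactly such power-of-two range sums in the extra array:

```python
class PartialSums:
    def __init__(self, n):
        self.n = n
        self.tree = [0.0] * (n + 1)   # the additional O(n) workspace

    def add(self, i, y):
        """A[i] += y (1-based): update the O(log n) ranges covering i."""
        while i <= self.n:
            self.tree[i] += y
            i += i & (-i)             # jump to the next covering range

    def partial_sum(self, i):
        """Sum of A[1..i]: combine O(log n) precomputed range sums."""
        total = 0.0
        while i > 0:
            total += self.tree[i]
            i -= i & (-i)             # strip the lowest set bit
        return total
```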

3-17. A Caesar shift (see Section 18.6 (page 641)) is a very simple class of ciphers for secret messages. Unfortunately, they can be broken using statistical properties of English. Develop a program capable of decrypting Caesar shifts of sufficiently long texts.
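One way to build such a program: try all 26 shifts and keep the one whose letter distribution best matches English, scored with a chi-squared statistic. The frequency table below is an approximation of commonly published values.

```python
from collections import Counter

# Approximate relative frequencies (%) of letters in English text.
ENGLISH_FREQ = {
    'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
    's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
    'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
    'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
    'q': 0.1, 'z': 0.07,
}

def shift_text(text, k):
    """Caesar-shift every letter by k positions, preserving case."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + k) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

def crack_caesar(ciphertext):
    """Try all 26 shifts; pick the one whose letter distribution has
    the lowest chi-squared distance to English."""
    best_shift, best_score = 0, float('inf')
    for k in range(26):
        candidate = shift_text(ciphertext, -k)
        letters = [c for c in candidate.lower() if c.isalpha()]
        counts = Counter(letters)
        total = len(letters) or 1
        score = sum((counts[c] / total * 100 - f) ** 2 / f
                    for c, f in ENGLISH_FREQ.items())
        if score < best_score:
            best_shift, best_score = k, score
    return best_shift, shift_text(ciphertext, -best_shift)
```

As the exercise notes, the text must be sufficiently long; on very short ciphertexts the frequency statistics are too noisy to single out the right shift.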


3-24. What is the best data structure for maintaining URLs that have been visited by a Web crawler? Give an algorithm to test whether a given URL has already been visited, optimizing both space and time.

Solution: An easy way is to use hash tables: one for the domains and one for the paths. Since many URLs share the same domain, storing each domain only once saves space, and both the membership test and the insertion run in expected O(1) time.
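A minimal sketch of this idea; the class name and the normalization choices are illustrative:

```python
from urllib.parse import urlsplit

class VisitedURLs:
    """Store each domain once; a visited URL is a (domain id, path) pair."""
    def __init__(self):
        self.domain_ids = {}   # domain -> small integer id
        self.visited = set()   # (domain id, path) pairs

    def _key(self, url):
        parts = urlsplit(url)
        domain = parts.netloc.lower()
        if domain not in self.domain_ids:
            self.domain_ids[domain] = len(self.domain_ids)
        return (self.domain_ids[domain], parts.path or '/')

    def mark_visited(self, url):
        self.visited.add(self._key(url))

    def was_visited(self, url):
        return self._key(url) in self.visited
```

Replacing each path string with the small integer id of its domain keeps the per-URL entry short, which matters when the crawler holds millions of URLs.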

3-28. You have an unordered array X of n integers. Find the array M containing n elements where M_i is the product of all integers in X except for X_i. You may not use division. You can use extra memory. (Hint: There are solutions faster than O(n^2).)

Solution: Here’s an algorithm which works in O(n \log n):
First we use our extra memory to store pre-multiplications: products of groups of neighboring numbers, built by repeatedly halving the array. Building all of them takes O(n \log n) multiplications. Each M_i can then be assembled from O(\log n) of these pre-multiplications, so computing all of M takes O(n \log n) time. Therefore we have O(n \log n) total run time.
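The pre-multiplication idea can be sketched as a divide-and-conquer pass that carries down the product of everything outside the current half:

```python
from math import prod

def products_except_self(xs):
    """M_i = product of all elements except xs[i], no division.
    O(n log n) multiplications across the recursion."""
    result = [1] * len(xs)

    def go(lo, hi, outside):
        # 'outside' is the product of all elements not in xs[lo:hi]
        if hi - lo == 1:
            result[lo] = outside
            return
        mid = (lo + hi) // 2
        left = prod(xs[lo:mid])    # pre-multiplication of the left half
        right = prod(xs[mid:hi])   # pre-multiplication of the right half
        go(lo, mid, outside * right)
        go(mid, hi, outside * left)

    go(0, len(xs), 1)
    return result
```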

The Algorithm Design Manual: Chapter 2

2-1. What value is returned by the following function? Express your answer as a function of n. Give the worst-case running time using the Big Oh notation.

Solution: To find out what value is returned just represent this function mathematically and simplify it.

\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\sum_{k=1}^{j} 1
= \sum_{i=1}^{n-1}\sum_{j=i+1}^{n} j
= \sum_{i=1}^{n-1} \left( \frac{n(n+1)}{2} - \frac{i(i+1)}{2} \right)
= \frac{1}{2} \sum_{i=1}^{n-1} (n^2 + n - i^2 - i)
= \frac{1}{2} \left( (n-1)n^2 + (n-1)n - \sum_{i=1}^{n-1} i^2 - \frac{(n-1)n}{2} \right)
= \frac{1}{2} \left( (n^3 - n) - \left( \frac{n(n+1)(2n+1)}{6} - n^2 \right) - \frac{n^2 - n}{2} \right)
= \frac{1}{12} \left( 6n^3 - 6n - (2n^3 + 3n^2 + n - 6n^2) - 3n^2 + 3n \right)
= \frac{1}{12} \left( 6n^3 - 6n - 2n^3 - 3n^2 - n + 6n^2 - 3n^2 + 3n \right)
= \frac{1}{12} (4n^3 - 4n) = \frac{n^3 - n}{3}

The complexity is O(n^3).
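The closed form is easy to sanity-check by brute force, assuming the function has exactly the loop structure the triple sum describes (the code itself is not reproduced in the text):

```python
def triple_loop(n):
    """Literal transcription of the nested loops being summed:
    i = 1..n-1, j = i+1..n, k = 1..j, counting one unit per step."""
    total = 0
    for i in range(1, n):
        for j in range(i + 1, n + 1):
            for k in range(1, j + 1):
                total += 1
    return total
```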

2-10. Prove that n^3 - 3n^2 - n + 1 = \Theta(n^3).
Solution: O(n^3): n^3 - 3n^2 - n + 1 \leq c \cdot n^3 holds for c = 1 and all n \geq 1, since -3n^2 - n + 1 \leq 0 there.
\Omega(n^3): n^3 - 3n^2 - n + 1 \geq c \cdot n^3 holds for c = \frac{1}{2} and all n \geq 7, since \frac{1}{2} n^3 \geq 3n^2 + n - 1 for n \geq 7. Together these bounds give \Theta(n^3).

2-34. Assume that Christmas has n days. Exactly how many presents did my “true love” send me? (Do some research if you do not understand this question.)
Solution: Writing out the first three days: on day 1 we get 1 present, on day 2 we get 2 + 1 = 3 presents, and on day 3 we get 3 + 2 + 1 = 6 presents.

We can break this down into sub steps. On the ith day we get p_i = \sum_{k=1}^{i} k presents.
The total amount of presents is: \sum_{i=1}^{n} p_i = \sum_{i=1}^{n} \sum_{k=1}^{i} k = \sum_{i=1}^{n} \frac{i^2+i}{2}
This can be simplified as:
= \frac{1}{2} \left( \sum_{i=1}^{n} i^2 + \sum_{i=1}^{n} i \right) = \frac{1}{2} \left( \frac{n(n+1)(2n+1)}{6} + \frac{3n^2 + 3n}{6} \right)
= \frac{1}{12} \left( 2n^3 + 3n^2 + n + 3n^2 + 3n \right)
= \frac{2n^3 + 6n^2 + 4n}{12} = \frac{n^3 + 3n^2 + 2n}{6}
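A quick sanity check of the closed form against direct simulation:

```python
def presents_by_simulation(n):
    # On day i you receive 1 + 2 + ... + i presents; sum over all days.
    return sum(sum(range(1, i + 1)) for i in range(1, n + 1))

def presents_closed_form(n):
    return (n**3 + 3 * n**2 + 2 * n) // 6
```

For the traditional n = 12, both give the well-known total of 364 presents.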

2-39. Prove the following identities on logarithms:
(a) Prove that \log_a(xy) = \log_a x + \log_a y
(b) Prove that \log_a x^y = y \log_a x
(c) Prove that \log x = \frac{\log_b x}{\log_b a}
(d) Prove that x^{\log_b y} = y^{\log_b x}
(a) The first proof is straightforward: a^{\log_a(x) + \log_a(y)} = a^{\log_a(x)} \cdot a^{\log_a(y)} = x \cdot y = a^{\log_a(xy)}
(b) The trick here is to see that, for a positive integer y, x^y = \prod_{i=1}^{y} x. Therefore we can use the identity from (a): \log_a x^y = \log_a (\prod_{i=1}^{y} x) = \sum_{i=1}^{y} \log_a x = y \log_a x
(c) Here we introduce a variable z and rearrange until we reach the desired form:
\log_a x = z \Leftrightarrow a^{\log_a x} = a^z \Leftrightarrow x = a^z
\log_b x = \log_b a^z \Leftrightarrow \log_b x = z \log_b a
\frac{\log_b x}{\log_b a} = z = \log_a x
(d) The last one is quite easy: to show x^{\log_b y} = y^{\log_b x}, we take \log_b of both sides and use the identity from (b). This gives \log_b y \cdot \log_b x = \log_b x \cdot \log_b y, which is true, and since the steps are reversible the identity holds.

2-44. We have 1,000 data items to store on 1,000 nodes. Each node can store copies of exactly three different items. Propose a replication scheme to minimize data loss as nodes fail. What is the expected number of data entries that get lost when three random nodes fail?
Solution: My first idea was a kind of binary tree, which then evolved into the following scheme.

The idea is that each node stores, besides its main item, copies of the items of its corresponding left and right nodes. In the last row of nodes we have some free space, where we can store item10, because it isn't backed up yet.
So what happens if three nodes fail? There are various scenarios.
1. Three corresponding nodes fail, e.g. nodes 1, 2, and 3. Then item11 and item12 are completely lost. (Remember, item10 is saved again further down.)
2. A node and its parent fail, e.g. nodes 1 and 2. Here we just lose item11.
3. A single node fails, e.g. node 3. No problem whatsoever: we can retrieve item12 from node 1.

We have about 500 free slots in the last row of nodes, which could be used to back up half of the total items a second time, reducing the expected loss even further.

2-46. You have a 100-story building and a couple of marbles. You must identify the lowest floor for which a marble will break if you drop it from this floor. How fast can you find this floor if you are given an infinite supply of marbles? What if you have only two marbles?
Solution: The first case, with an infinite supply of marbles, is very easy: we just do a binary search over the floors, which needs at most \lceil \log_2 100 \rceil = 7 drops.
The second case is a bit more demanding, because once the first marble breaks we are left with a single marble and must test the remaining candidate floors one by one from the bottom up. Dropping the first marble at the 50th story narrows the interval quickly if it survives, but if it breaks we have to work our way up from floor 1 to at most floor 49, so we cannot simply keep cutting the interval in half with only two marbles.
A better strategy drops the first marble at floors 14, 27, 39, ..., with steps of decreasing size, so that the linear scan needed after a break shrinks by one floor for each jump already made. This balances the two phases and gives a worst case of 14 drops.

2-50. A Ramanujam number can be written two different ways as the sum of two cubes—i.e. , there exist distinct a, b, c, and d such that a^3 +b^3 = c^3 +d^3. Generate all Ramanujam numbers where a, b, c, d < n. Solution:

The DP approach works much faster than raw brute force, which you can see quite quickly from the complexities: brute force over all quadruples is O(n^4), while the DP only takes O(n^3).
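The code itself is not reproduced here; one common way to generate these numbers, sketched as grouping pairwise cube sums in a dictionary, is:

```python
from collections import defaultdict

def ramanujan_numbers(n):
    """Group all sums a^3 + b^3 with 1 <= a < b < n by value; any sum
    produced by two different pairs is a Ramanujam number. Returns a
    dict mapping each such number to its list of (a, b) pairs."""
    sums = defaultdict(list)
    for a in range(1, n):
        for b in range(a + 1, n):
            sums[a**3 + b**3].append((a, b))
    return {s: pairs for s, pairs in sums.items() if len(pairs) >= 2}
```

The classic example is 1729 = 1^3 + 12^3 = 9^3 + 10^3, the smallest such number.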