# The Algorithm Design Manual: Chapter 8

8-2. Suppose you are given three strings of characters: X, Y, and Z, where |X| = n, |Y| = m, and |Z| = n + m. Z is said to be a shuffle of X and Y iff Z can be formed by interleaving the characters from X and Y in a way that maintains the left-to-right ordering of the characters from each string.
(a) Show that cchocohilaptes is a shuffle of chocolate and chips, but chocochilatspe is not.
(b) Give an efficient dynamic-programming algorithm that determines whether Z is a shuffle of X and Y . Hint: the values of the dynamic programming matrix you construct should be Boolean, not numeric.

Solution (a) We can start from the end of Z and move backwards. You can verify cchocohilaptes for yourself. For chocochilatspe, the final e must come from chocolate; after that, the next character (reading backwards) would have to be t (from chocolate) or s (from chips), but it is p, therefore it isn’t a shuffle.
(b)

```python
X = "chocolate"
Y = "chips"

Z1 = "cchocohilaptes"
Z2 = "chocochilatspe"

def isShuffel(X, Y, Z):
    if not Z:
        return True

    if Z[-1:] not in [X[-1:], Y[-1:]]:
        return False

    if Y[-1:] == X[-1:]:  # two possibilities
        return isShuffel(X[:-1], Y, Z[:-1]) or isShuffel(X, Y[:-1], Z[:-1])

    elif Z[-1:] == X[-1:]:
        return isShuffel(X[:-1], Y, Z[:-1])

    else:  # Z[-1:] == Y[-1:]
        return isShuffel(X, Y[:-1], Z[:-1])

print(isShuffel(X, Y, Z1))  # True
print(isShuffel(X, Y, Z2))  # False
```

Here’s the general idea behind the algorithm. I haven’t implemented the DP solution above, mainly because it would make the code messier. Instead of using list slicing, I would work with indices for X, Y and Z (note that the positions in X and Y determine the position in Z, since i + j characters of Z have been consumed). With this method we can easily construct an n × m Boolean matrix which saves the return value of isShuffel for the needed index pairs. If we work with memoization, we can speed up the whole process substantially for larger words.
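The memoized version described above can be sketched like this (the DP the text describes but does not implement; `is_shuffle` and its index convention are my own):

```python
from functools import lru_cache

# dp(i, j) is True iff Z[:i+j] is a shuffle of X[:i] and Y[:j].
def is_shuffle(X, Y, Z):
    if len(Z) != len(X) + len(Y):
        return False

    @lru_cache(maxsize=None)
    def dp(i, j):
        if i == 0 and j == 0:
            return True
        z = Z[i + j - 1]          # the character that must be matched
        return (i > 0 and X[i - 1] == z and dp(i - 1, j)) or \
               (j > 0 and Y[j - 1] == z and dp(i, j - 1))

    return dp(len(X), len(Y))

print(is_shuffle("chocolate", "chips", "cchocohilaptes"))  # True
print(is_shuffle("chocolate", "chips", "chocochilatspe"))  # False
```

Since there are only n · m distinct (i, j) pairs and each is computed once, this runs in O(nm) time.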

8-5. Let P1,P2,…,Pn be n programs to be stored on a disk with capacity D megabytes. Program Pi requires si megabytes of storage. We cannot store them all because $D < \sum_{i=1}^{n} s_i$
(a) Does a greedy algorithm that selects programs in order of nondecreasing si maximize the number of programs held on the disk? Prove or give a counterexample.
(b) Does a greedy algorithm that selects programs in order of nonincreasing order si use as much of the capacity of the disk as possible? Prove or give a counterexample.

Solution (a) Yes. Sort the programs so that $s_1 \leq s_2 \leq \dots \leq s_n$ and suppose the greedy algorithm stores the first $g$ programs. Any solution storing $g + 1$ programs would need at least $\sum_{i=1}^{g+1} s_i$ megabytes, because the $g + 1$ smallest programs have the smallest possible total size, and greedy already found that this sum exceeds $D$. Therefore the greedy algorithm maximizes the number of programs on the disk.

(b) We can easily construct a counter-example. Let’s say we got the following four programs:

Program i         space s
1                   2
2                   3
3                   4
4                   5

And there should be a capacity of $D = 10$. The greedy algorithm (largest first) selects program 4 and then program 3, which leads to $D_{\text{greedy}} = 5 + 4 = 9$; nothing else fits. If we instead select programs 4, 2 and 1 we use $5 + 3 + 2 = 10$, that is, the greedy algorithm doesn’t use as much space as possible.

8-6. Coins in the United States are minted with denominations of 1, 5, 10, 25, and 50 cents. Now consider a country whose coins are minted with denominations of {d1,…,dk} units. We seek an algorithm to make change of n units using the minimum number of coins for this country.
(a) The greedy algorithm repeatedly selects the biggest coin no bigger than the amount to be changed and repeats until it is zero. Show that the greedy algorithm does not always use the minimum number of coins in a country whose denominations are {1, 6, 10}.
(b) Give an efficient algorithm that correctly determines the minimum number of coins needed to make change of n units using denominations {d1,…,dk}. Analyze its running time.

Solution (a) Take for example 12. The algorithm uses 10, 1 and 1. A shorter possibility is 6 and 6.

(b) I start with the example case from (a). We could explore every way of making change as a tree, where the optimal solution is the shortest path down to zero. However, this takes exponential time. The same subamounts reoccur again and again, which makes the problem suitable for dynamic programming: instead of constructing a whole tree, we compute the best answer for each value up to n. What should our calculation look like?

n         Coins
1         1
2         1, 1
3         1, 1, 1
4         1, 1, 1, 1
5         1, 1, 1, 1, 1
6         6
7         6, 1
8         6, 1, 1
9         6, 1, 1, 1
10        10
11        10, 1
12        6, 6
13        6, 6, 1
..        ..

The beauty is that we can look up smaller values when computing a larger n. Let’s take n = 12 as an example: for each coin we look up the best solution for the remainder and keep the smallest combination.

Calculation       Coins?          #Coins
12 -  1 = 11       1 & 10, 1       1 + 2 = 3
12 -  6 =  6       6 &  6          1 + 1 = 2
12 - 10 =  2      10 &  1, 1       1 + 2 = 3

Therefore our recurrence looks like:
$C[n] = 1 + \min_{d_i \leq n} C[n - d_i]$
where C[n] is the minimum number of coins for amount n. Our base case is $C[0] = 0$.
Now for the complexity. For each n we need $O(k)$ tests, therefore we need $O(k \cdot n)$ tests overall in the worst case.
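The recurrence above can be sketched as a bottom-up table (a minimal version; `min_coins` is my own naming):

```python
# C[amount] = 1 + min over denominations d of C[amount - d]
def min_coins(n, denominations):
    INF = float('inf')
    C = [0] + [INF] * n            # base case C[0] = 0
    for amount in range(1, n + 1):
        for d in denominations:
            if d <= amount and C[amount - d] + 1 < C[amount]:
                C[amount] = C[amount - d] + 1
    return C[n]

print(min_coins(12, [1, 6, 10]))  # 2 (6 + 6), where greedy used 3 coins
```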

8-9. The knapsack problem is as follows: given a set of integers S = {s1, s2, . . . , sn}, and a given target number T, find a subset of S that adds up exactly to T. For example, within S = {1,2,5,9,10} there is a subset that adds up to T = 22 but not T = 23.

Solution Basically similar to the coin problem, but with the constraint that each element may be used at most once. We can construct the following table/matrix:

T          calculation        elements of S
1          1 - 1              1
2          2 - 2              2
3          3 - 2 = 1 - 1      2, 1
4          4 - 1 = 3          None, because 1 will be used*
5          5 - 5              5
6          6 - 5 = 1          5, 1
7          7 - 5 = 2          5, 2
8          8 - 5 = 3 - 2 = 1  5, 2, 1
9          9 - 9              9
10         10 - 10            10
11         11 - 10 = 1        10, 1
...


* We test each $s_i \leq T$ and check whether the remainder $T - s_i$ can be formed without reusing $s_i$. For $T = 4$: subtracting 1 leaves 3, but forming 3 requires 2 and 1, reusing the 1; subtracting 2 leaves 2, which again requires the 2. So no subset works.

For each row we need at most $2n$ calculations and checks. There are T rows, therefore our construction time is $O(nT)$.
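The table above can be sketched as a Boolean subset-sum DP; scanning the targets downward is what enforces the use-each-element-at-most-once constraint (the routine name is mine):

```python
# can[t] is True iff some subset of S sums to exactly t.
def subset_sum(S, T):
    can = [False] * (T + 1)
    can[0] = True
    for s in S:
        for t in range(T, s - 1, -1):   # downwards: s used at most once
            if can[t - s]:
                can[t] = True
    return can[T]

print(subset_sum([1, 2, 5, 9, 10], 22))  # True
print(subset_sum([1, 2, 5, 9, 10], 23))  # False
```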

8-10. The integer partition problem takes a set of positive integers S = s1, . . . , sn and asks if
there is a subset I ⊆ S such that
$\sum_{i \in I} s_i = \sum_{i \notin I} s_i = M$

Give an O(nM) dynamic programming algorithm to solve the integer partition problem.

Solution We can use the solution of the knapsack problem for this exercise. Each partition must sum to $M = \frac{1}{2} \sum_{i=1}^{n} s_i$ (if the total is odd, no partition exists), i.e. we just search for this value as T; the complement of the found subset then automatically sums to M as well. This algorithm runs in $O(nM)$ time.
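A compact sketch of this reduction, reusing the subset-sum idea from 8-9 (`can_partition` is my own naming):

```python
# Partition exists iff some subset sums to half the total.
def can_partition(S):
    total = sum(S)
    if total % 2 != 0:               # odd total: no equal split exists
        return False
    M = total // 2
    can = [False] * (M + 1)
    can[0] = True
    for s in S:
        for t in range(M, s - 1, -1):
            can[t] = can[t] or can[t - s]
    return can[M]

print(can_partition([1, 5, 11, 5]))  # True: {1, 5, 5} and {11}
print(can_partition([1, 2, 3, 5]))   # False: total 11 is odd
```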

8-13. Consider the following data compression technique. We have a table of m text strings, each at most k in length. We want to encode a data string D of length n using as few text strings as possible. For example, if our table contains (a,ba,abab,b) and the data string is bababbaababa, the best way to encode it is (b,abab,ba,abab,a)— a total of five code words. Give an O(nmk) algorithm to find the length of the best encoding. You may assume that every string has at least one encoding in terms of the table.

Solution We can use dynamic programming again. Let best[i] be the fewest code words needed to encode the prefix D[1..i], with best[0] = 0. For each position i we try every table string s: if s matches the characters of D ending at position i, then best[i] can be best[i − |s|] + 1, and we keep the minimum. Each of the n positions tests m strings, and each comparison costs at most k character checks, giving $O(nmk)$ total.
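The prefix DP just described can be sketched as follows (`min_encoding` is my own naming):

```python
# best[i] = fewest code words covering the prefix D[:i].
def min_encoding(D, table):
    n = len(D)
    INF = float('inf')
    best = [0] + [INF] * n
    for i in range(1, n + 1):
        for s in table:                       # m strings ...
            length = len(s)                   # ... each of length <= k
            if length <= i and D[i - length:i] == s \
                    and best[i - length] + 1 < best[i]:
                best[i] = best[i - length] + 1
    return best[n]

print(min_encoding("bababbaababa", ["a", "ba", "abab", "b"]))  # 5
```

The example reproduces the five code words (b, abab, ba, abab, a) from the problem statement.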

8-15. Eggs break when dropped from great enough height. Specifically, there must be a floor f in any sufficiently tall building such that an egg dropped from the fth floor breaks, but one dropped from the (f − 1)st floor will not. If the egg always breaks, then f = 1. If the egg never breaks, then f = n + 1.
You seek to find the critical floor f using an n-story building. The only operation you can perform is to drop an egg off some floor and see what happens. You start out with k eggs, and seek to drop eggs as few times as possible. Broken eggs cannot be reused. Let E(k,n) be the minimum number of egg droppings that will always suffice.
(a) Show that E(1, n) = n.
(b) Show that E(k,n) = Θ(n^{1/k}).
(c) Find a recurrence for E(k,n). What is the running time of the dynamic program to find E(k, n)?

Solution (a) With only one egg, the only safe strategy is to start at the first floor and try each consecutive floor until the egg breaks. In the worst case this takes $E(1, n) = n$ droppings.
(b) The idea is to split the floors into equally sized intervals. Drop the first egg at floors $n^{(k-1)/k}, 2n^{(k-1)/k}, \dots$; after at most $n^{1/k}$ drops we have located f within an interval of $n^{(k-1)/k}$ floors, which we search recursively with the remaining $k - 1$ eggs. This gives $E(k, n) \leq n^{1/k} + E(k-1, n^{(k-1)/k})$, which unrolls to $E(k, n) = O(k \cdot n^{1/k})$. For the lower bound, with d drops and at most k eggs there are at most $O(d^k)$ distinguishable outcomes, so $d^k \geq n$, i.e. $d = \Omega(n^{1/k})$. Hence $E(k, n) = \Theta(n^{1/k})$ for fixed k.

(c) Let the first drop be from floor i. If the egg breaks we must search the $i - 1$ floors below with $k - 1$ eggs; if it survives we search the $n - i$ floors above with all k eggs. The worst case decides, so $E(k, n) = 1 + \min_{1 \leq i \leq n} \max(E(k-1, i-1), E(k, n-i))$ with base cases $E(1, n) = n$ and $E(k, 0) = 0$. The table has $k \cdot n$ entries and each entry minimizes over n choices, so the dynamic program runs in $O(kn^2)$ time.
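The recurrence from (c) can be sketched with memoization (a direct transcription, not optimized):

```python
from functools import lru_cache

# Try each floor i for the first drop; the egg breaks (k-1 eggs,
# i-1 floors below) or survives (k eggs, n-i floors above), and
# the worst of the two cases counts.
@lru_cache(maxsize=None)
def E(k, n):
    if n == 0:
        return 0
    if k == 1:
        return n
    return 1 + min(max(E(k - 1, i - 1), E(k, n - i))
                   for i in range(1, n + 1))

print(E(1, 10))  # 10
print(E(2, 10))  # 4
```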

8-16. Consider a city whose streets are defined by an X × Y grid. We are interested in walking from the upper left-hand corner of the grid to the lower right-hand corner.
Unfortunately, the city has bad neighborhoods, whose intersections we do not want to walk in. We are given an X × Y matrix BAD, where BAD[i,j] = “yes” if and only if the intersection between streets i and j is in a neighborhood to avoid.
(a) Give an example of the contents of BAD such that there is no path across the grid avoiding bad neighborhoods.
(b) Give an O(XY ) algorithm to find a path across the grid that avoids bad neighborhoods.
(c) Give an O(XY) algorithm to find the shortest path across the grid that avoids bad neighborhoods. You may assume that all blocks are of equal length. For partial credit, give an O(X^2 Y^2) algorithm.

Solution (a) B means BAD, G means good

GBB
BBG

(b) We can implement a basic breadth-first search which visits each block at most once (using a visited array). We start at the upper left corner and put each adjacent good block in a queue. If we reach the lower right corner we have found a path; if the queue empties without reaching it, there is no path. Each intersection enters the queue at most once, so this takes O(XY) time.

(c) We can use a modified version of (b) for this task. We keep an extra cost matrix which is initialized with 0 for each good block and ‘X’ for each bad one. Whenever we take a block [k, l] from the queue and put its unvisited adjacent blocks on the queue, we set their cost entry to C[k,l] + 1. At the end, we get a cost matrix which allows us to find the shortest paths.
But let’s look at an example first.

G G B G B G       0 1 X 7 X 9
B G B G G G       X 2 X 6 7 8
B G G G B G       X 3 4 5 X 9
G B G B B G       0 X 5 X X 10

On the left is the initial matrix and on the right is the cost matrix. (The 0 at the lower left block remains from initialization; that block is unreachable.) We can easily find the shortest path by starting at the target [i, j] and traversing backwards, always choosing a neighbor whose cost is one less than the current block’s.
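The BFS cost computation can be sketched like this; it reproduces the cost matrix of the example above (the Boolean grid encoding is my own):

```python
from collections import deque

# cost[i][j] = block distance from the start, or None if unreached.
def grid_costs(grid):                  # grid[i][j] is True if BAD
    X, Y = len(grid), len(grid[0])
    cost = [[None] * Y for _ in range(X)]
    cost[0][0] = 0
    queue = deque([(0, 0)])
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < X and 0 <= b < Y and not grid[a][b] \
                    and cost[a][b] is None:
                cost[a][b] = cost[i][j] + 1
                queue.append((a, b))
    return cost

bad = [[c == 'B' for c in row] for row in
       ["GGBGBG", "BGBGGG", "BGGGBG", "GBGBBG"]]
print(grid_costs(bad)[3][5])  # 10, matching the cost matrix above
```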

8-17. Consider the same situation as the previous problem. We have a city whose streets are defined by an X × Y grid. We are interested in walking from the upper left-hand corner of the grid to the lower right-hand corner. We are given an X × Y matrix BAD, where BAD[i,j] = “yes” if and only if the intersection between streets i and j is somewhere we want to avoid.
If there were no bad neighborhoods to contend with, the shortest path across the grid would have length (X − 1) + (Y − 1) blocks, and indeed there would be many such paths across the grid. Each path would consist of only rightward and downward moves.
Give an algorithm that takes the array BAD and returns the number of safe paths of length X + Y − 2. For full credit, your algorithm must run in O(XY ).

Solution Let’s start with an example with three paths of length X + Y – 2.

G G G G G      0 1 2 3 4
G G B G G      1 2 X 4 5
G B G B G      2 X 6 X 6
G G G G G      3 4 5 6 7

We started with cost 0 at [0, 0], so the target has cost X + Y − 2 on every monotone path. To count the paths we use a second matrix P, where P[i, j] is the number of safe right/down paths from [0, 0] to [i, j]: P[0, 0] = 1, bad blocks get 0, and otherwise P[i, j] = P[i − 1, j] + P[i, j − 1], since every monotone path enters [i, j] from above or from the left. Filling the matrix row by row takes O(XY), and P[X − 1, Y − 1] = 3 in the example above.
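The counting recurrence can be sketched as follows (`count_paths` is my own naming):

```python
# P[i][j] = number of monotone (right/down) safe paths to [i][j].
def count_paths(grid):                 # grid[i][j] is True if BAD
    X, Y = len(grid), len(grid[0])
    P = [[0] * Y for _ in range(X)]
    for i in range(X):
        for j in range(Y):
            if grid[i][j]:
                continue               # bad blocks admit no path
            if i == 0 and j == 0:
                P[i][j] = 1
            else:
                P[i][j] = (P[i - 1][j] if i > 0 else 0) + \
                          (P[i][j - 1] if j > 0 else 0)
    return P[X - 1][Y - 1]

bad = [[c == 'B' for c in row] for row in
       ["GGGGG", "GGBGG", "GBGBG", "GGGGG"]]
print(count_paths(bad))  # 3
```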

8-18. Consider the problem of storing n books on shelves in a library. The order of the books is fixed by the cataloging system and so cannot be rearranged. Therefore, we can speak of a book bi, where 1 ≤ i ≤ n, that has a thickness ti and height hi. The length of each bookshelf at this library is L.
Suppose all the books have the same height h (i.e. , h = hi = hj for all i,j) and the shelves are all separated by a distance of greater than h, so any book fits on any shelf. The greedy algorithm would fill the first shelf with as many books as we can until we get the smallest i such that bi does not fit, and then repeat with subsequent shelves. Show that the greedy algorithm always finds the optimal shelf placement, and analyze its time complexity.

Solution We can’t rearrange books, so the only thing we can control is where the shelf breaks fall. Suppose the greedy algorithm puts books $b_1, \dots, b_{i_1}$ on the first shelf. Any other placement puts at most $i_1$ books there, since greedy packed the shelf as full as possible. By induction, after every shelf greedy has shelved at least as many books as any alternative placement, so greedy never needs more shelves than the optimum. The algorithm looks at each book exactly once, so it runs in O(n) time.
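The greedy procedure itself is a few lines (a sketch; `shelve` and the list-of-thicknesses interface are my own):

```python
# Fill each shelf until the next book no longer fits, then open a new one.
def shelve(thicknesses, L):
    shelves, current, used = [], [], 0
    for t in thicknesses:
        if used + t > L:               # book doesn't fit: new shelf
            shelves.append(current)
            current, used = [], 0
        current.append(t)
        used += t
    shelves.append(current)
    return shelves

print(shelve([3, 5, 4, 4, 2, 6], 10))  # [[3, 5], [4, 4, 2], [6]]
```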

8-24. Given a set of coin denominators, find the minimum number of coins to make a certain amount of change.
Solution See 8-6.

8-25. You are given an array of n numbers, each of which may be positive, negative, or zero. Give an efficient algorithm to identify the index positions i and j maximizing the sum of the ith through jth numbers.
Solution We can implement a pretty naive algorithm which tests every possibility.

```
max := array[1]
besti := 1
bestj := 1
for i := 1 to n
    for j := i to n
        sum := calculateSum(i, j)
        if sum > max
            max := sum
            besti := i
            bestj := j
```

where calculateSum(i,j) looks like:

```
function calculateSum(i, j)
    sum := 0
    for k := i to j
        sum := sum + array[k]
    return sum
```

We can speed up calculateSum() using a matrix which saves the sum from i to j as M[i, j]. With the base case M[i, i] = array[i] there are two possibilities to calculate it:

(1) M[i, j] = array[i] + M[i + 1, j]    if i < j
(2) M[i, j] = M[i, j - 1] + array[j]    if i < j

This brings the total running time down to $O(n^2)$.
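Instead of materializing the full matrix, we can keep only a running sum while extending j, mirroring recurrence (2). A sketch (`max_interval` is my own naming):

```python
# O(n^2): extend the sum incrementally instead of recomputing it.
def max_interval(array):
    best, best_i, best_j = array[0], 0, 0
    for i in range(len(array)):
        total = 0                      # running M[i, j]
        for j in range(i, len(array)):
            total += array[j]          # M[i, j] = M[i, j-1] + array[j]
            if total > best:
                best, best_i, best_j = total, i, j
    return best_i, best_j, best

print(max_interval([2, -3, 4, -1, 5, -8]))  # (2, 4, 8)
```

(As an aside, Kadane’s algorithm solves this problem in O(n), but the quadratic version matches the matrix idea above.)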

# The Algorithm Design Manual: Chapter 7

7-1. A derangement is a permutation p of {1, . . . , n} such that no item is in its proper position, i.e. p_i ≠ i for all 1 ≤ i ≤ n. Write an efficient backtracking program with pruning that constructs all the derangements of n items.

Solution

```python
def testSolution(lst, nmax):
    for i, x in enumerate(lst):
        if lst[i] == (i + 1 + (nmax - len(lst))):  # positions start with 1
            return False
    return True

def genPermutation(lst, n, nmax):
    if n == 2:
        for i, x in enumerate(lst):
            yield [lst[i]] + lst[0:i] + lst[i + 1:]
    else:
        for i, x in enumerate(lst):
            for x in genPermutation(lst[0:i] + lst[i + 1:], n - 1, nmax):
                testCandidate = [lst[i]] + x
                if testSolution(testCandidate, nmax):  # prune early
                    yield testCandidate

for x in genPermutation([1, 2, 3, 4], 4, 4):
    print(x)
```

7-4. Anagrams are rearrangements of the letters of a word or phrase into a different word or phrase. Sometimes the results are quite striking. For example, “MANY VOTED BUSH RETIRED” is an anagram of “TUESDAY NOVEMBER THIRD,” which correctly predicted the result of the 1992 U.S. presidential election. Design and implement an algorithm for finding anagrams using combinatorial search and a dictionary.

Solution

```python
def checkWord(reference, possible):
    tmp = reference[:]

    for c in possible:  # are there enough characters left?
        if c not in reference:
            return False
        else:
            try:
                tmp.remove(c)
            except ValueError:
                return False
    if tmp == []:
        return True
    else:
        return tmp  # remaining characters still to be matched

def genAnagram(anagram, possibleWords):
    for word in possibleWords:
        x = checkWord(anagram, word)
        if x == True:
            yield [word]
        elif x:
            for y in genAnagram(x, possibleWords):
                yield [word] + [' '] + y

anagram = "MANY VOTED BUSH RETIRED"
possibleWords = []

# `dictionary` is assumed to be an iterable of words, e.g. a word file:
dictionary = open('/usr/share/dict/words')

for word in dictionary:
    word = word.upper().strip()
    if len(word) < 4:
        continue

    tmp = list("".join(anagram.split()))
    skip = False
    for c in word:
        if c not in anagram:
            skip = True
            break
        else:
            try:
                tmp.remove(c)
            except ValueError:
                skip = True
                break
    if skip:
        continue
    else:
        possibleWords.append(word)

anagram = [x for x in anagram.strip() if x != ' ']
for x in genAnagram(anagram, possibleWords):
    print("".join(x))
```

7-8. Design and implement an algorithm for solving the maximum satisfiability problem discussed in Section 14.10 (page 472).

Solution

```python
from random import choice

def genSAT(f, n):
    if n == 0:
        yield []
    else:
        for v in [False, True]:
            for x in genSAT(f, n - 1):
                yield [v] + x

def solveSAT(f, n):
    for y in genSAT(f, n):
        if f(y):
            yield y

def solveSATHeur(f, n):
    for i in range(0, 2 ** n):
        y = [choice([False, True]) for i in range(n)]
        if f(y):
            return y

f = lambda y: (y[0] and y[1]) or (y[2] and not y[0])
print([x for x in solveSAT(f, 3)])

print(solveSATHeur(f, 3))
```



7-14. Write a function to find all permutations of the letters in a particular string.
Solution Basically like in 7-1.

7-16. An anagram is a rearrangement of the letters in a given string into a sequence of dictionary words, like Steven Skiena into Vainest Knees. Propose an algorithm to construct all the anagrams of a given string.
Solution See 7-4.

7-19. Use a random number generator (rng04) that generates numbers from {0, 1, 2, 3, 4} with equal probability to write a random number generator that generates numbers from 0 to 7 (rng07) with equal probability. What are expected number of calls to rng04 per call of rng07?

Solution The basic idea is to turn rng04 into a fair coin: take 0, 1 from rng04 and encode it as zero, take 2, 3 and encode it as one, and on a 4 throw the result away and draw again. Zero and one now happen with equal probability. Three such fair bits, read as a binary number, give a uniform value from 0 to 7.
How many calls do we need? Let’s first analyze one fair bit.

Prob                             #Calls
4/5                                 1
1/5 * 4/5                           2
(1/5)^2 * (4/5)                     3
(1/5)^3 * (4/5)                     4
...
(1/5)^k * (4/5)                     k + 1

That is, one fair bit needs $\sum_{k=0}^{\infty} (k + 1) \left(\frac{1}{5}\right)^k \frac{4}{5} = \frac{4}{5} \cdot \frac{1}{(1 - 1/5)^2} = \frac{5}{4}$ calls in expectation. Therefore we need $3 \cdot \frac{5}{4} = \frac{15}{4} = 3.75$ expected calls to rng04 per call of rng07.
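The rejection scheme can be sketched like this (`random.randint` stands in for the given rng04):

```python
import random

def rng04():                      # given: uniform on {0, 1, 2, 3, 4}
    return random.randint(0, 4)

def fair_bit():                   # 0/1 with equal probability
    while True:
        r = rng04()
        if r < 4:                 # reject 4 and redraw
            return r // 2         # {0,1} -> 0, {2,3} -> 1

def rng07():                      # three fair bits form a number 0..7
    return 4 * fair_bit() + 2 * fair_bit() + fair_bit()
```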

# The Algorithm Design Manual: Chapter 6

6-3. Assume that all edges in the graph have distinct edge weights (i.e. , no pair of edges have the same weight). Is the path between a pair of vertices in a minimum spanning tree necessarily a shortest path between the two vertices in the full graph? Give a proof or a counterexample.

Our minimum spanning tree is marked by the thick edges. We are interested in the path between B and D, which runs B -> C -> D with weight 2 + 3 = 5 in the tree, while the direct edge (B, D) has weight b. If b is smaller than 5, the MST (minimum spanning tree) path isn’t a shortest path. Would a small b change the MST? The tree would use B -> D -> C instead of B -> C -> D if $b + 3 < 2 + 3$, i.e. $b < 2$, and then the problem disappears. In conclusion, the MST path isn’t a shortest path whenever $2 < b < 5$, so the answer is no in general.

6-6. Suppose we are given the minimum spanning tree T of a given graph G (with n vertices and m edges) and a new edge e = (u, v) of weight w that we will add to G. Give an efficient algorithm to find the minimum spanning tree of the graph G + e. Your algorithm should run in O(n) time to receive full credit.

Solution

We are interested in the unique path from u to v in T. Adding e = (u, v) closes a cycle consisting of that path plus e. A minimum spanning tree of G + e is obtained by deleting the heaviest edge on this cycle: if w is smaller than the heaviest path edge we swap them in, otherwise T stays unchanged. Finding the path and its heaviest edge is a single traversal of the n − 1 tree edges, so this runs in O(n). (In the pictured example the path u -> t -> v competes with the new edge u -> v; the rest of the tree isn’t affected.)
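A sketch of that O(n) procedure, with the tree given as adjacency lists of (neighbor, weight) pairs (this representation and `update_mst` are my own):

```python
# Adding e = (u, v) creates one cycle; drop the heaviest edge on the
# tree path from u to v if it outweighs w, otherwise keep T as-is.
def update_mst(T, u, v, w):
    # find the u -> v path by DFS over the n-1 tree edges
    parent = {u: (None, 0)}
    stack = [u]
    while stack:
        x = stack.pop()
        for y, wt in T[x]:
            if y not in parent:
                parent[y] = (x, wt)
                stack.append(y)
    # heaviest edge while walking back from v to u
    heavy, hw, x = None, -1, v
    while parent[x][0] is not None:
        p, wt = parent[x]
        if wt > hw:
            heavy, hw = (p, x), wt
        x = p
    if w < hw:
        return ('replace', heavy, (u, v))   # swap heaviest edge for e
    return ('keep', None, None)

T = {'u': [('t', 5)], 't': [('u', 5), ('v', 1)], 'v': [('t', 1)]}
print(update_mst(T, 'u', 'v', 3))  # ('replace', ('u', 't'), ('u', 'v'))
```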

6-7. (a) Let T be a minimum spanning tree of a weighted graph G. Construct a new graph G′ by adding a weight of k to every edge of G. Do the edges of T form a minimum spanning tree of G′? Prove the statement or give a counterexample.

(b) Let P = {s, . . . , t} describe a shortest weighted path between vertices s and t of a weighted graph G. Construct a new graph G′ by adding a weight of k to every edge of G. Does P describe a shortest path from s to t in G′? Prove the statement or give a counterexample.

Solution (a) Yes. Every spanning tree of G has exactly n − 1 edges, so adding k to every edge adds exactly (n − 1)k to the total weight of every spanning tree. The relative order of the spanning trees is therefore unchanged, and T is still minimum in G′.

(b) No, see the counterexample in the comments by Matus Goljer, thanks for that! The point is that paths can have different numbers of edges, so they shift by different amounts: a direct edge s -> t of weight 3 loses to a two-edge path of weight 1 + 1 = 2, but after adding k = 2 the direct edge costs 5 while the path costs 6, and the shortest path changes.

6-23. Arbitrage is the use of discrepancies in currency-exchange rates to make a profit. For example, there may be a small window of time during which 1 U.S. dollar buys 0.75 British pounds, 1 British pound buys 2 Australian dollars, and 1 Australian dollar buys 0.70 U.S. dollars. At such a time, a smart trader can trade one U.S. dollar and end up with 0.75 × 2 × 0.7 = 1.05 U.S. dollars—a profit of 5%. Suppose that there are n currencies c1 , …, cn and an n × n table R of exchange rates, such that one unit of currency ci buys R[i,j] units of currency cj. Devise and analyze an algorithm to determine the maximum value of

$R[c_1, c_{i_1}] \cdot R[c_{i_1}, c_{i_2}] \cdots R[c_{i_{k-1}}, c_{i_k}] \cdot R[c_{i_k}, c_1]$

Solution Here’s the example as a graph.

The first problem is that shortest-path algorithms add edge values instead of multiplying them. This can be solved by using logarithms: $\log(a \cdot b \cdot c) = \log(a) + \log(b) + \log(c)$. Give each edge (i, j) the weight $-\log R[i, j]$; maximizing the product then means minimizing the sum, and a profitable cycle (product > 1) becomes a negative-weight cycle, which Bellman-Ford detects in $O(n^3)$ on the complete currency graph.
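A sketch of the negative-cycle detection on the example rates (the off-example entries of R are filled in consistently by me):

```python
import math

# Weight each edge (i, j) with -log R[i][j]; an arbitrage opportunity
# is exactly a negative-weight cycle, detected by one extra round of
# Bellman-Ford relaxation.
def has_arbitrage(R):
    n = len(R)
    dist = [0.0] * n
    for _ in range(n):                       # n relaxation rounds
        for i in range(n):
            for j in range(n):
                if dist[i] - math.log(R[i][j]) < dist[j] - 1e-12:
                    dist[j] = dist[i] - math.log(R[i][j])
    # one more pass: any further relaxation means a negative cycle
    for i in range(n):
        for j in range(n):
            if dist[i] - math.log(R[i][j]) < dist[j] - 1e-12:
                return True
    return False

# USD, GBP, AUD from the example: 1 USD -> 0.75 GBP, 1 GBP -> 2 AUD,
# 1 AUD -> 0.70 USD
R = [[1.0, 0.75, 1.5], [4.0 / 3, 1.0, 2.0], [0.7, 0.525, 1.0]]
print(has_arbitrage(R))  # True: 1 * 0.75 * 2 * 0.7 = 1.05 > 1
```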

# The Algorithm Design Manual: Chapter 5

5-6. In breadth-first and depth-first search, an undiscovered node is marked discovered when it is first encountered, and marked processed when it has been completely searched. At any given moment, several nodes might be simultaneously in the discovered state.
(a) Describe a graph on n vertices and a particular starting vertex v such that Θ(n) nodes are simultaneously in the discovered state during a breadth-first search starting from v.
(b) Describe a graph on n vertices and a particular starting vertex v such that Θ(n) nodes are simultaneously in the discovered state during a depth-first search starting from v.
(c) Describe a graph on n vertices and a particular starting vertex v such that at some point Θ(n) nodes remain undiscovered, while Θ(n) nodes have been processed during a depth-first search starting from v. (Note, there may also be discovered nodes.)

Solution. (a) A star: the starting vertex v connected to n − 1 leaves. After v is taken from the queue, all n − 1 leaves are simultaneously discovered.
(b) A path of n vertices, starting the DFS at one end: every vertex on the recursion stack is discovered but not yet processed, so Θ(n) nodes are discovered at once.
(c) A path of n vertices, starting the DFS in the middle: once one half has been fully explored and the recursion has unwound, Θ(n) nodes are processed while the other half is still undiscovered.

5-7. Given pre-order and in-order traversals of a binary tree, is it possible to reconstruct the tree? If so, sketch an algorithm to do it. If not, give a counterexample. Repeat the problem if you are given the pre-order and post-order traversals.

Solution I made this example to simplify the process of reconstructing the binary tree.

It’s essential to understand preorder and inorder traversal. Let’s take a look at the preorder sequence: A B C D E F
We know that A has to be our root. We don’t know yet whether B is the right or left child, so let’s look at the inorder sequence: C B D A F E
A is in the 4th position, i.e. C, B and D have to be in the left subtree, therefore B is the left child of A. Within the left subtree, the inorder sequence C B D tells us that C is B’s left child and D is B’s right child.
Now we have to place E and F, which belong to the right subtree of A. The preorder sequence shows E first, so E is the root of the right subtree. Finally, F appears before E in the inorder sequence, therefore F has to be the left child of E, and we’re done. In general: the first preorder symbol is the root, its position in the inorder sequence splits the rest into left and right subtree, and we recurse.
With pre-order and post-order alone the tree is not always reconstructible: a root with a single child produces the same pre-order and post-order whether that child is a left or a right child.
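The walkthrough above can be sketched as a recursive reconstruction (tuples `(root, left, right)` are my own tree encoding):

```python
# The first preorder symbol is the root; its position in the inorder
# sequence splits the remaining symbols into left and right subtree.
def rebuild(preorder, inorder):
    if not preorder:
        return None
    root = preorder[0]
    k = inorder.index(root)
    left = rebuild(preorder[1:k + 1], inorder[:k])
    right = rebuild(preorder[k + 1:], inorder[k + 1:])
    return (root, left, right)

print(rebuild("ABCDEF", "CBDAFE"))
# ('A', ('B', ('C', None, None), ('D', None, None)),
#  ('E', ('F', None, None), None))
```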

5-8. Present correct and efficient algorithms to convert an undirected graph G be- tween the following graph data structures. You must give the time complexity of each algorithm, assuming n vertices and m edges.
(b) Convert from an adjacency list to an incidence matrix. An incidence matrix M has a row for each vertex and a column for each edge, such that M[i,j] = 1 if vertex i is part of edge j, otherwise M[i,j] = 0.
(c) Convert from an incidence matrix to adjacency lists.

Solution (a) Converting from an adjacency matrix to adjacency lists:

```
for y := 1 to n
    for x := y to n
        if matrix[x][y] = 1
            addEdge(x, y)
```

Which runs in $O(n^2)$ time.

(b)

```
initialize matrix[n][m] with 0    // one row per vertex, one column per edge
j := 1
for each edge (from, to) in the adjacency lists
    matrix[from][j] := 1
    matrix[to][j] := 1
    j := j + 1
```

We need $O(mn)$ for initializing the matrix and the loop needs $O(m)$ therefore the algorithm needs $O(mn)$ time.

(c)

```
for j := 1 to m                   // scan each edge column
    start := None
    end := None
    for i := 1 to n
        if matrix[i][j] = 1
            if start = None
                start := i
            else
                end := i
    addEdge(start, end)
```

This algorithm takes $O(nm)$ time, since it scans all n vertices for each of the m edge columns.

5-9. Suppose an arithmetic expression is given as a tree. Each leaf is an integer and each internal node is one of the standard arithmetical operations (+,−,∗,/). For example, the expression 2 + 3 ∗ 4 + (3 ∗ 4)/5 is represented by the tree in Figure 5.17(a). Give an O(n) algorithm for evaluating such an expression, where there are n nodes in the tree.

Solution

```
function evaluateTree(tree)
    if tree is a leaf
        return tree->value

    if tree->left-subtree is an expression
        left := evaluateTree(tree->left-subtree)
    else
        left := tree->left-subtree->value

    if tree->right-subtree is an expression
        right := evaluateTree(tree->right-subtree)
    else
        right := tree->right-subtree->value

    return evaluate(tree->operator, left, right)
```

Every node is visited exactly once, so this runs in O(n).

5-18. Consider a set of movies M1, M2, . . . , Mk. There is a set of customers, each one of which indicates the two movies they would like to see this weekend. Movies are shown on Saturday evening and Sunday evening. Multiple movies may be screened at the same time.
You must decide which movies should be televised on Saturday and which on Sunday, so that every customer gets to see the two movies they desire. Is there a schedule where each movie is shown at most once? Design an efficient algorithm to find such a schedule if one exists.

Solution We can model this problem as a graph problem. Each node is a movie and each edge (x, y) represents a person who wants to watch movie x and y.

For example: people want to watch movies 1 & 3, 1 & 4, and 2 & 4. Here we can show movies 1 and 2 on Saturday and movies 3 and 4 on Sunday, because there is no edge between 1 and 2 or between 3 and 4, i.e. nobody wants to see both 1 and 2 or both 3 and 4.
Let’s add an edge between movie 1 and 2 and see what happens.

Our last schedule doesn’t work anymore, because movie 1 and movie 2 would be screened on the same day; in fact the odd cycle 1-4-2 makes any schedule impossible. What we’re looking for is a bipartite graph, i.e. a 2-coloring of the movie graph, which is discussed earlier in the book.
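The bipartiteness test doubles as the scheduler: a BFS 2-coloring assigns days directly (a sketch; `schedule` and its interface are my own):

```python
from collections import deque

# 2-color the movie graph with BFS; a valid coloring is the
# Saturday/Sunday schedule, and an odd cycle means none exists.
def schedule(movies, wishes):          # wishes: list of (x, y) pairs
    adj = {m: [] for m in movies}
    for x, y in wishes:
        adj[x].append(y)
        adj[y].append(x)
    day = {}
    for m in movies:                   # handle disconnected components
        if m in day:
            continue
        day[m] = 'Sat'
        queue = deque([m])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if y not in day:
                    day[y] = 'Sun' if day[x] == 'Sat' else 'Sat'
                    queue.append(y)
                elif day[y] == day[x]:
                    return None        # odd cycle: no schedule exists
    return day

print(schedule([1, 2, 3, 4], [(1, 3), (1, 4), (2, 4)]))
# {1: 'Sat', 3: 'Sun', 4: 'Sun', 2: 'Sat'}
```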

5-23. Your job is to arrange n ill-behaved children in a straight line, facing front. You are given a list of m statements of the form “i hates j”. If i hates j, then you do not want put i somewhere behind j, because then i is capable of throwing something at j.
Give an algorithm that orders the line, (or says that it is not possible) in O(m + n) time.

Solution We can save ourselves a lot of time if we model the problem correctly.

In this example 1 hates 2 and 3, 2 hates 4, 3 hates 4, and 4 hates 5. Build a directed graph with an edge from i to j for each “i hates j”; a valid line-up is exactly a topological ordering of this graph, which exists iff the graph is acyclic and can be computed in O(m + n) with topological sorting, which is also described earlier in the book.
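The line-up for the in-text example can be sketched with Kahn’s algorithm (the naming is mine):

```python
from collections import deque

# "i hates j" means i must stand in front of j, so edges run i -> j.
def order_children(n, hates):          # children 1..n, hates: (i, j) pairs
    adj = [[] for _ in range(n + 1)]
    indeg = [0] * (n + 1)
    for i, j in hates:
        adj[i].append(j)
        indeg[j] += 1
    queue = deque(c for c in range(1, n + 1) if indeg[c] == 0)
    order = []
    while queue:
        c = queue.popleft()
        order.append(c)
        for j in adj[c]:
            indeg[j] -= 1
            if indeg[j] == 0:
                queue.append(j)
    return order if len(order) == n else None   # None: cyclic hatred

print(order_children(5, [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]))
# [1, 2, 3, 4, 5]
```

Each child and each statement is handled a constant number of times, giving the required O(m + n).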