True/False 6pts each (48 pts total)
General grading instructions for T/F
Example Solution:
False. A reduction from A to B shows that B is at least as hard as A. So to find new NP-Hard problems, we reduce a known NP-Complete problem to the candidate in order to show that the candidate problem must be at least as hard as any NP-Complete problem. Then to further show the candidate is NP-Complete, we'd show that it's also in NP.
Student Answers:
Example Solution:
True. We stated that we would assume base 2 for all logs in the class unless otherwise noted. So, for example, 2^(log n) = 2^(log_2 n) = n. So, 2^(log n) ∈ Θ(n).
Answer can also be false if the student considered a base less than 2. Recall that a difference in log base is only a constant factor difference. A constant factor difference in an exponent, however, gives us a different asymptotic complexity!
Student Answers:
Example Solution:
False. A graph may have multiple distinct minimum spanning trees. The answer needs to be a counterexample, so consider a complete graph with 3 vertices A, B, and C and equal edge weights between them. A spanning tree for this graph contains 3 vertices and 2 edges. This graph contains 3 distinct minimum spanning trees, since there are 3 different spanning trees and they are all equally minimum: [A-B, B-C], [B-C, C-A], and [C-A, A-B].
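A quick sketch (purely illustrative) that enumerates the 2-edge subsets of this triangle graph and confirms that all 3 are spanning trees of equal weight:

from itertools import combinations

vertices = {"A", "B", "C"}
edges = [("A", "B"), ("B", "C"), ("C", "A")]  # all edge weights equal

def spans(tree):
    # on 3 vertices, a 2-edge subset is a spanning tree iff it touches all 3
    return {v for e in tree for v in e} == vertices

msts = [tree for tree in combinations(edges, 2) if spans(tree)]
print(len(msts))  # 3, and each has the same total weight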
Student Answers: Apparently a tough reading comprehension question. However, it is a question we answered in class and is in the posted week 8 notes.
Example Solution:
True. Trivially, for any graph, we'd need to look at each vertex and edge to determine if it's complete. A counting function to do this just needs to count the number of vertices and edges: if there are n(n-1)/2 undirected edges among n vertices, then the graph is complete.
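A minimal sketch of such a counting check, assuming the graph is given as a vertex count n and a list of undirected edges (the names here are illustrative):

def is_complete(n, edges):
    # a simple undirected graph on n vertices is complete
    # iff it has exactly n*(n-1)/2 distinct edges
    distinct = {frozenset(e) for e in edges}  # ignore duplicates and orientation
    return len(distinct) == n * (n - 1) // 2

print(is_complete(3, [("A", "B"), ("B", "C"), ("C", "A")]))  # True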
Student Answers:
Example Solution:
True. Given that NP-Hard problems are at least as hard as any problem in NP, it follows that NP-Hard problems cannot in general be reduced to 3SAT, which is NP-Complete. Some NP-Hard problems can be reduced to 3SAT: any NP-Complete problem can be reduced to any other NP-Complete problem, and NP-Complete problems are also NP-Hard. However, this can't be done in general, and the key phrase here is 'in general'.
Student Answers:
(-2) If the student answers false but gives a correct definition of NP-Hard and correctly identifies that some NP-Hard problems can be reduced to 3SAT. However, this can't be done in general.
Example Solution:
False. Tractable problems are by definition solvable in polynomial time (i.e. in class P). An algorithm that runs in O(n^(log n)) is not a polynomial-time algorithm. However, that doesn't preclude the possibility that some other algorithm solves the problem in polynomial time.
Student Answers:
Definition 1 in chapter 11.3 defines tractable problems as those in complexity class P, i.e. solvable in polynomial time, and intractable problems as all problems not in P. In particular, we don't know whether problems in NP are tractable or not, since we don't know whether P = NP.
Important to be careful about the reasoning for their answer.
Example Solution:
False. The inner loop could do more than constant work, for instance, which would result in worse than Θ(n^2) time. For example, the following nested loops perform an O(n)-time insert in the inner loop, for O(n^3) time overall:
A = []
for i in range(1, n + 1):
    for j in range(1, n + 1):
        A.insert(0, (i, j))  # inserting at the front of a list is an O(n)-time operation
Student Answers:
Example Solution:
True. Dynamic programming and greedy algorithms both require that the problem has optimal substructure. This is because both strategies need to be able to take the optimal solutions to subproblems to generate optimal solutions to the whole problem.
Student Answers:
Example Solution:
The adjacency list will be more efficient. If a sparse graph has on the order of |V| edges, then DFS with an adjacency list will have Θ(|V|) complexity. The efficiency of DFS depends on your graph representation, i.e. what data structure you need to traverse. For an adjacency list, the complexity is Θ(|V| + |E|), and if |E| ∈ O(|V|), then we get Θ(|V|) for searching a sparse adjacency list. On the other hand, an adjacency matrix is a 2d array. To traverse this thing, even if the graph it represents is sparse, requires Θ(|V|^2) time.
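A sketch of both traversals (assumed representations: a dict of neighbor lists and a list-of-lists matrix), just to make the data-structure dependence concrete:

def dfs_list(adj, start):
    # adjacency list: each vertex is pushed once and each neighbor list
    # is scanned once, so Θ(|V| + |E|) overall
    visited, stack = set(), [start]
    while stack:
        v = stack.pop()
        if v not in visited:
            visited.add(v)
            stack.extend(u for u in adj[v] if u not in visited)
    return visited

def dfs_matrix(matrix, start):
    # adjacency matrix: finding one vertex's neighbors means scanning an
    # entire row, so the traversal costs Θ(|V|^2) even on a sparse graph
    visited, stack = set(), [start]
    while stack:
        v = stack.pop()
        if v not in visited:
            visited.add(v)
            stack.extend(u for u, bit in enumerate(matrix[v]) if bit and u not in visited)
    return visited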
Student Answers:
(-2) to (-3) Depending on how clear it is that the student mixed up adjacency matrix with adjacency list. For example, if the student states that, for a sparse graph, an adjacency matrix gives constant-time access to neighbors since the linked list will be short, this indicates they probably have the two names mixed up, since linked lists belong to the adjacency list representation and that behavior is exactly backwards.
(-5) If student answered adjacency list but thought that DFS or BFS have different running times based on something other than the data structure that holds the graph, e.g. student thought that DFS is more efficient with adjacency lists but BFS is more efficient with an adjacency matrix.
Example Solution:
The bottom-up (or tabulation) dynamic programming algorithm for the knapsack problem stores the results of all subproblems in an (n+1) × (W+1) table, where n is the number of items and W is the knapsack's capacity. It must iterate through each cell in the table to calculate all sub-results, so the efficiency is Θ(nW).
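A minimal tabulation sketch, assuming integer values, weights, and capacity (the names and sample inputs are illustrative, not from the exam):

def knapsack(values, weights, W):
    n = len(values)
    # table[i][c] = best value using the first i items with capacity c
    table = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(W + 1):
            table[i][c] = table[i - 1][c]  # skip item i
            if weights[i - 1] <= c:        # or take item i, if it fits
                table[i][c] = max(table[i][c],
                                  table[i - 1][c - weights[i - 1]] + values[i - 1])
    return table[n][W]  # Θ(nW): one constant-time step per table cell

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220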
Student Answers:
Key idea to the answer is that the tabulation method requires setting up an (n+1) × (W+1) 2d array and must iterate through each index.
Example Solution:
Kruskal's builds a minimum spanning tree by iterating through edges in nondecreasing order by weight. We begin with a forest of trees, where every vertex is a tree with just a root. With each edge considered, we add it if and only if the two vertices of that edge are parts of different trees, making them part of the same tree. To decide if two vertices are part of the same tree, we use a union-find or disjoint set data structure. When all edges have been considered, the vertices along with the edges we added constitute an MST.

Kruskal's runs in O(E log E) time, or equivalently O(E log V) time, since E is at most V^2 and so log E ≤ 2 log V ∈ O(log V). This is because we can sort the edges in O(E log E) time, so considering an edge with min weight can operate in constant time. Deciding whether two vertices are part of the same tree requires O(E) find and union operations total. Using a union-find or disjoint set data structure, we can do each of these in nearly constant amortized time. So Kruskal's runs in O(E log E), or the tighter bound O(E log V), time.
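For reference, a compact sketch of Kruskal's with a union-find using path compression and union by rank (the edge-list input format here is an assumption):

def kruskal(vertices, edges):
    parent = {v: v for v in vertices}
    rank = {v: 0 for v in vertices}

    def find(v):                      # find the root of v's tree,
        while parent[v] != v:         # compressing the path as we go
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    def union(a, b):                  # link the shallower tree under the deeper
        ra, rb = find(a), find(b)
        if ra == rb:
            return False              # same tree: this edge would form a cycle
        if rank[ra] < rank[rb]:
            ra, rb = rb, ra
        parent[rb] = ra
        if rank[ra] == rank[rb]:
            rank[ra] += 1
        return True

    mst = []
    for w, u, v in sorted(edges):     # the O(E log E) sort dominates
        if union(u, v):               # near-constant amortized per edge
            mst.append((u, v, w))
    return mst

edges = [(1, "A", "B"), (1, "B", "C"), (1, "C", "A")]
print(kruskal(["A", "B", "C"], edges))  # one of the 3 MSTs of the triangle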
Student Answers:
5 pts for describing Kruskal's correctly
Key points:
5 pts for describing the complexity correctly
Key points:
def unknown(n):
    if n == 1:
        return 1
    else:
        return unknown(n // 2) + 1  # integer division, i.e. floor(n/2)
a. (3 pts) What does this algorithm compute?
Example Solution:
It computes the number of digits in the binary representation of n, i.e. floor(log2 n) + 1.
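A quick sanity check of this claim against Python's bin(), using the unknown function defined above:

for n in [1, 2, 5, 8, 100]:
    assert unknown(n) == len(bin(n)) - 2  # bin(n) starts with the "0b" prefix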
b. (3 pts) What's the basic operation for this algorithm?
Example Solution:
Floor, division, and addition are all reasonable picks. Division or floor probably represents the most costly basic operation.
c. (3 pts) Is the best case different from the worst case?
Example Solution:
No, they are the same. The input is some number n, and there's no variance in the work performed for a given input size.
d. (6 pts) Find the worst case asymptotic complexity for this algorithm?
Example Solution:
Solve M(n) = M(floor(n/2)) + 1 for n > 1, where M(1) = 1. That ends up being floor(log2 n) + 1, or Θ(log n). A rigorous proof, solving the recurrence relation using backward or forward substitution, or using the master theorem are good solutions; a good informal explanation for the asymptotic complexity is also acceptable.
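For reference, a backward-substitution sketch, assuming n = 2^k so that the floors disappear:

\begin{align*}
M(n) &= M(n/2) + 1 \\
     &= M(n/4) + 2 \\
     &\;\vdots \\
     &= M(n/2^k) + k \\
     &= M(1) + \log_2 n = \log_2 n + 1 \in \Theta(\log n).
\end{align*}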
Student Answers:
Consider the problem of scheduling n jobs of known durations for execution by a single processor. The jobs can be executed in any order, one job at a time. You want to find a schedule that minimizes the total time spent by all the jobs in the system. (The time spent by one job in the system is the sum of the time spent by this job in waiting plus the time spent on its execution.)
a. (6 pts) Describe a greedy algorithm for this problem. (Primarily looking for a greedy rule.)
Example Solution:
Sort the jobs in nondecreasing order by their durations t_i, then schedule them in this order. The greedy rule is to pick the shortest duration first. The idea is that if the shorter jobs are executed first, then later jobs wait less, so we want to minimize the time spent waiting. Notice that the total execution time for the jobs is the same no matter the order, but the amount of time that each job waits to be executed is not.
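A small sketch of the rule and of the objective it minimizes (the durations are made up):

def total_time_in_system(durations):
    # sum over jobs of (waiting time + execution time) for the given order
    total, elapsed = 0, 0
    for d in durations:
        elapsed += d      # this job finishes at time `elapsed`
        total += elapsed  # = its waiting time plus its execution time
    return total

jobs = [5, 3, 2]
print(total_time_in_system(sorted(jobs)))  # shortest first: 2 + 5 + 10 = 17
print(total_time_in_system(jobs))          # unsorted: 5 + 8 + 10 = 23, worse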
Student Answers:
b. (6 pts) Does the greedy algorithm always yield an optimal solution? Explain.
Example Solution:
Yes, it's optimal. Optimal in this case refers to the total time spent executing jobs plus each job's time waiting to be executed. Picking the shortest duration jobs first means that other jobs wait less before they get executed, which minimizes the total amount of time in execution and waiting. An exchange argument makes this precise: if a schedule ever runs a longer job immediately before a shorter one, swapping the two reduces the shorter job's time in the system by more than it increases the longer job's, so only the sorted order can be optimal.
Student Answers: