Prerequisite Concepts for Data Structures

Algorithms (Version 4)

Concept: mathematics notation

1. ${\mathrm{log}}_{2}n$ is:
1. $\Theta \left({\mathrm{log}}_{10}n\right)$
2. $o\left({\mathrm{log}}_{10}n\right)$
3. $\omega \left({\mathrm{log}}_{10}n\right)$
1. ${\mathrm{log}}_{2}n$ is equal to:
1. $\genfrac{}{}{0.1ex}{}{{\mathrm{log}}_{2}n}{{\mathrm{log}}_{10}2}$
2. $\genfrac{}{}{0.1ex}{}{{\mathrm{log}}_{10}n}{{\mathrm{log}}_{10}2}$
3. $\genfrac{}{}{0.1ex}{}{{\mathrm{log}}_{2}n}{{\mathrm{log}}_{2}10}$
4. $\genfrac{}{}{0.1ex}{}{{\mathrm{log}}_{10}n}{{\mathrm{log}}_{2}10}$
1. $\mathrm{log}\left(nm\right)$ is equal to:
1. $\mathrm{log}n+\mathrm{log}m$
2. ${\left(\mathrm{log}n\right)}^{m}$
3. $n\mathrm{log}m$
4. $m\mathrm{log}n$
1. $\mathrm{log}\left({n}^{m}\right)$ is equal to:
1. $\mathrm{log}n+\mathrm{log}m$
2. ${\left(\mathrm{log}n\right)}^{m}$
3. $n\mathrm{log}m$
4. $m\mathrm{log}n$
1. ${\mathrm{log}}_{2}2$ can be simplified to:
1. 1
2. 4
3. 2
4. ${\mathrm{log}}_{2}2$ cannot be simplified any further
1. ${2}^{{\mathrm{log}}_{2}n}$ is equal to:
1. n
2. ${2}^{n}$
3. ${n}^{2}$
4. ${\mathrm{log}}_{2}n$
1. ${n}^{2}$ is $o\left({n}^{3}\right)$. Therefore, $\mathrm{log}{n}^{2}$ is $?\left(\mathrm{log}{n}^{3}\right)$. Choose the tightest bound.
1. little omicron
2. theta
3. big omega
4. big omicron
5. little omega
1. $\mathrm{log}{n}^{n}$ is $\Theta \left(?\right)$.
1. $\mathrm{log}{n}^{\mathrm{log}n}$
2. n
3. log n
4. n log n
1. $\mathrm{log}{2}^{n}$ is $\Theta \left(?\right)$.
1. $n\mathrm{log}n$
2. $\mathrm{log}n$
3. ${2}^{n}$
4. n
1. The number of permutations of a list of n items is:
1. n!
2. $n\mathrm{log}n$
3. n
4. ${2}^{n}$
5. $\mathrm{log}n$

Concept: relative growth rates

1. Which of the following correctly orders the functions by growth rate?
1. $1<\mathrm{log}n<\sqrt{n}<n<n\mathrm{log}n<{n}^{2}<{2}^{n}$
2. $1<\mathrm{log}n<\sqrt{n}<n<{n}^{2}<n\mathrm{log}n<{2}^{n}$
3. $1<\sqrt{n}<\mathrm{log}n<n<n\mathrm{log}n<{n}^{2}<{2}^{n}$
4. $1<\sqrt{n}<\mathrm{log}n<n<{n}^{2}<n\mathrm{log}n<{2}^{n}$
5. $1<\sqrt{n}<\mathrm{log}n<n<{2}^{n}<n\mathrm{log}n<{n}^{2}$
1. What is the correct ordering of growth rates for the following functions:
• $f\left(n\right)={n}^{0.9}\mathrm{log}n$
• $g\left(n\right)={1.1}^{n}$
• $h\left(n\right)=9.9n$
1. h < g < f
2. h < f < g
3. f < g < h
4. g < f < h
5. g < h < f
6. f < h < g
1. What is the correct ordering of growth rates for the following functions:
• $f\left(n\right)=n{\left(\mathrm{log}n\right)}^{2}$
• $g\left(n\right)=n\mathrm{log}{2}^{n}$
• $h\left(n\right)=n\mathrm{log}\left(\mathrm{log}n\right)$
1. f > g > h
2. g > f > h
3. h > g > f
4. f > h > g
5. g > h > f
6. h > f > g

Concept: order notation

1. What does big omicron roughly mean?
1. worse than or the same as
2. better than or the same as
3. better than
4. the same as
5. worse than
1. What does $\omega$ roughly mean?
1. better than
2. the same as
3. worse than
4. better than or the same as
5. worse than or the same as
1. What does $\theta$ roughly mean?
1. better than
2. worse than or the same as
3. better than or the same as
4. the same as
5. worse than
1. T or F: All algorithms are $\omega \left(1\right)$.
1. T or F: All algorithms are $\theta \left(1\right)$.
1. T or F: All algorithms are $\Omega \left(1\right)$.
1. T or F: There exist algorithms that are $\omega \left(1\right)$.
1. T or F: There exist algorithms that are $O\left(1\right)$.
1. T or F: All algorithms are $O\left({n}^{n}\right)$.
1. Consider sorting 1,000,000 numbers with mergesort. What is the time complexity of this operation? [THINK!]
1. ${n}^{2}$, because mergesort takes quadratic time
2. n log n, because mergesort takes n log n time
3. constant, because n is fixed

Concept: comparing algorithms using order notation

Assume the worst case and a sufficiently large input size unless otherwise indicated. The phrase "the same time as" means equal to within a constant factor (or lower-order term) unless otherwise indicated. The phrase "by a stopwatch" means the actual amount of time the algorithm needs to run to completion, as measured by a stopwatch.
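To see why stopwatch time and asymptotic bounds can disagree, consider a sketch comparing two hypothetical step-count functions (the constants 100 and 1 are assumptions chosen purely for illustration, not taken from any particular algorithm):

```python
# f is o(g), i.e. asymptotically strictly better, yet f loses the
# stopwatch race for small inputs because of its larger constant.
def f_steps(n):
    return 100 * n   # linear, large constant factor

def g_steps(n):
    return n * n     # quadratic, small constant factor

# Below the crossover (n = 100 here), the "worse" algorithm is faster...
assert f_steps(10) > g_steps(10)
# ...but for sufficiently large n the asymptotics take over.
assert f_steps(1000) < g_steps(1000)
```

This is the caveat behind the "regardless of input size" variants below: order notation only constrains behavior for sufficiently large n.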
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs faster than g.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs faster than g, in all cases.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs faster than g, regardless of input size.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs faster than or takes the same time as g.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs faster than or takes the same time as g, in all cases.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs faster than or takes the same time as g, regardless of input size.
1. T or F: If $f=\omega \left(g\right)$, then it is possible that algorithm f can run faster than g, in some cases.
1. T or F: If $f=\omega \left(g\right)$, then it is possible that algorithm f can run faster than g.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs slower than g.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs slower than g, in all cases.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs slower than g, regardless of input size.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs slower than or takes the same time as g.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs slower than or takes the same time as g, in all cases.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs slower than or takes the same time as g, regardless of input size.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f can take the same time as g, in some cases.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f can run in the same time as g.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs faster than g.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs faster than g, in all cases.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs faster than g, regardless of input size.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs faster than or takes the same time as g.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs faster than or takes the same time as g, in all cases.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs faster than or takes the same time as g, regardless of input size.
1. T or F: If $f=\Omega \left(g\right)$, then it is possible that algorithm f can run faster than g, in some cases.
1. T or F: If $f=\Omega \left(g\right)$, then it is possible that algorithm f can run faster than g.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs slower than g.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs slower than g, in all cases.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs slower than g, regardless of input size.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs slower than or takes the same time as g.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs slower than or takes the same time as g, in all cases.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs slower than or takes the same time as g, regardless of input size.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs slower than g.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs slower than g, in all cases.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs slower than g, regardless of input size.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs slower than or takes the same time as g.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs slower than or takes the same time as g, in all cases.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs slower than or takes the same time as g, regardless of input size.
1. T or F: If $f=o\left(g\right)$, then it is possible that algorithm f can run slower than g, in some cases.
1. T or F: If $f=o\left(g\right)$, then it is possible that algorithm f can run slower than g.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs faster than g.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs faster than g, in all cases.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs faster than g, regardless of input size.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs faster than or takes the same time as g.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs faster than or takes the same time as g, in all cases.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs faster than or takes the same time as g, regardless of input size.
1. T or F: If $f=o\left(g\right)$, then algorithm f can run slower than g, in some cases.
1. T or F: If $f=o\left(g\right)$, then algorithm f can take the same time as g, in some cases.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs slower than g.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs slower than g, in all cases.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs slower than g, regardless of input size.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs slower than or takes the same time as g.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs slower than or takes the same time as g, in all cases.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs slower than or takes the same time as g, regardless of input size.
1. T or F: If $f=O\left(g\right)$, then it is possible that algorithm f can run slower than g, in some cases.
1. T or F: If $f=O\left(g\right)$, then it is possible that algorithm f can run slower than g.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs faster than g.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs faster than g, in all cases.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs faster than g, regardless of input size.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs faster than or takes the same time as g.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs faster than or takes the same time as g, in all cases.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs faster than or takes the same time as g, regardless of input size.
1. T or F: If $f=\Theta \left(g\right)$, then algorithm f takes the same time as g.
1. T or F: If $f=\Theta \left(g\right)$, then algorithm f takes the same time as g, in all cases.
1. T or F: If $f=\Theta \left(g\right)$, then algorithm f takes the same time as g, regardless of input size.
1. T or F: If $f=\Theta \left(g\right)$, then it is possible that algorithm f can run slower than g, in some cases.
1. T or F: If $f=O\left(g\right)$, then it is possible that algorithm f can run slower than g.
1. T or F: If $f=O\left(g\right)$, then it is possible that algorithm f can run slower than g, for some problem sizes.
1. T or F: If $f=\Theta \left(g\right)$, then it is possible that algorithm f can run faster than g, in some cases.
1. T or F: If $f=O\left(g\right)$, then it is possible that algorithm f can run faster than g.
1. T or F: If $f=O\left(g\right)$, then it is possible that algorithm f can run faster than g, for some problem sizes.
1. T or F: If $f=\omega \left(g\right)$, then f and g can be the same algorithm.
1. T or F: If $f=\Omega \left(g\right)$, then f and g can be the same algorithm.
1. T or F: If $f=o\left(g\right)$, then f and g can be the same algorithm.
1. T or F: If $f=O\left(g\right)$, then f and g can be the same algorithm.
1. T or F: Suppose algorithm $f=\Theta \left(g\right)$. f and g can be the same algorithm.
1. T or F: If $f=\Omega \left(g\right)$ and $g=O\left(f\right)$, then $f=\Theta \left(g\right)$.
1. T or F: If $f=\Omega \left(g\right)$ and $g=O\left(f\right)$, then f and g must be the same algorithm.
1. T or F: Suppose algorithm $f=\Theta \left(g\right)$. Algorithm f is never slower than g, by a stopwatch.
1. T or F: Suppose algorithm $f=\Theta \left(g\right)$. Algorithm f always takes the same time as g, by a stopwatch.

Concept: analyzing code

In the pseudocode, the lower limit of a `for` loop is inclusive, while the upper limit is exclusive. The additive step, if not specified, is one.
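These loop conventions match Python's half-open `range`; a minimal sketch (the bounds 0, 7, and step 2 are arbitrary example values):

```python
# "for (i from 0 until 7 by 2)" visits 0, 2, 4, 6:
# the lower limit is inclusive, the upper limit is exclusive.
def loop_values(lower, upper, step=1):  # step defaults to one, as in the pseudocode
    return list(range(lower, upper, step))

assert loop_values(0, 7, 2) == [0, 2, 4, 6]
assert loop_values(0, 3) == [0, 1, 2]
```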

Tracing recursive functions

When asked about the number of recursive calls, do not include the original call.
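As an illustration of this counting convention (a hypothetical countdown, not one of the questions below), calling `f(3)` here triggers calls with arguments 2, 1, and 0, so three recursive calls are counted; the original call is not:

```python
# Counting convention: the original (top-level) call is not a recursive call.
def f(i):
    if i > 0:
        return 1 + f(i - 1)  # count this recursive call, plus those it makes
    return 0

assert f(3) == 3  # f(3) -> f(2) -> f(1) -> f(0): three recursive calls
```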
1. How many recursive calls are made if $n=5$? Assume the initial value of i is zero.
```
function f(i,n)
{
    if (i < n)
    {
        println(i);
        f(i+1,n);
    }
    return 0;
}
```
1. 5
2. the function recurs infinitely
3. none of the other answers are correct
4. 4
5. 6
6. 3
1. How many recursive calls are made if $n=81$? Assume the initial value of i is one.
```
function f(i,n)
{
    if (i < n)
    {
        println(i);
        f(i*3,n);
    }
    return 0;
}
```
1. 6
2. 4
3. 3
4. none of the other answers are correct
5. 5
6. the function recurs infinitely
1. How many recursive calls are made if $n=64$? Assume the initial value of i is one.
```
function f(i,n)
{
    if (i < n)
    {
        println(i);
        f(i*sqrt(n),n);
    }
    return 0;
}
```
1. 8
2. 1
3. the function recurs infinitely
4. 4
5. 9
6. 7
7. none of the other answers are correct
8. 2
1. How many recursive calls are made if $n=64$? Assume the initial value of i is zero.
```
function f(i,n)
{
    if (i < n)
    {
        println(i);
        f(i+sqrt(n),n);
    }
    return 0;
}
```
1. none of the other answers are correct
2. 9
3. 7
4. 1
5. 2
6. 8
7. the function recurs infinitely
8. 4

Time complexity, recursive functions, single recursion

1. What is the time complexity of this function? Assume the initial value of i is zero.
```
function f(i,n)
{
    if (i < n)
    {
        println(i);
        f(i+1,n);
    }
    return 0;
}
```
1. $\theta \left(1\right)$
2. $\theta \left(\mathrm{log}n\right)$
3. $\theta \left(n\right)$
4. $\theta \left(\sqrt{n}\right)$
5. $\theta \left({n}^{2}\right)$
6. $\theta \left(n\mathrm{log}n\right)$
1. What is the time complexity of this function? Assume the initial value of i is one.
```
function f(i,n)
{
    if (i < n)
    {
        println(i);
        f(i*3,n);
    }
    return 0;
}
```
1. $\theta \left({\left(\mathrm{log}n\right)}^{3}\right)$
2. $\theta \left(n\sqrt{n}\right)$
3. $\theta \left(\mathrm{log}n\right)$
4. $\theta \left(n\mathrm{log}n\right)$
5. $\theta \left(n\right)$
6. $\theta \left({n}^{2}\right)$
1. What is the time complexity of this function? Assume the initial value of i is one.
```
function f(i,n)
{
    if (i < n)
    {
        f(i*sqrt(n),n);
        println(i);
    }
}
```
1. $\theta \left(\sqrt{n}\right)$
2. $\theta \left(1\right)$
3. $\theta \left(n\sqrt{n}\right)$
4. $\theta \left(\mathrm{log}n\right)$
5. $\theta \left({n}^{2}\right)$
6. $\theta \left(n\right)$
1. What is the time complexity of this function? Assume positive, integral input and integer division.
```
function f(n)
{
    if (n > 0)
    {
        for (var i from 0 until n)
            println(n);
        f(n/2);
    }
    return 0;
}
```
1. $\theta \left(n\right)$
2. $\theta \left({n}^{2}\right)$
3. $\theta \left(n\sqrt{n}\right)$
4. $\theta \left(\mathrm{log}n\right)$
5. $\theta \left(n\mathrm{log}n\right)$
6. $\theta \left(1\right)$

Time complexity, recursive functions, double recursion

1. What is the time complexity of this function? Assume the initial value of i and j is zero.
```
function f(i,j,n)
{
    println(i,j);
    if (i < n)
    {
        if (j < n)
            f(i,j+1,n);
        else
            f(i+1,0,n);
    }
    return 0;
}
```
1. $\theta \left(\sqrt{n}\right)$
2. $\theta \left(n\mathrm{log}n\right)$
3. $\theta \left({n}^{2}\right)$
4. $\theta \left({\mathrm{log}}^{2}n\right)$
5. $\theta \left(n\right)$
6. $\theta \left(1\right)$
1. What is the time complexity of this function? Assume the initial value of i and j is zero.
```
function f(i,j,n)
{
    println(i,j);
    if (i < n)
    {
        if (j < i)
            f(i,j+1,n);
        else
            f(i+1,0,i+1);
    }
    return 0;
}
```
1. $\theta \left({n}^{2}\right)$
2. $\theta \left(1\right)$
3. $\theta \left({\mathrm{log}}^{2}n\right)$
4. $\theta \left(n\mathrm{log}n\right)$
5. $\theta \left(n\right)$
6. $\theta \left(\sqrt{n}\right)$
1. What is the time complexity of this function? Assume the initial value of i is one and j is zero.
```
function f(i,j,n)
{
    println(i,j);
    if (i < n)
    {
        if (j < n)
            f(i,j+1,n);
        else
            f(i*2,0,n);
    }
    return 0;
}
```
1. $\theta \left(n\right)$
2. $\theta \left({n}^{2}\right)$
3. $\theta \left(n\sqrt{n}\right)$
4. $\theta \left(n\mathrm{log}n\right)$
5. $\theta \left({\mathrm{log}}^{2}n\right)$
6. $\theta \left(1\right)$
1. What is the time complexity of this function? Assume the initial value of i is one and j is zero.
```
function f(i,j,n)
{
    println(i,j);
    if (i < n)
    {
        if (j < i)
            f(i,j+1,n);
        else
            f(i*2,0,n);
    }
    return 0;
}
```
1. $\theta \left(n\right)$
2. $\theta \left(1\right)$
3. $\theta \left({\mathrm{log}}^{2}n\right)$
4. $\theta \left({n}^{2}\right)$
5. $\theta \left(n\sqrt{n}\right)$
6. $\theta \left(n\mathrm{log}n\right)$
1. What is the time complexity of this function? Assume the initial value of i is zero and j is one.
```
function f(i,j,n)
{
    println(i,j);
    if (i < n)
    {
        if (j < n)
            f(i,j*2,n);
        else
            f(i+1,1,n);
    }
    return 0;
}
```
1. $\theta \left({n}^{2}\right)$
2. $\theta \left(n\right)$
3. $\theta \left(1\right)$
4. $\theta \left({\mathrm{log}}^{2}n\right)$
5. $\theta \left(n\sqrt{n}\right)$
6. $\theta \left(n\mathrm{log}n\right)$

Space complexity, recursive functions, single recursion

1. What is the space complexity of this function? Assume positive, integral input and integer division.
```
function f(x,n)
{
    if (x > 0)
    {
        f(x/2,n);
        for (var i from 0 until n)
            println(n);
    }
}
```
1. $\theta \left(x/2\right)$
2. $\theta \left(n\right)$
3. $\theta \left(nx\right)$
4. $\theta \left(x\right)$
5. $\theta \left(n\mathrm{log}x\right)$
6. $\theta \left(\mathrm{log}n\right)$
7. $\theta \left(1\right)$
8. $\theta \left(\mathrm{log}x\right)$
1. What is the space complexity of this function? Assume the initial value of i is zero.
```
function f(i,n)
{
    if (i < n)
    {
        f(i+1,n);
        println(i);
    }
}
```
1. $\theta \left(1\right)$
2. $\theta \left(\sqrt{n}\right)$
3. $\theta \left(n\right)$
4. $\theta \left({n}^{2}\right)$
5. $\theta \left(\mathrm{log}n\right)$
6. $\theta \left(n\mathrm{log}n\right)$
1. What is the space complexity of this function? Assume the initial value of i is one.
```
function f(i,n)
{
    if (i < n)
    {
        f(i*2,n);
        println(i);
    }
}
```
1. $\theta \left(n\mathrm{log}n\right)$
2. $\theta \left({n}^{2}\right)$
3. $\theta \left(n\right)$
4. $\theta \left(\sqrt{n}\right)$
5. $\theta \left(1\right)$
6. $\theta \left(\mathrm{log}n\right)$
1. What is the space complexity of this function? Assume the initial value of i is one.
```
function f(i,n)
{
    if (i < n)
    {
        f(i*sqrt(n),n);
        println(i);
    }
}
```
1. $\theta \left(n\sqrt{n}\right)$
2. $\theta \left(1\right)$
3. $\theta \left(n\right)$
4. $\theta \left(\sqrt{n}\right)$
5. $\theta \left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{n}}\right)$
6. $\theta \left(n-\sqrt{n}\right)$
1. What is the space complexity of this function? Assume the initial value of i is one.
```
function f(i,n)
{
    if (i < n)
    {
        f(i+sqrt(n),n);
        println(i);
    }
}
```
1. $\theta \left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{n}}\right)$
2. $\theta \left(n\right)$
3. $\theta \left(\sqrt{n}\right)$
4. $\theta \left(n\sqrt{n}\right)$
5. $\theta \left(1\right)$
6. $\theta \left(n-\sqrt{n}\right)$

Space complexity, recursive functions, double recursion

1. What is the space complexity of this function? Assume the initial value of i and j is zero.
```
function f(i,j,n)
{
    println(i,j);
    if (i < n)
    {
        if (j < n)
            f(i,j+1,n);
        else
            f(i+1,i+1,n);
    }
    return 0;
}
```
1. $\theta \left({\mathrm{log}}^{2}n\right)$
2. $\theta \left({n}^{2}\right)$
3. $\theta \left(n\mathrm{log}n\right)$
4. $\theta \left(\sqrt{n}\right)$
5. $\theta \left(n\right)$
6. $\theta \left(1\right)$
1. What is the space complexity of this function? Assume the initial value of i and j is zero.
```
function f(i,j,n)
{
    println(i,j);
    if (i < n)
    {
        if (j < i)
            f(i,j+1,n);
        else
            f(i+1,0,i+1);
    }
    return 0;
}
```
1. $\theta \left({n}^{2}\right)$
2. $\theta \left(n\mathrm{log}n\right)$
3. $\theta \left(1\right)$
4. $\theta \left(n\right)$
5. $\theta \left(\sqrt{n}\right)$
6. $\theta \left({\mathrm{log}}^{2}n\right)$
1. What is the space complexity of this function? Assume the initial value of i is one and j is zero.
```
function f(i,j,n)
{
    println(i,j);
    if (i < n)
    {
        if (j < n)
            f(i,j+1,n);
        else
            f(i*2,0,n);
    }
    return 0;
}
```
1. $\theta \left(n\sqrt{n}\right)$
2. $\theta \left(n\mathrm{log}n\right)$
3. $\theta \left(1\right)$
4. $\theta \left({n}^{2}\right)$
5. $\theta \left({\mathrm{log}}^{2}n\right)$
6. $\theta \left(n\right)$
1. What is the space complexity of this function? Assume the initial value of i is one and j is zero.
```
function f(i,j,n)
{
    println(i,j);
    if (i < n)
    {
        if (j < i)
            f(i,j+1,n);
        else
            f(i*2,0,n);
    }
}
```
1. $\theta \left({\mathrm{log}}^{2}n\right)$
2. $\theta \left(n\sqrt{n}\right)$
3. $\theta \left({n}^{2}\right)$
4. $\theta \left(1\right)$
5. $\theta \left(n\right)$
6. $\theta \left(n\mathrm{log}n\right)$
1. What is the space complexity of this function? Assume the initial value of i is zero and j is one.
```
function f(i,j,n)
{
    println(i,j);
    if (i < n)
    {
        if (j < n)
            f(i,j*2,n);
        else
            f(i+1,1,n);
    }
    return 0;
}
```
1. $\theta \left(n\mathrm{log}n\right)$
2. $\theta \left(n\right)$
3. $\theta \left(1\right)$
4. $\theta \left(n\sqrt{n}\right)$
5. $\theta \left({n}^{2}\right)$
6. $\theta \left({\mathrm{log}}^{2}n\right)$

Time complexity, iterative loops, single loops

1. What is the time complexity of this code fragment?
```
for (i from 0 until n by 1)
    println(i);
```
1. $\theta \left({\mathrm{log}}^{2}n\right)$
2. $\theta \left(n\mathrm{log}n\right)$
3. $\theta \left({n}^{\sqrt{n}}\right)$
4. $\theta \left(n\sqrt{n}\right)$
5. $\theta \left(n\right)$
6. $\theta \left({n}^{2}\right)$
1. What is the time complexity of this code fragment?
```
i = 1;
while (i < n)
{
    println(i);
    i = i * 2;
}
```
1. $\theta \left({n}^{2}\right)$
2. $\theta \left(n\mathrm{log}n\right)$
3. $\theta \left(n\right)$
4. $\theta \left({\left(\mathrm{log}n\right)}^{2}\right)$
5. $\theta \left(n\sqrt{n}\right)$
6. $\theta \left(\mathrm{log}n\right)$
1. What is the time complexity of this code fragment?
```
i = 1;
while (i < n)
{
    println(i);
    i = i * sqrt(n);
}
```
1. $\theta \left(\sqrt{n}\right)$
2. $\theta \left(n\sqrt{n}\right)$
3. $\theta \left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{n}}\right)$
4. $\theta \left(n-\sqrt{n}\right)$
5. $\theta \left(n\right)$
6. $\theta \left(1\right)$

Time complexity, iterative loops, double loops

1. What is the time complexity of this code fragment?
```
for (i from 0 until n by 1)
    for (j from 0 until i by 1)
        println(i,j);
```
1. $\theta \left(n\right)$
2. $\theta \left(1\right)$
3. $\theta \left(n\mathrm{log}n\right)$
4. $\theta \left(n\sqrt{n}\right)$
5. $\theta \left({\mathrm{log}}^{2}n\right)$
6. $\theta \left({n}^{2}\right)$
1. What is the time complexity of this code fragment?
```
i = 1;
while (i < n)
{
    for (j from 0 until n by 1)
        println(i,j);
    i = i * 2;
}
```
1. $\theta \left({\mathrm{log}}^{2}n\right)$
2. $\theta \left(n\mathrm{log}n\right)$
3. $\theta \left({n}^{2}\right)$
4. $\theta \left(n\right)$
5. $\theta \left(n\sqrt{n}\right)$
6. $\theta \left(1\right)$
1. What is the time complexity of this code fragment?
```
i = 1;
while (i < n)
{
    for (j from 0 until i by 1)
        println(i,j);
    i = i * 2;
}
```
1. $\theta \left(n\right)$
2. $\theta \left(n\sqrt{n}\right)$
3. $\theta \left({n}^{2}\right)$
4. $\theta \left(1\right)$
5. $\theta \left(n\mathrm{log}n\right)$
6. $\theta \left({\mathrm{log}}^{2}n\right)$
1. What is the time complexity of this code fragment?
```
for (i from 0 until n)
{
    j = 1;
    while (j < n)
    {
        println(i,j);
        j = j * 2;
    }
}
```
1. $\theta \left(n\sqrt{n}\right)$
2. $\theta \left(n\mathrm{log}n\right)$
3. $\theta \left({\mathrm{log}}^{2}n\right)$
4. $\theta \left({n}^{2}\right)$
5. $\theta \left(n\right)$
6. $\theta \left(1\right)$
1. What is the time complexity of this code fragment?
```
i = 1;
while (i < n)
{
    println(i);
    i = i * 2;
}
for (j from 0 until n by 2)
    println(j);
```
1. $\theta \left({\mathrm{log}}^{2}n\right)$
2. $\theta \left(1\right)$
3. $\theta \left(n\mathrm{log}n\right)$
4. $\theta \left({n}^{2}\right)$
5. $\theta \left(n\sqrt{n}\right)$
6. $\theta \left(n\right)$
1. What is the time complexity of this code fragment?
```
for (i from 0 until n by 1)
    println(i);
for (j from 0 until n by 1)
    println(j);
```
1. $\theta \left(n\right)$
2. $\theta \left(1\right)$
3. $\theta \left({n}^{2}\right)$
4. $\theta \left(n\mathrm{log}n\right)$
5. $\theta \left({\mathrm{log}}^{2}n\right)$
6. $\theta \left(n\sqrt{n}\right)$
1. What is the time complexity of this code fragment?
```
for (i from 0 until n by 2)
    println(i);
j = 1;
while (j < n)
{
    println(j);
    j = j * 2;
}
```
1. $\theta \left(n\sqrt{n}\right)$
2. $\theta \left({n}^{2}\right)$
3. $\theta \left(n\mathrm{log}n\right)$
4. $\theta \left(1\right)$
5. $\theta \left({\mathrm{log}}^{2}n\right)$
6. $\theta \left(n\right)$

Space complexity, iterative loops, single loops

1. What is the space complexity of this code fragment?
```
for (i from 0 until n by 1)
    println(i);
```
1. $\theta \left(n\right)$
2. $\theta \left(1\right)$
3. $\theta \left(n\mathrm{log}n\right)$
4. $\theta \left(\mathrm{log}n\right)$
5. $\theta \left({n}^{2}\right)$
6. $\theta \left(\sqrt{n}\right)$
1. What is the space complexity of this code fragment?
```
i = 1;
while (i < n)
{
    println(i);
    i = i * sqrt(n);
}
```
1. $\theta \left(1\right)$
2. $\theta \left({n}^{2}\right)$
3. $\theta \left(\mathrm{log}n\right)$
4. $\theta \left(n\right)$
5. $\theta \left(n\mathrm{log}n\right)$
6. $\theta \left(\sqrt{n}\right)$
1. What is the space complexity of this code fragment?
```
i = 1;
while (i < n)
{
    println(i);
    i = i * 2;
}
```
1. $\theta \left(n\mathrm{log}n\right)$
2. $\theta \left(1\right)$
3. $\theta \left(\sqrt{n}\right)$
4. $\theta \left({n}^{2}\right)$
5. $\theta \left(n\right)$
6. $\theta \left(\mathrm{log}n\right)$

Space complexity, iterative loops, double loops

1. What is the space complexity of this code fragment?
```
for (i from 0 until n by 1)
    println(i);
for (j from 0 until n by 1)
    println(j);
```
1. $\theta \left({n}^{2}\right)$
2. $\theta \left(1\right)$
3. $\theta \left(\mathrm{log}n\right)$
4. $\theta \left(n\mathrm{log}n\right)$
5. $\theta \left(\sqrt{n}\right)$
6. $\theta \left(n\right)$
1. What is the space complexity of this code fragment?
```
i = 1;
while (i < n)
{
    println(i);
    i = i * 2;
}
for (j from 0 until n by 2)
    println(j);
```
1. $\theta \left(\sqrt{n}\right)$
2. $\theta \left(n\mathrm{log}n\right)$
3. $\theta \left(1\right)$
4. $\theta \left({n}^{2}\right)$
5. $\theta \left(\mathrm{log}n\right)$
6. $\theta \left(n\right)$
1. What is the space complexity of this code fragment?
```
for (i from 0 until n by 2)
    println(i);
j = 1;
while (j < n)
{
    println(j);
    j = j * 2;
}
```
1. $\theta \left(n\mathrm{log}n\right)$
2. $\theta \left(\mathrm{log}n\right)$
3. $\theta \left(1\right)$
4. $\theta \left(\sqrt{n}\right)$
5. $\theta \left({n}^{2}\right)$
6. $\theta \left(n\right)$
1. What is the space complexity of this code fragment?
```
for (i from 0 until n by 1)
    for (j from 0 until i by 1)
        println(i,j);
```
1. $\theta \left({n}^{2}\right)$
2. $\theta \left(n\mathrm{log}n\right)$
3. $\theta \left(\sqrt{n}\right)$
4. $\theta \left(\mathrm{log}n\right)$
5. $\theta \left(1\right)$
6. $\theta \left(n\right)$
1. What is the space complexity of this code fragment?
```
i = 1;
while (i < n)
{
    for (j from 0 until i by 1)
        println(i,j);
    i = i * 2;
}
```
1. $\theta \left(n\mathrm{log}n\right)$
2. $\theta \left(\sqrt{n}\right)$
3. $\theta \left(\mathrm{log}n\right)$
4. $\theta \left(1\right)$
5. $\theta \left({n}^{2}\right)$
6. $\theta \left(n\right)$
1. What is the space complexity of this code fragment?
```
for (i from 0 until n)
{
    j = 1;
    while (j < n)
    {
        println(i,j);
        j = j * 2;
    }
}
```
1. $\theta \left(1\right)$
2. $\theta \left(\sqrt{n}\right)$
3. $\theta \left(\mathrm{log}n\right)$
4. $\theta \left(n\mathrm{log}n\right)$
5. $\theta \left(n\right)$
6. $\theta \left({n}^{2}\right)$
1. What is the space complexity of this code fragment?
```
i = 1;
while (i < n)
{
    for (j from 0 until n by 1)
        println(i,j);
    i = i * 2;
}
```
1. $\theta \left(\mathrm{log}n\right)$
2. $\theta \left(n\mathrm{log}n\right)$
3. $\theta \left(n\right)$
4. $\theta \left(1\right)$
5. $\theta \left({n}^{2}\right)$
6. $\theta \left(\sqrt{n}\right)$

Concept: analysis of classic, simple algorithms

1. Which of the following describes the time complexity of the classic recursive Fibonacci algorithm?
1. $\theta \left(n-\sqrt{n}\right)$
2. $\theta \left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{n}}\right)$
3. $\theta \left(\sqrt{n}\right)$
4. $\theta \left({\Phi }^{n}\right)$
5. $\theta \left(\Phi \right)$
6. $\theta \left(1\right)$
7. $\theta \left(\genfrac{}{}{0.1ex}{}{\Phi }{n}\right)$
1. Which of the following describes the space complexity of the classic recursive Fibonacci algorithm?
1. $\theta \left(1\right)$
2. $\theta \left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{n}}\right)$
3. $\theta \left(n-\sqrt{n}\right)$
4. $\theta \left(\genfrac{}{}{0.1ex}{}{\Phi }{n}\right)$
5. $\theta \left(\sqrt{n}\right)$
6. $\theta \left(n\right)$
1. Which of the following describes the time complexity of the iterative factorial algorithm?
1. $\theta \left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{n}}\right)$
2. $\theta \left(\sqrt{n}\right)$
3. $\theta \left(n-\sqrt{n}\right)$
4. $\theta \left(n\right)$
5. $\theta \left(1\right)$
6. $\theta \left(\genfrac{}{}{0.1ex}{}{\Phi }{n}\right)$
1. Which of the following describes the space complexity of the iterative Fibonacci algorithm?
1. $\theta \left(\genfrac{}{}{0.1ex}{}{\Phi }{n}\right)$
2. $\theta \left(1\right)$
3. $\theta \left(\sqrt{n}\right)$
4. $\theta \left(n\right)$
5. $\theta \left(n-\sqrt{n}\right)$
6. $\theta \left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{n}}\right)$

Concept: linear selection

1. Suppose you wish to find both the minimum and maximum values in an array of n values, n being odd. Consider this algorithm:
1. Make a pass through the array and find the minimum
2. Make a pass through the array and find the maximum
3. Report the minimum and the maximum at termination
What is the minimum number of key comparisons that this algorithm needs to perform?
1. $2n-2$
2. $2n+2$
3. $2n+1$
4. $2n-1$
5. $2n$
1. Suppose you wish to find both the minimum and maximum values in an array A of n values, n being odd. Consider this algorithm:
1. Set the maximum and minimum to the first value in the array
2. Make a pass through the array `(i = 1; i < n; i += 2)`
3. Compare A[i] with A[i+1]
4. Update the minimum if the smaller of the two is less
5. Update the maximum if the larger of the two is greater
6. Report the minimum and the maximum at loop termination
What is the minimum number of key comparisons that this algorithm needs to perform?
1. $3n-3$
2. $3n-1$
3. $3\genfrac{}{}{0.1ex}{}{n-1}{2}+2$
4. $3\genfrac{}{}{0.1ex}{}{n-1}{2}$
5. $2n-3$
6. $3n$
1. Consider running the linear selection algorithm on an array of n unique elements. What is a tight lower bound on the number of elements less than the median of medians? Assume the median of medians is found with groups of five and that there are an odd number of groups.
1. $2\genfrac{}{}{0.1ex}{}{n}{5}+3$
2. $3\genfrac{}{}{0.1ex}{}{n}{10}+3$
3. $2\genfrac{}{}{0.1ex}{}{n}{5}+2$
4. $5\genfrac{}{}{0.1ex}{}{n}{10}+2$
5. $3\genfrac{}{}{0.1ex}{}{n}{5}+2$
6. $5\genfrac{}{}{0.1ex}{}{n}{10}+3$
7. $3\genfrac{}{}{0.1ex}{}{n}{5}+3$
8. $\genfrac{}{}{0.1ex}{}{3n+5}{10}$
1. Consider running the linear selection algorithm on an array with n unique elements. What is a tight lower bound on the number of elements less than the median of medians? Assume the median of medians is found with groups of three and that there are an odd number of groups.
1. $2\genfrac{}{}{0.1ex}{}{n}{6}+2$
2. $3\genfrac{}{}{0.1ex}{}{n}{6}+2$
3. $\genfrac{}{}{0.1ex}{}{n}{3}+1$
4. $2\genfrac{}{}{0.1ex}{}{n}{6}+1$
5. $2\genfrac{}{}{0.1ex}{}{n}{3}+2$
6. $2\genfrac{}{}{0.1ex}{}{n}{3}+1$
7. $\genfrac{}{}{0.1ex}{}{n}{3}$
8. $3\genfrac{}{}{0.1ex}{}{n}{6}+1$
1. Consider running the linear selection algorithm on an array with n unique elements. What is a tight lower bound on the number of elements less than the median of medians? Assume the median of medians is found with groups of seven and that there are an odd number of groups.
1. $3\genfrac{}{}{0.1ex}{}{n}{7}+3$
2. $3\genfrac{}{}{0.1ex}{}{n}{14}+2$
3. $\genfrac{}{}{0.1ex}{}{2n+28}{14}$
4. $\genfrac{}{}{0.1ex}{}{2n+7}{7}$
5. $11\genfrac{}{}{0.1ex}{}{n}{14}+3$
6. $11\genfrac{}{}{0.1ex}{}{n}{14}+2$
7. $3\genfrac{}{}{0.1ex}{}{n}{7}+2$
8. $3\genfrac{}{}{0.1ex}{}{n}{14}+3$
1. Consider running the linear selection algorithm on an array with $n={7}^{k}$ unique elements. What is a reasonable recurrence equation to describe the running time of the algorithm? Assume the median of medians is found with groups of seven.
1. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{7}\right)+T\left(\genfrac{}{}{0.1ex}{}{6n}{7}\right)+\Theta \left(n\right)$
2. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{7}\right)+T\left(\genfrac{}{}{0.1ex}{}{5n}{7}\right)+\Theta \left(n\right)$
3. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{7}\right)+T\left(\genfrac{}{}{0.1ex}{}{4n}{14}\right)+\Theta \left(n\right)$
4. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{7}\right)+T\left(\genfrac{}{}{0.1ex}{}{4n}{7}\right)+\Theta \left(n\right)$
5. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{7}\right)+T\left(\genfrac{}{}{0.1ex}{}{11n}{14}\right)+\Theta \left(n\right)$
6. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{7}\right)+T\left(\genfrac{}{}{0.1ex}{}{10n}{14}\right)+\Theta \left(n\right)$
1. Consider running the linear selection algorithm on an array with $n={3}^{k}$ unique elements. What is a reasonable recurrence equation to describe the running time of the algorithm? Assume the median of medians is found with groups of three.
1. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+T\left(\genfrac{}{}{0.1ex}{}{n}{6}\right)+\Theta \left(n\right)$
2. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+T\left(\genfrac{}{}{0.1ex}{}{5n}{6}\right)+\Theta \left(n\right)$
3. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(n\right)$
4. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+\Theta \left(n\right)$
5. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+T\left(\genfrac{}{}{0.1ex}{}{2n}{5}\right)+\Theta \left(n\right)$
6. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+T\left(\genfrac{}{}{0.1ex}{}{2n}{3}\right)+\Theta \left(n\right)$
1. Consider running the linear selection algorithm on an array with $n={5}^{k}$ unique elements. What is a reasonable recurrence equation to describe the running time of the algorithm? Assume the median of medians is found with groups of five.
1. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{5}\right)+T\left(\genfrac{}{}{0.1ex}{}{3n}{5}\right)+\Theta \left(n\right)$
2. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{5}\right)+T\left(\genfrac{}{}{0.1ex}{}{9n}{10}\right)+\Theta \left(n\right)$
3. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{5}\right)+T\left(\genfrac{}{}{0.1ex}{}{7n}{10}\right)+\Theta \left(n\right)$
4. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{5}\right)+T\left(\genfrac{}{}{0.1ex}{}{4n}{5}\right)+\Theta \left(n\right)$
5. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{5}\right)+T\left(\genfrac{}{}{0.1ex}{}{2n}{5}\right)+\Theta \left(n\right)$
6. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{5}\right)+T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(n\right)$
1. T or F: If the linear selection algorithm uses groups of three to find the median of medians, the asymptotic run time is still $\Theta \left(n\right)$.
1. T or F: If the linear selection algorithm uses groups of seven to find the median of medians, the asymptotic run time is still $\Theta \left(n\right)$.
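
The selection questions above reference the linear-time ("median of medians") selection algorithm. Here is a minimal Python sketch, using lists rather than in-place partitioning; the function name and list-based structure are mine, not from the text:

```python
def select(A, k, group=5):
    """Return the k-th smallest element of A (k is 0-indexed).

    Median-of-medians sketch: take each group's median, recursively find
    the median of those medians, use it as the pivot to partition A, and
    recurse into whichever side contains the k-th element.
    """
    if len(A) <= group:
        return sorted(A)[k]
    groups = [sorted(A[i:i + group]) for i in range(0, len(A), group)]
    medians = [g[len(g) // 2] for g in groups]
    pivot = select(medians, len(medians) // 2, group)
    less = [x for x in A if x < pivot]
    greater = [x for x in A if x > pivot]
    if k < len(less):
        return select(less, k, group)
    if k >= len(A) - len(greater):
        return select(greater, k - (len(A) - len(greater)), group)
    return pivot
```

Passing `group=3` or `group=7` reproduces the variants asked about in the grouping questions above.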

Concept: decision trees

1. In proving a tight lower bound for a class of algorithms, one tries to establish, over all possible algorithms:
1. the worst possible best case
2. the best possible best case
3. the best possible worst case
4. the worst possible worst case
1. In an efficient decision tree for comparison sorts of n numbers, what is the smallest possible depth of a leaf, in the best case? Assume the root is at depth 0.
1. approximately $\mathrm{log}n$
2. 1
3. $n-1$
4. n
5. approximately $n\mathrm{log}n$
6. $n!$
1. In an efficient decision tree for comparison sorts of n numbers, what is the largest possible depth of a leaf, in the worst case? Assume the root is at depth 0.
1. $n!$
2. 1
3. approximately $\mathrm{log}n$
4. $n-1$
5. n
6. approximately $n\mathrm{log}n$
1. In an efficient decision tree for comparison sorts of n numbers, what does the shortest path from the root to a leaf represent?
1. nothing, the longest path is what's important
2. the average case behavior of the sort
3. the best case behavior of the sort
4. the worst case behavior of the sort
1. An efficient decision tree for comparison sorts of n numbers has how many leaves?
1. ${2}^{n}$
2. $\mathrm{log}n$
3. $n\mathrm{log}n$
4. $n!$
1. Deriving a tight lower time bound for comparison sorts, based upon an efficient decision tree, yields:
1. $O\left(n\mathrm{log}n\right)$
2. $\Omega \left(n\right)$
3. $\omega \left(n\mathrm{log}n\right)$
4. $\Omega \left(n\mathrm{log}n\right)$
5. $O\left(n\right)$
6. unbounded
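
The lower bound asked about above follows from a short counting argument; a standard sketch (not specific to any one sort): a binary decision tree of height $h$ has at most $2^h$ leaves, and a correct comparison sort must distinguish all $n!$ input orderings, so

```latex
2^{h} \ge n!
\quad\Longrightarrow\quad
h \ge \log_2(n!) \ge \frac{n}{2}\log_2\frac{n}{2} = \Omega(n \log n),
```

where the middle inequality uses $n! \ge (n/2)^{n/2}$.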

Concept: linear sorting fundamentals

1. Stability in a sort means:
1. the sort can work on any kind of number (integers, reals, etc.)
2. the order of ties in the output reflects the input
3. the asymptotic run time does not vary with the input
1. The best case behavior for insertion sort is:
1. exponential
2. log linear
3. linear
1. T or F: A linear-time sort does not compare entire keys with one another.
1. T or F: A linear-time sort must always compare entire keys with one another.

Concept: linear sorting -- counting

For counting sort, assume array A holds the data to be sorted, array B holds the sorted data, and array C holds the counts. The index i is used to sweep arrays A and B and the index j is used to sweep array C.
1. Counting sort is:
1. stable if lower indexed elements from the input array are transferred to higher indexed elements in the output array.
2. stable if higher indexed elements from the input array are transferred to higher indexed elements in the output array.
3. always unstable
4. always stable
1. Suppose you are to sort n numbers using counting sort. What size should the C array be?
1. there's not enough information
2. less than n
3. greater than n
4. equal to n
1. Consider using a counting sort to sort the input array `[2,5,0,3,2,0,3,3]`. After the first phase, when `C[i]` holds the number of elements equal to i, the array C looks like:
1. `[2,0,2,3,0,1]`
2. `[2,0,5,2,3,0,3,3]`
3. `[0,0,2,2,3,3,3,5]`
4. `[2,2,4,7,7,8]`
5. `[2,2,4,6,7,8]`
1. Consider using a counting sort to sort the input array `[2,5,0,3,2,0,3,3]` with auxiliary array C. After the second phase, when `C[i]` holds the number of elements less than or equal to i, the array C looks like:
1. `[2,2,4,7,7,8]`
2. `[2,2,4,6,7,8]`
3. `[2,0,5,2,3,0,3,3]`
4. `[2,0,2,3,0,1]`
5. `[0,0,2,2,3,3,3,5]`
1. Consider using a stable counting sort to sort the input array `[2,5,0,3,2,0,3,3]` with destination array B. At the start of phase three, using right-to-left placement, the first element to be placed in B is:
1. 0, at index 1
2. 4, at index 5
3. 3, at index 6
4. 1, at index 0
5. 2, at index 3
6. 5, at index 7
1. Let n be the count of numbers in a collection of base-10 numbers. Suppose zero is the minimum number and k is the maximum number in the collection. The time complexity of counting sort is:
1. $\Theta \left(n\mathrm{log}k\right)$
2. $\Theta \left({n}^{k}\right)$
3. $\Theta \left(nk\right)$
4. $\Theta \left(n+k\right)$
1. Let n be the count of numbers in a collection of base-10 numbers. Suppose zero is the minimum number and k is the maximum number in the collection. If $k=o\left(n\right)$, then the time complexity of counting sort is:
1. $\Theta \left({k}^{2}\right)$
2. $\Theta \left(k\right)$
3. $\Theta \left(n\right)$
4. $\Theta \left(k\mathrm{log}n\right)$
5. $\Theta \left(n\mathrm{log}k\right)$
6. $\Theta \left({n}^{2}\right)$
1. T or F: Suppose the lower bound of some numbers to be sorted is not zero. Counting sort can still be used without changing its fundamental time bounds.
1. T or F: Suppose the lower bound of some numbers to be sorted, `LB`, is not zero. Counting sort can still be used without changing its fundamental time bounds, if, among other changes, the statement `C[A[i]] += 1` is changed to `C[A[i]-LB] += 1`.
1. T or F: Suppose the lower bound of some numbers to be sorted, `LB`, is not zero. Counting sort can still be used without changing its fundamental time bounds, if, among other changes, the statement `B[C[A[i]]-1] = A[i]` is changed to `B[C[A[i-LB]]-1] = A[i]`.
1. T or F: Counting sort can be used to sort n decimal numbers uniformly distributed over the range of zero to ${n}^{5}$ in linear time.
1. Consider using counting sort to sort n numbers uniformly distributed over the range of zero to ${n}^{k}$. The asymptotic complexity of the sort will be
1. $\Theta \left(n+k\right)$
2. $\Theta \left(k\mathrm{log}n\right)$
3. $\Theta \left(k\right)$
4. $\Theta \left({n}^{k}+n\right)$
5. $\Theta \left(n\mathrm{log}k\right)$
6. $\Theta \left(n\right)$
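
The three phases and the `C[A[i]-LB]` adjustment referenced in the questions above can be sketched in Python. The array names A, B, and C follow the conventions stated at the top of this section; the `LB` handling is the offset variant the true/false questions describe:

```python
def counting_sort(A, LB=0):
    """Stable counting sort of integers in A, whose minimum value is LB.

    Phase 1: C[j] counts elements equal to j + LB.
    Phase 2: C[j] becomes the count of elements <= j + LB.
    Phase 3: place elements into B right to left to preserve stability.
    """
    k = max(A)
    C = [0] * (k - LB + 1)
    for i in range(len(A)):               # phase 1: counts
        C[A[i] - LB] += 1
    for j in range(1, len(C)):            # phase 2: running totals
        C[j] += C[j - 1]
    B = [None] * len(A)
    for i in range(len(A) - 1, -1, -1):   # phase 3: stable placement
        C[A[i] - LB] -= 1
        B[C[A[i] - LB]] = A[i]
    return B
```

Printing C after each phase while running this on `[2,5,0,3,2,0,3,3]` is one way to check the phase questions above.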

Concept: linear sorting -- radix

1. Suppose the recurrence equation $T\left(p,q\right)=T\left(p,q-1\right)+\theta \left(p\right)$ is used to describe radix sort. Considering only the sorting of the numbers on a particular digit, which sort is consistent with that equation?
1. selection sort
2. merge sort
3. counting sort
1. Suppose the recurrence equation $T\left(p,q\right)=T\left(p,q-1\right)+\theta \left(p\mathrm{log}p\right)$ is used to describe radix sort. Considering only the sorting of the numbers on a particular digit, which sort is consistent with that equation?
1. counting sort
2. merge sort
3. selection sort
1. Suppose the recurrence equation $T\left(p,q\right)=T\left(p,q-1\right)+\theta \left({p}^{2}\right)$ is used to describe radix sort. Considering only the sorting of the numbers on a particular digit, which sort is consistent with that equation?
1. counting sort
2. merge sort
3. selection sort
1. Suppose the recurrence equation $T\left(p,q\right)=T\left(p,q-1\right)+\theta \left(p\right)$ is used to describe radix sort. Which variable in the equation refers to the number of digits?
1. d
2. p
3. n
4. q
1. Consider using radix sort for sorting the following numbers:
```
558
354
318
622
```
After the first pass, the order of the numbers is:
1. 354, 318, 558, 622
2. 318, 354, 558, 622
3. 622, 354, 558, 318
4. 622, 558, 354, 318
5. 622, 354, 318, 558
6. 622, 558, 318, 354
1. Let n be the count of numbers in a collection of positive base-10 numbers. Let m be the number of digits in the largest number in the collection. Suppose the auxiliary sort works in $\Theta \left(n\right)$ time. Then radix sorting takes time:
1. $\Theta \left(m\mathrm{log}n\right)$
2. $\Theta \left(n\mathrm{log}n\right)$
3. $\Theta \left(n\mathrm{log}m\right)$
4. $\Theta \left(nm\right)$
1. Let n be the count of numbers in a collection of positive base-10 numbers. Let m be the number of digits in the largest number in the collection. Suppose the auxiliary sort works in $\Theta \left(n\mathrm{log}n\right)$ time. Then radix sorting takes time:
1. $\Theta \left(mn\mathrm{log}n\right)$
2. $\Theta \left(n\mathrm{log}\left(mn\right)\right)$
3. $\Theta \left(n\mathrm{log}\left(m+n\right)\right)$
4. $\Theta \left(n\mathrm{log}m\right)$
1. T or F: Suppose during one pass of radix sort, there is a tie between two numbers. Since they are considered the same number, it does not matter if those two numbers swap positions.
1. Consider the sort used for each pass of radix sort. Must the auxiliary sort be stable?
1. No, because radix sort works even if the auxiliary sort is unstable.
2. Yes, because swapping ties can undo the work of previous passes
3. No, because there can be no ties in radix sort.
4. Yes, because swapping ties can undo the work of future passes
1. T or F: Radix sort can be used to sort n decimal numbers uniformly distributed over the range of zero to ${n}^{5}$ in linear time.
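
The pass-by-pass behavior asked about above can be sketched in Python, using a stable counting sort on one digit per pass, least significant digit first. The function name and the digit-extraction expression are mine:

```python
def radix_sort(A, base=10):
    """Sort non-negative integers with repeated stable passes, least
    significant digit first. Because each pass is stable, ties on the
    current digit keep the order established by earlier passes.
    """
    digits = len(str(max(A)))
    for d in range(digits):
        C = [0] * base
        for x in A:                       # count by the d-th digit
            C[(x // base ** d) % base] += 1
        for j in range(1, base):          # running totals
            C[j] += C[j - 1]
        B = [None] * len(A)
        for x in reversed(A):             # stable right-to-left placement
            C[(x // base ** d) % base] -= 1
            B[C[(x // base ** d) % base]] = x
        A = B
    return A
```

Printing A after the first pass while running this on `[558, 354, 318, 622]` is one way to check the pass-ordering question above.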

Concept: linear sorting -- bucket

1. T or F: If bucket sort uses equally-sized buckets, then the distribution of the incoming numbers need not be uniform for the sort to run in expected linear time.
1. Consider using bucket sort to sort n numbers evenly distributed over the range 0 to m. Roughly, how many buckets should you use?
1. n
2. m
3. $\genfrac{}{}{0.1ex}{}{n}{m}$
4. $\genfrac{}{}{0.1ex}{}{m}{n}$
1. Consider using bucket sort to sort n numbers evenly distributed over the range 0 to m. Roughly, what size bucket should you use?
1. m
2. $\genfrac{}{}{0.1ex}{}{m}{n}$
3. n
4. $\genfrac{}{}{0.1ex}{}{n}{m}$
1. Consider using bucket sort to sort n non-negative numbers evenly distributed over the range 0 to m. The range of the first bucket should be:
1. 0 to $\genfrac{}{}{0.1ex}{}{n}{m}$
2. $\genfrac{}{}{0.1ex}{}{m}{n}$ to $2\genfrac{}{}{0.1ex}{}{m}{n}$
3. $0$ to $\genfrac{}{}{0.1ex}{}{m}{n}$
4. $\genfrac{}{}{0.1ex}{}{n}{m}$ to $\genfrac{}{}{0.1ex}{}{2n}{m}$
1. Consider using bucket sort to sort n non-negative numbers evenly distributed over the range 0 to m. The average count of numbers in a bucket is:
1. 0
2. $\genfrac{}{}{0.1ex}{}{n}{m}$
3. $\genfrac{}{}{0.1ex}{}{m}{n}$
4. 1
1. Consider a bucket sort that uses insertion sort to sort the individual buckets. Suppose you could bound the maximum count of numbers in a bucket to a constant C. Then the asymptotic running time to sort one bucket would be:
1. $\Theta \left(1\right)$
2. $\Theta \left(n\mathrm{log}n\right)$, because insertion sort is log-linear in the average case
3. $\Theta \left(n\right)$, because insertion sort is linear in the best case
4. $\Theta \left({n}^{2}\right)$, because insertion sort is quadratic in the worst case
1. Consider using bucket sort to sort n non-negative numbers evenly distributed over the range 0 to m. The expected running time is:
1. constant
2. log-linear
3. linear
1. Consider using bucket sort to sort n non-negative numbers in the range 0 to m with no a priori knowledge of the distribution of the numbers. Using insertion sort as the auxiliary sort, the worst case running time is:
1. quadratic
2. constant
3. cubic
4. linear
1. T or F: Bucket sort can be used to sort n decimal numbers uniformly distributed over the range of zero to ${n}^{5}$ in linear time.
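
The bucket-count and bucket-range questions above can be sketched in Python. This is a minimal sketch under the stated assumptions (n buckets of width m/n); it uses `sorted()` in place of an explicit insertion sort, and the names are mine:

```python
def bucket_sort(A, m):
    """Sort len(A) numbers assumed evenly distributed over [0, m].

    Uses n buckets, each covering a range of width m/n, so the expected
    occupancy of a bucket is one element; each bucket is then finished
    with a comparison sort (an insertion sort in the classic analysis).
    """
    n = len(A)
    buckets = [[] for _ in range(n)]
    for x in A:
        # Index of the bucket whose range [i*m/n, (i+1)*m/n) holds x;
        # the min() clamps x == m into the last bucket.
        i = min(int(x * n / m), n - 1)
        buckets[i].append(x)
    result = []
    for b in buckets:
        result.extend(sorted(b))
    return result
```
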

Concept: recurrence equations

1. Stooge sort has the following algorithm. Recursively sort the lower two-thirds of an array, then recursively sort the upper two-thirds, then recursively sort the lower two-thirds again. The recursion stops when the array consists of two or fewer elements. If the array size is two, the elements are swapped if necessary. Which of the following recurrence equations describe stooge sort?
1. $T\left(n\right)=3T\left(2\genfrac{}{}{0.1ex}{}{n}{3}\right)+\Theta \left(1\right)$
2. $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(\mathrm{log}n\right)$
3. $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(n\right)$
4. $T\left(n\right)=2T\left(3\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(1\right)$
5. $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+\Theta \left(1\right)$
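
The stooge sort description above translates almost line for line into Python; a sketch (in-place, with my index conventions):

```python
def stooge_sort(A, lo=0, hi=None):
    """Sort A[lo..hi] in place: recursively sort the lower two-thirds,
    then the upper two-thirds, then the lower two-thirds again. With
    two elements, just swap if they are out of order.
    """
    if hi is None:
        hi = len(A) - 1
    size = hi - lo + 1
    if size == 2 and A[lo] > A[hi]:
        A[lo], A[hi] = A[hi], A[lo]
    if size <= 2:
        return A
    third = size // 3
    stooge_sort(A, lo, hi - third)    # lower two-thirds
    stooge_sort(A, lo + third, hi)    # upper two-thirds
    stooge_sort(A, lo, hi - third)    # lower two-thirds again
    return A
```

Note the three recursive calls on problems of size roughly 2n/3, which is what the correct recurrence above must capture.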
1. Merge sort has the following algorithm. Recursively sort the lower half of an array, then recursively sort the upper half, then merge the two halves. When merging, a single pass is made through each half. The recursion stops when the array consists of one element. Which of the following recurrence equations describe merge sort?
1. $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(1\right)$
2. $T\left(n\right)=T\left(n-1\right)+\Theta \left(n\right)$
3. $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(1\right)$
4. $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(n\right)$
1. Binary search has the following algorithm. Look at the middle element in an array. If the element is the one being looked for (the target), return FOUND. Otherwise, if the element is greater than the target, recursively search the lower half of the array. Otherwise, recursively search the upper half of the array. If the array consists of three or fewer elements, perform a linear search of the array, returning FOUND if found and MISSING otherwise. Which of the following recurrence equations describe binary search?
1. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(1\right)$
2. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(n\right)$
3. $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(1\right)$
4. $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(n\right)$
5. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(\mathrm{log}n\right)$
6. $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(\mathrm{log}n\right)$
7. $T\left(n\right)=2T\left(n-1\right)+\Theta \left(1\right)$
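
The binary search variant described above, with its linear-scan base case of three or fewer elements, can be sketched in Python (names and index conventions are mine):

```python
FOUND, MISSING = "FOUND", "MISSING"

def binary_search(A, target, lo=0, hi=None):
    """Search sorted A[lo..hi] for target: recurse into one half, but
    fall back to a linear scan once three or fewer elements remain."""
    if hi is None:
        hi = len(A) - 1
    if hi - lo + 1 <= 3:                      # linear-search base case
        return FOUND if target in A[lo:hi + 1] else MISSING
    mid = (lo + hi) // 2
    if A[mid] == target:
        return FOUND
    if A[mid] > target:
        return binary_search(A, target, lo, mid - 1)
    return binary_search(A, target, mid + 1, hi)
```

Only one half is searched per call, which is the key fact the recurrence must reflect.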
1. Quicksort can have the following algorithm: find the median value of the array and partition the array into two regions, one region holds all the values less than the median and the other holds all values greater than the median. Median finding and partitioning can both be done in linear time. Finally, the two regions are recursively sorted. Which of the following recurrence equations describe this version of quicksort?
1. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left({n}^{2}\right)$
2. $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(1\right)$
3. $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(n\right)$
4. $T\left(n\right)=2T\left(n-1\right)+\Theta \left(1\right)$
5. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(n\right)$
6. $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(\mathrm{log}n\right)$
7. $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left({n}^{2}\right)$
1. The recurrence equation $T\left(n\right)=T\left(n-1\right)+\Theta \left(n\right)$ describes which sort, in the worst case?
1. quick sort
2. merge sort
3. heap sort
1. Consider using selection sort to sort a previously sorted array. Which of the following recurrence equations describes this situation?
1. $T\left(n\right)=2T\left(n-1\right)+\Theta \left(1\right)$
2. $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(n\right)$
3. $T\left(n\right)=2T\left(n-1\right)+\Theta \left(n\right)$
4. $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(1\right)$
5. $T\left(n\right)=T\left(n-1\right)+\Theta \left(1\right)$
6. $T\left(n\right)=T\left(n-1\right)+\Theta \left(n\right)$

Concept: recursion trees

Level 0 refers to the apex of the recursion tree. Level 1 refers to the level immediately below level 0, and so on. Assume that at the bottom level, the size of the problem is 1. Assume that it takes $\Theta \left(1\right)$ time to solve a problem of size 1. Assume the problem size is always evenly divisible when divided.
1. Given the recurrence $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$, what is the work done at the level 0?
1. ${4}^{{\mathrm{log}}_{2}n}$
2. $4\mathrm{log}\genfrac{}{}{0.1ex}{}{n}{2}$
3. ${n}^{4}$
4. ${n}^{{\mathrm{log}}_{2}4}$
5. $4n$
6. $\mathrm{log}n$
1. Given the recurrence $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$, what is the work done at level 1?
1. $2n$
2. $\mathrm{log}n$
3. ${\genfrac{}{}{0.1ex}{}{n}{2}}^{{\mathrm{log}}_{2}4}$
4. ${4}^{{\mathrm{log}}_{2}\genfrac{}{}{0.1ex}{}{n}{2}}$
5. $4\mathrm{log}\genfrac{}{}{0.1ex}{}{n}{2}$
6. ${n}^{4}$
1. Given the recurrence $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+n$, how many nodes are at level 2?
1. 16
2. 4
3. 8
4. 2
5. 6
6. 12
1. Given the recurrence $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+n$, how many nodes are at the bottom level?
1. ${4}^{{\mathrm{log}}_{3}n}$
2. ${4}^{{\mathrm{log}}_{4}n}$
3. ${3}^{{\mathrm{log}}_{4}n}$
4. ${3}^{{\mathrm{log}}_{3}n}$
1. Given the recurrence $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+\mathrm{log}n$, how much work is done at the bottom level?
1. $\Theta \left({4}^{{\mathrm{log}}_{4}n}\mathrm{log}n\right)$
2. $\Theta \left({4}^{{\mathrm{log}}_{3}n}\right)$
3. $\Theta \left({4}^{{\mathrm{log}}_{3}n}\mathrm{log}n\right)$
4. $\Theta \left({3}^{{\mathrm{log}}_{3}n}\mathrm{log}n\right)$
5. $\Theta \left({3}^{{\mathrm{log}}_{4}n}\mathrm{log}n\right)$
6. $\Theta \left({3}^{{\mathrm{log}}_{3}n}\right)$
7. $\Theta \left({3}^{{\mathrm{log}}_{4}n}\right)$
8. $\Theta \left({4}^{{\mathrm{log}}_{4}n}\right)$
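
The node and work counts asked about above all follow one pattern. For a recurrence of the form $T(n) = a\,T(n/b) + f(n)$, a sketch:

```latex
\text{Level } i \text{ has } a^{i} \text{ nodes, each a subproblem of size } n/b^{i},
\text{ so the work at level } i \text{ is } a^{i} f(n/b^{i}).
\text{ The bottom level sits at depth } \log_b n \text{ and contains }
a^{\log_b n} = n^{\log_b a} \text{ nodes.}
```
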

Concept: identifying cases of the master recurrence theorem

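As a reference for the questions in this section, here is one common statement of the theorem (the CLRS form; exact case conditions vary slightly between texts). For $T(n) = a\,T(n/b) + f(n)$ with $a \ge 1$ and $b > 1$, compare $f(n)$ against $n^{\log_b a}$:

```latex
c = \log_b a, \quad \epsilon > 0:
\begin{array}{lll}
\text{Case 1:} & f(n) = O(n^{c-\epsilon}) & \Rightarrow\ T(n) = \Theta(n^{c}) \\
\text{Case 2:} & f(n) = \Theta(n^{c}) & \Rightarrow\ T(n) = \Theta(n^{c}\log n) \\
\text{Case 3:} & f(n) = \Omega(n^{c+\epsilon}) \text{ and } a f(n/b) \le k f(n) \text{ for some } k < 1 & \Rightarrow\ T(n) = \Theta(f(n))
\end{array}
```

A recurrence falls "between" two cases when $f(n)$ sits in the polynomial gap the $\epsilon$ requirement leaves open, e.g. $f(n) = n^{c}\log n$.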
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+{n}^{2}$ fall?
1. between case 2 and case 3
2. case 3
3. between case 1 and case 2
4. the equation is not in the correct form
5. case 1
6. case 2
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{2}$ fall?
1. between case 1 and case 2
2. case 3
3. case 1
4. between case 2 and case 3
5. case 2
6. the equation is not in the correct form
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{2}$ fall?
1. the equation is not in the correct form
2. case 1
3. between case 2 and case 3
4. case 3
5. between case 1 and case 2
6. case 2
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{2}^{n}$ fall?
1. case 2
2. between case 2 and case 3
3. case 3
4. between case 1 and case 2
5. case 1
6. the equation is not in the correct form
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)={2}^{n}T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{n}$ fall?
1. case 3
2. between case 1 and case 2
3. the equation is not in the correct form
4. between case 2 and case 3
5. case 1
6. case 2
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=T\left(n-1\right)+1$ fall?
1. the equation is not in the correct form
2. case 3
3. case 2
4. between case 1 and case 2
5. case 1
6. between case 2 and case 3
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=2T\left(n-2\right)+n$ fall?
1. case 3
2. between case 1 and case 2
3. the equation is not in the correct form
4. between case 2 and case 3
5. case 1
6. case 2
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=16T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+n$ fall?
1. case 2
2. between case 2 and case 3
3. between case 1 and case 2
4. case 3
5. the equation is not in the correct form
6. case 1
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n\mathrm{log}n$ fall?
1. between case 1 and case 2
2. case 3
3. the equation is not in the correct form
4. case 2
5. between case 2 and case 3
6. case 1
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n\mathrm{log}n$ fall?
1. case 2
2. case 3
3. between case 2 and case 3
4. between case 1 and case 2
5. case 1
6. the equation is not in the correct form
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\genfrac{}{}{0.1ex}{}{n}{\mathrm{log}n}$ fall?
1. the equation is not in the correct form
2. between case 1 and case 2
3. between case 2 and case 3
4. case 1
5. case 2
6. case 3
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+{n}^{0.51}$ fall?
1. case 2
2. the equation is not in the correct form
3. case 1
4. case 3
5. between case 2 and case 3
6. between case 1 and case 2
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=\genfrac{}{}{0.1ex}{}{1}{2}T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+1$ fall?
1. case 1
2. case 3
3. case 2
4. between case 2 and case 3
5. the equation is not in the correct form
6. between case 1 and case 2
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=16T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+n!$ fall?
1. the equation is not in the correct form
2. case 3
3. case 2
4. case 1
5. between case 1 and case 2
6. between case 2 and case 3
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=\sqrt{2}T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$ fall?
1. the equation is not in the correct form
2. between case 1 and case 2
3. case 1
4. case 2
5. between case 2 and case 3
6. case 3
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n$ fall?
1. case 3
2. case 1
3. between case 1 and case 2
4. between case 2 and case 3
5. case 2
6. the equation is not in the correct form
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+\sqrt{n}$ fall?
1. case 1
2. case 2
3. case 3
4. between case 1 and case 2
5. between case 2 and case 3
6. the equation is not in the correct form
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n$ fall?
1. case 1
2. case 3
3. the equation is not in the correct form
4. case 2
5. between case 2 and case 3
6. between case 1 and case 2
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+n\mathrm{log}n$ fall?
1. between case 2 and case 3
2. case 1
3. the equation is not in the correct form
4. case 2
5. case 3
6. between case 1 and case 2
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+\genfrac{}{}{0.1ex}{}{n}{2}$ fall?
1. the equation is not in the correct form
2. between case 1 and case 2
3. case 3
4. between case 2 and case 3
5. case 1
6. case 2
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=6T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+{n}^{2}\mathrm{log}n$ fall?
1. case 2
2. between case 1 and case 2
3. case 1
4. the equation is not in the correct form
5. case 3
6. between case 2 and case 3
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\genfrac{}{}{0.1ex}{}{n}{\mathrm{log}n}$ fall?
1. the equation is not in the correct form
2. between case 1 and case 2
3. between case 2 and case 3
4. case 3
5. case 1
6. case 2
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=64T\left(\genfrac{}{}{0.1ex}{}{n}{8}\right)-\genfrac{}{}{0.1ex}{}{n}{\mathrm{log}n}$ fall?
1. the equation is not in the correct form
2. case 1
3. between case 2 and case 3
4. between case 1 and case 2
5. case 2
6. case 3
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=7T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{7}}\right)+{n}^{2}$ fall?
1. case 3
2. case 2
3. between case 2 and case 3
4. between case 1 and case 2
5. case 1
6. the equation is not in the correct form
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=7T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{2}$ fall?
1. case 2
2. between case 2 and case 3
3. case 3
4. between case 1 and case 2
5. the equation is not in the correct form
6. case 1
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=7T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+{n}^{2}$ fall?
1. between case 2 and case 3
2. case 1
3. case 2
4. the equation is not in the correct form
5. case 3
6. between case 1 and case 2
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$ fall?
1. between case 1 and case 2
2. case 2
3. the equation is not in the correct form
4. case 3
5. case 1
6. between case 2 and case 3
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$ fall?
1. case 2
2. between case 2 and case 3
3. case 3
4. case 1
5. the equation is not in the correct form
6. between case 1 and case 2
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n$ fall?
1. the equation is not in the correct form
2. case 3
3. between case 2 and case 3
4. case 1
5. case 2
6. between case 1 and case 2
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt[3]{5}}\right)+{n}^{3}$ fall?
1. the equation is not in the correct form
2. case 2
3. between case 2 and case 3
4. case 1
5. between case 1 and case 2
6. case 3
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt[3]{5}}\right)+\genfrac{}{}{0.1ex}{}{{n}^{3}}{\mathrm{log}n}$ fall?
1. the equation is not in the correct form
2. case 2
3. between case 2 and case 3
4. between case 1 and case 2
5. case 3
6. case 1
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{5}}\right)+{n}^{2}\mathrm{log}n$ fall?
1. between case 2 and case 3
2. between case 1 and case 2
3. case 1
4. the equation is not in the correct form
5. case 3
6. case 2
1. In terms of the master recurrence theorem, where does the equation $T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{5}}\right)+n\mathrm{log}n$ fall?
1. between case 1 and case 2
2. case 2
3. case 3
4. the equation is not in the correct form
5. case 1
6. between case 2 and case 3

Concept: using the master recurrence theorem

1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+{n}^{2}$?
1. the master theorem cannot be used
2. $\Theta \left({n}^{2}\mathrm{log}n\right)$
3. $\Theta \left({n}^{2}\right)$
4. $\Theta \left({n}^{3}\right)$
5. $\Theta \left({n}^{3}\mathrm{log}n\right)$
6. $\Theta \left({n}^{4}\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{2}$?
1. $\Theta \left({n}^{\genfrac{}{}{0.1ex}{}{3}{2}}\right)$
2. $\Theta \left({n}^{2}\right)$
3. $\Theta \left({n}^{2}\mathrm{log}n\right)$
4. $\Theta \left({n}^{3}\right)$
5. the master theorem cannot be used
6. $\Theta \left({n}^{3}\mathrm{log}n\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{2}$?
1. $\Theta \left({n}^{2}\right)$
2. $\Theta \left({n}^{3}\right)$
3. $\Theta \left({n}^{4}\right)$
4. $\Theta \left({n}^{3}\mathrm{log}n\right)$
5. $\Theta \left({n}^{2}\mathrm{log}n\right)$
6. the master theorem cannot be used
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{2}^{n}$?
1. $\Theta \left({2}^{{n}^{\mathrm{log}n}}\right)$
2. $\Theta \left({n}^{2}\mathrm{log}n\right)$
3. the master theorem cannot be used
4. $\Theta \left({n}^{2}\right)$
5. $\Theta \left({2}^{n}\right)$
6. $\Theta \left({2}^{n}\mathrm{log}n\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)={2}^{n}T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{n}$?
1. $\Theta \left({n}^{n}\right)$
2. $\Theta \left({2}^{n}\right)$
3. $\Theta \left({2}^{n}\mathrm{log}n\right)$
4. $\Theta \left({n}^{n}\mathrm{log}n\right)$
5. $\Theta \left({n}^{{n}^{\mathrm{log}n}}\right)$
6. the master theorem cannot be used
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=T\left(n-1\right)+1$?
1. $\Theta \left(n\mathrm{log}n\right)$
2. $\Theta \left(1\right)$
3. $\Theta \left(\mathrm{log}{n}^{\mathrm{log}n}\right)$
4. the master theorem cannot be used
5. $\Theta \left(\mathrm{log}n\right)$
6. $\Theta \left(n\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=2T\left(n-2\right)+n$?
1. the master theorem cannot be used
2. $\Theta \left(n\right)$
3. $\Theta \left(n\mathrm{log}n\right)$
4. $\Theta \left(\mathrm{log}n\right)$
5. $\Theta \left({\left(\mathrm{log}n\right)}^{2}\right)$
6. $\Theta \left(1\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=16T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+n$?
1. $\Theta \left(n\right)$
2. $\Theta \left(n\mathrm{log}n\right)$
3. $\Theta \left(\mathrm{log}n\right)$
4. $\Theta \left({n}^{2}\right)$
5. $\Theta \left({n}^{2}\mathrm{log}n\right)$
6. the master theorem cannot be used
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n\mathrm{log}n$?
1. $\Theta \left(n\right)$
2. $\Theta \left(n\mathrm{log}n\right)$
3. $\Theta \left(\mathrm{log}n\right)$
4. $\Theta \left({n}^{2}\right)$
5. $\Theta \left({n}^{2}\mathrm{log}n\right)$
6. the master theorem cannot be used
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n\mathrm{log}n$?
1. $\Theta \left({n}^{{\mathrm{log}}_{2}\left(3\right)}\right)$
2. $\Theta \left(n\mathrm{log}n\right)$
3. the master theorem cannot be used
4. $\Theta \left({n}^{{\mathrm{log}}_{3}\left(2\right)}\right)$
5. $\Theta \left({n}^{\genfrac{}{}{0.1ex}{}{3}{2}}\right)$
6. $\Theta \left(n{\mathrm{log}}_{\genfrac{}{}{0.1ex}{}{3}{2}}\left(n\right)\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\genfrac{}{}{0.1ex}{}{n}{\mathrm{log}n}$?
1. $\Theta \left({n}^{2}\right)$
2. $\Theta \left(n\mathrm{log}n\right)$
3. $\Theta \left({n}^{2}\mathrm{log}n\right)$
4. the master theorem cannot be used
5. $\Theta \left(\mathrm{log}n\right)$
6. $\Theta \left(n\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+{n}^{0.51}$?
1. the master theorem cannot be used
2. $\Theta \left(\sqrt{n}\right)$
3. $\Theta \left(\mathrm{log}n\right)$
4. $\Theta \left(n\right)$
5. $\Theta \left({n}^{0.51}\right)$
6. $\Theta \left(\sqrt{n}\mathrm{log}n\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=\genfrac{}{}{0.1ex}{}{1}{2}T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+1$?
1. the master theorem cannot be used
2. $\Theta \left({n}^{\genfrac{}{}{0.1ex}{}{1}{2}}\right)$
3. $\Theta \left({\mathrm{log}}_{2}\genfrac{}{}{0.1ex}{}{1}{2}\right)$
4. $\Theta \left(\mathrm{log}n\right)$
5. $\Theta \left(n\right)$
6. $\Theta \left({n}^{{\mathrm{log}}_{2}\genfrac{}{}{0.1ex}{}{1}{2}}\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=16T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+n!$?
1. $\Theta \left(n!\right)$
2. the master theorem cannot be used
3. $\Theta \left({n}^{2}n!\right)$
4. $\Theta \left(\left({n}^{2}\right)!\right)$
5. $\Theta \left({n}^{2}\right)$
6. $\Theta \left(n!\mathrm{log}n\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=\sqrt{2}T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$?
1. $\Theta \left(\mathrm{log}n\right)$
2. $\Theta \left(n\mathrm{log}n\right)$
3. $\Theta \left(\genfrac{}{}{0.1ex}{}{n}{\mathrm{log}\left(2\right)}\right)$
4. $\Theta \left(\sqrt{n}\right)$
5. $\Theta \left(\sqrt{n}\mathrm{log}n\right)$
6. the master theorem cannot be used
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n$?
1. $\Theta \left({n}^{{\mathrm{log}}_{2}\left(3\right)}\right)$
2. $\Theta \left(n\right)$
3. $\Theta \left(n{\mathrm{log}}_{\genfrac{}{}{0.1ex}{}{3}{2}}n\right)$
4. the master theorem cannot be used
5. $\Theta \left(n\mathrm{log}n\right)$
6. $\Theta \left({n}^{{\mathrm{log}}_{3}\left(2\right)}\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+\sqrt{n}$?
1. $\Theta \left(n\mathrm{log}\sqrt{n}\right)$
2. $\Theta \left(n\mathrm{log}n\right)$
3. $\Theta \left(\sqrt{n}\mathrm{log}n\right)$
4. $\Theta \left(\mathrm{log}n\right)$
5. $\Theta \left(n\right)$
6. the master theorem cannot be used
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n$?
1. the master theorem cannot be used
2. $\Theta \left(n\mathrm{log}n\right)$
3. $\Theta \left(n\right)$
4. $\Theta \left({n}^{2}\right)$
5. $\Theta \left(\mathrm{log}n\right)$
6. $\Theta \left({n}^{2}\mathrm{log}n\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+n\mathrm{log}n$?
1. $\Theta \left(n\right)$
2. $\Theta \left(\mathrm{log}n\right)$
3. the master theorem cannot be used
4. $\Theta \left({n}^{{\mathrm{log}}_{4}\left(3\right)}\right)$
5. $\Theta \left(n\mathrm{log}n\right)$
6. $\Theta \left({n}^{{\mathrm{log}}_{3}\left(4\right)}\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+\genfrac{}{}{0.1ex}{}{n}{2}$?
1. $\Theta \left({n}^{2}\mathrm{log}n\right)$
2. the master theorem cannot be used
3. $\Theta \left(n\right)$
4. $\Theta \left(\mathrm{log}n\right)$
5. $\Theta \left({n}^{2}\right)$
6. $\Theta \left(n\mathrm{log}n\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=6T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+{n}^{2}\mathrm{log}n$?
1. $\Theta \left({n}^{{\mathrm{log}}_{3}\left(6\right)}\mathrm{log}n\right)$
2. the master theorem cannot be used
3. $\Theta \left({n}^{{\mathrm{log}}_{6}\left(3\right)}\mathrm{log}n\right)$
4. $\Theta \left({n}^{{\mathrm{log}}_{3}\left(6\right)}\right)$
5. $\Theta \left({n}^{2}\mathrm{log}n\right)$
6. $\Theta \left({n}^{{\mathrm{log}}_{6}\left(3\right)}\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\genfrac{}{}{0.1ex}{}{n}{\mathrm{log}n}$?
1. $\Theta \left({n}^{2}\right)$
2. $\Theta \left(\mathrm{log}n\right)$
3. $\Theta \left(\genfrac{}{}{0.1ex}{}{{n}^{2}}{\mathrm{log}n}\right)$
4. the master theorem cannot be used
5. $\Theta \left(n\mathrm{log}n\right)$
6. $\Theta \left({n}^{2}\mathrm{log}n\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=64T\left(\genfrac{}{}{0.1ex}{}{n}{8}\right)-\genfrac{}{}{0.1ex}{}{n}{\mathrm{log}n}$?
1. $\Theta \left(n\mathrm{log}n\right)$
2. $\Theta \left({n}^{2}\right)$
3. $\Theta \left({n}^{2}\mathrm{log}n\right)$
4. the master theorem cannot be used
5. $\Theta \left(\mathrm{log}n\right)$
6. $\Theta \left(n\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=7T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{7}}\right)+{n}^{2}$?
1. $\Theta \left(n\mathrm{log}n\right)$
2. $\Theta \left(\genfrac{}{}{0.1ex}{}{n}{{\mathrm{log}}_{\sqrt{7}}\left(7\right)}\right)$
3. $\Theta \left({n}^{2}\right)$
4. $\Theta \left({n}^{{\mathrm{log}}_{\sqrt{7}}\left(7\right)}\right)$
5. $\Theta \left({n}^{2}\mathrm{log}n\right)$
6. the master theorem cannot be used
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=7T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{2}$?
1. $\Theta \left({n}^{2+{\mathrm{log}}_{2}\left(7\right)}\right)$
2. $\Theta \left({n}^{2+{\mathrm{log}}_{7}\left(2\right)}\right)$
3. $\Theta \left({n}^{{\mathrm{log}}_{2}\left(7\right)}\right)$
4. the master theorem cannot be used
5. $\Theta \left({n}^{2}\right)$
6. $\Theta \left({n}^{{\mathrm{log}}_{7}\left(2\right)}\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=7T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+{n}^{2}$?
1. $\Theta \left({n}^{2+{\mathrm{log}}_{3}\left(7\right)}\right)$
2. $\Theta \left({n}^{{\mathrm{log}}_{3}\left(7\right)}\right)$
3. the master theorem cannot be used
4. $\Theta \left({n}^{2+{\mathrm{log}}_{7}\left(3\right)}\right)$
5. $\Theta \left({n}^{2}\right)$
6. $\Theta \left({n}^{{\mathrm{log}}_{7}\left(3\right)}\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$?
1. $\Theta \left({n}^{2}\mathrm{log}n\right)$
2. the master theorem cannot be used
3. $\Theta \left(n\mathrm{log}n\right)$
4. $\Theta \left(n\right)$
5. $\Theta \left({n}^{2}\right)$
6. $\Theta \left(\mathrm{log}n\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$?
1. $\Theta \left({n}^{{\mathrm{log}}_{5}\left(2\right)}\right)$
2. $\Theta \left(\mathrm{log}n\right)$
3. $\Theta \left({n}^{{\mathrm{log}}_{5}\left(2\right)}\mathrm{log}n\right)$
4. $\Theta \left({n}^{{\mathrm{log}}_{2}\left(5\right)}\mathrm{log}n\right)$
5. the master theorem cannot be used
6. $\Theta \left({n}^{{\mathrm{log}}_{2}\left(5\right)}\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n$?
1. the master theorem cannot be used
2. $\Theta \left({n}^{{\mathrm{log}}_{5}\left(2\right)}\right)$
3. $\Theta \left({n}^{{\mathrm{log}}_{2}\left(5\right)}\mathrm{log}n\right)$
4. $\Theta \left({n}^{{\mathrm{log}}_{2}\left(5\right)}\right)$
5. $\Theta \left({n}^{{\mathrm{log}}_{5}\left(2\right)}\mathrm{log}n\right)$
6. $\Theta \left(n\mathrm{log}n\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt[3]{5}}\right)+{n}^{3}$?
1. $\Theta \left({n}^{{\mathrm{log}}_{5}\sqrt[3]{5}}\right)$
2. $\Theta \left({n}^{{\mathrm{log}}_{5}\sqrt[3]{5}}×\mathrm{log}n\right)$
3. $\Theta \left({n}^{{\mathrm{log}}_{\sqrt[3]{5}}\left(5\right)}×\mathrm{log}n\right)$
4. the master theorem cannot be used
5. $\Theta \left({n}^{3}\mathrm{log}n\right)$
6. $\Theta \left({n}^{{\mathrm{log}}_{\sqrt[3]{5}}\left(5\right)}\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt[3]{5}}\right)+\genfrac{}{}{0.1ex}{}{{n}^{3}}{\mathrm{log}n}$?
1. $\Theta \left({n}^{3}\mathrm{log}n\right)$
2. $\Theta \left({n}^{{\mathrm{log}}_{5}\sqrt[3]{5}}\right)$
3. $\Theta \left({n}^{{\mathrm{log}}_{5}\sqrt[3]{5}}×\mathrm{log}n\right)$
4. $\Theta \left({n}^{{\mathrm{log}}_{\sqrt[3]{5}}\left(5\right)}×\mathrm{log}n\right)$
5. $\Theta \left({n}^{{\mathrm{log}}_{\sqrt[3]{5}}\left(5\right)}\right)$
6. the master theorem cannot be used
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{5}}\right)+{n}^{2}\mathrm{log}n$?
1. the master theorem cannot be used
2. $\Theta \left({n}^{2}\mathrm{log}n\right)$
3. $\Theta \left({n}^{{\mathrm{log}}_{5}\sqrt{5}}×\mathrm{log}n\right)$
4. $\Theta \left({n}^{{\mathrm{log}}_{5}\sqrt{5}}\right)$
5. $\Theta \left({n}^{{\mathrm{log}}_{\sqrt{5}}\left(5\right)}\right)$
6. $\Theta \left({n}^{{\mathrm{log}}_{\sqrt{5}}\left(5\right)}×\mathrm{log}n\right)$
1. Using the master recurrence theorem, what is the solution of $T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{5}}\right)+n\mathrm{log}n$?
1. $\Theta \left({n}^{2+{\mathrm{log}}_{5}\sqrt{5}}\right)$
2. $\Theta \left({n}^{{\mathrm{log}}_{5}\sqrt{5}}×\mathrm{log}n\right)$
3. the master theorem cannot be used
4. $\Theta \left({n}^{2}\right)$
5. $\Theta \left({n}^{2+{\mathrm{log}}_{\sqrt{5}}\left(5\right)}\right)$
6. $\Theta \left({n}^{{\mathrm{log}}_{\sqrt{5}}\left(5\right)}×\mathrm{log}n\right)$
7. $\Theta \left(n\mathrm{log}n\right)$
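
Many of the recurrences above have driving functions of the pure polynomial form f(n) = n^k; for those, the answer follows mechanically from comparing k to the critical exponent log_b(a). A minimal Python sketch of that comparison (the function name is my own, and it deliberately handles only f(n) = n^k, not the logarithmic, exponential, or factorial driving functions in the other questions):

```python
import math

def master_theorem(a, b, k):
    """Classify T(n) = a*T(n/b) + n**k by the master theorem.

    Handles only pure polynomial driving functions n**k; returns a
    Theta bound as a string.
    """
    crit = math.log(a, b)              # critical exponent log_b(a)
    if abs(k - crit) < 1e-9:           # case 2: exponents match
        return f"Theta(n^{crit:g} log n)"
    if k < crit:                       # case 1: f is polynomially smaller
        return f"Theta(n^{crit:g})"
    # case 3: f is polynomially larger; for f(n) = n**k regularity holds
    # automatically, since a*(n/b)**k = (a / b**k) * n**k with a / b**k < 1
    return f"Theta(n^{k:g})"
```

For example, `master_theorem(4, 4, 2)` and `master_theorem(3, 2, 2)` both land in case 3, while `master_theorem(4, 2, 2)` lands in case 2.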

Determining $\epsilon$ in the M.R.T.

1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+\mathrm{log}n$. What is the largest listed value of $\epsilon$ that can be used in the solution?
1. no legal value is listed
2. $\epsilon$ is not used in the solution
3. the master theorem cannot be used
4. 0.5
5. 0.25
6. 0.1
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{2}$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 2) of 3 is ~1.585 and the log (base 3) of 2 is ~0.631.
1. no legal value is listed
2. 0.25
3. 0.1
4. the master theorem cannot be used
5. $\epsilon$ is not used in the solution
6. 0.5
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{2}$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 4) of 2 is 0.5 and the log (base 2) of 4 is 2.
1. the master theorem cannot be used
2. $\epsilon$ is not used in the solution
3. 0.25
4. no legal value is listed
5. 0.5
6. 1.0
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{2}^{n}$. What is the largest listed value of $\epsilon$ that can be used in the solution?
1. 0.5
2. 1.0
3. 0.1
4. the master theorem cannot be used
5. $\epsilon$ is not used in the solution
6. no legal value is listed
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)={2}^{n}T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{n}$. What is the largest listed value of $\epsilon$ that can be used in the solution?
1. 1.0
2. no legal value is listed
3. 0.5
4. the master theorem cannot be used
5. $\epsilon$ is not used in the solution
6. 0.1
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=16T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+n$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 16) of 4 is 0.5 and the log (base 4) of 16 is 2.
1. 0.25
2. 0.75
3. no legal value is listed
4. $\epsilon$ is not used in the solution
5. the master theorem cannot be used
6. 0.5
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n\mathrm{log}n$. What is the largest listed value of $\epsilon$ that can be used in the solution?
1. $\epsilon$ is not used in the solution
2. no legal value is listed
3. 0.75
4. 0.5
5. the master theorem cannot be used
6. 1.0
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n\mathrm{log}n$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 2) of 3 is ~1.585 and the log (base 3) of 2 is ~0.631.
1. $\epsilon$ is not used in the solution
2. 0.25
3. no legal value is listed
4. the master theorem cannot be used
5. 0.5
6. 1.0
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=8T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 8) of 2 is ~0.333 and the log (base 2) of 8 is 3.
1. 0.025
2. the master theorem cannot be used
3. no legal value is listed
4. $\epsilon$ is not used in the solution
5. 0.100
6. 0.05
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+{n}^{0.51}$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 4) of 2 is 0.5 and the log (base 2) of 4 is 2.
1. 0.075
2. the master theorem cannot be used
3. $\epsilon$ is not used in the solution
4. no legal value is listed
5. 0.100
6. 0.050
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=\genfrac{}{}{0.1ex}{}{1}{2}T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+1$. What is the largest listed value of $\epsilon$ that can be used in the solution?
1. the master theorem cannot be used
2. 0.05
3. $\epsilon$ is not used in the solution
4. 0.10
5. 0.01
6. no legal value is listed
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=16T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+n!$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 16) of 4 is 0.5 and the log (base 4) of 16 is 2.
1. 1
2. $\epsilon$ is not used in the solution
3. 100
4. no legal value is listed
5. 10
6. the master theorem cannot be used
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=\sqrt{2}T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 2) of $\sqrt{2}$ is 0.5 and the log (base $\sqrt{2}$) of 2 is 2.
1. 1.0
2. 0.50
3. 0.25
4. no legal value is listed
5. $\epsilon$ is not used in the solution
6. the master theorem cannot be used
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 3) of 2 is ~0.631 and the log (base 2) of 3 is ~1.585.
1. 0.25
2. $\epsilon$ is not used in the solution
3. 1.0
4. 0.10
5. the master theorem cannot be used
6. no legal value is listed
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+\sqrt{n}$. What is the largest listed value of $\epsilon$ that can be used in the solution?
1. 0.3
2. the master theorem cannot be used
3. $\epsilon$ is not used in the solution
4. 0.6
5. 0.9
6. no legal value is listed
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 2) of 4 is 2 and the log (base 4) of 2 is 0.5.
1. $\epsilon$ is not used in the solution
2. 0.9
3. 0.3
4. 0.6
5. no legal value is listed
6. the master theorem cannot be used
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+n\mathrm{log}n$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 3) of 4 is ~1.262 and the log (base 4) of 3 is ~0.792.
1. the master theorem cannot be used
2. 0.6
3. $\epsilon$ is not used in the solution
4. 0.9
5. 0.3
6. no legal value is listed
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+\genfrac{}{}{0.1ex}{}{n}{2}$. What is the largest listed value of $\epsilon$ that can be used in the solution?
1. the master theorem cannot be used
2. $\epsilon$ is not used in the solution
3. 0.6
4. 0.9
5. no legal value is listed
6. 0.3
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=6T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+{n}^{2}\mathrm{log}n$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 3) of 6 is ~1.631 and the log (base 6) of 3 is ~0.613.
1. 0.4
2. 0.3
3. no legal value is listed
4. 0.5
5. $\epsilon$ is not used in the solution
6. the master theorem cannot be used
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\genfrac{}{}{0.1ex}{}{n}{\mathrm{log}n}$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 4) of 2 is 0.5 and the log (base 2) of 4 is 2.
1. no legal value is listed
2. the master theorem cannot be used
3. 1.0
4. 0.5
5. $\epsilon$ is not used in the solution
6. 0.25
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=64T\left(\genfrac{}{}{0.1ex}{}{n}{8}\right)-\genfrac{}{}{0.1ex}{}{n}{\mathrm{log}n}$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 8) of 64 is 2 and the log (base 64) of 8 is 0.5.
1. 0.5
2. 0.25
3. $\epsilon$ is not used in the solution
4. 1.0
5. no legal value is listed
6. the master theorem cannot be used
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=7T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{7}}\right)+{n}^{2}$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 7) of $\sqrt{7}$ is 0.5 and the log (base $\sqrt{7}$) of 7 is 2.
1. 0.25
2. $\epsilon$ is not used in the solution
3. 0.5
4. no legal value is listed
5. 1.0
6. the master theorem cannot be used
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=7T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{2}$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 2) of 7 is ~2.807 and the log (base 7) of 2 is ~0.356.
1. 0.8
2. 1.0
3. no legal value is listed
4. $\epsilon$ is not used in the solution
5. 0.6
6. the master theorem cannot be used
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=7T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+{n}^{2}$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 3) of 7 is ~1.771 and the log (base 7) of 3 is ~0.565.
1. the master theorem cannot be used
2. 0.6
3. 0.8
4. 1.0
5. $\epsilon$ is not used in the solution
6. no legal value is listed
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 2) of 4 is 2 and the log (base 4) of 2 is 0.5.
1. 0.5
2. $\epsilon$ is not used in the solution
3. 1.0
4. the master theorem cannot be used
5. no legal value is listed
6. 2.0
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 5) of 2 is ~0.431 and the log (base 2) of 5 is ~2.322.
1. 1.0
2. $\epsilon$ is not used in the solution
3. 2.0
4. the master theorem cannot be used
5. 0.5
6. no legal value is listed
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 2) of 5 is ~2.322 and the log (base 5) of 2 is ~0.431.
1. 1.0
2. the master theorem cannot be used
3. no legal value is listed
4. 2.0
5. $\epsilon$ is not used in the solution
6. 0.5
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt[3]{5}}\right)+{n}^{3}$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 5) of $\sqrt[3]{5}$ is ~0.333 and the log (base $\sqrt[3]{5}$) of 5 is 3.
1. 1.0
2. the master theorem cannot be used
3. no legal value is listed
4. 2.0
5. 0.5
6. $\epsilon$ is not used in the solution
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt[3]{5}}\right)+\genfrac{}{}{0.1ex}{}{{n}^{3}}{\mathrm{log}n}$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base $\sqrt[3]{5}$) of 5 is 3 and the log (base 5) of $\sqrt[3]{5}$ is ~0.333.
1. 1.0
2. 0.5
3. no legal value is listed
4. 2.0
5. the master theorem cannot be used
6. $\epsilon$ is not used in the solution
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{5}}\right)+{n}^{2}\mathrm{log}n$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base 5) of $\sqrt{5}$ is 0.5 and the log (base $\sqrt{5}$) of 5 is 2.
1. 1.0
2. the master theorem cannot be used
3. $\epsilon$ is not used in the solution
4. 2.0
5. 0.5
6. no legal value is listed
1. Consider using the master recurrence theorem to solve the equation $T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{5}}\right)+n\mathrm{log}n$. What is the largest listed value of $\epsilon$ that can be used in the solution? The log (base $\sqrt{5}$) of 5 is 2 and the log (base 5) of $\sqrt{5}$ is 0.5.
1. no legal value is listed
2. 2.0
3. 1.0
4. 0.5
5. $\epsilon$ is not used in the solution
6. the master theorem cannot be used
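
When the driving function has the pure polynomial form f(n) = n^k, the largest legal $\epsilon$ is simply the gap between k and the critical exponent log_b(a): case 1 needs $\epsilon \le \log_b(a) - k$ and case 3 needs $\epsilon \le k - \log_b(a)$. A small sketch restricted to that polynomial form (the helper name is my own; the log-factor, exponential, and factorial questions above need separate reasoning):

```python
import math

def max_epsilon(a, b, k):
    """Largest legal epsilon for T(n) = a*T(n/b) + n**k.

    Returns (case, epsilon_max); epsilon_max is None in case 2,
    where epsilon is not used.
    """
    crit = math.log(a, b)            # critical exponent log_b(a)
    if abs(k - crit) < 1e-9:
        return ("case 2", None)      # epsilon does not appear in case 2
    if k < crit:                     # case 1: need n**k = O(n**(crit - eps))
        return ("case 1", crit - k)
    return ("case 3", k - crit)      # case 3: need n**k = Omega(n**(crit + eps))
```

For T(n) = 3T(n/2) + n^2, this gives case 3 with epsilon_max = 2 − log_2(3) ≈ 0.415, so among the choices {0.1, 0.25, 0.5} the largest legal value is 0.25.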

Determining $c$ for the M.R.T., case 3

1. Consider using the master recurrence theorem and the regularity condition for case 3 to solve the equation $T\left(n\right)=8T\left(\genfrac{}{}{0.1ex}{}{n}{8}\right)+{n}^{2}$. What is the smallest legal value of the constant c listed for the solution?
1. no legal value is listed
2. case 3 does not apply
3. $\genfrac{}{}{0.1ex}{}{1}{9}$
4. $\genfrac{}{}{0.1ex}{}{1}{5}$
5. $\genfrac{}{}{0.1ex}{}{1}{7}$
1. Consider using the master recurrence theorem and the regularity condition for case 3 to solve the equation $T\left(n\right)=7T\left(\genfrac{}{}{0.1ex}{}{n}{8}\right)+{n}^{2}$. What is the smallest legal value of the constant c listed for the solution?
1. no legal value is listed
2. case 3 does not apply
3. $\genfrac{}{}{0.1ex}{}{9}{64}$
4. $\genfrac{}{}{0.1ex}{}{7}{64}$
5. $\genfrac{}{}{0.1ex}{}{11}{64}$
1. Consider using the master recurrence theorem and the regularity condition for case 3 to solve the equation $T\left(n\right)=6T\left(\genfrac{}{}{0.1ex}{}{n}{8}\right)+{n}^{2}$. What is the smallest legal value of the constant c listed for the solution?
1. case 3 does not apply
2. $\genfrac{}{}{0.1ex}{}{5}{64}$
3. no legal value is listed
4. $\genfrac{}{}{0.1ex}{}{3}{64}$
5. $\genfrac{}{}{0.1ex}{}{1}{64}$
1. Consider using the master recurrence theorem and the regularity condition for case 3 to solve the equation $T\left(n\right)=64T\left(\genfrac{}{}{0.1ex}{}{n}{8}\right)+{n}^{2}$. What is the smallest legal value of the constant c listed for the solution?
1. no legal value is listed
2. $\genfrac{}{}{0.1ex}{}{63}{64}$
3. $\genfrac{}{}{0.1ex}{}{61}{64}$
4. case 3 does not apply
5. $\genfrac{}{}{0.1ex}{}{65}{64}$
1. Consider using the master recurrence theorem and the regularity condition for case 3 to solve the equation $T\left(n\right)=32T\left(\genfrac{}{}{0.1ex}{}{n}{8}\right)+{n}^{2}$. What is the smallest legal value of the constant c listed for the solution?
1. $\genfrac{}{}{0.1ex}{}{63}{64}$
2. no legal value is listed
3. case 3 does not apply
4. $\genfrac{}{}{0.1ex}{}{8}{64}$
5. $\genfrac{}{}{0.1ex}{}{16}{64}$
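
For f(n) = n^k the regularity condition a·f(n/b) ≤ c·f(n) pins down the smallest usable c exactly, because the ratio a·f(n/b)/f(n) is the constant a/b^k. A small sketch of that computation (the helper name is my own):

```python
from fractions import Fraction

def min_regularity_c(a, b, k):
    """Smallest c with a*f(n/b) <= c*f(n) for f(n) = n**k (integer inputs).

    For polynomial f the ratio a*f(n/b) / f(n) is the constant a / b**k,
    so that constant is the smallest legal c.  Case 3's regularity
    condition also requires c < 1, so None is returned when no c < 1
    exists (i.e., case 3 does not apply).
    """
    c = Fraction(a, b ** k)
    return c if c < 1 else None
```

For T(n) = 8T(n/8) + n^2 the smallest c is 8/8^2 = 1/8, so among the fractions listed in the first question the smallest legal choice is 1/7.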

Inductive Proofs

Here is a recursive version of mergesort, which works for all arrays with size one or greater:
```
mergeSort(array, start, end)
{
    if (end - start < 2) return;

    m = (start + end) / 2;
    mergeSort(array, start, m);
    mergeSort(array, m, end);
    merge(array, start, m, end);
}
```
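
For reference, the same routine as runnable Python, with a straightforward merge step filled in (the pseudocode assumes a merge routine exists):

```python
def merge_sort(array, start, end):
    """Sort array[start:end] in place, mirroring the pseudocode above."""
    if end - start < 2:
        return                        # size 0 or 1: already sorted
    m = (start + end) // 2
    merge_sort(array, start, m)
    merge_sort(array, m, end)
    merge(array, start, m, end)

def merge(array, start, m, end):
    """Merge the sorted runs array[start:m] and array[m:end]."""
    merged, i, j = [], start, m
    while i < m and j < end:
        if array[i] <= array[j]:
            merged.append(array[i])
            i += 1
        else:
            merged.append(array[j])
            j += 1
    merged.extend(array[i:m])         # at most one of these is non-empty
    merged.extend(array[j:end])
    array[start:end] = merged
```
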
Assume that the recurrence for merge sort is described as:
$T\left({2}^{0}\right)=1$
$T\left({2}^{n}\right)=2T\left({2}^{n-1}\right)+{2}^{n}$

Consider a proof, by induction, that $T\left({2}^{n}\right)=\left(n+1\right){2}^{n}$ where ${2}^{n}$ is the size of the array and b is the base case value of n.
1. To completely prove the base case value b, you would need to show:
1. $2T\left({2}^{b-1}\right)={2}^{b}$
2. $b{2}^{b}={2}^{b+1}$
3. $2T\left({2}^{b-1}\right)=2$
4. $b{2}^{b}=2$
5. $2T\left({2}^{b-1}\right)+{2}^{b}=\left(b+1\right){2}^{b}$
6. $T\left({2}^{b}\right)=4$
1. Assuming b is the largest base case value and you wish to prove the inductive case n, an inductive hypothesis for a strong inductive proof would be:
1. Assume true for $i.
2. Assume true for $i\le n$.
3. Assume true for $n$.
4. Assume true for $b.
5. Assume true for $b.
6. Assume true for $n+1$.
1. Assuming b is the largest base case value and you wish to prove the inductive case n+1, an inductive hypothesis for a weak inductive proof would be:
1. Assume true for $n+1$.
2. Assume true for $n-1$.
3. Assume true for $n+2$.
4. Assume true for $n$.
1. Assuming you wish to prove the inductive case n, the first line of the inductive case for a strong inductive proof would be:
1. $T\left({2}^{n-1}\right)=2T\left({2}^{\left(n-1\right)-1}\right)+{2}^{n-1}$
2. $T\left({2}^{n+1}\right)=2T\left({2}^{\left(n+1\right)-1}\right)+{2}^{n+1}$
3. $T\left({2}^{n}\right)=\left(n+1\right){2}^{n}$
4. $T\left({2}^{n}\right)=2T\left({2}^{n-1}\right)+{2}^{n}$
5. $T\left({2}^{n-1}\right)=\left(\left(n-1\right)+1\right){2}^{n-1}$
6. $T\left({2}^{n+1}\right)=\left(\left(n+1\right)+1\right){2}^{n+1}$
1. Assuming you wish to prove the inductive case n+1, the first line of the inductive case for a weak inductive proof would be:
1. $T\left({2}^{n}\right)=2T\left({2}^{n-1}\right)+{2}^{n}$
2. $T\left({2}^{n+1}\right)=2T\left({2}^{\left(n+1\right)-1}\right)+{2}^{n+1}$
3. $T\left({2}^{n+1}\right)=\left(\left(n+1\right)+1\right){2}^{n+1}$
4. $T\left({2}^{n-1}\right)=\left(\left(n-1\right)+1\right){2}^{n-1}$
5. $T\left({2}^{n-1}\right)=2T\left({2}^{\left(n-1\right)-1}\right)+{2}^{n-1}$
6. $T\left({2}^{n}\right)=\left(n+1\right){2}^{n}$
1. Assuming you wish to prove the inductive case n, the inductive step for a strong inductive proof would be:
1. substitute $T\left({2}^{\left(n+1\right)-1}\right)$ for $\left(n+1\right){2}^{n}$
2. substitute $n{2}^{n-1}$ for $T\left({2}^{n-1}\right)$
3. substitute $\left(n+1\right){2}^{n}$ for $T\left({2}^{\left(n+1\right)-1}\right)$
4. substitute $T\left({2}^{n-1}\right)$ for $n{2}^{n-1}$
5. substitute $T\left({2}^{\left(n-1\right)-1}\right)$ for $\left(n-1\right){2}^{n-2}$
6. substitute $\left(n-1\right){2}^{n-2}$ for $T\left({2}^{\left(n-1\right)-1}\right)$
1. Assuming you wish to prove the inductive case n+1, the inductive step for a weak inductive proof would be:
1. substitute $\left(n+1\right){2}^{n}$ for $T\left({2}^{\left(n+1\right)-1}\right)$
2. substitute $T\left({2}^{n-1}\right)$ for $n{2}^{n-1}$
3. substitute $n{2}^{n-1}$ for $T\left({2}^{n-1}\right)$
4. substitute $T\left({2}^{\left(n-1\right)-1}\right)$ for $\left(n-1\right){2}^{n-2}$
5. substitute $\left(n-1\right){2}^{n-2}$ for $T\left({2}^{\left(n-1\right)-1}\right)$
6. substitute $T\left({2}^{\left(n+1\right)-1}\right)$ for $\left(n+1\right){2}^{n}$
1. T or F: For strong inductive proofs, proving the inductive case makes use of the previously proved base cases.
1. T or F: Weak inductive proofs are more closely related to recursive functions than strong inductive proofs.
1. The more formal names for weak and strong inductive proofs are:
1. simple and complex
2. complex and simple
3. mathematical and complete
4. mathematical and simple
5. simple and mathematical
6. complete and mathematical

Concept: inductive proofs

1. (6 pts) Prove by strong induction, with n as the inductive case, that $\sum _{k=1}^{n}k=\genfrac{}{}{0.1ex}{}{n\left(n+1\right)}{2}$.

Base case:

Inductive Hypothesis:

Inductive Case:

1. (6 pts) Prove by strong induction, with n as the inductive case, that $T\left(n\right)=\genfrac{}{}{0.1ex}{}{n\left(n+1\right)}{2}$, where:
$T\left(0\right)=0$
$T\left(n\right)=T\left(n-1\right)+n$

Base case:

Inductive Hypothesis:

Inductive Case:
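
Before writing such a proof, the closed form can be sanity-checked against the recurrence for small n; a quick sketch:

```python
def T(n):
    """Evaluate the recurrence T(0) = 0, T(n) = T(n-1) + n directly."""
    return 0 if n == 0 else T(n - 1) + n

# the closed form n(n+1)/2 should agree with the recurrence for small n
assert all(T(n) == n * (n + 1) // 2 for n in range(100))
```

A check like this does not replace the induction, but it catches a mis-stated closed form before any proof effort is spent.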

1. (6 pts) Prove by strong induction, with n as the inductive case, that $\sum _{k=0}^{n}{k}^{2}=\genfrac{}{}{0.1ex}{}{n\left(n+1\right)\left(2n+1\right)}{6}$.

Base case:

Inductive Hypothesis:

Inductive Case:

1. (6 pts) Prove by strong induction, with n as the inductive case, that $T\left(n\right)=\genfrac{}{}{0.1ex}{}{n\left(n+1\right)\left(2n+1\right)}{6}$, where:
$T\left(0\right)=0$
$T\left(n\right)=T\left(n-1\right)+{n}^{2}$

Base case:

Inductive Hypothesis:

Inductive Case:

1. (6 pts) Prove by strong induction that $\sum _{k=0}^{n}{k}^{3}=\genfrac{}{}{0.1ex}{}{{n}^{2}{\left(n+1\right)}^{2}}{4}$.

Base case:

Inductive Hypothesis:

Inductive Case:

1. (6 pts) Prove by strong induction that $T\left(n\right)=\genfrac{}{}{0.1ex}{}{{n}^{2}{\left(n+1\right)}^{2}}{4}$, where:
$T\left(0\right)=0$
$T\left(n\right)=T\left(n-1\right)+{n}^{3}$

Base case:

Inductive Hypothesis:

Inductive Case:

1. (6 pts) Prove by strong induction that $\sum _{k=0}^{n}{x}^{k}=\genfrac{}{}{0.1ex}{}{{x}^{n+1}-1}{x-1}$ for $x\ne 1$.

Base case:

Inductive Hypothesis:

Inductive Case:

1. (6 pts) Prove by strong induction that $T\left(x,n\right)=\genfrac{}{}{0.1ex}{}{{x}^{n+1}-1}{x-1}$ for $x\ne 1$, where:
$T\left(x,0\right)=1$
$T\left(x,n\right)=T\left(x,n-1\right)+{x}^{n}$

Base case:

Inductive Hypothesis:

Inductive Case:

1. (6 pts) Prove by strong induction that the number of nodes in a perfect binary tree with $n$ levels is ${2}^{n}-1$.

Base case:

Inductive Hypothesis:

Inductive Case:

1. (6 pts) Prove by strong induction that the number of nodes in a perfect quaternary tree with $n$ levels is $\genfrac{}{}{0.1ex}{}{{4}^{n}-1}{3}$.

Base case:

Inductive Hypothesis:

Inductive Case:
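As a sanity check before the proof, the node count of a perfect quaternary tree satisfies the recurrence nodes(n) = 4·nodes(n−1) + 1 (four perfect subtrees plus the root), which can be compared against the closed form. A hypothetical sketch:

```python
def nodes(levels):
    # A perfect quaternary tree with n levels is a root plus four
    # perfect quaternary subtrees with n-1 levels each.
    if levels == 0:
        return 0
    return 1 + 4 * nodes(levels - 1)

# Agrees with the closed form (4^n - 1) / 3 on small inputs.
assert all(nodes(n) == (4 ** n - 1) // 3 for n in range(10))
```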

1. (6 pts) Prove by strong induction that the number of interior nodes in a perfect binary tree with $n$ levels is ${2}^{n-1}-1$.

Base case:

Inductive Hypothesis:

Inductive Case:

1. (6 pts) Prove by strong induction that the number of non-null child pointers in an arbitrary binary tree with $n$ nodes is $n-1$.

Base case:

Inductive Hypothesis:

Inductive Case:

Concept:rotations in a BST

1. Which of the following is true for rotations in a BST:
1. the number of leaf nodes always decreases
2. the number of leaf nodes always increases
3. the tree always becomes more balanced
4. BST-ordering is always preserved
1. Consider a right-rotation of a node n upwards in a BST. The former right child of n, if it exists:
1. becomes the right child of the former parent of n
2. becomes the left child of the former parent of n
3. becomes the niece or nephew of n
4. becomes the sibling of n
5. becomes the left child of n
6. remains the right child of n
1. Consider a right-rotation of a node n upwards in a BST. The former left child of n, if it exists:
1. becomes the left child of the former parent of n
2. remains the left child of n
3. becomes the right child of n
4. becomes the sibling of n
5. becomes the right child of the former parent of n
6. becomes the niece or nephew of n
1. Consider a right-rotation of a node n upwards in a BST. The former parent of n, assuming it exists:
1. becomes the niece or nephew of n
2. remains the parent of n
3. becomes the sibling of n
4. becomes the left child of n
5. becomes the right child of n
1. Consider a right-rotation of a node n upwards in a BST. The former sibling of n, assuming it exists:
1. becomes the niece or nephew of n
2. becomes a grandchild of n
3. remains the sibling of n
4. becomes the left child of n
5. becomes the right child of n
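The questions above can be checked against a direct implementation. A minimal sketch of a right rotation (the `Node` class and function names are assumptions for illustration); note that the in-order traversal is unchanged by the rotation, which is what "BST-ordering is preserved" means:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def rotate_right(root):
    # Promote the left child: its right subtree becomes the demoted
    # root's new left subtree, and the demoted root becomes its
    # right child. The caller re-attaches the returned node.
    pivot = root.left
    root.left = pivot.right
    pivot.right = root
    return pivot

def inorder(node):
    return [] if node is None else inorder(node.left) + [node.value] + inorder(node.right)

# Rotating 2 up over its parent 4 leaves the in-order sequence intact.
tree = Node(4, Node(2, Node(1), Node(3)), Node(5))
assert inorder(tree) == [1, 2, 3, 4, 5]
tree = rotate_right(tree)
assert inorder(tree) == [1, 2, 3, 4, 5]
```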

Concept:red-black trees

1. The number of rotations that occur after an insertion into a red-black tree is:
1. $\Theta \left(n\mathrm{log}n\right)$
2. $\Theta \left(\mathrm{log}n\right)$
3. $\Theta \left(1\right)$
4. $\Theta \left(n\right)$
1. The maximum number of rotations that occur after an insertion into a red-black tree is:
1. $2$
2. 3
3. $\sim \mathrm{log}n$
4. 1
1. The number of rotations that occur after a deletion from a red-black tree is:
1. $\Theta \left(n\right)$
2. $\Theta \left(n\mathrm{log}n\right)$
3. $\Theta \left(\mathrm{log}n\right)$
4. $\Theta \left(1\right)$
1. The maximum number of rotations that occur after a deletion from a red-black tree is:
1. $3$
2. $\sim \mathrm{log}n$
3. $2$
4. 1
1. Consider a node n in a red-black tree and all paths from n to a leaf. Which of the following is a constraint on these trees?
1. the number of nodes (red or black) on each path is the same
2. the number of red nodes on each path is the same
3. the number of black nodes on each path is the same
1. Consider a node n in a red-black tree and all paths from n to a leaf. Which of the following is a constraint on these trees?
1. no black node can have a red parent
2. no red node can have a red parent
3. no red node can have a black parent
4. no black node can have a black parent
1. Consider a black interior node n in a red-black tree and any path from n to a leaf. Which of the following is a constraint on these trees, where R is the number of red nodes and B is the number of black nodes?
1. $B\le R$
2. $R\le B+1$
3. $R\le B$
4. $B\le R+1$
5. $B
6. $R=B$
1. Consider a red node n in a red-black tree and the length of the shortest possible path from n to a leaf, S, and the length of the longest possible path from n to a leaf, L. Which of the following is a constraint on these trees?
1. $L=2S+1$
2. $L=2S-2$
3. $L=2S-1$
4. $L=2S$
5. $L=2S+2$
1. Inserting a value in a red-black tree and a regular BST, respectively, takes time:
1. $\Theta \left(\mathrm{log}n\right)$ and $\Theta \left(\mathrm{log}n\right)$
2. $\Theta \left(1\right)$ and $\Theta \left(\mathrm{log}n\right)$
3. $\Theta \left(n\right)$ and $\Theta \left(n\right)$
4. $\Theta \left(\mathrm{log}n\right)$ and $\Theta \left(n\right)$
1. Suppose one wished to allow more red nodes in a red-black tree, but still wished this new tree to have the same asymptotic behavior as before. One could allow more red nodes on any path to a leaf as long as:
1. no red node could have a red sibling.
2. no black node could have a red parent.
3. the number of black nodes between any two red nodes is bounded by a constant.
4. the number of red nodes between any two black nodes is bounded by a constant.
1. T or F: A black node in a red-black tree can have one child.
1. T or F: A red node in a red-black tree can have one child.
1. T or F: A red node in a red-black tree can have a red parent.
1. T or F: A black node in a red-black tree can have a black parent.
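The two structural constraints above (equal black counts on every root-to-leaf path, and no red node with a red parent) can be checked directly. A minimal sketch, assuming nodes are represented as `(color, left, right)` tuples with `None` standing in for the black null leaves; the function names are illustrative, not part of any question:

```python
def black_height(node):
    # Returns the black-height of the subtree, or None if the subtree
    # violates a red-black constraint. Null leaves count as black.
    if node is None:
        return 1
    color, left, right = node
    lh, rh = black_height(left), black_height(right)
    if lh is None or rh is None or lh != rh:
        return None            # unequal black counts on some paths
    if color == "R":
        for child in (left, right):
            if child is not None and child[0] == "R":
                return None    # a red node with a red child
        return lh
    return lh + 1

def is_red_black(root):
    # The root is additionally required to be black.
    return root is not None and root[0] == "B" and black_height(root) is not None

assert is_red_black(("B", ("R", None, None), ("R", None, None)))
assert not is_red_black(("B", ("R", ("R", None, None), None), None))
```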
1. Choose an order of insertion for seven consecutive integers such that a red-black tree performs no rotations for any of the insertions.
1. 3 2 6 0 1 4 5
2. 3 2 6 1 0 5 4
3. 3 2 5 1 0 4 6
4. 3 1 5 2 0 4 6
5. 3 2 5 0 1 4 6
1. Choose an order of insertion for seven consecutive integers such that a red-black tree performs one rotation for one of the insertions and no rotations for the other insertions.
1. 3 2 6 0 1 4 5
2. 3 2 5 1 0 4 6
3. 3 2 5 0 1 4 6
4. 3 1 5 2 0 4 6
5. 3 2 6 1 0 5 4
1. Choose an order of insertion for seven consecutive integers such that a red-black tree performs two rotations for one of the insertions and no rotations for the other insertions.
1. 3 2 5 1 0 4 6
2. 3 2 5 0 1 4 6
3. 3 2 6 0 1 4 5
4. 3 2 6 1 0 5 4
5. 3 1 5 2 0 4 6
1. Choose an order of insertion for seven consecutive integers such that a red-black tree performs one rotation for two of the insertions and no rotations for the other insertions.
1. 3 2 5 0 1 4 6
2. 3 1 5 2 0 4 6
3. 3 2 5 1 0 4 6
4. 3 2 6 1 0 5 4
5. 3 2 6 0 1 4 5
1. Choose an order of insertion for seven consecutive integers such that a red-black tree performs no rotations for any of the insertions and yields the most unbalanced tree.
1. T or F: Inserting the following numbers, in the order given, into an empty BST:
```    0 4 3 8 1 2 6 5 9 7
```
yields a tree whose shape is consistent with a red-black tree.
1. Consider inserting the following numbers, in the order given, into an empty BST and then coloring the root black and the other nodes such that no red node has a red parent:
```    0 4 3 8 1 2 6 5 9 7
```
What is the minimum / maximum number of red nodes possible?
1. 0 / 3
2. the correct answer is not listed
3. 0 / 4
4. 3 / 5
5. 0 / 5
6. 3 / 4
7. 0 / 6
8. 3 / 3
1. Consider a node with a single child in a red-black tree. If that node has a sibling and you wish to maximize the number of descendants the sibling has, what color is the sibling and how many descendants does it have? Do not include the null children. Choose the best answer.
1. black / 6
2. either black or red / 2
3. either black or red / 4
4. red / 2
5. either black or red / 5
6. the correct answer is not listed
7. black / 2
8. red / 6
1. Consider inserting the following numbers, in the order given, into an empty red-black tree:
```     5 3 8 1 7 6 4
```
How many rotations and how many node recolorings are performed? Don't forget what happens on the initial insertion.
1. 1 / 6
2. the correct answer is not listed
3. 0 / 5
4. 1 / 7
5. 0 / 7
6. 0 / 6
7. 1 / 5
1. Consider inserting the following numbers, in the order given, into an empty BST:
```    0 4 3 8 1 2 6 5 9 7
```
What is the minimum number of rotations that would yield a tree with a shape consistent with a red-black tree?
1. 1
2. 3
3. 0
4. the correct answer is not listed
5. 5
6. 2
7. 4
1. Consider inserting the following numbers, in the order given, into an empty red-black tree:
```    0 4 3 8 1 2 6 5 9 7
```
After which insertion value does the red-black tree make its first rotation?
1. 0
2. 6
3. 1
4. 3
5. 4
6. 8
7. the correct answer is not listed
8. 2
1. Consider inserting the following numbers, in the order given, into an empty red-black tree:
```    0 4 3 8 1 2 6 5 9 7
```
Which values, when inserted, cause a rotation?
1. none
2. 2 6 7
3. 3 2 7
4. the correct answer is not listed
5. 6 7
6. 3 2 6
1. Consider inserting the following numbers, in the order given, into an empty red-black tree:
```    0 4 3 8 1 2 6 5 9 7
```
Which values, when inserted, cause a double rotation?
1. 2 7
2. 7
3. the correct answer is not listed
4. 2 6
5. 3 6
6. 6
7. 3 7

Concept:AVL trees

1. Consider a node n in an AVL tree and the height of the left subtree of n, LH, and the height of the right subtree of n, RH. Assuming $LH>RH$, which of the following is a constraint on these trees?
1. $LH-RH=1$
2. $LH-RH<3$
3. $LH-RH=0$
4. $LH-RH<2$
5. $LH-RH=2$
1. The number of rotations that occur after an insertion into an AVL tree is, in the worst case:
1. $\Theta \left(n\mathrm{log}n\right)$
2. $\Theta \left(\mathrm{log}n\right)$
3. $\Theta \left(1\right)$
4. $\Theta \left(n\right)$
5. $\Theta \left(\mathrm{log}\phantom{\rule{0.278em}{0ex}}\mathrm{log}n\right)$
1. The number of rotations that occur after an insertion into an AVL tree is, in the worst case:
1. 1
2. $\Theta \left(n\right)$
3. 2
4. $\Theta \left(\mathrm{log}n\right)$
1. The number of rotations that occur after a deletion from an AVL tree is, in the worst case:
1. $\Theta \left(n\mathrm{log}n\right)$
2. $\Theta \left(\mathrm{log}\phantom{\rule{0.278em}{0ex}}\mathrm{log}n\right)$
3. $\Theta \left(\mathrm{log}n\right)$
4. $\Theta \left(1\right)$
5. $\Theta \left(n\right)$
1. The number of rotations that occur after a deletion from an AVL tree is, in the worst case:
1. 1
2. $\Theta \left(n\right)$
3. 2
4. $\Theta \left(\mathrm{log}n\right)$
1. The minimum number of rotations that occur after a deletion from an AVL tree is:
1. $\Theta \left(n\right)$
2. 1
3. 0
4. $\Theta \left(\mathrm{log}n\right)$
5. 2
1. T or F: Inserting the following numbers, in the order given, into an empty BST:
```    0 4 3 8 1 2 6 5 9 7
```
yields a tree whose shape is consistent with an AVL tree.
1. Consider inserting the following numbers, in the order given, into an empty BST and then computing the balance factors of each node:
```    0 4 3 8 1 2 6 5 9 7
```
What nodes are out of balance, with respect to AVL balance factors?
1. the correct answer is not listed
2. 0 3
3. 2 5 6 7 9
4. 0 1 2 3
5. 0 1 3 8
6. 0 4 3
7. 0 3 8
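Balance-factor questions like the one above can be checked by building the plain BST and computing subtree heights. A sketch (the function names and dict node layout are illustrative assumptions); as in the questions, the height of a null child is taken to be zero, so a leaf has height 1:

```python
def insert(root, value):
    # Plain BST insertion with no rebalancing; nodes are dicts.
    if root is None:
        return {"value": value, "left": None, "right": None}
    side = "left" if value < root["value"] else "right"
    root[side] = insert(root[side], value)
    return root

def height(node):
    # Height of a null child is zero, so a leaf has height 1.
    if node is None:
        return 0
    return 1 + max(height(node["left"]), height(node["right"]))

def balance_factors(node, out=None):
    # Balance factor = height(left subtree) - height(right subtree).
    if out is None:
        out = {}
    if node is not None:
        out[node["value"]] = height(node["left"]) - height(node["right"])
        balance_factors(node["left"], out)
        balance_factors(node["right"], out)
    return out

root = None
for v in [5, 3, 8, 2, 9, 1]:
    root = insert(root, v)
bf = balance_factors(root)
# Nodes whose balance factor falls outside {-1, 0, 1} violate the
# AVL constraint; here only node 3 (factor 2) is out of balance.
assert sorted(v for v, b in bf.items() if abs(b) > 1) == [3]
```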
1. Consider a node with a single child in an AVL tree. If that node has a sibling, what is the least / greatest number of descendants the sibling can have?
1. 1 / 4
2. 1 / 5
3. 1 / 7
4. 0 / 7
5. 0 / 6
6. 0 / 4
7. 1 / 6
8. 0 / 5
9. the correct answer is not listed
1. Consider inserting the following numbers, in the order given, into an empty BST and then computing the balance factors of each node:
```    0 4 3 8 1 2 6 5 9 7
```
Performing a level order traversal of the tree in which balance factors are displayed, which of the following sequences of balance factors is consistent with the above tree? Assume the height of a null child is zero. Do not include the null children in the output.
1. 1 0 1 1 1 0 0 0 0 0
2. the correct answer is not listed
3. -4 0 2 1 -1 0 0 0 0 0
4. -1 0 1 1 -1 0 0 0 0 0
5. -9 -2 2 2 -1 0 0 0 0 0
1. Consider inserting the following numbers, in the order given, into an empty BST and then computing the heights of each node:
```    0 4 3 8 1 2 6 5 9 7
```
Performing a level order traversal of the tree in which node heights are displayed, which of the following sequences of heights is consistent with the above tree? Assume the height of a null child is zero. Do not include the null children in the output.
1. 5 5 4 4 3 3 2 2 1 1
2. 5 4 3 3 2 2 2 1 1 1
3. 4 4 3 3 2 2 1 1 0 0
4. the correct answer is not listed
5. 4 3 2 2 1 1 1 0 0 0
1. Consider inserting the following numbers, in the order given, into an empty BST:
```    0 4 3 8 1 2 6 5 9 7
```
What is the minimum number of rotations that would yield a tree with balance factors consistent with an AVL tree?
1. 5
2. 2
3. the correct answer is not listed
4. 3
5. 4
6. 1
7. 0
1. Consider inserting the following numbers, in the order given, into an empty AVL tree:
```    0 4 3 8 1 2 6 5 9 7
```
After which insertion value does the AVL tree make its first rotation?
1. the correct answer is not listed
2. 2
3. 1
4. 7
5. 8
6. 5
7. 6
8. 3
9. 9
1. Consider inserting the following numbers, in the order given, into an empty AVL tree:
```    0 4 3 8 1 2 6 5 9 7
```
Which values, when inserted, cause rotations?
1. the correct answer is not listed
2. 7 6
3. 7 2 6
4. 3 7 2
5. 3 7 2 6
1. Consider inserting the following numbers, in the order given, into an empty AVL tree:
```    0 4 3 8 1 2 6 5 9 7
```
Which values, when inserted, cause a double rotation?
1. the correct answer is not listed
2. 2
3. 2 6
4. 7
5. 6
6. 3
7. 3 7
1. Choose an order of insertion for seven consecutive integers such that an AVL tree performs no rotations for any of the insertions.
1. Choose an order of insertion for seven consecutive integers such that an AVL tree performs one rotation for one of the insertions and no rotations for the other insertions.
1. Choose an order of insertion for seven consecutive integers such that an AVL tree performs two rotations for one of the insertions and no rotations for the other insertions.
1. Choose an order of insertion for seven consecutive integers such that an AVL tree performs one rotation each for two of the insertions and no rotations for the other insertions.
1. Choose an order of insertion for seven consecutive integers such that an AVL tree performs no rotations for any of the insertions and yields the most unbalanced tree.

Assume min-heaps unless otherwise directed.

Concept:binomial heaps

1. After 7 consecutive inserts into an empty binomial heap, how many subheaps are in the root list?
1. 2
2. 3
3. 6
4. 4
5. 7
6. 8
7. 5
8. 1
1. After 9 insertions into an empty binomial heap, how many subheaps are there?
1. 3
2. 8
3. 4
4. 9
5. 7
6. 6
7. 5
8. 2
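The structure of a binomial heap's root list tracks the binary representation of the element count: there is one tree of size ${2}^{i}$ in the root list for each set bit of $n$, exactly mirroring binary addition as the inserts proceed. A one-line sketch of that correspondence (the function name is illustrative):

```python
def binomial_root_count(n):
    # After n inserts into an empty binomial heap, the root list
    # holds one tree of size 2^i per set bit of n.
    return bin(n).count("1")

# 13 = 0b1101: trees of sizes 8, 4, and 1.
assert binomial_root_count(13) == 3
```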
1. T or F: Consider a sequence of n insertions into an empty binomial heap, followed by a single extractMin operation. At this point, the number of subheaps in the root list can be calculated from n.
1. Consider inserting the following values, in the order given:
```   3 2 9 5 6 4 1 0
```
into an empty binomial heap. The value 0 is found in a subheap whose root has value:
1. 4
2. 1
3. 3
4. 0
5. 6
6. 7
7. 5
8. 2
1. Consider inserting the consecutive integers from 0 to 12, inclusive and in increasing order, into an empty binomial heap. After 3 extractMin operations, the value 12 can be found in the subheap whose root has value:
1. 5
2. 0
3. 3
4. 2
5. 4
6. 6
7. 7
8. 1
1. Consider inserting the consecutive integers from 0 to 12, inclusive and in increasing order, into an empty binomial heap. After deleting the value 5, the value 12 can be found in the subheap whose root has value:
1. 7
2. 2
3. 0
4. 1
5. 6
6. 4
7. 5
8. 3
1. One expects to insert a value into a binomial heap in time that is:
1. log
2. linear
3. constant
4. log-linear
5. log log
1. Consider inserting the consecutive integers from 0 to 12, inclusive and not necessarily in order, into an empty binomial heap. What is the largest root value possible after all values have been inserted?

Concept:Fibonacci heaps

Assume for Fibonacci heaps that newly inserted values are inserted at the beginning (leftmost) of the root list and that consolidation runs from left to right. Assume that in the union of heaps A and B, the root list of B is appended to the root list of A. Assume that extractMin appends the child list of the extracted value to the root list. Assume that after consolidation, the root list is ordered with lower degree subheaps to the left of higher degree subheaps.
1. After 7 consecutive inserts into an empty Fibonacci heap, how many subheaps are in the root list?
1. 2
2. 5
3. 4
4. 3
5. 8
6. 6
7. 7
8. 1
1. T or F: Consider a sequence of n insertions into an empty Fibonacci heap, followed by a single extractMin operation. At this point, the number of subheaps in the root list can be calculated from n.
1. T or F: Consider a sequence of n insertions into an empty Fibonacci heap, followed by two consecutive extractMin operations. At this point, the number of subheaps in the root list can be calculated from n.
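Because inserts into a Fibonacci heap only append to the root list, the first extractMin is where the deferred work happens: the extracted node has no children yet, and the remaining $n-1$ single-node roots are consolidated into binomial-style trees, one per set bit of $n-1$. A sketch of that count, under the consolidation conventions stated above (the function name is illustrative):

```python
def roots_after_first_extract_min(n):
    # n inserts leave n single-node roots; extractMin removes one node
    # (it has no children yet) and consolidates the remaining n - 1
    # roots so that no two trees in the root list share a degree.
    return bin(n - 1).count("1")

# 13 inserts then extractMin: 12 = 0b1100 leaves two trees.
assert roots_after_first_extract_min(13) == 2
```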
1. Consider inserting the following values, in the order given:
```   3 2 9 5 6 4 1 0
```
into an empty Fibonacci heap. The value 0 is found in a subheap whose root has value:
1. 5
2. 3
3. 6
4. 7
5. 2
6. 4
7. 1
8. 0
1. Consider inserting the consecutive integers from 0 to 12, inclusive and in increasing order, into an empty Fibonacci heap. After 3 extractMin operations, the value 12 can be found in the subheap whose root has value:
1. 6
2. 2
3. 4
4. 5
5. 3
6. 1
7. 0
8. 7
1. Consider inserting the consecutive integers from 0 to 12, inclusive and in increasing order, into an empty Fibonacci heap. After deleting the value 5, the value 12 can be found in the subheap whose root has value:
1. 3
2. 6
3. 5
4. 2
5. 7
6. 4
7. 1
8. 0
1. Consider inserting the consecutive integers from 0 to 12, inclusive and in increasing order, into an empty Fibonacci heap. After deleting the value 5, which node is marked?
1. 0
2. 3
3. 2
4. 1
5. 5
6. no node is marked
7. the answer is not given
8. 4
1. One expects to find a value in a Fibonacci heap in amortized:
1. $\Theta \left(\mathrm{log}n\right)$ time
2. $\Theta \left(1\right)$ time
3. $\Theta \left(\mathrm{log}n×\mathrm{log}n\right)$ time
4. $\Theta \left(\mathrm{log}\left(\mathrm{log}n\right)\right)$ time
5. $\Theta \left(n\mathrm{log}n\right)$ time
6. $\Theta \left(n\right)$ time
1. One expects to find a value in a Fibonacci heap in the worst case in:
1. $\Theta \left(1\right)$ time
2. $\Theta \left(\mathrm{log}\left(\mathrm{log}n\right)\right)$ time
3. $\Theta \left(n\right)$ time
4. $\Theta \left(n\mathrm{log}n\right)$ time
5. $\Theta \left(\mathrm{log}n\right)$ time
6. $\Theta \left(\mathrm{log}n×\mathrm{log}n\right)$ time
1. Consider this set of operations: 15 inserts and one extraction of the minimum (in any order). What is the fewest / greatest number of subheaps found after the set is performed on an initially empty Fibonacci heap?
1. 4 / 14
2. 4 / 12
3. 3 / 12
4. 3 / 15
5. 4 / 13
6. 3 / 13
7. 3 / 14
8. 4 / 15
When cutting a node out of a Fibonacci subheap, the degree of the parent is decremented. In addition, the parent of the node (unless it is a root) is either marked, if it hasn't been previously marked, or cut, if it has.
1. What is the greatest number of nodes that can be cut in a single cascade on a subheap of degree 4?
1. 1
2. 5
3. 4
4. 6
5. 2
6. 7
7. 8
8. 3
1. What is the greatest number of marked nodes that can exist in a subheap of degree 4?
1. 2
2. 4
3. 3
4. 5
5. 15
6. 8
7. 16
8. 7
1. What is the greatest number of nodes that can be cut from a Fibonacci subheap of degree 4 such that the degree of the root is not reduced?
1. 10
2. 6
3. 3
4. 5
5. 9
6. 7
7. 4
8. 8
1. What is the greatest number of nodes that can be cut from a Fibonacci subheap of degree 4 such that the degree of the root is reduced by exactly one?
1. 9
2. 7
3. 13
4. 12
5. 11
6. 8
7. 14
8. 10

Concept:priority queues

1. Consider a priority queue (max) based upon a singly linked list. What is the time complexity of the insert operation?
1. log linear
2. logarithmic
3. linear
4. constant
1. Consider a priority queue (max) based upon a normal heap. What is the time complexity of the insert operation?
1. log linear
2. constant
3. logarithmic
4. linear
1. Consider a priority queue (max) based upon a singly linked list. What is the time complexity of the increase-key operation?
1. linear
2. constant
3. log linear
4. logarithmic
1. Consider a priority queue (max) based upon a normal heap. What is the time complexity of the increase-key operation?
1. logarithmic
2. log linear
3. linear
4. constant
1. Consider a priority queue (max) based upon a singly linked list. What is the time complexity of the decrease-key operation? Assume the linked list is sorted.
1. logarithmic
2. constant
3. linear
4. log linear
1. Consider a priority queue (max) based upon a normal heap. What is the time complexity of the decrease-key operation?
1. log linear
2. linear
3. constant
4. logarithmic
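For the heap-based questions above, both insert and increase-key walk a single root-to-leaf path, which is where the logarithmic bounds come from. A minimal max-heap sketch in the usual array layout (parent of index i at (i-1)//2); the function name is illustrative:

```python
def heap_insert(heap, value):
    # Append at the end, then sift up while the parent is smaller.
    # The loop climbs one tree level per iteration, so it is O(log n).
    heap.append(value)
    i = len(heap) - 1
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] >= heap[i]:
            break
        heap[i], heap[parent] = heap[parent], heap[i]
        i = parent

heap = []
for v in [3, 1, 4, 1, 5, 9, 2]:
    heap_insert(heap, v)
assert heap[0] == 9                       # max at the root
assert all(heap[(i - 1) // 2] >= heap[i]  # heap property holds
           for i in range(1, len(heap)))
```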

Concept:disjoint sets as linked lists

The following questions assume a linked-list implementation of a disjoint set. Assume each value has a pointer to its representative. Assume worst case behavior.
1. The make-set operation takes time:
1. constant
2. logarithmic
3. linear
4. log linear
1. The find-set operation takes time:
1. none of the other answers are correct
2. constant
3. log linear
4. linear
5. logarithmic
1. Assuming no preference on which representative becomes the representative of the resulting set, the union operation takes time:
1. constant
2. log linear
3. logarithmic
4. linear
5. none of the other answers are correct
1. Assuming the representative of the smaller set becomes the representative of the resulting set, the union operation takes time:
1. log linear
2. none of the other answers are correct
3. linear
4. constant
5. logarithmic
1. Assuming the representative of the larger set becomes the representative of the resulting set, the union operation takes time:
1. linear
2. none of the other answers are correct
3. constant
4. logarithmic
5. log linear
1. Assuming no preference on which representative becomes the representative of the resulting set, the total work of a series of m disjoint set operations is:
1. log linear
2. none of the other answers are correct
3. logarithmic
4. linear
5. constant
1. Assuming the representative of the smaller set becomes the representative of the resulting set, the total work of a series of m disjoint set operations is:
1. linear
2. none of the other answers are correct
3. logarithmic
4. constant
5. log linear
1. Assuming the representative of the larger set becomes the representative of the resulting set, the total work of a series of m disjoint set operations is:
1. none of the other answers are correct
2. constant
3. linear
4. log linear
5. logarithmic
1. Assuming the representative of the larger set becomes the representative of the resulting set, how many times can a value's representative be updated in a series of union operations?
1. none of the other answers are correct
2. logarithmic
3. constant
4. log linear
5. linear
1. Assuming the representative of the smaller set becomes the representative of the resulting set, how many times can a value's representative be updated in a series of union operations?
1. none of the other answers are correct
2. linear
3. log linear
4. constant
5. logarithmic
1. Assuming no preference on which representative becomes the representative of the resulting set, how many times can a value's representative be updated in a series of union operations?
1. constant
2. logarithmic
3. linear
4. log linear
5. none of the other answers are correct

Concept:disjoint sets as trees

The following questions assume a tree implementation of a disjoint set. Assume each value has a pointer to its parent with the root of the tree serving as the representative of the set. Assume worst case behavior.
1. The make-set operation takes time:
1. linear
2. logarithmic
3. constant
4. log linear
1. The find-set operation (no path compression and no union-by-rank) takes time:
1. constant
2. linear
3. log linear
4. logarithmic
5. none of the other answers are correct
1. The find-set operation (with path compression but no union-by-rank) takes time:
1. none of the other answers are correct
2. logarithmic
3. constant
4. log linear
5. linear
1. The find-set operation (with no path compression but with union-by-rank) takes time:
1. logarithmic
2. none of the other answers are correct
3. linear
4. log linear
5. constant
1. Assuming no preference on which representative becomes the representative of the resulting set, the union operation takes time:
1. constant
2. logarithmic
3. none of the other answers are correct
4. linear
5. log linear
1. Assuming the representative whose tree has the smaller rank becomes the representative of the resulting set, the union operation takes time:
1. logarithmic
2. none of the other answers are correct
3. linear
4. log linear
5. constant
1. Assuming the representative whose tree has the larger rank becomes the representative of the resulting set, the union operation takes time:
1. linear
2. constant
3. none of the other answers are correct
4. logarithmic
5. log linear
1. Assuming no path compression and no union-by-rank, the total work of a series of m disjoint set operations is:
1. log linear
2. logarithmic
3. linear
4. none of the other answers are correct
5. constant
1. Assuming path compression but no union-by-rank, the total work of a series of m disjoint set operations is:
1. none of the other answers are correct
2. logarithmic
3. linear
4. constant
5. log linear
1. Assuming no path compression but union-by-rank, the total work of a series of m disjoint set operations is:
1. log linear
2. none of the other answers are correct
3. logarithmic
4. linear
5. constant
1. Assuming path compression and union-by-rank, the total work of a series of m disjoint set operations is:
1. none of the other answers are correct
2. log linear
3. logarithmic
4. linear
5. constant
1. Path compression is used to speed up the average cost of which operation(s)?
1. union
2. union and find-set
3. make-set, find-set, and union
4. find-set and make-set
5. find-set
6. make-set
7. none of the other answers are correct
8. union and make-set
1. Union by rank is used to speed up the average cost of which operation(s)?
1. union and make-set
2. make-set, find-set, and union
3. make-set
4. union
5. find-set and make-set
6. find-set
7. none of the other answers are correct
8. union and find-set
1. T or F: A single find-set operation with path compression takes asymptotically longer than a single find-set operation without path compression, in the worst case.
1. Suppose there are initially n disjoint sets. If m union operations are performed, what is the fewest number of disjoint sets that remain? Assume n is a power of two and n > m.
1. $n-{2}^{m}$
2. $n-m$
3. m
4. $n-2m$
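A compact sketch of the forest implementation the questions above analyze, with union by rank and path compression. Note this sketch keeps the first argument's root on a rank tie, which is an arbitrary choice and need not match any particular tie-break rule:

```python
class DisjointSet:
    def __init__(self):
        self.parent, self.rank = {}, {}

    def make_set(self, x):
        # A new singleton set: x is its own representative.
        self.parent[x] = x
        self.rank[x] = 0

    def find_set(self, x):
        # Path compression: point every node on the path at the root.
        if self.parent[x] != x:
            self.parent[x] = self.find_set(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find_set(x), self.find_set(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx        # shorter tree hangs under taller
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

ds = DisjointSet()
for i in range(4):
    ds.make_set(i)
ds.union(0, 1)
ds.union(2, 3)
assert ds.find_set(0) == ds.find_set(1)
assert len({ds.find_set(i) for i in range(4)}) == 2
```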
For the following set of questions, consider the following set of operations:
```    for each i in 0..9 do make-set(i)
union(1,2);
union(3,4);
union(5,6);
union(7,8);
union(9,0);
union(4,1);
union(4,6);
union(7,9);
find-set(3);
find-set(1);
```
assuming union-by-rank and path compression. When unioning two sets having the same rank, assume the root with the larger value becomes the root of the resulting set.
1. How many disjoint sets remain?
1. none of the other answers are correct
2. 4
3. 2
4. 5
5. 3
6. 1
1. How many values have a root as parent?
1. 4
2. 3
3. 1
4. none of the other answers are correct
5. 2
6. 5

Concept:Graphs

1. What is the primary characteristic of a directed graph?
1. the vertices are all reachable
2. the edges are uni-directional
3. the edges are bi-directional
4. at least one vertex is unreachable
1. What is the primary characteristic of a weighted graph?
1. each vertex has an associated weight
2. the edge count exceeds vertex count
3. each edge has an associated weight
4. the vertex count exceeds some threshold
1. What is the primary characteristic of an undirected, simple graph?
1. there is at most one edge between any two vertices
2. the termini of an edge may be the same vertex
3. the edge count exceeds or equals the vertex count
4. the edge count is less than the vertex count
1. What is the degree of a vertex in an undirected, simple graph?
1. the total number of paths emanating from the vertex
2. the count of vertices reachable from that vertex
3. the count of vertices not reachable from that vertex
4. the total number of edges emanating from the vertex
1. What is the primary characteristic of an undirected, simple, connected graph?
1. a path exists between every pair of vertices
2. an edge exists between every pair of vertices
3. each vertex has an edge that connects to itself
4. there is at least one edge
1. What is the primary characteristic of an undirected, simple, regular graph?
1. all edges have the same weight
2. all vertices have the same weight
3. all vertices have the same degree
4. all edges terminate at the same vertex
1. What is the primary characteristic of an undirected, simple, complete graph? Choose the most general answer.
1. there exists a path between every pair of vertices
2. all vertices have a degree greater than some threshold
3. each vertex has two edges or no edges
4. there exists an edge between every pair of vertices
1. What is the primary characteristic of an undirected, simple, connected, acyclic graph? Choose the most general answer.
1. the vertex count is one greater than the edge count
2. there are at least two unique paths between any two vertices
3. the vertex count is equal to the edge count
4. no edge has a single vertex as both of its termini
1. What is the primary characteristic of a planar graph?
1. the maximum degree of a vertex is 4
2. it can be drawn in a plane with no crossed edges
3. when drawn in a plane, it must have crossed edges
4. the maximum degree of a vertex is 3

Concept:graph traversals

1. What is a walk? Choose the most general answer.
1. a sequence of vertices with no edge between ${v}_{i}$ and ${v}_{i+1}$
2. a sequence of vertices with an edge between ${v}_{i}$ and ${v}_{i+1}$
3. a set of edges that have one vertex in common
4. a sequence of edges such that ${e}_{i}$ and ${e}_{i+1}$ have no common vertex
1. What is a trail? Choose the most general answer.
1. a walk with at least one edge appearing twice (or more)
2. a walk with at least one vertex appearing twice (or more)
3. a walk with no vertex appearing more than once
4. a walk with no edge appearing more than once
1. What is the primary characteristic of a path?
1. a trail with all vertices appearing at least once
2. a trail with no vertex appearing more than once
3. a trail with no vertex appearing more than twice
4. a trail with all edges appearing at least once
1. What is an Euler trail? Choose the most general answer.
1. a trail which involves every vertex, except one, exactly once
2. a trail which involves every vertex exactly once
3. a trail which involves every edge exactly once
4. the longest trail in a graph
1. What is a Hamiltonian path? Choose the most general answer.
1. the longest path in a graph
2. a path which involves every edge
3. the shortest path in a graph
4. a path which involves every vertex
1. T or F: An Euler trail is always the longest trail in an undirected graph.
1. T or F: All undirected graphs have an Euler trail.
1. T or F: A Hamiltonian path is always the longest path in an undirected graph.
1. T or F: All undirected graphs have a Hamiltonian path.
1. The longest path in an undirected graph is bounded by:
1. the maximum degree of a graph
2. the number of vertices
3. the number of edges
4. the minimum degree of a graph
1. T or F: An undirected graph with a min-degree of 2 must have a cycle.

Concept:spanning trees and shortest paths

1. T or F: All connected, undirected graphs have a spanning tree.
1. Referring to graphs, Kruskal's algorithm is used to find:
1. the shortest path between two vertices
2. the longest path with the least weight
3. all-pairs shortest paths
4. a minimum spanning tree
1. Referring to graphs, Dijkstra's algorithm is used to find:
1. the shortest path between a source vertex and the other vertices
2. all-pairs shortest paths
3. a minimum spanning tree
4. the longest path with the least weight
1. Referring to graphs, Prim's algorithm is used to find:
1. the shortest path between two vertices
2. all-pairs shortest paths
3. a minimum spanning tree
4. the longest path with the least weight
1. Referring to graphs, the Floyd-Warshall algorithm is used to find:
1. the longest path with the least weight
2. a minimum spanning tree
3. all-pairs shortest paths
4. the shortest path
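Of the four algorithms above, Floyd-Warshall is the simplest to state in full. A minimal Python sketch over an adjacency matrix (with `float('inf')` marking missing edges; a sketch for illustration, not a production implementation):

```python
def floyd_warshall(dist):
    # dist: n x n matrix where dist[i][j] is the weight of edge
    # i -> j (float('inf') if absent, 0 on the diagonal).
    # Updated in place to all-pairs shortest-path distances.
    n = len(dist)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```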
1. Suppose $E=\omega \left(V\right)$. What is the asymptotic running time of Kruskal's Algorithm? Choose the simplest answer.
1. $\Theta \left(E\mathrm{log}E+V\mathrm{log}V\right)$
2. $\Theta \left(E\mathrm{log}E+V\mathrm{log}E\right)$
3. $\Theta \left(E+V\right)$
4. $\Theta \left(E+V\mathrm{log}V\right)$
5. $\Theta \left(E\right)$
6. $\Theta \left(E\mathrm{log}E\right)$
1. Suppose $E=\Theta \left(V\right)$. What is the asymptotic running time of Prim's Algorithm? Choose the simplest answer.
1. $\Theta \left(E+V\right)$
2. $\Theta \left(E\mathrm{log}E\right)$
3. $\Theta \left(E+V\mathrm{log}V\right)$
4. $\Theta \left(E\mathrm{log}E+V\mathrm{log}V\right)$
5. $\Theta \left(E\right)$
6. $\Theta \left(E\mathrm{log}E+V\mathrm{log}E\right)$
1. T or F: Suppose you kept track of the level number for each vertex w in a breadth-first search of a simple, undirected, unweighted graph, starting from a vertex v. The level number of each w would correspond to the shortest-path distance from v to w.
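The setup in the question above can be made concrete: the level assigned to each vertex during a breadth-first search is simply one more than its parent's level. A minimal Python sketch, assuming a dictionary-based adjacency list:

```python
from collections import deque

def bfs_levels(adj, source):
    # Assigns each reachable vertex the BFS level at which it
    # is first discovered; level[source] is 0.
    level = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in level:
                level[w] = level[u] + 1
                q.append(w)
    return level
```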
1. T or F: For a simple, undirected graph, $\Theta \left(E\mathrm{log}E\right)=\Theta \left(E\mathrm{log}V\right)$
1. Consider running Kruskal's algorithm on the complete graph ${K}_{n}$, processing the first i edges. What is the smallest value of i that could yield the final result?
1. $\genfrac{}{}{0.1ex}{}{n\left(n+1\right)}{2}-1$
2. n-1
3. $\genfrac{}{}{0.1ex}{}{n\left(n-1\right)}{2}-1$
4. $\genfrac{}{}{0.1ex}{}{n\left(n+1\right)}{2}$
5. $\genfrac{}{}{0.1ex}{}{n\left(n-1\right)}{2}$
6. n
1. Consider running Dijkstra's algorithm using a linked list (with a tail pointer) as the basis for a priority queue. What is the asymptotic run time for the algorithm?
1. $\Theta \left({V}^{3}E\right)$
2. $\Theta \left({V}^{3}\mathrm{log}V\right)$
3. $\Theta \left({V}^{2}E\right)$
4. $\Theta \left({V}^{2}\mathrm{log}V\right)$
5. the correct answer is not listed
6. $\Theta \left(V{E}^{2}\right)$
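The setup in the question above can be visualized with a sketch: below, a Python version of Dijkstra's algorithm in which extract-min is a linear scan over the unvisited vertices, standing in for the linked-list priority queue (an illustration of the structure, not of the linked list itself):

```python
def dijkstra_list(adj, source):
    # Dijkstra's algorithm with a flat collection as the
    # "priority queue": every extract-min scans all unvisited
    # vertices. adj maps each vertex to a list of (v, weight).
    INF = float('inf')
    dist = {u: INF for u in adj}
    dist[source] = 0
    unvisited = set(adj)
    while unvisited:
        u = min(unvisited, key=lambda v: dist[v])  # linear scan
        unvisited.remove(u)
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```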

Concept: memoization

Assume zero-based indexing.
1. Consider memoizing this function:
```
function f(x)
{
    if (x == 0) return 0;
    if (x == 1) return 1;
    return f(x-2) + f(x-1);
}
```
What would be the memoization table's largest index/indices?
1. x and x
2. x - 1
3. x - 2
4. x + 1 and x
5. x + 1 and x - 1
6. x + 1
7. x + 1 and x + 1
8. x
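The function above is the Fibonacci recurrence. One way to memoize it, sketched in Python with a dictionary standing in for the table, is to store each computed value under its argument:

```python
def f_memo(x, table=None):
    # Memoized Fibonacci: each computed value is cached
    # under its argument x.
    if table is None:
        table = {}
    if x in table:
        return table[x]
    result = x if x < 2 else f_memo(x - 2, table) + f_memo(x - 1, table)
    table[x] = result
    return result
```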
1. Consider memoizing this function:
```
function g(x,items,y)
{
    if (x == 0) return 1;
    if (x < 0) return 0;
    if (y == items.size) return 0;
    return minimum(g(x-items[y],items,y),g(x,items,y+1));
}
```
What would be the memoization table's largest index/indices, where x refers to the original value of x?
1. x and items.size
2. x + 1 and items.size + 1
3. x - 2
4. x and items.size + 1
5. x
6. x - 1
7. x + 1
8. x and items.size - 1
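A memoized version of g, sketched in Python, would cache results on the pair (x, y); note that the x < 0 case returns before the table is consulted, so negative indices never appear in it:

```python
def g_memo(x, items, y, table=None):
    # Memoized g: results are cached on the pair (x, y).
    if table is None:
        table = {}
    if x == 0:
        return 1
    if x < 0:
        return 0
    if y == len(items):
        return 0
    if (x, y) not in table:
        table[(x, y)] = min(g_memo(x - items[y], items, y, table),
                            g_memo(x, items, y + 1, table))
    return table[(x, y)]
```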

Dynamic programming

1. Consider using dynamic programming to improve the efficiency of the following function:
```
function f(a,b,c,d,e)
{
    if (a == 0) return 0;
    if (a < 0) return -INFINITY;
    if (d == e) return 0;
    return
        max(
            f(a,b,c,d+1,e),
            f(a-b[d],b,c,d,e) + c[d]
        );
}
```
What would be the dimensionality of the dynamic programming table?
1. 3
2. 4
3. 6
4. 1
5. 2
6. 5
1. Consider using dynamic programming to improve the efficiency of the following function:
```
function f(a,b,c,d,e)
{
    if (a == 0) return 0;
    if (a < 0) return -INFINITY;
    if (d == e) return 0;
    return
        max(
            f(a,b,c,d+1,e),
            f(a-b[d],b,c,d,e) + c[d]
        );
}
```
How would the dynamic programming table be filled, using a as an index?
1. a is not used as an index
2. smaller a to larger a
3. larger a to smaller a
1. Consider using dynamic programming to improve the efficiency of the following function:
```
function f(a,b,c,d,e)
{
    if (a == 0) return 0;
    if (a < 0) return -INFINITY;
    if (d == e) return 0;
    return
        max(
            f(a,b,c,d+1,e),
            f(a-b[d],b,c,d,e) + c[d]
        );
}
```
How would the dynamic programming table be filled, using b as an index?
1. b is not used as an index
2. smaller b to larger b
3. larger b to smaller b
1. Consider using dynamic programming to improve the efficiency of the following function:
```
function f(a,b,c,d,e)
{
    if (a == 0) return 0;
    if (a < 0) return -INFINITY;
    if (d == e) return 0;
    return
        max(
            f(a,b,c,d+1,e),
            f(a-b[d],b,c,d,e) + c[d]
        );
}
```
How would the dynamic programming table be filled, using d as an index?
1. larger d to smaller d
2. d is not used as an index
3. smaller d to larger d
1. Consider using dynamic programming to improve the efficiency of the following function:
```
function f(a,b,c,d,e)
{
    if (a == 0) return 0;
    if (a < 0) return -INFINITY;
    if (d == e) return 0;
    return
        max(
            f(a,b,c,d+1,e),
            f(a-b[d],b,c,d,e) + c[d]
        );
}
```
What is wrong, if anything, about the following loop for filling out the dynamic programming table?
```
for (a = 0; a < max_a; ++a)
    for (d = 0; d < max_d; ++d)
    {
        t[a][d] =
            max(
                getTable(a,d+1),
                getTable(a-b[d],d) + c[d]
            );
    }
```
1. there should be three nested loops
2. one or more of the loop indices is incorrect
3. the a loop goes in the wrong direction
4. there should only be one loop (no nesting)
5. the d loop goes in the wrong direction
6. the table is filled out correctly
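As a cross-check on the fill-order questions, here is a bottom-up Python sketch of the recurrence, treating b and c as fixed arrays with e = len(b): the table is indexed by a and d, and d must be filled from larger to smaller because t[a][d] reads t[a][d+1], while a can run from smaller to larger since t[a][d] only reads entries with smaller a.

```python
def f_dp(a_max, b, c):
    # Bottom-up version of f(a, b, c, d, e): t[a][d] holds the
    # value of f for capacity a starting at item d. Column d = e
    # stays 0 (the d == e base case); row a = 0 stays 0 as well.
    NEG_INF = float('-inf')
    e = len(b)
    t = [[0] * (e + 1) for _ in range(a_max + 1)]
    for a in range(a_max + 1):
        for d in range(e - 1, -1, -1):   # d: large to small
            if a == 0:
                continue                 # base case: f(0, ...) = 0
            skip = t[a][d + 1]
            take = NEG_INF               # a - b[d] < 0 case
            if a - b[d] >= 0:
                take = t[a - b[d]][d] + c[d]
            t[a][d] = max(skip, take)
    return t[a_max][0]
```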

Concept: amortized analysis

1. Suppose a dynamic array was implemented so that growing the array increased the capacity by 1000 elements. What is the worst-case cost of the append operation?
1. log linear
2. linear
3. constant
1. Suppose a dynamic array was implemented so that growing the array increased the capacity by 2%. What are the amortized and worst-case costs of the append operation?
1. constant and linear
2. linear and linear
3. log and linear
1. Suppose a dynamic array was implemented so that growing the array increased the capacity by 100 elements. What are the amortized and worst-case costs of the append operation?
1. constant and linear
2. linear and linear
3. log and linear
1. Consider a dynamic fillable array which grows by doubling in size. Which is a valid equation for calculating the total cost incurred when insertion ${2}^{i}+1$ is made? Assume the individual cost of an insert when there is room is 1 and the individual cost of an insert when there is no room is ${2}^{i}+1$.
1. $2n-3$
2. $3n-1$
3. $2n-1$
4. $3n-2$
5. $2n-2$
6. $3n-3$
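Candidate formulas like the ones above can be sanity-checked by simulating the stated cost model. The Python sketch below assumes the capacity starts at 1 (an assumption the question leaves open) and charges size + 1 for an insert into a full array, matching the stated cost of ${2}^{i}+1$ at insertion ${2}^{i}+1$:

```python
def total_insert_cost(i):
    # Total cost of the first 2**i + 1 insertions into a doubling
    # array: 1 when there is room, size + 1 (copy everything,
    # then insert) when the array is full.
    n = 2 ** i + 1
    size, cap, cost = 0, 1, 0
    for _ in range(n):
        if size == cap:
            cost += size + 1
            cap *= 2
        else:
            cost += 1
        size += 1
    return cost
```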
1. Consider a dynamic fillable array which grows from size n to size $2n+1$ each time the array is filled. Which is a valid equation for calculating the total cost incurred when insertion ${2}^{i}$ is made? Assume the individual cost of an insert when there is room is 1 and the individual cost of an insert when there is no room is ${2}^{i}$.
1. $3n-3$
2. $2n-1$
3. $2n$
4. $3n-\mathrm{log}n-2$
5. $3n-6$
6. $3n-1$
1. Consider a dynamic fillable array which grows by doubling in size. Let S represent the number of filled slots, E the number of empty slots, and C the capacity of the array. Which is a valid potential function for proving the amortized cost of an insert is constant?
1. $2S-E$
2. $S-E$
3. $2S-C$
4. $S+E$
5. $E+2C$
6. $S+2E$
1. Consider implementing a queue with two stacks. Enqueues are translated to pushes onto the first stack. For a dequeue, if the second stack is empty, each element on the first stack is popped and pushed, in turn, onto the second stack. In either case, the item popped from the second stack is returned.

The worst-case times for enqueue and dequeue are:
1. $\Theta \left(n\right)$ and $\Theta \left(1\right)$
2. $\Theta \left(n\right)$ and $\Theta \left(n\right)$
3. $\Theta \left(1\right)$ and $\Theta \left(1\right)$
4. $\Theta \left(1\right)$ and $\Theta \left(n\right)$
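The construction described in the question can be sketched in Python; each element is pushed and popped at most twice in total (once per stack), which is the basis for its amortized analysis:

```python
class TwoStackQueue:
    # Enqueue pushes onto stack V; dequeue pops from stack W,
    # first transferring everything from V when W is empty.
    def __init__(self):
        self.v = []
        self.w = []

    def enqueue(self, item):
        self.v.append(item)

    def dequeue(self):
        if not self.w:
            while self.v:
                self.w.append(self.v.pop())
        return self.w.pop()
```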
1. Consider implementing a queue with two stacks. Enqueues are translated to pushes onto the first stack, V. For a dequeue, if the second stack, W, is empty, each element on the first stack is popped and pushed, in turn, onto the second stack. The total work of this transfer is the number of elements popped plus the number of elements pushed. In either case, the item popped from the second stack is returned. The number of elements on stack V is denoted ${V}_{S}$ and the number of elements on stack W is denoted ${W}_{S}$.

Which of the following potential functions can be used to show an amortized bound of $\Theta \left(1\right)$ for operations on this kind of queue?
1. $\Phi =2{V}_{s}$
2. $\Phi ={V}_{s}$
3. $\Phi ={V}_{s}+{W}_{s}$
4. $\Phi ={V}_{s}+2{W}_{s}$
1. Suppose a data structure has operation A with a real cost of 1 and operation B with a real cost of $2n+1$. After an A operation, n increases by 1 while after a B operation, n decreases to zero.

Which of the following potential functions can be used to show an amortized bound of $\Theta$(1) for operations A and B on this data structure?
1. $\Phi =n$
2. $\Phi =3n$
3. $\Phi =2n$
1. Suppose a data structure has operation A with a real cost of $3n+1$ and operation B with a real cost of $\genfrac{}{}{0.1ex}{}{3}{2}n+1$. After an A operation, n decreases to $\genfrac{}{}{0.1ex}{}{1}{4}n$ while after a B operation, n decreases to $\genfrac{}{}{0.1ex}{}{5}{8}n$.

Which of the following potential functions can be used to show an amortized bound of $\Theta$(1) for operations A and B on this data structure?
1. $\Phi =3n$
2. $\Phi =\genfrac{}{}{0.1ex}{}{3}{2}n$
3. $\Phi =4n$
1. Suppose a data structure has operation A with a real cost of 1 and operation B with a real cost of $k+n$. After an A operation, n increases by 1. After a B operation, n decreases to $k+1$.

Which of the following potential functions can be used to show an amortized bound of $\Theta$(1) for A operations and $\Theta \left(k\right)$ for B operations?
1. $\Phi =2n$
2. $\Phi =n$
3. $\Phi =3n$

Concept: The classes $𝒫$ and $NP$

1. A problem can be in $𝒫$ and not in $NP$.
1. False
2. Not known
3. True
1. A problem can be in $NP$ and not in $𝒫$.
1. Not known
2. False
3. True
1. All problems are in $𝒫$.
1. True
2. False
3. Not known
1. All problems are in $NP$.
1. Not known
2. True
3. False
1. $NP$ stands for:
1. Non-deterministic Polynomial.
2. Non-exponential Program.
3. Non-intractable Program.
4. Non-Polynomial.
1. T or F: A constant time algorithm is in $𝒫$.
1. T or F: A linear time algorithm is in $𝒫$.
1. T or F: A constant time algorithm is in $NP$.
1. T or F: A linear time algorithm is in $NP$.
1. Someone shows you a correct algorithm for problem A whose solution can be verified in polynomial time. You can conclude:
1. problem A is in $NP$
2. nothing about whether problem A is in $NP$ or not.
3. problem A is not in $NP$
1. Someone proves that for a correct algorithm for problem A, solutions must be verified in at least exponential time. You can conclude:
1. nothing about whether problem A is in $NP$ or not.
2. problem A is in $NP$
3. problem A is not in $NP$
1. Someone shows you a correct polynomial time algorithm for problem A. You can conclude:
1. nothing about whether problem A is in $𝒫$ or not.
2. problem A is in $𝒫$
3. problem A is not in $𝒫$
1. Someone shows you a correct exponential time algorithm for problem A whose solution can be verified in polynomial time. You can conclude:
1. problem A is in $𝒫$
2. problem A is not in $𝒫$
3. nothing about whether problem A is in $𝒫$ or not.
1. Someone shows you a correct polynomial time algorithm for problem A. You can conclude:
1. problem A is not in $NP$
2. nothing about whether problem A is in $NP$ or not.
3. problem A is in $NP$
1. Which one of the following is not a valid way to prove a problem is in $NP$:
1. show that a solution can be found in polynomial time on a deterministic computer.
2. show that a solution can be verified in polynomial time on a non-deterministic computer.
3. show that a solution can be found in polynomial time on a non-deterministic computer.
4. show that a solution can be verified in polynomial time on a deterministic computer.

Concept: $NP$-completeness

1. To show that a problem A is $NP$-complete, one task is to:
1. show A is in $NP$.
2. show A is not in $𝒫$.
3. show A is in $𝒫$.
4. show A is not in $NP$.
1. Suppose B is an $NP$-complete problem. To show that a problem A is $NP$-complete, one task could be:
1. show a polynomial time/space reduction from B to A.
2. show an exponential time/space reduction from A to B.
3. show a polynomial time/space reduction from A to B.
4. show an exponential time/space reduction from B to A.
1. Another way of stating “a reduction from A to B” is:
1. solve B-type problems with an algorithm for A
2. convert an algorithm for B to an algorithm for A
3. convert an algorithm for A to an algorithm for B
4. solve A-type problems with an algorithm for B

Concept: If $𝒫=NP$?

1. If $𝒫$ = $NP$, then all problems in $𝒫$ are in $NP$.
1. True
2. False
3. Not known
1. If $𝒫$ = $NP$, then all problems in $NP$ are in $𝒫$.
1. True
2. Not known
3. False
1. If $𝒫$ != $NP$, then there exist problems in $𝒫$ that are not in $NP$.
1. Not known
2. False
3. True
1. If $𝒫$ != $NP$, then there exist problems in $NP$ that are not in $𝒫$.
1. False
2. True
3. Not known

Concept: Proving $𝒫=NP$.

1. Factoring is in $NP$. Currently, the best known algorithm on a conventional computer takes exponential time. If factoring is proved to take at least exponential time, what is the effect on the question $𝒫$ = $NP$?
1. $𝒫$ = $NP$
2. $𝒫$ != $NP$
3. the question is still unanswered
1. Factoring is in $NP$. Currently, the best known algorithm on a conventional computer takes exponential time. If factoring is shown to take polynomial time, what is the effect on the question $𝒫$ = $NP$?
1. $𝒫$ = $NP$
2. the question is still unanswered
3. $𝒫$ != $NP$
1. Factoring is in $NP$ and the best known algorithm takes exponential time. In the past, a linear time algorithm was discovered for quantum computers. What is the effect on the question $𝒫$ = $NP$?
1. $𝒫$ != $NP$, but just for quantum computers
2. $𝒫$ = $NP$ for all types of computers.
3. $𝒫$ = $NP$? is still unanswered.
4. $𝒫$ = $NP$, but just for quantum computers
5. $𝒫$ != $NP$ for all types of computers.
1. Subset Sum is $NP$-complete. Currently, the best known algorithm on a conventional computer takes exponential time. If Subset Sum is proved to take at least exponential time, what is the effect on the question $𝒫$ = $NP$?
1. $𝒫$ = $NP$
2. the question is still unanswered
3. $𝒫$ != $NP$
1. Subset Sum is $NP$-complete. Currently, the best known algorithm on a conventional computer takes exponential time. If solving Subset Sum can be shown to take polynomial time, what is the effect on the question $𝒫$ = $NP$?
1. $𝒫$ = $NP$
2. $𝒫$ != $NP$
3. the question is still unanswered
1. In the past, it was shown how to solve Hamiltonian Path (an $NP$-complete problem) in linear time, using a DNA-based computer. However, the algorithm takes a factorial number of DNA strands, which need to be created each time. This means:
1. $𝒫$ != $NP$, but just for DNA-based computers
2. $𝒫$ != $NP$ for all types of computers.
3. $𝒫$ = $NP$, but just for DNA-based computers
4. $𝒫$ = $NP$ for all types of computers.
5. $𝒫$ = $NP$? is still unanswered.
1. T or F: $𝒫$ = $NP$ is just another way of saying, for problems in $NP$, finding a solution is no harder than verifying a solution.