Concept: mathematics notation

${\mathrm{log}}_{2}n$ is:
 $\Theta \left({\mathrm{log}}_{10}n\right)$
 $o\left({\mathrm{log}}_{10}n\right)$
 $\omega \left({\mathrm{log}}_{10}n\right)$

${\mathrm{log}}_{2}n$ is equal to:
 $\frac{{\mathrm{log}}_{2}n}{{\mathrm{log}}_{10}2}$
 $\frac{{\mathrm{log}}_{10}n}{{\mathrm{log}}_{10}2}$
 $\frac{{\mathrm{log}}_{2}n}{{\mathrm{log}}_{2}10}$
 $\frac{{\mathrm{log}}_{10}n}{{\mathrm{log}}_{2}10}$

$\mathrm{log}\left(nm\right)$ is equal to:
 $\mathrm{log}n+\mathrm{log}m$
 ${\left(\mathrm{log}n\right)}^{m}$
 $n\mathrm{log}m$
 $m\mathrm{log}n$

$\mathrm{log}\left({n}^{m}\right)$ is equal to:
 $\mathrm{log}n+\mathrm{log}m$
 ${\left(\mathrm{log}n\right)}^{m}$
 $n\mathrm{log}m$
 $m\mathrm{log}n$

${\mathrm{log}}_{2}2$ can be simplified to:
 1
 4
 2
 ${\mathrm{log}}_{2}2$ cannot be simplified any further

${2}^{{\mathrm{log}}_{2}n}$ is equal to:
 n
 ${2}^{n}$
 ${n}^{2}$
 ${\mathrm{log}}_{2}n$

${n}^{2}$ is $o\left({n}^{3}\right)$. Therefore, $\mathrm{log}{n}^{2}$ is $?\left(\mathrm{log}{n}^{3}\right)$.
Choose the tightest bound.
 little omicron
 theta
 big omega
 big omicron
 little omega

$\mathrm{log}{n}^{n}$ is $\Theta (?)$.
 $\mathrm{log}{n}^{\mathrm{log}n}$
 n
 log n
 n log n

$\mathrm{log}{2}^{n}$ is $\Theta (?)$.
 $n\mathrm{log}n$
 $\mathrm{log}n$
 ${2}^{n}$
 n

The number of permutations of a list of n items is:
 n!
 $n\mathrm{log}n$
 n
 ${2}^{n}$
 $\mathrm{log}n$
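
For intuition, the identities asked about above can be spot-checked numerically; this is an illustrative Python sketch (the values of n and m are arbitrary):

```python
import math

n, m = 37.0, 11.0

# Change of base: log2(n) = log10(n) / log10(2)
assert math.isclose(math.log2(n), math.log10(n) / math.log10(2))

# Product rule: log(n*m) = log(n) + log(m)
assert math.isclose(math.log(n * m), math.log(n) + math.log(m))

# Power rule: log(n**m) = m * log(n)
assert math.isclose(math.log(n ** m), m * math.log(n))

# Exponential inverts the logarithm: 2**log2(n) = n
assert math.isclose(2 ** math.log2(n), n)
```
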
Concept: relative growth rates

Which of the following has the correct order in terms of growth rate?
 $1<\mathrm{log}n<\sqrt{n}<n<n\mathrm{log}n<{n}^{2}<{n}^{3}<{2}^{n}<n!<{n}^{n}$
 $1<\mathrm{log}n<\sqrt{n}<n<n\mathrm{log}n<{n}^{2}<{n}^{3}<{2}^{n}<{n}^{n}<n!$
 $1<\sqrt{n}<\mathrm{log}n<n<n\mathrm{log}n<{n}^{2}<{n}^{3}<{2}^{n}<n!<{n}^{n}$
 $1<\sqrt{n}<\mathrm{log}n<n<n\mathrm{log}n<{n}^{2}<{n}^{3}<n!<{2}^{n}<{n}^{n}$

What is the correct ordering of growth rates for the following functions:

$f\left(n\right)={n}^{0.9}\mathrm{log}n$

$g\left(n\right)=1.{1}^{n}$

$h\left(n\right)=9.9n$
 h < g < f
 h < f < g
 f < g < h
 g < f < h
 g < h < f
 f < h < g

What is the correct ordering of growth rates for the following functions:

$f\left(n\right)=n{\left(\mathrm{log}n\right)}^{2}$

$g\left(n\right)=n\mathrm{log}{2}^{n}$

$h\left(n\right)=n\mathrm{log}\left(\mathrm{log}n\right)$
 f > g > h
 g > f > h
 h > g > f
 f > h > g
 g > h > f
 h > f > g
Concept: order notation

What does big Omicron roughly mean?
 worse than or the same as
 better than or the same as
 better than
 the same as
 worse than

What does $\omega $ roughly mean?
 better than
 the same as
 worse than
 better than or the same as
 worse than or the same as

What does $\theta $ roughly mean?
 better than
 worse than or the same as
 better than or the same as
 the same as
 worse than

T or F:
All algorithms are $\omega \left(1\right)$.

T or F:
All algorithms are $\theta \left(1\right)$.

T or F:
All algorithms are $\Omega \left(1\right)$.

T or F:
There exist algorithms that are $\omega \left(1\right)$.

T or F:
There exist algorithms that are $O\left(1\right)$.

T or F:
All algorithms are $O\left({n}^{n}\right)$.

Consider sorting 1,000,000 numbers with mergesort. What
is the time complexity of this operation? [THINK!]
 ${n}^{2}$, because mergesort takes quadratic time
 n log n, because mergesort takes n log n time
 constant, because n is fixed
Concept: comparing algorithms using order notation
Assume the worst case and sufficiently large input size unless
otherwise indicated. The phrase "takes the same time as" means
equal to within a constant factor (or lower-order term) unless
otherwise indicated. The phrase "by a stopwatch" means the
actual amount of time needed for the algorithm to run to
completion, as measured by a stopwatch.
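
As an illustration of why "sufficiently large input size" matters, here is a sketch with two made-up step-count functions: f is $\Theta(n)$ with a large constant and g is $\Theta({n}^{2})$ with constant one, so $f=o(g)$, yet g costs less for small inputs:

```python
def f(n):
    # hypothetical step count of a Theta(n) algorithm with a large constant
    return 100 * n

def g(n):
    # hypothetical step count of a Theta(n^2) algorithm with constant 1
    return n * n

# For small inputs the asymptotically worse function costs less...
assert g(10) < f(10)        # 100 < 1000
# ...but for sufficiently large n the ordering flips, as f = o(g) predicts.
assert f(1000) < g(1000)    # 100000 < 1000000
```
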

T or F:
If $f=\omega \left(g\right)$, then
algorithm f always runs faster than g.

T or F:
If $f=\omega \left(g\right)$, then
algorithm f always runs faster than g, in all cases.

T or F:
If $f=\omega \left(g\right)$, then
algorithm f always runs faster than g,
regardless of input size.

T or F:
If $f=\omega \left(g\right)$, then
algorithm f always runs faster than
or takes the same time as g.

T or F:
If $f=\omega \left(g\right)$, then
algorithm f always runs faster than
or takes the same time as g,
in all cases.

T or F:
If $f=\omega \left(g\right)$, then
algorithm f always runs faster than
or takes the same time as g,
regardless of input size.

T or F:
If $f=\omega \left(g\right)$, then
it is possible that algorithm f can run faster than g,
in some cases.

T or F:
If $f=\omega \left(g\right)$, then
it is possible that algorithm f can run faster than g.

T or F:
If $f=\omega \left(g\right)$, then
algorithm f always runs slower than g.

T or F:
If $f=\omega \left(g\right)$, then
algorithm f always runs slower than g, in all cases.

T or F:
If $f=\omega \left(g\right)$, then
algorithm f always runs slower than g,
regardless of input size.

T or F:
If $f=\omega \left(g\right)$, then
algorithm f always runs slower than
or takes the same time as g.

T or F:
If $f=\omega \left(g\right)$, then
algorithm f always runs slower than
or takes the same time as g,
in all cases.

T or F:
If $f=\omega \left(g\right)$, then
algorithm f always runs slower than
or takes the same time as g,
regardless of input size.

T or F:
If $f=\omega \left(g\right)$, then
algorithm f can take the same time as g,
in some cases.

T or F:
If $f=\omega \left(g\right)$, then
algorithm f can run
in the same time as g.

T or F:
If $f=\Omega \left(g\right)$, then
algorithm f always runs faster than g.

T or F:
If $f=\Omega \left(g\right)$, then
algorithm f always runs faster than g, in all cases.

T or F:
If $f=\Omega \left(g\right)$, then
algorithm f always runs faster than g,
regardless of input size.

T or F:
If $f=\Omega \left(g\right)$, then
algorithm f always runs faster than
or takes the same time as g.

T or F:
If $f=\Omega \left(g\right)$, then
algorithm f always runs faster than
or takes the same time as g,
in all cases.

T or F:
If $f=\Omega \left(g\right)$, then
algorithm f always runs faster than
or takes the same time as g,
regardless of input size.

T or F:
If $f=\Omega \left(g\right)$, then
it is possible that algorithm f can run faster than g,
in some cases.

T or F:
If $f=\Omega \left(g\right)$, then
it is possible that algorithm f can run faster than g.

T or F:
If $f=\Omega \left(g\right)$, then
algorithm f always runs slower than g.

T or F:
If $f=\Omega \left(g\right)$, then
algorithm f always runs slower than g, in all cases.

T or F:
If $f=\Omega \left(g\right)$, then
algorithm f always runs slower than g,
regardless of input size.

T or F:
If $f=\Omega \left(g\right)$, then
algorithm f always runs slower than
or takes the same time as g.

T or F:
If $f=\Omega \left(g\right)$, then
algorithm f always runs slower than
or takes the same time as g,
in all cases.

T or F:
If $f=\Omega \left(g\right)$, then
algorithm f always runs slower than
or takes the same time as g,
regardless of input size.

T or F:
If $f=o\left(g\right)$, then
algorithm f always runs slower than g.

T or F:
If $f=o\left(g\right)$, then
algorithm f always runs slower than g, in all cases.

T or F:
If $f=o\left(g\right)$, then
algorithm f always runs slower than g,
regardless of input size.

T or F:
If $f=o\left(g\right)$, then
algorithm f always runs slower than
or takes the same time as g.

T or F:
If $f=o\left(g\right)$, then
algorithm f always runs slower than
or takes the same time as g,
in all cases.

T or F:
If $f=o\left(g\right)$, then
algorithm f always runs slower than
or takes the same time as g,
regardless of input size.

T or F:
If $f=o\left(g\right)$, then
it is possible that algorithm f can run slower than g,
in some cases.

T or F:
If $f=o\left(g\right)$, then
it is possible that algorithm f can run slower than g.

T or F:
If $f=o\left(g\right)$, then
algorithm f always runs faster than g.

T or F:
If $f=o\left(g\right)$, then
algorithm f always runs faster than g, in all cases.

T or F:
If $f=o\left(g\right)$, then
algorithm f always runs faster than g,
regardless of input size.

T or F:
If $f=o\left(g\right)$, then
algorithm f always runs faster than
or takes the same time as g.

T or F:
If $f=o\left(g\right)$, then
algorithm f always runs faster than
or takes the same time as g,
in all cases.

T or F:
If $f=o\left(g\right)$, then
algorithm f always runs faster than
or takes the same time as g,
regardless of input size.

T or F:
If $f=o\left(g\right)$, then
algorithm f can run slower than g,
in some cases.

T or F:
If $f=o\left(g\right)$, then
algorithm f can take the same time as g,
in some cases.

T or F:
If $f=O\left(g\right)$, then
algorithm f always runs slower than g.

T or F:
If $f=O\left(g\right)$, then
algorithm f always runs slower than g, in all cases.

T or F:
If $f=O\left(g\right)$, then
algorithm f always runs slower than g,
regardless of input size.

T or F:
If $f=O\left(g\right)$, then
algorithm f always runs slower than
or takes the same time as g.

T or F:
If $f=O\left(g\right)$, then
algorithm f always runs slower than
or takes the same time as g,
in all cases.

T or F:
If $f=O\left(g\right)$, then
algorithm f always runs slower than
or takes the same time as g,
regardless of input size.

T or F:
If $f=O\left(g\right)$, then
it is possible that algorithm f can run slower than g,
in some cases.

T or F:
If $f=O\left(g\right)$, then
it is possible that algorithm f can run slower than g.

T or F:
If $f=O\left(g\right)$, then
algorithm f always runs faster than g.

T or F:
If $f=O\left(g\right)$, then
algorithm f always runs faster than g, in all cases.

T or F:
If $f=O\left(g\right)$, then
algorithm f always runs faster than g,
regardless of input size.

T or F:
If $f=O\left(g\right)$, then
algorithm f always runs faster than
or takes the same time as g.

T or F:
If $f=O\left(g\right)$, then
algorithm f always runs faster than
or takes the same time as g,
in all cases.

T or F:
If $f=O\left(g\right)$, then
algorithm f always runs faster than
or takes the same time as g,
regardless of input size.

T or F:
If $f=\Theta \left(g\right)$, then
algorithm f takes the same time as g.

T or F:
If $f=\Theta \left(g\right)$, then
algorithm f takes the same time as g,
in all cases.

T or F:
If $f=\Theta \left(g\right)$, then
algorithm f takes the same time as g,
regardless of input size.

T or F:
If $f=\Theta \left(g\right)$, then
it is possible that algorithm f can run slower than g,
in some cases.

T or F:
If $f=O\left(g\right)$, then
it is possible that algorithm f can run slower than g.

T or F:
If $f=O\left(g\right)$, then
it is possible that algorithm f can run slower than g,
for some problem sizes.

T or F:
If $f=\Theta \left(g\right)$, then
it is possible that algorithm f can run faster than g,
in some cases.

T or F:
If $f=O\left(g\right)$, then
it is possible that algorithm f can run faster than g.

T or F:
If $f=O\left(g\right)$, then
it is possible that algorithm f can run faster than g,
for some problem sizes.

T or F:
If $f=\omega \left(g\right)$, then
f and g can be the same algorithm.

T or F:
If $f=\Omega \left(g\right)$, then
f and g can be the same algorithm.

T or F:
If $f=o\left(g\right)$, then
f and g can be the same algorithm.

T or F:
If $f=O\left(g\right)$, then
f and g can be the same algorithm.

T or F:
Suppose algorithm $f=\theta \left(g\right)$.
f and g can be the same algorithm.

T or F:
If $f=\Omega \left(g\right)$ and $g=O\left(f\right)$,
then $f=\Theta \left(g\right)$.

T or F:
If $f=\Omega \left(g\right)$ and $g=O\left(f\right)$,
then f and g must be the same algorithm.

T or F:
Suppose algorithm $f=\theta \left(g\right)$.
Algorithm f is never slower than g,
by a stopwatch.

T or F:
Suppose algorithm $f=\theta \left(g\right)$.
Algorithm f always takes the same time as g,
by a stopwatch.
Concept: analyzing code
In the pseudocode, the lower limit of a for loop
is inclusive, while the upper limit is exclusive.
The additive step, if not specified, is one.
Tracing recursive functions
When asked about the number of recursive calls, do not include
the original call.

How many recursive calls are made if $n=5$?
Assume the initial value of i is zero.
function f(i,n)
{
    if (i < n)
    {
        println(i);
        f(i+1,n);
    }
    return 0;
}
 5
 the function recurs infinitely
 none of the other answers are correct
 4
 6
 3

How many recursive calls are made if $n=81$?
Assume the initial value of i is one.
function f(i,n)
{
    if (i < n)
    {
        println(i);
        f(i*3,n);
    }
    return 0;
}
 6
 4
 3
 none of the other answers are correct
 5
 the function recurs infinitely

How many recursive calls are made if $n=64$?
Assume the initial value of i is one.
function f(i,n)
{
    if (i < n)
    {
        println(i);
        f(i*sqrt(n),n);
    }
    return 0;
}
 8
 1
 the function recurs infinitely
 4
 9
 7
 none of the other answers are correct
 2

How many recursive calls are made if $n=64$?
Assume the initial value of i is zero.
function f(i,n)
{
    if (i < n)
    {
        println(i);
        f(i+sqrt(n),n);
    }
    return 0;
}
 none of the other answers are correct
 9
 7
 1
 2
 8
 the function recurs infinitely
 4
Time complexity, recursive functions, single recursion

What is the time complexity of this function?
Assume the initial value of i is zero.
function f(i,n)
{
    if (i < n)
    {
        println(i);
        f(i+1,n);
    }
    return 0;
}
 $\theta \left(1\right)$
 $\theta \left(\mathrm{log}n\right)$
 $\theta \left(n\right)$
 $\theta \left(\sqrt{n}\right)$
 $\theta \left({n}^{2}\right)$

What is the time complexity of this function?
Assume the initial value of i is one.
function f(i,n)
{
    if (i < n)
    {
        println(i);
        f(i*3,n);
    }
    return 0;
}
 $\theta \left({\left(\mathrm{log}n\right)}^{3}\right)$
 $\theta \left(n\sqrt{n}\right)$
 $\theta \left(\mathrm{log}n\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left(n\right)$
 $\theta \left({n}^{2}\right)$

What is the time complexity of this function?
Assume the initial value of i is one.
function f(i,n)
{
    if (i < n)
    {
        f(i*sqrt(n),n);
        println(i);
    }
}
 $\theta \left(\sqrt{n}\right)$
 $\theta \left(1\right)$
 $\theta \left(n\sqrt{n}\right)$
 $\theta \left(\mathrm{log}n\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(n\right)$

What is the time complexity of this function?
Assume positive, integral input and integer division.
function f(n)
{
    if (n > 0)
    {
        for (var i from 0 until n)
            println(n);
        f(n/2);
    }
    return 0;
}
 $\theta \left(n\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(n\sqrt{n}\right)$
 $\theta \left(\mathrm{log}n\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left(1\right)$
Time complexity, recursive functions, double recursion

What is the time complexity of this function?
Assume the initial value of i and j is zero.
function f(i,j,n)
{
    println(i,j);
    if (i < n)
    {
        if (j < n)
            f(i,j+1,n);
        else
            f(i+1,0,n);
    }
    return 0;
}
 $\theta \left(\sqrt{n}\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left({\mathrm{log}}^{2}n\right)$
 $\theta \left(n\right)$
 $\theta \left(1\right)$

What is the time complexity of this function?
Assume the initial value of i and j is zero.
function f(i,j,n)
{
    println(i,j);
    if (i < n)
    {
        if (j < i)
            f(i,j+1,n);
        else
            f(i+1,0,i+1);
    }
    return 0;
}
 $\theta \left({n}^{2}\right)$
 $\theta \left(1\right)$
 $\theta \left({\mathrm{log}}^{2}n\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left(n\right)$
 $\theta \left(\sqrt{n}\right)$

What is the time complexity of this function?
Assume the initial value of i is one and j is zero.
function f(i,j,n)
{
    println(i,j);
    if (i < n)
    {
        if (j < n)
            f(i,j+1,n);
        else
            f(i*2,0,n);
    }
    return 0;
}
 $\theta \left(n\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(n\sqrt{n}\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left({\mathrm{log}}^{2}n\right)$
 $\theta \left(1\right)$

What is the time complexity of this function?
Assume the initial value of i is one and j is zero.
function f(i,j,n)
{
    println(i,j);
    if (i < n)
    {
        if (j < i)
            f(i,j+1,n);
        else
            f(i*2,0,n);
    }
    return 0;
}
 $\theta \left(n\right)$
 $\theta \left(1\right)$
 $\theta \left({\mathrm{log}}^{2}n\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(n\sqrt{n}\right)$
 $\theta \left(n\mathrm{log}n\right)$

What is the time complexity of this function?
Assume the initial value of i is zero and j is one.
function f(i,j,n)
{
    println(i,j);
    if (i < n)
    {
        if (j < n)
            f(i,j*2,n);
        else
            f(i+1,1,n);
    }
    return 0;
}
 $\theta \left({n}^{2}\right)$
 $\theta \left(n\right)$
 $\theta \left(1\right)$
 $\theta \left({\mathrm{log}}^{2}n\right)$
 $\theta \left(n\sqrt{n}\right)$
 $\theta \left(n\mathrm{log}n\right)$
Space complexity, recursive functions, single recursion

What is the space complexity of this function?
Assume positive, integral input and integer division.
function f(x,n)
{
    if (x > 0)
    {
        f(x/2,n);
        for (var i from 0 until n)
            println(n);
    }
}
 $\theta (x/2)$
 $\theta \left(n\right)$
 $\theta \left(nx\right)$
 $\theta \left(x\right)$
 $\theta \left(n\mathrm{log}x\right)$
 $\theta \left(\mathrm{log}n\right)$
 $\theta \left(1\right)$
 $\theta \left(\mathrm{log}x\right)$

What is the space complexity of this function?
Assume the initial value of i is zero.
function f(i,n)
{
    if (i < n)
    {
        f(i+1,n);
        println(i);
    }
}
 $\theta \left(1\right)$
 $\theta \left(\sqrt{n}\right)$
 $\theta \left(n\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(\mathrm{log}n\right)$
 $\theta \left(n\mathrm{log}n\right)$

What is the space complexity of this function?
Assume the initial value of i is one.
function f(i,n)
{
    if (i < n)
    {
        f(i*2,n);
        println(i);
    }
}
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(n\right)$
 $\theta \left(\sqrt{n}\right)$
 $\theta \left(1\right)$
 $\theta \left(\mathrm{log}n\right)$

What is the space complexity of this function?
Assume the initial value of i is one.
function f(i,n)
{
    if (i < n)
    {
        f(i*sqrt(n),n);
        println(i);
    }
}
 $\theta \left(n\sqrt{n}\right)$
 $\theta \left(1\right)$
 $\theta \left(n\right)$
 $\theta \left(\sqrt{n}\right)$
 $\theta \left(n\frac{n}{\sqrt{n}}\right)$

What is the space complexity of this function?
Assume the initial value of i is one.
function f(i,n)
{
    if (i < n)
    {
        f(i+sqrt(n),n);
        println(i);
    }
}
 $\theta \left(n\frac{n}{\sqrt{n}}\right)$
 $\theta \left(n\right)$
 $\theta \left(\sqrt{n}\right)$
 $\theta \left(n\sqrt{n}\right)$
 $\theta \left(1\right)$
Space complexity, recursive functions, double recursion

What is the space complexity of this function?
Assume the initial value of i and j is zero.
function f(i,j,n)
{
    println(i,j);
    if (i < n)
    {
        if (j < n)
            f(i,j+1,n);
        else
            f(i+1,i+1,n);
    }
    return 0;
}
 $\theta \left({\mathrm{log}}^{2}n\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left(\sqrt{n}\right)$
 $\theta \left(n\right)$
 $\theta \left(1\right)$

What is the space complexity of this function?
Assume the initial value of i and j is zero.
function f(i,j,n)
{
    println(i,j);
    if (i < n)
    {
        if (j < i)
            f(i,j+1,n);
        else
            f(i+1,0,i+1);
    }
    return 0;
}
 $\theta \left({n}^{2}\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left(1\right)$
 $\theta \left(n\right)$
 $\theta \left(\sqrt{n}\right)$
 $\theta \left({\mathrm{log}}^{2}n\right)$

What is the space complexity of this function?
Assume the initial value of i is one and j is zero.
function f(i,j,n)
{
    println(i,j);
    if (i < n)
    {
        if (j < n)
            f(i,j+1,n);
        else
            f(i*2,0,n);
    }
    return 0;
}
 $\theta \left(n\sqrt{n}\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left(1\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left({\mathrm{log}}^{2}n\right)$
 $\theta \left(n\right)$

What is the space complexity of this function?
Assume the initial value of i is one and j is zero.
function f(i,j,n)
{
    println(i,j);
    if (i < n)
    {
        if (j < i)
            f(i,j+1,n);
        else
            f(i*2,0,n);
    }
}
 $\theta \left({\mathrm{log}}^{2}n\right)$
 $\theta \left(n\sqrt{n}\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(1\right)$
 $\theta \left(n\right)$
 $\theta \left(n\mathrm{log}n\right)$

What is the space complexity of this function?
Assume the initial value of i is zero and j is one.
function f(i,j,n)
{
    println(i,j);
    if (i < n)
    {
        if (j < n)
            f(i,j*2,n);
        else
            f(i+1,1,n);
    }
    return 0;
}
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left(n\right)$
 $\theta \left(1\right)$
 $\theta \left(n\sqrt{n}\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left({\mathrm{log}}^{2}n\right)$
Time complexity, iterative loops, single loops

What is the time complexity of this code fragment?
for (i from 0 until n by 1)
    println(i);
 $\theta \left({\mathrm{log}}^{2}n\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left({n}^{\sqrt{n}}\right)$
 $\theta \left(n\sqrt{n}\right)$
 $\theta \left(n\right)$
 $\theta \left({n}^{2}\right)$

What is the time complexity of this code fragment?
i = 1;
while (i < n)
{
    println(i);
    i = i * 2;
}
 $\theta \left({n}^{2}\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left(n\right)$
 $\theta \left({\left(\mathrm{log}n\right)}^{2}\right)$
 $\theta \left(n\sqrt{n}\right)$
 $\theta \left(\mathrm{log}n\right)$

What is the time complexity of this code fragment?
i = 1;
while (i < n)
{
    println(i);
    i = i * sqrt(n);
}
 $\theta \left(\sqrt{n}\right)$
 $\theta \left(n\sqrt{n}\right)$
 $\theta \left(n\frac{n}{\sqrt{n}}\right)$
 $\theta \left(n\right)$
 $\theta \left(1\right)$
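
The doubling-loop pattern used in fragments like the ones above can be checked empirically; this is an illustrative sketch (the helper name is made up), showing that the iteration count tracks ${\mathrm{log}}_{2}n$:

```python
import math

def doubling_iterations(n):
    # Mirrors the "i = 1; while (i < n) i = i * 2" pattern and
    # returns how many times the loop body runs.
    i, count = 1, 0
    while i < n:
        count += 1
        i = i * 2
    return count

for n in [16, 1024, 2**20]:
    # For powers of two the count is exactly log2(n).
    assert doubling_iterations(n) == math.ceil(math.log2(n))
```
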
Time complexity, iterative loops, double loops

What is the time complexity of this code fragment?
for (i from 0 until n by 1)
    for (j from 0 until i by 1)
        println(i,j);
 $\theta \left(n\right)$
 $\theta \left(1\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left(n\sqrt{n}\right)$
 $\theta \left({\mathrm{log}}^{2}n\right)$
 $\theta \left({n}^{2}\right)$

What is the time complexity of this code fragment?
i = 1;
while (i < n)
{
    for (j from 0 until n by 1)
        println(i,j);
    i = i * 2;
}
 $\theta \left({\mathrm{log}}^{2}n\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(n\right)$
 $\theta \left(n\sqrt{n}\right)$
 $\theta \left(1\right)$

What is the time complexity of this code fragment?
i = 1;
while (i < n)
{
    for (j from 0 until i by 1)
        println(i,j);
    i = i * 2;
}
 $\theta \left(n\right)$
 $\theta \left(n\sqrt{n}\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(1\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left({\mathrm{log}}^{2}n\right)$

What is the time complexity of this code fragment?
for (i from 0 until n)
{
    j = 1;
    while (j < n)
    {
        println(i,j);
        j = j * 2;
    }
}
 $\theta \left(n\sqrt{n}\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left({\mathrm{log}}^{2}n\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(n\right)$
 $\theta \left(1\right)$

What is the time complexity of this code fragment?
i = 1;
while (i < n)
{
    println(i);
    i = i * 2;
}
for (j from 0 until n by 2)
    println(j);
 $\theta \left({\mathrm{log}}^{2}n\right)$
 $\theta \left(1\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(n\sqrt{n}\right)$
 $\theta \left(n\right)$

What is the time complexity of this code fragment?
for (i from 0 until n by 1)
    println(i);
for (j from 0 until n by 1)
    println(j);
 $\theta \left(n\right)$
 $\theta \left(1\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left({\mathrm{log}}^{2}n\right)$
 $\theta \left(n\sqrt{n}\right)$

What is the time complexity of this code fragment?
for (i from 0 until n by 2)
    println(i);
j = 1;
while (j < n)
{
    println(j);
    j = j * 2;
}
 $\theta \left(n\sqrt{n}\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left(1\right)$
 $\theta \left({\mathrm{log}}^{2}n\right)$
 $\theta \left(n\right)$
Space complexity, iterative loops, single loops

What is the space complexity of this code fragment?
for (i from 0 until n by 1)
    println(i);
 $\theta \left(n\right)$
 $\theta \left(1\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left(\mathrm{log}n\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(\sqrt{n}\right)$

What is the space complexity of this code fragment?
i = 1;
while (i < n)
{
    println(i);
    i = i * sqrt(n);
}
 $\theta \left(1\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(\mathrm{log}n\right)$
 $\theta \left(n\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left(\sqrt{n}\right)$

What is the space complexity of this code fragment?
i = 1;
while (i < n)
{
    println(i);
    i = i * 2;
}
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left(1\right)$
 $\theta \left(\sqrt{n}\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(n\right)$
 $\theta \left(\mathrm{log}n\right)$
Space complexity, iterative loops, double loops

What is the space complexity of this code fragment?
for (i from 0 until n by 1)
    println(i);
for (j from 0 until n by 1)
    println(j);
 $\theta \left({n}^{2}\right)$
 $\theta \left(1\right)$
 $\theta \left(\mathrm{log}n\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left(\sqrt{n}\right)$
 $\theta \left(n\right)$

What is the space complexity of this code fragment?
i = 1;
while (i < n)
{
    println(i);
    i = i * 2;
}
for (j from 0 until n by 2)
    println(j);
 $\theta \left(\sqrt{n}\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left(1\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(\mathrm{log}n\right)$
 $\theta \left(n\right)$

What is the space complexity of this code fragment?
for (i from 0 until n by 2)
    println(i);
j = 1;
while (j < n)
{
    println(j);
    j = j * 2;
}
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left(\mathrm{log}n\right)$
 $\theta \left(1\right)$
 $\theta \left(\sqrt{n}\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(n\right)$

What is the space complexity of this code fragment?
for (i from 0 until n by 1)
    for (j from 0 until i by 1)
        println(i,j);
 $\theta \left({n}^{2}\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left(\sqrt{n}\right)$
 $\theta \left(\mathrm{log}n\right)$
 $\theta \left(1\right)$
 $\theta \left(n\right)$

What is the space complexity of this code fragment?
i = 1;
while (i < n)
{
    for (j from 0 until i by 1)
        println(i,j);
    i = i * 2;
}
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left(\sqrt{n}\right)$
 $\theta \left(\mathrm{log}n\right)$
 $\theta \left(1\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(n\right)$

What is the space complexity of this code fragment?
for (i from 0 until n)
{
    j = 1;
    while (j < n)
    {
        println(i,j);
        j = j * 2;
    }
}
 $\theta \left(1\right)$
 $\theta \left(\sqrt{n}\right)$
 $\theta \left(\mathrm{log}n\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left(n\right)$
 $\theta \left({n}^{2}\right)$

What is the space complexity of this code fragment?
i = 1;
while (i < n)
{
    for (j from 0 until n by 1)
        println(i,j);
    i = i * 2;
}
 $\theta \left(\mathrm{log}n\right)$
 $\theta \left(n\mathrm{log}n\right)$
 $\theta \left(n\right)$
 $\theta \left(1\right)$
 $\theta \left({n}^{2}\right)$
 $\theta \left(\sqrt{n}\right)$
Concept: analysis of classic, simple algorithms

Which of the following describes the classic recursive Fibonacci's time complexity?
 $\theta (n\sqrt{n})$
 $\theta \left(\frac{n}{\sqrt{n}}\right)$
 $\theta \left(\sqrt{n}\right)$
 $\theta \left({\Phi}^{n}\right)$
 $\theta \left(\Phi \right)$
 $\theta \left(1\right)$
 $\theta \left(\frac{\Phi}{n}\right)$

Which of the following describes the classic recursive Fibonacci's space complexity?
 $\theta \left(1\right)$
 $\theta \left(\frac{n}{\sqrt{n}}\right)$
 $\theta (n\sqrt{n})$
 $\theta \left(\frac{\Phi}{n}\right)$
 $\theta \left(\sqrt{n}\right)$
 $\theta \left(n\right)$

Which of the following describes iterative factorial's time complexity?
 $\theta \left(\frac{n}{\sqrt{n}}\right)$
 $\theta \left(\sqrt{n}\right)$
 $\theta (n\sqrt{n})$
 $\theta \left(n\right)$
 $\theta \left(1\right)$
 $\theta \left(\frac{\Phi}{n}\right)$

Which of the following describes iterative Fibonacci's space complexity?
 $\theta \left(\frac{\Phi}{n}\right)$
 $\theta \left(1\right)$
 $\theta \left(\sqrt{n}\right)$
 $\theta \left(n\right)$
 $\theta (n\sqrt{n})$
 $\theta \left(\frac{n}{\sqrt{n}}\right)$
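
The contrast these Fibonacci questions probe can be observed directly; in this illustrative sketch (the function names and the call counter are made up), call counts stand in for running time:

```python
def fib_recursive(n, counter):
    # Classic double-recursive Fibonacci; counter[0] tracks total calls,
    # which grow exponentially (on the order of Phi**n).
    counter[0] += 1
    if n < 2:
        return n
    return fib_recursive(n - 1, counter) + fib_recursive(n - 2, counter)

def fib_iterative(n):
    # Iterative Fibonacci: one pass, constant extra space.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

c = [0]
assert fib_recursive(10, c) == fib_iterative(10) == 55
assert c[0] > 100   # the call count already dwarfs n = 10
```
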
Concept: linear selection

Suppose you wish to find both the minimum and maximum values in an
array of n values, n being odd. Consider this algorithm:

Make a pass through the array and find the minimum

Make a pass through the array and find the maximum

Report the minimum and the maximum at termination
What is the minimum number of key comparisons
that this algorithm needs to
perform?
 $2n-2$
 $2n+2$
 $2n+1$
 $2n-1$
 $2n$

Suppose you wish to find both the minimum and maximum values in an
array A of n values, n being odd. Consider this algorithm:

Set the maximum and minimum to the first value in the array

Make a pass through the array
(i = 1; i < n; i += 2)

Compare A[i] with A[i+1]

Update the minimum if the smaller of the two is less

Update the maximum if the larger of the two is greater

Report the minimum and the maximum at loop termination
What is the minimum number of key comparisons
that this algorithm needs to
perform?
 $3n-3$
 $3n-1$
 $3\frac{n-1}{2}+2$
 $3\frac{n-1}{2}$
 $2n-3$
 $3n$
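
The pairwise scheme described above can be sketched with an explicit comparison counter (the function name and counter are illustrative; the list is a stand-in for the array A):

```python
def minmax_pairwise(a):
    # Find both min and max of an odd-length list, processing the
    # remaining elements in pairs and counting key comparisons.
    comparisons = 0
    lo = hi = a[0]
    i = 1
    while i < len(a):
        x, y = a[i], a[i + 1]
        comparisons += 1                          # compare A[i] with A[i+1]
        small, big = (x, y) if x < y else (y, x)
        comparisons += 1                          # smaller vs current min
        if small < lo:
            lo = small
        comparisons += 1                          # larger vs current max
        if big > hi:
            hi = big
        i += 2
    return lo, hi, comparisons

lo, hi, c = minmax_pairwise([5, 1, 4, 9, 2])      # n = 5
assert (lo, hi) == (1, 9)
assert c == 6                                     # three comparisons per pair
```
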

Consider running the linear selection algorithm on an array of
n unique elements.
What is a tight lower bound on the number of elements less than
the median of medians?
Assume the median of medians is found with groups of
five and that there are an odd number of groups.
 $2\frac{n}{5}+3$
 $3\frac{n}{10}+3$
 $2\frac{n}{5}+2$
 $5\frac{n}{10}+2$
 $3\frac{n}{5}+2$
 $5\frac{n}{10}+3$
 $3\frac{n}{5}+3$
 $\frac{3n+5}{10}$

Consider running the linear selection algorithm on an array with n
unique elements.
What is a tight lower bound on the number of elements less than
the median of medians?
Assume the median of medians is found with groups of
three and that there are an odd number of groups.
 $2\frac{n}{6}+2$
 $3\frac{n}{6}+2$
 $\frac{n}{3}+1$
 $2\frac{n}{6}+1$
 $2\frac{n}{3}+2$
 $2\frac{n}{3}+1$
 $\frac{n}{3}$
 $3\frac{n}{6}+1$

Consider running the linear selection algorithm on an array with n
unique elements.
What is a tight lower bound on the number of elements less than
the median of medians?
Assume the median of medians is found with groups of
seven and that there are an odd number of groups.
 $3\frac{n}{7}+3$
 $3\frac{n}{14}+2$
 $\frac{2n+28}{14}$
 $\frac{2n+7}{7}$
 $11\frac{n}{14}+3$
 $11\frac{n}{14}+2$
 $3\frac{n}{7}+2$
 $3\frac{n}{14}+3$

Consider running the linear selection algorithm on an array with $n={7}^{k}$
unique elements.
What is a reasonable recurrence equation to describe the
running time of the algorithm?
Assume the median of medians is found with groups of
seven.
 $T\left(n\right)=T\left(\frac{n}{7}\right)+T\left(\frac{6n}{7}\right)+\theta \left(n\right)$
 $T\left(n\right)=T\left(\frac{n}{7}\right)+T\left(\frac{5n}{7}\right)+\theta \left(n\right)$
 $T\left(n\right)=T\left(\frac{n}{7}\right)+T\left(\frac{4n}{14}\right)+\theta \left(n\right)$
 $T\left(n\right)=T\left(\frac{n}{7}\right)+T\left(\frac{4n}{7}\right)+\theta \left(n\right)$
 $T\left(n\right)=T\left(\frac{n}{7}\right)+T\left(\frac{11n}{14}\right)+\theta \left(n\right)$
 $T\left(n\right)=T\left(\frac{n}{7}\right)+T\left(\frac{10n}{14}\right)+\theta \left(n\right)$

Consider running the linear selection algorithm on an array with $n={3}^{k}$
unique elements.
What is a reasonable recurrence equation to describe the
running time of the algorithm?
Assume the median of medians is found with groups of
three.
 $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+T\left(\genfrac{}{}{0.1ex}{}{n}{6}\right)+\theta \left(n\right)$
 $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+T\left(\genfrac{}{}{0.1ex}{}{5n}{6}\right)+\theta \left(n\right)$
 $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\theta \left(n\right)$
 $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+\theta \left(n\right)$
 $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+T\left(\genfrac{}{}{0.1ex}{}{2n}{5}\right)+\theta \left(n\right)$
 $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+T\left(\genfrac{}{}{0.1ex}{}{2n}{3}\right)+\theta \left(n\right)$

Consider running the linear selection algorithm on an array with $n={5}^{k}$
unique elements.
What is a reasonable recurrence equation to describe the
running time of the algorithm?
Assume the median of medians is found with groups of
five.
 $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{5}\right)+T\left(\genfrac{}{}{0.1ex}{}{3n}{5}\right)+\theta \left(n\right)$
 $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{5}\right)+T\left(\genfrac{}{}{0.1ex}{}{9n}{10}\right)+\theta \left(n\right)$
 $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{5}\right)+T\left(\genfrac{}{}{0.1ex}{}{7n}{10}\right)+\theta \left(n\right)$
 $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{5}\right)+T\left(\genfrac{}{}{0.1ex}{}{4n}{5}\right)+\theta \left(n\right)$
 $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{5}\right)+T\left(\genfrac{}{}{0.1ex}{}{2n}{5}\right)+\theta \left(n\right)$
 $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{5}\right)+T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\theta \left(n\right)$

T or F:
If the linear selection algorithm uses groups of three to find the
median of medians, the asymptotic run time is still $\theta \left(n\right)$.

T or F:
If the linear selection algorithm uses groups of seven to find the
median of medians, the asymptotic run time is still $\theta \left(n\right)$.
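
The linear selection algorithm these questions refer to can be sketched as follows. This is a minimal Python sketch using groups of five, with illustrative names (`select`, `medians`), not a reference implementation:

```python
def select(a, k):
    """Return the k-th smallest (0-indexed) element of a, in worst-case linear time."""
    if len(a) <= 10:
        return sorted(a)[k]
    # Median of each group of five, then the median of those medians as pivot.
    groups = [sorted(a[i:i + 5]) for i in range(0, len(a), 5)]
    medians = [g[len(g) // 2] for g in groups]
    pivot = select(medians, len(medians) // 2)
    # Partition around the median of medians (elements assumed unique).
    lo = [x for x in a if x < pivot]
    hi = [x for x in a if x > pivot]
    if k < len(lo):
        return select(lo, k)
    if k >= len(a) - len(hi):
        return select(hi, k - (len(a) - len(hi)))
    return pivot
```

With groups of five, roughly 3n/10 elements are guaranteed to lie on each side of the pivot, which is what yields the familiar T(n/5) + T(7n/10) + Θ(n) recurrence.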
Concept: decision trees

In proving a tight lower bound for a class of algorithms, one tries to
establish, over all possible algorithms:
 the worst possible best case
 the best possible best case
 the best possible worst case
 the worst possible worst case

In an efficient decision tree for comparison sorts of n numbers,
what is the smallest possible depth of a leaf, in the best case?
Assume the root is at depth 0.
 approximately $\mathrm{log}n$
 1
 $n-1$
 n
 approximately $n\mathrm{log}n$
 $n!$

In an efficient decision tree for comparison sorts of n numbers,
what is the largest possible depth of a leaf, in the worst case?
Assume the root is at depth 0.
 $n!$
 1
 approximately $\mathrm{log}n$
 $n-1$
 n
 approximately $n\mathrm{log}n$

In an efficient decision tree for comparison sorts of n numbers,
what does the shortest path from the root to a leaf represent?
 nothing, the longest path is what's important
 the average case behavior of the sort
 the best case behavior of the sort
 the worst case behavior of the sort

An efficient decision tree for comparison sorts of n numbers
has how many leaves?
 ${2}^{n}$
 $\mathrm{log}n$
 $n\mathrm{log}n$
 $n!$

Deriving a tight lower time bound for comparison sorts, based upon an
efficient decision tree, yields:
 $O\left(n\mathrm{log}n\right)$
 $\Omega \left(n\right)$
 $\omega \left(n\mathrm{log}n\right)$
 $\Omega \left(n\mathrm{log}n\right)$
 $O\left(n\right)$
 unbounded
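
The counting argument behind that bound can be written out in two steps (a standard sketch, using Stirling's approximation):

```latex
% A binary decision tree of height h has at most 2^h leaves,
% and a correct comparison sort needs n! reachable leaves, so
2^h \ge n! \;\Longrightarrow\; h \ge \log_2(n!) = \Theta(n \log n),
% since \log(n!) = n \log n - n \log e + O(\log n) by Stirling's approximation.
```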
Concept: linear sorting fundamentals

Stability in a sort means:
 the sort can work on any kind of number (integers, reals, etc.)
 the order of ties in the output reflects the input
 the asymptotic run time does not vary upon the input

The best case behavior for insertion sort is:
 exponential
 log linear
 linear
 quadratic

T or F:
A linear-time sort does not compare entire keys with one another.

T or F:
A linear-time sort must always compare entire keys with one another.
Concept: linear sorting - counting
For counting sort, assume array A holds the data to be sorted,
array B holds the sorted data, and array C holds the counts.
The index i is used to sweep arrays A and B and the
index j is used to sweep array C.
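
Under those conventions, counting sort's three phases can be sketched as follows (a minimal Python sketch; `k` denotes the maximum key):

```python
def counting_sort(A, k):
    """Stable counting sort of nonnegative integers in A, all at most k."""
    C = [0] * (k + 1)
    for x in A:                  # phase 1: C[j] = count of elements equal to j
        C[x] += 1
    for j in range(1, k + 1):    # phase 2: C[j] = count of elements <= j
        C[j] += C[j - 1]
    B = [None] * len(A)
    for x in reversed(A):        # phase 3: right-to-left placement keeps ties stable
        C[x] -= 1
        B[C[x]] = x
    return B
```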

Counting sort is:

stable if lower indexed elements from the input array
are transferred to higher indexed elements in the output array.

stable if higher indexed elements from the input array
are transferred to higher indexed elements in the output array.
 always unstable
 always stable

Suppose you are to sort n numbers using counting sort.
What size should the C array be?
 there's not enough information
 less than n
 greater than n
 equal to n

Consider using a counting sort to sort the input array
[2,5,0,3,2,0,3,3]
.
After the first phase, when
C[i]
holds the number of
elements equal to i, the array C looks like:

[2,0,2,3,0,1]

[2,0,5,2,3,0,3,3]

[0,0,2,2,3,3,3,5]

[2,2,4,7,7,8]

[2,2,4,6,7,8]

Consider using a counting sort to sort the input array
[2,5,0,3,2,0,3,3]
with auxiliary array C.
After the second phase, when
C[i]
holds the number of
elements less than or equal to i, the array C looks like:

[2,2,4,7,7,8]

[2,2,4,6,7,8]

[2,0,5,2,3,0,3,3]

[2,0,2,3,0,1]

[0,0,2,2,3,3,3,5]

Consider using a stable counting sort to sort the input array
[2,5,0,3,2,0,3,3]
with destination array B.
At the start of phase three, using a right-to-left placement,
the first element to be placed in B is:
 0, at index 1
 4, at index 5
 3, at index 6
 1, at index 0
 2, at index 3
 5, at index 7

Let n be the count of numbers in a collection of base${}_{10}$ numbers.
Suppose zero is the minimum number and
k is the maximum number in the collection.
The time complexity of counting sort is:
 $\Theta \left(n\mathrm{log}k\right)$
 $\Theta \left({n}^{k}\right)$
 $\Theta \left(nk\right)$
 $\Theta (n+k)$

Let n be the count of numbers in a collection of base${}_{10}$ numbers.
Suppose zero is the minimum number and
k is the maximum number in the collection.
If $k=o\left(n\right)$, then the time complexity of
counting sort is:
 $\Theta \left({k}^{2}\right)$
 $\Theta \left(k\right)$
 $\Theta \left(n\right)$
 $\Theta \left(k\mathrm{log}n\right)$
 $\Theta \left(n\mathrm{log}k\right)$
 $\Theta \left({n}^{2}\right)$

T or F:
Suppose the
lower bound of some numbers to be sorted is not zero.
Counting sort can still be used without changing its
fundamental time bounds.

T or F:
Suppose the
lower bound of some numbers to be sorted,
LB
, is not zero.
Counting sort can still be used without changing its
fundamental time bounds, if, among other changes, the
statement
C[A[i]] += 1
is changed
to
C[A[i]-LB] += 1
.

T or F:
Suppose the
lower bound of some numbers to be sorted,
LB
, is not zero.
Counting sort can still be used without changing its
fundamental time bounds, if, among other changes, the
statement
B[C[A[i]]-1] = A[i]
is changed
to
B[C[A[i-LB]]-1] = A[i]
.
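
The offset-by-LB variant that the statements above describe can be sketched as follows (a Python sketch; only the `- LB` offsets when indexing C differ from standard counting sort):

```python
def counting_sort_lb(A, LB, UB):
    """Stable counting sort of integers in [LB, UB]; keys are offset by LB when indexing C."""
    C = [0] * (UB - LB + 1)
    for x in A:
        C[x - LB] += 1           # C[A[i] - LB] += 1
    for j in range(1, len(C)):
        C[j] += C[j - 1]
    B = [None] * len(A)
    for x in reversed(A):
        C[x - LB] -= 1           # B[C[A[i] - LB] - 1] = A[i], with the decrement folded in
        B[C[x - LB]] = x
    return B
```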

T or F: Counting sort can be used to
sort n decimal numbers uniformly distributed
over the range of zero to ${n}^{5}$ in linear time.

Consider using counting sort to
sort n numbers uniformly distributed
over the range of zero to ${n}^{k}$.
The asymptotic complexity of the sort will be
 $\theta (n+k)$
 $\theta \left(k\mathrm{log}n\right)$
 $\theta \left(k\right)$
 $\theta ({n}^{k}+n)$
 $\theta \left(n\mathrm{log}k\right)$
 $\theta \left(n\right)$
Concept: linear sorting - radix

Suppose the recurrence equation
$T(p,q)=T(p,q-1)+\theta \left(p\right)$
is used to describe radix sort.
Considering only the sorting of the numbers on a particular digit,
which sort is consistent with that equation?
 selection sort
 merge sort
 radix sort
 counting sort

Suppose the recurrence equation
$T(p,q)=T(p,q-1)+\theta \left(p\mathrm{log}p\right)$
is used to describe radix sort.
Considering only the sorting of the numbers on a particular digit,
which sort is consistent with that equation?
 counting sort
 merge sort
 radix sort
 selection sort

Suppose the recurrence equation
$T(p,q)=T(p,q-1)+\theta \left({p}^{2}\right)$
is used to describe radix sort.
Considering only the sorting of the numbers on a particular digit,
which sort is consistent with that equation?
 radix sort
 counting sort
 merge sort
 selection sort

Suppose the recurrence equation
$T(p,q)=T(p,q-1)+\theta \left(p\right)$
is used to describe radix sort.
Which variable in the equation refers to the number of digits?
 d
 p
 n
 q

Consider using radix sort for sorting the following numbers:
558
354
318
622
After the first pass, the order of the numbers is:
 354, 318, 558, 622
 318, 354, 558, 622
 622, 354, 558, 318
 622, 558, 354, 318
 622, 354, 318, 558
 622, 558, 318, 354
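
One pass of radix sort, a stable redistribution on a single digit, can be sketched as follows (a Python sketch; `radix_pass` is an illustrative name):

```python
def radix_pass(nums, d):
    """Stably reorder nums by the base-10 digit at position d (0 = ones place)."""
    digit = lambda x: (x // 10 ** d) % 10
    buckets = [[] for _ in range(10)]
    for x in nums:
        buckets[digit(x)].append(x)   # appending preserves input order within a digit
    return [x for b in buckets for x in b]
```

For the numbers above, `radix_pass([558, 354, 318, 622], 0)` returns `[622, 354, 558, 318]`: 558 stays ahead of 318 because both end in 8 and the pass is stable.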

Let n be the count of numbers in a collection of positive,
base${}_{10}$ numbers.
Let m be the number of digits in the largest number in the
collection. Suppose the auxiliary sort works in $\Theta \left(n\right)$ time.
Then radix sorting takes time:
 $\Theta \left(m\mathrm{log}n\right)$
 $\Theta \left(n\mathrm{log}n\right)$
 $\Theta \left(n\mathrm{log}m\right)$
 $\Theta \left(nm\right)$

Let n be the count of numbers in a collection of positive,
base${}_{10}$ numbers.
Let m be the number of digits in the largest number in the
collection. Suppose the auxiliary sort works in $\Theta \left(n\mathrm{log}n\right)$
time. Then radix sorting takes time:
 $\Theta \left(mn\mathrm{log}n\right)$
 $\Theta \left(n\mathrm{log}\left(mn\right)\right)$
 $\Theta \left(n\mathrm{log}\left(m+n\right)\right)$
 $\Theta \left(n\mathrm{log}m\right)$

T or F:
Suppose during one pass of radix sort, there is a tie between two numbers.
Since they are considered the same number,
it does not matter if those two numbers swap positions.

Consider the sort used for each pass of radix sort.
Must the auxiliary sort be stable?
 No, because radix sort works even if the auxiliary sort is unstable.
 Yes, because swapping ties can undo the work of previous passes
 No, because there can be no ties in radix sort.
 Yes, because swapping ties can undo the work of future passes

T or F: Radix sort can be used to
sort n decimal numbers uniformly distributed
over the range of zero to ${n}^{5}$ in linear time.
Concept: linear sorting - bucket

T or F: If bucket sort uses equallysized buckets,
then the distribution of the incoming numbers need not be
uniform for the sort to run in expected linear time.

Consider using bucket sort to sort
n numbers evenly distributed over the
range 0 to m. Roughly, how many buckets should you use?
 n
 m
 $\genfrac{}{}{0.1ex}{}{n}{m}$
 $\genfrac{}{}{0.1ex}{}{m}{n}$

Consider using bucket sort to sort
n numbers evenly distributed over the
range 0 to m. Roughly, what size bucket should you use?
 m
 $\genfrac{}{}{0.1ex}{}{m}{n}$
 n
 $\genfrac{}{}{0.1ex}{}{n}{m}$

Consider using bucket sort to sort
n nonnegative numbers evenly distributed over the
range 0 to m. The range of the first bucket should be:
 0 to $\genfrac{}{}{0.1ex}{}{n}{m}$
 $\genfrac{}{}{0.1ex}{}{m}{n}$ to $2\genfrac{}{}{0.1ex}{}{m}{n}$
 $0$ to $\genfrac{}{}{0.1ex}{}{m}{n}$
 $\genfrac{}{}{0.1ex}{}{n}{m}$ to $\genfrac{}{}{0.1ex}{}{2n}{m}$

Consider using bucket sort to sort
n nonnegative numbers evenly distributed over the
range 0 to m. The average count of numbers in a bucket is:
 0
 $\genfrac{}{}{0.1ex}{}{n}{m}$
 $\genfrac{}{}{0.1ex}{}{m}{n}$
 1
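
These layout choices (n buckets, each of width m/n, with about one element expected per bucket) can be sketched as follows; Python's built-in `sorted` stands in for the per-bucket insertion sort:

```python
def bucket_sort(nums, m):
    """Sort n numbers assumed evenly distributed over [0, m] using n buckets of width m/n."""
    n = len(nums)
    buckets = [[] for _ in range(n)]
    for x in nums:
        i = min(int(x * n / m), n - 1)   # bucket i covers [i*m/n, (i+1)*m/n)
        buckets[i].append(x)
    out = []
    for b in buckets:
        out.extend(sorted(b))            # stands in for the per-bucket insertion sort
    return out
```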

Consider a bucket sort that uses insertion sort to sort the
individual buckets.
Suppose you could bound the maximum count of numbers in a
bucket to a constant C. Then the asymptotic running time
to sort one bucket would be:
 $\Theta \left(1\right)$
 $\Theta \left(n\mathrm{log}n\right)$, because insertion sort is loglinear in the average case
 $\Theta \left(n\right)$, because insertion sort is linear in the best case
 $\Theta \left({n}^{2}\right)$, because insertion sort is quadratic in the worst case

Consider using bucket sort to sort
n nonnegative numbers evenly distributed over the
range 0 to m. The expected running time is:
 quadratic
 constant
 loglinear
 linear

Consider using bucket sort to sort
n nonnegative numbers in the
range 0 to m with no a priori knowledge of the
distribution of the numbers. Using insertion sort as
the auxiliary sort, the worst case running time is:
 quadratic
 constant
 cubic
 linear

T or F: Bucket sort can be used to
sort n decimal numbers uniformly distributed
over the range of zero to ${n}^{5}$ in linear time.
Concept: recurrence equations

Stooge sort has the following algorithm. Recursively sort
the lower two-thirds of an array, then recursively sort
the upper two-thirds, then recursively sort the lower
two-thirds again.
The recursion stops when the array consists of two or fewer
elements. If the array size is two, the elements are swapped
if necessary. Which of the following recurrence equations
describe stooge sort?
 $T\left(n\right)=3T\left(2\genfrac{}{}{0.1ex}{}{n}{3}\right)+\Theta \left(1\right)$
 $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(\mathrm{log}n\right)$
 $T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(n\right)$
 $T\left(n\right)=2T\left(3\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(1\right)$
 $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+\Theta \left(1\right)$
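
The procedure described above translates directly into code (a minimal Python sketch):

```python
def stooge_sort(a, lo=0, hi=None):
    """Sort a[lo:hi+1] in place by recursively sorting overlapping two-thirds."""
    if hi is None:
        hi = len(a) - 1
    if hi <= lo:
        return a                        # zero or one element: nothing to do
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]     # size-two base case: swap if out of order
    if hi - lo + 1 >= 3:
        t = (hi - lo + 1) // 3
        stooge_sort(a, lo, hi - t)      # lower two-thirds
        stooge_sort(a, lo + t, hi)      # upper two-thirds
        stooge_sort(a, lo, hi - t)      # lower two-thirds again
    return a
```

Each call of size n makes three recursive calls of size 2n/3 plus constant work, which is the shape the recurrence options above describe.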

Merge sort has the following algorithm. Recursively sort
the lower half of an array, then recursively sort
the upper half, then merge the two halves. When merging,
a single pass is made through each half.
The recursion stops when the array consists of one element.
Which of the following recurrence equations
describe merge sort?
 $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(1\right)$
 $T\left(n\right)=T(n-1)+\Theta \left(n\right)$
 $T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(1\right)$
 $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(n\right)$

Binary search has the following algorithm. Look at the
middle element in an array. If the element is the one
being looked for (the target), return FOUND. Otherwise, if the
element is greater than the target, recursively search
the lower half of the array. Otherwise, recursively search
the upper half of the array. If the array consists of three
or fewer elements, perform a linear search of the array,
returning FOUND if found and MISSING otherwise.
Which of the following recurrence equations
describe binary search?
 $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(1\right)$
 $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(n\right)$
 $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(1\right)$
 $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(n\right)$
 $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(\mathrm{log}n\right)$
 $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(\mathrm{log}n\right)$
 $T\left(n\right)=2T(n-1)+\Theta \left(1\right)$
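
The variant described above, with a linear scan once three or fewer elements remain, can be sketched as follows (a minimal Python sketch over a sorted list):

```python
def binary_search(a, target, lo=0, hi=None):
    """Search sorted list a[lo:hi] for target; linear scan once three or fewer remain."""
    if hi is None:
        hi = len(a)
    if hi - lo <= 3:
        return "FOUND" if target in a[lo:hi] else "MISSING"
    mid = (lo + hi) // 2
    if a[mid] == target:
        return "FOUND"
    if a[mid] > target:
        return binary_search(a, target, lo, mid)     # search the lower half
    return binary_search(a, target, mid + 1, hi)     # search the upper half
```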

Quicksort can have the following algorithm: find the median value of
the array and partition the array into two regions, one region holds
all the values less than the median and the other holds all values
greater than the median. Median finding and partitioning can both be
done in linear time.
Finally, the two regions are recursively sorted.
Which of the following recurrence equations
describe this version of quicksort?
 $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left({n}^{2}\right)$
 $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(1\right)$
 $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(n\right)$
 $T\left(n\right)=2T(n-1)+\Theta \left(1\right)$
 $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(n\right)$
 $T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(\mathrm{log}n\right)$
 $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left({n}^{2}\right)$

The recurrence equation $T\left(n\right)=T(n-1)+\Theta \left(n\right)$ describes
which sort, in the worst case?
 quick sort
 radix sort
 merge sort
 heap sort

Consider using selection
sort to sort a previously sorted array.
Which of the following recurrence equations
describes this situation?
 $T\left(n\right)=2T(n-1)+\Theta \left(1\right)$
 $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(n\right)$
 $T\left(n\right)=2T(n-1)+\Theta \left(n\right)$
 $T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\Theta \left(1\right)$
 $T\left(n\right)=T(n-1)+\Theta \left(1\right)$
 $T\left(n\right)=T(n-1)+\Theta \left(n\right)$
Concept: recursion trees
Level 0 refers to the apex of the recursion tree. Level 1 refers to the level
immediately below level 0, and so on. Assume at the bottom level, the
size of the problem is 1. Assume that it takes $\Theta \left(1\right)$ time to
solve a problem of size 1. Assume the problem size is always evenly
divisible when divided.
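
As a concrete instance of this bookkeeping, take $T(n)=4T(n/2)+\log n$ (a sketch of the level-by-level accounting):

```latex
% Level i has 4^i nodes, each on a subproblem of size n/2^i, so the work at level i is
W(i) = 4^i \log\frac{n}{2^i}.
% The bottom level sits at i = \log_2 n, with
4^{\log_2 n} = n^{\log_2 4} = n^2 \ \text{nodes, each doing } \Theta(1) \text{ work, for } \Theta(n^2) \text{ total.}
```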

Given the recurrence
$T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$,
what is the work done at level 0?
 ${4}^{{\mathrm{log}}_{2}n}$
 $4\mathrm{log}\genfrac{}{}{0.1ex}{}{n}{2}$
 ${n}^{4}$
 ${n}^{{\mathrm{log}}_{2}4}$
 $4n$
 $\mathrm{log}n$

Given the recurrence
$T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$,
what is the work done at level 1?
 $2n$
 $\mathrm{log}n$
 ${\genfrac{}{}{0.1ex}{}{n}{2}}^{{\mathrm{log}}_{2}4}$
 ${4}^{{\mathrm{log}}_{2}\genfrac{}{}{0.1ex}{}{n}{2}}$
 $4\mathrm{log}\genfrac{}{}{0.1ex}{}{n}{2}$
 ${n}^{4}$

Given the recurrence
$T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+n$,
how many nodes are at level 2?
 16
 4
 8
 2
 6
 12

Given the recurrence
$T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+n$,
how many nodes are at the bottom level?
 ${4}^{{\mathrm{log}}_{3}n}$
 ${4}^{{\mathrm{log}}_{4}n}$
 ${3}^{{\mathrm{log}}_{4}n}$
 ${3}^{{\mathrm{log}}_{3}n}$

Given the recurrence
$T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+\mathrm{log}n$,
how much work is done at the bottom level?
 $\Theta \left({4}^{{\mathrm{log}}_{4}n}\mathrm{log}n\right)$
 $\Theta \left({4}^{{\mathrm{log}}_{3}n}\right)$
 $\Theta \left({4}^{{\mathrm{log}}_{3}n}\mathrm{log}n\right)$
 $\Theta \left({3}^{{\mathrm{log}}_{3}n}\mathrm{log}n\right)$
 $\Theta \left({3}^{{\mathrm{log}}_{4}n}\mathrm{log}n\right)$
 $\Theta \left({3}^{{\mathrm{log}}_{3}n}\right)$
 $\Theta \left({3}^{{\mathrm{log}}_{4}n}\right)$
 $\Theta \left({4}^{{\mathrm{log}}_{4}n}\right)$
Concept: identifying cases of the master recurrence theorem
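
Most of the following can be classified mechanically when the driving function has the form $f(n)=\Theta({n}^{d}{\left(\mathrm{log}n\right)}^{k})$. Below is a Python sketch of that classification (recurrences not of the form $T(n)=aT(n/b)+f(n)$, such as $T(n)=T(n-1)+1$, are simply not in the correct form, and this helper does not apply to them):

```python
from math import log

def master_case(a, b, d, k=0):
    """Classify T(n) = a*T(n/b) + Theta(n^d * (log n)^k) under the basic master theorem."""
    c = log(a, b)                # critical exponent: n^(log_b a)
    eps = 1e-9                   # tolerance for floating-point comparison
    if d < c - eps:
        return "case 1"          # f grows polynomially slower than n^c
    if d > c + eps:
        return "case 3"          # f grows polynomially faster (regularity holds for such f)
    if k == 0:
        return "case 2"          # f = Theta(n^c), so T(n) = Theta(n^c log n)
    # Within a polylog factor of n^c: the three basic cases do not apply.
    return "between case 2 and 3" if k > 0 else "between case 1 and 2"
```

For example, $T(n)=4T(n/4)+{n}^{2}$ gives `master_case(4, 4, 2)` = "case 3", while $T(n)=2T(n/2)+n/\mathrm{log}n$ gives `master_case(2, 2, 1, -1)`, which falls between cases 1 and 2.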

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+{n}^{2}$
fall?
 between case 2 and case 3
 case 3
 between case 1 and case 2
 the equation is not in the correct form
 case 1
 case 2

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{2}$
fall?
 between case 1 and case 2
 case 3
 case 1
 between case 2 and case 3
 case 2
 the equation is not in the correct form

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{2}$
fall?
 the equation is not in the correct form
 case 1
 between case 2 and case 3
 case 3
 between case 1 and case 2
 case 2

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{2}^{n}$
fall?
 case 2
 between case 2 and case 3
 case 3
 between case 1 and case 2
 case 1
 the equation is not in the correct form

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)={2}^{n}T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{n}$
fall?
 case 3
 between case 1 and case 2
 the equation is not in the correct form
 between case 2 and case 3
 case 1
 case 2

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=T(n-1)+1$
fall?
 the equation is not in the correct form
 case 3
 case 2
 between case 1 and case 2
 case 1
 between case 2 and case 3

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=2T(n-2)+n$
fall?
 case 3
 between case 1 and case 2
 the equation is not in the correct form
 between case 2 and case 3
 case 1
 case 2

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=16T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+n$
fall?
 case 2
 between case 2 and case 3
 between case 1 and case 2
 case 3
 the equation is not in the correct form
 case 1

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n\mathrm{log}n$
fall?
 between case 1 and case 2
 case 3
 the equation is not in the correct form
 case 2
 between case 2 and case 3
 case 1

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n\mathrm{log}n$
fall?
 case 2
 case 3
 between case 2 and case 3
 between case 1 and case 2
 case 1
 the equation is not in the correct form

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\genfrac{}{}{0.1ex}{}{n}{\mathrm{log}n}$
fall?
 the equation is not in the correct form
 between case 1 and case 2
 between case 2 and case 3
 case 1
 case 2
 case 3

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+{n}^{0.51}$
fall?
 case 2
 the equation is not in the correct form
 case 1
 case 3
 between case 2 and case 3
 between case 1 and case 2

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=\genfrac{}{}{0.1ex}{}{1}{2}T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+1$
fall?
 case 1
 case 3
 case 2
 between case 2 and case 3
 the equation is not in the correct form
 between case 1 and case 2

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=16T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+n!$
fall?
 the equation is not in the correct form
 case 3
 case 2
 case 1
 between case 1 and case 2
 between case 2 and case 3

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=\sqrt{2}T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$
fall?
 the equation is not in the correct form
 between case 1 and case 2
 case 1
 case 2
 between case 2 and case 3
 case 3

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n$
fall?
 case 3
 case 1
 between case 1 and case 2
 between case 2 and case 3
 case 2
 the equation is not in the correct form

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+\sqrt{n}$
fall?
 case 1
 case 2
 case 3
 between case 1 and case 2
 between case 2 and case 3
 the equation is not in the correct form

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n$
fall?
 case 1
 case 3
 the equation is not in the correct form
 case 2
 between case 2 and case 3
 between case 1 and case 2

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+n\mathrm{log}n$
fall?
 between case 2 and case 3
 case 1
 the equation is not in the correct form
 case 2
 case 3
 between case 1 and case 2

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+\genfrac{}{}{0.1ex}{}{n}{2}$
fall?
 the equation is not in the correct form
 between case 1 and case 2
 case 3
 between case 2 and case 3
 case 1
 case 2

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=6T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+{n}^{2}\mathrm{log}n$
fall?
 case 2
 between case 1 and case 2
 case 1
 the equation is not in the correct form
 case 3
 between case 2 and case 3

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\genfrac{}{}{0.1ex}{}{n}{\mathrm{log}n}$
fall?
 the equation is not in the correct form
 between case 1 and case 2
 between case 2 and case 3
 case 3
 case 1
 case 2

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=64T\left(\genfrac{}{}{0.1ex}{}{n}{8}\right)+\genfrac{}{}{0.1ex}{}{n}{\mathrm{log}n}$
fall?
 the equation is not in the correct form
 case 1
 between case 2 and case 3
 between case 1 and case 2
 case 2
 case 3

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=7T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{7}}\right)+{n}^{2}$
fall?
 case 3
 case 2
 between case 2 and case 3
 between case 1 and case 2
 case 1
 the equation is not in the correct form

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=7T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{2}$
fall?
 case 2
 between case 2 and case 3
 case 3
 between case 1 and case 2
 the equation is not in the correct form
 case 1

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=7T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+{n}^{2}$
fall?
 between case 2 and case 3
 case 1
 case 2
 the equation is not in the correct form
 case 3
 between case 1 and case 2

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$
fall?
 between case 1 and case 2
 case 2
 the equation is not in the correct form
 case 3
 case 1
 between case 2 and case 3

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$
fall?
 case 2
 between case 2 and case 3
 case 3
 case 1
 the equation is not in the correct form
 between case 1 and case 2

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n$
fall?
 the equation is not in the correct form
 case 3
 between case 2 and case 3
 case 1
 case 2
 between case 1 and case 2

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt[3]{5}}\right)+{n}^{3}$
fall?
 the equation is not in the correct form
 case 2
 between case 2 and case 3
 case 1
 between case 1 and case 2
 case 3

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt[3]{5}}\right)+\genfrac{}{}{0.1ex}{}{{n}^{3}}{\mathrm{log}n}$
fall?
 the equation is not in the correct form
 case 2
 between case 2 and case 3
 between case 1 and case 2
 case 3
 case 1

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{5}}\right)+{n}^{2}\mathrm{log}n$
fall?
 between case 2 and case 3
 between case 1 and case 2
 case 1
 the equation is not in the correct form
 case 3
 case 2

In terms of the master recurrence theorem, where does the equation
$T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{5}}\right)+n\mathrm{log}n$
fall?
 between case 1 and case 2
 case 2
 case 3
 the equation is not in the correct form
 case 1
 between case 2 and case 3
Concept: using the master recurrence theorem

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+{n}^{2}$?
 the master theorem cannot be used
 $\Theta \left({n}^{2}\mathrm{log}n\right)$
 $\Theta \left({n}^{2}\right)$
 $\Theta \left({n}^{3}\right)$
 $\Theta \left({n}^{3}\mathrm{log}n\right)$
 $\Theta \left({n}^{4}\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{2}$?
 $\Theta \left({n}^{\genfrac{}{}{0.1ex}{}{3}{2}}\right)$
 $\Theta \left({n}^{2}\right)$
 $\Theta \left({n}^{2}\mathrm{log}n\right)$
 $\Theta \left({n}^{3}\right)$
 the master theorem cannot be used
 $\Theta \left({n}^{3}\mathrm{log}n\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{2}$?
 $\Theta \left({n}^{2}\right)$
 $\Theta \left({n}^{3}\right)$
 $\Theta \left({n}^{4}\right)$
 $\Theta \left({n}^{3}\mathrm{log}n\right)$
 $\Theta \left({n}^{2}\mathrm{log}n\right)$
 the master theorem cannot be used

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{2}^{n}$?
 $\Theta \left({2}^{{n}^{\mathrm{log}n}}\right)$
 $\Theta \left({n}^{2}\mathrm{log}n\right)$
 the master theorem cannot be used
 $\Theta \left({n}^{2}\right)$
 $\Theta \left({2}^{n}\right)$
 $\Theta \left({2}^{n}\mathrm{log}n\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)={2}^{n}T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{n}$?
 $\Theta \left({n}^{n}\right)$
 $\Theta \left({2}^{n}\right)$
 $\Theta \left({2}^{n}\mathrm{log}n\right)$
 $\Theta \left({n}^{n}\mathrm{log}n\right)$
 $\Theta \left({n}^{{n}^{\mathrm{log}n}}\right)$
 the master theorem cannot be used

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=T(n-1)+1$?
 $\Theta \left(n\mathrm{log}n\right)$
 $\Theta \left(1\right)$
 $\Theta \left(\mathrm{log}{n}^{\mathrm{log}n}\right)$
 the master theorem cannot be used
 $\Theta \left(\mathrm{log}n\right)$
 $\Theta \left(n\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=2T(n-2)+n$?
 the master theorem cannot be used
 $\Theta \left(n\right)$
 $\Theta \left(n\mathrm{log}n\right)$
 $\Theta \left(\mathrm{log}n\right)$
 $\Theta \left({\left(\mathrm{log}n\right)}^{2}\right)$
 $\Theta \left(1\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=16T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+n$?
 $\Theta \left(n\right)$
 $\Theta \left(n\mathrm{log}n\right)$
 $\Theta \left(\mathrm{log}n\right)$
 $\Theta \left({n}^{2}\right)$
 $\Theta \left({n}^{2}\mathrm{log}n\right)$
 the master theorem cannot be used

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n\mathrm{log}n$?
 $\Theta \left(n\right)$
 $\Theta \left(n\mathrm{log}n\right)$
 $\Theta \left(\mathrm{log}n\right)$
 $\Theta \left({n}^{2}\right)$
 $\Theta \left({n}^{2}\mathrm{log}n\right)$
 the master theorem cannot be used

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n\mathrm{log}n$?
 $\Theta \left({n}^{{\mathrm{log}}_{2}\left(3\right)}\right)$
 $\Theta \left(n\mathrm{log}n\right)$
 the master theorem cannot be used
 $\Theta \left({n}^{{\mathrm{log}}_{3}\left(2\right)}\right)$
 $\Theta \left({n}^{\genfrac{}{}{0.1ex}{}{3}{2}}\right)$
 $\Theta \left(n{\mathrm{log}}_{\genfrac{}{}{0.1ex}{}{3}{2}}\left(n\right)\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\genfrac{}{}{0.1ex}{}{n}{\mathrm{log}n}$?
 $\Theta \left({n}^{2}\right)$
 $\Theta \left(n\mathrm{log}n\right)$
 $\Theta \left({n}^{2}\mathrm{log}n\right)$
 the master theorem cannot be used
 $\Theta \left(\mathrm{log}n\right)$
 $\Theta \left(n\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+{n}^{0.51}$?
 the master theorem cannot be used
 $\Theta \left(\sqrt{n}\right)$
 $\Theta \left(\mathrm{log}n\right)$
 $\Theta \left(n\right)$
 $\Theta \left({n}^{0.51}\right)$
 $\Theta \left(\sqrt{n}\mathrm{log}n\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=\genfrac{}{}{0.1ex}{}{1}{2}T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+1$?
 the master theorem cannot be used
 $\Theta \left({n}^{\genfrac{}{}{0.1ex}{}{1}{2}}\right)$
 $\Theta \left({\mathrm{log}}_{2}\genfrac{}{}{0.1ex}{}{1}{2}\right)$
 $\Theta \left(\mathrm{log}n\right)$
 $\Theta \left(n\right)$
 $\Theta \left({n}^{{\mathrm{log}}_{2}\genfrac{}{}{0.1ex}{}{1}{2}}\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=16T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+n!$?
 $\Theta (n!)$
 the master theorem cannot be used
 $\Theta ({n}^{2}n!)$
 $\Theta \left(\left({n}^{2}\right)!\right)$
 $\Theta \left({n}^{2}\right)$
 $\Theta (n!\mathrm{log}n)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=\sqrt{2}T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$?
 $\Theta \left(\mathrm{log}n\right)$
 $\Theta \left(n\mathrm{log}n\right)$
 $\Theta \left(\genfrac{}{}{0.1ex}{}{n}{\mathrm{log}\left(2\right)}\right)$
 $\Theta \left(\sqrt{n}\right)$
 $\Theta \left(\sqrt{n}\mathrm{log}n\right)$
 the master theorem cannot be used

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n$?
 $\Theta \left({n}^{{\mathrm{log}}_{2}\left(3\right)}\right)$
 $\Theta \left(n\right)$
 $\Theta \left(n{\mathrm{log}}_{\genfrac{}{}{0.1ex}{}{3}{2}}n\right)$
 the master theorem cannot be used
 $\Theta \left(n\mathrm{log}n\right)$
 $\Theta \left({n}^{{\mathrm{log}}_{3}\left(2\right)}\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+\sqrt{n}$?
 $\Theta \left(n\mathrm{log}\sqrt{n}\right)$
 $\Theta \left(n\mathrm{log}n\right)$
 $\Theta \left(\sqrt{n}\mathrm{log}n\right)$
 $\Theta \left(\mathrm{log}n\right)$
 $\Theta \left(n\right)$
 the master theorem cannot be used

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n$?
 the master theorem cannot be used
 $\Theta \left(n\mathrm{log}n\right)$
 $\Theta \left(n\right)$
 $\Theta \left({n}^{2}\right)$
 $\Theta \left(\mathrm{log}n\right)$
 $\Theta \left({n}^{2}\mathrm{log}n\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+n\mathrm{log}n$?
 $\Theta \left(n\right)$
 $\Theta \left(\mathrm{log}n\right)$
 the master theorem cannot be used
 $\Theta \left({n}^{{\mathrm{log}}_{4}\left(3\right)}\right)$
 $\Theta \left(n\mathrm{log}n\right)$
 $\Theta \left({n}^{{\mathrm{log}}_{3}\left(4\right)}\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+\genfrac{}{}{0.1ex}{}{n}{2}$?
 $\Theta \left({n}^{2}\mathrm{log}n\right)$
 the master theorem cannot be used
 $\Theta \left(n\right)$
 $\Theta \left(\mathrm{log}n\right)$
 $\Theta \left({n}^{2}\right)$
 $\Theta \left(n\mathrm{log}n\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=6T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+{n}^{2}\mathrm{log}n$?
 $\Theta \left({n}^{{\mathrm{log}}_{3}\left(6\right)}\mathrm{log}n\right)$
 the master theorem cannot be used
 $\Theta \left({n}^{{\mathrm{log}}_{6}\left(3\right)}\mathrm{log}n\right)$
 $\Theta \left({n}^{{\mathrm{log}}_{3}\left(6\right)}\right)$
 $\Theta \left({n}^{2}\mathrm{log}n\right)$
 $\Theta \left({n}^{{\mathrm{log}}_{6}\left(3\right)}\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\genfrac{}{}{0.1ex}{}{n}{\mathrm{log}n}$?
 $\Theta \left({n}^{2}\right)$
 $\Theta \left(\mathrm{log}n\right)$
 $\Theta \left(\genfrac{}{}{0.1ex}{}{{n}^{2}}{\mathrm{log}n}\right)$
 the master theorem cannot be used
 $\Theta \left(n\mathrm{log}n\right)$
 $\Theta \left({n}^{2}\mathrm{log}n\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=64T\left(\genfrac{}{}{0.1ex}{}{n}{8}\right)+\genfrac{}{}{0.1ex}{}{n}{\mathrm{log}n}$?
 $\Theta \left(n\mathrm{log}n\right)$
 $\Theta \left({n}^{2}\right)$
 $\Theta \left({n}^{2}\mathrm{log}n\right)$
 the master theorem cannot be used
 $\Theta \left(\mathrm{log}n\right)$
 $\Theta \left(n\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=7T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{7}}\right)+{n}^{2}$?
 $\Theta \left(n\mathrm{log}n\right)$
 $\Theta \left(\genfrac{}{}{0.1ex}{}{n}{{\mathrm{log}}_{\sqrt{7}}\left(7\right)}\right)$
 $\Theta \left({n}^{2}\right)$
 $\Theta \left({n}^{{\mathrm{log}}_{\sqrt{7}}\left(7\right)}\right)$
 $\Theta \left({n}^{2}\mathrm{log}n\right)$
 the master theorem cannot be used

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=7T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{2}$?
 $\Theta \left({n}^{2+{\mathrm{log}}_{2}\left(7\right)}\right)$
 $\Theta \left({n}^{2+{\mathrm{log}}_{7}\left(2\right)}\right)$
 $\Theta \left({n}^{{\mathrm{log}}_{2}\left(7\right)}\right)$
 the master theorem cannot be used
 $\Theta \left({n}^{2}\right)$
 $\Theta \left({n}^{{\mathrm{log}}_{7}\left(2\right)}\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=7T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+{n}^{2}$?
 $\Theta \left({n}^{2+{\mathrm{log}}_{3}\left(7\right)}\right)$
 $\Theta \left({n}^{{\mathrm{log}}_{3}\left(7\right)}\right)$
 the master theorem cannot be used
 $\Theta \left({n}^{2+{\mathrm{log}}_{7}\left(3\right)}\right)$
 $\Theta \left({n}^{2}\right)$
 $\Theta \left({n}^{{\mathrm{log}}_{7}\left(3\right)}\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$?
 $\Theta \left({n}^{2}\mathrm{log}n\right)$
 the master theorem cannot be used
 $\Theta \left(n\mathrm{log}n\right)$
 $\Theta \left(n\right)$
 $\Theta \left({n}^{2}\right)$
 $\Theta \left(\mathrm{log}n\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$?
 $\Theta \left({n}^{{\mathrm{log}}_{5}\left(2\right)}\right)$
 $\Theta \left(\mathrm{log}n\right)$
 $\Theta \left({n}^{{\mathrm{log}}_{5}\left(2\right)}\mathrm{log}n\right)$
 $\Theta \left({n}^{{\mathrm{log}}_{2}\left(5\right)}\mathrm{log}n\right)$
 the master theorem cannot be used
 $\Theta \left({n}^{{\mathrm{log}}_{2}\left(5\right)}\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n$?
 the master theorem cannot be used
 $\Theta \left({n}^{{\mathrm{log}}_{5}\left(2\right)}\right)$
 $\Theta \left({n}^{{\mathrm{log}}_{2}\left(5\right)}\mathrm{log}n\right)$
 $\Theta \left({n}^{{\mathrm{log}}_{2}\left(5\right)}\right)$
 $\Theta \left({n}^{{\mathrm{log}}_{5}\left(2\right)}\mathrm{log}n\right)$
 $\Theta \left(n\mathrm{log}n\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt[3]{5}}\right)+{n}^{3}$?
 $\Theta \left({n}^{{\mathrm{log}}_{5}\sqrt[3]{5}}\right)$
 $\Theta ({n}^{{\mathrm{log}}_{5}\sqrt[3]{5}}\times \mathrm{log}n)$
 $\Theta ({n}^{{\mathrm{log}}_{\sqrt[3]{5}}\left(5\right)}\times \mathrm{log}n)$
 the master theorem cannot be used
 $\Theta \left({n}^{3}\mathrm{log}n\right)$
 $\Theta \left({n}^{{\mathrm{log}}_{\sqrt[3]{5}}\left(5\right)}\right)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt[3]{5}}\right)+\genfrac{}{}{0.1ex}{}{{n}^{3}}{\mathrm{log}n}$?
 $\Theta \left({n}^{3}\mathrm{log}n\right)$
 $\Theta \left({n}^{{\mathrm{log}}_{5}\sqrt[3]{5}}\right)$
 $\Theta ({n}^{{\mathrm{log}}_{5}\sqrt[3]{5}}\times \mathrm{log}n)$
 $\Theta ({n}^{{\mathrm{log}}_{\sqrt[3]{5}}\left(5\right)}\times \mathrm{log}n)$
 $\Theta \left({n}^{{\mathrm{log}}_{\sqrt[3]{5}}\left(5\right)}\right)$
 the master theorem cannot be used

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{5}}\right)+{n}^{2}\mathrm{log}n$?
 the master theorem cannot be used
 $\Theta \left({n}^{2}\mathrm{log}n\right)$
 $\Theta ({n}^{{\mathrm{log}}_{5}\sqrt{5}}\times \mathrm{log}n)$
 $\Theta \left({n}^{{\mathrm{log}}_{5}\sqrt{5}}\right)$
 $\Theta \left({n}^{{\mathrm{log}}_{\sqrt{5}}\left(5\right)}\right)$
 $\Theta ({n}^{{\mathrm{log}}_{\sqrt{5}}\left(5\right)}\times \mathrm{log}n)$

Using the master recurrence theorem, what is the solution of
$T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{5}}\right)+n\mathrm{log}n$?
 $\Theta \left({n}^{2+{\mathrm{log}}_{5}\sqrt{5}}\right)$
 $\Theta ({n}^{{\mathrm{log}}_{5}\sqrt{5}}\times \mathrm{log}n)$
 the master theorem cannot be used
 $\Theta \left({n}^{2}\right)$
 $\Theta \left({n}^{2+{\mathrm{log}}_{\sqrt{5}}\left(5\right)}\right)$
 $\Theta ({n}^{{\mathrm{log}}_{\sqrt{5}}\left(5\right)}\times \mathrm{log}n)$
 $\Theta \left(n\mathrm{log}n\right)$
Determining $\epsilon $ in the M.R.T.

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+\mathrm{log}n$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
 no legal value is listed
 $\epsilon $ is not used in the solution
 the master theorem cannot be used
 0.5
 0.25
 0.1
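A reminder of where $\epsilon $ enters (standard statement; a sketch, not necessarily this course's exact wording): cases 1 and 3 require a *polynomial* gap between $f(n)$ and ${n}^{{\mathrm{log}}_{b}a}$, and $\epsilon $ measures that gap. Case 2 does not use $\epsilon $ at all.

```latex
% Case 1 requires:  f(n) = O\left(n^{\log_b a - \epsilon}\right) \text{ for some } \epsilon > 0
% Case 3 requires:  f(n) = \Omega\left(n^{\log_b a + \epsilon}\right) \text{ for some } \epsilon > 0
% When f(n) = \Theta(n^k), any \epsilon with 0 < \epsilon \le \left|\log_b a - k\right| is legal,
% so the answer is the largest listed value not exceeding that gap.
```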

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{2}$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 2) of 3 is ~1.585
and the log
(base 3) of 2 is ~0.631.
 no legal value is listed
 0.25
 0.1
 the master theorem cannot be used
 $\epsilon $ is not used in the solution
 0.5

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{2}$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 4) of 2 is 0.5
and the log
(base 2) of 4 is 2.
 the master theorem cannot be used
 $\epsilon $ is not used in the solution
 0.25
 no legal value is listed
 0.5
 1.0

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{2}^{n}$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
 0.5
 1.0
 0.1
 the master theorem cannot be used
 $\epsilon $ is not used in the solution
 no legal value is listed

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)={2}^{n}T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{n}$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
 1.0
 no legal value is listed
 0.5
 the master theorem cannot be used
 $\epsilon $ is not used in the solution
 0.1

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=16T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+n$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 16) of 4 is 0.5
and log
(base 4) of 16 is 2.
 0.25
 0.75
 no legal value is listed
 $\epsilon $ is not used in the solution
 the master theorem cannot be used
 0.5

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n\mathrm{log}n$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
 $\epsilon $ is not used in the solution
 no legal value is listed
 0.75
 0.5
 the master theorem cannot be used
 1.0

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n\mathrm{log}n$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 2) of 3 is ~1.585
and the log
(base 3) of 2 is ~0.631.
 $\epsilon $ is not used in the solution
 0.25
 no legal value is listed
 the master theorem cannot be used
 0.5
 1.0

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=8T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 8) of 2 is ~0.333
and the log
(base 2) of 8 is 3.
 0.025
 the master theorem cannot be used
 no legal value is listed
 $\epsilon $ is not used in the solution
 0.100
 0.05

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=2T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+{n}^{0.51}$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 4) of 2 is 0.5 and log
(base 2) of 4 is 2.
 0.075
 the master theorem cannot be used
 $\epsilon $ is not used in the solution
 no legal value is listed
 0.100
 0.050

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=\genfrac{}{}{0.1ex}{}{1}{2}T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+1$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
 the master theorem cannot be used
 0.05
 $\epsilon $ is not used in the solution
 0.10
 0.01
 no legal value is listed

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=16T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+n!$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 16) of 4 is 0.5
and the log
(base 4) of 16 is 2.
 1
 $\epsilon $ is not used in the solution
 100
 no legal value is listed
 10
 the master theorem cannot be used

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=\sqrt{2}T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 2) of $\sqrt{2}$ is 0.5
and the log
(base $\sqrt{2}$) of 2 is 2.
 1.0
 0.50
 0.25
 no legal value is listed
 $\epsilon $ is not used in the solution
 the master theorem cannot be used

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 3) of 2 is ~0.631
and the log
(base 2) of 3 is ~1.585.
 0.25
 $\epsilon $ is not used in the solution
 1.0
 0.10
 the master theorem cannot be used
 no legal value is listed

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+\sqrt{n}$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
 0.3
 the master theorem cannot be used
 $\epsilon $ is not used in the solution
 0.6
 0.9
 no legal value is listed

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 2) of 4 is 2
and the log
(base 4) of 2 is 0.5.
 $\epsilon $ is not used in the solution
 0.9
 0.3
 0.6
 no legal value is listed
 the master theorem cannot be used

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{4}\right)+n\mathrm{log}n$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 3) of 4 is ~1.262
and the log
(base 4) of 3 is ~0.792.
 the master theorem cannot be used
 0.6
 $\epsilon $ is not used in the solution
 0.9
 0.3
 no legal value is listed

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=3T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+\genfrac{}{}{0.1ex}{}{n}{2}$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
 the master theorem cannot be used
 $\epsilon $ is not used in the solution
 0.6
 0.9
 no legal value is listed
 0.3

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=6T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+{n}^{2}\mathrm{log}n$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 3) of 6 is ~1.631
and the log
(base 6) of 3 is ~0.613.
 0.4
 0.3
 no legal value is listed
 0.5
 $\epsilon $ is not used in the solution
 the master theorem cannot be used

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\genfrac{}{}{0.1ex}{}{n}{\mathrm{log}n}$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 4) of 2 is 0.5
and the log
(base 2) of 4 is 2.
 no legal value is listed
 the master theorem cannot be used
 1.0
 0.5
 $\epsilon $ is not used in the solution
 0.25

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=64T\left(\genfrac{}{}{0.1ex}{}{n}{8}\right)+\genfrac{}{}{0.1ex}{}{n}{\mathrm{log}n}$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 8) of 64 is 2
and the log
(base 64) of 8 is 0.5.
 0.5
 0.25
 $\epsilon $ is not used in the solution
 1.0
 no legal value is listed
 the master theorem cannot be used

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=7T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{7}}\right)+{n}^{2}$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 7) of $\sqrt{7}$ is 0.5
and the log
(base $\sqrt{7}$) of 7 is 2.
 0.25
 $\epsilon $ is not used in the solution
 0.5
 no legal value is listed
 1.0
 the master theorem cannot be used

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=7T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+{n}^{2}$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 2) of 7 is ~2.807
and the log
(base 7) of 2 is ~0.356.
 0.8
 1.0
 no legal value is listed
 $\epsilon $ is not used in the solution
 0.6
 the master theorem cannot be used

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=7T\left(\genfrac{}{}{0.1ex}{}{n}{3}\right)+{n}^{2}$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 3) of 7 is ~1.771
and the log
(base 7) of 3 is ~0.565.
 the master theorem cannot be used
 0.6
 0.8
 1.0
 $\epsilon $ is not used in the solution
 no legal value is listed

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=4T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 2) of 4 is 2
and the log
(base 4) of 2 is 0.5.
 0.5
 $\epsilon $ is not used in the solution
 1.0
 the master theorem cannot be used
 no legal value is listed
 2.0

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+\mathrm{log}n$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 5) of 2 is ~0.431
and the log
(base 2) of 5 is ~2.322.
 1.0
 $\epsilon $ is not used in the solution
 2.0
 the master theorem cannot be used
 0.5
 no legal value is listed

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{2}\right)+n$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 2) of 5 is ~2.322
and the log
(base 5) of 2 is ~0.431.
 1.0
 the master theorem cannot be used
 no legal value is listed
 2.0
 $\epsilon $ is not used in the solution
 0.5

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt[3]{5}}\right)+{n}^{3}$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 5) of $\sqrt[3]{5}$ is ~0.333
and the log
(base $\sqrt[3]{5}$) of 5 is 3.
 1.0
 the master theorem cannot be used
 no legal value is listed
 2.0
 0.5
 $\epsilon $ is not used in the solution

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt[3]{5}}\right)+\genfrac{}{}{0.1ex}{}{{n}^{3}}{\mathrm{log}n}$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base $\sqrt[3]{5}$) of 5 is 3
and the log
(base 5) of $\sqrt[3]{5}$ is ~0.333.
 1.0
 0.5
 no legal value is listed
 2.0
 the master theorem cannot be used
 $\epsilon $ is not used in the solution

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{5}}\right)+{n}^{2}\mathrm{log}n$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base 5) of $\sqrt{5}$ is 0.5
and the log
(base $\sqrt{5}$) of 5 is 2.
 1.0
 the master theorem cannot be used
 $\epsilon $ is not used in the solution
 2.0
 0.5
 no legal value is listed

Consider using the master recurrence theorem to solve the equation
$T\left(n\right)=5T\left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{5}}\right)+n\mathrm{log}n$.
What is the largest listed value of $\epsilon $ that can be used in the solution?
The log
(base $\sqrt{5}$) of 5 is 2
and the log
(base 5) of $\sqrt{5}$ is 0.5.
 no legal value is listed
 2.0
 1.0
 0.5
 $\epsilon $ is not used in the solution
 the master theorem cannot be used
Determining c for M.R.T., case 3

Consider using the master recurrence theorem and the regularity condition
for case 3 to solve the equation
$T\left(n\right)=8T\left(\genfrac{}{}{0.1ex}{}{n}{8}\right)+{n}^{2}$.
What is the smallest legal value
of the constant c listed for the solution?
 no legal value is listed
 case 3 does not apply
 $\genfrac{}{}{0.1ex}{}{1}{9}$
 $\genfrac{}{}{0.1ex}{}{1}{5}$
 $\genfrac{}{}{0.1ex}{}{1}{7}$
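For these questions, recall that case 3 of the master theorem carries an extra regularity condition constraining the constant c (standard statement; the worked form below is a sketch under the assumption that the driving function is a pure power $f(n)={n}^{k}$):

```latex
a\,f\!\left(\tfrac{n}{b}\right) \le c\,f(n)
  \text{ for some constant } c < 1 \text{ and all sufficiently large } n.
% For f(n) = n^k:
a\,\frac{n^k}{b^k} \le c\,n^k
  \;\Longleftrightarrow\;
  c \ge \frac{a}{b^k},
\qquad \text{so the legal values of } c \text{ are } \frac{a}{b^k} \le c < 1.
```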

Consider using the master recurrence theorem and the regularity condition
for case 3 to solve the equation
$T\left(n\right)=7T\left(\genfrac{}{}{0.1ex}{}{n}{8}\right)+{n}^{2}$.
What is the smallest legal value
of the constant c listed for the solution?
 no legal value is listed
 case 3 does not apply
 $\genfrac{}{}{0.1ex}{}{9}{64}$
 $\genfrac{}{}{0.1ex}{}{7}{64}$
 $\genfrac{}{}{0.1ex}{}{11}{64}$

Consider using the master recurrence theorem and the regularity condition
for case 3 to solve the equation
$T\left(n\right)=6T\left(\genfrac{}{}{0.1ex}{}{n}{8}\right)+{n}^{2}$.
What is the smallest legal value
of the constant c listed for the solution?
 case 3 does not apply
 $\genfrac{}{}{0.1ex}{}{5}{64}$
 no legal value is listed
 $\genfrac{}{}{0.1ex}{}{3}{64}$
 $\genfrac{}{}{0.1ex}{}{1}{64}$

Consider using the master recurrence theorem and the regularity condition
for case 3 to solve the equation
$T\left(n\right)=64T\left(\genfrac{}{}{0.1ex}{}{n}{8}\right)+{n}^{2}$.
What is the smallest legal value
of the constant c listed for the solution?
 no legal value is listed
 $\genfrac{}{}{0.1ex}{}{63}{64}$
 $\genfrac{}{}{0.1ex}{}{61}{64}$
 case 3 does not apply
 $\genfrac{}{}{0.1ex}{}{65}{64}$

Consider using the master recurrence theorem and the regularity condition
for case 3 to solve the equation
$T\left(n\right)=32T\left(\genfrac{}{}{0.1ex}{}{n}{8}\right)+{n}^{2}$.
What is the smallest legal value
of the constant c listed for the solution?
 $\genfrac{}{}{0.1ex}{}{63}{64}$
 no legal value is listed
 case 3 does not apply
 $\genfrac{}{}{0.1ex}{}{8}{64}$
 $\genfrac{}{}{0.1ex}{}{16}{64}$
Inductive Proofs
Here is a recursive version of mergesort, which works for all
arrays with size one or greater:
mergeSort(array, start, end)
{
if (end - start < 2) return;
m = (start + end) / 2;
mergeSort(array, start, m);
mergeSort(array, m, end);
merge(array, start, m, end);
}
Assume that the recurrence for merge sort is described as:
$T\left({2}^{0}\right)=1$
$T\left({2}^{n}\right)=2T\left({2}^{n-1}\right)+{2}^{n}$
Consider a proof, by induction, that $T\left({2}^{n}\right)=(n+1){2}^{n}$
where ${2}^{n}$ is the size of the array and b is the base case value of n.
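As a sanity check (not itself one of the proof steps asked about below), the claimed closed form does satisfy both the base case and the recurrence:

```latex
T(2^0) = (0+1)\,2^0 = 1,
\qquad
2\,T(2^{n-1}) + 2^n
  = 2\,n\,2^{n-1} + 2^n
  = n\,2^n + 2^n
  = (n+1)\,2^n.
```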

To completely prove the base case value b, you would need to show:
 $2T\left({2}^{b-1}\right)={2}^{b}$
 $b{2}^{b}={2}^{b+1}$
 $2T\left({2}^{b-1}\right)=2$
 $b{2}^{b}=2$
 $2T\left({2}^{b-1}\right)+{2}^{b}=(b+1){2}^{b}$
 $T\left({2}^{b}\right)=4$

Assuming b is the largest base case value and
you wish to prove the inductive case n,
an inductive hypothesis for a strong inductive proof would be:
 Assume true for $i<n$.
 Assume true for $i\le n$.
 Assume true for $n$.
 Assume true for $b<i<n$.
 Assume true for $b<i\le n$.
 Assume true for $n+1$.

Assuming b is the largest base case value and
you wish to prove the inductive case n+1,
an inductive hypothesis for a weak inductive proof would be:
 Assume true for $n+1$.
 Assume true for $n-1$.
 Assume true for $n+2$.
 Assume true for $n$.

Assuming you wish to prove the inductive case n,
the first line of the inductive case for a strong inductive
proof would be:
 $T\left({2}^{n-1}\right)=2T\left({2}^{(n-1)-1}\right)+{2}^{n-1}$
 $T\left({2}^{n+1}\right)=2T\left({2}^{(n+1)-1}\right)+{2}^{n+1}$
 $T\left({2}^{n}\right)=(n+1){2}^{n}$
 $T\left({2}^{n}\right)=2T\left({2}^{n-1}\right)+{2}^{n}$
 $T\left({2}^{n-1}\right)=\left(\left(n-1\right)+1\right){2}^{n-1}$
 $T\left({2}^{n+1}\right)=\left(\left(n+1\right)+1\right){2}^{n+1}$

Assuming you wish to prove the inductive case n+1,
the first line of the inductive case for a weak inductive
proof would be:
 $T\left({2}^{n}\right)=2T\left({2}^{n-1}\right)+{2}^{n}$
 $T\left({2}^{n+1}\right)=2T\left({2}^{(n+1)-1}\right)+{2}^{n+1}$
 $T\left({2}^{n+1}\right)=\left(\left(n+1\right)+1\right){2}^{n+1}$
 $T\left({2}^{n-1}\right)=\left(\left(n-1\right)+1\right){2}^{n-1}$
 $T\left({2}^{n-1}\right)=2T\left({2}^{(n-1)-1}\right)+{2}^{n-1}$
 $T\left({2}^{n}\right)=(n+1){2}^{n}$

Assuming you wish to prove the inductive case n,
the inductive step for a strong inductive proof would be:
 substitute $T\left({2}^{(n+1)-1}\right)$ for $(n+1){2}^{n}$
 substitute $n{2}^{n-1}$ for $T\left({2}^{n-1}\right)$
 substitute $(n+1){2}^{n}$ for $T\left({2}^{(n+1)-1}\right)$
 substitute $T\left({2}^{n-1}\right)$ for $n{2}^{n-1}$
 substitute $T\left({2}^{(n-1)-1}\right)$ for $(n-1){2}^{n-2}$
 substitute $(n-1){2}^{n-2}$ for $T\left({2}^{(n-1)-1}\right)$

Assuming you wish to prove the inductive case n+1,
the inductive step for a weak inductive proof would be:
 substitute $(n+1){2}^{n}$ for $T\left({2}^{(n+1)-1}\right)$
 substitute $T\left({2}^{n-1}\right)$ for $n{2}^{n-1}$
 substitute $n{2}^{n-1}$ for $T\left({2}^{n-1}\right)$
 substitute $T\left({2}^{(n-1)-1}\right)$ for $(n-1){2}^{n-2}$
 substitute $(n-1){2}^{n-2}$ for $T\left({2}^{(n-1)-1}\right)$
 substitute $T\left({2}^{(n+1)-1}\right)$ for $(n+1){2}^{n}$

T or F:
For strong inductive proofs, proving the inductive case
makes use of the previously proved base cases.

T or F:
Weak inductive proofs are more closely related to
recursive functions than strong inductive proofs.

The more formal names for weak and strong inductive
proofs are:
 simple and complex
 complex and simple
 mathematical and complete
 mathematical and simple
 simple and mathematical
 complete and mathematical
Concept: inductive proofs

(6 pts) Prove by strong induction, with n as the inductive case, that
$\displaystyle \sum _{k=1}^{n}k=\genfrac{}{}{0.1ex}{}{n(n+1)}{2}$.
Base case:
Inductive Hypothesis:
Inductive Case:
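A sketch of the shape such an answer takes, using this first identity (assuming a base case of n = 1; the later questions follow the same template):

```latex
\text{Base case: } \sum_{k=1}^{1} k = 1 = \tfrac{1\cdot 2}{2}.
\qquad
\text{Inductive hypothesis: } \sum_{k=1}^{i} k = \tfrac{i(i+1)}{2}
  \text{ for all } 1 \le i < n.
% Inductive case: peel off the last term and apply the hypothesis at i = n-1:
\sum_{k=1}^{n} k
  = \left(\sum_{k=1}^{n-1} k\right) + n
  = \tfrac{(n-1)n}{2} + n
  = \tfrac{n^2 - n + 2n}{2}
  = \tfrac{n(n+1)}{2}.
```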

(6 pts) Prove by strong induction, with n as the inductive case, that
$T\left(n\right)=\genfrac{}{}{0.1ex}{}{n(n+1)}{2}$, where:
$T\left(0\right)=0$
$T\left(n\right)=T(n-1)+n$
Base case:
Inductive Hypothesis:
Inductive Case:

(6 pts) Prove by strong induction, with n as the inductive case, that
$\displaystyle \sum _{k=0}^{n}{k}^{2}=\genfrac{}{}{0.1ex}{}{n(n+1)(2n+1)}{6}$.
Base case:
Inductive Hypothesis:
Inductive Case:

(6 pts) Prove by strong induction, with n as the inductive case, that
$T\left(n\right)=\genfrac{}{}{0.1ex}{}{n(n+1)(2n+1)}{6}$, where:
$T\left(0\right)=0$
$T\left(n\right)=T(n-1)+{n}^{2}$
Base case:
Inductive Hypothesis:
Inductive Case:

(6 pts) Prove by strong induction that
$\displaystyle \sum _{k=0}^{n}{k}^{3}=\genfrac{}{}{0.1ex}{}{{n}^{2}{(n+1)}^{2}}{4}$.
Base case:
Inductive Hypothesis:
Inductive Case:

(6 pts) Prove by strong induction that
$T\left(n\right)=\genfrac{}{}{0.1ex}{}{{n}^{2}{(n+1)}^{2}}{4}$,
where:
$T\left(0\right)=0$
$T\left(n\right)=T(n-1)+{n}^{3}$
Base case:
Inductive Hypothesis:
Inductive Case:

(6 pts) Prove by strong induction that
$\displaystyle \sum _{k=0}^{n}{x}^{k}=\genfrac{}{}{0.1ex}{}{{x}^{n+1}-1}{x-1}$.
Base case:
Inductive Hypothesis:
Inductive Case:

(6 pts) Prove by strong induction that
$T(x,n)=\genfrac{}{}{0.1ex}{}{{x}^{n+1}-1}{x-1}$,
where:
$T\left(x,0\right)=1$
$T(x,n)=T(x,n-1)+{x}^{n}$
Base case:
Inductive Hypothesis:
Inductive Case:

(6 pts) Prove by strong induction that
the number of nodes in a perfect binary tree with $n$ levels is
${2}^{n}-1$.
Base case:
Inductive Hypothesis:
Inductive Case:

(6 pts) Prove by strong induction that
the number of nodes in a perfect quaternary tree with $n$ levels is
$\genfrac{}{}{0.1ex}{}{{4}^{n}-1}{3}$.
Base case:
Inductive Hypothesis:
Inductive Case:

(6 pts) Prove by strong induction that
the number of nodes in a perfect binary tree with $n$ levels is ${2}^{n}-1$.
Base case:
Inductive Hypothesis:
Inductive Case:

(6 pts) Prove by strong induction that
the number of interior nodes in a perfect binary tree with $n$ levels is
${2}^{n-1}-1$.
Base case:
Inductive Hypothesis:
Inductive Case:

(6 pts) Prove by strong induction that
the number of child pointers in an arbitrary binary tree
with $n$ nodes is $n-1$.
Base case:
Inductive Hypothesis:
Inductive Case:
Concept: rotations in a BST

Which of the following is true for rotations in a BST:
 the number of leaf nodes always decreases
 the number of leaf nodes always increases
 the tree always becomes more balanced
 BST-ordering is always preserved

Consider a right-rotation of a node n
upwards in a BST. The former right child of n, if it exists:
 becomes the right child of the former parent of n
 becomes the left child of the former parent of n
 becomes the niece or nephew of n
 becomes the sibling of n
 becomes the left child of n
 remains the right child of n

Consider a right-rotation of a node n
upwards in a BST. The former left child of n, if it exists:
 becomes the left child of the former parent of n
 remains the left child of n
 becomes the right child of n
 becomes the sibling of n
 becomes the right child of the former parent of n
 becomes the niece or nephew of n

Consider a right-rotation of a node n upwards
in a BST.
The former parent of n, assuming it exists:
 becomes the niece or nephew of n
 remains the parent of n
 becomes the sibling of n
 becomes the left child of n
 becomes the right child of n

Consider a right-rotation of a
node n upwards in a BST.
The former sibling of n, assuming it exists:
 becomes the niece or nephew of n
 becomes a grandchild of n
 remains the sibling of n
 becomes the left child of n
 becomes the right child of n
Concept: red-black trees

The number of rotations that occur
after an insertion into
a red-black tree is:
 $\Theta \left(n\mathrm{log}n\right)$
 $\Theta \left(\mathrm{log}n\right)$
 $\Theta \left(1\right)$
 $\Theta \left(n\right)$

The maximum number of rotations that occur
after an insertion into
a red-black tree is:
 $2$
 3
 $\sim \mathrm{log}n$
 1

The maximum number of rotations that occur
after a deletion from
a red-black tree is:
 $\Theta \left(n\right)$
 $\Theta \left(n\mathrm{log}n\right)$
 $\Theta \left(\mathrm{log}n\right)$
 $\Theta \left(1\right)$

The maximum number of rotations that occur
after a deletion from
a red-black tree is:
 $3$
 $\sim \mathrm{log}n$
 $2$
 1

Consider a node n in a red-black tree and all paths
from n to a leaf.
Which of the following is a constraint on these trees?
 each path must start with a black node
 the number of nodes (red or black) on each path is the same
 the number of red nodes on each path is the same
 the number of black nodes on each path is the same

Consider a node n in a red-black tree and all paths
from n to a leaf.
Which of the following is a constraint on these trees?
 no black node can have a red parent
 no red node can have a red parent
 no red node can have a black parent
 no black node can have a black parent

Consider a black interior node n in a
red-black tree and any path
from n to a leaf.
Which of the following is a constraint on these trees,
where R is the number of red nodes and B is the
number of black nodes?
 $B\le R$
 $R\le B+1$
 $R\le B$
 $B\le R+1$
 $B<R$
 $R=B$

Consider a red node n in a red-black tree
and the length of
the shortest possible path from n to a leaf, S,
and the length of
the longest possible path from n to a leaf, L.
Which of the following is a constraint on these trees?
 $L=2S+1$
 $L=2S-2$
 $L=2S-1$
 $L=2S$
 $L=2S+2$

Inserting a value into a red-black tree
and a regular BST, respectively, takes time:
 $\Theta \left(\mathrm{log}n\right)$ and $\Theta \left(\mathrm{log}n\right)$
 $\Theta \left(1\right)$ and $\Theta \left(\mathrm{log}n\right)$
 $\Theta \left(n\right)$ and $\Theta \left(n\right)$
 $\Theta \left(\mathrm{log}n\right)$ and $\Theta \left(n\right)$

Suppose one wished to allow more red nodes in a red-black tree,
but still wished this new tree to have the same asymptotic behavior as
before. One could allow more red nodes on any path to a leaf
as long as:
 no red node could have a red sibling.
 no black node could have a red parent.
 the number of black nodes between any two red nodes is bounded by a constant.
 the number of red nodes between any two black nodes is bounded by a constant.

T or F:
A black node in a red-black tree can have one child.

T or F:
A red node in a red-black tree can have one child.

T or F:
A red node in a red-black tree can have a red parent.

T or F:
A black node in a redblack tree can have a black parent.

Choose an order of insertion for seven consecutive integers
such that a red-black tree performs no rotations for any of
the insertions.
 3 2 6 0 1 4 5
 3 2 6 1 0 5 4
 3 2 5 1 0 4 6
 3 1 5 2 0 4 6
 3 2 5 0 1 4 6

Choose an order of insertion for seven consecutive integers
such that a red-black tree performs one rotation for one of the
insertions and no rotations for the other insertions.
 3 2 6 0 1 4 5
 3 2 5 1 0 4 6
 3 2 5 0 1 4 6
 3 1 5 2 0 4 6
 3 2 6 1 0 5 4

Choose an order of insertion for seven consecutive integers
such that a red-black tree performs two rotations for one of the
insertions and no rotations for the other insertions.
 3 2 5 1 0 4 6
 3 2 5 0 1 4 6
 3 2 6 0 1 4 5
 3 2 6 1 0 5 4
 3 1 5 2 0 4 6

Choose an order of insertion for seven consecutive integers
such that a red-black tree performs one rotation for two of the
insertions and no rotations for the other insertions.
 3 2 5 0 1 4 6
 3 1 5 2 0 4 6
 3 2 5 1 0 4 6
 3 2 6 1 0 5 4
 3 2 6 0 1 4 5

Choose an order of insertion for seven consecutive integers
such that a red-black tree performs no rotations
for any of the insertions and yields the most unbalanced
tree.

T or F:
Inserting the following numbers, in the order given,
into an empty BST:
0 4 3 8 1 2 6 5 9 7
yields a tree whose shape is consistent with a red-black tree.

Consider inserting the following numbers, in the order given,
into an empty BST and then coloring the root black and the
other nodes such that
no red node has a red parent:
0 4 3 8 1 2 6 5 9 7
What is the minimum / maximum number of red nodes possible?
 0 / 3
 the correct answer is not listed
 0 / 4
 3 / 5
 0 / 5
 3 / 4
 0 / 6
 3 / 3

Consider a node with a single child in a red-black tree. If that node has
a sibling and you wish to maximize the number of descendants the sibling
has, what
color is the sibling and how many descendants does it have?
Do not include the null children.
Choose the best answer.
 black / 6
 either black or red / 2
 either black or red / 4
 red / 2
 either black or red / 5
 the correct answer is not listed
 black / 2
 red / 6

Consider inserting the following numbers, in the order given,
into an empty red-black tree:
5 3 8 1 7 6 4
How many rotations and how many node recolorings are performed?
Don't forget what happens on the initial insertion.
 1 / 6
 the correct answer is not listed
 0 / 5
 1 / 7
 0 / 7
 0 / 6
 1 / 5

Consider inserting the following numbers, in the order given,
into an empty BST:
0 4 3 8 1 2 6 5 9 7
What is the minimum number of rotations
that would yield a tree with a shape
consistent with a red-black tree?
 1
 3
 0
 the correct answer is not listed
 5
 2
 4

Consider inserting the following numbers, in the order given,
into an empty red-black tree:
0 4 3 8 1 2 6 5 9 7
After inserting which value does the red-black tree make its first rotation?
 0
 6
 1
 3
 4
 8
 the correct answer is not listed
 2

Consider inserting the following numbers, in the order given,
into an empty red-black tree:
0 4 3 8 1 2 6 5 9 7
Which values, when inserted, cause a rotation?
 none
 2 6 7
 3 2 7
 the correct answer is not listed
 6 7
 3 2 6

Consider inserting the following numbers, in the order given,
into an empty red-black tree:
0 4 3 8 1 2 6 5 9 7
Which values, when inserted, cause a double rotation?
 2 7
 7
 the correct answer is not listed
 2 6
 3 6
 6
 3 7
Concept: AVL trees

Consider a node n in an AVL tree
and the height of the left subtree of n, LH
and the height of the right subtree of n, RH.
Assuming $LH>RH$,
which of the following is a constraint on these trees?
 $LH-RH=1$
 $LH-RH<3$
 $LH-RH=0$
 $LH-RH<2$
 $LH-RH=2$

The number of rotations that occur after an insertion into
an AVL tree is, in the worst case:
 $\Theta \left(n\mathrm{log}n\right)$
 $\Theta \left(\mathrm{log}n\right)$
 $\Theta \left(1\right)$
 $\Theta \left(n\right)$
 $\Theta \left(\mathrm{log}\phantom{\rule{0.278em}{0ex}}\mathrm{log}n\right)$

The number of rotations that occur after an insertion into
an AVL tree is, in the worst case:
 1
 $\Theta \left(n\right)$
 2
 $\Theta \left(\mathrm{log}n\right)$

The number of rotations that occur after a deletion from
an AVL tree is, in the worst case:
 $\Theta \left(n\mathrm{log}n\right)$
 $\Theta \left(\mathrm{log}\phantom{\rule{0.278em}{0ex}}\mathrm{log}n\right)$
 $\Theta \left(\mathrm{log}n\right)$
 $\Theta \left(1\right)$
 $\Theta \left(n\right)$

The number of rotations that occur after a deletion from
an AVL tree is, in the worst case:
 1
 $\Theta \left(n\right)$
 2
 $\Theta \left(\mathrm{log}n\right)$

The minimum number of rotations that occur after a deletion from
an AVL tree is:
 $\Theta \left(n\right)$
 1
 0
 $\Theta \left(\mathrm{log}n\right)$
 2

T or F:
Inserting the following numbers, in the order given,
into an empty BST:
0 4 3 8 1 2 6 5 9 7
yields a tree whose shape is consistent with an AVL tree.

Consider inserting the following numbers, in the order given,
into an empty BST and then computing the balance factors of
each node:
0 4 3 8 1 2 6 5 9 7
What nodes are out of balance, with respect to AVL balance factors?
 the correct answer is not listed
 0 3
 2 5 6 7 9
 0 1 2 3
 0 1 3 8
 0 4 3
 0 3 8

Consider a node with a single child in an AVL tree. If that node has
a sibling, what is the
least / most number of descendants the sibling can have?
 1 / 4
 1 / 5
 1 / 7
 0 / 7
 0 / 6
 0 / 4
 1 / 6
 0 / 5
 the correct answer is not listed

Consider inserting the following numbers, in the order given,
into an empty BST and then computing the balance factors of
each node:
0 4 3 8 1 2 6 5 9 7
Performing a level order traversal of the
tree in which balance factors are displayed,
which of the following sequences of balance factors
is consistent with the above tree?
Assume the height of a null child is zero.
Do not include the null children in the output.
 1 0 1 1 1 0 0 0 0 0
 the correct answer is not listed
 -4 0 2 1 -1 0 0 0 0 0
 1 0 1 1 1 0 0 0 0 0
 9 2 2 2 1 0 0 0 0 0

Consider inserting the following numbers, in the order given,
into an empty BST and then computing the heights of
each node:
0 4 3 8 1 2 6 5 9 7
Performing a level order traversal of the
tree in which node heights are displayed,
which of the following sequences of heights
is consistent with the above tree?
Assume the height of a null child is zero.
Do not include the null children in the output.
 5 5 4 4 3 3 2 2 1 1
 5 4 3 3 2 2 2 1 1 1
 4 4 3 3 2 2 1 1 0 0
 the correct answer is not listed
 4 3 2 2 1 1 1 0 0 0

Consider inserting the following numbers, in the order given,
into an empty BST:
0 4 3 8 1 2 6 5 9 7
What is the minimum number of rotations
that would yield a tree with balance factors
consistent with an AVL tree?
 5
 2
 the correct answer is not listed
 3
 4
 1
 0

Consider inserting the following numbers, in the order given,
into an empty AVL tree:
0 4 3 8 1 2 6 5 9 7
Which insertion value causes the AVL tree's first rotation?
 the correct answer is not listed
 2
 1
 7
 8
 5
 6
 3
 9

Consider inserting the following numbers, in the order given,
into an empty AVL tree:
0 4 3 8 1 2 6 5 9 7
Which values, when inserted, cause rotations?
 the correct answer is not listed
 7 6
 7 2 6
 3 7 2
 3 7 2 6

Consider inserting the following numbers, in the order given,
into an empty AVL tree:
0 4 3 8 1 2 6 5 9 7
Which values, when inserted, cause a double rotation?
 the correct answer is not listed
 2
 2 6
 7
 6
 3
 3 7

Choose an order of insertion for seven consecutive integers
such that an AVL tree performs no rotations for any of
the insertions.

Choose an order of insertion for seven consecutive integers
such that an AVL tree performs one rotation for one of the
insertions and no rotations for the other insertions.

Choose an order of insertion for seven consecutive integers
such that an AVL tree performs two rotations for one of the
insertions and no rotations for the other insertions.

Choose an order of insertion for seven consecutive integers
such that an AVL tree performs one rotation each for two of the
insertions and no rotations for the other insertions.

Choose an order of insertion for seven consecutive integers
such that an AVL tree performs no rotations
for any of the insertions and yields the most unbalanced
tree.
Assume min-heaps unless otherwise directed.
Binomial heaps

After 7 consecutive inserts into an empty binomial heap, how many
subheaps are in the root list?
 2
 3
 6
 4
 7
 8
 5
 1

After 9 insertions into an empty binomial heap,
how many subheaps are there?
 3
 8
 4
 9
 7
 6
 5
 2

T or F:
Consider a sequence of n insertions into an empty binomial heap,
followed by a single extractMin operation.
At this point, the number of
subheaps in the root list can be calculated from n.

Consider inserting the following values, in the order given:
3 2 9 5 6 4 1 0
into an empty binomial heap. The value 0 is found in a subheap whose
root has value:
 4
 1
 3
 0
 6
 7
 5
 2

Consider inserting the consecutive integers from 0 to 12, inclusive
and in increasing order,
into an empty binomial heap. After 3 extractMin operations,
the value 12 can be found in the subheap whose
root has value:
 5
 0
 3
 2
 4
 6
 7
 1

Consider inserting the consecutive integers from 0 to 12, inclusive
and in increasing order,
into an empty binomial heap. After deleting the
value 5,
the value 12 can be found in the subheap whose
root has value:
 7
 2
 0
 1
 6
 4
 5
 3

One expects to insert a value into
a binomial heap in time that is:
 log
 linear
 constant
 log linear
 log log

Consider inserting the consecutive integers from 0 to 12, inclusive
and not necessarily in order,
into an empty binomial heap. What is the largest root value possible
after all values have been inserted?
Fibonacci heaps
Assume for Fibonacci heaps that newly inserted values are inserted
at the beginning (leftmost) of the root list and that consolidation
runs from left to right. Assume that in the union of
heaps A and B, the root list of B is
appended to the root list of A.
Assume that extractMin appends the child list of the extracted value
to the root list.
Assume that after consolidation, the root list is ordered with lower
degree subheaps to the left of higher degree subheaps.

After 7 consecutive inserts into an empty Fibonacci heap, how many
subheaps are in the root list?
 2
 5
 4
 3
 8
 6
 7
 1

T or F:
Consider a sequence of n insertions into an empty Fibonacci heap,
followed by a single extractMin operation.
At this point, the number of
subheaps in the root list can be calculated from n.

T or F:
Consider a sequence of n insertions into an empty Fibonacci heap,
followed by two consecutive extractMin operations.
At this point, the number of
subheaps in the root list can be calculated from n.

Consider inserting the following values, in the order given:
3 2 9 5 6 4 1 0
into an empty Fibonacci heap. The value 0 is found in a subheap whose
root has value:
 5
 3
 6
 7
 2
 4
 1
 0

Consider inserting the consecutive integers from 0 to 12, inclusive
and in increasing order,
into an empty Fibonacci heap. After 3 extractMin operations,
the value 12 can be found in the subheap whose
root has value:
 6
 2
 4
 5
 3
 1
 0
 7

Consider inserting the consecutive integers from 0 to 12, inclusive
and in increasing order,
into an empty Fibonacci heap. After deleting the
value 5,
the value 12 can be found in the subheap whose
root has value:
 3
 6
 5
 2
 7
 4
 1
 0

Consider inserting the consecutive integers from 0 to 12
into an empty Fibonacci heap. After deleting the
value 5,
which node is marked?
 0
 3
 2
 1
 5
 no node is marked
 the answer is not given
 4

One expects to find a value in
a Fibonacci heap in amortized:
 $\Theta \left(\mathrm{log}n\right)$ time
 $\Theta \left(1\right)$ time
 $\Theta (\mathrm{log}n\times \mathrm{log}n)$ time
 $\Theta \left(\mathrm{log}\phantom{\rule{0.278em}{0ex}}\mathrm{log}n\right)$ time
 $\Theta \left(n\mathrm{log}n\right)$ time
 $\Theta \left(n\right)$ time

One expects to find a value in
a Fibonacci heap in the worst case in:
 $\Theta \left(1\right)$ time
 $\Theta \left(\mathrm{log}\phantom{\rule{0.278em}{0ex}}\mathrm{log}n\right)$ time
 $\Theta \left(n\right)$ time
 $\Theta \left(n\mathrm{log}n\right)$ time
 $\Theta \left(\mathrm{log}n\right)$ time
 $\Theta (\mathrm{log}n\times \mathrm{log}n)$ time

Consider this set of operations: 15 inserts and one extraction of the
minimum (in any order). What is the fewest / greatest number of subheaps found
after the set is performed on an initially empty Fibonacci heap?
 4 / 14
 4 / 12
 3 / 12
 3 / 15
 4 / 13
 3 / 13
 3 / 14
 4 / 15
When cutting a node out of a Fibonacci subheap,
the degree of the parent is decremented. In addition,
the parent of the node (unless it is a root) is
either marked, if it hasn't been previously marked, or
cut, if it has.
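The cut-and-mark rule just described can be sketched in Python. This is an illustrative simplification, not a full Fibonacci heap: the `Node` class, its `children` list, and the explicit `root_list` parameter are assumed details.

```python
class Node:
    """A minimal Fibonacci-heap node: only the fields the cut rule touches."""
    def __init__(self, key):
        self.key = key
        self.parent = None
        self.children = []   # child list; its length is the node's degree
        self.marked = False

def cut(node, root_list):
    """Detach node from its parent and move it to the root list."""
    parent = node.parent
    if parent is not None:
        parent.children.remove(node)   # the parent's degree is decremented
        node.parent = None
    node.marked = False                # roots are never marked
    root_list.append(node)
    return parent

def cascading_cut(node, root_list):
    """Cut node; then an unmarked (non-root) parent is marked, a marked one is cut too."""
    parent = cut(node, root_list)
    while parent is not None and parent.parent is not None:
        if not parent.marked:
            parent.marked = True
            break
        parent = cut(parent, root_list)
```

Cutting the bottom of a chain whose ancestors are all marked cascades upward until an unmarked node or a root is reached, which is what bounds the amortized cost of decrease-key.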

What is the greatest number of nodes that can be cut in a single cascade on
a subheap of degree 4?
 1
 5
 4
 6
 2
 7
 8
 3

What is the greatest number of marked nodes that can exist in a subheap
of degree 4?
 2
 4
 3
 5
 15
 8
 16
 7

What is the greatest number of nodes that can be cut from a Fibonacci
subheap of
degree 4 such that the degree of the root is not reduced?
 10
 6
 3
 5
 9
 7
 4
 8

What is the greatest number of nodes that can be cut from a Fibonacci
subheap of
degree 4 such that the degree of the root is reduced by exactly one?
 9
 7
 13
 12
 11
 8
 14
 10
Priority Queues

Consider a priority queue (max) based upon a singly linked list.
What is the time complexity of the insert operation?
 log linear
 logarithmic
 linear
 constant

Consider a priority queue (max) based upon a normal heap.
What is the time complexity of the insert operation?
 log linear
 constant
 logarithmic
 linear

Consider a priority queue (max) based upon a singly linked list.
What is the time complexity of the increasekey operation?
 linear
 constant
 log linear
 logarithmic

Consider a priority queue (max) based upon a normal heap.
What is the time complexity of the increasekey operation?
 logarithmic
 log linear
 linear
 constant

Consider a priority queue (max) based upon a singly linked list.
What is the time complexity of the decreasekey operation?
Assume the linked list is sorted.
 logarithmic
 constant
 linear
 log linear

Consider a priority queue based upon a normal heap.
What is the time complexity of the decreasekey operation?
 log linear
 linear
 constant
 logarithmic
Concept: disjoint sets as linked lists
The following questions assume a linked-list implementation of a disjoint set.
Assume each value has a pointer to
its representative. Assume worst case behavior.
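The representative-pointer scheme described above can be sketched as follows. The `rep` and `members` dictionaries are illustrative stand-ins for the per-value representative pointers and per-set member lists; they are assumptions of this sketch, not part of the questions.

```python
# rep[v] is v's representative; members[r] lists the values in r's set.
def make_set(v, rep, members):
    rep[v] = v
    members[v] = [v]

def find_set(v, rep):
    return rep[v]            # constant time: one pointer lookup

def union(a, b, rep, members):
    ra, rb = rep[a], rep[b]
    if ra == rb:
        return
    # Re-point every member of b's set at a's representative:
    # this walk is what makes a single union linear in the worst case.
    for v in members[rb]:
        rep[v] = ra
    members[ra].extend(members[rb])
    del members[rb]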

The makeset operation takes time:
 constant
 logarithmic
 linear
 log linear

The findset operation takes time:
 none of the other answers are correct
 constant
 log linear
 linear
 quadratic
 logarithmic

Assuming no preference on which representative becomes
the representative of the resulting set,
the union operation takes time:
 constant
 log linear
 quadratic
 logarithmic
 linear
 none of the other answers are correct

Assuming the representative of the smaller set becomes
the representative of the resulting set,
the union operation takes time:
 quadratic
 log linear
 none of the other answers are correct
 linear
 constant
 logarithmic

Assuming the representative of the larger set
becomes
the representative of the resulting set,
the union operation takes time:
 linear
 none of the other answers are correct
 quadratic
 constant
 logarithmic
 log linear

Assuming no preference on which representative becomes
the representative of the resulting set,
the total work of a series of m disjoint set
operations is:
 quadratic
 log linear
 none of the other answers are correct
 logarithmic
 linear
 constant

Assuming the representative of the smaller set becomes
the representative of the resulting set,
the total work of a series of m disjoint set
operations is:
 linear
 none of the other answers are correct
 logarithmic
 quadratic
 constant
 log linear

Assuming the representative of the larger set becomes
the representative of the resulting set,
the total work of a series of m disjoint set
operations is:
 none of the other answers are correct
 quadratic
 constant
 linear
 log linear
 logarithmic

Assuming the representative of the larger set becomes
the representative of the resulting set,
how many times can a value's representative be updated
in a series of union operations?
 none of the other answers are correct
 logarithmic
 constant
 quadratic
 log linear
 linear

Assuming the representative of the smaller set becomes
the representative of the resulting set,
how many times can a value's representative be updated
in a series of union operations?
 quadratic
 none of the other answers are correct
 linear
 log linear
 constant
 logarithmic

Assuming no preference on which representative becomes
the representative of the resulting set,
how many times can a value's representative be updated
in a series of union operations?
 quadratic
 constant
 logarithmic
 linear
 log linear
 none of the other answers are correct
Concept: disjoint sets as trees
The following questions assume a tree implementation of a disjoint set.
Assume each value has a pointer to its parent with the root of
the tree serving as the
representative of the set.
Assume worst case behavior.

The makeset operation takes time:
 linear
 logarithmic
 constant
 log linear

The findset operation (no path compression and no union-by-rank)
takes time:
 quadratic
 constant
 linear
 log linear
 logarithmic
 none of the other answers are correct

The findset operation
(with path compression but no union-by-rank) takes time:
 none of the other answers are correct
 logarithmic
 constant
 quadratic
 log linear
 linear

The findset operation
(with no path compression but with union-by-rank) takes time:
 logarithmic
 none of the other answers are correct
 linear
 log linear
 constant
 quadratic

Assuming no preference on which representative becomes
the representative of the resulting set,
the union operation takes time:
 constant
 logarithmic
 quadratic
 none of the other answers are correct
 linear
 log linear

Assuming the representative whose tree has the smaller rank becomes
the representative of the resulting set,
the union operation takes time:
 logarithmic
 none of the other answers are correct
 linear
 log linear
 constant
 quadratic

Assuming the representative whose tree has the larger rank becomes
the representative of the resulting set,
the union operation takes time:
 linear
 constant
 none of the other answers are correct
 logarithmic
 log linear
 quadratic

Assuming no path compression and no union-by-rank,
the total work of a series of m disjoint set
operations is:
 log linear
 logarithmic
 linear
 quadratic
 none of the other answers are correct
 constant

Assuming path compression but no union-by-rank,
the total work of a series of m disjoint set
operations is:
 none of the other answers are correct
 logarithmic
 quadratic
 linear
 constant
 log linear

Assuming no path compression but union-by-rank,
the total work of a series of m disjoint set
operations is:
 log linear
 none of the other answers are correct
 logarithmic
 quadratic
 linear
 constant

Assuming path compression and union-by-rank,
the total work of a series of m disjoint set
operations is:
 quadratic
 none of the other answers are correct
 log linear
 logarithmic
 linear
 constant

Path compression is used to speed up the average cost
of which operation(s)?
 union
 union and findset
 makeset, findset, and union
 findset and makeset
 findset
 makeset
 none of the other answers are correct
 union and makeset

Union by rank is used to speed up the average cost
of which operation(s)?
 union and makeset
 makeset, findset, and union
 makeset
 union
 findset and makeset
 findset
 none of the other answers are correct
 union and findset

T or F:
A single findset operation with path compression
takes asymptotically longer
than a single findset operation without path compression,
in the worst case.

Suppose there are initially n disjoint sets. If m union
operations are performed, what is the fewest number of disjoint sets
that remain? Assume n is a power of two and n > m.
 $n-{2}^{m}$
 $n-m$
 m
 $n-2m$
For the following set of questions, consider the following set of
operations:
for each i in 0..9 do makeset(i)
union(1,2);
union(3,4);
union(5,6);
union(7,8);
union(9,0);
union(4,1);
union(4,6);
union(7,9);
findset(3);
findset(1);
assuming union-by-rank and path compression. When unioning two sets having the
same rank, assume the root with the larger value becomes the root of the
resulting set.
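The sequence above can be replayed with a sketch of union-by-rank and path compression; the larger-value tie-break follows the stated rule. Treat this as an illustration of the mechanics rather than an answer key.

```python
parent = {}
rank = {}

def makeset(v):
    parent[v] = v
    rank[v] = 0

def findset(v):
    # Path compression: after finding the root, point every node on the path at it.
    root = v
    while parent[root] != root:
        root = parent[root]
    while parent[v] != root:
        parent[v], v = root, parent[v]
    return root

def union(a, b):
    ra, rb = findset(a), findset(b)
    if ra == rb:
        return
    if rank[ra] < rank[rb]:
        parent[ra] = rb
    elif rank[rb] < rank[ra]:
        parent[rb] = ra
    else:
        # Equal ranks: the root with the larger value wins, per the stated rule.
        winner, loser = max(ra, rb), min(ra, rb)
        parent[loser] = winner
        rank[winner] += 1

for i in range(10):
    makeset(i)
for a, b in [(1, 2), (3, 4), (5, 6), (7, 8), (9, 0), (4, 1), (4, 6), (7, 9)]:
    union(a, b)
findset(3)
findset(1)
```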

How many disjoint sets remain?
 none of the other answers are correct
 4
 2
 5
 3
 1

How many values have a root as parent?
 4
 3
 1
 none of the other answers are correct
 2
 5
Concept: Graphs

What is the primary characteristic of a directed graph?
 the vertices are all reachable
 the edges are unidirectional
 the edges are bidirectional
 at least one vertex is unreachable

What is the primary characteristic of a weighted graph?
 each vertex has an associated weight
 the edge count exceeds vertex count
 each edge has an associated weight
 the vertex count exceeds some threshold

What is the primary characteristic of an undirected, simple graph?
 there is at most one edge between any two vertices
 the termini of an edge may be the same vertex
 the edge count exceeds or equals the vertex count
 the edge count is less than the vertex count

What is the degree of a vertex in an undirected, simple graph?
 the total number of paths emanating from the vertex
 the count of vertices reachable from that vertex
 the count of vertices not reachable from that vertex
 the total number of edges emanating from the vertex

What is the primary characteristic of an undirected, simple, connected graph?
 a path exists between every pair of vertices
 an edge exists between every pair of vertices
 each vertex has a edge that connects to itself
 there is at least one edge

What is the primary characteristic of an undirected, simple, regular graph?
 all edges have the same weight
 all vertices have the same weight
 all vertices have the same degree
 all edges terminate at the same vertex

What is the primary characteristic of an undirected, simple, complete graph?
Choose the most general answer.
 there exists a path between every pair of vertices
 all vertices have a degree greater than some threshold
 each vertex has two edges or no edges
 there exists an edge between every pair of vertices

What is the primary characteristic of an undirected, simple, connected,
acyclic graph?
Choose the most general answer.
 the vertex count is one greater than the edge count
 there are at least two unique paths between any two vertices
 the vertex count is equal to the edge count
 no edge has a single vertex as both of its termini

What is the primary characteristic of a planar graph?
 the maximum degree of a vertex is 4
 it can be drawn in a plane with no crossed edges
 when drawn in a plane, it must have crossed edges
 the maximum degree of a vertex is 3
Graph traversals

What is a walk?
Choose the most general answer.
 a sequence of vertices with no edge between ${v}_{i}$ and ${v}_{i+1}$
 a sequence of vertices with an edge between ${v}_{i}$ and ${v}_{i+1}$
 a set of edges that have one vertex in common
 a sequence of edges such that ${e}_{i}$ and ${e}_{i+1}$ have no common vertex

What is a trail?
Choose the most general answer.
 a walk with at least one edge appearing twice (or more)
 a walk with at least one vertex appearing twice (or more)
 a walk with no vertex appearing more than once
 a walk with no edge appearing more than once

What is the primary characteristic of a path?
 a trail with all vertices appearing at least once
 a trail with no vertex appearing more than once
 a trail with no vertex appearing more than twice
 a trail with all edges appearing at least once

What is an Euler trail?
Choose the most general answer.
 a trail which involves every vertex, except one, exactly once
 a trail which involves every vertex exactly once
 a trail which involves every edge exactly once
 the longest trail in a graph

What is a Hamiltonian path?
Choose the most general answer.
 the longest path in a graph
 a path which involves every edge
 the shortest path in a graph
 a path which involves every vertex

T or F:
An Euler trail is always the longest trail in an undirected graph.

T or F:
All undirected graphs have an Euler trail.

T or F:
A Hamiltonian path
is always the longest path in an undirected graph.

T or F:
All undirected graphs have a Hamiltonian path.

The longest path in an undirected graph is bounded by:
 the maximum degree of a graph
 the number of vertices
 the number of edges
 the minimum degree of a graph

T or F:
An undirected graph with a mindegree of 2 must have a cycle.
Spanning trees and shortest paths

T or F:
All connected, undirected graphs have a spanning tree.

Referring to graphs, Kruskal's algorithm is used to find:
 the shortest path between two vertices
 the longest path with the least weight
 allpairs shortest paths
 a minimum spanning tree

Referring to graphs, Dijkstra's algorithm is used to find:
 the shortest path between a source vertex and the other vertices
 allpairs shortest paths
 a minimum spanning tree
 the longest path with the least weight

Referring to graphs, Prim's algorithm is used to find:
 the shortest path between two vertices
 allpairs shortest paths
 a minimum spanning tree
 the longest path with the least weight

Referring to graphs, the Floyd-Warshall algorithm is used to find:
 the longest path with the least weight
 a minimum spanning tree
 allpairs shortest paths
 the shortest path

Suppose $E=\omega \left(V\right)$. What is the asymptotic running
time of Kruskal's Algorithm? Choose the simplest answer.
 $\Theta (E\mathrm{log}E+V\mathrm{log}V)$
 $\Theta (E\mathrm{log}E+V\mathrm{log}E)$
 $\Theta (E+V)$
 $\Theta (E+V\mathrm{log}V)$
 $\Theta \left(E\right)$
 $\Theta \left(E\mathrm{log}E\right)$

Suppose $E=\Theta \left(V\right)$. What is the asymptotic running
time of Prim's Algorithm? Choose the simplest answer.
 $\Theta (E+V)$
 $\Theta \left(E\mathrm{log}E\right)$
 $\Theta (E+V\mathrm{log}V)$
 $\Theta (E\mathrm{log}E+V\mathrm{log}V)$
 $\Theta \left(E\right)$
 $\Theta (E\mathrm{log}E+V\mathrm{log}E)$

T or F:
Suppose you kept track of the level number for each vertex w in
a breadthfirst search of a simple, undirected, unweighted graph, starting
from a vertex v. The level numbers of each w would correspond
to the shortest path distance.
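The level-number bookkeeping described above can be sketched as follows, assuming an adjacency-list representation of the graph:

```python
from collections import deque

def bfs_levels(adj, source):
    """Return the BFS level number of every vertex reachable from source.

    In an unweighted graph the level equals the shortest-path distance,
    since BFS visits all vertices at distance k before any at distance k+1."""
    level = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in level:
                level[w] = level[v] + 1
                queue.append(w)
    return level
```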

T or F:
For a simple, undirected graph, $\Theta \left(E\mathrm{log}E\right)=\Theta \left(E\mathrm{log}V\right)$

Consider running Kruskal's algorithm on the complete graph ${K}_{n}$,
processing the first i edges.
What is the smallest value of i that could yield the final result?
 $\genfrac{}{}{0.1ex}{}{n(n+1)}{2}-1$
 $n-1$
 $\genfrac{}{}{0.1ex}{}{n(n-1)}{2}-1$
 $\genfrac{}{}{0.1ex}{}{n(n+1)}{2}$
 $\genfrac{}{}{0.1ex}{}{n(n-1)}{2}$
 n

Consider running Dijkstra's algorithm using a linked list (with a tail
pointer) as the basis for a priority queue. What is the asymptotic run time
for the algorithm?
 $\Theta \left({V}^{3}E\right)$
 $\Theta \left({V}^{3}\mathrm{log}V\right)$
 $\Theta \left({V}^{2}E\right)$
 $\Theta \left({V}^{2}\mathrm{log}V\right)$
 the correct answer is not listed
 $\Theta \left(V{E}^{2}\right)$
Concept: memoization
Assume zerobased indexing.

Consider memoizing this function:
function f(x)
{
    if (x == 0) return 0;
    if (x == 1) return 1;
    return f(x-2) + f(x-1);
}
What would be the memoization table's largest index/indices?
 x and x
 x  1
 x  2
 x + 1 and x
 x + 1 and x  1
 x + 1
 x + 1 and x + 1
 x
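For reference, a memoized version of f might look like the following sketch; the dictionary-based table is an assumed implementation detail, not the only possible layout.

```python
def f(x, table=None):
    """Memoized Fibonacci: each subproblem f(k) is computed at most once."""
    if table is None:
        table = {}
    if x in table:
        return table[x]
    if x == 0:
        result = 0
    elif x == 1:
        result = 1
    else:
        result = f(x - 2, table) + f(x - 1, table)
    table[x] = result
    return result
```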

Consider memoizing this function:
function g(x,items,y)
{
    if (x == 0) return 1;
    if (x < 0) return 0;
    if (y == items.size) return 0;
    return minimum(g(x-items[y],items,y), g(x,items,y+1));
}
What would be the memoization table's largest index/indices,
where x refers to the original value of x?
 x and items.size
 x + 1 and items.size + 1
 x  2
 x and items.size + 1
 x
 x  1
 x + 1
 x and items.size  1
Dynamic programming

Consider using dynamic programming to improve the efficiency of
the following function:
function f(a,b,c,d,e)
{
    if (a == 0) return 0;
    if (a < 0) return INFINITY;
    if (d == e) return 0;
    return
        max(
            f(a,b,c,d+1,e),
            f(a-b[d],b,c,d,e) + c[d]
        );
}
What would be the dimensionality of the dynamic programming table?
 3
 4
 6
 1
 2
 5

Consider using dynamic programming to improve the efficiency of
the following function:
function f(a,b,c,d,e)
{
    if (a == 0) return 0;
    if (a < 0) return INFINITY;
    if (d == e) return 0;
    return
        max(
            f(a,b,c,d+1,e),
            f(a-b[d],b,c,d,e) + c[d]
        );
}
How would the dynamic programming table be filled, using a as an index?
 a is not used as an index
 smaller a to larger a
 larger a to smaller a

Consider using dynamic programming to improve the efficiency of
the following function:
function f(a,b,c,d,e)
{
    if (a == 0) return 0;
    if (a < 0) return INFINITY;
    if (d == e) return 0;
    return
        max(
            f(a,b,c,d+1,e),
            f(a-b[d],b,c,d,e) + c[d]
        );
}
How would the dynamic programming table be filled, using b as an index?
 b is not used as an index
 smaller b to larger b
 larger b to smaller b

Consider using dynamic programming to improve the efficiency of
the following function:
function f(a,b,c,d,e)
{
    if (a == 0) return 0;
    if (a < 0) return INFINITY;
    if (d == e) return 0;
    return
        max(
            f(a,b,c,d+1,e),
            f(a-b[d],b,c,d,e) + c[d]
        );
}
How would the dynamic programming table be filled, using d as an index?
 larger d to smaller d
 d is not used as an index
 smaller d to larger d

Consider using dynamic programming to improve the efficiency of
the following function:
function f(a,b,c,d,e)
{
    if (a == 0) return 0;
    if (a < 0) return INFINITY;
    if (d == e) return 0;
    return
        max(
            f(a,b,c,d+1,e),
            f(a-b[d],b,c,d,e) + c[d]
        );
}
What is wrong, if anything,
about the following loop for filling out the dynamic programming table?
for (a = 0; a < max_a; ++a)
    for (d = 0; d < max_d; ++d)
    {
        t[a][d] =
            max(
                getTable(a,d+1),
                getTable(x-b[d],d) + c[d]
            );
    }
 there should be three nested loops
 one or more of the loop indices is incorrect
 the a loop goes in the wrong direction
 there should only be one loop (no nesting)
 the d loop goes in the wrong direction
 the table is filled out correctly
Concept: amortized analysis

Suppose a dynamic array was implemented so that growing the
array increased the capacity by 1000 elements.
What is the worstcase cost of the append operation?
 log linear
 linear
 quadratic
 constant

Suppose a dynamic array was implemented so that growing the
array increased the capacity by 2%. What are the
amortized and worst-case costs of the append operation?
 constant and linear
 linear and linear
 log and linear
 quadratic and quadratic

Suppose a dynamic array was implemented so that growing the
array increased the capacity by 100 elements. What are the
amortized and worst-case costs of the append operation?
 constant and linear
 quadratic and quadratic
 linear and linear
 log and linear
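The growth policies in these questions can be compared empirically. The Python sketch below uses a hypothetical cost model (each append costs 1, and growing costs one copy per existing element) and tallies the total cost of n appends; the constants 100 and 2% mirror the questions above, and the helper names are illustrative only.

```python
def append_cost(n, grow):
    # Cost model: 1 per append, plus one copy per existing
    # element each time the array must grow.
    cap, size, total = 0, 0, 0
    for _ in range(n):
        if size == cap:
            total += size        # copy the existing elements
            cap = grow(cap)
        total += 1               # the append itself
        size += 1
    return total

fixed = lambda cap: cap + 100                   # grow by 100 slots
proportional = lambda cap: int(cap * 1.02) + 1  # grow by ~2%
```

Dividing `append_cost(n, ...)` by n gives the average cost per append; watching how that ratio behaves as n increases is what distinguishes the two policies.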

Consider a dynamic fillable array which grows by doubling in size.
Which is a valid equation for calculating the total cost incurred
when insertion ${2}^{i}+1$ is made?
Assume the individual cost of an insert when there is room is 1 and
the individual cost of an insert when there is no room is ${2}^{i}+1$.
 $2n-3$
 $3n-1$
 $2n-1$
 $3n-2$
 $2n-2$
 $3n-3$

Consider a dynamic fillable array which grows from size n
to size $2n+1$ each time the array is filled.
Which is a valid equation for calculating the total cost incurred
when insertion ${2}^{i}$ is made?
Assume the individual cost of an insert when there is room is 1 and
the individual cost of an insert when there is no room is ${2}^{i}$.
 $3n-3$
 $2n-1$
 $2n$
 $3n\mathrm{log}n-2$
 $3n-6$
 $3n-1$

Consider a dynamic fillable array which grows by doubling in size.
Let S represent the number of filled slots and E, the number
of empty slots, and C, the capacity of the array.
Which is a valid potential function for proving the amortized
cost of an insert is constant?
 $2S-E$
 $S-E$
 $2S-C$
 $S+E$
 $E+2C$
 $S+2E$

Consider implementing a queue with two stacks. Enqueues are
translated to pushes onto the first stack. For a dequeue,
if the second stack is empty,
each element on the first stack is popped and
pushed, in turn, onto the second stack.
In either case,
the item popped from the second stack is returned.
The worst-case times for enqueue and dequeue are:
 $\Theta \left(n\right)$ and $\Theta \left(1\right)$
 $\Theta \left(n\right)$ and $\Theta \left(n\right)$
 $\Theta \left(1\right)$ and $\Theta \left(1\right)$
 $\Theta \left(1\right)$ and $\Theta \left(n\right)$
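The two-stack scheme just described can be sketched directly; the Python below is one straightforward rendering of it (stack names follow the V/W wording used in the next question, and dequeue on an empty queue is left unguarded, as in the description).

```python
class TwoStackQueue:
    def __init__(self):
        self.v = []   # stack V: enqueues are pushed here
        self.w = []   # stack W: dequeues are popped from here

    def enqueue(self, x):
        self.v.append(x)          # always a single push

    def dequeue(self):
        if not self.w:
            # Pop everything off V and push it onto W; this
            # reverses the order, so W pops oldest-first (FIFO).
            while self.v:
                self.w.append(self.v.pop())
        return self.w.pop()
```

A single dequeue may move every element, but each element is pushed and popped at most twice over its lifetime, which is the observation the potential-function question that follows formalizes.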

Consider implementing a queue with two stacks. Enqueues are
translated to pushes onto the first stack, V. For a dequeue,
if the second stack, W, is empty,
each element on the first stack is popped and
pushed, in turn, onto the second stack.
The total work of this transfer is the number of elements popped plus the
number of elements pushed.
In either case,
the item popped from the second stack is returned.
The number of elements on stack V is denoted ${V}_{s}$ and the
number of elements on stack W is denoted ${W}_{s}$.
Which of the following potential functions can be used to
show an amortized bound of $\Theta \left(1\right)$ for operations
on this kind of queue?
 $\Phi =2{V}_{s}$
 $\Phi ={V}_{s}$
 $\Phi ={V}_{s}+{W}_{s}$
 $\Phi ={V}_{s}+2{W}_{s}$

Suppose a data structure has
operation A with a real cost of 1 and
operation B with a real cost of $2n+1$.
After an A operation, n increases by 1 while
after a B operation, n decreases to zero.
Which of the following potential functions can be used to
show an amortized bound of $\Theta $(1) for operations A and B
on this data structure?
 $\Phi =n$
 $\Phi =3n$
 $\Phi =2n$

Suppose a data structure has
operation A with a real cost of $3n+1$ and
operation B with a real cost of $\genfrac{}{}{0.1ex}{}{3}{2}n+1$.
After an A operation, n decreases to $\genfrac{}{}{0.1ex}{}{1}{4}n$ while
after a B operation, n decreases to $\genfrac{}{}{0.1ex}{}{5}{8}n$.
Which of the following potential functions can be used to
show an amortized bound of $\Theta $(1) for operations A and B
on this data structure?
 $\Phi =3n$
 $\Phi =\genfrac{}{}{0.1ex}{}{3}{2}n$
 $\Phi =4n$

Suppose a data structure has
operation A with a real cost of 1 and
operation B with a real cost of $k+n$.
After an A operation, n increases by 1.
After a B operation, n decreases to $k+1$.
Which of the following potential functions can be used to
show an amortized bound of $\Theta $(1) for A operations
and $\Theta \left(k\right)$ for B operations?
 $\Phi =2n$
 $\Phi =n$
 $\Phi =3n$
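The bookkeeping behind these potential-function questions can be mechanized: the amortized cost of an operation is its real cost plus the change in $\Phi$. As an illustration of the mechanics only, the Python sketch below applies this to the first A/B structure above (A: real cost 1, n increases by 1; B: real cost $2n+1$, n resets to 0) against an arbitrary candidate potential; the choice $\Phi = 2n$ in the example call is just one of the listed options.

```python
def amortized_costs(ops, phi):
    # Amortized cost of each op = real cost + (phi after - phi before).
    n, costs = 0, []
    for op in ops:
        before = phi(n)
        if op == 'A':
            real = 1            # A: real cost 1, n increases by 1
            n += 1
        else:
            real = 2 * n + 1    # B: real cost 2n+1, n resets to 0
            n = 0
        costs.append(real + phi(n) - before)
    return costs

costs = amortized_costs('AAABAAAAAB', lambda n: 2 * n)
```

A candidate potential establishes a $\Theta$(1) amortized bound only if every entry of `costs` stays below a fixed constant for every operation sequence; running the other candidates through the same checker shows which ones fail.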
Concept: The classes $\mathcal{P}$ and $NP$

A problem can be in $\mathcal{P}$ and not in $NP$.
 False
 Not known
 True

A problem can be in $NP$ and not in $\mathcal{P}$.
 Not known
 False
 True

All problems are in $\mathcal{P}$.
 True
 False
 Not known

All problems are in $NP$.
 Not known
 True
 False

$NP$ stands for:
 Nondeterministic Polynomial.
 Nonexponential Program.
 Nonintractable Program.
 Non-Polynomial.

T or F:
A constant time algorithm is in $\mathcal{P}$.

T or F:
A linear time algorithm is in $\mathcal{P}$.

T or F:
A constant time algorithm is in $NP$.

T or F:
A linear time algorithm is in $NP$.

Someone shows you a correct algorithm for problem A
whose solution can be verified in polynomial time.
You can conclude:
 problem A is in $NP$
 nothing about whether problem A is in $NP$ or not.
 problem A is not in $NP$

Someone proves that, for problem A, verifying a
solution requires at least exponential time.
You can conclude:
 nothing about whether problem A is in $NP$ or not.
 problem A is in $NP$
 problem A is not in $NP$

Someone shows you a correct polynomial time algorithm for problem A.
You can conclude:
 nothing about whether problem A is in $\mathcal{P}$ or not.
 problem A is in $\mathcal{P}$
 problem A is not in $\mathcal{P}$

Someone shows you a correct exponential time algorithm for problem A
whose solution can be verified in polynomial time.
You can conclude:
 problem A is in $\mathcal{P}$
 problem A is not in $\mathcal{P}$
 nothing about whether problem A is in $\mathcal{P}$ or not.

Someone shows you a correct polynomial time algorithm for problem A.
You can conclude:
 problem A is not in $NP$
 nothing about whether problem A is in $NP$ or not.
 problem A is in $NP$

Which one of the following is not a valid way to prove
a problem is in $NP$:
 show that a solution can be found in polynomial time on a deterministic computer.
 show that a solution can be verified in polynomial time on a nondeterministic computer.
 show that a solution can be found in polynomial time on a nondeterministic computer.
 show that a solution can be verified in polynomial time on a deterministic computer.
Concept: $NP$-completeness

To show that a problem A is $NP$-complete, one task is to:
 show A is in $NP$.
 show A is not in $\mathcal{P}$.
 show A is in $\mathcal{P}$.
 show A is not in $NP$.

Suppose B is an $NP$-complete problem.
To show that a problem A is $NP$-complete, one task could
be:
 show a polynomial time/space reduction from B to A.
 show an exponential time/space reduction from A to B.
 show a polynomial time/space reduction from A to B.
 show an exponential time/space reduction from B to A.

Another way of stating “a reduction from A to B” is:
 solve Btype problems with an algorithm for A
 convert an algorithm for B to an algorithm for A
 convert an algorithm for A to an algorithm for B
 solve Atype problems with an algorithm for B
Concept: If $\mathcal{P}=NP$?

If $\mathcal{P}$ = $NP$, then all problems in $\mathcal{P}$ are in $NP$.
 True
 False
 Not known

If $\mathcal{P}$ = $NP$, then all problems in $NP$ are in $\mathcal{P}$.
 True
 Not known
 False

If $\mathcal{P}$ != $NP$, then there exist problems in $\mathcal{P}$ that are not in $NP$.
 Not known
 False
 True

If $\mathcal{P}$ != $NP$, then there exist problems in $NP$ that are not in $\mathcal{P}$.
 False
 True
 Not known
Concept: Proving $\mathcal{P}=NP$.

Factoring is in $NP$. Currently, the best known algorithm on a
conventional computer takes exponential time. If factoring is
proved to take at least exponential time, what is the effect on
the question $\mathcal{P}$ = $NP$?
 $\mathcal{P}$ = $NP$
 $\mathcal{P}$ != $NP$
 the question is still unanswered

Factoring is in $NP$. Currently, the best known algorithm on a
conventional computer takes exponential time. If factoring is
shown to take polynomial time, what is the effect on
the question $\mathcal{P}$ = $NP$?
 $\mathcal{P}$ = $NP$
 the question is still unanswered
 $\mathcal{P}$ != $NP$

Factoring is in $NP$ and the best known
algorithm takes exponential time. In the past, a linear time algorithm
was discovered for quantum computers.
What is the effect on the question $\mathcal{P}$ = $NP$?
 $\mathcal{P}$ != $NP$, but just for quantum computers
 $\mathcal{P}$ = $NP$ for all types of computers.
 $\mathcal{P}$ = $NP$? is still unanswered.
 $\mathcal{P}$ = $NP$, but just for quantum computers
 $\mathcal{P}$ != $NP$ for all types of computers.

Subset Sum is $NP$-complete.
Currently, the best known algorithm on a
conventional computer takes exponential time. If Subset Sum is
proved to take at least exponential time,
what is the effect on the question $\mathcal{P}$ = $NP$?
 $\mathcal{P}$ = $NP$
 the question is still unanswered
 $\mathcal{P}$ != $NP$

Subset Sum is $NP$-complete.
Currently, the best known algorithm on a
conventional computer takes exponential time. If solving Subset Sum
can be
shown to take polynomial time, what is the effect on
the question $\mathcal{P}$ = $NP$?
 $\mathcal{P}$ = $NP$
 $\mathcal{P}$ != $NP$
 the question is still unanswered

In the past, it was shown how to solve Hamiltonian Path
(an $NP$-complete
problem) in linear time, using a DNA-based computer. However,
the algorithm takes a factorial number of DNA strands, which need
to be created each time. This means:
 $\mathcal{P}$ != $NP$, but just for DNA-based computers
 $\mathcal{P}$ != $NP$ for all types of computers.
 $\mathcal{P}$ = $NP$, but just for DNA-based computers
 $\mathcal{P}$ = $NP$ for all types of computers.
 $\mathcal{P}$ = $NP$? is still unanswered.

T or F:
$\mathcal{P}$ = $NP$ is just another way of saying,
for problems in $NP$,
finding a solution is no harder than verifying a solution.