Prerequisite Concepts for Analysis of Algorithms

Basic Data Structures (Version 7)

## Concept: mathematics notation

1. ${\mathrm{log}}_{2}n$ is:
1. $\omega \left({\mathrm{log}}_{10}n\right)$
2. $\Theta \left({\mathrm{log}}_{10}n\right)$
3. $o\left({\mathrm{log}}_{10}n\right)$
1. ${\mathrm{log}}_{2}n$ is equal to:
1. $\genfrac{}{}{0.1ex}{}{{\mathrm{log}}_{10}n}{{\mathrm{log}}_{10}2}$
2. $\genfrac{}{}{0.1ex}{}{{\mathrm{log}}_{2}n}{{\mathrm{log}}_{2}10}$
3. $\genfrac{}{}{0.1ex}{}{{\mathrm{log}}_{10}n}{{\mathrm{log}}_{2}10}$
4. $\genfrac{}{}{0.1ex}{}{{\mathrm{log}}_{2}n}{{\mathrm{log}}_{10}2}$
1. $\mathrm{log}\left(nm\right)$ is equal to:
1. $m\mathrm{log}n$
2. $n\mathrm{log}m$
3. ${\left(\mathrm{log}n\right)}^{m}$
4. $\mathrm{log}n+\mathrm{log}m$
1. $\mathrm{log}\left({n}^{m}\right)$ is equal to:
1. $m\mathrm{log}n$
2. $n\mathrm{log}m$
3. $\mathrm{log}n+\mathrm{log}m$
4. ${\left(\mathrm{log}n\right)}^{m}$
1. ${\mathrm{log}}_{2}2$ can be simplified to:
1. ${\mathrm{log}}_{2}2$ cannot be simplified any further
2. 1
3. 2
4. 4
1. ${2}^{{\mathrm{log}}_{2}n}$ is equal to:
1. ${\mathrm{log}}_{2}n$
2. ${2}^{n}$
3. n
4. ${n}^{2}$
1. ${n}^{2}$ is $o\left({n}^{3}\right)$. Therefore, $\mathrm{log}{n}^{2}$ is $?\left(\mathrm{log}{n}^{3}\right)$. Choose the tightest bound.
1. big omicron
2. little omicron
3. big omega
4. theta
5. little omega
1. $\mathrm{log}{n}^{n}$ is $\Theta \left(?\right)$.
1. $n\mathrm{log}n$
2. $n$
3. $\mathrm{log}n$
4. $\mathrm{log}{n}^{\mathrm{log}n}$
1. $\mathrm{log}{2}^{n}$ is $\Theta \left(?\right)$.
1. $n\mathrm{log}n$
2. n
3. $\mathrm{log}n$
4. ${2}^{n}$
1. The number of permutations of a list of n items is:
1. ${2}^{n}$
2. $\mathrm{log}n$
3. $n\mathrm{log}n$
4. n!
5. n
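The logarithm identities exercised above can be sanity-checked numerically. A minimal sketch in Python (the specific values of `n` and `m` are arbitrary illustrations, not from the questions):

```python
import math

n, m = 1000.0, 7.0

# Change of base: log2(n) = log10(n) / log10(2)
assert math.isclose(math.log2(n), math.log10(n) / math.log10(2))

# Product rule: log(n*m) = log(n) + log(m)
assert math.isclose(math.log10(n * m), math.log10(n) + math.log10(m))

# Power rule: log(n^m) = m * log(n)
assert math.isclose(math.log10(n ** m), m * math.log10(n))

# Exponential and log are inverses: 2^(log2 n) = n
assert math.isclose(2 ** math.log2(n), n)
```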

## Concept: relative growth rates

1. Which of the following has the correct order in terms of growth rate?
1. $1<\sqrt{n}<\mathrm{log}n$
2. $1<\mathrm{log}n<\sqrt{n}$
3. $1<\mathrm{log}n<\sqrt{n}$
4. $1<\sqrt{n}<\mathrm{log}n$
5. $1<\sqrt{n}<\mathrm{log}n$
1. What is the correct ordering of growth rates for the following functions:
• $f\left(n\right)={n}^{0.9}\mathrm{log}n$
• $g\left(n\right)=1.{1}^{n}$
• $h\left(n\right)=9.9n$
1. f < h < g
2. g < f < h
3. h < f < g
4. h < g < f
5. g < h < f
6. f < g < h
1. What is the correct ordering of growth rates for the following functions:
• $f\left(n\right)=n{\left(\mathrm{log}n\right)}^{2}$
• $g\left(n\right)=n\mathrm{log}{2}^{n}$
• $h\left(n\right)=n\mathrm{log}\left(\mathrm{log}n\right)$
1. f > g > h
2. g > h > f
3. g > f > h
4. f > h > g
5. h > g > f
6. h > f > g

## Concept: order notation

1. What does big Omicron roughly mean?
1. worse than
2. worse than or the same as
3. the same as
4. better than or the same as
5. better than
1. What does $\omega$ roughly mean?
1. better than
2. the same as
3. worse than or the same as
4. better than or the same as
5. worse than
1. What does $\theta$ roughly mean?
1. worse than
2. worse than or the same as
3. better than
4. better than or the same as
5. the same as
1. T or F: All algorithms are $\omega \left(1\right)$.
1. T or F: All algorithms are $\theta \left(1\right)$.
1. T or F: All algorithms are $\Omega \left(1\right)$.
1. T or F: There exist algorithms that are $\omega \left(1\right)$.
1. T or F: There exist algorithms that are $O\left(1\right)$.
1. T or F: All algorithms are $O\left({n}^{n}\right)$.
1. Consider sorting 1,000,000 numbers with mergesort. What is the time complexity of this operation? [THINK!]
1. ${n}^{2}$, because mergesort takes quadratic time
2. constant, because n is fixed
3. n log n, because mergesort takes n log n time

## Concept: comparing algorithms using order notation

Assume the worst case and sufficiently large input size unless otherwise indicated. The phrase "the same time as" means equal within a constant factor (or lower-order term) unless otherwise indicated. The phrase "by a stopwatch" means the actual amount of time needed for the algorithm to run to completion, as measured by a stopwatch.
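A concrete illustration of why "asymptotically worse" is not "always slower by a stopwatch": the step-count functions below are hypothetical (not any particular algorithm), with f = ω(g), yet f wins for small inputs because of constants.

```python
# Hypothetical step counts: f grows asymptotically faster than g,
# i.e. f = omega(g), yet f is cheaper for small n.
def f_steps(n):
    return n * n          # quadratic, small constant factor

def g_steps(n):
    return 1000 * n       # linear, large constant factor

# For small n, the omega-larger f takes fewer steps...
assert f_steps(10) < g_steps(10)
# ...but for sufficiently large n, the asymptotics dominate.
assert f_steps(10**6) > g_steps(10**6)
```

The crossover point (here n = 1000) is exactly what "sufficiently large input size" glosses over.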
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs faster than g.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs faster than g, in all cases.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs faster than g, regardless of input size.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs faster than or takes the same time as g.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs faster than or takes the same time as g, in all cases.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs faster than or takes the same time as g, regardless of input size.
1. T or F: If $f=\omega \left(g\right)$, then it is possible that algorithm f can run faster than g, in some cases.
1. T or F: If $f=\omega \left(g\right)$, then it is possible that algorithm f can run faster than g.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs slower than g.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs slower than g, in all cases.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs slower than g, regardless of input size.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs slower than or takes the same time as g.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs slower than or takes the same time as g, in all cases.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f always runs slower than or takes the same time as g, regardless of input size.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f can take the same time as g, in some cases.
1. T or F: If $f=\omega \left(g\right)$, then algorithm f can run in the same time as g.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs faster than g.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs faster than g, in all cases.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs faster than g, regardless of input size.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs faster than or takes the same time as g.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs faster than or takes the same time as g, in all cases.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs faster than or takes the same time as g, regardless of input size.
1. T or F: If $f=\Omega \left(g\right)$, then it is possible that algorithm f can run faster than g, in some cases.
1. T or F: If $f=\Omega \left(g\right)$, then it is possible that algorithm f can run faster than g.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs slower than g.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs slower than g, in all cases.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs slower than g, regardless of input size.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs slower than or takes the same time as g.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs slower than or takes the same time as g, in all cases.
1. T or F: If $f=\Omega \left(g\right)$, then algorithm f always runs slower than or takes the same time as g, regardless of input size.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs slower than g.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs slower than g, in all cases.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs slower than g, regardless of input size.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs slower than or takes the same time as g.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs slower than or takes the same time as g, in all cases.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs slower than or takes the same time as g, regardless of input size.
1. T or F: If $f=o\left(g\right)$, then it is possible that algorithm f can run slower than g, in some cases.
1. T or F: If $f=o\left(g\right)$, then it is possible that algorithm f can run slower than g.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs faster than g.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs faster than g, in all cases.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs faster than g, regardless of input size.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs faster than or takes the same time as g.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs faster than or takes the same time as g, in all cases.
1. T or F: If $f=o\left(g\right)$, then algorithm f always runs faster than or takes the same time as g, regardless of input size.
1. T or F: If $f=o\left(g\right)$, then algorithm f can run slower than g, in some cases.
1. T or F: If $f=o\left(g\right)$, then algorithm f can take the same time as g, in some cases.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs slower than g.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs slower than g, in all cases.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs slower than g, regardless of input size.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs slower than or takes the same time as g.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs slower than or takes the same time as g, in all cases.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs slower than or takes the same time as g, regardless of input size.
1. T or F: If $f=O\left(g\right)$, then it is possible that algorithm f can run slower than g, in some cases.
1. T or F: If $f=O\left(g\right)$, then it is possible that algorithm f can run slower than g.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs faster than g.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs faster than g, in all cases.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs faster than g, regardless of input size.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs faster than or takes the same time as g.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs faster than or takes the same time as g, in all cases.
1. T or F: If $f=O\left(g\right)$, then algorithm f always runs faster than or takes the same time as g, regardless of input size.
1. T or F: If $f=\Theta \left(g\right)$, then algorithm f takes the same time as g.
1. T or F: If $f=\Theta \left(g\right)$, then algorithm f takes the same time as g, in all cases.
1. T or F: If $f=\Theta \left(g\right)$, then algorithm f takes the same time as g, regardless of input size.
1. T or F: If $f=\Theta \left(g\right)$, then it is possible that algorithm f can run slower than g, in some cases.
1. T or F: If $f=O\left(g\right)$, then it is possible that algorithm f can run slower than g.
1. T or F: If $f=O\left(g\right)$, then it is possible that algorithm f can run slower than g, for some problem sizes.
1. T or F: If $f=\Theta \left(g\right)$, then it is possible that algorithm f can run faster than g, in some cases.
1. T or F: If $f=O\left(g\right)$, then it is possible that algorithm f can run faster than g.
1. T or F: If $f=O\left(g\right)$, then it is possible that algorithm f can run faster than g, for some problem sizes.
1. T or F: If $f=\omega \left(g\right)$, then f and g can be the same algorithm.
1. T or F: If $f=\Omega \left(g\right)$, then f and g can be the same algorithm.
1. T or F: If $f=o\left(g\right)$, then f and g can be the same algorithm.
1. T or F: If $f=O\left(g\right)$, then f and g can be the same algorithm.
1. T or F: Suppose algorithm $f=\theta \left(g\right)$. f and g can be the same algorithm.
1. T or F: If $f=\Omega \left(g\right)$ and $g=O\left(f\right)$, then $f=\Theta \left(g\right)$.
1. T or F: If $f=\Omega \left(g\right)$ and $g=O\left(f\right)$, then f and g must be the same algorithm.
1. T or F: Suppose algorithm $f=\theta \left(g\right)$. Algorithm f is never slower than g, by a stopwatch.
1. T or F: Suppose algorithm $f=\theta \left(g\right)$. Algorithm f always takes the same time as g, by a stopwatch.

## Concept: analyzing code

In the pseudocode, the lower limit of a `for` loop is inclusive, while the upper limit is exclusive. The step, if not specified, is one.
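For example, the pseudocode loop `for (i from 0 until n by 2)` visits 0, 2, 4, … and stops before n. This matches Python's `range` semantics, shown here only as an analogy to pin down the convention:

```python
def loop_values(lower, upper, step=1):
    # Mirrors the pseudocode convention: lower limit inclusive,
    # upper limit exclusive, default step of one.
    return list(range(lower, upper, step))

assert loop_values(0, 5) == [0, 1, 2, 3, 4]   # upper limit excluded
assert loop_values(0, 6, 2) == [0, 2, 4]      # explicit step of two
```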
1. What is the time complexity of this function? Assume the initial value of i is zero.
```
function f(i,n)
{
    if (i < n)
    {
        println(i);
        f(i+1,n);
    }
}
```
1. $\theta \left(\mathrm{log}n\right)$
2. $\theta \left({n}^{2}\right)$
3. $\theta \left(1\right)$
4. $\theta \left(n\right)$
5. $\theta \left(\mathrm{log}n\right)$
6. $\theta \left(\sqrt{n}\right)$
1. What is the time complexity of this function? Assume the initial value of i is one.
```
function f(i,n)
{
    if (i < n)
    {
        println(i);
        f(i*3,n);
    }
}
```
1. $\theta \left(n\sqrt{n}\right)$
2. $\theta \left({n}^{2}\right)$
3. $\theta \left(n\mathrm{log}n\right)$
4. $\theta \left(n\right)$
5. $\theta \left(\mathrm{log}n\right)$
6. $\theta \left({\left(\mathrm{log}n\right)}^{3}\right)$
1. What is the time complexity of this function? Assume the initial value of i is one.
```
function f(i,n)
{
    if (i < n)
    {
        f(i*sqrt(n),n);
        println(i);
    }
}
```
1. $\theta \left(\sqrt{n}\right)$
2. $\theta \left({n}^{2}\right)$
3. $\theta \left(n\sqrt{n}\right)$
4. $\theta \left(1\right)$
5. $\theta \left(n\right)$
6. $\theta \left(\mathrm{log}n\right)$
1. What is the time complexity of this function? Assume the initial value of i and j is zero.
```
function f(i,j,n)
{
    if (i < n)
    {
        if (j < n)
            f(i,j+1,n);
        else
            f(i+1,i+1,n);
    }
    println(i,j);
}
```
1. $\theta \left(1\right)$
2. $\theta \left({\mathrm{log}}^{2}n\right)$
3. $\theta \left({n}^{2}\right)$
4. $\theta \left(n\mathrm{log}n\right)$
5. $\theta \left(\sqrt{n}\right)$
6. $\theta \left(n\right)$
1. What is the time complexity of this function? Assume the initial value of i and j is zero.
```
function f(i,j,n)
{
    if (i < n)
    {
        if (j < i)
            f(i,j+1,n);
        else
            f(i+1,0,i+1);
    }
    println(i,j);
}
```
1. $\theta \left({n}^{2}\right)$
2. $\theta \left(\sqrt{n}\right)$
3. $\theta \left(n\mathrm{log}n\right)$
4. $\theta \left(n\right)$
5. $\theta \left({\mathrm{log}}^{2}n\right)$
6. $\theta \left(1\right)$
1. What is the time complexity of this function? Assume the initial value of i is one and j is zero.
```
function f(i,j,n)
{
    if (i < n)
    {
        if (j < n)
            f(i,j+1,n);
        else
            f(i*2,0,n);
    }
    println(i,j);
}
```
1. $\theta \left(1\right)$
2. $\theta \left(n\right)$
3. $\theta \left(n\mathrm{log}n\right)$
4. $\theta \left({n}^{2}\right)$
5. $\theta \left(n\sqrt{n}\right)$
6. $\theta \left({\mathrm{log}}^{2}n\right)$
1. What is the time complexity of this function? Assume the initial value of i is one and j is zero.
```
function f(i,j,n)
{
    if (i < n)
    {
        if (j < n)
            f(i,j+1,n);
        else
            f(i*2,i*2,n);
    }
    println(i,j);
}
```
1. $\theta \left({n}^{2}\right)$
2. $\theta \left(1\right)$
3. $\theta \left(n\right)$
4. $\theta \left(n\mathrm{log}n\right)$
5. $\theta \left({\mathrm{log}}^{2}n\right)$
6. $\theta \left(n\sqrt{n}\right)$
1. What is the time complexity of this function? Assume the initial value of i is zero and j is one.
```
function f(i,j,n)
{
    if (i < n)
    {
        if (j < n)
            f(i,j*2,n);
        else
            f(i+1,1,n);
    }
    println(i,j);
}
```
1. $\theta \left(n\sqrt{n}\right)$
2. $\theta \left(1\right)$
3. $\theta \left(n\right)$
4. $\theta \left({n}^{2}\right)$
5. $\theta \left({\mathrm{log}}^{2}n\right)$
6. $\theta \left(n\mathrm{log}n\right)$
1. What is the time complexity of this function? Assume positive, integral input and integer division.
```
function f(n)
{
    if (n > 0)
    {
        f(n/2);
        for (var i from 0 until n)
            println(n);
    }
}
```
1. $\theta \left({n}^{2}\right)$
2. $\theta \left(n\mathrm{log}n\right)$
3. $\theta \left(n\sqrt{n}\right)$
4. $\theta \left(1\right)$
5. $\theta \left(\mathrm{log}n\right)$
6. $\theta \left(n\right)$
1. What is the time complexity of this code fragment?
```
for (i from 0 until n by 1)
    for (j from 0 until i by 1)
        println(i,j);
```
1. $\theta \left(1\right)$
2. $\theta \left({\mathrm{log}}^{2}n\right)$
3. $\theta \left({n}^{2}\right)$
4. $\theta \left(n\sqrt{n}\right)$
5. $\theta \left(n\right)$
6. $\theta \left(n\mathrm{log}n\right)$
1. What is the time complexity of this code fragment?
```
i = 1;
while (i < n)
{
    for (j from 0 until n by 1)
        println(i,j);
    i = i * 2;
}
```
1. $\theta \left(1\right)$
2. $\theta \left(n\sqrt{n}\right)$
3. $\theta \left({\mathrm{log}}^{2}n\right)$
4. $\theta \left({n}^{2}\right)$
5. $\theta \left(n\right)$
6. $\theta \left(n\mathrm{log}n\right)$
1. What is the time complexity of this code fragment?
```
i = 1;
while (i < n)
{
    for (j from 0 until i by 1)
        println(i,j);
    i = i * 2;
}
```
1. $\theta \left(n\mathrm{log}n\right)$
2. $\theta \left({n}^{2}\right)$
3. $\theta \left(n\sqrt{n}\right)$
4. $\theta \left({\mathrm{log}}^{2}n\right)$
5. $\theta \left(1\right)$
6. $\theta \left(n\right)$
1. What is the time complexity of this code fragment?
```
for (i from 0 until n)
{
    j = 1;
    while (j < n)
    {
        println(i,j);
        j = j * 2;
    }
}
```
1. $\theta \left(1\right)$
2. $\theta \left({n}^{2}\right)$
3. $\theta \left({\mathrm{log}}^{2}n\right)$
4. $\theta \left(n\right)$
5. $\theta \left(n\sqrt{n}\right)$
6. $\theta \left(n\mathrm{log}n\right)$
1. What is the time complexity of this code fragment?
```
for (i from 0 until n by 1)
    println(i);
```
1. $\theta \left({\mathrm{log}}^{2}n\right)$
2. $\theta \left(n\right)$
3. $\theta \left(n\mathrm{log}n\right)$
4. $\theta \left(n\sqrt{n}\right)$
5. $\theta \left({n}^{\sqrt{n}}\right)$
6. $\theta \left({n}^{2}\right)$
1. What is the time complexity of this code fragment?
```
i = 1;
while (i < n)
{
    println(i);
    i = i * 2;
}
```
1. $\theta \left(n\right)$
2. $\theta \left({n}^{2}\right)$
3. $\theta \left(n\sqrt{n}\right)$
4. $\theta \left({\left(\mathrm{log}n\right)}^{2}\right)$
5. $\theta \left(n\mathrm{log}n\right)$
6. $\theta \left(\mathrm{log}n\right)$
1. What is the time complexity of this code fragment?
```
i = 1;
while (i < n)
{
    println(i);
    i = i * sqrt(n);
}
```
1. $\theta \left(n\sqrt{n}\right)$
2. $\theta \left(1\right)$
3. $\theta \left(\sqrt{n}\right)$
4. $\theta \left(n-\sqrt{n}\right)$
5. $\theta \left(n\right)$
6. $\theta \left(n\genfrac{}{}{0.1ex}{}{n}{\sqrt{n}}\right)$
1. What is the time complexity of this code fragment?
```
i = 1;
while (i < n)
{
    println(i);
    i = i * 2;
}
for (j from 0 until n by 2)
    println(j);
```
1. $\theta \left({\mathrm{log}}^{2}n\right)$
2. $\theta \left(n\sqrt{n}\right)$
3. $\theta \left(1\right)$
4. $\theta \left({n}^{2}\right)$
5. $\theta \left(n\right)$
6. $\theta \left(n\mathrm{log}n\right)$
1. What is the time complexity of this code fragment?
```
for (i from 0 until n by 1)
    println(i);
for (j from 0 until n by 1)
    println(j);
```
1. $\theta \left(n\right)$
2. $\theta \left(1\right)$
3. $\theta \left(n\mathrm{log}n\right)$
4. $\theta \left(n\sqrt{n}\right)$
5. $\theta \left({\mathrm{log}}^{2}n\right)$
6. $\theta \left({n}^{2}\right)$
1. What is the time complexity of this code fragment?
```
for (i from 0 until n by 2)
    println(i);
j = 1;
while (j < n)
{
    println(j);
    j = j * 2;
}
```
1. $\theta \left({n}^{2}\right)$
2. $\theta \left({\mathrm{log}}^{2}n\right)$
3. $\theta \left(n\sqrt{n}\right)$
4. $\theta \left(n\mathrm{log}n\right)$
5. $\theta \left(1\right)$
6. $\theta \left(n\right)$
1. What is the space complexity of this code fragment?
```
for (i from 0 until n by 1)
    println(i);
```
1. $\theta \left({n}^{2}\right)$
2. $\theta \left(\mathrm{log}n\right)$
3. $\theta \left(1\right)$
4. $\theta \left(\sqrt{n}\right)$
5. $\theta \left(n\right)$
6. $\theta \left(n\mathrm{log}n\right)$
1. What is the space complexity of this code fragment?
```
i = 1;
while (i < n)
{
    println(i);
    i = i * sqrt(n);
}
```
1. $\theta \left(\mathrm{log}n\right)$
2. $\theta \left(n\mathrm{log}n\right)$
3. $\theta \left(n\right)$
4. $\theta \left({n}^{2}\right)$
5. $\theta \left(\sqrt{n}\right)$
6. $\theta \left(1\right)$
1. What is the space complexity of this code fragment?
```
i = 1;
while (i < n)
{
    println(i);
    i = i * 2;
}
```
1. $\theta \left(1\right)$
2. $\theta \left(\mathrm{log}n\right)$
3. $\theta \left({n}^{2}\right)$
4. $\theta \left(\sqrt{n}\right)$
5. $\theta \left(n\mathrm{log}n\right)$
6. $\theta \left(n\right)$
1. What is the space complexity of this code fragment?
```
for (i from 0 until n by 1)
    println(i);
for (j from 0 until n by 1)
    println(j);
```
1. $\theta \left({n}^{2}\right)$
2. $\theta \left(\mathrm{log}n\right)$
3. $\theta \left(\sqrt{n}\right)$
4. $\theta \left(1\right)$
5. $\theta \left(n\mathrm{log}n\right)$
6. $\theta \left(n\right)$
1. What is the space complexity of this code fragment?
```
i = 1;
while (i < n)
{
    println(i);
    i = i * 2;
}
for (j from 0 until n by 2)
    println(j);
```
1. $\theta \left(\mathrm{log}n\right)$
2. $\theta \left(n\mathrm{log}n\right)$
3. $\theta \left({n}^{2}\right)$
4. $\theta \left(\sqrt{n}\right)$
5. $\theta \left(1\right)$
6. $\theta \left(n\right)$
1. What is the space complexity of this code fragment?
```
for (i from 0 until n by 2)
    println(i);
j = 1;
while (j < n)
{
    println(j);
    j = j * 2;
}
```
1. $\theta \left(n\right)$
2. $\theta \left({n}^{2}\right)$
3. $\theta \left(\sqrt{n}\right)$
4. $\theta \left(\mathrm{log}n\right)$
5. $\theta \left(1\right)$
6. $\theta \left(n\mathrm{log}n\right)$
1. What is the space complexity of this code fragment?
```
for (i from 0 until n by 1)
    for (j from 0 until i by 1)
        println(i,j);
```
1. $\theta \left(n\mathrm{log}n\right)$
2. $\theta \left(n\right)$
3. $\theta \left(\sqrt{n}\right)$
4. $\theta \left(\mathrm{log}n\right)$
5. $\theta \left({n}^{2}\right)$
6. $\theta \left(1\right)$
1. What is the space complexity of this code fragment?
```
i = 1;
while (i < n)
{
    for (j from 0 until i by 1)
        println(i,j);
    i = i * 2;
}
```
1. $\theta \left(n\mathrm{log}n\right)$
2. $\theta \left(\mathrm{log}n\right)$
3. $\theta \left(\sqrt{n}\right)$
4. $\theta \left({n}^{2}\right)$
5. $\theta \left(n\right)$
6. $\theta \left(1\right)$
1. What is the space complexity of this code fragment?
```
for (i from 0 until n)
{
    j = 1;
    while (j < n)
    {
        println(i,j);
        j = j * 2;
    }
}
```
1. $\theta \left(\sqrt{n}\right)$
2. $\theta \left(n\mathrm{log}n\right)$
3. $\theta \left({n}^{2}\right)$
4. $\theta \left(\mathrm{log}n\right)$
5. $\theta \left(n\right)$
6. $\theta \left(1\right)$
1. What is the space complexity of this code fragment?
```
i = 1;
while (i < n)
{
    for (j from 0 until n by 1)
        println(i,j);
    i = i * 2;
}
```
1. $\theta \left(n\mathrm{log}n\right)$
2. $\theta \left(n\right)$
3. $\theta \left({n}^{2}\right)$
4. $\theta \left(\sqrt{n}\right)$
5. $\theta \left(1\right)$
6. $\theta \left(\mathrm{log}n\right)$
1. What is the space complexity of this function? Assume the initial value of i and j is zero.
```
function f(i,j,n)
{
    if (i < n)
    {
        if (j < n)
            f(i,j+1,n);
        else
            f(i+1,i+1,n);
    }
    println(i,j);
}
```
1. $\theta \left(\sqrt{n}\right)$
2. $\theta \left({n}^{2}\right)$
3. $\theta \left(n\mathrm{log}n\right)$
4. $\theta \left(n\right)$
5. $\theta \left(1\right)$
6. $\theta \left({\mathrm{log}}^{2}n\right)$
1. What is the space complexity of this function? Assume the initial value of i and j is zero.
```
function f(i,j,n)
{
    if (i < n)
    {
        if (j < i)
            f(i,j+1,n);
        else
            f(i+1,0,i+1);
    }
    println(i,j);
}
```
1. $\theta \left(n\right)$
2. $\theta \left(n\mathrm{log}n\right)$
3. $\theta \left({\mathrm{log}}^{2}n\right)$
4. $\theta \left(1\right)$
5. $\theta \left({n}^{2}\right)$
6. $\theta \left(\sqrt{n}\right)$
1. What is the space complexity of this function? Assume the initial value of i is one and j is zero.
```
function f(i,j,n)
{
    if (i < n)
    {
        if (j < n)
            f(i,j+1,n);
        else
            f(i*2,0,n);
    }
    println(i,j);
}
```
1. $\theta \left(n\sqrt{n}\right)$
2. $\theta \left({\mathrm{log}}^{2}n\right)$
3. $\theta \left(n\mathrm{log}n\right)$
4. $\theta \left(n\right)$
5. $\theta \left(1\right)$
6. $\theta \left({n}^{2}\right)$
1. What is the space complexity of this function? Assume the initial value of i is one and j is zero.
```
function f(i,j,n)
{
    if (i < n)
    {
        if (j < n)
            f(i,j+1,n);
        else
            f(i*2,i*2,n);
    }
    println(i,j);
}
```
1. $\theta \left(n\right)$
2. $\theta \left({\mathrm{log}}^{2}n\right)$
3. $\theta \left({n}^{2}\right)$
4. $\theta \left(1\right)$
5. $\theta \left(n\mathrm{log}n\right)$
6. $\theta \left(n\sqrt{n}\right)$
1. What is the space complexity of this function? Assume the initial value of i is zero and j is one.
```
function f(i,j,n)
{
    if (i < n)
    {
        if (j < n)
            f(i,j*2,n);
        else
            f(i+1,1,n);
    }
    println(i,j);
}
```
1. $\theta \left({n}^{2}\right)$
2. $\theta \left({\mathrm{log}}^{2}n\right)$
3. $\theta \left(n\right)$
4. $\theta \left(n\mathrm{log}n\right)$
5. $\theta \left(1\right)$
6. $\theta \left(n\sqrt{n}\right)$
1. What is the space complexity of this function? Assume positive, integral input and integer division.
```
function f(x,n)
{
    if (x > 0)
    {
        f(x/2,n);
        for (var i from 0 until n)
            println(n);
    }
}
```
1. $\theta \left({n}^{2}\right)$
2. $\theta \left(n\right)$
3. $\theta \left(\mathrm{log}n\right)$
4. $\theta \left(n\sqrt{n}\right)$
5. $\theta \left(1\right)$
6. $\theta \left(n\mathrm{log}n\right)$
1. What is the space complexity of this function? Assume the initial value of i is zero.
```
function f(i,n)
{
    if (i < n)
    {
        f(i+1,n);
        println(i);
    }
}
```
1. $\theta \left(n\right)$
2. $\theta \left(\sqrt{n}\right)$
3. $\theta \left({n}^{2}\right)$
4. $\theta \left(1\right)$
5. $\theta \left(\mathrm{log}n\right)$
6. $\theta \left(n\mathrm{log}n\right)$
1. What is the space complexity of this function? Assume the initial value of i is one.
```
function f(i,n)
{
    if (i < n)
    {
        f(i*2,n);
        println(i);
    }
}
```
1. $\theta \left(1\right)$
2. $\theta \left(\sqrt{n}\right)$
3. $\theta \left(n\mathrm{log}n\right)$
4. $\theta \left({n}^{2}\right)$
5. $\theta \left(n\right)$
6. $\theta \left(\mathrm{log}n\right)$
1. What is the space complexity of this function? Assume the initial value of i is one.
```
function f(i,n)
{
    if (i < n)
    {
        f(i*sqrt(n),n);
        println(i);
    }
}
```
1. $\theta \left(n\sqrt{n}\right)$
2. $\theta \left(\sqrt{n}\right)$
3. $\theta \left(n\right)$
4. $\theta \left(n-\sqrt{n}\right)$
5. $\theta \left(1\right)$
6. $\theta \left(n\genfrac{}{}{0.1ex}{}{n}{\sqrt{n}}\right)$
1. What is the space complexity of this function? Assume the initial value of i is one.
```
function f(i,n)
{
    if (i < n)
    {
        f(i+sqrt(n),n);
        println(i);
    }
}
```
1. $\theta \left(1\right)$
2. $\theta \left(n\right)$
3. $\theta \left(\sqrt{n}\right)$
4. $\theta \left(n-\sqrt{n}\right)$
5. $\theta \left(n\genfrac{}{}{0.1ex}{}{n}{\sqrt{n}}\right)$
6. $\theta \left(n\sqrt{n}\right)$

## Concept: analysis of classic, simple algorithms

1. Which of the following describes the classic recursive Fibonacci's time complexity?
1. $\theta \left(\Phi \right)$
2. $\theta \left(\genfrac{}{}{0.1ex}{}{\Phi }{n}\right)$
3. $\theta \left(1\right)$
4. $\theta \left({\Phi }^{n}\right)$
5. $\theta \left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{n}}\right)$
6. $\theta \left(n-\sqrt{n}\right)$
7. $\theta \left(\sqrt{n}\right)$
1. Which of the following describes the classic recursive Fibonacci's space complexity?
1. $\theta \left(n\right)$
2. $\theta \left(1\right)$
3. $\theta \left(n-\sqrt{n}\right)$
4. $\theta \left(\genfrac{}{}{0.1ex}{}{\Phi }{n}\right)$
5. $\theta \left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{n}}\right)$
6. $\theta \left(\sqrt{n}\right)$
1. Which of the following describes iterative factorial's time complexity?
1. $\theta \left(n-\sqrt{n}\right)$
2. $\theta \left(\genfrac{}{}{0.1ex}{}{\Phi }{n}\right)$
3. $\theta \left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{n}}\right)$
4. $\theta \left(n\right)$
5. $\theta \left(\sqrt{n}\right)$
6. $\theta \left(1\right)$
1. Which of the following describes iterative Fibonacci's space complexity?
1. $\theta \left(n\right)$
2. $\theta \left(1\right)$
3. $\theta \left(\genfrac{}{}{0.1ex}{}{\Phi }{n}\right)$
4. $\theta \left(\genfrac{}{}{0.1ex}{}{n}{\sqrt{n}}\right)$
5. $\theta \left(n-\sqrt{n}\right)$
6. $\theta \left(\sqrt{n}\right)$
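For reference, the "classic" versions these questions refer to are sketched below: a plain recursive Fibonacci with no memoization, and an iterative one that keeps only two running values. (The complexities asked about above are deliberately not stated here.)

```python
def fib_recursive(n):
    # Classic recursive Fibonacci: recomputes overlapping subproblems.
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # Iterative Fibonacci: two running values, one loop.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_recursive(10) == fib_iterative(10) == 55
```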

## Concept: searching

1. T or F: The following code reliably sets the variable min to the minimum value of an unsorted, non-empty array.
```
min = 0;
for (i from 0 until array.length)
    if (array[i] < min)
        min = array[i];
```
1. T or F: The following code reliably sets the variable max to the maximum value in an unsorted, non-empty array.
```
max = array[0]
for (i from 0 to array.length)
    if (array[i] > max)
        max = array[i]
```
1. T or F: The following function reliably returns `True` if the value of item is present in the unsorted, non-empty array.
```
function find(array,item)
{
    found = False;
    for (i from 0 until array.length)
        if (array[i] == item)
            found = True;
    return found;
}
```
1. T or F: The following function reliably returns `False` if the value of item is missing from the unsorted, non-empty array.
```
function find(array,item)
{
    found = True;
    for (i from 0 until array.length)
        if (array[i] != item)
            found = False;
    return found;
}
```
1. What is the average and worst case time complexity, respectively, for searching an unordered list?
1. log, linear
2. linear, linear
3. log, log
4. linear, log
1. What is the average and worst case time complexity, respectively, for searching an ordered list?
1. log, log
2. linear, log
3. log, linear
4. linear, linear
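For comparison with the T/F code variants above, a correct linear search over an unordered list looks like this; note that returning early on a match does not change the worst case, which still scans every element.

```python
def find(array, item):
    # Scan left to right; stop at the first match.
    # Worst case (item absent): every element is examined.
    for value in array:
        if value == item:
            return True
    return False

assert find([3, 1, 4, 1, 5], 4) is True
assert find([3, 1, 4, 1, 5], 9) is False
```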

## Concept: sorting

1. The following strategy is employed by which sort: find the most extreme value in the unsorted portion and place it at the boundary of the sorted and unsorted portions?
1. insertion sort
2. bubble sort
3. quicksort
4. selection sort
5. mergesort
6. heapsort
1. The following strategy is employed by which sort: sort the lower half of the items to be sorted, then sort the upper half, then arrange things such that the largest item in the lower half is less than or equal to the smallest item in the upper half?
1. mergesort
2. bubble sort
3. insertion sort
4. quicksort
5. heapsort
6. selection sort
1. The following strategy is employed by which sort: take the first value in the unsorted portion and place it where it belongs in the sorted portion?
1. heapsort
2. quicksort
3. bubble sort
4. insertion sort
5. mergesort
6. selection sort
1. The following strategy is employed by which sort: pick a value and arrange things such that the largest item in the lower portion is less than or equal to the value and that the smallest item in the upper portion is greater than or equal to the value, then sort the lower portion, then sort the upper?
1. quicksort
2. heapsort
3. selection sort
4. insertion sort
5. bubble sort
6. mergesort
1. Which sort optimizes the worst case behavior of bubble sort?
1. insertion sort
2. heapsort
3. selection sort
4. mergesort
5. stooge sort
6. quicksort

## Concept: space and time complexity

1. What is the best case time complexity for classical mergesort?
1. cubic
2. linear
3. log n
4. n log n
1. What is the worst case complexity for classical mergesort?
1. cubic
2. quadratic
3. linear
4. n log n
5. log n
1. If quicksort is implemented such that the pivot is chosen to be the first element in the array, the worst case behavior of the sort is:
1. exponential
2. log linear
3. quadratic
4. linear
1. If quicksort is implemented such that a random element is chosen to be the pivot, the worst case behavior of the sort is:
1. quadratic
2. linear
3. exponential
4. log linear
1. What is the best case complexity for quicksort?
1. n log n
2. quadratic
3. log n
4. cubic
5. linear
1. What is the best case complexity for classical selection sort?
1. log n
2. quadratic
3. n log n
4. cubic
5. linear
1. What is the worst case complexity for classical selection sort?
1. cubic
2. linear
3. n log n
4. log n
5. quadratic
1. What is the best case complexity for classical insertion sort?
1. n log n
2. quadratic
3. linear
4. cubic
5. log n
1. What is the worst case complexity for classical insertion sort?
1. linear
2. log n
3. quadratic
4. cubic
5. n log n

## Concept: simple arrays

Assume zero-based indexing for all arrays.

In the pseudocode, the lower limit of a `for` loop is inclusive, while the upper limit is exclusive. The step, if not specified, is one.

For all types of fillable arrays, the size is the number of elements added to the array; the capacity is the maximum number of elements that can be added to the array.
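As a sketch of the size/capacity distinction (a hypothetical minimal fillable array; the class and field names are illustrative, not from the course):

```python
class FillableArray:
    def __init__(self, capacity):
        self.capacity = capacity          # maximum number of elements
        self.store = [None] * capacity    # fixed backing storage
        self.size = 0                     # number of elements added so far

    def add(self, value):
        if self.size == self.capacity:
            raise IndexError("array is full")
        self.store[self.size] = value
        self.size += 1

a = FillableArray(4)
a.add(7)
a.add(9)
assert a.size == 2 and a.capacity == 4    # size grows; capacity does not
```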

1. Consider a small array a and a large array b. Accessing the element in the first slot of a takes more/less/the same amount of time as accessing the element in the first slot of b.
1. less time
2. the same amount of time
3. more time
4. it depends on how the arrays were allocated
1. Consider a small array a and a large array b. Accessing the element in the last slot of b takes more/less/the same amount of time as accessing the element in the middle slot of a. Both indices are supplied.
1. it depends on how the arrays were allocated
2. the same amount of time
3. less time
4. more time
1. Accessing the middle element of an array takes more/less/the same amount of time as accessing the last element.
1. more time
2. the same amount of time
3. it depends on how the array was allocated
4. less time
1. What is a major characteristic of a simple array?
1. getting the value at an index can be done in constant time
2. finding an element can be done in constant time
3. inserting an element between indices i and i+1 can be done in constant time
1. What is not a major characteristic of a simple array?
1. setting the value at an index can be done in constant time
2. finding an element can be done in constant time
3. swapping two elements can be done in constant time
4. getting the value at an index can be done in constant time
1. Does the following code set the variable v to the minimum value in an unsorted array with at least two elements?
```
v = 0;
for (i from 0 until array.length)
    if (array[i] < v)
        v = array[i];
```
1. only if all elements have the same value
2. yes, if all the elements are negative
3. only if the true minimum value is zero
4. always
5. yes, if all the elements are positive
6. never
1. Does the following code set the variable v to the minimum value in an unsorted array with at least two elements?
```
v = array[0];
for (i from 0 until array.length)
    if (array[i] < v)
        v = array[i];
```
1. yes, if all the elements are negative
2. always
3. yes, if all the elements are positive
4. never
5. only if all elements have the same value
6. only if the true minimum value is at index 0
1. Does the following code set the variable v to the minimum value in an unsorted array with at least two elements?
```
v = array[0];
for (i from 0 until array.length)
    if (array[i] > v)
        v = array[i];
```
1. only if all elements have the same value
2. yes, if all the elements are negative
3. only if the true minimum value is at index 0
4. yes, if all the elements are positive
5. always
6. never
1. Does the following code set the variable v to the minimum value in an unsorted, non-empty array?
```
v = array[0];
for (i from 0 until array.length)
    if (array[i] > v)
        v = array[i];
```
1. only if all elements have the same value
2. yes, if all the elements are negative
3. yes, if all the elements are positive
4. always
5. only if the true minimum value is at index 0
6. never
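For comparison with the fragments above, a minimum-finding loop that is correct for any non-empty array seeds `v` from the array itself and compares with `<`. A Python sketch of that version:

```python
def minimum(array):
    # Seed v with the first element so correctness does not depend on the
    # sign of the data, then keep the smaller of v and each element.
    v = array[0]
    for i in range(0, len(array)):
        if array[i] < v:
            v = array[i]
    return v
```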
1. Does this find function return the expected result? Assume the array has at least two elements.
```
function find(array, item)
{
    var i; var found = False;
    for (i from 0 until array.length)
        if (array[i] == item)
            found = True;
    return found;
}
```
1. only if the item is not in the array
2. always
3. never
4. only if the item is in the array
1. Does this find function return the expected result? Assume the array has at least two elements.
```
function find(array, item)
{
    var i;
    for (i from 0 until array.length)
        if (array[i] == item)
            return False;
    return True;
}
```
1. only if the item is not in the array
2. always
3. never
4. only if the item is in the array
1. Is this find function correct? Assume the array has at least two elements.
```
function find(array, item)
{
    var i; var found = True;
    for (i from 0 until array.length)
        if (array[i] != item)
            found = False;
    return found;
}
```
1. always
2. only if the item is not in the array
3. never
4. only if the item is in the array
1. Does this find function return the expected result? Assume the array has at least two elements.
```
function find(array, item)
{
    var i;
    for (i from 0 until array.length)
        if (array[i] == item)
            return True;
    return False;
}
```
1. only if the item is not in the array
2. only if the item is in the array
3. always
4. never
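The early-return pattern in the last fragment transliterates directly into Python, which makes it easy to test against small arrays:

```python
def find(array, item):
    # Report success as soon as a match is seen; falling off the end of
    # the loop means no element matched.
    for i in range(0, len(array)):
        if array[i] == item:
            return True
    return False
```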

## Concept:simple fillable arrays

Assume the back index in a simple fillable array points to the first available slot.
1. What is not a property of a simple fillable array?
1. elements are presumed to be contiguous
2. elements can be added in constant time
3. there exists an element that can be removed in constant time
4. the underlying simple array can increase in size
1. What is a property of a simple fillable array?
1. more than one element can be next to an empty slot
2. an element can be added anywhere in constant time
3. any element can be removed in constant time
4. elements are presumed to be contiguous
1. Adding an element at the back of a simple fillable array can be done in:
1. linear time
2. logarithmic time
3. constant time
1. Removing an element at the front of a simple fillable array can be done in:
1. linear time
2. constant time
3. logarithmic time
1. Suppose a simple fillable array has size s and capacity c. The next value to be added to the array will be placed at index:
1. s
2. c
3. s + 1
4. c + 1
5. c - 1
6. s - 1
1. Suppose for a simple fillable array, the size is one less than the capacity. How many values can still be added?
1. this situation cannot exist
2. zero, the array is full
3. one
4. two
1. Suppose for a simple fillable array, the capacity is one less than the size. How many values can still be added?
1. one
2. this situation cannot exist
3. zero, the array is full
4. two
1. Suppose a simple fillable array is empty. The size of the array is:
1. zero
2. the capacity of the array
3. one
4. the length of the underlying simple array
1. Suppose a simple fillable array is full. The capacity of the array is:
1. one
2. the length of the underlying simple array
3. its size minus one
4. zero
1. Which code fragment correctly inserts a new element into index j of a simple fillable array with size s? Assume there is room for the new element.
```
for (i from j until s-2)
    array[i] = array[i+1];
array[i] = newElement;
---
for (i from s-2 until j)
    array[i+1] = array[i];
array[i] = newElement;
```
1. neither are correct
2. both are correct
3. the first fragment
4. the second fragment
1. Which code fragment correctly inserts a new element into index j of an array with size s?
```
for (i from j until s-2)
    array[i+1] = array[i];
array[i] = newElement;
---
for (i from s-2 until j)
    array[i] = array[i+1];
array[i] = newElement;
```
1. the second fragment
2. both are correct
3. neither are correct
4. the first fragment
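A correct insertion shifts elements toward the back starting from the last occupied slot, so no value is overwritten before it is copied. A Python sketch over a plain list standing in for the underlying simple array (the function name and size-returning convention are mine):

```python
def insert_at(array, size, j, new_element):
    # Shift array[size-1] down through array[j] one slot toward the back,
    # working from the high end, then drop the new element into slot j.
    for i in range(size - 1, j - 1, -1):
        array[i + 1] = array[i]
    array[j] = new_element
    return size + 1  # the new size
```

For example, inserting 9 at index 1 of `[1, 2, 3, None]` (size 3) yields `[1, 9, 2, 3]`.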

## Concept:circular arrays

For circular arrays, assume f is the start index, e is the end index, s is the size, and c is the capacity of the array. Both f and e point to the first available slots.
1. What is a property of a theoretical (not practical) circular array?
1. there are two places an element can be added
2. elements do not have to be contiguous
3. an element can be added anywhere in constant time
4. any element can be removed in constant time
1. What is not a property of a theoretical (not practical) circular array?
1. elements are presumed to be contiguous
2. appending an element can be done in constant time
3. prepending an element can be done in constant time
4. inserting an element in the middle can be done in constant time
1. The next value to be added to the front of a circular array will be placed at index:
1. s + f
2. s - f
3. c - f
4. c - 1
5. f
6. f - 1
1. Suppose for a circular array, the size is equal to the capacity. Can a value be added?
1. No, the array is completely full
2. Yes, there is room for one more value
1. Suppose a circular array is empty. The size of the array is:
1. one
2. the length of the array
3. the capacity of the array
4. zero
1. In a circular array, which is not a proper way to correct the start index f after an element is added to the front of the array?
1. `f = (f - 1 + c) % c;`
2. `f -= 1; f == 0? c - 1 : f;`
3. `f -= 1; f = f < 0? c - 1 : f;`
4. `f = f == 0? c - 1 : f - 1;`
5. `if (f == 0) f = c - 1; else f = f - 1;`
1. T or F: In a circular array, the start index (after correction) can never equal the size of the array.
1. T or F: In a circular array, the start index (after correction) can never equal the capacity of the array.
1. Is a separate end index e needed in a circular array?
1. no, it can be computed from s and f.
2. no, it can be computed from s, c, and f.
3. no, it can be computed from s and c.
4. no, it can be computed from c and f.
5. yes
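The index corrections above rely on modular wrap-around, which avoids the explicit `if`. A Python sketch of the decrement (f and c as defined in the preamble):

```python
def step_front(f, c):
    # Move the start index back one slot, wrapping from 0 around to c - 1.
    # Adding c before taking the remainder keeps the operand non-negative.
    return (f - 1 + c) % c
```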

## Concept:dynamic arrays

1. What is not a major characteristic of a dynamic array?
1. finding an element takes at most linear time
2. the only allowed way to grow is doubling the size
3. inserting an element in the middle takes linear time
4. elements are presumed to be contiguous
5. the array can grow to accommodate more elements
1. Suppose a dynamic array has size s and capacity c, with s equal to c. Is the array required to grow on the next addition?
1. yes
2. yes, but only if the dynamic array is not circular
3. no
1. Suppose array capacity grows by 50% every time a dynamic array fills. If the only events are insertions, the growing events:
1. occur more and more frequently
2. occur periodically
3. cannot be characterized in terms of frequency
4. occur less and less frequently
1. Suppose array capacity doubles every time a dynamic array fills. If the only events are insertions, the average cost of an insertion, in the limit, is:
1. the log of the size
2. constant
3. linear
4. the log of the capacity
1. Suppose array capacity grows by 10 every time a dynamic array fills. If the only events are insertions, the average cost of an insertion, in the limit, is:
1. linear
2. the log of the size
3. the log of the capacity
4. constant
1. Suppose array capacity grows by 10 every time a dynamic array fills. If the only events are insertions, the growing events:
1. cannot be characterized in terms of frequency
2. occur periodically
3. occur less and less frequently
4. occur more and more frequently
1. If array capacity grows by 10 every time a dynamic array fills, the average cost of an insertion in the limit is:
1. constant
2. linear
3. the log of the size
4. the log of the capacity
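The contrast between doubling and constant growth can be measured by counting element copies under a simplified cost model in which each growth event copies every element currently stored (a sketch; real implementations also pay allocation costs):

```python
def total_copies(n, grow):
    # Insert n elements one at a time; whenever the array is full,
    # grow the capacity and charge one copy per stored element.
    size, capacity, copies = 0, 1, 0
    for _ in range(n):
        if size == capacity:
            copies += size
            capacity = grow(capacity)
        size += 1
    return copies
```

Doubling keeps the total number of copies below 2n, so the average insertion cost stays constant; growing by 10 makes the total grow quadratically, so the average cost is linear in the size.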

## Concept:singly-linked lists (insertions)

1. Appending to a singly-linked list without a tail pointer takes:
1. constant time
2. linear time
3. n log n time
4. log time
1. Appending to a singly-linked list with a tail pointer takes:
1. constant time
2. linear time
3. log time
4. n log n time
1. Suppose you have a pointer to a node near the end of a long singly-linked list. You can then insert a new node just prior in:
1. n log n time
2. log time
3. linear time
4. constant time
1. Suppose you have a pointer to a node near the end of a long singly-linked list. You can then insert a new node just after in:
1. log time
2. linear time
3. n log n time
4. constant time
1. Suppose you have a pointer to a node near the end of a long singly-linked list. You can then insert a new node just after with as few pointer assignments as:
1. 1
2. 5
3. 4
4. 3
5. 2
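Inserting just after a known node in a singly-linked list splices with two pointer assignments, regardless of list length. A minimal Python sketch (the `Node` class is an assumed helper):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def insert_after(node, value):
    # Two pointer assignments splice the new node in: it adopts the
    # old successor, then becomes the new successor.
    new = Node(value)
    new.next = node.next   # assignment 1
    node.next = new        # assignment 2
    return new
```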

## Concept:singly-linked lists (deletions)

1. Removing the first item from a singly-linked list without a tail pointer takes:
1. log time
2. linear time
3. constant time
4. n log n time
1. Removing the last item from a singly-linked list with a tail pointer takes:
1. linear time
2. n log n time
3. log time
4. constant time
1. Removing the last item from a singly-linked list without a tail pointer takes:
1. linear time
2. log time
3. constant time
4. n log n time
1. Removing the first item from a singly-linked list with a tail pointer takes:
1. constant time
2. n log n time
3. log time
4. linear time
1. In a singly-linked list, you can move the tail pointer back one node in:
1. constant time
2. log time
3. n log n time
4. linear time
1. Suppose you have a pointer to a node in the middle of a singly-linked list. You can then delete that node in:
1. n log n time
2. log time
3. linear time
4. constant time

## Concept:doubly-linked lists (insertions)

1. Appending to a non-circular, doubly-linked list without a tail pointer takes:
1. log time
2. constant time
3. linear time
4. n log n time
1. Appending to a non-circular, doubly-linked list with a tail pointer takes:
1. linear time
2. constant time
3. log time
4. n log n time
1. Removing the first item from a non-circular, doubly-linked list without a tail pointer takes:
1. n log n time
2. constant time
3. linear time
4. log time
1. Suppose you have a pointer to a node in the middle of a doubly-linked list. You can then insert a new node just after in:
1. constant time
2. n log n time
3. log time
4. linear time
1. Suppose you have a pointer to a node in the middle of a doubly-linked list. You can then insert a new node just prior with as few pointer assignments as:
1. 2
2. 1
3. 3
4. 5
5. 4
1. T or F: Making a doubly-linked list circular removes the need for a separate tail pointer.

## Concept:doubly-linked lists (deletions)

1. Removing the first item from a doubly-linked list with a tail pointer takes:
1. log time
2. n log n time
3. linear time
4. constant time
1. In a doubly-linked list, you can move the tail pointer back one node in:
1. linear time
2. constant time
3. n log n time
4. log time
1. In a doubly-linked list, what does a tail pointer gain you?
1. the ability to append to the list in constant time
2. the ability to both append to and remove the last element of the list in constant time
3. the ability to remove the last element of the list in constant time
4. the ability to both prepend to and remove the first element of the list in constant time
5. the ability to remove the first element of the list in constant time
6. the ability to prepend to the list in constant time

## Concept:input-output order

1. These values are pushed onto a stack in the order given: 1 5 9. A pop operation would return which value?
1. 1
2. 5
3. 9
1. LIFO ordering is the same as:
1. FILO
2. FIFO
3. LILO

## Concept:time and space complexity

1. Consider a stack based upon a fillable array with pushes onto the back of the array. What is the time complexity of the worst case behavior for push and pop, respectively? You may assume there is sufficient space for the push operation.
1. constant and linear
2. constant and constant
3. linear and linear
4. linear and constant
1. Consider a stack based upon a circular array with pushes onto the front of the array. What is the time complexity of the worst case behavior for push and pop, respectively? You may assume there is sufficient space for the push operation.
1. linear and constant
2. linear and linear
3. constant and linear
4. constant and constant
1. Consider a stack based upon a dynamic array with pushes onto the back of the array. What is the time complexity of the worst case behavior for push and pop, respectively? You may assume there is sufficient space for the push operation and that the array never shrinks.
1. linear and linear
2. linear and constant
3. constant and linear
4. constant and constant
1. Consider a stack based upon a dynamic array with pushes onto the back of the array. What is the time complexity of the worst case behavior for push and pop, respectively? You may assume there is sufficient space for the push operation and that the array may shrink.
1. linear and linear
2. constant and constant
3. constant and linear
4. linear and constant
1. Consider a stack based upon a dynamic circular array with pushes onto the front of the array. What is the time complexity of the worst case behavior for push and pop, respectively? You may assume the array may grow or shrink.
1. linear and constant
2. constant and constant
3. linear and linear
4. constant and linear
1. Consider a stack based upon a singly-linked list without a tail pointer with pushes onto the front of the list. What is the time complexity of the worst case behavior for push and pop, respectively?
1. constant and constant
2. linear and constant
3. linear and linear
4. constant and linear
1. Consider a stack based upon a singly-linked list with a tail pointer with pushes onto the front of the list. What is the time complexity of the worst case behavior for push and pop, respectively?
1. linear and constant
2. constant and linear
3. constant and constant
4. linear and linear
1. Consider a stack based upon a non-circular, doubly-linked list without a tail pointer with pushes onto the front of the list. What is the time complexity of the worst case behavior for push and pop, respectively?
1. linear and constant
2. linear and linear
3. constant and linear
4. constant and constant
1. Consider a stack based upon a doubly-linked list with a tail pointer with pushes onto the front of the list. What is the time complexity of the worst case behavior for push and pop, respectively?
1. linear and constant
2. linear and linear
3. constant and linear
4. constant and constant
1. Suppose a simple fillable array with capacity c is used to implement two stacks, one growing from each end. The stack sizes at any given time are stored in i and j, respectively. If maximum space efficiency is desired, a reliable condition for the stacks being full is:
1. `i == c/2 || j == c/2`
2. `i + j == c`
3. `i == c/2-1 && j == c/2-1`
4. `i == c/2-1 || j == c/2-1`
5. `i + j == c-2`
6. `i == c/2 && j == c/2`
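The space-sharing arrangement in the last question can be sketched directly: one stack grows up from index 0, the other grows down from index c - 1, so every slot remains usable by either stack. A Python illustration of the full test (i and j are the sizes, as above):

```python
def is_full(i, j, c):
    # With one stack growing from each end, the structure is full exactly
    # when the two sizes together cover every slot of the array.
    return i + j == c
```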

## Concept:stack applications

For the following questions, assume the tokens in a postfix equation are processed with the following code, with all functions having their obvious meanings and integer division.
```
s.push(readEquationToken());
while (moreEquationTokens())
{
    t = readEquationToken();
    if (isNumber(t))
        s.push(t);
    else /* t must be an operator */
    {
        operandB = s.pop();
        operandA = s.pop();
        result = performOperation(t,operandA,operandB);
        s.push(result);
    }
}
```
1. If the tokens of the postfix equation `8 2 3 ^ / 2 3 * + 5 1 * -` are read in the order given, what are the top two values in s immediately after the result of the first multiplication is pushed?
1. 3 3
2. 5 6
3. 1 2
4. 1 6
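A runnable Python version of the evaluation loop, with the token stream supplied as a list in place of readEquationToken (division is integer division, as stated):

```python
def evaluate_postfix(tokens):
    # Map each operator token to its action; '^' is exponentiation and
    # '/' is integer division, per the problem statement.
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a // b,
           '^': lambda a, b: a ** b}
    s = []
    for t in tokens:
        if t not in ops:              # t is a number
            s.append(int(t))
        else:                         # t is an operator
            operand_b = s.pop()
            operand_a = s.pop()
            s.append(ops[t](operand_a, operand_b))
    return s.pop()
```

Tracing `evaluate_postfix("8 2 3 ^ / 2 3 * + 5 1 * -".split())` step by step, and watching the stack as each operator fires, answers the question above.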

## Concept:input-output order

1. These values are enqueued onto a queue in the order given: 1 5 9 4. A dequeue operation would return which value?
1. 5
2. 9
3. 4
4. 1
1. FIFO ordering is the same as:
1. LIFO
2. LILO
3. FILO

## Concept:complexity

1. Consider a queue based upon a simple fillable array with enqueues onto the front of the array. What is the time complexity of the worst case behavior for enqueue and dequeue, respectively? Assume there is room for the operations.
1. linear and linear
2. linear and constant
3. constant and linear
4. constant and constant
1. Consider a queue based upon a circular array with enqueues onto the front of the array. What is the time complexity of the worst case behavior for enqueue and dequeue, respectively? Assume there is room for the operations.
1. linear and constant
2. constant and linear
3. linear and linear
4. constant and constant
1. Consider a queue based upon a singly-linked list without a tail pointer with enqueues onto the front of the list. What is the time complexity of the worst case behavior for enqueue and dequeue, respectively?
1. linear and constant
2. constant and linear
3. constant and constant
4. linear and linear
1. Consider a queue based upon a singly-linked list with a tail pointer with enqueues onto the front of the list. What is the time complexity of the worst case behavior for enqueue and dequeue, respectively?
1. linear and linear
2. constant and linear
3. linear and constant
4. constant and constant
1. Consider a queue based upon a doubly-linked list with a tail pointer with enqueues onto the front of the list. What is the time complexity of the worst case behavior for enqueue and dequeue, respectively?
1. linear and linear
2. linear and constant
3. constant and constant
4. constant and linear
1. Consider a queue based upon a non-circular, doubly-linked list without a tail pointer with enqueues onto the front of the list. What is the time complexity of the worst case behavior for enqueue and dequeue, respectively?
1. linear and constant
2. linear and linear
3. constant and constant
4. constant and linear

## Concept: complexity

1. Consider a worst-case binary search tree with n nodes. What is the average case time complexity for finding a value at a leaf?
1. n log n
2. linear
3. constant
4. log n
5. $\sqrt{n}$
1. Consider a binary search tree with n nodes. What is the worst case time complexity for finding a value at a leaf?
1. linear
2. constant
3. log n
4. n log n
5. $\sqrt{n}$
1. Consider a binary search tree with n nodes. What is the minimum and maximum height (using order notation)?
1. linear and linear
2. log n and linear
3. constant and log n
4. constant and linear
5. log n and log n

## Concept: balance

1. Which ordering of input values builds the most unbalanced BST? Assume values are inserted from left to right.
1. 1 2 3 4 5 7 6
2. 4 3 1 6 2 8 7
3. 1 7 2 6 3 5 4
1. Which ordering of input values builds the most balanced BST? Assume values are inserted from left to right.
1. 1 4 3 2 5 7 6
2. 1 2 7 6 0 3 8
3. 4 3 1 6 2 8 7

## Concept: tree shapes

1. What is the best definition of a perfect binary tree?
1. all leaves are equidistant from the root
2. all null children are equidistant from the root
3. all nodes have zero or two children
4. all leaves have zero children
1. Suppose a binary tree has 10 leaves. How many nodes in the tree must have two children?
1. 7
2. 10
3. 8
4. 9
5. no limit
1. Suppose a binary tree has 10 nodes. How many nodes are children of some other node in the tree?
1. 10
2. no limit
3. 9
4. 7
5. 8
1. Let P0, P1, and P2 refer to nodes that have zero, one or two children, respectively. Using the generally accepted definition, what is a full binary tree?
1. all leaves are equidistant from the root
2. all interior nodes are P2; all leaves are equidistant from the root
3. all nodes are P2
4. all interior nodes are P1
5. all interior nodes are P2
6. all interior nodes are P1, except the root
1. Let P0, P1, and P2 refer to nodes that have zero, one or two children, respectively. Using the generally accepted definition, what is a perfect binary tree?
1. all nodes are P0 or P2
2. all interior nodes are P1
3. all interior nodes are P2; all leaves are equidistant from the root
4. all interior nodes are P1, except the root
5. all leaves are equidistant from the root
6. all interior nodes are P2
1. Let P0, P1, and P2 refer to nodes that have zero, one or two children, respectively. Using the generally accepted definition, what is a degenerate binary tree?
1. all leaves are equidistant from the root
2. all interior nodes are P1
3. all interior nodes are P2
4. all nodes are P0 or P2
5. all interior nodes are P1, except the root
6. all interior nodes are P2; all leaves are equidistant from the root
1. Let P0, P1, and P2 refer to nodes that have zero, one or two children, respectively. Using the generally accepted definition of a complete binary tree, which of the following actions can be used to make any complete tree? Assume the leftmost and rightmost sets may be empty.
1. another name for a perfect tree
2. making the leftmost leaf of a perfect tree P1
3. removing the rightmost leaves from a perfect tree
4. making the leftmost leaves of a perfect tree P2
5. making the leftmost leaves of a perfect tree P1 or P2
1. T or F: All perfect trees are full trees.
1. T or F: All full trees are complete trees.
1. T or F: All complete trees are perfect trees.
1. How many distinct binary trees can be formed from exactly two nodes with values 1 and 2 (hint: think about how many permutations of values there are for each tree shape)?
1. 1
2. 5
3. 3
4. 2
5. 4
1. How many distinct binary tree shapes can be formed from exactly two nodes?
1. 1
2. 3
3. 4
4. 5
5. 2
1. Let k be the number of steps from the root to a leaf in a perfect tree. What is the number of nodes in the tree?
1. ${2}^{k-1}-1$
2. ${2}^{k-1}+1$
3. ${2}^{k}-1$
4. ${2}^{k+1}-1$
5. ${2}^{k+1}$
1. Let k be the number of steps from the root to the furthest leaf in a binary tree. What would be the minimum number of nodes in such a tree? Assume k is a power of two.
1. ${2}^{k+1}-1$
2. log k
3. k + 1
4. k
5. ${2}^{k+1}$
6. (log k) + 1
1. Let k be the number of steps from the root to the furthest leaf in a binary tree. What would be the maximum number of nodes in such a tree? Assume k is a power of two.
1. k
2. log k
3. ${2}^{k+1}-1$
4. ${2}^{k+1}$
5. k + 1
6. (log k) + 1
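The perfect-tree count follows from summing levels: with k steps from root to leaf there are ${2}^{i}$ nodes at depth i, for i from 0 through k, and the sum telescopes to ${2}^{k+1}-1$. A quick Python check of the closed form:

```python
def perfect_tree_nodes(k):
    # One node at depth 0, doubling at each level, down to depth k.
    return sum(2 ** depth for depth in range(k + 1))
```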

## Concept: ordering in a BST

1. For all child nodes in a BST, what relationship holds between the value of a left child node and the value of its parent? Assume unique values.
1. greater than
2. less than
3. there is no relationship
1. For all sibling nodes in a BST, what relationship holds between the value of a left child node and the value of its sibling? Assume unique values.
1. there is no relationship
2. greater than
3. less than
1. Which statement is true about the successor of a node in a BST, if it exists?
1. it is always a leaf node
2. it is always an interior node
3. has no right child
4. has no left child
5. it may be an ancestor
1. Consider a node which holds neither the smallest nor the largest value in a BST. Which statement is true about the node which holds the next higher value, if it exists?
1. it may be an ancestor
2. has no left child
3. it is always a leaf node
4. has no right child
5. it is always an interior node

## Concept: traversals

1. Consider printing out the node values of a binary tree with 25 nodes to the left of the root and 38 nodes to the right. How many nodes are processed before the root's value is printed in a pre-order traversal?
1. 38
2. none of the other answers are correct
3. 53
4. 25
5. 0
6. 54
1. Consider printing out the node values of a binary tree with 25 nodes to the left of the root and 38 nodes to the right. How many nodes are processed before the root's value is printed in an in-order traversal?
1. 25
2. 54
3. 0
4. 53
5. none of the other answers are correct
6. 38
1. Consider printing out the node values of a binary tree with 25 nodes to the left of the root and 38 nodes to the right. How many nodes are processed before the root's value is printed in a post-order traversal?
1. 54
2. 25
3. 38
4. none of the other answers are correct
5. 63
6. 0
1. Consider a perfect BST with even values 0 through 12, to which the value 7 is then added. Which of the following is an in-order traversal of the resulting tree?
1. 12 10 8 7 6 4 2 0
2. 0 2 4 6 7 8 10 12
3. 0 2 4 6 8 10 12 7
4. 0 4 2 7 8 10 12 6
5. 6 2 10 0 4 8 12 7
6. 7 0 2 4 6 8 10 12
1. Consider a perfect BST with even values 0 through 12, to which the value 7 is then added. Which of the following is a level-order traversal of the resulting tree?
1. 0 2 4 6 7 8 10 12
2. 12 10 8 7 6 4 2 0
3. 7 0 2 4 6 8 10 12
4. 0 4 2 7 8 10 12 6
5. 6 2 10 0 4 8 12 7
6. 0 2 4 6 8 10 12 7
1. Consider a level-order traversal of C B A D F E and an in-order traversal of B C A F D E . Do these traversals generate a unique tree and, if so, what is that tree's pre-order traversal?
1. yes, C A D B E F
2. no
3. yes, but the correct answer is not listed
4. yes, C B A F D E
5. yes, C B A D F E
1. Consider an in-order traversal of B C A F D E and a pre-order traversal of C B A D F E . Do these traversals generate a unique tree and, if so, what is that tree's post-order traversal?
1. yes, B F E D A C
2. no
3. yes, F A D B E C
4. yes, but the correct answer is not listed
5. yes, B F A E D C
1. Consider an in-order traversal of B C A F D E and a post-order traversal of C B A D F E . Do these traversals generate a unique tree and, if so, what is that tree's level-order traversal?
1. yes, E A C F B D
2. yes, but the correct answer is not listed
3. no
4. yes, E F C D B A
5. yes, E F A D B C
1. Consider a level-order traversal of C F D E B A and a pre-order traversal of C F E A D B . Do these traversals generate a unique tree and, if so, what is that tree's in-order traversal?
1. yes, F A E C B D
2. yes, F E A C D B
3. yes, but the correct answer is not listed
4. no
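An in-order traversal combined with a pre-order traversal does determine a unique tree when the values are distinct: the first pre-order value is the root, and its position in the in-order sequence separates the left subtree from the right. A Python sketch that reconstructs the tree and emits its post-order:

```python
def post_from_pre_in(pre, ino):
    # The first pre-order value is the root; values before it in the
    # in-order sequence form the left subtree, the rest the right.
    if not pre:
        return []
    root = pre[0]
    k = ino.index(root)
    left = post_from_pre_in(pre[1:k + 1], ino[:k])
    right = post_from_pre_in(pre[k + 1:], ino[k + 1:])
    return left + right + [root]
```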

## Concept: insertion and deletion

1. T or F: Suppose you are given an in-order traversal of an unbalanced BST. If you were to insert those values into an empty BST in the order given, would the result be a balanced tree?
1. T or F: Suppose you are given a pre-order traversal of an unbalanced BST. If you were to insert those values into an empty BST in the order given, would the result be a balanced tree?
1. T or F: Suppose you are given an in-order traversal of a balanced BST. If you were to insert those values into an empty BST in the order given, would the result be a balanced tree?
1. T or F: Suppose you are given a pre-order traversal of a balanced BST. If you were to insert those values into an empty BST in the order given, would the result be a balanced tree?
1. Suppose 10 values are inserted into an empty BST. What are the minimum and maximum resulting heights of the tree? The height is the number of steps from the root to the furthest leaf.
1. 4 and 9
2. 5 and 10
3. 5 and 9
4. 3 and 9
5. 3 and 10
6. 4 and 10
1. Which, if any, of these deletion strategies for non-leaf nodes reliably preserve BST ordering?
1. (i) Swap the values of the node to be deleted and the smallest leaf node with a larger value, then remove the leaf.
2. (ii) Swap the values of the node to be deleted with its predecessor or successor. If the predecessor or successor is a leaf, remove it. Otherwise, repeat the process.
3. (iii) If the node to be deleted does not have two children, simply point the parent's child pointer at the node's child; otherwise, use a correct deletion strategy for nodes with two children.
1. ii and iii
2. i and iii
3. none
4. i
5. ii
6. i and ii
7. all
8. iii

## Concept:heap shapes

1. In a heap, the upper bound on the number of leaves is:
1. $O\left(n\mathrm{log}n\right)$
2. $O\left(n\right)$
3. $O\left(\mathrm{log}n\right)$
4. $O\left(1\right)$
1. In a heap, the distance from the root to the furthest leaf is:
1. $\Theta \left(n\mathrm{log}n\right)$
2. $\Theta \left(n\right)$
3. $\Theta \left(1\right)$
4. $\Theta \left(\mathrm{log}n\right)$
1. In a heap, let ${d}_{f}$ be the distance of the furthest leaf from the root and let ${d}_{c}$ be the analogous distance of the closest leaf. What is ${d}_{f}-{d}_{c}$, at most?
1. 2
2. 0
3. $\Theta \left(\mathrm{log}n\right)$
4. 1
1. What is the maximum number of nodes in a heap that have a single child?
1. 0
2. 2
3. 1
4. $\Theta \left(n\right)$
5. $\Theta \left(\mathrm{log}n\right)$
1. What is the minimum number of nodes in a heap that have a single child?
1. 1
2. one per level
3. 0
4. 2
1. T or F: There can be two or more nodes in a heap with exactly one child.
1. T or F: A heap can have no nodes with exactly one child.
1. T or F: All heaps are perfect trees.
1. T or F: No heaps are perfect trees.
1. T or F: All heaps are complete trees.
1. T or F: No heaps are complete trees.
1. T or F: A binary tree with one node must be a heap.
1. T or F: A binary tree with two nodes and with the root having the smallest value must be a min-heap.
1. T or F: If a node in a heap is a right child and has two children, then its sibling must also have two children.
1. T or F: If a node in a heap is a right child and has one child, then its sibling must also have one child.
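Several of the true/false claims above follow from the complete-tree shape. Numbering the nodes in level order starting at zero (as in the array-storage section below), a node has exactly one child only when its left child is the very last node, so there can be at most one such node. A Python check:

```python
def single_child_nodes(n):
    # In level-order numbering, node i has children at 2*i + 1 and
    # 2*i + 2; count nodes whose left child exists but right does not.
    return sum(1 for i in range(n) if 2 * i + 1 < n <= 2 * i + 2)
```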

## Concept:heap ordering

1. In a min-heap, what is the relationship between a parent and its left child?
1. the parent has the same value
2. the parent has a larger value
3. there is no relationship between their values
4. the parent has a smaller value
1. In a min-heap, what is the relationship between a left child and its sibling?
1. the right child has a larger value
2. there is no relationship between their values
3. both children cannot have the same value
4. the left child has a smaller value
1. T or F: A binary tree with three nodes and with the root having the smallest value and two children must be a min heap.
1. T or F: The largest value in a max-heap can be found at the root.
1. T or F: The largest value in a min-heap can be found at the root.
1. T or F: The largest value in a min-heap can be found at a leaf.
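The ordering questions above all follow from one invariant: in a min-heap, every parent is less than or equal to each of its children, with no constraint between siblings. A minimal checker, offered as a sketch (it assumes the zero-based array layout covered in the next concept, with the children of index i at 2i+1 and 2i+2; the function name is ours, not from the text):

```python
def is_min_heap(a):
    """Check the min-heap invariant for a heap stored in a zero-based list.

    For every index child > 0, the parent at (child - 1) // 2 must hold a
    value no larger than the child's value.
    """
    for child in range(1, len(a)):
        parent = (child - 1) // 2
        if a[parent] > a[child]:
            return False
    return True
```

Because every internal node is smaller than all of its descendants, the largest value in a min-heap must sit at a leaf, even though the invariant does not pin down which leaf.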

## Concept:heaps stored in arrays

1. How would this heap be stored in an array?
1. `[2,4,5,10,11,12,13,15,21]`
2. `[2,10,11,21,12,13,4,5,15]`
3. `[2,10,4,11,13,5,15,21,12]`
4. `[21,11,12,10,13,2,5,4,15]`
1. Printing out the values of the array, from first index to last, yields what kind of traversal of the heap?
1. in-order
2. pre-order
3. level-order
4. post-order
1. Suppose the heap has n values. The root of the heap can be found at which index?
1. n
2. 1
3. n-1
4. 0
1. Suppose the heap has n values. The left child of the root can be found at which index?
1. 1
2. n-1
3. n
4. n-2
5. 0
6. 2
1. Left children in a heap are stored at what kind of indices?
1. all odd
2. all odd but one
3. all even
4. a roughly equal mix of odd and even
5. all even but one
1. Right children in a heap are stored at what kind of indices?
1. all even
2. a roughly equal mix of odd and even
3. all odd
4. all odd but one
5. all even but one
1. The formula for finding the left child of a node stored at index i is:
1. $i*2+1$
2. $i*2+2$
3. $i*2-1$
4. $i*2$
1. The formula for finding the right child of a node stored at index i is:
1. $i*2-1$
2. $i*2$
3. $i*2+1$
4. $i*2+2$
1. The formula for finding the parent of a node stored at index i is:
1. $i/2$
2. $\left(i+2\right)/2$
3. $\left(i-1\right)/2$
4. $\left(i+1\right)/2$
1. If the array uses one-based indexing, the formula for finding the left child of a node stored at index i is:
1. $i*2-1$
2. $i*2+1$
3. $i*2+2$
4. $i*2$
1. If the array uses one-based indexing, the formula for finding the right child of a node stored at index i is:
1. $i*2+2$
2. $i*2-1$
3. $i*2+1$
4. $i*2$
1. If the array uses one-based indexing, the formula for finding the parent of a node stored at index i is:
1. $\left(i+2\right)/2$
2. $i/2$
3. $\left(i-1\right)/2$
4. $\left(i+1\right)/2$
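The zero-based and one-based index formulas asked about above can be collected into a few one-line helpers. A sketch (the function names are ours, not from the text):

```python
# Zero-based indexing: the root lives at index 0.
def left(i):   return 2 * i + 1
def right(i):  return 2 * i + 2
def parent(i): return (i - 1) // 2   # integer (floor) division

# One-based indexing: the root lives at index 1, and the
# child formulas lose their offsets.
def left1(i):   return 2 * i
def right1(i):  return 2 * i + 1
def parent1(i): return i // 2        # integer (floor) division
```

Either convention works; one-based indexing wastes slot 0 of the array but makes the arithmetic slightly cleaner.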
1. Consider a ternary heap stored in an array. The formula for finding the left child of a node stored at index i is:
1. $i*3-1$
2. $i*3$
3. $i*3-2$
4. $i*3+1$
5. $i*3+3$
6. $i*3+2$
1. Consider a ternary heap stored in an array. The formula for finding the middle child of a node stored at index i is:
1. $i*3+2$
2. $i*3-2$
3. $i*3$
4. $i*3-1$
5. $i*3+3$
6. $i*3+1$
1. Consider a ternary heap stored in an array. The formula for finding the right child of a node stored at index i is:
1. $i*3+1$
2. $i*3-1$
3. $i*3$
4. $i*3+3$
5. $i*3+2$
6. $i*3-2$
1. Consider a ternary heap stored in an array. The formula for finding the parent of a node stored at index i is:
1. $i/3-1$
2. $\left(i-2\right)/3$
3. $\left(i+2\right)/3$
4. $\left(i+1\right)/3$
5. $i/3+1$
6. $\left(i-1\right)/3$
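The ternary formulas generalize the binary ones: in a zero-based d-ary heap, the children of index i occupy indices d·i+1 through d·i+d, and the parent sits at (i−1)/d under integer division. With d = 3, a sketch (names ours):

```python
# Zero-based ternary heap: each node has up to three children,
# stored consecutively starting at 3*i + 1.
def t_left(i):   return 3 * i + 1
def t_mid(i):    return 3 * i + 2
def t_right(i):  return 3 * i + 3
def t_parent(i): return (i - 1) // 3   # integer (floor) division
```

All three children of a node map back to the same parent index, since (3i+1−1)/3, (3i+2−1)/3, and (3i+3−1)/3 all floor to i.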

## Concept:heap operations

1. In a max-heap, with no extra bookkeeping of the minimum, finding the minimum value takes time:
1. $\Theta \left(n\mathrm{log}n\right)$
2. $\Theta \left(1\right)$
3. $\Theta \left(n\right)$
4. $\Theta \left(\mathrm{log}n\right)$
1. Suppose a min-heap with n values is stored in an array `a`. In the extractMin operation, which element immediately replaces the root element (prior to this new root being sifted down)?
1. `a[n-1]`
2. `a[1]`
3. `a[2]`
4. the minimum of `a[1]` and `a[2]`
1. The findMin operation in a min-heap takes how much time?
1. $\Theta \left(n\right)$
2. $\Theta \left(1\right)$
3. $\Theta \left(n\mathrm{log}n\right)$
4. $\Theta \left(\mathrm{log}n\right)$
1. The extractMin operation in a min-heap takes how much time?
1. $\Theta \left(\mathrm{log}n\right)$
2. $\Theta \left(n\right)$
3. $\Theta \left(n\mathrm{log}n\right)$
4. $\Theta \left(1\right)$
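extractMin's logarithmic cost comes from moving the last array element into the root slot and then sifting it down one level at a time, swapping with the smaller child until heap order is restored. A zero-based sketch (the function name is ours, not from the text):

```python
def extract_min(a):
    """Remove and return the smallest value of a min-heap stored in list a."""
    root = a[0]
    last = a.pop()               # a[n-1], the element that replaces the root
    if a:
        a[0] = last
        i, n = 0, len(a)
        while True:
            # Find the smaller of the (up to two) children, if any beat a[i].
            small = i
            for c in (2 * i + 1, 2 * i + 2):
                if c < n and a[c] < a[small]:
                    small = c
            if small == i:       # heap order restored
                break
            a[i], a[small] = a[small], a[i]
            i = small            # continue sifting down one level
    return root
```

Each pass of the loop descends one level, so the work is proportional to the height of the heap, Θ(log n).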
1. Merging two heaps of sizes n and m, with m < n, takes how much time?
1. $\Theta \left(\mathrm{log}n+\mathrm{log}m\right)$
2. $\Theta \left(m\mathrm{log}n\right)$
3. $\Theta \left(n*m\right)$
4. $\Theta \left(\mathrm{log}n*\mathrm{log}m\right)$
5. $\Theta \left(n+m\right)$
6. $\Theta \left(n\mathrm{log}m\right)$
1. The insert operation takes how much time?
1. $\Theta \left(n\right)$
2. $\Theta \left(1\right)$
3. $\Theta \left(n\mathrm{log}n\right)$
4. $\Theta \left(\mathrm{log}n\right)$
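insert is the mirror image of extractMin: the new value goes into the next free array slot (keeping the tree complete) and sifts up, comparing against at most one parent per level. A zero-based min-heap sketch (the function name is ours):

```python
def heap_insert(a, value):
    """Insert value into a min-heap stored in list a."""
    a.append(value)              # next free slot keeps the tree complete
    i = len(a) - 1
    while i > 0:
        p = (i - 1) // 2
        if a[p] <= a[i]:
            break                # parent is no larger: heap order holds
        a[i], a[p] = a[p], a[i]  # sift up one level
        i = p
```

The sift-up path climbs from a leaf toward the root, so an insertion costs Θ(log n) in the worst case.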
1. Turning an unordered array into a heap takes how much time?
1. $\Theta \left(n\mathrm{log}n\right)$
2. $\Theta \left(\mathrm{log}n\right)$
3. $\Theta \left(n\right)$
4. $\Theta \left(1\right)$
1. Suppose the values 21, 15, 14, 10, 8, 5, and 2 are inserted, one after the other, into an empty min-heap. What does the resulting heap look like? Heap properties are maintained after every insertion.
1. x
2. y
3. z
4. w
1. Using the standard buildHeap operation to turn an unordered array into a max-heap, how many parent-child swaps are made if the initial unordered array is `[5,21,8,15,25,3,9]`?
1. 5
2. 3
3. 4
4. 2
5. 6
6. 7
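The standard buildHeap sifts down every non-leaf node, starting from the last parent and working back to the root; counting the swaps it performs makes questions like the last one mechanical to check by hand. A zero-based max-heap sketch (the function name is ours):

```python
def build_max_heap(a):
    """Turn list a into a max-heap in place; return the number of swaps."""
    n, swaps = len(a), 0
    for i in range(n // 2 - 1, -1, -1):   # last parent down to the root
        while True:
            # Find the larger of the (up to two) children, if any beat a[i].
            big = i
            for c in (2 * i + 1, 2 * i + 2):
                if c < n and a[c] > a[big]:
                    big = c
            if big == i:                  # heap order holds at this node
                break
            a[i], a[big] = a[big], a[i]   # one parent-child swap
            swaps += 1
            i = big                       # keep sifting the value down
    return swaps
```

Each swap moves the sifted value down one level, so the total is bounded by the sum of the subtree heights, which is Θ(n) overall, even though a single sift-down can cost Θ(log n).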