Numerical Analysis Clicker Questions
Alamgir Hossain and John M. Stockie
Department of Mathematics, Faculty of Science
This teaching resource (including LaTeX source, graphical images and Matlab code) is made available under the
Creative Commons “CC BY-NC-SA” license. This license allows anyone to reuse, revise, remix and redistribute the
databank of clicker questions, provided that it is not for commercial purposes and that appropriate credit is given
to the original authors. For more information, visit http://creativecommons.org/licenses/by-nc-sa/4.0.
1. Introduction
Q1–1. Select the best definition for “numerical analysis”:
(A) the study of round-off errors
(B) the study of algorithms for computing approximate solutions to problems from continuous mathematics
(C) the study of quantitative approximations to the solutions of mathematical problems including consider-
ation of and bounds for the errors involved
(D) the branch of mathematics that deals with the development and use of numerical methods for solving
problems
(E) the branch of mathematics dealing with methods for obtaining approximate numerical solutions of math-
ematical problems
Answer: (B). All 5 definitions are valid in some sense since they reflect some aspect of the field (most are
pulled off the internet). But my favourite definition is (B) because it contains three very important keywords:
the study of algorithms for computing approximate solutions to problems from continuous mathematics
[ algorithms ⇐⇒ computing, approximate ⇐⇒ floating point arithmetic, continuous ⇐⇒ solutions are smooth functions ]
{ Source: JMS }
[Diagram: a 16-bit word with bit positions 1–16, divided into sign (s), exponent (exp) and mantissa fields.]
Q1a–3. You are working with a hypothetical binary computer that stores integers as unsigned 4-bit words. What
is the largest non-negative integer that can be represented on this computer?
(A) 64
(B) 63
(C) 31
(D) 15
(E) 7
Answer: (D). (1111)_2 = 1 × 2^0 + 1 × 2^1 + 1 × 2^2 + 1 × 2^3 = 15.
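The expansion is easy to check mechanically; a quick sketch in Python (used here for illustration, even though the course code is in Matlab):

```python
# Expand (1111)_2 bit by bit: 1*2^0 + 1*2^1 + 1*2^2 + 1*2^3 = 15.
bits = "1111"
value = sum(int(b) * 2**i for i, b in enumerate(reversed(bits)))
print(value)             # 15
print(2**len(bits) - 1)  # 15: the largest unsigned n-bit integer is 2^n - 1
```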
Q1a–4. In 1958 the Russians developed a ternary (base-3) computer called Setun, after the Setun River that flows
near Moscow State University where it was built. In contrast with today’s binary computers, this machine
used “trits” (ternary bits) whose three possible states can be represented as {0, 1, 2}. Its floating-point number
system was based on 27-trit numbers, with 9 trits reserved for the exponent and 18 for the mantissa. What
was the value of machine epsilon ε_M for the Setun?
(A) 3^-19
(B) 3^-18
(C) 3^-9
(D) (1/3) · 2^-18
Answer: (B).
Apply the formula ε_M = B^-t from the notes, where B = 3 is the base and t = 18 is the number of digits
in the mantissa. You may have noticed that I didn't mention a “sign trit” for the mantissa. In actual fact,
the floating-point representation on Setun was more complicated than this, and the sign of a number came
from interpreting one specific trit as {−1, 0, +1} instead.
[Photo: the Setun computer at Moscow State University.]
Q1a–5. In Canada, the total for any store purchase paid in cash is rounded to the nearest 5 cents, whereas no
rounding is done if the payment is by credit/debit card. Suppose that when you return home after purchasing
your groceries with cash, you notice that your bill was $10.07. What is the absolute error in your actual cash
payment?
R_x = |x − fl(x)| / |x| ≤ u,
where u denotes the unit round-off error. Which of the following is true about u?
(A) u = 10^(1-t) for chopping, (1/2) · 10^(1-t) for rounding
(B) u = (1/2) · 10^(1-t) for chopping, 10^(1-t) for rounding
(C) u = (1/2) · 10^(1-t) for rounding, 10^(1-t) for chopping
(D) u = 10^(1-t) for rounding, (1/2) · 10^(1-t) for chopping
Answer: (A).
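The same definitions apply in binary (B = 2). A Python sketch that locates machine epsilon, and hence the unit round-off u = (1/2) · 2^(1-t), for IEEE double precision (t = 53):

```python
# Halve eps until 1.0 + eps/2 rounds back to 1.0; the surviving eps is
# machine epsilon 2^-52, and u = eps/2 is the unit round-off for rounding.
eps = 1.0
while 1.0 + eps / 2.0 > 1.0:
    eps /= 2.0
print(eps)        # 2.220446049250313e-16 = 2^-52
print(eps / 2.0)  # unit round-off u = (1/2) * 2^(1-53)
```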
Q1a–8. Fill in the blank: If f(x) is a real-valued function of a real variable, then the ________ error in the
difference approximation f'(x) ≈ (f(x + h) − f(x)) / h for the derivative goes to zero as h → 0.
(A) absolute
(B) relative
(C) cancellation
(D) truncation
Answer: (D). Strictly, response (A) is also correct since truncation error is an (absolute) difference from
the exact derivative.
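This behaviour is easy to observe numerically; a Python sketch (the choices f = sin and x = 1 are arbitrary, for illustration only):

```python
import math

x, exact = 1.0, math.cos(1.0)  # exact derivative of sin at x = 1
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    approx = (math.sin(x + h) - math.sin(x)) / h
    # The truncation error shrinks roughly in proportion to h.
    print(h, abs(approx - exact))
```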
Q1a–9. The two solutions of the quadratic equation ax^2 + bx + c = 0 given by
x = (−b ± √(b^2 − 4ac)) / (2a)
are computed using floating point arithmetic. Which of the statements below is TRUE?
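The statements themselves are not reproduced here, but the standard pitfall this question targets is subtractive cancellation in −b + √(b^2 − 4ac) when b^2 ≫ |4ac|. A hedged Python sketch of the usual remedy, computing the large-magnitude root first and recovering the other from the product of roots x1 · x2 = c/a:

```python
import math

def roots_naive(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def roots_stable(a, b, c):
    # Pick the sign that avoids cancellation, then use x1 * x2 = c / a.
    d = math.sqrt(b * b - 4 * a * c)
    x1 = (-b - math.copysign(d, b)) / (2 * a)
    return x1, c / (a * x1)

print(roots_naive(1.0, 1e8, 1.0)[0])   # small root, polluted by cancellation
print(roots_stable(1.0, 1e8, 1.0)[1])  # small root, close to -1e-8
```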
Q1a–10. In floating-point arithmetic, which of the following operations on two positive floating-point numbers can
produce an overflow?
(A) addition
(B) subtraction
(C) multiplication
(D) division
Answer: (A). But (C) and (D) are also valid responses. Let x be the largest number that can be represented.
Then the operations x + 1.0, x ∗ 2.0 and x ÷ 0.3 all generate an overflow.
{ Source: Heath [4], Review Question 1.29, p. 40 }
Q1a–11. In floating-point arithmetic, which of the following operations on two positive floating-point numbers can
produce an underflow?
(A) addition
(B) subtraction
(C) multiplication
(D) division
Answer: (C). But (D) is also a valid response. Let x be the smallest positive number that can be represented.
Then the operations x ∗ 0.5 and x ÷ 2.3 both generate an underflow.
{ Source: Heath [4], Review Question 1.30, p. 40 }
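Both effects are easy to reproduce in IEEE double precision; a Python sketch (note that in doubles, x + 1.0 is actually absorbed by rounding because 1.0 is far below the spacing between the largest doubles, so the addition example below uses x + x):

```python
import math
import sys

x = sys.float_info.max                              # largest representable double
print(x + x)                                        # inf: addition overflows
print(x * 2.0)                                      # inf: multiplication overflows

tiny = sys.float_info.min * sys.float_info.epsilon  # smallest positive subnormal
print(tiny * 0.5)                                   # 0.0: multiplication underflows
print(tiny / 2.3)                                   # 0.0: division underflows
```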
Q1a–12. Let {xk} be a decreasing sequence of positive numbers with xk+1 < xk for k = 1, 2, . . . . In what order
should the sum Σ_{k=1}^N xk be computed so as to minimize round-off error?
Answer: (B). Summing in increasing order of magnitude, from the smallest term xN up to the largest term x1, minimizes the accumulated round-off error.
{ Source: Heath [4], adapted from Review Question 1.45, p. 41 }
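A Python sketch of why the order matters: adding tiny terms to a number that is already large loses them one at a time, while accumulating the tiny terms first preserves their sum:

```python
terms = [1.0] + [1e-16] * 10000    # one large term followed by many tiny ones

big_first = 0.0
for t in terms:                    # the large term is absorbed first...
    big_first += t                 # ...then each 1e-16 is rounded away

small_first = 0.0
for t in reversed(terms):          # tiny terms accumulate to ~1e-12 first
    small_first += t

print(big_first)    # exactly 1.0: every tiny term was lost
print(small_first)  # about 1.000000000001
```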
Q1a–13. True or False: If two real numbers can be represented exactly as floating-point numbers, then the result
of a real arithmetic operation on them can also be represented exactly as a floating-point number.
Answer: FALSE. As a counterexample, let x1 = 2 and x2 = 3, which are exact in any binary floating point
system (like the IEEE standard). Then x1/x2 = 2/3 has no exact floating point representation.
{ Source: Heath [4], Review Question 1.7, p. 39 }
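Any pair whose exact quotient is not a finite binary fraction works as a counterexample; for instance 2 and 3, checked here in Python using exact rationals:

```python
from fractions import Fraction

q = 2.0 / 3.0                         # 2.0 and 3.0 are exact doubles; 2/3 is not
print(Fraction(q) == Fraction(2, 3))  # False: the stored quotient is only
print(Fraction(q))                    # the nearest dyadic rational to 2/3
```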
Q1a–14. Below are four floating point approximations, each accompanied by its corresponding exact value. Which
approximation is the most accurate?
(A) 315700, exact value 315690
Q1a–15. You are computing using a floating-point number system that stores a total of t decimal digits in the
mantissa. If you perform an arithmetic computation that has a result with relative error Rx , which of the
statements below is TRUE?
(A) The number of significant digits in the answer is t.
(B) The number of significant digits cannot be predicted because of cancellation or round-off errors.
(C) The number of significant digits is roughly − log10 (Rx ).
(D) None of the above.
Answer: (D).
{ Source: JMS }
Q1a–16. The relative error in an approximate solution is 0.004%. The number of significant digits in the solution
that we can trust is:
(A) 2
(B) 3
(C) 4
(D) 5
Answer: (C). This percentage corresponds to a relative error of 4 × 10^-5. With rounding, this means that
the solution is accurate to 4 significant digits.
{ Source: Holistic Numerical Methods [5], quiz 01 02 }
Q1a–17. Let x̂ be some approximation of an exact value x. Which of the statements below is FALSE?
(A) The relative error is always smaller than the absolute error.
(B) The relative error can be smaller than the absolute error provided x is large enough.
(C) The relative error gives a better idea of the number of correct significant digits.
(D) None of the above.
Answer: (A). Because absolute error is Ex = |x − x̂| and relative error is Rx = |x − x̂|/|x| = Ex /|x|, the
only time Rx < Ex is when |x| > 1.
{ Source: JMS }
Q1a–20. Suppose that you are working with some small quantity h ≪ 1. Which of the following expressions is NOT
of O(h^4) as h → 0?
(A) h^5 + 7h^4 − 0.1h^3
(B) 56h^4
(C) h^27
(D) 27h^4.3
(E) All of the above
Answer: (A). Only response (A) has a term of the form c·h^p with p < 4, which goes to zero slower than h^4.
{ Source: JMS }
Q1a–21. Suppose that you are working with some large quantity N ≫ 1. Which of the following expressions is NOT
of O(N^4) as N → ∞?
(A) (N^6 + 1) / (2N^2 − N + 1)
(B) 0.0001N^4
(C) N^3.6
(D) N^4 + e^N
(E) All of the above
Answer: (D). Only response (D) has a term (e^N) that grows faster than N^4.
{ Source: JMS }
Q1b–5. Using the Taylor series expansion of f(x) = cos(x) near x = 0, an approximation of f(h) for small h is:
(A) sin(h)/h
(B) (cos(h) − 1)/h
(C) sin(0)
(D) cos(0)
(E) 1 − (1/2)h^2
Answer: (D). The first term in the Taylor series is cos(0) = 1. However, response (E) is also correct since
it comes from expanding the Taylor series by an additional two terms:
cos(h) ≈ cos(0) − sin(0)h − (1/2)cos(0)h^2
Q1b–6. Using the Taylor series expansion of f(x) = cos(x) near x = 0, an approximation of f'(0) for small h is:
(A) sin(h)/h
(B) (cos(h) − 1)/h
(C) sin(0)
(D) cos(0)
Answer: (B). Expand to get f(0 + h) ≈ f(0) + hf'(0) and then solve for f'(0).
Q1b–7. Let f(x) be a function that is differentiable on the interval [−5, 5]. If f(−5) = 10, f(0) = −10, and
f(5) = 10, then which of the statements below must be TRUE?
I. For some c ∈ (−5, 5), f(c) = 0.
II. For some c ∈ (−5, 5), f'(c) = 0.
(A) I only
(B) II only
(C) Both I and II
(D) Neither I nor II
Answer: (C). Regarding II, since f(−5) = f(5) = 10, the Mean Value Theorem (or Rolle's theorem) guarantees
some c ∈ (−5, 5) with f'(c) = 0. Regarding I, the Intermediate Value Theorem (IVT) doesn't apply directly to
the interval (−5, 5) since f(−5) = f(5). However, f changes sign on both intervals [−5, 0] and [0, 5]. So there
are at least two roots in (−5, 5), which means that I is also true.
{ Source: JMS }
Q1b–8. Let f(x) be continuous and differentiable for all x ∈ [a, b] as shown in the plot below. Then there exists
some real number c ∈ (a, b) satisfying:
[Plot: a smooth function f(x) on 0 ≤ x ≤ 3, with points a, c, b marked along the x-axis.]
(A) f(c) = (f(b) − f(a)) / (b − a)
(B) f'(c) = (f(b) − b) / (f(a) − a)
(C) f'(c) = 0
(D) f'(c) = (f(b) − f(a)) / (b − a)
Answer: (D). This is an application of the Mean Value Theorem.
Q1b–9. The series Σ_{n=0}^∞ (−4)^n x^{2n} / (2n)! is the Taylor series at x = 0 for the following function:
(A) cos(x)
(B) cos(2x)
Q1b–10. Given that f(2) = 6, f'(2) = −1/2 and f''(2) = 10, what is the most accurate Taylor polynomial approxi-
mation of f(2.2) that you can find?
(A) 5.9
(B) 6.1
(C) 6.2
(D) 7
Answer: (B). These three point values determine the second degree Taylor polynomial:
f(x) ≈ f(2) + f'(2)(x − 2) + (1/2)f''(2)(x − 2)^2 = 6 − (1/2)(x − 2) + 5(x − 2)^2
and then substituting x = 2.2 in the last expression gives
f(2.2) ≈ 6 − (1/2)(0.2) + 5(0.2)^2 = 6 − 0.1 + 0.2 = 6.1
{ Source: JMS }
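The arithmetic can be double-checked in code; a Python sketch of the polynomial constructed in the answer:

```python
def p(x):
    # Second degree Taylor polynomial of f about x = 2,
    # built from f(2) = 6, f'(2) = -1/2, f''(2) = 10.
    return 6.0 - 0.5 * (x - 2.0) + 5.0 * (x - 2.0) ** 2

print(p(2.2))  # 6.1, up to round-off
```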
Q1b–11. Which is the Taylor series for the function ln(x) at the point a = 1?
(A) (x − 1) − (1/2)(x − 1)^2 + (1/3)(x − 1)^3 − (1/4)(x − 1)^4 + · · ·
(B) (x − 1) − (x − 1)^2 + 2(x − 1)^3 − 6(x − 1)^4 + · · ·
(C) ln(x) + (1/x)(x − 1) − (1/x^2)(x − 1)^2 + (2/x^3)(x − 1)^3 − (6/x^4)(x − 1)^4 + · · ·
(D) ln(x) + (1/x)(x − 1) − (1/(2x^2))(x − 1)^2 + (1/(3x^3))(x − 1)^3 − (1/(4x^4))(x − 1)^4 + · · ·
Answer: (A). It's easy to eliminate (C) and (D) since they are not power series in (x − 1): they still contain
the functions ln(x) and 1/x.
{ Source: MathQuest [8], Taylor series }
Q1b–12. What function is represented by the Taylor series 1 − x^2/2 + x^4/24 − x^6/720 + · · · at a = 0?
(A) ex
(B) sin x
(C) cos x
(D) This is not a Taylor series
Answer: (C).
{ Source: MathQuest [8], Taylor series }
Q1b–13. Suppose that you determine the Taylor series for some function f(x) centered at a = 5. At which point x
would you expect a finite number of terms from this series to give a better approximation?
(A) x = 0
(B) x = 3
(C) x = 8
(D) There is no way to tell
(A) n + 2
(B) n + 1
(C) n
(D) n − 1
Answer: (C). The fundamental theorem of algebra guarantees that there are exactly n complex roots (or
zeroes), some of which might be repeated roots. However, if you are interested in real roots, then they could
number anywhere between zero and n.
{ Source: Holistic Numerical Methods [5] }
Q2–2. How many real roots does the equation sin(x) − x = 0 have?
(A) 0
(B) 1
(C) 3
(D) Infinitely many
Answer: (B). Rewriting the equation as sin(x) = x suggests sketching the two functions y = x and y = sin(x)
on the same axes and looking for intersection points. Then it’s easy to confirm that x = 0 is the only real
root.
[Plot: y = sin(x) and y = x on the same axes for −10 ≤ x ≤ 10; the only intersection is at x = 0.]
{ Source: MAH }
Q2–3. How many real roots does the equation sin(x) − ax = 0 have? (a is some real constant)
(A) 0
(B) 1
(C) 3
(D) Infinitely many
(E) It depends on a
Answer: (E). By sketching the function y = sin(x) alongside various straight lines y = ax it is easy to
show that there is always at least 1 root (x = 0) and that there are infinitely many roots if a = 0. With a bit
more investigation, as a varies there can be any odd, positive number of roots. So strictly, responses (B)–(E)
are all correct, while response (A) is not.
[Plot: y = sin(x) together with straight lines y = ax of various slopes, for −10 ≤ x ≤ 10.]
{ Source: JMS }
Q2–4. True or False: When searching iteratively for a solution x∗ of f(x) = 0, having a small value of the residual
|f(xk)| guarantees that xk is an accurate approximation of x∗.
Answer: FALSE. A small residual only guarantees a small error when the problem is well-conditioned; if f is
very flat near x∗, then |f(xk)| can be tiny even when xk is still far from the root.
{ Source: Heath [4], Review Question 5.1, p. 245 }
[Plot: f(x) = (x + 1)(x − 1)(x − 3) on the interval −4 ≤ x ≤ 4.]
(A) 3
(B) 2
(C) 1
(D) none of the above
Answer: (C). All three roots (−1, 1 and 3) lie within the interval [−4, 4], so the bisection algorithm is
guaranteed to converge. However, the algorithm will only converge to one of those roots.
{ Source: MAH }
Q2a–2. True or False: Let f(x) = x^2 − 2x + 1. The bisection method can be used to approximate the root of the
function f(x) pictured.
[Plot: f(x) = x^2 − 2x + 1 on 0 ≤ x ≤ 2, touching zero at x = 1.]
Answer: FALSE. Because f(x) = (x − 1)^2 ≥ 0 never changes sign, there is no bracketing interval with
f(a) · f(b) < 0, and so the bisection method cannot be applied.
{ Source: MAH }
Q2a–3. You are provided with a list of point values for the function f(x) = x + 10 − e^x:
x      0      1      2      3       4
f(x)   9.000  8.282  4.611  −7.086  −40.598
Use this data to perform two steps of the bisection method for solving f(x) = 0, assuming the initial interval
[0, 4]. What is the approximation of the root?
(A) 1
(B) 2
(C) 3
(D) 4
Answer: (C).
{ Source: MACM 316 final exam question (Fall 2018) }
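The two steps can be traced in code; a minimal Python sketch of bisection applied to this f on [0, 4]:

```python
import math

def f(x):
    return x + 10.0 - math.exp(x)

a, b = 0.0, 4.0
for _ in range(2):            # two bisection steps
    m = (a + b) / 2.0
    if f(a) * f(m) < 0.0:     # sign change in the left half
        b = m
    else:                     # sign change in the right half
        a = m
print(m)  # 3.0, matching response (C)
```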
Q2a–4. You are computing a root of f(x) = cos(x) − x on the interval [0, 2] using the bisection method. If you
require a result that is accurate to within five significant digits, which of the following bounds holds for the
minimum number of bisection iterations k required?
(A) 1/2^(k−1) < 10^-5
(B) 1/2^k < 10^-5
(C) 2/2^(k+1) < 10^-5
(D) none of the above
Answer: (A).
{ Source: MAH }
Q2a–5. Which of the statements below regarding the convergence of the bisection method is TRUE?
I. The iteration is always guaranteed to converge.
II. The order of convergence is linear.
III. The asymptotic rate of convergence is 1/2.
Answer: (D). All three statements are true, but only assuming that you choose an initial interval [a, b]
having f (a) · f (b) < 0. If that’s not the case, then the method will not converge so none of the statements
hold.
{ Source: JMS, MACM 316 lecture notes }
Q2a–6. You are given an interval [a, b] where f(a) · f(b) < 0. Which of the statements below regarding the bisection
method is TRUE?
I. There must be a root x∗ somewhere in the interval (a, b).
II. The absolute error in the first step of bisection is |x1 − x∗| ≤ (b − a)/2.
III. The bisection iteration is guaranteed to converge if |f'(x)| < 1.
(A) I and II
(B) II and III
(C) I and III
(D) I, II and III
Answer: (A).
{ Source: JMS, MACM 316 lecture notes }
Q2a–7. Which of the following is a suitable initial interval for computing a positive root of the equation
g(x) = 2x cos(πx) − e^(x−1) = 0
using the bisection method?
(A) [−1, 0]
(B) [−1, 1]
(C) [0, 1]
(D) [0, 2]
Answer: (D). From the interval end-point values
g(−1) = 2 − e^-2 > 0,   g(0) = −1 < 0,   g(1) = −2 − e^0 = −3 < 0,   g(2) = 4 − e > 0,
the only interval that is a valid bracket for a positive root is [0, 2], since g(0) · g(2) < 0. Responses (A) and
(B) are also brackets, but [−1, 0] can only contain negative roots, while [−1, 1] may or may not have a positive
root. This plot displays the function, the interval end-points, and the desired positive root:
[Plot: g(x) on −1 ≤ x ≤ 2, with the interval end-points and the desired positive root marked.]
Answer: (A).
{ Source: MACM 316 midterm question (Fall 2018) }
Q2b–4. Which of the following is a suitable fixed point iteration for solving the equation cos(x) − x = 0?
I. xk+1 = cos(xk )
II. xk+1 = cos^-1(xk)
III. xk+1 = xk + tan(xk )
(A) I
(B) I and II
(C) I and III
(D) I, II and III
Answer: (B).
{ Source: MAH }
(A) I and II
(B) II and III
(C) I and III
(D) I, II and III
Answer: (C).
{ Source: MAH }
(A) I
(B) II
(C) I and II
(D) Neither I nor II
Answer: (D).
{ Source: MAH }
(A) I and II
(B) II and III
(C) I and III
(D) I, II and III
Answer: (C). Statement II is false in general, but could be true if |g'(x)| < 1/2, since then the asymptotic
rate constant for the fixed point method is smaller than that of bisection.
{ Source: JMS, MACM 316 lecture notes }
Answer: (B).
{ Source: MACM 316 midterm question (Fall 2018) }
Q2c–2. True or False: The convergence of the secant method to simple roots is quadratic.
Answer: FALSE. The convergence of the secant method to simple roots is superlinear, and lies between
linear and quadratic convergence (the order of convergence is actually p ≈ 1.618).
{ Source: MAH }
Q2c–3. True or False: Both Newton's method and the secant method are examples of fixed-point iterations.
Answer: FALSE. Newton's method is a fixed point iteration, but the secant method is not: each secant
iterate depends on the two previous iterates xk and xk−1, so it cannot be written in the form xk+1 = g(xk).
Q2c–4. The irrational number e can be approximated by applying Newton's method to solve the nonlinear equation
f (x) = ln x − 1 = 0. What is the Newton iteration formula?
(A) xk+1 = ln xk
(B) xk+1 = xk − ln xk
(C) xk+1 = xk − (ln xk − 1)
(D) xk+1 = 2xk − xk ln xk
Answer: (D).
{ Source: MAH }
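Iterating formula (D) really does converge to e; a quick Python check (x0 = 3 is an arbitrary starting guess):

```python
import math

x = 3.0
for _ in range(10):
    x = 2.0 * x - x * math.log(x)   # Newton step for ln(x) - 1 = 0
print(x)  # ≈ 2.718281828459045 = e
```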
Q2c–5. The irrational number √2 can be approximated by applying Newton's method to the nonlinear equation
f(x) = x^2 − 2 = 0. What is the Newton iteration formula?
(A) xk+1 = xk + (xk^2 − 2)/(2xk)
(B) xk+1 = xk − (xk^2 − 2)/xk
(C) xk+1 = xk + (xk^2 − 2)/xk
(D) xk+1 = xk − (xk^2 − 2)/(2xk)
(E) xk+1 = xk/2 + 1/xk
Answer: (D). But (E) is also a correct response, since simplifying (D) gives exactly (E).
Indeed, (E) is a remarkably efficient formula that has been used to implement the square root operation on
many modern floating-point processors! Generalizing to the case of finding √c as a root of f(x) = x^2 − c, the
Newton iteration is
xk+1 = xk/2 + c/(2xk) = (1/2)(xk + c/xk)
Counting the flops involved, each iteration requires a reciprocal (1/xk), one multiplication by c, one addition,
and one division by 2 (which is just a bit-shift in binary). This is an incredibly cheap way to approximate √c
when the number of Newton iterations is small.
{ Source: MAH }
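A Python sketch of this iteration (six iterations from x0 = 1 are plenty for √2):

```python
def newton_sqrt(c, x0=1.0, iters=6):
    x = x0
    for _ in range(iters):
        x = 0.5 * x + c / (2.0 * x)   # xk+1 = xk/2 + c/(2 xk)
    return x

print(newton_sqrt(2.0))   # ≈ 1.4142135623730951, agrees with 2**0.5
```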
Q2c–6. Apply Newton's method to the equation f(x) = x^2 − 2 = 0 to estimate the root x∗ = √2. Starting with
the initial guess x0 = 1, the first iteration x1 is:
(A) 0.5
(B) 1.0
(C) 1.5
(D) 2.0
Answer: (C). The Newton iteration is x1 = x0 − (x0^2 − 2)/(2x0) = x0/2 + 1/x0. Setting x0 = 1 gives x1 = 3/2.
{ Source: MAH }
Q2c–7. For which values of x does Newton's method exhibit linear and quadratic order of convergence when applied
to the function f (x) = x(x − 3)2 ?
(A) linear near x = 0 and quadratic near x = 3
(B) linear near x = 3 and quadratic near x = 0
(C) linear convergence everywhere
(D) quadratic convergence everywhere
Answer: (B).
{ Source: MAH }
Answer: (D).
{ Source: Heath [4], Review Question 5.16, p. 246 }
(A) I
(B) II
(C) I and II
(D) Neither I nor II
Answer: (D).
{ Source: JMS, MACM 316 lecture note }
Q2c–10. The Newton iteration xk+1 = xk − f(xk)/f'(xk) can be rewritten as a fixed point iteration
xk+1 = g(xk) where g(x) = x − f(x)/f'(x).
Which of these statements is TRUE?
I. Newton's method fails when f'(xk) = 0.
II. The iteration converges if |g'(xk)| < 1.
(A) I
(B) II
(C) I and II
(D) Neither I nor II
Answer: (C).
{ Source: MAH }
(A) I
(B) I and II only
(C) II and III only
(D) I, II and III
Answer: (C).
{ Source: JMS, MACM 316 lecture notes }
Q2c–14. After writing a Matlab code that implements Newton's method, you apply it to the function f(x) =
2x^3 − 3x^2 + 4x − 6. Using an initial guess of x0 = 1.5, your code returns a result of “-Inf” on the first
iteration. What is the most likely explanation for this behaviour?
Answer: (C). This one is a little bit tricky. You need to look at the Newton iteration
xk+1 = xk − f(xk)/f'(xk)
and see there are two possible invalid floating point operations:
• ±Inf or “1/0”: occurs if f'(xk) = 0 and f(xk) ≠ 0. This is case (C).
• NaN or “0/0”: occurs if both f'(xk) = 0 and f(xk) = 0. This is response (B) – a root with multiplicity 2.
{ Source: JMS }
Q2c–15. Which of the following is an advantage of the secant method over Newton's method?
Q2c–16. For the function f(x) plotted below, use each of the four points x1, x2, x3 and x4 as the starting guess for
Newton’s method. For which point(s) do you expect the iteration to converge to the root shown?
[Plot: a function f(x) with a single root marked, and four starting points x1, x2, x3, x4 marked along the x-axis.]
Q2c–17. For the function f(x) plotted below, use each of the intervals [x1, x2], [x1, x3] and [x1, x4] as the starting
guess for the secant method. For which interval do you expect the iteration to converge to the root shown?
[Plot: a function f(x) with a single root marked, and points x1, x2, x3, x4 marked along the x-axis.]
(A) [x1 , x2 ]
(B) [x1 , x3 ]
(C) [x1 , x4 ]
Answer: (A). Check by plotting the secant lines joining each pair of points.
{ Source: GoodQuestions [7], Newton’s method }
Q2c–18. Let f be a differentiable function that is defined for all x. Starting Newton's method at a point x0 where
f'(x0) = 0 is . . .
(A) a good choice, because x = x0 is a critical point of f and Newton’s method will converge most rapidly
to the root from there
(B) a bad choice, because the Newton iteration fails
(C) usually a bad choice, but it might work if we’re lucky
Answer: (B). Strictly, (C) is also correct since it's possible we might have started with an x0 that is a root
– this is what is meant by “lucky”. In every other case, the Newton iteration formula fails because of division
by zero.
{ Source: GoodQuestions [7], Newton’s method }
Q2d–2. Suppose you apply an iterative method and obtain the following errors from the first four steps:
Q2d–3. Suppose you apply an iterative method and obtain the following errors from the first six steps:
Q2d–4. If you have an iterative method that converges linearly with rate constant 1/2, how many iterations are
required for the initial error to be reduced by a factor of at least 1000?
(A) 5
(B) 10
(C) 20
(D) 100
Answer: (B). The error from step k obeys Ek ≤ (1/2)Ek−1 =⇒ Ek ≤ (1/2)^k E0. This means that we need
(1/2)^k ≤ 1/1000 =⇒ 2^k ≥ 1000 =⇒ k ≥ log2(1000) ≈ 9.97.
{ Source: JMS }
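The bound can be confirmed directly in Python:

```python
import math

k = math.ceil(math.log2(1000.0))   # smallest k with 2^k >= 1000
print(k)                           # 10
print(0.5 ** k <= 1.0 / 1000.0)    # True: ten halvings reduce the error 1000-fold
```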
Q2d–5. If an iterative method approximately squares the error in every two iterations, then what is its order of
convergence?
(A) 0.5
(B) 1
(C) √2
(D) 2
Answer: (C). We're given that the error in step k satisfies Ek ≈ (Ek−2)^2. But from the definition of
convergence for a method of order p, the error in step k also satisfies Ek ≈ (Ek−1)^p ≈ (Ek−2)^(p^2). Comparing
these two expressions for Ek suggests that p^2 = 2, or p = √2.
{ Source: Heath [4], adapted from Review Question 5.14, p. 246 }
Q2d–6. For some root finding method, you determine that the absolute error in step k satisfies Ek ≤ 0.2(Ek−1)^3.
Which of the statements below is TRUE?
Q2d–7. You use two different iterative methods to compute an approximation, and the errors from each method
are plotted below versus the iteration number:
[Plot: the errors Ek of Method 1 and Method 2 versus iteration k = 0, . . . , 10, with both axes on linear scales.]
What can you conclude about the iterative convergence of the two methods?
(A) Both methods converge quadratically, but Method 1 has a smaller rate constant.
(B) The order of convergence for Method 1 is linear and Method 2 is quadratic.
(C) Both methods converge faster than linear, but the order cannot be estimated.
(D) No conclusion is possible based on this plot.
Answer: (D). The order of convergence can't be estimated using this error plot with both axes having linear
scales. This is because the definition of convergence reads
Ek ≤ α(Ek−1)^p =⇒ Ek ≤ α^k (E0)^(p^k)
(for order p and rate α), which requires taking a logarithm of Ek to reliably estimate either p or α.
{ Source: JMS }
Q2d–8. You use two different iterative methods to compute an approximation, and the errors from each method
are plotted below versus the iteration number:
[Plot: the errors Ek of Method 1 and Method 2 versus iteration k = 0, . . . , 10, on a semi-log scale; Method 1's
errors lie on a straight line, while Method 2's curve downward.]
What can you conclude about the iterative convergence of the two methods?
(A) Both methods converge quadratically, but Method 1 has a smaller rate constant.
(B) The order of convergence for Method 1 is linear and Method 2 is quadratic.
(C) Method 1 converges linearly and Method 2 is super-linear.
(D) No conclusion is possible based on this plot.
Answer: (C). The points from Method 1 appear to lie on a straight line, which indicates linear convergence.
Method 2 curves downward so it's definitely superlinear. It might converge quadratically, but that's not possible
to estimate from this semi-log plot. Instead, you would have to plot log(log Ek) (a double-log) versus k.
{ Source: JMS }
Q3a–3. True or False: If a matrix A is nonsingular then the number of solutions to the linear system Ax = b
depends on the particular choice of right hand side vector b.
Q3a–4. True or False: If a triangular matrix has a zero entry on its main diagonal then the matrix must be
singular.
Answer: TRUE. The determinant of any triangular matrix is the product of the diagonal entries, and so the
determinant of this matrix is zero.
{ Source: Heath [4], Review Question 2.3, p. 92 }
Answer: TRUE. The 1-norm is the maximum column sum, the ∞-norm is the maximum row sum, and
the transpose “flips” the matrix so the columns turn into rows.
{ Source: Heath [4], Review Question 2.24, p. 93 }
Q3a–8. Which of the following is a reliable indicator that a matrix is nearly singular?
(A) a small determinant
(B) a small norm
(C) a large norm
(D) a large condition number
Answer: (D). Both (A) and (D) are acceptable answers, at least in exact arithmetic. But floating-point
calculations of determinants can lead to very large growth in round-off error, so the condition number tends
to be a more reliable test.
{ Source: Heath [4], Review Question 2.62, p. 95 }
But response (D) is also correct since that matrix is singular and so has condition number ∞. The matrices
in (B) and (C) are multiples of the identity matrix, so they are perfectly conditioned.
{ Source: Heath [4], Review Question 2.61, p. 95 }
2x + 3y = 5
4x + ay = 8
(A) 1, 2, 3
(B) 1, 4, 6
(C) 1, 0, 0
(D) 4, 5, 6
Answer: (A).
{ Source: MAH }
Q3a–12. Which of the following matrices is strictly diagonally dominant, and hence invertible?
I. [−6 0 3; 7 8 −2; 1 −1 2]     II. [−6 0 3; 5 8 −2; 1 −1 2]
III. [−6 0 3; 5 8 −2; 1 −1 3]    IV. [−6 0 3; 1 2 −2; 1 −1 8]
(A) II
(B) III
(C) II and III
(D) IV
(E) All are strictly diagonally-dominant
Answer: (B). Checking row by row, only III satisfies |aii| > Σ_{j≠i} |aij| in every row (6 > 3, 8 > 7 and
3 > 2). In I the second row fails (8 < 7 + 2), in II the third row is not strict (2 = 1 + 1), and in IV the
second row fails (2 < 1 + 2).
{ Source: JMS }
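Strict (row-wise) diagonal dominance is mechanical to check; a Python sketch with the matrices written as nested lists:

```python
def strictly_diagonally_dominant(A):
    # Row-wise test: |a_ii| must strictly exceed the sum of the
    # absolute values of the off-diagonal entries in every row.
    return all(
        abs(row[i]) > sum(abs(v) for j, v in enumerate(row) if j != i)
        for i, row in enumerate(A)
    )

matrices = {
    "I":   [[-6, 0, 3], [7, 8, -2], [1, -1, 2]],
    "II":  [[-6, 0, 3], [5, 8, -2], [1, -1, 2]],  # row 3: |2| = |1| + |-1|, not strict
    "III": [[-6, 0, 3], [5, 8, -2], [1, -1, 3]],
    "IV":  [[-6, 0, 3], [1, 2, -2], [1, -1, 8]],
}
for name, A in matrices.items():
    print(name, strictly_diagonally_dominant(A))  # only III passes
```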
Q3a–13. True or False: It's a well-known fact that every strictly diagonally-dominant matrix is nonsingular. Is it
also true that every nonsingular matrix is strictly diagonally-dominant?
Answer: FALSE. A simple counterexample is the matrix A = [0 1; 1 0] (the permuted identity matrix). This
is clearly invertible (A^-1 = A) but not diagonally-dominant (DD). This brings home an important point:
checking for DD is a really easy test to determine whether a matrix is nonsingular, but it doesn't work for all
nonsingular matrices (that is, some matrices that aren't DD can still be invertible, like A above).
{ Source: JMS }
Q3a–14. Find w = (−1, 4, 12)^T as a linear combination of u = (−2, 2, 3)^T and v = (1, 0, 2)^T.
(A) w = 2u + 3v
(B) w = 2u − 3v
(C) w = u + 3v
(D) w = 2u + v
Answer: (A). Check: (−1, 4, 12)^T = 2(−2, 2, 3)^T + 3(1, 0, 2)^T.
{ Source: MAH }
x + 2y = 5
2x + 4y = 10
[Options (A)–(D): four plots, each showing a pair of lines for −1 ≤ x ≤ 2.]
Answer: (B). The second equation is simply the first multiplied by 2, so these are just two equations for
the same line (both with negative slope).
{ Source: MAH }
−3x + 6y = 3
2x − 4y = 4
(A) x = −1 and y = 0
(B) x = 3 and y = 2
(C) x = 3 and y = 1/2
(D) There are no solutions
(E) There are an infinite number of solutions
Answer: (D). The equations can be scaled to get x − 2y = −1 and x − 2y = 2, which are inconsistent with
each other.
{ Source: JMS }
[Plot: two distinct parallel lines on −1 ≤ x ≤ 2.]
(A) x + 2y = 5, 2x + 4y = 10
(B) x + 2y = 5, 2x + 4y = 8
(C) x + 2y = 5, 2x − 4y = 10
(D) x − 2y = 5, 2x + 4y = 8
Answer: (B). Both lines in the plot have the same slope (−1/2), so we need to choose between (A) and (B).
Option (A) gives two identical lines and so the correct answer must be (B).
{ Source: MAH }
[Plot: several lines on −1 ≤ x ≤ 2 with no single common intersection point.]
(A) 0
(B) 1
(C) 3
(D) Infinitely many
Answer: (A). There is no common intersection point and so this system has no solution.
{ Source: MathQuest [8], Linear Algebra, Systems of Equations }
Q3a–19. True or False: For any n × n matrix A, the equation Ax = 0 has the unique solution x = 0.
Answer: FALSE. The equation Ax = 0 has x = 0 as a unique solution if and only if A is nonsingular,
which is equivalent to saying that A^-1 exists or det(A) ≠ 0.
{ Source: MAH }
Q3a–20. Suppose that both A and B are n × n matrices. Which of the following statements is FALSE?
(A) diagonal
(B) identity
(C) lower triangular
(D) upper triangular
Answer: (D).
{ Source: JMS }
(A) det(A) = 0
(B) A zero entry arises in the pivot position as Gaussian elimination with partial pivoting is applied to A
(C) Ax = 0 has only the trivial solution x = 0
(D) cond(A) = ∞
Answer: (B).
{ Source: JMS }
Q3b–3. When solving an upper or lower triangular system of size n × n, the computational cost (measured in terms
of multiplication and division operations) is roughly equal to . . .
(A) n^2
(B) (1/2)n^2
(C) n^3
(D) n
Answer: (B). Row i of back substitution requires one division and n − i multiplications, giving
n + n(n − 1)/2 ≈ (1/2)n^2 operations in total.
{ Source: JMS }
Answer: (C). Response (A) results from r3 ← 2r3 followed by the row swap r1 ↔ r3 . Response (B) results
from r3 ← r2 + r3 . Response (C) involves a column swap.
{ Source: MathQuest [8], Gaussian elimination }
Q3b–5. Which of the following operations on an augmented matrix could change the solution of the corresponding
linear system?
(A) Interchanging two rows
(B) Multiplying one row by any constant
(C) Adding one row to another
(D) None of the above
Answer: (B). Because every response seems to correspond to a row operation, it’s very easy to make the
mistake of choosing (D). However, response (B) could be an invalid row operation if the multiple is zero. So
response (D) is only correct as long as it’s understood that the constant multiple is non-zero.
{ Source: MathQuest [8], Gaussian elimination }
Q3b–6. What is the value of α so that the linear system represented by the following augmented matrix has
infinitely many solutions?
[2 6 8; 1 α 4]
(A) α = 0
(B) α = 2
(C) α = 3
(D) α = 4
(E) There is always a unique solution
Answer: (C). The two rows are linearly independent as long as α ≠ 3; when α = 3, row 1 is twice row 2
and the system becomes underdetermined.
{ Source: MathQuest [8], Gaussian elimination }
Answer: (E).
{ Source: MathQuest [8], Gaussian elimination }
Q3b–9. When computing the LU factorization of an n × n matrix, the computational cost (measured in terms of
multiplication and division operations) is . . .
(A) O(n)
(B) O(n3/2 )
(C) O(n2 )
(D) O(n3 )
Answer: (D).
{ Source: JMS }
Q3b–11. If you have to solve Ax = b many times for different vectors b but the same matrix A, it is best to . . .
Answer: (B). Factor A = LU once (an O(n^3) cost); then each new right hand side requires only the two
triangular solves Ly = b and Ux = y, at O(n^2) cost.
{ Source: JMS }
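A bare-bones Python sketch of the factor-once / solve-many pattern (no pivoting, so it assumes the pivots are nonzero; the 2 × 2 matrix is just for illustration):

```python
def lu_factor(A):
    # Doolittle LU factorization without pivoting (assumes nonzero pivots).
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [row[:] for row in A]
    for i in range(n):
        L[i][i] = 1.0
        for r in range(i + 1, n):
            m = U[r][i] / U[i][i]          # elimination multiplier
            L[r][i] = m
            for c in range(i, n):
                U[r][c] -= m * U[i][c]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):                     # forward substitution: L y = b
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):           # back substitution: U x = y
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_factor(A)                        # O(n^3) work, done once
for b in ([10.0, 12.0], [1.0, 0.0]):       # each extra solve is only O(n^2)
    print(lu_solve(L, U, b))
```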
Q3b–12. True or False: If a linear system is well-conditioned, then pivoting is unnecessary in Gaussian elimination.
Answer: FALSE. As a counterexample, take A to be a permutation of the 3 × 3 identity matrix with a11 = 0,
so that cond(A) = 1, which is perfectly well-conditioned. However, Gaussian elimination will fail without
partial pivoting, because the first pivot element is a11 = 0.
{ Source: Heath [4], Review Question 2.14, p. 92 }
whose LU factorization we want to compute using Gaussian elimination. What will the initial pivot element
be without pivoting, and with partial pivoting?
(A) 0 (no pivoting), 6 (partial pivoting)
(B) 4 (no pivoting), 0 (partial pivoting)
(C) 4 (no pivoting), 6 (partial pivoting)
Answer: (C).
{ Source: Heath [4], adapted from Review Question 2.39, p. 93 }
Q3b–14. You have a system of three linear equations with three unknowns. If you perform Gaussian elimination
and obtain the row-reduced echelon form
[1 −2 4 | 6; 0 1 0 | −3; 0 0 3 | 0]
Q3b–15118 . You have a system of three linear equations with three unknowns. If you perform Gaussian elimination
and obtain the row-reduced echelon form
1 −2 4 6
0 1 0 −3
0 0 0 3
Q3b–17120 . Suppose that a square matrix A is perfectly well-conditioned, meaning that cond(A) = 1. Which of the
following matrices shares this same property?
(A) cA where c is any nonzero scalar
(B) DA where D is any nonsingular diagonal matrix
(C) P A where P is any permutation matrix
(D) A−1 , the inverse of A
(E) AT , the transpose of A
Answer: (A). Responses (C), (D) and (E) are also correct, since a permutation, the inverse and the transpose of
A all have condition number 1, as does cA. The exception is (B): for example, D = diag(1, 2) with A = I gives
cond(DA) = cond(D) = 2.
{ Source: Heath [4], Review Question 2.58, p. 94 }
Equating coefficients in the first column yields b = 0 and ab = 1, which has no solution. So this A has no LU
decomposition even though it's nonsingular. A simpler way to recognize this is that the very first step of the LU
algorithm fails when it encounters a zero in the (1,1) pivot entry.
{ Source: MAH }
Q3b–19122 . If A = P I is some permutation P of the identity matrix, then Gaussian elimination yields the following L
and U factors:
(A) L = U = I
(B) L = U = P I
(C) L = I, U = 0
(D) The factors depend on the permutation matrix P
Answer: (A). If P is a permutation matrix then its inverse is another permutation matrix, P −1 = Pe, and
so PeA = I. This is already in row-reduced form, and so GE with partial pivoting yields the LU factorization
of the same matrix PeA = LU = I. The factors are clearly L = U = I.
{ Source: JMS }
Answer: FALSE. The LU decomposition computed by Matlab involves at least one row swap, so that the L
and U factors are different.
{ Source: JMS }
Q3b–21124 . You perform Gaussian elimination with partial pivoting to solve a linear system on a single-precision
floating point computer with roughly 7 decimal digits of accuracy. If the coefficient matrix has a condition
number of 10^3 and the matrix and right hand side are both measured to within full machine precision,
about how many digits of accuracy would you expect in the approximate solution?
(A) 2
(B) 3
(C) 4
(D) 7
Answer: (C). The error estimate tells us that
    (relative error) ≲ (condition number) × (error in data) ≈ 10^3 × 10^-7 = 10^-4,
which corresponds to 4 decimal digits. This is why we compute in double precision!!
{ Source: Heath [4], Review Question 2.64, p. 95 }
Q3b–22125 . Suppose you are solving a linear system Ax = b on a computer with 12 decimal digits of floating-point
precision, and that the matrix and right hand side are correct to within full machine precision. Roughly how
large can the condition number of the matrix A be before the computed solution x contains NO significant
digits?
(A) 10^10
(B) 10^12
(C) 10^16
(D) 10^22
Answer: (B). The computed solution loses roughly log10(cond(A)) digits of accuracy; once cond(A) ≈ 10^12 ≈ 1/εM,
all 12 digits are lost.
Q3b–23126 . You have a system of linear equations Ax = b with kAk = 250 and kA−1 k = 40 that you solve using
Gaussian elimination on a computer with machine epsilon εM = 1.19 × 10−7 . What is the largest number of
significant digits that you can trust in the solution?
(A) 1
Q3b–24127 . You have a 100 × 100 matrix A and your computer is able to solve the linear system Ax = b in exactly one
minute using the LU factorization. Roughly how much of that minute is spent computing the factorization
A = LU ?
(A) 30 secs
(B) 45 secs
(C) 53 secs
(D) 59 secs
Answer: (D). From the lecture notes, the ratio of time spent on the row-reduction to the total is
    R(n)/T(n) = (2n^3 + 3n^2 − 5n) / (2n^3 + 6n^2 − 2n) ≈ (2 + 3/n) / (2 + 6/n).
When n = 100 this ratio is 2.03/2.06 ≈ 0.99, which corresponds to roughly 59 seconds.
{ Source: JMS }
Q3b–25128 . Suppose that your computer can solve 100 problems of the form U x = c in 1 second, where U is a 50 × 50
upper triangular matrix. Roughly how long will it take to solve a single 500 × 500 problem of the form Ax = b
where A is a full matrix?
(A) 10 seconds
(B) 1 minute
(C) 5 minutes, 30 seconds
(D) 5 hours
Answer: (C). The cost of solving U x = c is a backward substitution, which we know has leading-order cost n^2/2,
so the total cost of the 100 triangular solves is 100 · (50^2/2) = 50^3. The full system has cost n^3/3 = 500^3/3,
which is a factor 1000/3 larger. That translates into about 333 seconds. A rough order of magnitude estimate
(missing the constants) yields a ratio of 500^3 / (100 · 50^2) = 500, and so choice (C) is still the closest answer.
{ Source: JMS }
[Figure: semi-log plot of the Taylor polynomial errors PA(x0 + h) and PB(x0 + h) versus h on a linear axis, 0 ≤ h ≤ 1.]
The degree can only be determined by plotting the data on a log-log scale; this is a semi-log plot, and so the
nature of the polynomials can't be determined.
{ Source: JMS }
[Figure: log-log plot of the errors PA(x0 + h) and PB(x0 + h) versus h, 10^-8 ≤ h ≤ 1.]
(A) PA is degree 2, PB is degree 3
(B) PA is degree 3, PB is degree 4
(C) PA and PB are both degree 3
(D) The degree cannot be determined from this plot
Answer: (A). The remainder/error term for a Taylor polynomial of degree n takes the form
    R_n(h) = f^(n+1)(c) h^(n+1) / (n + 1)!.
On this log-log plot, the errors behave linearly with slopes 3 and 4, for PA and PB respectively. The slope is
n + 1, one more than the degree.
Notice the oscillations appearing on the left half of the plot for small h – these are due to floating-point round-
off error that dominates when the truncation error in the Taylor polynomials approaches the value of machine
epsilon εM ≈ 2 × 10−16 .
{ Source: JMS }
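The slope-reading trick is easy to reproduce numerically. A small sketch (our own example, using the degree-2 Taylor polynomial of e^h about 0, whose error is O(h^3)):

```python
import math

# Error of the degree-2 Taylor polynomial of e^h about x0 = 0; the slope
# of log(error) versus log(h) should be close to n + 1 = 3.
def taylor_err(h):
    p2 = 1.0 + h + h * h / 2.0      # degree-2 Taylor polynomial
    return abs(math.exp(h) - p2)

slope = (math.log(taylor_err(1e-2)) - math.log(taylor_err(1e-3))) \
        / (math.log(1e-2) - math.log(1e-3))     # roughly 3
```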
[Figure: semi-log plot of the error versus the number of terms N in the Taylor polynomial, 0 ≤ N ≤ 25.]
(A) The Taylor remainder term blows up when N gets too large.
(B) The accuracy is limited by the floating point machine epsilon.
(C) Subtractive cancellation errors dominate when too many terms of differing signs are added to the poly-
nomial.
Answer: (B). As N increases, the error in the Taylor approximation gets smaller until it reaches machine
epsilon, εM ≈ 2×10−16 , after which no more improvement in accuracy is possible. This is not really a question
about power laws, but the semi-log plot does clearly illustrate the linear dependence of error on N (for fixed
h) that we expect from:
    R_N(h) = f^(N+1)(c) h^(N+1) / (N + 1)!   =⇒   log R_N ∼ (N + 1) log h + constant
{ Source: JMS }
[Figure: log-log plot of cost versus log10 N for Algorithms A, B and C.]
(A) Algorithm A
(B) Algorithm B
(C) Algorithm C
Answer: (B). This plot has a log-log scale. If cost scales like C = αN^(3/2), then log C = log α + (3/2) log N. So
we are looking for the straight line with slope 3/2, which is (B).
{ Source: JMS }
[Figure: log-log plot of cost versus log10 N.]
(A) O(N)
(B) O(N^(3/2))
(C) O(N^2)
(D) O(N^3)
Answer: (D). This plot has a log-log scale. If cost scales like C = αN^p, then log C = log α + p log N and
should appear as a straight line with slope p. The plotted curve is roughly a straight line with slope
≈ (20 − 2) / (7 − 1) = 3.
{ Source: JMS }
Q3c–6134 . You write a code for an iterative algorithm that you believe converges quadratically. To test whether
you have implemented the algorithm correctly, you plot the relative error in each step, Rk = |xk − x∗ |/|x∗ |,
as a function of the iteration number k. Using which type of plot is it easiest to recognize the quadratic
convergence?
(A) A linear plot showing Rk versus k.
(B) A semi-log plot showing log Rk versus k.
(C) A semi-log plot showing Rk versus log k.
(D) A log-log plot showing log Rk versus log k.
(E) A double-log plot showing log(log Rk ) versus k.
Answer: (E). Because the error in a quadratic method behaves like R_k ∼ R_0^(2^k), then log R_k ∼ (const) · 2^k
and log(log R_k) ∼ (const) + k · log 2. It is easiest to recognize a straight line and so you should plot
log(log R_k) versus k.
{ Source: JMS }
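The log(log) trick can be checked with Newton's method for √2, which converges quadratically (a sketch; the iteration and the tolerances are our own choices):

```python
import math

# Newton's method for x^2 = 2 converges quadratically, so the doubly-logged
# relative errors log(log(1/R_k)) should grow by about log 2 = 0.69 per step.
xstar = math.sqrt(2.0)
x, errs = 1.0, []
for _ in range(4):
    x = x / 2 + 1 / x                      # Newton step for f(x) = x^2 - 2
    errs.append(abs(x - xstar) / xstar)

loglog = [math.log(math.log(1 / r)) for r in errs]
steps = [b - a for a, b in zip(loglog, loglog[1:])]   # each step is near log 2
```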
Q3c–7135 . You write a code for an iterative algorithm that you believe converges superlinearly. To test whether
you have implemented the algorithm correctly, you plot the relative error in each step Rk = |xk − x∗ |/|x∗ |
as a function of the iteration number k. Using which type of plot is it easiest to recognize the superlinear
convergence?
(A) A linear plot showing Rk versus k.
(B) A semi-log plot showing log Rk versus k.
(C) A semi-log plot showing Rk versus log k.
(D) A log-log plot showing log Rk versus log k.
Answer: (B). Because the error in a linearly convergent method behaves like R_k ∼ α^k R_0, then log R_k ∼
k log α + log R_0, which is a linear function of k. So the easiest way to test for superlinear convergence is to plot
log R_k versus k and then look for downward curvature, which indicates a faster-than-linear decay to zero.
{ Source: JMS }
[Figure: three panels (linear, semi-log and log-log axes) plotting the error versus iteration number k for Methods 1–4.]
Q3d–4140 . True or False: You are given the 3 × 3 linear system in augmented matrix form
[ 5  0 −1 | 2 ]
[ 0  0  2 | 4 ]
[ 0  1  0 | 2 ]
Because the diagonal of the coefficient matrix contains zero entries, it is not possible to apply the Jacobi
iterative method.
Answer: FALSE. Swapping rows 2 and 3 converts the matrix to upper triangular form with non-zeroes on
the diagonal, and Jacobi can then be applied.
{ Source: JMS }
Q3d–5141 . You are given the 3 × 3 linear system in augmented matrix form
[ 5  0 −1 | 2 ]
[ 0  1  0 | 2 ]
[ 0  0  2 | 4 ]
Starting from an initial guess of x0 = [1, 1, 1]T , what is the result of the first iteration of the Jacobi method?
(A) x1 = [3/5, 2, 2]^T
(B) x1 = [2/5, 2, 1]^T
(C) x1 = [3/5, 2, 5]^T
Answer: (A).
    x1 = (1/5)(2 − 0 · 1 + 1 · 1) = 3/5
    x2 = (1/1)(2 − 0 · 1 − 0 · 1) = 2
    x3 = (1/2)(4 − 0 · 1 − 0 · 1) = 2
Interesting observation: Jacobi applied to any upper triangular n × n matrix converges to the exact solution
in at most n steps AND it’s equivalent to doing backward substitution . . . although it can do a lot of extra
work to get there!
{ Source: JMS }
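The sweep above is easy to check in a few lines of Python (a sketch, with the system hard-coded from the question):

```python
# One Jacobi sweep for the system in Q3d-5, starting from x0 = [1, 1, 1].
A = [[5.0, 0.0, -1.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 2.0]]
b = [2.0, 2.0, 4.0]

def jacobi_step(A, b, x):
    n = len(b)
    # every component is updated using only the OLD iterate x
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

x1 = jacobi_step(A, b, [1.0, 1.0, 1.0])   # [0.6, 2.0, 2.0], i.e. (3/5, 2, 2)
x2 = jacobi_step(A, b, x1)                # the exact solution, after only 2 sweeps
```

The second sweep already lands on the exact solution, illustrating the observation that Jacobi on an upper triangular system converges in at most n steps.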
Starting from an initial guess of x0 = [1, 1, 1]T , what is the result of the first iteration of the Gauss-Seidel
method?
(A) x1 = [3/5, 2, 2]^T
(B) x1 = [2/5, 2, 1]^T
(C) x1 = [3/5, 2, 5]^T
Answer: (A). This is identical to the Jacobi iteration because the lower triangular part of the matrix is 0!
{ Source: JMS }
Q3d–7143 . You are given the 3 × 3 linear system in augmented matrix form
[ 5  0 −1 | 2 ]
[ 0  1  0 | 2 ]
[ 0 −3  2 | 4 ]
Starting from an initial guess of x0 = [1, 1, 1]T , what is the result of the first iteration of the Gauss-Seidel
method?
(A) x1 = [3/5, 2, 2]^T
(B) x1 = [2/5, 2, 1]^T
(C) x1 = [3/5, 2, 5]^T
Answer: (C).
    x1 = (1/5)(2 − 0 · 1 + 1 · 1) = 3/5
    x2 = (1/1)(2 − 0 · (3/5) − 0 · 1) = 2
    x3 = (1/2)(4 − 0 · (3/5) + 3 · 2) = 5
{ Source: JMS }
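The same check for Gauss-Seidel, where each component immediately uses the newest available values (a sketch, with the system hard-coded from the question):

```python
# One Gauss-Seidel sweep for the system in Q3d-7, starting from x0 = [1, 1, 1].
A = [[5.0, 0.0, -1.0],
     [0.0, 1.0, 0.0],
     [0.0, -3.0, 2.0]]
b = [2.0, 2.0, 4.0]

def gauss_seidel_step(A, b, x):
    x = list(x)                      # copy; components are overwritten in place
    n = len(b)
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]  # uses the NEWEST values already computed
    return x

x1 = gauss_seidel_step(A, b, [1.0, 1.0, 1.0])   # [0.6, 2.0, 5.0], i.e. (3/5, 2, 5)
```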
Q3d–8144 . You apply the Gauss-Seidel method to solve a linear system having an iteration matrix T = −(D + L)−1 U
with spectral radius ρ(T ) = 0.998. Using the fact that log10 (0.998) ≈ −0.0009, roughly how many iterations
are needed to reduce the error in the initial guess by a factor of 10−6 ?
(A) 6
(B) 600
(C) 6000
Answer: (C). The error estimate for a matrix iteration takes the form of an inequality for the error E_k in
step k: E_k ≤ ρ(T)^k E_0. If the error needs to reduce by a factor of 10^-6 then
    ρ(T)^k ≤ 10^-6   =⇒   k · log10(ρ(T)) ≤ −6   =⇒   k ≥ −6 / (−0.0009) ≈ 6000.
{ Source: JMS }
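The exact iteration count can be computed directly (a quick sketch; the answer's ≈ 6000 comes from the rounded value log10(0.998) ≈ −0.0009):

```python
import math

# Smallest k with rho(T)**k <= 1e-6 when rho(T) = 0.998.  Using the exact
# logarithm gives k around 6.9e3, consistent with the ~6000 ballpark above.
rho = 0.998
k = math.ceil(-6 / math.log10(rho))
```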
Q3d–9145 . Below is the output from the Jacobi and Gauss-Seidel iterations for solving the linear system Ax = b, in
each case showing the iteration number k and the norm of the solution difference kxk − xk−1 k:
Answer: (B). Both converge, but Method A does so in about half the number of iterations. This is
characteristic of the Gauss-Seidel method.
{ Source: JMS }
Answer: FALSE. The nth degree polynomial interpolating these points is unique, provided only that the xi
are all distinct.
{ Source: Heath [4], adapted from Review Question 7.3, p. 333 }
Q4a–2147 . Suppose that you have n data points (x1 , y1 ), . . . , (xn , yn ), with the xi all distinct. Which of the following
statements about the nth degree interpolating polynomial is TRUE?
(A) The interpolating polynomial is unique.
(B) There are infinitely many such interpolating polynomials.
(C) There is no polynomial of this degree that interpolates all n points.
Answer: (B). An nth degree polynomial has n + 1 coefficients which are constrained by n interpolating
conditions. So there remains one degree of freedom.
{ Source: Heath [4], adapted from Review Question 7.3, p. 333 }
Q4a–3148 . You determine a function f (x) that passes through the points (x0 , y0 ), (x1 , y1 ), . . . , (xn , yn ) with x0 <
x1 < x2 · · · < xn . For some other point x∗ ∈ (x0 , xn ), you then use f (x∗ ) to approximate the value of the
actual smooth function that underlies the data. This procedure is called:
(A) interpolation
(B) extrapolation
(C) curve fitting
(D) regression
Answer: (A).
{ Source: Holistic Numerical Methods [5] }
Q4a–4149 . Fill in the blank: Suppose you have n + 1 data points (x0 , y0 ), (x1 , y1 ), . . . , (xn , yn ). A unique polynomial
of degree can be found that passes through these points.
(A) n
(B) n or less
(C) n + 1
(D) n + 1 or more
Answer: (B). Assuming the xi are distinct, there is a unique polynomial of degree at most n that passes through
the n + 1 data points. Response (B) is the correct answer because if the data happen to lie on a lower-degree
curve, then some of the higher-degree coefficients are zero and the interpolant has degree less than n.
{ Source: Holistic Numerical Methods [5], MC Question Solution Ch 05.01 Background of Interpolation.pdf }
Q4a–5150 . How many different polynomials can be found that pass through the points (1, 2) and (4, 5)?
(A) 1
(B) 2
(C) 1 or 2
(D) infinitely many
Answer: (D).
{ Source: Holistic Numerical Methods [5], MC Question Solution Ch 05.01 Background of Interpolation.pdf }
Q4a–7152 . The following data describes the velocity of a rocket during lift-off as a function of time:
t (s) 0 14 15 20 30 35
v(t) (m/s) 0 227 363 517 603 902
In order to determine the velocity at t = 25 s, you decide to use a quadratic polynomial v(t) = a+bt+ct2 to ap-
proximate the velocity profile. The most appropriate system of equations for determining the coefficients is . . .
(A) [1 14 196; 1 15 225; 1 20 400] [a; b; c] = [227; 363; 517]
(B) [1 15 225; 1 20 400; 1 30 900] [a; b; c] = [363; 517; 603]
(C) [1 0 0; 1 15 225; 1 30 900] [a; b; c] = [0; 363; 603]
(D) [1 20 400; 1 30 900; 1 35 1225] [a; b; c] = [517; 603; 902]
Answer: (B). Response (A) corresponds to extrapolation and (C) doesn’t use the three data points closest
to t = 25, and so both will probably be less accurate. Responses (B) and (D) should give comparable accuracy
since they are based on intervals that are closest to the interpolation point.
{ Source: JMS }
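As a check, solving system (B) and evaluating the resulting quadratic at t = 25 can be sketched in exact rational arithmetic (the ad hoc Gauss-Jordan solver below is our own):

```python
from fractions import Fraction as F

# Fit v(t) = a + b t + c t^2 through (15, 363), (20, 517), (30, 603) -- the
# system in response (B) -- and evaluate the fit at t = 25 s.
pts = [(15, 363), (20, 517), (30, 603)]
M = [[F(1), F(t), F(t * t), F(v)] for t, v in pts]   # augmented system

for i in range(3):                       # Gauss-Jordan elimination
    M[i] = [entry / M[i][i] for entry in M[i]]
    for k in range(3):
        if k != i:
            M[k] = [mk - M[k][i] * mi for mk, mi in zip(M[k], M[i])]

a, b, c = (row[3] for row in M)
v25 = a + b * 25 + c * 625               # = 597 m/s exactly for this data
```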
Q4a–8153 . Fill in the blanks: For the n + 1 data points (x0, y0), (x1, y1), . . . , (xn, yn), the nth degree Lagrange
polynomial is P_n(x) = Σ_{j=0}^{n} y_j L_j(x), where L_j(x) = Π_{i=0, i≠j}^{n} (x − x_i) / (x_j − x_i).
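The Lagrange form translates directly into code; a minimal evaluation sketch (the function name is our own):

```python
# Evaluate the Lagrange interpolating polynomial P_n(x) directly from the
# formula: P_n(x) = sum_j y_j L_j(x), L_j(x) = prod_{i != j} (x-x_i)/(x_j-x_i).
def lagrange_eval(xs, ys, x):
    n = len(xs)
    total = 0.0
    for j in range(n):
        Lj = 1.0
        for i in range(n):
            if i != j:
                Lj *= (x - xs[i]) / (xs[j] - xs[i])   # basis polynomial L_j(x)
        total += ys[j] * Lj
    return total

# The interpolant reproduces the data at the nodes (here, samples of y = x^3).
p = lagrange_eval([0.0, 1.0, 2.0], [0.0, 1.0, 8.0], 2.0)   # 8.0
```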
Q4a–10155 . The linear Lagrange polynomial that interpolates the points (x1, f(x1)) and (x2, f(x2)) is
(A) P(x) = f(x1) (x − x2)/(x1 − x2) + f(x2) (x − x1)/(x2 − x1)
(B) P(x) = f(x1) (x1 − x2)/(x − x2) + f(x2) (x2 − x1)/(x − x1)
(C) P(x) = x (f(x1) − f(x2))/(x1 − x2) + (x1 f(x2) − x2 f(x1))/(x1 − x2)
(D) P(x) = f(x1) x/(x1 − x2) + f(x2) x/(x2 − x1)
Answer: (A). This is the Lagrange form; note that response (C) simplifies to the same straight line written in
slope-intercept form.
Q4a–11156 . This table lists the population of Canada every 5 years from 2000 to 2015:
one can approximate the population in any year x between 2000 to 2015. What is L0 (x)?
(A) (x − 2005)(x − 2010)(x − 2015) / 750
(B) (x − 2005)(x − 2010)(x − 2015) / (−750)
(C) (x − 2000)(x − 2005)(x − 2010) / (−50)
(D) (x − 2000)(x − 2010)(x − 2015) / (−150)
Answer: (B). This is the only polynomial that satisfies L0 = 1 at x = 2000 and L0 = 0 at the other three
data points.
{ Source: MAH }
(A) a = 4, b = 7, c =1
(B) a = 8, b = 7, c =3
(C) a = 8, b = 4, c =3
(D) a = 7, b = 4, c =3
{ Source: MAH }
(A) a2 = 1
(B) a2 = 4
(C) a2 = 7
(D) a2 = 9
Answer: (C). It’s the second diagonal in the table that starts at the point x = 3, for which the interpolating
polynomial is:
(A) a = 1, b = 2, c=4
(B) a = 2, b = 2, c=4
(C) a = 1, b = 3, c=1
(D) a = 2, b = 4, c=5
{ Source: MAH }
Q4a–16161 . Consider the interpolating polynomial for f (x) = x3 based on the points (1, 1), (2, 8), (3, 27), and (4, 64).
Find an upper bound for the interpolation error on [1, 4].
(A) 1/6
(B) 15/8
(C) 105/16
(D) 0
Answer: (D). There are four points so the interpolating polynomial is a cubic. Since the original function
is a cubic, then P3(x) = x^3, so the error is zero! You could also use the error formula
    |f(x) − P3(x)| ≤ |f^(4)(c)| / 4! · |(x − 1)(x − 2)(x − 3)(x − 4)| = 0,
since f^(4)(x) ≡ 0 for a cubic.
Q4a–17162 . Consider the interpolating polynomial for f (x) = x3 based on the points (0, 0), (1, 1) and (2, 8). Find an
upper bound for the interpolation error at the point x = 21 .
(A) 1/6
(B) 1
(C) 15/8
(D) 3/8
Answer: (D). Since there are three points, the interpolating polynomial is a quadratic for which the error
formula gives
    |f(x) − P2(x)| ≤ |f'''(c)| / 3! · |x(x − 1)(x − 2)|
    =⇒ |f(1/2) − P2(1/2)| ≤ (6/6) · |1/2| · |1/2 − 1| · |1/2 − 2| = 3/8.
Then, we can generate a fourth-degree polynomial by adding any other point (a, b), which will generate an extra
coefficient c in the table. This corresponds to a new polynomial
x    y
1    1
            b
2    a            −1
            4
4    16
Answer: (A).
{ Source: MAH }
Answer: TRUE. The piecewise functions are both quadratic, and S(x) and S 0 (x) match at x = 0.
{ Source: MAH }
Why not?
(A) S0(1) ≠ S1(1)
(B) S0'(1) ≠ S1'(1)
(C) S0''(1) ≠ S1''(1)
(D) S0'''(1) ≠ S1'''(1)
Answer: (C).
{ Source: MAH }
Assuming that natural end-point conditions are used, determine the constants c and d.
(A) c = 0 and d = −1
(B) c = 0 and d = 1
(C) c = 1 and d = 1
(D) c = 1 and d = −1
Answer: (A). Differentiate to get S0''(x) = 2c + 6x and S1''(x) = 6 + 6d(x − 1). Then the natural end-point
conditions S0''(0) = 2c = 0 and S1''(2) = 6 + 6d = 0 are easy to solve.
{ Source: MAH }
Assuming that periodic end-point conditions are used, determine the constants b and c.
(A) b = 0 and c = 3
(B) b = 3 and c = 0
(C) b = 1 and c = 3
Answer: (A).
{ Source: MAH }
Assuming that not-a-knot end-point conditions are used, which of the following statements about the coeffi-
cients is TRUE?
(A) a0 = a1 = a2
(B) b0 = b1 = b2
(C) c0 = c1 = c2
(D) d0 = d1 = d2
Answer: (D). The not-a-knot end-point conditions match S0'''(1) = S1'''(1) and S1'''(2) = S2'''(2). Using
S0''' = 6d0, S1''' = 6d1 and S2''' = 6d2, it's easy to see that d0 = d1 = d2.
{ Source: MAH }
[Figure: three spline interpolants S(x), labelled I, II and III, plotted on −2 ≤ x ≤ 4.]
(A) I = not-a-knot, II = clamped, III = natural
(B) I = natural, II = not-a-knot, III = clamped
(C) I = natural, II = clamped, III = not-a-knot
Answer: (A). Spline III has the smallest curvature near end-points, so this must be the natural spline.
Spline II is the only one with zero slope at both ends, so this is the clamped spline.
{ Source: JMS }
Q4b–11176 . True or False: Suppose you have n + 1 data points and interpolate them using splines with degree n (that
is, n piecewise polynomials, each having degree n). The resulting splines are all identical and equal to the
interpolating polynomial that you would have obtained using a method like Lagrange interpolation.
Answer: TRUE.
{ Source: JMS }
[Figure: two scatter plots of data (X, Y), labelled I and II, on 0 ≤ X, Y ≤ 20.]
Q4c–2178 . You are writing a Matlab code that computes a least-squares fit yfit to a set of data values ydata, both
of which are vectors of length m. Which of the following lines of code computes the root mean square (RMS)
error for the fit?
(A) RMS = sqrt(m * sum((yfit-ydata).^2));
(B) RMS = sqrt(sum((yfit-ydata).^2)) / m;
(C) RMS = sqrt(sum(yfit-ydata).^2 / m);
(D) RMS = sqrt(sum((yfit-ydata).^2) / m);
Answer: (D).
{ Source: JMS, MACM 316 lecture notes }
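A Python analogue of response (D), for comparison (the helper name is our own):

```python
import math

# RMS error of a fit: sqrt( (1/m) * sum_i (yfit_i - ydata_i)^2 ),
# mirroring the Matlab one-liner sqrt(sum((yfit-ydata).^2) / m).
def rms_error(yfit, ydata):
    m = len(ydata)
    return math.sqrt(sum((f - d) ** 2 for f, d in zip(yfit, ydata)) / m)

r = rms_error([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])   # sqrt(4/3)
```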
Q4c–3179 . The plot depicts noisy data points that were measured in an experiment. What is the best choice of
method for computing a smooth function that best approximates such noisy data?
[Figure: scatter plot of noisy (x, y) data on −0.5 ≤ x ≤ 2.]
(A) A single polynomial interpolant, because it represents the data exactly.
(B) A piecewise cubic spline, because the resulting curve will be smooth at each data point.
[Figure: the noisy data together with a polynomial interpolant, a cubic spline, and a least-squares (LSQ) fit.]
{ Source: JMS }
Q4c–4180 . True or False: Given n data points (xi , yi ) with all of the xi distinct, you apply the method of least
squares to fit a pth degree polynomial to the given data. If p = n − 1 then this is equivalent to polynomial
interpolation.
Answer: TRUE. When p = n − 1, the least squares matrix is size n × n and has the same structure as the
Vandermonde matrix.
{ Source: JMS }
Q4c–5181 . True or False: When fitting a straight line y = a0 + a1 x to the three data points (xi , yi ) = (0, 0), (1, 0)
and (1, 1), the least squares solution is unique.
Answer: TRUE. The method of least squares has no difficulty with the fact that the last two points have
the same x-coordinate. In this case, the linear fit bisects the repeated points as shown in the plot.
[Figure: the data points (0, 0), (1, 0) and (1, 1) together with the least-squares line, which bisects the two repeated points at x = 1.]
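The claim can be checked via the normal equations in exact arithmetic (a sketch; the variable names are our own):

```python
from fractions import Fraction as F

# Least-squares line y = a0 + a1*x through (0,0), (1,0), (1,1) via the
# normal equations; the nonzero determinant shows the solution is unique.
pts = [(F(0), F(0)), (F(1), F(0)), (F(1), F(1))]
n = len(pts)
sx = sum(x for x, _ in pts); sxx = sum(x * x for x, _ in pts)
sy = sum(y for _, y in pts); sxy = sum(x * y for x, y in pts)

det = n * sxx - sx * sx         # = 2, nonzero despite the repeated x = 1
a0 = (sxx * sy - sx * sxy) / det
a1 = (n * sxy - sx * sy) / det  # the line y = x/2 bisects the repeated points
```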
[Figure: candidate least-squares fits to an export-index time series.]
Answer: (B).
{ Source: Data source: https://ourworldindata.org/trade-and-globalization }
Q4c–9185 . The plot shows a set of data points along with three least-squares fits (numbered 1–3). The root mean
square (RMS) errors are listed below:
Fit 1: RMS = 2.8120
Fit 2: RMS = 1.7655
Fit 3: RMS = 1.7241
[Figure: the data and Fits 1–3 plotted versus t on 0 ≤ t ≤ 5.]
Which fit should you choose?
(A) Fit 1
(B) Fit 2
(C) Fit 3
Answer: (B). It would be easy to choose Fit 3 judging by the RMS error alone, but the difference from Fit 2
is very small. So in such cases, it’s always important to evaluate the fit qualitatively or visually . . . in the
“eyeball norm”. Note that Fit 3 exhibits some worrying wiggles near the end-points t = 0 and 5 that deviate
from the overall trends in the data.
{ Source: JMS }
Q4c–10186 . Fill in the blank: To find the exponential fit Q(x) = αeβx using a given data set {(x0 , y0 ), (x1 , y1 ), . . . , (xm , ym )},
you can follow these steps:
• Take the natural logarithm of both sides
Answer: (D).
{ Source: JMS, MACM 316 lecture notes }
Q4c–11187 . Fill in the blank: To find the power-law fit Q(x) = αxβ using a given data set {(x0 , y0 ), (x1 , y1 ), . . . , (xm , ym )},
you can follow these steps:
• Take the natural logarithm of both sides
• Apply linear least squares to the points to obtain the coefficients a0 = log(α) and a1 = β.
• Then α = e^(a0) and β = a1, and Q(x) = e^(a0) x^(a1).
Answer: (B).
{ Source: JMS, MACM 316 lecture notes }
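These steps can be sketched as follows (our own synthetic data, chosen to lie exactly on a power law so the fit recovers the coefficients):

```python
import math

# Power-law fit Q(x) = alpha * x**beta by linear least squares on the
# transformed points (log x_i, log y_i).  The data lie exactly on y = 2 x^3.
xs = [1.0, 2.0, 4.0, 8.0]
ys = [2.0 * x ** 3 for x in xs]

X = [math.log(x) for x in xs]; Y = [math.log(y) for y in ys]
n = len(X)
sx, sy = sum(X), sum(Y)
sxx = sum(x * x for x in X); sxy = sum(x * y for x, y in zip(X, Y))

a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope      = beta
a0 = (sy - a1 * sx) / n                          # intercept  = log(alpha)
alpha, beta = math.exp(a0), a1
```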
Q4c–12188 . To find the exponential fit Q(x) = αeβx using a given data set {(x0 , y0 ), (x1 , y1 ), . . . , (xm , ym )}, the sum
of squared residuals to be minimized is
(A) Σ_{i=1}^{m} (y_i − α e^(β x_i))^2
(B) Σ_{i=1}^{m} (log(y_i) − α − β x_i)^2
(C) Σ_{i=1}^{m} (log(y_i) − α − β log(x_i))^2
(D) Σ_{i=1}^{m} (log(y_i) − log(α) − β x_i)^2
Answer: (D). Taking the log of both sides of the exponential fit Q(x) = α e^(βx) gives log Q(x) = log(α) + βx,
so the residuals in the transformed variables are log(y_i) − log(α) − β x_i.
Answer: (C). Taking the log of both sides of the power-law fit Q(x) = α x^β gives log Q(x) = log(α) + β log(x).
Q4c–14190 . Suppose you apply the method of least squares to obtain a linear fit y = ax + b to the four points (x1 , y1 ),
(x2 , y2 ), (x3 , y3 ) and (x4 , y4 ). You then repeat the process by fitting the same data points to a function of the
form x = cy + d. Which of the following statements is FALSE?
(A) The two fitted lines are identical
(B) The two linear fits may be different
(C) The procedure fails when ac = 0
1
(D) a =
c
Answer: (A). The first fit minimizes the (vertical) differences in the y-values, while the second fit minimizes
the (horizontal) differences in the x-values, so the two fitted lines are different in general.
{ Source: JMS }
4d. Applications
Q4d–1191 . A robot has to follow a path that passes through seven planning points in order to avoid two obstacles,
as shown in the plot. To find the shortest path that is also smooth, which of the following solution strategies
would you recommend?
[Figure: seven planning points in the (x, y) plane, 0 ≤ x ≤ 12, with two obstacles.]
(A) Pass a sixth degree polynomial through the data
(B) Pass linear splines through the data
In order to help graduands in selecting a safe procession path, you have taken GPS measurements of nine
strategically-placed points along the procession route. To determine the safest path over the pool, which of
the following strategies would you recommend?
(A) Pass an eighth degree interpolating polynomial through all data points
(B) Use a linear spline to interpolate the data
(C) Use a cubic spline to interpolate the data
(D) Use least squares to fit a third degree polynomial
Answer: (C).
Q5a–2194 . Which of the following difference formulas is a valid approximation of f 0 (xi )? Here, the xi represent
equally-spaced points with xi − xi−1 = h.
(A) (f(x_{i+1}) − f(x_i)) / h
(B) (f(x_i) − f(x_{i−1})) / h
(C) (f(x_{i+1}) − f(x_{i−1})) / (2h)
(D) All of the above
Answer: (D).
{ Source: MAH }
Q5a–3195 . What form does the truncation error take for the difference formula f'(x) ≈ (f(x + h) − f(x − h)) / (2h)?
(A) O(1)
(B) O(h)
(C) O(2h)
(D) O(h2 )
Answer: (D).
{ Source: MAH }
Q5a–4196 . How would you classify the following difference formula for the derivative
    f'(x) ≈ (−3f(x) + 4f(x + h) − f(x + 2h)) / (2h) ?
(A) forward difference
(B) backward difference
(C) centered difference
(D) one-sided difference
Answer: (D). You can also think of this as a forward difference formula, in the sense that the stencil
consists of points located “forward” of the approximation point x, so response (A) is equally valid.
{ Source: MAH }
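The formula's accuracy can be backed up with an empirical order check (a sketch; the test function f = sin and the step sizes are our own choices):

```python
import math

# Empirical order check for the one-sided formula
# (-3 f(x) + 4 f(x+h) - f(x+2h)) / (2h): halving h should cut the error
# by about 2^2 = 4, consistent with second-order accuracy.
f, x = math.sin, 1.0
exact = math.cos(1.0)

def one_sided(h):
    return (-3 * f(x) + 4 * f(x + h) - f(x + 2 * h)) / (2 * h)

ratio = abs(one_sided(0.01) - exact) / abs(one_sided(0.005) - exact)
```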
Q5a–5197 . You want to estimate the first derivative of f (x), given values of the function at discrete points x =
0, 0.1, 0.2, . . . , 1. Which of these formulas is appropriate for estimating f 0 (1)?
Q5a–6198 . You want to estimate the first derivative of f (x), given values of the function at discrete points x0 , x1 , x2 , . . . , xn .
Which of these formulas is appropriate for estimating f 0 (x0 )?
(A) f'(x_i) ≈ (−3f(x_i) + 4f(x_{i+1}) − f(x_{i+2})) / (2h)
(B) f'(x_i) ≈ (f(x_{i+1}) − f(x_{i−1})) / (2h)
(C) f'(x_i) ≈ (3f(x_i) − 4f(x_{i−1}) + f(x_{i−2})) / (2h)
(D) All of the above
Answer: (A). The other two involve points to the left of x0 where f is not known.
{ Source: MAH }
Q5a–7199 . Which of the statements below is TRUE regarding the difference formula
    f'(x) = [f(x − 2h) − 8f(x − h) + 8f(x + h) − f(x + 2h)] / (12h) + (h^4/30) f^(5)(c) ?
(A) it is a 5-point centered formula
(B) the truncation error is O(h4 )
(C) when h is large, the truncation error dominates
(D) All of the above
Answer: (D). Regarding response (A): this formula spans a five-point stencil centered at the approximation
point x, but the coefficient multiplying f (x) is zero.
Q5a–8200 . The given table lists the absolute errors from three finite difference approximations of the derivative
f'(x) (labelled A, B, C). The errors are computed for a sequence of decreasing values of the grid spacing h.

h          Formula A    Formula B    Formula C
0.500000   5.764e-01    6.670e-02    9.329e-03
0.250000   3.096e-01    1.683e-02    3.565e-04
0.125000   1.596e-01    4.218e-03    5.181e-05
0.062500   8.093e-02    1.055e-03    4.116e-06
0.031250   4.074e-02    2.638e-04    2.836e-07
0.015625   2.044e-02    6.595e-05    1.853e-08
0.007812   1.024e-02    1.649e-05    1.183e-09
0.003906   5.122e-03    4.122e-06    7.513e-11

Which of the statements below regarding the order of accuracy for the three formulas is TRUE?
(A) Formula A converges with first order accuracy, while Formulas B and C diverge.
(B) All three formulas suffer from subtractive cancellation errors for small enough h.
(C) Local truncation error dominates at small h, and is largest for Formula C.
(D) Convergence stalls for Formulas B and C around h ≈ 10−5 , but then resumes when h gets small enough.
Answer: (B). Difference formulas are just that – differences – and so they experience subtractive cancellation
errors when h is small and the function values get very close to each other. These errors are then magnified
by a factor of 1/h. As an example, consider the first-order one-sided difference formula (f(x) − f(x − h))/h
(this is Formula A).
{ Source: JMS }
Q5a–10202 . An electric circuit contains a resistor (resistance R), an inductor (inductance L) and a variable voltage
source E(t) that obeys
di
E(t) = L + Ri,
dt
where i(t) is the electric current flowing through the circuit. You have measurements of current at several
times:
If the inductance is L = 0.82 henries and the resistance is R = 0.21 ohms, the most accurate approximation
for E(1.10) is
(A) 0.82 (4.50 − 4.10)/0.05 + 0.21(4.50)
(B) 0.21(4.50)
(C) 0.82 (4.05 − 4.01)/0.01 + 0.21(4.50)
(D) 0.82 (4.50 − 4.01)/0.1 + 0.21(4.50)
Answer: (A). This uses a backward approximation to i'(1.10) based on the points closest to t = 1.10.
Response (D) uses a higher order centered approximation of the derivative, but it's centered at the wrong point
– t = 1.055!
{ Source: Holistic Numerical Methods [5] and Burden & Faires [1], p. 182 }
Q5a–11203 . The table below lists the distance s(t) that a car travels along a straight road in time t:
Time, t (seconds)    0     5     10    15
Distance, s (m)      0     100   300   700
What is the best approximation of the velocity v(t) (in m/s) at time t = 10 seconds?
(A) 50
(B) 60
(C) 100
(D) 200
Answer: (B). The centered difference approximation gives v(10) ≈ (700 − 100) / (15 − 5) = 60.
{ Source: Holistic Numerical Methods [5] and Burden & Faires [1], p. 182 }
Q5a–12204 . In the following centered difference formula for the second derivative
    f''(x) = [A f(x − 2h) + 16f(x − h) − 30f(x) + 16f(x + h) + A f(x + 2h)] / (12h^2) + O(h^4),
what must the constant A be for the formula to be correct?
(A) −1
(B) 1
(C) −2
(D) 2
Answer: (A). You could answer this by expanding in Taylor series. But it’s easier to simply recall that in any
difference approximation of a derivative, the stencil coefficients must sum to zero. Then A+16−30+16+A = 0.
{ Source: MAH }
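A quick numerical check of A = −1 (a sketch; the test function is our own choice):

```python
import math

# The stencil coefficients sum to zero with A = -1: -1 + 16 - 30 + 16 - 1 = 0.
# Apply the formula to f(x) = sin(x) at x = 1, where f''(1) = -sin(1) exactly;
# the fourth-order formula should match to within ~h^4.
def d2(f, x, h, A=-1.0):
    return (A * f(x - 2 * h) + 16 * f(x - h) - 30 * f(x)
            + 16 * f(x + h) + A * f(x + 2 * h)) / (12 * h * h)

err = abs(d2(math.sin, 1.0, 0.05) - (-math.sin(1.0)))   # tiny, O(h^4)
```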
Q5a–13205 . A first-order difference approximation of the first derivative is written in the form f'(x) = G(h) + a · h + O(h^2),
where a is some constant. Which of the extrapolation formulas below is an O(h^2) approximation of f'(x)?
(A) (4G(h/2) − G(h)) / 3
(B) (2G(h/2) − G(h)) / 3
(C) 2G(h/2) − G(h)
(D) none of the above
Answer: (C). This is analogous to Richardson extrapolation, just applied to an O(h) formula.
{ Source: MAH }
Q5a–14206 . Fill in the blank: The truncation error terms in a difference approximation G(h) for the first derivative of
a function can be written as f'(x) = G(h) + a1 h + a2 h^2 + a3 h^3 + . . . . Then one step of extrapolation can be
used to write f'(x) = 2G(h/2) − G(h) + O(h^2). Applying one more extrapolation step yields the next higher
order formula f'(x) =          + O(h^3).
(A) (4G(h/2) − G(h)) / 3
(B) (8G(h/4) − G(h/2)) / 3
(C) (8G(h/4) + G(h)) / 3
(D) (8G(h/4) − 6G(h/2) + G(h)) / 3
Answer: (D). Writing G1(h) = 2G(h/2) − G(h) for the first-step formula, the second step eliminates the O(h^2)
term via (4G1(h/2) − G1(h)) / 3, which expands to (8G(h/4) − 6G(h/2) + G(h)) / 3.
Q5a–15207 . Write the truncation error terms in the centered difference approximation for the first derivative as
    f'(x) = (f(x + h) − f(x − h)) / (2h) + a1 h^2 + a2 h^4 + a3 h^6 + . . .
where the ai are constants. After applying n steps of Richardson extrapolation, what is the order of the
resulting approximation for f 0 (x)?
(A) O(h2 )
(B) O(hn )
(C) O(h2n )
(D) O(h2n+2 )
Answer: (D). Each step improves the order by a factor of h^2, so the error is O(h^2 · (h^2)^n) = O(h^(2n+2)).
{ Source: MAH }
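One extrapolation step can be sketched as follows (our own test function; G1 is the extrapolated formula):

```python
import math

# One Richardson extrapolation step applied to the centered difference:
# G1(h) = (4 G(h/2) - G(h)) / 3 cancels the a1 h^2 term, leaving O(h^4).
f, x = math.exp, 0.0         # f'(0) = 1 exactly

def G(h):
    return (f(x + h) - f(x - h)) / (2 * h)

def G1(h):
    return (4 * G(h / 2) - G(h)) / 3

err_G = abs(G(0.1) - 1.0)    # about h^2 / 6
err_G1 = abs(G1(0.1) - 1.0)  # several orders of magnitude smaller
```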
Answer: (B). Both (B) and (C) are valid interpretations. Response (A) might also be correct, but only if
b − a = 1, since the average value of a function is defined as (1/(b − a)) ∫_a^b f(x) dx.
{ Source: JMS, MACM 316 lecture notes }
Q5b–3210 . The definite integral ∫_a^b f(x) dx can be interpreted as . . .
Answer: (D).
{ Source: Adapted from Holistic Numerical Methods [5], quiz 07 01 }
Q5b–5212 . True or False: The left and right rectangle rules are exact for constant functions, f (x) = c.
Answer: TRUE.
{ Source: JMS, MACM 316 lecture notes }
Q5b–6213 . Which of the statements below is TRUE? Trapezoidal rule is exact for:
I. constant functions, f (x) = c
II. linear functions, f (x) = ax + b
III. quadratic functions, f (x) = ax2 + bx + c
(A) I
(B) II
(C) I and II
(D) I, II and III
Answer: (C).
{ Source: JMS, MACM 316 lecture notes }
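This can be confirmed with a composite trapezoidal rule sketch (the helper below is our own):

```python
# Composite trapezoidal rule on [a, b] with n subintervals: exact for
# linear integrands, but only approximate for quadratics.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * ((f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n)))

lin = trapezoid(lambda x: 2 * x + 1, 0.0, 1.0, 4)    # exact integral is 2
quad = trapezoid(lambda x: x * x, 0.0, 1.0, 4)       # exact integral is 1/3
```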
Answer: (D).
{ Source: JMS, MACM 316 lecture notes }
Q5b–9216 . The integral ∫_0^5 f(x) dx is approximated using the left rectangle rule (AL), right rectangle rule (AR)
and midpoint rule (AM) for the function f(x) shown in the plot.
[Figure: an increasing function y = f(x) on the partition a = x0 < x1 < · · · < x5 = b, with subinterval areas A0–A4.]
Which of the following inequalities must be TRUE?
Answer: AL < AM < AR.
[Figure: the left rectangle, right rectangle, and midpoint rules applied on the same partition.]
{ Source: MAH }
If AT is the trapezoidal rule approximation and A is the exact value, which of the following relations is TRUE?
[Plot: a concave-up function y = f(x) on [a, b], partitioned into five subintervals with areas A0 –A4 .]
(A) AT > A
(B) AT < A
(C) AT = A
(D) none of the above
Answer: (A). The function is concave upwards, so the trapezoidal areas (defined by secant lines) always lie slightly above the curve; hence AT is an overestimate of A.
[Plot: the trapezoidal rule approximation on the same partition.]
{ Source: MAH }
Q5b–11218 . Approximate the integral A = ∫_0^5 f(x) dx using the left rectangle rule (AL ), right rectangle rule (AR ) and trapezoidal rule (AT ) for the function f(x) shown in the plot. Which of the inequalities below must be TRUE?
[Plot: an increasing, concave-up function y = f(x) on [0, 5], partitioned into five subintervals with areas A0 –A4 .]
Answer: AL < A < AT < AR .
[Plots: the left rectangle, right rectangle and trapezoidal approximations on the same partition.]
{ Source: MAH }
Q5b–12219 . Approximate the integral ∫_0^9 g(x) dx using the left rectangle rule (AL ) and right rectangle rule (AR ) for the function g(x) shown in the plot.
(A) AL > AR
(B) AL < AR
(C) AL = AR
(D) none of the above
Answer: (A). The function g(x) is decreasing, so AL > AR .
{ Source: MAH }
Q5b–14221 . Approximate the integral ∫_a^b h(x) dx using the left rectangle rule (AL ), right rectangle rule (AR ) and trapezoidal rule (AT ) for some arbitrary function y = h(x).
Answer: Whether inequalities (A), (B) or (C) are true depends on the shape of h(x).
{ Source: MAH }
Which of these statements regarding the order of accuracy for the three formulas is TRUE?
[Plot: log–log plot of error versus h for Formulas A, B and C.]
(A) Formulas A and B are order 1, Formula C is order 2.
(B) Formula A is order 1, Formula B is order 2, Formula C is order 3.
(C) Formula A is order 1, Formula B is order 2, Formula C is order 4.
(D) All formulas have the same order of accuracy, but different error constants.
Answer: (C). On a log–log scale, the order of the scheme is given by the slope of each error curve. For example, with Formula B, h decreases by roughly a factor of 1000 (3 orders of magnitude) while the error decreases by a factor of 10^6 (6 orders of magnitude). This means that the error behaves like E ∝ h^(6/3) = h^2, so it's a second-order method.
{ Source: JMS }
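The slope-reading argument can be automated. A small illustrative Python sketch (the (h, error) values are invented to mimic a second-order curve on such a plot):

```python
import math

# Two (h, error) points read off a log-log error curve; these values are
# made up to mimic a second-order formula.
h1, e1 = 1e-1, 3.0e-3
h2, e2 = 1e-2, 3.0e-5        # error drops by 10^2 when h drops by 10

# The slope on log-log axes is the order of accuracy
order = (math.log(e1) - math.log(e2)) / (math.log(h1) - math.log(h2))
print(order)
```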
The quadrature formula ∫_0^1 f(x) dx ≈ c0 f(0) + c1 f(1) is exact for all constant and linear functions. Determine the values of c0 and c1.
(A) c0 = 1, c1 = 1
(B) c0 = −1, c1 = 1
(C) c0 = −1/2, c1 = 1/2
(D) c0 = 1/2, c1 = 1/2
Answer: (D). Take the simplest possible constant function f(x) = 1 and linear function f(x) = x:
f(x) = 1 ⟹ ∫_0^1 1 dx = [x]_0^1 = 1 = c0 + c1
f(x) = x ⟹ ∫_0^1 x dx = [x^2/2]_0^1 = 1/2 = c1
Solving the two linear equations yields c0 = 1/2 and c1 = 1/2, so the quadrature formula is just the arithmetic average of f(0) and f(1).
{ Source: MAH }
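The result is quick to verify in code; the illustrative Python sketch below checks that c0 = c1 = 1/2 reproduces the exact integral of arbitrary linear functions on [0, 1]:

```python
def quad(f):
    # The derived rule: arithmetic average of the endpoint values
    return 0.5 * f(0.0) + 0.5 * f(1.0)

for m, k in [(0.0, 1.0), (1.0, 0.0), (3.0, -2.0)]:
    exact = m / 2 + k                  # integral of m*x + k over [0, 1]
    assert abs(quad(lambda x: m * x + k) - exact) < 1e-12
print("rule is exact for all linear test functions")
```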
The quadrature formula ∫_{−1}^{1} f(x) dx ≈ f(w1) + f(w2) is exact for all constant and linear functions. Determine all possible values of the parameters w1 and w2.
(A) w1 = −1/2, w2 = 1/2
(B) w1 = −1, w2 = 1
(C) w1 = w2
(D) w1 = −w2
Answer: (D). Take the simplest possible constant function f(x) = 1 and linear function f(x) = x:
f(x) = 1 ⟹ ∫_{−1}^{1} 1 dx = [x]_{−1}^{1} = 2 = 1 + 1 (always satisfied)
f(x) = x ⟹ ∫_{−1}^{1} x dx = [x^2/2]_{−1}^{1} = 0 = w1 + w2
Solving the two linear equations gives w1 = −w2. This means that the quadrature formula can use any two points as long as they are symmetric about the origin. Note that for nonlinear functions f, the formula won't be very accurate if w1 and w2 are not within (or at least close to) the interval [−1, 1].
{ Source: MAH }
Time (s) 2 5 7 9 12
Velocity (m/s) 12 16 24 15 33
You use the trapezoidal rule to calculate the distance (in metres) travelled by the object between t = 2 and
t = 12 seconds, and the result is . . .
(A) 155.0
(B) 161.0
{ Source: JMS }
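The remaining answer choices have been lost from this copy, but the trapezoidal sum on this non-uniform grid is easy to compute directly (illustrative Python):

```python
# Tabulated data from the question
t = [2, 5, 7, 9, 12]
v = [12, 16, 24, 15, 33]

# Composite trapezoidal rule on a non-uniform grid:
# sum of (subinterval width) * (average of endpoint velocities)
dist = sum((t[i + 1] - t[i]) * (v[i] + v[i + 1]) / 2 for i in range(len(t) - 1))
print(dist)   # 193.0
```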
Q5b–20227 . Let In be the trapezoidal rule approximation of the integral ∫_a^b f(x) dx on n subintervals. A more accurate estimate is obtained using Richardson extrapolation as . . .
(A) I2n + (I2n − In)/15
(B) I2n + (I2n − In)/3
(C) I2n
(D) I2n + (I2n − In)/I2n
Answer: (B). The leading-order error term in the trapezoidal rule is O(h^2), so the Richardson extrapolation formula is (4 I2n − In)/3 = I2n + (I2n − In)/3.
{ Source: JMS }
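A quick numerical illustration in Python (my choice of test problem: f = sin on [0, π], whose integral is exactly 2):

```python
import math

def trap(f, a, b, n):
    # Composite trapezoidal rule on n equal subintervals
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

a, b = 0.0, math.pi                 # integral of sin over [0, pi] is exactly 2
In = trap(math.sin, a, b, 8)
I2n = trap(math.sin, a, b, 16)
extrap = I2n + (I2n - In) / 3       # formula (B)

print(abs(I2n - 2.0), abs(extrap - 2.0))   # extrapolation wins by orders of magnitude
```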
Q6a–3230 . The deflection w(x) of the free end of a flexible beam in response to a load force q(x) is described by the
differential equation
d^4 w/dx^4 + a^2 d^2 w/dx^2 = q(x)
What is the order of this ODE?
(A) 1
(B) 2
(C) 4
(D) 6
Answer: (C).
{ Source: MAH }
[Slope field plot for an ODE, shown over −4 ≤ x ≤ 4 and −4 ≤ y ≤ 4; which form of ODE could it represent?]
(A) dy/dx = f(y)
(B) dy/dx = f(x)
(C) dy/dx = f(x, y)
Answer: (C). Consider any horizontal line (y = constant) and notice that the slope varies with x. Similarly, the slope depends on y along vertical lines. So the ODE RHS must depend on both x and y. This is actually the slope field of dy/dx = −x/y.
{ Source: MAH }
[Slope field plot for an ODE, shown over −4 ≤ x ≤ 4 and −4 ≤ y ≤ 4; which form of ODE could it represent?]
(A) dy/dx = f(x)
(B) dy/dx = f(y)
(C) dy/dx = f(x, y)
Answer: (A). The slope does not change with y along any vertical line (x = constant) and so the ODE RHS only depends on x. This is actually the slope field of dy/dx = sin(x).
{ Source: MAH }
[Slope field plot for an ODE, shown over 0 ≤ t ≤ 4 and 0 ≤ y ≤ 1.8; which form of ODE could it represent?]
(A) dy/dt = f(t)
(B) dy/dt = f(y)
(C) dy/dt = f(t, y)
Answer: (B). The slope does not change with t along any horizontal line (y = constant) and so the ODE RHS only depends on y. This is actually the slope field of dy/dt = 2y(1 − y).
{ Source: MAH }
II. For y(0) = 1 the solution remains the same.
[Slope field plot over 0 ≤ t ≤ 4 and 0 ≤ y ≤ 1.6.]
(A) I
(B) II
(C) I and II
(D) I and III
(E) I, II and III
Answer: (C).
{ Source: MAH }
Q6a–9236 . The slope field for an ODE is shown below on the left. Which of the solution curves (I–III) in the right
hand plot could be solutions?
[Left: slope field over 0 ≤ t ≤ 4. Right: candidate solution curves I, II and III on the same axes.]
(A) I
(B) II
(C) I and II
(D) I and III
(E) I, II and III
Answer: (E).
[Plot: curves I, II and III overlaid on the slope field.]
{ Source: MAH }
Q6a–10237 . The slope field for an ODE is shown below on the left. Which of the solution curves (I–III) in the right
hand plot could be solutions?
[Left: slope field over 0 ≤ t ≤ 3. Right: candidate solution curves I, II and III on the same axes.]
(A) I
(B) II
(C) I and II
(D) I and III
(E) I, II and III
Answer: (C).
[Plot: curves I and II overlaid on the slope field.]
{ Source: MAH }
Q6b–2239 . Using Euler’s method to approximate an ODE, you obtain the difference equation yj+1 = yj + hayj where
a is a constant and h is the time step. What is the ODE?
(A) dy/dt = ay
(B) dy/dt = a
(C) dy/dt = hay
(D) dy/dt = ha
Answer: (A).
{ Source: MAH }
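Running the recursion confirms the correspondence (illustrative Python sketch; the values of a, h and the step count are arbitrary choices):

```python
a, h, y0, n = -0.5, 0.1, 1.0, 20

# Iterate the difference equation y_{j+1} = y_j + h*a*y_j
y = y0
for _ in range(n):
    y = y + h * a * y

# Each step multiplies by (1 + h*a), the Euler amplification factor for y' = a*y
print(abs(y - y0 * (1 + h * a) ** n))   # essentially zero (round-off only)
```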
Consider the initial value problem y′ = f(t, y) for t ∈ [a, b] with y(a) = α, and suppose that:
• f (t, y) is continuous for all y and t ∈ [a, b], and
• f (t, y) satisfies the “Lipschitz condition”
Then the IVP has a unique solution y(t) for all t ∈ [a, b].
Consider the IVP y′ = −y + t + 1 with y(0) = 1, for t ∈ [0, 5]. What is the corresponding Lipschitz constant
that ensures this problem is well-posed?
(A) 0
(B) 1
(C) 5
(D) 6
Answer: (B). Substituting the RHS function into the Lipschitz condition:
|f(t, y1) − f(t, y2)| = |(−y1 + t + 1) − (−y2 + t + 1)| = |y2 − y1| = 1 · |y1 − y2|,
so the Lipschitz constant is L = 1.
{ Source: MAH }
Q6b–6243 . True or False: When you solve an ODE using a time-stepping algorithm like Euler’s method, the local
truncation errors in every step add up and so the error will always grow with time.
Answer: FALSE. The sign of the error in each step can be positive or negative. If errors always have the
same sign, then they can grow with time as they accumulate. But if the signs alternate, then errors can cancel.
{ Source: JMS }
{ Source: JMS }
y1 = y0 + h f(y0) = π + (π/2)(−(π/2) cos π) = π + π^2/4
{ Source: JMS }
y1 = y0 + h f(t0, y0) = 1 + 1 · (0 + 1^2) = 2
y2 = y1 + h f(t1, y1) = 2 + 1 · (1 + 2^2) = 7
{ Source: JMS }
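The hand computation above can be reproduced with a tiny Euler stepper (illustrative Python; the RHS f(t, y) = t + y^2 is inferred from the arithmetic):

```python
def euler(f, t0, y0, h, nsteps):
    # Forward Euler: y_{j+1} = y_j + h * f(t_j, y_j)
    t, y = t0, y0
    for _ in range(nsteps):
        y = y + h * f(t, y)
        t = t + h
    return y

f = lambda t, y: t + y ** 2
print(euler(f, 0.0, 1.0, 1.0, 1))   # 2.0
print(euler(f, 0.0, 1.0, 1.0, 2))   # 7.0
```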
[Plot: a numerical solution on 0 ≤ x ≤ 2.]
(A) Euler’s method does not converge for this problem.
(B) The local truncation error is sometimes large, and sometimes small.
(C) The global truncation error is small near x = 2.
(D) The accuracy of the solution could likely be improved by increasing N .
Answer: (A). Responses (B)–(D) are correct. And although (A) might be true, there is no way to determine
convergence with only results from a single N .
{ Source: Based on Fausett [3], p. 450 }
[Plot: a numerical solution y(x) on 0 ≤ x ≤ 2.]
(A) Reduce the time step
(B) Use a higher order method
(C) Use adaptive time-stepping
(D) Switch to an implicit method like backward Euler
(E) All of the above
Answer: (A). Responses (A)–(C) are all reasonable answers. Switching to backward Euler won’t help with
accuracy, but could improve stability.
{ Source: JMS }
Q6c–2250 . You perform the following two calculations on the same initial value problem:
• two steps of Euler’s method with step size 0.25
• one step of modified Euler’s method with step size 0.5
Q6c–3251 . You perform the following two calculations on the same initial value problem:
• two steps of Euler’s method with step size 0.1
• one step of modified Euler’s method with step size 0.2
Which method do you expect to give the more accurate solution?
(A) Euler’s method
(B) modified Euler’s method
(C) both give the same accuracy
Answer: (B). Euler’s method has LTE = O(h2 ) so that two steps of size 0.1 give LTE ≈ 0.02. Modified
Euler’s method has LTE = O(h3 ) so that one step of size 0.2 gives LTE ≈ 0.008, which is much smaller!
{ Source: JMS }
Q6c–4252 . When comparing Euler’s method with standard fourth-order Runge-Kutta (RK4), which of the following
is not an advantage of RK4?
(A) The cost per time step is lower.
(B) The local truncation error is much smaller.
(C) It is more stable and a much larger time step can usually be used.
Answer: (A). The cost of an RK4 step is roughly 4 times that of Euler's method. The savings arise because many fewer time steps are usually needed.
{ Source: JMS }
Q6c–6254 . You have a code for solving ODEs that predicts the solution at time tj+1 using
y ∗ = yj + hf (tj , yj ),
Q6c–7255 .
y∗ = yj + (h/2) f(tj, yj)
yj+1 = yj + h f(tj + h/2, y∗)
This iteration is called
(A) Euler’s method
(B) modified Euler method
(C) Runge-Kutta method
(D) midpoint method
Answer: (D).
{ Source: MAH }
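A sketch of the midpoint method in Python (the test problem y′ = y with exact solution e^t is my own choice, not from the question):

```python
import math

def midpoint_step(f, t, y, h):
    # Half Euler step to predict the midpoint value ...
    y_star = y + (h / 2) * f(t, y)
    # ... then a full step using the slope at the midpoint
    return y + h * f(t + h / 2, y_star)

f = lambda t, y: y                  # exact solution: y(t) = e^t, y(0) = 1
h, n = 0.01, 100
t, y = 0.0, 1.0
for _ in range(n):
    y = midpoint_step(f, t, y, h)
    t += h
print(abs(y - math.e))              # around 5e-5: second-order accuracy
```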
True or False: Your friend plots numerical solutions of an IVP computed using a decreasing sequence of step sizes h = 0.25, 0.125 and 0.0625, alongside the exact solution. Based on these results, your friend claims that their code is correct.
[Plot: the three approximations and the exact solution on 0 ≤ t ≤ 1.]
Answer: FALSE. Well, it's most likely false. The right hand end-point seems to approach the exact solution y(1) ≈ 20 as h gets smaller. But the middle values (near t ≈ 0.5) are not getting very close to the exact solution. Furthermore, the numerical approximations all have a slope at t = 1 that is nonzero and increasing with h, which seems to diverge from the exact slope of dy/dt(1) = 0.
True or False: Your friend codes up the modified Euler method and computes three numerical solutions with step sizes h = 0.2, 0.1 and 0.05, plotted alongside the exact solution. He also calculates the max-norm error in the three solutions:

h      max_j |yj − y(tj)|
0.2    4.0680
0.1    2.8348
0.05   1.3826

[Plot: the three approximations and the exact solution on 0 ≤ t ≤ 1.]
Based on these results, your friend claims that their code is correct.
Answer: FALSE. The approximations do seem to be converging to the exact solution as h → 0. However,
the error is only going down roughly by a factor of 2 in each step, which suggests a first-order method. Modified
Euler should be second-order, and so there must be a bug in the code.
{ Source: Based on Recktenwald [6], p. 728 }
Q6c–10258 . You have coded up an ODE solver and to test it you execute your code twice, first with time step h and
second with step h/2. Suppose these two solutions have absolute errors Eh and Eh/2 respectively. Which
expression is an estimate for the order of accuracy?
(A) log2(Eh) / log2(Eh/2)
(B) log2(Eh/2) / log2(Eh)
(C) log2(Eh / Eh/2)
(D) log2(Eh/2 / Eh)
Answer: (C). If the error in the method is order p, then Eh = c h^p and Eh/2 = c (h/2)^p. Taking the ratio Eh / Eh/2 = 2^p, it follows that p = log2(Eh / Eh/2).
{ Source: JMS }
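In code (illustrative Python, with synthetic errors manufactured from a third-order method):

```python
import math

def observed_order(E_h, E_h2):
    # From E = C h^p: E_h / E_{h/2} = 2^p, so p = log2 of the ratio
    return math.log2(E_h / E_h2)

# Synthetic errors for a third-order method with C = 2
E_h = 2 * 0.1 ** 3
E_h2 = 2 * 0.05 ** 3
print(observed_order(E_h, E_h2))    # approximately 3
```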
Q6c–11259 . You have coded up an ODE solver and to test it, you execute your code twice, once with time step h and
a second time with step h/10. Suppose these two solutions have absolute errors Eh and Eh/10 respectively.
Which expression is an estimate for the order of accuracy?
(A) log10(Eh / Eh/10)
(B) log2(Eh / Eh/10)
(C) (1 / log2(10)) · log2(Eh / Eh/10)
(D) log10(Eh) / log10(Eh/10)
Answer: (A). Response (C) is also correct. If the error in the method is order p, then Eh = c h^p and Eh/10 = c (h/10)^p. Taking the ratio Eh / Eh/10 = 10^p, it follows that p = log10(Eh / Eh/10). Applying the logarithmic identity log10 x = log2 x / log2 10 gives the formula in (C).
{ Source: JMS }
(A) Both methods have the same accuracy, but Simpson’s rule is less expensive.
(B) Simpson’s rule is more accurate and less expensive.
(C) RK4 is more accurate and less expensive.
(D) Both methods have the same accuracy and cost.
Answer: (A). Both RK4 and Simpson’s rule have error that is O(h4 ). Each step of RK4 requires 4
function evaluations, for a total cost of 4N . In comparison, Simpson’s rule involves only N + 1 function
evaluations (one per point). So the cost of both is O(N ), but the quadrature approach will be roughly four
times cheaper/faster.
{ Source: Based on Cheney and Kincaid [2], p. 390. }
Q6d–2262 . A mass–spring problem is described by the ODE y 00 +2y = cos(2t). Which of the following is the equivalent
first-order system?
(A) y′ = x, x′ + 2y = cos(2t)
(B) x′ = y, y′ = −2y + cos(2t)
(C) This second-order ODE can't be converted to a first-order system.
Answer: (A).
{ Source: MAH }
x′ = y
y′ = −2x + y
(A) y′′ + 2y′ − y = 0
(B) y′′ − 2y′ − y = 0
x′ = x − 2y
y′ = −2x + y
Q6d–5265 . A mass m is attached to a fixed wall with a spring–damper device having spring constant k and damping
parameter c. Our aim is to find the horizontal displacement x = x(t) as a function of time t > 0. Applying
Newton’s second law leads to the second-order ODE
where x0 denotes the initial displacement, v0 the initial velocity, and f = f (t) describes external forces acting
on the body. Re-write this IVP as a first-order system by using the new variable v(t) = x0 (t).
(A) x′ = v, mv′ + cv + kx = f, x(0) = x0 , v(0) = v0
(B) v′ = x, mv′ + cv + kx = f, x(0) = x0 , v(0) = v0
(C) x′ = v, mx′ + cx + kv′ = f, x(0) = x0 , v(0) = v0
Answer: (A).
{ Source: MAH }
Q6d–6266 . You are given the system of ODEs x′ = 3x − 2y and y′ = 4y^2 − 7x, along with initial values x(0) = 2 and y(0) = 1. Using one step of Euler's method, what is the approximate solution at t = 0.1?
(A) x(0.1) = 4, y(0.1) = −10
(B) x(0.1) = 6, y(0.1) = −9
(C) x(0.1) = 2.4, y(0.1) = 0
(D) x(0.1) = 2.4, y(0.1) = −1
(E) none of the above
Answer: (C). Euler’s method with h = 0.1 gives:
Q6d–7267 . You are given the system of ODEs x′ = x(−x − 2y + 5) and y′ = y(−x − y + 10), along with initial values x(4.5) = 3 and y(4.5) = 2. What are approximate values of x and y at t = 4?
[2] Ward Cheney and David Kincaid. Numerical Mathematics and Computing. Brooks/Cole, Pacific Grove, CA,
4th edition, 1999.
[3] Laurene V. Fausett. Applied Numerical Analysis Using MATLAB. Pearson Education, 2nd edition, 2008.
[4] Michael T. Heath. Scientific Computing: An Introductory Survey, revised second edition, volume 80 of Classics
in Applied Mathematics. SIAM, Philadelphia, PA, 2018.
[5] Autar Kaw. Holistic numerical methods: Multiple choice questions. http://mathforcollege.com/nm/
assessment_text.html. Accessed 18 March 2019.
[6] Gerald W. Recktenwald. Introduction to Numerical Methods and MATLAB: Implementations and Applications.
Pearson, 2000.
[7] Maria Terrell and Robert Connelly. GoodQuestions project. Department of Mathematics, Cornell University,
Ithaca, NY. http://pi.math.cornell.edu/~GoodQuestions. Accessed 5 April 2019.
[8] Holly Zullo and Kelly Cline. MathQUEST/MathVote: Resources for clickers and classroom voting in collegiate
mathematics. Mathematics Program, Carroll College, Helena, MT. http://mathquest.carroll.edu. Accessed
5 April 2019.