
SIMON FRASER UNIVERSITY
FACULTY OF SCIENCE
DEPARTMENT OF MATHEMATICS

M. Alamgir Hossain
John M. Stockie

Clicker Question Bank for Numerical Analysis
(Version 1.0 – May 14, 2020)

This teaching resource (including LaTeX source, graphical images and Matlab code) is made available under the
Creative Commons “CC BY-NC-SA” license. This license allows anyone to reuse, revise, remix and redistribute the
databank of clicker questions provided that it is not for commercial purposes and that appropriate credit is given
to the original authors. For more information, visit http://creativecommons.org/licenses/by-nc-sa/4.0.

1. Introduction
Q1–11 . Select the best definition for “numerical analysis”:
(A) the study of round-off errors
(B) the study of algorithms for computing approximate solutions to problems from continuous mathematics
(C) the study of quantitative approximations to the solutions of mathematical problems including consider-
ation of and bounds for the errors involved
(D) the branch of mathematics that deals with the development and use of numerical methods for solving
problems
(E) the branch of mathematics dealing with methods for obtaining approximate numerical solutions of math-
ematical problems
Answer: (B). All 5 definitions are valid in some sense since they reflect some aspect of the field (most are pulled off the internet). But my favourite definition is (B) because it contains three very important keywords, highlighted below:

the study of algorithms for computing approximate solutions to problems from continuous mathematics
[ algorithms ⇐⇒ computing, approximate ⇐⇒ floating point arithmetic, continuous ⇐⇒ solutions are smooth functions ]

{ Source: JMS }

1a. Floating Point Arithmetic and Error


Q1a–12 . How many significant digits does the floating point number 0.03140 × 10^3 have?
(A) 6
(B) 5
(C) 4
(D) 3
Answer: (C). The significant digits are 3, 1, 4, 0 (the leading zeros are not significant, but the trailing zero is), giving 4.
Q1a–23 . Suppose that a hypothetical binary computer stores floating point numbers in 16-bit words as shown:

[Bit layout: bit 1 = sign of the number (s), bits 2–4 = exponent field (exp), bits 5–16 = mantissa]



Bit 1 is used for the sign of the number, bit 2 for the sign of the exponent, bits 3-4 for the magnitude of the
exponent, and the remaining twelve bits for the magnitude of the mantissa. What is machine epsilon for this
computer?
(A) 2^{−16}
(B) 2^{−12}
(C) 2^{−8}
(D) 2^{−4}
Answer: (B). Assume that rounding is used and recall that ε_M is essentially the same as unit round-off error u = (1/2) B^{1−t}, where B = 2 is the base and t is the number of significant digits. The number of digits stored in the mantissa is t = 12 and so ε_M ≈ (1/2) 2^{1−12} = 2^{−12}.

Q1a–34 . You are working with a hypothetical binary computer that stores integers as unsigned 4-bit words. What
is the largest non-negative integer that can be represented on this computer?
(A) 64
(B) 63
(C) 31
(D) 15
(E) 7
Answer: (D). (1111)_2 = 1×2^0 + 1×2^1 + 1×2^2 + 1×2^3 = 15.
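As a quick check in Matlab (bin2dec is a built-in that converts a binary digit string to decimal):

    bin2dec('1111')      % returns 15
    2^4 - 1              % the largest unsigned 4-bit integer, also 15
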
Q1a–45 . In 1958 the Russians developed a ternary (base-3) computer called Setun, after the Setun River that flows
near Moscow State University where it was built. In contrast with today’s binary computers, this machine
used “trits” (ternary bits) whose three possible states can be represented as {0, 1, 2}. Its floating-point number
system was based on 27-trit numbers, with 9 trits reserved for the exponent and 18 for the mantissa. What
was the value of machine epsilon ε_M for the Setun?
(A) 3^{−19}
(B) 3^{−18}
(C) 3^{−9}
(D) (1/3) · 2^{−18}
Answer: (B). Apply the formula ε_M = B^{−t} from the notes, where B = 3 is the base and t = 18 is the number of digits in the mantissa. You may have noticed that I didn't mention a “sign trit” for the mantissa. In actual fact, the floating-point representation on Setun was more complicated than this and the sign of a number came from interpreting one specific trit as {−1, 0, +1} instead.
[Photo: Setun – Moscow State University]

{ Source: JMS, plus info from http://homepage.divms.uiowa.edu/~jones/ternary/numbers.shtml }

Q1a–56 . In Canada, the total for any store purchase paid in cash is rounded to the nearest 5 cents, whereas no
rounding is done if the payment is by credit/debit card. Suppose that when you return home after purchasing
your groceries with cash, you notice that your bill was $10.07. What is the absolute error in your actual cash
payment?



(A) 2 cents
(B) 3 cents
(C) 4 cents
(D) 5 cents
Answer: (A). The cash payment is rounded to $10.05, so the absolute error is |10.07 − 10.05| = $0.02.
Q1a–67 . Let x̂ be some approximation of x. Which of the following error definitions is correct?
(A) absolute error = |x − x̂|,  relative error = |x − x̂| / |x|
(B) absolute error = |x − x̂| / |x|,  relative error = |x − x̂|
(C) absolute error = |x − x̂| / |x|, x ≠ 0,  relative error = |x − x̂|
(D) absolute error = |x − x̂|,  relative error = |x − x̂| / |x|, x ≠ 0
Answer: (D).
Q1a–78 . For a base-10 (decimal) floating point number x having t significant digits, the relative error satisfies

    R_x = |x − fl(x)| / |x| ≤ u,

where u denotes unit round-off error. Which of the following is true about u?
(A) u = { 10^{1−t} (chopping),  (1/2)·10^{1−t} (rounding) }
(B) u = { (1/2)·10^{1−t} (chopping),  10^{1−t} (rounding) }
(C) u = { (1/2)·10^{1−t} (rounding),  10^{1−t} (chopping) }
(D) u = { 10^{1−t} (rounding),  (1/2)·10^{1−t} (chopping) }

Answer: (A).
Q1a–89 . Fill in the blank: If f(x) is a real-valued function of a real variable, then the ________ error in the difference approximation for the derivative f'(x) ≈ (f(x + h) − f(x)) / h goes to zero as h → 0.
(A) absolute
(B) relative
(C) cancellation
(D) truncation
Answer: (D). Strictly, response (A) is also correct since truncation error is an (absolute) difference from
the exact derivative.
Q1a–910 . The two solutions of the quadratic equation ax2 + bx + c = 0 given by

    x = ( −b ± √(b^2 − 4ac) ) / (2a)
are computed using floating point arithmetic. Which of the statements below is TRUE?



(A) For some values of the coefficients, this formula can generate cancellation errors.
(B) If the coefficients a, b and c are very small or very large, then b2 or 4ac may overflow or underflow.
(C) The expression x = 2c / ( −b ∓ √(b^2 − 4ac) ) is an alternative formula for x that avoids truncation error.
(D) All of the above.
Answer: (D).
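A small Matlab sketch of the cancellation issue in (A), using made-up coefficients chosen so that b^2 ≫ 4ac:

    a = 1;  b = 1e8;  c = 1;
    x1 = (-b + sqrt(b^2 - 4*a*c)) / (2*a)    % subtractive cancellation: few correct digits
    x2 = 2*c / (-b - sqrt(b^2 - 4*a*c))      % rewritten form from (C): accurate, about -1.0e-8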

Q1a–1011 . In floating-point arithmetic, which of the following operations on two positive floating-point numbers can
produce an overflow?
(A) addition
(B) subtraction
(C) multiplication
(D) division
Answer: (A). But (C) and (D) are also valid responses. Let x be the largest number that can be represented.
Then the operations x + x, x ∗ 2.0 and x ÷ 0.3 all generate an overflow.
{ Source: Heath [4], Review Question 1.29, p. 40 }
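A quick check of these overflow cases in Matlab, using IEEE double precision (realmax is the largest representable double):

    x = realmax;         % largest representable double, about 1.8e308
    x + x                % Inf
    x * 2.0              % Inf
    x / 0.3              % Inf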

Q1a–1112 . In floating-point arithmetic, which of the following operations on two positive floating-point numbers can
produce an underflow?
(A) addition
(B) subtraction
(C) multiplication
(D) division
Answer: (C). But (D) is also a valid response. Let x be the smallest positive number that can be represented.
Then the operations x ∗ 0.5 and x ÷ 2.3 both generate an underflow.
{ Source: Heath [4], Review Question 1.30, p. 40 }
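The corresponding underflow check in Matlab (realmin*eps is the smallest positive subnormal double):

    x = realmin*eps;     % smallest positive (subnormal) double, 2^(-1074)
    x * 0.5              % 0: the result underflows to zero
    x / 2.3              % 0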

Q1a–1213 . Let {x_k} be a decreasing sequence of positive numbers with x_{k+1} < x_k for k = 1, 2, . . . . In what order should the sum Σ_{k=1}^{N} x_k be computed so as to minimize round-off error?

(A) Order the xk from largest to smallest (1, 2, . . . , N ).


(B) Order the xk from smallest to largest (N, . . . , 2, 1).
(C) Sum the terms in random order.
(D) It doesn’t matter.

Answer: (B).
{ Source: Heath [4], adapted from Review Question 1.45, p. 41 }
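A minimal Matlab experiment illustrating the effect, using single precision and the terms 1/k^2 (whose infinite sum is π^2/6); the value of N and the test series are assumptions made here just for illustration:

    N = 1e6;
    terms = single(1) ./ single(1:N).^2;      % series terms stored in single precision
    s_fwd = single(0);                        % largest-to-smallest (order 1, 2, ..., N)
    for k = 1:N,  s_fwd = s_fwd + terms(k);  end
    s_bwd = single(0);                        % smallest-to-largest (order N, ..., 2, 1)
    for k = N:-1:1,  s_bwd = s_bwd + terms(k);  end
    exact = pi^2/6;
    [abs(s_fwd - exact), abs(s_bwd - exact)]  % smallest-to-largest gives the smaller error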

Q1a–1314 . True or False: If two real numbers can be represented exactly as floating-point numbers, then the result
of a real arithmetic operation on them can also be represented exactly as a floating-point number.

Answer: FALSE. As a counterexample, let x1 = εM (machine epsilon) and x2 = 2, which are both exact in any binary floating point system (like the IEEE standard). Then x1 + x2 = 2 + εM has no floating point representation, because the spacing between adjacent floating point numbers near 2 is 2εM; the computed sum rounds back to 2.
{ Source: Heath [4], Review Question 1.7, p. 39 }
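A one-line Matlab check of this counterexample in IEEE double precision (eps is Matlab's machine epsilon):

    x1 = eps;  x2 = 2;        % both exactly representable in IEEE double
    (x1 + x2) == 2            % returns true (logical 1): 2 + eps rounds back to 2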

Q1a–1415 . Below are four floating point approximations, each accompanied by its corresponding exact value. Which
approximation is the most accurate?
(A) 315700, exact value 315690



(B) 0.0005500, exact value 0.0005510
(C) 8.7362 × 10^{−5}, exact value 8.7743 × 10^{−5}
(D) ε_M (machine epsilon), exact value 0
Answer: (A). Accuracy is measured either by counting significant digits or computing relative error. Answer (A) has the most significant digits of accuracy (4 after rounding), whereas choices (B) and (C) have 2 and 3 significant digits. The accuracy of the answer from (D) can't be compared because the relative error formula is undefined when the exact answer is zero.
{ Source: JMS }

Q1a–1516 . You are computing using a floating-point number system that stores a total of t decimal digits in the
mantissa. If you perform an arithmetic computation that has a result with relative error Rx , which of the
statements below is TRUE?
(A) The number of significant digits in the answer is t.
(B) The number of significant digits cannot be predicted because of cancellation or round-off errors.
(C) The number of significant digits is roughly − log10 (Rx ).
(D) None of the above.

Answer: (D).
{ Source: JMS }

Q1a–1617 . The relative error in an approximate solution is 0.004%. The number of significant digits in the solution
that we can trust is:
(A) 2
(B) 3
(C) 4
(D) 5
Answer: (C). This percentage corresponds to a relative error of 4 × 10^{−5}. With rounding this means that the solution is accurate to within only 4 significant digits.
{ Source: Holistic Numerical Methods [5], quiz 01 02 }

Q1a–1718 . Let x̂ be some approximation of an exact value x. Which of the statements below is FALSE?
(A) The relative error is always smaller than the absolute error.
(B) The relative error can be smaller than the absolute error provided x is large enough.
(C) The relative error gives a better idea of the number of correct significant digits.
(D) None of the above.
Answer: (A). Because absolute error is Ex = |x − x̂| and relative error is Rx = |x − x̂|/|x| = Ex /|x|, the
only time Rx < Ex is when |x| > 1.
{ Source: JMS }

Q1a–1819 . What is the decimal equivalent of the binary number (110010)_2?


(A) 100
(B) 50
(C) 48
(D) 25
Answer: (B). (110010)_2 = 2^5 + 2^4 + 2^1 = 32 + 16 + 2 = (50)_{10}.
{ Source: Holistic Numerical Methods [5], quiz 01 04 }



Q1a–1920 . What is the binary equivalent of the decimal number (25.375)_{10}?
(A) 100110.011
(B) 10011.110
(C) 11001.011
(D) 10011.0011
Answer: (C). Rewrite the decimal number as a sum of powers of 2:

    (25.375)_{10} = 16 + 8 + 1 + 1/4 + 1/8 = 2^4 + 2^3 + 2^0 + 2^{−2} + 2^{−3} = (11001.011)_2

{ Source: Holistic Numerical Methods [5], quiz 01 04 }

Q1a–2021 . Suppose that you are working with some small quantity h ≪ 1. Which of the following expressions is NOT of O(h^4) as h → 0?
(A) h^5 + 7h^4 − 0.1h^3
(B) 56 h^4
(C) h^{27}
(D) 27 h^{4.3}
(E) All of the above
Answer: (A). Only response (A) has a term of the form c·h^p with p < 4 that goes to zero slower than h^4.
{ Source: JMS }

Q1a–2122 . Suppose that you are working with some large quantity N ≫ 1. Which of the following expressions is NOT of O(N^4) as N → ∞?
(A) (N^6 + 1) / (2N^2 − N + 1)
(B) 0.0001 N^4
(C) N^{3.6}
(D) N^4 + e^N
(E) All of the above
Answer: (D). Only response (D) has a term (e^N) that grows faster than N^4.
{ Source: JMS }

1b. Calculus Review


Q1b–123 . When estimating f'(2) for f(x) = x using the formula f'(x) ≈ (f(x + h) − f(x)) / h and h = 0.1, the truncation error is:
(A) 0
(B) 0.1
(C) 1
(D) 2
Answer: (A). This formula can be derived from the linear Taylor approximation, which is exact for linear
functions.
Q1b–224 . When estimating f'(2) for f(x) = x^2 using the formula f'(x) ≈ (f(x + h) − f(x)) / h and h = 0.1, the truncation error is:



(A) 0
(B) 0.1
(C) 1
(D) 2
Answer: (B). The forward difference gives ((2 + 0.1)^2 − 2^2)/0.1 = 4.1, while the exact derivative is f'(2) = 4, so the truncation error is 0.1.

Q1b–325 . Which of the following is NOT a Taylor series for f (x)?


(A) f(x + h) = f(x) + f'(x) h + (f''(x)/2!) h^2 + (f'''(x)/3!) h^3 + · · ·
(B) f(x) = f(x_0) + f'(x_0)(x − x_0) + (f''(x_0)/2!)(x − x_0)^2 + (f'''(x_0)/3!)(x − x_0)^3 + · · ·
(C) f(x) = f(0) + f'(0) x + (f''(0)/2!) x^2 + (f'''(0)/3!) x^3 + · · ·
(D) f(x) = f(x_0) + f'(x_0) x + (f''(x_0)/2!) x^2 + (f'''(x_0)/3!) x^3 + · · ·
Answer: (D).
Q1b–426 . The coefficient of x^3 in the Taylor series for f(x) = ln(x) centered about x = 1 is:
(A) 1/4
(B) 1/3
(C) 2/3
(D) −2/3
Answer: (B).

Q1b–527 . Using the Taylor series expansion of f (x) = cos(x) near x = 0, an approximation of f (h) for small h is:
(A) sin(h) / h
(B) (cos(h) − 1) / h
(C) sin(0)
(D) cos(0)
(E) 1 − h^2/2
Answer: (D). The first term in the Taylor series is cos(0) = 1. However, response (E) is also correct since it comes from expanding the Taylor series by an additional two terms:

    cos(h) ≈ cos(0) − sin(0) h − (1/2) cos(0) h^2

Q1b–628 . Using the Taylor series expansion of f (x) = cos(x) near x = 0, an approximation of f 0 (0) for small h is:
(A) sin(h) / h
(B) (cos(h) − 1) / h
(C) sin(0)
(D) cos(0)



(E) 1 − h^2/2

Answer: (B). Expand to get f(0 + h) ≈ f(0) + h f'(0) and then solve for f'(0).

Q1b–729 . Let f (x) be a function that is differentiable on the interval [−5, 5]. If f (−5) = 10, f (0) = −10, and
f (5) = 10, then which of the statements below must be TRUE?
I. For some c ∈ (−5, 5), f (c) = 0.
II. For some c ∈ (−5, 5), f'(c) = 0.

(A) I only
(B) II only
(C) Both I and II
(D) Neither I nor II

Answer: (C). II follows from the Mean Value Theorem.

Regarding I, the Intermediate Value Theorem (IVT) doesn’t apply directly to the interval (−5, 5) since f (−5) =
f (5). However, f changes sign on both intervals [−5, 0] and [0, 5]. So there are at least two roots in (−5, 5)
which means that I is also true.
{ Source: JMS }

Q1b–830 . Let f (x) be continuous and differentiable for all x ∈ [a, b] as shown in the plot below. Then there exists
some real number c ∈ (a, b) satisfying:

[Plot: a differentiable function f(x) on [a, b], with a point c marked between a and b]
(A) f(c) = (f(b) − f(a)) / (b − a)
(B) f'(c) = (f(b) − b) / (f(a) − a)
(C) f'(c) = 0
(D) f'(c) = (f(b) − f(a)) / (b − a)
Answer: (D). This is an application of the Mean Value Theorem.

Q1b–931 . The series Σ_{n=0}^{∞} (−4)^n x^{2n} / (2n)! is the Taylor series at x = 0 for the following function:

(A) cos(x)
(B) cos(2x)



(C) sin(x)
(D) sin(2x)
Answer: (B). Since cos(2x) = Σ_{n=0}^{∞} (−1)^n (2x)^{2n} / (2n)! = Σ_{n=0}^{∞} (−4)^n x^{2n} / (2n)!.
{ Source: Holistic Numerical Methods [5], quiz 01 07 }

Q1b–1032 . Given that f(2) = 6, f'(2) = −1/2 and f''(2) = 10, what is the most accurate Taylor polynomial approximation of f(2.2) that you can find?
(A) 5.9
(B) 6.1
(C) 6.2
(D) 7
Answer: (B). These three point values determine the second degree Taylor polynomial:

    f(x) ≈ f(2) + f'(2)(x − 2) + (1/2) f''(2)(x − 2)^2 = 6 − (1/2)(x − 2) + (10/2)(x − 2)^2

and then substituting x = 2.2 in the last expression gives

    f(2.2) ≈ 6 − (1/2)(0.2) + 5(0.2)^2 = 6 − 0.1 + 0.2 = 6.1

{ Source: JMS }
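The same arithmetic, written out as a short Matlab check (the numbers are those given in the question):

    a = 2;  x = 2.2;
    p2 = 6 + (-1/2)*(x - a) + (10/2)*(x - a)^2    % returns 6.1000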

Q1b–1133 . Which is the Taylor series for the function ln(x) at the point a = 1?
(A) (x − 1) − (1/2)(x − 1)^2 + (1/3)(x − 1)^3 − (1/4)(x − 1)^4 + · · ·
(B) (x − 1) − (x − 1)^2 + 2(x − 1)^3 − 6(x − 1)^4 + · · ·
(C) ln(x) + (1/x)(x − 1) − (1/x^2)(x − 1)^2 + (2/x^3)(x − 1)^3 − (6/x^4)(x − 1)^4 + · · ·
(D) ln(x) + (1/x)(x − 1) − (1/(2x^2))(x − 1)^2 + (1/(3x^3))(x − 1)^3 − (1/(4x^4))(x − 1)^4 + · · ·
Answer: (A). It’s easy to eliminate (C) and (D) since they are not polynomials.
{ Source: MathQuest [8], Taylor series }

Q1b–1234 . What function is represented by the Taylor series 1 − x^2/2 + x^4/24 − x^6/720 + · · · at a = 0?
(A) ex
(B) sin x
(C) cos x
(D) This is not a Taylor series
Answer: (C).
{ Source: MathQuest [8], Taylor series }

Q1b–1335 . Suppose that you determine the Taylor series for some function f (x) centered at a = 5. At which point x
would you expect a finite number of terms from this series to give a better approximation?
(A) x = 0
(B) x = 3
(C) x = 8
(D) There is no way to tell



Answer: (B). This is the closest point to x = 5 and so the remainder term suggests that the error is smallest
there. However, the precise error also depends on the derivative factor f (n) (c) for some value of c between a
and x, which can’t be determined without knowing f (x). So (D) is strictly correct.
{ Source: MathQuest [8], Taylor series }



2. Nonlinear Equations
Q2–136 . How many zeroes does a polynomial of degree n have?

(A) n + 2
(B) n + 1
(C) n
(D) n − 1
Answer: (C). The fundamental theorem of algebra guarantees that there are exactly n complex roots (or
zeroes), some of which might be repeated roots. However, if you are interested in real roots, then they could
number anywhere between zero and n.
{ Source: Holistic Numerical Methods [5] }

Q2–237 . How many real roots does the equation sin(x) − x = 0 have?

(A) 0
(B) 1
(C) 3
(D) Infinitely many

Answer: (B). Rewriting the equation as sin(x) = x suggests sketching the two functions y = x and y = sin(x)
on the same axes and looking for intersection points. Then it’s easy to confirm that x = 0 is the only real
root.
[Plot: y = x and y = sin(x) on the same axes for −10 ≤ x ≤ 10; the only intersection is at x = 0]

{ Source: MAH }

Q2–338 . How many real roots does the equation sin(x) − ax = 0 have? (a is some real constant)
(A) 0
(B) 1
(C) 3
(D) Infinitely many
(E) It depends on a
Answer: (E). By sketching the function y = sin(x) alongside various straight lines y = ax it is easy to
show that there is always at least 1 root (x = 0) and that there are infinitely many roots if a = 0. With a bit
more investigation, as a varies there can be any odd, positive number of roots. So strictly, responses (B)–(E)
are all correct, while response (A) is not.



[Plot: y = sin(x) together with a line y = a·x for −10 ≤ x ≤ 10]

{ Source: JMS }

Q2–439 . True or False: When searching iteratively for a solution x∗ of f (x) = 0, having a small value of the residual
|f (xk )| guarantees that xk is an accurate approximation of x∗ .

Answer: FALSE. A small residual |f(x_k)| only guarantees a small error when the root is well-conditioned; if |f'(x*)| is small, the error |x_k − x*| ≈ |f(x_k)|/|f'(x*)| can be large even though the residual is tiny.
{ Source: Heath [4], Review Question 5.1, p. 245 }

2a. Bisection Method


Q2a–140 . The function f(x) = (x + 1)(x − 1)(x − 3) is pictured in the plot. If the bisection algorithm is applied with initial interval [−4, 4], how many roots of f(x) will you be able to compute?
[Plot: f(x) = (x+1)(x−1)(x−3) for −4 ≤ x ≤ 4]

(A) 3
(B) 2
(C) 1
(D) none of the above
Answer: (C). All three roots (−1, 1 and 3) lie within the interval [−4, 4], so the bisection algorithm is
guaranteed to converge. However, the algorithm will only converge to one of those roots.
{ Source: MAH }

Q2a–241 . True or False: Let f(x) = x^2 − 2x + 1. The bisection method can be used to approximate the root of the function f(x) pictured.



[Plot: f(x) = (x − 1)^2 on 0 ≤ x ≤ 2, touching zero at x = 1]

Answer: FALSE. The root x = 1 is a double root where f(x) does not change sign, so there is no bracketing interval with f(a)·f(b) < 0 and the bisection method cannot be applied.
{ Source: MAH }

Q2a–342 . You are provided with a list of point values for the function f (x) = x + 10 − ex :

x 0 1 2 3 4
f (x) 9.000 8.282 4.611 −7.086 −40.598

Use this data to perform two steps of the bisection method for solving f (x) = 0, assuming the initial interval
[0, 4]. What is the approximation of the root?
(A) 1
(B) 2
(C) 3
(D) 4
Answer: (C). Step 1: the midpoint of [0, 4] is x = 2, where f(2) > 0 and f(4) < 0, so the new interval is [2, 4]. Step 2: the midpoint of [2, 4] is x = 3.
{ Source: MACM 316 final exam question (Fall 2018) }

Q2a–443 . You are computing a root of f (x) = cos(x) − x on the interval [0, 2] using the bisection method. If you
require a result that is accurate to within five significant digits, which of the following bounds holds for the minimum number of bisection iterations k required?
(A) 1/2^{k−1} < 10^{−5}
(B) 1/2^k < 10^{−5}
(C) 2/2^{k+1} < 10^{−5}
(D) none of the above
Answer: (A).
{ Source: MAH }

Q2a–544 . Which of the statements below regarding the convergence of the bisection method is TRUE?
I. The iteration is always guaranteed to converge.
II. The order of convergence is linear.
III. The asymptotic rate of convergence is 1/2.



(A) I and II
(B) II and III
(C) I and III
(D) I, II and III

Answer: (D). All three statements are true, but only assuming that you choose an initial interval [a, b]
having f (a) · f (b) < 0. If that’s not the case, then the method will not converge so none of the statements
hold.
{ Source: JMS, MACM 316 lecture notes }

Q2a–645 . You are given an interval [a, b] where f (a)·f (b) < 0. Which of the statements below regarding the bisection
method is TRUE?
I. There must be a root x∗ somewhere in the interval (a, b).
II. The absolute error in the first step of bisection is |x_1 − x*| ≤ (b − a)/2.
III. The bisection iteration is guaranteed to converge if |f'(x)| < 1.

(A) I and II
(B) II and III
(C) I and III
(D) I, II and III

Answer: (A).
{ Source: JMS, MACM 316 lecture notes }

Q2a–746 . Which of the following is a suitable initial interval for computing a positive root of the equation
g(x) = 2x cos(πx) − e^{x−1} = 0
using the bisection method?
(A) [−1, 0]
(B) [−1, 1]
(C) [0, 1]
(D) [0, 2]
Answer: (D). From the interval end-point values

    g(−1) = 2 − e^{−2} > 0,   g(0) = −e^{−1} < 0,   g(1) = −2 − e^0 = −3 < 0,   g(2) = 4 − e > 0,

the only interval that is a valid bracket for a positive root is [0, 2], since g(0) · g(2) < 0. Responses (A) and (B) are also brackets, but [−1, 0] can only contain negative roots, while [−1, 1] may or may not have a positive root. This plot displays the function, the interval end-points, and the desired positive root:

[Plot: g(x) on −1 ≤ x ≤ 2, showing the interval end-points and the desired positive root]



{ Source: JMS }

Q2a–847 . This partial code implements the bisection method for finding a root of the nonlinear equation f(x) = 0. The Matlab function f computes the function values and you can assume that you start with an interval [a, b] satisfying f(a) · f(b) < 0. Select the suitable replacement code for the blanks marked 1 and 2 from the list below.

    tol = 1e-6;              % tolerance
    fa = f(a);  fb = f(b);
    done = 0;
    while( (b-a)/2 > tol ),
        mid = (a+b)/2;
        fmid = f(mid);
        if fmid == 0,        % mid is a solution
            break;
        end
        if sign(fmid)*sign(fa) < 0,
            ... 1 ...
        else
            ... 2 ...
        end
    end
(A) 1 b=mid; fb=fmid; 2 a=mid; fa=fmid;
(B) 1 a=mid; fa=fmid; 2 b=mid; fb=fmid;
(C) 1 a=mid; fb=fmid; 2 b=mid; fa=fmid;
(D) 1 a=mid; 2 b=mid;

Answer: (A).
{ Source: MACM 316 midterm question (Fall 2018) }

2b. Fixed Point Method


Q2b–148 . Consider the fixed point iteration x_{k+1} = g(x_k) with g(x) = x/3 + 4/(3x). Which root-finding problem is this equivalent to?
(A) x^2 − 2 = 0
(B) x/3 + 4/(3x) = 0
(C) 1/3 − 4/(3x^2) = 0
(D) x − 1/3 + 4/(3x^2) = 0
Answer: (A).
{ Source: MAH }

Q2b–249 . Assuming that the following fixed point iteration converges,

    x_{k+1} = (1/2) (x_k + 3/x_k),

to which fixed point will it converge?

(A) √3
(B) √2
(C) √5
(D) none of the above


Answer: (A).
{ Source: MAH }

Q2b–350 . For which of the functions below is x∗ = 5 a fixed point?
(A) g(x) = x/√5
(B) g(x) = 5√x
(C) g(x) = x^2 − 4x
(D) g(x) = 1 + 4/(x + 1)
Answer: (D).
{ Source: MAH }

Q2b–451 . Which of the following is a suitable fixed point iteration for solving the equation cos(x) − x = 0?
I. xk+1 = cos(xk )
II. xk+1 = cos−1 (xk )
III. xk+1 = xk + tan(xk )

(A) I
(B) I and II
(C) I and III
(D) I, II and III

Answer: (B).
{ Source: MAH }

Q2b–552 . The intersection points between the curves y = x and y = g(x) are x = 0 and x = 4, as shown in the plot. Which of the statements below regarding the fixed point iteration x_{k+1} = g(x_k) is TRUE?
I. If x_0 = 2 then x_k converges to 4.
II. If x_0 = 1 then x_k converges to 0.
III. If x_0 = 6 then x_k converges to 4.
[Plot: y = x and y = g(x), intersecting at x = 0 and x = 4]

(A) I and II
(B) II and III
(C) I and III
(D) I, II and III
Answer: (C).
{ Source: MAH }



Q2b–653 . The intersection point between the curves y = x and y = g(x) is x = x*, as shown in the plot. Which of the statements below regarding the fixed point iteration x_{k+1} = g(x_k) is TRUE?
I. Starting at x_0 = 2, the iteration converges to x*.
II. Starting at x_0 = 1, the iteration converges to x*.
[Plot: y = x and y = g(x), intersecting at x = x*]

(A) I
(B) II
(C) I and II
(D) Neither I nor II
Answer: (D).
{ Source: MAH }

Q2b–754 . Which of these statements is TRUE?


I. The fixed point and bisection methods have the same order of convergence.
II. If the iteration function g(x) for the fixed point method satisfies |g'(x)| < 1, then the fixed point algorithm
converges faster than bisection.
III. When it converges, the fixed point method always converges slower than bisection.

(A) I and II
(B) II and III
(C) I and III
(D) I, II and III

Answer: (C). Statement II is false in general, but could be true if |g'(x)| < 1/2, since then the asymptotic rate constant for the fixed point method is smaller than that of bisection.
{ Source: JMS, MACM 316 lecture notes }

Q2b–855 . This partial code implements the fixed point algorithm for finding a root of the equation x = g(x), where the Matlab function g returns the value of the fixed point iteration function. Select the best choice of terminating condition for this algorithm to fill in the missing condition in the if statement.

    tol = 1e-6;       % tolerance
    x = xguess;       % initial guess
    done = 0;
    while( ~done ),
        niter = niter + 1;
        x0 = x;
        x = g(x);
        if ...[fill in blank]...
            done = 1;
        end
    end
(A) abs(x) < tol
(B) abs(x - x0) < tol
(C) x - x0 < tol



(D) abs(g(x)) < tol

Answer: (B).
{ Source: MACM 316 midterm question (Fall 2018) }

2c. Secant and Newton’s Methods



Q2c–156 . The irrational number √2 can be approximated by applying the secant method to the nonlinear equation x^2 − 2 = 0. What is the iteration formula?
(A) x_{k+1} = x_k − (x_k^2 − 2) / (x_k + x_{k−1})
(B) x_{k+1} = x_k − (x_k^2 − 2)(x_k − x_{k−1}) / (x_k + x_{k−1})
(C) x_{k+1} = x_k − (x_k^2 − 2)(x_k − x_{k−1}) / (x_k^2 + x_{k−1}^2)
(D) x_{k+1} = x_k − (x_k^2 − 2)(x_k − x_{k−1}) / (x_k^2 − x_{k−1}^2 − 4)

Answer: (A).  x_{k+1} = x_k − f(x_k)(x_k − x_{k−1}) / (f(x_k) − f(x_{k−1})) = x_k − (x_k^2 − 2)(x_k − x_{k−1}) / (x_k^2 − x_{k−1}^2) = x_k − (x_k^2 − 2) / (x_k + x_{k−1})
{ Source: MAH }
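A minimal Matlab sketch of this secant iteration; the starting values x0 = 1 and x1 = 2 are an assumption made here for illustration:

    f = @(x) x.^2 - 2;
    x0 = 1;  x1 = 2;                                 % assumed starting values
    for k = 1:6
        x2 = x1 - f(x1)*(x1 - x0)/(f(x1) - f(x0));   % one secant step
        x0 = x1;  x1 = x2;
    end
    x1 - sqrt(2)                                     % error is near round-off level after six steps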

Q2c–257 . True or False: The convergence of the secant method to simple roots is quadratic.

Answer: FALSE. The convergence of the secant method to simple roots is superlinear, and lies between
linear and quadratic convergence (the order of convergence is actually p ≈ 1.618).
{ Source: MAH }

Q2c–358 . True or False: Both Newton’s method and the secant method are examples of fixed-point iterations.

Answer: FALSE. Newton's method is a fixed point iteration, but the secant method is not.
Q2c–459 . The irrational number e can be approximated by applying Newton’s method to solve the nonlinear equation
f (x) = ln x − 1 = 0. What is the Newton iteration formula?

(A) xk+1 = ln xk
(B) xk+1 = xk − ln xk
(C) xk+1 = xk − (ln xk − 1)
(D) xk+1 = 2xk − xk ln xk

Answer: (D). With f'(x) = 1/x, the Newton step is x_{k+1} = x_k − (ln x_k − 1)/(1/x_k) = x_k − x_k(ln x_k − 1) = 2x_k − x_k ln x_k.
{ Source: MAH }

Q2c–560 . The irrational number √2 can be approximated by applying Newton's method to the nonlinear equation f(x) = x^2 − 2 = 0. What is the Newton iteration formula?
(A) x_{k+1} = x_k + (x_k^2 − 2) / (2x_k)
(B) x_{k+1} = x_k − (x_k^2 − 2) / x_k
(C) x_{k+1} = x_k − (x_k^2 − 2) / x_k
(D) x_{k+1} = x_k − (x_k^2 − 2) / (2x_k)



(E) x_{k+1} = x_k/2 + 1/x_k
Answer: (D). Response (E) is also correct, since it's just a simplified version of (D).

Indeed, (E) is a remarkably efficient formula that has been used to implement the square root operation on many modern floating-point processors! Generalizing to the case of finding √c as a root of f(x) = x^2 − c, the Newton iteration is

    x_{k+1} = x_k/2 + c/(2x_k) = (1/2)(x_k + c/x_k)

Counting the flops involved, each iteration requires a reciprocal (1/x_k), one multiplication by c, one addition, and one division by 2 (which is just a bit-shift in binary). This is an incredibly cheap way to approximate √c when the number of Newton iterations is small.
{ Source: MAH }
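A short Matlab check of this square-root iteration for c = 2; the initial guess x = 1.5 is an assumption made for illustration:

    c = 2;  x = 1.5;                % assumed initial guess
    for k = 1:4
        x = 0.5*(x + c/x);          % one Newton step for f(x) = x^2 - c
    end
    x - sqrt(c)                     % error is at round-off level after a few steps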

Q2c–661 . Apply Newton's method to the equation f(x) = x^2 − 2 = 0 to estimate the root x* = √2. Starting with
the initial guess x0 = 1, the first iteration x1 is:

(A) 0.5
(B) 1.0
(C) 1.5
(D) 2.0
Answer: (C). The Newton iteration is x_1 = x_0 − (x_0^2 − 2)/(2x_0) = x_0/2 + 1/x_0. Setting x_0 = 1 gives x_1 = 3/2.
{ Source: MAH }

Q2c–762 . For which values of x does Newton’s method exhibit linear and quadratic order of convergence when applied
to the function f(x) = x(x − 3)^2?
(A) linear near x = 0 and quadratic near x = 3
(B) linear near x = 3 and quadratic near x = 0
(C) linear convergence everywhere
(D) quadratic convergence everywhere
Answer: (B).
{ Source: MAH }

Q2c–863 . Which of these statements is TRUE?


I. Newton’s method may converge linearly
II. Newton’s method may converge quadratically
III. Newton’s method may not converge at all

(A) none is true


(B) I and II only
(C) II and III only
(D) I, II and III

Answer: (D).
{ Source: Heath [4], Review Question 5.16, p. 246 }

Q2c–964 . Which of these statements is TRUE?

I. Newton’s method is guaranteed to converge



II. Newton’s method has a quadratic order of convergence

(A) I
(B) II
(C) I and II
(D) Neither I nor II

Answer: (D).
{ Source: JMS, MACM 316 lecture note }

Q2c–1065 . The Newton iteration x_{k+1} = x_k − f(x_k)/f'(x_k) can be rewritten as a fixed point iteration x_{k+1} = g(x_k) where g(x) = x − f(x)/f'(x). Which of these statements is TRUE?
I. Newton's method fails when f'(x_k) = 0.
II. The iteration converges if |g'(x_k)| < 1.

(A) I
(B) II
(C) I and II
(D) Neither I nor II

Answer: (C).
{ Source: MAH }

Q2c–1166 . Which of these statements regarding the secant method is TRUE?


I. It is an example of a fixed-point iteration.
II. It requires two initial guesses.
III. No derivative calculation is needed.

(A) I
(B) I and II only
(C) II and III only
(D) I, II and III

Answer: (C).
{ Source: JMS, MACM 316 lecture notes }

Q2c–1267 . This partial code implements Newton's method for finding a root to the nonlinear equation f(x) = 0, where Matlab functions f and fprime compute the function and its derivative. Select a suitable terminating condition (to replace the blank if statement) that will best improve the robustness of the code.

    tol = 1e-6;       % tolerance
    x = 2.0;          % initial guess
    done = 0;
    while( ~done ),
        xlast = x;
        fx = f(x);
        fpx = fprime(x);
        x = xlast - fx / fpx;
        if ...[fill in blank]...
            done = 1;
        end
    end



(A) abs(x) < tol
(B) abs(x - xlast)/abs(x) < tol
(C) abs(fx) < tol
(D) abs(x - xlast)/abs(x) < tol & abs(fx) < tol
Answer: (D).
{ Source: MACM 316 midterm question (Fall 2018) }

Q2c–1368 . This partial code implements the method of false position for finding a root of the nonlinear equation f(x) = 0, where the Matlab function f computes the function value. Provide suitable code to replace the blanks marked 1 and 2.

    while( abs(x0-x1) > tol ),   % absolute error tolerance
        x2 = x1 - f(x1)*(x1-x0)/(f(x1)-f(x0));
        if f(x2)*f(x1) > 0,
            ... 1 ...
        else
            ... 2 ...
        end
    end
(A) 1 x1 = x2; 2 x0 = x2;
(B) 1 x2 = x1; 2 x2 = x0;
(C) 1 x0 = x2; 2 x1 = x2;
(D) 1 x2 = x0; 2 x2 = x1;
Answer: (A).
{ Source: MACM 316 midterm question (Fall 2018) }

Q2c–1469 . After writing a Matlab code that implements Newton’s method, you apply it to the function f (x) =
2x3 − 3x2 + 4x − 6. Using an initial guess of x0 = 1.5, your code returns a result of “-Inf” on the first
iteration. What is the most likely explanation for this behaviour?

(A) f (x) has a simple root at x = 1.5.


(B) f (x) has a multiple root at x = 1.5.
(C) f'(x) has a root at x = 1.5.
(D) Accumulation of round-off error due to subtractive cancellation.

Answer: (C). This one is a little bit tricky. You need to look at the Newton iteration

    x_{k+1} = x_k − f(x_k) / f'(x_k)

and see there are two possible invalid floating point operations:
• ±Inf or “1/0”: occurs if f'(x_k) = 0 and f(x_k) ≠ 0. This is case (C).
• NaN or “0/0”: occurs if both f'(x_k) = 0 and f(x_k) = 0. This is response (B) – a root with multiplicity 2.

{ Source: JMS }

Q2c–1570 . Which of the following is an advantage of the secant method over Newton’s method?

(A) lower cost per iteration


(B) faster convergence rate
(C) more robust convergence far away from the solution
(D) avoids computing the derivative



Answer: (A). The secant method only requires a single function evaluation in each step (as long as the
previous value is saved), whereas the Newton iteration must compute values of both f and f 0 in every step.
This means that (D) is also correct.
{ Source: Heath [4], Review Question 5.38, p. 247 }

Q2c–1671 . For the function f (x) plotted below, use each of the four points x1 , x2 , x3 and x4 as the starting guess for
Newton’s method. For which point(s) do you expect the iteration to converge to the root shown?

[Plot: f(x) with the points x1, x2, x3, x4 marked on the x-axis and the root shown]

(A) x1 and x2 only


(B) x2 only
(C) x1 , x2 and x3 only
(D) All four points
(E) none of the points

Answer: (C). Check by plotting the Newton iterations on the graph.


{ Source: GoodQuestions [7], Newton’s method }

Q2c–1772 . For the function f (x) plotted below, use each of the intervals [x1 , x2 ], [x1 , x3 ] and [x1 , x4 ] as the starting
guess for the secant method. For which interval do you expect the iteration to converge to the root shown?

[Plot: the same f(x) with the points x1, x2, x3, x4 marked on the x-axis]

(A) [x1 , x2 ]
(B) [x1 , x3 ]
(C) [x1 , x4 ]



(D) none of the intervals

Answer: (A). Check by plotting the secant lines joining each pair of points.
{ Source: GoodQuestions [7], Newton’s method }

Q2c–1873 . Let f be a differentiable function that is defined for all x. Starting Newton's method at a point x_0 where f'(x_0) = 0 is . . .
(A) a good choice, because x = x0 is a critical point of f and Newton’s method will converge most rapidly
to the root from there
(B) a bad choice, because the Newton iteration fails
(C) usually a bad choice, but it might work if we’re lucky
Answer: (B). Strictly, (C) is also correct since it's possible we might have started with an x_0 that is a root – this is what is meant by “lucky”. In every other case, the Newton iteration formula fails because of division by zero.
{ Source: GoodQuestions [7], Newton’s method }

Q2c–1974 . Newton’s method is an appealing root-finding technique because . . .



(A) it can be used to find quick and accurate approximations to radical numbers like 3^{1/4} or 13^{1/8}
(B) it can reliably determine all real roots of an nth degree polynomial
(C) it can find a solution to 7x^8 + x^4 + 1 = 0
(D) it converges faster than other root-finding methods

Answer: (A). The radical expression a^{1/n} is a root of f(x) = x^n − a = 0, which can always be found with Newton's method (plot f(x) to see why). Response (B) is incorrect because Newton's method requires a good initial guess for each root, which can be difficult/impossible in practice. Response (C) is incorrect because there is no real root (just rewrite as 7x^8 = −1 − x^4, “positive equals negative”). Response (D) is often true, but not always.
{ Source: GoodQuestions [7], Newton’s method }

2d. General Aspects of Convergence


Q2d–175 . Suppose you apply an iterative method and obtain the following errors from the first four steps:

10^{−2}, 10^{−4}, 10^{−6}, 10^{−8}, . . .

How would you characterize the order of convergence of this method?


(A) linear
(B) super-linear
(C) quadratic
(D) faster than quadratic
Answer: (A). The error E_k is reduced by a factor of 100 in every iteration and so can be written E_k = 10^{−2} E_{k−1}. According to the definition of convergence, this is linear (order 1) convergence with an asymptotic rate constant equal to 10^{−2}.
{ Source: Heath [4], adapted from Review Question 5.9, p. 245 }

Q2d–276 . Suppose you apply an iterative method and obtain the following errors from the first four steps:

10^{−2}, 10^{−4}, 10^{−8}, 10^{−16}, . . .

How would you characterize the order of convergence of this method?


(A) linear



(B) super-linear
(C) quadratic
(D) faster than quadratic
Answer: (C). The error E_k is squared in every iteration and so can be written E_k = E_{k−1}^2. According to the definition, this is quadratic (order 2) convergence with asymptotic rate constant equal to 1. Note that the response (B) is strictly also correct, although (C) is more precise.
{ Source: Heath [4], adapted from Review Question 5.9, p. 245 }

Q2d–377 . Suppose you apply an iterative method and obtain the following errors from the first six steps of an iterative
method:

0.01369, 0.02553, 0.01158, 0.01044, 0.009781, 0.008235, . . .

How would you characterize the order of convergence of this method?


(A) not converging
(B) linear
(C) quadratic
(D) impossible to determine
Answer: (A). It's actually very hard to tell in this case, and it would be helpful if there were more iterations to base the decision on. But we can make an educated guess. At best, the method is converging very slowly, perhaps linearly with a rate constant just slightly less than 1. So (A), (B) and (D) are all reasonable responses. The anomalous “blip” at the start can probably be ignored because convergence isn't always uniform.
{ Source: JMS }

Q2d–478 . If you have an iterative method that converges linearly with rate constant 1/2, how many iterations are
required for the initial error to be reduced by a factor of at least 1000?

(A) 5
(B) 10
(C) 20
(D) 100

Answer: (B). The error from step k obeys E_k ≤ (1/2) E_{k−1}  ⟹  E_k ≤ (1/2)^k E_0. This means that we need (1/2)^k ≤ 1/1000  ⟹  2^k ≥ 1000  ⟹  k ≥ log_2(1000) ≈ 9.97.
{ Source: JMS }

Q2d–579 . If an iterative method approximately squares the error in every two iterations then what is its order of
convergence?

(A) 0.5
(B) 1
(C) √2
(D) 2

Answer: (C). We're given that the error in step k satisfies E_k ≈ E_{k−2}^2. But from the definition of convergence for a method of order p, the error in step k also satisfies E_k ≈ E_{k−1}^p ≈ E_{k−2}^{p^2}. Comparing these two expressions for E_k suggests that p^2 = 2, or p = √2.
{ Source: Heath [4], adapted from Review Question 5.14, p. 246 }

Q2d–680 . For some root finding method, you determine that the absolute error in step k satisfies E_k ≤ 0.2 E_{k−1}^3.
Which of the statements below is TRUE?



(A) The method is superlinearly convergent.
(B) The order of the method is 0.2.
(C) The asymptotic rate of convergence is log10 (0.2).
(D) The iteration will always converge.
(E) All of the above.
Answer: (A). Based on our definition of convergence, α = 0.2 is the asymptotic rate constant and the order
of convergence is p = 3 (faster than linear). Response (D) is not always true because the iteration may not
converge if the initial guess has an error that satisfies E0 > 1.
{ Source: JMS }

Q2d–781 . You use two different iterative methods to compute an approximation, and the errors from each method
are plotted below versus the iteration number:

[Plot: errors E_k versus iteration k for Method 1 and Method 2, plotted on linear axes]

What can you conclude about the iterative convergence of the two methods?
(A) Both methods converge quadratically, but Method 1 has a smaller rate constant.
(B) The order of convergence for Method 1 is linear and Method 2 is quadratic.
(C) Both methods converge faster than linear, but the order cannot be estimated.
(D) No conclusion is possible based on this plot.
Answer: (D). The order of convergence can't be estimated using this error plot with both axes having linear scales. This is because the definition of convergence reads

    E_k ≤ α E_{k−1}^p   ⟹   E_k ≤ α^k E_0^{p^k}

(for order p and rate α), which requires taking a logarithm of E_k to reliably estimate either p or α.
{ Source: JMS }

Q2d–882 . You use two different iterative methods to compute an approximation, and the errors from each method
are plotted below versus the iteration number:



[Plot: errors E_k versus iteration k for Method 1 and Method 2, with a logarithmic scale on the error axis]

What can you conclude about the iterative convergence of the two methods?
(A) Both methods converge quadratically, but Method 1 has a smaller rate constant.
(B) The order of convergence for Method 1 is linear and Method 2 is quadratic.
(C) Method 1 converges linearly and Method 2 is super-linear.
(D) No conclusion is possible based on this plot.

Answer: (C). The points from Method 1 appear to lie on a straight line. Method 2 curves downward so
it’s definitely superlinear. It might converge quadratically but that’s not possible to estimate from this semi-log
plot. Instead, you would have to plot log log Ek (double-log) versus k.
{ Source: JMS }
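A minimal Matlab sketch (with made-up error sequences) of the kind of semi-log plot being described:

    k = 1:10;
    E_lin  = 0.5.^k;               % linear convergence, rate constant 1/2: a straight line on semilogy
    E_quad = 0.5.^(2.^k);          % quadratic convergence: curves downward
    semilogy(k, E_lin, 'o-', k, E_quad, 's-');
    xlabel('Iteration, k');  ylabel('Error, E_k');  legend('linear', 'quadratic');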

2e. Nonlinear Systems of Equations


[ nothing here yet ]



3. Linear Systems of Equations
3a. Review of Linear Algebra
Q3a–183 . Which of the following properties of the matrix determinant is TRUE?
(A) Only square matrices have a determinant
(B) For the identity matrix, det(I) = 1
(C) det(AB) = det(A) det(B)
(D) det(AT ) = det(A)
(E) All of the above
Answer: (E).
{ Source: JMS, MACM 316 lecture notes }

Q3a–284 . If A is an invertible matrix then which of these statements is TRUE?


(A) det(A) ≠ 0
(B) Ax = b has a unique solution for any vector b
(C) Ax = 0 has only the trivial solution x = 0
(D) AT is invertible
(E) All of the above
Answer: (E).
{ Source: JMS, MACM 316 lecture notes }

Q3a–385 . True or False: If a matrix A is nonsingular then the number of solutions to the linear system Ax = b
depends on the particular choice of right hand side vector b.

Answer: FALSE. There is a unique solution for any b.


{ Source: Heath [4], Review Question 2.1, p. 92 }

Q3a–486 . True or False: If a triangular matrix has a zero entry on its main diagonal then the matrix must be
singular.

Answer: TRUE. The determinant of any triangular matrix is the product of the diagonal entries and so its
determinant of this matrix is zero.
{ Source: Heath [4], Review Question 2.3, p. 92 }

Q3a–587 . If A is a 5 × 5 matrix then what is the value of det(3A)?


(A) (1/3^5) det(A)
(B) (1/5^3) det(A)
(C) 3^5 det(A)
(D) 5^3 det(A)
Answer: (C).
{ Source: MAH }

Q3a–688 . If A is a 2 × 2 invertible matrix then what is the inverse of 2A?


(A) (1/2) A^{−1}
(B) (1/4) A^{−1}



(C) 2 A^{−1}
(D) 4 A^{−1}
Answer: (A). Notice that 2A · (1/2)A^{−1} = AA^{−1} = I.
{ Source: MathQuest [8], Linear Algebra, Matrix Inverses }

Q3a–789 . True or False: ‖A‖_1 = ‖A^T‖_∞.

Answer: TRUE. The 1-norm is the maximum column sum, the ∞-norm is the maximum row sum, and
the transpose “flips” the matrix so the columns turn into rows.
{ Source: Heath [4], Review Question 2.24, p. 93 }

Q3a–890 . Which of the following is a reliable indicator that a matrix is nearly singular?
(A) a small determinant
(B) a small norm
(C) a large norm
(D) a large condition number
Answer: (D). Both (A) and (D) are acceptable answers, at least in exact arithmetic. But floating-point
calculations of determinants can lead to very large growth in round-off error, so the condition number tends
to be a more reliable test.
{ Source: Heath [4], Review Question 2.62, p. 95 }

Q3a–991 . Which of the following matrices is ill-conditioned?


(A) [ 10^{10}  0;  0  10^{−10} ]
(B) [ 10^{10}  0;  0  10^{10} ]
(C) [ 10^{−10}  0;  0  10^{−10} ]
(D) [ 1  2;  2  4 ]
(E) All of the above
Answer: (A). This matrix has condition number (in the 1-norm)

    ‖A‖_1 · ‖A^{−1}‖_1 = ‖[ 10^{10} 0; 0 10^{−10} ]‖_1 · ‖[ 10^{−10} 0; 0 10^{10} ]‖_1 = 10^{10} · 10^{10} = 10^{20}   (HUGE)

But response (D) is also correct since that matrix is singular and so has condition number ∞. The matrices
in (B) and (C) are multiples of the identity matrix, so they are perfectly conditioned.
{ Source: Heath [4], Review Question 2.61, p. 95 }
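These condition numbers are easy to confirm in Matlab with the built-in cond(A, p) (here in the 1-norm):

    cond([1e10 0; 0 1e-10], 1)    % 1e20: severely ill-conditioned
    cond([1e10 0; 0 1e10], 1)     % 1: a multiple of the identity is perfectly conditioned
    cond([1 2; 2 4], 1)           % Inf (Matlab warns that the matrix is singular)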

Q3a–1092 . You are given a system of equations containing a real parameter a:

2x + 3y = 5
4x + ay = 8

Which of these statements is TRUE?


(A) For a = 6, the system has no solution.
(B) For a = 6, the system has infinitely many solutions.
(C) The system has a unique solution for any real value of a.
(D) The two lines intersect when a = 6.



Answer: (A).
{ Source: MAH }

Q3a–1193 . What are the eigenvalues of the matrix


 
    [ 1  4  6
      0  2  5
      0  0  3 ] ?

(A) 1, 2, 3
(B) 1, 4, 6
(C) 1, 0, 0
(D) 4, 5, 6

Answer: (A).
{ Source: MAH }

Q3a–1294 . Which of the following matrices is strictly diagonally dominant, and hence invertible?
       
I.   [ −6  0  3;  7  8 −2;  1 −1  2 ]
II.  [ −6  0  3;  5  8 −2;  1 −1  2 ]
III. [ −6  0  3;  5  8 −2;  1 −1  3 ]
IV.  [ −6  0  3;  1  2 −2;  1 −1  8 ]

(A) II
(B) III
(C) II and III
(D) IV
(E) All are strictly diagonally-dominant
Answer: (A).
{ Source: JMS }

Q3a–1395 . True or False: It’s a well-known fact that every strictly diagonally-dominant matrix is nonsingular. Is it
also true that every nonsingular matrix is strictly diagonally-dominant?
 
Answer: FALSE. A simple counterexample is the matrix A = [0 1; 1 0] (the permuted identity matrix). This is clearly invertible (A^{−1} = A) but not diagonally-dominant (DD). This brings home an important point: checking for DD is a really easy test to determine whether a matrix is nonsingular, but it doesn't work for all nonsingular matrices (that is, some matrices that aren't DD can still be invertible, like A above).
{ Source: JMS }
     
Q3a–1496 . Find w = [ −1; 4; 12 ] as a linear combination of u = [ −2; 2; 3 ] and v = [ 1; 0; 2 ].

(A) w = 2u + 3v
(B) w = 2u − 3v
(C) w = u + 3v
(D) w = 2u + v
     
Answer: (A). Check: [ −1; 4; 12 ] = 2 [ −2; 2; 3 ] + 3 [ 1; 0; 2 ].
{ Source: MAH }



Q3a–1597 . Which of the graphs below represents the following system of equations?

x + 2y = 5
2x + 4y = 10

[Plots (A)–(D): four candidate graphs, each showing lines in the (x, y) plane for −1 ≤ x ≤ 2]
Answer: (B). The second equation is simply the first multiplied by 2, so these are just two equations for
the same line (both with negative slope).
{ Source: MAH }

Q3a–1698 . What is the solution to the following system of equations?

−3x + 6y = 3
2x − 4y = 4

(A) x = −1 and y = 0
(B) x = 3 and y = 2
(C) x = 3 and y = 1/2
(D) There are no solutions
(E) There are an infinite number of solutions
Answer: (D). The equations can be scaled to get x − 2y = −1 and x − 2y = 2, which are inconsistent with each other.
{ Source: JMS }



Q3a–1799 . Which of the following linear systems is depicted in the graph?
[Plot: two distinct parallel lines with negative slope in the (x, y) plane]

(A) x + 2y = 5, 2x + 4y = 10
(B) x + 2y = 5, 2x + 4y = 8
(C) x + 2y = 5, 2x − 4y = 10
(D) x − 2y = 5, 2x + 4y = 8
Answer: (B). Both lines in the plot have the same slope (−1/2), so we need to choose between (A) and (B). Option (A) gives two identical lines and so the correct answer must be (B).
{ Source: MAH }

Q3a–18100 . A system of three linear equations in two unknowns is pictured in the plot. How many solutions does this system have?
[Plot: three lines in the (x, y) plane with no common intersection point]

(A) 0
(B) 1
(C) 3
(D) Infinitely many
Answer: (A). There is no common intersection point and so this system has no solution.
{ Source: MathQuest [8], Linear Algebra, Systems of Equations }

Q3a–19101 . True or False: For any n × n matrix A, the equation Ax = 0 has the unique solution x = 0.

Answer: FALSE. The equation Ax = 0 has x = 0 as a unique solution if and only if A is nonsingular,
which is equivalent to saying that A^{−1} exists or det(A) ≠ 0.
{ Source: MAH }

Q3a–20102 . Suppose that both A and B are n × n matrices. Which of the following statements is FALSE?



(A) det(A + B) = det(A) + det(B)
(B) det(AB) = det(A) det(B)
(C) det(kA) = k n det(A)
(D) None of the above
Answer: (A).
{ Source: MAH }

Q3a–21103 . True or False: If A is any n × n nonsingular matrix, then cond(A) = cond(A−1 ).

Answer: TRUE. cond(A−1 ) = ||A−1 || · ||(A−1 )−1 || = ||A−1 || · ||A|| = cond(A)


{ Source: Heath [4], Review Question 2.25, p. 93 }

3b. Gaussian Elimination and Pivoting


Q3b–1104 . Fill in the blank: The goal of the row-reduction phase in the Gaussian elimination algorithm is to convert
the coefficient matrix into a matrix.

(A) diagonal
(B) identity
(C) lower triangular
(D) upper triangular

Answer: (D).
{ Source: JMS }

Q3b–2105 . If A is a singular matrix, then which of the following statements is FALSE?

(A) det(A) = 0
(B) A zero entry arises in the pivot position as Gaussian elimination with partial pivoting is applied to A
(C) Ax = 0 has only the trivial solution x = 0
(D) cond(A) = ∞

Answer: (B).
{ Source: JMS }

Q3b–3106 . When solving an upper or lower triangular system of size n × n, the computational cost (measured in terms
of multiplication and division operations) is roughly equal to . . .

(A) n^2
(B) (1/2) n^2
(C) n^3
(D) n

Answer: (A).
{ Source: JMS }

Q3b–4107 . Which of the following matrices CANNOT be obtained from


 
    A = [ 2  1  3  1
          0  1  3  4
          1  2  0  4 ]

using elementary row operations?



 
(A) [ 2  4  0  8
      0  1  3  4
      2  1  3  1 ]
(B) [ 2  1  3  1
      0  1  3  4
      1  3  3  8 ]
(C) [ 1  2  3  1
      1  0  3  4
      2  1  0  4 ]

Answer: (C). Response (A) results from r3 ← 2r3 followed by the row swap r1 ↔ r3 . Response (B) results
from r3 ← r2 + r3 . Response (C) involves a column swap.
{ Source: MathQuest [8], Gaussian elimination }

Q3b–5108 . Which of the following operations on an augmented matrix could change the solution of the corresponding
linear system?
(A) Interchanging two rows
(B) Multiplying one row by any constant
(C) Adding one row to another
(D) None of the above
Answer: (B). Because every response seems to correspond to a row operation, it’s very easy to make the
mistake of choosing (D). However, response (B) could be an invalid row operation if the multiple is zero. So
response (D) is only correct as long as it’s understood that the constant multiple is non-zero.
{ Source: MathQuest [8], Gaussian elimination }

Q3b–6109 . What is the value of α so that the linear system represented by the following augmented matrix has
infinitely many solutions?
 
    [ 2  6  8
      1  α  4 ]

(A) α = 0
(B) α = 2
(C) α = 3
(D) α = 4
(E) There is always a unique solution
Answer: (C). The two rows are linearly independent as long as α ≠ 3; when α = 3, row 2 is a multiple of row 1 and the system becomes underdetermined.
{ Source: MathQuest [8], Gaussian elimination }

Q3b–7110 . Let R be the row-reduced echelon form of an n × n matrix A. Then . . .


(A) R is the identity
(B) R has at least one row of zeroes
(C) Neither (A) nor (B)
(D) Both (A) and (B)
(E) The answer depends on the matrix A

Answer: (E).
{ Source: MathQuest [8], Gaussian elimination }



Q3b–8111 . When solving a linear system of size n × n using Gaussian elimination with partial pivoting, the compu-
tational cost (measured in terms of multiplication and division operations) of the backward substitution step
is . . .
(A) O(n)
(B) O(n^{3/2})
(C) O(n^2)
(D) O(n^3)
Answer: (C).
{ Source: JMS }

Q3b–9112 . When computing the LU factorization of an n × n matrix, the computational cost (measured in terms of
multiplication and division operations) is . . .
(A) O(n)
(B) O(n^{3/2})
(C) O(n^2)
(D) O(n^3)
Answer: (D).
{ Source: JMS }

Q3b–10113 . The matrix B is singular if . . .


(A) B is not square
(B) Gaussian elimination with partial pivoting fails on B
(C) B is its own inverse
(D) B has no inverse
Answer: (D).
{ Source: JMS }

Q3b–11114 . If you have to solve Ax = b many times for different vectors b but the same matrix A, it is best to . . .

(A) compute the inverse of A


(B) determine the LU decomposition once, then apply forward/backward substitution for the different b’s
(C) use an iterative method
(D) use Gaussian elimination with partial pivoting

Answer: (B).
{ Source: JMS }
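A minimal Matlab sketch of option (B); the matrix A and right-hand sides are made-up values used only for illustration:

    A = [4 -8 1; 6 5 7; 0 -10 -3];
    [L, U, P] = lu(A);             % factor once: P*A = L*U, O(n^3) work
    b1 = [1; 2; 3];  b2 = [0; 1; 0];
    x1 = U \ (L \ (P*b1));         % each new right-hand side costs only O(n^2)
    x2 = U \ (L \ (P*b2));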

Q3b–12115 . True or False: If a linear system is well-conditioned, then pivoting is unnecessary in Gaussian elimination.

Answer: FALSE. A simple counterexample is given by the matrix


 
    A = [ 0  1  0
          1  0  0
          0  0  1 ]

This is a permutation of the 3 × 3 identity matrix so that cond(A) = 1, which is perfectly well-conditioned.
However, Gaussian elimination will fail without partial pivoting, because the first pivot element is a11 = 0.
{ Source: Heath [4], Review Question 2.14, p. 92 }



Q3b–13116 . Consider the matrix
 
    A = [ 4  −8   1
          6   5   7
          0 −10  −3 ]

whose LU factorization we want to compute using Gaussian elimination. What will the initial pivot element
be without pivoting, and with partial pivoting?
(A) 0 (no pivoting), 6 (partial pivoting)
(B) 4 (no pivoting), 0 (partial pivoting)
(C) 4 (no pivoting), 6 (partial pivoting)
Answer: (C).
{ Source: Heath [4], adapted from Review Question 2.39, p. 93 }

Q3b–14117 . You have a system of three linear equations with three unknowns. If you perform Gaussian elimination
and obtain the row-reduced echelon form
 
    [ 1  −2  4   6
      0   1  0  −3
      0   0  3   0 ]

then the system has . . .


(A) a unique solution
(B) no solution
(C) infinitely many solutions
(D) more than one solution
Answer: (A). The row-reduced matrix is upper triangular with nonzero diagonal entries.
{ Source: MAH }

Q3b–15118 . You have a system of three linear equations with three unknowns. If you perform Gaussian elimination
and obtain the row-reduced echelon form
 
    [ 1  −2  4   6
      0   1  0  −3
      0   0  0   3 ]

then the system has . . .

(A) a unique solution


(B) no solution
(C) infinitely many solutions
(D) more than one solution

Answer: (B). The last equation reads “ 0 = 3” which is a contradiction.


{ Source: MAH }



Q3b–16119 . You have a system of three linear equations with three unknowns. If you perform Gaussian elimination
and obtain the reduced row echelon form
 
    [ 1  −2  4   6
      0   1  0  −3
      0   0  0   0 ]

then the system has . . .


(A) no solution
(B) a unique solution
(C) more than one solution
(D) infinitely many solutions
Answer: (D). The last equation reads “ 0 = 0” so x3 can be any real number. Strictly (C) is also correct,
but (D) is the most accurate answer.
{ Source: MAH }

Q3b–17120 . Suppose that a square matrix A is perfectly well-conditioned, meaning that cond(A) = 1. Which of the
following matrices shares this same property?
(A) cA where c is any nonzero scalar
(B) DA where D is any nonsingular diagonal matrix
(C) P A where P is any permutation matrix
(D) A−1 , the inverse of A
(E) AT , the transpose of A
Answer: (A). In fact, all of the responses are correct.
{ Source: Heath [4], Review Question 2.58, p. 94 }

Q3b–18121 . True or False: Every nonsingular n × n matrix A can be written as a product A = LU.

Answer: FALSE. For example, A = [ 0 1 ; 1 1 ] is nonsingular because det(A) = −1 ≠ 0. Any LU factorization of A must have the form

    [ 0  1 ]   [ 1  0 ] [ b  c ]   [ b     c      ]
    [ 1  1 ] = [ a  1 ] [ 0  d ] = [ ab    ac + d ] .

Equating coefficients in the first column yields b = 0 and ab = 1, which has no solution. So this A has no LU
decomposition even though it’s nonsingular. A simpler way to recognize this is that very first step of the LU
algorithm fails when it encounters a zero in the 1, 1 pivot entry.
{ Source: MAH }

Q3b–19122 . If A = P I is some permutation P of the identity matrix, then Gaussian elimination yields the following L
and U factors:
(A) L = U = I
(B) L = U = P I
(C) L = I, U = 0
(D) The factors depend on the permutation matrix P

Answer: (A). If P is a permutation matrix then its inverse is another permutation matrix, P^−1 = P̃, and so P̃A = I. This is already in row-reduced form, and so GE with partial pivoting yields the LU factorization of the same matrix P̃A = LU = I. The factors are clearly L = U = I.
{ Source: JMS }



Q3b–20123 . True or False: The LU factorization is unique, so the output from the following Matlab code must mean
that there is a bug in the built-in function “lu”:
>> L = [1 0 0; 2 1 0; 3 1 1];
>> U = [2 0 1; 0 2 1; 0 0 2];
>> [L2, U2] = lu(L*U)

L2 =   0.3333  -1.0000   1.0000
       0.6667   1.0000        0
       1.0000        0        0

U2 =   6.0000   2.0000   6.0000
            0   0.6667  -1.0000
            0        0  -2.0000

Answer: FALSE. The LU decomposition computed by Matlab involves at least one row swap, so that the L
and U factors are different.
{ Source: JMS }
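
As a quick check (a minimal sketch reusing the same L and U; the names L3, U3 and P are just illustrative), asking lu for the permutation matrix explicitly shows that the rows have simply been reordered:

>> [L3, U3, P] = lu(L*U);      % now P*(L*U) = L3*U3 with L3 unit lower triangular
>> norm(P*(L*U) - L3*U3)       % zero up to round-off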

Q3b–21124 . You perform Gaussian elimination with partial pivoting to solve a linear system on a single-precision
floating point computer with roughly 7 decimal digits of accuracy. If the coefficient matrix has a condition
number of 10^3 and the matrix and right hand side are both measured to within full machine precision,
about how many digits of accuracy would you expect in the approximate solution?

(A) 2
(B) 3
(C) 4
(D) 7
     
Answer: (C). The error estimate tells us that (relative error) ≤ (condition number) · (error in data) ≤ 10^−7 · 10^3 = 10^−4, which corresponds to 4 decimal digits. This is why we compute in double precision!!
{ Source: Heath [4], Review Question 2.64, p. 95 }

Q3b–22125 . Suppose you are solving a linear system Ax = b on a computer with 12 decimal digits of floating-point
precision, and that the matrix and right hand side are correct to within full machine precision. Roughly how
large can the condition number of the matrix A be before the computed solution x contains NO significant
digits?
(A) 10^10
(B) 10^12
(C) 10^16
(D) 10^22

Answer: (B). The error estimate tells us that ‖x − x̂‖/‖x‖ ≤ cond(A) · ‖r‖/‖b‖. If the relative solution error is O(1) = O(10^0) (no digits of accuracy) and the “data error” (‖r‖/‖b‖) is O(εM) ≈ 10^−12, then the condition number must be O(10^12).
{ Source: Heath [4], Review Question 2.65, p. 95 }

Q3b–23126 . You have a system of linear equations Ax = b with ‖A‖ = 250 and ‖A^−1‖ = 40 that you solve using Gaussian elimination on a computer with machine epsilon εM = 1.19 × 10^−7. What is the largest number of
significant digits that you can trust in the solution?
(A) 1



(B) 2
(C) 3
(D) 4
Answer: (D).
{ Source: Holistic Numerical Methods [5] }

Q3b–24127 . You have a 100 × 100 matrix A and your computer is able to solve the linear system Ax = b in exactly one
minute using the LU factorization. Roughly how much of that minute is spent computing the factorization
A = LU ?
(A) 30 secs
(B) 45 secs
(C) 53 secs
(D) 59 secs
Answer: (D). From the lecture notes, the ratio of time spent on the row-reduction to the total is

    R(n)/T(n) = (2n^3 + 3n^2 − 5n) / (2n^3 + 6n^2 − 2n) ≈ (2 + 3/n) / (2 + 6/n)

When n = 100 this ratio is 2.03/2.06 ≈ 0.99, which corresponds to roughly 59 seconds.
{ Source: JMS }

Q3b–25128 . Suppose that your computer can solve 100 problems of the form U x = c in 1 second, where U is a 50 × 50
upper triangular matrix. Roughly how long will it take to solve a single 500 × 500 problem of the form Ax = b
where A is a full matrix?
(A) 10 seconds
(B) 1 minute
(C) 5 minutes, 30 seconds
(D) 5 hours
Answer: (C). The cost of solving Ux = c is a backward substitution, which we know has cost with leading-order term n^2/2, so the total cost of the 100 solves is 100·(50^2/2) = 50^3. The full system has cost n^3/3 = 500^3/3, which is a fraction 1000/3 larger. That translates into about 333 seconds. A rough order-of-magnitude estimate (missing the constants) yields a ratio of 500^3/(100·50^2) = 500, and so choice (C) is still the closest answer.

{ Source: JMS }

3c. Plotting Power Laws


Q3c–1129 . A function is approximated near some given point x0 using two Taylor polynomials, PA(x) and PB(x). A plot of the absolute error in these two polynomials as a function of h is shown below, where both are evaluated at the nearby point x0 + h. What is the degree of the two Taylor polynomials?

[Plot: absolute error (logarithmic scale, 10^0 down to 10^−20) versus h (linear scale, 0 to 1) for PA(x0 + h) and PB(x0 + h).]



(A) PA is degree 2, PB is degree 3
(B) PA is degree 3, PB is degree 4
(C) PA and PB are both degree 3
(D) The degree cannot be determined from this plot
Answer: (D). The remainder/error term for a Taylor polynomial of degree n takes the form

    Rn = f^(n+1)(c)/(n + 1)! · h^(n+1)   =⇒   log Rn ∼ (n + 1) log h + constant

and so the degree can only be determined by plotting the data on a log-log scale. This is a semi-log plot and
so the nature of the polynomials can’t be determined.
{ Source: JMS }

Q3c–2130 . A function is approximated near some given point x0 using two Taylor polynomials, PA(x) and PB(x). A plot of the absolute error in these two polynomials as a function of h is shown below, where both are evaluated at the nearby point x0 + h. What is the degree of the two Taylor polynomials?

[Plot: absolute error (logarithmic scale, 10^0 down to 10^−20) versus h (logarithmic scale, 10^−8 to 10^0) for PA(x0 + h) and PB(x0 + h).]
(A) PA is degree 2, PB is degree 3
(B) PA is degree 3, PB is degree 4
(C) PA and PB are both degree 3
(D) The degree cannot be determined from this plot
Answer: (A). The remainder/error term for a Taylor polynomial of degree n takes the form

    Rn = f^(n+1)(c)/(n + 1)! · h^(n+1)   =⇒   log Rn ∼ (n + 1) log h + constant

On this log-log plot, the errors behave linearly with slopes 3 and 4, for PA and PB respectively. The slope is
n + 1, one more than the degree.

Notice the oscillations appearing on the left half of the plot for small h – these are due to floating-point round-
off error that dominates when the truncation error in the Taylor polynomials approaches the value of machine
epsilon εM ≈ 2 × 10^−16.
{ Source: JMS }



Q3c–3131 . A function is approximated with Taylor polynomials of degrees 1 through 25, and the error in each approximation is plotted against the degree N of the polynomial. What happens when N ≳ 18?

[Plot: error (logarithmic scale, 10^0 down to 10^−15) versus the number of terms N, for N = 0 to 25.]
(A) The Taylor remainder term blows up when N gets too large.
(B) The accuracy is limited by the floating point machine epsilon.
(C) Subtractive cancellation errors dominate when too many terms of differing signs are added to the poly-
nomial.
Answer: (B). As N increases, the error in the Taylor approximation gets smaller until it reaches machine epsilon, εM ≈ 2 × 10^−16, after which no more improvement in accuracy is possible. This is not really a question about power laws, but the semi-log plot does clearly illustrate the linear dependence of error on N (for fixed h) that we expect from:

    RN(h) = f^(N+1)(c)/(N + 1)! · h^(N+1)   =⇒   log RN ∼ (N + 1) log h + constant

{ Source: JMS }

Q3c–4132 . Suppose you are comparing three different algorithms and you use all three to solve the same problem for increasing values of problem size N. A plot of the CPU time required for each algorithm is shown below. Which algorithm has a cost that scales as O(N^(3/2))?

[Plot: log10(CPU time) versus log10(N) for Algorithms A, B and C; each appears as a straight line with a different slope.]

(A) Algorithm A
(B) Algorithm B
(C) Algorithm C
Answer: (B). This plot has a log-log scale. If cost scales like C = αN^(3/2), then log C = log α + (3/2) log N. So we are looking for the straight line with slope 3/2, which is (B).
{ Source: JMS }



Q3c–5133 . You have a code that you are using to compute several problems of increasing size N. A plot of the CPU time required for each computation is shown below. What is the best estimate of the leading-order cost of your algorithm as a function of N?

[Plot: log10(CPU time) versus log10(N); the data lie roughly on a straight line rising from about 2 to about 20 as log10(N) increases from 1 to 7.]

(A) O(N)
(B) O(N^(3/2))
(C) O(N^2)
(D) O(N^3)
Answer: (D). This plot has a log-log scale. If cost scales like C = αN^p, then log C = log α + p log N and should appear as a straight line with slope p. The plotted curve is roughly a straight line with slope ≈ (20 − 2)/(7 − 1) = 3.

{ Source: JMS }
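
A minimal Matlab sketch of this slope estimate (the timing data N and cpu below are made up for illustration):

    N   = [1e1 1e2 1e3 1e4 1e5];           % hypothetical problem sizes
    cpu = 2e-6*N.^3;                       % hypothetical measured CPU times
    c = polyfit(log10(N), log10(cpu), 1);
    p = c(1)                               % estimated slope, close to 3 here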

Q3c–6134 . You write a code for an iterative algorithm that you believe converges quadratically. To test whether
you have implemented the algorithm correctly, you plot the relative error in each step, Rk = |xk − x∗ |/|x∗ |,
as a function of the iteration number k. Using which type of plot is it easiest to recognize the quadratic
convergence?
(A) A linear plot showing Rk versus k.
(B) A semi-log plot showing log Rk versus k.
(C) A semi-log plot showing Rk versus log k.
(D) A log-log plot showing log Rk versus log k.
(E) A double-log plot showing log(log Rk ) versus k.
Answer: (E). Because the error in a quadratic method behaves like Rk ∼ R0^(2^k), we have log Rk ∼ (const) · 2^k and log(log Rk) ∼ (const) · k. It is easiest to recognize a straight line, and so you should plot log(log Rk) versus k.
{ Source: JMS }
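
One way to make such a plot in Matlab (a sketch with a hypothetical error sequence Rk) is to plot |log Rk| on a logarithmic axis against k, which amounts to plotting a log of a log:

    Rk = [1e-1 1e-2 1e-4 1e-8 1e-16];            % hypothetical quadratically converging errors
    semilogy(1:length(Rk), abs(log10(Rk)), 'o-') % a straight line indicates quadratic convergence
    xlabel('k'), ylabel('|log_{10} R_k|')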

Q3c–7135 . You write a code for an iterative algorithm that you believe converges superlinearly. To test whether
you have implemented the algorithm correctly, you plot the relative error in each step Rk = |xk − x∗ |/|x∗ |
as a function of the iteration number k. Using which type of plot is it easiest to recognize the superlinear
convergence?
(A) A linear plot showing Rk versus k.
(B) A semi-log plot showing log Rk versus k.
(C) A semi-log plot showing Rk versus log k.
(D) A log-log plot showing log Rk versus log k.
Answer: (B). Because the error in a linearly convergent method behaves like Rk ∼ α^k R0, then log Rk ∼ k log α + log R0, which is a linear function of k. So the easiest way to test for superlinear convergence is to plot log Rk versus k and then look for downward curvature (which indicates a faster decay to zero than linear).
{ Source: JMS }



Q3c–8136 . You use four different iterative methods to obtain a sequence of approximate solution values xk , for k =
1, 2, . . . , 20. Below are shown graphs of the absolute solution error, plotted versus k using three different axis
scales. Which plot gives you the most useful information about the convergence of the four methods?
[Three plots of the absolute solution error versus k for Methods 1–4, drawn with (A) linear, (B) semi-log and (C) log-log axis scales.]

(A) Linear scale


(B) Semi-log scale
(C) Log-log scale
Answer: (B). You want to choose an axis scaling where the behaviour of the method is represented as a
straight line, since this linear dependence is easy to recognize visually. The semi-log plot (B) is the only one
that contains linear features (for Methods 1 & 2). We know that a linearly convergent method with Ek ∼ c^k E0 appears as a straight line on a semi-log scale. So Methods 1 & 2 are linearly convergent, while Methods 3 & 4 are superlinear (faster than linear, but we can't say more than that).
{ Source: JMS }

3d. Iterative Methods for Sparse Systems


Q3d–1137 . Consider the n × n matrix

        [ 1   1    1    1    · · ·   1        ]
        [ 1   2    4    8    · · ·   2^(n−1)  ]
    A = [ 1   3    9    27   · · ·   3^(n−1)  ]
        [ 1   4    16   64   · · ·   4^(n−1)  ]
        [ :   :    :    :            :        ]
        [ 1   n    n^2  n^3  · · ·   n^(n−1)  ]

and suppose that for some given n you know that det(A) = 2.49e+07, ‖A‖_1 = 1.85e+05 and cond_1(A) = 4.14e+07. Based on this information, which is the best method for solving the linear system Ax = b?
(A) Gaussian elimination
(B) Gaussian elimination with partial pivoting
(C) Jacobi’s iteration
(D) Gauss-Seidel iteration
Answer: (B). This matrix is not sparse (far from it, since there’s not a single zero entry!) and so iterative
methods are a bad idea and Gaussian elimination is the only option. The condition number is large, indicating
that A is ill-conditioned and partial pivoting is necessary.
{ Source: JMS }

Q3d–2138 . Consider the n × n matrix

        [ −4    1    0    0   · · ·   0  ]
        [  1   −4    1    0   · · ·   0  ]
    A = [  0    1   −4    1   · · ·   0  ]
        [  :         .    .    .      :  ]
        [  0   · · ·  0    1   −4     1  ]
        [  0   · · ·  0    0    1    −4  ]



and suppose that for some given n you know that det(A) = −10864, ‖A‖_1 = 6 and cond_1(A) = 2.97. Based on this information, identify the best method for solving the linear system Ax = b.
(A) Gaussian elimination
(B) Gaussian elimination with partial pivoting
(C) Jacobi’s iteration
(D) Gauss-Seidel iteration
Answer: (D). This is a sparse system and so Gauss-Seidel is the “obvious” answer. However, we’ve seen in
lectures that GE+PP can row-reduce this tridiagonal matrix with no fill-in and so it is a very efficient method
– so (B) is likely the best answer.
Q3d–3139 . If the n × n matrix A is ill-conditioned (that is, A has a very large condition number) then . . .
(A) Solving Ax = b accurately is difficult using the LU-decomposition, but iterative methods (Jacobi, Gauss-
Seidel) would not have a problem.
(B) Solving Ax = b accurately with iterative methods (Jacobi, Gauss-Seidel) would be difficult, but LU-
decomposition with partial pivoting would not have a problem.
(C) Solving Ax = b accurately will be challenging regardless of the method we use.
Answer: (C).
{ Source: JMS }

Q3d–4140 . True or False: You are given the 3 × 3 linear system in augmented matrix form

    [ 5   0  −1   2 ]
    [ 0   0   2   4 ]
    [ 0   1   0   2 ]

Because the diagonal of the coefficient matrix contains zero entries, it is not possible to apply the Jacobi
iterative method.

Answer: FALSE. Swapping rows 2 and 3 converts the matrix to upper triangular form with non-zeroes on
the diagonal, and Jacobi can then be applied.
{ Source: JMS }

Q3d–5141 . You are given the 3 × 3 linear system in augmented matrix form

    [ 5   0  −1   2 ]
    [ 0   1   0   2 ]
    [ 0   0   2   4 ]

Starting from an initial guess of x0 = [1, 1, 1]^T, what is the result of the first iteration of the Jacobi method?

(A) x1 = [3/5, 2, 2]^T
(B) x1 = [2/5, 2, 1]^T
(C) x1 = [3/5, 2, 5]^T

Answer: (A).

    x1 = (1/5)(2 − 0·1 + 1·1) = 3/5
    x2 = (1/1)(2 − 0·1 − 0·1) = 2
    x3 = (1/2)(4 − 0·1 − 0·1) = 2

Interesting observation: Jacobi applied to any upper triangular n × n matrix converges to the exact solution
in at most n steps AND it’s equivalent to doing backward substitution . . . although it can do a lot of extra
work to get there!
{ Source: JMS }
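
The same computation as a short Matlab sketch:

    A = [5 0 -1; 0 1 0; 0 0 2];   b = [2; 2; 4];
    x0 = [1; 1; 1];
    D = diag(diag(A));                 % diagonal part of A
    x1 = D \ (b - (A - D)*x0)          % one Jacobi sweep: [3/5; 2; 2]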



Q3d–6142 . You are given the 3 × 3 linear system in augmented matrix form

    [ 5   0  −1   2 ]
    [ 0   1   0   2 ]
    [ 0   0   2   4 ]

Starting from an initial guess of x0 = [1, 1, 1]^T, what is the result of the first iteration of the Gauss-Seidel method?

(A) x1 = [3/5, 2, 2]^T
(B) x1 = [2/5, 2, 1]^T
(C) x1 = [3/5, 2, 5]^T

Answer: (A). This is identical to the Jacobi iteration because the lower triangular part of the matrix is 0!
{ Source: JMS }

Q3d–7143 . You are given the 3 × 3 linear system in augmented matrix form

    [ 5   0  −1   2 ]
    [ 0   1   0   2 ]
    [ 0  −3   2   4 ]

Starting from an initial guess of x0 = [1, 1, 1]^T, what is the result of the first iteration of the Gauss-Seidel method?

(A) x1 = [3/5, 2, 2]^T
(B) x1 = [2/5, 2, 1]^T
(C) x1 = [3/5, 2, 5]^T


Answer: (C).

    x1 = (1/5)(2 − 0·1 + 1·1) = 3/5
    x2 = (1/1)(2 − 0·(3/5) − 0·1) = 2
    x3 = (1/2)(4 − 0·(3/5) + 3·2) = 5
{ Source: JMS }
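
A Matlab sketch of one Gauss-Seidel sweep for this system, updating the components in place:

    A = [5 0 -1; 0 1 0; 0 -3 2];   b = [2; 2; 4];
    x = [1; 1; 1];
    for i = 1:3
        j = [1:i-1, i+1:3];                     % indices of the other unknowns
        x(i) = (b(i) - A(i,j)*x(j)) / A(i,i);   % uses already-updated entries
    end
    x                                           % gives [3/5; 2; 5]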

Q3d–8144 . You apply the Gauss-Seidel method to solve a linear system having an iteration matrix T = −(D + L)−1 U
with spectral radius ρ(T ) = 0.998. Using the fact that log10 (0.998) ≈ −0.0009, roughly how many iterations
are needed to reduce the error in the initial guess by a factor of 10^−6?
(A) 6
(B) 600
(C) 6000

Answer: (C). The error estimate for a matrix iteration takes the form of an inequality for the error Ek in step k: Ek ≤ ρ(T)^k E0. If the error needs to reduce by a factor of 10^−6 then

    ρ(T)^k ≤ 10^−6   =⇒   k · log10(ρ(T)) ≤ −6   =⇒   k ≥ −6/(−0.0009) ≈ 6000

{ Source: JMS }
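
The same estimate in two lines of Matlab (a sketch, using the full value of the logarithm):

    rho = 0.998;
    k = ceil(-6/log10(rho))    % smallest k with rho^k <= 1e-6, about 6.9e3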

Q3d–9145 . Below is the output from the Jacobi and Gauss-Seidel iterations for solving the linear system Ax = b, in
each case showing the iteration number k and the norm of the solution difference kxk − xk−1 k:



Method A: Method B:
k= 1: 6.073730e+00 k= 1: 3.501795e+00
k= 2: 4.961063e+00 k= 2: 2.710771e+00
k= 3: 4.408955e+00 k= 3: 2.623906e+00
k= 4: 3.969368e+00 k= 4: 2.353607e+00
k= 5: 3.620131e+00 k= 5: 2.294563e+00
   ...                       ...
k= 184: 1.366841e-06 k= 354: 1.177728e-06
k= 185: 1.258350e-06 k= 355: 1.130021e-06
k= 186: 1.158471e-06 k= 356: 1.084248e-06
k= 187: 1.066519e-06 k= 357: 1.040328e-06
k= 188: 9.818663e-07 k= 358: 9.981874e-07
Which method corresponds to the Jacobi iteration?
(A) Method A
(B) Method B

Answer: (B). Both converge, but Method A does so in about half the number of iterations. This is
characteristic of the Gauss-Seidel method.
{ Source: JMS }

3e. Eigenvalue Problems


[ nothing here yet ]



4. Function Approximation
4a. Polynomial Interpolation
Q4a–1146 . True or False: Suppose that you have n + 1 data points (x0 , y0 ), (x1 , y1 ), . . . , (xn , yn ). The interpolating
polynomial on the given points is unique.

Answer: FALSE. The nth degree polynomial interpolating these points is unique, provided only that the xi
are all distinct.
{ Source: Heath [4], adapted from Review Question 7.3, p. 333 }

Q4a–2147 . Suppose that you have n data points (x1 , y1 ), . . . , (xn , yn ), with the xi all distinct. Which of the following
statements about the nth degree interpolating polynomial is TRUE?
(A) The interpolating polynomial is unique.
(B) There are infinitely many such interpolating polynomials.
(C) There is no polynomial of this degree that interpolates all n points.
Answer: (B). An nth degree polynomial has n + 1 coefficients which are constrained by n interpolating
conditions. So there remains one degree of freedom.
{ Source: Heath [4], adapted from Review Question 7.3, p. 333 }

Q4a–3148 . You determine a function f (x) that passes through the points (x0 , y0 ), (x1 , y1 ), . . . , (xn , yn ) with x0 <
x1 < x2 · · · < xn . For some other point x∗ ∈ (x0 , xn ), you then use f (x∗ ) to approximate the value of the
actual smooth function that underlies the data. This procedure is called:
(A) interpolation
(B) extrapolation
(C) curve fitting
(D) regression
Answer: (A).
{ Source: Holistic Numerical Methods [5] }

Q4a–4149 . Fill in the blank: Suppose you have n + 1 data points (x0 , y0 ), (x1 , y1 ), . . . , (xn , yn ). A unique polynomial
of degree ________ can be found that passes through these points.
(A) n
(B) n or less
(C) n + 1
(D) n + 1 or more
Answer: (B). Assuming the xi are distinct, then there is a unique polynomial of degree exactly n that
passes through the n + 1 data points. However, response (B) is the correct answer because if any of the xi are
repeated, then some of the higher degree coefficients could be zero.
{ Source: Holistic Numerical Methods [5], MC Question Solution Ch 05.01 Background of Interpolation.pdf }

Q4a–5150 . How many different polynomials can be found that pass through the points (1, 2) and (4, 5)?
(A) 1
(B) 2
(C) 1 or 2
(D) infinitely many
Answer: (D).
{ Source: Holistic Numerical Methods [5], MC Question Solution Ch 05.01 Background of Interpolation.pdf }



Q4a–6151 . You are given the point values (1, 1), (2, 4), (3, 9) for some function f (x). You determine an approximation
P (x) that passes through the 3 given points, and you then approximate f (4) using P (4). What is this
procedure called?
(A) extrapolation
(B) interpolation
(C) curve fitting
(D) none of the above
Answer: (A). Because x = 4 lies outside the interval [1, 3] covered by the given x–coordinates.
{ Source: Holistic Numerical Methods [5] }

Q4a–7152 . The following data describes the velocity of a rocket during lift-off as a function of time:

t (s) 0 14 15 20 30 35
v(t) (m/s) 0 227 363 517 603 902

In order to determine the velocity at t = 25 s, you decide to use a quadratic polynomial v(t) = a + bt + ct^2 to ap-
proximate the velocity profile. The most appropriate system of equations for determining the coefficients is . . .
         
(A) [1 14 196; 1 15 225; 1 20 400] [a; b; c] = [227; 363; 517]
(B) [1 15 225; 1 20 400; 1 30 900] [a; b; c] = [363; 517; 603]
(C) [1 0 0; 1 15 225; 1 30 900] [a; b; c] = [0; 363; 603]
(D) [1 20 400; 1 30 900; 1 35 1225] [a; b; c] = [517; 603; 902]
Answer: (B). Response (A) corresponds to extrapolation and (C) doesn’t use the three data points closest
to t = 25, and so both will probably be less accurate. Responses (B) and (D) should give comparable accuracy
since they are based on intervals that are closest to the interpolation point.
{ Source: JMS }
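
A Matlab sketch of option (B), which in effect solves the same 3 × 3 system using polyfit/polyval (exact here, since three points determine the quadratic):

    t = [15 20 30];   v = [363 517 603];
    c = polyfit(t, v, 2);        % quadratic through the three points nearest t = 25
    v25 = polyval(c, 25)         % estimated velocity at t = 25 s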

Q4a–8153 . Fill in the blanks: For the n + 1 data points (x0, y0), (x1, y1), . . . , (xn, yn), the nth degree Lagrange polynomial is Pn(x) = Σ_{j=0..n} yj Lj(x), where

    Lj(x) = Π_{i=0..n, i≠j} (x − xi)/(xj − xi).

To compute the computational cost, constructing each Lj requires __1__ multiplications and __2__ divisions, and there are __3__ such Lj to compute. Evaluating the interpolating polynomial then requires a further n + 1 multiplications and n additions. Therefore, assuming all operations involve the same work, the total cost is __4__ = O(n^2) operations.

(A) 1: n + 1,   2: n + 1,   3: n + 1,   4: 2(n + 1)^2 + (2n + 1)
(B) 1: n,   2: n,   3: n,   4: 2n(n + 1)(2n + 1)
(C) 1: n,   2: n,   3: n,   4: 2n^2 + (2n + 1)
(D) 1: n,   2: n,   3: n + 1,   4: 2n(n + 1) + (2n + 1)
Answer: (D).
{ Source: JMS, MACM 316 lecture notes }



Q4a–9154 . For the n + 1 data points (x0, y0), (x1, y1), . . . , (xn, yn), the nth degree Lagrange polynomial is Pn(x) = Σ_{j=0..n} yj Lj(x), where

    Lj(x) = Π_{i=0..n, i≠j} (x − xi)/(xj − xi).

The given partial code calculates a single Lagrange polynomial Lj, where the given x-coordinates are assigned to the variable xdata. Provide suitable code to fill the blank spaces labelled 1 and 2.

    Lj = ... 1 ...
    for i = 1:n,
        if i ~= j,
            Lj = ... 2 ...
        end
    end

(A) 1: 0.0;   2: Lj*(x-xdata(i))/(xdata(j)-xdata(i));
(B) 1: 1.0;   2: Lj*(x-xdata(i))/(xdata(j)-xdata(i));
(C) 1: 1.0;   2: (x-xdata(i))/(xdata(j)-xdata(i));
(D) 1: 0.0;   2: (x-xdata(i))/(xdata(j)-xdata(i));
Answer: (B).
{ Source: JMS, MACM 316 lecture notes }
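
For reference, a complete (sketch) version built around this loop with the blanks filled in as in response (B), and an outer sum over j; the function name, the extra input ydata, and the use of n for the number of data points are illustrative rather than part of the question:

    function p = lagrange_eval(xdata, ydata, x)
    % Evaluate the Lagrange interpolating polynomial at the point x.
    n = length(xdata);       % number of data points
    p = 0;
    for j = 1:n
        Lj = 1.0;                                             % blank 1
        for i = 1:n
            if i ~= j
                Lj = Lj*(x - xdata(i))/(xdata(j) - xdata(i)); % blank 2
            end
        end
        p = p + ydata(j)*Lj;
    end
    end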

Q4a–10155 . The linear Lagrange polynomial that interpolates the points (x1 , f (x1 )) and (x2 , f (x2 )) is
(A) P(x) = f(x1) (x − x2)/(x1 − x2) + f(x2) (x − x1)/(x2 − x1)
(B) P(x) = f(x1) (x1 − x2)/(x − x2) + f(x2) (x2 − x1)/(x − x1)
(C) P(x) = [f(x1) − f(x2)]/(x1 − x2) · x + [x1 f(x2) − x2 f(x1)]/(x1 − x2)
(D) P(x) = f(x1) x/(x1 − x2) + f(x2) x/(x2 − x1)
Answer: (A).
Q4a–11156 . This table lists the population of Canada every 5 years from 2000 to 2015:

    Year                       2000    2005    2010    2015
    Population (in millions)   30.69   32.24   34.01   35.83

Using the Lagrange interpolating polynomial to express the population as

P (x) = 30.69 L0 (x) + 32.24 L1 (x) + 34.01 L2 (x) + 35.83 L3 (x),

one can approximate the population in any year x between 2000 to 2015. What is L0 (x)?
(A) (x − 2005)(x − 2010)(x − 2015)/750
(B) (x − 2005)(x − 2010)(x − 2015)/(−750)
(C) (x − 2000)(x − 2005)(x − 2010)/(−50)
(D) (x − 2000)(x − 2010)(x − 2015)/(−150)
Answer: (B). This is the only polynomial that satisfies L0 = 1 at x = 2000 and L0 = 0 at the other three
data points.
{ Source: MAH }

Q4a–12157 . Fill in the blanks in the divided difference table:



    x    y
    1    1
               a
    3    9          c
               b
    4    16

(A) a = 4, b = 7, c =1
(B) a = 8, b = 7, c =3
(C) a = 8, b = 4, c =3
(D) a = 7, b = 4, c =3

Answer: (A). Following the Newton divided difference formula, we have

    a = (9 − 1)/(3 − 1) = 4,   b = (16 − 9)/(4 − 3) = 7,   and then   c = (7 − 4)/(4 − 1) = 1.

{ Source: MAH }

Q4a–13158 . The second degree polynomial

    P2(x) = a1 + a2(x − 3) + a3(x − 3)(x − 4)

is determined using the given table of Newton divided differences. The correct value of a2 is . . .

    x    y
    1    1
               4
    3    9          1
               7          0
    4    16         1
               9
    5    25

(A) a2 = 1
(B) a2 = 4
(C) a2 = 7
(D) a2 = 9

Answer: (C). It’s the second diagonal in the table that starts at the point x = 3, for which the interpolating
polynomial is:

P2 (x) = 9 + 7(x − 3) + 1(x − 3)(x − 4).

{ Source: Holistic Numerical Methods [5] }

Q4a–14159 . The second degree polynomial

    P2(x) = 4 + 6(x − a) + (x − b)(x − c)

is determined using the given table of Newton divided differences. The correct values of the coefficients are . . .

    x    y
    1    1
               3
    2    4          1
               6          0
    4    16         1
               9
    5    25

(A) a = 1, b = 2, c=4
(B) a = 2, b = 2, c=4
(C) a = 1, b = 3, c=1
(D) a = 2, b = 4, c=5



Answer: (B). These coefficients come from the second diagonal in the table, which starts at the point x = 2,
so the interpolating polynomial is

P2 (x) = 4 + 6(x − 2) + (x − 2)(x − 4).

{ Source: MAH }

Q4a–15160 . Which of the following is a third degree interpolating polynomial for the data given in the Newton divided difference table?

    xi   a0         a1        a2        a3        a4
    0    0
               1
    1    1          3
               7          1
    2    8          6          0
               19         1
    3    27         9
               37
    4    64
(A) P3 (x) = x + 3x(x − 1) + x(x − 1)(x − 2)
(B) P3 (x) = 1 + 7(x − 1) + 6(x − 1)(x − 2) + (x − 1)(x − 2)(x − 3)
(C) P3 (x) = 64 + 37(x − 4) + 9(x − 4)(x − 3) + (x − 4)(x − 3)(x − 2)
(D) All of the above
Answer: (D).
{ Source: MAH }

Q4a–16161 . Consider the interpolating polynomial for f(x) = x^3 based on the points (1, 1), (2, 8), (3, 27), and (4, 64). Find an upper bound for the interpolation error on [1, 4].
(A) 1/6
(B) 15/8
(C) 105/16
(D) 0
Answer: (D). There are four points so the interpolating polynomial is a cubic. Since the original function is a cubic, then P3(x) = x^3, so the error is zero! You could also use the error formula

    |f(x) − P3(x)| ≤ |f^(4)(c)/4!| · |(x − 1)(x − 2)(x − 3)(x − 4)| = 0

because f^(4)(x) ≡ 0.


{ Source: MAH }

Q4a–17162 . Consider the interpolating polynomial for f(x) = x^3 based on the points (0, 0), (1, 1) and (2, 8). Find an upper bound for the interpolation error at the point x = 1/2.
(A) 1/6
(B) 1
(C) 15/8
(D) 3/8

Answer: (D). Since there are three points, the interpolating polynomial is a quadratic for which the error formula gives

    |f(x) − P2(x)| ≤ |f'''(c)/3!| · |x(x − 1)(x − 2)|
    =⇒   |f(1/2) − P2(1/2)| ≤ (6/6) · |(1/2)(1/2 − 1)(1/2 − 2)| = 3/8.



{ Source: MAH }

Q4a–18163 . How many fourth-degree polynomials pass through the points (1, 1), (2, 8), (3, 27) and (4, 64)?

Hint: You might be able to use the Newton divided difference table to help you decide.

    x    y
    1    1
               7
    2    8          6
               19         1
    3    27         9
               37
    4    64
(A) 1
(B) 2
(C) none
(D) infinitely many
Answer: (D). From the table, it’s easy to determine a cubic polynomial that interpolates the four points:

P3 (x) = 1 + 7(x − 1) + 6(x − 1)(x − 2) + 1(x − 1)(x − 2)(x − 3)

Then, we can generate a fourth-degree polynomial by adding any other point (a, b) which will generate an extra
coefficient c in the table. This corresponds to a new polynomial

P4 (x) = P3 (x) + c(x − 1)(x − 2)(x − 3)(x − 4)

that interpolates the original four points, plus (a, b).


{ Source: MAH }

Q4a–19164 . How many third-degree polynomials pass through the points (1, 1), (2, 8), (3, 27) and (4, 64)?

Hint: You might be able to use the Newton divided difference table to help you decide.

    x    y
    1    1
               7
    2    8          6
               19         1
    3    27         9
               37
    4    64
(A) 1
(B) 2
(C) 3
(D) infinitely many
Answer: (A). The divided difference table gives us a unique third-degree polynomial that can be written in
two ways

P3 (x) = 1 + 7(x − 1) + 6(x − 1)(x − 2) + 1(x − 1)(x − 2)(x − 3)


= 64 + 37(x − 4) + 9(x − 4)(x − 3) + 1(x − 4)(x − 3)(x − 2)

(expand them both to show they’re the same polynomial).


{ Source: MAH }

Q4a–20165 . Fill in the blanks in the divided difference table:

    x    y
    1    1
               b
    2    a         −1
               4
    4    16



(A) a = 8, b = 7
(B) a = 7, b = −4/3
(C) a = 8, b = −3
(D) a = 8, b = −2/3

Answer: (A).
{ Source: MAH }

4b. Piecewise Polynomial or Spline Interpolation


Q4b–1166 . Which of the following is NOT a valid reason for choosing piecewise cubic over piecewise linear interpola-
tion?
(A) Piecewise cubics can yield an interpolating function that has continuous derivatives.
(B) Piecewise cubics generally converge to the underlying function faster than linear interpolants when the
number of data points is increased.
(C) Piecewise cubics are cheaper to compute.
(D) Piecewise cubics can predict derivative values better than linear interpolants.
Answer: (C). Linear splines are built from polynomials of degree 1, which require many fewer operations
to construct and evaluate than cubic splines built with degree 3 polynomials.
{ Source: JMS }

Q4b–2167 . True or False: This piecewise polynomial is a quadratic spline:

    S(x) = { 0,     if −1 ≤ x ≤ 0
           { x^2,   if  0 ≤ x ≤ 1

Answer: TRUE. The piecewise functions are both quadratic, and S(x) and S'(x) match at x = 0.
{ Source: MAH }

Q4b–3168 . This piecewise polynomial is NOT a cubic spline:

    S(x) = { S0(x) = 1,               if 0 ≤ x ≤ 1
           { S1(x) = 1 + (x − 1)^2,   if 1 ≤ x ≤ 2

Why not?
(A) S0(1) ≠ S1(1)
(B) S0'(1) ≠ S1'(1)
(C) S0''(1) ≠ S1''(1)
(D) S0'''(1) ≠ S1'''(1)
Answer: (C).
{ Source: MAH }

Q4b–4169 . True or False: This piecewise polynomial cannot be a cubic spline:

    S(x) = { S0(x) = 1 + x + x^2 + x^3,               if 0 ≤ x ≤ 1
           { S1(x) = 4 + (x − 1) + (x − 1)^2 + (x − 1)^3,   if 1 ≤ x ≤ 2

Answer: TRUE. Although S0(1) = S1(1) = 4, the derivatives don't match at x = 1.


{ Source: MAH }



Q4b–5170 . In cubic spline interpolation, which spline derivatives must be continuous at interior points?
(A) first derivatives
(B) second derivatives
(C) first and second derivatives
(D) third derivatives
Answer: (C).
{ Source: Holistic Numerical Methods [5] }

Q4b–6171 . Suppose that

    S(x) = { S0(x) = 1 + ax + x^2 − x^3,                          if 0 ≤ x ≤ 1
           { S1(x) = 1 + b(x − 1) − 2(x − 1)^2 + (x − 1)^3,       if 1 ≤ x ≤ 2

is a clamped cubic spline. What are the clamped end-point slopes?


(A) S'(0) = 0 and S'(2) = −1
(B) S'(0) = 1 and S'(2) = −2
(C) S'(0) = 0 and S'(2) = −2
(D) none of the above
Answer: (C). The matching conditions at x = 1 require that

    S0(1) = S1(1)    =⇒   1 + a + 1 − 1 = 1   =⇒   a = 0
    S0'(1) = S1'(1)  =⇒   a + 2 − 3 = b       =⇒   b = −1

Then S0'(0) = a = 0 and S1'(2) = b − 1 = −2.


{ Source: MAH }

Q4b–7172 . A cubic spline is defined as

    S(x) = { S0(x) = 1 + cx^2 + x^3,                                  if 0 ≤ x ≤ 1
           { S1(x) = 2 + 3(x − 1) + 3(x − 1)^2 + d(x − 1)^3,          if 1 ≤ x ≤ 2

Assuming that natural end-point conditions are used, determine the constants c and d.
(A) c = 0 and d = −1
(B) c = 0 and d = 1
(C) c = 1 and d = 1
(D) c = 1 and d = −1
Answer: (A). Differentiate to get S0'' = 2c + 6x and S1'' = 6 + 6d(x − 1). Then the natural end-point conditions S0''(0) = 2c = 0 and S1''(2) = 6 + 6d = 0 are easy to solve.

{ Source: MAH }

Q4b–8173 . A cubic spline is defined by

    S(x) = { S0(x) = 1 + bx + cx^2 − 2x^3,                        if 0 ≤ x ≤ 1
           { S1(x) = 2 − 3(x − 1)^2 + 2(x − 1)^3,                 if 1 ≤ x ≤ 2

Assuming that periodic end-point conditions are used, determine the constants b and c.
(A) b = 0 and c = 3
(B) b = 3 and c = 0
(C) b = 1 and c = 3



(D) b = 0 and c = 1

Answer: (A).
{ Source: MAH }

Q4b–9174 . A cubic spline is defined by

    S(x) = { S0(x) = a0 + b0 x + c0 x^2 + d0 x^3,                         if 0 ≤ x ≤ 1
           { S1(x) = a1 + b1(x − 1) + c1(x − 1)^2 + d1(x − 1)^3,          if 1 ≤ x ≤ 2
           { S2(x) = a2 + b2(x − 2) + c2(x − 2)^2 + d2(x − 2)^3,          if 2 ≤ x ≤ 3

Assuming that not-a-knot end-point conditions are used, which of the following statements about the coeffi-
cients is TRUE?

(A) a0 = a1 = a2
(B) b0 = b1 = b2
(C) c0 = c1 = c2
(D) d0 = d1 = d2

Answer: (D). The not-a-knot end-point conditions match S0'''(1) = S1'''(1) and S1'''(2) = S2'''(2). Using S0''' = 6d0, S1''' = 6d1 and S2''' = 6d2, it's easy to see that d0 = d1 = d2.

{ Source: MAH }

Q4b–10175 . The data shown in the plot is interpolated with three splines using different end-point conditions: natural, not-a-knot and clamped (zero slopes). Identify the spline corresponding to each choice of end-point condition.

[Plot: the data points and three spline interpolants labelled I, II and III; S(x) versus x on −2 ≤ x ≤ 4.]
(A) I = not-a-knot, II = clamped, III = natural
(B) I = natural, II = not-a-knot, III = clamped
(C) I = natural, II = clamped, III = not-a-knot

Answer: (A). Spline III has the smallest curvature near end-points, so this must be the natural spline.
Spline II is the only one with zero slope at both ends, so this is the clamped spline.
{ Source: JMS }
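
In Matlab, the first two end conditions can be reproduced with the built-in spline command (a sketch with hypothetical data): by default spline uses not-a-knot conditions, and supplying end slopes gives a clamped spline.

    x = -2:4;   y = [1 2 3 2 1.5 2.5 3.5];     % hypothetical data points
    xx = linspace(-2, 4, 200);
    s1 = spline(x, y, xx);                     % not-a-knot end conditions
    s2 = spline(x, [0 y 0], xx);               % clamped, with zero slopes at both ends
    plot(x, y, 'o', xx, s1, '-', xx, s2, '--')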

Q4b–11176 . True or False: Suppose you have n + 1 data points and interpolate them using splines with degree n (that
is, n + 1 piecewise polynomials each having degree n). The resulting splines are all identical and equal to the
interpolating polynomial that you would have obtained using a method like Lagrange interpolation.

Answer: TRUE.
{ Source: JMS }



4c. Least Squares Fitting or Regression
Q4c–1177 . For each plot, identify the type of approximation that has been performed:

[Two panels, labelled I and II, each showing the same set of (X, Y) data points together with an approximating curve.]

(A) I = interpolation, II = curve fitting


(B) I = curve fitting, II = interpolation
(C) I = interpolation, II = extrapolation
(D) I = extrapolation, II = interpolation
Answer: (B).
{ Source: MAH }

Q4c–2178 . You are writing a Matlab code that computes a least-squares fit yfit to a set of data values y, both
of which are vectors of length m. Which of the following lines of code computes the root mean square (RMS)
error for the fit?
(A) RMS = sqrt(m * sum((yfit-y).^2));
(B) RMS = sqrt(sum((yfit-y).^2)) / m;
(C) RMS = sqrt(sum(yfit-y).^2 / m);
(D) RMS = sqrt(sum((yfit-y).^2) / m);
Answer: (D).
{ Source: JMS, MACM 316 lecture notes }

Q4c–3179 . The plot depicts noisy data points that were measured in an experiment. What is the best choice of method for computing a smooth function that best approximates such noisy data?

[Plot: noisy (x, y) data points, for x between −0.5 and 2.]
(A) A single polynomial interpolant, because it represents the data exactly.
(B) A piecewise cubic spline, because the resulting curve will be smooth at each data point.



(C) A least-squares fit, because it can capture the overall trends in the data.
(D) None will work because this data is too noisy.
Answer: (C). The data follow a trend that seems close to a cubic, but it’s a bit noisy which suggests
that neither interpolation method (A,B) will work very well. The plot below demonstrates clearly how the
polynomial and spline interpolants can be extremely “wiggly” even in the presence of small amounts of noise.
On the other hand, the least-squares cubic fit is smooth and also matches the data pretty closely. In fact, if it
weren’t for the two points closest to x = 0, both interpolants would be much closer to the cubic fit!!

[Plot: the same noisy data overlaid with the polynomial interpolant, the cubic spline and the least-squares (LSQ) cubic fit.]

{ Source: JMS }

Q4c–4180 . True or False: Given n data points (xi , yi ) with all of the xi distinct, you apply the method of least
squares to fit a pth degree polynomial to the given data. If p = n − 1 then this is equivalent to polynomial
interpolation.

Answer: TRUE. When p = n − 1, the least squares matrix is size n × n and has the same structure as the
Vandermonde matrix.
{ Source: JMS }

Q4c–5181 . True or False: When fitting a straight line y = a0 + a1 x to the three data points (xi , yi ) = (0, 0), (1, 0)
and (1, 1), the least squares solution is unique.

Answer: TRUE. The method of least squares has no difficulty with the fact that the last two points have
the same x-coordinate. In this case, the linear fit bisects the repeated points as shown in the plot.

[Plot: the three data points and the fitted line y versus x; the line passes midway between the two points at x = 1.]

{ Source: Heath [4], Exercise 3.4, p. 148 }



Q4c–6182 . Over the last century trade has grown remarkably. The graph shows the value of the world export index over the period 1900–2000 along with a least squares fit. What type of fit is it?

[Plot: export index (0 to 3000) versus year (1900–2000), with the least-squares fit overlaid.]

(A) linear fit


(B) quadratic fit
(C) cubic fit
(D) exponential fit
Answer: (A).
{ Source: Data source: https://ourworldindata.org/trade-and-globalization }

Q4c–7183 . Over the last century trade has grown remarkably. The following graph shows the value of a world export index over the period 1900–2000, along with a least squares fit. What type of fit is it?

[Plot: export index (0 to 3000) versus year (1900–2000), with the least-squares fit overlaid.]

(A) linear fit


(B) quadratic fit
(C) cubic fit
(D) exponential fit

Answer: (B).
{ Source: Data source: https://ourworldindata.org/trade-and-globalization }



Q4c–8184 . Over the last century trade has grown remarkably. The following graph shows the value of a world export index over the period 1900–2000, along with a least squares fit. What type of fit is it?

[Plot: export index (0 to 3000) versus year (1900–2000), with the least-squares fit overlaid.]

(A) linear fit


(B) quadratic fit
(C) cubic fit
(D) exponential fit
Answer: (D).
{ Source: Data source: https://ourworldindata.org/trade-and-globalization }

Q4c–9185 . The plot shows a set of data points along with three least-squares fits (numbered 1–3). The root mean square (RMS) errors are listed below:

    Fit 1: RMS = 2.8120
    Fit 2: RMS = 1.7655
    Fit 3: RMS = 1.7241

Which least-squares fit provides the best match with the data?

[Plot: the data points together with Fits 1, 2 and 3 versus t on 0 ≤ t ≤ 5.]
(A) Fit 1
(B) Fit 2
(C) Fit 3
Answer: (B). It would be easy to choose Fit 3 judging by the RMS error alone, but the difference from Fit 2
is very small. So in such cases, it’s always important to evaluate the fit qualitatively or visually . . . in the
“eyeball norm”. Note that Fit 3 exhibits some worrying wiggles near the end-points t = 0 and 5 that deviate
from the overall trends in the data.
{ Source: JMS }

Q4c–10186 . Fill in the blank: To find the exponential fit Q(x) = α e^(βx) using a given data set {(x0, y0), (x1, y1), . . . , (xm, ym)}, you can follow these steps:
• Take the natural logarithm of both sides

    log(Q(x)) = log(α) + βx.

• Apply linear least squares to the points __________ to obtain the coefficients a0 = log(α) and a1 = β.
• Then α = e^(a0) and β = a1 and Q(x) = e^(a0) e^(a1 x).

(A) {(x0 , y0 ), (x1 , y1 ), . . . , (xm , ym )}


(B) {(log(x0 ), log(y0 )), (log(x1 ), log(y1 )), . . . , (log(xm ), log(ym ))}
(C) {(log(x0 ), y0 ), (log(x1 ), y1 ), . . . , (log(xm ), ym )}
(D) {(x0 , log(y0 )), (x1 , log(y1 )), . . . , (xm , log(ym ))}

Answer: (D).
{ Source: JMS, MACM 316 lecture notes }
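
A Matlab sketch of this procedure using polyfit (x and y are hypothetical data vectors with positive y):

    x = [0 1 2 3 4];   y = [1.1 2.9 8.2 22 60];   % hypothetical data
    a = polyfit(x, log(y), 1);                    % linear fit to (x, log y)
    alpha = exp(a(2));   beta = a(1);
    Q = @(xx) alpha*exp(beta*xx);                 % the fitted exponential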

Q4c–11187 . Fill in the blank: To find the power-law fit Q(x) = α x^β using a given data set {(x0, y0), (x1, y1), . . . , (xm, ym)}, you can follow these steps:
• Take the natural logarithm of both sides

    log(Q(x)) = log(α) + β log(x).

• Apply linear least squares to the points __________ to obtain the coefficients a0 = log(α) and a1 = β.
• Then α = e^(a0) and β = a1 and Q(x) = e^(a0) x^(a1).

(A) {(x0 , y0 ), (x1 , y1 ), . . . , (xm , ym )}


(B) {(log(x0 ), log(y0 )), (log(x1 ), log(y1 )), . . . , (log(xm ), log(ym ))}
(C) {(log(x0 ), y0 ), (log(x1 ), y1 ), . . . , (log(xm ), ym )}
(D) {(x0 , log(y0 )), (x1 , log(y1 )), . . . , (xm , log(ym ))}

Answer: (B).
{ Source: JMS, MACM 316 lecture notes }

Q4c–12188 . To find the exponential fit Q(x) = α e^(βx) using a given data set {(x0, y0), (x1, y1), . . . , (xm, ym)}, the sum of squared residuals to be minimized is
(A) Σ_{i=1..m} ( yi − α e^(β xi) )^2
(B) Σ_{i=1..m} ( log(yi) − α − β xi )^2
(C) Σ_{i=1..m} ( log(yi) − α − β log(xi) )^2
(D) Σ_{i=1..m} ( log(yi) − log(α) − β xi )^2

Answer: (D). Taking the log of both sides of the exponential fit Q(x) = α e^(βx) gives

    log(Q(x)) = log(α) + βx   =⇒   log(yi) = log(α) + β xi   =⇒   Ei = log(yi) − log(α) − β xi

The sum of the squares of the residuals is then

    Σ_{i=1..m} Ei^2 = Σ_{i=1..m} ( log(yi) − log(α) − β xi )^2.

{ Source: Holistic Numerical Methods [5] }



Q4c–13189 . To find the power law fit Q(x) = α x^β using a given data set {(x0, y0), (x1, y1), . . . , (xm, ym)}, the sum of squared residuals to be minimized is
(A) Σ_{i=1..m} ( yi − α xi^β )^2
(B) Σ_{i=1..m} ( log(yi) − α − β xi )^2
(C) Σ_{i=1..m} ( log(yi) − α − β log(xi) )^2
(D) Σ_{i=1..m} ( log(yi) − log(α) − β xi )^2

Answer: (C). Taking the log of both sides of the power law fit Q(x) = α x^β gives

    log(Q(x)) = log(α) + β log(x)   =⇒   log(yi) = log(α) + β log(xi)   =⇒   Ei = log(yi) − log(α) − β log(xi)

The sum of the squares of the residuals is then

    Σ_{i=1..m} Ei^2 = Σ_{i=1..m} ( log(yi) − log(α) − β log(xi) )^2.

{ Source: Holistic Numerical Methods [5] }

Q4c–14190 . Suppose you apply the method of least squares to obtain a linear fit y = ax + b to the four points (x1 , y1 ),
(x2 , y2 ), (x3 , y3 ) and (x4 , y4 ). You then repeat the process by fitting the same data points to a function of the
form x = cy + d. Which of the following statements is FALSE?
(A) The two fitted lines are identical
(B) The two linear fits may be different
(C) The procedure fails when ac = 0
1
(D) a =
c
Answer: (B). The first fit minimizes the (vertical) difference in the y-values, while the second fit minimizes
the (horizontal) difference in the x-values.
{ Source: JMS }

4d. Applications
Q4d–1191 . A robot has to follow a path that passes through seven planning points in order to avoid two obstacles, as shown in the plot. To find the shortest path that is also smooth, which of the following solution strategies would you recommend?

[Plot: the seven planning points and two obstacles in the (x, y) plane, with x between 0 and 12 and y between 0 and 8.]
(A) Pass a sixth degree polynomial through the data
(B) Pass linear splines through the data



(C) Pass cubic splines through the data
(D) Use least squares to fit a second degree polynomial
Answer: (C). Linear splines would certainly give the shortest possible path, but then the path isn’t smooth
(maybe not the best for a robot). A least squares fit doesn’t interpolate the points and so it’s of no use. Among
the remaining two choices, (A) is likely to be more “wiggly” and so will either generate a longer path or run
into one of the objects.
{ Source: JMS }

Q4d–2192 . If you have ever watched an SFU convocation ceremony, then you have no doubt observed the students, faculty and SFU Pipe Band process through the Academic Quadrangle and over the reflecting pool. What you may not realize is the fear this event strikes in the hearts of its participants, where one tiny misstep on the treacherous pond walk can turn a beautiful day into disaster.

In order to help graduands in selecting a safe procession path, you have taken GPS measurements of nine strategically-placed points along the procession route. To determine the safest path over the pool, which of the following strategies would you recommend?
(A) Pass an eighth degree interpolating polynomial through all data points
(B) Use a linear spline to interpolate the data
(C) Use a cubic spline to interpolate the data
(D) Use least squares to fit a third degree polynomial
Answer: (C).



{ Source: MAH and JMS }



5. Differentiation and Integration
5a. Numerical Differentiation
Q5a–1193 . Which of the following limit statements regarding the first derivative of a smooth function f (x) is TRUE?
(A) f'(x) = lim_{h→0} [ f(x + h) − f(x) ] / h
(B) f'(x) = lim_{h→0} [ f(x − h) − f(x) ] / (−h)
(C) f'(x) = lim_{h→0} [ f(x + h) − f(x − h) ] / (2h)
(D) All of the above
Answer: (D).
{ Source: MAH }

Q5a–2194 . Which of the following difference formulas is a valid approximation of f'(xi)? Here, the xi represent equally-spaced points with xi − xi−1 = h.
(A) [ f(xi+1) − f(xi) ] / h
(B) [ f(xi) − f(xi−1) ] / h
(C) [ f(xi+1) − f(xi−1) ] / (2h)
(D) All of the above
Answer: (D).
{ Source: MAH }

Q5a–3195 . What form does the truncation error take for the difference formula f'(x) ≈ [ f(x + h) − f(x − h) ] / (2h)?
(A) O(1)
(B) O(h)
(C) O(2h)
(D) O(h^2)
Answer: (D).
{ Source: MAH }

Q5a–4196 . How would you classify the following difference formula for the derivative

    f'(x) ≈ [ −3f(x) + 4f(x + h) − f(x + 2h) ] / (2h) ?
(A) forward difference
(B) backward difference
(C) centered difference
(D) one-sided difference
Answer: (D). You can also think of this as a forward difference formula, in the sense that the stencil
consists of points located “forward” of the approximation point x, so response (A) is equally valid.
{ Source: MAH }

Q5a–5197 . You want to estimate the first derivative of f (x), given values of the function at discrete points x =
0, 0.1, 0.2, . . . , 1. Which of these formulas is appropriate for estimating f'(1)?



(A) f'(x) ≈ [ −3f(x) + 4f(x + h) − f(x + 2h) ] / (2h)
(B) f'(x) ≈ [ f(x + h) − f(x − h) ] / (2h)
(C) f'(x) ≈ [ 3f(x) − 4f(x − h) + f(x − 2h) ] / (2h)
(D) All of the above
Answer: (C). The other two involve points x = 1.1 or x = 1.2 where f is not known.
{ Source: MAH }

Q5a–6198 . You want to estimate the first derivative of f (x), given values of the function at discrete points x0 , x1 , x2 , . . . , xn .
Which of these formulas is appropriate for estimating f'(x0)?
(A) f'(xi) ≈ [ −3f(xi) + 4f(xi+1) − f(xi+2) ] / (2h)
(B) f'(xi) ≈ [ f(xi+1) − f(xi−1) ] / (2h)
(C) f'(xi) ≈ [ 3f(xi) − 4f(xi−1) + f(xi−2) ] / (2h)
(D) All of the above
Answer: (A). The other two involve points to the left of x0 where f is not known.
{ Source: MAH }

Q5a–7199 . Which of the statements below is TRUE regarding the difference formula

    f'(x) = [ f(x − 2h) − 8f(x − h) + 8f(x + h) − f(x + 2h) ] / (12h) + (h^4/30) f^(5)(c) ?

(A) it is a 5-point centered formula
(B) the truncation error is O(h^4)
(C) when h is large, the truncation error dominates
(D) All of the above
Answer: (D). Regarding response (A): this formula spans a five-point stencil centered at the approximation
point x, but the coefficient multiplying f (x) is zero.
Q5a–8200 . The given table lists the absolute errors from three finite difference approximations of the derivative f'(x) (labelled A, B, C). The errors are computed for a sequence of decreasing values of the grid spacing h. Which of the statements below regarding the order of accuracy for the three formulas is TRUE?

    h          Formula A   Formula B   Formula C
    0.500000   5.764e-01   6.670e-02   9.329e-03
    0.250000   3.096e-01   1.683e-02   3.565e-04
    0.125000   1.596e-01   4.218e-03   5.181e-05
    0.062500   8.093e-02   1.055e-03   4.116e-06
    0.031250   4.074e-02   2.638e-04   2.836e-07
    0.015625   2.044e-02   6.595e-05   1.853e-08
    0.007812   1.024e-02   1.649e-05   1.183e-09
    0.003906   5.122e-03   4.122e-06   7.513e-11

(A) Formulas A and B are order 1, Formula C is order 2.


(B) Formula A is order 1, Formula B is order 2, Formula C is order 3.
(C) Formula A is order 1, Formula B is order 2, Formula C is order 4.
(D) All formulas have the same order of accuracy, but different error constants.
Answer: (C). Notice first that h is reduced by a factor of 2 in each step. Then observe that the error in
Formula A goes down roughly by a factor of 2 in each step, Formula B by 4 = 2^2, and Formula C by 16 = 2^4.
{ Source: JMS }
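
One way to extract the order from such a table (a sketch using the Formula C column above): since h is halved in each row, p ≈ log2( E(h)/E(h/2) ).

    E = [9.329e-03 3.565e-04 5.181e-05 4.116e-06 2.836e-07 1.853e-08 1.183e-09 7.513e-11];
    p = log2(E(1:end-1)./E(2:end))   % entries approach 4 (error ratio -> 2^4 = 16) as h decreases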



Q5a–9201 . The given table lists approximations for f'(x) using three finite difference formulas (labelled A, B, C). The approximations are computed for a sequence of decreasing values of the grid spacing h. Consider the convergence of the approximations as h → 0, and notice that some of the results behave anomalously for the smallest values of h. What is the most likely explanation for this behaviour?

    h         Formula A     Formula B     Formula C
    1.0e-00   2.474412954   1.263946140   1.974278619
    1.0e-01   1.649322255   1.518206757   1.520883362
    1.0e-02   1.534001862   1.520879903   1.520906914
    1.0e-03   1.522218854   1.520906647   1.520906918
    1.0e-04   1.521038136   1.520906915   1.520906918
    1.0e-05   1.520920040   1.520906918   1.520906918
    1.0e-06   1.520908230   1.520906918   1.520906917
    1.0e-07   1.520907045   1.520906916   1.520906919
    1.0e-08   1.520906956   1.520906934   1.520906645
    1.0e-09   1.520906512   1.520906956   1.520908177
    1.0e-10   1.520903403   1.520905624   1.520903403
    1.0e-11   1.520916726   1.520916726   1.520794601
    1.0e-12   1.520561455   1.520783499   1.520672477

(A) Formula A converges with first order accuracy, while Formulas B and C diverge.
(B) All three formulas suffer from subtractive cancellation errors for small enough h.
(C) Local truncation error dominates at small h, and is largest for Formula C.
(D) Convergence stalls for Formulas B and C around h ≈ 10−5 , but then resumes when h gets small enough.
Answer: (B). Difference formulas are just that – differences – and so they experience subtractive cancellation errors when h is small and function values get very close to each other. These errors are then magnified by a factor of 1/h. As an example, consider the first-order one-sided difference formula [f(x) − f(x − h)]/h (this is Formula A).
{ Source: JMS }

Q5a–10202 . An electric circuit contains a resistor (resistance R), an inductor (inductance L) and a variable voltage
source E(t) that obeys
    E(t) = L di/dt + R i,
where i(t) is the electric current flowing through the circuit. You have measurements of current at several
times:

time, t (seconds) 1.00 1.01 1.05 1.10


current, i (amperes) 4.01 4.05 4.10 4.50

If the inductance is L = 0.82 henries and the resistance is R = 0.21 ohms, the most accurate approximation
for E(1.10) is
 
(A) 0.82 · (4.50 − 4.10)/0.05 + 0.21(4.50)
(B) 0.21(4.50)
(C) 0.82 · (4.05 − 4.01)/0.01 + 0.21(4.50)
(D) 0.82 · (4.50 − 4.01)/0.1 + 0.21(4.50)
Answer: (A). This uses a backward approximation to i'(1.1) that uses the points closest to t = 1.1. Response (D) uses a higher order centered approximation of the derivative, but it's centered at the wrong point – t = 1.055!
{ Source: Holistic Numerical Methods [5] and Burden & Faires [1], p. 182 }

Q5a–11203 . The table below lists the distance s(t) that a car travels along a straight road in time t:

Time, t (seconds) 0 5 10 15
Distance, s (m) 0 100 300 700



Using the forward, backward or centered difference approximation, what is the best estimate you can find for the velocity v = ds/dt of the car at t = 10 seconds?

(A) 50
(B) 60
(C) 100
(D) 200
 
Answer: (B). The centered difference approximation gives v(10) ≈ (700 − 100)/(15 − 5) = 60.
{ Source: Holistic Numerical Methods [5] and Burden & Faires [1], p. 182 }

Q5a–12204 . In the following centered difference formula for the second derivative
    f''(x) = [ Af(x − 2h) + 16f(x − h) − 30f(x) + 16f(x + h) + Af(x + 2h) ] / (12h^2) + O(h^4),
what must the constant A be for the formula to be correct?
(A) −1
(B) 1
(C) −2
(D) 2
Answer: (A). You could answer this by expanding in Taylor series. But it’s easier to simply recall that in any
difference approximation of a derivative, the stencil coefficients must sum to zero. Then A+16−30+16+A = 0.
{ Source: MAH }

Q5a–13205 . A first-order difference approximation of the first derivative is written in the form f'(x) = G(h) + a·h + O(h^2), where a is some constant. Which of the extrapolation formulas below is an O(h^2) approximation of f'(x)?
(A) [ 4G(h/2) − G(h) ] / 3
(B) [ 2G(h/2) − G(h) ] / 3
(C) 2G(h/2) − G(h)
(D) none of the above
Answer: (C). This is analogous to Richardson extrapolation, just applied to an O(h) formula.
{ Source: MAH }

Q5a–14206 . Fill in the blank: The truncation error terms in a difference approximation G(h) for the first derivative of a function can be written as f'(x) = G(h) + a1 h + a2 h^2 + a3 h^3 + . . . . Then one step of extrapolation can be used to write f'(x) = 2G(h/2) − G(h) + O(h^2). Applying one more extrapolation step yields the next higher order formula f'(x) = __________ + O(h^3).

(A) [ 4G(h/2) − G(h) ] / 3
(B) [ 8G(h/4) − G(h/2) ] / 3
(C) [ 8G(h/4) + G(h) ] / 3
(D) [ 8G(h/4) − 6G(h/2) + G(h) ] / 3



Answer: (D).
{ Source: MAH }
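
A small Matlab sketch (a hypothetical test with f(x) = e^x at x = 0, so the exact derivative is 1) illustrating the two levels of extrapolation:

    f = @exp;   x = 0;   h = 0.1;
    G  = @(h) (f(x+h) - f(x))/h;                  % O(h) approximation
    G1 = @(h) 2*G(h/2) - G(h);                    % O(h^2) after one step
    G2 = @(h) (8*G(h/4) - 6*G(h/2) + G(h))/3;     % O(h^3) after two steps, choice (D)
    [G(h) G1(h) G2(h)] - 1                        % errors shrink with each level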

Q5a–15207 . Write the truncation error terms in the centered difference approximation for the first derivative as

    f'(x) = [ f(x + h) − f(x − h) ] / (2h) + a1 h^2 + a2 h^4 + a3 h^6 + . . .

where the ai are constants. After applying n steps of Richardson extrapolation, what is the order of the resulting approximation for f'(x)?

(A) O(h^2)
(B) O(h^n)
(C) O(h^(2n))
(D) O(h^(2n+2))

Answer: (D). Each step increases the order by a factor of h^2, so the error is O(h^2 · (h^2)^n) = O(h^(2n+2)).
{ Source: MAH }

5b. Numerical Integration or Quadrature


Q5b–1208 . The Fundamental Theorem of Calculus is
(A) ∫_a^b f(x) dx = f(b) − f(a)
(B) ∫_a^b f(x) dx = F(b) − F(a), where f'(x) = F(x)
(C) ∫_a^b f(x) dx = F(b) − F(a), where F'(x) = f(x)
(D) ∫_a^b f(x) dx = [ f(b) − f(a) ] / (b − a)
Answer: (C). In the theorem statement, F is an antiderivative and so F' = f.
Q5b–2209 . The definite integral ∫_a^b f(x) dx can be interpreted as . . .

(A) the average value of f (x) on the interval [a, b]


(B) the area under the curve y = f (x) between x = a and x = b
(C) the work done by a force f (x) that acts on an object moving from position x = a to b
(D) all of the above

Answer: (B). Both (B) and (C) are valid interpretations. Response (A) might also be correct, but only if b − a = 1, since the average value of a function is defined as (1/(b − a)) ∫_a^b f(x) dx.
{ Source: JMS, MACM 316 lecture notes }
Q5b–3210 . The definite integral ∫_a^b f(x) dx can be interpreted as

(A) the area below the curve y = f (x) from a to b


(B) the area above the curve y = f (x) from a to b
(C) the difference in end-point values, f (b) − f (a)
f (a) + f (b)
(D) the arithmetic average,
2



Answer: (A). Response (A) is correct, but only as long as f (x) is positive on [a, b]. Wherever f (x) is
negative, then (B) is the correct interpretation.
{ Source: MAH }

Q5b–4211 . The average value of f (x) on [a, b] is


(A) [ f(a) + f(b) ] / 2
(B) [ f(a) + f(b) ] / (a + b)
(C) ∫_a^b f(x) dx
(D) (1/(b − a)) ∫_a^b f(x) dx

Answer: (D).
{ Source: Adapted from Holistic Numerical Methods [5], quiz 07 01 }

Q5b–5212 . True or False: The left and right rectangle rules are exact for constant functions, f (x) = c.

Answer: TRUE.
{ Source: JMS, MACM 316 lecture notes }

Q5b–6213 . Which of the statements below is TRUE? Trapezoidal rule is exact for:
I. constant functions, f (x) = c
II. linear functions, f (x) = ax + b
III. quadratic functions, f(x) = ax^2 + bx + c

(A) I
(B) II
(C) I and II
(D) I, II and III

Answer: (C).
{ Source: JMS, MACM 316 lecture notes }
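
A quick numerical check with Matlab's built-in composite trapezoidal rule (trapz), on a hypothetical grid:

    x = linspace(0, 1, 5);
    trapz(x, 2*x + 1)      % linear integrand: returns the exact value 2
    trapz(x, x.^2)         % quadratic integrand: 0.34375, slightly above the exact 1/3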

Q5b–7214 . Which of the following statements is TRUE?


I. Simpson’s rule is exact for linear functions, f (x) = ax + b.
II. Simpson's rule is exact for second-degree polynomials (quadratics), f(x) = ax^2 + bx + c.
III. Simpson’s rule is exact for fourth-degree polynomials.

(A) none is true


(B) I
(C) II
(D) I and II
(E) I, II and III

Answer: (D).
{ Source: JMS, MACM 316 lecture notes }

Q5b–8215 . Numerical integration is called “quadrature” because . . .


(A) it is exact for quadratic functions.



(B) the Greeks conceived of computing the area of a complex shape by replacing it with a square having the
same area, and “quad” ∼ square.
(C) the simplest formulas involve dividing up the area into quadrangles, which are just rectangles (the Latin
word quadrangulum means “four-cornered”.)
Answer: (B). This is what Wikipedia tells us, but (C) has got to be a reasonable explanation as well!
{ Source: JMS }

Q5b–9216 . The integral \int_0^5 f(x) dx is approximated using the left rectangle rule (AL), right rectangle rule (AR) and midpoint rule (AM) for the function f(x) shown in the plot.

[Plot: an increasing curve y = f(x) on 0 ≤ x ≤ 5, divided into five subintervals with areas A0, A1, A2, A3, A4 between a = x0 and x5 = b.]

Which of the following inequalities must be TRUE?

(A) AL < AM < AR


(B) AM < AL < AR
(C) AR < AM < AL
(D) AM < AR < AL
Answer: (A).

AL < AM < AR. Since f(x) is increasing, on each subinterval the left-endpoint rectangle lies below the midpoint rectangle, which in turn lies below the right-endpoint rectangle.

[Three-panel plot: the left rectangle rule, midpoint rule and right rectangle rule applied to the same subintervals.]

{ Source: MAH }



Q5b–10217 . Apply the trapezoidal rule to approximate the integral A = \int_0^5 f(x) dx for the given function f(x) shown in the plot.

[Plot: the same increasing, concave-up curve y = f(x) on 0 ≤ x ≤ 5, divided into subintervals A0–A4.]

If AT is the approximation and A is the exact value, which of the following relations is TRUE?

(A) AT > A
(B) AT < A
(C) AT = A
(D) none of the above

Answer: (A). The function is concave upwards and so the trapezoidal areas (defined by secant lines) always
lie slightly above the curve. Then AT is an overestimate of A.
[Plot: the trapezoidal rule applied to f(x); the secant lines lie above the concave-up curve.]

{ Source: MAH }

Q5b–11218 . Approximate the integral A = \int_0^5 f(x) dx using the left rectangle rule (AL), right rectangle rule (AR) and trapezoidal rule (AT) for the function f(x) shown in the plot.

[Plot: the same increasing, concave-up curve y = f(x) on 0 ≤ x ≤ 5, divided into subintervals A0–A4.]

Which of the inequalities below must be TRUE?

(A) AL < AT < A < AR


(B) AL < A < AT < AR
(C) AR < AT < A < AL
(D) AT < AL < A < AR



Answer: (B).

AL < A < AT < AR

[Three-panel plot: the left rectangle rule, trapezoidal rule and right rectangle rule applied to the same increasing, concave-up function.]

{ Source: MAH }

Q5b–12219 . Approximate the integral \int_0^9 g(x) dx using the left rectangle rule (AL) and right rectangle rule (AR) for the function g(x) shown in the plot.

[Plot: a decreasing curve y = g(x) on 0 ≤ x ≤ 9.]

Which of the following relations is TRUE?

(A) AL > AR
(B) AL < AR
(C) AL = AR
(D) none of the above
Answer: (A). AL > AR, since g(x) is decreasing.

{ Source: MAH }



Q5b–13220 . Approximate the integral A = \int_0^9 g(x) dx using the left rectangle rule (AL), right rectangle rule (AR) and trapezoidal rule (AT) for the function g(x) shown in the plot.

Which of the inequalities below must be TRUE?

(A) AL < AT < A < AR


(B) AL < A < AT < AR
(C) AR < AT < A < AL
(D) AT < AL < A < AR
Answer: (C).
{ Source: MAH }

Q5b–14221 . Approximate the integral \int_a^b h(x) dx using the left rectangle rule (AL), right rectangle rule (AR) and trapezoidal rule (AT) for some arbitrary function y = h(x).

Which of the following relations must be TRUE?


(A) AL < AT < AR
(B) AR < AT < AL
(C) AR < AL < AT
(D) AT = \frac{1}{2}(AL + AR)
Answer: (D). The left and right rectangle rules are

AL = h \sum_{i=0}^{n−1} f(x_i)  and  AR = h \sum_{i=1}^{n} f(x_i),

and their sum is just

AL + AR = h \left[ f(x_0) + 2 \sum_{i=1}^{n−1} f(x_i) + f(x_n) \right] = 2 AT, which implies (D).

Whether inequalities (A), (B) or (C) are true depends on the shape of h(x).
{ Source: MAH }
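
A quick numerical check of the identity AT = (AL + AR)/2 in Matlab (a minimal sketch; the integrand exp(−x^2) and the interval [0, 2] are arbitrary choices, not part of the question):

f = @(x) exp(-x.^2);          % any integrand
a = 0;  b = 2;  n = 50;       % interval and number of subintervals
h = (b - a)/n;
x = a + h*(0:n);              % grid points x_0, ..., x_n
AL = h*sum(f(x(1:end-1)));    % left rectangle rule
AR = h*sum(f(x(2:end)));      % right rectangle rule
AT = h*(f(x(1))/2 + sum(f(x(2:end-1))) + f(x(end))/2);   % trapezoidal rule
disp(abs(AT - (AL + AR)/2))   % zero up to round-off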



Q5b–15222 . The given plot shows the absolute error from three different quadrature approximations of the integral \int_a^b f(x) dx (labelled Formulas A, B, C). The errors are computed for a sequence of decreasing values of h, the grid spacing.

[Log-log plot of error versus h for Formulas A, B and C, with h ranging from about 10^{-2} to 10^{1} and errors from about 10^{-10} to 10^{0}.]

Which of these statements regarding the order of accuracy for the three formulas is TRUE?
(A) Formulas A and B are order 1, Formula C is order 2.
(B) Formula A is order 1, Formula B is order 2, Formula C is order 3.
(C) Formula A is order 1, Formula B is order 2, Formula C is order 4.
(D) All formulas have the same order of accuracy, but different error constants.

Answer: (C). On a log-log scale, the order of the scheme is given by the slope of each error curve. For example, with Formula B, h decreases by roughly a factor of 1000 (3 orders of magnitude) while the error decreases by a factor of 10^6 (6 orders of magnitude). This means that the error behaves like E ∝ h^{6/3} = h^2, so it's a second-order method.
{ Source: JMS }
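
The slope of such an error curve can also be estimated in Matlab by fitting a line to the log-log data; a minimal sketch (the (h, error) values below are made up to mimic second-order behaviour, not read off the plot):

h   = [1e-1 1e-2 1e-3];            % hypothetical grid spacings
err = [2e-3 2e-5 2e-7];            % hypothetical errors
p = polyfit(log(h), log(err), 1);  % fit log(err) = p(1)*log(h) + p(2)
fprintf('estimated order = %.2f\n', p(1))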

Q5b–16223 . The given plot shows the absolute error in three different quadrature approximations of the integral \int_a^b f(x) dx (labelled Formulas A, B, C). The errors are computed for a sequence of decreasing values of h, the grid spacing.

[Log-log plot of error versus h for Formulas A, B and C, with h ranging from about 10^{-4} to 10^{0}; the error for Formula C levels off near 10^{-15}.]

Consider the convergence of the approximations as h → 0, and notice that there is some odd behaviour with Formula C. What is the most likely explanation for this behaviour?
(A) Formulas A and B converge with order 1 and 2 respectively, whereas Formula C diverges.
(B) Formula C suffers from subtractive cancellation errors when h is small enough.
(C) The accuracy of all three formulas is limited by round-off errors that dominate the calculation when h is
small.
(D) There is probably a coding error in Formula C.
Answer: (C). The error in Formula C levels off at around 10^{-15}, which is pretty close to machine epsilon in
double-precision floating point arithmetic. This is an indication that round-off errors are what’s limiting the
accuracy. Response (B) isn’t correct because subtractive cancellation errors tend not to occur in quadrature
formulas, since they generally involve summing terms with mostly the same sign.
{ Source: JMS }



Q5b–17224 . The quadrature formula
\int_0^1 f(x) dx = c_0 f(0) + c_1 f(1)

is exact for all constant and linear functions. Determine the values of c0 and c1 .
(A) c_0 = 1, c_1 = 1
(B) c_0 = −1, c_1 = 1
(C) c_0 = −\frac{1}{2}, c_1 = \frac{1}{2}
(D) c_0 = \frac{1}{2}, c_1 = \frac{1}{2}

Answer: (D). Take the simplest possible constant function f(x) = 1 and linear function f(x) = x:

f(x) = 1 ⟹ \int_0^1 dx = x \big|_0^1 = 1 = c_0 + c_1

f(x) = x ⟹ \int_0^1 x dx = \frac{x^2}{2} \Big|_0^1 = \frac{1}{2} = c_1

Solving the two linear equations yields c_0 = \frac{1}{2} and c_1 = \frac{1}{2}, so the quadrature formula is just the arithmetic
average of f (0) and f (1).
{ Source: MAH }
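
Equivalently, the two exactness conditions form a small linear system that can be solved in Matlab; a minimal sketch:

% Exactness for f(x) = 1 gives c0 + c1 = 1; exactness for f(x) = x gives 0*c0 + 1*c1 = 1/2.
A = [1 1; 0 1];
b = [1; 1/2];
c = A\b        % returns c = [0.5; 0.5], i.e. c0 = c1 = 1/2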

Q5b–18225 . The quadrature formula


\int_{−1}^{1} f(x) dx = f(w_1) + f(w_2)

is exact for all constant and linear functions. Determine all possible values of the parameters w1 and w2 .
(A) w_1 = −\frac{1}{2}, w_2 = \frac{1}{2}
(B) w_1 = −1, w_2 = 1
(C) w_1 = w_2
(D) w_1 = −w_2
Answer: (D). Take the simplest possible constant function f(x) = 1 and linear function f(x) = x:

f(x) = 1 ⟹ \int_{−1}^{1} dx = x \big|_{−1}^{1} = 2 = 1 + 1 (always satisfied)

f(x) = x ⟹ \int_{−1}^{1} x dx = \frac{x^2}{2} \Big|_{−1}^{1} = 0 = w_1 + w_2

Solving the two linear equations gives w1 = −w2 . This means that the quadrature formula can use any two points
as long as they are symmetric about the origin. Note that for nonlinear functions f , the formula won’t be very
accurate if w1 and w2 are not within (or at least close to) the interval [−1, 1].
{ Source: MAH }

Q5b–19226 . The velocity of an object as a function of time is given by

Time (s) 2 5 7 9 12
Velocity (m/s) 12 16 24 15 33

You use the trapezoidal rule to calculate the distance (in metres) travelled by the object between t = 2 and
t = 12 seconds, and the result is . . .
(A) 155.0
(B) 161.0



(C) 193.0
(D) 232.5
Answer: (C). Trapezoidal rule for this unequally-spaced set of points is
\frac{1}{2}\left[ 3(12 + 16) + 2(16 + 24) + 2(24 + 15) + 3(15 + 33) \right] = \frac{1}{2}\left[ 84 + 80 + 78 + 144 \right] = 193

{ Source: JMS }
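
The same composite trapezoidal calculation can be done with Matlab's built-in trapz, which handles unequally spaced abscissae; a minimal sketch:

t = [2 5 7 9 12];        % time (s)
v = [12 16 24 15 33];    % velocity (m/s)
d = trapz(t, v)          % distance travelled; returns 193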

Q5b–20227 . Let I_n be the trapezoidal rule approximation of the integral \int_a^b f(x) dx on n subintervals; then a more accurate estimate is obtained using Richardson's extrapolation as . . .
(A) I_{2n} + \frac{I_{2n} − I_n}{15}
(B) I_{2n} + \frac{I_{2n} − I_n}{3}
(C) I_{2n}
(D) I_{2n} + \frac{I_{2n} − I_n}{I_{2n}}
Answer: (B). The leading-order error term for the trapezoidal rule is O(h^2), and so Richardson's extrapolation formula is \frac{4 I_{2n} − I_n}{3}.
{ Source: JMS }
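
A minimal Matlab sketch of this extrapolation step (the integrand sin(x) on [0, π], with exact value 2, and n = 8 are arbitrary test choices):

f = @(x) sin(x);
a = 0;  b = pi;  n = 8;
trap = @(m) (b-a)/m * (f(a)/2 + sum(f(a + (b-a)/m*(1:m-1))) + f(b)/2);
In  = trap(n);
I2n = trap(2*n);
R   = I2n + (I2n - In)/3;     % one step of Richardson extrapolation
fprintf('I_n = %.8f, I_2n = %.8f, R = %.8f\n', In, I2n, R)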



6. Initial Value Problems for Ordinary Differential Equations (ODEs)
6a. Background on ODEs
Q6a–1228 . What is the most accurate classification of the equation:
\frac{dy}{dt} = c y (1 − y) ?
(A) first-order linear ordinary differential equation
(B) first-order nonlinear ordinary differential equation
(C) first-order linear partial differential equation
(D) first-order nonlinear partial differential equation
Answer: (B).
{ Source: MAH }

Q6a–2229 . A differential equation is classified as “ordinary” if it has:


(A) one dependent variable
(B) more than one dependent variable
(C) one independent variable
(D) more than one independent variable
Answer: (C).
{ Source: Holistic Numerical Methods [5], quiz 08 01 }

Q6a–3230 . The deflection w(x) of the free end of a flexible beam in response to a load force q(x) is described by the
differential equation

\frac{d^4 w}{dx^4} + a^2 \frac{d^2 w}{dx^2} = q(x)
What is the order of this ODE?
(A) 1
(B) 2
(C) 4
(D) 6
Answer: (C).
{ Source: MAH }

Q6a–4231 . Which of the following is a well-posed ODE initial value problem?


(A) \frac{dy}{dx} = y^2 − e^y, y(0) = 0
(B) y = 4y' − log(t), y(0) = 4
(C) t^2 \frac{df}{dt} − t|f| + \sin(f + t) − \frac{1}{3} t^3 = 0, f(π) = −1
(D) \frac{z' + t}{z} = 1, z(1) = 0
(E) x x' − 3xy + 4x' = y^2
Answer: (A). Responses (C) and (D) are also correct. But (B) is not well-posed because log(t) isn’t
continuous at the initial point t = 0. And neither is (E) since no initial condition is given.
{ Source: JMS }



Q6a–5232 . Which of the ODEs listed below could have generated this slope field plot?

[Slope field plot on −4 ≤ x ≤ 4, −4 ≤ y ≤ 4.]

(A) \frac{dy}{dx} = f(y)
(B) \frac{dy}{dx} = f(x)
(C) \frac{dy}{dx} = f(x, y)
Answer: (C). Consider any horizontal line (y = constant) and notice that the slope varies with x. Similarly, the slope depends on y along vertical lines. So the ODE RHS must depend on both x and y. This is actually the slope field of \frac{dy}{dx} = −\frac{x}{y}.
{ Source: MAH }
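
Slope fields like this are easy to generate in Matlab with meshgrid and quiver; a minimal sketch reproducing the field of dy/dx = −x/y mentioned above:

[x, y] = meshgrid(linspace(-4, 4, 20));   % this grid avoids y = 0 exactly
s = -x./y;                                % slope at each grid point
u = 1./sqrt(1 + s.^2);                    % unit direction vectors
v = s./sqrt(1 + s.^2);
quiver(x, y, u, v, 0.5)                   % 0.5 scales the arrow lengths
xlabel('x'), ylabel('y'), axis([-4 4 -4 4])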

Q6a–6233 . Which of the ODEs listed below could have generated this slope field plot?

[Slope field plot on −4 ≤ x ≤ 4, −4 ≤ y ≤ 4.]

(A) \frac{dy}{dx} = f(x)
(B) \frac{dy}{dx} = f(y)
(C) \frac{dy}{dx} = f(x, y)
Answer: (A). The slope does not change with y along any vertical line (x = constant), and so the ODE RHS only depends on x. This is actually the slope field of \frac{dy}{dx} = \sin(x).
{ Source: MAH }



Q6a–7234 . Which of the ODEs listed below could have generated this slope field plot?

[Slope field plot on 0 ≤ t ≤ 4, 0 ≤ y ≤ 2.]

(A) \frac{dy}{dt} = f(t)
(B) \frac{dy}{dt} = f(y)
(C) \frac{dy}{dt} = f(t, y)
Answer: (B). The slope does not change with t along any horizontal line (y = constant), and so the ODE RHS only depends on y. This is actually the slope field of \frac{dy}{dt} = 2y(1 − y).

{ Source: MAH }

Q6a–8235 . The slope field for an ODE is shown. Which of these statements is TRUE?

I. For y(0) > 1 all solutions are decreasing.
II. For y(0) = 1 the solution remains the same.
III. For y(0) < 1 all solutions are increasing.

[Slope field plot on 0 ≤ t ≤ 4, 0 ≤ y ≤ 2.]

(A) I
(B) II
(C) I and II
(D) I and III
(E) I, II and III

Answer: (C).
{ Source: MAH }

Q6a–9236 . The slope field for an ODE is shown below on the left. Which of the solution curves (I–III) in the right
hand plot could be solutions?

[Left: slope field plot on 0 ≤ t ≤ 4, 0 ≤ y ≤ 2. Right: three candidate solution curves labelled I, II and III on the same axes.]

(A) I
(B) II
(C) I and II
(D) I and III
(E) I, II and III

Answer: (E).

[Plot: all three curves are consistent with the slope field.]

{ Source: MAH }

Q6a–10237 . The slope field for an ODE is shown below on the left. Which of the solution curves (I–III) in the right
hand plot could be solutions?

[Left: slope field plot on 0 ≤ t ≤ 3, −1 ≤ y ≤ 2. Right: three candidate solution curves labelled I, II and III on the same axes.]

(A) I
(B) II
(C) I and II
(D) I and III
(E) I, II and III

Answer: (C).

[Plot: curves I and II follow the slope field; curve III does not.]

{ Source: MAH }

6b. Euler’s Method


 
Q6b–1238 . Consider the population growth model \frac{dN}{dt} = aN\left(1 − \frac{N}{K}\right), where N(t) is population as a function of time, a is the growth rate, and K is the maximum sustainable population. If you were to approximate the solution using Euler's method with time step h, what is the difference equation you would use?

(A) N_{j+1} = N_j + h a N_j \left(1 − \frac{N_j}{K}\right)
(B) N_{j+1} = h N_j + a N_j \left(1 − \frac{N_j}{K}\right)
(C) N_{j+1} = a N_j + h N_j \left(1 − \frac{N_j}{K}\right)
(D) N_{j+1} = a N_j − \frac{h}{K} N_j^2
Answer: (A).
{ Source: MAH }
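
A minimal Matlab sketch of this Euler time-stepping loop (the values a = 0.5, K = 100, N(0) = 10, h = 0.1 and the final time 20 are arbitrary illustrative choices):

a = 0.5;  K = 100;  h = 0.1;  tend = 20;
t = 0 : h : tend;
N = zeros(size(t));
N(1) = 10;                                 % initial population
for j = 1 : length(t) - 1
  N(j+1) = N(j) + h*a*N(j)*(1 - N(j)/K);   % difference equation (A)
end
plot(t, N), xlabel('t'), ylabel('N(t)')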

Q6b–2239 . Using Euler's method to approximate an ODE, you obtain the difference equation y_{j+1} = y_j + h a y_j where a is a constant and h is the time step. What is the ODE?

(A) \frac{dy}{dt} = ay
(B) \frac{dy}{dt} = a
(C) \frac{dy}{dt} = hay
(D) \frac{dy}{dt} = ha
Answer: (A).
{ Source: MAH }

Q6b–3240 . Recall the well-posedness (existence-uniqueness) theorem from class:


Theorem: For the initial value problem

y' = f(t, y) for t ∈ [a, b] with y(a) = y_0,

suppose that:
• f (t, y) is continuous for all y and t ∈ [a, b], and
• f (t, y) satisfies the “Lipschitz condition”

|f(t, y_1) − f(t, y_2)| ≤ L |y_1 − y_2|.

Then the IVP has a unique solution y(t) for all t ∈ [a, b].
Consider the IVP y' = −y + t + 1 with y(0) = 1, for t ∈ [0, 5]. What is the corresponding Lipschitz constant
that ensures this problem is well-posed?
(A) 0
(B) 1
(C) 5
(D) 6
Answer: (B). Substituting the RHS function into the Lipschitz condition:

|f(t, y_1) − f(t, y_2)| = |−y_1 + t + 1 − (−y_2 + t + 1)| = |y_2 − y_1| = |y_1 − y_2|.

{ Source: MAH }

Q6b–4241 . The local truncation error for Euler’s method is


(A) O(1)
(B) O(h)
(C) O(h^2)
(D) O(h^3)
Answer: (C). This is the error in a single step, which comes from the Taylor series expansion.
{ Source: MAH }



Q6b–5242 . The global truncation error for Euler’s method is
(A) O(1)
(B) O(h)
(C) O(h^2)
(D) O(h^3)

Answer: (B). Each step has LTE = O(h^2), which is multiplied by N = O(1/h) steps.
{ Source: MAH }

Q6b–6243 . True or False: When you solve an ODE using a time-stepping algorithm like Euler’s method, the local
truncation errors in every step add up and so the error will always grow with time.

Answer: FALSE. The sign of the error in each step can be positive or negative. If errors always have the
same sign, then they can grow with time as they accumulate. But if the signs alternate, then errors can cancel.
{ Source: JMS }

Q6b–7244 . Consider the IVP


\frac{dy}{dx} = \frac{y − 2}{x + 2}, y(2) = −1.
Using a single step of Euler’s method, what is the approximate solution at x = 2.5?
(A) 0.75
(B) 0.25
(C) −0.625
(D) −1
(E) −1.375
Answer: (E). The RHS function is f(x, y) = \frac{y − 2}{x + 2} and we start with y_0 = −1. Taking one step of Euler's method with h = 0.5 gives

y_1 = y_0 + h f(x_0, y_0) = −1 + 0.5 \left( \frac{−1 − 2}{2 + 2} \right) = −1.375

{ Source: JMS }

Q6b–8245 . Consider the IVP


2 \frac{dy}{dx} + y \cos y = 0, y(π) = π.

Using a single step of Euler's method, what is the approximate solution at x = \frac{3π}{2}?

(A) π + \frac{1}{4} π^2
(B) π − \frac{1}{2} π^2
(C) \frac{3}{4} π
(D) \frac{1}{2} π^2
(E) 0

Answer: (A). The RHS function is f(y) = −\frac{1}{2} y \cos y and we start with y_0 = π. Taking one step of Euler's method with h = \frac{π}{2} gives

y_1 = y_0 + h f(y_0) = π + \frac{π}{2} \left( −\frac{π}{2} \cos π \right) = π + \frac{π^2}{4}

{ Source: JMS }



Q6b–9246 . Consider the IVP
\frac{dy}{dt} − y^2 − t = 0, y(0) = 1.
Using Euler’s method with time step h = 1, what is the approximate value for y(2)?
(A) 1
(B) 2
(C) 7
(D) 9
Answer: (C). The RHS function is f(t, y) = t + y^2 and we start with y_0 = 1. Taking two steps of Euler's method gives

y_1 = y_0 + h f(t_0, y_0) = 1 + 1·(0 + 1^2) = 2
y_2 = y_1 + h f(t_1, y_1) = 2 + 1·(1 + 2^2) = 7

{ Source: JMS }

Q6b–10247 . You apply Euler's method to solve the initial value problem

y' = y(−2x + 1/x) if x ≠ 0, and y' = 1 if x = 0,

on the interval x ∈ [0, 2] with initial condition y(0) = 0. The numerical solution for N = 20 points is plotted alongside the exact solution y(x) = x e^{−x^2}. Which of the following statements is wrong?

[Plot: the computed solution (N = 20) and the exact solution y(x) = x e^{−x^2} on 0 ≤ x ≤ 2, with y ranging from 0 to 0.5.]
(A) Euler’s method does not converge for this problem.
(B) The local truncation error is sometimes large, and sometimes small.
(C) The global truncation error is small near x = 2.
(D) The accuracy of the solution could likely be improved by increasing N .
Answer: (A). Responses (B)–(D) are correct. And although (A) might be true, there is no way to determine
convergence with only results from a single N .
{ Source: Based on Fausett [3], p. 450 }



Q6b–11248 . You apply Euler's method to compute the results displayed in the plot, but you decide that the numerical solution is much too inaccurate for your purposes. What could you do to increase the accuracy?

[Plot: the computed and exact solutions from the previous question on 0 ≤ x ≤ 2.]
(A) Reduce the time step
(B) Use a higher order method
(C) Use adaptive time-stepping
(D) Switch to an implicit method like backward Euler
(E) All of the above
Answer: (A). Responses (A)–(C) are all reasonable answers. Switching to backward Euler won’t help with
accuracy, but could improve stability.
{ Source: JMS }

6c. Higher Order Methods


Q6c–1249 . This partial code implements the modified Euler method for solving the IVP y' = f(t, y), y(a) = y_0, for a ≤ t ≤ b. The Matlab function f returns the value of f(t, y). Select suitable replacement code for the [blank] from the list below.

function [t, y] = meuler(f,a,b,y0,h)
t = a : h : b;
y(1) = y0;
for k = 1 : length(t),
ystar = y(k) + h * f(t(k), y(k));
...[fill in blank]...
end
(A) y(k+1) = y(k) + h/2 * (f(t, y(k)) + f(t+h, ystar));
(B) y(k+1) = y(k) + h/2 * (f(t(k), y(k)) + f(t(k)+h, ystar));
(C) y(k+1) = y(k) + h * f(t(k), y(k)) + f(t(k)+h, y(k)));
(D) y(k+1) = y(k) + h * f(t(k)+h, ystar);
Answer: (B).
{ Source: JMS, MACM 316 lecture notes }
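
For reference, here is a runnable version of this routine with the blank filled in as in response (B); in this sketch the loop runs only to length(t) − 1 so that y and t end up the same length:

function [t, y] = meuler(f, a, b, y0, h)
% Modified Euler (Heun) method for y' = f(t,y), y(a) = y0, on a <= t <= b.
t = a : h : b;
y = zeros(size(t));
y(1) = y0;
for k = 1 : length(t) - 1
  ystar  = y(k) + h * f(t(k), y(k));                         % Euler predictor
  y(k+1) = y(k) + h/2 * (f(t(k), y(k)) + f(t(k)+h, ystar));  % trapezoidal corrector
end
end

For example, [t, y] = meuler(@(t,y) -y, 0, 1, 1, 0.1) approximates the solution of y' = −y, y(0) = 1.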

Q6c–2250 . You perform the following two calculations on the same initial value problem:
• two steps of Euler’s method with step size 0.25
• one step of modified Euler’s method with step size 0.5

Which method do you expect to give the more accurate solution?


(A) Euler’s method
(B) modified Euler’s method
(C) both give the same accuracy



Answer: (C). Euler's method has LTE = O(h^2), so that two steps of size 0.25 give LTE ≈ 0.125. Modified Euler's method has LTE = O(h^3), so that one step of size 0.5 gives the same LTE ≈ 0.125.
{ Source: JMS }

Q6c–3251 . You perform the following two calculations on the same initial value problem:
• two steps of Euler’s method with step size 0.1
• one step of modified Euler’s method with step size 0.2
Which method do you expect to give the more accurate solution?
(A) Euler’s method
(B) modified Euler’s method
(C) both give the same accuracy
Answer: (B). Euler's method has LTE = O(h^2), so that two steps of size 0.1 give LTE ≈ 0.02. Modified Euler's method has LTE = O(h^3), so that one step of size 0.2 gives LTE ≈ 0.008, which is much smaller!
{ Source: JMS }

Q6c–4252 . When comparing Euler’s method with standard fourth-order Runge-Kutta (RK4), which of the following
is not an advantage of RK4?
(A) The cost per time step is lower.
(B) The local truncation error is much smaller.
(C) It is more stable and a much larger time step can usually be used.
Answer: (A). The cost of an RK4 step is roughly 4 times that of Euler’s method. The cost savings come
in because many fewer time steps are usually needed.
{ Source: JMS }

Q6c–5253 . This partial code implements the midpoint method for solving the differential equation y' = f(t, y), a ≤ t ≤ b, y(a) = y_0, where the Matlab function f returns the value of f(t, y). Select suitable replacement code for the [blank].

function [t, y] = midpointmethod(f,a,b,y0,h)
t = a : h : b;
y(1) = y0;
for k = 1 : length(t),
ystar = y(k) + h/2 * f(t(k), y(k));
...[fill in blank]...
end
(A) y(k+1) = y(k) + h/2 * (f(t(k), y(k)) + f(t(k)+h, ystar));
(B) y(k+1) = y(k) + h * f(t(k)+h/2, y(k));
(C) y(k+1) = y(k) + h * f(t(k)+h/2, ystar);
(D) y(k+1) = y(k) + h * f(t(k)+h, ystar);
Answer: (C).
{ Source: JMS, MACM 316 lecture notes }

Q6c–6254 . You have a code for solving ODEs that predicts the solution at time tj+1 using

y* = y_j + h f(t_j, y_j),

and then computes the actual solution approximation as

y_{j+1} = y_j + \frac{h}{2} \left( f(t_j, y_j) + f(t_{j+1}, y*) \right).
This approach is known as
(A) Euler’s method



(B) modified Euler’s method
(C) Runge-Kutta method
(D) midpoint method
Answer: (B). This is a Runge-Kutta method of order 2, and so (C) is also correct.
{ Source: MAH }

Q6c–7255 .
y* = y_j + \frac{h}{2} f(t_j, y_j)

y_{j+1} = y_j + h f\left(t_j + \frac{h}{2}, y*\right)
This iteration is called
(A) Euler’s method
(B) modified Euler method
(C) Runge-Kutta method
(D) midpoint method

Answer: (D).
{ Source: MAH }

Q6c–8256 . True or False:


Your friend writes a Matlab code that integrates the ODE

\frac{dy}{dt} = \frac{2(1 − t)}{(0.05 + (t − 1)^2)^2}, y(0) = 1

from t = 0 to 1 using the modified Euler method. The plot shows the computed solution for a decreasing sequence of step sizes h = 0.25, 0.125 and 0.0625, alongside the exact solution. Based on these results, your friend claims that their code is correct.

[Plot: computed solutions for h = 0.25, 0.125 and 0.0625 together with the exact solution on 0 ≤ t ≤ 1, with y ranging from 0 to 20.]
Answer: FALSE. Well, it’s most likely false. The right hand end-point seems to approach the exact
solution y(1) ≈ 20 as h gets smaller. But the middle values (near t ≈ 0.5) are not getting very close to
the exact solution. Furthermore, the numerical approximations all have a slope at t = 1 that is nonzero
and increasing with h, which seems to diverge from the exact slope of \frac{dy}{dt}(1) = 0.

{ Source: Based on Recktenwald [6], p. 728 }

Q6c–9257 . True or False:



Your friend writes a Matlab code that integrates

\frac{dy}{dt} = \frac{2(1 − t)}{(0.05 + (t − 1)^2)^2}, y(0) = 1

from t = 0 to 1 using the modified Euler method. The plot shows the computed solution for a decreasing sequence of step sizes alongside the exact solution. They also calculate the max-norm error in the three solutions:

h       max_j |y_j − y(t_j)|
0.2     4.0680
0.1     2.8348
0.05    1.3826

[Plot: computed solutions for h = 0.2, 0.1 and 0.05 together with the exact solution on 0 ≤ t ≤ 1.]

Based on these results, your friend claims that their code is correct.
Answer: FALSE. The approximations do seem to be converging to the exact solution as h → 0. However,
the error is only going down roughly by a factor of 2 in each step, which suggests a first-order method. Modified
Euler should be second-order, and so there must be a bug in the code.
{ Source: Based on Recktenwald [6], p. 728 }

Q6c–10258 . You have coded up an ODE solver and to test it you execute your code twice, first with time step h and
second with step h/2. Suppose these two solutions have absolute errors E_h and E_{h/2} respectively. Which
expression is an estimate for the order of accuracy?
(A) \frac{\log_2 E_h}{\log_2 E_{h/2}}
(B) \frac{\log_2 E_{h/2}}{\log_2 E_h}
(C) \log_2\left(\frac{E_h}{E_{h/2}}\right)
(D) \log_2\left(\frac{E_{h/2}}{E_h}\right)

Answer: (C). If the error in the method is order p, then E_h = c h^p and E_{h/2} = c (h/2)^p. Taking the ratio E_h / E_{h/2} = 2^p, it follows that p = \log_2(E_h / E_{h/2}).
{ Source: JMS }
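
A quick Matlab check of this formula (a sketch; the two error values are hypothetical, chosen to mimic a second-order method):

Eh  = 3.2e-3;               % error with step h
Eh2 = 8.1e-4;               % error with step h/2
p = log2(Eh / Eh2)          % formula (C); returns approximately 2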

Q6c–11259 . You have coded up an ODE solver and to test it, you execute your code twice, once with time step h and
a second time with step h/10. Suppose these two solutions have absolute errors E_h and E_{h/10} respectively.
Which expression is an estimate for the order of accuracy?
 
Eh
(A) log10
Eh/10
 
Eh
(B) log2
Eh/10
 
1 Eh
(C) log2
log2 10 Eh/10
log10 Eh
(D)
log10 Eh/10
Answer: (A). Response (C) is also correct. If the error in the method is order p, then Eh = chp and
Eh/10 = c(h/10) . Taking the ratio Eh /Eh/10 = 10p , it follows that p = log10 (Eh /Eh/10 ). Applying the
p
log2 x
logarithmic identity log10 x = log 10 gives the formula in (C).
2

{ Source: JMS }



Q6c–12260 . The integral f(x) = \int_0^x e^{−t^2} dt can also be written as an ODE, \frac{df}{dx} = e^{−x^2} (by the Fundamental Theorem of Calculus). With this in mind, consider two methods for estimating the value f(1):

• Apply Simpson's rule to approximate the integral \int_0^1 e^{−x^2} dx.
• Use the RK4 method to solve \frac{df}{dx} = e^{−x^2} on the interval [0, 1] using initial condition f(0) = 0.

Discretize both problems at the same points x_i = ih for i = 0, 1, . . . , N, with grid spacing h = 1/N. Which of the statements below regarding the accuracy and efficiency of the methods is TRUE?

(A) Both methods have the same accuracy, but Simpson’s rule is less expensive.
(B) Simpson’s rule is more accurate and less expensive.
(C) RK4 is more accurate and less expensive.
(D) Both methods have the same accuracy and cost.

Answer: (A). Both RK4 and Simpson's rule have error that is O(h^4). Each step of RK4 requires 4
function evaluations, for a total cost of 4N . In comparison, Simpson’s rule involves only N + 1 function
evaluations (one per point). So the cost of both is O(N ), but the quadrature approach will be roughly four
times cheaper/faster.
{ Source: Based on Cheney and Kincaid [2], p. 390. }
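
A minimal Matlab sketch comparing the two approaches (N = 10 subintervals is an arbitrary choice, and N must be even for composite Simpson's rule):

g = @(x) exp(-x.^2);
N = 10;  h = 1/N;  x = h*(0:N);

% Composite Simpson's rule: N + 1 function evaluations
S = h/3 * (g(x(1)) + 4*sum(g(x(2:2:end-1))) + 2*sum(g(x(3:2:end-2))) + g(x(end)));

% Classical RK4 for f' = g(x), f(0) = 0: nominally 4 evaluations per step
f = 0;
for i = 1:N
  k1 = g(x(i));
  k2 = g(x(i) + h/2);
  k3 = k2;                    % equals k2 here because the RHS does not depend on f
  k4 = g(x(i) + h);
  f = f + h/6*(k1 + 2*k2 + 2*k3 + k4);
end

fprintf('Simpson: %.10f   RK4: %.10f\n', S, f)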

6d. Systems of First-Order ODEs


Q6d–1261 . Which of the following is a well-posed ODE initial value problem?
(A) \frac{d^2 y}{dx^2} = y^2 − e^y, y(0) = 0
(B) z'' − 4x z' + 3z − x e^x = 4, z(1) = z'(1) = 0
(C) \frac{y''' − y}{y'' + y'} = e^t, y(0) = 1, y'(0) = 2, y''(0) = −3
(D) \frac{d^2 z}{dt^2} + t \frac{dz}{dt} − \frac{1}{1 + z} = 0, z(1) = −1, z'(1) = π
(E) f f'' = 1, f(0) = f'(0) = 3
Answer: (B). Responses (C) and (E) are also correct. But (A) is not well-posed because it is missing an initial condition on \frac{dy}{dx}. And (D) isn't well-posed because the RHS function is discontinuous at the initial value of z = −1.
{ Source: JMS }

Q6d–2262 . A mass–spring problem is described by the ODE y'' + 2y = cos(2t). Which of the following is the equivalent first-order system?

(A) y' = x, x' + 2y = cos(2t)
(B) x' = y, y' = −2y + cos(2t)
(C) This second-order ODE can’t be converted to a first-order system.
Answer: (A).
{ Source: MAH }

Q6d–3263 . Which second-order ODE is equivalent to this first-order system?

x' = y
y' = −2x + y

(A) y'' + 2y' − y = 0
(B) y'' − 2y' − y = 0
(C) y'' + 2y' − y = 0
(D) y'' − y' + 2y = 0
Answer: (D).
{ Source: MAH }

Q6d–4264 . Which second-order ODE is equivalent to this first-order system?

x' = x − 2y
y' = −2x + y

(A) x'' − 2x' − 3x = 0
(B) y'' − 2y' − 3y = 0
(C) y'' + 2y' + 3y = 0
(D) This first-order system can't be converted to a second-order ODE.

Answer: (A). We're going to eliminate y, so differentiate the first equation to get x'' = x' − 2y'. Then substitute for y' from the second equation to get x'' = x' − 2(−2x + y) = x' + 4x − 2y. Finally, replace −2y = x' − x from the first equation, which yields x'' = 2x' + 3x.
{ Source: MAH }

Q6d–5265 . A mass m is attached to a fixed wall with a spring–damper device having spring constant k and damping
parameter c. Our aim is to find the horizontal displacement x = x(t) as a function of time t > 0. Applying
Newton’s second law leads to the second-order ODE

m x'' + c x' + k x = f(t), x(0) = x_0, x'(0) = v_0,

where x_0 denotes the initial displacement, v_0 the initial velocity, and f = f(t) describes external forces acting on the body. Re-write this IVP as a first-order system by using the new variable v(t) = x'(t).

(A) x' = v, m v' + c v + k x = f, x(0) = x_0, v(0) = v_0
(B) v' = x, m v' + c v + k x = f, x(0) = x_0, v(0) = v_0
(C) x' = v, m x' + c x + k v' = f, x(0) = x_0, v(0) = v_0
Answer: (A).
{ Source: MAH }
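
This first-order form is exactly what Matlab's built-in solvers expect; a minimal sketch using ode45 (the values m = 1, c = 0.5, k = 4, f(t) = 0, x0 = 1, v0 = 0 are arbitrary illustrative choices):

m = 1;  c = 0.5;  k = 4;
f = @(t) 0;                                         % no external forcing
rhs = @(t, u) [u(2); (f(t) - c*u(2) - k*u(1))/m];   % u = [x; v], system (A)
[t, u] = ode45(rhs, [0 20], [1; 0]);                % initial state [x0; v0]
plot(t, u(:,1)), xlabel('t'), ylabel('x(t)')        % damped oscillation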

Q6d–6266 . You are given the system of ODEs x' = 3x − 2y and y' = 4y^2 − 7x, along with initial values x(0) = 2 and
y(0) = 1. Using one step of Euler’s method, what is the approximate solution at t = 0.1?
(A) x(0.1) = 4, y(0.1) = −10
(B) x(0.1) = 6, y(0.1) = −9
(C) x(0.1) = 2.4, y(0.1) = 0
(D) x(0.1) = 2.4, y(0.1) = −1
(E) none of the above
Answer: (C). Euler's method with h = 0.1 gives:

x_1 = x_0 + h(3x_0 − 2y_0) = 2 + 0.1(6 − 2) = 2.4
y_1 = y_0 + h(4y_0^2 − 7x_0) = 1 + 0.1(4 − 14) = 0

{ Source: MathQuest [8], Euler’s method and systems of equations }
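
The same Euler step written in vector form in Matlab (a minimal sketch):

h = 0.1;
u = [2; 1];                                      % [x(0); y(0)]
F = @(u) [3*u(1) - 2*u(2); 4*u(2)^2 - 7*u(1)];   % right-hand side of the system
u = u + h*F(u)                                   % one Euler step; returns [2.4; 0]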

Q6d–7267 . You are given the system of ODEs x' = x(−x − 2y + 5) and y' = y(−x − y + 10), along with initial values
x(4.5) = 3 and y(4.5) = 2. What are approximate values of x and y at t = 4?



(A) x(4) = 0, y(4) = −3
(B) x(4) = 6, y(4) = 10
(C) x(4) = 6, y(4) = 7
(D) none of the above
Answer: (D). There's nothing wrong with taking a negative step size, h = −0.5! Euler's method gives:

x_1 = x_0 + h x_0(−x_0 − 2y_0 + 5) = 3 − 0.5(3)(−3 − 4 + 5) = 6
y_1 = y_0 + h y_0(−x_0 − y_0 + 10) = 2 − 0.5(2)(−3 − 2 + 10) = −3

{ Source: MathQuest [8], Euler’s method and systems of equations }

6e. Stability and Implicit Methods


[ nothing here yet ]

SUMMARY STATISTICS:

Total questions: 267 (25 TF, 242 MC)


MC responses: A=65, B=55, C=59, D=56, E=7

Printed: May 25, 2020



References
[1] Richard L. Burden, J. Douglas Faires, and Annette M. Burden. Numerical Analysis. Cengage Learning, 10th
edition, 2015.

[2] Ward Cheney and David Kincaid. Numerical Mathematics and Computing. Brooks/Cole, Pacific Grove, CA,
4th edition, 1999.
[3] Laurene V. Fausett. Applied Numerical Analysis Using MATLAB. Pearson Education, 2nd edition, 2008.
[4] Michael T. Heath. Scientific Computing: An Introductory Survey, revised second edition, volume 80 of Classics
in Applied Mathematics. SIAM, Philadelphia, PA, 2018.
[5] Autar Kaw. Holistic numerical methods: Multiple choice questions. http://mathforcollege.com/nm/assessment_text.html. Accessed 18 March 2019.
[6] Gerald W. Recktenwald. Introduction to Numerical Methods and MATLAB: Implementations and Applications.
Pearson, 2000.
[7] Maria Terrell and Robert Conelly. GoodQuestions project. Department of Mathematics, Cornell University,
Ithaca, NY. http://pi.math.cornell.edu/~GoodQuestions. Accessed 5 April 2019.
[8] Holly Zullo and Kelly Cline. MathQUEST/MathVote: Resources for clickers and classroom voting in collegiate
mathematics. Mathematics Program, Carroll College, Helena, MT. http://mathquest.carroll.edu. Accessed
5 April 2019.

