
J of Astronaut Sci

DOI 10.1007/s40295-016-0091-3

Geometric Calibration of the Orion Optical Navigation Camera using Star Field Images

John A. Christian1 · Lylia Benhacine1 · Jacob Hikes1 · Christopher D'Souza2

© American Astronautical Society 2016

Abstract The Orion Multi-Purpose Crew Vehicle will be capable of autonomously navigating in cislunar space using images of the Earth and Moon. Optical navigation systems, such as the one proposed for Orion, require the ability to precisely relate the observed location of an object in a 2D digital image with the true corresponding line-of-sight direction in the camera's sensor frame. This relationship is governed by the camera's geometric calibration parameters, typically described by a set of five intrinsic parameters and five lens distortion parameters. While pre-flight estimates of these parameters will exist, environmental conditions often necessitate on-orbit recalibration. This calibration will be performed for Orion using an ensemble of star field images. This manuscript provides a detailed treatment of the theory and mathematics that will form the foundation of Orion's on-orbit camera calibration. Numerical results and examples are also presented.

Keywords Camera calibration · Optical navigation · Orion

This work was made possible by NASA under award NNX13AJ25A. An earlier version of this paper
appeared as AAS 16-116 at the 39th Annual AAS Guidance, Navigation, and Control Conference in
Breckenridge, CO.

John A. Christian
john.christian@mail.wvu.edu

1 Department of Mechanical and Aerospace Engineering, West Virginia University, Morgantown, West Virginia, 26506, USA

2 GNC Autonomous Flight Systems Branch, NASA Johnson Space Center, Houston, Texas, 77058, USA

Introduction

The Orion Multi-Purpose Crew Vehicle (MPCV) is NASA's next generation crewed vehicle and will take astronauts to exploration destinations beyond low Earth orbit. As a result, crew safety concerns require that the Orion be capable of returning to Earth independent of ground-based tracking and communication. The navigation aspect of this requirement is being met via optical navigation (OPNAV). Orion will use images of the Earth and Moon to enable the autonomous estimation of the vehicle's translational states while in cislunar space [5].
A critical ingredient for precision OPNAV is high-quality geometric calibration of the camera system used to collect the OPNAV images. By geometric calibration, we mean estimation of the calibration parameters that govern where a physical object in the 3D world will appear in a 2D digital image. The radiometric calibration of the OPNAV camera (which would calibrate things such as the spectral responsivity of each pixel) is not considered in this paper. The parameters estimated in a geometric camera calibration routine can generally be divided into three categories: (1) camera intrinsic parameters, (2) lens distortion parameters, and (3) camera misalignment. The camera intrinsic parameters describe image formation under perfect perspective projection and capture the effect of things such as camera focal length, pixel size, and principal point location. The lens distortion parameters describe how optical system imperfections cause the apparent location of a point source to systematically deviate from its ideal location under perspective projection. The camera misalignment describes the orientation of the camera relative to some reference (in this case, the alignment of the OPNAV camera frame relative to the star tracker frame).
The Orion OPNAV camera will be calibrated on the ground prior to flight. A variety of factors, however, may cause the in-flight calibration parameters to vary from their pre-flight values. These factors include (but are not limited to) vibrations during launch, thermal cycling, and pressure changes. As a result, on-orbit calibration is expected to be necessary to ensure that end-to-end OPNAV performance requirements are met.
On-orbit calibration of the Orion OPNAV camera will be performed using images of star fields. The use of star field images for on-orbit estimation of a camera's geometric calibration parameters is well established [17–19, 21, 24, 28]. In fact, our precise knowledge of star line-of-sight directions often makes the quality of these on-orbit calibrations superior to the pre-flight ground calibrations.
The pre-flight calibrations we refer to here are precision ground calibrations. We feel it necessary to make a strong distinction between this and calibration using checkerboard patterns based on the method presented by Zhang [29] (and as implemented in both MATLAB1 and OpenCV2). While Zhang's method and its derivatives produce a calibration suitable for general computer vision or for creating an aesthetically pleasing undistorted image, they do not yield adequate results for precision OPNAV.

1 Camera Calibration Toolbox for Matlab: http://www.vision.caltech.edu/bouguetj/calib_doc/.

2 OpenCV: http://opencv.org.

This paper provides a detailed treatment of the theory and mathematics that will form the foundation of Orion's on-orbit camera calibration (or re-calibration). We expect these algorithms, or some variant thereof, to be demonstrated on the upcoming Orion Exploration Mission 1 (EM1). The baseline design for EM1 is for the calibration to be performed on the ground (see Fig. 1), although this task will be performed onboard Orion in future missions. The sections that follow provide details for each of the blocks appearing on the right-hand side of Fig. 1.

Camera Model

Pinhole Camera Model (Perfect Perspective Projection)

A typical camera consists of an optical system (usually an assembly of lenses) that collects and focuses light onto a planar detector array. The detector array, commonly a CCD or CMOS detector [7], accumulates charge as photons strike each pixel location and the result may be interpreted as a digital image. Thus, each point in the image corresponds to a line-of-sight direction from which the light detected at that location came. In an ideal camera, the geometric relationship between image plane coordinates and line-of-sight direction is governed by perfect perspective projection. Perfect perspective projection in a camera system is most straightforwardly described by the pinhole camera model [12].

Fig. 1 The camera calibration task consists of many distinct steps and will be performed on the ground
for Orion EM1

Fig. 2 Simple diagram of the pinhole camera model and the image plane. Similar triangles (shown in
blue) relate the 3D position of an observed object and its apparent location in the image plane

The key observation that underlies the pinhole camera model is that the ray of light that passes through the center of an ideal lens is undistorted by the lens. This permits a simple geometric model that may be derived via similar triangles,
\[ x_i = f\,\frac{X_{C_i}}{Z_{C_i}} \quad \text{and} \quad y_i = f\,\frac{Y_{C_i}}{Z_{C_i}}, \tag{1} \]
where $f$ is the focal length, $[x_i, y_i]$ is the 2D coordinate of the i-th observed point in the image, and $\mathbf{e}_{C_i}^T = [X_{C_i} \; Y_{C_i} \; Z_{C_i}]$ is the line-of-sight vector along the i-th direction as expressed in the camera's sensor frame. As a line-of-sight vector, note that $\|\mathbf{e}_{C_i}\| = 1$. Also note that coordinates are expressed in the image plane rather than in the focal plane. The image plane is an imaginary plane that lies in front of the lens by a distance of $f$ and is parallel to the focal plane. Use of the image plane is standard practice in the computer vision community, as it is both more intuitive and simplifies the subsequent mathematics [12]. All of this may be seen graphically in Fig. 2.
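As a concrete illustration of the projection in Eq. 1, here is a minimal sketch in Python with NumPy (the function name and sample values are ours, not from the Orion flight software):

```python
import numpy as np

def pinhole_project(e_C, f):
    """Project a line-of-sight vector, expressed in the camera's sensor
    frame, onto the image plane via Eq. 1 (perfect perspective projection)."""
    X, Y, Z = e_C
    return np.array([f * X / Z, f * Y / Z])

# Example: a star 5 degrees off boresight in the x-direction.
e_C = np.array([np.sin(np.radians(5.0)), 0.0, np.cos(np.radians(5.0))])
x_i, y_i = pinhole_project(e_C, f=1.0)  # f = 1 yields normalized coordinates
```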

Lens Distortion Model

The pinhole camera model describes a system that is not practically realizable. Real lenses suffer from both optical aberrations as well as manufacturing/assembly defects. The former leads to both radial and decentering distortion; the latter to just decentering distortion.

A single ray of light may be traced through an ideal optical system using classic geometrical optics. Assuming relatively small angles, the sine terms in the ray-tracing formulas may be replaced by the first two terms of the Maclaurin series. The result, known as third-order theory, is used to approximate the aberration of a ray from its ideal path. The consequent aberration can be defined by five terms, known as the Seidel sums, which exist for any specified wavelength of light [9, 25].3

3 The Seidel sums describe the classic monochromatic aberrations. There are also chromatic aberrations, which describe imperfections that arise from a lens focusing different wavelengths of light (i.e., different colors of light) onto slightly different points. The present discussion focuses only on the monochromatic aberrations.

The first three Seidel sums (spherical aberration, coma, and astigmatism) describe various types of blurring of an image point. The fourth Seidel sum is curvature of field. Any attempt to correct the first three aberrations will cause the focal surface to become curved. If a flat plane, such as a typical CCD/CMOS detector, replaces this curved focal plane, then either the center of field will be in sharp focus and the periphery will be blurred or vice versa, depending on the placement of the plane. The fifth Seidel sum is radial distortion, which results from non-uniform magnification across the field of view, thereby inducing an apparent displacement in the image plane without a loss of focus. It is impossible to simultaneously correct all five monochromatic aberrations [9].
Radial distortion is the most critical of the five aberrations for precision OPNAV since it alters the apparent location of an object in the image. The perturbation of an object's observed location away from the ideal pinhole camera model due to radial distortion is described by
\[ \delta x_{r_i} = \left(k_1 r_i^2 + k_2 r_i^4 + k_3 r_i^6\right) x_i \quad \text{and} \quad \delta y_{r_i} = \left(k_1 r_i^2 + k_2 r_i^4 + k_3 r_i^6\right) y_i, \]
where $x_i$ and $y_i$ are from Eq. 1 and $r_i^2 = x_i^2 + y_i^2$. Negative values of the coefficients $k_j$ lead to barrel distortion, while positive values of $k_j$ lead to pincushion distortion.
Lenses, no matter how meticulously produced, will not be perfectly centered. Thus, decentering error occurs when the optical axes of lens surfaces are not perfectly aligned or when the detector array is not perfectly orthogonal to the optical axis. The decentering distortion may be analytically derived from first principles, as was first done by Conrady [4] and later clarified by Brown [1],
\[ \delta x_{d_i} = -P_i\left[\left(1 + \frac{2x_i^2}{r_i^2}\right)\sin\phi_0 - \frac{2x_i y_i}{r_i^2}\cos\phi_0\right] \tag{2} \]
\[ \delta y_{d_i} = -P_i\left[\frac{2x_i y_i}{r_i^2}\sin\phi_0 - \left(1 + \frac{2y_i^2}{r_i^2}\right)\cos\phi_0\right], \tag{3} \]
where $\phi_0$ is the angle between the x-axis and the direction of maximum tangential distortion. The profile function, $P_i$, can be expressed as powers of $r_i^2$,
\[ P_i = J_1 r_i^2 + J_2 r_i^4 + J_3 r_i^6 + J_4 r_i^8 + \cdots \]
Making the following change of variables:
\[ p_1 = -J_1\cos\phi_0 \qquad p_2 = -J_1\sin\phi_0 \]
\[ p_3 = J_2/J_1 \qquad p_4 = J_3/J_1 \qquad p_5 = J_4/J_1, \]
we arrive at the standard expression for the decentering distortion [1],
\[ \delta x_{d_i} = \left[1 + p_3 r_i^2 + p_4 r_i^4 + p_5 r_i^6 + \cdots\right]\left[2p_1 x_i y_i + p_2\left(r_i^2 + 2x_i^2\right)\right] \tag{4} \]
\[ \delta y_{d_i} = \left[1 + p_3 r_i^2 + p_4 r_i^4 + p_5 r_i^6 + \cdots\right]\left[p_1\left(r_i^2 + 2y_i^2\right) + 2p_2 x_i y_i\right]. \tag{5} \]
It is common practice to neglect the decentering distortion coefficients p3 and
higher [15]. Although we proceed with this assumption here, we are presently
revisiting this convention as part of our broader research in this area.
The decentering distortion model warrants a few additional observations. First, as discussed by Owen [20] and derived by Christian [3], a pinhole camera with the FPA not perpendicular to the optical axis (detector tip/tilt error) will exhibit decentering error terms containing $x_i^2$, $y_i^2$, and $x_i y_i$. It is apparent that Eqs. 4 and 5 capture these effects, along with an additional radial component that is missed when approaching the problem via the simplified pinhole camera model. Second, decentering distortion also arises from the imperfect centering of the lens surfaces due to practical limitations of the lens manufacturing process. This again creates both radial and tangential distortions.
Brown combined the radial distortion model and the decentering distortion model to create a flexible framework for camera calibration [2]. This approach, now commonly referred to as the Brown distortion model, has become ubiquitous in the computer vision, optics, and photogrammetry communities. Following this convention, we will consider three radial distortion terms ($k_1$, $k_2$, and $k_3$) and two decentering distortion terms ($p_1$ and $p_2$), yielding a total of five coefficients to describe the lens distortion effects. Thus, the final model is as follows,
\[ \begin{bmatrix} x_i' \\ y_i' \end{bmatrix} = \left(1 + k_1 r_i^2 + k_2 r_i^4 + k_3 r_i^6\right)\begin{bmatrix} x_i \\ y_i \end{bmatrix} + \begin{bmatrix} 2p_1 x_i y_i + p_2\left(r_i^2 + 2x_i^2\right) \\ p_1\left(r_i^2 + 2y_i^2\right) + 2p_2 x_i y_i \end{bmatrix}, \tag{6} \]
where $[x_i', y_i']$ is the observed (distorted) location on the image plane, $[x_i, y_i]$ is the ideal pinhole location of that same point, and $r_i^2 = x_i^2 + y_i^2$. The effects of the five lens distortion parameters are illustrated in Fig. 3.
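As a sketch, the five-parameter model of Eq. 6 may be evaluated as follows (Python with NumPy; this is our rendering of the Brown model, not flight code):

```python
import numpy as np

def brown_distort(x, y, k1, k2, k3, p1, p2):
    """Map an ideal pinhole image-plane point (x, y) to its observed
    (distorted) location using the five-parameter Brown model of Eq. 6."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = radial * x + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = radial * y + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d
```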

Conversion to Pixel Coordinates and Camera Intrinsic Parameters

Suppose we have a camera with a pixel pitch of $1/s_x$ in the x-direction and $1/s_y$ in the y-direction (pixels are square if $s_x = s_y$), a focal length of $f$, and a principal point at pixel coordinates $[u_p, v_p]$. Also assume that the skewness is defined by $\alpha$, where $\alpha = 0$ if pixel rows are exactly perpendicular to pixel columns. For such a camera, the location of an observed (distorted) point $[x_i', y_i']$ on the focal plane may be related to the corresponding $[u_i', v_i']$ pixel location in the raw (distorted) image by
\[ \begin{bmatrix} u_i' \\ v_i' \\ 1 \end{bmatrix} = \begin{bmatrix} d_x & \alpha & u_p \\ 0 & d_y & v_p \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_i' \\ y_i' \\ 1 \end{bmatrix}, \tag{7} \]
where $d_x = f s_x$ and $d_y = f s_y$. We choose to use $d_x$ and $d_y$ because the quantities $s_x$ and $s_y$ are not independently observable from the focal length. The $3 \times 3$ matrix in Eq. 7 is typically called the camera calibration matrix.

In summary, the conversion from image plane coordinates to pixel coordinates is described by the five intrinsic parameters: $d_x$, $d_y$, $\alpha$, $u_p$, and $v_p$.

Fig. 3 Visualization of the warping of a grid caused by the lens distortion parameters. The five panels show: no distortion; pincushion distortion ($k_1 > 0$); barrel distortion ($k_1 < 0$); decentering distortion ($p_1 > 0$); and decentering distortion ($p_2 > 0$)
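A small sketch of Eq. 7 follows, assuming NumPy. The $d_x$, $d_y$, and $\alpha$ values below are the simulated "Actual Value" entries from Table 1, and the principal point is placed at the center of a 2592 × 2048 pixel array; these are illustrative assumptions, not flight values:

```python
import numpy as np

# Intrinsics: dx, dy, alpha taken from Table 1's simulated values; the
# principal point (up, vp) is assumed at the center of a 2592 x 2048 array.
dx, dy, alpha = 7308.2, 7307.2, -1.4
up, vp = 1296.0, 1024.0
K = np.array([[dx, alpha, up],
              [0.0,  dy,  vp],
              [0.0, 0.0, 1.0]])  # camera calibration matrix of Eq. 7

def image_plane_to_pixels(x_d, y_d):
    """Eq. 7: map a distorted image-plane point to raw pixel coordinates."""
    u, v, w = K @ np.array([x_d, y_d, 1.0])
    return u / w, v / w
```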

Image Processing and Star Centroiding

Accurately finding the locations of observed stars in an image is a multi-step image processing task. The first step is dark frame removal. The dark frame describes the fixed pattern noise (FPN), a random intensity bias for each pixel that occurs even when no photons are striking the focal plane. The intensity bias from FPN persists from image to image.
The Orion OPNAV camera does not have a mechanical shutter. Therefore, it is not possible to close the shutter and collect a truly "dark" image. Instead, we attempt to construct a dark frame from a small set of images collected at random attitudes. Proceeding under the assumption that each pixel is illuminated by a star of discernible magnitude in fewer than half of the images, we approximate the dark frame by finding the median intensity value for each individual pixel over the collection of images. To remove the FPN from any image, we simply subtract the dark frame value at each pixel location.
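A minimal sketch of this dark frame construction and removal, assuming the images arrive as NumPy arrays of equal size:

```python
import numpy as np

def build_dark_frame(images):
    """Approximate the fixed pattern noise as the per-pixel median over a
    small stack of star field images taken at random attitudes (assumes
    each pixel is star-lit in fewer than half of the images)."""
    return np.median(np.stack(images, axis=0), axis=0)

def remove_fpn(image, dark_frame):
    """Subtract the dark frame; clip at zero so intensities stay physical."""
    return np.clip(image.astype(float) - dark_frame, 0.0, None)
```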
With the FPN removed, our task is now to find stars in each image. Due to a variety of factors, the light from a single star (which is very nearly a point source) will spread out over multiple pixels, with a shape described by the Point Spread Function (PSF). For the Orion OPNAV camera, the shape and size of the PSF is expected to be dominated by a small amount of intentional defocusing.

There are a number of methods for computing star centroids, including center of intensity (COI), convolution, and model fitting via non-linear least squares. Although the non-linear least squares approach usually provides the best performance in scenarios where the a priori PSF model is well understood [8, 22], variants of the COI approach are very popular due to their good performance, robustness, speed, and ease of implementation [10, 27]. Our present design uses a COI-based technique, but this may change in the future.
The OPNAV camera is to be mounted on a common optical bench with a pair of star trackers. Each of the star trackers will produce an attitude quaternion, which is an estimate of the star tracker attitude with respect to an inertial coordinate system (usually J2000). Since the orientation of the OPNAV camera with respect to the star trackers is well known (up to the very small misalignment that is estimated as part of the calibration process), the star tracker attitude may be used to find the attitude of the camera. Using this a priori image attitude makes the star identification process easy and removes the need for a lost-in-space star identification algorithm.
To begin the centroiding, the a priori image attitude is used to draw a square region of interest (ROI) of 31 × 31 pixels centered at each expected star location. Within this ROI, the brightest point is found. We now recenter about this bright point and create a mask by finding all the pixels within a radius of three pixels that are above a specified threshold. This will result in a small mask that will change shape for each star. The star centroid may then be computed by taking the weighted center of intensity within the mask,
\[ u_i' = \frac{1}{I_{tot}}\sum_{M_i} u\,I'(u,v) \quad \text{and} \quad v_i' = \frac{1}{I_{tot}}\sum_{M_i} v\,I'(u,v), \tag{8} \]
where
\[ I_{tot} = \sum_{M_i} I'(u,v) \tag{9} \]
and $M_i$ is the set of pixels belonging to the i-th star's mask and $I'$ is the distorted image after dark frame removal.
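The masked center-of-intensity computation of Eqs. 8 and 9 might look as follows. This is a sketch: the 31 × 31 ROI and three-pixel mask radius follow the text, while the threshold value is an arbitrary placeholder:

```python
import numpy as np

def coi_centroid(I, u_pred, v_pred, half=15, radius=3.0, threshold=100.0):
    """Masked center-of-intensity centroid (Eqs. 8-9). (u_pred, v_pred) is
    the predicted star location from the a priori attitude; I is the image
    after dark frame removal. half=15 gives the 31x31 ROI of the text."""
    r0, c0 = int(round(v_pred)), int(round(u_pred))
    rows = slice(max(r0 - half, 0), r0 + half + 1)
    cols = slice(max(c0 - half, 0), c0 + half + 1)
    roi = I[rows, cols]
    # Recenter on the brightest pixel in the ROI
    br, bc = np.unravel_index(np.argmax(roi), roi.shape)
    br += rows.start
    bc += cols.start
    # Mask: pixels within three pixels of the bright point and above threshold
    vv, uu = np.indices(I.shape)
    M = ((vv - br) ** 2 + (uu - bc) ** 2 <= radius ** 2) & (I > threshold)
    I_tot = I[M].sum()                   # Eq. 9
    u_c = (uu[M] * I[M]).sum() / I_tot   # Eq. 8
    v_c = (vv[M] * I[M]).sum() / I_tot
    return u_c, v_c
```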
By using expected star locations to initiate the centroiding process, we limit ourselves to using only those stars that exist in our catalog (the SKY2000 catalog [16]). While it is also possible to perform a calibration with uncataloged stars using Eichhorn's overlapping plate technique [6, 22] or something similar, we do not do so here because our star field images are not all guaranteed to have substantial regions covering the same portion of sky.

Estimation of Camera Calibration Parameters

To begin the on-orbit calibration, we will have an initial guess of the camera intrinsic parameters and the lens distortion parameters from ground-based calibration or previous on-orbit calibrations. While these quantities may change over time, and it is important to estimate this change, the changes are likely to be small. Thus, the initial guesses are expected to be relatively good.

Because the problem is nonlinear, something more sophisticated than a linear least squares estimator is required to accurately estimate the ten camera calibration parameters. We choose to use the iterative Levenberg-Marquardt algorithm (LMA) [23], which is the standard optimization algorithm used by others performing camera calibration (e.g., [29]).

Notation and Measurement Model for a Single Star Centroid from the k-th Image

Suppose that on the j-th iteration of the LMA algorithm we define the attitude correction of the k-th image as
\[ {}^{(j)}\mathbf{T}_{C_k}^{I} = {}^{(j)}\mathbf{T}_{C}^{ST_k}\,\mathbf{T}_{ST_k}^{I}, \tag{10} \]
where $\mathbf{T}_{ST_k}^{I}$ is the rotation matrix from the inertial frame to the star tracker (ST) frame for the k-th image and ${}^{(j)}\mathbf{T}_{C}^{ST_k}$ is the estimate of the OPNAV-ST alignment at the j-th LMA iteration for the k-th image (we make this distinction because it is possible for each image to have a different OPNAV-ST alignment). That said, the OPNAV camera and star trackers will be mounted on a common optical bench for Orion, so the misalignment between these sensors is not expected to change appreciably while on orbit.
At each LMA iteration, the OPNAV-ST alignment is updated according to
\[ {}^{(j+1)}\mathbf{T}_{C}^{ST_k} = {}^{(j)}\delta\mathbf{T}_k\,{}^{(j)}\mathbf{T}_{C}^{ST_k}, \tag{11} \]
where the rotation matrix ${}^{(j)}\delta\mathbf{T}_k$ is generally small and may be expressed in terms of the corresponding $3\times1$ rotation vector, ${}^{(j)}\delta\boldsymbol{\theta}_k$. Further, since the difference in ${}^{(0)}\mathbf{T}_{C}^{ST_k}$ and ${}^{(j)}\mathbf{T}_{C}^{ST_k}$ describes the change in our estimate of the OPNAV-ST alignment from the calibration process, it should be immediately evident that this difference may be used to update the onboard OPNAV-ST misalignment.
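A sketch of the alignment update in Eq. 11, assuming NumPy. It uses the exact rotation-vector-to-rotation-matrix map, which reduces to $\mathbf{I} - [\delta\boldsymbol{\theta}\times]$ for small angles; using the exact map is an implementation choice on our part:

```python
import numpy as np

def skew(w):
    """Cross-product matrix [w x] such that skew(w) @ v = w x v."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def update_alignment(T_C_ST, dtheta):
    """Eq. 11: left-multiply the current OPNAV-to-ST alignment estimate by
    the small rotation built from the 3x1 rotation vector dtheta."""
    angle = np.linalg.norm(dtheta)
    if angle < 1e-12:
        return T_C_ST
    a = dtheta / angle
    # Frame-rotation matrix; to first order this is I - [dtheta x]
    dT = (np.cos(angle) * np.eye(3)
          + (1.0 - np.cos(angle)) * np.outer(a, a)
          - np.sin(angle) * skew(a))
    return dT @ T_C_ST
```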
Therefore, we can define a state vector for the k-th image as (where we temporarily drop the LMA iteration notation to keep things readable),
\[ \boldsymbol{\phi}_k = \begin{bmatrix} \mathbf{k}^T & \boldsymbol{\xi}^T & \delta\boldsymbol{\theta}_k^T \end{bmatrix}^T, \tag{12} \]
where $\mathbf{k}$ and $\boldsymbol{\xi}$ contain the camera intrinsic parameters and lens distortion parameters, respectively,
\[ \mathbf{k} = \begin{bmatrix} d_x & \alpha & d_y & u_p & v_p \end{bmatrix}^T \tag{13} \]
\[ \boldsymbol{\xi} = \begin{bmatrix} k_1 & k_2 & k_3 & p_1 & p_2 \end{bmatrix}^T. \tag{14} \]
Thus, we could also write the measurement model of a single star centroid (a $2\times1$ vector) as
\[ [\mathbf{u}_i']_k = \mathbf{g}\left(\mathbf{e}_{I_i}, \boldsymbol{\phi}_k\right), \tag{15} \]
where $[\mathbf{u}_i']_k$ is the i-th distorted star centroid location from the k-th image in pixel coordinates.

Notation and Measurement Model for Star Centroids from an Ensemble of Images

Because the LMA algorithm is a batch estimation routine and will process all of the images together, we can define a state vector for an entire image ensemble of m images. There are two ways to do this, which will be compared throughout the remainder of this paper. First, we may carry a separate alignment state for each image, yielding a $(10+3m)\times1$ state vector,
\[ \boldsymbol{\beta}_1 = \begin{bmatrix} \mathbf{k}^T & \boldsymbol{\xi}^T & \delta\boldsymbol{\theta}_1^T & \cdots & \delta\boldsymbol{\theta}_m^T \end{bmatrix}^T. \tag{16} \]
The alternative approach is to only carry a single misalignment for all images in the ensemble, yielding a $13\times1$ state vector,
\[ \boldsymbol{\beta}_2 = \begin{bmatrix} \mathbf{k}^T & \boldsymbol{\xi}^T & \delta\boldsymbol{\theta}^T \end{bmatrix}^T. \tag{17} \]
Regardless of the form of $\boldsymbol{\beta}$ chosen, we define a measurement vector $\tilde{\mathbf{y}}$ by stacking all of the observed star centroid locations ($n_k$ stars in the k-th image) from all the images (total of m images) into a single vector,
\[ \tilde{\mathbf{y}} = \begin{bmatrix} \left[\tilde{\mathbf{u}}_1'^T \cdots \tilde{\mathbf{u}}_{n_1}'^T\right]_1 & \cdots & \left[\tilde{\mathbf{u}}_1'^T \cdots \tilde{\mathbf{u}}_{n_m}'^T\right]_m \end{bmatrix}^T. \tag{18} \]

We may also stack the measurement model (Eq. 15) to get
\[ \mathbf{y} = \begin{bmatrix} \begin{bmatrix} \mathbf{u}_1' \\ \vdots \\ \mathbf{u}_{n_1}' \end{bmatrix}_1 \\ \vdots \\ \begin{bmatrix} \mathbf{u}_1' \\ \vdots \\ \mathbf{u}_{n_m}' \end{bmatrix}_m \end{bmatrix} = \begin{bmatrix} \begin{bmatrix} \mathbf{g}\left(\mathbf{e}_{I_1}, \boldsymbol{\phi}_1\right) \\ \vdots \\ \mathbf{g}\left(\mathbf{e}_{I_{n_1}}, \boldsymbol{\phi}_1\right) \end{bmatrix}_1 \\ \vdots \\ \begin{bmatrix} \mathbf{g}\left(\mathbf{e}_{I_1}, \boldsymbol{\phi}_m\right) \\ \vdots \\ \mathbf{g}\left(\mathbf{e}_{I_{n_m}}, \boldsymbol{\phi}_m\right) \end{bmatrix}_m \end{bmatrix} \;\longrightarrow\; \mathbf{y} = \mathbf{u}' = \mathbf{h}\left(\mathbf{e}_I, \boldsymbol{\beta}\right) \tag{19} \]

Implementation of Levenberg-Marquardt Algorithm (LMA) Routine

As mentioned above, we have chosen to use the iterative Levenberg-Marquardt algorithm (LMA) to find the optimal estimate of $\boldsymbol{\beta}$ (the vector from Eq. 16 or Eq. 17 containing the five camera intrinsic parameters, the five lens distortion parameters, and the alignment correction/corrections). The LMA attempts to solve the following optimization problem to produce the best possible estimate of $\boldsymbol{\beta}$:
\[ \text{Minimize } J(\boldsymbol{\beta}) = \left\|\tilde{\mathbf{y}} - \mathbf{h}\left(\mathbf{e}_I, \boldsymbol{\beta}\right)\right\|^2, \tag{20} \]
where $\tilde{\mathbf{y}}$ is the vector of all the measured centroids from all the images (see Eq. 18) and $\mathbf{h}(\mathbf{e}_I, \boldsymbol{\beta})$ is the corresponding nonlinear measurement model (see Eq. 19). Now, let us quickly define the Jacobian at the j-th iteration of the LMA algorithm as
\[ {}^{(j)}\mathbf{J} = \frac{\partial \mathbf{h}\left(\mathbf{e}_I, {}^{(j)}\boldsymbol{\beta}\right)}{\partial\,{}^{(j)}\boldsymbol{\beta}}. \tag{21} \]
The equations to compute each of the entries in $\mathbf{J}$ are presented in the following subsection.
The LMA is a member of the 'trust region' family of optimization algorithms. First developed in 1944 [11] (and then rediscovered in 1963 [14]), the LMA is widely used and has proven to be remarkably robust. The objective function is iteratively approximated by a quadratic surface which restricts the size of each correction step and keeps it within a 'trust region' around the current iterate. If we linearize the cost function in Eq. 20 about the current iterate and take a steepest descent approach to finding the solution, we would typically arrive at an update to the state estimate described by the normal equations (the classic solution to the least squares problem),
\[ \mathbf{J}^T\mathbf{J}\,\delta\mathbf{x} = \mathbf{J}^T\delta\mathbf{y}, \tag{22} \]
where $\mathbf{J}$ is the Jacobian, $\delta\mathbf{x}$ is the state estimate update, and $\delta\mathbf{y}$ is the current measurement residual. To compact notation, define $\mathbf{N}$ as
\[ \mathbf{N} = \mathbf{J}^T\mathbf{J}, \tag{23} \]
where we also observe that $\mathbf{N}$ is an approximation of the Hessian. Thus, substituting $\mathbf{N}$ into Eq. 22, we obtain
\[ \mathbf{N}\,\delta\mathbf{x} = \mathbf{J}^T\delta\mathbf{y}. \tag{24} \]
Depending on the topology of the cost function and the initial guess, however, iteratively solving the linearized normal equations can be quite slow and may exhibit a variety of convergence difficulties. Thus, the LMA modifies this approach by introducing the tuning parameter $\lambda$, which is chosen to ensure that the solution moves strongly toward a minimum,
\[ \left[\mathbf{N} + \lambda\,\mathrm{diag}\left(\mathbf{N}\right)\right]\delta\mathbf{x} = \mathbf{J}^T\delta\mathbf{y}, \tag{25} \]
where $\mathrm{diag}(\mathbf{N})$ is a square matrix the same size as $\mathbf{N}$ that contains only the diagonal elements of $\mathbf{N}$ and is zero elsewhere.
Although there are many variants of the Levenberg-Marquardt algorithm, we choose the following technique (modeled after [23]):

1. Get an initial guess for $\mathbf{k}$ and $\boldsymbol{\xi}$ from preflight calibrations or previous on-orbit calibrations.
2. Collect m star field images at known attitudes.
3. Construct an initial guess of the Jacobian, ${}^{(0)}\mathbf{J}$, from Eq. 21 and compute ${}^{(0)}\mathbf{N}$ from Eq. 23.
4. Find the mean value of the diagonal entries of $\mathbf{N}$ and define this as $\bar{n}$, then set ${}^{(0)}\lambda = 0.001\,\bar{n}$.
5. Solve the following linear system for $\boldsymbol{\delta}$:
\[ \left[{}^{(j)}\mathbf{N} + {}^{(j)}\lambda\,\mathrm{diag}\left({}^{(j)}\mathbf{N}\right)\right]\boldsymbol{\delta} = {}^{(j)}\mathbf{J}^T\left[\tilde{\mathbf{y}} - \mathbf{h}\left(\mathbf{e}_I, {}^{(j)}\boldsymbol{\beta}\right)\right]. \tag{26} \]
6. Compute $\boldsymbol{\gamma} = {}^{(j)}\boldsymbol{\beta} + \boldsymbol{\delta}$, and compare the residuals $\|\tilde{\mathbf{y}} - \mathbf{h}(\mathbf{e}_I, {}^{(j)}\boldsymbol{\beta})\|$ and $\|\tilde{\mathbf{y}} - \mathbf{h}(\mathbf{e}_I, \boldsymbol{\gamma})\|$. Then do one of the following:
   (a) If $\boldsymbol{\gamma}$ produces a reduction in the measurement residuals compared with ${}^{(j)}\boldsymbol{\beta}$, then $\boldsymbol{\gamma}$ constitutes an improvement to the estimate and we set ${}^{(j+1)}\boldsymbol{\beta} = \boldsymbol{\gamma}$. Then reduce $\lambda$ by an order of magnitude by setting ${}^{(j+1)}\lambda = {}^{(j)}\lambda/10$.
   (b) If $\boldsymbol{\gamma}$ produces a larger measurement residual compared with ${}^{(j)}\boldsymbol{\beta}$, then increase $\lambda$ by an order of magnitude by setting ${}^{(j)}\lambda = 10\,{}^{(j)}\lambda$, return to Step 5, and try again.
7. Continue iterating until the residual $\|\tilde{\mathbf{y}} - \mathbf{h}(\mathbf{e}_I, {}^{(j)}\boldsymbol{\beta})\| < \mathrm{tol}$ or a maximum number of iterations is reached.

The end result is the best estimate of $\boldsymbol{\beta}$, which includes all the camera calibration parameters and the updated OPNAV-ST alignment/alignments.
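Below is a compact Python sketch of steps 3 through 7, assuming the stacked measurement model h and its Jacobian are supplied by the calibration code (the function and variable names are ours):

```python
import numpy as np

def levenberg_marquardt(beta0, y_tilde, h, jacobian, tol=1e-8, max_iter=50):
    """Sketch of the LMA routine in steps 3-7 above (after [23]).
    h(beta) returns the stacked predicted centroids; jacobian(beta)
    returns dh/dbeta, both from the calibration measurement model."""
    beta = beta0.copy()
    J = jacobian(beta)                                  # Step 3
    N = J.T @ J
    lam = 0.001 * np.mean(np.diag(N))                   # Step 4
    for _ in range(max_iter):
        resid = y_tilde - h(beta)
        if np.linalg.norm(resid) < tol:                 # Step 7
            break
        J = jacobian(beta)
        N = J.T @ J
        # Step 5: solve [N + lam*diag(N)] delta = J^T [y - h(beta)]
        delta = np.linalg.solve(N + lam * np.diag(np.diag(N)), J.T @ resid)
        gamma = beta + delta                            # Step 6
        if np.linalg.norm(y_tilde - h(gamma)) < np.linalg.norm(resid):
            beta, lam = gamma, lam / 10.0               # Step 6a: accept
        else:
            lam *= 10.0                                 # Step 6b: reject, retry
    return beta
```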

Calculation of Jacobian Matrix at each LMA Iteration

This section presents the equations for all the terms in the Jacobian matrix required for the LMA algorithm. The Jacobian is defined as either
\[ \mathbf{J}_1 = \frac{\partial \mathbf{h}(\mathbf{e}_I, \boldsymbol{\beta}_1)}{\partial \boldsymbol{\beta}_1} = \begin{bmatrix}
\frac{\partial [\mathbf{u}_1']_1}{\partial \mathbf{k}} & \frac{\partial [\mathbf{u}_1']_1}{\partial \boldsymbol{\xi}} & \frac{\partial [\mathbf{u}_1']_1}{\partial \delta\boldsymbol{\theta}_1} & \mathbf{0}_{2\times3} & \cdots & \mathbf{0}_{2\times3} \\
\frac{\partial [\mathbf{u}_2']_1}{\partial \mathbf{k}} & \frac{\partial [\mathbf{u}_2']_1}{\partial \boldsymbol{\xi}} & \frac{\partial [\mathbf{u}_2']_1}{\partial \delta\boldsymbol{\theta}_1} & \mathbf{0}_{2\times3} & \cdots & \mathbf{0}_{2\times3} \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{\partial [\mathbf{u}_1']_2}{\partial \mathbf{k}} & \frac{\partial [\mathbf{u}_1']_2}{\partial \boldsymbol{\xi}} & \mathbf{0}_{2\times3} & \frac{\partial [\mathbf{u}_1']_2}{\partial \delta\boldsymbol{\theta}_2} & \cdots & \mathbf{0}_{2\times3} \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{\partial [\mathbf{u}_{n_m}']_m}{\partial \mathbf{k}} & \frac{\partial [\mathbf{u}_{n_m}']_m}{\partial \boldsymbol{\xi}} & \mathbf{0}_{2\times3} & \mathbf{0}_{2\times3} & \cdots & \frac{\partial [\mathbf{u}_{n_m}']_m}{\partial \delta\boldsymbol{\theta}_m}
\end{bmatrix} \tag{27} \]
or
\[ \mathbf{J}_2 = \frac{\partial \mathbf{h}(\mathbf{e}_I, \boldsymbol{\beta}_2)}{\partial \boldsymbol{\beta}_2} = \begin{bmatrix}
\frac{\partial [\mathbf{u}_1']_1}{\partial \mathbf{k}} & \frac{\partial [\mathbf{u}_1']_1}{\partial \boldsymbol{\xi}} & \frac{\partial [\mathbf{u}_1']_1}{\partial \delta\boldsymbol{\theta}} \\
\frac{\partial [\mathbf{u}_2']_1}{\partial \mathbf{k}} & \frac{\partial [\mathbf{u}_2']_1}{\partial \boldsymbol{\xi}} & \frac{\partial [\mathbf{u}_2']_1}{\partial \delta\boldsymbol{\theta}} \\
\vdots & \vdots & \vdots \\
\frac{\partial [\mathbf{u}_{n_m}']_m}{\partial \mathbf{k}} & \frac{\partial [\mathbf{u}_{n_m}']_m}{\partial \boldsymbol{\xi}} & \frac{\partial [\mathbf{u}_{n_m}']_m}{\partial \delta\boldsymbol{\theta}}
\end{bmatrix}, \tag{28} \]
where $[\mathbf{u}_i']_k$ describes the i-th star measurement in the k-th image. The partials are as follows (and have been numerically checked via forward finite differencing):
\[ \frac{\partial \mathbf{u}_i'}{\partial \mathbf{k}} = \begin{bmatrix} x_i' & y_i' & 0 & 1 & 0 \\ 0 & 0 & y_i' & 0 & 1 \end{bmatrix} \in \mathbb{R}^{2\times5}, \tag{29} \]
\[ \frac{\partial \mathbf{u}_i'}{\partial \boldsymbol{\xi}} = \frac{\partial \mathbf{u}_i'}{\partial \mathbf{x}_i'}\frac{\partial \mathbf{x}_i'}{\partial \boldsymbol{\xi}} \in \mathbb{R}^{2\times5}, \tag{30} \]
\[ \frac{\partial \mathbf{u}_i'}{\partial \delta\boldsymbol{\theta}_k} = \frac{\partial \mathbf{u}_i'}{\partial \mathbf{x}_i'}\frac{\partial \mathbf{x}_i'}{\partial \mathbf{x}_i}\frac{\partial \mathbf{x}_i}{\partial \mathbf{e}_i}\frac{\partial \mathbf{e}_i}{\partial \delta\boldsymbol{\theta}_k} \in \mathbb{R}^{2\times3}. \tag{31} \]
The two partials that make up Eq. 30 are as follows:
\[ \frac{\partial \mathbf{u}_i'}{\partial \mathbf{x}_i'} = \begin{bmatrix} d_x & \alpha \\ 0 & d_y \end{bmatrix}, \tag{32} \]
\[ \frac{\partial \mathbf{x}_i'}{\partial \boldsymbol{\xi}} = \begin{bmatrix} x_i r_i^2 & x_i r_i^4 & x_i r_i^6 & 2x_i y_i & r_i^2 + 2x_i^2 \\ y_i r_i^2 & y_i r_i^4 & y_i r_i^6 & r_i^2 + 2y_i^2 & 2x_i y_i \end{bmatrix}. \tag{33} \]
Likewise, the remaining partials in Eq. 31 are as follows:
\[ \frac{\partial \mathbf{x}_i}{\partial \mathbf{e}_i} = \begin{bmatrix} 1/Z_{C_i} & 0 & -X_{C_i}/Z_{C_i}^2 \\ 0 & 1/Z_{C_i} & -Y_{C_i}/Z_{C_i}^2 \end{bmatrix}, \tag{34} \]
\[ \frac{\partial \mathbf{e}_i}{\partial \delta\boldsymbol{\theta}_k} = \mathbf{T}_C^I\left[\mathbf{e}_i\times\right], \tag{35} \]
\[ \frac{\partial \mathbf{x}_i'}{\partial \mathbf{x}_i} = \left(1 + k_1 r_i^2 + k_2 r_i^4 + k_3 r_i^6\right)\mathbf{I}_{2\times2} + \mathbf{x}_i\left[k_1\frac{\partial r_i^2}{\partial \mathbf{x}_i} + k_2\frac{\partial r_i^4}{\partial \mathbf{x}_i} + k_3\frac{\partial r_i^6}{\partial \mathbf{x}_i}\right] + \begin{bmatrix} 2p_1 y_i + 4p_2 x_i & 2p_1 x_i \\ 2p_2 y_i & 4p_1 y_i + 2p_2 x_i \end{bmatrix} + \begin{bmatrix} p_2 \\ p_1 \end{bmatrix}\frac{\partial r_i^2}{\partial \mathbf{x}_i}, \tag{36} \]
where
\[ \frac{\partial r_i^2}{\partial \mathbf{x}_i} = 2\begin{bmatrix} x_i & y_i \end{bmatrix}, \quad \frac{\partial r_i^4}{\partial \mathbf{x}_i} = 2r_i^2\frac{\partial r_i^2}{\partial \mathbf{x}_i}, \quad \text{and} \quad \frac{\partial r_i^6}{\partial \mathbf{x}_i} = 3\left(r_i^2\right)^2\frac{\partial r_i^2}{\partial \mathbf{x}_i}. \tag{37} \]
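The forward finite differencing check mentioned above might be implemented as follows (a sketch, assuming NumPy; the function names are illustrative):

```python
import numpy as np

def forward_difference_jacobian(h, beta, eps=1e-7):
    """Forward finite-difference approximation of dh/dbeta, useful for
    verifying the analytic partials of Eqs. 29-37 (the numerical check
    mentioned in the text). h is the stacked measurement model."""
    y0 = h(beta)
    J = np.zeros((y0.size, beta.size))
    for j in range(beta.size):
        beta_pert = beta.copy()
        beta_pert[j] += eps
        J[:, j] = (h(beta_pert) - y0) / eps
    return J

# Example check against an analytic Jacobian (names are illustrative):
# assert np.allclose(forward_difference_jacobian(h, beta),
#                    analytic_jacobian(beta), rtol=1e-4, atol=1e-6)
```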

Calculation of OPNAV Camera to Star Tracker Alignment

The purpose of the alignment portion of the calibration is to improve the estimate of the orientation of the OPNAV camera relative to the star tracker, $\mathbf{T}_C^{ST}$. For the second version of the state vector, $\boldsymbol{\beta}_2$, there is only one alignment correction for the entire image ensemble and $\mathbf{T}_C^{ST}$ is directly updated with each LMA iteration. Thus, it is only the first version of the state vector, $\boldsymbol{\beta}_1$, that requires additional discussion.

Recall from Eq. 16 that $\boldsymbol{\beta}_1$ contains a separate alignment correction for each image in the ensemble. As a result, each star field image will have its own unique estimate of $\mathbf{T}_C^{ST_k}$ (where the subscript k has been reintroduced to denote the k-th image) and we obtain m redundant estimates of the quantity $\mathbf{T}_C^{ST}$. These redundant attitude estimates are combined via the quaternion averaging approach developed by Markley et al. [13].
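A sketch of the quaternion averaging of Markley et al. [13], which takes the average attitude as the eigenvector, belonging to the largest eigenvalue, of the weighted sum of quaternion outer products (assuming NumPy; quaternion sign conventions do not matter here since $\mathbf{q}\mathbf{q}^T$ is insensitive to the sign of $\mathbf{q}$):

```python
import numpy as np

def average_quaternions(quats, weights=None):
    """Quaternion averaging per Markley et al. [13]. quats is an (m, 4)
    array of unit quaternions; returns the average unit quaternion."""
    q = np.asarray(quats, dtype=float)
    w = np.ones(len(q)) if weights is None else np.asarray(weights, dtype=float)
    M = np.zeros((4, 4))
    for wi, qi in zip(w, q):
        M += wi * np.outer(qi, qi)   # sign of each qi does not matter
    eigenvalues, eigenvectors = np.linalg.eigh(M)
    return eigenvectors[:, -1]       # eigh sorts eigenvalues ascending
```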

Observability Considerations

In many cases, not all five of the camera intrinsic parameters, five of the lens distortion parameters, and m image attitudes are simultaneously observable in practice. Further, some terms may have such a negligible effect on the resulting distortion map that they can be neglected with no discernible impact on the calibrated line-of-sight direction corresponding to each pixel.

We now focus on the most important observability limitation. The pointing error associated with each image and the principal point coordinates are highly correlated, and it is often not practical to estimate both. The typical solution is to assume that the principal point is fixed. To see this, we consider a system with no lens distortion and focus on perturbations in just misalignment and principal point.
Suppose we have an imaging system with a misalignment defined by the small-angle transformation $\delta\boldsymbol{\theta}^T = [\delta\theta_1 \; \delta\theta_2 \; \delta\theta_3]$, such that each line-of-sight vector in the camera's sensor frame becomes
\[ \mathbf{e}_{C_i}' \approx \left(\mathbf{I}_{3\times3} - [\delta\boldsymbol{\theta}\times]\right)\mathbf{e}_{C_i}, \tag{38} \]
where $\mathbf{e}_{C_i}^T = [e_{1_i} \; e_{2_i} \; e_{3_i}]$. As a result, it is easy to see that
\[ \delta\mathbf{e}_i = \mathbf{e}_{C_i}' - \mathbf{e}_{C_i} \approx -[\delta\boldsymbol{\theta}\times]\,\mathbf{e}_{C_i}, \tag{39} \]
\[ \delta\mathbf{e}_i = \begin{bmatrix} \delta e_{1_i} \\ \delta e_{2_i} \\ \delta e_{3_i} \end{bmatrix} \approx \begin{bmatrix} e_{2_i}\delta\theta_3 - e_{3_i}\delta\theta_2 \\ e_{3_i}\delta\theta_1 - e_{1_i}\delta\theta_3 \\ e_{1_i}\delta\theta_2 - e_{2_i}\delta\theta_1 \end{bmatrix}. \tag{40} \]
Now, momentarily assuming that $\alpha = 0$ to focus on the observability problem of interest, the conversion from line-of-sight to pixel coordinates may be rewritten as
\[ u_i = d_x x_i + u_p = d_x\frac{e_{1_i}}{e_{3_i}} + u_p, \tag{41} \]
\[ v_i = d_y y_i + v_p = d_y\frac{e_{2_i}}{e_{3_i}} + v_p. \tag{42} \]
Taking the variations of this with respect to $\mathbf{e}_{C_i}$, $u_p$, and $v_p$,
\[ \delta u_i = d_x\frac{1}{e_{3_i}}\delta e_{1_i} - d_x\frac{e_{1_i}}{e_{3_i}^2}\delta e_{3_i} + \delta u_p, \tag{43} \]
\[ \delta v_i = d_y\frac{1}{e_{3_i}}\delta e_{2_i} - d_y\frac{e_{2_i}}{e_{3_i}^2}\delta e_{3_i} + \delta v_p. \tag{44} \]
Finally, if $\mathbf{e}_{C_i}$ is perturbed due to misalignment, then we may substitute Eq. 40 into the above,
\[ \delta u_i \approx \delta u_p - d_x\,\delta\theta_2 + d_x\left(\frac{e_{1_i}e_{2_i}}{e_{3_i}^2}\delta\theta_1 - \frac{e_{1_i}^2}{e_{3_i}^2}\delta\theta_2\right) + d_x\frac{e_{2_i}}{e_{3_i}}\delta\theta_3, \tag{45} \]
\[ \delta v_i \approx \delta v_p + d_y\,\delta\theta_1 + d_y\left(\frac{e_{2_i}^2}{e_{3_i}^2}\delta\theta_1 - \frac{e_{1_i}e_{2_i}}{e_{3_i}^2}\delta\theta_2\right) - d_y\frac{e_{1_i}}{e_{3_i}}\delta\theta_3. \tag{46} \]
To proceed, consider the three terms on the right-hand side of Eq. 45 and Eq. 46, beginning with the middle term. The Orion OPNAV camera has a full FOV of around 16 × 20 degrees. Therefore, the maximum possible values of the line-of-sight component ratios are $e_{1_i}/e_{3_i} = \tan(10°) = 0.1763$ and $e_{2_i}/e_{3_i} = \tan(8°) = 0.1405$, and values this large only occur when stars are at the very edge of the image (they are generally much smaller). The middle term contains the squares and products of these ratios, which are very small compared to one, making it negligible compared to the other two terms.
Therefore,
\[ \delta u_i \approx \delta u_p - d_x\,\delta\theta_2 + d_x\frac{e_{2_i}}{e_{3_i}}\delta\theta_3, \tag{47} \]
\[ \delta v_i \approx \delta v_p + d_y\,\delta\theta_1 - d_y\frac{e_{1_i}}{e_{3_i}}\delta\theta_3. \tag{48} \]
From the first terms of Eq. 47 and Eq. 48, we see that principal point error is indistinguishable from the camera pitch/yaw misalignment to first order, as they will both affect all of the star measurements in very nearly the same way. This relationship is shown graphically in Fig. 4. The third term simply shows that camera roll misalignment (rotation about the boresight) is independently observable from principal point error.
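As a numeric illustration (using the $d_x$ value from Table 1, which is a simulated value rather than a flight value), a pitch misalignment of $\delta\theta_2 = 10$ arcsec shifts every star by
\[ |\delta u_i| \approx d_x\,\delta\theta_2 = 7308.2 \times \frac{10}{206265} \approx 0.35 \text{ pixels}, \]
exactly the shift produced by a principal point error of $\delta u_p \approx 0.35$ pixels; the two effects cannot be separated using star measurements alone.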
For these reasons it is common to simply fix the principal point location (usually to the image center) and then treat all residuals of this nature as 'misalignment,' as was done for Stardust [17], MESSENGER [19], Dawn [24], and a host of other missions. An alternate approach is to slowly introduce updates to $u_p$ and $v_p$ in an iterative fashion as described in [15].
[Figure 4: two side-by-side sketches, one with a rotated boresight (pointing error) and one with a shifted principal point on the FPA; the star line-of-sight is a fixed inertial direction and the apparent location of imaged points is unchanged between the two systems]

Fig. 4 Graphical depiction showing that pointing error and principal point movement are interchangeable for small angles. Angles and errors are greatly exaggerated to make things visible

Numerical Results

Monte Carlo with Simulated Star Images

A Monte Carlo analysis was performed to better understand calibration performance and whether carrying an alignment state for every image is necessary. We analyzed performance for 1,000 different calibration scenarios, each containing 15 calibration images collected at random attitudes. We assumed a 2048 × 2592 pixel focal plane array with a 16 × 20 degree field of view. Each of the 1,000 scenarios had an OPNAV camera to star tracker misalignment of 15 arcsec (1σ). Within each scenario, each image had a star tracker pointing error of 10 arcsec (1σ) and each star centroid had an error of 0.15 pixel (1σ).
The calibration results are summarized in Table 1 for both forms of the state vector
introduced above. Method 1 uses β 1 from Eq. 16 and Method 2 uses β 2 from Eq. 17.
A superior calibration is clearly obtained when a separate alignment is carried for
each image in the batch estimation process and then the misalignment is computed
via quaternion averaging (Method 1). The superior calibration results also lead to
smaller errors in post-calibration star centroid location prediction, with an error of
0.095 pixel for Method 1 and an error of 0.110 pixel for Method 2.

Tests with a Real Camera

The approach described in this paper was tested on a collection of 359 star field images collected at ten-second intervals. Images were collected near Houston, TX, with a PixeLink PL-D735 camera and a 35 mm focal length lens. The camera was stationary (with respect to the Earth), zenith-pointing, and primarily observing stars in the constellation Hercules. Over the course of the one-hour data collection period, stars transited across the image and provided a good sampling of the entire focal plane array. A visualization of the star centroids tracked across all 359 images is shown in Fig. 5.

Table 1 Monte Carlo analysis demonstrates that superior calibration performance is achieved when every
image is allowed to have its own attitude correction (Method 1) instead of having only a single attitude
correction for all images (Method 2)

Calibration Parameter | Actual Value | Method 1 Error (std. dev.) | Method 2 Error (std. dev.)
dx | 7,308.2 | 0.534 | 1.345
α | -1.4 | 0.141 | 0.373
dy | 7,307.2 | 0.536 | 1.351
k1 | -0.075 | 0.011 | 0.027
k2 | 0.883 | 0.475 | 1.167
k3 | -12.60 | 6.32 | 15.417
p1 | 0.0012 | 4.79 × 10^-5 | 12.88 × 10^-5
p2 | 0.0016 | 3.72 × 10^-5 | 9.66 × 10^-5
δθ1 (arcsec) | 0.0 | 2.616 | 2.907
δθ2 (arcsec) | 0.0 | 2.611 | 2.834
δθ3 (arcsec) | 0.0 | 3.700 | 7.487

We acknowledge that imaging stars through the atmosphere is not a perfect emulation of the performance that will be achieved with space-based imagery. Of particular note is that the Earth's atmosphere refracts light, thus changing the apparent line-of-sight direction to stars [26]. Although this effect has been somewhat mitigated by using zenith-pointing images, it is still present and likely introduces apparent lens distortion. Consequently, the lens distortion results presented here show the aggregate effect of both lens distortions and atmospheric refraction.

Fig. 5 A superposition of all star centroids tracked across all 359 test images shows good sampling of the
focal plane array
[Figure 6 axes: Row, pixels (0–2000) vs. Column, pixels (0–2500) in both panels]

Fig. 6 Vector field (left) and contours (right) of overall lens distortions in units of pixels. The vectors in the left image are not to scale for the purpose of readability

It was determined that only about 15 images were required to adequately calibrate the camera. Larger sets of images produced no statistical improvement in the final calibration results. The estimated distortion maps are shown in Fig. 6. After computing the ten calibration parameters using 15 images, the quality of the calibration was assessed by looking at the star coordinate residuals on the remaining 344 images in the data set. These results are shown in Fig. 7.
Finally, although the general model developed here carries ten calibration parameters, it is not always necessary to estimate them all in practice. Some are not well observable (e.g., $u_p$ and $v_p$) and others simply don't contribute much to the distortion for well-made lenses (e.g., $k_3$). Thus, the calibration was repeated, removing the least prominent remaining calibration parameter one at a time, with the results shown in Fig. 8. Only 5–6 of the calibration parameters appear to be important for the specific camera system used in this example. It should be stressed that the specific calibration terms which are the most important will vary from camera to camera. As a result, such an analysis will need to be repeated for the actual Orion OPNAV camera hardware.

[Figure 7 histogram statistics: mean -0.0128 pixels; std. deviation 0.179 pixels (~5.1 arcsec); skewness 0.54; kurtosis 5.44]

Fig. 7 Even when using only 15 images to compute the calibration, the post-calibration residuals across
all stars from all 359 images remain low and well-behaved

[Figure 8 annotations: removed terms which are probably not important; removed terms that are definitely important]

Fig. 8 Deterioration of calibration performance as each of the ten calibration parameters are removed one
at a time. Red boxes show which terms are excluded from the calibration. Residual standard deviations are
in units of pixels

Conclusions

The Orion vehicle will demonstrate an autonomous navigation capability based on optical navigation (OPNAV) during the upcoming Exploration Mission 1 (EM1). An enabling component of this demonstration is precise on-orbit geometric calibration of the OPNAV camera. Therefore, this paper describes the theory underlying the present design of Orion's on-orbit camera calibration. An example is provided using real star field images.

Acknowledgments The authors thank Steve Lockhart, Renato Zanetti, Rebecca Johanning, and other
members of the Orion OPNAV team. This work was made possible by NASA under award NNX13AJ25A.

References

1. Brown, D.: Decentering distortion of lenses. Photogramm. Eng. 32(3), 444–462 (1966)
2. Brown, D.: Close-range camera calibration. Photogramm. Eng. 37(8), 522–566 (1971)
3. Christian, J.: Optical navigation for a spacecraft in a planetary system. Ph.D. thesis, The University of Texas at Austin (2010)
4. Conrady, A.: Decentered lens system. Mon. Not. R. Astron. Soc. 79, 384–390 (1919)
5. D’Souza, C., Zanetti, R.: Navigation and Dispersion Analysis of the First Orion Exploration Mission.
In: AAS/AIAA Space Flight Mechanics Meeting. Vail, CO (2015)
6. Eichhorn, H.: The development of the overlapping-plate method. Mapping the Sky, 177–199 (1988)
7. Holst, G., Lomheim, T.: CMOS/CCD Sensors and Camera Systems. JCD Publishing, Winter Park, FL (2007)
8. Jackman, C., Dumont, P.: Optical Navigation Capabilities for Deep Space Missions. In: AAS/AIAA
Space Flight Mechanics Conference. Kauai, HI (2013)
9. Jenkins, F.A., White, H.E.: Fundamentals of Optics, 4th edn. McGraw-Hill, Inc., New York City, NY (1957)
10. Knutson, M., Miller, D.: Fast star tracker centroid algorithm for high performance CubeSat with air bearing validation. Master's thesis, Massachusetts Institute of Technology (2012)
11. Levenberg, K.: A method for the solution of certain non-linear problems in least squares. Q. Appl. Math. 2, 164–168 (1944)
12. Ma, Y., Soatto, S., Košecká, J., Sastry, S.: An Invitation to 3-D Vision: From Images to Geometric Models. Springer, New York, NY (2010)

13. Markley, F.L., Cheng, Y., Crassidis, J., Oshman, Y.: Averaging quaternions. J. Guid. Control. Dyn.
30(4), 1193–1197 (2007)
14. Marquardt, D.: An algorithm for least-squares estimation of nonlinear parameters. J. Soc. Ind. Appl.
Math. 11, 431–441 (1963)
15. McGlone, J. Manual of Photogrammetry, 6th edn. American Society for Photogrammetry and Remote
Sensing, Bethesda, MD (2013)
16. Myers, J.R., Sande, C.B., Miller, A.C., Warren Jr., W.H., Tracewell, D.A.: Vizier Online Data Catalog:
SKY2000 Catalog, Version 4 (Myers+ 2002). VizieR Online Data Catalog, 5109 (2001)
17. Newburn, R., Bhaskaran, S., Duxbury, T., Fraschetti, G., Radey, T., Schwochert, M.: Stardust imaging camera. J. Geophys. Res. 108(E10) (2003)
18. Oberst, J., Brinkmann, B., Giese, B.: Geometric Calibration of the MICAS CCD Sensor for the DS1
(Deep Space 1) Spacecraft: Laboratory Vs. In-Flight Data Analysis. In: International Archives of
Photogrammetry and Remote Sensing, Vol. XXXIII (2000)
19. Oberst, J., Elgner, S., Turner, F., Perry, M., Gaskell, R., Zuber, M., Robinson, M., Solomon, S.: Radius
and limb topography of mercury obtained from images acquired during the MESSENGER flybys.
Planet. Space Sci. 59, 1918–1924 (2011)
20. Owen, W.: Methods of Optical Navigation. In: AAS/AIAA Space Flight Mechanics Meeting (13-17)
(2011)
21. Owen, W., Dumont, P., Jackman, C.: Optical Navigation Preparations for New Horizons Pluto Flyby.
In: 23rd International Symposium on Space Flight Dynamics (ISSFD) (2012)
22. Owen, W.M., Synnott, S.P., Null, G.W.: High-Accuracy Asteroid Astrometry from Table Mountain
Observatory. In: Modern Astrometry and Astrodynamics (1998)
23. Press, W., Teukolsky, S., Vetterling, W., Flannery, B. Numerical Recipes: The Art of Scientific
Computing, 3rd edn. Cambridge University Press, New York, NY (2007)
24. Schröder, S., Maue, T., Marquès, P., Mottola, S., Aye, K., Sierks, H., Keller, H.: In-flight calibration of the Dawn Framing Camera. Icarus 226, 1304–1317 (2013)
25. von Seidel, P.L.: Ueber die Theorie der Fehler, mit welchen die durch optische Instrumente gesehenen Bilder behaftet sind, und über die mathematischen Bedingungen ihrer Aufhebung. Abhandlungen der naturwissenschaftlich-technischen Commission bei der Königl. Bayerischen Akademie der Wissenschaften in München 1, 227–267 (1857)
26. Thomas, M., Joseph, R.: Astronomical refraction. Johns Hopkins APL Tech. Dig. 17(3), 284 (1996)
27. Thomas, S., Fusco, T., Tokovinin, A., Nicolle, M., Michau, V., Rousset, R.: Comparison of centroid computation algorithms in a Shack-Hartmann sensor. Mon. Not. R. Astron. Soc. 371, 323–336 (2006)
28. Wang, M., Cheng, Y., Yang, B., Jin, S., Su, H.: On-orbit calibration approach for optical navigation
camera in deep space exploration. Opt. Express, 24 (2016)
29. Zhang, Z.: A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell.
22(11), 1330–1334 (2000)
