Detectors 2017

The document discusses various astronomical detectors, highlighting their necessary qualities and types, including the human eye, photographic methods, and electronic detectors like CCDs. It details the functioning of the eye as a photodetector, the evolution of astronomical photography, and the advantages and disadvantages of different imaging technologies. Additionally, it explains the workings of CCDs, their applications in astronomy, and the challenges associated with their use, such as saturation and blooming.

Astronomical Detectors

necessary qualities:
low noise
appropriate resolution
linear response
ability to integrate
sensitivity to photons
quantum efficiency
Kinds of Detectors

the eye
photographs
phototubes
electronic (CCD, CMOS, bolometers) detectors
The Eye
oldest detector
and the only one available until the mid-nineteenth
century
a spherical object that focusses light and forms an
image on the back surface (retina)
the retina contains specialized cells that are
photodetectors
muscles in the eye change the focal length so that we
can focus on objects at different distances
[Diagram of the eye, labelling the aperture, image formation, the detector (retina), and image processing (brain)]
The Eye as a Photodetector

the eye has a remarkable ability to adjust for
differences in the level of illumination
the range is about ten billion to one!
the process, called adaptation, involves several mechanisms
The Eye as a Photodetector

the iris is a sphincter muscle that defines our pupil and
can change in size from about 2 to 8 mm
the iris responds to the level of illumination
this change in size corresponds to an adaptation ratio of only 16:1
The Eye as a Photodetector

the retina is covered in two kinds of cells: rods and cones
rods (scotopic vision): respond to faint light more effectively than cones but do not discriminate colour very well
cones (photopic vision): better in bright light and responsible for colour sensitivity
Cones
concentrated in the central part of the retina
spacing of the cells determines the acuity of our vision
most concentrated central region (called the fovea
centralis) has a total angular diameter of about 1°
sensitivity is only about 1/100 that of the rods
three types of cones distinguished by different pigments in the cells with peaks at 4250, 5300, and 5600 Å (B, G, R!)
response time of about 75 ms
Rods
located towards the edges of the retina
responsible for peripheral/averted vision
response time of about 300 ms
100x better sensitivity than the cones!
colour discrimination for our peripheral vision is not
very good!
peak sensitivity at about 5000 Å (green) and cuts off
above 5800 Å (red)
The Eye as a Photodetector

when a photon strikes a rod or a cone, a measurable
current is produced
signal goes to the brain for processing
Dark Adaptation
exact mechanisms for dark adaptation are unclear, but
they are known to include biochemical, physical, and
neural processes
rods contain the visual pigment rhodopsin, which is formed by a reaction between vitamin A and a protein
this pigment is bleached by exposure to light
rhodopsin is reformed in dark conditions in a
process that is thought to take about 20 minutes
vitamin A deficiency results in night blindness!
Response of the Eye
Our eye has the ability to see illumination levels that differ in intensity by 10 billion times (10 orders of magnitude)!
The response of our eye is not linear, especially for very low or very high illumination levels.
Photography
Astronomical Photography

first astrophoto: 1840, J. W. Draper
daguerreotype (silver-plated copper plate) process
image of the Moon
Astronomical Photography

Foucault and Fizeau, 1845
daguerreotype image of the Sun
Astronomical Photography

1870s: invention
of dry emulsion
silver bromide
crystals
suspended in
transparent gelatin
Photographic Emulsion
photons interact with the crystals
developing creates a dark spot where light hit the film
a “negative”
emulsions that are sensitive to different wavelengths
can be produced
photons can be collected over a long period of time,
making very faint objects visible
Photographic Advantages
photographs allowed astronomers to record the
position and brightness of objects and analyse them in
a quantitative manner
shine light through a photographic film and measure
the voltage in a photocell
photographs can be stored for a long period of time
with the advent of digital technology, photographs
could be digitized
photographic magnitudes could be measured to an
accuracy of about 0.01 mag
Disadvantages

photographic emulsion responds to photons in a non-linear way
for low and high signals, more photons are needed for a corresponding change in density on the photographic plate
photographic emulsions record only a very small fraction of incident photons (low sensitivity)
Charge Coupled Device
CCDs
combines the area detection and light accumulation
abilities of photographic emulsion with high sensitivity,
linearity, and broad spectral response of a photodiode
two dimensional array of thousands to millions of metal
insulator semiconductor (MIS) photosensitive
capacitors called pixels
pixels are interconnected such that stored charges can
flow from pixel to pixel as voltages are changed in a
systematic way
QE = (number of detected photons) / (number of incident photons)
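As a minimal sketch of this formula in Python (the photon counts below are invented for illustration, not measured values):

```python
# Quantum efficiency: fraction of incident photons that are actually detected.
incident_photons = 10_000     # hypothetical number of photons hitting a pixel
detected_photons = 8_000      # hypothetical number that free an electron

qe = detected_photons / incident_photons
print(f"QE = {qe:.0%}")       # -> QE = 80%
```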
CCDs in Astronomy
CCDs were first developed in 1970 at Bell Labs by
Boyle and Smith
2009 Nobel Prize in Physics: "for the invention of an
imaging semiconductor circuit – the CCD sensor"
original use of CCDs was as a memory storage
medium
first astronomical image in 1975
widespread astronomical use by the 1980s
“Photographic film is rapidly taking a back seat to the
new sensor where solid state color CCD still cameras
(e.g. 35mm) are now commercially available. Although
relatively expensive, by the end of the century a low-
cost color “instamatic” CCD type camera is expected.”
- Janesick and Elliott, Large Array Scientific CCD Imagers, 1992
How a CCD Works
dense array of light sensitive capacitors
silicon crystal structure
a CCD pixel can store charge as long as a voltage is
applied across its electrodes, creating many “potential
wells”
[Diagram of a pixel's potential well: higher voltage around the edges, lower voltage in the centre]
How a CCD Works

an incoming photon can “free” an electron
energy from the photon is absorbed by the atom, imparting sufficient energy to promote an electron from the “valence band” of the silicon substrate to the “conduction band”
the freed electrons are contained within the potential wells created by the electrodes (the freed electron and the vacancy it leaves behind are sometimes called an electron-hole pair)
Energy Levels in an Atom

More energy is
required to move
the electron to a
greater distance
from the nucleus
Energy Bands in a
Semiconductor Crystal

Semiconductors have a relatively small energy gap between their uppermost valence band and their conduction band.
Photoelectric Effect
photoelectric effect: E = hc/λ
E, energy; h, Planck’s constant (4.14 × 10⁻¹⁵ eV·s); c, speed of light; λ, wavelength
silicon has an energy band gap of 1.1 eV
what wavelength does this represent?
λ = (4.14 × 10⁻¹⁵ eV·s)(3 × 10⁸ m/s)/(1.1 eV)
λ = 11 300 Å, near infrared (about 1 micron)
The band gap of 1.1 eV is the minimum energy a photon must have to excite an electron from the valence band to the conduction band; 11 300 Å is the corresponding maximum (cutoff) wavelength.
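The cutoff-wavelength arithmetic can be checked with a few lines of Python (a sketch using only the constants quoted on this slide):

```python
# Cutoff wavelength for silicon from E = hc / wavelength.
h = 4.14e-15     # Planck's constant [eV s]
c = 3.0e8        # speed of light [m/s]
E_gap = 1.1      # silicon band-gap energy [eV]

wavelength_m = h * c / E_gap                 # metres
wavelength_angstrom = wavelength_m * 1e10    # 1 m = 1e10 Angstrom
print(f"{wavelength_angstrom:.0f} Angstrom") # ~11,300 Angstrom (about 1 micron)
```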
Spectral Range
a typical CCD has a useful spectral range from about
3000 to 11,000 Å
photons with wavelengths longer than 11,000 Å (1.1 micron) do not have enough energy to create an electron-hole pair (i.e. to excite an electron enough to “free” it)
photons with wavelengths shorter than 3000 Å are too
energetic to be detected (these photons pass right
through the detector)
Dark Current

even without being exposed to light, thermal motion of the atoms still results in electrons accumulating in each pixel
this is what we measure when we take “darks”
Taking an Image with a CCD

the detector is first exposed to light for a set amount of time (i.e. the exposure time)
this is usually accomplished by opening a mechanical shutter, which blocks the light when closed
Taking an Image with a CCD
during the exposure, photons strike the CCD detector
and interact with the silicon, freeing some electrons
these electrons are “free” within the boundaries of a
single pixel; the boundaries of the pixels are kept at a
high voltage and the electrons are repelled from these
borders
charge (in the form of free electrons) accumulates in
each pixel over the length of the exposure
the amount of charge (i.e. the voltage) is proportional
to the number of photons that struck the particular
pixel
Readout
voltages on the gates are varied so that the charge is transferred from pixel-to-pixel down the length of a column
the signal (voltage) is usually amplified before going to an analog-to-digital converter (A/D)
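A toy model of this readout sequence may help; it is only a conceptual sketch (the gain and bias numbers are made up, and a real CCD shifts all rows in parallel in hardware rather than looping in software):

```python
import numpy as np

def read_out(charge, gain_e_per_count=1.5, bias_counts=1000):
    """Toy CCD readout: shift each row into a serial register, then
    shift each pixel to the amplifier and digitize it (A/D conversion)."""
    ny, nx = charge.shape
    image = np.zeros((ny, nx), dtype=np.int32)
    for row in range(ny):
        serial_register = charge[row, :].copy()   # parallel transfer of one row
        for col in range(nx):
            electrons = serial_register[col]      # serial transfer to the amplifier
            image[row, col] = int(electrons / gain_e_per_count) + bias_counts
    return image

fake_charge = np.random.poisson(lam=500, size=(4, 4))  # pretend accumulated electrons
print(read_out(fake_charge))
```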
Problems With Readout
any breaks or defects in the electrodes may cause
charge to become trapped
evidenced in the image as “bad” columns

faulty or inefficient connections between the parallel and serial registers can also result in inefficiency and non-linearity at low signal levels
Bad Columns
Sample from European Southern Observatory
Quantum Efficiency

not every photon that strikes the chip’s surface creates a free electron; the quantum efficiency measures the fraction that do
this loss can be the result of many things: the photon may have been reflected, or instead of hitting the centre of a pixel it may have hit an edge and been absorbed in the electrodes or other insensitive areas of the chip
Charge Collection Efficiency

freed electrons are stored in the potential wells created by electron-depleted silicon
it is possible for the freed electrons to diffuse into neighbouring pixels or other areas of the chip and thus not get collected or counted in the signal
Saturation
each pixel can only hold a limited number of electrons, which is set by the voltage of the edges of the well (i.e. the pixel “walls”)
once a pixel is saturated, it becomes insensitive to additional incoming photons
pixels of a saturated region on an image would appear to have the same value even though the incident number of photons is different
Blooming
occurs at or near
saturation
when the potential of the
well equals the potential of
the barrier, the charge is
free to cross the barrier
region into neighbouring
pixels
when blooming occurs,
the charge “bleeds” up and
down the column
Saturation

the information in saturated regions of an image is lost
we only have a lower limit on intensity at that point
CCD sensors tend to lose their linearity as saturation is
approached
linearity is a very desirable feature of CCDs and an
advantage of the CCD over other forms of imaging
Saturation

in practice it will probably be difficult to saturate parts of an extended object (like your galaxies)
BUT bright stars can easily become saturated in long
exposures
taking several short exposures and combining them as
opposed to taking one long exposure is a way to avoid
saturating parts of your image
Can be a problem if there is a
bright star near your object.
Solution?

Shorten the exposure time of the individual images (i.e. you could take sixty 30-second exposures instead of thirty 60-second exposures).
Blooming goes down the columns. You can rotate the camera so the blooming does not interfere with your object.
[NGC3344 - Image by James Young, PHYS 2070 2011/12]
Digital Conversion
the number of electrons must be converted to a digital
signal
i.e. the value of the pixels, also called the number of
“counts” on an image
this conversion factor is not usually one-to-one
A/D converters are limited in their dynamic range,
which can be referred to as the “bit-depth”
common bit depths are 8, 12, 14, or 16 bits, corresponding to a range in counts of 256, 4096, 16384, or 65536 (i.e. 2⁸, 2¹², 2¹⁴, 2¹⁶)
Digital Conversion
for example, if a CCD has a 16-bit A/D converter (like ours does), this means it is capable of dividing the total signal into 2¹⁶ = 65536 digital bins (could also be called levels or counts)
if the pixels could collect exactly 65536 electrons then 1 electron would equal 1 count
but say, for example, that a pixel can hold 100000 electrons
then 1 count would correspond to 100000/65536 ≈ 1.5 electrons (a gain of about 1.5 electrons/count)
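The same bookkeeping as a short Python sketch (the 100,000-electron full well is the hypothetical example from this slide, not a real spec):

```python
bit_depth = 16
full_well_electrons = 100_000          # hypothetical pixel capacity

n_levels = 2 ** bit_depth              # 65536 possible count values
gain = full_well_electrons / n_levels  # electrons represented by one count
print(n_levels, round(gain, 2))        # -> 65536, 1.53 e-/count
```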
GAO Instruments
CCD: Apogee U47

uses a Marconi, thinned, back-illuminated CCD chip
the U stands for USB interface
[Diagram: front side illuminated chip, with incoming photons entering through the electrode side]
[Diagram: back side illuminated chip, with incoming photons entering through what used to be a neutral substrate material (that the crystal silicon was grown upon) but was removed by a process called thinning]
Specs

1024 x 1024 pixels

pixels 13 μm x 13 μm
total image area: 13.3 x 13.3 mm
16-bits
Specs

gain: 1.2 electrons per count


full well depth: 76 000 e-
charge transfer efficiency: 0.999999

bias level: 1275 counts


dark current: 0.11 electrons per pixel per second (at −23 °C)
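A quick sanity check with these specs (a sketch; the 300-second exposure is just an example value, not a GAO requirement):

```python
gain = 1.2            # electrons per count
full_well = 76_000    # electrons
bias = 1275           # counts
dark_rate = 0.11      # electrons per pixel per second at -23 C

saturation_counts = full_well / gain          # ~63,300 counts, within the 16-bit range
exposure_s = 300                              # example exposure time [s]
dark_counts = dark_rate * exposure_s / gain   # ~27.5 counts of dark signal per pixel
print(round(saturation_counts), bias, round(dark_counts, 1))
```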
Quantum Efficiency
Specs

exposure time: 30 msec to 10 400 sec


cooling: 30°C - 50°C below ambient

temperature stability: ± 0.01°C


system noise: <11.4 e- RMS
Pixel Size

the more resolution the better... right?


not necessarily!

must look at the USEABLE resolution


What do you want to see?

stars are so far away that most are unresolved (all are
unresolved for us at the GAO... except the sun!)
unresolved point sources have a Gaussian distribution of
brightness
Measure?

how is a Gaussian measured?


one way is the Full Width at Half-Maximum (FWHM)
[Diagram: Gaussian star profile with the full width at half maximum (FWHM) marked]
Sampling

for round star images, should have a FWHM of at least 3 pixels
less than this and the star would be under-sampled (square stars!)
much more than this is over-sampled
larger image, but no gain in detail
Matching
match the optical system to the CCD camera

if you have average seeing at a location of, say, 2˝ then your pixel size should be about 2˝/3 (because you want your stars to be at least 3 pixels) or about 0.67˝/pixel
use your telescope focal length to calculate your image scale and then choose a CCD camera with pixels of the appropriate size

same principle applies to digital cameras!
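A sketch of this matching calculation in Python (the focal length and pixel size below are illustrative values, not the specs of any particular telescope):

```python
def pixel_scale_arcsec(pixel_size_um, focal_length_mm):
    """Image scale in arcsec/pixel: 206265 arcsec per radian times
    the pixel size divided by the focal length (same length units)."""
    return 206265.0 * (pixel_size_um * 1e-3) / focal_length_mm

seeing_arcsec = 2.0                 # example seeing
target_scale = seeing_arcsec / 3.0  # want >= 3 pixels across the FWHM -> ~0.67"/pixel

print(pixel_scale_arcsec(pixel_size_um=13.0, focal_length_mm=4000.0))  # ~0.67"/pixel
print(target_scale)
```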


[Plot of bright and faint star profiles: the size of the FWHM is all the same for bright and faint stars]
Pixel Size

image scale at the GAO is about 0.45’/mm
typical FWHM = 3”
how many pixels is this on the Apogee camera?
Over-sampled!
don’t need to use the full resolution of the chip
bin the pixels (2x2)
increase sensitivity
decrease image size
no change in amount of detail
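The over-sampling claim can be checked with the numbers quoted above (a sketch; the 0.45’/mm image scale and 13 μm pixels are taken from these slides):

```python
image_scale = 0.45 * 60          # 0.45 arcmin/mm -> 27 arcsec per mm
pixel_mm = 13e-3                 # 13 micron pixels

arcsec_per_pixel = image_scale * pixel_mm   # ~0.35 arcsec/pixel
fwhm_pixels = 3.0 / arcsec_per_pixel        # ~8.5 pixels across a 3" star -> over-sampled

binned_scale = arcsec_per_pixel * 2         # 2x2 binning -> ~0.70 arcsec/pixel
print(round(fwhm_pixels, 1), round(3.0 / binned_scale, 1))   # ~8.5 and ~4.3 pixels
```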
Image Size

effective chip size (after 2x2 binning): 512 x 512 pixels
pixel size 26 μm x 26 μm
how many pixels in total?
only 512 x 512 = 262,144 (about 0.25 megapixels)
16 bits per pixel
16 bits?
16 bits means 2¹⁶ = 65536
this is the number of grey levels in the image
8 bits = 1 byte
16 bits = 2 bytes
what is the image size?
about 0.5 MB
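And the image-size arithmetic as a final sketch, using the binned frame dimensions quoted above:

```python
nx = ny = 512                  # binned frame dimensions
bits_per_pixel = 16

n_pixels = nx * ny                           # 262,144 pixels (~0.25 megapixels)
size_bytes = n_pixels * bits_per_pixel // 8  # 2 bytes per pixel -> 524,288 bytes
print(n_pixels, size_bytes / 2**20, "MB")    # -> 0.5 MB
```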
