CMOS CCD
P = P0 exp(−αz)    (E1)
where P0 is the intensity at zero depth and α is the absorption coefficient. Note that the rate of absorption, dP/dz, decreases exponentially with depth. Therefore, more photogeneration is expected to occur near the surface.
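As a quick numerical illustration of Equation (1), the sketch below computes the remaining optical power at a given depth and the fraction absorbed within the first few micrometers. The absorption coefficient used is an arbitrary illustrative value, not a material constant:

```python
import math

def photon_flux(P0: float, alpha: float, z: float) -> float:
    """Optical power remaining at depth z (Equation 1): P = P0 * exp(-alpha*z)."""
    return P0 * math.exp(-alpha * z)

def absorbed_fraction(alpha: float, d: float) -> float:
    """Fraction of the entering light absorbed within depth d: 1 - exp(-alpha*d)."""
    return 1.0 - math.exp(-alpha * d)

# Illustrative absorption coefficient (units: 1/um); real values depend
# strongly on the material and the wavelength.
alpha = 0.5
print(photon_flux(1.0, alpha, 2.0))     # power left after 2 um
print(absorbed_fraction(alpha, 2.0))    # fraction absorbed in the first 2 um
```

Doubling the depth does not double the absorbed fraction, which is the exponential behavior the text describes.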
Because the photogenerated carriers exist in an excited state, the excess electrons and holes will
recombine after a short period of time (~ picoseconds on average) to release the excess energy.
This process is known as recombination, and it returns the carriers distributions to thermal
equilibrium condition. These excess carriers are lost if they are not captured to create an
electrical signal for light detection. Therefore, a semiconductor device structure is needed to
facilitate the capturing of the photogenerated carriers. The simplest and most commonly used
structure for this purpose is a diode structure known as photodiode (PD).
Before the discussion on photodiode, two important parameters that are used to characterize the
effectiveness of detection by a photodetector should be discussed; these are quantum efficiency
and responsivity. Quantum efficiency is defined as the probability that an incident photon will
generate an electron-hole pair that will contribute to the detection signal, so it can be expressed
as,
η = (1 − R)ζ[1 − exp(−αd)]    (E2)
where R is the surface reflectance, ζ is the probability that the generated electron-hole pair will contribute to the detection signal, and d is the depth of the photo-absorption region. Therefore, quantum efficiency is affected by both material properties and device geometry.
The captured carriers are used to generate a signal either as a voltage or a current. The measure
of signal strength to incident power of the device is called responsivity. If the output is a current,
then it is related to the quantum efficiency by the following expression,
R = (q/hν)η = (qλ/hc)η = ηλ/1240 [A/W, λ in nm]    (E3)
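Equations (2) and (3) can be combined into a short calculation. The values for R, ζ, α, and d below are illustrative, since the real numbers depend on the material and the process:

```python
import math

def quantum_efficiency(R: float, zeta: float, alpha: float, d: float) -> float:
    """Equation 2: eta = (1 - R) * zeta * (1 - exp(-alpha*d))."""
    return (1.0 - R) * zeta * (1.0 - math.exp(-alpha * d))

def responsivity(eta: float, wavelength_nm: float) -> float:
    """Equation 3: R = eta * lambda / 1240, lambda in nm, result in A/W."""
    return eta * wavelength_nm / 1240.0

# Illustrative values: 30% surface reflectance, 90% collection probability,
# alpha = 0.5 /um, 3 um absorption region, 650 nm light.
eta = quantum_efficiency(R=0.3, zeta=0.9, alpha=0.5, d=3.0)
print(round(responsivity(eta, 650.0), 3))
```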
4. Photodiode
Many photodetectors utilize the formation of p-n junctions, and the simplest of these is the photodiode, which is simply a p-n junction designed for capturing the photogenerated carriers. In CMOS sensing, a photodiode is usually made by forming an n-type region on a p-type semiconductor substrate, or vice versa. This can be done by epitaxial growth, diffusion, or ion implantation.
   Figure 1 shows the carrier distributions, charge distribution, built-in electric field and band-
   diagram of a typical p-n junction. The inhomogeneous charge and carrier distributions are the
   result of a state of equilibrium between diffusion, drift and recombination. The following is a
   review of the key features of a p-n junction,
1. Carriers are absent in a region known as the depletion region or space-charge region. It has a layer width, W, and the ionic space charge of the donors and acceptors is exposed in this region.
   Figure 1.
   a) Diffusion of carriers at the p-n junction, (b) the resulting electron and hole density
   distribution, where Na and Nd are the acceptors and donors densities, (c) the resulting charge
   density distribution, (d) the resulting electric field, and (e) the band diagram of a p-n junction
   showing the alignment of the Fermi level, Ef, and shifting of the bands. Ev is the top of the
valence band and Ec is the bottom of the conduction band.
2. Alignment of Ef between the p-type region and the n-type region, as the conduction and valence bands
   shift in the depletion region.
3. There is a potential difference of V0 between the p-side and the n-side.
   The operation of a photodiode relies upon the separation of the photogenerated carriers by the
   built-in field inside the depletion region of the p-n junction to create the electrical signal of
   detection. Under the influence of the built-in electric field, the photogenerated electrons will drift
   towards the n-side, and photogenerated holes will drift towards the p-side. The photogenerated
   carriers that reach the quasi-neutral region outside of the depletion layer will generate an electric
   current flowing from the n-side to the p-side; this current is called a photocurrent. The generation
   of the photocurrent results in the shift of the I-V characteristic of the photodiode as shown in
   Figure 2. Therefore, the I-V characteristic of a photodiode is expressed as,
IL = IS[exp(qV/kT) − 1] − Iph    (E4)
where the first term is the Shockley equation that describes the ideal I-V characteristic, with
   IS being the saturation current, k the Boltzmann constant and T the operating temperature, and
   the second term, Iph, is the photocurrent.
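A minimal sketch of Equation (4), using illustrative values for the saturation current and photocurrent; the open-circuit voltage follows by setting IL = 0:

```python
import math

Q = 1.602e-19    # electron charge (C)
K_B = 1.381e-23  # Boltzmann constant (J/K)

def diode_current(V: float, I_s: float, I_ph: float, T: float = 300.0) -> float:
    """Equation 4: I_L = I_s * (exp(qV/kT) - 1) - I_ph."""
    return I_s * (math.exp(Q * V / (K_B * T)) - 1.0) - I_ph

def open_circuit_voltage(I_s: float, I_ph: float, T: float = 300.0) -> float:
    """Setting I_L = 0 gives V_oc = (kT/q) * ln(I_ph/I_s + 1)."""
    return (K_B * T / Q) * math.log(I_ph / I_s + 1.0)

# Illustrative values: 1 fA saturation current, 10 nA photocurrent.
print(diode_current(0.0, 1e-15, 10e-9))    # short-circuit point: -I_ph
print(open_circuit_voltage(1e-15, 10e-9))  # open-circuit voltage in volts
```

The two printed values correspond to the y-axis and x-axis intercepts of the shifted I-V curve discussed below.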
Figure 2.
Photogeneration in a p-n junction: (a) the built-in electric field driving the photogenerated
carriers from the depletion region away from the junction, and (b) shifting of the I-V
characteristic due to the photogenerated current, Iph.
A photodiode can be operated in three basic modes: open circuit mode, short circuit mode, and
reverse bias (or photoconductive) mode. The circuit diagrams of the three different basic
operating modes are shown in Figure 3.
Open circuit (OC) mode is also known as photovoltaic mode. As the name implies, in this mode,
the terminals of the photodiode are left open-circuited. In this mode, there is no net current
flowing across the photodiode, but due to the photogenerated current, a net voltage is created
across the photodiode, called the open circuit voltage, VOC. In reference to Figure 2(a), the
photodiode is operating at the point where the I-V characteristic curve intersects the x-axis.
Figure 3.
Basic operating mode of a PD: (a) open circuit mode, (b) short circuit mode, and (c) reverse-bias
mode.
In contrast, in the short circuit (SC) mode, the terminals of the photodiode are short-circuited. This
allows the photogenerated current to flow in a loop as illustrated in Figure 3(b). In
Figure 2(b), this is represented by the point at which the I-V characteristic curve intersects the y-
axis. The current that flows in the loop in SC mode is also known as the short-circuit current, Isc, and it
has the same magnitude as Iph.
In reverse bias mode, a reverse bias is applied across the photodiode as shown in Figure 3(c).
Therefore, it is operated in the lower-left quadrant of Figure 2(b). Note that by applying a bias
voltage, the potential difference across the p-n junction changes to V0– V, and the balance
between drift and diffusion in the p-n junction also changes. This will affect the depletion width
(W) and E as well. The dependence of W on the bias voltage can be described by,
W = K(V0 − V)^mj    (E5)
where, K is a constant, and mj depends on the junction geometry (mj = 1/2 for a step junction and
mj = 1/3 for a linear junction). Therefore, operating in reverse bias has the effect of increasing W.
The increase in W is not as great as the change in the potential difference, because mj < 1, so E
should also increase. From the point of view of charge distribution and Gauss’s Law, a wider
depletion region exposes more of the ionic space charge, which in turn increases the electric
field.
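Equation (5) can be illustrated numerically. The sketch below assumes a step junction (mj = 1/2) and an arbitrary constant K chosen so that W = 1 (in arbitrary units) at zero bias; it shows that W grows more slowly than the potential difference:

```python
def depletion_width(K: float, V0: float, V: float, mj: float = 0.5) -> float:
    """Equation 5: W = K * (V0 - V)**mj (mj = 1/2 step, 1/3 linear junction)."""
    return K * (V0 - V) ** mj

# Illustrative step junction: V0 = 0.7 V; K chosen so W = 1 at zero bias.
# Reverse bias corresponds to negative V.
K = 1.0 / 0.7 ** 0.5
for V in (0.0, -1.0, -5.0):
    print(V, round(depletion_width(K, 0.7, V), 3))
```

Going from 0 V to 5 V of reverse bias increases the potential difference roughly eightfold but widens W by less than a factor of three, which is why the field E also increases.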
The widened depletion region under reverse bias creates a greater photogeneration region, while
the stronger E increases the drift velocity of the photogenerated carriers. In principle, the drift
velocity increases in proportion with E, so even with an increase in W given by Equation (5), the
transit time (the average time a drifting carrier takes to reach the end of the depletion region) is
reduced. Therefore signal loss due to recombination in the depletion region is reduced. Because
of these beneficial effects, reverse bias operation is often preferred.
As shown in Figure 2(b), there exists a small current under reverse bias in the I-V characteristic even in dark conditions. This dark current is caused by the saturation current, IS. On the boundaries of the depletion region, minority carriers (electrons on the p-side and holes on the n-side) can diffuse into the depletion region. Because of the built-in electric field, these diffused minority carriers may drift across the depletion region; this is a source of the saturation current. Therefore, a dark current exists in photodiodes; however, the diffusion process is not the only contribution to the dark current.
Apart from the diffusion contribution to the dark current, carriers that are generated by thermal excitation through inter-band trap (defect) states in the depletion region can also contribute to the dark current. This trap-assisted process is essentially the reverse of Shockley-Read-Hall (SRH)
recombination. Just like the photogenerated carriers, carriers created by trap-assisted generation
in the depletion region can also be swept away from the depletion region by drift before they can
recombine and form part of the dark current. This dark current contribution is known as the
Generation-Recombination (G-R) current, and it can be more significant than the diffusion
contribution.
Under sufficient reverse bias, the EC on the n-side can fall below EV on the p-side. In this
condition, there is a finite possibility that an electron in the valence band of the p-side can tunnel
through the bandgap into the n-side conduction band. This process is called direct tunneling or
band-to-band tunneling. If sufficient numbers of direct tunneling events occur, its contribution to
the dark current will be measurable.
Tunneling can also occur through an inter-band trap (defect) state. Due to thermal excitation, a carrier can be trapped in one of these states. If this state exists in the depletion region, and sufficient reverse bias is applied, a trapped electron from the valence band can have energy higher than EC, and tunneling into the conduction band can occur. This is called trap-assisted tunneling.
Due to the interruption of the crystal lattice structure, there can be a high density of surface
charge and interface states at the physical surface of a device. The surface charge and interface
states can affect the position of the depletion region, as well as behaving as generation-
recombination centers. Therefore, the surface of a device can introduce another contribution to
the dark current called surface leakage current. Passivation of the surface can be used to control
the surface leaking current. This is usually achieved by adding a layer of insulator such as oxide
to the surface.
When a sufficiently large electric field is applied to an insulator, it will start to conduct. This is called the Frenkel-Poole effect, and it is caused by the escape of electrons from their localized states into the conduction band. The Frenkel-Poole effect can occur in a semiconductor as well: when a sufficiently large reverse bias is applied across a p-n junction, electrons generated by the Frenkel-Poole effect can also contribute to the dark current.
4.4.5. Impact ionization current
Under reverse bias, the motion of carriers in the depletion region can be described as a drift where the
carriers are repeatedly accelerated by the electric field and collide with the atoms in the crystal
lattice. Under strong reverse bias, the acceleration between collisions can be large enough for a
carrier to obtain the energy required to dislodge a valence electron to create a new electron-hole
pair. This process is known as impact ionization and it can generate new carriers that contribute
to the reverse bias current. When the applied reverse bias is beyond the breakdown voltage, Vbd,
impact ionization becomes a dominant factor in the photodiode behavior, and the photodiode is
said to be operating in avalanche mode.
Table 1 summarizes the different dark currents and their dependence. When these dark currents
are taken into consideration, the photodiode no longer follows the ideal diode characteristic. A
detailed discussion on dark current can be found in [39].
Table 1. Summary of the dark current contributions and their dependences.
There are two basic noise generating mechanisms in a photodiode: the statistical fluctuation in
the number of carriers and photons, and the random motion of the carriers. The statistical
fluctuation in particle numbers is the cause of shot noise. The root mean square of the current
fluctuation due to shot noise is
ish,rms = √(2qIavgΔf)    (E6)
where Iavg is the average signal current and Δf is the bandwidth of the signal. The signal-to-noise
ratio (SNR) of shot noise is given by,
SNR = Iavg/(2qΔf)    (E7)
The random movement of carriers produces thermal noise, also known as Johnson-Nyquist noise.
The root mean square of the current fluctuation due to Johnson-Nyquist noise is
ith,rms = √(4kTΔf/RL)    (E8)
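The noise expressions can be evaluated directly. The sketch below uses the standard 4kT form of the Johnson-Nyquist formula and illustrative operating values (10 nA signal, 1 MHz bandwidth, 1 MΩ load):

```python
import math

Q = 1.602e-19    # electron charge (C)
K_B = 1.381e-23  # Boltzmann constant (J/K)

def shot_noise_rms(I_avg: float, bandwidth: float) -> float:
    """Equation 6: i_rms = sqrt(2 * q * I_avg * df)."""
    return math.sqrt(2.0 * Q * I_avg * bandwidth)

def shot_noise_snr(I_avg: float, bandwidth: float) -> float:
    """Equation 7: SNR = I_avg / (2 * q * df)."""
    return I_avg / (2.0 * Q * bandwidth)

def thermal_noise_rms(T: float, bandwidth: float, R_load: float) -> float:
    """Johnson-Nyquist current noise: i_rms = sqrt(4 * k * T * df / R_L)."""
    return math.sqrt(4.0 * K_B * T * bandwidth / R_load)

print(shot_noise_rms(10e-9, 1e6))           # 10 nA signal, 1 MHz bandwidth
print(thermal_noise_rms(300.0, 1e6, 1e6))   # 300 K, 1 MHz, 1 Mohm load
```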
In high-speed light detection application of the photodiode, the dynamic response of the
photodiode is of the utmost importance. The dynamic response of the photodiode depends on the
drift velocities of the photogenerated carriers, the junction capacitance that is associated with the
space charge in the depletion region, and the diffusion of photogenerated carriers from the quasi-
neutral regions into the depletion region. The delay related to the drift can be characterized by a
transit time, the amount of time that it takes a photogenerated carrier to reach the quasi-neutral
region. It is simply given by,
ttr = x/vdrift = x/(μE)    (E9)
where vdrift is the drift velocity, x is the distance from the point of photogeneration to the quasi-neutral region, and μ is the mobility of the carrier. Then, the longest possible transit time is,
ttr(max) = W/(μE)    (E10)
Another delaying factor is due to the bias-voltage dependence of the depletion layer width. A change in bias voltage will change the depletion region width as described by Equation (5), which in turn changes the amount of exposed space charge. This change in the amount of space charge due to a change in the bias voltage can simply be modeled by the junction capacitance, which is given by,
CPD = εA/W    (E11)
where ε is the dielectric constant and A is the cross-sectional area of the p-n junction.
Photogenerated carriers in the quasi-neutral region are normally lost by recombination. However,
on occasion, the minority species of the photogenerated electron-hole pair can diffuse into the
depletion region and contribute to the photogenerated current. Although the contribution to the
overall signal by this carrier diffusion is small, a delay due to this diffusion process is observable [40, 41]. The time it takes a minority carrier in the quasi-neutral region to diffuse into the depletion region is approximately,
tdiff = x²/(4D)    (E12)
where x is the carrier's distance to the depletion region boundary, and D is the diffusion constant.
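Equations (9), (11), and (12) can be evaluated together. The numbers below (mobility, field, depletion width, area, diffusion constant) are illustrative silicon-like values, not process data:

```python
def transit_time(x: float, mobility: float, E_field: float) -> float:
    """Equation 9: t_tr = x / v_drift = x / (mu * E)."""
    return x / (mobility * E_field)

def junction_capacitance(eps: float, area: float, W: float) -> float:
    """Equation 11: C_PD = eps * A / W."""
    return eps * area / W

def diffusion_time(x: float, D: float) -> float:
    """Equation 12: t_diff = x**2 / (4 * D)."""
    return x * x / (4.0 * D)

# Illustrative silicon-like values in SI units:
mu_n = 0.135                # electron mobility, m^2/(V*s)
E = 1e6                     # electric field, V/m
W = 1e-6                    # 1 um depletion width
eps_si = 11.7 * 8.854e-12   # silicon permittivity, F/m
print(transit_time(W, mu_n, E))                      # worst case, Equation 10
print(junction_capacitance(eps_si, (5e-6) ** 2, W))  # 5 um x 5 um junction
print(diffusion_time(2e-6, 3.5e-3))                  # D ~ 35 cm^2/s electrons
```

Note that the diffusion delay (hundreds of picoseconds here) dwarfs the drift transit time (a few picoseconds), which is why the diffusion tail is observable.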
Charge accumulation can be achieved in the accumulation mode, and it is used in CCD and
CMOS imaging.
Figure 4.
1. To begin, the reset switch is closed. The photodiode is in reverse bias, and VD = VDD.
2. The reset switch opens and the shutter opens; the photocurrent and dark current discharge the photodiode capacitance for an integration time, tint.
3. After tint, the shutter closes; then the read switch closes, and Vout = VD.
4. Read switch opens, and then reset switch closes. Circuit returns to initial state.
To measure the light intensity of the pixel for a single image, only one cycle of operation is
   needed. To measure a series of images (e.g. in video recording), the operation cycle can be
   repeated continuously. During the integration phase, the rate of voltage drop depends on Iph,
   Id and CPD, which can be described by the following differential equation,
dVD/dt = −(Iph + Id)/CPD    (E13)
   Note that CPD varies with the bias voltage. As discussed in [42], this voltage drop is usually linear
for a wide range of values below VDD. Therefore, the voltage readout at the end of the integration
   period can be used as a measurement of the amount of light that has fallen on the pixel during the
   integration phase.
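Equation (13) can be integrated numerically. The sketch below uses a simple Euler loop with CPD held constant (the text notes it actually varies with bias) and illustrative values for the currents and the capacitance:

```python
def integrate_pixel(V_dd: float, I_ph: float, I_d: float, C_pd: float,
                    t_int: float, steps: int = 1000) -> float:
    """Euler integration of Equation 13: dV_D/dt = -(I_ph + I_d) / C_PD.

    C_PD is treated as constant here; in a real device it varies with V_D,
    but the discharge stays close to linear over a wide range below V_DD.
    """
    V = V_dd
    dt = t_int / steps
    for _ in range(steps):
        V -= (I_ph + I_d) / C_pd * dt
    return V

# Illustrative values: 3.3 V supply, 100 fA photocurrent, 5 fA dark current,
# 10 fF photodiode capacitance, 10 ms integration time.
print(round(integrate_pixel(3.3, 100e-15, 5e-15, 10e-15, 10e-3), 3))
```

With constant currents the drop is exactly (Iph + Id)·tint/CPD, so the readout voltage maps linearly onto the collected light.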
7. P-i-N photodiode
Increasing the active region from which the signal-generating photogenerated carriers originate should, in principle, increase the collection efficiency. Previously, increasing the active region by increasing W through reverse bias was discussed. To further increase the active region, the device geometry can be altered to include an intrinsic region between the p-type and n-type regions, as shown in Figure 6; the result is a p-i-n junction.
Figure 6(e) shows a band diagram of a p-i-n junction under reverse bias. In this condition,
certain assumptions can be made because the external field has driven almost all the carriers
from the intrinsic region [47]. These assumptions are,
      A very narrow depletion region on the doped side of each of the doped-intrinsic
       junctions.
With these assumptions, the depletion layer width is simply the width of the intrinsic region; the
electric field in the intrinsic region is simply,
E = (V0 − V)/W    (E14)
and the junction capacitance is given by Equation 11.
Consequently, the p-i-n photodiode under reverse bias can have a smaller CPD than a p-n junction photodiode, but the wider depletion region also implies a longer transit time. Transit time therefore becomes the dominant limiting factor in the speed of the device, and operating the device under sufficient reverse bias is essential; a quick estimation in [41] shows that the transit time of a carrier crossing the intrinsic layer is on the order of 0.1 ns. Moreover, to optimize for quantum efficiency without sacrificing speed, it is common practice to design the intrinsic layer thickness to be larger than the absorption length given by α−1, but not much more [41]. Further discussion on p-i-n photodiodes can be found in references [40, 41].
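The ~0.1 ns estimate can be reproduced with a back-of-the-envelope calculation, assuming (as an illustration) a carrier drifting at the silicon saturation velocity across a 10 µm intrinsic layer; both numbers are assumptions, not values from the cited reference:

```python
# Rough check of the ~0.1 ns transit-time estimate: carriers at the silicon
# saturation drift velocity (~1e7 cm/s = 1e5 m/s) crossing a 10 um intrinsic
# layer. Both numbers are illustrative assumptions.
v_sat = 1e5    # saturation drift velocity, m/s
W_i = 10e-6    # intrinsic layer width, m
t_tr = W_i / v_sat
print(t_tr)    # -> 1e-10 s, i.e. 0.1 ns
```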
Figure 6.
a) Structure, (b) carrier distribution, (c) charge distribution, (d) electric field, and (e) band
diagram of P-I-N junction under reverse bias.
8. Avalanche photodiode
The avalanche photodiode (APD) is the solid state equivalent of a photomultiplier tube. Its main
application is the detection of weak optical signals such as single-photon events. The APD
exploits the impact ionization of carriers by photogenerated carriers under extremely high
reverse bias. During its operation, carriers that trigger impact ionization and the carriers that are
generated by impact ionization continue to drift to cause more impact ionization events to occur.
The result is a cascade of impact ionization events that produces an avalanche effect to amplify
the photogenerated current. The amplification of the current is given by a multiplication factor,
M = Imph / Iph. It is fundamentally related to the ionization coefficients of the carriers by the following equation,
1 − 1/M = ∫0W αn exp[−∫xW (αn − αp) dx′] dx    (E15)
where αn and αp are the ionization coefficients of the electrons and holes, respectively [48].
Empirically, it can be approximated by,
M = 1/[1 − (V/VBD)^n]    (E16)
where n is a material-dependent parameter [49]. When taking the dark current into account, it becomes
M = 1/[1 − ((V − IR′)/VBD)^n]    (E17)
where I is the total current flowing through the APD, and R′ is the differential resistance observed in junction breakdown [50]. Because of the random nature of impact ionization, the APD suffers from another form of statistical noise called excess noise [41]. It is given by,
F = M[1 − (1 − k)((M − 1)/M)²]    (E18)
where k is the ratio of the ionization coefficients of the two carrier types.
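Equations (16) and (18) can be evaluated together. The bias point, exponent n, and ionization ratio k below are illustrative assumptions, chosen only to show the behavior of the formulas:

```python
def multiplication_factor(V: float, V_bd: float, n: float) -> float:
    """Equation 16: M = 1 / (1 - (V/V_bd)**n)."""
    return 1.0 / (1.0 - (V / V_bd) ** n)

def excess_noise_factor(M: float, k: float) -> float:
    """Equation 18: F = M * (1 - (1 - k) * ((M - 1)/M)**2)."""
    return M * (1.0 - (1.0 - k) * ((M - 1.0) / M) ** 2)

# Illustrative: bias at 95% of breakdown, n = 3, ionization ratio k = 0.02.
M = multiplication_factor(0.95, 1.0, 3.0)
print(round(M, 2))                       # gain rises sharply near V_BD
print(round(excess_noise_factor(M, 0.02), 2))
```

As the bias approaches VBD, M diverges, which is the onset of the avalanche (and, beyond breakdown, Geiger) regime described below.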
An APD can also be operated in Geiger mode under reverse bias beyond the breakdown voltage.
In this case, an electron-hole pair generated by a single photon will trigger the avalanche effect
that generates a large current signal. Therefore, photon counting (like particle counting with a
Geiger counter) can be achieved in Geiger mode. An APD that can operate in Geiger mode is
also known as a single photon avalanche diode (SPAD).
Figure 7.
The cross-section of a SPAD CMOS sensor [51] showing the guard ring surrounding the active
region.
Because APDs operate in or near the breakdown region, a physical feature known as a guard ring around the active region is used to prevent surface breakdown, as shown in Figure 7. Moreover, the high reverse bias voltage required to produce the avalanche effect has hindered the incorporation of APDs into CMOS technology. In 2000, Biber et al. at the Centre Suisse d'Electronique et de Microtechnique (CSEM) produced a 1224-pixel APD array fabricated in standard BiCMOS technology [52, 53]. Since that pioneering work, there has been steady growth in the development of CMOS APDs [54-56] and CMOS SPADs [51, 57-61] for applications such as fluorescence sensing [51, 62, 63] and particle detection [64].
What is clearly missing in this process is the third dimension: the depth into the substrate or the height above it. Since designers have no control over depths or heights, this dimension is hidden from the designer in the CAD tools and ignored. For integrated photodetectors the depths of the p-n junctions are critical, yet the designer still has no control over them in a standard CMOS fabrication process. Also, it should be noted that most processes are n-well/p-substrate processes, and we will assume that for the discussion of photodetector devices.
The simplest structure is the vertical p-n photodiode; it can be formed as a p+ region in an n-well
(Figure 10) or as an n+ region in a p-substrate (Figure 11). The uncovered active area is the
region that is intended to be the photon collection area. To prevent unwanted charge carrier
generation, other regions of the IC should be covered in a metal layer (see Figure 12). It is also
possible to create a p-n photodiode using n-well/p-substrate; the difference with this type of
device is that the p-n junction is quite a bit deeper than the junction for the p+ active or n+ active
devices. As discussed previously, this can affect the wavelength sensitivity of the device.
Figure 8.
a) Mask-level layout of a circuit. (b) Microphotograph of the fabricated circuit shown in (a).
Figure 9.
Figure 10.
Figure 11.
Photodiode with metal-2 layer as a shield to block photons from reaching the substrate.
In order to create a dense array of photodiodes, as needed for a high-resolution imaging device,
the ratio of the area designated for the collection of light to the area used for control circuitry
should be as high as possible. This is known as the fill-factor. Ideally, this would be unity, but
this is not possible for an imaging device with individual pixel read-out. Thus, actual fill-factors
are less than one. A layout of the APS pixel of Figure 5 is shown in Figure 13. The fill factor of this 3-transistor pixel is 41% using scalable CMOS design rules. The metal shielding of
the circuitry outside of the photodetector is not shown for clarity; in practice, this shielding
would cover all non-photoactive areas and can also be used as the circuit ground plane.
In order to create an array of imaging pixels, the layout not only requires maximizing the active
photodetector area, but also requires that the power (VDD), control and readout wires be routed
so that when a pixel is put into an array, these wires are aligned. An example of this is shown in
Figure 14.
Figure 13.
A slightly more complex structure is the buried double junction, or BDJ, photodiode [66]. The
BDJ is formed from two vertically stacked standard p-n junctions, shown in Figure 15. The
shallow junction is formed by the p-base and N-well, and the deep junction is formed by the N-
well and P-substrate. As discussed previously, the depth of each junction is determined by the
thickness of the p-base and n-well. Incident light will be absorbed at different depths, so the two
junctions will produce currents based on the wavelength of the incident light. The current flow
through two junctions is proportional to the light intensity at the junction depth. An example
layout of the structure is shown in Figure 16.
Figure 14.
Simple 3x3 arrays of pixels shown in Figure 13. Notice that the wires align vertically and
horizontally.
Figure 15.
The final structure we will discuss is the phototransistor. Typically, a phototransistor can produce a current output several times larger than a photodiode of the same area, due to the high gain of the transistor. However, a major drawback of phototransistors is their low bandwidth, which is typically limited to hundreds of kHz. Additionally, the current-irradiance relationship of the phototransistor is nonlinear, which makes it less than ideal to use in many
applications. Like the photodiode, there are a number of possible configurations for the
phototransistor in a standard CMOS process, such as the vertical p-n-p phototransistor and lateral
p-n-p phototransistor [67-71].
A cross-section of a vertical p-n-p phototransistor is shown in Figure 17 and an example layout is
provided in Figure 18.
Figure 17.
A DVERTISEMENT
In a standard CMOS technology, a photodiode can be formed using different available active
layers, including n-active/p-substrate, p-active/n-well and n-well/p-substrate, to form a p-n
junction. In a photodiode, the photo-conversion mostly takes place in the depletion region where
an incident photon creates an electron and hole pair with the electron passing to the n-region and
hole to the p-region. Hence, varying the depth at which the depletion region forms in the silicon
wafer would control the performance of the photodiodes in terms of responsivity and quantum
efficiency. Also, by varying the width of the depletion region through an appropriately applied reverse bias to the photodiode, one can control the response time of the detector. A wider depletion
region reduces the junction capacitance of the p-n-junction and improves the response time of the
detector.
Here, we aim to understand the effect of photodiode structure design on responsivity and external quantum efficiency. Given that all materials in a standard CMOS process are set by the manufacturer, the external quantum efficiency, which takes into account only the photon-generated carriers collected as a result of the light absorption (in other words, the useful portion of the signal generated by the interaction of light and the photodetector), is more relevant. The external quantum efficiency depends on the absorption coefficient of the material, α (units: cm−1), and the thickness of the absorbing material. Assuming that the entire incident light is absorbed by the detector, if the photon flux density incident at the surface is Φ0, then the photon flux at depth, x, is given by Beer's law (Equation 1) [72].
The external quantum efficiency is also a function of the wavelength of the incident light. Thus, in a CMOS photodiode, one can strategically choose the depth of the depletion region to match the depth to which photons are likely to penetrate, and thereby optimize the photodetector to provide high absorption for a particular spectrum of wavelengths. In practical optoelectronic systems development, responsivity, which is defined as the output current divided by the incident light power, may be a more relevant performance metric. Responsivity is related to quantum efficiency by a factor of q/hν, where q is the electron charge, h is Planck's constant, and ν is the frequency of the incident photon. The spectral response curve is a plot of responsivity as a function of wavelength.
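The quantum-efficiency-to-responsivity conversion can be sketched as below; the 60% external quantum efficiency is an illustrative assumption:

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
Q = 1.602e-19   # electron charge, C

def responsivity_from_qe(eta: float, wavelength_nm: float) -> float:
    """R = eta * q / (h*nu) = eta * q * lambda / (h*c), in A/W."""
    lam = wavelength_nm * 1e-9
    return eta * Q * lam / (H * C)

# A detector with 60% external quantum efficiency (illustrative):
for lam in (450.0, 550.0, 650.0):   # blue, green, red
    print(lam, round(responsivity_from_qe(0.6, lam), 3))
```

For fixed quantum efficiency, responsivity rises with wavelength, because each longer-wavelength photon carries less energy per unit of generated current.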
Thus, to optimize a silicon photodiode structure for detecting blue wavelengths, the depletion
region should be near to the silicon surface. For red wavelengths, the depletion region should be
placed deeper in the silicon substrate. Based on this idea, Yotter et al. [73] have compared
photodiode structures (p-active/n-well and n-well/p-substrate) to develop photodiodes better
suited for blue or green wavelengths for specific biosensing applications. The blue-enhanced
structure used interdigitated p+-diffusion fingers to increase the depletion region area near the
surface of the detector, while the green-enhanced structure used n-well fingers to increase the
depletion region slightly deeper within the substrate. Bolten et al. [74] provided a thorough
treatment of the photodiode types and their properties. They reported that in a standard CMOS
process n-well/p-substrate structure provides relatively better quantum efficiency for biosensors
operating in visible electromagnetic spectrum.
Using the property that external quantum efficiency varies as a function of the wavelength of the incident light, together with Beer's law, many research groups have reported the use of buried double p-n junction (BDJ) and buried triple p-n junction structures, which can be implemented with a standard CMOS process, for monochromatic color detection [75, 76]. The BDJ structure has two standard p-n junctions (p-base/n-well/p-substrate) that are stacked vertically in the CMOS chip. For the BDJ detector, we obtain Itop (only from the top p-n junction) and Ibottom (the sum of the currents from the top and bottom p-n junctions) from the detector. The current ratio, Itop/(Ibottom − Itop), can be used for color/wavelength measurements. The CMOS BDJ detector has been used for fluorescence detection in microarrays [77], and for the detection and measurement of ambient light sources [78]. BDJ color detectors have been used in many chemical and biological sensors, such as seawater pH measurement [79] and volatile organic compound detection [80].
Most CMOS image sensors are monochrome devices that record the intensity of light. A layer of
color filters or color filter array (CFA) is fabricated over the silicon integrated circuit using a
photolithography process to add color detection to the digital camera. The CFA is prepared using color pigments mixed with photosensitive polymer or resist carriers. Many recent digital color
imaging systems use three separate sensors to record red, green, and blue scene information, but
single-sensor systems are also common [81]. Typically, single-sensor color imaging systems
have a color filter array (CFA) in a Bayer pattern as shown in Figure 19. The Bayer pattern was
invented at Eastman Kodak Company by Bryce Bayer in 1976 [82]. This CFA pattern has twice
as many green filtered pixels as red or blue filtered pixels. The spatial configuration of the Bayer
pattern is tailored to match the optimum sensitivity of human vision perception. Imager sensors
also include microlenses placed over the CFA to improve the photosensitivity of the detection
system and improve the efficiency of light collection by proper focusing of the incident optical
signal over the photodetectors [83]. A microlens is usually a single element with one plane
surface facing the photodiode and one spherical convex surface to collect and focus the light.
Thus, photons pass through the microlens and then the CFA filter, which passes only wavelengths of red, green, or blue, before finally reaching the photodetectors. The photodetectors
are integrated as part of an active pixel sensor to convert the incident optical signal into electrical
output [84]. The analog electrical data from the photopixels are then digitized by an analog-to-
digital converter. To produce a full color image, a spatial color interpolation operation known as demosaicing is used. The image data is then further processed to perform color correction and
calibration, white balancing, infrared rejection, and reducing the negative effects of faulty pixels
[85, 86].
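As a concrete illustration of demosaicing, the sketch below performs the crudest possible interpolation (nearest-neighbor) on an RGGB Bayer mosaic; real camera pipelines use bilinear or edge-aware methods instead:

```python
def demosaic_nearest(raw):
    """Minimal nearest-neighbor demosaic for an RGGB Bayer pattern.

    raw: list of rows of sensor values; returns an h x w list of (R, G, B)
    tuples. Purely illustrative; production pipelines interpolate properly.
    """
    h, w = len(raw), len(raw[0])
    rgb = []
    for y in range(h):
        row = []
        for x in range(w):
            # RGGB tiling: R at (even, even), B at (odd, odd), G elsewhere.
            r = raw[y - y % 2][x - x % 2]                   # nearest red sample
            b = raw[min(y | 1, h - 1)][min(x | 1, w - 1)]   # nearest blue sample
            g = raw[y][x] if (y + x) % 2 else raw[y][x + 1 if x + 1 < w else x - 1]
            row.append((r, g, b))
        rgb.append(row)
    return rgb

# A single 2x2 RGGB tile with distinct values:
raw = [[10, 20],
       [30, 40]]
print(demosaic_nearest(raw)[0][0])   # -> (10, 20, 40)
```

Each output pixel borrows the missing two color samples from its nearest neighbors of those colors, which is exactly the gap the Bayer pattern creates.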
Figure 19.
One of the first examples of a monolithic microlens array fabricated on a MOS color imager
was done using photolithography of a polymethacrylate type transparent photoresist [87]. In
commercial camera production, glass substrates are typically used as carrier and spacer wafers
for the lenses and are filled with an optical polymer material which is photolithographically
patterned to form the microlenses. A fairly straightforward method used in many microlens
implementations is to photolithographically pattern small cylinders of a suitable resin on a
substrate. The small cylinders are then melted in carefully controlled heating conditions. Hence,
after melting they tend to form small hemispheres due to surface tension. However, molten
resin has a tendency to spread, so that lens size and spatial location are difficult to
control. A well-defined spherical surface for the microlens is required to achieve a high numerical
aperture which improves the image sensor efficiency. Different techniques are used to control the
spherical shape and spatial location of the microlens including pre-treatment of the substrate to
adjust the surface tension to control the reflow of the microlens [88] and use of microstructures
such as pedestals to control the surface contact angle [83]. In more recent processes the glass
substrates are eliminated and instead microlenses are made with polymer materials that are
molded using master stamps. The molded polymer microlenses are cured with ultraviolet
exposure or heat treatment. By replacing the glass substrates, wafer-level system manufacturers
face fewer constraints on the integration of the optics and the imager integrated circuit, enabling the
production of compact and efficient imager sensors.
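The resist-reflow geometry can be estimated from volume conservation: assuming the molten cylinder keeps its base radius and forms a spherical cap, the cap height, radius of curvature, and contact angle follow from the cylinder volume. This is an idealized sketch; as noted above, real reflow depends on surface tension and substrate pre-treatment.

```python
import math

def reflow_cap(radius_um, height_um):
    """Estimate the spherical-cap shape a resist cylinder reflows into,
    assuming the melt keeps its base radius and conserves volume.
    Returns (cap height, radius of curvature, contact angle in degrees)."""
    a = radius_um
    v_cyl = math.pi * a**2 * height_um
    cap_vol = lambda t: math.pi * t * (3 * a**2 + t**2) / 6.0
    lo, hi = 0.0, 2.0 * a  # bisection bracket: flat film up to full sphere
    for _ in range(80):    # cap volume is monotone in t, so bisection works
        mid = 0.5 * (lo + hi)
        if cap_vol(mid) < v_cyl:
            lo = mid
        else:
            hi = mid
    t = 0.5 * (lo + hi)
    R = (a**2 + t**2) / (2.0 * t)          # radius of curvature of the cap
    theta = math.degrees(2.0 * math.atan2(t, a))  # contact angle at the rim
    return t, R, theta
```

For example, a cylinder whose height is two thirds of its radius melts into a hemisphere (contact angle 90 degrees), which is why the initial cylinder dimensions set the achievable numerical aperture.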
In this section, we will concentrate on understanding device architectures that deal with
monolithic integration of photonic waveguides, gratings and couplers with CMOS
photodetectors for applications in optoelectronics to improve quantum efficiency, spectral
response selectivity, and planar coupling and guiding of light signals to on-chip photodetectors.
CMOS photodetectors operate only in the visible and near-infrared region of the electromagnetic
spectrum, between 400 nm and 1.1 µm. There are applications in sensing and optical
communications in this wavelength region where silicon or CMOS photodetectors can offer low-
cost and miniaturized systems. Monolithic integration of photonic components with
silicon/CMOS photodetectors has been a major research area since the early 1980s [89-92]. It is
advantageous for a monolithically integrated optoelectronic system on silicon to use materials
typically employed in CMOS-technology. The dielectrics available in CMOS are favorable as
the light guiding layer for wavelengths in the visible and near infrared region. The available
materials in CMOS processing technology to develop the photonic devices include layers such as
silicon nitride [3], Phospho-Silicate Glass (PSG) —SiO2 doped with P2O5 [4] or silicon
oxynitride layers deposited as insulating and passivation layers. Confinement of light is achieved
by an increased refractive index in the light guiding film, compared to silicon oxide. The first
proposed CMOS compatible devices were based on using silicon oxynitride waveguides
sandwiched with silicon dioxide (SiO2) layers [93-95].
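The confinement condition can be illustrated numerically with the critical angle for total internal reflection and the numerical aperture of the slab guide. The refractive indices below are representative assumed values; actual indices depend on deposition conditions and wavelength.

```python
import math

# Representative refractive indices in the visible/NIR (assumed values).
N_SIO2 = 1.46   # silicon dioxide cladding
FILMS = {"silicon nitride": 2.00, "silicon oxynitride": 1.55, "PSG": 1.47}

def guiding_params(n_core, n_clad=N_SIO2):
    """Critical angle (degrees) at the core/cladding interface and the
    numerical aperture NA = sqrt(n_core^2 - n_clad^2) of the slab guide."""
    theta_c = math.degrees(math.asin(n_clad / n_core))
    na = math.sqrt(n_core**2 - n_clad**2)
    return theta_c, na

for name, n in FILMS.items():
    theta_c, na = guiding_params(n)
    print(f"{name}: n = {n:.2f}, critical angle = {theta_c:.1f} deg, NA = {na:.2f}")
```

The higher the core index relative to SiO2, the smaller the critical angle and the stronger the confinement, which is why silicon nitride gives the tightest guiding among these layers.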
System-level integration is commonly used in compact spectrometers, with Lysaght et al. [96, 97]
first proposing a spectrometer system in 1991 that would integrate a silicon
photodiode array with microfabricated grating structures for diffraction of the incident light
signals and subsequent detection of the optical spectrum components by the silicon photodiode
array. More recent, commercially available compact spectrometers use a CMOS line array.
Csutak et al. [98] provided an excellent background on related work done prior to their research
article. After considering the absorption length of silicon and required bandwidth for high-speed
optical communications, the improvement of quantum efficiency of the photodetectors remains
an important challenge.
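The tradeoff behind this challenge can be illustrated with the quantum-efficiency expression of Eq. (2). The absorption coefficient below is an assumed, order-of-magnitude value for silicon near 850 nm (an absorption length of roughly 20 µm); a thin, fast junction then collects only a small fraction of the light.

```python
import math

def quantum_efficiency(depth_um, alpha_per_um, reflectance=0.3, zeta=1.0):
    """Eq. (2): QE = (1 - R) * zeta * (1 - exp(-alpha * d)), where d is the
    depth of the photo-absorption region and zeta the collection probability."""
    return (1.0 - reflectance) * zeta * (1.0 - math.exp(-alpha_per_um * depth_um))

# Assumed absorption coefficient of silicon near 850 nm: roughly 5e2 /cm,
# i.e. about 0.05 per micrometre (absorption length ~20 um).
ALPHA_850 = 0.05

for d in (2, 5, 10, 20, 50):
    print(f"d = {d:3d} um -> QE = {quantum_efficiency(d, ALPHA_850):.2f}")
```

A few micrometres of depletion depth, as favored for high-bandwidth detectors, yields single-digit-percent quantum efficiency at this wavelength, which is exactly the tension the text describes.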
10.4. Biosensors on CMOS detectors
Many research groups are working on contact imaging systems, in which biological specimens
coupled directly to the chip surface are imaged or detected; this approach was first proposed
by Lamture et al. [99] using a CCD camera. As the photodetector components in biosensors,
CMOS imagers are preferred for converting the optical signals into electrical signals because
monolithic integration of photodetection elements and signal-processing circuitry leads to low-cost
miniaturized systems [100, 101]. In 1998, a system termed the bioluminescent-bioreporter
integrated circuit (BBIC) was introduced that described placing genetically engineered whole
cell bioreporters on integrated CMOS microluminometers [102]. A more recent
implementation of the BBIC system senses low concentrations of a wide range of toxic
substances, such as salicylate and naphthalene, in both gas and liquid environments, using
a genetically altered bacterium, Pseudomonas fluorescens 5RL, as the bioreporter [103]. The BBIC
system operates by using a large CMOS photodiode (a 1.47 mm2 n-well/p-substrate structure)
to detect low levels of luminescence: the photocurrent generated by the photodiode is
integrated over time, and a current-to-frequency converter serves as the signal-processing
circuit, providing a digital output proportional to the photocurrent.
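The ideal behaviour of such a current-to-frequency converter is easy to sketch: the photocurrent charges an integration capacitor up to a comparator threshold, the capacitor is reset, and the pulse rate is proportional to the current. The capacitor and threshold values below are assumed, for illustration only.

```python
def output_frequency(photocurrent_a, cap_f=1e-12, v_threshold=1.0):
    """Ideal current-to-frequency converter: a constant photocurrent I ramps
    the integration capacitor C to the threshold Vth in t = C*Vth/I, so the
    comparator fires at frequency f = I / (C * Vth)."""
    return photocurrent_a / (cap_f * v_threshold)

# With an assumed 1 pF integration capacitor and 1 V threshold, 1 pA of
# luminescence-induced photocurrent gives a 1 Hz digital output.
print(output_frequency(1e-12))
```

Because the output is a frequency, very small photocurrents can be measured simply by counting pulses over a longer integration window, which suits low-level luminescence detection.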
Recent implementations of contact imaging include using custom-designed CMOS imagers as
platforms for imaging of cell cultures [104] and for DNA sequencing [105, 106]. Researchers are now
working on the integration of molded and photolithographically patterned polymer filters and
microlenses with CMOS photodetectors and imagers towards complete development of
miniaturized luminescence sensors. Typically, luminescence sensors require an optical excitation
source to excite the sensor materials with electromagnetic radiation and a photodetector
component to monitor the excited-state emission response, which occurs at a longer
wavelength and must be filtered from the excitation input. The next
step towards convenient monolithic integration of filters, biological support substrates, and
microfluidic interfaces creates interesting challenges for engineers and scientists. A recent report
discusses the approach of using poly(acrylic acid) filters integrated with custom-designed CMOS
imager ICs to detect fluorescent micro-spheres [107]. Polydimethylsiloxane (PDMS) could offer
a more versatile material to fabricate lenses, filters, diffusers and other components for optical
sensors [108]. PDMS is a silicone-based organic polymer that is soft, flexible, biocompatible and
optically transparent and well amenable to various microfabrication techniques. PDMS can be
doped with apolar hydrophobic color dyes such as Sudan-I, -II or -III to form optical filters that
work in different regions of the visible spectrum [109]. The authors' group recently
proposed a prototype compact optical gaseous O2 sensor microsystem using xerogel-based sensor
elements that are contact-printed on top of trapezoidal, lens-like microstructures molded into
PDMS doped with Sudan-II dye, as shown in Figure 20 [110]. The molded PDMS structure
serves a triple purpose: it acts as an immobilization platform, filters the excitation radiation,
and focuses the emission radiation onto the detectors. The PDMS structure is then integrated on top
of a custom-designed CMOS imager to create a contact imaging sensor system. The low-cost
polymer-based filters are best suited for LED excitation and may not provide optimum
excitation-rejection performance when laser radiation is used. As a more traditional
alternative, Singh et al. [111] proposed micromachining a commercially available thin-film
interference filter and gluing it to the CMOS imager die.
Figure 20.
The irradiance detection pathway output is shown in Figure 21. The response of the irradiance
detection pathway circuit is logarithmic over the measured irradiance range, spanning nearly three
orders of magnitude.
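A logarithmic response of this kind can be modelled as V = V0 + k * log10(E / E_ref). The slope, offset, and reference irradiance below are assumed values for illustration, not the measured parameters of this circuit.

```python
import math

def log_response(irradiance, slope_v_per_decade=0.1, v_offset=0.5, e_ref=1e-3):
    """Model of a logarithmic irradiance-to-voltage response:
    V = V0 + k * log10(E / E_ref), with assumed k, V0 and E_ref."""
    return v_offset + slope_v_per_decade * math.log10(irradiance / e_ref)

# Three decades of irradiance map onto a modest, linear-in-decades
# voltage swing, which is the benefit of a logarithmic front end.
for e in (1e-3, 1e-2, 1e-1, 1.0):
    print(f"E = {e:.0e} -> V = {log_response(e):.2f} V")
```

Compressing a wide irradiance range into a small voltage swing is what lets a single circuit cover nearly three orders of magnitude without saturating.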
Figure 22.
Figure 23.
Output of the color detection pathway as a function of incident light wavelength (from [115]).
Figure 24.
Output of the color detection pathway as a function of incident power (from [115]).
From the experimental results, we can see that the output voltage is larger for longer incident
light wavelengths (see Figure 23). So, for a practical implementation based on this chip, a look-up
table can be used to map the output voltage to the incident wavelength. Moreover, from
Figure 24, changes in the output voltage caused by irradiance changes will not cause
confusion about which color (primary wavelength) is detected; the R, G, and B curves do not
overlap over the normal operating range of irradiance. The reason for this performance is that the
I2/I1 ratio from the BDJ is (ideally) independent of light intensity.
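The look-up-table approach can be sketched as piecewise-linear interpolation over calibration pairs. The (voltage, wavelength) points below are hypothetical, standing in for a measured monotone calibration curve like the one in Figure 23.

```python
from bisect import bisect_left

# Hypothetical calibration pairs (output voltage in V, wavelength in nm).
CALIBRATION = [(0.8, 450.0), (1.2, 520.0), (1.7, 590.0), (2.3, 650.0)]

def voltage_to_wavelength(v):
    """Piecewise-linear look-up of incident wavelength from output voltage,
    clamped to the calibrated range at both ends."""
    volts = [p[0] for p in CALIBRATION]
    if v <= volts[0]:
        return CALIBRATION[0][1]
    if v >= volts[-1]:
        return CALIBRATION[-1][1]
    i = bisect_left(volts, v)
    (v0, w0), (v1, w1) = CALIBRATION[i - 1], CALIBRATION[i]
    # linear interpolation between the two bracketing calibration points
    return w0 + (w1 - w0) * (v - v0) / (v1 - v0)
```

Because the calibration curve is monotone, the inverse mapping is unambiguous, which is exactly why the non-overlapping R, G, and B curves matter.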
A CMOS sensor is a digital device in which every pixel site includes a photodiode and
three transistors that perform different tasks: activating and resetting the pixel,
amplification and charge conversion, and selection (multiplexing).
CMOS Sensor Design
The CMOS sensor's multiplexing configuration is frequently associated with an
electronic rolling shutter, although with extra transistors at each pixel a global
shutter can be achieved, in which all pixels are exposed at the same time and then
read out serially.
The CMOS sensor's multilayer fabrication process complicates placing microlenses
directly over the photodiodes, reducing the effective collection efficiency. This
lower efficiency, combined with pixel-to-pixel variation, gives a lower
signal-to-noise ratio and lower overall image quality compared with CCD sensors.
CMOS sensors convert photons into electrons, the electrons into a voltage, and the
voltage into a digital value through an on-chip ADC (analog-to-digital converter).
The components used in the camera system and its design vary by manufacturer. The
main function of this design is to change light into a digital signal that can then
be examined to trigger further enhancement or user-defined actions.
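The photon-to-digital chain just described can be modelled end to end. The quantum efficiency, conversion gain, full-scale voltage, and ADC resolution below are assumed, representative values, not the parameters of any particular sensor.

```python
def pixel_output(photons, qe=0.6, conv_gain_uv_per_e=50.0,
                 full_scale_v=1.0, adc_bits=10):
    """Toy model of the CMOS pixel signal chain (assumed parameters):
    photons -> photoelectrons (quantum efficiency) -> voltage (charge-to-
    voltage conversion gain) -> digital code (on-chip ADC)."""
    electrons = photons * qe
    volts = electrons * conv_gain_uv_per_e * 1e-6  # uV per electron -> V
    code = int(min(volts / full_scale_v, 1.0) * (2**adc_bits - 1))
    return electrons, volts, code

# 10,000 photons -> 6,000 electrons -> 0.30 V -> a 10-bit ADC code near 307
print(pixel_output(10_000))
```

The model makes the roles of the three conversion stages explicit: sensitivity is set by quantum efficiency, signal swing by the conversion gain, and quantization by the ADC resolution.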
Consumer-level cameras come with additional components such as memory for image
storage, an LCD, switches, and control knobs, whereas machine-vision cameras do not.
CMOS Sensor Types
There are two types of CMOS sensors, the passive pixel sensor and the active pixel
sensor, each discussed below.
In a passive pixel sensor, each pixel contains only a photodiode and a readout
switch, and an amplifier sits at the end of each column. These sensors suffer from
several limitations, such as slow readout, lack of scalability, and high noise.
These problems are addressed in the active pixel sensor by adding an amplifier to
every pixel.
Difference between CMOS and CCD Sensors
The differences between CMOS and CCD sensors are summarized below.
 Definition: a CMOS sensor is a metal-oxide-semiconductor chip used to convert light
   into an electrical signal; a CCD is a charge-coupled device used to transfer
   electrically charged signals.
 Types: CMOS sensors come in two types, active pixel and passive pixel; CCD sensors
   come in three types, full-frame, frame-transfer, and interline-transfer.
 Power consumption: low for CMOS; moderate to high for CCD.
 Complexity: moderate for CMOS; low for CCD.
 Resolution: low to high for both.
 Uniformity: low for CMOS; high for CCD.
 Dynamic range: moderate for CMOS; low for CCD.
 Noise level: moderate to high for CMOS; low for CCD.
 Fill factor: moderate for CMOS; high for CCD.
 Chip signal: digital for CMOS; analog for CCD.
 Cost: CMOS sensors are inexpensive to design because they are made on standard
   silicon production lines; CCDs are expensive to produce.
 Typical uses: CMOS sensors appear in applications from industrial automation to
   traffic control; CCD sensors appear in hand-held and surveillance video cameras,
   desktop computer cameras, etc.
Advantages
The advantages of CMOS sensors include the following.
 Low power consumption.
 Low cost.
 Lower dark noise gives a more reliable image.
 Small camera size, since the readout logic can be included on the same chip.
 Flexible readout thanks to direct addressing of individual pixels, which allows
   binning and limited-region scanning.
 Higher sensitivity in the NIR range.
 Higher frame rates compared with CCDs.
 Blooming is strongly reduced.
 They produce good HD video.
 They are used in phones, tablets, and many other devices.
 Good overall imager performance.
Disadvantages
The disadvantages of CMOS sensors include the following.
 More susceptible to noise; images are sometimes grainy.
 They require more light to produce a good image.
 Each pixel performs its own conversion.
 The homogeneity and quality of the image are lower.
Applications
The applications of CMOS sensors include the following.
 These sensors are used in fields such as marine, automotive, manufacturing,
   aviation, healthcare, astronomy, medicine, and digital photography.
 CMOS sensors cover applications from automation to traffic, such as blind
   guidance, aiming systems, and active or passive range finders.
 These sensors convert photons to electrons for use in digital processing.
 They create images in digital cameras, digital CCTV cameras, and digital video
   cameras.
 They are used in high-resolution cameras.
 Advanced CMOS sensors are used in fields such as augmented reality, computational
   photography, biomedical imaging, and digital healthcare.
Thus, this is an overview of CMOS sensors, covering their design, working, types,
advantages, disadvantages, and applications. Here is a question for you: what is the
difference between CMOS and CCD image sensors?