FDTD simulation of BRDF
Abstract: Stray light in an optical system is unwanted parasitic light that may degrade perfor-
mance. It can originate from different sources and may lead to different problems in the optical
system such as fogging, ghost images for imagers, or inaccurate measurements for time of flight
applications. One of the root causes is the reflectivity of the sensor itself. In this paper we
present a new optical simulation methodology to analyze the stray light contribution due to the
sensor reflectivity by coupling electromagnetic simulation (to calculate the pixels’ bidirectional
reflectance distribution function, also named BRDF) and ray-tracing simulation (for stray light
analysis of the camera module). With this simulation flow we have been able to reproduce
qualitatively red ghost images observed on different sensors in our laboratory.
© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
When designing an optical system, whatever the application, minimizing the stray light is an
important aspect of the work. It is unavoidable that some parasitic light will remain in the system
and degrade the performance, so it is essential to identify the origin of this unwanted light in
order to reduce its impact. Stray light is often referred to as flare or veiling glare in an imaging
system and defined in ISO 9358 and ISO 18844 [1,2] as the light falling on an image which does
not emanate from the subject point. The term “unwanted light” summarizes best what most
people define as flare [3]. Depending on the application, it can create different issues.
For visible imaging applications flare is a well-known observed phenomenon and can sometimes
be used deliberately for artistic reasons in photography, movies, or video games. It can arise from
different origins in the system: bright sources outside the field of view of the system that find a
path to the sensor, internal reflections on the optics (lenses, filters) or mechanics of the system,
scattering due to material imperfections (roughness for example), diffraction on the elements
of the system, etc. Flare reduces the image quality as it can reduce the Signal to Noise Ratio
(SNR) of the sensor, reduce the dynamic range of High Dynamic Range (HDR) cameras, create
saturated areas of the sensor, or create overlapped features in the final image [3–9]. For these
reasons, it is seen most of the time as an artifact that should be reduced for better image quality.
The two pictures in Fig. 1 illustrate some of the different types of flare on images; what can be
seen in the left picture around the bright source is called petal flare and will be explained and
reproduced in this work in the final section.
For time of flight (TOF) applications in the near infrared range where the distance to the object
is evaluated to reconstruct a 3D image of the scene (either with global shutter pixel technology or
single photon avalanche diodes), stray light can lead to errors in depth data accuracy [12–15].
The degradation can be quite significant, for example when a weak signal from a distant object is
affected by strong scattering/reflectivity signals from close objects, as illustrated in Fig. 2. As an
example, Mure-Dubois [12] reports an error in the range of 100mm to 400mm at a 4m distance.
#436244 https://doi.org/10.1364/OE.436244
Journal © 2021 Received 7 Jul 2021; revised 17 Oct 2021; accepted 18 Oct 2021; published 28 Oct 2021
Research Article Vol. 29, No. 23 / 8 Nov 2021 / Optics Express 37640
Even though a lot of work can be done for these different applications to reduce these stray
light issues by post-processing of the data or calibration of the system [7,12–14,16–20], it is
critical to identify the origin of the problem in the optical module to reduce the amount of stray
light perturbing the system.
Analytical calculations in the past [21–26], and ray-tracing software today [19,27–29] are
powerful methods to model light propagation in optical systems and estimate the degradation of
performance due to stray light. These studies require a knowledge of the surface properties of the
elements of the system. The principle of the analytic calculations done in the past to model stray
light in optical modules is to consider the diffraction and scattering of the light on the optical
elements of the system. For that purpose, the Bidirectional Scattering Distribution Function
(BSDF) is widely used to describe the scattering due to the surface roughness of optical elements
[30]. Peterson [22] has for example used the Harvey model [31] to describe and calculate the
scattering of rough surfaces of optical elements in an imaging system. More recently, Caron,
Mariani, and Ritt have extended these calculations in their work, and readers can refer to
these reviews for more details [25,26,28,29,32]. Of course, any BSDF
model could be defined on the surfaces and nowadays most ray-tracing software tools available in
the market have different scatter models that can be applied directly on the surfaces of the optical
elements and the housing of the system (ABg model, Lambertian, Mie scattering, imported
BSDF data, etc.). When only the reflectance of a surface is required for the analysis (as studied
here in our work), the bidirectional reflectance distribution function (BRDF) rather than the full
BSDF, which includes both reflectance and transmittance, can be used.
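As a concrete illustration of the scatter models mentioned above, the widely used ABg model evaluates the BSDF as A/(B + |β − β0|^g), where β and β0 are the direction sines of the scattered and specular directions. The sketch below is a minimal in-plane slice of that model; the A, B, and g values are illustrative, not taken from any measured surface.

```python
import math

def abg_bsdf(theta_scatter_deg, theta_specular_deg, A=1e-4, B=1e-3, g=2.0):
    """In-plane ABg scatter model: BSDF = A / (B + |beta - beta0|^g),
    with beta = sin(theta) the direction-sine distance from specular.
    A, B, g are illustrative values, not measured data."""
    beta = math.sin(math.radians(theta_scatter_deg))
    beta0 = math.sin(math.radians(theta_specular_deg))
    return A / (B + abs(beta - beta0) ** g)

# BSDF peaks at the specular direction (theta0 = 0 here) and falls off
# into the scatter wings.
print(abg_bsdf(0.0, 0.0))   # peak value A/B = 0.1
print(abg_bsdf(30.0, 0.0))  # much lower, in the wing
```

Ray-tracing tools typically take A, B, and g directly as surface scatter parameters; the point here is only the functional form.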
Nevertheless, in all these studies, the contribution of the detector's own reflectivity to the parasitic
light of the system is either not considered, or a simple coating model on the surface of the detector is
used to simulate a given amount of specular reflectivity. But considering the wavelength of the
applications (visible for photography, or near infrared range for TOF and LIDAR applications as
some examples) and the size of the pixels of detectors on the market nowadays (between 1µm
and 4µm for standard CMOS imagers and between 5µm and 10µm for TOF pixels or SPADs), the
detector will act like a diffraction grating and reflect power in different directions corresponding
to the grating orders. Analytical models of a grating could be applied to the detector surface to
obtain the correct diffracted order directions depending on the wavelength, the pixel size, and the
incident angle, but they cannot predict what fraction of the incident power will be reflected to
each diffracted order. To be able to estimate the correct diffuse reflectivity of the detector for each
incidence angle of the source (i.e. the BRDFs of the pixels) an electromagnetic simulation of
pixels solving Maxwell equations is required. The finite-difference time-domain (FDTD) method
is now widely used in the optical community for the simulation of imaging or TOF sensors
[33–37], as well as for the simulation of surface reflectivity [38–41]. Thus, this solver allows us
to obtain accurate BRDFs of our pixels, and subsequently use them as the surface property of
our detector in the ray-tracing simulation for stray light analysis of the system. The coupling
between two simulation worlds (electromagnetic and ray-tracing) with vastly different scales is
an emerging field in the scientific community for different applications [42–50]. In this work
we present a demonstration of such an FDTD to ray-tracing simulation workflow that couples
the light propagation at different spatial scales (µm and mm) in order to analyze stray light in an
optical module. We believe this methodology could be used to complement and improve the
stray light model in the aforementioned references in this field.
The work is organized as follows: in Section 2, we describe the principle of the simulation
flow, explaining the constraints considered and the general methodology; in Section 3, we detail
the BRDF calculation by electromagnetic simulation and demonstrate this methodology by
comparing with the experimental BRDF characterization of some pixels; finally, in Section
4, we apply this coupling methodology to reproduce some stray light effects observed in the
characterization of real modules.
The pixels form a locally periodic pattern on the image sensor surface, and
the pitch refers to the minimum distance between pixels. For color sensitive detectors, the pixels
are arranged into a 2 by 2 locally periodic structure with red, green and blue (RGB) color filters,
called the Bayer pattern, with a period that is twice the pitch, as shown in the top view of Fig. 3.
Fig. 3. Representation of an image sensor module. Figure 3(a) shows rays for an on-axis
field and Fig. 3(b) shows rays for an off-axis field with inserts representing light propagation
inside two pixels (blue and green) of a 1.1µm pitch sensor (on the left is a schematic of
geometrical rays, and on the right the electric field intensity from an FDTD simulation of a
green wavelength). Between the inserts is a top view of the RGB sensor with the periodic
Bayer RGB pattern (2 by 2 pixels) simulated by FDTD.
As represented in Fig. 3(b), to re-center the off-axis incoming light on the photodiode, color
filters (if present) and microlenses are shifted in most commercial sensors today. Each pixel on
the matrix has a different design because the radial shift of each filter/microlens is optimized
depending on the chief ray angle (CRA) corresponding to the position on the sensor and, as a result, the optical
efficiency and the reflectivity will also be different. Thus, the pixel BRDFs must be calculated
for several incidence angles (defined by the illumination cone and the CRA), for different pixel
designs (defined by the position on the matrix), and for the wavelengths of interest. Of course,
on the detector, the local variation of the pixel design and illumination is small and simulating
only a few different areas of interest is sufficient to estimate the local BRDFs over the entire
sensor plane. We therefore simulate different small groups of pixels at different locations on
the sensor (i.e., corresponding to different fields in the scene) that receive the same uniform
illumination; the spatial extent of each group is small compared to the spatial variation of
the source. The reader can refer to our previous work, which describes the CMOS simulation
methodology in more detail [33]. At the pixel level, light is distributed uniformly: spatially
over the pixel area and angularly inside a cone defined by the exit pupil of the objective and the
pixel. With electromagnetic simulations, it is possible to simulate these groups of pixels for
different incidence angles (with a plane wave source) to get the corresponding BRDF as will
be explained in more detail in Section 3. Finally, in the ray-tracing software, these BRDFs are
imported and used as surface properties at the corresponding location on the detector plane. This
electromagnetism to ray-tracing simulation flow is summarized in Fig. 4.
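The flow summarized above can be sketched as a loop over sensor locations, wavelengths, incidence angles, and polarizations. The following Python sketch is purely illustrative: `run_fdtd_brdf` and `average_incoherently` are hypothetical placeholders for the FDTD solver call and the incoherent post-processing, not real tool APIs.

```python
# Hedged sketch of the electromagnetism-to-ray-tracing flow.
# The two functions below are stand-ins for the real FDTD and
# post-processing steps, not actual library calls.

def run_fdtd_brdf(position, wavelength_nm, theta_deg, pol):
    """Placeholder for one coherent FDTD run (one angle, one polarization)."""
    return {"position": position, "wl": wavelength_nm, "theta": theta_deg, "pol": pol}

def average_incoherently(runs):
    """Placeholder for incoherent averaging of the coherent FDTD results."""
    return {"n_coherent_runs": len(runs)}

wavelengths_nm = [630, 940]
sensor_fields = [               # sampled sensor locations with their chief ray angle
    {"position": "center", "cra_deg": 0.0},
    {"position": "edge", "cra_deg": 25.0},
]
cone_offsets_deg = [-10.0, 0.0, 10.0]   # sampling of the illumination cone
polarizations = ["s", "p"]              # FDTD is fully coherent: each state is a run

brdf_tables = {}
for field in sensor_fields:
    for wl in wavelengths_nm:
        runs = [run_fdtd_brdf(field["position"], wl, field["cra_deg"] + d, pol)
                for d in cone_offsets_deg for pol in polarizations]
        brdf_tables[(field["position"], wl)] = average_incoherently(runs)

# Each sensor location / wavelength pair gets one tabular BRDF entry,
# built from 3 cone angles x 2 polarizations = 6 coherent simulations.
print(len(brdf_tables), brdf_tables[("center", 630)]["n_coherent_runs"])
```

The resulting tables would then be imported as surface properties at the corresponding detector locations in the ray tracer, as described in the text.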
Fig. 4. Representation of the electromagnetism to ray-tracing simulation flow for stray light
analysis of an optical module considering the reflectivity of the detector. As noted on the
right, each incident angle (and indeed polarization) must be simulated separately because
FDTD is a fully coherent electromagnetic solver.
when a Bayer pattern is used for visible applications (as shown in the top view of the sensor in
Fig. 3). In this locally periodic approximation, the Bayer pattern forms a two-dimensional grating
with well-defined primitive lattice vectors a⃗1 and a⃗2 . The excitation source is a plane wave with a
given incident angle and polarization. For boundary conditions, we use perfectly matched layer
(PML) absorbing boundaries above and below the pixels to absorb both the reflected light and
the light transmitted into the silicon substrate carrier. On the lateral boundaries, either Bloch-periodic
boundaries or split field boundaries, also known as the broadband fixed angle source technique
(BFAST), can be used. The Bloch-periodic boundaries result in a wavelength dependent incident
angle, while the BFAST method results in a substantial increase in simulation time, especially at
steeper incident angles. In this work, we chose Bloch-periodic boundaries because the wavelength
dependence of the source angle can be accounted for during the analysis. During the FDTD
simulation, the transient electromagnetic fields are Fourier transformed to the frequency domain
and normalized to the source pulse used to excite the system [53]. By calculating these fields on
a plane above the image sensor surface, we can calculate the Poynting vector in the frequency
domain and therefore the total reflected power, normalized to the source power, at any desired
wavelength. Typical sizes of the FDTD grid are on the order of λmin /10 where λmin is the
minimum wavelength (in each material) over the desired bandwidth and standard methods can be
used to ensure convergence.
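The reflected-power calculation described above can be sketched as follows, assuming the frequency-domain fields are already available on a monitor plane. The field amplitudes, mesh size, and impedance-based normalization are toy numbers chosen so that a plane wave with 20% amplitude reflection yields 4% reflected power; this is not the solver's internal implementation.

```python
import numpy as np

def reflected_power_fraction(Ex, Hy, dA, source_power):
    """Total reflected power from frequency-domain fields on a monitor plane
    above the pixels: P = 1/2 * Re(E x H*) . z, integrated over the plane,
    normalized to the source power."""
    Sz = 0.5 * np.real(Ex * np.conj(Hy))   # z component of the Poynting vector
    return np.sum(Sz) * dA / source_power

# Toy check with illustrative numbers.
eta0 = 376.73                              # free-space impedance (ohms)
nx = ny = 64
dA = (10e-9) ** 2                          # monitor grid cell area (10 nm mesh)
Ex = 0.2 * np.ones((nx, ny), dtype=complex)   # reflected E field (V/m)
Hy = Ex / eta0                                # corresponding plane-wave H field
incident_power = 0.5 * (1.0 / eta0) * nx * ny * dA   # 1 V/m incident wave
print(reflected_power_fraction(Ex, Hy, dA, incident_power))  # ≈ 0.04
```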
In order to calculate the reflected power to each angle (or each diffracted order), we must
perform a near to far field projection using well known Green’s function methods [53]. In general,
calculating the far field from the near field involves evaluating integrals of the tangential electric
and magnetic fields over a near field surface for each position in the far field. When the far field
position approaches an infinite distance compared to the dimensions of the near field surface
(the Fraunhofer diffraction limit), the far field position can be defined by a wave vector k⃗ = k0 u⃗
and a distance R = |⃗rfar − ⃗rnear | where k0 is the wavenumber in the far field medium, u⃗ is the
unit vector given by u⃗ = (⃗rfar − ⃗rnear )/R and ⃗rnear is the center of the near field surface. The
integrals over the near field surface can then be reduced to a series of Fourier transforms of the
near electromagnetic fields.
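The Fraunhofer-limit projection can be illustrated in one dimension, where the far-field amplitude reduces to a Fourier transform of the tangential near field. This is a sketch with illustrative values (a Gaussian near field of 3µm radius at 940nm), not the paper's actual projection code.

```python
import numpy as np

# 1D sketch of the Fraunhofer limit: at large distance, the far-field
# angular spectrum is a Fourier transform of the tangential near field.
wavelength = 940e-9
k0 = 2 * np.pi / wavelength
x = np.linspace(-20e-6, 20e-6, 2048)        # near-field sampling line (m)
dx = x[1] - x[0]
E_near = np.exp(-(x / 3e-6) ** 2)           # Gaussian near field, 3 um radius

theta = np.linspace(-0.2, 0.2, 201)         # observation angles (rad)
kx = k0 * np.sin(theta)
# E_far(kx) ~ integral of E_near(x) exp(-i kx x) dx over the near-field line
E_far = (E_near[None, :] * np.exp(-1j * kx[:, None] * x[None, :])).sum(axis=1) * dx
I_far = np.abs(E_far) ** 2

# a spatially wide near field gives a narrow angular spectrum, peaked on axis
print(theta[np.argmax(I_far)])
```

As expected for a Gaussian aperture, the angular spectrum is itself Gaussian, with a width inversely proportional to the near-field radius.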
Here, we know the reflected near fields on a surface corresponding to one period of a periodic
system. We can therefore calculate the contribution of a single unit cell to the far field, $\vec{E}^{\,0}_{far}(\vec{k})$, from the near field of a single period, $\vec{E}^{\,0}_{near}(\vec{r})$. From there, we can calculate the reflected far field of a finite number of periods simply by summing the contributions of each period with the appropriate phase

$$\vec{E}_{far}(\vec{k}) = \sum_{p,q=1}^{N} w(p,q)\,\vec{E}^{\,0}_{far}(\vec{k})\,\exp\!\left[i\left(\vec{k}_{\parallel} - \vec{k}^{\,inc}_{\parallel}\right)\cdot\left(p\,\vec{a}_1 + q\,\vec{a}_2\right)\right] \qquad (1)$$

where $w(p,q)$ is a weighting function over the periods.
In the simple case where $\vec{a}_1$ and $\vec{a}_2$ are aligned with the x and y axes respectively, the grating orders occur when

$$k_x - k_x^{inc} = 2\pi n/a_1 \qquad (2)$$
$$k_y - k_y^{inc} = 2\pi m/a_2 \qquad (3)$$
where n and m are integers which can be used to reference each order.
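Equations (2) and (3) can be used directly to enumerate the reflected orders that propagate (those with |k∥| ≤ k0) and their polar angles. A small sketch follows; the pitch, wavelength, and search span are illustrative.

```python
import math

def propagating_orders(wavelength_um, pitch_x_um, pitch_y_um,
                       theta_inc_deg=0.0, phi_inc_deg=0.0, n_medium=1.0):
    """Reflected grating orders (n, m) that propagate in the incident medium,
    from Eqs. (2)-(3): kx - kx_inc = 2*pi*n/a1, ky - ky_inc = 2*pi*m/a2.
    Returns (n, m, polar angle in degrees)."""
    k0 = 2 * math.pi * n_medium / wavelength_um
    th = math.radians(theta_inc_deg)
    ph = math.radians(phi_inc_deg)
    kx_inc = k0 * math.sin(th) * math.cos(ph)
    ky_inc = k0 * math.sin(th) * math.sin(ph)
    orders, span = [], 20
    for n in range(-span, span + 1):
        for m in range(-span, span + 1):
            kx = kx_inc + 2 * math.pi * n / pitch_x_um
            ky = ky_inc + 2 * math.pi * m / pitch_y_um
            kt2 = kx * kx + ky * ky
            if kt2 <= k0 * k0:  # order propagates (not evanescent)
                theta = math.degrees(math.asin(math.sqrt(kt2) / k0))
                orders.append((n, m, round(theta, 2)))
    return orders

# 940nm at normal incidence on a 2.2µm pitch: first orders near 25.3 degrees
orders = propagating_orders(0.94, 2.2, 2.2)
print(sorted(o for o in orders if abs(o[0]) + abs(o[1]) == 1))
```

Note that the grating equation gives only the directions; the efficiency of each order still requires the full electromagnetic simulation, as emphasized in the text.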
From Eq. (1), we can calculate the normalized power (and polarization, if desired) of reflected
light at any angle from the surface, as a function of the incident angle and polarization. This
can be done either in the limit of an infinite or a finite number of periods. This information
Research Article Vol. 29, No. 23 / 8 Nov 2021 / Optics Express 37645
on the reflective properties of the image sensor surface can be saved for subsequent analysis in
ray-tracing. We note that the FDTD simulation provides the fully coherent results, including phase
and polarization, which can be post-processed into incoherent results as required when passing
the information to a ray tracer. There are several methods which can be used to store the reflective
properties for ray-tracing. For example, the grating order efficiencies could be used directly to
determine the fraction of rays reflected to each order, while the direction could be determined
from Eqs. (2) and (3). While this method is preferable in principle, support for wavelength
and incident angle dependent diffraction of 2D gratings is not always available in ray-tracing
tools. An alternative is to use the tabular BRDF data format that most ray-tracing software can
handle easily. When using a tabular BRDF data format, the diffracted order efficiencies cannot be
employed directly because the reflectance of a grating as a function of output angle is composed
of Dirac delta functions that cannot be easily interpolated. Instead, we want to treat the ideal
plane waves as finite sized beams and thereby create a BRDF that is smooth enough to allow for
interpolation. One method to achieve this is to apply a weighting function in Eq. (1) of

$$w(p,q) = \exp\!\left[-\frac{4\left((p\,a_1)^2 + (q\,a_2)^2\right)}{R_0^2}\right] \qquad (4)$$

where we have assumed that the primitive lattice vectors, $\vec{a}_1$ and $\vec{a}_2$, are aligned with the cartesian x and y axes.
This is approximately equivalent to the angular reflection that will occur for a finite sized
incident beam of radius R0. Here, we are concerned less with the precise value of R0 or the
specific choice of function than with smoothing the Dirac delta functions present in the BRDF
of an ideal grating into functions that can be managed through the tabular BRDF formalism
available in ray-tracing software. Choosing a larger value of R0 gives narrower diffracted
beams, which requires sampling the BRDF at more angles; a smaller value of R0 is less accurate
but allows sampling the BRDF at fewer angles. More quantitatively,
the above weighting function will result in a far field composed of multiple beams of the form
$$\vec{E}_{far}(\vec{k}) = \sum_{n,m} \alpha_{n,m} \exp\!\left[\frac{-R_0^2\left(\vec{k}_{\parallel} - n\vec{K}_1 - m\vec{K}_2\right)^2}{16}\right] \qquad (5)$$

where $\alpha_{n,m}$ is the complex amplitude of grating order $(n,m)$ and $\vec{K}_1$, $\vec{K}_2$ are the reciprocal lattice vectors of the periodic pattern.
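A quick 1D numerical check (with illustrative values of a and R0) shows the effect of this weighting: summing the phased unit-cell contributions of Eq. (1) with the Gaussian weight of Eq. (4) turns each delta-function order into a beam whose intensity half width at half maximum matches the sqrt(8 ln 2)/R0 implied by Eq. (5).

```python
import numpy as np

# 1D illustration: a Gaussian weight over the periods turns delta-function
# grating orders into beams of finite angular width set by R0.
a = 2.2e-6                     # grating period (m)
R0 = 50e-6                     # effective illumination radius (m), illustrative
p = np.arange(-300, 301)       # periods included in the sum of Eq. (1)
w = np.exp(-4 * (p * a) ** 2 / R0 ** 2)       # 1D version of Eq. (4)

dk = np.linspace(-1.0e5, 1.0e5, 2001)         # detuning around one order (1/m)
S = np.abs((w[None, :] * np.exp(1j * dk[:, None] * (p * a)[None, :])).sum(axis=1)) ** 2
S /= S.max()

# predicted intensity half width at half maximum: sqrt(8 ln 2) / R0
measured_hwhm = dk[S >= 0.5].max()
print(measured_hwhm, np.sqrt(8 * np.log(2)) / R0)
```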
at normal incidence on two different process technologies with different pixel pitch (4.6µm
pixel and 2.2µm pixel without color filters). The illumination spot size in characterization was
around 1mm diameter. We performed the measurements on samples without a radial shift of the
microlenses (i.e., all pixels were identical) in order to be equivalent to the simulation conditions.
Figures 5(a) and 5(b) show, respectively, the results from characterization and simulation on the
2.2µm pixel pitch. We see that both results qualitatively agree in terms of energy and direction
angles corresponding to the ones predicted by the grating equation (Eqs. (2) and (3)) for the
940nm wavelength at normal incidence in the air. A log scale is used to emphasize the signal in
the diffracted orders. We also notice that the weak 2nd orders are not detected in characterization,
due to the limited detector responsivity of the equipment for these conditions [55]. Figures 5(c)
and 5(d) show, respectively, the results from characterization and simulation on the 4.6µm pixel
pitch. Once again, the simulation results qualitatively match the characterized ones with angles
corresponding to the grating equation. The small differences observed could be explained by the
simulation model. Even though we try to model the pixels as accurately as possible, there are
always some differences with the final process realized on a silicon wafer (refractive indices of
the materials, layer thicknesses, microlens shape, isolation trench profiles, etc.). Furthermore,
the simulated beams have an angular width that is determined by our choice of R0 , while the
experimental results are limited by the angular resolution of the measurement device.
Fig. 5. BRDF characterization and simulation comparison for different pixel pitch at 940nm
at normal incidence. Figures 5(a) and 5(c) show characterization BRDF result for 2.2µm
and 4.6µm pixel respectively. Figures 5(b) and 5(d) show FDTD simulation BRDF result for
2.2µm and 4.6µm pixel respectively. The white rings correspond to the polar angles in 10
degree increments up to 90 degrees.
We also indicate in Table 1 the absolute values of reflectance from simulation and experimental
characterization at 940nm. For reflectance measurements on scattering samples, it is often
useful to combine two different tools: a BRDF instrument like the Mini-Diff equipment we
use here (or a more sophisticated goniometer if available) that gives the angular information but
captures the total diffuse reflectance less accurately, and an integrating sphere that
accurately measures the total reflectance but provides no angular information. For the latter we
performed the measurement with a Lambda 1050 from Perkin Elmer [56] to compare the total
reflectance with simulation. The values indicated in Table 1 for the measurement correspond
to the mean value measured over 5 samples, with the uncertainty reported in parentheses. The results
in Table 1 show a good agreement between simulation and characterization (considering the
uncertainty of the measurements), for both specular and total integrated reflectance. One should
notice from Table 1 that the reflectance can be quite significant for some pixel technologies,
especially at 940nm on BSI technology due to the low absorption of silicon and therefore the light
reflected from the metallic interconnections under the silicon substrate. As already discussed, this
can create significant stray light issues in the optical module that have to be identified and reduced.
Table 1. Image sensor reflectance results comparison

Pixel pitch | FDTD simulation (Specular / Total) | Characterization, BRDF tool (Specular / Total) | Characterization, integrating sphere (Total)
2.2µm | 16% / 19% | 15% (±1.2) / 17% (±1.8) | 19% (±0.9)
4.6µm | 12% / 46% | 14% (±1.7) / 44% (±2.7) | 45% (±1.5)
Fig. 6. BRDF for the two pixel dimensions simulated at 630nm. Figures 6(a) and 6(c)
show the BRDF at normal incidence for the 1.75µm and 1.4µm pixel pitch respectively.
Figures 6(b) and 6(d) show the incoherent sum of the BRDFs corresponding to a sampling
of incident angles over the full aperture (F/2.8).
tabular BRDF data format to represent diffraction from gratings because, for ideal accuracy, a
large number of incident angles, output angles and wavelengths would be required to correctly
represent the scattering behavior. Nonetheless, the relatively small number of incident angles
used is sufficient to obtain qualitative agreement with experiment, as we will see in Fig. 7,
despite the fact that artifacts of the limited angular sampling are visible in the simulated results.
Improvements to avoid the use of a tabular BRDF data format, such as the use of plugins to better
handle the diffraction order efficiencies of the periodic image sensor surface, while analytically
accounting for the directions of the diffracted orders, will be the subject of future work.
Once the FDTD electromagnetic simulations were performed, the BRDF data was imported
into the ray-tracing software using a tabular text file format and applied as a surface property at
the center sensor plane as explained in Section 2. The results of the final ray-tracing simulation at
630nm on the two module configurations are shown in Fig. 7 as well as the captured images with
a bright light source. The measured images show red ghost patterns that are well reproduced
with the simulation flow. The size, spacing, and directions of the ghosts are comparable between
measurement and simulation in the two module configurations (yellow diagonal lines and yellow
dot squares have been added to the images to more easily compare dimensions between the
patterns).
To better understand the origin of the ghosts, a detailed stray light analysis in ray-tracing
software is necessary. The analysis shows that the ghost pattern comes from light that is reflected
to various diffracted orders of the periodic sensor that is again reflected on the IR filter and comes
back as ghosts captured by the sensor. In Fig. 8 we have represented some simulated rays to
illustrate the optical path leading to ghost images on the sensor. The ghost pattern is thus highly
dependent on the wavelength, the pixel pitch, the incidence angle on the sensor, the infrared filter
to sensor distance and the thickness of the infrared filter. This demonstrates that electromagnetic
simulation of the sensor coupled with ray-tracing simulation of the module is the only way to
reproduce this kind of behavior.
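The geometry of this ghost path can be estimated to first order: a ray diffracted at the grating angle crosses the air gap to the IR filter and back, giving a lateral offset of roughly 2d·tanθ, plus a contribution from the path inside the filter. The sketch below uses hypothetical distances and a nominal filter index; only a full ray trace reproduces the actual pattern.

```python
import math

def ghost_offset_mm(wavelength_um, pitch_um, filter_gap_mm,
                    filter_thickness_mm=0.0, filter_index=1.5, order=1):
    """Rough lateral offset of the ghost of a given diffraction order:
    the ray leaves the sensor at the grating angle, crosses the air gap to
    the IR filter and back, plus an optional path inside the filter.
    Simplified geometry (normal incidence, single reflection), for
    intuition only."""
    s = order * wavelength_um / pitch_um          # grating eq. at normal incidence
    if s >= 1.0:
        return None                               # order is evanescent
    theta = math.asin(s)                          # angle in air
    theta_in = math.asin(s / filter_index)        # refracted angle inside filter
    return (2 * filter_gap_mm * math.tan(theta)
            + 2 * filter_thickness_mm * math.tan(theta_in))

# 630nm, 1.4µm pitch, 0.3mm air gap, 0.2mm filter (illustrative distances)
print(round(ghost_offset_mm(0.63, 1.4, 0.3, 0.2), 3))  # ≈ 0.428 mm
```

This simple relation already shows why the ghost spacing depends on the wavelength, the pixel pitch, the filter-to-sensor distance, and the filter thickness, as stated in the text.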
Fig. 7. Comparison images with ghost patterns between measurement and simulation.
Figure 7(a) shows the image captured with the module using the 1.4µm pixel sensor.
Figure 7(b) is the corresponding ghost ray simulation at 630nm. Figure 7(c) shows the image
captured with the module using the 1.75µm pixel sensor. Figure 7(d) is the corresponding ghost ray
simulation at 630nm.
For this specific issue, a variety of solutions could be explored, such as optimizing the sensor
design to reduce reflectivity, changing the IR filter to sensor distance to defocus the ghosts, or
optimizing antireflective coatings on the IR filter.
References
1. International Organization for Standardization, “Optics and optical instruments, Veiling glare of image, Forming
systems, Definitions and methods of measurement,” ISO 9358:1994 (1994).
2. International Organization for Standardization, “Photography, Digital cameras, Image flare measurement,” ISO
18844 (2017).
3. D. Wueller, “Image Flare measurement according to ISO 18844,”in IS&T International Symposium on Electronic
Imaging, Digital Photography and Mobile Imaging XII (2016), pp. 1–7(7).
4. M. Hullin, E. Eisemann, H.P. Seidel, and S. Lee, “Physically-based real-time lens Flare rendering,” ACM Transactions
On Graphics Vol. 30 N. 4, Proceedings of SIGGRAPH (2011).
5. S. Lee and E. Eisemann, “Practical Real-Time Lens-Flare Rendering,” in Eurographics Symposium on Rendering
Vol. 32 N. 4 (2013).
6. J. J. McCann and A. Rizzi, “Veiling glare: the dynamic range limit of HDR images,” Proc. SPIE 6492, 649213
(2007).
7. E-V. Talvala, A. Adams, M. Horowitz, and M. Levoy, “Veiling Glare in High Dynamic Range Imaging,” ACM Trans.
Graphics 26(3), 37 (2007).
8. N. Koren, “Measuring the impact of flare light on Dynamic Range,” in IS&T International Symposium on Electronic
Imaging, Image Quality and System Performance XV (2018), pp. 169-1-169-6(6).
9. J. Jur, “Stray Light Analysis of a Mobile Phone Camera,” Master’s project, College of Optical Sciences, University
of Arizona (2016), https://www.optics.arizona.edu/sites/optics.arizona.edu/files/jordan-jur-msreport.pdf
10. S. Wylie, “Forest Light – Lens flare (real),” (2012), https://commons.wikimedia.org/wiki/File:Forest_Light_-
_Lens_flare_(real)_(7882567470).jpg
11. Byronv2, “early spring sunset,” (2016), https://www.flickr.com/photos/7512717@N06/25025964623
12. J. Mure-Dubois and H. Hügli, “Real-time scattering compensation for time-of-flight camera,” in Proc. of the ICVS
Workshop on Camera Calibration Methods for Computer Vision Systems (2007).
13. J. Mure-Dubois and H. Hügli, “Optimized scattering compensation for time-of-flight camera,” Proc. SPIE 6762,
67620H (2007).
14. J. Holmund, “Characterization and compensation of Stray light effects in Time of flight based range sen-
sors,” Master’s thesis Engineering Physics, Department of Physics, Umeå University (2013), http://www.diva-
portal.org/smash/record.jsf?pid=diva2%3A633199&dswid=-2452
15. R. Whyte, Chronoptics (2021), https://medium.com/chronoptics-time-of-flight/multipath-interference-in-indirect-
time-of-flight-depth-sensors-7f59c5fcd122
16. J. Achatzi, G. Fischer, V. Zimmer, D. Paulus, and G. Bonnet, “Measurement and analysis of the point spread function
with regard to straylight correction,” Proc. SPIE 9404, 940406 (2015).
17. B. Bitlis, P. Jansson, and J. Allebach, “Parametric point spread function modeling and reduction of stray light effects
in digital still cameras,” Elec. Imaging Computat. Imaging 64980, 64980V (2007).
18. J. Wei, B. Bitlis, A. Bernstein, A. de Silva, P. A. Jansson, and J. P. Allebach, “Stray light and shading reduction in
digital photography: a new model and algorithm,” Elect. Imaging Dig. Photo., 6817, 68170H (2008).
19. E. Oh, J. Hong, S.W. Kim, Y.J. Park, and S.I. Cho, “Novel ray tracing method for stray light suppression from ocean
remote sensing measurement,” Opt. Express 24(10), 10232–10245 (2016).
20. P. Gamet, S. Fourest, T. Sprecher, and E. Hillairet, “Measuring, modeling and removing optical straylight from Venus
Super Spectral Camera images,” Proc. SPIE 11180, 111804F (2018).
21. A. Greynolds, “Formulas For Estimating Stray-Radiation Levels in Well-Baffled Optical Systems,” Proc. SPIE 0257,
39 (1981).
22. G. L. Peterson, “Analytic expressions for in-field scattered light distributions,” Proc. SPIE 5178, 184–193 (2004).
23. J.E. Harvey, N. Choi, A. Krywonos, G. Peterson, and M. Bruner, “Image degradation due to scattering effects in
two-mirror telescopes,” Opt. Eng. 49(6), 063202 (2010).
24. K. Sun, H.M. Jiang, and X.A. Cheng, “In-field stray light due to surface scattering effects in infrared imaging
systems,” Proc. SPIE 8193, 81933F (2011).
25. G. Ritt, “Laser Safety Calculations for Imaging Sensors,” Sensors 19(17), 3765 (2019).
26. G. Ritt, “Estimation of Lens Stray Light with Regard to the Incapacitation of Imaging Sensors,” Sensors 20(21),
6308 (2020).
27. F. Landini, M. Romoli, S. Fineschi, and E Antonucci, “Stray-light analysis for the SCORE coronagraphs of
HERSCHEL,” Appl. Opt. 45(26), 6657–6667 (2006).
28. J. Caron, M. Taccola, and J.-L. Bezy, “Towards a standardized method to assess stray-light in earth observing optical
instruments,” Proc. SPIE, 10562, 105622B (2017).
29. F. Mariani, J. Caron, M. Taccola, and S. Grabarnik, “StrayLux: an efficient tool for stray-light modelling in optical
Instruments,” Proc. SPIE 11180, 1118089 (2019).
30. F. E. Nicodemus, J.C. Richmond, J.J. Hsia, I.W. Ginsberg, and T. Limperis, “Geometric considerations and
nomenclature for reflectance,” National Bureau of Standards, NBS Monograph 160 (1977).
31. J. E. Harvey, “Light-Scattering Characteristics of Optical Surfaces,” Ph.D. Dissertation, University of Arizona (1976).
32. R. N. Pfisterer, “Approximated Scatter Models for Stray Light Analysis,” Opt. Photonics News 22(6), 16–17 (2011).
33. J. Vaillant, A. Crocherie, F. Hirigoyen, A. Cadien, and J. Pond, “Uniform illumination and rigorous electromagnetic
simulations applied to CMOS image sensors,” Opt. Express 15(9), 5494–5503 (2007).
34. A. Crocherie, P. Boulenc, J. Vaillant, F. Hirigoyen, D. Hérault, and C. Tavernier, “From photons to electrons: a
complete 3D simulation flow for CMOS image sensor,” IEEE International Image Sensor Workshop (2009).
35. Y. Huo, C. C. Fesenmaier, and P. B. Catrysse, “Microlens performance limits in sub-2µm pixel CMOS image sensors,”
Opt. Express 18(6), 5861–5872 (2010).
36. T. Jung, Y. Kwon, and S. Seo, “A 4-tap global shutter pixel with enhanced IR sensitivity for VGA time-of-flight
CMOS image sensor,” in Imaging Sensors and Systems Conference (2020).
37. F. Hirigoyen, J. Vaillant, E. Huss, F. Barbier, J. Prima, F. Roy, and D. Hérault, “1.1 µm Backside Imager vs. Frontside
Imager: an optics-dedicated FDTD approach,” in IEEE Int. Image Sensor Workshop (2009), pp. 1–4.
38. A.H. Wang, P.F. Hsu, and J.J. Cai, “Modeling bidirectional reflection distribution function of microscale random
rough surfaces,” J. Cent. South Univ. Technol. 17(2), 228–234 (2010).
39. Y. Xuan, Y. Han, and Y. Zhou, “Spectral Radiative Properties of Two- Dimensional Rough Surfaces,” Int. J.
Thermophys. 33(12), 2291–2310 (2012).
40. K.F. Warnick and W.C. Chew, “Numerical simulation methods for rough surface scattering,” Waves Random Media
11(1), R1–R30 (2001).
41. M. A. Marciniak, S. R. Sellers, R. B. Lamott, and B. T. Cunningham, “Bidirectional scatter measurements of a
guided-mode resonant filter photonic crystal structure,” Opt. Express 20(25), 27242–27252 (2012).
42. Y. Wang, S. Safavi-Naeini, and S. K. Chaudhuri, “A hybrid technique based on combining ray tracing and FDTD
methods for site-specific modeling of indoor radio wave propagation,” IEEE Trans. Antennas Propagat. 48(5),
743–754 (2000).
43. Y. Kawaguchi, K. Nishizono, J. S. Lee, and H. Katsuda, “Light extraction simulation of surface-textured light-emitting
diodes by finite-difference time-domain method and ray-tracing method,” Jpn. J. Appl. Phys. 46(1), 31–34 (2007).
44. L. W. Cheng, Y. Sheng, C. S. Xia, W. Lu, M. Lestrade, and Z. M. Li, “Simulation for light power distribution of 3D
InGaN/GaN MQW LED with textured surface,” Opt. Quantum Electron. 42(11-13), 739–745 (2011).
45. X. Ding, J. Li, Q. Chen, Y. Tang, Z. Li, and Binhai Yu, “Improving LED CCT uniformity using micropatterned films
optimized by combining ray tracing and FDTD methods,” Opt. Express 23(3), A180–A191 (2015).
46. C. Leiner, W. Nemitz, F. P. Wenzl, P. Hartmann, U. Hohenester, and C. Sommer, “A simulation procedure for
light-matter interaction at different length scales,” Proc. SPIE 8429, 84290L (2012).
47. C. Leiner, S. Schweitzer, V. Schmidt, M. Belegratis, F.- P. Wenzl, P. Hartmann, U. Hohenester, and C. Sommer,
“Multiscale simulation of an optical device using a novel approach for combining ray-tracing and FDTD,” Proc. SPIE
8781, 87810Z (2013).
48. C. Leiner, S. Schweitzer, F-P. Wenzl, P. Hartmann, U. Hohenester, and C. Sommer, “A Simulation Procedure
Interfacing Ray-Tracing and Finite-Difference Time-Domain Methods for a Combined Simulation of Diffractive and
Refractive Optical Elements,” J. Lightwave Technol. 32(6), 1054–1062 (2014).
49. C. Sommer, C. Leiner, S. Schweitzer, F.-P. Wenzl, U. Hohenester, and P. Hartmann, “A combined simulation approach
using ray-tracing and finite-difference time-domain for optical systems containing refractive and diffractive optical
elements,” Proc. SPIE 9193, 91930F (2014).
50. G. Chataignier, B. Vandame, and J. Vaillant, “Joint electromagnetic and ray-tracing simulations for quad-pixel sensor
and computational imaging,” Opt. Express 27(21), 30486–30501 (2019).
51. https://www.lumerical.com/
52. https://www.zemax.com/
53. A. Taflove and S. C. Hagness, “Computational electrodynamics: the finite-difference time-domain method,” (Artech
House, Norwood, 2005).
54. D. Guarnera, C. Guarnera, A. Ghosh, C. Denk, and M. Glencross, “BRDF Representation and Acquisition,”
Proceedings of the 37th Annual Conference of the European Association for Computer Graphics: State of the Art
Reports, pp. 625–650 (2016).
55. http://www.lighttec.fr/scattering-measurements/
56. https://www.perkinelmer.com/category/molecular-spectroscopy-instruments