4th International Congress
of Croatian Society of Mechanics
September, 18-20, 2003
Bizovac, Croatia
THE ASSESSMENT OF STRUCTURED LIGHT AND LASER
SCANNING METHODS IN 3D SHAPE MEASUREMENTS
Stjepan Jecić, Nenad Drvar
Keywords: 3D scanning, structured light, laser, accuracy
1. Introduction
The development of optical 3D shape measurement methods is rapidly gaining importance as
industry raises its demands for high technical performance of final products, short production times,
low manufacturing costs and overall product quality. This development is clearly witnessed by the
vast number of research papers published over the last 20+ years [2], as well as by the number of
commercially available measurement sensors [8]. During that time, not all efforts were directed at
inventing new measurement technologies; most were dedicated to refining existing knowledge and
thus improving the measurement accuracy of existing sensors.
Apart from scientific work dedicated to this field, the pace of development was further boosted
by the availability of cheap yet powerful desktop microcomputers, low-cost CCD and, more recently,
CMOS sensors, cheap and eye-safe low-power laser sources and various kinds of optical components.
Clearly, both the hardware and the software components of measurement sensors were improved over
that period.
A typical example of free-form surfaces that must satisfy high aesthetic, ergonomic and techni-
cal demands is found in the automotive industry, where product design changes on a daily basis. Since
manual surface modeling requires a large investment of time and money, it motivates further devel-
opment of 3D optical shape measurement techniques. This has led to many different specialized
types of 3D scanners [8], developed in conjunction with actual and often very specific industrial
needs. However, industry is not the only motivator of this development: modern medicine, heritage
preservation, architecture and other end users also recognize the potential of 3D sensors.
Nowadays, two mainstream non-contact optical measurement techniques are well established,
with high technical and economic performance: those based on projected fringes and those based on
laser scanning.
This paper presents and critically assesses the structured light and laser scanning methods for
3D shape measurement, the core technologies of currently widespread 3D measuring sensors. This
is done by analyzing the basic principles and potential error sources of both methods, together with
their current state of application and the measurement accuracy achieved by commercially available
shape measuring sensors. Advantages and disadvantages are critically examined with respect to
various aspects: sensor types, method application, data acquisition conditions, measurement range,
object reflectance, automation, accuracy, spatial resolution, method maturity, measurement planning
and overall measurement cost.
2. Measurement principles overview
Based on the means that currently available commercial vision systems exploit to obtain object
coordinates, they can be classified as passive or active. Passive vision systems use the information
contained in intensity-coded images to obtain discrete object coordinates (e.g. classical photogram-
metry), thus achieving high accuracy on well-defined object/image features such as coded targets or
artificial and natural object texture and edges. Surfaces without such characteristic markings cannot
be successfully measured with this type of sensor, which narrows its applicability, so it will not be
analyzed here. Active vision systems, however, obtain measurement information regardless of the
object's visual features, from the additional information provided by spatial and temporal active
encoding using structured light or laser beam projection techniques.
Both vision system types consist of similar optical components and thus have similar sources of
error, but these sources do not affect the accuracy of active and passive vision systems in the same
way. In a passive vision system, image acquisition through the optical lens system, the features that
define the object surface and geometry, the influence of ambient light and the feature detection
methods can all be regarded as functions of image coordinate measurement in the 2D image field,
while sensor calibration provides the information required to locate the 3D object coordinates. In an
active vision system, however, the sensor plays an active role in the definition of the measurement
point and its measurement range, so the applied object point coding method, together with the sensor
calibration and the object features, affects the accuracy and repeatability of locating the 3D object
coordinates.
However, regardless of the principle by which these systems obtain measurement information,
both are still based on the ancient geometric triangulation principle for determining the actual object
point coordinates, which still provides high measurement accuracy. The only difference lies in how
the data sufficient for the triangulation procedure is obtained, e.g. a single laser beam spot or a
unique surface phase map produced via phase shifting.
A typical elementary active vision system consists of an active light source, a detection unit and
a data processing unit. It can therefore be expected that similar drawbacks affect both the laser and
the structured light scanning methods.
Light sources used for measurement spot encoding emit either coherent light, such as laser
beams, or non-coherent structured light, or are the result of a third-party production process, such as
the plasma spot in laser/plasma ablation. The number and shape of light spots emitted by light
sources can vary from a single point or line to a series of fringes. Therefore, the measurement speed,
spatial resolution, accuracy and size of the single-view measurement volume depend greatly upon
the chosen light projection method.
Modern detection units are often assembled from a rectangular array of photodiodes used to
record the spatial and temporal position of the light spot on the object surface. Regardless of the
sensor type, lens distortion introduces an additional source of measurement error.
Data processing units are usually commercially available microcomputers capable of both
on-line and off-line analysis of the optically gathered data. On-line processing might introduce la-
tency in time-dependent measurement techniques (e.g. time-of-scan laser triangulation or time-
of-flight airborne earth scanning), or even a complete loss of measurement data, thus increasing the
time and total cost of the measurement process. A common mistake, originating from the speciali-
zation of research activities, is the attitude that most of the known physical drawbacks of a scanning
system can be solved by proper software routines.
2.1 Laser scanner principle and error sources
Principles of laser scanner operation are sufficiently described throughout the literature [1,6,9],
so only a brief description is given here, with the focus placed on the sources of measurement error.
Figure 1. Laser scanner principles: (a) static detection unit recording the reflected beam, (b) synchronized scanner with synchronous motion of laser and detector.
Most laser scanner systems are based on the principle whereby one or more static detection
units record a projected coherent laser beam reflected off the object surface, Fig. 1a. An extension of
this principle is the synchronized scanner approach, Fig. 1b, where both the laser and the detector
move synchronously. The shape of the beam projected by modern sensors varies from a single spot
or line (slit) to a series of parallel lines, Fig. 2. Provided that the relative orientation geometry of the
optical components is known (obtained by prior sensor calibration), the object coordinates of the
projected laser beam can be easily calculated by applying triangulation techniques.
Figure 2. Projected beam shapes: single spot, line (slit), series of parallel lines. Figure 3. Backscattering components of light incident on a real surface.
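To make the triangulation step concrete, the following minimal sketch computes the object coordinates of a single projected spot in the plane containing the laser and the camera. The geometry and all numeric values are illustrative assumptions, not any particular sensor's model; real sensors use a fully calibrated 3D model of the relative orientation.

```python
import math

def laser_spot_xz(b, alpha, f, u):
    """Planar laser triangulation, a minimal sketch.

    b     -- baseline between laser source and camera center [mm]
    alpha -- laser beam angle measured from the baseline [rad]
    f     -- camera focal length [mm]
    u     -- image offset of the detected spot from the principal point [mm]

    Assumes (for illustration) that the camera optical axis is
    perpendicular to the baseline; returns the (x, z) coordinates of
    the illuminated object point, with x measured along the baseline.
    """
    beta = math.pi / 2.0 - math.atan2(u, f)   # viewing angle at the camera
    # Sine rule in the triangle laser--camera--spot (the third angle is
    # pi - alpha - beta): depth of the spot above the baseline.
    z = b * math.sin(alpha) * math.sin(beta) / math.sin(alpha + beta)
    x = z / math.tan(alpha)                   # position along the baseline
    return x, z

# Illustrative: 100 mm baseline, 60 degree beam angle, 12 mm lens,
# spot detected 1.5 mm from the image center.
print(laser_spot_xz(100.0, math.radians(60.0), 12.0, 1.5))
```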
Commercially available sensors utilize laser beams of different wavelengths, see Table 1, with a
tendency towards low power outputs and eye-safe wavelengths. If the laser is used together with a
CCD sensor, a wavelength of 670 nm is suggested, since it agrees well with the maximum spectral
sensitivity of the CCD sensor.
Manufacturer             3rdTech    Cyra Tech   MetricVision   Optech     Riegl USA
Laser wavelength [nm]    670        532         1550           1540       904
Laser power [mW]         5          1           4              10         1.2-85
Measurement range [m]    0.3-12     1.5-50      0.3-55         1.5-1200   0.3-2500
Accuracy [mm at X m]     10 at 12   6 at 50     0.02           6 at 100   76 at 2400
Cost                     $45,000    $125,000    $360,000       $150,000   $35-85,000
Table 1.
In general, the accuracy of the calculated object points is affected by errors introduced by the
acquisition system geometry, the reflectance of the projected beam together with ambient light
changes, sharp corners and edges, sudden shape discontinuities with respect to the illumination,
sensor occlusions, speckle noise and inaccurate location of the projected line/point center.
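The last of these error sources can be made concrete with the standard intensity-weighted centroid estimator, sketched below. This is an illustrative minimal version; commercial sensors refine it with thresholding, Gaussian fitting or the space-time analysis discussed below.

```python
import numpy as np

def spot_center(profile):
    """Sub-pixel center of a laser spot from one image row, estimated
    as the intensity-weighted centroid. Minimal sketch only."""
    i = np.arange(len(profile), dtype=float)
    w = np.asarray(profile, dtype=float)
    w -= w.min()                      # crude background suppression
    return float((i * w).sum() / w.sum())

# An ideal symmetric spot: the centroid recovers the true peak position.
row = np.exp(-0.5 * ((np.arange(32) - 15.3) / 2.0) ** 2)
print(spot_center(row))              # approximately 15.3
```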
Since shape measurements nowadays take place under many different illumination conditions,
the use of a spatially coherent, bright laser source is well justified. The obtained light is monochro-
matic, highly directional and capable of staying in focus when projected onto an object surface. But
the speckle effect, resulting from the projection of a spatially coherent light beam onto an optically
rough surface, introduces a variation in the shape of the spot image and thus an error in point trian-
gulation.
Speckles are a function of the local surface micro-topology, so spatial triangulation analysis
alone does not provide sufficient accuracy. Triangulation of points obtained by the space-time
analysis method [4] shows better accuracy, and is also capable of eliminating the problem of the
Gaussian point disappearing at sharp object edges.
Spatial coherence is a feature of the projecting device, so the only ways of avoiding coherence
noise are to alter the object surface so that the observed returned light spot shows incoherent prop-
erties, or to use an incoherent light source. Figure 4 shows the drastic effect of using non-coherent
fluorescent light (right graph) versus measurement with a classical laser source (left graph) across a
milled surface [7].
Figure 4. Measurement across a milled surface: classical laser source (left) versus non-coherent fluorescent light (right) [7].
Considering that triangulation assumes the laser beam source and the observation unit are not
coaxial, the backscattering from real opaque and diffusely reflecting surfaces has to be taken into
consideration, Fig. 3. Incident light falling on a real surface is distributed among the following
components: a retro-reflective component, a Lambertian component, heat and a specularly diffused
component. The weights of these components depend upon the surface properties. There are also
materials, such as marble (sculptures), whose structure allows light to scatter beneath the material
surface, leading to a bias in the distance measurement and an increase in the noise level.
If the surface of the measured object is not specially treated to suit the measurement needs, it is
also possible to obtain regions of high reflectance bordering regions of low reflectance. Figure 5
shows the effect of such a reflectance change on the position of the true projected spot centroid
versus the actually measured centroid.
Figure 5. Effect of a reflectance change on the position of the true projected spot centroid versus the actually measured centroid.
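The magnitude of this centroid shift is easy to reproduce numerically. The sketch below (all values illustrative) multiplies an ideal Gaussian spot profile by a reflectance step and compares the intensity-weighted centroid with the true spot center:

```python
import numpy as np

x = np.arange(64, dtype=float)
true_center = 31.5
spot = np.exp(-0.5 * ((x - true_center) / 3.0) ** 2)   # ideal projected spot

# Reflectance step under the spot: dark region on the left, bright on
# the right (illustrative reflectance values 0.2 and 1.0).
observed = spot * np.where(x < true_center, 0.2, 1.0)

centroid = (x * observed).sum() / observed.sum()
print(centroid - true_center)   # bias of ~1.6 px toward the bright region
```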
2.2 Structured light scanner principle and error sources
The previous chapter outlined the effect of a spatially coherent laser source beam on the meas-
urement point definition, and thus on the triangulation accuracy. Structured light sensors, Table 2,
usually utilize visible non-coherent light sources for object point coding, projected over the whole
camera field of view, and are thus able to measure on the order of a million points within a single-
view measurement. Because of the non-coherent light source, there is no speckle effect affecting the
recorded images, but the light intensity decreases rapidly with distance from the source.
Early types of sensors consisted of a single camera and a single projecting device, Fig. 6 [10],
but since two cameras provide an over-determined mathematical triangulation model [5], sensors
with two cameras of the same focal distance are nowadays more widespread. If more than a single-
view measurement is needed, single camera systems, unlike dual camera systems, require a precise
turntable or robotic positioning devices, since they cannot exploit the passive photogrammetric
principles that require more than one observation of the same visually coded object spot. However,
numerical registration of raw 3D points in different coordinate systems can replace the need for
precise mechanical alignment units, provided there is a sufficient overlapping surface area with
distinctive features. Calculation of the object point position is still based on triangulation tech-
niques, usually on the principles of epipolar geometry, Fig. 7.
Figure 6. Early sensor layout with a single camera and a single projecting device [10]. Figure 7. Epipolar geometry of point correspondence in a dual camera sensor.
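The triangulation itself can be sketched with the standard linear (DLT) formulation, assuming two calibrated pinhole cameras given by their 3x4 projection matrices. This is a minimal sketch of the principle only; real systems additionally correct lens distortion and refine the result numerically.

```python
import numpy as np

def triangulate(P1, P2, m1, m2):
    """Linear triangulation of one corresponding pixel pair.

    P1, P2 -- 3x4 camera projection matrices from sensor calibration
    m1, m2 -- matched pixel coordinates (u, v) in the two images
    Returns the 3D object point in the calibration coordinate system.
    """
    A = np.vstack([
        m1[0] * P1[2] - P1[0],
        m1[1] * P1[2] - P1[1],
        m2[0] * P2[2] - P2[0],
        m2[1] * P2[2] - P2[1],
    ])
    # Four equations for three unknowns: the over-determined model
    # mentioned above, solved in the least-squares sense via SVD.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Illustrative usage with two simple calibrated cameras:
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # offset camera
print(triangulate(P1, P2, (0.0, 0.0), (-0.25, 0.0)))        # -> [0, 0, 4]
```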
During the measurements, the relative orientation between the camera(s) and the projector is
supposed to remain constant. This constraint is used as an assumption for a successful calibration of
the sensor, i.e. determination of the intrinsic and extrinsic camera orientation parameters [5]. Taking
into account the finite camera sensor size and resolution and the distance of the image plane from
the object plane, the spatial resolution (the number of, and spatial distance between, measurement
points in a single measurement) is a direct function of the number of pixels in the cameras used.
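For example (illustrative numbers only), the lateral spacing of measurement points in a single view follows directly from the width of the field of view and the pixel count:

```python
# Illustrative: a camera with 1280 pixels across, imaging a 250 mm wide
# field of view, yields roughly one measurement point every 0.2 mm.
field_width_mm = 250.0
pixels_across = 1280
print(field_width_mm / pixels_across)   # ~0.195 mm point spacing
```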
The fundamental problem of structured light projecting technology is the correspondence prob-
lem: to obtain triangulation points, one needs to locate, for each pixel m in the left image, the cor-
responding pixel m' in the right image, Fig. 7. The projector's purpose is only to provide a unique
definition of matchable object points, hence for a dual camera system the projector does not neces-
sarily have to be calibrated.
The correspondence problem is solved by projecting a series of images consisting of some sort
of structured pattern, Fig. 6 [5]. The motivation is to obtain unambiguous point (or stripe) indexing
under all illumination conditions, regardless of the size of the measurement volume, object shape,
surface color and reflection properties. The development of LCD projectors with resolution equal to
or better than that of the cameras used has led to systems based on projections of randomly distrib-
uted gray patterns, or even colored patterns [3].
Figure 8. Gray code-based stripe patterns.
Phase shifting methods, namely Gray code-based, Fig. 8, or heterodyne methods, are based on
multiple projections of various stripe patterns that provide continuous phase maps, thus solving the
correspondence problem [5].
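As an illustration of the stripe indexing step, the following sketch generates and decodes the binary-reflected Gray code; projecting n such bit patterns lets every camera pixel recover its stripe index from the n recorded bright/dark states. This is a minimal sketch of the indexing alone, without the phase-shift refinement.

```python
def gray_encode(index):
    """Binary-reflected Gray code of a stripe index."""
    return index ^ (index >> 1)

def gray_decode(code):
    """Recover the stripe index from its Gray code."""
    index = 0
    while code:
        index ^= code
        code >>= 1
    return index

n_bits = 7                        # 7 projected patterns -> 128 stripes
for stripe in range(2 ** n_bits):
    code = gray_encode(stripe)    # bit k = bright/dark in pattern k
    assert gray_decode(code) == stripe
```

Since neighboring stripes differ in exactly one bit, a detection error at a stripe boundary displaces the index by at most one stripe, which is the main reason this code is preferred over plain binary patterns.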
Ideally, measurement surfaces should be evenly illuminated, opaque and bright Lambertian sur-
faces, Fig. 9, but they are usually dark or shiny, with varied coloring, unevenly illuminated and with
self-occluded areas.
Figure 9. Reflection from a curved surface: an area whose normal coincides with a camera axis produces a bright spot.
To overcome possible sensor occlusions due to the model geometry and curved surface reflec-
tions, current dual camera systems could be extended to operate as a combined system based on two
single cameras and a calibrated projector device. Reflection from flat surfaces can easily be reduced
by a small change of sensor orientation, but curved surfaces always have areas whose normal coin-
cides with at least one of the camera axes, thus producing a bright spot regardless of the sensor ori-
entation, Fig. 9.
Table 2 presents performance data for several commercially available structured light sensors.
Manufacturer               Product          Accuracy [mm]   Measurement volume               Speed
GOM                        Atos II          0.005-0.02      35x28x20 mm to 1200x960x960 mm   1,300,000 points in 7 seconds
Breuckmann GmbH            OptoTOP-HE100    0.015           80x60x50 mm                      1,300,000 points/second
Breuckmann GmbH            OptoTOP-HE600    0.050           480x380x300 mm                   1,300,000 points/second
Genex                      EI 3D Digitizer  0.025-0.25      59x48x32 mm to 250x200x200 mm    442,368 points in <1 second
Steinbichler Optotechnik   Comet C50        0.02            45x35 mm                         6666 points/second
Steinbichler Optotechnik   Comet C400       0.07            420x340 mm                       6666 points/second
Table 2.
3. Comparison
Evaluation of either method's actual accuracy can be accomplished only by comparing it against
an equivalent reference method whose accuracy, and possibly resolution, exceeds the accuracy
achievable by the tested method. The previous chapters illustrated the influence of various parame-
ters on that accuracy, so the presented methods will not be evaluated solely by their achievable ac-
curacy, but rather by overall method maturity and their end application issues.
For numerous applications the already achievable measurement accuracy satisfies the meas-
urement needs, so method maturity, degree of automation, overall cost of measurement, planning
time and learning curve should be the basis for deciding in which system to invest. In support of this,
it is often forgotten that measurement is just the initial part of the shape analysis process, so it is also
important to evaluate the quality of the digitized data separately from the quality of the measured
point cloud: not just which method is used for a specific measurement purpose, but in which way it
is used to obtain a complete model with all of its artifacts in the shortest possible time and with the
necessary point cloud density. To illustrate this, let us review a rudimentary laser scanner, as seen in
Fig. 1. The principle behind the active laser scanner consisting of a single camera and a projecting
laser source allows it to acquire dense surface 3D information, but from a single view only. If a scan
of a complete object's shape is needed, then some sort of turntable or other means of controlled
mechanical sensor or object movement is required to cover the whole measurement volume. This
limits laser scanner usage to objects that are transportable and/or able to fit within the sensor's
measurement frame. However, such specialized scanners require less user intervention than other
all-purpose scanners, thus minimizing scanning time and operator-introduced errors. Notice that the
term measurement volume now has an extended meaning, covering the complete object's shape. The
overall accuracy of such scanners now depends not only upon the drawbacks of the primary method,
but also on the accuracy of the point cloud alignment and registration.
Extension of the measurement methods to systems with two or more cameras allows the integra-
tion of passive photogrammetric principles for whole-shape measurement. If the registration is con-
ducted with just several photogrammetrically measured reference points, then the possible error of
the separate point cloud alignment might easily be bigger than the method's single-view accuracy.
The application of unique coded points to the object's surface requires a certain amount of time and
experience, but enables in-situ full-shape measurements of objects ranging up to a couple of square
meters, something that laser sensors limited by their framed construction cannot perform. In general,
laser scanners usually (but not necessarily) are frame-based, while structured light sensors can orient
separate measurements by additional optical measurements based on photogrammetry.
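The alignment step referred to above can be sketched as the classical least-squares rigid-body fit between corresponding reference points, here in the Kabsch/Horn (SVD) formulation; a minimal sketch assuming at least three non-collinear correspondences, not any particular vendor's registration procedure:

```python
import numpy as np

def rigid_fit(A, B):
    """Rotation R and translation t minimizing ||R A[i] + t - B[i]||.

    A, B -- (N, 3) arrays of corresponding reference point coordinates
    measured in the two coordinate systems (N >= 3, not collinear).
    """
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

The residuals of such a fit also give a direct estimate of the point cloud alignment error discussed above.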
The accuracy of laser scanners varies only slightly with the measurement distance, making them
useful for various measurement tasks at ranges of several meters, especially if there is a sudden
change of illumination conditions. Structured light scanners of similar accuracy are characterized by
a smaller measurement volume, but in conjunction with photogrammetric measurements this draw-
back can be reduced. From the point of view of the end user of the acquired data, measurement of
large objects by structured light generates a huge amount of redundant data, which requires extra
measurement time and large storage space, and is computationally costly during cloud alignment,
registration and refinement procedures as well as during post-processing feature extraction. Because
of the physical resolution of their cameras and, mostly, because of the significant influence of am-
bient illumination, laser scanners perform better than structured light scanners for large physical
measurements.
Since the projection of the various structured light patterns requires a certain period, usually
longer than a single second, the laser method performs better in real-time control measurements
where there is a continuous and repeatable measurement task.
4. Conclusion
Which sensor type and measurement technique will be used for a specific measurement task
must be decided upon the characteristic requirements of the post-processing needs, taking into ac-
count the size of the measured object, the required resolution, accuracy, robustness and acquisition
time, as well as the total cost of measurement. Judging by commercially available sensors, both
presented methods have proved competitive and have achieved the robustness and accuracy, some-
times better than 0.01 mm, required for current industrial needs; with regard to the size of the meas-
urement volume, however, structured light methods are more suitable for smaller objects of irregular
surface geometry, while lasers can successfully measure objects several meters in range.
If we extrapolate the preceding development pace, it becomes clear that the application of opti-
cal shape measurement will continue to expand. There is a general trend towards universal multi-
sensor and multi-data measurement systems, which will potentially result in a higher level of inte-
gration of the currently competing techniques. Such integration would result in highly versatile sys-
tems that would widen their current application potential.
References
[1] Beraldin, J-A. et al., "Active 3D Sensing", Modelli E Metodi per lo studio e la conservazione dell'architettura storica, NRC 44159, 2000, pp 22-46
[2] Blais, F., "A Review of 20 Years of Range Sensor Development", Proceedings of SPIE-IS&T, Vol. 5013, 2003, pp 62-76
[3] Brenner, C. et al., "Photogrammetric calibration and accuracy of a cross-pattern stripe projector", SPIE Videometrics VI, Vol. 3641, 1999, pp 164-172
[4] Curless, B., Levoy, M., "Better Optical Triangulation through Spacetime Analysis", Proceedings of the 5th International Conference on Computer Vision, 1995, pp 987-994
[5] Gomerčić, M., "Doprinos automatskoj obradi optičkog efekta u eksperimentalnoj analizi naprezanja" (A contribution to the automatic processing of the optical effect in experimental stress analysis), 531.715-3:531.717.2, FSB Zagreb, 1999
[6] El-Hakim, S.F. et al., "A Comparative Evaluation of the Performance of Passive and Active 3-D Vision Systems", SPIE Proceedings Vol. 2646, Conference on Digital Photogrammetry, 1995, pp 14-25
[7] Häusler, G. et al., "New Range Sensors at the Physical Limit of Measuring Uncertainty", Proc. of the EOS Topical Meeting on Optoelectronics Distance Measurements and Applications, 1997
[8] Raindrop Geomagic, "3D Scanner Report", http://www.geomagic.com
[9] Toedter, O., Koch, A. W., "A simple laser-based distance measuring device", Measurement, Vol. 20, No. 2, 1997, pp 121-128
[10] Trobina, M., "Error Model of a Coded-Light Range Sensor", Technical Report BIWI-TR-164, ETH Zurich, 1995
Stjepan Jecić
Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, Chair of Experimental
Mechanics, P.O. Box 102, 10002 Zagreb, Croatia, Tel. +385 1 6168 105, E-mail: stjepan.jecic@fsb.hr
Nenad Drvar
Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, Chair of Experimental
Mechanics, P.O. Box 102, 10002 Zagreb, Croatia, Tel. +385 1 6168 447, E-mail: nenad.drvar@fsb.hr