STP 1594-2016
Autonomous
Industrial Vehicles:
From the Laboratory to the
Factory Floor
STP1594
Editors:
Roger Bostelman
Elena Messina
www.astm.org
Selected Technical Papers
STP1594
Autonomous Industrial
Vehicles: From the
Laboratory to the
Factory Floor
ASTM Stock #1594
DOI: 10.1520/STP1594-EB
ASTM International, 100 Barr Harbor Drive, PO Box C700, West Conshohocken, PA 19428-2959.
Printed in the U.S.A.
Library of Congress Cataloging-in-Publication Data
Copyright © 2016 ASTM INTERNATIONAL, West Conshohocken, PA. All rights reserved. This material
may not be reproduced or copied, in whole or in part, in any printed, mechanical, electronic, film, or other
distribution and storage media, without the written consent of the publisher.
Photocopy Rights
Authorization to photocopy items for internal, personal, or educational classroom use, or the internal,
personal, or educational classroom use of specific clients, is granted by ASTM International provided that
the appropriate fee is paid to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923,
Tel: (978) 646-2600; http://www.copyright.com/
The Society is not responsible, as a body, for the statements and opinions expressed in this publication.
ASTM International does not endorse any products represented in this publication.
Citation of Papers
When citing papers from this publication, the appropriate citation includes the paper authors, “paper title,”
STP title, STP number, book editor(s), ASTM International, West Conshohocken, PA, year, page range,
paper DOI listed in the footnote of the paper. A citation is provided on page one of each paper.
Foreword
THIS COMPILATION OF Selected Technical Papers, STP1594, Autonomous Industrial
Vehicles: From the Laboratory to the Factory Floor, contains peer-reviewed papers
that were presented at a workshop held May 26–30, 2015, in Seattle, Washington,
USA. The workshop was sponsored by ASTM International Committee F45 on
Driverless Automatic Guided Industrial Vehicles.
Roger Bostelman
Elena Messina
National Institute of Standards and Technology
Gaithersburg, MD, USA
Contents
Overview
Acknowledgments
Overview
Automatic guided vehicles (AGVs) were one of the earliest applications for mobile
robots. The first AGVs were deployed in the 1950s to transport materials in large
facilities and warehouses. Mobile robot capabilities have advanced significantly
in the past decades. This progress is due in large part to researchers at
technical universities who have made tremendous strides in applying computer
control and sensors to mobile platforms for uses in applications such as
manufacturing, health care, military, and emergency response. As industrial
vehicles gained more capabilities, the "A" in AGV began to transition from
"automatic" to "automated" in informal usage. This mirrors the progress in guided
vehicles in areas such as safety sensing and reacting. Further advancements in
mobile robotics, such as in more general-purpose sensing, planning,
communications, and control, are paving the way for an era where the "A" stands
for "autonomous." This evolution in onboard intelligence has greatly expanded the
potential scope of applications for AGVs and thus raised the need for standard
means of measuring performance.
A new committee was formed under ASTM International to develop these missing
standards for measuring, describing, and characterizing performance for this new
breed of AGVs. ASTM's Committee F45 on "Driverless Automatic Guided Industrial
Vehicles" (http://www.astm.org/COMMITTEE/F45.htm) is scoped to include
standardized nomenclature and definitions of terms, recommended practices,
guides, test methods, specifications, and performance standards for AGVs. These
new performance standards will complement the ongoing work in AGV safety
standards by the Industrial Truck Standards Development Foundation [1], the
British Standards Institution [2], and others. F45 is addressing areas that are
important for potential AGV users to understand when making purchase and task
application decisions. Therefore, the committee is divided into five technical
subcommittees that focus on the key areas of interest for the community:
F45.01 Environmental Effects
F45.02 Docking and Navigation
F45.03 Object Detection and Protection
F45.04 Communication and Integration
F45.91 Terminology
The first event organized by the ASTM F45 Committee was a workshop intended
to foster communication between researchers and practitioners and was held at the
2015 Institute of Electrical and Electronic Engineers International Conference on
Robotics and Automation (ICRA). Organized by Roger Bostelman of the National
Institute of Standards and Technology and Pat Picariello from ASTM International,
the workshop “Autonomous Industrial Vehicles: From the Laboratory to the Factory
Floor” solicited researcher input for the development of consensus standards and
sought to educate researchers about a standards-based mechanism for rapid technol-
ogy transfer from the laboratory to industry.
This book comprises expanded versions of selected papers presented at the ICRA
workshop. The workshop and this book feature perspectives from related standards
efforts, industry needs, and cutting-edge research. The first chapter, "Towards
Development of an Automated Guided Vehicle Intelligence Level Performance
Standard" by Bostelman and Messina, sets the stage by reviewing standards
development for other mobile robot application domains, such as emergency
response, and suggests approaches for tackling performance measurement for
intelligent AGVs. The authors discuss examples of performance standards that
could be used for vehicle navigation performance and for perception systems
(which would be key components of intelligent vehicles).
Norton and Yanco's chapter, entitled "Preliminary Development of a Test Method
for Obstacle Detection and Avoidance in Industrial Environments," builds on the
first chapter by documenting the process for developing a test method. Their
process starts with building an understanding of the deployment environment
through the development of a taxonomy of relevant obstacles. The key
characteristics are abstracted to create reconfigurable artifacts for conducting
tests that are representative of robot tasks. Statistical significance of
performance data and other key aspects necessary for successful test methods are
also considered.
One of the challenges of deploying AGVs in unstructured facilities is the
possibility of obstacles appearing not just on the ground but also above the
floor. To broaden obstacle detection capabilities, Hedenberg and Åstrand
implemented time-of-flight and structured light sensors on an unmanned vehicle
and conducted several experiments to characterize the performance of this
sensing combination in the laboratory and in an industrial setting. The results
of their experiments are presented in "3D Sensors on Driverless Trucks for
Detection of Overhanging Objects in the Pathway," which discusses the
implications of using these sensors in real-world settings.
In the chapter "Multi-AGV Systems in Shared Industrial Environments: Advanced
Sensing and Control Techniques for Enhanced Safety and Improved Efficiency,"
Sabattini et al. tackle the complexities of multiple AGVs operating in
unstructured environments. They do so through fusion of sensor data from the
different vehicles. The fused data produces a global environment representation
that is updated in real time and is used for assigning missions to AGVs and
supporting path planning and obstacle avoidance.
Theobald and Heger's chapter considers the transition from research capabilities
to implementations in industry from an incremental perspective. Their chapter,
entitled "The Safety-to-Autonomy Curve: An Incremental Approach to Introducing
Automation to the Workforce," proposes gradual implementation of automation for
robotic systems. Starting with the deployment of the necessary safety systems,
which include sensing and supporting algorithms, the authors advocate leveraging
the sensor data from the safety systems to accumulate information and knowledge
about the environment and humans. Thus, the robots learn how to navigate and
behave on an ongoing basis, building confidence in the industry to allow
incremental adoption.
The criticality of robust sensing to enable advanced performance and safety for
AGVs heightens the importance of measuring how well a sensor system performs.
Performance test methods must have a basis for comparison to a reference—or
ground truth—system that is typically ten times better than the system under
test. The chapter by Bostelman et al., "Dynamic Metrology Performance
Measurement of a Six Degrees-of-Freedom Tracking System Used in Smart
Manufacturing," describes a method for evaluating the accuracy of a potential
ground truth system.
The chapter "Harmonization of Research and Development Activities Toward
Standardization in the Automated Warehousing Systems" by Kovačić et al.
highlights the role of standards in bridging research and commercialization.
Their work describes a European Commission project in advancing automated
warehousing through a set of freely navigating AGVs in large-scale facilities.
The authors discuss performance standards and benchmarks that can enable
technology transfer from the laboratory to industry.
The book's final chapter, "Recommendations for Autonomous Industrial Vehicle
Performance Standards," by Bostelman, summarizes and synthesizes a discussion
session that was held at the ICRA workshop. The findings from the workshop
presented in this chapter are meant to inform the standardization efforts under
ASTM Committee F45 and accelerate the infusion of intelligence so as to enable
autonomous guided vehicles.
References
[1] ANSI/ITSDF B56.5:2012, Safety Standard for Driverless, Automatic Guided
Industrial Vehicles and Automated Functions of Manned Industrial Vehicles,
November 2012, http://www.itsdf.org
[2] British Standard Safety of Industrial Trucks—Driverless Trucks and Their
Systems, Technical Report BS EN 1525, 1998.
Acknowledgments
The ICRA workshop was supported by the IEEE Technical Committee on Performance
Evaluation and Benchmarking of Robotic and Automation Systems. The editors are
grateful to the reviewers of this book, whose comments greatly improved its
quality.
STP 1594, 2016 / available online at www.astm.org / doi: 10.1520/STP159420150054
Towards Development of an
Automated Guided Vehicle
Intelligence Level Performance
Standard
Citation
Bostelman, R. and Messina, E., “Towards Development of an Automated Guided Vehicle
Intelligence Level Performance Standard,” Autonomous Industrial Vehicles: From the
Laboratory to the Factory Floor, ASTM STP1594, R. Bostelman and E. Messina, Eds., ASTM
International, West Conshohocken, PA, 2016, pp. 1–22, doi:10.1520/STP159420150054
ABSTRACT
Automated guided vehicles (AGVs) have typically been used for industrial
material handling since the 1950s. In the years following, U.S. and European
safety standards have been evolving to protect nearby workers. However, no
performance standards have been developed for AGV systems. In our view,
lessons can be learned for developing such standards from the research
and standards associated with mobile robots applied to search and rescue and
military applications. Research challenge events, tests and evaluations, and
intelligence-level efforts have also occurred that can support industrial AGV
developments into higher-level intelligent systems and provide useful standards
development criteria for AGV performance test methods. This chapter provides
background information referenced from all of these areas to support the need
for an AGV performance standard.
Keywords
standards, performance, mobile robot, automated guided vehicle (AGV)
Manuscript received June 16, 2015; accepted for publication August 11, 2015.
1 National Institute of Standards and Technology, 100 Bureau Dr., Gaithersburg, MD 20899-8230
2 ASTM Workshop on Autonomous Industrial Vehicles: From the Laboratory to the Factory Floor on
May 26–30, 2015 in Seattle, Washington.
Copyright © 2016 by ASTM International, 100 Barr Harbor Drive, PO Box C700, West Conshohocken, PA 19428-2959.
Introduction
Automated guided vehicles (AGVs) have been used since 1953. In the years
following, “AGVs have evolved into complex material handling transport vehicles
ranging from mail handling AGVs to highly automatic trailer loading AGVs using
laser and natural target navigation technologies” [1]. Potential users of AGV
technology know that AGVs are safe when AGV manufacturers conform to the
American National Standards Institute/Industrial Truck Standards Development
Foundation (ANSI/ITSDF) B56.5:2012, Safety Standard for Driverless, Automatic
Guided Industrial Vehicles and Automated Functions of Manned Industrial
Vehicles [2]. However, there are no current standards to directly compare AGV
intelligent performance such that users can fully appreciate their potential AGV
investment without independent tests and evaluations.
Nonindustrial vehicle applications (e.g., driverless cars, search and rescue
robots, military unmanned vehicles) are rapidly improving their capabilities and
intelligence, thus providing a clear sense of the advanced features that could be
installed in industrial AGVs. The benefits to AGV users would be enormous if
AGVs gained onboard intelligence that would allow the vehicles to adapt to their
manufacturing facilities instead of vice versa. An intelligence-level performance
standard would benchmark current capability levels and standardize test methods
to do the benchmarking. Benchmarks provide an incentive for AGV developers to
achieve higher performance, which enables them to expand their markets to include
broader applications, such as those within unstructured environments with workers
present.
This chapter proposes methods for measuring AGV performance that can
provide the foundation for a new, voluntary AGV intelligence-level performance
standard. The standard would cover a broad range of AGV classes and include
performance-measurement test methods for estimating vehicle capabilities associ-
ated with the particular vehicle classes. We provide (1) background information
about current standards, including those under development for vehicles in emer-
gency response applications; (2) examples of vehicle challenge events and programs
that have improved autonomous vehicle intelligence; and (3) a list of capabilities to
be considered in a new AGV performance standard.
A performance standard would provide AGV manufacturers with test meth-
ods to estimate performance that could be referenced as part of their product
marketing. The results of the performance test methods, which would measure
capabilities along a spectrum, would also provide insight for manufacturers and
developers who could use the results to guide investments in research and devel-
opment or to target particular market niches. Because there is no current per-
formance standard, users must rely on specifications provided by manufacturers
with, perhaps, no traceable and reproducible basis. This lack of standards-based
performance characterization discourages many potential AGV users from
attempting to automate processes or leads them to purchase AGVs that may not be
appropriate for their particular environments and tasks. Mismatches between expect-
ations and results may require additional capital investment to upgrade the equip-
ment or environment.
As used in this document, “At a minimum, intelligence requires the ability to
sense the environment, to make decisions, and to control actions. Higher levels of
intelligence may include the ability to recognize objects and events, to represent
knowledge in a world model, and to reason about and plan for the future” [3].
Some intelligence measurements are included in the ANSI/ITSDF B56.5 safety
standard. For example, vehicle speed must be reduced when navigating through
confined areas, and control system performance tests must prove that the vehicle
stops prior to contact with human-representative obstacles when they are sensed
using noncontact safety sensors. Specific tests for the latter example in the ANSI/
ITSDF B56.5 standard measure performance of the navigation and safety sensors
when integrated into the vehicle controller so that safe performance is ensured
prior to transferring the vehicle to the user. However, most vehicle capabilities,
which may or may not include safety functions, lack standard means of conducting
performance measurements for reporting to potential users to enable them to make
informed decisions that reduce the risk of adopting AGVs.
Background
Efforts to develop methods for testing and evaluation of the safety performance
of automated and semiautomated vehicles (SSVs) have been ongoing at the
National Institute of Standards and Technology (NIST) and other organizations for
many years. The NIST has supported the ANSI/ITSDF B56.5, B56.11.6 (powered
industrial vehicle operator visibility), and B56.1 (fork trucks) standards [4]. This
project showed that it is possible to make static and dynamic measurements of both
vehicle safety and capability needs. Measurement results have provided a basis to
suggest improvements to the ANSI/ITSDF B56.5 standard.
Similarly, other vehicle programs and challenge events have improved the
intelligence and capabilities of autonomous and SSVs. Additionally, standards for
emergency response robot vehicles have been and continue to be developed.
Examples of relevant efforts are shown in this section, gathered from Internet
searches and the individuals and organizations shown in the acknowledgments
section.
Standard test methods for emergency response robots are being developed under ASTM's Committee for Homeland Security
Applications, Operational Equipment, Robots (E54.08.01). This section provides a list
of those ASTM standards that are most relevant to AGVs [5]. In the descriptions,
approved standards are shown with their standard number preceded by ASTM; work
items are prefixed with WK; and status (as of this writing) of preliminary develop-
ment is designated by V = validating (i.e., checking or proving repeatability) or
P = prototyping (i.e., experimenting with artifacts and procedures to best measure the
particular system capability concerned). The majority of commercially available
robots for response applications have limited onboard intelligence; hence, they are
remotely operated by a human (typically a responder) using displays transmitted back
from the cameras or other sensors onboard. There are some emerging assistive
autonomy capabilities (for example, in stair climbing), but generally speaking, the
response robots require much greater human interaction than AGVs do. The test
methods described in the following sections are designed to be agnostic as to whether
the robot is programmed to run through the tests independently or if it has to be con-
trolled by an operator during the entire test, meaning that a fully autonomous robot
should run through the same tests as one that has to be “driven” by an operator.
In general, the test methods developed under ASTM E54.08.01 consist of the
following elements:
• Apparatus (or prop): A repeatable, reproducible, and inexpensive representa-
tion of tasks that the robot is expected to perform. It should challenge the
robot with increasing difficulty or complexity and be easy to fabricate interna-
tionally to ensure all robots are measured similarly.
• Procedure: A script for the test administrator and the robot operator to follow.
These tests are not intended to surprise anybody. They should be practiced to
improve technique.
• Metric: A quantitative way to measure the capability. For example, complete-
ness of ten continuous repetitions of a task, or terrain figure eights, resulting
in a cumulative distance traversed. Together with the elapsed time, a resulting
rate in tasks/time or distance/time can be calculated (see the sketch after this list).
• Fault conditions: A failure of the robotic system preventing completion of
ten or more continuous repetitions. This could include an inverted robot, a
stuck robot, or failure requiring field maintenance.
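As an illustration of how the metric element reduces raw trial data to rates, here is a minimal sketch in Python. It is not part of any ASTM standard; the field and function names are hypothetical, and only the ten-repetition threshold and the fault conditions come from the list above.

from dataclasses import dataclass

@dataclass
class TrialResult:
    """Raw data recorded by the test administrator during one trial."""
    repetitions: int   # completed continuous repetitions of the task
    distance_m: float  # cumulative distance traversed, in meters
    elapsed_s: float   # elapsed time, in seconds
    faulted: bool      # True if a fault condition ended the trial

def task_rate_per_min(trial: TrialResult) -> float:
    """Completed repetitions per minute."""
    return trial.repetitions / (trial.elapsed_s / 60.0)

def traverse_rate_m_per_s(trial: TrialResult) -> float:
    """Average traverse rate in meters per second."""
    return trial.distance_m / trial.elapsed_s

def trial_valid(trial: TrialResult, min_reps: int = 10) -> bool:
    """A usable trial needs ten or more continuous repetitions and no
    fault condition (e.g., inverted robot, stuck robot, field maintenance)."""
    return trial.repetitions >= min_reps and not trial.faulted

# Example: a terrain figure-eight trial
trial = TrialResult(repetitions=12, distance_m=96.0, elapsed_s=480.0, faulted=False)
if trial_valid(trial):
    print(f"{task_rate_per_min(trial):.1f} reps/min, {traverse_rate_m_per_s(trial):.2f} m/s")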
Starting the analysis off with terminology, many terms defined in the Standard
Terminology for Urban Search and Rescue Robotic Operations (ASTM E2521-07a)
could have applicability to AGVs. For example, ramps, towing, confined area/space,
maneuvering, obstacles, and peak power are just a few potentially relevant and
overlapping terms across the two industries.
Mobility standards listed here may prove relevant to industrial AGVs and mobile
robots by describing test methods and definitions for areas such as environmental
effects, obstacle detection and avoidance, and terminology. Ramps, speed, terrain
types, towing, and so on are all associated with industrial vehicle intelligent safety
and performance capabilities (similar to response robots). The environmental
conditions typically are not expected to be as harsh in the industrial settings that
AGVs encounter, but the concepts for testing mobility are still transferrable.
Standard Test Method for Evaluating Emergency Response Robot Capabilities,
Mobility:
• Confined Area Obstacles: Gaps (ASTM E2801)
• Confined Area Obstacles: Hurdles (ASTM E2802)
• Confined Area Obstacles: Inclined Planes (ASTM E2803)
• Confined Area Obstacles: Stair/Landings (ASTM E2804)
• Confined Area Terrains: Gravel (V) (WK35213)
• Confined Area Terrains: Sand (V) (WK35214)
• Confined Area Terrains: Mud (P)
• Confined Area Terrains: Continuous Pitch/Roll Ramps (ASTM E2826)
• Confined Area Terrains: Crossing Pitch/Roll Ramps (ASTM E2827)
• Confined Area Terrains: Symmetric Stepfields (ASTM E2828)
• Confined Space Terrains: Vertical Insertion/Retrieval Stack with Drops (P)
• Maneuvering Tasks: Sustained Speed (ASTM E2829)
• Maneuvering Tasks: Towing: Grasped/Hitched Sleds (ASTM E2830)
• Maneuvering Tasks: Towing Hitched Trailers (P)
Energy and power standards listed here are relevant to industrial AGVs
and mobile robots. The energy and power measurements are conducted under
somewhat arduous conditions in order to expedite the test process (draining of
the battery) and to represent some typical energy usage profiles in response appli-
cations. Test methods inspired by the ones in ASTM E54 but adapted to AGVs
would ensure all vehicle manufacturers and users conform to the same energy
and power measurement techniques. Although not as potentially catastrophic as
losing a robot that has penetrated a hazardous area due to a dead battery, users of
AGVs need to have predictable and known battery performance to ensure
efficient operations.
Standard Test Method for Evaluating Emergency Response Robot Capabilities,
Energy/Power:
• Endurance Tasks: Confined Area Terrains: Continuous Pitch/Roll Ramps (V)
(WK34433)
• Peak Power Tasks: Confined Area Obstacles: Stairs/Landings (P)
Vehicle communication with the master or warehouse management systems
as used in AGV applications relates to rescue robot communication capabilities.
The following standards can provide the beginning of communication and inter-
ference test methods for industrial AGVs and mobile robots. In particular, being
able to characterize the wireless communications between the vehicle and either
the operator control station or the central factory or warehouse controller is
essential. In typically teleoperated response robot applications, there is need for
constant video streaming from the robot’s onboard cameras to the operator
control station and of motion and other commands from the operator to the
robot. When AGVs have onboard path replanning capabilities (for example, to go
around an obstacle), they may need to provide updates to the centralized control-
ler if they deviate from a programmed path. The electromagnetic environment in
factories and warehouses may be challenging to wireless communications, height-
ening the priority of having a means of characterizing the AGV’s communication
system.
Standard Test Method for Evaluating Emergency Response Robot Capabilities,
Radio Communication:
• Control and Inspection Tasks: Line-of-Sight Environment (ASTM E2854)
• Control and Inspection Tasks: Non-Line-of-Sight Environment (ASTM
E2855)
• Control and Perception Tasks: Structure Penetration Environment (P)
• Control and Perception Tasks: Urban Canyon Environment (P)
• Control and Perception Tasks: Interference Signal Environment (P)
Human-system interaction performance standards have perhaps minimal
relevance on industrial vehicle test methods because AGVs require less and differ-
ent types of human interaction. Nevertheless, there may be some human-robot
interactions, potentially with factory or warehouse workers who need to intervene
with the AGV. Some concepts may be transferrable from the ASTM E54 human-
system interaction test methods to those for AGVs.
Standard Test Method for Evaluating Emergency Response Robot Capabilities,
Human-System Interaction (HSI):
• Search Tasks: Random Mazes with Complex Terrain (ASTM E2853)
• Navigation Tasks: Random Mazes with Complex Terrain (V) (WK33260)
• Search Tasks: Confined Space Voids with Complex Terrain (V) (WK34434)
Sensors are commonly used on AGVs and mobile robots. Response robot
standards, listed here, provide performance test methods for how capable sensors
are when integrated with the vehicle. Test methods also evaluate how well the
control algorithms place sensor data in maps to localize the vehicle and for use in
obstacle detection and avoidance.
Standard Test Method for Evaluating Emergency Response Robot Capabilities,
Sensors:
• Ranging: Spatial Resolution (P)
• Localization and Mapping: Hallway Labyrinths with Complex Terrain (P)
• Localization and Mapping: Wall Mazes with Complex Terrain (P)
• Localization and Mapping: Sparse Feature Environments (P)
The aforementioned standards can measure performance of current teleoper-
ated robots as well as emerging robots with autonomous capabilities. The standards
are already being used to compare capabilities of different rescue, military recon-
naissance, and bomb disposal robots. Based on such comparisons, users can choose
the best system for their needs. Similar standards-based means of comparing
performance of different candidate AGVs are needed.
Challenge events and programs have advanced demonstrations of intelligent
performance for tasks that may occur in real situations. Two such challenges were:
• Virtual Manufacturing Automation Competitions (VMACs) 2007–2009 [16]
* These were workshops and virtual/real AGV competitions based on realistic
factory scenarios.
• DARPA challenge events and programs [23]
* 2004 Grand Challenge: Driverless cars were tasked to autonomously drive
240 km (150 miles) through the Mojave Desert from Los Angeles to Las Vegas.
* Learning Applied to Ground Robots (LAGR) Program [25]: The LAGR program had the goal of accelerating progress in
autonomous, perception-based, off-road navigation in robotic UGVs.
The program had a novel approach of providing all teams with the same
baseline platform and software, which they would augment with their
specific advancements. Regular trials were held to compare each team’s
results against the baseline and the other teams.
* 2007 Urban Challenge: This was a competition on a 96-km (60 mile) urban
area course at George Air Force Base, CA. It required driving autono-
mously while obeying all traffic regulations and negotiating other traffic
and obstacles and merging into traffic.
* 2012–2015 DARPA Robotics Challenge: In this competition, the goal was for robots to perform complex tasks, such as disaster response, in degraded human-engineered environments.
The Autonomy Levels for Unmanned Systems (ALFUS) [29] Ad Hoc Working Group has formulated, through consensus, a
framework within which the different levels of autonomy can be described. The
initial version of the framework was presented at the 2004 ASME International
Mechanical Engineering Congress [26]. Significant progress has been made since
then [28]. However, the complexity of the autonomy-level issue forced the group to
identify additional technical challenges—many of which are active issues in the
research communities [10]. The group agreed that the autonomy levels for
unmanned systems must be characterized using three dimensions: mission com-
plexity, environmental difficulty, and human-robot interaction. The group devised
a three-axis representation for those dimensions. Fig. 1a shows this representation
applied to industrial AGVs, Fig. 1b shows the levels of autonomy, and Fig. 1c shows
a summary score card on which to enter the autonomy level along each axis. We
believe this same model could be used to identify contextual autonomy for AGVs.
FIG. 1 ALFUS model applied to AGVs (a), autonomy levels (b), and autonomy level
summary score graph that incorporates the autonomy level along each axis (c).
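To make the three-axis representation concrete, the sketch below encodes a contextual autonomy score card along the axes named above (mission complexity, environmental difficulty, and human-robot interaction). It is a hypothetical illustration, not an artifact of the ALFUS framework: the 0-10 scale and the aggregation into a single summary number are assumptions made for demonstration only.

from dataclasses import dataclass

@dataclass
class AlfusScoreCard:
    """Contextual autonomy along the three ALFUS axes (assumed 0-10 scale)."""
    mission_complexity: float        # difficulty of the assigned missions
    environmental_difficulty: float  # e.g., fixed aisles vs. mixed traffic
    human_robot_interaction: float   # higher = less operator intervention

    def summary(self) -> float:
        """Single summary level; a plain average is assumed here purely for
        illustration, whereas ALFUS reports the level along each axis."""
        return (self.mission_complexity
                + self.environmental_difficulty
                + self.human_robot_interaction) / 3.0

# Hypothetical AGV following fixed routes in a structured warehouse
agv = AlfusScoreCard(mission_complexity=3.0,
                     environmental_difficulty=2.0,
                     human_robot_interaction=8.0)
print(f"Contextual autonomy summary: {agv.summary():.1f} of 10")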
The vehicle classes and application-specific performance criteria listed below provide areas to consider in generic test method development. For example, vehicle
classes have particular loading, type, guidance, and so on and pose questions to
future standards task groups as to how best to consider the variety of AGVs. Simi-
larly, AGV applications in docking, palletizing, and so forth may also provide
situations for performance test methods that may or may not fit all vehicles,
and perhaps more than one test method for each application may need to be
considered.
VEHICLE CLASSES
1. Loading
a. Light weight/capacity
b. Medium weight/capacity
c. Heavy weight/capacity
2. Type
a. Unit load
b. Tugger
c. Forklift
d. Other (e.g., hybrid, mobot, etc.)
3. Guidance
a. Wire
b. Laser triangulation
c. Ceiling bar code
d. Magnetic
e. Markers
f. Chemical/paint stripe
g. Simultaneous Localization and Mapping
h. Hybrid (combinations of guidance methods)
i. Position resolution and accuracy
4. Teach Modes
a. Offline
b. Human-led in situ
5. Cognition/Autonomy Level
a. Fully Autonomous—operator never intervenes
b. Semiautonomous—operator intervenes:
i. For each new maneuver
ii. To manually clear the path and let mobility continue
c. Human-Machine Interface Control—jog or pendant control
APPLICATION-SPECIFIC PERFORMANCE CRITERIA
For each criterion listed, consider task complexity, adaptability to the environment,
and verification of performance.
• Docking with tray tables, conveyers
• Palletizing
* Known/unknown locations
* Finding the pallet openings
* Loaded/unloaded pallets
• Human detection
* Represented by test pieces
* Represented by mannequins
* Actual humans
* Coverings (e.g., clothes worn)
• Capacities
* Speed
* Vehicle weight versus payload
* Lift height
• X/Y movement
* Ackerman
* Omnidirectional
* Skid steer
• Open source
* Plug and play
* Underlying architecture or operating system (e.g., Robot Operating
System)
• Intelligence
* Autonomy level
* Situational awareness
TABLE 1 Standard test method section outline.
1 Scope
2 Referenced Documents
3 Terminology
4 Summary of Test Method
5 Significance of Use
6 Apparatus
7 Hazards
8 Calibration and Standardization
9 Procedure
10 Report
11 Precision and Bias
12 Measurement and Uncertainty
13 Keywords
Test methods are definitive procedures that produce test results and, as shown
in Table 1, precision and bias and measurement and uncertainty. Therefore,
scientific reference must be provided within each test method
standard. Additionally, replicable and propagatable artifacts are expected to be
developed and used as relatively accurate and inexpensive test method support
devices, similar to, for example, the test pieces used for noncontact sensing evalua-
tion within ANSI/ITSDF B56.5. Standards with simple, relatively inexpensive arti-
facts would, therefore, not require vehicle vendors and users to procure expensive
measurement systems to ensure their vehicles conform to standards or to conduct
in-house testing. As suggested in previous sections of this chapter, metrics, test
methods, and terminology for the AGV and mobile robot industries may be
adopted from those associated with autonomous and intelligent capabilities evalua-
tions in other domains. A summary of potential AGV relevance for each non-AGV
standard described previously is listed in Table 2, and a summary of potential
AGV relevance for each challenge and program is listed in Table 3.
TABLE 3 Potential AGV relevance for each challenge and program.
Conclusions
Currently, there are no performance measurement standards for AGVs, only safety
standards. However, that situation is likely to change. In developing such standards, it
is important to realize that much of the mobile robot research from different organiza-
tions and application areas is applicable to AGVs. Furthermore, performance stand-
ards do exist for mobile robots. As such, performance standards for AGVs can be
based on these performance standards for mobile robot capabilities. In this chapter,
we described a number of areas where overlaps are possible. The next steps are to
proceed with the development of performance measurement standards for AGVs and
mobile robots with input from mobile robot, sensors, and other supporting industries.
ACKNOWLEDGMENTS
The authors would like to thank several key individuals for their help with this chap-
ter, including:
• Andrew Moore of the Southwest Research Institute, San Antonio, TX
• Ann Virts of the National Institute of Standards and Technology, Gaithers-
burg, MD
References
[1] Egemin, “History of AGVs,” Egemin Automation, Inc., Holland, MI, 2011, www.egeminusa.
com/pages/agv_education/education_agv_history.html (accessed April 4, 2016).
[2] ANSI/ITSDF B56.5:2012, Safety Standard for Driverless, Automatic Guided Industrial
Vehicles and Automated Functions of Manned Industrial Vehicles, Industrial Truck Stand-
ards Development Foundation, Washington, DC, 2012, www.itsdf.org
[3] Albus, J. S., “Outline for a Theory of Intelligence,” IEEE Transactions on Systems, Man,
and Cybernetics, Vol. 21, No. 3, 1991, pp. 473–509.
[4] Bostelman, R., Shackleford, W., Cheok, G., and Saidi, K., “Safe Control of Manufacturing
Vehicles Research Towards Standard Test Methods,” Proceedings of the International
Material Handling Research Colloquium, Gardanne, France, June 25–28, 2012.
[5] ASTM International, West Conshohocken, PA, 2012, www.astm.org
[6] Huang, H.-M, Messina, E., English, R., Wade, R., Albus, J., and Novak, B., “Autonomy
Measures for Robots,” Proceedings of the ASME 2004 International Mechanical Engi-
neering Congress and Exposition, American Society of Mechanical Engineers, Anaheim,
CA, November 13–19, 2004.
[7] ISO/DIS 18646-1, Robots and Robotic Devices—Performance Criteria and Related Test
Methods for Service Robot, International Organization for Standardization, Geneva,
Switzerland, 2015, www.iso.org
[8] Huang, H. and Messina, E., Autonomy Levels for Unmanned Systems (ALFUS) Framework
Volume II: Framework Models Initial Version, NIST Special Publication 1011-II-1.0, National
Institute of Standards and Technology (NIST), Gaithersburg, MD, 2007.
[9] IEC SC 59F, Surface Cleaning Appliances, International Electrotechnical Commission,
Geneva, Switzerland, 2015, http://www.iec.ch/dyn/www/f?p=103:7:0::::FSP_ORG_ID,
FSP_LANG_ID:1395,25 (accessed April 4, 2016).
[10] The RoboCup Federation, 2012, www.robocup.org (accessed April 4, 2016).
[11] “Welcome to RoboCup@Work,” 2015, www.robocupatwork.org/index.html (accessed
April 4, 2016).
[12] “The RoboCup@Home League,” 2015, www.robocupathome.org (accessed April 4, 2016).
[13] Marvel, J. A., Hong, T., and Messina, E., “Solutions in Perception Challenge Performance
Metrics and Results,” Proceedings of the Workshop on Performance Metrics for
Intelligent Systems (PerMIS '12), Association for Computing Machinery, New York, NY,
March 20–22, 2012, pp. 59–63.
[14] “About AUVSI,” Association for Unmanned Vehicle Systems International, Arlington, VA,
2012, www.auvsi.org (accessed April 4, 2016).
[15] Sheh, R., Jacoff, A., Virts, A. M., Kimura, T., Pellenz, J., Schwertfeger, S., and
Suthakorn, J., "Advancing the State of Urban Search and Rescue Robotics
Through the RoboCupRescue Robot League Competition,” Field and Service
Robotics, K. Yoshida and S. Tadokoro, Eds., Springer, Berlin, Heidelberg, 2014,
pp. 127–142.
[16] Balakirsky, S., Chitta, S., Dimitoglou, G., Gorman, J., Kim, K., and Yim, M., “Robot
Challenge,” Robotics and Automation , December 2012, pp. 9–11.
[17] Balakirsky, S. and Madhavan, R., “Advancing Manufacturing Research Through
Competitions,” Proceedings of SPIE Defense Security and Sensing, Orlando, FL,
April 13–17, 2009.
[18] Balakirsky, S., Scrapper, C., Carpin, S., and Lewis, M., “USARSim: Providing a Framework
for Multi-Robot Performance Evaluation,” Proceedings of the Performance Metrics for
Intelligent Systems Workshop, NIST, Gaithersburg, MD, August 21–23, 2006.
[19] Shoemaker, C., “Development of Autonomous Robotic Ground Vehicles: DoD’s Ground
Robotics Research Programs: Demo I through Demo III,” Intelligent Vehicle Systems:
A 4D/RCS Approach, R. Madhavan, E. R. Messina, and J. S. Albus, Eds., Nova Publishers,
New York, 2006, pp. 283–315.
[20] Haas, G. A., David, P., and Haug, B. T., “Target Acquisition and Engagement from an
Unmanned Ground Vehicle: The Robotics Test Bed of Demo 1,” Technical Report
ARL-TR-1063, Army Research Laboratory, Adelphi, MD, March 1996.
[21] Army Research Laboratory, “Robotics Collaborative Technology Alliance (RCTA), FY
2011 Annual Program Plan,” March 2011, http://www.arl.army.mil/www/pages/392/
rcta.fy11.ann.prog.plan.pdf (accessed April 4, 2016).
[22] “Micro Autonomous Systems and Technology (MAST),” GRASP Laboratory, University of
Pennsylvania, Philadelphia, PA, 2015, www.mast-cta.org
[23] Wikipedia, “DARPA Grand Challenge,” Defense Advanced Research Project Agency
Challenges, 2012, http://en.wikipedia.org/wiki/DARPA_Grand_Challenge (accessed
April 4, 2016).
[24] Spofford, J. R., Rimey, R. D., and Munkeby, S. H., “Overview of the UGV/Demo II
Program,” Lockheed Martin Astronautics, Denver, CO, 1996.
[25] Wikipedia, “Defense Advanced Research Project Agency Learning Applied to Ground
Robots (LAGR) Program,” 2009, http://en.wikipedia.org/wiki/DARPA_LAGR_Program
(accessed April 4, 2016).
[26] Ratcliff, A., “OSD Manufacturing Technology Overview,” NDIA Quarterly meeting,
Department of Defense Manufacturing Technology Program presentation, May 14, 2009.
[27] Swinson, K., “Test and Evaluation of Autonomous Ground Robots,” NDIA Ground
Robotics Capabilities Conference and Exhibition, Aberdeen, MD, March 23, 2012, http://
www.dtic.mil/ndia/2012grcce/Swinson.pdf (accessed April 4, 2016).
[28] Macias, F., “The Test and Evaluation of Unmanned and Autonomous Systems,” Interna-
tional Test and Evaluation Association Journal, Vol. 29, No. 4, 2008, pp. 388–395.
[29] Huang, H., Pavek, K., Albus, J., and Messina, E., “Autonomy Levels for Unmanned
Systems (ALFUS) Framework: An Update, 2005,” Proceedings of the SPIE Defense
and Security Symposium, Orlando, FL, March 28–April 1, 2005.
[30] Huang, H., “Performance Measures for Unmanned Systems,” presented at the SAE AS4D
Meeting, San Diego, CA, October 18, 2010.
STP 1594, 2016 / available online at www.astm.org / doi: 10.1520/STP159420150059
Preliminary Development of
a Test Method for Obstacle
Detection and Avoidance
in Industrial Environments
Citation
Norton, A. and Yanco, H., “Preliminary Development of a Test Method for Obstacle Detection
and Avoidance in Industrial Environments,” Autonomous Industrial Vehicles: From the
Laboratory to the Factory Floor, ASTM STP1594, R. Bostelman and E. Messina, Eds., ASTM
International, West Conshohocken, PA, 2016, pp. 23–40, doi:10.1520/STP159420150059
ABSTRACT
There is currently no standard method for comparing autonomous capabilities
among systems. We propose a test method for evaluating an automated
mobile system’s ability to detect and avoid obstacles, specifically those in an
industrial environment. To this end, a taxonomy is being generated to
determine the relevant physical characteristics of obstacles so that they can be
accurately represented in the test method. Our preliminary development
includes the design of an apparatus, props, procedures, and metrics. We have
fabricated a series of obstacle test props that reflect a variety of physical
characteristics and have performed a series of tests with a small mobile robot
toward validation of the test method. Future work includes expanding the
taxonomy, designing more obstacle test props, collecting test data with more
automatically guided vehicles and robots, and formalizing our work as a
potential standard test method through the ASTM F45 Committee on
Driverless Automatic Guided Industrial Vehicles, specifically ASTM F45.03
Object Detection and Protection.
Manuscript received July 1, 2015; accepted for publication August 28, 2015.
1 New England Robotics Validation and Experimentation (NERVE) Center, University of Massachusetts
Lowell, 1001 Pawtucket Blvd., Lowell, MA 01854
2 Department of Computer Science, University of Massachusetts Lowell, 1 University Ave., Lowell, MA 01854
3 ASTM Workshop on Autonomous Industrial Vehicles: From the Laboratory to the Factory Floor on
May 26–30, 2015 in Seattle, Washington.
Copyright © 2016 by ASTM International, 100 Barr Harbor Drive, PO Box C700, West Conshohocken, PA 19428-2959.
Keywords
obstacle detection, obstacle avoidance, standard test method, mobile robot,
automatically guided vehicle (AGV)
Introduction
Automatically guided vehicles (AGVs) have become very common in industrial
manufacturing environments. The use of autonomous mobile robots in this
domain is also on the rise. A necessary capability of both systems is obstacle
detection and avoidance. Obstacles in this domain range between static objects
(e.g., tables, pallets, barrels) and moving agents (e.g., forklifts, people). The loca-
tion of static objects in some environments is fixed, while in others it changes
very frequently when a job requires a different work flow and layout. If a system
is capable of detecting and avoiding obstacles, it creates a safer work environment
and allows for faster integration into a facility because less a priori knowledge of
the environment is needed [1].
Currently, there is no standard for comparing this capability between systems.
The Committee on Driverless Automatic Guided Industrial Vehicles (ASTM F45
[2]) has been formed to achieve this goal, specifically ASTM F45.03, which is
focused on object detection and protection. We propose a test method design that
can aid in this effort by accurately simulating the relevant physical characteristics
of an industrial manufacturing environment. In particular, the test method will
replicate the physical qualities of common obstacles and objects that can affect a
system’s ability to detect them with its sensors and avoid collisions.
Related Work
There are a variety of efforts working toward standardized performance metrics and
test methods for robotic systems. The National Institute of Standards and Technology
(NIST) has been leading an effort for the development of standard test methods for
response robots [3] for well over a decade through the Committee on Homeland
Security Applications; Operational Equipment; Robots (ASTM E54.08.01 [4]). Those
test methods focus on different capabilities of mobility, manipulation, sensors, and
human-system interaction, most prominently for teleoperated robots.
For AGVs, there is a safety standard test method specified in American
National Standards Institute/Industrial Truck Standards Development Foundation
(ANSI/ITSDF) B56.5 [5] that verifies whether or not a system’s safety sensor(s)
are able to detect a potential obstacle in its path. Within that test method, two test
pieces that must be detected and avoided are placed at varying orientations and
distances from the system. The surfaces of the test pieces are either black, because
that can cause issues for optical sensors, or highly reflective, because that can
cause issues for ultrasonic sensors. That test is primarily focused on dynamic
agents (e.g., a person) that temporarily enter a vehicle’s path. A similar standard
for autonomous robots is ISO 13482 [6], which includes appropriate distances
between the system and an agent/object that enters its space and emergency stop
functions. It is aimed at personal care robots (not those in industrial environ-
ments) and is largely focused on the system’s safety when co-located with people.
Although it specifies standard performance, it does not specify a standard test
method for determining performance.
The complexity levels of environment and obstacles (CLEO) and prediction in
dynamic environments framework efforts outlined in Madhavan et al. [7] are aimed
at measuring the performance of autonomous systems. They note that the capabil-
ity of a system to work in unstructured, dynamic environments is a “critical enabler
for next-generation industrial mobile robots.” The CLEO framework provides a
method for characterizing an autonomous system’s ability to navigate through
increasingly complex environments and obstacles as both aspects become more
dynamic. The metrics used include geometric correctness, dynamic map update
methods, and amount of time to update for environment changes.
The development of standard test methods for AGVs is also prevalent at
NIST [8], focusing on collaborative workspaces among humans, unmanned
vehicles, and manned vehicles. That work focuses on the detection of objects and
agents that either enter the path or stop zone of a vehicle or that are beyond it.
The test pieces from ANSI/ITSDF B56.5 are used for obstacles as well as for an
alternative to ground truth measurement called the grid-video method. This
method involves placing a grid on the ground and computing ground truth loca-
tions from recorded video of a test.
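The grid-video method lends itself to a simple reconstruction step. The sketch below is a hypothetical illustration of the idea, assuming a reviewer reads off, from each video frame, which grid cell the vehicle reference point occupies plus a fractional offset within that cell; the grid spacing and all names here are assumptions, not values from the NIST work.

from dataclasses import dataclass

CELL_SIZE_M = 0.5  # assumed floor-grid spacing, in meters

@dataclass
class GridObservation:
    """One reading taken from a video frame of the floor grid."""
    col: int       # grid column index of the vehicle reference point
    row: int       # grid row index
    frac_x: float  # fractional position within the cell, 0..1
    frac_y: float
    t_s: float     # video timestamp, in seconds

def ground_truth_position_m(obs: GridObservation) -> tuple[float, float]:
    """Convert a grid-cell observation to metric floor coordinates
    measured from the grid origin."""
    return ((obs.col + obs.frac_x) * CELL_SIZE_M,
            (obs.row + obs.frac_y) * CELL_SIZE_M)

# Two frames one second apart also yield an average ground-truth speed.
a = GridObservation(col=4, row=2, frac_x=0.2, frac_y=0.5, t_s=10.0)
b = GridObservation(col=6, row=2, frac_x=0.7, frac_y=0.5, t_s=11.0)
(xa, ya), (xb, yb) = ground_truth_position_m(a), ground_truth_position_m(b)
speed = ((xb - xa) ** 2 + (yb - ya) ** 2) ** 0.5 / (b.t_s - a.t_s)
print(f"ground-truth speed = {speed:.2f} m/s")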
Scope
Obstacle detection and avoidance is a common capability of any mobile, autono-
mous system. Given the existing audience and effort for ASTM F45, the develop-
ment is initially focused on automated mobile systems used in indoor industrial
environments, particularly for manufacturing applications. Mobile systems in this
domain include AGVs, which can take the form of traditionally human-operated
vehicles that are instead automated (e.g., Seegrid Vision Guided Vehicles [9]) and
autonomous robots (e.g., Adept MobileRobots [10]).
All of these systems are required to detect and avoid objects and agents that
either enter their path or that form the edges of their path. This includes any entity
in an industrial environment that, if a mobile system were to collide with it, could
cause damage to the system, its payload, or to the entity itself. For developmental
purposes, these are what we refer to as obstacles. Obstacles can sit on the ground
or protrude from a wall or ceiling in the environment.
Depending on a variety of factors, including the system’s capabilities and the
layout of the environment, the manner in which an obstacle is avoided varies.
Avoidance can mean stopping in place until the obstacle is no longer in the way or
navigating around the obstacle so that travel can continue.
REQUIREMENTS
All of these factors form a set of requirements that define the scope of the test method
so that it meets the needs of the manufacturing robotics domain, allows both tradi-
tional AGVs and mobile robots to be tested, and can be fabricated by anyone:
• R1 : The relevant characteristics of common obstacles in an industrial environ-
ment must be physically represented.
• R2: Obstacle test props must have variable settings to allow for a variety of
real-world objects to be represented.
• R3: Obstacle test prop settings, both internal and external qualities, must be varied
during a test session to prevent gaming and to test the flexibility of the autonomy.
• R4: The semipermanent boundaries of the environment and locations within
it must be represented in the test apparatus such that they are appropriately
detectable by the system being tested.
• R5 : The test apparatus and props must be fabricated using readily available
building materials that are inexpensive.
TAXONOMY OF RELEVANT CHARACTERISTICS
In order to accurately simulate an appropriate level of detail in manufacturing
environments, a taxonomy of relevant characteristics is being developed. The tax-
onomy will guide the design of the obstacle test props and will provide a unified
language to describe their purpose. The characteristics that are to be included are
distilled through the following process:
• T1 : Identify types of real-world obstacles and features found in an industrial
environment.
• T2: Break down each obstacle into their physical components.
• T3 : Outline the constant and variable physical relationships for each obstacle
component.
• T4: Identify overlaps in physical characteristics among real-world obstacles
(this is performed to limit the number of unique obstacle test props that will
need to be developed).
• T5 : Design obstacle test props that capture the physical characteristics while
reducing overlap among other obstacle test props.
The use of the taxonomy development process ensures that the requirements that
pertain to the obstacle test props are met. Specifically, R1 is satisfied by T1 and T2, R2 is
satisfied by T3, and R3 and R5 will help guide T5. This process has been used to develop
an initial set of example test prop designs, which are detailed in "Test Method Design."
A snapshot of T1–T3 can be seen in Table 1, using a table and shelving unit as
examples.
TABLE 1 A snapshot of T1–T3 of the obstacle taxonomy development process, using a table and
shelving unit as examples.
Table
Components:
  Leg (vertical column extending from ground)
  Tabletop (horizontal plane above ground)
  Feet (horizontal or vertical features extending from leg on ground)
  Bracer (horizontal plane extending between legs above or on ground)
Constants:
  At least one vertical leg that extends between the ground and the tabletop
  A tabletop that sits above the leg(s) with empty space between it and the ground
Variables:
  Number of legs
  Horizontal distance between legs
  Horizontal distance between legs and tabletop edge
  Vertical distance between ground and tabletop
  Tabletop dimensions
  Feet type (e.g., perpendicular extensions, wheels)
  Vertical distance between horizontal bracers and ground (if any)
  Type of horizontal bracers (e.g., solid plane between ground and tabletop, bar)

Shelving unit
Components:
  Shelf (horizontal plane above or on the ground)
  Side support or leg (vertical plane or column extending from ground or between shelves, or both)
  Back support (vertical plane extending from ground between shelves)
  Feet (horizontal or vertical features extending from support on ground)
Constants:
  At least one shelf that sits above the ground with back support
Variables:
  Number of shelves
  Vertical distance between shelves
  Width and depth of shelves
  Horizontal distance between back support and shelf edge
  Side support type (e.g., posts, solid planes that extend from front to back of shelves)
  Horizontal distance between side supports (if posts)
  Back support type (e.g., environment wall, solid plane that spans shelf width)
  Shelf, side, and back support material density (e.g., solid, wire frame)
  Feet type (e.g., perpendicular extensions, wheels)
• Volume:
* Closed: All of the obstacle’s components are contained within its volume
or the volume cannot be entered by part of the system (e.g., a solid block),
or both.
* Open: The volume of the object can be entered by part of the system (e.g.,
a desk).
• Spatial characteristics:
* Ground: A component touches or is attached to the ground and can be
sensed when on a side of the system.
* Elevated: A component overhangs the ground without another component
directly underneath it and can be sensed when above or on a side of the
system (or both).
* Inset: A component is set into the volume a distance from another
component that sits above it (e.g., a table whose legs do not touch the
tabletop edges) and can be sensed when on a side or above the system
(or both).
• Surface density:
* Solid: The outer surface of the obstacle is solid.
* Porous: The outer surface has many holes (e.g., a wire mesh shelving unit).
* Empty: The side-facing outer edges of the obstacle are empty.
• Location:
* Static, fixed: The obstacle is fixed in place and cannot be moved.
* Static, could be moved: The obstacle is not explicitly fixed and can be
moved if hit with enough force.
* Dynamic, moving: The obstacle moves on its own (e.g., a human, a forklift).
* Dynamic, component-enabled movement: The obstacle is able to
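These categories map naturally onto a small tagging structure for obstacle test props. The sketch below assumes Python enumerations; the category names come from the taxonomy above, while the class and field names are hypothetical.

from dataclasses import dataclass
from enum import Enum

class Volume(Enum):
    CLOSED = "closed"  # cannot be entered by part of the system
    OPEN = "open"      # e.g., a desk

class Spatial(Enum):
    GROUND = "ground"      # touches or is attached to the ground
    ELEVATED = "elevated"  # overhangs with nothing directly underneath
    INSET = "inset"        # set into the volume below a component above it

class SurfaceDensity(Enum):
    SOLID = "solid"
    POROUS = "porous"  # e.g., a wire mesh shelving unit
    EMPTY = "empty"

class Location(Enum):
    STATIC_FIXED = "static, fixed"
    STATIC_MOVABLE = "static, could be moved"
    DYNAMIC_MOVING = "dynamic, moving"
    DYNAMIC_COMPONENT = "dynamic, component-enabled movement"

@dataclass
class ObstacleProp:
    """A test prop tagged with taxonomy characteristics. The taxonomy
    assigns spatial characteristics per component; a single dominant
    value is used here only to keep the sketch short."""
    name: str
    volume: Volume
    spatial: Spatial
    surface: SurfaceDensity
    location: Location

table_prop = ObstacleProp("four-leg table", Volume.OPEN, Spatial.INSET,
                          SurfaceDensity.EMPTY, Location.STATIC_MOVABLE)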
TEST APPARATUS
The size of the system, the obstacles, and the environment determine if an obstacle
can be avoided by the system stopping or by navigating around it. Many AGVs
are not currently equipped with the functionality to navigate around an obstacle,
but some existing autonomous mobile robots can. The size of the apparatus will
also dictate where the obstacles can be placed within it and the direction of
approach by the system.
Preliminary development of this test method is not concerned with confined
space but rather if an obstacle can be detected and avoided successfully. Therefore,
the dimensions of the apparatus should be sized such that there is sufficient space
for the system to pass on at least one side of the obstacle, which will depend on the
dimensions, locomotion type, and turning radius of the system. An exact formula
for determining this has not yet been developed.
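Although no exact formula has been established, a conservative sizing heuristic can illustrate the variables involved. The sketch below is purely illustrative: the function names, the 0.5-m clearance, and the diagonal-footprint bound (used so the system can also turn alongside the obstacle) are assumptions, not part of the test method.

```python
# Hypothetical sizing heuristic for the apparatus interior width.
# Illustrative only; the test method leaves the exact formula undefined.

import math

def min_path_width(robot_width_m, robot_length_m, obstacle_depth_m,
                   clearance_m=0.5):
    """Conservative interior width: the obstacle's protrusion from the
    mounting wall plus room for the robot to pass alongside it.

    The robot's footprint is bounded by its diagonal so that it can
    also turn in place within the free corridor.
    """
    robot_diagonal = math.hypot(robot_width_m, robot_length_m)
    passing_corridor = robot_diagonal + 2.0 * clearance_m
    return obstacle_depth_m + passing_corridor

def round_up_to_panel(width_m, panel_m=1.2):
    """Round up to a multiple of the 1.2-m panel width for fabrication."""
    return math.ceil(width_m / panel_m) * panel_m

if __name__ == "__main__":
    # Example: a 0.5 m by 0.7 m robot and an obstacle protruding 1.22 m.
    w = min_path_width(0.5, 0.7, 1.22)
    print(f"minimum width: {w:.2f} m -> build at {round_up_to_panel(w):.1f} m")
```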
As specified in R4, the boundaries of an environment and the locations within
it can be interpreted differently by each system. Regardless of how they are sensed,
the apparatus should be built such that any people near it are protected from
potentially unsafe system behavior (e.g., exiting the boundaries). Thus, a barrier is
implemented outside of all physical or virtual augmentations. To meet R5, the
barrier is made using wood posts and sheets of wood, which are most commonly
available measuring 2.4 by 1.2 m (96 by 48 in). The wall panels can also be used to
define the system’s path if no additional augmentation is required.
The apparatus is built to define a space for testing wherein the system is
instructed to drive from location A to B and back continuously. A dead end after
both locations forces the system to turn around and traverse its path again,
approaching the obstacle from the opposite direction. If the system is unable to
change travel directions within the dead zone, then an additional path can be added
to allow for continuous, looped travel. The interior measurements can vary, most
easily in multiples of 2.4 m (96 in), or 1.2 m (48 in), for simple fabrication. Any
required physical or virtual augmentation for boundaries and location definition
can occur within this space. A diagram of the apparatus can be seen in Fig. 1. A wall
in the middle of the apparatus is designed for mounting obstacles to obstruct the
path. To adjust the location of the obstacle test props within the space (per R3),
bars of 80/20 Holey Tubing are attached to the wall to allow for precise attachment.
The apparatus can be seen in Fig. 1 and Fig. 2.
TEST PROPS
The test props represent a variety of obstacles whose characteristics are specified
through the taxonomy. Their locations and orientations within a space can vary,
each introducing a new challenge of detection for the system being tested. To meet
R3, the variable characteristics of each obstacle defined in T3 must be changed
during a test session. Rather than designing many obstacle test props that capture
all of the possible variations of a single obstacle type, malleable props with adjusta-
ble settings can be used. To change the settings of an obstacle test prop, minimal
tools should be required so as to not add excessive length to a test session.
To meet R5, we have opted to use a common set of building materials to fabri-
cate the obstacle test props. For flat horizontal or vertical solid planes, wooden
panels are used, 122 cm along at least one dimension.

FIG. 1 Diagram of the test apparatus environment. The optional loop zone can be
added for systems that cannot change travel directions in the allotted space
(or at all). Legend: A, B are the travel locations for the system; x and y are
the width and length of the path such that the system can traverse and turn
around (if possible); z is the width of the path such that the system can
traverse around an obstacle obstructing the path, 122 cm from the mounting
wall. Possible obstacle locations lie along the obstacle mounting wall, wall
panel barriers enclose the apparatus, and the boundaries of the robot path are
implemented either physically or virtually (if applicable).

For vertical or horizontal
columns, aluminum square tubing (specifically, 80/20 Holey Tubing) is used, which
comes predrilled with holes that are separated by 3.8 cm, which allows for a very
precise granular scale for adjusting attachment dimensions. Each of these items can
be painted flat black, or metal sheets can be attached to match the surface qualities
of ANSI/ITSDF B56.5. Other features such as porous surfaces and elevated
obstacles are achieved by wire mesh panels and ropes with pulleys, respectively.
All obstacle test props are fabricated using hand-tightened hardware such as bolts
and wingnuts for easy assembly and adjustment.

FIG. 2 The test apparatus environment set up at the UMass Lowell NERVE Center.
Left: Three-dimensional rendering. Right: Photo of the obstacle mounting wall.

Additional pieces of aluminum
square tubing can be used as infrastructure on the horizontal plane to attach the
obstacle test props to the apparatus along the mounting wall. Holes are cut in the
horizontal plane to allow square tubing to pass through perpendicularly and to
serve as inset features. The common building materials can be seen in Fig. 3.
A series of obstacle test props have been designed, each satisfying a different
combination of the higher-level qualities outlined in “Taxonomy of Relevant Char-
acteristics.” See Table 2 for images of each obstacle test prop and their corresponding
characteristics. Obstacles A and B can be used as qualifiers before advancing to
obstacles that use multiple surfaces of that type. The “infinite” height characteristic
means that the implementation of the obstacle does not consider where the top of
the obstacle is; generally, an AGV or mobile robot system does not detect obstacles
from the sky down but rather from the ground up. The default size for wide obsta-
cle features is 122 cm, given the availability of 122 cm by 244 cm wood panels.
Elevated obstacles can have their components raised to a height that allows part of
the system (or the entire system) to drive underneath it, potentially causing it to
collide with the horizontal plane, unless the obstacle has components on the ground
so that the system may detect the obstacle before a collision occurs.
PROCEDURE
Before conducting a test, the apparatus must be set to the appropriate dimen-
sions. Wall panels and boundaries must be set at dimensions that allow for the
system to drive comfortably with enough space for it to detect and navigate
around an obstacle (if applicable). The optional loop zone can be added if necessary.
FIG. 3 The common building materials used to fabricate the obstacle test props. Left,
top to bottom: Aluminum square tubing, black square tubing, horizontal plane
with holes, and hardware for mounting additional components. Right, top to
bottom: Thin solid panels, tall solid panels, mesh side panels.
TABLE 2 A set of example obstacle test props using the common set of building
materials. (Images of props (a) through (j) omitted.)
Note: All example images use a flat black surface quality on all outward-facing planes (e.g., side
panels, underside of horizontal plane, etc.). The same designs are also possible using reflective material.
*All "infinite" heights are depicted at 61 cm because those are the specific settings used for valida-
tion testing in "Validation Testing."
Physical or virtual augmentation (or both) should occur to define the system’s path
as needed. The travel locations (A and B) must also be defined to the system, which
can occur virtually in software, accompanied by physical augmentations such as
quick response codes or reflectors, and so on.
A downselection of obstacle test prop types should also be performed, and a
threshold set for their variable settings, to prevent exhaustive testing. Depend-
ing on the system’s dimensions and the sensors it has available, some obstacle
settings will have larger impacts than others. For instance, if only a forward
facing, two-dimensional lidar is used and is mounted low to the ground, then
obstacles that are elevated completely above the system will not be detectable and
therefore do not need to be tested. A proper downselection process is still in
development.
The system is instructed to traverse from location A to B, then B to A. If a
specific path can be commanded, then it should run through the center of
the space defined by the boundaries. One instance of this action performed by the
system is referred to as a lap. During each lap, the system will interact with the
obstacle twice, or once if the loop zone is used. After each lap, the obstacle’s
settings are adjusted as necessary, such as its location and orientation along the
mounting wall. This process should be repeated as many times as necessary to
achieve a statistically significant measure of successful detection and avoidance
of the obstacle type(s).
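One way to decide when enough laps have been run, offered here only as a sketch rather than as part of the published procedure, is to track a binomial confidence interval on the per-interaction success rate and stop once it is sufficiently tight. The interval choice and the numbers below are illustrative assumptions.

```python
# A minimal sketch (not part of the published procedure) of deciding how
# many laps are enough: a Wilson score interval for the per-interaction
# success rate, tightened by adding laps until its half-width is small.

import math

def wilson_interval(successes, trials, z=1.96):
    """95 % Wilson score interval for a binomial proportion."""
    if trials == 0:
        return (0.0, 1.0)
    p = successes / trials
    denom = 1.0 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials
                                   + z**2 / (4 * trials**2))
    return (max(0.0, center - half), min(1.0, center + half))

if __name__ == "__main__":
    # 5 laps with 2 obstacle interactions each, one failed avoidance:
    lo, hi = wilson_interval(successes=9, trials=10)
    print(f"success rate in [{lo:.2f}, {hi:.2f}] at 95 % confidence")
```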
If the system does not properly avoid the obstacle, then that lap will be noted as
such. This would require a reset to the last travel location. If too many faults occur,
the settings of the test should be adjusted to an easier difficulty; the criteria
for this adjustment have not yet been determined.
METRICS
The most important metric of performance is whether or not the obstacle
was avoided. If noncontact sensors are used by the system being tested, then
avoidance means that the system did not collide with any part of the obstacle.
If contact sensors are used (e.g., a bumper), then avoidance means that, upon
contact, the obstacle did not move enough to cause any damage or to create
a safety hazard. This can be determined by observing the system as it per-
forms within the test method. The test can also be video recorded for review
afterwards.
A more detailed metric of performance is the distance between the obstacle and
the system after it has avoided the obstacle. The distance depends on the speed at
which the system is traveling, the distance at which it detects and reacts to the
obstacle, and how quickly it stops moving toward the obstacle. For this type of measurement, a
motion capture system can be used, although this would not satisfy R5. An inex-
pensive way to do this is to draw a grid on the ground and calculate the distance by
processing images from the recorded video of the test (as is done in Bostelman,
Norcross, Falco, and Marvel [8]).
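As one illustration of the grid-based measurement (the cited work [8] describes its own image-processing approach), a planar homography can be estimated from four known grid corners and used to map image points to floor coordinates. All point values below are made up for the example.

```python
# A sketch of the inexpensive distance measurement described above:
# a homography is estimated from four known grid corners drawn on the
# floor, then pixel coordinates from the recorded video are mapped to
# ground coordinates. Point values below are illustrative assumptions.

import numpy as np

def homography(px_pts, ground_pts):
    """Estimate the 3x3 homography mapping pixel -> ground (DLT)."""
    A = []
    for (u, v), (x, y) in zip(px_pts, ground_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)

def to_ground(H, u, v):
    x, y, w = H @ np.array([u, v, 1.0])
    return np.array([x / w, y / w])

if __name__ == "__main__":
    # Four grid corners: pixel positions and their floor positions (m).
    px = [(100, 400), (540, 410), (520, 120), (120, 115)]
    gd = [(0.0, 0.0), (2.4, 0.0), (2.4, 2.4), (0.0, 2.4)]
    H = homography(px, gd)
    robot = to_ground(H, 300, 260)     # robot's stopping point in the image
    obstacle = to_ground(H, 330, 180)  # nearest obstacle point in the image
    print(f"stopping distance ~ {np.linalg.norm(robot - obstacle):.2f} m")
```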
Validation Testing
To aid in validating the design of the test method, a series of tests was conducted
at the UMass Lowell NERVE Center. Ten tests were conducted using a mobile
robot programmed to traverse within the apparatus from A to B and back for five
laps. For each test, a different obstacle was used; those used can be seen in Table 2
(where their settings are detailed). In between each lap, the location of the obstacle
along the mounting wall was altered, varying between 0 cm from the left edge,
61 cm from the left edge, center, 61 cm from the right edge, and 0 cm from the right
edge. All tests were recorded using a multi-angle camera system, depicting the
obstacle from all observable sides (see Fig. 4).
FIG. 4 A still frame from the multi-angle camera system used to record the test
sessions.
FIG. 5 Map of the apparatus generated by the robot using the robot visualization
package in ROS.
to avoid any obstacles that obstruct its path) and to localize within it. With the
additional augmentations, the Pioneer measures 50 cm by 38 cm by 40 cm. The
Hokuyo URG is located approximately 30 cm above the ground, offering a two-
dimensional lidar view around the body of the robot. An image of the robot can
be seen in Fig. 6.
One instance of each obstacle type listed in Table 2 was used with specific set-
tings tuned for the Pioneer. The robot’s two-dimensional sensors were located
30 cm above the ground, meaning all ground obstacles of “infinite” height did
not need to be taller than that height because they would be detected by the
robot regardless. For this reason, 61-cm-tall side panels were used for obstacles
A, B, D, and E. Obstacle C used 8-cm-tall side panels such that its thin ground
components were below the sensors. No overhead components of obstacles
could be detected; if they were elevated less than 40 cm high, the robot would
have collided with them (unless they had accompanying ground features that
were otherwise detectable). In order to reduce potential damage to the robot,
obstacles E–J were elevated 61 cm high. The robot could also possibly enter the
volume of obstacles G–J; the default 122-cm size was used for all wide obstacles,
as well as insets of 30 cm, to allow for this possibility. Hokuyo URG sensors have
been shown to have issues with dark surfaces [12] , so all obstacle test props used
black surfaces.
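The height-based reasoning in this paragraph can be captured in a few lines. The sketch below, with illustrative names and a hypothetical dataclass, checks whether a fixed-height two-dimensional scan plane intersects a prop component; the heights match those quoted above.

```python
# A minimal sketch of the geometric reasoning used above to tune prop
# settings for the Pioneer: a fixed-height 2-D lidar sees an obstacle
# component only if the component's vertical extent crosses the scan
# plane. Names and the dataclass are illustrative, not from the paper.

from dataclasses import dataclass

@dataclass
class Component:
    bottom_m: float  # height of the component's lowest point
    top_m: float     # height of the component's highest point

def detectable_by_2d_lidar(c: Component, sensor_height_m: float) -> bool:
    """True if the scan plane at sensor_height_m intersects the component."""
    return c.bottom_m <= sensor_height_m <= c.top_m

if __name__ == "__main__":
    sensor = 0.30  # Hokuyo URG mounted ~30 cm above the ground
    ground_panel = Component(0.0, 0.61)  # 61-cm side panel (obstacles A, B)
    thin_foot = Component(0.0, 0.08)     # 8-cm feet (obstacle C)
    elevated = Component(0.61, 1.00)     # component elevated to 61 cm
    for name, c in [("61-cm panel", ground_panel),
                    ("8-cm foot", thin_foot),
                    ("elevated @61 cm", elevated)]:
        print(name, "->", detectable_by_2d_lidar(c, sensor))
```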
TABLE 3 Testing results of the obstacle detection and avoidance test method with the Pioneer.
Note: When performing Trial 2 with Obstacle J, the robot’s navigation parameters were changed.
*These obstacles were not detected because their components were elevated above the robot’s sensor
field of view but, technically, they were avoided because no part of the robot collided with the
obstacles.
ACKNOWLEDGMENTS
This research has been supported in part by NIST under 70NANB14H235. The
authors would like to thank Jordan Allspaw, Brian Carlson, James Dalphond, and
Alexandra Derderian of the UMass Lowell NERVE Center for their assistance in
fabricating the test method and in validation testing.
References
[8] Bostelman, R., Norcross, R., Falco, J., and Marvel, J., “Development of Standard Test
Methods for Unmanned and Manned Industrial Vehicles Used Near Humans,” Proceed-
ings of the SPIE Defense, Security, and Sensing Conference, International Society for
Optics and Photonics, Baltimore, MD, April 29–May 3, 2013.
[9] Seegrid, "Vision Guided Vehicles," Seegrid Corp., Pittsburgh, PA, 2014,
http://www.seegrid.com/products.php (accessed July 2015).
[10] Adept Technology, “Adept Lynx,” Adept Technology, Inc., San Ramon, CA, 2015,
www.adept.com
[11] Quigley, M., Conley, K., Gerkey, B. P., Faust, J., Foote, T., Leibs, J., Wheeler, R., and Ng,
A. Y., “ROS: An Open-Source Robot Operating System,” ICRA Workshop on Open Source
Software, Vol. 3, No. 3.2, 2009, p. 5.
[12] Kneip, L., Tâche, F., Caprari, G., and Siegwart, R., “Characterization of the Compact
Hokuyo URG-04LX 2D Laser Range Scanner,” Proceedings of the IEEE International
Conference on Robotics and Automation , Kobe, Japan, May 12–17, 2009, pp. 1447–1454.
STP 1594, 2016 / available online at www.astm.org / doi: 10.1520/STP159420150051
ABSTRACT
Human-operated and driverless trucks often collaborate in a mixed work space in
industries and warehouses. This is more efficient and flexible than using only one
kind of truck. However, because driverless trucks need to give way to driven
trucks, a reliable detection system is required. Several challenges exist in the
development of such a system. The first is to select interesting situations and
objects. Overhanging objects are often found in industrial environments (e.g., tines
on a forklift). Second is choosing a system that has the ability to detect those
situations. (The traditional laser scanner situated two decimetres above the floor
does not detect overhanging objects.) Third is to ensure that the perception
system is reliable. A solution used on trucks today is to mount a two-dimensional
laser scanner on top and tilt the scanner toward the floor. However, objects at the
top of the truck will be detected too late, and a collision cannot always be
avoided. Our aim is to replace the upper two-dimensional laser scanner with a
three-dimensional camera, structural light, or time-of-flight (TOF) camera. It is
important to maximize the field of view in the desired detection volume. Hence,
the sensor placement is important. We conducted laboratory experiments to
check and compare the various sensors’ capabilities for different colors, using
tines and a model of a tine in a controlled industrial environment. We also
conducted field experiments in a warehouse. Our conclusion is that both the
tested structural light and TOF sensors have problems detecting black items that
are non-perpendicular to the sensor. It is important to optimize the light
economy—meaning the illumination power, field of view, and exposure time—in
order to detect as many different objects as possible.

Keywords
mobile robots, safety, obstacle detection

Manuscript received June 15, 2015; accepted for publication November 3, 2015.
1 University of Skövde, School of Engineering Science, Portalen, Kaplansgatan 11, SE-541 34 Skövde, Sweden
2 Halmstad University, School of Information Technology, Kristian IV:s väg 3, SE-30118 Halmstad, Sweden
3 ASTM Workshop on Autonomous Industrial Vehicles: From the Laboratory to the Factory Floor on
May 26–30, 2015 in Seattle, Washington.
Introduction
The need for an obstacle detection system for driverless forklift trucks is obvious.
However, such a system also may be used on driven trucks with automated func-
tions, such as the ability to stop if an obstacle appears. Hence, developing systems
that can be implemented for driven and driverless forklift trucks will not only create
a safer environment but will also decrease the cost of the obstacle detection system.
The world market for driven forklift trucks is orders of magnitude higher than for
driverless forklift trucks. American companies sold 927 automated guided vehicles
(AGVs) in 2011 [1]. During that period, more than 200,000 forklift trucks were
sold in the United States [2]. This includes electric rider trucks, electric warehouse
rider trucks, electric warehouse pedestrian trucks, and internal combustion trucks.
For the world market, World Industrial Truck Statistics reported order bookings
slightly below one million driven trucks for 2011 [2].
Two different safety standards exist for driverless trucks, one for Europe
(EN1525) and one for the United States (ANSI/ITSDF B56.5-2012), and each has
developed differently. In terms of obstacles, both consider contact with humans and
have two test items that represent parts of a human—a lower leg and a body. How-
ever, an object representing a piece of a machinery is added to the U.S. standard,
and the standard also considers different materials for different sensors as well as
more test cases [3,4]. A continuous development of standards for driverless trucks
is important to make use of state-of-the-art sensor technology.
Overall, the motivation for this work is to improve the safety of automated
material handling by proposing better sensor solutions for obstacle detection. The
challenge in developing an obstacle detection system in industrial settings is three-
fold. The first is to select situations to detect that are of special interest. The second
is choosing a perception system that has the ability to detect those situations. The
third is to ensure that the perception system is reliable.
This chapter is organized as follows. First, we discuss related work. This is
followed by a problem definition and then descriptions of the experiments and the
results. Finally, we offer a discussion and conclusions drawn from the results.
Related Work
The National Institute of Standards and Technology has presented work covering
standards for driverless forklift trucks in industrial environments. Bostelman,
Hong, and Madhavan [5] use a time-of-flight (TOF) camera to detect objects
described in the U.S. safety standard [4] in which they also test the camera out-
doors and conclude that it shows good results in shady environments. Bostelman
[6] conducts tests with a sonar, a two-dimensional lidar, and a three-dimensional
TOF camera. The sensors are tested against objects in the standards as well as
an additional item, 500 mm by 100 mm, posed at 0° and 45° to the robot's direc-
tion of travel, because the angle of incidence makes a difference for TOF sensors.
The tests also included various materials (e.g., cardboard, gray plastic, cotton
denim, black reflectance paper, aluminum, and clear glass). Sonar detected all
objects in various materials but had problems with different angles. The
two-dimensional lidar had problems with flat glass at a 45° angle but detected
other objects. The three-dimensional TOF
camera showed a notable difference between highly reflective and low-reflective
materials. Bostelman [6] proposed changes to ANSI/ITSDF B56.5 that were later
adopted into the standard [4]. Bostelman, Norcross, Falco, and Marvel proposed
test methods for driverless trucks and give examples of potential human and
equipment effects on the path of a driverless truck in human/AGV collaborative
work spaces [7].
Hedenberg and Åstrand use a test apparatus (Fig. 1 ) to evaluate three-dimensional
sensors that include test items in the safety standard as well as new items that repre-
sent objects in an industrial environment [8].
Problem Definition
SENSORS FOR OBSTACLE DETECTION
A driverless truck equipped with a two-dimensional range scanner (e.g., laser scan-
ner) situated less than two decimetres above the floor to detect objects described in
the safety standards (ANSI/ITSDF B56.5-2012, EN1525) does not detect all
obstacles in the desirable detection volume—the yellow area in Fig. 2. A solution
used on trucks today is to mount a two-dimensional laser scanner on the top of the
truck and tilt the scanner toward the floor. However, objects on the top of the truck
will be detected too late, and a collision cannot always be avoided. Another solution
proposed by Bostelman, Shackleford, Cheok, and Saidi is to use laser scanners on
each side of the truck to detect all items that enter the contour area of the truck [9].
However, this will dramatically increase sensor costs.
Our aim is to replace the upper two-dimensional laser scanner with a three-
dimensional camera in order to increase the detection volume and thus detect
obstacles earlier compared to the systems used today ( Fig. 2).
For all vision systems, the placement of the cameras is essential for obtaining
a good result. This has to be considered for every new setup [10]. There are
several ways to determine the placement of the camera system on the robot. The
easiest way is to just choose a placement by intuition. Putting a little more
effort into this judgment will probably increase the precision in the system. It
will also make a discussion of camera placement more unbiased if the system’s
performance has to be increased later. Huang and Krotkov concluded that the
best placement for cameras on mobile robots is at the highest point possible [10].
This is true if the desired detection volume is small in comparison to the available
field of view (FOV). In other cases, maximizing the FOV in the desired detection
volume is necessary.

FIG. 1 The test apparatus. In one scene, the test rig represents a collection of
objects: Item A, a prone human; Item B, a standing human; Item C, a flat target
used by Bostelman and Shackleford [6]; Item D, a ladder; Item E, tines on a
forklift; Item F, a hanging cable; Items G and H, vertical bars; Item I, a
horizontal bar; and Item J, a thin cable. A ladder, Item D, typically has a
steeper slope than 45°. However, objects that have a larger angle may be
considered vertical, while objects with a lower angle can be considered
horizontal. The test apparatus measures 1.8 m by 1.8 m, and the bars have a
thickness of 25 mm. The hanging cable has a diameter of 13 mm. All items are
painted in matt black.
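A rough geometric check of the trade-off just described, under a flat-floor assumption, is sketched below; the mounting height, tilt, and field-of-view values are illustrative, not taken from the paper.

```python
# A rough sketch of an FOV coverage check: given a camera's mounting
# height, downward tilt, and vertical FOV, compute the interval of
# floor the camera sees, so candidate placements can be compared
# against a desired detection volume. All values are illustrative.

import math

def floor_coverage(height_m, tilt_down_deg, vfov_deg):
    """Return (near, far) floor distances covered by the vertical FOV.

    Angles are measured down from the horizontal; a ray parallel to or
    above the horizon never reaches the floor (far = infinity).
    """
    lower = math.radians(tilt_down_deg + vfov_deg / 2.0)
    upper = math.radians(tilt_down_deg - vfov_deg / 2.0)
    near = height_m / math.tan(lower) if lower > 0 else math.inf
    far = height_m / math.tan(upper) if upper > 0 else math.inf
    return near, far

if __name__ == "__main__":
    # A camera 2.0 m up, tilted 30 degrees down, with a 45-degree VFOV:
    near, far = floor_coverage(2.0, 30.0, 45.0)
    print(f"floor covered from {near:.2f} m to {far:.2f} m ahead")
```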
the same work space, Fig. 4. On a system level, it is no problem to keep track of the
position and the action taken for all the driverless trucks, but a similar system for
human-operated trucks is more problematic. A driverless truck must give way to a
human-operated truck, and the lower two-dimensional laser scanner on a driverless
truck can indicate the presence of a human-operated truck. One problem with this
is if the human-operated truck delivers goods in front of a driverless truck ( Fig. 4).
The laser scanner will not detect the tines, presenting the risk of the driverless truck
running into the human-operated truck.
The problem with the coverage of the desired detection volume and FOV of a
structural light sensor is demonstrated in Fig. 4. The FOV hardly covers the floor
close enough to the truck and at the top of the truck’s contour area.
Other similar scenarios are turns around corners ( Fig. 5) as described earlier [7].
Mirrors are often used in human work spaces to see around corners. For driverless
trucks, this is a more complex scenario and still is an open issue.
To illustrate the complexity in an industrial environment, Fig. 4b shows how
spilled water may change the environment for sensor systems. The wet spot on the
otherwise matt floor may cause objects to be reflected, which reduces the optical
signal returning to the sensor. This could cause false readings.
In the existing safety standards for driverless trucks, objects commonly
handled are those that represent humans [3,4]. Since 2012, the U.S. standard also
has a 500 mm by 500 mm plate that needs to be detected at 0° and 45° with differ-
ent colors and reflectance depending on the sensor used.
FIG. 4 View from a driverless truck. Human-operated and driverless trucks cooperate
in many industrial environments. A human-operated truck delivers goods in
front of a driverless truck. The two-dimensional laser scanner on the driverless
truck indicates the presence of the human-operated truck and stops. Note the
wet spot on the otherwise matt floor behind the truck, which is more visible in
(b). This makes a reflective surface that might cause problems for some sensors.
FIG. 5 An AGV with a bumper turns around a pillar. Note the window to the left in the
left image. Sunlight may have an impact on three-dimensional sensors.
Published with permission from Volvo Car Corporation.
TABLE 1 Results for the structured light camera. The camera has problems with black
colors at almost all angles. The results are rounded off to the closest 10 % value.
The results are not symmetrical due to different distances to the board at
different angles.

Angle    Gray    White   Red     Black
−45°     100 %   100 %   100 %   0 %
−15°     100 %   100 %   100 %   100 %
0°       100 %   100 %   100 %   100 %
+15°     100 %   100 %   100 %   0 %
+45°     100 %   100 %   100 %   0 %
A TOF camera (Fotonic E70) and a structured light camera (Microsoft Kinect) were
used and placed 2.5 m from a piece of board painted in four different colors:
gray, white, red, and black. A white/neutral background was placed 1 m behind the
board. The board was arranged at five different angles: −45°, −15°, 0°, +15°, and
+45°. Obstacles at an angle of ±45° are used in the U.S. safety standard [4]. Three
consecutive images were taken by each sensor. The analysis was made by counting the
number of pixels within a given depth value from the sensor. Then the ratio between
the number of detected pixels and the number of maximum possible pixels covered by
the sensor was computed. The average of the results from the three images is given in
Table 1 and Table 2, and a scene is shown in Fig. 6. The results were rounded off to the
closest 10 % value.
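A minimal sketch of this pixel-counting analysis is given below; the synthetic depth frames, the board mask, and the 0.15-m depth band are illustrative assumptions standing in for real sensor output.

```python
# A minimal sketch of the analysis described above: count depth pixels
# within a band around the board's distance, divide by the maximum
# number of pixels the board could occupy, and average three frames.

import numpy as np

def detection_ratio(depth_frames, board_mask, board_dist_m, tol_m=0.15):
    """Mean ratio of valid board pixels over consecutive frames."""
    ratios = []
    max_pixels = board_mask.sum()
    for depth in depth_frames:
        in_band = np.abs(depth - board_dist_m) < tol_m  # NaN compares False
        ratios.append((in_band & board_mask).sum() / max_pixels)
    return float(np.mean(ratios))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mask = np.zeros((120, 160), dtype=bool)
    mask[40:80, 60:100] = True  # where the board projects in the image
    # Simulate frames in which ~90 % of board pixels return ~2.5 m:
    frames = []
    for _ in range(3):
        d = np.full((120, 160), np.nan)
        hit = rng.random((120, 160)) < 0.9
        d[mask & hit] = 2.5 + rng.normal(0, 0.02, (mask & hit).sum())
        frames.append(d)
    print(f"detection ratio ~ {detection_ratio(frames, mask, 2.5):.2f}")
```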
It is clear that TOF and structured light cameras perform well for untextured
regions. However, this depends heavily on the reflectance. Both sensors performed
poorly on black surfaces in this test due to the fact that the black color had poor
reflectance. Usually, black is a good light absorber, but reflectance is important, and
one issue is to define the reflectance for different wavelengths. The structured light
sensor performed better on the perpendicular flat object in black. The board was
rather diffuse, meaning that the light was scattered in all directions. If the board
had been more specular, we would probably have seen an even larger effect by
tilting it. The results in Table 1 are not symmetric due to the different distances to
the board at different angles.

TABLE 2 Results for the time-of-flight camera. The camera has problems with black
colors at almost all angles and also with the gray plate at −45°. The results are
rounded off to the closest 10 % value.

Angle    Gray    White   Red     Black
−45°     60 %    100 %   100 %   0 %
−15°     100 %   100 %   100 %   0 %
0°       100 %   100 %   100 %   60 %
+15°     100 %   100 %   100 %   0 %
+45°     100 %   100 %   100 %   0 %

FIG. 6 The plate at +15° rotated about the vertical axis. The time-of-flight (b) and
structured light (c) cameras do not detect the black plate. All other colors are
detected by both sensors.

FIG. 7 A time-of-flight and structural light camera were mounted on a driverless truck.
A scenario is to turn around a corner and detect tines on a forklift.
FIG. 8 The results from a scenario where tines from a forklift are used as an
obstacle around the corner. In the upper left, an RGB image from the Kinect
is shown. The lower left shows the two-dimensional plot from the laser
scanner in which the red line illustrates where the beam from the upper laser
scanner hits the floor. The upper right image shows the depth image from
the structured light (Kinect) camera. The lower right image shows the depth
image from the TOF camera (Fotonic). The structured light camera (Kinect)
as well as the TOF camera (Fotonic) detect the tines. Note the problems for
the TOF camera with the reflectors mounted on the wall and used for laser
navigation.
and a model of a tine. The approach in the safety standards is to make a model of
an existing object and conduct tests. If the model is detected, the perception system
is approved for that specific situation [3,4]. The model was painted black where
the tine of the forklift had been worn off naturally. We make no claim that either
the model or the real tine represents the average tines used in the industry. Our
investigation showed that it is very hard to find a common denominator for making
one general model of a tine.
The results are shown in Fig. 8 and Fig. 9. The lower left image shows the
two-dimensional plot from the laser scanner in which the red line illustrates
where the beam from the upper laser scanner hits the floor. The upper right
image shows the depth image from the structured light (Kinect) camera. The
lower right image shows the depth image from the TOF camera (Fotonic). Both
the structured light camera (Kinect) and the TOF camera (Fotonic) detected the
FIG. 9 The results from a scenario where a model represents a tine from a forklift and is
used as an obstacle around a corner. On the upper left, an RGB image from the
Kinect is shown. The lower left image shows the two-dimensional plot from the
laser scanner in which the red line illustrates where the beam from the upper
laser scanner hits the floor. The upper right image shows the depth image from
the structured light (Kinect) camera. The lower right image shows the depth
image from the TOF camera (Fotonic). The structured light camera (Kinect)
detects the model of the tine, while the TOF camera (Fotonic) has difficulty
detecting the model.
model of the tine. It is also notable that the TOF camera had problems with
the reflectors used for navigation (see Fig. 8). The test items in the U.S. safety
standard were also used. Both sensors had difficulty detecting the black flat test
item but did detect the cylindrical black test pieces.
It is clear that the structured light sensor detects both objects, whereas the
TOF sensor has problems with the model of the tine. According to the test men-
tioned earlier, this is due to the black color of the model; but another reason
can be a lower resolution (160 by 120 versus 320 by 240) with a higher FOV (70 ?
by 53 ? versus 58.5 ? by 46.6? ) for the TOF sensor compared to the structural light
sensor. There are not enough pixels to detect objects of this size.
FIG. 10 Field test from a warehouse where a structured light camera is used.
A wooden broomstick is placed in front of the truck. The thin stick is
detected by the sensor. The dark blue color in the depth image indicates
undefined distances. The lower left image shows the two-dimensional plot
from the laser scanner in which the red line illustrates where the beam from
the upper laser scanner hits the floor. Detections of the truck are shown
at (0, 0).
to make several laps throughout the warehouse. Humans, driverless trucks, and
human-operated trucks occurred as obstacles, as did obstacles placed manually
to test the system's limitations; see Figs. 10–12.
The sensor detected thin structures, such as the broomstick in Fig. 10 and
the metal cage in Fig. 12, but had problems with the black parts in the black-yellow
pattern of the safety railing in Fig. 11.
FIG. 11 Field test from a warehouse where a structured light camera is used. The bar
in black and yellow (on the left) is only partly detected by the sensor. The
black part is not detected. The dark blue color in the depth image indicates
undefined distances. The lower left image shows the two-dimensional
plot from the laser scanner in which the red line illustrates where the
beam from the upper laser scanner hits the floor. Detections of the truck
are shown at (0, 0).
FIG. 12 Field test from a warehouse where a structured light camera is used. A metal
cage is placed in front of the truck. The thin structures are detected by the
sensor. The dark blue color in the depth image indicates undefined distances.
The lower left image shows the two-dimensional plot from the laser scanner in
which the red line illustrates where the beam from the upper laser scanner hits
the floor. Detections of the truck are shown at (0, 0).
tine were used as obstacles. Both sensors detected the tine. However, the TOF cam-
era had problems detecting the black tine model, while the structural light camera
performed better. The results show that a black color with low reflectance is hard
for TOF and structural light cameras to detect. They also show how difficult it is
to make a representative model of tines for a forklift. To our knowledge, making a
tine model (or a representative test of a tine model) to be used in a safety standard
is still an open issue.
To determine a good placement for a three-dimensional sensor, a large FOV
is necessary to cover the desired detection volume. The placement of the sensor
is all the more important given the poor sensor performance in detecting black plates at
nonperpendicular angles. Active three-dimensional sensors currently on the market
have a limited FOV. Sensors with a larger FOV would be more suitable for this task,
provided they do not lose the resolution needed to detect thin structures. This may require infrared light
with more power. It may also require different structural light patterns more suitable
for detecting obstacles in an industrial environment.
ACKNOWLEDGMENTS
As a part of the Automatic Inventory and Mapping of Stock project, this work is sup-
ported by the Swedish Knowledge Foundation and by industry partners Kollmorgen,
Optronic, and Toyota Material Handling Europe.
References
[1] Material Handling Institute, “AGVS Quarterly Report,” MHI, Charlotte, NC, Summer 2012.
[2] European Federation of Materials Handling, “World Industrial Truck Statistics,” Informa-
tion Sheet, July 2012, Frankfurt/Main, http://www.fem-eur.com/data/File/N460-WITS_
fact_sheet_2012_FEM_corr2.pdf (accessed April 4, 2016).
[3] European Committee for Standardization (CEN), “Safety of Industrial Trucks—Driverless
Trucks and Their Systems,” CEN, Brussels, Belgium, 1997.
[4] ANSI/ITSDF B56.5, “Safety Standard for Driverless, Automatic Guided Industrial
Vehicles and Automated Functions of Manned Industrial Vehicles,” Industrial Truck
Standards Development Foundation, Washington, DC, 2012.
[5] Bostelman, R., Hong, T., and Madhavan, R., “Towards AGV Safety and Navigation
Advancement Obstacle Detection Using a TOF Range Camera,” Proceedings of the
12th International Conference on Advanced Robotics, Seattle, WA, July 18–20, 2005,
Institute of Electrical and Electronics Engineers (IEEE), New York, 2005, pp. 460–467.
[6] Bostelman, W. and Shackleford, R., “Time of Flight Sensors Experiments Towards Vehicle
Safety Standard Advancements,” Draft submitted to the Computer Vision and Image
Understanding special issue on Time of Flight Sensors, 2010.
[7] Bostelman, R., Norcross, R., Falco, J., and Marvel, J., “Development of Standard
Test Methods for Unmanned and Manned Industrial Vehicles Used Near Humans,” Pro-
ceedings of the SPIE Defense, Security, and Sensing Conference, Baltimore, MD, April
29–May 3, 2013.
STP 1594, 2016 / available online at www.astm.org / doi: 10.1520/STP159420150052
ABSTRACT
This chapter describes innovative sensing technologies and control techniques
that aim at improving the performance of groups of automated guided
vehicles (AGVs) used for logistics operations in industrial environments. We
explicitly consider the situation where the environment is shared among AGVs,
manually driven vehicles, and human operators. In this situation, safety is a
major issue that always needs to be guaranteed while still maximizing the
efficiency of the system. This paper describes some of the main achievements
of the PAN-Robots European project.
Manuscript received June 15, 2015; accepted for publication November 6, 2015.
1 University of Modena and Reggio Emilia, Dept. of Sciences and Methods for Engineering, via Amendola 2,
42122 Reggio Emilia, Italy
2 ASTM Workshop on Autonomous Industrial Vehicles: From the Laboratory to the Factory Floor on
May 26–30, 2015 in Seattle, Washington.
Introduction
Production flow of goods in manufacturing plants has been highly automated in
the last decades, primarily to reduce costs and avoid unsafe working conditions.
Manufacturing plants often need warehouses for raw materials and final products
at the beginning and at the end of the production line. Although production often
is automated to a large extent, logistics typically are only marginally automated
and generally require manual operations performed by human workers and hand-
operated forklifts. Therefore, logistics, which are not fully integrated into the manu-
facturing processes, cause inefficiencies as well as high-risk working conditions for
employees [1]. Factory logistics are crucial for overall production flow, and logisti-
cal weaknesses affect production efficiency and the quality of goods delivery, partic-
ularly in terms of product traceability. Bottlenecks and problems in warehouse
logistics heavily impact factory competitiveness in the market.
Warehousing in factories of the future can rely on automated guided vehicles
(AGVs) and integrated systems for the complete handling of logistical operations
(Fig. 1). Nowadays these autonomous systems have a market of only a
few thousand vehicles sold every year, and they are not yet ready to be in wide-
spread use in manufacturing plants. In fact, safety, efficiency, and plant installa-
tion costs are still ongoing problems, and technology is not mature enough to
fully support a pervasive diffusion of AGVs. Therefore, innovations that address
the weaknesses of AGVs and automated warehouse systems will boost the capabilities of
these logistical solutions, bringing them toward pervasive diffusion in modern
factories.
This chapter is organized as follows: First, we describe the system under con-
sideration and introduce the problem to be solved. Related works on multisensor
data fusion are outlined and the proposed centralized data fusion methodology is
discussed. The results of the data fusion methodology are then expanded to define
advanced navigation strategies. This is followed by concluding remarks.
Problem Definition
In this chapter we consider technological issues related to AGV systems used for
factory logistics [2,3]. Several research groups have been working on AGV systems
in the last few decades. Tsumura presents a comprehensive survey of the relevant
literature [4], describing the main technologies adopted for localization
and guidance of AGVs in industrial environments. Stouten and de Graaf [5]
describe the use of multiple AGVs for cooperative transportation of huge and
heavy loads.
Generally speaking, AGV systems are used for moving goods among different
positions in the industrial environment [6,7]. Each movement operation is generally
referred to as a mission. Different kinds of missions can be performed—pallets of
goods can be transported from the end of an automated production line to the
warehouse, from the warehouse to the shipment, or between two locations of the
warehouse. Typically, goods prepared in automated production lines need to be
picked up from a wrapper or from a palletizer.
AGVs are exploited to accomplish missions in an automated manner. For this
purpose, the AGV system is handled by a centralized controller—usually referred
to as a warehouse management system—that is in charge of assigning each mission
to be completed to a specific AGV. Once each mission has been assigned to a
specific AGV, then the centralized controller needs to coordinate the motion of the
AGVs themselves for mission completion. When dealing with a single AGV, several
strategies can be exploited for single-robot path planning (e.g., Martinez-Barberá
and Herrero-Peréz [8]). Conversely, when multiple AGVs share the same environ-
ment, coordination strategies need to be adopted in order to optimize the traffic.
Typically, the central controller is in charge of coordinating the motion of the
AGVs [9–13]. In order to simplify the coordination and enhance the safety of
operations, AGVs often are constrained to move along a predefined set of roads,
referred to as a road map (Fig. 2).
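As an illustration of these two responsibilities of the central controller, the sketch below assigns a mission to the cheapest AGV and plans its route on a toy road map with Dijkstra's algorithm. The node names, edge costs, and greedy assignment rule are illustrative assumptions, not the PAN-Robots implementation.

```python
# An illustrative sketch of the two central-controller steps described
# above: assign each mission to an AGV, then plan its route on the
# road map using Dijkstra's algorithm.

import heapq

def dijkstra(roadmap, start, goal):
    """Shortest path on a directed road map {node: [(neighbor, cost)]}."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, edge in roadmap.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + edge, nxt, path + [nxt]))
    return float("inf"), []

def assign_mission(agv_positions, roadmap, pickup):
    """Greedy assignment: the AGV with the cheapest route to the pickup."""
    return min(agv_positions,
               key=lambda agv: dijkstra(roadmap, agv_positions[agv], pickup)[0])

if __name__ == "__main__":
    # A small monodirectional road map linking wrapper, warehouse, shipment.
    roads = {"wrapper": [("junction", 2.0)],
             "junction": [("warehouse", 3.0), ("shipment", 4.0)],
             "warehouse": [("wrapper", 5.0), ("junction", 3.0)],
             "shipment": [("warehouse", 6.0)]}
    agvs = {"agv1": "warehouse", "agv2": "shipment"}
    chosen = assign_mission(agvs, roads, "wrapper")
    cost, path = dijkstra(roads, agvs[chosen], "wrapper")
    print(f"assigned {chosen}, route {path}, cost {cost}")
```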
Next, we will summarize the main characteristics of AGV systems typically
adopted in modern automated warehouses.
SYSTEM INSTALLATION
Automated motion of AGVs requires precise and constantly updated knowledge of
their current position. This is typically obtained by using laser-based technologies [14]
that provide very reliable results. In particular, laser-based localization is obtained by
each vehicle, computing its relative position with respect to opportunely placed
artificial landmarks (i.e., reflectors). A precise knowledge of the landmarks map is
mandatory for obtaining a highly precise localization. Moreover, the position of the
landmarks themselves has a great influence on the localization accuracy; optimal land-
mark placement is addressed, for instance, by Beinhofer, Muller, and Burgard [15].
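To make the landmark-based localization idea concrete, here is a simplified sketch that estimates a vehicle's planar position from ranges to identified reflectors by linear least squares. Real systems typically fuse bearing measurements and odometry with more robust estimators; the reflector layout and ranges below are assumptions.

```python
# A simplified sketch of laser-reflector localization: with the landmark
# map known, the vehicle's (x, y) can be estimated from measured ranges
# to identified reflectors via linearized least squares.

import numpy as np

def trilaterate(landmarks, ranges):
    """Linear least-squares position fix from >= 3 range measurements.

    Subtracting the first range equation from the others removes the
    quadratic terms, leaving a linear system in (x, y).
    """
    L = np.asarray(landmarks, dtype=float)
    r = np.asarray(ranges, dtype=float)
    x0, y0 = L[0]
    A = 2.0 * (L[1:] - L[0])
    b = (r[0]**2 - r[1:]**2
         + np.sum(L[1:]**2, axis=1) - (x0**2 + y0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

if __name__ == "__main__":
    reflectors = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0), (10.0, 8.0)]
    truth = np.array([4.0, 3.0])
    meas = [np.linalg.norm(truth - np.array(p)) for p in reflectors]
    print("estimated position:", trilaterate(reflectors, meas))
```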
For large plants, hundreds to thousands of reflectors are necessary for obtaining
accurate localization. Hence, one of the first phases of plant installation consists
of the coordination algorithm both contribute to the overall efficiency of the system.
Typically, specific features of the road map are handled by adding exceptions to the
coordination algorithm in the form of traffic rules [16].
Moreover, although a road map is a very effective manner of reducing the
computational resources needed for traffic management, constraining the motion
of the AGVs on a finite set of roads severely reduces the flexibility of the system.
For instance, if an obstacle appears on an AGV’s road, it is necessary to replan the
path to circumvent the obstacle. When an alternative path is not available, traffic
jams might occur that can be solved only with the intervention of an operator who
would manually remove the obstacle.
Moreover, they have a limited field of view and thus do not provide a complete view
of the surrounding environment; sensors are limited to predefined areas. Therefore,
AGVs need to greatly reduce their speed in critical zones in order to guarantee safe
behavior in response to unpredictable situations.
Along the same lines, although road maps are a very effective manner of reduc-
ing the computational resources needed for traffic management, constraining the
motion of the AGVs on a finite set of roads severely reduces the flexibility of
the system. In particular, this reduced flexibility clearly affects the performance of
the system in the presence of unforeseen obstacles. In fact, if an obstacle suddenly
appears in front of an AGV, it is necessary to replan the AGV’s path in order
to avoid collisions with the obstacle. If AGVs are constrained on the road map,
replanning means finding an alternative path, which is not always feasible.
Consider, for instance, the frequent case of monodirectional roads. In this case, if
an alternative path cannot be found on the road map, the AGV is stuck in one spot
until the obstacle has been removed.
Two main issues prevent the application of advanced control strategies that
would greatly increase the performance of the system.
First, laser scanners are the most common sensing devices that typically are
mounted on each AGV. Although these devices are very effective in guaranteeing
safety, they are not suitable for obtaining a reliable classification of the acquired
object. In particular, it is not possible to distinguish between humans and other
kind of obstacles. This is very relevant because humans act in an unpredictable
manner; therefore, for safety reasons, it is not possible to assume any knowledge
about the intentions of the humans themselves. Hence, if a human is within
the sensing range of an AGV, the only safe procedure is to avoid any movement.
Conversely, static obstacles do not make any unpredictable movement; hence, in
principle, they could be safely passed using a local detour. However, the impossibil-
ity of reliably distinguishing between humans and other kind of obstacles prevents
the implementation of this kind of advanced control technique.
Second, sensor systems installed on board each AGV are not able to acquire
global information about the surrounding environment. Roughly speaking, they
cannot look around corners. Hence, when approaching an intersection, it is neces-
sary for the AGV to slow down in order to ensure safety in the presence of unex-
pected moving objects (or humans).
CONTRIBUTION
Several studies address issues related to system installation [17–19], where methodolo-
gies are described for obtaining (in a semi-automated manner) a three-dimensional
map of an industrial environment, which can be subsequently exploited for automati-
cally designing a road map.
Subsequent motion coordination has also been considered [20–23], with
methodologies proposed for optimizing the coordination of the vehicles along the
road map, taking into account the model of the traffic.
Related Works
In this section, we briefly analyze the most relevant literature related to multisensor
data fusion. Multisensor data fusion deals with the combination of information
coming from multiple sources in order to provide a robust and accurate represen-
tation of an environment or process of interest. A review and discussion of several
data fusion definitions is presented by Boström et al. [31].
The Joint Directors of Laboratories [32] define data fusion as “a process dealing
with the association, correlation, and combination of data and information from
single and multiple sources to achieve refined position and identity estimates, and
complete and timely assessments of situations and threats, and their significance.
The process is characterized by continuous refinements of its estimates and assess-
ments, and the evaluation of the need for additional sources, or modification of the
process itself, to achieve improved results.”
Multisensor data fusion is a multidisciplinary technology that involves several
application domains, such as robotics [33,34], military applications [35], biomedi-
cine [36,37], wireless sensor networks [38], and video and image processing [39].
Significant attention has been dedicated to the field in recent years; a review of
contemporary data fusion methodologies, as well as the most recent developments
and emerging trends in the research field, is presented by Khaleghi et al. [40].
Different criteria can be used for the classification of data fusion techniques, as
discussed by Castanedo [41]. Considering the characteristics of the utilized data,
Luo, Yih, and Su [42] propose four types of abstraction: signal level, pixel level,
characteristic (based on features extracted from images or signals), and symbols (or
decision). More generally, we address three main levels of abstraction: measure-
ments, features, and decisions.
Another possible classification relative to the data abstraction level concerns
the following:
• Low-level fusion: This level deals directly with raw data to improve the
accuracy of the individual sources.
• Medium-level fusion: This is based on the processing of features or character-
istics (dimension, shape, position). This level is also known as the feature
or characteristic level.
• High-level fusion: Also known as decision fusion, this level addresses symbolic
representation, such as object classes.
• Multiple-level fusion: This level is based on the processing of data provided
at different levels of abstraction.
Sensor fusion can be also characterized, as introduced by Durrant-Whyte [43],
based on the relationship among the fused data, namely:
• Complementary, where each sensor provides incomplete information about
the world, and the objective of data fusion is combining these different parts
to achieve a more complete and accurate representation.
• Competitive, where information about the same target is provided by two or
more sources, and data fusion is used to increase reliability and accuracy,
reducing conflicts as well as noisy and erroneous measurements.
• Cooperative, in which the information provided by different sources is com-
bined into new and, typically, more complex information.
Finally, considering the implementation architecture, it is possible to distinguish
three main types of data fusion categories: centralized, distributed, and hierarchical:
• In a centralized architecture, a single module collects information from all the
input sources and makes decisions according to the received raw data. The
principal drawbacks of this solution are the possibility of a communication
bottleneck and the large amount of bandwidth requested to transmit raw data
over the network.
• In a distributed architecture, source nodes process raw data independently
and provide an estimation of the object status based on only their local views;
this information is the input to the multisensor fusion, which provides a fused
global view.
• Hierarchical architectures are combinations of decentralized and distributed
nodes in which the data fusion process is performed at different levels in the
hierarchy.
GLOBAL ENVIRONMENT REPRESENTATION
In this section, we present the main methodologies that can be found in the
literature for obtaining a constantly updated global environment representation as
a result of multisensor data fusion.
The focus is to obtain a global live view of the environment that contains both
structural elements and dynamic entities acquired by sensors. In other words, this
will define a global map that mainly will contain information about the static and
dynamic obstacles detected in the sensors’ surrounding area. Hence, it is necessary
to analyze methodologies for multisensor data fusion that are focused on obstacle
detection. Sensor fusion methods are particularly common in the obstacle detection
field to achieve improved accuracies that could not be guaranteed by the use of a
single sensor [44–47].
Occupancy grids [48,49] are among the most commonly utilized low-level
sensor fusion strategies. They are often used in robotics to detect and track moving
objects and for simultaneous localization and mapping as well as path planning.
They allow automatic generation of a discrete map of the environment, represent-
ing the area of interest as a grid of two- or three-dimensional cells of equal size.
Originally designed for sonar data fusion, occupancy grid approaches have been
extended for fusion of stereo and optical flow data [50] and, under certain circum-
stances, for fusion of monocular camera data [51]. Compared with feature-based
approaches [52], grid maps are particularly flexible and robust for the fusion of
noisy information. In fact, they allow the integration of different kinds of input
data in the same framework, while considering the inherent uncertainty of each
input sensor. Fast inverse models [49,53] or, alternatively, more accurate forward
models [54], can be utilized as occupancy mapping algorithms for the update of the
grid cells. The major drawback of fixed grid structures is their large memory
requirement, especially during their initialization phase. Moreover, the extent of
the mapped environment needs to be known beforehand; otherwise, every time the
map is expanded, high-cost operations must be performed to increase the size of
the utilized memory.
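To make the grid idea concrete, a minimal log-odds occupancy update is sketched below; the grid size, the log-odds increments, and the synthetic beam are illustrative assumptions, not values from the cited works.

```python
# A minimal log-odds occupancy grid sketch, illustrating the low-level
# fusion idea described above with a simple inverse sensor model.

import numpy as np

class OccupancyGrid:
    def __init__(self, size=100, l_occ=0.85, l_free=-0.4):
        self.logodds = np.zeros((size, size))
        self.l_occ, self.l_free = l_occ, l_free

    def update_ray(self, cells_along_ray, hit):
        """Inverse sensor model: free space along the beam, occupied at
        the endpoint if the beam returned a hit."""
        for (i, j) in cells_along_ray[:-1]:
            self.logodds[i, j] += self.l_free
        if hit:
            i, j = cells_along_ray[-1]
            self.logodds[i, j] += self.l_occ

    def probabilities(self):
        return 1.0 / (1.0 + np.exp(-self.logodds))

if __name__ == "__main__":
    grid = OccupancyGrid()
    beam = [(50, j) for j in range(50, 60)]  # one discretized laser beam
    for _ in range(3):                        # fuse three scans
        grid.update_ray(beam, hit=True)
    p = grid.probabilities()
    print(f"endpoint occupancy: {p[50, 59]:.2f}, free cell: {p[50, 55]:.2f}")
```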
Octrees [55] are a means of coping with these limitations; they are hierarchical
tree-based representations that delay the initialization of map volumes until meas-
urements need to be integrated. Thus, the map is populated only with volumes that
have been measured, and the hierarchical structure of the trees also can be used as a
multiresolution representation.
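The lazy-allocation idea can be sketched with a hash map of voxel keys standing in for a full tree; OctoMap [55] additionally maintains the tree hierarchy that enables multiresolution queries. Names and values below are illustrative only:

    import math

    class SparseVoxelMap:
        def __init__(self, resolution):
            self.res = resolution
            self.voxels = {}               # no volume is allocated up front

        def key(self, x, y, z):
            return (math.floor(x / self.res),
                    math.floor(y / self.res),
                    math.floor(z / self.res))

        def integrate(self, point, delta=0.85):
            # the voxel is created lazily, on its first measurement
            k = self.key(*point)
            self.voxels[k] = self.voxels.get(k, 0.0) + delta

    vmap = SparseVoxelMap(resolution=0.05)
    vmap.integrate((1.02, 0.40, 0.73))     # only this voxel now exists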
An alternative to grid-based methods is the sensor fusion strategy presented by
Jung et al. [46] for obstacle detection/classification in an active pedestrian protec-
tion system. Range data provided by a laser scanner are fused with images coming
from a camera, obtaining a set of images representing vehicle and pedestrian candi-
dates. These images are used as input for two pattern classifiers (one for vehicle
detection and the other for pedestrian detection) implemented by a support vector
machine with a Gabor filter bank [56].
It is worth noting that these approaches require the processing of low-level
information (images, three-dimensional point clouds, raw laser data) at the data
fusion level. Therefore, despite their accuracy and robustness, they are not
suitable for a global live view implementation.
Conversely, in order to optimize the data transmission time and reduce the
network overhead, we consider a hierarchical data fusion strategy that processes
only medium-level features (identification [ID], age, position, orientation, velocity,
and size) and high-level features (class and classification quality).
Medium Level
Generally speaking, medium-level data fusion entails processing the object meas-
urements (ID, age, position, orientation, velocity, and size) estimated—possibly
with uncertainty—by the different sensing sources available in the system.
Thus, from a medium-level point of view, the data fusion problem can be dealt
with as a target tracking process focused on maintaining the state estimates of one
or several objects over a period of time.
In a multisensor framework, because object tracking is performed by different
sensing sources, distributed track fusion methodologies can be utilized for the
medium-level data fusion implementation, including both maximum likelihood
[57] and minimum mean square error solutions.
When considering reliability, survivability, communication bandwidth, and
computational resources, distributed processing architectures are more practical
solutions than the centralized ones. As highlighted by Kalandros et al. [45], in a
distributed architecture, the sources transmit only the target tracks instead of all
measurements; this reduces the cost in computational demand as well as in com-
munication bandwidth requirements. The drawback of dividing the tracking task
among multiple processors is the possible introduction of correlated errors among
the tracks [58]. While measurements of a target from different sensors generally are
uncorrelated, different local track estimates for a common target are correlated,
requiring additional processing. To cope with the introduction of correlation in the
estimation, the cross covariances among track estimates must be computed.
Because this calculation is computationally expensive, it is possible to utilize meth-
odologies based on direct track-to-track fusion [45] or, alternatively, to treat the
decorrelation of the state estimates [59]. In both cases, local source processors must
send additional information to the global level, such as covariances and correspond-
ing measurement matrices.
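For concreteness, the sketch below shows the textbook information-weighted combination of two local tracks of the same target. It is exact only when the local estimates are uncorrelated, which is precisely the assumption that the cross-covariance treatment discussed above relaxes; the code is ours, not from [45] or [59]:

    import numpy as np

    def fuse_tracks(x1, P1, x2, P2):
        """Fuse state estimates x1, x2 with covariances P1, P2 (cross terms ignored)."""
        P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
        P = np.linalg.inv(P1i + P2i)           # fused covariance
        x = P @ (P1i @ x1 + P2i @ x2)          # fused state
        return x, P

    # Example: two 2-D position tracks of one object from different sources
    x, P = fuse_tracks(np.array([1.0, 2.0]), np.eye(2) * 0.5,
                       np.array([1.2, 1.9]), np.eye(2) * 0.8)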
An alternative approach for the implementation of medium-level data fusion is
the use of a heuristic based on the evaluation of the obstacles’ occupational area.
Starting from the bounding boxes delimiting the obstacles detected by the source
sensors, the algorithm considers their positions, orientations, and occupational
overlapping in order to reconstruct a two-dimensional/three-dimensional map con-
taining the set of blobs corresponding to the region covered by each candidate.
Integrating the information about the velocities and directions estimated for
each tracked obstacle, it is possible to discriminate between static and dynamic
obstacles. Then, split and merge techniques [60] can be utilized to resolve conflicts
in the discrimination among blobs that may represent different views of the same
object or, alternatively, separated elements.
If necessary, the information representing the fused obstacles can subsequently
be integrated in a grid map on which free space and unknown regions are modeled,
supporting the implementation of path planning and navigation functions.
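A minimal sketch of this heuristic follows; the greedy single-pass merge, the speed threshold, and all names are our illustrative assumptions, and a complete implementation would add the split and merge step of [60]:

    def overlaps(a, b):
        # boxes as (xmin, ymin, xmax, ymax)
        return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

    def merge(a, b):
        return (min(a[0], b[0]), min(a[1], b[1]),
                max(a[2], b[2]), max(a[3], b[3]))

    def fuse_boxes(boxes):
        """Greedily merge overlapping bounding boxes into blobs (one pass)."""
        blobs = []
        for box in boxes:
            for i, blob in enumerate(blobs):
                if overlaps(box, blob):
                    blobs[i] = merge(box, blob)
                    break
            else:
                blobs.append(box)
        return blobs

    def is_dynamic(velocity, threshold=0.1):
        # a speed above a small threshold marks the obstacle as dynamic
        return (velocity[0] ** 2 + velocity[1] ** 2) ** 0.5 > threshold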
High Level
Generally speaking, high-level data fusion entails combining the classes estimated by
each sensing source; hence, these methodologies solve classifier combination problems.
In particular, each sensing source can be represented as a classifier. Then, in a
multisensor framework, we have a set of classifiers that, given an input pattern,
provide an output score for each possible class of the system (e.g., human, manual
forklift, AGV, other dynamic and static objects). This value represents a confidence
measure for the class to be the correct one for the input pattern. Therefore, accord-
ing to the type of the classifiers’ outputs, methods for classifier combination
at the measurement level (or Type III [61]) can be utilized.
As discussed by Tulyakov et al. [62], it is possible to distinguish between two
main categories of combination methods: score combination functions and combi-
nation decisions. In the first approach, a function is used to combine the classifiers’
scores in a predetermined manner. Conversely, in the second method, the classi-
fiers’ outputs are used as inputs to a secondary classifier that is trained to determine
the combination function.
Simple aggregation schemes at the measurement level, such as sum rule,
product rule, average rule, and max rule are all examples of score combination
functions. Despite their simplicity, these elementary combination rules compete
with the most sophisticated combination methods, as highlighted by Kuncheva,
Bezdek, and Duin [63]. Although they demonstrate high recognition rates, the
simple aggregation schemes do not allow determining a priori which is the best
rule for a particular data set.
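These rules are compact enough to state directly in code. In the sketch below (ours), scores[i, j] is classifier i's confidence in class j; each rule reduces over the classifiers, and the fused decision is the argmax:

    import numpy as np

    def combine_scores(scores, rule="sum"):
        reducers = {"sum": np.sum, "product": np.prod,
                    "average": np.mean, "max": np.max}
        fused = reducers[rule](scores, axis=0)    # one fused score per class
        return int(np.argmax(fused)), fused

    # Example: three sensing sources scoring four classes
    # (human, manual forklift, AGV, static object)
    scores = np.array([[0.7, 0.1, 0.1, 0.1],
                       [0.5, 0.2, 0.2, 0.1],
                       [0.6, 0.1, 0.2, 0.1]])
    best_class, fused = combine_scores(scores, rule="product")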
Alternatively, when dealing with high-level data fusion, it is possible to utilize
complex combination decision methodologies, such as neural networks [64], naive
Bayes [61], Dempster-Shafer theory [65], and classifier ensembles, such as bagging
[66] or boosting [67].
In general, a drawback of these techniques concerns the complexity of the
training scheme. Moreover, when an obstacle does not appear in the field of view
of a perception system, no classification hypotheses are provided by that system;
in the data fusion process, a missing response from a classifier does not mean unre-
liability. For these reasons, simple aggregation rules may be a better solution for the
implementation of high-level data fusion.
FIG. 5 The general system architecture designed for obstacle data detection, tracking,
classification, and fusion.
Thus, in the proposed architecture, the information about the obstacles in the
scene may be provided by several sources, involving the possibility of data redun-
dancy, inconsistency, ambiguity, noise, and incompleteness. To overcome these
problems, the global live view is introduced as a module that collects all data
acquired by the sensors and combines them in a unique, complete, and coherent
representation of the overall system, including the static and dynamic entities that
act inside it. In particular, the global live view gathers higher-quality information
(with respect to information based on local sensing only) and provides a global
updated map representing the static entities (the three-dimensional map of the
plant—the road map), the dynamic entities (the current position and velocity of
the AGVs, the position and velocity of currently identified objects), the congestion
zones, and the status of the monitored intersections.
In general, the information acquired by the infrastructure and on-board
perception systems consists of tracked and classified objects that are identified with
a unique ID. In detail, data regarding each object are:
• Position, orientation, velocity, size
• Class of the objects: human, manual forklift, AGV, other dynamic object, static
object
• An assessment regarding the quality and reliability of the classification
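A possible record for one such tracked object, mirroring the fields listed above, is sketched below; the field names and types are our assumptions:

    from dataclasses import dataclass

    @dataclass
    class TrackedObject:
        object_id: int            # unique ID assigned by the perception system
        position: tuple           # (x, y, z) [m]
        orientation: float        # heading [rad]
        velocity: tuple           # (vx, vy) [m/s]
        size: tuple               # bounding-box dimensions [m]
        object_class: str         # human, manual forklift, AGV, other, static
        class_quality: float      # classification confidence in [0, 1]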
The global live view is then updated with the information acquired during
the operation and a real-time global map is generated. This output is shared with
the AGV fleet in order to improve their local on-board navigation capabilities and
to support safe operations.
It is important to guarantee consistency with respect to the real world; each
virtual object represented on the map must correspond to a real-world object.
Therefore, the global live view performs data fusion to merge data acquired from
the different sensors, reducing information redundancy and verifying the presence
of data inconsistency and ambiguity. In particular, we propose a two-level method-
ology that separately implements medium- and decision-level data fusion.
MEDIUM LEVEL
In the described architecture, dealing with data fusion at the medium level means
processing the object measurements (ID, age, position, orientation, velocity, and
size), estimated with uncertainty by the on-board and infrastructure systems, as
well as the elements inside the static map of the environment.
Thus, from a medium level point of view, we introduce a heuristic based on the
evaluation of the obstacle’s occupational area; the main steps of this solution are
represented in Fig. 6. Starting from the bounding boxes delimiting the obstacles
detected by the source sensors, the algorithm considers their positions, orientations,
and occupational overlapping in order to reconstruct a two-dimensional/three-
dimensional map containing the set of blobs corresponding to the region covered
by each candidate. Integrating the information about the velocities and directions
estimated for each tracked obstacle, it is possible to discriminate between static and
dynamic obstacles. Then, split and merge techniques [60] are utilized to resolve
conflicts in the discrimination between blobs that may represent different views
of the same object or, alternatively, separated elements. The information represent-
ing the fused obstacles is then integrated in a grid map on which free space and
unknown regions are modeled, supporting the implementation of path planning
and navigation functions. (Details are provided in the text section on advanced
navigation strategies.)
HIGH LEVEL
The choice of the data fusion strategies for the implementation of the global live
view can be considered, from a high level point of view, as a classifier combination
problem. According to this problem formulation, the static three-dimensional map
FIG. 6 Principal steps of the heuristic for the global live view implementation.
of the environment, the on-board sensor systems, and the infrastructure perception
systems represent a set of classifiers that, given an input pattern, provide an output
score for each possible class of the system (human, manual forklift, AGV, other
dynamic object, static object). This value represents a confidence measure for the
class to be the correct one for the input pattern.
Several methods can be found in the literature for solving the problem of classi-
fier combination at the measurement level (or Type III [69]). Among these meth-
ods, we propose to exploit simple aggregation schemes at the measurement level,
such as sum rule, product rule, average rule, and max rule. Despite their simplicity,
these elementary combination rules compete with the more sophisticated combina-
tion methods, as is highlighted by Kuncheva, Bezdek, and Duin [70]. Moreover,
these methodologies are well suited for real-time implementation, which is manda-
tory in this kind of application.
The coordination of the AGVs along the road map can be performed by exploit-
ing the strategy presented by Digani et al. [20]. In particular, this coordination
strategy consists of a hierarchical control architecture composed of two layers.
The higher level performs the coordination over macro areas of the environment
called sectors, while the lower level considers the coordination within each sector.
A portion of the road map divided into sectors is depicted in Fig. 7.
Based on the hierarchical division of the road map, it is possible to introduce
a definition for a traffic model that takes into account both the number of vehicles
and the presence of obstacles within each sector. Mission assignment and motion
coordination are then performed, taking into account a suitably weighted
road map. In particular, as described in detail by Sabattini et al. [71], we consid-
ered the following model: Each sector is characterized by a certain value of
capacity C that represents the maximum number of allowed vehicles. Let n_k(t) be
the number of vehicles in the k-th sector at time t. Then, the weight was defined
as follows:
w_k(t) = \left( 1 - \frac{n_k(t)}{C} \right)^{-1}   (1)
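Assuming the reconstruction of Eq 1 above, the weight can be computed as follows (a sketch; see [71] for the complete model). The weight grows without bound as a sector fills toward its capacity, steering mission assignment away from congested sectors:

    def sector_weight(n_k, capacity):
        """w_k(t) = (1 - n_k(t)/C)^(-1); requires n_k < C."""
        return 1.0 / (1.0 - n_k / capacity)

    w = sector_weight(8, 10)    # 8 vehicles in a sector of capacity 10 -> 5.0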
FIG. 8 Average number of missions accomplished per hour: percentage increase with
respect to the nominal case (i.e., k2 = 0), for different values of the capacity C.
FIG. 9 Local deviation from the road map for obstacle avoidance; obstacles are identified
with numbers. For obstacle number 1, which needs to be avoided, the bounding
box is depicted as well. Furthermore, the picture shows the original (straight line)
portion of the road map and the computed local deviation (curved line).
Simulation results are reported hereafter for k1 = 1 and k2 = 10, for different values
of the capacity C ∈ [2, …, 30].
Conclusions
Advanced sensing technologies, together with centralized data fusion systems,
represent a very effective tool for improving the efficiency of multi-AGV systems
that share the environment with human operators, where safety is a primary issue.
Despite the availability of several technological solutions that exhibit good per-
formance in a laboratory environment, a significant effort is necessary to bring
those technologies to real working environments. The results obtained within the
PAN-Robots project represent a significant step in this direction, bringing to-
gether researchers from academia and industry to develop reliable solutions and
validate them in real factory environments.
Real-world implementation and validation, performed in cooperation with
industry, represents a fundamental milestone toward the definition of new safety
and technological regulations and standards that take into account state-of-
the-art technology. The definition of regulations and standards will lead to the
possibility of a massive deployment of advanced sensing solutions in industrial
environments.
ACKNOWLEDGMENTS
This paper was written within the PAN-Robots project. The research leading to these
results has received funding from the European Union Seventh Framework Pro-
gramme (FP7/2007-2013) under grant agreement no. 314193.
References
[1] European Commission, “Eurostat,” European Union, Brussels, Belgium, 2016, http://
ec.europa.eu/eurostat (accessed January 30, 2015).
[2] Sabattini, L., Digani, V., Secchi, C., Cotena, G., Ronzoni, D., Foppoli, M., and Oleari, F.,
“Technological Roadmap to Boost the Introduction of AGVs in Industrial Applications,”
Proceedings of the IEEE International Conference on Intelligent Computer Communi-
cation and Processing (ICCP) , Cluj-Napoca, Romania, September 5–7, 2013, Institute of
Electrical and Electronics Engineers (IEEE), New York, 2013, pp. 203–208.
[3] Oleari, F., Magnani, M., Ronzoni, D., and Sabattini, L., “Industrial AGVs: Toward a
Pervasive Diffusion in Modern Factory Warehouses,” Proceedings of the 2014 IEEE
International Conference on Intelligent Computer Communication and Processing
(ICCP) , Cluj-Napoca, Romania, September 4–6, 2014, Institute of Electrical and Electron-
ics Engineers, New York, 2014, pp. 233–238.
[4] Tsumura, T., “AGV in Japan—Recent Trends of Advanced Research, Development, and
Industrial Applications,” Proceedings of the IEEE/RSJ/GI International Conference on
Intelligent Robots and Systems ’94. “Advanced Robotic Systems and the Real World,”
IROS ’94. , Vol. 3, Munich, September 12–16, 1994, IEEE, New York, 1994, pp. 1477–1484.
[5] Stouten, B. and de Graaf, A. J., “Cooperative Transportation of a Large Object-
Development of an Industrial Application,” in Proceedings of the 2004 IEEE International
Conference on Robotics and Automation, ICRA ’04, Vol. 3, Barcelona, Spain, April
26–May 1, 2004, IEEE, New York, 2004, pp. 2450–2455.
[6] Mahadevan, B. and Narendran, T. T., “Design of an Automated Guided Vehicle-Based
Material Handling System for a Flexible Manufacturing system,” The International Jour-
nal of Production Research , Vol. 28, No. 9, 1990, pp. 1611–1622.
[7] Berman, S. and Edan, Y., “Decentralized Autonomous AGV System for Material
Handling,” International Journal of Production Research, Vol. 40, No. 15, 2002,
pp. 3995–4006.
[8] Martínez-Barberá, H. and Herrero-Pérez, D., “Autonomous Navigation of an Automated
Guided Vehicle in Industrial Environments,” Robotics and Computer-Integrated Manu-
facturing, Vol. 26, No. 4, 2010, pp. 296–311.
[9] Wurman, P. R., D’Andrea, R., and Mountz, M., “Coordinating Hundreds of Cooperative,
Autonomous Vehicles in Warehouses,” AI Magazine, Vol. 29, No. 1, 2008, p. 9.
[10] Olmi, R., Secchi, C., and Fantuzzi, C., “Coordination of Multiple AGVs in an Industrial
Application,” Proceedings of the 2008 IEEE International Conference on Robotics and
Automation , Pasadena, CA, May 19–23, 2008, IEEE, New York, 2008, pp. 1916–1921.
[11] Olmi, R., Secchi, C., and Fantuzzi, C., “Coordination of Industrial AGVs,” International
Journal of Vehicle Autonomous Systems, Vol. 9, No. 1, 2011, pp. 5–25.
[12] Herrero-Pérez, D. and Martínez-Barberá, H., “Decentralized Coordination of Autonomous
AGVs in Flexible Manufacturing Systems,” Proceedings of the IEEE/RSJ International
Conference on Intelligent Robots and Systems, Nice, France, September 22–26, 2008,
IEEE, New York, 2008, pp. 3674–3679.
[24] Boehning, M., “Improving Safety and Efficiency of AGVs at Warehouse Black Spots,”
Proceedings of the 2014 IEEE International Conference on Intelligent Computer
Communication and Processing, Cluj Napoca, Romania, September 4–6, 2014, IEEE,
New York, 2014, pp. 245–249.
[25] Drulea, M., Szakats, I., Vatavu, A., and Nedevschi, S., “Omnidirectional Stereo Vision
Using Fisheye Lenses,” Proceedings of the 2014 IEEE International Conference on
Intelligent Computer Communication and Processing, Cluj Napoca, Romania, September
4–6, 2014, IEEE, New York, 2014, pp. 251–258.
[26] Nagy, A. E., Szakats, I., Marita, T., and Nedevschi, S., “Development of an Omnidirectional
Stereo Vision System,” Proceedings of the IEEE International Conference on Intelligent
Computer Communication and Processing, Cluj-Napoca, Romania, September 5–7, 2013,
IEEE, New York, 2013, pp. 235–242.
[27] Aikio, M., Makinen, J. T., and Yang, B., “Omnidirectional Camera,” Proceedings of the
IEEE International Conference on Intelligent Computer Communication and Processing
(ICCP) , Cluj-Napoca, Romania, September 5–7, 2013, IEEE, New York, 2013, pp. 217–221.
[28] Reinke, C. and Beinschob, P., “Strategies for Contour-Based Self-Localization in
Large-Scale Modern Warehouses,” Proceedings of the IEEE International Conference on
Intelligent Computer Communication and Processing, Cluj-Napoca, Romania, September
5–7, 2013, IEEE, New York, 2013, pp. 223–227.
[29] Cardarelli, E., Sabattini, L., Secchi, C., and Fantuzzi, C., “Cloud Robotics Paradigm for
Enhanced Navigation of Autonomous Vehicles in Real World Industrial Applications,” Pro-
ceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems,
Hamburg, Germany, September 28–October 2, 2015, IEEE, New York, pp. 4518–4523.
[30] Sabattini, L., Cardarelli, E., Digani, V., Secchi, C., Fantuzzi, C., and Fuerstenberg, K.,
“Advanced Sensing and Control Techniques for Multi AGV Systems in Shared Industrial
Environments,” in Proceedings of the 2015 IEEE International Conference on Emerging
Technologies and Factory Automation , Luxembourg, September 8–11, 2015, IEEE, New
York, 2015, pp. 1–7.
[31] Boström, H., Andler, S. F., Brohede, M., Johansson, R., Karlsson, A., van Laere, J.,
Niklasson, L., Nilsson, M., Persson, A., and Ziemke, T., “On the Definition of Information
Fusion as a Field of Research,” University of Skövde, School of Humanities and Informatics, Tech. Rep. HS-IKI-TR-07-006, 2007.
[32] White, F. E., Data Fusion Lexicon , Joint Directors of Laboratories, Technical Panel for C3,
Data Fusion Sub-Panel, Naval Ocean Systems Center, San Diego, CA, 1986.
[33] Bellotto, N. and Hu, H., “Vision and Laser Data Fusion for Tracking People with a Mobile
Robot,” Proceedings of the IEEE International Conference on Robotics and Biomimetics
(ROBIO 2006), Kunming, China, December 17–20, 2006, IEEE, New York, 2006, pp. 7–12.
[34] Haijun, W. and Yimin, C., “Sensor Data Fusion Using Rough Set for Mobile Robots
System,” Proceedings of the 2nd IEEE/ASME International Conference on Mechatronic
and Embedded Systems and Applications, Beijing, China, August 13–16, 2006, IEEE,
New York, 2006, pp. 1–5.
[35] Bossé, E., Valin, P., Boury-Brisset, A., and Grenier, D., “Exploitation of A Priori Knowledge
for Information Fusion,” Information Fusion , Vol. 7, No. 2, 2006, pp. 161–175.
[36] Bracio, B., Horn, W., and Moller, D., “Sensor Fusion in Biomedical Systems,” Proceedings
of the 19th Annual International Conference of the IEEE Engineering in Medicine and
Biology Society, Vol. 3, Chicago, IL, October 30–November 2, 1997, IEEE, New York,
1997, pp. 1387–1390.
[37] Tulyakov, S. and Govindaraju, V., “Classifier Combination Types for Biometric
Applications,” IEEE Computer Society Conference on Computer Vision and Pattern
Recognition Workshop, New York, June 17–22, 2006, IEEE, New York, 2006, p. 58.
[38] Krishnamachari, B., Estrin, D., and Wicker, S. B., “The Impact of Data Aggregation in
Wireless Sensor Networks,” Proceedings of the 22nd International Conference on Dis-
tributed Computing Systems, Vienna, Austria, July 2–5, 2002, IEEE Computer Society,
Washington, DC, 2002, pp. 575–578.
[39] Joo, S. and Chellappa, R., “A Multiple-Hypothesis Approach for Multiobject Visual
Tracking,” IEEE Transactions on Image Processing, Vol. 16, No. 11, 2007, pp. 2849–2854.
[40] Khaleghi, B., Khamis, A., Karray, F. O., and Razavi, S., “Multisensor Data Fusion: A Review
of the State-of-the-Art,” Information Fusion , Vol. 14, No. 1, 2013, pp. 28–44.
[41] Castanedo, F., “A Review of Data Fusion Techniques,” The Scientific World Journal,
Vol. 2013, No. 6, 2013, doi:10.1155/2013/704504
[42] Luo, R., Yih, C.-C., and Su, K.-L., “Multisensor Fusion and Integration: Approaches,
Applications, and Future Research Directions,” Sensors Journal, IEEE, Vol. 2, No. 2, 2002,
pp. 107–119.
[43] Durrant-Whyte, H. F., “Sensor Models and Multisensor Integration,” International Journal
of Robotics Research, Vol. 7, No. 6, 1988, pp. 97–113.
[44] Dima, C., Vandapel, N., and Hebert, M., “Sensor and Classifier Fusion for Outdoor
Obstacle Detection: An Application of Data Fusion to Autonomous Off-Road Navi-
gation,” The 32nd Applied Imagery Pattern Recognition Workshop (AIPR 2003), Washington,
DC, October 15–17, 2003, IEEE Computer Society, Washington, DC, 2003, pp. 255–262.
[45] Kalandros, M., Trailovic, L., Pao, L. Y., and Bar-Shalom, Y., “Tutorial on Multisensor
Management and Fusion Algorithms for Target Tracking,” Proceedings of the 2004
American Control Conference, Vol. 5, Boston, MA, June 30–July 2, 2004, IEEE,
New York, pp. 4734–4748.
[46] Jung, H., Lee, Y., Yoon, P., Hwang, I., and Kim, J., “Sensor Fusion Based Obstacle
Detection/Classification for Active Pedestrian Protection System,” Advances in
Visual Computing, G. Bebis, B. Parvin, D. Koracin, A. Nefian, G. Meenakshisundaram,
V. Pascucci, J. Zara, J. Molineros, H. Theisel, and T. Malzbender, Eds., Springer, Berlin
Heidelberg, 2006, vol. 4292, pp. 294–305.
[47] Schueler, K., Weiherer, T., Bouzouraa, E., and Hofmann, U., “360 Degree Multi Sensor
Fusion for Static and Dynamic Obstacles,” 2012 IEEE Intelligent Vehicles Symposium (IV),
Madrid, June 3–7, 2012, IEEE, New York, 2012, pp. 692–697.
[48] Elfes, A., “Using Occupancy Grids for Mobile Robot Perception and Navigation,”
Computer, Vol. 22, No. 6, 1989, pp. 46–57.
[49] Moravec, H., “Sensor Fusion in Certainty Grids for Mobile Robots,” AI Magazine, Vol. 9,
No. 2, 1988, pp. 61–74.
[50] Braillon, C., Usher, K., Pradalier, C., Crowley, J., and Laugier, C., “Fusion of Stereo and
Optical Flow Data Using Occupancy Grids,” in Intelligent Transportation Systems
Conference, 2006, ITSC ’06, Toronto, September 17–20, 2006, IEEE, New York, 2006,
pp. 1240–1245.
[51] Stepan, P., Kulich, M., and Preucil, L., “Robust Data Fusion with Occupancy Grid,” IEEE
Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews,
Vol. 35, No. 1, 2005, pp. 106–115.
[52] Leonard, J. and Durrant-Whyte, H., “Simultaneous Map Building and Localization for an
Autonomous Mobile Robot,” Proceedings of IROS’91, IEEE/RSJ International Workshop
on Intelligent Robots and Systems ’91, Intelligence for Mechanical Systems, Vol. 3, Osaka,
November 3–5, 1991, IEEE, New York, 1991, pp. 1442–1447.
[53] Pietzsch, S., Vu, T. D., Burlet, J., Aycard, O., Hackbarth, T., Appenrodt, N., Dickmann, J.,
and Radig, B., “Results of a Precrash Application Based on Laser Scanner and Short-
Range Radars,” IEEE Transactions on Intelligent Transportation Systems, Vol. 10, No. 4,
2009, pp. 584–593.
[54] Thrun, S., “Learning Occupancy Grids with Forward Models,” Proceedings of the 2001
IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 3, Maui, HI,
October 29–November 3, 2001, IEEE, New York, 2001, pp. 1676–1681.
[55] Hornung, A., Wurm, K., Bennewitz, M., Stachniss, C., and Burgard, W., “Octomap: An
Efficient Probabilistic 3D Mapping Framework Based on Octrees,” Autonomous Robots,
Vol. 34, No. 3, 2013, pp. 189–206.
[56] Sun, Z., Bebis, G., and Miller, R., “On-Road Vehicle Detection Using Evolutionary Gabor
Filter Optimization,” IEEE Transactions on Intelligent Transportation Systems, Vol. 6,
No. 2, 2005, pp. 125–137.
[57] Bar-Shalom, Y. and Campo, L., “The Effect of the Common Process Noise on the
Two-Sensor Fused-Track Covariance,” IEEE Transactions on Aerospace and Electronic
Systems, Vol. AES-22, No. 6, 1986, pp. 803–805.
[58] Bar-Shalom, Y., “On the Track-to-Track Correlation Problem,” IEEE Transactions on
Automatic Control, Vol. 26, No. 2, 1981, pp. 571–572.
[59] Khawsuk, W. and Pao, L. Y., “Decorrelated State Estimation for Distributed Tracking
of Interacting Targets in Cluttered Environments,” Proceedings of the 2002 American
Control Conference, Vol. 2, Anchorage, May 8–10, 2002, IEEE, New York, 2002,
pp. 899–904.
[60] Parvin, B. and Medioni, G., “Segmentation of Range Images into Planar Surfaces by Split
and Merge,” Proceedings of International Conference on Computer Vision and Pattern
Recognition (CVPR 86) , Miami Beach, June 22–26, 1986, IEEE Computer Society Press,
Washington, DC, pp. 415–417.
[61] Xu, L., Krzyzak, A., and Suen, C., “Methods of Combining Multiple Classifiers and
Their Applications to Handwriting Recognition,” IEEE Transactions on Systems, Man, and
Cybernetics, Vol. 22, No. 3, 1992, pp. 418–435.
[62] Tulyakov, S., Jaeger, S., Govindaraju, V., and Doermann, D., “Review of Classifier
Combination Methods,” Machine Learning in Document Analysis and Recognition ,
S. Marinai and H. Fujisawa, Eds., Springer, Berlin Heidelberg, 2008, pp. 361–386.
[63] Kuncheva, L. I., Bezdek, J. C., and Duin, R. P., “Decision Templates for Multiple Classifier
Fusion: An Experimental Comparison,” Pattern Recognition , Vol. 34, No. 2, 2001,
pp. 299–314.
[64] Lee, D.-S. and Srihari, S., “A Theory of Classifier Combination: The Neural Network
Approach,” Proceedings of the Third International Conference on Document Analysis
and Recognition , 1995, Vol. 1, Montreal, August 14–16, 1995, IEEE, New York, pp. 42–45.
[65] Dempster, A., “A Generalization of Bayesian Inference,” Classic Works of the Dempster-
Shafer Theory of Belief Functions, R. Yager, and L. Liu, Eds., Springer, Berlin Heidelberg,
2008, pp. 73–104.
[66] Breiman, L., “Bagging Predictors,” Machine Learning, Vol. 24, No. 2, 1996, pp. 123–140.
[67] Schapire, R. E., “A Brief Introduction to Boosting,” Proceedings of the 16th International
Joint Conference on Artificial Intelligence, IJCAI ’99, Vol. 2, Stockholm, July 31–August 6,
1999, Morgan Kaufmann, San Francisco, CA, 1999, pp. 1401–1406.
[68] Cardarelli, E., Sabattini, L., Secchi, C., and Fantuzzi, C., “Multisensor Data Fusion for
Obstacle Detection in Automated Factory Logistics,” Proceedings of the IEEE
International Conference on Intelligent Computer Communication and Processing, Cluj-
Napoca, Romania, September 4–6, 2014, IEEE, New York, 2014, pp. 221–226.
[69] Xu, L., Krzyzak, A., and Suen, C., “Methods of Combining Multiple Classifiers and
Their Applications to Handwriting Recognition,” IEEE Transactions on Systems, Man, and
Cybernetics, Vol. 22, No. 3, 1992, pp. 418–435.
[70] Kuncheva, L. I., Bezdek, J. C., and Duin, R. P., “Decision Templates for Multiple
Classifier Fusion: An Experimental Comparison,” Pattern Recognition , Vol. 34, No. 2,
2001, pp. 299–314.
[71] Sabattini, L., Digani, V., Lucchi, M., Secchi, C., and Fantuzzi, C., “Mission Assignment for
Multi-Vehicle Systems in Industrial Environments,” Proceedings of the IFAC Symposium
on Robot Control (SYROCO) , Salvador, Brazil, August 26–28, 2015, IFAC–PapersOnLine,
Vol. 48, No. 19, 2015, pp. 268–273.
[72] Digani, V., Caramaschi, F., Sabattini, L., Secchi, C., and Fantuzzi, C., “Obstacle Avoidance
for Industrial AGVs,” Proceedings of the IEEE International Conference on Intelligent
Computer Communication and Processing (ICCP) , Cluj-Napoca, Romania, September
4–6, 2014, IEEE, New York, 2014, pp. 227–232.
STP 1594, 2016 / available online at www.astm.org / doi: 10.1520/STP159420150053
ABSTRACT
An explicit focus on safety first can help accelerate the adoption of robotics
and intelligent automation technology across many sectors of the economy.
The safety-to-autonomy approach to introducing automation in the workplace
provides a glide path that allows businesses to leverage the benefits of new
technology at the pace they are comfortable with and that they can sustain,
while realizing return on investment from day one. The transition to large-scale
adoption of intelligent automation starts with sensorized industrial equipment
that provides active safety features to immediately curb accident rates. These machines
continuously collect data and model how humans use them in existing processes.
As businesses become comfortable with these augmented machines, additional
autonomous functionality can be enabled incrementally, supported by the
information learned from observing human operators over time. Focusing on
safety can provide the glide path for businesses to embrace the robot revolution
across all sectors.
Keywords
safety, automation, machine vision
Manuscript received June 15, 2015; accepted for publication August 12, 2015.
1 Vecna Technologies, Inc., 36 Cambridge Park Dr., Cambridge, MA 02140
2 ASTM Workshop on Autonomous Industrial Vehicles: From the Laboratory to the Factory Floor on May 26–30, 2015 in Seattle, Washington.
Copyright © 2016 by ASTM International, 100 Barr Harbor Drive, PO Box C700, West Conshohocken, PA 19428-2959.
installed on the equipment can do even better by removing the need for the
operator to continually interpret a marked-up video feed or other interface. That
augmented system could be configured to audibly alert the operator to the presence
of humans in the vicinity and also limit the equipment’s velocity in directions that
bring it close to detected pedestrians.
The simple presence of a pedestrian near industrial equipment is not necessar-
ily cause for concern. Much depends on what that person is doing, where she is
looking, and whether or not she is aware of the machine close by. Understanding a
detected person’s activity and gaze from sensor data can help to determine what
kind of warning or action is appropriate. Vecna has demonstrated technology for
detecting individuals’ and groups’ activities in imagery and video data. Fig. 2 shows
the system recognizing air marshalling gestures. This technology could be used
as a means for a pedestrian to command or interact with a piece of automated
equipment—whether the person is actively involved in the robot’s activity or whether
she is pursuing an unrelated activity that happens to bring her close to the machine.
Fig. 3 shows how a person’s activities and focus can be identified by combining
person detection with context information from the images.
Vecna’s robotic logistics solutions make use of a variety of sensors (including
Omni-Eye [see Fig. 4], which is currently under development) to support its safety-
first philosophy through maximum situational awareness (see Fig. 5).
From day one, a robot’s active safety features protect the human workforce and
reduce the cost of accidents. There is no risk of inadequate or undesirable autono-
mous behavior because human operators continue to be in control of the vehicle.
By sensorizing a piece of industrial equipment for safety, all sectors can prevent
death and injury, damage, and delays and can save billions of dollars annually.
In addition, intelligently safe machines with capabilities such as those described
FIG. 2 The ability to detect gestures of people in the environment allows anyone to
command or interact with the robot.
here directly increase productivity in their facilities by allowing humans and robots
to work in close proximity. This approach avoids process waste that is introduced
when human and robot operations need to be physically separated.
FIG. 3 Automated activity recognition augments the knowledge that humans are
nearby with additional information about their current activities. With this, the
system can evaluate whether they are aware of the machine operating near
them, or if they are occupied otherwise, and adjust its operation accordingly.
more capabilities than just operational aids and awareness; additional features
can be easily and incrementally introduced to make vehicles more fully autono-
mous. The equipment is not just sensing its surroundings; it is making sense of
its environment (Fig. 5).
Just like an office customer relationship management software suite, the con-
sumer invests in the basic package but, after realizing initial return on investment,
takes those cost savings to upgrade and turn on additional features. The same can
be achieved with sensorized industrial equipment. For example, forklift accidents
cost U.S. businesses $135 million annually. With the cost savings from reduced
accidents through a safety-enabled sensorized forklift, businesses can fund addi-
tional features toward full autonomy.
In addition to monitoring the vehicle for safety reasons, collecting all that
sensor data also enables the system to learn how humans operate the machines in a
particular environment. Learning from demonstration or other supervised machine
learning approaches can then be used to provide tailored autonomy behavior for
specific environments. This concept is particularly powerful because robotics engi-
neers and developers of robot systems generally do not have firsthand experience
working in their target domains. Important and potentially unintuitive aspects
of tasks that expert operators perform naturally without even thinking about them
can be picked up and incorporated implicitly into the autonomous system. The
more “human-like” autonomous vehicles perform key aspects of their tasks, such
as picking up pallets or getting out of the way for emergency personnel, the more
they will be trusted and the easier it will be to incorporate them into existing
human-centric processes to boost productivity.
Vecna’s QC Bot® hospital logistics platform navigates autonomously through
spaces shared with humans, many of whom are not trained to interact with the
robot (see Fig. 6). By analyzing a human’s behavior and reactions as it travels
through the environment, the robot can modify its behavior to be as predictable as
possible in all situations.
With the safety of automated equipment ensured, advanced autonomy
features can be introduced incrementally. A business that has gradually become
accustomed to robots in its workforce and that has built up trust in the robots’
capabilities and functions will be much more willing to consider process modifi-
cations that allow even larger benefits and return on investment to be realized.
The robotic “chess game” shown in Fig. 7 is an example demonstrating such a
process change. Inputs to the system were desired moves of pieces around the
board. Humans provided the high-level input, and the system decomposed these
inputs as necessary into actionable parts for execution. Robots perform their
tasks while the operator is free to work on other tasks that require her cognitive
capabilities. She receives a notification when one move is complete and can then
choose the next one. Users of Vecna’s system move from having specific robots
perform tasks for them to assigning missions to the overall system and allowing
software to decide how the available resources should be applied to meet the
requests. This is the safety-to-autonomy curve: the glide path on which an industry
incrementally adopts robotic solutions.
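A toy sketch of such goal decomposition, in the spirit of the chess demonstration, is given below; all names are hypothetical, as the actual planner is not described at this level of detail:

    def decompose(piece, target):
        """Turn 'move <piece> to <target>' into primitive robot actions."""
        return [("navigate_to", piece),
                ("grasp", piece),
                ("navigate_to", target),
                ("place", target)]

    for action in decompose("black king", "E5"):
        print(action)    # each tuple maps to an executable robot primitive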
FIG. 7 High-level planning and optimization can decompose abstract goals such as
“move the black king to square E5” into actions the robot(s) can execute. The
robot monitors execution and recovers autonomously or escalates to request
expert operator assistance.
drones as part of their business model [4]. This is just one example of the regulatory
realities our new robotic economy will face.
Policy needs time to catch up to the technology we already have at our finger-
tips. A recent report by the McKinsey Global Institute explains how policy makers
need time to understand economic trends and nuances in regions, innovations, and
exports in order to create the right policies that support manufacturing industries
and the role of automation within them [5]. In addition, human workers also need
time to adjust to sharing their work space with intelligent machines, and business
owners need time to innovate their work flows and processes with the help of
automation.
In addition, industry needs time to evolve as well as to establish and adopt
standards for both safety and interoperability. Robots from different vendors need
to be able to coexist in shared environments. They need to be able to integrate
seamlessly with existing infrastructure, such as elevators and automatic doors,
and to interact effectively with humans. This is only possible within a healthy
ecosystem of vendors, suppliers, and manufacturers who solve common problems
once, thus reducing barriers to adoption. MassRobotics (www.massrobotics.org)
is one initiative that aims to bring together vendors, researchers, and government
and commercial interests within the growing robotics sector to meet current
and future needs.
ACKNOWLEDGMENTS
Vecna’s research and development efforts toward ensuring that its robot solutions
support the smooth safety-to-autonomy transition described in this paper were
supported by various research grants, including projects from the Office of Naval
Research, the Defense Advanced Research Projects Agency, and NASA.
References
[1] Pricewaterhouse Coopers, “The New Hire: How a New Generation of Robots is Trans-
forming Manufacturing,” PwC, New York, 2014, http://www.pwc.com/us/en/industrial-
products/assets/industrial-robot-trends-in-manufacturing-report.pdf (accessed April 30,
2016).
[2] National Safety Council, “Injury Facts,” National Safety Council, Itasca, IL, 2013, http://
www.mhi.org/downloads/industrygroups/ease/technicalpapers/2013-National-Safety-
Council-Injury-Facts.pdf (accessed April 30, 2016).
[3] Frane, D., “Top 10 Forklift Accidents,” Tools of the Trade, July 2013, http://www.
toolsofthetrade.net/jobsite-equipment/top-10-forklift-accidents.aspx (accessed April 30,
2016).
[4] Pilkington, E., “US Experts Join Companies Protesting FAA Commercial Drones
Proposals,” The Guardian , February 22, 2015, http://www.theguardian.com/world/2015/
feb/22/experts-companies-protest-faa-commercial-drones-proposals (accessed April 30,
2016).
[5] Manyika, J., Sinclair, J., Dobbs, R., Strube, G., Rassey, L., Mischke, J., Remes, J., Roxburgh,
C., George, K., O’Halloran, D., and Ramaswamy, S., “Manufacturing the Future: The
Next Era of Global Growth and Innovation,” McKinsey Global Institute, McKinsey & Com-
pany, Philadelphia, PA, 2012, http://www.mckinsey.com/insights/manufacturing/the_
future_of_manufacturing (accessed April 30, 2016).
STP 1594, 2016 / available online at www.astm.org / doi: 10.1520/STP159420150056
ABSTRACT
Multi-camera motion capture systems are commercially available and typically are
used in the entertainment industry to track human motions for video gaming and
movies. These systems are proving useful as ground truth measurement systems to
assess the performance of robots, autonomous ground vehicles, and assembly
tasks in smart manufacturing. In order to be used as ground truth, the accuracy of
the motion capture system must be at least ten times better than a given system
under test. This chapter creates an innovate artifact and test method to measure
the accuracy of a given motion capture system. These measurements will then be
used to assess the performance of the motion capture system and validate that it
can be used as ground truth. The motion capture system will then serve as ground
truth for evaluating the performance of an automatic guided vehicle (AGV) with an
onboard robot arm (mobile manipulator) and for evaluating the performance of
robotic workstation assembly tasks that utilize robot arms and hands.
Manuscript received June 19, 2015; accepted for publication October 13, 2015.
1 National Institute of Standards and Technology, 100 Bureau Dr., Gaithersburg, MD 20899-8230
2 IEM, Le2i, Université de Bourgogne, BP 47870, 21078 Dijon, France
3 Department of Mathematics and Statistics, Loyola University Maryland, 4501 N. Charles St., Baltimore, MD 21210-2699
4 ASTM Workshop on Autonomous Industrial Vehicles: From the Laboratory to the Factory Floor on May 26–30, 2015 in Seattle, Washington.
Copyright © 2016 by ASTM International, 100 Barr Harbor Drive, PO Box C700, West Conshohocken, PA 19428-2959.
Keywords
dynamic, performance measurement, robot, tracking system
Introduction
Numerous optical tracking systems, including motion capture systems, have been
developed in research centers and commercialized. Over the past several years,
these systems have gained enormous market share [1] in the entertainment indus-
try, neuroscience, biomechanics, flight and medical training, and in simulations
[2–8]. As a result, there have been several advances in improving the accuracy of
such human motion capture systems, as documented in two surveys and a historical
overview. The first survey [2] analyzes research up to the year 2000; the second [3]
analyzes research from 2000 to 2006; and an overview of the history of motion
capture systems appeared in 2013 [4]. These
surveys cite more than 350 articles with topics such as novel methodologies for
automatic initialization, reliable tracking and pose estimation in natural scenes, and
movement recognition.
Tracking systems also have been used in the field of robotics [5–10]. Specific
applications have included programming by demonstration, imitation, tele-operation,
activity or context recognition, and humanoid designs. This chapter presents yet
another use for these systems in robotics: to provide ground truth for assessing the
performance of robot and robot vehicle motion. Specifically, this chapter focuses on a
test method to validate the accuracy of a tracking system within the work volume of a
given robotic system under test by using a novel metrology bar artifact. This method
will ensure that the tracking system is capable of providing the necessary measure-
ment uncertainty to be used as ground truth by guaranteeing the tracking system is at
least an order of magnitude better than the expected performance of the given robotic
system under test.
As the field of robotics advances and expands to new application spaces, such
as assembly, performance measures are needed to fully understand robot capabil-
ities. Tracking systems that can provide ground truth measurement for dynamic
robots are critical for supporting robot performance evaluation. The National
Institute of Standards and Technology (NIST) conducts research on the safety and
performance of robot arms and hands, automatic guided vehicles (AGVs), and inte-
grated systems such as those comprised of arm, hand, and perception components,
as well as collaborating robots, in support of standards development. The International
Organization for Standardization (ISO) 9283:1998 [11] and the American National
Standards Institute/Robotic Industries Association (ANSI/RIA) 15.05 [12] are available
standards used to assess the performance of an industrial robot arm as an
individual unit. Standards developed by the recently formed ASTM Committee F45 on
Driverless Automatic Guided Industrial Vehicles [13] will be used to assess the
performance of AGVs.
It is predicted that future smart manufacturing systems will include robot arms
performing high-tolerance assembly tasks, AGVs making fine adjustment of docking
Two metrology bars, 620 mm and 320 mm in length, were used, each having
five reflective markers attached to prongs on each end (see Fig. 1 ). The metrology
bars were used to measure the tracking system measurement uncertainties within
the vehicle lab and robotic work cell, respectively. The metrology bar markers on
each end form two planes perpendicular to the bar that define the bar length. The
bar length was shortened for the robotic work cell in an attempt to maximize
metrology bar movement. Carbon fiber bars were chosen based on a combination
of cost and reduction of the effects of thermal expansion on the position uncer-
tainty. The latter is defined by using the standard metrics that were developed in
ASTM E2919, Standard Test Method for Evaluating the Performance of Systems that
Measure Static, Six Degrees of Freedom (6DOF) Pose [16].
Actual positions and motions were only approximated because the metrology
bar was randomly held and moved by a person throughout the test spaces. For the
vehicle lab/static case experiments, the 620-mm metrology bar initially was placed
in the center of the space, approximately 1.5 m above the floor. For the vehicle lab/
dynamic case experiments, the bar was carried by a researcher at a height of
approximately 2.5 m above the floor (i.e., overhead) and walked in a raster scan pat-
tern throughout the space to maximize coverage. Note that the approximate height
of the AGV navigation sensor is 2.1 m above the floor. Similarly, for the robotic
work cell/static case experiments, the 320-mm bar was placed approximately 0.2 m
above a table. For the robotic work cell/dynamic case experiments, the bar was
moved by a researcher throughout the volume created by the camera fields of view
that would be reachable by a robot arm to be mounted within the space. Velocities of bar
FIG. 1 NIST metrology bars, (a) 620 mm long and (b) 320 mm long, used to measure
static and dynamic ground truth system uncertainty. The bars are sitting on a
holder that is on the NIST reconfigurable mobile manipulator apparatus.
motion also were not measured and are approximated at a slow walk, perhaps
0.5 m/s, for the vehicle lab and between 0.5 m/s and 1 m/s for the robotic work
cell. Future measurements of the vehicle lab will include programmed vehicle
movement of the metrology bar throughout the space.
The relative pose measured by the tracking system is represented as the homogeneous matrix

\hat{H}(t) = \begin{bmatrix} \hat{R}(t) & \hat{T}(t) \\ 0 & 1 \end{bmatrix}   (1)
at time t. The ground truth of the relative pose is assumed to be known and meas-
ured by a coordinate measuring machine and represented as the homogeneous
matrix:
H(t) = \begin{bmatrix} R(t) & T(t) \\ 0 & 1 \end{bmatrix}   (2)
where R(t) is the 3 by 3 rotation matrix representing the known orientation of the
relative pose and T(t) is the 3 by 1 translation vector representing the known posi-
tion of the relative pose. The ground truth relative pose is measured by a coordinate
measurement machine.
The position error, eT, can then be computed as follows:
e_T = \left| \, \|T\| - \|\hat{T}(t)\| \, \right| = \left| \, \text{Length of } T - \text{Length of } \hat{T} \, \right|   (3)
UNCERTAINTY STATISTICS
The error statistics from the position error and angle error can be calculated as:
1. Computing the average error:
\bar{e} = \frac{1}{N} \sum_{k=1}^{N} e_k   (6)
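The computations in Eqs 1 through 3 and 6 can be sketched in a few lines; the variable names are ours:

    import numpy as np

    def homogeneous(R, T):
        """Build H = [[R, T], [0, 1]] per Eqs 1 and 2."""
        H = np.eye(4)
        H[:3, :3], H[:3, 3] = R, T
        return H

    def position_error(T_true, T_meas):
        """e_T = | ||T|| - ||T_hat|| | per Eq 3 (difference of bar lengths)."""
        return abs(np.linalg.norm(T_true) - np.linalg.norm(T_meas))

    def average_error(errors):
        """e_bar = (1/N) * sum of e_k per Eq 6."""
        return sum(errors) / len(errors)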
FIG. 2 AGV test lab (left) and screenshot of the multi-camera ground truth system
space showing cameras and the AGV rigid body (right).
The ground truth system cameras (12 total) are mounted above the floor and used to
view the area where the experiments are performed.
Cameras have 4.1-megapixel resolution, 120 frames per second, and a 51° field of
view with focus and aperture-opening adjustments. Eighteen markers are grouped
into a rigid body, as shown in Fig. 2, and tracked by the GT1 system.
ROBOT TEST SPACE AND ROBOTS
In comparison, a relatively small 2 m by 2 m robot test space (see Fig. 3) at NIST is
used to research the safety and performance of collaborative robot arms and
advanced, multi-fingered robotic hands. Findings are frequently reported to the
industrial robot industry and used as a reference for proposing revisions within the
RIA 15.06 [17] and ISO 10218-1/-2 [11] safety standards subcommittees, and are
shared with the robotic hands research community through NIST’s robot hand
performance test portal [8].
FIG. 3 Robot test space showing the GT2 cameras mounted to a frame above the
space and the 320-mm metrology bar (centered) on a red bar holder.
Results
We found no published nominal uncertainties for the GT1 tracking system from
the manufacturer, which describes the system uncertainty only as “sub-millimeter.”
The GT2 system manufacturer published the uncertainties as 0.5 mm or more of
translation and 0.5° of rotation in a 4 m by 4 m volume using 9-mm diameter
markers. The descriptions also do not include procedures for ensuring traceability
of measurement uncertainty.
We tested both tracking systems, in their calibrated states, using the NIST-developed
test method described earlier. We provide both the
GT1 and GT2 tracking distance and angle uncertainties in the following subsections.
VEHICLE LAB MEASUREMENTS
The GT1 system was first calibrated by mainly adjusting the focus on the cameras
in the system. After calibration, the system was measured using the 620-mm
metrology bar over an approximately 10-m-wide by 8-m-long workspace at the lab
center where most of the AGV testing is performed. The metrology bar was placed
at the workspace center. Analysis shows average measurement uncertainty of the
static metrology bar length was σ = 0.02 mm, and for the static angle, it was
σ = 0.05°. In this experiment, N (the number of poses) exceeds 30,000.
The metrology bar was then moved throughout the workspace. The dynamic me-
trology bar position uncertainty was calculated as σ = 0.26 mm, and the dynamic
angle uncertainty was calculated as σ = 0.20°. Fig. 4a shows the dynamic bar length
uncertainty, and Fig. 4b shows the dynamic angle uncertainty. Each block in the fig-
ure graphs uses natural-neighbor interpolation to obtain its value.
ROBOT SPACE MEASUREMENTS
In contrast to the GT1 system, the GT2 system used a 320-mm-long metrology bar.
Similar to the GT1 system, calibration consisted of adjusting the zoom, focus, and
aperture of the cameras in the system.
The metrology bar was placed at the workspace center. Analysis shows average
measurement uncertainty over three runs of the static metrology bar length was
σ = 0.004 mm and, for the static angle, σ = 0.006°. The bar was then moved
throughout the entire robot work volume. The dynamic position uncertainty was
σ = 0.60 mm, and the dynamic angle uncertainty was σ = 0.29°. Fig. 5a shows the
FIG. 4 GT1 data captured from the 620-mm metrology bar (a) length and (b) angle
within the AGV lab.
FIG. 5 GT2 data captured of the 320-mm metrology bar (a) length and (b) angle
within the robot space.
Fig. 5a shows the metrology bar length dynamic uncertainty data, and Fig. 5b shows the bar angle dynamic uncertainty data.
FIG. 6 [Figure; labels: Manipulator, RMMA, AGV.]
Interestingly, we noticed a degradation of uncertainty in the robot space on
consecutive dynamic test runs. This behavior is currently being investigated.
MOBILE MANIPULATOR MEASUREMENTS
A recent application of the calibrated GT1 system was to measure the performance of a mobile manipulator, a robot arm installed onboard the AGV, as shown in Fig. 6.
A NIST reconfigurable mobile manipulator artifact (RMMA) was developed as a possible concept for comparing ground truth technologies such as tracking systems, laser trackers, and so on.
FIG. 7 Screen captures from the GT1 system showing the AGV, manipulator, and the
RMMA rigid bodies formed from markers on each device.
FIG. 8 GT data points relative to the GT system origin (in mm) of (a) the stationary
AGV and (b) the RMMA movement over time (in minutes), shown by the
varying colors, while the manipulator moves.
The RMMA is further detailed in Bostelman, Hong, and
Cheok [6]. Experiment 1 included placing a mobile manipulator next to the
RMMA and then moving the manipulator to various points on the RMMA as
shown in Fig. 6. Results showed the average position uncertainty (calibration) between the RMMA and AGV to be x = 0.07 mm and y = 0.02 mm, both being near the static measurement range of the GT1 system (i.e., σ = 0.02 mm and σ = 0.05°).
However, further experiments were performed and suggested surprising results.
Experiment 2 measured the uncertainty of the static AGV when the manipulator
uses noncontact positioning above the RMMA points. The AGV, RMMA, and ma-
nipulator were measured using a single ground truth system, GT1, resulting in
motion tracking and relative measurements of the components. Experiment 3
measured the static RMMA movement during the noncontact Experiments 1 and 2.
A screenshot of the rigid bodies formed in the GT1 system is shown in Fig. 7, and
uncertainty results are shown in Fig. 8.
Experiments 2 and 3 showed that both the AGV and the RMMA were moving while the manipulator moved, even though the AGV was stopped. This occurred despite the AGV weighing nearly 40 times as much as the manipulator, with tests conducted at ground level on concrete flooring. Results show that position uncertainty spans from approximately 0.15 mm in x and 0.25 mm in y for the AGV to 0.5 mm in x and 0.6 mm in y for the RMMA. These results showed that the ground truth optical tracking measurement system used in the mobile manipulator experiments was accurate enough to detect motion of a static table (RMMA) and of a relatively heavy vehicle caused by onboard lightweight manipulator motion. When these uncertainties are combined, the maximum uncertainties can reach σ = 0.52 mm in x and σ = 0.65 mm in y, which could induce enough position offset of the manipulator to affect the results of manufacturing operations, such as a relatively high-tolerance assembly operation.
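The reported maxima are consistent with combining the per-component values in quadrature (root-sum-square); treating that combination rule as our assumption, a one-line check reproduces the numbers:

```python
import math

# Assumed root-sum-square combination of the AGV and RMMA position
# uncertainties reported above (values in mm).
sigma_x = math.hypot(0.15, 0.50)   # ~ 0.52 mm
sigma_y = math.hypot(0.25, 0.60)   # = 0.65 mm
print(round(sigma_x, 2), round(sigma_y, 2))
```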
Conclusions
Multi-camera motion capture systems are now commercially available, and their
application as ground truth systems for robots and vehicles is on the horizon.
This chapter describes a test method and metrics for evaluating and validating
tracking system calibration within the operational space of a robotic work volume.
The goal of the method is to provide assurance that measurements made by the
tracking system are at least an order of magnitude better than the expected
performance of the robotic system under test. The test method used is exemplified
on two different motion capture systems each in a different size workspace. An
example application of one system was used to measure the performance of an
AGV with an onboard robot arm (mobile manipulator). Experiments on a mobile
manipulator showed that tracking systems in large spaces can even measure small
wall, floor, and equipment movements despite their static conditions. This test
method and metrics can be used to measure and analyze the performance of any
tracking system that computes the pose of an object while the object is moving. It also provides performance data that can be used to improve optical tracking systems.
ACKNOWLEDGMENTS
The authors would like to thank Sebti Foufou, Qatar University, Doha, Qatar, for his
guidance in this research.
References
[1] “Motion Capture Software Developers in the US: Market Research Report,” IBISWorld, 2014,
http://www.ibisworld.com/industry/motion-capture-software-developers.html (accessed
May 21, 2015).
[2] Moeslund, T. B. and Granum, E., “A Survey of Computer Vision-Based Human Motion
Capture,” Computer Vision and Image Understanding, Vol. 81, No. 3, 2001, pp. 231–268.
[3] Moeslund, T. B., Hilton, A., and Krüger, V., “A Survey of Advances in Vision-Based Human
Motion Capture and Analysis,” Computer Vision and Image Understanding, Vol. 104,
No. 2, 2006, pp. 90–126.
[4] Fischer, R., “History of Motion Capture,” 2013, http://motioncapturesociety.com
(accessed August 10, 2013).
[5] Field, M., Stirling, D., Naghdy, F., and Pan, Z., “Motion Capture in Robotics Review,”
Proceedings of the IEEE International Conference on Control and Automation (ICCA),
Christchurch, New Zealand, 2009, pp. 1697–1702.
[6] Bostelman, R. V., Hong, T.-H., and Cheok, G., “Navigation Performance Evaluation for
Automated Guided Vehicle,” Proceedings from the 7th Annual IEEE International Con-
ference on Technologies for Practical Robot Applications (TePRA), Boston, MA, 2015.
[7] Bostelman, R., Hong, T.-H., and Marvel, J., Performance Measurement of Mobile Manipu-
lators, Proceedings of the SPIE-DSS Commercial Sensing Conference, Baltimore, MD,
April 20–24, 2015.
[8] National Institute of Standards and Technology Engineering Laboratory, “Performance
Metrics and Benchmarks to Advance the State of Robotic Grasping,” National Institute
of Standards and Technology, Gaithersburg, MD, 2014, http://www.nist.gov/el/isd/
grasp.cfm (accessed April 1, 2015).
[9] Yang, P. F., Sanno, M., and Bruggemann, G. P., “Evaluation of the Performance of a
Motion Capture System for Small Displacement Recording and a Discussion for Its
Application Potential in Bone Deformation in Vivo Measurements,” Proceedings of the
Institution of Mechanical Engineers, Vol. 226, No. 11, 2012, pp. 838–847.
[10] Summan, R., Pierce, S. G., Macleod, C. N., Dobie, G., Gears, T., Lester, W., Pritchett, P., and
Smyth, P., “Spatial Calibration of Large Volume Photogrammetry Based Metrology Sys-
tems,” Journal of Measurement, Vol. 68, 2015, pp. 189–200.
[11] ANSI/RIA 15.05-2, “Industrial Robots and Robot Systems—Path-Related and Dynamic
Performance Characteristics,” American National Standards Institute (ANSI), Washing-
ton, DC, 1992, www.ansi.org
[12] ANSI/RIA 15.05, American National Standards Institute (ANSI), Washington, DC, 1999,
www.ansi.org
[13] ASTM, “Committee F45 on Driverless Automatic Guided Industrial Vehicles,” ASTM Inter-
national, West Conshohocken, PA, 2014, http://www.astm.org/COMMITTEE/F45.htm
(accessed April 1, 2015).
[14] Hamner, B., Koterba, S., Shi, J., Simmons, R., and Singh, S., “Mobile Robotic Dynamic
Tracking for Assembly Tasks,” Proceedings of the 2009 IEEE/RSJ International Confer-
ence on Intelligent Robots and Systems, St. Louis, MO, October 10–15, 2009.
[15] ANSI/ITSDF B56.5, Safety Standard for Driverless, Automatic Guided Industrial Vehicles
and Automated Functions of Manned Industrial Vehicles, ANSI, Washington, DC, 2014,
www.itsdf.org
[16] ASTM E2919-14, Standard Test Method for Evaluating the Performance of Systems That
Measure Static, Six Degrees of Freedom (6DOF), Pose, ASTM International, West
Conshohocken, PA, 2014, www.astm.org
[17] ANSI/RIA R15.06-2012, American National Standard for Industrial Robots and Robot
Systems—Safety Requirements, Robotic Industries Association, Ann Arbor, MI, 2013,
http://www.robotics.org
STP 1594, 2016 / available online at www.astm.org / doi: 10.1520/STP159420150050
ABSTRACT
In this chapter, we describe some ideas of robotic system standardization based on
ongoing research and development processes in a European FP7 project named
EC-SAFEMOBIL, which is focused on estimation and control technologies for safe,
wireless, high-mobility cooperative systems. Strongly influenced by the European Commission’s demand to commercialize as many project results as possible, EC-SAFEMOBIL researchers and developers needed standards to
follow for the main project application areas—unmanned aerial systems (UAS) and
automated warehousing systems (AWS). Although many aspects of UAS are
covered by adequate standards, this does not hold true for automated warehouses.
In the given analysis of possible standardization of automated warehousing
systems, we elaborate on ideas on how to overcome evident gaps between
academic achievements and viable industry practice. Paying particular attention to
process and development standards, as well as function-specific standards, we
Manuscript received June 14, 2015; accepted for publication August 12, 2015.
1 University of Zagreb, Electrical Engineering and Computing, Unska 3, 10000 Zagreb, Croatia
2 Selex ES Ltd., Sigma House, Christopher Martin Road, Basildon, Essex SS14 3EL, UK
3 Euroimpianti SpA, Via Lago di Vico 80, 36015 Schio, Italy
4 ASTM Workshop on Autonomous Industrial Vehicles: From the Laboratory to the Factory Floor on May 26–30, 2015 in Seattle, Washington.
Keywords
warehouses, automated guided vehicles, localization, distributed control,
benchmark tests, standardization
Introduction
Logistical systems for automated warehouses have been in existence for a relatively
long time, with an apparent lack of universally recognized standards. This report
attempts to explain the development of a practical pathway toward systematic
standardization, based mainly on the experience gained through a decade-long
collaboration of academia and industry and quite recently through an international
collaboration via an FP7 project named EC-SAFEMOBIL, funded by the European
Commission within the FP7 research framework [1]. Here, EC-SAFEMOBIL denotes a collaboration composed of partners from academia, research institutes, original equipment manufacturers, and end users.
The EC-SAFEMOBIL project aims to develop estimation and control technologies for safe, wireless, high-mobility cooperative systems, with a specific focus on unmanned aerial systems and autonomous warehouse robotic applications. Both of these applications encompass complex system architectures, and the standards that must be considered in a whole-system implementation such as this are wide ranging.
The purpose of this document is to identify and describe existing international
standards that are of direct relevance to the development work being carried out
under the EC-SAFEMOBIL project (i.e., considering only the estimation and control
aspects of the larger, more complex encompassing applications). As such, automated
warehouse development partners within the program paid more attention to the
processes in the research and development phases, which led to a common, well-
defined, and constrained set of standards for the considered application area. These
standards can be separated into two classes: those that describe process and development methods without prescribing design and implementation attributes, and
those that present design requirements or constraints. The latter are application/
environment specific, while the former are more generally applicable for systems that
are conceptually similar at a higher level.
When considering application-specific standards, the applications of interest
are primarily the technology test bed demonstrations for new estimation and
control technologies.
livelock free. The introduction of any new coordination algorithms raises the afore-
mentioned safety questions related to the adopted process of software development.
In this analysis, we do not focus on AGVs whose operation completely relies
on prepared floors for motion guidance. Instead, we focus on freely navigating
AGVs that are used in large-scale manufacturing and logistics applications. Freely
navigating AGVs mainly use laser scanners for navigation; these provide a localization whose
FIG. 3 Academic research (left) versus industrial state of the art (right).
precision is still an order of magnitude worse than is required for safe operation in
manufacturing environments. As a consequence of insufficiently good indoor local-
ization, path-planning algorithms are strongly affected by uneven terrain, bad trac-
tion, and slower motion. Consequently, current solutions in the industry assume that
the manufacturing facility environment is known prior to the design of facility
layouts such as docking stations, paths, turns, intersections, idle positions, and so on.
Academia can offer solutions such as different online planning strategies based on
rapidly-exploring random trees and A* algorithms for shortest path search [8], but
because of poor localization, the industry is still only using predefined paths.
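As a concrete illustration of the academic side, here is a minimal grid-based A* shortest-path sketch of the kind cited from [8]; the 4-connected occupancy grid and unit step costs are illustrative assumptions, not part of any cited system.

```python
import heapq, itertools

def astar(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid (0 = free, 1 = occupied).
    start/goal are (row, col) cells; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()          # tie-breaker so the heap never compares cells
    frontier = [(h(start), next(tie), start, None)]
    g_cost, came_from = {start: 0}, {}
    while frontier:
        _, _, cell, parent = heapq.heappop(frontier)
        if cell in came_from:
            continue                  # already expanded via a cheaper route
        came_from[cell] = parent
        if cell == goal:              # walk parents back to the start
            path = [cell]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = cell
        for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= n[0] < rows and 0 <= n[1] < cols and grid[n[0]][n[1]] == 0:
                ng = g_cost[cell] + 1
                if ng < g_cost.get(n, float("inf")):
                    g_cost[n] = ng
                    heapq.heappush(frontier, (ng + h(n), next(tie), n, cell))
    return None

print(astar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)))
```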
At higher control levels in relation to coordination and mission control, the
industry is applying centralized control solutions, while academia is turning more
to the decentralized solutions [9–13]. Centralized control requires reliable com-
munications with all AGVs. Practice has repeatedly shown that communication losses happen in all systems and that such losses directly affect the safety of multi-AGV systems.
The situation becomes worse when the number of vehicles in the system
increases because the density of data transferred through wireless communication
channels becomes a bottleneck for the whole control system and threatens the over-
all system functionality. As a result, when there are tens to hundreds of installed
AGVs, academia can introduce various optimization techniques, whereas the indus-
try still relies on simpler heuristics.
A practical demonstration of positioning accuracy of 1 cm and 0.5 degree achieved without markers has not yet been reported. It is worth noting that the weights of vehicles used for experiments in academia are small compared to the weights of vehicles used by industry. One can conclude that, in warehousing, size does matter, because AGVs represent an evident threat to the environment in which they operate (to people, warehouse elements, stored goods, etc.).
Note: The first practical demonstration of 1 cm and 0.5 degree positioning
accuracy without markers was made by researchers from the University of Zagreb
in January 2015 during the EC-SAFEMOBIL consortium meeting in the manufac-
turing facility of Euroimpianti SpA (Schio, Italy). The novel AGV pose estimation
method is still being tested in different operating conditions, and publication of
results is expected at the end of the project (December 2015). One can find more
information about publicly performed experiments in Schio, Italy, by visiting the
EC-SAFEMOBIL Web page and relevant videos [1].
SAFETY
Any autonomous system must perform its task in a safe manner; consequently,
safety standards form the top level of the hierarchy from which all other process
standards originate.
[Figure: system design diagram relating the intended AGV function to function, failure, and safety information.]
SOFTWARE SAFETY
The standard IEC 61508 specifies safety-based requirements for developed software
in a traceable manner [14]. This international standard is intended to be a basic
functional safety standard that is applicable to all industries. Part 3 of this standard
deals with specific requirements software must comply with in order to ensure its
safety. In this standard, a safety life cycle is used as the basis for the compliance
of the requirements. The IEC 61508 standard defines four safety integrity levels
(SILs) shown in Table 2, which determine risk-reduction levels related to a safety
function. A brief description of the main requirements regarding software is presented in the following list; a minimal traceability sketch follows the list.
• Software Safety Requirements Specification: Specification of the functional
safety requirements, which must be clear, accurate, verifiable, testable, main-
tainable, and feasible.
• Software Safety Validation Planning: Plan to prove that the software satisfies
the safety requirements set in the specification. The plan must consider
required equipment, validation schedule, validation authority, modes of opera-
tion validation, foreseeable abnormal conditions, reference to safety require-
ments, and expected results.
• Software Design and Development: Includes the definition of major com-
ponents and subsystems of the software. The architectural design should
include the interconnection among components, techniques necessary to
satisfy requirements, software safety integrity levels of the components,
software-hardware interactions and tests performed to ensure safety integrity
of data, and so on.
• Integration and Testing: There must be tests applied to the integration
between the hardware and the software during the design and development
phases. It is necessary to define the test cases and data, the test environment,
tools and configuration, the test criteria, and the procedures for corrective
actions.
• Software Safety Validation: Checks to ensure the software design meets the
software safety requirements. This validation should be done in accordance
with the safety validation planning. The validation is done for each safety
function reporting a record of the validation activities, the version of the
validation plan, the safety function that is validated, the test environment, and
the results of the validation.
• Operation and Modification: Modifications should always be made under
authorization, taking into account the procedures described in the safety
planning phase.
• Software Verification: Tests the outputs of each software safety life cycle phase for consistency with the requirements of that phase.
• Software Functional Safety Assessment: Concludes the level of safety achieved,
including all phases of the safety life cycles.
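IEC 61508 prescribes process, not code, but the traceability it demands between safety requirements and validation evidence is often checked mechanically. Below is the small, purely hypothetical sketch promised above; every identifier is invented for illustration.

```python
# Hypothetical traceability check in the spirit of IEC 61508-3: every
# safety requirement must be covered by at least one passed validation
# test before release. All IDs and descriptions are invented.
safety_requirements = {
    "SR-001": "Emergency stop halts the AGV within 0.5 s",
    "SR-002": "Loss of communications triggers a safe stop",
}
validation_results = {
    "SR-001": ["TC-017: pass", "TC-018: pass"],
    "SR-002": [],                      # not yet validated
}

untraced = [rid for rid in safety_requirements
            if not any(r.endswith("pass") for r in validation_results.get(rid, []))]
if untraced:
    print("Unvalidated safety requirements:", untraced)   # -> ['SR-002']
```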
SYSTEM DEVELOPMENT
System development covers the process from concept through to system implemen-
tation, including requirements capture and analysis, system design and implemen-
tation, and verification. Application of relevant standards for system development
will result in a more efficient program implementation and in increased acceptance
of the developed technologies by industrial partners.
Referring to the 2014 annual report on service robots [2], whose Section 8 is devoted to standardization and safety in service robotics, one can learn from Fig. 5 about the establishment of a complete working-group scheme for standardization organized within the technical committee ISO TC 184/SC 2 (robots and robotic devices) [15]. The
scope of the committee covers standardization of the following topics: definition/
characterization, terminology, performance testing methods, safety, mechanical inter-
faces and end effectors, programming methods, and requirements for information
exchange.
As described by Jacobs [16], there are several standards on robots and robotic
devices that have been published (e.g., standards on terms and conditions, coordi-
nate systems with extension for mobile robots, safety of industrial robots, and safety
of personal care robots), but the process for standardization in new application
areas has only just begun. A brief look at the structure shows that standards in the
domain of Work Groups 8 and 10 are of particular significance for the robots used
in automated warehousing.
Function-Specific Standards
In addition to the development process standards that are broadly applicable
across the field of autonomous systems, there are standards specific to particular
applications—typically covering absolute functional limits, such as maximum vehicle
speed, maximum stopping distance, and so on. The following section discusses application-specific standards that were directly relevant to the EC-SAFEMOBIL project's development work on unmanned aerial systems (UAS) operations and distributed autonomous warehouse traffic management. One can observe an obvious difference between these two technical areas: there are many function-specific standards related to unmanned aerial vehicle systems but only a few for their AGV counterparts. In fact, for the autonomous warehousing development aspects of the EC-SAFEMOBIL project, only one standard of direct relevance has been identified [17]:
• ISO 15534-3, Ergonomic design for the safety of machinery—Part 3: Anthro-
pometric data
* This standard provides anthropometric data on the speed of personnel
manipulators.
robots such as mobile servant robots, person carrier robots, and physical
assistant robots.
By way of example, current automated guided vehicles of Euroimpianti are
declared to conform to the following European directives:
• 2006/42/EC, machinery directive
• 2006/95/EC, low-voltage directive
• 2004/108/EC, electromagnetic compatibility directive
Also, Euroimpianti’s AGVs are declared to conform to the following interna-
tional technical standards:
• UNI EN ISO 12100-1, Safety of Machinery—Basic Concepts, General Principles for Design—Part 1, 2005
• UNI EN ISO 12100-2, Safety of Machinery—Basic Concepts, General Principles for Design—Part 2, 2005
• UNI EN 1525, Safety of Driverless Industrial Trucks, 1999
• UNI EN 1175-1, Electrical Requirements for Battery Powered Trucks, 1999
• UNI EN ISO 13849-1, Design of Safety-Related Parts of Control Systems,
2007
• CEI EN 60204-1, Safety of Machinery—Electrical Equipment of Machines—
General Requirements, 2006
paths.
* Navigation areas (free-ranging vehicles): A path through the configuration space
must be found for the points representing the way the center of an autonomous
vehicle is moving. Usage of this method in dynamic complex environments
with more vehicles sharing the same navigational area leads to dynamic routing.
* Requirements of the dynamic properties of autonomous vehicles (maxi-
recharging, convoying.
* Definition of travel changes according to carried loads.
New standards for industrial systems with autonomous vehicles should address fundamental restrictions on the use of vehicles in areas where human encounters are possible, such as a requirement to operate only within a structured workspace unless an acceptable detect-and-avoid system is used. A similar standard
(CAA CAP 722) exists for unmanned air systems [20] where detect and avoid is
defined in the glossary as “the capability to see, sense, or detect conflicting traffic or
other hazards and take the appropriate action.”
COMMUNICATIONS
Reliable communications are the backbone of correctly functioning automated
warehouses. Requirements concerning communications differ according to the method by which the automated industrial vehicles are controlled. Accordingly, automated warehousing applications can be divided into
two categories with regards to the type of control and the number of deployed au-
tonomous industrial vehicles:
• Warehouse installations of 1 to n vehicles controlled in a centralized way:
* Communication among the control center and each autonomous vehicle (if any), and among all neighboring vehicles (vehicles positioned within the effective communication range).
This implies definition of the following functional specifications for communications:
• Communication infrastructure in the industrial warehouse facility must
provide a continuous flow of information:
* Latency in the communication channels must be low enough to allow
conveyor belts (pick-up positions) and define the position of the unloading station (Station A).
3. Provide marker-based localization infrastructure to achieve the best possible accuracy of localization as a reference for assessment of new localization methods.
4. Move the AGV from the first pick-up position (e.g., Conveyor Belt 3) to the Station A unloading site n times in a row and set markings to record the positions at which the AGV stopped (a reduction sketch follows this list).
5. Repeat this sequence for different starting positions of the AGV and different
pick-up positions (e.g., Conveyor Belts 1 or 2).
6. Repeat experiments for low speed and regular speed of AGV motion.
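As a sketch of how the recorded stop positions from step 4 could be reduced against the marker-based reference from step 3, consider the following; all coordinate values are illustrative.

```python
import numpy as np

# Marker-based reference positions vs. the AGV's own localization estimates
# at the Station A stop over repeated runs (illustrative values, mm).
reference = np.array([[0.0, 0.0], [0.4, -0.2], [-0.1, 0.3], [0.2, 0.1]])
estimated = np.array([[1.1, 0.6], [1.6, 0.2], [0.8, 1.0], [1.2, 0.9]])

error = estimated - reference
print("mean error (bias):", error.mean(axis=0))      # systematic offset
print("std of error:", error.std(axis=0, ddof=1))    # run-to-run spread
```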
This benchmark scenario was used at the use case demonstration during the
Sixth Consortium Meeting in Schio, January 19–21, 2015. At both AGV docking
sites, there were markers on the floor to demonstrate the accuracy of the position-
ing system, which reflects the accuracy of the localization. Additionally, the orienta-
tion was measured on each site [1].
mission to pick up the pallet from Conveyor Belt 1 and transport it to Station A,
while LGV1400 will receive the request to pick up the pallet from Conveyor Belt 1.
While LGV1000 is moving toward the loading site, LGV1400 will be stopped by a
human to demonstrate that safety features are enabled and to produce enough delay
in the execution of the LGV1400 mission for the conflict to arise.
When LGV1000 loads the pallet from Conveyor Belt 1, LGV1400 will be
released. At this time, two missions are active (marked with the same dashing of the
paths in Fig. 9):
• LGV1000 delivery at Station A
• LGV1400 pickup from Belt 1
This situation means that the vehicles will need to exchange their positions, which
will cause them to start moving toward each other (head-on), and a control system will
resolve this situation by altering their paths so that they can bypass each other safely.
At the end of this scenario, LGV1000 has unloaded the pallet at Station A and LGV1400 has picked up the pallet from Conveyor Belt 1. These are the initial conditions for the next benchmark scenario.
BENCHMARK SCENARIO 3: REMOVAL OF THE IDLE VEHICLE FROM THE PATH
OF ANOTHER VEHICLE
This situation starts with LGV1000 successfully unloading a pallet at Station A and
LGV1400 picking up a pallet at Conveyor Belt 3 (see Fig. 10).
Conclusion
Adherence to standards is key to the certification of any system implementation. Automated warehousing systems lack standards that would guide system developers toward quicker acceptance of new products on the global market for automated manufacturing and logistics systems.
The analysis of the existing standards shows that many standards from other application areas, such as unmanned aerial systems, can be a starting point for adoption and adaptation for automated warehousing systems. Due to the fact that
References
[1] Estimation and Control for Safe Wireless High Mobility Cooperative Industrial Systems
(EC-SAFEMOBIL, Project No. 288082), http://ec-safemobil-project.eu (accessed March 1,
2016).
[2] World Robotics Service Robots, IFR Statistical Department, Frankfurt, Germany, 2014.
[3] Jensfelt, P. and Christensen, H. I., “Laser Based Position Acquisition and Tracking in an
Indoor Environment,” Proceedings of the IEEE International Symposium on Robotics
and Automation , Institute of Electrical and Electronics Engineers, Leuven, Belgium,
May 16–20, 1998.
[4] Chen, L., Hu, H., and McDonald-Maier, K., “EKF Based Mobile Robot Localization,”
Proceedings of the Third International Conference on Emerging Security Technologies,
Institute of Electrical and Electronics Engineers, Lisbon, September 5–7, 2012,
pp. 149–154.
[5] Teslic, L., Skrjanc, I., and Klancar, G., “EKF-Based Localization of a Wheeled Mobile
Robot in Structured Environments,” Journal of Intelligent & Robotic Systems, Vol. 62,
No. 2, 2011, pp. 187–203.
[6] Kummerle, R., Pfaff, P., Triebel, R., and Burgard, W., “Monte Carlo Localization in Outdoor
Terrains using Multi-Level Surface Maps,” Journal of Field Robotics, Vol. 25, No. 6–7,
2008, pp. 346–359.
[7] Grisetti, G., Stachniss, C., and Burgard, W., “Improved Techniques for Grid Mapping with
Rao-Blackwellized Particle Filters,” IEEE Transactions on Robotics, Vol. 23, No. 1, 2007,
pp. 34–46.
[8] LaValle, S. M., “Planning Algorithms,” Cambridge University Press, Cambridge, UK, 2006.
[9] Weyns, D., Holvoet, T., Schelfthout, K., and Wielemans, J., “Decentralized Control of
Automatic Guided Vehicles: Applying Multi-Agent Systems in Practice,” Companion to
the 23rd ACM SIGPLAN Conference on Object-Oriented Programming Systems Lan-
guages and Applications, Association for Computing Machinery, New York, 2008,
pp. 663–674.
[10] Yamamoto, H. and Yamada, T., “Control of AGVs in Decentralized Autonomous FMS
Based on a Mind Model,” Agent and Multi-Agent Systems. Technologies and Applica-
tions, G. Jezic, M. Kusek, N.-T. Nguyen, R. Howlett, and L. Jain, Eds., Springer, Berlin
Heidelberg, 2012, pp. 186–198.
[11] Herrero-Perez, D. and Martinez-Barbera, H., “Decentralized Coordination of Automated
Guided Vehicles,” Proceedings of the 7th International Joint Conference on Autonomous
Agents and Multiagent Systems, Vol. 3, International Foundation for Autonomous
Agents and Multiagent Systems, Richland, SC, 2008, pp. 1195–1198.
[12] Ayanian, N., Rus, D., and Kumar, V., “Decentralized Multirobot Control in Partially Known
Environments with Dynamic Task Reassignment,” Proceedings of the Third IFAC
Workshop on Distributed Estimation and Control in Networked Systems, International
Federation of Automatic Control, Santa Barbara, CA, September 14–15, 2012, pp. 311–316.
[13] Digani, V., Sabattini, L., Secchi, C., and Fantuzzi, C., “Toward Decentralized Coordination
of Multi Robot Systems in Industrial Environments: A Hierarchical Traffic Control
Strategy,” Proceedings of the 2013 IEEE International Conference on Intelligent
Computer Communication and Processing, Institute of Electrical and Electronics Engi-
neers, Cluj-Napoca, Romania, September 5–7, 2013, pp. 209–215.
[14] IEC-61508, Functional Safety of Electrical/Electronic/Programmable Electronic Safety-
Related Systems—Part 3: Software Requirements, International Electrotechnical
Commission, Geneva, Switzerland, 2010, www.iec.ch
[15] TC 184/SC 2, Robots and Robotic Devices, International Organization for Standardiza-
tion, Geneva, Switzerland, www.iso.org (accessed March 1, 2016).
[16] Jacobs, T., “Standardisation and Safety in Service Robotics,” World Robotics Service
Robots, IFR Statistical Department, Frankfurt, Germany, 2014, pp. 255–259.
[17] ISO 15534-3, Ergonomic Design for the Safety of Machinery—Part 3: Anthropometric
Data, International Organization for Standardization, Geneva, Switzerland, www.iso.org (accessed March 1, 2016).
[18] Barraquand, J., Langlois, B., and Latombe, J. C., “Numerical Potential Field Techniques
for Robot Path Planning,” IEEE Transactions on Systems, Man, and Cybernetics, Vol. 22,
No. 2, 1992, pp. 224–241.
[19] Hwang, Y. K. and Ahuja, N., “A Potential Field Approach to Path Planning,” IEEE Transactions on Robotics and Automation, Vol. 8, No. 1, 1992, pp. 23–32.
[20] CAA CAP 722, Unmanned Aircraft System Operations in UK Airspace—Guidance, Civil Aviation Authority, London, UK, www.caa.co.uk
STP 1594, 2016 / available online at www.astm.org / doi: 10.1520/STP159420150055
Roger Bostelman 1
Recommendations for
Autonomous Industrial Vehicle
Performance Standards
Citation
Bostelman, R., “Recommendations for Autonomous Industrial Vehicle Performance
Standards,” Autonomous Industrial Vehicles: From the Laboratory to the Factory Floor,
ASTM STP1594, R. Bostelman and E. Messina, Eds., ASTM International, West Conshohocken,
PA, 2016, pp. 129–141, doi:10.1520/STP159420150055 2
ABSTRACT
A workshop on “Autonomous Industrial Vehicles: From the Laboratory to the
Factory Floor” was held at the 2015 Institute of Electrical and Electronics Engineers (IEEE) International Conference on Robotics and Automation. Nine research
papers were presented, followed by a discussion session. All of the findings are
summarized in this chapter and are intended to be used in the standards
development process within ASTM International Committee F45 on Driverless Automatic Guided Industrial Vehicles. This paper provides feedback from the
discussion and suggests recommendations for standards that evolved from the
discussion.
Keywords
standards, mobile robots, automatic guided vehicle (AGV), recommendations
Introduction
A workshop entitled “Autonomous Industrial Vehicles: From the Laboratory to
the Factory Floor” was held as a part of the Institute of Electrical and Electronics
Manuscript received June 16, 2015; accepted for publication November 3, 2015.
1 National Institute of Standards and Technology, 100 Bureau Dr., MS 8230, Gaithersburg, MD 20899-8230
2 ASTM Workshop on Autonomous Industrial Vehicles: From the Laboratory to the Factory Floor on May 26–30, 2015 in Seattle, Washington.
FIG. 1 Example navigation test method setup for defined areas as shown in the ASTM
F45.02 navigation working document.
Open-space tests could be defined as simple geometric shaped paths (e.g., square, circle,
straight line) for the vehicle to navigate. These tests could be used to evaluate the
vehicle’s accuracy in maintaining its commanded path over time. As with the defined
space navigation test method, this one will be agnostic to the manner in which the paths
are commanded to the vehicle, as long as the geometric shapes, dimensions, and so on
are consistent. Combinations of defined and open space navigation test methods should
also be considered where barriers may define one side of the vehicle and, for example,
a tape line defining a pedestrian walkway may define the other side ofthe vehicle.
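For such open-space tests, one candidate path-following metric is the cross-track error against the commanded shape; the following sketch for a commanded circle is illustrative (radius, center, and logged poses are assumed values).

```python
import numpy as np

def circle_cross_track_error(poses_xy, center, radius):
    """Signed distance (m) of each measured position from a commanded
    circular path: positive outside the circle, negative inside."""
    d = np.linalg.norm(np.asarray(poses_xy) - np.asarray(center), axis=1)
    return d - radius

# Illustrative: poses logged while driving a commanded 2-m-radius circle.
poses = [(2.01, 0.00), (1.42, 1.43), (0.02, 2.03), (-1.41, 1.41)]
err = circle_cross_track_error(poses, center=(0.0, 0.0), radius=2.0)
print(err.max(), err.min())   # worst outward/inward deviations (m)
```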
The ASTM F45.02 subcommittee has also started a working document on
docking for industrial vehicles. Challenges for docking are the positioning uncertainty and repeatability with which the vehicle can dock at a location and the speed at which docking can occur; again, as with navigation, various sizes and types of
vehicles are taken into account within the document, as shown in Fig. 2.
Fig. 2a and 2c show unit load vehicles of different sizes, and Fig. 2b shows a tug-
ger vehicle. (Not shown is a forklift vehicle.) Fig. 2d shows examples of vehicle size
variations, and Fig. 2e shows an AGV procured and used by NIST with an added
onboard robot arm (mobile manipulator) being used for performance test method
development for assembly tasks. All of the vehicles require docking with varying
levels of precision; for example, the NIST mobile manipulator tolerates much more docking uncertainty than a typical unit load or tugger vehicle because the vehicle
FIG. 2 Top view of example AGV size variability: (a) low-profile, (b) industrial tug [4],
and (c) container AGV. (d) Vehicle size variability examples. (e) NIST mobile
manipulator being used for performance test method development for
assembly tasks.
position can be compensated by the onboard manipulator. One concept for generic docking, shown in Fig. 3, is to command the vehicle to access a point (a) followed by a second point (b), or to contact both point (a) and point (b) simultaneously, as in the Fig. 3 (right) photo showing two forklift tines simultaneously docking to an apparatus. The taped points on the tines are aligned with the apparatus repeatedly, and the uncertainty measured from each tape point to the target centers is recorded. Various tine heights could also be measured.
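A minimal reduction of such a docking test might look like the following; the tape-point-to-target offsets are invented for illustration.

```python
import numpy as np

# Offsets (mm) measured from each tine's taped point to its target center
# over repeated docking runs (illustrative values).
offsets_left = np.array([[0.8, -0.3], [1.1, 0.2], [0.6, -0.1], [0.9, 0.0]])
offsets_right = np.array([[-0.4, 0.5], [-0.2, 0.3], [-0.6, 0.4], [-0.3, 0.6]])

def docking_stats(offsets):
    """Return (systematic bias, run-to-run repeatability) per axis."""
    return offsets.mean(axis=0), offsets.std(axis=0, ddof=1)

for name, o in (("left tine", offsets_left), ("right tine", offsets_right)):
    bias, rep = docking_stats(o)
    print(name, "bias:", bias, "repeatability:", rep)
```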
The ASTM F45.02 subcommittee has also received further recommendations
toward standards developments for docking and navigation. Specifically, three
questions were documented and distributed to the committee to foster discussion
toward supporting current or developing new working documents:
1. With what accuracy does the AGV need to stop at dock/assembly mating locations?
• Pallets (low or high pick/place)—least accuracy needed
• Tray stations, International Organization for Standardization (ISO)
lock insertion—more accuracy needed
• Peg/part insertion (sheet goods, long rods, etc.) into assemblies—high
accuracy needed
• Pick up/place delicate equipment—high accuracy needed
FIG. 3 Example docking test method showing (left) block vehicle and apparatus dock
points (a) and (b) for docking tests using various AGVs. Points (a) and (b) are
fixed points in a facility or on an apparatus as shown in the photo (right).
Approach vectors and sensor point spacing and locations are variable.
• Dust/dirt
• Outdoor, under conditions such as
• Temperature (e.g., extreme heat, cold)
• Lighting (e.g., day, night)
• Humidity/precipitation (e.g., dry, rain, snow)
• Fog, smoke
• Dust/dirt
• Surfaces
• Smooth/rough terrain
• Floor gaps
• Dusty/dirty
• Wet
• Surface Slope
• Level
• Slope angle > 0°
• Areas
• Defined
* Walls
* Obstacles (safety guards, rails, columns, etc.)
* Other agents
• Open
• Entrance and exit to/from areas
* Softwall curtain partitions
* Automated doors
* Open doorway spaces
• Diagnostics
• Changes
• Repairs
• Test methods should not be designed specifically for a particular vehicle sys-
tem and should instead allow any vehicle design the developers choose to be a
viable option
• Networking should be standardized for connections to vehicles so that all systems
installed in a facility can communicate regardless of the network or manufacturer
• Building integration/interface standards—should there be a working group in
this area?
* Similar to elevators and fire doors
* Standards that allow vehicles to adapt to the facility, including communi-
source
• Develop generalized test methods to test the relevant part or activity of the sys-
tem so that the component, system, etc., performance can be measured as
compared to the task
* Can’t test every possible combination of the system as compared to a task,
therefore generalize the test method to capture the most important aspects
• ASTM E54.08.01 [8] and other standards can be used as a good model for ve-
hicle performance standards development
The workshop presentations and closing discussion surfaced several areas not previously considered in standards development. The enthusiasm of the workshop presenters and attendees demonstrated an obvious need for developing new industrial vehicle performance standards, as well as the components (e.g., communication/network, virtual test data sets, testbed facilities, etc.) that support these systems.
ACKNOWLEDGMENTS
The author would like to thank the IEEE International Conference on Robotics and
Automation “Autonomous Industrial Vehicles: From the Laboratory to the Factory
Floor” workshop attendees and participants. Their feedback and support for the work-
shop provided necessary standard development focus. As well, the author would like
to thank Sebti Foufou, Qatar University, for his editorial guidance.
References
[1] Institute of Electrical and Electronics Engineers (IEEE) International Conference
on Robotics and Automation, Seattle, WA, May 26–30, 2015, http://icra2015.org
(accessed April 3, 2016).
[2] ASTM Committee F45 for Driverless Automatic Guided Industrial Vehicles, ASTM Inter-
national, West Conshohocken, PA, 2015, www.astm.org
[3] ANSI/ITSDF B56.5:2012, Safety Standard for Driverless, Automatic Guided Industrial
Vehicles and Automated Functions of Manned Industrial Vehicles, Industrial Truck Stand-
ards Development Foundation, Washington, DC, 2012, www.itsdf.org
[4] Material Handling Industry of America, “Glossary, Automatic Guided Vehicle Systems,”
MHI, Charlotte, NC, 2014, www.mhi.org/glossary (accessed April 3, 2016).
[5] ISO/FDIS 8373:2011(E/F), Robots and Robotic Devices—Vocabulary, International Orga-
nization for Standardization, Geneva, Switzerland, 2014.
[6] Huang, H.-M., Messina, E., Wade, R., English, R., Novak, B., and Albus, J., “Autonomy
Measures for Robots,” ASME 2004 International Mechanical Engineering Congress
and Exposition, American Society of Mechanical Engineers, New York, NY, 2004, pp. 1241–1247; also in Proceedings of AUVSI’s Unmanned Systems North America 2005.
[7] ISO/FDIS 18646-1, Robots and Robotic Devices—Performance Criteria and Related
Test Methods for Service Robots—Part 1: Locomotion for Wheeled Robots, International
Organization for Standardization, Geneva, Switzerland, 2016 (in review), http://
www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=63127
[8] ASTM E54.08.01, Robots for Urban Search and Rescue, Performance Metrics and Stand-
ards, ASTM International, West Conshohocken, PA, 2015, www.astm.org