
REVIEW

published: 23 June 2020


doi: 10.3389/frobt.2020.00071

Elderly Fall Detection Systems: A Literature Survey
Xueyi Wang 1*, Joshua Ellul 2 and George Azzopardi 1
1 Department of Computer Science, Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, University of Groningen, Groningen, Netherlands
2 Computer Science, Faculty of Information & Communication Technology, University of Malta, Msida, Malta

Falling is among the most damaging events elderly people may experience. With the ever-growing aging population, there is an urgent need for the development of fall detection systems. Thanks to the rapid development of sensor networks and the Internet of Things (IoT), human-computer interaction using sensor fusion has been regarded as an effective method to address the problem of fall detection. In this paper, we provide a literature survey of work conducted on elderly fall detection using sensor networks and IoT. Although there are various existing studies which focus on fall detection with individual sensors, such as wearable devices and depth cameras, the performance of these systems is still not satisfactory, as they mostly suffer from high false alarm rates. The literature shows that fusing the signals of different sensors can result in higher accuracy and fewer false alarms, while improving the robustness of such systems. We approach this survey from different perspectives, including data collection, data transmission, sensor fusion, data analysis, security, and privacy. We also review the benchmark data sets available that have been used to quantify the performance of the proposed methods. The survey is meant to provide researchers in the field of elderly fall detection using sensor networks with a summary of progress achieved to date and to identify areas where further effort would be beneficial.

Keywords: fall detection, Internet of Things (IoT), information system, wearable device, ambient device, sensor fusion

Edited by: Soumik Sarkar, Iowa State University, United States
Reviewed by: Sambuddha Ghosal, Massachusetts Institute of Technology, United States; Carl K. Chang, Iowa State University, United States
*Correspondence: Xueyi Wang, xueyi.wang@rug.nl
Specialty section: This article was submitted to Sensor Fusion and Machine Perception, a section of the journal Frontiers in Robotics and AI
Received: 17 December 2019; Accepted: 30 April 2020; Published: 23 June 2020
Citation: Wang X, Ellul J and Azzopardi G (2020) Elderly Fall Detection Systems: A Literature Survey. Front. Robot. AI 7:71. doi: 10.3389/frobt.2020.00071

1. INTRODUCTION

More than nine percent of the population of China was aged 65 or older in 2015, and within 20 years (2017–2037) this share is expected to reach 20%¹. According to the World Health Organization (WHO), around 646,000 fatal falls occur each year worldwide, the majority of which are suffered by adults older than 65 years (WHO, 2018). This makes falls the second leading cause of unintentional injury death, after road traffic injuries. Globally, falls are a major public health problem for the elderly. Needless to say, the injuries caused by falls that elderly people experience have many consequences, not only for their families but also for healthcare systems and society at large.

As illustrated in Figure 1, Google Trends² shows that fall detection has drawn increasing attention from both academia and industry, especially in the last couple of years, where a sudden increase can be observed. Along the same lines, the topic of fall-likelihood prediction is also significant, and it is coupled with applications focused on prevention and protection.

1 https://chinapower.csis.org/aging-problem/
2 https://www.google.com/trends


FIGURE 1 | Interest in fall detection over time, from January 2004 to December 2019. The data is taken from Google Trends with the search topic “fall detection.” The values are normalized by the maximum interest, such that the highest interest has a value of 100.

El-Bendary et al. (2013) reviewed the trends and challenges of elderly fall detection and prediction. Detection techniques are concerned with recognizing falls after they occur and triggering an alarm to emergency caregivers, while predictive methods aim to forecast fall incidents before or during their occurrence, and therefore allow immediate actions, such as the activation of airbags.

During the past decades, much effort has been put into these fields to improve the accuracy of fall detection and prediction systems as well as to decrease false alarms. Figure 2 shows the top 25 countries in terms of the number of publications about fall detection from the year 1945 to 2020. Most of the publications originate from the United States, followed by England, China, and Germany, among others. The data indicates that developed countries invest more in conducting research in this field than others. Due to higher living standards and better medical resources, people in developed countries are more likely to have a longer life expectancy, which results in a higher aging population in such countries (Bloom et al., 2011).

FIGURE 2 | (A) A map and (B) a histogram of publications on fall detection by countries and regions from 1945 to 2020.

In this survey paper, we provide a holistic overview of fall detection systems, aimed at a broad readership seeking to become acquainted with the literature in this field. Besides fall detection modeling techniques, this review covers other topics, including issues pertaining to data transmission, data storage and analysis, and security and privacy, which are equally important in the development and deployment of such systems.

The rest of the paper is organized as follows. In section 2, we start by introducing the types of falls and reviewing other survey papers to illustrate the research trends and challenges to date, followed by a description of our literature search strategy. Next, in section 3 we introduce hardware and software components typically used in fall detection systems. Sections 4 and 5 give an overview of fall detection methods that rely on individual sensors and on collections of sensors, respectively. In section 6, we address issues of security and privacy. Section 7 introduces projects and applications of fall detection. In section 8, we provide a discussion of the current trends and challenges, followed by open issues and future directions. Finally, we provide a summary of the survey and draw conclusions in section 9.

2. TYPES OF FALLS AND PREVIOUS REVIEWS ON ELDERLY FALL DETECTION

2.1. Types of Falls
The impact and consequences of a fall can vary drastically depending upon various factors. For instance, falls that occur whilst walking, standing, sleeping, or sitting on a chair share some characteristics but also have significant differences between them.

In El-Bendary et al. (2013), the authors group the types of falls into three basic categories, namely forward, lateral, and backward. Putra et al. (2017) divided falls into a broader set of categories, namely forward, backward, left-side, right-side, blinded-forward, and blinded-backward, and in the study by Chen et al. (2018) falls are grouped into more specific categories, including fall lateral left and lie on the floor, fall lateral left and sit up from the floor, fall lateral right and lie on the floor, fall lateral right and sit up from the floor, fall forward and lie on the floor, and fall backward and lie on the floor.

Besides the direction one takes whilst falling, another important aspect is the duration of the fall, which may be influenced by age, health, and physical condition, along with the activity that the individual was undertaking. Elderly people may experience falls of longer duration, because their activities of daily living involve motion at low speed. For instance, in fainting or chest-pain related episodes an elderly person might try to rest against a wall before lying on the floor. In other situations, such as injuries due to obstacles or dangerous
settings (e.g., slanting or uneven pavement or surfaces), an elderly person might fall abruptly. The age and gender of the subject also play a role in the kinematics of falls.

The characteristics of different types of falls are not taken into consideration in most of the work on fall detection surveyed. In most of the papers to date, data sets typically contain falls that are simulated by young and healthy volunteers and do not cover all types of falls mentioned above. The models resulting from such studies, therefore, do not generalize well enough in practical settings.

2.2. Review of Previous Survey Papers
There are various review papers that give an account of the development of fall detection from different aspects. Due to the rapid development of smart sensors and related analytical approaches, it is necessary to revisit the trends and developments frequently. We chose the most highly cited review papers from 2014 to 2020, based on Google Scholar and Web of Science, and discuss them below. These selected review papers demonstrate the trends, challenges, and developments in this field. Other significant review papers published before 2014 are also covered in order to give sufficient background on earlier work.

Chaudhuri et al. (2014) conducted a systematic review of fall detection devices for people of different ages (excluding children) from several perspectives, including background, objectives, data sources, eligibility criteria, and intervention methods. More than 100 papers were selected and reviewed. The selected papers were divided into several groups based on different criteria, such as the age of the subjects, the method of evaluation, and the devices used in the detection systems. They noted that most of the studies were based on synthetic data. Although simulated data may share common features with real falls, a system trained on such data cannot reach the same reliability as one that uses real data.

In another survey, Zhang et al. (2015) focused on vision-based fall detection systems and their related benchmark data sets, which had not been discussed in other reviews. Vision-based approaches to fall detection were divided into four categories, namely individual single RGB cameras, infrared cameras, depth cameras, and 3D-based methods using camera arrays. Since the advent of depth cameras, such as the Microsoft Kinect, fall detection with RGB-D cameras has been extensively and thoroughly studied due to their inexpensive price and easy installation. Systems that use calibrated camera arrays also saw prominent uptake. Because such systems rely on many cameras positioned at different viewpoints, challenges related to occlusion are typically reduced substantially, and therefore result in lower false alarm rates. Depth cameras have gained particular popularity because, unlike RGB camera arrays, they do not require complicated calibration and they are also less intrusive with respect to privacy. Zhang et al. (2015) also reviewed different types of fall detection methods that rely on the activity/inactivity of the subjects, shape (width-to-height ratio), and motion. While that review gives a thorough overview of vision-based systems, it lacks an account of other fall detection systems that rely on non-vision sensors such as wearable and ambient ones.

Further to the particular interest in depth cameras, Cai et al. (2017) reviewed the benchmark data sets acquired by the Microsoft Kinect and similar cameras. They reviewed 46 public RGB-D data sets, 20 of which are highly used and cited. They compared and highlighted the characteristics of all data sets in terms of their suitability to certain applications. The paper is therefore beneficial for scientists who are looking for benchmark data sets for the evaluation of new methods or new applications.

Based on the review provided by Chen et al. (2017a), individual depth cameras and inertial sensors seem to be the most significant approaches in vision- and non-vision-based systems, respectively. In their review, the authors concluded that the fusion of both types of sensor resulted in a system that is more robust than a system relying on one type of sensor.

The ongoing and fast development of electronics has resulted in more miniature and cheaper devices. For instance, the survey by Igual et al. (2013) noted that low-cost cameras and accelerometers embedded in smartphones may offer the most sensible technological choice for the investigation of fall detection. Igual et al. (2013) identified two main trends in how research is progressing in this field, namely the use of vision

and smartphone-based sensors as input and the use of machine learning for the data analysis. Moreover, they reported the following three main challenges: (i) real-world deployment performance, (ii) usability, and (iii) acceptance. Usability refers to how practical the elderly people find the given system. Because of the issue of privacy and the intrusive characteristics of some sensors, there is a lack of acceptance among the elderly to live in an environment monitored by sensors. They also pointed out several issues which need to be taken into account, such as smartphone limitations (e.g., people may not carry smartphones with them all the time), privacy concerns, and the lack of benchmark data sets of realistic falls.

The survey papers mentioned above focus mostly on the different types of sensors that can be used for fall detection. To the best of our knowledge, there are no literature surveys that provide a holistic review of fall detection systems in terms of data acquisition, data analysis, data transport and storage, sensor networks and Internet of Things (IoT) platforms, as well as security and privacy, which are significant in the deployment of such systems.

2.3. Key Results of Pioneering Papers
In order to illustrate a timeline of fall detection development, in this section we focus on the key and pioneering papers. Through manual filtering of papers using the Web of Science, one can find the trendsetting and highly cited papers in this field. By analyzing the retrieved articles using CiteSpace, one can find that fall detection research first appeared in the 1990s, beginning with the work by Lord and Colvin (1991) and Williams et al. (1998). A miniature accelerometer and microcomputer chip embedded in a badge was used to detect falls (Lord and Colvin, 1991), while Williams et al. (1998) applied a piezoelectric shock sensor and a mercury tilt switch which monitored the orientation of the body to detect falls. At first, most studies were based on accelerometers, including the work by Bourke et al. (2007). In their work, they compared whether the trunk or the thigh offers the best location to attach the sensor. Their results showed that a person's trunk is a better location in comparison to the thigh, and they achieved 100% specificity with a certain threshold value and a sensor located on the trunk. This method was the state-of-the-art at the time, which undoubtedly supported it in becoming the most highly cited paper in the field.

At the time the trend was to use individual sensors for detection, within which another key paper by Bourke and Lyons (2008) explored the problem at hand by using a single gyroscope that measures three variables, namely angular velocity, angular acceleration, and the change in the subject's trunk angle. If the values of these three variables in a particular instance are above some empirically determined thresholds, then that instance is flagged as a fall. Three thresholds were set to distinguish falls from non-falls. Falls are detected when the angular velocity of a subject is greater than the first fall threshold, the angular acceleration of the subject is greater than the second fall threshold, and the change in the trunk angle of the subject is greater than the third fall threshold.
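To make this rule concrete, the following is a minimal sketch of such a three-threshold detector. The function and variable names, units, and the placeholder threshold values are illustrative assumptions; they are not the calibrated values reported by Bourke and Lyons (2008).

```python
# Minimal sketch of a three-threshold fall rule in the spirit of
# Bourke and Lyons (2008). Threshold values are placeholders only,
# not the empirically calibrated values from that study.

def detect_fall(angular_velocity, angular_acceleration, trunk_angle_change,
                vel_thresh=3.1, acc_thresh=0.05, angle_thresh=0.59):
    """Flag a fall when all three gyroscope-derived variables exceed
    their empirically determined thresholds (units depend on the sensor,
    e.g., rad/s, rad/s^2, and rad, respectively)."""
    return (angular_velocity > vel_thresh
            and angular_acceleration > acc_thresh
            and trunk_angle_change > angle_thresh)

# Example: one measurement instance from a torso-mounted gyroscope.
print(detect_fall(angular_velocity=4.2,
                  angular_acceleration=0.09,
                  trunk_angle_change=1.1))  # -> True
```

In practice, the three thresholds are calibrated empirically on recorded falls and activities of daily living.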
They reported an accuracy of 100% on a data set with only four kinds of falls and 480 movements simulated by young volunteers. However, such classifiers, which are based solely on either accelerometers or gyroscopes, are argued to suffer from insufficient robustness (Tsinganos and Skodras, 2018). Later, Li et al. (2009) investigated the fusion of gyroscope and accelerometer data for the classification of falls and non-falls. In their work, they demonstrated how a fusion-based approach resulted in a more robust classification. For instance, it could distinguish falls more accurately from certain fall-like activities, such as sitting down quickly and jumping, which are hard to detect using a single accelerometer. This work has inspired further research on sensor fusion. These two types of sensors can nowadays be found in all smartphones (Zhang et al., 2006; Dai et al., 2010; Abbate et al., 2012).

Besides the two non-vision-based types of sensors mentioned above, vision-based sensors, such as surveillance cameras, and ambience-based sensors started becoming an attractive alternative. Rougier et al. (2011b) proposed a shape matching technique to track a person's silhouette through a video sequence. The deformation of the human shape is then quantified from the silhouettes based on shape analysis methods. Finally, falls are classified from normal activities using a Gaussian mixture model. After surveillance cameras, depth cameras also attracted substantial attention in this field. The earliest research which applied a Time-of-Flight (ToF) depth camera was conducted in 2010 by Diraco et al. (2010). They proposed a novel approach based on visual sensors which does not require landmarks, calibration patterns, or user intervention. A ToF camera is, however, expensive and has a low image resolution. Following that, the Kinect depth camera was first used in 2011 by Rougier et al. (2011a). Two features, the human centroid height and the velocity of the body, were extracted from the depth information. A simple threshold-based algorithm was applied to detect falls, and an overall success rate of 98.7% was achieved.

After the introduction of the Kinect by Microsoft, there was a large shift in research from accelerometers to depth cameras. Accelerometers and depth cameras have become the most popular individual and combined sensors (Li et al., 2018). The combination of these two sensors achieved a substantial improvement when compared to the use of each sensor separately.

2.4. Strategy of the Literature Search
We use two databases, namely Web of Science and Google Scholar, to search for relevant literature. Since advancements have recently been made at a rapid pace, our searches included articles that were published in the last 6 years (since 2014). We also consider all survey papers that were published on the topic of fall detection. Moreover, we give an account of all relevant benchmark data sets that have been used in this literature.

For the keywords “fall detection”, 4,024 and 575,000 articles were found in the two above-mentioned databases, respectively, since 2014. In order to narrow down our search to the more relevant articles, we compiled a list of the most frequently used keywords, which we report in Table 1.

We use the identified keywords to generate the queries listed in Table 2 in order to make the search more specific to the three classes of sensors that we are interested in. For the


retrieved articles, we discuss their contributions and keep only those that are truly relevant to our survey paper. For instance, articles that focus on rehabilitation after falls, or on the causes of falls, among others, are filtered out manually. This process, which is illustrated in Figure 3, ends up with a total of 87 articles, 13 of which describe benchmark data sets.

TABLE 1 | The most frequently used keywords in the topic of fall detection. The keywords are manually classified into four categories.

Wearable sensor Visual sensor Ambient sensor Sensor fusion

Fall detection Fall detection Fall detection Fall detection
Falls Falls Falls Falls
Fall accident Fall accident Fall accident Fall accident
Machine learning Machine learning Machine learning Machine learning
Deep learning Deep learning Deep learning Deep learning
Reinforcement learning Reinforcement learning Reinforcement learning Reinforcement learning
Body area networks Multiple camera Ambient sensor Health monitoring
Wearable Visual Ambient Sensor fusion
Worn Vision-based Ambience Sensor network
Accelerometer Kinect RF-sensing Data fusion
Gyroscope Depth camera WiFi Multiple sensors
Biosensor Video surveillance Radar Camera arrays
Smart watch RGB camera Cellular Decision fusion
Gait Infrared camera Vibration Anomaly detection
Wearable-based Health-monitoring Ambience-based IoT

TABLE 2 | Search queries used in Google Scholar and Web of Science for the three types of sensor and sensor fusion.

Sensor type Query

Wearable-based (Topic): ((“Fall detection” OR “Fall” OR “Fall accident”) AND (“Wearable” OR “Worn” OR “Accelerometer” OR “Machine learning” OR “Deep learning” OR “Reinforcement learning”) NOT “Survey” NOT “Review” NOT “Kinect” NOT “Video” NOT “Infrared” NOT “Ambient”)

Vision-based (Topic): ((“Fall detection” OR “Falls” OR “Fall accident”) AND (“Video” OR “Visual” OR “Vision-based” OR “Kinect” OR “Depth camera” OR “Video surveillance” OR “RGB camera” OR “Infrared camera” OR “Monocular camera” OR “Machine learning” OR “Deep learning” OR “Reinforcement learning”) NOT “Wearable” NOT “Ambient”)

Ambient-based (Topic): ((“Fall detection” OR “Falls” OR “Fall accident”) AND (“Ambient” OR “Ambient-based” OR “Ambience-based” OR “RF-sensing” OR “WiFi” OR “Cellular” OR “vibration” OR “Ambience” OR “Radar” OR “Machine learning” OR “Deep learning” OR “Reinforcement learning”) NOT “Wearable” NOT “vision”)

Sensor Fusion (Topic): ((“Fall detection” OR “Falls” OR “Falls accident”) AND (“Health monitoring” OR “Multiple sensors” OR “Sensor fusion” OR “Sensor network” OR “Data fusion” OR “IoT” OR “Camera arrays” OR “Decision fusion” OR “Health monitoring” OR “Fusion” OR “Multiple sensors” OR “Machine learning” OR “Deep learning” OR “Reinforcement learning”))

3. HARDWARE AND SOFTWARE COMPONENTS INVOLVED IN A FALL DETECTION SYSTEM

Most of the research on fall detection shares a similar system architecture, which can be divided into four layers, namely the Physiological Sensing Layer (PSL), Local Communication Layer (LCL), Information Processing Layer (IPL), and User Application Layer (UAL), as suggested by Ray (2014) and illustrated in Figure 4.

PSL is the fundamental layer that contains the various (smart) sensors used to collect physiological and ambient data from the persons being monitored. The most commonly used sensors nowadays include accelerometers that sense acceleration, gyroscopes that detect angular velocity, and magnetometers which sense orientation. Video surveillance cameras, which provide a more traditional means of sensing human activity, are also often used but are installed in specific locations, typically with fixed fields of view. More details about PSL are discussed in sections 4.1 and 5.1.

The next layer, namely LCL, is responsible for sending the sensor signals to the upper layers for further processing and analysis. This layer may use both wireless and wired methods of transmission, connected to local computing facilities or to cloud computing platforms. LCL typically takes the form of one (or potentially more) communication protocols, including wireless mediums like cellular, Zigbee, Bluetooth, WiFi, or even wired connections. We provide more details on LCL in sections 4.2 and 5.2.

IPL is a key component of the system. It includes hardware and software components, such as micro-controllers, to analyze and transfer data from the PSL to higher layers. In terms of software components, different kinds of algorithms, such as thresholds, conventional machine learning, deep learning, and deep reinforcement learning, are discussed in sections 4.3, 5.3, and 8.1.

Finally, the UAL concerns applications that assist the users. For instance, if a fall is detected in the IPL, a notification can first be sent to the user, and if the user confirms the fall or does not answer, an alarm is sent to the nearest emergency caregivers, who are expected to take immediate action. There are plenty of other products, like Shimmer and AlertOne, which have been deployed as commercial applications to users. We also illustrate other kinds of applications in section 7.

4. FALL DETECTION USING INDIVIDUAL SENSORS

4.1. Physiological Sensing Layer (PSL) of Individual Sensors
As mentioned above, fall detection research applies either a single sensor or the fusion of multiple sensors.


FIGURE 3 | Illustration of the literature search strategy. The wearable-based queries in Table 2 return 28 articles. The vision- and ambient-based queries return 31
articles, and the sensor fusion queries return 28 articles.

FIGURE 4 | The main components typically present within fall detection system architectures include the illustrated sequence of four layers. Data is collected in the
physiological sensing layer, transferred through the local communication layer, then it is analyzed in the information processing layer, and finally the results are
presented in the user application layer.

The methods of collecting data are typically divided into four main categories, namely individual wearable sensors, individual visual sensors, individual ambient sensors, and data fusion by sensor networks. Whilst some literature groups visual and ambient sensors together, we treat them as two different categories in this survey paper due to visual sensors becoming more prominent as a detection method with the advent of depth (RGB-D) cameras, such as the Kinect.

4.1.1. Individual Wearable Sensors
Falls may result in key physiological variations of the human body, which provide a criterion to detect a fall. By measuring various human-body related attributes using accelerometers, gyroscopes, glucometers, pressure sensors, ECG (electrocardiography), EEG (electroencephalography), or EMG (electromyography), one can detect anomalies within subjects. Due to the advantages of mobility, portability, low cost, and availability, wearable devices are regarded as one of the key types of sensors for fall detection and have been widely studied. Numerous studies have been conducted to investigate wearable devices, which are regarded as a promising direction for studying fall detection and prediction.

Based on our search criteria and filtering strategy (Tables 1, 2), 28 studies on fall detection by individual wearable devices, including eight papers focusing on public data sets, are selected and described to illustrate the trends and challenges of fall detection during the past 6 years. Some conclusions can be drawn from this literature in comparison to the studies before 2014. From Table 3, we note that studies applying accelerometers account for a large percentage of research in this field. To the best of our knowledge, only Xi et al. (2017) deployed electromyography to detect falls, and 19 out of 20 papers applied an accelerometer to detect falls.

Frontiers in Robotics and AI | www.frontiersin.org 6 June 2020 | Volume 7 | Article 71


TABLE 3 | Fall detection using individual wearable devices from 2014 to 2020.

References Sensor Location No. subjects (age) Data sets Algorithms Equipment Alarm

Saleh and Jeannès (2019) Accelerometer Waist 23 (19–30), 15 (60–75) Simulated SVM N/A N
Zitouni et al. (2019) Accelerometer Sole 6 (N/A) Simulated Threshold Smartsole N/A
Thilo et al. (2019) Accelerometer Torso 15 (mean = 81) N/A N/A N/A Y
Wu et al. (2019) Accelerometer Chest and Thigh 42 (N/A), 36 (N/A) Public (Simulated) Decision tree Smartwatch (Samsung watch) N/A
Sucerquia et al. (2018) Accelerometer Waist 38 (N/A) Public data sets
Chen et al. (2018) Accelerometer Leg (pockets) 10 (20–26) N/A ML (SVM) Smartphones Y
Putra et al. (2017) Accelerometer Waist 38 (N/A), 42 (N/A) Public data sets ML N N/A
Khojasteh et al. (2018) Accelerometer N/A 17 (18–55), 6 (N/A), 15 (mean = 66.4) Public (Simulated) Threshold/ML N/A N/A
de Araújo et al. (2018) Accelerometer Wrist 1 (30) N/A Threshold Smartwatch N/A
Djelouat et al. (2017) Accelerometer Waist N/A Collected by authors (Simulated) ML Shimmer-3 Y
Aziz et al. (2017) Accelerometer Waist 10 (mean = 26.6) Collected by authors (Simulated) Threshold/ML Accelerometers (Opal model, APDM Inc) N
Kao et al. (2017) Accelerometer Wrist N/A Collected by authors (Simulated) ML ZenWatch (ASUS) Y
Islam et al. (2017) Accelerometer Chest (pocket) 7 (N/A) N/A Threshold Smartphone N/A
Xi et al. (2017) Electromyography (sEMG) Ankle, Leg 3 (24–26) Collected by authors (Simulated) ML EMGworks 4.0 (DelSys Inc.) N
Chen et al. (2017b) Accelerometer Lumbar, Thigh 22 (mean = 69.5) Public data sets (Real) ML N/A N/A
Chen et al. (2017b) Accelerometer Chest, Waist, Arm, Hand N/A Collected by authors (Simulated) Threshold N/A Y
Medrano et al. (2017) Accelerometer N/A 10 (20–42) Public (Simulated) ML Smartphones N
Shi et al. (2016) Accelerometer N/A 10 (mean = 25) N/A Threshold Smartphone N/A
Wu et al. (2015) Accelerometer Waist 3 (23, 42, 60) Collected by authors (Simulated) Threshold ADXL345 Accelerometer (ADI) Y
Mahmud and Sirat (2015) Accelerometer Waist 13 (22–23) Collected by authors (Simulated) Threshold Shimmer N/A

ML is the abbreviation of Machine Learning.



TABLE 4 | Fall detection using individual vision-based devices from 2014 to 2020.

References Sensor No. subjects (age) Data sets Algorithms Real-time Alarm

Han et al. (2020) Web camera N/A Simulated CNN N/A N/A
Kong et al. (2019) Camera (Surveillance) N/A Public (Simulated) CNN Y N/A
Ko et al. (2018) Camera (Smartphone) N/A Simulated Rao-Blackwellized Particle Filtering N/A N
Shojaei-Hashemi et al. (2018) Kinect 40 (10–15) Public (Simulated) LSTM Y N
Min et al. (2018) Kinect 4 (N/A), 11 (22–39) Public (Simulated) SVM Y N
Ozcan et al. (2017) Web camera 10 (24–31) Simulated Relative-entropy-based N/A N/A
Akagündüz et al. (2017) Kinect 10 (N/A) Public (Simulated), SDU (2011) Silhouette N/A N
Adhikari et al. (2017) Kinect 5 (19–50) Simulated CNN N/A N
Ozcan and Velipasalar (2016) Camera (Smartphone) 10 (24–31) Simulated Threshold/ML N/A N/A
Senouci et al. (2016) Web Camera N/A Simulated SVM Y Y
Amini et al. (2016) Kinect v2 11 (24–31) Simulated Adaptive Boosting Trigger, Heuristic Y N
Kumar et al. (2016) Kinect 20 (N/A) Simulated SVM N/A N
Aslan et al. (2015) Kinect 20 (N/A) Public (Simulated) SVM N/A N
Yun et al. (2015) Kinect 12 (N/A) Simulated SVM N/A N
Stone and Skubic (2015) Kinect 454 (N/A) Public (Simulated+Real) Decision trees N/A N
Bian et al. (2015) Kinect 4 (24–31) Simulated SVM N/A N
Chua et al. (2015) RGB camera N/A Simulated Human shape variation Y N
Boulard et al. (2014) Web camera N/A Real Elliptical bounding box N/A N
Feng et al. (2014) Monocular camera N/A Simulated Multi-class SVM Y N
Mastorakis and Makris (2014) Infrared sensor (Kinect) N/A Simulated 3D bounding box Y N
Gasparrini et al. (2014) Kinect N/A Simulated Depth frame analysis Y N
Yang and Lin (2014) Kinect N/A Simulated Silhouette N/A N

Although the equipment used, such as Shimmer nodes, smartphones, and smart watches, often contains other sensors like gyroscopes and magnetometers, these sensors were not used to detect falls. Bourke et al. (2007) also found that accelerometers are regarded as the most popular sensors for fall detection, mainly due to their affordable cost, easy installation, and relatively good performance.

Although smartphones have gained attention for studying falls, the underlying sensors of systems using them are still accelerometers and gyroscopes (Shi et al., 2016; Islam et al., 2017; Medrano et al., 2017; Chen et al., 2018). Users are more likely to carry smartphones all day rather than extra wearable devices, so smartphones are useful for eventual real-world deployments (Zhang et al., 2006; Dai et al., 2010).

4.1.2. Individual Visual Sensors
Vision-based detection is another prominent method. Extensive effort in this direction has been demonstrated, and some of the work (Akagündüz et al., 2017; Ko et al., 2018; Shojaei-Hashemi et al., 2018) shows promising performance. Although most cameras are not as portable as wearable devices, they offer other advantages which make them decent options depending upon the scenario. Most static RGB cameras are non-intrusive and wired, hence there is no need to worry about battery limitations. The viability of vision-based approaches has been demonstrated with infrared cameras (Mastorakis and Makris, 2014), RGB cameras (Charfi et al., 2012), and RGB-D depth cameras (Cai et al., 2017). One main challenge of vision-based detection is the potential violation of privacy due to the level of detail that cameras can capture, such as personal information, appearance, and visuals of the living environment.

Further to the information that we report in Table 4, we note that RGB, depth, and infrared cameras are the three main visual sensors used. Moreover, it can be noted that the RGB-D camera (Kinect) is among the most popular vision-based sensors, as 12 out of 22 studies applied it in their work. Nine of the other 10 studies used RGB cameras, including cameras built into smartphones, web cameras, and monocular cameras, while the remaining study used the infrared camera within the Kinect to conduct their experiments.

Static RGB cameras were the most widely used sensors within the vision-based fall detection research conducted before 2004, although the accuracies of RGB camera-based detection systems vary drastically due to environmental conditions, such as illumination changes, which often result in limitations during the night. Besides, RGB cameras are inherently likely to have a higher false alarm rate because some deliberate actions, like lying on the floor, sleeping, or sitting down abruptly, are not easily distinguished in frames captured by RGB cameras. The launch of the Microsoft Kinect, which consists of an RGB camera, a depth sensor, and a multi-array microphone, stimulated a trend in 3D data collection and analysis, causing a shift from RGB to RGB-D cameras. Kinect depth cameras took the place of traditional RGB cameras and became the second most popular sensors in the field of fall detection after 2014 (Xu et al., 2018).


TABLE 5 | Fall detection using individual ambient devices from 2014 to 2020.

References Sensor No. subjects (age) Data sets Algorithms Real-time Alarm

Huang et al. (2019) Vibration 12 (19-29) Simulated HMM Y N/A


Hao et al. (2019) WiFi N/A Simulated SVM Y N/A
Tian et al. (2018) FMCW radio 140 (N/A) Simulated CNN Y N/A
Palipana et al. (2018) WiFi 3 (27-30) Simulated SVM Y N/A
Wang et al. (2017a) WiFi 6 (21-32) Simulated SVM Y N/A
Wang et al. (2017b) WiFi N/A Simulated SVM, Random Forests N/A N/A

In recent years, we have seen an increasing interest in the use of wearable cameras for the detection of falls. For instance, Ozcan and Velipasalar (2016) tried to exploit the cameras on smartphones. Smartphones were attached to the waists of subjects and their inbuilt cameras were used to record visual data. Ozcan et al. (2017) investigated how web cameras (e.g., Microsoft LifeCam) attached to the waists of subjects can contribute to fall detection. Although both approaches are not yet practical enough to be deployed in real applications, they show a new direction which combines the advantages of wearable and visual sensors.

Table 4 reports the work conducted with individual vision-based sensors. The majority of research still makes use of simulated data. Only two studies use real-world data: the one by Boulard et al. (2014) has actual fall data, and the other by Stone and Skubic (2015) has mixed data, including 9 genuine falls and 445 simulated falls by trained stunt actors. In contrast to the real data sets from the work of Klenk et al. (2016) collected by wearable devices, there are few purely genuine data sets collected in real-life scenarios using individual visual sensors.

4.1.3. Individual Ambient Sensors
Ambient sensors provide another non-intrusive means of fall detection. Sensors like active infrared, RFID, pressure, smart tiles, magnetic switches, Doppler radar, ultrasonic, and microphone sensors are used to detect the environmental changes due to falling, as shown in Table 5. They provide an innovative direction in this field, namely passive and pervasive detection. Ultrasonic sensor network systems are among the earliest solutions for fall detection. Hori et al. (2004) argue that one can detect falls by putting a series of spatially distributed sensors in the space where elderly persons live. In Wang et al. (2017a,b), a new fall detection approach which uses ambient sensors is proposed. It relies on Wi-Fi which, due to its non-invasive and ubiquitous characteristics, is gaining more and more popularity. However, the studies by Wang et al. (2017a,b) are limited in terms of multi-person detection due to their classifiers not being robust enough to distinguish new subjects and environments. In order to tackle this issue, other studies have developed more sophisticated methods. These include the Aryokee (Tian et al., 2018) and FallDeFi (Palipana et al., 2018) systems. The Aryokee system is ubiquitous, passive, and uses RF-sensing methods. Over 140 people were engaged to perform 40 kinds of activities in different environments for the collection of data, and a convolutional neural network was utilized to classify falls. Palipana et al. (2018) developed a fall detection technique named FallDeFi, which is based on WiFi signals as the enabling sensing technology. They provided a system applying time-frequency analysis of WiFi Channel State Information (CSI) and achieved above 93% average accuracy.

RF-sensing technologies have also been widely applied to other recognition activities beyond fall detection (Zhao et al., 2018; Zhang et al., 2019), and even to subtle movements. Zhao et al. (2018) studied human pose estimation with multiple persons. Their experiments showed that RF-Pose has better performance under occlusion. This improvement is attributable to the ability of their method to estimate the pose of the subject through a wall, something that visual sensors fail to do. Further research on RF-sensing was conducted by Niu et al. (2018) with applications to finger gesture recognition, human respiration, and chin movement. Their research can potentially be used for applications of autonomous health monitoring and home appliance control. Furthermore, Zhang et al. (2019) used an RF-sensing approach in the proposed system WiDIGR for gait recognition. Guo et al. (2019) claimed that RF-sensing is drawing more attention, which can be attributed to it being device-free for users; in contrast to RGB cameras, it can also work under low-light conditions and occlusions.

4.1.4. Subjects
For most research groups there is not enough time and funding to collect data continuously over several years to study fall detection. Due to the rarity of genuine data in fall detection and prediction, Li et al. (2013) started to hire stunt actors to simulate different kinds of falls. There are also many data sets of falls which are simulated by young healthy students, as indicated in the studies by Bourke et al. (2007) and Ma et al. (2014). For obvious reasons, elderly subjects cannot be engaged to perform the motion of falls for data collection. For most of the existing data sets, falls are simulated by young volunteers who perform soft falls under the protection of soft mats on the ground. Elderly subjects, however, often exhibit totally different behavior due to less control over the speed of the body. One potential solution could include simulated data sets created using physics engines, such as OpenSim. Previous research (Mastorakis et al., 2007, 2018) has shown that simulated data from OpenSim contributed to an increase in the performance of the resulting models. Another solution includes online learning algorithms which adapt to subjects who were not represented in the training data. For instance, Deng et al. (2014) applied the Transfer-learning reduced Kernel Extreme Learning Machine (RKELM) approach and showed how a trained classifier, based on data sets collected from young volunteers, can be adapted to the elderly.


The algorithm consists of two parts, namely offline classification modeling and online updating modeling, which is used to adapt to new subjects. After the model is trained on labeled training data offline, unlabeled test samples are fed into the pre-trained RKELM classifier to obtain a confidence score. The samples that obtain a confidence score above a certain threshold are used to update the model. In this way, the model is able to adapt to new subjects gradually as new samples are received from them. Namba and Yamada (2018a,b) demonstrated how deep reinforcement learning can be applied to assisting mobile robots, in order to adapt to conditions that were not present in the training set.
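The general idea of such confidence-thresholded online updating can be illustrated with the sketch below, in which a generic scikit-learn classifier stands in for the RKELM model of Deng et al. (2014); the classifier choice, the confidence threshold of 0.9, and the retrain-from-scratch strategy are simplifying assumptions rather than the original method.

```python
# Sketch of confidence-based online adaptation: a model trained offline on
# labeled data is gradually updated with unlabeled samples from new subjects
# whose predictions are sufficiently confident. Logistic regression stands in
# for the RKELM classifier; retraining from scratch replaces the incremental
# kernel update used in the original work.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 6))                          # offline labeled features
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)    # 1 = fall, 0 = non-fall

model = LogisticRegression().fit(X_train, y_train)

def adapt_online(model, X_train, y_train, new_samples, conf_thresh=0.9):
    """Pseudo-label confident unlabeled samples and fold them into the training set."""
    for x in new_samples:
        proba = model.predict_proba(x.reshape(1, -1))[0]
        if proba.max() >= conf_thresh:                        # confident prediction
            X_train = np.vstack([X_train, x])
            y_train = np.append(y_train, proba.argmax())      # pseudo-label
            model = LogisticRegression().fit(X_train, y_train)
    return model, X_train, y_train

X_new = rng.normal(size=(20, 6))                              # unlabeled data, new subject
model, X_train, y_train = adapt_online(model, X_train, y_train, X_new)
```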
4.2. Local Communication Layer (LCL) of Individual Sensors
There are two components involved in communication within such systems. Firstly, data collected from the different smart sensors is sent to local computing facilities or to remote cloud computing. Then, after the final decision is made by these computing platforms, instructions and alarms are sent to the appointed caregivers for immediate assistance (El-Bendary et al., 2013).

The protocols of data communication are divided into two categories, namely wireless and wired transmission. For the former, transmission protocols include Zigbee, Bluetooth, WiFi, WiMAX, and cellular networks.

Most of the studies that used individual wearable sensors deployed commercially available wearable devices. In those cases, data was communicated by transmission modules built into the wearable products, using mediums such as Bluetooth and cellular networks. In contrast to detection systems using wearable devices, most static vision- and ambient-based studies are connected to smart gateways by wired connections. These approaches are usually applied as static detection methods, so a wired connection is a better choice.
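As a concrete illustration of this layer, the sketch below buffers accelerometer samples on a wearable gateway and forwards them as newline-delimited JSON over a plain TCP socket. The gateway address, port, batch size, and payload schema are hypothetical placeholders standing in for whatever transport (Bluetooth, WiFi, cellular, or a wired link) a given deployment actually uses.

```python
# Sketch of a local communication layer: buffer sensor samples and forward
# them to a processing gateway as newline-delimited JSON over TCP.
# Host, port, and payload schema are illustrative assumptions only.
import json
import socket
import time

GATEWAY_HOST, GATEWAY_PORT = "192.168.1.50", 9000   # hypothetical gateway address

def send_batch(samples, host=GATEWAY_HOST, port=GATEWAY_PORT):
    """Send a list of {'t': ..., 'ax': ..., 'ay': ..., 'az': ...} dicts."""
    with socket.create_connection((host, port), timeout=5) as sock:
        for sample in samples:
            sock.sendall((json.dumps(sample) + "\n").encode("utf-8"))

buffer = []

def on_new_sample(ax, ay, az, batch_size=50):
    """Called by the sensor driver for every new accelerometer reading."""
    buffer.append({"t": time.time(), "ax": ax, "ay": ay, "az": az})
    if len(buffer) >= batch_size:     # flush one batch to the gateway
        send_batch(buffer)
        buffer.clear()
```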
4.3. Information Processing Layer (IPL) of Individual Sensors
4.3.1. Detection Using Threshold-Based and Data-Driven Algorithms
Threshold-based and data-driven algorithms (including machine learning and deep learning) are the two main approaches that have been used for fall detection. Threshold-based approaches are usually used for data coming from individual sensors, such as accelerometers, gyroscopes, and electromyography. Their decisions are made by comparing the values measured by the concerned sensors to empirically established threshold values. Data-driven approaches are more applicable for sensor fusion, as they can learn non-trivial non-linear relationships from the data of all involved sensors. In terms of the algorithms used to analyze data collected using wearable devices, Figure 5 demonstrates that there is a significant shift to machine learning based approaches, in comparison to the work conducted between 1998 and 2012. Of the papers presented between 1998 and 2012, threshold-based approaches account for 71%, while only 4% applied machine learning based methods (Schwickert et al., 2013). We believe that this shift is due to two main reasons. First, the rapid development of affordable sensors and the rise of the Internet of Things made it possible to more easily deploy multiple sensors in different applications. As mentioned above, the non-linear fusion of multiple sensors can be modeled very well by machine learning approaches. Second, with the breakthrough of deep learning, threshold-based approaches have become even less preferable. Moreover, different types of machine learning approaches have been explored, namely Bayesian networks, rule-based systems, nearest neighbor-based techniques, and neural networks. These data-driven approaches (Gharghan et al., 2018) show better accuracy and are more robust in comparison to threshold-based methods. Notable is the fact that data-driven approaches are more resource hungry than threshold-based methods. With the continuing advancement of technology, however, this is not a major concern and we foresee that more effort will be invested in this direction.

FIGURE 5 | Different types of methods used in fall detection using individual wearable sensors in the period 1998–2012, based on the survey of Schwickert et al. (2013), and in the period 2014–2020, based on our survey. The term “others” refers to traditional methods that are neither based on thresholds nor on machine learning, and the term “N/A” stands for not available and refers to studies whose methods are not clearly defined.
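To contrast the two families of methods, the following sketch shows a typical data-driven pipeline: fixed-length windows of tri-axial accelerometer data are summarized by hand-crafted statistics and fed to a support vector machine. The window length, feature set, and synthetic data are illustrative assumptions, not a reproduction of any specific study in Table 3.

```python
# Sketch of a data-driven detector: window the accelerometer stream,
# extract simple statistical features, and train an SVM classifier.
# The synthetic data and feature choices are for illustration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def extract_features(window):
    """window: (n_samples, 3) array of ax, ay, az values."""
    magnitude = np.linalg.norm(window, axis=1)
    return np.array([magnitude.mean(), magnitude.std(),
                     magnitude.max(), magnitude.min(),
                     np.abs(np.diff(magnitude)).mean()])    # crude jerk proxy

rng = np.random.default_rng(1)
windows = rng.normal(0, 1, size=(300, 128, 3))      # 300 windows of 128 samples each
labels = rng.integers(0, 2, size=300)               # 1 = fall, 0 = other activity
X = np.array([extract_features(w) for w in windows])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

A threshold-based method would replace the classifier with a fixed cut-off on one or two of these statistics, trading accuracy and robustness for lower computational cost.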
4.3.2. Detection Using Deep Learning
Traditional machine learning approaches determine mapping functions between handcrafted features extracted from raw training data and the respective output labels (e.g., no fall or fall, to keep it simple). The extraction of handcrafted features


requires domain expertise and is, therefore, limited to the knowledge of the domain experts. Though such a limitation is imposed, the literature shows that traditional machine learning, based on support vector machines, hidden Markov models, and decision trees, is still very active in the field of fall detection that uses individual wearable non-visual or ambient sensors (e.g., accelerometers) (Wang et al., 2017a,b; Chen et al., 2018; Saleh and Jeannès, 2019; Wu et al., 2019). For visual sensors, the trend has been moving toward deep learning with convolutional neural networks (CNNs) (Adhikari et al., 2017; Kong et al., 2019; Han et al., 2020) or LSTMs (Shojaei-Hashemi et al., 2018). Deep learning is a sophisticated learning framework that, besides the mapping function (as mentioned above and used in traditional machine learning), also learns the features (in a hierarchical fashion) that characterize the concerned classes (e.g., falls and no falls). This approach has been inspired by the visual system of the mammalian brain (LeCun et al., 2015). In computer vision applications, which take images or videos as input, deep learning has been established as the state-of-the-art. In this regard, similar to other computer vision applications, fall detection approaches that rely on vision data have been shifting from traditional machine learning to deep learning in recent years.
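As an illustration of this shift, the sketch below defines a minimal convolutional network in PyTorch that classifies a short stack of depth frames as fall or no fall. The input resolution, number of frames, and layer sizes are arbitrary assumptions and do not reproduce an architecture from the cited works.

```python
# Minimal CNN sketch for vision-based fall detection: a stack of T depth
# frames is treated as T input channels and mapped to fall / no-fall logits.
# Architecture and input size are illustrative assumptions only.
import torch
import torch.nn as nn

class FallCNN(nn.Module):
    def __init__(self, num_frames=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(num_frames, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)   # logits for {no fall, fall}

    def forward(self, x):                    # x: (batch, num_frames, H, W)
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = FallCNN()
dummy_clip = torch.randn(4, 8, 64, 64)       # batch of 4 clips of 8 depth frames
print(model(dummy_clip).shape)               # -> torch.Size([4, 2])
```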
4.3.3. Real Time and Alarms
Real-time operation is a key feature for fall detection systems, especially for commercial products. Considering that certain falls can be fatal or detrimental to health, it is crucial that deployed fall detection systems have high computational efficiency, preferably operating in (near) real-time. Below, we comment on how the methods proposed in the reviewed literature fit within this aspect.

The percentage of studies applying real-time detection with static visual sensors is lower than that of wearable devices. For the studies using wearable devices, Table 3 illustrates that six out of the 20 studies that we reviewed can detect falls and send alarms. There are, however, few studies which demonstrate the ability to process data and send alerts in real-time for work conducted using individual visual sensors. Based on Table 4, one can note that although 40.9% (nine out of 22) of the studies claim that their systems can be used in real-time, only one study showed that an alarm can actually be sent in real-time. The following are a couple of reasons why a higher percentage of vision-based systems cannot be used in real time. Firstly, visual data is much larger and, therefore, its processing is more time consuming than that of one-dimensional signals coming from non-vision-based wearable devices. Secondly, most of the work using vision sensors conducted their experiments with offline methods, and modules like data transmission were not involved.

4.3.3.1. Summary
• For single-sensor-based fall detection systems, most of the studies used data sets that include simulated falls by young and healthy volunteers. Further work is needed to establish whether such simulated falls can be used to detect genuine falls by the elderly.
• The types of sensors utilized in fall detection systems have changed in the past 6 years. For individual wearable sensors, accelerometers are still the most frequently deployed sensors. Static vision-based devices shifted from RGB to RGB-D cameras.
• Data-driven machine learning and deep learning approaches are gaining more popularity, especially with vision-based systems. Such techniques may, however, be heavier than threshold-based counterparts in terms of computational resources.
• The majority of proposed approaches, especially those that rely on vision-based sensors, work in offline mode as they cannot operate in real-time. While such methods can be effective in terms of detection, their practical use is debatable as the time to respond is crucial.

FIGURE 6 | Different kinds of individual sensors and sensor networks, including vision-based, wearable, and ambient sensors, along with sensor fusion.

5. SENSOR FUSION BY SENSOR NETWORK

5.1. Physiological Sensing Layer (PSL) Using Sensor Fusion
5.1.1. Sensors Deployed in Sensor Networks
In terms of sensor fusion, there are two categories, typically referred to as homogeneous and heterogeneous, which take input from three types of sensors, namely wearable, visual, and ambient sensors, as shown in Figure 6. Sensor fusion involves using multiple and different signals coming from various devices, which may for instance include accelerometers, gyroscopes, magnetometers, and visual sensors, among others. This is all done to complement the strengths of all devices for the design and development of more robust algorithms that can be used to monitor the health of subjects and detect falls (Spasova et al., 2016; Ma et al., 2019).
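A minimal sketch of heterogeneous feature-level fusion is given below: features computed independently from a wearable accelerometer window and from a synchronized depth-camera clip are concatenated into one vector before classification. The specific features, the random forest classifier, and the synthetic data are assumptions for illustration and do not correspond to a particular system in Tables 6, 7.

```python
# Sketch of feature-level sensor fusion: concatenate per-modality feature
# vectors (wearable + vision) and train a single classifier on the result.
# Feature definitions and data are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def accel_features(window):                  # window: (n, 3) accelerometer samples
    mag = np.linalg.norm(window, axis=1)
    return np.array([mag.mean(), mag.std(), mag.max()])

def depth_features(clip):                    # clip: (frames, H, W) depth images
    frame_means = clip.mean(axis=(1, 2))     # crude per-frame summary statistic
    return np.array([frame_means.min(), frame_means.ptp()])

rng = np.random.default_rng(2)
n = 200
accel = rng.normal(size=(n, 128, 3))
depth = rng.normal(size=(n, 16, 32, 32))
labels = rng.integers(0, 2, size=n)          # 1 = fall, 0 = non-fall

# Feature fusion: one concatenated vector per synchronized event window.
X = np.array([np.concatenate([accel_features(a), depth_features(d)])
              for a, d in zip(accel, depth)])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```

Decision fusion, by contrast, would train one classifier per modality and combine their outputs, for example by voting or by weighting their confidence scores.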


TABLE 6 | Fall detection by fusion of wearable sensors from 2014 to 2020.

Fusion within wearable sensors

References Sensor No. subjects (age) Data sets Algorithms Real-time (Alarm) Fusion method Platforms

Kerdjidj et al. (2020) Accelerometer, Gyroscope 17 (N/A) Simulated Compressive sensing Y (N/A) Feature fusion N/A
Xi et al. (2020) Electromyography, Plantar pressure 12 (23–27) Simulated FMMNN, DPK-OMELM Y (Y) Feature fusion N/A
Chelli and Pätzold (2019) Accelerometer, Gyroscope 30 (N/A) Public (Simulated) KNN, ANN, QSVM, EBT Y (N/A) Feature fusion N/A
Queralta et al. (2019) Accelerometer, Gyroscope, Magnetometer 57 (20–47) Public (Simulated) LSTM Y (Y) Feature fusion N/A
Gia et al. (2018) Accelerometer, Gyroscope, Magnetometer 2 (N/A) N/A Threshold Y (Y) Feature fusion N/A
de Quadros et al. (2018) Accelerometer, Gyroscope, Magnetometer 22 (mean = 26.09) Simulated Threshold/ML N/A Feature fusion N/A
Yang et al. (2016) Accelerometer, Gyroscope, Magnetometer 5 (N/A) Simulated SVM Y (Y) Feature fusion PC
Pierleoni et al. (2015) Accelerometer, Gyroscope, Magnetometer 10 (22–29) Simulated Threshold Y (Y) Feature fusion ATmega328p (ATMEL)
Nukala et al. (2014) Accelerometer, Gyroscope 2 (N/A) Simulated ANN Y (N/A) Feature fusion PC
Kumar et al. (2014) Accelerometer, Pressure sensors, Heart rate monitor N/A Simulated Threshold Y (Y) Partial fusion PC
Hsieh et al. (2014) Accelerometer, Gyroscope 3 (N/A) Simulated Threshold N/A Partial fusion N/A

For the visual detection-based approaches, the fusion of signals coming from RGB cameras (Charfi et al., 2012) and RGB-D depth cameras, along with camera arrays, has been studied (Zhang et al., 2014). They showed that such fusion provides more viewpoints of the detected locations, and improves stability and robustness by decreasing false alarms due to occluded falls (Auvinet et al., 2011).

Li et al. (2018) combined accelerometer data from smartphones with Kinect depth data as well as smartphone camera signals. Liu et al. (2014) and Yazar et al. (2014) fused data from infrared sensors with ambient sensors, and data from Doppler and vibration sensors, respectively. Among these, accelerometers and depth cameras (Kinect) are the most frequently studied due to their low cost and effectiveness.

5.1.2. Sensor Networks Platform
Most of the existing IoT platforms, such as Microsoft Azure IoT, IBM Watson IoT Platform, and Google Cloud Platform, have not been used in the deployment of fall detection approaches by sensor fusion. In general, research studies on fall detection using sensor fusion are carried out with offline methods and decision fusion approaches. Therefore, in such studies, there is no need for data transmission and storage modules. From Tables 6, 7, one can also observe that most of the time researchers used their own workstations or personal computers as their platforms, as there was no need for the integration of sensors and real-time analysis for fall detection in offline mode.

Some works, such as those in Kwolek and Kepski (2014), Kepski and Kwolek (2014), and Kwolek and Kepski (2016), applied low-power single-board computer development platforms running Linux, namely the PandaBoard, PandaBoard ES, and A13-OlinuXino. The A13-OlinuXino is an ARM-based single-board computer development platform which runs a Debian Linux distribution. The PandaBoard ES, which is the updated version of the PandaBoard, is a single-board computer development platform running Linux. The PandaBoard ES can run different kinds of Linux-based operating systems, including Android and Ubuntu. It consists of 1 GB of DDR2 SDRAM, dual USB 2.0 ports, as well as wired 10/100 Ethernet along with wireless Ethernet and Bluetooth connectivity. Linux is well-known for real-time embedded platforms since it provides various flexible inter-process communication methods, which is quite suitable for fall detection using sensor fusion.

In the research by Kwolek and Kepski (2014, 2016), the wearable devices and a Kinect were connected to the PandaBoard through Bluetooth and a cable, respectively. Firstly, data was collected by the accelerometers and Kinect sensors individually, and then transmitted and stored on a memory card. The procedure of data transmission is asynchronous, since the accelerometers and the Kinect have different sampling rates. Finally, all data was grouped together and processed by classification models that detected falls. The authors reported high accuracy rates but could not compare with other approaches since there is no benchmark data set.
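The asynchronous grouping step described above can be illustrated with a small timestamp-alignment routine: each depth-camera frame is paired with the temporally nearest accelerometer sample before the fused record is passed to a classifier. The sampling rates and pairing tolerance below are assumptions for illustration, not the values used by Kwolek and Kepski.

```python
# Sketch of aligning two asynchronously sampled streams (e.g., ~100 Hz
# accelerometer and ~30 Hz depth camera) by nearest timestamp before fusion.
# Sampling rates and the pairing tolerance are illustrative assumptions.
import numpy as np

accel_t = np.arange(0.0, 5.0, 1 / 100)          # accelerometer timestamps (s)
accel_v = np.random.default_rng(3).normal(size=accel_t.size)
kinect_t = np.arange(0.0, 5.0, 1 / 30)          # depth-frame timestamps (s)

def align_nearest(ref_times, other_times, other_values, tol=0.02):
    """For each reference timestamp, return the temporally nearest sample
    from the other stream, or NaN if nothing lies within the tolerance."""
    idx = np.searchsorted(other_times, ref_times)
    idx = np.clip(idx, 1, other_times.size - 1)
    prev, nxt = other_times[idx - 1], other_times[idx]
    idx -= (ref_times - prev) < (nxt - ref_times)   # pick the closer neighbor
    aligned = other_values[idx].astype(float)
    aligned[np.abs(other_times[idx] - ref_times) > tol] = np.nan
    return aligned

accel_at_frames = align_nearest(kinect_t, accel_t, accel_v)
print(accel_at_frames[:5])
```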
for data transmission and storage modules. From Tables 6, 7, one data set.
can also observe that most of the time researchers applied their Spasova et al. (2016) applied the A13-OlinuXino board
own workstations or personal computers as their platforms, as as their platform. A standard web camera was connected to
there was no need for the integration of sensors and real-time it via USB and an infrared camera was connected to the
analysis in terms of fall detection in off-line mode. development board via I2C (Inter-Integrated Circuit). Their
Some works, such as those in Kwolek and Kepski (2014), experiment achieved excellent performance with over 97%
Kepski and Kwolek (2014), and Kwolek and Kepski (2016), sensitivity and specificity. They claim that their system can be
applied low-power single-board computer development applied in real-time with hardware of low-cost and open source
platforms running in Linux, namely PandaBoard, PandaBoard software platform.
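The asynchronous acquisition described above means that the recorded streams have to be grouped before classification. A minimal sketch of one common way to do this (nearest-timestamp matching; the function, rates, and variable names are illustrative assumptions, not the authors' actual implementation):

```python
import numpy as np

def nearest_acc_index(acc_t, frame_t):
    """For every camera frame timestamp, return the index of the accelerometer
    sample whose timestamp is closest (acc_t must be sorted, both in seconds)."""
    idx = np.searchsorted(acc_t, frame_t)          # insertion positions
    idx = np.clip(idx, 1, len(acc_t) - 1)
    take_left = (frame_t - acc_t[idx - 1]) < (acc_t[idx] - frame_t)
    return idx - take_left.astype(int)             # nearer neighbour wins

# Example: pair a 30-fps depth stream with a 100-Hz accelerometer stream.
acc_t = np.arange(0.0, 10.0, 1 / 100.0)            # 1,000 accelerometer timestamps
frame_t = np.arange(0.0, 10.0, 1 / 30.0)           # 300 frame timestamps
paired = nearest_acc_index(acc_t, frame_t)         # one accelerometer index per frame
```

Interpolation or buffering over short windows are equally common alternatives; the essential point is that each visual observation ends up associated with inertial samples from (approximately) the same instant.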


TABLE 7 | Fall detection using fusion of sensor networks from 2014 to 2020.

References | Sensor | No. subjects (age) | Data sets | Algorithms | Real-time (Alarm) | Fusion method | Platforms

Fusion within visual sensors and ambient sensors
Espinosa et al. (2019) | Two cameras | 17 (18–24) | Simulated | CNN | N/A (N) | Feature fusion | N/A
Ma et al. (2019) | RGB camera, Thermal camera | 14 (N/A) | Simulated | CNN | N/A (N) | Partial fusion | N/A
Spasova et al. (2016) | Web camera, Infrared sensor | 5 (27–81) | Simulated | SVM | Y (Y) | Partial fusion | A13-OlinuXino

Fusion within different kinds of individual sensors
Martínez-Villaseñor et al. (2019) | Accelerometer, Gyroscope, Ambient light, Electroencephalograph, Infrared sensors, Web cameras | 17 (18–24) | Simulated | Random Forest, SVM, ANN, kNN, CNN | N/A | Feature fusion | N/A
Li et al. (2018) | Accelerometer (smartphone), Kinect | N/A | Simulated | SVM, Threshold | Y (N/A) | Decision fusion | N/A
Daher et al. (2017) | Force sensors, Accelerometers | 6 (N/A) | Simulated | Threshold | N (N/A) | Decision fusion | N/A
Ozcan and Velipasalar (2016) | Camera (smartphone), Accelerometer | 10 (24–30) | Simulated | Histogram of oriented gradients | Y (Y) | Decision fusion | N/A
Kwolek and Kepski (2016) | Accelerometer, Kinect | 5 (N/A) | Simulated | Fuzzy logic | Y (Y) | Feature fusion, Partial fusion | PandaBoard ES
Sabatini et al. (2016) | Barometric altimeters, Accelerometer, Gyroscope | 25 (mean = 28.3) | Simulated | Threshold | N/A (N) | Feature fusion | N/A
Chen et al. (2015) | Kinect, Inertial sensor | 12 (23–30) | Public (Simulated), Ofli et al. (2013) | Collaborative representation | N/A (N) | Feature fusion | N/A
Gasparrini et al. (2015) | Kinect v2, Accelerometer | 11 (22–39) | Simulated | Threshold | N (N/A) | Data fusion | N/A
Kwolek and Kepski (2014) | Accelerometer, Kinect | 5 (N/A) | Public (Simulated), URF (2014) | SVM, k-NN | Y (Y) | Partial fusion | PandaBoard ES
Kepski and Kwolek (2014) | Accelerometer, Kinect | 30 (under 28) | Simulated | Algorithms | Y (N) | Partial fusion | PandaBoard
Liu et al. (2014) | Passive infrared sensor, Doppler radar sensor | 454 (N/A) | Simulated + Real life | SVM | N/A (N) | Decision fusion | N/A
Yazar et al. (2014) | Passive infrared sensors, Vibration sensor | N/A | Simulated | Threshold, SVM | N/A (N) | Decision fusion | N/A

Despite the available platforms mentioned above, the majority of fall detection studies trained their models in an offline mode with a single sensor on personal computers. The studies in Kwolek and Kepski (2014), Kepski and Kwolek (2014), Kwolek and Kepski (2016), and Spasova et al. (2016) utilized single-board computer platforms in their experiments to demonstrate the efficacy of their approaches. The crucial aspects of scalability and efficiency were not addressed, and hence it is difficult to speculate about the appropriateness of their methods in real-world applications. We believe that the future trend is to apply an interdisciplinary approach that deploys the data analysis modules on mature cloud platforms, which can provide a stable and robust environment while meeting the exploding demands of commercial applications.

5.1.3. Subjects and Data Sets
Although some groups devoted their efforts to acquiring data of genuine falls, most researchers used data that contains simulated falls. Monitoring the lives of elderly people and waiting to capture real falls is very sensitive and time consuming. Having said that, with regard to sensor fusion by wearable devices, there have been some attempts to build data sets of genuine falls in real life. FARSEEING (Fall Repository for the design of Smart and self-adaptive Environments prolonging Independent living) is one such data set (Klenk et al., 2016). It is currently the largest data set of genuine falls in real life, and it is open to public research upon request on their website. From 2012 to 2015, more than 2,000 volunteers have been involved, and more than 300 real falls have been collected under the collaboration of six institutions³.

³ 1. Robert-Bosch Hospital (RBMF), Germany; 2. University of Tübingen, Germany; 3. University of Nürnberg/Erlangen, Germany; 4. German Sport University Cologne, Germany; 5. Bethanien-Hospital/Geriatric Center at the University of Heidelberg, Germany; 6. University of Auckland, New Zealand.


TABLE 8 | Comparison of different kinds of communication protocols.

Protocol | ZigBee | Bluetooth | WiFi | WiMax | Cellular network
Range | 100 m | 10 m | 5 km | 15 km | 10–50 km
Data rate | 250–500 kbps | 1–3 Mbps | 1–450 Mbps | 75 Mbps | 240 kbps
Bandwidth | 2.4 GHz | 2.4 GHz | 2.4, 3.7, and 5 GHz | 2.3, 2.5, and 3.5 GHz | 824–894 MHz / 1,900 MHz
Energy consumption | Low | Medium | High | N/A | N/A
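Table 8 can be read together with the data volumes the sensors actually produce. As a rough, back-of-the-envelope check (word sizes and protocol overheads vary per device, so the numbers below are illustrative assumptions), a 3-axis accelerometer sampled at 32 Hz with 16-bit samples generates only about 1.5 kbit/s, which fits comfortably within ZigBee or Bluetooth, whereas streaming raw VGA depth frames at 30 fps needs well over 100 Mbit/s and is only practical over a wired or high-capacity Wi-Fi link:

```python
def required_kbps(sample_rate_hz, channels, bits_per_sample):
    """Raw (payload-only) throughput needed to stream a sensor, in kbit/s."""
    return sample_rate_hz * channels * bits_per_sample / 1000.0

# Wearable IMU stream: 32 Hz, 3 axes, 16-bit samples      -> ~1.5 kbit/s
imu_kbps = required_kbps(32, 3, 16)

# Raw VGA depth video: 30 fps, 640*480 pixels, 16-bit depth -> ~147,000 kbit/s
depth_kbps = required_kbps(30, 640 * 480, 16)

print(f"IMU: {imu_kbps:.1f} kbit/s, depth: {depth_kbps / 1000:.0f} Mbit/s")
# With the figures in Table 8, the IMU stream fits in any of the wireless options,
# while raw depth video exceeds WiMax and most practical Wi-Fi links, which is
# one reason vision sensors are usually cabled to a local gateway.
```

This simple calculation also explains why the wearable-based studies in Table 6 favor Bluetooth and ZigBee, while the vision-based studies in Table 7 favor wired connections.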

As for fusion by visual sensors and combinations of other non-wearable sensors, it becomes quite hard to acquire genuine data in real life. One group tried to collect real data with visual sensors, but only nine real falls by elderly people (Demiris et al., 2008) were captured over several years. The availability of only nine falls is too limited to train a meaningful model. As an alternative, Stone and Skubic (2015) hired trained stunt actors to simulate different kinds of falls and made a benchmark data set with 454 falls, including the 9 real falls by elderly people.

5.2. Local Communication Layer (LCL) Using Sensor Fusion
Data transmission for fall detection using sensor networks can be done in different ways. In particular, Bluetooth (Pierleoni et al., 2015; Yang et al., 2016), Wi-Fi, ZigBee (Hsieh et al., 2014), cellular networks using smart phones (Chen et al., 2018) and smart watches (Kao et al., 2017), as well as wired connections have all been explored. In studies that used wearable devices, most applied wireless methods, such as Bluetooth, which allow the subject to move unrestricted.

Currently, when it comes to wireless sensors, Bluetooth has become probably the most popular communication protocol and it is widely used in existing commercial wearable products such as Shimmer. In the work by Yang et al. (2016), data is transmitted to a laptop in real-time by a Bluetooth module that is built into a commercial wearable device named Shimmer 2R. The sampling rate can be customized, and they chose to work with a 32-Hz sampling rate instead of the default of 51.2 Hz: at high sampling frequencies packet loss can occur, and a higher sampling rate also means higher energy consumption. Bluetooth is also applied to transmit data in non-commercial wearable devices. For example, Pierleoni et al. (2015) customized a wireless sensor node, where a sensor module, micro-controller, Bluetooth module, battery, mass-storage unit, and wireless receiver were integrated within a prototype device of size 70 × 45 × 30 mm. ZigBee was used to transmit data in the work by Hsieh et al. (2014). In Table 8, we compare different kinds of wireless communication protocols.

As for data transmission in vision-based and ambient-based approaches, wired options are usually preferred. In the work by Spasova et al. (2016), a standard web camera was connected to an A13-OlinuXino board via USB and an infrared camera was connected to the development board via I2C (Inter-Integrated Circuit). Data and other messages were exchanged with the smart gateways through the internet.

For sensor fusion using different types of sensors, both wireless and cabled methods were utilized because of the data variety. In the work by Kwolek and Kepski (2014, 2016), wearable devices and a Kinect were connected to the PandaBoard through Bluetooth and cable, respectively. In Li et al. (2018), a Kinect was connected to a PC using a USB interface and smart phones were connected wirelessly. These two types of sensor, smartphone and Kinect, were first used separately to monitor the same events, and the methods that processed their signals sent their output to a Netty server through the Internet, where another method was used to fuse the outcomes of both methods and come to a final decision of whether the involved individual has fallen or not.

In the studies by Kwolek and Kepski (2014, 2016), accelerometers and Kinect cameras were connected to a PandaBoard through Bluetooth and USB connections. The final decision was then made based on the data collected from the two sensors.

5.3. Information Processing Layer (IPL) Using Sensor Fusion
5.3.1. Methods of Sensor Fusion
There are several criteria by which the fusion of different sensors can be grouped. Yang and Yang (2006) and Tsinganos and Skodras (2018) grouped such approaches into three categories, namely direct data fusion, feature fusion, and decision fusion. We divide sensor fusion techniques into four groups as shown in Figure 7, which we refer to as fusion with partial sensors, direct data fusion, feature fusion, and decision fusion.

In partial fusion, although multiple sensors are deployed, only one sensor is used to take the final decision, such as in the work by Ma et al. (2019). They used an RGB and a thermal camera to conduct their experiments, with the thermal camera being used only for the localization of faces. Falls were eventually detected only based on the data collected from the regular RGB cameras. A similar approach was applied by Spasova et al. (2016), where an infrared camera was deployed to confirm the presence of the subject and the data produced by the RGB camera was used to detect falls.


FIGURE 7 | Four kinds of sensor fusion methods including partial fusion, feature fusion, decision fusion, and data fusion. Partial fusion means that a subset of sensors
are deployed to make decisions, while the other types of fusion techniques use all sensors as input.
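As a concrete and deliberately simplified illustration of the partial-fusion scheme in Figure 7, the sketch below arms a camera-based classifier only after a wearable accelerometer reports a candidate impact; the threshold value and the two callables are placeholders rather than the pipeline of any specific system from the literature.

```python
def partial_fusion_step(acc_magnitude_g, grab_frame, classify_frame, impact_g=2.5):
    """One step of a partial-fusion pipeline: the cheap wearable channel gates
    the expensive visual channel, and only the visual channel takes the decision.

    acc_magnitude_g : current acceleration magnitude in g
    grab_frame      : callable returning the latest camera/depth frame
    classify_frame  : callable mapping a frame to True (fall) / False (no fall)
    """
    if acc_magnitude_g < impact_g:      # no candidate impact: camera stays idle
        return False
    frame = grab_frame()                # wake the visual sensor only on demand
    return classify_frame(frame)        # final decision comes from one sensor
```

The same gating idea appears in the systems discussed below, where an accelerometer threshold activates the Kinect and the visual channel then confirms or rejects the fall.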

There are also other works that used wearable devices and deployed the sensors at different stages. For instance, in (Kepski and Kwolek, 2014; Kwolek and Kepski, 2014) a fall detection system was built by utilizing a tri-axial accelerometer and an RGB-D camera. The accelerometer was deployed to detect the motion of the subject; if the measured signal exceeded a given threshold, the Kinect was activated to capture the ongoing event.

The second approach of sensor fusion is known as feature fusion. In such an approach, feature extraction takes place on the signals that come from the different sensors. All features are then merged into long feature vectors and used to train classification models. Most of the studies that we reviewed applied feature fusion for wearable-based fall detection systems. In many commercial wearable devices, sensors like accelerometers, gyroscopes, and magnetometers are built into one device. Data from these sensors is homogeneous and synchronous, sampled at the same frequency, and transmitted with built-in wireless modules. Having signals produce data at a synchronized frequency simplifies the fusion of the data. Statistical features, such as the mean, maximum, standard deviation, correlation, spectral entropy, sum vector magnitude, the angle between the y-axis and the vertical direction, and the differential sum vector magnitude centroid, can be determined from the signals of accelerometers, magnetometers, and gyroscopes, and used as features to train a classification model that can detect different types of falls (Yang et al., 2016; de Quadros et al., 2018; Gia et al., 2018).

Decision fusion is the third approach, where a chain of classifiers is used to come to a decision. A typical arrangement is to have a classification model that takes input from one type of sensor, another model that takes input from another sensor, and in turn the outputs of these two models are used as input to a third classification model that takes the final decision. Li et al. (2018) explored this approach with accelerometers embedded in smart phones and Kinect sensors. Ozcan and Velipasalar (2016) deployed an accelerometer and an RGB camera for the detection of falls. Different sensors, such as accelerometers, RGB and RGB-D cameras, were deployed in these studies. Decisions are made separately based on the individual sensors, and then the final decision is achieved by combining them.

The final approach is data fusion. This is achieved by first fusing the data from the different sensors and then performing feature extraction on the fused data. This is in contrast to feature fusion, where the data from the sensors is homogeneous and synchronous with the same frequency. Data fusion can be applied to sensors with different sampling frequencies and data characteristics, as data from various sensors of different types can be synchronized and combined directly. Because of the difference in sampling rate between the Kinect camera and wearable sensors, it is challenging to conduct feature fusion directly. In order to mitigate this difficulty, the transmission and exposure times of the Kinect camera can be adapted to synchronize the RGB-D data with that of the wearable sensors by ad-hoc acquisition software, as was done by Gasparrini et al. (2015).

Ozcan and Velipasalar (2016) used both partial and feature fusion. They divided the procedure into two stages. In the first stage, only the accelerometer was utilized to indicate a potential fall; the Kinect camera is activated after the accelerometer flags a potential fall. Features from both the Kinect camera and the accelerometer were then extracted to classify activities as fall or non-fall in the second stage.
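A minimal sketch of the feature-fusion recipe described in this subsection follows: window-level statistics are computed per sensor, concatenated into a single vector, and passed to a standard classifier. Scikit-learn is assumed here purely for illustration, and the feature set is much smaller than those used in the surveyed papers.

```python
import numpy as np
from sklearn.svm import SVC

def window_features(window):
    """Per-axis statistics for one window of shape (samples, axes)."""
    magnitude = np.linalg.norm(window, axis=1)
    return np.concatenate([
        window.mean(axis=0), window.std(axis=0), window.max(axis=0),
        [magnitude.mean(), magnitude.max()],
    ])

def fused_vector(acc_window, gyro_window):
    """Feature fusion: statistics from both sensors merged into one long vector."""
    return np.concatenate([window_features(acc_window), window_features(gyro_window)])

# X: fused vectors for many labelled windows, y: 1 = fall, 0 = other activity
# clf = SVC(kernel="rbf").fit(X, y)
# clf.predict([fused_vector(new_acc_window, new_gyro_window)])
```

A decision-fusion variant would instead train one classifier per sensor on its own features and feed the two outputs to a third, final classifier, as in the chain of classifiers described above.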


5.3.2. Machine Learning, Deep Learning, and Deep Reinforcement Learning
In terms of fall detection techniques based on wearable sensor fusion, the explored methods include threshold-based approaches, traditional machine learning, and deep learning. The latter two are the most popular due to their robustness. The research by Chelli and Pätzold (2019) applied both traditional machine learning [kNN, QSVM, Ensemble Bagged Tree (EBT)] and deep learning. Their experiments were divided into two parts, namely activity recognition and fall detection. For the former, traditional machine learning and deep learning outperformed the other approaches, with 94.1 and 93.2% accuracy, respectively. Queralta et al. (2019) applied a long short-term memory (LSTM) approach, where wearable nodes including an accelerometer, gyroscope, and magnetometer were embedded in a low-power wide-area network with combined edge and fog computing. The LSTM algorithm is a type of recurrent neural network aimed at solving long sequence learning tasks (a minimal sketch of such a model is given at the end of this section). Their system achieved an average recall of 95% while providing a real-time fall detection solution running on cloud platforms. Another example is the work by Nukala et al. (2014), who fused the measurements of accelerometers and gyroscopes and applied an Artificial Neural Network (ANN) for the modeling of fall detection.

As for visual sensor based fusion techniques, the limited studies that were included in our survey applied either traditional machine learning or deep learning approaches (Espinosa et al., 2019; Ma et al., 2019). Fusion of multiple visual sensors from a public data set was presented by Espinosa et al. (2019), where a 2D CNN was trained to classify falls during daily life activities.

Another approach is reinforcement learning (RL), which is a growing branch of machine learning and is gaining popularity in the fall detection field as well. Deep reinforcement learning (DRL) combines the advantages of deep learning and reinforcement learning, and has already shown its benefits in fall prevention (Namba and Yamada, 2018a,b; Yang, 2018) and fall detection (Yang, 2018). Namba and Yamada (2018a) proposed a fall risk prevention approach using assistive robots for the elderly living independently, in which images and movies with the location information of accidents were collected. Most conventional machine learning and deep learning methods are, however, challenged when the operational environment changes. This is due to their data-driven nature, which allows them to become robust mostly in the same environments where they were trained.

5.3.3. Data Storage and Analysis
Typical data storage options include SD cards, local storage on the integration device, or remote storage on the cloud. For example, some studies used the camera and accelerometer in smartphones and stored the data on the local storage of the smartphones (Ozcan and Velipasalar, 2016; Shi et al., 2016; Medrano et al., 2017). Other studies applied off-line methods and stored data on their own computers, where it could be processed at a later stage. Alamri et al. (2013) argue that the sensor-cloud will become the future trend, because cloud platforms can be more open and more flexible than local platforms, which have limited storage and processing power.

5.4. User Application Layer (UAL) of Sensor Fusion
Due to the rapid development of miniature bio-sensing devices, there has been a booming development of wearable sensors and other fall detection modules. Wearable modules, such as Shimmer, embedded with sensing elements, communication protocols, and sufficient computational ability, are available as affordable commercial products. For example, some wearable-based applications have been applied to the detection of falls and to monitoring health in general. The goal for wearable devices is to "wear and forget"; examples include electronic skins (e-skins) that adhere to the body surface, and clothing-based or accessory-based devices for which proximity is sufficient. To fulfill the target of wearing and forgetting, many efforts have been put into the study of wearable systems, such as the My Heart project (Habetha, 2006), the Wearable Health Care System (WEALTHY) project (Paradiso et al., 2005), the Medical Remote Monitoring of clothes (MERMOTH) project (Luprano, 2006), and the project by Pandian et al. (2008). Some wearable sensors have also been developed specifically to address fall detection; Shibuya et al. (2015) used a wearable wireless gait sensor for the detection of falls. More and more research works use existing commercial wearable products, which include functions for data transmission and for sending alarms when falls are detected.

5.4.1. Summary
• Owing to differences in sampling frequency and data characteristics, there are two main categories of sensor fusion. As shown in Tables 6, 7, studies on sensor fusion are divided into fusion of sensors from the same category (e.g., fusion of wearable sensors, fusion of visual sensors, and fusion of ambient sensors) and fusion of sensors from different categories.
• Subjects in fall detection studies using sensor networks are still young and healthy volunteers, which is similar to the situation for individual sensors. Only one study adopted mixed data containing both simulated and genuine falls.
• More wearable-based approaches than vision-based ones are embedded in IoT platforms, because data transmission and storage modules are built into existing commercial products.
• For research combining sensors from different categories, the combination of an accelerometer and a Kinect camera is the most popular method.
• Partial fusion, data fusion, feature fusion, and decision fusion are the four main methods of sensor fusion. Among them, feature fusion is the most popular approach, followed by decision fusion. For fusion using non-vision wearable sensors, most of the studies that we reviewed applied feature fusion, while decision fusion is the most appealing one for fusing sensors from different categories.
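Before turning to security and privacy, the sketch below makes the sequence-modelling idea from section 5.3.2 concrete: a small LSTM over fixed-length windows of fused inertial samples. Keras is assumed only for illustration; the layer sizes, window length, and channel count are placeholders and do not reproduce the architecture of Queralta et al. (2019).

```python
import tensorflow as tf

WINDOW = 128   # samples per window (placeholder)
CHANNELS = 9   # e.g., 3-axis accelerometer + gyroscope + magnetometer

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.LSTM(32),                         # summarises the whole window
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(fall)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, ...) with X_train of shape (num_windows, WINDOW, CHANNELS)
```

Windowed recurrent models of this kind accept the fused multi-channel stream directly, which is one reason they pair naturally with the feature- and data-fusion schemes discussed above.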


6. SECURITY AND PRIVACY

Because the data generated by autonomous monitoring systems is security-critical and privacy-sensitive, there is an urgent demand to protect users' privacy and to prevent these systems from being attacked. Cyberattacks on autonomous monitoring systems may cause physical or mental damage and can even threaten the lives of the subjects under monitoring.

6.1. Security
In this survey we approached fall detection systems from different layers, including the Physiological Sensing Layer (PSL), Local Communication Layer (LCL), Information Processing Layer (IPL), Internet Application Layer (IAL), and User Application Layer (UAL). Every layer faces security issues. For instance, information may leak in the LCL during data transmission, along with potential vulnerabilities in cloud storage and processing facilities. Based on the literature that we report in Tables 3–7, most of the studies in the field of fall detection do not address security matters; only a few studies (Edgcomb and Vahid, 2012; Mastorakis and Makris, 2014; Ma et al., 2019) take privacy into consideration. Because of the distinct characteristics of wired and wireless transmission, it is still an open problem to find a comprehensive security protocol which can cover the security issues in both wired and wireless data transmission and storage (Islam et al., 2015).

6.2. Privacy
As mentioned above, privacy is one of the most important issues for users of autonomous health monitoring systems. Methods to protect privacy are dependent on the type of sensor used, and not all sensors suffer from privacy issues equally. For example, vision-based sensors, like RGB cameras, are more vulnerable than wearable sensors, such as accelerometers, in terms of privacy. In the case of a detection system that uses only wearable sensors, privacy problems are not as critical as in systems that involve visual sensors.

In order to address the privacy concerns associated with RGB cameras, some researchers proposed to mitigate them by blurring and distorting the appearances as post-processing steps in the application layer (Edgcomb and Vahid, 2012). An alternative is to address the privacy issue in the design stage, as suggested by Ma et al. (2019). They investigated an optical-level anonymous image sensing system. A thermal camera was deployed to locate faces and an RGB camera was used to detect falls. The location of the subject's face was used to generate a mask pattern on a spatial light modulator to control the light entering the RGB camera. The faces of subjects were blurred by blocking the visible light rays using the mask pattern on the spatial light modulator.

The infrared camera is another sensor which can protect the privacy of subjects. Mastorakis and Makris (2014) investigated an infrared camera built into a Kinect sensor; it only captures the thermal distribution of subjects, and no information on the subject's appearance or living environment is involved. Other vision-based sensors which can protect privacy are depth cameras. The fact that they only capture depth information has made them more popular than RGB cameras.

As for research on fall detection using sensor networks, different kinds of data are collected when more sensors are involved. Because more data collection and transfer is involved, the whole fall detection system by sensor fusion becomes more complicated, which makes the protection of privacy and security even harder. There is a trade-off between privacy and the benefits of autonomous monitoring systems. The aim is to keep improving the algorithms while keeping the privacy and security issues to a minimum. This is the only way to make such systems socially acceptable.

7. PROJECTS AND APPLICATIONS AROUND FALL DETECTION

Approaches to fall detection have evolved from personal emergency response systems (PERS) to intelligent automatic ones. One of the early fall detection systems sent an alarm via a PERS push-button, but such systems may fail when the concerned person loses consciousness or is too weak to move (Leff, 1997). Numerous attempts have been made to monitor not only falls but also other specific activities in autonomous health monitoring, and many projects have been conducted to develop applications of autonomous health monitoring, including fall detection, prediction, and prevention. Some of the aforementioned studies were promoted as commercial products. Different sensors, from wearable sensors to visual and ambient sensors, are deployed in commercial applications for fall detection. Among them, wearable sensors have most often been developed into useful applications. For example, a company named Shimmer has developed seven kinds of wearable sensing products aimed at autonomous health monitoring. One of the products is the Shimmer3 IMU Development Kit, a wearable sensor node including a sensing module, data transmission module, and receiver, which has been used by Mahmud and Sirat (2015) and Djelouat et al. (2017). The iLife fall detection sensor is developed by AlertOne⁴, which provides a fall detection service and a one-button alert system. Smartwatches are another commercial solution for fall detection: accelerometers embedded in smartwatches have been studied to detect falls (Kao et al., 2017; Wu et al., 2019). Moreover, Apple Watch Series 4 and later versions are equipped with a fall detection function that can help the consumer connect to the emergency services. Although there are few specific commercial fall detection products based on RGB cameras, the relevant studies also show a promising future for the field. There are open-source solutions provided by Microsoft using the Kinect which can detect falls in real time and have the potential to be deployed as commercial products. As for ambient sensors, Linksys Aware applies tri-band mesh WiFi systems to fall detection, and they provide a premium subscription service as a commercial motion detection product. CodeBlue, a Harvard University research project, also focused on developing wireless sensor networks for medical applications (Lorincz et al., 2004). The MIThril project (DeVaul et al., 2003) is a next-generation wearable research platform developed by researchers at the MIT Media Lab; they made their software open source and their hardware specifications available to the public.

The Ivy project (Pister et al., 2003) is a sensor network infrastructure from the Berkeley College of Engineering, University of California. The project aims to develop a sensor network system to provide assistance to the elderly living independently. Using a sensor network with fixed sensors and mobile sensors worn on the body, anomalies involving the concerned elderly can be detected; once falls are detected, the system sends alarms to caregivers so that they can respond urgently.

⁴ https://www.alert-1.com/


A sensor network was built in 13 apartments in TigerPlace, an aging-in-place retirement community in Columbia, Missouri, and continuous data was collected for 3,339 days (Demiris et al., 2008). The sensor network, comprising simple motion sensors, video sensors, and bed sensors that capture sleep restlessness as well as pulse and respiration levels, was installed in the apartments of 14 volunteers. The activities of 16 elderly people in TigerPlace, whose ages range from 67 to 97, were recorded continuously and 9 genuine falls were captured. Based on this data set, Li et al. (2013) developed a sensor fusion algorithm which achieved a low rate of false alarms and a high detection rate.

8. TRENDS AND OPEN CHALLENGES

8.1. Trends
8.1.1. Sensor Fusion
There seems to be a general consensus that sensor fusion provides a more robust approach for the detection of elderly falls. The use of various sensors may complement each other in different situations. Thus, instead of relying on only one sensor, which may be unreliable if the conditions are not suitable for that sensor, the idea is to rely on different types of sensor that together can capture reliable data in various conditions. This results in a more robust system that can keep false alarms to a minimum while achieving high precision.

8.1.2. Machine Learning, Deep Learning and Deep Reinforcement Learning
Conventional machine learning approaches have been widely applied in fall detection and activity recognition, and their results outperform those of threshold-based methods in studies that use wearable sensors. Deep learning is a subset of machine learning concerned with artificial neural networks inspired by the mammalian brain. Deep learning approaches are gaining popularity, especially for visual sensors and sensor fusion, and are becoming the state of the art for fall detection and other activity recognition. Deep reinforcement learning is another promising research direction for fall detection. Reinforcement learning is inspired by the psychological and neuro-scientific understanding of how humans adapt and optimize decisions in a changing environment. Deep reinforcement learning combines the advantages of deep learning and reinforcement learning, and can provide detection alternatives that adapt to changing conditions without sacrificing accuracy and robustness.

8.1.3. Fall Detection Systems on 5G Wireless Networks
5G is a softwarized and virtualized wireless network, which includes both a physical network and software virtual network functions. In comparison to 4G networks, fifth-generation mobile introduces data transmission with high speed and low latency, which can contribute to the development of fall detection by IoT systems. Firstly, 5G is envisioned to become an important and universal communication protocol for IoT. Secondly, 5G cellular can be used for passive sensing approaches. Different from other kinds of RF-sensing approaches (e.g., WiFi or radar), which are aimed at short-distance indoor fall detection, the 5G wireless network can be applied to both indoor and outdoor scenarios as a pervasive sensing method. This type of network has already been successfully investigated by Gholampooryazdi et al. (2017) for crowd-size detection, presence detection, and walking speed estimation, and their experiments showed accuracies of 80.9, 92.8, and 95%, respectively. Thirdly, we expect that 5G as a network is going to become a highly efficient and accurate platform to achieve better performance of anomaly detection. Smart networks or systems powered by 5G IoT and deep learning can be applied not only in fall detection systems, but also in other pervasive sensing and smart monitoring systems which assist elderly groups to live independently with a high quality of life.

8.1.4. Personalized or Simulated Data
El-Bendary et al. (2013) and Namba and Yamada (2018b) have proposed to include historical medical and behavioral data of individuals along with sensor data. This allows the enrichment of the data and consequently better informed decisions. This innovative perspective allows a more personalized approach as it uses the health profile of the concerned individual, and it has the potential to become a trend in this field as well. Another trend could be the way data sets are created to evaluate fall detection systems. Mastorakis et al. (2007, 2018) applied the skeletal model simulated in OpenSim, an open-source software package developed by Stanford University that can simulate different kinds of pre-defined skeletal models. They acquired 132 videos of different types of falls, and trained their own algorithms based on those models. The high results that they report indicate that the falls simulated with OpenSim are very realistic and, therefore, effective for training a fall detection model. Physics engines, like OpenSim, can simulate customized data based on the height and age of different subjects, which offers the possibility of new directions for detecting falls. Another solution, which can potentially address the scarcity of data, is to develop algorithms that can be adapted to subjects that were not part of the original training set (Deng et al., 2014; Namba and Yamada, 2018a,b), as we described in section 4.1.4.

8.1.5. Fog Computing
As far as architecture is concerned, fog computing offers the possibility to distribute different levels of processing across the involved edge devices in a decentralized way. Smart devices that can carry out some processing and that can communicate directly with each other are more attractive for (near) real-time processing than systems based on cloud computing (Queralta et al., 2019). An example of such smart devices is the Intel® RealSense™ depth camera, which includes a 28 nanometer (nm) processor to compute real-time depth images.

8.2. Open Challenges
The topic of fall detection has been studied extensively during the past two decades and many attempts have been proposed. The rapid development of new technologies keeps this topic very active in the research community. Although much progress has been made, there are still various open challenges, which we discuss below.


1. The rarity of data of real falls: There is no convincing public data set which could provide a gold standard. Many simulated data sets acquired with individual sensors are available, but it is debatable whether models trained on data collected from young and healthy subjects can be applied to elderly people in real-life scenarios. To the best of our knowledge, only Liu et al. (2014) used a data set with nine real falls along with 445 simulated ones. Data sets with multiple sensors are even scarcer. There is, therefore, an urgent need to create a benchmark data set with data coming from multiple sensors.
2. Detection in real-time: The attempts that we have seen in the literature are all based on offline methods that detect falls. While this is an important step, it is time that research starts focusing more on real-time systems that can be applied in the real world.
3. Security and privacy: We have seen little attention given to the security and privacy concerns of fall detection approaches. Security and privacy is therefore another topic which, in our opinion, must be addressed in cohesion with fall detection methods.
4. Platform of sensor fusion: This is still a young topic with a lot of potential. Studies so far have treated it minimally, as they mostly focused on the analytics aspect of the problem. In order to bring solutions closer to the market, more holistic studies are needed to develop full information systems that can deal with the management and transmission of data in an efficient, effective and secure way.
5. Limitation of location: Some sensors, such as visual ones, have limited capability because they are fixed and static. It is necessary to develop fall detection systems which can be applied to controlled (indoor) and uncontrolled (outdoor) environments.
6. Scalability and flexibility: With the increasing number of affordable sensors there is a crucial necessity to study the scalability of fall detection systems, especially when inhomogeneous sensors are considered (Islam et al., 2015). There is an increasing demand for scalable fall detection approaches that do not sacrifice robustness or security. When considering cloud-based trends, fall detection modules, such as data transmission, processing, applications, and services, should be configurable and scalable in order to adapt to the growth of commercial demands. Cloud-based systems enable more scalability of health monitoring systems at different levels, as the need for resources at both the hardware and software level changes with time. Cloud-based systems can add or remove sensors and services with little effort on the architecture (Alamri et al., 2013).

9. SUMMARY AND CONCLUSIONS

In this review we give an account of fall detection systems from a holistic point of view that includes data collection, data management, data transmission, security and privacy, as well as applications.

In particular, we compare approaches that rely on individual sensors with those that are based on sensor networks with various fusion techniques. The survey provides a description of the components of fall detection and aims to give a comprehensive understanding of the physical elements, software organization, working principles, techniques, and arrangement of the different components that concern fall detection systems. We draw the following conclusions.

1. The sensors and algorithms proposed during the past 6 years are very different in comparison to the research before 2014. Accelerometers are still the most popular sensors in wearable devices, while the Kinect took the place of the RGB camera and became the most popular visual sensor. The combination of Kinect and accelerometer is turning out to be the most sought after.
2. There is not yet a benchmark data set on which fall detection systems can be evaluated and compared. This creates a hurdle in advancing the field. Although there has been an attempt to use middle-aged subjects to simulate falls (Kangas et al., 2008), there are still differences in behavior between elderly and middle-aged subjects.
3. Sensor fusion seems to be the way forward. It provides more robust solutions for fall detection systems but comes with higher computational costs when compared to those that rely on individual sensors. The challenge is therefore to mitigate the computational costs.
4. Existing studies focus mainly on the data analytics aspect and do not give much attention to IoT platforms in order to build full and stable systems. Moreover, the effort is put on analyzing data in offline mode. In order to bring such systems to the market, more effort needs to be invested in building all the components that make a robust, stable, and secure system that allows (near) real-time processing and that gains the trust of elderly people.

The detection of elderly falls is an example of the potential of autonomous health monitoring systems. While the focus here was on elderly people, the same or similar systems can be applicable to people with mobility problems. With the ongoing development of IoT devices, autonomous health monitoring and assistance systems that rely on such devices seem to be the key for the detection of early signs of physical and cognitive problems that can range from cardiovascular issues to mental disorders, such as Alzheimer's and dementia.

AUTHOR CONTRIBUTIONS

GA and XW conceived and planned the paper. XW wrote the manuscript in consultation with GA and JE. All authors listed in this paper have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

FUNDING

XW holds a fellowship (grant number: 201706340160) from the China Scholarship Council supplemented by the University of Groningen. The support provided by the China Scholarship Council (CSC) during the study at the University of Groningen is acknowledged.


REFERENCES Chen, K.-H., Hsu, Y.-W., Yang, J.-J., and Jaw, F.-S. (2017b).
Enhanced characterization of an accelerometer-based fall detection
(2011). Sdufall. Available online at: http://www.sucro.org/homepage/wanghaibo/ algorithm using a repository. Instrument. Sci. Technol. 45, 382–391.
SDUFall.html doi: 10.1080/10739149.2016.1268155
(2014). Urfd. Available online at: https://sites.google.com/view/haibowang/home Chen, K.-H., Hsu, Y.-W., Yang, J.-J., and Jaw, F.-S. (2018). Evaluating
Abbate, S., Avvenuti, M., Bonatesta, F., Cola, G., Corsini, P., and Vecchio, A. the specifications of built-in accelerometers in smartphones on
(2012). A smartphone-based fall detection system. Pervas. Mobile Comput. 8, fall detection performance. Instrument. Sci. Technol. 46, 194–206.
883–899. doi: 10.1016/j.pmcj.2012.08.003 doi: 10.1080/10739149.2017.1363054
Adhikari, K., Bouchachia, H., and Nait-Charif, H. (2017). “Activity recognition Chua, J.-L., Chang, Y. C., and Lim, W. K. (2015). A simple vision-based fall
for indoor fall detection using convolutional neural network,” in 2017 Fifteenth detection technique for indoor video surveillance. Signal Image Video Process.
IAPR International Conference on Machine Vision Applications (MVA) (Nagoya: 9, 623–633. doi: 10.1007/s11760-013-0493-7
IEEE), 81–84. doi: 10.23919/MVA.2017.7986795 Daher, M., Diab, A., El Najjar, M. E. B., Khalil, M. A., and Charpillet, F. (2017).
Akagündüz, E., Aslan, M., Şengür, A., Wang, H., and İnce, M. C. (2017). Silhouette Elder tracking and fall detection system using smart tiles. IEEE Sens. J. 17,
orientation volumes for efficient fall detection in depth videos. IEEE J. Biomed. 469–479. doi: 10.1109/JSEN.2016.2625099
Health Inform. 21, 756–763. doi: 10.1109/JBHI.2016.2570300 Dai, J., Bai, X., Yang, Z., Shen, Z., and Xuan, D. (2010). “PerfallD: a
Alamri, A., Ansari, W. S., Hassan, M. M., Hossain, M. S., Alelaiwi, A., and Hossain, pervasive fall detection system using mobile phones,” in 2010 8th IEEE
M. A. (2013). A survey on sensor-cloud: architecture, applications, and International Conference on Pervasive Computing and Communications
approaches. Int. J. Distribut. Sensor Netw. 9, 917923. doi: 10.1155/2013/917923 Workshops (PERCOM Workshops) (Mannheim: IEEE), 292–297.
Amini, A., Banitsas, K., and Cosmas, J. (2016). “A comparison between heuristic de Araújo, Í. L., Dourado, L., Fernandes, L., Andrade, R. M. C., and Aguilar, P. A.
and machine learning techniques in fall detection using kinect v2,” in 2016 IEEE C. (2018). “An algorithm for fall detection using data from smartwatch,” in 2018
International Symposium on Medical Measurements and Applications (MeMeA) 13th Annual Conference on System of Systems Engineering (SoSE) (Paris: IEEE),
(Benevento: IEEE), 1–6. doi: 10.1109/MeMeA.2016.7533763 124–131. doi: 10.1109/SYSOSE.2018.8428786
Aslan, M., Sengur, A., Xiao, Y., Wang, H., Ince, M. C., and Ma, X. (2015). Shape de Quadros, T., Lazzaretti, A. E., and Schneider, F. K. (2018). A movement
feature encoding via fisher vector for efficient fall detection in depth-videos. decomposition and machine learning-based fall detection system using
Applied Soft. Comput. 37, 1023–1028. doi: 10.1016/j.asoc.2014.12.035 wrist wearable device. IEEE Sens. J. 18, 5082–5089. doi: 10.1109/JSEN.2018.
Auvinet, E., Multon, F., Saint-Arnaud, A., Rousseau, J., and Meunier, J. (2011). 2829815
Fall detection with multiple cameras: an occlusion-resistant method based on Demiris, G., Hensel, B. K., Skubic, M., and Rantz, M. (2008). Senior residents’
3-D silhouette vertical distribution. IEEE Trans. Inform. Technol. Biomed. 15, perceived need of and preferences for “smart home” sensor technologies. Int.
290–300. doi: 10.1109/TITB.2010.2087385 J. Technol. Assess. Health Care 24, 120–124. doi: 10.1017/S0266462307080154
Aziz, O., Musngi, M., Park, E. J., Mori, G., and Robinovitch, S. N. (2017). Deng, W.-Y., Zheng, Q.-H., and Wang, Z.-M. (2014). Cross-person activity
A comparison of accuracy of fall detection algorithms (threshold-based vs. recognition using reduced kernel extreme learning machine. Neural Netw. 53,
machine learning) using waist-mounted tri-axial accelerometer signals from a 1–7. doi: 10.1016/j.neunet.2014.01.008
comprehensive set of falls and non-fall trials. Med. Biol. Eng. Comput. 55, 45–55. DeVaul, R., Sung, M., Gips, J., and Pentland, A. (2003). “Mithril 2003:
doi: 10.1007/s11517-016-1504-y applications and architecture,” in Null (White Plains, NY: IEEE), 4.
Bian, Z.-P., Hou, J., Chau, L.-P., and Magnenat-Thalmann, N. (2015). Fall doi: 10.1109/ISWC.2003.1241386
detection based on body part tracking using a depth camera. IEEE J. Biomed. Diraco, G., Leone, A., and Siciliano, P. (2010). “An active vision system for
Health Inform. 19, 430–439. doi: 10.1109/JBHI.2014.2319372 fall detection and posture recognition in elderly healthcare,” in 2010 Design,
Bloom, D. E., Boersch-Supan, A., McGee, P., and Seike, A. (2011). Population Automation & Test in Europe Conference & Exhibition (DATE 2010) (Dresden:
aging: facts, challenges, and responses. Benefits Compens. Int. 41, 22. IEEE), 1536–1541. doi: 10.1109/DATE.2010.5457055
Boulard, L., Baccaglini, E., and Scopigno, R. (2014). “Insights into the role of Djelouat, H., Baali, H., Amira, A., and Bensaali, F. (2017). “CS-based fall
feedbacks in the tracking loop of a modular fall-detection algorithm,” in 2014 detection for connected health applications,” in 2017 Fourth International
IEEE Visual Communications and Image Processing Conference (Valletta: IEEE), Conference on Advances in Biomedical Engineering (ICABME) (Beirut: IEEE),
406–409. doi: 10.1109/VCIP.2014.7051592 1–4. doi: 10.1109/ICABME.2017.8167540
Bourke, A., O’brien, J., and Lyons, G. (2007). Evaluation of a threshold- Edgcomb, A., and Vahid, F. (2012). Privacy perception and fall detection accuracy
based tri-axial accelerometer fall detection algorithm. Gait Post. 26, 194–199. for in-home video assistive monitoring with privacy enhancements. ACM
doi: 10.1016/j.gaitpost.2006.09.012 SIGHIT Rec. 2, 6–15. doi: 10.1145/2384556.2384557
Bourke, A. K., and Lyons, G. M. (2008). A threshold-based fall-detection El-Bendary, N., Tan, Q., Pivot, F. C., and Lam, A. (2013). Fall detection and
algorithm using a bi-axial gyroscope sensor. Med. Eng. Phys. 30, 84–90. prevention for the elderly: a review of trends and challenges. Int. J. Smart Sens.
doi: 10.1016/j.medengphy.2006.12.001 Intell. Syst. 6. doi: 10.21307/ijssis-2017-588
Cai, Z., Han, J., Liu, L., and Shao, L. (2017). RGB-D datasets using Microsoft Espinosa, R., Ponce, H., Gutiérrez, S., Martínez-Villaseñor, L., Brieva, J.,
Kinect or similar sensors: a survey. Multimedia Tools Appl. 76, 4313–4355. and Moya-Albor, E. (2019). A vision-based approach for fall detection
doi: 10.1007/s11042-016-3374-6 using multiple cameras and convolutional neural networks: a case study
Charfi, I., Miteran, J., Dubois, J., Atri, M., and Tourki, R. (2012). Definition and using the up-fall detection dataset. Comput. Biol. Med. 115:103520.
performance evaluation of a robust SVM based fall detection solution. SITIS doi: 10.1016/j.compbiomed.2019.103520
12, 218–224. doi: 10.1109/SITIS.2012.155 Feng, W., Liu, R., and Zhu, M. (2014). Fall detection for elderly person care in a
Chaudhuri, S., Thompson, H., and Demiris, G. (2014). Fall detection devices and vision-based home surveillance environment using a monocular camera. Signal
their use with older adults: a systematic review. J. Geriatr. Phys. Ther. 37, 178. Image Video Process. 8, 1129–1138. doi: 10.1007/s11760-014-0645-4
doi: 10.1519/JPT.0b013e3182abe779 Gasparrini, S., Cippitelli, E., Gambi, E., Spinsante, S., Wåhslén, J., Orhan, I.,
Chelli, A., and Pätzold, M. (2019). A machine learning approach for fall et al. (2015). “Proposal and experimental evaluation of fall detection solution
detection and daily living activity recognition. IEEE Access 7, 38670–38687. based on wearable and depth data fusion,” in International Conference on ICT
doi: 10.1109/ACCESS.2019.2906693 Innovations (Ohrid: Springer), 99–108. doi: 10.1007/978-3-319-25733-4_11
Chen, C., Jafari, R., and Kehtarnavaz, N. (2015). “UTD-MHAD: a multimodal Gasparrini, S., Cippitelli, E., Spinsante, S., and Gambi, E. (2014). A depth-
dataset for human action recognition utilizing a depth camera and a wearable based fall detection system using a kinect R sensor. Sensors 14, 2756–2775.
inertial sensor,” in 2015 IEEE International Conference on Image Processing doi: 10.3390/s140202756
(ICIP) (Quebec City: IEEE), 168–172. doi: 10.1109/ICIP.2015.7350781 Gharghan, S., Mohammed, S., Al-Naji, A., Abu-AlShaeer, M., Jawad, H., Jawad, A.,
Chen, C., Jafari, R., and Kehtarnavaz, N. (2017a). A survey of depth and inertial et al. (2018). Accurate fall detection and localization for elderly people based on
sensor fusion for human action recognition. Multimedia Tools Appl. 76, neural network and energy-efficient wireless sensor network. Energies 11, 2866.
4405–4425. doi: 10.1007/s11042-015-3177-1 doi: 10.3390/en11112866


Gholampooryazdi, B., Singh, I., and Sigg, S. (2017). “5G ubiquitous sensing: Kumar, D. P., Yun, Y., and Gu, I. Y.-H. (2016). “Fall detection in RGB-D
passive environmental perception in cellular systems,” in 2017 IEEE videos by combining shape and motion features,” in 2016 IEEE International
86th Vehicular Technology Conference (VTC-Fall) (Toronto: IEEE), 1–6. Conference on Acoustics, Speech and Signal Processing (ICASSP) (Shanghai:
doi: 10.1109/VTCFall.2017.8288261 IEEE), 1337–1341. doi: 10.1109/ICASSP.2016.7471894
Gia, T. N., Sarker, V. K., Tcarenko, I., Rahmani, A. M., Westerlund, T., Liljeberg, P., Kumar, S. V., Manikandan, K., and Kumar, N. (2014). “Novel fall detection
et al. (2018). Energy efficient wearable sensor node for iot-based fall detection algorithm for the elderly people,” in 2014 International Conference on Science
systems. Microprocess. Microsyst. 56, 34–46. doi: 10.1016/j.micpro.2017.10.014 Engineering and Management Research (ICSEMR) (Shanghai: IEEE), 1–3.
Guo, B., Zhang, Y., Zhang, D., and Wang, Z. (2019). Special issue on device- doi: 10.1109/ICSEMR.2014.7043578
free sensing for human behavior recognition. Pers. Ubiquit. Comput. 23, 1–2. Kwolek, B., and Kepski, M. (2014). Human fall detection on embedded platform
doi: 10.1007/s00779-019-01201-8 using depth maps and wireless accelerometer. Comput. Methods Programs
Habetha, J. (2006). “The myheart project-fighting cardiovascular diseases by Biomed. 117, 489–501. doi: 10.1016/j.cmpb.2014.09.005
prevention and early diagnosis,” in Engineering in Medicine and Biology Society, Kwolek, B., and Kepski, M. (2016). Fuzzy inference-based fall detection using
2006. EMBS’06. 28th Annual International Conference of the IEEE (New York, kinect and body-worn accelerometer. Appl. Soft Comput. 40, 305–318.
NY: IEEE), 6746–6749. doi: 10.1109/IEMBS.2006.260937 doi: 10.1016/j.asoc.2015.11.031
Han, Q., Zhao, H., Min, W., Cui, H., Zhou, X., Zuo, K., et al. (2020). A two-stream LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature 521, 436–444.
approach to fall detection with mobileVGG. IEEE Access 8, 17556–17566. doi: 10.1038/nature14539
doi: 10.1109/ACCESS.2019.2962778 Leff, B. (1997). Persons found in their homes helpless or dead. J. Am. Geriatr. Soc.
Hao, Z., Duan, Y., Dang, X., and Xu, H. (2019). “KS-fall: Indoor human fall 45, 393–394. doi: 10.1111/j.1532-5415.1997.tb03788.x
detection method under 5GHZ wireless signals,” in IOP Conference Series: Li, Q., Stankovic, J. A., Hanson, M. A., Barth, A. T., Lach, J., and Zhou, G. (2009).
Materials Science and Engineering, Vol. 569 (Sanya: IOP Publishing), 032068. “Accurate, fast fall detection using gyroscopes and accelerometer-derived
doi: 10.1088/1757-899X/569/3/032068 posture information,” in 2009 Sixth International Workshop on Wearable
Hori, T., Nishida, Y., Aizawa, H., Murakami, S., and Mizoguchi, H. (2004). “Sensor and Implantable Body Sensor Networks (Berkeley, CA: IEEE), 138–143.
network for supporting elderly care home,” in Sensors, 2004, Proceedings of IEEE doi: 10.1109/BSN.2009.46
(Vienna: IEEE), 575–578. doi: 10.1109/ICSENS.2004.1426230 Li, X., Nie, L., Xu, H., and Wang, X. (2018). “Collaborative fall detection using
Hsieh, S.-L., Chen, C.-C., Wu, S.-H., and Yue, T.-W. (2014). “A wrist-worn smart phone and kinect,” in Mobile Networks and Applications, eds H. Janicke,
fall detection system using accelerometers and gyroscopes,” in Proceedings of D. Katsaros, T. J. Cruz, Z. M. Fadlullah, A.-S. K. Pathan, K. Singh et al.
the 11th IEEE International Conference on Networking, Sensing and Control (Springer), 1–14. doi: 10.1007/s11036-018-0998-y
(Miami: IEEE), 518–523. doi: 10.1109/ICNSC.2014.6819680 Li, Y., Banerjee, T., Popescu, M., and Skubic, M. (2013). “Improvement of acoustic
Huang, Y., Chen, W., Chen, H., Wang, L., and Wu, K. (2019). “G-fall: device- fall detection using kinect depth sensing,” in 2013 35th Annual International
free and training-free fall detection with geophones,” in 2019 16th Annual Conference of the IEEE Engineering in medicine and biology society (EMBC)
Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Copyright © 2020 Wang, Ellul and Azzopardi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
