
Future Generation Computer Systems 81 (2018) 307–313

Contents lists available at ScienceDirect

Future Generation Computer Systems


journal homepage: www.elsevier.com/locate/fgcs

A robust human activity recognition system using smartphone sensors and deep learning
Mohammed Mehedi Hassan a,*, Md. Zia Uddin b, Amr Mohamed c, Ahmad Almogren a

a College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
b Department of Informatics, University of Oslo, Norway
c Department of Electrical Engineering, University of British Columbia, Canada

highlights

• A smartphone inertial sensors-based approach for human activity recognition.
• Uses a deep learning-based solution for successful activity recognition.
• The proposed approach was compared with traditional activity recognition approaches.

article info

Article history:
Received 31 July 2017
Received in revised form 30 October 2017
Accepted 14 November 2017
Available online 22 November 2017

Keywords:
Activity recognition
Sensors
Smartphones
Deep belief network

abstract

In the last few decades, human activity recognition has received considerable attention from a wide range of pattern recognition and human–computer interaction researchers due to its prominent applications, such as smart home health care. For instance, activity recognition systems can be adopted in a smart home health care system to improve the rehabilitation processes of patients. There are various ways of using different sensors for human activity recognition in a smartly controlled environment. Among these, physical human activity recognition through wearable sensors provides valuable information about an individual's degree of functional ability and lifestyle. In this paper, we present a smartphone inertial sensors-based approach for human activity recognition. Efficient features, including the mean, median, and autoregressive coefficients, are first extracted from the raw data. The features are further processed by kernel principal component analysis (KPCA) and linear discriminant analysis (LDA) to make them more robust. Finally, the features are used to train a Deep Belief Network (DBN) for successful activity recognition. The proposed approach was compared with traditional activity recognition approaches, such as a typical multiclass Support Vector Machine (SVM) and an Artificial Neural Network (ANN), and it outperformed both.

© 2017 Elsevier B.V. All rights reserved.

1. Introduction

Human Activity Recognition (HAR) has become an elegant research field for its remarkable contributions to ubiquitous computing [1–3]. Researchers use these systems as a medium to obtain information about people's behavior [4]. The information is commonly gathered from the signals of sensors such as ambient and wearable sensors. The data from the signals are then processed through machine learning algorithms to recognize the events lying in them. Hence, such HAR systems can be applied in many practical applications in smart environments, such as smart home healthcare systems. For example, a smart HAR system can continuously observe patients for health diagnosis and medication [5], or it can be applied for automated surveillance of public places to predict crimes before they happen [6].

In the last few decades, many HAR systems have been surveyed [7–9], where the authors focused on several activities in distinct application domains [10,11]. For instance, the activities can include walking, running, cooking, and exercising. Regarding the duration and complexity of the activities, they can be categorized into three key groups: short activities, simple activities, and complex activities. The group of short activities consists of activities with a very short duration, such as the transition from sitting to standing. The second kind consists of basic activities such as walking and reading [12]. The final one is basically a combination of progressions of basic activities along with interactions with other objects and individuals; such activities can be partying or an official meeting [13]. In this paper, we focus on recognizing basic activities.

* Corresponding author.
E-mail addresses: mmhassan@ksu.edu.sa (M.M. Hassan), mdzu@ifi.uio.no (Md.Z. Uddin), amrm@ece.ubc.ca (A. Mohamed), ahalmogren@ksu.edu.sa (A. Almogren).

https://doi.org/10.1016/j.future.2017.11.029
0167-739X/© 2017 Elsevier B.V. All rights reserved.

HAR has been actively explored with distinct kinds of ambient and wearable sensors [1]. Some instances of such sensors include motion, proximity, microphone, and video sensors. Most recent ambient sensor-based HAR research has mainly focused on video cameras, as cameras make it easy to retrieve images of the surrounding environment. Video sensors have been combined with other prominent sensors in some novel ubiquitous applications [14,15]. Though video sensors have been very popular for basic activity recognition, they face many difficulties when privacy issues arise. On the contrary, wearable sensors such as inertial sensors can overcome this kind of privacy issue and hence deserve more focus for activity recognition in smart homes [16].

Many past HAR systems used accelerometers to recognize a wide range of daily activities such as standing, walking, sitting, running, and lying [17–23]. In [20], the authors explored accelerometer data to find repeating activities such as grinding, filling, drilling, and sanding. In [21–23], the authors addressed elderly people's fall detection and prevention in smart environments. The majority of the aforementioned systems adopted many accelerometers fixed at different places on the human body [17–21]. However, this approach is apparently not applicable for observing long-term activities in daily life, due to the attachment of many sensors to the human body and the cable connections. Some studies explored the data of a single accelerometer at the sternum or waist [22,23]. These works reported substantial recognition results for basic daily activities such as running, walking, and lying. However, they could not show good accuracy for some complex activity situations such as transitional activities (e.g., sit-to-stand, lie-to-stand, and stand-to-sit).

Thus, among the different sensors used in activity recognition, the accelerometer is the most commonly utilized for capturing human body motion [8]. The sensor can be deployed in two ways: first, in a multi-sensor package such as triaxial accelerometers or Body Sensor Networks (BSNs); second, in combination with other sensors such as gyroscopes, temperature, and heart rate sensors [24]. Bao and Intille [12] proposed one of the earliest HAR systems for the recognition of 20 activities of daily living, using five wearable biaxial accelerometers and well-known machine learning classifiers. They achieved reasonably good classification accuracy, reaching up to 84%, considering the number of activities involved. One evident drawback was related to the number and location of the body sensors used, which made the system highly obtrusive. Gyroscopes have also been employed for HAR and have been demonstrated to improve recognition performance when used in combination with accelerometers [25,26].

In the case of wearable sensors for activity recognition, the smartphone is an alternative due to the diversity of sensors it supports. Sensors such as accelerometers and gyroscopes, along with on-device processing and wireless communication capabilities, have made smartphones a very useful tool for activity monitoring in smart homes [27,28]. Besides, smartphones are very ubiquitous and require almost no static infrastructure to operate. This advantage makes them more practically applicable than other ambient multi-modal sensors in smart homes. As recent smartphones contain inertial sensors (e.g., gyroscopes and accelerometers), they can be appropriate sensing resources for obtaining human motion information for HAR [29,30].

Recently, smartphones have attracted many activity recognition researchers, as they have fast processing capability and are easily deployable [31–34]. For instance, in [31], the authors used wirelessly connected smartphones to collect a user's data from a chest unit composed of an accelerometer and vital sign sensors. The data was later processed and analyzed using different machine learning algorithms. In [32], the authors developed a HAR system to recognize five transportation activities, where data from smartphone inertial sensors were used with a mixture-of-experts model for classification. In [33], the authors proposed an offline HAR system using a smartphone with a built-in triaxial accelerometer sensor; the phone was kept in the pocket during experiments. In [34], the authors used a smartphone mounted at the waist to collect inertial sensors' data for activity recognition, with a Support Vector Machine (SVM) for activity modeling. In [35], a smartphone was used to recognize six different activities in real-time. In [36], the authors proposed a real-time motion recognition system with the help of a smartphone with accelerometer sensors. Similarly, the authors in [37] used a smartphone with an embedded accelerometer to recognize four different activities in real-time.

As the dimension of the features from different sensors becomes very high in activity recognition, Principal Component Analysis (PCA) can be applied in this regard [37]. PCA applies a linear approach to find the directions with maximum variation. Thus, PCA is adopted in this work to reduce the dimensions of the high-dimensional features. Recently, deep learning techniques have been getting a lot of attention from pattern recognition and artificial intelligence researchers [38–40]. Though deep learning is more efficient than typical neural networks, it has two major disadvantages: an overfitting problem, and it is often very time-consuming. The Deep Belief Network (DBN) is one of the robust deep learning tools that use Restricted Boltzmann Machines (RBMs) during training [39]. Hence, the DBN is a good candidate for modeling an activity recognition system.

In this work, a novel smartphone-based approach is proposed for HAR using efficient features and a DBN. The rest of the paper is organized as follows. Section 2 explains the feature extraction process from the inertial sensor signals. Then, Section 3 illustrates the modeling of the different activities through deep learning. Furthermore, Section 4 shows the experimental results using different approaches. Finally, Section 5 concludes the work with some remarks.

2. Proposed method

Fig. 1 shows the basic flowchart of the proposed system. The proposed system basically consists of three main parts: sensing, feature extraction, and recognition. The first part is sensing: it collects sensor data as input to the HAR system. For this study, two prominent sensors in smartphones have been selected for data collection: the triaxial accelerometer and the gyroscope. The sensors provide measurements at frequencies between 0 Hz and 15 Hz. The second major part is feature extraction. This part starts by removing noise to isolate relevant signals, such as gravity, from the triaxial acceleration. After removing noise, it performs statistical analysis on fixed-size sliding windows over the time-sequential inertial sensor signals to generate robust features. The third key part of the system is modeling activities from the features via deep learning, where a DBN is adopted.

2.1. Signal processing

Triaxial angular velocity and linear acceleration signals are obtained from the smartphone gyroscope and accelerometer sensors. The sampling rate of the raw signals is 50 Hz for both sensors. These signals are then preprocessed to reduce noise. Two filters are used in this regard: a median filter and a low-pass Butterworth filter. Twenty Hz is taken as the cutoff frequency for the Butterworth filter. Another low-pass Butterworth filter is applied to the acceleration signal, which contains gravitational and body motion components, to separate the body acceleration and gravity information. The gravitational forces are assumed to have low-frequency components, and 0.3 Hz is considered the optimal corner frequency
to obtain a constant gravity signal. In addition to the body acceleration signals in the time and frequency domains and the gravitational acceleration in the time domain, more signals were obtained by transforming them. The additional signals are the body angular speed magnitude, body angular speed, body acceleration jerk, gravity acceleration magnitude, body angular acceleration, body acceleration magnitude, body acceleration jerk magnitude, and body angular acceleration magnitude. The signals are then sampled with sliding windows of 2.56 s, with a 50% overlap between two consecutive windows.

Fig. 1. Flowchart of the proposed physical activity recognition system.

2.2. Feature extraction

Robust features are obtained using different kinds of signal processing feature extraction methods. Five hundred and sixty-one informative features are extracted based on different previous works related to inertial sensors for human activity recognition [1].

The mean \bar{w} of a window w is determined as

\bar{w} = \frac{1}{N} \sum_{i=1}^{N} w_i.    (1)

The standard deviation of a sliding window can be determined as

\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (w_i - \bar{w})^2}.    (2)

The mean absolute deviation of a sliding window is determined as

\mathrm{median}(|w_i - \mathrm{median}(w)|).    (3)

The highest value in a fixed-length sliding window is obtained as

m = \max(w).    (4)

The lowest value in a fixed-length sliding window is determined as

n = \min(w).    (5)

The frequency skewness of a sliding window is obtained as

s = E\left[\left(\frac{f - \bar{f}}{\sigma}\right)^{3}\right].    (6)

The frequency kurtosis of a sliding window is obtained as

K = \frac{E\left[(f - \bar{f})^{4}\right]}{E\left[(f - \bar{f})^{2}\right]^{2}}.    (7)

The maximum frequency in a sliding window is obtained as

a = \max(f_w).    (8)

The average energy in a sliding window is determined as

e = \frac{1}{N} \sum_{i=1}^{N} w_i^{2}.    (9)

The signal magnitude area (SMA) features for three consecutive windows are determined as

S = \frac{1}{3} \sum_{i=1}^{3} \sum_{j=1}^{N} |w_{ij}|.    (10)

The entropy feature of a sliding window is determined as

t = \frac{1}{3} \sum_{i=1}^{N} c_i \log(c_i),    (11)

c_i = \frac{w_i}{\sum_{j=1}^{N} w_j}.    (12)

The interquartile range in a window is determined based on the medians as

Q = Q3(w) - Q1(w).    (13)

The autoregression (AR) coefficients of a window can be determined as

W(t) = \sum_{i=1}^{P} \alpha(i)\, w(t-i) + \varepsilon(t)    (14)

where W(t) is the time-series signal, \alpha represents the AR coefficients, \varepsilon(t) is the noise term, and P is the order of the filter.

The Pearson correlation coefficient of two windows w_1 and w_2 is determined as

P = \frac{C_{12}}{\sqrt{C_{11} C_{22}}},    (15)

C = \mathrm{Cov}(w_1, w_2).    (16)

Then, the frequency signal weighted average is calculated as

A = \frac{\sum_{j=1}^{N} j f_j}{\sum_{i=1}^{N} f_i}.    (17)
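The preprocessing described in Section 2.1 can be sketched with SciPy. The 50 Hz sampling rate, the 20 Hz and 0.3 Hz cutoffs, and the 2.56 s windows with 50% overlap come from the text; the Butterworth filter order and the median-filter kernel size are assumptions, as the paper does not state them.

```python
import numpy as np
from scipy.signal import butter, filtfilt, medfilt

FS = 50.0  # sampling rate in Hz, as stated in Section 2.1

def denoise(x):
    """Median filter followed by a 20 Hz low-pass Butterworth filter.
    Kernel size and filter order are assumptions (not given in the text)."""
    x = medfilt(x, kernel_size=3)
    b, a = butter(3, 20.0 / (FS / 2), btype="low")
    return filtfilt(b, a, x)

def split_gravity(acc):
    """Separate gravity (low-frequency) from body acceleration using a
    low-pass filter with the 0.3 Hz corner frequency from the text."""
    b, a = butter(3, 0.3 / (FS / 2), btype="low")
    gravity = filtfilt(b, a, acc)
    return gravity, acc - gravity

def sliding_windows(x, win_s=2.56, overlap=0.5):
    """2.56 s windows (128 samples at 50 Hz) with 50% overlap."""
    n = int(win_s * FS)            # 128 samples per window
    step = int(n * (1 - overlap))  # 64-sample hop
    return np.stack([x[i:i + n] for i in range(0, len(x) - n + 1, step)])
```

Zero-phase filtering (`filtfilt`) is used here so the filtered signals stay aligned with the raw ones; the paper does not specify whether causal or zero-phase filtering was applied.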

Fig. 2. Mean of the normalized features for four sample physical activities.
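Several of the per-window statistics defined by Eqs. (1)–(14) can be sketched in NumPy as below. This is an illustrative sketch, not the authors' implementation; in particular, the least-squares AR fit is an assumption, since the paper does not specify an estimator.

```python
import numpy as np

def window_features(w, ar_order=4):
    """A subset of the per-window statistics from Section 2.2."""
    feats = {
        "mean":   w.mean(),                                     # Eq. (1)
        "std":    w.std(),                                      # Eq. (2)
        "mad":    np.median(np.abs(w - np.median(w))),          # Eq. (3)
        "max":    w.max(),                                      # Eq. (4)
        "min":    w.min(),                                      # Eq. (5)
        "energy": np.mean(w ** 2),                              # Eq. (9)
        "iqr":    np.percentile(w, 75) - np.percentile(w, 25),  # Eq. (13)
    }
    # Eq. (14): AR coefficients, here fitted by ordinary least squares
    # on lagged copies of the window; column i holds w(t - i - 1).
    X = np.column_stack([w[ar_order - i - 1: len(w) - i - 1]
                         for i in range(ar_order)])
    feats["ar"], *_ = np.linalg.lstsq(X, w[ar_order:], rcond=None)
    return feats
```

Applying such a function to every sliding window of every derived signal (body acceleration, jerk, magnitudes, and so on) is what yields the 561-dimensional feature vector mentioned above.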

The spectral energy of a frequency band [x, y] is determined as

S = \frac{1}{x + y + 1} \sum_{i=x}^{y} f_i^{2}.    (18)

Then, the angle between a central vector and the mean of three consecutive windows can be obtained as

F = \tan^{-1}\left(\left\| [\bar{w}_1, \bar{w}_2, \bar{w}_3] \times v \right\|, [\bar{w}_1, \bar{w}_2, \bar{w}_3] \cdot v\right).    (19)

Fig. 2 shows the mean of the features for four different activities. In the figure, the mean of the features of an activity differs from the mean of the features of the other activities. Hence, the aforementioned features are used to represent the different activities obtained from the smartphone inertial sensors' data.

2.3. Dimension reduction

The next step of the feature extraction is to apply dimension reduction using kernel PCA (KPCA) [30]. In KPCA, a statistical kernel is applied to the input features, followed by typical PCA. Given spatiotemporal robust features F, the covariance matrix of the features can be defined as

Y = \frac{1}{q} \sum_{i=1}^{q} \tilde{\Phi}(F_i)\, \tilde{\Phi}(F_i)^{T}    (20)

\tilde{\Phi}(F_i) = \Phi(F_i) - \bar{\Phi}    (21)

\bar{\Phi} = \frac{1}{q} \sum_{i=1}^{q} \Phi(F_i)    (22)

where q represents the total number of feature segments for training and \Phi is a Gaussian kernel mapping. Now, the principal components can be found by satisfying the following eigenvalue decomposition problem:

\lambda E = Q E    (23)

Q = E^{T} \lambda E    (24)

where E represents the principal components and \lambda the corresponding eigenvalues. The feature vectors using KPCA for a signal segment can be represented as

K = F E_m^{T}    (25)

where m represents the number of top principal components. Fig. 3 shows 100 eigenvalues for 100 principal components, where the values after the 20th component are almost zero. However, 100 components are considered throughout this work, as the rest of them are negligible.

Fig. 3. One hundred eigenvalues after applying KPCA on the features of the training samples.

3. Activity modeling in the proposed work

Modeling activities through a Deep Belief Network (DBN) has two basic parts: pre-training and fine-tuning. The pre-training phase is based on the Restricted Boltzmann Machine (RBM) [39]. Once the network is pre-trained, the weights of the network are adjusted by a fine-tuning algorithm. The RBM is useful for unsupervised learning. As shown in Fig. 4, two hidden layers are used in this work. RBMs are basically used to initialize the network, where a greedy layer-wise training methodology is used. Once the weights of the RBMs in the first hidden layer are trained, they are used as inputs to the second hidden layer. Similarly, the weights of the RBMs of the second hidden layer are trained and used as inputs to the output layer. Fig. 4 shows a basic architecture of pre-training and fine-tuning in a typical DBN with inputs I, n hidden layers H, and output layer O.

To update the weight matrix, a Contrastive Divergence algorithm is used. First, the binary state of the first hidden layer H_1 is

computed as

H_1 = \begin{cases} 1, & f(r + I G^{T}) > t \\ 0, & \text{otherwise,} \end{cases}    (26)

f(v) = \frac{1}{1 + e^{-v}}    (27)

where r is the bias vector for the input layer I, G is the initial weight matrix, and t is a threshold learned along with the weight matrix; f is the sigmoid function. Then, the input layer is reconstructed as I_recon from the binary state of the hidden layer H_1 as

I_{recon} = \begin{cases} 1, & f(b + H_1 G) > t \\ 0, & \text{otherwise} \end{cases}    (28)

where b is the bias vector for the input layer. Afterward, the hidden layer is reconstructed as H_recon from I_recon as

H_{recon} = f(r + I_{recon} G^{T}).    (29)

Then, the weight difference is computed as

\Delta G = \frac{H_1 I}{B} - \frac{H_{recon} I_{recon}}{B}    (30)

where B is the batch size. Once pre-training is done, a conventional back-propagation algorithm is run to adjust all parameters; this is called fine-tuning. Fig. 5 shows a sample convergence plot of the DBN using the proposed features, which indicates that the training error becomes almost zero when the number of epochs approaches 1000.

Fig. 4. Structure of the DBN used in this work, with 100 neurons in the input layer, 60 in hidden layer 1, 20 in hidden layer 2, and 12 in the output layer.

Fig. 5. Convergence of the DBN using 1000 epochs.

4. Experiments and results

For the experiments, a publicly available database was used [41]. The database consists of twelve activities: Standing, Sitting, Lying Down, Walking, Walking-upstairs, Walking-downstairs, Stand-to-Sit, Sit-to-Stand, Sit-to-Lie, Lie-to-Sit, Stand-to-Lie, and Lie-to-Stand. A total of 7767 and 3162 events were used for training and testing, respectively. Each event consists of 561 basic features. It is to be noted that in the database used in this work, the number of samples for training and testing the different activities is not evenly distributed: some activities contain a large number of samples, whereas others have very few.

Table 1
HAR experiment results using the traditional ANN-based approach.

Activity             Recognition rate (%)
Standing             94.56
Sitting              90.87
Lying down           85.71
Walking              83.27
Walking-upstairs     94.96
Walking-downstairs   97.80
Stand-to-Sit         34.78
Sit-to-Stand         0.00
Sit-to-Lie           56.25
Lie-to-Sit           76.00
Stand-to-Lie         51.02
Lie-to-Stand         18.52
Mean                 65.31

Table 2
HAR experiment results using the traditional SVM-based approach.

Activity             Recognition rate (%)
Standing             99.60
Sitting              93.84
Lying down           97.14
Walking              87.20
Walking-upstairs     97.30
Walking-downstairs   98.90
Stand-to-Sit         73.91
Sit-to-Stand         90.00
Sit-to-Lie           50.00
Lie-to-Sit           64.00
Stand-to-Lie         69.39
Lie-to-Stand         62.96
Mean                 82.02

Table 3
HAR experiment results using the proposed DBN-based approach.

Activity             Recognition rate (%)
Standing             99.60
Sitting              95.97
Lying down           96.67
Walking              93.50
Walking-upstairs     97.12
Walking-downstairs   99.45
Stand-to-Sit         82.61
Sit-to-Stand         90.00
Sit-to-Lie           81.25
Lie-to-Sit           72.00
Stand-to-Lie         85.71
Lie-to-Stand         81.48
Mean                 89.61
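Equations (26)–(30) amount to one contrastive-divergence (CD-1) update of a single RBM layer. The NumPy sketch below mirrors them under stated assumptions: G is taken as a visible-by-hidden matrix, r as the hidden-side bias, a fixed 0.5 threshold stands in for the learned threshold t, and the learning rate is arbitrary. It is an illustration, not the authors' code.

```python
import numpy as np

def sigmoid(v):
    # Eq. (27): logistic activation
    return 1.0 / (1.0 + np.exp(-v))

def cd1_update(I, G, r, b, lr=0.1, thresh=0.5):
    """One contrastive-divergence (CD-1) step for a batch I (batch x visible).

    Assumptions: G is visible x hidden, r is the hidden bias, b is the
    visible bias, and the fixed 0.5 threshold stands in for the learned
    threshold t of Eqs. (26) and (28).
    """
    B = I.shape[0]  # batch size
    # Eq. (26): binary state of the first hidden layer
    H1 = (sigmoid(r + I @ G) > thresh).astype(float)
    # Eq. (28): reconstruct the input layer from H1
    I_recon = (sigmoid(b + H1 @ G.T) > thresh).astype(float)
    # Eq. (29): reconstruct the hidden layer from the reconstruction
    H_recon = sigmoid(r + I_recon @ G)
    # Eq. (30): difference between data and reconstruction statistics
    dG = (I.T @ H1) / B - (I_recon.T @ H_recon) / B
    return G + lr * dG
```

Repeating such updates greedily for each layer (100→60, then 60→20 units, per Fig. 4) pre-trains the DBN, after which back-propagation fine-tunes all weights.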

Table 4
Accuracy and error rate using different HAR approaches.

Approach   Total testing samples   Rightly classified   Overall accuracy (%)   Wrongly classified   Overall error (%)
ANN        3162                    2816                 89.06                  346                  10.94
SVM        3162                    2976                 94.12                  186                  5.88
DBN        3162                    3031                 95.85                  131                  4.14
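The overall accuracy in Table 4 is a sample-weighted (micro) measure, while each table's "Mean" row is an unweighted (macro) average over classes; the two can diverge sharply when class sizes are imbalanced. A small sketch with hypothetical counts (not the paper's data) makes this concrete:

```python
def overall_accuracy(correct, totals):
    """Sample-weighted (micro) accuracy: total correct / total samples."""
    return sum(correct) / sum(totals)

def mean_recognition_rate(correct, totals):
    """Unweighted (macro) average of the per-class recognition rates."""
    rates = [c / t for c, t in zip(correct, totals)]
    return sum(rates) / len(rates)

# Hypothetical counts: one large, well-recognized class and one small,
# poorly-recognized class (e.g. a rare transition activity).
correct = [950, 1]
totals = [1000, 10]
print(overall_accuracy(correct, totals))       # ~0.94: high accuracy
print(mean_recognition_rate(correct, totals))  # 0.525: low mean rate
```

This is exactly the pattern seen with the ANN above: rare transition activities drag the mean recognition rate down to 65.31% even though 89.06% of all test samples are classified correctly.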

We started the experiments with traditional Artificial Neural Networks (ANNs). For that, we ran a typical ANN algorithm several times and obtained a mean recognition rate of 65.31% at best. The ANN-based experimental results are shown in Table 1. The overall accuracy obtained by the ANN was 89.06%. Then, we proceeded to apply multiclass Support Vector Machines (SVMs), which gave us a mean recognition rate of 82.02% at best. The SVM-based experimental results are reported in Table 2. Finally, we applied the proposed approach, which provided a mean recognition rate of 89.61%, the highest recognition rate. Thus, the proposed approach showed its superiority over the others. Table 3 shows the experimental results using the proposed approach.

As there are different numbers of samples in the different testing activities, a poor mean recognition rate does not indicate poor accuracy. For instance, the mean recognition rate of the ANN-based approach was 65.31%, but the accuracy of the approach was 89.06%, as 2816 samples were rightly classified among 3162 samples. The accuracy obtained via the SVM was 94.12%, as 2976 samples were rightly classified. Similarly, the accuracy using the proposed deep learning-based approach was 95.85%, as 3031 samples were rightly classified. Table 4 shows the accuracy and errors using the different approaches for HAR, where the proposed one shows the highest overall accuracy and the lowest overall error.

5. Conclusion

The main purpose of this work is to develop a robust human activity recognition system based on smartphone sensor data. It seems very feasible to use smartphones for activity recognition, as the smartphone is one of the most used devices by people in their daily life, not only for communicating with each other but also for a very wide range of applications, including healthcare. Thus, a novel approach has been proposed here for activity recognition using smartphone inertial sensors such as accelerometer and gyroscope sensors. From the sensor signals, multiple robust features have been extracted, followed by KPCA for dimension reduction. Furthermore, the robust features have been combined with a deep learning technique, the Deep Belief Network (DBN), for activity training and recognition. The proposed method was compared with the traditional multiclass SVM approach, where it showed its superiority. The system has been tested on twelve different physical activities, where it obtained a mean recognition rate of 89.61% and an overall accuracy of 95.85%. On the contrary, the other traditional approaches could achieve a mean recognition rate of 82.02% and an overall accuracy of 94.12% at best. Besides, it has shown its ability to distinguish between basic transitional and non-transitional activities. In the future, we plan to focus on more robust features and learning for more efficient and complex activity recognition in real-time environments.

Acknowledgment

The authors would like to extend their sincere appreciation to the Deanship of Scientific Research at King Saud University for its funding of this research through research group project no. RGP-281. The second author has an equal contribution with the first author in accomplishing this work.

References

[1] Y. Chen, C. Shen, Performance analysis of smartphone-sensor behavior for human activity recognition, IEEE Access 5 (2017) 3095–3110.
[2] M. Cornacchia, K. Ozcan, Y. Zheng, S. Velipasalar, A survey on activity detection and classification using wearable sensors, IEEE Sensors 17 (2) (2017) 386–403.
[3] A. Campbell, T. Choudhury, From smart to cognitive phones, IEEE Pervasive Comput. 11 (3) (2012) 7–11.
[4] B.P. Clarkson, Life patterns: Structure from wearable sensors (Ph.D. thesis), Massachusetts Institute of Technology, 2002.
[5] A. Avci, S. Bosch, M. Marin-Perianu, R. Marin-Perianu, P. Havinga, Activity recognition using inertial sensing for healthcare, wellbeing and sports applications: A survey, in: International Conference on Architecture of Computing Systems, 2010.
[6] W. Lin, M.-T. Sun, R. Poovandran, Z. Zhang, Human activity recognition for video surveillance, in: IEEE International Symposium on Circuits and Systems, 2008.
[7] O. Lara, M. Labrador, A survey on human activity recognition using wearable sensors, IEEE Commun. Surv. Tutor. 1 (2012) 1–18.
[8] A. Mannini, A.M. Sabatini, Machine learning methods for classifying human physical activity from on-body accelerometers, Sensors 10 (2010) 1154–1175.
[9] R. Poppe, A survey on vision-based human action recognition, Image Vis. Comput. 28 (2010) 976–990.
[10] B. Nham, K. Siangliulue, S. Yeung, Predicting mode of transport from iPhone accelerometer data, Technical Report, Stanford University, 2008.
[11] E. Tapia, S. Intille, K. Larson, Activity recognition in the home using simple and ubiquitous sensors, Pervasive Computing (2004).
[12] L. Bao, S. Intille, Activity recognition from user-annotated acceleration data, Pervasive Computing (2004).
[13] J. Aggarwal, M.S. Ryoo, Human activity analysis: A review, ACM Comput. Surv. 43 (3) (2011) 1–16.
[14] S.K. Tasoulis, N. Doukas, V.P. Plagianakos, I. Maglogiannis, Statistical data mining of streaming motion data for activity and fall recognition in assistive environments, Neurocomputing 107 (2013) 87–96.
[15] A. Behera, D. Hogg, A. Cohn, Egocentric activity monitoring and recovery, Asian Conference on Computer Vision (2013).
[16] D. Townsend, F. Knoefel, R. Goubran, Privacy versus autonomy: A tradeoff model for smart home monitoring technologies, in: 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC, 2011.
[17] A.M. Khan, Y.K. Lee, S.Y. Lee, T.S. Kim, A triaxial accelerometer-based physical-activity recognition via augmented-signal features and a hierarchical recognizer, IEEE Transactions on Information Technology in Biomedicine 14 (5) (2010) 1166–1172.
[18] U. Maurer, A. Smailagic, D. Siewiorek, M. Deisher, Activity recognition and monitoring using multiple sensors on different body positions, in: Proc. Int. Workshop Wearable Implantable Body Sens. Netw. (2006) 113–116.
[19] N. Kern, B. Schiele, H. Junker, P. Lukowicz, G. Troster, Wearable sensing to annotate meeting recordings, Pers. Ubiquitous Comput. 7 (2003) 263–274.
[20] D. Minnen, T. Starner, J. Ward, P. Lukowicz, G. Troester, Recognizing and discovering human actions from on-body sensor data, in: Proc. IEEE Int. Conf. Multimedia Expo (2005) 1545–1548.
[21] D. Giansanti, V. Macellari, G. Maccioni, New neural network classifier of fall-risk based on the Mahalanobis distance and kinematic parameters assessed by a wearable device, Physiol. Meas. 29 (2008) 11–19.
[22] M.R. Narayanan, M.E. Scalzi, S.J. Redmond, S.R. Lord, B.G. Celler, N.H. Lovell, A wearable triaxial accelerometry system for longitudinal assessment of falls risk, in: Proc. 30th Annu. IEEE Int. Conf. Eng. Med. Biol. Soc. (2008) 2840–2843.
[23] M. Marschollek, K. Wolf, M. Gietzelt, G. Nemitz, H.M.Z. Schwabedissen, R. Haux, Assessing elderly persons' fall risk using spectral analysis on accelerometric data—A clinical evaluation study, in: Proc. 30th Annu. IEEE Int. Conf. Eng. Med. Biol. Soc. (2008) 3682–3685.
[24] G.-Z. Yang, M. Yacoub, Body Sensor Networks, Springer, London, 2006.
[25] W. Wu, S. Dasgupta, E.E. Ramirez, C. Peterson, G.J. Norman, Classification accuracies of physical activities using smartphone motion sensors, J. Med. Internet Res. 14 (2012) 105–130.
[26] D. Anguita, A. Ghio, L. Oneto, X. Parra, J.-L. Reyes-Ortiz, Training computationally efficient smartphone-based human activity recognition models, in: International Conference on Artificial Neural Networks, 2013.

[27] A. Ghio, L. Oneto, Byte the bullet: learning on real-world computing architectures, in: European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning ESANN, 2014.
[28] D. Anguita, A. Ghio, L. Oneto, X. Parra, J.-L. Reyes-Ortiz, A public domain dataset for human activity recognition using smartphones, in: European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning ESANN, 2013.
[29] A.M. Khan, Y.-K. Lee, S. Lee, T.-S. Kim, Human activity recognition via an accelerometer-enabled-smartphone using kernel discriminant analysis, in: IEEE International Conference on Future Information Technology, 2010.
[30] D. Roggen, K. Förster, A. Calatroni, T. Holleczek, Y. Fang, G. Tröster, P. Lukowicz, G. Pirkl, D. Bannach, K. Kunze, A. Ferscha, C. Holzmann, A. Riener, R. Chavarriaga, J. del R. Millán, Opportunity: towards opportunistic activity and context recognition systems, in: IEEE Workshop on Autonomic and Opportunistic Communications, 2009.
[31] O.D. Lara, A.J. Pérez, M.A. Labrador, J.D. Posada, Centinela: A human activity recognition system based on acceleration and vital sign data, Pervasive Mob. Comput. 8 (2012) 717–729.
[32] Y.S. Lee, S.B. Cho, Activity recognition with android phone using mixture-of-experts co-trained with labeled and unlabeled data, Neurocomputing 126 (2014) 106–115.
[33] J.R. Kwapisz, G.M. Weiss, S.A. Moore, Activity recognition using cell phone accelerometers, SIGKDD Explor. Newsl. 12 (2011) 74–82.
[34] D. Anguita, A. Ghio, L. Oneto, X. Parra, J.L. Reyes-Ortiz, Human activity recognition on smartphones using a multiclass hardware-friendly support vector machine, Ambient Assisted Living and Home Care (2012).
[35] T. Brezmes, J. Gorricho, J. Cotrina, Activity recognition from accelerometer data on a mobile phone, Distrib. Comput. Artif. Intell. Bioinform. Soft Comput. Ambient Assist. Living 5518 (2009) 796–799.
[36] D. Fuentes, L. Gonzalez-Abril, C. Angulo, J. Ortega, Online motion recognition using an accelerometer in a mobile device, Expert Syst. Appl. 39 (2012) 2461–2465.
[37] M. Kose, O.D. Incel, C. Ersoy, Online human activity recognition on smart phones, in: Workshop on Mobile Sensing: From Smartphones and Wearables to Big Data, 2012.
[38] H.M. Ebied, Feature extraction using PCA and Kernel-PCA for face recognition, in: 8th International Conference on Informatics and Systems (INFOS) (2017) 72–77.
[39] G.E. Hinton, S. Osindero, Y.-W. Teh, A fast learning algorithm for deep belief nets, Neural Computation 18 (7) (2006) 1527–1554.

...international conferences/workshops like IEEE HPCC, ACM BodyNets, IEEE ICME, IEEE ScalCom, ACM Multimedia, ICA3PP, IEEE ICC, TPMC, IDCS, etc. He has also played the role of guest editor for several international ISI-indexed journals such as IEEE IoT, FGCS, etc. His research areas of interest are cloud federation, multimedia cloud, sensor-cloud, Internet of Things, Big Data, mobile cloud, cloud security, IPTV, sensor networks, 5G networks, social networks, publish/subscribe systems, and recommender systems. He is a member of IEEE.

Md. Zia Uddin received his Ph.D. in Biomedical Engineering in February of 2011. He is currently working as a post-doctoral research fellow in the Department of Informatics, University of Oslo, Norway. Dr. Zia's research is mainly focused on computer vision, image processing, artificial intelligence, and pattern recognition. He has more than 60 research publications, including international journals, conferences, and book chapters.

Amr Mohamed received his M.S. and Ph.D. in electrical and computer engineering from the University of British Columbia, Vancouver, Canada, in 2001 and 2006, respectively. He has over 20 years of experience in wireless networking research and industrial systems development. He holds 3 awards from IBM Canada for his achievements and leadership, and 3 best paper awards, the latest from the IEEE/IFIP International Conference on New Technologies, Mobility, and Security (NTMS) 2015 in Paris. His research interests include networking and MAC layer techniques, mainly in wireless networks. Dr. Amr Mohamed has authored or coauthored over 120 refereed journal and conference papers, a textbook, and book chapters in reputed international journals and conferences. He has served as a technical program committee (TPC) co-chair for workshops in IEEE WCNC'16. He has served as a co-chair for technical symposia of international conferences, including Globecom'16, Crowncom'15, AICCSA'14, IEEE WLN'11, and IEEE ICT'10. He
[40] M.Z. Uddin, M.M. Hassan, A. Almogren, M. Zuair, G. Fortino, J. Torresen, A has served on the organization committee of many other international conferences
facial expression recognition system using robust face features from depth as a TPC member, including the IEEE ICC, GLOBECOM, WCNC, LCN and PIMRC, and a
videos and deep learning, Computers & Electrical Engineering (2017). http: technical reviewer for many international IEEE, ACM, Elsevier, Springer, and Wiley
//dx.doi.org/10.1016/j.compeleceng.2017.04.019. journals.
[41] M. Lichman, UCI Machine Learning Repository, University of California, School
of Information and Computer Science, Irvine, CA, 2013. [http://archive.ics.uci.
edu/ml].
Ahmad Almogren has received PhD degree in com-
puter sciences from Southern Methodist University, Dal-
las, Texas, USA in 2002. Previously, he worked as an as-
Mohammad Mehedi Hassan is currently an Associate sistant professor of computer science and a member of
Professor of Information Systems Department in the Col- the scientific council at Riyadh College of Technology. He
lege of Computer and Information Sciences (CCIS), King also served as the dean of the college of computer and
Saud University (KSU), Riyadh, Kingdom of Saudi Arabia. information sciences and the head of the council of aca-
He received his Ph.D. degree in Computer Engineering demic accreditation at Al Yamamah University. Presently,
from Kyung Hee University, South Korea in February 2011. he works as an Associate Professor and the vice dean for
He received Best Paper Award from CloudComp confer- the development and quality at the college of computer
ence at China in 2014. He also received Excellence in and information sciences at King Saud University in Saudi
Research Award from CCIS, KSU in 2015 and 23016 re- Arabia. He has served as a guest editor for several computer journals. His research
spectively. He has published over 100+ research papers areas of interest include mobile and pervasive computing, computer security,
in the journals and conferences of international repute. sensor and cognitive network, and data consistency.
He has served as, chair, and Technical Program Committee member in numerous