Artificial Intelligence Based Approach for Classification of Human Activities Using MEMS Sensors Data
Abstract
1. Introduction
- (a)
- Rigorous experiments were conducted to prepare an extensive dataset covering 9 different human motion activity classes: (a) Laying Down, (b) Stationary, (c) Walking, (d) Brisk Walking, (e) Running, (f) Stairs Up, (g) Stairs Down, (h) Squatting, and (i) Cycling. The prepared dataset was then used for training and testing the ML and DL model(s). A detailed explanation is provided in Section 3.1.
- (b)
- The dataset prepared through these experiments was then used to train various ML and DL model(s), as specified in Section 3.2.
- (c)
- By combining an auto-labeling module with a DNN that uses Bi-LSTM structures, a supervised DL framework is designed, constructed, and proposed, which efficiently uses the extensively prepared dataset to achieve a maximum HAR accuracy of 98.1%.
- (d)
- The proposed DNN Bi-LSTM-based model was then tuned by varying several model parameters to determine the best possible model (hyperparameter tuning). Parameters such as training and testing time and the size of the trained network were also observed for the different cases (parametric analysis), as elaborated in Section 3.3.
- (e)
- Comparative analysis has been performed on the publicly available WISDM dataset, as described in detail in Section 5.
2. Literature Survey
3. Methodology
3.1. Dataset
3.2. Machine Learning for HAR
- (a)
- Exactly as its name suggests, a Decision Tree represents a flowchart-like structure resembling a tree, where each internal node represents a test on an attribute, each branch represents a decision rule, and each leaf node (also known as a terminal node) holds the output. The parameters used for training the Decision Tree Classifier in our work are as follows: min_samples_split: the number of samples required to split an internal node; min_samples_leaf: the minimum number of samples that must be at a leaf node. At each branch, the split point must leave at least min_samples_leaf training samples [63].
- (b)
- Random Forest Classifier is a supervised ML algorithm that can be used for both classification and regression problems. It aggregates several decision trees built from various subsets of the dataset and improves predictive accuracy by averaging their outputs. Its advantages include shorter training time than many other algorithms and efficient operation on large datasets. The parameters used for training the Random Forest Classifier in our work are as follows: n_estimators: the number of trees in the forest; criterion: the function used to measure the quality of a split; random_state: controls the randomness and bootstrapping [63].
- (c)
- One of the simplest machine learning algorithms is the K Nearest Neighbors (KNN) Classifier, which uses proximity to classify or predict data points. A new case is placed into the category with the highest similarity to the previously available cases. Since it does not learn from the training set immediately, it is also known as a lazy learner algorithm: instead of learning from the dataset right away, it stores it and performs classification only when queried. The parameters used for training the KNN Classifier in our work are as follows: algorithm: the algorithm used to compute the nearest neighbours, with possible values ‘auto’, ‘ball_tree’, ‘kd_tree’, and ‘brute’; n_neighbors: the number of neighbors to use by default for k-neighbors queries; weights: the weight function used in prediction, with possible values ‘uniform’, ‘distance’, and a user-defined callable [63].
- (d)
- Multinomial Logistic Regression is a modified version of logistic regression that accommodates multi-class problems, as by default logistic regression performs binary classification (i.e., 0 or 1). The parameters used for training the Multinomial Logistic Regression Classifier in our work are as follows: dual: selects the dual or primal formulation; the dual formulation is only implemented with the liblinear solver for the l2 penalty, and dual=False is preferred when n_samples is greater than n_features; tol: the tolerance of the stopping criteria; C: the inverse of the regularization strength, which must be positive; smaller values indicate stronger regularization, as in support vector machines; fit_intercept: indicates whether the decision function should include a constant (a.k.a. bias or intercept) [63].
- (e)
- Gaussian Naive Bayes is a probabilistic classification algorithm that applies Bayes’ theorem with strong independence assumptions. In the context of classification, independence means that the presence of one feature value does not affect the presence of another. The parameter used for training the Gaussian Naive Bayes Classifier in our work is as follows: var_smoothing: for calculation stability, a portion of the largest variance of all features is added to the variances [63].
- (f)
- Support Vector Machine (SVM) plots each data item as a point in n-dimensional space (where n is the number of features), with the value of each feature being a coordinate. Classification is performed by finding the hyperplane that best separates the two classes. For multiclass classification, the same principle is applied after breaking the problem down into multiple binary classification problems, with data points mapped onto a high-dimensional space where they can be linearly separated. The parameters used for training the SVM Classifier in our work are as follows: C: the regularization parameter, which must be positive; kernel: specifies the kernel type used by the algorithm; degree: degree of the polynomial kernel function (‘poly’); gamma: the kernel coefficient [63]. A minimal code sketch covering all six classifiers follows this list.
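The sketch below instantiates the six classifiers with the parameter values from the classifier-parameter table given later. It is a minimal scikit-learn illustration, not the authors' code: the synthetic dataset merely stands in for the prepared MEMS-sensor features, and the tol and var_smoothing values, which the table leaves blank, are assumed to be the scikit-learn defaults (1e-4 and 1e-9).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the real dataset: 12 sensor channels, 9 activity classes.
X, y = make_classification(n_samples=3000, n_features=12, n_informative=8,
                           n_classes=9, random_state=0)

classifiers = {
    "Decision Tree": DecisionTreeClassifier(min_samples_split=2, min_samples_leaf=1),
    "Random Forest": RandomForestClassifier(n_estimators=100, criterion="gini",
                                            random_state=43),
    "K Nearest Neighbours": KNeighborsClassifier(algorithm="auto", n_neighbors=10,
                                                 weights="uniform"),
    "Multinomial Logistic Regression": LogisticRegression(dual=False, tol=1e-4,
                                                          C=1.0, fit_intercept=True),
    "Gaussian Naive Bayes": GaussianNB(var_smoothing=1e-9),
    "Support Vector Machine": SVC(C=1.0, kernel="rbf", degree=3, gamma="scale"),
}

# 70-30 train-test split, the ratio that gave the best DL accuracy in Section 4.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=43, stratify=y)
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(f"{name}: test accuracy = {clf.score(X_test, y_test):.3f}")
```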
3.3. Deep Learning for HAR
3.4. Architecture of the Proposed DL Model Using Bi-LSTM Neural Network for HAR
- (a)
- The Bi-LSTM layer supports two output modes, ‘sequence’ and ‘last’: ‘sequence’ outputs the entire sequence, while ‘last’ outputs only its final step. Since we only need the sequence’s final step, we selected ‘last’ [74].
- (b)
- The state activation function, which updates the hidden state, can be either the ‘tanh’ or the ‘softsign’ function. We used ‘tanh’ because its larger derivative updates the weights and biases more strongly [74].
- (c)
- The gate activation function is applied to the input, forget, and output gates; the available options are ‘sigmoid’ and ‘hard-sigmoid’. We used ‘sigmoid’, as listed in the parameter table below [74].
- (d)
- Input weights initializer initializes the input weights, based on the following options: ‘glorot’—creates weights such that every layer’s activation variance is the same; ‘he’—used to achieve a variance of approximately one; ‘orthogonal’—used to prevent gradients from exploding and vanishing; ‘narrow-normal’—input weights randomly selected from a normal distribution with a mean of ‘0’ and a standard deviation of ‘0.01’; ‘zeros’—weights are initialized to zeros; ‘ones’—weights are initialized to ones. We selected ‘glorot’ as our input weights initialization function to maintain a smooth distribution for both forward and backward propagation [74].
- (e)
- Recurrent weights initializer serves as the initialization function for the recurrent weights; the options are the same as for the input weights initializer discussed above. We selected ‘orthogonal’ as our recurrent weights initialization function because, with orthogonal initialization, gradient descent can achieve zero training error at a linear convergence rate [74].
- (f)
- Input weights learn rate factor is multiplied by the global learning rate to determine the learning rate of the input weights. We set it to ‘1’ so that the input weights’ learning rate equals the global learning rate [74].
- (g)
- Recurrent weights learn rate factor is the learning rate factor of the recurrent weights; multiplying it by the global learning rate gives the learning rate for the recurrent weights of the layer. We set this factor to ‘1’ to make it equal to the global learning rate [74].
- (h)
- Input weights L2 factor controls L2 regularization of the input weights; L2 regularization reduces the possibility of overfitting by keeping weights and biases small. With a value of ‘1’, the L2 regularization factor of the input weights matches the current global L2 regularization factor [74].
- (i)
- Bias learn rate factor is a non-negative scalar or 1-by-8 numeric vector that specifies the learning rate factor for the biases. A factor of ‘1’ is applied so that the learning rate of the biases equals the global learning rate [74].
- (j)
- Bias L2 factor is a non-negative scalar specified as the L2 regularization factor for the biases. Multiplying this factor by the global L2 regularization factor determines the L2 regularization for the biases in the layer. It is set to zero because it does not need to equal the global L2 regularization factor [74].
- (k)
- In Bias initializer, one of the following functions is used to initialize the biases: ‘unit-forget-gate’—sets the forget gate bias to ‘1’ and the other biases to ‘0’; ‘narrow-normal’—biases randomly selected from a normal distribution with a mean of ‘0’ and a standard deviation of ‘0.01’; ‘ones’—biases are initialized to ones. We used ‘unit-forget-gate’, which biases the forget gate toward retaining information, helping the network decide what should be paid attention to and what should be ignored [74]. A configuration sketch mirroring these choices follows this list.
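The model itself was implemented with MATLAB's Deep Learning Toolbox [74]. Purely as an illustration, and not the authors' code, the PyTorch sketch below shows how the choices above (glorot input weights, orthogonal recurrent weights, unit-forget-gate biases, ‘last’ output mode) map onto a generic framework; the input size, hidden size, and batch shape are placeholders (90 hidden units gave the best accuracy in Section 4). The ‘tanh’ state and ‘sigmoid’ gate activations are PyTorch's LSTM defaults, and the learn-rate and L2 factors correspond to the optimizer's learning rate and weight-decay settings.

```python
import torch
import torch.nn as nn

# Bi-LSTM over 12 sensor channels (placeholder sizes).
bilstm = nn.LSTM(input_size=12, hidden_size=90, bidirectional=True, batch_first=True)

for name, param in bilstm.named_parameters():
    if name.startswith("weight_ih"):      # input weights -> 'glorot' (Xavier)
        nn.init.xavier_uniform_(param)
    elif name.startswith("weight_hh"):    # recurrent weights -> 'orthogonal'
        nn.init.orthogonal_(param)
    else:                                 # biases -> 'unit-forget-gate'
        param.data.zero_()
        h = bilstm.hidden_size
        param.data[h:2 * h] = 1.0         # PyTorch gate order: input, forget, cell, output

x = torch.randn(8, 100, 12)               # (batch, time steps, channels)
out, _ = bilstm(x)
last_step = out[:, -1, :]                  # 'last' output mode: keep only the final step
```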
4. Performance Evaluation and Results
5. Comparative Analysis
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Ramamurthy, S.R.; Roy, N. Recent trends in machine learning for human activity recognition—A survey. WIREs Data Min. Knowl. Discov. 2018, 8, e1254. [Google Scholar] [CrossRef]
- Chen, K.; Zhang, D.; Yao, L.; Guo, B.; Yu, Z.; Liu, Y. Deep learning for sensor-based human activity recognition: Overview, challenges and opportunities. ACM Comput. Surv. 2021, 54, 1–40. [Google Scholar] [CrossRef]
- Lara, O.D.; Labrador, M.A. A Survey on Human Activity Recognition using Wearable Sensors. IEEE Commun. Surv. Tutor. 2013, 15, 1192–1209. [Google Scholar] [CrossRef]
- Zhang, S.; Li, Y.; Zhang, S.; Shahabi, F.; Xia, S.; Deng, Y.; Alshurafa, N. Deep Learning in Human Activity Recognition with Wearable Sensors: A Review on Advances. Sensors 2022, 22, 1476. [Google Scholar] [CrossRef]
- Khan, A.A.H.; Kukkapalli, R.; Waradpande, P.; Kulandaivel, S.; Banerjee, N.; Roy, N.; Robucci, R. RAM: Radar-based activity monitor. In Proceedings of the 35th Annual IEEE International Conference on Computer Communications, San Francisco, CA, USA, 10–14 April 2016; pp. 1–9. [Google Scholar] [CrossRef]
- Khan, A.A.H.; Hossain, H.M.S.; Roy, N. Infrastructure-less Occupancy Detection and Semantic Localization in Smart Environments. CASA 2015, 2, e3. [Google Scholar] [CrossRef] [Green Version]
- Cook, D.; Feuz, K.D.; Krishnan, N.C. Transfer learning for activity recognition: A survey. Knowl. Inf. Syst. 2013, 36, 537–556. [Google Scholar] [CrossRef] [Green Version]
- Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control. Signals Syst. 1989, 2, 303–314. [Google Scholar] [CrossRef]
- Schäfer, A.M.; Zimmermann, H.G. Recurrent Neural Networks Are Universal Approximators. In Artificial Neural Networks – ICANN 2006; Kollias, S.D., Stafylopatis, A., Duch, W., Oja, E., Eds.; Lecture Notes in Computer Science; Springer: Berlin, Heidelberg, 2006; Volume 4131. [Google Scholar] [CrossRef]
- Hassan, M.M.; Uddin, Z.; Mohamed, A.; Almogren, A. A robust human activity recognition system using smartphone sensors and deep learning. Futur. Gener. Comput. Syst. 2018, 81, 307–313. [Google Scholar] [CrossRef]
- Zhou, D.-X. Universality of deep convolutional neural networks. Appl. Comput. Harmon. Anal. 2020, 48, 787–794. [Google Scholar] [CrossRef] [Green Version]
- Prasad, A.; Tyagi, A.K.; Althobaiti, M.M.; Almulihi, A.; Mansour, R.F.; Mahmoud, A.M. Human Activity Recognition Using Cell Phone-Based Accelerometer and Convolutional Neural Network. Appl. Sci. 2021, 11, 12099. [Google Scholar] [CrossRef]
- Zhu, N.; Diethe, T.; Camplani, M.; Tao, L.; Burrows, A.; Twomey, N.; Kaleshi, D.; Mirmehdi, M.; Flach, P.; Craddock, I. Bridging e-Health and the Internet of Things: The SPHERE Project. IEEE Intell. Syst. 2015, 30, 39–46. [Google Scholar] [CrossRef] [Green Version]
- Zhou, X.; Liang, W.; Wang, K.I.-K.; Wang, H.; Yang, L.T.; Jin, Q. Deep-Learning-Enhanced Human Activity Recognition for Internet of Healthcare Things. IEEE Internet Things J. 2020, 7, 6429–6438. [Google Scholar] [CrossRef]
- Nirmalya, R.; Archan, M.; Diane, C. Infrastructure-assisted smartphone-based ADL recognition in multi-inhabitant smart environments. In Proceedings of the 2013 IEEE International Conference on Pervasive Computing and Communications, San Diego, CA, USA, 18–22 March 2013; pp. 38–46. [Google Scholar] [CrossRef] [Green Version]
- Wan, S.; Qi, L.; Xu, X.; Tong, C.; Gu, Z. Deep Learning Models for Real-time Human Activity Recognition with Smartphones. Mob. Networks Appl. 2019, 25, 743–755. [Google Scholar] [CrossRef]
- Doherty, S.T.; Lemieux, C.J.; Canally, C. Tracking Human Activity and Well-Being in Natural Environments Using Wearable Sensors and Experience Sampling. Soc. Sci. Med. 2014, 106, 83–92. [Google Scholar] [CrossRef]
- Tyagi, A.K.; Rekha, G. Challenges of Applying Deep Learning in Real-World Applications. In Challenges and Applications for Implementing Machine Learning in Computer Vision; 2020; pp. 92–118. Available online: www.igi-global.com/chapter/challenges-of-applying-deep-learning-in-real-world-applications/242103 (accessed on 20 July 2022). [CrossRef]
- Khan, Y.A.; Imaduddin, S.; Prabhat, R.; Wajid, M. Classification of Human Motion Activities using Mobile Phone Sensors and Deep Learning Model. In Proceedings of the 2022 8th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 25–26 March 2022; pp. 1381–1386. [Google Scholar] [CrossRef]
- Wikipedia Contributors. Activity Recognition. In Wikipedia, the Free Encyclopedia. Available online: https://en.wikipedia.org/w/index.php?title=Activity_recognition&oldid=1099289546 (accessed on 20 July 2022).
- Ronao, C.A.; Cho, S.B. Human activity recognition using smartphone sensors with two-stage continuous hidden Markov models. In Proceedings of the 10th IEEE International Conference on Natural Computation (ICNC), Xiamen, China, 19–21 August 2014; pp. 681–686. [Google Scholar]
- Anguita, D.; Ghio, A.; Oneto, L.; Parra, X.; Reyes-Ortiz, J.L. A public domain dataset for human activity recognition using smartphones. In Proceedings of the European Symposium on Artificial Neural Networks (ESANN), 21st European Symposium on Artificial Neural Networks, Computational Intelligence And Machine Learning, Bruges, Belgium, 24–26 April 2013; pp. 437–442.
- Krishnan, N.C.; Colbry, D.; Juillard, C.; Panchanathan, S. Real Time Human Activity Recognition Using Tri-Axial Accelerometers. In Proceedings of the Sensors Signals and Information Processing Workshop, Sedona, AZ, USA, 11–14 May 2008. [Google Scholar]
- Qi, W.; Su, H.; Yang, C.; Ferrigno, G.; De Momi, E.; Aliverti, A. A Fast and Robust Deep Convolutional Neural Networks for Complex Human Activity Recognition Using Smartphone. Sensors 2019, 19, 3731. [Google Scholar] [CrossRef]
- Ali, S.E.; Khan, A.N.; Zia, S.; Mukhtar, M. Human Activity Recognition System using Smart Phone based Accelerometer and Machine Learning. In Proceedings of the 2020 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT), Bali, Indonesia, 7–8 July 2020; pp. 69–74. [Google Scholar] [CrossRef]
- Chen, H.; Mahfuz, S.; Zulkernine, F. Smart Phone Based Human Activity Recognition. In Proceedings of the 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), San Diego, CA, USA, 18–21 November 2019; pp. 2525–2532. [Google Scholar] [CrossRef]
- Maurer, U.; Smailagic, A.; Siewiorek, D.P.; Deisher, M. Activity recognition and monitoring using multiple sensors on different body positions. In Proceedings of the International Workshop on Wearable and Implantable Body Sensor Networks, Washington, DC, USA, 3–5 April 2006. [Google Scholar]
- Yin, J.; Yang, Q.; Pan, J.J. Sensor-Based Abnormal Human-Activity Detection. IEEE Trans. Knowl. Data Eng. 2008, 20, 1082–1090. [Google Scholar] [CrossRef]
- Kao, T.P.; Lin, C.W.; Wang, J.S. Development of a portable activity detector for daily activity recognition. In Proceedings of the 2009 IEEE International Symposium on Industrial Electronics, Seoul, Republic of Korea, 5–8 July 2009; pp. 115–120. [Google Scholar]
- He, Z.; Jin, L. Activity recognition from acceleration data using AR model representation and SVM. In Proceedings of the 2008 International Conference on Machine Learning and Cybernetics, Kunming, China, 12–15 July 2008; pp. 2245–2250. [Google Scholar] [CrossRef]
- Tapia, E.M.; Intille, S.S.; Haskell, W.; Larson, K.; Wright, J.; King, A.; Friedman, R. Real-time recognition of physical activities and their intensities using wireless accelerometers and a heart monitor. In Proceedings of the International Symposium on Wearable Computers, Boston, MA, USA, 11–13 October 2007. [Google Scholar]
- Frank, K.; Rockl, M.; Nadales, M.; Robertson, P.; Pfeifer, T. Comparison of exact static and dynamic bayesian context inference methods for activity recognition. In Proceedings of the IEEE International Conference on Pervasive Computing and Communications Workshops (PERCOM Workshops), Mannheim, Germany, 29 March–2 April 2010; pp. 189–195. [Google Scholar]
- Suwannarat, K.; Kurdthongmee, W. Optimization of deep neural network-based human activity recognition for a wearable device. Heliyon 2021, 7, e07797. [Google Scholar] [CrossRef]
- Demrozi, F.; Pravadelli, G.; Bihorac, A.; Rashidi, P. Human Activity Recognition Using Inertial, Physiological and Environmental Sensors: A Comprehensive Survey. IEEE Access 2020, 8, 210816–210836. [Google Scholar] [CrossRef]
- Wang, Y.; Cang, S.; Yu, H. A survey on wearable sensor modality centred human activity recognition in health care. Expert Syst. Appl. 2019, 137, 167–190. [Google Scholar] [CrossRef]
- Jobanputra, C.; Bavishi, J.; Doshi, N. Human activity recognition: A survey. Procedia Comput. Sci. 2019, 155, 698–703. [Google Scholar] [CrossRef]
- Dang, L.M.; Min, K.; Wang, H.; Piran, J.; Lee, C.H.; Moon, H. Sensor-based and vision-based human activity recognition: A comprehensive survey. Pattern Recognit. 2020, 108, 107561. [Google Scholar] [CrossRef]
- Beddiar, D.R.; Nini, B.; Sabokrou, M.; Hadid, A. Vision-based human activity recognition: A survey. Multimed. Tools Appl. 2020, 79, 30509–30555. [Google Scholar] [CrossRef]
- Lima, W.S.; Souto, E.; El-Khatib, K.; Jalali, R.; Gama, J. Human Activity Recognition Using Inertial Sensors in a Smartphone: An Overview. Sensors 2019, 19, 3213. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Wang, H.; Zhao, J.; Li, J.; Tian, L.; Tu, P.; Cao, T.; An, Y.; Wang, K.; Li, S. Wearable Sensor-Based Human Activity Recognition Using Hybrid Deep Learning Techniques. Secur. Commun. Netw. 2020, 2020, 2132138. [Google Scholar] [CrossRef]
- Ramos, R.G.; Domingo, J.D.; Zalama, E.; Gómez-García-Bermejo, J.; López, J. SDHAR-HOME: A Sensor Dataset for Human Activity Recognition at Home. Sensors 2022, 22, 8109. [Google Scholar] [CrossRef]
- Luwe, Y.J.; Lee, C.P.; Lim, K.M. Wearable Sensor-Based Human Activity Recognition with Hybrid Deep Learning Model. Informatics 2022, 9, 56. [Google Scholar] [CrossRef]
- Liu, H.; Hartmann, Y.; Schultz, T. CSL-SHARE: A Multimodal Wearable Sensor-Based Human Activity Dataset. Front. Comput. Sci. 2021, 3. [Google Scholar] [CrossRef]
- Pardeshi, S.S.; Patange, A.D.; Jegadeeshwaran, R.; Bhosale, M.R. Tyre Pressure Supervision of Two Wheeler Using Machine Learning. Struct. Durab. Heal. Monit. 2022, 16, 271–290. [Google Scholar] [CrossRef]
- Patange, A.D.; Jegadeeshwaran, R.; Bajaj, N.S.; Khairnar, A.N.; Gavade, N.A. Application of Machine Learning for Tool Condition Monitoring in Turning. Sound Vib. 2022, 56, 127–145. [Google Scholar] [CrossRef]
- Shewale, M.S.; Mulik, S.S.; Deshmukh, S.P.; Patange, A.D.; Zambare, H.B.; Sundare, A.P. Novel Machine Health Monitoring System. In Proceedings of the 2nd International Conference on Data Engineering and Communication Technology, Pune, MA, USA, 15–16 December 2017. [Google Scholar] [CrossRef]
- Available online: https://support.apple.com/en-us/HT207941#:~:text=Every%20full%20minute%20of%20movement,is%20measured%20in%20brisk%20pushes (accessed on 7 March 2022).
- Available online: https://www.wareable.com/fitness-trackers/how-your-fitness-tracker-works-1449 (accessed on 11 March 2022).
- Available online: https://germaniainsurance.com/blogs/post/germania-insurance-blog/2020/12/04/how-do-fitness-trackers-work-how-accurate-are-they-really (accessed on 12 March 2022).
- Available online: https://venturebeat.com/uncategorized/3-big-problems-with-datasets-in-ai-and-machine-learning/ (accessed on 2 May 2022).
- Available online: https://www.deepwizai.com/simply-deep/why-random-shuffling-improves-generalizability-of-neural-nets (accessed on 4 June 2022).
- Chavarriaga, R.; Sagha, H.; Calatroni, A.; Digumarti, S.T.; Tröster, G.; Millán, J.D.R.; Roggen, D. The Opportunity challenge: A benchmark database for on-body sensor-based activity recognition. Pattern Recognit. Lett. 2013, 34, 2033–2042. [Google Scholar] [CrossRef] [Green Version]
- Reiss, A.; Stricker, D. Introducing a New Benchmarked Dataset for Activity Monitoring. In Proceedings of the 2012 16th International Symposium on Wearable Computers, Newcastle, UK, 18–22 June 2012; pp. 108–109. [Google Scholar] [CrossRef]
- Altun, K.; Barshan, B.; Tunçel, O. Comparative study on classifying human activities with miniature inertial and magnetic sensors. Pattern Recognit. 2010, 43, 3605–3620. [Google Scholar] [CrossRef]
- Banos, O.; Garcia, R.; Holgado-Terriza, J.A.; Damas, M.; Pomares, H.; Rojas, I.; Saez, A.; Villalonga, C. mHealthDroid: A novel framework for agile development of mobile health applications. In International Workshop on Ambient Assisted Living; Springer International Publishing: Cham, Switzerland, 2014; pp. 91–98. [Google Scholar]
- Stisen, A.; Blunck, H.; Bhattacharya, S.; Prentow, T.S.; Kjærgaard, M.B.; Dey, A.; Sonne, T.; Jensen, M.M. Smart Devices are Different: Assessing and Mitigating Mobile Sensing Heterogeneities for Activity Recognition. In Proceedings of the 13th ACM Conference on Embedded Networked Sensor Systems (SenSys ’15), Seoul, Republic of Korea, 1–4 November 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 127–140. [Google Scholar] [CrossRef]
- Zappi, P.; Lombriser, C.; Stiefmeier, T.; Farella, E.; Roggen, D.; Benini, L.; Tröster, G. Activity Recognition from On-Body Sensors: Accuracy-Power Trade-Off by Dynamic Sensor Selection. In Wireless Sensor Networks, Proceedings of the EWSN 2008, Bologna, Italy, 30 January–1 February 2008; Verdone, R., Ed.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2008; Volume 4913. [Google Scholar] [CrossRef]
- Bachlin, M.; Plotnik, M.; Roggen, D.; Maidan, I.; Hausdorff, J.M.; Giladi, N.; Troster, G. Wearable Assistant for Parkinson’s Disease Patients With the Freezing of Gait Symptom. IEEE Trans. Inf. Technol. Biomed. 2009, 14, 436–446. [Google Scholar] [CrossRef]
- Zhang, M.; Sawchuk, A.A. USC-HAD: A daily activity dataset for ubiquitous activity recognition using wearable sensors. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing (UbiComp ’12), Pittsburgh, PA, USA, 5–8 September 2012; Association for Computing Machinery: New York, NY, USA, 2012; pp. 1036–1043. [Google Scholar] [CrossRef]
- Shoaib, M.; Bosch, S.; Incel, O.D.; Scholten, J.; Havinga, P.J.M. Fusion of Smartphone Motion Sensors for Physical Activity Recognition. Sensors 2014, 14, 10146–10176. [Google Scholar] [CrossRef] [PubMed]
- Kwapisz, J.R.; Weiss, G.M.; Moore, S.A. Activity recognition using cell phone accelerometers. ACM Sigkdd Explor. Newsl. 2011, 12, 74–82. [Google Scholar] [CrossRef]
- Lockhart, J.W.; Weiss, G.M.; Xue, J.C.; Gallagher, S.T.; Grosner, A.B.; Pulickal, T.T. Design considerations for the WISDM smart phone-based sensor mining architecture. In Proceedings of the Fifth International Workshop on Knowledge Discovery from Sensor Data (SensorKDD ’11), San Diego, CA, USA, 21 August 2011. [Google Scholar]
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Duchesnay, E.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
- Wikipedia Contributors. “Supervised Learning”. Wikipedia, The Free Encyclopedia. Available online: https://en.wikipedia.org/wiki/Supervised_learning (accessed on 13 October 2022).
- Sekhar, C.; Meghana, P.S. A Study on Backpropagation in Artificial Neural Networks. Asia-Pac.J. Neural Netw. Its Appl. 2020, 4, 21–28. [Google Scholar] [CrossRef]
- Banerjee, S.; Bhattacharjee, P.; Das, S. Performance of Deep Learning Algorithms vs. Shallow Models, in Extreme Conditions—Some Empirical Studies. In International Conference on Pattern Recognition and Machine Intelligence; Springer: Cham, Switzerland, 2017; Volume 10597, pp. 565–574. [Google Scholar] [CrossRef]
- Li, G.; Hari, S.K.S.; Sullivan, M.; Tsai, T.; Pattabiraman, K.; Emer, J.; Keckler, S.W. Understanding error propagation in deep learning neural network (DNN) accelerators and applications. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC ’17), Denver, CO, USA, 11–17 November 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 1–12. [Google Scholar] [CrossRef]
- Zhang, M.; Rajbhandari, S.; Wang, W.; He, Y. DeepCPU: Serving RNN-based Deep Learning Models 10x Faster. In Proceedings of the 2018 USENIX Annual Technical Conference (USENIX ATC 18), Boston, MA, USA, 11–13 July 2018; pp. 951–965. [Google Scholar]
- Sun, L.; Du, J.; Dai, L.-R.; Lee, C.-H. Multiple-target deep learning for LSTM-RNN based speech enhancement. In Proceedings of the 2017 Hands-free Speech Communications and Microphone Arrays (HSCMA), San Francisco, CA, USA, 1–3 March 2017; pp. 136–140. [Google Scholar] [CrossRef]
- Sherstinsky, A. Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network. Phys. D Nonlinear Phenom. 2020, 404, 132306. [Google Scholar] [CrossRef] [Green Version]
- Wikipedia Contributors. Long Short-Term Memory. In Wikipedia, The Free Encyclopedia. Available online: https://en.wikipedia.org/w/index.php?title=Long_short-term_memory&oldid=1109264283 (accessed on 8 September 2022).
- Aljarrah, A.A.; Ali, A.H. Human Activity Recognition using PCA and BiLSTM Recurrent Neural Networks. In Proceedings of the 2019 2nd International Conference on Engineering Technology and its Applications (IICETA), Al-Najef, Iraq, 27–28 August 2019; pp. 156–160. [Google Scholar] [CrossRef]
- The MathWorks, Inc. Deep Learning Toolbox: User’s Guide (r2018a). 2021. Available online: https://www.mathworks.com/products/deep-learning.html (accessed on 22 June 2022).
- Available online: https://in.mathworks.com/help/deeplearning/ref/nnet.cnn.layer.bilstmlayer.html (accessed on 5 June 2022).
- Maksutov, R. Deep Study of a Not Very Deep Neural Network. Part 2: Activation Functions. Available online: https://towardsdatascience.com/deep-study-of-a-not-very-deep-neural-network-part-2-activation-functions-fd9bd8d406fc (accessed on 21 June 2022).
- Brownlee, J. A Gentle Introduction to Dropout for Regularizing Deep Neural Networks. Available online: https://machinelearningmastery.com/dropout-for-regularizing-deep-neural-networks (accessed on 21 June 2022).
- OpenGenus Foundation. Fully Connected Layer: The Brute Force Layer of a Machine Learning Model. Available online: https://iq.opengenus.org/fully-connected-layer (accessed on 21 June 2022).
- Koech, K.E. Cross-Entropy Loss Function. Available online: https://towardsdatascience.com/cross-entropy-loss-function-f38c4ec8643e (accessed on 23 June 2022).
- Shankar297. Understanding Loss Function in Deep Learning. Published on 20 June 2022. Available online: https://www.analyticsvidhya.com/blog/2022/06/understanding-loss-function-in-deep-learning (accessed on 25 June 2022).
- Shung, K.P. Accuracy, Precision, Recall or F1? Available online: https://towardsdatascience.com/accuracy-precision-recall-or-f1-331fb37c5cb9 (accessed on 11 July 2022).
- Jayaswal, V. Performance Metrics: Confusion matrix, Precision, Recall, and F1 Score. Available online: https://towardsdatascience.com/performance-metrics-confusion-matrix-precision-recall-and-f1-score-a8fe076a2262 (accessed on 23 June 2022).
- Available online: https://towardsdatascience.com/understanding-confusion-matrix-a9ad42dcfd62 (accessed on 15 July 2022).
- Available online: https://kharshit.github.io/blog/2018/12/07/loss-vs-accuracy (accessed on 16 July 2022).
- Mohajon, J. Confusion Matrix for Your Multi-Class Machine Learning Model. Available online: https://towardsdatascience.com/confusion-matrix-for-your-multi-class-machine-learning-model-ff9aa3bf7826 (accessed on 12 November 2022).
- Rueda, F.M.; Fink, G.A. Learning attribute representation for human activity recognition. In Proceedings of the IEEE International Conference on Pattern Recognition, Beijing, China, 20–24 August 2018; pp. 523–528. [Google Scholar]
- Ravi, D.; Wong, C.; Lo, B.; Yang, G.-Z. A Deep Learning Approach to on-Node Sensor Data Analytics for Mobile or Wearable Devices. IEEE Biomed. Heal. Inform. 2016, 21, 56–64. [Google Scholar] [CrossRef] [Green Version]
- Zhang, X.; Wong, Y.; Kankanhalli, M.S.; Geng, W. Hierarchical multi-view aggregation network for sensor-based human activity recognition. PLoS ONE 2019, 14, e0221390. [Google Scholar] [CrossRef]
- Athota, R.; Sumathi, D. Human activity recognition based on hybrid learning algorithm for wearable sensor data. Measurement. Sensors 2022, 24, 100512. [Google Scholar] [CrossRef]
- Ullah, M.; Ullah, H.; Khan, S.D.; Cheikh, F.A. Stacked Lstm Network for Human Activity Recognition Using Smartphone Data. In Proceedings of the 2019 8th European Workshop on Visual Information Processing (EUVIP), Roma, Italy, 28–31 October 2019; pp. 175–180. [Google Scholar]
- Ordóñez, F.J.; Roggen, D. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition. Sensors 2016, 16, 115. [Google Scholar] [CrossRef] [PubMed]
Author, Year | Dataset | Purpose | Classification Techniques | Accuracy | Comments |
---|---|---|---|---|---|
Prasad et al., 2021 [12] | Self-collected | Classification of 6 different classes of activities | Two-dimensional CNN model | 89.67% | Only an accelerometer was used to collect data; accuracy could be improved with other DL models.
Ronao et al., 2014 [21] | Self-collected | Classification of 6 different classes of activities | HMM-GMM classifier | 91.76% | The HMM-GMM model performed better than ANN, DT, and NB.
Krishnan et al., 2009 [23] | Self-collected | Recognition of short-duration hand movements | AdaBoost, HMM, k-NN | 86% | Collecting data from a large number of sensors can increase accuracy but is not feasible.
Qi et al., 2019 [24] | Self-collected | Classification of 12 different classes of activities | FR-DCNN classifier | Normal dataset: 95.27%; compressed dataset: 94.18% | The proposed model performed well, with fast speed and good accuracy.
Ali et al., 2020 [25] | Self-collected | Classifying activities into stationary, light ambulatory, intense ambulatory, and abnormal classes | J48 classifier | Stationary activities: 80%; other activities: 70% | The work could be applied in the medical field for monitoring purposes, but higher accuracy is required.
Hammerla et al., 2019 [26] | Opp, PAMAP2, DG | Classifying 11 activities of daily living | CNN, LSTM, and b-LSTM | CNN: 93.7%; LSTM: 76%; b-LSTM: 92.7% | This work claimed that CNNs should be preferred for long-term activities and RNNs for short-term ones.
Maurer et al., 2006 [27] | Self-collected | Comparing the impact of sampling rate and device placement on accuracy | Decision tree | Highest accuracy: 92.8% | No significant change in accuracy was noted above a 20 Hz sampling rate.
He et al., 2008 [30] | Self-collected | Classification of 4 different classes of activities | SVM model | 92.25% | The position of the accelerometer depends on the type of activity to be recognized.
Suwannarat et al., 2021 [33] | UCI HAR, RealWorld 2016, and WISDM | To create a lightweight classification model | DNN-based classifier | Comparable or better accuracy than the baseline classifier | The presented model can have many applications, especially in smartwatches.
HAR Class | Numeric Value | HAR Class | Numeric Value |
---|---|---|---|
Laying | 0 | Squatting | 5 |
Stationary | 1 | Stairs-up | 6 |
Walking | 2 | Stairs-down | 7 |
Brisk-walking | 3 | Cycling | 8 |
Running | 4 |
Sensors | Parameters Read |
---|---|
Accelerometer | Acceleration, Orientation |
Gyroscope | Angular Velocity, Orientation |
Magnetometer | Magnetic Field |
Purpose | Device | Specifications |
---|---|---|
Data collection | Smartphone | 128 GB storage, 6 GB RAM, Exynos 9825 (7 nm), Octa-core (2 × 2.73 GHz Exynos M4 & 2 × 2.40 GHz Cortex-A75 & 4 × 1.95 GHz Cortex-A55)
Model training | Laptop | 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40 GHz, 16.0 GB RAM
Dataset | Subject | Sample Rate (Hz) | Activity | Sample | Sensor | Reference |
---|---|---|---|---|---|---|
OPPORTUNITY | 4 | 32 | 16 | 191,564 | A, G, M | [52] |
PAMAP2 | 9 | 100 | 18 | 64,173 | A, G, M | [53] |
DSA | 8 | 25 | 19 | 75,998 | A, G, M | [54] |
MHEALTH | 10 | 50 | 12 | 40,522 | A, G, M | [55] |
HHAR | 9 | 100–200 | 6 | 366,038 | A, G | [56] |
Skoda | 1 | 96 | 10 | 22,000 | A | [57] |
Daphnet Gait | 10 | 64 | 2 | 49,942 | A | [58] |
UCI Smartphone | 30 | 50 | 6 | 10,299 | A, G | [22] |
USC-HAD | 14 | 100 | 12 | 41,998 | A, G | [59] |
SHO | 10 | 50 | 7 | 20,998 | A, G, M | [60] |
WISDM v1.1 | 29 | 20 | 6 | 91,515 | A | [61] |
WISDM v2.0 | 36 | 20 | 6 | 248,653 | A | [62] |
Our Custom Dataset | 3 | 100 | 9 | 3,631,500 | A, G, M | - |
Classifier Name | Parameter | Parameter Value |
---|---|---|
Random Forest | 1. n_estimators 2. criterion 3. random_state | 1. 100 2. gini 3. 43
Decision Tree | 1. min_samples_split 2. min_samples_leaf | 1. 2 2. 1 |
Support Vector Machine | 1. C 2. Kernel 3. Degree 4. gamma | 1. 1 2. rbf 3. 3 4. scale |
Gaussian Naïve Bayes | 1. var_smoothing | 1. |
K Nearest Neighbours | 1. algorithm 2. n_neighbors 3. weights | 1. auto 2. 10 3. uniform |
Multinomial Logistic Regression | 1. dual 2. tol 3. C 4. fit_intercept | 1. false 2. 3. 1 4. true |
(Correlation matrix of the 12 recorded sensor channels: X/Y/Z acceleration, angular velocity, magnetic field, and orientation. Each diagonal entry is 1 by definition; the off-diagonal coefficients are not reproduced here.)
Model | Test Accuracy % |
---|---|
Multinomial Logistic Regression | 67 |
Gaussian Naive Bayes | 89 |
Decision Tree Classifier | 93 |
Random Forest Classifier | 95 |
K Neighbors Classifier | 91 |
Support Vector Machine | 93 |
No. of Hidden Units | No. of Training Elements | No. of Testing Elements | Total Training Time (s) | Total Testing Time (s) | Training Time per Element (ms) | Testing Time per Element (ms) | Testing Accuracy (%) | Size of Trained Network (KB)
---|---|---|---|---|---|---|---|---|
10 | 5084 | 2179 | 1274.9 | 26.26 | 250.77 | 12.05 | 90.5 | 1,143,306 |
20 | 5084 | 2179 | 366.73 | 24.68 | 72.13 | 11.33 | 95.1 | 1,143,364 |
30 | 5084 | 2179 | 743.61 | 8.38 | 146.26 | 3.85 | 95.7 | 1,143,387 |
40 | 5084 | 2179 | 432.96 | 24.9 | 85.16 | 11.43 | 96.9 | 1,143,465 |
50 | 5084 | 2179 | 412.32 | 8.61 | 81.1 | 3.95 | 97.06 | 1,143,522 |
60 | 5084 | 2179 | 408.23 | 8.45 | 80.3 | 3.88 | 97.29 | 1,143,557 |
70 | 5084 | 2179 | 371.23 | 8.42 | 73.02 | 3.86 | 95.96 | 1,143,647 |
80 | 5084 | 2179 | 336.83 | 9.18 | 66.25 | 4.21 | 96.65 | 1,143,758 |
90 | 5084 | 2179 | 969.36 | 20.24 | 190.67 | 9.29 | 98.1 | 1,143,849 |
100 | 5084 | 2179 | 376.1 | 8.27 | 73.98 | 3.8 | 97.15 | 1,144,000 |
110 | 5084 | 2179 | 342.52 | 8.07 | 67.37 | 3.7 | 97.4 | 1,144,137 |
120 | 5084 | 2179 | 417.89 | 9.65 | 82.2 | 4.43 | 97.43 | 1,144,315 |
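A hedged sketch of how the parametric sweep behind the table above can be scripted: train the same Bi-LSTM classifier with an increasing number of hidden units and record the time and accuracy figures. The names build_bilstm_classifier, train, evaluate, train_loader, and test_loader are hypothetical helpers, not from the paper; only the sweep logic is shown.

```python
import time

for hidden_units in range(10, 130, 10):
    model = build_bilstm_classifier(hidden_units)   # hypothetical constructor
    t0 = time.perf_counter()
    train(model, train_loader)                      # hypothetical training helper
    train_time = time.perf_counter() - t0
    t0 = time.perf_counter()
    accuracy = evaluate(model, test_loader)         # hypothetical evaluation helper
    test_time = time.perf_counter() - t0
    # Per-element times in ms, over 5084 training and 2179 testing elements.
    print(f"{hidden_units:3d} units: "
          f"train {train_time:.1f} s ({1000 * train_time / 5084:.1f} ms/element), "
          f"test {test_time:.1f} s ({1000 * test_time / 2179:.1f} ms/element), "
          f"accuracy {accuracy:.2%}")
```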
Parameters | Value/Function |
---|---|
Output mode | last |
State activation function | tanh |
Gate activation function | sigmoid |
Input weights initializer | glorot |
Recurrent weights initializer | orthogonal |
Input weights learn rate factor | 1 |
Recurrent weights learn rate factor | 1 |
Input weights L2 factor | 1
Bias learn rate factor | 1 |
Bias L2 factor | 1
Bias initializer | unit-forget-gate |
Class | Precision | Recall | F1 Score |
---|---|---|---|
0 | 1 | 1 | 1 |
1 | 1 | 1 | 1 |
2 | 0.90 | 0.96 | 0.93 |
3 | 0.96 | 0.88 | 0.92 |
4 | 0.99 | 1 | 0.99
5 | 1 | 1 | 1 |
6 | 0.99 | 0.99 | 0.99 |
7 | 0.99 | 0.99 | 0.99 |
8 | 0.78 | 0.99 | 0.87
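As a check on the scores above (a worked example, not from the source), the per-class metrics follow the standard confusion-matrix definitions, with TP, FP, and FN counted per class:

```latex
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = 2 \cdot \frac{\mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```

For class 8 (Cycling), for example, F1 = 2(0.78 × 0.99)/(0.78 + 0.99) ≈ 0.87, the value shown in the table.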
Train-Test (%) | Observed Accuracy (%) |
---|---|
60–40 | 96.5 |
65–35 | 97.2 |
70–30 | 98.1 |
75–25 | 97.9 |
80–20 | 95.5 |
85–15 | 97.8 |
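A brief sketch of how such a split-ratio study can be scripted. Here X, y, the train_and_score helper (train the Bi-LSTM, return its test accuracy), and the random seed are assumptions for illustration, not from the paper.

```python
from sklearn.model_selection import train_test_split

for train_frac in (0.60, 0.65, 0.70, 0.75, 0.80, 0.85):
    # Stratified split so each activity class keeps its proportion.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_frac, stratify=y, random_state=43)
    acc = train_and_score(X_tr, y_tr, X_te, y_te)   # hypothetical helper
    print(f"{train_frac:.0%} train: accuracy = {acc:.1%}")
```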
Parameters | Value |
---|---|
Number of examples | 1,098,207 |
Number of classes | 6 |
Missing attribute values | NONE |
Walking | 424,400 (38.6%) |
Jogging | 342,177 (31.2%) |
Upstairs | 122,869 (11.2%) |
Downstairs | 100,427 (9.1%) |
Sitting | 59,939 (5.5%) |
Standing | 48,395 (4.4%) |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).