Paper 40
Abstract
People with motor disabilities require some mechanism or technological solution to move around
autonomously. The most common way is through a voice-controlled electromechanical chair; however,
these solutions are expensive and often require expert support in training. This paper aims to design a
low-cost and easy-to-use electromechanical chair that allows autonomous navigation controlled by an
electronic interface, a joystick, and voice recognition. In the first step, modeling an electrical wheelchair
prototype with 3D software is essential; subsequently, a voice command system to control wheelchair
navigation is developed based on Python's speech recognition library. Finally, an electronic
interface is designed and integrated with an Arduino embedded module to recognize the voice
commands and control the wheelchair's mechanical navigation system. The results of this research are
promising, considering the low cost of the hardware design, the prototype's solid performance, and the
easy-to-use navigation system. The joystick control was accurate and provided the four essential
wheelchair movements; the voice recognition uses multitasking with threading to interpret and execute
navigation commands concurrently. To improve the prototype in future work, we recommend
optimizing the response time of command execution and adapting the system setup to individual
users' requirements.
Keywords
Electric wheelchair, people with disabilities, voice recognition, threading programming
1. Introduction
Motor disability is a condition characterized by an individual's inability to independently engage
in physical movement. Peru has a notable prevalence in this particular domain, and despite
ongoing efforts towards the integration of individuals with disabilities, achieving this objective
remains challenging within our societal context. The inclusion of individuals with disabilities in
all economic activities is essential for promoting their independence. However, it is often
observed that these individuals have challenges in securing employment due to their specific
disability-related problems. According to the 2013 annual report of the National Institute of
Statistics and Informatics of Peru (INEI), a total of 932 thousand individuals exhibit motor
disabilities, which encompass challenges in mobility, ambulation, and the utilization of limbs [1].
According to a more recent statistical analysis published by INEI in 2019 [2], approximately 10%
of the Peruvian population is affected by impairments. Among this group, 15.1% specifically
experience motor disabilities,
which corresponds to a total of 484,598 individuals facing challenges related to mobility and
ambulation. The aforementioned publications provide an analysis of the various constraints
experienced by individuals with disabilities, encompassing physical, occupational, social, and
economic dimensions.
CITIE 2023: International Congress on Education and Technology in Sciences, December 04–06, 2023, Zacatecas,
Mexico
jmendozamo@utp.edu.pe (J. Mendoza-Montoya); u18100130@utp.edu.pe (K. E. Angulo-Oviedo);
u18103301@utp.edu.pe (E. Y. Turpo-Apaza); c19206@utp.edu.pe (R. Alfonte-Zapana)
0000-0001-9365-1723 (J. Mendoza-Montoya); 0000-0002-5602-201X (R. Alfonte-Zapana)
© 2023 Copyright for this paper by its authors.
Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org)
In the academic literature, there has been significant emphasis placed on the investigation of
electrical wheelchair navigation. As a result, numerous solutions have been proposed for
controlling an electrical wheelchair through the integration of electronic interfaces with a
joystick [3]. This particular control method relies on manual input through the use of hands.
However, in cases where individuals are unable to move their arms or hands due to disabilities,
alternative solutions based on signal processing are required. These solutions involve the
utilization of bio-signals, which are generated from various parts of the human body. For instance,
the bio-signal known as electro-oculography (EOG) can be produced through eye movement and
can be obtained using a system consisting of an operational amplifier, an analog-to-digital
converter (ADC), and a computer. This setup allows for the processing of the bio-signal and the
transmission of commands to the wheelchair [4, 5, 6]. Another type of biological signal is the
electroencephalogram (EEG), which originates from brain activity. However, the analysis and
processing of this signal are highly intricate and necessitate the use of expensive equipment [7].
The electromyogram (EMG) is a cost-effective method for capturing bio-signals associated with
muscle movements. However, its implementation necessitates the use of bio-amplifiers, analog-
to-digital converters (ADCs), and computational resources for signal processing [7, 8]. Voice
recognition technology has emerged as a highly promising solution for individuals with motor
disabilities. To enable this functionality, the use of a headset and microphone is required to
facilitate communication and issue commands to the wheelchair. In order to accurately interpret
and respond to specific verbal instructions for wheelchair control, the integration of dedicated
interpreter software and a microcontroller capable of processing voice signals is imperative [9,
10, 11, 3, 12, 13]. Finally, certain hybrid or dual approaches have been suggested in the form of
voice and vision-based mechanisms [14] or head gesture recognition [15, 16].
Nevertheless, it should be noted that these solutions tend to be expensive and frequently
necessitate the involvement of an expert to facilitate their utilization, upkeep, and instruction.
The objective of this study is to present a conceptual design for an electric wheelchair that can
be controlled using both voice commands and a joystick. The inclusion of these dual control
options is intended to ensure the wheelchair's functionality in the event of a voice command
failure. In such a scenario, the joystick serves as a backup control mechanism, allowing the
wheelchair to continue running seamlessly. The speech recognition library in Python is employed
for voice recognition, while an Arduino embedded system is utilized for processing and
recognizing voice commands. Additionally, the Arduino system is responsible for controlling the
motors that enable the wheelchair's regulated navigation.
The present study is organized as follows: Section 2 describes the materials and methods
employed in this study, encompassing the hardware design and software development. Section 3
presents the experimental findings. Section 4 presents the conclusions, and Section 5 the
recommendations.
2. Materials and methods

The electronic system comprises two power H-bridges, provided by an L298N driver module,
which incorporates buffers and enables the control of two motors. In our case, these motors are
responsible for the left and right movements of the wheelchair. The control circuit is regulated by
an Arduino embedded system, which incorporates an ATmega328 microcontroller. This eight-bit
microcontroller has sufficient computational capability to handle real-time data processing for
our application. To access the voice recognition library in the cloud, the embedded system must
establish an internet connection.
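As a sketch of how this cloud-based recognition can be wired up in Python: the SpeechRecognition library and the Spanish command set come from the paper, while the language code, the accent folding, and the function names are our assumptions.

```python
# Sketch of the voice-command pipeline (assumptions: Google's cloud
# recognizer via the SpeechRecognition library; "es-PE" language code).

COMMANDS = {
    "avanza": "FORWARD",
    "retrocede": "BACKWARD",
    "izquierda": "LEFT",
    "derecha": "RIGHT",
    "rapido": "FAST",
    "para": "STOP",
}

def normalize(text):
    """Map a recognized Spanish utterance to a navigation command."""
    import unicodedata
    # Strip accents ("rápido" -> "rapido") and lowercase before lookup.
    folded = unicodedata.normalize("NFD", text.lower())
    folded = "".join(c for c in folded if not unicodedata.combining(c))
    return COMMANDS.get(folded.strip())

def listen_once():
    """Capture one utterance and return a command, or None.

    Needs a microphone and an internet connection, since
    recognize_google() calls Google's cloud API.
    """
    import speech_recognition as sr  # pip install SpeechRecognition
    r = sr.Recognizer()
    with sr.Microphone() as source:
        audio = r.listen(source)
    try:
        return normalize(r.recognize_google(audio, language="es-PE"))
    except sr.UnknownValueError:
        return None  # utterance was not intelligible
```

In practice `listen_once()` would run in a loop, with its result forwarded to the Arduino over serial.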
The system activates the two H-bridges to provide continuous control of the left and right
motors. The movement of the wheelchair is determined by six fundamental commands (Table 1).
Table 1
Six voice commands
English      Spanish
Forward      Avanza
Turn right   Derecha
Turn left    Izquierda
Backward     Retrocede
Fast         Rápido
Stop         Para
A pulse width modulation (PWM) signal is sent from the microcontroller to each H-bridge,
driving the gearmotors and regulating the motor speed (Figure 2). The ultrasonic sensor works
with Echo and Trigger pins, which are linked to Arduino pins 7 and 6.
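The command-to-PWM mapping can be sketched as follows. The duty-cycle values and the differential-drive turning scheme are illustrative assumptions, since the paper does not list exact duty cycles; Arduino's analogWrite() takes 8-bit values (0–255).

```python
# Hypothetical command-to-PWM table for the two gearmotors driven
# through the L298N. Values are 8-bit duty cycles, as in analogWrite().
DEFAULT_SPEED = 150  # assumed cruising duty; "Rápido" would use 255

def motor_duties(command, speed=DEFAULT_SPEED):
    """Return (left_pwm, right_pwm) for a navigation command.

    Turning is done by driving only one motor; negative values
    mean reverse direction on that motor (H-bridge polarity flip).
    """
    table = {
        "FORWARD": (speed, speed),
        "BACKWARD": (-speed, -speed),
        "LEFT": (0, speed),    # right motor only -> turn left
        "RIGHT": (speed, 0),   # left motor only -> turn right
        "STOP": (0, 0),
    }
    return table[command]
```

On the Arduino side, each pair would be translated into direction pins plus an analogWrite() duty per H-bridge channel.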
Below is a list of the parts that were used in the system.
• 1 Arduino Uno (5 V)
• 1 L298N H-bridge module (5 V)
• 2 gearmotors (12 V/730 mA, 4.2 N·m torque)
• 1 ultrasonic sensor (5 V)
• 2 power supplies (5 V to 12 V, 20 W)
• 1 microphone
• 1 headset
Python is needed to build the system, along with the speech recognition library and an
algorithm to interpret the voice commands: "Avanza" ("forward"), "Derecha" ("turn right"),
"Izquierda" ("turn left"), "Retrocede" ("backward"), "Para" ("stop"), and "Rápido" ("fast").
In order to operate the wheelchair using a joystick, a logical framework will be implemented,
consisting of four distinct movements: forward, backward, left turn, and right turn. Each of these
movements will be calibrated to accommodate pulse width modulation based on the degree of
depression of the lever positioned at its center. To analyze this, we represent the range of joystick
displacement for each movement on a Cartesian coordinate system. To proceed, we divide the
Cartesian region into two zones: zone "a" is the left zone, which begins at 0, and zone "b" is the
right zone, which ends at 1023. The range is measured in bits,
taking into account the sensitivity of the analog-to-digital converter (ADC). In this particular case,
the ADC has 10 bits, resulting in a range of [0-1023] bits, which is equivalent to a voltage range
of [0-5] Volts. The center zone can be adjusted based on the joystick's sensitivity and individual
requirements.
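The zone logic above can be sketched as a mapping from raw 10-bit ADC readings to movements. The centre value and the dead-zone width are assumptions to be tuned per user, as the text notes.

```python
# Joystick decoding sketch: x/y ADC readings lie in [0, 1023] (10-bit
# ADC over 0-5 V); a configurable dead zone around the centre (512)
# selects "no movement". Thresholds here are assumptions.
CENTER = 512
DEADZONE = 100  # adjustable to the joystick's sensitivity (see text)

def joystick_to_command(x, y):
    """Map raw ADC readings to one of the four movements, or None."""
    dx, dy = x - CENTER, y - CENTER
    if abs(dx) <= DEADZONE and abs(dy) <= DEADZONE:
        return None                # lever at rest
    if abs(dy) >= abs(dx):         # dominant axis wins
        return "FORWARD" if dy > 0 else "BACKWARD"
    return "RIGHT" if dx > 0 else "LEFT"
```

The lever's deflection magnitude (|dx|, |dy|) could additionally scale the PWM duty for proportional speed control.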
To facilitate the selection of the mode for joystick or voice command, a red button has been
incorporated to enable the immediate choice of the desired operation mode by simply pressing
the button.
3. Results
3.1. Voice recognition
The experimentation involved conducting success and error tests with Google's speech
recognition library, and the outcomes are presented in Table 2.
Table 2
Voice command hits and misses list
Command Success Error
“Avanza” 18 (90%) 2 (10%)
“Retrocede” 13 (65%) 7 (35%)
“Izquierda” 10 (50%) 10 (50%)
“Derecha” 18 (90%) 2 (10%)
“Para” 19 (95%) 1 (5%)
“Rapido” 20 (100%) 0 (0%)
Based on these data, we can determine the effectiveness of each command and of the command
set as a whole.
1. “Avanza”: 18 hits were found out of 20 samples. It works 90% of the time when the order
is given.
2. “Retrocede”: Thirteen out of twenty samples were recognized correctly, a 35% error
rate. This command could fail in an emergency, which could harm the user if the
ultrasonic-sensor emergency stop were unavailable. The command is changed to
"retro," and the test is repeated.
3. “Izquierda”: Out of 20 samples, 10 were recognized correctly, a 50% error rate. Left
unchanged, this command would not be very useful in an emergency. Moreover, if the
user repeats the same command more than twice without a response, they might
conclude that the recognition is not working and stop using this option. We change
this command to "anti" and run the test again.
4. “Derecha”: 18 hits were found out of 20 samples. It works 90% of the time when the order
is given.
5. “Para”: Nineteen hits were found out of twenty samples. It works 95% of the time when
the order is given.
6. “Rapido”: Twenty samples were used, and twenty hits were found. It works every time
that the order is given.
Changing "Izquierda" to "anti" and "Retrocede" to "retro" yields the results shown in Table 3.
Table 3
Command recognition after the changes

Command      Hits (of 20)   Success rate
Retrocede    13             65%
Retro        19             95%
Izquierda    10             50%
Anti         17             85%
Following this test, "anti" and "retro" are adopted as the key commands for left turn and
reverse. This also makes the commands faster to say.
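The rates in Tables 2 and 3 can be recomputed directly from the reported hit counts (20 utterances per command); the helper names below are ours.

```python
# Recomputing the recognition rates of Tables 2 and 3 from raw counts.
TRIALS = 20
HITS = {"avanza": 18, "retrocede": 13, "izquierda": 10,
        "derecha": 18, "para": 19, "rapido": 20,
        "retro": 19, "anti": 17}

def success_rate(command):
    """Per-command success rate in percent."""
    return 100 * HITS[command] / TRIALS

def overall_rate(commands):
    """Pooled success rate over a set of commands, in percent."""
    return sum(HITS[c] for c in commands) / (TRIALS * len(commands)) * 100
```

With the final command set ("avanza", "retro", "anti", "derecha", "para", "rapido"), the pooled rate is 92.5%.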
A new software application was developed to regulate the operation of the motors. A power
bank was used to supply power to the Arduino and the L298N. The system was then linked to the
computer, and the issued commands were executed as expected. One issue that became apparent
was the rather long recognition time, approximately 4 seconds, of our initial code
implementation. Table 4 displays how long it takes an individual to articulate each of the
previously proposed commands.
Table 4
Command recognition delay (s)
N Avanza Retrocede Izquierda Derecha Para Rápido
1 0.567 0.940 0.827 0.626 0.379 0.673
2 0.520 0.809 0.860 0.580 0.423 0.455
3 0.529 0.831 0.714 0.639 0.401 0.586
4 0.473 1.094 0.682 0.637 0.410 0.629
5 0.540 0.930 0.721 0.584 0.334 0.719
6 0.814 0.824 0.752 0.620 0.379 0.694
7 0.700 0.853 0.601 0.586 0.401 0.738
8 0.669 0.924 0.850 0.599 0.360 0.607
Mean 0.602 0.901 0.751 0.609 0.386 0.638
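The per-command means in Table 4 follow directly from the eight timed samples; for instance, for "Para" and "Derecha" (rows copied from the table):

```python
# Recomputing two of the mean delays reported in Table 4 (seconds).
from statistics import mean

DELAYS = {
    "Para":    [0.379, 0.423, 0.401, 0.410, 0.334, 0.379, 0.401, 0.360],
    "Derecha": [0.626, 0.580, 0.639, 0.637, 0.584, 0.620, 0.586, 0.599],
}

def mean_delay(command):
    """Mean recognition delay, rounded to milliseconds."""
    return round(mean(DELAYS[command]), 3)
```

Both values match the "Mean" row of Table 4 (0.386 s and 0.609 s).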
Consequently, a novel experiment was conducted utilizing the updated orders in order to
enhance the time response. The outcomes of this experiment are outlined in Table 5, presented
as follows:
Table 5
Command recognition delay after the command change (s)
Retrocede Retro Izquierda Anti
0.901 0.523 0.751 0.524
With the preceding commands replaced, these new results show a reduced time interval. The
new "anti" command for the "Izquierda" function offers better precision and improved efficiency,
resulting in a shorter recognition time.
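The abstract notes that recognition and execution run concurrently via threading. A minimal sketch of that producer/consumer pattern follows; microphone input and motor output are simulated here with a list and a log, whereas the real system would call the speech library and the serial link.

```python
# Producer/consumer threading sketch: one thread recognizes commands,
# another executes them, so the system keeps listening while a command
# is being carried out. Input/output are simulated for illustration.
import threading
import queue

def listener(cmd_queue, utterances):
    for u in utterances:          # stand-in for microphone input
        cmd_queue.put(u)
    cmd_queue.put(None)           # sentinel: no more input

def executor(cmd_queue, log):
    while True:
        cmd = cmd_queue.get()
        if cmd is None:
            break
        log.append(cmd)           # stand-in for driving the motors

def run(utterances):
    q, log = queue.Queue(), []
    t1 = threading.Thread(target=listener, args=(q, utterances))
    t2 = threading.Thread(target=executor, args=(q, log))
    t1.start(); t2.start()
    t1.join(); t2.join()
    return log
```

The thread-safe queue decouples recognition latency from motor execution, which is the mechanism behind the latency reduction reported in the conclusions.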
Following our established approach, an assessment was conducted to determine the precision
of the HC-SR04 sensor when measuring various materials, including concrete, cardboard, MDF,
mirror plastic, and technopor (expanded polystyrene). The results of this evaluation are presented
in Table 6.
To gain a deeper understanding of the data, the dispersion about the mean was analyzed. The
sensor exhibits superior responsiveness on concrete surfaces; however, with materials such as
mirror plastic and styrofoam, the signal may introduce inaccuracies in the distance estimation.
Hence, it is recommended that the wheelchair control software incorporate the joystick interface
to enhance navigation performance.
Table 6
Standard deviation of how far the HC-SR04 can measure
Material Mean Standard deviation
Concrete 100.0636 0.1236
Cardboard 100.0377 0.1880
MDF 99.6463 0.1874
Plastic 100.6274 0.4727
Technopor 99.9567 0.2714
Figure 4: Data spread over a one-meter distance measured with the HC-SR04
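For context on these measurements, the HC-SR04 reports the round-trip time of an ultrasonic pulse on its Echo pin; distance follows from the speed of sound (about 343 m/s at room temperature, i.e. 0.0343 cm/µs). A minimal conversion sketch:

```python
# HC-SR04 echo-to-distance conversion. The speed of sound is an
# assumption for room temperature; it varies slightly with conditions,
# which contributes to the dispersion analyzed above.
SOUND_CM_PER_US = 0.0343

def echo_to_distance_cm(echo_us):
    """Convert the Echo pulse width (microseconds) to distance (cm)."""
    return echo_us * SOUND_CM_PER_US / 2  # /2: out-and-back path
```

An echo pulse of roughly 5831 µs corresponds to the 1 m nominal distance used in the test.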
These simulations show that the deformation depends on the chassis's shape and the material
used. The PVC frame exhibits the largest deformation, reaching 6.307 mm. The aluminum frame
follows with 0.282 mm, and the steel frame deforms the least, at 0.239 mm. The displacement
zone is the same in all of these simulations: the area where the load acts directly (Table 7).
Table 7
Deformation in different materials
Material Max. deformation (mm)
PVC 6.307
Aluminum 0.282
Steel 0.239
Table 8
Power-to-weight ratio (W/kg) as a function of motor power and user weight
Power 45 kg 50 kg 55 kg 60 kg 65 kg 70 kg
120W 2.7 2.4 2.2 2 1.8 1.7
150W 3.3 3 2.7 2.5 2.3 2.1
180W 4 3.6 3.3 3 2.8 2.6
210W 4.7 4.2 3.8 3.5 3.2 3
240W 5.3 4.8 4.4 4 3.7 3.4
270W 6 5.4 4.9 4.5 4.1 3.8
300W 6.7 6 5.4 5 4.6 4.3
330W 7.3 6.6 6 5.5 5 4.7
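The entries of Table 8 are consistent with the simple power-to-weight ratio in W/kg, rounded to one decimal (our interpretation; the units are not stated in the table):

```python
# Power-to-weight ratio, matching the entries of Table 8 (W/kg).
def power_to_weight(power_w, weight_kg):
    """Ratio rounded to one decimal place, as tabulated."""
    return round(power_w / weight_kg, 1)
```

For example, 120 W for a 45 kg user gives 2.7 W/kg, and 330 W for 70 kg gives 4.7 W/kg, matching the first and last rows.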
4. Conclusions
In summary, we have successfully developed a mechatronics-based prototype wheelchair that
incorporates essential features and is cost-effective. This design enables manual navigation using
a joystick or automated navigation through voice commands issued by the user.
A cost-effective prototype of a wheelchair was developed using both computational modeling
and physical construction, with the primary material being polyvinyl chloride (PVC). A
comprehensive structural analysis was conducted on the prototype in order to validate its ability
to endure the designated load, despite its relatively modest cost. Furthermore, the final iteration
of the wheelchair prototype incorporated tires sourced from an actual wheelchair, thus enhancing
its aesthetic appeal and giving it an authentic appearance.
The development of a control system that effectively coordinates the navigation of the
prototype wheelchair has been successfully achieved. This system demonstrates adequacy,
efficiency, and the potential for optimization. The implementation involved utilizing speech
recognition and threading techniques. Python was employed as the programming language for
voice command control, while the Arduino Software was utilized to facilitate the transmission
and reception of instructions to the actuators.
Similarly, we enhanced the efficiency of the voice command responses by reprogramming and
organizing the available phrases or commands in a manner that ensures user comfort. In addition,
these words can be received simultaneously with the transmission of commands to the actuators.
A delay of 1.1 seconds was achieved, improving on our initial workflow, which waited 4 seconds
between the execution of each order. Ultimately, we replaced the commands with the longest
utterance times with shorter words that convey the same instruction.
Finally, we have successfully integrated all the stages and subsystems that comprise the
mechatronic wheelchair prototype, validating its functionality and improving its efficiency, as
demonstrated by the results reported in this study.
5. Recommendations
Based on the findings of our research, we are able to offer the following advice for future
endeavors.
• The prototype needs to be made from something other than PVC so that it can hold a
person who weighs more than 50 kilograms. Aluminum may be a good choice because
it is not more expensive than other materials and can hold more weight than PVC.
• Similarly, we suggest getting motors with more power that can move the suggested
load through some kind of extra transmission. Online shops sell these, but they cost a
lot more than the ones we used for this project.
• To use electronic control, we suggest making a PCB card with all the parts that will be
used. In the same way, a Raspberry Pi or an ESP32 card can be used instead of an
Arduino. This makes sending messages to the actuators faster. Also, we believe it
would be more effective to use the Voice Recognition Module V3 (VRM v3). This
module can recognize voice commands without an Internet connection, does not
require a computer for signal processing, and allows training up to seven commands.
• Finally, based on where the prototype will be used, we suggest adding more sensors
to the area around the wheelchair to make the person using it safer.
Acknowledgements
We express our gratitude to Universidad Tecnológica del Perú for their valuable logistical
assistance, which facilitated the execution of this work.
References
[1] INEI, En el perú 1 millón 575 mil personas presentan algún tipo de discapacidad, 2013. URL:
https://m.inei.gob.pe/media/MenuRecursivo/noticias/np-178-2013.pdf.
[2] INEI, En el país existen 3 millones 209 mil 261 personas con discapacidad, 2019. URL:
https://m.inei.gob.pe/media/MenuRecursivo/noticias/np_136_2019_inei.pdf.
[3] S. M. Shifa, T. M. Mridul, S. S. Rafat, M. H. Monjur, M. R. Islam, M. K. Hassan, Design and
implementation of smart wheelchair with advanced control interfaces, in: 2023 3rd
International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST),
2023, pp. 155–159. doi:10.1109/ICREST57604.2023.10070067.
[4] J. P. A. Ogenga, P. W. Njeri, J. K. Muguro, Development of a virtual environment-based
electrooculogram control system for safe electric wheelchair mobility for individuals with
severe physical disabilities, Journal of Robotics and Control (JRC) 4 (2023) 165–178. URL:
https://doi.org/10.18196/jrc.v4i2.17165. doi:10.18196/jrc.v4i2.17165.
[5] J. Xu, Z. Huang, L. Liu, X. Li, K. Wei, Eye-gaze controlled wheelchair based on deep learning,
Sensors 23 (2023). URL: https://www.mdpi.com/1424-8220/23/13/6239. doi:10.3390/s23136239.
[6] L. Maule, M. Zanetti, A. Luchetti, P. Tomasin, M. Dallapiccola, N. Covre, G. Guandalini, M. De
Cecco, Wheelchair driving strategies: A comparison between standard joystick and gaze-
based control, Assistive technology : the official journal of RESNA 35 (2023) 180—192. URL:
https://doi.org/10.1080/10400435.2021.2009593. doi:10.1080/10400435.2021.2009593.
[7] L. Gunarathne, D. Welihinda, H. Herath, S. Yasakethu, Eeg-assisted emg-controlled
wheelchair for improved mobility of paresis patients, in: 2023 IEEE IAS Global Conference
on Emerging Technologies (GlobConET), 2023, pp. 1–6. doi:10.1109/GlobConET56651.2023.10149923.
[8] M. Gopichand, K. Rajeswari, E. Deepthi, Human–machine interface for wheelchair control
using semg signals, in: A. Kumar, G. Ghinea, S. Merugu, T. Hashimoto (Eds.), Proceedings of
the International Conference on Cognitive and Intelligent Computing, Springer Nature
Singapore, Singapore, 2023, pp. 395–406.
[9] K. Muthulakshmi, C. Padmavathy, N. Kirthika, B. Vidhya, M. A. P. Manimekalai, Internet of
things and smart intelligence-based google assistant voice controller for wheelchair, in: S.
Shakya, G. Papakostas, K. A. Kamel (Eds.), Mobile Computing and Sustainable Informatics,
Springer Nature Singapore, Singapore, 2023, pp. 761–774.
[10] T. K. Hou, Yagasena, Chelladurai, Arduino based voice controlled wheelchair, Journal of
Physics: Conference Series 1432 (2020) 012064. URL:
https://dx.doi.org/10.1088/1742-6596/1432/1/012064. doi:10.1088/1742-6596/1432/1/012064.
[11] S. M. Omair, S. A. Syed, M. M. Khan, A. Shaikh, R. Ismail, H. Shah, Z. Irfan, Prototype model of
voice activated wheelchair, in: 2023 Global Conference on Wireless and Optical Technologies
(GCWOT), 2023, pp. 1–6. doi:10.1109/GCWOT57803.2023.10064667.
[12] S. Murugaiyan, J. Varatharaj, K. Anandraj, M. Duraisamy, Voice controlled wheelchair, AIP
Conference Proceedings 2581 (2023) 030001. URL: https://doi.org/10.1063/5.0126311.
doi:10.1063/5.0126311.
[13] H. F. Fakhruldeen, A. A. Meri, A. H. Sa'id, A. N. Makttoof, M. A. Kadhim, H. A.-J. Al-Asady, An
arduino-based voice-recognition elevator for special purposes, Indonesian Journal of
Electrical Engineering and Computer Science 31 (2023) 828. URL:
https://doi.org/10.11591/ijeecs.v31.i2.pp828-834. doi:10.11591/ijeecs.v31.i2.pp828-834.
[14] M. Anshar, R. Sadjad, D. Dewiani, M. Abry, R. Prayudha, F. Fiqhi, M. Takbir, Control system
design for smart wheelchair robot, in: AIP Conference Proceedings, volume 2543, AIP
Publishing, 2022.
[15] I. K. Somawirata, F. Utaminingrum, Smart wheelchair controlled by head gesture based on
vision, Journal of Physics: Conference Series 2497 (2023) 012011. URL:
https://dx.doi.org/10.1088/1742-6596/2497/1/012011. doi:10.1088/1742-6596/2497/1/012011.
[16] S. Chatzidimitriadis, S. M. Bafti, K. Sirlantzis, Non-intrusive head movement control for
powered wheelchairs: A vision-based approach, IEEE Access 11 (2023) 65663–65674.
doi:10.1109/ACCESS.2023.3275529.