Internet of Things (journal homepage: www.elsevier.com/locate/iot)
Tutorial
Kanak Manjari, Madhushi Verma, Gaurav Singal
Article history: Received 18 July 2019; Revised 28 February 2020; Accepted 3 March 2020; Available online 16 March 2020
Keywords: Visually Impaired; Assistive Technology; Electronic Travel Aids; Obstacle Detection; Obstacle Avoidance; Image Processing; Deep Learning; Wearable Device

Abstract
One-sixth of the world population is suffering from vision impairment, as per WHO's report. In the past decades, many efforts have been made to develop devices that support the visually impaired and enhance the quality of their lives by making them capable. Many of those devices are either heavy or costly for general purposes. In this paper, a detailed comparative study of the relevant wearable and handheld devices developed for them is presented. The focus is on the prominent features of those devices, and an analysis has also been performed based on a few factors such as power consumption, weight, economy, and user-friendliness. The idea is to build a path for researchers who are trying to work in this field, either by developing a portable device or through an efficient algorithm, to ensure independence, mobility, and safety for visually impaired persons.

© 2020 Elsevier B.V. All rights reserved.
1. Introduction
According to the WHO [1], 1.3 billion persons are estimated to be suffering from vision impairment, out of which 188.5 million have mild vision impairment, 217 million have moderate to severe vision impairment, and 36 million are blind [2]. According to Medical Express [3], the number of blind persons is estimated to rise from 36 million to 115 million by 2050 as the population increases and individuals grow older. With this acceleration in the number of blind persons, there is an acute requirement for an effective aid for them. The majority of people who are visually impaired need some kind of assistive technology [4] for their daily tasks. Assistive Technology (AT) can be any device, product, item or software program that is used to enhance the functional capabilities of a disabled person. The main aim of an assistive device built for the visually blind is to make them capable, independent and self-sufficient.
As the number of visually impaired persons increases, the necessity of having a solution for navigation and orientation has also increased. The motive is to present the current state of this problem to researchers who want to contribute further in this field. This paper contains the state of the art of this area, i.e. what work has been done for the visually impaired till now, what devices have been developed for them, and what the advantages and limitations of those devices are. These points are discussed in depth, which provides a skeleton of the work done for the visually impaired.
The main aim was to contribute to this area by gathering information on the work done in this field and elaborating on the devices built to help the visually blind in their daily life. Recognizable work done for the visually impaired has been reviewed here, and the popular and useful devices built for them are discussed in detail. A detailed analysis has also been performed of the sensors used to build Electronic Travel Aids (ETAs).
∗ Corresponding author.
E-mail addresses: KM5723@bennett.edu.in (K. Manjari), madhushi.verma@bennett.edu.in (M. Verma), gauravsingal789@gmail.com (G. Singal).
https://doi.org/10.1016/j.iot.2020.100188
Table 1
Types of Sensors used to Build Devices for Visually Impaired.
Sensor | Key specification | Cost | Advantage | Limitation
RGB-D Camera | Depth range: 0.15 - 4.0 m | $9,000 | Good detection range | Costly
Ultrasonic Sensor | Depth range: 1 m | $2 - $256 | Cheap | Low detection range
Fisheye Camera | Angle of view: 100° - 180° | $165 - $800 | Large detection angle | Costly
Monocular Vision Camera | Provides high spatial resolution images | $169 - $350 | Provides quality images | -
Binocular Vision Sensor | Captures images at a fixed frequency | $800 - $3000 | Better than monocular sensor | Costly
Microsoft Kinect | Depth range: 0.5 - 4.5 m | $150 | Good detection range | Costly
3D/Depth Camera | Provides 3-D data | $149 - $179 | Has 3-D feature | Costly
The working and features of all the ETAs have been described, and the devices have been assessed against a few evaluation parameters discussed in Table 3. The advantages of our manuscript over the earlier surveys are as follows:
• Devices developed till 2019 have been added and discussed here. This may help readers stay updated with the latest technologies being used and gain better insight. In the past couple of years, not only have artificial intelligence and deep learning-based technologies been used in the software modules of devices, but various applications dedicated to helping the visually impaired have also been developed. We have tried to incorporate these data into our survey.
• In Section 4, technologies and approaches such as sensors, image processing techniques and deep learning models used in the development of devices have been discussed.
• App-based solutions have been discussed in Section 4.3 of our paper. Many applications have been developed to help the visually impaired, out of which the popular ones have been presented here.
• A comparison of various sensors has been presented in Table 1 as sensors are one of the oldest mechanisms used by the
researchers for developing assistive devices for VI.
This paper is organized into several sections. Section 1 is the Introduction, where the facts and figures related to this field, including the statistics of persons suffering from vision impairment all over the world, are discussed. In Section 2, the background of this problem is discussed, along with the categories of assistive solutions built for blind persons and their guidelines. In Section 3, the work done and the devices built for the visually impaired are discussed; these devices are categorized and shown in a tabular format with their features. Various approaches, such as sensors, image processing, and artificial intelligence, used for building these devices are discussed in Section 4, and their details are shown in a table for ease of understanding. In Section 5, we discuss the information extracted from all of these papers. Section 6 is the Current Research Stage and Section 7 is the Conclusion & Future Scope: in Section 6, the current stage of our research is described in brief, and in Section 7, we conclude with our opinions and observations along with the scope where further work is needed.
2. Background
In this section, the basic background details of this problem, along with the issues & challenges faced by the visually impaired, are discussed.
Visual Substitution can be explained as an alternative for the blind in which an image may be captured by a camera, the information is processed, and output is provided to the user in a non-visual form such as an auditory mode, a vibratory mode, or a combination of both. Visual Enhancement is almost similar to Visual Substitution, but the difference is that here the output is provided to the user in a visual mode, i.e. "Virtual Reality" ("VR") / "Augmented Reality" ("AR"). Through the process of Visual Replacement, data is displayed to the user's visual cortex in the brain; this is mainly used in the medical field. In this paper, our prime focus is on Visual Substitution, which can be further subdivided into three parts as suggested by various authors, i.e. "Electronic Travel Aids" ("ETAs") [5], "Electronic Orientation Aids" ("EOAs") [5] & "Position Locator Devices" ("PLDs") [5]. Many advancements have been made in the development of ETAs, which makes them more popular than EOAs and PLDs among blind persons.
Electronic Travel Aids (ETAs): These retrieve data from the environment via sensors and then provide it to the user; examples include the Cane [6], NavBelt [7], Guide Cane [8], etc. They are the most commonly used visual substitution devices among the visually impaired. According to the National Research Council [9], ETAs are expected to follow a defined set of rules.
Electronic Orientation Aids (EOAs): These guide the person along their path by providing directions or path signs; examples include Wheelesley [10] and the Smart Walker [11]. Such devices help in guiding the user through and around the way, and are expected to follow the EOA guidelines [12].
Position Locator Devices (PLDs): These determine the user's location, which helps visually blind persons locate themselves while they are traveling. The Global Positioning System (GPS) is a popular example of a Position Locator Device.
Familiarization with the challenges and issues faced by a visually impaired person can help sighted people understand what a person with vision impairment has to face in routine life.
Environmental Challenges: People who are visually impaired have a hard time navigating the outdoor environment. Traveling in crowded places like markets, railway stations, etc. is even more challenging for them. Therefore, blind people seek help either from assistive technology or from family members.
Social Challenges: Visually impaired people sometimes suffer from an inferiority complex as they are not able to engage with others in some activities like sighted people do. They also face difficulties in playing outdoor games.
Technological Challenges: Blind people face difficulties while surfing the internet for research, recreation, shopping, etc. A blind person cannot easily extract information from web pages. Although many devices have been developed for this purpose of information extraction, they are not commonly used by blind persons of all ages.
Others: There are many other areas where blind people face challenges and feel different from sighted people, such as doing household chores, putting on make-up, recognizing currency denominations, obstacle detection, navigation, crossing the road, etc.
In [5], solutions developed for the visually impaired up to 2017 have been discussed. An overall analysis, as well as the merits and demerits of those solutions, has been shown in tabular format. In another survey paper [13], devices have been categorized based on their features and performance parameters. In [14], the developments in tactile and audio-based assistive technologies for the blind community have been summarized to provide an overview of those solutions. In [15], the authors have reviewed studies exploring how sensory substitution might allow for the online control of action via visual information perceived through sound or touch. The current state of the art for sensory substitution approaches to object recognition, localization, and navigation, and the potential these approaches have for revealing a metamodal behavioral and neural basis for the online control of action, have also been reviewed. These survey papers helped in understanding the approach and flow of writing a survey paper in this area. Although devices have been properly described and compared in them, not much focus has been placed on explaining the approaches used behind developing those devices. Artificial intelligence-based devices developed in the past few years were also not included in previous survey papers. The advantages of reading this survey over the existing ones are discussed in the contribution part of Section 1.
First of all, we created a cluster of keywords relevant for searching papers for the survey. The Google Scholar web search engine, along with the IEEE and ACM databases, was explored to find the relevant papers. Year-wise filtering was done, and the selected papers were then categorized into two groups, i.e. survey and regular papers. The papers were examined and data were extracted in Excel/Word format for further study. Keeping the notes and relevant data in separate files helped in performing an efficient exploration and in tracking the work done in the past few years. New keywords were added to the cluster each time a new paper was studied. The process was simple and straightforward, but it may help readers in performing their own research. The process we adopted for writing this survey paper is described in Fig. 1.
3. Devices developed for the visually impaired
As discussed earlier, the requirement is to help the visually impaired by providing an assistive solution for their daily tasks and to make their lives easier, safer and more independent. For a long time, researchers have been developing such solutions, which can help them in obstacle detection, navigation, object recognition, traveling, etc. A few such devices are discussed further in this section to present the state of the art for this problem, and those devices are shown in a hierarchy in Fig. 2.
Fig. 1. Process adopted for writing this survey: choose keywords, run each query through a manual search on Google Scholar, ACM and IEEE, filter the results year-wise, separate survey and regular papers, and examine the chosen papers while adding new keywords as they are found.
3.1. Electronic Travel Aids (ETAs)
In this section, all the ETAs built for the visually impaired are discussed decade-wise. Wearable and handheld devices are discussed separately for all three decades.
3.1.1. 1990-2000
ETAs developed in the decade 1990-2000 are discussed in this section.
Wearable NavBelt [7] was developed to help blind people walk safely, as shown in Fig. 3. It is a kind of belt fitted with sensors which automatically senses the environment over an angle of 120 degrees in the horizontal front direction and alerts the user upon detecting obstacles. However, it is quite heavy to wear, as its weight is 3.4 kg, and it does not allow the user to control the speed. NavBelt was developed to inform users about the presence of obstacles while going to a certain destination. It is worn just like a normal belt with sensors fitted on it, and output is received through stereo earphones. As described in Table 3, it is a wearable device that works in offline mode. It is suitable for indoor as well as outdoor environments and can detect both dynamic and static objects.
An Ultrasonic Ranging System [16] was developed in the form of a helmet with sensors fitted on it. Its main aim was to expand the detection range beyond existing solutions. Another effective and low-cost solution was developed in the form of the People Sensor [17], which was capable of detecting any hindrance and the presence of persons around the user, helping the user travel safely at reduced cost.
Handheld An Ultrasonic Cane [6] was developed which replaced the traditional white cane, as it has a longer range than the traditional cane. A new device, the Guide Cane [8], was developed which solved NavBelt's limitation of requiring the user to comprehend the guidance signal in order to walk fast, as shown in Fig. 3. It incorporates the same techniques as NavBelt, but it has a wheel attached below and a steering axle on which the sensor head is mounted. It is less time-consuming and the user can control the speed, but it is not capable of detecting hanging objects, sidewalks, steps, etc. It overcomes the limitation
Fig. 2. Hierarchy of the devices discussed, grouped by decade (1990-2000, 2001-2010, 2011-present) and by carrying mode (Wearable (W) / Handheld (H)); examples include the Mechatronic System and the Smart Glass.
of NavBelt by giving the user direct control of speed, allowing fast walking. It uses the same technology as NavBelt, but here the wheel and sensor head are attached via a cane. In [20], a vision substitution system was developed for the blind as a means of studying the processing of information in the central nervous system. Four hundred solenoid stimulators were arranged in a 20 × 20 array built into a dental chair. The stimulators were spaced 12 mm apart and had 1 mm diameter 'Teflon' tips which vibrate against the skin. The user manipulates a television camera mounted on a tripod, which scans objects placed on a table in front of him/her.
3.1.2. 2001-2010
ETAs developed in the decade 2001-2010 are discussed in this section.
Wearable In [21], the authors provided an experimental tool for examining brain plasticity, with implications for perceptual and cognition studies. Tactile-Vision Sensory Substitution (TVSS), which was miniaturized and inexpensive, was used for this purpose. This tool helps in exploring the perceptual, physiological, and brain-mechanism correlates of a complex cognitive process. A wearable Sonar Navigation System [22], shown in Fig. 4, was developed to replace the cane, as it provides
navigation support in a closed environment with the advantage of running on limited resources. The Sonar Navigation System determines obstacles around the user in different directions using 6 sonar sensors. As mentioned in Table 3, it is suitable for
indoor as well as outdoor environments. Additionally, it can detect both dynamic and static objects. Another effective and low-cost solution, the Ultrasonic Navigation System [23] shown in Fig. 4, was developed to help blind people in navigation; it uses a sonar system and provides vibrotactile feedback. The Ultrasonic Navigation System is a wearable device, as mentioned in Table 3, which can detect objects in an unconstrained environment with the help of an ultrasonic sensor and can detect
static as well as dynamic objects. A Wearable Helmet [24] was developed which uses a stereo vision camera and provides the position, distance, and size of the obstacles in the form of musical tones. It overcomes the limitation of the previous solution by giving importance to the obstacle rather than focusing on the background details.
Another real-time assistance prototype was also developed in the form of a Helmet [25], wherein a stereo camera was used, but it has the limitation of being incapable of detecting objects at a lower level, like stones, holes, etc. Another ETA was developed in the form of a Helmet [26], where the camera was fitted on the helmet and sensors were placed on the user's belt to provide navigation to the visually blind. It can detect the presence of a person through face detection and cloth texture analysis. Since a low-cost processor is used in this device, it is budget-friendly. However, the range of face detection is poor and needs to be improved, and the user needs to carry additional devices like a belt and backpack along with the helmet. Another solution for navigation was developed in the form of a Smart Robot integrated with RFID and GPS, which
Fig. 3. a) NavBelt [7], b) Guide Cane [8], c) HOMERE [18], d) Garment [19].
guides the user along a predefined path, as shown in Fig. 2 of [27]. Its biggest advantage is that it is both technically and economically feasible, but it has the disadvantage of dependency on RFID tags. As mentioned in Table 3, it is suitable for indoor as well as outdoor environments and provides location and orientation to the user. Not many handheld ETAs were developed in this decade.
3.1.3. 2011-present
ETAs developed from 2011 to the present are discussed in this section.
Wearable In [28], the authors discussed the theory of operation and a design overview of the Tongue Display Unit (TDU). The TDU is a 144-channel programmable pulse generator that delivers dc-balanced voltage pulses suitable for electrotactile (electrocutaneous) stimulation of the anterior dorsal tongue through a matrix of surface electrodes. A Smart Clothing
Prototype (Garment) [19] was developed, integrated with sensors, vibration motors, and power supplies, which is capable of detecting obstacles for the visually impaired, as shown in Fig. 3. The Garment is a wearable obstacle detection system integrated with textile shapes. The advantage of this prototype is that it is lightweight, flexible, washable and can be easily worn. As stated in Table 3, it is suitable for the indoor environment and can detect static objects. A wearable obstacle detection system integrated into a shoe [29] was developed to help elderly and visually impaired persons. It consists of sensors, a microcontroller, and a transmitter & receiver module, and detects obstacles in front of the user.
A wearable Acoustic Prototype [32] mounted on sunglasses, shown in Fig. 5, was developed to help the visually impaired in navigation using a 3-D "CMOS image sensor" [33]. The "Multiple Double Short Time Integration" (MDSI) [33] method was used for subtraction of background illumination. It has the advantage of a wide range of 60 degrees, which helps the user in getting details of the position and width of objects. It is a sensory navigation device for the visually blind, and the approach used in it is similar to a bat's acoustic orientation. As stated in Table 3, it can work in an outdoor environment and can
Fig. 4. a) Drishti [30], b) Sonar Navigation [22], c) RAW [31], d) Ultrasonic Navigation System [23].
detect static as well as dynamic objects. A wearable navigation system was developed in the shape of spectacles and waist belts [34,35], which uses an AT89S52/LPC2148 microcontroller-based embedded system. On detection of an obstacle, it provides pre-recorded voice-based output to the user with low power consumption. An "Ultrasonic Spectacle & Waist-Belt" [34] was developed for the blind as a portable tracking device. Sensors are fitted on a belt which detects objects in three directions (left, right & front) & provides voice-based output to the user. It is cost-effective, but the device is not lightweight.
Navigation Assistance for Visually Impaired (NAVI) [36] was developed using an RGB-D sensor, as shown in Fig. 5. It is a kind of wearable device where the camera hangs from the user's neck and provides the user an obstacle-free path to navigate safely in an unknown environment. It is robust to lighting changes, glare and reflection. NAVI, as discussed in Table 3, can detect both static and dynamic types of objects and is suitable for indoor and outdoor environments. Deep-SEE [37] uses computer vision algorithms and a Deep Convolution Neural Network (CNN) for detecting objects, tracking them and recognizing obstacles in an outdoor environment. The You Only Look Once (YOLO) model [38] is used here, and the system is robust to any light intensity but is costly. For training purposes, the "Visual Object Tracking 2016" database, which contains 60 videos, has been used. Two CNNs have been used here in a unique object-tracking method, as it is trained offline. Switching between finding the object location and object tracking through visual similarity is the unique principle used here.
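Deep-SEE's trained networks are not reproduced here; purely as a hedged illustration of the detection step in such a pipeline, the sketch below runs a present-day pretrained YOLO detector on a single video frame. The ultralytics package, the yolov8n.pt weights and the frame.jpg input are assumptions for this sketch, not the authors' setup.

```python
# Hedged sketch: frame-by-frame object detection with a pretrained YOLO model.
# Assumes the third-party `ultralytics` package and its small `yolov8n.pt`
# weights; this stands in for the YOLO model used in Deep-SEE, it is not it.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # downloads pretrained weights on first use

def detect_objects(frame_path, min_confidence=0.5):
    """Return (class_name, confidence, [x1, y1, x2, y2]) for each detection."""
    results = model(frame_path)[0]        # one image -> one Results object
    detections = []
    for box in results.boxes:
        conf = float(box.conf[0])
        if conf < min_confidence:
            continue
        cls_id = int(box.cls[0])
        xyxy = [float(v) for v in box.xyxy[0]]
        detections.append((results.names[cls_id], conf, xyxy))
    return detections

if __name__ == "__main__":
    for name, conf, box in detect_objects("frame.jpg"):
        print(f"{name}: {conf:.2f} at {box}")
```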
A novel electronic device, NavGuide [39], in the form of a shoe was developed for the visually impaired, shown in Fig. 5. It is capable of providing obstacle-free path-finding. NavGuide is a wearable Electronic Travel Aid in the form of a shoe on which ultrasonic sensors, a vibration motor, and other hardware are fitted. Sensors are fitted on the left and right to detect objects from these directions. It is cheap, lightweight and consumes low power, but it is unable to detect pits, downstairs or downhill. As shown in Table 3, it is suitable for indoor as well as outdoor environments and can detect both dynamic and static objects. A Monocular Vision-Based System, the Mechatronic System [40], was developed to guide people during running, walking and jogging in an outdoor environment. The user is guided along a lane or line which is extracted through a set of image processing algorithms. The system is deployed as a camera mounted on the user's chest and a set of gloves for providing vibratory output to the user, as shown in Fig. 1 of [40]. The device can detect the right path at a speed of 10 km/h. According to Table 3, it is developed for the outdoor environment and can properly detect static objects.
A novel Convolution Neural Network named the Kinect Real-Time Convolution Neural Network (KrNet) [41] was developed to perform scene classification on mobile devices in real-time. It is deployed in the form of a wearable smart glass which is cost-effective. In [42], the authors tested the acuity of blindfolded, sighted, naïve vOICe users. "The vOICe" is a visual-to-auditory sensory substitution device which encodes images taken by a camera worn by the user into "soundscapes" such that experienced users can extract information about their surroundings.
Fig. 5. a) Acoustic Prototype [32], b) NAVI [36], c) NavGuide [39], d) Bbeep [43].
Handheld A new low-cost handheld Electronic White Cane was developed which consisted of ultrasonic sensors & a monocular camera, as shown in Fig. 2 of [44]. This cane is cheap and works in real-time with good performance in the indoor environment. As discussed in Table 3, it is suitable for indoor as well as outdoor environments and can detect both dynamic and static objects. Another "Smartphone-Based guiding system" [45] was developed as a solution to the navigation problem of the visually blind. The smartphone captures an image, which is then processed on a server. A "Faster-Region Convolutional Neural Network" (F-RCNN) is used by the server for detection and recognition of objects in every image. This system offers better usability than conventional systems but requires a constant internet connection. A Multi-Sensor based Obstacle Detection System [46] was developed to be integrated onto a cane using a Model-Based State-Feedback Control Strategy. The results obtained prove that this technique provides a good improvement over previous methods in reducing error.
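The server-side detection step of such a smartphone-based system could look roughly like the sketch below, which uses a COCO-pretrained Faster R-CNN from torchvision. The model choice, score threshold and function interface are assumptions for illustration, not the trained model described in [45].

```python
# Hedged sketch of a server-side detection step, assuming torchvision >= 0.13
# and a COCO-pretrained Faster R-CNN; not the model trained in [45].
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def recognize(image_file, score_threshold=0.7):
    """Detect objects in an uploaded image and return label ids, scores, boxes."""
    image = to_tensor(Image.open(image_file).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]       # Faster R-CNN takes a list of tensors
    keep = output["scores"] >= score_threshold
    return {
        "labels": output["labels"][keep].tolist(),
        "scores": output["scores"][keep].tolist(),
        "boxes": output["boxes"][keep].tolist(),
    }
```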
The Robot Wheeled Device [47] is an autonomous navigation system that guides people in an indoor environment. From the initial robot and human poses, the model calculates the new human position using A∗ graph search. An obstacle detection and classification system based on deep learning, the Patterned Light Projector, was developed, which combines a patterned light field with a camera as shown in Fig. 1 of [48]. It is tested for both multi-class and binary classification using a Support Vector Machine (SVM), CNN & Long Short-Term Memory (LSTM). This solution is very lightweight, efficient, robust and cost-effective. As pointed out in Table 3, it is suitable for indoor as well as outdoor environments and can detect both dynamic and static objects.
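The A∗ step used for such guidance can be sketched on a simple occupancy grid; the grid representation, 4-connected moves and Manhattan heuristic below are illustrative assumptions, not the robot's actual planner.

```python
# Hedged sketch of A* path search on a 2-D occupancy grid (0 = free, 1 = obstacle).
# The grid, 4-connected moves, and Manhattan heuristic are illustrative choices.
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(heuristic(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, current, parent = heapq.heappop(open_set)
        if current in came_from:
            continue
        came_from[current] = parent
        if current == goal:
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_g = g + 1
                if new_g < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = new_g
                    heapq.heappush(open_set,
                                   (new_g + heuristic((nr, nc)), new_g, (nr, nc), current))
    return None

# Example: plan around a single obstacle cell in a 3x3 grid.
print(astar([[0, 0, 0], [0, 1, 0], [0, 0, 0]], (0, 0), (2, 2)))
```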
An "Intelligent Situation Awareness & Navigation Aid" (ISANA) was developed to be integrated with a cane and help the visually impaired during indoor navigation, as shown in Fig. 2 of [49]. A "Time-Stamped Map Kalman Filter" [49] approach was used
for obstacle detection and avoidance. According to Table 3, it works well in the indoor environment and requires an internet connection. Bbeep [43] is a sonic collision avoidance system for blind travelers where the camera is integrated into a suitcase, as shown in Fig. 5. Based on the current positions of pedestrians obtained from depth images, it calculates their future positions and determines the risk of collision with the blind person. Stereo image sensing is combined with YOLOv2 for this purpose. It automatically calculates the future poses of pedestrians and alerts users before a collision. According to Table 3, it is a hand-carried device integrated on a suitcase and built for crowded outdoor environments like airports and railway stations, where the chances of collision are higher.
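A simplified version of that prediction step is sketched below: pedestrian positions from two consecutive depth frames are extrapolated linearly and compared against a safety radius around the traveler. The frame interval, safety radius and look-ahead horizon are illustrative assumptions, not Bbeep's actual parameters.

```python
# Hedged sketch: linear extrapolation of a pedestrian's position and a simple
# collision-risk check against a safety radius around the traveler.
import numpy as np

def collision_risk(ped_prev, ped_now, user_pos, dt=0.1, horizon=2.0, radius=0.75):
    """Return (risk, time_to_breach): risk is True if the pedestrian's
    extrapolated path enters `radius` metres of the user within `horizon` s."""
    ped_prev, ped_now, user_pos = map(np.asarray, (ped_prev, ped_now, user_pos))
    velocity = (ped_now - ped_prev) / dt               # metres per second
    for t in np.arange(0.0, horizon, dt):
        predicted = ped_now + velocity * t
        if np.linalg.norm(predicted - user_pos) <= radius:
            return True, t
    return False, None

# Pedestrian 3 m ahead, walking straight toward a stationary user at ~1.5 m/s.
print(collision_risk(ped_prev=(0.0, 3.15), ped_now=(0.0, 3.0), user_pos=(0.0, 0.0)))
```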
An Autonomous Path Guiding Robot (APGR) for the visually impaired was developed to assist them in their movements, shown in Fig. 6 of [50]. The robot can move on multiple paths and remember all of them; it thus acts as a substitute for a Guide Dog [51], which is a very costly solution for people with low income. This robot works on all types of surfaces and is affordable to all user groups, unlike guide dogs. It guides the blind person in path planning and works well in places where GPS does not work. According to Table 3, it works well in all kinds of environments, be it indoor or outdoor, plans paths and then retraces them while avoiding both static and dynamic obstacles.
3.2. Electronic Orientation Aids (EOAs)
In this section, all the EOAs developed for the visually impaired are discussed decade-wise. Wearable and handheld devices are discussed separately for all three decades.
3.2.1. 1990-2000
EOAs developed in the decade 1990-2000 are discussed in this section.
Wearable A real-time tactile simulation-based navigation and mobility solution was developed for the visually blind in the form of a wearable Stereo Vision System [52], which is inexpensive and consumes less power.
Handheld A Personal Adaptive Mobility Aid (PAM-AID) [11] was proposed which was able to overcome the limitations of long canes and guide dogs, as it gives the user mobility independence. It is a type of smart walker which is very simple and intuitive to use. A wheelchair called Wheelesley [10] was developed which is smart enough to help the user navigate unconstrained environments efficiently through sensors and an on-board computer, but it limits independent navigation ability as a person needs to sit in the wheelchair. Wheels are connected to either side of the base, as shown in Fig. 5 of [10]. As specified in Table 3, it is a non-wearable device that needs an internet connection for some of its functionalities. It is specially designed for the indoor environment and can detect both static and dynamic objects.
3.2.2. 2001-2010
EOAs developed in the decade 2001-2010 are discussed in this section.
Wearable Another navigational system, Drishti [30], shown in Fig. 4, is a wearable system developed to overcome the limitation of existing navigational systems by providing dynamic interaction; it can easily adapt to changes in both indoor and outdoor environments. As stated in Table 3, it is a wearable device that can detect both static and dynamic objects.
Handheld A new Robot-Assisted Wayfinding (RAW) [31] system, shown in Fig. 4, was developed as a combination of a robotic guide and Radio Frequency Identification (RFID) sensors; it is an inexpensive solution for large public places such as airports, malls, supermarkets, etc. RAW is a system designed for visually blind persons which works in an indoor area and helps them in finding their way with the help of integrated RFID sensors. One more effort was made in developing a Traffic Light Detector [53], which was based on a mobile-cloud collaborative approach. It is quite portable and there is no need to carry additional hardware, but it constantly requires an internet connection.
3.2.3. 2011-2019
EOAs developed from 2011 to 2019 are discussed in this section.
Wearable A wearable RGB-D camera-based navigation system was also developed in the form of a vest for the visually blind by using the Simultaneous Localization and Mapping (SLAM) algorithm, shown in Fig. 1 of [54]. This algorithm overcomes the limitation of stereo-vision based algorithms by working properly in low-texture environments such as corridors, where the stereo-vision algorithm does not work properly. It works well in an indoor environment and can detect both static and dynamic objects. NAVIG [55] was developed to increase the blind user's autonomy through a virtual augmented reality system. This system helps in route selection and guidance for complex routes by integrating a Geographic Information System with different classes of objects. It has a higher precision rate but requires an internet connection in real-time. Not many handheld EOAs were developed in this decade.
3.3. Position Locator Devices (PLDs)
PLDs are devices which determine the user's location. This helps the blind to locate themselves while they are traveling.
3.4. Others
Another device called the Personal Guidance System (PGS) [56] was developed which helped blind people while traveling by helping them find their orientation and location. It does not require any additional cost of installation and maintenance, but it requires hardware miniaturization. A Stereo Vision-based Obstacle Detection System [57] was built to help the visually blind in walking. It can detect obstacles of 10 cm height at a distance of 3 to 5 meters, but the algorithm was slow. A portable & fast-processing navigation system [58] was developed which can help the user in route planning and information extraction as well. Another real-time tactile simulation-based navigation and mobility solution was developed for blind persons in the form of a wearable Stereo Vision System [52], which is very inexpensive and consumes less power.
A virtual environment system, "Haptic and Audio Multimodality to Explore and Recognize the Environment" ("HOMERE") [18], shown in Fig. 3, was developed to provide navigation and guidance inside a virtual environment to train the visually blind in the usage of the cane and help them in exploring a new site. This type of system helps the blind person become aware of the usage of any new device or new site. It is a system built for the visually impaired to train them for an unknown environment and make them familiar with a place and its infrastructure for future visits. A handheld novel Tactile Navigation System [59] was developed which gives vibratory output and is compact, lightweight and easily fits in different palm sizes. A head-mounted prototype of a Depth-Based Obstacle Detection System [60] was developed which helps the visually impaired by providing distance information. It works in both indoor and outdoor environments but performs better indoors. "Haptic Alerts for Low-hanging Obstacles" ("HALO") [61] was developed as an integration with the traditional cane to detect and warn users of low-hanging objects. Adding this functionality to the cane, however, increases its weight.
A Blind Navigation Support System [62] was developed for blind persons to be used for navigation in an indoor environment. A neural network was used to extract relevant features from the depth information provided by the Kinect, and was also used for classification, achieving good accuracy. The Microsoft Kinect is, however, not a practical solution as it cannot be carried easily due to its weight and size. In [63], a multiple-sensor-based prototype was developed to detect a wider range of obstacles. It is capable of detecting obstacles ranging from smaller objects such as toys to staircases and uneven surfaces, with varying response times. A preliminary prototype of an Obstacle Avoidance System [64] was developed to assist the visually impaired by generating disparity maps through a carried stereo camera. This system aims to help the user avoid colliding with people in crowded areas.
A prototype of an Obstacle Recognition System [65] was developed for the visually impaired using Radio Frequency Identification. Here, a Braille interface is used, and the RFID tag reader is placed on the cane. It is durable and cost-effective. Another solution for providing navigation support to the visually impaired was developed using stereo imaging, named the Blavigator [66] prototype. An Ensemble Empirical Mode Decomposition (EEMD) algorithm was developed for collision avoidance in this prototype. Then came a Smartphone-Based Obstacle Detection & Classification System [67], which is suitable for indoor as well as outdoor environments. The multiscale Lucas-Kanade algorithm was used for the extraction of interest points. An Automated Mobility and Orientation System [68], integrated on the cane, was developed for blind as well as partially sighted people, which provides direction information as well as obstacle information to help in avoiding them. A GSM-GPS module is linked with the system to determine the user's location. An Ultrasonic Cane [69] was developed with three pairs of ultrasonic transmitters and receivers such that it was able to detect both ground-level and aerial objects efficiently. It is cost-effective, with a detection range of 5 - 150 cm.
Another effective and low-cost Cane-based Smart Obstacle Detection System [70] was developed with sensors at the bottom and circuitry at the top for feedback. It achieves higher accuracy when the cane is moved at an angle of 45 degrees in any direction. A novel smartphone-based Aerial Obstacle Detection System [71] was developed using a 3-D smartphone, which can detect branches, awnings, etc. The algorithm used here captures scenic 3-D data through stereo vision. The limitation of the system is its dependency on a 3-D camera. A new wearable mobility aid integrated into sunglasses and gloves [72] was developed to detect and categorize crosswalks using cloud processing and deep learning techniques. It has the advantage of working in real-time while being computationally efficient. Another solution in the form of an Android application, a Mobile Face Recognition System [73], was developed for detection and recognition of faces to help the visually impaired. Mobile computing along with the OpenCV library was used here to perform detection. It was observed that the face detection accuracy is greater than the face recognition accuracy in well-lit conditions. It uses low memory and little data for its functioning.
Another work for the visually impaired was the development of an Indoor Staircase Detection and Recognition System [74] to help them in indoor navigation. This system uses an RGB-D camera mounted on the user's chest, and the depth frames captured by the camera are used to recognize upstairs, downstairs, etc. An SVM classifier was used to detect staircases. The algorithm overcomes the problem of detection in extreme illumination but does not work well in dark environments. A Wearable Mobility Aid [75], based on a smartphone, was developed for the blind by using 3D vision and deep learning. Here, an RGB-D camera was used for capturing frames and a Convolution Neural Network (CNN) was used for the categorization of detected obstacles. The detection performance was observed to be excellent, close to 98 percent, using the LeNet architecture. It is small, lightweight and can perform in real-time, but it cannot categorize every element of the environment because of the use of the LeNet architecture.
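A LeNet-style classifier of the kind referred to in [75] could be sketched in PyTorch as below; the 32x32 single-channel input, the five obstacle classes and the layer sizes are assumptions for illustration, not the authors' trained network.

```python
# Hedged sketch of a LeNet-style CNN for categorizing detected obstacles,
# assuming 32x32 single-channel (depth) crops and 5 obstacle classes.
import torch
import torch.nn as nn

class LeNetObstacle(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 28 -> 14
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 10 -> 5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNetObstacle()
logits = model(torch.randn(1, 1, 32, 32))        # one fake depth crop
print(logits.argmax(dim=1))                       # predicted obstacle class id
```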
Using the Microsoft Kinect sensor, a real-time Obstacle Detection System [67,76] was developed to help visually blind persons indoors. A 3D image processing technique is used, through which the system can detect even walls and loose obstacles, but it does not work well in extreme light conditions. A Collision Detection Method [77] was developed using image segmentation to detect those objects which existing solutions cannot detect, like doors, walls, etc. A graph-based region merging algorithm is applied to obtain the segmentation result, which can successfully detect both textured and non-textured objects. A novel Obstacle Detection System [78], based on infrared and visual sensor data, was developed to help the visually impaired in navigation using the "Google Project Tango Development Kit" [79]. This system was unable to detect small obstacles and cannot operate in direct sunlight.
Based on fuzzy logic over image depth, a novel Obstacle Avoidance Algorithm [80] was developed for guiding visually impaired persons. The system includes components such as a camera, compass, GPS, microphone, gyroscope, wi-fi and microcontroller. It is cost-efficient, but even this system is not capable of detecting walls, doors, etc. An "Electromagnetic Sensor Prototype" [81] was developed to help the visually impaired in autonomous walking. A microwave radar [81] was placed on a traditional cane, which detects obstacles over a wider range. It provides better resolution and consumes low power. An Augmented Reality (AR) markers-based system [82] was developed to assist the visually impaired in indoor navigation, where a pre-trained AlexNet network was used. It has the limitation of facing blind spots because of hardware issues. A new algorithm [83] was developed using a Deep Neural Network (DNN), and objects were detected using a single camera. For extracting global features of the image, an unsupervised VGG16 training algorithm was used, and to extract local features, GoogLeNet Inception V1 was used.
Another algorithm was developed for real-time pathfinding [84] for blind people, which also used a Deep Convolution Neural Network to recognize patterns across images. It was observed that VGG16 performed best in the semantic segmentation task. An Array of Lidars and Vibrotactile Units (ALVU) [85] was developed, which is a hands-free and contactless wearable device that can detect low- and high-hanging objects in the environment. A sensor belt and haptic strap need to be worn by the user like a belt; they are lightweight, small and inexpensive. GoNet [86] is a semi-supervised deep learning approach used for traversability estimation. A fisheye camera is used here to capture images, and Generative Adversarial Networks (GANs) are used to predict whether the area seen in the image is safe or not. It is memory-efficient and fast in terms of real-time processing. A wearable head-mounted pair of glasses [87] was developed to provide navigation to the visually impaired. It uses depth segmentation & a deep convolution neural network, which provide both traversability awareness and terrain awareness. It has good accuracy and speed for real-world applications.
The Vision-Based Indoor Navigation System (ViNav) [88] is a very cost-effective system that implements navigation with indoor mapping and localization based on smartphones. Another concept, Virtual Navigation [89], was introduced to help blind people experience unknown locations through a real walk while still staying in a controlled environment. For this, the user needs to wear a camera on the head and carry a laptop on the back. Another wearable solution, in the form of Optical See-Through Glasses [90], was developed to help the visually impaired reach their destination in an indoor environment; it is low-cost and small. To help the visually blind follow the shortest path while avoiding obstacles in parallel, a novel "Sub-goal Selection" based algorithm was developed.
A prototype of a wearable vision assistance system [91] was introduced to ease the life of blind persons using binocular vision sensors. Stereo Image Quality Assessment (SIQA) was used to choose the most informative images, which were sent to the cloud for further processing. ASSIST [92] is an indoor positioning and navigation system which guides the user in an indoor environment with high accuracy. It performs three main tasks: location recognition, object recognition, and semantic recognition. The user needs to fit the camera on the body and use a smartphone-based application for this purpose. A wearable Smart Glass [93] was developed for recognition of store sign text to help the visually impaired. SSD models along with a text recognition module were integrated to achieve the goal, but there is a dependency on the internet for this purpose.
4. Approaches used for developing solutions
Different approaches have been used by authors to develop solutions for the visually impaired. The approaches used are sensor-based, image-processing-based and application-based.
4.1. Sensor-based approach
Sensors are the basic devices commonly used to gather data from the environment, and most ETAs use sensors. Angle of View (AoV) can be described as the angular range of a scene captured by a camera, i.e. the angle over which a camera can capture an image. In Table 1, a few of the sensors are listed which have been used in the past and are also currently being used by researchers in this area. RGB-D sensors are sensors which provide both depth and RGB information: an image captured by an RGB-D sensor provides RGB as well as depth information. A depth image is an image in which every pixel encodes the distance between the image plane and the corresponding object in the RGB image. Its main advantage is that it is more precise than other sensors, but the working range is limited.
Ultrasonic sensors are the most commonly used sensors, as they are inexpensive and are not affected by the color or transparency of objects. This sensor transmits an ultrasonic wave which reflects after hitting any object present in the forward direction. It calculates the distance to the object by measuring the time between transmission and reception, but it is unable to detect obstacles at ground level. The fisheye camera is used for surveillance purposes due to its wide angle of view, but works over a shorter distance. The monocular vision camera provides high spatial resolution images and is inexpensive; however, it is inconsistent with the human-eye vision system. The binocular vision sensor captures images at a fixed frequency, which allows 3D vision; it is very expensive and has a limited focus. The Microsoft Kinect is a type of sensor which is also used very commonly by researchers these days, as it provides a variety of features, though it is quite heavy, which makes it impractical to use. It also has some privacy issues, as it can easily be hacked. The 3D/depth camera provides 3-D data, computing depth images in real time.
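For the ultrasonic case, the distance computation from the echo delay is simple to state; the sketch below assumes a measured round-trip time in seconds and the speed of sound in air at roughly 20 °C, and is only a generic illustration of the principle described above.

```python
# Hedged sketch: converting an ultrasonic echo's round-trip time into distance.
# Assumes the speed of sound in air (~343 m/s at about 20 degrees Celsius).
SPEED_OF_SOUND_M_S = 343.0

def echo_time_to_distance(round_trip_seconds):
    """The pulse travels to the obstacle and back, so halve the path length."""
    return SPEED_OF_SOUND_M_S * round_trip_seconds / 2.0

# A 5.8 ms round trip corresponds to an obstacle roughly 1 m away.
print(f"{echo_time_to_distance(0.0058):.2f} m")
```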
Radio-Frequency Identification (RFID) is the most cost-efficient solution used to help the visually blind in navigation. Radio waves are utilized to read and capture data stored on a tag attached to an item. A tag can be read from a distance and does not need to be in the direct line of sight of the reader to be tracked. Augmented Reality / Virtual Reality (AR/VR) has also been used in a couple of papers, specifically for training the visually blind in the navigation of unknown environments. AR is a technology that adds digital information to an image and is used in applications for mobile phones and tablets. VR is used to create a simulated environment; it creates an environment different from the real one and is used in entertainment, training, etc. A cloud computing approach has also been used, where people have built smartphone-based applications to help the visually impaired. Cloud computing deals with distributed computing resources to handle applications; in simple terms, it means using cloud resources outside the organization. These services are used over the internet and are paid for by the customers (cloud customers) once delivered, as per the chosen business model.
In [94], pattern recognition was investigated in 6 blind and 6 blindfolded sighted persons through auditory substitution in a computer environment. Users had to scan patterns displayed on the screen of a PC by moving the pen of a tablet, and the area centered on the pointer was then translated into sounds according to the visual-auditory mode. Users were trained to learn this code through 12 one-hour sessions. The study showed that the actual blind persons performed better than the sighted persons. In [95], an experimental prototype was developed by the authors to allow optimization of the sensory substitution process. A personal computer was used that was connected to a miniature head-fixed video camera and to headphones. Through image processing, edge detection and graded resolution were performed on the image captured by the camera. Each pixel of the image was assigned to a sinusoidal tone, and the information was sent to the user through the headphones.
In [96], the authors have tried to analyze whether blind individuals treated with a retinal prosthesis could also benefit from using the visual signal together with non-visual signals while navigating. The participants wore goggles that approximated the field of view and the resolution of the Argus II prosthesis. They showed better precision when navigating with reduced vision, compared to navigating without vision. In [97], the authors have reviewed the methods used to present visual, auditory, and modified tactile information to the skin. The present and potential future applications of sensory substitution, including tactile vision substitution (TVS), tactile auditory substitution, and remote tactile sensing or feedback (teletouch), have also been discussed.
To investigate the effects of perceptual experience (visual versus sensory substitution) on depth perception through a Sensory Substitution (SS) system, the object localization abilities of early blind (n=10) and blindfolded sighted control subjects (n=20) were assessed before and after a practicing period with a visual-to-auditory SS device by the authors in [98]. In [99],
the ’visual’ acuity of blind persons perceiving information through a newly developed human-machine interface, with an
array of electrical stimulators on the tongue, has been quantified using a standard Ophthalmological test (Snellen Tumbling
E) by the authors. The interface may lead to practical devices for persons with sensory loss such as blindness, and offers a
means of exploring late brain plasticity. In [100], a methodological approach to perceptual learning was used to allow both
early blind subjects (experimental group) and blindfolded sighted subjects (control group) to experience optical information
and spatial phenomena, on the basis of visuo-tactile information transmitted by a 64-taxel pneumatic sensory substitution
device. The methodological approach adopted by the authors could constitute a background for perceptual learning programmes aiming to give knowledge about visual-spatial concepts, particularly the ones relative to depth.

Table 2
Comparison of Different Models used for Developing Solutions for Visually Impaired.
4.2. Image processing and deep learning-based approach
Image processing is another approach, based on which many devices process the image captured by the camera. It uses many techniques for this purpose, such as image segmentation, depth map estimation, and simultaneous localization and mapping. "Image Segmentation" is the process of partitioning an image into different sections called super-pixels. The motive of segmentation is to convert the picture into a representation that is more meaningful and simpler to analyze; it is used to delineate the areas containing objects and the edges in pictures. A depth map is an image or group of images that contains data identifying the distance of the surfaces of scene objects from a viewpoint, and the set of techniques and algorithms that recovers this representation of the spatial structure of a scene is called depth map estimation. The Simultaneous Localization and Mapping (SLAM) algorithm helps in solving the problem of creating or updating the map of a new environment while simultaneously keeping track of an agent's location.
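As a toy illustration of how a depth map can drive obstacle warnings in systems like those above, the sketch below flags any image region whose median depth falls under a threshold. The depth units, warning threshold and 3x3 region grid are assumptions, not taken from any of the surveyed devices.

```python
# Hedged sketch: flagging near obstacles from a depth map (values in metres).
# The 3x3 region grid and 1.5 m warning threshold are illustrative choices.
import numpy as np

def near_regions(depth_map, threshold_m=1.5, grid=(3, 3)):
    """Split the depth map into a grid and report regions closer than threshold."""
    h, w = depth_map.shape
    warnings = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = depth_map[i * h // grid[0]:(i + 1) * h // grid[0],
                              j * w // grid[1]:(j + 1) * w // grid[1]]
            if np.median(block) < threshold_m:
                warnings.append((i, j))            # e.g. (1, 0) = middle-left
    return warnings

depth = np.full((120, 160), 4.0)                   # mostly far background
depth[40:80, 0:50] = 1.0                           # a close obstacle on the left
print(near_regions(depth))
```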
Many machine learning and deep learning techniques have also been used in the last few years to help the visually impaired, such as SVM, CNN & LSTM. Machine learning algorithms give computers the ability to learn without having to be explicitly programmed; machine learning is the study of algorithms and statistical models used to perform a specific task. "Deep Learning" is the part of machine learning that extracts features from input data; most of these models are based on "Artificial Neural Networks" (ANNs), like CNNs, and learning here can be supervised, unsupervised or semi-supervised. An "SVM" is a supervised model that analyzes data used for regression and classification analysis. It performs classification tasks by drawing hyperplanes in a multidimensional space that separate different class labels.
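As a concrete instance of the SVM usage described above, the sketch below trains a linear SVM on feature vectors; the synthetic features and the two example classes are placeholders for illustration, not data from any of the surveyed systems.

```python
# Hedged sketch: training an SVM classifier on synthetic obstacle features,
# standing in for the SVM-based classification steps cited in this survey.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two synthetic classes (e.g. "staircase" vs "flat floor") in a 4-D feature space.
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 4)),
               rng.normal(2.5, 1.0, size=(200, 4))])
y = np.array([0] * 200 + [1] * 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="linear").fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```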
In [101], an experimental system for the conversion of images into sound patterns has been presented. The system was designed to provide auditory image representations within the limitations of the human hearing system, as a step towards the development of a vision substitution system for the blind. In [102], the author has discussed sensory substitution devices that convert images into sound. Sensorimotor learning may facilitate, and perhaps even be required, to develop expertise in the use of multimodal information; the blind users acquired synthetic synaesthesia, with visual experience evoked by sounds, only after gaining such expertise. In [103], the authors tested image-to-sound conversion-based localization of visual stimuli (LEDs and objects) in 13 blindfolded participants. The results demonstrate that the device allowed blindfolded subjects to increasingly know where something was by listening, and indicate that practice in naturalistic conditions effectively improved "visual" localization performance.
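The image-to-sound mapping used by such systems can be approximated in a few lines: scan the image column by column, map row position to pitch and pixel brightness to loudness, and concatenate the resulting tones. The sample rate, frequency range and column duration below are assumptions, not the parameters of The vOICe or of [101].

```python
# Hedged sketch of a vOICe-like image-to-sound mapping: columns become time,
# rows become pitch, and brightness becomes loudness. Parameters are illustrative.
import numpy as np

def image_to_soundscape(gray, sample_rate=22050, col_duration=0.05,
                        f_min=200.0, f_max=4000.0):
    """gray: 2-D array in [0, 1], row 0 at the top. Returns a mono waveform."""
    rows, cols = gray.shape
    freqs = np.linspace(f_max, f_min, rows)          # top of the image = high pitch
    t = np.arange(int(sample_rate * col_duration)) / sample_rate
    chunks = []
    for c in range(cols):
        tone = np.zeros_like(t)
        for amplitude, freq in zip(gray[:, c], freqs):
            if amplitude > 0:
                tone += amplitude * np.sin(2 * np.pi * freq * t)
        peak = np.max(np.abs(tone))
        chunks.append(tone / peak if peak > 0 else tone)   # normalize each column
    return np.concatenate(chunks)

# A bright diagonal stripe on a dark background produces a descending pitch sweep.
waveform = image_to_soundscape(np.eye(32))
print(waveform.shape)
```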
4.3. App-based solutions
There are many existing applications built for the visually impaired which work like an extra pair of eyes for them. Applications have made life easier for individuals living with blindness or a visual disability.
Seeing AI by Microsoft [104] Microsoft's Seeing AI is an amazing application which is, sadly, currently available for iOS users only. It is a stunning application, offered for free. Seeing AI can recognize and speak text detected by the phone camera, either in short snippets or full pages at once. It can read barcodes on groceries and other product labels, speak the item name, and usually give extra information, for example nutrition labeling, cooking, and other instructions.
Using Seeing AI, we can take photos of our loved ones and later use the application to tell us who is nearby. An experimental setting can describe the scene around us, for example, "A fenced-in yard" or "A blue door on an apartment building." We can also forward pictures we receive by email or find on Facebook or Twitter, and Seeing AI will do its best to describe the activity and read any text contained in the picture.
LookTel by IPPLEX [105] LookTel Money Reader can recognize various kinds of currency and speak the denomination, allowing the visually impaired to count their money. Users simply point their iOS device at a bill, take a photo with the camera, and wait for the amount to be spoken aloud to them. Before this application, the visually impaired depended on others to tell them the value of each bill, but now users can count their cash independently.
VizWiz by ROCHCI [106] VizWiz is an application that enables the user to take a photo with their device and ask any question about the picture. Questions are sent to a volunteer web worker, email, or Twitter, and are ordinarily answered within a couple of minutes or even seconds. This technology helps the visually impaired recognize their environment.
SayText by Norfello Oy [107] SayText scans the text inside a picture, for example a medical form or a cafe menu, and reads it aloud. The application's Optical Character Recognition utility then examines the content. Tapping the screen gives announcements, and once scanned, swiping right reads the document aloud.
Eye Color Studio [108] Eye Color Studio is a photo editing application focused solely on editing the eyes of any individual in any photograph. Since it is centered on this alone, the results are often much better than those produced by more general photo editing applications. To use Eye Color Studio, simply select a photograph from your device. The application automatically recognizes the eyes of the person in the photograph, but you can also complete this step manually for even better accuracy.
TetraMail [109] TetraMail is a usable email client that helps blind people check their email and reply to it via voice commands. All interactions are touch-based and voice-based so that they can be used conveniently by blind persons. This application is user-friendly and consistent throughout, and has been tested and verified by 38 blind participants.
In Table 3, all the devices are listed and compared under the following columns: Name of device, Analysis Type, Coverage, Object Type, Carrying Mode, and Evaluation Parameters. The "Analysis Type" category is divided into two sub-categories, Online and Offline. The "Coverage" category is sub-divided into three categories: Indoor, Outdoor, and Both. "Object Type" is also sub-divided into three sub-categories: Static, Dynamic, and Both. "Carrying Mode" is sub-divided into two: Wearable and Hand-held. The last category, "Evaluation Parameters", is sub-divided into four: Power Consumption, Weight, Economic, and User-Friendly. The "Online" category means the device needs an internet connection for its working, and the "Offline" category means it does not. The "Indoor" category means the device can perform its functionality in an indoor area only, the "Outdoor" category means the device is suitable for working in an outdoor environment only, and the category "Both" denotes that the device can perform in indoor as well as outdoor environments. The "Static" category means the device can detect only static/stationary objects, the "Dynamic" category means it can detect only moving objects, and again the category "Both" here means the device can detect both static and dynamic objects. The "Wearable" category covers devices the user can wear, and the "Hand-held" category covers non-wearable devices that need to be carried in the hands. "Power Consumption" is one of the evaluation parameters, indicating how much power a device consumes, rated in three ways: Low, Average, and High. "Weight" is another evaluation parameter, indicating whether the device is Light, Average, or Heavy. "Economic" indicates whether the device is budget-friendly or not. Lastly, the "User-Friendly" parameter determines whether the use of the device is easily understandable by the user or not.
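For readers who prefer a machine-readable form, the categorization scheme described above can be encoded as a small data structure. The sketch below is merely an illustration of that scheme; the type and field names are ours and do not come from any of the surveyed devices.

```python
# Illustrative encoding of the Table 3 categorisation scheme; the enum
# members mirror the categories described above.
from dataclasses import dataclass
from enum import Enum

class AnalysisType(Enum):
    ONLINE = "online"        # needs an internet connection to work
    OFFLINE = "offline"      # works without an internet connection

class Coverage(Enum):
    INDOOR = "indoor"
    OUTDOOR = "outdoor"
    BOTH = "both"

class ObjectType(Enum):
    STATIC = "static"
    DYNAMIC = "dynamic"
    BOTH = "both"

class CarryingMode(Enum):
    WEARABLE = "wearable"
    HANDHELD = "handheld"

class Level(Enum):           # used for power consumption and weight ratings
    LOW = "low / light"
    AVERAGE = "average"
    HIGH = "high / heavy"

@dataclass
class DeviceEntry:
    name: str
    year: int
    analysis: AnalysisType
    coverage: Coverage
    objects: ObjectType
    carrying: CarryingMode
    power_consumption: Level
    weight: Level
    economic: bool           # budget-friendly or not
    user_friendly: bool      # easy to understand and operate

# Hypothetical example row, not taken from Table 3:
example = DeviceEntry("Smart cane prototype", 2019, AnalysisType.OFFLINE,
                      Coverage.BOTH, ObjectType.BOTH, CarryingMode.HANDHELD,
                      Level.LOW, Level.LOW, True, True)
```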
Out of the devices described briefly in Table 3, most are heavy to carry because they have a camera and other sensors attached to them. Efforts have been made to develop compact and lightweight devices in the form of vests, garments, spectacles, shoes, etc., so that blind persons can easily wear them in their daily life. With the addition of different cameras, new sensors, and other new technologies, advancements have been made, but the cost of the devices has also increased. Although many efforts have been made in this area, there is still a need for a device that can solve the basic problems of the visually impaired. If a visually blind person wants to identify currency, read the text written on a signboard, measure depth distance, detect the texture of a path, recognize objects, know about the surrounding scene, and learn much more about the environment around them, a hardware- or software-based solution may be of help to them.
5. Observation
The literature survey indicates that earlier (till 2000) sensor-based devices were developed to help the visually blind in navigation and obstacle detection. Sensors such as ultrasonic and radar sensors were integrated into the cane or other wearable/hand-held devices in a way that made them comfortable to use. Then camera-integrated devices (till 2015) were developed using different image processing techniques, which made the devices slightly heavier than the previous ones because of the weight of the cameras. In the last few years, people have started using deep learning approaches for obstacle detection, which require good processing power. As shown in Table 3, a few popular devices are listed with their features. It is observed that most of the devices do not need an internet connection to perform their task; devices with an integrated Global Positioning System (GPS) and the various applications built for the visually blind are the ones that depend on an internet connection. Also, most of the devices are suitable for both indoor and outdoor coverage and detect static as well as dynamic obstacles. And there is a good ratio of wearable to hand-operated devices developed since the beginning.
Table 3. Multi-parameter analysis of different devices used by visually impaired persons. (Columns: Device, Ref, Year; Analysis Type: Online/Offline; Coverage: Indoor/Outdoor/Both; Object Type: Static/Dynamic/Both; Carrying Mode: Wearable/Handheld; Evaluation Parameters: Power Consumption, Weight, Economic, User-Friendly.)
The feasibility of the techniques proposed to help blind persons can be evaluated using parameters like Power Consumption, Weight, Economy, and User-Friendliness. It has been observed that if a device is basic and uses just sensors for processing, then it is lightweight, its power consumption is low, and it is affordable and user-friendly. However, when more features are added to such a device, such as an integrated camera and more processing power, it becomes heavier, requires more power, and becomes costlier.
After reading the papers and analyzing the devices built so far for the visually impaired, the following points have been extracted, which can provide future directions to researchers interested in working in this field:
• There is always a trade-off between the features we want to integrate into a device and the resources we need, such as power consumption and cost. It depends upon the priority of the user whether they want to keep the device cost-effective, light, and handy or whether they want to focus on its features and functionalities.
• An accurate, multi-feature device will not be lightweight and cost-effective in most cases, as the hardware requirements will increase, which may increase the overall weight/dimensions of the device. Conversely, a lightweight and cost-effective solution will not be rich in functionality. So, bringing about a balance between features and resources in a real-time device is a challenge and a significant future direction for researchers.
• This paper presents a variety of devices that provide several functionalities to the user, but each is either costly or heavy, which keeps it from being an ideal solution for visually challenged persons. Therefore, the need of the era is a solution that is cost-effective, lightweight, handy, rich in features, and suitable for working in real time.
• Many devices have been developed for the visually impaired, each with its own objective, and they solve the problems of the visually impaired in one way or another. But no one-stop solution has been built that serves almost all of their requirements.
Currently, we have built a smart cane integrated with a camera and a Raspberry Pi. The prototype of the cane is shown in Fig. 7. Earlier, an Arduino was integrated with the cane, but since a camera and faster processing were needed to deploy the object detection model, we switched to a Raspberry Pi. A pre-built object detection model, SSD Lite MobileNet, has been deployed for general obstacle detection, and it provides voice-based output to the user through Bluetooth earphones. This was just a prototype to test how the device performs with a deployed model in real time (a minimal sketch of such a detection-and-announcement loop is given after the list below). We are focusing broadly on two categories:
1. Animal Detection: visually blind persons face difficulties in navigating freely in an outside environment, especially if it is a crowded place. Our aim is to make the person aware of animals (cows and dogs) present around them for better and safer navigation.
2. Currency Denomination Detection: a person suffering from vision impairment should be able to detect the currency denomination so that nobody can cheat them in real life.
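As mentioned above, a detection-and-announcement loop of the kind our prototype runs can be sketched as follows. This is an illustration only, not the exact code deployed on the cane: it assumes a quantized SSDLite MobileNet TFLite file ("ssdlite_mobilenet.tflite"), a COCO label file, an OpenCV-readable camera, and the "espeak" command for speech output on the Raspberry Pi.

```python
# Illustration only, not the exact code deployed on our cane prototype.
# Assumptions: a quantized SSDLite MobileNet TFLite model and a COCO label
# file on disk, an OpenCV-readable camera, and the "espeak" CLI for speech
# (audio routed to the default output, e.g. Bluetooth earphones).
import subprocess

import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="ssdlite_mobilenet.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()
_, height, width, _ = inp["shape"]

labels = [line.strip() for line in open("coco_labels.txt")]

def announce(text: str) -> None:
    subprocess.run(["espeak", text])          # speak through the default audio device

cap = cv2.VideoCapture(0)                     # Pi camera or USB webcam
last_spoken = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(cv2.resize(frame, (width, height)), cv2.COLOR_BGR2RGB)
    interpreter.set_tensor(inp["index"], np.expand_dims(rgb, axis=0).astype(np.uint8))
    interpreter.invoke()
    # TFLite SSD models conventionally output [boxes, classes, scores, count].
    classes = interpreter.get_tensor(outs[1]["index"])[0]
    scores = interpreter.get_tensor(outs[2]["index"])[0]
    for cls, score in zip(classes, scores):
        if score > 0.6:                       # announce confident detections only
            name = labels[int(cls)]
            if name != last_spoken:           # avoid repeating the same object
                announce(name)
                last_spoken = name
```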
This paper is a review of the work done for the visually impaired till now. We have tried to discuss the useful devices built for the visually impaired and focused upon their working, usefulness, and features. We have tried to make the review clearer and easier to compare by evaluating the devices against several parameters, as presented in Table 3. In the process of developing an assistive device, the essential feature is the interface between user and system, along with the scheme through which information is sent to the user. The device should be simple, wearable, and user-friendly so that users can operate it without much effort.
Lately, a good amount of work has already been contributed for the visually blind, but there is still a need for a cost-effective solution with enriched features to make the visually blind capable and independent. The device should be easy to use and lightweight, and it should perform well in real time with high accuracy. Presently, there are many basic devices that are easy to use, and with the advancement of technology better devices have been developed. These devices are feature-rich, but not all of them perform in real time. Also, on average these devices are heavy, which makes them difficult to carry and impractical for real-time usage. The focus should be on increasing the accuracy of these devices, reducing their power consumption, and making them lightweight, easy to use, adaptable, and efficient in real time. A single device having all these factors would make the lives of visually impaired people more convenient compared to the available devices.
The authors whose names are listed immediately below certify that they have NO affiliations with or involvement in any organization or entity with any financial interest (such as honoraria; educational grants; participation in speakers' bureaus; membership, employment, consultancies, stock ownership, or other equity interest; and expert testimony or patent-licensing arrangements), or non-financial interest (such as personal or professional relationships, affiliations, knowledge, or beliefs) in the subject matter or materials discussed in this manuscript.
References
[1] WHO, Blindness and vision impairment, 2018 https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment (accessed
September 3, 2019).
[2] R.R. Bourne, S.R. Flaxman, T. Braithwaite, M.V. Cicinelli, A. Das, J.B. Jonas, J. Keeffe, J.H. Kempen, J. Leasher, H. Limburg, et al., Magnitude, temporal
trends, and projections of the global prevalence of blindness and distance and near vision impairment: a systematic review and meta-analysis, The
Lancet Global Health 5 (9) (2017) e888–e897.
[3] M. Express, World’s blind population to soar: study, 2017 https://medicalxpress.com/news/2017-08-world-population-soar.html (accessed February
5, 2019).
[4] A.T.I. Association, What is AT?, 2019 https://www.atia.org/at-resources/what-is-at/ (accessed February 6, 2019).
[5] W. Elmannai, K. Elleithy, Sensor-based assistive devices for visually-impaired people: current status, challenges, and future directions, Sensors 17 (3)
(2017) 565.
[6] T. Hoydal, J. Zelano, An alternative mobility aid for the blind: the’ultrasonic cane’, in: Proceedings of the 1991 IEEE Seventeenth Annual Northeast
Bioengineering Conference, IEEE, 1991, pp. 158–159.
[7] J. Borenstein, The navbelt-a computerized multi-sensor travel aid for active guidance of the blind, in: CSUN’s Fifth Annual Conference on Technology
and Persons with Disabilities, 1990, pp. 107–116.
[8] S. Shoval, I. Ulrich, J. Borenstein, et al., Computerized obstacle avoidance systems for the blind and visually impaired, Intelligent Systems and Tech-
nologies in Rehabilitation Engineering (2000) 414–448.
[9] C.W. Bledsoe, Originators of orientation and mobility training, Foundations of orientation and mobility (1997) 580–623.
[10] H.A. Yanco, Wheelesley: A robotic wheelchair system: Indoor navigation and user interface, in: Assistive technology and artificial intelligence,
Springer, 1998, pp. 256–268.
[11] S. MacNamara, G. Lacey, A smart walker for the frail visually impaired, in: Proceedings 2000 ICRA. Millennium Conference. IEEE International Con-
ference on Robotics and Automation. Symposia Proceedings (Cat. No. 00CH37065), 2, IEEE, 2000, pp. 1354–1359.
[12] S. Kammoun, M.J.-M. Macé, B. Oriola, C. Jouffrais, Toward a better guidance in wearable electronic orientation aids, in: IFIP Conference on Human–
Computer Interaction, Springer, 2011, pp. 624–627.
[13] D. Dakopoulos, N.G. Bourbakis, Wearable obstacle avoidance electronic travel aids for blind: a survey, IEEE Transactions on Systems, Man, and Cyber-
netics, Part C (Applications and Reviews) 40 (1) (2009) 25–35.
[14] Á. Csapó, G. Wersényi, H. Nagy, T. Stockman, A survey of assistive technologies and applications for blind users on mobile platforms: a review and
foundation for research, Journal on Multimodal User Interfaces 9 (4) (2015) 275–286.
[15] M.J. Proulx, J. Gwinnutt, S. Dell’Erba, S. Levy-Tzedek, A.A. de Sousa, D.J. Brown, Other ways of seeing: From behavior to neural mechanisms in the
online “visual” control of action with sensory substitution, Restorative neurology and neuroscience 34 (1) (2016) 29–44.
[16] D.T. Batarseh, T.N. Burcham, G.M. McFadyen, An ultrasonic ranging system for the blind, in: Proceedings of the 1997 Sixteenth Southern Biomedical Engi-
neering Conference, IEEE, 1997, pp. 411–413.
[17] S. Ram, J. Sharf, The people sensor: a mobility aid for the visually impaired, in: Digest of Papers. Second International Symposium on Wearable
Computers (Cat. No. 98EX215), IEEE, 1998, pp. 166–167.
[18] A. Lécuyer, P. Mobuchon, C. Mégard, J. Perret, C. Andriot, J.-P. Colinot, Homere: a multimodal system for visually impaired people to explore virtual
environments, in: IEEE Virtual Reality, 2003. Proceedings., IEEE, 2003, pp. 251–258.
[19] S.K. Bahadir, V. Koncar, F. Kalaoglu, Wearable obstacle detection system fully integrated to textile structures for visually impaired people, Sensors and
Actuators A: Physical 179 (2012) 297–311.
[20] P. Bach-y Rita, C.C. Collins, F.A. Saunders, B. White, L. Scadden, Vision substitution by tactile image projection, Nature 221 (5184) (1969) 963–964.
[21] P. Bach-y Rita, Y. Danilov, M. Tyler, R. Grimm, Late human brain plasticity: vestibular substitution with a tongue brainport human-machine interface,
Intellectica 40 (1) (2005) 115–122.
[22] D. Aguerrevere, M. Choudhury, A. Barreto, Portable 3d sound/sonar navigation system for blind individuals, in: 2nd LACCEI Int. Latin Amer. Caribbean
Conf. Eng. Technol. Miami, FL, 2004.
[23] M. Bousbia-Salah, A. Redjati, M. Fezari, M. Bettayeb, An ultrasonic navigation system for blind people, in: 2007 IEEE International Conference on
Signal Processing and Communications, IEEE, 2007, pp. 1003–1006.
[24] R.G. Praveen, R.P. Paily, Blind navigation assistance for visually impaired based on local depth hypothesis from a single image, Procedia Engineering
64 (2013) 351–360.
[25] L. Dunai, G.P. Fajarnes, V.S. Praderas, B.D. Garcia, I.L. Lengua, Real-time assistance prototyp-a new navigation aid for blind people, in: IECON 2010-36th
Annual Conference on IEEE Industrial Electronics Society, IEEE, 2010, pp. 1173–1178.
[26] A. Kumar, R. Patra, M. Manjunatha, J. Mukhopadhyay, A.K. Majumdar, An electronic travel aid for navigation of visually impaired persons, in: 2011
Third International Conference on Communication Systems and Networks (COMSNETS 2011), IEEE, 2011, pp. 1–5.
[27] K. Yelamarthi, D. Haas, D. Nielsen, S. Mothersell, Rfid and gps integrated navigation system for the visually impaired, in: 2010 53rd IEEE International
Midwest Symposium on Circuits and Systems, IEEE, 2010, pp. 1149–1152.
[28] K. Kaczmarek, The tongue display unit (tdu) for electrotactile spatiotemporal pattern presentation, Scientia Iranica 18 (6) (2011) 1476–1485.
[29] B. Mustapha, A. Zayegh, R. Begg, Wireless obstacle detection system for the elderly and visually impaired people, in: 2013 IEEE International Confer-
ence on Smart Instrumentation, Measurement and Applications (ICSIMA), IEEE, 2013, pp. 1–5.
[30] L. Ran, S. Helal, S. Moore, Drishti: an integrated indoor/outdoor blind navigation system and service, in: Second IEEE Annual Conference on Pervasive
Computing and Communications, 2004. Proceedings of the, IEEE, 2004, pp. 23–30.
[31] V. Kulyukin, C. Gharpure, J. Nicholson, G. Osborne, Robot-assisted wayfinding for the visually impaired in structured indoor environments, Au-
tonomous Robots 21 (1) (2006) 29–41.
[32] L. Dunai, G. Peris-Fajarnés, E. Lluna, B. Defez, Sensory navigation device for blind people, The Journal of Navigation 66 (3) (2013) 349–362.
[33] O. Elkhalili, O. Schrey, P. Mengel, M. Petermann, W. Brockherde, B. Hosticka, A 4/spl times/64 pixel cmos image sensor for 3-d measurement applica-
tions, IEEE Journal of Solid-State Circuits 39 (7) (2004) 1208–1212.
[34] M.O. Yeboah, E. Kuada, M. Sitti, K. Govindan, H. Hagan, M.C. Miriam, Design of a voice guided ultrasonic spectacle and waist belt with gps for the
visually impaired, in: 2018 IEEE 7th International Conference on Adaptive Science & Technology (ICAST), IEEE, 2018, pp. 1–7.
[35] S. Mahalle, Ultrasonic spectacles & waist-belt for visually impaired & blind person, IOSR Journal of Engineering 4 (2014) 46–49.
[36] A. Aladren, G. López-Nicolás, L. Puig, J.J. Guerrero, Navigation assistance for the visually impaired using rgb-d sensor with range expansion, IEEE
Systems Journal 10 (3) (2014) 922–932.
[37] R. Tapu, B. Mocanu, T. Zaharia, Deep-see: Joint object detection, tracking and recognition with application to visually impaired navigational assistance,
Sensors 17 (11) (2017) 2473.
[38] M. Chablani, YOLO – You only look once, real time object detection explained, 2017 https://towardsdatascience.com/yolo-you-only-look-once-real-time-object-detection-explained-492dc9230006 (accessed May 8, 2019).
[39] K. Patil, Q. Jawadwala, F.C. Shu, Design and construction of electronic aid for visually impaired people, IEEE Transactions on Human-Machine Systems
48 (2) (2018) 172–182.
[40] A. Mancini, E. Frontoni, P. Zingaretti, Mechatronic system to help visually impaired users during walking and running, IEEE transactions on intelligent
transportation systems 19 (2) (2018) 649–660.
[41] S. Lin, K. Wang, K. Yang, R. Cheng, Krnet: A kinetic real-time convolutional neural network for navigational assistance, in: International Conference
on Computers Helping People with Special Needs, Springer, 2018, pp. 55–62.
[42] A. Haigh, D.J. Brown, P. Meijer, M.J. Proulx, How well do you see what you hear? the acuity of visual-to-auditory sensory substitution, Frontiers in
psychology 4 (2013) 330.
[43] S. Kayukawa, K. Higuchi, J. Guerreiro, S. Morishima, Y. Sato, K. Kitani, C. Asakawa, Bbeep: A sonic collision avoidance system for blind travellers and
nearby pedestrians, in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI’19), ACM, 2019, doi:10.1145/3290605.
3300282.
[44] S.A. Bouhamed, J.F. Eleuch, I.K. Kallel, D.S. Masmoudi, New electronic cane for visually impaired people for obstacle detection and recognition, in:
2012 IEEE International Conference on Vehicular Electronics and Safety (ICVES 2012), IEEE, 2012, pp. 416–420.
[45] B.-S. Lin, C.-C. Lee, P.-Y. Chiang, Simple smartphone-based guiding system for visually impaired people, Sensors 17 (6) (2017) 1371.
[46] N.S. Ahmad, N.L. Boon, P. Goh, Multi-sensor obstacle detection system via model-based state-feedback control in smart cane design for the visually
challenged, IEEE Access 6 (2018) 64182–64192.
[47] A. Nanavati, X.Z. Tan, A. Steinfeld, Coupled indoor navigation for people who are blind, in: Companion of the 2018 ACM/IEEE International Conference
on Human-Robot Interaction, ACM, 2018, pp. 201–202.
[48] M. Cornacchia, B. Kakillioglu, Y. Zheng, S. Velipasalar, Deep learning-based obstacle detection and classification with portable uncalibrated patterned
light, IEEE Sensors Journal 18 (20) (2018) 8416–8425.
[49] B. Li, J.P. Munoz, X. Rong, Q. Chen, J. Xiao, Y. Tian, A. Arditi, M. Yousuf, Vision-based mobile indoor assistive navigation aid for blind people, IEEE
transactions on mobile computing 18 (3) (2018) 702–714.
[50] R.K. Megalingam, S. Vishnu, V. Sasikumar, S. Sreekumar, Autonomous path guiding robot for visually impaired people, in: Cognitive Informatics and
Soft Computing, Springer, 2019, pp. 257–266.
[51] guidedogs, HARNESSING THE POWER OF PARTNERSHIP!, 2019 https://www.guidedogs.com/ (accessed May 8, 2019).
[52] J. Zelek, R. Audette, J. Balthazaar, C. Dunk, A stereo-vision system for the visually impaired, University of Guelph 1999 (1999).
[53] P. Angin, B. Bhargava, S. Helal, A mobile-cloud collaborative traffic lights detector for blind navigation, in: 2010 Eleventh International Conference on
Mobile Data Management, IEEE, 2010, pp. 396–401.
[54] Y.H. Lee, G. Medioni, Rgb-d camera based navigation for the visually impaired, in: Proceedings of the RSS, 2011.
[55] B.F. Katz, S. Kammoun, G. Parseihian, O. Gutierrez, A. Brilhault, M. Auvray, P. Truillet, M. Denis, S. Thorpe, C. Jouffrais, Navig: augmented reality
guidance system for the visually impaired, Virtual Reality 16 (4) (2012) 253–269.
[56] R.G. Golledge, J.M. Loomis, R.L. Klatzky, A. Flury, X.L. Yang, Designing a personal guidance system to aid navigation without sight: Progress on the gis
component, International Journal of Geographical Information System 5 (4) (1991) 373–395.
[57] N. Molton, S. Se, J. Brady, D. Lee, P. Probert, A stereo vision-based aid for the visually impaired, Image and vision computing 16 (4) (1998) 251–263.
[58] J.M. Loomis, R.G. Golledge, R.L. Klatzky, Navigation system for the blind: Auditory display modes and guidance, Presence 7 (2) (1998) 193–203.
[59] C. Shah, M. Bouzit, M. Youssef, L. Vasquez, Evaluation of ru-netra-tactile feedback navigation system for the visually impaired, in: 2006 International
Workshop on Virtual Rehabilitation, IEEE, 2006, pp. 72–77.
[60] C.-H. Lee, Y.-C. Su, L.-G. Chen, An intelligent depth-based obstacle detection system for visually-impaired aid applications, in: 2012 13th International
Workshop on Image Analysis for Multimedia Interactive Services, IEEE, 2012, pp. 1–4.
[61] Y. Wang, K.J. Kuchenbecker, Halo: Haptic alerts for low-hanging obstacles in white cane navigation, in: 2012 IEEE Haptics Symposium (HAPTICS),
IEEE, 2012, pp. 527–532.
[62] V. Filipe, F. Fernandes, H. Fernandes, A. Sousa, H. Paredes, J. Barroso, Blind navigation support system based on microsoft kinect, Procedia Computer
Science 14 (2012) 94–101.
[63] B. Mustapha, A. Zayegh, R. Begg, Multiple sensors based obstacle detection system, in: 2012 4th International Conference on Intelligent and Advanced
Systems (ICIAS2012), 2, IEEE, 2012, pp. 562–566.
[64] A. Rodríguez, L.M. Bergasa, P.F. Alcantarilla, J. Yebes, A. Cela, Obstacle avoidance system for assisting visually impaired people, in: Proceedings of the
IEEE Intelligent Vehicles Symposium Workshops, Madrid, Spain, 35, 2012, p. 16.
[65] M. Nassih, I. Cherradi, Y. Maghous, B. Ouriaghli, Y. Salih-Alj, Obstacles recognition system for the blind people using rfid, in: 2012 Sixth International
Conference on Next Generation Mobile Applications, Services and Technologies, IEEE, 2012, pp. 60–63.
[66] P. Costa, H. Fernandes, P. Martins, J. Barroso, L.J. Hadjileontiadis, Obstacle detection using stereo imaging to assist the navigation of visually impaired
people, Procedia Computer Science 14 (2012) 83–93.
[67] R. Tapu, B. Mocanu, A. Bursuc, T. Zaharia, A smartphone-based obstacle detection and classification system for assisting visually impaired people, in:
Proceedings of the IEEE International Conference on Computer Vision Workshops, 2013, pp. 444–451.
[68] N. Alshbatat, A. Ilah, Automated mobility and orientation system for blind or partially sighted people., International Journal on Smart Sensing &
Intelligent Systems 6 (2) (2013).
[69] K. Kumar, B. Champaty, K. Uvanesh, R. Chachan, K. Pal, A. Anis, Development of an ultrasonic cane as a navigation aid for the blind people, in: 2014
International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT), IEEE, 2014, pp. 475–479.
[70] N. Mahmud, R. Saha, R. Zafar, M. Bhuian, S. Sarwar, Vibration and voice operated navigation system for visually impaired person, in: 2014 interna-
tional conference on informatics, electronics & vision (ICIEV), IEEE, 2014, pp. 1–5.
[71] J.M. Sáez, F. Escolano, M.A. Lozano, Aerial obstacle detection with 3-d mobile devices, IEEE journal of biomedical and health informatics 19 (1) (2014)
74–80.
[72] M. Poggi, L. Nanni, S. Mattoccia, Crosswalk recognition through point-cloud processing and deep-learning suited to a wearable mobility aid for the
visually impaired, in: International Conference on Image Analysis and Processing, Springer, 2015, pp. 282–289.
[73] S. Chaudhry, R. Chandra, Design of a mobile face recognition system for visually impaired persons, arXiv preprint arXiv:1502.00756 (2015).
[74] R. Munoz, X. Rong, Y. Tian, Depth-aware indoor staircase detection and recognition for the visually impaired, in: 2016 IEEE international conference
on multimedia & expo workshops (ICMEW), IEEE, 2016, pp. 1–6.
[75] M. Poggi, S. Mattoccia, A wearable mobility aid for the visually impaired based on embedded 3d vision and deep learning, in: 2016 IEEE Symposium
on Computers and Communication (ISCC), IEEE, 2016, pp. 208–213.
[76] H.-H. Pham, T.-L. Le, N. Vuillerme, Real-time obstacle detection system in indoor environment for the visually impaired using microsoft kinect sensor,
Journal of Sensors 2016 (2016).
[77] S.-H. Chae, M.-C. Kang, J.-Y. Sun, B.-S. Kim, S.-J. Ko, Collision detection method using image segmentation for the visually impaired, IEEE Transactions
on Consumer Electronics 63 (4) (2017) 392–400.
[78] R. Jafri, R.L. Campos, S.A. Ali, H.R. Arabnia, Utilizing the google project tango tablet development kit and the unity engine for image and infrared
data-based obstacle detection for the visually impaired, in: Proceedings of the 2016 international conference on health informatics and medical
systems (HIMS’15), Las Vegas, Nevada, USA, 2016, pp. 163–164.
[79] M. Sun, P. Ding, J. Song, M. Song, L. Wang, “watch your step”: Precise obstacle detection and navigation for mobile users through their mobile service,
IEEE Access 7 (2019) 66731–66738.
[80] W.M. Elmannai, K.M. Elleithy, A highly accurate and reliable data fusion framework for guiding the visually impaired, IEEE Access 6 (2018)
33029–33054.
[81] E. Cardillo, V. Di Mattia, G. Manfredi, P. Russo, A. De Leo, A. Caddemi, G. Cerri, An electromagnetic sensor prototype to assist visually impaired and
blind people in autonomous walking, IEEE Sensors Journal 18 (6) (2018) 2568–2576.
[82] X. Yu, G. Yang, S. Jones, J. Saniie, Ar marker aided obstacle localization system for assisting visually impaired, in: 2018 IEEE International Conference
on Electro/Information Technology (EIT), IEEE, 2018, pp. 0271–0276.
[83] P. Salavati, H.M. Mohammadi, Obstacle detection using googlenet, in: 2018 8th International Conference on Computer and Knowledge Engineering
(ICCKE), IEEE, 2018, pp. 326–332.
[84] U. Malūkas, R. Maskeliūnas, R. Damaševičius, M. Woźniak, Real time path finding for assisted living using deep learning, J. Univ. Comput. Sci 24
(2018) 475–487.
[85] R.K. Katzschmann, B. Araki, D. Rus, Safe local navigation for visually impaired users with a time-of-flight and haptic feedback device, IEEE Transactions
on Neural Systems and Rehabilitation Engineering 26 (3) (2018) 583–593.
[86] N. Hirose, A. Sadeghian, M. Vázquez, P. Goebel, S. Savarese, Gonet: A semi-supervised deep learning approach for traversability estimation, in: 2018
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2018, pp. 3044–3051.
[87] K. Yang, L.M. Bergasa, E. Romera, R. Cheng, T. Chen, K. Wang, Unifying terrain awareness through real-time semantic segmentation, in: 2018 IEEE
Intelligent Vehicles Symposium (IV), IEEE, 2018, pp. 1033–1038.
[88] J. Dong, M. Noreikis, Y. Xiao, A. Yla-Jaaski, Vinav: A vision-based indoor navigation system for smartphones, IEEE Transactions on Mobile Computing
(2018).
[89] A. Kunz, K. Miesenberger, L. Zeng, G. Weber, Virtual navigation environment for blind and low vision people, in: International Conference on Com-
puters Helping People with Special Needs, Springer, 2018, pp. 114–122.
[90] J. Bai, S. Lian, Z. Liu, K. Wang, D. Liu, Virtual-blind-road following-based wearable navigation device for blind people, IEEE Transactions on Consumer
Electronics 64 (1) (2018) 136–143.
[91] B. Jiang, J. Yang, Z. Lv, H. Song, Wearable vision assistance system based on binocular sensors for visually impaired users, IEEE Internet of Things
Journal (2018).
[92] V. Nair, M. Budhai, G. Olmschenk, W.H. Seiple, Z. Zhu, Assist: personalized indoor navigation via multimodal sensors and high-level semantic infor-
mation, in: Proceedings of the European Conference on Computer Vision (ECCV), 2018.
[93] K. Xiang, K. Wang, L. Fei, K. Yang, Store sign text recognition for wearable navigation assistance system, in: Journal of Physics: Conference Series,
1229, IOP Publishing, 2019, p. 012070.
[94] P. Arno, A. Vanlierde, E. Streel, M.-C. Wanet-Defalque, S. Sanabria-Bohorquez, C. Veraart, Auditory substitution of vision: pattern recognition by the
blind, Applied Cognitive Psychology: The Official Journal of the Society for Applied Research in Memory and Cognition 15 (5) (2001) 509–519.
[95] C. Capelle, C. Trullemans, P. Arno, C. Veraart, A real-time experimental prototype for enhancement of vision rehabilitation using auditory substitution, IEEE Transactions on Biomedical Engineering 45 (1998).
[96] S. Garcia, K. Petrini, G.S. Rubin, L. Da Cruz, M. Nardini, Visual and non-visual navigation in blind patients with a retinal prosthesis, PloS one 10 (7)
(2015).
[97] K.A. Kaczmarek, J.G. Webster, P. Bach-y Rita, W.J. Tompkins, Electrotactile and vibrotactile displays for sensory substitution systems, IEEE transactions
on biomedical engineering 38 (1) (1991) 1–16.
[98] L. Renier, A.G. De Volder, Vision substitution and depth perception: early blind subjects experience visual perspective through their ears, Disability
and Rehabilitation: Assistive Technology 5 (3) (2010) 175–183.
[99] E. Sampaio, S. Maris, P. Bach-y Rita, Brain plasticity:’visual’ acuity of blind persons via the tongue, Brain research 908 (2) (2001) 204–207.
[100] H. Segond, D. Weiss, M. Kawalec, E. Sampaio, Perceiving space and optical cues via a visuo-tactile sensory substitution system: A methodological
approach for training of blind subjects for navigation, Perception 42 (5) (2013) 508–528.