
Wheelchair Control Using Speech Recognition

P. B. Ghule and M. G. Bhalerao R. H. Chile and V. G. Asutkar


Student, Instrumentation Engineering Department, Professor, Instrumentation Engineering Department,
S.G.G.S. Institute of Engineering & Technology, S.G.G.S. Institute of Engineering & Technology,
Nanded – 431606, India. Nanded – 431606, India.
Email: pratapsinhghule@gmail.com & Email: rhchile@yahoo.com &
manojbhalerao93@gmail.com vgasutkar@yahoo.com

Abstract—In this paper a speech-controlled wheelchair for physically disabled persons is developed which can be used with different languages. A speech recognition system using Mel Frequency Cepstral Coefficients (MFCC) was developed on a laptop with an interactive and user-friendly GUI, and a normal wheelchair was converted to an electric wheelchair by applying a gear mechanism to the wheels with a DC motor attached to each gear. An Arduino Uno board is used to acquire the control signal from MATLAB and pass it to a relay driver circuit, which in turn results in motion of the wheelchair in the desired direction. Speech inputs such as forward, back, left, right and stop are acquired from the user, and the wheelchair then moves according to the respective command. The performance of MFCC in the presence of noise and for different languages was studied to assess the reliability of the algorithm under different conditions.

I. INTRODUCTION

The principal motto behind research, i.e., the benefit of the research to the common person and the country, is often neglected by researchers. There is a need to think about products which benefit people in society along with benefiting the nation. We focus on one of the issues faced by people, physical disability, and try to provide an engineering solution: an electric wheelchair with maximum liberty and minimum cost. The need to reduce cost was elaborated by Dr. Amartya Sen in his keynote address at the World Bank's conference on disability: the poverty line for physically disabled people should also take into account the greater expenses they incur in exercising what purchasing power they have. A study in the U.K. found that the poverty rate for disabled people was 23.1% as compared to 17.9% for non-disabled people, but when the greater expenses associated with their disability were considered, the poverty rate for people with disabilities shot up to 47.4%. This illustrates the higher expenses and the need to reduce this cost.

The application of technology in the field of wheelchairs was first tried by George Klein in 1953 [4]. Since then the electric wheelchair has continuously flourished and expanded with discoveries which aim to make the user more competent and potent in society. In 1986, Arizona State University, U.S. launched a wheelchair which used machine vision to identify landmarks and center the wheelchair in a hallway [1]. Voice recognition technology was first used in 1999 by the SIAMO project at the University of Alcala, Spain [1]; they designed a wheelchair controlled by head gestures, with voice commands as a secondary input. In 1999-2000 in India, CEERI developed a voice-cum-auto-steer wheelchair which had a line-following mode along with a voice-control mode [1]. In 2002, Mr. Huri at Yonsei University, Korea designed a wheelchair with multiple control modes, including facial gestures, EMG signals from the neck and voice commands [1]. In 2007, M. Nishimori and T. Saitoh proposed a voice-controlled intelligent wheelchair; the user could control it through voice commands in the Japanese language [2]. A voice-controlled wheelchair using the DSK TMS320C6711 was proposed in April 2009 by Qadri M. T. and Ahmed S. A., which uses a DSP processor from Texas Instruments for voice signal processing [3]; zero-crossing count and the standard deviation of spoken words are the algorithms they used for voice recognition. In July 2009, robust speech recognition was applied to a voice-driven wheelchair by Akira Sasou and Hiroaki Kojima [4]; they used an array of microphones attached to the wheelchair for voice input, and this wheelchair had the disadvantage of longer processing time for voice recognition. In 2013, A. Ruiz-Serrano and R. Posada-Gomez developed a dual control system capable of driving a wheelchair through tongue and speech signals [5].

Most of the electric wheelchairs developed run with the help of a joystick. Further solutions proposed for making them more comfortable are controlling the wheelchair using tongue movement [5], hand gestures [6], voice commands [7] and a brain-control interface [8]. The tongue is not very feasible: while using the tongue to control the wheelchair the user cannot talk, and it may be hectic for long-term use. Hand gestures [6] are a better option than the tongue, but will cause pain and discomfort after an ample amount of time. The brain-control interface [8] is effective but a very costly solution for wheelchair control; it can give a tiring experience to the user and requires a lot of setup for acquiring the brain signal, processing it and extracting exact information. The voice-controlled wheelchair gives a far better platform for wheelchair control considering the accessibility and comfort of the user. We finally want our user to be a potent citizen of the country.

978-1-5090-3251-8/16/$31.00 ©2016 IEEE
Speech is a natural way of communication used by humans. It is a resilient way to interchange information between two persons. This concept motivated many researchers to use speech as a communication channel for man-machine interaction, which gives rise to speech processing. In the 1920s, speech recognition came into existence: the first machine to recognize speech commercially, a toy named Radio Rex, was manufactured. Advanced research in speech processing began in early 1936 at Bell Labs, and in 1939 Bell Labs demonstrated a speech synthesis machine of their invention at the World Fair, New York. In the decade 1940-1950 many researchers tried to utilize the basic ideas of acoustics, phonetics and speech properties. In 1952, at Bell Laboratories, Davis, Biddulph and Balashek built a system for isolated digit recognition for a single speaker. In the 1970s, speech recognition research advanced and achieved a number of milestones. Isolated word recognition became a usable technology based on the fundamental studies by Velichko and Zagoruyko in Russia, Sakoe and Chiba in Japan, and Itakura in the U.S. The Russian studies helped establish robust pattern recognition ideas, the Japanese study showed how dynamic programming methods could be successfully applied, and independent research by Itakura gave rise to Linear Predictive Coding (LPC). In the 1980s, LPC was replaced by the frequency-domain perspective proposed by Mermelstein and Davis, famously known as the Mel-Frequency Cepstral Coefficient (MFCC). Many techniques were developed later, and the search for better algorithms is still going on. This paper describes the main objectives of the performed project, i.e., to run the wheelchair automatically in all desired directions using a speech signal. A 28-pin Arduino micro-controller was operated to ensure the EWC's movement in the desired direction after receiving a command signal from the laptop, which takes speech as input. Each rear wheel of the EWC is attached to a DC gear motor which acquires the required power from a battery connected to the respective motor.

II. MEL FREQUENCY CEPSTRAL COEFFICIENTS (MFCC) ALGORITHM

The speech recognition algorithm used is discussed in detail below; its block diagram is given in Fig. 1. The objective of pre-processing is to convert the speech signal into a form suitable for feature extraction. It involves the following steps.

A. Pre-emphasis

In the handling of audio signals, pre-emphasis corresponds to a system process planned to increase the magnitude of higher frequencies with respect to the magnitude of lower frequencies. Mathematical modeling of the speech-producing tract is done to design this filter. One pole of the glottis model is canceled by the lip model, but one pole of the glottis model still remains. This pole causes attenuation at higher frequencies. To boost the magnitude of higher frequencies, the input speech waveform is pre-emphasized by a first-order filter with transfer function

H(z) = 1 − αz⁻¹, 0.9 ≤ α ≤ 1.0   (1)

Fig. 1. Block Diagram for Speech Recognition Using MFCC

B. Framing

If a frame is too long, signal properties may change abundantly across the window, influencing time resolution negatively. Thus the signal is divided into small frames of length 512 with an overlap of 256; i.e., the number of samples in each frame is 512, with 256 samples overlapping between adjacent frames. Overlapping frames are used to capture information that may occur at the frame boundaries.

C. Windowing

A window function in signal processing is a mathematical function which is null-valued outside a specific interval. A function which has constant amplitude inside the interval and zero amplitude outside is called a rectangular window, in accordance with its graphical depiction. Window functions are used mainly for spectral analysis, filter design, and beam forming. A smooth, positive, "bell-shaped" function is used in most applications. A window obeying these properties was designed by Richard W. Hamming.
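As a concrete illustration, the pre-processing chain described above (pre-emphasis, framing and Hamming windowing) can be sketched in Python with NumPy; the project itself used MATLAB, and the coefficient α = 0.95 here is an illustrative choice within the stated range. NumPy's `hamming` uses N−1 in the cosine denominator, a minor variant of Equation (2).

```python
import numpy as np

def preprocess(signal, alpha=0.95, frame_len=512, overlap=256):
    """Pre-emphasis, framing and Hamming windowing (Sections II-A to II-C)."""
    # Pre-emphasis, Equation (1): y[n] = x[n] - alpha * x[n-1]
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    # Frames of 512 samples with a 256-sample overlap (hop of 256)
    hop = frame_len - overlap
    n_frames = 1 + max(0, (len(emphasized) - frame_len) // hop)
    frames = np.stack([emphasized[i * hop : i * hop + frame_len]
                       for i in range(n_frames)])
    # Hamming window tapers each frame towards both its extremities
    return frames * np.hamming(frame_len)

frames = preprocess(np.random.randn(4096))
print(frames.shape)  # (15, 512)
```

Each row of the result is one windowed frame, ready for the spectral steps that follow.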
The window tapers towards both its extremities. Equation (2) shows the discrete-time representation of the Hamming window function:

h[n] = 0.54 − 0.46 cos(2πn/N) for 0 ≤ n ≤ N, and h[n] = 0 otherwise   (2)

D. Discrete Fourier Transform (DFT)

The discrete Fourier transform (DFT) transforms finite-length equidistant samples into a finite combination of complex sinusoids ordered by frequency, with the same sample values; it converts a time-domain signal to the frequency domain. The input and output coefficients are complex numbers, and the output frequencies contain harmonics of the fundamental frequency whose period is the length of the sampling interval.

Xp(k) = Σ_{n=0}^{N−1} xp[n] ω[n] exp(−j 2πkn/N)

The transformed frequency-domain signal contains both real and imaginary values. It is converted to real values using the formula below to reduce the mathematical complexity:

|Xp(k)| = √( Re(Xp(k))² + Im(Xp(k))² )

This eliminates the imaginary part and thus brings down the mathematical computation drastically.

E. Mel-Filter Bank Creation

In signal processing the raw data is converted to an informative, non-redundant representation, leading to better interpretation. Generally, feature extraction is used to reduce the size of the vectors: when the input is too large to be handled and is expected to be sparse, it can be altered to a condensed form called a feature vector. The mel-frequency cepstrum (MFC) used in speech processing is a depiction of the short-term power spectrum of a sound, based on a linear cosine transform of the log power spectrum on a nonlinear mel scale of frequency.

Triangular membership functions are generated using the formula given below:

M(m, k) = 0                                            for lf(k) < lfc(m−1)
         = (lf(k) − lfc(m−1)) / (lfc(m) − lfc(m−1))    for lfc(m−1) ≤ lf(k) < lfc(m)
         = (lf(k) − lfc(m+1)) / (lfc(m) − lfc(m+1))    for lfc(m) ≤ lf(k) < lfc(m+1)
         = 0                                            for lf(k) ≥ lfc(m+1)

F. Linear to Mel-Frequency Conversion

Stevens, Volkmann, and Newman in 1937 proposed the mel scale, a perceptual scale of pitches judged equidistant from each other by listeners. "Mel" arises from the word melody, to indicate that the scale is based on pitch comparisons. The formula to convert f hertz into the mel scale is:

Mel(f) = 2595 log10(1 + f/700)   (3)

G. Logarithmic Filter Energies

Humans perceive intensity on a logarithmic scale, so in this step we calculate the logarithm of the filter energies to mimic the hearing system:

Lp(m) = ln( Σ_{k=0}^{N−1} M(m, k) · |Xp(k)| )

H. Discrete Cosine Transform (DCT)

The discrete cosine transform (DCT) represents a finite sequence in terms of a sum of cosine functions oscillating at various frequencies. DCTs are used in many applications of science and engineering, e.g., lossy compression of audio and images. The coefficients obtained are also known as cepstral coefficients.

φp(r) = Σ_{m=1}^{F} Lp(m) cos( r(2m−1)π / (2F) )

I. Mean Square Error

A classifier is used to classify the input and give the recognized word. There are many classifiers, such as ANN, GMM and HMM, but we have used the mean square error (MSE) as classifier because of its mathematical simplicity. MSE is a frequently used measure of the difference between an estimated vector and the actual or ideal vector. In this application MSE is used to calculate the distance between the cepstral coefficients of the newly recorded signal and the cepstral coefficients of the pre-recorded signals; the MSE is minimum for the best-matched feature vectors. If R1 is a feature vector from the MFCC of a pre-recorded signal and R2 is the MFCC of a newly recorded signal, then the MSE can be calculated as

MSE = (1/n) Σ_{i=1}^{n} (R2ᵢ − R1ᵢ)²   (4)

where n is the length of both vectors. The MSE of intra-class feature vectors (different recordings of the same word) is small, while the MSE of inter-class feature vectors (recordings of different words) is larger. Hence one can find the relation between two feature vectors by calculating the MSE between them.

III. HARDWARE IMPLEMENTATION

A. Mechanical Design

In this project a readily available traveling-type wheelchair is used and modified. A pair of DC geared motors is connected to the rear wheels by a sprocket-and-chain mechanism as shown in Fig. 2. The front wheels are of caster type and are free to rotate through 360 degrees.

B. Hardware Arrangement

An Arduino board is connected to the PC. A relay driver circuit is attached to the Arduino board, and the relay driver is attached to the motors driving the chain-and-sprocket arrangement.
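The spectral half of the MFCC algorithm (DFT magnitude, triangular mel filter bank from Equation (3), logarithmic energies and DCT) can be sketched for a single frame as follows. The 16 kHz sampling rate, 20 filters and 12 cepstral coefficients are illustrative assumptions, not values specified in the paper.

```python
import numpy as np

def hz_to_mel(f):
    # Equation (3): Mel(f) = 2595 * log10(1 + f/700)
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    # Inverse of Equation (3)
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_from_frame(frame, fs=16000, n_filters=20, n_coeffs=12):
    """DFT magnitude -> mel filter bank -> log energies -> DCT."""
    n_fft = len(frame)
    # Magnitude spectrum: |X(k)| = sqrt(Re^2 + Im^2)
    mag = np.abs(np.fft.rfft(frame))
    # Filter center frequencies spaced uniformly on the mel scale
    mel_points = np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / fs).astype(int)
    # Triangular membership functions M(m, k)
    fbank = np.zeros((n_filters, len(mag)))
    for m in range(1, n_filters + 1):
        lo, mid, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, mid):
            fbank[m - 1, k] = (k - lo) / max(mid - lo, 1)
        for k in range(mid, hi):
            fbank[m - 1, k] = (hi - k) / max(hi - mid, 1)
    # Logarithmic filter energies, mimicking the ear's intensity perception
    log_e = np.log(fbank @ mag + 1e-10)
    # DCT decorrelates the log energies into cepstral coefficients
    F = n_filters
    r = np.arange(1, n_coeffs + 1)[:, None]
    m_idx = np.arange(1, F + 1)[None, :]
    dct = np.cos(r * (2 * m_idx - 1) * np.pi / (2 * F))
    return dct @ log_e

coeffs = mfcc_from_frame(np.sin(2 * np.pi * 440 * np.arange(512) / 16000))
print(coeffs.shape)  # (12,)
```

The 12-element vector per frame is what the MSE classifier of Section II-I compares against the stored training vectors.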
Fig. 2. Proposed Chain Sprocket Arrangement

IV. GRAPHICAL USER INTERFACE DESIGN

To make the algorithm more user friendly, a Graphical User Interface (GUI) was designed in MATLAB, as shown in Fig. 3.

Fig. 3. GUI Developed for the Speech Recognition System

→ The GUI contains separate buttons to record the command words from the user; the status is displayed in the side box.
→ It provides a display for the spectrogram of each recorded signal.
→ It contains buttons to manually Start, Stop and Pause/Run the program.
→ It provides a display for the recognized word and the running status of the wheelchair.

V. WORKING

The working is as shown in Fig. 4 and is explained in detail below.

Fig. 4. Block Diagram of Working of Wheelchair

VI. ACQUIRING SPEECH SIGNAL

The voice commands given by the user are captured using a microphone attached to the PC. The data is filtered using the hardware noise filters built into the microphone; most external disturbances are removed through these hardware filters, but noise still has a large effect on the performance of the algorithm (the effect of noise is shown in the result section). The signal is recorded in MATLAB and filtered using a band-pass filter: since the human voice ranges from about 30 Hz to 3000 Hz, frequencies above and below this band are removed, giving a much more meaningful signal from the raw recording. The MFCC algorithm is then used to extract features from this speech signal. The recorded signal is matched against the feature vectors of the signals stored in the database; the matched signal is taken as the command, the action to be executed is displayed in the GUI, and MATLAB sends the action to the Arduino serially. The Arduino reads the command, decodes it and generates an appropriate control signal which is given to the relay driver circuit, leading to motion of the wheelchair in the direction commanded by the user. The MFCC algorithm is used because it deals with the frequencies present in the voice signal; it is well suited to real-time use, since the signal shifts in the time domain while sound is recorded in real time. When the program is run, the GUI appears as shown in Fig. 3.

A. Manual Buttons

A 'Start' button is provided to start the program after recording of the words (training stage). Once the program is started, the user can use voice commands to control the direction of the wheelchair. In an emergency, the operation can be paused manually using the 'Pause' button and restarted using the same button. For stopping the operation, a 'Stop' button is provided; once the stop button is pressed, the words recorded in the training stage are erased and operation stops completely.

B. Training of Words

Buttons are provided to record the voice of the user in the training stage. Each button records one second of audio. After recording, the user can play back the recorded sound to be sure the recording is correct. The text box next to the recording buttons shows the status of the recordings, i.e., whether the respective word is recorded or not.
Feature extraction of the words is done using MFCC, and the extracted feature vectors are saved in MATLAB.

C. Spectrogram Patterns

Once the words are recorded, their spectrograms are displayed on the GUI. From these patterns the user can get an idea of whether the recordings are accurate, and can re-record a signal if its pattern is not correct. In the testing stage the spectrogram of the pronounced word is also displayed, so that the user gets visual information about the pronounced word.

D. Display Axes

In the testing stage, once a word is matched with a word from the training stage, the result is displayed in the display axes, as shown in the figure. This display is also used to show the status of the program: when the program is manually stopped or paused, the respective window appears in the display axes. Another display box is provided so that the user can see which command was given to the Arduino to take control action.

E. Acquiring and Matching the Speech Signal

After the training of words is over, the user can give commands to control the wheelchair directions. The speech signal is acquired in MATLAB, then feature extraction is done using MFCC. The extracted features are matched with the previously extracted feature vectors using MSE as the classifier. The matched command is displayed on the screen and passed to the Arduino. The process of feature extraction and matching can be clearly understood from Fig. 5.

Fig. 5. Speech recognition from database

F. Control Signal Generation

The matched command is given to the Arduino serially. The Arduino generates the control signals required to move the wheelchair in that direction and gives them to the relay driver circuit. The relay driver circuit controls the motor directions, thereby controlling the direction of the wheelchair.

G. Movement of Wheelchair

The wheelchair moves in the forward, backward, left and right directions according to the command given, and stops when the stop command is given.

VII. RESULT BY MFCC APPROACH

The flowchart for feature extraction and matching is given in Fig. 5. To test the MFCC algorithm under various noisy conditions we used a database containing various types of noise (station, street, restaurant, car, babble and airport noise); each noise was mixed at different intensity levels, i.e., the clean signal and SNRs of 15 dB, 10 dB, 5 dB and 0 dB. Each noise-adulterated signal was compared with the features extracted from the clean signal, and the results in Table I were obtained. The algorithm was then tested on a database of different languages, namely Marathi, Hindi, Bengali, Kannada, Tamil, Telugu and Malayalam, and the results in Table II were obtained.

TABLE I
CORRECT RECOGNITIONS FOR DIFFERENT NOISE SIGNALS USING MFCC

               Station  Street  Restaurant  Car  Babble  Airport
Clean Signal     30       30        30       30    30       30
15 dB Noise      21       23        25       18    25       27
10 dB Noise       8        8        14        6    14       13
5 dB Noise        4        4         4        3     4        4
0 dB Noise        3        2         3        3     3        3

Fig. 6. Speech recognition for different types of noise database

TABLE II
CORRECT RECOGNITIONS FOR DIFFERENT LINGUISTIC SIGNALS USING MFCC

Language     No. of Recognised Signals
Hindi          1000
Marathi        1000
Bengali        1000
Kannada        1000
Tamil          1000
Telugu         1000
Malayalam      1000

VIII. WHEELCHAIR TESTING

After being integrated with all the components, the wheelchair should be tested for its reliability.
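One possible shape for the command-to-motion mapping on the PC side is sketched below. The single-character serial codes and the left/right motor-direction pairs are illustrative assumptions, not taken from the paper.

```python
# Hypothetical mapping from recognized command to a serial code and
# (left, right) motor directions; 1 = forward, -1 = reverse, 0 = off.
COMMANDS = {
    "forward": ("F", (1, 1)),    # both motors forward
    "back":    ("B", (-1, -1)),  # both motors reverse
    "left":    ("L", (0, 1)),    # right motor only -> turn left
    "right":   ("R", (1, 0)),    # left motor only -> turn right
    "stop":    ("S", (0, 0)),    # relays off
}

def control_signal(word):
    """Return (serial code, motor directions) for a recognized command."""
    if word not in COMMANDS:
        return ("S", (0, 0))  # unrecognized words stop the chair for safety
    return COMMANDS[word]

print(control_signal("left"))  # ('L', (0, 1))
```

On the Arduino side, each received code would be decoded into the corresponding relay states for the two geared motors.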
The wheelchair was tested by giving different commands like forward, back, left, right and stop. It was also tested by giving the same commands in different languages. The experimental setup is shown in Fig. 8.

Fig. 7. Speech recognition result for linguistic database

Fig. 8. Hardware Setup While Experimentation

IX. CONCLUSION

In this work, we have proposed a novel isolated word recognition technique which provides efficient results using cepstral coefficients and the discrete cosine transform. It can provide both time and frequency properties of the speech signal. As it converts the speech signal to cepstral coefficients, it is one of the best methods to extract features from a speech signal. In the particular application of a wheelchair, where the training database is limited to a few words, the proposed method gives great accuracy with a short execution time. The feasibility of the proposed approach has been successfully tested on a real-time wheelchair. The hardware response was also good, as it did not take much time from acquiring the signal to generating the control signal for motor control.

REFERENCES

[1] R. C. Simpson, "Smart wheelchairs: A literature review," Journal of Rehabilitation Research and Development, vol. 42, no. 4, p. 423, 2005.
[2] M. Nishimori, T. Saitoh, and R. Konishi, "Voice controlled intelligent wheelchair," in SICE 2007 Annual Conference. IEEE, 2007, pp. 336–340.
[3] M. T. Qadri and S. A. Ahmed, "Voice controlled wheelchair using DSK TMS320C6711," in Signal Acquisition and Processing (ICSAP 2009), International Conference on. IEEE, 2009, pp. 217–220.
[4] A. Sasou and H. Kojima, "Noise robust speech recognition applied to voice-driven wheelchair," EURASIP Journal on Advances in Signal Processing, vol. 2009, p. 41, 2009.
[5] A. Ruíz-Serrano, R. Posada-Gómez, A. M. Sibaja, G. A. Rodríguez, B. Gonzalez-Sanchez, and O. Sandoval-Gonzalez, "Development of a dual control system applied to a smart wheelchair, using magnetic and speech control," Procedia Technology, vol. 7, pp. 158–165, 2013.
[6] N. Kawarazaki, D. Stefanov, and A. I. B. Diaz, "Toward gesture controlled wheelchair: A proof of concept study," in Rehabilitation Robotics (ICORR), 2013 IEEE International Conference on. IEEE, 2013, pp. 1–6.
[7] A. Škraba, R. Stojanović, A. Zupan, A. Koložvari, and D. Kofjač, "Speech-controlled cloud-based wheelchair platform for disabled persons," Microprocessors and Microsystems, vol. 39, no. 8, pp. 819–828, 2015.
[8] B. Rebsamen, C. L. Teo, Q. Zeng, M. H. Ang Jr., E. Burdet, C. Guan, H. Zhang, and C. Laugier, "Controlling a wheelchair indoors using thought," IEEE Intelligent Systems, vol. 22, no. 2, pp. 18–24, 2007.
[9] M. A. Hossan, S. Memon, M. Gregory et al., "A novel approach for MFCC feature extraction," in Signal Processing and Communication Systems (ICSPCS), 2010 4th International Conference on. IEEE, 2010, pp. 1–5.
[10] S. K. Kopparapu and M. Laxminarayana, "Choice of mel filter bank in computing MFCC of a resampled speech," in Information Sciences Signal Processing and their Applications (ISSPA), 2010 10th International Conference on. IEEE, 2010, pp. 121–124.
[11] H. Wang, Y. Xu, and M. Li, "Study on the MFCC similarity-based voice activity detection algorithm," in Artificial Intelligence, Management Science and Electronic Commerce (AIMSEC), 2011 2nd International Conference on. IEEE, 2011, pp. 4391–4394.
[12] K. S. Ahmad, A. S. Thosar, J. H. Nirmal, and V. S. Pande, "A unique approach in text independent speaker recognition using MFCC feature sets and probabilistic neural network," in Advances in Pattern Recognition (ICAPR), 2015 Eighth International Conference on. IEEE, 2015, pp. 1–6.
[13] R. Ajgou, S. Sbaa, S. Ghendir, A. Chamsa, and A. Taleb-Ahmed, "Robust remote speaker recognition system based on AR-MFCC features and efficient speech activity detection algorithm," in Wireless Communications Systems (ISWCS), 2014 11th International Symposium on. IEEE, 2014, pp. 722–727.
[14] H. C.-H. Hsu and A. Liu, "A flexible architecture for navigation control of a mobile robot," IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, vol. 37, no. 3, pp. 310–318, 2007.
[15] Q. Zeng, C. L. Teo, B. Rebsamen, and E. Burdet, "A collaborative wheelchair system," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 16, no. 2, pp. 161–170, 2008.
[16] D.-J. Kim, R. Hazlett-Knudsen, H. Culver-Godfrey, G. Rucks, T. Cunningham, D. Portee, J. Bricout, Z. Wang, and A. Behal, "How autonomy impacts performance and satisfaction: Results from a study with spinal cord injured subjects using an assistive robot," IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, vol. 42, no. 1, pp. 2–14, 2012.
[17] T. Carlson and Y. Demiris, "Collaborative control for a robotic wheelchair: evaluation of performance, attention, and workload," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 3, pp. 876–888, 2012.
[18] X. Chen and S. K. Agrawal, "Assisting versus repelling force-feedback for learning of a line following task in a wheelchair," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 6, pp. 959–968, 2013.
[19] J. Pineau, A. K. Moghaddam, H. K. Yuen, P. S. Archambault, F. Routhier, F. Michaud, and P. Boissy, "Automatic detection and classification of unsafe events during power wheelchair use," IEEE Journal of Translational Engineering in Health and Medicine, vol. 2, pp. 1–9, 2014.