
STUTTERING DETECTION

Submitted in partial fulfilment of the requirements for the award of the degree of


Master of Science in Computer Science

by

LALITHA LAKSHMI D

(REGISTER NO 42734009)

DEPARTMENT OF COMPUTER SCIENCE

SCHOOL OF SCIENCE AND HUMANITIES

SATHYABAMA
INSTITUTE OF SCIENCE AND TECHNOLOGY

(DEEMED TO BE UNIVERSITY)

Accredited with Grade “A++” by NAAC

JEPPIAAR NAGAR, RAJIV GANDHI SALAI, CHENNAI 600 119

NOVEMBER 2023

SATHYABAMA
INSTITUTE OF SCIENCE AND TECHNOLOGY

(DEEMED TO BE UNIVERSITY)
Accredited "A++" Grade by NAAC | 12B Status by UGC | Approved by AICTE

www.sathyabama.ac.in

DEPARTMENT OF COMPUTER SCIENCE

BONAFIDE CERTIFICATE
This is to certify that this Project Report is the bonafide work of
LALITHA LAKSHMI D (42734009), who carried out the project entitled
“STUTTERING DETECTION” under our supervision from
September 2023 to November 2023.

Internal Guide
Ms. VINODHINI K, M.Sc., Assistant Professor

Dean and Head of the Department


Dr. REKHA CHAKRAVARTHI, M.E., Ph.D.
_____________________________________________________________

Submitted for Viva voce examination held on

Internal Examiner External Examiner

DECLARATION

I, LALITHA LAKSHMI D (42734009), hereby declare that the
Project Report entitled “STUTTERING DETECTION”, done by me under the
guidance of Ms. VINODHINI K, M.Sc., Assistant Professor, Department of
Computer Science at SATHYABAMA INSTITUTE OF SCIENCE AND
TECHNOLOGY, Jeppiaar Nagar, Rajiv Gandhi Salai, Chennai-600 119, is
submitted in partial fulfilment of the requirements for the award of the Master of
Science degree in Computer Science.

DATE: LALITHA LAKSHMI D

PLACE: CHENNAI SIGNATURE OF THE CANDIDATE

ACKNOWLEDGEMENT

I am pleased to acknowledge my sincere thanks to the Board of Management of
SATHYABAMA INSTITUTE OF SCIENCE AND TECHNOLOGY for their
kind encouragement in doing this project and for completing it successfully. I am
grateful to them.

I convey my thanks to Dr. REKHA CHAKRAVARTHI, M.E., Ph.D.,
Dean of Arts and Science, for providing me the necessary support and details at the
right time during the progressive reviews.

I would like to express my sincere and deep sense of gratitude to my
project guide, Ms. VINODHINI K, M.Sc., Assistant Professor, Department of
Computer Science, whose valuable guidance, suggestions and constant
encouragement paved the way for the successful completion of my project work.

I wish to express my thanks to all teaching and non-teaching staff
members of the Department of Computer Science who were helpful in many ways
for the completion of the project.

LALITHA LAKSHMI D

ABSTRACT

The aim of this project is to come up with a new algorithm to enhance speech
recognition for people suffering from stuttering. Stuttering identification is an
interesting research problem involving pathology, acoustics, and signal
processing, which makes it hard and complicated to detect. The basic idea is to first
remove the stuttering from the sample, using an amplitude threshold obtained
from a neural network, and then pass on the cleaned sample.
Stuttering is a disorder which affects the fluency of speech through the involuntary
repetition of words, syllables, etc., or through involuntary silent intervals. Of all of
these, repetition is the most common and prominent characteristic of stuttering.
Increased use of stuttering-aware speech recognition by people who suffer from the
disorder can ease their access to technology in everyday life.

TABLE OF CONTENTS

CH.NO   NAME OF THE TOPIC

1   INTRODUCTION
    1.1 Overview
    1.2 Problem Statement
    1.3 Literature Survey
2   SYSTEM ANALYSIS
    2.1 Existing System
    2.2 Proposed System
3   SYSTEM CONFIGURATION
    3.1 Hardware Requirement
    3.2 Software Requirement
    3.3 Domain
    3.4 PyChamber
4   SYSTEM DESIGN
    4.1 Unified Modelling Language
        4.1.1 Use Case Diagram of Stuttering Detection
        4.1.2 Sequence Diagram of Stuttering Detection
        4.1.3 Activity Diagram of Stuttering Detection
        4.1.4 Collaboration Diagram of Stuttering Detection
        4.1.5 Component Diagram of Stuttering Detection
        4.1.6 Deployment Diagram of Stuttering Detection
        4.1.7 Package Diagram of Stuttering Detection
    4.2 System Architecture
        4.2.1 Architectural Design
        4.2.2 Architectural Description
5   SYSTEM IMPLEMENTATION
    5.1 Modules
    5.2 Module Description
    5.3 Sample Coding
6   SYSTEM TESTING
    6.1 Methodology
7   CONCLUSION
8   FUTURE ENHANCEMENT
9   APPENDIX
    9.1 Coding
    9.2 Screen Shot
10  REFERENCES

TABLE OF FIGURES

Fig No   Name of the Figure

1.1      Block diagram of speech input
4.1.1    Use case diagram of stuttering detection
4.1.3    Activity diagram of stuttering detection
4.1.4    Collaboration diagram of stuttering detection
4.1.5    Component diagram of stuttering detection
4.2.1    Architectural diagram
6.1      Pipeline of natural language processing in artificial intelligence
9.1      Start screen
9.2      Loading screen
9.3      Recognizing screen
9.4      Output screen

CHAPTER 1
INTRODUCTION
1.1 OVERVIEW

Stuttering is one manifestation of speech disorders, in which the delivery of speech is not
smooth: pronunciations are repeated, prolonged, blocked or stalled at the syllable or phone
level. The incidence rate of stuttering is high. In social groups, about 1% of people stutter,
and the rate is as high as 6% to 6.6% among children; about 80% of childhood stuttering
disappears automatically, while the remaining 20% (that is, 1% of the entire population) have
difficulty returning to normal and finally develop "developmental stuttering". The stuttering
phenomenon can be divided into the following categories:

a) Bursts stuttering: A syllable is repeated when speaking.

b) Reciprocating stuttering: Some syllables are repeated when speaking.

c) Blocking stuttering: Words at the beginning of a sentence may still be barely
pronounceable; however, on meeting a hard pronunciation, the words are blocked.

d) Ankylosing stuttering: If a few bursts of stuttering occur, speakers become nervous and the
tongue seems to freeze, so that even easy words cannot be pronounced.

e) Difficult pronunciation stuttering: The first pronunciation of every sentence cannot be
produced; however hard they try, speakers can only produce a dull, low "ah" or "eh" sound.

Among these, bursts stuttering and reciprocating stuttering are the most common types of
stuttering, and they are among the main factors affecting speech fluency.

1.2 PROBLEM STATEMENT

➢ Obtaining improved speech quality for real-time recording
➢ Speech recognition security systems
➢ Measuring psychological illness and mental stress in children
➢ Improving vocal abilities, suggesting early detection for treatment

The work involves one-dimensional speech signal analysis and a study of the factors influencing
audio quality. The flow is: stuttered data collection, signal preprocessing and feature
extraction, then disorder identification and correction using different classification
techniques.

STUTTERED SPEECH RECOGNITION: TRADITIONAL MACHINE LEARNING &
DEEP LEARNING BASED APPROACHES

Different classification and clustering methods are used to recognize stuttered speech.
Early studies on stuttered speech recognition were mainly based on DTW score matching and
traditional machine learning algorithms. Before discussing the different approaches to
classifying stuttering, we recapitulate some basics of machine learning. Machine learning is
a sub-field of the broader family of Artificial Intelligence.

Fig 1.1 Block diagram of speech input

An architectural representation of Artificial Intelligence vs. Machine Learning vs. Deep
Learning, with the different types of machine learning, is given in the figure. From the
perspective of the characteristics of the training data, machine learning can be broadly
categorized into supervised learning, unsupervised learning and reinforcement learning. These
types of machine learning techniques are briefly discussed below. Supervised Learning: The
most widely used type of machine learning technique; it learns from labeled training data to
make predictions about learning targets. Unsupervised Learning: A learning methodology that
learns from training data that is neither classified nor labeled, grouping unsorted
information according to similarities, patterns and differences without any prior training
on the data.
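
As a brief illustration of the difference (a hypothetical scikit-learn snippet on toy data,
not part of the project code):

# Illustrative sketch only: supervised vs. unsupervised learning with
# scikit-learn on toy data (not part of the project code).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = np.array([[0.1], [0.2], [0.9], [1.0]])   # feature vectors
y = np.array([0, 0, 1, 1])                   # labels (used only when supervised)

# Supervised: learn from labeled examples, then predict labels for new data.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.15], [0.95]]))         # -> [0 1]

# Unsupervised: group the same data by similarity, with no labels given.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)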

TYPICAL DISFLUENCY

If there is a breakdown in fluency, then we can say that the resultant speech is disfluent.
Disfluencies occur frequently in typical spontaneous speech, at a rate of around 6 per 100
words (Bortfeld, Leon, Bloom, Schober & Brennan, 2001; Eklund, 2004; Fox Tree, 1995;
Shriberg, 1994). They occur at a higher rate in longer utterances (Oviatt, 1995; Shriberg, 1994)
and in more complex utterances (Lickley, 2001; Shriberg, 1994). Individuals vary considerably
in the rate at which they produce disfluencies, but it is difficult to find a speaker who is never
disfluent. The word 'disfluency' is defined in several different ways in the research
literature, and there seems to be no consensus on what phenomena it includes, so it is
important to begin with an operational definition. Our definition of fluency refers to the
flow of speech, so a disfluency involves a break in that flow, when the speaker stops for a
moment in a place, or for a length of time, not predicted by typical fluent production.

1.3 LITERATURE SURVEY

TITLE 1: STUTTERING DETECTION USING ATROUS CONVOLUTIONAL NEURAL NETWORKS

YEAR AND AUTHOR: 2022, Abedal-Kareem Al-Banna, Eran Edirisinghe and Hui Fang
DESCRIPTION:
Stuttering is a neurodevelopmental speech disorder that affects 70 million people
worldwide, approximately 1% of the whole population. People who stutter (PWS) have
common speech symptoms such as blocks, interjections, repetitions, and prolongations.
Speech-language pathologists (SLPs) commonly observe these four groups of symptoms to
evaluate stuttering severity. The evaluation process is tedious and time-consuming for SLPs
and PWS. Therefore, this paper proposes a new model for stuttering event detection that
may help SLPs evaluate stuttering severity. The model is based on a log mel spectrogram
and a 2D atrous convolutional network designed to learn spectral and temporal features. The
authors rigorously evaluate the performance of the model on two stuttering datasets (UCLASS
and FluencyBank) using common speech metrics, i.e. F1-score, recall, and the area under the
curve (AUC). Their experimental results indicate that the model outperforms state-of-the-art
methods in prolongation, with an F1 of 52% and 44.5% on the UCLASS and FluencyBank datasets,
respectively; it also gains 5% and 3% margins on the UCLASS and FluencyBank datasets
for the fluent class.

TITLE 2: SPEECH PROCESSING FOR STUTTER DETECTION

YEAR AND AUTHOR:


DESCRIPTION:
Speech recognition and analysis have gained much attention in the fields of healthcare,
the military, security and machine learning. Modern speech analysis converges signal
processing, pattern recognition and linguistics techniques within a higher-level statistical
framework. Research is ongoing in the field of speech synthesis and in improving audio quality
and accuracy, with a flat response to compression or coding.

TITLE 3: SEP-28k: A DATASET FOR STUTTERING EVENT DETECTION FROM PODCASTS WITH PEOPLE WHO
STUTTER

YEAR AND AUTHOR: 2021, Aparna Joshi, Colin Lea, Jeffrey P. Bigham, Sachin Kajarekar and
Vikramjit Mitra
DESCRIPTION:
The ability to automatically detect stuttering events in speech could help speech pathologists
track an individual’s fluency over time or help improve speech recognition systems for people
with atypical speech patterns. Despite increasing interest in this area, existing public datasets
are too small to build generalizable dysfluency detection systems and lack sufficient
annotations. In this work, we introduce Stuttering Events in Podcasts (SEP-28k), a dataset
containing over 28k clips labeled with five event types including blocks, prolongations, sound
repetitions, word repetitions, and interjections. Audio comes from public podcasts largely
consisting of people who stutter interviewing other people who stutter. We benchmark a set of
acoustic models on SEP-28k and the public FluencyBank dataset and highlight how simply
increasing the amount of training data improves relative detection performance by 28% and
24% F1 on each. Annotations from over 32k clips across both datasets will be publicly
released.

TITLE 4: A PROPOSED FRAMEWORK FOR STUTTER DETECTION: IMPLEMENTATION ON EMBEDDED SYSTEMS

YEAR AND AUTHOR: 2022, Abhijit S. Pandya, Bassem Alhalabi, Harshal A. Sanghvi and Jonathan
Taylor
DESCRIPTION:
It is estimated that more than 70 million people in the world stutter. One of the major
problems facing speech professionals who work with stuttering patients is quantitatively
monitoring and tracking improvements in and outside of therapy sessions. After extensive
research, it was proposed to develop a biomedical device that could be worn daily by patients
to monitor and record key events in everyday conversations, tracking instances of stutters to
be later analyzed by speech professionals. This biomedical innovation is intended to assist
the health professionals and caretakers of stuttering individuals, helping them overcome the
behavior and compete in the real world. The paper describes in detail a feasibility study
carried out and the prototype developed for such a device, and contemplates its future uses
and developments. The innovation will provide data regarding the various parameters of
stuttering that need to be evaluated, and this evaluation speeds up the therapy provided by
health professionals.

TITLE 5: A COMPUTER-ASSIST ALGORITHM TO DETECT REPETITIVE STUTTERING AUTOMATICALLY

YEAR AND AUTHOR: 2013, Bin Dong, Junbo Zhang and Yonghong Yan
DESCRIPTION:
An algorithm to detect Chinese repetitive stuttering by computer is studied. According to
the features of repetitions in Chinese stuttered speech, improvements are proposed based on
previous research findings. First, a multi-span looping forced-alignment decoding network is
designed to detect multi-syllable repetitions in Chinese stuttered speech. Second, a branch
penalty factor is added to the network to adjust the decoding trend using recursive search,
in order to reduce the error arising from the complexity of the decoding network. Finally,
the detected stutters are re-judged by calculating confidence scores to improve the
reliability of the detection result.

CHAPTER 2

SYSTEM ANALYSIS

2.1 EXISTING SYSTEM

The existing methods for stuttering detection employ spectral features such as mel-frequency
cepstral coefficients (MFCCs) and linear prediction cepstral coefficients (LPCCs), or their
variants, which capture formant-related information. Other spectral features such as
pitch, zero-crossing rate, shimmer, and spectral spread are also used. Finally, those features
are modeled with statistical modeling methods such as hidden Markov model (HMM), support
vector machine (SVM), Gaussian mixture model (GMM), etc. An alternative strategy of
stuttering detection is to apply ASR on the audio speech signal to get the spoken texts and then
to use language models. Even though this method of detecting stuttering has achieved
encouraging results and has been proven effective, the reliance on ASR makes it
computationally expensive and prone to error.

2.2 PROPOSED SYSTEM

The proposed framework mainly includes the construction of a dataset to facilitate studies on
speech disfluencies, primarily in the Telugu language. It then goes on to build machine
learning models using four different machine learning algorithms, to test the models with other
voice samples, and to verify the efficiency of each model by measuring accuracy. The final
stage integrates the model with a user interface for general-purpose use. The proposed
system uses LSTM classification for the speech classification; a sketch of such a classifier
is given below.
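
A minimal sketch of what such an LSTM classifier could look like, written here with Keras;
the input shape, layer sizes and the five disfluency classes are illustrative assumptions,
not the report's exact configuration:

# Hedged sketch of an LSTM speech classifier in Keras. The input shape,
# layer sizes and the five disfluency classes are assumptions.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

NUM_FRAMES, NUM_FEATURES, NUM_CLASSES = 100, 13, 5   # e.g. 13 MFCCs per frame

model = Sequential([
    LSTM(64, input_shape=(NUM_FRAMES, NUM_FEATURES)),   # summarise the frame sequence
    Dropout(0.3),
    Dense(NUM_CLASSES, activation="softmax"),            # one score per disfluency class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random arrays stand in for MFCC sequences extracted from labelled clips.
X = np.random.rand(32, NUM_FRAMES, NUM_FEATURES)
y = np.random.randint(0, NUM_CLASSES, size=32)
model.fit(X, y, epochs=2, batch_size=8)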

Characteristic of stuttering

Stuttering is a speech disorder, usually first detected between the ages of 18 months and
24 months. It is mainly a problem of fluency, and the delivery of speech in stuttering varies
considerably across speaking situations, such as diplomatic, official or presentation modes of
conversation versus conversation in a home atmosphere. There are several causes of stuttering,
some of which, along with the types, are discussed in this section. Based on the literature
and previous studies, the common causes of stuttering are genetic, physiological, congenital,
auditory and environmental. A brief discussion of each follows:

➢ Genetics – Recent international research has identified a few specific genes for stuttering,
and genetic family linkage plays a role.
➢ Physiological – Brain imaging studies show that slight dysfunction of the brain during
speaking reduces speech production and the ability to keep up the fluency of words or
sentences; weakness of the human nervous system may act in the process.
➢ Congenital – Congenital factors such as physical trauma at the time of birth, cerebral
palsy or retardation may cause stuttering. The conditions are also found in siblings and
during sudden growth in linguistic ability.
➢ Auditory – Deafness and hardness of hearing have an impact on stuttering. Slow response to
audio increases the stuttering habit.
➢ Environmental – An uncomfortable and stressful situation is a significant reason for the
development of stuttering behaviours.

CHAPTER 3
SYSTEM CONFIGURATION

3.1 HARDWARE REQUIREMENTS

• Processor : Dual-core processor, 2.6 GHz
• RAM : 1 GB
• Hard disk : 160 GB
• Compact Disk : 650 MB
• Keyboard : Standard keyboard
• Monitor : 15 inch color monitor

3.2 SOFTWARE REQUIREMENTS

• Front End : Python
• Dataset : CSV

3.3 Domain

Machine Learning

Machine Learning is a branch of artificial intelligence that develops algorithms which learn
the hidden patterns of datasets and use them to make predictions on new, similar data, without
being explicitly programmed for each task.

3.4 PyChamber

PyChamber is a Python program written with scikit-rf, pyqtgraph, and Qt to provide a modern
and easy-to-use means of collecting anechoic chamber data. It was developed at Penn State
using the anechoic chamber there, and so it currently only supports that equipment.

CHAPTER 4

SYSTEM DESIGN

In this chapter, the various UML diagrams for the Stuttering Detection system are represented
and the various functionalities are explained.

4.1 UNIFIED MODELLING LANGUAGE

Unified Modeling Language (UML) is a standardized modeling language enabling developers to
specify, visualize, construct and document the artifacts of a software system. Thus, UML
makes these artifacts scalable, secure and robust in execution. It uses graphic notation to
create visual models of software systems. UML is designed to enable users to develop an
expressive, ready-to-use visual modeling language. In addition, it supports high-level
development concepts such as frameworks, patterns and collaborations. Some of the UML
diagrams are discussed below.

4.1.1 USE CASE DIAGRAM OF STUTTERING DETECTION

Use case diagrams are used for high-level requirement analysis of a system: when the
requirements of a system are analysed, its functionalities are captured in use cases. It can
therefore be said that use cases are nothing but the system functionalities written in an
organized manner. The second element relevant to use cases is the actors. Actors can be
defined as something that interacts with the system; they can be human users, internal
applications or external applications. Use case diagrams are used to gather the requirements
of a system, including internal and external influences. These requirements are mostly design
requirements. Hence, when a system is analyzed to gather its functionalities, use cases are
prepared and actors are identified. In the Unified Modeling Language (UML), a use case
diagram can summarize the details of your system's users (also known as actors) and their
interactions with the system. To build one, you use a set of specialized symbols and
connectors. An effective use case diagram represents the scenarios in which your system or
application interacts with people, organizations, or external systems, and the goals that
your system or application helps those entities (actors) achieve.

(Diagram: use cases Speech Input, Dataset Train, Speech Preprocessing, Feature Extraction
and Classification.)

Figure 4.1.1 Use case diagram of stuttering detection

4.1.2 SEQUENCE DIAGRAM OF STUTTERING DETECTION

Sequence diagrams model the flow of logic within the system in a visual manner, enabling one
both to document and to validate the logic; they are commonly used for both analysis and
design purposes.

4.1.3 ACTIVITY DIAGRAM OF STUTTERING DETECTION

An activity is a particular operation of the system, and the activity diagram is suitable for
modelling the activity flow of the system. Activity diagrams are not only used for visualizing
the dynamic nature of a system; they are also used to construct the executable system by
using forward and reverse engineering techniques. The only thing missing in an activity
diagram is the message part. An application can have multiple systems; an activity diagram
also captures these systems and describes the flow from one system to another, a specific
usage not available in other diagrams. These systems can be databases, external queues, or
any other systems.

(Diagram: Speech Input → HMM Conversion → Dataset Train → Speech Preprocessing → Feature
Extraction → Classification → Stuttering Word Detection.)

Figure 4.1.3 Activity diagram of stuttering detection

4.1.4 COLLABORATION DIAGRAM OF STUTTERING DETECTION

The next interaction diagram is the collaboration diagram, which shows the object
organization. In a collaboration diagram, the method-call sequence is indicated by a
numbering technique: the numbers indicate how the methods are called one after another. The
method calls are similar to those of a sequence diagram, but whereas the sequence diagram
does not describe the object organization, the collaboration diagram does.

(Diagram: Speech Input → Voice Module (voice to text) → Dataset Train → Speech Preprocessing
(NLP preprocessing) → Feature Extraction → Classification → Stuttering Detection.)

Figure 4.1.4 Collaboration diagram of stuttering detection

4.1.5 COMPONENT DIAGRAM OF STUTTERING DETECTION

A component diagram displays the structural relationship of the components of a software
system. These are mostly used when working with complex systems that have many components,
such as sensor nodes, cluster heads and base stations. A component diagram does not describe
the functionality of the system, but it describes the components used to provide those
functionalities. It allows verification that a system's required functionality is acceptable.
These diagrams are also used as a communication tool between the developers and the
stakeholders of the system. Programmers and developers use the diagrams to formalize a
roadmap for the implementation, allowing for better decision-making about task assignment or
needed skill improvements. System administrators can use component diagrams to plan ahead,
using the view of the logical software components and their relationships on the system.
Components communicate with each other using interfaces, and the interfaces are linked using
connectors.

(Diagram: Stuttering Detection components – Speech Input, Voice Module, Dataset Train,
Speech Preprocessing, Feature Extraction, Classification and Stuttering Word Detection.)

Figure 4.1.5 Component diagram of stuttering detection

4.1.6 DEPLOYMENT DIAGRAM OF STUTTERING DETECTION

A deployment diagram shows the hardware of the system and the software running on that
hardware. Deployment diagrams are useful when a software solution is deployed across multiple
machines, such as sensor nodes, cluster heads and base stations, each with a unique
configuration. A deployment diagram is a UML diagram type that shows the execution
architecture of a system, including nodes such as hardware or software execution
environments, and the middleware connecting them. Deployment diagrams are typically used to
visualize the physical hardware and software of a system; using one, you can understand how
the system will be physically deployed on the hardware. Deployment diagrams help model the
hardware topology of a system, in contrast to other UML diagram types, which mostly outline
the logical components of a system. A UML deployment diagram shows the configuration of
run-time processing nodes and the components that live on them. Deployment diagrams are a
kind of structure diagram used in modeling the physical aspects of an object-oriented system,
and they are often used to model the static deployment view of a system; here, the deployment
diagram shows how the modules are deployed in the system.

4.1.7 PACKAGE DIAGRAM OF STUTTERING DETECTION

Package diagrams are used to reflect the organization of packages and their elements. When
used to represent class elements, package diagrams provide a visualization of the namespaces.
Package diagrams are used to structure high-level system elements, and they can simplify
complex class diagrams by grouping classes into packages. A package is a collection of
logically related UML elements; packages are depicted as file folders and can be used on any
of the UML diagrams. Package diagrams are structural diagrams used to show the organization
and arrangement of various model elements in the form of packages. A package is a grouping of
related UML elements, such as diagrams, documents, classes, or even other packages. Each
element is nested within the package, which is depicted as a file folder within the diagram,
and then arranged hierarchically within the diagram. Package diagrams are most commonly used
to provide a visual organization of the layered architecture within any UML classifier, such
as a software system. For the developed application, the package diagram represents how the
elements are logically related.

4.2 SYSTEM ARCHITECTURE

4.2.1 Architectural Design

An architecture diagram is a graphical representation of a set of concepts that are part of
an architecture, including its principles, elements and components. An architecture diagram
can help system designers and developers visualize the high-level, overall structure of their
system or application, in order to ensure the system meets their users' needs. Using an
architecture diagram, you can also describe patterns that are used throughout the design. It
is somewhat like a blueprint used as a guide, so that you and your colleagues can discuss,
improve and follow it.

(Diagram: Stuttering Detection Using Machine Learning – Speech Input → Voice Module →
Dataset Train → Preprocessing → Classification → yes/no → Stutter Word Prediction.)

Fig 4.2.1 Architectural Diagram

4.2.2 Architectural Description

A system architecture or systems architecture is the conceptual model that defines the
structure, behavior, and more views of a system. An architecture description is a formal
description and representation of a system, organized in a way that supports reasoning about
the structures and behaviors of the system. System architecture can comprise system
components, the externally visible properties of those components, the relationships (e.g. the
behavior) between them. It can provide a plan from which products can be procured, and
systems developed, that will work together to implement the overall system. There have been
efforts to formalize languages to describe system architecture; collectively these are called
architecture description languages (ADLs).

Various organizations define systems architecture in different ways, including:

➢ An allocated arrangement of physical elements which provides the design solution for
a consumer product or life-cycle process intended to satisfy the requirements of the
functional architecture and the requirements baseline.
➢ Architecture comprises the most important, pervasive, top-level, strategic inventions,
decisions, and their associated rationales about the overall structure (i.e., essential
elements and their relationships) and associated characteristics and behavior.
➢ If documented, it may include information such as a detailed inventory of current
hardware, software and networking capabilities; a description of long-range plans and
priorities for future purchases, and a plan for upgrading and/or replacing dated
equipment and software.

One mathematical model that focuses on the functioning of biological nervous systems is the
Artificial Neural Network (ANN). These models are structured in three parts, namely the
input layer, the hidden layers and the output layer. An ANN can be represented as a weighted
directed graph in which artificial neurons are the nodes, and the connections between neuron
inputs and neuron outputs are directed edges with weights. Based on the connection pattern,
ANNs are categorized as feed-forward networks and recurrent networks. If there is no loop
in the graph, the network is of the feed-forward type, further categorized as a single-layer
perceptron, a multilayer perceptron or a radial basis function network. Natural language
processing can be introduced to obtain speech free of repetition-type stuttering; for that,
different existing Text-to-Speech (TTS) systems can be used.

CHAPTER 5

SYSTEM IMPLEMENTATION

5.1 MODULES

• SPEECH INPUT
• VOICE MODULE
• DATASET TRAIN
• SPEECH PREPROCESSING
• CLASSIFICATION
• STUTTER WORD DETECTION

1. SPEECH INPUT

Stuttering speech recognition is a well-studied concept in speech signal processing.
Classification of speech disorders is the main focus of this study, and classification of
stuttered speech is becoming more important with the advances in machine learning and deep
learning. Stuttered speech is given as the input; the system processes the stutter and
detects the affected words.

For example:

Part-word repetitions – "I w-w-w-want a drink."
One-syllable word repetitions – "Go-go-go away."
Prolonged sounds – "Ssssssssam is nice."

2. VOICE MODULE

Voice to text converters have become a necessary tool for individuals and businesses alike.
These tools use speech recognition technology to convert audio files, including voice
commands and speech from video files, into a text transcription.

Voice to text conversion is a relatively simple process:

• Record your voice: Start by recording your voice on a device such as an iPhone or
Android smartphone, or on your Mac or PC. The recorded audio is often saved as a
WAV file, but other formats are typically supported as well.

• Choose a transcription tool: Upload the audio recording to transcription software
or an online tool. This could be an app, a desktop program, or a browser-based online
tool. Some of these tools even offer real-time transcription.
• Transcribe audio: The transcription service will convert your audio file to a text file,
often in TXT or DOC format. Many services offer high-quality transcription,
though accuracy can vary. Some tools also allow you to convert speech directly to
text online, without the need for an audio recording.
• Edit the text: After transcription, you can edit the text to ensure it accurately
represents your voice recording. Many tools offer integrated editing functionality.
There are several free speech-to-text tools you can use; a minimal sketch of the
conversion step follows this list.
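
The project's own voice module (see the sample coding in this chapter) follows this flow
with the speech_recognition library; a minimal standalone version of the conversion step,
mirroring that code, is:

# Minimal voice-to-text conversion with the speech_recognition library,
# mirroring the flow used in the project's voice module.
import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:                 # record from the default microphone
    audio_data = r.record(source, duration=8)   # capture 8 seconds of audio

try:
    text = r.recognize_google(audio_data)       # Google Web Speech API
    print("Transcription:", text)
except sr.UnknownValueError:
    print("Speech was not understood.")
except sr.RequestError as e:
    print("API request failed:", e)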

3. DATASET TRAIN

Training data is labeled data used to teach AI models or machine learning algorithms
to make proper decisions. For example, if you are trying to build a model for a self-driving
car, the training data will include images and videos labeled to identify cars vs street signs
vs people. Here the stutter dataset is analyzed.

TORGO. This dataset was developed through a collaboration between the departments of
Speech-Language Pathology and Computer Science at the University of Toronto and the Holland
Bloorview Kids Rehabilitation Hospital. It comprises samples from seven persons diagnosed
with cerebral palsy or amyotrophic lateral sclerosis, including four males and three females
aged between 16 and 50 years. In addition, it contains samples from control speakers of the
same age.

FluencyBank. This is a shared database for the study of fluency development, developed by
Nan Bernstein Ratner (University of Maryland) and Brian MacWhinney (Carnegie Mellon
University). The platform provides audio and video files with transcriptions of adults and
children who stutter. The FluencyBank set used here consists of interview data from 32 PWS.

4. SPEECH PREPROCESSING

NLP preprocessing is used to preprocess the speech for stutter detection. Artificial
intelligence has become part of our everyday lives: Alexa and Siri, text and email
autocorrect, customer service chatbots. They all use machine learning algorithms and Natural
Language Processing (NLP) to process, "understand", and respond to human language, both
written and spoken. NLP sentiment analysis, for example, automatically classifies the
sentiment of a text as positive, neutral or negative. Although NLP and its sister study,
Natural Language Understanding (NLU), are constantly growing in leaps and bounds in their
ability to compute words and text, human language is incredibly complex, fluid, and
inconsistent, and it presents serious challenges that NLP is yet to completely overcome.

Errors in text and speech

Misspelled or misused words can create problems for text analysis. Autocorrect and grammar
correction applications can handle common mistakes, but don’t always understand the writer’s
intention. With spoken language, mispronunciations, different accents, stutters, etc., can be
difficult for a machine to understand. However, as language databases grow and smart
assistants are trained by their individual users, these issues can be minimized.

Keyword Extraction

The final key to the text analysis puzzle, keyword extraction, is a broader form of the
techniques already covered. By definition, keyword extraction is the automated process of
extracting the most relevant information from text using AI and machine learning algorithms.
The software can be moulded to search for the keywords relevant to a given need.

Lemmatization and Stemming

More technical than the other topics, lemmatization and stemming refer to the breakdown,
tagging, and restructuring of text data based on either the root stem or the definition.
That might seem like saying the same thing twice, but the two sorting processes can yield
different valuable data. That is a lot to tackle at once, but by understanding each process,
you should be well on your way to a smooth and successful NLP application.

5. CLASSIFICATION

The ANN model is used to detect the stuttered events. The particular stuttered events to be
located are repetitions and prolongations, because repetitions and prolongations are
ubiquitous in stuttered speech. Artificial neural networks are widely applied in the field
of classification. One of the main reasons for the popularity of ANNs in this area is the
fact that, in contrast to traditional statistical methods, networks fit the data without the
necessity of defining any additional function or distribution for the input variables. They
are also able to estimate the probability of an element belonging to a group, which permits
the use of the ANN as an a posteriori probability estimator for specified objects. An
illustrative sketch of such a classifier is given below.
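
As an illustration of this setup (a sketch only: the feature dimension, the labels and the
network size are assumptions, and scikit-learn's MLPClassifier stands in for the report's
ANN):

# Hedged sketch: a multilayer perceptron classifying speech segments as
# fluent / repetition / prolongation. Feature size and labels are assumed.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.random.rand(200, 13)     # 13 acoustic features per segment (placeholder data)
y = np.random.choice(["fluent", "repetition", "prolongation"], size=200)

ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)
ann.fit(X, y)

print(ann.predict(X[:3]))         # predicted classes
print(ann.predict_proba(X[:3]))   # the a posteriori probability estimates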

6. STUTTER WORD DETECTION

The stuttered word is detected after the classifier has been trained and tested on the
stutter detection dataset.

5.3 Sample Coding

import time
from tkinter import Tk, messagebox, ttk
from tkinter import *
from tkinter.messagebox import askyesno

import cv2
from PIL import Image, ImageTk
import numpy as np
import pyttsx3
import speech_recognition as sr

from test import sample


class tk_master:

    def __init__(self):
        self.master = 'ar_master'
        self.title = 'Stutter Recognition'
        self.titlec = 'STUTTER RECOGNITION'
        self.backround_color = '#2F4F4F'
        self.text_color = '#c0c0c0'
        self.backround_image = 'images/background_hd1.jpg'

    def get_title(self):
        return self.title

    def get_titlec(self):
        return self.titlec

    def get_backround_color(self):
        return self.backround_color

    def get_text_color(self):
        return self.text_color

    def get_backround_image(self):
        return self.backround_image

    def set_window_design(self):
        # Create the start window, centred on the screen.
        root = Tk()
        w = 780
        h = 500
        ws = root.winfo_screenwidth()
        hs = root.winfo_screenheight()
        x = (ws / 2) - (w / 2)
        y = (hs / 2) - (h / 2)
        root.geometry('%dx%d+%d+%d' % (w, h, x, y))
        self.bg = ImageTk.PhotoImage(file='images/background_hd1.jpg')
        root.title(self.title)
        root.resizable(False, False)

        # Background image and title text drawn on a canvas.
        bg = ImageTk.PhotoImage(file=self.backround_image)
        canvas = Canvas(root, width=200, height=300)
        canvas.pack(fill="both", expand=True)
        canvas.create_image(0, 0, image=bg, anchor=NW)
        canvas.create_text(390, 20, text=self.title,
                           font=("Times New Roman", 24), fill=self.text_color)

        def clickHandler(event):
            # Opens the recognition window (the event is passed as 'self').
            tt = tk_master
            tt.chat_app(event)

        # Clickable start image and "Start" label.
        image = Image.open('images/admin.png')
        img = image.resize((150, 150))
        my_img = ImageTk.PhotoImage(img)
        image_id = canvas.create_image(400, 170, image=my_img)
        canvas.tag_bind(image_id, "<1>", clickHandler)

        admin_id = canvas.create_text(400, 270, text="Start",
                                      font=("Times New Roman", 24),
                                      fill=self.text_color)
        canvas.tag_bind(admin_id, "<1>", clickHandler)

        root.mainloop()
CHAPTER 6

SYSTEM TESTING

6.1 METHODOLOGY
1. NLP text processing

Natural language processing (NLP) is a branch of artificial intelligence (AI) that enables
machines to understand human language. The main intention of NLP is to build systems that are
able to make sense of text and then automatically execute tasks like spell-check, text
translation, topic classification, etc. Companies today use NLP in artificial intelligence to
gain insights from data and automate routine tasks.

Pipeline of natural language processing in artificial intelligence

Step 1: Sentence segmentation

Sentence segmentation is the first step in the NLP pipeline. It divides the entire paragraph
into different sentences for better understanding. For example, "London is the capital and
most populous city of England and the United Kingdom. Standing on the River Thames in
the southeast of the island of Great Britain, London has been a major settlement for two
millennia. It was founded by the Romans, who named it Londinium."

After using sentence segmentation, we get the following result:

“London is the capital and most populous city of England and the United Kingdom.”

“Standing on the River Thames in the southeast of the island of Great Britain, London has
been a major settlement for two millennia.”

“It was founded by the Romans, who named it Londinium.”

Step 2: Word tokenization

Word tokenization breaks the sentence into separate words or tokens. This helps understand
the context of the text. When tokenizing the sentence “London is the capital and most
populous city of England and the United Kingdom”, it is broken into separate words, i.e.,
“London”, “is”, “the”, “capital”, “and”, “most”, “populous”, “city”, “of”, “England”,
“and”, “the”, “United”, “Kingdom”, “.”
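
Steps 1 and 2 can be reproduced with NLTK (one common choice; the report does not mandate a
particular library):

# Sentence segmentation and word tokenization with NLTK (an assumed library).
import nltk
nltk.download("punkt", quiet=True)   # tokenizer models

text = ("London is the capital and most populous city of England and the "
        "United Kingdom. It was founded by the Romans, who named it Londinium.")

sentences = nltk.sent_tokenize(text)          # Step 1: sentence segmentation
print(sentences)

tokens = nltk.word_tokenize(sentences[0])     # Step 2: word tokenization
print(tokens)   # ['London', 'is', 'the', 'capital', ...]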

Step 3: Stemming

Stemming helps in preprocessing text. The model analyzes the parts of speech to figure out
what exactly the sentence is talking about.

Stemming normalizes words into their base or root form; in other words, it helps to predict
the part of speech for each token. For example, intelligently, intelligence, and intelligent
all originate from a single root, ‘intelligen’. However, in English there is no such word as
‘intelligen’.

Step 4: Lemmatization

Lemmatization removes inflectional endings and returns the canonical form of a word or
lemma. It is similar to stemming except that the lemma is an actual word. For example,
‘playing’ and ‘plays’ are forms of the word ‘play’. Hence, play is the lemma of these words.
Unlike a stem (recall ‘intelligen’), ‘play’ is a proper word.
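
A small NLTK illustration of the difference (an assumed library; the exact stem produced
depends on the stemmer used):

# Stemming returns a (possibly non-word) root; lemmatization returns an
# actual dictionary form.
import nltk
nltk.download("wordnet", quiet=True)
from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
print(stemmer.stem("intelligently"))            # a root such as 'intellig' - not a real word

lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize("plays", pos="v"))   # 'play' - a proper word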

Step 5: Stop word analysis

The next step is to consider the importance of each and every word in a given sentence. In
English, some words appear more frequently than others such as "is", "a", "the", "and". As they
appear often, the NLP pipeline flags them as stop words. They are filtered out so as to focus
on more important words.
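
With NLTK's English stop word list (again an assumed library), this step looks like:

# Filtering stop words out of a token list.
import nltk
nltk.download("stopwords", quiet=True)
from nltk.corpus import stopwords

stop_words = set(stopwords.words("english"))
tokens = ["London", "is", "the", "capital", "and", "most", "populous", "city"]
content = [t for t in tokens if t.lower() not in stop_words]
print(content)   # ['London', 'capital', 'populous', 'city']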

Step 6: Dependency parsing

Next comes dependency parsing, which is mainly used to find out how all the words in a
sentence are related to each other. To find the dependencies, we can build a tree and assign
a single word as the parent word; the main verb in the sentence acts as the root node.

Step 7: Part-of-speech (POS) tagging

POS tags mark verbs, adverbs, nouns, and adjectives, which helps indicate the meaning of
words in a grammatically correct way within a sentence.
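
Steps 6 and 7 can be illustrated with spaCy (an assumption; it requires the en_core_web_sm
model, installed via "python -m spacy download en_core_web_sm"):

# Dependency parsing and POS tagging with spaCy (an assumed library).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("London is the capital of the United Kingdom.")

for token in doc:
    # word, part of speech, dependency relation, and its parent (head) word
    print(token.text, token.pos_, token.dep_, token.head.text)
# The main verb ('is') carries the ROOT dependency of the tree.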

Voice conversion
The captured speech signals of the speaker contain background noise from different sources,
such as ambient noise, the microphone terminal, and the communication channel. To remove the
low-frequency background noise, the discretized speech signal of the spoken utterance is
passed through a second-order Butterworth IIR (infinite impulse response) highpass digital
filter (the filtering g[n]*h[n]). The choice of an IIR filter is based on its flexibility in
handling signals that are heavily nonlinear in nature, due to its nonlinear phase
characteristics, and on its flexibility in meeting constraints such as an arbitrary response.
Since most of the energy content of the voice signal is concentrated within the low frequency
range, the filter cut-off frequency is set at 0.40 kHz.

The GMM for speaker recognition is formulated to compute the GMM parameters of the extracted
feature vectors of spoken utterances that best match the speech feature templates in the
system database. Several techniques may be used to estimate the parameters of the GMM
(mixture weights, mean vector, covariance matrix) that describe the component distribution of
the extracted speech feature vectors. To establish the number of Gaussian distributions
useful for the GMM speaker model in the recognition task, experiments were performed using a
varying number of Gaussian distributions and MFCC feature vectors from the utterance
"increase volume upwards" for two speakers. Based on the recognition rates obtained with a
varying number of Gaussians, 20 Gaussians were found adequate for the GMM speaker models. A
sketch of this front end is given below.
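
A sketch of this front end under the stated settings (second-order Butterworth highpass at
0.40 kHz, 20-component GMM), written with SciPy, librosa and scikit-learn; the 16 kHz
sampling rate, the 13 MFCCs and the file name are illustrative assumptions:

# Sketch of the described front end: second-order Butterworth highpass at
# 400 Hz, MFCC extraction, and a 20-component GMM speaker model.
import librosa
from scipy.signal import butter, lfilter
from sklearn.mixture import GaussianMixture

y, fs = librosa.load("utterance.wav", sr=16000)   # hypothetical recording

b, a = butter(2, 400, btype="highpass", fs=fs)    # the g[n]*h[n] filtering step
y_filt = lfilter(b, a, y)

mfcc = librosa.feature.mfcc(y=y_filt, sr=fs, n_mfcc=13).T   # frames x 13 features

gmm = GaussianMixture(n_components=20, covariance_type="diag")
gmm.fit(mfcc)
print(gmm.score(mfcc))   # average log-likelihood: higher means a better match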

Algorithm used
1. ANN classification algorithm
The neural network (NN) model uses state transitions, association strengths, and functions,
whereas Markov chains are serial and parallel; this is a key distinction. Artificial neurons
are the fundamental units of an ANN: simple processors, known as neurons, simulate the
performance of a biological nerve cell and are conceived as models of the natural neuron. The
output signal is directed through the neuron's outgoing link, which is split into many
divisions that send the same signal; the outgoing divisions terminate at the incoming links
of other neurons in the network.

ANN Architecture

An ANN is a reasoning model whose data unit, the neuron, is modelled on the human brain to
perform its functions. A basic NN consists of a single neuron with variable synaptic weights
forming a linear combination.

Training with an Artificial Neural Network

An ANN is a basic electronic network of neuron chains, modelled on the brain, that processes
one record at a time and learns by comparing its output with the actual classification. In
the training phase, each record is used for supervised training: the output node for the
correct class is assigned the value 1 and the other nodes 0. A neuron in an ANN includes
(i) a cluster of input values (xi) and linked weights (wi), and (ii) a function (g) that sums
the weighted inputs and maps the result to an output (y).
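
That description maps directly onto a few lines of NumPy (the values here are purely
illustrative):

# The neuron described above: inputs x_i, weights w_i, a summing
# function g, and one output y.
import numpy as np

def g(s):
    return 1.0 / (1.0 + np.exp(-s))   # sigmoid activation

x = np.array([0.5, 0.1, 0.9])    # cluster of input values (xi)
w = np.array([0.4, -0.2, 0.7])   # linked weights (wi)

s = np.dot(w, x)   # weighted sum of the inputs
y = g(s)           # map the sum to the neuron's output
print(y)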

CHAPTER 7

CONCLUSION

Speech is the communication carrier used to express human thoughts, feelings and ideas.
Stuttering, or stammering, is a disorder of speech which affects millions of people around
the globe. In the field of stuttered speech recognition, different machine learning models
have been applied for analysis and classification over the last few decades. In this study,
different machine learning and deep learning models and their application to stuttered speech
recognition are discussed. The major classifier, the ANN, has been used to classify different
types of stuttering. Deep learning algorithms, briefly discussed in this study, have nowadays
become very popular for stuttering speech recognition compared with traditional machine
learning algorithms; deep neural networks can be employed to classify different types of
stuttering with better accuracy. There is very little research on the removal of the
different types of stuttering from a speech signal: identification of stuttering is required,
but the main focus should be on its removal. Interjection and prolongation types of
stuttering, as well as unvoiced speech, can be removed in different ways.

CHAPTER 8

FUTURE ENHANCEMENT

In future, stutter speech detection can be developed into an AI assistant, similar to Alexa
or Siri, that supports people with the disorder through a corrective voice module.

CHAPTER 9

APPENDIX

9.1 CODING

In earlier work, fluent and disfluent fragments of the same duration were fed to the network
in two stages: 40 four-second fragments containing blockades in the pronunciation of words
starting with the consonants p, b, t, d, k and g, and repetitions of 1 to 11 stop consonants.
To decrease the dimension of the input signal, a Kohonen network consisting of 21 input
neurons and 25 output neurons was used first. Then a multilayer perceptron including 171
input neurons, 53 hidden neurons and one output neuron was examined, achieving 96.67%
classification accuracy. In order to identify dysfluencies in the stuttered speech of
children, a two-stage technique was used to build an automatic recognition method: in the
first stage, speech was segmented into words, and the words were classified as fluent or
disfluent using an ANN classifier.

MAIN

import time
from tkinter import Tk, messagebox, ttk
from tkinter import *
from tkinter.messagebox import askyesno

import cv2
from PIL import Image, ImageTk
import numpy as np
import pyttsx3
import speech_recognition as sr

from test import sample


class tk_master:

    def __init__(self):
        self.master = 'ar_master'
        self.title = 'Stutter Recognition'
        self.titlec = 'STUTTER RECOGNITION'
        self.backround_color = '#2F4F4F'
        self.text_color = '#c0c0c0'
        self.backround_image = 'images/background_hd1.jpg'

    def get_title(self):
        return self.title

    def get_titlec(self):
        return self.titlec

    def get_backround_color(self):
        return self.backround_color

    def get_text_color(self):
        return self.text_color

    def get_backround_image(self):
        return self.backround_image

    def set_window_design(self):
        # Start window, centred on the screen.
        root = Tk()
        w = 780
        h = 500
        ws = root.winfo_screenwidth()
        hs = root.winfo_screenheight()
        x = (ws / 2) - (w / 2)
        y = (hs / 2) - (h / 2)
        root.geometry('%dx%d+%d+%d' % (w, h, x, y))
        self.bg = ImageTk.PhotoImage(file='images/background_hd1.jpg')
        root.title(self.title)
        root.resizable(False, False)

        bg = ImageTk.PhotoImage(file=self.backround_image)
        canvas = Canvas(root, width=200, height=300)
        canvas.pack(fill="both", expand=True)
        canvas.create_image(0, 0, image=bg, anchor=NW)
        canvas.create_text(390, 20, text=self.title,
                           font=("Times New Roman", 24), fill=self.text_color)

        def clickHandler(event):
            # Opens the recognition window (the event is passed as 'self').
            tt = tk_master
            tt.chat_app(event)

        image = Image.open('images/admin.png')
        img = image.resize((150, 150))
        my_img = ImageTk.PhotoImage(img)
        image_id = canvas.create_image(400, 170, image=my_img)
        canvas.tag_bind(image_id, "<1>", clickHandler)

        admin_id = canvas.create_text(400, 270, text="Start",
                                      font=("Times New Roman", 24),
                                      fill=self.text_color)
        canvas.tag_bind(admin_id, "<1>", clickHandler)

        root.mainloop()

    def chat_app(self):
        # Recognition window: records speech, transcribes it and reads it back.
        chat_app_root = Toplevel()
        get_data = sample()
        w = 780
        h = 500
        ws = chat_app_root.winfo_screenwidth()
        hs = chat_app_root.winfo_screenheight()
        x = (ws / 2) - (w / 2)
        y = (hs / 2) - (h / 2)
        chat_app_root.geometry('%dx%d+%d+%d' % (w, h, x, y))
        chat_app_root.title(get_data.get_title())
        chat_app_root.resizable(False, False)

        bg = ImageTk.PhotoImage(file='images/background_hd1.jpg')
        canvas = Canvas(chat_app_root, width=200, height=300)
        canvas.pack(fill="both", expand=True)
        canvas.create_image(0, 0, image=bg, anchor=NW)
        canvas.create_text(390, 20, text="STUTTER RECOGNITION",
                           font=("Times New Roman", 24),
                           fill=get_data.get_text_color())
        admin_id1 = canvas.create_text(390, 70, text="Voice Input",
                                       font=("Times New Roman", 24),
                                       fill=get_data.get_text_color())
        admin_id2 = canvas.create_text(100, 120, text="USER i/p: ",
                                       font=("Times New Roman", 24),
                                       fill=get_data.get_text_color())
        admin_user = canvas.create_text(200, 120, text="-",
                                        font=("Times New Roman", 24),
                                        fill=get_data.get_text_color())
        # admin_id2 = canvas.create_text(100, 170, text="TYPE : ", ...)
        # admin_type = canvas.create_text(200, 170, text="-", ...)
        status_id = canvas.create_text(400, 120, text="Loading...",
                                       font=("Times New Roman", 24),
                                       fill=get_data.get_text_color())

        global e1, tree
        global w1, e2
        # e1 = Entry(canvas, font=('times', 15, ' bold '))
        # canvas.create_window(400, 170, window=e1)
        w1 = StringVar()
        e2 = Label(canvas, font=('times', 10, ' bold '), width=50, height=15,
                   textvariable=w1, anchor="w")
        canvas.create_window(450, 320, window=e2)

        def SpeakText(command):
            # Speak the given text through the pyttsx3 TTS engine.
            engine = pyttsx3.init()
            engine.say(command)
            engine.runAndWait()
            engine.stop()

        def SpeakText1(command):
            engine = pyttsx3.init()
            engine.say(command)
            engine.runAndWait()
            engine.stop()

        def voice_input():
            # Record 8 seconds from the microphone, transcribe it with the
            # Google recognizer, then display and speak the result.
            try:
                canvas.update()
                canvas.itemconfig(status_id, text='Loading...')
                text = ''
                r = sr.Recognizer()
                with sr.Microphone() as source:
                    audio_data = r.record(source, duration=8)
                    text = r.recognize_google(audio_data)
                    previous_data = w1.get()
                    w1.set(previous_data + "\nData : " + str(text))
                    # print(text)
                    canvas.update()
                    time.sleep(1)
                    SpeakText1(text)
                    canvas.itemconfig(status_id, text='Recognizing...')
                    canvas.update()
                    time.sleep(1)
            except:
                msg = "Enter Input Voice"
                SpeakText1(msg)

        # voice_input()

        def exit_program():
            voice_input()

        b1 = Button(canvas, text="Start", command=exit_program,
                    font=('times', 15, ' bold '))
        canvas.create_window(600, 120, window=b1)

        def test():
            # Greets the user once the window becomes idle.
            get_data = tk_master()
            user = "Hellow User"
            # w1.set(user)
            # SpeakText(user)
            # canvas.itemconfig(admin_user, text=user)
            canvas.itemconfig(status_id, text='Loading...')
            canvas.update_idletasks()
            engine = pyttsx3.init()
            engine.say(user)
            engine.runAndWait()
            engine.stop()

        chat_app_root.after_idle(test)
        chat_app_root.mainloop()


ar = tk_master()
root = ar.set_window_design()

TEST

import threading
import time
from difflib import SequenceMatcher
from tkinter import Tk, Canvas, NW, Entry
from tkinter import *

import pyttsx3
import speech_recognition as sr
from PIL import ImageTk


class sample:

    def __init__(self):
        self.master = 'ar_master'
        self.title = 'Stutter Recognition'
        self.titlec = 'STUTTER RECOGNITION'
        self.backround_color = '#2F4F4F'
        self.text_color = '#c0c0c0'
        self.backround_image = 'images/background_hd1.jpg'

    def get_title(self):
        return self.title

    def get_titlec(self):
        return self.titlec

    def get_backround_color(self):
        return self.backround_color

    def get_text_color(self):
        return self.text_color

    def get_backround_image(self):
        return self.backround_image

# postinstall script for pywin32

# copies PyWinTypesxx.dll and PythonCOMxx.dll into the system directory,

# and creates a pth file

import glob

import os

import shutil

import sys

import sysconfig

try:

import winreg as winreg

except:

import winreg

# Send output somewhere so it can be found if necessary...

import tempfile

tee_f = open(os.path.join(tempfile.gettempdir(), "pywin32_postinstall.log"), "w")

class Tee:

def __init__(self, file):

37
self.f = file

def write(self, what):

if self.f is not None:

try:

self.f.write(what.replace("\n", "\r\n"))

except IOError:

pass

tee_f.write(what)

def flush(self):

if self.f is not None:

try:

self.f.flush()

except IOError:

pass

tee_f.flush()

# For some unknown reason, when running under bdist_wininst we will start up

# with sys.stdout as None but stderr is hooked up. This work-around allows

# bdist_wininst to see the output we write and display it at the end of

# the install.

if sys.stdout is None:

sys.stdout = sys.stderr

sys.stderr = Tee(sys.stderr)

sys.stdout = Tee(sys.stdout)

com_modules = [

# module_name, class_names

38
("win32com.servers.interp", "Interpreter"),

("win32com.servers.dictionary", "DictionaryPolicy"),

("win32com.axscript.client.pyscript", "PyScript"),

# Is this a 'silent' install - ie, avoid all dialogs.

# Different than 'verbose'

silent = 0

# Verbosity of output messages.

verbose = 1

root_key_name = "Software\\Python\\PythonCore\\" + sys.winver

try:

# When this script is run from inside the bdist_wininst installer,

# file_created() and directory_created() are additional builtin

# functions which write lines to Python23\pywin32-install.log. This is

# a list of actions for the uninstaller, the format is inspired by what

# the Wise installer also creates.

file_created

is_bdist_wininst = True

except NameError:

is_bdist_wininst = False # we know what it is not - but not what it is :)

def file_created(file):

pass

def directory_created(directory):

pass

39
def get_root_hkey():

try:

winreg.OpenKey(

winreg.HKEY_LOCAL_MACHINE, root_key_name, 0, winreg.KEY_CREATE_SUB_KEY

return winreg.HKEY_LOCAL_MACHINE

except OSError:

# Either not exist, or no permissions to create subkey means

# must be HKCU

return winreg.HKEY_CURRENT_USER

try:

create_shortcut

except NameError:

# Create a function with the same signature as create_shortcut provided

# by bdist_wininst

def create_shortcut(

path, description, filename, arguments="", workdir="", iconpath="", iconindex=0

):

import pythoncom

from win32com.shell import shell

ilink = pythoncom.CoCreateInstance(

shell.CLSID_ShellLink,

None,

pythoncom.CLSCTX_INPROC_SERVER,

shell.IID_IShellLink,

)

ilink.SetPath(path)

ilink.SetDescription(description)

if arguments:

ilink.SetArguments(arguments)

if workdir:

ilink.SetWorkingDirectory(workdir)

if iconpath or iconindex:

ilink.SetIconLocation(iconpath, iconindex)

# now save it.

ipf = ilink.QueryInterface(pythoncom.IID_IPersistFile)

ipf.Save(filename, 0)

# Support the same list of "path names" as bdist_wininst.

def get_special_folder_path(path_name):

from win32com.shell import shell, shellcon

for maybe in """

CSIDL_COMMON_STARTMENU CSIDL_STARTMENU CSIDL_COMMON_APPDATA

CSIDL_LOCAL_APPDATA CSIDL_APPDATA
CSIDL_COMMON_DESKTOPDIRECTORY

CSIDL_DESKTOPDIRECTORY CSIDL_COMMON_STARTUP CSIDL_STARTUP

CSIDL_COMMON_PROGRAMS CSIDL_PROGRAMS
CSIDL_PROGRAM_FILES_COMMON

CSIDL_PROGRAM_FILES CSIDL_FONTS""".split():

if maybe == path_name:

csidl = getattr(shellcon, maybe)

return shell.SHGetSpecialFolderPath(0, csidl, False)

raise ValueError("%s is an unknown path ID" % (path_name,))

def CopyTo(desc, src, dest):

import win32api

import win32con

while 1:

try:

win32api.CopyFile(src, dest, 0)

return

except win32api.error as details:

if details.winerror == 5: # access denied - user not admin.

raise

if silent:

# Running silent mode - just re-raise the error.

raise

full_desc = (

"Error %s\n\n"

"If you have any Python applications running, "

"please close them now\nand select 'Retry'\n\n%s"

% (desc, details.strerror)
)

rc = win32api.MessageBox(

0, full_desc, "Installation Error", win32con.MB_ABORTRETRYIGNORE
)

if rc == win32con.IDABORT:

raise

elif rc == win32con.IDIGNORE:

return

# else retry - around we go again.

# We need to import win32api to determine the Windows system directory,

# so we can copy our system files there - but importing win32api will

# load the pywintypes.dll already in the system directory preventing us

# from updating them!

# So, we pull the same trick pywintypes.py does, but it loads from

# our pywintypes_system32 directory.

def LoadSystemModule(lib_dir, modname):

# See if this is a debug build.

import importlib.machinery

import importlib.util

suffix = "_d" if "_d.pyd" in importlib.machinery.EXTENSION_SUFFIXES else ""

filename = "%s%d%d%s.dll" % (

modname,

sys.version_info[0],

sys.version_info[1],

suffix,
)

filename = os.path.join(lib_dir, "pywin32_system32", filename)

loader = importlib.machinery.ExtensionFileLoader(modname, filename)

spec = importlib.machinery.ModuleSpec(name=modname, loader=loader, origin=filename)

mod = importlib.util.module_from_spec(spec)

spec.loader.exec_module(mod)

def SetPyKeyVal(key_name, value_name, value):

root_hkey = get_root_hkey()

root_key = winreg.OpenKey(root_hkey, root_key_name)

try:

my_key = winreg.CreateKey(root_key, key_name)

try:

winreg.SetValueEx(my_key, value_name, 0, winreg.REG_SZ, value)

if verbose:

print("-> %s\\%s[%s]=%r" % (root_key_name, key_name, value_name, value))

finally:

my_key.Close()

finally:

root_key.Close()

def UnsetPyKeyVal(key_name, value_name, delete_key=False):

root_hkey = get_root_hkey()

root_key = winreg.OpenKey(root_hkey, root_key_name)

try:

my_key = winreg.OpenKey(root_key, key_name, 0, winreg.KEY_SET_VALUE)

try:

winreg.DeleteValue(my_key, value_name)

if verbose:

print("-> DELETE %s\\%s[%s]" % (root_key_name, key_name, value_name))

finally:

my_key.Close()

if delete_key:

winreg.DeleteKey(root_key, key_name)

if verbose:

print("-> DELETE %s\\%s" % (root_key_name, key_name))

except OSError as why:

winerror = getattr(why, "winerror", why.errno)

if winerror != 2: # file not found

raise

finally:

root_key.Close()

def RegisterCOMObjects(register=True):

import win32com.server.register

if register:

func = win32com.server.register.RegisterClasses

else:

func = win32com.server.register.UnregisterClasses

flags = {}

if not verbose:

flags["quiet"] = 1

for module, klass_name in com_modules:

__import__(module)

mod = sys.modules[module]

flags["finalize_register"] = getattr(mod, "DllRegisterServer", None)

flags["finalize_unregister"] = getattr(mod, "DllUnregisterServer", None)

klass = getattr(mod, klass_name)

func(klass, **flags)

def RegisterHelpFile(register=True, lib_dir=None):

if lib_dir is None:

lib_dir = sysconfig.get_paths()["platlib"]

if register:

# Register the .chm help file.

chm_file = os.path.join(lib_dir, "PyWin32.chm")

if os.path.isfile(chm_file):

# This isn't recursive, so if 'Help' doesn't exist, we croak

SetPyKeyVal("Help", None, None)

SetPyKeyVal("Help\\Pythonwin Reference", None, chm_file)

return chm_file

else:

print("NOTE: PyWin32.chm can not be located, so has not " "been registered")

else:

UnsetPyKeyVal("Help\\Pythonwin Reference", None, delete_key=True)

return None

def RegisterPythonwin(register=True, lib_dir=None):

"""Add (or remove) Pythonwin to context menu for python scripts.

??? Should probably also add Edit command for pys files also.

Also need to remove these keys on uninstall, but there's no function

like file_created to add registry entries to uninstall log ???

"""

import os

if lib_dir is None:

lib_dir = sysconfig.get_paths()["platlib"]

classes_root = get_root_hkey()

## Installer executable doesn't seem to pass anything to postinstall script indicating if it's a debug build,

pythonwin_exe = os.path.join(lib_dir, "Pythonwin", "Pythonwin.exe")

pythonwin_edit_command = pythonwin_exe + ' -edit "%1"'

keys_vals = [
(
"Software\\Microsoft\\Windows\\CurrentVersion\\App Paths\\Pythonwin.exe",
"",
pythonwin_exe,
),
(
"Software\\Classes\\Python.File\\shell\\Edit with Pythonwin",
"command",
pythonwin_edit_command,
),
(
"Software\\Classes\\Python.NoConFile\\shell\\Edit with Pythonwin",
"command",
pythonwin_edit_command,
),
]

try:

if register:

for key, sub_key, val in keys_vals:

## Since winreg only uses the character Api functions, this can fail if Python

## is installed to a path containing non-ascii characters

hkey = winreg.CreateKey(classes_root, key)

if sub_key:

hkey = winreg.CreateKey(hkey, sub_key)

winreg.SetValueEx(hkey, None, 0, winreg.REG_SZ, val)

hkey.Close()

else:

for key, sub_key, val in keys_vals:

try:

if sub_key:

hkey = winreg.OpenKey(classes_root, key)

winreg.DeleteKey(hkey, sub_key)

hkey.Close()

winreg.DeleteKey(classes_root, key)

except OSError as why:

winerror = getattr(why, "winerror", why.errno)

if winerror != 2: # file not found

raise

finally:

# tell windows about the change

from win32com.shell import shell, shellcon

shell.SHChangeNotify(

shellcon.SHCNE_ASSOCCHANGED, shellcon.SHCNF_IDLIST, None, None
)

def get_shortcuts_folder():

if get_root_hkey() == winreg.HKEY_LOCAL_MACHINE:

try:

fldr = get_special_folder_path("CSIDL_COMMON_PROGRAMS")

except OSError:

# No CSIDL_COMMON_PROGRAMS on this platform

fldr = get_special_folder_path("CSIDL_PROGRAMS")

else:

# non-admin install - always goes in this user's start menu.

fldr = get_special_folder_path("CSIDL_PROGRAMS")

try:

install_group = winreg.QueryValue(

get_root_hkey(), root_key_name + "\\InstallPath\\InstallGroup"
)

except OSError:

vi = sys.version_info

install_group = "Python %d.%d" % (vi[0], vi[1])

return os.path.join(fldr, install_group)

# Get the system directory, which may be the Wow64 directory if we are a 32bit

# python on a 64bit OS.

def get_system_dir():

import win32api # we assume this exists.

try:

import pythoncom

import win32process

from win32com.shell import shell, shellcon

try:

if win32process.IsWow64Process():

return shell.SHGetSpecialFolderPath(0, shellcon.CSIDL_SYSTEMX86)

return shell.SHGetSpecialFolderPath(0, shellcon.CSIDL_SYSTEM)

except (pythoncom.com_error, win32process.error):

return win32api.GetSystemDirectory()

except ImportError:

return win32api.GetSystemDirectory()

def fixup_dbi():

# We used to have a dbi.pyd with our .pyd files, but now have a .py file.

# If the user didn't uninstall, they will find the .pyd which will cause

# problems - so handle that.

import win32api

import win32con

pyd_name = os.path.join(os.path.dirname(win32api.__file__), "dbi.pyd")

pyd_d_name = os.path.join(os.path.dirname(win32api.__file__), "dbi_d.pyd")

py_name = os.path.join(os.path.dirname(win32con.__file__), "dbi.py")

for this_pyd in (pyd_name, pyd_d_name):

this_dest = this_pyd + ".old"

if os.path.isfile(this_pyd) and os.path.isfile(py_name):

try:

if os.path.isfile(this_dest):

print(

"Old dbi '%s' already exists - deleting '%s'"

% (this_dest, this_pyd)
)

os.remove(this_pyd)

else:

os.rename(this_pyd, this_dest)

print("renamed '%s'->'%s.old'" % (this_pyd, this_pyd))

file_created(this_pyd + ".old")

except os.error as exc:

print("FAILED to rename '%s': %s" % (this_pyd, exc))

def install(lib_dir):

import traceback

# The .pth file is now installed as a regular file.

# Create the .pth file in the site-packages dir, and use only relative paths

# We used to write a .pth directly to sys.prefix - clobber it.

if os.path.isfile(os.path.join(sys.prefix, "pywin32.pth")):

os.unlink(os.path.join(sys.prefix, "pywin32.pth"))

# The .pth may be new and therefore not loaded in this session.

# Setup the paths just in case.

for name in "win32 win32\\lib Pythonwin".split():

sys.path.append(os.path.join(lib_dir, name))

# It is possible people with old versions installed with still have

# pywintypes and pythoncom registered. We no longer need this, and stale

# entries hurt us.

for name in "pythoncompywintypes".split():

keyname = "Software\\Python\\PythonCore\\" + sys.winver + "\\Modules\\" + name

for root in winreg.HKEY_LOCAL_MACHINE, winreg.HKEY_CURRENT_USER:

try:

winreg.DeleteKey(root, keyname + "\\Debug")

except WindowsError:

pass

try:

winreg.DeleteKey(root, keyname)

except WindowsError:

pass

LoadSystemModule(lib_dir, "pywintypes")

LoadSystemModule(lib_dir, "pythoncom")

import win32api

# and now we can get the system directory:

files = glob.glob(os.path.join(lib_dir, "pywin32_system32\\*.*"))

if not files:

raise RuntimeError("No system files to copy!!")

# Try the system32 directory first - if that fails due to "access denied",

# it implies a non-admin user, and we use sys.prefix

for dest_dir in [get_system_dir(), sys.prefix]:

# and copy some files over there

worked = 0

try:

for fname in files:

base = os.path.basename(fname)

dst = os.path.join(dest_dir, base)

CopyTo("installing %s" % base, fname, dst)

if verbose:

print("Copied %s to %s" % (base, dst))

# Register the files with the uninstaller

file_created(dst)

worked = 1

# Nuke any other versions that may exist - having

# duplicates causes major headaches.

bad_dest_dirs = [

os.path.join(sys.prefix, "Library\\bin"),

os.path.join(sys.prefix, "Lib\\site-packages\\win32"),

if dest_dir != sys.prefix:

bad_dest_dirs.append(sys.prefix)

for bad_dest_dir in bad_dest_dirs:

bad_fname = os.path.join(bad_dest_dir, base)

if os.path.exists(bad_fname):

# let exceptions go here - delete must succeed

os.unlink(bad_fname)

if worked:

break

except win32api.error as details:

if details.winerror == 5:

# access denied - user not admin - try sys.prefix dir,

# but first check that a version doesn't already exist

# in that place - otherwise that one will still get used!

if os.path.exists(dst):

msg = (

"The file '%s' exists, but can not be replaced "

"due to insufficient permissions. You must "

"reinstall this software as an Administrator" % dst

print(msg)

raise RuntimeError(msg)

continue

raise

else:

raise RuntimeError(

"You don't have enough permissions to install the system files"

# Pythonwin 'compiles' config files - record them for uninstall.

pywin_dir = os.path.join(lib_dir, "Pythonwin", "pywin")

for fname in glob.glob(os.path.join(pywin_dir, "*.cfg")):

file_created(fname[:-1] + "c") # .cfg->.cfc

# Register our demo COM objects.

try:

try:

RegisterCOMObjects()

except win32api.error as details:

if details.winerror != 5: # ERROR_ACCESS_DENIED

raise

print("You do not have the permissions to install COM objects.")

print("The sample COM objects were not registered.")

except Exception:

print("FAILED to register the Python COM objects")

traceback.print_exc()

# There may be no main Python key in HKCU if, eg, an admin installed

# python itself.

winreg.CreateKey(get_root_hkey(), root_key_name)

chm_file = None

try:

chm_file = RegisterHelpFile(True, lib_dir)

except Exception:

print("Failed to register help file")

traceback.print_exc()

else:

if verbose:

print("Registered help file")

# misc other fixups.

fixup_dbi()

# Register Pythonwin in context menu

try:

RegisterPythonwin(True, lib_dir)

except Exception:

print("Failed to register pythonwin as editor")

traceback.print_exc()

else:

if verbose:

print("Pythonwin has been registered in context menu")

# Create the win32com\gen_py directory.

make_dir = os.path.join(lib_dir, "win32com", "gen_py")

if not os.path.isdir(make_dir):

if verbose:

print("Creating directory %s" % (make_dir,))

directory_created(make_dir)

os.mkdir(make_dir)

try:

# create shortcuts

# CSIDL_COMMON_PROGRAMS is only available on NT/2000/XP, and

# will fail there if the user has no admin rights.

fldr = get_shortcuts_folder()

# If the group doesn't exist, then we don't make shortcuts - it's

# possible that this isn't a "normal" install.

if os.path.isdir(fldr):

dst = os.path.join(fldr, "PythonWin.lnk")

create_shortcut(

os.path.join(lib_dir, "Pythonwin\\Pythonwin.exe"),

"The Pythonwin IDE",

dst,

"",

sys.prefix,
)

file_created(dst)

if verbose:

print("Shortcut for Pythonwin created")

# And the docs.

if chm_file:

dst = os.path.join(fldr, "Python for Windows Documentation.lnk")

doc = "Documentation for the PyWin32 extensions"

create_shortcut(chm_file, doc, dst)

file_created(dst)

if verbose:

print("Shortcut to documentation created")

else:

if verbose:

print("Can't install shortcuts - %r is not a folder" % (fldr,))

except Exception as details:

print(details)

# importing win32com.client ensures the gen_py dir is created - not strictly

# necessary to do now, but this makes the installation "complete"

try:

import win32com.client  # noqa

except ImportError:

# Don't let this error sound fatal

pass

print("The pywin32 extensions were successfully installed.")

if is_bdist_wininst:

# Open a web page with info about the .exe installers being deprecated.

import webbrowser

try:

webbrowser.open("https://mhammond.github.io/pywin32_installers.html")

except webbrowser.Error:

print("Please visit https://mhammond.github.io/pywin32_installers.html")

def uninstall(lib_dir):

# First ensure our system modules are loaded from pywin32_system, so

# we can remove the ones we copied...

LoadSystemModule(lib_dir, "pywintypes")

LoadSystemModule(lib_dir, "pythoncom")

try:

RegisterCOMObjects(False)

except Exception as why:

print("Failed to unregister COM objects: %s" % (why,))

try:

RegisterHelpFile(False, lib_dir)

except Exception as why:

print("Failed to unregister help file: %s" % (why,))

else:

if verbose:

print("Unregistered help file")

try:

RegisterPythonwin(False, lib_dir)

except Exception as why:

print("Failed to unregister Pythonwin: %s" % (why,))

else:

if verbose:

print("Unregistered Pythonwin")

try:

# remove gen_py directory.

gen_dir = os.path.join(lib_dir, "win32com", "gen_py")

if os.path.isdir(gen_dir):

shutil.rmtree(gen_dir)

if verbose:

print("Removed directory %s" % (gen_dir,))

# Remove pythonwin compiled "config" files.

pywin_dir = os.path.join(lib_dir, "Pythonwin", "pywin")

for fname in glob.glob(os.path.join(pywin_dir, "*.cfc")):

os.remove(fname)

# The dbi.pyd.old files we may have created.

try:

os.remove(os.path.join(lib_dir, "win32", "dbi.pyd.old"))

except os.error:

pass

try:

os.remove(os.path.join(lib_dir, "win32", "dbi_d.pyd.old"))

except os.error:

pass

except Exception as why:

print("Failed to remove misc files: %s" % (why,))

try:

fldr = get_shortcuts_folder()

for link in ("PythonWin.lnk", "Python for Windows Documentation.lnk"):

fqlink = os.path.join(fldr, link)

if os.path.isfile(fqlink):

os.remove(fqlink)

if verbose:

print("Removed %s" % (link,))

except Exception as why:

print("Failed to remove shortcuts: %s" % (why,))

# Now remove the system32 files.

files = glob.glob(os.path.join(lib_dir, "pywin32_system32\\*.*"))

# Try the system32 directory first - if that fails due to "access denied",

# it implies a non-admin user, and we use sys.prefix

try:

for dest_dir in [get_system_dir(), sys.prefix]:

# and copy some files over there

worked = 0

for fname in files:

base = os.path.basename(fname)

dst = os.path.join(dest_dir, base)

if os.path.isfile(dst):

try:

os.remove(dst)

worked = 1

if verbose:

print("Removed file %s" % (dst))

except Exception:

print("FAILED to remove %s" % (dst,))

if worked:

break

except Exception as why:

print("FAILED to remove system files: %s" % (why,))

# NOTE: If this script is run from inside the bdist_wininst created

# binary installer or uninstaller, the command line args are either

# '-install' or '-remove'.

# Important: From inside the binary installer this script MUST NOT

# call sys.exit() or raise SystemExit, otherwise not only this script

# but also the installer will terminate! (Is there a way to prevent

# this from the bdist_wininst C code?)

def verify_destination(location):
import argparse

if not os.path.isdir(location):

raise argparse.ArgumentTypeError('Path "{}" does not exist!'.format(location))

return location

def main():

import argparse

parser = argparse.ArgumentParser(

formatter_class=argparse.RawDescriptionHelpFormatter,

description="""A post-install script for the pywin32 extensions.

* Typical usage:

> python pywin32_postinstall.py -install

If you installed pywin32 via a .exe installer, this should be run

automatically after installation, but if it fails you can run it again.

If you installed pywin32 via PIP, you almost certainly need to run this to

setup the environment correctly.

Execute this script with a '-install' parameter, to ensure the environment

is setup correctly.

""",

parser.add_argument(

"-install",

default=False,

action="store_true",

help="Configure the Python environment correctly for pywin32.",

parser.add_argument(

"-remove",

default=False,

action="store_true",

help="Try and remove everything that was installed or copied.",

parser.add_argument(

"-wait",

type=int,

help="Wait for the specified process to terminate before starting.",

parser.add_argument(

"-silent",

default=False,

action="store_true",

help='Don\'t display the "Abort/Retry/Ignore" dialog for files in use.',
)

parser.add_argument(

"-quiet",

default=False,

action="store_true",

help="Don't display progress messages.",

parser.add_argument(

"-destination",

default=sysconfig.get_paths()["platlib"],

type=verify_destination,

help="Location of the PyWin32 installation",

args = parser.parse_args()

if not args.quiet:

print("Parsed arguments are: {}".format(args))

if not args.install ^ args.remove:

parser.error("You need to either choose to -install or -remove!")

if args.wait is not None:

try:

os.waitpid(args.wait, 0)

except os.error:

# child already dead

pass

silent = args.silent

verbose = not args.quiet

if args.install:

install(args.destination)

if args.remove:

if not is_bdist_wininst:

uninstall(args.destination)

if __name__ == "__main__":

main()
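The post-install script above is normally driven from the command line. A hedged sketch of invoking it with the -install flag it defines is shown below; the on-disk path of pywin32_postinstall.py is an assumption (pip usually places it in the environment's Scripts directory).

# Illustrative invocation of the post-install step; the script path
# is an assumption and may differ per environment.
import subprocess
import sys

subprocess.run(
    [sys.executable, "pywin32_postinstall.py", "-install"],
    check=True,
)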

"""A test runner for pywin32"""

import os

import site

import subprocess

import sys

# locate the dirs based on where this script is - it may be either in the

# source tree, or in an installed Python 'Scripts' tree.

this_dir = os.path.dirname(__file__)

site_packages = [

site.getusersitepackages(),

] + site.getsitepackages()

failures = []

# Run a test using subprocess and wait for the result.

# If we get a returncode != 0, we know that there was an error, but we don't

# abort immediately - we run as many tests as we can.

def run_test(script, cmdline_extras):

dirname, scriptname = os.path.split(script)

# some tests prefer to be run from their directory.

cmd = [sys.executable, "-u", scriptname] + cmdline_extras

print("--- Running '%s' ---" % script)

sys.stdout.flush()

result = subprocess.run(cmd, check=False, cwd=dirname)

print("*** Test script '%s' exited with %s" % (script, result.returncode))

sys.stdout.flush()

if result.returncode:

failures.append(script)

def find_and_run(possible_locations, extras):

for maybe in possible_locations:

if os.path.isfile(maybe):

run_test(maybe, extras)

break

else:

raise RuntimeError(

"Failed to locate a test script in one of %s" % possible_locations
)

def main():

import argparse

code_directories = [this_dir] + site_packages

parser = argparse.ArgumentParser(

description="A script to trigger tests in all subprojects of PyWin32."

parser.add_argument(

"-no-user-interaction",

default=False,

action="store_true",

help="(This is now the default - use `-user-interaction` to include them)",

parser.add_argument(

"-user-interaction",

action="store_true",

help="Include tests which require user interaction",

parser.add_argument(

"-skip-adodbapi",

default=False,

action="store_true",

help="Skip the adodbapi tests; useful for CI where there's no provider",

66
args, remains = parser.parse_known_args()

# win32, win32ui / Pythonwin

extras = []

if args.user_interaction:

extras += ["-user-interaction"]

extras.extend(remains)

scripts = [

"win32/test/testall.py",

"Pythonwin/pywin/test/all.py",

for script in scripts:

maybes = [os.path.join(directory, script) for directory in code_directories]

find_and_run(maybes, extras)

# win32com

maybes = [

os.path.join(directory, "win32com", "test", "testall.py")

for directory in [os.path.join(this_dir, "com")] + site_packages
]

extras = remains + ["1"] # only run "level 1" tests in CI

find_and_run(maybes, extras)

# adodbapi

if not args.skip_adodbapi:

maybes = [

os.path.join(directory, "adodbapi", "test", "adodbapitest.py")

for directory in code_directories
]

find_and_run(maybes, remains)

# This script has a hard-coded sql server name in it, (and markh typically

# doesn't have a different server to test on) but there is now supposed to be a server out there on the Internet
# just to run these tests, so try it...

maybes = [

os.path.join(directory, "adodbapi", "test", "test_adodbapi_dbapi20.py")

for directory in code_directories
]

find_and_run(maybes, remains)

if failures:

print("The following scripts failed")

for failure in failures:

print(">", failure)

sys.exit(1)

print("All tests passed \\o/")\

if __name__ == "__main__":

main()
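The test runner above can be launched the same way; the sketch below assumes the file is saved as pywin32_testall.py (the name is an assumption) and uses the -skip-adodbapi flag it defines.

# Illustrative invocation of the test runner; the file name is an
# assumption. check=False because the runner exits non-zero on failures.
import subprocess
import sys

subprocess.run(
    [sys.executable, "pywin32_testall.py", "-skip-adodbapi"],
    check=False,
)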

# s=sample()

# s.chat_window()

9.2 SCREENSHOTS

Fig 9.2.1 Start Screen

Fig 9.2.2 Loading Screen


Fig 9.2.3 Recognizing Screen

Fig 9.2.4 Output Screen

CHAPTER 10
REFERENCES

[1] Van Riper C. The Nature of Stuttering. New Jersey: Prentice Hall, 1971.

[2] Sichang Jiang, Rui Gu. Speech Language Pathology. Beijing, 2005.

[3] Wingate M E, Howell P. Foundations of Stuttering. The Journal of the Acoustical Society of America, 2002; 112: 1229-1231.

[4] Qingli Zang. The Evaluation and Treatment of Speech-Language Disorders. Shijiazhuang, 1991: 185-199.

[5] Bergl P, Cmejla R. Change Detection with Applications to Speech Analysis. Prague, 2005; 1: 1-4.

[6] Tan T S, Ariff A K, Ting C M, et al. Application of Malay speech technology in Malay speech therapy assistance tools. Intelligent and Advanced Systems, 2007; 330-334.

[7] Noth E, Niemann H, Haderlein T, et al. Automatic stuttering recognition using hidden Markov models. Sixth International Conference on Spoken Language Processing, 2000.

[8] Witt S M, Young S J. Phone-level pronunciation scoring and assessment for interactive language learning. Speech Communication, 2000; 30: 95-108.

[9] Ge F P, Pan F P, Dong B, et al. Experimental investigation of Putonghua pronunciation quality assessment system. Acta Acustica, 2010; 2: 261-266.

[10] Iverach L, et al. Comparison of adults who stutter with and without social anxiety disorder. Journal of Fluency Disorders, 2018; 56: 55-68. doi: 10.1016/j.jfludis.2018.03.001.
