College of Management Studies UNNAO
Affiliated to Chhatrapati Shahu Ji Maharaj University
Accredited by NAAC A++
Project Report
On
Title of project “Desktop assistant through voice control”
Submitted in Partial Fulfilment of the Requirements
For the Award of the Degree of
BACHELOR OF COMPUTER APPLICATION
Department of Computer Application
Session (2024-25)
Under the Guidance of Submitted by
Mr. Mohit Srivastava Keshav Mishra
Assistant Professor BCA - 6th Semester
College of Management Studies Roll No. 22015003310
DECLARATION
I, Keshav Mishra, a student of BCA 6th Semester at the College of
Management Studies, Unnao, hereby declare that the project report
titled “Desktop Assistant through Voice Control” has been completed
by me under the supervision of Mr. Mohit Srivastava, Department of
Computer Application. I further affirm that the contents of this report
are original and have not been submitted for the award of any other
degree or diploma at any other University or Institute.
I have duly credited all original authors and sources for words, ideas,
diagrams, graphics, programs, experiments, and results that are not my
own contributions. Wherever direct quotations have been used, they
have been clearly indicated and properly referenced. I affirm that no
part of this project is plagiarized and that all reported results are genuine
and unaltered.
Keshav Mishra
BCA 6th Semester
Roll No.: 22015003310
College of Management Studies
Affiliated to CSJMU (Accredited by NAAC A++)
INSTITUTE CERTIFICATE
This is to certify that the project titled “Desktop assistant through
voice control” undertaken by Keshav Mishra, has been successfully
completed. The study was conducted as a partial requirement for the
award of the degree of Bachelor of Computer Application at the
College of Management Studies, Unnao, affiliated with Chhatrapati
Shahu Ji Maharaj University, Kanpur, U.P.
This project is the original work of the candidate, completed
independently and meeting the high standards required for submission
towards the said degree. All assistance and resources utilized for this
project have been duly acknowledged.
Mohit Srivastava
(Designation)
ACKNOWLEDGMENT
The project report is an essential component of the BCA program,
providing valuable experience that will undoubtedly benefit me in my
future career.
With profound gratitude, I take this opportunity to express my sincere
thanks to Mohit Srivastava (Head, Department of Computer
Application, College of Management Studies) for his invaluable
support, encouragement, and guidance throughout the course of this
project.
I am deeply thankful for his interest, constructive feedback, persistent
encouragement, and unwavering support during every stage of the
project's development. It has truly been an honour and a privilege to
work under his mentorship.
I also extend my heartfelt thanks to my parents and friends for their
continuous encouragement and cooperation, which greatly contributed
to the successful completion of this study.
Name of Student: Keshav Mishra
BCA- 6th Semester
Roll No.: 22015003310
TABLE OF CONTENT
CHAPTER NO. PAGE NO.
1. INTRODUCTION…………………………………………………………...6
2. SYSTEM ANALYSIS………………………………………………………10
3. FEASIBILITY………………………………………………………………13
4. SOFTWARE REQUIREMENTS……………………………………….......16
5. HARDWARE REQUIREMENTS………………………………………….18
6. SYSTEM DESIGN………………………………………………………....19
7. CODING……………………………………………………………………24
8. VALIDATION ……………………………………………………………...27
9. IMPLEMENTATION AND MAINTENANCE………………………….... 30
10. TESTING TECHNIQUES AND STRATEGIES……………………………33
11. CONCLUSION …………………………………………………………......37
12. FUTURE SCOPE AND FURTHER ENHANCEMENTS…………….........39
13. BIBLIOGRAPHY ………………………………………………………......41
14. APPENDICES …………………………………………………………...…42
LIST OF FIGURES
CHAPTER NO. FIGURE NAME PAGE NO.
CHAPTER 6. DFD LEVEL 0 DIAGRAM…………………………………......21
DFD LEVEL 1 DIAGRAM……………………………………...21
DFD LEVEL 2 DIAGRAM……………………………………...22
CHAPTER 7. DESKTOP ASSISTANT CODE…………………………….......24
DESKTOP ASSISTANT CODE…………………………….....25
DESKTOP ASSISTANT CODE…………………………….....26
CHAPTER 14. VOICE RECOGNITION CODE…………………...………….42
TEXT-TO-SPEECH CODE…………………………………..42
TASK EXECUTION CODE………………………………....42
LIST OF TABLES
CHAPTER NO. TABLE NAME PAGE NO.
CHAPTER 10. UNIT TESTING STRATEGY TABLE………………………...34
INTEGRATION TESTING STRATEGY TABLE………………34
FUNCTIONAL TESTING STRATEGY TABLE………………..35
USER ACCEPTANCE TESTING STRATEGY TABLE………..35
PERFORMANCE TESTING STRATEGY TABLE…………….36
ERROR HANDLING TESTING STRATEGY TABLE………...36
CHAPTER 14. APPENDIX B: UNIT TEST CASE……………………..……43
APPENDIX B: INTEGRATION TEST CASE………………...43
CHAPTER 1
INTRODUCTION
1.1 GENERAL THEORY
In today’s era, almost every task is digitalized. With a smartphone in hand,
we have the world at our fingertips, and these days we do not even need our
fingers: we simply speak a task and it is done. There are systems where we
can say, “Text Dad, ‘I’ll be late today,’” and the text is sent. That is the job
of a virtual assistant. Virtual assistants also support specialized tasks, such
as booking a flight, or finding the cheapest copy of a book across various
e-commerce sites and then providing an interface to place the order, thereby
helping to automate search, discovery, and online ordering. Virtual
assistants are software programs that help us perform day-to-day tasks, such
as showing weather reports, creating reminders, and making shopping lists.
They can take commands via text (online chatbots) or by voice. Voice-based
intelligent assistants need an invoking word, or wake word, to activate the
listener, followed by the command. Well-known examples include Apple’s
Siri, Amazon’s Alexa, and Microsoft’s Cortana. This system is designed to
be used efficiently on desktops. Personal assistant software improves user
productivity by managing routine tasks and by providing information from
online sources. Jaanvi is effortless to use: call the wake word ‘Hello Jaanvi’
followed by the command, and within seconds it is executed. Voice searches
have come to dominate text searches. Web searches conducted via mobile
devices have overtaken those carried out using a computer, and analysts
predicted that 50% of searches would be made by voice by 2020. Virtual
assistants are becoming smarter than ever. An intelligent assistant can make
email work for you: it can detect intent, pick out important information,
automate processes, and deliver personalized responses. This project was
started on the premise that there is a sufficient amount of openly available
data and information on the web that can be utilized to build a virtual
assistant capable of making intelligent decisions for routine user activities.
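The wake-word step described above can be sketched as a small helper that ignores anything not addressed to the assistant; this is a minimal illustration (the function and constant names are my own; the wake word is the one quoted in the text):

```python
WAKE_WORD = "hello jaanvi"


def extract_command(utterance):
    """Return the command following the wake word, or None if the
    utterance does not begin with the wake word."""
    text = utterance.lower().strip()
    if text.startswith(WAKE_WORD):
        # Drop the wake word plus any separating comma/space
        command = text[len(WAKE_WORD):].strip(" ,")
        return command or None
    return None
```

For example, extract_command("Hello Jaanvi, open notepad") yields "open notepad", while an utterance without the wake word is ignored.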
We can use any IDE, such as Anaconda or Visual Studio Code. The first step
is to create a function for the voice, which takes the text to be spoken as an
argument; we use the speech API (SAPI5, via pyttsx3) to produce the voice
output. There are two default voices on the computer, male and female, and
we can use either of them. We can check the voice function by giving it
some text input and confirming that it is converted into voice.
We can create a new function for wishing the user, using if-else condition
statements to choose the greeting. For example, if the time is between 12
and 18, the system says “Good Afternoon”. Along with this we can add a
welcome message, e.g. “Welcome, what can I do for you?”. After that, we
have to install the speech recognition package and then import it.
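The greeting logic just described can be kept as a pure function of the hour so it is easy to check; a minimal sketch (the function name is my own):

```python
def greeting_for(hour):
    """Pick a greeting from the hour of day (0-23)."""
    if hour < 12:
        return "Good Morning"
    elif hour < 18:
        return "Good Afternoon"
    return "Good Evening"
```

The assistant would then call something like speak(greeting_for(datetime.datetime.now().hour) + ", what can I do for you?").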
Define a new function for taking a command from the user. It uses the
speech recognition Recognizer class with the microphone as the input
source, and sets the language for the recognition query. We can use the
Google speech engine to convert the voice input to text.
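Because the real recognition step needs a live microphone, the surrounding error handling can be sketched with the recognizer passed in as a callable (all names here are illustrative; ValueError and ConnectionError stand in for the SpeechRecognition library's UnknownValueError and RequestError):

```python
def interpret_audio(recognize, audio):
    """Run a recognizer callable over captured audio and normalise any
    failure to the sentinel string "None", as the command loop expects."""
    try:
        return recognize(audio).lower()
    except (ValueError, ConnectionError):
        # ValueError stands in for sr.UnknownValueError (speech not
        # understood); ConnectionError for sr.RequestError (service down)
        return "None"
```

In the real program, recognize would be r.recognize_google with language='en-in'; here the indirection just makes the logic testable without hardware.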
We also have to install and import some other packages, such as pyttsx3
and wikipedia. pyttsx3 helps us convert text input to speech. If we ask for
any information, the result arrives in textual format, and we can easily
convert it to voice, as we have already defined a speak function in our code.
If we ask the assistant to open YouTube in the query, it should go to the
YouTube address automatically. For that we have to install and import the
webbrowser package. In the same way, we can add queries for many
websites, such as Google, Instagram, and Facebook. The next task is
playing songs, which works the same way as before: add a query for “play
songs”, along with the location of the songs folder, so that the assistant
plays songs from that folder only. We can add many more pages and
commands to the desktop assistant.
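URL building for these website commands can be sketched with the standard library's quote_plus, which safely encodes spaces and special characters in the spoken query (the function names are illustrative):

```python
from urllib.parse import quote_plus


def youtube_search_url(terms):
    """Build a YouTube results URL for the spoken search terms."""
    return "https://www.youtube.com/results?search_query=" + quote_plus(terms)


def google_search_url(terms):
    """Build a Google search URL for the spoken search terms."""
    return "https://www.google.com/search?q=" + quote_plus(terms)
```

A command handler would then call, for example, webbrowser.open(youtube_search_url("shape of you")) to open the results page.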
Have you ever wondered how cool it would be to have your own A.I.
assistant? Imagine how much easier it would be to send emails without
typing a single word, to do Wikipedia searches without opening a web
browser, and to perform many other daily tasks, like playing music, with a
single voice command.
What can this A.I. assistant do for us?
• It can open the Calculator.
• It can play music for us.
• It can do Wikipedia searches.
• It is capable of opening websites like Google, YouTube, etc.
• It is capable of opening a code editor or IDE with a single voice command.
1.2 OBJECTIVES
The main objective of building personal assistant software (a virtual
assistant) is to use semantic data sources available on the web, user-
generated content, and knowledge from knowledge databases. The main
purpose of an intelligent virtual assistant is to answer questions that users
may have. This may be done in a business environment, for example on the
business website, with a chat interface. On the mobile platform, the
intelligent virtual assistant is available as a call-button-operated service
where a voice asks the user “What can I do for you?” and then responds to
verbal input. Virtual assistants can save you a tremendous amount of time.
We spend hours on online research and then on writing up the report in our
own words; Jaanvi can do that for you. Provide a topic for research and
continue with your tasks while Jaanvi does the research. Another difficult
task is remembering test dates, birthdays, and anniversaries.
It comes as a surprise when you enter the class and realize there is a class
test today. Just tell Jaanvi about tests in advance, and she reminds you well
ahead of time so you can prepare. One of the main advantages of voice
search is its rapidity. In fact, voice is reputed to be four times faster than a
written search: whereas we can write about 40 words per minute, we are
capable of speaking around 150 in the same period of time. In this respect,
the ability of personal assistants to accurately recognize spoken words is a
prerequisite for their adoption by consumers.
1.3 PURPOSE
The purpose of a virtual assistant is to be capable of voice interaction,
music playback, making to-do lists, setting alarms, streaming podcasts,
playing audiobooks, and providing weather, traffic, sports, and other
real-time information, such as news. Virtual assistants enable users to
speak natural-language voice commands in order to operate the device
and its apps.
There is an increased overall awareness and a higher level of comfort
demonstrated specifically by millennial consumers. In this ever-evolving
digital world where speed, efficiency, and convenience are constantly
being optimized, it’s clear that we are moving towards less screen
interaction.
CHAPTER 2
System Analysis
2.1 Project Overview
The voice assistant project aims to create an interactive software application that
can understand and respond to voice commands. The assistant can perform
various tasks such as searching the web, playing music, opening applications,
and providing information, thereby enhancing user productivity and convenience.
2.2 Objectives
To develop a voice-activated assistant that can perform tasks based on user
commands.
To provide a user-friendly interface for interaction.
To integrate various APIs and libraries for functionality such as speech
recognition, text-to-speech, and web browsing.
To ensure the system is responsive and can handle multiple commands
efficiently.
2.3 Functional Requirements
The voice assistant should be able to:
Voice Recognition: Understand and process spoken commands using
speech recognition technology.
Text-to-Speech: Convert text responses into spoken words to communicate
with the user.
Information Retrieval: Access and retrieve information from Wikipedia
and other online sources.
Web Browsing: Open web pages and perform searches on Google and
YouTube.
Application Control: Open and control applications like Notepad, Visual
Studio Code, and Calculator.
Media Playback: Play music from platforms like Spotify and YouTube.
Time Announcement: Provide the current time to the user.
Restaurant Search: Find and display top restaurants nearby.
Typing Simulation: Simulate typing text in applications.
Shutdown and Restart: Execute system commands to shut down or restart
the computer.
2.4 Non-Functional Requirements
Performance: The system should respond to commands within a few
seconds.
Usability: The interface should be intuitive and easy to use for all users.
Reliability: The system should accurately recognize commands and
perform tasks without errors.
Scalability: The system should be able to integrate additional features in
the future.
Security: Ensure that the system does not expose sensitive user data.
2.5 System Architecture
The architecture of the voice assistant can be divided into several components:
User Interface: The interface through which users interact with the
assistant (e.g., voice commands).
Speech Recognition Module: Converts spoken language into text using
libraries like SpeechRecognition.
Natural Language Processing (NLP): Processes the recognized text to
understand user intent.
Task Execution Module: Executes commands based on user input (e.g.,
opening applications, searching the web).
Text-to-Speech Module: Converts text responses into speech using
libraries like pyttsx3.
External APIs: Integrates with external services like Wikipedia, YouTube,
and Spotify for information retrieval and media playback.
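The component flow above can be sketched as a single loop in which each module is an injected callable, so the hardware-dependent pieces (microphone, speakers) can be replaced by stubs during testing; all names here are illustrative, not the project's exact API:

```python
def run_once(listen, parse, execute, respond):
    """One pass of the assistant pipeline:
    speech -> text -> intent -> action -> spoken reply."""
    text = listen()        # Speech Recognition module
    intent = parse(text)   # NLP module
    reply = execute(intent)  # Task Execution module
    respond(reply)         # Text-to-Speech module
    return reply
```

Wiring the real modules in simply means passing takeCommand, an intent parser, a command dispatcher, and speak as the four callables.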
2.6 Technologies Used
Programming Language: Python
Libraries:
pyttsx3: For text-to-speech functionality.
SpeechRecognition: For voice recognition.
wikipedia: For retrieving information from Wikipedia.
webbrowser: For opening web pages.
os and subprocess: For executing system commands.
pyautogui: For simulating keyboard input.
Development Environment: Any Python IDE (e.g., PyCharm, Visual
Studio Code).
2.7 User Scenarios
Scenario 1: A user wants to know the current time. They say, "What time is
it?" The assistant responds with the current time.
Scenario 2: A user wants to play a specific song on YouTube. They say,
"Play 'Shape of You' on YouTube." The assistant opens YouTube and
searches for the song.
Scenario 3: A user wants to shut down their computer. They say,
"Shutdown." The assistant confirms and executes the shutdown command.
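Scenario 1 can be served by formatting the current time with strftime; a minimal sketch (the function name is my own, and the optional now parameter exists only to make the function testable):

```python
import datetime


def current_time_phrase(now=None):
    """Spoken response for "What time is it?" (Scenario 1)."""
    now = now or datetime.datetime.now()
    return "The time is " + now.strftime("%H:%M")
```

The assistant would pass this string to its text-to-speech function.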
2.8 Testing and Validation
Unit Testing: Test individual components (e.g., speech recognition, text-to-
speech) to ensure they work as expected.
Integration Testing: Test the interaction between different modules (e.g.,
voice recognition and task execution).
User Acceptance Testing: Gather feedback from users to validate the
usability and functionality of the assistant.
CHAPTER 3
Feasibility Study
3.1 Technical Feasibility
Technology Requirements: The project requires specific technologies and
libraries, such as:
Programming Language: Python
Libraries: pyttsx3, SpeechRecognition, wikipedia, webbrowser, os
, subprocess, pyautogui
Hardware Requirements:
A computer with a microphone for voice input.
Sufficient RAM and processing power to run the speech recognition
and text-to-speech functionalities smoothly.
Software Requirements:
Python installed on the system.
Required libraries installed via pip.
Integration: The project can integrate with various APIs (e.g., Wikipedia,
YouTube) and external applications (e.g., Notepad, Visual Studio Code).
3.2 Economic Feasibility
Cost Analysis:
Development Costs: Minimal, as the project primarily uses open-
source libraries and tools.
Operational Costs: Low, as it runs on personal computers without
the need for expensive infrastructure.
Potential Benefits:
Increased productivity for users by automating tasks.
Enhanced user experience through voice interaction.
Return on Investment (ROI): The project can save time and improve
efficiency, leading to a positive ROI.
3.3 Legal Feasibility
Licensing:
Ensure that all libraries and APIs used in the project comply with their
respective licenses (most are open-source).
Data Privacy:
The project should adhere to data protection regulations, especially if
it processes personal data (e.g., voice recordings).
Intellectual Property:
Ensure that the project does not infringe on any existing patents or
copyrights.
3.4 Operational Feasibility
User Acceptance:
The project aims to enhance user experience, and user feedback will
be crucial for its success.
Training and Support:
Users may require training to effectively use the voice assistant,
especially if they are not familiar with voice commands.
Maintenance:
Regular updates and maintenance will be necessary to ensure
compatibility with new libraries and APIs.
3.5 Scheduling Feasibility
Project Timeline:
The project can be developed in phases, with an estimated timeline as
follows:
Phase 1: Research and Planning (1-2 weeks)
Phase 2: Development of Core Features (3-4 weeks)
Phase 3: Testing and Debugging (2 weeks)
Phase 4: User Training and Feedback (1 week)
Phase 5: Final Adjustments and Deployment (1 week)
Milestones:
Set clear milestones for each phase to track progress and ensure
timely completion.
Overall Feasibility
Based on the analysis of technical, economic, legal, operational, and scheduling
aspects, the voice assistant project is deemed feasible. The project has the
potential to provide significant benefits to users while being cost-effective and
manageable within the given timeframe.
Recommendations
Proceed with the development of the voice assistant project, ensuring that
all legal and operational considerations are addressed.
Gather user feedback during the testing phase to refine features and
improve usability.
Plan for future enhancements based on user needs and technological
advancements.
This feasibility study provides a comprehensive overview of the project's
viability and serves as a foundation for further development and implementation.
CHAPTER 4
Software Requirements
4.1 Operating System:
Windows 10 or later (recommended)
macOS (optional, if developing on a Mac)
Linux (optional, if developing on a Linux distribution)
4.2 Programming Language:
Python 3.x (latest version recommended)
4.3 Libraries and Frameworks:
Speech Recognition: SpeechRecognition (for converting speech to text)
Text-to-Speech: pyttsx3 (for converting text to speech)
Web Interaction: webbrowser (for opening web pages)
Information Retrieval: wikipedia (for fetching information from Wikipedia)
GUI Automation: pyautogui (for simulating keyboard and mouse actions)
Operating System Interaction: os and subprocess (for executing system commands)
4.4 Development Environment:
Integrated Development Environment (IDE) or text editor: PyCharm or Visual Studio Code
4.5 Python Package Manager:
pip (for installing required libraries)
4.6 Additional Tools (optional):
Git (for version control)
Virtual environment (e.g., venv or conda) for managing dependencies
CHAPTER 5
Hardware Requirements
5.1 Minimum Hardware Specifications:
Processor:
Intel Core i3 or equivalent (minimum)
RAM:
4 GB (minimum)
Storage:
500 MB of free disk space for the application and libraries
Microphone:
A working microphone for voice input (built-in or external)
Speakers/Headphones:
For audio output (to hear the assistant's responses)
5.2 Recommended Hardware Specifications:
Processor: Intel Core i5 or equivalent (recommended for better performance)
RAM: 8 GB or more (recommended for multitasking and smoother performance)
Storage: 1 GB of free disk space or more (to accommodate additional libraries and data)
Microphone: High-quality external microphone for better voice recognition accuracy
Speakers/Headphones: Good-quality speakers or headphones for clear audio output
5.3 Network Requirements
Internet Connection:
A stable internet connection is required for:
Accessing online APIs (e.g., Wikipedia, YouTube)
Performing web searches
Streaming music from platforms like Spotify
CHAPTER 6
System Design
6.1 Architectural Design
The architecture of the voice assistant can be represented as a layered
architecture, consisting of the following layers:
Presentation Layer:
This layer is responsible for user interaction. It captures voice
commands and displays responses.
Application Layer:
This layer contains the core logic of the voice assistant. It processes
user commands, interacts with external APIs, and manages the flow of
information between the presentation layer and the data layer.
Data Layer:
This layer handles data storage and retrieval. In this case, it may not
require a traditional database but will interact with external data
sources (e.g., Wikipedia, YouTube).
6.2 Component Design
The system can be broken down into several key components, each responsible
for specific functionalities:
1. Voice Recognition Component:
Functionality: Converts spoken language into text.
Technology: Uses the SpeechRecognition library.
Input: Audio from the microphone.
Output: Recognized text command.
2. Natural Language Processing (NLP) Component:
Functionality: Analyzes the recognized text to determine user intent.
Technology: Custom logic using Python.
Input: Recognized text command.
Output: Parsed command and identified action.
3. Task Execution Component:
Functionality: Executes the identified action based on user
commands.
Sub-components:
Web Interaction: Opens web pages and performs searches.
Application Control: Opens applications like Notepad, Visual
Studio Code, etc.
Media Playback: Plays music from Spotify or YouTube.
System Commands: Executes shutdown or restart commands.
Input: Parsed command from the NLP component.
Output: Action performed (e.g., opening a web page, playing music).
4. Text-to-Speech Component:
Functionality: Converts text responses into spoken words.
Technology: Uses the pyttsx3 library.
Input: Text response.
Output: Audio output through speakers.
5. User Interface Component:
Functionality: Provides a simple interface for user interaction.
Technology: Console-based interface (or GUI if developed further).
Input: User voice commands.
Output: Visual feedback (optional) and audio responses.
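The NLP component's "custom logic" can be as simple as keyword matching over the recognized text; a minimal sketch of such a parser (the command set and all names are illustrative, echoing the command handling used elsewhere in the project):

```python
def parse_intent(text):
    """Map recognised text to an (action, argument) pair."""
    text = text.lower().strip()
    if text.startswith("play ") and text.endswith(" on youtube"):
        # Keep only the title between "play" and "on youtube"
        return ("play_youtube", text[len("play "):-len(" on youtube")].strip())
    if "the time" in text:
        return ("tell_time", None)
    if text.startswith("open "):
        return ("open_app", text[len("open "):].strip())
    return ("unknown", text)
```

The Task Execution component would then dispatch on the action name, with the argument (video title, application name) passed along.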
6.3 Data Flow Diagram (DFD)
A Data Flow Diagram illustrates how data moves through the system. Below is a
simplified DFD for the voice assistant:
DFD LEVEL 0 DIAGRAM:
DFD LEVEL 1 DIAGRAM:
DFD LEVEL 2 DIAGRAM:
CHAPTER 7
CODING
import json
import pyttsx3  # pip install pyttsx3
import requests
import speech_recognition as sr  # pip install SpeechRecognition
import datetime
import wikipedia  # pip install wikipedia
import webbrowser
import os
import subprocess
import pyautogui  # pip install pyautogui
import random
import re

# Initialize the text-to-speech engine
engine = pyttsx3.init('sapi5')
voices = engine.getProperty('voices')
engine.setProperty('voice', voices[1].id)  # Change index for a different voice


def speak(audio):
    """Function to make the assistant speak."""
    engine.say(audio)
    engine.runAndWait()


def wishMe():
    """Function to wish the user based on the time of day."""
    hour = int(datetime.datetime.now().hour)
    if hour < 12:
        speak("Good Morning Keshav, Welcome to Our Final Year Project!")
    elif hour < 18:
        speak("Good Afternoon Keshav, Welcome to Our Final Year Project!")
    else:
        speak("Good Evening Keshav, Welcome to Our Final Year Project!")
    speak("It's me Jaanvi. So please tell me what can I do for you.")


def takeCommand():
    """Function to take voice input from the user."""
    r = sr.Recognizer()
    with sr.Microphone() as source:
        print("Listening...")
        r.pause_threshold = 1
        audio = r.listen(source)
    try:
        print("Recognizing...")
        query = r.recognize_google(audio, language='en-in')
        print(f"You said: {query}\n")
    except sr.UnknownValueError:
        print("Sorry, I didn't understand that.")
        return "None"
    except sr.RequestError:
        print("Could not request results from Google Speech Recognition service.")
        return "None"
    return query.lower()


def get_weather(city):
    api_key = "76d09dec67ee495b80985755250404"  # Replace with your WeatherAPI key
    base_url = f"http://api.weatherapi.com/v1/current.json?key={api_key}&q={city}&aqi=no"
    response = requests.get(base_url)
    data = response.json()
    if "error" not in data:
        weather_desc = data["current"]["condition"]["text"]
        temperature = data["current"]["temp_c"]
        speak(f"The temperature in {city} is {temperature} degrees Celsius with {weather_desc}.")
    else:
        speak("City not found.")


if __name__ == "__main__":
    wishMe()
    while True:
        query = takeCommand()

        if 'open wikipedia and tell me' in query:
            speak('Searching Wikipedia...')
            query = query.replace("open wikipedia and tell me", "")
            results = wikipedia.summary(query, sentences=1)
            speak("According to Wikipedia")
            print(results)
            speak(results)

        elif 'play' in query and 'on youtube' in query:
            video_title = query.replace("play", "").replace("on youtube", "").strip()
            if video_title:
                webbrowser.open(f"https://www.youtube.com/results?search_query={video_title}")
                speak(f"Searching for {video_title} on YouTube.")
            else:
                speak("Please specify the video title.")

        elif 'open google and search' in query:
            search_query = re.search(r"open google and search (.+)", query)
            if search_query:
                search_query = search_query.group(1)
                url = f"https://www.google.com/search?q={search_query}"
                webbrowser.open(url)
                speak(f"Opening Google and searching {search_query}.")

        elif 'open youtube and search' in query:
            search_query = re.search(r"open youtube and search (.+)", query)
            if search_query:
                search_query = search_query.group(1)
                url = f"https://www.youtube.com/results?search_query={search_query}"
                webbrowser.open(url)
                speak(f"Opening YouTube and searching {search_query}.")

        elif 'the time' in query:
            strTime = datetime.datetime.now().strftime("%H:%M:%S")
            speak(f"Keshav, the time is {strTime}")

        elif 'weather in' in query:
            city = query.replace("weather in", "").strip()
            get_weather(city)

        elif 'open code' in query:
            codePath = "D:\\one drive\\OneDrive\\Desktop\\janhavi\\new.py"  # Adjust the path as needed
            os.startfile(codePath)
            speak("Opening Visual Studio Code.")

        elif 'open notepad' in query:
            subprocess.Popen("notepad.exe")
            speak("Opening Notepad.")

        elif 'open calculator' in query:
            subprocess.Popen("calc.exe")
            speak("Opening Calculator.")

        elif 'open spotify' in query:
            random_song = random.choice([
                "https://open.spotify.com/track/2Hg2x9hK9lmwqVXZ6FM1Ry?si=064f25fd0cee49e0",
                "https://open.spotify.com/track/6bIGEeG0Q5mC310MfqjrPi?si=00c990bd02f54bcd",
                "https://open.spotify.com/track/1rEVydQSe04NJUqyyEyeEq?si=7e4721205921466a",
                "https://open.spotify.com/track/6z5Zp1yjdjDuaLSZh9QJgi?si=9935357b41c04a66",
                "https://open.spotify.com/track/48IFKNmzdGM1Cs1lujGKrh?si=8f87cd60f4e4431a"
            ])
            webbrowser.open(random_song)
            speak("Playing a random song on Spotify.")

        elif 'play a random song' in query:
            random_song = random.choice([
                "https://youtu.be/VCNLZflKQ7o?si=TNjGFtatIlEv3Sol",
                "https://youtube.com/shorts/kEvjaB-Rq8c?si=nW7yv0QOO7UHovpZ",
                "https://youtu.be/POvFEQaK634?si=n5iX1ztRiCgKuj_5"
            ])
            webbrowser.open(random_song)
            speak("Playing a random song on YouTube.")

        elif 'search top restaurants' in query:
            webbrowser.open("http://www.google.com/maps/search/top+restaurants+near+me")
            speak("Searching for top restaurants around you on Google Maps.")

        elif 'type' in query:
            text_to_type = query.replace("type", "").strip()
            pyautogui.typewrite(text_to_type)
            speak(f"Typing {text_to_type}.")

        elif 'delete the last word' in query:
            pyautogui.press('backspace', presses=1)
            speak("Deleted the last word.")

        elif 'thank you' in query:
            speak("Please Keshav, do not say thank you. In friendship, no thank you and no sorry.")

        elif 'open best places to visit' in query:
            webbrowser.open("http://www.google.com/maps/search/best+places+near+me")
            speak("Opening the best places to visit.")

        elif 'open amazon' in query:
            webbrowser.open("http://amazon.com")
            speak("Opening Amazon.")

        elif 'open flipkart' in query:
            webbrowser.open("http://flipkart.com")
            speak("Opening Flipkart.")

        elif 'open bookmyshow' in query:
            webbrowser.open("https://in.bookmyshow.com/")
            speak("Opening BookMyShow.")

        elif 'shutdown' in query:
            speak("Shutting down the computer.")
            subprocess.call(["shutdown", "/s", "/t", "1"])  # Shut down almost immediately

        elif 'restart' in query:
            speak("Restarting the computer.")
            subprocess.call(["shutdown", "/r", "/t", "1"])  # Restart almost immediately

        elif 'open camera' in query:
            speak("Opening camera.")
            subprocess.Popen("start microsoft.windows.camera:", shell=True)

        elif 'open whatsapp' in query:
            speak("Opening WhatsApp.")
            webbrowser.open("https://web.whatsapp.com/")

        elif ('what is your name' in query or 'tell me your name' in query
              or 'who are you' in query or "what's your name" in query):
            speak("My name is Jaanvi, how can you forget my name?")

        elif 'who is your owner' in query or 'who is your master' in query:
            speak("My owner is Keshav. He is my best friend.")

        elif 'bye-bye' in query or 'exit' in query:
            speak("Love you! Have a great day!")
            break  # Exit the loop and end the program
CHAPTER 8
Validation Checks
8.1 Functional Validation
Functional validation ensures that the system performs its intended functions
correctly.
Voice Recognition:
Test Case: Speak a clear command (e.g., "What time is it?").
Expected Result: The system accurately recognizes the command
and converts it to text.
Natural Language Processing:
Test Case: Provide various commands (e.g., "Open Notepad",
"Search Wikipedia for Python").
Expected Result: The system correctly identifies the intent of each
command.
Task Execution:
Test Case: Execute commands to open applications (e.g., "Open
Visual Studio Code").
Expected Result: The specified application opens without errors.
Text-to-Speech:
Test Case: Ask the assistant to say a specific phrase (e.g., "Tell me a
joke").
Expected Result: The assistant responds with the correct audio
output.
Web Interaction:
Test Case: Command the assistant to search for a term on Google
(e.g., "Open Google and search for weather").
Expected Result: The browser opens and displays the correct search
results.
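These functional checks can be automated with the standard unittest module; the following is a self-contained sketch in which the parse_intent helper is a simplified stand-in for the assistant's command logic, not the project's actual code:

```python
import unittest


def parse_intent(text):
    """Illustrative command parser under test."""
    text = text.lower()
    if "the time" in text:
        return "tell_time"
    if "open notepad" in text:
        return "open_notepad"
    return "unknown"


class FunctionalValidationTests(unittest.TestCase):
    def test_time_command(self):
        self.assertEqual(parse_intent("Tell me the time"), "tell_time")

    def test_open_notepad(self):
        self.assertEqual(parse_intent("Open Notepad"), "open_notepad")

    def test_ambiguous_command(self):
        # Unclear commands should fall through to "unknown"
        self.assertEqual(parse_intent("Open that thing"), "unknown")
```

Running the suite with `python -m unittest` gives a repeatable record of the functional test cases listed above.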
8.2 Performance Validation
Performance validation checks the responsiveness and efficiency of the system.
Response Time:
Test Case: Measure the time taken from issuing a command to
receiving a response.
Expected Result: The response time should be within an acceptable
range (e.g., under 3 seconds).
Simultaneous Commands:
Test Case: Issue multiple commands in quick succession.
Expected Result: The system should handle commands without
crashing or slowing down.
8.3 Usability Validation
Usability validation ensures that the system is user-friendly and intuitive.
User Feedback:
Test Case: Conduct user testing sessions with different users.
Expected Result: Gather feedback on the ease of use, clarity of
commands, and overall user experience.
Error Handling:
Test Case: Provide unclear or ambiguous commands (e.g., "Open that
thing").
Expected Result: The system should respond with a helpful error
message or prompt for clarification.
8.4 Security Validation
Security validation checks for vulnerabilities and data protection.
Data Privacy:
Test Case: Analyze how the system handles voice data and personal
information.
Expected Result: Ensure that no sensitive data is stored or transmitted
without user consent.
Access Control:
Test Case: Attempt to execute system commands (e.g., shutdown)
without proper authorization.
Expected Result: The system should prevent unauthorized access to
sensitive functions.
8.5 Integration Validation
Integration validation ensures that all components of the system work together
seamlessly.
API Integration:
Test Case: Test the integration with external APIs (e.g., Wikipedia,
YouTube).
Expected Result: The system should successfully retrieve data from
these APIs without errors.
Component Interaction:
Test Case: Verify the interaction between the voice recognition, NLP,
task execution, and text-to-speech components.
Expected Result: Each component should communicate effectively
and produce the expected outcomes.
CHAPTER 9
Implementation and Maintenance
The implementation and maintenance phases are crucial for the successful
deployment and long-term upkeep of the voice assistant project. Below is a detailed plan
outlining the steps involved in both phases.
9.1 Implementation Plan
9.1.1. Preparation Phase
Define Project Scope: Clearly outline the features and functionalities to be
included in the voice assistant.
Set Up Development Environment:
Install Python and the required libraries
(pyttsx3, SpeechRecognition, wikipedia, pyautogui; the webbrowser
module is part of Python's standard library).
Choose an Integrated Development Environment (IDE) such as
PyCharm or Visual Studio Code.
9.1.2. Development Phase
Component Development:
Voice Recognition: Implement the voice recognition functionality
using the SpeechRecognition library.
Natural Language Processing: Develop the logic to parse and
understand user commands.
Task Execution: Create functions to handle various tasks (e.g.,
opening applications, searching the web).
Text-to-Speech: Integrate the pyttsx3 library to provide audio
feedback.
Integration:
Ensure that all components (voice recognition, NLP, task execution,
text-to-speech) work together seamlessly.
Test the integration of external APIs (e.g., Wikipedia, YouTube) for
information retrieval.
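A minimal sketch of how the integrated pieces could fit together. The execute() helper and the command phrases are illustrative assumptions; takeCommand() and speak() are as in Appendix A, and the full loop would be takeCommand() -> execute() -> speak():

```python
import datetime

def execute(query):
    """Dispatch a recognized command to the matching task (illustrative only)."""
    if "time" in query:
        return datetime.datetime.now().strftime("The time is %H:%M")
    if "open google" in query:
        # The real assistant would call webbrowser.open("https://www.google.com")
        return "Opening Google"
    return "Sorry, I can't do that yet."

print(execute("what is the time"))
```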
9.1.3. Testing Phase
Unit Testing: Test individual components to ensure they function correctly.
Integration Testing: Verify that all components interact as expected.
User Acceptance Testing: Conduct testing sessions with real users to
gather feedback and identify usability issues.
9.1.4. Deployment Phase
Final Adjustments: Make necessary adjustments based on testing
feedback.
Documentation: Create user manuals and technical documentation for
future reference.
Deployment: Deploy the voice assistant on the target systems (e.g.,
personal computers).
9.2 Maintenance Plan
9.2.1 Monitoring and Support
User Feedback: Continuously gather feedback from users to identify areas
for improvement.
Error Reporting: Implement a mechanism for users to report bugs or
issues encountered while using the assistant.
9.2.2 Regular Updates
Feature Enhancements: Based on user feedback, plan and implement new
features or improvements.
Library Updates: Regularly update the libraries and dependencies to
ensure compatibility and security.
Bug Fixes: Address any bugs or issues reported by users promptly.
9.2.3 Performance Optimization
Performance Monitoring: Monitor the performance of the voice assistant
to identify any slowdowns or inefficiencies.
Optimization: Optimize code and algorithms to improve response times
and overall performance.
9.2.4 User Training and Documentation
User Training: Provide training sessions or materials for new users to help
them understand how to use the voice assistant effectively.
Documentation Updates: Keep user manuals and technical documentation
up to date with any changes or new features.
9.2.5 Backup and Recovery
Data Backup: Regularly back up any important data or configurations
related to the voice assistant.
Recovery Plan: Develop a recovery plan in case of system failures or data
loss.
CHAPTER 10
Testing Techniques and Strategies
10.1 Testing Techniques
1. Unit Testing:
Description: Tests individual components or functions in isolation to
ensure they work correctly.
Tools: unittest or pytest in Python.
2. Integration Testing:
Description: Tests the interaction between integrated components to
ensure they work together as expected.
Tools: pytest or custom scripts.
3. Functional Testing:
Description: Tests the system against functional requirements to
ensure it behaves as expected.
Tools: Manual testing or automated testing frameworks.
4. User Acceptance Testing (UAT):
Description: Conducted with real users to validate the system's
usability and functionality.
Tools: Surveys, feedback forms, and direct observation.
5. Performance Testing:
Description: Tests the system's responsiveness and stability under
various conditions.
Tools: Load testing tools like Apache JMeter or custom scripts.
6. Error Handling Testing:
Description: Tests how the system handles invalid input or
unexpected situations.
Tools: Manual testing or automated scripts.

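As a concrete illustration of the unit-testing technique above, the sketch below tests a toy parse_command() helper (an illustrative stand-in, not the assistant's actual parser) in isolation using Python's built-in unittest:

```python
import unittest

def parse_command(text):
    """Toy parser used only to illustrate testing a component in isolation."""
    text = text.strip().lower()
    if text.startswith("open "):
        return ("open", text[5:])
    return ("unknown", text)

class TestParseCommand(unittest.TestCase):
    def test_open_application(self):
        self.assertEqual(parse_command("Open Notepad"), ("open", "notepad"))

    def test_unknown_command(self):
        self.assertEqual(parse_command("hello there")[0], "unknown")

if __name__ == "__main__":
    unittest.main(argv=["parse_command_tests"], exit=False, verbosity=2)
```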
10.2 Testing Strategies
10.2.1 Unit Testing Strategy
Components to Test:
Voice Recognition
Natural Language Processing
Task Execution
Text-to-Speech
Test Data and Expected Results:
Test Case | Input | Expected Output | Potential Errors
Test Voice Recognition | "What time is it?" | "What time is it?" | Misrecognition of speech
Test NLP for Command Parsing | "Open Notepad" | Command identified: Open Notepad | Incorrect command identification
Test Task Execution for Notepad | Command: Open Notepad | Notepad application opens | Application fails to open
Test Text-to-Speech | "Hello, how can I help?" | Audio output of the text | No audio output
10.2.2 Integration Testing Strategy
Components to Test:
Interaction between Voice Recognition, NLP, and Task Execution.
Test Data and Expected Results:
Test Case | Input | Expected Output | Potential Errors
Test Full Command Flow | "Play music on YouTube" | Opens YouTube and searches for music | Failure to open or search
Test Command with Invalid Input | "Play" | Error message or prompt for input | System crashes or hangs
10.2.3 Functional Testing Strategy
Functional Requirements to Test:
Voice commands for various tasks (e.g., opening applications, searching the
web).
Test Data and Expected Results:
Test Case | Input | Expected Output | Potential Errors
Test Wikipedia Search | "Open Wikipedia and tell me about Python" | Wikipedia summary of Python | Incorrect or no summary returned
Test Shutdown Command | "Shutdown" | System initiates shutdown | System does not respond
10.2.4 User Acceptance Testing (UAT) Strategy
Test Scenarios:
Real users interact with the assistant to perform various tasks.
Test Data and Expected Results:
Test Case | User Command | Expected User Experience | Potential Errors
Test General Usability | Various commands | User finds the assistant easy to use | Confusion or difficulty in commands
Test Error Handling | "Open that thing" | Assistant prompts for clarification | Assistant fails to respond
10.2.5 Performance Testing Strategy
Test Scenarios:
Measure response times and system behavior under load.
Test Data and Expected Results:
Test Case | Load | Expected Performance | Potential Errors
Test Response Time | 10 simultaneous commands | Response time < 3 seconds | Slow response or timeouts
Test System Stability | Continuous usage for 1 hour | System remains stable | Crashes or memory leaks
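The "10 simultaneous commands" scenario can be scripted with threads. A minimal sketch, using a stand-in handle() function in place of the assistant's real dispatcher:

```python
import threading
import time

def handle(command):
    """Stand-in for the assistant's command handler (simulates a little work)."""
    time.sleep(0.01)
    return f"done: {command}"

def run_load_test(n=10):
    """Issue n commands concurrently; return (commands handled, seconds elapsed)."""
    results = []
    lock = threading.Lock()

    def worker(i):
        out = handle(f"command {i}")
        with lock:
            results.append(out)

    start = time.perf_counter()
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    return len(results), elapsed

count, elapsed = run_load_test()
assert count == 10  # no commands dropped
print(f"handled {count} commands in {elapsed:.2f} s")
```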
10.2.6 Error Handling Testing Strategy
Test Scenarios:
Provide invalid input and observe system behavior.
Test Data and Expected Results:
Test Case | Input | Expected Output | Potential Errors
Test Invalid Command | "Open XYZ" | Error message or prompt for valid command | System crashes or hangs
Test Empty Command | "" | Error message or prompt for input | -
CHAPTER 11
Conclusion
The voice assistant project represents a significant step forward in the integration
of voice recognition technology into everyday tasks, enhancing user interaction
and productivity. Throughout the development process, we have focused on
creating a robust, user-friendly application that can understand and respond to
voice commands effectively.
11.1 Key Achievements:
1. Functional Capabilities: The voice assistant successfully implements core
functionalities, including voice recognition, natural language processing,
task execution, and text-to-speech capabilities. Users can perform a variety
of tasks, such as searching the web, opening applications, and retrieving
information from online sources like Wikipedia.
2. User-Centric Design: The project emphasizes usability, ensuring that the
interface is intuitive and accessible to users of varying technical
backgrounds. User feedback during testing phases has been instrumental in
refining the assistant's capabilities and improving the overall user
experience.
3. Testing and Validation: A comprehensive testing strategy has been
employed, encompassing unit testing, integration testing, functional testing,
and user acceptance testing. This rigorous approach has helped identify and
resolve issues, ensuring that the voice assistant operates reliably and
efficiently.
4. Scalability and Future Enhancements: The architecture of the voice
assistant is designed to be scalable, allowing for the integration of
additional features and improvements in future iterations. Potential
enhancements include multi-language support, contextual awareness, and
integration with smart home devices.
5. Performance Optimization: The system has been optimized for
performance, ensuring quick response times and stability under various
conditions. Continuous monitoring and updates will help maintain
performance as user demands evolve.
11.2 Challenges and Lessons Learned:
Voice Recognition Accuracy: Achieving high accuracy in voice
recognition posed challenges, particularly with diverse accents and
background noise. Ongoing improvements in this area will be essential for
enhancing user satisfaction.
Error Handling: Developing effective error handling mechanisms has
been crucial in providing users with clear feedback and guidance when
commands are not understood.
User Training: Educating users on how to interact with the voice assistant
effectively has proven beneficial, highlighting the importance of user
training and support.
11.3 Final Thoughts:
The voice assistant project has successfully demonstrated the potential of voice
technology to streamline tasks and improve user interaction with digital systems.
As technology continues to advance, the project lays a solid foundation for future
developments in voice recognition and artificial intelligence.
In conclusion, the voice assistant not only meets the initial project goals but also
opens up new avenues for exploration and enhancement. With ongoing support
and development, it has the potential to become an indispensable tool for users
seeking to enhance their productivity and simplify their daily tasks.
CHAPTER 12
Future Scope and Further Enhancements
The voice assistant project has laid a solid foundation for further development
and enhancement. As technology evolves and user needs change, there are
numerous opportunities to expand the capabilities and functionalities of the voice
assistant. Below are some potential future scopes and enhancements for the
project:
12.1 Multi-Language Support
Description: Implement support for multiple languages to cater to a
broader audience.
Benefit: This enhancement would make the voice assistant accessible to
non-English speakers and users from diverse linguistic backgrounds.
12.2 Contextual Awareness
Description: Develop the ability for the voice assistant to understand
context and maintain conversation history.
Benefit: This would allow for more natural interactions, where the assistant
can follow up on previous commands or questions, improving user
experience.
12.3 Integration with Smart Home Devices
Description: Enable the voice assistant to control smart home devices (e.g.,
lights, thermostats, security systems).
Benefit: This would position the voice assistant as a central hub for home
automation, enhancing convenience and user control over their
environment.
12.4 Personalization Features
Description: Implement user profiles that allow the assistant to learn
individual preferences and provide personalized responses.
Benefit: Personalization can improve user satisfaction by tailoring
interactions based on user habits, preferences, and frequently used
commands.
12.5 Enhanced Natural Language Processing (NLP)
Description: Utilize advanced NLP techniques, such as machine learning
models, to improve command recognition and understanding.
Benefit: Enhanced NLP capabilities would lead to better comprehension of
complex commands and more accurate responses.
12.6 Integration with Third-Party APIs
Description: Expand the assistant's capabilities by integrating with
additional third-party APIs (e.g., weather services, news updates, calendar
management).
Benefit: This would allow users to access a wider range of information and
services through voice commands, making the assistant more versatile.
12.7 Voice Customization
Description: Allow users to customize the voice and tone of the assistant
(e.g., different accents, genders).
Benefit: Customization can enhance user engagement and satisfaction by
allowing users to choose a voice that they find more appealing.
12.8 Offline Functionality
Description: Develop capabilities that allow the voice assistant to function
offline for basic commands and tasks.
Benefit: This would enhance usability in areas with limited internet
connectivity and provide a more reliable experience.
12.9 Security and Privacy Enhancements
Description: Implement robust security measures to protect user data and
ensure privacy, including voice data encryption and user authentication.
Benefit: Enhancing security and privacy will build user trust and comply
with data protection regulations.
12.10 User Feedback Mechanism
Description: Create a built-in feedback system that allows users to report
issues, suggest features, and provide general feedback.
Benefit: This would facilitate continuous improvement of the assistant
based on real user experiences and needs.
12.11 Cross-Platform Compatibility
Description: Develop versions of the voice assistant for various platforms
(e.g., mobile devices, web applications).
Benefit: Cross-platform compatibility would increase accessibility and
allow users to interact with the assistant from different devices.
CHAPTER 13
Bibliography
13.1 Websites and Online Resources
Python Software Foundation. (n.d.). Python Documentation. Retrieved
from https://docs.python.org/3/
Official documentation for Python, providing information on
installation, libraries, and language features.
SpeechRecognition Library. (n.d.). SpeechRecognition Documentation.
Retrieved from https://pypi.org/project/SpeechRecognition/
Documentation for the SpeechRecognition library, detailing
installation and usage.
pyttsx3 Library. (n.d.). pyttsx3 Documentation. Retrieved
from https://pyttsx3.readthedocs.io/en/latest/
Documentation for the pyttsx3 library, which provides text-to-
speech capabilities.
Wikipedia. (n.d.). Wikipedia API Documentation. Retrieved
from https://www.mediawiki.org/wiki/API:Main_page
Documentation for the Wikipedia API, which allows for
programmatic access to Wikipedia content.
Google (search engine)
Visual Studio Code (code editor)
PyCharm (Python IDE)
CHAPTER 14
Appendices
14.A Appendix A: Code Snippets
14.A.1. Voice Recognition Code
import speech_recognition as sr

def takeCommand():
    """Take voice input from the user and return it as lowercase text."""
    r = sr.Recognizer()
    with sr.Microphone() as source:
        print("Listening...")
        r.pause_threshold = 1
        audio = r.listen(source)
    try:
        print("Recognizing...")
        query = r.recognize_google(audio, language='en-in')
        print(f"You said: {query}\n")
    except sr.UnknownValueError:
        print("Sorry, I didn't understand that.")
        return "None"
    except sr.RequestError:
        print("Could not request results from Google Speech Recognition service.")
        return "None"
    return query.lower()
14.A.2. Text-to-Speech Code
import pyttsx3

engine = pyttsx3.init('sapi5')
voices = engine.getProperty('voices')
engine.setProperty('voice', voices[1].id)  # Change index for a different voice

def speak(audio):
    """Make the assistant speak the given text aloud."""
    engine.say(audio)
    engine.runAndWait()
14.B Appendix B: Test Cases
14.B.1. Unit Test Cases
Test Case ID | Description | Input | Expected Output | Status
TC-001 | Test voice recognition | "What time is it?" | "What time is it?" | Passed
TC-002 | Test NLP command parsing | "Open Notepad" | Command identified: Open Notepad | Passed
TC-003 | Test task execution for Notepad | Command: Open Notepad | Notepad application opens | Passed
TC-004 | Test text-to-speech functionality | "Hello, how can I help?" | Audio output of the text | Passed
14.B.2. Integration Test Cases
Test Case ID | Description | Input | Expected Output | Status
ITC-001 | Test full command flow | "Play music on YouTube" | Opens YouTube and searches for music | Passed
ITC-002 | Test command with invalid input | "Play" | Error message or prompt for input | Passed
14.C Appendix C: User Manual
14.C.1. Getting Started
1. Installation:
Ensure Python is installed on your system.
Install required libraries using pip:
pip install pyttsx3 SpeechRecognition wikipedia pyautogui
14.C.2. Using the Voice Assistant
Basic Commands:
"What time is it?" - The assistant will respond with the current time.
"Open Notepad" - The assistant will open the Notepad application.
"Search [query]" - The assistant will perform a Google search for the
specified query.
14.C.3. Troubleshooting
If the assistant does not recognize your voice, ensure that your microphone
is working and properly configured.
For any errors, check the console output for error messages and follow the
prompts.
14.D Appendix D: Additional Resources
Documentation:
SpeechRecognition Documentation
pyttsx3 Documentation
Wikipedia API Documentation
Tutorials:
How to Build a Voice Assistant with Python
Creating a Voice Assistant in Python
College of Management Studies UNNAO
Affiliated to Chhatrapati Sahu Ji Maharaj University
Accredited by NAAC A++
Project Report
On
Title of project “Desktop assistant through voice control”
Submitted in Partial Fulfilment of the requirement
For the award of Degree of
BACHELOR OF computer application
Department of computer application
Session (2024-25)
Under the Guidance of Submitted by
Mrs. Priya Vohra Uday Shanker Srivastava
Assistant Professor BCA- 6th
College of Management Studies Roll No.22015003398
College of Management Studies UNNAO
Affiliated to Chhatrapati Sahu Ji Maharaj University
Accredited by NAAC A++
Project Report
On
Title of project “Desktop assistant through voice control”
Submitted in Partial Fulfilment of the requirement
For the award of Degree of
BACHELOR OF computer application
Department of computer application
Session (2024-25)
Under the Guidance of Submitted by
Mrs. Priya Vohra Varun Singh
Assistant Professor BCA- 6th
College of Management Studies Roll No.22015003406
College of Management Studies UNNAO
Affiliated to Chhatrapati Sahu Ji Maharaj University
Accredited by NAAC A++
Project Report
On
Title of project “Desktop assistant through voice control”
Submitted in Partial Fulfilment of the requirement
For the award of Degree of
BACHELOR OF computer application
Department of computer application
Session (2024-25)
Under the Guidance of Submitted by
Vishal Srivastava Nischay Gupta
Assistant Professor BCA- 6th
College of Management Studies Roll No.22015003341
DECLARATION
I, Uday Shanker Srivastava, a student of BCA 6th Sem. at the College
of Management Studies, Unnao, hereby declare that the project report
titled “Desktop assistant through voice control” has been completed
by me under the supervision of Priya Vohra, Head of Business
Administration. I further affirm that the contents of this report are
original and have not been submitted for the award of any other degree
or diploma at any other University or Institute.
I have duly credited all original authors and sources for words, ideas,
diagrams, graphics, programs, experiments, and results that are not my
own contributions. Wherever direct quotations have been used, they
have been clearly indicated and properly referenced. I affirm that no
part of this project is plagiarized and that all reported results are genuine
and unaltered.
Uday Shanker Srivastava
BCA 6th Semester
Roll No.: 22015003398
College of Management Studies
Affiliated to CSJMU (Accredited by NAAC A++)
INSTITUTE CERTIFICATE
This is to certify that the project titled “Desktop assistant through
voice control” undertaken by Uday Shanker Srivastava, has been successfully
completed. The study was conducted as a partial requirement for the award of the
degree of Bachelor of Computer Application at the College of
Management Studies, Unnao, affiliated with Chhatrapati Shahu Ji
Maharaj University, Kanpur, U.P.
This project is the original work of the candidate, completed
independently and meeting the high standards required for submission
towards the said degree. All assistance and resources utilized for this
project have been duly acknowledged.
Priya Vohra
(Designation)
ACKNOWLEDGMENT
The project report is an essential component of the BCA program,
providing valuable experience that will undoubtedly benefit me in my
future career.
With profound gratitude, I take this opportunity to express my sincere
thanks to Priya Vohra (Head, Department of Computer Application,
College of Management Studies) for her invaluable support,
encouragement, and guidance throughout the course of this project.
I am deeply thankful for her interest, constructive feedback, persistent
encouragement, and unwavering support during every stage of the
project's development. It has truly been an honour and a privilege to
work under her mentorship.
I also extend my heartfelt thanks to my parents and friends for their
continuous encouragement and cooperation, which greatly contributed
to the successful completion of this study.
Name of Student: Uday Shanker Srivastava
BCA- 6th Semester
Roll No.: 22015003398
DECLARATION
I, Nischay Gupta, a student of BCA 6th Sem. at the College of
Management Studies, Unnao, hereby declare that the project report
titled “Desktop assistant through voice control” has been completed
by me under the supervision of Vishal Srivastava, Head of Business
Administration. I further affirm that the contents of this report are
original and have not been submitted for the award of any other degree
or diploma at any other University or Institute.
I have duly credited all original authors and sources for words, ideas,
diagrams, graphics, programs, experiments, and results that are not my
own contributions. Wherever direct quotations have been used, they
have been clearly indicated and properly referenced. I affirm that no
part of this project is plagiarized and that all reported results are genuine
and unaltered.
Nischay Gupta
BCA 6th Semester
Roll No.: 22015003341
College of Management Studies
Affiliated to CSJMU (Accredited by NAAC A++)
INSTITUTE CERTIFICATE
This is to certify that the project titled “Desktop assistant through
voice control” undertaken by Nischay Gupta, has been successfully
completed. The study was conducted as a partial requirement for the
award of the degree of Bachelor of Computer Application at the
College of Management Studies, Unnao, affiliated with Chhatrapati
Shahu Ji Maharaj University, Kanpur, U.P.
This project is the original work of the candidate, completed
independently and meeting the high standards required for submission
towards the said degree. All assistance and resources utilized for this
project have been duly acknowledged.
Vishal Srivastava
(Designation)
ACKNOWLEDGMENT
The project report is an essential component of the BCA program,
providing valuable experience that will undoubtedly benefit me in my
future career.
With profound gratitude, I take this opportunity to express my sincere
thanks to Vishal Srivastava (Head, Department of Computer
Application, College of Management Studies) for his invaluable
support, encouragement, and guidance throughout the course of this
project.
I am deeply thankful for his interest, constructive feedback, persistent
encouragement, and unwavering support during every stage of the
project's development. It has truly been an honour and a privilege to
work under his mentorship.
I also extend my heartfelt thanks to my parents and friends for their
continuous encouragement and cooperation, which greatly contributed
to the successful completion of this study.
Name of Student: Nischay Gupta
BCA- 6th Semester
Roll No.: 22015003341
DECLARATION
I, Varun Singh, a student of BCA 6th Sem. at the College of
Management Studies, Unnao, hereby declare that the project report
titled “Desktop assistant through voice control” has been completed
by me under the supervision of Priya Vohra, Head of Business Administration. I
further affirm that the contents of this report are original and have not
been submitted for the award of any other degree or diploma at any
other University or Institute.
I have duly credited all original authors and sources for words, ideas,
diagrams, graphics, programs, experiments, and results that are not my
own contributions. Wherever direct quotations have been used, they
have been clearly indicated and properly referenced. I affirm that no
part of this project is plagiarized and that all reported results are genuine
and unaltered.
Varun Singh
BCA 6th Semester
Roll No.: 22015003406
College of Management Studies
Affiliated to CSJMU (Accredited by NAAC A++)
INSTITUTE CERTIFICATE
This is to certify that the project titled “Desktop assistant through
voice control” undertaken by Varun Singh has been successfully
completed. The study was conducted as a partial requirement for the
award of the degree of Bachelor of Computer Application at the
College of Management Studies, Unnao, affiliated with Chhatrapati
Shahu Ji Maharaj University, Kanpur, U.P.
This project is the original work of the candidate, completed
independently and meeting the high standards required for submission
towards the said degree. All assistance and resources utilized for this
project have been duly acknowledged.
Priya Vohra
(Designation)
ACKNOWLEDGMENT
The project report is an essential component of the BCA program,
providing valuable experience that will undoubtedly benefit me in my
future career.
With profound gratitude, I take this opportunity to express my sincere
thanks to Priya Vohra (Head, Department of Computer Application,
College of Management Studies) for her invaluable support,
encouragement, and guidance throughout the course of this project.
I am deeply thankful for her interest, constructive feedback, persistent
encouragement, and unwavering support during every stage of the
project's development. It has truly been an honour and a privilege to
work under her mentorship.
I also extend my heartfelt thanks to my parents and friends for their
continuous encouragement and cooperation, which greatly contributed
to the successful completion of this study.
Name of Student: Varun Singh
BCA- 6th Semester
Roll No.: 22015003406