Luna

The project report details the development of 'Luna: Voice Assistant' by Miss. Sana Shakil Chougle as part of her Bachelor of Science in Computer Science. It outlines the project's objectives, methodology, expected outcomes, and the technologies used, including Python and various libraries for voice recognition and natural language processing. The report emphasizes Luna's potential to enhance user productivity through voice-activated functionalities and task automation.


A PROJECT REPORT

On

Luna: Voice Assistant


Submitted by

Miss. Sana Shakil Chougle.


in partial fulfillment of the requirements for the award of the degree of

BACHELOR OF SCIENCE
In

COMPUTER SCIENCE
under the guidance of

Asst. Prof. Bismah Killedar


Department of Computer Science

Anjuman Islam Janjira Degree College of Science


Sem Vth
A.Y. 2024 – 2025
Anjuman Islam Janjira Degree College of Science
JANJIRA MURUD, DIST. RAIGAD, PIN.402401

Department of Computer Science

CERTIFICATE

This is to certify that Miss. Sana Shakil Chougle of T.Y.B.Sc. Sem Vth, Roll No. 24464 / University Seat No. _____________, has satisfactorily completed the project "Luna: Voice Assistant", submitted in partial fulfilment of the requirements for the award of Bachelor of Science in Computer Science during the academic year 2024-2025.

Date of Submission:

Project Guide          Head, Department of Computer Science          I/C Principal

College Seal                                                Signature of Examiner


DECLARATION

I, Miss. Sana Shakil Chougle of T.Y. B.Sc., Roll No. 24464 / University Seat No. ____________, hereby declare that the project entitled "Luna: Voice Assistant", submitted in partial fulfilment of the requirements for the award of Bachelor of Science in Computer Science during the academic year 2024-2025, is my original work and has not formed the basis for the award of any degree, associateship, fellowship or any other similar title.

Signature of the Student:

Place:

Date:
ACKNOWLEDGEMENT

I extend my sincere gratitude to the individuals pivotal in the completion of this project. Dr. Sajid Shaikh, our esteemed Principal, provided invaluable motivation. Miss. Bismah Killedar, my teacher, shared years of experience, offering unwavering support and insightful suggestions.
Special thanks to the dedicated educators, Miss. Maria Shaban and Mrs.
Rushda Khanzada, whose extra efforts were integral to the project's success.
My friends played a crucial role, offering collaborative efforts and steadfast
support.
Heartfelt appreciation goes to my family, the constant guiding force during
moments of impatience. Their unwavering support ensured my perseverance.
I express deep gratitude to all who, directly or indirectly, contributed to the
realisation of this project. Each individual played a significant role, and I am
truly thankful for their support.
TABLE OF CONTENTS

Sr. No.  Content                                   Pages
         Project Proposal                          6-14
         Plagiarism Report                         15-28
I        Introduction                              29-36
         (1) Abstract                              29
         (2) Description of Project                30
         (3) Limitations of the Present Project    31
         (4) Advantages                            32
         (5) Requirement Specifications            33-34
         (6) Review of Literature                  35-36
II       System Design                             37-41
         (1) Methodology                           37-38
         (2) Diagrams                              39-41
III      System Implementation                     42-65
         (1) Code Implementation                   42-56
         (2) Outputs                               57-64
IV       Results                                   65
         (1) Test Cases                            65
V        Conclusion & Future Scope                 66-67
         (1) Conclusion                            66
         (2) Future Scope                          67
VI       References                                68
VII      Glossary                                  69
VIII     Appendices                                70-71
         (1) User Guide                            70-71
A PROJECT PROPOSAL
On
Luna: A Voice Assistant

Name: Sana Shakil Chougle


Class: T.Y. B.Sc. (Computer Science)
Roll No: 24464

Submitted To
Asst. Prof. Bismah Nazim Killedar

Approved by:

Internal Guide: Bismah Killedar
Head of Department: Bismah Killedar
I/C Principal: Dr. Sajid Shaikh

Date:
ANJUMAN ISLAM JANJIRA DEGREE COLLEGE OF SCIENCE, MURUD JANJIRA
DEPARTMENT OF COMPUTER SCIENCE

Luna: A Voice Assistant


INTRODUCTION:
In today's fast-paced world, efficiency and organization are key. Voice assistants have
emerged as powerful tools to streamline daily tasks and information access. This project
proposes the development of "Luna," a user-friendly voice assistant built with Python and
the PyCharm IDE. Luna will be inspired by popular AI assistants like Jarvis, offering voice-
activated functionalities to enhance user productivity and simplify daily routines.

Luna aims to leverage the power of Python, a versatile and beginner-friendly programming language. Python's extensive libraries, such as SpeechRecognition and pyttsx3, will enable Luna to understand user voice commands and respond with text-to-speech functionality. Through PyCharm's user-friendly interface and debugging tools, Luna's development will be optimized for efficient coding and rapid prototyping.

This project will contribute to the growing field of voice assistants, providing a user-
centric and customizable solution. By harnessing the capabilities of Python and PyCharm,
Luna has the potential to become a valuable tool for personal and professional tasks,
empowering users to manage their schedules, access information, and control their digital
environment through simple voice commands or text-based instructions.

OBJECTIVES:
1.Voice Interaction:

 Implement speech recognition to allow users to interact with Luna using voice commands.
 Integrate text-to-speech capabilities to enable Luna to respond verbally.

2.Natural Language Processing:

 Use NLP techniques to understand and process user commands effectively.


 Implement a conversational model to allow for more natural interactions.

3.Task Automation:

 Enable Luna to perform basic tasks such as setting reminders, creating to-do lists, and
sending emails.
 Integrate with calendar services to manage events and appointments.

4.Information Retrieval:

 Allow Luna to fetch information from the internet, such as weather updates, news, and
general knowledge queries.
 Implement API integrations for specific services (e.g., weather API, news API).


5.Personalization:

 Enable user-specific settings and preferences for a customized experience.


 Implement user authentication to support multiple users with personalized data.

6.User Interface:

 Develop a simple and intuitive graphical user interface (GUI) for users who prefer
interacting through text.
 Implement a web-based or mobile app interface for easy access.

7.Extensibility:

 Design Luna to be easily extendable, allowing for the addition of new features and
functionalities in the future.
 Provide a plugin system for developers to contribute new capabilities.
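The plugin system in objective 7 can be pictured as a simple registry: each new capability registers a handler against a spoken phrase, and Luna looks the phrase up at dispatch time. The sketch below is a hedged illustration under that assumption; the names (`plugin`, `PLUGINS`, `handle`) are invented for this example and are not Luna's actual code.

```python
# Illustrative plugin registry: developers add a capability by decorating
# a handler with the phrase that should trigger it.
PLUGINS = {}

def plugin(phrase):
    """Decorator that registers a handler function for a spoken phrase."""
    def register(func):
        PLUGINS[phrase] = func
        return func
    return register

@plugin("tell me a joke")
def joke():
    return "Why do programmers prefer dark mode? Because light attracts bugs."

def handle(phrase):
    """Run the handler registered for a phrase, or return None if unknown."""
    handler = PLUGINS.get(phrase)
    return handler() if handler else None
```

Because registration happens at import time, dropping a new module of decorated handlers into the project is enough to extend Luna without touching the dispatch logic.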

SCOPE:
1. Advanced Natural Language Processing (NLP) for understanding complex queries.

2. Voice recognition and synthesis capabilities for seamless communication.

3. Text-based interaction options for user convenience.

4. Comprehensive task management including prioritized to-do lists and reminders.

5. Real-time information access such as weather updates, news summaries, and knowledge
retrieval.

6. Entertainment features like music and video control, jokes, and trivia.

7. Productivity tools for calendar integration and email management.

8. Integration with smart home devices for control and automation.

9. Robust Python backend with Flask/Django, optional GUI, and API integration.

10. Strong security measures including user authentication and data protection.

METHODOLOGY:
1.Requirement Gathering and Analysis:


 Define Luna's scope and features based on extensive user feedback and needs analysis.

 Research existing voice assistants to identify successful functionalities and potential improvements.

 Develop detailed user personas and scenarios to anticipate usage patterns and
requirements effectively.

2.Design and Architecture:

 Decompose Luna's functionalities into modular components to ensure flexibility and maintainability.

 Choose Python libraries and frameworks such as NLTK for NLP and Flask/Django for
backend development.

 Architect system components, including a scalable database schema and well-defined APIs.

 Design intuitive user interaction methods, prioritizing usability through a cohesive interface design approach.

3.Development and Implementation:

 Implement each module meticulously using Python within a robust IDE environment
like PyCharm.

 Integrate sophisticated NLP algorithms to enhance Luna's understanding and responsiveness.

 Uphold stringent coding standards to ensure high code quality, scalability, and future
extensibility.

4.Testing and Evaluation:

 Execute comprehensive unit testing and integration testing to validate each module's
functionality and interaction.

 Conduct thorough user acceptance testing to solicit user feedback and refine Luna's
features based on real-world usage scenarios.

 Address identified bugs promptly and optimize performance to enhance overall user
experience.

5.Deployment and Maintenance:

 Deploy Luna seamlessly on multiple platforms, leveraging deployment tools and best
practices.


 Monitor Luna's performance continuously, applying optimizations to maintain peak efficiency and reliability.

 Provide ongoing maintenance and dedicated support to address user inquiries and ensure
sustained operational excellence.

TOOLS AND TECHNOLOGIES:


1.Programming Language:

 Python: Python is a widely used, high-level programming language known for its
readability, extensive libraries, and large developer community. Its easy-to-learn syntax
makes it ideal for rapid prototyping and project development.

2.Development Environment:

 PyCharm: PyCharm is a powerful Integrated Development Environment (IDE) specifically designed for Python development. It provides features like code completion, debugging tools, project management, and version control integration (if needed). These features will significantly improve development efficiency and code quality.

3.Core Libraries:

 datetime: This built-in Python library provides functions for date and time manipulation,
crucial for handling schedules, reminders, and deadlines.

 os (Optional): The os module allows interaction with your operating system, potentially
helpful for tasks like opening files or controlling desktop settings (consider if needed for
specific features).

 json (Optional): This library facilitates data storage and retrieval in JSON format,
potentially useful for storing user preferences or task data (if applicable).

4.Advanced Libraries (Consider based on Features):

 Natural Language Processing (NLP): NLTK is a powerful NLP library that offers tools
for text processing, tokenization, part-of-speech tagging, and sentiment analysis. It can
help Luna understand user commands and respond accordingly (useful for complex text
interaction).

 Speech Recognition and Synthesis (Optional):

 SpeechRecognition: This library enables speech-to-text functionality, allowing users to interact with Luna through voice commands.

 pyttsx3: This module provides text-to-speech capabilities, enabling Luna to respond verbally (consider for a more immersive experience).

 Graphical User Interface (GUI) (Optional):


 PySimpleGUI: This library simplifies GUI development in Python, allowing you to create a user-friendly interface for Luna if you prefer a visual interaction method.

TIMELINE:
1.Planning and Requirement Analysis(1-2 Weeks)

 Define Luna's core functionalities and features based on user needs and research.

 Conduct interviews or surveys to gather user requirements.

 Research existing voice assistants like Jarvis and Siri for insights and inspiration.

 Identify potential technical and logistical challenges.

2.Design and Architecture(3-4 Weeks)

 Break down Luna's functionalities into modular components and subsystems.

 Choose appropriate Python libraries and frameworks (e.g., NLTK for NLP, Flask/Django
for backend).

 Design the overall system architecture, including database schema (if applicable) and API
design.

 Plan user interaction flows, considering text-based commands, voice commands (optional), and GUI design.

3.Development and Implementation(5-8 Weeks)

 Begin coding each module and subsystem in Python using an IDE (e.g., PyCharm).

 Implement core functionalities such as natural language processing (NLP), task management, and information retrieval.

 Integrate modules to ensure seamless communication and data flow between components.

 Implement error handling and logging mechanisms to enhance stability and usability.

4.Testing and Evaluation(9-10 Weeks)

 Conduct unit testing for individual modules to verify functionality and correctness.

 Perform integration testing to ensure modules work together as expected.

 Engage in user acceptance testing (UAT) to gather feedback on usability, performance, and user satisfaction.

 Address any bugs or issues identified during testing and refine functionalities as needed.


5.Deployment (11-12 Weeks)

 Explore deployment options, such as deploying Luna on desktop platforms or as a web application.

 Prepare deployment scripts and procedures for smooth deployment.

 Ensure scalability, security measures, and performance optimization during deployment.

RESOURCES:
Personnel:

Sana Chougle serves as the Project Manager, Lead Developer, UI/UX Designer, and Quality Assurance Tester for the "Luna: A Voice Assistant in Python" project. As Project Manager, Sana oversees project coordination, resource management, and team leadership. As Lead Developer, she handles all technical aspects of coding and development. As UI/UX Designer, she ensures the project's interface and user experience are intuitive and visually appealing. As Quality Assurance Tester, she ensures the project meets high standards of quality and functionality.

Equipment:

 Laptops: Utilized for development, design tasks, and testing throughout the project.

 Servers: Employed for hosting and testing purposes to facilitate deployment and
performance testing.

 Microphone and Speaker: Essential for voice interaction functionalities of Luna, enabling users to interact via voice commands and receive audio responses.

Software Licenses:

 Python: Utilized under the Python Software Foundation License, providing a robust
framework for development.

 Flask and Django: These web frameworks are utilized under open-source licenses for
backend development.

 NLTK: Utilized under the Apache License 2.0 for natural language processing tasks.

 PyCharm: Used under a proprietary license for integrated development environment (IDE) purposes.


 SQLite: Utilized under a public domain license for local database management.

Facilities:

 Home Office: Serves as the primary workspace for daily project tasks and meetings.

 Computer Lab: Used for occasional collaborative meetings and discussions.

Responsibilities:

 Project Manager (Sana Chougle): Responsible for overall project coordination, resource management, and team leadership for the "Luna: A Voice Assistant in Python" project.

EXPECTED OUTCOMES:
1.Functional Voice Assistant:

o Core Functionalities: Luna will successfully perform core tasks such as scheduling,
information retrieval, and task management.

o Natural Language Processing (NLP): Accurate processing of user commands and queries using NLTK or similar libraries.

o Voice Interaction (Optional): Ability to interact via voice commands with microphone
and speaker integration.

o User Interface: Intuitive and user-friendly interface for seamless interaction, potentially
including both text-based and graphical interfaces.

2.Reliable Performance:

o Stability: Luna operates reliably without frequent crashes or errors.

o Speed: Responses are prompt and efficient, enhancing user experience.

3.Quality Assurance:

o Bug-Free Operation: Thorough testing ensures minimal bugs and issues.

o User Acceptance: Positive feedback from user testing and acceptance criteria.

4.Documentation and Support:

o Comprehensive Documentation: Well-documented codebase and user guides for easy maintenance and understanding.

o User Support: Provision of support channels for user inquiries and assistance.


5.User Satisfaction:

o Effectiveness: Luna meets user expectations for functionality and performance.


o Utility: Adds value by enhancing productivity and convenience through effective task
management and information retrieval.

REFERENCES:
1. S. Bird, E. Klein, and E. Loper, Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit. O'Reilly Media, 2009.

2. M. Grinberg, Flask Web Development: Developing Web Applications with Python. O'Reilly Media, 2018.

3. A. Holovaty and J. Kaplan-Moss, The Definitive Guide to Django: Web Development Done Right. Apress, 2009.

4. D. Das, NLTK Essentials. Packt Publishing, 2015.

5. X. Huang, A. Acero, and H.-W. Hon, Spoken Language Processing: A Guide to Theory, Algorithm, and System Development. Prentice Hall, 2001.

6. D. Klatt, "Review of text-to-speech conversion for English," Journal of the Acoustical Society of America, vol. 82, no. 3, pp. 737-793, 1987.

7. J. Preece, Y. Rogers, and H. Sharp, Interaction Design: Beyond Human-Computer Interaction. John Wiley & Sons, 2015.

8. J. A. Owens and J. D. Owens, SQLite: C Programming: Professional Made Easy. CreateSpace Independent Publishing Platform, 2006.

9. W. McKinney, Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython. O'Reilly Media, 2017.

10. J. VanderPlas, Python Data Science Handbook: Essential Tools for Working with Data. O'Reilly Media, 2016.

PLAGIARISM SCAN REPORT

Date: 2024-09-23
Plagiarised: 0% | Unique: 100%
Words: 860 | Characters: 6260

Content Checked For Plagiarism

I-INTRODUCTION
(1) ABSTRACT
"Luna: Voice Assistant" is a desktop application designed to make everyday computer interactions more intuitive and efficient. The project utilizes speech recognition technology to execute predefined
commands, allowing users to control their computer through voice input. Unlike more
complex voice assistants that rely heavily on natural language processing (NLP), Luna
focuses on simplicity and reliability, interpreting specific commands to perform tasks like
web browsing, media control, setting alarms, and capturing screenshots.
The project leverages Python-based libraries such as Pyttsx3 for text-to-speech conversion
and Google's Speech
Recognition API for processing voice inputs. By mapping distinct voice commands to
particular actions, Luna ensures quick and accurate execution of tasks. This design approach
caters to users who prefer straightforward, hands-free interaction
with their devices without the complexity of understanding conversational language.
The project report details the implementation process, system requirements, and the various
advantages and limitations of Luna. It also provides insight into potential areas for future
development, such as improving command flexibility and expanding functionality. Luna
represents a practical tool for enhancing productivity and accessibility, particularly for users
who may find traditional input methods challenging or inconvenient. The report aims to
provide a comprehensive overview of the project's goals, methods, and outcomes, offering
a clear understanding of Luna's capabilities and its potential impact on everyday computer
use.
Keywords: Luna: Voice Assistant, Speech recognition, Predefined commands, Natural
Language Processing (NLP), Python-based libraries, Pyttsx3, Text-to-speech conversion,
Google Speech Recognition API, Hands-free interaction.

(2) DESCRIPTION OF PROJECT


"Luna: Voice Assistant" is a desktop-based application developed to enable hands-free
interaction with a computer
through the use of voice commands. The project is designed with a focus on simplicity,
allowing users to control various computer functions using a predefined set of commands rather than relying on complex natural language processing. The primary goal of Luna is to
provide a user-friendly experience that makes everyday tasks more accessible and efficient.
The application operates by listening for specific voice inputs and mapping these inputs to
predefined actions. For example, a user can instruct Luna to perform tasks like searching
the web, controlling media playback, setting an alarm, or taking a screenshot. These
commands are processed using the SpeechRecognition library, which interfaces with
Google's Speech Recognition API to convert spoken words into text. Once the input is
recognized, Luna uses Python's standard libraries to execute the corresponding action.
Luna's text-to-speech functionality is powered by the Pyttsx3 library, which allows the
assistant to provide audible feedback to the user. This interaction loop enhances the user's
experience by making the communication process more dynamic and engaging. Additional
libraries, such as PyAutoGUI and OpenCV, are integrated into the system to handle specific
tasks like screen capture and webcam interaction.
The project is designed with modularity in mind, making it easy to expand and customize.
Developers can add new commands or modify existing ones by updating the command-
processing logic. This flexibility allows Luna to adapt to various use cases and user
preferences. While the current implementation focuses on a desktop environment, the
underlying architecture can be extended to support other platforms, such as mobile devices,
in future iterations.
Overall, Luna aims to enhance productivity by providing a hands-free alternative to
traditional input methods. The application is particularly useful in scenarios where manual
interaction with the computer is inconvenient or impractical, making it a valuable tool for a
wide range of users.
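The command mapping described above can be pictured as a lookup table from recognized phrases to handler functions, with a graceful fallback for unknown input. Everything below is an illustrative assumption rather than Luna's actual implementation; the handlers and phrases are invented for the example.

```python
# Hypothetical sketch of Luna's predefined-command table: each recognized
# phrase maps to a small handler, and unknown phrases get a fallback reply.
import webbrowser
from datetime import datetime

def current_time():
    return datetime.now().strftime("The time is %H:%M")

def open_google():
    webbrowser.open("https://www.google.com")   # hands control to the default browser
    return "Opening Google"

COMMANDS = {
    "what is the time": current_time,
    "open google": open_google,
}

def dispatch(spoken):
    """Look up a recognized phrase and run its action, if any."""
    action = COMMANDS.get(spoken.lower().strip())
    if action is None:
        return "Sorry, I don't know that command."  # graceful fallback
    return action()
```

Adding a capability means adding one entry to `COMMANDS`, which is what makes the predefined-command approach simple to extend even without NLP.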

(3) LIMITATIONS OF THE PRESENT PROJECT


While "Luna: Voice Assistant" offers significant benefits, several limitations impact its
current functionality. One of the primary limitations is the reliance on predefined
commands, which restricts the flexibility and naturalness of user
interactions. Unlike more sophisticated voice assistants that utilize natural language
processing (NLP) to understand and respond to conversational language, Luna requires
users to remember and use specific phrases. This can make the
interaction less intuitive, especially for users who are accustomed to the more flexible input
methods of modern voice
assistants.
Another limitation is Luna's dependence on external services for speech recognition,
particularly Google's Speech Recognition API. This reliance introduces a need for constant
internet connectivity, which can be a significant drawback in environments with unstable or
unavailable network access. Moreover, the use of third-party APIs raises potential privacy
and security concerns, as voice data must be transmitted to external servers for processing.
The application also lacks the ability to retain context across interactions. Each command
is processed independently, without any understanding of previous inputs or the broader
context of the conversation. This limitation makes it challenging to implement complex,
multi-step commands or provide a more conversational user experience. Users must re-state
commands for each task, which can be cumbersome and reduce the efficiency of the
assistant.
Error handling within Luna is another area that requires improvement. The current system
may not provide adequate feedback when it encounters unrecognized commands or processing errors, leading to user frustration. Enhancing the application's ability to handle
errors gracefully and offer helpful suggestions could significantly improve the overall user
experience.
Finally, Luna is primarily designed for desktop use and may not be optimized for other
platforms, such as mobile devices or smart home systems. This limitation restricts the
assistant's accessibility and utility in an increasingly interconnected digital landscape.
Expanding Luna's compatibility to support a wider range of devices would make the
application more versatile and appealing to a broader audience.

Matched Source
No plagiarism found

PLAGIARISM SCAN REPORT

Date: 2024-09-23
Plagiarised: 0% | Unique: 100%
Words: 872 | Characters: 6330

Content Checked For Plagiarism

(4) ADVANTAGES
"Luna: Voice Assistant" offers several key advantages that make it an attractive solution for
enhancing daily computer interactions. One of the most significant advantages is its
simplicity and ease of use. Unlike voice assistants that require complex setup and extensive
training to understand natural language, Luna operates on a set of predefined commands.
This approach ensures that users can quickly learn how to interact with the assistant, making
it accessible to a wide range of users, including those who may not be technologically savvy.
The use of reliable and well-supported Python libraries such as Pyttsx3 and Google's Speech
Recognition API contributes to the stability and performance of Luna. These technologies
provide accurate speech recognition and responsive text-to-speech conversion, resulting in
a smooth and engaging user experience. The assistant can process voice commands in real-
time, allowing users to perform tasks efficiently without the need for manual input.
Luna's modular design is another major advantage, as it allows for easy customization and
expansion. Developers can add new features or modify existing ones without significantly
altering the underlying architecture. This flexibility makes Luna adaptable to various user
needs and preferences, ensuring that the assistant can evolve over time to incorporate new
functionalities and improvements.
The project also enhances productivity by enabling hands-free control of common computer
tasks. Whether it's browsing the web, managing media, setting alarms, or taking screenshots,
Luna allows users to perform these actions through simple voice commands. This hands-
free capability is particularly beneficial in situations where manual interaction with the
computer is inconvenient or impossible, such as when cooking, driving, or working in
environments where hands are occupied.
Additionally, Luna serves as an excellent educational tool for those interested in learning
about voice-controlled applications. The project demonstrates how to integrate various
Python libraries to create a functional voice assistant, providing valuable insights into the
development process. This makes Luna not only a practical application but also a learning
platform for students and developers.
Finally, Luna's lightweight design ensures that it can run on a wide range of hardware
configurations without requiring significant system resources. This makes it accessible to
users with standard computing setups, allowing them to benefit from voice assistant
technology without needing high-end hardware.

(5) REQUIREMENT SPECIFICATION
The development and deployment of "Luna: Voice Assistant" require specific hardware and
software components to ensure smooth operation and an optimal user experience.
• Software Requirement
Operating System.
Python Environment.
• Hardware Requirement

Computer System.
Microphone.
Speakers or Headphones.
Internet Connection.
Software Requirements
Operating System: Luna is compatible with major operating systems, including Windows,
macOS, and Linux. The system must support the installation of Python and its dependencies.
Python Environment: The project is developed using Python 3.x. It is recommended to use
a virtual environment or a Python distribution such as Anaconda to manage dependencies
and ensure that the correct versions of required libraries are installed.
Python Libraries and Dependencies:
▪ Pyttsx3: This library is used for offline text-to-speech conversion. It allows Luna to
generate speech output from text without needing an internet connection.
▪ SpeechRecognition: This library is essential for converting spoken words into text. It
interfaces with various speech
recognition engines, including Google's Speech Recognition API, to process voice inputs.
▪ Pyaudio: Required for accessing the system's microphone input, this library is a
dependency for the SpeechRecognition module.
▪ Requests: Used to make HTTP requests for web-based operations, such as fetching weather
data or performing online
searches.
▪ Webbrowser: This standard Python module is used to open and control web browsers,
allowing Luna to execute search queries and navigate websites.
▪ OS and Sys Modules: These standard libraries are used for interacting with the operating
system, enabling Luna to perform tasks like file management and executing system
commands.
▪ OpenCV (cv2): This library is used for accessing and controlling the webcam, allowing
Luna to capture photos and perform image-related tasks.
▪ PyAutoGUI: This module enables Luna to control the mouse and keyboard
programmatically, which is useful for tasks
like taking screenshots and controlling media playback.
▪ Pygame: This library is used for playing audio files, such as alarms and notifications.
▪ Threading: This module is used to run multiple operations concurrently, improving the
application's performance and responsiveness.
▪ BeautifulSoup: Used for parsing HTML and extracting information from web pages, this library is useful for implementing web scraping features when necessary.
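As an example of how one of the libraries listed above is typically used, the sketch below takes a screenshot via PyAutoGUI under a timestamped filename so repeated captures never overwrite each other. The filename helper is an assumption for illustration, not code from the report, and the PyAutoGUI import is deferred so the pure filename logic stands alone.

```python
# Illustrative sketch (not Luna's actual code): save a full-screen capture
# under a timestamped name like 'screenshot_20240923_101500.png'.
from datetime import datetime

def screenshot_name(now=None):
    """Build a unique, timestamped screenshot filename."""
    stamp = (now if now is not None else datetime.now()).strftime("%Y%m%d_%H%M%S")
    return f"screenshot_{stamp}.png"

def take_screenshot():
    import pyautogui                   # third-party: pip install pyautogui
    path = screenshot_name()
    pyautogui.screenshot(path)         # captures the full screen and writes it to disk
    return path
```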
Hardware Requirements:
• Computer System: Luna is designed to run on a standard desktop or laptop computer. A
modern processor, such as an Intel i5 or equivalent, is recommended, along with at least
4GB of RAM. This ensures that the system can handle multiple processes simultaneously
without lag.
• Microphone: A high-quality microphone is essential for accurate voice input. The
microphone should be capable of capturing clear audio with minimal background noise to
improve the accuracy of speech recognition.
• Speakers or Headphones: For the text-to-speech functionality, reliable audio output
devices are necessary. Luna will use these devices to provide auditory feedback and
notifications to the user.
• Internet Connection: While some features of Luna may work offline, many of the voice
recognition functionalities rely on an active internet connection. A stable internet
connection is therefore necessary for optimal performance.

Matched Source
No plagiarism found

20
PLAGIARISM SCAN REPORT

Date 2024-09-23
4% 96%
Words 998
Plagiarised Unique
Characters 7289

Content Checked For Plagiarism



V-CONCLUSION & FUTURE SCOPE

(1) FINAL CONCLUSION

The "Luna: Voice Assistant" project has effectively shown how a voice-controlled desktop
application can make daily computer tasks simpler and more efficient by using predefined
commands. Luna’s emphasis on straightforwardness ensures that users, regardless of
technical expertise, can easily engage with the assistant. By leveraging dependable Python
libraries like Pyttsx3 for speech output and Google’s Speech Recognition API for voice
input, the project achieves smooth
operation and accurate task execution.
While Luna currently doesn't feature advanced natural language processing (NLP), it
remains adaptable for future enhancements. This means Luna could eventually handle more
flexible, conversational commands and offer a richer, more interactive experience. The
project serves as a practical example of how voice assistants can automate basic tasks today,
with the exciting possibility of evolving into more powerful, context-aware systems.
Through ongoing updates and the addition of advanced features, Luna has the potential to
become even more dynamic and useful in the future.

(2) FUTURE SCOPE


While Luna is already a functional and efficient voice assistant, there are multiple
opportunities for future improvements. One of the most promising advancements would be
the integration of natural language processing (NLP). By incorporating NLP, Luna would
be able to interpret more sophisticated and conversational commands, vastly enhancing the
user experience. Instead of being limited to specific, predefined phrases, users could
communicate in a more natural manner, and Luna could respond appropriately. This would
make the assistant far more intuitive and versatile in everyday interactions.

Additionally, improving the system’s error-handling features is another key area of
development. By creating more advanced feedback mechanisms, Luna could offer helpful
suggestions when an issue arises or when it cannot understand a command. This would not
only reduce user frustration but also make the assistant more user-friendly. Moreover,
integrating machine learning could allow Luna to adapt to individual users’ behaviors over
time. Through this, the assistant could begin to provide more personalized responses, learn
user preferences, and even predict commands based on past
interactions, making Luna feel more tailored to each user.

There’s also great potential in expanding Luna’s compatibility beyond just desktop
environments. By making the assistant compatible with mobile devices and smart home
systems, Luna’s utility would expand significantly. This would allow users to control a
wider variety of devices using voice commands, making Luna a more versatile and powerful
tool in daily life. Lastly, privacy and security should be a priority moving forward. A
potential improvement would involve processing voice data locally, which would eliminate
the need to rely on external services like Google’s Speech Recognition API. This step would
help address user concerns over data privacy and enhance the overall security of Luna’s
operations.

These future enhancements could position Luna as an even more intelligent, adaptive, and
secure voice assistant, offering users greater functionality and a more seamless experience
across multiple devices.

VI-REFERENCES

[1] B. H. Juang and L. R. Rabiner, "A brief history of automatic speech recognition technology development," Elsevier, pp. 1–5, 2005.

[2] J. Baker, X. Huang, and R. Reddy, "Advances in speech recognition technologies," IEEE Signal Processing Magazine, vol. 52, no. 6, pp. 1–18, 2009.

[3] S. Bird, E. Klein, and E. Loper, Practical Applications of Natural Language Processing Using Python, O'Reilly Media, 2009.

[4] M. Smith, "Overview of the SpeechRecognition Python library: Key features and applications," Real Python. Available at: https://realpython.com, 2021.

[5] M. McTear, Developing Conversational AI Systems: Dialogue Agents and Chatbots, Morgan & Claypool, 2016.

[6] K. Shinohara and J. O. Wobbrock, "Assistive technology use and social interaction challenges," ACM Trans. Accessible Comput., vol. 8, no. 2, pp. 1–30, 2016.

[7] E. Luger and A. Sellen, "User expectations vs. reality in conversational agents," in Proc. ACM Int. Conf. Human Factors in Computing Systems (CHI), pp. 5286–5297, 2016.

[8] P. Sundar and N. Ewen, "The future outlook for voice assistants in AI-driven environments," Forbes Tech Council. Available at: https://www.forbes.com, 2019.

[9] M. Grinberg, Developing Web Applications Using Flask and Python: A Programmer's Guide, O'Reilly Media, 2018.

[10] X. Huang, A. Acero, and H. W. Hon, Spoken Language Processing: Foundations and Methodologies, Prentice Hall, 2001.

[11] D. Klatt, "Text-to-speech systems overview: Historical development and current trends," J. Acoust. Soc. Am., vol. 82, no. 3, pp. 737–793, 1987.

[12] J. Preece, Y. Rogers, and H. Sharp, Human-Computer Interaction and User Interface Design, John Wiley & Sons, 2015.

[13] P. Norvig, Modern Approaches in Artificial Intelligence: A Textbook Overview, 3rd ed., Prentice Hall, 2010.

[14] T. Mitchell, An Introduction to Machine Learning: Concepts and Techniques, McGraw-Hill, 1997.


VIII-APPENDICES

(1) USER GUIDE

Getting Started


1. Launch Luna: Double-click the Luna script file in your PyCharm project directory.
2. Introduction: Luna will play a short introductory animation and greet you.

Using Luna
 Basic Interactions:
o Say "hello" to greet Luna.
o Ask "how are you" to hear Luna's response.
o Say "thank you" to express gratitude.
 Information Retrieval:
o Ask "what's the time" to get the current time.
o Say "google" followed by your search query (e.g., "google weather today") to search the
web using Google.
o Ask "what's the temperature" to get the current temperature in Murud Janjira (requires
internet connection).
o Say "wikipedia" followed by your search term (e.g., "wikipedia Albert Einstein") to
search Wikipedia.
 Entertainment:
o Say "play music" to play a random song from a specified music directory (configure path
within the script).
o Say "play something" or "play on youtube" followed by your search terms to play a video
on YouTube (requires internet connection).
o Ask "play video" or "pause video" to control video playback (works on most online
platforms).
 System Control:
o Say "take a screenshot" to capture a screenshot of your screen.
o Ask "volume up" or "volume down" to adjust system volume.
o Say "mute" to mute/unmute audio in the current program.
o Use commands like "full screen on/off," "next video," or "previous video" to control
media playback (works on most online platforms).
 Other Features:
o Say "set an alarm" followed by the desired time (e.g., "set an alarm for 10:30 AM") to set an alarm.
o Say "click a photo" or "take a photo" to capture an image using your webcam.
o Say "remember that" followed by a message to create a reminder.
o Ask "reminders" or "reminder for me" to view your saved reminders.
o Say "calculate" followed by a mathematical expression to perform calculations (e.g.,
"calculate 2 + 3").

Sleep Mode and Wake Up


 Entering Sleep Mode: To put Luna into sleep mode, say "go to sleep." She will respond
with "Ok, You can call me anytime" and enter a listening state.
 Waking Luna Up: To wake Luna up from sleep mode, say "wake up." She will respond
with "I'm awake!" and return to her active state.
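The sleep-mode behaviour above can be modelled as a small state machine. The sketch below is illustrative only; Luna's actual script may structure this logic differently, and the `luna_states` name and the "(handling: ...)" placeholder are assumptions:

```python
def luna_states():
    """Generator modelling Luna's awake/asleep states: send a recognized
    phrase in, receive Luna's reply (None means the phrase was ignored
    while asleep)."""
    awake = True
    reply = None
    while True:
        phrase = (yield reply).lower()
        if awake and "go to sleep" in phrase:
            awake, reply = False, "Ok, You can call me anytime"
        elif not awake and "wake up" in phrase:
            awake, reply = True, "I'm awake!"
        elif awake:
            reply = "(handling: %s)" % phrase  # normal command dispatch
        else:
            reply = None  # asleep: ignore everything except "wake up"

luna = luna_states()
next(luna)  # prime the generator
print(luna.send("go to sleep"))      # Ok, You can call me anytime
print(luna.send("what's the time"))  # None (asleep, input ignored)
print(luna.send("wake up"))          # I'm awake!
```

Keeping the awake/asleep flag in one place like this ensures that, while asleep, no other command can fire accidentally.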

I-INTRODUCTION
(1) ABSTRACT
Luna: Voice Assistant is a desktop application designed to make everyday computer
interactions more intuitive and efficient. The project utilizes speech recognition technology
to execute predefined commands, allowing users to control their computer through voice
input. Unlike more complex voice assistants that rely heavily on natural language processing
(NLP), Luna focuses on simplicity and reliability, interpreting specific commands to
perform tasks like web browsing, media control, setting alarms, and capturing screenshots.
The project leverages Python-based libraries such as Pyttsx3 for text-to-speech
conversion and Google's Speech Recognition API for processing voice inputs. By mapping
distinct voice commands to particular actions, Luna ensures quick and accurate execution
of tasks. This design approach caters to users who prefer straightforward, hands-free
interaction with their devices without the complexity of understanding conversational
language.
The project report details the implementation process, system requirements, and the
various advantages and limitations of Luna. It also provides insight into potential areas for
future development, such as improving command flexibility and expanding functionality.
Luna represents a practical tool for enhancing productivity and accessibility, particularly
for users who may find traditional input methods challenging or inconvenient. The report
aims to provide a comprehensive overview of the project's goals, methods, and outcomes,
offering a clear understanding of Luna's capabilities and its potential impact on everyday
computer use.
Keywords: Luna: Voice Assistant, Speech recognition, Predefined commands, Natural
Language Processing (NLP), Python-based libraries, Pyttsx3, Text-to-speech conversion,
Google Speech Recognition API, Hands-free interaction.

(2) DESCRIPTION OF PROJECT

"Luna: Voice Assistant" is a desktop-based application developed to enable hands-free interaction with a computer through the use of voice commands. The project is designed
with a focus on simplicity, allowing users to control various computer functions using a
predefined set of commands rather than relying on complex natural language processing.
The primary goal of Luna is to provide a user-friendly experience that makes everyday tasks
more accessible and efficient.
The application operates by listening for specific voice inputs and mapping these inputs
to predefined actions. For example, a user can instruct Luna to perform tasks like searching
the web, controlling media playback, setting an alarm, or taking a screenshot. These
commands are processed using the SpeechRecognition library, which interfaces with
Google's Speech Recognition API to convert spoken words into text. Once the input is
recognized, Luna uses Python's standard libraries to execute the corresponding action.
Luna's text-to-speech functionality is powered by the Pyttsx3 library, which allows the
assistant to provide audible feedback to the user. This interaction loop enhances the user's
experience by making the communication process more dynamic and engaging. Additional
libraries, such as PyAutoGUI and OpenCV, are integrated into the system to handle specific
tasks like screen capture and webcam interaction.
The project is designed with modularity in mind, making it easy to expand and
customize. Developers can add new commands or modify existing ones by updating the
command-processing logic. This flexibility allows Luna to adapt to various use cases and
user preferences. While the current implementation focuses on a desktop environment, the
underlying architecture can be extended to support other platforms, such as mobile devices,
in future iterations.
Overall, Luna aims to enhance productivity by providing a hands-free alternative to
traditional input methods. The application is particularly useful in scenarios where manual
interaction with the computer is inconvenient or impractical, making it a valuable tool for
a wide range of users.
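The listen, recognize, and execute loop described above can be sketched as follows. This is an illustrative outline rather than Luna's actual source; the function names and the small subset of commands shown are assumptions:

```python
import datetime

def handle_command(text):
    """Map a recognized phrase to a spoken reply (illustrative subset of commands)."""
    text = text.lower()
    if "hello" in text:
        return "Hello! How can I help you?"
    if "time" in text:
        return "The time is " + datetime.datetime.now().strftime("%I:%M %p")
    if text.startswith("google "):
        # The full assistant would also open the browser, e.g. via webbrowser.open()
        return "Searching Google for " + text[len("google "):]
    return "Sorry, I don't know that command."

def listen_loop():
    """Hardware-dependent loop: needs the speech_recognition and pyttsx3
    packages, a microphone, and (for recognize_google) an internet connection."""
    import speech_recognition as sr
    import pyttsx3

    engine = pyttsx3.init()
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        while True:
            audio = recognizer.listen(source)
            try:
                spoken = recognizer.recognize_google(audio)
            except sr.UnknownValueError:
                continue  # unintelligible speech: keep listening
            engine.say(handle_command(spoken))
            engine.runAndWait()

# listen_loop()  # uncomment to start listening
```

Because each phrase maps directly to one action, extending the assistant is a matter of adding one more branch (or table entry) to the command handler.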

(3) LIMITATION OF PRESENT PROJECT
While "Luna: Voice Assistant" offers significant benefits, several limitations impact its
current functionality. One of the primary limitations is the reliance on predefined
commands, which restricts the flexibility and naturalness of user interactions. Unlike more
sophisticated voice assistants that utilize natural language processing (NLP) to understand
and respond to conversational language, Luna requires users to remember and use specific
phrases. This can make the interaction less intuitive, especially for users who are
accustomed to the more flexible input methods of modern voice assistants.
Another limitation is Luna's dependence on external services for speech recognition,
particularly Google's Speech Recognition API. This reliance introduces a need for constant
internet connectivity, which can be a significant drawback in environments with unstable
or unavailable network access. Moreover, the use of third-party APIs raises potential
privacy and security concerns, as voice data must be transmitted to external servers for
processing.
The application also lacks the ability to retain context across interactions. Each
command is processed independently, without any understanding of previous inputs or the
broader context of the conversation. This limitation makes it challenging to implement
complex, multi-step commands or provide a more conversational user experience. Users
must re-state commands for each task, which can be cumbersome and reduce the efficiency
of the assistant.
Error handling within Luna is another area that requires improvement. The current
system may not provide adequate feedback when it encounters unrecognized commands or
processing errors, leading to user frustration. Enhancing the application's ability to handle
errors gracefully and offer helpful suggestions could significantly improve the overall user
experience.
Finally, Luna is primarily designed for desktop use and may not be optimized for other
platforms, such as mobile devices or smart home systems. This limitation restricts the
assistant's accessibility and utility in an increasingly interconnected digital landscape.
Expanding Luna's compatibility to support a wider range of devices would make the
application more versatile and appealing to a broader audience.

(4) ADVANTAGES
"Luna: Voice Assistant" offers several key advantages that make it an attractive solution for
enhancing daily computer interactions. One of the most significant advantages is its simplicity and
ease of use. Unlike voice assistants that require complex setup and extensive training to understand
natural language, Luna operates on a set of predefined commands. This approach ensures that users
can quickly learn how to interact with the assistant, making it accessible to a wide range of users,
including those who may not be technologically savvy.

The use of reliable and well-supported Python libraries such as Pyttsx3 and Google's Speech
Recognition API contributes to the stability and performance of Luna. These technologies provide
accurate speech recognition and responsive text-to-speech conversion, resulting in a smooth and
engaging user experience. The assistant can process voice commands in real-time, allowing users
to perform tasks efficiently without the need for manual input.

Luna's modular design is another major advantage, as it allows for easy customization and
expansion. Developers can add new features or modify existing ones without significantly altering
the underlying architecture. This flexibility makes Luna adaptable to various user needs and
preferences, ensuring that the assistant can evolve over time to incorporate new functionalities and
improvements.

The project also enhances productivity by enabling hands-free control of common computer
tasks. Whether it's browsing the web, managing media, setting alarms, or taking screenshots, Luna
allows users to perform these actions through simple voice commands. This hands-free capability
is particularly beneficial in situations where manual interaction with the computer is inconvenient
or impossible, such as when cooking, driving, or working in environments where hands are
occupied.

Additionally, Luna serves as an excellent educational tool for those interested in learning about
voice-controlled applications. The project demonstrates how to integrate various Python libraries to
create a functional voice assistant, providing valuable insights into the development process. This
makes Luna not only a practical application but also a learning platform for students and developers.

Finally, Luna's lightweight design ensures that it can run on a wide range of hardware
configurations without requiring significant system resources. This makes it accessible to users with
standard computing setups, allowing them to benefit from voice assistant technology without
needing high-end hardware.

(5) REQUIREMENT SPECIFICATION
The development and deployment of "Luna: Voice Assistant" require specific hardware and
software components to ensure smooth operation and an optimal user experience.
Software Requirements
Operating System.
Python Environment.
Hardware Requirements
Computer System.
Microphone.
Speakers or Headphones.
Internet Connection.

Software Requirements
 Operating System: Luna is compatible with major operating systems, including
Windows, macOS, and Linux. The system must support the installation of Python and its
dependencies.
 Python Environment: The project is developed using Python 3.x. It is recommended
to use a virtual environment or a Python distribution such as Anaconda to manage
dependencies and ensure that the correct versions of required libraries are installed.
 Python Libraries and Dependencies:
▪ Pyttsx3: This library is used for offline text-to-speech conversion. It allows Luna to
generate speech output from text without needing an internet connection.
▪ SpeechRecognition: This library is essential for converting spoken words into text. It
interfaces with various speech recognition engines, including Google's Speech Recognition
API, to process voice inputs.
▪ PyAudio: Required for accessing the system's microphone input, this library is a dependency for the SpeechRecognition module.
▪ Requests: Used to make HTTP requests for web-based operations, such as fetching
weather data or performing online searches.
▪ Webbrowser: This standard Python module is used to open and control web browsers,
allowing Luna to execute search queries and navigate websites.
▪ OS and Sys Modules: These standard libraries are used for interacting with the operating
system, enabling Luna to perform tasks like file management and executing system
commands.
▪ OpenCV (cv2): This library is used for accessing and controlling the webcam, allowing
Luna to capture photos and perform image-related tasks.
▪ PyAutoGUI: This module enables Luna to control the mouse and keyboard
programmatically, which is useful for tasks like taking screenshots and controlling media
playback.
▪ Pygame: This library is used for playing audio files, such as alarms and notifications.
▪ Threading: This module is used to run multiple operations concurrently, improving the
application's performance and responsiveness.
▪ BeautifulSoup: Used for parsing HTML and extracting information from web pages, this
library is useful for implementing web scraping features when necessary.
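As an illustration of how the Threading module listed above supports concurrent tasks such as alarms, consider the following sketch. It is hypothetical rather than Luna's actual alarm code, and a real alarm would play a sound (e.g. via Pygame) instead of appending to a list:

```python
import threading

def set_alarm(delay_seconds, on_ring):
    """Schedule on_ring to fire after delay_seconds on a background thread."""
    timer = threading.Timer(delay_seconds, on_ring)
    timer.daemon = True  # don't keep the process alive just for the alarm
    timer.start()
    return timer

# Demonstration: the main thread stays free while the timer counts down.
events = []
alarm = set_alarm(0.1, lambda: events.append("ring"))
events.append("still listening")   # the main loop keeps working meanwhile
alarm.join()                       # wait here only to show the result
print(events)                      # ['still listening', 'ring']
```

Running the alarm on its own thread is what keeps the listening loop responsive while a countdown is in progress.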

Hardware Requirements:
 Computer System: Luna is designed to run on a standard desktop or laptop computer. A
modern processor, such as an Intel i5 or equivalent, is recommended, along with at least
4GB of RAM. This ensures that the system can handle multiple processes simultaneously
without lag.
 Microphone: A high-quality microphone is essential for accurate voice input. The
microphone should be capable of capturing clear audio with minimal background noise to
improve the accuracy of speech recognition.
 Speakers or Headphones: For the text-to-speech functionality, reliable audio output
devices are necessary. Luna will use these devices to provide auditory feedback and
notifications to the user.
 Internet Connection: While some features of Luna may work offline, many of the voice
recognition functionalities rely on an active internet connection. A stable internet
connection is therefore necessary for optimal performance.

(6) REVIEW OF LITERATURE
The development and implementation of voice assistants have been a significant area
of interest in technology over the past several decades. "Luna: Voice Assistant" builds upon
this rich history, focusing on the practical use of speech recognition and text-to-speech
technology to provide an efficient and user-friendly experience.

1. Evolution of Voice Assistants

Voice recognition technology has evolved significantly since its inception in the mid-
20th century. Early systems were limited to recognizing basic commands or numerical
inputs, often requiring extensive training and specialized hardware. As computing power
and algorithmic approaches advanced, so did the capabilities of voice assistants, making
them more accurate and versatile in their applications [1].

2. Influence of Early Voice-Controlled Systems

The 1990s and early 2000s saw a significant leap in the integration of voice recognition
into consumer products. Notable developments included IBM’s ViaVoice and Microsoft’s
Speech API, which played crucial roles in establishing the feasibility of voice-controlled
computing[2]. These systems, although rudimentary by today’s standards, laid the
foundation for more sophisticated applications like Luna.

3. Open-Source Libraries and Modern Development

Recent years have seen an explosion in the availability of open-source tools and
libraries, which have democratized the development of voice assistants. Python, in
particular, has become a popular programming language for developing voice-based
applications due to its robust library support[3]. Libraries such as SpeechRecognition for
voice recognition and Pyttsx3 for text-to-speech synthesis have made it easier for
developers to create custom voice assistants without requiring deep expertise in the
underlying technologies[4].

4. Simple Command-Based Systems

While research into complex conversational AI continues, there is growing evidence that users often prefer simple, command-based systems for specific tasks. Studies suggest
that users value reliability and ease of use over the ability to engage in natural language
conversations[5]. Luna’s design aligns with this trend, prioritizing straightforward
command execution to enhance user experience.

5. Accessibility and Inclusivity

Voice assistants have increasingly been recognized for their potential to make technology
more accessible. For individuals with disabilities, voice commands offer an alternative
means of interacting with digital devices, eliminating the need for physical input[6]. Luna's
focus on predefined commands and compatibility with common hardware ensures that it is
accessible to a wide range of users, regardless of their technical abilities.

6. Real-World Implementations and Impact

The practical application of voice assistants has been explored through various case
studies and pilot projects[7]. These real-world implementations have provided valuable
insights into the effectiveness and challenges of deploying such systems. Luna, with its
emphasis on simplicity and functionality, reflects the lessons learned from these projects,
offering a tool that is both practical and easy to use.

7. Future Directions

The ongoing development of voice technology suggests that future voice assistants may
become even more integrated into everyday life. With advancements in AI and machine
learning, there is potential for voice assistants like Luna to evolve, offering more
personalized and context-aware interactions[8]. However, Luna's current focus remains on
providing a reliable, straightforward, and accessible voice assistant for a broad audience.

II-SYSTEM DESIGN
(1) METHODOLOGY

The development of "Luna: Voice Assistant" follows a structured methodology that
emphasizes modularity, ease of integration, and user-centric design. The methodology can
be broken down into several key phases: requirement analysis, design, implementation,
testing, and deployment.

Requirement Analysis

The first phase involves gathering and analyzing the requirements of the voice assistant.
This includes understanding the specific needs of the users, the environments in which the
assistant will operate, and the core functionalities it must perform. The goal here is to ensure
that Luna is not only technically sound but also meets the practical needs of its intended
audience. During this phase, extensive research is conducted to determine the most suitable
libraries, programming languages, and hardware components required for building the
system. Given that Luna is a voice assistant, the focus is on identifying the best tools for
speech recognition and text-to-speech conversion, as well as ensuring compatibility with
various devices.

Design

In the design phase, the architecture of Luna is developed. The system is designed to be
modular, with different components handling various tasks such as voice input processing,
command execution, and response generation. The design emphasizes simplicity and
efficiency, ensuring that each module can function independently while being easily
integrated into the overall system. The architecture is also designed to be scalable, allowing
for the addition of new features and capabilities in the future. User interface design is also
a critical aspect of this phase, ensuring that Luna is intuitive and easy to use.

Implementation

The implementation phase involves coding and integrating the various modules that
make up Luna. This phase leverages Python due to its extensive library support and ease of
use. Key libraries used include SpeechRecognition for processing voice inputs and Pyttsx3
for generating speech outputs. The implementation is done incrementally, starting with
basic voice recognition and gradually adding more complex functionalities. Throughout this
phase, the focus remains on writing clean, efficient code that adheres to best practices.

Testing

Testing is a critical part of the development process. During this phase, each module is
rigorously tested to ensure it functions as expected. Both unit tests and integration tests are
performed to identify and fix any issues that arise. Additionally, user testing is conducted
to gather feedback and make improvements based on real-world use cases. The goal is to
ensure that Luna is not only technically sound but also provides a seamless user experience.

Deployment

The final phase is deployment, where Luna is released for use. This phase involves
setting up the environment in which Luna will operate, whether it be a personal computer,
a smartphone, or another device. The deployment process is designed to be straightforward,
allowing users to easily install and configure Luna on their devices. Post-deployment,
continuous monitoring and updates are planned to ensure that Luna remains functional and
up-to-date with the latest advancements in voice technology.

This methodology ensures that "Luna: Voice Assistant" is developed in a systematic
and organized manner, resulting in a product that is both functional and user-friendly.

(2) DIAGRAMS
2.1 USE CASE DIAGRAM

2.2 ACTIVITY DIAGRAM

2.3 SEQUENCE DIAGRAM

III-SYSTEM IMPLEMENTATION

(1) CODE IMPLEMENTATION


luna.py
import pyttsx3
import speech_recognition as sr
import requests
import WishMe
import os
import random
import youtube
import message
import pygame
import threading
import INTRO
import cv2
import keyboard
import pyautogui
from Calculatenumbers import Calc
from datetime import datetime
from bs4 import BeautifulSoup

engine = pyttsx3.init('sapi5')
voices = engine.getProperty('voices')
engine.setProperty('voice', voices[1].id)
engine.setProperty("rate", 170)

def speak(audio):
    engine.say(audio)
    engine.runAndWait()

def take_command():
    r = sr.Recognizer()
    with sr.Microphone() as source:
        print("Adjusting for ambient noise...")
        r.adjust_for_ambient_noise(source, duration=1)
        print("Listening...")
        try:
            audio = r.listen(source, timeout=3)
            print("Recognizing...")
            try:
                query = r.recognize_google(audio, language='en-in')
                print(f"User said: {query}\n")
                return query
            except sr.UnknownValueError:
                print("Sorry, I did not understand that.")
            except sr.RequestError as e:
                print(f"Error; {e}")
        except sr.WaitTimeoutError:
            print("Timeout: No audio detected.")
        except Exception as e:
            print(f"An unexpected error occurred: {e}")
    return "None"

def checkTemperature(query):
    if "what's the temperature" in query:
        search = "temperature in murud janjira"
        url = f"https://www.google.com/search?q={search}"
        try:
            r = requests.get(url)
            data = BeautifulSoup(r.text, "html.parser")
            temp = data.find("div", class_="BNeawe").text
            speak(f"current {search} is {temp}")
        except Exception:
            speak("Sorry, I couldn't find the temperature.")

def take_screenshot():
    speak("Taking screenshot...")
    image = pyautogui.screenshot()
    image.save("ss.jpg")
    speak("Screenshot saved as ss.jpg")

def alarm(query):
    with open("Alarmtext.txt", "a") as timehere:
        timehere.write(query)
    threading.Thread(target=ring).start()

def ring():
    with open("Alarmtext.txt", "rt") as extractedtime:
        Time = extractedtime.read()
    with open("Alarmtext.txt", "r+") as deletetime:
        deletetime.truncate(0)

    timenow = Time.replace("luna", "")
    timenow = timenow.replace("set an alarm", "")
    timenow = timenow.replace(" and ", ":")
    Alarmtime = timenow.strip()
    print(Alarmtime)

    while True:
        currenttime = datetime.now().strftime("%H:%M:%S")
        if currenttime == Alarmtime:
            pygame.init()
            pygame.mixer.init()
            pygame.mixer.music.load("C:\\Users\\faizee\\Music\\1 Alarm.wav")
            pygame.mixer.music.play()
            while pygame.mixer.music.get_busy():
                pygame.time.Clock().tick(10)
            break

def click_photo():
    speak("Camera is ready. Say 'capture' to take a photo.")
    while True:
        query = take_command().lower()
        if "capture" in query:
            speak("Capturing photo...")
            camera = cv2.VideoCapture(0)
            return_value, image = camera.read()
            camera.release()
            cv2.imwrite("photo.jpg", image)
            speak("Photo saved as photo.jpg")
            break
        else:
            speak("Please say 'capture' to take a photo.")

def volumeup():
    pyautogui.press("volumeup")

def volumedown():
    pyautogui.press("volumedown")

WAKE_COMMAND = "wake up"
SLEEP_COMMAND = "go to sleep"

if __name__ == "__main__":
    INTRO.play_gif()
    WishMe.wish_me()
    while True:
        query = take_command().lower()
        if query == SLEEP_COMMAND:
            speak("Ok, you can call me anytime")
            while True:
                query = take_command().lower()
                if query == WAKE_COMMAND:
                    speak("I'm awake!")
                    WishMe.wish_me(skip_greeting=True)
                    break
                else:
                    print("I'm sleeping...")
        elif "hello" in query:
            speak("Hello, how are you?")
        elif "i am fine" in query:
            speak("That's great")
        elif "how are you" in query:
            speak("Perfect")
        elif "thank you" in query:
            speak("You are welcome")
        # Checked before the generic "youtube" search so playlist
        # requests are not swallowed by the search branch.
        elif "play something" in query or "play on youtube" in query:
            youtube.play_youtube_playlist(query)
        elif "google" in query:
            from SearchNow import searchGoogle
            searchGoogle(query)
        elif "youtube" in query:
            from SearchNow import searchYoutube
            searchYoutube(query)
        elif "wikipedia" in query:
            from SearchNow import searchWikipedia
            searchWikipedia(query)
        elif "what's the temperature" in query:
            checkTemperature(query)
        elif 'play music' in query:
            music_dir = 'C:\\Users\\faizee\\Music\\Nasheed'
            try:
                songs = os.listdir(music_dir)
                if songs:
                    random_song = random.choice(songs)
                    os.startfile(os.path.join(music_dir, random_song))
                else:
                    speak("No songs found in the directory.")
            except FileNotFoundError:
                speak("Music directory not found.")
                print("Music directory not found.")
        elif 'time' in query:
            str_time = datetime.now().strftime("%H:%M:%S")
            speak(f"the time is {str_time}")
        elif "take a screenshot" in query:
            take_screenshot()
        elif "send message" in query:
            message.sendMessage()
        elif "make call" in query:
            message.makeCall()
        elif "play video" in query or "pause video" in query:
            keyboard.press_and_release('space')  # Toggle play/pause
        elif "volume up" in query:
            volumeup()
        elif "volume down" in query:
            volumedown()
        elif "mute" in query:
            keyboard.press_and_release('m')  # Mute/unmute video
        elif "full screen on" in query:
            keyboard.press_and_release('f')  # Toggle full screen
        elif "full screen off" in query:
            keyboard.press_and_release('esc')  # Exit full screen
        elif "next video" in query:
            keyboard.press_and_release('shift+n')  # Next video
        elif "previous video" in query:
            keyboard.press_and_release('shift+p')  # Previous video
        elif "set an alarm" in query:
            print("Input time example: 10 and 10 and 10")
            speak("Set the time")
            a = input("Please tell the time: ")
            alarm(a)
            speak("Done")
        elif "play a game" in query:
            try:
                from game import game_play
                game_play()
            except ImportError:
                print("game module is missing.")
                speak("Unable to play the game.")
        elif "remember that" in query:
            rememberMessage = query.replace("remember that", "")
            rememberMessage = rememberMessage.replace("luna", "")
            speak("You told me to remember that " + rememberMessage)
            with open("Reminder.txt", "a") as remember:
                remember.write(rememberMessage + "\n")
        elif "reminders" in query or "reminder for me" in query:
            try:
                with open("Reminder.txt", "r") as remember:
                    reminders = remember.read()
                if reminders:
                    speak("You have the following reminders: " + reminders)
                else:
                    speak("You don't have any reminders.")
            except FileNotFoundError:
                speak("I don't have any memories yet!")
        elif "calculate" in query:
            query = query.replace("calculate", "")
            query = query.replace("luna", "")
            Calc(query)
        elif "click a photo" in query or "take a photo" in query:
            click_photo()
        # "exit focus mode" must be checked before "focus mode",
        # otherwise the shorter phrase matches first.
        elif "exit focus mode" in query:
            speak("Exiting focus mode...")
            os.system("taskkill /im python.exe")
        elif "focus mode" in query:
            a = int(input("Are you sure that you want to enter focus mode? [1 for YES / 2 for NO]: "))
            if a == 1:
                speak("Entering the focus mode....")
                os.startfile("C:\\Users\\faizee\\PycharmProjects\\pythonProject\\FocusMode.py")
                exit()
        elif "show my focus" in query:
            from FocusGraph import focus_graph
            focus_graph()
        else:
            speak("I didn't understand that. Please try again.")

INTRO.py
import tkinter as tk
from PIL import Image, ImageTk, ImageSequence
import time
from pygame import mixer

mixer.init()

def play_gif():
    root = tk.Tk()
    root.geometry("2160x1440")

    gif = Image.open(r"C:\Users\faizee\Pictures\Saved Pictures\Minimal Modern You Are Enough Quote Desktop Wallpaper (1).gif")
    lbl = tk.Label(root)
    lbl.place(x=0, y=0)

    mixer.music.load(r"C:\Users\faizee\Music\mixkit-doorbell-tone-2864.wav")
    mixer.music.play()

    for frame in ImageSequence.Iterator(gif):
        frame = frame.resize((2160, 1440))
        frame = ImageTk.PhotoImage(frame)
        lbl.config(image=frame)
        root.update()
        time.sleep(0.1)

    root.destroy()

WishMe.py
from datetime import datetime
import pyttsx3

engine = pyttsx3.init("sapi5")
voices = engine.getProperty("voices")
engine.setProperty("voice", voices[1].id)
engine.setProperty("rate", 300)

def speak(audio):
    engine.say(audio)
    engine.runAndWait()

def wish_me(skip_greeting=False):
    if not skip_greeting:
        hour = datetime.now().hour
        if 0 <= hour < 12:
            speak("Good Morning!")
        elif 12 <= hour < 18:
            speak("Good Afternoon!")
        else:
            speak("Good Evening!")
    speak("I am Luna")
    speak("Please tell me how may I help you")

SearchNow.py
import pyttsx3
import wikipedia
import webbrowser
import pywhatkit

engine = pyttsx3.init("sapi5")
voices = engine.getProperty("voices")
engine.setProperty("voice", voices[1].id)
engine.setProperty("rate", 170)

def speak(audio):
    engine.say(audio)
    engine.runAndWait()

def searchGoogle(query):
    if "google" in query:
        import wikipedia as googleScrap
        query = query.replace("luna", "")
        query = query.replace("google search", "")
        query = query.replace("google", "")
        speak("This is what I found on google")
        try:
            pywhatkit.search(query)
            result = googleScrap.summary(query, 1)
            speak(result)
        except Exception:
            speak("No speakable output available")

def searchYoutube(query):
    if "youtube" in query:
        query = query.replace("luna", "")
        query = query.replace("youtube search", "")
        query = query.replace("youtube", "")
        speak("This is what I found on youtube")
        try:
            chrome_path = "C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe %s"
            web = "https://www.youtube.com/results?search_query=" + query
            webbrowser.get(chrome_path).open(web, new=0)
            pywhatkit.playonyt(query)
        except Exception:
            speak("No speakable output available")

def searchWikipedia(query):
    if "wikipedia" in query:
        speak("Searching from wikipedia....")
        query = query.replace("wikipedia", "")
        query = query.replace("search wikipedia", "")
        query = query.replace("luna", "")
        results = wikipedia.summary(query, sentences=2)
        speak("According to wikipedia..")
        print(results)
        speak(results)

youtube.py
import webbrowser
import random
import pyautogui
import time
import pyttsx3
import keyboard

engine = pyttsx3.init("sapi5")
voices = engine.getProperty("voices")
engine.setProperty("voice", voices[1].id)
engine.setProperty("rate", 300)

def speak(audio):
    engine.say(audio)
    engine.runAndWait()

playlists = {
    "daily_surah": "PL6Rqj0rtLNIDEYYiE6OTdVbGvzV4ogXMn",
    "my": "PL6Rqj0rtLNIBl3rk0cSdbEPUjhTxAPcrk",
}

def play_youtube_playlist(query):
    speak("Opening YouTube...")
    if "play something" in query:
        playlist_name = random.choice(list(playlists.keys()))
        playlist_url = f"https://www.youtube.com/playlist?list={playlists[playlist_name]}"
        if playlist_name == "daily_surah":
            playlist_url += "&si=EsWYNhnNKrZS3qT5"
        elif playlist_name == "my":
            playlist_url += "&si=RatdS9-1TVgciDyo"
        webbrowser.open(playlist_url)
        speak(f"Playing {playlist_name}...")
        time.sleep(5)  # Wait for the page to load
        pyautogui.moveTo(500, 500)  # Move to a video in the playlist
        pyautogui.click()  # Click on a video
        keyboard.press('space')  # Play/pause video
        keyboard.press_and_release('f')  # Toggle full screen
        speak("Video playing...")
    else:
        speak("Sorry, I couldn't understand your command.")
message.py
import pywhatkit
import pyttsx3
import speech_recognition as sr
import pyautogui
import os

engine = pyttsx3.init("sapi5")
voices = engine.getProperty("voices")
engine.setProperty("voice", voices[1].id)
engine.setProperty("rate", 170)

def speak(audio):
    engine.say(audio)
    engine.runAndWait()

def take_command():
    r = sr.Recognizer()
    with sr.Microphone() as source:
        print("Adjusting for ambient noise...")
        r.adjust_for_ambient_noise(source, duration=1)
        print("Listening...")
        try:
            audio = r.listen(source, timeout=3)
            print("Recognizing...")
            try:
                query = r.recognize_google(audio, language='en-in')
                print(f"User said: {query}\n")
                return query
            except sr.UnknownValueError:
                print("Sorry, I did not understand that.")
            except sr.RequestError as e:
                print(f"Error; {e}")
        except sr.WaitTimeoutError:
            print("Timeout: No audio detected.")
        except Exception as e:
            print(f"An unexpected error occurred: {e}")
    return "None"

def sendMessage():
    speak("Who do you want to message")
    speak("Person one or Person two")
    a = take_command()
    if "person one" in a.lower():
        speak("What's the message")
        message = take_command()
        phone_number = "+91xxxxxxxxxx"
        pywhatkit.sendwhatmsg_instantly(phone_number, message, 3)  # 3 seconds wait time
    elif "person two" in a.lower():
        speak("What's the message")
        message = take_command()
        phone_number = "+91xxxxxxxxxx"
        pywhatkit.sendwhatmsg_instantly(phone_number, message, 3)  # 3 seconds wait time
    else:
        speak("Invalid input. Please try again.")

def makeCall():
    speak("Who do you want to call")
    speak("Person one or Person two")
    a = take_command()
    if "person one" in a.lower():
        os.startfile("whatsapp://call?phone=+91xxxxxxxxxx")
        pyautogui.sleep(5)  # Wait for WhatsApp to open
        pyautogui.press('enter')  # Make the call
    elif "person two" in a.lower():
        os.startfile("whatsapp://call?phone=+91xxxxxxxxxx")
        pyautogui.sleep(5)  # Wait for WhatsApp to open
        pyautogui.press('enter')  # Make the call
    else:
        speak("Invalid input. Please try again.")

game.py
import pyttsx3
import speech_recognition as sr
import random
import time

engine = pyttsx3.init('sapi5')
voices = engine.getProperty('voices')
engine.setProperty('voice', voices[1].id)
engine.setProperty("rate", 170)

def speak(audio):
    engine.say(audio)
    engine.runAndWait()

def takeCommand():
    r = sr.Recognizer()
    try:
        with sr.Microphone() as source:
            print("Listening.....")
            r.pause_threshold = 1
            r.energy_threshold = 300
            audio = r.listen(source, 0, 4)
    except Exception:
        print("Microphone not detected.")
        return "None"
    try:
        print("Recognizing..")
        query = r.recognize_google(audio, language='en-in')
        print(f"You Said : {query}\n")
    except Exception:
        print("Say that again")
        return "None"
    return query

def game_play():
    speak("Lets Play ROCK PAPER SCISSORS !!")
    print("LETS PLAYYYYYYYYYYYYYY")
    i = 0
    Me_score = 0
    Com_score = 0
    choose = ("rock", "paper", "scissors")  # Tuple of valid moves
    while i < 5:
        com_choose = random.choice(choose)
        query = takeCommand().lower()
        if query == "none":  # takeCommand returns the string "None" on failure
            speak("Sorry, I didn't understand that. Please try again.")
            continue
        if query not in choose and query != "scissor":
            speak("Invalid input. Please say rock, paper or scissors.")
            continue
        speak(f"{com_choose}")
        time.sleep(1)  # Wait for 1 second
        if query == "rock":
            if com_choose == "rock":
                speak("It's a tie!")
            elif com_choose == "paper":
                speak("I win this round!")
                Com_score += 1
            else:
                speak("You win this round!")
                Me_score += 1
        elif query == "paper":
            if com_choose == "rock":
                speak("You win this round!")
                Me_score += 1
            elif com_choose == "paper":
                speak("It's a tie!")
            else:
                speak("I win this round!")
                Com_score += 1
        elif query == "scissors" or query == "scissor":
            if com_choose == "rock":
                speak("I win this round!")
                Com_score += 1
            elif com_choose == "paper":
                speak("You win this round!")
                Me_score += 1
            else:
                speak("It's a tie!")
        print(f"Score:- ME :- {Me_score} : COM :- {Com_score}")
        i += 1
        time.sleep(1)

    print(f"FINAL SCORE :- ME :- {Me_score} : COM :- {Com_score}")
    if Me_score > Com_score:
        speak("You win!")
    elif Me_score < Com_score:
        speak("I win!")
    else:
        speak("It's a tie!")

Calculatenumbers.py
import wolframalpha
import pyttsx3

engine = pyttsx3.init("sapi5")
voices = engine.getProperty("voices")
engine.setProperty("voice", voices[1].id)
engine.setProperty("rate", 170)

def speak(audio):
    engine.say(audio)
    engine.runAndWait()

def WolfRamAlpha(query):
    apikey = "GUXXKP-39X2LEPYGQ"
    requester = wolframalpha.Client(apikey)
    requested = requester.query(query)
    try:
        answer = next(requested.results).text
        return answer
    except Exception:
        speak("The value is not answerable")

def Calc(query):
    Term = str(query)
    Term = Term.replace("luna", "")
    Term = Term.replace("multiply", "*")
    Term = Term.replace("plus", "+")
    Term = Term.replace("minus", "-")
    Term = Term.replace("divide", "/")
    try:
        result = WolfRamAlpha(Term)
        print(f"{result}")
        speak(result)
    except Exception:
        speak("The value is not answerable")

FocusMode.py
import time
import datetime
import ctypes, sys

def is_admin():
    try:
        return ctypes.windll.shell32.IsUserAnAdmin()
    except Exception:
        return False

if is_admin():
    current_time = datetime.datetime.now().strftime("%H:%M")
    Stop_time = input("Enter time example:- [10:10]:- ")
    a = float(current_time.replace(":", "."))
    b = float(Stop_time.replace(":", "."))
    Focus_Time = round(b - a, 3)
    host_path = 'C:\\Windows\\System32\\drivers\\etc\\hosts'
    redirect = '127.0.0.1'

    print(current_time)
    time.sleep(2)
    website_list = ["www.youtube.com", "youtube.com"]  # Websites to block
    if current_time < Stop_time:
        with open(host_path, "r+") as file:  # r+ is reading + writing
            content = file.read()
            time.sleep(2)
            for website in website_list:
                if website not in content:
                    file.write(f"{redirect} {website}\n")
        print("DONE")
        time.sleep(1)
        print("FOCUS MODE TURNED ON !!!!")

    try:
        while True:
            current_time = datetime.datetime.now().strftime("%H:%M")
            if current_time >= Stop_time:
                with open(host_path, "r+") as file:
                    content = file.readlines()
                    file.seek(0)
                    for line in content:
                        if not any(website in line for website in website_list):
                            file.write(line)
                    file.truncate()
                print("Websites are unblocked !!")
                with open("focus.txt", "a") as file:
                    file.write(f",{Focus_Time}")  # Write a 0 in focus.txt before starting
                break
            time.sleep(1)
    except KeyboardInterrupt:
        print("Exiting focus mode...")
        with open(host_path, "r+") as file:
            content = file.readlines()
            file.seek(0)
            for line in content:
                if not any(website in line for website in website_list):
                    file.write(line)
            file.truncate()
        print("Websites are unblocked !!")
else:
    ctypes.windll.shell32.ShellExecuteW(None, "runas", sys.executable, " ".join(sys.argv), None, 1)

FocusGraph.py
import matplotlib.pyplot as pt

def focus_graph():
    with open("focus.txt", "r") as file:
        content = file.read()

    content = content.split(",")
    x1 = []
    for i in range(len(content)):
        content[i] = float(content[i])
        x1.append(i)

    print(content)
    y1 = content

    pt.plot(x1, y1, color="red", marker="o")
    pt.title("YOUR FOCUSED TIME", fontsize=16)
    pt.xlabel("Times", fontsize=14)
    pt.ylabel("Focus Time", fontsize=14)
    pt.grid()
    pt.show()

(2) OUTPUTS
Intro.py

Temperature check

Screenshot

Photo

Current Time

Calculate.py

Music

Game.py

SearchNow.py

Focus Mode.py

Message.py

Alarm.py

Youtube.py

IV-RESULTS
(1) TEST CASES

Test 1: Wake-up Command
  Description: Verify that Luna responds to the wake-up command "wake up" and resumes operation.
  Expected Result: Luna wakes up and says a greeting.
  Actual Result: Luna responds to the wake-up command.

Test 2: Greeting
  Description: Check if Luna greets the user appropriately based on the time of day.
  Expected Result: Luna greets the user with "Good Morning", "Good Afternoon", or "Good Evening".
  Actual Result: Luna greets the user correctly.

Test 3: Basic Commands
  Description: Test Luna's ability to perform basic tasks like playing music, setting alarms, and checking the weather.
  Expected Result: Luna plays music, sets the alarm for a specified time, and provides the current weather.
  Actual Result: Luna performs basic tasks correctly.

Test 4: Web Search
  Description: Verify that Luna can search the web using Google, YouTube, or Wikipedia.
  Expected Result: Luna provides relevant search results from the specified search engine.
  Actual Result: Luna performs web searches correctly.

Test 5: Calculations
  Description: Check if Luna can perform calculations accurately.
  Expected Result: Luna provides the correct result for the given calculation.
  Actual Result: Luna performs calculations correctly.

Test 6: Reminders
  Description: Test Luna's ability to set and retrieve reminders.
  Expected Result: Luna sets a reminder for a specific time and retrieves it when requested.
  Actual Result: Luna sets and retrieves reminders correctly.

Test 7: Focus Mode
  Description: Verify that Luna can enter and exit focus mode.
  Expected Result: Luna enters focus mode and blocks specified websites, then exits focus mode and unblocks websites.
  Actual Result: Focus mode enters and exits correctly.

Test 8: Error Handling
  Description: Check how Luna handles errors, such as invalid commands or unexpected inputs.
  Expected Result: Luna provides an informative error message and tries to recover from the error.
  Actual Result: Luna handles errors gracefully.

Test 9: Natural Language Processing
  Description: Test Luna's ability to understand and respond to natural language queries.
  Expected Result: Luna responds to complex queries and provides relevant information.
  Actual Result: Luna understands natural language queries.

Test 10: Integration with Other Services
  Description: Verify that Luna can integrate with external services like WhatsApp and make calls or send messages.
  Expected Result: Luna successfully makes calls or sends messages using WhatsApp.
  Actual Result: Luna integrates with external services correctly.
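Test case 5 can also be automated as a unit test. The sketch below re-implements the word-to-operator normalization performed in Calculatenumbers.py as a standalone helper so it can be checked without the Wolfram Alpha backend; normalize_expression is a hypothetical name introduced for illustration:

```python
def normalize_expression(query):
    """Apply the same word-to-operator substitutions that Calc()
    performs before handing the expression to the backend."""
    for word, symbol in (("luna", ""), ("multiply", "*"),
                         ("plus", "+"), ("minus", "-"),
                         ("divide", "/")):
        query = query.replace(word, symbol)
    return query.strip()

# Minimal checks mirroring test case 5.
assert normalize_expression("luna 2 plus 3") == "2 + 3"
assert normalize_expression("7 multiply 6") == "7 * 6"
print("calculation normalization tests passed")
```

Keeping the normalization in a pure function like this is what makes the behavior testable without a microphone or network connection.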

V-CONCLUSION & FUTURE SCOPE

(1) SPECIFY THE FINAL CONCLUSION


The "Luna: Voice Assistant" project has effectively shown how a voice-controlled
desktop application can make daily computer tasks simpler and more efficient by using
predefined commands. Luna’s emphasis on straightforwardness ensures that users,
regardless of technical expertise, can easily engage with the assistant. By leveraging
dependable Python libraries like Pyttsx3 for speech output and Google’s Speech
Recognition API for voice input, the project achieves smooth operation and accurate task
execution.
While Luna currently doesn't feature advanced natural language processing (NLP), it
remains adaptable for future enhancements. This means Luna could eventually handle more
flexible, conversational commands and offer a richer, more interactive experience. The
project serves as a practical example of how voice assistants can automate basic tasks today,
with the exciting possibility of evolving into more powerful, context-aware systems.
Through ongoing updates and the addition of advanced features, Luna has the potential to
become even more dynamic and useful in the future.

(2) FUTURE SCOPE
While Luna is already a functional and efficient voice assistant, there are multiple
opportunities for future improvements. One of the most promising advancements would be
the integration of natural language processing (NLP). By incorporating NLP, Luna would
be able to interpret more sophisticated and conversational commands, vastly enhancing the
user experience. Instead of being limited to specific, predefined phrases, users could
communicate in a more natural manner, and Luna could respond appropriately. This would
make the assistant far more intuitive and versatile in everyday interactions.

Additionally, improving the system's error-handling features is another key area of
development. By creating more advanced feedback mechanisms, Luna could offer helpful
suggestions when an issue arises or when it cannot understand a command. This would not
only reduce user frustration but also make the assistant more user-friendly. Moreover,
integrating machine learning could allow Luna to adapt to individual users’ behaviors over
time. Through this, the assistant could begin to provide more personalized responses, learn
user preferences, and even predict commands based on past interactions, making Luna feel
more tailored to each user.

There’s also great potential in expanding Luna’s compatibility beyond just desktop
environments. By making the assistant compatible with mobile devices and smart home
systems, Luna’s utility would expand significantly. This would allow users to control a
wider variety of devices using voice commands, making Luna a more versatile and powerful
tool in daily life.
Lastly, privacy and security should be a priority moving forward. A potential
improvement would involve processing voice data locally, which would eliminate the need
to rely on external services like Google’s Speech Recognition API. This step would help
address user concerns over data privacy and enhance the overall security of Luna’s
operations.

These future enhancements could position Luna as an even more intelligent, adaptive,
and secure voice assistant, offering users greater functionality and a more seamless
experience across multiple devices.

VI-REFERENCES
[1] B. H. Juang and L. R. Rabiner, "A brief history of automatic speech recognition
technology development," Elsevier, pp. 1–5, 2005.

[2] J. Baker, X. Huang, and R. Reddy, "Advances in speech recognition technologies,"
IEEE Signal Processing Magazine, vol. 52, no. 6, pp. 1–18, 2009.

[3] S. Bird, E. Klein, and E. Loper, Practical Applications of Natural Language Processing
Using Python, O'Reilly Media, 2009.

[4] M. Smith, "Overview of the SpeechRecognition Python library: Key features and
applications," Real Python. Available at: https://realpython.com, 2021.

[5] M. McTear, Developing Conversational AI Systems: Dialogue Agents and Chatbots,
Morgan & Claypool, 2016.

[6] K. Shinohara and J. O. Wobbrock, "Assistive technology use and social interaction
challenges," ACM Trans. Accessible Comput., vol. 8, no. 2, pp. 1–30, 2016.

[7] E. Luger and A. Sellen, "User expectations vs. reality in conversational agents," in Proc.
ACM Int. Conf. Human Factors in Computing Systems (CHI), pp. 5286–5297, 2016.

[8] P. Sundar and N. Ewen, "The future outlook for voice assistants in AI-driven
environments," Forbes Tech Council. Available at: https://www.forbes.com, 2019.

[9] M. Grinberg, Developing Web Applications Using Flask and Python: A Programmer’s
Guide, O'Reilly Media, 2018.

[10] X. Huang, A. Acero, and H. W. Hon, Spoken Language Processing: Foundations and
Methodologies, Prentice Hall, 2001.

[11] D. Klatt, "Text-to-speech systems overview: Historical development and current
trends," J. Acoust. Soc. Am., vol. 82, no. 3, pp. 737–793, 1987.

[12] J. Preece, Y. Rogers, and H. Sharp, Human-Computer Interaction and User Interface
Design, John Wiley & Sons, 2015.

[13] P. Norvig, Modern Approaches in Artificial Intelligence: A Textbook Overview, 3rd ed.,
Prentice Hall, 2010.

[14] T. Mitchell, An Introduction to Machine Learning: Concepts and Techniques,
McGraw-Hill, 1997.

VII-GLOSSARY
1. Speech Recognition: This is the technology that lets computers understand spoken
words by converting them into text. In Luna, this feature allows the system to hear and
understand voice commands from users.

2. Pyttsx3: A Python library used to turn written text into speech. In Luna, it helps the
voice assistant "speak" by providing vocal responses to users.

3. API (Application Programming Interface): A set of rules that allows different
software programs to talk to each other. For Luna, the API from Google is used to help
process voice inputs and turn them into actions.

4. Command Mapping: This refers to how specific voice commands are matched with
certain tasks that Luna can carry out, ensuring that when a user says something, Luna
knows what action to perform.

5. Natural Language Processing (NLP): A field of AI that focuses on teaching computers
to understand and interpret human language. While Luna doesn't currently use NLP, it
can be added in the future to make it more flexible and capable of handling more natural,
conversational commands.

6. Voice Assistant: A program that carries out tasks in response to voice commands. Luna
is a basic voice assistant, focusing on simple tasks without the ability to engage in
complex conversations.

7. Modular Design: A design approach where the system is built from separate parts or
modules. Each module can be modified or replaced without affecting the rest of the
system, making it easy to update or expand Luna in the future.

VIII-APPENDICES
(1) USER GUIDE

Getting Started

1. Launch Luna: Double-click the Luna script file in your PyCharm project directory.

2. Introduction: Luna will play a short introductory animation and greet you.

Using Luna
 Basic Interactions:

o Say "hello" to greet Luna.

o Ask "how are you" to hear Luna's response.

o Say "thank you" to express gratitude.

 Information Retrieval:

o Ask "what's the time" to get the current time.

o Say "google" followed by your search query (e.g., "google weather today") to search
the web using Google.

o Ask "what's the temperature" to get the current temperature in Murud Janjira
(requires internet connection).

o Say "wikipedia" followed by your search term (e.g., "wikipedia Albert Einstein") to
search Wikipedia.

 Entertainment:

o Say "play music" to play a random song from a specified music directory (configure
path within the script).

o Say "play something" or "play on youtube" followed by your search terms to play a
video on Youtube (requires internet connection).

o Ask "play video" or "pause video" to control video playback (works on most online
platforms).

 System Control:

o Say "take a screenshot" to capture a screenshot of your screen.

o Ask "volume up" or "volume down" to adjust system volume.

o Say "mute" to mute/unmute audio in the current program.

o Use commands like "full screen on/off," "next video," or "previous video" to control
media playback (works on most online platforms).

 Other Features:

o Say "set an alarm" followed by the desired time (e.g., "set an alarm for 10:30 AM")
to set an alarm.

o Say "click a photo" or "take a photo" to capture an image using your webcam.

o Say "remember that" followed by a message to create a reminder.

o Ask "reminders" or "reminder for me" to view your saved reminders.

o Say "calculate" followed by a mathematical expression to perform calculations (e.g.,
"calculate 2 + 3").

Sleep Mode and Wake Up


 Entering Sleep Mode: To put Luna into sleep mode, say "go to sleep." She will respond
with "Ok, You can call me anytime" and enter a listening state.

 Waking Luna Up: To wake Luna up from sleep mode, say "wake up." She will respond
with "I'm awake!" and return to her active state.
