
Article

TRIADON: A Voice Assistant


Sunilkumar Hattaraki 1,†,‡ , Abhilsh Patil 2,‡ and Sujay Kulkarni 2, *

1 Affiliation 1; sunilmh039@gmail.com
2 Affiliation 2; patilkumarabhi@gmail.com
3 Affiliation 3; sujayk928@gmail.com
† Current address: Department of Electronics and Communication Engineering, B.L.D.E.A’s V. P. Dr. P. G.
Halakatti College of Engineering and Technology, Vijayapur, India.
‡ Current address: Department of Electronics and Communication Engineering, B.L.D.E.A’s V. P. Dr. P. G.
Halakatti College of Engineering and Technology, Vijayapur, India.
§ Current address: Department of Electronics and Communication Engineering, B.L.D.E.A’s V. P. Dr. P. G.
Halakatti College of Engineering and Technology, Vijayapur, India.

Abstract: This project aims to develop a simple human assistant using Python. Triadon is inspired by apps like Cortana for Windows and Siri for iOS. It is designed to provide an easy-to-use interface for a variety of tasks using well-defined instructions. Users can interact with the assistant either through voice commands or keyboard input. Voice assistants are software agents that can interpret a person’s speech and respond with synthesized voices. ... Users can ask their assistants questions, play media, and manage other basic functions such as email, to-do lists, and calendars with voice commands. This paper examines the primary functionality and common features of modern voice assistants. It also discusses specific privacy and security issues that affect voice assistants and possible future uses of these devices.

Keywords: Voice Assistant, future, design

1. Introduction

The intelligent personal assistant is a significant achievement and has become an integral part of how people access digital systems. These assistants can now be found on gadgets such as smartphones, tablets, and smartwatches. Increasing competition in this area has led to many improvements, and big companies like Amazon, Google, Microsoft, and Apple offer complete digital ecosystems that can be controlled by voice assistants. This motivated our team to build a voice assistant of our own. Triadon Assistant is a voice-driven contact assistant: if you experience anxiety, depression, or similar feelings, it communicates with you through the system, analyzes your mood, and talks to you. This paper also discusses specific privacy and safety issues related to voice assistants and the possible future uses of these devices. As voice assistants become more widely used, librarians will need to become familiar with the technology, which has the potential to become a means of delivering library resources and services.
2. OBJECTIVE

AI assistants such as Alexa, Cortana, and Google Home have become the most popular form of voice-based computer interaction and can do almost everything possible with voice commands. One continuing line of work on attacking voice assistants shows that hidden voice commands that are not understood by humans can still control them. Our objective is a voice assistant that combines voice recognition, natural language understanding, and speech synthesis to make interaction easier for users of phone and voice-recognition apps.

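To make this objective concrete, the following is a minimal sketch of a listen-and-respond loop of the kind described above. It is not the authors' exact implementation; it assumes the third-party speech_recognition and pyttsx3 packages (with PyAudio for microphone access) and uses Google's free Web Speech API for recognition.

# Minimal listen-and-respond loop (illustrative sketch, not the authors' code).
# Assumes: pip install SpeechRecognition pyttsx3 pyaudio
import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
tts = pyttsx3.init()  # offline text-to-speech engine

def speak(text):
    """Read the reply aloud through the default audio device."""
    tts.say(text)
    tts.runAndWait()

def listen():
    """Capture one phrase from the microphone and return it as text."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source, duration=0.5)
        audio = recognizer.listen(source)
    try:
        return recognizer.recognize_google(audio)  # Google Web Speech API
    except sr.UnknownValueError:
        return ""

if __name__ == "__main__":
    speak("Hello, I am Triadon. How can I help?")
    print("You said:", listen())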

3. LITERATURE REVIEW

Speech recognition has a long history with several major waves of progress. Speech recognition, search, and voice commands have become quite common on smartphones and portable devices. The goal is the design of a large integrated speech recognition system that works well on mobile devices with high accuracy and low latency; this is achieved through artificial intelligence using Python. The ASR and Search components perform the speech recognition and search functions, and in addition to ASR and Search we include a module that routes queries between them, for a number of reasons. A set of strategies improves the performance of default voice search services aimed at mobile users accessing these services across a wide range of mobile devices. Voice search is run as a staged search process in which the candidate strings generated by the automatic speech recognition (ASR) system are also scored to identify the best entries from a specific application database.
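As a rough illustration of this staged voice-search idea (our own sketch of how such scoring could look, not a description of a specific system), the candidate strings returned by an ASR engine can be scored against an application database and the best entry selected:

# Illustrative sketch: score ASR candidate strings against an application database.
from difflib import SequenceMatcher

APP_DATABASE = ["play music", "weather forecast", "set an alarm", "send a text message"]

def best_match(asr_candidates, database=APP_DATABASE):
    """Return (similarity score, database entry) for the best-matching candidate."""
    scored = [
        (SequenceMatcher(None, cand.lower(), entry.lower()).ratio(), entry)
        for cand in asr_candidates
        for entry in database
    ]
    return max(scored)

print(best_match(["whether forecast", "weather for cast"]))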

4. ANALYSIS

For voice recognition, our only concern was whether the voice assistant could recognize what the users said. We did not consider the context or limitations of the responses given, only correct recognition. In this project we tested voice recognition with different software variations and with varying levels of background noise. According to the comparisons, Amazon's Alexa, Google Home, and Siri taught us a great deal. There were cases where the assistants made mistakes in understanding deceptive questions. Siri is good at understanding natural language, Alexa is good at sound and music, Microsoft's Cortana is not comfortable with basic questions and is not very good at basic voice recognition, and Google Assistant had problems understanding the user's voice when it was interrupted by even a small amount of noise.

5. PROBLEM FORMULATION

During the construction of the project, we encountered many problems while using the modules and functions. Some of these problems and how they were managed are listed in the following section:

• Chatting vs. commanding: When we want to communicate with the robot by chat, the software does not separate the keywords used in statements. Because conversation is open-ended, every sentence has a higher chance of containing a keyword that maps to a command, which can confuse the program about the intended words or phrases and therefore lead to a wrong response.
• Solution: The program is structured in two modes. In chat mode, the system gives textual responses to command-like statements such as “what is the time” or “what is your name” without invoking the corresponding tasks, which makes it much easier for the system to separate keywords (a minimal dispatch sketch is given after this list).

• Location: There were problems accessing geographic information from the city name provided when launching the location service. The assistant should not only detect the current location of the user; there should also be a function that finds a place by city name, and the location must be supplied accurately from Google Maps for the destination given in the user's command (a geocoding sketch is given after this list).

• Calling service: We encountered a major and fundamental issue while using the call service. The software could not function properly, failing to produce the expected output after the code ran, and the same run-time problem kept recurring even though the code was tested, updated, and changed many times without a solution.
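Below is a minimal sketch of the two-mode keyword separation described in the first two items above. It is our own illustration: the keyword list and handler names are hypothetical, not the project's actual code.

# Illustrative two-mode dispatch: command mode runs tasks, chat mode only acknowledges them.
import datetime

def tell_time():
    return datetime.datetime.now().strftime("The time is %H:%M")

COMMANDS = {  # keyword -> handler (hypothetical examples)
    "time": tell_time,
    "name": lambda: "My name is Triadon",
}

def respond(sentence, chat_mode=False):
    """In command mode, run the matching task; in chat mode, answer without invoking tasks."""
    for keyword, handler in COMMANDS.items():
        if keyword in sentence.lower():
            if chat_mode:
                return "(chat) That sounded like the '" + keyword + "' command."
            return handler()
    return "Sorry, I did not understand that."

print(respond("what is the time"))                   # command mode: runs the task
print(respond("what is the time", chat_mode=True))   # chat mode: task is not run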
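For the location issue, one way to look up coordinates from a city name is the Nominatim geocoder in the geopy package. This is only a sketch under that assumption; the paper itself mentions Google Maps, which would require an API key and its own client instead.

# Illustrative geocoding by city name using geopy's Nominatim (OpenStreetMap) geocoder.
# Assumes: pip install geopy
from geopy.geocoders import Nominatim

def locate(city_name):
    """Return (latitude, longitude, display name) for a city, or None if not found."""
    geolocator = Nominatim(user_agent="triadon_assistant")  # identify the client to the service
    place = geolocator.geocode(city_name)
    if place is None:
        return None
    return place.latitude, place.longitude, place.address

print(locate("Vijayapur, India"))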

6. Purpose

This software aims to build a personal assistant using Python. The primary purpose of the software is to perform the user's tasks in response to specific commands, given in any format, speech or text. It reduces most of the user's effort, since an entire task can be carried out with a single command. Triadon is modeled on digital aids like Cortana for Windows and Siri for iOS. Users can interact with the assistant either through voice commands or keyboard input.

7. Appropriateness of the Proposed Plan

• Voice assistants allow us to perform various tasks hands-free, which is a major reason why many people like to use them, especially on their phones. Apple has Siri; Google phones and most Android phones come with Google Assistant. ... With the addition of separate applications on the phone, our voice can become a kind of remote control for our lives. As technology evolves, so do the ways in which people communicate with it, and artificial intelligence assistants have changed as well. Initially, text was the only way to communicate with a helper app (typing a phrase produced a response); now, voice has taken over.
• Flexibility: Voice recognition is not tied to a single device. With current technology there are cloud-based apps that let the user share a single profile across devices, so wherever it is used, the same profile is available to connect to.

8. Tools Used

Development tools and environment: PyCharm, Visual Studio, JDK, Eclipse IDE, Android SDK, ADT plugin, AVD, PyCharm plug-ins, MySQL query language, DB Designer, Microsoft Visual Web Developer 2010 Express, and the Windows Azure cloud platform.
APIs and references: PyCharm, Android API, Google APIs (Google Maps, Google Weather), Wikipedia API, SQL tutorial, UML reference, JSON, XML, WSDL, cloud computing, and other learning resources.
Software requirements for the Android phone application: a simple web browser, Google voice recognition, extended TTS service, alarm, mobile phone calling services, and text messaging services.

9. Goals

At present, the project aims to provide Linux users with a virtual assistant that helps not only with daily tasks, such as searching the Internet to retrieve information with the help of spoken phrases, and many more, but also with the automation of various tasks. Over time, we will try to grow it into a complete server assistant covering full server management and control, system implementation, backups, automatic measuring, recording, and tracking, smart enough to carry out updates on behalf of the server administrator.

Scope:

At present, Triadon is being developed as an automation tool and a real helper. A few of the many roles performed by Triadon are: 1. Voice search engine. 2. Voice diagnostics and medication aid. 3. Reminder and to-do utility. 4. Vocabulary app to show definitions and correct spelling mistakes. 5. Weather forecast application (an example of this last role is sketched below).
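As an example of the last of these roles, a weather forecast can be answered with a single HTTP request. The sketch below uses the public wttr.in text service as a stand-in, since the paper does not fix a particular weather API.

# Illustrative weather lookup for the weather forecast role.
# Assumes: pip install requests; wttr.in is used as a stand-in weather service.
import requests

def weather_report(city):
    """Return a one-line weather summary for the given city."""
    response = requests.get("https://wttr.in/" + city, params={"format": "3"}, timeout=10)
    response.raise_for_status()
    return response.text.strip()

print(weather_report("Vijayapur"))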

10. Description of Performance

The voice service works with TRIADON to control smart home devices (e.g., a smart bulb or thermostat). To control a smart device, the user speaks a voice command to TRIADON after waking it with the voice trigger; TRIADON then sends the audio of that command to a remote voice-search cloud over the connected Wi-Fi network. When the cloud recognizes the audio as a valid command, it is forwarded to a server, called the smart-home adapter skill, maintained by our system. After that, the command is dispatched to another cloud that manages the remotely connected smart device. Steps to use:
1. Audio is first converted to text using PyAudio and the Google API.
2. The system interprets the sentence according to the rules of the ISL system.
3. It then produces a spoken response.
4. We also need a database that stores images, GIFs, and database coding names.
A sketch of forwarding a recognized command to the adapter service is given below.
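The forwarding step can be pictured as a small client that posts the recognized text to the smart-home adapter service. The endpoint URL and JSON fields below are hypothetical placeholders, since the paper does not specify them.

# Illustrative sketch: recognized text is posted to the smart-home adapter service.
# The URL and payload fields are hypothetical placeholders.
import requests

ADAPTER_URL = "https://example.com/triadon/smart-home-adapter"  # placeholder endpoint

def send_command(recognized_text, device):
    """Forward a recognized voice command to the adapter cloud and return its reply."""
    payload = {"command": recognized_text, "device": device}
    response = requests.post(ADAPTER_URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()

# Example: a command recognized by the voice-search cloud.
# print(send_command("turn on", "smart bulb"))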

11. Architecture Diagram and Working

The architecture diagram shows how the voice service works with TRIADON to control smart home devices (e.g., a smart bulb or thermostat), following the flow described in Section 10: the user wakes TRIADON and speaks a voice command, TRIADON sends the audio to the remote voice-search cloud over the connected Wi-Fi network, the cloud recognizes the audio as a valid command and forwards it to the smart-home adapter server maintained by our system, and from there the command is dispatched to the cloud that manages the remotely connected smart device.



12. Implementation of the Wolfram Alpha Module in our Project
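A minimal usage sketch of the wolframalpha Python client is shown below. The App ID is a placeholder that must be obtained from Wolfram|Alpha, and the query string is only an example; this is not the project's exact code.

# Minimal sketch of querying the Wolfram|Alpha API with the wolframalpha package.
# Assumes: pip install wolframalpha, plus a valid App ID from the Wolfram|Alpha developer portal.
import wolframalpha

APP_ID = "YOUR-WOLFRAMALPHA-APP-ID"  # placeholder credential

def ask_wolfram(question):
    """Send a question to Wolfram|Alpha and return the first plain-text result."""
    client = wolframalpha.Client(APP_ID)
    result = client.query(question)
    return next(result.results).text  # first result pod as text

print(ask_wolfram("distance from Earth to the Moon"))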

Output:

13. Conclusion

With this voice assistant, we made a variety of resources accessible through a single-line command. It performs many user functions such as web search, weather information, call assistance, and medical questions. We intend to make this project a complete server assistant and make it smart enough to function as a general server control system. Future plans include integrating Triadon with mobile devices using React Native to provide an integrated experience between two connected devices. In addition, over time, Triadon is scheduled to include automated deployments that support expandable modules, backup files, and everything else a server administrator does. At present, its performance is not competent enough to replace the server administrator with Triadon.

14. Acknowledgement

We had a great working experience with this project and learned many new skills. However, it would not have been possible without the help and kindness of many people, and we would like to extend our gratitude to all of them. We owe a great deal to our teachers, especially Mr. Anandhan K, for their regular guidance, for providing us with the necessary information about the project, and for their support in completing it. We would also like to thank our parents and friends for their cooperation and encouragement, which helped us finish the project.
