✨ An Accessibility App for the Visually Impaired ✨


Our app is available on Android and will be published on the iOS App Store soon.


Get TapSense on the Play Store now!


🌟 Overview

TapSense is a thoughtfully designed accessibility app tailored for visually impaired users. The app provides intuitive tools to simplify daily tasks, paired with a minimalistic user interface.

  • Dynamic "READ SCREEN" Buttons: Provide easy-to-access text-to-speech functionality at the bottom of every section.
  • Cross-Platform Compatibility: Developed using Flutter, ensuring availability on both iOS and Android platforms.
Please note that the demo videos are in GIF format to give you an idea of what the app looks like; the actual app is much smoother and more responsive.

Features at a Glance

| Section | Functionality |
| --- | --- |
| Reader | Scan text from images and convert it to speech |
| To-Do | Voice-powered dynamic to-do list creation |
| Navigation | Fetch directions dynamically with voice commands |
| Detection | Identify objects in a scene with confidence levels |

Main Menu Demo


🔍 Reader

Description

Leverages OCR (Optical Character Recognition) powered by Google ML Kit to scan text from images and convert it to speech. Users can select an image from their gallery or capture one using their device's camera.

Technologies Used

| Dependency | Purpose |
| --- | --- |
| Image Picker Library | For selecting or capturing images |
| Google ML Kit's Text Recognition | For extracting text from images |
| Flutter's Text-to-Speech | For converting text to speech |
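
The dependencies above can be wired together in just a few lines of Dart. The sketch below is illustrative only (the function name and flow are assumptions, not the app's actual code) and presumes the image_picker, google_mlkit_text_recognition, and flutter_tts packages are declared in pubspec.yaml:

```dart
import 'package:flutter_tts/flutter_tts.dart';
import 'package:google_mlkit_text_recognition/google_mlkit_text_recognition.dart';
import 'package:image_picker/image_picker.dart';

Future<void> readTextFromImage() async {
  // Let the user pick an image from the gallery (or use ImageSource.camera).
  final XFile? picked =
      await ImagePicker().pickImage(source: ImageSource.gallery);
  if (picked == null) return;

  // Run ML Kit text recognition on the selected image.
  final inputImage = InputImage.fromFilePath(picked.path);
  final recognizer = TextRecognizer(script: TextRecognitionScript.latin);
  final RecognizedText result = await recognizer.processImage(inputImage);
  await recognizer.close();

  // Read the extracted text aloud with text-to-speech.
  final tts = FlutterTts();
  await tts.speak(result.text.isEmpty ? 'No text found.' : result.text);
}
```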

Reader Demo


🗒️ To-Do

Description

Uses speech-to-text technology to create, store, and delete to-do items dynamically. Aimed at increasing productivity for visually impaired users by leveraging voice commands.

Technologies Used

| Dependency | Purpose |
| --- | --- |
| Speech-to-Text Package | For voice input |
| Flutter's Text-to-Speech | For audio feedback |
| Hive | NoSQL database for persistent storage |
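
As a rough idea of how voice-powered entry can work with these packages, here is a minimal sketch; the box name, function name, and single-phrase flow are assumptions rather than the app's actual implementation, and it presumes Hive.initFlutter() was called at startup:

```dart
import 'package:hive_flutter/hive_flutter.dart';
import 'package:speech_to_text/speech_to_text.dart';

final SpeechToText _speech = SpeechToText();

Future<void> addTodoByVoice() async {
  // Hive keeps the list on disk so items survive app restarts.
  final Box<String> todos = await Hive.openBox<String>('todos');

  // Initialize the recognizer and listen for a single spoken phrase.
  if (!await _speech.initialize()) return;
  await _speech.listen(onResult: (result) async {
    if (result.finalResult && result.recognizedWords.isNotEmpty) {
      // Store the recognized phrase as a new to-do item.
      await todos.add(result.recognizedWords);
    }
  });
}
```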

To-Do Demo


🗺️ Navigation

Description

Fetches and reads out dynamic directions based on user input via speech-to-text. Copies instructions to the clipboard for convenience and displays a scrollable list of available locations.

Technologies Used

| Dependency | Purpose |
| --- | --- |
| Speech-to-Text Package | For capturing voice input |
| Flutter's Text-to-Speech | For voice-guided navigation |
| LatLong Package | For geolocation |
| Dio Package | For making API requests |
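
A minimal sketch of the fetch-copy-speak flow follows. The directions endpoint and response shape below are placeholders (not the service the app actually calls), and only the dio, flutter_tts, and Flutter clipboard APIs are assumed:

```dart
import 'package:dio/dio.dart';
import 'package:flutter/services.dart';
import 'package:flutter_tts/flutter_tts.dart';

Future<void> speakDirections(String destination) async {
  final dio = Dio();

  // Hypothetical routing API; the real app resolves coordinates with the
  // LatLong package and its own directions provider.
  final response = await dio.get(
    'https://example.com/api/directions',
    queryParameters: {'to': destination},
  );
  final String instructions = response.data.toString();

  // Copy the instructions to the clipboard for convenience.
  await Clipboard.setData(ClipboardData(text: instructions));

  // Read the directions aloud.
  await FlutterTts().speak(instructions);
}
```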

Navigation Demo


👓 Detection

Description

Uses Google ML Kit's Object Detection model to identify objects within a scene. Allows users to either select an image from their gallery or take a live image for detection. Displays confidence levels for identified objects.

Technologies Used

| Dependency | Purpose |
| --- | --- |
| Image Picker Library | For selecting or capturing images |
| Google ML Kit's Object Detection | For identifying objects in a scene |
| Flutter's Text-to-Speech | For reading detection results aloud |
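
Here is a minimal sketch of the detection flow under the same assumptions as the Reader example (image_picker, google_mlkit_object_detection, and flutter_tts packages; names are illustrative only):

```dart
import 'package:flutter_tts/flutter_tts.dart';
import 'package:google_mlkit_object_detection/google_mlkit_object_detection.dart';
import 'package:image_picker/image_picker.dart';

Future<void> detectObjects() async {
  // Capture a live image (or use ImageSource.gallery to pick an existing one).
  final XFile? picked =
      await ImagePicker().pickImage(source: ImageSource.camera);
  if (picked == null) return;

  // Classify objects in the still image and keep confidence scores.
  final detector = ObjectDetector(
    options: ObjectDetectorOptions(
      mode: DetectionMode.single,
      classifyObjects: true,
      multipleObjects: true,
    ),
  );
  final objects =
      await detector.processImage(InputImage.fromFilePath(picked.path));
  await detector.close();

  // Announce each label with its confidence, e.g. "cup, 87 percent".
  final tts = FlutterTts();
  for (final object in objects) {
    for (final label in object.labels) {
      await tts.speak(
          '${label.text}, ${(label.confidence * 100).round()} percent');
    }
  }
}
```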

Detection Demo


🎯 Dynamic Features Across All Sections

  • Text-to-Speech Outputs: Provides real-time audio feedback for every button, ensuring a consistent and accessible experience.
  • Consistent UI: The "READ SCREEN" button always sits above the "BACK" button for easy navigation and a reduced learning curve (a sketch of such a button follows below).
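
The reusable button could look roughly like the following, assuming flutter_tts; the widget and parameter names are illustrative, not the app's actual code:

```dart
import 'package:flutter/material.dart';
import 'package:flutter_tts/flutter_tts.dart';

class ReadScreenButton extends StatelessWidget {
  const ReadScreenButton({super.key, required this.screenText});

  /// The text shown on the current screen, passed in by each section.
  final String screenText;

  @override
  Widget build(BuildContext context) {
    return ElevatedButton(
      // Tapping the button reads the whole screen aloud.
      onPressed: () => FlutterTts().speak(screenText),
      child: const Text('READ SCREEN'),
    );
  }
}
```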

🀝 How to Contribute

We would love your contributions to make TapSense even better. You can contribute in several ways:

  • Report Bugs: Found an issue? Please open an issue on our GitHub repository.
  • Propose Features: Have ideas for new features? Let us know through an issue or a pull request.
  • Submit Pull Requests: Fix bugs or add features by submitting a pull request to our repository.
  • Documentation: Help improve the README or other documentation files.

Steps to Contribute:

  1. Fork the repository.
  2. Create a new branch (git checkout -b feature-name).
  3. Commit your changes (git commit -m "Description of changes").
  4. Push to your fork (git push origin feature-name).
  5. Open a pull request.

Your contributions are always appreciated!


👥 Development Team

This project was developed by:

| Name | Institution | ID | GitHub Followers |
| --- | --- | --- | --- |
| Rajin Khan | North South University | 2212708042 | Rajin's GitHub Followers |
| Saumik Saha Kabbya | North South University | 2211204042 | Saumik's GitHub Followers |

Star the repo if you want to support more projects like this!
