Get TapSense on Play Store now!
TapSense is a thoughtfully designed accessibility app tailored for visually impaired users. The app provides intuitive tools that simplify daily tasks, paired with a minimalistic user interface.
- Dynamic "READ SCREEN" Buttons: Ensures easy-to-access text-to-speech functionality at the bottom of every section.
- Cross-Platform Compatibility: Developed using Flutter, ensuring availability on both iOS and Android platforms.
Please note that the videos used in the demos are in GIF format to give you an idea of what the app looks like. The actual app is much smoother and more responsive.
Section | Functionality |
---|---|
Reader | Scan text from images and convert it to speech |
To-Do | Voice-powered dynamic to-do list creation |
Navigation | Fetch directions dynamically with voice commands |
Detection | Identify objects in a scene with confidence levels |
Leverages OCR (Optical Character Recognition) powered by TensorFlow Lite (TFLite) to scan text from images and convert it to readable output. Users can select an image from their gallery or capture one using their device's camera.
Dependency | Purpose |
---|---|
Image Picker Library | For selecting or capturing images |
Google ML Kit's Text Recognition | For extracting text from images |
Flutter's Text-to-Speech | For converting text to speech |
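A minimal sketch of how these pieces could fit together, assuming the `image_picker`, `google_mlkit_text_recognition`, and `flutter_tts` packages; the `readTextFromImage` helper name is hypothetical and not necessarily how the app structures it:

```dart
import 'package:image_picker/image_picker.dart';
import 'package:google_mlkit_text_recognition/google_mlkit_text_recognition.dart';
import 'package:flutter_tts/flutter_tts.dart';

/// Picks an image, runs ML Kit text recognition on it, and speaks the result.
/// (Illustrative sketch only; helper name and flow are assumptions.)
Future<void> readTextFromImage() async {
  // Let the user choose a gallery image (use ImageSource.camera for a photo).
  final XFile? picked =
      await ImagePicker().pickImage(source: ImageSource.gallery);
  if (picked == null) return;

  // Run on-device text recognition on the selected file.
  final inputImage = InputImage.fromFilePath(picked.path);
  final recognizer = TextRecognizer(script: TextRecognitionScript.latin);
  final RecognizedText result = await recognizer.processImage(inputImage);
  await recognizer.close();

  // Read the extracted text aloud.
  final tts = FlutterTts();
  await tts.speak(result.text.isEmpty ? 'No text found' : result.text);
}
```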
Uses speech-to-text technology to create, store, and delete to-do items dynamically. Aimed at increasing productivity for visually impaired users by leveraging voice commands.
Dependency | Purpose |
---|---|
Speech-to-Text Package | For voice input |
Flutter's Text-to-Speech | For audio feedback |
Hive | NoSQL Database for persistent storage |
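A rough sketch of the voice-driven flow, assuming the `speech_to_text`, `hive_flutter`, and `flutter_tts` packages; the box name `'todos'` and the helper names are hypothetical:

```dart
import 'package:hive_flutter/hive_flutter.dart';
import 'package:speech_to_text/speech_to_text.dart';
import 'package:flutter_tts/flutter_tts.dart';

final _speech = SpeechToText();
final _tts = FlutterTts();

/// Opens the Hive box that persists to-do items between app launches.
Future<Box<String>> openTodoBox() async {
  await Hive.initFlutter();
  return Hive.openBox<String>('todos'); // hypothetical box name
}

/// Listens for one spoken phrase and stores it as a new to-do item.
Future<void> addTodoByVoice(Box<String> box) async {
  if (!await _speech.initialize()) return;
  _speech.listen(onResult: (result) async {
    if (result.finalResult && result.recognizedWords.isNotEmpty) {
      await box.add(result.recognizedWords);        // persist the item
      await _tts.speak('Added ${result.recognizedWords}'); // audio feedback
    }
  });
}
```

Deleting an item would similarly call `box.deleteAt(index)` once a voice command resolves to a list position.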
Fetches and reads out dynamic directions based on user input via speech-to-text. Copies instructions to the clipboard for convenience and displays a scrollable list of available locations.
Dependency | Purpose |
---|---|
Speech-to-Text Package | For capturing voice input |
Flutter's Text-to-Speech | For voice-guided navigation |
LatLong Package | For geolocation |
Dio Package | For making API requests |
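A hedged sketch of fetching and announcing directions. The routing endpoint (OSRM's demo server), its response shape, and the `speakDirections` helper are assumptions for illustration; the app may use a different directions API:

```dart
import 'package:dio/dio.dart';
import 'package:flutter/services.dart';
import 'package:flutter_tts/flutter_tts.dart';
import 'package:latlong2/latlong.dart';

final _dio = Dio();
final _tts = FlutterTts();

/// Fetches directions between two points from a routing API (OSRM demo
/// server assumed here), copies the instructions to the clipboard, and
/// reads them aloud.
Future<void> speakDirections(LatLng from, LatLng to) async {
  // Assumed example endpoint; the real app may call a different service.
  final url = 'https://router.project-osrm.org/route/v1/driving/'
      '${from.longitude},${from.latitude};${to.longitude},${to.latitude}';
  final response = await _dio.get(url, queryParameters: {'steps': 'true'});

  // Pull turn-by-turn step names out of the OSRM-style response.
  final steps = (response.data['routes'][0]['legs'][0]['steps'] as List)
      .map((s) => s['name'].toString())
      .where((name) => name.isNotEmpty)
      .toList();

  final instructions = steps.join(', then ');
  await Clipboard.setData(ClipboardData(text: instructions));
  await _tts.speak(instructions.isEmpty ? 'No route found' : instructions);
}
```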
Uses Google ML Kit's Object Detection model to identify objects within a scene. Allows users to either select an image from their gallery or take a live image for detection. Displays confidence levels for identified objects.
Dependency | Purpose |
---|---|
Image Picker Library | For selecting or capturing images |
Google ML Kit's Object Detection | For identifying objects in a scene |
Flutter's Text-to-Speech | For reading detection results aloud |
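A minimal sketch of the detection flow, assuming the `image_picker`, `google_mlkit_object_detection`, and `flutter_tts` packages; the `detectAndAnnounce` helper is hypothetical:

```dart
import 'package:image_picker/image_picker.dart';
import 'package:google_mlkit_object_detection/google_mlkit_object_detection.dart';
import 'package:flutter_tts/flutter_tts.dart';

/// Captures an image, detects objects with ML Kit's base model, and reads
/// each label with its confidence level aloud. (Illustrative sketch only.)
Future<void> detectAndAnnounce() async {
  final XFile? picked =
      await ImagePicker().pickImage(source: ImageSource.camera);
  if (picked == null) return;

  final detector = ObjectDetector(
    options: ObjectDetectorOptions(
      mode: DetectionMode.single,  // single still image, not a camera stream
      classifyObjects: true,       // attach labels to each detected object
      multipleObjects: true,       // allow more than one object per scene
    ),
  );
  final objects =
      await detector.processImage(InputImage.fromFilePath(picked.path));
  await detector.close();

  // Build a spoken summary such as "Chair, 87 percent".
  final summary = [
    for (final obj in objects)
      for (final label in obj.labels)
        '${label.text}, ${(label.confidence * 100).round()} percent'
  ].join('. ');

  await FlutterTts().speak(summary.isEmpty ? 'No objects detected' : summary);
}
```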
- Text-to-Speech Output: Provides real-time audio feedback for every button, ensuring a consistent and accessible experience.
- Consistent UI: The "READ SCREEN" button always sits above the "BACK" button for easy navigation and a reduced learning curve (a minimal widget sketch follows below).
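A small sketch of what such a reusable button could look like, assuming `flutter_tts`; the `ReadScreenButton` class name and its `screenText` parameter are hypothetical:

```dart
import 'package:flutter/material.dart';
import 'package:flutter_tts/flutter_tts.dart';

/// A reusable "READ SCREEN" button that speaks the text passed to it,
/// intended to sit directly above the "BACK" button on every screen.
class ReadScreenButton extends StatelessWidget {
  const ReadScreenButton({super.key, required this.screenText});

  final String screenText;               // text summary of the current screen
  static final FlutterTts _tts = FlutterTts();

  @override
  Widget build(BuildContext context) {
    return ElevatedButton(
      onPressed: () => _tts.speak(screenText), // read the screen aloud
      child: const Text('READ SCREEN'),
    );
  }
}
```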
We would love contributions to make TapSense even better. You can contribute in several ways:
- Report Bugs: Found an issue? Please open an issue on our GitHub repository.
- Propose Features: Have ideas for new features? Let us know through an issue or a pull request.
- Submit Pull Requests: Fix bugs or add features by submitting a pull request to our repository.
- Documentation: Help improve the README or other documentation files.
- Fork the repository.
- Create a new branch (`git checkout -b feature-name`).
- Commit your changes (`git commit -m "Description of changes"`).
- Push to your fork (`git push origin feature-name`).
- Open a pull request.
Your contributions are always appreciated!
This project was developed by:
Name | Institution | ID |
---|---|---|
Rajin Khan | North South University | 2212708042
Saumik Saha Kabbya | North South University | 2211204042
Star the repo if you want to support more projects like this!