See Through Me: An Assistive Application for the Blind
2025-06-28 16:29:02 - Adil Khan
Project Area of Specialization: Software Engineering

Project Summary
Our surroundings are full of visual information, which makes even simple daily tasks, such as finding the color of an object, difficult for visually impaired people. Such people would benefit immensely from assistive technology that can understand their surroundings and provide hints or information about them. We propose a smart assistive mobile application that helps blind and visually impaired users read text and understand their surroundings through voice commands. It provides a set of digital eyes to make the physical world more accessible to the blind and low-vision community.
Project Objectives
The main goal of the project is to develop an assistive application for the blind to make their daily life easier. The objectives of the project are to:
- Provide an application that is able to take a set of voice commands and perform the required action.
- Provide an application that can use the device's camera to recognize text and objects and then provide voice descriptions of what it sees.
- Provide an application with the capability to detect the dominant color of the object in front of the camera and describe it.
The project is an Android application that will be initiated through the Google voice command accessibility features of the Android operating system. The user will be able to launch the object recognizer or color identifier using voice commands. The camera will then open, and the application shall provide the classification result as speech output.
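As a sketch of how the speech layer might route a recognized phrase to one of these features, consider the following keyword-based dispatcher. The command phrases and the `Action` names are illustrative assumptions, not a final design; on Android the phrase itself would come from the platform speech recognizer.

```kotlin
// Illustrative sketch: map a recognized voice phrase to an app action.
// The keywords and Action names are assumptions, not the final command set.
enum class Action { OBJECT_RECOGNITION, COLOR_IDENTIFICATION, TEXT_READING, UNKNOWN }

fun routeCommand(phrase: String): Action {
    val normalized = phrase.trim().lowercase()
    return when {
        "object" in normalized -> Action.OBJECT_RECOGNITION
        "color" in normalized || "colour" in normalized -> Action.COLOR_IDENTIFICATION
        "read" in normalized || "text" in normalized -> Action.TEXT_READING
        else -> Action.UNKNOWN
    }
}
```

A real implementation would likely use a richer grammar, but simple keyword matching keeps the interaction robust to variations in how users phrase a request.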
The functional layer is composed of at least three different modules:
- Speech processing module: The application will support navigation based on voice commands.
- Color identification module: The application shall have an integrated AI module that is able to recognize the dominant color at the center of the screen in real time.
- Object and text recognition module: The application integrates Machine Learning APIs/models to recognize the objects and text visible to the camera and vocalize the list of results.
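The color identification module could, for example, average the pixels of a small central patch of the camera frame and map the result to the nearest named color. A minimal platform-independent sketch follows; the color palette and the nearest-match metric (Euclidean distance in RGB) are illustrative assumptions:

```kotlin
// Illustrative sketch of dominant-color naming: average the RGB values of a
// central patch of pixels, then pick the nearest named color by squared
// Euclidean distance. The palette is an assumption; a real app would use a
// much richer set of color names.
data class Rgb(val r: Int, val g: Int, val b: Int)

val palette = mapOf(
    "black" to Rgb(0, 0, 0),
    "white" to Rgb(255, 255, 255),
    "red" to Rgb(255, 0, 0),
    "green" to Rgb(0, 128, 0),
    "blue" to Rgb(0, 0, 255),
    "yellow" to Rgb(255, 255, 0),
)

// Average the patch to smooth out sensor noise before classifying.
fun averageColor(pixels: List<Rgb>): Rgb {
    val n = pixels.size
    return Rgb(
        pixels.sumOf { it.r } / n,
        pixels.sumOf { it.g } / n,
        pixels.sumOf { it.b } / n,
    )
}

fun nearestColorName(c: Rgb): String =
    palette.minByOrNull { (_, p) ->
        val dr = c.r - p.r; val dg = c.g - p.g; val db = c.b - p.b
        dr * dr + dg * dg + db * db
    }!!.key
```

The resulting name would be handed to the speech synthesis module to be read aloud.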

Project Benefits
- The project will make daily life easier for blind or partially sighted people.
- The project provides a cost-effective solution to the problem of understanding one's surroundings, putting already available sophisticated AI models in the palm of the hand.
Project Deliverables
The final deliverable of the project is an Android application that contains:
- Speech processing (recognition + synthesis) module
- On-device object recognition and text recognition module using the ML Kit API
- Dominant Color detection module
| Item Name | Type | No. of Units | Per Unit Cost (in Rs) | Total (in Rs) |
|---|---|---|---|---|
| Google Play Store registration fee | Miscellaneous | 1 | 4700 | 4700 |
| Total (in Rs) | | | | 4700 |