TAN: Technology Assisted Navigation for visually-impaired
2025-06-28 16:36:16 - Adil Khan
Project Area of Specialization: Electrical/Electronic Engineering

Project Summary
Our project aims to provide independent mobilization and navigation for the visually-impaired. It is designed to improve their quality of life by using obstacle detection techniques based on the concepts of Machine Learning as well as sensor-based detection. We have used ultrasonic sensors for object detection and a water level sensor for avoiding puddles or other hazardous presence of water.
We have also incorporated an assistive pathway embedded with RFID tags that are recognised by an RFID reader inside the cane carried by the person in need of navigational assistance, enabling localisation. Our project also comprises the collection of GPS data to create a route map, the application of Machine Learning concepts to it, and object identification through a Pi camera.
The assistive apparel consists of a jacket, belt and cane, along with the RFID-embedded pathway setup. An added feature is that the user can locate the apparel over Wi-Fi.
The main goal is to achieve a prototype that is smart enough to become a source of safe and independent guidance to the person in need of navigational assistance.
Project Objectives
The main objectives revolve around creating a prototype capable of providing independent navigation for the visually-impaired. Hence our objectives are related to
- determining obstacles around the user,
- determining the hazardous presence of water close to the user,
- identifying objects around the user,
- providing instructions for the ease of mobility through a sense of touch or sense of hearing to the user,
- providing instructions to the user to locate our prototype,
- creating a pathway that guides the user for indoor navigation,
- defining a route for the user based on the best path selected through outdoor GPS navigation.
There are three major conceptual implementations in our project.
1) Navigation
- Indoor Navigation:
For this, we have to build a pavement/pathway containing RFID cards with 1 KB of memory, encapsulated in rubber sheets or embedded in the pathway. Each of these cards has a 32-bit (4-byte) UID (Unique Identification) code. As soon as the user carrying the cane, which has an RFID reader inside, passes over the pavement/pathway, each card will be detected along the way and its code will be displayed on the computer. These codes are changeable, which gives us the authority to create a local map by referring to the code of each card as a location. Therefore, localisation is achieved.
- Outdoor Navigation:
To navigate the user independently outdoors, we have used a NEO-6M GPS module, which generates NMEA (National Marine Electronics Association) sentences. These are decoded to extract the longitude and latitude coordinates of the user, giving their exact location for accurate mobility.
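The decoding step above can be sketched as follows. This is a minimal, illustrative parser for the $GPGGA sentence type (one of the NMEA sentences the NEO-6M emits); the helper names and the example sentence are assumptions for demonstration, not the project's actual code.

```python
# Decode a NMEA $GPGGA sentence into decimal-degree coordinates.
# NMEA encodes latitude as ddmm.mmmm and longitude as dddmm.mmmm,
# with separate hemisphere fields (N/S, E/W).

def nmea_to_decimal(value: str, hemisphere: str) -> float:
    """Convert a NMEA (d)ddmm.mmmm field to signed decimal degrees."""
    dot = value.index(".")
    degrees = float(value[: dot - 2])   # digits before the minutes part
    minutes = float(value[dot - 2 :])   # mm.mmmm part
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ("S", "W") else decimal

def parse_gpgga(sentence: str):
    """Extract (latitude, longitude) from a $GPGGA sentence."""
    fields = sentence.split(",")
    return (
        nmea_to_decimal(fields[2], fields[3]),   # latitude + N/S
        nmea_to_decimal(fields[4], fields[5]),   # longitude + E/W
    )

# Illustrative sentence (not real project data):
lat, lon = parse_gpgga(
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
)
print(round(lat, 4), round(lon, 4))  # 48.1173 11.5167
```

In practice the raw sentences would be read from the module's serial output before being parsed like this.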
2) Obstacle detection and avoidance:
This part of the project uses ultrasonic and water level sensors to detect close objects and the hazardous presence of water, respectively. The outcome of these sensors is conveyed through a buzzer (sense of hearing) and a vibratory motor module (sense of touch).
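The detection-to-output logic can be sketched as below. The conversion from echo time to distance follows the standard ultrasonic round-trip calculation; the alert thresholds and reading format are illustrative assumptions, not the project's actual values.

```python
# Map raw sensor readings to the two output channels:
# buzzer for a close obstacle, vibration motor for detected water.

SPEED_OF_SOUND_CM_PER_US = 0.0343   # cm per microsecond at ~20 degrees C
OBSTACLE_THRESHOLD_CM = 50          # assumed alert distance
WATER_THRESHOLD = 300               # assumed sensor reading indicating water

def echo_to_distance_cm(echo_us: float) -> float:
    """Round-trip ultrasonic echo time (microseconds) -> one-way distance in cm."""
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2

def alerts(echo_us: float, water_level: int) -> dict:
    """Decide which outputs to drive for one set of readings."""
    return {
        "buzzer": echo_to_distance_cm(echo_us) < OBSTACLE_THRESHOLD_CM,
        "vibration": water_level > WATER_THRESHOLD,
    }

print(alerts(echo_us=2000, water_level=120))  # obstacle at ~34 cm, no water
```

On the actual hardware these readings come from the Arduino's sensor pins; only the decision logic is shown here.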
3) Machine Learning:
This part of the project creates a picture of the surroundings for the visually impaired through object identification via machine learning, simultaneously prompting them about each object through audio output and thereby making them aware of their surroundings.
Benefits of the Project
As defined by the International Statistical Classification of Diseases, Injuries and Causes of Death, tenth revision (ICD-10), visual impairment encompasses both low vision and blindness (Table I).
| Category | Worse than | Equal to or better than |
|---|---|---|
| 1. Mild or no visual impairment | | 6/18 (20/70) |
| 2. Moderate visual impairment | 6/18 (20/70) | 6/60 (20/200) |
| 3. Severe visual impairment | 6/60 (20/200) | 3/60 (20/400) |
| 4. Blindness | 3/60 (20/400) | 1/60 or counts fingers at 1 metre (5/300, 20/1200) |
| 5. Blindness | No light perception | |
| 6. Undetermined or unspecified | | |
Table I
Source: International Classification of Diseases-10 (2007)
In Pakistan, according to the Pakistan National Blindness and Visual Impairment Survey, the leading cause of blindness in adults over 30 years of age is cataract. While globally 39.1% of all blindness is attributable to cataract, in Pakistan the burden of blindness due to cataract is significantly larger at 51.5%, and 85.4% of blindness in Pakistan is avoidable. Among individuals with moderate visual impairment (<6/18 to ≥6/60), refractive error (43%) and cataract (42%) were the causes of visual impairment.
With the statistics mentioned, it is evident that many individuals in Pakistan are dependent on others for their basic tasks. Moreover, Pakistan as a developing country also faces poverty, unemployment and lower standards of living. Combining visual impairment with a lack of socioeconomic growth leads to a further decline in economic productivity and quality of life. Such individuals also face loss of opportunity and are excluded because of institutional, environmental and attitudinal discrimination.
Multiple studies reinforce the notion that any form of disability, including blindness, afflicts the poor. The economic cost of blindness results in a further decline of the economic status of the individual, as well as that of the entire family. The social discrimination of the blind alienates them from society and results in depression and suicidal ideation.
Our project will benefit those individuals who are dependent on navigational assistance from another person. It will make them independent in their own everyday activities: they can move around, navigate and locate certain objects, avoid injuries and tripping over obstacles, and have a sense of direction in their movement.
Technical Details of Final Deliverable
1) Thorough research on sensors was carried out, covering datasheets and available alternatives for each specific task, so as to achieve better accuracy. Sensors are integrated through an Arduino, since as a micro-controller it has on-chip embedded flash memory, which aids quick code execution and gives a short start-up period; microcontrollers are also well suited to hardware interaction where tasks are predefined. Data processing for object identification, however, cannot be achieved on a microcontroller because of its memory constraints, so it is performed on the Raspberry Pi, which does not have such constraints. A comprehensive study of machine learning algorithms relevant to our domain was carried out to identify the best possible fit for our project, along with developing the task, experience and performance measures for the model created.
2) The main purpose of integrating all sensors with each other is to make the code shorter and more efficient, since their responses are dependent on each other. We have used a water level sensor to detect water and ultrasonic sensors for object detection within a given distance. A vibratory motor module and a buzzer are used as output components for the water level sensor and the ultrasonic sensors, respectively.
3) For indoor navigation, we have planned to set up an embedded pathway with passive RFID tags underneath. Each tag has a UID (Unique Identification) code whose size depends on the memory size of the RFID card; in our case the card is the MIFARE MF1S50YYX_V1, which has 1 KB of memory, so the UID is a 32-bit (4-byte) code. When the user carrying the smart cane, which houses the RFID reader (RC-522), passes over the pathway, the code of each tag will be detected along the way, hence making localisation possible.
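The localisation step above amounts to mapping each scanned 4-byte UID to a known position on the local map. The sketch below illustrates this; the UID values and coordinates are hypothetical placeholders, and the actual tag scanning would happen via the RC-522 reader on the cane.

```python
# Localisation from RFID UIDs: each MIFARE card's 4-byte UID is a key
# into a locally built map of pathway positions.

# Hypothetical local map: UID (hex string) -> (x, y) position in metres
LOCAL_MAP = {
    "04A3B2C1": (0.0, 0.0),   # pathway start
    "04A3B2C2": (0.0, 1.5),
    "04A3B2C3": (0.0, 3.0),   # pathway end
}

def locate(uid_bytes: bytes):
    """Look up a scanned 4-byte UID and return its mapped position."""
    uid = uid_bytes.hex().upper()
    return LOCAL_MAP.get(uid)   # None for unknown tags

print(locate(bytes.fromhex("04A3B2C2")))  # (0.0, 1.5)
```

Because the codes are changeable, the same lookup structure also supports re-mapping tags when the pathway layout changes.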
4) For outdoor navigation, we have used the NEO-6M GPS module. Its purpose is to generate NMEA sentences, which are then decoded to obtain the location co-ordinates, i.e. the longitude and latitude of the user, for accurate outdoor navigation. The NEO-6M module can track up to 22 satellites, which makes it a suitable choice for the given task.
5) Since our project requires both a microcontroller and a microprocessor, we have to interface them through serial communication. This is achieved by installing the Arduino IDE on Raspbian, the Linux-based OS for the Raspberry Pi, which makes it possible for the Arduino to read the sensor data and output it through the Raspberry Pi.
6) Object identification is performed through machine learning using the Raspberry Pi and Pi Camera. For this, we have used TensorFlow, an open-source platform for machine learning. In order to deploy our TensorFlow model on the Raspberry Pi, we have used TensorFlow Lite, a lightweight library made for deploying TF models on embedded and mobile devices. The algorithm used in our project is a CNN (Convolutional Neural Network), a deep-learning model preferred for image classification and object detection applications.
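The post-processing that turns model output into audio prompts can be sketched as below. The (label, score) tuples mimic the kind of per-object results an object-detection model produces; the confidence threshold and labels here are illustrative assumptions, not the project's tuned values.

```python
# Select which detected objects to announce to the user, keeping only
# confident detections and ordering them best-first for text-to-speech.

CONFIDENCE_THRESHOLD = 0.6   # assumed cut-off for speaking a label

def labels_to_announce(detections):
    """Filter (label, score) detections and return labels to speak, best first."""
    confident = [(label, score) for label, score in detections
                 if score >= CONFIDENCE_THRESHOLD]
    confident.sort(key=lambda d: d[1], reverse=True)
    return [label for label, _ in confident]

detections = [("chair", 0.91), ("person", 0.72), ("bottle", 0.35)]
print(labels_to_announce(detections))  # ['chair', 'person']
```

Thresholding like this keeps the audio channel from being flooded with low-confidence guesses, which matters when the output is the user's main picture of their surroundings.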
| Elapsed time since start of the project | Milestone | Deliverable |
|---|---|---|
| Month 1 | Literature Review | Specifications and functionality of sensors, understanding the distinction between a micro-controller and a microprocessor, understanding elementary concepts of machine learning. |
| Month 2 | Interfacing of Sensors | Integration of sensors with micro-controller to produce the desired output |
| Month 3 | Hardware Assembly (indoor) | Setting up an embedded pathway/pavement with passive RFID tags in order to be read by an RFID reader on the cane |
| Month 4 | Collection of GPS data (longitude & latitude) | Decoding of NMEA (National Marine Electronics Association) sentences in order to extract the exact location co-ordinates of the user |
| Month 5 | Interfacing of micro-controller and microprocessor | Achieving serial communication over micro-controller and microprocessor |
| Month 6 | Implementation of Machine Learning | Object identification through Machine Learning using Pi Camera integrated with Raspberry Pi (microprocessor) |
| Month 7 | Hardware Assembly (outdoor) | Setting up the final apparel with all components, soldering connections and encapsulating delicate components. |
| Month 8 | Documentation & Marketing | Report writing, printing, etc. |