Multi-data and Multi-model Based Smart Wheelchair for Physically Challenged Patients
2025-06-28 16:28:38 - Adil Khan
Project Area of Specialization: NeuroTech

Project Summary
This project proposes a smart wheelchair operated through a Brain-Computer Interface (BCI). It targets people with medical conditions, such as paralysis, that prevent them from using their limbs for activities of daily living. The wheelchair works on the principle that the human brain generates distinct electrical signals for different thoughts and activities; we will train our system to recognize those patterns and convert them into control commands such as move and stop. Our system uses the non-invasive BCI approach, in which a device placed on the user's scalp records the brain's electrical activity as an electroencephalogram (EEG). These signals are sent over Bluetooth to a computing device, where they are processed and mapped to control commands, which are then forwarded to a microcontroller mounted on the wheelchair. An obstacle detection system using laser sensors will be integrated with the microcontroller and will decide whether the wheelchair may move once a command has been issued. Smart glasses will provide a live camera view of the surroundings. Moreover, a speed control mechanism will let the user regulate the wheelchair's speed through his or her mental state: an excited state increases the speed and a stressful state decreases it. A multi-model approach integrating several techniques will be used to achieve better results. The project will be executed at the Pakistan Society for Rehabilitation of Disabled (https://psrd.org.pk/).
Project Objectives
- The main goal of our project is to make the wheelchair easy to use, requiring minimal physical interaction with the hardware.
- Prioritize the user's safety by integrating a camera and ultrasound sensors to avoid obstacles in the environment.
- Allow the user to control the speed of the wheelchair through emotions: if the user is excited, the speed should increase; if the user is under stress, it should decrease.
- Make the system adaptable to different users and environments.
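The emotion-based speed rule above can be sketched as a simple bounded update. The state labels, speed limits, and step size below are illustrative assumptions, not values fixed by the proposal:

```python
# Hypothetical sketch: adjusting wheelchair speed from a detected mental state.
# MIN_SPEED, MAX_SPEED, and STEP are assumed values for illustration only.

MIN_SPEED = 0.2   # m/s, assumed safe minimum
MAX_SPEED = 1.5   # m/s, assumed safe maximum
STEP = 0.1        # m/s change per detected state update

def adjust_speed(current_speed: float, mental_state: str) -> float:
    """Increase speed when the user is excited, decrease it when stressed,
    and clamp the result to the safe operating range."""
    if mental_state == "excited":
        current_speed += STEP
    elif mental_state == "stressed":
        current_speed -= STEP
    return max(MIN_SPEED, min(MAX_SPEED, current_speed))
```

Clamping to a fixed range ensures that repeated "excited" detections can never push the wheelchair past a safe maximum, which matters for a safety-critical device.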

Our non-invasive BCI approach, shown in Figure 1, is built from the following components:
The first step in any BCI application is to acquire brain signals. We will record the user's EEG with the EMOTIV EPOC+ headset shown in Figure 2, with the electrode placement shown in Figure 4. The user must wear the headset so that the EEG readings can be taken and sent to the software via Bluetooth [12]. The EPOC+ headset records EEG at 256 samples per second, with values in microvolts, which are displayed in the EMOTIV PRO software.
Because the raw EEG values are in the microvolt range, they are pre-processed and converted into an understandable format. The raw recordings also contain noise and artifacts from the environment, which are removed during pre-processing to obtain clean signals.
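The pre-processing step can be sketched as follows. This is a simplified stand-in for the full pipeline: it removes the DC offset, discards windows contaminated by large artifacts, and smooths the remainder; the 100 µV artifact threshold and window length are assumed values, not figures from the proposal:

```python
# Illustrative pre-processing sketch for one window of raw EEG samples
# (in microvolts). Threshold and smoothing window are assumptions.

def preprocess(samples, artifact_threshold_uv=100.0, window=4):
    """Remove the DC offset, reject the window if any centered sample
    exceeds the artifact threshold, then apply a short moving average."""
    mean = sum(samples) / len(samples)
    centered = [s - mean for s in samples]
    if any(abs(s) > artifact_threshold_uv for s in centered):
        return None  # window contaminated by an artifact; discard it
    smoothed = []
    for i in range(len(centered)):
        chunk = centered[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed
```

A real implementation would more likely use band-pass filtering and dedicated artifact-removal methods, but the structure (center, reject, smooth) is the same.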
Figure 4: Placement of electrodes

Useful features will be extracted from the signals via machine learning algorithms, and these features will be used to classify the different commands. We will extract the features that have the greatest impact on our output.
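As a minimal illustration of what a feature vector for one EEG channel might contain, the sketch below computes simple time-domain statistics. Real EEG pipelines typically use frequency-band power and similar measures; these statistics are an illustrative stand-in, not the proposal's final feature set:

```python
# Illustrative feature extraction for one pre-processed EEG channel.
# The chosen statistics are examples, not the project's actual features.

def extract_features(channel):
    """Return (mean absolute amplitude, variance, zero-crossing count)
    for a list of samples from one EEG channel."""
    n = len(channel)
    mean = sum(channel) / n
    mean_abs = sum(abs(s) for s in channel) / n
    variance = sum((s - mean) ** 2 for s in channel) / n
    zero_crossings = sum(
        1 for a, b in zip(channel, channel[1:]) if a * b < 0
    )
    return (mean_abs, variance, zero_crossings)
```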
As every person generates different brain signals for different thoughts and behaviours, we will train the model on each individual's EEG. A profile will be generated for every user of the system, and every profile will be trained separately to translate that user's thoughts into wheelchair commands.
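The per-user profile idea can be sketched with a simple store of trained profiles, one per user. The nearest-centroid classifier below is a deliberately minimal stand-in for whatever model the project finally adopts; all names here are hypothetical:

```python
# Hypothetical per-user profile store. Each profile keeps one centroid
# (mean feature vector) per command, trained on that user's own EEG;
# classification picks the nearest centroid.

class UserProfile:
    def __init__(self):
        self.centroids = {}  # command label -> mean feature vector

    def train(self, command, feature_vectors):
        """Store the mean of this user's feature vectors for a command."""
        n = len(feature_vectors)
        dim = len(feature_vectors[0])
        self.centroids[command] = [
            sum(v[i] for v in feature_vectors) / n for i in range(dim)
        ]

    def classify(self, features):
        """Return the command whose centroid is closest to the features."""
        def dist(c):
            return sum((a - b) ** 2 for a, b in zip(features, c))
        return min(self.centroids, key=lambda cmd: dist(self.centroids[cmd]))

profiles = {}  # one trained profile per registered user

def get_profile(user_id):
    """Fetch or lazily create the profile for a given user."""
    return profiles.setdefault(user_id, UserProfile())
```

Keeping the model state inside a per-user profile is what lets the same hardware adapt to different users, as the adaptability objective requires.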
The smart glasses in Figure 3 will provide a live view of the surroundings, where computer vision algorithms will recognize obstacles in the wheelchair's path, while the obstacle detection sensor in Figure 1, integrated with the wheelchair, will measure the distance to objects up to 5.6 meters away in real time.
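The decision of whether to execute a movement command after an obstacle reading can be sketched as a simple gate. The 5.6 m figure is the sensor's stated maximum range; the 0.8 m stopping distance below is an assumed safety margin, not a value from the proposal:

```python
# Sketch of the obstacle-gating decision between a classified command
# and the motors. SAFE_STOP_DISTANCE_M is an assumed safety margin.

SENSOR_MAX_RANGE_M = 5.6    # stated maximum range of the distance sensor
SAFE_STOP_DISTANCE_M = 0.8  # assumed minimum clearance before moving

def allow_motion(command, obstacle_distance_m):
    """Veto movement when an obstacle is inside the safety margin;
    non-translating commands (stop, speed changes) always pass."""
    if command not in ("move_forward", "move_backward",
                       "move_left", "move_right"):
        return True
    if obstacle_distance_m is None:  # nothing detected within sensor range
        return True
    return obstacle_distance_m > SAFE_STOP_DISTANCE_M
```

Gating at this layer means the safety check runs regardless of which classifier produced the command.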
Moreover, the feature-extracted signals will be converted into control commands with the help of machine learning. There will be seven different command classes:
- Move forward
- Move backward
- Move right
- Move left
- Stop
- Increase speed
- Decrease speed
In this step, the classified outputs are translated into wheelchair commands and sent to the microcontroller, which executes them on the wheelchair: for 'Move forward', the wheelchair moves in the forward direction, and if the user's mental state is excited, the wheelchair accelerates, and so on.
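The translation from the seven classifier outputs to microcontroller actions can be sketched as a lookup table. The message strings below are hypothetical; the real protocol between the laptop and the microcontroller would be defined during hardware/software integration:

```python
# Illustrative mapping from the seven classified outputs to messages for
# the wheelchair microcontroller. The message strings are hypothetical.

COMMANDS = {
    "move_forward":   "MOTOR FWD",
    "move_backward":  "MOTOR REV",
    "move_right":     "MOTOR RIGHT",
    "move_left":      "MOTOR LEFT",
    "stop":           "MOTOR STOP",
    "increase_speed": "SPEED UP",
    "decrease_speed": "SPEED DOWN",
}

def to_microcontroller_message(classified_label):
    """Translate a classifier label into the message sent over the link;
    unknown or garbled labels fall back to a safe stop."""
    return COMMANDS.get(classified_label, "MOTOR STOP")
```

Defaulting unknown labels to a stop command is a defensive choice: a misclassification should never produce uncontrolled movement.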
Benefits of the Project
The project will be a success if the proposed system achieves high accuracy in translating brain signals into control commands for different users. The main challenges in achieving this will be the model's long training time and latency. Minimal interaction with the hardware addresses the user-sensitivity issue, and a smart obstacle avoidance system ensures the user's safety. The speed control mechanism based on the user's mental state will empower users and boost their morale and confidence, because they will be in complete control of the wheelchair's movement.
Technical Details of Final Deliverable
The final deliverable of our project will be an integrated hardware/software system. A patient wears the EPOC+ headset, which connects to a laptop via Bluetooth, while the wheelchair's microcontroller connects to the laptop via Wi-Fi. Real-time EEG signals are acquired and processed in the EMOTIV PRO software and then fed to machine learning algorithms that remove artifacts and noise. The cleaned signals are classified and translated into wheelchair commands.
Final Deliverable of the Project: HW/SW integrated system
Core Industry: Health
Other Industries: IT
Core Technology: NeuroTech
Other Technologies:
Sustainable Development Goals: Good Health and Well-Being for People

Required Resources

| Elapsed time since project start (months) | Milestone | Deliverable |
|---|---|---|
| Month 1 | Project approval and Requirement Gathering | Documentation and Analysis |
| Month 2 | Signal acquisition and pre-processing | Integrating hardware and software |
| Month 3 | Feature extraction | Integrating hardware and software |
| Month 4 | Model training | Integrating hardware and software |
| Month 5 | Real-time classification | Implementation |
| Month 6 | Testing with patients | Evaluation |