Adil Khan 9 months ago
AdiKhanOfficial #FYP Ideas

Thought to Text Conversion Using BCI Based Smart Glasses

Project Summary We introduced ?Thought to Text Conversion Using BCI Based Smart Glasses?. The proposed Hybrid-BCI model will target neurological disorder patients which cannot move or talk and needs a way to communicate with other people or even doctors. Our model will provid

Project Title

Thought to Text Conversion Using BCI Based Smart Glasses

Project Area of Specialization

Wearables and Implantables

Project Summary

We introduce “Thought to Text Conversion Using BCI Based Smart Glasses”. The proposed hybrid BCI model targets patients with neurological disorders who cannot move or talk and need a way to communicate with other people, including their doctors. Our model provides BCI smart glasses that are artificially intelligent, highly portable, and easy to use. The key objective is to make communication easier for patients, so the model includes a thought-to-text module that lets patients communicate through the BCI-based smart glasses. Users will write text using an app that connects wirelessly to the BCI-controlled smart glasses. An on-screen keyboard will be shown on the smart glasses, removing the need to interact with any external device; existing systems require interaction with a physical device such as a monitor or mobile screen, which makes mobility and portability a challenge. A BCI headset will convert brain signals to text: it decodes the signals so the patient can type just by thinking, and can then communicate or search over the internet, including social media, using the thought-to-text module. This will further help doctors understand what patients need to say, making communication easier, faster, and more reliable.

Project Objectives

Objectives and Goals

Our system will be an intelligent, innovative, and efficient thought-to-text module that allows patients who cannot move or talk to write text and communicate just by thinking. Because the module is a hybrid of smart glasses and BCI, it will be more efficient and accurate.

Objectives

  • Design and implement an intelligent, portable thought-to-text module.
  • Develop a smart thought-to-text module usable not only by patients but by anyone.
  • Build the model using machine learning and artificial intelligence for training and testing.
  • Make doctor-to-patient communication easier, so patients can communicate with doctors directly.
  • Assist doctors and caretakers in monitoring the patient, with a comprehensive user interface that lets the patient report anything in case of a critical condition.
  • Develop an intelligent technology backed by cloud services, providing storage and analysis in real time.
  • Avoid the cost of implanting electrodes in the brain, so the patient can communicate by just thinking.
  • Support wireless communication, eliminating the tangled wires of previous technologies.
  • Design a system incorporating smart glasses with BCI.

Goals

  • Our main goal is to help neurological patients who cannot move or talk to communicate easily.
  • The project should be easy for patients to use and offer a friendly user interface.
  • We aim to build a thought-to-text system that is efficient, cost-effective, and works on real-time data.
  • The project will use machine learning algorithms and artificial intelligence, so the module is intelligent, assists patients, and keeps learning.
  • The system will work wirelessly; where previous technologies required the patient to look at a physical keyboard to type, in our project the keyboard is displayed on the smart glasses.

Project Implementation Method

System Implementation

                                              Figure 1-System architecture

The architecture of our model is shown in Figure 1. Its basic components are discussed below:

MADGAZE Ares:

                                             

                                                      Figure 2-MADGAZE Ares

MADGAZE Ares will be used to display an on-screen keyboard on its see-through display. This allows the patient to communicate without looking at any other device, since the keyboard is shown on the glasses themselves.

EMOTIV EPOC+:

                                       

                                                   Figure 3-Emotive EPOC+

EMOTIV EPOC+ is the BCI headset that connects with the brain and records brain signals. The signals are sent to the cloud as data and decoded to text, and the EPOC+ then connects with the app to write on the screen of any device. This helps patients communicate.

Thought to text app:

This app will be used to communicate with the BCI (Brain Computer Interface): the thoughts decoded by the BCI are displayed on the device screen. The main structure of the app revolves around how it communicates with the BCI. Direct communication between the BCI and the app is done over Wireless Fidelity (Wi-Fi) or Bluetooth, as a combined hardware and software communication system. Once connected, the app can write down each letter decoded by the BCI.
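As a sketch of how the app side of this link could work, the snippet below models the exchange as a simple byte stream. It is a minimal illustration under an assumed protocol (each decoded letter arrives as one newline-terminated line over TCP); `fake_headset`, the host/port, and the framing are all hypothetical stand-ins, since the real EPOC+ wireless protocol is not described here.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9099  # placeholder address; a real headset differs


def fake_headset(letters):
    """Stand-in for the BCI side: streams decoded letters, one per line."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    for ch in letters:
        conn.sendall((ch + "\n").encode())
    conn.close()
    srv.close()


def app_receive(n_letters):
    """App side: connect to the headset and assemble the typed text."""
    for _ in range(100):            # retry until the listener is up
        try:
            cli = socket.create_connection((HOST, PORT))
            break
        except OSError:
            time.sleep(0.05)
    buf = b""
    while buf.count(b"\n") < n_letters:
        chunk = cli.recv(64)
        if not chunk:
            break
        buf += chunk
    cli.close()
    return buf.decode().replace("\n", "")


t = threading.Thread(target=fake_headset, args=("HELP",))
t.start()
text = app_receive(4)
t.join()
print(text)  # word assembled letter by letter
```

In a real deployment the transport would be the headset's own Bluetooth or Wi-Fi interface rather than a loopback socket, but the letter-by-letter accumulation on the app side would look much the same.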

End User Interface:

In our case, the end-user interface is provided by the app, which offers an interactive, efficient, and adaptive interface. The patient uses this interface as a means of communication and can write text through the app just by thinking.

Cloud Storage:

Cloud storage will be used to store previously typed texts, so the typing history can provide suggestions to patients while they type. We will also store brain-signal data on the EMOTIV EPOC+ cloud server. Unlike traditional databases, this information can be accessed instantly from anywhere with high processing power, supporting long-term data processing and storage.

Benefits of the Project

  • Controlling the environment through thought has long been a dream, and patients with neurological disorders find communication difficult; with our model, such patients can use the electrical signals of brain activity to interact, influence, and communicate.
  • The inclusion of BCI in health care has solved many problems in the field, and the same is true of smart glasses, which have proved invaluable for patients with neurological disorders who cannot move or talk.
  • A few years ago it seemed like a wonder that a device could decode thoughts into actual speech or written words. Such a device could enhance existing speech interfaces, and would be a potential game-changer for people with speech pathologies, and even more so for “locked-in” patients who lack any speech or motor function. Instead of saying something, our model types text from thought alone.
  • In the health-care field, technology is often complicated and hard to understand, so managing and controlling such devices takes time. To overcome this, our model uses wireless technologies such as Wi-Fi and Bluetooth.
  • Our system detects patients' brain signals through electrodes and decodes them into text, allowing patients to communicate with doctors and other people.
  • Because it is difficult for patients with neurological disorders to operate or interact with end-user devices, our model incorporates smart glasses. Smart glasses have been adopted in health care for several useful applications, including hands-free photo and video documentation, telemedicine, Electronic Health Record retrieval and input, rapid diagnostic test analysis, education, and live broadcasting.
  • Many patients find it difficult to communicate with doctors or physicians in person and may prefer messaging alternatives such as text messaging.
  • Our system presents a brain-to-text setup in which patients type on an on-screen keyboard shown on a see-through display while their brain activity is recorded by the BCI. This forms the basis of a database of neural-signal patterns that can be matched to speech elements, or “phones”.
  • The system will use artificial intelligence (AI) and machine learning algorithms to automatically suggest text the patient may want to type.
  • Our system enhances portability and mobility by providing the on-screen keyboard on smart glasses.

Technical Details of Final Deliverable

The final deliverable will be implemented as BCI smart glasses that convert brain signals into text, realizing the thought-to-text system. Its technical components are as follows:

Signal Acquisition:

This is the measurement of brain signals using a particular sensor (e.g., electrodes for electrophysiologic activity, fMRI for metabolic activity). The signals are then amplified to levels suitable for pre-processing. The signals are then digitized and transmitted to a computer.
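The amplify-then-digitize chain described above can be sketched numerically. The snippet below is illustrative only: the sampling rate, amplifier gain, ADC resolution, and reference voltage are assumed values (not EPOC+ specifications), and a synthetic 10 Hz sine stands in for real scalp EEG.

```python
import numpy as np

FS = 256          # sampling rate in Hz (assumed, typical for consumer EEG)
GAIN = 1000.0     # amplifier gain: scalp EEG is on the order of microvolts
ADC_BITS = 14     # ADC resolution (assumed)
V_REF = 4.5       # ADC reference voltage in volts (assumed)

rng = np.random.default_rng(0)
t = np.arange(FS) / FS                       # one second of samples
# synthetic "brain signal": 20 uV 10 Hz oscillation plus sensor noise
analog_uv = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * rng.standard_normal(FS)

amplified = analog_uv * GAIN                 # amplify into the ADC's range
lsb = 2 * V_REF / (2 ** ADC_BITS)            # volts per quantization step
digital = np.round(amplified / lsb).astype(int)  # digitized sample codes

print(digital.shape, digital.dtype)
```

The `digital` array is what would then be transmitted to the computer for pre-processing.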

Feature Extraction:

This is the process of extracting useful information from the signals: distinguishing relevant characteristics from extraneous content and representing them in a compact form suitable for translation into output commands. Feature extraction will be performed by the machine learning algorithms used to make our model intelligent. We extract the features that have the most impact on the output; in our case, features derived from brain activity and brain signals.
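One common compact representation for EEG is power per frequency band. The sketch below, under the assumption of a 256 Hz sampling rate and the standard delta/theta/alpha/beta bands, turns a one-second window into a four-value feature vector; the function name `band_powers` is our own illustration, not part of any particular library.

```python
import numpy as np

FS = 256  # sampling rate in Hz (assumed)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}


def band_powers(signal, fs=FS):
    """Compact feature vector: mean spectral power in each EEG band."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in BANDS.values()])


t = np.arange(FS) / FS
alpha_wave = np.sin(2 * np.pi * 10 * t)  # 10 Hz lies in the alpha band
feats = band_powers(alpha_wave)
print(feats.argmax())  # index 2 ("alpha") dominates for this input
```

Feature vectors like `feats` are what the next stage translates into commands.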

Feature Translation:

In this step the features extracted previously are passed to a feature translation algorithm, typically a machine learning algorithm, which converts them into the appropriate commands for the output device. For example, a decrease in power in a given frequency band can be translated into upward movement of a computer cursor, or a P300 potential can be translated into selection of a highlighted letter. The translation algorithm should be robust enough to adapt to spontaneous or learned changes in the user's signals, and to ensure that the user's range of feature values covers the full range of device control.
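As a toy illustration of feature translation, the sketch below maps feature vectors to commands with a nearest-centroid classifier trained on synthetic data. The command names and the classifier choice are assumptions for illustration; a real system would train a more capable model on recorded EEG features.

```python
import numpy as np

COMMANDS = ["select", "down", "right"]  # hypothetical output commands


class NearestCentroidTranslator:
    """Translate feature vectors into commands via the nearest class centroid."""

    def fit(self, X, y):
        # one centroid per command class
        self.centroids = np.array([X[y == k].mean(axis=0)
                                   for k in range(len(COMMANDS))])
        return self

    def translate(self, x):
        # pick the command whose centroid is closest to the feature vector
        d = np.linalg.norm(self.centroids - x, axis=1)
        return COMMANDS[int(d.argmin())]


# synthetic training data: 30 four-dimensional feature vectors per class
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=k, scale=0.2, size=(30, 4)) for k in range(3)])
y = np.repeat(np.arange(3), 30)

tr = NearestCentroidTranslator().fit(X, y)
print(tr.translate(np.full(4, 2.0)))  # features near class 2 -> "right"
```

The translated command stream is then handed to the device-output stage.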

Device Output:

In the device-output step, the text is generated: the commands produced by the feature translation algorithm operate the external device, providing functions such as letter selection and cursor control. This output drives the thought-to-text module, which is then used to communicate with other people.
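The cursor-control and letter-selection loop can be sketched as follows. The keyboard layout and command names below are hypothetical; the point is only how a stream of translated commands becomes typed text.

```python
# hypothetical on-screen keyboard layout shown on the smart glasses
KEYBOARD = ["ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZ_.?!"]


def run_output(commands):
    """Apply a stream of translated commands to the on-screen keyboard."""
    row = col = 0      # cursor starts at the top-left key ("A")
    typed = []
    for cmd in commands:
        if cmd == "right":               # move cursor one column, wrapping
            col = (col + 1) % len(KEYBOARD[0])
        elif cmd == "down":              # move cursor one row, wrapping
            row = (row + 1) % len(KEYBOARD)
        elif cmd == "select":            # type the key under the cursor
            typed.append(KEYBOARD[row][col])
    return "".join(typed)


# moving down once and right once lands on "H", which "select" then types
print(run_output(["down", "right", "select"]))
```

Each selected letter would then be rendered on the glasses and forwarded to the app.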

Text Suggestion:

The patient's previously typed text will be stored in a database or on the cloud and used to provide text suggestions to the patient.
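A minimal version of history-based suggestion is prefix matching ranked by frequency. The sketch below assumes the history is a list of previously typed lines; the function name and ranking rule are illustrative choices, not the project's final design.

```python
from collections import Counter


def suggest(history, prefix, k=3):
    """Rank previously typed words that start with the current prefix."""
    counts = Counter(w for line in history for w in line.lower().split())
    # most_common() orders by frequency, so frequent words surface first
    matches = [w for w, _ in counts.most_common() if w.startswith(prefix)]
    return matches[:k]


history = ["I need water", "I need the doctor", "call the doctor please"]
print(suggest(history, "ne"))  # words beginning with "ne"
print(suggest(history, "do"))  # words beginning with "do"
```

A production system could replace this with an AI language model, but the interface (prefix in, ranked completions out) stays the same.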

Final Deliverable of the Project

HW/SW integrated system

Core Industry

Health

Other Industries

IT

Core Technology

Wearables and Implantables

Other Technologies

Artificial Intelligence (AI), Internet of Things (IoT), NeuroTech

Sustainable Development Goals

Good Health and Well-Being for People

Required Resources

Item Name                    Type            No. of Units   Per Unit Cost (in Rs)   Total (in Rs)
MadGaze Ares Smart Glasses   Equipment       1              70000                   70000
Miscellaneous                Miscellaneous   1              10000                   10000
Total (in Rs)                                                                       80000
If you need this project, please contact me on contact@adikhanofficial.com