Emotion Recognition in Response to Mulsemedia
Multimedia encompasses everything we watch, listen to, or read: sound, text, graphics (images and video), animation, and more. It has become a common component of modern software and hardware, appearing in applications on both desktop and mobile platforms. Traditional multimedia, however, engages only the user's auditory and visual senses. Humans have five basic senses, and engaging more than two of them can create a more immersive and enjoyable environment. For this purpose we can use mulsemedia (MULtiple SEnsorial MEDIA). Engaging additional senses, such as touch and olfaction, can enhance the user experience. A multiple-sensorial environment is one that engages more than two human senses while the user watches a video, by synchronizing different components with the video's audio-visual content. Mulsemedia opens a new era of technology by offering a new perspective across many fields, but this advancement also demands new solutions for the new media, in hardware as well as software.
In this project, a setup is developed that engages more than two senses: audition, vision, haptics (touch), and others. A video is selected containing different environments with different weather conditions. These conditions are recreated artificially by synchronizing the audio-visual content of the video with a fan, a heater, and a haptic device to generate cold air, hot air, and other effects. We use videos with two to three different weather conditions and observe how the user responds to the mulsemedia environment.
Conventional multimedia uses a combination of two senses: vision and hearing. The traditional multimedia experience does not allow the user to fully immerse themselves in what they are viewing. This is the motivation for setting up and working on this project.
The main aim of the project is to enhance the conventional multimedia experience by developing a mulsemedia environment.
A four-step approach is used for emotion recognition in response to mulsemedia using EEG: setup development, data acquisition and pre-processing, feature extraction, and emotion classification.
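The four-step pipeline can be sketched as function stubs. This is a minimal outline only; the function names, the simulated signal, and the toy threshold classifier are illustrative assumptions, not the project's actual implementation:

```python
import numpy as np

def acquire(n_channels=4, n_samples=1280, fs=256):
    """Step 2: data acquisition -- here simulated EEG, 5 s at 256 Hz (assumed rates)."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((n_channels, n_samples)), fs

def preprocess(eeg):
    """Step 2 (cont.): remove per-channel DC offset (a stand-in for real filtering)."""
    return eeg - eeg.mean(axis=1, keepdims=True)

def extract_features(eeg):
    """Step 3: one feature per channel -- signal variance (illustrative choice)."""
    return eeg.var(axis=1)

def classify(features, threshold=1.0):
    """Step 4: toy threshold rule standing in for a trained classifier."""
    return "high arousal" if features.mean() > threshold else "low arousal"

eeg, fs = acquire()
label = classify(extract_features(preprocess(eeg)))
print(label)
```

In the real setup, `acquire` would read from the EEG headset, `preprocess` would apply proper filtering and artifact removal, and `classify` would be a model trained against the subjective-analysis labels.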
The main steps for designing a mulsemedia environment are explained below.
Videos are selected on the basis of the emotions they elicit and the background effects they support. We deal with six effects: cold air, hot air, olfaction (sense of smell), haptic, visual, and audio. A total of 12 videos were selected on the basis of the arousal scale: for each quadrant of the scale, 3 videos were selected, and the senses involved differ from video to video. The senses involved in each video are shown in the table below.
Senses involved in videos
| Video | Senses involved |
|---|---|
| 1 | Audio, visual, cold air, olfaction |
| 2 | Audio, visual, hot air, olfaction |
| 3 | Audio, visual, haptic, olfaction |
Every video clip is 90 to 120 seconds long, and the effect duration is roughly 60 seconds (see image).
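The quadrants mentioned above come from plotting self-reported ratings on the valence-arousal plane. A small sketch of how a rating pair could be mapped to a quadrant label; the rating range and quadrant names are assumptions, not the project's defined scale:

```python
def quadrant(valence, arousal):
    """Map a (valence, arousal) rating pair (assumed -1..1 scale)
    to one of the four quadrants of the valence-arousal plane."""
    if arousal >= 0:
        return "HVHA (e.g. excited)" if valence >= 0 else "LVHA (e.g. angry)"
    return "HVLA (e.g. relaxed)" if valence >= 0 else "LVLA (e.g. sad)"

print(quadrant(0.7, 0.8))  # → "HVHA (e.g. excited)"
```

With 12 videos and 3 per quadrant, each of the four labels would be covered by exactly three clips.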
After video selection, each video clip must be synchronized with the hardware. The framework is built around an Arduino that controls the other hardware components: as the video plays, the participant feels the sensations that the hardware produces at the corresponding moments. Videos are synchronized by integrating hardware and software, and the integration must be tested to confirm it works properly. Arduino code was written in which the timestamps of the videos are synchronized with the hardware components, and the video player and the Arduino communicate through a port on the computer. In short, these steps must be completed before video synchronization.
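The synchronization logic boils down to a per-video schedule of timestamps at which each effect should be active. A minimal sketch; the schedule format and the specific times are illustrative assumptions (the ~60 s effect window inside a 90-120 s clip follows the durations stated above):

```python
# Hypothetical effect schedule for one video: effect -> (start_s, end_s).
SCHEDULE = {
    "cold_air":  (20.0, 80.0),
    "olfaction": (30.0, 50.0),
}

def active_effects(t, schedule=SCHEDULE):
    """Return the set of effects that should be ON at video time t (seconds)."""
    return {name for name, (start, end) in schedule.items() if start <= t < end}

print(active_effects(35.0))  # both windows cover t = 35 s
```

The media player would poll the playback position a few times per second, call `active_effects`, and forward the result to the Arduino.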
We are designing a GUI that includes a multimedia player, built in Visual Studio. A video is selected in the media player and played on the screen. The user watches the video, and after every video a subjective-analysis form appears, which the user must fill in so that their emotions can be classified. Since we will also detect emotion from brain signals using EEG, we will compare those results with the subjective analysis. After filling in the form, the user sees the next video.
Emotion is a strong feeling that derives from one's circumstances, mood, and relationships with others. Emotions can be classified into different categories, e.g., happy, sad, angry, relaxed, and excited. We detect the emotions of different users from facial expressions and from brain activity using EEG, as studies conclude that human emotions are distinguishable through facial expressions and brain signals. The videos are selected using the arousal scale.
We have designed a control system based on an Arduino microcontroller, connected to a heater, a cooling fan, a haptic vest, and an olfaction dispenser through relays, transistors, and resistors. These components create the immersive mulsemedia environment. We have also developed a media-player GUI that not only plays media on the device but also regulates the fan and the heater, activating them at the required points in time with respect to the playing video. The fan provides an immersive environment for airy conditions, the heater is used for warm, heated scenery, the haptic vest recreates impact situations, and the olfaction dispenser produces aroma.
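For the PC-to-Arduino link, each relay channel needs an addressable ON/OFF command. The one-byte protocol below is purely a hypothetical sketch (the document does not specify the wire format); it shows one simple way the media player could encode commands and the Arduino sketch could decode them:

```python
# Hypothetical one-byte command: bits 0-1 select the relay channel, bit 2 = ON/OFF.
CHANNELS = {"fan": 0, "heater": 1, "haptic": 2, "olfaction": 3}

def encode_command(channel, on):
    """Pack a relay channel name and an ON/OFF flag into one byte for the serial link."""
    return bytes([CHANNELS[channel] | (0b100 if on else 0)])

def decode_command(b):
    """Inverse mapping, as the Arduino firmware would interpret the received byte."""
    code = b[0]
    name = {v: k for k, v in CHANNELS.items()}[code & 0b11]
    return name, bool(code & 0b100)

print(decode_command(encode_command("heater", True)))  # ('heater', True)
```

Keeping each command to a single byte makes the Arduino-side parser trivial and avoids framing issues on the serial link.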
After complete hardware synchronization, we will record participant data using EEG and PSG for emotion classification. For this purpose, a recording setup must be developed.
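Emotion classification from EEG commonly relies on band-power features (e.g. alpha and beta power per channel). A small sketch of band-power estimation from a periodogram using NumPy; this is an illustrative method, not necessarily the feature set the project will use:

```python
import numpy as np

def bandpower(signal, fs, band):
    """Average power of `signal` within the frequency `band` (lo, hi) in Hz,
    estimated from a simple single-window periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    lo, hi = band
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

fs = 256                                   # assumed sampling rate
t = np.arange(fs * 4) / fs                 # 4 s of data
sig = np.sin(2 * np.pi * 10 * t)           # pure 10 Hz tone, inside the alpha band
alpha = bandpower(sig, fs, (8, 13))
beta = bandpower(sig, fs, (13, 30))
print(alpha > beta)  # True: the power is concentrated in the alpha band
```

A real pipeline would use a windowed estimator (e.g. Welch's method) on artifact-cleaned epochs, but the band-masking idea is the same.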
On the software side, we have developed a media player using Visual Studio. At the back end, it works with the Arduino software to stay synchronized with the chosen media clips and videos. It is a media player with basic playback capabilities: pause, play, and skipping forward to the next clip, and it can play video/music files on the computer. Other features, such as fast forward, reverse, file markers (if present), and variable playback speed, can be added later.
The Arduino software is used for synchronizing the videos with the hardware. It uses the timestamps of the videos: according to the effect scheduled at each timestamp, the Arduino sends an ON or OFF signal to the hardware, which responds accordingly.
For proper operation, all components are connected to one another. Visual Studio plays the video and sends a signal to the Arduino. The Arduino receives the signal and waits for the times at which effects occur in the video. Each time an effect occurs, the Arduino sends an ON signal to the hardware, and the hardware responds accordingly.
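Since the hardware only needs a command when an effect changes state, the controller can compare the previously active and currently active effect sets and emit only the transitions. A sketch of this edge-detection step (function and effect names are illustrative):

```python
def transitions(prev_on, now_on):
    """Compare the previously-active and currently-active effect sets and
    return the commands to send: (effect, True) for ON, (effect, False) for OFF."""
    cmds = [(e, True) for e in sorted(now_on - prev_on)]    # newly started effects
    cmds += [(e, False) for e in sorted(prev_on - now_on)]  # effects that just ended
    return cmds

print(transitions({"fan"}, {"fan", "heater"}))  # [('heater', True)]
print(transitions({"fan", "heater"}, set()))    # fan and heater both turn OFF
```

This keeps serial traffic minimal: the fan that is already running receives no repeated ON commands, and everything is switched OFF when the clip's effect windows end.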
| Elapsed time since project start | Milestone | Deliverable |
|---|---|---|
| Month 1 | Solution design and planning | Estimation of all the work and a clear-cut working idea of the project |
| Month 2 | Literature review and setup development | The setup of the working model in light of prior work |
| Month 3 | Setup development | Development of the complete project setup, including the mulsemedia environment |
| Month 4 | Video selection | Selection of the videos, with the project's hardware fully up and running |
| Month 5 | Data acquisition | Acquisition of data from the participants |
| Month 6 | Feature extraction & data processing | Extraction and processing of features from the acquired data to show the impact of the setup |
| Month 7 | Result analysis | Comparison of the impact of mulsemedia versus multimedia |
| Month 8 | Additional work, optimization, and research | Optimization of the results, further research, and report writing |