Adil Khan 9 months ago
AdiKhanOfficial #FYP Ideas

Auto Music Generation using Facial Expressions


Project Title

Auto Music Generation using Facial Expressions

Project Area of Specialization

Artificial Intelligence

Project Summary

We are developing a mobile application that uses Artificial Intelligence and Machine Learning to generate music with the help of facial recognition. In this project we will train a model on a dataset of tones from instruments such as the piano. The application will use the smartphone camera to detect the presence of the user and verify the liveness of the video feed.

Upon assessment of the user's expressions and emotions, the application will generate music at run time. We believe a multi-channel Convolutional Neural Network (CNN) is a better approach for detecting patterns like these, since it provides a global average pooling layer.
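The global average pooling layer mentioned above collapses each feature map from the final convolutional layer into a single mean value, so the number of channels, not the input resolution, fixes the size of the classifier head. A minimal NumPy sketch of the operation itself (illustrative only, not our final network):

```python
import numpy as np

def global_average_pooling(feature_maps):
    # feature_maps: (height, width, channels) activations from the
    # final convolutional layer; collapse each channel to its mean
    return feature_maps.mean(axis=(0, 1))

# toy activations: a 2x2 spatial grid with 3 channels
fmaps = np.arange(12, dtype=float).reshape(2, 2, 3)
pooled = global_average_pooling(fmaps)  # one value per channel
```

In a real model this is what a layer such as Keras's `GlobalAveragePooling2D` computes before the final dense classification layer.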

Each time the application is launched, new music will be generated to guarantee uniqueness, and it will be played in the music player inside the application. The unique aspect of this project is that it uses facial expression analysis to generate the music. For instance, when the user is in front of the camera, the application will first verify whether the subject is live. Then it will use machine learning algorithms to classify the facial expression. If the user is smiling, that implies the user is in a joyous or pleasant mood. Using this data from the live feed, the algorithm takes the mood as input and generates music from a model trained on a piano tones dataset, creating new and unique tunes, which are then played in the music player.

We are also ensuring that no data is collected for malicious or personal purposes through the application, in order to maintain its security and integrity. The video feed will not be recorded: the application asks for the user's consent to use the camera, and during the live feed no recording takes place. The user can replay the music generated by the application using the music player.
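As a rough illustration of the mood-to-music step described above, the detected expression label can be mapped to musical parameters before generation begins. All labels, scales, and tempos below are hypothetical placeholders, not our final design:

```python
# Hypothetical mapping from a detected expression label to musical
# parameters; every value here is illustrative, not a final choice.
MOOD_TO_MUSIC = {
    "happy":   {"scale": "C major", "tempo_bpm": 120},
    "sad":     {"scale": "A minor", "tempo_bpm": 70},
    "neutral": {"scale": "G major", "tempo_bpm": 95},
    "angry":   {"scale": "E minor", "tempo_bpm": 140},
}

def music_parameters(expression):
    # fall back to a neutral mood for any unrecognized label
    return MOOD_TO_MUSIC.get(expression, MOOD_TO_MUSIC["neutral"])
```

The generation model would then condition on these parameters when sampling new tunes.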


Project Objectives

The application will be of interest to people who share a passion for music or are simply looking for a good time. It can quickly become a widely sought-after outlet for enthusiasts, since it eliminates the need to browse for music; instead, it creates music well suited to the user according to the facial expression the application detects.

Any music lover can install this application on Android or iOS, since it will be developed with Flutter, which provides cross-platform development.

Project Implementation Method

We are developing a mobile application that aims to use Artificial Intelligence and Machine Learning to implement the idea of generating music with the help of facial recognition. In this project we will train a model on tones from instruments such as piano, drums, guitar, and violin. The application will use the smartphone camera to detect the face of the user and ensure the liveness of the video feed.

For face detection, the Haar cascade methodology has been widely used to extract the facial features of a human, and the most widely recommended algorithm is the Viola-Jones algorithm. Support Vector Machines (SVM), AdaBoost, and Histogram of Oriented Gradients (HOG) have also been used to recognize faces. The performance of these algorithms varies considerably, and it has been observed that the Viola-Jones framework provides the best performance. Upon evaluation of the facial expression and emotion, the application will generate music at run time.

Facial recognition therefore plays a vital role in the area of pattern research and recognition. Traditional machine learning techniques have proven less effective, with poor stability and efficiency, when it comes to detecting the patterns of a human face. A multi-channel Convolutional Neural Network (CNN) is a better approach for detecting such patterns, since it provides a global average pooling layer, and the dataset can be improved and augmented prior to training. The accuracy achieved so far is 68.4% on the FER2013 emotion dataset, with a prediction time of an impressive 0.12 s, so it is now possible to detect facial expressions in real time.
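The Viola-Jones framework owes much of its speed to the integral image, which lets any Haar-like rectangle feature be evaluated with only four array lookups regardless of the rectangle's size. A minimal NumPy sketch of that idea (in practice one would use OpenCV's pretrained cascades via `cv2.CascadeClassifier` rather than implement this by hand):

```python
import numpy as np

def integral_image(img):
    # ii[y, x] = sum of all pixels above and to the left of (y, x), inclusive
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, height, width):
    # sum of any axis-aligned rectangle from four integral-image lookups
    total = ii[top + height - 1, left + width - 1]
    if top > 0:
        total -= ii[top - 1, left + width - 1]
    if left > 0:
        total -= ii[top + height - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

# toy 4x4 image of ones: every k x k rectangle should sum to k*k
img = np.ones((4, 4))
ii = integral_image(img)
```

A Haar-like feature is then just the difference between the sums of adjacent rectangles, which is why detection can run at camera frame rates.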

Each time the application is launched, new music will be generated in order to ensure uniqueness.

As far as the music is concerned, it seems to obey certain rules such as harmony and counterpoint. However, the subjectivity and variation of music cannot easily be captured by combinations of predefined rules. The automation of music can nevertheless be achieved with music theory, mathematical models, or randomness [9]. So far there are two dominant approaches: WaveNet and the Long Short-Term Memory model (LSTM). The WaveNet model tries to generate new samples from the original distribution of the data, which makes it a generative model. The Long Short-Term Memory model is a derivation of Recurrent Neural Networks (RNNs) and is capable of applications such as speech recognition, text summarization, and video classification.
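In the LSTM approach, the model is typically trained to predict the next note from a fixed-length window of preceding notes. A minimal sketch of preparing such training pairs (the note names and window size below are illustrative, not our actual dataset):

```python
def make_training_pairs(notes, window=4):
    # slide a fixed-length window over the melody; the model learns to
    # predict the note that follows each window
    pairs = []
    for i in range(len(notes) - window):
        pairs.append((notes[i:i + window], notes[i + window]))
    return pairs

# hypothetical melody encoded as note names
melody = ["C4", "E4", "G4", "E4", "C4", "E4", "G4", "C5"]
pairs = make_training_pairs(melody, window=4)
```

At generation time the same model is applied repeatedly: each predicted note is appended to the window, and the window is slid forward to produce the next one.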

Benefits of the Project

  1. Machine Learning algorithms
  2. Mobile application development using Flutter
  3. Compiling Datasets
  4. Facial Expression analysis

Technical Details of Final Deliverable

  • We have used Flutter for app development.
  • High productivity. Since Flutter is cross-platform, you can use the same code base for your iOS and Android app.
  • Great performance. Dart compiles into native code and there is no need to access OEM widgets as Flutter has its own.
  • Fast and simple development.
  • Compatibility.
  • Open-source.

Final Deliverable of the Project

Software System

Core Industry

IT

Other Industries

Core Technology

Artificial Intelligence(AI)

Other Technologies

Sustainable Development Goals

No Poverty

Required Resources

Item Name      Type        No. of Units   Per Unit Cost (in Rs)   Total (in Rs)
Smart Phone    Equipment   3              23000                   69000
Total (in Rs)                                                     69000
If you need this project, please contact me on contact@adikhanofficial.com