Classification of Multimodal Natural Disaster Tweets
Natural disasters can cause severe damage to cities and infrastructure, and it is difficult to assess the damage at different locations and gain a clear picture of it. In this era of networking, social media streams have proven to be a source of information about disaster locations in the form of tweets, images, and videos. Social media has become one of the main sources of user-generated content (UGC) on the severity of natural disasters. However, extracting the relevant information from social media in an organized manner remains a challenging task. Therefore, the purpose of this research is to provide an efficient algorithm that can reduce the workload of disaster management teams by classifying social media streams for relevance, humanitarian aid, and damage assessment. The project will classify relevant tweets about Hurricane Harvey, Hurricane Irma, the California wildfires, the Mexico earthquake, the Nepal earthquake, the Sri Lanka floods, and the Iran-Iraq earthquake by applying natural language processing and computer vision based deep learning models ensembled together. The first task is to categorize tweets and their respective images by relevance (whether they contain information about a natural disaster). Secondly, the image data will be further categorized by type of humanitarian aid, which includes injured or dead people, infrastructure damage, vehicle damage, and missing or found people. Finally, damage assessment will categorize the severity of the events as mild, moderate, or severe. A demo application will be developed to provide a user interface through which natural disaster management teams can access this information.
To classify tweets and their respective images from multimodal natural disaster Twitter datasets using ensemble-based deep learning models, achieving the following objectives:
1. To categorize images and tweets as informative or non-informative.
2. To classify informative items by type of humanitarian aid: injured or dead people, infrastructure damage, vehicle damage, and missing or found people.
3. To classify damage severity on a scale from severe to no damage.
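The three objectives form a label hierarchy (relevance, then humanitarian-aid category, then severity). A minimal sketch of that hierarchy in Python, where the exact label names and dictionary layout are assumptions made for illustration, not the project's final schema:

```python
# Hypothetical encoding of the three-level label hierarchy described
# in the objectives above. Label names follow the proposal's wording.
LABEL_HIERARCHY = {
    "informative": {
        "humanitarian": [
            "injured_or_dead_people",
            "infrastructure_damage",
            "vehicle_damage",
            "missing_or_found_people",
        ],
        "severity": ["severe", "moderate", "mild", "no_damage"],
    },
    "non_informative": {},
}

def parent_category(fine_label: str) -> str:
    """Return the group ('humanitarian' or 'severity') that a
    fine-grained label belongs to under the 'informative' branch."""
    for group, labels in LABEL_HIERARCHY["informative"].items():
        if fine_label in labels:
            return group
    raise KeyError(fine_label)
```

A model's fine-grained prediction can then be mapped back up the hierarchy, e.g. `parent_category("vehicle_damage")` yields `"humanitarian"`.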
The approach focuses on preprocessing the multimodal CrisisMMD dataset, which involves data augmentation of images, resizing images as required by each model, data cleaning of tweets, and extraction of persons and cars from images for tasks such as damaged-vehicle and injured- or missing-people detection, as categorized by the tweet. The dataset will be split into training and development sets, and models will be evaluated using F1 score, precision, and recall. The focus is to build a system of multiple deep learning models that can solve the hierarchical-label problem. The work will emphasize combining pre-trained models such as YOLOv3 on images. Tweets will be classified using natural language processing models such as an RNN and BERT + LSTM, while images will be classified using computer vision models such as DenseNet, fine-tuned VGG16, Inception, and their ensembles. These models will be applied in a pipeline: tweets retrieved via Twitter search can be provided to the model as test samples to classify their relevance at each level of the label hierarchy.
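As one concrete piece of the preprocessing step, the "data cleaning of tweets" mentioned above can be sketched as follows. The specific cleaning steps (stripping URLs, mentions, and the `#` symbol, then normalizing whitespace and case) are assumptions about a typical tweet-cleaning pass; the proposal itself does not fix them:

```python
import re

def clean_tweet(text: str) -> str:
    """Minimal tweet-cleaning sketch for the preprocessing stage.
    The exact steps are illustrative assumptions, not the project's
    final pipeline."""
    text = re.sub(r"https?://\S+", "", text)  # drop URLs
    text = re.sub(r"@\w+", "", text)          # drop @mentions
    text = text.replace("#", "")              # keep hashtag words, drop '#'
    text = re.sub(r"\s+", " ", text).strip()  # collapse whitespace
    return text.lower()
```

For example, `clean_tweet("Flooding in #Houston! @user see https://t.co/abc")` returns `"flooding in houston! see"`, which is then ready for tokenization by the RNN or BERT + LSTM models.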
1. Categorization of the dataset into informative and non-informative tweets and images is expected to achieve comparatively better accuracy and may outperform similar models.
2. Optimize information processing for humanitarian organizations, which do not want to be overloaded with noisy messages that are of a personal nature or that contain no useful information about the natural disaster.
3. With first-hand data obtained from the model about the disaster, critical and potentially actionable information can be extracted from Twitter to help rescue organizations optimize their relief efforts.
Final deliverables include:
A prototype in the form of a web application that will take a tweet and an image and report the following:
1- Whether the data is relevant.
2- Humanitarian aid / type of damage.
3- Severity of damage (if any).
| Item Name | Type | No. of Units | Per Unit Cost (in Rs) | Total (in Rs) |
|---|---|---|---|---|
| Total (in Rs) | | | | 0 |