STUDENT ENGAGEMENT ANALYSIS USING FACIAL EXPRESSION RECOGNITION


2025-06-28 16:36:10 - Adil Khan

Project Title

STUDENT ENGAGEMENT ANALYSIS USING FACIAL EXPRESSION RECOGNITION

Project Area of Specialization

Artificial Intelligence

Summary

Increasing student engagement has emerged as a key challenge for lecturers and teaching institutions. Several of the instruments currently used to quantify engagement – for example, self-reports, teachers' introspective assessments, and checklists – are unwieldy, lack the temporal resolution required to understand the interplay between engagement and learning, and sometimes measure student compliance rather than engagement. As yet, there is no automated framework or device that quantifies the degree of a student's effective learning. The goal of increasing student engagement has motivated interest in methods to measure it.

Our focus in this project is to analyze students through their facial emotions to determine whether or not they are engaged in the lecture. Student engagement is an important parameter for teachers in assessing their own lecture delivery: did the students grasp the concepts, or were they bored during the lecture? In essence, this is a tool for evaluating how effectively students are learning.

This project first aims at acquiring a dataset of student facial expressions, then experiments with multiple approaches to analyze student attention. A face detection and recognition module locates the faces of students using the Viola-Jones algorithm; after face detection, the feature extraction module determines whether the expression is happy, sad, angry, disgusted, fearful, surprised, or neutral. Based on features extracted with LBP, HOG, SIFT, and SURF, a judgment is made about the expression shown.

This is done in the classification step, which classifies the emotions of the student. An SVM is used as the classifier for facial expression recognition; the results should indicate that computer vision techniques can be used to build real-time automated engagement detectors with precision similar to that of human observers.

In this project, we aim to capture the perceptual judgments that human observers contribute and to automate the procedure using machine learning and computer vision.

Project Objectives

Our aim is to develop a system that analyzes the attention of students during a lecture through their facial expressions.

The system will label each student with one of the following engagement levels:

  1. Not engaged at all – e.g., not looking at the board, not concentrating, or with the eyes completely shut.
  2. Engaged – e.g., the eyes are open and clearly on task. The student can be commended for his or her level of commitment to the work.
Project Implementation Method

Working Model /Implementation

The block diagram in Fig. 1 shows the working model of our project.

Figure 1: Working Model

Pre-processing

Image pre-processing is an essential step in the facial expression recognition task. The aim of the pre-processing stage is to obtain images that have normalized intensity and uniform size and shape, and that depict only a face expressing a specific emotion. The pre-processing procedure should also remove the effects of illumination and lighting. The face region of an image is detected using the Viola-Jones method, which is based on Haar-like features. Viola-Jones is an algorithm that achieves competitive object detection rates in real time, and it was originally designed for the face detection problem. The features used by Viola and Jones are computed from pixel sums over rectangular regions placed over the image and are highly sensitive to vertical and horizontal edges.


Feature Extraction

Local Binary Patterns (LBP), Speeded-Up Robust Features (SURF), the Histogram of Oriented Gradients (HOG), and the Scale-Invariant Feature Transform (SIFT) are some of the many techniques for feature extraction. Features are extracted from the face and then passed to the SVM for training and classification.
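Two of these descriptors, LBP and HOG, are available in scikit-image, so the extraction step might be sketched as follows (the 48x48 crop size and descriptor parameters are assumptions for illustration):

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog

def lbp_histogram(face, points=8, radius=1):
    """Uniform LBP texture descriptor: P + 2 = 10 histogram bins for P = 8."""
    lbp = local_binary_pattern(face, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2),
                           density=True)
    return hist

def hog_vector(face):
    """HOG descriptor of a 48x48 crop: 5x5 blocks x 2x2 cells x 9 bins = 900."""
    return hog(face, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

face = np.random.randint(0, 256, (48, 48)).astype(np.uint8)
print(lbp_histogram(face).shape)  # (10,)
print(hog_vector(face).shape)     # (900,)
```

Either vector (or a concatenation of several descriptors) can then serve as the per-face feature vector handed to the classifier.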

Classification

The learning stage consists of two parts: extracting the feature vectors and training. In the first step, a descriptor (SURF, SIFT, LBP, or HOG) is used to extract a feature vector for each image. In the second step, we use the feature vectors obtained in the first step to train a Support Vector Machine (SVM) classifier. The SVM classifier used for training is fast and robust.
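With scikit-learn, this training step is a few lines. The data below is synthetic, standing in for real descriptors of labeled face crops, and the linear kernel is only an assumed starting point:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for the real descriptors: 40 vectors per class,
# class 0 ("engaged") and class 1 ("not engaged"), shifted apart.
X = np.vstack([rng.normal(0.0, 1.0, (40, 900)),
               rng.normal(3.0, 1.0, (40, 900))])
y = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel="linear")  # RBF or other kernels could be tried instead
clf.fit(X, y)
print(clf.score(X, y))      # training accuracy on the toy data
```

In the real project, `X` would hold LBP/HOG/SIFT/SURF vectors and `y` the engagement labels from the annotated dataset; the trained `clf` object is what the testing phase reuses.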

Testing

For testing, we first detect a face in the test images and apply some pre-processing before visual recognition of the facial expression. One constraint in face detection is occlusion, which makes the detection procedure complex, as do changes in illumination, lighting, and face orientation.

Once the face is detected, the feature extraction module computes features for expression recognition. Based on the extracted features, a decision is made about the expression being shown. This is done in the classification step, which classifies the emotions of the students.
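The final decision step then maps each extracted feature vector to an engagement label. A minimal self-contained sketch, with a toy classifier standing in for the one trained on the real dataset:

```python
import numpy as np
from sklearn.svm import SVC

LABELS = ["engaged", "not engaged"]

rng = np.random.default_rng(1)
# Toy training data standing in for descriptors of pre-processed face crops.
X_train = np.vstack([rng.normal(0, 1, (30, 64)), rng.normal(4, 1, (30, 64))])
y_train = np.array([0] * 30 + [1] * 30)
clf = SVC(kernel="linear").fit(X_train, y_train)

def classify_face(feature_vector):
    """Map one extracted feature vector to an engagement label."""
    return LABELS[int(clf.predict(feature_vector.reshape(1, -1))[0])]

print(classify_face(np.zeros(64)))      # near the class-0 cluster
print(classify_face(np.full(64, 4.0)))  # near the class-1 cluster
```

At test time, `classify_face` would be called once per face crop returned by the detection and feature-extraction stages.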

Benefits of the Project

Normally, only the teacher and the students witness what happens in the lecture room. Using visual information, it is possible to monitor and inspect the academic quality of the lecture without interfering with anyone. Computer vision lets us turn that visual information into semantic statistics for investigating the nature and academic situation of the class.

This project assesses student behavior and the level of teachers' satisfaction in the classroom. Emotions are the best indicator of how a person feels. This framework makes it easier for teachers to assess student concentration levels and make future lectures interesting. In addition, this module will also help teachers to self-evaluate their performance and strive to improve on their shortcomings in subsequent lectures.

Technical Details of Final Deliverable

In this project, we analyze the students through facial emotions to determine whether or not they are engaged in the lecture. A face detection and recognition module locates the faces of students using the Viola-Jones algorithm, after which the feature extraction module runs. For this, we have acquired a dataset of our own consisting of video shots; frames are extracted from those videos every 5 seconds, and faces are then extracted from those frames. We then train the system, on the basis of the features extracted from the student faces, to decide whether each student is engaged or not.
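The every-5-seconds sampling reduces to picking one frame index per `fps x 5` frames; a small helper (a sketch, with the helper name our own) makes the arithmetic explicit:

```python
def frame_indices(fps, total_frames, interval_s=5):
    """Indices of the frames to sample, one every `interval_s` seconds."""
    step = int(round(fps * interval_s))
    return list(range(0, total_frames, step))

# A 30 fps clip of 450 frames (15 s) is sampled every 150 frames.
print(frame_indices(30, 450))  # [0, 150, 300]
```

In practice these indices would be used while reading the video (e.g. with OpenCV's `cv2.VideoCapture`), keeping only the sampled frames for face extraction.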

The final product is deployed on a Raspberry Pi module and will give a complete overview of student engagement in real time.

Final Deliverable of the Project: HW/SW integrated system
Type of Industry: Education, IT
Technologies: Others
Sustainable Development Goals: Quality Education

Required Resources

Item Name                          Type           No. of Units   Per Unit Cost (in Rs)   Total (in Rs)
Raspberry Pi                       Equipment      1              7000                    7000
Other equipment for Raspberry Pi   Miscellaneous  3              3000                    9000
Camera                             Equipment      1              5000                    5000
Laptop                             Equipment      1              50000                   50000
Total (in Rs)                                                                            71000
