Lecture Evaluation in Smart Classrooms Using Gestures and Facial Expressions
Education is a vital aspect of our lives because it builds the foundation on which we progress as a society. The world is becoming increasingly complex, interconnected and dynamic, and education is the vehicle that enables us to navigate this complexity with understanding, collaboration and problem-solving across cultures. Furthermore, language competencies and educators play a crucial role in ensuring that students are properly educated. According to Velasquez et al. (2013), numerous studies have indicated that a caring teacher can positively impact learning outcomes, motivation, and social and moral development. However, Chen et al. (2019) and Islam et al. (2016) note that university classrooms in particular are often overcrowded with large numbers of students, which makes it strenuous for lecturers to monitor students' reactions to the lecture being delivered and to obtain immediate feedback on whether students are able to follow it.
Therefore, in view of this issue, this project proposes an approach to the automatic estimation of student attention during classroom lectures. Both facial and body properties of a student, including gaze point and body posture, are used to observe how students perceive the lecture being delivered. Machine learning algorithms are used to train classifiers that estimate the time-varying attention level of each student.
In the present system, lectures proceed regularly, yet comprehension varies widely: roughly 10-20% of students grasp about 80% of the material, 20-60% of students grasp about 60%, and the remaining students do not grasp the concepts at all. To address this situation and provide the lecturer with honest feedback, there is a need for lecture evaluation using face recognition.
Instead of using conventional methods, the proposed system aims to develop an automated system that records students' attention using facial recognition technology. The main objective of this work is to make lectures more effective.
The main purpose of monitoring students' attention is to collect information that will inform and facilitate improvement in classroom learning.
Monitoring students' attention using cameras is a non-invasive approach to digitizing students' behavior. Understanding students' attention spans and the types of behavior they exhibit helps make teaching methods more effective for students.
Step 1: Facial expression and gesture Detection
To begin, the camera detects and recognizes facial expressions and gestures. A facial expression or gesture is detected most reliably when the person is looking directly at the camera, although technological advances now allow slight deviations from a frontal view to work as well.
Step 2: Face Analysis
Next, an image of the facial expression and gesture is captured and analyzed. Most facial recognition relies on 2D images rather than 3D. Each face is made up of distinguishable landmarks, or nodal points; a human face has approximately 80 of them. Facial recognition software analyzes these nodal points, such as the distance between the eyes or the shape of the cheekbones.
Step 3: Converting an Image to Data
The analysis of the facial expression and gesture is then turned into a mathematical representation: the facial features become numbers in a numerical code.
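Steps 2 and 3 can be sketched together as turning named nodal-point coordinates into a normalized numeric code. The point names and the particular distances chosen below are assumptions for illustration, not the report's actual feature set.

```python
import numpy as np

def to_feature_vector(landmarks):
    """Convert named nodal points (x, y) in pixels into a numeric code.

    `landmarks` maps illustrative point names to pixel coordinates; the
    names and the chosen distances are assumptions for this sketch.
    """
    p = {k: np.asarray(v, dtype=float) for k, v in landmarks.items()}
    inter_eye = np.linalg.norm(p["left_eye"] - p["right_eye"])
    features = np.array([
        inter_eye,                                            # eye-to-eye distance
        np.linalg.norm(p["left_brow"] - p["left_eye"]),       # eyelid-to-eyebrow
        np.linalg.norm(p["mouth_left"] - p["mouth_right"]),   # mouth width
    ])
    # Normalize by the inter-eye distance so the code is scale-invariant
    # (robust to students sitting nearer or farther from the camera).
    return features / inter_eye

vec = to_feature_vector({
    "left_eye": (100, 120), "right_eye": (160, 120),
    "left_brow": (100, 100), "mouth_left": (110, 180), "mouth_right": (150, 180),
})
print(vec)  # first element is always 1.0 after normalization
```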
Step 4: Finding the Result
The code is then compared against a database of facial expressions and gestures. The dashboard breaks the detected faces down into the three most common facial expressions (bored, satisfied, confused), allowing the lecturer to better observe students' actual reactions to a lecture.
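The comparison against the database can be sketched as nearest-neighbor matching over the three dashboard expressions. The reference codes below are made-up placeholder values; in practice they would come from labeled training data.

```python
import numpy as np

# Illustrative database of labeled expression codes (placeholder values,
# not real training data).
DATABASE = {
    "bored":     np.array([1.0, 0.20, 0.55]),
    "satisfied": np.array([1.0, 0.35, 0.70]),
    "confused":  np.array([1.0, 0.45, 0.50]),
}

def classify(code):
    """Return the database label whose stored code is nearest (Euclidean)."""
    return min(DATABASE, key=lambda label: np.linalg.norm(DATABASE[label] - code))

print(classify(np.array([1.0, 0.34, 0.69])))  # → satisfied
```

Aggregating these per-face labels over a lecture session yields the dashboard breakdown described above.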
Facial recognition technology can be programmed to recognize a wide range of nonverbal expressions and emotions. Through this, a professor can assess the emotional state of the class to determine which parts of the lecture are most exciting and engaging, and where students' attention appears to diminish. In this way, every unique face can function like a uniquely identifiable thumbprint that also speaks, through verbal and nonverbal data.
As class-engagement data of this sort comes in, week to week and semester to semester, faculty and administrators can partner to build new data models that unlock powerful insights into how students learn, what methods are most effective, and what differentiates great classes (and great teachers) from less-effective learning experiences.
Furthermore, as a student progresses toward graduation one semester at a time, aggregate data can perhaps be used to discover learning strengths and areas of concern, enabling more tailored learning experiences that lead each student to better outcomes.
The database used in the study consisted of facial expression images from the Cohn-Kanade database [16]. Two types of parameters were extracted from each facial image: real-valued and binary. A total of 15 parameters, eight real-valued and seven binary, were extracted from each image, and the real-valued parameters were normalized. Generalized neural networks were trained with all fifteen parameters as inputs and seven output nodes corresponding to the seven facial expressions (neutral, angry, disgust, fear, happy, sad and surprised).
Based on initial testing, the best-performing neural networks were recruited to form a generalized committee for expression classification. Because a number of ambiguous and no-classification cases arose during initial testing, specialized neural networks were trained for the angry, disgust, fear and sad expressions, and the best-performing of these were recruited into a specialized committee. A final integrated committee neural network classification system was built utilizing both the generalized and specialized committee networks, and was then evaluated with an independent expression dataset not used in training or in initial testing. A generalized block diagram of the entire system is shown in Figure 1.
Figure 1: Generalized block diagram of the integrated committee neural network classification system.
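The committee stage described above can be sketched as averaging the per-class scores of several independently trained networks and taking the arg-max over the seven expression classes. The member score vectors below are illustrative placeholders, not trained networks.

```python
import numpy as np

EXPRESSIONS = ["neutral", "angry", "disgust", "fear", "happy", "sad", "surprised"]

def committee_predict(member_scores):
    """Combine committee members' 7-class score vectors by averaging,
    then return the expression with the highest average score."""
    avg = np.mean(member_scores, axis=0)
    return EXPRESSIONS[int(np.argmax(avg))]

# Three hypothetical committee members; two lean "happy", one "surprised".
scores = [
    np.array([0.05, 0.05, 0.05, 0.05, 0.60, 0.05, 0.15]),
    np.array([0.10, 0.05, 0.05, 0.05, 0.50, 0.05, 0.20]),
    np.array([0.05, 0.05, 0.05, 0.05, 0.20, 0.05, 0.55]),
]
print(committee_predict(scores))  # → happy
```

In the integrated system, low-confidence or ambiguous generalized-committee outputs would additionally be routed to the specialized committee for the four difficult expressions.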
Two types of parameters were extracted from the facial images of 97 subjects: (1) real-valued parameters and (2) binary parameters. The real-valued parameters take a definite value depending on the distance measured, expressed in number of pixels. The binary measures indicate either present (= 1) or absent (= 0). In all, eight real-valued measures and seven binary measures were obtained.
Upper eyelid to eyebrow distance – the distance between the upper eyelid and the eyebrow surface.
Inter-eyebrow distance – the distance between the lower central tips of the two eyebrows.
Upper eyelid to lower eyelid distance – the distance between the upper eyelid and the lower eyelid.
Top lip thickness – the thickness of the top lip.
Lower lip thickness – the thickness of the lower lip.
Mouth width – the distance between the tips of the lip corners.
Mouth opening – the distance between the lower surface of the top lip and the upper surface of the lower lip.
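Since each real-valued measure is a pixel distance between two landmark points, the measures can be computed directly once landmarks are located. The landmark names below are assumptions for this sketch; the pixel-distance computation itself matches the report's description.

```python
import numpy as np

def dist(a, b):
    """Euclidean distance in pixels between two (x, y) landmark points."""
    return float(np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float)))

def real_valued_measures(lm):
    """Compute four of the real-valued measures from named landmark points.

    The landmark names are illustrative assumptions; the report measures
    these distances in pixels, as done here.
    """
    return {
        "inter_eyebrow": dist(lm["left_brow_tip"], lm["right_brow_tip"]),
        "eyelid_to_eyebrow": dist(lm["upper_eyelid"], lm["eyebrow"]),
        "mouth_width": dist(lm["mouth_left"], lm["mouth_right"]),
        "mouth_opening": dist(lm["top_lip_bottom"], lm["bottom_lip_top"]),
    }

m = real_valued_measures({
    "left_brow_tip": (120, 100), "right_brow_tip": (150, 100),
    "upper_eyelid": (110, 115), "eyebrow": (110, 100),
    "mouth_left": (115, 170), "mouth_right": (155, 170),
    "top_lip_bottom": (135, 168), "bottom_lip_top": (135, 176),
})
print(m["mouth_width"])  # → 40.0
```

Normalizing these raw pixel values, as the report does before training, removes the effect of face size and camera distance.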
| Item Name | Type | No. of Units | Per-Unit Cost (Rs) | Total (Rs) |
|---|---|---|---|---|
| IP Camera, 4 MP | Equipment | 1 | 15000 | 15000 |
| IPTV DVR | Equipment | 1 | 15000 | 15000 |
| SSD, 1 TB | Equipment | 1 | 15000 | 15000 |
| Cables | Equipment | 1 | 8000 | 8000 |
| Adapters, 12 V DC | Equipment | 2 | 1000 | 2000 |
| LED Screen | Equipment | 1 | 15000 | 15000 |
| Stationery | Miscellaneous | 1 | 5000 | 5000 |
| **Total (Rs)** | | | | 75000 |