Face Detection and Smile Classifier

Patrick Wong, Eric Chan, Gregory Peaker


Introduction

We extended our face detector by adding a second component to the project: a smile detection algorithm and classifier. In this part of the project, we investigate whether a machine learning algorithm, in particular a neural network, can deliver a smile recognition system usable in everyday life. We employ computer vision techniques, some of which were used earlier for the face detection algorithm, to accomplish this. Although current smile detection technology is considered reliable, our goal is to implement our own detector that comes close to the existing technology, in the hope of eventually improving on it.

Method

The algorithm operates on faces already extracted from pictures, but in the interest of efficiency it was implemented as a separate module from the face detector. We used a collection of 134 smiling and non-smiling faces found throughout the Web, split into two sets: 46 training images and 88 testing images. Each face is at least 250 pixels wide and 350 pixels tall, so as to ensure we could extract and measure the facial features. We identified ten features of the face that are potentially vital in determining whether a person is smiling, quantified them using image analysis algorithms, and, for anomalous input, calculated the values manually. These attributes and their values were then compiled into a spreadsheet as input for our subsequent machine learning algorithms.
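The compilation step above can be sketched as writing one row per face, with the ten feature values plus a smile label. This is only an illustration: the feature names below are hypothetical stand-ins (the paper does not list them by name), and the values are dummy numbers.

```python
import csv

# Hypothetical names standing in for the ten measured attributes;
# the actual feature set is described in the text, not named here.
FEATURES = ["teeth_white_pct", "upper_lip_angle", "lower_lip_angle",
            "forehead_wrinkles", "cheek_folding", "eye_width",
            "eye_height", "nose_width", "nose_height", "mouth_width"]

def write_feature_table(rows, labels, path):
    """Write one row per face: the ten feature values plus a smile label."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(FEATURES + ["smiling"])
        for values, label in zip(rows, labels):
            writer.writerow(list(values) + [label])

# Example: two faces with made-up feature values.
write_feature_table([[0.42, 150, 136, 1, 1, 30, 12, 40, 55, 80],
                     [0.05, 176, 178, 0, 0, 29, 11, 41, 54, 62]],
                    ["yes", "no"], "smile_features.csv")
```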

Feature Quantification

Determining the percentage of teeth visible, given a picture of just the lip portion of a face, was not a trivial task. We took the segmentation algorithm used for the face detector and modified it so that it could be trained on different colors.

[Figure: percentage of white found]
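A minimal sketch of the idea: count near-white pixels in a grayscale lip crop and divide by the region size. The intensity threshold here is an assumption for illustration; the project instead trained its segmentation on sampled colors.

```python
import numpy as np

def percent_white(lip_region, threshold=200):
    """Fraction of near-white (tooth-like) pixels in a grayscale lip crop.

    `threshold` is an assumed intensity cutoff, not the project's
    trained color segmentation.
    """
    region = np.asarray(lip_region)
    return float((region >= threshold).sum()) / region.size

# Toy 3x3 grayscale patch: three of nine pixels are near-white.
patch = np.array([[255, 250, 10],
                  [240,  30, 20],
                  [ 15,  25, 35]])
print(percent_white(patch))  # -> 0.3333...
```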

We calculated the curvature of the upper and lower lips with the help of an edge detector; here we used the Canny edge detector, as it gave us the most accurate results. We then scan the edge image to find the left-most and right-most points where the lip starts and ends, respectively. Using the highest point of the lip, we calculate the angle formed by the three points, and we do the same with the lowest point. The edge-detected image also helped us determine whether a given face had forehead wrinkles or cheek folding, by scanning for lines at predicted areas of the face.
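The three-point angle computation can be sketched as follows. This assumes the lip corners and the highest (or lowest) lip point have already been found by scanning the Canny edge image; a flatter lip yields an angle near 180 degrees, while a curved (smiling) lip yields a smaller angle.

```python
import math

def angle_at(vertex, left, right):
    """Angle in degrees at `vertex`, formed by rays toward `left` and `right`.

    For the upper lip, `vertex` would be the highest lip point and
    `left`/`right` the corners found in the edge image.
    """
    ax, ay = left[0] - vertex[0], left[1] - vertex[1]
    bx, by = right[0] - vertex[0], right[1] - vertex[1]
    cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    # Clamp against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

# Flat lip: corners level with the peak -> 180 degrees.
print(angle_at((50, 10), (0, 10), (100, 10)))
# Curved lip: corners below the peak -> a smaller angle.
print(angle_at((50, 0), (0, 20), (100, 20)))
```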

The face detector also helped us locate the eyes and nose, and we calculated the dimensions of these features in much the same manner as the curvature calculation, minus the third point.

Final Steps

Once the features of all the picture samples had been quantified, we loaded our training and test data into Weka, an open-source data-mining package that provides many machine learning classifiers.
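Weka reads its input from ARFF files, so the transfer step amounts to emitting the feature table in that format. A minimal sketch, with illustrative attribute names (the project's ten features would take their place):

```python
def write_arff(rows, labels, path, relation="smiles"):
    """Write numeric feature rows plus a yes/no smile label in ARFF format.

    Attribute names are illustrative, not the project's actual names.
    """
    attrs = ["teeth_white_pct", "upper_lip_angle", "lower_lip_angle"]
    with open(path, "w") as f:
        f.write(f"@relation {relation}\n\n")
        for name in attrs:
            f.write(f"@attribute {name} numeric\n")
        f.write("@attribute smiling {yes,no}\n\n@data\n")
        for values, label in zip(rows, labels):
            f.write(",".join(str(v) for v in values) + f",{label}\n")

# Two dummy faces: one smiling, one not.
write_arff([[0.42, 150, 136], [0.05, 176, 178]], ["yes", "no"], "train.arff")
```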