A Comparative Analysis of Feed-Forward and Generalized Regression Neural Networks for Face Recognition Using Principal Component Analysis

In this paper we present a comparative analysis of the performance of feed-forward neural network and generalized regression neural network based face recognition. We use a different number of inner epochs for each input pattern according to its difficulty of recognition. We run the system for different numbers of training patterns and evaluate its performance in terms of recognition rate and training time. Our face recognition application combines Principal Component Analysis (PCA) with both neural networks: PCA is used for feature extraction, and the neural network is used as a classifier to identify the faces. We use the ORL database for all experiments.


Introduction
The task of recognizing human faces is quite complex. The human face carries a great deal of information, but working with all of it is time consuming and inefficient. It is better to use a small set of unique and important features (facial feature vectors) and discard the rest in order to make the system efficient. Face recognition systems can be widely used in areas where high security is needed, for example airports, military bases, and government offices, and can help in places where unauthorized access is prohibited. Sirovich and Kirby [1] efficiently represented human faces using principal component analysis. M. A. Turk and Alex P. Pentland [2] developed a near real-time eigenfaces system for face recognition using Euclidean distance. A face recognition system can be considered good if it extracts the important features without making the system complex, and can use those features to recognize unseen faces. For feature extraction we use Principal Component Analysis; for recognition, a feed-forward neural network and a generalized regression neural network are used. In this paper we give an approach that recognizes faces with less training time and fewer training patterns (images).

Principal Component Analysis
Principal component analysis (PCA) involves a mathematical procedure that transforms a number of possibly correlated variables into a smaller number of uncorrelated variables called principal components.
PCA is a popular technique for deriving a set of features for face recognition. Any particular face can be (i) economically represented along the eigenpicture coordinate space and (ii) approximately reconstructed using a small collection of eigenpictures. To do this, a face image is projected onto several face templates called eigenfaces, which can be considered a set of features that characterize the variation between face images. Once a set of eigenfaces is computed, a face image can be approximately reconstructed as a weighted combination of the eigenfaces. The projection weights form a feature vector for face representation and recognition. When a new test image is given, its weights are computed by projecting the image onto the eigenface vectors. Classification is then carried out by comparing the distances between the weight vector (signature) of the test image and the signatures of all the images in the database; the test image is identified as the database image whose signature gives the least distance. Conversely, using all of the eigenfaces extracted from the original images, one can reconstruct an original image from the eigenfaces so that it matches the original exactly. Suppose there are P patterns and each pattern has t training images of m × n configuration.
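The projection-and-nearest-signature procedure described above can be sketched as follows. This is a minimal illustration, not the authors' exact implementation; the function names and the use of the small N × N covariance trick of Turk and Pentland are our own choices.

```python
import numpy as np

def train_eigenfaces(images, k):
    """images: (N, m*n) matrix, one flattened face per row.
    Returns the mean face, k eigenfaces, and the training signatures."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Eigen-decompose the small N x N matrix instead of the huge
    # (m*n) x (m*n) covariance matrix (Turk & Pentland trick).
    vals, vecs = np.linalg.eigh(centered @ centered.T)
    order = np.argsort(vals)[::-1][:k]           # k largest eigenvalues
    eigenfaces = centered.T @ vecs[:, order]     # lift back to image space
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)
    weights = centered @ eigenfaces              # (N, k) signatures
    return mean, eigenfaces, weights

def classify(test_image, mean, eigenfaces, weights, labels):
    """Project the test image and return the label of the
    least-distance training signature."""
    w = (test_image - mean) @ eigenfaces
    dists = np.linalg.norm(weights - w, axis=1)  # Euclidean distances
    return labels[np.argmin(dists)]
```

A query image is thus reduced to a k-dimensional signature, and recognition is a single nearest-neighbour search over the stored signatures.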

Neural Network
A neural network is made up of neurons residing in various layers. Neurons in different layers are connected by links, and each link carries a value called a weight; these weights store the network's information. A neural network is basically composed of three types of layers. The first is the input layer, which feeds information into the network. The second is the hidden layer, which may consist of one or more layers as needed, although one or two hidden layers have been observed to be sufficient for difficult problems; the hidden layers are responsible for processing the data and training the network. The last is the output layer, which passes the network's output to a comparator that compares it with a predefined target value. Neural networks require training: we present input patterns together with target values, and the weights of the network are adjusted. A neural network is considered good and efficient if it requires fewer training patterns, takes less time to train, and is able to recognize more unseen patterns.
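The layered structure and weight-adjustment process described above can be sketched with a minimal one-hidden-layer network trained by gradient descent. This is a generic illustration under our own assumptions (sigmoid activations, squared-error loss), not the specific network configuration used in the experiments.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class FeedForwardNet:
    """Input layer -> one hidden layer -> output layer."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.25, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))   # input->hidden
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))  # hidden->output
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)
        self.y = sigmoid(self.h @ self.W2)
        return self.y

    def train_step(self, x, target):
        """One weight update: the comparator is the squared error
        between the network output and the predefined target."""
        y = self.forward(x)
        delta_out = (y - target) * y * (1 - y)
        delta_hid = (delta_out @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= self.lr * np.outer(self.h, delta_out)
        self.W1 -= self.lr * np.outer(x, delta_hid)
        return 0.5 * np.sum((y - target) ** 2)
```

Repeated presentation of (pattern, target) pairs drives the error down, which is exactly the weight-adjustment process described in the text.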
The face recognition problem has been studied for more than two decades for its significant commercial applications, and a number of research efforts have been made to build automated face recognition systems using many different approaches. Our approach uses a neural network together with PCA. We use two types of neural networks, a feed-forward neural network and a generalized regression neural network, which have different characteristics. We therefore study these two neural-network-based face recognition systems, both using PCA for feature extraction, to find which network gives better results under varying conditions such as changes in lighting, expression, and rotation of the face, and distractions such as glasses, beards, and moustaches. We work on the standard ORL face database and report the results of our study.

IMPLEMENTATION AND RESULTS
In this paper we show the effect of using a variable learning rate. In each outer epoch (by one outer epoch we mean a single presentation of all training patterns to the neural network) we increase the learning rate by a small amount. This step speeds up the convergence of the weights. Using this method the learning rate, starting from 0.25, reaches a high value of 4 to 20 without making the system unstable. Specifically, we introduce a new approach to selecting the learning rate for face recognition: we increase the learning rate by 0.005 in each outer epoch. This requires less learning time and gives comparatively better recognition accuracy. In this experiment we use 10 hidden neurons and the magnitude is fixed at 0.95.
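The schedule described above (start at 0.25, add 0.005 per outer epoch) can be written as a small generator. The cap value here is our own assumption, chosen from the 4 to 20 range mentioned in the text.

```python
def learning_rate_schedule(initial=0.25, increment=0.005, max_lr=20.0):
    """Yield the learning rate for each outer epoch: start at
    `initial` and grow by `increment` per full pass over the
    training set, stopping once `max_lr` is exceeded."""
    lr = initial
    while lr <= max_lr:
        yield lr
        lr += increment
```

For example, after 750 outer epochs the rate is 0.25 + 750 × 0.005 = 4.0, the low end of the range the text reports as still stable.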

ORL Database Results
First we show the results of the two neural-network-based face recognition systems on the ORL database.

Feed-forward Neural Network Results for ORL Database
We take the results of the feed-forward network for different numbers of images in the training set, for example 150 images (6 images per person), 100 images (4 images per person), and so on. We report the average recognition rate of the two neural-network-based face recognition systems from two-way cross validation: the training set and test set are interchanged and the results averaged. The graph of recognition rate against the number of training images, and the graph of training time against the number of training images, are shown in Figure 1 and Figure 2 respectively.
Figure 1: Recognition rate of the feed-forward network on different training sets for the ORL database.
Figure 1 shows that when the number of images in the training set is reduced, the recognition rate is also reduced; the corresponding values are given in Table 2.
From Table 2 we can say that the feed-forward network correctly identifies 27 image classes out of 27 images in the test data set, i.e. a 100% recognition rate, when a total of 150 images are present in the training database; with 100 images in the training data set the recognition rate is 93%, as shown in Figure 3. We also present the variation in recognition rate for images with moustaches. There are 20 images with moustaches in the ORL database, of which 8 are in the test data set; the number in the training data set varies: when the training data set contains 150 images, 12 of them have moustaches, and when it contains 100 images, only 8 have moustaches. The recognition rate of the feed-forward network on the 8 moustached test images is given in Table 4 and shown in Figure 5.
Table 4: Recognition rate of the feed-forward network for persons with moustaches (ORL database).
From Table 4 we can say that with either 150 or 100 images in the training data set, the recognition rate of the feed-forward network is 100%.

Generalized Regression Neural Network Results for ORL Database
We use a generalized regression neural network with a spread value of 0.7; the spread is the width of the radial basis function, and its default value is 1. The results are given in Table 5, Table 6, Table 7, and Table 8.
Table 6: Recognition rate of the generalized regression neural network for persons wearing glasses (ORL database).
Table 7: Recognition rate of the generalized regression neural network for female persons (ORL database).
Table 8: Recognition rate of the generalized regression neural network for persons with moustaches (ORL database).
We now analyse the overall results of the two neural networks. Table 9 shows the results for the ORL database when the training data set contains 150 images; from it we can say that the feed-forward network has 100% accuracy for all variations of images. Table 10 shows the overall result of the two networks when 150 images are in the training set. From the experiment we conclude that the feed-forward neural network has a recognition rate of 96%, which is higher than that of the generalized regression neural network. The recognition rates of the two networks are compared in Figure 6.
Figure 6: Recognition rates of the different neural networks for the ORL database.
Table 11 shows the total training time of the two networks, from which we can conclude that the generalized regression neural network trains in 43.40 s, far less than the feed-forward network's 932.50 s. Figure 7 compares the total training time of the two networks.
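The large training-time gap is easy to see from how a generalized regression neural network works: in Specht's formulation it has no iterative weight training at all, only a kernel-weighted average of the stored training targets, with the spread as the kernel width. A minimal sketch of the prediction step (our own illustration, using one-hot target rows for classification, not the authors' implementation):

```python
import numpy as np

def grnn_predict(X_train, Y_train, x, spread=0.7):
    """GRNN output for query x: Gaussian-kernel-weighted average of
    the training targets Y_train. `spread` is the radial-basis width.
    "Training" is just storing (X_train, Y_train), hence it is fast."""
    d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances
    w = np.exp(-d2 / (2.0 * spread ** 2))         # radial-basis weights
    return (w @ Y_train) / np.sum(w)              # normalized average
```

For classification, each row of `Y_train` is a one-hot class vector and the predicted class is the argmax of the output; a smaller spread makes the network more local, a larger one smoother.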

Conclusion
This paper presented a comparative analysis of the performance and accuracy of a feed-forward neural network and a generalized regression neural network. We used eigenfaces to represent the feature vectors, and introduced a new approach to selecting the learning rate for the feed-forward neural network. The new approach gave better results.