### 1. Introduction

### 1.1 Overview of the Proposed Approach

### 1.2 Contributions of This Paper

Evaluation of a novel 3D face verification system based on different recent local descriptors: LBP, TPLBP, FPLBP, BSIF, and LPQ. We give a detailed analysis of these algorithms and compare their performance in order to build an efficient, automatic 3D face verification system able to handle challenging cases where the expression and illumination conditions of the training and testing data are very different.

Combination of the LPQ descriptor with the other face descriptors at the feature level by concatenating their histogram features.

Histogram-based local feature extraction, which achieves high verification performance.

Assessment and comparison of three dimensionality reduction techniques: PCA (alone), OLPP, and PCA+EFM.

### 2. Related Work

### 3. Local Descriptors

### 3.1 Local Phase Quantization

The LPQ operator is based on the short-term Fourier transform computed over a local neighborhood at each pixel position *x* of the depth image *f*(*x*), which is defined by:

*F*(*u*, *x*) = ∑_y *f*(*x* − *y*) e^{−j2π*u*^T*y*}

Only four complex frequencies are used:

*u*_1 = [*a*, 0]^T, *u*_2 = [0, *a*]^T, *u*_3 = [*a*, *a*]^T, and *u*_4 = [*a*, −*a*]^T

where *a* represents a sufficiently small scalar frequency for which *H*(*u*_i) > 0 for each pixel position in the depth image. This results in a vector *F*_x = [*F*(*u*_1, *x*), *F*(*u*_2, *x*), *F*(*u*_3, *x*), *F*(*u*_4, *x*)], from which the real (*Re*) and imaginary (*Im*) parts of *F*_x are separated. The scalar quantizer is given by the following equation:

*q*_j(*x*) = 1 if *g*_j(*x*) ≥ 0, and 0 otherwise

where *g*_j(*x*) is the *j*th component of the vector *G*_x = [*Re*{*F*_x}, *Im*{*F*_x}]. The obtained eight binary coefficients *q*_j(*x*) are represented as integer values between 0 and 255 using a simple binary coding to get the LPQ labels, *F*_LPQ, defined by:

*F*_LPQ(*x*) = ∑_{j=1}^{8} *q*_j(*x*) 2^{j−1}
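As a concrete sketch, the Fourier coefficients at *u*_1..*u*_4 and the sign quantization can be written with NumPy. The window size `win` and the choice *a* = 1/`win` are illustrative defaults, not values fixed by the text:

```python
import numpy as np

def lpq_labels(img, win=7):
    """Sketch of LPQ: local Fourier coefficients at u1..u4 computed with
    separable window filters, then sign-quantised into 8 bits (0..255)."""
    img = img.astype(np.float64)
    m = win // 2
    t = np.arange(-m, m + 1)
    a = 1.0 / win                        # scalar frequency a = 1/win (assumption)
    w0 = np.ones_like(t, dtype=complex)  # DC component along one axis
    w1 = np.exp(-2j * np.pi * a * t)     # frequency a along one axis
    H, W = img.shape
    F = []
    # Separable kernels for u1=[a,0]^T, u2=[0,a]^T, u3=[a,a]^T, u4=[a,-a]^T
    for wr, wc in [(w1, w0), (w0, w1), (w1, w1), (w1, np.conj(w1))]:
        k = np.outer(wr, wc)
        out = np.zeros((H, W), complex)
        for i in range(m, H - m):
            for j in range(m, W - m):
                out[i, j] = np.sum(img[i - m:i + m + 1, j - m:j + m + 1] * k)
        F.append(out)
    # G_x = [Re{F_x}, Im{F_x}]; q_j(x) = 1 if g_j(x) >= 0, else 0
    G = np.stack([f.real for f in F] + [f.imag for f in F])
    q = (G >= 0).astype(int)
    # F_LPQ(x) = sum_j q_j(x) 2^(j-1): integer labels 0..255
    return np.sum(q * (2 ** np.arange(8))[:, None, None], axis=0)
```

The plain nested-loop convolution keeps the sketch readable; a real implementation would use separable filtering.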

### 3.2 Binarized Statistical Image Features

Let *X* be the input image patch and *W*_i a linear filter of size l×l; the filter response *s*_i is given by the following equation:

*s*_i = ∑_{u,v} *W*_i(*u*, *v*) *X*(*u*, *v*) = *w*_i^T *x*

where the vectors *w*_i and *x* contain the pixels of *W*_i and *X*, respectively. The binarized feature *b*_i is calculated as follows:

*b*_i = 1 if *s*_i > 0, and 0 otherwise
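The two equations above can be sketched as follows. The 2×2 `filters` below are hand-made stand-ins used only to make the example runnable; actual BSIF filters are learned in advance with ICA on natural image patches:

```python
import numpy as np

def bsif_label(patch, filters):
    """Responses s_i = w_i^T x and binarised features b_i = 1 if s_i > 0,
    packed into one integer label."""
    x = patch.ravel().astype(np.float64)            # pixels of X
    s = np.array([w.ravel() @ x for w in filters])  # filter responses s_i
    b = (s > 0).astype(int)                         # binarised features b_i
    return int(b @ (2 ** np.arange(len(filters))))  # simple binary coding

# Hypothetical filters (real BSIF filters come from ICA training):
filters = [np.array([[1.0, 0.0], [0.0, -1.0]]),
           np.array([[-1.0, 0.0], [0.0, 1.0]])]
```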

### 3.3 Local Binary Patterns

The LBP operator considers a circular neighborhood of radius *r* [30]. The values of the *P* points sampled on the edge of this circle are compared with the value of the central pixel. For a central pixel of the depth image *I* with coordinates (*x*_c, *y*_c), the LBP code can be defined in decimal form as follows:

LBP_{P,r}(*x*_c, *y*_c) = ∑_{p=0}^{P−1} s(*I*_p − *I*_c) 2^p

where *I*_c is the depth value of the central pixel, *I*_p (p = 1, …, P) are the values of its neighborhood with a radius *r*, and *s*(*x*) is a function defined as:

s(*x*) = 1 if *x* ≥ 0, and 0 otherwise

### 3.4 Three-Patch Local Binary Patterns

For each pixel, the TPLBP operator considers a w×w patch *C*_p centered on the pixel and *S* additional w×w patches *C*_i distributed uniformly on a ring of radius *r* centered at the site of the pixel. The TPLBP operator is given by the following equation:

TPLBP_{r,S,w,α}(p) = ∑_{i=0}^{S−1} f(d(*C*_i, *C*_p) − d(*C*_{(i+α) mod S}, *C*_p)) 2^i

where *d* is the distance measure between two patches, *C*_p is the central patch, and *f* is a threshold function that is calculated as follows:

f(*x*) = 1 if *x* ≥ τ, and 0 otherwise

where *τ* is the threshold of comparison. The principle of the TPLBP operator method is illustrated in Fig. 4.
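A sketch of the bit computation above for a single pixel, assuming the central patch and the *S* ring patches have already been extracted, and using a squared-L2 patch distance for *d* (an assumption; any patch distance fits the definition):

```python
import numpy as np

def tplbp_code(ring_patches, center_patch, alpha=1, tau=0.01):
    """TPLBP code of one pixel given its S ring patches C_i and the central
    patch C_p: bit i is f(d(C_i, C_p) - d(C_{(i+alpha) mod S}, C_p))."""
    S = len(ring_patches)
    d = lambda a, b: float(np.sum((a - b) ** 2))   # squared-L2 patch distance
    code = 0
    for i in range(S):
        diff = (d(ring_patches[i], center_patch)
                - d(ring_patches[(i + alpha) % S], center_patch))
        code |= (1 if diff >= tau else 0) << i     # f(x) = 1 iff x >= tau
    return code
```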

### 3.5 Four-Patch Local Binary Patterns

Two rings of radii *r*_1 and *r*_2 centered on the pixel are used, with S w×w patches distributed around each ring. The comparison occurs between two center-symmetric patches in the inner ring and two center-symmetric patches in the outer ring, positioned α patches away along the circle [10]. After each comparison, one bit is set according to which of the two patch pairs is the most similar. There are S/2 center-symmetric pairs along each circle, and the resulting bits form the final binary code. The FPLBP operator is given by:

FPLBP_{r1,r2,S,w,α}(p) = ∑_{i=0}^{S/2−1} f(d(*C1*_i, *C2*_{(i+α) mod S}) − d(*C1*_{i+S/2}, *C2*_{(i+S/2+α) mod S})) 2^i

### 4. Histogram Feature Extraction

The histogram (*h*) of the labeled input image (*I*) is computed as follows:

*h*(*k*) = ∑_{x,y} 1{*I*(*x*, *y*) = *k*},  for *k* = 0, 1, …, 2^P − 1

where *P* is the number of bits.
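With integer descriptor labels, this histogram is a single counting pass; the function and array names below are illustrative:

```python
import numpy as np

def label_histogram(codes, P=8):
    """h(k) = number of pixels whose descriptor label equals k, for
    k = 0 .. 2**P - 1, where P is the number of bits."""
    return np.bincount(codes.ravel(), minlength=2 ** P)
```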

### 5. Dimensionality Reduction

### 5.1 PCA+EFM

The training set is arranged as a matrix *A* = [*A*_1 *A*_2 … *A*_M], with N rows and M columns, where M is the number of training images and N is the size of the feature vector. Each class (each person) is represented by a number of column vectors (samples) with different variations of illumination and expression, according to the protocol depicted in Table 1.

First, PCA is used to find a global linear transformation matrix *U*_PCA that projects every feature vector into the eigenvector subspace, where *W* is the projection of the training face vectors on that subspace. The steps to compute *U*_PCA can be summarized as follows:

- Step 1: Find the mean face vector *Ā* = (1/M) ∑_{i=1}^{M} *A*_i, where *A*_i (i = 1, 2, …, M) represents the *i*th column of *A*.
- Step 2: Subtract the mean face *Ā* from each training face: *Q*_i = *A*_i − *Ā*.
- Step 3: Calculate the covariance matrix *C* from the new matrix *X* = [*Q*_1, *Q*_2, …, *Q*_M]: *C* = (1/M) *X* *X*^T.
- Step 4: Calculate the eigenvalues *V* and the eigenvectors *U* of *C*, and sort the eigenvectors in decreasing order of their eigenvalues. The *U*_PCA matrix contains the first *k* eigenvectors corresponding to the *k* greatest eigenvalues. The projection is then *W* = *U*_PCA^T *X*.

Second, in order to improve the discrimination power between classes (column vectors), EFM is used after the PCA. The projection *W* of the training face vectors on the PCA eigenvector subspace is the input of EFM. The EFM algorithm is presented below:

- Step 5: Find the intra-class (*S*_w) and inter-class (*S*_b) dispersion matrices:

  *S*_w = ∑_i ∑_j (*W*_ij − *m̄*_i)(*W*_ij − *m̄*_i)^T,  *S*_b = ∑_i *n*_i (*m̄*_i − *m̄*)(*m̄*_i − *m̄*)^T

  where *W*_ij is the *j*th sample of the class *i*, *m̄*_i is the mean of the samples in the class *i*, *m̄* is the mean of all samples, and *n*_i is the number of samples in the class *i*.
- Step 6: Calculate the eigenvalues (*Y*) and the eigenvectors (*E*) of the *S*_w matrix.
- Step 7: Calculate the new inter-class matrix *K*_b: *K*_b = *Y*^{−1/2} *E*^T *S*_b *E* *Y*^{−1/2}.
- Step 8: Calculate the eigenvalues (*O*) and the eigenvectors (*H*) of *K*_b.
- Step 9: Calculate the global transformation matrix *U*_EFM = *E* *Y*^{−1/2} *H*.

Finally, the projection of the training face vectors (*W*_final) on the subspace described by these eigenvectors is calculated using the global linear transformation matrix *U*_EFM, in the form of:

*W*_final = *U*_EFM^T *W*
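Steps 1 through 9 can be sketched with NumPy eigendecompositions as follows; this assumes *S*_w is well conditioned (in practice a small regularization term is usually added before inverting its eigenvalues):

```python
import numpy as np

def pca_efm(A, labels, k):
    """PCA (steps 1-4) followed by EFM (steps 5-9).  A is N x M with one
    feature vector per column; labels[i] is the class of column i."""
    # Steps 1-2: mean face and centred faces X = [Q_1 ... Q_M]
    X = A - A.mean(axis=1, keepdims=True)
    # Steps 3-4: covariance, eigenvectors sorted by decreasing eigenvalue
    C = X @ X.T / X.shape[1]
    vals, vecs = np.linalg.eigh(C)
    U_pca = vecs[:, np.argsort(vals)[::-1][:k]]
    W = U_pca.T @ X                      # projection on the PCA subspace
    # Step 5: intra-class S_w and inter-class S_b dispersion matrices
    m_all = W.mean(axis=1, keepdims=True)
    S_w = np.zeros((k, k)); S_b = np.zeros((k, k))
    for c in np.unique(labels):
        Wc = W[:, labels == c]
        m_c = Wc.mean(axis=1, keepdims=True)
        S_w += (Wc - m_c) @ (Wc - m_c).T
        S_b += Wc.shape[1] * (m_c - m_all) @ (m_c - m_all).T
    # Step 6: eigen-decomposition of S_w
    Y, E = np.linalg.eigh(S_w)
    # Step 7: whitened inter-class matrix K_b = Y^(-1/2) E^T S_b E Y^(-1/2)
    T = E @ np.diag(1.0 / np.sqrt(Y))
    K_b = T.T @ S_b @ T
    # Steps 8-9: eigenvectors H of K_b and the global transform U_EFM
    O, H = np.linalg.eigh(K_b)
    U_efm = T @ H
    return U_efm.T @ W                   # W_final
```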

### 5.2 Orthogonal Locality Preserving Projection

Let {*x*_1, *x*_2, …, *x*_k} be a given set of points in *R*^l. The goal is to find a transformation matrix *A* that projects the k points to a set of points {*y*_1, *y*_2, …, *y*_k} in *R*^m, where m ≪ l, and *y*_i = *A*^T *x*_i. The OLPP algorithm includes five steps [19], which are described below:

- Step 1: PCA projection. The facial images *x*_i are projected into the PCA subspace. The PCA transformation matrix is denoted *W*_PCA. Uncorrelated features are extracted, and the rank of the new data matrix is equal to the number of features.
- Step 2: Constructing the adjacency graph. Let *G* indicate a graph with k nodes, where the *i*th node corresponds to the facial image *x*_i. An edge is placed between nodes *i* and *j* if *x*_i and *x*_j are close, i.e., *x*_i is among the *p* nearest neighbors of *x*_j, or vice versa. The adjacency graph is an approximation of the local manifold structure. If the class information is available, we simply put an edge between two data points belonging to the same class.
- Step 3: Choosing the weights. If nodes *i* and *j* are connected, then *W*_ij = exp(−‖*x*_i − *x*_j‖² / t); otherwise *W*_ij = 0, where *t* is a suitable constant. The weight matrix *W* of graph *G* models the face manifold structure by preserving the local structure.
- Step 4: Computing the orthogonal basis functions. Let *D* denote a diagonal matrix whose entries are the column (or row, since *W* is symmetric) sums of *W*, *D*_ii = ∑_j *W*_ji, and let the Laplacian matrix be *L* = *D* − *W*. Let {*a*_1, *a*_2, …, *a*_k} be the orthogonal basis vectors, and define *A*^(k−1) = [*a*_1, …, *a*_{k−1}] and *B*^(k−1) = [*A*^(k−1)]^T (*X* *D* *X*^T)^{−1} *A*^(k−1). The orthogonal basis vectors {*a*_1, *a*_2, …, *a*_k} are computed as follows:
  - Compute *a*_1 as the eigenvector of (*X* *D* *X*^T)^{−1} *X* *L* *X*^T associated with the smallest eigenvalue.
  - Compute *a*_k as the eigenvector of *M*^(k) = {*I* − (*X* *D* *X*^T)^{−1} *A*^(k−1) [*B*^(k−1)]^{−1} [*A*^(k−1)]^T} (*X* *D* *X*^T)^{−1} *X* *L* *X*^T associated with the smallest eigenvalue of *M*^(k).
- Step 5: OLPP embedding: *x* → *y* = *W*^T *x*, where *y* is the final low-dimensional representation of the facial image *x*, and *W* is the final transformation matrix.
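The graph-construction part of the algorithm (the adjacency graph, the heat-kernel weights, and the degree and Laplacian matrices feeding Step 4) can be sketched as follows; the neighbor count `p` and constant `t` are the free parameters named in the text:

```python
import numpy as np

def graph_laplacian(X, p, t):
    """Steps 2-4 (graph part): p-nearest-neighbour adjacency, heat-kernel
    weights W_ij = exp(-||x_i - x_j||^2 / t), degree matrix D and L = D - W.
    X holds one data point per column."""
    k = X.shape[1]
    # Pairwise squared distances ||x_i - x_j||^2
    d2 = np.sum((X[:, :, None] - X[:, None, :]) ** 2, axis=0)
    W = np.zeros((k, k))
    for i in range(k):
        nn = np.argsort(d2[i])[1:p + 1]        # p nearest neighbours of x_i
        W[i, nn] = np.exp(-d2[i, nn] / t)
    W = np.maximum(W, W.T)                     # edge if neighbour "or vice versa"
    D = np.diag(W.sum(axis=1))                 # D_ii = sum_j W_ji
    return W, D, D - W                         # L = D - W
```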

### 6. Classification Using a Support Vector Machine

Here *x*_i are the training feature vectors in the *k*-dimensional feature space and *y*_i are their labels, which is shown as: {(*x*_i, *y*_i), *x*_i ∈ *R*^k, *y*_i ∈ {−1, +1}}. For the M-class verification problem, a one-versus-all strategy is used: each binary classifier separates class *k* from the other training classes. The final decision on the M classes is carried out by combining the outputs of each classifier.
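A minimal one-versus-all setup of this kind, using scikit-learn's `LinearSVC` wrapped in `OneVsRestClassifier`; the Gaussian toy vectors below are hypothetical stand-ins for the concatenated histogram features:

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

# Hypothetical toy features x_i (5-dimensional) for 3 identities.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=3.0 * c, size=(10, 5)) for c in range(3)])
y = np.repeat([0, 1, 2], 10)

# One binary SVM per class ("class k versus the other training classes");
# the final M-class decision combines the scores of all classifiers.
clf = OneVsRestClassifier(LinearSVC()).fit(X, y)
```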

### 7. Experiments and Results

### 7.1 Experimental Data

The scans (001 to 005) contain illumination variations under a neutral expression.

The scans (006 to 010) contain the five expression variations of laughter, smile, anger, surprise, and eyes closed, captured under office lighting.

The scans (011 to 015) contain expressions under illumination variations.

### 7.2 Discussion

The parameters *P* = 16 and *r* = 2 were found to provide the best results.