### 1. Introduction

### 2. Background Theory

### 2.1 Texture Analysis

#### 2.1.1 Feature extraction

A texture can be described by a feature vector $x = [x_1, x_2, x_3, \dots, x_n]$ or by its transpose. The key decision is determining which features to extract. Various methods for texture feature extraction have been proposed over the last decades [9–12, 14].

#### 2.1.2 Pixel classification

### 2.2 Information Fusion

- Sensor-level fusion refers to the combination of raw data from different sensors.
- Feature-level fusion refers to the combination of different feature vectors.
- Score-level fusion refers to the combination of matching scores provided by different classifiers.
- Decision-level fusion refers to the combination of decisions provided by individual classifiers.

#### 2.2.1 Majority vote rule

The class $z$ most voted for by the individual classifiers is selected by computing the number of times that each class appears. The coefficient $\alpha_j$ represents the reliability degree of classifier $j$ and can be estimated from the recognition rate of each classifier. These coefficients are used to tackle the problem of conflict between classifiers. In our case, we omitted this coefficient because we used a single classifier.
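As an illustration (the function below is our own sketch, not the paper's implementation), the majority vote with optional reliability weights $\alpha_j$ can be written as:

```python
from collections import Counter

def majority_vote(decisions, weights=None):
    """Return the class receiving the most (optionally weighted) votes.

    decisions: list of class labels, one per classifier.
    weights:   optional reliability degrees alpha_j; with a single
               classifier, as in our case, they can be omitted.
    """
    if weights is None:
        weights = [1.0] * len(decisions)
    tally = Counter()
    for label, alpha in zip(decisions, weights):
        tally[label] += alpha
    return tally.most_common(1)[0][0]

# Three classifiers vote 1, 2, 1 -> class 1 wins.
print(majority_vote([1, 2, 1]))  # -> 1
```

With weights supplied, a single reliable classifier can outvote several unreliable ones, which is exactly the conflict-resolution role the coefficients play above.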

#### 2.2.2 Fuzzy set theory

Let $A = \{x_1, \dots, x_c\}$ be a fuzzy set in the universe of discourse $U$ (in our case, the set of class labels), defined as $A = \{(\mu_A(x_i), x_i)\}$, $i = 1, 2, \dots, c$, where the membership function $\mu_A(x_i)$, taking positive values in the interval $[0, 1]$, denotes the degree to which an event $x_i$ may be a member of $A$.
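A minimal sketch of computing such membership degrees from raw class scores, assuming a simple min–max normalization into $[0, 1]$ (our choice for illustration, not prescribed above):

```python
def membership(scores):
    """Map raw class scores to membership degrees mu_A(x_i) in [0, 1].

    Min-max normalization is one simple choice; any monotone map
    into [0, 1] yields a valid fuzzy membership function.
    """
    lo, hi = min(scores), max(scores)
    if hi == lo:                      # all scores equal: full ambiguity
        return [1.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

print(membership([0, 2, 1]))  # -> [0.0, 1.0, 0.5]
```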

### 3. The Proposed Segmentation Algorithm

### 3.1 Pixels Classification

The first-level wavelet decomposition yields four sub-bands ($d_l$, $l = 1 \dots 4$), denoted LL1, HL1, LH1, HH1, where 1 designates the first level:

- LL1 contains both horizontal and vertical low frequencies: approximation coefficients.
- LH1 contains horizontal low frequencies and vertical high frequencies: vertical details.
- HL1 contains horizontal high frequencies and vertical low frequencies: horizontal details.
- HH1 contains both horizontal and vertical high frequencies: diagonal details.
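The four sub-bands can be obtained with a single-level 2-D Haar transform; the NumPy sketch below is a self-contained illustration (a wavelet library such as PyWavelets provides equivalent decompositions):

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar wavelet transform.

    Returns the sub-bands LL1 (approximation), HL1 (horizontal
    details), LH1 (vertical details) and HH1 (diagonal details).
    """
    img = np.asarray(img, dtype=float)
    a = img[0::2, 0::2]  # even rows, even cols
    b = img[0::2, 1::2]  # even rows, odd cols
    c = img[1::2, 0::2]  # odd rows, even cols
    d = img[1::2, 1::2]  # odd rows, odd cols
    ll = (a + b + c + d) / 2.0   # low-pass in both directions
    hl = (a - b + c - d) / 2.0   # high-pass horizontally: horizontal details
    lh = (a + b - c - d) / 2.0   # high-pass vertically: vertical details
    hh = (a - b - c + d) / 2.0   # high-pass in both: diagonal details
    return ll, hl, lh, hh

ll, hl, lh, hh = haar_dwt2(np.arange(16).reshape(4, 4))
print(ll.shape)  # -> (2, 2)
```

Each sub-band has half the spatial resolution of the input window, matching the dyadic structure described above.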

A feature is then computed for each window $W$ from the corresponding channel, where $N_w$ denotes the number of pixels in the window $W$.
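The feature formula itself is not reproduced above; a common choice, assumed here purely for illustration, is the normalized energy of the sub-band coefficients inside the window:

```python
import numpy as np

def window_energy(coeffs):
    """Normalized energy of wavelet coefficients in a window W:
    E = (1 / N_w) * sum(|d|^2), with N_w the number of pixels in W.

    This energy form is an assumption standing in for the paper's
    (unreproduced) feature definition.
    """
    coeffs = np.asarray(coeffs, dtype=float)
    return float(np.sum(coeffs ** 2) / coeffs.size)

print(window_energy([[1, -1], [2, 0]]))  # -> 1.5
```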

### 3.2 Fusion for Post-Classification

Let $I$ be the segmented image containing the class decisions of each pixel (the output of the SVM classifier), $I = d_{ij}$ with $i = 1 \dots n$ and $j = 1 \dots n$. We browsed the image using a sliding window of size $N \times N$, so that each pixel is surrounded by $N^2 - 1$ pixels.

Each pixel $p_{ij,l}$ of window $w_l$ with decision $d_{ij,l}$ belonged to the $N^2 - 1$ windows in different positions before the classification process, whereas each central pixel $p_{ij,m}$ of the windows $m$ with $m = 1 \dots N^2 - 1$ made a different decision $d_{ij,m}$. For $p_{3,3}$ with decision $d_{3,3}$ and $N = 3$, the central pixel is surrounded by eight pixels, which are the centers of the eight windows that pixel $p_{3,3}$ belongs to.

#### 3.2.1 Decision fusion

The final decision for the central pixel $p_{3,3}$ is obtained by fusing the set of decisions $\{d_{3,3}, d_{3,2}, d_{3,4}, d_{2,3}, d_{2,4}, d_{2,2}, d_{4,2}, d_{4,4}, d_{4,3}\}$ by applying the majority vote rule (cf. Fig. 4).
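A sketch of this post-classification step over the whole label image, with edge padding at the borders as our own assumption (the text does not specify border handling):

```python
import numpy as np
from collections import Counter

def fuse_decisions(label_img, N=3):
    """Relabel each pixel by majority vote over its N x N neighborhood
    (the pixel's own decision plus its N^2 - 1 neighbors)."""
    label_img = np.asarray(label_img)
    r = N // 2
    # Edge padding (assumption) keeps the window fully inside the array.
    padded = np.pad(label_img, r, mode="edge")
    out = np.empty_like(label_img)
    h, w = label_img.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + N, j:j + N].ravel()
            out[i, j] = Counter(window.tolist()).most_common(1)[0][0]
    return out

# An isolated misclassified pixel (class 2) is corrected to class 1.
img = np.array([[1, 1, 1], [1, 2, 1], [1, 1, 1]])
print(fuse_decisions(img))
```

This is the smoothing effect of the majority vote: isolated decisions inconsistent with their neighborhood are overturned.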

#### 3.2.2 Scores fusion

Let $S$ be a matrix containing the scores of each pixel for each class provided by the SVM, $S = s_{lk}$ with $l = 1 \dots M$ and $k = 1 \dots C$, where $M = n \times m$ is the number of pixels of the image and $C$ is the number of classes. We decomposed $S$ into $C$ score images, each used as a source of information for data fusion, with $i = 1 \dots n$ and $j = 1 \dots m$. We applied the proposed model described in Section 3.2 to each source and combined these scores in the context of probability theory and fuzzy logic (cf. Fig. 5).
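A sketch of this score-level fusion, assuming the $C$ score images are stacked as an $n \times m \times C$ array and using window averaging as a simple stand-in for the model of Section 3.2 (the actual combination rules follow below):

```python
import numpy as np

def fuse_scores(score_img, N=3):
    """Fuse per-class SVM scores over N x N neighborhoods.

    score_img: array of shape (n, m, C) -- one score image per class,
               i.e. the C sources obtained by decomposing S.
    Returns per-pixel class labels after averaging each class's scores
    over the window (a stand-in rule) and taking the argmax.
    """
    score_img = np.asarray(score_img, dtype=float)
    r = N // 2
    n, m, C = score_img.shape
    padded = np.pad(score_img, ((r, r), (r, r), (0, 0)), mode="edge")
    fused = np.empty((n, m, C))
    for i in range(n):
        for j in range(m):
            fused[i, j] = padded[i:i + N, j:j + N].mean(axis=(0, 1))
    return fused.argmax(axis=2)
```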

### Rules based on probability theory

Let $P_k(w_k \mid x_{ij})$ be the probability of class $w_k$ offered by source $k$ for the feature input $x_{ij}$. We investigated several ways to implement the combination of these probabilities as follows:
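The standard combination rules for per-source probabilities from the classifier-combination literature are product, sum, max and min; whether these match the paper's exact list is an assumption, and the sketch below simply implements that standard set:

```python
import numpy as np

def combine(probs, rule="product"):
    """Combine per-source class probabilities P_k(w | x_ij).

    probs: array of shape (K, C) -- K sources, C classes.
    The four rules here are the standard ones (an assumption; the
    paper's own enumeration is not reproduced above).
    """
    probs = np.asarray(probs, dtype=float)
    rules = {
        "product": probs.prod(axis=0),  # assumes source independence
        "sum": probs.sum(axis=0),       # robust to single-source errors
        "max": probs.max(axis=0),
        "min": probs.min(axis=0),
    }
    return int(rules[rule].argmax())    # winning class index

p = [[0.6, 0.4], [0.7, 0.3]]
print(combine(p, "product"))  # -> 0
```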