
Aspect-Aided Dynamic Non-Negative Sparse Representation-Based Microwave Image Classification

Xinzheng Zhang 1,*, Qiuyue Yang 1, Miaomiao Liu 1, Yunjian Jia 1, Shujun Liu 1 and Guojun Li 2

1 College of Communication Engineering, Chongqing University, Chongqing 400044, China; [email protected] (Q.Y.); [email protected] (M.L.); [email protected] (Y.J.); [email protected] (S.L.)
2 Department of Communication Commanding, Chongqing Communication Institute, Chongqing 400035, China; [email protected]
* Correspondence: [email protected]; Tel.: +86-139-8322-7998

Academic Editor: Vittorio M. N. Passaro Received: 29 June 2016; Accepted: 29 August 2016; Published: 2 September 2016

Abstract: Classification of target microwave images is an important application in many areas, such as security and surveillance. For the task of microwave image classification, a recognition algorithm based on aspect-aided dynamic non-negative least square (ADNNLS) sparse representation is proposed. First, an aspect sector is determined whose center is the estimated aspect angle of the testing sample. The training samples in the aspect sector are divided into active atoms and inactive atoms by smooth self-representative learning. Second, for each testing sample, the corresponding active atoms are selected dynamically, thereby establishing a dynamic dictionary. Third, the testing sample is represented with ℓ1-regularized non-negative sparse representation under the corresponding dynamic dictionary. Finally, the class label of the testing sample is identified by use of the minimum reconstruction error. Verification of the proposed algorithm was conducted using the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, which was acquired by synthetic aperture radar. Experimental results validated that the proposed approach was able to capture the local aspect characteristics of microwave images effectively, thereby improving the classification performance.

Keywords: microwave imaging sensor; image classification; aspect angle; sparse representation

1. Introduction

Microwave imaging sensors have gained extensive applications in bio-tissue image sensing, concealed weapon detection, and aerial remote sensing [1–3], in which the first and foremost objective is target microwave image classification. The present study is focused on target classification in images collected by a microwave imaging radar. Microwave imaging of targets is based on the electromagnetic scattering mechanism, so the images are usually not as detailed as optical images. Moreover, microwave imaging is very sensitive to the aspect angle and depression angle, so changes of these angles can cause significant changes in the images. Figure 1 describes the aspect angle and depression angle of a microwave imaging radar. In Figure 2, the images of a BMP2 target at varying aspect angles are given with a constant depression angle of 17°. As shown, the image content differs profoundly at different aspect angles. These factors increase the difficulty of microwave image recognition [4,5].

Sensors 2016, 16, 1413; doi:10.3390/s16091413



Figure 1. Description of aspect angle and depression angle in microwave imaging radar sensor.

Figure 2. Microwave images of BMP2 at different aspect angles (1.79°, 58.79°, 98.79°, 136.79°, and 318.79°; depression angle 17°).

There are two primary types of target classification technology used for images of microwave imaging radar: the template-based method [6] and the model-based method [7]. The principle underlying the template-based method involves extracting features and comparing them with stored templates generated from the training images, thereby identifying the unknown test image as the class of the matched template. The problem is that background clutter in microwave images interferes profoundly with template matching. The sensitivity to aspect angle and depression angle also causes difficulties in template matching. For the model-based method, SAR images are described using a statistical model based on scattering mechanisms. The target's type is then determined based on the maximum posterior probability of the model parameters. Because of the difficulty of parameter estimation, this process can easily produce weak correlation between training and testing samples, causing the method to fail. In addition, these methods have much worse classification performance under extended operating conditions, because the working parameters of the training samples and testing samples are significantly different. Other optional methods include discriminative tensor analysis, non-negative matrix factorization (NMF), etc. [8,9].

In recent years, sparse representation theory has been widely adopted in image processing and pattern recognition. The principle is to represent an observed signal using a linear combination of a series of known signals called atoms. Under a sparsity constraint on the representation coefficient (minimal ℓ0-norm), a unique solution is obtained. The target type is then determined by minimizing the reconstruction error over each type of training sample. Sparse representation has been used extensively in various recognition tasks, such as facial recognition, speech recognition, and hyperspectral image classification [10–12]. Sparse representation-based recognition has gained in-depth development in accordance with the different requirements of image recognition. Fang et al. investigated a multi-task sparse learning algorithm [13]. In a previous study by Zhang et al., the kernel technique and sparse representation in non-linear processing were used to create a kernel sparse representation model, which was then applied to facial recognition and hyperspectral image classification [14]. Meanwhile, ℓ1-regularized non-negative least square (NNLS) sparse representation was employed in the classification of gene data [15,16]. The NNLS sparse representation model possesses the advantages of clear physical meaning, high classification accuracy and rapid calculation.


In microwave radar image recognition, several studies have focused on sparse representation-based classification. For example, Zhang et al. proposed a joint sparse representation target classification method by evaluating the correlation between multi-view microwave images [17]. On this basis, Chen et al. presented an improved joint sparse representation by using low-rank matrix recovery [18]. Dong et al. studied monogenic signal joint sparse representation-based target recognition for microwave image sensors, and further developed sparse representation of monogenic signals on Grassmann manifolds [19,20]. Liu et al. investigated microwave image recognition via multiple sparse Dempster-Shafer fusion [21]. Liu et al. introduced fusion of sparse representation and support vector machine for microwave radar image target recognition [22]. Cao et al. proposed a target classification method based on joint sparse representation with multi-view SAR images over a locally adaptive dictionary [23]. Hou et al. investigated SAR image classification with hierarchical sparse representation and multisize patch features [24]. Recently, several techniques based on sparse representation have also been proposed for polarimetric SAR image classification [25–28].

In current microwave image recognition methods involving sparse representation theory, training samples at all aspect angles from 0° to 360° are considered as dictionary atoms. It is thus assumed that the all-aspect-angle microwave images of one type of target lie in a linear subspace. In the presence of large differences in aspect angle, this assumption is inappropriate due to the sensitivity of microwave images to aspect angle. According to the scattering characteristics of a target, the microwave images are, in essence, characterized by the electromagnetic scattering mechanism.
Consequently, only microwave images with approximately the same aspect angles have strong correlation, and those with different aspect angles have weak correlation. This is, in fact, the aspect-dependent structural characteristic of microwave images. For example, images at a 6° aspect angle have a close correlation with images in the 1°–11° aspect angle range, while the correlation with images at other aspect angles is weak, especially when the difference in aspect angle is large. Accordingly, the sparse representation of a testing image should only involve training samples with high aspect correlation with the testing image, rather than all SAR training images. Meanwhile, the dictionary matrix is not optimal if constituted by all of the 0°–360° training samples: the resulting excess of dictionary entries makes computation and storage unnecessarily taxing.

The main contribution of the current study lies in the presentation of a microwave image recognition method based on aspect-aided dynamic non-negative least square (ADNNLS) sparse representation. Figure 3 shows a schematic view of the proposed algorithm. First, the aspect angle of a testing sample is estimated. Taking the estimated aspect angle as the center, a sector is then defined. Training samples of all targets in the sector are considered effective dictionary atoms; the others, not in the sector, are null atoms and are replaced with zero matrices. However, estimates of the aspect angle are subject to errors. To prevent actual dictionary atoms from falling outside the effective sector, the sector cannot be too small, so there are still some atoms in the sector that have weak correlations with the testing sample. In order to further narrow down the range of effective atoms, they were divided into two groups via smooth self-representation clustering learning.
The first group considered the testing sample to be the clustering center, and the second considered the training sample furthest from the testing sample to be the center. It is worth noting that the clustered samples included the testing sample, rather than only the training samples in the sector. In this way, the selection of effective atoms could be enhanced. Then, only the effective atoms in the testing sample's group were preserved; these are here called active atoms. The remaining atoms were set to zero matrices. Hence, the corresponding dictionary atoms of different testing samples need to be updated dynamically, which is why the method is called aspect-dependent dynamic sparse representation. In this way, the size of the dictionary did not change, with the active atoms keeping their original values and invalid atoms becoming zero matrices. Then ℓ1-regularized non-negative least square sparse representation was used to perform regression representation of the testing sample. Decision making for the testing sample was then performed based


on the principle of minimum reconstruction error. The advantage of the proposed model is that the quantity of effective data in the training samples is reduced, and the interference of invalid atoms with large differences in aspect angle is eliminated, thereby improving the recognition accuracy. Additionally, the large number of zero matrices in the dynamic dictionary helps to reduce the required computation time and data storage space in sparse solving. Based on the Moving and Stationary Target Acquisition and Recognition (MSTAR) public database, several experiments were conducted to evaluate the performance of the proposed algorithm. The target microwave images in the database covered 0°–360° aspect angles, which enabled verification of the proposed aspect-aided dynamic non-negative least square sparse representation. The results were then compared to those of state-of-the-art algorithms.

Figure 3. Flowchart of microwave image recognition based on aspect-aided dynamic non-negative least square sparse representation.

This paper is organized as follows: Section 2 describes the algorithm for aspect-aided dynamic selection of active atoms. In Section 3, the non-negative least square sparse representation algorithm is reviewed, and the microwave image recognition algorithm based on aspect-aided dynamic non-negative least square sparse representation is presented. In Section 4, experiments carried out on the MSTAR database and comparisons between the proposed algorithm and current algorithms are described. Conclusions are given in Section 5.

2. Aspect-Aided Dynamic Active Atoms Selection

The characteristics of target microwave images obtained from microwave imaging radar are closely related to their aspect angle. When the aspect angle changes severely, the pixel distribution in the target images also shows abrupt changes. In this study, only the influence of the aspect angle was analyzed, because it has the most dramatic influence on microwave image characteristics, and the depression angle did not change much in the experimental data.


2.1. Estimation of the Testing Sample Aspect Angle

As given above, the aspect angle of a testing sample needs to be estimated in the microwave image recognition algorithm based on ADNNLS sparse representation. It was also needed in template-matching microwave image recognition [6]. Current estimation methods, which are based only on the information in the testing sample images, mainly use a technique such as the Radon transform to estimate the target aspect angle [29]. These methods do not consider using training samples with known aspect angles. Actually, in microwave image recognition tasks, there are large numbers of training samples with known aspect angles, which can be used to estimate the testing sample's aspect angle. Thus, we present a simple but effective testing sample aspect angle estimation method based on the training sample aspect angles; this is an innovation of the current study. The precondition of the method is that the training sample aspect angles are known and distributed evenly across 0°–360°. The principle of the method is that the testing sample is closely correlated to training samples that have a similar aspect angle. Hence, the correlation coefficients between the testing sample and the training samples are calculated, and the aspect angle of the training sample with the largest correlation coefficient is taken as the estimate for the testing sample. Assuming that the testing sample is A and a certain training sample is B, the correlation coefficient r is calculated as below:

r = \frac{\sum_m \sum_n (A_{mn} - \bar{A})(B_{mn} - \bar{B})}{\sqrt{\left[\sum_m \sum_n (A_{mn} - \bar{A})^2\right] \left[\sum_m \sum_n (B_{mn} - \bar{B})^2\right]}}   (1)

Herein, \bar{A} is the mean of A, \bar{B} is the mean of B, and A_mn, B_mn are the pixel values at row m and column n of A and B, respectively.

The microwave images of target BMP2-9563 were used for verification. Correlation analyses were performed between one testing image of target BMP2-9563 at a 15° depression angle and 48.49° aspect angle and the training images of targets BMP2-9563, BTR70-C71, and T72-132 at a 17° depression angle and all aspect angles. Figure 4 shows the variation curve of the correlation coefficients. As shown, the training sample with the largest correlation coefficient was the 34th training sample of BMP2-9563. The corresponding aspect angle was 46.49°, which was then taken as the estimate of the testing sample's aspect angle; there was an error of 2° relative to the true aspect angle of the testing sample. Figure 5 shows the errors for all testing samples of BMP2-9563. As shown, the error was within about 5° for most cases and slightly above 10° for some. Data analysis showed that the mean estimated aspect angle error for the BMP2-9563 testing samples was 0.19°, with a standard deviation of 2.76°. A similar trend was found for the other testing samples in the experiments.

This method of estimation has the following advantages: (1) it fully utilizes the aspect angle information of the training samples; (2) the calculations are simple; and (3) the problem of distinguishing target head and tail while using only the testing sample's information is avoided.

Once the aspect angle of the testing sample has been estimated, it is taken as the center to determine a sector. For example, if the estimated aspect angle of the testing sample is 40°, we can determine a sector whose aspect angle range is from 25° to 55°. All types of training sample subsets in the sector are considered effective atoms. The other training samples, not in the sector, are set to zero matrices. A preliminary dictionary is composed of the effective atoms and the zero-matrix atoms.
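Equation (1) and the sector selection just described can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function names are ours, and the 15° half-width follows the ±15° sector discussed below.

```python
import numpy as np

def corrcoef_2d(A, B):
    """Correlation coefficient r of Equation (1) between images A and B."""
    a = A - A.mean()
    b = B - B.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))

def estimate_aspect(test_img, train_imgs, train_aspects):
    """Aspect angle of the most-correlated training sample (the estimate)."""
    r = [corrcoef_2d(test_img, t) for t in train_imgs]
    return train_aspects[int(np.argmax(r))]

def sector_mask(train_aspects, center, halfwidth=15.0):
    """True for atoms whose aspect lies in the sector; handles 0°/360° wrap."""
    diff = np.abs((np.asarray(train_aspects) - center + 180.0) % 360.0 - 180.0)
    return diff <= halfwidth
```

Training samples outside the mask would then be replaced with zero matrices to form the preliminary dictionary.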

Figure 4. Correlation coefficient curve between a BMP2-9563 testing sample (depression angle 15°, aspect angle 48.49°) and all training samples of targets BMP2-9563, BTR70-C71, and T72-132.

Figure 5. Estimation error in aspect angle of all testing samples of BMP2-9563.

2.2. Enhanced Atoms Selection in Aspect Sector Based on Smooth Self-Representation

The corresponding aspect sector was determined based on the aspect angle estimate of the testing sample, and the effective atoms were chosen. However, as shown above, the maximum estimation error might be as high as 15°. In order to prevent the omission of any effective atoms, the sector size should be at least ±15°, namely covering an area of 30°. Hence, the effective atom subsets contain redundancy. The smooth self-representation learning clustering method was adopted in this study to further refine these subsets. The clustering reported by Shafiee et al. was only for the training samples themselves [30]. Here, clustering was performed for both the testing sample and the training samples. The aim of clustering was to divide the effective atoms into two groups: one group containing the atoms closely correlated to the testing sample, and the other containing those with low correlations to the testing sample. Therefore, clustering was conducted with two centers: the current testing sample and the atom furthest from the current testing sample.

The smooth self-representation clustering algorithm presented by Han et al. was used in this study [31]. In clustering, the grouping effect is very important. Enforced grouping effect conditions are introduced in the smooth self-representation algorithm; this is what allows the algorithm to achieve such a strong clustering effect.

Assume the dataset of the testing sample and effective atoms is X = [x_1, x_2, ..., x_n] ∈ R^{m×n}, where each x_i, i = 1, . . . , n is a sample arranged as a column vector; x_1


is the testing sample, and x_n is the atom farthest from the testing sample. The clustering process is as follows [31]. First, the number of subspaces for clustering is set to d = 2, with the two clustering centers x_1 and x_n. Then a k-NN graph W is built, and the corresponding Laplacian matrix \tilde{L} is calculated [31]. Next, the self-representation matrix U is obtained by solving the following optimization problem:

\min_U f(U) = \alpha \| X - XU \|_F^2 + \mathrm{tr}(U \tilde{L} U^T)   (2)

Here, α > 0 is a control parameter, ‖·‖_F is the Frobenius norm, and tr(·) is the trace of a matrix. Equation (2) is a smooth convex programming problem. According to [31], the optimal solution U* can be found by solving the Sylvester matrix equation (3):

\alpha X^T X U^* + U^* \tilde{L} = \alpha X^T X   (3)

Thereafter, the affinity matrix J is calculated using Equation (4):

J = \left( |U^*| + |U^*|^T \right) / 2   (4)
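Equations (3) and (4) map directly onto a standard Sylvester solver. The sketch below is an illustration under stated assumptions: the k-NN graph uses a Gaussian kernel on Euclidean distances (the paper defers the graph construction details to [31]), and the function names are ours.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def knn_laplacian(X, k=3):
    """Unnormalized Laplacian of a k-NN similarity graph.

    X holds one sample per column; the Gaussian kernel on Euclidean
    distances is an assumption here, not taken from the paper.
    """
    n = X.shape[1]
    d2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)  # pairwise dists
    W = np.exp(-d2 / (d2.mean() + 1e-12))
    np.fill_diagonal(W, 0.0)
    for i in range(n):
        # keep only each node's k strongest neighbours
        W[i, np.argsort(W[i])[: n - k]] = 0.0
    W = np.maximum(W, W.T)  # symmetrise
    return np.diag(W.sum(axis=1)) - W

def self_representation(X, alpha=1.0, k=3):
    """Solve Eq. (3), alpha*X^T X U + U L = alpha*X^T X, then form J of Eq. (4)."""
    G = alpha * (X.T @ X)
    L = knn_laplacian(X, k)
    U = solve_sylvester(G, L, G)          # Sylvester equation A U + U B = Q
    J = (np.abs(U) + np.abs(U.T)) / 2.0   # symmetric affinity matrix
    return U, J
```

The affinity matrix J would then be fed to a standard spectral clustering routine with d = 2 clusters.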

Then, the final clustering result is obtained via the spectral clustering algorithm [31,32]. According to the clustering results, the atoms in the cluster centered on the testing sample are the refined effective atoms, here called active atoms. The other atoms are replaced with zero vectors. Thus, the active atoms and zero-vector atoms constitute the final dictionary.

3. Non-Negative Least Square Sparse Representation Classifier

3.1. Basic Sparse Representation Classifier

Lately, sparse representation has gained widespread application in image restoration, pattern recognition and hyperspectral image classification [33]. The success of the sparse signal model mainly lies in the sparse representation of an observed signal with a group of known observation samples (a dictionary). The optimal sparse representation can be solved via convex optimization methods [34,35].

Given sufficient training samples of each of K types (k = 1, . . . , K), with X_k = [x_{k,1}, x_{k,2}, . . . , x_{k,n_k}] ∈ R^{m×n_k}, the samples are arranged as column vectors in the space R^m. An unknown observation sample y ∈ R^m is represented as a linear combination of the k-th type of training samples with the representation coefficient α_k = [α_{k,1}, α_{k,2}, . . . , α_{k,n_k}]^T ∈ R^{n_k}:

y = x_{k,1} \alpha_{k,1} + x_{k,2} \alpha_{k,2} + \ldots + x_{k,n_k} \alpha_{k,n_k}   (5)

It is assumed that each type of sample forms an approximate linear subspace. Because the class label of the observation sample y is unknown, it is represented with all K types of training samples of the training set, X = [X_1, X_2, . . . , X_K] ∈ R^{m×n}, where n = \sum_{k=1}^{K} n_k is the total number of training samples. Then the linear representation of y is written as follows:

y = X_1 \alpha_1 + X_2 \alpha_2 + \ldots + X_K \alpha_K = X\alpha   (6)

Herein, α = [α_1, α_2, . . . , α_K]^T ∈ R^n is the representation coefficient vector. Theoretically, the optimal representation α is a coefficient vector in which the elements corresponding to the class of the observation sample y are non-zero, and all other elements are zero. When m < n, the solution of Equation (6) is not unique. A common method is to add a sparsity constraint on the representation coefficient vector, thereby searching for the sparsest representation as below [36]:

\min_\alpha \|\alpha\|_0 \quad \text{s.t.} \quad \|y - X\alpha\|_2 \le \varepsilon   (7)


Herein, ||·||_0 denotes the ℓ0-norm of α, and ε is the allowable error. Since the minimal-ℓ0-norm optimization problem is NP-hard, Equation (7) can only be solved approximately with greedy algorithms such as matching pursuit and orthogonal matching pursuit [37,38]. According to compressed sensing theory [39], Equation (7) can be relaxed to the ℓ1-norm minimization:

    min_α ||α||_1   s.t.  ||y − Xα||_2 ≤ ε    (8)

Herein, ||·||_1 is the ℓ1-norm of α. Since Equation (8) is a convex optimization problem, it can be solved with traditional optimization algorithms [40]. In the sparse representation classifier, after the optimal sparse representation is obtained, the class label of y is determined by the law of minimal reconstruction error:

    identity(y) = arg min_{k=1,...,K} ||y − X_k α_k||_2    (9)

where α_k denotes the sub-vector of coefficients associated with the k-th class.
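To make Equations (7)–(9) concrete, here is a minimal sketch of the classifier in Python on a synthetic toy dictionary. It is an illustration, not the paper's implementation: the ℓ1 problem is solved with a plain ISTA loop rather than the solvers cited above, and the names `ista_l1` and `src_classify` are our own.

```python
import numpy as np

def ista_l1(X, y, lam=0.1, n_iter=500):
    """Solve min_a 0.5*||y - X a||_2^2 + lam*||a||_1 with ISTA."""
    L = np.linalg.norm(X, 2) ** 2              # Lipschitz constant of the gradient
    a = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ a - y)                  # gradient of the quadratic term
        z = a - g / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

def src_classify(X, y, class_of_atom, n_classes):
    """Equation (9): pick the class whose atoms best reconstruct y."""
    a = ista_l1(X, y)
    errors = [np.linalg.norm(y - X[:, class_of_atom == k] @ a[class_of_atom == k])
              for k in range(n_classes)]
    return int(np.argmin(errors))

# toy demo: two classes living in disjoint coordinate subspaces
rng = np.random.default_rng(0)
X = np.zeros((8, 6))
X[:4, :3] = rng.normal(size=(4, 3))            # class-0 atoms
X[4:, 3:] = rng.normal(size=(4, 3))            # class-1 atoms
X /= np.linalg.norm(X, axis=0)                 # unit-norm atoms
labels = np.array([0, 0, 0, 1, 1, 1])
y = 0.9 * X[:, 4] + 0.1 * X[:, 5]              # mixture of class-1 atoms
print(src_classify(X, y, labels, 2))           # → 1
```

Because the two classes occupy disjoint coordinate subspaces here, the class-0 reconstruction error stays at ||y|| while the class-1 error is driven close to zero, so the decision rule recovers the correct label.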

The above is the commonly used basic sparse representation classifier (SRC) [41].

3.2. Non-Negative Least Square Sparse Representation Classifier

In SRC, a sparse representation coefficient may be either positive or negative, although a negative value has no physical meaning in practical applications. Because of the presence of negative coefficients, different atoms may cancel each other out, which does not conform to the visual perception mechanism. Previous studies have suggested that non-negativity is more consistent with the human visual perception mechanism and therefore provides a more effective representation [15,16]. Meanwhile, to a certain extent, non-negativity induces further sparsity. In terms of microwave image recognition, the non-negative constraint eliminates atoms in the sparse solution that are negatively correlated with the input image, which is more reasonable for this task. In the present study, the ℓ1-regularized non-negative least square sparse representation classifier was adopted [16]:

    min_β (1/2) ||y − Xβ||_2^2 + λ ||β||_1   s.t.  β ≥ 0    (10)

Herein, y is the testing sample, X is the dictionary of training samples, and λ is the regularization parameter. Equation (10) can be transformed into the following non-negative quadratic programming (NNQP) problem:

    min_β (1/2) β^T H β + w^T β   s.t.  β ≥ 0    (11)

Herein, H = X^T X and w = λ1 − X^T y, where 1 is the all-ones vector. Problem (11) can be solved with the Active-Set algorithm [16]. After obtaining the non-negative sparse coefficient vector β, the law of minimal reconstruction error is again used to identify the label of the testing sample:

    identity(y) = arg min_{k=1,...,K} ||y − X_k β_k||_2    (12)
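As a hedged sketch of the shape of this computation (the paper itself uses the Active-Set solver of [16], which is not reproduced here), the NNQP problem (11) can also be handled with a simple projected-gradient loop on a synthetic dictionary:

```python
import numpy as np

def nnls_l1(X, y, lam=0.05, n_iter=2000):
    """Projected gradient for Eq. (11): min_b 0.5*b'Hb + w'b, b >= 0,
    with H = X'X and w = lam*1 - X'y (the l1 term is linear once b >= 0)."""
    H = X.T @ X
    w = lam * np.ones(X.shape[1]) - X.T @ y
    step = 1.0 / np.linalg.norm(H, 2)          # safe step size (1/L)
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        b = np.maximum(b - step * (H @ b + w), 0.0)  # gradient step + projection
    return b

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 5))
X /= np.linalg.norm(X, axis=0)                 # unit-norm atoms
y = 0.7 * X[:, 2]                              # y built from a single atom
b = nnls_l1(X, y)
print(np.round(np.linalg.norm(y - X @ b), 3))  # small residual, all b >= 0
```

The projection `np.maximum(..., 0.0)` enforces the non-negativity constraint at every step, which is what removes negatively correlated atoms from the solution; the class decision of Equation (12) is then applied to the resulting β exactly as in SRC.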

3.3. Microwave Image Classification with ADNNLS Sparse Representation In summary, the steps of microwave image recognition based on ADNNLS sparse representation are listed in Algorithm 1 as follows:

Algorithm 1: ADNNLS sparse representation for microwave image recognition

Input: X_{m×n}: training samples of all classes; Y_{m×p}: all testing samples
Output: the identity of Y_{m×p}
Steps:
1) A testing sample y_i is selected from Y_{m×p}, and its correlation coefficient vector r_{y_i} with all training samples is calculated. The aspect angle θ_{y_i} of the testing sample is estimated as the aspect angle of the training sample showing the largest correlation coefficient.
2) An aspect sector is determined based on the estimated aspect angle, and the training samples of all classes in the sector are included in a set of effective atoms. Based on smooth self-representation clustering, the set of active atoms X'_{y_i} is obtained.
3) A dynamic dictionary X''_{y_i} is created according to X'_{y_i}.
4) Using the non-negative least square sparse representation algorithm, the testing sample y_i is represented with the dynamic dictionary X''_{y_i}, thereby deriving the representation coefficient β_{y_i}.
5) The class label of the testing sample y_i is obtained based on the law of minimal reconstruction error.
6) If all testing samples have been classified, go to step 7); otherwise, go back to step 1).
7) End
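The steps above can be sketched as follows. This is a simplified illustration, not the authors' implementation: the sector step keeps every atom inside the sector (the smooth self-representation clustering of step 2 is omitted), the NNLS of step 4 is solved by projected gradient instead of the Active-Set method, and the helper names (`estimate_aspect`, `adnnls_classify`) are hypothetical.

```python
import numpy as np

def estimate_aspect(y, X_train, train_aspects):
    """Step 1: aspect angle of the most correlated training sample."""
    corr = X_train.T @ y / (np.linalg.norm(X_train, axis=0) * np.linalg.norm(y))
    return train_aspects[int(np.argmax(corr))]

def adnnls_classify(y, X_train, train_aspects, train_labels, sector=30.0, lam=0.05):
    """Steps 2-5 of Algorithm 1 (clustering step omitted in this sketch)."""
    theta = estimate_aspect(y, X_train, train_aspects)
    # circular aspect distance, then keep atoms inside the sector (step 2/3)
    diff = np.abs((train_aspects - theta + 180.0) % 360.0 - 180.0)
    active = diff <= sector / 2.0
    D, labels = X_train[:, active], train_labels[active]
    # step 4: l1-regularized non-negative least squares on the dynamic dictionary
    H, w = D.T @ D, lam - D.T @ y
    b = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(H, 2)
    for _ in range(2000):
        b = np.maximum(b - step * (H @ b + w), 0.0)
    # step 5: minimal per-class reconstruction error
    classes = np.unique(labels)
    errs = [np.linalg.norm(y - D[:, labels == c] @ b[labels == c]) for c in classes]
    return classes[int(np.argmin(errs))]

# synthetic demo: 2 classes, one atom every 10 degrees of aspect
rng = np.random.default_rng(2)
aspects = np.tile(np.arange(0.0, 360.0, 10.0), 2)
labels = np.repeat([0, 1], 36)
X = rng.normal(size=(20, 72))
X /= np.linalg.norm(X, axis=0)
y = X[:, 40] + 0.05 * rng.normal(size=20)      # near the class-1 atom at 40 degrees
print(adnnls_classify(y, X, aspects, labels))  # class 1 for this synthetic setup
```

The key point the sketch shows is that only atoms inside the estimated aspect sector enter the dictionary, so atoms with large aspect differences cannot interfere with the decomposition.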

4. Experiments and Discussion

4.1. Description of the Dataset

The MSTAR database of microwave images obtained using synthetic aperture radar was used in the experiments. The microwave imaging radar sensor worked in the X band, and the image resolution was 0.3 × 0.3 m. The images of three types of targets, i.e., BMP2, BTR70, and T72, were adopted. Optical images of the three targets are shown in Figure 6, and several microwave images of them with near aspect angles are shown in Figure 7. Specifically, the target BMP2 has three configuration types (BMP2-9563, BMP2-9566, and BMP2-C21), BTR70 has one type (BTR70-C71), and T72 has three configuration types (T72-132, T72-812, and T72-S7). The microwave images of the three targets at 17° depression angle were taken as training samples, and the images at 15° depression angle were taken as testing samples. For each type of target, the images covered the 0°–360° aspect range. The sample type, depression angle, target type, and sample numbers are shown in Table 1. First, pre-processing was carried out for all training and testing samples by logarithm transformation, which compresses the dynamic range and changes the multiplicative speckle noise into additive noise. Because the original images contained background clutter, a 60 × 60 sub-image centered at the original image center was extracted to reduce interference; all targets were located within the sub-image.
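The pre-processing just described can be sketched as follows; the epsilon guard inside the logarithm and the function name `preprocess` are our additions.

```python
import numpy as np

def preprocess(img, crop=60):
    """Log-transform a magnitude image (multiplicative speckle -> additive noise)
    and extract a centered crop, as described above."""
    log_img = np.log10(img + 1e-10)            # compress dynamic range
    r0 = (img.shape[0] - crop) // 2
    c0 = (img.shape[1] - crop) // 2
    return log_img[r0:r0 + crop, c0:c0 + crop]

# demo on a synthetic 128x128 magnitude chip
chip = np.abs(np.random.default_rng(3).normal(size=(128, 128))) + 1.0
out = preprocess(chip)
print(out.shape)                               # (60, 60)
```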

Figure 6. Optical images of three targets. (a) BMP2; (b) BTR70; (c) T72.


Figure 7. Microwave images of three targets at near aspect angles.

Table 1. Name and number of the training and testing samples.

Number of Targets           | 1            | 2            | 3           | 4            | 5          | 6          | 7
Training sample type (17°)  | BMP2 sn-9563 | BMP2 sn-9566 | BMP2 sn-c21 | BTR70 sn-c71 | T72 sn-132 | T72 sn-812 | T72 sn-s7
Number                      | 233          | 232          | 233         | 233          | 232        | 231        | 228
Testing sample type (15°)   | BMP2 sn-9563 | BMP2 sn-9566 | BMP2 sn-c21 | BTR70 sn-c71 | T72 sn-132 | T72 sn-812 | T72 sn-s7
Number                      | 195          | 196          | 196         | 196          | 196        | 195        | 191

4.2. Classification Results

The classification performance of the proposed ADNNLS algorithm was analyzed first. To demonstrate the advantages of ADNNLS in microwave image classification, one testing sample of BMP2-9563, one of BTR70-C71, and one of T72-132 were selected, and their sparse coefficient distributions and reconstruction error graphs were compared with those of SRC and NNLS. The dictionaries in SRC and NNLS comprised the global-aspect training samples of all target types: 233 training samples of BMP2-9563, 232 of BMP2-9566, 233 of BMP2-C21, 233 of BTR70-C71, 232 of T72-132, 231 of T72-812, and 228 of T72-S7. In ADNNLS, the training samples were selected within the aspect sector after smooth self-representation clustering, so even though the dictionary was the same size as in SRC and NNLS, the number of effective non-zero atoms was much smaller. Sparse coding was carried out for the three testing samples with the corresponding dictionaries, producing three sparse coefficient vectors and the corresponding reconstruction error graphs, as shown in Figures 8–10. As shown in Figure 8b, the maximum sparse coefficient corresponded to BMP2-9563. Comparison of Figure 8c–e suggests that the minimal reconstruction error for SRC and NNLS corresponded to an incorrect classification, while the minimal reconstruction error for ADNNLS identified the correct type. As shown in Figure 9c–e, although all three algorithms correctly identified the type, the reconstruction error of the BTR70-C71 sample in ADNNLS was far smaller than those of the other types. Similarly, Figure 10 shows the classification of a testing sample of T72-132: using SRC, it was incorrectly identified as BMP2-9563; using NNLS, it was identified as BMP2-9566.

In traditional SRC and NNLS, without the assistance of the aspect angle, sparse decomposition is performed across the global aspect range of 0°–360°, namely using training samples at all aspect angles. For the ADNNLS algorithm, sparse decomposition and reconstruction are both performed within the local aspect sector, as described in detail in Algorithm 1. Therefore, because of the sensitivity of microwave images to aspect angle, ADNNLS was able to capture more aspect-related information than SRC and NNLS without being disturbed by training samples with large differences in aspect angle. These results indicate that the ADNNLS algorithm can prevent interference from such training samples, thereby improving the classification performance.

Figure 8. (a) Test sample BMP2-9563; (b) Sparse vector obtained based on ADNNLS algorithm; (c) Reconstruction error of ADNNLS algorithm; (d) Reconstruction error of SRC algorithm; (e) Reconstruction error of NNLS algorithm.
Figure 9. (a) Test sample BTR70-C71; (b) Sparse vector obtained based on ADNNLS algorithm; (c) Reconstruction error of ADNNLS algorithm; (d) Reconstruction error of SRC algorithm; (e) Reconstruction error of NNLS algorithm.

Figure 10. (a) Test sample T72-132; (b) Sparse vector obtained based on ADNNLS algorithm; (c) Reconstruction error of ADNNLS algorithm; (d) Reconstruction error of SRC algorithm; (e) Reconstruction error of NNLS algorithm.

In order to analyze the influence of sector size on ADNNLS performance, recognition experiments were carried out with different aspect sectors (30°, 60°, 90°, 180°, and 360°); with a 360° aspect sector, the algorithm reduces to the full-aspect NNLS algorithm. Figure 11 shows the recognition results. The recognition rate peaked at 99.63% for the 30° aspect sector; as the sector increased, the rate gradually decreased, owing to the growing number of confusing atoms. Despite this, the recognition rate was still as high as 97.39% when the sector was 360°.
Figure 11. Variation in the performance of the ADNNLS algorithm with the size of the aspect sector.

To assess the performance of the proposed algorithm, the recognition results of the current ADNNLS algorithm were compared with full-aspect SRC, aspect-aided dynamic SRC (ADSRC), and full-aspect non-negative sparse representation (NNLS). A full-aspect algorithm means that the training samples across all aspects were included in the dictionary; an aspect-aided algorithm means that the proposed aspect-aided smooth self-representation clustering was performed during dictionary design. The recognition experiments were conducted at different dimensions of PCA features. The results are shown in Figure 12.

samples across all aspects were included in the dictionary. An aspect-aided algorithm means the ADNNLS showed the largest recognition rate, which was 99.49%. The maximal recognition rates proposed aspect-aided smooth self-representation clustering was performed during dictionary for SRC, NNLS, and ADSRC were 92.23%, 96.8%, and 96.97%, respectively. Moreover, ADNNLS design. The recognition experiments were conducted at different dimensions of PCA features. The demonstrated good recognition performance at low dimensions. The results validated the effectiveness results are as shown in Figure 12. of the proposed algorithm in microwave images recognition. In addition, the results of the presented

Sensors 2016, 16, 1413

13 of 19

algorithm were compared to the results of recently published IJSRC algorithm [18]. The reason of choosing this article was that it used the same dataset as in the current study. The ADNNLS algorithm showed a recognition rate of 99.83% at 30◦ aspect sector, only two out of 1365 testing samples were incorrectly identified. For IJSRC, the recognition rate was 98.53%, less than our algorithm. IJSRC Sensors 2016, 16, 1413 13 of 19used multi-view joint sparse representation, which was different from ADNNLS. 100

Figure 12. Recognition performance of the four algorithms with varying feature dimensions.

The performance of each algorithm increased with increasing dimension. At 800 dimensions, It is worth noting that the convergence condition used in four algorithms is error tolerance rather ADNNLS showed the largest recognition rate, which was 99.49%. The maximal recognition rates for than sparsity in the manuscript. For comparing four algorithms performance with the same level of SRC, NNLS, and ADSRC were 92.23%, 96.8%, and 96.97%, respectively. Moreover, ADNNLS sparsity, we conducted the following experiment. We fixed sparsity value at 12, which means that demonstrated good recognition performance at low dimensions. The results validated the non-zero elements number of solved sparse coefficient vector is 12. One query sample of BMP2-SN9563 effectiveness of the proposed◦algorithm in microwave images recognition. In addition, the results of with aspect angle 171.49 is selected to compare the representation coefficient vectors from the four the presented algorithm were compared to the results of recently published IJSRC algorithm [18]. The algorithms. According to correspondence between training samples aspect angle and their index reason of choosing this article was that it used the same dataset as in the current study. The ADNNLS number, we know that samples near index number 114 should be selected and correspond to larger algorithm showed a recognition rate of 99.83% at 30° aspect sector, only two out of 1365 testing coefficient values. Figure 13 shows four representation vectors corresponding four algorithms. samples were incorrectly identified. For IJSRC, the recognition rate was 98.53%, less than our SRC and NNLS picked dispersed atoms without aspect angle information aided, which can be algorithm. IJSRC used multi-view joint sparse representation, which was different from ADNNLS. seen from Figure 8a,b. The atoms selected by ADSRC are more assembled than SRC and NNLS. 
It is worth noting that the convergence condition used in four algorithms is error tolerance rather However, from Figure 8d, it can be observed that ADNNLS can select most representative atoms than sparsity in the manuscript. For comparing four algorithms performance with the same level of near index number 114 corresponding larger coefficient values. Figure 8e shows that ADNNLS has sparsity, we conducted the following experiment. We fixed sparsity value at 12, which means that minimum reconstruction error for this testing sample among four algorithms. The advantage of the non-zero elements number of solved sparse coefficient vector is 12. One query sample of BMP2ADNNLS is that it can reduce the interference of invalid atoms with large differences in aspect angle, SN9563 with aspect angle 171.49° is selected to compare the representation coefficient vectors from thereby improving the recognition accuracy. the four algorithms. According to correspondence between training samples aspect angle and their We conducted another experiment to study whether OMP could pick similar dictionary atoms index number, we know that samples near index number 114 should be selected and correspond to as ADNNLS by enforcing the same number of non-zero coefficients from ADNNLS. We select larger coefficient values. Figure 13 shows four representation vectors corresponding four algorithms. one BMP2-SN9563 testing sample with 213.49◦ aspect angle for this experiment. Ideally, those training samples near 213.49◦ in the dictionary will be picked under optimal representation algorithm. According to correspondence between training samples aspect angle and their index number, we know that samples near index number 135 should be selected and correspond to larger coefficient values. We first solve the representative coefficient vector under ADNNLS, which is shown in Figure 14a. The number of non-zero coefficients from ADNNLS is 29. 
The OMP representative vector is solved by enforcing 29 non-zero coefficients, which is shown in Figure 14b. From Figure 14a, it can be seen that there were four atoms near index number 135 picked and one atom corresponded to the largest value in the whole ADNNLS representative vector. However, there was only one atom near index number 135 selected with a small value under the OMP algorithm. From the OMP representing vector, we can see that atoms selected are more dispersed than those picked with ADNNLS. Although OMP is able to 2

0.4

1.5

0.35

0.3

Sparse Representative Coefficient

Sparse Representative Coefficient

1

0.5

0

-0.5

-1

0.25

0.2

0.15

0.1

-1.5

0.05

-2

-2.5

0

0

200

400

600

800 Training Sample Index

(a)

1000

1200

1400

1600

0

200

400

600

800 Training Sample Index

(b)

1000

1200

1400

1600

samples were incorrectly identified. For IJSRC, the recognition rate was 98.53%, less than our algorithm. IJSRC used multi-view joint sparse representation, which was different from ADNNLS. It is worth noting that the convergence condition used in four algorithms is error tolerance rather than sparsity in the manuscript. For comparing four algorithms performance with the same level of sparsity, we conducted the following experiment. We fixed sparsity value at 12, which means that Sensors 2016, 16, 1413 14 of 19 non-zero elements number of solved sparse coefficient vector is 12. One query sample of BMP2SN9563 with aspect angle 171.49° is selected to compare the representation coefficient vectors from the fouraalgorithms. According to select correspondence between training samples aspect angle andresults their acquire sparse solution, it cannot those best represented atoms for a query sample which index number, we know that near results index number 114 should and correspond to in a larger representation error.samples Experiment indicated that OMP be hadselected larger reconstruction error larger coefficient values. Figure 13 shows four representation vectors corresponding four algorithms. than ADNNLS, which can be seen from Figure 14c. 2

Figure 13. (a) Sparse coefficient vector obtained based on SRC algorithm; (b) Sparse coefficient vector obtained based on NNLS algorithm; (c) Sparse coefficient vector obtained based on ADSRC algorithm; (d) Sparse coefficient vector obtained based on ADNNLS algorithm; (e) Reconstruction error of four algorithms.

SRC and NNLS picked dispersed atoms without the aid of aspect angle information, as can be seen from Figure 13a,b. The atoms selected by ADSRC are more concentrated than those of SRC and NNLS. However, from Figure 13d, it can be observed that ADNNLS selects the most representative atoms near index number 114, corresponding to larger coefficient values. Figure 13e shows that ADNNLS has the minimum reconstruction error for this testing sample among the four algorithms. The advantage of ADNNLS is that it can reduce the interference of invalid atoms with large differences in aspect angle, thereby improving the recognition accuracy. We conducted another experiment to study whether OMP could pick similar dictionary atoms to ADNNLS by enforcing the same number of non-zero coefficients as ADNNLS. We selected one BMP2-SN9563 testing sample with a 213.49° aspect angle for this experiment. Ideally, the training samples near 213.49° in the dictionary will be picked under an optimal representation algorithm. According to the correspondence between the training samples' aspect angles and their index numbers, we know that samples near index number 135 should be selected and correspond to larger coefficient values.
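The aspect-aided atom restriction and minimum-reconstruction-error rule discussed above can be sketched as follows. The sector width and the per-class least-squares fit are simplifying assumptions for illustration (the paper restricts atoms by smooth self-representative learning and codes with non-negative sparse representation); the function name and parameters are hypothetical.

```python
import numpy as np

def classify_by_reconstruction(D, labels, aspects, y, y_aspect, sector=15.0):
    """Restrict the dictionary to atoms within an aspect sector around the
    query's estimated aspect angle, then assign the class with the
    smallest reconstruction error.

    D       : (m, n) dictionary, one training sample per column.
    labels  : (n,) class label of each atom.
    aspects : (n,) aspect angle of each atom, in degrees.
    """
    # Circular aspect difference in degrees (wraps at 360).
    diff = np.abs((aspects - y_aspect + 180.0) % 360.0 - 180.0)
    active = diff <= sector
    D_act, lab_act = D[:, active], labels[active]
    errors = {}
    for c in np.unique(lab_act):
        Dc = D_act[:, lab_act == c]
        coef, *_ = np.linalg.lstsq(Dc, y, rcond=None)  # class-wise fit
        errors[c] = np.linalg.norm(y - Dc @ coef)
    return min(errors, key=errors.get)
```

Restricting the active set in this way is what prevents atoms with large aspect differences from contributing to the representation, mirroring the concentrated selection seen for ADNNLS.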


Figure 14. (a) Sparse coefficient vector obtained based on ADNNLS algorithm; (b) Sparse coefficient vector obtained based on OMP algorithm; (c) Reconstruction error of two algorithms.

4.3. Robustness to Noise

Additionally, the anti-noise performance of the proposed algorithm was investigated. Gaussian noise with varying Signal-to-Noise Ratio (SNR) values was added to the testing samples to evaluate the robustness of the algorithm. Figure 15 shows the SAR images at different SNRs, and Figure 16 shows the recognition performance curves of the four algorithms at different SNRs. From Figure 16, one can see that the recognition rates of all algorithms increase with higher SNR; however, the ADNNLS algorithm reached a recognition rate of over 90% within a broad SNR range of 5–50 dB, while the other three algorithms all had recognition rates lower than 70% at 5 dB. The results show that ADNNLS is more robust to noise corruption than the other three algorithms.
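The noise-injection step in this experiment can be sketched as below. Measuring signal power as the image's mean squared value is one common convention, assumed here; the paper does not state its exact convention, so this is an illustrative utility rather than the authors' code.

```python
import numpy as np

def add_noise_at_snr(image, snr_db, rng=None):
    """Add white Gaussian noise so the result has the requested SNR in dB,
    with signal power taken as the mean squared pixel value."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(image.astype(float) ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=image.shape)
    return image + noise
```

Sweeping `snr_db` from 0 to 50 over the test set and re-running the classifiers reproduces the kind of recognition-rate curves shown in Figure 16.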

Figure 15. SAR images with different SNRs: (a) original image; (b) 0 dB; (c) 10 dB; (d) 20 dB; (e) 30 dB; (f) 40 dB; (g) 50 dB.

Figure 16. Performance of the algorithms with varying SNRs.

4.4. Experiments Conducted with Depression Angle Variations

In addition to aspect angle changes, depression angle variations of test samples can also significantly affect recognition performance. In these experiments, the four algorithms are evaluated under a large depression angle variation. Four targets, BRDM2, 2S1, SLICY and ZSU23/4, are employed, as illustrated in Figure 17; several SAR image samples of these targets are shown in Figure 18. The dataset used in the depression variation experiments is given in Table 2. In this experiment, images collected at a 17° depression angle are used for training, while images taken at 30° and 45° are used for testing. The experimental results are shown in Table 3.

Figure 17. The optical images of four targets: (a) BRDM2; (b) 2S1; (c) ZSU23/4; (d) SLICY.

Figure 18. Example SAR images of four targets: (a) BRDM2; (b) 2S1; (c) ZSU23/4; (d) SLICY.

Table 2. Dataset used in the experiment.

                     BRDM2    2S1    ZSU23/4    SLICY
Training Set (17°)    298     299      299       298
Testing Set (30°)     298     288      288       288
Testing Set (45°)     298     303      303       303


Table 3. Recognition rates under different depression angles with four classifiers.

Depression    SRC       NNLS      ADSRC     ADNNLS
30°           84.41%    84.71%    93.40%    96.44%
45°           59.82%    65.10%    77.81%    79.21%

As can be seen from Table 3, with test samples at a 30° depression angle, the accuracy of ADNNLS reached a high level of 96.44%, which is 3.04%, 11.73% and 12.03% better than ADSRC, NNLS and SRC, respectively. When test samples at a 45° depression angle are used, the performance of all four algorithms declines. The reason is that large differences arise between the training and test samples when the depression angle changes from 17° to 45°; in this condition, the over-fitting issue in all algorithms is much more serious. However, ADNNLS still achieves an accuracy of 79.21%, which is 1.40%, 14.11% and 19.39% better than ADSRC, NNLS and SRC, respectively. The experimental results verify that ADNNLS is the most robust to depression variation among the four algorithms. This is because the aspect constraint and smooth self-representation reduce the confusion caused by large depression variation, which ensures the accurate inference ability of the sparse representation.

5. Conclusions

In the present study, an aspect-aided dynamic non-negative least square sparse representation model was proposed to classify target images collected by microwave imaging radar sensors, fully utilizing the aspect-dependent characteristic of microwave images. In ADNNLS sparse representation, the aspect sector of active atoms is determined based on the aspect angle of the testing sample. Then a smooth self-representation strategy is used within the sector to choose the training samples most correlated with the testing sample, thereby constructing the dynamic dictionary. This strategy helps find effective support training samples for the corresponding testing sample and effectively prevents interference from training samples that have poor aspect correlation with the testing sample. Experiments on the MSTAR database showed that ADNNLS significantly outperformed SRC and NNLS and reached the recognition level of state-of-the-art algorithms. Meanwhile, it showed promising anti-noise behavior.

Acknowledgments: This work was supported by the National Natural Science Foundation of China under Grant No. 61301224; the GaoFen Major Project Youth Innovation Fund under Grant No. GFZX04060103; and the National High-tech R&D Program of China (863 Program) under Grant No. 2015AA01A706. This work was also supported by the Basic and Advanced Research Project in Chongqing under Grant No. cstc2016jcyjA0134.

Author Contributions: X. Zhang conceived and designed the experiments, analyzed the data, and wrote the paper. Q. Yang and M. Liu performed the anti-noise experiments. Y. Jia, S. Liu and G. Li gave revision suggestions on the paper.

Conflicts of Interest: The authors declare no conflict of interest.

