Computers in Biology and Medicine 63 (2015) 196–207


A Hybrid Swarm Algorithm for optimizing glaucoma diagnosis

Chandrasekaran Raja a,*, Narayanan Gangatharan b

a Department of ECE, Anjalai Ammal Mahalingam Engineering College, Kovilvenni 614403, India
b Department of ECE, R.M.K. College of Engineering and Technology, Puduvoyal 601206, India


Abstract

Article history: Received 30 December 2014 Accepted 21 May 2015

Glaucoma is among the most common causes of permanent blindness in humans. Because the initial symptoms are not evident, mass screening would assist early diagnosis in the vast population. Such mass screening requires an automated diagnosis technique. Our proposed automation consists of preprocessing, optimal wavelet transformation, feature extraction, and classification modules. Hyper analytic wavelet transformation (HWT)-based statistical features are extracted from fundus images. Because HWT preserves phase information, it is appropriate for feature extraction. The features are then classified by a Support Vector Machine (SVM) with a radial basis function (RBF) kernel. The filter coefficients of the wavelet transformation process and the SVM-RBF width parameter are simultaneously tailored to best fit the diagnosis by the hybrid Particle Swarm algorithm. To overcome premature convergence, the Group Search Optimizer (GSO) random searching (ranging) and area-scanning behavior (around the optima) are embedded within the Particle Swarm Optimization (PSO) framework. We also embed a novel potential-area scanning as a preventive mechanism against premature convergence, rather than diagnosis and cure. This embedding does not compromise the generality and utility of PSO. In two 10-fold cross-validated test runs, the diagnostic accuracy of the proposed hybrid PSO exceeded that of conventional PSO. Furthermore, the hybrid PSO maintained the ability to explore even at later iterations, ensuring maturity in fitness. © 2015 Elsevier Ltd. All rights reserved.

Keywords: Glaucoma; Hyper analytic wavelet transform; Hybrid PSO–GSO; Feature extraction; Support Vector Machines

1. Introduction

Glaucoma is among the leading causes of blindness worldwide. Furthermore, the loss of vision caused by glaucoma is irreversible [1]. The most common type, primary open angle glaucoma, now affects 60.5 million people worldwide. This number is expected to rise to 79.6 million in 2020, as the global population ages [2]. In the US, the projected number of patients with primary open angle glaucoma is 7.32 million in 2050 [3]. In a 2012 study, the affected population above 40 years of age in India was reported as 11.2 million [4]. Glaucoma is a widespread optic neuropathy characterized by optic disc damage and visual loss [5]. Glaucoma is conventionally diagnosed by intraocular pressure (>22 mmHg without medication), glaucomatous cupping of the optic disc, and glaucomatous visual field defects [6]. The agents that influence optic disc size, such as age, gender, race, refractive error, axial length (AL), and central corneal thickness (CCT), have been extensively investigated [7]. The importance of optic disc size as an independent risk factor for glaucomatous optic neuropathy remains controversial [7] and presents a serious drawback to structural feature extraction for automated diagnosis.

* Corresponding author. E-mail address: [email protected] (C. Raja).

http://dx.doi.org/10.1016/j.compbiomed.2015.05.018 0010-4825/© 2015 Elsevier Ltd. All rights reserved.

Early diagnosis of glaucoma is crucial for preventing permanent structural damage and irreversible vision loss [8]. Currently available imaging techniques for examining the RNFL (Retinal Nerve Fiber Layer) thickness and optic disc in glaucoma include confocal scanning laser ophthalmoscopy (CSLO), optical coherence tomography (OCT), and scanning laser polarimetry (SLP). Each of these techniques uses different technologies and light sources to characterize the distribution of the RNFL and/or the optic disc topography. CSLO builds a three-dimensional representation of the retinal surface height from multiple image sections acquired by confocal technology. OCT determines the thickness of the circumpapillary RNFL by interferometry and a reflection-based edge-detection algorithm. SLP uses light to penetrate the birefringent RNFL and estimates the RNFL thickness from the linear relationship between RNFL birefringence and the retardation of reflected light [9]. A portable retinal camera is a cost-effective method of screening for diabetic retinopathy in isolated communities of at-risk individuals [10]. Bock and Meier highlighted that, unlike structural features, statistical methodologies require no precise measurements of geometric structures because they perform statistical data mining on the image patterns themselves. Statistical methodologies can be transferred to other domains and might extract additional parameters, providing new insights into other ophthalmic questions [11].


Mass screening would help identify glaucoma among the vast population but requires an automated glaucoma diagnosis technique [12]. Diseased regions in digital fundus images are diagnosed by structural and statistical computations. Statistical analysis has become the recent trend in automated glaucoma diagnosis and is commonly performed by wavelet analysis. Wavelets are mathematical functions that decompose data into their frequency components. Each component is then investigated at the resolution that matches its scale [13]. The approximate coefficients decompose the image into dyadic resolutions, while the detailed coefficients provide significant edge information. The discrete wavelet transform (DWT) is less suitable for pattern recognition tasks because it lacks shift invariance and has poor directional selectivity [14]. Kingsbury [14] introduced quadrature phase-shifted filter coefficients, which simultaneously operate on the original data to preserve the phase information. Because they incorporate the phase spectrum along with the conventional magnitude spectrum, the wavelet coefficients significantly improve feature extraction [15]. The statistical parameters extracted from the wavelet coefficients are energy and entropy, both of which are suitable for feature extraction from retinal images [16]. The HWT is another wavelet transformation that inherits the classical mother wavelets [17]: the input data, rather than the coefficients, are phase-shifted, and the coefficients are left unmodified. Moreover, HWT admits application-specific optimization because the coefficients can be generated from randomly generated angular parameters. The features obtained from the wavelet transformation are classifiable by a wide range of available classifiers. The SVM is a supervised learning method used for classification and regression [18], which is extensively applied to biomedical signal classification [19].
Basically, SVM transforms the feature space into a hypothesis space of linearly separable targets. The kernel functions of the SVM introduce non-linearity to the hypothesis space without explicitly requiring a nonlinear algorithm [18]. In SVM, the selection of an appropriate kernel function and its parameter values [19] is of primary importance. The statistical parameters (for feature extraction) and the classification parameters (for decision making) are simultaneously optimized to attain the best performance. Particle Swarm Optimization (PSO) is an evolutionary optimization algorithm inspired by the social navigation of animal groups, proposed by Kennedy and Eberhart [20,21]. Biological phenomena such as swarming and flocking give rise to collective intelligence as an innovative distributed optimization. PSO is recognized for its global search ability [22], ease of implementation, few constraints, and fast convergence [23]. The swarms are initialized by random encoding. Like any other optimization, the initial swarms should be constraint-free, randomly producible quantities. For example, in this work, the angular parameters are initiated and tailored to modulate their derived wavelet coefficients. In the execution stage, the swarms browse the universe for the ultimate solution (optima) by iteratively updating their position through velocity modulations. In turn, the velocity is modulated by the inertia weight factor and also influenced by the local optima (self-memory) and global optima (social experience). The inertia weight factor trades off the local optimum, which promotes exploitation of a zone, against the global optimum, which encourages the swarm to explore the universe. The velocity is also modulated by the previous velocity [24]. In PSO, a particular member's own best-fit score over all iterations is the fitness of the local best, the p-best. The best-fit member in the population over all iterations is called the g-best.
Another bio-inspired optimization algorithm is Group Search Optimization (GSO). This algorithm relies on direct fitness information rather than collateral information [25]. The chance of finding the ultimate fitness is proportional to the available


resources, the efficiency of mining and locating resources, and adaptability to the environment [26]. The GSO framework is primarily grounded on the producer–scrounger model, which assumes that group members search either by finding (producers) or by joining (scroungers). The finding-other (ranger) role is a peculiar, privileged searching strategy that discloses potential zones. The population is initialized, and the fitness of all members is evaluated (accuracy of diagnosis). The fittest member is assigned the producer role. The producer scans a certain distance in some initial direction, searching for fitter nearby solutions. In this manner, the producer ensures that it remains optimal in the zone, a strategy that is unavailable in PSO. Scroungers are less prominent members with lower fitness. They join the producers in searching the same optimum. Rangers are also less prominent members; they navigate the space in any orientation, preventing entrapment in local optima [26]. All members can change their roles depending on their fitness in a given iteration. In any optimization technique (Genetic Algorithm (GA), PSO, GSO), wrapping the parameters that belong to different steps of the optimization leads to more reliable outcomes as the algorithm proceeds. The importance of simultaneously wrapping the parameters at different stages of the operation is highlighted in [27]. Liu et al. [27] simultaneously optimized the SVM kernel and feature selection. They referred to the methodology of integrating the optimization of different parameters as "Improved". In the works of Liu et al. [27] and Huang et al. [28], the SVM kernel parameter, integrated with other parameters, is optimized by PSO. Huang [29] optimized the SVM-RBF kernel width along with the features using a binary-coded GA. The chromosomes in GA resemble the swarm members in PSO.

2. Premature convergence problem in PSO

The standard PSO converges over time and loses diversity [27]. Nakisa et al. [30] noted that the conventional PSO algorithm satisfactorily finds optimal solutions but often fails to find the universally applicable solution. Therefore, whether PSO has reached the global or a local optimum cannot be known. If PSO fails to find the global optimum, the swarm becomes trapped in premature convergence [30]. Nakisa et al. [30] state that premature convergence will always prevail in conventional PSO because the entire search space must be matched to obtain the optimal resolution. As the goals of maintaining high diversity and achieving fast convergence are partially antagonistic, it makes sense to reduce the number of sub-optimal solutions found in the optimization. To remedy premature convergence and other problems in PSO, researchers have modified the PSO parameters, incorporated mutations within the PSO framework, hybridized PSO and GA, or adopted other evolutionary techniques [30]. The inertia weight balances exploration (large inertia weight) and exploitation (small inertia weight) [31–33]. A linearly decreasing inertia weight favors exploitation as the iterations proceed [34,35]. Eventually, the algorithm is expected to converge to a profitable zone. In another interesting modification of conventional PSO, swarms are induced to attract when far away and repel when they approach too closely [36]. Tang and Zhou [37] applied a mutation fix when premature convergence was identified. Chuang et al. [38] reset the global optimum after a long period of no update.

2.1. Diagnosis of premature convergence

In most research, premature convergence is diagnosed by monitoring the variation in the fitness function. For example, Tiang [32] assessed the normalized difference between the


variance of the fitness of a member and the average fitness. Premature convergence is also detected when the fitness variance falls below a specified threshold and the fittest member does not reach the expected fitness [37]. In [27], duplicated particles with very similar fitness functions are identified and subsequently withdrawn. Zhan et al. [39] quantified premature convergence by monitoring the evolutionary status of the swarm throughout the iterations.

2.2. Premature convergence therapy and prevention

In most works, the universe comprises members identified by their fitness values. For instance, Zeng et al. [40] described the universe as the search space of objective functions. Conversely, Jie et al. [41] defined the premature state as the convergence of particles around one stage. These authors state that, to avoid premature convergence and maintain an active particle population, one should mainly adjust the distance between the particles and the optimal position. Nakisa et al. [30] addressed the problem of non-scanning of potential areas when the swarm becomes trapped in a sub-optimal zone. In this case, the swarm cannot explore other promising areas. In [42], premature convergence is diagnosed by a consistent g-best throughout the iterations and is corrected by a leap. Other researchers use the Euclidean distance as a diagnostic parameter of PSO convergence. Jie et al.'s [41] algorithm pushes a member aside if its Euclidean distance becomes too close to an optimum, to avoid ambush by premature convergence. Zhang et al. [43] adopted a hybrid Differential Evolution (DE)-PSO, which updates the particle positions by DE if the Euclidean distance between the p-best and the current best neighbor becomes minimal. Our work identifies a particle by its location and quantifies premature convergence in terms of the Euclidean distance between the particles' positions.
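The Euclidean-distance view of convergence can be sketched as follows. This is a minimal illustration, not an algorithm from the cited works: the function names, the mean-pairwise-distance diversity measure, and the threshold value are our assumptions.

```python
import numpy as np

def swarm_diversity(pos):
    """Mean pairwise Euclidean distance between particle positions.
    pos: (members, dims) array; small values suggest premature convergence."""
    diff = pos[:, None, :] - pos[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    m = pos.shape[0]
    return d.sum() / (m * (m - 1))  # average over ordered pairs, excluding self-pairs

def converged(pos, threshold=1e-3):
    """Flag premature convergence when diversity drops below a threshold
    (the threshold is illustrative only)."""
    return swarm_diversity(pos) < threshold
```

A swarm whose members have collapsed onto one point yields zero diversity and is flagged, whereas a spread-out swarm is not.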

2.3. Hybrid PSO

Various efforts have been made to embed the unique operators of one technique into another methodology. For example, the mutation and crossover operators of GA can be adopted in PSO [44]. Specifically, hybridization improves the local searching ability of PSO [44]. In the hybrid PSO–GA approach of Yang et al. [45], a genetic operation is executed on selected members following a PSO step. Zeng et al. switched between PSO and GSO modes [40]. They combined the step search mechanism of PSO with the angle search mechanism of GSO, securing the advantages of both algorithms.

2.3.1. Rangers in the PSO framework

Rangers undertake random walks, considered the most efficient way to seek randomly located resources. In PSO, members always remain clustered in a group; none of the members fly away to search other spaces [46,47]. By contrast, GSO finds new clues leading toward the prey (a better location) through the random-walk style of the rangers. Rather than diagnosis and cure, our work prevents premature convergence by a ranging action, which is executed on an equal scale and parallel to PSO searching. The rangers assist in identifying potential zones. Potential members are defined as members with fitness comparable to the g-best but located far from the g-best (in terms of Euclidean distance). The potential zone is scanned for better solutions. If a better member is found within the potential zone, the swarm escapes the local optimum. In addition, as a preventive measure against premature convergence, a ranger may directly become the ultimate member. Since the rangers are embedded at the stage of finding new optima, our algorithm preserves the generality and utility of conventional PSO.

2.3.2. Scanning around the optimal member in the PSO framework

The poor local search ability of PSO has rarely been addressed. Ant Colony Optimization (ACO) has been adopted for local searching in the PSO framework [48], and the solution obtained at each step has been refined by sequential quadratic programming (SQP) [49]. In our work, we embed the "scanning around the optima" strategy of GSO into the PSO framework.

3. Methods

The images used in this study are taken from the RIM-ONE database. Of the 158 extracted images, 84 and 74 belong to the normal and diseased classes, respectively. The operation of the

[Figure: input images are pre-processed and transformed by the hyper-analytic wavelet transform; the resulting features are classified by the SVM and the fitness evaluated. The hybrid PSO iteratively updates the population of wavelet angular parameters (θ1, θ2, …, θN) and the SVM σ parameter until the termination criterion is met, yielding the fittest wavelet coefficients and the fittest σ parameter.]

Fig. 1. Block diagram of the proposed algorithm.


hybrid PSO is shown in Fig. 1. The input images are pre-processed before the wavelet transformation. The hyper-analytic wavelet transform coefficients and the SVM-RBF-σ parameter are simultaneously optimized as detailed in Section 3.6. The images are transformed by the optimized wavelet functions and the statistical features are extracted. The SVM classifier learns the features in a supervised fashion and tests whether a new set of features matches the learned features of any class.

3.1. Pre-processing

Prior to feature extraction, the fundus images are pre-processed by grayscale conversion and histogram equalization.

3.1.1. Gray scale conversion

The fundus images in tri-planar RGB format are reformatted to a uniplanar grayscale. The intensity varies from 0 (absolute black) to 255 (absolute white). The 512 × 512 pixel image is represented in a matrix of the same size. A single intensity value per pixel is appropriate for the statistical operations.
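The two pre-processing steps of Section 3.1 can be sketched as below. This is a minimal NumPy sketch: the luma weights for the RGB-to-gray conversion and the CDF-based equalization scheme are our assumptions, as the paper does not specify the exact formulas.

```python
import numpy as np

def to_grayscale(rgb):
    """Tri-planar RGB (uint8) to uniplanar grayscale (0-255) via luma weights
    (assumed weights; the paper does not give the conversion formula)."""
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    return gray.astype(np.uint8)

def equalize_histogram(gray):
    """Spread the gray levels over the full 0-255 range using the
    cumulative histogram (CDF) as a lookup table."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize CDF to [0, 1]
    lut = (cdf * 255).astype(np.uint8)
    return lut[gray]
```

Applying `equalize_histogram` to a low-contrast fundus image stretches its intensity range so the subsequent statistics are computed over the full gray scale.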


3.1.2. Histogram equalization

Generally, the raw grayscale images do not cover the entire range of gray levels (0 to 255) but are concentrated in a certain region of the scale. This concentration limits the perceived brightness and contrast of the images. For statistical processing, the histogram is equalized such that the gray levels span the full 0–255 range. Representative fundus images of normal and glaucomatous eyes are shown in Fig. 2.

3.2. Hyper-analytic wavelet transformation (HWT)

The wavelet transform conveys the information content of an image as a hierarchy of its frequency content, subsequently providing the significant statistics of the data. In HWT, the input data (not the wavelet coefficients) are Hilbert transformed. The Hilbert transform is defined such that if $F_1(\omega)$ and $F_2(\omega)$ are Hilbert transform pairs, then $F_1(\omega) = \pm jF_2(\omega)$, where $j$ is negative (positive) for positive (negative) frequencies. The two low-pass filters then form orthogonal bases [50]. The hyper-analytic wavelet associated with an image $f(x,y)$ and its 2D-DWT $\Psi(x,y)$ is denoted $\Psi_h(x,y)$:

$$\Psi_h(x,y) = \Psi(x,y) + i\,H_x\{\Psi(x,y)\} + j\,H_y\{\Psi(x,y)\} + k\,H_x H_y\{\Psi(x,y)\}. \qquad (1)$$

In Eq. (1), the terms i, j, and k denote imaginary numbers in two dimensions. $\Psi$ represents the wavelet transformation of the corresponding orientation (enclosed in braces). The complex terms i, j, and k refer to the wavelet transformations applied to the Hilbert-transformed rows, columns, and both dimensions of the image, respectively. Each of these terms preserves the phase in the corresponding direction(s). In our work, features are extracted by adding and subtracting the first term of (1) and the k-term, thereby obtaining the spectra in $\pi/4$ and $-\pi/4$, respectively. Similarly, the i- and k-terms are added and subtracted to obtain the spectra in different orientations. The outcome is four operations. Finally, for each of the four sub-bands of the first-level wavelet decomposition (A: approximate, H: horizontal, V: vertical, D: diagonal), we obtain four manipulated outcomes. Hence, 16 outputs are available from the four sub-bands.

3.2.1. Feature extraction

The mean, energy and entropy were extracted from the sub-band outcomes of the HWT. In previous studies, the mean gray-level intensity, as well as the entropy and energy, has provided significant statistical quantification of the sub-bands [18]. Raja et al. [16] quantified the uniqueness and discriminatory potential of the energy and entropy measures in quantifying wavelet sub-bands. According to Huang et al. [51], the energy measures the response of the image to the specific scale and orientation of the filters, while the measured randomness of the gray levels in a sub-band subjectively quantifies the texture. The mean intensity, energy and entropy of an m × n sub-band $\omega$ are respectively calculated as follows:

$$\text{mean}(\omega) = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n}\omega(i,j)}{m\,n} \qquad (2)$$

$$\text{energy}(\omega) = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n}\{\omega(i,j)\}^{2}}{m^{2}+n^{2}} \qquad (3)$$

$$\text{entropy}(\omega) = -\sum_{i=1}^{m} p_i \log_2 p_i. \qquad (4)$$

3.3. Support Vector Machines

The problem is to categorize a set of n training vectors which are linearly non-separable. We denote the training vectors by $T_i = [T_1, T_2, T_3, \ldots, T_n]$ and the targets by $T_i \in out$, where $out = [+1, -1]$. The classes are distinguished by the following linear discrimination formula:

$$g(x) = W^{t} \cdot T + b, \qquad (5)$$

where W is the weight vector normal to the hyperplane of class separation, and b is the bias of the plane corresponding to the classes. Positive and negative conditions can be expressed in terms of the class assignment out:

$$out_i\,(W^{t} T_i + b) \ge 1. \qquad (6)$$

$W \cdot T + |b|/\lVert w \rVert$ is the perpendicular distance of the hyperplane to the origin [52]. To properly bias the classes, this distance should exceed a certain amount $\nu$:

$$W \cdot T + \frac{|b|}{\lVert w \rVert} \ge \nu. \qquad (7)$$

The bias is optimized by simultaneously minimizing $\lVert w \rVert$ and maximizing $|b|$. The quadratic optimization problem is solved by Lagrange multipliers, minimizing the Lagrangian with respect to w and b. Applying the Karush–Kuhn–Tucker conditions, the problem transforms to a dual Lagrange problem [18]. In a nonlinear SVM, the linearly non-separable data are translated into a linearly separable hypothetical space by a kernel function. The dual Lagrangian to be optimized becomes [52]

$$\max L_D(\theta) = \sum_{i=1}^{n} \theta_i - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \theta_i \theta_j\, out_i\, out_j\, k, \qquad (8)$$

where $\theta$ represents the dual variables, and k denotes the kernel function. As the kernel function, we selected a radial basis function given by

$$k = \exp\!\left(-\frac{\lVert T_i - T_j \rVert^{2}}{2\sigma^{2}}\right), \qquad (9)$$

where $\sigma^2$ is the variance.
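The sub-band statistics of Eqs. (2)–(4) and the RBF kernel of Eq. (9) can be sketched as below. This is a minimal sketch: the function names and the 256-bin histogram used to estimate $p_i$ are our assumptions, since the paper does not specify how the gray-level probabilities are computed.

```python
import numpy as np

def subband_features(w):
    """Mean, energy and entropy of an m x n sub-band, per Eqs. (2)-(4).
    p_i is estimated from a 256-bin histogram (assumed, not from the paper)."""
    m, n = w.shape
    mean = w.sum() / (m * n)                      # Eq. (2)
    energy = (w ** 2).sum() / (m ** 2 + n ** 2)   # Eq. (3)
    hist, _ = np.histogram(w, bins=256)
    p = hist[hist > 0] / hist.sum()               # keep non-empty bins only
    entropy = -(p * np.log2(p)).sum()             # Eq. (4)
    return mean, energy, entropy

def rbf_kernel(ti, tj, sigma):
    """Radial basis function kernel of Eq. (9)."""
    d = ti - tj
    return np.exp(-np.dot(d, d) / (2.0 * sigma ** 2))
```

A constant sub-band yields zero entropy, and identical feature vectors yield a kernel value of 1, the kernel's maximum.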


Fig. 2. Samples of raw and preprocessed images. Gray scale image (left) and histogram equalized image (right) of a normal (top) and glaucomatous (bottom) eye.

3.4. Particle Swarm Optimization

The initial population generated in PSO contains n × m members, where m is the number of swarms, $mem = \{1, 2, \ldots, m\}$, and n is the length of each swarm, $dim = \{1, 2, \ldots, n\}$. The position and velocity of a member are represented by the matrices $pos_{dim,mem} = \{pos_{1,1}, \ldots, pos_{n,m}\}$ and $vel_{dim,mem} = \{vel_{1,1}, \ldots, vel_{n,m}\}$, respectively. In PSO, each member updates its own velocity and position by Eqs. (10) and (14):

$$vel^{iter}_{dim,mem} = W \cdot vel^{iter-1}_{dim,mem} + c_1 \cdot r_1\,\big(best\_pos(mem)^{\forall iter}_{dim} - pos^{iter}_{dim,mem}\big) + c_2 \cdot r_2\,\big(best\_pos^{\forall iter}_{dim} - pos^{iter}_{dim,mem}\big). \qquad (10)$$

Here vel and pos represent the velocity and position, mem and dim represent the member and dimension, iter represents the iteration number, and W denotes the inertia weight that controls the impact of the previous velocity on the current velocity. The fitness of the memth member in the iterth iteration is calculated as follows:

$$Fit^{iter}(mem) = accuracy^{iter}(mem), \qquad (11)$$

where accuracy means the accuracy of the SVM. The term $best\_pos(mem)^{\forall iter}_{dim}$ represents the fittest position (the p-best) of a given member over all iterations:

$$pbest = best\_pos(mem)^{\forall iter}_{dim} = \max\big\{Fit^{\forall iter}(mem)\big\}. \qquad (12)$$

The term $best\_pos^{\forall iter}_{dim}$ represents the fittest position (the g-best) of all members in the population over all iterations:

$$gbest = best\_pos^{\forall iter}_{dim} = \max\big\{Fit^{\forall iter}(\forall mem)\big\}. \qquad (13)$$

The second and third terms in (10) describe the orientation of a member towards the p-best and the g-best, and are multiplied by the cognitive acceleration constant $c_1$ and the social acceleration constant $c_2$, respectively. Based on its current velocity, each member updates its position by Eq. (14). The velocity is restricted to a predefined range $[-v_{max}, v_{max}]$:

$$pos^{iter+1}_{dim,mem} = pos^{iter}_{dim,mem} + vel^{iter}_{dim,mem}. \qquad (14)$$

3.5. Group Search Optimizer

The conventional GSO proceeds through several steps: population initialization, fitness evaluation of the initial population, declaration and execution of the members' roles in each iteration (as producers, scroungers or rangers), and modification of these roles based on the current fitness in the next iteration.

3.6. PSO–GSO hybridization

Fig. 3 is a flowchart of the hybridized PSO–GSO. In PSO, non-optimal members are oriented towards the optima. This action is governed by the loop (the non-optimal members towards the optima) in Fig. 3, which successively regenerates the population if a better member is not found in the current iteration. This curling action limits the search space, leading to the premature
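A minimal sketch of the update rules (10) and (14) follows. The function name and the numeric values of W, c1, c2 and vmax are illustrative assumptions; the paper does not state its parameter settings at this point.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(pos, vel, pbest_pos, gbest_pos, w=0.7, c1=1.5, c2=1.5, vmax=0.5):
    """One PSO iteration: velocity update per Eq. (10), clipped to
    [-vmax, vmax], then position update per Eq. (14).
    pos, vel, pbest_pos: (dim, mem) arrays; gbest_pos: (dim,) array."""
    r1 = rng.random(pos.shape)
    r2 = rng.random(pos.shape)
    vel = (w * vel
           + c1 * r1 * (pbest_pos - pos)             # cognitive term (p-best)
           + c2 * r2 * (gbest_pos[:, None] - pos))   # social term (g-best)
    vel = np.clip(vel, -vmax, vmax)                  # restrict to [-vmax, vmax]
    return pos + vel, vel
```

Starting from the origin with all p-best and g-best positions at one, every member moves towards the bests, and no velocity component exceeds the vmax bound.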


[Figure: in each iteration, the PSO population performs the PSO search (non-optimal members are oriented towards the optima) while the ranger population performs a random search and a local scan; if a better member is found it replaces the optimum, otherwise the population is regenerated.]

Fig. 3. Flowchart of the hybrid-PSO population behavior during one iteration.

convergence problem commonly encountered in PSO. To prevent premature convergence, we execute a large-scale ranging action parallel to the PSO search. This action, programmed in an additional loop, increases the chance of exploration. The rangers identify the potential zones that might otherwise be neglected; these zones are then scanned to improve the result. The probability of identifying potential zones is kept uncompromised by specifying a fixed proportion of rangers (50% of the hybrid population). Escape from premature convergence occurs as a ranger transforms into a producer or as potential zones are identified. The PSO–GSO executes by a fusion of the processes described in Sections 3.4 and 3.5. The PSO population and its rangers are initialized in the PSO framework, and the optimal member (g-best) is identified. The global optima and potential members are scanned to identify better members.
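The selection of potential members (fitness comparable to the g-best, yet far from it in Euclidean distance) can be sketched as below. The function name and both threshold values are illustrative assumptions; the paper does not give numeric criteria.

```python
import numpy as np

def potential_members(pos, fit, gbest_pos, gbest_fit,
                      fit_tol=0.05, min_dist=1.0):
    """Indices of members whose fitness is within fit_tol of the g-best
    fitness but whose position lies at least min_dist away from the
    g-best position (Euclidean). Thresholds are illustrative only.
    pos: (members, dims); fit: (members,)."""
    dist = np.linalg.norm(pos - gbest_pos, axis=1)
    near_fit = fit >= gbest_fit - fit_tol
    return np.where(near_fit & (dist >= min_dist))[0]
```

The zones around the returned members would then be scanned for solutions better than the current g-best, giving the swarm a route out of a local optimum.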

3.6.1. PSO initialization

The hybrid algorithm is initialized in the PSO framework. The entire process runs through maxiter iterations (specified as iter = (1 to maxiter) in Fig. 3). The hybrid population is generated in the first iteration (50% PSO population and 50% ranger population). The processes that follow are the execution of the member roles, fitness evaluation, identification of local and global optima, and population regeneration in subsequent iterations. In the initial population, half of the members are assigned as rangers.

3.6.2. Evolution of the PSO population and rangers

Generation of the wavelet coefficients from the angular parameters: The population is randomly generated with angular parameters ranging from 0 to 2π radians. To generate the wavelet coefficients from the angular parameters, the poly-phase factorization method was proposed by Vaidyanathan et al. [53]. Their algorithm allows deriving orthonormal perfect-reconstruction finite impulse response (FIR) filters of arbitrary length as

$$H_p(Z) = \begin{pmatrix} H_{00}(Z) & H_{01}(Z) \\ H_{10}(Z) & H_{11}(Z) \end{pmatrix} = \left[\prod_{i=1}^{N-1} \begin{pmatrix} C_i & S_i \\ -S_i & C_i \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & Z^{-1} \end{pmatrix}\right] \begin{pmatrix} C_0 & S_0 \\ -S_0 & C_0 \end{pmatrix}. \qquad (15)$$

In Eq. (15), the first and second columns correspond to the even and odd taps of the filters; the first and second rows correspond to the low-pass and high-pass coefficients. $C_i$ and $S_i$ are the cosine and sine of the input angular parameters, respectively. In this context, an elegant way to determine the coefficients of a filter bank has been developed by Sherlock and Monro [54]. The form leads to the following recursive formulae for the even-numbered coefficients:

$$h_0^{(k+1)} = c_k h_0^{(k)} \qquad (16)$$

$$h_{2i}^{(k+1)} = c_k h_{2i}^{(k)} - s_k h_{2i-1}^{(k)} \quad \text{for } i = 1, 2, \ldots, k-1 \qquad (17)$$

$$h_{2k}^{(k+1)} = -s_k h_{2k-1}^{(k)} \qquad (18)$$

and the following recursive formulae for the odd-numbered coefficients:

$$h_1^{(k+1)} = s_k h_0^{(k)} \qquad (19)$$

$$h_{2i+1}^{(k+1)} = s_k h_{2i}^{(k)} + c_k h_{2i-1}^{(k)} \quad \text{for } i = 1, 2, \ldots, k-1 \qquad (20)$$

$$h_{2k+1}^{(k+1)} = c_k h_{2k-1}^{(k)} \qquad (21)$$

- The first even ($h_0^{(k+1)}$) and odd ($h_1^{(k+1)}$) coefficients are obtained by modulating the previous stage's first even coefficient ($h_0^{(k)}$) with the cosine and sine of the angular parameter, respectively, in Eqs. (16) and (19).
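The recursion of Eqs. (16)–(21) can be sketched as below. The function name and the base case $h^{(1)} = (\cos\theta_0, \sin\theta_0)$ are our assumptions; the paper does not state the starting stage explicitly.

```python
import numpy as np

def sm_lowpass(thetas):
    """Sherlock-Monro recursion (Eqs. (16)-(21)): grow a 2N-tap
    orthonormal low-pass filter from N angular parameters."""
    c, s = np.cos(thetas), np.sin(thetas)
    h = np.array([c[0], s[0]])  # assumed stage k = 1: two taps
    for k in range(1, len(thetas)):
        prev = h
        new = np.zeros(2 * (k + 1))
        new[0] = c[k] * prev[0]                                  # Eq. (16)
        new[1] = s[k] * prev[0]                                  # Eq. (19)
        for i in range(1, k):                                    # intermediate taps
            new[2 * i] = c[k] * prev[2 * i] - s[k] * prev[2 * i - 1]      # Eq. (17)
            new[2 * i + 1] = s[k] * prev[2 * i] + c[k] * prev[2 * i - 1]  # Eq. (20)
        new[2 * k] = -s[k] * prev[2 * k - 1]                     # Eq. (18)
        new[2 * k + 1] = c[k] * prev[2 * k - 1]                  # Eq. (21)
        h = new
    return h
```

Because each stage applies plane rotations, the resulting filter has unit norm and satisfies the double-shift orthogonality required of an orthonormal perfect-reconstruction filter for any choice of angles, which is what makes the angular parameters a constraint-free PSO encoding.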


For comparison purposes, we execute both conventional and hybrid PSO algorithms. The PSO population is appended once to generate the hybrid population. The PSO population is initially regenerated as described in Section 3.4. During the regeneration, the non-optimal members of the PSO population are oriented towards the global and local optima. The ranger society avoids local entrapment by random walking and encountering new clues leading to the prey (a better position) [47]. In each iteration, the rangers search in random directions $D_i^k(\varphi^{k+1})$ over a random distance $l_i$ for better solutions [26]:

$$x_i^{k+1} = x_i^k + l_i\,D_i^k(\varphi^{k+1}). \qquad (22)$$

After all iterations, the optimal wavelet coefficient is selected as the g-best.

3.6.3. Local area scan

The steps that belong to the producer in the GSO framework are executed for the PSO's g-best. Specifically, the maximum pursuit distance $l_{max}$ is calculated as the norm of the upper bound minus the lower bound of each dimension of the population. The initial distance is an n-dimensional vector containing the $l_{max}$ values for all dimensions. The initial angle vector is a random (n−1)-dimensional vector, $\varphi_i \in R^{n-1}$. The initial direction is an n-dimensional vector derived from the initial angle vector as follows [26]:

1. The first element of the initial direction vector is the product of the cosines of all angular elements.
2. The second to penultimate elements (the jth element) are the product of the cosines from the jth to the final angular element, multiplied by the sine of the (j−1)th angular element.
3. The last element of the initial direction vector is the product of the sines of all the angular elements.

Thus, the initial direction is an n-dimensional column vector derived from the (n−1)-dimensional initial angle vector. The initial distance and direction are utilized for g-best scanning as detailed in Section 3.6.4.
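The three steps above can be sketched as below, implemented exactly as the section lists them; the function name is our assumption.

```python
import numpy as np

def direction_from_angles(phi):
    """Map an (n-1)-dimensional angle vector to an n-dimensional
    direction vector, following steps 1-3 of Section 3.6.3."""
    n1 = len(phi)                      # n - 1 angles
    d = np.empty(n1 + 1)
    d[0] = np.prod(np.cos(phi))        # step 1: product of all cosines
    for j in range(1, n1):             # step 2: intermediate elements
        d[j] = np.sin(phi[j - 1]) * np.prod(np.cos(phi[j:]))
    d[n1] = np.prod(np.sin(phi))       # step 3: product of all sines
    return d
```

For example, two angles produce a three-dimensional direction vector; the PSO position vectors and these direction vectors therefore share the same dimension n.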

Fig. 4. Scaling (a, c) and wavelet functions (b, d) of Test runs 1 and 2. The horizontal axis represents the sample number of the filter taps; the vertical axis in the left and right panels represents the scaling and wavelet coefficients, respectively. (a) Scaling function of Test Run 1, (b) Wavelet function of Test Run 1, (c) Scaling function of Test Run 2 and (d) Wavelet function of Test Run 2.

• The last even (h^{k+1}_{2k}) and odd (h^{k+1}_{2k+1}) coefficients are obtained by modulating the last odd coefficients of the previous stage (h^k_{2k−1}) with the negative of the sine and the cosine of the angular parameter, respectively, in Eqs. (18) and (21).
• The intermediate even terms are evaluated as the previous stage's intermediate-even terms multiplied by the cosine of the angular parameter, plus the previous stage's intermediate-odd terms multiplied by the negative of the sine of the angular parameter, in Eq. (17).
• The intermediate odd terms are evaluated as the previous stage's intermediate-even terms multiplied by the sine of the angular parameter, plus the previous stage's intermediate-odd terms multiplied by the cosine of the angular parameter, in Eq. (20).

3.6.4. g-Best scanning from the initial distance and direction

Once the population has been generated and regenerated, its fitness is evaluated. Let the fittest member (g-best) be the pth member of the population. Here, the fitness is directly equated to diagnostic accuracy. Scanning from the initial angle and direction is implemented as follows [26]:

X_z = X_p^k + r_1 · l_max · D_p^k(φ^k),    (23)

where X_p^k denotes the g-best in the kth iteration, r_1 is a random number, and l_max and D_p^k(φ^k) denote the initial distance and direction, respectively. The g-best scans to the left and to the right (over its local field). The left and right scanning limits are obtained by adding a positive and a negative constant, respectively, to the initial angle:

X_z = X_p^k + r_1 · l_max · D_p^k(φ^k + r_2 · θ_max/2),    (24)

X_z = X_p^k + r_1 · l_max · D_p^k(φ^k − r_2 · θ_max/2),    (25)

where r_2 is an (n−1)-dimensional column vector of random numbers and θ_max denotes the maximum pursuit angle (π/2). The fitness functions of these three new points are evaluated. If one of these points is better than the current g-best, the



Table 1
Sample sigma values of the SVM classifier.

Test Run | Wavelet function              | SVM-RBF parameter (σ) | Accuracy attained (%)
1        | Wavelet function un-optimized | 5.2456                | 88.89
1        | Wavelet function-1            | 7.3385                | 93.3
1        | Wavelet function-1            | 7.3386                | 98.2
2        | Wavelet function un-optimized | 6.1146 (18)           | 85.18
2        | Wavelet function-2            | 6.2018 (19)           | 91
2        | Wavelet function-2            | 6.2572 (20)           | 95

Fig. 5. Metrics of the test runs: (a) Metrics of Test Run 1 and (b) Metrics of Test Run 2.

g-best will fly to that point; otherwise it focuses on a new random angle.

3.6.5. Potential area scanning

In each iteration, potential members are searched from both the PSO and ranger populations. A potential member (denoted the potenth member) satisfies the following condition:

Fit_iter(poten) > α · Fit_iter(gbest),    (26)

where α is less than one. Since the accuracy is nominated as the fitness function, the fitness is to be maximized, not minimized; the higher the fitness, the better the solution. The second condition is that the Euclidean distance (ED) given in Eq. (27) between the potenth member and the g-best must be greater than a pre-set threshold:

E.D. = √( Σ_{dim=1}^{n} (poten_dim − gbest_dim)² ).    (27)

Collectively, these two conditions select a member that is sufficiently distant from the g-best yet has a fitness comparable to it. The potential member is then scanned around, in the same manner as g-best scanning.
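The two selection conditions of Eqs. (26) and (27) can be sketched as below. This is an illustrative sketch, not the authors' code; the function name and the default values of α and the distance threshold are our assumptions.

```python
import numpy as np

def find_potential_members(population, fitness, gbest_idx,
                           alpha=0.9, dist_threshold=1.0):
    """Return indices of 'potential' members: fitness close to the
    g-best's (Eq. 26) but located far from it in the search space (Eq. 27).

    population: (m, n) array of positions; fitness: length-m array,
    higher is better. alpha < 1 and dist_threshold are tunable."""
    gbest = population[gbest_idx]
    potential = []
    for i, member in enumerate(population):
        if i == gbest_idx:
            continue
        fit_ok = fitness[i] > alpha * fitness[gbest_idx]          # Eq. (26)
        far_ok = np.linalg.norm(member - gbest) > dist_threshold  # Eq. (27)
        if fit_ok and far_ok:
            potential.append(i)   # this member is then scanned around
    return potential
```

A member that is fit but close to the g-best is ignored, since scanning around it would duplicate the g-best's own local scan; only fit and distant members reveal genuinely new regions.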

3.6.6. Quantifying the exploration capability of the hybrid population

The rangers in our PSO population are expected to maintain exploration until the fitness matures. The rangers' goals are twofold: (i) to become the g-best, and (ii) to disclose one or more potential zones. Zhan et al. [39] quantified premature convergence by evaluating the evolutionary status of the swarm in each iteration. They calculated the average Euclidean distance of each member to all the other members:

d_i = (1/(m−1)) · Σ_{j=1, j≠i}^{m} √( Σ_{k=1}^{dim} (x_i^k − x_j^k)² ),    (28)

where m denotes the number of members in the swarm and dim is the dimension of each member (see Section 3.4). After assessing the d_i, Zhan et al. [39] measured the mean distance d_g of the g-best relative to the minimal and maximal mean distances over the swarm. The evolutionary factor f was then defined as

f = (d_g − d_min)/(d_max − d_min).    (29)

In Eq. (29) the mean distance of the globally best particle is given



Fig. 6. Fitness vs iterations. (a) Best, mean, and worst fitness of PSO and Hybrid PSO (Test Run 1) and (b) Best, mean, and worst fitness of PSO and Hybrid PSO (Test Run 2).

Table 2
Alarm of premature convergence in the test runs (for 100 iterations).

Iteration no. | PSO: Alarm of premature convergence | PSO: Leap-out | Hybrid PSO: Alarm of premature convergence | Hybrid PSO: Leap-out
Test Run-1
1–24 | – | – | – | –
25   | Evolutionary factor: 0.3544; Fitness: 88.89% | – | – | –
43   | Already trapped at 25th iteration | – | Evolutionary factor: 0.3379; Fitness: 93.3% | –
54   | – | – | – | Evolutionary factor: 0.50; Fitness: 98.2%
Test Run-2
1–73 | – | – | – | –
74   | Evolutionary factor: 0.3512; Fitness: 88.89% | – | – | –

as d_g; d_max and d_min are the maximal and minimal mean distances, respectively.

Based on this evolutionary factor, Zhan et al. [39] classified the evolutionary status in a fuzzy manner as convergence, exploitation, exploration, or leap-out.


Out of these four criteria, we adopt the exploitation criterion. We say that an evolutionary factor below 0.4 signifies exploitation. We also specify a required accuracy of 95% in advance. Accordingly, we fix the conditions of premature convergence as follows: if the accuracy is below 95% and the evolutionary factor is less than 0.4, premature convergence is suspected. Thus, convergence (in terms of inter-member distances) is assured only when the fitness has matured.
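The evolutionary factor of Eqs. (28)–(29) and the premature-convergence rule above can be sketched as follows. This is our own illustrative sketch under the thresholds stated in the text (f < 0.4, accuracy < 95%); the function names are ours.

```python
import numpy as np

def evolutionary_factor(population, gbest_idx):
    """Evolutionary factor f of Zhan et al. [39] (Eqs. 28 and 29).

    population: (m, dim) array of member positions."""
    m = population.shape[0]
    # d_i: mean Euclidean distance from member i to all other members (Eq. 28)
    d = np.array([
        np.linalg.norm(population - population[i], axis=1).sum() / (m - 1)
        for i in range(m)
    ])
    d_g, d_min, d_max = d[gbest_idx], d.min(), d.max()
    return (d_g - d_min) / (d_max - d_min)          # Eq. (29)

def premature_convergence_alarm(f, accuracy,
                                f_threshold=0.4, accuracy_target=95.0):
    """Alarm rule used in the text: exploration has decayed (f < 0.4)
    while the fitness (accuracy, in %) has not yet matured (< 95%)."""
    return f < f_threshold and accuracy < accuracy_target
```

Note that f ranges over [0, 1]: f = 1 when the g-best is the most isolated member (strong exploration) and f = 0 when it sits in the densest region of the swarm (exploitation or convergence).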

4. Results

The PSO and ranger populations were evaluated in two 10-fold cross-validation test runs. The population size of the rangers was twice that of conventional PSO, and the initial populations of both subgroups were the same. This setup is appropriate for comparison with conventional PSO. The wavelet coefficients were optimized by fitting the diagnostic accuracy in the two test runs, as shown in Fig. 4. The SVM–RBF width parameter (the sigma parameter) was optimized to around 7.33 and 6.25 in Runs 1 and 2, respectively. Table 1 lists the final optimized values of the sigma parameter; sample values obtained in the intermediate iterations are also listed.

The accuracy, sensitivity, and specificity metrics were evaluated from the numbers of true positives, false positives, false negatives, and true negatives, which are defined as follows:

True positive (TP): a diseased image is correctly identified as diseased.
False positive (FP): a normal image is incorrectly identified as diseased.
False negative (FN): a diseased image is incorrectly identified as normal.
True negative (TN): a normal image is correctly identified as normal.

The three metrics are then evaluated as

Accuracy = (TN + TP)/(TN + TP + FN + FP)
Sensitivity = TP/(TP + FN)
Specificity = TN/(TN + FP)

Fig. 5 shows bar graphs of the accuracy, sensitivity, and specificity obtained in the test runs. The accuracy, sensitivity, and specificity of PSO (hybrid PSO) were 88.88% (98.2%), 80% (97.5%), and 92.37% (98.31%), respectively, in Test Run 1; and 88.88% (95%), 35.48% (95%), and 87.4% (94.92%), respectively, in Test Run 2. Our hybrid algorithm achieved increases of 6% in accuracy, 38.51% in sensitivity, and 6.73% in specificity relative to conventional PSO.
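The three formulas above reduce to a few lines of code; the sketch below (function name ours) returns the metrics as percentages, matching the way they are reported in the text.

```python
def diagnosis_metrics(tp, fp, fn, tn):
    """Accuracy, sensitivity, and specificity (in %) from confusion counts.

    tp/fp/fn/tn follow the definitions in the text: 'positive' means
    the image is flagged as diseased."""
    accuracy = 100.0 * (tn + tp) / (tn + tp + fn + fp)
    sensitivity = 100.0 * tp / (tp + fn)        # diseased images correctly flagged
    specificity = 100.0 * tn / (tn + fp)        # normal images correctly cleared
    return accuracy, sensitivity, specificity
```

For example, with 8 true positives, 1 false positive, 2 false negatives, and 9 true negatives, the call returns an accuracy of 85%, a sensitivity of 80%, and a specificity of 90%.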

5. Discussion

Panels (a) and (b) of Fig. 6 show how the fitness evolves over the generations in Test Runs 1 and 2. The best, mean, and worst fitness of PSO (hybrid algorithm) in Test Run 1 were 88.8% (98.2%), 88.5% (94.5%), and 81.4% (83.5%), respectively. In Test Run 2, the best, mean, and worst fitness of PSO (hybrid algorithm) were 88.8% (95%), 86.6% (89.7%), and 77.7% (79.3%), respectively.


Table 2 lists the alarms of premature convergence triggered in the test runs by the PSO and hybrid algorithms initialised with the same population. The evolutionary factor is calculated as per Eq. (29) with a lower threshold of 0.4; that is, an evolutionary factor below 0.4 indicates decay of exploration of the search space. Further, we coin the term mature convergence, meaning that convergence in the exploration space (an evolutionary factor below 0.4) should happen only after attaining maturity in fitness (>95%); otherwise the alarm is raised.

As shown in Fig. 6(a), conventional PSO converged prematurely around the 25th iteration in Test Run-1. The trap is also evident from Table 2, which indicates that the alarm was raised with an evolutionary factor of 0.3544 (<0.4) and a fitness of 88.89%. In Test Run-2, the conventional PSO escaped entrapment until the 73rd generation, as shown in Fig. 6(b) and Table 2. At the 74th iteration, as shown in Table 2, the exploration decayed with an evolutionary factor of 0.3512 (<0.4) while the fitness (88.89%) had not matured. In both runs, once trapped, the algorithm had no avenue for recovery. Permanent trapping is evident from Fig. 6(a) and (b), as the fitness does not improve after the 25th and 74th iterations.

As shown in Fig. 6(a), the hybrid population also gets trapped at the 43rd iteration in Test Run-1 (Table 2: evolutionary factor 0.3379 with an immature fitness of 93.3%). Still, the hybrid algorithm escapes this situation at the 54th iteration: the evolutionary factor rises to 0.50, indicating enhanced exploration of the search space, with a mature fitness of 98.2%. Throughout the remaining iterations, the fitness remained constant while exploration of the search space continued. In Test Run 2, the hybrid PSO avoided the convergence problem throughout the whole evolution; the ultimate fitness was 95%, with an evolutionary factor of 0.41.
Table A1 summarizes previous attempts to solve the premature convergence problem of conventional PSO. Solutions include hybridization with GSO, Ant Colony Optimization (ACO), and Tabu Search (TS). Zeng and Li [40] and Grosan et al. [47] adopted PSO–GSO switching instead of embedding. The inertia weight is an extensively used parameter for tuning the exploration–exploitation balance. Premature convergence is generally estimated by the Euclidean distance or the fitness function.

6. Conclusion

Once trapped by premature convergence, conventional PSO could not recover in either test run. The hybrid algorithm incorporating a ranger population, however, always escaped such situations. Relative to conventional PSO, the hybrid algorithm increased the fitness by 9.4% and 6.1% in the two test runs; thus, the hybrid PSO consistently outperformed conventional PSO. This improvement was achieved by complementing the superior search ability of PSO with the entropy of the rangers. That the rangers can disclose unexplored potential solutions is evident from their leaps out of premature convergence in all test runs. Moreover, the algorithm maintains the identity of the PSO population, as the ranger population introduces no noise beyond updating the g-best. To further enhance the algorithm, the rangers could be inhibited or accelerated depending on the evolutionary factor of the swarm.



Table A1
Previously proposed solutions to the PSO convergence problem.

No. | Ref. | Hybrids/Modifications | Objective | Parameters optimized
1 | [40] | PSO–GSO hybrid | Accelerate appropriate global convergence of PSO | Special truss structural design parameters
2 | [47] | PSO–GSO hybrid | To achieve global convergence for high-dimensional problems | 4 benchmark functions
3 | [55] | PSO–Tabu Search (TS) hybrid; modified PSO (particle position is modified with respect to velocity thresholds) | To overleap local optima for tumor classification | Gene selection
4 | [56] | PSO–Differential Evolution (DE) | To maintain the population diversity by bell-shaped mutations | Medical image processing; multimodal image registration
5 | [48] | PSO–Ant Colony Optimization (ACO) | To improve the performance of PSO for optimization of multimodal functions | 17 benchmark problems
6 | [39] | Adaptive PSO | Parameter adaptation to achieve global optimality | 17 benchmark problems
7 | [57] | Modification of inertia weights | Improve PSO's performance using adaptive inertia weights | 4 benchmark functions
8 | [41] | Repulse force is inhibited if the distance between the particle and the optima is less | To overcome the precocious defect of the PSO algorithm | 5 benchmark functions
9 | [58] | Particles are reinitiated when the Euclidean distance to the global best is less than a threshold | To overcome premature convergence | 7 benchmark functions
10 | [37] | Hybridized with GA; adaptive mutation is done with respect to the variance of the fitness | To overcome premature convergence | 4 benchmark functions

Conflict of interest statement

None declared.

Appendix A

See Table A1.

References

[1] Y. Mallikarjun, Two molecular mechanisms causing glaucoma found, The Hindu, July 11, 2013, p. 15.
[2] H.A. Quigley, A.T. Broman, The number of people with glaucoma worldwide in 2010 and 2020, Br. J. Ophthalmol. 90 (3) (2006) 262–267.
[3] Thasarat S. Vajaranant, Shuang Wu, A 40-year forecast of the demographic shift in primary open-angle glaucoma in the United States, Investig. Ophthalmol. Vis. Sci. 53 (5) (2012) 2464–2466.
[4] R. George, S. Ve Ramesh, et al., Glaucoma in India: estimated burden of disease, J. Glaucoma 19 (6) (2010) 391–397.
[5] M.B. Shields, Optic Nerve, Retina, and Choroid, Shields Textbook of Glaucoma, 6th ed., 2005, pp. 216–217.
[6] Ryusuke Futa, Tsutomu Shimizu, et al., Clinical features of capsular glaucoma in comparison with primary open-angle glaucoma in Japan, Acta Ophthalmol. 70 (1992) 214–219.
[7] Na Hee Kang, Roo Min Jun, Clinical features and glaucoma according to optic disc size in a South Korean population: the Namil study, Jpn. J. Ophthalmol. 58 (2) (2014) 205–211.

Table A1 (continued)
Mechanisms, merits, and metrics of the surveyed methods.

No. | Ref. | Mechanism | Merits and metrics
1 | [40] | Initialized with GSO; the step-search mechanism of PSO and the area-search mechanism of GSO are fused | PS–GSO converges quicker than conventional PSO
2 | [47] | PSO to explore and GSO to exploit; rangers revise the local space | Better fitness than PSO and GSO
3 | [55] | Tabu Search is incorporated as a local improvement | The cross-validated accuracy is higher for the hybrid algorithm than for pure PSO and pure TS
4 | [56] | Differential evolution is executed for the best image from PSO; to maintain diversity, the differential PSO introduces random mutations | The local-minima problem is overcome; efficient in handling multimodal image registration
5 | [48] | ACO works as a local search procedure; a pheromone-guided mechanism improves the performance of PSO | For higher dimensions (>4), PSACO converges efficiently
6 | [39] | Search efficiency (for global optima) is achieved by evolutionary state estimation and appropriate parameter tuning | Adaptive PSO offers the highest accuracy for five benchmark functions out of 12
7 | [57] | Linearly decreasing inertia weights with respect to iterations | Mean best fitness is zero to about 4 decimal places for the sphere function and relatively low for the other functions
8 | [41] | If the radius between a particle and the optima is smaller, the particle tends to move away | Diversity of the particles is high enough to find an optimal solution
9 | [58] | Particles are reinitiated with random velocity when stuck in local minima; adapts a reactive determination of step size based on feedback from the last iterations | Fast convergence rate; except for the Schaffer and Griewank functions, G-PSO outperforms Basic-PSO and HPSOTVAC on all benchmark functions
10 | [37] | If the variance is low and the fitness of the g-best is low, the g-best is mutated | Adaptive mutation PSO attains the theoretical fitness
[8] Pooja Sharma, Pamela A. Sample, et al., Diagnostic tools for glaucoma detection and management, Surv. Ophthalmol. 53 (1) (2008) S17–S32.
[9] Christopher Bowd, Linda M. Zangwill, Structure–function relationships using confocal scanning laser ophthalmoscopy, optical coherence tomography, and scanning laser polarimetry, Investig. Ophthalmol. Vis. Sci. 47 (7) (2006) 2889–2895.
[10] David Maberley, Hugh Walker, Screening for diabetic retinopathy in James Bay, Ontario: a cost-effectiveness analysis, Can. Med. Assoc. J. 168 (2) (2003) 160–164.
[11] Rüdiger Bock, Jörg Meier, Glaucoma risk index: automated glaucoma detection from color fundus images, Med. Image Anal. 14 (2010) 471–481.
[12] M. Muthu Rama Krishnan, Oliver Faust, Automated glaucoma detection using hybrid feature extraction in retinal fundus images, J. Mech. Med. Biol. 13 (1) (2013) 1–21.
[13] C. Raja, N. Gangatharan, Glaucoma detection in fundal retinal images using trispectrum and complex wavelet-based features, Eur. J. Sci. Res. 97 (1) (2013) 159–171.
[14] Nick Kingsbury, Complex wavelet transform for shift invariant analysis and filtering of signals, J. Appl. Comput. Harmon. Anal. 10 (3) (2001) 234–253.
[15] C. Raja, N. Gangatharan, Incorporating phase information for efficient glaucoma diagnosis through hyper analytic wavelet transform, in: Proceedings of the Fourth International Conference on Soft Computing for Problem Solving, Advances in Intelligent Systems and Computing, vol. 2, 2014, pp. 325–339.
[16] C. Raja, N. Gangatharan, Appropriate sub-band selection in wavelet packet decomposition for automated glaucoma diagnosis, Int. J. Autom. Comput. (2015), http://dx.doi.org/10.1007/s11633-014-0858-6, in press.
[17] I. Firoiu, C. Nafornita, et al., Bayesian hyperanalytic denoising of sonar images, IEEE Geosci. Remote Sens. Lett. 8 (6) (2011) 1065–1069.
[18] M. Muthu Rama Krishnan, U. Rajendra Acharya, et al., Data mining technique for automated diagnosis of glaucoma using higher order spectra and wavelet energy features, Knowl. Based Syst. 33 (2012) 73–82.
[19] Abdulhamit Subasi, Classification of EMG signals using PSO optimized SVM for diagnosis of neuromuscular disorders, Comput. Biol. Med. 43 (2013) 576–586.


[20] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, vol. 4, 1995, pp. 1942–1948.
[21] R. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proceedings of the Sixth International Symposium on Micro Machine and Human Science, IEEE, Nagoya, 1995, pp. 39–43.
[22] Bing Xue, Mengjie Zhang, Will N. Browne, Multi-objective particle swarm optimisation (PSO) for feature selection, in: GECCO'12, Philadelphia, Pennsylvania, USA, 2012.
[23] J. Kennedy, W. Spears, Matching algorithms to problems: an experimental test of the particle swarm and some genetic algorithms on the multimodal problem generator, in: IEEE Congress on Evolutionary Computation, 1998, pp. 78–83.
[24] M.A. Esseghir, Gilles Goncalves, Yahya Slimani, Adaptive particle swarm optimizer for feature selection, in: Intelligent Data Engineering and Automated Learning, vol. 6283, 2010.
[25] Hai Shen, Yunlong Zhu, et al., An improved group search optimizer for mechanical design optimization problems, Prog. Nat. Sci. 19 (1) (2009) 91–97.
[26] Q.H. Wu, J.R. Saunders, Group search optimizer: an optimization algorithm inspired by animal searching behavior, IEEE Trans. Evol. Comput. 13 (5) (2009) 973–990.
[27] Yuanning Liu, Gang Wang, et al., An improved particle swarm optimization for feature selection, J. Bionic Eng. 8 (2) (2011) 191–200.
[28] C.L. Huang, J.F. Dun, A distributed PSO–SVM hybrid system with feature selection and parameter optimization, Appl. Soft Comput. 8 (2008) 1381–1391.
[29] Cheng-Lung Huang, Chieh-Jen Wang, A GA-based feature selection and parameters optimization for support vector machines, Expert Syst. Appl. 31 (2005) 231–240.
[30] Bahareh Nakisa, Mohd Zakree Ahmad Nazri, et al., A survey: particle swarm optimization based algorithms to solve premature convergence problem, J. Comput. Sci. 10 (9) (2014) 1758–1765.
[31] Ahmad Nickabadi, Mohammad Mehdi Ebadzadeh, et al., A novel particle swarm optimization algorithm for adaptive inertia weight, Appl. Soft Comput. 11 (2011) 3658–3670.
[32] Dong ping Tian, A review of convergence analysis of particle swarm optimization, Int. J. Grid Distrib. Comput. 6 (6) (2013) 117–128.
[33] C.S. Yang, L.Y. Chuang, et al., Chaotic maps in binary particle swarm optimization for feature selection, in: Proceedings of the IEEE Conference on Soft Computing in Industrial Applications, 2008, pp. 107–112.
[34] R.C. Eberhart, Y.H. Shi, Comparing inertia weights and constriction factors in particle swarm optimization, in: IEEE Congress on Evolutionary Computation, 2000, pp. 84–88.
[35] Y.H. Shi, R.C. Eberhart, Experimental study of particle swarm optimization, in: SCI2000 Conference, Orlando, 2000.
[36] J. Riget, J.S. Vesterstrøm, A diversity-guided particle swarm optimizer — the ARPSO, Technical Report, 2002.
[37] J. Tang, X. Zhao, Particle swarm optimization with adaptive mutation, in: Proceedings of the International Conference on Information Engineering, 2009, pp. 234–237.
[38] L.Y. Chuang, S.W. Tsai, C.H. Yang, Improved binary particle swarm optimization using catfish effect for feature selection, Expert Syst. Appl. 38 (10) (2011) 699–707.


[39] Zhi-Hui Zhan, Jun Zhang, et al., Adaptive particle swarm optimization, IEEE Trans. Syst. Man Cybern. Part B: Cybern. 39 (6) (2009) 1362–1381.
[40] S.K. Zeng, L.J. Li, Particle swarm-group search algorithm and its application to spatial structural design with discrete variables, Int. J. Optim. Civil Eng. 2 (4) (2012) 443–458.
[41] Z. Jie, F. Chaozan, L. Bo, et al., An improved particle swarm optimization based on repulsion factor, Open J. Appl. Sci. 2 (4B) (2012) 112–115.
[42] R. Cheng, M. Yao, Particle swarm optimizer with time-varying parameters based on a novel operator, Int. J. Appl. Math. Inf. Sci. 5 (2) (2011) 33–38.
[43] C. Zhang, J. Ning, et al., A novel hybrid differential evolution and particle swarm optimization algorithm for unconstrained optimization, Oper. Res. Lett. 37 (2009) 117–122.
[44] Radha Thangaraj, Millie Pant, Particle swarm optimization: hybridization perspectives and experimental illustrations, Appl. Math. Comput. 217 (12) (2011) 5208–5226.
[45] B. Yang, Y. Chen, et al., A hybrid evolutionary algorithm by combination of PSO and GA for unconstrained and constrained optimization problems, in: Proceedings of the IEEE International Conference on Control and Automation, 2007, pp. 166–170.
[46] G.M. Viswanathan, S.V. Buldyrev, et al., Optimizing the success of random searches, Nature 401 (6756) (1999) 911–914.
[47] C. Grosan, A. Abraham, et al., A hybrid algorithm based on particle swarm optimization and group search optimization, in: Seventh International Conference on Natural Computation, Shanghai, 2011.
[48] P.S. Shelokar, P. Siarry, et al., Particle swarm and ant colony algorithms hybridized for improved continuous optimization, Appl. Math. Comput. 188 (2007) 129–142.
[49] P.T. Boggs, J.W. Tolle, Sequential quadratic programming, Acta Numer. 4 (1995) 1–52.
[50] Ivan W. Selesnick, Hilbert transform pairs of wavelet bases, IEEE Signal Process. Lett. 8 (6) (2001) 170–173.
[51] Ke Huang, Selin Aviyente, Statistical partitioning of wavelet subbands for texture classification, in: IEEE International Conference on Image Processing, vol. 1, 2005, pp. I-441–I-444.
[52] Christopher J.C. Burges, A tutorial on support vector machines for pattern recognition, Data Min. Knowl. Discov. 2 (2) (1998) 121–167.
[53] P.P. Vaidyanathan, Multirate Systems and Filter Banks, Prentice-Hall, Englewood Cliffs, NJ, 1993.
[54] B.G. Sherlock, D.M. Monro, On the space of orthonormal wavelets, IEEE Trans. Signal Process. 46 (6) (1998) 1716–1720.
[55] Qi Shen, Wei-Min Shi, et al., Hybrid particle swarm optimization and tabu search approach for selecting genes for tumor classification using gene expression data, Comput. Biol. Chem. 32 (2007) 53–60.
[56] Hassiba Talbi, Mohamed Batouche, Hybrid particle swarm with differential evolution for multimodal image registration, in: IEEE International Conference on Industrial Technology, vol. 3, 2004, pp. 1562–1572.
[57] Y. Shi, R.C. Eberhart, Empirical study of particle swarm optimization, in: Proceedings of the Congress on Evolutionary Computation, vol. 3, 1999.
[58] S. Pasupuleti, R. Battiti, The gregarious particle swarm optimizer, in: Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, New York, 2006, pp. 67–74.
