
IEEE TRANSACTIONS ON CYBERNETICS, VOL. 44, NO. 11, NOVEMBER 2014

A Multiple-Feature and Multiple-Kernel Scene Segmentation Algorithm for Humanoid Robot

Zhi Liu, Shuqiong Xu, Yun Zhang, and Chun Lung Philip Chen

Abstract—This paper presents a multiple-feature and multiple-kernel support vector machine (MFMK-SVM) methodology to achieve more reliable and robust segmentation performance for humanoid robots. The pixel-wise intensity, gradient, and C1 SMF features are extracted via the local homogeneity model and Gabor filters and are used as inputs of the MFMK-SVM model, which provides multiple features of the samples for easier implementation and efficient computation of the MFMK-SVM model. A new clustering method, called the feature validity-interval type-2 fuzzy C-means (FV-IT2FCM) clustering algorithm, is proposed by integrating a type-2 fuzzy criterion into the clustering optimization process to improve the robustness and reliability of the clustering results through iterative optimization. Furthermore, the clustering validity is employed to select the training samples for the learning of the MFMK-SVM model. The MFMK-SVM scene segmentation method is able to fully take advantage of the multiple features of the scene image and the ability of multiple kernels. Experiments on the BSDS data set and real natural scene images demonstrate the superior performance of the proposed method.

Index Terms—Humanoid robot, interval type-2 fuzzy C-means, multiple kernel, support vector machine.

Manuscript received March 19, 2013; revised July 18, 2013; accepted October 26, 2013. Date of publication January 20, 2014; date of current version October 13, 2014. This work was supported in part by the National Natural Science Foundation of China under Projects 60974047 and U1134004; the Natural Science Foundation of Guangdong Province under Grant S2012010008967; the Science Fund for Distinguished Young Scholars under Grant S20120011437 and Grant 2011 Zhujiang New Star; the Ministry of Education New Century Excellent Talent Program under Grant NCET-12-0637; the 973 Program of China under Grant 2011CB013104; the Doctoral Fund of the Ministry of Education of China under Grant 20124420130001; and the University of Macau Multiyear Research Grants. This paper was recommended by Associate Editor L. Sheremetov.

Z. Liu and Y. Zhang are with the School of Automation, Guangdong University of Technology, Guangzhou, Guangdong 510006, China (e-mail: [email protected]).

S. Xu is with the School of Automation, Guangdong University of Technology, Guangzhou, Guangdong 510006, China, and also with the Department of Electronic Engineering, Dongguan Polytechnic, Guangdong 523808, China.

C. L. P. Chen is with the Faculty of Science and Technology, University of Macau, Macau 999078, China.

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TSMC.2013.2297398

I. Introduction

Scene segmentation is an important task in many research fields, including robot path planning [1]–[4], video understanding [5]–[7], and video surveillance [8]–[10]. Due to the complexity of scene images and the uncertainties of shapes in the scene, efficient and accurate scene segmentation remains a challenging problem for humanoid robots. Recently, a large number of robust and interesting scene image segmentation algorithms have been developed, which can be roughly classified into three categories: edge-based schemes, region-based schemes, and feature-based schemes.

Edge-based schemes are popular image segmentation approaches that detect the edge information of images for scene segmentation, such as edge detectors [11]–[13], border tracing [14], dynamic programming [15], [16], and active contours [17], [18]. Region-based schemes, such as region growing [19], [20], region merging [21], [22], region splitting [23], and intelligent paint [24], are based on the similarities among the pixels within a region, and region homogeneity is used as an important segmentation criterion. Feature-based schemes classify the different features of scene images through nonlinear mappings of the feature space. Common features include color intensity, gradient magnitude, and texture; grayscale thresholding and clustering methods [25], [26] can be employed as criteria for feature classification.

However, few of the methods mentioned above can extract and classify scene images via multiple features in multiple feature spaces. Due to the environmental complexity and external disturbances faced by a humanoid robot, the scene pattern can be represented by different features via different feature mappings, and it is extremely difficult to understand and classify the scene image accurately using only a single feature. Most existing works focus on single-feature extraction and classification, which may degrade the segmentation performance in real humanoid applications. There is still much work to be done to improve the accuracy and speed of image segmentation by exploiting multiple-feature knowledge; in particular, a kernel-based strategy is a good choice for feature-based learning.

Recently, intelligent systems such as support vector machines [27] and fuzzy systems [38] have been successfully used for pattern classification [28], controller design [38]–[41], and estimation problems [29], [30]. Several studies have taken advantage of the efficient learning ability of the SVM for image segmentation [31]–[33]. Flamary et al. [32] proposed an efficient methodology for multispectral image segmentation to improve robustness. Saha et al. [33] presented an approach that integrates the capabilities of fuzzy clustering techniques and SVM classifiers. However, the potential of the SVM for image segmentation is limited when only a single-kernel framework is applied. Due to the multiple-feature complexity and disturbances of scene images for humanoid robots, it is of interest to develop a multiple-feature, multiple-kernel-based segmentation method for humanoid robots, which has not been extensively addressed before.

In this paper, a framework of multiple-feature and multiple-kernel support vector machine (MFMK-SVM) methodology is presented to achieve more reliable and robust segmentation performance for humanoid robots. To realize easier implementation and efficient computation of the MFMK-SVM model, one of the challenges is to develop a feature-based clustering method that classifies multiple features into different cluster centers. A feature-based clustering method, which we call the feature validity-interval type-2 fuzzy C-means (FV-IT2FCM) clustering algorithm, is developed to classify multiple features for the kernel-based learning of the MFMK-SVM model. The MFMK-SVM scene segmentation method can fully take advantage of the multiple features of the image and the ability of multiple kernels. Experimental studies on synthetic images and real natural scene images show that the proposed method achieves competitive segmentation results.



The remainder of this paper is organized as follows. Section II reviews related works and summarizes the contributions of the proposed method. Section III presents the vision-based framework of the humanoid robot. Section IV describes the MFMK-SVM segmentation algorithm in detail. Section V provides experimental results for image segmentation on benchmark images and for scene segmentation of the humanoid robot, followed by the conclusion in Section VI.

II. Related Works and Contributions of the Proposed Method

A. Related Works

Image segmentation attracts a great deal of attention in computer vision and pattern recognition. A large number of interesting and efficient scene image segmentation approaches have been developed, which can be roughly classified into three categories: edge-based schemes, region-based schemes, and feature-based schemes.

Edge-based schemes are widely used image segmentation approaches that rely on abrupt changes in image intensity or other features. A contour tracing methodology was proposed to segment images with edge linking approaches [11]. Young et al. [13] proposed a new edge-based gradient approach that handles impulsive noise and provides good gradient estimation. Zhu et al. [16] proposed a hierarchical image model (HIM) that represents the image recursively using multiple levels of a hierarchy. Lee et al. [17] presented a segmentation algorithm for quantification of cardiac output and myocardial mass in clinical practice. Gao et al. [18] proposed an edge-based image segmentation method that segments an image into a series of nested regions. Yang et al. [15] proposed a segmentation method based on an enhanced dynamic programming approach.

Region-based schemes are based on the similarities among the pixels within a region, and region homogeneity is used as an important segmentation criterion. Yu et al. [19] presented a region-based segmentation algorithm for synthetic aperture radar (SAR) images. Qin and Clausi [20] proposed a Markov random field (MRF) based segmentation algorithm to address the problems of feature space sparseness and solution complexity. Peng et al. [21] proposed an image segmentation algorithm in a region merging style based on statistical testing. Panagiotakis et al. [22] proposed a region growing-based image segmentation framework. Kim and Kang [23] proposed a new segmentation algorithm that uses similar intensity values of regions. However, when images have too many edges or are difficult to interpret, edge- and region-based approaches do not work well. In addition, the resulting edges and regions are often over-detected or discontinuous, which greatly influences the performance of scene segmentation.

Feature-based schemes classify pixels based on their positions in feature space without explicitly considering their connectivity to similarly classified pixels. Alpert et al. [34], [35] presented a novel approach that utilizes multiple cues for image segmentation. A major advantage of this approach is that the algorithm complexity is linear in the number of image pixels, which greatly reduces the computational complexity, and it requires almost no user-tuned parameters, which makes it more practical in real applications. Chen et al. [25] proposed a multiple-kernel fuzzy C-means (MKFCM) methodology for image segmentation problems. Hasanzadeh and Kasaei [26] proposed a segmentation method based on the MC method, the watershed transform, and a fuzzy clustering method. Saha et al. [33] integrated support vector machine (SVM) classifiers with a fuzzy clustering technique to improve its performance in uncertain situations.

B. Contributions of the Proposed Approach

In view of all the methods mentioned above, few works have fully considered multiple features in a unified framework, and the existing single-kernel approaches may not exploit the multiple features of pixels. It is therefore of interest to construct a unified framework that describes multiple features in multiple kernel spaces, which improves the accuracy and speed of image segmentation. Compared with previous works, the contributions and novelty of the proposed approach are summarized as follows.

1) Multiple features of pixels: This is the first work that integrates the pixel-wise intensity feature, gradient feature, and C1 SMF feature as pixel-wise feature representations of scene images in a unified segmentation algorithm. Multiple feature mappings make it more suitable to extract complex features.

2) Multiple kernels: A novel multiple kernel is employed over the multiple feature spaces in the MFMK-SVM framework. Different feature mappings are employed to capture the specific statistical features of scene images, and the multiple feature mappings make the problem separable in the feature spaces.

3) FV-IT2FCM sample selection: A new clustering method, called the FV-IT2FCM clustering algorithm, is proposed by integrating a type-2 fuzzy criterion into the clustering optimization process to improve the robustness and reliability of the clustering results through iterative optimization. The clustering validity is employed to select the training samples for the learning of the MFMK-SVM model.

Fig. 1. (a) Humanoid robot Alice. (b) Alice kinematic configuration.

III. Vision-Based Control System of the Humanoid Robot

The Alice robot was constructed at the Guangdong University of Technology in China, as shown in Fig. 1, where (a) shows the Alice robot and (b) its kinematic configuration. It weighs 35 kg, has a height of 170 mm, and uses a three-wheel mechanism for autonomous movement. The overview and control system of the Alice robot are presented in the following sections.

The structure of Alice is a closed-loop control system, as shown in Fig. 2. It is composed of a visual-driven module, a motion control system, an industrial computer system, a sensing module, a PWM module, a motor driving module, and actuators. The visual sensor serves as Alice's eye to observe scene images and to measure the visual information of the robot. To reduce power consumption and improve the efficiency of the Alice robot, an audio controller, a joint angle controller, a position controller, a velocity controller, a current controller, and a pulse width modulation (PWM) module are designed together with the other modules (the motor driving module, the actuators, and the sensing module) to establish a closed-loop control strategy for the movement control of Alice. The motor driving module is used to drive the motors and gearing to achieve a desired position and orientation.

Fig. 2. Overview of the visual-driven system of Alice.

The sensing module is mainly responsible for measuring position, speed, current, and joint angle feedback signals for the motion control system. The decisions of Alice are generated by the industrial computer system, which includes a data processing module and the visual-driven module to realize communication and control among the motion control system, the sensing module, and the objectives. The data processing module is responsible for interacting with the motion control system, sending commands, saving history records, and calculating inverse kinematics and joint coordinates. The visual-driven module is composed of an image capture circuit, image processing, object detection, and object pose estimation. It is also driven by events corresponding to changes of the scene, and these changes are detected by the visual sensing module of Alice.

IV. MFMK-SVM Segmentation Algorithm for the Humanoid Robot

The proposed MFMK-SVM segmentation algorithm for the humanoid robot (shown in Fig. 3) can be divided into three main components: multiple-feature representation, the FV-IT2FCM algorithm, and a multiple-kernel-based classification algorithm.

A. Multiple-Feature Representations of Scene Images

The extraction of multiple features from the scene image is a very important procedure in image segmentation. We unify the pixel-wise intensity feature, gradient feature, and C1 SMF feature as the pixel-wise feature representation of scene images.

1) Intensity Feature: Let $I(w, z)$ denote the intensity component of an image of size $W \times Z$ at location $(w, z)$. We define the local homogeneity as the pixel-wise intensity feature $\upsilon(w, z)$, which is calculated from the standard deviation of the intensity component in a neighborhood window:

\[
\upsilon(w, z) = \sqrt{\frac{1}{(2\delta + 1)^2} \sum_{c=-\delta}^{\delta} \sum_{l=-\delta}^{\delta} \bigl(I(w + c, z + l) - \mu(w, z)\bigr)^2} \tag{1}
\]

\[
\text{s.t. } \delta \le w \le W - \delta \ \text{ and } \ \delta \le z \le Z - \delta \tag{2}
\]

where $(2\delta + 1) \times (2\delta + 1)$ is the size of the neighborhood window and $\mu(w, z)$ is the mean of the intensity component in the neighborhood window.

2) Gradient Feature: The gradient feature $g(w, z)$ is employed for the feature extraction of scene images and denotes the magnitude of the gradient at location $(w, z)$:

\[
g(w, z) = \sqrt{G_w^2 + G_z^2} \tag{3}
\]

where $G_w$ and $G_z$ are the components of the gradient of the intensity component in the $w$ and $z$ directions, respectively.

3) C1 SMF Feature: Different from the traditional SMF, the pixel-wise C1 SMF is applied to each pixel, and the C1 SMF feature is obtained from the Gabor filter coefficients. The SMF Gabor filters and their parameters can be designed following a procedure similar to that in [2]. The Gabor filter output of the scene image is computed as its convolution with the Gabor filter. Let $\Gamma_{\rho\theta}(w, z)$ denote the Gabor filtered coefficient at location $(w, z)$ corresponding to scale $\rho$ ($\rho \in \{7, 9, 11, 13\}$) and orientation $\theta$ ($\theta \in \{0^\circ, 45^\circ, 90^\circ, 135^\circ\}$). The local energy can be expressed as

\[
q_{\rho\theta}(w, z) = \max_{w, z}\left\{ \sum_{c=-\delta}^{\delta} \sum_{l=-\delta}^{\delta} \bigl(\Gamma_{\rho\theta}(w + c, z + l)\bigr)^2 \right\}, \quad \text{s.t. } \delta \le w \le W - \delta,\; \delta \le z \le Z - \delta \tag{4}
\]

where $2\delta + 1$ is the length of the patches. The pixel-wise C1 SMF feature $q(w, z)$ at location $(w, z)$ is then obtained by

\[
q(w, z) = \max_{\rho, \theta}\{ q_{\rho\theta}(w, z) \}, \quad \rho \in \{7, 9, 11, 13\},\; \theta \in \{0^\circ, 45^\circ, 90^\circ, 135^\circ\}. \tag{5}
\]
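As an illustration of how these pixel-wise features might be computed, the following Python sketch uses NumPy and SciPy; the window half-width delta and the Gabor kernel parameterization are assumptions for demonstration, not the paper's exact settings.

    # A minimal sketch (not the authors' code) of the three pixel-wise features in (1)-(5),
    # assuming a grayscale float image `img`; the Gabor design below is illustrative only.
    import numpy as np
    from scipy import ndimage

    def intensity_feature(img, delta=2):
        """Local homogeneity (1): per-pixel standard deviation in a (2*delta+1)^2 window."""
        size = 2 * delta + 1
        mean = ndimage.uniform_filter(img, size=size)            # mu(w, z)
        mean_sq = ndimage.uniform_filter(img ** 2, size=size)    # local mean of I^2
        var = np.clip(mean_sq - mean ** 2, 0.0, None)            # guard against tiny negatives
        return np.sqrt(var)

    def gradient_feature(img):
        """Gradient magnitude (3) from the w- and z-direction derivatives."""
        gw = ndimage.sobel(img, axis=0)
        gz = ndimage.sobel(img, axis=1)
        return np.sqrt(gw ** 2 + gz ** 2)

    def c1_feature(img, scales=(7, 9, 11, 13), thetas_deg=(0, 45, 90, 135), delta=3):
        """Max-pooled local energy of Gabor responses, in the spirit of (4) and (5)."""
        size = 2 * delta + 1
        responses = []
        for ksize in scales:
            half = ksize // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            for theta in np.deg2rad(thetas_deg):
                xr = x * np.cos(theta) + y * np.sin(theta)
                yr = -x * np.sin(theta) + y * np.cos(theta)
                # Simple even Gabor kernel; sigma and wavelength tied to the scale.
                kernel = (np.exp(-(xr ** 2 + 0.25 * yr ** 2) / (2 * (ksize / 4.0) ** 2))
                          * np.cos(2 * np.pi * xr / (ksize / 2.0)))
                rho = ndimage.convolve(img, kernel, mode='reflect')
                energy = ndimage.uniform_filter(rho ** 2, size=size) * size ** 2  # local energy sum
                responses.append(ndimage.maximum_filter(energy, size=size))       # local max pooling
        return np.max(np.stack(responses), axis=0)                                # max over scale/orientation

Stacking the three outputs per pixel yields the three-dimensional feature vector used by the clustering and classification stages described next.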


B. FV-IT2FCM Algorithm

Training sample selection is one of the major factors determining to what degree the MFMK-SVM classification rules can be generalized from the samples. Considering the undesirable uncertainties in the input and output image data, an interval clustering method is developed that employs an interval type-2 fuzzy membership function to handle the uncertainties of the data.

Suppose we are given a pixel set $\Omega = \{X_i\}_{i=1}^{N_t}$ obtained as in the previous section, where $X_i$ is the data for clustering optimization and $N_t$ is the size of the data set. The objective function of the proposed FV-IT2FCM algorithm is

\[
\min\; F(U, V) = \sum_{i=1}^{N_t} \sum_{j=1}^{C} u_{ij}\, \lVert X_i - v_j \rVert^2 \tag{6}
\]

where $U = [u_{ij}]$ and $V = [\underline{v}_j, \bar{v}_j]$, and $u_{ij} \in [\underline{u}_{ij}, \bar{u}_{ij}]$ is the membership degree of data point $X_i$ in the $j$th cluster, $i = 1, \ldots, N_t$, $j = 1, \ldots, C$. Each cluster center $v_j = (v_{j,1}, v_{j,2}, v_{j,3})$ lies in the interval $[\underline{v}_j, \bar{v}_j]$, with $\underline{v}_j = (\underline{v}_{j,1}, \underline{v}_{j,2}, \underline{v}_{j,3})$, $\bar{v}_j = (\bar{v}_{j,1}, \bar{v}_{j,2}, \bar{v}_{j,3})$, and $\underline{v}_{j,r} \le v_{j,r} \le \bar{v}_{j,r}$ for $v_{j,r} \in \mathbb{R}$, $r = 1, 2, 3$. $X_i = [x_{i,1}, x_{i,2}, x_{i,3}]$ denotes the pixel feature at the $i$th position, and $N_t$ is the total number of pixels in the image. The dimension of $X_i$ and $v_j$ is consistently set to three, which equals the number of feature descriptors for each pixel. $C$ is the number of clusters, and the memberships satisfy $0 \le u_{ij} \le 1$.
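The concrete FV-IT2FCM update rules and the feature-validity criterion are not reproduced in this excerpt. As a rough illustration only, the sketch below implements a generic interval type-2 FCM in Python, assuming the common construction in which the membership interval is formed from two fuzzifiers m1 < m2 and the centers are updated from the midpoints of the intervals; it is not the paper's exact algorithm.

    # A simplified interval type-2 FCM sketch, assuming interval memberships built from two
    # fuzzifiers m1 < m2; this is NOT the paper's exact FV-IT2FCM update or validity criterion.
    import numpy as np

    def it2fcm(X, C=2, m1=1.5, m2=2.5, n_iter=50, seed=0):
        """X: (N, 3) pixel features; returns cluster centers and interval memberships."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), C, replace=False)].astype(float)
        for _ in range(n_iter):
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12  # (N, C)
            ratio = d[:, :, None] / d[:, None, :]                                    # (N, C, C)
            u1 = 1.0 / np.sum(ratio ** (2.0 / (m1 - 1.0)), axis=2)                   # FCM memberships, m1
            u2 = 1.0 / np.sum(ratio ** (2.0 / (m2 - 1.0)), axis=2)                   # FCM memberships, m2
            u_low, u_up = np.minimum(u1, u2), np.maximum(u1, u2)                     # interval [u_low, u_up]
            w = (0.5 * (u_low + u_up)) ** ((m1 + m2) / 2.0)                          # crisp surrogate weight
            centers = (w.T @ X) / w.sum(axis=0)[:, None]                             # weighted center update
        return centers, u_low, u_up

    # Usage sketch: stack the three pixel-wise features into X of shape (num_pixels, 3),
    # cluster them, and use the resulting memberships to pick candidate training samples
    # for the MFMK-SVM, as the FV-IT2FCM stage does with its validity measure.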

The hyper-parameters of the MFMK-SVM are selected by a grid search with leave-one-out cross-validation (LOO-CV):

    for gamma = ...
        for delta = ...
            cv = LOO-CV accuracy of the MFMK-SVM trained with (gamma, delta);
            if (cv > bestAccuracy)
                bestAccuracy = cv; bestgamma = gamma; bestdelta = delta;
            end
        end
    end

Admittedly, the LOO-CV increases the computational complexity of the training process. However, the training is performed offline, so it does not increase the computation time in real-time applications. The components of the sampling data are randomly chosen from the FV-IT2FCM output to train the MFMK-SVM classification model. The process is repeated ten times, with different components being used for training in each iteration. The average results are reported, and the parameters with the minimum cross-validation error are chosen for the MFMK-SVM model. The optimal hyper-parameters of the MFMK-SVM include a regularization factor γ = 200 and the multiple-kernel parameters described in (22): d = 1, η = 2, σ1 = 150, and σ2 = 100. The number of training pixels is set to N ≤ 4000.
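A hedged Python analogue of this grid search is sketched below using scikit-learn; the RBF kernel and the candidate parameter values stand in for the paper's multiple kernel and its actual search ranges.

    # Grid search with LOO-CV, mirroring the pseudocode above; the kernel and grids are
    # placeholders, not the paper's exact MFMK-SVM settings.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    def tune_svm(X, y, Cs=(1.0, 10.0, 100.0, 200.0), gammas=(0.01, 0.1, 1.0)):
        """Return the (C, gamma) pair with the highest LOO-CV accuracy."""
        best_acc, best_params = -np.inf, None
        loo = LeaveOneOut()
        for C in Cs:                      # plays the role of the regularization factor
            for gamma in gammas:          # plays the role of the kernel width parameter
                clf = SVC(C=C, kernel='rbf', gamma=gamma)
                acc = cross_val_score(clf, X, y, cv=loo).mean()
                if acc > best_acc:        # mirrors "if (cv > bestAccuracy) ... end"
                    best_acc, best_params = acc, (C, gamma)
        return best_params, best_acc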

B. Segmenting Scenes for the Humanoid Robot

1) Data Setup: To demonstrate the flexibility and the advantages of the proposed MFMK-SVM, a database is created from the data recorded by Alice, the robot described in Section III.


Fig. 6. Flower image segmented by different methods. (a) Segmentation result of the SVM. (b) Segmentation result of the WK-SVM. (c) Segmentation result of the MK-SVM. (d) Segmentation result of the MFMK-SVM.

Fig. 7. Land image segmented by different methods. (a) Segmentation result of the SVM. (b) Segmentation result of the WK-SVM. (c) Segmentation result of the MK-SVM. (d) Segmentation result of the MFMK-SVM.

Fig. 8. Car image segmented by different methods. (a) Segmentation result of the SVM. (b) Segmentation result of the WK-SVM. (c) Segmentation result of the MK-SVM. (d) Segmentation result of the MFMK-SVM.

One of the major advantages of the proposed method is its effectiveness in different environments, including both indoor and outdoor environments.

2) Performance Evaluation of the MFMK-SVM: Here, we evaluate the effectiveness of the proposed MFMK-SVM classifier for scene segmentation of the humanoid robot. To evaluate the MFMK-SVM classifier, we replace the MFMK in the MFMK-SVM classifier with the RBF kernel, the WK kernel, and the MF, respectively, which are widely used in the traditional SVM framework for image processing. In the following experiments, the FV-IT2FCM sample selection method is used for clustering optimization. The Flower, Land, and Car images are selected to evaluate the segmentation performance of the proposed method. The MFMK-SVM method is compared with the segmentation algorithms described in Section V-A. As observed in Figs. 6–8, the MFMK-SVM clearly outperforms the traditional SVM method and achieves results comparable to those of the WK-SVM and MK-SVM methods on some natural scene images. The reason is that the MFMK-SVM integrates multiple features in multiple feature spaces, which helps it deal with complicated situations.

3) Parameter Choices of the MFMK-SVM: To find the optimal hyper-parameters for the proposed MFMK-SVM, the tuning parameters are also fixed by a 10-fold cross-validation strategy. The components of the sampling data are randomly chosen from the FV-IT2FCM output to train the MFMK-SVM classification model. The average results are reported, and the parameters with the minimum cross-validation error are chosen for the MFMK-SVM model.

The optimal hyper-parameters of the MFMK-SVM include a regularization factor γ = 100 and the multiple-kernel parameters described in (22): d = 1, η = 2, σ1 = 120, and σ2 = 120. The number of training pixels is set to N ≤ 4000.
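Since the multiple-kernel expression (22) is not reproduced in this excerpt, the following sketch only illustrates the general mechanism of a combined kernel fed to an SVM through a precomputed Gram matrix; the particular combination of a degree-d polynomial kernel and two Gaussian kernels weighted by η is an assumption for illustration, not the paper's actual kernel.

    # Illustrative combined kernel; the weighting and the exact form of (22) are assumptions.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel

    def combined_kernel(A, B, d=1, eta=2.0, sigma1=120.0, sigma2=120.0):
        """Hypothetical multiple kernel: a polynomial part plus two Gaussian parts."""
        K_poly = polynomial_kernel(A, B, degree=d)
        K_rbf1 = rbf_kernel(A, B, gamma=1.0 / (2.0 * sigma1 ** 2))
        K_rbf2 = rbf_kernel(A, B, gamma=1.0 / (2.0 * sigma2 ** 2))
        return K_poly + eta * (K_rbf1 + K_rbf2)

    # Usage sketch: X_train holds the pixel feature vectors selected by FV-IT2FCM and
    # y_train their labels; new pixels are classified with the same combined kernel.
    # clf = SVC(C=100.0, kernel='precomputed')
    # clf.fit(combined_kernel(X_train, X_train), y_train)
    # labels = clf.predict(combined_kernel(X_test, X_train))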

VI. Conclusion

In this paper, an MFMK-SVM methodology has been proposed and applied as a general framework for scene image segmentation to achieve more reliable and robust segmentation performance for humanoid robots. A systematic MFMK-SVM framework is presented, and its model is trained as a classifier for scene image segmentation. The kernels are selected for different pieces of information or properties of the image pixels. The MFMK-SVM provides significant flexibility in selecting and combining different kernel functions, and the information of the image from multiple heterogeneous or homogeneous data sources is combined in the kernel space. Experiments on the BSDS data set and real natural scene images show the flexibility and advantages of the MFMK-SVM in image segmentation problems.

Acknowledgment The authors would like to thank the associate editors and the reviewers for the constructive comments and suggestions.

References

[1] M. Cakir and L. Sevgi, “Path planning and image segmentation using the FDTD method,” IEEE Antennas Propag. Mag., vol. 53, no. 2, pp. 230–245, Apr. 2011.
[2] T. H. Le, S. W. Jung, K. S. Choi, and S. J. Ko, “Image segmentation based on modified graph-cut algorithm,” Electron. Lett., vol. 46, no. 16, pp. 1121–1123, Aug. 2010.


[3] C. W. Bong and M. Rajeswari, “Multiobjective clustering with metaheuristic: Current trends and methods in image segmentation,” IET Image Process., vol. 6, no. 1, pp. 1–10, Feb. 2012.
[4] M. Begum and F. Karray, “Visual attention for robotic cognition: A survey,” IEEE Trans. Auton. Mental Develop., vol. 3, no. 1, pp. 92–105, Mar. 2011.
[5] O. Barnich and M. Van Droogenbroeck, “ViBe: A universal background subtraction algorithm for video sequences,” IEEE Trans. Image Process., vol. 20, no. 6, pp. 1709–1724, Jun. 2011.
[6] J. L. A. Samatelo and E. O. T. Salles, “A new change detection algorithm for visual surveillance system,” IEEE Latin Amer. Trans., vol. 10, no. 1, pp. 1221–1226, Jan. 2012.
[7] M. Albanese, R. Chellappa, N. Cuntoor, V. Moscato, A. Picariello, V. S. Subrahmanian, et al., “PADS: A probabilistic activity detection framework for video data,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 12, pp. 2246–2261, Dec. 2010.
[8] Y. Du, E. Arslanturk, Z. Zhi, and C. Belcher, “Video-based noncooperative iris image segmentation,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 41, no. 1, pp. 64–74, Feb. 2011.
[9] Z. Li, F. Zhou, X. Tian, and Y. Chen, “High efficient moving object extraction and classification in traffic video surveillance,” J. Syst. Eng. Electron., vol. 20, no. 4, pp. 858–868, Aug. 2009.
[10] H. Woo, Y. M. Jung, J. G. Kim, and J. K. Seo, “Environmentally robust motion detection for video surveillance,” IEEE Trans. Image Process., vol. 19, no. 11, pp. 2838–2848, Nov. 2010.
[11] T. Mondal, A. Jain, and H. K. Sardana, “Automatic craniofacial structure detection on cephalometric images,” IEEE Trans. Image Process., vol. 20, no. 9, pp. 2606–2614, Sep. 2011.
[12] Y. Zhang, X. Wang, and C. Wang, “Restoration of space-variant blurred image based on motion blurred target segmentation,” J. Syst. Eng. Electron., vol. 21, no. 2, pp. 191–196, Apr. 2010.
[13] N. Young and A. N. Evans, “Median centred difference gradient operator and its application in watershed segmentation,” Electron. Lett., vol. 47, no. 3, pp. 178–180, Feb. 2011.
[14] C. Duan, K. Yuan, F. Liu, P. Xiao, G. Lv, and Z. Liang, “Volume-based features for detection of bladder wall abnormal regions via MR cystography,” IEEE Trans. Biomed. Eng., vol. 58, no. 9, pp. 2506–2512, Sep. 2011.
[15] R. Yang, S. Sarkar, and B. Loeding, “Handling movement epenthesis and hand segmentation ambiguities in continuous sign language recognition using nested dynamic programming,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 3, pp. 462–477, Mar. 2010.
[16] L. Zhu, Y. Chen, Y. Lin, C. Lin, and A. Yuille, “Recursive segmentation and recognition templates for image parsing,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 2, pp. 359–371, Feb. 2012.
[17] H. Y. Lee, N. C. F. Codella, M. D. Cham, J. W. Weinsaft, and Y. Wang, “Automatic left ventricle segmentation using iterative thresholding and an active contour model with adaptation on short-axis cardiac MRI,” IEEE Trans. Biomed. Eng., vol. 57, no. 4, pp. 905–913, Apr. 2010.
[18] X. Gao, B. Wang, D. Tao, and X. Li, “A relay level set method for automatic image segmentation,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 41, no. 2, pp. 518–525, Apr. 2011.
[19] P. Yu, A. K. Qin, and D. A. Clausi, “Unsupervised polarimetric SAR image segmentation and classification using region growing with edge penalty,” IEEE Trans. Geosci. Remote Sens., vol. 50, no. 4, pp. 1302–1317, Apr. 2012.
[20] A. K. Qin and D. A. Clausi, “Multivariate image segmentation using semantic region growing with adaptive edge penalty,” IEEE Trans. Image Process., vol. 19, no. 8, pp. 2157–2170, Aug. 2010.
[21] B. Peng, L. Zhang, and D. Zhang, “Automatic image segmentation by dynamic region merging,” IEEE Trans. Image Process., vol. 20, no. 12, pp. 3592–3602, Dec. 2011.


[22] C. Panagiotakis, I. Grinias, and G. Tziritas, “Natural image segmentation based on tree equipartition, Bayesian flooding and region merging,” IEEE Trans. Image Process., vol. 20, no. 8, pp. 2276–2287, Aug. 2011.
[23] S. Kim and M. Kang, “Multiple-region segmentation without supervision by adaptive global maximum clustering,” IEEE Trans. Image Process., vol. 21, no. 4, pp. 1600–1612, Apr. 2012.
[24] L. W. Tsai, J. W. Hsieh, C. H. Chuang, Y. J. Tseng, K. C. Fan, and C. C. Lee, “Road sign detection using eigen colour,” IET Comput. Vision, vol. 2, no. 3, pp. 162–177, Sep. 2008.
[25] L. Chen, C. L. P. Chen, and M. Lu, “A multiple-kernel fuzzy C-means algorithm for image segmentation,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 41, no. 5, pp. 1263–1274, Oct. 2011.
[26] M. Hasanzadeh and S. Kasaei, “A multispectral image segmentation method using size-weighted fuzzy clustering and membership connectedness,” IEEE Geosci. Remote Sens. Lett., vol. 7, no. 3, pp. 520–524, Jul. 2010.
[27] V. N. Vapnik, Statistical Learning Theory. New York, NY, USA: Wiley, 1998.
[28] K. P. Lin and M. S. Chen, “On the design and analysis of the privacy preserving SVM classifier,” IEEE Trans. Knowl. Data Eng., vol. 23, no. 11, pp. 1704–1717, Nov. 2011.
[29] A. Bulling, J. A. Ward, H. Gellersen, and G. Tröster, “Eye movement analysis for activity recognition using electrooculography,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 4, pp. 741–753, Apr. 2011.
[30] D. S. Kim and S. W. Lee, “Prediction of axial DNBR distribution in a hot fuel rod using support vector regression models,” IEEE Trans. Nucl. Sci., vol. 58, no. 4, pp. 2084–2090, Aug. 2011.
[31] Z. Wang, H. Xi, W. Guo, and Q. Chen, “Generalized PCRTT offline bandwidth smoothing based on SVM and systematic video segmentation,” IEEE Trans. Multimedia, vol. 11, no. 5, pp. 998–1009, Aug. 2009.
[32] R. Flamary, D. Tuia, G. Camps-Valls, and A. Rakotomamonjy, “Large margin filtering,” IEEE Trans. Signal Process., vol. 60, no. 2, pp. 648–659, Feb. 2012.
[33] I. Saha, U. Maulik, S. Bandyopadhyay, and D. Plewczynski, “SVMeFC: SVM ensemble fuzzy clustering for satellite image segmentation,” IEEE Geosci. Remote Sens. Lett., vol. 9, no. 1, pp. 52–55, Jan. 2012.
[34] S. Alpert, M. Galun, R. Basri, and A. Brandt, “Image segmentation by probabilistic bottom-up aggregation and cue integration,” in Proc. IEEE Conf. Comput. Vision Pattern Recognit. (CVPR), Jun. 2007, pp. 1–8.
[35] S. Alpert, M. Galun, A. Brandt, and R. Basri, “Image segmentation by probabilistic bottom-up aggregation and cue integration,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 2, pp. 315–327, Feb. 2012.
[36] A. J. Smola, Z. L. Ovari, and R. C. Williamson, “Regularization with dot-product kernels,” in Proc. NIPS, vol. 32, 2001, pp. 308–314.
[37] A. J. Smola and B. Scholkopf, “A tutorial on support vector regression,” Stat. Comput., vol. 14, pp. 199–222, Mar. 2004.
[38] S. Tong and H. Li, “Fuzzy adaptive sliding-mode control for MIMO nonlinear systems,” IEEE Trans. Fuzzy Syst., vol. 11, no. 3, pp. 354–360, Jun. 2003.
[39] S. Tong, B. Chen, and Y. Wang, “Fuzzy adaptive output feedback control for MIMO nonlinear systems,” Fuzzy Sets Syst., vol. 156, no. 2, pp. 285–299, 2005.
[40] S. Tong, C. Liu, and Y. Li, “Fuzzy-adaptive decentralized output-feedback control for large-scale nonlinear systems with dynamical uncertainties,” IEEE Trans. Fuzzy Syst., vol. 18, no. 5, pp. 845–861, Oct. 2010.
[41] S. Tong, Y. Li, Y. Li, and Y. Liu, “Observer-based adaptive fuzzy backstepping control for a class of stochastic nonlinear strict-feedback systems,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 41, no. 6, pp. 1693–1704, Dec. 2011.
[42] R. Unnikrishnan, C. Pantofaru, and M. Hebert, “Toward objective evaluation of image segmentation algorithms,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 6, pp. 929–943, Jun. 2007.
