
Control of a Wheelchair in an Indoor Environment Based on a Brain-Computer Interface and Automated Navigation

Rui Zhang, Yuanqing Li*, Yongyong Yan, Hao Zhang, Shaoyu Wu, Tianyou Yu, and Zhenghui Gu

Abstract—The concept of controlling a wheelchair using brain signals is promising. However, the continuous control of a wheelchair based on unstable and noisy electroencephalogram (EEG) signals is unreliable and imposes a significant mental burden on the user. A feasible solution is to integrate a brain-computer interface (BCI) with automated navigation techniques. This paper presents a brain-controlled intelligent wheelchair with the capability of automatic navigation. Using an autonomous navigation system, candidate destinations and waypoints are automatically generated based on the existing environment. The user selects a destination using a motor imagery (MI)-based or P300-based BCI. According to the determined destination, the navigation system plans a short and safe path and navigates the wheelchair to the destination. During the movement of the wheelchair, the user can issue a stop command with the BCI. Using our system, the mental burden of the user can be substantially alleviated. Furthermore, our system can adapt to changes in the environment. Two experiments based on MI and P300 were conducted to demonstrate the effectiveness of our system.

Index Terms—Brain-computer interface, wheelchair, automated navigation, motor imagery, P300.

This work was supported by the National Key Basic Research Program of China (973 Program) under grant 2015CB351703, the National High-Tech R&D Program of China (863 Program) under grant 2012AA011601, the National Natural Science Foundation of China under grant 91120305, and the Guangdong Natural Science Foundation under grant 2014A030312005. R. Zhang, *Y. Li, Y. Yan, H. Zhang, S. Wu, T. Yu, and Z. Gu are with the School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, China, and also with the Guangzhou Key Laboratory of Brain Computer Interaction and Applications (correspondence e-mail: [email protected]). Copyright (c) 2015 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending an email to [email protected].

I. INTRODUCTION

Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have recently promoted the development and application of BCI technology because they are portable, convenient, safe, and inexpensive. An important application of EEG-based BCIs is wheelchair control, which has received a substantial amount of attention [1]–[4]. Tanaka et al. developed a brain-controlled wheelchair in which the user can control the direction of the wheelchair by performing motor imagery [1]. In another study [2], a BCI wheelchair was proposed in which the user controlled the left-turning, right-turning, and forward movements of the wheelchair by imagining clenching the left hand, squeezing the right hand, and walking with both feet, respectively. In our previous study [3], a hybrid BCI that combines motor imagery (MI) and P300 was employed

to control the speed and direction of a wheelchair. Specifically, the user performed left- or right-hand MI to produce the direction control signals and adjusted the speed of the wheelchair with foot imagery or by focusing on a flashing button. Diez et al. developed a BCI wheelchair based on steady-state visual evoked potentials (SSVEP), in which four control commands (go forward, turn left/right, and stop) were available [4].

Three challenges exist for a brain-controlled wheelchair: (i) Wheelchair control is multi-objective, including start and stop control, direction control, and speed control. The task of producing numerous control signals is challenging for an EEG-based BCI. Although we may obtain multiple control commands using P300 or SSVEP, producing an accurate control command is time-consuming. (ii) The performance of a BCI is dependent on the user. For instance, many users cannot perform the MI that is required to produce direction control signals. (iii) Controlling a wheelchair for an extensive period may impose a large mental workload on the user, especially for disabled people.

These challenges may be addressed using an automated navigation method [5]–[7]. During the period of automated navigation, the user does not need to send any control commands to the wheelchair. However, the autonomous navigation system cannot perform all control tasks; for instance, it cannot identify the intended destination of the user. Thus, a human-machine interface (HMI) is required to convey the user's intentions to the navigation system. For severely disabled people, e.g., patients with amyotrophic lateral sclerosis (ALS), obstacles exist with respect to the use of a common HMI, such as a keyboard; thus, BCI technology may be an alternative choice [8].

In recent years, shared control techniques for wheelchairs have been developed. These techniques incorporate the strengths of the human and the autonomous navigation system by allowing each to control different aspects of the system in situations that require teamwork [9]. Millán et al. presented a BCI wheelchair system based on shared control, in which the steering commands were acquired by continuously analyzing the user's brain data and the data from a laser range finder (LRF); a contextual filter based on environmental information was designed to filter incorrect BCI commands [10]. Millán et al. also proposed another shared control method for a wheelchair [11]. Specifically, a dynamical system that could generate naturally smooth trajectories by integrating the BCI commands from the user and obstacle information from the vision sensor was designed for navigation. For these two


shared control systems, the BCI commands were obtained from the user's motor imagery. The main advantages of these systems are as follows: (i) no a priori information about the environment is required; (ii) the trajectories are not predefined and are determined in real time; and (iii) the two systems can automatically avoid obstacles when necessary. However, a large number of BCI commands were required to reach a destination, which may exhaust the user. Moreover, the user was not able to stop the wheelchair in free space; instead, the wheelchair would only stop when it docked at a potential target.

Other types of wheelchair shared control systems, which were developed by Iturrate et al. [12] and Rebsamen et al. [13], combined an autonomous navigation system with a P300-based BCI. In the study by Iturrate et al. [12], multiple stages of shared control were performed. At each stage, a three-dimensional (3-D) environmental map was constructed using an LRF. A set of candidate destinations was distributed in the environmental map and presented to the user. The user selected a destination using the P300-based BCI, and the wheelchair autonomously moved to the selected destination. Via a series of destination selections and navigations, the final destination was reached. The advantage of this system is that the user could drive the wheelchair near an obstacle or in free space. However, when the user drove the wheelchair to the final destination, the multiple destination selections and validations were time-consuming. Lopes et al. [9] also proposed a shared control wheelchair, in which the waypoints and goals were established offline, whereas a P300-based BCI was employed to provide the following steering commands: go forward, turn right 45 degrees, turn right 90 degrees, turn left 45 degrees, and turn left 90 degrees. Assume that the wheelchair is at a bifurcation point and two possibilities exist: turn right or go forward. The user determines the next goal through the BCI. In a previous study [13], a semi-automatic wheelchair was proposed. Specifically, the user selected one of several predetermined destinations using a P300-based BCI. The autonomous system took control to navigate the wheelchair to the selected destination along a predefined path. This system significantly alleviated the workload of the user. Because these candidate destinations and their corresponding paths were predefined, a change in the environment could cause failure of the strategy, and redevelopment of the system might be required.

In this paper, we develop an intelligent wheelchair that combines an MI- or P300-based BCI and an autonomous navigation system, in which the user selects one of the candidate destinations with the MI- or P300-based BCI. According to the determined destination and the current location of the wheelchair, the autonomous navigation system plans a path and then drives the wheelchair to the destination. The user does not have to pay attention to the control during the navigation period; thus, the workload for the user is substantially alleviated. Because the candidate destinations and paths are automatically generated based on the existing environment detected by two webcams, our system is adaptive to changes in the environment (e.g., newly added furniture). In addition, the user can issue a stop command via the BCI

when necessary.

The remaining sections of this paper are organized as follows: the autonomous navigation system, including the system hardware, vector map, obstacle localization, self-localization of the wheelchair, path planning, path tracking, and motion control, is described in Section II. Two brain-computer interfaces, based on MI and P300 and employed for destination selection and emergency stopping, are described in Section III. The experimental results are outlined in Section IV. Additional discussion is presented in Section V.

II. AUTONOMOUS NAVIGATION SYSTEM

Our intelligent wheelchair includes a BCI and an autonomous navigation system; the architecture is illustrated in Fig. 1(a). In this section, we describe the autonomous navigation system, which contains the environment sensing, path planning, path tracking, and execution modules.

A. System Hardware

Our wheelchair is based on a commercially available mid-wheel drive model (888WNLL, Pihsiang Machinery MFG. Co. Ltd., Taiwan). After retrofitting, the wheelchair footprint is 1.27 m × 0.77 m. As shown in Fig. 1(b), the wheelchair is equipped with the following sensors: (1) an LRF (SICK LMS 111), which is mounted on a custom-designed steel prop at a height of 1.3 m above the ground; (2) two encoders, which are attached to the central driving wheels; and (3) an array of three ultrasonic sensors, which are fixed to the front of the wheelchair approximately 15 cm above the ground. The outline of the ultrasonic array approximately forms an equilateral triangle with a side length of 20 cm. Because the beam angle of a single ultrasonic sensor is 15 degrees, this combination of three ultrasonic sensors forms a larger detection range. In addition, two webcams are placed on two walls with opposite orientations near the ceiling.

B. Vector Map

The two-dimensional vector map, which is manually constructed, provides coordinate information for wheelchair and obstacle localization. This map consists of several directed line segments; each segment is represented by its start and end points. An immovable obstacle, e.g., a wall, can be described using one or several line segments. All doors and windows on the wall are assumed to be closed and are treated as ordinary walls. This convention implies that the wheelchair cannot travel through them. Note that the vector map does not change with respect to the movable obstacles.

C. Obstacle Localization and Obstacle Map

Localizing the obstacles is essential for navigating the wheelchair safely and efficiently to the destination. The majority of distance measurement devices, such as LRFs and sonar, are only capable of detecting limited distance information (for example, distance information about obstacles in one or several specific planes). Thus, the application of these sensors to locate all obstacles of different heights is


Fig. 1. (a) System architecture. (b) Wheelchair system.

challenging. Vision sensors, such as cameras, can instead provide abundant obstacle information. As detailed in the literature [11], [14], [15], a computer vision technique is employed to detect obstacles for our wheelchair system. Two webcams are fixed on two walls with opposite orientations near the ceiling, and a calibration is then performed. The two webcams are employed for obstacle localization in this study. Based on the obstacle information from the webcams and the vector map, an obstacle map, which is a two-dimensional geometrical representation of the environment, is automatically constructed. The obstacle map is updated once the system is activated.

1) Calibration: After the two webcams are fixed on the walls, we perform calibration based on a homography technique [16]. This technique requires four corresponding points on the two images from the two webcams and on the ground plane. We thus mark four fiducial points on the ground, for which the ground plane coordinates are known, and click on the points in the images from the two webcams to acquire the corresponding pixel coordinates. A homography matrix, which represents the correspondence between the pixel and the ground plane coordinates, is then calculated as in [16]. This homography matrix does not change unless the two webcams are adjusted.
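As a concrete illustration of this calibration step, the sketch below computes a homography from the four fiducial correspondences and then maps arbitrary ground-plane pixel points (for instance, convex-hull vertices of detected obstacles) to metric ground coordinates. It is a minimal example under stated assumptions: the OpenCV Python bindings are used, and the pixel and ground coordinates shown are hypothetical placeholder values rather than values from the paper.

```python
import numpy as np
import cv2

# Hypothetical pixel coordinates of the four fiducial points as clicked in one webcam image
pixel_pts = np.array([[102, 388], [517, 395], [486, 120], [143, 117]], dtype=np.float32)
# Known ground-plane coordinates of the same fiducials (metres); placeholder values
ground_pts = np.array([[0.0, 0.0], [3.0, 0.0], [3.0, 4.0], [0.0, 4.0]], dtype=np.float32)

# 3x3 homography relating pixel coordinates to ground-plane coordinates
H, _ = cv2.findHomography(pixel_pts, ground_pts)

def pixels_to_ground(points_px: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Map an Nx2 array of pixel points (e.g., convex-hull vertices) to ground coordinates."""
    pts = points_px.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# Example: ground-plane positions of two detected hull vertices
print(pixels_to_ground(np.array([[300, 250], [320, 260]]), H))
```

This mapping only holds for points lying on the ground plane, which is why the obstacle regions are first projected to the floor before the two camera views are intersected.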

2) Obstacle Localization: The process for localizing the obstacles is as follows. First, we obtain two pictures (Pic1 and Pic2) from the two calibrated webcams via the wireless network. Next, we perform the following processing steps for each picture: (i) obstacles are separated using a threshold segmentation method, in which the threshold value is calculated according to a gray-level histogram and the gray-scale value of the ground; (ii) the morphological opening operation is employed to shrink the areas of small noise to zero, and the morphological closing operation, which rebuilds the areas of the surviving components that were lost during opening, is subsequently employed; (iii) contours are retrieved from each segmented image; (iv) contours that are too small are discarded because they tend to be noise, and the remaining contours are approximated with convex hulls; and (v) the vertices of the convex hulls are mapped to the ground plane coordinates (namely, the global coordinates) according to the homography matrix obtained in the calibration procedure [16]. We then calculate the intersection of the regions that correspond to the convex hulls in Pic1 with the regions that correspond to the convex hulls in Pic2 in the ground plane coordinates. The regions of the obstacles on the ground plane can be approximated using these intersection regions. How well the obstacles are represented by the obtained intersection regions depends on the degree of color difference between the obstacles and the floor.

D. Generation of Candidate Destinations and Waypoints

Destinations tend to be locations to which the user usually goes or could go, e.g., one side of the table, the front of the cabinet, and the vicinity of the sofa. To offer the user additional candidate destinations, some predefined destinations are uniformly scattered in the obstacle map with a certain distance between them (in our setup, the actual distance between two adjacent destinations is 1 m). After obstacle localization is completed, additional destinations are distributed around the obstacles, and the predefined destinations that are located within the coverage of the obstacles are cleared (refer to Fig. 2(b)). A sequence of waypoints forms a complete path. Therefore, waypoint generation is critical to safely navigate the


Fig. 2. (a) The obstacle map, which is a two-dimensional geometrical representation of the environment. It contains the vector map (violet vectors) and obstacle information, in which the four obstacles are represented as four gray areas. (b) Graphical user interface (GUI), constructed based on the obstacle map. The candidate destinations (yellow circles with Arabic numbers) and the current obstacles (gray areas) are arranged at their corresponding positions in the GUI as certain ratios according to their actual locations and sizes.

wheelchair to the destination. In our system, we generate waypoints using the generalized Voronoi diagram [17] for a planar region with specified obstacles. The process is as follows. First, the distance between two obstacles is checked to determine whether it is larger than the safe distance through which the wheelchair can traverse; if not, the regions of these obstacles are merged. Second, the regions of the obstacles are expanded by a certain safe distance. Third, the boundaries of the polygonal obstacles are approximated with a large number of points obtained by subdividing each side of the original polygon into smaller segments. Next, a Voronoi diagram is constructed for this collection of approximating points, and the Voronoi edges are obtained. Last, a waypoint is placed every 0.2 m along each Voronoi edge. In this way, all waypoints are obtained, which will be used for later path planning. Additionally, it takes approximately 150 ms to construct the obstacle map and to generate the candidate destinations and waypoints in our system.
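The sketch below illustrates this waypoint-generation step under simplifying assumptions: obstacles are given as already merged and inflated shapely polygons, boundaries are sampled at a fixed step, and Voronoi edges intersecting any obstacle are discarded. The library choices (scipy, shapely) and helper names are illustrative and are not the authors' implementation.

```python
import numpy as np
from scipy.spatial import Voronoi
from shapely.geometry import LineString, Polygon

def boundary_samples(poly: Polygon, step: float = 0.1) -> np.ndarray:
    """Approximate a polygon boundary by points spaced `step` metres apart."""
    ring = poly.exterior
    dists = np.arange(0.0, ring.length, step)
    return np.array([ring.interpolate(d).coords[0] for d in dists])

def voronoi_waypoints(obstacles, spacing: float = 0.2):
    """Waypoints sampled every `spacing` metres along Voronoi edges lying in free space."""
    pts = np.vstack([boundary_samples(o) for o in obstacles])
    vor = Voronoi(pts)
    waypoints = []
    for i, j in vor.ridge_vertices:
        if i == -1 or j == -1:                      # skip edges extending to infinity
            continue
        edge = LineString([vor.vertices[i], vor.vertices[j]])
        if any(edge.intersects(o) for o in obstacles):
            continue                                 # keep only edges clear of obstacles
        for d in np.arange(0.0, edge.length + 1e-9, spacing):
            p = edge.interpolate(d)
            waypoints.append((p.x, p.y))
    return waypoints
```

Because the Voronoi edges are, by construction, locally farthest from the sampled obstacle boundaries, waypoints drawn from them keep the planned path away from obstacles.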

E. Self-Localization of the Wheelchair

Two types of localization, global localization and process localization, are employed in our system.

1) Global Localization: Global localization is employed to accurately estimate the wheelchair's initial position without any prior position information. An LRF is employed in our global localization system. The corresponding global localization algorithm, which was previously proposed [18], includes four main steps: first, local line segments are extracted from the LRF data; second, we search for possible matched line segment pairs based on the line segment relationships; third, we evaluate the likelihood of matching for each matching pair; and last, the coordinates of the wheelchair are calculated as the localization results.

2) Process Localization: Process localization, which is based on an LRF and two encoders, is employed to track the position of the wheelchair in real time after the initial position of the wheelchair is known. The process localization algorithm is similar to a previously described method [19]. First, the local line segments, which are extracted from the LRF data, are transformed to the global coordinates according to the estimated pose by dead reckoning. Second, we compare the transformed local line segments with the candidate global line segments in the vector map to find matching pairs based on the following criteria: (i) the orientation difference between the two lines is smaller than a predefined threshold; (ii) the distance from the midpoint of the transformed local line segment to the global line segment in the map is less than a predefined threshold; and (iii) the projection of the midpoint on the global line segment lies within the global line segment. We employ an existing method for extracting the candidate global line segments from the map, as previously described [20]. Last, if all matched line segments are parallel, the algorithm returns to the global localization step. Otherwise, the pose of the wheelchair in the global coordinate system is calculated according to the following equations. Assuming $n$ non-parallel matching pairs, the orientation $\phi$ and translation $T(T_x, T_y)$ of the wheelchair in the global coordinate system are calculated using (1) and (2) as

$$\phi = \frac{1}{N}\sum_{k=1}^{n}\left(\phi_G^k - \phi_L^k\right) N_k \qquad (1)$$

$$T_x = \frac{1}{N}\sum_{k=1}^{n} T_x^k\, N_k, \quad T_y = \frac{1}{N}\sum_{k=1}^{n} T_y^k\, N_k \qquad (2)$$

For each non-parallel pair, $T_x^k$ and $T_y^k$ are determined by (3) as

$$A_i\left(R_\phi P_i + T^k\right) + c_i = 0, \quad i = 1, 2 \qquad (3)$$

where

$$A_i = [a_i, b_i], \quad R_\phi = \begin{bmatrix}\cos\phi & -\sin\phi\\ \sin\phi & \cos\phi\end{bmatrix}, \quad P_i = \begin{bmatrix} x_{Lc}^i \\ y_{Lc}^i \end{bmatrix}, \quad N = \sum_{k=1}^{n} N_k$$

$\phi_G^k$ and $\phi_L^k$ are the inclination angles of the global line segment and the local line segment in their corresponding coordinate systems, respectively; $a_i$, $b_i$, and $c_i$ are the parameters of the linear equation that corresponds to the global line segment; $P_i$ is the midpoint of the local line segment; $N_k$ is the number


of scan points that contributed to generating the local line segment, where $N_k/N$ indicates the quality of the local line segment for localization; thus, local line segments composed of more scan points receive greater weight in computing $\phi$ and $T(T_x, T_y)$. In practice, we achieved a localization accuracy within 3 cm and 1 degree using this localization method.

F. Path Planning

The selection of an optimal path from several candidate paths contributes to a reduction in navigation time. The A* algorithm [21], which employs a best-first search and finds the least-cost path from a given initial node to a goal node, is applied to a set of points that includes the current position of the wheelchair (start point), the destination (end point) selected by the BCI, and a sequence of waypoints based on Voronoi edges; the optimal path is thereby obtained. Because the planned path consists of waypoints based on Voronoi edges, it is a safe one.

G. Path Tracking

After the optimal path is obtained, the position error between the actual position of the wheelchair and the path is calculated and employed as the feedback of a proportional-integral-derivative (PID) tracking algorithm (as previously described in [22]). The reference angular speed is obtained based on this algorithm. In our setup, the linear velocity is considered to be constant (0.18 m/s) and the angular speed is restricted to a maximum of 0.6 rad/s to guarantee safety and comfort. The reference linear and angular speeds are employed as the input for the following motion controller.
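For illustration, the sketch below shows one way the path-planning step above can be realized: A* search over a graph whose nodes are the start point, the selected destination, and the Voronoi-based waypoints, with edges weighted by Euclidean distance. The k-nearest-neighbour graph construction, the omission of collision checks on edges, and all function names are simplifying assumptions for this example, not the authors' implementation.

```python
import heapq
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def build_graph(nodes, k=5):
    """Connect every node to its k nearest neighbours, weighted by Euclidean distance."""
    graph = {i: [] for i in range(len(nodes))}
    for i, p in enumerate(nodes):
        nbrs = sorted(range(len(nodes)), key=lambda j: dist(p, nodes[j]))[1:k + 1]
        for j in nbrs:
            graph[i].append((j, dist(p, nodes[j])))
            graph[j].append((i, dist(p, nodes[j])))
    return graph

def a_star(nodes, graph, start, goal):
    """Least-cost sequence of node indices from start to goal (straight-line heuristic)."""
    open_set = [(dist(nodes[start], nodes[goal]), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        for nxt, w in graph[cur]:
            ng = g + w
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                f = ng + dist(nodes[nxt], nodes[goal])
                heapq.heappush(open_set, (f, ng, nxt, path + [nxt]))
    return None

# Usage: nodes = [wheelchair_position] + voronoi_waypoints + [selected_destination]
# path = a_star(nodes, build_graph(nodes), start=0, goal=len(nodes) - 1)
```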

H. Motion Control

Once the reference speed commands are obtained from the path tracking module, a PID controller drives the wheelchair along the planned path. The reference speed commands are employed as the inputs of the PID controller. The odometry data from the two encoders attached to the glide-wheels are transmitted to the controller. The actual linear and angular velocities are calculated in real time based on the odometry data. These velocity values are employed as the feedback of the PID motion controller, which regulates the wheelchair control signals.

I. Emergency Collision Avoidance

To prevent collisions when new dynamic obstacles that are not included in the already constructed obstacle map, such as pedestrians and pets, appear near the wheelchair, an array of three ultrasonic sensors is fixed to the front of the wheelchair. When an obstacle is detected within 45 cm, the wheelchair automatically stops. If the obstacle disappears within 10 seconds, the wheelchair resumes its motion from the stopping point to the previously selected destination. Otherwise, the obstacle map is automatically updated and the graphical user interface (GUI) switches to the destination selection interface, from which the user can reselect a destination to reactivate the system.

III. BRAIN-COMPUTER INTERFACE

In our system, a BCI is employed to select a destination and to issue a stop command during the motion of the wheelchair. We provide the user with two choices: an MI-based and a P300-based BCI. For a user with visual impairment, the MI-based BCI is generally selected. For users with normal vision, the P300-based BCI is suggested if they have difficulty performing MI to produce control signals.

A. EEG Data Acquisition

In our system, the EEG signals are recorded from 15 electrodes (shown in Fig. 3) attached to an EEG cap (Compumedics, Neuroscan Inc., Abbotsford, Australia). The ground electrode is positioned on the forehead, and the reference electrode is placed on the right mastoid. Two channels, “HEOG” and “VEOG”, which represent eye movements, are excluded (not shown here). All electrode impedances are maintained below 5 kΩ. The EEG signals are amplified (NuAmps, Neuroscan), digitized with a sampling frequency of 250 Hz, and bandpass-filtered between 0.5 Hz and 100 Hz.

Fig. 3. Names and distribution of electrodes. Fifteen electrodes (blue color) are employed in our experiment.

B. MI-based BCI for Destination Selection and Stopping

In the following, we describe our method for destination selection and stop command generation based on the MI-based BCI.

1) Destination Selection: The GUI, in which two vertical bars on the left and right sides are employed for visual feedback, is illustrated in Fig. 4(a). When left- or right-hand MI is detected by the BCI system, the corresponding left/right bar is filled with a red color. Here, the size of the filled red area (expressed as a ratio) is proportional to the output value of the MI detection. A small horizontal pink/cyan bar is located in the left/right bar. The two horizontal bars indicate the threshold for MI detection; their colors guide the user to perform left- or right-hand MI, as explained below.




Fig. 4. (a) The GUI provides visual feedback for the user to select a destination using an MI-based BCI. The central area contains twenty-five candidate destinations (pink/cyan solid circles), four obstacles (gray areas), and the actual position of the wheelchair (a red solid circle). Two vertical bars on the left and right sides of the central area provide the user with feedback about the current status of the MI-based BCI. (b) The progress for selecting the destination ‘2’. This procedure, including five sequential selections, begins with 25 destinations. Specifically, we obtain 12 remaining destinations via the first selection. Repeat this process until only destination ‘2’ remains.

Home or office environments tend to contain several or dozens of destinations; however, an MI-based BCI generally performs the classification of left- and right-hand MI, which only supports a binary decision. We solve this problem using a successive dichotomy method. Specifically, the candidate destinations are represented using pink and cyan solid circles on the screen, in which the two colors denote two subsets of destinations. If the user wants to select a pink/cyan destination, he/she needs to perform left- or right-hand MI and keep the filled red area beyond the threshold bar for a minimum of two seconds. Once the output values of the left- or right-hand MI detection exceed the predefined threshold for two consecutive seconds, the pink/cyan destinations remain in the GUI and are partitioned again into two classes with pink and cyan colors, whereas the other destinations disappear. The user repeats this selection procedure until a single (final) destination is determined. If a wrong selection occurs in a selection procedure, the user can remain in an idle state for two seconds to reject this selection. If no motor imagery is detected by the system for two seconds, the wrong selection at this level is automatically deleted and the system returns to the upper level. Subsequently, the user can continue the selection procedure. If an error occurs in the last selection, the user can reject this wrong selection using the same method. Once the final candidate destination is obtained, the user needs to perform left- or right-hand MI and maintain the filled red area beyond the threshold bar for a minimum of two seconds to accept or reject the result of this entire selection procedure. If the user rejects the final selection result via this validation procedure, the GUI returns to its initial state. Once the final selection result is accepted by the user, the autonomous system takes control to navigate the wheelchair to the destination. After the wheelchair stops at the selected destination, the GUI switches to its initial state. The user can select a new destination to reactivate the system.
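The successive dichotomy logic can be sketched as follows. This is an illustrative example rather than the authors' code: it assumes pink corresponds to left-hand MI and cyan to right-hand MI (as the paired wording above suggests), treats an idle period as reverting to the previous level, splits by list order instead of the spatial grouping used in the actual GUI, and omits the final validation step.

```python
def dichotomy_select(destinations, read_mi_decision):
    """Successive dichotomy over destination ids.
    read_mi_decision() blocks until it returns 'left' (keep the pink half),
    'right' (keep the cyan half), or None (idle for 2 s: undo the last split)."""
    history = [list(destinations)]
    while len(history[-1]) > 1:
        current = history[-1]
        half = len(current) // 2
        pink, cyan = current[:half], current[half:]     # two colour-coded subsets
        choice = read_mi_decision()
        if choice is None:                              # reject: return to the upper level
            if len(history) > 1:
                history.pop()
            continue
        history.append(pink if choice == "left" else cyan)
    return history[-1][0]
```

With 25 candidate destinations, this yields the roughly five selections (plus one validation) per destination reported in the experiments.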

2) Stop Command: During the wheelchair motion, the user can issue a stop command by performing left-hand MI for a minimum of three seconds. Once the BCI detects three seconds of left-hand MI, the wheelchair stops and the GUI switches to its initial state. In this case, the user can reselect a destination to restart the wheelchair.

3) MI Detection: To detect left- or right-hand MI, the system extracts the EEG block of 1200 ms ending at the current time point and performs the following processing steps: first, the signal block is spatially filtered with a common average reference (CAR) filter and subsequently bandpass-filtered between 8 and 30 Hz; second, we extract a feature vector by projecting the filtered EEG data using a common spatial pattern (CSP) transformation matrix; last, this feature vector is fed into a support vector machine (SVM) classifier. Both the predicted class and the corresponding output value (the absolute value of the SVM score) are obtained. Here, the predicted class is employed to determine left- or right-hand MI. This detection is performed every 200 ms.

The CSP transformation matrix and the SVM classifier are obtained in the training session. The training data for each subject are collected over 60 trials: 30 trials of left-hand MI and 30 trials of right-hand MI. In each trial, the subject is instructed to perform four seconds of left- or right-hand MI, as indicated by a left or right arrow on the screen and as illustrated in Fig. 5.

To obtain the threshold for MI detection, an EEG dataset is collected. Specifically, the user is asked to stay in an idle state for two minutes, in which he/she does not perform any MI. By applying a moving window with a length of 1200 ms and a window overlap of 1000 ms to this dataset, we extract $N$ blocks of data ($N = 595$). We then perform a classification for each block of data using the SVM classifier and obtain $N$ SVM scores $f(1), \cdots, f(N)$. The threshold for MI detection is defined as $f_{\max} \cdot p$, where $f_{\max} = \max\{|f(1)|, \cdots, |f(N)|\}$ and $p$ is a scale factor. The parameter $p$ is set as follows: $p$ is initially set to 1. Prior to the online test, the user is asked to remain in the idle state. If the absolute values of the SVM scores never or seldom exceed


the threshold, the parameter $p$ can remain unchanged; otherwise, we can increase $p$ (e.g., $p = 1.1$) until an appropriate $p$ value is obtained. As shown in Fig. 4(a), the height of each vertical bar corresponds to five times $f_{\max}$; the positions of the two horizontal bars in the corresponding vertical bars are determined by the ratio $\frac{f_{\max} \cdot p}{5 f_{\max}}$; and the size of the filled red area is determined by the ratio $\frac{f}{5 f_{\max}}$, where $f$ is the output value of the MI detection. For each subject, the entire data collection requires approximately 14 minutes, excluding the EEG preparation time.
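The per-block detection and the idle-based threshold described above can be sketched as follows. This is a minimal example under stated assumptions: a scikit-learn-style SVM with a decision_function is assumed, the log-variance of the CSP-projected signals is used as the feature vector (a common choice that the text does not spell out), and the mapping of the score sign to left or right hand is arbitrary.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 250                                                       # EEG sampling rate (Hz)
B, A = butter(4, [8 / (FS / 2), 30 / (FS / 2)], btype="band")  # 8-30 Hz bandpass

def mi_detect(block, csp_matrix, svm, threshold):
    """block: 15 x 300 array (15 channels, 1200 ms at 250 Hz)."""
    car = block - block.mean(axis=0, keepdims=True)            # common average reference
    filt = lfilter(B, A, car, axis=1)                          # bandpass filtering
    z = csp_matrix @ filt                                      # CSP spatial projection
    feat = np.log(np.var(z, axis=1))                           # log-variance feature vector
    score = float(svm.decision_function(feat[None, :])[0])     # signed SVM score
    if abs(score) < threshold:
        return None, abs(score)                                # idle: below f_max * p
    return ("left" if score < 0 else "right"), abs(score)      # sign mapping is illustrative

def idle_threshold(idle_scores, p=1.0):
    """Threshold f_max * p, with f_max the largest |SVM score| over idle-state blocks."""
    return max(abs(s) for s in idle_scores) * p
```

In online use, mi_detect would be called every 200 ms on the most recent 1200 ms block, and a command is accepted only when the same class stays above the threshold for two consecutive seconds.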


Fig. 5. Paradigm for MI training data acquisition. In a trial, the screen remains blank for the first 2.25 seconds. From 2.25 seconds to 4 seconds, a cross appears on the screen to attract the subject’s attention. From 4 seconds to 8 seconds, a cue (left/right arrow) is presented, and the subject performs left/right-hand MI according to the cue.

C. P300-based BCI for Destination Selection and Stopping

In the following, we describe our method for destination selection and stop command generation based on the P300-based BCI.

1) Destination Selection: The destination selection is performed in two steps: (i) the user silently determines, within 20 seconds, the number that corresponds to his/her desired destination according to Fig. 2(b); and (ii) the P300 GUI appears on the screen, as shown in Fig. 6, where the numbers correspond to the candidate destinations in Fig. 2(b). Using this P300-based BCI, the user selects the intended destination by focusing on the corresponding number button. Additionally, if the number of candidate destinations in Fig. 2(b) is less than the number of number buttons in Fig. 6, then several number buttons in Fig. 6 do not correspond to any candidate destination. These number buttons do not generate any control command even if they are selected by the user. If the desired destination is selected, the user should focus on the ‘O/S’ button for validation. Otherwise, the subject should focus on the ‘Delete’ button to reject this selection and reselect the desired destination. Once a P300 potential is detected at the ‘O/S’ button, the wheelchair automatically moves to the selected destination and stops when the destination is reached. After the wheelchair stops, the GUI in Fig. 2(b) is shown to the subject, and the subject can select a new destination to restart the wheelchair. In our setup, we employ the above two different GUIs instead of combining them into one GUI for destination selection based on the following consideration: if the distribution of the stimulation buttons in the GUI of the P300 BCI matched the distribution of the candidate destinations in the obstacle map (Fig. 2(b)), these stimulation buttons would not be uniformly distributed in the GUI, which may affect the performance of the P300-based BCI.

Fig. 6. GUI of the P300-based BCI is employed for destination selection. Forty buttons arranged in a 10×4 stimuli matrix correspond to 37 destinations and three functional keys. Thirty-seven Arabic number buttons were mapped onto 37 different destinations. ‘O/S’: ‘O’ represents validation of the selected destination, navigation of the wheelchair to the final destination, and locking of the P300 interface, whereas ‘S’ represents stopping the wheelchair and unlocking the P300 interface. ‘Delete’: represents deletion of the last input destination. ‘More’: opens the next stimuli interface with additional and different destinations.

2) Stop Command: When the wheelchair is in motion, the system performs P300 detection once every 1.2 seconds. If and only if P300 is detected at the ‘O/S’ button, a stop command for the wheelchair is issued. In this manner, the user can stop the wheelchair in motion by focusing on the ‘O/S’ button in Fig. 6. Once the wheelchair stops, the computer screen switches to the GUI in Fig. 2 (b), from which the user can select a destination again to restart the wheelchair. 3) P300 Detection: The P300 detection for the destination selection is described as follows. First, the EEG signals are bandpass-filtered in the range of 0.1 Hz to 20 Hz and subsequently downsampled at a rate of 5 Hz. Second, a segment (0-600 ms after a button flashes) of EEG signals is extracted from each channel to form a vector. Third, a feature vector that corresponds to a button flash is constructed by concatenating the vectors from the 15 channels in our system. After one round of button flashes, an SVM classifier is applied to these feature vectors, and forty scores that correspond to forty buttons are obtained. Here, one round of button flashes is defined as a complete cycle, in which each of the 40 buttons flashes once in a random order. The duration of each flash is 100 ms, and a flash begins 30 ms after the onset of the previous flash. Thus, one round of button flashes requires about 1.2 seconds (30ms × 40). Last, we calculate the sum of the SVM scores for each button obtained from l accumulated rounds (l ≥ 4 in this study) and determine the maximum and second maximum among these summed scores. The button with the highest score is selected only if the ratio between the maximum summed scores and the second maximum summed scores is higher than a threshold. Here, the detection is performed per round of flashes using the previous l rounds of data, until an output is obtained. The detection condition in this study is the same as the condition in our previous study [23]. While the wheelchair is in motion, we use the following condition for P300 detection. Based on the summed SVM scores for all buttons obtained from three accumulated rounds, the ‘O/S’ button in the GUI shown in Fig. 6 is selected when


its score is the highest. No control command is generated when the P300 is detected at the other buttons. These buttons function as pseudo-keys, which are useful for differentiating the control and idle states, as explained in our previous studies [24], [25]. We employ the same GUI for stop detection and destination selection. For destination selection, each button in the GUI represents a destination or a functional key and can be selected by the user. Because all buttons in the GUI for stop detection, with the exception of the ‘O/S’ button, are pseudo-keys, the detection condition can be set less strictly than the detection condition for destination selection. In this manner, the time for issuing a stop command can be reduced, whereas the false activation rate can be maintained at an acceptable level.

During the training data-acquisition phase, each subject performs 30 trials with the GUI shown in Fig. 6. In each trial, all 40 buttons flash in a random order and each button flashes ten times. The duration of each flash is 100 ms, and a flash begins 30 ms after the onset of the previous flash. Simultaneously, the subject is instructed to focus on a given button according to the cue. For each subject, the training data acquisition requires approximately 8 minutes, excluding the EEG preparation time. The collected training data are used to train an SVM classifier for P300 detection.

IV. EXPERIMENTAL RESULTS

To validate the effectiveness of our intelligent wheelchair, we conducted two experiments, namely, Experiment 1 and Experiment 2, which involved the MI-based BCI and the P300-based BCI, respectively.

A. Subjects

Nine subjects (S1, S2, ..., and S9; eight males and one female, with ages ranging from 23 to 31 years) from the local research unit participated in the experiments. Three subjects (S1, S2, and S3) participated in Experiment 1, which involved the MI-based BCI; the remaining six subjects (S4, S5, ..., and S9) participated in Experiment 2, which involved the P300-based BCI. Prior to the experiments, only subjects S1, S2, and S3 had considerable experience using the MI-based BCI system. Subjects S4, S5, S6, and S7 had experience using the P300-based BCI system [25], and the two remaining subjects had never used either the P300- or the MI-based BCI system.

B. Experimental Tasks

In the following, we describe two different scenarios, namely, the simple Scenario A and the complex Scenario B, and the corresponding experimental tasks. Scenarios A and B were involved in each of Experiments 1 and 2.

In Scenario A, the subjects were required to consecutively perform the following three tasks in a room (7.475 m × 5.565 m) equipped with a few pieces of furniture (an air-conditioner, a bookcase, and two chairs). The experimental environment, as depicted in Fig. 7(a), contained 25 candidate destinations. For Task 1, the subjects were instructed to select

a given destination using the MI- or P300-based BCI. The wheelchair automatically moved to the destination and stopped there. Next, the subject performed Task 2, which was similar to Task 1 but with a new destination. For Task 3, each subject selected a given destination such that the wheelchair automatically moved to it; however, during the movement of the wheelchair, a randomly delivered beep tone reminded the subject to stop the wheelchair. The subject stopped the wheelchair as quickly as possible by MI or P300. After the wheelchair stopped, the subject reselected the given destination and allowed the wheelchair to reach it. Note that the starting positions of the wheelchair for Task 2 and Task 3 were the destination positions of Task 1 and Task 2, respectively. At the beginning of each task, a traffic cone was placed at a position in the room to indicate the destination for the subject.

We validated the feasibility of our wheelchair system in a relatively complex home environment, namely, Scenario B. Compared with the experimental environment in Scenario A, a TV cabinet with a TV, a sofa, and a coffee table were added in Scenario B, which rendered the simulated environment more complex and practical (refer to Fig. 7(b)). This scenario contained 24 candidate destinations. Each subject performed five trials, and each trial consisted of two tasks. For Task 1, the subjects were required to drive the wheelchair to five goals one at a time, as depicted in Fig. 7(b). For Task 2, the subject was asked to drive the wheelchair to the same five goals outlined in Task 1. However, a stop area with a length of 1 m and a width of 1 m was set in the middle of each path from one goal to the next and marked on the floor prior to the beginning of Task 2 (e.g., the light blue area in the path from Goal 1 to Goal 2 in Fig. 7(b)). Note that these paths were acquired after Task 1 was performed. While the wheelchair was moving to each goal, the subject was required to stop the wheelchair in the stop area as precisely as possible. At the beginning of each task, a traffic cone was placed at a position in the room to indicate the first destination for the subject. After the wheelchair reached a destination, the examiner moved the traffic cone to another position to indicate the next destination.

To facilitate the experimental data analysis, we randomly set the destination positions for each experimental task and the starting position of the wheelchair in Scenarios A and B. All subjects employed this setting; however, they had no prior knowledge of these positions. In each of Experiments 1 and 2, two sessions were conducted for each subject on two separate days; the first session and the second session involved Scenario A and Scenario B, respectively. Prior to each session, training data were collected to train the classifier. The subjects (S1, S2, and S3) in Experiment 1 were required to perform the experimental tasks using the MI-based BCI, whereas the subjects (S4, S5, ..., and S9) in Experiment 2 performed the experimental tasks using the P300-based BCI.

C. Performance Evaluation

To evaluate our intelligent wheelchair system, we adopted the following metrics, according to the related references [12], [13].


Fig. 7. Two different experimental scenarios: (a) simple environment (Scenario A) and (b) complex environment (Scenario B). All obstacles are colored in light gray; the black solid dots indicate the goals/destinations; and the red and blue lines represent the planned path and the actual navigation path, respectively. The experimental data in subplots (a) and (b) were obtained from subjects S3 and S1, respectively. In subplot (b), the light blue rectangle indicates a stop area between Goal 1 and Goal 2.

The metrics that were employed to evaluate the performance of the BCIs are as follows:

1) Concentration time (CT): The time spent selecting a destination and validating it, including the time required to correct incorrect selections.
2) Concentration time for each selection (CTFES): The time spent performing one selection with the MI-based BCI.
3) False destination selections (FS): The number of incorrect destination selections of the BCI. Once an incorrect selection occurs, the user needs to delete it using the BCI.
4) Response time (RT): The time from the moment the user is instructed to stop the wheelchair to the moment the stop command is issued (note: in our setup, the linear velocity of the wheelchair is 0.18 m/s, from which the distance required for stopping can be calculated).
5) Success rate (SR): The ratio of successful stops to the total number of stops. A successful stop implies that the center of the wheelchair is located in the stop area when the wheelchair stops.
6) Error distance (ED): For an unsuccessful stop, the distance between the center of the wheelchair and the edge of the stop area nearest to the wheelchair.
7) False activation rate (FA): The number of times per minute that a stop command is incorrectly issued when the subject does not intend to stop the wheelchair.

D. Results

In Experiment 1, the results for concentration time are summarized in Table I. The subjects required an average of 24.3 seconds and 23.8 seconds to complete a destination selection using the MI-based BCI in Scenario A and Scenario B, respectively. Additionally, six steps (including five selections and one validation) for each destination selection were

required in Scenario A, and 5.6 steps on average were required for a destination selection in Scenario B. The number of steps required for each destination selection can be calculated from the dichotomy method; an example is shown in Fig. 4(b). The average concentration time for each selection (CTFES) was 4 seconds and 4.3 seconds for Scenario A and Scenario B, respectively. The average time for each selection was substantially longer than the predefined time of 2 seconds because the subjects often failed to keep the absolute values of the SVM scores above the threshold for two consecutive seconds. No false destination selections occurred with the MI-based BCI in Scenario A or Scenario B. For the stop command generation summarized in Table II, the subjects issued a stop command in Scenario A with the MI-based BCI in an average of 4.5 seconds, which corresponds to a distance of 0.81 m traveled at a velocity of 0.18 m/s before the wheelchair stops. For Scenario B, the subjects could stop the wheelchair within the stop area (success rate: 94.7 ± 2.3% of trials) or near the stop area (average error distance: 9.5 ± 2.9 cm) using the MI-based BCI. Furthermore, the FA rates for wheelchair stopping were zero in Scenario A and Scenario B.

TABLE I
PERFORMANCE INDICES FOR DESTINATION SELECTION IN EXPERIMENT 1

BCIs  Subject   | Scenario A                      | Scenario B
                | CT(s)        CTFES(s)    FS     | CT(s)        CTFES(s)    FS
MI    S1        | 24.1 ± 1.4   4.0 ± 0.2   0      | 23.8 ± 2.8   4.3 ± 0.2   0
MI    S2        | 25.8 ± 1.3   4.3 ± 0.2   0      | 24.4 ± 2.6   4.4 ± 0.1   0
MI    S3        | 23.1 ± 0.8   3.8 ± 0.1   0      | 23.2 ± 2.5   4.2 ± 0.1   0
MI    Average   | 24.3 ± 1.6   4.0 ± 0.3   0      | 23.8 ± 2.6   4.3 ± 0.2   0

As illustrated in Table III, the subjects in Experiment 2 required an average of 12.8 seconds and 13.9 seconds to complete a destination selection using the P300-based BCI in Scenario A and Scenario B, respectively. Furthermore, several


TABLE II
PERFORMANCE INDICES FOR STOP COMMAND GENERATION IN EXPERIMENT 1

BCIs  Subject   | Scenario A                 | Scenario B
                | RT(s)      FA (event/min)  | SR(%)         ED(cm)        FA (event/min)
MI    S1        | 4.1        0               | 96            10.0 ± 0.0    0
MI    S2        | 5.1        0               | 92            11.0 ± 2.8    0
MI    S3        | 4.4        0               | 96            6.0 ± 0.0     0
MI    Average   | 4.5 ± 0.5  0               | 94.7 ± 2.3    9.5 ± 2.9     0

false destination selections occurred for several subjects in Scenario B. However, the subjects corrected these false selections using the P300-based BCI and successfully performed the experimental tasks; the false destination selections caused an increase in the concentration time. For the stop command generation summarized in Table IV, the subjects issued a stop command in Scenario A with the P300-based BCI in an average of 6.2 seconds, which corresponds to a distance of 1.12 m traveled at a velocity of 0.18 m/s before the wheelchair stops. In Scenario B, the subjects were able to stop the wheelchair in the stop area (success rate: 92.0 ± 4.4% of trials) or near the stop area (average error distance: 12.7 ± 3.7 cm) using the P300-based BCI. The FA rates for wheelchair stopping were zero for both Scenario A and Scenario B.

TABLE III
PERFORMANCE INDICES FOR DESTINATION SELECTION IN EXPERIMENT 2

BCIs   Subject   | Scenario A           | Scenario B
                 | CT(s)        FS      | CT(s)        FS
P300   S4        | 12.8 ± 0.7   0       | 13.4 ± 3.4   1
P300   S5        | 12.4 ± 2.5   0       | 13.8 ± 3.9   3
P300   S6        | 11.2 ± 1.4   0       | 12.2 ± 2.0   0
P300   S7        | 12.8 ± 2.4   0       | 14.4 ± 3.3   2
P300   S8        | 13.2 ± 2.1   0       | 14.1 ± 4.3   3
P300   S9        | 14.4 ± 2.4   0       | 15.2 ± 5.3   5
P300   Average   | 12.8 ± 2.0   0       | 13.9 ± 3.9   2.3 ± 1.8

TABLE IV
PERFORMANCE INDICES FOR STOP COMMAND GENERATION IN EXPERIMENT 2

BCIs   Subject   | Scenario A                 | Scenario B
                 | RT(s)      FA (event/min)  | SR(%)         ED(cm)        FA (event/min)
P300   S4        | 4.8        0               | 92            13.0 ± 1.4    0
P300   S5        | 7.2        0               | 92            11.5 ± 2.1    0
P300   S6        | 4.8        0               | 96            7.0 ± 0.0     0
P300   S7        | 6.0        0               | 92            18.0 ± 2.8    0
P300   S8        | 6.0        0               | 96            13.0 ± 0.0    0
P300   S9        | 8.4        0               | 84            11.8 ± 3.6    0
P300   Average   | 6.2 ± 1.4  0               | 92.0 ± 4.4    12.7 ± 3.7    0

The comparison of the results of Experiment 1 with those of Experiment 2 reveals the following differences in performance between the MI-based and the P300-based BCI: (i) The subjects required less time to complete a destination selection using the P300-based BCI than using the MI-based BCI. Generally, only two selections were required to determine a destination using the P300-based BCI (the first for the selection of the destination button and the second for the selection of the ‘O/S’ button for validation), whereas an MI-based selection employed a dichotomy method that required multiple selections for one destination (e.g., six selections were needed in the example in Fig. 4(b), in which the first five determine the destination and the last is for validation). (ii) The time for issuing a stop command was

shorter with the MI-based BCI than with the P300-based BCI. Furthermore, a higher success rate and a smaller average error distance were achieved with the MI-based BCI than with the P300-based BCI. Therefore, the MI-based BCI outperformed the P300-based BCI in issuing a stop command. Subjects who have sufficient experience in MI-based control may select the MI-based BCI in our intelligent wheelchair system because precisely issuing a stop command is challenging. For naive subjects, we recommend the P300-based BCI because an extensive training procedure is generally required to effectively use an MI-based BCI.

V. DISCUSSION AND CONCLUSIONS

In this paper, we present an intelligent wheelchair that combines an MI- or P300-based BCI and an automated navigation system. Two experiments, one based on MI and one based on P300, were conducted. The experimental results demonstrated the effectiveness of our system. The wheelchair system has several advantages: (i) The candidate destinations and paths are automatically generated based on the current environment, which indicates that our system can adapt to changes in the environment. (ii) Once the user selects a destination with the BCI, our wheelchair automatically navigates to it; the user does not need to issue additional mental commands. Thus, the workload for the user is significantly alleviated. (iii) During the motion of the wheelchair, the user can issue a stop command via the BCI.

To illustrate these advantages, we compared our intelligent wheelchair system with state-of-the-art brain-controlled wheelchairs. In the previously described BCI-controlled wheelchair systems [1]–[4], no robot intelligence was incorporated. The user was required to concentrate on the control of the wheelchair and frequently issue steering commands. Thus, the required mental effort for these systems exceeds that for our system. In contrast with the autonomous wheelchair systems equipped with conventional interfaces [5]–[7], our system provides a direct communication interface for the users and enables them to determine the destinations without manual assistance.

The BCI and automated navigation techniques were integrated in some previously described wheelchair systems [9]–[12]. Because users need to issue steering commands, for instance, when the wheelchair reaches a bifurcation point, these systems may be time-consuming and cause some mental burden, especially when a P300-based BCI is employed. Our system can be regarded as an extension of a previously proposed system [13]. Using that system [13], the user selected a destination via a BCI, and the wheelchair, which was controlled by the autonomous navigation system, navigated toward the destination without any commands from the user. However, the candidate destinations and paths were predefined, which implied that once a modification to the environment occurred, a technician would need to redefine the candidate destinations and paths. This inconvenience has been overcome in our system via generation of the candidate destinations and path planning prior to operation of the wheelchair.

The main disadvantage of our method is that the wheelchair is only suitable for rooms equipped with webcams. In future work, we intend to extend the system so that the wheelchair can move in more complex indoor or outdoor environments. Previous studies have demonstrated that some disabled people can perform similarly to healthy persons when using either a P300- or an MI-based BCI [26]–[29]. In the future, we also need to improve the system for severely disabled people.

REFERENCES

[1] K. Tanaka, K. Matsunaga, and H. O. Wang, "Electroencephalogram-based control of an electric wheelchair," IEEE Trans. Robot., vol. 21, no. 4, pp. 762–766, Aug. 2005.
[2] K. Choi and A. Cichocki, "Control of a wheelchair by motor imagery in real time," in Proc. 9th Int. Conf. Intell. Data Eng. Autom. Learning, 2008, vol. 5326, pp. 330–337.
[3] J. Long, Y. Li, H. Wang, T. Yu, J. Pan, and F. Li, "A hybrid brain computer interface to control the direction and speed of a simulated or real wheelchair," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 20, no. 5, pp. 720–729, 2012.
[4] P. F. Diez, S. M. Torres Müller, V. A. Mut, E. Laciar, E. Avila, T. F. Bastos-Filho, and M. Sarcinelli-Filho, "Commanding a robotic wheelchair with a high-frequency steady-state visual evoked potential based brain–computer interface," Med. Eng. Phys., vol. 35, no. 8, pp. 1155–1164, 2013.
[5] Q. Zeng, B. Rebsamen, E. Burdet, and C. L. Teo, "A collaborative wheelchair system," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 16, no. 2, pp. 161–170, 2008.
[6] T. Taha, J. V. Miró, and G. Dissanayake, "POMDP-based long-term user intention prediction for wheelchair navigation," in Proc. IEEE ICRA, Pasadena, CA, May 2008, pp. 3920–3925.
[7] S. P. Parikh, V. Grassi Jr, V. Kumar, and J. Okamoto Jr, "Usability study of a control framework for an intelligent wheelchair," in Proc. IEEE ICRA, Barcelona, Spain, Apr. 2005, pp. 4745–4750.
[8] L. Bi, X. Fan, and Y. Liu, "EEG-based brain-controlled mobile robots: A survey," IEEE Trans. Human Mach. Syst., vol. 43, no. 2, pp. 161–176, 2013.
[9] A. C. Lopes, G. Pires, and U. Nunes, "Assisted navigation for a brain-actuated intelligent wheelchair," Robot. Auton. Syst., vol. 61, no. 3, pp. 245–258, Mar. 2013.
[10] J. Millán, F. Galán, D. Vanhooydonck, E. Lew, J. Philips, and M. Nuttin, "Asynchronous non-invasive brain-actuated control of an intelligent wheelchair," in Proc. IEEE/EMBS 31st Annu. Int. Conf., Minneapolis, MN, Sept. 2009, pp. 3361–3364.
[11] T. Carlson and J. d. R. Millán, "Brain-controlled wheelchairs: A robotic architecture," IEEE Robot. Autom. Mag., vol. 20, no. 1, pp. 65–73, Jun. 2013.
[12] I. Iturrate, J. M. Antelis, A. Kübler, and J. Minguez, "A noninvasive brain-actuated wheelchair based on a P300 neurophysiological protocol and automated navigation," IEEE Trans. Robot., vol. 25, no. 3, pp. 614–627, 2009.
[13] B. Rebsamen, C. Guan, H. Zhang, C. Wang, C. Teo, M. H. Ang, and E. Burdet, "A brain controlled wheelchair to navigate in familiar environments," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 18, no. 6, pp. 590–598, Dec. 2010.
[14] Y. Li and S. T. Birchfield, "Image-based segmentation of indoor corridor floors for a mobile robot," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Taipei, Oct. 2010, pp. 837–843.
[15] C. H. Lin, S. Y. Jiang, Y. J. Pu, and K. T. Song, "Robust ground plane detection for obstacle avoidance of mobile robots using a monocular camera," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Taipei, Oct. 2010, pp. 3706–3711.
[16] A. Criminisi, I. Reid, and A. Zisserman, "A plane measuring device," Image Vis. Comput., vol. 17, no. 8, pp. 625–634, 1999.
[17] K. Sugihara, "Approximation of generalized Voronoi diagrams by ordinary Voronoi diagrams," CVGIP, Graph. Models Image Process., vol. 55, no. 6, pp. 522–531, 1993.
[18] X. Zezhong, L. Jilin, and X. Zhiyu, "Scan matching based on CLS relationships," in Proc. IEEE Int. Conf. Robot., Intell. Syst. Signal Process., vol. 1, Oct. 2003, pp. 99–104.
[19] L. Zhang and B. K. Ghosh, "Line segment based map building and localization using 2D laser range finder," in Proc. IEEE Int. Conf. Robot. Autom., vol. 3, San Francisco, CA, Apr. 2000, pp. 2538–2543.
[20] H. J. Sohn and B. K. Kim, "An efficient localization algorithm based on vector matching for mobile robots using laser range finders," J. Intell. Robot. Syst., vol. 51, no. 4, pp. 461–488, 2008.
[21] P. E. Hart, N. J. Nilsson, and B. Raphael, "A formal basis for the heuristic determination of minimum cost paths," IEEE Trans. Syst. Sci. Cybern., vol. 4, no. 2, pp. 100–107, Jul. 1968.
[22] L. Chen, S. Wang, H. Hu, and K. McDonald-Maier, "Bézier curve based trajectory planning for an intelligent wheelchair to pass a doorway," in Proc. 2012 UKACC Int. Conf. Control, Cardiff, Sept. 2012, pp. 339–344.
[23] Y. Li, J. Long, T. Yu, Z. Yu, C. Wang, H. Zhang, and C. Guan, "An EEG-based BCI system for 2-D cursor control by combining mu/beta rhythm and P300 potential," IEEE Trans. Biomed. Eng., vol. 57, no. 10, pp. 2495–2505, Oct. 2010.
[24] J. Pan, Y. Li, R. Zhang, Z. Gu, and F. Li, "Discrimination between control and idle states in asynchronous SSVEP-based brain switches: A pseudo-key-based approach," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 21, no. 3, pp. 435–443, May 2013.
[25] Y. Li, J. Pan, F. Wang, and Z. Yu, "A hybrid BCI system combining P300 and SSVEP and its application to wheelchair control," IEEE Trans. Biomed. Eng., vol. 60, no. 11, pp. 3156–3166, Nov. 2013.
[26] E. W. Sellers, T. M. Vaughan, and J. R. Wolpaw, "A brain-computer interface for long-term independent home use," Amyotroph. Lateral Scler., vol. 11, no. 5, pp. 449–455, 2010.
[27] R. Ortner, M. Bruckner, R. Pruckl, E. Grunbacher, U. Costa, E. Opisso, J. Medina, and C. Guger, "Accuracy of a P300 speller for people with motor impairments," in Proc. IEEE Symp. Comput. Intell., Cogn. Algorithms, Mind, Brain, Apr. 2011, pp. 1–6.
[28] L. Tonin, T. Carlson, R. Leeb, and J. del Millán, "Brain-controlled telepresence robot by motor-disabled people," in Proc. 33rd Annu. Int. Conf. IEEE EMBC, Boston, MA, Aug./Sep. 2011, pp. 4227–4230.
[29] R. Leeb, D. Friedman, G. R. Müller-Putz, R. Scherer, M. Slater, and G. Pfurtscheller, "Self-paced (asynchronous) BCI control of a wheelchair in virtual environments: A case study with a tetraplegic," Comput. Intell. Neurosci., vol. 2007, pp. 1–7, 2007.

Rui Zhang received the B.S. degree in information and computing science from the University of South China, Hengyang, China, in 2010. He is currently working toward the Ph.D. degree in pattern recognition and intelligent systems at the South China University of Technology, Guangzhou, China. His research interests include noninvasive brain-computer interfaces, brain signal analysis, and their applications.

Yuanqing Li (M'05) was born in Hunan, China, in 1966. He received the B.S. degree in applied mathematics from Wuhan University, Wuhan, China, in 1988, the M.S. degree in applied mathematics from South China Normal University, Guangzhou, China, in 1994, and the Ph.D. degree in control theory and applications from the South China University of Technology, Guangzhou, in 1997. Since 1997, he has been with the South China University of Technology, where he became a Full Professor in 2004. From 2002 to 2004, he was with the Laboratory for Advanced Brain Signal Processing, RIKEN Brain Science Institute, Saitama, Japan, as a Researcher. From 2004 to 2008, he was with the Laboratory for Neural Signal Processing, Institute for Infocomm Research, Singapore, as a Research Scientist. He is the author or coauthor of more than 60 scientific papers in journals and conference proceedings. His research interests include blind signal processing, sparse representation, machine learning, brain-computer interfaces, and EEG and fMRI data analysis.

Yongyong Yan received the B.S. degree in automation from Nanchang University, Nanchang, China, in 2011. He is currently working toward the M.S. degree in pattern recognition and intelligent systems at the South China University of Technology, Guangzhou, China. His research interests include intelligent control and intelligent detection.

Hao Zhang is currently working toward the M.S. degree in pattern recognition and intelligent systems at the South China University of Technology, Guangzhou, China. His research interests include robot control and computer vision.

Shaoyu Wu received the B.S. degree in automation from South China University of Technology, Guangzhou, China, in 2013. He is currently working toward the M.S. degree in pattern recognition and intelligent systems at the South China University of Technology, Guangzhou, China. His research interests include image processing and pattern recognition.

Tianyou Yu received the Ph.D. degree in pattern recognition and intelligent systems from the South China University of Technology, Guangzhou, China, in 2013. He is now with the School of Automation Science and Engineering, South China University of Technology. His research interests include noninvasive brain-computer interfaces, pattern recognition, and machine learning.

Zhenghui Gu (S'00-AM'02-M'03) received the Ph.D. degree from Nanyang Technological University, Singapore, in 2003. From 2002 to 2008, she was with the Institute for Infocomm Research, Singapore. In 2009, she joined the College of Automation Science and Engineering, South China University of Technology, Guangzhou, China, as an Associate Professor. Her research interests include brain signal processing and pattern recognition.
