sensors Article

A Self-Organizing Interaction and Synchronization Method between a Wearable Device and Mobile Robot

Min Su Kim 1, Jae Geun Lee 2 and Soon Ju Kang 2,*

1 Department of Software Convergence, Kyungpook National University, 80 Daehakro, Bukgu, Daegu 702-701, Korea; [email protected]
2 School of Electronics Engineering, College of IT Engineering, Kyungpook National University, 80 Daehakro, Bukgu, Daegu 702-701, Korea; [email protected]
* Correspondence: [email protected]; Tel.: +82-53-950-6604; Fax: +82-53-950-5505

Academic Editors: Massimo Poncino and Ka Lok Man Received: 4 March 2016; Accepted: 3 June 2016; Published: 8 June 2016

Abstract: In the near future, we can expect to see robots naturally following or going ahead of humans, similar to pet behavior. We call this type of robot a “Pet-Bot”. To implement this function in a robot, in this paper we introduce a self-organizing interaction and synchronization method between wearable devices and Pet-Bots. First, the Pet-Bot opportunistically identifies its owner without any human intervention, which means that the robot detects the owner’s approach on its own. Second, the Pet-Bot’s activity is synchronized with the owner’s behavior. Lastly, the robot frequently encounters uncertain situations (e.g., when the robot goes ahead of the owner but meets a situation where it cannot make a decision, or the owner wants to stop the Pet-Bot synchronization mode to relax). For these cases, we have adopted a gesture recognition function that uses a 3-D accelerometer in the wearable device. In order to achieve the interaction and synchronization in real-time, we use two wireless communication protocols: 125 kHz low-frequency (LF) and 2.4 GHz Bluetooth low energy (BLE). We conducted experiments using a prototype Pet-Bot and wearable devices to verify their motion recognition of and synchronization with humans in real-time. The results showed a guaranteed level of accuracy of at least 94%. A trajectory test was also performed to demonstrate the robot’s control performance when following or leading a human in real-time.

Keywords: mobile robot; human following or leading; LF/RF pairing; device-to-device synchronization

1. Introduction

The demand for and supply of human-support robots continue to increase. However, most of these robots require specific control interfaces, which means that the user can hardly do anything except control the robot while using it. If the user could do something else at the same time, his/her time efficiency would increase. Human-support robots that can follow their users mainly use image-processing or sensor-network methods to eliminate specific control interfaces. Image-processing methods require high hardware specifications. In addition, these methods only perform well in locations with low population densities; in locations with high population densities, they require additional complex algorithms to maintain the same performance. Sensor-network methods perform well in a limited space, but they have spatial constraints because a specific infrastructure must be constructed. Therefore, we propose a mobile robot system that can accompany a user and does not require a specific control interface, achieved through self-organizing interaction between a wearable device that obtains the user’s motion information and a mobile robot that provides a carriage service. When the robot goes ahead of or follows a human, it can also synchronize with the human’s motion by using self-organizing interaction without



any specific controls. In case of exceptions, this system requires specific controls that are commanded by a gesture recognition function in the wearable device. The proposed robot is called “Pet-Bot” because, like a pet, it can recognize its master and their location and motion to accompany them. In this system, a Pet-Bot identifies the authorized user and their location by using low-frequency (LF) wireless communication, and the wearable device identifies the user’s motion with an inertial measurement unit (IMU). By using such information, Pet-Bots can always identify an authorized user, go ahead of or follow the user, and synchronize with the user’s motion without spatial constraint. The robot can guarantee a safe carriage service when going ahead of the user because the user can easily check their goods in their line of sight. In addition to the areas shown in Figure 1, this robot can be applied in any area that needs carriage services.

The rest of this paper is organized as follows: Section 2 presents an overview of related works. Section 3 describes the detailed design of our work. Section 4 presents the results of experiments using our prototype robot and wearable devices. Finally, Section 5 summarizes our contributions and discusses future extensions.

Figure 1. Application of the Pet-Bot system.

2. Related Research

Two conventional human-following mobile robots have been developed, each of which has its own method for recognizing and following users. One uses image processing, and the other uses a sensor network.

An image-processing method [1–5] uses a different type or number of cameras and algorithms for obtaining the user’s location and motion information. By analyzing images, it identifies the human’s location and steers itself after the human. However, such methods commonly require powerful computing resources including GPUs, as well as high-capacity batteries for long-term operation. These methods essentially incur delays during the image analysis and hence are not suitable for an embedded system requiring a long operation time with real-time interaction.

The next type of method uses a sensor network [6,7]. This method obtains the user and robot location information using fixed sensor nodes, e.g., a charge-coupled device (CCD) camera, laser range scanner, or radio frequency identification (RFID) reader, within a limited space. Because fixed sensor nodes that communicate with each other are applied, the user and robot must be within a limited space; this method can create an absolute coordinate system, and the robot can obtain the absolute coordinates of both the user and the robot, which are needed to follow the user. This method can provide the robot accurate coordinates of the user and itself, but it requires a specific infrastructure and has certain spatial constraints.

In addition, other methods for human-following or guiding robots have been developed. One uses compressed infrared sensors [8] to detect and follow a human. It does not need a powerful processor but has a weak point in terms of its operation in bright locations. Another method uses laser range finder (LRF) sensors to detect the user and guide them through a safe path [9]. This system partially solves some of the problems described above. However, it cannot synchronize with the user’s motion.


To identify a human’s location, some methods might require a Pedestrian Dead-Reckoning (PDR) system. Inertial Navigation Systems (INSs) [10–12] and Step-and-Heading Systems (SHSs) [13–15] are two major topics of a PDR. An INS uses an Inertial Measurement Unit (IMU) that attaches to the human body, e.g., the head, waist, or toe. The INS filters the data from the IMU and then uses additional compensation functions to obtain the user’s velocity and direction. The SHS uses ultrasonic sensors as well as an IMU attached to the human’s body. The SHS filters the data from the sensors and then estimates the user’s steps and direction. To increase the accuracy, both methods use filters and compensation functions, which have a long processing time.

To solve the above constraints, an LF wireless communication method [16] is proposed in this paper. Because this method simply uses an LF signal’s received signal strength indication (RSSI) to obtain the human’s location, it does not require a powerful processor, complex algorithms, or a high-capacity battery. It also does not need a PDR system, which may generate certain timing issues. In addition, because it uses relative coordinates, it does not incur a spatial constraint. Through a smart belt with an IMU, it can detect and synchronize with a human’s motions. In addition, it uses an LF signal’s unique wake-up patterns to identify the robot’s master. In brief, it can identify the master and their location and motion in real-time using simple algorithms that utilize LF wireless communication and an IMU.

3. Detailed Design

3.1. Basic Concept

Figure 2 shows the overall setup of the Pet-Bot system. It consists of wearable devices, a smart watch and belt, and a mobile robot. If a user wearing a smart watch and belt approaches the mobile robot, an LF signal having a unique wake-up pattern is periodically burst from the robot. If the user is authorized, the user’s smart belt wakes up and then connects with the robot using Bluetooth low energy (BLE) communication, and services are then offered to the user. If the user selects a leading mode, the mobile robot obtains the user’s location and motion information from the smart belt. The user’s location is detected using the LF signal’s RSSI, and the motion is recognized through the IMU in the smart belt. Finally, using the information regarding the user’s location and motion, the robot can accompany the user.

Figure 2. Overall setup of the Pet-Bot system.

The smart watch in this system is used to give commands whenever the robot requires them, namely in an exception situation and in remote-control mode. When the robot detects an obstacle in front of it in leading mode, an exception situation occurs. In both cases, the user has to provide the direction the robot is to move toward. To command the robot, we use a smart watch gesture recognition function. Using this function, the user is able to command the robot in a particular direction in a very intuitive way because the user simply points toward the direction where the robot is to move. In this paper, we


use the seven basic gesture patterns shown in Table 1. Through a combination of gestures, the robot can even move in a diagonal direction. The principle of this function is based on analyzing the pattern of accelerometer values.

Table 1. Basic patterns for gesture recognition. (The Symbol and Motion columns of the original table are graphical and are not reproduced here.)

Command | Description
Circle | Cancel (return to command-wait mode)
Forward | Move forward
Backward | Move backward
Up | Rotate counter-clockwise 90°
Down | Rotate clockwise 90°
Left | Move left
Right | Move right

Because a low-frequency band has high transmissivity, such a signal can transmit through obstacles. Thus, an LF signal can be used for obtaining the distance and direction from the object. This technique is widely used in smart key systems for vehicles. Figure 3 shows the protocol of the LF wireless communication used in the Pet-Bot system. As can be seen, there are only 4-byte data sections that can be filled with user-defined data. Such sections are insufficient to contain all of the user’s location and motion information. Moreover, the robot has only two LF transmit antennas, and the wearable devices include only LF receivers. Therefore, we additionally use RF BLE wireless communication to exchange the needed data.

Figure 4 shows a state diagram of a mobile robot. When the robot is first turned on, it enters into a waiting-user sequence and sends out an LF signal periodically. When a user wearing an authorized smart belt is within 1 m of the robot, the robot identifies whether the user is its master. If the user is its master, it enters a command-waiting state. In this state, the user can select the main operation: remote-control mode, leading mode, or following mode. If the main operation is stopped, the robot returns to a command-waiting state. If the robot is far from the user, it enters into an emergency state. In this type of situation, the robot sends an alert message to the smart belt. In leading mode, when the robot detects any obstacles to the front, the robot will change to remote-control mode. After avoiding the obstacle, if the robot receives a return command, it will change back to the previous mode.

Figure 3. Protocol of LF wireless communication in Pet-Bot system. (a) LF signal wave; (b) message format for LF wireless communication.
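The message format of Figure 3 can also be summarized as a C structure. Only the 4-byte user-defined data section is specified in the text; the other field names and widths below are illustrative assumptions.

#include <stdint.h>

/* Sketch of one LF frame as described in the text. Only the 4-byte
 * user-data section is stated in the paper; the remaining fields and
 * their widths are assumptions based on Figure 3. */
typedef struct {
    uint8_t  preamble;        /* carrier burst for receiver settling (assumed) */
    uint32_t wakeup_pattern;  /* unique pattern matched by the LF receiver     */
    uint8_t  user_data[4];    /* the 4-byte user-defined data section          */
    uint8_t  crc;             /* integrity check (assumed)                     */
} lf_frame_t;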


Figure 4. State diagram of a mobile robot.
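The transitions of Figure 4 can be encoded as a small state machine. The following C sketch follows the transitions described in the text; the state and event names are ours, and for brevity the return command always restores leading mode, whereas the robot actually restores whichever mode it was in previously.

typedef enum {
    ST_WAITING_USER,    /* bursting LF signal, no master identified yet */
    ST_COMMAND_WAIT,    /* master identified, waiting for an operation  */
    ST_LEADING,
    ST_FOLLOWING,
    ST_REMOTE_CONTROL,
    ST_EMERGENCY        /* user out of range; alert sent to smart belt  */
} bot_state_t;

typedef enum {
    EV_MASTER_FOUND, EV_SELECT_LEAD, EV_SELECT_FOLLOW, EV_SELECT_REMOTE,
    EV_STOP, EV_USER_LOST, EV_OBSTACLE, EV_RETURN
} bot_event_t;

bot_state_t bot_step(bot_state_t s, bot_event_t e)
{
    switch (e) {
    case EV_MASTER_FOUND:  return (s == ST_WAITING_USER) ? ST_COMMAND_WAIT : s;
    case EV_SELECT_LEAD:   return (s == ST_COMMAND_WAIT) ? ST_LEADING : s;
    case EV_SELECT_FOLLOW: return (s == ST_COMMAND_WAIT) ? ST_FOLLOWING : s;
    case EV_SELECT_REMOTE: return (s == ST_COMMAND_WAIT) ? ST_REMOTE_CONTROL : s;
    case EV_STOP:          return ST_COMMAND_WAIT;   /* main operation stopped  */
    case EV_USER_LOST:     return ST_EMERGENCY;      /* robot far from the user */
    case EV_OBSTACLE:      return (s == ST_LEADING) ? ST_REMOTE_CONTROL : s;
    case EV_RETURN:        /* simplified: the paper restores the previous mode */
                           return (s == ST_REMOTE_CONTROL) ? ST_LEADING : s;
    }
    return s;
}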

3.2. Detection of its Master

Figure 5 shows the detection of the robot’s master. Once the mobile robot is turned on, it sends LF signals every 3 s. If the user is within 1 m of the robot, their smart belt can receive the robot’s LF signal. The LF signal has its own wake-up pattern, and the LF receiver maintains this wake-up pattern in its register. Therefore, when the smart belt receives an LF signal, it compares the received pattern with its own pattern in the register. After confirming the pattern, it generates an interrupt to wake up the main processor. It then sends a reply to receive services. Through these LF signals, the mobile robot can identify its master [16]. In addition, smart watch authorization is also checked in the same manner when needed.

Figure 5. Master recognition.
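On the smart-belt side, this master-detection flow amounts to handling the wake-up interrupt raised by the LF receiver after a successful pattern match, then replying so that the robot can identify its master. A minimal C sketch follows; the interrupt handler name and the helpers lf_read_rssi() and ble_send_reply() are hypothetical.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical helpers: read the RSSI of the received LF burst and send
 * the reply over the BLE link. */
extern uint8_t lf_read_rssi(void);
extern void    ble_send_reply(uint8_t rssi);

static volatile bool lf_wakeup_pending = false;

/* The LF receiver compares the received pattern against the pattern held
 * in its register and raises this interrupt only on a match, so reaching
 * here already implies an authorized wake-up pattern. */
void LF_WAKEUP_IRQHandler(void)
{
    lf_wakeup_pending = true;            /* defer the work out of the ISR */
}

void main_loop_poll(void)
{
    if (lf_wakeup_pending) {
        lf_wakeup_pending = false;
        ble_send_reply(lf_read_rssi());  /* reply so the robot can identify its master */
    }
}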

3.3. User-Location Identification

To accompany a human, the mobile robot must know the user’s location. The user’s location information is available from the LF signal’s RSSI [17]. In other words, the mobile robot can calculate the user’s distance and direction using the LF signal. To identify the user’s location, it is first necessary


to make a table that maps the RSSI signal strength to the distance. Once the mobile robot begins the leading or following services, it sends LF signals every 350 ms. As soon as the user’s smart belt receives an LF signal, it obtains the RSSI values from the LF receiver and sends the values to the mobile robot through RF BLE wireless communication. Figure 6 shows how the user’s location information is obtained. The mobile robot is shown on the left side of Figure 6, and the right side describes the sequence used to obtain the user’s location information. Two LF transmitter antennas are placed vertically side-by-side, 20 cm apart. They each send LF signals in turn; the mobile robot can then obtain the RSSI value from the user’s smart belt through RF communication.

Figure 6. Method for obtaining a user’s location information.
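The RSSI-to-distance mapping described above can be realized as a calibrated lookup table. In the following C sketch the calibration points are placeholders (the real table is built from measurements), and linear interpolation between points is our assumption rather than a detail stated in the paper.

/* Sketch of the RSSI-to-distance table. RSSI decreases with distance, so
 * the table is sorted by descending RSSI. Values below are placeholders. */
typedef struct { float rssi; float dist_cm; } cal_point_t;

static const cal_point_t cal_table[] = {
    { 200.0f,  20.0f }, { 160.0f,  50.0f },
    { 120.0f, 100.0f }, {  80.0f, 150.0f }, {  40.0f, 200.0f },
};
#define N_CAL (sizeof cal_table / sizeof cal_table[0])

float rssi_to_distance(float rssi)
{
    if (rssi >= cal_table[0].rssi)       return cal_table[0].dist_cm;
    if (rssi <= cal_table[N_CAL-1].rssi) return cal_table[N_CAL-1].dist_cm;

    for (unsigned i = 1; i < N_CAL; i++) {
        if (rssi >= cal_table[i].rssi) { /* rssi lies between points i-1 and i */
            float t = (rssi - cal_table[i].rssi)
                    / (cal_table[i-1].rssi - cal_table[i].rssi);
            return cal_table[i].dist_cm
                 + t * (cal_table[i-1].dist_cm - cal_table[i].dist_cm);
        }
    }
    return cal_table[N_CAL-1].dist_cm;
}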

We created an RSSI value table using the distance, and thus we can determine the distance from the RSSI value. The robot then obtains the distance to the left antenna ($d_l$) and right antenna ($d_r$). Through these two distances and the distance between the two antennas ($d_W$), a triangle is formed. By determining the condition of the triangle, we judge whether the triangle is formed:

$$d_B + d_C > d_A \quad (d_A \text{ is the longest side}) \qquad (1)$$

The coordinates (x, y) of the user’s location are then calculated for use by the mobile robot’s relative coordinate system by utilizing a trigonometric function. Equations (2) and (3) indicate how the user’s location is calculated. Here, $\alpha$ is the angle contained between sides of length $d_l$ and $d_W$, x and y are the coordinates of the user’s location, d is the distance between the robot and the user, and finally, $\theta$ is the direction of the user, which represents the angle away from the current direction:

$$\alpha = \cos^{-1}\left(\frac{d_l^2 + d_W^2 - d_r^2}{2 d_l d_W}\right), \quad x = d_l \cos\alpha - \frac{d_W}{2}, \quad y = d_l \sin\alpha \qquad (2)$$

$$d = \sqrt{x^2 + y^2}, \quad \theta = \tan^{-1}\left(\frac{x}{y}\right) \qquad (3)$$
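Equations (1)–(3) translate directly into code. The following C function is a straightforward transcription: it rejects readings that cannot form a triangle and otherwise returns the user’s relative coordinates, distance, and direction.

#include <math.h>
#include <stdbool.h>

/* Transcription of Equations (1)-(3): locate the user from the distances
 * to the left antenna (dl), the right antenna (dr), and the antenna
 * spacing (dW). Returns false if the three lengths form no triangle. */
bool locate_user(float dl, float dr, float dW,
                 float *x, float *y, float *d, float *theta)
{
    /* Equation (1): the two shorter sides must sum to more than the longest */
    float longest = fmaxf(dl, fmaxf(dr, dW));
    if ((dl + dr + dW) - longest <= longest)
        return false;

    /* Equation (2) */
    float alpha = acosf((dl * dl + dW * dW - dr * dr) / (2.0f * dl * dW));
    *x = dl * cosf(alpha) - dW * 0.5f;
    *y = dl * sinf(alpha);

    /* Equation (3) */
    *d = sqrtf((*x) * (*x) + (*y) * (*y));
    *theta = atanf(*x / *y);   /* angle away from the forward (y) axis */
    return true;
}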


Figure 7 shows the sequence diagram for finding the user’s location. The system includes two parts: the mobile robot and a smart belt. Once the mobile robot starts operating, it finds its own master. If the master is within 1 m of the robot, the LF receiver in the smart belt can receive the LF signals. If the LF signal’s wake-up pattern is identical to the smart belt’s, the main processor wakes up and carries out a BLE connection to provide service. The user can then select the main operation: leading, following, remote control, or other service. If the robot is in leading mode, it sends LF signals periodically to identify the human’s location. As soon as the smart belt receives LF signals from the two antennas, it calculates the distance based on the RSSI value and sends the distance to the mobile robot. Finally, the mobile robot adjusts itself to accompany its master through the user-location information.

The user’s location is reliable when the user is within 2 m of the robot; if the distance between them is greater than 2 m, the LF signal becomes too weak and the RSSI value cannot be calculated. Therefore, it is necessary to consistently synchronize with the human in order to avoid missing them after a sudden movement.

Figure 7. Sequence diagram for finding the user’s location.

3.4. User-Motion Recognition

The robot can recognize two states of user motion: movement and rotation. A movement state occurs when the human is standing, walking, or running. A rotation state relays information on the rotation, angular speed, and accumulated angle of the human. During a movement state, the mobile robot must trace the human even if the human is running quickly or suddenly stops. Otherwise, it may lose track of, or even run into, the human.

Figure 8 shows how the user’s movement state is recognized through the accelerometer. We use the smart belt’s accelerometer attached to the user’s waist to determine their state of movement. As shown in Figure 8, when the human moves, an oscillation occurs on the z-axis, which causes a wave in the accelerometer. In Table 2, we classify the user’s movements by using the amplitude of the accelerometer value, as well as its periodicity. The periodicity is defined by checking the number of sign changes of the values that occur within a valid time period. If the periodicity indicates that the user performs a periodic action, then the smart belt applies the signal magnitude area (SMA). The SMA accumulates the accelerometer values within a programmer-defined time. It can be used to classify a user’s movements. Table 2 shows the classification rules for user movements based on the periodicity and SMA. Through this process, we can classify a user’s movements accurately.
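The periodicity-plus-SMA rule described above (and tabulated in Table 2 below) can be sketched in C as follows. The window length, the sign-change count that qualifies as “periodic”, and the SMA boundaries separating rest, walk, and run are illustrative assumptions; the paper determines its actual thresholds experimentally.

#include <math.h>

typedef enum { M_STOP, M_REST, M_WALK, M_RUN } move_state_t;

#define N_SAMPLES   64     /* samples in the valid time period (assumed)        */
#define MIN_CHANGES 4      /* sign changes needed to call it periodic (assumed) */
#define SMA_WALK    2.0f   /* SMA boundary between rest and walk (assumed)      */
#define SMA_RUN     4.0f   /* SMA boundary between walk and run (assumed)       */

/* Classify one window of gravity-removed z-axis acceleration samples.
 * Periodicity = number of sign changes in the window; SMA = accumulated
 * magnitude over the window (dt is the sample period in seconds). */
move_state_t classify_movement(const float az[N_SAMPLES], float dt)
{
    int sign_changes = 0;
    float sma = 0.0f;

    for (int i = 0; i < N_SAMPLES; i++) {
        if (i > 0 && (az[i] > 0.0f) != (az[i-1] > 0.0f))
            sign_changes++;
        sma += fabsf(az[i]) * dt;        /* signal magnitude area */
    }

    if (sign_changes < MIN_CHANGES) return M_STOP;  /* non-movement */
    if (sma >= SMA_RUN)             return M_RUN;
    if (sma >= SMA_WALK)            return M_WALK;
    return M_REST;
}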


Figure 8. Accelerometer shaking when the user is running.

Table 2. Classification rules for user movements.

Periodicity | SMA | State
Movement | 3 | Run
Movement | 2 | Walk
Movement | 1 | Rest
Non-movement | X | Stop

The next motion state is a rotation. When leading the user, the mobile robot must be aware of the human’s rotation information, because it must mimic the user’s direction when the user turns. Figure 9 shows the user-rotation recognition for a leading task. If the user turns by θ, the mobile robot must correct its movement direction. Because the robot uses Mecanum wheels [18], it is not necessary for it to rotate and then move; it can simply move along the arc of a circle.

Figure 9. Detection of user rotation.

The user rotation information is found through the gyro sensor in the smart belt. The gyro sensor value is sampled at 70 ms intervals. It minimizes the amount of noise by using a moving-average filter. The robot must know the rotation state, angular speed, and accumulated angle of the human to synchronize with their rotation. Using this information, the mobile robot can recognize the human’s rotation and accompany the human in a leading mode.
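A moving-average filter over the 70 ms gyro samples, together with angle accumulation, can be sketched as below. The window length is an assumption; only the 70 ms sampling interval is stated in the text.

#define MA_WIN 8                 /* moving-average window length (assumed) */
#define SAMPLE_PERIOD_S 0.070f   /* gyro sampled every 70 ms */

/* Zero-initialize a gyro_filter_t before first use. */
typedef struct {
    float buf[MA_WIN];           /* ring buffer of recent samples */
    int   idx;
    float sum;
    float accumulated_deg;       /* accumulated rotation angle */
} gyro_filter_t;

/* Feed one raw gyro sample (deg/s); returns the filtered angular speed
 * and accumulates the rotation angle, as the smart belt does. */
float gyro_update(gyro_filter_t *f, float raw_dps)
{
    f->sum -= f->buf[f->idx];          /* drop the oldest sample */
    f->buf[f->idx] = raw_dps;
    f->sum += raw_dps;
    f->idx = (f->idx + 1) % MA_WIN;

    float filtered = f->sum / MA_WIN;  /* moving average suppresses noise */
    f->accumulated_deg += filtered * SAMPLE_PERIOD_S;
    return filtered;
}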


3.5. Control System for Leading or Following a Human

When leading or following the human, the goal of the robot control is to maintain a specific distance and direction from the user. To do this, the user location and motion information are required, which are attainable from the smart belt and LF wireless communication. Figure 10 shows our control region. The mobile robot has a different directional-angle resolution depending on how the antennas are located. Thus, it has a ±25° “reliable region.” In other words, the robot must maintain a specific distance (1 m), and stay within this reliable region. We set up this reliable region based on our experimental results. As shown in Figure 11, distortion of the LF signal worsens outside of this region.

Figure 10. Control region of the mobile robot.

Figure 11. Experiment result in searching for a reliable region.

Three factors control the robot: Vx, the front and rear speed; Vy, the left and right speed; and W, the angular speed. The robot uses Mecanum wheels, which, when combined with the speed factors above, allow the robot to move quite freely without having to rotate. Figure 12 describes the control factors and their signs.

Figure 12. Control factors of the mobile robot.
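The paper does not spell out the wheel-level mapping, but for a four-wheel Mecanum base [18] the standard inverse kinematics converts the control factors (Vx, Vy, W) into individual wheel speeds, as sketched below. The geometry constants are placeholders, and sign conventions vary with how the rollers are mounted.

/* Standard Mecanum inverse kinematics: map the control factors
 * (vx front/rear, vy left/right, w angular) to four wheel speeds.
 * Half wheelbase and half track width are assumed placeholder values. */
#define HALF_WHEELBASE 0.20f   /* m, assumed */
#define HALF_TRACK     0.15f   /* m, assumed */

void mecanum_wheel_speeds(float vx, float vy, float w, float out[4])
{
    const float k = HALF_WHEELBASE + HALF_TRACK;
    out[0] = vx - vy - k * w;   /* front-left  */
    out[1] = vx + vy + k * w;   /* front-right */
    out[2] = vx + vy - k * w;   /* rear-left   */
    out[3] = vx - vy + k * w;   /* rear-right  */
}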

Figure 13 shows the control system of the mobile robot. It can obtain the user’s location and motion information from the smart belt. This user information is the input of the control system. The first controller is the distance and speed controller. To maintain a specific distance (1 m), it uses automobile adaptive cruise control (ACC) [19]. When the distance is far from the target distance, it

Sensors 2016, 16, 842 Sensors 2016, 16, 842

10 of 15 10 of 15

operates as a distance controller. However, when it is near the target distance, it controls the speed to operates as a distance controller. However, when it is near the target distance, it controls the speed keep pace with the human. to The keepnext pace controller with the human. varies with the operation mode. In following mode, it uses the rotation The to next controller varies with theif operation followingregion, mode,ititturns uses to thethe rotation controllers correct the direction, e.g., the user ismode. in theIn right-side right to controllers to correct the direction, e.g., if the user is in the right-side region, it turns to the right to keep the user within the reliable region. In leading mode, on the other hand, the controller depends keep the user within the reliable region. In leading mode, on the other hand, the controller depends on the user’s rotation state. If the user is rotating, the robot synchronizes with the human rotation by on the user’s rotation state. If the user is rotating, the robot synchronizes with the human rotation by utilizing a moving arc. If the user is not rotating, and the direction region is not within the reliable utilizing a moving arc. If the user is not rotating, and the direction region is not within the reliable region, the robot moves left or right to correct its direction. The final step is to avoid walls or obstacles, region, the robot moves left or right to correct its direction. The final step is to avoid walls or obstacles, and check its speed to prevent a malfunction. The output of this control system is a control factor, Vx, and check its speed to prevent a malfunction. The output of this control system is a control factor, Vx, Vy,Vy, or or W.W. The robot operates factors. The robot operatesitsitsmotor motoraccording accordingto to these these control control factors.
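To illustrate the controller chain described above, the following is a rough sketch of the ACC-style distance/speed switch and the mode-dependent direction correction. The gains, the near-target band, and the UserState fields are illustrative assumptions, not the paper's tuned values.

```c
#include <math.h>

#define TARGET_DISTANCE 1.0  /* target separation from the user (m) */
#define NEAR_BAND       0.15 /* assumed band around the target where we match speed (m) */

typedef enum { MODE_FOLLOWING, MODE_LEADING } DriveMode;

/* Simplified user state derived from the smart belt (illustrative fields). */
typedef struct {
    double distance;  /* measured separation (m), from LF RSSI        */
    double direction; /* bearing of the user vs. robot heading (rad)  */
    double speed;     /* user's walking speed (m/s), from the IMU     */
    int    rotating;  /* nonzero while the belt reports a user turn   */
    double turn_rate; /* user's rotation rate (rad/s)                 */
} UserState;

/* ACC-like controller: far from the target it regulates distance,
 * near the target it keeps pace with the human. Gains are assumed. */
double control_vx(const UserState *u)
{
    double err = u->distance - TARGET_DISTANCE;
    if (fabs(err) > NEAR_BAND)
        return 0.8 * err; /* distance control: close (or open) the gap */
    return u->speed;      /* speed control: match the user             */
}

/* Mode-dependent direction correction, as described above. */
void control_direction(const UserState *u, DriveMode mode,
                       double *vy, double *w)
{
    *vy = 0.0;
    *w  = 0.0;
    if (mode == MODE_FOLLOWING) {
        /* Rotate toward the user to keep them in the reliable region. */
        *w = 1.2 * u->direction;
    } else if (u->rotating) {
        /* Leading mode: synchronize with the user's turn via a moving arc. */
        *w = u->turn_rate;
    } else {
        /* Leading mode, user not rotating: strafe to re-center. */
        *vy = 0.5 * u->direction;
    }
}
```

After this stage, the obstacle-avoidance and speed-check steps would clamp or override Vx, Vy, and W before they reach the motors.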

Figure 13. Control system of a mobile robot when following or leading a human. Figure 13. Control system of a mobile robot when following or leading a human.

4. Implementation and Evaluation

4.1. Mobile Robot Hardware

Figure 14 shows the hardware configuration of the mobile robot system. There are two parts: the wearable devices (smart watch and belt) and the mobile robot. The wearable devices consist of an ARM Cortex-M3 based MCU (1024 KB flash, 128 KB RAM), an LF receiver (low-power LF wake-up), a BLE module, and an IMU (three-axis independent acceleration and angular rate channels). In addition, because wearable devices have a low-capacity battery, the components used in the wearable devices support low power consumption. The mobile robot consists of an ARM Cortex-M4 based ESTK board, which is a development kit made by our laboratory (1024 KB flash, 128 KB SRAM, fast floating-point calculation), two LF transmitter antennas (with a long reading distance), an RF module (low power consumption), four motors with Mecanum wheels, ultrasonic sensors, and a gyro sensor. Figure 15 shows the prototypes of the mobile robot and wearable devices used. A performance test was executed using these prototypes; the user wore a smart watch and belt. We conducted three tests for the evaluation: location identification, motion recognition, and trajectory. The first two tests evaluated the user information obtained. The last test was conducted to observe the mobile robot's following or leading capabilities using the control system.


Figure 14. Hardware configuration of the mobile robot system.

Figure 15. Prototypes of a mobile robot system: (a) mobile robot and (b) wearable devices (smart belt and watch).

4.2. User-Location Identification Test

Figure 16 shows the results of the location-identification test. The empty green circle is the actual human location. The colored red circle is the measured location. The location was measured while varying the distance between the LF transmitters and the LF receiver from 25 cm to 200 cm. We measured the RSSI values of the LF signal 50 times at each point. Table 3 shows that the results were a 5.10-cm root mean square error (RMSE) in distance and a 6.52° RMSE in direction.
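The RMSE values reported in Table 3 follow the standard definition; a minimal sketch of the computation over the 50 samples collected at each point might look like this (the array names are ours):

```c
#include <math.h>

/* Root mean square error between measured and actual values,
 * e.g., the 50 distance estimates collected at each test point. */
double rmse(const double *measured, const double *actual, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double e = measured[i] - actual[i];
        sum += e * e;
    }
    return sqrt(sum / n);
}
```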

Figure 16. Location identification test result, with varying distances from the user.


Table 3. RMSE of measurement experiment.

X (cm)    Y (cm)    Distance (cm)    Direction (°)
15.57     8.96      5.10             6.52

4.3. User-Motion Recognition Test

Figure 17 shows the setup for the user-motion recognition test. The robot is in leading mode, and the distance between the robot and the human is 100 cm; the angular speed is fixed. The goal of this test is to verify the smart belt's rotation-recognition capability and synchronization performance. Therefore, the human rotates by a given angle (−90°, −45°, 45°, and 90°), and we measure the resulting angle of the robot's arc movement. Table 4 shows the results of this test. Each case has a standard deviation within 4.0° and a level of precision of over 94%, showing that the robot can be accurately controlled.
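Because the separation is held at 100 cm while the robot sweeps around the user, the robot's linear and angular speeds are coupled by the arc constraint v = R·ω. A minimal sketch of this coupling follows; the command interface and the duration-based angle integration are our own illustrative framing, not the paper's implementation.

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SEPARATION 1.0 /* fixed robot-user distance in this test (m) */

/* Given the user's rotation rate (rad/s) from the smart belt's IMU,
 * produce a coupled (speed, angular speed) command that moves the robot
 * along an arc of radius SEPARATION centered on the user. Whether the
 * linear part maps to Vx or Vy depends on the robot's heading convention. */
void arc_command(double user_turn_rate, double *v, double *w)
{
    *w = user_turn_rate;    /* match the user's angular speed */
    *v = SEPARATION * (*w); /* arc constraint: v = R * omega  */
}

/* Integrating w over the duration of the turn yields the arc angle that
 * Table 4 compares against the targets (-90, -45, 45, and 90 degrees). */
double arc_angle_deg(double w, double duration_s)
{
    return w * duration_s * (180.0 / M_PI);
}
```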

Figure 17. Test environment for user-motion recognition and synchronization.

Table 4. Results of user-motion recognition and synchronization test.

Target (°)    Average (°)    STDEV (°)    Accuracy (%)
−90           −87.4          3.2          96.04
−45           −43.8          3.6          94.31
45            45.16          3.0          94.67
90            87.56          3.5          95.78

4.4. Trajectory Test

As mentioned earlier, the mobile robot is controlled through the user location and motion information. To control the robot effectively, an independent control system was implemented. Therefore, we conducted four trajectory tests to see whether the robot can be controlled stably. In the following figures, the user is marked as a gray arrow or a light-blue circle. The mobile robot is marked as a red square for a leading task and as a green triangle for a following task.

The first two cases tested the control of the straight and rotating movements. The user moved straight, turned to the right or left at 60°, and continued in that direction. The tests were carried out using each driving mode. We can see in Figure 18a,b that the robot is controlled stably. For the leading task, when the user turned, the robot followed a moving arc, whereas for the following task, it simply rotated at a misaligned angle. Figure 18c shows the performance of lateral movement for a leading task. Figure 18d shows the results of the mode-change test. The user started with the robot in following mode and walked straight ahead; the user then switched the robot to leading mode by turning at a particular point and walking in the opposite direction. As shown in the figures, the system performed well by allowing all the major functions, i.e., identification of the user's location and motion, and independent control of the robot, to interact with each other. A demo video is available at [20].

Figure 18. Results of the trajectory test: (a) turning right at 60°; (b) turning left at 60°; (c) zigzag; and (d) mode change.

5. Conclusions

In this paper, we have proposed a mobile robot system, called Pet-Bot, that can accompany a user and does not require a specific control interface, through self-organizing interaction with a wearable device that provides the user's motion information. To identify an authorized user and their location by the Pet-Bot itself, we use LF wireless communication. The wake-up pattern of the LF protocol is used to identify an authorized user. The authorization time is very fast compared with image-processing methods, and it provides very accurate authorization results. In addition, we simply use the RSSI of an LF signal to obtain precise proximity information regarding the user's location, even though we use very low performance wearable devices; thus, we can obtain the proximity information of the user's location quickly and accurately, as shown in the evaluation tests. In this system, the robot can synchronize with a user's motion by utilizing the smart belt's IMU. Therefore, the robot can react to changes in the user's motion, e.g., running, walking, stopping, and rotating, without any specific controls. This system can provide smooth leading or following driving motion. In the evaluation, we observed an identification performance with greater than 94% accuracy. Through a trajectory test, we proved that the Pet-Bot can be effectively controlled. We also observed a synchronization performance with greater than 94% accuracy. The designed control system performs well enough to lead or follow the user smoothly. As a result, we expect the Pet-Bot system to be applicable in any setting that requires carriage services. In the future, the Pet-Bot's reliability against failures caused by a distorted LF signal must be improved, and a swarm or combination of robots will be needed to provide more advanced carriage services.

Acknowledgments: This work was supported by an Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) [No. 10041145, Self-Organized Software platform (SoSp) for Welfare Devices].

Author Contributions: Kim, M.S. designed and conducted the experiments, analyzed the data, and wrote the revised paper; Lee, J.G. wrote the original paper and designed and conducted the experiments in the original paper; and Kang, S.J. supervised the analysis and edited the manuscript.

Conflicts of Interest: The authors declare no conflicts of interest.

References

1. Yoshimi, T.; Nishiyama, M.; Sonoura, T.; Nakamoto, H.; Tokura, S.; Sato, H.; Ozaki, F.; Matsuhira, N.; Mizoguchi, H. Development of a Person Following Robot with Vision Based Target Detection. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 5286–5291.
2. Xing, G.; Tian, S.; Sun, H.; Liu, W.; Liu, H. People-following system design for mobile robots using kinect sensor. In Proceedings of the 2013 25th Chinese Control and Decision Conference, Guiyang, China, 25–27 May 2013; pp. 3190–3194.
3. Petrovic, E.; Leu, A.; Ristic-Durrant, D.; Nikolic, V. Stereo Vision-Based Human Tracking for Robotic Follower. Int. J. Adv. Robot. Syst. 2013, 10, 1.
4. Munaro, M.; Menegatti, E. Fast RGB-D people tracking for service robots. Auton. Robots 2014, 37, 227–242.
5. Hoshino, F.; Morioka, K. Human following robot based on control of particle distribution with integrated range sensors. In Proceedings of the 2011 IEEE/SICE International Symposium on System Integration (SII), Kyoto, Japan, 20–22 December 2011; pp. 212–217.
6. Morioka, K.; Lee, J.H.; Hashimoto, H. Human-following mobile robot in a distributed intelligent sensor network. IEEE Trans. Ind. Electron. 2004, 51, 229–237.
7. Choi, B.S.; Lee, J.W.; Lee, J.J.; Park, K.T. A Hierarchical Algorithm for Indoor Mobile Robot Localization Using RFID Sensor Fusion. IEEE Trans. Ind. Electron. 2011, 58, 2226–2235.
8. Feng, G.; Guo, X.; Wang, G. Infrared motion sensing system for human-following robots. Sens. Actuators A Phys. 2012, 185, 1–7.
9. Saegusa, S.; Yasuda, Y.; Uratani, Y.; Tanaka, E.; Makino, T.; Chang, J.Y. Development of a Guide-Dog Robot: Leading and Recognizing a Visually-Handicapped Person using a LRF. J. Adv. Mech. Des. Syst. Manuf. 2010, 4, 194–205.
10. Jimenez, A.; Seco, F.; Prieto, C.; Guevara, J. A comparison of Pedestrian Dead-Reckoning algorithms using a low-cost MEMS IMU. In Proceedings of the 2009 IEEE International Symposium on Intelligent Signal Processing, Budapest, Hungary, 26–28 August 2009; pp. 37–42.
11. Godha, S.; Lachapelle, G. Foot mounted inertial system for pedestrian navigation. Meas. Sci. Technol. 2008, 19, 075202.
12. Nilsson, J.O.; Skog, I.; Handel, P.; Hari, K.V.S. Foot-mounted INS for everybody—An open-source embedded implementation. In Proceedings of the 2012 IEEE/ION Position, Location and Navigation Symposium, Myrtle Beach, SC, USA, 23–26 April 2012; pp. 140–145.
13. Goyal, P.; Ribeiro, V.J.; Saran, H.; Kumar, A. Strap-Down Pedestrian Dead-Reckoning System. In Proceedings of the 2011 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Guimaraes, Portugal, 21–23 September 2011.
14. Feliz Alonso, R.; Zalama Casanova, E.; Gómez García-Bermejo, J. Pedestrian tracking using inertial sensors. J. Phys. Agents 2009, 3, 35–42.
15. Castaneda, N.; Lamy-Perbal, S. An improved shoe-mounted inertial navigation system. In Proceedings of the 2010 International Conference on Indoor Positioning and Indoor Navigation, Zurich, Switzerland, 15–17 September 2010; pp. 1–6.
16. Demirkol, I.; Ersoy, C.; Onur, E. Wake-up receivers for wireless sensor networks: Benefits and challenges. IEEE Wirel. Commun. 2009, 16, 88–96.
17. Saxena, M.; Gupta, P.; Jain, B.N. Experimental analysis of RSSI-based location estimation in wireless sensor networks. In Proceedings of the 2008 3rd International Conference on Communication Systems Software and Middleware and Workshops (COMSWARE '08), Bangalore, India, 6–10 January 2008; pp. 503–510.
18. Diegel, O.; Badve, A.; Bright, G.; Potgieter, J.; Tlale, S. Improved mecanum wheel design for omni-directional robots. In Proceedings of the 2002 Australasian Conference on Robotics and Automation, Auckland, New Zealand, 27–29 November 2002; pp. 117–121.
19. Sivaji, V.V.; Sailaja, M. Adaptive cruise control systems for vehicle modeling using stop and go manoeuvres. Int. J. Eng. Res. Appl. 2013, 3, 2453–2456.
20. Pet-bot: A Human Following or Leading Mobile Robot. Available online: https://www.youtube.com/watch?v=yLQVfH7UP-w (accessed on 25 April 2016).

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
