Article

Sensors 2016, 16, 1672; doi:10.3390/s16101672

Vision/INS Integrated Navigation System for Poor Vision Navigation Environments

Youngsun Kim 1 and Dong-Hwan Hwang 2,*

1 Payload Electronics Team, Korea Aerospace Research Institute, Daejon 34133, Korea; [email protected]
2 Department of Electronics Engineering, Chungnam National University, Daejon 34134, Korea
* Correspondence: [email protected]; Tel.: +82-42-821-5670

Academic Editors: Gabriel Oliver-Codina, Nuno Gracias and Antonio M. López
Received: 8 August 2016; Accepted: 7 October 2016; Published: 12 October 2016

Abstract: In order to improve the performance of an inertial navigation system, many aiding sensors can be used. Among these aiding sensors, a vision sensor is of particular note due to its benefits in terms of weight, cost, and power consumption. This paper proposes an inertial and vision integrated navigation method for poor vision navigation environments. The proposed method uses focal plane measurements of landmarks in order to provide position, velocity and attitude outputs even when the number of landmarks on the focal plane is not enough for navigation. In order to verify the proposed method, computer simulations and van tests are carried out. The results show that the proposed method gives accurate and reliable position, velocity and attitude outputs when the number of landmarks is insufficient.

Keywords: vision navigation system; inertial navigation system; integrated navigation; focal plane measurements; landmark

1. Introduction

The inertial navigation system (INS) is a self-contained dead-reckoning navigation system that provides continuous navigation outputs with high bandwidth and short-term stability. Due to its navigation characteristics, the accuracy of the navigation output degrades as time passes. In order to improve the performance of the INS, a navigation aid can be integrated into the INS. The GPS/INS integrated navigation system is one of the most generally used integrated navigation systems [1,2]. However, the GPS/INS integrated navigation system may not produce reliable navigation outputs, since the GPS signal is vulnerable to interference such as jamming and spoofing [3,4]. In recent years, many alternative navigation systems to GPS such as vision, radar, laser, ultrasonic sensors, UWB (Ultra-Wide Band) and eLoran (enhanced Long range navigation) have been studied in order to provide continuous, reliable navigation outputs [4].

Vision sensors have recently been used for navigation of vehicles such as cars, small-sized low-cost airborne systems and mobile robots due to their benefits in terms of weight, cost and power consumption [5–7]. Navigation using vision sensors can be classified into three methods [4,7]. The first method determines the position of the vehicle by comparing the measured image of a camera with the stored image or stored information of a map [8]. The second method, which is called landmark-based vision navigation, determines position and attitude by calculating directions to landmarks from the measured image of the landmarks [9,10]. The third method, called visual odometry, determines the motion of the vehicle from successive images of the camera [11]. Among these three methods, the landmark-based approach is known to have the advantages of bounded navigation parameter error and simple computation [7].


In order to integrate an inertial navigation system with a vision navigation system, several methods have been proposed [12–16]. The method in [12] uses gimbal angle and/or bearing information calculated from camera images. In this case, the integrated navigation method may not give an optimal navigation output since the inputs to the integration filter are processed from raw measurements from the vision sensor. When visual odometry is used for the integrated navigation system as in [13], the error of the navigation output from the vision navigation system increases with time. The integrated navigation methods proposed in [14–16] use the position and attitude, velocity or heading information from the vision navigation system. This integrated navigation method may not give a reliable navigation output when the number of landmarks in the camera image is not enough for the navigation output.

This paper proposes an inertial and vision integrated navigation method for poor vision environments, in which position and attitude outputs cannot be obtained from a vision navigation system due to the limited number of landmarks. The proposed method uses focal plane measurements of landmarks in the camera and INS outputs. Since there is no need to have a navigation output from the vision navigation system, the proposed method can give an integrated navigation output even when the number of landmarks in the camera is not enough for the navigation output. In addition to this, since the integration method uses raw measurements for the integration filter, the navigation output may have better performance. In Section 2, a brief description of landmark-based vision navigation is given. The proposed integration method is presented in Section 3. Results of computer simulations and vehicle experiments are given in Section 4. The concluding remarks and further studies are mentioned in Section 5.

2. Landmark-Based Vision Navigation

Vision navigation output is computed from the projected landmarks on the focal plane in landmark-based vision navigation [7,13,14]. Figure 1 shows projected landmarks on the focal plane when the pinhole camera model is adopted. The x_c axis of the camera frame is aligned with the optical axis of the camera. The y_c and z_c axes are in the horizontal and vertical directions of the focal plane, respectively. The focal plane is placed at a distance of the focal length, f, on the x_c axis.



Figure 1. Landmark measurements in the vision navigation. (The figure shows landmark positions P_k^c(X_k^c, Y_k^c, Z_k^c) in the camera frame and their projections p_k^c(f, u_k, v_k) on the focal plane, with the camera axes x_c, y_c and z_c.)

As shown in Figure 1, the landmark at position $P_k^c(X_k^c, Y_k^c, Z_k^c)$ is projected into the point $p_k^c(f, u_k, v_k)$ on the focal plane in the camera frame. Equations (1) and (2) represent the relationship between the measurements on the focal plane and the landmark coordinate values.

$u_k = f \dfrac{Y_k^c}{X_k^c}$  (1)

$v_k = f \dfrac{Z_k^c}{X_k^c}$  (2)

Equation (3) is the navigation equation to obtain the navigation output from the landmark measurement on the focal plane of the camera.

$P_k^n - P_u^n = r_k C_b^n C_c^b p_k^c$  (3)

where the subscript k denotes the index of landmarks and P_k^n is a known position vector of the kth landmark. b, c and n denote the body frame, the camera frame and the navigation frame, respectively. P_u^n is the vehicle's three-dimensional position vector in the navigation frame. r_k is the distance ratio of the projected landmark on the focal plane to the actual landmark in the camera frame. C_b^n and C_c^b are the direction cosine matrix from the body frame to the navigation frame and the direction cosine matrix from the camera frame to the body frame, respectively. Here, C_c^b is a constant matrix since the camera is fixed to the body.

It can be seen from Equation (3) that at least three measurements are required in order to determine a navigation output of six variables, which are the three-dimensional position and attitude [14]. In this paper, more than 0 and less than 3 landmarks are available in the poor vision environments.
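The geometry in Equations (1)–(3) can be summarized in a short sketch. The following Python snippet is an illustrative sketch only (not the authors' implementation); the function name, frame conventions and numeric values are assumptions.

```python
# A minimal sketch of the pinhole-camera measurement model in Equations (1)-(3).
import numpy as np

def project_landmark(P_k_n, P_u_n, C_n_b, C_b_c, f):
    """Project a landmark at P_k_n (navigation frame) onto the focal plane.

    P_u_n : vehicle position in the navigation frame
    C_n_b : direction cosine matrix, navigation -> body frame
    C_b_c : direction cosine matrix, body -> camera frame (constant camera mounting)
    f     : focal length
    Returns (u_k, v_k) per Equations (1) and (2).
    """
    # Line-of-sight vector expressed in the camera frame (rearranged Eq. (3))
    P_k_c = C_b_c @ C_n_b @ (np.asarray(P_k_n) - np.asarray(P_u_n))
    X, Y, Z = P_k_c
    u_k = f * Y / X          # Eq. (1)
    v_k = f * Z / X          # Eq. (2)
    return u_k, v_k

# Example with assumed values: camera aligned with the body axes,
# vehicle at the origin, landmark 100 m ahead and slightly offset.
u, v = project_landmark(P_k_n=[100.0, 5.0, -2.0], P_u_n=[0.0, 0.0, 0.0],
                        C_n_b=np.eye(3), C_b_c=np.eye(3), f=0.025)
print(u, v)   # focal-plane coordinates in the same length unit as f
```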

3. Vision/INS Integrated Navigation System

3.1. Vision/INS Integrated Navigation System

Figure 2 describes the proposed method of a vision/INS integrated navigation system. The inertial navigation system computes the vehicle's position (P_INS), velocity (V_INS) and attitude (Ψ_INS) from the outputs of the IMU (Inertial Measurement Unit) (Δv, Δθ). The vision navigation system gives the projected landmark measurements on the focal plane (u_k,VIS, v_k,VIS). The Kalman filter estimates the INS errors (δx_nav, ∇, ε) and the vision sensor errors (δu, δv, δf).

Figure 2. Proposed method of the vision/INS integrated navigation system.

3.2. Process Model of the Kalman Filter

The INS error equation can be obtained by perturbing the navigation equation of the INS [17]. The process model based on the INS error equation and the sensor error equation for the proposed method is given in Equation (4).

$\delta \dot{x}(t) = F(t)\,\delta x(t) + w(t), \quad w(t) \sim N(0, Q(t))$  (4)

where w(t) is the process noise vector with covariance Q(t). Equation (4) can be rewritten into Equation (5).


"

.

δx nav .

#

"

=

δx sen

F11 09×9

F12 09×9

#"

#

δxnav δxsen

"

+

wnav wsen

# (5)

where w_nav and w_sen are the navigation parameter error vector noise and the sensor error vector noise, respectively. The state vector δx is composed of the navigation parameter error vector δx_nav and the sensor error vector δx_sen. 0_m×n denotes an m by n zero matrix. The navigation parameter error vector is composed of the position error, velocity error and attitude error of the INS as given in Equation (6).

$\delta x_{nav} = \begin{bmatrix} \delta P_N & \delta P_E & \delta P_D & \delta V_N & \delta V_E & \delta V_D & \phi_N & \phi_E & \phi_D \end{bmatrix}^T$  (6)

where δP, δV, and ϕ are the position error, velocity error and attitude error expressed in the rotation vector, respectively. The subscripts N, E and D are the north, the east and the down axes in the navigation frame, respectively. The sensor error vector includes six inertial sensor errors and three vision sensor errors as in Equation (7).

$\delta x_{sen} = \begin{bmatrix} \nabla_x & \nabla_y & \nabla_z & \varepsilon_x & \varepsilon_y & \varepsilon_z & \delta u & \delta v & \delta f \end{bmatrix}^T$  (7)

where ∇ and ε are the accelerometer and the gyro error, respectively. δu and δv are the errors of the coordinate values on the focal plane and δf is the focal length error. The subscripts x, y and z denote the roll, pitch and yaw directions in the body frame, respectively. Submatrix F11 in Equation (5) is given in Equation (8).

$F_{11} = \begin{bmatrix} -\Omega_{en}^n & I_{3\times 3} & O_{3\times 3} \\ O_{3\times 3} & -(\Omega_{ie}^n + \Omega_{in}^n) & f^n\times \\ O_{3\times 3} & O_{3\times 3} & -\Omega_{in}^n \end{bmatrix}$  (8)

where Ω_en^n, Ω_ie^n and Ω_in^n are the skew-symmetric matrix of the vehicle's craft rate in the navigation frame, the skew-symmetric matrix of the earth rate in the navigation frame and the skew-symmetric matrix of the rotation rate of the navigation frame relative to the inertial frame represented in the navigation frame, respectively. f^n× is the skew-symmetric matrix of the vehicle's specific force in the navigation frame. Submatrix F12 in Equation (5) is given in Equation (9).

$F_{12} = \begin{bmatrix} O_{3\times 3} & O_{3\times 3} & O_{3\times 3} \\ C_b^n & O_{3\times 3} & O_{3\times 3} \\ O_{3\times 3} & -C_b^n & O_{3\times 3} \end{bmatrix}$  (9)

The accelerometer sensor error and the gyro error are modeled as random constants and are given in Equations (10) and (11), respectively.

$\dot{\nabla}^b = 0$  (10)

$\dot{\varepsilon}^b = 0$  (11)

The vision sensor errors are also modeled as random constants and are given in Equations (12)–(14).

$\dot{\delta u} = 0$  (12)

$\dot{\delta v} = 0$  (13)

$\dot{\delta f} = 0$  (14)
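For reference, the block structure of Equations (5) and (8)–(14) can be assembled as in the following sketch. This is an assumption-laden illustration (not the authors' code): the variable names are invented, the angular-rate and specific-force inputs are taken to come from the INS mechanization, and the sign convention of the velocity-error block follows the standard NED psi-angle INS error model.

```python
# A minimal sketch of assembling the 18-state system matrix F of Equation (5)
# from Equations (8) and (9); the sensor errors are random constants, Eqs. (10)-(14).
import numpy as np

def skew(w):
    """Skew-symmetric matrix such that skew(w) @ x == np.cross(w, x)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def system_matrix(omega_en_n, omega_ie_n, omega_in_n, f_n, C_b_n):
    O3 = np.zeros((3, 3))
    I3 = np.eye(3)
    # Equation (8): navigation-error dynamics (position, velocity, attitude)
    F11 = np.block([
        [-skew(omega_en_n), I3,                                   O3],
        [O3,                -skew(omega_ie_n) - skew(omega_in_n), skew(f_n)],
        [O3,                O3,                                   -skew(omega_in_n)],
    ])
    # Equation (9): mapping of accelerometer and gyro errors into the navigation errors
    F12 = np.block([
        [O3,     O3,     O3],
        [C_b_n,  O3,     O3],
        [O3,    -C_b_n,  O3],
    ])
    # Equations (10)-(14): random-constant sensor errors have zero dynamics
    F = np.zeros((18, 18))
    F[:9, :9] = F11
    F[:9, 9:] = F12
    return F
```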

3.3. Measurement Model of the Kalman Filter

The measurement equation for the Kalman filter is given in Equation (15).

$\delta z(t) = H(t)\,\delta x(t) + v(t), \quad v(t) \sim N(0, R(t))$  (15)

where H(t) is the observation matrix and v(t) is the measurement noise vector with covariance R(t). The measurement vector δz(t) is given in Equation (16).

$\delta z = \begin{bmatrix} \delta u_1 & \delta u_2 & \cdots & \delta u_n & \delta v_1 & \delta v_2 & \cdots & \delta v_n \end{bmatrix}^T$  (16)

where δu_k and δv_k denote the differences between the INS-based estimates and measurements on the focal plane for the kth landmark, and n denotes the number of the landmarks on the focal plane of the camera. The INS-based estimates for each element in Equation (16) are calculated from the position and attitude outputs of the INS and the position information of the landmarks. The observation matrix is given in Equation (17).

$H \equiv \begin{bmatrix} H_1 & H_2 & H_3 \end{bmatrix}$  (17)

Each sub-matrix in the observation matrix can be obtained from computing the Jacobian. The submatrix H1 is given in Equation (18).

$H_1 = \begin{bmatrix} \partial u_1/\partial x & \partial u_1/\partial y & \partial u_1/\partial z & O_{1\times 3} \\ \vdots & \vdots & \vdots & \vdots \\ \partial u_n/\partial x & \partial u_n/\partial y & \partial u_n/\partial z & O_{1\times 3} \\ \partial v_1/\partial x & \partial v_1/\partial y & \partial v_1/\partial z & O_{1\times 3} \\ \vdots & \vdots & \vdots & \vdots \\ \partial v_n/\partial x & \partial v_n/\partial y & \partial v_n/\partial z & O_{1\times 3} \end{bmatrix}$  (18)

where [x y z]^T is the position vector in the navigation frame. The submatrix H2 is given in Equation (19).

$H_2 = \begin{bmatrix} \frac{\partial u_1}{\partial\alpha}A_{11} + \frac{\partial u_1}{\partial\beta}A_{21} + \frac{\partial u_1}{\partial\gamma}A_{31} & \frac{\partial u_1}{\partial\alpha}A_{12} + \frac{\partial u_1}{\partial\beta}A_{22} + \frac{\partial u_1}{\partial\gamma}A_{32} & \frac{\partial u_1}{\partial\alpha}A_{13} + \frac{\partial u_1}{\partial\beta}A_{23} + \frac{\partial u_1}{\partial\gamma}A_{33} & O_{1\times 6} \\ \vdots & \vdots & \vdots & \vdots \\ \frac{\partial u_n}{\partial\alpha}A_{11} + \frac{\partial u_n}{\partial\beta}A_{21} + \frac{\partial u_n}{\partial\gamma}A_{31} & \frac{\partial u_n}{\partial\alpha}A_{12} + \frac{\partial u_n}{\partial\beta}A_{22} + \frac{\partial u_n}{\partial\gamma}A_{32} & \frac{\partial u_n}{\partial\alpha}A_{13} + \frac{\partial u_n}{\partial\beta}A_{23} + \frac{\partial u_n}{\partial\gamma}A_{33} & O_{1\times 6} \\ \frac{\partial v_1}{\partial\alpha}A_{11} + \frac{\partial v_1}{\partial\beta}A_{21} + \frac{\partial v_1}{\partial\gamma}A_{31} & \frac{\partial v_1}{\partial\alpha}A_{12} + \frac{\partial v_1}{\partial\beta}A_{22} + \frac{\partial v_1}{\partial\gamma}A_{32} & \frac{\partial v_1}{\partial\alpha}A_{13} + \frac{\partial v_1}{\partial\beta}A_{23} + \frac{\partial v_1}{\partial\gamma}A_{33} & O_{1\times 6} \\ \vdots & \vdots & \vdots & \vdots \\ \frac{\partial v_n}{\partial\alpha}A_{11} + \frac{\partial v_n}{\partial\beta}A_{21} + \frac{\partial v_n}{\partial\gamma}A_{31} & \frac{\partial v_n}{\partial\alpha}A_{12} + \frac{\partial v_n}{\partial\beta}A_{22} + \frac{\partial v_n}{\partial\gamma}A_{32} & \frac{\partial v_n}{\partial\alpha}A_{13} + \frac{\partial v_n}{\partial\beta}A_{23} + \frac{\partial v_n}{\partial\gamma}A_{33} & O_{1\times 6} \end{bmatrix}$  (19)

where [α β γ]^T is the attitude vector expressed in the Euler angles in the navigation frame. The attitude error in the process model Equation (6) is represented in the rotation vector, whereas the attitude error in Equation (19) is represented in the Euler angle. The relationship between the rotation vector and the Euler angle is expressed in Equation (20) [18]. The A_ij (i = 1, 2, 3, j = 1, 2, 3) in Equation (19) are the same as those in Equation (20).

$\begin{bmatrix} \delta\alpha \\ \delta\beta \\ \delta\gamma \end{bmatrix} \equiv A \begin{bmatrix} \varphi_N \\ \varphi_E \\ \varphi_D \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix} \begin{bmatrix} \varphi_N \\ \varphi_E \\ \varphi_D \end{bmatrix} = \begin{bmatrix} -\cos\gamma/\cos\beta & -\sin\gamma/\cos\beta & 0 \\ \sin\gamma & -\cos\gamma & 0 \\ -\tan\beta\cos\gamma & -\tan\beta\sin\gamma & -1 \end{bmatrix} \begin{bmatrix} \varphi_N \\ \varphi_E \\ \varphi_D \end{bmatrix}$  (20)

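As a small illustration of Equations (16) and (20), the residual vector and the matrix A can be formed as in the following sketch (assumed function names and data layout; not taken from the paper).

```python
# A minimal sketch of the measurement vector of Equation (16) and the
# rotation-vector-to-Euler-angle-error matrix A of Equation (20).
import numpy as np

def measurement_vector(uv_ins, uv_vis):
    """Stack the delta_u residuals followed by the delta_v residuals, Eq. (16).

    uv_ins, uv_vis : (n, 2) arrays of INS-predicted and measured (u, v) values
    for the n landmarks currently visible on the focal plane.
    """
    d = np.asarray(uv_ins) - np.asarray(uv_vis)
    return np.concatenate([d[:, 0], d[:, 1]])   # [du_1..du_n, dv_1..dv_n]^T

def euler_error_matrix(beta, gamma):
    """Matrix A of Eq. (20) relating the rotation-vector attitude error
    (phi_N, phi_E, phi_D) to the Euler-angle error (d_alpha, d_beta, d_gamma)."""
    cb, cg, sg, tb = np.cos(beta), np.cos(gamma), np.sin(gamma), np.tan(beta)
    return np.array([[-cg / cb, -sg / cb,  0.0],
                     [ sg,      -cg,       0.0],
                     [-tb * cg, -tb * sg, -1.0]])
```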

The submatrix H3 is given in Equation (21).

$H_3 = \begin{bmatrix} -1 & 0 & -Y_1/X_1 \\ \vdots & \vdots & \vdots \\ -1 & 0 & -Y_n/X_n \\ 0 & -1 & -Z_1/X_1 \\ \vdots & \vdots & \vdots \\ 0 & -1 & -Z_n/X_n \end{bmatrix}$  (21)

where [X_k Y_k Z_k]^T is the position vector of the kth landmark in the camera frame and can be expressed in Equation (22).

$\begin{bmatrix} X_k \\ Y_k \\ Z_k \end{bmatrix} = C_n^c \begin{bmatrix} x_k - x \\ y_k - y \\ z_k - z \end{bmatrix} = \begin{bmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{bmatrix} \begin{bmatrix} x_k - x \\ y_k - y \\ z_k - z \end{bmatrix}$  (22)

Then, [∂u_k/∂x ∂u_k/∂y ∂u_k/∂z] in Equation (18) can be expressed in Equation (23).

$\begin{bmatrix} \dfrac{\partial u_k}{\partial x} & \dfrac{\partial u_k}{\partial y} & \dfrac{\partial u_k}{\partial z} \end{bmatrix} = \begin{bmatrix} f\dfrac{Y_k}{X_k^2} & -f\dfrac{1}{X_k} & 0 \end{bmatrix} \begin{bmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{bmatrix}$  (23)

And [∂u_k/∂α ∂u_k/∂β ∂u_k/∂γ] in Equation (19) can be expressed in Equation (24).

$\begin{bmatrix} \dfrac{\partial u_k}{\partial\alpha} & \dfrac{\partial u_k}{\partial\beta} & \dfrac{\partial u_k}{\partial\gamma} \end{bmatrix} = \begin{bmatrix} -f\dfrac{Y_k}{X_k^2} & f\dfrac{1}{X_k} & 0 \end{bmatrix} \begin{bmatrix} \dfrac{\partial X_k}{\partial\alpha} & \dfrac{\partial X_k}{\partial\beta} & \dfrac{\partial X_k}{\partial\gamma} \\ \dfrac{\partial Y_k}{\partial\alpha} & \dfrac{\partial Y_k}{\partial\beta} & \dfrac{\partial Y_k}{\partial\gamma} \\ \dfrac{\partial Z_k}{\partial\alpha} & \dfrac{\partial Z_k}{\partial\beta} & \dfrac{\partial Z_k}{\partial\gamma} \end{bmatrix}$  (24)

Also, [∂v_k/∂x ∂v_k/∂y ∂v_k/∂z] in Equation (18) and [∂v_k/∂α ∂v_k/∂β ∂v_k/∂γ] in Equation (19) are expressed in Equations (25) and (26), respectively.

$\begin{bmatrix} \dfrac{\partial v_k}{\partial x} & \dfrac{\partial v_k}{\partial y} & \dfrac{\partial v_k}{\partial z} \end{bmatrix} = \begin{bmatrix} f\dfrac{Z_k}{X_k^2} & 0 & -f\dfrac{1}{X_k} \end{bmatrix} \begin{bmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{bmatrix}$  (25)

$\begin{bmatrix} \dfrac{\partial v_k}{\partial\alpha} & \dfrac{\partial v_k}{\partial\beta} & \dfrac{\partial v_k}{\partial\gamma} \end{bmatrix} = \begin{bmatrix} -f\dfrac{Z_k}{X_k^2} & 0 & f\dfrac{1}{X_k} \end{bmatrix} \begin{bmatrix} \dfrac{\partial X_k}{\partial\alpha} & \dfrac{\partial X_k}{\partial\beta} & \dfrac{\partial X_k}{\partial\gamma} \\ \dfrac{\partial Y_k}{\partial\alpha} & \dfrac{\partial Y_k}{\partial\beta} & \dfrac{\partial Y_k}{\partial\gamma} \\ \dfrac{\partial Z_k}{\partial\alpha} & \dfrac{\partial Z_k}{\partial\beta} & \dfrac{\partial Z_k}{\partial\gamma} \end{bmatrix}$  (26)
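The following sketch shows how Equations (18)–(26) combine into one pair of observation-matrix rows for a single landmark. It is an illustrative implementation under assumed inputs (the attitude-derivative matrix and the direction cosine matrices are taken as given), not the authors' code.

```python
# A minimal sketch of building one pair of rows of H = [H1 H2 H3] for landmark k;
# stacking the rows over all visible landmarks gives the full observation matrix.
import numpy as np

def observation_rows(f, P_k_n, P_u_n, C_n_c, dP_datt, A):
    """Rows of H for the u_k and v_k measurements of one landmark.

    C_n_c   : navigation -> camera direction cosine matrix (entries C_ij of Eq. (22))
    dP_datt : 3x3 matrix of d[X_k, Y_k, Z_k]/d[alpha, beta, gamma] (Eqs. (24), (26))
    A       : Euler/rotation-vector error matrix of Eq. (20)
    """
    X, Y, Z = C_n_c @ (np.asarray(P_k_n) - np.asarray(P_u_n))       # Eq. (22)

    du_dpos = np.array([ f * Y / X**2, -f / X, 0.0]) @ C_n_c         # Eq. (23)
    dv_dpos = np.array([ f * Z / X**2, 0.0, -f / X]) @ C_n_c         # Eq. (25)
    du_datt = np.array([-f * Y / X**2,  f / X, 0.0]) @ dP_datt       # Eq. (24)
    dv_datt = np.array([-f * Z / X**2, 0.0,  f / X]) @ dP_datt       # Eq. (26)

    O3, O6 = np.zeros(3), np.zeros(6)
    # H1 (Eq. (18)): position block; the velocity block is zero
    h1_u, h1_v = np.concatenate([du_dpos, O3]), np.concatenate([dv_dpos, O3])
    # H2 (Eq. (19)): attitude block expressed through A; inertial-sensor block is zero
    h2_u, h2_v = np.concatenate([du_datt @ A, O6]), np.concatenate([dv_datt @ A, O6])
    # H3 (Eq. (21)): vision-sensor errors (du, dv, df)
    h3_u, h3_v = np.array([-1.0, 0.0, -Y / X]), np.array([0.0, -1.0, -Z / X])

    return (np.concatenate([h1_u, h2_u, h3_u]),
            np.concatenate([h1_v, h2_v, h3_v]))
```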

It can be seen from Equation (16) that the proposed method can provide an integrated navigation output even though the number of landmarks is not sufficient for a vision navigation output.

4. Computer Simulation and Experimental Result

The proposed method is verified through computer simulations and van tests.


4.1. Computer Simulation

Computer simulations of the proposed integrated navigation method were carried out for a low- to medium-grade inertial sensor and a low-cost commercial camera. Figure 3 shows the scheme of the simulations. The reference trajectory and inertial sensor data were generated using MATLAB and the INS toolbox manufactured by GPSoft LLC. True camera measurement data of the landmarks were first generated using the pinhole camera model given in Equations (1) and (2). The camera measurement data of the landmarks on the focal plane were finally generated by adding noises to the true camera measurement data. Zero to ten landmarks to be observed on every image were placed by a random generator. The IMU measurement data were also generated by adding noises to the true IMU measurement data. Tables 1 and 2 show the specifications of the IMU and the vision sensor for the simulation.
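The measurement-generation scheme described above can be sketched as follows. This is an assumption-laden illustration (invented function name and noise handling); the focal length and the 200 um (1σ) focal-plane noise are taken from Table 2.

```python
# A minimal sketch of the camera measurement generation of Figure 3: true
# focal-plane measurements from the pinhole model plus additive noise, with a
# random number (zero to ten) of landmarks visible in each image.
import numpy as np

rng = np.random.default_rng(0)

def simulate_camera_frame(landmark_db, P_u_n, C_n_b, C_b_c, f=0.025, sigma=200e-6):
    """Return noisy (u, v) measurements for a random subset of landmarks."""
    n_visible = rng.integers(0, 11)                      # zero to ten landmarks
    if n_visible == 0:
        return np.empty((0, 2)), np.empty((0,), dtype=int)
    idx = rng.choice(len(landmark_db), size=n_visible, replace=False)
    meas = []
    for P_k_n in landmark_db[idx]:
        P_k_c = C_b_c @ C_n_b @ (P_k_n - P_u_n)          # camera-frame line of sight
        X, Y, Z = P_k_c
        u, v = f * Y / X, f * Z / X                      # Equations (1) and (2)
        meas.append([u + rng.normal(0, sigma), v + rng.normal(0, sigma)])
    return np.array(meas), idx
```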

Figure 3. Scheme of simulation. (The reference trajectory drives an IMU model and a camera model; IMU noises and camera noises are added to the true measurement data, and the landmark database feeds the camera model before the vision/INS integration.)

Table 1. IMU specification and INS initial attitude error for simulation.

Specification                Description
Accelerometer bias           5 mg
Accelerometer random walk    0.1 m/s/√h
Gyro bias                    100°/h
Gyro random walk             0.5°/√h
Data rate                    100 Hz
Roll error                   0.1°
Pitch error                  0.1°
Yaw error                    5.0°

Table 2. Vision sensor specification for simulation.

Specification                                        Description
Focal length                                         25 mm
No. of horizontal pixels                             4000
No. of vertical pixels                               3000
Field of view                                        90°
Horizontal pixel pitch                               8 um
Vertical pixel pitch                                 8 um
Data rate                                            10 Hz
Avg. of focal length error                           200 um
Avg. of horizontal optical axis coordinate error     200 um
Avg. of vertical optical axis coordinate error       200 um
Focal length error (1σ)                              200 um
Horizontal optical axis coordinate error (1σ)        200 um
Vertical optical axis coordinate error (1σ)          200 um
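The paper does not list the filter tuning, so the following is only a unit-conversion sketch (assumed constant names) showing how the Table 1 and Table 2 specification values would typically be brought into SI units before initializing a Kalman-filter implementation.

```python
# Converting the simulation sensor specifications into SI units (assumptions only).
import numpy as np

G = 9.80665                                 # m/s^2
D2R = np.pi / 180.0

accel_bias      = 5e-3 * G                  # 5 mg            -> m/s^2
accel_rw        = 0.1 / 60.0                # 0.1 m/s/sqrt(h) -> m/s/sqrt(s)
gyro_bias       = 100.0 * D2R / 3600.0      # 100 deg/h       -> rad/s
gyro_rw         = 0.5 * D2R / 60.0          # 0.5 deg/sqrt(h) -> rad/sqrt(s)
init_att_err    = np.array([0.1, 0.1, 5.0]) * D2R   # roll, pitch, yaw -> rad
focal_plane_err = 200e-6                    # 200 um (1 sigma), Table 2 -> m
```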


50 Monte-Carlo simulations were performed for an eight-shaped flight path with constant height as shown in Figure 4. Less than three landmarks were intentionally placed randomly in a specific area in order to create a poor vision navigation environment.

Results of the proposed method were compared with those of another integration method in [14]. In the integration method in [14], the outputs of the vision navigation system are position and attitude, and the state vector is given in Equation (27).

$\delta x = \begin{bmatrix} \delta P_N & \delta P_E & \delta P_D & \delta V_N & \delta V_E & \delta V_D & \phi_N & \phi_E & \phi_D & \nabla_x & \nabla_y & \nabla_z & \varepsilon_x & \varepsilon_y & \varepsilon_z \end{bmatrix}^T$  (27)

The measurement vector is given in Equation (28).

$\delta z = \begin{bmatrix} \delta P_N & \delta P_E & \delta P_D & \delta\alpha & \delta\beta & \delta\gamma \end{bmatrix}^T$  (28)

Figure 4. Vehicle trajectory and landmarks in simulation.

As with the loosely coupled GPS/INS integrated navigation method, the method in [14] has redundancy in the navigation output. The vision navigation system can provide a stand-alone navigation output even when the INS and/or the integrated navigation system cannot provide a navigation output. However, as described in Section 2, the vision system cannot give a navigation output when less than three landmarks are available on the focal plane. In this case, the performance of the integrated navigation system can deteriorate since the measurement update process cannot be performed in the integration Kalman filter. As shown in Equation (15), the measurement update process can be performed even when only one landmark is visible on the focal plane in the proposed method. Only the time update in the Kalman filtering is performed when no landmarks are visible at all.
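The update logic described above can be summarized in a short sketch. The Kalman-filter helper names below are assumptions, not the authors' interface; the point is only that the proposed method updates with one or more landmarks, whereas a method that needs a full vision navigation fix requires at least three.

```python
# A minimal sketch of the measurement-update gating described in the text.
def filter_step(kf, imu_data, focal_plane_meas, require_full_fix=False):
    kf.time_update(imu_data)                       # always propagate with the INS
    n = len(focal_plane_meas)
    min_landmarks = 3 if require_full_fix else 1   # method of [14] vs. proposed method
    if n >= min_landmarks:
        kf.measurement_update(focal_plane_meas)    # Eq. (15) with 2n measurements
    return kf.state
```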

Figure 5 shows the results of the estimated vision sensor errors of the proposed method in the simulation. It can be seen from the results that the vision sensor errors are well estimated and the performance of the vision navigation system is improved.


Figure 5. Camera sensor error estimation results of the simulation.

In Figure 6, navigation results of the proposed method are compared with those of the pure INS and the method in [14]. Figure 7 shows the position errors in the north, east and down directions of the proposed method and the method in [14].

Figure 6. Navigation results of the simulation.

Figure 7. Position error of the simulation.


Table 3 shows the RMS errors of the pure INS, the method in [14] and the proposed method. It can be observed that the error of the pure INS becomes large as the navigation operation continues. It can also be observed that the method in [14] gives relatively large navigation parameter errors in the area where the number of the landmarks is not enough for a vision navigation output. The proposed method gives approximately 50 and 10 times better performance in the position and the attitude than the method in [14] in this area, respectively.

Table 3. RMS navigation parameter error of the simulation.

Error                        Pure INS    Method in [14]    Proposed Method
Position error (m)    N      1026.88     18.59             0.99
                      E      3350.72     15.90             1.12
                      D      1370.55     5.34              0.45
Velocity error (m/s)  N      22.04       3.46              2.73
                      E      22.09       2.88              3.07
                      D      15.15       0.97              1.14
Attitude error (°)    Roll   0.69        2.02              0.06
                      Pitch  0.65        2.44              0.37
                      Yaw    8.42        3.14              0.63
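As a small note on how such tables are produced, the RMS values are typically computed from the error time histories collected over the Monte-Carlo runs; the data layout below is an assumption.

```python
# A minimal sketch of computing one RMS navigation-parameter error for Table 3.
import numpy as np

def rms(errors):
    """errors: array of shape (runs, epochs) for one component, e.g. the north position error."""
    return float(np.sqrt(np.mean(np.square(errors))))
```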

4.2. Van Test

Figure 8 shows the experimental setup and a reference navigation system. The experimental setup consists of a camera and an IMU and is installed on an optical bench. The reference navigation system, which is a carrier-phase differential GPS (CDGPS)/INS integrated navigation system, is installed together. Outputs of the reference navigation system are regarded as true values in the evaluation of the experimental results. A low-cost commercial camera and a micro electro mechanical system (MEMS) IMU given in Tables 4 and 5 were used in the experiment. The database of the landmarks was made in advance with the help of large-scale maps and aerial photographs.

Figure 8. Experimental setup and reference navigation system.

Table 4. IMU specification and initial attitude error for the experiment.

Specification                        Description
Manufacturer                         Crossbow Ltd.
Accelerometer bias                   10 mg
Accelerometer random walk            0.1 m/s/√h
Accelerometer scaling factor error   10,000 ppm
Gyro bias                            3600°/h
Gyro random walk                     1.0°/√h
Gyro scaling factor error            1000 ppm
Data rate                            135 Hz
Roll error                           0.61°
Pitch error                          0.01°
Yaw error                            4.30°

Table 5. Vision sensor specification for the experiment.

Specification                             Description
Manufacturer                              Axis Ltd.
Image sensor   Sensor type                CMOS-color
               No. of horizontal pixels   1280
               No. of vertical pixels     800
               Horizontal pixel pitch     3 um
               Vertical pixel pitch       3 um
Lens           Focal length               1.7 mm
               Field of view              99°
Data rate (frame rate)                    Max 30 Hz (1.4 Hz is used in the experiment)

Figure 9 shows the position of the vehicle's reference trajectory in the experiment. The results of the proposed method in the experiment were compared with those of the pure INS and the method in [14]. Figures 10 and 11 show the navigation results and Table 6 shows the errors in the experiment. As with the results of the computer simulations, it can be seen from the experimental result that the proposed method provides reliable solutions with approximately 5 times better positioning performance than the method in [14] even in poor vision environments.

Figure 9. Vehicle's reference trajectory in the experiment.

Figure 10. Navigation results of the experiment.

Figure 11. Position error of the experiment.


Table 6. RMS navigation parameter error of the experiment.

Error                        Pure INS    Method in [14]    Proposed Method
Position error (m)    N      7110.95     9.29              6.50
                      E      1228.32     16.44             6.47
                      D      6973.98     15.34             3.25
Velocity error (m/s)  N      52.02       2.58              1.65
                      E      166.06      4.13              1.87
                      D      200.25      7.83              4.67
Attitude error (°)    Roll   32.54       4.04              2.31
                      Pitch  20.82       3.53              2.91
                      Yaw    53.05       5.61              4.68

5. Concluding Remarks and Further Studies

This paper proposed an inertial and landmark-based vision integrated navigation method using focal plane measurements of landmarks. An integration model was derived to use the raw measurements on the focal plane in the integration Kalman filter. The proposed method has been verified through computer simulations and van tests. Performance of the proposed method has been compared with another integration method which used a vision navigation output, i.e., position


and attitude output from a vision navigation system. It has been observed from the results that the proposed system gives reliable navigation outputs even when the number of landmarks is not sufficient for vision navigation.

An integration method to use continuous images to improve navigation performance and an integration model to efficiently detect and recognize landmarks will be studied. As future works, other filtering methods such as the particle filter and unscented Kalman filter, artificial neural network-based filtering, and the application of a vision/INS integrated navigation system for sea navigation can be considered.

Author Contributions: Youngsun Kim and Dong-Hwan Hwang proposed the idea of this paper; Youngsun Kim designed and performed the experiments; Youngsun Kim and Dong-Hwan Hwang analyzed the experimental data and wrote the manuscript.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Biezad, D.J. Integrated Navigation and Guidance Systems, 1st ed.; American Institute of Aeronautics and Astronautics Inc.: Virginia, VA, USA, 1999; pp. 139–150.
2. Groves, P.D. Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems, 1st ed.; Artech House: Boston, MA, USA, 2008; pp. 361–469.
3. Ehsan, S.; McDonald-Maier, K.D. On-Board Vision Processing for Small UAVs: Time to Rethink Strategy. 2015, arXiv:1504.07021. arXiv.org e-Print archive. Available online: https://arxiv.org/abs/1504.07021 (accessed on 10 October 2016).
4. Groves, P.D. The PNT boom, future trends in integrated navigation. Inside GNSS 2013, 2013, 44–49.
5. Bryson, M.; Sukkarieh, S. Building a robust implementation of bearing-only inertial SLAM for a UAV. J. Field Robot. 2007, 24, 113–143. [CrossRef]
6. Veth, M.J. Navigation using images, a survey of techniques. J. Inst. Navig. 2011, 58, 127–139. [CrossRef]
7. Borenstein, J.; Everette, H.R.; Feng, L. Where am I? Sensors and Methods for Mobile Robot Positioning, 1st ed.; University of Michigan: Michigan, MI, USA, 1996.
8. Thompson, W.B.; Henderson, T.C.; Colvin, T.L.; Dick, L.B.; Valiquette, C.M. Vision-based localization. In Proceedings of the 1993 Image Understanding Workshop, Maryland, MD, USA, 18–21 April 1993; pp. 491–498.
9. Betke, M.; Gurvits, L. Mobile robot localization using landmarks. IEEE Trans. Robot. Autom. 1997, 13, 251–263. [CrossRef]
10. Chatterji, G.B.; Menon, P.K.; Sridhar, B. GPS/machine vision navigation systems for aircraft. IEEE Trans. Aerosp. Electron. Syst. 1997, 33, 1012–1025. [CrossRef]
11. Scaramuzza, D. Visual odometry-tutorial. IEEE Robot. Autom. Mag. 2011, 18, 80–92. [CrossRef]
12. George, M.; Sukkarieh, S. Camera aided inertial navigation in poor GPS environments. In Proceedings of the 2007 IEEE Aerospace Conference, Big Sky, MT, USA, 3–10 March 2007; pp. 1–12.
13. Tardif, J.P.; George, M.; Laverne, M. A new approach to vision-aided inertial navigation. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 4161–4168.
14. Kim, Y.S.; Hwang, D.-H. INS/vision navigation system considering error characteristics of landmark-based vision navigation. J. Inst. Control Robot. Syst. 2013, 19, 95–101. [CrossRef]
15. Yue, D.X.; Huang, X.S.; Tan, H.L. INS/VNS fusion based on unscented particle filter. In Proceedings of the 2007 International Conference on Wavelet Analysis and Pattern Recognition, Beijing, China, 2–4 November 2007; pp. 151–156.
16. Wang, W.; Wang, D. Land vehicle navigation using odometry/INS/vision integrated system. In Proceedings of the 2008 IEEE International Conference on Cybernetics and Intelligent Systems, Chengdu, China, 21–24 September 2008; pp. 754–759.
17. Meskin, D.G.; Itzhack, Y.B. A unified approach to inertial navigation system error modeling. J. Guid. Control Dyn. 1992, 15, 648–653. [CrossRef]
18. Song, G.W.; Jeon, C.B.; Yu, J. Relation of Euler angle error and rotation vector error. In Proceedings of the 1997 Conference on Control and Instrumentation, Automation, and Robotics, Seoul, Korea, 22–25 July 1997; pp. 217–222.

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
