sensors Article

Error Analysis and Calibration Method of a Multiple Field-of-View Navigation System

Shuai Shi 1,2, Kaichun Zhao 1,*, Zheng You 1,*, Chenguang Ouyang 1, Yongkui Cao 1 and Zhenzhou Wang 1

1 State Key Laboratory of Precision Measurement Technology and Instruments, Tsinghua University, Beijing 100084, China; [email protected] (S.S.); [email protected] (C.O.); [email protected] (Y.C.); [email protected] (Z.W.)
2 Astronaut Center of China, Beijing 100094, China
* Correspondence: [email protected] (K.Z.); [email protected] (Z.Y.); Tel.: +86-10-62776000 (ext. 8010) (K.Z.); +86-10-62782308 (Z.Y.)

Academic Editor: Vittorio M.N. Passaro Received: 9 December 2016; Accepted: 20 March 2017; Published: 22 March 2017

Abstract: The Multiple Field-of-view Navigation System (MFNS) is a spacecraft subsystem built to realize the autonomous navigation of the Spacecraft Inside Tiangong Space Station. This paper introduces the basics of the MFNS, including its architecture, mathematical model and analysis, and a numerical simulation of system errors. According to the performance requirements of the MFNS, the calibration of both the intrinsic and extrinsic parameters of the system is essential and pivotal. Hence, a novel method based on geometrical constraints in object space, called checkerboard-fixed post-processing calibration (CPC), is proposed to solve the problem of simultaneously obtaining the intrinsic parameters of the cameras integrated in the MFNS and the transformation between the MFNS coordinate and the cameras' coordinates. This method utilizes a two-axis turntable, and a prior alignment of the coordinates is needed. The theoretical derivation and practical operation of the CPC method are introduced. The calibration experiment results of the MFNS indicate that the extrinsic parameter accuracy of the CPC reaches 0.1° for each Euler angle and 0.6 mm for each position vector component (1σ). A navigation experiment verifies the calibration result and the performance of the MFNS. The MFNS is found to work properly, and the accuracy of the position vector components and Euler angles reaches 1.82 mm and 0.17° (1σ), respectively. The basic mechanism of the MFNS may serve as a reference for the design and analysis of multiple-camera systems. Moreover, the proposed calibration method has practical value owing to its convenience of use and potential for integration into a toolkit.

Keywords: multiple field-of-view navigation system; imaging model; error analysis; checkerboard-fixed post-processing calibration

1. Introduction

According to its space program schedule, China will launch the Tiangong Space Station in the next few years. To make use of the microgravity environment and conduct flight experiments and on-orbit service inside the space station, the Spacecraft Inside Tiangong Space Station (SITSS) is proposed (Figure 1). The SITSS aims to offer a platform for on-orbit distributed satellite flight tests and to serve as a free-flying robot for environmental monitoring and astronaut assistance. Hence, an accurate, fast, and fully autonomous navigation system is expected. Nowadays, computer vision is widely used in the observation [1], measurement [2], entertainment [3], and navigation [4] fields because of the rich information it acquires from the surrounding environment, its high precision, and its potential for

Sensors 2017, 17, 655; doi:10.3390/s17030655

www.mdpi.com/journal/sensors


intelligence. To achieve the intended function for the SITSS, a multiple field-of-view navigation system (MFNS) for the SITSS (Figure 2) is built via computer vision.

Figure 1. (a) 3D model and (b) flying sketch of the SITSS.

Figure 2. (a) Prototype and (b) navigation sketch of the MFNS.

Parameter calibration is the fundamental problem in computer vision and has been the focus of research since it was first proposed. Earlier work can be found dating back to the 1970s [5], and a large set of works is dedicated to camera calibration methods and calibration model analyses. Calibration methods that require beacon coordinates in the 3D world can be found in [6–8]. Other automatic methods were proposed by Tsai [9], Strum [10], and Heikkila [11] et al., all of whom utilized known objects as calibration references. Zhang's method [12], published in 2000, introduced an easy and accurate calibration method based on a planar checkerboard placed in different locations in the field-of-view (FOV) of a camera. However, those automatic methods are inconvenient to use in large-FOV applications when the size of the reference object is limited. Aside from those methods, some approaches focus on calibration using vanishing points based on geometric relationships [13–15]. A large amount of literature on calibrating single cameras or vision systems based on the extension, optimization, and adaption of methods applicable to different situations has been introduced as well [16–18]. Those methods are also applicable in other fields, for instance, the error analysis and calibration of star trackers [19].

The effective and accurate intrinsic and extrinsic parameters of conventional single cameras can be recognized in many ways. The position and attitude relationship between cameras is a concern for binocular or multiple-camera vision systems; such a relationship can be obtained based on homography when the cameras share a substantial overlap [20,21]. However, obtaining this relationship becomes a challenge when the cameras of a system do not share a visual field. In recent years, researchers have come up with several remarkable ideas for non-overlapping camera systems: some of them use mirrors and a calibration target [22,23], some use rigid motion constraints [24,25], while some use a combination of those two methods [26,27]. Unfortunately, these methods seem incapable of solving

the problem of MFNS calibration directly, as the attitudes and positions of the cameras with respect to the MFNS coordinate are indispensable while difficult to obtain through either existing camera calibration methods or mechanical measurements.

In the present paper, we attempt to solve this problem by introducing an imaging model of the MFNS and its error analysis. A novel calibration method based on this imaging model is proposed, which meets the requirements of the MFNS and other similar systems. Our experiment indicates that the MFNS can reach the expected calibration accuracy by utilizing the proposed method. We introduce the mathematical model of the MFNS in Section 2. The error analysis of the system and the analysis of the simulation results are discussed in Section 3. The checkerboard-fixed post-processing calibration (CPC) method for the MFNS parameters is proposed in Section 4, and a verification navigation experiment is conducted in Section 5. Finally, the conclusions are presented in Section 6.

2. Mathematical Model

2.1. Imaging Model

The pinhole model is commonly used for the ideal mapping of the 3D points of the world to the 2D image generated by a camera [6,28]. Figure 3 displays the perspective projection performed by a pinhole camera. The world point P is projected through the projection center of the lens to the point p in the image plane I. Plane II is the symmetry plane of I, which is introduced to facilitate the analysis. The relationship between P (Xw, Yw, Zw) and its image p (u, v) is as follows:


Figure 3. Pinhole imaging model.

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\begin{bmatrix} 1/dx & s & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix}
\begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} =
\begin{bmatrix} \alpha_x & \gamma & u_0 & 0 \\ 0 & \alpha_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix}
\begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} = M_1 M_2 X_W = M X_W \qquad (1)$$

where:
Zc is the optic-axis coordinate of point P,
dx is the ratio coefficient in the x direction,
dy is the ratio coefficient in the y direction,
s and γ are the non-orthogonal factors of the axes of the image coordinate,
(u0, v0) is the pixel coordinate of the camera principal point,
f is the principal distance of the camera,
R is the 3 × 3 rotation matrix,


T is the 3D translation vector,
αx = f/dx and αy = f/dy are the respective scale factors of the u-axis and v-axis of the image coordinate, and
M1 and M2 are the intrinsic parameter matrix and the extrinsic parameter matrix, respectively; M = M1·M2 is the perspective projection transform matrix.

In the actual imaging system, the displacement of the principal point, the focal length deviation, distortion, and other error factors should be considered on the basis of the ideal pinhole imaging model (see Section 4.1).

2.2. MFNS Model

Three industrial cameras whose optic axes are approximately perpendicular to one another are the main components of the MFNS (Figure 2). This arrangement is adopted for immediate and accurate navigation of the SITSS, because the SITSS has six degrees of freedom of free flight inside the space station. The three cameras form a generalized monocular vision system in which the FOV is extended and the accuracy in three directions is optimized compared with traditional monocular vision systems. Based on the imaging model of a single camera (Equation (1)), the imaging model of the MFNS is given as follows:

$$Z_{ck} \begin{bmatrix} u_k \\ v_k \\ 1 \end{bmatrix} =
\begin{bmatrix} \alpha_{xk} & \gamma_k & u_{0k} & 0 \\ 0 & \alpha_{yk} & v_{0k} & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} C_{kb} & 0 \\ 0^T & 1 \end{bmatrix}
\begin{bmatrix} C_{bw} & r_{bw}^b - r_{bk}^b \\ 0^T & 1 \end{bmatrix}
\begin{bmatrix} X_{Wi} \\ Y_{Wi} \\ Z_{Wi} \\ 1 \end{bmatrix} \qquad (2)$$

where k = x, y, z represents camera X, Y, and Z, respectively; i = 1, 2, 3, … is the index of the beacons observed; $C_{kb}$ is the rotation matrix from the MFNS body coordinate to the camera k coordinate; $C_{bw}$ is the rotation matrix from the world coordinate to the MFNS body coordinate; and $r_{bw}^b = (t_x, t_y, t_z)^T$ and $r_{bk}^b = (r_{xk}, r_{yk}, r_{zk})^T$ are the translation vectors from the origin of the MFNS body coordinate to the origins of the world coordinate and the camera k coordinate, respectively, expressed in the MFNS body coordinate.

Although a common way to model a set of rigidly attached cameras is to use a generalized camera model, which finds its way into many applications such as motion estimation and image reconfiguration [29–32], Equation (2) is a practical model of the MFNS that extends the commonly used pinhole model. Coordinate transformation is simply applied during the derivation, which makes the model simple and reliable. Moreover, the model of Equation (2) is convenient for the navigation applications in our future work, as the rotation matrices and translation vectors have clear physical meanings: they are exactly the 6-DOF navigation parameters (three-axis attitude and position). Thus, measurement equations can be built directly from Equation (2).

Let $A_k = \begin{bmatrix} \alpha_{xk} & \gamma_k & u_{0k} & 0 \\ 0 & \alpha_{yk} & v_{0k} & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} = (a_{ij}^k)_{3 \times 4}$ and $C_{bw} = (r_{ij})_{3 \times 3}$. We therefore get:

$$\begin{cases}
(a_{11}^k - a_{31}^k u_k)(r_{11} X_{Wi} + r_{12} Y_{Wi} + r_{13} Z_{Wi} + t_x - r_{xk}) + (a_{12}^k - a_{32}^k u_k)(r_{21} X_{Wi} + r_{22} Y_{Wi} + r_{23} Z_{Wi} + t_y - r_{yk}) \\
\quad + (a_{13}^k - a_{33}^k u_k)(r_{31} X_{Wi} + r_{32} Y_{Wi} + r_{33} Z_{Wi} + t_z - r_{zk}) = 0 \\
(a_{21}^k - a_{31}^k v_k)(r_{11} X_{Wi} + r_{12} Y_{Wi} + r_{13} Z_{Wi} + t_x - r_{xk}) + (a_{22}^k - a_{32}^k v_k)(r_{21} X_{Wi} + r_{22} Y_{Wi} + r_{23} Z_{Wi} + t_y - r_{yk}) \\
\quad + (a_{23}^k - a_{33}^k v_k)(r_{31} X_{Wi} + r_{32} Y_{Wi} + r_{33} Z_{Wi} + t_z - r_{zk}) = 0
\end{cases} \qquad (3)$$

Equation (3) is the practical imaging model of the MFNS. The pose of the MFNS (six unknowns) can be obtained by solving a Perspective-n-Point problem [33,34] when enough beacons are observed.
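To make the chain of transformations in Equation (2) concrete, the sketch below projects a world-frame beacon into one camera with NumPy. This is our own illustration, not code from the paper: the function name, the identity extrinsics, and the test beacon are assumptions of ours, while the intrinsic values are the simulation parameters later listed in Table 1.

```python
import numpy as np

def project_beacon(P_w, K, C_kb, C_bw, r_bw, r_bk):
    """Project a world-frame beacon into camera k following Equation (2):
    world -> MFNS body -> camera k -> perspective division -> pixels."""
    p_body = C_bw @ P_w + (r_bw - r_bk)  # beacon expressed in the MFNS body frame
    p_cam = C_kb @ p_body                # beacon expressed in the camera k frame
    uvw = K @ p_cam                      # homogeneous pixel coordinates, Z_ck * (u, v, 1)
    return uvw[:2] / uvw[2]

# Camera Z with the Table 1 intrinsics and identity extrinsics
# (C_zb = I, r_bz = 0), as assumed in the error analysis of Section 3.
K = np.array([[1600.0, 0.0, 640.0],
              [0.0, 1600.0, 512.0],
              [0.0, 0.0, 1.0]])
I3, zero = np.eye(3), np.zeros(3)
uv = project_beacon(np.array([100.0, 100.0, 1000.0]), K, I3, I3, zero, zero)
# u = 1600*100/1000 + 640 = 800, v = 1600*100/1000 + 512 = 672
```

With identity rotations the camera and world frames coincide, so the projection reduces to u = αx·Xw/Zw + u0, which the final comment reflects.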


3. Error Analysis

The MFNS measurements are the coordinates of the images of the navigation reference beacons. In the imaging model (Equation (2)), the error in the beacon positions, the intrinsic parameter calibration error of the cameras, the pose (with respect to the MFNS) calibration error of the cameras, and the algorithm error are the factors affecting the measurement accuracy of the system. The main focus of this research is the influence of the system parameter errors on the measurement error. The algorithm error will be studied in future work. To facilitate the equation derivation and error analysis, the following discussion is based on the assumption that $C_{zb} = I_3$ and $r_{bz}^b = 0$. Given that cameras X, Y, and Z are equivalent, only the measurement error of camera Z is calculated and simulated as follows:

(1) Beacon position error

The pixel coordinate (u, v) of a beacon observed by camera Z is given by:

$$u = \alpha_x \frac{r_{11} X_W + r_{12} Y_W + r_{13} Z_W + t_x}{r_{31} X_W + r_{32} Y_W + r_{33} Z_W + t_z} + u_0 = \alpha_x \frac{X_c}{Z_c} + u_0, \qquad
v = \alpha_y \frac{r_{21} X_W + r_{22} Y_W + r_{23} Z_W + t_y}{r_{31} X_W + r_{32} Y_W + r_{33} Z_W + t_z} + v_0 = \alpha_y \frac{Y_c}{Z_c} + v_0 \qquad (4)$$

where (Xc, Yc, Zc) is the coordinate of that beacon in the camera Z coordinate. Suppose the position of a beacon is inaccurate and its coordinate is (Xw + ΔXw, Yw + ΔYw, Zw + ΔZw); then the measurement of the pixel coordinate is given by:

$$u'_{beacon} = \alpha_x \frac{r_{11}(X_W + \Delta X_W) + r_{12}(Y_W + \Delta Y_W) + r_{13}(Z_W + \Delta Z_W) + t_x}{r_{31}(X_W + \Delta X_W) + r_{32}(Y_W + \Delta Y_W) + r_{33}(Z_W + \Delta Z_W) + t_z} + u_0 = \alpha_x \frac{X'_{cb}}{Z'_{cb}} + u_0$$

$$v'_{beacon} = \alpha_y \frac{r_{21}(X_W + \Delta X_W) + r_{22}(Y_W + \Delta Y_W) + r_{23}(Z_W + \Delta Z_W) + t_y}{r_{31}(X_W + \Delta X_W) + r_{32}(Y_W + \Delta Y_W) + r_{33}(Z_W + \Delta Z_W) + t_z} + v_0 = \alpha_y \frac{Y'_{cb}}{Z'_{cb}} + v_0 \qquad (5)$$

Thus, the error of measurement is as follows:

$$\Delta u_b = u'_{beacon} - u = \alpha_x \left( \frac{X'_{cb}}{Z'_{cb}} - \frac{X_c}{Z_c} \right), \qquad
\Delta v_b = v'_{beacon} - v = \alpha_y \left( \frac{Y'_{cb}}{Z'_{cb}} - \frac{Y_c}{Z_c} \right) \qquad (6)$$

Let $r_1 = (r_{11}, r_{12}, r_{13})^T$, $r_2 = (r_{21}, r_{22}, r_{23})^T$, $r_3 = (r_{31}, r_{32}, r_{33})^T$, and $\Delta P_w = (\Delta X_w, \Delta Y_w, \Delta Z_w)^T$. Equation (6) can be rewritten as:

$$\Delta u_b = \alpha_x \left( \frac{X_c + r_1 \cdot \Delta P_w}{Z_c + r_3 \cdot \Delta P_w} - \frac{X_c}{Z_c} \right), \qquad
\Delta v_b = \alpha_y \left( \frac{Y_c + r_2 \cdot \Delta P_w}{Z_c + r_3 \cdot \Delta P_w} - \frac{Y_c}{Z_c} \right) \qquad (7)$$
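Because $X'_{cb} = X_c + r_1 \cdot \Delta P_w$ and $Z'_{cb} = Z_c + r_3 \cdot \Delta P_w$, Equations (6) and (7) are two spellings of the same quantity; a quick numeric check (with illustrative values of our choosing, not the paper's) confirms the rewriting:

```python
import numpy as np

# Illustrative extrinsics: a 30-degree rotation about the optic axis
# and an arbitrary translation (our values, chosen only for this check).
th = np.deg2rad(30.0)
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])
t = np.array([10.0, -5.0, 800.0])
P_w = np.array([300.0, 200.0, 900.0])   # beacon position (mm)
dP_w = np.array([0.4, -0.2, 0.3])       # beacon position error (mm)
alpha_x = 1600.0                        # Table 1 scale factor

# Equation (6): project the true and the perturbed beacons, then subtract.
Xc, _, Zc = R @ P_w + t
Xcb, _, Zcb = R @ (P_w + dP_w) + t
du_eq6 = alpha_x * (Xcb / Zcb - Xc / Zc)

# Equation (7): the same error expressed with the rotation rows r1, r3.
r1, r3 = R[0], R[2]
du_eq7 = alpha_x * ((Xc + r1 @ dP_w) / (Zc + r3 @ dP_w) - Xc / Zc)

assert np.isclose(du_eq6, du_eq7)       # identical by construction
```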

The Monte Carlo (MC) method is used to study the relationship between the beacon position error and the measurement error. The parameters of the method are shown in Table 1. Only the pixel error of u is simulated because the properties of the pixels in the two directions (u, v) [or (Xc, Yc)] are the same. The MFNS coordinate is taken as the reference frame because of the imaging process. Specifically, the MFNS is fixed at the origin of the world coordinate (see the extrinsic parameters of the MFNS in Table 1), and the beacons are in any possible position (see the beacon positions in Table 1). As a result, the simulation model is simplified and the result is more convenient for observation. Figure 4 shows the simulation result; the dots in the figure stand for the root-mean-square error of the MC results (the same below). The pixel error increases approximately linearly with the beacon position error.

Table 1. Parameters for MC simulation on MFNS error analysis.

Camera parameters: αx (pixel) = 1600; αy (pixel) = 1600; u0 (pixel) = 640; v0 (pixel) = 512
Extrinsic parameters of the MFNS: φ (°) = 0; θ (°) = 0; ψ (°) = 0; Tx (mm) = 0; Ty (mm) = 0; Tz (mm) = 0
Beacon position: Xw (mm): 0–1000; Yw (mm): 0–1000; Zw (mm): 0–1000
Number of MC simulation trials for each group of parameters: 10,000
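The beacon-error MC experiment itself is a few lines of NumPy. The sketch below is our reconstruction of the procedure from Equation (7) and Table 1, not the authors' code; in particular, we keep Zc at least 100 mm away from zero so that the perspective division stays well conditioned, which is an assumption on our part.

```python
import numpy as np

rng = np.random.default_rng(42)
ALPHA_X, N_TRIALS = 1600.0, 10_000   # Table 1: u-axis scale factor, trials per group

def rms_pixel_error_u(sigma_beacon_mm):
    """RMS pixel error of u for a Gaussian beacon position error (Equation (7)),
    with identity MFNS extrinsics so that (Xc, Yc, Zc) = (Xw, Yw, Zw)."""
    # Beacon positions roughly as in Table 1; Zc >= 100 mm is our assumption,
    # keeping the simulated beacons safely in front of camera Z.
    P = rng.uniform([0.0, 0.0, 100.0], [1000.0, 1000.0, 1000.0], (N_TRIALS, 3))
    dP = rng.normal(0.0, sigma_beacon_mm, (N_TRIALS, 3))
    Xc, Zc = P[:, 0], P[:, 2]
    du = ALPHA_X * ((Xc + dP[:, 0]) / (Zc + dP[:, 2]) - Xc / Zc)
    return float(np.sqrt(np.mean(du ** 2)))

# The RMS error grows roughly linearly with sigma_beacon_mm, as in Figure 4.
```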

Figure 4. MC simulation result of the pixel error caused by the beacon error.

(2) Camera intrinsic parameter error

On the basis of the pinhole imaging model (Equation (2)), the influence of the principal point displacement, the focal length error, and the lens distortion should be considered in the practical vision measurement system. These parameters are the intrinsic parameters of a camera, whose calibration is one of the key problems in vision research. At present, a reasonable choice of calibration method ensures that the imaging accuracy reaches the sub-pixel level.

(3) Relative position error between camera and MFNS

When multiple cameras are integrated in the MFNS, a relative position error between each camera coordinate and the system body coordinate can occur, known as the error of the translation vector T in the imaging model. Let T' = (tx + Δtx, ty + Δty, tz + Δtz)^T; the following is obtained according to the imaging model:

$$\Delta u_p = u'_{position} - u = \alpha_x \left( \frac{X_c + \Delta t_x}{Z_c + \Delta t_z} - \frac{X_c}{Z_c} \right), \qquad
\Delta v_p = v'_{position} - v = \alpha_y \left( \frac{Y_c + \Delta t_y}{Z_c + \Delta t_z} - \frac{Y_c}{Z_c} \right) \qquad (8)$$

The result of the MC simulation of the pixel error is shown in Figure 5, while the parameters are set as shown in Table 1.
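Equation (8) can be evaluated directly to see how a camera-to-body translation error maps into the image; the helper below is our illustration using the Table 1 value of αx, with function and parameter names of our own choosing.

```python
import numpy as np

ALPHA_X = 1600.0   # Table 1 scale factor of the u-axis

def du_position(Xc, Zc, dt_x, dt_z):
    """Pixel error of u caused by a camera-to-MFNS translation error
    (dt_x, dt_z), per Equation (8)."""
    return ALPHA_X * ((Xc + dt_x) / (Zc + dt_z) - Xc / Zc)

# A 1 mm error along x at 1 m range shifts the image by about alpha_x/Zc pixels,
# matching the near-linear trend of Figure 5:
err = du_position(Xc=100.0, Zc=1000.0, dt_x=1.0, dt_z=0.0)
# err = 1600 * (101/1000 - 100/1000) = 1.6
```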

Figure 5. MC simulation result of the pixel error caused by the relative position error.

(4) Relative attitude error between cameras and MFNS

Similar to the relative position error, an error inevitably occurs in the rotation matrix between the camera coordinate system and the MFNS coordinate, known as the error of the attitude Euler angles φ, θ, and ψ in the imaging model. Let Θ' = (φ + Δφ, θ + Δθ, ψ + Δψ)^T. The rotation matrix in the imaging model (Euler angles are defined as a 2-1-3 rotation in this paper) can be rewritten as follows:

$$\begin{aligned}
r'_{11} &= \cos(\psi + \Delta\psi)\cos(\theta + \Delta\theta) + \sin(\psi + \Delta\psi)\sin(\theta + \Delta\theta)\sin(\varphi + \Delta\varphi) \\
r'_{12} &= \sin(\psi + \Delta\psi)\cos(\varphi + \Delta\varphi) \\
r'_{13} &= -\cos(\psi + \Delta\psi)\sin(\theta + \Delta\theta) + \sin(\psi + \Delta\psi)\cos(\theta + \Delta\theta)\sin(\varphi + \Delta\varphi) \\
r'_{21} &= -\sin(\psi + \Delta\psi)\cos(\theta + \Delta\theta) + \cos(\psi + \Delta\psi)\sin(\theta + \Delta\theta)\sin(\varphi + \Delta\varphi) \\
r'_{22} &= \cos(\psi + \Delta\psi)\cos(\varphi + \Delta\varphi) \\
r'_{23} &= \sin(\psi + \Delta\psi)\sin(\theta + \Delta\theta) + \cos(\psi + \Delta\psi)\cos(\theta + \Delta\theta)\sin(\varphi + \Delta\varphi) \\
r'_{31} &= \cos(\varphi + \Delta\varphi)\sin(\theta + \Delta\theta) \\
r'_{32} &= -\sin(\varphi + \Delta\varphi) \\
r'_{33} &= \cos(\theta + \Delta\theta)\cos(\varphi + \Delta\varphi)
\end{aligned} \qquad (9)$$

Let $r'_1 = (r'_{11}, r'_{12}, r'_{13})^T$, $r'_2 = (r'_{21}, r'_{22}, r'_{23})^T$, $r'_3 = (r'_{31}, r'_{32}, r'_{33})^T$, and $P_w = (X_w, Y_w, Z_w)^T$. Thus, we get:

$$\Delta u_a = u'_{attitude} - u = \alpha_x \left( \frac{r'_1 \cdot P_w}{r'_3 \cdot P_w} - \frac{X_c}{Z_c} \right), \qquad
\Delta v_a = v'_{attitude} - v = \alpha_y \left( \frac{r'_2 \cdot P_w}{r'_3 \cdot P_w} - \frac{Y_c}{Z_c} \right) \qquad (10)$$

The result of the MC simulation of the pixel error is shown in Figure 6, while the parameters are set as shown in Table 1.

In Figures 4–6, the pixel error grows approximately linearly with the beacon error and the MFNS extrinsic parameter error (including the relative position error and the relative attitude error between the cameras and the MFNS); such an outcome is regarded as a system error. System errors have a serious influence on MFNS performance compared to the intrinsic parameter calibration errors of the cameras. The simulation results show that the relative position and attitude of the system must be calibrated to an accuracy of
