Sensors Article

On-Ground Processing of Yaogan-24 Remote Sensing Satellite Attitude Data and Verification Using Geometric Field Calibration

Mi Wang 1, Chengcheng Fan 1,*, Bo Yang 2,3, Shuying Jin 1 and Jun Pan 1

1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, 129 Luoyu Road, Wuhan 430079, China; [email protected] (M.W.); [email protected] (S.J.); [email protected] (J.P.)
2 Computer School, Wuhan University, 129 Luoyu Road, Wuhan 430079, China; [email protected]
3 Collaborative Innovation Center of Geospatial Technology, Wuhan University, 129 Luoyu Road, Wuhan 430079, China
* Correspondence: [email protected]; Tel.: +86-27-6877-8969

Academic Editor: Assefa M. Melesse Received: 2 June 2016; Accepted: 25 July 2016; Published: 30 July 2016

Abstract: Satellite attitude accuracy is an important factor affecting the geometric processing accuracy of high-resolution optical satellite imagery. Because the Yaogan-24 remote sensing satellite's on-board attitude data processing is not accurate enough to meet its image geometric processing requirements, we developed an approach involving on-ground attitude data processing and verification using the digital orthophoto (DOM) and digital elevation model (DEM) of a geometric calibration field. The approach comprises three modules: on-ground processing based on a bidirectional filter, overall weighted smoothing and fitting, and evaluation in the geometric calibration field. Our experimental results demonstrate that the proposed on-ground processing method is both robust and feasible, ensuring the reliability of the observation data quality and the convergence and stability of the parameter estimation model. In addition, both Euler angles and quaternions can be used to build the mathematical fitting model, while the orthogonal polynomial fitting model is more suitable for modeling the attitude parameters. Furthermore, compared with image geometric processing based on the on-board attitude data, the accuracy of uncontrolled and relative geometric positioning of the imagery improves by about 50%.

Keywords: precise attitude estimation; star sensor; gyroscope; bidirectional filter; Yaogan-24 remote sensing satellite; earth observation camera

Sensors 2016, 16, 1203; doi:10.3390/s16081203
www.mdpi.com/journal/sensors

1. Introduction

Since 2010, China has launched a series of high-resolution optical satellites. On 9 January 2012, China launched ZY-3, the first high-accuracy civil stereo-mapping optical satellite, from the Taiyuan satellite launch center. Like Japan's Advanced Land-Observing Satellite (ALOS) [1], ZY-3 was equipped with three panchromatic (0.50–0.80 µm) time-delay CCD-integrated cameras (nadir, forward, and backward) capable of taking in-track triplet stereo images, with spatial resolutions of 2.1 m (nadir camera) and 3.5 m (forward and backward cameras) [2]. On 20 November 2014, the Yaogan-24 remote sensing satellite, with a panchromatic spatial resolution within 1 m, was successfully launched. The optical satellite ZY-3-02, which is similar to ZY-3, will be launched in early 2016. In general, the orbital heights of these optical satellites are 500–800 km, at which a 1 arcsec attitude error causes a 3–5 m ground positioning error if other sources of error are ignored. The impact of satellite attitude accuracy on high-resolution image geometric processing can therefore be very significant, and is summed up by the statement "one false step will make a great difference".

Thanks to technological advancements in precise orbit determination, microsecond time synchronization, and high-accuracy calibration of internal camera parameters, satellite attitude precision has become an important factor restricting the geometric positioning precision of high-resolution optical images, and this problem remains unsolved [3–6]. Many attempts have been made to achieve high-precision attitude determination for high-resolution optical satellites in remote sensing and geoscience applications, including deterministic methods based on vector observations, such as TRIAD, QUEST, FOAM, SVD, Euler-q, and ESOQ-2, and state-filtering estimation methods such as SOAR, q-EKF, Filter QUEST, and REQUEST [7–12]. Meanwhile, algorithms for precision geometric processing of optical images have matured, including (to name a few) the rigorous geometric imaging model, rational polynomial fitting, and pixel-pointing angle models [2–4,13–16]. In particular, in recent years platform jitter and sub-meter image processing have become hot topics; typical methods include jitter detection based on a calibration field and image geometry compensation based on a high-frequency angular displacement sensor [1,17–19]. However, to the best of our knowledge, current studies are scarce; most are based mainly on simulation data and lack validation with real data. Because of the intrinsic nature of attitude, the question of how to evaluate and verify attitude accuracy effectively also remains open. Our research has been dedicated to intensive work in this area, and here we introduce our recent efforts to fill this perceived research gap. Satellite attitude determination accuracy depends not only on the attitude sensor measurement accuracy, but also on the attitude data processing method used [20].
Generally, optical satellite image geometric processing uses the on-board processed attitude data, which are also used by the satellite attitude control system. Because the attitude control system depends on the robustness rather than the accuracy of the attitude data, on-board processing usually applies a real-time unidirectional Kalman filter. Such a filter uses only past observations and relies heavily on gyro information, so gyro bias and other error sources degrade the attitude accuracy, and the original observations of the attitude sensors cannot be fully exploited for high-precision processing [21]. Therefore, the attitude accuracy provided by the attitude control system cannot meet the requirements of optical image geometric processing. To address these problems, we took the Yaogan-24 remote sensing satellite as an example for research and experimental analysis. We use a bidirectional Kalman filter and an overall weighted smoothing method for attitude data processing, which together realize high-precision on-ground processing of the attitude data. Meanwhile, we use real panchromatic images together with the digital orthophoto (DOM) and digital elevation model (DEM) of the geometric calibration field to achieve automatic measurement of high-precision control points and high-precision attitude inversion based on dense image matching. With this approach we can more reasonably evaluate the absolute and relative attitude accuracies. The remainder of the paper is organized as follows: Section 2 introduces attitude data processing, model construction, and the verification methods. Section 3 presents the collection and analysis of experimental data. Section 4 summarizes our work and presents the study's conclusions and research perspectives.

2. On-Ground Processing for Attitude Data and Verification in a Geometric Calibration Field

2.1. The On-Ground Processing Workflow

The Yaogan-24 remote sensing satellite is a high-resolution mapping satellite injected into a Sun-synchronous orbit at an altitude of 645 km. To extend the viewing range of the camera, HRO can swing laterally up to a maximum of 32°. The photographing system consists of a panchromatic linear-array mapping camera and a multi-spectral camera, with ground pixel resolutions of 1.0 m and 2.0 m, respectively.


The satellite's attitude sensor configuration includes two German Astro10 star sensors, a domestically made APS star sensor, four gyro components, a digital Sun sensor, and an infrared Earth sensor. Table 1 lists the Yaogan-24 remote sensing satellite's star sensor and gyro performance parameters. The Astro10 star sensors on the satellite are superior to the domestic APS star sensor, which is used as a backup.

Table 1. The Yaogan-24 remote sensing satellite's star sensor and gyro performance parameters.

Attitude Sensor        Performance Parameters
Star sensor ASTRO10    Optical axis error ≤5″ (3σ); horizontal axis error ≤35″ (3σ); frequency 4 Hz
APS star sensor        Optical axis error ≤20″ (3σ); horizontal axis error ≤35″ (3σ); frequency 4 Hz
Gyro components        Random bias ≤0.39°/h (3σ); constant bias ≤2°/h; frequency 8 Hz

Precise attitude data are a precondition for high-precision geometric positioning in high-resolution optical image processing. The goal of attitude determination is to calculate the attitude parameters in the reference coordinate system from the measurements of the attitude sensors. An attitude determination system consists of the attitude sensors and the attitude determination algorithm [20]; attitude accuracy therefore depends on the accuracies of both the measurements and the determination algorithm. The attitude sensors on high-resolution optical satellites include infrared Earth sensors, Sun sensors, star sensors, gyro inertial sensors, etc. In general, star sensor and gyroscope data are combined to determine the precise attitude parameters of high-resolution optical satellites [22–24]. Figure 1 shows the on-ground processing workflow for the attitude data of the Yaogan-24 remote sensing satellite and its verification. First, we preprocess the attitude observation data. Second, we construct the measurement model and state model, and use the bidirectional Kalman filter and overall weighted smoothing method to realize the optimal satellite attitude estimation. Finally, we use the panchromatic image and the reference data of the geometric calibration field to verify the relative and absolute accuracy of the estimated attitude data. The workflow includes four major steps:

(1) Measurement equation construction. A star sensor optical axis vector can position a space object with high precision, and from it a measurement equation is built. To ensure the quality and reliability of the observational attitude data, we control the quality of the observational data based on the stability of the inter-sensor optical axis angle.

(2) State equation construction. Gyro data reflect the change in attitude and are used to construct the state equation on the basis of the attitude kinematic equations.

(3) Filtering for information fusion. To estimate the optimal attitude parameters, we use a bidirectional Kalman filter to process the attitude data, which are then smoothed overall according to the error covariance matrix.

(4) Control point measurement and attitude precision inversion.


To verify the effectiveness of our method, we applied real imagery and the DOM/DEM of the geometric calibration field for geometric processing and attitude precision inversion.

Before building the state and measurement equations, we must first determine the state variables. The selection of state variables directly affects the nonlinear dimensions of the state and measurement equations. To reduce the matrix order and streamline the attitude determination algorithm, we chose the error quaternion vector and the gyro bias error as the state variables of the system, that is, $X = [\Delta q_{13}^T, \Delta b^T]^T$, with $\Delta q_{13} = [\Delta q_1\ \Delta q_2\ \Delta q_3]^T$.


Figure 1. Flow chart for attitude data processing and verification.

2.2. Measurement Equation Construction

The measurement accuracy of a star sensor's optical axis is the highest among its three axes, so we construct the measurement equation based on the optical axis vector. Construction of the star sensor measurement equation includes the following three aspects:

(1) Quality control of observational data from the star sensor

Naturally, certain unusual errors may occur during star sensor observation, so the quality of the observational data should be controlled first. Suppose that at time $t$, the quaternion observation value of sensor A is $q_t^A = [q_{0A}\ q_{1A}\ q_{2A}\ q_{3A}]^T$ and that of sensor B is $q_t^B = [q_{0B}\ q_{1B}\ q_{2B}\ q_{3B}]^T$. The corresponding body-to-inertial rotation matrices $R_A^I$ and $R_B^I$ are:

$$R_A^I = \begin{bmatrix} q_{1A}^2 - q_{2A}^2 - q_{3A}^2 + q_{0A}^2 & 2(q_{1A}q_{2A} - q_{3A}q_{0A}) & 2(q_{1A}q_{3A} + q_{2A}q_{0A}) \\ 2(q_{1A}q_{2A} + q_{3A}q_{0A}) & -q_{1A}^2 + q_{2A}^2 - q_{3A}^2 + q_{0A}^2 & 2(q_{2A}q_{3A} - q_{1A}q_{0A}) \\ 2(q_{1A}q_{3A} - q_{2A}q_{0A}) & 2(q_{2A}q_{3A} + q_{1A}q_{0A}) & -q_{1A}^2 - q_{2A}^2 + q_{3A}^2 + q_{0A}^2 \end{bmatrix} \quad (1)$$

$$R_B^I = \begin{bmatrix} q_{1B}^2 - q_{2B}^2 - q_{3B}^2 + q_{0B}^2 & 2(q_{1B}q_{2B} - q_{3B}q_{0B}) & 2(q_{1B}q_{3B} + q_{2B}q_{0B}) \\ 2(q_{1B}q_{2B} + q_{3B}q_{0B}) & -q_{1B}^2 + q_{2B}^2 - q_{3B}^2 + q_{0B}^2 & 2(q_{2B}q_{3B} - q_{1B}q_{0B}) \\ 2(q_{1B}q_{3B} - q_{2B}q_{0B}) & 2(q_{2B}q_{3B} + q_{1B}q_{0B}) & -q_{1B}^2 - q_{2B}^2 + q_{3B}^2 + q_{0B}^2 \end{bmatrix} \quad (2)$$

Next, we can obtain the optical axis vectors of the two sensors in the inertial reference frame as follows:

$$Z_A = \left[2(q_{1A}q_{3A} + q_{2A}q_{0A})\quad 2(q_{2A}q_{3A} - q_{1A}q_{0A})\quad -q_{1A}^2 - q_{2A}^2 + q_{3A}^2 + q_{0A}^2\right]^T$$
$$Z_B = \left[2(q_{1B}q_{3B} + q_{2B}q_{0B})\quad 2(q_{2B}q_{3B} - q_{1B}q_{0B})\quad -q_{1B}^2 - q_{2B}^2 + q_{3B}^2 + q_{0B}^2\right]^T \quad (3)$$

Next, we calculate the angle between the two optical axes at time $t$:

$$\alpha_t = \arccos(Z_A \cdot Z_B) \quad (4)$$
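As a concrete illustration of Equations (1)–(4), the following Python sketch builds each sensor's optical axis vector in the inertial frame and computes the inter-axis angle. The quaternion values are hypothetical, not flight data:

```python
import numpy as np

def optical_axis_inertial(q):
    """Third column of the body-to-inertial rotation matrix built from
    quaternion q = [q0, q1, q2, q3] (scalar first), per Equations (1)-(3)."""
    q0, q1, q2, q3 = q
    return np.array([
        2 * (q1 * q3 + q2 * q0),
        2 * (q2 * q3 - q1 * q0),
        -q1**2 - q2**2 + q3**2 + q0**2,
    ])

def axis_angle(qa, qb):
    """Angle (rad) between the two sensors' optical axes, Equation (4)."""
    za, zb = optical_axis_inertial(qa), optical_axis_inertial(qb)
    return np.arccos(np.clip(np.dot(za, zb), -1.0, 1.0))

# Hypothetical unit quaternions: identity, and a 30 degree rotation about X.
qa = np.array([1.0, 0.0, 0.0, 0.0])
th = np.radians(30.0)
qb = np.array([np.cos(th / 2), np.sin(th / 2), 0.0, 0.0])
print(np.degrees(axis_angle(qa, qb)))  # close to 30 degrees
```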

As a rigid bracket and a temperature control device are used among the star sensors, the change in the inter-sensor optical axis angle is very small, so we can control the quality of the observational data with the following model:

$$\delta_m = \sqrt{\frac{\sum_{i=1}^{N}(\alpha_i - \alpha_{cal})^2}{N}}, \qquad \begin{cases} |\alpha_i - \alpha_{cal}| \le \gamma\delta_m & \text{normal observation} \\ |\alpha_i - \alpha_{cal}| > \gamma\delta_m & \text{abnormal observation} \end{cases} \quad (5)$$

where $\alpha_{cal}$ is the angle between the optical axes calibrated in the laboratory, $\gamma$ is a threshold value, generally in the range 1–3, and $\delta_m$ is the mean square error of the optical axis angle.
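The quality-control rule of Equation (5) can be sketched as follows; the angle series, the calibrated value, and the threshold $\gamma$ below are illustrative assumptions, not flight values:

```python
import numpy as np

def flag_abnormal(angles, alpha_cal, gamma=3.0):
    """Flag epochs whose inter-axis angle deviates from the laboratory-
    calibrated value by more than gamma * delta_m, Equation (5).
    Returns (delta_m, boolean mask of abnormal observations)."""
    angles = np.asarray(angles, dtype=float)
    delta_m = np.sqrt(np.mean((angles - alpha_cal) ** 2))
    abnormal = np.abs(angles - alpha_cal) > gamma * delta_m
    return delta_m, abnormal

# Hypothetical series: small noise around a calibrated 60 degree angle,
# with one gross outlier injected at index 3.
alpha_cal = 60.0
angles = [60.001, 59.999, 60.002, 60.5, 59.998, 60.001]
delta_m, abnormal = flag_abnormal(angles, alpha_cal, gamma=2.0)
print(abnormal)  # only the outlier at index 3 is flagged
```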

(2) Data fusion for multiple star sensors

Data fusion for multiple star sensors uses the original attitude observations, from which the high-precision attitude of the body coordinate system relative to the inertial coordinate system can be determined; the attitude accuracy depends on the optical axis pointing accuracy of the star sensors [25,26]. This operation is a prerequisite for combining the star sensor and gyro data. Assuming the axes of multiple star sensors in the inertial coordinate system are $V_1^{CIS}, V_2^{CIS}, \ldots, V_n^{CIS}$ and their body-frame counterparts are $V_1^{Body}, V_2^{Body}, \ldots, V_n^{Body}$, the star sensor observation equation is:

$$V_i^{CIS} + v_{3\times1} = \hat{R}_B^I \cdot V_i^{Body}, \quad i = 1, 2, \cdots, n \quad (6)$$

in which $\hat{R}_B^I$ is the rotation matrix from the body coordinate system to the inertial coordinate system, and $v_{3\times1}$ is the star sensor observation error. We can establish an indirect adjustment model using quaternions as independent variables:

$$\begin{cases} V_i^{CIS} + v_{3\times1} = \hat{R}_B^I \cdot V_i^{Body} & (i = 1, 2, \cdots, n) \\ \hat{q}_0^2 + \hat{q}_1^2 + \hat{q}_2^2 + \hat{q}_3^2 = 1 \end{cases} \quad (7)$$

Applying a Taylor series expansion to the first equation:

$$\begin{aligned} v_{3\times1} &= R_B^I \cdot V_i^{Body} - V_i^{CIS} \\ &= R_B^I(Q_0)\cdot V_i^{Body} + \left(\frac{\partial R_B^I}{\partial q_0}V_i^{Body}dq_0 + \frac{\partial R_B^I}{\partial q_1}V_i^{Body}dq_1 + \frac{\partial R_B^I}{\partial q_2}V_i^{Body}dq_2 + \frac{\partial R_B^I}{\partial q_3}V_i^{Body}dq_3\right) - V_i^{CIS} \\ q_{00}^2 &+ q_{01}^2 + q_{02}^2 + q_{03}^2 + 2q_{00}dq_0 + 2q_{01}dq_1 + 2q_{02}dq_2 + 2q_{03}dq_3 - 1 = 0 \end{aligned} \quad (8)$$

where $Q_0 = [q_{00}\ q_{01}\ q_{02}\ q_{03}]^T$ is the initial value of the unknown quaternion. The above equations are linearized with the constraint, and can be written in matrix form:

$$\begin{cases} V = AX - L \\ CX + W = 0 \end{cases} \quad (9)$$


Furthermore, we solve the following equations; the specific derivation is:

$$\begin{cases} V = AX - L \\ CX + W = 0 \\ A^TPV + C^TK = 0 \end{cases} \quad (10)$$

$$\begin{aligned} & A^TPAX + C^TK - A^TPL = 0 \\ & K = (C(A^TPA)^{-1}C^T)^{-1}(C(A^TPA)^{-1}A^TPL + W) \\ & X = (A^TPA)^{-1}\left(A^TPL - C^T(C(A^TPA)^{-1}C^T)^{-1}(C(A^TPA)^{-1}A^TPL + W)\right) \end{aligned} \quad (11)$$

We then update the quaternion:

$$q_0 = q_{00} + dq_0,\quad q_1 = q_{01} + dq_1,\quad q_2 = q_{02} + dq_2,\quad q_3 = q_{03} + dq_3 \quad (12)$$

We iterate until $X = [dq_0\ dq_1\ dq_2\ dq_3]^T$ stabilizes; the quaternion estimate is then obtained from the above model.

(3) Constructing the measurement equation

From the fusion results of the multiple star sensor observations shown above, we can construct the attitude measurement equation. The three axis vectors in the body coordinate system, their measured values in the inertial coordinate system, and their true values are:

$$\begin{cases} l_b^1 = [x_b^1, y_b^1, z_b^1]^T \\ l_b^2 = [x_b^2, y_b^2, z_b^2]^T \\ l_b^3 = [x_b^3, y_b^3, z_b^3]^T \end{cases}, \quad \begin{cases} l_{mi}^1 = [x_{mi}^1, y_{mi}^1, z_{mi}^1]^T \\ l_{mi}^2 = [x_{mi}^2, y_{mi}^2, z_{mi}^2]^T \\ l_{mi}^3 = [x_{mi}^3, y_{mi}^3, z_{mi}^3]^T \end{cases}, \quad \begin{cases} l_i^1 = [x_i^1, y_i^1, z_i^1]^T \\ l_i^2 = [x_i^2, y_i^2, z_i^2]^T \\ l_i^3 = [x_i^3, y_i^3, z_i^3]^T \end{cases} \quad (13)$$

We write the measurement equations compactly as:

$$Z(t) = h(X, t) + V(t), \qquad h(X,t) = \begin{bmatrix} l_i^1 \\ l_i^2 \\ l_i^3 \end{bmatrix} = \begin{bmatrix} h_1 \\ h_2 \\ h_3 \end{bmatrix}_{9\times1}, \qquad V(t) = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}_{9\times1} \quad (14)$$

ˆ As the measurement Equation (14) is continuous and Xptq is non-linear, to use the attitude ˆ estimation filtering algorithm we first need to focus on the best estimates of Xptq to linearize it with the sampling period T, and then do discrete and recursive calculations. The detailed derivation process is as follows: T pqql 1 h1 “ li1 “ Abi b (15) ˆ Abi pqq “ Abi p∆qqAbi pqq where Abi pqq represents the rotational transformation matrix from the inertial to the body coordinate system. As the error quaternion is small, it can be reduced to: »

fi 2∆q3 ´2∆q2 Ñ ffi 1 2∆q1 fl “ I3ˆ3 ´ 2r∆ q ˆs ´2∆q1 1 ! ) Ñ ˆ “ I3ˆ3 ´ 2r∆ q ˆs Abi pqq ˆ Abi pqq “ Abi p∆qqAbi pqq 1 — Ap∆qq “ – ´2∆q3 2∆q2

(16)


Solving further:

$$h_1 = l_i^1 = A_{bi}^T(q)l_b^1 = \left\{(I_{3\times3} - 2[\Delta\vec{q}\times])A_{bi}(\hat{q})\right\}^T l_b^1 = A_{bi}^T(\hat{q})l_b^1 + 2A_{bi}^T(\hat{q})[\Delta\vec{q}\times]l_b^1 = A_{bi}^T(\hat{q})l_b^1 - 2A_{bi}^T(\hat{q})[l_b^1\times]\Delta\vec{q} \quad (17)$$

We similarly get:

$$h_2 = A_{bi}^T(\hat{q})l_b^2 - 2A_{bi}^T(\hat{q})[l_b^2\times]\Delta\vec{q}, \qquad h_3 = A_{bi}^T(\hat{q})l_b^3 - 2A_{bi}^T(\hat{q})[l_b^3\times]\Delta\vec{q} \quad (18)$$

Now we calculate the measurement matrix:

$$\frac{\partial h_1}{\partial\Delta\vec{q}} = -2A_{bi}^T(\hat{q})[l_b^1\times], \qquad \frac{\partial h_2}{\partial\Delta\vec{q}} = -2A_{bi}^T(\hat{q})[l_b^2\times], \qquad \frac{\partial h_3}{\partial\Delta\vec{q}} = -2A_{bi}^T(\hat{q})[l_b^3\times] \quad (19)$$

Therefore, Equation (14) becomes linear and discrete:

$$Z_k = H_kX_k + V_k, \qquad H_k = \left.\frac{\partial h[X(t_k), t_k]}{\partial X(t_k)}\right|_{X(t_k) = \hat{X}_{k/k-1}} = \begin{bmatrix} \partial h_1/\partial\Delta\vec{q} & 0_{3\times3} \\ \partial h_2/\partial\Delta\vec{q} & 0_{3\times3} \\ \partial h_3/\partial\Delta\vec{q} & 0_{3\times3} \end{bmatrix} = \begin{bmatrix} -2A_{bi}^T(\hat{q})[l_b^1\times] & 0_{3\times3} \\ -2A_{bi}^T(\hat{q})[l_b^2\times] & 0_{3\times3} \\ -2A_{bi}^T(\hat{q})[l_b^3\times] & 0_{3\times3} \end{bmatrix}$$

$$Z_k = \begin{bmatrix} l_{mi}^1 - A_{bi}^T(\hat{q})l_b^1 \\ l_{mi}^2 - A_{bi}^T(\hat{q})l_b^2 \\ l_{mi}^3 - A_{bi}^T(\hat{q})l_b^3 \end{bmatrix}_{9\times1} \quad (20)$$

$V_k$ is the observation noise sequence, satisfying:

$$E(V_k) = 0, \qquad E(V_kV_j^T) = R_k\delta_{kj} \quad (21)$$

where $\delta_{kj}$ is the Kronecker delta and $R_k$ is the covariance matrix of the measurement noise. Assuming the optical axis observation error is $\sigma_s$, we obtain $R_k = \sigma_s^2 I_{9\times9}$.

2.3. Construction of the State Equation Based on the Gyroscope and Attitude Kinematic Equations

The measurement model of the gyro, a common inertial attitude sensor, is critical in attitude determination. The gyro output angular velocity is integrated to predict the satellite attitude at the next epoch; the gyro bias is treated as an additional estimated state, and the gyro measurements enter the state equation directly rather than the measurement equation. According to the configuration of the on-board gyroscopes, the gyro measurement model is:

$$\omega_g = \omega + b + \eta_g \quad (22)$$

in which $\omega_g$ is the gyro output measurement; $\omega$ is the angular rate of the satellite body frame relative to inertial space; $b$ is the gyro bias; and $\eta_g$ is the gyro measurement noise with mean square error $\sigma_g$, satisfying:

$$E(\eta_g(t)) = 0, \qquad E(\eta_g(t)\eta_g^T(t')) = \sigma_g^2\delta(t - t') \quad (23)$$

where $\delta$ is the Dirac delta function. The gyro bias is not static; assuming it is driven by white noise, it follows the random walk model:

$$\dot{b} = \eta_b, \qquad E(\eta_b(t)) = 0, \qquad E(\eta_b(t)\eta_b^T(t')) = \sigma_b^2\delta(t - t') \quad (24)$$
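A short simulation of the gyro model of Equations (22)–(24) makes the noise structure concrete; all rates and noise levels below are illustrative assumptions, not Yaogan-24 specifications:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.125, 800             # assumed 8 Hz gyro sampling over 100 s
sigma_g, sigma_b = 1e-4, 1e-6  # illustrative noise levels (rad/s)

omega_true = np.full(n, 1e-3)  # hypothetical constant body rate (rad/s)
# Gyro bias as a white-noise-driven random walk, Equation (24):
bias = np.cumsum(sigma_b * np.sqrt(dt) * rng.standard_normal(n))
eta_g = sigma_g * rng.standard_normal(n)  # measurement noise, Equation (23)
omega_g = omega_true + bias + eta_g       # gyro output, Equation (22)

print(np.mean(omega_g - omega_true))  # small residual: drifting bias + noise
```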

We further assume that the two noise types are independent; $\eta_b$ is the gyro bias white noise and $\sigma_b$ its mean square error. With the quaternion kinematics:

$$\dot{q} = \frac{1}{2}q \otimes \omega_{bi} \quad (25)$$

in which $q$ represents the attitude of the satellite body frame relative to the inertial frame and $\omega_{bi}$ the angular rate of the body frame relative to the inertial frame, expressed in the body frame, we can integrate the quaternion. As the gyro-measured angular rate contains, for example, measurement error and bias error, we can only obtain the estimates $\hat{q}$ and $\hat{\omega}_{bi}$. The error between the true attitude quaternion $q$ and the estimate $\hat{q}$ is expressed with the error quaternion $\Delta q = [\Delta q_0\ \Delta q_1\ \Delta q_2\ \Delta q_3]^T$:

$$q = \hat{q} \otimes \Delta q \quad (26)$$

The error quaternion $\Delta q$ represents a small rotation angle, so $\Delta q_0 \approx 1$ and only the vector part needs to be considered, reducing the error quaternion to three independent variables. Rearranging Equation (26):

$$\Delta q = \hat{q}^{-1} \otimes q \quad (27)$$

and differentiating both sides:

$$\Delta\dot{q} = \dot{\hat{q}}^{-1} \otimes q + \hat{q}^{-1} \otimes \dot{q} \quad (28)$$

Furthermore, from Equation (28):

$$\Delta\dot{q} = \dot{\hat{q}}^{-1}\otimes q + \hat{q}^{-1}\otimes\dot{q} = -\frac{1}{2}\hat{\omega}_{bi}\otimes\hat{q}^{-1}\otimes q + \frac{1}{2}\hat{q}^{-1}\otimes q\otimes\omega_{bi} = -\frac{1}{2}\hat{\omega}_{bi}\otimes\Delta q + \frac{1}{2}\Delta q\otimes\omega_{bi} \quad (29)$$

from which, with $\Delta\omega_{bi} = \omega_{bi} - \hat{\omega}_{bi}$, we obtain:

$$\Delta\dot{q} = -\frac{1}{2}\hat{\omega}_{bi}\otimes\Delta q + \frac{1}{2}\Delta q\otimes\hat{\omega}_{bi} + \frac{1}{2}\Delta q\otimes\Delta\omega_{bi} \quad (30)$$


Because the error quaternion is a small amount, $\Delta q = [\Delta q_0\ \Delta q_1\ \Delta q_2\ \Delta q_3]^T \approx [1\ 0\ 0\ 0]^T$, and:

$$\frac{1}{2}\Delta q \otimes \Delta\omega_{bi} = \frac{1}{2}\begin{bmatrix} -\Delta\vec{q}^T\Delta\vec{\omega}_{bi} \\ (\Delta q_0 I_{3\times3} - [\Delta\vec{q}\times])\Delta\vec{\omega}_{bi} \end{bmatrix} = \frac{1}{2}\begin{bmatrix} 0 \\ \Delta\vec{\omega}_{bi} \end{bmatrix} + O(|\Delta\vec{q}||\Delta\vec{\omega}_{bi}|) \quad (31)$$

Substituting Equations (29) and (30) into (31) and ignoring second-order small quantities, we derive the kinematic equations based on the error quaternion:

$$\Delta\dot{\vec{q}} = \begin{bmatrix} \Delta\dot{q}_1 \\ \Delta\dot{q}_2 \\ \Delta\dot{q}_3 \end{bmatrix} = -[\hat{\omega}_{bi}\times]\Delta\vec{q} - \frac{1}{2}\Delta b - \frac{1}{2}\eta_g, \qquad \Delta\dot{q}_0 = 0, \qquad \Delta\dot{b} = \eta_b \quad (32)$$

Using Equation (32), we can form a linear filtering state equation with the state $X_{6\times1} = [\Delta\vec{q}^T\ \Delta b^T]^T$:

$$\dot{X}(t) = F(t)X(t) + W(t), \qquad F(t) = \begin{bmatrix} -[\hat{\omega}_{bi}\times] & -0.5I_{3\times3} \\ 0_{3\times3} & 0_{3\times3} \end{bmatrix}, \qquad W(t) = [-0.5\eta_g\ \ \eta_b]^T \quad (33)$$

Equation (33) describes a continuous dynamic system; the filter is linearized and discretized as:

$$X_k = \Phi_{k/k-1}X_{k-1} + \Gamma_{k-1}W_{k-1}, \qquad X(t_k) = \Phi(t_k, t_{k-1})X(t_{k-1}) + \int_{t_{k-1}}^{t_k}\Phi(t_k, \tau)W(\tau)d\tau \quad (34)$$

When the sampling interval is small, the state transition matrix is computed as:

$$\dot{\Phi}(t, t_{k-1}) = F(t)\Phi(t, t_{k-1}), \qquad \Phi(t_{k-1}, t_{k-1}) = I, \qquad \Phi(t_k, t_{k-1}) = \exp\int_{t_{k-1}}^{t_k}F(t)dt \quad (35)$$

When the filter period $T\ (T = t_k - t_{k-1})$ is small, $F(t)$ can be approximated by a constant matrix:

$$F(t) \approx F(t_{k-1}),\quad t_{k-1} \le t \le t_k, \qquad \Phi(t_k, t_{k-1}) = \exp(TF(t_{k-1})) \quad (36)$$
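A sketch of the discrete state transition matrix of Equations (33) and (36) follows; the matrix exponential is truncated at second order, which is adequate for a small filter period, and the body rate and sample period are assumed values:

```python
import numpy as np

def cross_matrix(w):
    """Skew-symmetric cross-product matrix [w x]."""
    wx, wy, wz = w
    return np.array([[0.0, -wz, wy], [wz, 0.0, -wx], [-wy, wx, 0.0]])

def state_transition(omega_hat, T):
    """Phi(t_k, t_{k-1}) ~ exp(T F(t_{k-1})), Equations (33) and (36),
    with F held constant over the short filter period T."""
    F = np.zeros((6, 6))
    F[:3, :3] = -cross_matrix(omega_hat)
    F[:3, 3:] = -0.5 * np.eye(3)
    TF = T * F
    return np.eye(6) + TF + TF @ TF / 2.0  # truncated series for small T

# Illustrative body rate estimate (rad/s) and an assumed 8 Hz filter period.
Phi = state_transition(np.array([1e-3, -2e-3, 5e-4]), T=0.125)
print(Phi.shape)  # (6, 6)
```

The bias rows of $F$ are zero, so the bottom-right block of $\Phi$ stays the identity, matching the constant-bias prediction of Equation (40).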

In addition, the system noise sequence $W_{k-1}$ in the state equation and its driving array $\Gamma_{k-1}$ can be expressed as:

$$\Gamma_{k-1} = \int_{t_{k-1}}^{t_k}\Phi(t_k, \tau)d\tau = T\cdot I, \qquad W_{k-1} = [-0.5\eta_g\ \ \eta_b]^T$$
$$E\{W_k\} = 0, \qquad E\{W_kW_j^T\} = Q_k\delta_{kj}, \qquad Q_k = \mathrm{diag}(0.25\sigma_g^2 I_{3\times3},\ \sigma_b^2 I_{3\times3}) \quad (37)$$

2.4. Information Fusion Filter Design

On the basis of the measurement equation and the state equation, we use the bidirectional Kalman filter and overall weighted smoothing method to realize the optimal satellite attitude estimation, derived in detail in [27,28]. Figure 2 shows the schematic diagram of attitude data processing with the bidirectional Kalman filter.

X (tk  2 )  Xˆ (tk  2 ) q (tk  2 )  qˆ (tk  2 )

X (tk 1 )  Xˆ (tk  2 , tk 1 ) q (tk 1 )  qˆ (tk  2 , tk 1 )

b(tk  2 )  bˆ(tk  2 ) P(t )  Pˆ (t )

b(tk 1 )  bˆ(tk  2 , tk 1 ) P(t )  Pˆ (t , t )

k 2

k 1

k 2

k 2

X (tk  2 )  Xˆ (tk  2 )

X (tk 1 )  Xˆ (tk  2 , tk 1 ) q (tk 1 )  qˆ (tk  2 , tk 1 ) b(t )  bˆ(t , t )

P(tk  2 )  Pˆ (tk  2 )

P(tk 1 )  Pˆ (tk  2 , tk 1 )

k 1

k 2

k 2

X (tk 1 )  Xˆ (tk , tk 1 )

X (tk  2 )  Xˆ (tk  2 )

q(tk 1 )  qˆ (tk , tk 1 ) b(t )  bˆ(t , t )

q(tk  2 )  qˆ (tk  2 ) b(t )  bˆ(t )

P(tk )  Pˆ (tk )

P (tk 1 )  Pˆ (tk , tk 1 )

P (tk  2 )  Pˆ (tk  2 )

k

k 1

q (tk  2 )  qˆ (tk  2 ) b(t )  bˆ(t ) k 2

X (tk )  Xˆ (tk ) q (tk )  qˆ (tk ) b(t )  bˆ(t )

k 1

k

k 2

k 1

k

k 2

k 1

k 2

Pˆ (tk  2 )  P(tk  2 )

P (tk 1 )  Pˆ (tk , tk 1 )

k

k 2

Xˆ (tk  2 )  X (tk  2 ) qˆ(tk  2 )  q(tk  2 ) bˆ(t )  b(t )

q(tk 1 )  qˆ (tk , tk 1 ) b(t )  bˆ(t , t )

b(tk )  bˆ(tk ) P(t )  Pˆ (t ) k

k 1

X (tk 1 )  Xˆ (tk , tk 1 )

X (tk )  Xˆ (tk ) q (tk )  qˆ (tk )

k 1

k

Figure 2. Schematic diagram of attitude data processing with bidirectional Kalman filter.

Figure 2. Schematic diagram of attitude data processing with bidirectional Kalman filter.

(1) Attitude forecast processing

When the star sensor does not output a measured value, the gyro measurement model is integrated from time $t_{k-1}$, from which the satellite quaternion forecast $(\hat{q}_{bi})_{k/k-1}$ can be derived:

$$\hat{q}(t_{k/k-1}) = e^{\frac{1}{2}\int_{t_{k-1}}^{t_k}\Omega(\hat{\omega}_{bi}(t_{k-1}))dt}\cdot\hat{q}(t_{k-1}) = \left[I + \frac{\frac{1}{2}\Delta\Theta}{1!} + \frac{(\frac{1}{2}\Delta\Theta)^2}{2!} + \frac{(\frac{1}{2}\Delta\Theta)^3}{3!} + \cdots\right]\cdot\hat{q}(t_{k-1})$$

$$\Delta\Theta = \int_{t_{k-1}}^{t_k}\Omega(\hat{\omega}_{bi}(t_{k-1}))dt = \int_{t_{k-1}}^{t_k}\begin{bmatrix} 0 & -\omega_x & -\omega_y & -\omega_z \\ \omega_x & 0 & \omega_z & -\omega_y \\ \omega_y & -\omega_z & 0 & \omega_x \\ \omega_z & \omega_y & -\omega_x & 0 \end{bmatrix}dt = \begin{bmatrix} 0 & -\Delta\theta_x & -\Delta\theta_y & -\Delta\theta_z \\ \Delta\theta_x & 0 & \Delta\theta_z & -\Delta\theta_y \\ \Delta\theta_y & -\Delta\theta_z & 0 & \Delta\theta_x \\ \Delta\theta_z & \Delta\theta_y & -\Delta\theta_x & 0 \end{bmatrix} \quad (38)$$

$$\Delta\Theta^2 = -\Delta\theta^2 I, \qquad \Delta\theta^2 = \Delta\theta_x^2 + \Delta\theta_y^2 + \Delta\theta_z^2$$

where $\Delta\theta_x$, $\Delta\theta_y$, $\Delta\theta_z$ are the incremental gyro angles about the X, Y, and Z axes over the sampling interval $[t_{k-1}, t_k]$. Furthermore, summing the series in closed form gives the recurrence relation:

$$\hat{q}(t_{k/k-1}) = \left[I\cos\frac{\Delta\theta}{2} + \frac{\Delta\Theta}{\Delta\theta}\sin\frac{\Delta\theta}{2}\right]\cdot\hat{q}(t_{k-1}) \quad (39)$$

Therefore, with Equation (39), the satellite quaternion predictive value $\hat{q}(t_{k/k-1})$ can be obtained. The gyro bias predictive value $\hat{b}_{k/k-1}$ is simply:

$$\hat{b}_{k/k-1} = \hat{b}_{k-1} \quad (40)$$
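The closed-form forecast of Equations (38) and (39) can be sketched as follows, with hypothetical gyro angle increments; repeated propagation about a fixed axis should accumulate the total rotation:

```python
import numpy as np

def delta_theta_matrix(dth):
    """DeltaTheta matrix of Equation (38) from gyro angle increments."""
    dx, dy, dz = dth
    return np.array([
        [0.0, -dx, -dy, -dz],
        [dx, 0.0, dz, -dy],
        [dy, -dz, 0.0, dx],
        [dz, dy, -dx, 0.0],
    ])

def propagate(q, dth):
    """Closed-form quaternion forecast of Equation (39):
    q_k/k-1 = [I cos(dtheta/2) + (DeltaTheta/dtheta) sin(dtheta/2)] q_k-1."""
    dtheta = np.linalg.norm(dth)
    if dtheta < 1e-12:
        return q
    M = np.eye(4) * np.cos(dtheta / 2) + delta_theta_matrix(dth) / dtheta * np.sin(dtheta / 2)
    return M @ q

# Hypothetical check: ninety 1-degree increments about X, from the identity
# quaternion, should yield a 90 degree rotation about X.
q = np.array([1.0, 0.0, 0.0, 0.0])
step = np.array([np.radians(1.0), 0.0, 0.0])
for _ in range(90):
    q = propagate(q, step)
print(q)  # close to [cos 45deg, sin 45deg, 0, 0]
```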


In addition, the predictive value $\hat{P}_{k/k-1}$ of the error covariance matrix is:

$$\hat{P}_{k/k-1} = \Phi_{k/k-1}\hat{P}_{k-1}\Phi_{k/k-1}^T + \Gamma_{k-1}Q_{k-1}\Gamma_{k-1}^T \quad (41)$$

(2) Attitude correction processing

At time $t_k$, the observation matrix $H_k$ is computed from the measurement equation, and the filter gain is:

$$K_k = P_{k/k-1}H_k^T[H_kP_{k/k-1}H_k^T + R_k]^{-1} \quad (42)$$

The corresponding filter state update is:

$$\hat{X}_k = \hat{X}_{k/k-1} + K_k(Z_k - H_k\hat{X}_{k/k-1}) \quad (43)$$

After obtaining the state variable $\hat{X}_k = [\Delta\hat{\vec{q}}_k^T\ \Delta\hat{b}_k^T]^T$ at time $t_k$, the gyro bias is corrected by the conventional method:

$$\hat{b}_k = \hat{b}_{k/k-1} + \Delta\hat{b}_k \quad (44)$$

The corrected quaternion $(\hat{q}_{bi})_k$ at time $t_k$ is:

$$(\hat{q}_{bi})_k = (\hat{q}_{bi})_{k/k-1}\otimes(\Delta\hat{q}_{bi})_k \quad (45)$$

With the constraint that the quaternion modulus equals 1, we obtain:

$$(\Delta\hat{q}_{bi})_k = \begin{bmatrix} \sqrt{1 - \Delta\hat{\vec{q}}_k^T\Delta\hat{\vec{q}}_k} \\ \Delta\hat{\vec{q}}_k \end{bmatrix} \quad (46)$$

The error covariance matrix is then updated:

$$P_k = (I - K_kH_k)P_{k/k-1}(I - K_kH_k)^T + K_kR_kK_k^T \quad (47)$$

$H_k$ is determined using the updated quaternion estimate, which improves the convergence of the filtering process. After correction of the satellite attitude quaternion and gyro bias, the predicted value of the state variable is zero, so the state variable $\hat{X}_k$ must be reset to zero.

(3) Covariance-weighted smoothing

The optimal estimate is obtained by averaging the forward and backward state estimates with weights based on their error covariance matrices, which minimizes the covariance of the result. The covariance-weighted smoothing algorithm is:

$$\begin{aligned} \hat{q}_{fb}(k) &= \hat{q}_b^{-1}(k)\otimes\hat{q}_f(k) \\ \Delta\hat{x}_{fb}(k) &= \left[\mathrm{sgn}(\hat{q}_{fb0})[\hat{q}_{fb1}\ \hat{q}_{fb2}\ \hat{q}_{fb3}]^T \quad (\hat{b}_f(k) - \hat{b}_b(k))^T\right]^T \\ \hat{P}_s(k) &= (\hat{P}_f^{-1}(k) + \hat{P}_b^{-1}(k))^{-1} \\ \Delta\hat{x}_s(k) &= [\Delta\hat{\vec{q}}_s^T(k)\ \Delta\hat{b}_s^T(k)]^T = \hat{P}_s(k)\hat{P}_f^{-1}(k)\Delta\hat{x}_{fb}(k) \\ \Delta\hat{q}_{s0}(k) &= \sqrt{1 - \Delta\hat{q}_{s1}^2(k) - \Delta\hat{q}_{s2}^2(k) - \Delta\hat{q}_{s3}^2(k)} \\ \hat{q}_s &= \hat{q}_b(k)\otimes\Delta\hat{q}_s(k) \\ \hat{b}_s(k) &= \hat{b}_b(k) + \Delta\hat{b}_s(k) \end{aligned} \quad (48)$$

in which $\otimes$ denotes quaternion multiplication, subscript $f$ the forward filtering result, $b$ the backward result, and $s$ the covariance-weighted smoothed result.
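The covariance-weighting idea of Equation (48) can be sketched in its vector-space form; the quaternion bookkeeping of Equation (48) is omitted here, leaving only the analogous fusion of forward and backward estimates:

```python
import numpy as np

def smooth(xf, Pf, xb, Pb):
    """Covariance-weighted fusion of forward and backward estimates,
    the vector-space analogue of Equation (48):
    Ps = (Pf^-1 + Pb^-1)^-1,  xs = xb + Ps Pf^-1 (xf - xb)."""
    Pf_inv, Pb_inv = np.linalg.inv(Pf), np.linalg.inv(Pb)
    Ps = np.linalg.inv(Pf_inv + Pb_inv)
    xs = xb + Ps @ Pf_inv @ (xf - xb)
    return xs, Ps

# Equal covariances: the smoothed estimate is the simple average,
# with half the covariance of either input.
xf, xb = np.array([1.0, 0.0]), np.array([0.0, 2.0])
Pf = Pb = np.eye(2)
xs, Ps = smooth(xf, Pf, xb, Pb)
print(xs, Ps[0, 0])
```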


2.5. Attitude Data Model Construction

A high-resolution optical satellite carries a line push-broom camera with an imaging frequency of up to tens of thousands of hertz (the imaging frequency is the number of image lines taken by the push-broom camera per second), while the frequency of the attitude data is much lower. Optical image geometric correction requires an accurate attitude for each image line; therefore, a reasonable model must be used to meet the image geometry processing requirement. Common fitting methods for attitude data include the following [29–31]:

(1) Lagrange polynomial interpolation

Lagrange polynomial interpolation is a commonly used interpolation method because it is simple and fast. The attitude parameters comprise Euler angles and quaternions. Assuming that $(\varphi, \omega, \kappa)$ represents the Euler angles and $(q_0, q_1, q_2, q_3)$ the quaternion, the interpolation model is:

$$\varphi = \sum_{j=1}^{n} \varphi_j W_j,\quad \omega = \sum_{j=1}^{n} \omega_j W_j,\quad \kappa = \sum_{j=1}^{n} \kappa_j W_j,\qquad W_j = \prod_{\substack{k=1 \\ k\neq j}}^{n} \frac{t - t_k}{t_j - t_k} \tag{49}$$

$$q_1 = \sum_{j=1}^{n} q_{1j} W_j,\quad q_2 = \sum_{j=1}^{n} q_{2j} W_j,\quad q_3 = \sum_{j=1}^{n} q_{3j} W_j,\qquad q_0 = \pm\sqrt{1 - q_1^2 - q_2^2 - q_3^2} \tag{50}$$
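The weights $W_j$ of Equation (49) can be evaluated directly. A minimal sketch in NumPy; the node times and Euler-angle samples are illustrative, not flight data:

```python
import numpy as np

def lagrange_weights(t, t_nodes):
    """Basis weights of Equation (49): W_j = prod_{k != j} (t - t_k)/(t_j - t_k)."""
    W = np.ones(len(t_nodes))
    for j, tj in enumerate(t_nodes):
        for k, tk in enumerate(t_nodes):
            if k != j:
                W[j] *= (t - tk) / (tj - tk)
    return W

# interpolate one Euler angle between four illustrative attitude samples
t_nodes = np.array([0.0, 1.0, 2.0, 3.0])
phi_nodes = np.array([0.10, 0.12, 0.15, 0.19])
W = lagrange_weights(1.5, t_nodes)
phi = float(W @ phi_nodes)
assert abs(W.sum() - 1.0) < 1e-12  # interpolating weights always sum to 1
```

Note that the sign ambiguity of $q_0$ in Equation (50) must be resolved by continuity with neighboring quaternion samples; direct component-wise interpolation also de-normalizes the quaternion slightly, which is one motivation for the SLERP model below.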

(2) Orthogonal polynomial fitting

Unlike ordinary polynomial fitting models, the orthogonal polynomial model effectively avoids an ill-conditioned normal matrix. The orthogonal polynomial fit of order $m-1$ for the $\varphi$ parameter can be expressed as:

$$P_\varphi(t) = a_0 + a_1 t + a_2 t^2 + \cdots + a_{m-1} t^{m-1},\qquad (m \le n) \tag{51}$$

Expressing the above equation as a linear combination of the orthogonal polynomials $\delta_j(t)$ $(j = 0, 1, \cdots, m-1)$, we obtain:

$$P_\varphi(t) = c_0 \delta_0(t) + c_1 \delta_1(t) + \cdots + c_{m-1} \delta_{m-1}(t) \tag{52}$$

in which the orthogonal polynomials $\delta_j(t)$ can be constructed with the recursion:

$$\delta_0(t) = 1;\quad \delta_1(t) = t - \alpha_1;\quad \delta_j(t) = (t - \alpha_j)\,\delta_{j-1}(t) - \beta_j\,\delta_{j-2}(t),\quad j = 2, \cdots, m-1 \tag{53}$$

The orthogonal polynomial fitting of the Euler angle parameters $\omega$, $\kappa$ and of the quaternion follows the same principle as above, so we obtain:

$$P_{q_1}(t) = \sum_{k=0}^{m-1} c_{q_1 k}\,\delta_{q_1 k}(t),\quad P_{q_2}(t) = \sum_{k=0}^{m-1} c_{q_2 k}\,\delta_{q_2 k}(t),\quad P_{q_3}(t) = \sum_{k=0}^{m-1} c_{q_3 k}\,\delta_{q_3 k}(t),\qquad q_0 = \pm\sqrt{1 - q_1^2 - q_2^2 - q_3^2} \tag{54}$$

(3) Spherical linear interpolation (SLERP) model


If $q_1$, $q_2$ are considered as two points on the unit sphere in four-dimensional space, SLERP travels along the shortest arc connecting them, which allows quaternion interpolation at a constant angular speed. Therefore, we can obtain:

$$q(t) = C_1(t)\,q_1 + C_2(t)\,q_2,\qquad C_1(t) = \frac{\sin((1-t)\theta)}{\sin\theta},\quad C_2(t) = \frac{\sin(t\theta)}{\sin\theta} \tag{55}$$

$$\theta = \cos^{-1}\langle q_1 \cdot q_2\rangle = \cos^{-1}(q_{10} q_{20} + q_{11} q_{21} + q_{12} q_{22} + q_{13} q_{23})$$
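Equation (55) translates almost directly into code; two practical guards are needed that the formula leaves implicit (taking the shorter of the two arcs, and the $\sin\theta \to 0$ degenerate case). A sketch, with illustrative quaternions:

```python
import numpy as np

def slerp(q1, q2, t):
    """Spherical linear interpolation of Equation (55) between unit quaternions
    (scalar-first convention), for t in [0, 1]."""
    q1 = q1 / np.linalg.norm(q1)
    q2 = q2 / np.linalg.norm(q2)
    d = np.clip(np.dot(q1, q2), -1.0, 1.0)
    if d < 0.0:                 # q and -q are the same rotation: take the shorter arc
        q2, d = -q2, -d
    theta = np.arccos(d)
    if theta < 1e-10:           # nearly parallel: fall back to linear interpolation
        q = (1.0 - t) * q1 + t * q2
    else:
        q = (np.sin((1.0 - t) * theta) * q1 + np.sin(t * theta) * q2) / np.sin(theta)
    return q / np.linalg.norm(q)

# midpoint between the identity and a 90-degree rotation about z
qa = np.array([1.0, 0.0, 0.0, 0.0])
qb = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
qm = slerp(qa, qb, 0.5)
# qm corresponds to a 45-degree rotation about z
```

Unlike component-wise Lagrange interpolation, the result stays exactly on the unit sphere and the angular rate between the two samples is constant.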

2.6. Precise Attitude Inversion in a Geometric Calibration Field

Due to the intrinsic nature of attitude, it is difficult to evaluate and verify its accuracy effectively. Based on real panchromatic images and the DOM/DEM of the geometric calibration field, dense image matching is used to achieve automatic measurement of high-precision control points, while a strict geometric imaging equation is used to achieve attitude calculation for the earth observation cameras. Furthermore, we can verify and evaluate the accuracy of the attitude determination algorithm. Figure 3 displays the flowchart of precision control-point matching and high-precision attitude inversion in a geometric calibration field. It is the core of on-ground attitude data processing for the Yaogan-24 remote sensing satellite and its verification. It comprises the following key processing steps: image feature point extraction between the real panchromatic images and the DOM of the geometric calibration field, image simulation, pyramid image matching, whole-pixel matching, sub-pixel matching, and gross error elimination [32–34]. The above method can achieve sub-pixel matching accuracy.

Figure 3. Flowchart of precision control-point matching and high-precision attitude inversion in a geometric calibration field.

The attitude accuracy includes two aspects: absolute and relative accuracy. Absolute accuracy refers to the external error, including the datum error, while relative accuracy refers to the internal error after deduction of the datum error. For the former, we analyze the geometric positioning accuracy of a panchromatic image to evaluate the absolute accuracy; for the latter, we take the inversion attitude of the panchromatic camera as a reference to evaluate the relative accuracy. The absolute and relative accuracy of the inversion attitude can reach 0.3 arcsec and 0.06 arcsec, respectively, under the following conditions:

(a) The Yaogan-24 remote sensing satellite uses dual-frequency GPS observations, a precise ephemeris, and a dynamic model to determine the orbit, by which centimeter-level orbit accuracy can be achieved;
(b) The satellite payloads have achieved high-precision time synchronization, and the synchronization error is negligible;
(c) The camera internal parameters have been precisely calibrated with the pixel pointing angle model, and the calibration accuracy is better than 0.3 pixel;
(d) The geometric calibration field is used, in which the absolute and relative accuracies of the control points are within 1 m and 0.2 m in plane, respectively.

The camera calibration model used in this paper is [3–5]:


$$V_{Image}^{cam} = \left(\frac{x}{f}, \frac{y}{f}, 1\right)^T = \left(\tan(\psi_x(s)),\ \tan(\psi_y(s)),\ 1\right)^T,\qquad
\begin{cases}
\psi_x(s) = a_{x0} + a_{x1} s + a_{x2} s^2 + a_{x3} s^3 \\
\psi_y(s) = a_{y0} + a_{y1} s + a_{y2} s^2 + a_{y3} s^3
\end{cases} \tag{56}$$

in which $(\psi_x(s), \psi_y(s))$ represents the pointing angle of probe element $s$; $a_{x0}, a_{x1}, a_{x2}, a_{x3}, a_{y0}, a_{y1}, a_{y2}, a_{y3}$ represent the fitting coefficients of the pointing angle; $x$, $y$ represent the coordinates of the probe element in the camera coordinate system; $f$ represents the focal length of the camera; and $V_{Image}^{cam}$ represents the observation vector of the probe element. Compared with the standard camera intrinsic parameter model, Equation (56) simplifies the camera internal error model and avoids the coupling of error parameters.

The tight geometric imaging model used in this paper is:

$$\begin{bmatrix} \tan(\psi_x(s)) \\ \tan(\psi_y(s)) \\ 1 \end{bmatrix}
= \lambda\, R_{body}^{cam}\left( R_{J2000}^{body}\, R_{wgs}^{J2000} \begin{bmatrix} X_g - X_{gps} \\ Y_g - Y_{gps} \\ Z_g - Z_{gps} \end{bmatrix}_{wgs} - \begin{bmatrix} B_X \\ B_Y \\ B_Z \end{bmatrix}_{body} \right) \tag{57}$$

in which $(X_g, Y_g, Z_g)$ represent the object space coordinates of the object points; $(X_{gps}, Y_{gps}, Z_{gps})$ and $(B_X, B_Y, B_Z)$ respectively represent the object space coordinates of the camera center and the GPS eccentric error; $\lambda$ represents a scaling factor; and $R_{wgs}^{J2000}$, $R_{J2000}^{body}$, and $R_{body}^{cam}$ represent the rotation matrices from the WGS84 coordinate system to the J2000 coordinate system, from the J2000 coordinate system to the satellite body coordinate system, and from the satellite body coordinate system to the camera coordinate system, respectively. From Equation (57), we can obtain a conversion model between the observation vector in the J2000 coordinate system and the observation vector in the camera measurement coordinate system:

$$\lambda^{-1}\left(R_{body}^{cam}\right)^{-1} \begin{bmatrix} \tan(\psi_x(s)) \\ \tan(\psi_y(s)) \\ 1 \end{bmatrix}
= R_{J2000}^{body}\, R_{wgs}^{J2000} \begin{bmatrix} X_g - X_{gps} \\ Y_g - Y_{gps} \\ Z_g - Z_{gps} \end{bmatrix}_{wgs} - \begin{bmatrix} B_X \\ B_Y \\ B_Z \end{bmatrix}_{body} \tag{58}$$
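Equations (56) and (58) reduce attitude inversion to two steps: build a camera-frame line-of-sight vector per matched control point, then solve for the rotation that aligns the camera-frame and reference-frame direction sets. The paper does not specify its estimator; the sketch below uses the standard SVD solution to Wahba's problem, and all coefficient values and vectors are synthetic:

```python
import numpy as np

def los_vector(s, ax, ay):
    """Unit camera-frame line-of-sight of probe element s from Equation (56)."""
    psi_x = ax[0] + ax[1] * s + ax[2] * s**2 + ax[3] * s**3
    psi_y = ay[0] + ay[1] * s + ay[2] * s**2 + ay[3] * s**3
    v = np.array([np.tan(psi_x), np.tan(psi_y), 1.0])
    return v / np.linalg.norm(v)

def attitude_from_vectors(v_cam, v_ref):
    """Rotation R with v_cam ~= R v_ref from >=2 non-collinear vector pairs,
    via the SVD solution to Wahba's problem."""
    B = sum(np.outer(a, b) for a, b in zip(v_cam, v_ref))
    U, _, Vt = np.linalg.svd(B)
    M = np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)])
    return U @ M @ Vt

# sanity check: recover a known rotation from three synthetic directions
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
v_ref = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 1.0])]
v_cam = [R_true @ v for v in v_ref]
R_est = attitude_from_vectors(v_cam, v_ref)
assert np.allclose(R_est, R_true)
```

The determinant correction in `M` guarantees a proper rotation (determinant +1) rather than a reflection, which matters when the vector pairs are noisy or nearly coplanar.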

With the line push-broom camera, when the number of non-collinear observation vectors from matched control points in a scan line is ≥2, the attitude parameters of that scan line at the corresponding time can be calculated from Equation (58). In theory, to ensure the precision and reliability of the attitude parameters, a larger number of matched control points is required, distributed evenly along the scan line [35,36]. The attitude accuracy depends mainly on the accuracy of the GPS orbit, the DEM/DOM accuracy of the reference calibration field data, and the number and distribution of the control points per scan line [34,37–39]. The frequency of the inversion attitude should equal that of the linear array camera imaging and can reach tens of thousands of hertz. However, changes in the calibration field are relatively large, making it impossible to match control points for every line. Therefore, we down-sample the frequency to tens of hertz.

3. Experiment and Discussion

An experiment was carried out using data provided by the Yaogan-24 remote sensing satellite, which was launched on 20 November 2014. The on-ground attitude data processing algorithm has been applied in the ground processing system of the China Resources Satellite Application Center.


3.1. Experimental Data

(1) Observation data of the Yaogan-24 remote sensing satellite

The Yaogan-24 remote sensing satellite's original observation data used for the experimental analysis mainly include dual-frequency GPS observations, original observations of the Astro10A and Astro10B star sensors, gyro observation data, line time data of imaging, and panchromatic image data. The data mentioned above were acquired during the satellite's in-orbit test.

(2) Geometric calibration field

The geometric calibration fields include Songshan, Anyang, Dongying, Sanya, Taiyuan, and Yili, all in China. In this paper, the Songshan and Anyang calibration fields were used. The Songshan field is located in Henan Province, central China, and features hilly terrain, 112°42′–113°54′ E/34°13′–35°2′ N, coverage 100 × 80 = 8000 km², an average altitude of approximately 500 m (the highest point is at 1491.73 m), and a maximum fluctuation ≤2000 m. The Songshan calibration field provides a region of 1:2000-scale digital orthophoto (DOM) and digital elevation model (DEM) reference data, in which the DOM ground geometric resolution was 0.2 m GSD with a plane accuracy ≤1 m, and the DEM ground resolution was 1 m GSD with an accuracy ≤2 m (Figure 4).

Figure 4. Reference data of the Songshan calibration field. (a) Digital orthophoto; (b) Digital elevation model.

The Anyang field is also located in Henan Province, China, featuring plain terrain, 114°19′–115°12′ E/35°44′–36°1′ N, coverage 90 × 30 = 2700 km², an average altitude of approximately 40 m (the highest point is 70 m), and a maximum fluctuation ≤100 m. The field provides a region of 1:1000-scale DOM and DEM reference data, in which the DOM ground geometric resolution was ≤0.1 m with a plane accuracy ≤0.5 m, and the DEM ground geometric resolution was ≤0.5 m with an accuracy ≤1 m (Figure 5).

Figure 5. Reference data of the Anyang calibration field. (a) Digital orthophoto; (b) Digital elevation model.


3.2. Experimental Results and Analysis

(1) Quality of the star sensor observation data

The star sensors are connected through a bracket and are vertically fixed. In theory, the angle between any two optical axes of the star sensors should be constant. Therefore, we performed a quality analysis of the original observation data by detecting variations of the optical axis angles. The general auxiliary data included only the raw observations of the Astro10A and Astro10B star sensors, so we focused on the raw data of these two sensors in our analysis.

First, we analyzed the quality of the raw observations in terms of the angle changes of the star sensors. Figures 6 and 7 show the errors in the angle change before and after treatment. Because the measurement accuracy of the optical axis of the Astro10 star sensor is ≤5 arcsec, according to the law of error propagation the optical axis angle error between two star sensors would be ≤√(5² + 5²) ≈ 7 arcsec.
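The invariant being monitored is the angle between the two boresight (+z) axes, computed from each sensor's measured quaternion. A sketch of that variation-detection quantity; the quaternion convention (scalar-first, sensor-to-inertial), the `optical_axis_angle` helper, and the 90° mounting angle are illustrative assumptions, not the Astro10 specification:

```python
import numpy as np

def quat_to_matrix(q):
    """Rotation matrix from a unit quaternion q = [q0, q1, q2, q3] (scalar first)."""
    q0, q1, q2, q3 = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(q2**2 + q3**2), 2*(q1*q2 - q0*q3),     2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),     1 - 2*(q1**2 + q3**2), 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),     2*(q2*q3 + q0*q1),     1 - 2*(q1**2 + q2**2)],
    ])

def optical_axis_angle(q_a, q_b):
    """Angle (arcsec) between the boresight (+z) axes of two star sensors whose
    measured quaternions map each sensor frame to a common inertial frame."""
    z = np.array([0.0, 0.0, 1.0])
    za = quat_to_matrix(q_a) @ z
    zb = quat_to_matrix(q_b) @ z
    cosang = np.clip(np.dot(za, zb), -1.0, 1.0)
    return np.degrees(np.arccos(cosang)) * 3600.0

# two sensors mounted 90 degrees apart about the x-axis
q_a = np.array([1.0, 0.0, 0.0, 0.0])
q_b = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0, 0.0])
angle = optical_axis_angle(q_a, q_b)   # nominally 90 deg = 324000 arcsec
```

Plotting this angle per epoch and flagging departures from its constant nominal value is exactly the variation-detection check used to expose the gross errors in Figure 6.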

Figure 6. Variations of the star sensor optical axis angle before treatment using photos taken in different calibration fields. (a) Yili field; (b) Songshan field; (c,d) Anyang field.

Figure 7. Variations of the star sensor optical axis angle after treatment using photos taken in different calibration fields. (a) Yili field; (b) Songshan field; (c,d) Anyang field.

As Figure 6 depicts, when a photo was taken in the Yili, Anyang, or Songshan calibration field, gross errors of 20–40 arcsec were present in the observation data during some epochs, which are much greater than the star sensor measurement accuracy. After being preprocessed by our algorithm, the gross errors were effectively corrected, and the optical axis angle error satisfied the indicator (Figure 7).

Figure 8 presents the distribution of the variations in the star sensor optical axis angle error, and Table 2 presents the statistics of the optical axis angle error characteristics. As Figure 8 and Table 2 show, the error of the angle change followed a normal distribution. Moreover, gross errors no longer existed in the observations, and the probability of a ±5 arcsec deviation appearing between the optical axes was 95%.

Figure 8. Normal distribution of the variations in star sensor optical axis angle when photos were taken in different calibration fields. (a) Yili field; (b) Songshan field; (c,d) Anyang field.

Table 2. Statistics of the optical axis angle change in the Astro10A and Astro10B star sensors at different times and places (unit: arcsec).

Date              Place      Average            Skewness   Kurtosis   Mean Square Error
27 November 2014  Yili       2.3483 × 10⁻¹¹     0.117      2.868      2.089
16 December 2014  Songshan   −1.3952 × 10⁻¹¹    −0.438     2.360      2.087
24 December 2014  Anyang     −1.15568 × 10⁻¹⁰   0.188      3.213      1.282
1 January 2015    Anyang     5.1159 × 10⁻¹¹     0.100      2.789      1.231
23 January 2015   Yili       7.548 × 10⁻¹²      −0.034     2.285      1.669
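The columns of Table 2 are ordinary sample moments of the angle-change series: mean, skewness, kurtosis (the non-excess definition, ~3 for a normal distribution, matching the table's values near 3), and a dispersion measure. A sketch with a hypothetical `moment_stats` helper and synthetic normal data; the paper's exact "mean square error" estimator is not specified, so a plain standard deviation stands in for it here:

```python
import numpy as np

def moment_stats(x):
    """Sample mean, skewness, kurtosis (non-excess), and standard deviation."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    sigma = x.std()
    z = (x - mu) / sigma
    return mu, (z**3).mean(), (z**4).mean(), sigma

# illustrative check on synthetic normally distributed angle-change errors
rng = np.random.default_rng(0)
mu, skew, kurt, sigma = moment_stats(rng.normal(0.0, 2.0, 100000))
# for a large normal sample: skew ~ 0, kurt ~ 3, sigma ~ 2
```

Skewness near 0 and kurtosis near 3 are exactly the normality signatures the table reports for the cleaned angle-change data.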

As described in Section 2.6, we further used the precise inversion attitude parameters as reference data to analyze the relative precision of the star sensor raw observations. Figures 9 and 10 show, respectively, the error distributions of the Astro10A and Astro10B star sensors in the yaw, roll, and pitch directions. The maximum deviations of Astro10A were −15–15 arcsec in the yaw direction, −10–10 arcsec in the roll direction, and −5–5 arcsec in the pitch direction, while those of Astro10B were −15–20, −15–15, and −15–15 arcsec, respectively. This is because only the optical axis of a star sensor achieves high pointing accuracy; the attitude determination accuracy of a single star sensor is limited, and a single sensor cannot be used directly for attitude determination. In addition, the error distributions in the three directions of the two sensors all followed normal distributions (Figures 11 and 12), and the relative precision of the measurements in the three directions was ≤12 arcsec (Figures 13 and 14), which is consistent with the star sensor design accuracy: the optical axis error is ≤5″ (3σ) and the horizontal axis error is ≤35″ (3σ). Therefore, the quality of the star sensor observation data used in our experiment is reliable for the subsequent processing.

Figure 9. The original observation accuracies of the Astro10A star sensor when photos were taken in the Anyang calibration field on 1 January 2015.

Figure 10. The original observation accuracies of the Astro10B star sensor when photos were taken in the Anyang calibration field on 1 January 2015.

Figure 11. Normal distribution of the original observation accuracies of the Astro10A star sensor. (a) yaw; (b) roll; (c) pitch.

Figure 12. Normal distribution of the original observation accuracies of the Astro10B star sensor. (a) yaw; (b) roll; (c) pitch.

Figure 13. Precision of the onboard Astro10A original observation data at different times and places (unit: arcsec).

Figure 14. Precision of the onboard Astro10B original observation data at different times and places (unit: arcsec).

(2) Convergence and stability after bidirectional filtering

On the basis of the measurement and state equations, we used a bidirectional filter and an overall weighted smoothing method to process the attitude data. Raw observational attitude data taken in the Anyang calibration field on 1 January 2015 were applied for data fusion, and the experimental results were analyzed.

Understanding the variation trend of a state error parameter is important to determine whether the filter of the attitude determination system is convergent and stable. To test the convergence and stability of the bidirectional filter and overall weighted smoothing method, we chose the state error

To test the convergence and stability of the bidirectional filter and overall weighted smoothing method, we chose as the state variables of the system the error characteristic parameters of the three vector components of the attitude error quaternion, X = [Δq₁₃ᵀ, Δbᵀ]ᵀ with Δq₁₃ = [Δq₁ Δq₂ Δq₃]ᵀ, together with the gyro bias error. Here, the error quaternion denotes the difference between the attitude predicted by the gyro and that corrected by the star sensor, and the gyro bias error denotes the bias correction estimated from the star sensor. With the bidirectional filter, the star sensor and gyro information can be fused and the variation trend of the error parameters stands out (Figures 15 and 16). We found that the Euler angle errors in the yaw, roll, and pitch directions varied steadily and randomly, and so did the gyro bias errors. The range of the Euler angle error was −0.02–0.03 arcsec, and that of the bias error was −4.0 × 10⁻⁴ to 6.0 × 10⁻⁴ deg/h. The estimates for the gyro bias changed over time (Figure 17). Therefore, we conclude that the gyro bias in the X, Y, and Z directions tended toward a constant value within 0.2 deg/h, meeting the gyro bias requirement of ≤2.0 deg/h.

To further verify the convergence and stability of the designed filter, we present a detailed description of the change in the gyro angle velocity estimates after star sensor and gyro information fusion. As shown in Figure 18, the values of the gyro angle velocity in the three directions became close to the true flight state of the satellite: in stable flight, the angular velocity measured by the gyro was the angular velocity of the satellite around the Earth, about 0.06 degrees per second, and the measurement noise was effectively smoothed out. For more detail, we list the average value and mean square error of the Euler angle and bias errors for photos taken in different calibration fields at different times; both errors tended toward a Gaussian distribution (Tables 3 and 4). Therefore, the bidirectional Kalman filter was reliable and maintained convergence and stability.
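The role of the overall weighted smoothing step can be illustrated with a minimal sketch (our own example, not the flight software): a forward-filter estimate and a backward-filter estimate of the same epoch are fused by inverse-covariance weighting, the standard two-filter form of fixed-interval smoothing. The six-element state layout [Δq₁, Δq₂, Δq₃, Δbx, Δby, Δbz] and all numbers below are illustrative only.

```python
import numpy as np

def fuse_bidirectional(x_f, P_f, x_b, P_b):
    """Fuse forward- and backward-filter estimates of the same epoch by
    inverse-covariance (information) weighting, as in two-filter smoothing."""
    I_f = np.linalg.inv(P_f)             # forward information matrix
    I_b = np.linalg.inv(P_b)             # backward information matrix
    P_s = np.linalg.inv(I_f + I_b)       # smoothed covariance
    x_s = P_s @ (I_f @ x_f + I_b @ x_b)  # smoothed state
    return x_s, P_s

# Illustrative 6-element error state: [dq1, dq2, dq3, dbx, dby, dbz]
x_f = np.array([1e-5, -2e-5, 3e-5, 1e-4, 0.0, -1e-4])
x_b = np.array([2e-5, -1e-5, 2e-5, 0.5e-4, 1e-4, -0.5e-4])
P_f = np.eye(6) * 1e-6
P_b = np.eye(6) * 1e-6
x_s, P_s = fuse_bidirectional(x_f, P_f, x_b, P_b)
```

With equal covariances the fused state reduces to the simple average of the two estimates and the smoothed covariance is halved, which is why the smoothed trajectory is less noisy than either single-direction filter.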
could maintain the convergence and stability. 0.03 0.03

yaw yaw

roll roll

pitch pitch

60 80 60 80 epoch epoch num num

100 100

arcsecond arcsecond

0.02 0.02 0.01 0.01 0 0 -0.01 -0.01 -0.02 -0.02 -0.03 -0.03 0 0

20 20

40 40

120 120

140 140

Figure angle error error based based on on star star sensor and gyro information fusion. Figure 15. 15. Variation Variation of star sensor sensor and and gyro gyro information information fusion. fusion. of Euler Euler angle -4

Figure 16. Variation of error bias based on star sensor and gyro information fusion.


Figure 17. Gyro bias estimates after star sensor and gyro information fusion.

Figure 18. Gyro angle velocity estimates after star sensor and gyro information fusion.

Table 3. Statistics of the Euler angle error at different times and places (unit: arcsec).

Calibration Fields    Average Value                           Mean Square Error
                      Yaw         Roll        Pitch           Yaw        Roll       Pitch
Yili                  0.00156     0.00035     0.00031         0.00863    0.01199    0.01067
Anyang                −0.00026    0.00003     0.00041         0.00429    0.00333    0.00423
Anyang                −0.00019    −0.00017    0.00018         0.00991    0.01013    0.01194
Yili                  −0.00164    0.00300     0.00119         0.01298    0.00988    0.01045
Songshan              −0.00109    −0.00132    −0.00008        0.00717    0.00732    0.00542

Table 4. Statistics of the gyro bias error at different times and places (unit: deg/h).

Calibration Fields    Average Value                                         Mean Square Error
                      Bx                By             Bz                   Bx            By            Bz
Yili                  −9.7333 × 10⁻⁶    −7.3 × 10⁻⁶    −3.5 × 10⁻⁵          0.00026       0.000237      0.000188
Anyang                −4.55149 × 10⁻⁷   −4 × 10⁻⁶      2.5 × 10⁻⁶           3.97 × 10⁻⁵   3.06 × 10⁻⁵   3.98 × 10⁻⁵
Anyang                4.19674 × 10⁻⁶    −4.5 × 10⁻⁶    4.98 × 10⁻⁶          0.000211      0.000218      0.000246
Yili                  −6.8224 × 10⁻⁵    −2.7 × 10⁻⁵    3.84 × 10⁻⁵          0.000278      0.000214      0.000234
Songshan              1.52775 × 10⁻⁵    1.1 × 10⁻⁶     1.19 × 10⁻⁵          7.91 × 10⁻⁵   5.79 × 10⁻⁵   7.67 × 10⁻⁵

(3) Relative attitude accuracy

Due to the nature of attitude data, it is difficult to verify their accuracy and reliability directly. As described in Section 2.6, we again used precise attitude data calculated from an optical image of the geometric calibration field as reference data. We converted the attitude parameters from the body coordinate system relative to the inertial coordinate system into the body coordinate system relative to the orbit coordinate system, which makes it more convenient to analyze the processing precision of the optical image geometry in the along-track direction and the direction perpendicular to the orbit. Figures 19–21 present the relative attitude error distributions using photos taken in the Yili, Anyang, and Songshan calibration fields. As figure parts (a) and (b) indicate, we used a multi-star sensor combination and a star sensor and gyro combination to process the attitude data.
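The coordinate conversion can be sketched as follows. This is our own minimal illustration with one common orbital-frame convention (+Z toward nadir, +Y opposite the orbit normal, +X completing the right-handed triad); the paper does not state Yaogan-24's exact convention, and the orbit state shown is synthetic, not a real ephemeris.

```python
import numpy as np

def orbit_frame(r_eci, v_eci):
    """Rotation matrix from the ECI frame to a local orbital frame.
    Assumed convention: +Z toward nadir, +Y opposite the orbit normal,
    +X completing the right-handed triad (roughly along velocity)."""
    z = -r_eci / np.linalg.norm(r_eci)      # nadir-pointing
    y = -np.cross(r_eci, v_eci)             # opposite orbit normal
    y /= np.linalg.norm(y)
    x = np.cross(y, z)                      # completes the triad
    return np.vstack((x, y, z))             # rows: orbit axes in ECI

def body_to_orbit(R_body_eci, r_eci, v_eci):
    """Re-express a body<-ECI attitude matrix as body<-orbit."""
    R_orbit_eci = orbit_frame(r_eci, v_eci)
    return R_body_eci @ R_orbit_eci.T       # R_body<-orbit

# Illustrative circular-orbit state (synthetic values)
r = np.array([7.02e6, 0.0, 0.0])            # position, m
v = np.array([0.0, 7.5e3, 0.0])             # velocity, m/s
R_bo = body_to_orbit(np.eye(3), r, v)       # body frame = ECI here
```

Because the result is a pure rotation, R_bo stays orthonormal with determinant +1, which is a convenient sanity check after any such conversion.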

Figure 19. Distribution of relative attitude error using photos taken in the Yili calibration field: (a) in multi-star sensor combination; (b) in star sensor and gyro combination.

Figure 20. Distribution of relative attitude error using photos taken in the Anyang calibration field: (a) in multi-star sensor combination; (b) in star sensor and gyro combination.


Figure 21. Distribution of relative attitude error using photos taken in the Songshan calibration field: (a) in multi-star sensor combination; (b) in star sensor and gyro combination.

The results show that in the multi-star sensor combination, the range of the relative attitude error in the yaw, roll, and pitch directions could reach the level of −2–3 arcsec, while in the star sensor and gyro combination, the relative attitude error stayed within the sub-arcsec level. We also analyzed the reliability of our proposed algorithm; Table 5 presents statistics on the mean square error of the relative attitude in the different attitude-sensor combinations. In the multi-star sensor combination, the mean square error was approximately 1 arcsec, and it was ≤0.5 arcsec in the star sensor and gyro combination. The processing effect in the multi-star sensor combination was worse than in the star sensor and gyro combination because the attitude in the multi-star sensor combination contained high-frequency noise; in the star sensor and gyro combination, the star sensor's high-frequency noise could be smoothed out, and the gyro bias could be estimated to optimize the attitude estimate.

Table 5. Mean square error of the relative attitude in different attitude sensor combinations (unit: arcsec).
The processing effect in theeffect multi-star sensor combination combination was worse than in the star sensor and gyro combination because the attitude in the was worse than in the star sensor and gyro combination because the attitude in the multi-star sensor multi-star sensor combination contained high-frequency noise; thecombination, star sensor star andsensor gyro combination contained high-frequency noise; in the star sensor andingyro combination, star sensor high-frequency noise could be smoothed out, and the gyro bias could be high-frequency noise could be smoothed out, and the gyro bias could be estimated to optimize the estimated to optimize the attitude data estimation. attitude data estimation. Table 5. Mean Mean square square error of the the relative relative attitude attitude in in different different attitude sensor combinations (unit: arcsec).

Calibration Field    Multi-Star Sensor Combination      Star Sensor and Gyro Combination
                     Yaw      Roll     Pitch            Yaw      Roll     Pitch
Yili                 0.727    0.868    0.811            0.272    0.611    0.477
Anyang               0.699    0.773    0.543            0.418    0.303    0.161
Anyang               1.005    0.782    0.796            0.326    0.386    0.294
Yili                 0.789    0.993    0.733            0.354    0.416    0.159
Songshan             1.022    1.029    0.777            0.425    0.513    0.291
RMS                  0.859    0.895    0.738            0.363    0.458    0.299

(4) Accuracy analysis of the attitude fitting model

Although precise attitude parameters could be obtained through the star sensor and gyro combination, the sampling frequency was only 4–8 Hz, far below the imaging frequency (20,000 Hz) of the satellite's line push-broom camera. The resolution of the satellite panchromatic camera in this research was 1 m and the orbital altitude was 645 km; to meet a panchromatic relative geometric accuracy of better than 1 pixel, the fitting accuracy of the attitude parameter model should be within 0.3 arcsec. Therefore, we focused on how to obtain precise attitude parameters for each scan line of the push-broom camera. Three attitude-fitting methods are described in detail in Section 2.5: Lagrange polynomial interpolation, orthogonal polynomial fitting, and the spherical linear interpolation model. We applied the different fitting models to the attitude parameters obtained from the star sensor and gyro combination and evaluated their fitting accuracy against the precise attitude.

The attitude data used with the different fitting models were down-sampled from 8 to 4 Hz, allowing us to fit attitude parameters in their different forms of expression, and we then used the original 8 Hz attitude parameters as reference values to evaluate the fitting accuracy. Figures 22–24 describe, respectively, the attitude fitting accuracy distributions of the Lagrange polynomial interpolation, orthogonal polynomial fitting, and spherical linear interpolation models.

Figure 22. Distribution of attitude fitting accuracy in the Lagrange polynomial interpolation model: (a) fitting based on Euler angle parameters; (b) fitting based on quaternion parameters.
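The first method can be sketched compactly. The snippet below is our own minimal illustration of Lagrange polynomial interpolation over a sliding window of low-rate attitude samples; the epoch times and angle values are illustrative, not flight data.

```python
import numpy as np

def lagrange_interp(t_nodes, y_nodes, t):
    """Evaluate the Lagrange interpolating polynomial through the
    points (t_nodes, y_nodes) at time t."""
    result = 0.0
    n = len(t_nodes)
    for i in range(n):
        basis = 1.0
        for j in range(n):
            if j != i:
                # Lagrange basis polynomial l_i(t)
                basis *= (t - t_nodes[j]) / (t_nodes[i] - t_nodes[j])
        result += y_nodes[i] * basis
    return result

# Four consecutive 4 Hz Euler-angle samples (illustrative values, deg)
t_nodes = np.array([0.00, 0.25, 0.50, 0.75])      # attitude epochs, s
yaw     = np.array([0.0100, 0.0123, 0.0111, 0.0095])

# Attitude for one camera scan line whose time falls between samples
yaw_line = lagrange_interp(t_nodes, yaw, 0.30)
```

In practice the window slides so that each 20 kHz scan-line time is bracketed by the nearest low-rate attitude epochs.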

Figure 23. Distribution of attitude-fitting accuracy in the orthogonal polynomial fitting model: (a) Fitting based on Euler angle parameters; (b) Fitting based on quaternion parameters.

Figure 24. Distribution of attitude fitting accuracy in the spherical linear interpolation model.
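The spherical linear interpolation model can be sketched as follows; this is the textbook Slerp formula applied to two neighboring quaternion samples, not the paper's exact implementation, and the sample quaternions are illustrative.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1
    at normalized time t in [0, 1]."""
    q0 = q0 / np.linalg.norm(q0)
    q1 = q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:                  # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:               # nearly parallel: linear fallback
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)         # angle between the two samples
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

# Two neighboring attitude samples: identity and a 0.2 rad rotation about X
q_a = np.array([1.0, 0.0, 0.0, 0.0])
q_b = np.array([np.cos(0.1), np.sin(0.1), 0.0, 0.0])
q_mid = slerp(q_a, q_b, 0.5)       # attitude of a scan line halfway between
```

Slerp moves at constant angular rate along the great arc between the two samples, so the midpoint is exactly the half-angle rotation.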


Parts (a) and (b) of each figure indicate the attitude parameter in the Euler angle and quaternion forms, respectively. The above-mentioned figures lead us to believe that we can control the attitude fitting accuracy in the yaw, roll, and pitch directions within a level of 0.3 arcsec. Attitude parameters in both the Euler angle and quaternion forms could be used to build mathematical fitting models, and all three types of fitting model could be used to fit the attitude parameter. Table 6 lists the statistical details of the fitting accuracy in the yaw, roll, and pitch directions for the different fitting methods. The results show that the orthogonal polynomial fitting model was the most suitable for building mathematical models and could ensure the relative geometric accuracy of the optical image.

Table 6. The mean square error of attitude fitting accuracy in different fitting models (unit: arcsec).

Fitting Model            Fitting on Euler                Fitting on Quaternion
                         Yaw      Roll     Pitch         Yaw      Roll     Pitch
Lagrange polynomial      0.209    0.207    0.249         0.204    0.203    0.244
Orthogonal polynomial    0.141    0.101    0.136         0.142    0.105    0.135
Spherical linear         NULL     NULL     NULL          0.272    0.225    0.119
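As a hedged illustration of orthogonal polynomial fitting: the paper does not state which orthogonal basis it uses, so the sketch below adopts a Chebyshev least-squares fit via NumPy as one common choice, with a synthetic low-rate yaw signal standing in for real attitude data.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Illustrative low-rate yaw samples over one second (synthetic, deg)
t = np.linspace(0.0, 1.0, 5)
yaw = 0.01 + 0.004 * t - 0.002 * t**2      # smooth synthetic signal

# Least-squares fit in an orthogonal (Chebyshev) basis; a low degree
# smooths measurement noise instead of chasing it.
coeffs = C.chebfit(t, yaw, deg=2)

# Evaluate the fitted attitude at every camera scan-line time (20 kHz)
t_lines = np.linspace(0.0, 1.0, 20000)
yaw_lines = C.chebval(t_lines, coeffs)
```

The orthogonal basis keeps the normal equations well conditioned, which is one reason such fits behave better than high-degree monomial fits over long arcs.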

(5) Imagery processing accuracy

We used post-processed precise orbit data, whose accuracy could reach the centimeter level, to provide precise exterior orientation line elements for image geometry processing. Therefore, given the high-precision calibration and time synchronization, the attitude data were used directly for image geometry processing, which directly reflects the quality of the data. Section 3.1 describes in detail the geometric calibration field data for checking geometric accuracy, including the Songshan, Anyang, Dongying, Sanya, Taiyuan, and Yili calibration fields, and we checked the quality of the attitude parameters on the basis of geometric calibration field images taken by the panchromatic camera and the DOM/DEM reference data. Wuhan University developed an optical satellite ground pretreatment system for Yaogan-24 remote sensing satellite image processing, and we conducted the experiments on this platform. The optical satellite ground pretreatment includes radiation treatment, sensor calibration, geometric correction, and so on. We took the attitude data as the input for image preprocessing and then analyzed the geometric accuracy on the basis of the geometric correction product and the DOM/DEM reference data of the calibration field. Figures 25 and 26 show the distribution of correspondence points between the DOM images and the geometric correction images taken by the satellite panchromatic camera in Songshan on 16 March 2015 and in Anyang on 9 February 2015. The correspondence points between the DOM and geometric correction images were sufficient and evenly distributed, ensuring the reliability of the geometric precision evaluation. Figures 27 and 28 show the distribution of the relative geometric accuracy in the cross-track and along-track directions of the panchromatic image with respect to the DOM/DEM of the calibration field. The relative geometric accuracies in the cross-track and along-track directions were clearly between 1.5 and 2.0 pixels.
Meanwhile, we concluded that the relative accuracy of the attitude data was 0.3–0.5 arcsec with respect to the precise attitude data calculated from the optical image in the geometric calibration field in Section 3.2; the two conclusions thus confirm each other. Tables 7 and 8 respectively show detailed statistics on the uncontrolled and relative positioning accuracy of the satellite geometric correction images taken by the panchromatic camera based on on-board and on-ground processed attitude data. Furthermore, we could conclude that the side swing angle did not affect the uncontrolled geometric correction accuracy when images were taken in different calibration fields, and that the uncontrolled and relative positioning accuracy of the geometric correction image based on on-ground processed attitude data was about 15 m and 1.3 pixels, compared to about 30 m and 2.4 pixels with the on-board attitude data.

This amounts to an improvement of about 50%. As the attitude determination accuracy of the star sensor configured on the satellite was 5 arcsec and the orbital altitude of the satellite was 645 km, 1 arcsec corresponded to a ground error of 3.127 m. Theoretically, the uncontrolled positioning precision of the satellite should therefore be within 15 m, so the attitude determination accuracy of the star sensor and the image positioning accuracy without control points were consistent.
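The quoted conversion follows from small-angle geometry; as a quick check of the arithmetic:

```python
import math

ALTITUDE_M = 645_000.0                    # Yaogan-24 orbital altitude
ARCSEC = math.radians(1.0 / 3600.0)       # one arcsecond in radians

# A small pointing error projected from orbit altitude to the ground
ground_error_m = ALTITUDE_M * math.tan(ARCSEC)   # about 3.127 m per arcsec

# A 5 arcsec star-sensor attitude accuracy therefore bounds the
# uncontrolled positioning error at roughly 5 * 3.127, i.e. around 15 m.
budget_m = 5.0 * ground_error_m
```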

Figure 25. Correspondence points between the DOM image and geometric correction image of the satellite panchromatic camera taken in Songshan on 16 March 2015: (a) panchromatic image; (b) reference image.

Figure 26. Correspondence points between the DOM and the geometric correction images of the satellite panchromatic camera taken in Anyang on 9 February 2015: (a) panchromatic image; (b) reference image.

Figure 27. Distribution of the geometric relative accuracy in the cross-track and along-track directions of a panchromatic image taken in Songshan on 16 March 2015.


Figure 28. Distribution of the geometric relative accuracy in the cross-track and along-track directions of a panchromatic image taken in Anyang on 9 February 2015.

Table 7. Uncontrolled and relative positioning accuracy of a geometric correction image taken by the panchromatic camera based on on-board attitude data (unit: m).

| Time | Side Swing (°) | Region | Positioning Accuracy | Average Offset dx | Average Offset dy | Relative Accuracy mx | Relative Accuracy my |
|---|---|---|---|---|---|---|---|
| 27 December 2014 15:27 | −3.820 | Yili | 19.643 | 16.788 | −10.199 | 1.942 | 2.438 |
| 9 February 2015 11:10 | −5.532 | Anyang | 25.364 | 21.329 | 13.727 | 2.101 | 1.854 |
| 16 March 2015 13:20 | 4.152 | Songshan | 31.570 | −18.693 | −25.441 | 2.478 | 2.135 |
| 1 May 2015 16:13 | 3.318 | Anyang | 10.883 | 9.233 | −5.762 | 1.873 | 2.195 |
| 23 July 2015 15:27 | −2.243 | Yili | 35.982 | −34.288 | 10.912 | 2.711 | 2.389 |
| 12 September 2015 14:15 | 7.436 | Dongying | 29.998 | 22.788 | −19.509 | 2.788 | 1.993 |
| 20 October 2015 13:30 | −5.616 | Sanya | 45.548 | 37.255 | −26.205 | 1.893 | 2.154 |
| 25 December 2015 16:55 | 2.378 | Taiyuan | 27.536 | −24.406 | −12.752 | 3.117 | 2.082 |
| RMS | | | 29.941 | 24.622 | 17.036 | 2.404 | 2.162 |

Table 8. Uncontrolled and relative positioning accuracy of a geometric correction image taken by the panchromatic camera based on ground processing attitude data (unit: m).

| Time | Side Swing (°) | Region | Positioning Accuracy | Average Offset dx | Average Offset dy | Relative Accuracy mx | Relative Accuracy my |
|---|---|---|---|---|---|---|---|
| 27 December 2014 15:27 | −3.820 | Yili | 13.141 | −11.581 | −6.212 | 1.665 | 1.721 |
| 9 February 2015 11:10 | −5.532 | Anyang | 6.429 | 5.356 | −3.557 | 1.355 | 1.399 |
| 16 March 2015 13:20 | 4.152 | Songshan | 7.020 | 4.271 | −5.572 | 1.323 | 1.413 |
| 1 May 2015 16:13 | 3.318 | Anyang | 8.381 | −3.564 | −7.586 | 0.831 | 1.896 |
| 23 July 2015 15:27 | −2.243 | Yili | 15.747 | −14.197 | 6.815 | 1.385 | 1.125 |
| 12 September 2015 14:15 | 7.436 | Dongying | 18.288 | 15.734 | −9.323 | 1.255 | 1.133 |
| 20 October 2015 13:30 | −5.616 | Sanya | 21.099 | 16.388 | −13.290 | 1.421 | 0.847 |
| 25 December 2015 16:55 | 2.378 | Taiyuan | 17.676 | −14.988 | −9.372 | 1.006 | 1.155 |
| RMS | | | 14.464 | 11.916 | 8.197 | 1.302 | 1.374 |
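As a quick cross-check of the roughly 50% improvement reported for on-ground attitude processing, the RMS rows of Tables 7 and 8 can be recomputed from the per-scene uncontrolled positioning accuracies listed above (a minimal sketch; the numeric inputs are copied directly from the two tables):

```python
import math

def rms(values):
    """Root mean square of a list of per-scene positioning accuracies (m)."""
    return math.sqrt(sum(v * v for v in values) / len(values))

# Per-scene uncontrolled positioning accuracy (m) from Tables 7 and 8
onboard = [19.643, 25.364, 31.570, 10.883, 35.982, 29.998, 45.548, 27.536]
onground = [13.141, 6.429, 7.020, 8.381, 15.747, 18.288, 21.099, 17.676]

print(round(rms(onboard), 3))   # 29.941, matching the Table 7 RMS row
print(round(rms(onground), 3))  # 14.464, matching the Table 8 RMS row
print(round(100 * (1 - rms(onground) / rms(onboard)), 1))  # 51.7 (% reduction)
```

The ~51.7% reduction in RMS positioning error is consistent with the "about 50%" improvement stated in the abstract and conclusions.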

4. Conclusions

In this paper, we proposed a method for the on-ground processing of attitude data from the Yaogan-24 remote sensing satellite, together with verification based on a geometric calibration field. Applying the proposed algorithms yielded several significant results.

First, the star sensor observation data can be effectively preprocessed: the optical-axis angle error and the original observation errors in the yaw, roll, and pitch directions were consistent with the design accuracy of the star sensor and were normally distributed.

Second, applying the bidirectional filter and overall weighted smoothing to attitude data fusion, and evaluating the resulting performance, showed that the on-ground processing model achieves bidirectional convergence, which ensures that the method is robust and feasible.

Third, different attitude fitting models were analyzed. The results showed that both the Euler-angle and quaternion attitude parameters can be used to build a mathematical fitting model, and that the orthogonal polynomial fitting model is more suitable for this purpose and better preserves relative geometric accuracy.

Finally, evaluating the relative and absolute accuracy of the attitude results obtained from the proposed algorithm was important. The experimental results show that the relative accuracy of the attitude data was 0.3–0.5 arcsec, and the relative geometric accuracy in the cross-track and along-track directions was between 1.5 and 2.0 pixels. The attitude determination accuracy of the star sensors carried by the satellite is 5 arcsec, while the uncontrolled positioning accuracy of the satellite's panchromatic imagery was within 15 m; these two results confirm each other. In addition, compared with results based on on-board attitude data, the uncontrolled and relative geometric positioning accuracy of the panchromatic image was effectively improved.
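The advantage of the orthogonal polynomial fitting model noted above can be illustrated with a small sketch. The timestamps and roll-angle series below are invented for illustration, and NumPy's Legendre basis stands in for whichever orthogonal basis the actual processing chain uses:

```python
import numpy as np

# Synthetic attitude timeline: 33 samples over 8 s (illustrative values only)
t = np.linspace(0.0, 8.0, 33)
roll = 0.02 - 3e-4 * t + 1e-4 * t**2   # smooth synthetic roll-angle series, rad

# Legendre.fit maps t onto [-1, 1] internally, so the normal equations stay
# well conditioned -- the practical advantage of an orthogonal basis over
# fitting raw powers of t.
model = np.polynomial.Legendre.fit(t, roll, deg=3)
residual = roll - model(t)
print(bool(np.max(np.abs(residual)) < 1e-12))  # True: the fit recovers the series
```

For the short fitting windows typical of attitude smoothing, a low-degree orthogonal fit like this interpolates the filtered attitude without the numerical instability a plain power-basis fit can exhibit.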
Note that this paper does not address topics such as error sources in the attitude sensors or sensor calibration. In future work, we will focus on error characteristics, error model construction, relative and absolute calibration model construction, and the effect of the generated calibration parameters on image geometric precision. Moreover, because the accuracy and frequency of attitude data have become key factors in high-resolution optical satellite image geometry processing, attitude data from high-frequency angular displacement sensors will also be considered and discussed in future work.

Acknowledgments: This work was substantially supported by the National Basic Research Program of China (973 Program, Project Nos. 2014CB744201, 2012CB719902, 2012CB719901) and the National Natural Science Foundation of China (Project Nos. 41371430, 91438112, 91438203, 91438111). The authors would like to thank the anonymous reviewers for their detailed reviews, valuable comments, and constructive suggestions.

Author Contributions: The first author conceived the study and designed the experiments; the second author developed the algorithm and wrote the program; the third author performed the experiments; the fourth and fifth authors helped perform the analysis with constructive discussions and contributed to manuscript preparation.

Conflicts of Interest: The authors declare no conflicts of interest.


© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
