J. Opt. Soc. Am. A / Vol. 30, No. 7 / July 2013

Improving wavefront reconstruction accuracy by using integration equations with higher-order truncation errors in the Southwell geometry

Guanghui Li, Yanqiu Li,* Ke Liu, Xu Ma, and Hai Wang

Key Laboratory of Photoelectron Imaging Technique and System (Ministry of Education of China), School of Optoelectronics, Beijing Institute of Technology, Beijing 100081, China
*Corresponding author: [email protected]

Received March 4, 2013; revised June 7, 2013; accepted June 11, 2013; posted June 12, 2013 (Doc. ID 186329); published June 28, 2013

Least-squares (LS)-based integration computes the function values by solving a set of integration equations (IEs) in the LS sense, and is widely used in wavefront reconstruction and other fields where the measured data are slopes. It is expected that applying IEs with smaller truncation errors (TEs) will improve the reconstruction accuracy. This paper proposes a general method based on the Taylor theorem to derive all kinds of IEs, and finds that an IE with a smaller TE is one with a higher-order TE. Three specific IEs with higher-order TEs in the Southwell geometry are deduced using this method, and three LS-based integration algorithms corresponding to these three IEs are formulated. A series of simulations demonstrates the validity of applying IEs with higher-order TEs to improve reconstruction accuracy. In addition, the IEs with higher-order TEs in the Hudgin and Fried geometries are also deduced using the proposed method, and the performances of these IEs in wavefront reconstruction are presented. © 2013 Optical Society of America

OCIS codes: (010.7350) Wave-front sensing; (100.5070) Phase retrieval; (220.4840) Testing.
http://dx.doi.org/10.1364/JOSAA.30.001448

1. INTRODUCTION
By measuring the wavefront slopes, Shack–Hartmann wavefront sensors are widely used to analyze the optical aberrations in the areas of ground-based astronomy [1], ophthalmology [2], and optical shop testing [3]. The output data of deflectometry are also in the form of slopes, which are widely used in shape metrology for industrial inspection and surface topography [4–7]. Thus, we need to convert the measured slopes to the wavefront or surface shape to be estimated; this conversion should be carefully dealt with, since it influences the measurement accuracy.

Wavefront or surface shape reconstruction from slopes is a mathematical integration process in which accuracy and computation time are two important factors. Much literature has been published in the past concerning these two factors. The discrete Fourier transform (DFT)-based integration algorithms compute the wavefront values in the frequency domain [8–14]. Fast speed can be achieved for DFT-based algorithms using fast Fourier transforms (FFTs), since the FFTs need only O(n log_2 n) operations [15], where n is the number of wavefront values to be calculated. However, the applications of DFT-based algorithms require that the pupil shape be rectangular and that all the measured data points be available in the pupil. But circular, or even irregular, pupils are more common in practical situations. Extra operations of extending the circular or irregular pupils into rectangular ones, such as the iterative Fourier transforms [9] and the extensions of slopes at the pupil edge [10], are necessary to apply DFT-based algorithms. On the other hand, there are also least-squares (LS)-based integration algorithms [16–23] that can also achieve fast speeds using sparse matrix methods with O(n^{3/2}) operations [15,24]

and, particularly, have the advantage of being pupil-shape free. The LS-based algorithms can be presented as follows: a set of integration equations (IEs) that indicate the finite difference relationship between local wavefront values and slopes is obtained; then the LS method is applied to solve the obtained set of IEs, since the number of IEs is always larger than the number of unknown wavefront values. The focus of this paper is on LS-based integration algorithms.

Southwell proposed a classical LS integration algorithm for his geometry, which assumes that the wavefront values to be computed are coincident with the slope measurements [19]. As stated by Southwell, his algorithm can reconstruct the wavefronts having any combination of tilts, defocus, and astigmatism. However, for the wavefronts containing higher-order aberrations, his algorithm gives imperfect reconstruction. Huang and Asundi called this limitation a biquadratic function problem, and they proposed an operation with iterative compensations to improve the reconstruction accuracy for the biquadratic and higher-order aberrations [23]. Even in the presence of measurement noise, Huang and Asundi's algorithm can obtain a performance gain in reconstruction accuracy. However, their algorithm takes more time, since iterative compensations are applied. In this paper, we try to improve the reconstruction accuracy for the wavefronts having aberrations higher than the third order and, at the same time, without increasing the computation time.

The IE used in Southwell's LS-based integration algorithm is the first Newton–Cotes quadrature rule (the trapezoidal rule) [11,12], which indicates the finite difference relationship between the function values and the slopes at two adjacent points in one dimension. As will be shown in Section 2.A, this

IE has a truncation error (TE) of O(h^3) (where h is the interval of the two adjacent points). Generally, the smaller the TE in an approximate calculation, the more accurate the result we can obtain. Thus, we expect that applying an IE with a smaller TE in wavefront reconstruction can improve the reconstruction accuracy. The point is, then, how to obtain an IE with a smaller TE.

This paper proposes a general method, based on the Taylor theorem [25], to deduce the finite difference relationship between several adjacent discrete function values and the slopes. The mathematical expression of this finite difference relationship is the IE. By containing different numbers of function values and slopes, different IEs can be deduced using the proposed method. The deduced IEs differ in their TEs, and the IE with the smaller TE is the one containing more function values and slopes. We can use this method to deduce the IEs with smaller TEs in the Southwell, Hudgin, and Fried geometries. Two IEs with TEs of O(h^5) in the Southwell geometry are deduced. One contains two function values and four slopes, and the other contains four function values and two slopes. We also deduce another IE with a TE of O(h^7), where the IE contains four function values and four slopes. Since the interval h is usually very small in practical situations, O(h^5) is much smaller than O(h^3), and O(h^7) is much smaller than O(h^5). Applying the three IEs to wavefront reconstruction, we formulate three corresponding LS-based integration algorithms. The validity of improving wavefront reconstruction accuracy by using IEs with higher-order TEs in the Southwell geometry is shown by a set of simulations. In the Hudgin and Fried geometries, we also deduce the IEs with higher-order TEs using the proposed method, and formulate the corresponding LS-based integration algorithms.
However, the simulations indicate that the improvement of the wavefront reconstruction accuracy by using IEs with higher-order TEs in the Hudgin and Fried geometries is not as apparent as that in the Southwell geometry. Therefore, this paper mainly focuses on improving the wavefront reconstruction accuracy in the Southwell geometry, and we will discuss the cases of the Hudgin and Fried geometries at the end of this paper.

We will begin the remainder of the paper by introducing the Taylor theorem and analyzing the TE of the IE in Southwell's algorithm in Section 2.A. The proposed general method is given in Section 2.B. Section 2.C gives the three IEs and their corresponding LS-based integration algorithms. Simulations are conducted in Section 3 to compare the reconstruction accuracy of Southwell's algorithm, Huang and Asundi's iterative algorithm, and the three algorithms applying IEs with higher-order TEs. The comparisons are investigated from the aspects of the reconstruction error with noise ignored, the noise propagation, and the reconstruction error with noise considered. In Section 4, we include discussions from the aspects of accuracy and computation time for the LS-based algorithms in the Southwell geometry. The applications of the method in the Hudgin and Fried geometries are also discussed. The work is concluded in Section 5.

2. METHOD AND THREE ALGORITHMS

A. Taylor Theorem and Truncation Error
The Taylor theorem [25] provides the analytical tool for approximate calculation and for evaluation of the calculation errors. The (n + 1)th-order Taylor theorem tells us that, if the function f(x) has the derivatives f'(x), …, f^{(n+1)}(x), up to the order n + 1 in the open interval (x_1, x_2), then f(x) can be expanded at x_0 ∈ (x_1, x_2) as

f(x) = P_n(x_0; x) + r_n(x_0; x),  (1)

in which

P_n(x_0; x) = Σ_{k=0}^{n} [f^{(k)}(x_0)/k!] (x − x_0)^k,  (2)

r_n(x_0; x) = [f^{(n+1)}(ξ)/(n + 1)!] (x − x_0)^{n+1},  (3)

where ξ is a point between x_0 and x; P_n(x_0; x) is the nth-order Taylor polynomial of f(x) at x_0; r_n(x_0; x) is the remainder, or the TE, for the approximation of f(x) by P_n(x_0; x); and r_n(x_0; x) can be represented by O((x − x_0)^{n+1}) if f^{(n+1)}(x) is bounded in the interval (x_1, x_2). The Taylor theorem implies that, if r_n(x_0; x) is smaller, P_n(x_0; x) approximates f(x) more accurately.

The IE indicates the finite difference relationship between the discrete function values and the discrete first derivatives, which also has a TE. Similarly, if an IE with a smaller TE is applied in wavefront reconstruction, the reconstructed wavefront will approximate the actual wavefront to be measured more accurately. We next use the Taylor theorem to analyze the TE of the IE in Southwell's integration algorithm, and then use the obtained TE to support the above opinion.

For simplicity, Fig. 1 displays a row of four estimated points in the Southwell geometry. Corresponding to this geometry, the IE in Southwell's LS-based integration algorithm [19] indicates the finite difference relationship between the function values and the slopes at two adjacent points. For example, for the first two points, the IE is

f_2 − f_1 = (h/2)(f'_1 + f'_2),  (4)

where f_1 and f_2 are the function values at points (1) and (2), respectively; f'_1 and f'_2 are the first derivatives (slopes) at points (1) and (2), respectively; and h is the interval between points (1) and (2). The left-hand side and the right-hand side of Eq. (4) are, in fact, not exactly equal. Applying the third-order Taylor theorem to the function values on the left-hand side and the second-order Taylor theorem to the slopes on the right-hand side at the intermediate point (1.5), the difference between the left-hand side and the right-hand side of Eq. (4) is

Fig. 1. Row of four estimated points in the Southwell geometry. The unknown function values (marked by closed circles) are coincident with the slopes (marked by crosses). Intermediate “imaginary” positions are indicated with open circles; h is the interval between two adjacent slope measurements.

(f_2 − f_1) − (h/2)(f'_1 + f'_2) = [f^{(4)}(ξ_1)Δξ/24 − f^{(3)}(ξ_2)/12] h^3,  (5)
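The O(h^3) behavior of this truncation error is easy to check numerically. The following is a small illustrative sketch (not from the paper): it evaluates the residual of the trapezoidal IE of Eq. (4) for f = sin, whose third and fourth derivatives are nonzero, and confirms that halving h shrinks the residual by roughly 2^3 = 8; the test point and step sizes are arbitrary choices.

```python
import math

def trapezoid_ie_residual(f, fp, x, h):
    """Residual of Eq. (4): (f2 - f1) - (h/2)(f1' + f2')."""
    return (f(x + h) - f(x)) - (h / 2.0) * (fp(x) + fp(x + h))

# sin has nonzero third and fourth derivatives, so the TE does not vanish
r_h  = trapezoid_ie_residual(math.sin, math.cos, 0.3, 0.10)
r_h2 = trapezoid_ie_residual(math.sin, math.cos, 0.3, 0.05)

# halving h should shrink the residual by about a factor of 2^3
observed_order = math.log(abs(r_h) / abs(r_h2), 2)
```

For a wavefront that is at most quadratic, the residual is exactly zero, which is the behavior Southwell's algorithm exhibits for tilts, defocus, and astigmatism.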

where ξ_1 and ξ_2 are two points between points (1) and (2), and |Δξ| < h. This difference is the TE of Eq. (4). It is related to the third and the fourth derivatives of the function f and is approximately proportional to h^3. This TE can be represented by O(h^3).

The above obtained TE of the IE in Southwell's algorithm explains two observations: (1) his algorithm can perfectly reconstruct the wavefronts having any combination of tilts, defocus, and astigmatism, because the third and the fourth derivatives of these wavefronts are all zero, making the TEs of the IEs zero; but the third or the fourth derivatives of the wavefronts having higher-order aberrations are not zero, making the TEs of the corresponding IEs nonzero, and reconstruction errors are then induced; (2) more sampling points for the measurement domain will yield a higher reconstruction accuracy, because the interval h of two adjacent estimated points is smaller, making the TEs of the corresponding IEs smaller.

Therefore, to obtain higher reconstruction accuracy, we should use an IE with a smaller TE in wavefront reconstruction. Since each IE in Southwell's algorithm contains only two adjacent function values and slopes, the minimum TE that can be reached is O(h^3). If more function values or slopes are contained in the IE, we will have more degrees of freedom to control the order of its TE. In this way, an IE with a smaller TE can be obtained.

B. Method
As shown in Fig. 1, we now consider the IE containing two adjacent function values and four adjacent slopes that are symmetrical about the same intermediate point (2.5). The general form of this IE can be expressed using a group of unknown coefficients A, B, C, D, E, and F as

Af_2 + Bf_3 = h(Cf'_1 + Df'_2 + Ef'_3 + Ff'_4).  (6)

For Eq. (6), the key point is how to calculate the unknown coefficients so that its TE is minimum. To do this, the Taylor theorem is applicable. The method proposed here is:

(1) Apply the fifth-order Taylor theorem to the function values and the fourth-order Taylor theorem to the slopes in Eq. (6) at point (2.5), respectively. Eq. (6) becomes

A{f_2.5 − f'_2.5(h/2) + ⋯ − O((h/2)^5)} + B{f_2.5 + f'_2.5(h/2) + ⋯ + O((h/2)^5)}
= hC{f'_2.5 − f''_2.5(3h/2) + ⋯ + O((3h/2)^4)} + hD{f'_2.5 − f''_2.5(h/2) + ⋯ + O((h/2)^4)}
+ hE{f'_2.5 + f''_2.5(h/2) + ⋯ + O((h/2)^4)} + hF{f'_2.5 + f''_2.5(3h/2) + ⋯ + O((3h/2)^4)}.  (7)

(2) Make the coefficients of f_2.5, f'_2.5, …, f^{(4)}_2.5 on the left-hand side of Eq. (7) equal to those of f_2.5, f'_2.5, …, f^{(4)}_2.5 on the right-hand side of Eq. (7), respectively. An equation set on A, B, C, D, E, and F is obtained.

(3) Solve the obtained equation set; the values of A, B, C, D, E, and F are

A = −k,  B = k,  C = −(1/24)k,  D = (13/24)k,  E = (13/24)k,  F = −(1/24)k,  (8)

where k is an arbitrary constant. Let k = 1.

Under the above solved set of coefficients, the TE of Eq. (6) is about [0.0190 f^{(6)}(ξ_1)Δξ + 0.0162 f^{(5)}(ξ_2)] h^5, where ξ_1 is a point between points (1) and (4), ξ_2 is a point between points (2) and (3), and |Δξ| ≤ 2h. This TE can be represented by O(h^5). Thus, the TE is improved by two orders of h by containing another two slopes, compared with Eq. (4), whose TE is O(h^3). Since h is usually very small in practical measurements, O(h^5) is much smaller than O(h^3). O(h^5) is also the minimum TE for the IE containing two adjacent function values and four adjacent slopes that are symmetrical about the same intermediate point.

The method proposed above can be applied to deduce other IEs. For example, using the method, we can deduce the first three Newton–Cotes quadrature rules, which are the trapezoidal, the Simpson, and the 3/8 Simpson rules [11,12]. The trapezoidal rule is given by Eq. (4). The Simpson and the 3/8 Simpson rules are given by f_3 − f_1 = (h/3)(f'_1 + 4f'_2 + f'_3) and f_4 − f_1 = (3h/8)(f'_1 + 3f'_2 + 3f'_3 + f'_4), respectively. Moreover, for an IE whose total number of contained function values and slopes is n, since up to the (n − 2)th derivative can be made equal between its left-hand side and right-hand side, its TE is O(h^{n−1}).

C. Three Algorithms Formulated by IEs with Higher-Order TEs
Three different IEs with higher-order TEs are developed by using the method proposed above. The first two have TEs of O(h^5), and the third one has a TE of O(h^7). Corresponding to these three IEs, three LS-based integration algorithms are formulated, respectively. We first give the three IEs and their exact TEs. Then we show how to formulate the three algorithms.

The first IE (IE1) is the one deduced in the last section, given by

f_3 − f_2 = (13h/24)[−(1/13)f'_1 + f'_2 + f'_3 − (1/13)f'_4].  (9)

The exact TE of IE1 was also given in Section 2.B. The second IE (IE2) contains four adjacent function values and two adjacent slopes that are symmetrical about the same intermediate point (2.5), given by

(1/9)f_4 + f_3 − f_2 − (1/9)f_1 = (2h/3)(f'_2 + f'_3).  (10)

The exact TE of IE2 is about [0.0146 f^{(6)}(ξ_1)Δξ + 0.0111 f^{(5)}(ξ_2)] h^5, where ξ_1 is a point between points (1) and (4), ξ_2 is a point between points (2) and (3), and |Δξ| ≤ 2h. The third IE (IE3) contains four adjacent function values and four adjacent slopes that are symmetrical about the same intermediate point (2.5), given by

(11/27)f_4 + f_3 − f_2 − (11/27)f_1 = h[(1/9)f'_1 + f'_2 + f'_3 + (1/9)f'_4].  (11)
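As a sanity check on the claimed TE orders, the residuals of the three IEs can be evaluated numerically for a smooth test function. The sketch below is illustrative, not the authors' code: it places the four points symmetrically about a center c, uses f = sin as an assumed test function, and estimates the observed order from residuals at h and h/2; values close to 5, 5, and 7 are expected for IE1, IE2, and IE3.

```python
import math

def ie_residuals(f, fp, c, h):
    """Residuals of Eqs. (9)-(11); points 1-4 sit at c - 3h/2, -h/2, +h/2, +3h/2."""
    F  = [f(c + (k - 1.5) * h) for k in range(4)]
    Fp = [fp(c + (k - 1.5) * h) for k in range(4)]
    r1 = (F[2] - F[1]) - (13 * h / 24) * (-Fp[0] / 13 + Fp[1] + Fp[2] - Fp[3] / 13)
    r2 = (F[3] / 9 + F[2] - F[1] - F[0] / 9) - (2 * h / 3) * (Fp[1] + Fp[2])
    r3 = (11 * (F[3] - F[0]) / 27 + F[2] - F[1]) - h * ((Fp[0] + Fp[3]) / 9 + Fp[1] + Fp[2])
    return r1, r2, r3

big   = ie_residuals(math.sin, math.cos, 0.5, 0.2)
small = ie_residuals(math.sin, math.cos, 0.5, 0.1)
# observed order: residual ratio under halving h, on a log2 scale
orders = [math.log(abs(b / s), 2) for b, s in zip(big, small)]
```

Since sin has nonvanishing derivatives of every order, the measured orders track the leading TE terms given in the text.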

The exact TE of IE3 is about [0.0028 f^{(8)}(ξ_1)Δξ − 0.0008 f^{(7)}(ξ_2)] h^7, where ξ_1 and ξ_2 are two points between points (1) and (4), and |Δξ| ≤ 3h.

Before giving the three algorithms, we assume the measurement domain is divided into an N × N grid of estimated points. W_{i,j} denotes the wavefront value to be reconstructed at point (i, j). S^x_{i,j} and S^y_{i,j} denote the x and y slopes at point (i, j) in Cartesian coordinates, where i and j are 1, 2, …, N. The three algorithms are:

Algorithm 1 (A1): corresponding to IE1, the IEs for wavefront reconstruction in the x and y dimensions are

W_{i,j+1} − W_{i,j} = (13h/24)[−(1/13)S^x_{i,j−1} + S^x_{i,j} + S^x_{i,j+1} − (1/13)S^x_{i,j+2}],  (12a)

W_{i,j} − W_{i+1,j} = (13h/24)[−(1/13)S^y_{i−1,j} + S^y_{i,j} + S^y_{i+1,j} − (1/13)S^y_{i+2,j}],  (12b)

where i = 1, 2, …, N and j = 2, 3, …, N − 2 for Eq. (12a), and i = 2, 3, …, N − 2 and j = 1, 2, …, N for Eq. (12b). Combining Eqs. (12a) and (12b), a matrix equation for A1 is derived as

[P^x_1; P^y_1] W = [S^x_1; S^y_1],  (13)

where the semicolon denotes vertical stacking of the two blocks.

Before explaining the definitions of P^x_1, P^y_1, W, S^x_1, and S^y_1 in matrix Eq. (13), we introduce some representations to simplify the explanation. We define ceil(A) as the nearest integer greater than or equal to A. For two integers B and C, we define rem(B, C) as the remainder of the division B/C if C does not divide B, and otherwise define rem(B, C) as C. We use m and n to denote the mth row and the nth column in a matrix. Based on the above definitions, we use the following substitutions in the remainder of this paper: ceil(m/(N − 3)) → cm, rem(m, N − 3) → rm, ceil(n/N) → cn, and rem(n, N) → rn.

We now explain the definition of each element in Eq. (13): W is a vector with the dimensions of N^2 × 1, given by

W|_{m,1} = W_{rem(m,N), ceil(m/N)},  (14)

where m = 1, 2, …, N^2; S^x_1 and S^y_1 are two vectors with the same dimensions of N(N − 3) × 1, given by

S^x_1|_{m,1} = (13h/24)[−(1/13)S^x_{cm,rm} + S^x_{cm,rm+1} + S^x_{cm,rm+2} − (1/13)S^x_{cm,rm+3}],  (15a)

S^y_1|_{m,1} = (13h/24)[−(1/13)S^y_{rm,cm} + S^y_{rm+1,cm} + S^y_{rm+2,cm} − (1/13)S^y_{rm+3,cm}],  (15b)

where m = 1, 2, …, N(N − 3); P^x_1 and P^y_1 are two matrices with the same dimensions of N(N − 3) × N^2, given by

P^x_1|_{m,n} = { −1, cm = rn & rm + 1 = cn;  1, cm = rn & rm + 2 = cn;  0, otherwise },  (16a)

P^y_1|_{m,n} = { 1, cm = cn & rm + 1 = rn;  −1, cm = cn & rm + 2 = rn;  0, otherwise },  (16b)

where "&" means "and", m = 1, 2, …, N(N − 3), and n = 1, 2, …, N^2.

Algorithm 2 (A2): corresponding to IE2, the IEs for wavefront reconstruction in the x and y dimensions are

(1/9)W_{i,j+2} + W_{i,j+1} − W_{i,j} − (1/9)W_{i,j−1} = (2h/3)(S^x_{i,j} + S^x_{i,j+1}),  (17a)

(1/9)W_{i−1,j} + W_{i,j} − W_{i+1,j} − (1/9)W_{i+2,j} = (2h/3)(S^y_{i,j} + S^y_{i+1,j}),  (17b)

where i = 1, 2, …, N and j = 2, 3, …, N − 2 for Eq. (17a), and i = 2, 3, …, N − 2 and j = 1, 2, …, N for Eq. (17b). Combining Eqs. (17a) and (17b), a matrix equation for A2 is derived as

[P^x_2; P^y_2] W = [S^x_2; S^y_2],  (18)

where S^x_2 and S^y_2 are two vectors with the same dimensions of N(N − 3) × 1, given by

S^x_2|_{m,1} = (2h/3)(S^x_{cm,rm+1} + S^x_{cm,rm+2}),  (19a)

S^y_2|_{m,1} = (2h/3)(S^y_{rm+1,cm} + S^y_{rm+2,cm}),  (19b)

where m = 1, 2, …, N(N − 3); P^x_2 and P^y_2 are two matrices with the same dimensions of N(N − 3) × N^2, given by

P^x_2|_{m,n} = { −1/9, cm = rn & rm = cn;  −1, cm = rn & rm + 1 = cn;  1, cm = rn & rm + 2 = cn;  1/9, cm = rn & rm + 3 = cn;  0, otherwise },  (20a)

P^y_2|_{m,n} = { 1/9, cm = cn & rm = rn;  1, cm = cn & rm + 1 = rn;  −1, cm = cn & rm + 2 = rn;  −1/9, cm = cn & rm + 3 = rn;  0, otherwise },  (20b)

where m = 1, 2, …, N(N − 3) and n = 1, 2, …, N^2.
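To make the ceil/rem indexing of Eqs. (14)–(16) concrete, the sketch below assembles the x-direction rows of A1 for a small grid in plain Python, with a dict standing in for the sparse matrix. This is an illustrative reading of the definitions above, not the authors' code; the grid size, the interval, and the cubic test wavefront W(x, y) = x^3 (for which the TE of IE1, involving fifth and sixth derivatives, vanishes) are assumptions made for the check.

```python
def ceil_div(a, b):
    # ceil(a/b) for positive integers
    return -(-a // b)

def rem(b, c):
    # the paper's rem(B, C): the remainder of B/C, except that it equals C when C divides B
    r = b % c
    return r if r != 0 else c

def assemble_a1_x(N, h, Sx):
    """Rows m = 1..N(N-3) of Eq. (15a)/(16a); W is vectorized per Eq. (14)."""
    P, S = {}, {}
    for m in range(1, N * (N - 3) + 1):
        cm, rm = ceil_div(m, N - 3), rem(m, N - 3)
        P[(m, rm * N + cm)] = -1.0        # column of W_{cm, rm+1}
        P[(m, (rm + 1) * N + cm)] = 1.0   # column of W_{cm, rm+2}
        S[m] = (13 * h / 24) * (-Sx[cm][rm] / 13 + Sx[cm][rm + 1]
                                + Sx[cm][rm + 2] - Sx[cm][rm + 3] / 13)
    return P, S

N, h = 6, 0.5
xs = [j * h for j in range(N + 1)]                                  # 1-based use
W  = [[xs[j] ** 3 for j in range(N + 1)] for _ in range(N + 1)]     # W[i][j] = x_j^3
Sx = [[3 * xs[j] ** 2 for j in range(N + 1)] for _ in range(N + 1)] # dW/dx

P, S = assemble_a1_x(N, h, Sx)
Wvec = {n: W[rem(n, N)][ceil_div(n, N)] for n in range(1, N * N + 1)}  # Eq. (14)
resid = {m: -S[m] for m in S}
for (m, n), coeff in P.items():
    resid[m] += coeff * Wvec[n]
max_resid = max(abs(v) for v in resid.values())   # should be ~0 for a cubic
```

Each row reproduces Eq. (12a) with i = cm and j = rm + 1, which is why the cubic wavefront satisfies the system exactly.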

Algorithm 3 (A3): corresponding to IE3, the IEs for wavefront reconstruction in the x and y dimensions are

(11/27)W_{i,j+2} + W_{i,j+1} − W_{i,j} − (11/27)W_{i,j−1} = h[(1/9)S^x_{i,j−1} + S^x_{i,j} + S^x_{i,j+1} + (1/9)S^x_{i,j+2}],  (21a)

(11/27)W_{i−1,j} + W_{i,j} − W_{i+1,j} − (11/27)W_{i+2,j} = h[(1/9)S^y_{i−1,j} + S^y_{i,j} + S^y_{i+1,j} + (1/9)S^y_{i+2,j}],  (21b)

where i = 1, 2, …, N and j = 2, 3, …, N − 2 for Eq. (21a), and i = 2, 3, …, N − 2 and j = 1, 2, …, N for Eq. (21b). Combining Eqs. (21a) and (21b), a matrix equation for A3 is derived as

[P^x_3; P^y_3] W = [S^x_3; S^y_3],  (22)

where S^x_3 and S^y_3 are two vectors with the same dimensions of N(N − 3) × 1, given by

S^x_3|_{m,1} = h[(1/9)S^x_{cm,rm} + S^x_{cm,rm+1} + S^x_{cm,rm+2} + (1/9)S^x_{cm,rm+3}],  (23a)

S^y_3|_{m,1} = h[(1/9)S^y_{rm,cm} + S^y_{rm+1,cm} + S^y_{rm+2,cm} + (1/9)S^y_{rm+3,cm}],  (23b)

where m = 1, 2, …, N(N − 3); P^x_3 and P^y_3 are two matrices with the same dimensions of N(N − 3) × N^2, given by

P^x_3|_{m,n} = { −11/27, cm = rn & rm = cn;  −1, cm = rn & rm + 1 = cn;  1, cm = rn & rm + 2 = cn;  11/27, cm = rn & rm + 3 = cn;  0, otherwise },  (24a)

P^y_3|_{m,n} = { 11/27, cm = cn & rm = rn;  1, cm = cn & rm + 1 = rn;  −1, cm = cn & rm + 2 = rn;  −11/27, cm = cn & rm + 3 = rn;  0, otherwise },  (24b)

where m = 1, 2, …, N(N − 3) and n = 1, 2, …, N^2.

Matrix Eqs. (13), (18), and (22) can be written in the same simplified form as

PW = S,  (25)

where P is a matrix with the dimensions of 2N(N − 3) × N^2, and S is a vector with the dimensions of 2N(N − 3) × 1, given by

P = [P^x_i; P^y_i]  and  S = [S^x_i; S^y_i],  (26)

where i = 1, 2, and 3, representing algorithm i. Matrix Eq. (25) is overdetermined for N > 7, which is always true in practical situations. We can obtain the LS solution to matrix Eq. (25) by solving its normal equation set as

P^T PW = P^T S,  (27)

where P^T is the transpose of P. It is found, however, that the rank of P^T P for each algorithm is N^2 − 9. Thus, P^T P is singular, and we cannot obtain a unique solution to matrix Eq. (27). The rank deficiency of P^T P is caused by two reasons: one is that the obtained IEs for each algorithm are not enough, leading to an eight-rank deficiency; the other is the existence of the uncertain piston in the process of integration, leading to a one-rank deficiency. One can add more constraints on the wavefront values to solve the problem of rank deficiency, that is, introduce more IEs into matrix Eq. (25).

To solve the problem of the eight-rank deficiency, we introduce the Simpson IEs into matrix Eq. (25). Their introduction can keep the quality of higher-order TEs, since the TE of the Simpson integration rule is also O(h^5). The introduced Simpson IEs are obtained from the head and the end of each row and each column in the reconstruction domain, and are given by

W_{i,j+2} − W_{i,j} = (h/3)(S^x_{i,j} + 4S^x_{i,j+1} + S^x_{i,j+2}),  (28a)

W_{i,j} − W_{i+2,j} = (h/3)(S^y_{i,j} + 4S^y_{i+1,j} + S^y_{i+2,j}),  (28b)

where i = 1, 2, …, N and j = 1, N − 2 for Eq. (28a), and i = 1, N − 2 and j = 1, 2, …, N for Eq. (28b). Combining Eqs. (28a) and (28b), a matrix equation is obtained as

[P^x_s; P^y_s] W = [S^x_s; S^y_s],  (29)

where S^x_s and S^y_s are two vectors with the same dimensions of 2N × 1, given by

S^x_s|_{m,1} = { (h/3)(S^x_{cm2,1} + 4S^x_{cm2,2} + S^x_{cm2,3}), m is odd;  (h/3)(S^x_{cm2,N−2} + 4S^x_{cm2,N−1} + S^x_{cm2,N}), m is even },  (30a)

S^y_s|_{m,1} = { (h/3)(S^y_{1,cm2} + 4S^y_{2,cm2} + S^y_{3,cm2}), m is odd;  (h/3)(S^y_{N−2,cm2} + 4S^y_{N−1,cm2} + S^y_{N,cm2}), m is even },  (30b)

where "cm2" means ceil(m/2), and m = 1, 2, …, 2N; P^x_s and P^y_s are two matrices with the same dimensions of 2N × N^2, given by

P^x_s|_{m,n} = { −1, cm2 = rn & cn = 1, N − 2;  1, cm2 = rn & cn = 3, N;  0, otherwise },  (31a)

P^y_s|_{m,n} = { 1, cm2 = cn & rn = 1, N − 2;  −1, cm2 = cn & rn = 3, N;  0, otherwise },  (31b)

where m = 1, 2, …, 2N and n = 1, 2, …, N^2.

The problem of the uncertain piston can be solved by setting one grid point of the wavefront to zero. We set the wavefront at the center of the reconstruction domain to zero by introducing the equation

P_0 W = 0,  (32)

where P_0 is a vector with the dimensions of 1 × N^2, given by

P_0|_{1,m} = { 1, ceil(m/N) = rem(m, N) = ceil(N/2);  0, otherwise },  (33)

where m = 1, 2, …, N^2. Combining matrix Eqs. (25), (29), and (32) into one matrix set, we have

P_e W = S_e,  (34)

where P_e is a matrix with the dimensions of [2N(N − 1) + 1] × N^2, and S_e is a vector with the dimensions of [2N(N − 1) + 1] × 1, given by

P_e = [P; P^x_s; P^y_s; P_0]  and  S_e = [S; S^x_s; S^y_s; 0].  (35)

The column rank of P_e is N^2. Then the standard LS method can be applied to solve matrix Eq. (34), whose solution is

W = (P_e^T P_e)^{−1} P_e^T S_e,  (36)

where P_e^T is the transpose of P_e, and (P_e^T P_e)^{−1} is the inverse of the square matrix P_e^T P_e. The matrix P_e is very sparse, and can be stored on a computer as a sparse matrix.

The above formulations of A1, A2, and A3 are based on the assumption that the measurement domain is square. Actually, this assumption is unnecessary. For circular or irregular measurement domains, we can derive the corresponding set of IEs, like matrix Eq. (34), for each algorithm. That is, we combine all the IEs (including the Simpson IEs obtained from the head and the end of each row and each column, and the equation setting one grid point of the wavefront to zero) into one matrix, which can be done with a computer program.

The reconstructed wavefront given by Eq. (36) contains, of course, reconstruction errors, which are

ΔW = W − W_0,  (37)

where W_0 is the vector of the true wavefront values. We now provide some analysis of the reconstruction error ΔW. The practical slope measurements always contain measurement noise. Thus, the vector S_e can be expressed as

S_e = S_e^0 + n_e,  (38)

where S_e^0 is the vector corresponding to the ideal (noise-free) slopes, and n_e is the vector corresponding to the measurement noise. Substituting Eq. (38) into Eq. (36), and then substituting Eq. (36) into Eq. (37), we have

ΔW = ΔW_0 + ΔW_n,  (39)

in which ΔW_0 is given by

ΔW_0 = (P_e^T P_e)^{−1} P_e^T S_e^0 − W_0,  (40)

representing the algorithm error, and ΔW_n is given by

ΔW_n = (P_e^T P_e)^{−1} P_e^T n_e,  (41)

representing the noise-induced error. Thus, the reconstruction error ΔW is made up of the algorithm error ΔW_0 and the noise-induced error ΔW_n.
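To illustrate the overall LS pipeline of Eqs. (25)–(36) in a self-contained way, the sketch below builds an overdetermined system on a tiny grid and solves its normal equations. It is not the authors' code: for brevity it uses Southwell's trapezoidal IE of Eq. (4) rather than IE1–IE3 (the higher-order stencils drop in the same way), and dense arithmetic rather than sparse storage; the grid size, sign conventions, and the quadratic test wavefront (for which the trapezoidal TE vanishes) are illustrative assumptions.

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for the normal equations."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

N, h = 5, 0.25
nn = N * N
idx = lambda i, j: i * N + j                       # 0-based row-major vectorization
coord = [k * h for k in range(N)]
W_true = [coord[i] ** 2 + coord[j] ** 2 for i in range(N) for j in range(N)]
Sx = [[2 * coord[j] for j in range(N)] for _ in range(N)]   # exact dW/dx
Sy = [[2 * coord[i] for _ in range(N)] for i in range(N)]   # exact dW/dy

rows, rhs = [], []
for i in range(N):                                 # trapezoidal IEs along x and y
    for j in range(N - 1):
        rx = [0.0] * nn; rx[idx(i, j + 1)] = 1.0; rx[idx(i, j)] = -1.0
        rows.append(rx); rhs.append(0.5 * h * (Sx[i][j] + Sx[i][j + 1]))
        ry = [0.0] * nn; ry[idx(j + 1, i)] = 1.0; ry[idx(j, i)] = -1.0
        rows.append(ry); rhs.append(0.5 * h * (Sy[j][i] + Sy[j + 1][i]))
center = idx(N // 2, N // 2)
rp = [0.0] * nn; rp[center] = 1.0
rows.append(rp); rhs.append(0.0)                   # zero-piston constraint, Eq. (32)

# normal equations, as in Eq. (27)/(36)
PtP = [[sum(r[a] * r[b] for r in rows) for b in range(nn)] for a in range(nn)]
PtS = [sum(rows[k][a] * rhs[k] for k in range(len(rows))) for a in range(nn)]
W_rec = solve_linear(PtP, PtS)

ref = [w - W_true[center] for w in W_true]         # truth with the same piston removed
err = max(abs(a - b) for a, b in zip(W_rec, ref))  # ~0: the system is consistent
```

Because the quadratic wavefront satisfies every IE exactly, the LS solution coincides with the true (piston-removed) wavefront up to rounding; a production implementation would instead store P_e as a sparse matrix, as the text notes.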

3. SIMULATION
Through simulations, this section mainly concerns the validity of applying IEs with higher-order TEs to improve the reconstruction accuracy, by comparing the reconstruction errors of Southwell's algorithm and the three algorithms proposed above. Some comparisons with Huang and Asundi's iterative compensation algorithm are also given. The simulations are conducted from three aspects: first, the algorithm errors are compared by simulation with noise ignored; then, noise propagation is analyzed for each algorithm; finally, the total reconstruction error, which contains the algorithm error and the noise-induced error, is compared.

A. Simulation with Noise Ignored
This section focuses on the comparisons of the algorithm error from the simulations of reconstruction from the ideal slopes. As mentioned in Section 1, the process of integration from the measured slopes is needed in both the Shack–Hartmann sensor and the deflectometry methods, a main difference of which is the resolution of the reconstruction domain. The case of the Shack–Hartmann sensor is considered by discretizing the reconstruction domain into a 50 by 50 grid of subapertures. A discretization of a 500 by 500 grid of sampling points is considered for the deflectometry case.

1. Shack–Hartmann Case
The simulated wavefront is represented here by the Zernike polynomials, which are widely used as wavefront decomposition basis functions for describing the classical optical aberrations [26]. The Zernike polynomials applied in this paper are those numbered by Noll [27]. In polar coordinates, these polynomials are a product of radial polynomials with radial order

n and angular functions with angular order m; n and m are integers satisfying |m| ≤ n and n − |m| even [27]. The radial order n represents the power of the Zernike polynomial.

As shown in Fig. 2, the range of the square domain is −1 to 1 mm in the x and y dimensions, which is discretized into a 50 by 50 grid of subapertures. The circle is the incircle of the square. Since the Zernike polynomials are defined on a unit circle, only the wavefront in the subapertures whose centers are within the circle is reconstructed. The second to the 105th Zernike polynomials are reconstructed one by one, which means the simulated wavefront is represented by only one Zernike polynomial in each simulation. The simulated x and y slopes are the first x and y partial derivatives of the simulated wavefront at the centers of the subapertures. The relative root mean square (RMS) reconstruction error R is applied to evaluate the algorithm error. R is defined as

R = (ΔW_0 · ΔW_0 / W_0 · W_0)^{1/2},  (42)

where ΔW_0 is the vector of the algorithm error as given in Eq. (40), and W_0 is the vector of the ideal wavefront values. The pistons of ΔW_0 and W_0 are all removed. The simulation results are shown in Fig. 3, in which the powers (1–13) of the Zernike polynomials are also labeled. To clearly show the results, R is displayed on a linear scale in Fig. 3(a) and on a log scale in Fig. 3(b), respectively. As shown in Fig. 3(a), R of Southwell's algorithm (SA) increases dramatically with the increase of the power of the Zernike polynomial, while R of A1, A2, and A3 remains very small. Even for the Zernike polynomials whose power is 13, R of A1, A2, and A3 is no more than 0.05. Figure 3(a) also shows that, for the Zernike polynomials under the same power (the radial order n), R of each algorithm decreases quickly with the increase of the angular order m. Calculations show that the actual TEs of the IEs in each

Fig. 2. Square domain is discretized into a 50 by 50 grid of subapertures whose centers are marked by dots. The circle is the incircle of the square. Only the subapertures whose centers are within the circle are considered in the simulation.

Fig. 3. Relative RMS reconstruction errors of Southwell’s algorithm (SA) and A1–A3 for the reconstructions of the second to the 105th Zernike polynomials are displayed in (a) a linear scale and (b) a log scale. The corresponding powers of the Zernike polynomials are from 1 to 13, which are also labeled.

algorithm decrease quickly with the increase of the angular order m, thus making the reconstruction error decrease quickly. For the reconstructions of Zernike polynomials beyond the 105th term, R of each algorithm further increases, but A1, A2, and A3 still give a smaller R. The performances of the four algorithms are much clearer on the log scale, as shown in Fig. 3(b). R is reduced by more than one order of magnitude when the TE order is improved from O(h^3) (SA) to O(h^5) (A1 and A2). Though the TEs of A1 and A2 are at the same level of O(h^5), the relative reconstruction error of A2 is slightly smaller than that of A1. There is also an apparent decrease of the algorithm error with the TE order improvement from O(h^5) to O(h^7) (A3) for the reconstructions of wavefronts represented by the first 60 Zernike polynomials, but the algorithm errors of A1, A2, and A3 tend to be close with the increase of the Zernike term. One can also note from Fig. 3(b) that SA can perfectly reconstruct the Zernike polynomials whose power is no greater than 2, while A1, A2, and A3 can perfectly reconstruct the Zernike polynomials whose power is no greater than 4. With a similar explanation as for SA in Section 2.A, the reason that A1, A2, and A3 can reconstruct the Zernike polynomials,

Li et al.

Vol. 30, No. 7 / July 2013 / J. Opt. Soc. Am. A

1455

(43)

As shown in Table 1, A1 and A2 give smaller reconstruction errors, compared with HA, after the first compensation. The reconstruction error of A3 is even smaller than that of HA after the third compensation. Thus, compared with HA, A1–A3 can achieve higher reconstruction accuracy without the operation of iterative compensation. For SA, A1, A2, and A3, the errors at the edge of the reconstruction domain are much larger than those inside the reconstruction domain. The reason is that the data points at the edge lack neighbors compared with the inner ones. In this situation, fewer IEs can be obtained for the wavefront values at the edge, leading to relatively larger errors. Nevertheless, the maximum absolute errors at the edge of the reconstruction domain for SA, A1, A2, and A3 are about 900, 2.5, 1.2, and 0.25 nm, respectively. Thus, the applications of IEs with higher-order TEs can also effectively reduce the errors at the edge.

The reconstruction range is −5 to 5 mm for both x and y coordinates, with an interval of 0.02 mm. Thus, the mesh grid is 500 by 500 for the reconstruction domain. The reconstruction errors Δz of SA and A1–A3 are shown in Figs. 4(a)–4(d), respectively. The mean values of these errors are all set to be zero. As shown in Fig. 4, an apparent reduction of reconstruction error is achieved with the TE improved from Oh3  (SA) to Oh5  (A1 and A2). For A3, there is only a little reconstruction error on the boundary. The standard deviation (Std.) and peak-to-valley (PV) values of the reconstruction errors for SA, HA, and A1–A3 are given in Table 1. The percentage in parentheses is the ratio of the result of each algorithm to the result of SA. “First” and “Third” denote the results after the first and the third compensation in HA, which are derived from [23].

B. Noise Propagation Analysis The error propagation coefficient (EPC) is usually used to evaluate the sensitivity to measurement noise for a reconstruction algorithm. Larger EPC means the algorithm is more sensitive to measurement noise. Zou and Rolland did a quantitative error propagation analysis of the traditional LS-based integration algorithms in the Fried, Hudgin and Southwell geometries [22], respectively. The method they formulated is general for the analysis of noise propagation. In this paper we use the same method as Zou and Rolland’s to analyze the noise propagation for A1, A2, and A3. The EPCs of SA and A1–A3 for the circular and square reconstruction domains are shown in Figs. 5(a) and 5(b), respectively. It can be seen from Fig. 5 that the odd subaperture number yields better noise propagation than the even subaperture

whose power is no greater than 4, is that the TEs of the IEs in A1, A2, and A3 are related to the fifth and sixth derivatives of the wavefront to be reconstructed; the fifth and sixth derivatives of the Zernike polynomials whose power is no greater than 4 are all zero. 2. Deflectometry Case In [23], Huang and Asundi reconstructed a synthesized surface shape to demonstrate the validity of their iterative compensation algorithm in improving reconstruction accuracy. We denote their algorithm as HA. In this paper, the same surface shape is reconstructed from its true slope to make comparisons among SA, HA, and A1–A3. The definition of the surface shape is zx; y as follows, zx; y  0.3 cos0.4x2  2x cos0.4y2  2y 0.7 cosx3  y2 ∕4π:

Fig. 4. Error between the reconstructed and the ideal shape for (a) SA, (b) A1, (c) A2, and (d) A3.


Table 1. Std. and PV of the Reconstruction Error for Each Algorithm

Algorithm      Std. (nm)          PV (nm)
SA             215.97 (1)         1761 (1)
First (HA)     0.7 (0.32%)        8 (0.45%)
Third (HA)     0.15 (0.07%)       3 (0.17%)
A1             0.44 (0.2%)        4.59 (0.26%)
A2             0.24 (0.1%)        2.54 (0.14%)
A3             0.01 (0.005%)      0.49 (0.03%)

number, for each algorithm. Under the same subaperture number, A1, A2, and A3 give larger EPCs than SA. As shown in Fig. 5(a) for the circular reconstruction domain, the average EPC ratios of A1, A2, and A3 to SA are about 1.22, 1.24, and 1.55, respectively. As shown in Fig. 5(b) for the square reconstruction domain, the corresponding average ratios are about 1.08, 1.15, and 1.51. Thus, an algorithm applying IEs with higher-order TEs is more sensitive to noise for both circular and square reconstruction domains.

Fig. 5. EPCs of the four algorithms for the (a) circular reconstruction domain and (b) square reconstruction domain. N is the discretized subaperture number in one dimension, ranging from 10 to 50. The circular domain is the incircle of the square, as shown in Fig. 2.
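The contrast between the algorithm error and the noise-induced error can be illustrated with a toy 1-D integration. The sketch below chains a Southwell-type trapezoid IE, f_{i+1} − f_i = (h/2)(s_i + s_{i+1}) — an assumption about SA's form consistent with [19], rather than the paper's 2-D LS solution — once with exact slopes and once with noisy ones:

```python
import math
import random

random.seed(1)
h, n = 0.1, 101
x = [i * h for i in range(n)]
s_ideal = [math.cos(v) for v in x]                     # ideal slopes of f(x) = sin(x)
s_noisy = [v + random.gauss(0.0, 0.01) for v in s_ideal]

def integrate(s):
    """Chain the Southwell-type IE f_{i+1} - f_i = (h/2)(s_i + s_{i+1})."""
    f = [0.0]                                          # anchor the piston at f(0) = 0
    for i in range(n - 1):
        f.append(f[-1] + 0.5 * h * (s[i] + s[i + 1]))
    return f

err = lambda f: max(abs(fi - math.sin(xi)) for fi, xi in zip(f, x))
e0, e1 = err(integrate(s_ideal)), err(integrate(s_noisy))
print(e0 < 2e-3, e1 > e0)   # algorithm error is tiny; noise dominates the total error
```

With exact slopes the residual is the accumulated TE of the IE; with noisy slopes the accumulated noise dominates, mirroring the role of the EPC discussed above.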


C. Total Reconstruction Error
As shown in Eq. (39), the total reconstruction error consists of the algorithm error ΔW0 and the noise-induced error ΔWn. The results in Section 3.A indicate that the algorithm errors of A1–A3 are much smaller than that of SA; however, the noise propagation analysis indicates that A1–A3 are slightly more sensitive to noise. A comparison of the reconstruction errors in the presence of noise is therefore necessary, since noise always exists in practical slope measurements. In this section, both the Shack–Hartmann case and the deflectometry case are considered.

1. Shack–Hartmann Case
The simulation conditions in this section are similar to those in Section 3.A. The only difference is that, by adding normally distributed noise to the ideal x and y slopes, the simulated slopes now contain noise. The noise level of the simulated slopes is indicated by the signal-to-noise ratio (SNR), defined as the ratio of the RMS of the ideal slopes to the RMS of the added noise,

SNR = [(S0 · S0)/(n · n)]^(1/2),

(44)

where S0 is the vector of the ideal x and y slopes, and n is the noise vector. In the simulations of this section, the same normally distributed noise is added to the ideal slopes, and different SNRs are obtained by varying the coefficient of the Zernike polynomial to be reconstructed. Two cases are considered for the simulated noisy slopes: an SNR of 10, representing poor slope measurements, and an SNR of 30, representing somewhat better slope measurements. The relative RMS reconstruction error is still used to evaluate the reconstruction error of each algorithm. The simulation results are shown in Fig. 6 for (a) an SNR of 10 and (b) an SNR of 30. As shown in Fig. 6(a), with an SNR of 10 the four algorithms perform similarly in reconstructing the Zernike polynomials whose power is no greater than 8; for powers larger than 8, A1–A3 give smaller relative reconstruction errors. As shown in Fig. 6(b), with an SNR of 30 SA gives relative reconstruction errors similar to those of A1–A3 only for the Zernike polynomials whose power is no greater than 6; for powers larger than 6, A1–A3 give distinctly smaller relative reconstruction errors. One can also note that the total errors of A1–A3 drop by about a factor of two as the SNR increases from 10 to 30, for all the reconstructed Zernike polynomials. The above difference in total errors between SA and A1–A3 can be explained by the different weights that the algorithm error and the noise-induced error carry in the total error. As shown in Fig. 3, the algorithm errors of A1–A3 remain small, but the algorithm error of SA increases rapidly with the power of the Zernike polynomial. On the other hand, even though A1–A3 are more sensitive to measurement noise than SA, their noise-induced errors are only slightly larger than that of SA.
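The noise-injection step and Eq. (44) can be sketched numerically; the vector names are illustrative, and here the noise is scaled to hit a target SNR exactly:

```python
import math
import random

random.seed(0)
norm = lambda v: math.sqrt(sum(x * x for x in v))

s0 = [random.gauss(0.0, 1.0) for _ in range(5000)]    # stand-in for the ideal slope vector S0
raw = [random.gauss(0.0, 1.0) for _ in range(5000)]

target_snr = 10.0
scale = norm(s0) / (target_snr * norm(raw))           # scale noise so that SNR = 10
nvec = [scale * r for r in raw]                       # noise vector n

snr = math.sqrt(sum(a * a for a in s0) / sum(b * b for b in nvec))   # Eq. (44)
noisy_slopes = [a + b for a, b in zip(s0, nvec)]
print(round(snr, 6))  # 10.0
```

In the paper's simulations the noise is held fixed and the Zernike coefficient is varied instead, which changes the SNR through the numerator of Eq. (44).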
Therefore, for the reconstructions of the low-power Zernike polynomials, the total errors of A1–A3 are close to that of SA, since the noise-induced error is the main part of the total error. For higher-power Zernike polynomials, noise-induced errors still take up the main part of the total errors of

A1–A3, but the algorithm error takes up a large part of the total error for SA. This makes the total error of SA larger than those of A1–A3 for the reconstructions of the higher-power Zernike polynomials.

Fig. 6. Relative RMS reconstruction error for the reconstructions of the second to the 105th Zernike polynomials in the presence of noise. The SNR of the simulated slopes is (a) 10 and (b) 30. The powers (1–13) of the Zernike polynomials are also labeled.

2. Deflectometry Case
For this case, Huang and Asundi reconstructed another synthesized shape to demonstrate the validity of their iterative compensation algorithm in the presence of noise [23]. In this paper, we use the same synthesized shape to examine the reconstruction ability of A1–A3 in the presence of noise. The shape z(x, y) is defined as

z(x, y) = 3(1 − x)^2 e^(−x^2 − (y+1)^2) − 10(x/5 − x^3 − y^5) e^(−x^2 − y^2) − (1/3) e^(−(x+1)^2 − y^2).   (45)

The reconstruction domain is from −1 to 1 mm in the x and y dimensions, with 400 by 400 sampling points. Simulated angular noise conforming to a normal distribution with a standard deviation of 8 arcsec is added to the angles in the x and y directions, respectively; this added noise represents a typical noise level in a practical measurement by deflectometry [23]. The simulation results are shown in Fig. 7. The Std. of the algorithm error, the noise-induced error, and the total error for each algorithm are given in Table 2. It can be seen from Fig. 7 and Table 2 that the total reconstruction error of SA is mainly caused by the algorithm error, whereas the total reconstruction errors of A1–A3 are almost entirely caused by the noise-induced error. Moreover, the total errors of A1–A3 are also very close to that of HA [23]. Thus, the validity of applying IEs with higher-order TEs to improve the reconstruction accuracy in the presence of noise is also demonstrated for the deflectometry case.

4. DISCUSSION
The simulations in Section 3 have shown that applying IEs with smaller TEs improves the reconstruction accuracy when measurement noise is ignored. This no longer holds unconditionally once measurement noise is considered, since an algorithm applying IEs with smaller TEs is more sensitive to noise. In practical situations, measurement noise is always unavoidable; thus, among the four algorithms SA, A1, A2, and A3, A1 is the most likely to give accurate results in practice, as shown by the simulations in Section 3.C. Another aspect to consider is memory storage and computation time. Only two values have to be stored for each row in the coefficient matrices of SA and A1 [Pe as in Eq. (34)], but four values have to be stored for most of the rows in the coefficient matrices of A2 and A3; thus, SA and A1 consume less memory than A2 and A3. More importantly, we find that SA and A1 take less time than A2 and A3 under the same reconstruction conditions. The reason should be that the coefficient matrices of SA and A1 are sparser than those of A2 and A3, so less computation is needed to solve the resulting set of IEs. Considering reconstruction accuracy as well, A1 is therefore the algorithm recommended for practical applications. The above discussion also implies that it is not necessary to formulate algorithms applying IEs with TEs higher than O(h^7), although such IEs are achievable by including more wavefront values or slopes and deducing them with the general method proposed in Section 2.B.

We now consider the applications of IEs with higher-order TEs in the Hudgin and Fried geometries, which are illustrated in Figs. 8(a) and 8(b), respectively. By including more function values or slopes and using the method proposed in Section 2.B, we can also deduce the IEs with higher-order TEs for these two geometries. Corresponding to each deduced IE, an LS-based algorithm can be formulated following the procedure given in Section 2.C. The traditionally used IE with a TE of O(h^3) for the Hudgin geometry shown in Fig. 8(a) is

f_2 − f_1 = h f'_{1.5};   (46)

the deduced IE with a TE of O(h^4) is

f_1 − 2f_2 + f_3 = h(f'_{2.5} − f'_{1.5});   (47)

and the deduced IE with a TE of O(h^6) is

(17/27)f_4 + 7f_3 − 7f_2 − (17/27)f_1 = h[f'_{1.5} + (62/9)f'_{2.5} + f'_{3.5}].   (48)
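The TE orders quoted for Eqs. (46)–(48) can be checked numerically on a smooth test function. The sketch below centers each stencil at a fixed point and halves h, estimating the order of the residual from its decay ratio; for the symmetric stencil of Eq. (48) the measured order is at least the O(h^6) stated above.

```python
import math

f, df = math.sin, math.cos   # smooth test function and its exact slope

def residuals(h, c=0.7):
    """Residuals (LHS minus RHS) of Eqs. (46)-(48), stencils centered at c."""
    r46 = (f(c + h/2) - f(c - h/2)) - h * df(c)
    r47 = (f(c - h) - 2*f(c) + f(c + h)) - h * (df(c + h/2) - df(c - h/2))
    r48 = ((17/27) * f(c + 1.5*h) + 7*f(c + 0.5*h)
           - 7*f(c - 0.5*h) - (17/27) * f(c - 1.5*h)
           - h * (df(c - h) + (62/9) * df(c) + df(c + h)))
    return r46, r47, r48

def order(i, h=0.4):
    """Estimated order p from |r(h)| / |r(h/2)| ~ 2^p."""
    return math.log2(abs(residuals(h)[i]) / abs(residuals(h / 2)[i]))

print([round(order(i), 1) for i in range(3)])  # [3.0, 4.0, 7.0]
```

Note that Eq. (48) is exact for all polynomials up to degree six, so its residual on a generic smooth function decays even faster than the stated O(h^6) bound.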


Fig. 7. Total reconstruction errors of (a) SA, (b) A1, (c) A2, and (d) A3.
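The synthesized shape of Eq. (45) is the familiar MATLAB "peaks" surface. A sketch of the surface and its analytic partial derivatives — our own differentiation of Eq. (45), usable as ideal slope data — is given below, with a finite-difference sanity check:

```python
import math

def z(x, y):
    """Synthesized shape of Eq. (45) (the MATLAB 'peaks' surface)."""
    return (3 * (1 - x)**2 * math.exp(-x**2 - (y + 1)**2)
            - 10 * (x/5 - x**3 - y**5) * math.exp(-x**2 - y**2)
            - (1/3) * math.exp(-(x + 1)**2 - y**2))

def zx(x, y):
    """Analytic dz/dx."""
    return ((-6 * (1 - x) - 6 * x * (1 - x)**2) * math.exp(-x**2 - (y + 1)**2)
            - 10 * (1/5 - 3 * x**2 - 2 * x * (x/5 - x**3 - y**5)) * math.exp(-x**2 - y**2)
            + (2/3) * (x + 1) * math.exp(-(x + 1)**2 - y**2))

def zy(x, y):
    """Analytic dz/dy."""
    return (-6 * (1 - x)**2 * (y + 1) * math.exp(-x**2 - (y + 1)**2)
            - 10 * (-5 * y**4 - 2 * y * (x/5 - x**3 - y**5)) * math.exp(-x**2 - y**2)
            + (2/3) * y * math.exp(-(x + 1)**2 - y**2))

# sanity check of the analytic slopes against central differences
eps = 1e-6
ok = all(abs(zx(x, y) - (z(x + eps, y) - z(x - eps, y)) / (2 * eps)) < 1e-6
         and abs(zy(x, y) - (z(x, y + eps) - z(x, y - eps)) / (2 * eps)) < 1e-6
         for x, y in [(0.0, 0.0), (0.3, -0.4), (-0.8, 0.6)])
print(ok)  # True
```

Sampling zx and zy on the 400 by 400 grid, and perturbing them with the 8 arcsec angular noise described above, reproduces the input data of this simulation.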

The function values and the slopes are not on the same line in the Fried geometry shown in Fig. 8(b), so we apply the Taylor theorem of two variables (x and y) [25] to the function values and slopes, and then equate the coefficients of the partial derivatives with respect to x, y, and mixed x and y between the two sides of the IE. The traditionally used IE in the Fried geometry with a TE of O(h^3) is

(f_{1,2} + f_{2,2}) − (f_{1,1} + f_{2,1}) = 2h f^x_{1.5,1.5},   (49)

where f^x_{1.5,1.5} is the first partial derivative of f with respect to x at the point (1.5, 1.5), and the deduced IE with a TE of O(h^4) is

(f_{1,1} + f_{2,1}) − 2(f_{1,2} + f_{2,2}) + (f_{1,3} + f_{2,3}) = 2h(f^x_{2.5,2.5} − f^x_{1.5,1.5}).   (50)

IEs with TEs higher than O(h^4) in the Fried geometry are complicated (containing more than eight function values and three slopes) and are not suitable for practical application because of their great computational burden. Hereafter, we denote the LS-based algorithms corresponding to Eqs. (46), (47), (48), (49), and (50) by H0, H1, H2, F0, and F1, respectively. To avoid the problem of rank deficiency, H1 and F1 also contain one traditionally used IE obtained from the end of each row and column, and H2 contains two traditionally used IEs obtained from the head and the end of each row and column, respectively. The synthesized surface shape expressed by Eq. (43) is reconstructed using H0, H1, H2, F0, and F1 under the same simulation conditions as in the deflectometry case of Section 3.A. The standard deviations of the reconstruction errors for H0, H1, H2, F0, and F1 are about 108, 110, 2, 153, and 151 nm, respectively. Thus, the reconstruction errors of H1 and F1 are close to those of H0 and F0, respectively, while H2 achieves an apparent reduction of the reconstruction error compared to H0. This is because Eq. (47) is the subtraction of Eq. (46) from f_3 − f_2 = h f'_{2.5}, which is the IE for f_2, f_3, and f'_{2.5} in H0, and Eq. (50) is the subtraction of Eq. (49) from (f_{1,3} + f_{2,3}) − (f_{1,2} + f_{2,2}) = 2h f^x_{2.5,2.5}, which is the IE for f_{1,2}, f_{1,3}, f_{2,2}, f_{2,3}, and f^x_{2.5,2.5} in F0. In other words, all of the IEs in H1 and F1 can be derived by linearly transforming the IEs in H0 and F0, respectively. The IEs in H2 cannot be derived by linearly transforming the IEs in H0, and the TE of the IEs in H2 is O(h^6), much smaller than that of the IEs in H0. We also conducted other simulations using H0, H1, H2, F0, and F1, and the differences in reconstruction error among the algorithms were similar to those of the above simulation. Therefore, IEs with a TE of O(h^6) can be used to improve the wavefront reconstruction accuracy in the Hudgin geometry, whereas IEs with higher-order TEs are not tailored for the Fried geometry because of their high complexity in this case.

Table 2. Std. of the Algorithm Error (AE), Noise-Induced Error (NE), and Total Reconstruction Error (RE) of SA and A1–A3

Algorithm    AE       NE (nm)    RE (nm)
SA           35 nm    4.96       34.4
A1           2 pm     5.09       5.09
A2           1 pm     5.14       5.14
A3           8 fm     5.45       5.46

Fig. 8. (a) A row of four estimated points for the Hudgin geometry; (b) two rows of four estimated points for the Fried geometry. The closed circles indicate the positions of the function values to be computed; the horizontal dashes indicate the positions of the known x slopes; the vertical dashes indicate the positions of the known y slopes.
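A similar numerical check applies to the Fried-geometry IEs, Eqs. (49) and (50). The sketch below reads the second subscript as the x index and evaluates both slope cells of Eq. (50) at the same y center — an assumption about the compact subscript notation — and recovers the stated O(h^3) and O(h^4) orders:

```python
import math

f = lambda x, y: math.sin(x) * math.cos(y)      # smooth test wavefront
fx = lambda x, y: math.cos(x) * math.cos(y)     # its exact x slope

def residuals(h, cx=0.5, cy=0.4):
    """Residuals of Eq. (49) (O(h^3)) and Eq. (50) (O(h^4)), Fried-type stencils."""
    ys = (cy - h/2, cy + h/2)                    # the two grid rows
    r49 = (sum(f(cx + h/2, y) - f(cx - h/2, y) for y in ys)
           - 2 * h * fx(cx, cy))
    r50 = (sum(f(cx - h, y) - 2 * f(cx, y) + f(cx + h, y) for y in ys)
           - 2 * h * (fx(cx + h/2, cy) - fx(cx - h/2, cy)))
    return r49, r50

def order(i, h=0.4):
    """Estimated order p from |r(h)| / |r(h/2)| ~ 2^p."""
    return math.log2(abs(residuals(h)[i]) / abs(residuals(h / 2)[i]))

print([round(order(i), 1) for i in range(2)])  # [3.0, 4.0]
```

The residual of Eq. (49) contains mixed terms such as f_xyy, which is why the two-variable Taylor theorem is needed in the derivation above.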

5. CONCLUSION
The concept of using IEs with smaller TEs to improve the accuracy of LS-based integration was presented in this paper. We proposed a general method to deduce all kinds of IEs, whose TEs depend on the number of function values and slopes they contain: an IE with a smaller TE is one with a higher-order TE, which contains more function values and slopes. Using this method, we deduced three specific IEs with higher-order TEs in the Southwell geometry and formulated three corresponding LS-based integration algorithms for wavefront reconstruction. Simulations illustrated the improvement in reconstruction accuracy obtained by applying IEs with smaller TEs, and also showed that one of the three formulated algorithms (A1) is the most preferable in practical applications. Moreover, the method can be applied in the Hudgin and Fried geometries to deduce IEs with higher-order TEs; simulations indicated that applying the IE with a TE of O(h^6) can effectively improve the wavefront reconstruction accuracy in the Hudgin geometry. Finally, we emphasize that in other fields where LS-based integration is used to convert discrete measured slopes to discrete function values, the IEs with higher-order TEs, deduced with the method proposed in this paper, can be considered to improve the integration accuracy.

ACKNOWLEDGMENTS We thank the reviewers for their comments, which greatly helped in improving both the clarity and the content of this paper. This work was supported by the Key Program of the National Natural Science Foundation of China (No. 60938003), National Program for Key Science & Technology Projects of China, Young Scientists Fund of the National Natural Science Foundation of China (No. 61205164), and the Basic Research Foundation of the Beijing Institute of Technology (No. 20120442004).

REFERENCES
1. T. W. Nicholls, G. D. Boreman, and J. C. Dainty, "Use of a Shack–Hartmann wave-front sensor to measure deviations from a Kolmogorov phase spectrum," Opt. Lett. 20, 2460–2462 (1995).
2. P. Prieto, F. Vargas-Martín, S. Goelz, and P. Artal, "Analysis of the performance of the Hartmann–Shack sensor in the human eye," J. Opt. Soc. Am. A 17, 1388–1398 (2000).
3. J.-S. Lee, H.-S. Yang, and J.-W. Hahn, "Wavefront error measurement of high-numerical-aperture optics with a Shack–Hartmann sensor and a point source," Appl. Opt. 46, 1411–1415 (2007).
4. M. C. Knauer, J. Kaminski, and G. Häusler, "Phase measuring deflectometry: a new approach to measure specular free-form surfaces," Proc. SPIE 5457, 366–376 (2004).
5. T. Bothe, W. Li, C. von Kopylow, and W. P. O. Jüptner, "High resolution 3D shape measurement on specular surfaces by fringe reflection," Proc. SPIE 5457, 411–422 (2004).
6. W. C. Keller and B. L. Gotwols, "Two-dimensional optical measurement of wave slope," Appl. Opt. 22, 3476–3478 (1983).
7. Q. Li, M. Zhao, S. Tang, S. Sun, and J. Wu, "Two-dimensional scanning laser slope gauge: measurements of ocean-ripple structures," Appl. Opt. 32, 4590–4597 (1993).
8. K. Freischlad and C. L. Koliopoulos, "Modal estimation of a wavefront from difference measurements using the discrete Fourier transform," J. Opt. Soc. Am. A 3, 1852–1861 (1986).
9. F. Roddier and C. Roddier, "Wavefront reconstruction using iterative Fourier transforms," Appl. Opt. 30, 1325–1327 (1991).
10. L. A. Poyneer, D. T. Gavel, and J. M. Brase, "Fast wave-front reconstruction in large adaptive optics systems with use of the Fourier transform," J. Opt. Soc. Am. A 19, 2100–2111 (2002).
11. J. Campos, L. P. Yaroslavsky, A. Moreno, and M. J. Yzuel, "Integration in the Fourier domain for restoration of a function from its slope: comparison of four methods," Opt. Lett. 27, 1986–1988 (2002).
12. L. P. Yaroslavsky, A. Moreno, and J. Campos, "Frequency responses and resolving power of numerical integration of sampled data," Opt. Express 13, 2892–2905 (2005).
13. S.-W. Bahk, "Band-limited wavefront reconstruction with unity frequency response from Shack–Hartmann slopes measurements," Opt. Lett. 33, 1321–1323 (2008).
14. S.-W. Bahk, "Highly accurate wavefront reconstruction algorithms over broad spatial-frequency bandwidth," Opt. Express 19, 18997–19014 (2011).
15. A. Talmi and E. N. Ribak, "Wavefront reconstruction from its gradients," J. Opt. Soc. Am. A 23, 288–297 (2006).
16. R. H. Hudgin, "Wavefront reconstruction for compensated imaging," J. Opt. Soc. Am. 67, 375–378 (1977).
17. D. L. Fried, "Least-squares fitting a wavefront distortion estimate to an array of phase-difference measurements," J. Opt. Soc. Am. 67, 370–375 (1977).
18. B. R. Hunt, "Matrix formulation of the reconstruction of phase values from phase difference," J. Opt. Soc. Am. 69, 393–399 (1979).
19. W. H. Southwell, "Wavefront estimation from wavefront slope measurements," J. Opt. Soc. Am. 70, 998–1006 (1980).
20. J. Hermann, "Least-squares wave front errors of minimum norm," J. Opt. Soc. Am. 70, 28–35 (1980).
21. W. Zou and Z. Zhang, "Generalized wavefront reconstruction algorithm applied in a Shack–Hartmann test," Appl. Opt. 39, 250–268 (2000).
22. W. Zou and J. P. Rolland, "Quantifications of error propagation in slope-based wavefront estimations," J. Opt. Soc. Am. A 23, 2629–2638 (2006).
23. L. Huang and A. Asundi, "Improvement of least-squares integration method with iterative compensations in fringe reflectometry," Appl. Opt. 51, 7459–7465 (2012).
24. B. L. Ellerbroek, "Efficient computation of minimum-variance wave-front reconstructors with sparse matrix techniques," J. Opt. Soc. Am. A 19, 1803–1816 (2002).
25. V. A. Zorich, Mathematical Analysis I (Springer, 2004).
26. V. N. Mahajan, "Zernike polynomial and wavefront fitting," in Optical Shop Testing, D. Malacara, ed. (Wiley, 2007), pp. 498–546.
27. R. J. Noll, "Zernike polynomials and atmospheric turbulence," J. Opt. Soc. Am. 66, 207–211 (1976).
