A one-shot-projection method for measurement of specular surfaces

Zhenzhou Wang

Shenyang Institute of Automation, Chinese Academy of Sciences, No. 114 Nanta Street, Shenhe District, Shenyang, Liaoning Province, China

[email protected]

Abstract: In this paper, a method is proposed to measure the shapes of specular surfaces with a one-shot projection of structured laser patterns. By intercepting the reflected laser pattern twice with two diffusive planes, a closed-form solution is obtained for each reflected ray. The points on the specular surface are reconstructed by computing the intersections of the incident rays and the reflected rays. Because it requires only one projection, the proposed method can measure both static and dynamic specular shapes, which is beyond the capability of most state-of-the-art methods that need multiple projections. To our knowledge, the proposed method is the only method so far that yields closed-form solutions for dynamic specular surfaces.

©2015 Optical Society of America

OCIS codes: (150.6910) Three-dimensional sensing; (120.0120) Instrumentation, measurement, and metrology.
References and links
1. Z. Z. Wang, "Robust measurement of the diffuse surface by phase shift profilometry," J. Opt. 16(10), 105407 (2014).
2. C. Je, S. W. Lee, and R. H. Park, "Color-stripe permutation pattern for rapid structured-light range imaging," Opt. Commun. 285(9), 2320–2331 (2012).
3. C. Je, K. H. Lee, and S. W. Lee, "Multi-projector color structured-light vision," Signal Process. Image Commun. 28(9), 1046–1058 (2013).
4. R. Furukawa, H. Kawasaki, R. Sagawa, and H. Masuyama, "Single color one-shot scan using modified Penrose tiling pattern," IET Comput. Vis. 7(5), 293–301 (2013).
5. L. Huang, C. S. Ng, and A. K. Asundi, "Dynamic three-dimensional sensing for specular surface with monoscopic fringe reflectometry," Opt. Express 19(13), 12809–12814 (2011).
6. H. S. Song and Y. M. Zhang, "Three-dimensional reconstruction of specular surface for a gas tungsten arc weld pool," Meas. Sci. Technol. 18(12), 3751–3767 (2007).
7. W. J. Zhang, Y. K. Liu, and Y. M. Zhang, "Real-time measurement of the weld pool surface in GTAW process," 2013 IEEE International Conference on Instrumentation and Measurement Technology, 1640–1645 (2013).
8. W. Jang, C. Je, Y. Seo, and S. W. Lee, "Structured-light stereo: Comparative analysis and integration of structured-light and active stereo for measuring dynamic shape," Opt. Lasers Eng. 51(11), 1255–1264 (2013).
9. G. Vogiatzis, C. Hernandez, P. H. S. Torr, and R. Cipolla, "Multi-view stereo via volumetric graph-cuts and occlusion robust photo consistency," IEEE Trans. Pattern Anal. Mach. Intell. 29(12), 2241–2246 (2007).
10. K. N. Kutulakos and E. Steger, "A theory of refractive and specular 3D shape by light-path triangulation," Int. J. Comput. Vis. 76(1), 13–29 (2007).
11. A. Sanderson, L. Weiss, and S. Nayar, "Structured highlight inspection of specular surfaces," IEEE Trans. Pattern Anal. Mach. Intell. 10(1), 44–55 (1988).
12. K. Ikeuchi, "Determining surface orientations of specular surfaces by using the photometric stereo method," IEEE Trans. Pattern Anal. Mach. Intell. 3(6), 661–669 (1981).
13. M. F. Tappen, "Recovering shape from a single image of a mirrored surface from curvature constraints," IEEE Conference on CVPR, 2545–2552 (2011).
14. M. Oren and S. K. Nayar, "A theory of specular surface geometry," Int. J. Comput. Vis. 24(2), 105–124 (1997).
15. J. Y. Zheng, Y. Fukagawa, and N. Abe, "3D surface estimation and model construction from specular motion," IEEE Trans. Pattern Anal. Mach. Intell. 19(5), 513–520 (1997).
16. J. Y. Zheng and A. Murata, "Acquiring a complete 3D model from specular motion under the illumination of circular-shaped light sources," IEEE Trans. Pattern Anal. Mach. Intell. 22(8), 913–920 (2000).
17. D. Miyazaki, M. Kagesawa, and K. Ikeuchi, "Transparent surface modeling from a pair of polarization images," IEEE Trans. Pattern Anal. Mach. Intell. 26(1), 73–82 (2004).

#222957 - $15.00 USD (C) 2015 OSA

Received 12 Sep 2014; revised 30 Nov 2014; accepted 12 Jan 2015; published 26 Jan 2015 9 Feb 2015 | Vol. 23, No. 3 | DOI:10.1364/OE.23.001912 | OPTICS EXPRESS 1912

18. Z. Y. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).
19. R. E. Smith, J. M. Price, and L. M. Howser, "A smoothing algorithm using cubic spline functions," NASA TN D-7397 (1974).
20. C. de Boor, "A practical guide to splines," Appl. Math. Sci. 27, 1–348 (1978).
21. Z. Z. Wang, "Monitoring of GMAW weld pool from the reflected laser lines for real time control," IEEE Trans. Ind. Inform. 10(4), 2073–2083 (2014).
22. Z. Z. Wang, X. Y. Huang, R. G. Yang, and Y. M. Zhang, "Measurement of mirror surfaces using specular reflection and analytical computation," Mach. Vis. Appl. 24(2), 289–304 (2013).

1. Introduction

Many industrial applications require measuring specular surfaces robustly, for example, mirrors used in telescopes, the melting weld pool during welding, and solders after welding. Most traditional 3D reconstruction methods fail in these situations because their working principles rely on the assumption that the objects in the scene have diffusive (Lambertian) surfaces. For structured light methods [1–4], well-designed patterns are reflected by the specular surface onto a diffusive plane or directly onto the image plane of the camera. The distortion of the pattern caused by one specific point is determined by both the height and the surface normal of that point, and this ambiguity between normal and depth is what defeats traditional structured light methods on specular surfaces. Huang et al. [5] calculated the height distribution by integrating gradient data obtained from one monoscopic fringe reflection, which can estimate the profile of a dynamic specular surface with a one-shot projection. Similar estimation-based one-shot methods were proposed in [6] and [7] to measure the specular and dynamic weld pool. However, none of them can calculate precise depth, and they lack the robustness required for precision measurement.

For stereo vision methods [8,9], the specular surface prevents the two cameras from obtaining two views directly. Nevertheless, methods based on stereo vision [10–12] are still the most popular for measuring specular surfaces. The method described in [10] can measure completely specular and refractive surfaces. However, it is based on triangulation and faces the challenging registration problem of finding corresponding points, which causes the poor performance of traditional stereo vision in accurate measurement; consequently, its accuracy is also limited. Sanderson et al. [11] combined stereo and structured highlights to reconstruct the specular surface; however, their measuring principle is mainly stereo triangulation, which also needs to match two views from two cameras, and ambiguity exists in retaining the matches. Ikeuchi [12] used a distributed light source and a lookup table to measure specular surfaces. The nonlinearity of the lookup table gives rise to two solutions for each intensity pair, and it is difficult to decide which one is real.

Besides structured light and stereo vision methods, many other methods have been proposed in the past decades, including shape from curvature [13], shape from features [14], shape from highlights [15,16], and shape from polarization [17]. The method proposed by Tappen [13] uses curvature constraints to estimate the mirror's shape; however, it does not resolve the convexity/concavity ambiguity and thus lacks robustness in measuring complex mirror surfaces. Oren and Nayar [14] used camera motion to form a trajectory of features and thus can only reconstruct static objects. The methods proposed by Zheng et al. [15,16] need to rotate the object to produce enough highlights for a complete 3D model. Miyazaki et al. [17] reconstructed the shape by analyzing the polarization of light reflected from the surface. Compared to all these state-of-the-art methods, our method is the only one that yields closed-form analytic solutions for dynamic specular surfaces. It overcomes the correspondence ambiguity problem of stereo vision by registering distinctly corresponding points computed analytically in two camera views.

This paper is organized as follows. Section 2 describes the principle of the proposed method and Section 3 introduces the virtual camera, which is the key technique of the proposed method. In Section 4, the system is calibrated: the plane equations are computed and the mapping is determined. Section 5 describes the three-dimensional reconstruction.
The system is re-calibrated in Section 6 to rectify the coefficient error. The experimental results and discussions are given in Section 7. Section 8 concludes the paper.

2. Principle of the proposed method

The principle of the proposed method is illustrated in Fig. 1. An incident laser ray is projected onto a specular surface and is reflected so that it intercepts Plane4 and Plane3 at P4 and P3, respectively. The equation of the incident laser ray is determined by intercepting it with two horizontal planes, Plane1 and Plane2. With two known points, P1 and P2, the incident laser ray can be uniquely determined. A mirror plane at z = 0 is used as the specular surface during the calibration stage, so its normal is known; the reflected ray can then be computed from the law of reflection. The equations of the two diffusive planes, Plane3 and Plane4, are then computed from the interception points of these known reflected rays on the two diffusive planes and a virtual camera that will be described in the next section. The 3D world coordinates of the interception points P4 and P3 can be computed once the equations of Plane4 and Plane3 are known. The plane equations of the system are fixed, while the interception points vary depending on the shape of the specular surface. During the reconstruction (measurement) stage, the 3D world coordinates of P4 and P3 are calculated online with the two cameras; the reflected ray is then determined by P4 and P3. The point P on the specular surface is determined by computing the intersection of the incident ray and the reflected ray. The number of reconstructed points on the specular surface can be increased by increasing the number of incident rays. The mirror surface can then be reconstructed by polynomial interpolation with enough sampled points.
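The reflected ray used in the calibration stage follows directly from the law of reflection about the known mirror normal. A minimal sketch (the function name is our own, not from the paper):

```python
import numpy as np

def reflect(d, n):
    """Reflect a ray direction d about a surface normal n,
    following the law of reflection: r = d - 2 (d . n) n."""
    d = np.asarray(d, dtype=float)
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)  # make sure the normal is unit length
    return d - 2.0 * np.dot(d, n) * n

# For the calibration mirror at z = 0 (normal [0, 0, 1]), a ray with
# direction (a, b, c) reflects to (a, b, -c), which is the same line as
# the direction (-a, -b, c) used later in the paper, with the parameter
# sign reversed.
```

Reflection preserves the length of the direction vector, so only the component along the normal changes sign.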

Fig. 1. Principle of the Proposed Method

3. Virtual camera

The laser rays are projected from the point C and reflected by the mirror plane onto the diffusive plane, as illustrated in Fig. 2. According to the law of reflection, the equations of the reflected rays are the same as if the rays were projected directly onto the diffusive plane from the point C', the symmetric point of C relative to the mirror plane. The central projection from C' intercepts the mirror plane and the diffusive plane in the same way as light goes through a pin-hole and images on the image plane. Hence, the mirror plane can be treated as the image plane of a virtual pin-hole camera. The interception points on the diffusive plane
are treated as the real objects that are imaged on the mirror plane. As shown in Fig. 2, A, B, C, D, and E are five laser light spots on the diffusive plane, produced when the laser rays are reflected onto the diffusive plane. They correspond to the five points a, b, c, d, and e, which are the intersection points of the projected laser rays with the mirror plane. If A, B, C, D, and E are treated as the real objects, then a, b, c, d, and e are the imaged points on the image plane of the virtual pin-hole camera. With this virtual camera and another real camera imaging the interception points on the diffusive plane, the equation of the diffusive plane can be determined, as described in Section 4 in detail. The principal point of the virtual camera is given by the X and Y coordinates of the center C, and its focal length by the Z coordinate of C, i.e., the distance between C and the mirror plane.

Fig. 2. Virtual Camera

Fig. 3. Demonstration of incident ray determination

The center C is computed as the common intersection point of all the projected laser rays. Hence, the equation of each laser ray needs to be computed first. As illustrated in Fig. 1, the horizontal planes are used to intercept the incident rays, and the 3D coordinates of the interception points are computed. Two points determine one ray, so at least two planes are needed to intercept the rays. For practical implementation, the effect of noise should be considered when the distances between rays are relatively small, and N horizontal planes are used instead
of only two, as shown in Fig. 3. N equals 6 in the experiments due to an implementation constraint. The equations of the horizontal planes are z = 0, z = −1, z = −2, z = −3, z = −4, z = −5. The equation of the i-th incident ray is formulated as:

$$\frac{x_l^i - x_0^i}{a^i} = \frac{y_l^i - y_0^i}{b^i} = \frac{z_l^i - z_0^i}{c^i} = t^i;\quad l = 1 \ldots 6 \tag{1}$$

where $l$ denotes the index of the interception points on the different horizontal planes and $(x_0^i, y_0^i, z_0^i)$ denotes the interception point of the i-th ray on the plane z = 0. As shown in Fig. 3, the 3D coordinates of the interception points are computed with the calibrated camera c1. With six known points for each incident ray, its equation is computed by singular value decomposition (SVD), and the coefficients $[a^i, b^i, c^i]^T$ of the i-th incident ray are determined. With all the incident rays determined, the least squares method is used to find the projection center C whose summed squared distance to all the incident rays is smallest. The squared distance of the projection center C to each laser ray is formulated as:

$$d^2 = \frac{\left|(\vec{P}_1 - \vec{P}_0) \times (\vec{P}_0 - \vec{C})\right|^2}{\left|\vec{P}_1 - \vec{P}_0\right|^2} \tag{2}$$

where the coordinate of the projection center is denoted as $C(x, y, z)$, and $P_0(x_0^i, y_0^i, z_0^i)$ and $P_1(x_1^i, y_1^i, z_1^i)$ are two known points on the i-th incident ray. Differentiating $d^2$ with respect to x, y, z and setting the derivatives to zero, three equations are obtained for each ray; they are stacked in matrix form as:

$$[V_1^i, V_2^i, V_3^i]^T [x, y, z]^T = [b_1^i, b_2^i, b_3^i]^T \tag{3}$$

where

$$V_1^i = \begin{bmatrix} (z_1^i - z_0^i)^2 + (y_1^i - y_0^i)^2 \\ (x_0^i - x_1^i)(y_1^i - y_0^i) \\ (x_0^i - x_1^i)(z_1^i - z_0^i) \end{bmatrix} \tag{4}$$

$$V_2^i = \begin{bmatrix} (x_0^i - x_1^i)(y_1^i - y_0^i) \\ (z_1^i - z_0^i)^2 + (x_1^i - x_0^i)^2 \\ (y_0^i - y_1^i)(z_1^i - z_0^i) \end{bmatrix} \tag{5}$$

$$V_3^i = \begin{bmatrix} (x_0^i - x_1^i)(z_1^i - z_0^i) \\ (y_0^i - y_1^i)(z_1^i - z_0^i) \\ (x_1^i - x_0^i)^2 + (y_1^i - y_0^i)^2 \end{bmatrix} \tag{6}$$

$$b_1^i = -\left[(x_1^i z_0^i - x_0^i z_1^i)(z_1^i - z_0^i) + (x_1^i y_0^i - x_0^i y_1^i)(y_1^i - y_0^i)\right] \tag{7}$$

$$b_2^i = -\left[(x_1^i y_0^i - x_0^i y_1^i)(x_0^i - x_1^i) + (y_1^i z_0^i - y_0^i z_1^i)(z_1^i - z_0^i)\right] \tag{8}$$

$$b_3^i = -\left[(y_1^i z_0^i - y_0^i z_1^i)(y_0^i - y_1^i) + (x_1^i z_0^i - x_0^i z_1^i)(x_0^i - x_1^i)\right] \tag{9}$$

Stacking the results of Eq. (3) for all the laser rays gives one common equation system of the following form:

$$[V^1, \ldots, V^i, \ldots, V^N]^T [x, y, z]^T = [b^1, \ldots, b^i, \ldots, b^N]^T \tag{10}$$
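The two steps above (an SVD line fit per ray, then a least-squares solve for the common center) can be sketched as follows. The per-line normal equations below use the projector form $(I - dd^T)x = (I - dd^T)p$, which is algebraically equivalent to stacking Eqs. (4)-(9); the data and function names are hypothetical:

```python
import numpy as np

def fit_line_svd(points):
    """Fit a 3D line to noisy points. Returns (centroid, unit direction):
    the direction is the dominant right-singular vector of the
    mean-centered point cloud."""
    P = np.asarray(points, dtype=float)
    c = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - c)
    return c, Vt[0]

def nearest_point_to_lines(lines):
    """Least-squares point minimizing the summed squared distance to a
    set of lines, each given as (point p, unit direction d). Each line
    contributes the normal equations (I - d d^T) x = (I - d d^T) p;
    stacking and solving them plays the role of Eq. (10)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in lines:
        M = np.eye(3) - np.outer(d, d)  # projector orthogonal to d
        A += M
        b += M @ np.asarray(p, dtype=float)
    return np.linalg.solve(A, b)
```

For rays that truly share a point, the solve recovers that point exactly; with noisy rays it returns the least-squares center, which is the role C plays for the virtual camera.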

Solving the above equation by least squares, the projection center $(x, y, z)$ is acquired, and the intrinsic matrix of the virtual camera is obtained: its principal point $(C_x, C_y)$ equals $(x, y)$ and its focal length $f$ equals $z$. Please note that the virtual camera requires the projected laser rays to comply strictly with central projection; otherwise the virtual camera becomes ill-conditioned and adds errors to the system calibration, since most of the key equations are derived from central projection and the virtual camera.

4. System calibration

The illustration of the practical system is shown in Fig. 4. It uses a beam splitter to split each reflected ray into two rays, which are then intercepted by two diffusive planes, Plane3 and Plane4. As a result, two interception points are acquired for each reflected ray. These two interception points are imaged by cameras c3 and c4, respectively. The homography between the image plane of each camera and the corresponding diffusive plane transforms the 2D camera coordinates into 2D plane coordinates. Hence, if the plane equation is known, the 3D world coordinates of the interception points can be calculated.

Fig. 4. Illustration of practical system

The equation of the image plane of the virtual camera (the mirror plane) is known as z = 0, which is formulated as:

$$\pi_1 = [0, 0, 1, 0]^T \tag{11}$$

Then the equation of the diffusive plane, Plane3 or Plane4, is computed as follows:

$$\pi = (P^T)^{-1} \pi_1 \tag{12}$$

$$P = \begin{bmatrix} 1 & 0 & 0 & -C_x \\ 0 & 1 & 0 & -C_y \\ 0 & 0 & 1 & f \\ 0 & 0 & 0 & 1 \end{bmatrix}^{-1} \begin{bmatrix} r_0 & r_1 & r_2 & t_x \\ r_3 & r_4 & r_5 & t_y \\ r_6 & r_7 & r_8 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{13}$$

where $(C_x, C_y)$ and $f$ are the principal point and focal length of the virtual camera. The rotation matrix $R = [\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3]$ and translation vector $T = [t_x, t_y, t_z]^T$ define the transformation between the mirror plane and the diffusive Plane3. They are computed as follows [18]:

$$\mathbf{r}_1 = [r_0, r_3, r_6]^T = \frac{K^{-1}\mathbf{h}_1}{\|K^{-1}\mathbf{h}_1\|} \tag{14}$$

$$\mathbf{r}_2 = [r_1, r_4, r_7]^T = \frac{K^{-1}\mathbf{h}_2}{\|K^{-1}\mathbf{h}_1\|} \tag{15}$$

$$T = [t_x, t_y, t_z]^T = \frac{K^{-1}\mathbf{h}_3}{\|K^{-1}\mathbf{h}_1\|} \tag{16}$$

$$\mathbf{r}_3 = [r_2, r_5, r_8]^T = \mathbf{r}_1 \times \mathbf{r}_2 \tag{17}$$

where $H = [\mathbf{h}_1, \mathbf{h}_2, \mathbf{h}_3]$ is the homography between the diffusive plane and the image plane of the virtual camera, and $K$ is the intrinsic matrix of the virtual camera, defined as:

$$K = \begin{bmatrix} f & 0 & C_x \\ 0 & f & C_y \\ 0 & 0 & 1 \end{bmatrix} \tag{18}$$

Maximum likelihood (ML) estimation is used to compute the homography between Plane3 and the image plane of the virtual camera:

$$\hat{H} = \arg\min_H \sum_{i=1}^{N} \left( (x_i - x_i')^2 + (y_i - y_i')^2 \right) \tag{19}$$

where

$$x_i' = \frac{H_{11} X_i + H_{12} Y_i + H_{13}}{H_{31} X_i + H_{32} Y_i + H_{33}};\quad i = 1 \ldots N \tag{20}$$

$$y_i' = \frac{H_{21} X_i + H_{22} Y_i + H_{23}}{H_{31} X_i + H_{32} Y_i + H_{33}};\quad i = 1 \ldots N \tag{21}$$

$$H = \begin{bmatrix} H_{11} & H_{12} & H_{13} \\ H_{21} & H_{22} & H_{23} \\ H_{31} & H_{32} & H_{33} \end{bmatrix} \tag{22}$$
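The projective mapping of Eqs. (20)-(21), together with a standard direct linear transform (DLT) fit, can be sketched as follows. The DLT step is a common way to initialize an ML estimate of the form of Eq. (19); it is our assumption rather than something stated in the paper, and the correspondences are hypothetical:

```python
import numpy as np

def apply_homography(H, pts):
    """Map plane coordinates (X, Y) to image coordinates (x', y')
    as in Eqs. (20)-(21): a projective division by the third row."""
    P = np.asarray(pts, dtype=float)
    q = np.hstack([P, np.ones((len(P), 1))]) @ H.T
    return q[:, :2] / q[:, 2:3]

def estimate_homography_dlt(src, dst):
    """Direct Linear Transform: each correspondence contributes two
    rows to A h = 0; h is taken as the singular vector of A with the
    smallest singular value."""
    A = []
    for (X, Y), (x, y) in zip(src, dst):
        A.append([X, Y, 1, 0, 0, 0, -x * X, -x * Y, -x])
        A.append([0, 0, 0, X, Y, 1, -y * X, -y * Y, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the projective scale
```

At least four non-collinear correspondences are needed; with exact data the DLT already recovers the homography up to numerical precision.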

where $(x_i, y_i)$ in the camera coordinate frame and $(X_i, Y_i)$ in the diffusive plane coordinate frame are the measured point pairs.

5. Three dimensional reconstruction

With the plane equation $\pi$ known, the 3D world coordinate of an interception point follows from its x and y coordinates:

$$Z_i = \frac{\pi(1) X_i + \pi(2) Y_i + \pi(4)}{-\pi(3)} \tag{23}$$

After the world coordinates of the interception points on Plane3 and Plane4 are calculated, two points are known for each reflected laser ray, and it can be determined as:

$$\frac{X^i - X_0^i}{A^i} = \frac{Y^i - Y_0^i}{B^i} = \frac{Z^i - Z_0^i}{C^i} = T^i \tag{24}$$

The distance between the point $(x^i, y^i, z^i)$ on the incident ray and the point $(X^i, Y^i, Z^i)$ on the reflected ray is formulated as:

$$d^i = \sqrt{(X^i - x^i)^2 + (Y^i - y^i)^2 + (Z^i - z^i)^2} \tag{25}$$

Substituting Eq. (1) and Eq. (24) into Eq. (25), the following equation is acquired:

$$(d^i)^2 = (A^i T^i + X_0^i - a^i t^i - x_0^i)^2 + (B^i T^i + Y_0^i - b^i t^i - y_0^i)^2 + (C^i T^i + Z_0^i - c^i t^i - z_0^i)^2 \tag{26}$$

where $T^i$ and $t^i$ are unknown. It is first treated as a quadratic in $T^i$; its minimum is found by setting the derivative with respect to $T^i$ to zero, which yields:

$$T^i = \rho_1 t^i + \rho_2 \tag{27}$$

where

$$\rho_1 = \frac{a^i A^i + b^i B^i + c^i C^i}{A^i A^i + B^i B^i + C^i C^i} \tag{28}$$

$$\rho_2 = \frac{A^i x_0^i + B^i y_0^i + C^i z_0^i - A^i X_0^i - B^i Y_0^i - C^i Z_0^i}{A^i A^i + B^i B^i + C^i C^i} \tag{29}$$

Substituting Eq. (27) back into Eq. (26), setting the derivative with respect to $t^i$ to zero, and solving, the value of $t^i$ is obtained as:

$$t^i = \frac{\mu_1 + \mu_2 + \mu_3}{\sigma} \tag{30}$$

where

$$\sigma = (A^i \rho_1 - a^i)^2 + (B^i \rho_1 - b^i)^2 + (C^i \rho_1 - c^i)^2 \tag{31}$$

$$\mu_1 = -(A^i \rho_1 - a^i)(A^i \rho_2 + X_0^i - x_0^i) \tag{32}$$

$$\mu_2 = -(B^i \rho_1 - b^i)(B^i \rho_2 + Y_0^i - y_0^i) \tag{33}$$

$$\mu_3 = -(C^i \rho_1 - c^i)(C^i \rho_2 + Z_0^i - z_0^i) \tag{34}$$

Substituting the value of $t^i$ back into Eq. (27), the value of $T^i$ is obtained:

$$T^i = \rho_1 \frac{\mu_1 + \mu_2 + \mu_3}{\sigma} + \rho_2 \tag{35}$$

Substituting the values of $t^i$ and $T^i$ into Eq. (1) and Eq. (24) respectively, the two nearest points are acquired:

$$\begin{bmatrix} x^i \\ y^i \\ z^i \end{bmatrix} = \frac{\mu_1 + \mu_2 + \mu_3}{\sigma} \begin{bmatrix} a^i \\ b^i \\ c^i \end{bmatrix} + \begin{bmatrix} x_0^i \\ y_0^i \\ z_0^i \end{bmatrix} \tag{36}$$

$$\begin{bmatrix} X^i \\ Y^i \\ Z^i \end{bmatrix} = \left( \rho_1 \frac{\mu_1 + \mu_2 + \mu_3}{\sigma} + \rho_2 \right) \begin{bmatrix} A^i \\ B^i \\ C^i \end{bmatrix} + \begin{bmatrix} X_0^i \\ Y_0^i \\ Z_0^i \end{bmatrix} \tag{37}$$

When the two points are not equal, the coordinate of the intersecting point is approximated by the average of the two nearest points:

$$\begin{bmatrix} X_r^i \\ Y_r^i \\ Z_r^i \end{bmatrix} = \frac{1}{2} \begin{bmatrix} X^i \\ Y^i \\ Z^i \end{bmatrix} + \frac{1}{2} \begin{bmatrix} x^i \\ y^i \\ z^i \end{bmatrix} \tag{38}$$
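Eqs. (26)-(38) amount to the classical closed-form nearest-points construction between two 3D lines, which can be written compactly as follows (an equivalent formulation; the ray parameters in the usage are hypothetical):

```python
import numpy as np

def nearest_points(p0, u, q0, v):
    """Closed-form nearest points between line p0 + t*u (incident ray)
    and line q0 + T*v (reflected ray). Minimizing |p(t) - q(T)|^2 over
    t and T gives a 2x2 linear system; the surface point is taken as
    the midpoint, as in Eq. (38). Returns (midpoint, gap)."""
    p0, u, q0, v = (np.asarray(a, dtype=float) for a in (p0, u, q0, v))
    w = p0 - q0
    a, b, c = u @ u, u @ v, v @ v
    d, e = u @ w, v @ w
    denom = a * c - b * b  # zero only for parallel lines
    t = (b * e - c * d) / denom
    T = (a * e - b * d) / denom
    P1, P2 = p0 + t * u, q0 + T * v
    return (P1 + P2) / 2.0, np.linalg.norm(P1 - P2)
```

For the incident/reflected ray pair, the returned midpoint is the reconstructed surface point of Eq. (38), and the returned gap is a useful sanity check: it should be near zero for a well-calibrated system.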

After all the sampled points are calculated, the mirror surface can be reconstructed by polynomial interpolation [19,20]:

$$P_f(1:N_t) = \arg\min_f \left( \alpha \sum_{i=1}^{N_t} \left( P_r\left(v(i)\right) - f\left(v(i)\right) \right)^2 + (1 - \alpha) \int \left( \frac{d^2 f(t)}{dt^2} \right)^2 dt \right) \tag{39}$$

where

$$v(i) = 1 + (i - 1)/M,\quad i = 1:N_t;\quad N_t = M \times N \tag{40}$$

and $P_r$ denotes the reconstructed sampled points, $P_f$ the points after interpolation, $N_t$ the number of surface points after interpolation ($M$ times the number of sampled points $N$), and $\alpha$ the smoothing factor.

6. System re-calibration

From Eq. (23), it is seen that the z coordinate of an interception point is computed with the plane equations after its x and y coordinates are obtained. Consequently, the accuracy of the computed plane equation is critical for the measurement robustness of the proposed method. When the virtual camera is not perfectly formed, a large error may occur in the computed plane equation, and the reconstruction accuracy will be affected. Two parallel incident rays are used to analyze how the reconstruction accuracy is affected by the plane coefficient errors. The two parallel incident rays, passing through the plane z = 2 at $(x_0, y_0, 2)$ and $(x_0 - 1, y_0, 2)$, are defined as:

$$\frac{x - x_0}{a} = \frac{y - y_0}{b} = \frac{z - 2}{c} = t \tag{41}$$

$$\frac{x - x_0 + 1}{a} = \frac{y - y_0}{b} = \frac{z - 2}{c} = t \tag{42}$$

The difference between the two interception points in the z direction is defined as the depth difference and denoted $Z_d$. As can be seen, $Z_d = 0$ here. The reflected rays are formulated as follows:

$$\frac{X - x_0}{-a} = \frac{Y - y_0}{-b} = \frac{Z - 2}{c} = T \tag{43}$$

$$\frac{X - x_0 + 1}{-a} = \frac{Y - y_0}{-b} = \frac{Z - 2}{c} = T \tag{44}$$

The first reflected ray (Eq. (43)) intercepts the diffusive plane Plane4 at:

$$X_2 = x_0 - \frac{a\left(x_0 \pi_4(1) + y_0 \pi_4(2) + \pi_4(4)\right)}{a \pi_4(1) + b \pi_4(2) - c \pi_4(3)} \tag{45}$$

$$Y_2 = y_0 - \frac{b\left(x_0 \pi_4(1) + y_0 \pi_4(2) + \pi_4(4)\right)}{a \pi_4(1) + b \pi_4(2) - c \pi_4(3)} \tag{46}$$

$$Z_2 = 2 + \frac{c\left(x_0 \pi_4(1) + y_0 \pi_4(2) + \pi_4(4)\right)}{a \pi_4(1) + b \pi_4(2) - c \pi_4(3)} \tag{47}$$

When no errors occur, Eq. (47) is the same as the following equation:

$$Z_2 = \frac{\pi_4(1) X_2 + \pi_4(2) Y_2 + \pi_4(4)}{-\pi_4(3)} \tag{48}$$

Since $X_2$ and $Y_2$ are computed by ML estimation, they can be assumed free of error; they will have the same values as those computed by Eqs. (45)-(46). On the other hand, $Z_2$ is computed with the imperfect plane equation:

$$Z_2^e = \frac{\pi_{4e}(1) X_2 + \pi_{4e}(2) Y_2 + \pi_{4e}(4)}{-\pi_{4e}(3)} \tag{49}$$

Similarly, the other three interception points of the two reflected rays with the two diffusive planes, Plane4 and Plane3, are obtained. Consequently, the equations of the reflected rays, which are used to compute the interception points on the reflective surface, are changed to:

$$\frac{X - x_0}{-a} = \frac{Y - y_0}{-b} = \frac{Z - 2}{c^e} = T \tag{50}$$

$$\frac{X - x_0 + 1}{-a} = \frac{Y - y_0}{-b} = \frac{Z - 2}{c^{e'}} = T \tag{51}$$

where

$$c^e = \frac{\pi_{3e}(1) X_2 + \pi_{3e}(2) Y_2 + \pi_{3e}(4)}{-\pi_{3e}(3)} - \frac{\pi_{4e}(1) X_4 + \pi_{4e}(2) Y_4 + \pi_{4e}(4)}{-\pi_{4e}(3)} \tag{52}$$

$$c^{e'} = \frac{\pi_{3e}(1) X_2' + \pi_{3e}(2) Y_2' + \pi_{3e}(4)}{-\pi_{3e}(3)} - \frac{\pi_{4e}(1) X_4' + \pi_{4e}(2) Y_4' + \pi_{4e}(4)}{-\pi_{4e}(3)} \tag{53}$$

and $(X_2, Y_2)$, $(X_4, Y_4)$, $(X_2', Y_2')$, and $(X_4', Y_4')$ are the x and y coordinates of the four interception points of the two reflected rays with the two diffusive planes. The depth difference changes to:

$$Z_d^r = T\,|c^e - c^{e'}| \tag{54}$$

From the above equation, it is seen that $c^e$ and $c^{e'}$ are not equal in general unless $\pi_{3e} = \pi_3$ and $\pi_{4e} = \pi_4$. As a result, the depth difference $Z_d$ of the two reconstructed points will be greater than zero with incorrect plane coefficients. To rectify the plane errors caused by the imperfect virtual camera, the following minimum-depth-difference method is proposed:

Step 1: A flat mirror surface is reconstructed with the developed system without re-calibration. The flat mirror is placed at z = 2 in the experiment.

Step 2: The depth difference $Z_d$ of the reconstructed mirror surface is calculated as:

$$Z_d = \max_{i=1 \ldots N} Z_i^r - \min_{i=1 \ldots N} Z_i^r \tag{55}$$

where $i$ is the index of the interception point, $N$ is the total number of interception points, and $Z_i^r$ is the z coordinate of the i-th reconstructed point.

Step 3: Starting from the originally calibrated plane parameters, the optimal parameter set $\theta = \{\pi_3(1), \pi_3(2), \pi_3(3), \pi_3(4), \pi_4(1), \pi_4(2), \pi_4(3), \pi_4(4)\}$ is obtained by:

$$\hat{\theta} = \arg\min_{\theta} (Z_d) \tag{56}$$
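One way to implement the search of Eq. (56) is a simple coordinate descent that scans one plane coefficient at a time over a small interval around its current value. This is a sketch under our own assumptions: the objective (the flat-mirror depth difference $Z_d$ as a function of the eight coefficients) must be supplied by re-running the reconstruction, and the names, spans, and step counts are hypothetical:

```python
import numpy as np

def coordinate_descent(f, theta0, span=1e-3, steps=41, sweeps=3):
    """Minimize f(theta) by scanning one parameter at a time over a
    grid of width 2*span centered on its current value, keeping the
    best candidate, and sweeping the parameters repeatedly."""
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(sweeps):
        for k in range(len(theta)):
            grid = theta[k] + np.linspace(-span, span, steps)
            vals = []
            for g in grid:
                trial = theta.copy()
                trial[k] = g
                vals.append(f(trial))
            theta[k] = grid[int(np.argmin(vals))]
    return theta
```

In the paper's setting, theta0 would be the eight originally calibrated plane coefficients and f would return $Z_d$ from Eq. (55) for a trial coefficient set.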

Searching all 8 dimensions simultaneously would be time-consuming. A better way is to search one dimension at a time until all 8 parameters are found.

7. Results and discussion

Figure 5 shows our established system. An SNF laser is used to project the laser rays onto the specular surface. Plane4 is attached to the beam splitter to obtain better imaging quality given the restriction of laser focus. The front surface of the beam splitter is placed at y = −115 mm, and the thickness of the beam splitter is 3 mm. The diffusive plane, Plane3, is placed at y = −30 mm. Two Dragonfly2 RGB cameras synchronously record images of the two diffusive planes with resolution 480 × 640 at 60 frames/s. During incident ray calibration, the laser rays are projected onto a horizontal mirror plane placed on top of a metric lab jack whose height can be adjusted flexibly.


Fig. 5. Established system.

The SNF laser projects 19 × 19 laser rays, while the minimum needed for the proposed method is 5 × 5 to obtain meaningful depth information of the specular surface. Examples of laser points captured from a flat mirror by camera 1, camera 2, and camera 3 are shown in Figs. 6(a)-6(c), respectively. As can be seen, not all the projected rays can be utilized to measure the depth of the specular surface, because some of the rays are reflected outside the scope of the diffusive plane and the camera. The one-to-one correspondence of the laser points in different camera views needs to be guaranteed online. First, the threshold selection method proposed in [21] is used to segment the image. Then, the points are clustered line by line. The top line in Fig. 6(a) corresponds to the bottom lines in Figs. 6(b)-6(c). The laser points are registered based on the corresponding laser lines and the index of each point within its line.

Fig. 6. Captured laser points by different cameras.
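The line-by-line clustering and index-based registration described above can be sketched as follows (the spot centroids and fixed row gap are hypothetical; real images would first pass through the segmentation of [21]):

```python
import numpy as np

def order_into_grid(points, row_gap):
    """Group detected spot centroids (x, y) into rows by y coordinate
    (consecutive points whose y values differ by less than row_gap
    share a row), then sort each row by x. Matching points across
    cameras by (row index, column index) gives the one-to-one
    registration."""
    pts = sorted(map(tuple, points), key=lambda p: p[1])
    rows, current = [], [pts[0]]
    for p in pts[1:]:
        if p[1] - current[-1][1] < row_gap:
            current.append(p)
        else:
            rows.append(sorted(current))  # sort the finished row by x
            current = [p]
    rows.append(sorted(current))
    return rows
```

Two cameras observing the same grid then agree on the (row, column) index of each spot, up to the known top/bottom flip between views noted in the text.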

After system re-calibration, comparisons of the original plane parameter set and the optimal plane parameter set are shown in Table 1 and Table 2. The third parameter, which corresponds to the Z coordinate, has a large error before re-calibration.

Table 1. Comparison of the original and searched coefficients for Plane3

            π3(1)          π3(2)     π3(3)         π3(4)
  Original  −9.32 × 10^−4  −0.0061   1.83 × 10^−5  −0.0098
  Searched  −9.32 × 10^−4  −0.0061   2.38 × 10^−4  −0.0099

Table 2. Comparison of the original and searched coefficients for Plane4

            π4(1)     π4(2)   π4(3)     π4(4)
  Original  −0.0130   −0.04   0.00084   −1.1685
  Searched  −0.0129   −0.04   0.0062    −1.1729

Convex mirrors with different convexity and maximum depth less than 1 mm were reconstructed by the proposed method and the re-calibrated system, and the results are shown in Figs. 7-9, respectively. A dynamic specular surface (a melting weld pool with an approximate size of 3 mm × 5 mm × 0.5 mm) was reconstructed before and after system re-calibration, and the results are shown in Fig. 10. The first column shows the patterns captured by the two Dragonfly cameras. The second and third columns show the reconstructed surfaces before and after system re-calibration with different polynomial interpolation smoothing factors. As can be seen, the reconstructed specular surface is enlarged at least 10 times without system re-calibration. If the smoothing factor is small, e.g. α = 0.1, the interpolated surface is greatly distorted. The optimal smoothing factor is chosen as α = 0.9 based on comparisons and analysis.

Fig. 7. Reconstructed convex mirror 1: (a) result by the system without re-calibration and smoothing factor α = 0.9; (b) result by the re-calibrated system and smoothing factor α = 0.1; (c) result by the re-calibrated system and smoothing factor α = 0.5; (d) result by the re-calibrated system and smoothing factor α = 0.9. (The scale is mm.)


Fig. 8. Reconstructed convex mirror 2: (a) result by the system without re-calibration and smoothing factor α = 0.9; (b) result by the re-calibrated system and smoothing factor α = 0.1; (c) result by the re-calibrated system and smoothing factor α = 0.5; (d) result by the re-calibrated system and smoothing factor α = 0.9. (The scale is mm.)

Fig. 9. Reconstructed convex mirror 3: (a) result by the system without re-calibration and smoothing factor α = 0.9; (b) result by the re-calibrated system and smoothing factor α = 0.1; (c) result by the re-calibrated system and smoothing factor α = 0.5; (d) result by the re-calibrated system and smoothing factor α = 0.9. (The scale is mm.)


Fig. 10. Reconstructed dynamic specular surface: (a) result by the system without re-calibration and smoothing factor α = 0.9; (b) result by the re-calibrated system and smoothing factor α = 0.1; (c) result by the re-calibrated system and smoothing factor α = 0.5; (d) result by the re-calibrated system and smoothing factor α = 0.9. (The scale is mm.)

To evaluate the reconstruction accuracy quantitatively, a flat mirror was reconstructed. The results are shown in Fig. 11. It can be seen that the reconstructed points are close to the original points. The computed reconstruction errors (MSE, mean squared errors) are 0.30188 µm in the x-direction, 0.59231 µm in the y-direction, 2.3 µm in the z-direction and 3.2 µm in total. The results are compared with those in [6], [7] and our previous work [22]. [6] reported a 48.4 µm average squared error and [7] reported a 28.9 µm average squared error. In our previous work [22], the reported measurement errors are 20.7 µm in the x-direction, 43.3 µm in the y-direction, 27.9 µm in the z-direction and 91.9 µm in total. Thus, the accuracy of the proposed method with system re-calibration is improved by more than an order of magnitude. For the other referenced state-of-the-art methods, no quantitative measurement results were reported. In the MATLAB simulation, the proposed method achieves zero-error reconstruction of noise-free specular surfaces, which is beyond the capability of all the state-of-the-art methods. Figure 12 shows the reconstruction of a simulated flat mirror and a convex mirror in MATLAB; the reconstructed points match the simulated points completely. The error in practical measurement is therefore induced by noise, which indicates that suppressing the noise could increase the accuracy of the proposed method further.
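The per-axis values quoted in micrometres are consistent with a root-mean-squared error between reconstructed and ground-truth point positions. A minimal sketch of such an accuracy check (the function name and the choice of RMSE as the metric are assumptions, not the paper's exact definition):

```python
import math

def rmse_per_axis(recon, truth):
    """Root-mean-squared error along x, y, z and the combined 3-D error."""
    n = len(recon)
    sq = [0.0, 0.0, 0.0]  # summed squared errors per axis
    for p, q in zip(recon, truth):
        for k in range(3):
            sq[k] += (p[k] - q[k]) ** 2
    per_axis = [math.sqrt(s / n) for s in sq]
    total = math.sqrt(sum(sq) / n)  # combined 3-D error
    return per_axis, total
```

For a flat mirror the ground-truth points would come from the known plane, so any residual reflects system noise rather than surface shape.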


Fig. 11. Reconstruction of a flat mirror (the scale is mm)

Fig. 12. Simulation results of reconstructing a flat mirror and a convex mirror in MATLAB

The weld pool reconstructed by the proposed method and those from [6] and [7] are compared visually in Figs. 13(a)-13(c). As can be seen, the weld pool reconstructed by the proposed method is more realistic than those reconstructed by the state-of-the-art methods, since the weld pool during the welding process is very unsteady and its shape should be irregular. Undesirably, the weld pools reconstructed by [6] and [7] appear to be symmetric to some extent.

Fig. 13. Visual comparison of the reconstruction results of a real weld pool by the proposed method with those by the state-of-the-art methods [6] and [7]: (a) result by the proposed method; (b) result by [7]; (c) result by [6]. (The scale is mm)


A dynamic specular surface (a melting weld pool) at different times was reconstructed by the re-calibrated system, and the results are shown in Figs. 14-17. From these results, the change of the dynamic surface is vividly presented, which can be used for scientific study of its characteristics. The computational cost of the weld pool measurement is low because of the small number of points used for the reconstruction; a reconstruction rate of 60 frames/sec could be achieved.

Fig. 14. Reconstructed weld pool at time T2 = 16.7 ms (the scale is mm)

Fig. 15. Reconstructed weld pool at time T2 = 33.4 ms (the scale is mm)

Fig. 16. Reconstructed weld pool at time T2 = 50.1 ms (the scale is mm)


Fig. 17. Reconstructed weld pool at time T2 = 66.8 ms (the scale is mm)

8. Conclusion

Through intercepting the reflection of a projected laser pattern twice, the proposed method gives a closed form solution for each reflected ray and a closed form solution for the corresponding point on the specular surface. The specular surface can thus be reconstructed by one-shot projection. The robustness of the proposed method is attributed to two aspects: the closed form solution and the system re-calibration by the proposed minimum depth difference method. The major contributions of the paper include: 1) based on the previous work [22], an SNF laser instead of the Pico laser projector is used to reconstruct the specular surfaces; the SNF laser has much better luminance and is capable of measuring specular surfaces under harsh circumstances, e.g. the high illumination that defeated the Pico laser projector; 2) polynomial interpolation is used to reconstruct the specular surface instead of fitting a quadric surface, which is more realistic and accurate since some specular surfaces are irregular and cannot be parameterized by a 3D model; 3) the minimum depth difference method is proposed to re-calibrate the system and rectify the coefficient errors that are mainly caused by the imperfect virtual camera when the projected laser rays do not comply strictly with central projection; 4) the proposed method can measure a dynamic specular surface with one-shot projection and closed form analytic solutions. To our knowledge, it is the only method so far that yields closed form solutions and carries out accurate measurement for dynamic specular surfaces. Since a dynamic specular surface is ever changing, state-of-the-art methods that utilize multiple projections will not work, because it is impossible to capture different distorted patterns on the same surface. The methods described in [5]-[7] also achieve one-shot projection; however, they cannot yield closed form analytic solutions and are thus not suitable for precision measurement.
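The core reconstruction step described above, computing the intersection of a known incident ray and the recovered reflected ray, can be sketched as follows. With measurement noise the two rays are in general skew, so the midpoint of their common perpendicular is a standard surrogate for the intersection point; the paper's exact formulation may differ, and the function name is chosen here for illustration.

```python
def ray_intersection(p1, d1, p2, d2):
    """Midpoint of the common perpendicular of rays p1 + t*d1 and p2 + s*d2."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    r = [x - y for x, y in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b  # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    # Closest point on each ray, then their midpoint.
    q1 = [p + t * v for p, v in zip(p1, d1)]
    q2 = [p + s * v for p, v in zip(p2, d2)]
    return [(u + v) / 2 for u, v in zip(q1, q2)]
```

For exactly intersecting rays the midpoint coincides with the true intersection, which is consistent with the zero-error noise-free simulation reported above.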
Acknowledgment

We thank Prof. John Koshel and the anonymous reviewers for their advice.

