
Multi-directional 3D flame chemiluminescence tomography based on lens imaging

Jia Wang,¹ Yang Song,¹,* Zhen-hua Li,¹ Andreas Kempf,² and An-zhi He¹

¹Department of Information Physics and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
²Chair of Fluid Dynamics, Institute for Combustion and Gas Dynamics—IVG, Duisburg-Essen University, Germany
*Corresponding author: [email protected]

Received October 22, 2014; revised December 29, 2014; accepted January 12, 2015; posted February 2, 2015 (Doc. ID 225498); published March 18, 2015

Flame chemiluminescence tomography (FCT) has been widely used in flame diagnostics for three-dimensional (3D), spatially resolved measurements of instantaneous flame geometry and, to some extent, of species concentrations. However, in most studies, tomographic reconstructions are based on a traditional parallel projection model. Due to the light collection characteristics of a lens, a parallel projection model is not appropriate for the practical optical setups that are used for emission imaging, particularly at small F-numbers. Taking the light collection effect of the lens into account, this Letter establishes a complete and novel tomographic theory for a multi-directional tomography system consisting of lens and CCD cameras. A modified camera calibration method is presented first. It determines the exact spatial locations and intrinsic parameters of the cameras. A 3D projection model based on the lens imaging theory is then proposed and integrated into the multiplicative algebraic reconstruction technique (MART). The new approach is demonstrated with a 12-camera system that is used to reconstruct the emission field of a propane flame, thereby resolving space and time. © 2015 Optical Society of America

OCIS codes: (280.1740) Combustion diagnostics; (100.6890) Three-dimensional image processing; (110.6955) Tomographic imaging; (150.1488) Calibration.
http://dx.doi.org/10.1364/OL.40.001231

Flame chemiluminescence is recognized as an important indicator in combustion diagnostics and control [1,2]. Compared to laser diagnostics, chemiluminescence imaging is simple and optically passive, making it attractive for applications in harsh industrial environments; combined with computed tomography, flame diagnostics with 3D spatial and high temporal resolution can be realized.

The implementation of high-resolution, accurate flame chemiluminescence tomography (FCT) requires a sufficient number of projections and the availability of accurate reconstruction algorithms. With the rapidly growing capabilities of optical sensing and computing technologies, FCT setups with multi-directional projections have been widely studied. Based on the imaging system, FCT can be classified into the following categories: (a) custom-made lens and film based multi-directional setups [3]; (b) lens and CCD camera based multi-directional setups [4–15]; and (c) fiber and CCD based multi-directional setups [16]. Of these classes, (b), the lens and CCD camera, offers the highest resolution and is the easiest to set up and operate, giving it the greatest potential for commercialization.

So far, most studies on FCT that use lenses and CCD cameras are based on the parallel projection model, in which the projection is regarded as the integration of a signal along the entire line of sight [7–10]. However, due to the lens, the signal collected by a camera does not result from the integration along parallel rays, as is assumed by the parallel projection approach [17]. A more accurate reconstruction can be achieved if a system-specific projection model is considered.

In recent years, researchers have started to consider different projection models for camera-based tomography. Walsh et al. [18] studied the effect of the light-collecting geometry in Abel inversion reconstruction. They showed how the intensity recorded at a pixel results from signals emitted within two cones, but that the reconstructed region was limited to the camera's nominal depth of field. Floyd [19] examined the projections for an insufficient depth of field by ray tracing, taking the "blur effect" into account. However, his algorithm for calculating the weight matrix was complex and costly. Lin and Ma [14] used point spread functions to describe the projection process, calculating the point spread function for the imaging system by the Monte Carlo method.

In this Letter, we propose a 3D projection model based on lens imaging theory and integrate this model into the multiplicative algebraic reconstruction technique (MART), so that the imaging effect of the lens is inherently considered in the 3D reconstruction process. The algorithm requires that the extrinsic parameters (the position and orientation of the views) and the image distance (the distance between the lens and the CCD) of the system are accurately determined. Some flexible methods to calibrate the cameras' locations and intrinsic parameters have already been studied in the field of machine vision [20,21], but these camera calibration methods were based on a pinhole camera model and did not consider any out-of-focus effects. Our testing has shown that the available methods are accurate for measuring the extrinsic parameters but not for determining the camera's image distance.

This Letter also establishes an FCT system with 12 cameras, as shown in Fig. 1(a). The system consists of lenses (CBC Computar M0814-MP2: focal length 12 mm, maximum aperture 8.4 mm) and grayscale CCD cameras (Allied Vision Technologies Guppy Pro F-125B: 1292 × 964 pixels, pixel size 3.75 μm) that are connected to a computer. Trigger signals are issued from the trigger program in the computer. Using an external trigger card, the 12 cameras are synchronized and triggered to simultaneously capture the projections from 12 views. The cameras were roughly arranged to face the test flame with equal spacing, and then focused on specific points by adjustments with the camera calibration pattern.


Band-pass filters of 431.5 ± 10 nm were placed between the lenses and the CCDs to limit the detected signals to those emitted by CH*.

The coordinate systems that are used in the FCT are shown in Fig. 1(b). Each projection involves three coordinate systems: the world coordinate system $(x_w, y_w, z_w)$, the camera coordinate system $(x, y, z)$, and the image coordinate system $(x', y')$. The origin of the camera coordinates $(x, y, z)$ is at the intersection of the optic axis and the lens. The origin of the image coordinates $(x', y')$ is located at the center of the CCD (the principal point of the CCD). The position and orientation of a camera in space are uniquely defined by the Euler angles $(\psi, \theta, \phi)$ ("yaw," "pitch," and "roll," respectively) and three translations $(T_x, T_y, T_z)$ [20]. Mathematically, the transformation from the world coordinate system $(x_w, y_w, z_w)$ to the camera coordinate system $(x, y, z)$ can be expressed as



$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} r_1 & r_2 & r_3 \\ r_4 & r_5 & r_6 \\ r_7 & r_8 & r_9 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} + \begin{bmatrix} T_x \\ T_y \\ T_z \end{bmatrix}. \tag{1}$$

The focal length of the lens is fixed, and focusing is achieved through altering the image distance $Z_0$. Neglecting lens distortion, an object point and its corresponding image satisfy the pinhole camera model, no matter if the lens is focused or not:

$$x' = Z_0 \frac{x}{z}, \qquad y' = Z_0 \frac{y}{z}. \tag{2}$$

The world coordinates $(x_{wi}, y_{wi}, z_{wi})$ of the sampling points $i = \{1, 2, \ldots, N\}$ are known. Generally, $T_z \neq 0$, and we can get $2N$ linear equations based on Eqs. (1) and (2). In matrix form, the equations can be written as

$$B_{2N \times 11}\, y_{11 \times 1} = b_{2N \times 1}, \tag{3}$$

where

$$B = \begin{bmatrix}
x_{w1} & y_{w1} & z_{w1} & 1 & 0 & 0 & 0 & 0 & -x'_1 x_{w1} & -x'_1 y_{w1} & -x'_1 z_{w1} \\
0 & 0 & 0 & 0 & x_{w1} & y_{w1} & z_{w1} & 1 & -y'_1 x_{w1} & -y'_1 y_{w1} & -y'_1 z_{w1} \\
\vdots & & & & & & & & & & \vdots \\
x_{wN} & y_{wN} & z_{wN} & 1 & 0 & 0 & 0 & 0 & -x'_N x_{wN} & -x'_N y_{wN} & -x'_N z_{wN} \\
0 & 0 & 0 & 0 & x_{wN} & y_{wN} & z_{wN} & 1 & -y'_N x_{wN} & -y'_N y_{wN} & -y'_N z_{wN}
\end{bmatrix}, \tag{4}$$

$$y = \begin{bmatrix} \frac{Z_0 r_1}{T_z} & \frac{Z_0 r_2}{T_z} & \frac{Z_0 r_3}{T_z} & \frac{Z_0 T_x}{T_z} & \frac{Z_0 r_4}{T_z} & \frac{Z_0 r_5}{T_z} & \frac{Z_0 r_6}{T_z} & \frac{Z_0 T_y}{T_z} & \frac{r_7}{T_z} & \frac{r_8}{T_z} & \frac{r_9}{T_z} \end{bmatrix}^T, \tag{5}$$

and

$$b = \begin{bmatrix} x'_1 & y'_1 & \cdots & x'_N & y'_N \end{bmatrix}^T. \tag{6}$$
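Before turning to the solution of Eq. (3), note that Eqs. (1) and (2) together define the forward mapping from a world point to its ideal, in-focus image coordinates. A minimal sketch of this mapping is given below; the function name and array conventions are illustrative assumptions, not taken from the Letter.

```python
import numpy as np

def world_to_image(pt_w, R, T, Z0):
    """Map a world point to image coordinates via Eqs. (1) and (2).

    R  -- 3x3 rotation matrix with entries r1..r9 (rows r1-r3, r4-r6, r7-r9)
    T  -- translation vector (Tx, Ty, Tz)
    Z0 -- image distance between the lens and the CCD
    """
    x, y, z = R @ np.asarray(pt_w) + np.asarray(T)   # Eq. (1): world -> camera frame
    return Z0 * x / z, Z0 * y / z                    # Eq. (2): pinhole projection
```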


Fig. 1. (a) FCT system with 12 cameras as seen from above. (b) The coordinate systems used.

The linear Eq. (3) describes the relationship between the world coordinates of the sampling points and the corresponding image coordinates, where $y$ is unknown. One needs at least $N = 6$ noncoplanar sampling points for a fully determined system of equations, but more points should be used to reduce errors. The resulting system of equations is then over-determined, so that a least-squares method can be used to solve for $y$. As the parameters in the rotation matrix satisfy Eq. (7), the coefficient $a = T_z / Z_0$ can be computed. (Each term in Eq. (7) is linear in $r_1$–$r_6$, so evaluating its left-hand side on the scaled entries of the solved $y$ yields $2 Z_0 / T_z$.) Then, $r_1$–$r_6$, $T_x$, and $T_y$ can be obtained from $a \cdot y$. Based on the orthonormal and right-handed properties of the rotation matrix, $r_7$, $r_8$, $r_9$ can be calculated:

$$\sqrt{(r_1 + r_5)^2 + (r_2 - r_4)^2} + \sqrt{(r_1 - r_5)^2 + (r_2 + r_4)^2} = 2. \tag{7}$$

The camera is preset to focus on the sample point $(x_{wf}, y_{wf}, z_{wf})$, and the coordinates of the focused point in the camera coordinate system $(x_f, y_f, z_f)$ can be computed from Eq. (1). The image position of the focused point can be determined by the lens imaging equation:

$$\frac{x'_f}{x_f} = \frac{y'_f}{y_f} = \frac{Z_0}{z_f}, \qquad \frac{1}{z_f} + \frac{1}{Z_0} = \frac{1}{f_{\text{lens}}}. \tag{8}$$

Using Eq. (8), the image distance of the camera $Z_0$ can be computed as follows:

$$Z_0 = f_{\text{lens}} + f_{\text{lens}} \frac{x'_f}{r_1 x_{wf} + r_2 y_{wf} + r_3 z_{wf} + T_x}. \tag{9}$$

Finally, $T_z$ is computed by $T_z = a \cdot Z_0$.
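The calibration thus reduces to one linear least-squares solve, Eq. (3), followed by closed-form recovery of the extrinsic parameters and the image distance, Eqs. (7)–(9). The sketch below illustrates these steps with NumPy; it assumes measured point correspondences and the measured image coordinate $x'_f$ of the focus mark, and all names are illustrative rather than the authors' implementation.

```python
import numpy as np

def calibrate_camera(world_pts, image_pts, f_lens, focus_world, focus_image_x):
    """Sketch of the calibration of Eqs. (3)-(9).

    world_pts     -- (N, 3) world coordinates of the sampling points (N >= 6, noncoplanar)
    image_pts     -- (N, 2) measured image coordinates (x', y')
    f_lens        -- focal length of the lens
    focus_world   -- (xwf, ywf, zwf), world coordinates of the preset focus point
    focus_image_x -- x'_f, measured image x-coordinate of the focus point
    """
    N = len(world_pts)
    B = np.zeros((2 * N, 11))
    b = np.zeros(2 * N)
    for i, ((xw, yw, zw), (xp, yp)) in enumerate(zip(world_pts, image_pts)):
        B[2 * i]     = [xw, yw, zw, 1, 0, 0, 0, 0, -xp * xw, -xp * yw, -xp * zw]
        B[2 * i + 1] = [0, 0, 0, 0, xw, yw, zw, 1, -yp * xw, -yp * yw, -yp * zw]
        b[2 * i], b[2 * i + 1] = xp, yp

    # Least-squares solution of the over-determined system B y = b, Eq. (3).
    y, *_ = np.linalg.lstsq(B, b, rcond=None)

    # Eq. (7) is homogeneous of degree one in r1..r6, so evaluating its
    # left-hand side on the scaled entries of y gives 2 * Z0 / Tz, which
    # fixes the coefficient a = Tz / Z0.
    S = np.hypot(y[0] + y[5], y[1] - y[4]) + np.hypot(y[0] - y[5], y[1] + y[4])
    a = 2.0 / S

    row1, row2 = a * y[0:3], a * y[4:7]   # (r1, r2, r3) and (r4, r5, r6)
    Tx, Ty = a * y[3], a * y[7]
    row3 = np.cross(row1, row2)           # (r7, r8, r9), right-handed rotation

    # Eq. (9): image distance from the magnification at the focus point.
    xf = row1 @ np.asarray(focus_world) + Tx
    Z0 = f_lens + f_lens * focus_image_x / xf
    Tz = a * Z0                           # finally, Tz = a * Z0

    return np.vstack([row1, row2, row3]), np.array([Tx, Ty, Tz]), Z0
```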


Fig. 2. 3D camera calibration pattern captured by cameras (a) 2, (b) 5, and (c) 8.

Based on the theory analyzed above, a 3D pattern with dot arrays is used for the camera calibration. Some marks on the surface indicate that the cameras are preset to focus on specified points. The calibration pattern images recorded by three cameras are shown in Fig. 2. The positions of the imaged sampling points on the image plane are determined by computing the center of gravity of each dot. The cameras' parameters are then calculated according to the above calibration method, together with the corresponding Euler angles [20], and the results are given in Table 1. The results show that the pitch and roll angles are all near zero, which agrees with the fact that the cameras are almost perpendicular to the horizontal plane. The yaw angles are almost equally spaced. The results confirm that the algorithm achieves a good accuracy in camera calibration.

Table 1. Camera Calibration Results

Cam. #   Yaw (°)   Pitch (°)   Roll (°)   Tx (mm)   Ty (mm)   Tz (mm)   Im. dist. (mm)
1        −81.00     3.78       −5.17       −0.6       1.3      566.8      12.281
2        −69.39     4.69       −2.26      −18.0     −11.4      565.7      12.282
3        −52.89     1.62       −1.25        1.4      −4.0      561.8      12.283
4        −38.65     1.79       −1.06       −7.1      −8.0      562.8      12.275
5        −22.15     1.63       −1.21       11.9      −7.1      566.9      12.275
6         −7.28     1.66       −1.09       13.8      −8.9      565.4      12.276
7          6.55     1.36        0.10        8.2      −6.4      573.4      12.274
8         21.34     2.66        0.31        5.7     −15.8      572.8      12.271
9         34.82     1.09        0.46       −2.6      −3.0      569.7      12.272
10        49.45     1.50        1.24       −3.0      −8.1      568.9      12.279
11        66.25     3.26        0.74       19.7      −8.0      577.0      12.278
12        71.66     3.68        4.58      −50.0      −7.4      570.5      12.281

The 3D distribution of the chemiluminescence emission of the flame is denoted by $f(x_w, y_w, z_w)$ in the world coordinate system and shall be reconstructed. The 3D reconstruction region is divided into $N$ discrete voxels. The tomography system has $P$ cameras with $Q$ pixels per camera. Neglecting self-absorption by the flame, the projection to the cameras can be expressed as

$$I(p, q) = \sum_{n=1}^{N} f(x_{wn}, y_{wn}, z_{wn}) \cdot w(x_{wn}, y_{wn}, z_{wn}, p, q). \tag{10}$$

In this equation, $p = \{1, 2, \ldots, P\}$ denotes the index of the camera, $q = \{1, 2, \ldots, Q\}$ is the index of the pixel, and $I(p, q)$ is the projection value on the $q$th pixel of the $p$th camera. The weight factor $w(x_{wn}, y_{wn}, z_{wn}, p, q)$ denotes the contribution coefficient of the $n$th voxel, located at $(x_{wn}, y_{wn}, z_{wn})$, to the $q$th pixel of the $p$th camera. In FCT, the projection intensities $I$ are directly captured by the cameras. In order to accurately reconstruct the object $f$, the accurate weight factors $w$ for the tomography system must be determined first.
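To make the discrete model concrete, the following sketch evaluates Eq. (10) as a sparse matrix-vector product and applies a standard MART update of the kind named in this Letter; the weight matrix would be filled using Eqs. (11) and (12) below. The relaxation factor, iteration count, and weighted-exponent update are common textbook choices and are assumptions here, not details reported by the authors.

```python
import numpy as np
from scipy.sparse import csr_matrix

def mart_reconstruct(W, I, n_voxels, n_iter=50, relax=1.0):
    """Multiplicative ART for the forward model I = W f of Eq. (10).

    W     -- (P*Q, N) weight matrix; row k holds w(., p, q) for one camera pixel
    I     -- (P*Q,) stacked projection values of all cameras
    relax -- relaxation factor in (0, 1], an illustrative choice
    """
    W = csr_matrix(W)
    f = np.ones(n_voxels)              # strictly positive initial guess
    for _ in range(n_iter):
        for k in range(W.shape[0]):    # one multiplicative update per pixel
            row = W.getrow(k)
            if row.nnz == 0 or I[k] <= 0:
                continue
            proj = row.dot(f)[0]       # forward projection of pixel k, Eq. (10)
            if proj <= 0:
                continue
            # Scale the voxels seen by this pixel toward agreement with the
            # measurement; the exponent weights each voxel by its contribution.
            f[row.indices] *= (I[k] / proj) ** (relax * row.data)
    return f
```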

Fig. 3. Image of a single voxel on the camera. (Diagram: a voxel at $(x_w, y_w, z_w)$ is imaged through the lens, at image distance $Z_0$, onto the image plane as a blur circle of radius $r$ centered at $(X_c, Y_c)$.)

The projection process of a single voxel through the imaging system of the camera is shown in Fig. 3. For a voxel located at $(x_w, y_w, z_w)$, its position $(x, y, z)$ in the camera coordinate system can be computed by Eq. (1); the image point $(x_{\text{img}}, y_{\text{img}}, z_{\text{img}})$ can be determined according to Eq. (8). As the image distance of the camera $Z_0$ is fixed, object points beyond the focal plane are not imaged as sharp points, but instead as blurred circles on the camera's image plane $(x', y')$. The blurry circle is centered at $(X_c, Y_c)$ and has a radius $r$ that depends on the aperture $D$ of the camera:

$$r = \frac{|z_{\text{img}} - Z_0|}{2 z_{\text{img}}} D, \qquad X_c = x_{\text{img}} \frac{Z_0}{z_{\text{img}}}, \qquad Y_c = y_{\text{img}} \frac{Z_0}{z_{\text{img}}}. \tag{11}$$

The next step must compute the intersection areas of the blurry circle with the pixels on the CCD plane. Two distinct cases can occur: (a) the diameter of the blurry circle is greater than the pixel size, or (b) the diameter is smaller. For simplification, we assume that the pixels located on the edge of the blurry circle are circles themselves, and that the intersection area varies linearly with the separation $l$ of the two circle centers [19]. The intersection areas can be computed by Eq. (12), where $d_p$ is the pixel size, $A_i$ is the intersection area, $A_b$ is the area of the blur circle, and $A_p$ is the area of a pixel:

$$\frac{A_i}{A_b} = \begin{cases} \dfrac{A_p}{A_b}, & l \le r - d_p/2 \\[4pt] \dfrac{r + d_p/2 - l}{d_p} \cdot \dfrac{A_p}{A_b}, & r - d_p/2 < l < r + d_p/2 \\[4pt] 0, & l \ge r + d_p/2 \end{cases} \qquad (r \ge d_p/2). \tag{12a}$$
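A compact implementation of the weight computation for a single voxel follows directly from Eqs. (11) and (12a). In this sketch, pixel centers are indexed on a grid whose origin coincides with the principal point; that indexing, like the function name, is an illustrative assumption.

```python
import numpy as np

def voxel_pixel_weights(x_img, y_img, z_img, Z0, D, dp):
    """Fraction of a voxel's light reaching each pixel, Eqs. (11) and (12a).

    Assumes case (a), r >= dp/2 (blur circle wider than a pixel).
    (x_img, y_img, z_img) -- image point of the voxel, from Eqs. (1) and (8)
    Z0 -- image distance, D -- lens aperture, dp -- pixel size
    """
    # Eq. (11): radius and center of the blur circle on the image plane.
    r = abs(z_img - Z0) / (2.0 * z_img) * D
    Xc, Yc = x_img * Z0 / z_img, y_img * Z0 / z_img

    Ab = np.pi * r**2                  # area of the blur circle
    Ap = dp**2                         # area of one pixel

    weights = {}
    # Scan pixel centers inside the bounding box of the blur circle.
    ii = np.arange(np.floor((Xc - r) / dp) - 1, np.ceil((Xc + r) / dp) + 2)
    jj = np.arange(np.floor((Yc - r) / dp) - 1, np.ceil((Yc + r) / dp) + 2)
    for i in ii:
        for j in jj:
            l = np.hypot(i * dp - Xc, j * dp - Yc)   # center separation
            if l <= r - dp / 2:                      # pixel fully inside
                weights[(int(i), int(j))] = Ap / Ab
            elif l < r + dp / 2:                     # edge pixel: linear falloff
                weights[(int(i), int(j))] = (r + dp / 2 - l) / dp * Ap / Ab
            # pixels with l >= r + dp/2 lie outside and get zero weight
    return weights
```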
