Microscopy, 2014, Vol. 63, No. 5, 357–369
doi: 10.1093/jmicro/dfu021
Advance Access Publication Date: 17 June 2014

Article

A new field-of-view autotracking method for online tomography reconstruction based on back-projected ray image cross-correlation

Sachihiko Tomonaga1, Misuzu Baba2, Sadao Yamazaki3, and Norio Baba1,*

1Major of Informatics, Graduate School, Kogakuin University, 2665-1 Nakano, Hachioji, Tokyo 192-0015, Japan, 2Research Institute for Science and Technology, Kogakuin University, 2665-1 Nakano, Hachioji, Tokyo 192-0015, Japan, and 3Major of Electrical Engineering and Electronics, Graduate School, Kogakuin University, 2665-1 Nakano, Hachioji, Tokyo 192-0015, Japan

*To whom correspondence should be addressed. E-mail: [email protected]

Received 30 December 2013; Accepted 15 May 2014

Abstract

We devised a new field-of-view autotracking method for online tomography reconstruction based on the cross-correlation between a pair of neighbouring 'back-projected ray images' in a specimen tilt sequence. Each ray image is calculated from a single projection image by normal filtered back-projection restricted to the cross-sectional plane. This ray-image matching can reliably track the field-of-view because a pair of neighbouring ray images cross-correlates most strongly at the position of the existing three-dimensional object. Online experiments using real specimens demonstrated successful autotracking with high accuracy, and online tomograms were obtained immediately after the final tracking at the last tilt angle.

Key words: electron tomography, field-of-view autotracking, automatic alignment, back-projected ray image, online tomography

Introduction

Electron tomography is an important and essential technique for analysing nanoscale structures, and it has been used extensively in various applications in the physical, materials and biological sciences. Electron tomography currently appears to be a generally established technique; in practice, however, a tomogram cannot be observed immediately after the acquisition of a series of specimen tilt images, but only after post-processing of the tilt-series alignment, which often requires a substantial amount of time. Moreover, for general specimens, automatic acquisition of the tilt series is difficult because field-of-view tracking frequently fails as the tilt angle or specimen thickness increases; autofocusing also fails for similar reasons [1,2]. Therefore, compared with medical computed tomography equipment, an electron tomography system integrated into a TEM/STEM is inferior from the viewpoint of functionality, and this functionality has yet to be successfully achieved.



In this study, we focus on the improvement of the field-of-view autotracking technique for the purpose of online tomography reconstruction and propose a new alternative technique. If the field-of-view is completely auto-tracked at high speed while the specimen is tilted, the tomogram can be obtained online by relying on computer power. (In our previous report [3], for example, a three-dimensional reconstruction with filtered back-projection (FBP) was calculated in a few tens of seconds by a supercomputer system.) With increases in the tracking speed, in situ tomographic observations for analysing dynamic behaviour might become feasible in the future.

In conventional tracking methods, a captured-image matching technique is generally used, in which the image cross-correlation function (XCF) between neighbouring serial images in the specimen tilt series is calculated, and the field-of-view is tracked according to the maximum XCF position. However, this method suffers from the weakness that its accuracy decreases when it is applied to thick specimens [1]. The tracking technique itself is limited as long as it relies on image-pattern matching, because the projection image pattern changes more or less with the specimen tilt. Feature-based alignment methods that track local image feature points through a tilt series have been proposed [4–6], and sophisticated methods for fiducial markers such as colloidal gold have also been proposed [7–9]. Some sophisticated methods not only track the fiducial markers but also predict where the markers will project in the next image of the series for accurate tracking. Although these methods have the potential to become usable as online autotracking systems, they are at present used primarily for post-processing after the acquisition of the entire tilt series because they have been developed from analytical algorithms. In our previous report, we also devised a new automatic image alignment method based on volume data cross-correlation [3,10]. However, that method is also intended for post-processing because it requires iterative processing. Therefore, it may be difficult to adapt these methods to online autotracking use even if they are restructured.

In contrast, the method proposed in this paper uses a so-called 'back-projected ray image' instead of a specimen tilt image. The back-projected ray image is a cross-section image calculated, during reconstruction, from a single projection image only. As a result of a study on such 'ray images', the quality and accuracy of the cross-correlation between a pair of neighbouring ray images in the tilt series were observed to be very high compared with those between a pair of projection images. Therefore, taking advantage of the high reliability of ray-image matching, we have devised a new autotracking method. We applied this method to real specimens in online experiments using a TEM and thereby demonstrated its successful performance. The method is simple, and because three-dimensional fast Fourier transform (FFT) and inverse fast Fourier transform (IFFT) algorithms can be used, high-speed processing is expected with the available computing power.

Theoretical basis

In three-dimensional tomography, an object is reconstructed by summing all back-projection volumes (voxel intensities) obtained from all filtered projection images, where each volume corresponds to a single projection image (a tilt image) [11]. This reconstruction method is referred to as FBP. (For simplicity, the tilt axis is hereafter considered to be coincident with the y-axis.) FBP is formulated using O(X, Y, Z), Bn(X, Y, Z; φn) and gn(x, y; φn) to denote a three-dimensional object, a back-projection volume and a filtered projection image with a weighting function proportional to the frequency [11], respectively:

$$O(X, Y, Z) = \lim_{n \to \infty} \sum_{n} B_n(X, Y, Z; \varphi_n),$$
$$B_n(X, Y, Z; \varphi_n) = g_n(x, y; \varphi_n), \qquad x = \cos\varphi_n \cdot X - \sin\varphi_n \cdot Z, \quad y = Y, \tag{1}$$

where (X, Y, Z) is a three-dimensional coordinate in the volume, φn is the tilt angle of the specimen, n indicates the specimen tilt order, and (x, y) is an image coordinate. In general, the tilt angle is increased in equal steps, φn+1 = φn + Δφ.
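As an illustration of Eq. (1) restricted to the (X, Z) plane, the following is a minimal NumPy sketch (not the authors' implementation) of how a single back-projected ray image Bn(X, Z; φn) might be computed from one projection line; the ramp (frequency-proportional) filter, the nearest-neighbour interpolation and the assumption that the tilt axis passes through the centre of the volume are simplifications introduced here.

```python
import numpy as np

def ramp_filter(projection_1d):
    """Apply a frequency-proportional (ramp) weighting to one projection line."""
    freqs = np.fft.fftfreq(projection_1d.shape[0])          # cycles per pixel
    return np.real(np.fft.ifft(np.fft.fft(projection_1d) * np.abs(freqs)))

def back_projected_ray_image(projection_1d, phi_deg, size, thickness):
    """Back-project one filtered projection line into the (X, Z) plane,
    i.e. compute a single 'ray image' B_n(X, Z; phi_n) of Eq. (1)."""
    g = ramp_filter(projection_1d)
    phi = np.deg2rad(phi_deg)
    # Coordinates centred on the tilt axis (assumed to lie at the volume centre).
    X, Z = np.meshgrid(np.arange(size) - size / 2,
                       np.arange(thickness) - thickness / 2, indexing='ij')
    x = np.cos(phi) * X - np.sin(phi) * Z                    # Eq. (1) projection geometry
    idx = np.rint(x + projection_1d.shape[0] / 2).astype(int)
    valid = (idx >= 0) & (idx < projection_1d.shape[0])
    B = np.zeros((size, thickness))
    B[valid] = g[idx[valid]]          # every voxel along a ray receives the ray value
    return B
```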

Fig. 1. Model calculations of FBP reconstruction in the (X–Z) plane, demonstrating that the reconstruction is a summation of 'back-projected ray images', each of which is calculated from a single filtered projection distribution. (a) A simple model (scale bar: 50 pixels), (b) a reconstruction from seven directions (±60° tilt range), chosen to clarify the individual 'ray images', and (c) a reconstruction with the practical parameters Δφ = 2° and range ±60°.




Figure 1 shows model calculations of the TEM tomography reconstruction in the (X–Z) cross-section. Figure 1a shows a simple model, and Fig. 1b and c show sample reconstruction results obtained when the tilt angle was changed in steps of Δφ = 20° and 2°, respectively. In this calculation, the tilt angle range was limited to ±60°, as under normal TEM tomography conditions. As is clearly shown in Fig. 1b, each back-projection image Bn(X, Z; φn) forms a line-like image. In this study, we refer to such an image as a 'back-projected ray image', or simply a 'ray image', and we have conducted various studies on such images.

We now consider an arbitrarily chosen pair of ray images and the cross-correlation between them. The set shown in Fig. 2a and b is such a pair of back-projected ray images for the model in Fig. 1a, where the two tilt angles are 20° and 25°. Figure 2e shows the normalized cross-correlation coefficient, RNCC, as a function of the image shift variable, xs, calculated using Eq. (2):

$$R_{\mathrm{NCC}}(x_s) = \frac{\int_{-T/2}^{T/2}\!\int_{X_0}^{X_1} B_n(X, Z; \varphi_n)\, B_{n+1}(X + x_s\cos\varphi_{n+1},\, Z + x_s\sin\varphi_{n+1};\, \varphi_{n+1})\, dX\, dZ}{\sqrt{\int_{-T/2}^{T/2}\!\int_{X_0}^{X_1} B_n^2(X, Z; \varphi_n)\, dX\, dZ}\,\sqrt{\int_{-T/2}^{T/2}\!\int_{X_0}^{X_1} B_{n+1}^2(X + x_s\cos\varphi_{n+1},\, Z + x_s\sin\varphi_{n+1};\, \varphi_{n+1})\, dX\, dZ}} \tag{2}$$

where T is the thickness along the Z-axis, and X0 to X1 is the integration range along the X-axis. In the case of Fig. 2e, φn = 20° and φn+1 = 25°. For a fast calculation, the well-known FFT method based on the convolution theorem of the Fourier transform was used; therefore, the integration ranges T and (X1 − X0) were set to 2^N pixels, where N is a positive integer. After obtaining the two-dimensional map of RNCC as a function of X and Z using the FFT method, we extracted the line profile along the xs-axis. Figure 2c and d shows overlapping images of the two ray images, with image Bn+1(X, Z; 25°) moved along the xs-axis. Figure 2f and g shows similar overlapping images for φn = 20° and φn+1 = 21°, and Fig. 2h shows the resulting RNCC profile. As clearly revealed by a comparison of Fig. 2e with Fig. 2h, a sharper maximum profile is obtained for a smaller difference angle, Δφ. As additional factors, the sharpness is degraded by a thicker integration range T, as shown in Fig. 2i and j, and by a greater tilt angle, φn.

As described above, if the parameters Δφ and T are properly selected, a ray-image cross-correlation can consistently track the field-of-view because the ray lines originally emanate from the object. Table 1 shows the relationship between the parameters (Δφ and T) and the matching error calculated with the model depicted in Fig. 1a; it indicates the aforementioned tendency and provides approximate reference data for the selection of the parameters. Ideal RNCC profiles can also be derived by limiting the integration domain to only the areas where an object exists. Although these calculations were performed with a simple model, they are meaningful for demonstrating the limiting profiles as examples. The calculation equation is as follows:

$$R_{\mathrm{NCC}}(x_s) = \frac{\int_{D} B_n(X, Z; \varphi_n)\, B_{n+1}(X + x_s\cos\varphi_{n+1},\, Z + x_s\sin\varphi_{n+1};\, \varphi_{n+1})\, ds}{\sqrt{\int_{D} B_n^2(X, Z; \varphi_n)\, ds}\,\sqrt{\int_{D} B_{n+1}^2(X + x_s\cos\varphi_{n+1},\, Z + x_s\sin\varphi_{n+1};\, \varphi_{n+1})\, ds}} \tag{3}$$

where D is the integration domain and ds is the small elementary area. Figure 3 shows the calculated ideal profiles corresponding to the cases in Fig. 2e and h. For Δφ = 5°, because the integration domain is limited, the maximum profile in Fig. 3d is very sharp even though Δφ is not small. A comparison of Figs. 3c and 2h reveals that the condition Δφ = 1° is close to the ideal case. However, a comparison of Figs. 3d and 2e also reveals that if T is chosen to be as small as possible, the accuracy of the cross-correlation becomes sufficient even though Δφ is not small.
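For illustration only, the sketch below outlines how the FFT-based evaluation of Eq. (2) and the extraction of the profile along xs might look in NumPy; as simplifications, it normalizes by the global norms of the two images rather than by the shifted integration windows of Eq. (2), and it uses circular (periodic) correlation over the 2^N-pixel window without the zero-padding a production implementation might add.

```python
import numpy as np

def ncc_map(ray_a, ray_b):
    """Circular cross-correlation map between two ray images via FFT,
    normalized by the global norms of the two images (a simplification of Eq. (2))."""
    A = np.fft.fft2(ray_a)
    B = np.fft.fft2(ray_b)
    corr = np.real(np.fft.ifft2(A * np.conj(B)))
    corr = np.fft.fftshift(corr)                      # zero shift at the array centre
    return corr / (np.linalg.norm(ray_a) * np.linalg.norm(ray_b))

def profile_along_xs(corr, phi_next_deg):
    """Read the correlation profile along the xs direction, i.e. perpendicular to
    the projection direction of the second ray image (tilt angle phi_next)."""
    phi = np.deg2rad(phi_next_deg)
    cx, cz = corr.shape[0] // 2, corr.shape[1] // 2
    xs = np.arange(-corr.shape[0] // 2, corr.shape[0] // 2)
    xi = np.clip(np.rint(cx + xs * np.cos(phi)).astype(int), 0, corr.shape[0] - 1)
    zi = np.clip(np.rint(cz + xs * np.sin(phi)).astype(int), 0, corr.shape[1] - 1)
    return xs, corr[xi, zi]

# Usage sketch: the tracked in-plane shift is the xs value at the profile maximum,
# e.g. xs, prof = profile_along_xs(ncc_map(B20, B25), 25.0); xs_hat = xs[np.argmax(prof)]
```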





Fig. 2. Samples of the back-projected ray image cross-correlation with the model of Fig. 1a. (a and b) A pair of ray images; (c and d) superimpositions of the two images, with image (b) placed at a position away from image (a) and at the most correlated position, respectively; (e) the RNCC cross-correlation distribution along the xs-axis perpendicular to the projection direction of φ = 25° (image (b)); (f–h) similar superimpositions and the RNCC distribution for Δφ = 1°, which shows a sharper peak profile than the case in (e); and (i and j) a superimposition at the most correlated position with T increased to 512 pixels, and the corresponding RNCC distribution, which shows a duller peak profile even for Δφ = 1°.

Table 1. RNCC maximum position error (in pixels) calculated with the model of Fig. 1a

Parameters          Δφ (°):  1   2   3   4   5   6   7   8   9  10  12  14  16  18  20  22  24  26  28  30
φ: 60°, T: 128 [pixel]       0   1   1   1   2   1   –   –   –   –   –   –   –   –   –   –   –   –   –   –
φ: 60°, T: 64                0   0   0   0   1   1   1   1   0   0  −2   –   –   –   –   –   –   –   –   –
φ: 20°, T: 128               0   0   0   0   0   0   1   0   0  −1   0   1   1  −1   1   3   –   –   –   –
φ: 20°, T: 64                0   0   0   0   0   0   0   0   1   1   0   0  −1  −1  −1  −1  −1  −1  −2  −3

For each parameter set, the calculation was stopped and Δφ was not increased further (entries marked '–') once the RNCC central peak profile became unclear.


Fig. 3. Ideal RNCC distributions calculated by limiting the integration domain to the object area (Eq. (3)), albeit for the simple model of Fig. 1a. (a and b) Samples of ray images limited to the object area only, for φ = 20° and 25°, respectively; (c and d) the calculated RNCC distributions for the tilt-angle pairs (20°, 21°) and (20°, 25°), i.e. Δφ = 1° and 5°, respectively.

Method

Based on the theoretical study discussed above, we propose a new field-of-view autotracking method for online reconstruction. As described above, a back-projected ray image cross-correlation has high reliability and accuracy if the parameters T and Δφ are chosen appropriately. Relying on this high reliability, we constructed the flow diagram shown in Fig. 4. The main part of the process is the calculation of the cross-correlation RNCC between a pair of back-projected ray volumes [Bn(X, Y, Z; φn) and Bn+1(X, Y, Z; φn+1)] calculated from a pair of neighbouring tilt images. Equation (2) can be expanded for the volume data as follows:

$$R_{\mathrm{NCC}}(x_s, y_s) = \frac{\int_{-T/2}^{T/2}\!\int_{Y_0}^{Y_1}\!\int_{X_0}^{X_1} B_n(X, Y, Z; \varphi_n)\, B_{n+1}(X + x_s\cos\varphi_{n+1},\, Y + y_s,\, Z + x_s\sin\varphi_{n+1};\, \varphi_{n+1})\, dX\, dY\, dZ}{\sqrt{\int_{-T/2}^{T/2}\!\int_{Y_0}^{Y_1}\!\int_{X_0}^{X_1} B_n^2(X, Y, Z; \varphi_n)\, dX\, dY\, dZ}\,\sqrt{\int_{-T/2}^{T/2}\!\int_{Y_0}^{Y_1}\!\int_{X_0}^{X_1} B_{n+1}^2(X + x_s\cos\varphi_{n+1},\, Y + y_s,\, Z + x_s\sin\varphi_{n+1};\, \varphi_{n+1})\, dX\, dY\, dZ}} \tag{4}$$

where (xs, ys) are the two-dimensional image-shift variables used to align the specimen tilt image. With this expansion, three-dimensional FFT and IFFT processing of the volume data is used for a fast calculation: the three-dimensional cross-correlation volume RNCC(X, Y, Z) is calculated once, and RNCC(xs, ys) is subsequently extracted from the (xs, ys) plane perpendicular to the projection direction inside the RNCC(X, Y, Z) volume (Fig. 5).

This field-of-view autotracking can be started at any tilt angle, and the method continues to track the field-of-view once per tilt image:
(a) After the autofocusing process, the first image is captured, and the back-projected ray volume, denoted by B1(X, Y, Z), is calculated. At the same time, B1(X, Y, Z) is stored in a tomogram three-dimensional space I(X, Y, Z).
(b) The specimen is then tilted, and autofocusing is performed. The image is captured, and the back-projected ray volume, denoted by B2(X, Y, Z), is calculated.
(c) The cross-correlation RNCC(xs, ys) between B1(X, Y, Z) and B2(X, Y, Z) is calculated using the three-dimensional FFT and IFFT methods mentioned above.
(d) According to the maximum position of RNCC, the image is shifted using the TEM digital control program functions, via a linked external PC, so that the peak position moves to the image centre. At the same time, B2(X, Y, Z) is moved to align with B1(X, Y, Z), and B2(X, Y, Z) is added to I(X, Y, Z).
(e) B1(X, Y, Z) is then replaced with B2(X, Y, Z), and processes (b) through (e) are repeated until the last tilt angle is reached. Finally, when the tracking is finished, the three-dimensional tomogram reconstruction is complete.
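As a minimal sketch of the tracking loop in steps (a)–(e), under assumptions that go beyond the text: the microscope-control callables (capture_image, tilt_to, autofocus, apply_image_shift) and ray_volume are hypothetical placeholders for the instrument interface and the ray-volume calculation, the correlation is the plain circular FFT correlation rather than the normalized form of Eq. (4), and the alignment is applied with whole-pixel accuracy only.

```python
import numpy as np

def peak_shift(b_prev, b_next, angle_deg):
    """Circular FFT cross-correlation of two ray volumes; returns the in-plane shift
    (xs, ys) read off along the plane perpendicular to the projection direction.
    (Sign conventions are illustrative; Eq. (4) uses a normalized correlation.)"""
    corr = np.real(np.fft.ifftn(np.fft.fftn(b_prev) * np.conj(np.fft.fftn(b_next))))
    corr = np.fft.fftshift(corr)
    dx, dy, dz = np.array(np.unravel_index(np.argmax(corr), corr.shape)) - np.array(corr.shape) // 2
    phi = np.deg2rad(angle_deg)
    return dx * np.cos(phi) + dz * np.sin(phi), float(dy)

def track_and_reconstruct(angles, capture_image, tilt_to, autofocus,
                          apply_image_shift, ray_volume):
    """Online loop following steps (a)-(e). The control callables and
    ray_volume(image, angle) -> B(X, Y, Z) must be supplied by the caller."""
    tilt_to(angles[0]); autofocus()
    b_prev = ray_volume(capture_image(), angles[0])        # step (a)
    tomogram = b_prev.copy()                               # I(X, Y, Z)
    for angle in angles[1:]:
        tilt_to(angle); autofocus()                        # step (b)
        b_next = ray_volume(capture_image(), angle)
        xs, ys = peak_shift(b_prev, b_next, angle)         # step (c)
        apply_image_shift(-xs, -ys)                        # step (d): re-centre the field-of-view
        phi = np.deg2rad(angle)
        b_next = np.roll(b_next, (-int(round(xs * np.cos(phi))),   # align B2 with B1
                                  -int(round(ys)),
                                  -int(round(xs * np.sin(phi)))), axis=(0, 1, 2))
        tomogram += b_next                                 # accumulate the aligned ray volume
        b_prev = b_next                                    # step (e)
    return tomogram                                        # FBP tomogram ready at the last tilt
```

In the actual system described in the text, the image shift is fed back to the microscope through the external control programs [13], and the normalized correlation of Eq. (4) is evaluated with three-dimensional FFT and IFFT.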

Results

Specimen preparation of thin sections

Two types of thin sections were prepared to verify the theoretical basis of the proposed method. One was a thin section covered with colloidal gold particles with a diameter of 20 nm. A generally used Epon resin was cut into approximately 200-nm-thick sections, which were used as the specimen. After hydrophilic treatment of the section, colloidal gold particles were attached to both surfaces, and both sides of the copper grid on which the section rested were coated with a carbon film. Because the resin section layer was almost invisible to the electron beam compared with the gold, the gold particles on the two outer surfaces produced a noticeable local change in the image pattern even for a slight change in tilt angle. This local change in the pattern is convenient for comparing the present method with a conventional method.

The other specimen was a biological thin section of the yeast Saccharomyces cerevisiae. After the cells were grown in a nutrient-rich medium, they were transferred to a rapamycin-containing medium and incubated for 2 h. For fixation, the cells were prepared in a manner similar to that described in a previous report [12]. Sections, 150-nm thick, were collected onto single-slot copper grids and stained as previously described [12]. Both sides of the grids were coated with carbon film.

TEM instrument for online experiments

Fig. 4. Flow diagram of the field-of-view autotracking method. See the text for details.

Fig. 5. Schematic representation of the relationship between the image shift plane (xs, ys) perpendicular to the projection direction and the three-dimensional tomogram volume.

The aforementioned thin sections were used for the online autotracking experiments. These experiments were performed with an HT-7700 electron microscope (Hitachi High-Technologies, Tokyo, Japan) equipped with an XR-41B 2k CCD camera (AMT, Woburn, MA, USA) and operated at 120 kV; the images were processed using the EMIP-3D software. With the exception of the TEM controls, the programs needed for the proposed method were custom written in our laboratory in the C++ programming language. The external control programs for the TEM instrument, i.e. the programs for specimen tilting, autofocusing and the image shift applied while the measured deviation was fed back to the microscope [13], were supplied by Hitachi High-Technologies.

Autotracking

Online field-of-view tracking was examined in accordance with the flow diagram shown in Fig. 4 using the sample specimens described above. Prior to the experiments, the azimuthal angle of the misaligned tilt axis was accurately measured. Using the measurement method proposed in our previous report [3], we obtained an angle of 1.1°, which was taken into account in the back-projection calculation; the equation for this calculation, which takes the misaligned tilt axis into consideration, is also given in our previous report [3]. The specimen was tilted in increments of 1°, 2° or several degrees for the various examinations. The maximum tilt angle range was −70° to +70°. Because the present experiments were performed using tentative laboratory-based software, the processing speed was slow; to reduce the processing time, and because of the limited memory capacity, images of 1024 × 1024 pixels were processed. The magnification used in the experiments was ×30 000. At each specimen tilt angle, the captured image was tracked and aligned with a previously captured image using the image-shift control programs. In the maximum-tilt-angle-range experiment, starting from a 0° tilt angle, we repeated this process until +70° was reached and, in turn, repeated the process from 0° to −70°. No tracking system already installed in the TEM was used in this experiment, except for the autofocusing function.

In the experiments, definitive results were obtained that clearly demonstrate the superiority of the proposed method over the conventional method. For the colloidal gold specimen, the autotracking was performed at 2° increments. Tracking with the conventional method (the phase correlation function (PCF) method) noticeably failed (see Supplementary data online, 'Movie_gold_old'). In contrast, autotracking with the proposed method was realized with high precision, and the resulting tilt series images were well aligned, as shown in Supplementary data online, 'Movie_gold_new'; this was also verified by a comparison of the images reconstructed from the alignment results of the proposed method (Fig. 6b) and of the PCF method (Fig. 6c). For the yeast cell specimen, the autotracking experiment was performed at 8° increments. Although this increment angle is considerably greater than that normally used, the proposed method successfully auto-tracked every field-of-view through all tilt angles (see Supplementary data online, 'Movie_Yeast_new'). In contrast, tracking with the conventional PCF method failed (see Supplementary data online, 'Movie_Yeast_old'). This difference can be confirmed by a comparison of the images reconstructed from the alignment results of the proposed method (Fig. 7b) and of the PCF method (Fig. 7c).

Figures 8 and 9 show sample images from the autotracking series and the corresponding RNCC maps (Figs. 8d–i and 9d–i, respectively) as functions of the image shift variables (xs, ys), which show very sharp maximum peaks. Notably, the cross-correlation RNCC value decreased as the tilt angle increased; however, this result was expected and does not conflict with the aforementioned argument, because the real integration region in which two ray images overlap becomes relatively smaller as the tilt angle increases. The sharpness indicates a high tracking accuracy, as expected from the model calculation results described in the 'Theoretical basis' section. Furthermore, to examine the resolution attainable in the reconstruction with the proposed method, autotracking for both specimens at 1° increments was executed through the maximum tilt range of −70° to +70°. The three-dimensional reconstructed results after the tracking are shown in Figs. 6 and 7, which depict almost clear cross-section images without blurring artefacts, except for the influence of the 'missing wedge'. (See the movies of the tomograms in the Supplementary data online.)

Fig. 6. Online three-dimensional tomogram of the colloidal gold specimen obtained immediately after the autotracking. (a) An orthogonal cross-section representation of the autotracking result at a 1° increment angle (scale bar: 100 nm). Clear serial cross-section images appeared, apart from the 'missing-wedge' effect (see Supplementary data online, 'Movie_tomo_gold'). (b and c) A comparison of reconstructed X–Z cross-section images from the tracking results at a 2° increment angle with the proposed method and with the PCF method, respectively.

Fig. 7. Online three-dimensional tomogram of the yeast cell thin section obtained immediately after the autotracking. (a) An orthogonal cross-section representation of the autotracking result at a 1° increment angle (scale bar: 100 nm). Clear serial cross-section images appeared (see Supplementary data online, 'Movie_tomo_Yeast'). (b and c) A comparison of reconstructed X–Y cross-section images from the tracking results at an 8° increment angle with the proposed method and with the PCF method, respectively. N, nucleus; V, vacuole; NVJ, nucleus–vacuole junction.

Discussion

We have proposed a new field-of-view autotracking method as an alternative to conventional image-based methods; it consistently tracks the next specimen tilt image in a single processing step. At present, most electron tomography systems integrated into TEM/STEM instruments employ some form of image-based method for autotracking. In this study, XCF- and PCF-based tracking methods [13–15] were run for comparison with our method. The XCF method is generally used for such alignment processing, and the PCF method is well known as a high-precision image-matching method used as an alternative to the XCF.


Fig. 8. Experimental results of the autotracking at a 2° increment angle for the colloidal gold specimen, shown in the Supplementary data online as 'Movie_gold_new' and compared with 'Movie_gold_old', which was tracked with the PCF method. (a–c) Fields-of-view tracked with the present method among all serial views (scale bar: 100 nm); (d–f) RNCC(xs, ys) maps corresponding to the tracked views (since the 0° view was the starting one, the −2° RNCC map is shown; scale bar: 50 pixels); and (g–i) three-dimensional representations of the RNCC central peak profiles extracted from those maps, whose sharpness was maintained through the full tilt range of ±70°.


Figures 10 and 11 show comparisons of our method with the XCF- and PCF-based methods for high-tilt-angle image tracking. As expected, the XCF peak profiles (bottom left in Figs. 10c and 11c) were dull in shape, which indicates that the method is unsuitable for the investigated tracking case. In the case of the colloidal gold specimen, the PCF peak profile in Fig. 10c showed a complex and irregular form because the image pattern changed locally at numerous positions between the 68° (Fig. 10a) and 70° (Fig. 10b) tilt-angle images. In contrast, the RNCC peak profile in Fig. 10d shows a sharper and simpler form, for the reasons described later. When the method was applied to the biological thin section with the thickness parameter T chosen to be 32 pixels (i.e. as small as possible), the RNCC peak profile in Fig. 11e became considerably sharper than in the case of T = 128; therefore, accurate tracking was achieved even though the increment angle was very large (8°). In contrast, the PCF peak profile in Fig. 11c was very poor and was immediately affected by the image change (between Fig. 11a and b) caused by the large increment angle.

Figure 12 shows the difference between the proposed method and other image-based methods. Figure 12a and b shows typical high-tilt-angle TEM images of neighbouring areas. Figure 12c shows an (X−Z)-plane cross-section image along the dotted line indicated in Fig. 12a. In addition, Fig. 12d and e are back-projected ray images for tilt angles of −58° and −60°, respectively, and Fig. 12f shows an image created by the superimposition of Fig. 12d and e.


Fig. 9. Experimental results of the autotracking at an 8° increment angle for the yeast cell thin section, shown in the Supplementary data online as 'Movie_Yeast_new' and compared with 'Movie_Yeast_old', which was tracked with the PCF method. See the caption of Fig. 8 for the other panels, except that the increment angle is 8° and the tilt range is ±64°.


Furthermore, the images in Fig. 12d–f are superimposed on the image in Fig. 12c. For an image-based tracking method, if objects are far apart on the top and bottom sides, as in the case of the two gold particles indicated by 'α' and 'β' in Fig. 12c, correct pattern matching is difficult because the distance between the objects projected on each image changes even for a small directional change (−58° to −60° in this case) (see the distances indicated by ① and ② in Fig. 12a–e), compared with the case in which they lie at the same Z position. In contrast, our method can consistently track the next tilt image because a ray-image matching technique is used. As shown in Fig. 12f, the two ray images of Fig. 12d (−58°) and e (−60°) were matched at the position where the real three-dimensional object exists. In Fig. 12f, several pairs of interesting ray lines can be observed; the two lines in each pair coincide at the gold particle position and spread slightly apart in proportion to their distance from the particle (see the arrowheads in Fig. 12f). The gold particles appear to be pinned. In this way, two back-projected ray images (Fig. 12d and e) match at a unique object position.

In electron tomography, a projection image is formed in parallel projection, and a ray image is also calculated in parallel projection. Therefore, in the ray-image matching, if the thickness parameter T is chosen so that the objects are contained within the thickness as closely as possible, then one ray image moved in translation along the projection plane will certainly meet the other ray image at the unique object position. (Here, the rotation centre (tilt axis) is placed in the middle of the reconstructed volume along the Z-axis.)
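To make this geometric argument concrete (a worked illustration derived from Eq. (1), not an additional result of the paper): a point at (X, Z) projects to x(φ) = X cos φ − Z sin φ, so between two neighbouring tilts φ and φ + Δφ its projected position moves by approximately

$$\Delta x \;\approx\; \frac{\partial x}{\partial \varphi}\,\Delta\varphi \;=\; -\,(X\sin\varphi + Z\cos\varphi)\,\Delta\varphi .$$

Two objects with different (X, Z), in particular different Z values such as the particles α and β on opposite surfaces of the section, therefore move by different amounts between the two images, so no single rigid image shift can register both; this is what defeats projection-image pattern matching at high tilt, whereas the back-projected rays of each object always intersect at its true (X, Z) position, where the ray-image cross-correlation peaks.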


Fig. 10. Comparison of the proposed method with the conventional PCF- and XCF-based tracking methods using the colloidal gold specimen. (a and b) A pair of neighbouring tilt images at the high angles of −68° and −70°, respectively (scale bar: 100 nm); (c) the PCF map between images (a) and (b) (the enlarged PCF peak area is inserted at the top right, and the XCF map, without enlargement, at the bottom left; scale bar: 50 pixels); and (d) the RNCC(xs, ys) map between them, in which the RNCC peak profile (inserted at the top right) is clearly sharper than that of the PCF map.


Fig. 11. Comparison of the proposed method with the conventional PCF- and XCF-based tracking methods using the yeast cell thin section specimen. (a and b) A pair of neighbouring tilt images at the high angles of −56° and −64°, respectively (scale bar: 100 nm); (c) the PCF map between images (a) and (b) (the enlarged PCF peak area is inserted at the top right, and the XCF map, without enlargement, at the bottom left; scale bar: 50 pixels); and (d and e) RNCC(xs, ys) maps between them for the thickness parameter T = 128 and 32, respectively, in which the RNCC peak profile for T = 32 (inserted at the top right in (e)) is clearly sharper than that for T = 128.

The line profiles perpendicular to the ray lines are enhanced by the high-frequency filtering used in the back-projection calculation. Therefore, as shown in Fig. 8d–i, the peak profile of the cross-correlation RNCC was very sharp, indicating high accuracy. This high accuracy is independent of the specimen, because a back-projected ray image consistently forms a line-like image irrespective of the specimen, as verified by the results for the biological thin section shown in Figs. 9 and 11e.

The basic alignment process of the proposed method is sequential matching between the current ray image and the next one. There is therefore a general problem of error accumulation, arising from the matching accuracy and from specimen drift. However, as shown in Table 1, if the thickness T and increment angle Δφ are appropriately selected, the matching error in a single tracking step is estimated to be almost zero in the ideal case, and we therefore expected only a slight accumulation error. This error was examined by a computer simulation using the thin-section model reported in our previous paper [3]. The tracking result is presented in Fig. 13, which shows the pre-set misaligned image-shift distribution as a function of the tilt angle together with the resulting matching-error distribution. Every field-of-view was tracked perfectly through all tilt angles, except in only a few cases at the highest tilt angles.

In the experiments, specimen drift is a serious issue. Drift during image acquisition causes accumulation error and, of course, blurred resolution. (Drift during the computer processing and the autofocus adjustment does not cause this error because it is compensated by the matching, i.e. the tracking.) In our experience, this type of accumulation error frequently generates boomerang-shaped artefacts in (X−Z) cross-sections, because a slight positional deviation of an object due to drift is repeated sequentially while its direction changes with the tilting, so that artificial curves are drawn. If this error occurs, both suppression of the drift and further processing with off-line alignment may be required.

Fig. 12. Diagram explaining the difference between the proposed method and other image-based methods. See the 'Discussion' section in the text for details.

Fig. 13. Verification of the proposed tracking method by computer simulation. Using the model of the previous paper [3], a misaligned tilt series was generated (the tilt axis was also moved 10 pixels away from the centre and azimuthally rotated by 2°) and the proposed method was applied. Every image was tracked perfectly except for a few of the highest-tilt-angle images. The pre-set misaligned image-shift distribution and the resulting matching-error distribution are plotted as functions of the tilt angle for the components xs (a) and ys (b), respectively.

Conclusion

A new field-of-view autotracking method that enables online tomography reconstruction has been proposed. The method uses the cross-correlation between a pair of neighbouring back-projected ray images calculated from the filtered projection images captured in an online specimen tilt sequence. As a result of this study, we observed that a back-projected ray image reliably cross-correlates with its neighbouring ray images at the position of an existing three-dimensional object, provided that the thickness T and angle increment Δφ are properly selected in the cross-correlation calculation. The proposed method can therefore consistently track the field-of-view, overcoming the weakness of conventional image-matching-based methods. In addition, the present method is simple, and high-speed processing is expected because FFT and IFFT algorithms can be used.

Online autotracking experiments using a TEM with thin-section samples demonstrated the effectiveness of the proposed method. The field-of-view was automatically tracked with high accuracy through the full tilt-angle range. Furthermore, online tomograms were obtained immediately after the last specimen tilt, which is a very important advantage for users.


Funding

Grants-in-Aid for Scientific Research (21570069 and 2450076 to M.B.; 21310075 and 24510156 to N.B.).

References

1. Brandt S (2006) Markerless alignment in electron tomography. In: Frank J (ed.), Electron Tomography, Second Edition, pp. 187–215 (Springer, New York).
2. Zheng S Q, Sedat J W, Agard D A (2010) Automated data collection for electron microscopic tomography. In: Jensen G J (ed.), Methods in Enzymology, Cryo-EM, Part A, vol. 481, pp. 284–315 (Elsevier, San Diego, CA, USA).
3. Tomonaga S, Baba M, Baba N (2014) Alternative automatic alignment method for specimen tilt-series images based on back-projected volume data cross-correlations. Microscopy. doi: 10.1093/jmicro/dfu014
4. Brandt S, Heikkonen J, Engelhardt P (2001) Automatic alignment of transmission electron microscope tilt series without fiducial markers. J. Struct. Biol. 136: 201–213.
5. Brandt S, Ziese U (2005) Automatic TEM image alignment by trifocal geometry. J. Microsc. 222: 1–14.
6. Sorzano C O S, Messaoudi C, Eibauer M, Bilbao-Castro J R, Hegerl R, Nickell S, Marco S, Carazo J M (2009) Marker-free image registration of electron tomography tilt-series. BMC Bioinformatics 10: 1–11.
7. Brandt S (2001) Multiphase method for automatic alignment of transmission electron microscope images using markers. J. Struct. Biol. 133: 10–22.
8. Amat F, Moussavi F, Comolli L R, Elidan G, Downing K H, Horowitz M (2008) Markov random field based automatic image alignment for electron tomography. J. Struct. Biol. 161: 260–275.
9. Cao M, Takaoka A, Zhang H, Nishi R (2011) An automatic method of detecting and tracking fiducial markers for alignment in electron tomography. J. Electron Microsc. 60: 39–46.
10. Tomonaga S, Kato D, Baba M, Baba N (2013) An alternative image alignment method based on reconstructed cross-section image correlations. Microscopy and Microanalysis 2013 Meeting, PD-3.
11. Frank J (1996) Three-Dimensional Electron Microscopy of Macromolecular Assemblies, pp. 202–210 (Academic Press, San Diego).
12. Baba M (2008) Electron microscopy in yeast. In: Klionsky D J (ed.), Methods in Enzymology, Autophagy: Lower Eukaryotes and Non-Mammalian Systems, Part A, vol. 451, pp. 133–149 (Elsevier, San Diego, CA, USA).
13. Tsuneta R, Koguchi M, Nakamura K, Nishida A (2002) A specimen-drift-free EDX mapping system in a STEM for observing two-dimensional profiles of low dose elements in fine semiconductor devices. J. Electron Microsc. 51: 167–171.
14. Kuglin C D, Hines D C (1975) The phase correlation image alignment method. Proc. of IEEE Int. Conf. on Cybernetics and Society, pp. 163–165.
15. Saxton W O (1978) Computer Techniques for Image Processing in Electron Microscopy, pp. 185–248 (Academic Press, New York).
