This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TIP.2016.2532239, IEEE Transactions on Image Processing IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. *, NO. *, —– 2015


Double-Tip Artefact Removal from Atomic Force Microscopy Images

Yun-feng Wang, Jason I. Kilpatrick, Suzanne P. Jarvis, Francis Boland, Anil Kokaram, Member, IEEE, and David Corrigan

Abstract—The Atomic Force Microscope (AFM) allows the measurement of interactions at interfaces with nanoscale resolution. Imperfections in the shape of the tip often lead to the presence of imaging artefacts such as the blurring and repetition of objects within images. Generally, these artefacts can only be avoided by discarding data and replacing the probe. Under certain circumstances (e.g., rare, high value samples, or extensive chemical/physical tip modification) such an approach is not feasible. Here, we apply a novel deblurring technique, using a Bayesian framework, to yield a reliable estimation of the real surface topography without any prior knowledge of the tip geometry (blind reconstruction). A key contribution is to leverage the significant recently successful body of work in natural image deblurring to solve this problem. We focus specifically on the 'double-tip' effect, where two asperities¹ are present on the tip, each contributing to the image formation mechanism. Finally, we demonstrate that the proposed technique successfully removes the 'double-tip' effect from high resolution AFM images which exhibit this artefact, whilst preserving feature resolution.

Index Terms—Deconvolution, Image Restoration, Atomic Force Microscopy, Bayesian, Deblurring, Blind Deconvolution, Amyloid.

I. INTRODUCTION

Atomic Force Microscopy (AFM) is a technique widely used for imaging, manipulating, and measuring interaction forces at interfaces with nanoscale resolution. Imaging is performed by raster scanning a micro-fabricated cantilever with an integrated sharp tip across a sample (see Fig. 1). The bending of the cantilever is proportional to the tip-sample force at a given location and a feedback loop is employed to ensure that the tip-sample force is kept constant during the scan. This is achieved by modulating the height of the sample and the sample motion is then translated into the sample topography with the height at each location represented as pixel brightness [1]. We are interested in correcting for an unusual blurring problem that happens in high resolution imaging. Our subject material for examination is a range of amyloid fibrils displaying heme-b562 electron transfer complexes using high resolution AFM [2]. Approximately 50 biological disorders have now been identified which arise due to the misfolding

Yun-feng Wang is with the Department of Electronic and Electrical Engineering, Trinity College Dublin, Ireland (e-mail: [email protected]). F. Boland, A. Kokaram and D. Corrigan are with the Department of Electronic and Electrical Engineering, Trinity College Dublin, Ireland. J. I. Kilpatrick and S. P. Jarvis are with the Conway Institute of Biomedical and Biomolecular Research, University College Dublin, Ireland. Manuscript received —–, 2015; revised —–, 2015.

¹ An "asperity" is a localised sharp peak in the surface of an object.

Fig. 1. A schematic diagram of basic elements of an Atomic Force Microscope. The sample is shown being scanned by the tip. The height of the sample is controlled by the feedback loop shown in order to maintain a constant imaging force. The nano-scale measurements that control the movement of the tip are inferred through the deflection of a laser beam reflected off the surface of the cantilever.

of normally soluble proteins and peptides in the body. The most prominent diseases associated with amyloid fibrils are Alzheimer's and type-II diabetes [3]. Self-assembled protein based fibrils are of great interest due to their role in the formation of structural components of biological systems (e.g., cell cytoskeleton, actin, collagen) as well as their function in a range of biomaterials (e.g., silks, adhesives, coatings) [4]. In addition to understanding their biological function, a range of potential nanotechnology applications exist due to the ability to control both the chemistry and structural properties of self-assembled fibril systems [5]. Such systems have been used to construct brushes, nanotubes, physical nanoscale connectors and templates for nanowires.

High Resolution AFM

The imaging of such systems with AFM at high resolution in situ allows for characterisation of the effects of chemical modifications of the fibril structure and for the identification of functional groups along the fibril surface. The topography of the fibrils is assessed through the locations of important local features indicated, for instance, by high intensity spots along the fibril. Any effect that obscures these features is therefore detrimental to subsequent analysis and interpretation. Abrupt changes in topography associated with this sample make imaging challenging [6] and accentuate any errors in the geometry of the tip.

1057-7149 (c) 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


Whilst small imperfections in tip geometry typically do not manifest as significant artefacts in conventional AFM imaging, the influence of these imperfections is amplified for high resolution AFM [2]. As the widespread adoption of high resolution AFM imaging continues, the prevalence of these tip-induced artefacts is likely to increase. The presence of artefacts can lead to AFM data being discarded or erroneously interpreted, and it is therefore important to understand their origin and to develop techniques to deconvolve them from images. To date, the effect of tip shape and its influence on artefacts in AFM imaging has been a topic of great interest [7], [8], [9], [10]. One well-known artefact is the blurring effect that occurs when the dimensions of the tip are greater than or equal to the lateral dimensions of a feature of the sample surface [7], [10]. This artefact, known as tip-sample convolution, can be modelled as the convolution of the ground truth sample topography with a blur function that captures the shape of the tip [8]. This kind of artefact can be removed from the data by deconvolving the blurred image with an appropriate blur function (typically a simple Gaussian).

The Double-Tip Artefact

In this paper we describe an approach which can be utilized to remove a dramatic form of image artefact known as the 'double-tip' effect. Fig. 2 shows AFM images of amyloid fibrils exhibiting the 'double-tip' artefact. A 'double-tip' artefact is a specific case of convolution where the ground truth sample topography is blurred with a bi-modal blur function. In practice, two geometric asperities on the tip interact with the sample surface, causing a complex ghosting or echo effect within images. This artefact is particularly challenging to deal with since each of the asperities on the tip may contribute different information to the image depending upon their relative height, lateral spacing, radius and the local topography of the sample.
By deconvolving the AFM image with a multi-modal blur function capturing the shape of the tip, it is possible to recover the true topography of the sample surface. Whilst techniques exist which may allow for the direct measurement of a cross section of the tip shape (SEM, TEM, etc.), these may be difficult to relate directly to a blurring function. They also rely upon the operator identifying the artefacts during AFM data collection and applying one of these tip characterisation techniques immediately after the experiment to ensure the integrity of the tip shape data with respect to the measurements. As a result, the precise tip shape is often not available. Thus, to deblur AFM images degraded by the 'double-tip' effect we adopt a methodology which is similar to the blind deconvolution problem in the natural image domain.

A Natural Image Deblurring Approach

Deblurring technology has advanced significantly in the last decade due to the rapid progress in consumer and post-production imaging. The blind deconvolution problem has been well studied [11], [12], [13] and generally focuses on removing motion blur from an image given a rough estimation


of the blur function. In this paper we leverage that large body of work to develop a new technique for treating blurring in AFM imaging. Our deblurring framework is inspired by these natural image deconvolution techniques and comprises two steps. The first step is to use an existing natural image blind deconvolution technique to deblur the degraded AFM images, resulting in a rough estimate of both the deblurred image and the blur function. Secondly, a novel step (overlap region refinement) is employed to identify regions of the image where the summation model of convolution is violated and an overwrite model is present. The correction for the presence of the regions obeying the overwrite model in the deconvolution framework is a novel process that is necessary due to the unique image formation mechanism in AFM and is discussed in detail in Section VI. This deblurring framework yields good approximations of both the deblurred 'clean' latent AFM image and the multi-modal blur function, which provide comparatively reliable information about the true surface topography as well as the shape of the asperities on the imaging tip. We present results demonstrating that the proposed framework is successful in removing the 'double-tip' effect from AFM images where the tip shape is unknown, without sacrificing details of the sample features.

II. RELATED WORK

Under the motion blur model, a blurred image B can be modelled as the convolution of the 'clean' latent image L with the blur function k, plus additive noise n. This relationship is described as:

B = L ⊗ k + n    (1)

where ⊗ denotes the convolution operator. Both L and k are unknown and n is a constant. As the 'double-tip' artefact is similar to the motion blur problem, B denotes the degraded AFM image, L represents the ground truth sample features, k corresponds to the shape of the tip, and n denotes the noise inherent in the measurement due to instrument sources. Blind deconvolution is a very challenging ill-posed problem which must estimate both the 'clean' latent image and the blur function from a single image. Early approaches to blind deconvolution generally assumed the structure of the blur function to be a simple parametric model or a mixture of Gaussian distributions [14]. Recently, significant progress has been made by increasing the complexity of the blur function model. Fergus et al. [15] introduced an approach to recover the blur function using ensemble learning [16] and adopting a latent image prior which used a mixture of Gaussians to model the heavy-tailed log-density gradient distribution for natural images [17]. Then, Shan et al. [18] improved the priors for both the latent image and the blur function, enforcing their sparsity by using a simple piecewise function to fit the heavy-tailed distribution. Furthermore, Cho et al. [12] significantly reduced the computational load by using the Fast Fourier Transform (FFT) to compute the convolution via the frequency domain. Xu et al. [13] introduced a novel fast TV-ℓ1 deconvolution algorithm as well as an edge selection scheme to further improve the deblurring results.
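The degradation model in (1) can be illustrated with a small sketch. This is not the authors' Matlab implementation; it is a minimal Python illustration in which the bimodal 'double-tip' kernel is realised as a circularly shifted copy of the image (equivalent to convolution with a two-peak kernel, up to boundary handling), and the function name, normalisation, and default seed are our own assumptions.

```python
import numpy as np

def double_tip_blur(L, offset, gain, noise_sigma=0.0, rng=None):
    """Simulate B = L (x) k + n with a bimodal 'double-tip' kernel:
    a unit peak plus a second peak of relative height `gain` shifted
    by `offset` = (dy, dx).  Circular boundary via np.roll."""
    ghost = np.roll(L, offset, axis=(0, 1))   # repetition of the feature
    B = (L + gain * ghost) / (1.0 + gain)     # normalised summation model
    if noise_sigma > 0:
        rng = rng or np.random.default_rng(0)
        B = B + rng.normal(0.0, noise_sigma, B.shape)
    return B
```

Blurring a single bright pixel with offset (0, 3) and gain 0.5 produces a dominant peak and a fainter repetition three pixels to the right, mimicking the ghost images in Fig. 2.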






Fig. 2. AFM images showing 'double-tip' artefacts. (a-d) Images of amyloid fibril types (a) SSB, (b) SSB, (c) BSS, and (d) SSSB. Each fibril has one or more repetitions of different intensity beside it. Z scale = 10 nm for (a-c), 5 nm for (d). (e-h) Pre-processing results of images (a-d) after background flattening (2nd order plane fitting) and scar removal procedures. Z scale = 5 nm for (e-g), 8 nm for (h). The fast scan axis in these images is vertical.

Here we present a Bayesian deblurring framework based on the algorithms described in [18], [12], [13]. In this paper we modify that class of algorithms to adapt them to the restoration of AFM images. An early description of this framework has been published elsewhere [19].

This paper is organized as follows. Section III gives details of the AFM experiments. Section IV describes some pre-processing that has been applied to the original AFM images to reduce the effects of other common types of artefact. As the potential structure of the blur function for a degraded AFM image is simple, instead of using the multi-scale approaches developed for natural images, this paper introduces a novel Hough Transform (HT) rough blur function estimation method. It offers a good starting point for the blur function, which speeds up the entire deblurring framework. The details of the HT method are described in Section V. The key steps and modifications of our Bayesian deblurring framework are introduced in Section VI. Results and a quantitative validation are discussed in Sections VII and VIII. Conclusions follow in Section IX.

III. EXPERIMENTS

The process for creating these fibrils is described in [2]. All AFM images were obtained using a bespoke low noise AFM system combined with an ARC bipolar controller (Asylum Research, USA). Details of this instrument can be found elsewhere [20]. Images were acquired using SSS-NCH probes (tip radius < 5 nm, resonant frequency 300 - 350 kHz, spring constant ∼ 42 N/m; Nanosensors, Switzerland) in the FM-AFM mode. The raw data was first imported into Gwyddion [21] to generate PNG images, which were then imported into Matlab [22] for application of the image pre-processing as well as the deblurring algorithm.

IV. PRE-PROCESSING

As shown in each example image in Fig. 2, a single fibril feature shows at least one similar repetition (ghost image) beside it due to multiple asperities on the tip, each operating as an independent imaging probe (with both lateral and vertical offsets from the other asperities), resulting in multiple scans of the same feature occurring within a single image [23], [24], [25]. It can also be observed that regions exist where the real fibril features overlap with their repetitions. These regions do not appear to be brighter than neighbouring pixels, and therefore do not follow the summation model of convolution. These overlap regions are unique to the image formation mechanism and will be discussed in detail in Section VI. Some pre-processing of AFM images is common practice and is necessary before applying the deblurring framework. It involves two main procedures: A) background flattening and B) scar removal.

A. Background Flattening

As can be seen in Fig. 2(a-d), all the images appear to have a curved background. This is because the profiles taken along the scanning axis can be mutually shifted in height by some amount and/or the samples may be tilted slightly; the curvature in the image background is typically due to a scanner bow artefact. Therefore, a 2nd order polynomial surface model is used here to fit the image background (excluding the pixels in the fibril by setting a threshold). Then, the estimated 2nd order polynomial background is subtracted from the original AFM image. This process is termed 2nd order plane fitting.

B. Scar Removal

There are some line defects parallel to the fast scanning axis in the image (so-called scars) which may arise due to instabilities in the tip-sample distance feedback loop. The scar removal procedure first finds the gradients of the image in the direction perpendicular to the fast scanning direction. Next, the pixels with large gradients are selected to create a mask.
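The 2nd order plane fitting step described above can be sketched as a masked least-squares fit. This is an illustrative Python sketch rather than the authors' Matlab code; the threshold-based background mask, the coordinate normalisation, and the function name are our own assumptions.

```python
import numpy as np

def flatten_background(img, thresh):
    """Fit z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2 to the
    background pixels (intensity below `thresh`, i.e. excluding the
    fibril) and subtract the fitted surface from the image."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = xx.ravel() / w                       # normalised coordinates
    y = yy.ravel() / h
    z = img.ravel()
    A = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
    mask = z < thresh                        # background-only pixels
    coeffs, *_ = np.linalg.lstsq(A[mask], z[mask], rcond=None)
    return img - (A @ coeffs).reshape(h, w)  # flattened image
```

Applied to an image with a quadratic bow plus a bright feature, the fit recovers the bow from the background pixels alone, so the feature height is preserved while the background is driven to zero.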
Furthermore, in order to exclude spots and small line features, the mask is filtered using a morphological closing. Finally, the highlighted pixels are interpolated from their adjacent pixels in the two neighbouring lines. Fig. 2(e-h) shows the pre-processing results corresponding to the images in Fig. 2(a-d).

V. HT BLUR FUNCTION INITIALIZATION

Our algorithm introduces a novel rough blur function estimation method using the Hough Transform (HT). The large distortion observed in the AFM images is fundamentally a repetition artefact. In other words, the ground truth image is blurred with a bimodal blur function. Thus, the blur function can be considered to have two peaks on a background of zeros. Therefore, the distance between these two peaks and the gain between them are estimated first in order to construct the rough blur function.




A. Image Model for Hough Transform

A conventional model for the 'double-tip' effect in the image space can be expressed as:

I(h, v) = μ I(h − x, v − y)    (2)

where I(h, v) denotes the image intensity at location (h, v), I(h − x, v − y) is a repetition of I(h, v) offset by a vector (x, y), and μ represents the intensity ratio between these two pixels. Accordingly, x, y and μ represent the horizontal and vertical distances and the gain between the two peaks in the estimated blur function. Using the proposed HT technique, each pixel I(h, v) in image space is paired with all other pixels in the image. Then, all the pixel pairs vote for an (x, y, μ) triplet in Hough space. As a result, an estimate of x, y and μ can be found by identifying the bin with the largest number of votes in Hough space. Finally, an initial blur function is constructed using these three parameters. In order to reduce the computational load as well as increase the robustness, the proposed HT method only investigates the bins with a high potential to have the largest number of votes. A lower bound α and a higher bound β are used to limit the range of interest for (x, y) to α ≤ √(x² + y²) ≤ β. Indeed, the lower bound α and the higher bound β are simply the minimum and maximum shift distances between the real feature and its repetition. Our algorithm uses a user-selected α to reduce the disturbances caused by adjacent pixel pairs inside the real feature. Also, to further reduce the computational load, a multi-scale HT technique operating in a coarse-to-fine fashion is applied, and the higher bound β is used at the full resolution. The details of the values used in this paper are given in the following section.
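The voting scheme above can be sketched as a brute-force implementation (without the multi-scale speed-up described later). This Python sketch is illustrative only: the dictionary-based accumulator, the pair orientation rule (so that μ ≤ 1), and the function name are our own assumptions.

```python
import numpy as np

def hough_double_tip(img, mask, alpha, beta, mu_res=0.01):
    """Vote for the offset (x, y) and gain mu between a feature and
    its repetition.  `mask` selects feature pixels; only pairs whose
    offset distance lies in [alpha, beta] vote.  Returns the bin
    (x0, y0, mu0) with the largest number of votes."""
    pts = np.argwhere(mask)
    votes = {}
    for i, (v1, h1) in enumerate(pts):
        for v2, h2 in pts[i + 1:]:
            dx, dy = h2 - h1, v2 - v1
            if not (alpha**2 <= dx * dx + dy * dy <= beta**2):
                continue                     # outside range of interest
            hi, lo = img[v1, h1], img[v2, h2]
            if hi < lo:                      # orient the pair: mu <= 1
                hi, lo = lo, hi
                dx, dy = -dx, -dy
            if hi <= 0:
                continue
            mu = round((lo / hi) / mu_res) * mu_res   # quantise gain
            key = (dx, dy, round(mu, 2))
            votes[key] = votes.get(key, 0) + 1
    return max(votes, key=votes.get)
```

For a feature of three bright pixels and a half-intensity ghost six pixels to the right, the winning bin is the true offset and gain; the lower bound α suppresses votes from pixel pairs inside the feature itself.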


Fig. 3. (a) The resulting image after applying a 5-tap Gaussian smoothing filter followed by a thresholding procedure to Fig. 2(e). (b) Highlighted lines (in red) between all the pixel pairs that have offset (x0, y0) and intensity ratio equal to μ0 ± 0.01. (c) A magnified view of (b). (d) The blur function generated using the HT result and smoothed by a 7-tap Gaussian filter.

B. Hough Transform Implementation

After applying the pre-processing procedures introduced in Section IV, a 5-tap Gaussian filter with standard deviation set to one is applied to smooth the AFM images. This filtering minimises the distortion caused by noise. Furthermore, only pixels inside the real feature are used to form pixel pairs for the Hough space. To get a rough mask of the real feature region, the smoothed image is further thresholded by a user-selected value (see Fig. 3(a)). This is done to improve convergence and computational efficiency. The Hough space is then represented by 3-dimensional bins (x, y, μ) with the bin resolution defined as one pixel per bin for the x, y axes and 0.01 per bin for the μ axis. Moreover, a Gaussian pyramid decomposition is applied to downsample the image by a factor of 4 with the aim of reducing the computational load of the proposed HT method. Applying the HT method to the downsampled image is 16 times faster than applying it to the original image. The HT method is first applied to the downsampled image to get an initial estimate of (x, y, μ) within a limited range determined by user observation (from our experiments, a lower bound of α = 5 was sufficient for all the test examples). The bin (x1, y1, μ1) with the largest number of votes is found. As a result, the corresponding values for (x, y, μ) at full resolution are approximately (4 × x1, 4 × y1, μ1). Then, in order to increase the accuracy, we applied the HT method again with the lower bound α = 20, the higher bound β = 100 and the range of interest limited to (4 × x1 ± 16), (4 × y1 ± 16), (μ1 ± 0.5) respectively at full resolution. Hence, the HT resultant values for the parameters (x, y, μ) can be found by observing the bin (x0, y0, μ0) with the largest number of votes at full resolution. Accordingly, the offset between the two peaks in the estimated rough blur function is (x0, y0) and the gain between these two peaks is μ0.

C. Hough Transform Results

The results of the Hough Transform implementation for Fig. 2(e) are shown in Fig. 3, where (b) highlights all the lines (in red) between the pixel pairs with x/y offset (x0, y0) and intensity ratio equal to μ0 ± 0.01. Using the HT results, a rough blur function is then created, which is shown in Fig. 3(d). As a common approach to the Gaussian tip shape, the rough blur function is finally smoothed by a 7-tap Gaussian filter to provide better convergence in the later optimisation.
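Constructing the rough blur function from the winning bin (x0, y0, μ0) and smoothing it with a 7-tap Gaussian, as described above, can be sketched as follows. This is an illustrative Python sketch; the kernel margin and the default σ = 1 are our own assumptions.

```python
import numpy as np

def rough_kernel(x0, y0, mu0, sigma=1.0):
    """Build the rough bimodal blur function from the HT result: a
    unit peak and a second peak of height mu0 offset by (x0, y0),
    smoothed separably with a 7-tap Gaussian to model rounded
    asperities, then normalised to unit gain."""
    ry, rx = abs(y0) + 3, abs(x0) + 3        # margin for the 7-tap filter
    k = np.zeros((2 * ry + 1, 2 * rx + 1))
    k[ry, rx] = 1.0                          # dominant asperity
    k[ry + y0, rx + x0] = mu0                # secondary asperity
    t = np.arange(-3, 4)
    g = np.exp(-t**2 / (2 * sigma**2))
    g /= g.sum()                             # 7-tap Gaussian taps
    k = np.apply_along_axis(np.convolve, 0, k, g, mode="same")
    k = np.apply_along_axis(np.convolve, 1, k, g, mode="same")
    return k / k.sum()
```

The smoothing leaves the peak locations and their height ratio intact when the peaks are separated by more than the filter support.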

VI. BAYESIAN DEBLURRING FRAMEWORK

Our deblurring framework is based on the Bayesian blind deconvolution algorithms described in [12], [13], [18] and optimises both the ground truth 'clean' latent image and the blur function. However, due to non-linearities in the regions of overlap between the real feature and its repetition, the framework must be modified to achieve a good restoration in these regions. As a result, our framework contains two steps. The first step is to deblur the AFM image based on the natural image blind deconvolution model. This produces a rough estimate of both the deblurred image and the blur function. The second step is to construct a new degraded image whose overlap regions follow the summation model. A new latent image can then be estimated by deconvolving the new degraded image.

A. Bayesian Blind Deconvolution Model

From the artefact model described in (1), the observation in this case is the blurred image B. The goal is to find a latent image L and a blur function k that maximise the posterior p(L, k | B). Using Bayes' theorem:

p(L, k | B) ∝ p(B | L, k) p(L) p(k)    (3)




where p(B | L, k) is the likelihood and p(L) and p(k) are the priors of the latent image L and the blur function k respectively. Our approach makes the implicit assumption that k and L are independent. Due to the large number of variables involved in the Bayesian blind deconvolution model, it is very difficult and computationally expensive to jointly optimise both the latent image and the blur function. Therefore, an iterative approach is adopted that divides the optimisation into two parts. First, the latent image L is optimised with the blur function k fixed. Second, the blur function k is optimised with the latent image L fixed. This process is iterated and is referred to as Iterated Conditional Modes (ICM) [26]. There are a variety of ways to model the likelihood term as well as the prior terms above based on previous studies [18], [13], [12]. Their performance in deconvolving 'double-tip' artefacts from AFM images was assessed, and the blind deconvolution algorithm chosen for this work uses a combination of the latent image estimation algorithm (fast TV-ℓ1 deconvolution) described in Xu et al. [13], which has better performance for noisy images, and the blur function refinement algorithm described in Shan et al. [18] and Cho et al. [12]. Some modifications have been applied to these algorithms with the aim of improving their performance for AFM images. The details of those modifications are described later.

1) Latent Image Estimation Model: As the latent image is estimated with the blur function k fixed, the probabilistic terms that do not rely on L in (3) become constants of proportionality. Therefore, the probabilistic model can be written as follows

p(L | B, k) ∝ p(B, k | L) p(L).    (4)

The likelihood p(B, k | L) is given by

p(B, k | L) ∝ exp(−‖L ⊗ k − B‖).    (5)

This is modelled by assuming that the noise in the image follows a Gaussian distribution. Next, the latent image prior used is described as

p(L) ∝ exp(−λ Σ_{∂*} ‖∂* L‖)    (6)

where λ is the prior weight and ∂* denotes the x, y gradients of the image (i.e., ∂* ∈ {∂x, ∂y}). The latent image prior assumes that the latent image is sparse in the gradient domain.

2) Blur Function Estimation Model: Similar to the latent image estimation mentioned above, all probabilistic terms that do not rely on k in (3) become constants. The probabilistic model is given by:

p(k | B, L) ∝ p(B, L | k) p(k)    (7)

The likelihood term is defined as [12]

p(B, L | k) ∝ exp(−Σ_{∇*∈Θ} φ_O(∇*) ‖∇* L ⊗ k − ∇* B‖²)    (8)

where ∇* denotes the set of five image gradients (i.e., ∇* ∈ {∇x, ∇y, ∇xx, ∇xy, ∇yy}), and φ_O(∇*) are the weights for the weighted sum of the noise gradients. We use the values for the weights used by Shan et al. [18]. Thus, the likelihood term here assumes that the 1st and 2nd order gradients of the noise follow a Gaussian distribution. Compared with the model in Shan et al. [18], our method excludes the image pixel intensities (i.e., ∇0). As a result, boundary artefacts during deblurring are avoided.

Moreover, our analysis has shown that the blur function prior p(k) ∝ exp(−‖k‖²) (introduced by Cho et al. [12]) cannot sufficiently constrain the estimated blur function. Indeed, Levin et al. [27] pointed out that the optimal MAP estimate of both L and k is the blurred image itself with an impulsive blur function. If this prior is used, the estimated function diverges far from the bi-modal blur function model (one of the peaks disappears). Hence, a novel blur function prior is proposed: since we have access to a reliable estimate of the peak locations of the blur function from the HT technique, this result is used to constrain the blur function. The prior is defined as:

p(k) ∝ exp(−ω1 ‖k − k_HT‖²)    (9)

where k is the refined blur function to be optimised and k_HT is the HT resultant function. Note that ω1 is the weight of this prior, which is set to 5 × 10³ for all examples.

3) Optimising Latent Image L with Blur Function k Fixed: All the priors are transformed to their energy expressions by taking the negative logarithm. By combining the energy expressions for the likelihood and the latent image prior in (5) & (6), the latent image L can be optimised by finding the minimum of the following equation

E(L) = ‖L ⊗ k − B‖ + λ Σ_{∂*} ‖∂* L‖.    (10)

With respect to the fast TV-ℓ1 deconvolution model [13], Equation (10) is then transformed into

E(L) = ‖L ⊗ k − B − v‖² + (τ/θ) Σ_{∂*} ‖∂* L − w_{∂*}‖²    (11)

where v is a non-linear penalty for the difference between the reconstructed blurred image (L ⊗ k) and the original blurred image B, and w_{∂*} are non-linear penalties for the image gradients. After the optimised latent image L is found, v and w_{∂*} can be estimated by:

w_{∂*} = (∂* L / ‖∆L‖) max(‖∆L‖ − θλ, 0)
v = sign(L ⊗ k − B) max(‖L ⊗ k − B‖ − τ, 0)    (12)

where ‖∆L‖ = √((∂x L)² + (∂y L)²). By iteratively solving for v, w_{∂*} and L independently while fixing the other variables, converged values for v, w_{∂*} and L can be found. Note that we decrease θ and τ by a factor of 2 at each iteration until they reach their minimum values. Based on the values of the user tuning parameters used in [18], [12], [13], we empirically set λ = 0.01, θ_initial = 0.2 × λ⁻¹, τ_initial = 1, θ_min = 0.01 and τ_min = 0.1 to adapt to the degraded AFM images in Fig. 2. A direct implementation of this method results in an estimate with significant background noise. Since the background is of less interest compared to the brighter pixels inside the real feature, the mean was subtracted from the latent image and all negative intensities were set to zero in each iteration. By doing this, the result is much improved, with reduced noise and ringing artefacts.
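The auxiliary-variable updates of Eq. (12) are soft-thresholding (shrinkage) steps, and can be sketched as follows. This Python sketch is illustrative only: forward differences stand in for the gradient operators, the `conv` argument abstracts the 2-D 'same'-size convolution, and all names are our own assumptions.

```python
import numpy as np

def update_auxiliaries(L, k, B, lam, theta, tau, conv):
    """One round of the Eq. (12) updates: isotropic shrinkage of the
    image gradients (w) and soft thresholding of the data residual
    (v).  `conv(L, k)` must return a 'same'-size 2-D convolution."""
    gx = np.diff(L, axis=1, append=L[:, -1:])   # forward differences
    gy = np.diff(L, axis=0, append=L[-1:, :])
    mag = np.sqrt(gx**2 + gy**2)                # gradient magnitude
    scale = np.maximum(mag - theta * lam, 0) / np.maximum(mag, 1e-12)
    wx, wy = scale * gx, scale * gy             # shrunk gradients
    r = conv(L, k) - B                          # data residual
    v = np.sign(r) * np.maximum(np.abs(r) - tau, 0)
    return wx, wy, v
```

Gradients with magnitude below θλ are set to zero (enforcing the sparse-gradient prior), and residuals below τ are absorbed into v, which is what gives the data term its robust ℓ1 behaviour.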





Fig. 4. (a-d) Pre-processed AFM images from Fig. 2. (e-h) The corresponding deblurred results.

4) Optimising Blur Function k with Latent Image L Fixed: By combining the energy expressions for the likelihood and priors mentioned before, the blur function k can be optimised by minimising (13) with a fixed latent image L (the result of Section VI-A3).

E(k) = Σ_{∇*∈Θ} φ_O(∇*) ‖∇* L ⊗ k − ∇* B‖² + ω1 ‖k − k_HT‖²    (13)

Note that Equation (13) can be rewritten in matrix multiplication form:

E(k) = ‖Ak − b‖² + ω1 ‖k − k_HT‖²
     = (Ak − b)ᵀ(Ak − b) + ω1 (k − k_HT)ᵀ(k − k_HT)    (14)

where Ak is a matrix multiplication representation of the convolution operation between the 5 latent image gradients and the blur kernel in (13) (i.e., ∇x, ∇y, ∇xx, ∇xy and ∇yy), k and k_HT are the vector representations of the blur function k and the HT result function k_HT, and b is a vector consisting of a concatenation of the 5 gradients of the observed image B. Equation (14) can then be optimised using the Conjugate Gradient (CG) based method introduced by Cho et al. [12], which also uses the Fast Fourier Transform (FFT) to speed up the matrix multiplications in the optimisation. By iteratively repeating the two steps above (Sections VI-A3 and VI-A4), a final optimal point within a relative tolerance (defined inside each step) can be found efficiently. As a result, an estimate of both the latent image and the blur function can be found.

5) Bayesian Blind Deconvolution Results: The deblurred results along with their blur functions are shown in Fig. 4(e)-(h). It was observed that 5 iterations were sufficient to produce an acceptable result for both the latent image and the blur function. Over the iterations, the sub-features of the real feature in the resultant latent image were observed to sharpen, demonstrating that the proposed blur function refinement model can result in an accurate blur function.

However, some poorly deconvolved regions remain which appear as gaps within the real feature. These are the regions in which the real feature overlaps with its repetition (ghost image). In this scenario the AFM imaging process obeys an overwrite model, rather than the summation model of convolution assumed in the blind deconvolution algorithm. This assumption is corroborated by the observation that the overlap regions do not appear to be brighter than their surrounding regions. Therefore, in the next section, an overlap region refinement scheme is described which overcomes this limitation.

B. Overlap Region Refinement

As mentioned previously, the overlap regions in the original blurred AFM images do not obey the summation model for convolution; the Bayesian deconvolution algorithms will therefore always return erroneous estimates in these regions. The reason for the violation of the summation model is shown schematically in Fig. 5. Fig. 5(a) shows the tip-sample force as a function of the distance of the tip from the surface, and indicates a highly non-linear relationship. In general, the difference between the heights of the asperities is significant at this scale.

For the non-overlap regions, an orthogonal imaging mode (Fig. 5(b)) results in Regions I & III both being present in the fast scan axis. Region II, where there is a gap between the real feature and its repetition (ghost image), is only possible when the feature dimensions in the fast scan axis are less than the spacing between the asperities and the feature height is small compared to the asperity recess height. In this imaging mechanism, the asperity closest to the surface (dominant) and the second asperity (submissive) each interact with the feature independently, producing the image of the real feature and its repetition, respectively. However, where the dimensions of the fibril feature exceed the asperity spacing in the orientation parallel to the relative orientation of the asperities, a parallel imaging mechanism operates (Fig. 5(c)). In this case, the submissive asperity is prevented from interacting with the feature surface, and the force generated on the submissive asperity, which remains further from the surface, is negligible compared to the force generated on the dominant asperity. Therefore, the observed intensity in these regions is effectively generated by the dominant asperity alone and the summation model is violated.
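The distinction between the two mechanisms can be seen in a toy 1-D example (synthetic data, not from the paper): where the asperity traces are disjoint, the recorded profile equals their sum, but where they overlap the instrument records only the stronger interaction, an overwrite (max), and the summation model fails.

```python
import numpy as np

# Narrow feature: the dominant trace and the weaker 'ghost' never overlap.
surface = np.zeros(40)
surface[10:14] = 1.0                     # narrow feature
spacing = 8                              # asperity spacing (pixels)
dom = surface                            # dominant-asperity trace
sub = 0.6 * np.roll(surface, spacing)    # weaker submissive ghost
narrow_sum = dom + sub                   # summation model
narrow_rec = np.maximum(dom, sub)        # what is actually recorded

# Feature wider than the spacing: the traces overlap and max != sum there.
wide = np.zeros(40)
wide[10:22] = 1.0
dom_w, sub_w = wide, 0.6 * np.roll(wide, spacing)
wide_sum = dom_w + sub_w
wide_rec = np.maximum(dom_w, sub_w)
```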
In order to obtain a complete reconstruction of the surface topography of the real feature, we introduce an overlap region refinement scheme that constructs a new blurred image in which the overlap regions obey the summation model. The first stage of this scheme is to isolate the overlap regions using a user-interactive segmentation method. A new blurred image, whose overlap regions obey the summation model, is then constructed from the original blurred image and the segmented overlap regions. Finally, a refined ‘clean’ latent image is estimated by deconvolving the newly constructed blurred image with the previously estimated blur function. From our experiments, this overlap region refinement scheme is more robust and simpler to implement than deconvolving with a separate blur kernel in the overlap regions. The details of this scheme are introduced in this section.

1) Overlap Region Isolation: In order to isolate the overlap regions, a mask M of the shape of the real feature is first

1057-7149 (c) 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.



Fig. 6. (a) Pre-processed AFM image from Fig. 2(e). (b) The inner bound of the real feature (provided by the user). (c) The outer bound of the real feature (provided by the user). (d) The mask of the thresholded blurred AFM image, which excludes all background pixels. (e) The resultant mask of the real feature. (f) The result of convolving (e) with the thresholded blur function k0. (g) The resulting mask of the overlap regions. (h) The newly constructed blurred image, created by altering the image so that the intensity of the overlap regions obeys a summation (linear convolution) model.

Fig. 5. Double-tip imaging schematic with regions of strong interaction (red) and weaker interaction (blue). (a) A Lennard-Jones potential showing the typical force profile a tip experiences as a function of distance from an interface. (b) Orthogonal imaging mechanism: Region I, where the dominant asperity images the object; Region II, where neither asperity images the object; and Region III, where the submissive asperity images the object. (c) Parallel imaging mechanism: the interaction of the dominant asperity prevents the submissive asperity from interacting with the object.

created. The AFM image in Fig. 2(e) is used as an example here. The accuracy of the mask is important as it indicates the locations for the later modifications. Fully automatic generation of an accurate mask of the shape of the real features is complex and can be unstable. Conversely, creating an accurate mask directly by manual drawing is even harder due to the intricate shape of the fibril contour. Therefore, this work uses a user-interactive method to secure a good estimate of the overlap regions. This method requires the user to provide a tri-map containing areas of definite foreground (i.e., the real feature), definite background, and a small band of unknown pixels along the boundary of the real feature. As a result, two user inputs are used: the inner bound (Fig. 6(b)) and the outer bound (Fig. 6(c)). An intermediate mask M (Fig. 6(e)) between the inner and outer bounds can be found by computing the minimum of:

M = argmin_M ‖MB − H(M ⊗ k0)‖²    (15)

where H(·) is the Heaviside step function and MB is the mask of the thresholded blurred image (Fig. 6(d)). k0 is the thresholded result of the blur function k, which is one at the position of each peak and zero everywhere else. We observed that the mask preserves the intricate shape of the real feature boundary better when k0 is used instead of k. Equation (15) can therefore be simplified to:

M(h, v) = 1  when MB(h, v) = 1 and MB(h + Sx, v + Sy) = 1;  0 otherwise    (16)

where (Sx, Sy) is the offset between the two peaks inside the blur function, as demonstrated in Fig. 7.

Next, by convolving the mask of the shape of the real feature M with the thresholded result k0 of the blur function found in Section VI-A, Fig. 6(f) is generated. The regions with comparatively higher brightness in this image are where the real feature overlaps with its repetition. As a result, the overlap regions Mol can be easily isolated by thresholding M ⊗ k0; the corresponding thresholded result is shown in Fig. 6(g). The overlap region mask estimation is robust, and the results are the same for most sensible user inputs. Nevertheless, inaccuracies can occur in the overlap region contour, in which case user editing of the overlap regions would be necessary.

2) New Degraded Image Construction: To construct a new blurred image using the overlap region mask estimated above, the missing influence of the submissive asperity is estimated from the corresponding region of the real feature imaged by the dominant asperity. This estimate is then added back to the original blurred image, so that the newly constructed degraded image obeys the summation model throughout. The details of the construction of the new degraded image Bnew are shown in Algorithm 1, and an example of the new degraded image is shown in Fig. 6(h).
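The mask rule of Eq. (16) and the overlap isolation step can be sketched on a synthetic thresholded mask (MB, the peak offset (Sx, Sy) and the feature geometry below are stand-ins, not real data):

```python
import numpy as np

Sx, Sy = 0, 10                           # peak offset (rows, cols), assumed known
MB = np.zeros((32, 40), dtype=bool)
MB[8:14, 5:18] = True                    # real feature
MB[8:14, 15:28] = True                   # its repetition, offset by Sy

# Eq. (16): M(h, v) = 1 iff MB(h, v) = 1 and MB(h + Sx, v + Sy) = 1.
M = MB & np.roll(MB, (-Sx, -Sy), axis=(0, 1))

# M convolved with the two-peak kernel k0 is M plus a shifted copy of M;
# the overlap regions are where the result exceeds one peak's contribution.
conv = M.astype(float) + np.roll(M, (Sx, Sy), axis=(0, 1))
Mol = conv > 1.5                         # mask of the overlap regions
```

Here the convolution with k0 is implemented directly as a sum of shifted copies, which is equivalent for a two-delta kernel.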


Algorithm 2: Bayesian deblurring framework
Input: The pre-processed AFM image B and HT resultant rough blur function kHT
Output: The clean deblurred latent image L and refined blur function k
1: Step 1: Bayesian blind deconvolution:
2: k ← kHT
3: for i ← 2 to n do
4:     Optimise latent image L by fixing blur function k;
5:     Optimise blur function k by fixing latent image L;

Fig. 7. The peaks for both dominant and submissive asperities in the blur function for Fig. 6(a) and the offset between the two.

Algorithm 1: New blurred image construction
Input: The original AFM image B, overlap regions mask Mol and Bayesian resultant blur function k
Output: The new constructed degraded image Bnew
1: Compute the offset vector (sx, sy) from the peak of the dominant asperity to the peak of the submissive asperity in the blur function k.
2: Split the blur function into the dominant part kdom and the submissive part ksub.
3: Centre kdom & ksub at the locations of their peak values.
4: Find the corresponding locations of the overlap regions in the dominant asperity scanning result: M′ol = Shift(Mol, −(sx, sy))
5: Compute the overwritten region of the submissive asperity replica: C = Convolve(Deconvolve(B .* M′ol, kdom), ksub)
6: Shift C back to the locations of the overlap regions: C′ = Shift(C, (sx, sy))
7: Add the data back to the observed image: Bnew = B + C′
8: return Bnew
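The steps of Algorithm 1 can be sketched on synthetic data. Here we assume, as for the thresholded kernel k0, that each asperity acts as a single shifted, scaled peak, so Deconvolve(·, kdom) reduces to an identity and the submissive replica is a scaled, shifted copy; a full implementation would use FFT-based (de)convolution with the estimated kdom and ksub.

```python
import numpy as np

def shift(img, s):
    """Integer pixel shift (wrap-around is harmless for interior features)."""
    return np.roll(img, s, axis=(0, 1))

(sx, sy), gain = (0, 8), 0.6              # asperity offset and relative gain
L = np.zeros((32, 32)); L[10:20, 8:20] = 1.0     # synthetic surface feature
dom, sub = L, gain * shift(L, (sx, sy))          # the two asperity traces
B = np.maximum(dom, sub)                  # observed image: overwrite model
Mol = (dom > 0) & (sub > 0)               # overlap-region mask

Mol_dom = shift(Mol, (-sx, -sy))          # step 4: overlap as seen by k_dom
C = gain * (B * Mol_dom)                  # step 5: rebuild overwritten replica
C = shift(C, (sx, sy))                    # step 6: move onto overlap regions
B_new = B + C                             # step 7: B_new obeys the summation model
```

After step 7, B_new equals dom + sub everywhere, i.e. the constructed image satisfies the summation (linear convolution) model throughout.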

3) New Latent Image Estimation: After the new degraded image Bnew is constructed, the deblurring algorithm introduced in Section VI-A3 is performed to estimate the new ‘clean’ latent image. It takes two inputs: i) the estimate of the blur function k from Section VI-A, and ii) the modified degraded image Bnew. The blur function estimation procedure of Section VI-A4 could also be performed here to further improve the blur function. However, we have observed that the blur function k resulting from the first step (Section VI-A3) is sufficient to generate an acceptable result; hence, only the latent image estimation step is used here. Further results and discussion are described in the following sections. The structure of our Bayesian deblurring framework is shown in Algorithm 2.

Algorithm 2 (continued):
6: Step 2: Overlap regions refinement:
7:     Find the overlap regions Mol;
8:     Construct a new blurred image Bnew;
9:     Estimate the new ‘clean’ latent image Lnew by using line 4;
10: return Lnew, k

TABLE I
VALUES OF THE PARAMETERS USED IN THIS PAPER

Parameter   Value    Parameter   Value      Parameter   Value
φO(∇x)      25       φO(∇yy)     12.5       θmin        0.01
φO(∇y)      25       λ           0.01       τmin        0.1
φO(∇xx)     12.5     θinitial    0.2·λ⁻¹    ω1          5·10³
φO(∇xy)     12.5     τinitial    1

VII. RESULTS

Despite the number of user-tunable parameters involved in the proposed algorithm, we found that a single set of parameter values (based on the values used in [18], [13], [12]) was sufficient to provide acceptable results for all AFM images presented here. All of the tuning parameters and their values are listed in Table I. Two parameters may still be exposed to the user for tuning, as a trade-off between adaptability and genericity: the latent image prior weight λ and the blur function prior weight ω1. By adjusting these two parameters, our deblurring framework can achieve better performance on different types of images. Note that the size of the blur function is different for each AFM image. However, as long as the size is sufficiently large to contain the estimated peaks, it does not affect the final deblurred result significantly. Our blur kernel sizes are based on the HT transform results, padded by 5 pixels in each direction.

AFM images restored using our Bayesian deblurring algorithm are shown in Fig. 8. The first row shows the pre-processed images from Fig. 2 and the second row shows the corresponding deblurred results and their blur functions. In order to demonstrate the correctness of the deblurring results, the convolutions of the deblurred results with their corresponding blur functions are shown in the third row. As can be seen, the ‘double-tip’ effects are removed and the topographical details of the fibrils are well preserved. Combining the information from the real fibril features and their repetitions, the restored images allow the identification



Fig. 8. (a-d) Pre-processed AFM images as shown in Fig. 2. (e-h) The deblurred results of the proposed framework and their corresponding blur functions. (i-l) The results of convolving the deblurred images with their corresponding blur functions.

of small features on the fibril surface that were obscured in the degraded images due to convolution with the tip geometry. Fig. 9 compares our deblurring results with existing blind deconvolution systems. Fig. 9(a-d) are our deblurring results. Fig. 9(e-h) are the blind deconvolution results using a full implementation of Shan et al. [18]². Fig. 9(i-l) and (m-p) are the blind deconvolution results of two widely used AFM image processing packages, Gwyddion (open source) [21] and

²http://www.cse.cuhk.edu.hk/~leojia/deblurring.htm

SPIP™ (commercial) [28]. From Fig. 9, we observe that the existing algorithms fail to give a good estimate of multi-modal blur functions when performing blind deconvolution: their deblurring results are merely sharper versions of the original AFM image in which both the real fibril and its repetition remain. Our HT blur function initialisation algorithm therefore achieves better multi-modal blur function estimation, which leads to a better deblurring result.

Furthermore, we separate the kernel estimation problem from these deconvolution systems to compare the performance



Fig. 9. The blind deconvolution results of Fig. 2(e). (a-d) Our results. (e-h) The results of Shan et al. [18]. (i-l) The results of Gwyddion [21]. (m-p) The results of SPIP™ [28].



Fig. 10. The deblurring results of Fig. 2(e) using our resultant blur functions. (a-d) The results of Shan et al. [18]. (e-h) The results of Gwyddion [21]. (i-l) The results of SPIP™ [28].

of the deconvolution process. Fig. 10 compares our deblurring results with some existing tip-deconvolution algorithms using our estimated blur functions as the known tip shape. Fig. 10(a-d), (e-h) and (i-l) show the deblurring results using a full implementation of Shan et al. [18], Gwyddion [21] and SPIP™ [28] respectively. Generally, all deblurring results are improved compared with the corresponding results in Fig. 9, as a plausible blur kernel has been provided to each method. However, our algorithm still outperforms the other methods by significantly reducing the background noise and ringing effects. This is due to the clipping of values of the restored image below the mean during the deconvolution process. Finally, our overlap region refinement scheme makes our framework capable of generating a good estimate of the fibril in the overlap regions, where the blurred image follows an overwrite model rather than the conventional summation model.

The Bayesian deblurring framework in this paper does not

consider other types of AFM imaging artefacts, and their presence in the degraded image may cause poorly deconvolved regions in the restored result. Examples of such artefacts include significant image noise, feature blurring due to sample deformation, and pixels in the real feature which have no corresponding repetition (see the red arrow in Fig. 4(d)). Whilst incorporating restoration techniques for these artefacts into our framework might improve the restoration, this is beyond the scope of this work.

VIII. QUANTITATIVE VALIDATION

As the ground truth sample surface topography and the tip shape data are not available for our degraded AFM images, a synthetic experiment has been carried out with the aim of performing a quantitative evaluation of our deblurring results. This experiment is intended to ascertain how effective our algorithm is at restoring images degraded by the degradation


[Fig. 11 panel orientations: (a) 0°, (b) 30°, (c) 60°, (d) 120°; corresponding results (e) 0°, (f) 30°, (g) 60°, (h) 120°]

Fig. 11. (a-d) 4 selected synthetic degraded AFM images blurred with different orientations and constant gain. (e-h) The corresponding deblurring results of (a-d).

TABLE II
QUANTITATIVE MEASUREMENTS OF THE DEBLURRING RESULTS OF THE SYNTHETIC DATA

Image      Metric      0°      30°     60°     90°     120°    150°    180°
Fig. 8(e)  SSIM        0.8611  0.8663  0.8596  0.8571  0.8497  0.8543  0.8644
           PSNR (dB)   35.23   35.15   35.02   34.98   35.20   35.42   35.34
Fig. 8(f)  SSIM        0.8290  0.8230  0.8690  0.8623  0.8493  0.7266  0.8455
           PSNR (dB)   35.63   33.96   38.25   38.65   36.72   30.05   33.14

model presented here. The details of the synthetic data generation as well as the corresponding deblurring results evaluation are discussed in this section.

A. Synthetic Degraded Image Generation

Two of our deblurred results (Fig. 8(e) & (f)) are used as the ground truth clean image L for our synthetic data. A set of blur functions {k1, k2, ..., kn} with different blurring orientations was created (n is set to 7 in our experiments). The orientations are equally distributed from 0 to 180 degrees in 30 degree increments. Then, using L and each blur function ki, i ∈ {1, 2, ..., 7}, a degraded blurred image B can be generated. In order to generate a reasonable degraded image that follows the overwrite model, we use the natural image matting algorithm introduced in [29] to estimate the foreground Ifg, background Ibg and alpha map α, which in our case are the real fibril features, their echoes and the matting weights, respectively. The degraded blurred image B can then be calculated by

B = (α .* Ifg) + Ibg + |n|,   where   Ifg = L ⊗ kdom,   Ibg = L ⊗ ksub    (17)
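A hedged sketch of this degradation model on synthetic stand-in data follows (L, the kernels and the alpha map are made up here; the paper takes L from Fig. 8(e)-(f) and estimates α with the closed-form matting of [29]):

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)
L = np.zeros((64, 64))
L[20:40, 15:35] = rng.random((20, 20))           # stand-in 'clean' latent image

k_dom = np.zeros((9, 9)); k_dom[4, 2] = 1.0      # dominant asperity peak
k_sub = np.zeros((9, 9)); k_sub[4, 6] = 0.5      # weaker submissive peak

I_fg = fftconvolve(L, k_dom, mode='same')        # real fibril features
I_bg = fftconvolve(L, k_sub, mode='same')        # their echoes
alpha = (I_fg > 1e-6).astype(float)              # crude stand-in alpha map

n = rng.normal(0.0, 0.02, L.shape)               # Gaussian noise
B = alpha * I_fg + I_bg + np.abs(n)              # Eq. (17): non-negative B
```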

Here n denotes Gaussian noise with a mean of 0 and a variance of 0.02². The absolute value of n is used because the pixel intensities of the degraded blurred image B should be non-negative. kdom and ksub represent the dominant and submissive asperities in the synthetic blur functions shown previously in Fig. 7. The resultant degraded image B shows a more realistic topography at the boundary of the fibril (especially in the overlap regions) compared to a direct overwrite process. Finally, the new degraded image B is used as the single input of our deblurring framework introduced in Section VI.

B. Evaluation

Selected synthetic deblurring results are shown in Fig. 11. The images in the top row are the synthetic degraded images


generated with different blur function orientations; constant gain is used in these blur functions. The bottom row shows the corresponding deblurred results. This demonstrates that our deblurring framework can successfully deblur the synthetic images regardless of the orientation of the blur function. Table II shows quantitative measurements of the restored latent images using the Structural SIMilarity index (SSIM) [30] as well as the Peak Signal-to-Noise Ratio (PSNR). The SSIM map shows high scores for the regions inside the fibril (approximately 1, indicating high similarity). Lower scores are observed at the boundary of the fibril, which lowers the average (the SSIM score for each image) to around 0.85. However, in our study, the region within the boundary of the fibril is of greater interest than the boundary itself. Moreover, the PSNR value for each result is around 35 dB, which indicates a well restored image compared to the ground truth. These measurements clearly demonstrate that our deblurring framework is capable of restoring degraded images under the assumed degradation model, preserving the fibril structure as well as the details inside the fibril while performing the deblurring process.
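For reference, PSNR as used in Table II is sketched below on synthetic data (SSIM [30] is typically computed with an existing implementation such as skimage.metrics.structural_similarity and is omitted here; `peak` is the maximum possible pixel value of the normalised images).

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio, in dB, of an estimate against a reference."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
est = np.clip(ref + rng.normal(0.0, 0.01, ref.shape), 0.0, 1.0)
score = psnr(ref, est)                    # sigma = 0.01 gives roughly 40 dB
```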


Fig. 12. A plot of quantitative measurements of 25 synthetic AFM images. 95% confidence intervals are plotted as vertical error bars and the maximum & minimum particle percentages are plotted as horizontal error bars for each cluster. It shows that the performance of our framework reduces as the particle percentage increases.

C. Versatility

In order to show the versatility of our framework, we use the open-source software Gwyddion [21] to generate 25 synthetic AFM images of particles with increasing coverage. The coverage is estimated simply by measuring the particle percentage, which we define as the percentage of non-zero pixels in the synthetic image. The data generation tool in Gwyddion allows the random generation of particles with predefined coverage levels. 5 images are generated at each of 5 coverage levels, with particle percentages ranging from 6% to 22% in approximately 4% increments. Each image is then artificially degraded with the double-tip artefact using the same method as in Section VIII-A, and the deblurring algorithm is applied to each example. We measure the similarity between each deblurred result and the corresponding ground truth image using SSIM and PSNR (as in Section VIII-B). A plot of the quantitative measurements of the 25 restored images is shown in Fig. 12. Both metrics confirm that the quality of the deblurred results drops off as the coverage level increases. This is confirmed qualitatively in Fig. 13. Fig. 13(a) & (b) are two synthetic AFM images with particle percentages of 6.40% and 17.75% respectively, and Fig. 13(c) & (d) are their corresponding deblurred results. Clearly, Fig. 13(c) has a better appearance than Fig. 13(d). This is because each individual particle in the former is better isolated than in the latter, as demonstrated by the red arrows in Fig. 13(b) & (d).

IX. CONCLUSIONS

Deconvolution of tip geometry from AFM images is a long-standing issue that is becoming increasingly important as access to high resolution imaging of challenging biological systems becomes more prevalent. This work focuses on correcting the ‘double-tip’ artefact, which is a dramatic form of


Fig. 13. 2 selected synthetic AFM images of particles blurred with the same orientation and gain. (a) An example degraded image with a particle percentage of 6.40%. (b) An example degraded image with a particle percentage of 17.75%. (c-d) The corresponding deblurring results of (a-b). The red arrows in (b) & (d) indicate some poorly restored regions; however, our framework is still capable of preserving the shape of the particles as well as their brightness contrast even when the particle percentage is high.

distortion compared to the traditional tip convolution artefacts. A deblurring model based on the natural image blurring model was developed for the restoration of degraded images, and a Bayesian deblurring framework containing two key steps was used to recover the true surface topography of the fibrils. The proposed deblurring framework was successful at removing the ‘double-tip’ artefacts from AFM images, resulting in a true image of the surface without sacrificing resolution. Due to the presence of overlapping regions (resulting from


the parallel image formation mechanism), it was not possible to implement a fully automatic algorithm. A user-interactive process is incorporated into our deblurring framework, which allows the user to provide information to secure a good estimate of the overlap regions. A new degraded image is then constructed which satisfies the summation model of convolution throughout, and a Bayesian deconvolution step is applied to generate restored images with a better estimate in the overlap regions. The restored images demonstrate higher resolution than the original images once the influence of the tip geometry has been removed.

The ability to deconvolve the influence of tip geometry from images with significant variations in local topography is vital to the high resolution study of samples (nanoparticles, nanowires, amyloid fibrils, etc.) at interfaces, where any imperfections in the geometry of the tip are amplified by the length-scale and topographical gradients present. An ability to access the structure of such samples with Angstrom resolution is a vital tool in understanding the molecular structure of these systems. Relating changes in the chemistry of precursors to the resulting structure of particles and aggregates at the molecular level is likely to further our understanding of the assembly process and thus gain greater control over it.

ACKNOWLEDGMENT

This work was supported, in part, by the Science Foundation Ireland PI Programme: SFI-PI 08/IN.1/I2112. JIK acknowledges support provided by NanoRemedies, which is funded under the Programme for Research in Third Level Institutions Cycle 5 and co-funded by the European Regional Development Fund. SPJ acknowledges support provided by Science Foundation Ireland (12/IA/1449).

REFERENCES

[1] G. Binnig, C. F. Quate, and C. Gerber, “Atomic force microscope,” Physical Review Letters, vol. 56, pp. 930–933, Mar. 1986.
[2] C. J. Forman, A. A. Nickson, S. J. Anthony-Cahill, A. J. Baldwin, G. Kaggwa, U. Feber, K. Sheikh, S. Jarvis, and P. D. Barker, “The morphology of decorated amyloid fibers is controlled by the conformation and position of the displayed protein,” ACS Nano, vol. 6, no. 2, pp. 1332–1346, 2012.
[3] T. P. J. Knowles, M. Vendruscolo, and C. M. Dobson, “The amyloid state and its association with protein misfolding diseases,” Nat. Rev. Mol. Cell Biol., vol. 15, no. 6, pp. 384–396, Jun. 2014.
[4] S. Jarvis and A. Mostaert, The Functional Fold: Amyloid Structures in Nature. Pan Stanford, 2012. [Online]. Available: http://books.google.ie/books?id=tFBAfJTkqL8C
[5] E. Gazit, “Self-assembled peptide nanostructures: the design of molecular building blocks and their technological utilization,” Chemical Society Reviews, vol. 36, pp. 1263–1269, 2007.
[6] T. Fukuma, A. S. Mostaert, L. C. Serpell, and S. P. Jarvis, “Revealing molecular-level surface structure of amyloid fibrils in liquid by means of frequency modulation atomic force microscopy,” Nanotechnology, vol. 19, no. 38, p. 384010, 2008. [Online]. Available: http://stacks.iop.org/0957-4484/19/i=38/a=384010
[7] P. Markiewicz and M. C. Goh, “Atomic force microscope tip deconvolution using calibration arrays,” Review of Scientific Instruments, vol. 66, no. 5, pp. 3186–3190, May 1995.
[8] J. S. Villarrubia, “Algorithms for scanned probe microscope image simulation, surface reconstruction, and tip estimation,” J Res Natl Inst Stand Technol, vol. 102, pp. 102–425, 1997.
[9] A. Yacoot and L. Koenders, “Aspects of scanning force microscope probes and their effects on dimensional measurement,” Journal of Physics D: Applied Physics, vol. 41, no. 10, p. 103001, 2008. [Online]. Available: http://stacks.iop.org/0022-3727/41/i=10/a=103001

[10] Y. Yeo, B. Aumond, and K. Youcef-Toumi, “Precision atomic force microscope imaging,” in Signal Processing Proceedings, 2000. WCCC-ICSP 2000. 5th International Conference, vol. 2, 2000, pp. 1180–1186.
[11] J. Jia, “Single image motion deblurring using transparency,” in Computer Vision and Pattern Recognition, 2007. CVPR ’07. IEEE Conference on, Jun. 2007, pp. 1–8.
[12] S. Cho and S. Lee, “Fast motion deblurring,” ACM Transactions on Graphics (SIGGRAPH), vol. 28, pp. 145:1–145:8, Dec. 2009.
[13] L. Xu and J. Jia, “Two-phase kernel estimation for robust motion deblurring,” in Proceedings of the 11th European Conference on Computer Vision: Part I, ser. ECCV’10. Berlin, Heidelberg: Springer-Verlag, 2010, pp. 157–170. [Online]. Available: http://portal.acm.org/citation.cfm?id=1886063.1886077
[14] A. C. Likas and N. P. Galatsanos, “A variational approach for Bayesian blind image deconvolution,” IEEE Trans. on Signal Processing, vol. 52, pp. 2222–2233, 2004.
[15] R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman, “Removing camera shake from a single photograph,” ACM Trans. Graph, vol. 25, pp. 787–794, 2006.
[16] J. Miskin and D. J. MacKay, “Ensemble learning for blind image separation and deconvolution,” 2000.
[17] D. J. Field, “What is the goal of sensory coding?” Neural Computation, vol. 6, pp. 559–601, Jul. 1994. [Online]. Available: http://portal.acm.org/citation.cfm?id=188132.188136
[18] Q. Shan, J. Jia, and A. Agarwala, “High-quality motion deblurring from a single image,” ACM Transactions on Graphics (SIGGRAPH), 2008.
[19] Y. F. Wang, D. Corrigan, C. Forman, S. Jarvis, and A. Kokaram, “Restoration of high-resolution AFM images captured with broken probes,” in SPIE BiOS Symposium, San Francisco, United States, January 2012.
[20] T. Fukuma, J. I. Kilpatrick, and S. Jarvis, “Phase modulation atomic force microscope with true atomic resolution,” Review of Scientific Instruments, vol. 77, no. 12, p. 123703, 2006.
[21] D. Necas and P. Klapetek, “Gwyddion: an open-source software for SPM data analysis,” Central European Journal of Physics, vol. 10, pp. 181–188, 2012.
[22] MATLAB, version 7.6.0.324 (R2008a). Natick, Massachusetts: The MathWorks Inc., 2008.
[23] Y. Chen, J. Cai, M. Liu, G. Zeng, Q. Feng, and Z. Chen, “Research on double-probe, double- and triple-tip effects during atomic force microscopy scanning,” Scanning, vol. 26, no. 4, pp. 155–161, 2004.
[24] Y. Chen, “Elucidation and identification of double-tip effects in atomic force microscopy studies of biological structures,” Journal of Surface Engineered Materials and Advanced Technology, vol. 2, no. 3, pp. 238–247, 2012.
[25] Y. Wu, Y. Hu, J. Cai, S. Ma, X. Wang, and Y. Chen, “The analysis of morphological distortion during AFM study of cells,” Scanning, 2008.
[26] J. Besag, “On the statistical analysis of dirty pictures (with discussion),” Journal of the Royal Statistical Society B, vol. 48, no. 3, pp. 259–302, 1986.
[27] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, “Understanding blind deconvolution algorithms,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2354–2367, 2011.
[28] Image Metrology, “SPIP™ software package,” http://www.imagemet.com/.
[29] A. Levin, D. Lischinski, and Y. Weiss, “A closed-form solution to natural image matting,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 30, no. 2, pp. 228–242, Feb. 2008.
[30] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” Image Processing, IEEE Transactions on, vol. 13, no. 4, pp. 600–612, April 2004.

1057-7149 (c) 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


Yun-feng Wang received his degree in Electronic & Computer Engineering from Trinity College Dublin in 2010. He then joined the Sigmedia Group (www.sigmedia.tv) and is currently pursuing a PhD on Atomic Force Microscopy (AFM) image processing at Trinity College Dublin under the supervision of Prof. Francis Boland. His research interests include AFM image restoration, registration, and segmentation.


Anil Kokaram received his PhD in Signal Processing from Cambridge University in 1993. He then worked as a research fellow in the Signal Processing Group at the Cambridge University Engineering Department until 1998, when he took up a lectureship at Trinity College Dublin. He is now a Technical Lead in YouTube's video infrastructure division and also holds a Professorship at Trinity College Dublin. His work is in the broad area of Video Processing, particularly applications in the entertainment industry including video coding, restoration, and film postproduction. He has published over 150 papers in these areas and was an Associate Editor of the IEEE Transactions on Image Processing and the IEEE Transactions on Circuits and Systems for Video Technology. In 2007 he was awarded a Technical Oscar by the American Academy of Motion Picture Arts and Sciences for his work on motion estimation in the movies, and in 2011 his startup (GreenParrotPictures) was acquired by Google.

Jason I. Kilpatrick received his Ph.D. in Chemistry from The University of Newcastle, Australia, in 2006 and has over 10 years of experience in AFM instrumentation and technique development. He joined the Jarvis Laboratory, Trinity College Dublin, in 2005 before moving to University College Dublin in 2007. His research is focused on solving important biological questions, including the influence of water and ions at biological interfaces, the measurement of cellular mechanics, and high resolution biological imaging.

Suzanne P. (Suzi) Jarvis graduated from Oxford with a BA in Physics and a DPhil in Materials Science. She has worked with Atomic Force Microscopy (AFM) since 1990, including 8 years in the Joint Research Centre for Atom Technology in Japan. She established the Nanoscale Function Group in 2002 to apply AFM instrumentation to the understanding of biological materials. Her group has particular expertise in studying aqueous-substrate interfaces with atomic and molecular precision.

Francis M. (Frank) Boland studied Electrical Engineering at University College Dublin. He then worked as a research assistant with Sheffield City Polytechnic and the British Steel Corporation's Special Steels Research Laboratory on the measurement and control of steelmaking. He was awarded his PhD by the CNAA for his thesis on modelling and state estimation for arc furnace steelmaking. After lecturing in Control Engineering at the University of Sheffield, he joined Trinity College Dublin, where he has been Head of the Department of Electronic & Electrical Engineering, Senior Dean, and College Bursar. He is a Fellow of Trinity College Dublin and of The Institution of Engineers of Ireland. Since 1994 he has been Professor of Engineering Science, and his research interests are in signal and systems theory and engineering applications. He has been Principal Investigator on research projects funded by industrial, national, and EU agencies and is currently directing a research group working on digital signal processing for immersive audio for virtual reality, with applications that include gaming and eLearning systems.

David Corrigan received the B.A./B.A.I. degree in computer and electronic engineering from Trinity College Dublin, Dublin, Ireland, in 2003 and the Ph.D. degree under the supervision of Prof. Anil Kokaram in 2008. The subject of his doctoral thesis was the relationship between motion estimation reliability and archived film restoration. Since then, he has worked on the research team at The Foundry, a leading provider of visual-effects software to the cinema postproduction industry, and has spent three years as a Research Fellow with the Sigmedia Group, Trinity College Dublin. Since 2011, he has been an Assistant Professor at Trinity College Dublin. His current research interests concern image/video processing applications in the life sciences, microscopy, and the creative industries.

