Journal of Microscopy, Vol. 261, Issue 2, 2016, pp. 177–184

doi: 10.1111/jmi.12306

Received 20 February 2015; accepted 23 July 2015

Development and comparison of the methods for quantitative electron probe X-ray microanalysis of thin specimens and their application to biological material

A. WARLEY

Centre for Ultrastructural Imaging, King's College, London, U.K., and Department of Histology, Faculty of Medicine, University of Granada, Spain

Key words. Biology, Cliff-Lorimer, Hall, quantification, X-ray microanalysis.

Summary

In recent years, there has been a return to the use of electron probe X-ray microanalysis for biological studies, but this has occurred at a time when the Hall programme, which acted as the mainstay for biological microanalysis, is no longer easily available. Commercial quantitative routines rely on the Cliff-Lorimer method, which was originally developed for materials science applications. Here, the development of these two main routines for obtaining quantitative data from thin specimens is outlined, and the limitations that are likely to be met when the Cliff-Lorimer routine is applied to biological specimens are discussed. The effects of specimen preparation on element content are briefly summarized, and the problems encountered when using quantitative analysis on resin-embedded materials are emphasized.

Correspondence to: Alice Warley, CUI, King's College London, Guy's Campus, London SE1 1UL, U.K. Tel: +44 (0)1223-893194; fax: +44 (0)2078-486950; e-mail: [email protected]

Introduction

In biology, quantitative electron probe X-ray microanalysis (EPXMA) of thin specimens had its heyday in the examination of physiologically active diffusible elements in the 1980s and 1990s, but this traditional use is now practiced in few laboratories worldwide. Nevertheless, there has been a resurgence of interest in applying EPXMA to biological studies, particularly at the materials science/biology interface, stimulated in part by the increase in nanoparticle research. This increase in interest has occurred at a time when continuum normalization (the Hall technique), the mainstay for biological quantification, is no longer supported commercially (although it is available as freeware from http://www.nist.gov/mml/mmsd/software.cfm). With modern EDS systems, quantification can appear to be achieved simply by a 'push-button' operation without the need for reference to standards, an approach that does not take into account

© 2015 The Authors. Journal of Microscopy © 2015 Royal Microscopical Society

the special requirements for successful analysis of biological specimens. This review is meant to act as a guide for users new to the technique. The development of the different methods used for the quantification of spectra from thin sections will be described, focusing on continuum normalization (Hall, 1979a; Hall & Gupta, 1986), commonly known as the Hall technique, which has been largely used for biological applications, and the ratios method described by Cliff & Lorimer (1975), the standard procedure for materials science specimens. The problems likely to be met when the Cliff-Lorimer method is used with biological specimens will be highlighted. In addition, the way in which different specimen preparation procedures affect the results achieved from quantitative analysis will be discussed.

Methods for quantitative analysis of thin specimens

When thin sections are analyzed in a TEM at high voltage, there is minimal interaction of the electron beam with the specimen. Generated X-rays have a very low probability of interacting within the specimen, so that emitted X-rays are neither absorbed nor do they interact within the specimen to cause fluorescence of other X-rays (the absence of absorption and fluorescence constitutes the thin specimen criteria). In a spectrum generated from a thin specimen, there is a simple relationship: the number of counts detected for a given element is directly proportional to its concentration in the irradiated area. Nevertheless, quantification cannot be achieved solely by measurement of net peak counts; corrections are needed for differences in specimen thickness and for differences in the overall detectability of different elements, both of which affect the net peak intensity.
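The dependence of net peak counts on thickness as well as concentration can be shown with a toy calculation (the function, the sensitivity parameter and all numbers below are hypothetical, for illustration only):

```python
# Illustrative only: net peak counts scale with the total mass of the
# element in the irradiated volume, i.e. with concentration x thickness.
def net_counts(concentration, thickness_nm, sensitivity=1.0):
    # 'sensitivity' stands in for element- and detector-specific efficiency
    return sensitivity * concentration * thickness_nm

# Same concentration, doubled section thickness: counts double as well,
# so raw counts alone cannot distinguish a thicker section from a more
# concentrated one.
thin = net_counts(10.0, 100.0)
thick = net_counts(10.0, 200.0)
print(thick / thin)  # -> 2.0
```

This is why both quantification schemes below work with ratios rather than raw counts.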
Continuum normalization (Hall) method

In biological specimens, the elements of interest are generally present at low concentrations in an organic matrix that consists of the low atomic number elements carbon, nitrogen and oxygen, which may constitute up to 90% of the total mass. Due to their high concentration, these elements contribute substantially to the continuum radiation (also known as bremsstrahlung or white radiation) generated when the incident beam is decelerated by the atomic nuclei (mass) of the specimen. As early as 1966, Hall (cited in Hall, 1986) suggested that this continuum radiation could be used to measure the total mass in the area of analysis, and introduced the practice of normalizing characteristic X-ray intensity to continuum intensity to determine element mass as a ratio of total organic mass and thus compensate for variations in thickness. In later papers, Hall et al. (1973) and Hall & Peters (1974) demonstrated that comparison with peak-to-continuum ratios generated from standards of known composition could be used to achieve quantitative results. Although Hall himself commented that continuum normalization was initially introduced as a gross correction (Hall, 1989), the method was adopted by the biological community as the preferred method for quantitative analysis (Shuman et al., 1976; Roomans, 1988a; Zierold, 1988).

Fig. 1. A diagram showing the areas in a spectrum that are used for quantification. With continuum normalization, the mass fraction of the element of interest is determined by dividing the net number of counts in its characteristic peak (Px) by the number of counts in a continuum window (W) set in a peak-free region of the spectrum. Quantification (mmoles/kg total mass) is achieved by determining a sensitivity factor for the element of interest (kx) from Px/W values obtained by the analysis of a standard of known composition. With the Cliff-Lorimer routine, a ratio of elemental concentrations is obtained by dividing the net peak intensities of the two elements of interest, Px/Py. Analysis of standards of known element ratios is used to determine kxy, which corrects for differences in detector efficiency between the two elements.

Using continuum normalization, the concentration of a given element x is expressed as mass of x/total mass in the area of analysis. The mass of x is determined from the number of counts in its characteristic peak (Px, Fig. 1) and the total mass in the area of analysis from the continuum counts measured in a peak-free region of the spectrum (W, Fig. 1). The basic equation is

Cx,sp = [(Px/W)sp / (Px/W)std] × [Gsp / Gstd] × Cx,std    (1)

where Cx is the concentration in mmoles/kg of specimen, the subscripts sp and std refer to specimen and standard, respectively, and G is the G-factor, the sum over all constituent elements present in the irradiated region of the mass fraction multiplied by (Z²/A). The factor Cx,std / [(Px/W)std × Gstd] is a constant (kx) determined from the analysis of a standard whose composition has been analyzed by an independent method (Warley, 1990; Patak et al., 1993) and is variously called the standard correction or S factor. Full details of quantification procedures can be found in Warley (1997).

Fig. 2. A typical spectrum generated by the analysis of an osteoblast cell supported on a Pioloform-covered gold grid, showing the characteristic peaks generated from the specimen (C, O, Na, Mg, P, S, Cl and K) as well as the Au peak generated from the support grid. The shaded area (W) denotes the region that is used for the estimation of continuum counts.

Continuum normalization has the advantage of being independent of fluctuations in both beam current and specimen thickness, which affect both P and W to the same extent. Additionally, the standard correction factors are characteristic of the particular system in use and only need to be determined once; it should be noted, however, that the correction factors depend on the operating voltage of the microscope. The major disadvantage of continuum normalization is that the continuum reading (W) must be that generated by the specimen itself; so, for a thin specimen supported on a plastic film-covered grid, the continuum reading needs to be corrected for contributions from both the grid and the film. Details of the procedures involved in performing these corrections are given in the Appendix. The difficulty encountered in obtaining an accurate estimate of the specimen continuum has been the main drawback to the use of this technique. In the early days of microanalysis, inaccurate estimation of the continuum counts for specimen and film, and variation in continuum production over the grid area, were shown to lead to variability in results (Hall, 1979b; Warner et al., 1985; Steinbrecht & Zierold, 1989) and even to zero or negative values for the specimen continuum (Roomans, 1988b). Since then, there have been improvements in the design of analytical microscopes, with shielding of both the specimen chamber and specimen holder to minimize contributions to the spectrum from these sources.
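In code, the Hall procedure of Eq. (1) can be sketched as follows. This is a minimal illustration, not a published implementation: the function names, the matrix composition and all count values are assumptions, and the specimen continuum is taken as already corrected for film and grid contributions.

```python
def g_factor(composition):
    """G = sum of mass_fraction * Z^2 / A over constituent elements.
    composition maps element symbol -> (mass fraction, Z, A)."""
    return sum(f * z * z / a for f, z, a in composition.values())

def hall_concentration(p_sp, w_sp, p_std, w_std, c_std, g_sp, g_std):
    """Eq. (1): Cx,sp = (Px/W)sp / (Px/W)std * Gsp/Gstd * Cx,std."""
    return (p_sp / w_sp) / (p_std / w_std) * (g_sp / g_std) * c_std

# G for an idealized organic matrix (illustrative mass fractions;
# Z and A are the real atomic numbers and weights).
matrix = {"C": (0.5, 6, 12.011), "N": (0.2, 7, 14.007), "O": (0.3, 8, 15.999)}
g_sp = g_factor(matrix)

# Hypothetical counts; here specimen and standard are assumed to have
# the same matrix, so the G ratio is unity.
c_k = hall_concentration(p_sp=500, w_sp=10_000, p_std=1000, w_std=10_000,
                         c_std=120.0, g_sp=g_sp, g_std=g_sp)
print(round(c_k, 1))  # -> 60.0 (mmoles/kg)
```

The ratio structure of the code makes the stated advantage explicit: scaling both P and W by the same thickness or beam-current factor leaves the result unchanged.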
Extraneous contributions can be minimized in practice by the use of high-transmission grids, carrying out analysis in the central region of grids, and avoiding analysis close to grid bars (Roomans, 1988b; Warley, 1997).

Ratios (Cliff-Lorimer) method

Materials science specimens generally consist of high atomic number elements that produce well-defined characteristic peaks which constitute the majority of the spectrum. Cliff & Lorimer (1975) showed that, provided that the thin section criteria are met, the relative concentrations of two elements within a specimen can be directly determined from measurement of their characteristic X-ray intensities, i.e.

Cx/Cy = kxy × (Px/Py)    (2)

where Px and Py are the measured characteristic X-ray intensities (see Fig. 1), Cx and Cy the weight fractions of the two elements, and kxy a relative sensitivity factor that takes into account differences in ionization cross-section, fluorescence yield and detector efficiency for the two elements (note that kxy corrects for the relative sensitivity of the instrumentation to the two different elements and differs from the standard correction factor (kx) used in the Hall method). Quantification is achieved by reference to standards of known atomic ratios. With this method, when all of the elements present are represented as peaks in the spectrum, it is possible to determine the relative amount of every constituent element and, since the total must be 100%, absolute concentrations can then be determined without reference to a standard of known composition. This 'standardless' routine is generally incorporated into commercial programmes.
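The ratio method and the normalization step behind a standardless result can be sketched as follows (illustrative only: the k-factors and count values are invented, and commercial programmes apply further corrections):

```python
def cliff_lorimer_ratio(p_x, p_y, k_xy):
    """Eq. (2): Cx/Cy = kxy * Px/Py."""
    return k_xy * p_x / p_y

def standardless_wt_percent(peaks, k_factors):
    """peaks: element -> net peak counts; k_factors: element -> k-factor
    (relative to a common reference element). Scales each peak by its
    k-factor and normalizes the total to 100%."""
    raw = {el: k_factors[el] * p for el, p in peaks.items()}
    total = sum(raw.values())
    return {el: 100.0 * v / total for el, v in raw.items()}

# Hypothetical binary specimen in which every element gives a peak:
peaks = {"Fe": 9000.0, "Ni": 1000.0}
k = {"Fe": 1.0, "Ni": 1.1}
wt = standardless_wt_percent(peaks, k)
print({el: round(v, 1) for el, v in wt.items()})  # Fe ~89.1, Ni ~10.9 wt%
```

The normalization to 100% is exactly what fails for biological material, where the organic matrix elements are poorly represented as peaks, which is the limitation discussed below.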


is still not easily quantified. Cliff et al. (1984) used an ultrathin window detector and noted variability in kxSi for X-rays with energy
