1991 Phys. Med. Biol. 36 939 (http://iopscience.iop.org/0031-9155/36/7/003)

Phys. Med. Biol., 1991, Vol. 36, No 7, 939-952. Printed in the UK

A normalization technique for 3D PET data

M Defrise†, D W Townsend‡, D Bailey§, A Geissbuhler‡, C Michel‖ and T Jones§

† Division of Nuclear Medicine, AZ-VUB, Brussels, Belgium
‡ Division of Nuclear Medicine, University Hospital, Geneva, Switzerland
§ MRC Cyclotron Unit, Hammersmith Hospital, London, UK
‖ Positron Tomography Laboratory, Louvain-la-Neuve, Belgium

Received 27 February 1991

Abstract. Prior to reconstruction, emission data from a multi-ring PET camera must be corrected (normalized) for variations in detector sensitivity. The appropriate correction coefficients are obtained by measuring the response of all coincidence lines to a calibrated source of activity (a blank scan). State-of-the-art cameras may contain up to a million such lines of response (LORs), and therefore around 400 million counts will be required to calibrate each LOR to a statistical accuracy of 5%. Alternatively, by modelling the LOR sensitivity as the product of the individual detector efficiencies and a geometrical factor, a calibration procedure has been proposed which requires the determination of only 6000 parameters from this same data set. A significant improvement in the statistical accuracy of the coefficients can therefore be expected. Recently, multi-ring scanners have been operated with the septa retracted, increasing the number of measured LORs by a factor of eight. The acquisition of the calibration data necessary to achieve adequate statistical accuracy then becomes prohibitive. We show that, by modelling the LOR sensitivity, it is possible, with certain approximations, to normalize a septa-retracted emission data set with good accuracy. The input to the model is a high statistics blank scan acquired with the septa extended, which offers a number of practical advantages.

1. Introduction

The calibration factors used to normalize emission data from a multi-ring PET camera are usually obtained by exposing each detector to the same (known) source of activity for the same period of time. Calibration factors are then determined to equalize the differing responses of the individual detectors. The calibration source is generally either a uniform plane of activity, or one or more rotating rods. The plane source is rotated into a number of different positions (typically six) during the calibration scan. The calibration scan is termed a blank scan because the data are acquired with only the calibration source in the field-of-view. The normalization factor for the coincidence line of response (LOR) between a given pair of detectors is then estimated as the ratio between the number of coincidences measured in that LOR during the blank scan, and the number of coincidences which would have been measured if the sensitivities of the detectors were uniform. This latter is estimated as the mean number of coincidences detected, averaged over all active LORs. The accuracy of this calibration technique, which will be referred to here as the standard technique, is limited by the statistical accuracy of the blank scan. In order to determine the normalization factors with an accuracy of 5%, an average of 400 coincidences must be acquired in each LOR. This implies a blank scan of about 300 million events for a 16-ring scanner such as the ECAT 953B/31 (Siemens/CTI, Knoxville, TN).

Casey and Hoffman (1986) proposed a technique to improve the statistical accuracy of the normalization factors obtained from a blank scan. Their approach is based on the fact that each detector is in coincidence with a number (typically 100-200) of opposing detectors and many LOR normalization factors may involve a common detector, i.e., they are not independent. Modelling these correlations enabled Casey and Hoffman to evaluate the normalization factor for a given pair of detectors by incorporating data from a large number of different measurements. The authors assumed that the sensitivity of an LOR is the product of the intrinsic efficiencies of the pair of detectors in coincidence. For the ECAT 953B/31, the number of parameters to be estimated from the blank scan data is thus reduced from about one million LORs to 6000 detector efficiencies, with a corresponding improvement in statistical accuracy. It is, however, evident that the effective accuracy of the normalization procedure will also depend on the validity of the model. Hoffman et al (1989) subsequently proposed an improved model which takes into account the dependence of an LOR sensitivity on the incidence angle of the gamma rays to the detector face. Since this angle depends upon the radial position of the coincidence line, an LOR sensitivity is obtained from the product of the two detector efficiencies and a radially-dependent geometrical factor. These geometrical factors can also be estimated from the blank scan data by, for example, averaging over all LORs with the same radial position. We will see later (section 2) that the exact solution requires taking geometric means. It is, of course, possible to envisage more elaborate models which take into account factors unique to the particular geometrical design of the scanner (Casey 1990).
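As a point of reference, the standard technique reduces to one scale factor per LOR taken directly from the blank scan. A minimal sketch (our own illustration, not code from the paper; the array layout and names are hypothetical):

```python
import numpy as np

def standard_norm_factors(blank):
    """Standard normalization: one factor per LOR, estimated directly from
    the blank scan counts. Entries equal to zero mark inactive LORs."""
    active = blank > 0
    # expected counts per LOR if all detector sensitivities were uniform:
    # the mean over all active LORs
    mean_counts = blank[active].mean()
    factors = np.zeros_like(blank, dtype=float)
    factors[active] = mean_counts / blank[active]
    return factors

# Toy example: four LORs with slightly different sensitivities
blank = np.array([[380.0, 420.0], [400.0, 400.0]])
factors = standard_norm_factors(blank)
# After correction, every active LOR reports the blank-scan mean
```

The statistical error of each factor is that of a single LOR's blank counts, which is exactly what motivates the variance-reduction model.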
However, increasingly marginal improvements to the model also increase the number of unknown parameters and therefore degrade statistical accuracy, unless some prior knowledge of these additional parameters is available. Prior knowledge could be obtained, for example, from a Monte Carlo simulation, or from an extremely high-statistics blank scan measured only once when the scanner becomes operational. As usual, there is a trade-off between accepting the possibility of systematic error without increasing the statistical error (i.e. applying no correction for non-uniform sensitivity), and correcting the systematic sensitivity variations, but at the risk of increasing the statistical noise in the emission scan. In the latter case, the standard method will be the most accurate approach, provided that a sufficiently high statistics blank scan is available. Recently, there has been increasing interest in the operation of multi-ring scanners with septa retracted (Thompson 1988, Dahlbom et al 1989, Townsend et al 1989a, b). In this volume, or 3D, acquisition mode, coincidences are acquired between detectors in any two rings, thus significantly increasing the scanner sensitivity compared to the usual acquisition with septa extended. For a 16-ring scanner, 256 sinograms are acquired in the septa-retracted mode, compared with 31 (one for each transaxial slice) with the septa extended. There is, therefore, an eight-fold increase in the number of LORs which have to be normalized, rendering the standard technique rather inappropriate for the calibration of a scanner operated in the septa-retracted mode. A 5% accuracy would now require a blank scan of about 3 × 10⁹ events. Although originally developed for septa-extended, 2D acquisition, the variance-reduction technique of Casey and Hoffman offers obvious advantages for the calibration of the 3D operational mode.
In addition, apart from the improvement in statistical accuracy discussed above, such a method has two further advantages. Firstly, the representation of the normalization coefficients as detector efficiencies and geometrical factors typically requires a reduction in storage space by a factor of 1000 compared to that needed for the coefficients themselves. Secondly, a more important advantage from a practical point of view is that a full set of 3D normalization coefficients can, in principle, be obtained from a 2D, septa-extended blank scan. This greatly simplifies the acquisition procedure, and obviates the need to provide two separate sets of normalization sources for use with septa extended and retracted. In the next section, we present the variance-reduction normalization technique for a single-ring scanner, using the model of Hoffman et al (1989), which includes a geometrical factor. We describe an algorithm to determine the individual efficiencies and geometrical factors which differs slightly from that proposed by Hoffman et al. Our algorithm is unbiased in the sense that it recovers the exact values of the parameters in the limit where the model is strictly valid and the blank scan data are noise free. This normalization technique is then extended to 3D, septa-retracted data acquisition. In section 3, the algorithm is applied to blank scan data acquired with a 16-ring ECAT 953B/31 neuroPET scanner with automatically retractable septa. The algorithm is validated by comparing the 3D normalization factors obtained from a 2D blank scan, acquired with septa extended, to the standard normalization factors obtained from a high statistics, 3D blank scan.

2. Theory of the normalization technique

We will first describe the theory of the variance-reducing normalization procedure for a single-ring scanner, which is equally applicable to a multi-ring scanner with septa extended, i.e., operated as a set of individual rings. We will then extend the method to a multi-ring scanner operated with the septa retracted, the aim being to use the same blank scan as for septa-extended normalization.

2.1. A single-ring scanner

We consider a single ring of $N$ detectors. From a blank scan, the number of events in the coincidence channel joining detectors $i$ and $j$ is denoted by $n_{ij}$. We assume that any correction for the geometry of the normalization source has already been applied. We further assume that any detector $i$ is in coincidence with $2K+1$ opposing detectors, denoted by $j = i + N/2 - K, \ldots, i + N/2 + K$. Since we have circular symmetry, all detector indices are to be taken modulo $N$ throughout, i.e., $j$ should be replaced by $j \bmod N$. The counts $n_{ij}$ and $n_{ji}$ are the same since they refer to the same LOR. Following Hoffman et al (1989), we assume that the mean (noise-free) value of $n_{ij}$ can be written:

$$\langle n_{ij}\rangle = \varepsilon_i\,\varepsilon_j\,g_{(i+N/2-j)}\,A \qquad (1)$$

where $\varepsilon_i$, $\varepsilon_j$ and $g_{(i+N/2-j)}$ are the individual detector efficiencies and geometrical factors, respectively. The constant $A$, which is proportional to both the duration of the blank scan and the source activity, will be incorporated into the geometrical factors $g_{(i+N/2-j)}$. Since we have $n_{ij} = n_{ji}$, the geometrical factors are symmetric, i.e., $g_{(i+N/2-j)} = g_{-(i+N/2-j)}$. Estimating the parameters $\varepsilon_i$ and $g_{(i+N/2-j)}$ from the blank scan data, equation (1) then yields low variance estimates of the $\langle n_{ij}\rangle$, which can be used to normalize the emission data. We have, therefore, to solve the system of equations

$$n_{ij} \simeq \varepsilon_i\,\varepsilon_j\,g_{(i+N/2-j)} \qquad (2)$$


which cannot, in general, be done since the number of equations is much larger than the number of unknown parameters and, as the data are noisy, the system of equations is usually inconsistent. One possibility would be to estimate the maximum likelihood (ML) solution, which is the set of parameters that maximizes the probability of obtaining the measurements $n_{ij}$, given the model of equation (1). Unfortunately, even assuming a simple Poisson noise model, the maximum likelihood criterion leads to a set of equations which is not at all straightforward to solve. Instead, we derive an algorithm which solves this set of equations (2) exactly, assuming consistent (i.e., noise-free) data. The algorithm can then be applied to inconsistent, noisy data to obtain an approximate solution. This solution is unbiased in the sense that it converges to the real solution in the ideal case where the blank scan data are noise-free and the model of equation (1) is exact. The exact solution of equation (2) is obtained by taking, for each radial position $k$, the product of all measured LORs:

$$\prod_{i=0}^{N-1} n_{i(i+N/2+k)} = \left(\prod_{i=0}^{N-1}\varepsilon_i\right)\left(\prod_{j=0}^{N-1}\varepsilon_j\right) g_k^N \qquad (3)$$

The reason for the product, as opposed to the sum, is that the two factors in parentheses on the left-hand side of equation (3) are equal and independent of $k$. Since the $g_k$ are in any case only defined up to a global multiplicative constant, we set these factors equal to unity to obtain

$$g_k = \left(\prod_{i=0}^{N-1} n_{i(i+N/2+k)}\right)^{1/N} = g_{-k} \qquad \text{for } k \neq 0. \qquad (4)$$
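In code, equation (4) is a geometric mean over the $N$ LORs at each radial offset. A sketch under an assumed $N \times N$ count-array layout (the low-count exclusion threshold here is illustrative):

```python
import numpy as np

def geometric_factors(n, K):
    """Geometrical factors g_k via equation (4): for each radial index k,
    the geometric mean of the counts n[i, (i + N/2 + k) mod N] over all i."""
    N = n.shape[0]
    g = {}
    for k in range(-K, K + 1):
        i = np.arange(N)
        counts = n[i, (i + N // 2 + k) % N].astype(float)
        # exclude low-count LORs (e.g. below 10% of the mean) to limit
        # numerical error from the logarithm
        good = counts > 0.1 * counts.mean()
        # geometric mean computed in the log domain
        g[k] = float(np.exp(np.log(counts[good]).mean()))
    return g
```

With noise-free data generated from equation (1) with unit efficiencies, the $g_k$ are recovered exactly; with non-unit efficiencies they are recovered up to the global multiplicative constant discussed in the text.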

In practice, LORs with fewer counts than some small threshold (e.g., 10% of the average LOR count) should be excluded from the calculation in order to minimize numerical errors. The blank scan measurements corrected for geometrical factors are then obtained from $m_{ij} = n_{ij}/g_{(i+N/2-j)}$, resulting in the set of equations

$$m_{ij} = \varepsilon_i\,\varepsilon_j \qquad (5)$$

which have to be solved for $\varepsilon_i$. The solution is obtained following the method proposed by Hoffman et al (1989), which we summarize here in order to introduce the notation required later in this section. The ring is subdivided into an even number $G$ of groups, each of $M$ detectors (i.e., $N = GM$). These groups need not have an actual physical meaning, although they could be defined according to the bucket or block structure of scanners such as the 953B/31, which is described in section 3. The group denoted by $A$ (where $A = 0, \ldots, G-1$) consists of all detectors with indices $i$ (or $j$) in the interval $(AM, AM+M-1)$. Let us consider first two groups, $A$ and $B$, such that each detector of group $A$ is in coincidence with each detector of group $B$. Summing over all detectors $j$ in group $B$, equation (5) becomes

$$\frac{1}{M}\sum_{j\in B} m_{ij} = \varepsilon_i\,\frac{1}{M}\sum_{j\in B}\varepsilon_j. \qquad (6)$$

Defining for group $B$ the mean detector efficiency $\langle\varepsilon\rangle_B$ and the average (corrected) counts $\langle m\rangle_{iB}$ by

$$\langle\varepsilon\rangle_B = \frac{1}{M}\sum_{j\in B}\varepsilon_j \qquad \langle m\rangle_{iB} = \frac{1}{M}\sum_{j\in B} m_{ij} \qquad (7)$$

equation (6) becomes

$$\langle m\rangle_{iB} = \varepsilon_i\,\langle\varepsilon\rangle_B \qquad (8a)$$

where it is important to note that $\langle\varepsilon\rangle_B$ is independent of $i$. By symmetry, summing over the detectors in group $A$, we find

$$\langle m\rangle_{jA} = \varepsilon_j\,\langle\varepsilon\rangle_A. \qquad (8b)$$

Combining equations (5), (8a) and (8b), and noting that

$$\langle m\rangle_{AB} \equiv \frac{1}{M^2}\sum_{i\in A}\sum_{j\in B} m_{ij} = \langle\varepsilon\rangle_A\,\langle\varepsilon\rangle_B \qquad (9)$$

we obtain for the averaged blank scan values for all LORs between group $A$ and group $B$:

$$\langle n_{ij}\rangle = \frac{\langle m\rangle_{iB}\,\langle m\rangle_{jA}}{\langle m\rangle_{AB}}\,g_{(i+N/2-j)}. \qquad (10)$$

The calculation is then repeated for all pairs of groups in coincidence to yield the appropriate coefficients for normalization. This is the method proposed by Hoffman et al (1989), with the exception that the geometrical factors are determined using geometric rather than arithmetic means in order to ensure an unbiased solution. This approach does not yield the individual detector efficiencies, although they could be obtained from equation (8) provided that the $\langle\varepsilon\rangle_A$ are known explicitly. Instead, this is avoided by using equation (10). However, as we shall see later, while this procedure is adequate for the calibration of a single-ring scanner, the calibration of 3D emission data using a septa-extended, 2D blank scan requires explicit knowledge of the individual detector efficiencies. In fact, Hoffman et al (1989) have already considered this problem in order to evaluate the normalization factors for LORs outside the field-of-view covered by the calibration source. They estimate the individual efficiencies from

$$\varepsilon_i = \langle m\rangle_{iB}\,/\,\langle\varepsilon\rangle_B \qquad (11)$$

using the fact that the denominator on the right-hand side is approximately independent of the detector index $i$ when a large number of efficiencies are averaged. This assumption relies on the fact that the variations of the individual efficiencies are purely random and that no systematic effects are present, due to, for example, the bucket geometry of the scanner. However, with an approach analogous to the determination of the geometrical factors using equation (3), it is nevertheless possible to obtain an exact solution which is insensitive to systematic variations of detector efficiencies. To do so, the group size $M$ is chosen such that any group $A$ is in coincidence with three opposing groups, i.e., all detectors in group $A$ are in coincidence with all detectors in groups $A+(G/2-1)$, $A+G/2$ and $A+(G/2+1)$. This condition is satisfied if $2M \le K$. Then, for any group $A = 0, \ldots, G-1$

$$\langle\varepsilon\rangle_A^2 = \langle m\rangle_{A(A+G/2)}\prod_{B=A}^{A+G/2-1}\frac{\langle m\rangle_{B(B+G/2+1)}}{\langle m\rangle_{(B+1)(B+G/2+1)}} \qquad (12)$$

where all group indices are to be taken mod $G$. Using this result, individual detector efficiencies can then be obtained from equation (8). A further reduction in variance is possible if the $\varepsilon_i$ (for $i \in A$) are estimated from

$$\varepsilon_i = \frac{\langle m\rangle_{i(A+G/2-1)} + \langle m\rangle_{i(A+G/2)} + \langle m\rangle_{i(A+G/2+1)}}{\langle\varepsilon\rangle_{A+G/2-1} + \langle\varepsilon\rangle_{A+G/2} + \langle\varepsilon\rangle_{A+G/2+1}}. \qquad (13)$$
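The group-average construction of equations (7)-(10) reduces to a few array means. A hedged sketch (array layout and group indexing are our own; `m` holds the geometry-corrected blank counts of equation (5), and `A`, `B` index two groups fully in coincidence):

```python
import numpy as np

def pairwise_efficiencies(m, A, B):
    """Low-variance estimates of eps_i * eps_j for i in group A, j in group B,
    combining equations (7)-(10): <m>_iB * <m>_jA / <m>_AB."""
    block = m[np.ix_(A, B)]
    m_iB = block.mean(axis=1)     # <m>_iB, one value per detector i in A
    m_jA = block.mean(axis=0)     # <m>_jA, one value per detector j in B
    m_AB = block.mean()           # <m>_AB = <eps>_A <eps>_B
    return np.outer(m_iB, m_jA) / m_AB
```

In the noise-free limit $m_{ij} = \varepsilon_i\varepsilon_j$ the estimate is exact; with noisy data each product $\varepsilon_i\varepsilon_j$ now draws on all $M^2$ counts of the group pair rather than on a single LOR.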

2.2. A multi-ring scanner operated without septa

A 16-ring scanner such as the ECAT 953B/31 operated with septa extended acquires 31 sinograms. This sinogram set consists of direct plane sinograms involving coincidences between detectors in the same ring, and cross plane sinograms involving coincidences between detectors in adjacent rings. In practice, the situation may be more complicated in that, in order to improve statistics, direct and cross planes can be combined in different ways (Townsend et al 1991), but for the purposes of this paper we assume that only coincidences between detectors within the same ring contribute to the direct sinogram. When the septa are retracted, coincidences between detectors in any two rings are acquired and collected into 256 sinograms. These data can be normalized using either the standard approach, or the variance-reduction method described in section 2.1, provided that a suitable 3D blank scan is available. However, as explained in section 1, this may not always be possible owing to the difficulty of acquiring sufficient statistics in a reasonable scan time. For example, in order to limit the singles rates on the detectors, a lower activity normalization source (or sources) has to be used with the septa retracted, and hence the acquisition of 3 × 10⁹ counts (for a 5% statistical error) could take up to 12 hours, which would be impractical on a regular basis. An alternative approach is to acquire a high-statistics, 2D blank scan with the septa extended and with the higher activity source. The technique of section 2.1 is then applied to the 16 direct plane sinograms to obtain the individual detector efficiencies and geometrical factors for each ring. The mean number of counts in an LOR between detector $i$ in ring $\alpha$ and detector $j$ in ring $\beta$ is then estimated from:

$$\langle n_{ij}^{\alpha\beta}\rangle = \varepsilon_i^{\alpha}\,\varepsilon_j^{\beta}\,g_{(i+N/2-j)}^{\alpha\beta}\,f_\alpha\,f_\beta \qquad (14)$$

This expression assumes that the modification to the effective crystal efficiencies caused by septa shadowing can be represented by a multiplicative constant $f_\alpha$ which is independent of the detector index $i$ within the ring. These scaling factors can be determined for each ring $\alpha$ by taking the ratio of the average values of the corresponding blank scan sinograms for that ring measured with the septa extended and retracted. In principle, the geometrical factors in equation (14) also depend on the axial component of the gamma ray incidence angle. However, since the maximum axial aperture of current multi-ring scanners does not exceed about 1°, we neglect this effect and approximate the geometrical factor by

$$g_k^{\alpha\beta} \simeq \left(g_k^{\alpha}\,g_k^{\beta}\right)^{1/2}. \qquad (15)$$

Using equation (4), the factors $g_k^{\alpha}$ are determined separately for each ring $\alpha$ from a septa-extended blank scan. Equations (4), (8), (12), (14) and (15) thus provide a method for generating a complete set of coefficients to normalize a 3D, septa-retracted emission scan. The method requires only a 2D, septa-extended blank scan as input, which can be acquired with high statistics in a reasonable scan time using a high activity calibration source.

2.3. The use of geometrical means

In section 2.1, we proposed the use of geometrical means to obtain an unbiased estimate of both the geometrical factors and the individual detector efficiencies. Using simulated data, we have tested this method, and compared it with both the standard method, and the approach suggested by Hoffman et al (1989). We simulated a ring with 256 detectors, each of which is in coincidence with 129 opposing detectors. The geometric factors are modelled as $g_k = 1 + 0.15\sin^2(\pi k/2K)$ and the efficiencies are assigned random values according to the expression $\varepsilon_i = (1 + 0.20R)\{1 + 0.20R + 0.30\sin[\pi(i \bmod 8)/8]\}$ where $R$ is a pseudo-random number uniformly distributed between ±1. By analogy with the block structure of the ECAT 953B/31, the first factor represents a variation of the global efficiency of a block of 8 detectors, while the second factor includes both a random fluctuation and a term representing a systematic sinusoidal variation of the detector efficiency within a block. The purpose of this simple model is to test the robustness of the algorithms described above. A noise-free blank scan was generated from values of $g_k$ and $\varepsilon_i$ given by the two expressions above. The exact normalization coefficients are then proportional to the reciprocal of these noise-free values. After adding Poisson noise to the blank scan to simulate actual measurements, normalization coefficients can be computed using the algorithms described in section 2.1. A complete set of normalization coefficients are generated for each of the three methods (the standard method, the method of Hoffman et al, and the method using geometric means) and compared with the exact coefficients. This procedure has been performed for two hundred blank sinograms, each containing 5 million counts, generated with different values of the individual detector efficiencies. The results are summarised in table 1, which shows the reduction in variance that can be achieved using the approach of Hoffman et al compared with the standard method.
Table 1. The average and maximum relative errors on the normalization coefficients.

                          Standard method   Hoffman et al (1989)   This method
Average relative error    4.7%              1.8%                   0.8%
Maximum relative error    12.9%             5.1%                   5.7%

The use of geometric means (equations (4) and (12)) leads to a further improvement, which is achieved at essentially no cost. However, when the total number of counts in the blank scan is small, the level of statistical noise may be more significant than any bias arising from the use of arithmetic rather than geometric means, i.e., this


improvement may be difficult to observe in practice. It is also important to note that this simulation was based on the assumption that the simple model in equation (1) is exact. As we shall see in section 3, the real situation is rather more complicated.
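A self-contained variant of this numerical experiment can be sketched as follows. It is simplified relative to the paper: the efficiencies are purely random (no systematic intra-block term), and the model-based estimate uses geometric means for the $g_k$ (equation (4)) but a plain arithmetic fan average for the efficiencies rather than the exact group algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, A = 256, 64, 400.0        # detectors, fan half-width, mean counts per LOR

i = np.arange(N)[:, None]
k = np.arange(-K, K + 1)[None, :]
j = (i + N // 2 + k) % N

g_true = 1 + 0.15 * np.sin(np.pi * k / (2 * K)) ** 2    # geometrical factors
eps_true = 1 + 0.2 * rng.uniform(-1, 1, N)              # detector efficiencies
clean = A * eps_true[i] * eps_true[j] * g_true          # equation (1), noise free
noisy = rng.poisson(clean).astype(float)                # simulated blank scan

# Standard method: every LOR is calibrated on its own blank counts
err_std = np.abs(noisy / clean - 1).mean()

# Model-based method: geometric means over i estimate A*g_k (equation (4));
# fan averages of the corrected counts then estimate the efficiencies
Ag = np.exp(np.log(np.maximum(noisy, 1.0)).mean(axis=0))
eps = (noisy / Ag).mean(axis=1)
model = Ag * eps[:, None] * eps[j]
model *= clean.mean() / model.mean()    # fix the free global scale
err_model = np.abs(model / clean - 1).mean()
# err_model comes out well below err_std, illustrating the variance reduction
```

Even this crude variant reproduces the qualitative result of table 1: fitting a few thousand parameters instead of one per LOR cuts the mean relative error several-fold at the same blank scan statistics.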

3. Application to the ECAT 953B scanner

The normalization algorithm described above has been applied to data acquired with the ECAT 953B/31. Blocks of BGO, approximately 5 cm × 5 cm in section (and 3 cm thick), are cut into an 8 × 8 array of crystals. Forty-eight such blocks, grouped in twelve buckets, are mounted on a circular support 76 cm in diameter to form 8 rings of detectors, with each ring containing 384 crystals. In the 953B/31, two such circular arrangements of blocks are mounted contiguously to form 16 rings of detectors covering an axial length of 10.8 cm. In section 3.3, we will refer to these two circular arrangements of blocks as C and D. The septa between the individual detector rings are 1 mm thick tungsten, 77 mm in length. The data are sorted into sinograms of 192 angular by 160 radial samples, with a radial sampling distance of 3.1 mm. The blank and transmission scans are performed using three rod sources mounted 120° apart on a rotating circular support with a diameter of 44 cm. Two blank scans were acquired, one of 256 sinograms with the septa retracted, and a second with the septa extended containing the 16 direct plane sinograms only. The mean counts per LOR were 449 and 443 for the septa-retracted and septa-extended scans, respectively. The 256 sinograms acquired with septa retracted were processed with the standard method to generate a set of 3D normalization coefficients. The geometrical factors and the individual detector efficiencies were estimated from the 16 sinograms acquired with septa extended using both arithmetic (Hoffman et al) and geometric means, and then the 3D normalization coefficients generated from equation (14). In order to compare these three sets of coefficients, a third blank scan (the test data set) was acquired with septa retracted (and an average of 453 counts per LOR) and then normalized using each set of coefficients.
As a measure of the effectiveness of the normalization procedure, the mean and variance were estimated within a region-of-interest placed on each sinogram before and after normalization. Ideally, after normalization, the test data sinograms should be uniform, and the magnitude of the deviations from uniformity can be used to compare the different normalization procedures.

3.1. Geometrical factors and detector efficiencies

The geometrical factors (averaged over all rings) obtained from the 2D, septa-extended blank scan are plotted in figure 1(a) as a function of radial position. Each radial element corresponds to a particular incidence angle of the photons detected within that LOR. The value at the centre (element 80) is for an LOR through the symmetry axis of the scanner and corresponds to normally incident photons on each detector. The curve in figure 1(a) agrees qualitatively with that obtained by Hoffman et al (1989, figure 2) for an ECAT 831/15. We have obtained the same curve, for a field-of-view of diameter 25 cm, from a blank scan with the septa retracted. Thus, the geometrical factors obtained from a septa-extended blank scan can be used to normalize an emission scan acquired with septa retracted.

Figure 1. (a) The geometrical factors as a function of radial element. The centre of the projection is at element 80 and the factors are averaged over all rings. For the 953B a projection element corresponds to 3.1 mm. (b) The 384 detector efficiencies estimated from the septa-extended blank scan using equation (12).

The efficiencies of the 384 detectors in ring 4 were estimated from the septa-extended blank scan using equation (12), and the result is shown in figure 1(b). The distribution has a full-width at half-maximum of about 14%. Similar distributions are observed for the other detector rings, and for efficiencies estimated from the septa-retracted blank scan.

3.2. The normalized sinogram uniformity

The effectiveness of the three different normalization techniques was assessed by evaluating the uniformity of each of the 256 sinograms in the test data set both before and after normalization. As a measure of uniformity, we used the standard deviation of the data values within a field-of-view of 20 cm, i.e., 65 radial samples for all 192 angular samples. The estimated standard deviation includes effects due not only to the normalization procedure, but also to Poisson noise in the test data (accounting for about 4.7%) and systematic variations from the rotating rod source geometry (accounting for 3.5%). The results are summarized in table 2; the standard deviations are given for each normalization method, averaged over all 256 sinograms in the test data set (second row) and over the 16 direct plane sinograms of the same test data set (first row).
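The uniformity measure is simply the relative standard deviation over the field-of-view; a trivial sketch (the mask geometry is illustrative):

```python
import numpy as np

def uniformity(sinogram, fov_mask):
    """Relative standard deviation of the sinogram values inside the
    field-of-view mask, used here as the figure of merit for normalization."""
    vals = sinogram[fov_mask]
    return float(vals.std() / vals.mean())

# 192 angular x 160 radial samples; keep the 65 central radial samples
fov = np.zeros((192, 160), dtype=bool)
fov[:, 47:112] = True
# a perfectly normalized (flat) test sinogram scores 0
```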

Table 2. Uniformity of the test data set before and after normalization.

No of sinograms   Before           Standard   Hoffman et al      This
in average        normalization    method     + equation (14)    algorithm
16                23.0%            8.6%       10.4%              9.2%
256               23.3%            8.5%       11.3%              10.9%
It is evident from these results that a 3D scan can be adequately normalized using a septa-extended blank scan, with only slightly reduced accuracy compared with the standard method. For this study, the septa-retracted blank scan used for normalization had approximately the same number of counts per LOR as the septa-extended blank. Thus the coefficients from the standard method have been estimated effectively using 16 times more counts than the other two methods, offsetting to some extent the gain to be expected from the two variance reduction techniques. But the loss of accuracy observed with these techniques is also due to various systematic effects, including: (i) effects not incorporated into the model of equation (1) for a single ring, e.g., a possible dependence of the geometrical factors on the position in a block of detectors, (ii) normalizing a septa-retracted scan with coefficients from a septa-extended scan, despite the fact that the modification of the detector efficiencies by the presence of the septa may not be perfectly uniform, and (iii) the application of equation (14) to cross plane sinograms at all angles, neglecting the axial component of the gamma ray incidence angle. The consequence of the extension to cross plane sinograms is seen by comparing the two rows in table 2. For the algorithm presented in this paper, the mean standard deviation increases from 9.2% to 10.9% when cross plane sinograms are included in the average. Finally, note that the average standard deviation for the coefficients obtained with the method of Hoffman et al is slightly larger than for our algorithm; the small difference indicates that biases resulting from the use of arithmetic means are less important than the other systematic effects.

3.3. Relative sinogram normalization

The results in table 2 reflect individual sinogram uniformity, i.e. the spread of data values about the mean for each sinogram. The aim of the calibration procedure, however, is also to correctly normalize the sinograms with respect to each other, i.e., for the test data set the sinogram mean values should also have a small spread. The mean value for each of the 256 sinograms was computed for the same region-of-interest as in section 3.2, and plotted as shown in figure 2(a) as a function of sinogram index. Sinograms with indices in the range 0 to 63 contain LORs between detectors in the first ring (C) of blocks, and sinograms with indices in the range 192 to 255 contain LORs in the second ring (D) of blocks. Sinograms with indices in the range 64 to 191 contain LORs which cross between C and D. The detector sensitivity within a block is known to be a maximum at the centre of the block and a minimum at the edges. Sinograms with indices around 0, 64, 128, 192 and 255 all involve at least one ring of detectors at or near the edge of a block, while sinograms around 32, 96, 160 and 224 involve detectors in the centre of a block. This broadly explains the behaviour and structure seen in figure 2(a). The relative standard deviation on these mean values is 14%.

Figure 2. (a) The mean value of each sinogram in the test data set before normalization, estimated from a 20 cm field-of-view and plotted as a function of sinogram index. The relative standard deviation is 14%. (b) The mean value of each sinogram after normalization with coefficients obtained from the standard method, plotted as a function of sinogram index. The relative standard deviation is 0.3%. (c) The mean value of each sinogram after normalization with coefficients obtained from the method presented in this paper, plotted as a function of sinogram index. The relative standard deviation is 3.1%.

M Defrise et a /

950

The corresponding mean values after normalization with the standard method and with the method presented in section 2 are shown in figures 2(b) and 2(c). The relative standard deviations are now 0.3% and 3.1%, respectively. The increased variance in figure 2(c) is a consequence of using a 2D, septa-extended blank scan to normalize a 3D, septa-retracted emission scan. However, the dispersion of values seen in figure 2(c) is larger for sinograms involving blocks from C than from D. The relative standard deviation for sinograms from D is only 1.1%, while that from C is 2.3%. The reason for this difference is unknown, although it might indicate, for example, a timing problem involving blocks in C only. Such effects are not, of course, modelled in equations (1) and (14). Note that this normalization procedure might, in itself, be a way of detecting such problems. The relative standard deviation for the sinograms containing the cross LORs is 4%, suggesting that the dispersion of the mean values is due not only to the problems with blocks from C but also to the increasing axial incidence angle. This is seen more clearly in figure 3 where the same data as in figure 2(c) has been plotted as a function of the ring index difference $(\alpha - \beta)$. The observed correlation is due to the fact that, in equation (14), we do not take into account the axial incidence angle of the photons. The geometry factors $g_k^{\alpha\beta}$ take into account only the incidence angle in the transaxial plane. However, the shape of this correlation has a qualitative similarity to the behaviour of the $g_k^{\alpha\beta}$ in the central part of figure 1(a). This suggests that, since the detector blocks are structured similarly in both the axial and transaxial directions, it should in principle be possible to estimate appropriate axial geometrical correction factors and incorporate them in equation (14), thereby correcting, at least partially, for the correlation seen in figure 3.
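Such an axial correction would be a small extension of equation (14). As a purely hypothetical sketch (the axial factor `g_ax`, indexed by the ring difference, is our own illustration and not a quantity estimated in the paper):

```python
import numpy as np

def expected_counts(eps_ai, eps_bj, g_trans, f_a, f_b, g_ax, ring_diff):
    """Equation (14) augmented with a hypothetical axial geometrical factor
    g_ax[|ring_diff|], by analogy with the transaxial factors g_k."""
    return eps_ai * eps_bj * g_trans * f_a * f_b * g_ax[abs(ring_diff)]

# Toy axial profile over ring differences 0..15, mimicking the slow
# roll-off suggested by figure 3
g_ax = 1 + 0.05 * np.cos(np.pi * np.arange(16) / 15)
```

The axial profile itself would have to be estimated from a septa-retracted measurement, or assumed from the block geometry.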

[Figure 3 plot: mean sinogram value (vertical axis, 400 to 540) against ring index difference (horizontal axis, −20 to 20).]

Figure 3. The same data as in figure 2(c) plotted as a function of the ring index difference (α − β).
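The trend in figure 3 is obtained by grouping the per-sinogram mean values by ring index difference. A minimal sketch of that grouping, with invented ring indices and a synthetic quadratic dip standing in for the real data, might look like:

```python
import numpy as np

def mean_by_ring_difference(values, ring_a, ring_b):
    """Average the per-sinogram mean values by ring index difference.

    values[k]             mean of sinogram k (as in figure 2(c))
    ring_a[k], ring_b[k]  the two ring indices defining sinogram k
    Returns the sorted ring differences and the average value at each.
    """
    diff = ring_a - ring_b
    uniq = np.unique(diff)
    avg = np.array([values[diff == d].mean() for d in uniq])
    return uniq, avg

# Illustrative: 16 rings, all ring pairs, with a dip that grows
# with the axial angle, mimicking the shape of figure 3.
rings = np.arange(16)
a, b = np.meshgrid(rings, rings)
a, b = a.ravel(), b.ravel()
vals = 540.0 - 0.3 * (a - b) ** 2
d, m = mean_by_ring_difference(vals, a, b)
```

An axial geometrical correction factor of the kind suggested above would amount to dividing each sinogram's normalization coefficients by the curve `m`, flattening the profile.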

4. Discussion and conclusions

We have shown that it is possible to improve and extend the normalization technique proposed by Casey and Hoffman (1986) and Hoffman et al (1989). Their technique models LOR sensitivity as the product of the individual coincident detector efficiencies and a geometrical factor which accounts for the gamma-ray incidence angle. This approach improves the statistical accuracy of the normalization coefficients compared with the standard method. We have shown with a simple simulation that, by evaluating the geometrical factors and detector efficiencies using the geometric rather than the arithmetic mean, unbiased estimates of the parameters can be obtained which result in a further improvement in the statistical accuracy of the normalization coefficients. However, in practice, this potential improvement is offset by LOR-dependent, systematic effects not described by the model, such as timing differences between LORs, or geometrical factors which vary across a block (Casey 1990). The importance of systematic errors relative to statistical accuracy will depend upon the number of coincidences acquired per LOR in the blank scan. The possibility with current PET cameras of acquiring up to 1000 counts per LOR for a normalization scan explains why the standard method is still the method of choice for most PET facilities.

The situation is different when the camera is operated with the septa retracted. The present trend with neuro-PET cameras such as the 953B is to interchange between the two modes of operation depending on the activity in the field-of-view, in such a way as to optimize the useful count rate, i.e. the true count rate, corrected for randoms and scatter. This approach may even require changing from septa-extended to septa-retracted mode between two time frames in a dynamic scan. However, it becomes increasingly difficult to acquire enough counts to provide septa-retracted normalization coefficients with the same statistical accuracy as those generated for septa-extended operation. The model described in this paper then becomes an attractive proposition. Unfortunately, as we have seen, the good statistical accuracy resulting from this approach is offset by an increased potential for introducing systematic errors which can be avoided by the standard method. These include detector efficiency changes caused by septa retraction and the influence of the axial component of the gamma-ray incidence angle, neither of which is taken into account by the model presented here. The conclusions from section 3 indicate that, when the average counts per sinogram are comparable for the blank scans acquired with septa extended and retracted, the systematic errors from the model result in reduced accuracy of the normalization coefficients compared with the standard method. We conclude that, even though the systematic errors could be reduced further by a simple correction for the axial component of the incidence angle (figure 3), it will be difficult to equal the performance of the standard method. However, in practice it will not be possible to acquire on a regular basis the total counts in excess of 3000 million that are required to achieve 5% or better accuracy on the normalization coefficients for septa-retracted operation, and hence small systematic errors resulting from the use of this model can be tolerated.
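The core of the factored model can be sketched as follows: given blank-scan counts for every detector pair, per-detector efficiencies are estimated from geometric means of the counts (i.e. arithmetic means of their logarithms), the estimator the simulation above found to be unbiased. This sketch drops the geometrical factors and uses a single full ring of detectors for brevity, so it is an illustration of the principle rather than the paper's exact equations (1) and (14).

```python
import numpy as np

def estimate_efficiencies(counts):
    """Estimate detector efficiencies from a blank scan.

    counts[i, j] is modelled as eps_i * eps_j for i != j
    (geometrical factors omitted for brevity).  Averaging the
    log-counts over each detector's fan of partners is the
    geometric-mean estimate; averaging the counts themselves
    (the arithmetic mean) would bias the result.
    """
    n = counts.shape[0]
    off = ~np.eye(n, dtype=bool)              # exclude i == j "pairs"
    logc = np.log(counts) * off
    fan = logc.sum(axis=1) / off.sum(axis=1)  # log eps_i + mean_j log eps_j
    log_eps = fan - fan.mean() / 2.0          # remove the common offset
    return np.exp(log_eps)

# Noise-free check: recover a known 10% efficiency spread
# (exact only up to a small O(1/n) term from the finite fan).
rng = np.random.default_rng(1)
eps_true = 1.0 + 0.1 * rng.standard_normal(64)
eps_hat = estimate_efficiencies(np.outer(eps_true, eps_true))
```

The normalization coefficient for the LOR joining detectors i and j is then 1/(eps_hat[i] * eps_hat[j]); with only n efficiencies fitted from n(n−1)/2 LORs, each parameter pools counts from an entire fan, which is the source of the statistical gain over the standard method.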

Acknowledgments

This work is supported by the Swiss Commission for Encouragement of Scientific Research (CERS), Grant Number 1922.1. MD is a research associate of the National Fund for Scientific Research (Belgium).

Résumé (translated from the French). A normalization technique for 3D PET data.

The emission scan measured by a multi-ring positron tomograph must be normalized to correct for the effect of variations in detector sensitivity. The correction factors are obtained by measuring the response of all the coincidence channels (lines of response) to a calibrated source. Current cameras may contain more than a million such lines of response (LORs), and around 400 million coincidences must be measured to calibrate each LOR to a precision of 5%. An alternative solution is to model the sensitivity of the lines of response as the product of the efficiencies of the two detectors in coincidence and a geometrical factor. This normalization technique requires the determination of only 6000 parameters from the calibration scan, and allows a significant improvement in the statistical precision with which the normalization coefficients are calculated. Recently, multi-ring tomographs have been operated without septa. The increase in the number of lines of response to be calibrated makes the classical normalization technique inadequate, since extremely long calibration scans would be needed to obtain sufficient precision. In this article we show that it is possible, with certain approximations, to calibrate these septa-less data using a calibration scan measured with the septa in place. The precision and the advantages of this technique are discussed in the article.

Zusammenfassung (translated from the German). Normalization technique for 3D PET data.

Before reconstruction, emission data from a multi-ring PET camera must be corrected (normalized) for variations in detector sensitivity. The corresponding correction coefficients are obtained by measuring the response of all coincidence lines to a calibrated radiation source (blank scan). Modern cameras may contain up to a million such lines of response (LORs); therefore about 400 million counts are needed to calibrate each LOR to a statistical accuracy of 5%. Alternatively, a calibration procedure has been proposed which models the LOR sensitivities as the product of the individual detector efficiencies and a geometrical factor, and which requires the determination of only 6000 parameters from the same data set. A substantial improvement in the statistical accuracy of the coefficients can therefore be expected. Recently, multi-ring scanners have been operated with retractable septa, whereby the number of measured LORs increases by a factor of eight. The acquisition of the calibration data needed to reach adequate statistical accuracy then becomes prohibitive. It is shown that, by modelling the LOR sensitivities, it is possible, with certain approximations, to normalize an emission data set acquired with the septa retracted with good accuracy. The input to the model is a high-statistics blank scan acquired with the septa extended, which offers a number of practical advantages.

References

Casey M E 1990 Private communication
Casey M E and Hoffman E J 1986 Quantitation in positron emission tomography: 7. A technique to reduce noise in accidental coincidence measurements and coincidence efficiency calibration J. Comput. Assist. Tomogr. 10 845-50
Dahlbom M, Eriksson L, Rosenqvist G and Bohm C 1989 A study of the possibility of using multi-slice PET systems for 3D imaging IEEE Trans. Nucl. Sci. NS-36 1066-71
Hoffman E J, Guerrero T M, Germano G, Digby W M and Dahlbom M 1989 PET system calibration and corrections for quantitative and spatially accurate images IEEE Trans. Nucl. Sci. NS-36 1108-12
Thompson C 1988 The effect of collimation on scatter fraction in PET IEEE Trans. Nucl. Sci. NS-36 598-603
Townsend D W, Spinks T, Jones T, Geissbuhler A, Defrise M, Gilardi M C and Heather J 1989a Three-dimensional reconstruction of PET data from a multi-ring camera IEEE Trans. Nucl. Sci. NS-36 1056-65
—— 1989b Aspects of three-dimensional reconstruction for a multi-ring positron tomograph Eur. J. Nucl. Med. 15 741-5
Townsend D W, Geissbuhler A, Defrise M, Hoffman E J, Spinks T J, Bailey D, Gilardi M C and Jones T 1991 Fully three-dimensional reconstruction for a PET camera with retractable septa IEEE Trans. Med. Imaging at press
