Computerized Medical Imaging and Graphics, Vol. 16, No. 4, pp. 237-251, 1992
Printed in the U.S.A. All rights reserved.
0895-6111/92 $5.00 + .00
Copyright © 1992 Pergamon Press Ltd.
INTERACTIVE DISPLAY OF VOLUMETRIC DATA BY FAST FOURIER PROJECTION

S. Dunne,*† S. Napel,‡ and B. Rutt§,‖

*Department of Computer Science, University of Western Ontario, London, Ontario, Canada, ‡Radiological Sciences Laboratory, Stanford University School of Medicine, Stanford, CA, §Department of Diagnostic Radiology, University of Western Ontario, London, Ontario, Canada, and ‖Tom Lawson Family Imaging Research Laboratory, John P. Robarts Research Institute, London, Ontario, Canada

(Received 15 August 1991; Revised 30 December 1991)

Abstract--This article describes a new algorithm for reprojection of volumetric data, called Fast Fourier Projection (FFP), which is one to two orders of magnitude faster than conventional methods such as ray casting. The theoretical basis of the new method is developed in a unified mathematical framework encompassing slice imaging and conventional volumetric reprojection methods. Software implementation is discussed in detail. The article closes with an account of experience with a prototype FFP implementation, and applications of the technique in medical visualization.

Key Words: 3D, Projection, Reprojection, Display, Volumetric, Interactive, Visualization

INTRODUCTION

A fundamental problem in 3D medical imaging is that of displaying volumetric (3D sampled) data on 2D screens. A great deal of work has been done on this problem to date, a thorough review of which is beyond the scope of this article; interested readers should see References (1) and (2). Goldwasser et al. (3) give a summary of techniques, noting that an animated display conveys 3D structures more effectively than a static one, and that the ideal display would be generated in real time, allowing the viewer to turn and tilt the displayed structures at will. 3D display techniques can be divided into three very broad classes:

1. Display of 2D cross-sectional slices, either the originally acquired slices or arbitrarily oriented slices computed from the originals by interpolation.
2. Shaded surface display (also called surface rendering), based on segmentation of objects of interest in the 3D data set, and a model of their optical density, colour, surface characteristics, etc.
3. Reprojection (also called volume rendering), which treats each sample in the 3D data set as a small volume element (voxel) with chosen optical properties (e.g., colour, opacity).

This article deals exclusively with reprojection, an approach pioneered by Harris et al. (4). The reprojection technique most commonly used at present is ray casting (5), wherein the intensity of each image pixel is computed as some function of the input voxel values along an imaginary ray between the pixel and the viewer's eye. Summing voxel intensities along each ray, for example, yields a transparent reprojection (also called an integral or density-sum reprojection) similar to an X-ray image (see Fig. 1, lower left, for an example). In this article, we present a new algorithm which computes transparent reprojections more efficiently than ray casting. Figure 1 shows the relationship between the two methods. While ray casting and other conventional algorithms compute a reprojection (bottom left in the figure) directly from the 3D array A of image samples (top left), our approach begins by computing the 3D discrete Fourier transform (DFT) of A, yielding a new array B (top right). A reprojection at any angle may then be computed according to the Fourier projection-slice theorem (6, 7), by resampling B at points on a plane through the zero-frequency point (bottom right), and then applying a 2D inverse DFT to the resulting 2D array. We call this approach Fast Fourier Projection or FFP. FFP's principal advantage is its speed, which comes from the fact that it computes a reprojection from only part of the array B (the resampled points), rather than the entire array A.

†Correspondence should be addressed to S. Dunne, Department of Computer Science, University of Western Ontario, London, Ontario, Canada.
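To make the density-sum idea concrete, the following minimal sketch (ours, not the authors' code; the array name and shape are illustrative only) computes the simplest, axis-aligned transparent reprojection, where the view direction coincides with one array axis and each output pixel is just the sum of the voxel values behind it:

import numpy as np

# A: 3D array of image samples, e.g. a stack of 58 slices of 256 x 256 voxels (assumed shape)
A = np.random.rand(256, 256, 58).astype(np.float32)

# Transparent (density-sum) reprojection along the third axis:
# each output pixel is the sum of the voxel values along the ray behind it.
projection = A.sum(axis=2)          # shape (256, 256)

Oblique view directions require resampling the volume along tilted rays or, as developed below, resampling the Fourier transform of A on a tilted plane through the k-space origin.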


Fig. 1. In FFP, a 3D input array A is Fourier-transformed (3-D DFT), yielding a k-space array B. To produce a projection view, B is resampled at points on a plane through the k-space origin, yielding the 2D array C, which is inverse Fourier-transformed (2-D IDFT) to produce the array D of image samples.

Its principal limitation is that it produces only transparent reprojections. Ray casting and related methods are more general. This article expands on our initial presentation of FFP, given at the 1990 Conference on Visualization in Biomedical Computing (8), adding new results and giving a more complete account of both the theory and implementation of the technique. A companion paper (9) discusses application of FFP to data from magnetic resonance imaging (MRI) scanners, and also gives an expanded discussion of the types of image artifacts which can arise from the method.

THEORY

Notation
We use the following typographical conventions: lowercase italics for scalars and functions, uppercase italics for geometric objects (e.g., points, planes) and Fourier transforms of functions, lowercase bold for vectors, uppercase bold for matrices, uppercase roman for arrays. The symbol i denotes the imaginary operator (i.e., i = \sqrt{-1}), and the superscript * the complex conjugate (i.e., if x = a + ib, x* = a - ib). ℝ denotes the set of real numbers, ℤ the integers. The notation M^T denotes the transpose of matrix M, whose column vectors are the row vectors of M. The notation v^T denotes a column vector (by default, vectors are assumed to be row vectors). The inverse of a matrix M is denoted as usual by M^{-1}. We also introduce a new notation, M°, meaning (M^{-1})^T or (M^T)^{-1}. (The two are equivalent; see (10).)
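As a quick numerical check of the equivalence (M^{-1})^T = (M^T)^{-1} underlying the notation M° (a small sketch of ours, not part of the original text), the two forms can be compared for an arbitrary invertible matrix:

import numpy as np

M = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 4.0]])

M_ring_a = np.linalg.inv(M).T    # (M^{-1})^T
M_ring_b = np.linalg.inv(M.T)    # (M^T)^{-1}
assert np.allclose(M_ring_a, M_ring_b)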


In the following section, we define three coordinate systems, called the absolute, sampling, and resampling systems. The system to which a particular coordinate variable refers is indicated through the use of primes, for example x, x', x'', respectively, for the three systems. Most integral formulas are shown without limits for clarity; in such cases, the limits ±∞ are implied.

Cartesian coordinate systems
Imposing a Cartesian coordinate system on 3D physical space provides a means to uniquely identify points using vectors in ℝ³. (In the plane, vectors in ℝ² suffice.) The first step is to choose three mutually orthogonal coordinate or axis directions, which we shall call the X_1, X_2, and X_3 directions, and choose a unit of distance measure, for example, inch, millimeter, etc. Any vector x = (x_1, x_2, x_3) may now be considered to uniquely specify a displacement of x_1 units in the X_1 direction, x_2 units in the X_2 direction, and x_3 units in the X_3 direction. If we now choose a fixed point of origin O, we may uniquely identify any point X by the vector specifying how to get from O to X. The components x_1, x_2, x_3 of this displacement vector are called the coordinates of the point X, and x may be called the coordinate vector of X. In this article, we consider situations where it is appropriate to define more than one coordinate system, so we shall use the term absolute coordinate system to denote the system just defined. We shall call X_1, X_2, and X_3 the absolute coordinate directions, the abovementioned unit of distance measure the absolute unit, x_1, x_2, x_3 the absolute coordinates of the point X, and x the absolute coordinate vector of X.

3D imaging systems usually measure some physical quantity on a regular lattice of points, equally spaced in three orthogonal directions which, without loss of generality, may be taken to be the three axis directions of our absolute coordinate system. Furthermore, we may assume that there is one sample point at the origin O and then one every Δx'_1 distance units in the X_1 direction, one every Δx'_2 units in the X_2 direction, and one every Δx'_3 units in the X_3 direction.¹ Identifying the sample points using vectors of integer coordinates amounts to using a new coordinate system, which we shall call the sampling coordinate system. This system has the same origin O and axis directions as the absolute system, but its units of measure are the distances between adjacent samples; these will generally be different for the three axis directions.

¹ Δx'_1, Δx'_2, and Δx'_3 are measured in absolute units, and hence the notation Δx_1, Δx_2, Δx_3 might be more natural. The primes serve to distinguish these intersample distances from others to be introduced shortly.

It is often useful to resample the image at points on an arbitrarily oriented lattice. To formalize this


process, we introduce a third coordinate system, which we call the resampling coordinate system, having the same origin O as the other two systems but whose mutually orthogonal axis directions X''_1, X''_2, X''_3 are rotated with respect to the absolute coordinate directions. The resampling lattice is taken to be aligned with the X''_1X''_2X''_3 axis directions such that there is a sample point at O and then every Δx''_1 absolute units in the X''_1 direction, every Δx''_2 absolute units in the X''_2 direction, and every Δx''_3 absolute units in the X''_3 direction. These intersample distances are the natural units of measure for the new coordinate system.

Coordinate transformations
We have defined three coordinate systems having a common origin O. Associated with any point X there is a vector x = (x_1, x_2, x_3) of absolute coordinates, a vector x' = (x'_1, x'_2, x'_3) of sampling coordinates, and a vector x'' = (x''_1, x''_2, x''_3) of resampling coordinates. The coordinate vectors x, x', and x'' are related through linear transformations, expressed as 3 × 3 matrices. One of the properties of such transformations is that, when a linear transformation matrix T is used to effect a change of variables for integration, the Jacobian determinant involved (usually called simply the Jacobian) is the ordinary determinant det T.

A vector x' = (x'_1, x'_2, x'_3) of sampling coordinates may be converted to the corresponding vector x = (x_1, x_2, x_3) of absolute coordinates through the formula

(x_1, x_2, x_3) = (x'_1 Δx'_1, x'_2 Δx'_2, x'_3 Δx'_3).

If we let

S_S = \begin{pmatrix} \Delta x'_1 & 0 & 0 \\ 0 & \Delta x'_2 & 0 \\ 0 & 0 & \Delta x'_3 \end{pmatrix},

the formula becomes

x = x' S_S.   (1)

We can do the reverse conversion similarly:

x' = x S_S^{-1},   (2)

where

S_S^{-1} = \begin{pmatrix} 1/\Delta x'_1 & 0 & 0 \\ 0 & 1/\Delta x'_2 & 0 \\ 0 & 0 & 1/\Delta x'_3 \end{pmatrix}.

S_S and its inverse are dilation or scaling matrices. Any scaling matrix is diagonal, meaning that (a) its inverse may be obtained as above, by taking reciprocals of its diagonal elements, and (b) its determinant is simply the product of its diagonal elements. A scaling matrix is also symmetric, meaning that it equals its own transpose. [See (10) for proofs of the matrix properties mentioned in this section.]
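As a small illustration of eqs. (1) and (2) (our sketch; the voxel spacings are assumed values, not taken from the article), a sampling coordinate vector is converted to absolute units by the diagonal scaling matrix S_S and back again by its inverse:

import numpy as np

dx = np.array([0.9375, 0.9375, 3.0])     # assumed intersample distances, in mm
S_S = np.diag(dx)                        # scaling matrix of eq. (1)

x_prime = np.array([10.0, 20.0, 5.0])    # sampling coordinates (voxel indices)
x = x_prime @ S_S                        # absolute coordinates (mm), eq. (1)
x_prime_back = x @ np.linalg.inv(S_S)    # eq. (2); recovers the original indices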

A vector x'' = (x''_1, x''_2, x''_3) of resampling coordinates may be scaled to absolute distance units by the scaling matrix

S_R = \begin{pmatrix} \Delta x''_1 & 0 & 0 \\ 0 & \Delta x''_2 & 0 \\ 0 & 0 & \Delta x''_3 \end{pmatrix},

but the distances are still measured in the X''_1X''_2X''_3 directions. To convert to absolute coordinates we require a further transformation, expressed as a rotation matrix R, that is,

x = x'' S_R R.   (3)

Any rotation matrix is orthogonal, meaning that (a) its inverse equals its transpose, and (b) its determinant is 1. Note also that any product of rotation matrices is itself a rotation matrix. Rotation matrices may be computed in a number of ways [see (11) for a discussion]. In our prototype FFP program, a rotation matrix R is computed as the product

R = R_1(\alpha) R_2(\beta) R_3(\gamma),   (4)

where \alpha, \beta, and \gamma are three rotation angles selected interactively by the user, and where

R_1(\alpha) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & \sin\alpha \\ 0 & -\sin\alpha & \cos\alpha \end{pmatrix}

effects a rotation through an angle \alpha about the first coordinate axis,

R_2(\beta) = \begin{pmatrix} \cos\beta & 0 & -\sin\beta \\ 0 & 1 & 0 \\ \sin\beta & 0 & \cos\beta \end{pmatrix}

effects a rotation through an angle \beta about the second axis, and

R_3(\gamma) = \begin{pmatrix} \cos\gamma & \sin\gamma & 0 \\ -\sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix}

effects a rotation through an angle \gamma about the third axis.

Substituting for x from eq. (3) into eq. (2) yields a formula to convert resampling coordinates directly to sampling coordinates:

x' = x'' S_R R S_S^{-1}.   (5)

Let us define the composite matrix T,

T = S_R R S_S^{-1}.   (6)

Then eq. (5) becomes

x' = x'' T.   (7)

The reverse conversion, using the relation

x'' = x' T^{-1},   (8)

requires T^{-1}, which is found using some of the abovementioned properties of scaling and rotation matrices:

T^{-1} = (S_R R S_S^{-1})^{-1} = S_S R^{-1} S_R^{-1} = S_S R^T S_R^{-1}.

Image functions, sampling, resampling
Let us say that the physical quantity measured by an imaging system is given by a function h, called the image function, such that the value measured at a point X having absolute coordinate vector x is h(x). In other words, given the absolute coordinates of a point X, we can apply the function h to find the value of the image function at X. Let us define the function h' to work similarly for sampling coordinates: h'(x') = h(x). Substituting for x from eq. (1) yields

h'(x') = h(x' S_S),

which expresses how h' could be computed if h were known. However, in general it is h' which is known. The imaging system produces a 3D array A of sample values, which in effect tabulates h'; that is, for integer-valued x'_1, x'_2, x'_3, A[x'_1, x'_2, x'_3] = h'(x'). The value of h' (and hence of h) may be approximated between sample points using various interpolation techniques, which we shall consider later. For the moment, we may treat both h and h' as continuous functions defined everywhere in ℝ³. The key point is that computations expressed in terms of h' are easier to implement in a computer program, because arguments to h' correspond directly to indices of the sample array A.

Let us also define a second relative h'' of the image function h whose argument is a vector of resampling coordinates: h''(x'') = h'(x') = h(x). Substituting for x' from eq. (7) yields

h''(x'') = h'(x'' T),   (9)

by which h'' may be computed directly from h'.
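The following sketch (our illustration, not the authors' TAAC-1 code) assembles the composite matrix T of eq. (6) from the rotation matrices of eq. (4) and the two scaling matrices, and evaluates h'' via eq. (9) by interpolating in the sample array; the spacings and angles are assumed values, and scipy.ndimage.map_coordinates is used purely for brevity as the tri-linear interpolator:

import numpy as np
from scipy.ndimage import map_coordinates

def R1(a):                                    # eq. (4): rotation about the first axis
    return np.array([[1, 0, 0],
                     [0, np.cos(a), np.sin(a)],
                     [0, -np.sin(a), np.cos(a)]])

def R2(b):                                    # rotation about the second axis
    return np.array([[np.cos(b), 0, -np.sin(b)],
                     [0, 1, 0],
                     [np.sin(b), 0, np.cos(b)]])

def R3(g):                                    # rotation about the third axis
    return np.array([[np.cos(g), np.sin(g), 0],
                     [-np.sin(g), np.cos(g), 0],
                     [0, 0, 1]])

S_S = np.diag([1.0, 1.0, 3.0])                # assumed sampling intersample distances
S_R = np.diag([1.0, 1.0, 1.0])                # assumed resampling intersample distances
alpha, beta, gamma = np.radians([30.0, 0.0, 45.0])   # view angles chosen by the user

R = R1(alpha) @ R2(beta) @ R3(gamma)          # eq. (4)
T = S_R @ R @ np.linalg.inv(S_S)              # eq. (6)

A = np.random.rand(64, 64, 32)                # stand-in for the sample array tabulating h'

def h_double_prime(x_dp):
    # eq. (9): h''(x'') = h'(x'' T); x_dp is an (n, 3) array of resampling coordinates.
    x_p = x_dp @ T                            # sampling coordinates, eq. (7)
    # map_coordinates indexes from the array corner; a real implementation must also
    # account for the centred-origin convention used in the article.
    return map_coordinates(A, x_p.T, order=1, mode='constant', cval=0.0)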

Note that we shall use vector symbols and the corresponding lists of components interchangeably as arguments to these functions; that is, h(x) is considered equivalent to h(x_1, x_2, x_3).

Projections
There are many ways to view the 3D image function h. For example, the familiar slice display is the restriction of h to points on some plane P, which we denote \mathcal{S}_{h,P}. We can say without loss of generality that P is parallel to the X''_1X''_2 plane of our resampling coordinate system, say at x''_3 = c (constant), and then define the slice as the 2D function

\mathcal{S}_{h,P}(x''_1, x''_2) = h''(x''_1, x''_2, c) = h''(x'')\big|_{x''_3 = c}.

Practical computation of \mathcal{S}_{h,P} amounts to resampling the image function on a 2D lattice of points in the plane P; each point corresponds to a pixel in the resulting image. To do this computation, we need first to express it in terms of h', as noted above. Substituting for h''(x'') from eq. (9) we get

\mathcal{S}_{h,P}(x''_1, x''_2) = h'(x''T)\big|_{x''_3 = c}.

The matrix T is computed using eq. (6): the scaling components are derived from the intersample distances for both the sampling and resampling lattices; the rotation component is chosen according to the desired angle of view [e.g., as in eq. (4)]. For each pixel, addressed by integers x''_1, x''_2, the computation x''T = (x''_1, x''_2, c)T yields the corresponding sampling coordinate vector x', which may be used to approximate the value h'(x') from available samples in the array A.

A transparent reprojection, which is our main concern for the rest of this article, can also be defined in terms of our resampling coordinate system. Let \mathcal{P}_{h,X''_3} denote the projection of h in the X''_3 direction:

\mathcal{P}_{h,X''_3}(x''_1, x''_2) = \int h''(x'')\, dx''_3.   (10)

Each point in the slice image was the value of h at a single point; each point in the reprojection is a line integral of h computed in the direction of view, that is, the X''_3 direction. Again we can substitute for h''(x'') from eq. (9) to get

\mathcal{P}_{h,X''_3}(x''_1, x''_2) = \int h'(x''T)\, dx''_3,

which is implemented as follows. For all integer x''_1, x''_2 corresponding to pixels in the output image, the integral \int h'(x''T)\, dx''_3 is computed (approximately) by varying x''_3 through a suitably large range, and computing h'(x''T) by interpolation from the sample array A. This is in fact one form of the ray casting algorithm for reprojection.
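A direct, image-space implementation of this density-sum reprojection (our sketch of the ray-casting form just described, not the article's program) loops over output pixels and sums interpolated values of h' along each ray; the array shapes, ray depth, and matrix T are assumed inputs:

import numpy as np
from scipy.ndimage import map_coordinates

def ray_cast_projection(A, T, out_shape, depth):
    # Approximate P_{h,X''_3}(x''_1, x''_2) = integral of h'(x'' T) over x''_3 by a sum,
    # keeping all coordinates centred on the middle of the sample array A.
    centre = np.array(A.shape) / 2.0
    image = np.zeros(out_shape)
    for i in range(out_shape[0]):
        for j in range(out_shape[1]):
            x1 = i - out_shape[0] / 2.0
            x2 = j - out_shape[1] / 2.0
            x3 = np.arange(-depth / 2.0, depth / 2.0)          # vary x''_3 over a large range
            ray = np.stack([np.full_like(x3, x1),
                            np.full_like(x3, x2),
                            x3], axis=1)                        # resampling coordinates of ray samples
            coords = ray @ T + centre                           # corresponding sampling coordinates
            samples = map_coordinates(A, coords.T, order=1, mode='constant', cval=0.0)
            image[i, j] = samples.sum()                         # line integral approximated by a sum
    return image

Because every voxel can contribute to some ray, the cost per view grows with the number of input samples, the O(N^3) behaviour discussed later.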

Fourier transform
A function h(x) defined over ℝⁿ can be expressed uniquely as a Fourier series, consisting of a possibly infinite number of terms of the form e^{2\pi i\, x k^T}. These terms are themselves functions over ℝⁿ, parametrized by a vector variable k having n components. The coefficients of these terms, which are complex numbers, are given by a function H(k), called the Fourier transform of h, denoted by the operator \mathcal{F}_n:

H(k) = \mathcal{F}_n\{h(x)\} = \int h(x)\, e^{-2\pi i\, x k^T}\, dx.   (11)

The inverse Fourier transform, denoted by the operator \mathcal{F}_n^{-1}, expresses how the terms of the Fourier series are summed to recover the original function h:

h(x) = \mathcal{F}_n^{-1}\{H(k)\} = \int H(k)\, e^{2\pi i\, x k^T}\, dk.   (12)

We are primarily interested in the 3D (n = 3) case, although the following results extend naturally to more or fewer dimensions. H can be interpreted as a function over spatial-frequency space, which we shall call k-space, wherein every point corresponds to a term in the Fourier series expansion of the function h. Identifying such points by vectors k amounts to imposing a Cartesian coordinate system upon k-space. This coordinate system must have an origin, coordinate directions K_1, K_2, K_3, and a unit of measure. The origin point corresponds to the term e^{2\pi i\, x (0,0,0)^T}, which is unity for all x; this term adds a constant bias to the Fourier series, and is typically called the zero-frequency or DC component. The point whose coordinates are (k_1, 0, 0), which is k_1 units from the origin along the K_1 axis, is the term e^{2\pi i\, x_1 k_1}, which varies sinusoidally in the X_1 direction only, with period 1/k_1 spatial units. Hence, the unit of measure for the K_1 axis is the reciprocal of the unit for the X_1 axis; if the latter is millimeters, the former is mm^{-1} or cycles per mm. The same holds for the K_2 and K_3 axes by a similar argument. To differentiate between k-space and the ordinary space over which the image functions h, h', and h'' are defined, we henceforth call the latter image space.

Linear transformation theorem
The FTs of h' and h'' can be defined by making appropriate substitutions into eq. (11). Corresponding to the vector variables x' and x'', we define new k-space variables k' = (k'_1, k'_2, k'_3) and k'' = (k''_1, k''_2, k''_3).

H'(k') = \mathcal{F}_3\{h'(x')\} = \int h'(x')\, e^{-2\pi i\, x'(k')^T}\, dx'   (13)

H''(k'') = \mathcal{F}_3\{h''(x'')\} = \int h''(x'')\, e^{-2\pi i\, x''(k'')^T}\, dx''.   (14)

Recall that the matrix T, which effects a linear transformation, relates x' to x'' according to eqs. (7, 8). We now show that it is the inverse transpose of T, that is, (T^{-1})^T, which relates k' to k''. Because this can be proven knowing nothing about T other than that it is linear and invertible, it constitutes a theorem, the linear transformation theorem, which holds for all invertible linear transformations.

We need an expression for H'' in terms of H' corresponding to the relationship between h'' and h' given by eq. (9). We begin by substituting for h''(x'') according to eq. (9) in eq. (14), obtaining

H''(k'') = \int h'(x''T)\, e^{-2\pi i\, x''(k'')^T}\, dx''.

We can change the variable of integration to x' by substituting for x'' according to eq. (8). The differential dx'' becomes |\partial x''/\partial x'|\, dx', where |\partial x''/\partial x'| is the Jacobian of the matrix T^{-1}. If T is an invertible linear transformation, so is T^{-1}, and the Jacobian is simply \det T^{-1}; hence, dx'' = |\det T^{-1}|\, dx', and

H''(k'') = |\det T^{-1}| \int h'(x')\, e^{-2\pi i\, x' T^{-1} (k'')^T}\, dx'.   (15)

Let us use the notation T° to denote the inverse transpose (T^{-1})^T of T. Note that x' T^{-1} (k'')^T = x' (k''T°)^T, so we can substitute in eq. (15), giving

H''(k'') = |\det T^{-1}| \int h'(x')\, e^{-2\pi i\, x'(k''T°)^T}\, dx' = |\det T^{-1}|\, H'(k''T°).   (16)

This is the linear transformation theorem. It shows that when the FT H' of a function h' is known, the FT H'' of a related function h'', defined through an invertible linear transformation T as h''(x'') = h'(x''T) = h'(x'), is given by the above expression. Furthermore, in the same way that the matrix T transforms X''_1X''_2X''_3 (resampling) coordinates to X'_1X'_2X'_3 (sampling) coordinates for points in image space, the matrix T° transforms K''_1K''_2K''_3 coordinates to K'_1K'_2K'_3 coordinates for points in k-space. That is,

k' = k''T°.   (17)

This theorem holds for any number of dimensions. In the 1D case, where scaling is the only applicable linear transformation, it reduces to the well-known similarity theorem (12).

Projection-slice theorem
The following development is similar to that given by Mersereau and Oppenheim (7), but follows our notational conventions. We shall find an expression for the 2D FT of a projection of an image function h, and show that this is the same as the restriction of the 3D FT of h to a certain plane through the k-space origin. This result, known as the projection-slice theorem, is the basis of FFP.

Let us compute \mathcal{F}_2\{\mathcal{P}_{h,X''_3}\} using x''_1, x''_2 as the variables of integration. We substitute 2 for n, (x''_1, x''_2) for x, and (k''_1, k''_2) for k in eq. (11), and change from the vector representation to the component representation:

\mathcal{F}_2\{\mathcal{P}_{h,X''_3}(x''_1, x''_2)\} = \int\!\int \mathcal{P}_{h,X''_3}(x''_1, x''_2)\, e^{-2\pi i (x''_1 k''_1 + x''_2 k''_2)}\, dx''_1\, dx''_2.

Substituting for \mathcal{P}_{h,X''_3}(x''_1, x''_2) from eq. (10) yields

= \int\!\int\!\int h''(x'')\, e^{-2\pi i (x''_1 k''_1 + x''_2 k''_2 + x''_3 k''_3)}\, dx''_1\, dx''_2\, dx''_3 \big|_{k''_3 = 0}.

Returning to the vector representation,

\mathcal{F}_2\{\mathcal{P}_{h,X''_3}(x''_1, x''_2)\} = \int h''(x'')\, e^{-2\pi i\, x''(k'')^T}\, dx'' \big|_{k''_3 = 0} = \mathcal{F}_3\{h''(x'')\}\big|_{k''_3 = 0},   (18)

or

\mathcal{F}_2\{\mathcal{P}_{h,X''_3}(x''_1, x''_2)\} = H''(k'')\big|_{k''_3 = 0}.

This is the projection-slice theorem, which states that the 2D FT of the projection \mathcal{P}_{h,X''_3} is the restriction of the 3D FT of h'' to the k''_3 = 0 plane. In practice, we usually begin with an array A of samples of h' and apply a discrete Fourier transform (DFT) to obtain an array B of samples of H', so computation of the expression in eq. (18) will be simplified if we re-express it in terms of H'. We can do this by applying the linear transformation theorem, substituting for H''(k'') according to eq. (16), obtaining

\mathcal{F}_2\{\mathcal{P}_{h,X''_3}(x''_1, x''_2)\} = |\det T^{-1}|\, H'(k''T°)\big|_{k''_3 = 0}.

To compute the projection itself, we need only apply a 2D inverse FT. The scaling factor |\det T^{-1}| can be taken outside the FT operator because the FT and its inverse are linear operations:

\mathcal{P}_{h,X''_3}(x''_1, x''_2) = |\det T^{-1}|\, \mathcal{F}_2^{-1}\{H'(k''T°)\big|_{k''_3 = 0}\}.   (19)
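Before turning to the algorithm itself, a quick numerical check of eq. (18) in the simplest, axis-aligned case may be helpful (our illustration, not part of the original text): the 2D DFT of a volume summed along one axis equals the corresponding zero-frequency plane of its 3D DFT.

import numpy as np

h = np.random.rand(32, 32, 32)

lhs = np.fft.fftn(h.sum(axis=2))      # 2D FT of the projection along the third axis
rhs = np.fft.fftn(h)[:, :, 0]         # k''_3 = 0 plane of the 3D FT of h

assert np.allclose(lhs, rhs)          # eq. (18), discrete and axis-aligned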

Fast Fourier Projection
Equation (19) is essentially a recipe for computing projections of h. Holding k''_3 = 0, we vary k''_1 and k''_2 to resample H' on the K''_1K''_2 plane, giving H'(k''T°)|_{k''_3=0}, then take the 2D IFT of the result. In practice, we begin with a 3D array A containing samples of h' (see Fig. 1), and compute its 3D discrete FT (DFT, discussed below), yielding a 3D array B containing samples of H'. The matrix T° = S_R^{-1} R S_S is computed according to the sampling parameters and the chosen view direction. We resample H' on the K''_1K''_2 plane by interpolation from the samples in B, yielding a 2D array C, and apply a 2D inverse DFT to C, yielding a 2D array D of pixels of the reprojected image. This is the Fast Fourier Projection (FFP) algorithm.

At first glance, FFP seems to require more steps than direct reprojection (e.g., volumetric ray casting), because of the need to convert to k-space and back. However, the initial 3D DFT need only be computed once; thereafter, reprojections for any number of different view angles may be computed by eq. (19). Hence, the time required for the 3D DFT may be ignored if a large number of views are required. Suppose for simplicity that the input sample array contains N × N × N samples, that N × N pixels are required in the projection, and that the projection should show the entire 3D input image. The number of computation steps required for direct reprojection will be proportional to the number of input samples N³, because every sample properly contributes to the projection image. In the language of computational complexity [see e.g., (13)], the direct computation is said to require O(N³) time. The rate-limiting computation in FFP is the 2D inverse DFT, which for N² pixels using the standard FFT algorithm requires O(N² log N) computation time. The difference, N/log N, increases as N increases, and in our experience can yield a reduction of more than two orders of magnitude in the time required to compute reprojections.
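To make the recipe of eq. (19) concrete, here is a compact sketch of the whole pipeline (a simplified illustration under assumed parameters and conventions, not the authors' TAAC-1 implementation): a one-time centred 3D DFT, then, per view, resampling B on the k''_3 = 0 plane at the points k''T° and a centred 2D inverse DFT. Tri-linear interpolation of the complex samples is delegated to scipy.ndimage.map_coordinates, applied to the real and imaginary parts separately, and fftshift/ifftshift perform the origin bookkeeping discussed under implementation below.

import numpy as np
from scipy.ndimage import map_coordinates

def ffp_setup(A):
    # One-time step: centred 3D DFT of the input volume, yielding array B of Fig. 1.
    return np.fft.fftshift(np.fft.fftn(np.fft.ifftshift(A)))

def ffp_view(B, T_ring, out_shape):
    # Per-view step of eq. (19): resample B on the k''_3 = 0 plane at k'' T_ring,
    # where T_ring is the matrix T deg = S_R^{-1} R S_S, then take the 2D inverse DFT.
    centre = np.array(B.shape) // 2
    k1 = np.arange(out_shape[0]) - out_shape[0] // 2
    k2 = np.arange(out_shape[1]) - out_shape[1] // 2
    K1, K2 = np.meshgrid(k1, k2, indexing='ij')
    k_dp = np.stack([K1.ravel(), K2.ravel(), np.zeros(K1.size)], axis=1)   # k''_3 = 0
    coords = k_dp @ T_ring + centre                                        # k' coordinates into B
    C = (map_coordinates(B.real, coords.T, order=1, cval=0.0)
         + 1j * map_coordinates(B.imag, coords.T, order=1, cval=0.0)).reshape(out_shape)
    D = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(C)))
    scale = abs(np.linalg.det(T_ring))     # equals |det T^{-1}|, since det T deg = det T^{-1}
    return np.abs(D) * scale

# Usage with assumed data; the identity matrix reproduces the axis-aligned projection.
A = np.random.rand(64, 64, 64)
B = ffp_setup(A)
image = ffp_view(B, np.eye(3), (64, 64))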


Discrete Fourier Transform
To implement FFP, we need a discrete expression of eq. (19), which in turn requires a precise understanding of how the discrete Fourier transform (DFT) approximates the continuous FT. Let us consider our sampled image function h'(x'), which is defined to equal h(x) for all x' = (x'_1, x'_2, x'_3) where x'_1, x'_2, and x'_3 are integers. In practice, those integers will be confined within intervals according to how h has been sampled. Assume h is sampled on a regular rectangular lattice of N_1 × N_2 × N_3 points, within a right parallelepiped volume which we shall call the sampling window, whose dimensions in absolute units are W_1 × W_2 × W_3. Let W denote the set of all x = (x_1, x_2, x_3) ∈ ℝ³ within the sampling window, that is, with -W_1/2 ≤ x_1 < W_1/2, -W_2/2 ≤ x_2 < W_2/2, and -W_3/2 ≤ x_3 < W_3/2. Let W' denote the set of all x' = (x'_1, x'_2, x'_3) ∈ ℤ³ within the sampling window, i.e., with -N_1/2 ≤ x'_1 < N_1/2, -N_2/2 ≤ x'_2 < N_2/2, and -N_3/2 ≤ x'_3 < N_3/2.

...

    ... then
        B := (complex zero, i.e. 0 + i0)
    else begin
        B := (value computed by tri-linear interpolation in B, at the point (n_1, n_2, |n_3|));
        if n_3 < 0 then
            B := (its complex conjugate)
    end;
end;

(The code sections elided by angle brackets (···) are simple, but depend heavily on the data structure chosen to represent complex numbers.) The function B will be called M_1 × M_2 times, once per pixel in the output image, so it should be implemented as efficiently as possible. The version in our prototype FFP program was carefully hand-coded in assembly language.

Sample array reformatting inherent in the FFT
The standard 1D FFT algorithm is defined on an array A of N samples of a function h(x), such that element A[0] = h(0), A[1] through A[N/2 - 1] correspond to h(x) for positive x, and A[N/2] through A[N - 1] correspond to h(x) for negative x, as shown in Fig. 3(a). In 2D, the equivalent effect is quadrant swapping in the image array as shown in Fig. 3(b). In 3D, the equivalent effect is octant swapping [Fig. 3(c)]. These effects necessitate a few small additions in the implementation of FFP. The initial 3D DFT step becomes:

1. Swap octants in the input image array A, to move the image-space origin to one corner (entry A[0, 0, 0]).
2. Perform a 3D DFT, yielding array B.
3. Swap octants in B, to move the k-space origin back to the centre of the array.

The plane resampling and image reconstruction step becomes:

1. Resample the plane in k-space, yielding array C.
2. Swap quadrants in C to prepare for the 2D IDFT.
3. Perform a 2D IDFT, yielding array D.
4. Swap quadrants in D to move the image-space origin back to the centre of the array.

The first two steps may be combined by modifying the resampling algorithm to fill C in quadrant-swapped order. Similar kinds of modifications are required to deal with DFT algorithms which assume real data in image space and conjugate symmetry in k-space (the most common variety). It is vitally important to understand the array format expected by the DFT algorithm used.
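As an illustration of these bookkeeping steps and of the conjugate-symmetry trick used in the prototype's resampling function (a hedged sketch with invented names, not the original assembly-coded routine): for real input data H'(-k) = H'(k)*, so only half of k-space need be stored, and a lookup at a point with negative k'_3 can be served by evaluating at the negated point and conjugating. numpy's fftshift/ifftshift perform the octant and quadrant swaps listed above.

import numpy as np
from scipy.ndimage import map_coordinates

def prepare_kspace(A):
    # Steps 1-3 of the initial 3D DFT: swap octants, transform, swap octants back.
    return np.fft.fftshift(np.fft.fftn(np.fft.ifftshift(A)))

def lookup_B(B, n):
    # Interpolated complex sample of the k-space array B at centred, possibly
    # non-integer coordinates n = (n1, n2, n3), using conjugate symmetry so that
    # only the half-space n3 >= 0 ever needs to be addressed.
    n = np.asarray(n, dtype=float)
    conjugate = n[2] < 0
    if conjugate:
        n = -n                               # H'(-k) = H'(k)* for real input data
    idx = n + np.array(B.shape) // 2         # centred coordinates -> array indices
    if np.any(idx < 0) or np.any(idx > np.array(B.shape) - 1):
        return 0.0 + 0.0j                    # outside the array: complex zero
    val = (map_coordinates(B.real, idx.reshape(3, 1), order=1)[0]
           + 1j * map_coordinates(B.imag, idx.reshape(3, 1), order=1)[0])
    return np.conj(val) if conjugate else val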

RESULTS

We implemented FFP on a Sun Microsystems TAAC-1 processor [rated at 12.5 Mflops (15)], and compared its performance to that of a ray-casting program provided with the hardware (16). Test input data consisting of 256 × 256 × 58 samples in real space was used directly for ray casting, and transformed to 256 × 128 × 48 samples in k-space for FFP. The results are summarized in Table 1. Fewer samples were used with FFP because of memory limitations on the TAAC-1. The ray-casting program uses only 8 bits per sample (density data), while the FFP prototype uses 32 (16 bits each for real and imaginary components). A 128 × 128 × 58 version of the image-space data was tried with ray casting, with no improvement in running time. The running time of the ray-casting program is a function of the number of output pixels times the average ray depth, and appears to depend far more heavily on output size than on input size. The ray-casting program may not have been as efficient an implementation as possible, but it seems unlikely that its inefficiency could account for a speed difference of two orders of magnitude.

The execution times shown for FFP indicate clearly that on the TAAC-1 it works fast enough to give a truly interactive display. The prototype can be set to produce 64 × 64 images at 10 frames per second until there is a pause in user input; a 256 × 256 image appears 1.5 s later. This two-pass technique yields quite a satisfying interactive display. Furthermore, by computing two view angles at a time, a stereoscopic display can be supported without unacceptable reduction in update rate.


Fig. 3. Conventional FFT algorithms are defined such that the first element of the input array corresponds to the origin. In 1D, the negative-x samples follow the positive-x ones (a). In 2D, diagonally opposite image quadrants are swapped (b). In 3D, diagonally opposite octants are swapped (c). After the FFT is performed, the resulting k-space array is formatted similarly, with the origin point in the first element.

Table 1. Comparison of reprojection times

Output matrix      Ray casting (s)      FFP (s)
64 × 64            14                   0.1
128 × 128          50                   0.4
256 × 256          197                  1.5

It is also possible to generate in real time the kind of rotating display which is usually made using a precomputed cine loop. The initial 3D FFT required to prepare the test data set for FFP took about 15 min on the TAAC-1, and 5 min on a Sun Sparcstation-2. If five or more views are to be computed in the former case, or more than one in the latter case, this time is more than recovered by using FFP instead of the ray-casting program.

The TAAC-1 uses a colour lookup table (LUT), which may be modified interactively to enhance image contrast. We found that linear greyscale LUT functions, which are commonly used in slice display systems, are not very useful with projection images because of the latter's much greater dynamic range. We had better results with a LUT function consisting of a logarithmic segment at the low end (i.e., darker pixels) and a linear region at the high end. In the prototype, the user interactively selects the degree of nonlinearity.

Another feature of the projection images is that noise or a nonzero bias in the sampled image function accumulates, resulting in an indistinct, milky display. This is again due to the fact that each pixel represents a line integral of f, and to the fact that in most imaging systems the values of f are strictly positive and hence noise does not average out in a projection. We have found that this problem can be largely eliminated by simply thresholding the input data before doing the initial 3D DFT.

Recall the image artifacts illustrated in Figs. 2(b) and 2(c), which arise because of inaccurate interpolation in k-space. Our current FFP prototype uses tri-linear interpolation, which requires little computation and yields acceptable results. Parker (14) describes a number of higher-order interpolation functions which may result in improvements in image quality, but these will greatly increase computation time. However, even with tri-linear interpolation, ghost artifacts as in Fig. 2(a) can be reduced to an acceptable level by ensuring that no high-valued samples of f appear in the outer 10% of the sample array, either by zero-padding or by application of a windowing operator [see (9)]. Because these operations are done only once, before the initial 3D DFT, they have no effect on reprojection time.
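A sketch of the kind of display mapping described above (our illustration; the prototype's exact function and its nonlinearity control are not reproduced here) uses a logarithmic segment for the darker pixels and a linear segment for the brighter ones, joined continuously:

import numpy as np

def log_linear_lut(values, nonlinearity=50.0, breakpoint=0.25, levels=256):
    # Map projection values normalized to [0, 1] onto display levels:
    # logarithmic below 'breakpoint' (stretching the dark end), linear above it.
    v = np.clip(values, 0.0, 1.0)
    y_break = np.log1p(nonlinearity * breakpoint) / np.log1p(nonlinearity)
    out = np.where(
        v < breakpoint,
        np.log1p(nonlinearity * v) / np.log1p(nonlinearity),
        y_break + (v - breakpoint) * (1.0 - y_break) / (1.0 - breakpoint),
    )
    return np.round(out * (levels - 1)).astype(np.uint8)

In use, a projection image would first be normalized to [0, 1] (for example by dividing by its maximum) and then passed through this table; raising the nonlinearity parameter devotes more display levels to the dark end.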

DISCUSSION

FFP is very fast, but it is limited to producing transparent reprojections. Conventional display algorithms such as ray casting are slower but more general; they can be adjusted to produce a variety of image types which may better assist visualization of 3D structure. For example, the so-called maximum intensity projection (MIP) (14) can easily be implemented using the ray-casting method, by computing the intensity of each output pixel not as the sum of voxel values along a ray, but rather as the maximum on the ray. For applications where projection images are not optimal, a hybrid scheme may be used, in which FFP is used to interactively select optimal viewing angles, followed by ray casting to generate the best possible images. It is worth noting that the slice extraction algorithm at the heart of an FFP implementation is identical to that used in image space for displaying arbitrarily oriented slices (multiplanar reformatting or MPR). In a hybrid system the same code and data structures could be used for both functions.

To date, we have concentrated on applying FFP to X-ray computed tomography (CT) and magnetic resonance angiography (MRA) data, with encouraging results in both cases. Experience with CT has suggested that FFP might be used together with conventional computer graphics in an interactive CT-based radiation treatment planning system. We envisage a display in which a graphical depiction of the treatment beam in 3D is overlaid onto the FFP reprojection, and a "true beam's eye view" can be computed on demand by ray casting.

SUMMARY

Fast Fourier Projection (FFP) is a very efficient method of producing 2D transparent reprojections from 3D (volumetric) data. It may be implemented easily on conventional workstations, and may be expected to run between 10 and 100 times as fast as ray-casting methods on a given processor. Its speed permits truly interactive viewing; the user may freely zoom and rotate the view, the system filling in intermediate views quickly enough to provide motion-related depth cues which aid visualization of 3D structures. The method yields only transparent reprojection views (similar to transmission X-ray images) and hence cannot be expected to replace conventional reprojection techniques such as ray casting. It may, however, be used to complement such techniques by providing previously infeasible interactive operation.

This article presents the theoretical basis of FFP together with that of slice imaging and conventional


volumetric reprojection techniques such as ray casting, in a unified mathematical framework. The defining equation for FFP is developed first in the continuous domain; an explicit development of a usable discrete approximation follows, leading to a detailed discussion of the implementation of the new method in software. Experience with a prototype implementation, and application of the technique in medical visualization, are also discussed.

Acknowledgments--The authors wish to thank the Medical Research Council of Canada, the Natural Sciences and Engineering Research Council of Canada, the Ontario Ministry of Colleges and Universities, and General Electric Medical Systems for supporting this work. We thank the reviewer for providing important additions to our list of references. The first author (S.D.) also wishes to thank Dr. Stuart Rankin of the University of Western Ontario Mathematics Department, for several helpful discussions.

REFERENCES

1. Udupa, J.K.; Herman, G.T., eds. 3D imaging in medicine. Boca Raton, FL: CRC Press; 1991.
2. Kaufman, A., ed. A tutorial on volume visualization. Los Alamitos, CA: IEEE Computer Society Press; 1990.
3. Goldwasser, S.M.; Reynolds, R.A.; Talton, D.A.; Walsh, E.S. Techniques for the rapid display and manipulation of 3-D biomedical data. Comput. Med. Imaging Graph. 12:1-24; 1988.
4. Harris, L.D.; Robb, R.A.; Yuen, T.S.; Ritman, E.L. Non-invasive numerical dissection and display of anatomic structure using computerized x-ray tomography. SPIE Proceedings 152:10-18; 1978.
5. Levoy, M. Efficient ray tracing of volume data. ACM Trans. Graph. 9:245-261; 1990.
6. Bracewell, R. Strip integration in radio astronomy. Aust. J. Phys. 9:198-217; 1956.
7. Mersereau, R.; Oppenheim, A. Digital reconstruction of multidimensional signals from their projections. Proc. IEEE 62:1319-1338; 1974.
8. Dunne, S.; Napel, S.; Rutt, B.K. Fast reprojection of volume data. In: Ezquerra, N.F., ed. Proceedings of the First Conference on Visualization in Biomedical Computing. New York: IEEE Computer Society Press; 1990:11-18.
9. Napel, S.; Dunne, S.; Rutt, B.K. Fast Fourier Projection for MR angiography. Magn. Reson. Med. 19:393-405; 1991.
10. Strang, G. Linear algebra and its applications, 2nd ed. Orlando, FL: Academic Press; 1980.
11. Foley, J.D.; Van Dam, A. Fundamentals of interactive computer graphics. Reading, MA: Addison-Wesley; 1982.
12. Bracewell, R. The Fourier transform and its applications. New York: McGraw-Hill; 1965.
13. Aho, A.V.; Hopcroft, J.E.; Ullman, J.D. The design and analysis of computer algorithms. Reading, MA: Addison-Wesley; 1974.
14. Parker, J. A comparison of interpolating methods for image resampling. IEEE Trans. Med. Imaging 8(4):31-39; 1983.


15. Austin, J.; Hook, T.V. Medical image processing on an enhanced workstation. SPIE Medical Imaging II 914:1317-1324; 1988.
16. McMillan, D.; Johnson, R.; Mosher, C. Volume rendering on the TAAC-1. SunTech Journal 2(4):52-58; 1989.
17. Laub, G.A. MR angiography with gradient motion refocussing. J. Comput. Assist. Tomogr. 12:377-382; 1988.

About the Author--SHANE DUNNE received B.Sc. and M.Sc. degrees in Computer Science from the University of Western Ontario in 1987 and 1988, respectively, and then joined the scientific staff of the Tom Lawson Family Imaging Research Laboratories at the John P. Robarts Research Institute in London, Ontario, Canada, where he, Napel, and Rutt developed the FFP technique. In 1990, he returned to full-time study towards the Ph.D. at the University of Western Ontario, under an NSERC fellowship. Mr. Dunne is interested in combining methods from image analysis and computer graphics. FFP is an example, being essentially a graphics display technique based on theory more commonly applied in image processing. His present research is aimed toward automatic on-line recognition of hand-written visual languages such as logic diagrams and music notation.

About the Author--SANDY NAPEL received his B.S. in Engineering from the State University of New York at Stony Brook in 1974, and his M.S. and Ph.D. in Electrical Engineering from Stanford University in 1976 and 1981, respectively. He joined the faculty of the Department of Radiology at the University of California at San Francisco as Adjunct Assistant Professor, and became a founding member of Imatron, Inc., where he directed the development of the computer system for the world's first CT scanner capable of stop-action scanning of the beating heart. After several years as Vice President of Engineering, he left Imatron to pursue academic interests as a Visiting Assistant Professor in the Department of Radiology at the University of Western Ontario. He joined the Stanford faculty in 1991. Dr. Napel is interested in efficient methods of extracting medical information from the hundreds of images typically generated by one or more radiological exams performed for each patient. Examples are: visualization of flow from magnetic resonance images, segmentation of tumors from 3D medical imaging data, and multidimensional, multimodality image correlation/display.

About the Author--BRIAN RUTT received his B.A.Sc. in Engineering Sciences from the University of Toronto in 1976, his M.S. degree in Electrical Engineering from Stanford University in 1977, and his Ph.D. degree in Medical Biophysics from the University of Toronto in 1982. He was the Hounsfield Fellow in the Department of Radiology at the University of California at San Francisco in 1983, and was subsequently an Adjunct Assistant Professor and later Assistant Professor in that department until 1986. He joined the faculty of the Department of Diagnostic Radiology at the University of Western Ontario in London, Ontario, Canada as an Assistant Professor in 1986 and has been an Associate Professor in that department since 1990. Dr. Rutt is also an Associate Scientist in the Tom Lawson Family Imaging Research Laboratories at the John P. Robarts Research Institute. Dr. Rutt is interested in developing new techniques for image formation and display, particularly using Magnetic Resonance Imaging. His primary research interest is vascular MRI, and he has secondary interests in multimodality image comparison and combination.
