Magnetic Resonance Imaging, Vol. 9, pp. 213-221, 1991
Printed in the USA. All rights reserved.
0730-725X/91 $3.00 + .00
Copyright © 1991 Pergamon Press plc

• Original Contribution

OBSERVATIONS ON MAXIMUM ENTROPY PROCESSING OF MR IMAGES

PAUL R. MORAN

Department of Radiology, Wake Forest University, Bowman Gray School of Medicine, Winston Salem, North Carolina 27103, USA

A maximum entropy (MAXENT) criterion for MR image processing optimization has previously shown poor performance, but this note observes that there are two entirely different kinds of "data transmission" applications which appear to have been intermixed. In the two cases, "image entropy" actually refers to different kinds of data variables. The previous literature formulations are for transfer of data in which pixel-locations are the transmitted variable, and these pixels may be neither uniform nor constant. The second application concerns the MRI data set for display. Its data variables are image pixel-values of magnetization intensity, and the data transfer mode has the sense of visual display. When MAXENT criteria are modified to address an array of pixel-value intensities, and use a pixel-value information entropy rather than a pixel-locations entropy, then successful data processing results. Restoration of display visualization from highly nonuniform surface coils, for lumbar spine scans, is demonstrated as an example of MAXENT usefulness.

Keywords: Image processing; Information; Entropy; MR imaging.

INTRODUCTION AND PRIOR LITERATURE

Interest in MR image processing using maximum entropy (MAXENT) criteria has recently been renewed, through the detailed review and empirical studies by Constable and Henkelman (CH).1 That article, entitled "Why MEM does not work in MR image reconstruction," studies a specific methodology for MR optimizations attributed to DeSimone, DeLuca and Maraviglia (DDM).2,3 Those authors cite prior literature for maximum-entropy algorithms; all give the same original sources.4,5 This note points out that the image entropy evaluation, in the MRI works cited in CH, may not be the appropriate one; it also demonstrates a successful application of MAXENT processing for an important MRI concern. Using an incorrect entropy formulation may be why DDM comment negatively on their image result evaluations. And it confirms the findings of CH, where they demonstrate poor performance of that approach. These specific confusions for MAXENT are known in other image processing areas.6,7 The confusion is because of two very different applications: (i) when the intent is to transfer pixel-locations as the information, as in nuclear medicine serial data mode, with nonequal pixel areas and nonuniform pixel positions (x,y), but where the total count for each transmitted location is fixed; or (ii) when the application is to transmit image-value information, as in a visual display mode, for a uniform resolution with an equally spaced pixel array, as commonly understood in MRI. In the first case, the data values are spatial location numbers, and not visualizable information. In the second, they are "visual" subject density intensities. This note offers a substantially different perspective about MRI MAXENT uses, including the following:

1. While Constable and Henkelman show that MAXENT "fails" to perform well for MRI, that was only for one particular Lagrangian formula, which has never been given a theoretical basis in any of the cited MR literature.
2. The expression for Lagrangian maximization of that form for data entropy, with constraints, may be in error from the starting point. The quoted expression for the image data's frequency-of-occurrence histogram (or probability function, from which the information entropy is computed) may be appropriate only for data transfer of pixel locations.
3. When the entropy expression is replaced with a conventional information-theory result for pixel-values, then MAXENT criteria are shown to work well in MRI tasks.

Item 3 above is presented here for a particular MR image model in restoring lumbar spine axial scans. Their original array suffers severe gray-scale distortions because of surface coil sensitivity variations in the subject volume. MAXENT processing criteria are demonstrated for an example; it is a very different image model application than addressed in the previous articles.

RECEIVED 6/13/90; ACCEPTED 9/26/90. Address all correspondence to Paul R. Moran, Department of Radiology, Wake Forest University, Bowman Gray School of Medicine, 300 South Hawthorne Road, Winston Salem, NC 27103, USA.

ENTROPY AND BIT-CAPACITY FORMULATIONS

Pixel-values Entropy
When pixel-values are the information content of the transmission, we denote the variables in the image array by V(r). The pixel locations, r, form a regular array indexed by r = 1, 2, 3, ..., N; the total pixel count, N, is fixed. These data are scaled so that the smallest meaningful difference corresponds to a single integer increment and, thus, the V(r) are integers. In information theory, the data entropy, HV, is the average system value of the logarithm of (1/f(V)). The frequency of occurrence, f(V), is

    f(V) = n(V)/N ,                                             (1a)

where n(V) is the number of different pixels sharing the same distinct V-value, and N is the total fixed number of pixels. HV is the pixel-value entropy, which is simply the bit-capacity needed by the average pixel to represent this data configuration in terms of discernibly different levels:

    HV = pixel-value information entropy
       = -(1/N) Σ_{r=pixels} log[f(V(r))]
       = - Σ_{V=V(min)}^{V(max)} f(V) log[f(V)] .               (1b)

In the background that CH cite, as developed by Jaynes8,9 in the 1950s, an item is noted. The frequency of occurrence implied by f(V) is from an empirical procedure carried out in one single realization of the whole data array, sorting over all N different pixels in that array. It corresponds to "information entropy." This is not the common thermodynamic concept of random statistics, or disorder in a noisy system. In that case, we would examine one single pixel location at a time, sorting the variations occurring there over many different realizations, employing successive imaging trials of that same subject. The latter yields the system's random-events "ensemble entropy," whereas the former yields the information-entropy from a single image. Only information-entropy concepts are applied here. Further, within the category of information-entropy for images, two different kinds of "transmission" modes must be distinguished. One is the display mode, described above; that is the usual case for MRI. It is represented by Eqs. (1) above, and by the HV pixel-values entropy.

Pixel-locations Entropy
The other transmission mode identifies, instead, an equal "count" level for spatial locations (also called pixels), and transmits those (x,y) locations associated with equal counts; the corresponding "pixel" areas are generally nonuniform. The information value transmitted is the number locating the pixel in question, (k) → (x,y). Let n(k) be the number of times that the pixel position labeled by (k) is transmitted, when each element signifies a fixed total count, of grand total M such equicount elements transferred. The frequency of occurrence will be denoted p_i, interpreted as a probability of transferring the i-th location number,

    p_i = n(i)/M .                                              (2a)

This n(i) quantity has little in common with the n(V) quantity used in Eq. (1a). In particular here, the number of pixel locations, m, is not a fixed constant; the total transmission number of counts, M, is the constant quantity. Also, MRI magnitudes |Z(r)| represent magnetization (a number density) and not a discrete number of "counts." Doubling the pixel area doubles the n(i) value in Eq. (2a); the correspondence with the MR |Z(r)| intensities therefore is {n(i) → V(i)·(area of i-th pixel)}:

    p_i = (|Z(i)| α_i) / (Σ_k |Z(k)| α_k) ,                     (2b)

where α_k is the area for that center location, (x,y), whose position is transmitted by the coding (i). For this kind of application, the pixel-location entropy is called "S," and is given by a formula similar in form to Eq. (1b):
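Equations (1a)-(1b) translate directly into code. The sketch below is our own illustration, not from the paper (base-2 logarithms assumed, so HV reads in bits per pixel; the function name is ours):

```python
import numpy as np

def pixel_value_entropy(image):
    """H_V of Eq. (1b): the average over pixels of log2(1/f(V)),
    with f(V) = n(V)/N the histogram frequency of occurrence of Eq. (1a)."""
    v = np.asarray(image, dtype=np.int64).ravel()
    _, counts = np.unique(v, return_counts=True)  # n(V) for each distinct V
    f = counts / v.size                           # f(V) = n(V)/N
    return float(-(f * np.log2(f)).sum())         # bits per pixel

# A constant image needs zero bits per pixel; a histogram that is uniform
# over 256 levels reaches the full 8-bit capacity.
flat = np.zeros((16, 16))
ramp = np.arange(256).reshape(16, 16)
# pixel_value_entropy(flat) -> 0.0; pixel_value_entropy(ramp) -> 8.0
```

Note that HV depends only on the histogram of values, never on where those values sit in the array, which is exactly the "display mode" sense of information used in the text.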

    S = - Σ_{i=pixels} p_i log[p_i] .                           (2c)

A version of this "locations" entropy for nmr had been proposed in the context of spectroscopy plot processing as well.10 The locations labeled by (i) do not represent a uniform array of resolvable pixels, as would be understood in the sense of an MRI image reconstruction. Constable and Henkelman quote previous literature, which defines a probability frequency of occurrence, p_i, as follows:

    p_i = V(i) / Σ_k V(k) .                                     (2d)

But this is not related to the image-values entropy, HV. A compounding confusion, also, is seen comparing Eq. (2d) to the information-theoretic requirement of Eq. (2a). To conform to the cited development papers, the definition in Eq. (2d) would need to incorporate the variable pixel area, α_k, and allow the (k) label to represent only the center-point of these nonuniform areas. In this case, (k) does not sum to the total number of pixels, N, which one usually associates with an MRI array. The sum of all the (V(k)·α_k) values, however, is fixed; it has the same total as the sum of usual MRI array magnitudes, each multiplied by their uniform equal pixel area. Equations (2c) and (2d) are the forms which CH evaluate, attributing that formulation to DDM who, in turn, use expressions written down fully by Wernecke and D'Addario (WD).4 This assignment is made without comment or justification. No reason is given, in any of the papers cited, for assuming that the application at hand relates to that pixel-locations entropy. The ideas for Eqs. (2) are consistent with Jaynes's various early development papers and later reviews,11 but only when the application is to transfer pixel-locations, when the effective "pixel" areas may be nonequal, and when the number of these pixels is not necessarily a fixed total. Equations (2c) and (2d) lead to an algorithmic expression for maximizing a Lagrangian Q-value. As quoted in CH, it is
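The practical difference between the two entropies can be seen in a small numeric sketch (our own illustration, not from the cited papers): with the probabilities of Eq. (2d), p_i proportional to the pixel values themselves, the S of Eq. (2c) is largest for a perfectly flat image, whatever its gray-level content.

```python
import numpy as np

def s_locations(image):
    # S of Eq. (2c), with p_i taken proportional to the image values (Eq. 2d)
    p = np.abs(np.asarray(image, dtype=float)).ravel()
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

n = 256
flat = np.full(n, 100.0)             # contrastless image: every pixel equal
spread = np.linspace(1.0, 200.0, n)  # image with a rich spread of values
# The flat image attains the maximum, log2(n) = 8 bits, so maximizing S
# under Eq. (2d) rewards removing contrast rather than displaying it.
assert s_locations(flat) > s_locations(spread)
```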

    Q = - Σ_{i=pixels} { p_i log[p_i] + L[(p_i - Z_i)/σ_i]^2 } ,     (3)

where the Lagrange second term is a "closeness" constraint, with multiplier parameter L, to the (normalized) FT-image values Z_i. Note, however, that this constraint is embodied in a pixel-by-pixel summation, for deviations from the original image values, and taken over the original uniform MRI pixel array. Thus, this formulation appears inconsistent with the pixel-locations application, in which image (i)-values are themselves the variables to be transferred. The image information is transferred, according to Eq. (3), instead by the image pixel-values, Z_i; this demands the HV entropy. The fact that the same formulation has simultaneously demanded effectively equal areas for the presumably nonuniform transmission of pixel-locations, in equating the content of Eqs. (2a) and (2d), leads potentially to other serious misinterpretations. In the previous literature, no discussion of specific computation procedure is given; thus we cannot assess this issue further. From Jaynes's other work,8,9 image processing in non-MRI areas,12 and modern information theory texts,13 a corrected expression would use the HV entropy according to Eq. (1). Writing it in the CH subscript notation,

    Q = - Σ_{r=pixels} (1/N) log[f(V(r))] - L Σ_{r=pixels} [(V(r) - Z_FT(r))/σ_i]^2
      = - Σ_V p_V log[p_V] - L Σ_{r=pixels} [(V(r) - Z_FT(r))/σ_i]^2 .     (4)

In Eq. (4), the "original image" Z_FT(r) recognizes some arbitrary scaling; the V(r) may have another constraint, such as a specified dynamic range. This causes no problem, since the expected random noise, σ_i, also is to be scaled in the same way. Equations (4) and (3) appear superficially the same, but the entropy in (4) involves the histogram frequency of occurrence, summed over the image pixel values, V. In Eq. (3), the summations run over the same index, the pixel locations. Since image-values and p-histogram values have been forced into proportionality here, it is not necessary to perform numerical parameter variations to discover the behavior. The solution can be derived analytically; it is easy to show that the variational Q-maximizing solution is:

    V(r) = G[(1 - β)Z(r) + β⟨Z⟩] ,                              (5)

where ⟨Z⟩ is the average of the Z(r) values, G is a constant independent of (r), and β is ≤ 1.0. If β = 1.0, then all image values are equal, and p_i is a constant which produces the greatest possible S-entropy. The closeness constraint is loose, with a small L-parameter value for Eq. (4). If β = 0, then V(r) is a scaled version of Z(r); this produces zero variance in the closeness constraint, corresponding to a large L-value. But the S-entropy will be smaller. Any degree of closeness consistent with a relatively maximized S-entropy, therefore, is satisfied by some value (0.0 ≤ β ≤ 1.0). Increasing β adds more ⟨Z⟩ component in Eq. (5), increases the S-entropy, and gives a "smoothed" average-image appearance tending to a contrastless character relative to the original. These predictions agree well with the results reported by Constable and Henkelman; they also are consistent with the one example presented by DDM. This "ME method" does not work for MRI, nor for any other image display modality optimization scheme, because it tends to replace image contrasts by contrastless constant average values everywhere. The enforced relation between the p_i function and the image values themselves means also that no particular image model for processing needs to be specified. While the approach of Eq. (3) computes, in fact, some valid image data entropy for certain transmission applications, it is not one based upon the histogram of the image-values for the application here. It probably should not be called a maximum entropy method when substituted for use in "display" mode image processing. With Eq. (4), on the other hand, the process is flexible, since (p_V = f(V)) relates to the values of the image pixel variables, V(r), themselves in no inherently defined manner. The defined item here is the total pixel count. Given some original inverse FT values, Z(r), an image model for processing must be specified according to some prescription. It must define the basis upon which one generates modified V(r) values for that array. There are many significant efforts employing MAXENT principles for MRI in special areas, other than that addressed in CH and DDM.
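Before turning to those efforts, the β-behavior claimed for Eq. (5) is easy to verify numerically. The sketch below is our own check, with the locations entropy S computed from Eq. (2c)/(2d) probabilities on an arbitrary synthetic Z(r); blending toward the mean moves p toward uniform, so S rises monotonically toward the contrastless β = 1 limit.

```python
import numpy as np

def s_entropy(v):
    # Eq. (2c) with p_i proportional to the (positive) image values, Eq. (2d)
    p = v / v.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
z = rng.uniform(10.0, 200.0, size=1024)   # synthetic stand-in for Z(r)
z_mean = z.mean()

# V(r) of Eq. (5): a blend of the original values and their constant average.
# Normalizing shows p(beta) = (1 - beta) * p(0) + beta * uniform, so S grows
# with beta and peaks at the contrastless limit.
s_of_beta = [s_entropy((1.0 - b) * z + b * z_mean) for b in (0.0, 0.5, 1.0)]
assert s_of_beta[0] < s_of_beta[1] < s_of_beta[2]
```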
One of them is optimizing the presentation of MR images obtained with surface coils having enormous sensitivity variations whose details are unknown. Those MAXENT attempts have proven, contrary to the methods of concern in CH, to work with remarkable success.14 An illustration is shown here, one of many possibilities, to indicate how maximum entropy approaches do work in other MRI application situations. The problem addressed below is to find an optimized way to "correct and improve" MR image presentations from surface coil scans. It is not the only correct MAXENT application, but it is one particular image processing example.

SUCCESSFUL APPLICATIONS EXAMPLE

The particular image model demonstrated below is called dynamically adaptive contrast windowing (DACW).14 Its objective is to optimize visualization in a non-windowed, highly variable MRI display. It is convenient mathematically to subtract the overall original image magnitude mean-value. The original Z(r) resulting then has ⟨Z⟩ = zero. The modified pixel-values are derived from Z(r) by

    V(r) = F [Z(r) - L(r)] / W(r) + (1 - F) Z(r) ,              (6)

where F is a fractional value typically close to 1.0. L(r) is a window level, and W(r) is a window width, but both are adaptively determined for each pixel. They are processed as follows:

    L(r) = A(r) ⊗ Z(r) ,                                        (7a)

where "⊗" signifies convolution into a blurred, regional-average image. The diameter of the normalized blurring aperture, A(r), is the inverse of a range in Fourier-data space, inside of which the original data information are modeled to have lower visual priority than higher-frequency data outside that range. The roving window-width modifier, W(r), is computed from a "B-convolution" average of the magnitude of departure locally from the window-level value:

    W(r) = (1/SC) [B(r) ⊗ |Z(r) - L(r)| + g] ,                  (7b)

where (SC) is the image scale factor, and "g" is a small fraction of the average of the preceding term, added for clipping the contrast gain. Averaging the magnitude of contrast over a region about (r), in Eq. (7b), uses the second normalized convolving aperture, B(r). These operations are computationally efficient in using Fourier processing, via FFT advantages. Final monitor display levels for V(r) are taken linearly to the extremes of the 256 steps available:

    Display(r) - 128 = V(r) ,    |V| < 128 ;
                     = ±128 ,    |V| ≥ 128 .                    (8)
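The pipeline of Eqs. (6)-(8) can be sketched compactly. The version below is a hedged reconstruction: the paper's apertures are truncated quasi-Gaussians whose exact form is not given here, so a plain Gaussian transfer function stands in for A(r) and B(r), and the (SC) normalization is folded into a final rescale.

```python
import numpy as np

def _blur(img, diameter):
    """Normalized low-pass 'aperture' convolution done with FFTs, as the text
    suggests; the Gaussian transfer function is an assumption standing in for
    the paper's truncated quasi-Gaussian apertures A(r), B(r)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    sigma_f = 1.0 / (np.pi * diameter)          # spatial width ~ diameter/2
    transfer = np.exp(-0.5 * (fy ** 2 + fx ** 2) / sigma_f ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * transfer))

def dacw(z, diam_a=20, diam_b=10, F=0.9, g_frac=0.1):
    """Sketch of dynamically adaptive contrast windowing, Eqs. (6)-(8)."""
    z = z - z.mean()                            # subtract mean so <Z> = 0
    level = _blur(z, diam_a)                    # L(r) = A(r) (*) Z(r), Eq. (7a)
    departure = np.abs(z - level)               # |Z(r) - L(r)|
    g = g_frac * departure.mean()               # clipping term in Eq. (7b)
    width = _blur(departure, diam_b) + g        # W(r), Eq. (7b), SC folded out
    v = F * (z - level) / width + (1.0 - F) * z # Eq. (6)
    v = 127.0 * v / np.abs(v).max()             # scale into the display budget
    return np.clip(np.round(v) + 128.0, 0, 255) # Eq. (8): clamp to 256 steps

# Toy surface-coil image: weak sinusoidal detail under a strong falloff.
y = np.linspace(0.0, 1.0, 64)[:, None]
detail = np.sin(np.linspace(0.0, 8.0 * np.pi, 64))[None, :]
img = (2.0 + detail) * np.exp(-3.0 * y)
out = dacw(img)
```

Dividing the leveled image by the local width W(r) is what equalizes contrast across the coil-sensitivity falloff; the (1 - F) term keeps a fraction of the original appearance, acting as the clamping discussed next.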

The mapping in Eqs. (6)-(8) is a parametric version of adaptive histogram equalization, with clipping and clamping; the CLAHE algorithm of Pizer et al.15 For example, the W(r) width approximates the breadth of the regional histogram, and the L(r) level approximates the histogram regional median value. The F-parameter is basically the clamping level, to give a minimum contrast gain, and the (g) parameter is the clipping level. In uniform regions, using g = 0 can result in artifactual contrast exaggeration; it generates gaps in the V(r) histogram at small V, which reduces HV entropy. CLAHE is an analytic solution to regional MAXENT, subject to the constraint of mean-squared closeness, just as in Eq. (4). The F-parameter controls the tradeoff between largest possible entropy maximization and closeness to the original image appearance. The empirical variational approach for Eq. (6) is thus relatively simple. It is merely a linear combination of an "enhanced image" and the original image. The image model is a close approximation to CLAHE, and so good MAXENT performance is already guaranteed. The other constraint is that the final V(r) data have a finite range for image display, 256 integer steps. This is controlled effectively by the (SC) parameter. Finally, we require the condition of simultaneously maximized HV entropy, on the 256 integer range. Limiting the contrast gain, via the g-parameter, does not arise as an issue for the MAXENT application pursued in the previous literature. Together with the aperture area of B(r) relative to A(r), (g) exerts some fine control of the ultimate maximized entropy value. The r-width of A(r) determines the spatial range of the resulting image's auto-correlation function for correlated pixel values. Some additional constraint could be imposed for that condition, with respect to the original image auto-correlation distance. We choose not to do that, but to leave the A(r) breadth (see Table 1) free here, chosen for subjective visual appearance. It is found that the parameters described above can readily be adjusted within a wide range of reasonable values, depending upon the Lagrange multiplier strengths imposed, with a MAXENT result, to meet constraints of closeness and background noise containment. Since it is not known in an empirical testing situation how one should optimally stress, say, avoidance of background noise exaggerations, relative to display entropic capacity, or an impression of uniform rms contrast in relation to those issues, it is not necessary to formalize the process. This tolerance has, in essence, been guaranteed by the parametric-AHE form originally chosen. Thus, final details of the optimization criterion are employed more interactively: we determine the most visually advantageous variations of the parameters which produce an entropic capacity HV(data) maximization, checking against the stated constraints on the image. If it were formalized mathematically, this procedure would be called variational optimization by successive projections.

Table 1. Image model parameters used in MAXENT-guided processing for the image in Fig. 1

  Fourier form of the averaging apertures:  (1 - k/K_a)^2
  Level subtraction fraction:               F = 0.90
  Scale factor:                             (SC) = 255/(V_max - V_min)
  Noise-limit factor:                       g = 0.10 × ⟨B ⊗ |(1 - A) ⊗ Z|⟩_whole image

                                              Fig. 1              Fig. 2                  Fig. 3
  Implementation                              simple scaling      as above                as above
                                              to 256 display
  Diameter of A(r) aperture in the
    reconstructed picture                     N/A                 ~20 pixels              ~20 pixels
                                                                  (in a 256 × 256 image)
  Diameter of B(r) aperture in the
    reconstructed picture                     N/A                 20 pixels,              10 pixels,
                                                                  equal to A(r)           half of A(r)
  Scaled for display:
    Total image values integer range          256 (8 bits)        256 (8 bits)            256 (8 bits)
    Entropy evaluation                        4.2 bits            5.5 bits                7.2 bits
  Before scaling for display:
    Total image values integer range          2200 (11 bits)      1000 (10 bits)          256 (8 bits)
    Entropy evaluation                        7.2 bits            7.3 bits                7.2 bits


When good DACW parameters are determined for a typical MRI example, they serve well in optimized display for all data obtained from the same quality of clinical studies, without further adjustments; this property was confirmed in studying dozens of images similar to those presented here. The figures following are from a lumbar spine scan, in axial mode, using our Picker VISTA 1.5 T unit with high-sensitivity surface coils. Each image version is shown along with its histogram, prior to restriction to the 256 integer steps. In Fig. 1, the original is simply scaled for display on a linear luminance scale, without selective windowing. The subcutaneous fat is intense, and exaggerated in the higher-sensitivity part of the receiver coil. Consequently, all the desired and useful image details medially in the subject are driven too dark to visualize adequately, if the musculature subcutaneously is to be visualized as well. Table 1 gives the parameters for the three versions of this lumbar spine image. The original data range over about 11 bits (2048 integer), and the entropy evaluation gives just about 7.2 bits per pixel for HV. In this configuration, the entropic bit capacity is almost 4 bits less than the range, and each pixel on the average utilizes only (1/16) of the data range otherwise available for presentation in the total range. When data are simply scaled down to 256 levels for monitor display, the same entropy disadvantage remains, and the display-values entropy drops as shown in Table 1. In the histogram, at such low levels that the graph-plot marks are not discernible, there is an occasional pixel with values straggling to the end of the graph's x-axis. These are located in the rim of pixels at the very extreme of the skin, posteriorly, close to the surface coil. Only the lower 8%-10% of the entire data range actually contains visually useful pixel image data. In fact, that segment is where essentially all the entropic capacity for display presentation is concentrated. For larger display image entropy at 256 levels, Fig. 2 shows a version achieved as follows. The A-aperture and B-aperture are Gaussian truncated approximations in the Fourier domain (see Table 1). They were first taken both having an effective diameter of 20 pixels in the image. The scale factor (SC), and display limits of Eq. (8), can simply be estimated by obtaining the greatest entropy count for the final range of 256 (8 bits). The noise-clipping g-parameter has little noticeable effect upon the entropy value that results; we set it based upon a subjective impression of noise texture in the featureless backgrounds. The display data entropy

Fig. 1. The figure presents the image and histogram of unmodified lumbar spine MRI data. The data are simply scaled to a display range of 256 integer values. The luminance transfer in display is a strictly linear relation to MRI data values.
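The entropy bookkeeping described for the original scan can be reproduced on synthetic data. The sketch below is a toy model of the situation, not the paper's data: most pixels crowd the lowest ~10% of an 11-bit range, and naive scaling to 256 display levels, a many-to-one mapping, can only merge levels and so can never raise HV.

```python
import numpy as np

def hv_bits(values):
    # pixel-value information entropy, Eq. (1b), in bits per pixel
    _, counts = np.unique(np.asarray(values, dtype=np.int64), return_counts=True)
    f = counts / counts.sum()
    return float(-(f * np.log2(f)).sum())

rng = np.random.default_rng(1)
# Toy 11-bit acquisition: useful anatomy in the lowest ~10% of the range,
# plus a small bright rim of pixels near the surface coil.
raw = np.concatenate([rng.integers(0, 200, 60000),
                      rng.integers(1800, 2048, 500)])
scaled = raw * 256 // 2048          # naive linear scaling to 8 bits

entropy_raw, entropy_scaled = hv_bits(raw), hv_bits(scaled)
# Coarse-graining merges levels: entropy stays the same or falls.
assert entropy_scaled < entropy_raw
```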


Fig. 2. Dynamically adaptive contrast windowing has been implemented for these data reconstructions. The range of image values has dropped by a factor of two from the set in Fig. 1, but the rms display data extent has not decreased significantly. Much more medial detail is visualizable, while still being able to determine changes within the subcutaneous region. Only the gain clipping and clamping parameters are available for adjustment.

has now risen significantly, to 5.5 bits; but the real display improvement is reflected in the gain of display data entropy in relation to the 8-bit range. The finding that the final 256-range entropy value, once achieving a reasonable approach to the 8-bit limit, is relatively insensitive to further parameter variations, over surprising excursions, was an intriguing byproduct of these trials. That is, the peak in entropy maximization is a broad one. While the data for Fig. 2 give a vast improvement over the simple-scaling display of the original MR image data, a great deal more can be achieved. For Fig. 3, the B-aperture size was varied relative to the A-aperture. It was found that the best visual results indeed correspond to maximized display data entropy, and they can be attained when the B-area is taken as about 0.25 of the area of the A-aperture. Essentially no change in the clipping parameter, g, was required, but the clamping limit, F, could be reduced substantially, to achieve a 256 integer display range without substantial rescaling via (SC). The closeness constraint in Eq. (4) basically controls the clamping parameter, F. Now the entropy is restored essentially to the original data level, about 7.2 bits. In this case, at least half of the available capacity is used effectively by all the pixels, for display information.

Fig. 3. Maximum entropy criteria are used to determine an optimal choice in varying the diameter of the B(r) aperture, which is used to determine the locally adaptive W(r)-function of Eq. (7b), with respect to the diameter of the A(r) aperture. The A(r) aperture is used to determine the adaptively implemented window leveling function.

The findings in Figs. 1-3, even in this simple and relatively crude DACW image processing model, show visually that MAXENT methods can indeed work well in these applications, and establish optimized display data versions for special and difficult MRI visual presentations. These kinds of DACW display optimizations have been tested for several dozen different clinical MRI image cases, and in some X-ray CT examples, all with results comparable to those shown here. While outside the scope of discussion, it was also found that the list of "optimization" parameters in Table 1 is not unique. An even larger final entropy can be attained for 256 levels by reducing the characteristic diameter of the "leveling" A(r)-convolution below 20 pixels. The quasi-Gaussian histogram shown in Fig. 3 then broadens out to be more nearly constant. One can increase the apparent entropy to 7.5-7.8 bits out of an 8-bit range. But then the image develops a visually objectionable stress upon sharp edges, noise, ultra-fine details, contour artifacts, and so on. The same levels of mean-squared variance can still be approximately maintained, but the quality of how that "closeness" is achieved has a very different subjective visual impression. Large entropy implies a large transmission potential, a capacity for information transfer to the observer; it does not imply the data actually transmitted necessarily will be meaningful or useful. For a fixed level of mean-squared variance, a variety of visual textures and modifications are possible in the displayed images resulting. But this should not distract from the enormous improvement in visualization that is achieved quite directly in going from Fig. 1 to Fig. 3.

SUMMARY AND CONCLUSIONS

The MAXENT criteria for image processing optimization shown here are successful for a significant MRI example; it is an advantageous technique. But MAXENT does not enforce a rigid, cut-and-dried determinism for any image model of more than trivial complexity. These findings argue strongly that a rich potential for practical applications and benefits exists in the use of maximum information entropy concepts for MR images, as well as for understanding more about the meaning of basic visual information variables in image data. For the works cited and evaluated empirically by Constable and Henkelman, the validity of the peculiar definition of image entropy is called strongly into question. It is the image pixel-location entropy,6 and not pixel-value entropy. Making probabilities proportional to image values intermixes image-location entropy formulations with image-value applications. This is the aspect which apparently "does not work" correctly, nor should it be expected to, on basic theoretic grounds. When the definition of image data entropy conforms with the frequency of occurrence for image values, as originally presented by Jaynes and in more modern developments, excellent results can be achieved. Here MAXENT criteria give a successful and valuable family of methodologies for approaching various image processing problems.

REFERENCES

1. Constable, R.T.; Henkelman, R.M. Why MEM does not work in MR image reconstruction. Magn. Reson. Med. 14:12-25; 1990.
2. DeSimone, B.C.; DeLuca, F.; Maraviglia, B. Maximum entropy method in phase-encoded nmr imaging. Magn. Reson. Med. 4:78-82; 1987.
3. DeSimone, B.C.; DeLuca, F.; Maraviglia, B. Maximum entropy and 2DFT nmr images. Magn. Reson. Med. 8:332-339; 1988.
4. Wernecke, S.J.; D'Addario, L.R. Maximum entropy image reconstruction. IEEE Trans. Comput. C-26(4):351-364; 1977.
5. Frieden, B.R. Restoring with maximum likelihood and maximum entropy. J. Opt. Soc. Am. 62(4):511-518; 1972.
6. Steenstrup, S. Comments on enhancement of mammographic features by optimal adaptive neighborhood image processing. IEEE Trans. Med. Imaging MI-6:82; 1987.
7. Dhawan, A.P.; Gordon, R. Enhancement of mammographic features by optimal adaptive neighborhood image processing. IEEE Trans. Med. Imaging MI-5:8-15; 1986; see also MI-5:120; MI-6:82-83.
8. Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev. 106(4):620-630; 1957.
9. Jaynes, E.T. Information theory and statistical mechanics. II. Phys. Rev. 108(2):171-190; 1957.
10. Daniell, G.J.; Hore, P.J. Maximum entropy and NMR: a new approach. J. Magn. Reson. 84:515-536; 1989.
11. Jaynes, E.T. On the rationale of maximum entropy methods. Proc. IEEE 70(9):939-952; 1982.
12. Mohammad-Djafari, A.; Demoment, G. Maximum entropy image reconstruction in ... . IEEE Trans. Med. Imaging 7(4):345-354; 1988.
13. Papoulis, A. Probability, Random Variables, and Stochastic Processes (Intro. Information Theory), 2nd ed. New York: McGraw-Hill; 1984.
14. Moran, P.R. Expanded visualization in the display of medical images. Proc. of World Congress on Medical Physics and Biomedical Engineering; MP27.2. Phys. Med. Biol. 33(S1):100; 1988.
15. Zimmerman, J.B.; Pizer, S.M.; Staab, E.V.; et al. Evaluation of the effectiveness of adaptive histogram equalization for contrast enhancement. IEEE Trans. Med. Imaging 7:304-312; 1988.
