
Cytometry 13:766-774 (1992)

Automated Image Detection and Segmentation in Blood Smears

Steven S.S. Poon, Rabab K. Ward, and Branko Palcic

Cancer Imaging, B.C. Cancer Agency, Vancouver, British Columbia V5Z 1L3 (S.S.S.P., B.P.), and Department of Electrical Engineering, University of British Columbia, Vancouver, British Columbia V6T 1Z4 (R.K.W.), Canada

Received for publication September 23, 1991; accepted February 23, 1992

A simple technique which automatically detects and then segments nucleated cells in Wright's giemsa-stained blood smears is presented. Our method differs from others in 1) the simplicity of our algorithms; 2) inclusion of touching (as well as nontouching) cells; and 3) use of these algorithms to segment as well as to detect nucleated cells employing conventionally prepared smears. Our method involves: 1) acquisition of spectral images; 2) preprocessing the acquired images; 3) detection of single and touching cells in the scene; 4) segmentation of the cells into nuclear and cytoplasmic regions; and 5) postprocessing of the segmented regions. The first two steps of this algorithm are employed to obtain high-quality images, to remove random noise, and to correct aberration and shading effects. Spectral information of the image is used in step 3 to segment the nucleated cells from the rest of the scene. Using the initial cell masks, nucleated cells which are just touching are detected and separated. Simple features are then extracted and conditions applied such that single nucleated cells are finally selected. In step 4, the intensity variations of the cells are then used to segment the nucleus from the cytoplasm. The success rate in segmenting the nucleated cells is between 81 and 93%. The major errors in segmentation of the nucleus and the cytoplasm in the recognized nucleated cells are 3.5% and 2.2%, respectively. © 1992 Wiley-Liss, Inc.

Key terms: Nucleated cell detection, spectral image analysis, separating touching cells, edge-enhanced averaging filter

Blood smears are routinely used for diagnosis and classification of leukemias. Subjective interpretation of the smears on the basis of cell morphology and cytochemical staining properties is often used by pathologists and cytologists to classify these cells into subgroups of the myeloid and lymphoid groups. However, variations in classification of the slides do exist, not only among different observers but also for the same observer at different sessions. Hence a more objective classification scheme is needed whereby methods based on "quantitative" descriptors are employed. This process can be automated, providing means for analyzing large numbers of cells in individual samples and yielding more consistent and representative data.

The most difficult and also the most critical step in an automated blood cell classifying system is correct segmentation of the image into its components. These are the nucleus and the cytoplasm of the nucleated cells, the cytoplasm of the nonnucleated cells, and the background. It is intuitive for the human observer to segment an image. A machine, however, relies on digital processing techniques to define the set of pixels which make up each region of the image. The machine, on the other hand, has the advantage that it can quantitatively describe features extracted from these regions, a task which the human can only estimate using qualitative means. Any errors introduced in the automated segmentation process will propagate to feature calculations, which in turn may lead to improper classification of the cells in the scene (18). Thus, development of techniques leading to accurate and consistent segmentation is not a trivial task.

Many segmentation techniques have been developed over the last 20 years (6,7). These methods can be categorized into three different classes: 1) characteristic feature thresholding or feature clustering (1,15,22); 2) edge detection (9,16,17); and 3) region extraction (8,11). For blood cell segmentation, algorithms containing a combination of these methods are often used (2,3,10,12,14,19).


These algorithms were developed on systems using selective and sometimes numerous spectral filters and on samples which in some instances were specially prepared for machine vision. In the present work we used cells which were conventionally prepared with Wright's giemsa stain. Our method is very simple and the results are comparable to those obtained by others with specially prepared smears. In addition to incorporating some of the standard segmentation techniques, we have modified the technique used by Wermser et al. (23) to simplify the preliminary discrimination of blood cells in Wright giemsa-stained preparations. We have also introduced a method for separating just-touching cells as well as a method for enhancing images to give a better distinction to the objects of interest using the conditional mean filter. Once the initial regions had been defined, adjustments to the region borders were implemented using geometric, morphometric, and topographic properties of cells.

MATERIALS AND METHODS

Hardware

The detection and segmentation of nucleated cells were performed on a general image cytometry device, the Cell Analyzer Imaging System (13) (Fig. 1), which was developed at the B.C. Cancer Research Center. In this system, the microscope objective is a Nikon Optiphot II, 40×, NA = 0.95. The magnified microscope image is captured by a three-chip red-green-blue (RGB) camera (Sony DXC-3000A). The video image is digitized to a 512 × 480 pixel area at a resolution of 8 bits.


FIG. 1. Block diagram of the cell analyzer imaging system. The major components of the system are the stabilized light source to illuminate the sample, motorized stage to move the object into the microscope's field of view, microscope optics to magnify the image, camera to acquire the images, and digitizing circuitry to transform the video signal into digital format, which can then be stored in image memory to be displayed on the red-green-blue (RGB) monitor and manipulated by the computer.

The pixel separation corresponds to 0.33 μm. The digitized image is stored in the computer frame memory, where it can be displayed on an RGB monitor and processed using dedicated hardware and software for near real-time image manipulation.

Image Acquisition

Image acquisition and pre-processing steps (5) were implemented to obtain high-quality input images which simplify and reduce the amount of processing required in the later stages of the analysis. Multispectral images (red, green, and blue) are used in the analysis. The camera is calibrated to ensure correct colour registration such that the images stored are a true representation of the grey levels of each colour. The light source and frame grabber are then calibrated to utilize the full photometric range of the digitizer (256 grey levels). Before the analysis of a static scene, 36 images from each spectrum are captured and averaged. The averaging reduces the random noise introduced by the light source, detector, and digitizer by a factor equal to the square root of the number of frames averaged (5,20). However, the improvement obtained by averaging images is limited by other noise sources such as quantization and fixed pattern noise.
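The frame-averaging step can be sketched as follows. This is a minimal illustration, not the original implementation (which ran on dedicated frame-grabber hardware); grab_frame is a hypothetical routine standing in for one 8-bit video frame capture, and the square-root noise reduction is the standard result for averaging N frames of independent, zero-mean noise.

```python
import numpy as np

def average_frames(grab_frame, n_frames=36, shape=(480, 512)):
    """Average n_frames successive frames of a static scene.

    grab_frame() is assumed to return one 8-bit frame as a (rows, cols)
    array.  Averaging N frames reduces zero-mean random noise by a factor
    of sqrt(N); fixed-pattern noise is unaffected and must be corrected
    separately.
    """
    acc = np.zeros(shape, dtype=np.float64)
    for _ in range(n_frames):
        acc += grab_frame().astype(np.float64)
    return np.clip(np.round(acc / n_frames), 0, 255).astype(np.uint8)

# One averaged image per colour channel (red, green, blue) would be
# produced this way before any further processing.
```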

Image Calibration

Although random noise can be reduced as discussed above, fixed pattern noise is still present and must be corrected for. The fixed pattern noise is produced by: 1) unequal sensitivity of the elements of the detector; 2) systematic errors in the camera control circuitry; and 3) shading and aberration effects caused by the optics. Decalibration and background subtraction of optical densities are two methods which have been used to correct the fixed pattern noise (4). For our work, we chose to use a different method which is simpler and faster. This method involves subtracting the bright image from the test image and then adding an offset equal to the average value of the bright image, such that a bright background image appears to have an equal grey level value for all pixels. No quantization error is introduced in this method since only subtraction of integers is involved. The disadvantage of this method is that it could introduce some error in calculating optical density and similar features which depend on the division or multiplication of absolute intensities. However, we have estimated that the maximum errors in these feature values would only amount to approximately 10%. If more exact values are needed, such features must be re-calculated from the original uncorrected image.
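A minimal sketch of this correction, assuming test and bright are 8-bit images of the specimen and of a blank bright field acquired under identical conditions:

```python
import numpy as np

def correct_fixed_pattern(test, bright):
    """Subtract the bright-field image and add back its mean grey level.

    After correction a blank field appears flat (one grey level for all
    pixels).  Only integer subtraction and one constant offset are used,
    so no quantization error is introduced; values are clipped back to
    the 8-bit range.
    """
    offset = int(round(bright.mean()))
    corrected = test.astype(np.int16) - bright.astype(np.int16) + offset
    return np.clip(corrected, 0, 255).astype(np.uint8)
```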


FIG. 2. Cluster plots of spectral images of blood cells. A photograph is shown of the intensity variations of each of the spectral images (left): red, green, and blue (from top to bottom), and cluster plots of pixel intensities of each pair of colours (right): red-blue, red-green, and green-blue (from top to bottom). Most of the points clustered within each of the drawn ellipses are representative of their corresponding components: A, the background; B, cytoplasm of red blood cells; C, cytoplasm of white blood cells; and D, nucleus of the white blood cells.

Nucleated Cell Recognition

The first step in the analysis of an image is to determine if there are any nucleated cells of interest in the image. Using the three images R, G, and B, two-dimensional plots can be generated. As can be seen from Figure 2, the regions of interest form clusters. It can also be seen that no single threshold can be used in any of the individual intensity histograms to separate successfully the nucleated cells from the rest of the image. A modification of the method proposed by Wermser et al. (23) for Pappenheim-stained smears was then considered as follows.

The two-dimensional system is simplified into a one-dimensional analysis by transforming the red, R(x,y), and blue, B(x,y), images into a characteristic feature image, X(x,y):

X(x,y) = R(x,y) - B(x,y) + 128

This method has the effect of rotating the red-blue intensity plane by 45°, where a threshold for separating nucleated cells from the rest of the image can be found along the rotated axis. The process is implemented by a simple subtraction of the red and blue images. Unlike the method of Wermser et al., it introduces no quantization errors and is faster to compute since no floating point arithmetic is used. The constant value of 128 is added to map most pixels within 0 and 255. Those values of X(x,y) which are less than 0 or greater than 255 are set equal to their respective limits. Most debris are transformed to the background level since they tend to have similar intensities in both the red and blue images. The speckle noise which still remains at the edges of the red blood cells in the subtracted image is then averaged using a 3 × 3 convolution kernel.

To extract the nucleated cells, thresholding on the histogram of the smoothed subtracted image is performed (Fig. 3). A threshold is found in the valley to the left of the histogram peak whose grey levels correspond to the red cells and background pixels. A binary image of the nucleated cell mask was created by setting all pixels below this threshold to a grey level of 255 and all other pixels to 0. The mask generated was usually slightly larger than the cell and could have jagged boundaries and some holes. To obtain the desired mask, a sequence of rank filters was then performed on the binary mask image. First, a 3 × 3 median filter was applied to this binary cell mask to fill the holes in the white blood cells and to remove the unwanted points in the red blood cells and background. Next, a single dilation followed by two erosions using a 3 × 3 window was applied to the median-filtered mask to fill the larger holes and to smooth the contour of the nucleated cell mask.
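The mask construction described above can be sketched as follows, using NumPy/SciPy stand-ins for the dedicated image-processing hardware. The threshold argument is a placeholder for the value that would be located automatically in the histogram valley.

```python
import numpy as np
from scipy import ndimage

def nucleated_cell_mask(red, blue, threshold):
    """Build a binary mask of nucleated cells from the red and blue images."""
    # Characteristic feature image: X = R - B + 128, clipped to [0, 255].
    x = np.clip(red.astype(np.int16) - blue.astype(np.int16) + 128,
                0, 255).astype(np.uint8)

    # Smooth residual speckle at red-cell edges with a 3x3 mean kernel.
    x_smooth = ndimage.uniform_filter(x.astype(np.float32), size=3)

    # Nucleated-cell pixels are darker than the threshold found in the
    # valley to the left of the red-cell/background histogram peak.
    mask = x_smooth < threshold

    # Clean the mask: 3x3 median, then one dilation and two erosions.
    mask = ndimage.median_filter(mask.astype(np.uint8), size=3).astype(bool)
    mask = ndimage.binary_dilation(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_erosion(mask, structure=np.ones((3, 3)), iterations=2)
    return mask
```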

FIG. 3. Subtracted red-blue image and its histogram. A transformed image obtained by subtracting the blue image from the red. The resulting image clearly shows a distinction in grey level intensity between the nucleated cell and the rest of the image. The smoothed histogram of this image is also displayed.

Boundary Detection of Single and Touching Cells

Some of the nucleated cells extracted from the mask were used for further analysis. Certain arrangements of cells would not be analyzed because of the complexity and the high error rate in segmentation. These include overlapping nucleated cells or cells which were so close together that even a human observer would have difficulty in segmenting them. Hence, only nucleated cells which were isolated or barely touching were used for further analysis.

To detect the presence of just-touching cells, an algorithm was developed to locate the points of sharp concavities in the cell mask boundary. The nucleated cell mask was first searched until the boundary of a cell was encountered. A boundary chain code, i.e., a consecutive list of direction codes which specifies the location of the neighbouring boundary points, was then generated.
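As an illustration of the chain code representation, the direction codes can be derived from successive point differences. Here boundary is assumed to be a closed, ordered list of 8-connected (row, col) tuples produced by a boundary-following routine, which is not shown.

```python
# 8-connected Freeman directions: code 0 points right (col + 1), then
# counter-clockwise in 45-degree steps.
DIRECTIONS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
              (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(boundary):
    """Return the direction code between each pair of consecutive boundary
    points (the contour is treated as closed, so the last point links back
    to the first)."""
    codes = []
    for (r0, c0), (r1, c1) in zip(boundary, boundary[1:] + boundary[:1]):
        codes.append(DIRECTIONS.index((r1 - r0, c1 - c0)))
    return codes
```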


Using this boundary information, the angle of the tangent line to each point on the object boundary was found. This angle was calculated from the slope of the tangent line, which was generated from a least-squares fit of the nine points centred at the point of interest. By taking the difference of the angles between points which are four boundary points apart, a noticeable peak emerges at the location where two cells touch (Fig. 4). A value of 2π was added to or subtracted from the difference angle to ensure that the result was within -π and +π.

To separate two touching cells, a line segment was produced by joining the two positions corresponding to the two peaks. Since the cell boundaries were generally smooth, a smoothing operation was applied to each of the separated objects so as to smooth any corners introduced by the separation of the touching cells.

After the touching cells were separated, the size of each cell was calculated. All objects (cells) which were within a prescribed size range were then used for further analysis. This eliminated the unnecessary analysis of objects which were much smaller or much larger than a typical cell. Although the above procedure describes the separation of two touching cells, the actual method was worked out such that it could also be employed when more than two cells are just touching.
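A sketch of the concavity search under the stated parameters (nine-point least-squares tangent, angle differences taken four points apart, wrapped into the interval from -π to +π). The boundary argument is again assumed to be the ordered (row, col) points of one object; the angle_threshold value is an assumed tuning parameter, not one reported in the paper.

```python
import numpy as np

def tangent_angles(boundary, half_window=4):
    """Tangent angle at each boundary point, from a least-squares line fit
    of the 9 points centred on it (half_window points on each side)."""
    pts = np.asarray(boundary, dtype=float)
    n = len(pts)
    angles = np.empty(n)
    for i in range(n):
        idx = [(i + k) % n for k in range(-half_window, half_window + 1)]
        window = pts[idx]
        centred = window - window.mean(axis=0)
        # Principal direction of the window gives the best-fit tangent line.
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        dr, dc = vt[0]
        # Resolve the sign ambiguity of the fitted direction by aligning it
        # with the direction of travel along the boundary window.
        tr, tc = window[-1] - window[0]
        if dr * tr + dc * tc < 0:
            dr, dc = -dr, -dc
        angles[i] = np.arctan2(dr, dc)
    return angles

def concavity_points(boundary, step=4, angle_threshold=1.0):
    """Indices where the tangent direction turns sharply, i.e. candidate
    points where two touching cells meet."""
    angles = tangent_angles(boundary)
    n = len(angles)
    sharp = []
    for i in range(n):
        d = angles[(i + step) % n] - angles[i]
        # Wrap the difference into (-pi, pi] by adding or subtracting 2*pi.
        while d > np.pi:
            d -= 2 * np.pi
        while d <= -np.pi:
            d += 2 * np.pi
        if abs(d) > angle_threshold:
            sharp.append(i)
    return sharp

# The two strongest concavity points would then be joined by a straight
# line segment to split the binary mask into two separate cell masks.
```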

Nucleus and Cytoplasm Segmentation

The next step in the segmentation process was to determine the different regions of a nucleated cell: the nucleus and the cytoplasm. To isolate the cell from the rest of the scene, the nucleated cell mask (consisting of 0 or 255) was overlaid on the green spectral image and the logical AND function was applied to the images pixel by pixel. Since the histogram of the resulting masked image (Fig. 5) was generally very noisy and had more than two distinct peaks, an additional operation utilizing the edge information was performed to help define each region more readily and to smooth the intensity level variations in each region. This additional (required) filtering operation should smooth the image, but at the same time it should also preserve the edges.

Although the median filter can be used to preserve the edges, we have developed and used the conditional mean filter, which gives a greater distinction to pixels at the edge by assigning them a value indicating that they belong either to the cytoplasm or to the nucleus. In this filter, the sample mean and the sample variance of the grey levels in a local 3 × 3 window at each pixel are calculated. If the sample variance is below an experimentally defined "conditional" limit, the value of the pixel at the centre of the window is replaced by the mean value. If the sample variance is not below the conditional limit, the value of the pixel is examined and adjusted. If the pixel's value is greater than or equal to the sample mean, it is replaced by the sum of the sample mean and the sample standard deviation. Otherwise, it is replaced by the difference of the sample mean and the sample standard deviation. In this work the conditional limit was set to 64, which is approximately half the difference in intensity level between the cytoplasm and nucleus.

The histogram of the resulting masked image (Fig. 5) was used to determine the boundary between the nucleus and the cytoplasm. A threshold could be more reliably found from a smooth histogram and hence the histogram was processed by a 9 × 1 median filter to remove noisy spikes. A threshold was set at the valley to the right of the first peak and the nuclear mask image was then constructed. A median filter followed by dilation and erosion operations was performed on the resulting nuclear mask to fill the holes and smooth the boundary of the nuclear mask. The cytoplasm mask image was calculated as the difference of the nucleated cell mask and the nuclear mask.
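A sketch of the conditional mean filter as described, applied to the masked green image before histogram thresholding; the 3 × 3 window and the variance limit of 64 follow the text, while the straightforward pixel-by-pixel loop is only for clarity.

```python
import numpy as np

def conditional_mean_filter(image, window=3, var_limit=64.0):
    """Edge-preserving smoothing used before nucleus/cytoplasm thresholding.

    In flat regions (local variance below the limit) the pixel is replaced
    by the local mean; near edges it is pushed to mean + std or mean - std,
    sharpening the separation between nucleus and cytoplasm grey levels.
    """
    img = image.astype(np.float64)
    pad = window // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            block = padded[r:r + window, c:c + window]
            mean = block.mean()
            var = block.var()
            if var < var_limit:
                out[r, c] = mean
            elif img[r, c] >= mean:
                out[r, c] = mean + np.sqrt(var)
            else:
                out[r, c] = mean - np.sqrt(var)
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```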

FIG. 4. Separation of touching cells. The outline is shown of a single cell and of two touching cells (top). The plots of the difference of the angles of the tangents along the boundary of the single cell and of the two touching cells (bottom) are also shown. The two peaks in the angle difference plot correspond to the indentations found at the boundary of the two touching cells.

Data Collection

In order to test the blood cell analysis algorithm, approximately 1,000 cells from ten slides of blood smears were used. These slides were obtained from the Children's Hospital of British Columbia and contained typical sample preparation and staining variations which may be expected in blood smear preparations. The smears were prepared manually using the standard Wright giemsa staining procedure at different times, such that any variations in the stain batch were included in the analysis. The classification of the slides was known in advance since they had been prepared and examined by a pathologist. The history and progress of each patient (the slides were at least 1 year old) were already known at the time of these experiments.

To collect the cells, randomly chosen areas on the slide were brought to the microscope field of view using the motorized x,y stage of the device. Each field was manually focused to obtain the greatest contrast in the image as seen on the monitor. Although automated focusing based on maximizing or minimizing a characteristic feature would have given a better and nonsubjective "best" focus, this was not implemented for this analysis. Spectral images were acquired and data were fed to the system program, which found the nucleated cells and performed the segmentation in a fully automated fashion. The detected cell boundaries were then overlaid on the original images and displayed on the monitor for visual inspection. The spectral images (64 × 64 pixels), the nucleus and cytoplasm masks, and the location of the cell on the slide were stored for subsequent analysis and/or display.


FIG. 5. Histogram of intensity levels of nucleated cells. The histogram of the nucleus and cytoplasm of the green intensity distribution of a typical nucleated cell is shown (left). The conditional mean filter is applied to the image and the histogram is generated (right). A threshold from the filtered histogram (not shown) is then determined to segment the nucleus from the cytoplasm.

Data Classification

Visual inspection is a subjective method of classifying errors, but it is simple and easy to implement. A more objective method is to determine the boundaries manually, i.e., drawing the borders of the cell and nucleus, and then to compare the result with the automated method by counting the number of pixels that are incorrectly segmented. Manual scoring, however, is very tedious and time consuming and would not be feasible for analysis of a large number of cells.

In the visual inspection process, various errors were recorded. False-negative errors in detection consisted of not detecting isolated cells or cells which were just touching. Cells which touched the border of the images were not considered, since there was not enough information to classify a portion of a cell. False-positive errors in detection consisted of detecting objects which were not isolated single cells or just-touching cells. Minor errors in separating touching cells were considered as a misplacement of the boundary by up to 3 pixels from the actual position. Major separation errors were those which exceeded the minor error limits.

Besides detection and separation of cells, the other important criterion in analyzing blood cell images is the accuracy in defining the inside regions of the cell. The correctness of segmentation is crucial in this process since the rest of the analysis is based on these defined regions. We divided the segmentation errors into four categories: minor cytoplasmic and nuclear errors and major cytoplasmic and nuclear errors (Fig. 6). Minor segmentation errors were defined as those errors in which the total number of wrongly assigned pixels was less than one-tenth of the length of the respective cell or nucleus boundary. All other segmentation errors were considered to be major errors.
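The minor/major criterion can be expressed directly. In this sketch, wrong_pixels is the count of misassigned pixels for a region and boundary_length is the length in pixels of that region's boundary, both assumed to come from comparison against a manual reference segmentation.

```python
def error_severity(wrong_pixels, boundary_length):
    """Classify a segmentation error for one region (cell or nucleus).

    An error is minor when the number of wrongly assigned pixels is less
    than one-tenth of the length of the corresponding boundary; otherwise
    it is major.
    """
    return "minor" if wrong_pixels < 0.1 * boundary_length else "major"
```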

FIG. 6. Examples of segmentation errors. a: Minor cytoplasmic. b: Minor nuclear. c: Major cytoplasmic. d: Major nuclear segmentation errors.

RESULTS

Slides 1 to 6 came from patients with acute lymphoblastic leukemia (ALL) and contained a large number of lymphoblast cells amongst the nucleated cell population. All slides had lymphoblast cells of the L1 classification type, with the exception of slide 6, which had both L1 and L2 sub-classification types. Slides 7 and 8 were from patients with acute myelogenous leukemia (AML) and they contained a large population of myeloblast cells. Slides 9 and 10 were from patients who had been treated for ALL and hence they contained a mixture of normal nucleated cells as well as some abnormal cells.

In the experiments described in this paper, all isolated or just-touching nucleated cells were correctly detected in the examined randomly chosen frames. This was true in areas where the cells were crowded and overlapping as well as in areas where the cells were well separated. In the case in which nucleated cells or clumps of platelets overlap in such a way that the clump resembles the shape of a nucleated cell, these artifacts will not be eliminated by the touching-cell algorithms. In this experiment, no such false positives were detected. Although clumps of cells were detected, they were all eliminated from the data by the cell separation and size criteria.

Of the 1,078 detected nucleated cells, 781 were individual single cells and 297 were cells that were just touching one or two other cells. Of the touching cells, 271 were correctly separated by the above algorithm into individual cells, 18 had minor (not serious) errors in the position of the boundary, and only 8 experienced major (serious) errors in which the location of the boundary was misplaced, as judged visually. The minor errors in separating touching cells were largely due to the boundary smoothing operation applied to the nucleated cell mask. The smoothing operator blurs the boundary where two cells touch, and hence the touching cell separation algorithm can choose from points which are up to 3 pixels away from the actual boundary position. The major errors were due to irregularly shaped (not elliptic) cells or cells which were near platelets or fragmented pieces of cytoplasm. The algorithm either incorrectly split a single cell into pieces or included fragments which did not belong to the cell in the nucleated cell mask.

The results of the segmentation process for each individual slide are presented in Table 1.

Table 1. Percentage Segmentation Errors in Nucleated Cells

                Correct      Minor errors         Major errors
Slide No.       segment.    Nucleus  Cytopl.    Nucleus  Cytopl.
1                 81.8        6.4      2.7        5.5      1.8
2                 83.2        8.8      5.3        2.7      0
3                 90.7        1.9      1.9        0.9      4.6
4                 91.3        5.8      1.9        1.0      0
5                 93.2        3.9      2.9        0        0
6                 82.1        2.7      2.7       12.5      0
7                 80.7       11.0      1.0        7.3      0
8                 90.6        9.4      0          0        0
9                 82.9        0.9      5.7        2.9      6.4
10                80.6        0        8.3        1.9      9.2
Average           85.7        5.1      3.2        3.5      2.2
SD                 5.1        3.8      2.5        3.9      3.3

The percentage of correct segmentation for the ten slides ranged from 80.6% to 93.2%. The range for the minor errors in the nucleus was 0-11.0% and for the minor errors in the cytoplasm was 0-8.3%. The range for the major errors in the nucleus was 0-12.5%, and for the major errors in the cytoplasm was 0-9.2%. Minor nucleus segmentation errors were mainly noted after the smoothing process of the sharp concavities which were present in certain types of the white blood cells. Major cytoplasmic errors arose mainly when cells were irregularly shaped or when the cytoplasm of some nucleated cells possessed absorption very similar to that of the red blood cells. The major nuclear errors are attributed to uneven stain uptake in the cells. The cytoplasm of some cells incorporates too much stain, resulting in a larger nuclear region. Some nuclei have very darkly stained regions, and thus the algorithm assigns the lighter parts of the nucleus to the cytoplasmic area.

DISCUSSION

An important aspect of automated blood cell analysis is the ability to correctly detect all nucleated cells in a given field and to eliminate all debris. A special problem in this task is the presence of overlapping cells, which are often difficult to classify even for a human observer. It is important to include even the overlapping nucleated cells, since undetected cells may generate results which could falsely indicate the absence of abnormal cells (false-negative results). Debris which is not eliminated from the analysis may produce results that indicate abnormality in the slide (false-positive results). The latter is not as serious an error, as all positive slides (including false positives and true positives) must subsequently be manually examined by pathologists to verify the abnormality and the machine classification.

To date, much of the research has concentrated on the detection and classification of non-touching cells. We have introduced an algorithm which successfully detects both touching and non-touching cells.


This algorithm is simple and very fast, requiring rather modest computing power while at the same time yielding a good success rate for non-touching and touching cells. Most major segmentation errors were due to cells with irregularly shaped nuclei and cells with atypical cytoplasm colour, which are characteristics of some types of normal cells in the bone marrow. The diagnostically important isolated cells for determining leukemia were detected and correctly segmented.

Most debris have approximately equal intensity in all three colour images and hence were easily eliminated by the subtraction of the blue image from the red. However, this process does not remove many regions in the image which belong to parts of red blood cells, platelets, and other debris. These were generally very small and were thus eliminated by the erosion process and the size criterion imposed on each isolated object. As seen above, the simple operation of image subtraction and the simple features, the areas of the cell and nucleus, were sufficient for correct detection and segmentation of nucleated cells. Further classification and sub-classification of the nucleated cells will involve calculating more features such as convexity, aspect ratio, optical density, etc.

The error rates in segmentation vary from slide to slide. This was due to the type of cells on the slide as well as the way the slides were prepared. Our algorithm performed better in segmenting slides containing cells with smooth nuclear and cytoplasmic boundaries. In the slides which contained a mixture of cell types (normal and abnormal cells), more cytoplasm-related errors were present because there was a greater probability of finding cells which had a cytoplasm colour similar to the red blood cells. Most of these segmentation errors can be corrected by examining the neighbourhood of the boundary points and adjusting the pixels of concern. The errors in locating touching cell boundaries can also be further corrected by introducing additional algorithms; these may use the intensities of the image and/or other criteria to search in the neighbourhood of the coarsely chosen cell separation point for the location of the exact dividing point. Some of these errors can also be corrected by analyzing the features of these objects.

The accuracy of the segmentation depends strongly on how well the cells are focused (21). The focusing range is on the order of 5 μm at the selected microscope setup. As the image becomes defocused, the transition of intensity levels at the nucleus and cytoplasm boundary is not abrupt enough, and this may result in errors in defining the nucleus region. The cytoplasm of the nucleated cells can also blend into the rest of the background, causing cytoplasm segmentation errors. Introducing an autofocusing algorithm will greatly improve the accuracy of the exact focus positioning and therefore cell segmentation. This will in turn improve the performance of the nucleated cell detection algorithm and the elimination of debris.


Furthermore, cell classification will be improved, as most nuclear features (18), particularly texture features which describe cluster distribution in the cell nucleus, should improve in their discriminating powers.

ACKNOWLEDGMENTS

This work has been funded by the National Cancer Institute of Canada and the Medical Research Council of Canada. We are grateful for the help of the medical staff, and especially Dr. Bonnie Massing and Dr. Kamir Tezcan of the Children's Hospital in Vancouver, for providing the samples and guidance in assigning the cells to appropriate categories.

LITERATURE CITED

1. Amadasun M, King RA: Low-level segmentation of multispectral images via agglomerative clustering of uniform neighborhoods. Pattern Recognition 21:261-268, 1988.
2. Aus HM, Ruter A, Ter Meulen V, Gunzer U, Nurnberger R: Bone marrow cell scene segmentation by computer-aided color cytophotometry. J Histochem Cytochem 25:662-667, 1977.
3. Brenner JF, Necheles TF, Bonacossa IA, Fristensky R, Weintraub BA, Neurath PW: Scene segmentation techniques for the analysis of routine bone marrow smears from acute lymphoblastic leukemia patients. J Histochem Cytochem 25:601-613, 1977.
4. Castleman KR: Digital Image Processing. Prentice Hall, New York, 1979.
5. Castleman KR: Spatial and photometric resolution and calibration requirements for cell image analysis instruments. Appl Optics 26:3338-3342, 1987.
6. Davis LS: A survey of edge detection techniques. Comput Graphics Image Processing 4:248-270, 1975.
7. Fu KS, Mui JK: A survey of image segmentation. Pattern Recognition 13:3-16, 1981.
8. Garbay C, Chassery JM, Brugal G: An iterative region-growing process for cell image segmentation based on local color similarity and global shape criteria. Anal Quant Cytol Histol 8:25-34, 1986.
9. Haddon JF: Generalized threshold selection for edge detection. Pattern Recognition 21:195-203, 1988.
10. Harms H, Aus HM, Haucke M, Gunzer U: Segmentation of stained blood cell images measured at high scanning density with high magnification and high numerical aperture optics. Cytometry 7:522-531, 1986.
11. Haussmann G, Liedtke CE: A region extraction approach to blood smear segmentation. Comput Vision Graphics Image Processing 25:133-150, 1984.
12. Ingram M, Preston K Jr: Automatic analysis of blood cells. Sci Am 223:72-82, 1970.
13. Jaggi B, Poon SSS, MacAulay C, Palcic B: Imaging system for morphometric assessment of conventionally and fluorescently stained cells. Cytometry 9:566-572, 1988.
14. Landeweerd GH, Gelsema ES, Brenner JF, Selles WD, Zahniser DJ: Pattern recognition of nucleated cells from the peripheral blood. Pattern Recognition 16:131-140, 1983.
15. MacAulay C, Palcic B: A comparison of some quick and simple threshold selection methods for stained cells. Anal Quant Cytol Histol 10:134-138, 1988.
16. Marr D: Vision. WH Freeman and Company, New York, 1982.
17. Ohlander R, Price K, Reddy R: Picture segmentation using a recursive region splitting method. Comput Graphics Image Processing 8:313-333, 1978.
18. Palcic B, Jaggi B, MacAulay C: The importance of image quality for computing texture features in biomedical specimens. Proc OE Laser 90, L.A., 1990.
19. Poon SSS: Algorithms for detecting and segmenting nucleated blood cells. M.A.Sc. Thesis, University of British Columbia, Vancouver, B.C., Canada, 1989.


20. Pratt WK: Digital Image Processing. John Wiley & Sons, New York, 1978.
21. Spadinger I, Poon SSS, Palcic B: Effect of focus on cell detection and recognition by the Cell Analyzer. Cytometry 11:460-467, 1990.

22. Umesh RM: A technique for cluster formation. Pattern Recognition 21:393-400, 1988.
23. Wermser D, Haussmann G, Liedtke CE: Segmentation of blood smears by hierarchical thresholding. Comput Vision Graphics Image Processing 25:151-168, 1984.
