This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TMI.2015.2417112, IEEE Transactions on Medical Imaging 1

Digital Topology and Geometry in Medical Imaging: A Survey

Punam K. Saha∗, Senior Member, IEEE, Robin Strand†, Member, IEEE, and Gunilla Borgefors†, Fellow, IEEE

Abstract—Digital topology and geometry refer to the use of topologic and geometric properties and features for images defined in digital grids. Such methods have been widely used in many medical imaging applications, including image segmentation, visualization, manipulation, interpolation, registration, surface-tracking, object representation, correction, quantitative morphometry, etc. Digital topology and geometry play important roles in medical imaging research by enriching the scope of target outcomes and by adding strong theoretical foundations with enhanced stability, fidelity, and efficiency. This paper presents a comprehensive yet compact survey on results, principles, and insights of methods related to digital topology and geometry with strong emphasis on understanding their roles in various medical imaging applications. Specifically, this paper reviews methods related to distance analysis and path propagation, connectivity, surface-tracking, image segmentation, boundary and centerline detection, topology preservation and local topological properties, skeletonization, and object representation, correction, and quantitative morphometry. A common thread among the topics reviewed in this paper is that their theory and algorithms use the principle of digital path connectivity, path propagation, and neighborhood analysis.

Index Terms—Digital topology and geometry, distance transform, connectivity and tracking, watersheds, connected operators, minimal path, simple point, local topology, skeletonization, object characterization.

I. INTRODUCTION

DIGITAL TOPOLOGY AND GEOMETRY refer to the use of topologic and geometric properties and features, e.g., straightness, convexity, compactness, distance, adjacency, boundary, connectivity, components, cavities, tunnels, paths, watersheds, centerlines, homotopy, skeleton, component-tree, etc., for images defined in digital grids. The primary end-goal of most medical imaging applications is to assess the function, physiology, or abnormality of internal human organs or tissues through in vivo or ex vivo imaging [1]. Often, such images are processed through complex cascades of computational and analytical algorithms. Digital topology and geometry play important roles in the design and development of such algorithms by enriching the scope of their target outcomes as well as by adding strong theoretical foundations enhancing their stability, fidelity, and efficiency. For example, distance transforms [2]–[5] have been widely used to define

∗P. K. Saha is with the Departments of Electrical and Computer Engineering and Radiology, University of Iowa, Iowa City, IA 52246 USA (e-mail: [email protected]).
†R. Strand and G. Borgefors are with the Centre for Image Analysis, Uppsala University, Uppsala, Sweden.
Copyright (c) 2010 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to [email protected]

structure thickness, spacing, and other geodesic features, as well as for mesh generation [6] and image interpolation [7]. Component labeling and surface-tracking have been applied to medical image visualization, manipulation, and analysis [1], [8]–[10]. Fuzzy connectivity, watersheds, connected operators, and minimal paths have been widely used in medical image segmentation [11]–[17]. Skeletonization has been used in several medical imaging applications, including representation of anatomic structures [18], path planning [19]–[21], feature extraction [22]–[28], stenosis detection [29]–[32], etc. Local topological properties have been used for characterization of local structures in mammography [23], brain imaging [26], trabecular bone imaging [24], [27], [28], etc. Also, topological features have been used for correction of anatomic structures [26], [33] segmented from noisy images. The notions of digital topology and geometry are intertwined in medical imaging applications and, often, it is difficult to draw a dividing line between them. This paper presents a comprehensive yet compact survey on results, principles, and insights of methods related to digital topology and geometry with strong emphasis on understanding their roles and potential in various applications, especially those related to medical imaging. There are two distinct categories of digital topological and geometrical methods that have been widely applied in medical imaging. Under the first category, the theories and algorithms are directly conceived and developed for a digital space. In the other category, the methods are essentially digital approximations of continuous mathematics and algorithms. A large number of popular and landmark works under the second category solve specific topological and geometrical problems on digital images.
For example, topological consistency is a major issue in deformable image registration, and several methods from continuous mathematics have been adopted to address this challenge [34]–[36]. Continuous approaches have been popularly adopted for skeletonization of digital objects and a vast literature is available on this topic; see [37] for an in-depth review. Pizer et al. [38] introduced deformable m-reps, a representation of object boundaries using sheets of medial atoms inside an object, to segment anatomical structures in 3-D medical images. Yoshida and Näppi [39] proposed a set of 3-D geometrical features to automatically detect and characterize polyps in CT colonoscopy images using continuous models of curvature analysis and volumetric shape index. Ségonne et al. [40] presented an algorithm to retrospectively correct the spherical topology of the cortical surface by mapping it onto a sphere and detecting topological defects as minimal non-homeomorphic regions. However, the focus of this paper is on methods that are directly developed for a digital space, and the reference

0278-0062 (c) 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


to methods under the second category will be limited and used only to facilitate the insights of a method under the first category. In 1989, Kong and Rosenfeld [41] wrote a thorough survey on digital topology, which has remained an inspiring document on this topic to this date. A few books have been published on specific focus areas of digital topology and geometry [37], [42], [43]. Several surveys were dedicated to specific sub-areas, e.g., early works on digital geometry [44], algebraic methods in digital topology [45], connected operators [46], Voronoi neighborhoods and spatial tessellations [47], voxelization [48], fuzzy connectivity [49], geodesic methods and optimal paths [50], skeletonization [37], [51], digital straight lines [52], digital compactness and circularity [53], etc. Klette and Rosenfeld [43] wrote a comprehensive book on theory, methods, and algorithms of digital topology and geometry. Several applications of digital manifolds in medical imaging have been presented in [54]. However, these surveys and books were primarily written from theoretical and algorithmic perspectives, and the focus on applications, especially those related to medical imaging, has been somewhat marginalized. In this paper, a survey on digital topology and geometry is presented that fills two major gaps – (1) inclusion of recent works and results on digital topology and geometry since the last major survey written in the 1990s and (2) presentation of insights and intuitive ideas behind leading theories, results, and algorithms from different topics of digital topology and geometry and an understanding of their pathways to various medical imaging applications.
This paper should benefit the understanding of the potential roles of digital topology and geometry in medical imaging applications, encourage applications of existing results and algorithms to solve various research problems, and instigate new ideas and cutting-edge results contributing to the field itself. Also, this survey will be helpful to application-oriented researchers in relating their specific medical imaging applications to appropriate areas of digital topologic and geometric results and algorithms. Specific aims of this survey are – (1) review of literature on different topics of digital topology and geometry relevant to medical imaging; (2) discussion of results, principles, and insights of leading methods from relevant topics with hints of remaining challenges; and (3) discussion of the roles of different methods in various medical imaging applications. Most medical imaging technologies produce scanned data in cubic or elongated cubic grids. Therefore, this survey will primarily focus on results and algorithms in three-dimensional (3-D) cubic grids. Some medical images, e.g., CT images, are originally captured using “match-stick” voxels, where one dimension has lower resolution than the other two. In these cases we recommend resampling the image at isotropic voxel size using a suitable interpolation algorithm [55]–[57] before further processing. Although most of our survey is confined to 3-D, sometimes 2-D results are cited to describe the progression of the respective topic. This paper begins with a brief review of early landmark works on digital topology and geometry followed by a set of common definitions and notations. The next section reviews

the literature and principles of different methods related to the distance transform and path propagation. In the same section, different methods, namely, centers of maximal balls, Voronoi neighborhoods, and image interpolation, all using the principle of distance analysis, are reviewed and their applications are discussed. Topological approaches to component labelling, surface-tracking, and image segmentation are reviewed in Sections IV and V. Section IV reviews the results and principles of different image segmentation methods related to fuzzy connectivity, watersheds, and connected operators, and the minimal path approaches for object boundary and centerline tracking are reviewed in Section V. Although the methods in both Sections IV and V use digital paths and wave propagation, the methods in Section IV are based on the principle of path connectivity, while the minimal path approaches in Section V utilize the metric property of paths. Topology preservation, simple points, local topologic properties, and their applications are discussed in Section VI, and skeletonization and its applications are reviewed in Section VII. Finally, the roles of digital topology and geometry in object characterization, including object representation, quantitative object morphometry, object correction, and their applications to medical imaging, are reviewed in Section VIII. There are several other topics related to topology and geometry in the continuous space or mathematical morphology [58], [59] which have been applied to medical imaging. However, these topics are excluded from the current survey for the sake of compactness and due to space limitations. The common thread among the topics discussed in this paper is that their theory and algorithms use the principle of digital path connectivity, path propagation, and neighborhood analysis.
In situations where the underlying theory of an algorithm or result falling within the scope of this survey originally evolved in the continuous domain, a brief review of the original works is presented for the sake of fairness.

II. EARLY WORKS AND DEFINITIONS

Early works on digital geometry were focused on formulating, defining, and developing efficient algorithms for various basic geometric notions for digital images such as distance functions [4], [60], convexity [61]–[63], straight lines [64]–[66], perimeter, area [67]–[69], compactness, circularity [70], etc. Many ideas from these early works have been researched and adopted in modern medical imaging applications. For example, the principle of 3-D digital straight lines has become the basis for ray-casting-based rendering techniques in voxel grids [71]–[73]; also, voxel-based approaches have been adopted to compute surface area for brain MRI [74], trabecular bone thickness at in vivo resolution [25], [75], etc. Rosenfeld formulated the topological notions of connectivity, arcs and curves, thinning, border-following, and adjacency-tree for binary images [76]. Earlier results on digital topology include the works by Duda and Munson [77] on perimeter, convex hull, concavities, enclosures, and spurs; Rosenfeld [78] on connectivity; Mylopoulos and Pavlidis [79] on dimension; as well as Gray [80] on topological genus. Yokoi et al. [81] reported an alternative approach to defining some basic concepts



of digital topology, including connectivity number, coefficient of curvature, and border-following. Kong and his colleagues introduced the “digital fundamental group” [82], which allows precise interpretation of important results on digital topology, narrowing its gap with classical topology; this notion has been further studied by others [83]–[86]. Khalimsky [87] proposed a topological approach for digital spaces where a binary image is represented by a locally finite T0 topological space. Kovalevsky [88] discussed the difficulties of adopting classical topological notions for digital images and suggested the use of a cellular complex for the purpose. Recently, a research trend has been observed on using simplicial or cubical complex frameworks to define discrete topological transformations [89]–[93]. Several authors have adopted alternative graph-based approaches for digital topology [94]–[99]. In medical imaging, 3-D cubic grids are most popularly used for image representation. A 3-D cubic grid may be represented by the set Z^3, where Z is the set of integers; an element of the grid Z^3 is referred to as a voxel. A voxel may also be referred to as a point, because some of the notions from digital topology and geometry, e.g., “simple point”, are well-established in the literature. To avoid changing such well-established terminologies, we will switch between the words “voxel” and “point” but will stick to “voxel” as much as possible. An element of a 2-D square grid Z^2 is referred to as a pixel or a point. Two voxels are 26-adjacent if they share a vertex; they are 18-adjacent if they share an edge; and they are 6-adjacent if they share a 2-D face. An α-adjacency, where α ∈ {6, 18, 26}, is defined in such a way that a voxel is α-adjacent to exactly α other voxels. The set of all α-adjacent voxels of a given voxel is referred to as its α-neighborhood.
The 26-neighborhood of a voxel p, including p itself, is a 3 × 3 × 3 set denoted N26(p); N*26(p) denotes the set of all voxels of N26(p) excluding the central voxel p. The other neighborhoods, namely, N6(p), N*6(p), N18(p), and N*18(p), are defined similarly. An α-path π is a non-empty sequence ⟨p0, p1, ..., pl−1⟩ of voxels where every two successive voxels pi−1, pi are α-adjacent. Now, consider a set of voxels S ⊂ Z^3. Two voxels x, y ∈ S are said to be α-connected in S if there exists an α-path from x to y that is contained in S. An α-component of S is a maximal α-connected subset of S. An empty set is always α-connected. Now, let us consider two sets S1 ⊂ S2 ⊂ Z^3. S1 is referred to as α-connected in S2 if, for every two voxels p, q ∈ S1, there exists an α-path in S2 joining p, q. An α-component of S1 in S2 is a maximal subset of S1 where every two voxels are α-connected in S2. Rosenfeld recommended the use of an adjacency pair (κ1, κ0), where κ1 is used for object voxels and κ0 is used for background voxels [100]. Specifically, in a 3-D cubic grid, 26-adjacency is used for object voxels and 6-adjacency is used for background voxels, or vice versa. He convincingly demonstrated that the use of such an adjacency pair leads to a workable framework of digital topology, which holds many important classical properties, including the Jordan curve property. Although the use of two different adjacency relations appears to be somewhat paradoxical, it is this form of digital topology that has become most popular.
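As a concrete illustration of these definitions, the α-adjacencies follow directly from the shared-face/edge/vertex rule. The following is a minimal sketch (the function name is our own, not from the surveyed literature):

```python
# Hypothetical sketch: enumerate the alpha-adjacent neighbors of a voxel
# in Z^3. A displacement sharing a 2-D face changes one coordinate
# (6-adjacency), sharing an edge changes up to two (18-adjacency), and
# sharing a vertex changes up to three (26-adjacency).
from itertools import product

def alpha_neighbors(p, alpha=26):
    """Return the alpha-adjacent voxels of p, for alpha in {6, 18, 26}."""
    assert alpha in (6, 18, 26)
    x, y, z = p
    neighbors = []
    for dx, dy, dz in product((-1, 0, 1), repeat=3):
        if (dx, dy, dz) == (0, 0, 0):
            continue
        changed = abs(dx) + abs(dy) + abs(dz)  # 1: face, 2: edge, 3: vertex
        if (alpha == 6 and changed == 1) or \
           (alpha == 18 and changed <= 2) or \
           alpha == 26:
            neighbors.append((x + dx, y + dy, z + dz))
    return neighbors

# Each voxel is alpha-adjacent to exactly alpha other voxels:
assert len(alpha_neighbors((0, 0, 0), 6)) == 6
assert len(alpha_neighbors((0, 0, 0), 18)) == 18
assert len(alpha_neighbors((0, 0, 0), 26)) == 26
```

The same enumeration, restricted to two coordinates, yields the 4- and 8-neighborhoods of a pixel in Z^2.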

Fig. 1. The city block distance transform of a fish.

A digital image space is represented by a triplet ⟨Z^3, α, β⟩, where Z^3 is the image grid and α, β are the adjacency relations for object and background voxels, respectively; valid adjacency pairs in a 3-D cubic grid are (6, 26), (26, 6), (6, 18), and (18, 6). A binary digital image or a binary object in a digital space ⟨Z^3, α, β⟩ is represented as a quadruple D = (Z^3, α, β, B), where B ⊂ Z^3 is the set of object voxels. In the same spirit, a fuzzy digital image or a fuzzy object is defined as O = (Z^3, α, β, µO), where µO : Z^3 → [0, 1] is the membership function of the fuzzy object. The support O of a fuzzy object O is the set of all voxels with nonzero membership values, i.e., O = {p | p ∈ Z^3 ∧ µO(p) > 0}. In a fuzzy digital image, the adjacency relation α is used for voxels inside the support of the object, while β is used for the voxels inside the background Ō = Z^3 − O. In 3-D digital topology, we often use three concepts: object components, tunnels, and cavities. Given a binary image (Z^3, α, β, B), an object component is an α-component of B and a background component is a β-component of Z^3 − B. A cavity is a background component inside an object with no β-path to the background surrounding the object. Although tunnels are easily visualized and intuitively described, they are more difficult to define formally. However, the number of tunnels in an object can be defined precisely. Intuitively, a tunnel is the opening in the handle of a coffee mug, formed by bending a cylinder to connect the two ends to each other or to another connected object. There is a connection between tunnels and handles: when the coffee mug’s handle is broken, the tunnel is lost. In most objects, the number of tunnels can be counted by recursively reducing the number of handles. If an object has a cavity that forms a handle (e.g., the interior of a hollow torus), that cavity must be filled first, and then the exterior handles are counted.
Filling such a cavity is counted as removal of one tunnel. For example, a ring has one tunnel because it forms a single solid handle. A hollow torus has two tunnels: the first arises from the cavity inside the ring and the second from the ring itself. More precisely, the number of tunnels in an object is the rank of its first homology group [41]. The numbers of object components, tunnels, and cavities represent the 0th, 1st, and 2nd Betti numbers, respectively.

III. DISTANCE TRANSFORMS AND PATH-PROPAGATION

A digital distance transform (DT) structures the interiors of objects and/or the background of a binary image. Each object



voxel gets a value equal to the distance between itself and the nearest point in the background, or vice versa; see Fig. 1 for a 2-D example. Another way of interpreting the DT value is that it tells when the voxel would be reached in an iterative erosion process, i.e., Blum’s grassfire [101] analogy. Distance transforms have been used as popular utility tools in many applications. At the end of this section, we review three applications of DT – (1) centers of maximal balls, (2) Voronoi neighborhoods, and (3) DT-based interpolation. More applications of DT in skeletonization are reviewed in Section VII-A, while its roles in multi-scale opening and quantitative assessment of object morphology in medical imaging are discussed in Sections IV-A and VIII-B, respectively. Minimal paths and geodesics are based on metric properties of paths; the literature on this topic and its applications to boundary and centerline tracking are discussed in Section V. Often, the length of a digital path is computed using steps from a limited neighborhood. The simplest DT in 2-D is called the City block DT, which is computed using the 4-neighbourhood (edge-neighbors) and counting the steps in the path; see Fig. 1 again. The DT concept was introduced already in the mid 1960s by Rosenfeld and Pfaltz [2], [60]. They considered the City block (or Manhattan) DT; the Chessboard DT, where steps in the 8-neighbourhood (edge- and point-neighbors) are allowed; and the Octagonal DT, where the previous two DTs are mixed. These are all usually called “path-generated” DTs. The Chessboard DT has several bad properties and should not be used. Early landmark works on DTs were presented in 2-D. The City block and Chessboard DTs are very rotation dependent, so the DTs’ values change considerably depending on the orientation of the object. Therefore, weighted DTs were introduced, where edge-neighbor and point-neighbor steps get different lengths. Montanari [102] suggested weights 1 and √2, while Hilditch and Rutovitz [103] suggested 2 and 3, as integers are easier to handle. However, it was not until the 1980s that Borgefors computed optimal weights [4], which turned out to be 0.955 and 1.369, with 3 and 4 as the best integer approximations, giving a maximum distance difference between directions of about 8% (lower than for Montanari’s more intuitive suggestion, where the maximum difference is 9%). If the neighborhood is extended to 5 × 5, the difference can be lowered to 2% by using weights 5, 7, and 11 for edge-, point-, and knight-neighbors, respectively [4]. Regularity issues for weighted DTs are discussed in [104]. Weighted and path-generated DTs can be easily computed by “chamfering,” using two raster scans through the image [4], [105]. In fact, chamfering is so efficient that it may at first seem unnecessary to create other computing schemes for sequential computers. Indeed, it was not until constrained and geodesic DTs began to be used that this was done, using wavefront propagation [106]. A constrained DT is computed in an image with three types of pixels – object, background, and obstacle pixels – where obstacle pixels cannot be passed. Distance information has to travel around these obstacles. Constrained DTs can be computed by chamfering, but the number of necessary scans is unknown, as it is necessary to keep scanning until no pixel values change.
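The two-raster-scan chamfering just described can be sketched concisely. Below is a minimal 2-D version with the 3-4 weights; the function name is our own, and the grid is a plain list of lists with 1 for object and 0 for background:

```python
# Minimal 2-D chamfer distance transform sketch with 3-4 weights
# ("chamfering"): a forward raster scan followed by a backward scan.
# Output: 3-4 weighted distance from each object pixel to the nearest
# background pixel (divide by 3 to approximate Euclidean distance).
INF = 10**9

def chamfer_34(grid):
    h, w = len(grid), len(grid[0])
    d = [[0 if grid[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    # Forward scan: top-left to bottom-right, upper-left half of the mask.
    for y in range(h):
        for x in range(w):
            for dy, dx, cost in ((-1, -1, 4), (-1, 0, 3), (-1, 1, 4), (0, -1, 3)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y][x] = min(d[y][x], d[ny][nx] + cost)
    # Backward scan: bottom-right to top-left, mirrored mask.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            for dy, dx, cost in ((1, 1, 4), (1, 0, 3), (1, -1, 4), (0, 1, 3)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y][x] = min(d[y][x], d[ny][nx] + cost)
    return d
```

Replacing the costs (3, 4) with (1, 1) yields the Chessboard DT, and dropping the diagonal entries with cost 1 for edge steps yields the City block DT of Fig. 1.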

Also in the 1980s, people began investigating and using the 2-D Euclidean distance transform, where each pixel contains a vector pointing to the nearest pixel in the background [107], [108]. Danielsson suggested a simple but not completely correct algorithm for computing the 2-D Euclidean DT [107]. Borgefors was the first to extend it to 3-D and higher dimensions [3]. It is complicated to compute the absolutely correct Euclidean DT, as the path to the background is straight and does not necessarily pass through any object point in Z^2, even if the pixel is deep inside the object. Therefore, the values of neighbors cannot be used directly, as they can in weighted DTs. The errors are small, but the resulting DT is not a metric, which can be a problem. Euclidean DT computation through neighbors was also studied by others [109], [110]. A correct 3-D Euclidean DT is found in [111]. An extension is the signed Euclidean DT, where the distances are computed in both directions from a border [112]. Reverse DTs (rDT) are less commonly used, even though they were defined at the very beginning [60], but they are useful for, e.g., recovering a shape from its skeleton. In an rDT, we start from a set of seed points with radius values. The result is the union of the discs with these seed points as centers. Computation is easy for weighted DTs [113], [114] but complex for the Euclidean DT [111]. Distance transforms and their reverses are easy to extend to higher dimensions [3], [115]. The equivalents of the path-generated DTs are the D6, D26, and octahedral DTs, where the first allows steps to the six face-neighbors, the second to the full 3 × 3 × 3 neighborhood, and the third mixes the two [116]. The D18 would use face- and edge-neighbors in the path, but is not recommended, as it has several undesirable properties.
For weighted DTs the optimal weights become rather complex to compute, but 3, 4, and 5 are the best integer approximation for the 3 × 3 × 3 neighborhood with low integers [115], [117]. Weighted DTs have also been extended to larger neighborhoods in 3-D [117] and to 4-D [118]. Path-generated and weighted DTs can be efficiently computed by two-scan chamfering in any dimension and for any point-lattice [119]. The DTs have also been extended in several other directions. When propagating distance information, it is possible to also propagate other image information. An example is Voronoi tessellation [47], [120], [121] from a set of seeds (not necessarily isolated points), where the identity of the seed is propagated [3] together with the distance. Gray-level information can also be propagated, in which case we talk about geodesic DTs [122]–[124]. Distance and gray-level information can be combined in various ways, depending on the purpose. Geodesic distance [125], [126] has been studied in 2-D to represent a constrained distance between two points where the path lies inside a predefined region. Piper and Granum [106] studied the properties of such paths in convex and concave domains in 2-D. Verwer et al. [127] presented an efficient 2-D algorithm for computing the constrained DT of a binary reference image where the paths are not allowed to enter the background of another, constraint image.
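The wavefront propagation behind constrained (and geodesic) DTs can be illustrated with a Dijkstra-style expansion that simply refuses to enter obstacle pixels, so distance information travels around them. A minimal 2-D sketch with 3-4 weighted steps; all names are our own:

```python
# Hypothetical sketch of a constrained DT via wavefront (priority-queue)
# propagation: distances grow outward from the seed pixels but may never
# pass through obstacle pixels.
import heapq

def constrained_dt(shape, seeds, obstacles):
    """3-4 weighted distance from `seeds`, never entering `obstacles`."""
    h, w = shape
    INF = float("inf")
    dist = {(y, x): INF for y in range(h) for x in range(w)}
    heap = [(0, s) for s in seeds]
    for s in seeds:
        dist[s] = 0
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[(y, x)]:
            continue  # stale queue entry
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if (dy, dx) == (0, 0):
                    continue
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in obstacles:
                    nd = d + (3 if dy == 0 or dx == 0 else 4)
                    if nd < dist[(ny, nx)]:
                        dist[(ny, nx)] = nd
                        heapq.heappush(heap, (nd, (ny, nx)))
    return dist
```

A geodesic DT follows the same pattern with the step cost modulated by gray-level information rather than a fixed 3/4.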



Generally, the term salience DTs, introduced by Rosin and West [128], is used when propagating additional information. They give many examples of the kinds of information that can be spread, e.g., edge strength and curvature. If chamfering is used, spreading such information requires many raster scans, just as for the constrained DT, but with wavefront propagation there is no extra effort. DTs have also been defined for fuzzy images [5], [129]. Borgefors and Svensson [129] presented an algorithm for computing distance transforms in images with fuzzy object borders. Specifically, they proposed different initialization values when computing the DT, which take the fuzziness of the border into account. Saha et al. [5] introduced the DT for a general fuzzy object and established its metric property. It was further generalized to arbitrary digital grids [130]. Also, they presented an efficient computational solution for the fuzzy distance transform (FDT). In the general framework of FDT, the length of a path is defined as a path-integral of membership values of the underlying fuzzy object. The fuzzy distance between two points is defined as the infimum length of all possible paths joining the two points. For a digital fuzzy object, a minimum-length path is guaranteed between every two points; however, this is not true for a fuzzy subset of R^n.

A. Center of Maximal Balls

Probably the most important type of element to be identified in a DT is the “maxima,” or centers of maximal balls (CMBs). A ball is a set of voxels whose distance from the central voxel is smaller than or equal to a radius value. A maximal ball is a ball in the object not completely covered by any other ball contained in the object. A CMB is the center voxel of a maximal ball, labelled with its radius. Note also that the union of the maximal balls is equivalent to the original shape. For the path-generated DTs, the CMBs are simply the pixels that have a value higher than or equal to that of their neighbors, in the appropriate neighborhood.
For weighted DTs, the CMBs are found using simple rules in the 3 × 3 × 3 neighborhood [113], [131], [132]. For the 5-7-11 2-D DT, small-valued CMBs are found by look-up tables, while large ones can be found by simple rules [133]. For the Euclidean DT (in any dimension) the situation is more complex, and look-up tables up to the maximum thickness of the objects are probably the easiest solution [111], [134]. Saha et al. [130], [135] and Svensson [136] generalized CMBs for fuzzy digital objects. A voxel p ∈ Z^3 is a fuzzy center of maximal ball (FCMB) in a fuzzy digital object O if the following inequality holds for every neighbor q of p:

FDT(q) − FDT(p) < (1/2) (µO(p) + µO(q)) |p − q|,   (1)

where FDT is the fuzzy distance transform [5]. A voxel can be a CMB even if it has a neighbor with a higher DT or FDT value. Actually, CMBs form ridges on an FDT or DT map as characterized by the above equation: a voxel p is a ridge voxel if no other voxel in its neighborhood receives its FDT or DT value (or, the fire, using Blum’s grassfire analogy) through p. The definition of FCMB is equivalent to that of CMB [132] for binary digital objects. See Section VII for applications of CMBs in skeletonization.
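For a path-generated DT, the CMB rule stated above (a value higher than or equal to that of all neighbors in the appropriate neighborhood) reduces to a purely local test. A minimal 2-D sketch for the City block DT, with our own naming:

```python
# Hypothetical sketch: detect centers of maximal balls (CMBs) on a 2-D
# City block DT. A pixel is a CMB if its DT value is >= the values of
# all its 4-neighbors (edge-neighbors).
def city_block_cmbs(dt):
    h, w = len(dt), len(dt[0])
    cmbs = []
    for y in range(h):
        for x in range(w):
            if dt[y][x] == 0:
                continue  # background pixel
            nbrs = [dt[ny][nx]
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= ny < h and 0 <= nx < w]
            if all(dt[y][x] >= v for v in nbrs):
                cmbs.append((y, x, dt[y][x]))
    return cmbs
```

Applying a reverse DT to the returned (center, radius) list recovers the original shape as the union of the maximal balls, as noted above.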

Fig. 2. Optimum mesh generation on a rabbit femur surface using geodesic Voronoi neighborhood. The geodesic Voronoi neighborhood of each seed voxel is shown with a unique color. The optimum mesh is formed by Delaunay triangulation of geodesic Voronoi neighborhoods.

B. Voronoi Neighborhood

The Voronoi neighborhood defines the influence zone of a given voxel, which is useful in many applications, including feature propagation [28], [137], mesh generation [6], and skeletonization [138]. Given a set of finitely many seed voxels p1, p2, ..., pn, their Voronoi neighborhoods create a partitioning V1, V2, ..., Vn of the digital image grid Z^3 such that, ∀q ∈ Vi, |q − pi| ≤ |q − pj| for all j = 1, 2, ..., n; Vi is referred to as the Voronoi neighborhood of pi. Early works on the computation of Voronoi neighborhoods were presented in [120], [121]. Borgefors [4] presented a simple, yet efficient, DT-based algorithm for Voronoi neighborhood computation. It computes the distance transform from seed voxels together with the label of the nearest seed voxel. Saha et al. [6] adopted this approach to compute geodesic Voronoi neighborhoods and used them for optimum mesh generation on a rabbit femur surface for a given set of seed points on the surface (Fig. 2). Peyré et al. [50] reviewed various methods on geodesic Voronoi diagrams and Delaunay tessellations on triangulated surfaces. See [47] for a thorough review on Voronoi neighborhoods and spatial tessellation.
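The DT-based idea of propagating the nearest seed's identity together with the distance can be sketched compactly. A minimal version with unit steps for brevity (the names are our own; a weighted or Euclidean propagation follows the same pattern):

```python
# Hypothetical sketch: Voronoi labeling by propagating seed identities
# in a breadth-first wavefront. Each pixel inherits the label of the
# seed whose wavefront reaches it first, i.e., the nearest seed.
from collections import deque

def voronoi_labels(shape, seeds):
    """seeds: dict mapping label -> (y, x). Returns a label per pixel."""
    h, w = shape
    label = [[None] * w for _ in range(h)]
    q = deque()
    for lab, (y, x) in seeds.items():
        label[y][x] = lab
        q.append((y, x))
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and label[ny][nx] is None:
                label[ny][nx] = label[y][x]  # inherit nearest seed's identity
                q.append((ny, nx))
    return label
```

Restricting the propagation to a surface or mask yields the geodesic Voronoi neighborhoods used for the mesh generation in Fig. 2.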

C. Image Interpolation

Image interpolation [55]–[57] is an essential task in medical imaging due to limited and varying spatial and temporal resolution of imaging devices. Interpolation techniques may be classified into two categories, namely, image-based [56], [57] and object-based [7], [139]–[141]. Image-based approaches are primarily based on local image intensities in adjacent slices and often suffer from artifacts caused by non-uniform and nonlinear structural deformities in the slice direction. On the other hand, object-based interpolation techniques capture structural deformation in the slice direction. Different methods in this category use different techniques to determine the structural correspondence between successive slices, e.g., distance transform [7], registration [141], mathematical morphology [140] etc.
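The distance transform route to object-based interpolation can be illustrated with a minimal 2-D sketch (names are illustrative, and this omits the alignment and spline refinements reported in the literature): a signed distance transform is computed on each of two binary slices, and an intermediate cross-section is recovered by thresholding their weighted sum.

```python
import numpy as np
from scipy import ndimage

def signed_dt(slice_bool):
    """Signed distance: positive inside the object, negative outside."""
    inside = ndimage.distance_transform_edt(slice_bool)
    outside = ndimage.distance_transform_edt(~slice_bool)
    return inside - outside

def shape_based_slice(slice0, slice1, t):
    """Binary cross-section at fraction t between two binary slices,
    obtained by thresholding the linearly blended signed DT fields."""
    blend = (1.0 - t) * signed_dt(slice0) + t * signed_dt(slice1)
    return blend > 0
```

By construction the interpolated section lies between the two input sections whenever one contains the other, which is the behavior one expects from a DT-based correspondence.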

0278-0062 (c) 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


The DT-based interpolation technique was referred to as “shape-based interpolation” [7], where the deformation between cross-sectional object contours on successive image slices is captured using a simple linear combination of signed DT fields computed on the two successive slices. The object representation on an intermediate image slice is computed as a weighted sum of the two signed DT fields. Herman et al. [142] demonstrated that the use of a modified cubic spline in shape-based interpolation further improves its performance for CT images. Higgins et al. [143] augmented Raya and Udupa’s shape-based interpolation [7] to a shape- and location-based interpolation where the centroids of object cross-sections in two successive slices are aligned by a translation before applying the shape-based interpolation. Also, they suggested using gray-scale information of object cross-sections in shape- and location-based interpolation. Grevera and Udupa [139] generalized shape-based interpolation for gray-scale images. One difficulty with DT-based interpolation methods is that, when the geometry and/or topology of an object changes rapidly between successive image slices, they may select the wrong object contours from successive image slices as matching ones and, thus, misguide the interpolation algorithm. This challenge is exacerbated for thin structures with high deformation and changing topology and geometry, e.g., vascular structures, airway trees, etc.

IV. CONNECTIVITY, TRACKING, AND SEGMENTATION

Path connectivity is the parent of many primitive as well as advanced algorithms for image segmentation and tracking, e.g., component labeling, surface-tracking, fuzzy connectivity, watershed algorithms, connected operators etc. Many medical imaging applications, including several recent ones, apply binary component labeling on thresholded images to segment a target anatomic structure.
An efficient and popular implementation of two-scan component labeling algorithms [60] uses equivalence tables: first, the image is scanned, each voxel is assigned a label, and label collisions are detected and stored in an equivalence table. During the second scan, the voxel labels are corrected by using the equivalence table. Another efficient algorithm for component labeling uses contour tracking, where the identification of a connected component is carried out by tracking its contour [144].

Surface-tracking [145]–[148] is a precursory step for surface rendering [8] and interactive manipulation useful in creating a virtual reality of surgical procedures [9], [10]. Surface-tracking algorithms seek to compute a “boundary surface” as a “connected component” of “boundary faces” in a 3-D binary digital image. The major challenge in a surface-tracking algorithm is to design a traversal rule to cover the full span of a 3-D surface; this is accomplished using directed circuits of adjacent boundary faces that define the rules for walking along an object boundary to track the entire surface. Artzy et al. [145] presented an efficient algorithm for surface-tracking in a 3-D (18,6) binary digital image, and its formal proof was presented in [147]. Gordon and Udupa [149] presented another efficient algorithm for surface-tracking, and its correctness was proven in [148]. Later, Udupa et al. [146] generalized the surface-tracking algorithm for arbitrary dimensions.
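The two-scan scheme with an equivalence table can be sketched as follows (2-D and 4-adjacency for brevity; names are our own, and the union-find forest plays the role of the equivalence table):

```python
def label_components(img):
    """Two-scan connected component labeling, 4-adjacency,
    for a 2-D binary image given as a list of lists."""
    rows, cols = len(img), len(img[0])
    parent = {}                       # equivalence table as a union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    labels = [[0] * cols for _ in range(rows)]
    next_label = 1
    # First scan: assign provisional labels, record label collisions.
    for i in range(rows):
        for j in range(cols):
            if not img[i][j]:
                continue
            up = labels[i - 1][j] if i > 0 else 0
            left = labels[i][j - 1] if j > 0 else 0
            if up == 0 and left == 0:
                parent[next_label] = next_label
                labels[i][j] = next_label
                next_label += 1
            else:
                neighbors = [l for l in (up, left) if l > 0]
                labels[i][j] = min(neighbors)
                if len(neighbors) == 2:
                    union(up, left)       # collision: labels are equivalent
    # Second scan: resolve equivalences to a representative label.
    for i in range(rows):
        for j in range(cols):
            if labels[i][j]:
                labels[i][j] = find(labels[i][j])
    return labels
```

Representative labels need not be consecutive; a final relabeling pass would compact them if needed.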

Relatively advanced region growing algorithms use fuzzy connectivity [12], [150], [151] instead of binary connectivity. Watershed algorithms [11], [58], [98], [152]–[156] and connected operators [46], [157]–[160] incorporate morphological and hierarchical strategies within the framework of connectivity and region-based segmentation. Links may be observed among different topological segmentation approaches; e.g., from a general viewpoint, fuzzy connectedness can be seen as the altitude of the passes between basins of the watershed, which is also the gray-value of the lowest common ancestor of two minima in the min-tree. There has been a trend toward formulating connectivity and region-based algorithms on graphs [98], [99], [161], [162]. The image foresting transform [161], [162] offers a unified graph-based framework for realization of various connectivity-based operators, including region growing, ordered propagation, watershed, flooding, geodesic dilation, morphological reconstruction etc. Najman and Cousty [99] surveyed graph-based approaches for mathematical morphology and topological segmentation algorithms. In general, graph-based approaches are superior for hierarchical representation of the topography of regions in an image. In the following section, we review the results, principles, and applications of fuzzy connectivity, watershed algorithms, and connected operators. Another path-based image segmentation approach using minimal paths or geodesics is discussed in Section V, which utilizes metric properties of paths to track object boundaries or centerlines. By contrast, the methods reviewed in this section are region-based algorithms based on the connectivity property of paths. Boundary and centerline tracking algorithms are more robust in handling small leaks at boundaries as compared to region-based techniques.
However, the performance of tracking algorithms drops as the complexity of the geometry and topology of the target object boundary increases, and interactive user inputs become inevitable. Region-based algorithms are less dependent on object geometry, and their performance is more related to seed selection, boundary leaking, and propagation through small scale structures.

A. Fuzzy Connectivity

Fuzzy connectivity was introduced by Rosenfeld [150], where the “degree of connectedness” is computed from a set of seed voxels. Dellepiane and Fontana [151], [163] and Udupa and Samarasekera [12] suggested its application to image segmentation. Aspects related to the computational efficiency of fuzzy connectivity algorithms were investigated by Carvalho et al. [164] and Nyúl et al. [165]. Udupa and Saha [49] presented a survey on fuzzy connectivity. The goal of a fuzzy connectivity algorithm is to grow a region in an object from selected seeds so that the region fills the entire object without leaking into another object or the background. The fuzzy connectivity between two voxels is defined as the strength of the strongest path between them, where the strength of a path is the strength of the weakest link of the path. Starting with a high fuzzy connectivity value at seed voxels, fuzzy connectivity values are propagated



through adjacent voxels under the min-max rule. This propagation is iterated until a convergence is reached. Udupa and Samarasekera [12] recommended the use of a fuzzy affinity relation between adjacent voxels to govern the region growing process. The affinity relation captures various local image features to define a local “hanging-togetherness” between two adjacent voxels; the performance of a fuzzy connectivity algorithm highly depends on the seed selection along with the choice of the affinity function. A general form of an affinity function [166], [167], in which µω, µϕ, and µδ capture adjacency, homogeneity, and object features, respectively, is the following:

µκ(p, q) = µω(p, q) g(µϕ(p, q), µδ(p, q)),   (2)
where g(x, y) is a monotonically non-decreasing function of x and y. Saha et al. [166] incorporated local scale in affinity functions, while Zhuge et al. [168] extended it to vectorial images. In the above formulation, the fuzzy connectivity is computed from one set of seed voxels representing a single object. A thresholding of fuzzy connectivity values is needed to segment the target object. In relative and iterative relative fuzzy connectivity [169]–[172], multiple objects are allowed to grow simultaneously through a competitive rule. In iterative relative fuzzy connectivity [169], [171], region growing starts in the “core” of the object through relative fuzzy connectivity and then iteratively expands to finer details. This process terminates when a convergence is reached. Saha et al. [173] combined the notions of iterative relative fuzzy connectivity and distance transform, leading to an algorithm for multi-scale opening of different anatomic structures conjoined at various unknown locations and scales. This method has been applied for artery-vein separation in non-contrast pulmonary CT imaging (Fig. 3) and assessment of morphology of pulmonary acini in mouse lungs using micro-CT imaging [174]. Lei et al. [175] and Tizon and Smedby [176] used intensity-based features for artery-vein separation in MRA imaging. Bazin and Pham [177] presented a topology-preserving fast marching algorithm for region growing and applied it for tissue classification in brain MR imaging. Recently, Strand et al. [178] have introduced a region growing framework, referred to as the “minimum barrier distance,” which captures the total intensity variation along a path as its cost. The minimum barrier distance or MBD between two voxels is defined as the cost of the minimum cost-path between the two voxels.
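At its computational core, fuzzy connectivity is a max-min path propagation, which can be realized as a Dijkstra-style best-first search over an affinity graph. A minimal sketch under illustrative names (a real implementation would derive the affinities from an affinity function such as Eq. (2) on the voxel grid):

```python
import heapq

def fuzzy_connectivity(adj, seeds):
    """Max-min (fuzzy) connectivity from a set of seeds.

    adj: {node: [(neighbor, affinity), ...]} with affinities in [0, 1].
    The strength of a path is its weakest affinity; the connectivity of
    a node is the strength of its strongest path to any seed.
    """
    conn = {s: 1.0 for s in seeds}        # seeds start at full strength
    heap = [(-1.0, s) for s in seeds]     # max-heap via negated strengths
    heapq.heapify(heap)
    while heap:
        neg, u = heapq.heappop(heap)
        strength = -neg
        if strength < conn.get(u, 0.0):
            continue                      # stale heap entry
        for v, a in adj.get(u, ()):
            s = min(strength, a)          # weakest link along this path
            if s > conn.get(v, 0.0):      # strongest path so far to v
                conn[v] = s
                heapq.heappush(heap, (-s, v))
    return conn
```

Thresholding the returned connectivity map then yields the segmented object, as described above.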
MBD is a metric that possesses a unique property: as a path grows, its length remains unchanged until a new intensity fluctuation, or barrier, is observed that is stronger than previous fluctuations. This unique property motivates its application to region growing; by this approach, the sensitivity to local disturbances such as noise and edge blur is reduced. Recently, an efficient algorithm for computation of MBD has been presented by Ciesielski et al. [179].

B. Watershed Methods

The watershed transform is a family of methods built on an intuitive idea of water flooding on the surface of a landscape. Watershed-based image segmentation was introduced by Beucher and Lantuéjoul [11], and early works were reported in [58], [152]–[154].

Fig. 3. Multi-scale opening of pulmonary arteries and veins in non-contrast CT imaging. (a) A coronal image slice. (b) Color-coded rendering of separated arterial and venous trees [180].

The basic idea of watershed segmentation is to interpret a gradient magnitude image as a topographic relief where the watershed contours correspond to the crest lines of the relief. The barriers are called watershed lines, and the catchment basins are the segmented objects. The principle of the watershed method may be explained from two different perspectives: (1) watershed by region growing (the flooding analogy) and (2) watershed by optimal path extraction (the drop-of-water analogy). Following the flooding analogy of the watershed transform, a water source is placed at each seed point or at one or more local minima. As the flooding continues from these water sources, basins are filled up with water, and, at points where water coming from different sources meets, barriers or boundaries, referred to as watersheds, are built between different object regions. When the water level reaches the highest peak in the landscape, the process stops and the landscape is partitioned into regions, or basins, separated by contour lines, or watersheds. An alternative interpretation is the drop-of-water analogy, where a drop of water falling on the surface of the landscape flows in the direction of the steepest descent towards the nearest local minimum. Each drop of water falling on a given basin flows toward the same local minimum, and the watersheds are formed as lines separating catchment basins. An algorithmic definition and efficient computation of the watershed transform using the flooding analogy was presented by Vincent and Soille [152] (see [58] for the binary case). A review of several definitions of the watershed transform and the associated algorithms was presented by Roerdink and Meijster [181]. Meyer and Najman [182] presented an overview of the watershed-based segmentation framework together with several examples of hierarchical watersheds.
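The flooding analogy can be sketched as a priority-queue region growing: seeded pixels flood their neighbors in increasing altitude order, and the water level can only rise. This minimal 2-D sketch uses our own names and omits details of production implementations, such as explicit watershed-line marking and careful plateau handling; it simply labels each pixel with the basin whose water reaches it first.

```python
import heapq

def seeded_watershed(img, seeds):
    """Seeded watershed by flooding.

    img: 2-D list of altitudes (e.g., gradient magnitudes).
    seeds: {(i, j): label}. Returns {(i, j): label} for flooded pixels.
    """
    rows, cols = len(img), len(img[0])
    labels = dict(seeds)
    heap = [(img[i][j], i, j) for (i, j) in seeds]
    heapq.heapify(heap)
    while heap:
        alt, i, j = heapq.heappop(heap)    # lowest front pixel floods next
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols and (ni, nj) not in labels:
                labels[(ni, nj)] = labels[(i, j)]
                # water can only rise: enqueue at least the current level
                heapq.heappush(heap, (max(alt, img[ni][nj]), ni, nj))
    return labels
```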
The primary challenge in image segmentation using the watershed transform is that the image is usually over-segmented into a large number of tiny and shallow watersheds, while a desired algorithm should obtain only a few deep and meaningful watersheds. Najman and Schmitt [183] presented a hierarchical segmentation algorithm where all watershed contours are assessed by a saliency measure, defined by its gradient values and geodesic reconstruction. Recently, Najman [184] has presented the correct definition of saliency maps and has established the equivalence with hierarchies



of connected partitions. Bleau and Leon [185] presented a watershed merging algorithm that replicates the process of rain falling over a real landscape: smaller watersheds progressively fill until an overflow occurs. The water then flows to a nearby, larger or deeper watershed, into which the overflown watersheds are merged. Beucher [154] presented two approaches for computing hierarchical watersheds: by region merging (based on the gradient values between adjacent regions) and by applying mathematical morphologic reconstruction from the set of watershed lines to remove all local minima except the significant ones. Another approach to reduce the number of regions is a seeded watershed [186], [187], where the flooding starts from labeled seed points instead of local minima and barriers are built when two basins meet. Seeded watershed algorithms ensure that a segmented region contains seed point(s) of one label only (Fig. 4). By combining the results from repeated executions of seeded watersheds, with seeds at different random positions, a robust method, called a stochastic watershed, can be realized [188]. Ideally, a stochastic watershed generates a seed-independent probabilistic gradient image, where edge pixels representing significant object boundaries are enhanced. Since the probability of a pixel being classified as an edge depends on the probability of a seed point ending up in the regions separated by the pixel, the stochastic watershed result is biased towards large regions. An important parameter in the stochastic watershed is the number of seed points used in each execution [189]. If a low number of seed points is used, only the most prominent edges are detected; with a high number of seed points, weaker edges are also detected. In its original form, the time complexity of the stochastic watershed is high, since it depends linearly on the number of repeated executions of seeded watersheds.
To speed up the computation, merging of adjacent regions of a standard watershed can be used; when merging adjacent regions, the probability that a seed has been placed in the regions is used to compute the edge probabilities [190]. In 2014, Malmberg and Luengo Hendriks [191] presented an efficient algorithm for computing the exact stochastic watershed without any randomness. The stochastic watershed has been applied to several medical image segmentation problems, such as the study of angiogenesis [192], segmentation of granular materials in 3-D micro-tomography [193], and detection of the optic disc in fundus images [194]. Beare [195] proposed a locally constrained watershed transform that uses Minkowski paths, or covered paths, constructed by dilating conventional paths with a structuring element, to define distance functions leading to constrained catchment basins and watershed lines, which are smoother compared to traditional ones. Meyer and Vachier [196], [197] simulated a viscous fluid by bringing a Euclidean disk structuring element within the flooding mechanism; such a flooding does not follow all irregularities of the landscape surface and, therefore, produces lakes with smooth boundaries. The authors showed that a flooding with viscous fluid can be simulated as a flooding with non-viscous fluid on a mathematical morphologic filtered surface, where both methods produce exactly the same lake contours. This principle was used to develop an efficient algorithm for

Fig. 4. Watershed computation. (a) Original intensity image, (b) edge image (gradient magnitude), and (c) seeded watershed segmentation by merging of adjacent regions – first result from all local minima shown in gray.

computing the viscous watershed transform. Couprie and Bertrand [155], [156], [198] introduced topological watersheds, where a gray-scale watershed transform is interpreted as an “ultimate W-thinning” that modifies the original image by lowering some points while preserving the connectivity of each lower cross-section. Topological watersheds preserve the fuzzy connectivity or “connectedness value” among the minima in an image. In other words, topological watersheds are the only watershed algorithms to preserve the structure of the min-tree of connected components. This unique property makes topological watersheds a better choice for developing hierarchical strategies. Couprie et al. [198] developed an efficient, quasi-linear algorithm for computation of topological watersheds using a local characterization of “W-destructible” points on a graph representation of an image. Cousty et al. [97] introduced watershed cuts, a notion of watershed on edge-weighted graphs, and showed that watershed cuts are equivalent to the separations induced by minimum spanning forests relative to regional minima. In [98], the same authors derived the computational solution for watershed cuts and discussed the links as well as differences among watershed cuts, minimum spanning forests, shortest path forests, and topological watersheds. Cousty et al. [199] applied the watershed cut method to develop an automated algorithm for segmentation of the left ventricular myocardium in 4-D (3-D plus time) cine-MRI sequences. Couprie et al. [200] developed the power watershed on edge-weighted graph representations of images that unifies the notions of optimum spanning forest and energy optimization within the watershed framework.

C. Connected Operators

Connected operators [201]–[203] are region-based filtering tools operating on connected components of constant intensity, or flat zones, in an image.
Such operators partition an image into regions; thus, they are popularly used in image segmentation [157]–[160]. A popular connected operator-based image filtering and segmentation approach is to construct a component tree of flat zones; prune the tree under a simplification criterion; then reconstruct the final image from the pruned tree. This approach provides more flexibility regarding the choice of the tree simplification criterion, and, thus, on the image structure to be simplified. The first connected operators known as “opening by reconstruction” were defined for binary images by Klein [204]. Vincent [201] generalized



Fig. 5. Different component tree representations. (a) A gray-level image with eight flat zones indicated by the letters A, B, · · · , H. The number next to each letter denotes the constant intensity value of the corresponding region. The graphs represent the max-tree (b), min-tree (c), and inclusion tree (d) representations of (a).

the reconstruction process for gray-level images and presented efficient algorithms. Gray-level connected operators are obtained by iteratively dilating the opening, constrained by the mask defined by the original image [59], [201]. Applications of connected operators were reported in [205], [206], while works on efficient computation of connected operators were presented in [157], [205]. A thorough survey on connected operators was presented in [46]. The max- or min-tree [206] and the inclusion tree [207] are popular component tree representations for a gray-level image; see Fig. 5. Note that a leaf node in a max-tree is always a local maximum among flat zones. The example of (a) has three local maxima and, therefore, the max-tree in (b) has three leaf nodes. The dual representation of the max-tree is the min-tree, computed as the max-tree of a negated image; see (c). A leaf node in a min-tree is always a local minimum among flat zones. An inclusion tree [207] is computed using the inclusion criterion of the borders of bounded components at different thresholds; see Fig. 5(d). Another popular component tree representation is the binary partition tree (BPT) [208], which starts with the set of all flat zones in an image as its leaf nodes and iteratively merges two nodes under a predefined similarity criterion. The merging continues until the root node representing the entire image domain is found. A BPT presents a multi-scale hierarchical representation of an image where large regions appear close to the root while fine details are found at lower levels. A naive approach to component tree construction is computationally expensive. The flood-filling approach [206] starts at the root node and applies a depth-first or breadth-first flooding process

to build the complete tree. The union-find approach [209] finds and administers merging of disjoint regions. Efficient algorithms using this approach were presented in [210]. The union-find approach was combined with the flood-filling approach in [211]. An efficient algorithm for construction of the inclusion tree was presented in [207]. Once such trees have been created, the filtering or segmentation process consists of simplifying the tree, e.g., by pruning, and then reconstructing the image or segmented regions from the simplified tree. A simple pruning strategy is to find a single cut along each path from the leaf toward the root, and then to collapse all nodes on the leaf-side of the cut onto the highest surviving ancestor. Various features such as size, contrast, color, texture, motion, and shape are used to define simplification criteria. Several tree simplification approaches are available in the literature [201], [206], [208], [212], [213]. Herman et al. [214] recommended a simplified foreground component tree structure (FCTS), studied its theoretical properties, and demonstrated its applicability and robustness in distinguishing different biological structures. Couprie and his colleagues applied connected operators and component trees to develop the topological watershed transform for image segmentation [155], [156], [198], [211]. Several applications of component trees to medical imaging were presented in [215]–[218]. Wilkinson and Westenberg [215] developed component tree-based filtering algorithms for shape-preserving filament enhancement and applied those to extract carotid arterial trees in CT and MR angiograms of human heads. Ouzounis et al. [216] demonstrated the effectiveness of component tree-based algorithms for extraction of thin elongated biological structures, e.g., pyramidal neurons in 3-D confocal microscopy images and protein chains in electron micrographs. Westenberg et al.
[217] augmented the basic max-tree data structure for interactive filtering and visualization of an object embedded in a 3-D image and applied it to several medical imaging datasets, including rotational bi-plane X-ray scans and CT angiograms. Oliveira et al. [218] applied the component tree to develop interactive tools for extracting biological structures in an image.

V. MINIMAL PATH APPROACHES

In many segmentation applications, it is more intuitive and efficient to track the object boundary or the centerline (e.g., for vascular structures) instead of growing the object region. Often, object boundary or centerline tracking is performed using minimal cost-paths or geodesics [16], [219]–[221]. Similar to region growing, minimal cost-paths are computed using path propagation through adjacent voxels. In this framework, a predefined potential or step-cost function is used, and the energy or cost of a path is computed by integrating the potentials along the path. Finally, the minimum action U(p) at a given voxel p is defined as the minimal path cost to p from a starting voxel p0. The minimal path, or geodesic, between p0 and a given voxel p is derived from the action map U using a gradient descent back-propagation starting at p until p0 is reached. The gradient descent back-propagation always converges to p0 from p since U has no local minima except p0. The fast marching algorithm [222] offers an



efficient computation of the action map, which uses a front-propagation approach similar to Dijkstra’s algorithm [223]. Deschamps and Cohen [224] introduced a simultaneous front propagation approach for minimal path computation. The idea is to simultaneously propagate a front from each of the two voxels p0 and p1 and find the voxel pm where the two fronts meet. The minimal path between p0 and p1 is computed by separately finding the minimal path from pm to each of p0 and p1 and then adjoining the two geodesics. This approach greatly reduces the region covered during front propagation as compared to the fast marching algorithm, and it also allows parallel implementation of the two front propagations. Cohen and Kimmel [220] adapted the minimal path approach to develop an active contour [225] model with the global minimum property. It overcomes a major challenge of active contours, which often get trapped at poor local minima, leading to undesired segmentations. Specifically, Cohen and Kimmel used the fast marching propagation to find the object contour with the global minimum energy. In their work, the closed object contour is represented as the union of two minimal paths, or geodesics. Appleton and Talbot [226] presented a method to compute globally optimal active contours under the sole restriction that they contain a specified internal point and demonstrated its application to segmentation of different medical images. Cohen [227] used the minimal path approach to find multiple contours in an image from an unstructured set of edge points. In the same paper, an automated method was proposed to find the edge points from a large set of admissible points. The 3-D extension of this method and its application to vessel tracking was presented in [228]. Deschamps and Cohen [228] used minimal paths between connected components for perceptual grouping and contour completion of unstructured regions in 2-D and 3-D images, e.g., completion of vascular trees.
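On a discrete graph, the action map U and the back-propagated geodesic reduce to Dijkstra's algorithm with predecessor tracking, the discrete analog of gradient descent on U. A minimal sketch with illustrative names (fast marching instead solves a first-order eikonal equation on the grid, but the propagation structure is the same):

```python
import heapq

def minimal_path(adj, p0, p1):
    """Minimal cost-path via Dijkstra's algorithm.

    adj: {node: [(neighbor, step_cost), ...]}. Computes the action map U
    (minimal path cost from p0), then reads the geodesic off by walking
    predecessors back from p1 to p0.
    """
    U = {p0: 0.0}
    pred = {}
    heap = [(0.0, p0)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > U.get(u, float('inf')):
            continue                      # stale heap entry
        for v, w in adj.get(u, ()):
            nd = d + w                    # integrate step costs along the path
            if nd < U.get(v, float('inf')):
                U[v] = nd
                pred[v] = u
                heapq.heappush(heap, (nd, v))
    # back-propagation: U has no local minima except p0, so this terminates
    path = [p1]
    while path[-1] != p0:
        path.append(pred[path[-1]])
    return U[p1], path[::-1]
```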
Live wire and intelligent scissors [13], [14], [229]–[232] use the minimal path approach to design an effective interactive tool with a graphical interface for intelligently tracking an object boundary. The user interactively provides input to guide the boundary detection process. Falcão and Udupa [14] proposed an oriented live-wire algorithm (Fig. 6) with an “oriented boundary,” where intensity values from “inside” and “outside” of the object are treated differently. Peyré et al. [50] presented a survey of geodesic methods and their applications. Minimal path techniques have been extensively used for centerline extraction of tubular tree structures in medical imaging. A major advantage of such methods is their robustness in the presence of noisy local perturbations on a target structure in an image. Several minimal path based centerline extraction methods have been reported in the literature [19], [224], [227], [233]–[235]. These techniques involve deriving a cost metric [220] from the image such that minimal paths correspond to the centerlines of tubular structures. Such methods need the user to provide a starting point, or source, and end points. The algorithm connects each end point to the source with a minimal path. Deschamps and Cohen [224] and Wan et al. [19] used a combination of the path length and the distance to the object border as the path cost function. Staal et al. [234] used the maximal

Fig. 6. The oriented live wire algorithm forces the user to steer tracking in one direction, as indicated by the curved arrow. In this MRI example of a human foot, different bone edges come close to the boundary of the talus bone, but their contrasts have opposite orientations, so they have very different cost values. This figure was provided by Dr. J. K. Udupa.

ridge response in their step-cost function for vessel centerline extraction in retinal images. Benmansour and Cohen [235] incorporated anisotropic enhancement features to develop an interactive centerline tracking tool for vessel trees in 2-D and 3-D medical images. Rouchdy and Cohen [236] introduced a notion of geodesic voting based on computing geodesics from a set of end points scattered in the image and demonstrated its application to various medical imaging applications, including extraction of microglia extensions and cardiac or neuronal blood vessels. Wink et al. [233] used minimal paths to detect the coronary axis in a 3-D MRA.

VI. TOPOLOGY PRESERVATION, SIMPLE POINT, AND LOCAL TOPOLOGICAL NUMBERS

Many image processing operators aim to deform an image while preserving its topological properties. Topology preservation has been thoroughly investigated in the context of sequential and parallel skeletonization; see Section VII. Also, it has been pursued in the context of segmentation [98], [155], [237], tissue classification [177], homotopic dilation and erosion [33], [238], registration [34]–[36] etc. A weak condition for topology preservation is imposed using the Euler characteristic. The Euler characteristic of a polyhedron X, denoted by χ(X), is obtained by adding the numbers of vertices and faces in X while subtracting the numbers of edges and volume elements, where each element is convex. Also, χ(X) equates to the number of components plus the number of cavities minus the number of tunnels in X. Efficient algorithms for computation of the Euler characteristic of digital objects are available [239], [240]. Besides its role in topology preservation, the Euler characteristic has been used as a quantitative descriptor of object topology in many medical imaging applications, including correction of the topology of the segmented human cortex [26], [33], characterization of trabecular bone connectivity [22] etc.
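The alternating vertex/edge/face/cube count can be sketched directly on the cubical complex spanned by a voxel set. The enumeration below is a naive illustration under our own naming, not one of the efficient algorithms of [239], [240]; each cell is keyed by its lowest corner (and axis), so cells shared between adjacent voxels are counted exactly once.

```python
from itertools import product

def euler_characteristic(voxels):
    """chi = #vertices - #edges + #faces - #cubes of the cubical complex
    of a set of unit voxels given as integer (x, y, z) coordinates."""
    verts, edges, faces = set(), set(), set()
    for x, y, z in voxels:
        for dx, dy, dz in product((0, 1), repeat=3):   # 8 vertices per cube
            verts.add((x + dx, y + dy, z + dz))
        for axis in range(3):
            others = [a for a in range(3) if a != axis]
            for d0, d1 in product((0, 1), repeat=2):   # 12 edges per cube
                c = [x, y, z]
                c[others[0]] += d0
                c[others[1]] += d1
                edges.add((tuple(c), axis))            # keyed by lower corner
            for d in (0, 1):                           # 6 faces per cube
                c = [x, y, z]
                c[axis] += d
                faces.add((tuple(c), axis))            # keyed by normal axis
    return len(verts) - len(edges) + len(faces) - len(set(voxels))
```

A single voxel gives χ = 8 − 12 + 6 − 1 = 1, and a hollow 3 × 3 × 3 shell gives χ = 2 (one component plus one cavity, no tunnel), consistent with the component/cavity/tunnel characterization above.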
Imposing topology preservation in voxel-wise digital operations led to the notion of a “simple point.” Although a formal definition of topology preservation in a 3-D digital space is complex, intuitively, an operation is “topology preserving” if individual object components, tunnels, and cavities in the

0278-0062 (c) 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


Fig. 8. Examples of tunnels on the surface of a topological sphere. (a) The object B, shown in dark on the surface S of a cube, forms one tunnel; such an object always generates exactly two components in S − B. (b) Here, B forms two tunnels and generates exactly three components in S − B.

Fig. 7. Examples of (26,6) 3-D simple (a,b) and non-simple (c-f) points. Deletion of p in (a,b) leaves exactly one object component without a cavity or a tunnel. On the other hand, the same operation creates two object components in (c), one tunnel in (d), a cavity in (e), and, finally, a tunnel and two object components in (f).

original image are preserved after the concerned operation; see [89] for more on topology preservation. In the following, a survey of simple point characterization is presented, together with a brief history of its progression, particularly in 3-D. It is followed by a review of local topological numbers and their applications to the classification of structures in medical imaging.

A. Simple Points

In a binary digital image (Z³, α, β, B), a voxel or point p ∈ Z³ is a 3-D simple point if and only if its deletion, i.e., removal from B, or its addition, i.e., inclusion in B, preserves the topology of the original image in the neighborhood of p. Specifically, p is a simple point if and only if (N26(p), α, β, N26(p) ∩ (B − {p})) and (N26(p), α, β, N26(p) ∩ (B ∪ {p})) are topologically equivalent (Fig. 7). A concise characterization of 2-D simple points was established in the early 1970s [78], [79], [241]. The characterization of simple points in 3-D is more complex than in 2-D, primarily due to the presence of tunnels in 3-D. Tourlakis and Mylopoulos [242] presented a generalized characterization of simple points that applies to any dimension. Morgenthaler [243] presented a less complex characterization of 3-D simple points, where the preservation of tunnels was imposed by adding a constraint of Euler characteristic preservation. Lobregt et al. [244] presented an efficient algorithm for 3-D simple point detection based on Euler characteristic preservation. However, as described by Saha et al. [245], their algorithm fails to detect the violation of topology preservation when the deletion of a point simultaneously splits an object into two and creates a tunnel, e.g., the example of Fig. 7(f). Using (26,6) adjacency, Saha et al. [245]–[248] characterized the existence as well as the number of tunnels in the 3 × 3 × 3 neighborhood using simple connectivity analysis, as stated in the following theorem.

Theorem 1.
If a voxel, or point, p ∈ Z³ has a background 6-neighbor, the number of tunnels η(p) in N26*(p) is one less than the number of 6-components of background points in N18*(p) that intersect with N6*(p); otherwise, η(p) is zero.


Fig. 9. An example of tunnels in the 3 × 3 × 3 neighborhood. Object voxels are shown dark; background voxels are shown transparent and semitransparent in (a) and opaque in (b). The object voxel configurations are the same in (a) and (b). (a) shows that a background path (shown by semitransparent voxels) crosses a non-trivial closed loop of object voxels on the boundary of a 3 × 3 × 3 neighborhood. This phenomenon of “crossing”, illustrated in [245], [246], [248], was crucial for the characterization of tunnels and 3-D simple points. (b) demonstrates the 18-neighborhood; note that no background path may cross a non-trivial closed loop of object voxels in the 18-neighborhood.

The above theorem may be intuitively explained using examples of tunnels on the surface of a topological sphere, the surface S ⊂ R³ of a cube in Fig. 8. In both Figs. 8(a) and (b), the object B ⊂ S lies on the surface of the cube. Such an object forms a tunnel if it forms a loop and, thus, divides the topological sphere S into two or more disconnected regions, i.e., S − B is not connected. In Theorem 1, this observation is translated to the digital case, where the neighborhood N26*(p) forms the digital boundary surface of a 3 × 3 × 3 cube and N26*(p) ∩ B is the set of object voxels on that surface. While defining 6-components of background points, N18*(p) is used instead of N26*(p) to avoid any crossing between a background voxel path and an object voxel path (Fig. 9). See [249] for a formal proof of the theorem. Saha et al. [245]–[248] used the results of Theorem 1 and presented a concise characterization of (26,6) 3-D simple points in a four-condition format.

Theorem 2. A voxel, or point, p ∈ Z³ is a (26,6) simple point in a 3-D binary image (Z³, 26, 6, B) if and only if it satisfies the following four conditions.
Condition 1. p has a background 6-neighbor, i.e., N6*(p) − B is non-empty.
Condition 2. p has an object 26-neighbor, i.e., N26*(p) ∩ B is non-empty.
Condition 3. The set of object 26-neighbors of p is 26-connected, i.e., N26*(p) ∩ B is 26-connected.
Condition 4. The set of background 6-neighbors of p is 6-connected in the set of background 18-neighbors, i.e., N6*(p) − B is 6-connected in N18*(p) − B.

The four conditions in the above characterization correspond to the four different types of local topological violations that occur when a non-simple point is deleted. Condition 1 ensures that no cavity is created by the deletion of a point, while Condition 2 confirms that no isolated point is deleted. Condition 3 ensures that, after the deletion of the central point p, its neighboring object points remain connected. Finally, Condition 4 guarantees that the deletion of a point creates no tunnel in the neighborhood. Malandain and Bertrand [250], [251] re-discovered the simple point characterization of Theorem 2 and presented it in a two-condition format, as stated in the following. A voxel, or point, p ∈ Z³ is a (26,6) 3-D simple point if and only if it satisfies the following two conditions.
Condition 1. N26*(p) has exactly one 26-component of object points.
Condition 2. The number of 6-components of background points in N18*(p) that intersect with N6*(p) is exactly one.
Fourey and Malgouyres [252] formalized the proofs of several results on 3-D simple points, which were reported earlier without formal proofs [41], [82], [243], [245], [246], [248], [251], [253]. Klette and Pan [254] presented a review of 3-D simple point characterizations, proposed a new methodology for identifying non-simple points, and demonstrated its application to skeletonization. Malgouyres and Lenoir [255] presented a characterization for topology preservation on a digital surface. A recent trend in discrete topological transformations is the use of the cubical complex framework [89]–[93], where a cellular complex is embedded in a 3-D cubic grid. In a cellular model, lower-dimensional elements (or cells), e.g., faces or edges, are allowed in the connectivity relationship. The presence of cells of lower dimension yields a more precise definition of object boundaries.
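The two-condition characterization translates directly into a test. The sketch below is an illustrative implementation (not the optimized algorithms of [245]–[248]); it assumes the 3 × 3 × 3 object configuration is given as a boolean array with p at the center, and the helper names are ours.

```python
import numpy as np
from collections import deque
from itertools import product

def _offsets(kind):
    """Offsets of the 6-, 18-, or 26-neighborhood around the center."""
    offs = [o for o in product((-1, 0, 1), repeat=3) if o != (0, 0, 0)]
    if kind == 6:
        return [o for o in offs if sum(map(abs, o)) == 1]
    if kind == 18:
        return [o for o in offs if sum(map(abs, o)) <= 2]
    return offs  # 26-neighborhood

def _components(voxels, conn):
    """Connected components of a set of offsets under 6- or 26-adjacency."""
    remaining, comps = set(voxels), []
    while remaining:
        seed = remaining.pop()
        comp, queue = {seed}, deque([seed])
        while queue:
            v = queue.popleft()
            for w in list(remaining):
                d = [abs(a - b) for a, b in zip(v, w)]
                if max(d) == 1 and (conn == 26 or sum(d) == 1):
                    remaining.discard(w)
                    comp.add(w)
                    queue.append(w)
        comps.append(comp)
    return comps

def is_simple_26_6(nb):
    """(26,6) simple point test for the center of a 3x3x3 boolean array
    `nb`, via the two-condition characterization: exactly one
    26-component of object voxels in N26*(p), and exactly one
    6-component of background voxels in N18*(p) meeting N6*(p)."""
    obj = {o for o in _offsets(26) if nb[o[0] + 1, o[1] + 1, o[2] + 1]}
    bkg = {o for o in _offsets(18) if not nb[o[0] + 1, o[1] + 1, o[2] + 1]}
    n6 = set(_offsets(6))
    t26 = len(_components(obj, 26))
    t6 = sum(1 for c in _components(bkg, 6) if c & n6)
    return t26 == 1 and t6 == 1
```

For instance, an interior voxel (all 26 neighbors object) fails the test because the second count is zero, matching Condition 1 of Theorem 2 (deletion would create a cavity).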
Kong [89] defined simple points based on the operation of collapsing [256] in a cellular representation of binary objects in cubic grids. Couprie and Bertrand [257] characterized simple points in 2-, 3-, and 4-D cubic grids using the framework of simplicial complexes, which leads to simple and efficient algorithms. Cointepas et al. [90] used a characterization of simple cells in a cellular model to develop a cellular homotopic deformation algorithm, applied it to segmentation of the cerebral cortex in brain MR images, and demonstrated that the cellular homotopic deformation better captures both volume and surface structures on a cortex boundary [90]. Cardoso et al. [93] used the cubical complex framework to compute topologically correct thickness of the brain cortex from MR images.

B. Local Topological Numbers and Classification

Saha et al. [245], [246], [248], [258] advocated the following three numbers for characterizing local topological properties of individual voxels.
• ξ(p), η(p), δ(p): the numbers of object components, tunnels, and cavities, respectively, in the 3 × 3 × 3 neighborhood after the deletion of p.

Fig. 10. An illustration of possible topological classifications on an example in the continuous 3-D space: curve interior (pc), curve edge (pce), curve-curve junction (pcc), surface interior (ps), surface edge (pse), surface-surface junction (pss), and surface-curve junction (psc).

Bertrand and Malandain [251] suggested the following two local topological numbers.
• T26(p, B): the number of 26-components in N26*(p) ∩ B.
• T6(p, B̄): the number of components of N6*(p) − B under 6-connectivity in N18*(p) − B.
An efficient algorithm for computing 3-D simple points and the local topological numbers ξ(·), η(·), and δ(·), using the notions of “dead surface” and “effective voxel”, was presented by Saha et al. [245], [246], [248], [258]. Borgefors et al. [259] presented an efficient algorithm for connected component labelling in the 3 × 3 × 3 neighborhood using a recursive approach. Local topological properties have been widely used to identify junctions, edges, and interior points in a 2-D skeleton or a 3-D curve skeleton [258], [260]. Saha and Chaudhuri [258] and Malandain et al. [260] independently realized the principle of characterizing topological classes of individual voxels in a 3-D surface skeleton using local topological numbers. Saha and Chaudhuri [258] used the numbers ξ, η, δ for topological classification of individual voxels, while Malandain et al. [260] adopted the topological numbers T26 and T6. This approach has also been used to segment a surface skeleton into meaningful segments [258], [260]. Svensson et al. [261] used local topology-based voxel classification to facilitate curve skeletonization of surface-like objects. Recently, Serino et al. [262] have used local topology in a skeleton to decompose a 3-D object into meaningful segments. Saha and Chaudhuri [258] developed a comprehensive solution for digital topological analysis, or DTA, to uniquely identify the topological class of individual voxels. Consider an object S that is a union of finitely many surfaces and curves in R³ (Fig. 10). A point in S may be classified depending on its local topological numbers. For example, the removal of a point ps on a surface of S creates exactly one tunnel in a sufficiently small neighborhood.
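These number pairs support a lookup-style classification of skeletal voxels. The following is a hedged sketch of the classification table commonly attributed to Malandain et al. [260], with labels following Fig. 10; the function name and exact label strings are ours, and border cases (e.g., curve/surface edges) collapse to the simple border class in this local test.

```python
def topological_class(t26, t6):
    """Map local topological numbers to a class label.
    t26: number of 26-components of object voxels in N26*(p);
    t6:  number of 6-components of background voxels in N18*(p)
         meeting N6*(p)."""
    if t6 == 0:
        return 'interior'                      # no background 6-neighbor
    if t26 == 0:
        return 'isolated'                      # no object 26-neighbor
    if t26 == 1 and t6 == 1:
        return 'simple border'                 # simple point
    if t26 == 2 and t6 == 1:
        return 'curve interior'
    if t26 > 2 and t6 == 1:
        return 'curve-curve junction'
    if t26 == 1 and t6 == 2:
        return 'surface interior'
    if t26 >= 2 and t6 == 2:
        return 'surface-curve junction'
    if t26 == 1 and t6 > 2:
        return 'surface-surface junction'
    return 'surface(s)-curve(s) junction'
```

A curve-interior voxel, for example, has two object 26-components after its deletion (t26 = 2) and a single surrounding background component (t6 = 1).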
The removal of a point pss on a junction of multiple surfaces creates multiple tunnels. The removal of a point creates exactly two object components on a curve (pc) and multiple object components at a junction of curves (pcc). At a junction of a surface and a curve (psc), it creates multiple object components and exactly one


Fig. 11. Rod-like versus plate-like networks in 3-D projections of micro-MRI images of radial trabecular bone [263]: (a) highly rod-like (F, 74 yrs), (b) moderately rod-like (F, 53 yrs), (c) predominantly plate-like (M, 68 yrs), (d) highly plate-like (M, 60 yrs). Topological parameters parallel the visual assessment of increasingly plate-like trabecular bone micro-architecture, with the surface-to-curve ratio increasing nearly 20-fold from (a) through (d).

tunnel, and so on. Local topological classification in a digital space is not as straightforward as in R³. DTA is solved using the three local topological parameters ξ(·), η(·), and δ(·) and several rules in extended neighborhoods, as described in [258]. DTA has been widely applied for characterization of trabecular bone plate/rod micro-architecture from in vivo imaging [24], [263]–[268] (Fig. 11).

VII. SKELETONIZATION

Blum [101] demonstrated that the set of loci of maximal included balls in an R² object forms its medial axis, or skeleton, which provides a compact yet effective representation of the object. The skeleton of a 3-D object consists of its surfaces/curves of symmetry with lower dimensionality, while preserving key topological and geometrical features of the object. Together with the distance of its points to the object boundaries, the skeleton allows exact reconstruction of the original object. It should be clarified that there are other forms of skeleton [269]–[271] besides Blum’s medial skeleton; see Chapters 1 and 3 of the book by Siddiqi and Pizer [37] for a thorough discussion of this issue. In this paper, we discuss only the discrete variant of Blum’s medial skeleton. Skeletons are useful for object description, retrieval, manipulation, matching, registration, tracking, recognition, and compression, and they also facilitate efficient assessment of local object properties, e.g., scale, orientation, topology, etc. Analytically, skeletonization is defined as a grassfire transformation [272], where the object is assumed to be a field of dry grass that is simultaneously lit at all boundary points. The fire burns the grass field at a uniform speed, and the set of quench points, where two independent fire fronts meet, forms the skeleton of the object [273], [274].
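The exact-reconstruction property just noted can be illustrated in 2-D with the city-block metric: the "centers of maximal balls" are the pixels whose distance ball is not contained in any 4-neighbor's ball, and the union of their balls restores the object exactly. This is an illustrative sketch under the simple city-block metric (published methods typically use chamfer or Euclidean DTs); it assumes a background border around the object, and the function names are ours.

```python
import numpy as np

def cityblock_dt(img):
    """Two-pass city-block distance transform of a 2-D binary image:
    distance from each object pixel to the nearest background pixel.
    Assumes background pixels are present around the object."""
    INF = 10**9
    h, w = img.shape
    d = np.where(img, INF, 0).astype(np.int64)
    for i in range(h):           # forward raster pass
        for j in range(w):
            if d[i, j]:
                if i > 0:
                    d[i, j] = min(d[i, j], d[i - 1, j] + 1)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i, j - 1] + 1)
    for i in range(h - 1, -1, -1):   # backward raster pass
        for j in range(w - 1, -1, -1):
            if d[i, j]:
                if i + 1 < h:
                    d[i, j] = min(d[i, j], d[i + 1, j] + 1)
                if j + 1 < w:
                    d[i, j] = min(d[i, j], d[i, j + 1] + 1)
    return d

def centers_of_maximal_balls(d):
    """CMBs: object pixels whose city-block ball of radius d-1 is not
    contained in a 4-neighbor's ball (no neighbor has d >= d(p) + 1)."""
    h, w = d.shape
    cmb = np.zeros(d.shape, bool)
    for i in range(h):
        for j in range(w):
            if d[i, j]:
                cmb[i, j] = not any(
                    0 <= x < h and 0 <= y < w and d[x, y] >= d[i, j] + 1
                    for x, y in ((i - 1, j), (i + 1, j),
                                 (i, j - 1), (i, j + 1)))
    return cmb

def reconstruct(d, cmb):
    """Union of the city-block balls centered at the CMBs; for an exact
    DT this reproduces the original object."""
    h, w = d.shape
    out = np.zeros(d.shape, bool)
    for i in range(h):
        for j in range(w):
            if cmb[i, j]:
                r = int(d[i, j]) - 1
                for x in range(max(0, i - r), min(h, i + r + 1)):
                    for y in range(max(0, j - r), min(w, j + r + 1)):
                        if abs(x - i) + abs(y - j) <= r:
                            out[x, y] = True
    return out
```

For a filled rectangle, the CMB set is much smaller than the object, yet `reconstruct(cityblock_dt(img), centers_of_maximal_balls(...))` returns the rectangle exactly.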
In a digital image, the fire propagation process is simulated using digital path propagation or the distance transform [2]–[5], and quench voxels are defined as the centers of the maximal included balls [113], [131], [132] (see Section III-A), where path propagations are interrupted. Several fundamentally different approaches are available for computing skeletons [37]. Some researchers [275]–[277] have used shocks of boundary evolution to extract the skeleton of an object, where the object boundary, represented as an active contour, is partitioned at locations of positive curvature maxima, each leading to an end point of the skeleton. Different segments of the active contour then propagate inward, driven by a function modelled by the negative gradient of the distance

to the object boundary. These contour segments slow down in the vicinity of the skeleton, where the magnitude of the gradient of the distance map is small. Others have used Voronoi diagrams to compute Voronoi skeletons [138], [278], [279], where an object is represented by a discrete set of its boundary points and the Voronoi diagram of this set of points is computed. The approximate object boundary generated by connecting the discrete boundary points cuts the Voronoi diagram into internal and external parts. The internal part of the Voronoi diagram is called the Voronoi skeleton. However, the Voronoi skeleton generated by a discrete set of boundary points contains many spurious branches. Schmitt [280] proved that, as the number of generating boundary points increases, the Voronoi diagram converges in the limit to the continuous medial locus, with the exception of the edges generated by neighboring pairs of boundary points. Gray-scale shape skeletonization [276], [281] is based on an “edge-strength” function whose level curves are smoothed analogs of the successive shape outlines obtained during mathematical morphologic erosion. Such methods are robust with respect to noise and gaps in the shape outline. Moreover, an efficient computational solution is achieved using the linear differential of the governing function. The above approaches use continuous representations of object boundaries. A thorough review of the above continuous approaches to skeletonization, together with several discrete skeletonization algorithms, was presented in a book edited by Siddiqi and Pizer [37]. The most primitive, yet popular, skeletonization approach is based on simulating Blum’s grassfire propagation in a digital grid as an iterative erosion method under certain digital topologic and geometric rules. In the rest of this section, we primarily confine our discussion to skeletonization algorithms falling under this category.

A. Digital Geometric Approaches

The literature on 2-D skeletonization using iterative digital erosion had matured by the early 1990s [51]; however, the same was not true in 3-D. Tsao and Fu [282] reported the first complete 3-D skeletonization algorithm. In terms of computational efficiency, skeletonization algorithms may be classified into sequential [283]–[290] and parallel [282], [291]–[295] algorithms. In terms of methods and principles, skeletonization algorithms may be roughly classified into three categories: (1) fully predicate-kernel based iterative algorithms [284], [287], [290]; (2) structured iterative algorithms [282], [284], [293]; and (3) distance transform, or DT, based algorithms [285], [286], [289]. The first kernel-based 3-D skeletonization algorithm was reported by Palágyi et al. [287] and was further improved in [290]. Using iterative boundary peeling, Tsao and Fu [282] and others [284], [293] presented a structured approach to 3-D skeletonization by separately modeling conditions for local shape and topology preservation, and other issues, e.g., sharpness at corners. Others have used the DT for skeletonization [285], [286], [288], [289]. Arcelli and Sanniti di Baja [283] introduced the notion of DT-based skeletonization in 2-D and discussed its advantages. Borgefors et al. [286], [296] applied the DT to 3-D skeletonization. Saito and Toriwaki [297] introduced the idea of using the DT to define the


Fig. 12. Illustration of independent fire fronts meeting at (a) surface- and (b) curve-like quench voxels.

sequence of voxel erosion, which has been further studied by others [285], [289]. Recently, Jin and Saha [135] presented a fuzzy grassfire propagation that erodes voxels in a fuzzy subset in the sequence of their FDT values. A major advantage of a DT-based approach is that it avoids repetitive image scans and thus improves computational efficiency. Also, a DT-driven voxel erosion strategy, together with a suitable choice of distance metric [115], makes the skeletons more robust under image rotation. A drawback of this approach, however, is that such algorithms are difficult to parallelize. Others have applied mathematical morphologic approaches to skeletonization [298]. Several skeletonization algorithms [29], [31], [299]–[301] have been tailored for elongated structures and directly generate curve skeletons from a volumetric object. Although a few works on gray-scale and fuzzy skeletonization were presented earlier [15], [276], [281], the fundamental issues and a comprehensive skeletonization algorithm for fuzzy digital objects have been reported only recently [135]. In general, a skeletonization algorithm consists of three major modules: primary skeletonization, final skeletonization, and pruning. The notion of primary and final skeletonization was simultaneously introduced by Saha et al. [248], [293] and Arcelli and Sanniti di Baja [132]. Saha et al. described the two steps in a 3-D skeletonization algorithm using iterative boundary erosion, while Arcelli and Sanniti di Baja presented the idea in 2-D using a DT-based method. Primary skeletonization produces a two-voxel thick “thin set” [283] representing the medial axis of the object, where every voxel has at least one background neighbor (except at very busy intersections of surfaces and curves). A scan-independent choice of quench voxels always produces two-voxel thick skeletons where the object has a thickness of an even number of voxels.
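A minimal 2-D analogue of iterative erosion under a local topology test is sketched below. This is illustrative only (not any of the cited published algorithms): it uses the classical crossing-number condition, which is sufficient but slightly conservative, preserves curve end points by requiring more than one object neighbor, and assumes the object does not touch the image border.

```python
import numpy as np

def crossing_number_ok(nb):
    """nb: 3x3 boolean neighborhood, center assumed object. True when
    the 8 neighbors, read circularly, form exactly one run of object
    pixels (crossing number 1) -- the classical local topology test
    used in sequential 2-D thinning."""
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    ring = [bool(nb[i, j]) for i, j in order]
    return sum(ring[k] and not ring[(k + 1) % 8] for k in range(8)) == 1

def thin(img):
    """Sequential 2-D thinning sketch: repeatedly delete object pixels
    that have a background 4-neighbor, pass the crossing-number test,
    and have more than one object neighbor (so end points survive)."""
    img = img.astype(bool).copy()
    changed = True
    while changed:
        changed = False
        for i in range(1, img.shape[0] - 1):
            for j in range(1, img.shape[1] - 1):
                if not img[i, j]:
                    continue
                nb = img[i - 1:i + 2, j - 1:j + 2]
                if nb[0, 1] and nb[1, 0] and nb[1, 2] and nb[2, 1]:
                    continue  # interior pixel: deletion would open a hole
                n = nb.sum() - 1
                if n > 1 and crossing_number_ok(nb):
                    img[i, j] = False
                    changed = True
    return img
```

A one-pixel-wide curve is already stable under this erosion, while a filled blob is reduced to a thin subset of itself.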
The primary skeleton is further eroded during final skeletonization to generate one-voxel thick skeletons. Centers of maximal balls, or CMBs (see Section III-A), are used to define the quench voxels. Depending upon the geometry of the local DT or FDT map, two types of quench voxels, namely surface- and curve-quench voxels, may be identified (Fig. 12) [135], [293]. A surface quench voxel is formed when two opposite fire fronts meet, while a curve quench voxel is formed when fire fronts meet from all directions on a plane. Identification of the two different types of quench voxels plays an important role in tailoring the kernels of local shape significance functions for the two topologically distinct conditions [135]. During final skeletonization, two-voxel thick structures are

Fig. 13. Results of skeletonization [135] on two anatomic structures, fuzzily segmented from acquired images. (a) Lateral cerebral ventricles segmented from a human brain MR image. (b) An axial image slice illustrating the fuzziness and noise. (c) Surface rendering of the fuzzy skeleton. (d-f) Same as (a-c) but for micro-CT image data of a human tooth.

eroded under topology preservation and some additional geometric constraints [132], [289], [293] that maintain the overall shape of the skeleton. The purpose of skeletal pruning is to detect and eliminate noisy branches based on a predefined measure of global shape significance [28], [135], [289]. Németh et al. [290] have recommended an iteration-by-iteration smoothing approach to improve the quality of final skeletons. A few examples of skeletonization results are presented in Fig. 13. For topology preservation during skeletonization, constraints of 3-D simple points (see Section VI-A) are applied while eroding voxels.

B. Parallelization

The first 3-D parallel skeletonization algorithm was presented by Tsao and Fu [282]. The primary challenge in parallel skeletonization is that, although simple point characterizations guarantee topology preservation under sequential erosion, they fail to ensure topology preservation when a set of simple points is deleted in parallel. Several parallelization strategies are reviewed in the following. In a subiterative parallelization approach, an iteration is divided into several subiterations. Two schemes have been adopted for dividing an iteration into subiterations: one based on the direction of the open face(s) of boundary voxels [282], [284], [287], [291], [294], [302]–[304] and one based on subfield partitioning of the image grid [293], [295], [305], [306]. In the first scheme, an image is divided into six directional subsets, e.g., north, south, east, west, top, and bottom. In general, directional subdivision cannot eliminate all ambiguous simple sets, and additional restricting conditions are required to ensure topology preservation. Algorithms are available under this category where different numbers of subiterations are used to complete one iteration, e.g., 12-subiteration [287], 8-subiteration [307], 3-subiteration [308], and fully parallel [309] algorithms.
The premise behind the subfield-based parallelization scheme is to divide the image space such that


Fig. 14. Two applications of skeletonization. (a) Maximum intensity projection (MIP) of hepatic circulation in an MRA with superimposed arterial paths and branching patterns detected using gray-scale skeletonization (from [15]). (b) Skeleton of the extracted vessels from a CT dataset. The hepatic veins (blue) are separated from the portal vein automatically (from [9]).

simplicity of any voxel is independent of the object/non-object configuration of any other voxel in the same subfield [305]. Thus, eight subfields are usually defined in the 3-D cubic grid [293]. Ma et al. [295] presented a 4-subfield parallel skeletonization algorithm for the 3-D cubic grid and established its topology preservation property. Ronse [310], [311] introduced the fundamental notion of minimal non-simple sets in 2-D to study the conditions under which pixels may be removed in parallel while preserving topology. In particular, he showed that a set of pixels D and its proper subsets are all co-deletable in a binary image if each singleton and each pair of 8-adjacent pixels in D is co-deletable. Others established similar results [292], [312]–[314] for 2-D, 3-D, and 4-D binary digital images. Such strategies lead to fully parallel skeletonization with topology preservation constraints defined over an extended neighborhood beyond the 3 × 3 × 3 neighborhood. Ma and Sonka presented the first fully parallel skeletonization algorithm in 3-D. Lohou [315] and Wang and Basu [316] reported violations of topology preservation and counterexamples for Ma and Sonka’s algorithm. Later, Lohou and Dehos [317] presented an automatic correction of Ma and Sonka’s algorithm using P-simple points [318]. Kong [319] extended results on minimal non-simple sets to polytopal complex representations. Bertrand [318] introduced a new interpretation of simple points, referred to as P-simple points, which guarantees topology preservation even under parallel deletion. A parallel skeletonization scheme based on P-simple points proceeds in two steps. In the first step, a set of voxels D is identified that may be considered for deletion based on the geometric requirements of the skeletonization algorithm. In the second step, the voxels of D that are P-simple points are deleted in parallel.
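The parity-based 8-subfield partition of the cubic grid described above can be stated in a few lines (an illustrative sketch of the partition itself, not of any specific algorithm; the function name is ours):

```python
from itertools import product

def subfield(x, y, z):
    """8-subfield partition of the 3-D cubic grid: the subfield index is
    the parity vector of the coordinates. Two distinct voxels of the
    same subfield differ by at least 2 in some coordinate, hence are
    never 26-adjacent, so their simple-point tests do not interact and
    they may be examined in parallel."""
    return (x % 2, y % 2, z % 2)

# sanity check: no two distinct voxels of a subfield are 26-adjacent
voxels = list(product(range(4), repeat=3))
for p in voxels:
    for q in voxels:
        if p != q and subfield(*p) == subfield(*q):
            assert max(abs(a - b) for a, b in zip(p, q)) >= 2
```

One full iteration then cycles through the eight subfields, deleting (in parallel within each subfield) the voxels that are simple in the current image.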
Depending upon the propagation strategy, a parallel skeletonization algorithm using P-simple points may be either directional [320]–[322] or symmetric [323]. Bertrand and Couprie [92], [324] developed another parallel skeletonization scheme for cubical complex representations of binary digital images using the notion of the critical kernel of a complex.

C. Applications and Evaluation

Skeletonization has been widely used in different medical imaging applications, including arterial motion in cardiac

angiographic image sequences [325], [326], feature detection in mammograms [23], [27], vessel segmentation in MR angiography [15], [327], mathematical morphologic image interpolation [328], object characterization [278], stenosis detection [29]–[32], [329]–[331], path finding in colonoscopy [19], [21], [332] and bronchoscopy [18], [20], tree analysis in surgical planning [9] and pulmonary imaging [333], and local structure analysis in trabecular bone imaging [24], [28], [75], [263], [264], [268], [334], [335]. Also, skeletonization has been used to establish correspondences of matching landmarks in a cardiac angiographic image sequence for computing arterial motion [325], [326]. Kobatake and Yoshinaga [23] applied skeletonization to detect spicules on mammograms, which are useful for identifying malignant tumors. Zwiggelaar et al. [27] used skeletonization for detecting linear structures in mammograms and for classifying them into different anatomical types, e.g., vessels, spicules, ducts, etc. Yim et al. [15] developed a gray-scale skeletonization algorithm for interactive detection of small vessels and determination of their branching patterns (Fig. 14(a)). Chatzis and Pitas [328] demonstrated an application of skeletonization to image interpolation. Selle et al. [9] applied skeletonization to assess the structural properties of the hepatic vessels and their relationship with tumors for liver surgical planning; see Fig. 14(b). Tschirren et al. [333] developed a method using curve skeletonization for automated matching of corresponding branch points between two human airway trees, as well as for assigning anatomical names or labels to different branches in a human airway tree, acquired through in vivo CT imaging. Performance evaluation of a skeletonization algorithm is a non-trivial challenge, arising from the lack of a definition of the “true” skeleton of a digital object. Haralick [336] discussed several guidelines for evaluating skeletonization algorithms.
The Rotterdam coronary axis tracking evaluation group [330] reported results of a comprehensive evaluation study of centerline detection algorithms for the coronary vasculature using consensus centerlines from multiple observers. Greenspan et al. [31] reported an evaluation study of centerline extraction algorithms in quantitative coronary angiography using simulated coronary arteries with known centerlines. A comprehensive and consensus framework for evaluating skeletonization algorithms, providing structured knowledge of their performance in various respects, e.g., generation of spurious branches, missing true branches, sharpness of corners, smoothness, etc., under different imaging conditions, is yet to emerge. However, based on the methods and principles of different skeletonization algorithms, one may anticipate their expected behaviors, which is useful in selecting an appropriate type of skeletonization algorithm for a specific application. Obviously, parallel skeletonization algorithms have an added advantage in terms of computational efficiency. At the same time, such algorithms face additional challenges in maintaining the expected geometry of the skeleton due to the asynchronous sequence of simple point removal. Parallel skeletonization algorithms can be combined with a priority queue on the distance transform to obtain a symmetric skeleton; see [337] for a recent example of such a strategy. DT-based approaches are expected to show a better performance



Fig. 15. (a) A binary image with O as the background. A and B are two object components and C and D are two non-object components inside A; C and D correspond to two 2-D holes in A. (b) The adjacency tree of (a). (c) The unfolding transform of (a) as derived from its adjacency tree in (b).

under rotation and at sharp corners due to the underlying robust strategy of using the DT, or depth measure, to define the peel sequence. However, it is difficult to make such a distinction between DT-based and iterative skeletonization algorithms in terms of performance under noise. In the context of eliminating noisy branches, an algorithm based on an assessment of the global significance of a branch is expected to be superior to approaches based on local decisions during peeling.

VIII. OBJECT CHARACTERIZATION

A wide range of topological and geometric approaches have been applied to represent and characterize various properties of digital objects. Here, we group these methods under one of the following three categories: (1) object representation; (2) quantitative characterization of object morphology; and (3) topological correction of segmented anatomic objects.

A. Object Representation

Various models have been used to represent an object. The adjacency tree, or homotopic tree [58], [338], offers a tree-based representation of certain topological and spatial properties of connected components in a binary digital image, where each vertex represents a distinct foreground or background component and an edge between two vertices represents the spatial adjacency between the two corresponding components in the image (Fig. 15). The root of an adjacency tree always represents the unique background component. One of the two vertices on an edge represents an object component, while the other represents a background component, and the component at the higher end of the edge surrounds the component at the lower end. Two 2-D binary digital images are topologically equivalent if and only if their adjacency trees are equivalent. However, the same is not true in 3-D because an adjacency tree does not account for tunnels and knotting. An adjacency tree defines the nesting of the partitions in a binary image; see Fig. 15(b).
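To make the construction concrete, the following sketch (an illustrative Python fragment, not taken from any of the cited works) builds the adjacency tree of a small 2-D binary image whose border is background: foreground components are labeled with 8-connectivity, background components with 4-connectivity, and the tree is grown from the outer background component by breadth-first search.

```python
from collections import deque

N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def _label(img, value, conn, labels, next_id):
    """Flood-fill labelling of all components of `value` pixels."""
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            if img[y][x] == value and labels[y][x] < 0:
                labels[y][x] = next_id
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for dy, dx in conn:
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and img[ny][nx] == value and labels[ny][nx] < 0:
                            labels[ny][nx] = next_id
                            q.append((ny, nx))
                next_id += 1
    return next_id

def adjacency_tree(img):
    """Return ({component: parent}, label image) for a 2-D binary image
    whose border is background; the outer background component is the
    root and its parent is None."""
    h, w = len(img), len(img[0])
    labels = [[-1] * w for _ in range(h)]
    n = _label(img, 1, N8, labels, 0)   # 8-connected foreground components
    n = _label(img, 0, N4, labels, n)   # 4-connected background components
    # component adjacency via 4-adjacent pixel pairs with different labels
    adj = {c: set() for c in range(n)}
    for y in range(h):
        for x in range(w):
            for dy, dx in N4:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] != labels[y][x]:
                    adj[labels[y][x]].add(labels[ny][nx])
    # BFS from the outer background component recovers the nesting tree
    root = labels[0][0]
    parent, q = {root: None}, deque([root])
    while q:
        c = q.popleft()
        for d in adj[c]:
            if d not in parent:
                parent[d] = c
                q.append(d)
    return parent, labels
```

On an image like Fig. 15(a), a component nested at depth k in the image appears at level k of the returned tree, which is exactly the unfolding level of Fig. 15(c).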
The unfolding transform yields the level of a connected component in the adjacency tree (Fig. 15(c)). Using the unfolding transform, Keshet [339] defined a partial ordering of all possible binary images, leading to the formation of an adjacency lattice. He also formulated different mathematical morphologic operations, including erosion, dilation, gradients, top-hat transforms, conditional dilation/erosion, skeletons, reconstruction, etc., on an adjacency lattice. Stell and Worboys [340] studied different relations among adjacency trees in the context of various filtering operations. Heijmans [341] defined "grain operators" using adjacency trees. Monasse and Guichard [207] defined a "shape" as a connected component of a thresholded image after filling holes (cavities in 3-D) and formulated the notion of a tree of shapes. Ballester et al. [342] studied different properties of a tree of shapes and demonstrated their application to adaptive image quantization.

Fig. 16. Plate/rod classification of trabecular bone. (a) Healthy female, (b) an age-similar, sex- and BMI-matched patient on continuous treatment with a selective serotonin re-uptake inhibitor, and (c) another age-similar, sex- and BMI-matched patient with cystic fibrosis. The healthy female has more trabecular bone plates (green) than both non-healthy participants. Between the two patients, the cystic fibrosis patient (c) shows some signs of heterogeneous bone loss.

Topological object decomposition involves detection of junctions and ungluing of topological segments in a skeleton, followed by volumetric reconstruction of the individual segments. It has been applied to vascular and airway trees [17], [18], [20] as well as to surface-like anatomic structures [26], [238], [268]. Szymczak et al. [17] applied topological segmentation of the vascular tree to clean up coronary vessel segmentations in cardiac CT scans. Branch decomposition and tree analysis were used for identification of anatomic branching patterns in human airway trees from CT imaging [18], [20]. Detection of junctions and ungluing of topological segments in a skeleton with both surfaces and curves involve complex rules using local topological numbers [258], [260]. Mangin et al. [26], [238] applied topological segmentation of the hollow object formed between the inner surface of the cortex and the brain hull for quantitative morphometric analysis of the cerebral cortex. Liu et al. [268] applied topological segmentation [258] for volumetric decomposition of individual trabeculae in a trabecular bone network.

B. Object Morphometry

Digital topology and geometry play important roles in the quantitative characterization of structural properties of objects, referred to as "object morphometry". Skeletonization has been used to measure structural features in mammograms [23], [27].
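As a toy illustration of the junction detection and ungluing steps underlying topological decomposition, the Python fragment below unglues a 2-D curve skeleton using a simple degree rule over 4-neighborhoods; the actual algorithms cited above rely on more careful local topological numbers [258], [260], so this is only a sketch of the idea.

```python
from collections import deque

N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def decompose_skeleton(skel):
    """Unglue a 2-D curve skeleton (a set of (y, x) pixels) at its
    junctions: pixels with three or more skeletal 4-neighbours are
    junction points; removing them and labelling the remaining
    connected pieces yields the individual topological segments."""
    def nbrs(p):
        return [(p[0] + dy, p[1] + dx) for dy, dx in N4
                if (p[0] + dy, p[1] + dx) in skel]
    junctions = {p for p in skel if len(nbrs(p)) >= 3}
    segments, label = {}, 0
    for p in skel:
        if p in junctions or p in segments:
            continue
        segments[p] = label          # start a new segment and flood it
        q = deque([p])
        while q:
            for nb in nbrs(q.popleft()):
                if nb not in junctions and nb not in segments:
                    segments[nb] = label
                    q.append(nb)
        label += 1
    return junctions, segments
```

For a T-shaped skeleton, the single degree-3 pixel is reported as the junction and the three arms come back as separate segments, which is the decomposition a subsequent volumetric reconstruction step would operate on.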
Hildebrand and Rüegsegger [75] used the diameter of the largest maximal included ball (MIB) to define local structure thickness and presented an efficient algorithm using the distance transform (DT). Saha et al. [25] used sampling of the fuzzy distance transform (FDT) at skeletal voxels to compute trabecular bone thickness at in vivo resolution. DT and digital geometric approaches have been applied to study the expression pattern of proteins in normal and tumor cells [343]. Feldkamp et al. [22] used the Euler-Poincaré characteristic to define the connectivity number of a trabecular bone network.
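In 2-D, the Euler characteristic underlying such connectivity measures can be computed purely from local 2×2 pixel patterns, following Gray's quad counts [80]. The sketch below (illustrative Python, not Feldkamp et al.'s 3-D implementation) returns the number of components minus the number of holes for either choice of foreground connectivity.

```python
def euler_2d(img, connectivity=8):
    """Euler characteristic (components minus holes) of a 2-D binary
    image via Gray's 2x2 quad counts [80]."""
    h, w = len(img), len(img[0])
    # pad with background so quads along the border are counted too
    p = [[0] * (w + 2)] + [[0] + list(r) + [0] for r in img] + [[0] * (w + 2)]
    q1 = q3 = qd = 0
    for y in range(h + 1):
        for x in range(w + 1):
            quad = (p[y][x], p[y][x + 1], p[y + 1][x], p[y + 1][x + 1])
            s = sum(quad)
            if s == 1:
                q1 += 1                 # exactly one foreground pixel
            elif s == 3:
                q3 += 1                 # exactly three foreground pixels
            elif s == 2 and quad[0] == quad[3]:
                qd += 1                 # diagonal pair
    if connectivity == 8:
        return (q1 - q3 - 2 * qd) // 4  # 8-connected foreground
    return (q1 - q3 + 2 * qd) // 4      # 4-connected foreground
```

A single pixel gives 1, a ring gives 0 (one component, one hole), and a diagonal pixel pair distinguishes the two connectivities: it is one component under 8-connectivity but two under 4-connectivity.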

Fig. 17. An MR image slice of a human brain. (a) Interfaces between white and gray matter regions and between gray matter regions and the cerebrospinal fluid are indicated with green and red lines, respectively. (b) 3-D rendition of the cortical surface. (c) Opposite fronts of a sulcus often get connected due to low resolution and noise. (d) A segmentation algorithm often generates a topologically incorrect anatomic surface with noisy tunnels/handles. This figure is reproduced from Fig. 1 in [40].

Others have used skeletonization and topological analysis to measure trabecular number, thickness, and spacing in a trabecular bone network [334], [335]. Saha et al. [24], [258], [263] applied local digital topological analysis to define the surface-to-curve ratio and an erosion index of a trabecular bone network, which have been applied in several human bone studies [264]–[266]; see Fig. 16.

A common strategy for local morphometric analysis is to measure the target local properties, e.g., thickness, orientation, plate-rod topology, etc., at representative sample points, e.g., skeletal voxels, and then propagate them back to the entire volume. Properties measured at skeletal voxels benefit from symmetry, a larger context, and reduced effects of digitization and other artifacts. Additionally, such a strategy drastically improves computational efficiency. This strategy is related to the salience DTs of Rosin and West [128]. It leads to the notion of "feature propagation", where computed local features are propagated from a set of fewer representative sample points, e.g., skeletal voxels, to a larger set, e.g., the entire object volume. Bonnassie et al. [137] proposed a feature propagation algorithm that copies feature values of a skeletal voxel to all voxels within its MIB. A voxel may belong to multiple MIBs; thus, the final result depends on the scan order of skeletal voxels. Liu et al. [344] compared the performance of four different strategies of feature propagation, namely, (1) largest MIB, (2) smallest MIB, (3) nearest MIB, and (4) farthest circumference MIB, and demonstrated the advantages of using the farthest circumference MIB for feature propagation. The results would probably be more stable if the set of MIBs were first reduced as much as possible; see [345].

C. Object Correction

The topologic properties of anatomic structures are often known. Reconstruction of an accurate and topologically correct representation of an object has been thoroughly studied in the context of cortex segmentation from MRI [9], [33], [40], [238], [346]; see Fig. 17. Among others, Mangin et al. [238], Han et al. [33], and Dokládal et al. [346] adopted voxel-wise digital topologic and geometric approaches to correct a segmented cortical surface by eliminating noisy tunnels.

Fig. 18. Illustration of topological correction. (a) Original object, (b) the residue, i.e., the region eliminated by a mathematical morphologic opening, (c) conditional topological expansion (CTE), (d) adjacency graph among connected components of body and residual parts, (e,f) cycle breaking and corrected object reconstruction. This figure was redrawn following Fig. 4 in [33].

Due to image noise and other artifacts, the segmentation of an anatomic object may not comply with its known topology, and correction is essential. The common topological aberrations in 3-D are the creation of noisy cavities and tunnels. Filling cavities is quite straightforward [238]; however, the same does not hold for noisy tunnels. Mangin et al. [238] proposed homotopic multi-scale conditional dilation and erosion generating topologically correct segmentations of the union of gray matter, white matter, and cerebrospinal fluid from T1-weighted brain MRIs. Han et al. [33] proposed a retrospective algorithm for topology correction of the segmented cortical surface. They adopted an iterative multi-scale filtering approach, where, during each iteration, the method removes noisy handles at a given scale in both foreground and background structures; see Fig. 18. They applied a conditional topological expansion in which a residual point is transformed into a body point under a loose definition of topology preservation, referred to as a "nice point", where the preservation of tunnels or cavities is relaxed (see the region indicated by an arrow in Fig. 18(b)). Gao et al. [347] adopted persistent homology [348], [349] for segmentation of the papillary muscles and trabeculae from high-resolution cardiac CT through restoration of topological handles. Heijmans et al.
[350] introduced the notion of path openings and closings, which use families of structuring elements to correct noisy broken paths, and demonstrated their application to tracking elongated structures in biological images, e.g., vessels in retinal images.
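Returning to the DT-sampling and feature propagation strategy discussed under object morphometry, the following 2-D Python sketch combines a two-scan city-block DT with a largest-MIB propagation rule, one of the four strategies compared by Liu et al. [344]; the geometry (city-block balls, thickness as twice the MIB radius) is a simplification for illustration only.

```python
def cityblock_dt(img):
    """Two-scan city-block (4-neighbour) distance transform; each object
    pixel receives its distance to the nearest background pixel."""
    h, w = len(img), len(img[0])
    INF = h + w
    d = [[0 if img[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    for y in range(h):                       # forward scan
        for x in range(w):
            if d[y][x]:
                if y: d[y][x] = min(d[y][x], d[y - 1][x] + 1)
                if x: d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    for y in range(h - 1, -1, -1):           # backward scan
        for x in range(w - 1, -1, -1):
            if d[y][x]:
                if y + 1 < h: d[y][x] = min(d[y][x], d[y + 1][x] + 1)
                if x + 1 < w: d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d

def propagate_thickness(img, skeleton, dt):
    """Largest-MIB feature propagation: each object pixel receives the
    thickness (2 * radius) of the covering maximal included ball of the
    largest radius among the given skeletal pixels."""
    h, w = len(img), len(img[0])
    thick = [[0.0] * w for _ in range(h)]
    for sy, sx in skeleton:
        r = dt[sy][sx]                       # MIB radius at the skeletal pixel
        for y in range(max(0, sy - r), min(h, sy + r + 1)):
            for x in range(max(0, sx - r), min(w, sx + r + 1)):
                if img[y][x] and abs(y - sy) + abs(x - sx) < r:
                    thick[y][x] = max(thick[y][x], 2.0 * r)
    return thick
```

Taking the maximum over all covering balls makes the result independent of the scan order of skeletal pixels, which is exactly the scan-order dependence of the naive copy rule noted above.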

IX. CONCLUSION

A wide range of topics related to digital topology and geometry, including distance analysis and path propagation, surface-tracking, image segmentation, border and centerline tracking, topology preservation and local properties, skeletonization, and object characterization, have been assembled here in the context of medical imaging applications. Together with literature studies of individual topics and their applications to medical imaging, the principles of state-of-the-art methods and their insights have been reviewed under each topic to encourage and benefit the understanding of the potential of digital topology and geometry and their applications. The focus of this survey has remained on theories and algorithms that are directly developed in digital spaces, in contrast to those that digitally approximate continuous mathematics or algorithms. Most of the theories and algorithms under purely digital approaches are built on simple notions such as path connectivity, path propagation, path cost, and neighborhood analysis. This simplicity in their foundation enhances the stability, efficiency, and inter-links among the theories, algorithms, and results. A major challenge with the inherently digital approach is that the precision of a method is often determined by the image resolution, and derived measures may suffer reduced smoothness due to digital effects. In the following, we draw our concluding remarks for the different topics discussed in this survey.





• A good selection of distance transforms is available. Which one to choose depends, of course, on the algorithm of which it is going to be a part. Always choosing the Euclidean DT, in the mistaken belief that it must be the best, can lead to unnecessarily complex algorithms whose end results are no better than those an algorithm using an integer-weighted DT would produce. The DT notion has been used in many medical imaging algorithms, either instead of iterative processes or on its own merit. The most common applications are probably skeletonization and watershed algorithms, but mathematical morphologic operations, other shape manipulation, compression, template matching, path-finding, Voronoi tessellation, and shape-based image interpolation should also be mentioned.

• Efficient algorithms are available for computing object components and surface boundaries in binary images. Component labeling is an essential step in most medical imaging applications involving some form of object definition. Fast computation of surface boundaries leads to efficient object rendering with interactive manipulation and virtual reality for surgery.

• Three connectivity-based segmentation approaches, namely, fuzzy connectivity, watersheds, and connected operators, have been extensively used in medical image analysis. Fuzzy connectivity is governed by a local "affinity" function that aims to fill an object region through adjacent voxels without leaking into other objects. Watersheds and connected operators provide structured representations of the topography of an image. Such methods are suitable for hierarchical multi-scale object segmentation through effective split-and-merge or tree-pruning strategies.









• Minimal path approaches have been widely used in image segmentation via border or centerline tracking. Unlike the other topological segmentation approaches, which rely on the connectivity property of paths, minimal path approaches utilize the metric property of a path. Such strategies, along with an efficient user interface, lead to an effective segmentation tool.

• Even though it is difficult to define topological equivalence among 3-D digital objects, topology preservation under voxel-by-voxel operations in a binary image is well defined by the notion of 3-D simple points. Local topological numbers are useful for characterization of topological elements, e.g., surfaces, curves, edges, junctions, etc., which leads toward understanding architectural properties of anatomic structures. The Euler characteristic, a topological descriptor, has been used for topological correction of the segmented human cortex and for characterization of object connectivity, e.g., of trabecular bone.

• Significant research efforts have been dedicated to enhancing the performance of skeletonization algorithms in terms of noisy branches, sharpness at corners, invariance under image transformations, reconstruction of original objects, etc. Creation of noisy skeletal branches has remained a major challenge, and a comprehensive solution to this problem is yet to emerge. Skeletonization has been widely applied in various medical imaging areas, including pulmonary, cardiac, mammographic, abdominal, retinal, and bone imaging. Validation of skeletonization algorithms is a challenging issue, and a comprehensive and consensus framework for evaluation is yet to emerge.

• Digital topologic and geometric features have been used in a considerable number of methods for characterizing various aspects of an object.
While a comprehensive representation of 3-D objects by topologic trees is difficult, methods are available for decomposition of 3-D digital objects into meaningful topological segments, benefiting quantitative object characterization and correction. Topological and geometrical approaches have been combined to quantitatively assess object morphometry and local structural properties of an object.

ACKNOWLEDGMENT

The strategic fund of the Dept. of Information Technology at Uppsala University is thankfully acknowledged for supporting this co-operation. The authors wish to convey special thanks to Prof. Sanniti di Baja for her thoughtful comments and suggestions on Section VII. Also, the authors wish to thank Prof. Nyström for her encouragement and for providing references to several important works during the preparation of this manuscript. Finally, we wish to thank the reviewers who provided many useful insights and references.

REFERENCES

[1] M. Sonka and J. M. Fitzpatrick, Handbook of Medical Imaging Volume 2 – Medical Image Processing and Analysis. Bellingham, WA: SPIE Press, 2000.
[2] A. Rosenfeld and J. Pfaltz, "Distance functions in digital pictures," Pat Recog, vol. 1, pp. 33–61, 1968.

[3] G. Borgefors, "Distance transform in arbitrary dimensions," Comp Vis Graph Imag Proc, vol. 27, pp. 321–345, 1984.
[4] ——, "Distance transformations in digital images," Comp Vis Graph Imag Proc, vol. 34, pp. 344–371, 1986.
[5] P. K. Saha, F. W. Wehrli, and B. R. Gomberg, "Fuzzy distance transform: Theory, algorithms, and applications," Comp Vis Imag Und, vol. 86, pp. 171–190, 2002.
[6] P. K. Saha, G. Liang, J. M. Elkins, A. Coimbra, L. T. Duong, D. S. Williams, and M. Sonka, "A new osteophyte segmentation algorithm using partial shape model and its applications to rabbit femur anterior cruciate ligament transection via micro-CT imaging," IEEE Trans Biomed Eng, vol. 58, pp. 2212–2227, 2011.
[7] S. P. Raya and J. K. Udupa, "Shape-based interpolation of multidimensional objects," IEEE Trans Med Imag, vol. 9, pp. 32–42, 1990.
[8] J. K. Udupa and G. T. Herman, 3D Imaging in Medicine. Boca Raton, FL: CRC Press, 1991.
[9] D. Selle, B. Preim, A. Schenk, and H.-O. Peitgen, "Analysis of vasculature for liver surgical planning," IEEE Trans Med Imag, vol. 21, pp. 1344–1357, 2002.
[10] R. Beichel, A. Bornik, C. Bauer, and E. Sorantin, "Liver segmentation in contrast enhanced CT data using graph cuts and interactive 3D segmentation refinement methods," Med Phys, vol. 39, no. 3, pp. 1361–73, 2012.
[11] S. Beucher and C. Lantuejoul, "Use of watersheds in contour detection," in Proc of Int Workshop Imag Proc: Real-Time Edge Motion Detect/Estimat, Rennes, France, 1979, pp. 17–21.
[12] J. K. Udupa and S. Samarasekera, "Fuzzy connectedness and object definition: Theory, algorithms, and applications in image segmentation," Graph Mod Imag Proc, vol. 58, pp. 246–261, 1996.
[13] W. A. Barrett and E. N. Mortensen, "Interactive live-wire boundary extraction," Med Imag Anal, vol. 1, pp. 331–341, 1997.
[14] A. Falcão, J. K. Udupa, S. Samarasekera, and S. Sharma, "User-steered image segmentation paradigms: Live wire and live lane," Graph Mod Imag Proc, vol. 60, pp. 233–260, 1998.
[15] P. J. Yim, P. L. Choyke, and R. M. Summers, "Gray-scale skeletonization of small vessels in magnetic resonance angiography," IEEE Trans Med Imag, vol. 19, no. 6, pp. 568–76, 2000.
[16] L. Cohen, Minimal Paths and Fast Marching Methods for Image Analysis. Springer, 2006, pp. 97–111.
[17] A. Szymczak, A. Stillman, A. Tannenbaum, and K. Mischaikow, "Coronary vessel trees from 3D imagery: A topological approach," Med Image Anal, vol. 10, no. 4, pp. 548–59, 2006.
[18] K. Mori, J. Hasegawa, Y. Suenaga, and J. Toriwaki, "Automated anatomical labeling of the bronchial branch and its application to the virtual bronchoscopy system," IEEE Trans Med Imag, vol. 19, no. 2, pp. 103–14, 2000.
[19] M. Wan, Z. Liang, Q. Ke, L. Hong, I. Bitter, and A. Kaufman, "Automatic centerline extraction for virtual colonoscopy," IEEE Trans Med Imag, vol. 21, pp. 1450–1460, 2002.
[20] A. P. Kiraly, J. P. Helferty, E. A. Hoffman, G. McLennan, and W. E. Higgins, "Three-dimensional path planning for virtual bronchoscopy," IEEE Trans Med Imag, vol. 23, no. 11, pp. 1365–79, 2004.
[21] D. G. Kang and J. B. Ra, "A new path planning algorithm for maximizing visibility in computed tomography colonography," IEEE Trans Med Imag, vol. 24, no. 8, pp. 957–68, 2005.
[22] L. A. Feldkamp, S. A. Goldstein, A. M. Parfitt, G. Jesion, and M. Kleerekoper, "The direct examination of three-dimensional bone architecture in vitro by computed tomography," J Bone Min Res, vol. 4, pp. 3–11, 1989.
[23] H. Kobatake and Y. Yoshinaga, "Detection of spicules on mammogram based on skeleton analysis," IEEE Trans Med Imag, vol. 15, no. 3, pp. 235–45, 1996.
[24] P. K. Saha, B. R. Gomberg, and F. W. Wehrli, "Three-dimensional digital topological characterization of cancellous bone architecture," Int J Imag Sys Tech, vol. 11, pp. 81–90, 2000.
[25] P. K. Saha and F. W. Wehrli, "Measurement of trabecular bone thickness in the limited resolution regime of in vivo MRI by fuzzy distance transform," IEEE Trans Med Imag, vol. 23, pp. 53–62, 2004.
[26] J. F. Mangin, D. Riviere, A. Cachia, E. Duchesnay, Y. Cointepas, D. Papadopoulos-Orfanos, D. L. Collins, A. C. Evans, and J. Regis, "Object-based morphometry of the cerebral cortex," IEEE Trans Med Imag, vol. 23, no. 8, pp. 968–82, 2004.
[27] R. Zwiggelaar, S. M. Astley, C. R. Boggis, and C. J. Taylor, "Linear structures in mammographic images: Detection and classification," IEEE Trans Med Imag, vol. 23, no. 9, pp. 1077–1086, 2004.

[28] P. K. Saha, Y. Xu, H. Duan, A. Heiner, and G. Liang, "Volumetric topological analysis: A novel approach for trabecular bone classification on the continuum between plates and rods," IEEE Trans Med Imag, vol. 29, no. 11, pp. 1821–1838, 2010.
[29] M. Sonka, M. D. Winniford, X. Zhang, and S. M. Collins, "Lumen centerline detection in complex coronary angiograms," IEEE Trans Biomed Eng, vol. 41, no. 6, pp. 520–8, 1994.
[30] M. Sonka, M. D. Winniford, and S. M. Collins, "Robust simultaneous detection of coronary borders in complex images," IEEE Trans Med Imag, vol. 14, no. 1, pp. 151–61, 1995.
[31] H. Greenspan, M. Laifenfeld, S. Einav, and O. Barnea, "Evaluation of center-line extraction algorithms in quantitative coronary angiography," IEEE Trans Med Imag, vol. 20, pp. 928–941, 2001.
[32] E. Sorantin, C. Halmai, B. Erdohelyi, K. Palágyi, L. G. Nyúl, K. Olle, B. Geiger, F. Lindbichler, G. Friedrich, and K. Kiesler, "Spiral-CT-based assessment of tracheal stenoses using 3-D-skeletonization," IEEE Trans Med Imag, vol. 21, no. 3, pp. 263–73, 2002.
[33] X. Han, C. Xu, U. Braga-Neto, and J. L. Prince, "Topology correction in brain cortex segmentation using a multiscale, graph-based algorithm," IEEE Trans Med Imag, vol. 21, no. 2, pp. 109–21, 2002.
[34] G. E. Christensen, R. D. Rabbitt, and M. I. Miller, "Deformable templates using large deformation kinematics," IEEE Trans Imag Proc, vol. 5, no. 10, pp. 1435–1447, 1996.
[35] M. F. Beg, M. I. Miller, A. Trouvé, and L. Younes, "Computing large deformation metric mappings via geodesic flows of diffeomorphisms," Int Journal Comput Vis, vol. 61, no. 2, pp. 139–157, 2005.
[36] T. Vercauteren, X. Pennec, A. Perchant, and N. Ayache, "Diffeomorphic demons: Efficient non-parametric image registration," NeuroImage, vol. 45, no. 1, pp. S61–S72, 2009.
[37] K. Siddiqi and S. M. Pizer, Medial Representations: Mathematics, Algorithms and Applications. Springer, 2008, vol. 37.
[38] S. M. Pizer, P. T. Fletcher, S. Joshi, A. Thall, J. Z. Chen, Y. Fridman, D. S. Fritsch, A. G. Gash, J. M. Glotzer, M. R. Jiroutek et al., "Deformable m-reps for 3D medical image segmentation," Int Journal Comput Vis, vol. 55, no. 2-3, pp. 85–106, 2003.
[39] H. Yoshida and J. Nappi, "Three-dimensional computer-aided diagnosis scheme for detection of colonic polyps," IEEE Trans Med Imag, vol. 20, no. 12, pp. 1261–1274, 2001.
[40] F. Ségonne, J. Pacheco, and B. Fischl, "Geometrically accurate topology-correction of cortical surfaces using nonseparating loops," IEEE Trans Med Imag, vol. 26, no. 4, pp. 518–29, 2007.
[41] T. Y. Kong and A. Rosenfeld, "Digital topology: Introduction and survey," Comp Vis Graph Imag Proc, vol. 48, pp. 357–393, 1989.
[42] ——, Topological Algorithms for Digital Image Processing. Elsevier, 1996.
[43] R. Klette and A. Rosenfeld, Digital Geometry: Geometric Methods for Digital Picture Analysis. San Francisco, CA: Morgan Kaufmann, 2004.
[44] A. Rosenfeld and R. A. Melter, "Digital geometry," The Math Intel, vol. 11, no. 3, pp. 69–72, 1989.
[45] A. McAndrew and C. Osborne, "A survey of algebraic methods in digital topology," J Math Imag Vis, vol. 6, no. 2-3, pp. 139–159, 1996.
[46] P. Salembier and M. H. Wilkinson, "Connected operators," IEEE Sig Proc Mag, vol. 26, no. 6, pp. 136–157, 2009.
[47] A. Okabe, B. Boots, and K. Sugihara, Spatial Tessellations: Concepts and Applications of Voronoi Diagrams. Chichester, UK: Wiley, 1992.
[48] D. Cohen-Or and A. Kaufman, "3D line voxelization and connectivity control," IEEE Comp Graph Appl, vol. 17, no. 6, pp. 80–87, 1997.
[49] J. K. Udupa and P. K. Saha, "Fuzzy connectedness and image segmentation," Proc of the IEEE, vol. 91, pp. 1649–1669, 2003.
[50] G. Peyré, M. Péchaud, R. Keriven, and L. D. Cohen, "Geodesic methods in computer vision and graphics," Found Trend Comp Graph Vis, vol. 5, no. 3-4, pp. 197–397, 2010.
[51] L. Lam, S.-W. Lee, and C. Y. Suen, "Thinning methodologies - a comprehensive survey," IEEE Trans Patt Anal Mach Intell, vol. 14, no. 9, pp. 869–885, 1992.
[52] R. Klette and A. Rosenfeld, "Digital straightness – a review," Discr Appl Math, vol. 139, no. 1, pp. 197–230, 2004.
[53] R. S. Montero and E. Bribiesca, "State of the art of compactness and circularity measures," in Proc of Int Math Forum, vol. 4, 2009, pp. 1305–1335.
[54] R. P. Barneva and V. E. Brimkov, Digital Geometry and Its Applications to Medical Imaging. Netherlands: Springer, 2009, vol. 13, pp. 77–92.
[55] T. M. Lehmann, C. Gonner, and K. Spitzer, "Survey: Interpolation methods in medical image processing," IEEE Trans Med Imag, vol. 18, pp. 1049–1075, 1999.

[56] P. Thevenaz, T. Blu, and M. Unser, "Interpolation revisited," IEEE Trans Med Imag, vol. 19, no. 7, pp. 739–58, 2000.
[57] H. W. Meijering, W. J. Niessen, and M. A. Viergever, "Quantitative evaluation of convolution-based methods for medical image interpolation," Med Imag Anal, vol. 5, pp. 111–126, 2001.
[58] J. Serra, Image Analysis and Mathematical Morphology. London: Academic Press, 1982.
[59] ——, Image Analysis and Mathematical Morphology, Vol II: Theoretical Advances. New York, NY: Academic Press, 1988.
[60] A. Rosenfeld and J. Pfaltz, "Sequential operations in digital picture processing," J Assoc Comp Mach, vol. 13, pp. 471–494, 1966.
[61] M. Minsky and S. Papert, Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: MIT Press, 1969.
[62] J. Sklansky, "Recognition of convex blobs," Pat Recog, vol. 2, no. 1, pp. 3–10, 1970.
[63] C. E. Kim and A. Rosenfeld, "Digital straight lines and convexity of digital regions," IEEE Trans Patt Anal Mach Intell, no. 2, pp. 149–153, 1982.
[64] J. E. Bresenham, "Algorithm for computer control of a digital plotter," IBM Sys Journ, vol. 4, no. 1, pp. 25–30, 1965.
[65] A. Rosenfeld, "Digital straight line segments," IEEE Trans Comp, vol. 100, no. 12, pp. 1264–1269, 1974.
[66] L. Dorst and A. W. Smeulders, "Discrete representation of straight lines," IEEE Trans Patt Anal Mach Intell, no. 4, pp. 450–463, 1984.
[67] A. Rosenfeld, "Compact figures in digital pictures," IEEE Trans Sys Man Cyber, no. 2, pp. 221–223, 1974.
[68] Z. Kulpa, "Area and perimeter measurement of blobs in discrete binary pictures," Comp Graph Imag Proc, vol. 6, no. 5, pp. 434–451, 1977.
[69] N. Sladoje, I. Nyström, and P. K. Saha, "Measurements of digitized objects with fuzzy borders in 2D and 3D," Imag Vis Comp, vol. 23, pp. 123–132, 2005.
[70] R. M. Haralick, "A measure for circularity of digital figures," IEEE Trans Sys Man Cyber, no. 4, pp. 394–396, 1974.
[71] A. Fujimoto, T. Tanaka, and K. Iwata, "ARTS: Accelerated ray-tracing system," IEEE Comp Graph Appl, vol. 6, no. 4, pp. 16–26, 1986.
[72] J. Amanatides and A. Woo, "A fast voxel traversal algorithm for ray tracing," in Proc of Eurographics, vol. 87, 1987, pp. 3–10.
[73] R. Yagel, D. Cohen, and A. Kaufman, "Discrete ray tracing," IEEE Comp Graph Appl, vol. 12, no. 5, pp. 19–28, 1992.
[74] G. Windreich, N. Kiryati, and G. Lohmann, "Voxel-based surface area estimation: From theory to practice," Pat Recog, vol. 36, no. 11, pp. 2531–2541, 2003.
[75] T. Hildebrand and P. Rüegsegger, "A new method for the model independent assessment of thickness in three-dimensional images," J Micros, vol. 185, pp. 67–75, 1997.
[76] A. Rosenfeld, "Digital topology," Am Math Month, vol. 86, no. 8, pp. 621–630, 1979.
[77] R. O. Duda and J. H. Munson, "Graphical-data-processing research study and experimental investigation," Stanford Research Institute, Menlo Park, CA, Tech. Rep. AD657670, 1967.
[78] A. Rosenfeld, "Connectivity in digital pictures," J Assoc Comp Mach, vol. 17, pp. 146–160, 1970.
[79] J. Mylopoulos and T. Pavlidis, "On the topological properties of quantized spaces," J Assoc Comp Mach, vol. 18, no. 2, pp. 239–254, 1971.
[80] S. B. Gray, "Local properties of binary images in two dimensions," IEEE Trans Comp, vol. 100, no. 5, pp. 551–561, 1971.
[81] S. Yokoi, J.-I. Toriwaki, and T. Fukumura, "An analysis of topological properties of digitized binary pictures using local features," Comp Graph Imag Proc, vol. 4, no. 1, pp. 63–73, 1975.
[82] T. Y. Kong, "A digital fundamental group," Comp Graph, vol. 13, no. 2, pp. 159–166, 1989.
[83] L. Boxer, "A classical construction for the digital fundamental group," J Math Imag Vis, vol. 10, no. 1, pp. 51–62, 1999.
[84] R. Malgouyres, "Homotopy in two-dimensional digital images," Theo Comp Sc, vol. 230, no. 1, pp. 221–233, 2000.
[85] S.-E. Han, "Non-product property of the digital fundamental group," Info Sc, vol. 171, no. 1, pp. 73–91, 2005.
[86] L. Boxer and I. Karaca, "Some properties of digital covering spaces," J Math Imag Vis, vol. 37, no. 1, pp. 17–26, 2010.
[87] E. D. Khalimsky, "Topological structures in computer science," J Appl Math Simulat, vol. 1, pp. 25–40, 1987.
[88] V. A. Kovalevsky, "Finite topology as applied to image processing," Comp Vis Graph Imag Proc, vol. 46, pp. 141–161, 1989.
[89] T. Y. Kong, "Topology-preserving deletion of 1's from 2-, 3- and 4-dimensional binary images," in Proc of Int Work Discr Geom Comp Imag. Montpellier, France: Springer, 1997, pp. 1–18.

[90] Y. Cointepas, I. Bloch, and L. Garnero, “A cellular model for multi-objects multi-dimensional homotopic deformations,” Pat Recog, vol. 34, no. 9, pp. 1785–1798, 2001.
[91] N. Passat, M. Couprie, and G. Bertrand, “Minimal simple pairs in the 3-D cubic grid,” J Math Imag Vis, vol. 32, no. 3, pp. 239–249, 2008.
[92] G. Bertrand and M. Couprie, “On parallel thinning algorithms: Minimal non-simple sets, P-simple points and critical kernels,” J Math Imag Vis, vol. 35, no. 1, pp. 23–35, 2009.
[93] M. J. Cardoso, M. J. Clarkson, M. Modat, and S. Ourselin, “On the extraction of topologically correct thickness measurements using Khalimsky’s cubic complex,” in Proc of Info Proc Med Imag (IPMI), G. Székely and H. Hahn, Eds., vol. LNCS 6801. Springer, 2011, pp. 159–170.
[94] V. Neumann-Lara and R. G. Wilson, “Compatible connectedness in graphs and topological spaces,” Order, vol. 12, no. 1, pp. 77–90, 1995.
[95] D. Nogly and M. Schladt, “Digital topology on graphs,” Comp Vis Imag Und, vol. 63, no. 2, pp. 394–396, 1996.
[96] A. Bretto, “Comparability graphs and digital topology,” Comp Vis Imag Und, vol. 82, no. 1, pp. 33–41, 2001.
[97] J. Cousty, G. Bertrand, L. Najman, and M. Couprie, “Watershed cuts: Minimum spanning forests and the drop of water principle,” IEEE Trans Patt Anal Mach Intell, vol. 31, no. 8, pp. 1362–1374, 2009.
[98] ——, “Watershed cuts: Thinnings, shortest path forests, and topological watersheds,” IEEE Trans Patt Anal Mach Intell, vol. 32, no. 5, pp. 925–939, 2010.
[99] L. Najman and J. Cousty, “A graph-based mathematical morphology reader,” Pat Recog Lett, 2014.
[100] A. Rosenfeld, “Adjacency in digital pictures,” Info Contr, vol. 26, pp. 24–33, 1974.
[101] H. Blum, “A transformation for extracting new descriptors of shape,” Model Percep Speech Vis Form, vol. 19, no. 5, pp. 362–380, 1967.
[102] U. Montanari, “A method for obtaining skeletons using a quasi-Euclidean distance,” J Assoc Comp Mach, vol. 15, pp. 600–624, 1968.
[103] J. Hilditch and D. Rutovitz, “Chromosome recognition,” Annals New York Acad Sc, vol. 157, pp. 339–364, 1969.
[104] C. O. Kiselman, “Regularity properties of distance transformations in image analysis,” Comp Vis Imag Und, vol. 64, no. 3, pp. 390–398, 1996.
[105] H. G. Barrow, J. M. Tenenbaum, R. C. Bolles, and H. C. Wolf, “Parametric correspondence and chamfer matching: Two new techniques for image matching,” in Proc of 5th Int Joint Conf Artificial Intel, Cambridge, USA, 1977, pp. 659–663.
[106] J. Piper and E. Granum, “Computing distance transformations in convex and non-convex domains,” Pat Recog, vol. 20, no. 6, pp. 599–615, 1987.
[107] P. E. Danielsson, “Euclidean distance mapping,” Comp Graph Imag Proc, vol. 14, pp. 227–248, 1980.
[108] H. Yamada, “Complete Euclidean distance mapping,” in Proc of 7th Int Conf Pat Recog, Montreal, Canada, 1984, pp. 69–71.
[109] I. Ragnemalm, “The Euclidean distance transform in arbitrary dimensions,” Pat Recog Lett, vol. 14, pp. 883–888, 1993.
[110] G. J. Grevera, “The dead reckoning signed distance transform,” Comp Vis Imag Und, vol. 95, pp. 317–333, 2004.
[111] D. Coeurjolly and A. Montanvert, “Optimal separable algorithms to compute the reverse Euclidean distance transformation and discrete medial axis in arbitrary dimension,” IEEE Trans Patt Anal Mach Intell, vol. 29, no. 3, pp. 437–448, 2007.
[112] Q. Z. Ye, “The signed Euclidean distance transform and its applications,” in Proc of 9th Int Conf Pat Recog, Rome, Italy, 1988, pp. 495–499.
[113] I. Nyström and G. Borgefors, “Synthesising objects and scenes using the reverse distance transformation in 2D and 3D,” in Proc of Image Analysis and Processing, Braccini, DeFloriani, and Vernazza, Eds., vol. LNCS 974. Berlin: Springer-Verlag, 1995, pp. 441–446.
[114] S. Svensson, I. Nyström, and G. Borgefors, “On reversible skeletonization using anchor points from distance transforms,” J Vis Comm Imag Repr, vol. 10, pp. 379–397, 1999.
[115] G. Borgefors, “On digital distance transformation in three dimensions,” Comp Vis Imag Und, vol. 64, pp. 368–376, 1996.
[116] N. Okabe, J. Toriwaki, and T. Fukumura, “Paths and distance functions in three dimensional digitized pictures,” Pat Recog Lett, vol. 1, pp. 205–212, 1983.
[117] S. Svensson and G. Borgefors, “Digital distance transforms in 3D images using information from neighbourhoods up to 5×5×5,” Comp Vis Imag Und, vol. 88, pp. 24–53, 2002.
[118] G. Borgefors, “Weighted digital distance transforms in four dimensions,” Discr Appl Math, vol. 125, no. 1, pp. 161–176, 2003.

0278-0062 (c) 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


[119] R. Strand, “Distance functions and image processing on point-lattices: With focus on the 3D face- and body-centered cubic grids,” Ph.D. dissertation, Uppsala University, 2008.
[120] D. F. Watson, “Computing the n-dimensional Delaunay tessellation with application to Voronoi polytopes,” The Comp J, vol. 24, no. 2, pp. 167–172, 1981.
[121] M. Tanemura, T. Ogawa, and N. Ogita, “A new algorithm for three-dimensional Voronoi tessellation,” J Comp Phy, vol. 51, no. 2, pp. 191–207, 1983.
[122] G. Levi and U. Montanari, “A gray-weighted skeleton,” Info Contr, vol. 17, pp. 62–91, 1970.
[123] P. Toivanen, “New geodesic distance transforms for grey-scale images,” Pat Recog Lett, vol. 17, pp. 437–450, 1996.
[124] L. Ikonen and P. Toivanen, “Shortest routes on varying height surfaces using grey-level distance transforms,” Imag Vis Comp, vol. 23, no. 2, pp. 133–141, 2005.
[125] C. Lantuejoul and F. Maisonneuve, “Geodesic methods in quantitative image analysis,” Pat Recog, vol. 17, no. 2, pp. 177–187, 1984.
[126] R. Cárdenes, C. Alberola-López, and J. Ruiz-Alzola, “Fast and accurate geodesic distance transform by ordered propagation,” Imag Vis Comp, vol. 28, pp. 307–316, 2010.
[127] B. Verwer, P. Verbeek, and S. Dekker, “An efficient uniform cost algorithm applied to distance transforms,” IEEE Trans Patt Anal Mach Intell, vol. 11, no. 4, pp. 425–429, 1989.
[128] P. L. Rosin and G. A. W. West, “Salience distance transforms,” Graph Mod Imag Proc, vol. 57, no. 6, pp. 483–521, 1995.
[129] G. Borgefors and S. Svensson, “Fuzzy border distance transforms and their use in 2D skeletonization,” in Proc of 16th Int Conf Pat Recog, vol. 1, Quebec City, Canada, 2002, pp. 180–183.
[130] P. K. Saha and F. W. Wehrli, “Fuzzy distance transform in general digital grids and its applications,” in Proc of 7th Joint Conf Info Sc, Research Triangle Park, NC, 2003, pp. 201–213.
[131] C. Arcelli and G. Sanniti di Baja, “Finding local maxima in a pseudo-Euclidean distance transform,” Comp Vis Graph Imag Proc, vol. 43, pp. 361–367, 1988.
[132] G. Sanniti di Baja, “Well-shaped, stable, and reversible skeletons from the (3,4)-distance transform,” J Vis Comm Imag Repr, vol. 5, pp. 107–115, 1994.
[133] G. Borgefors, “Centres of maximal discs in the 5-7-11 distance transform,” in Proc of 8th Scand Conf Imag Anal, Tromsø, Norway, 1993, pp. 105–111.
[134] E. Remy and E. Thiel, “Look-up tables for medial axis on squared Euclidean distance transform,” in Discrete Geometry for Computer Imagery, Nyström, Sanniti di Baja, and Svensson, Eds., vol. 2886. Berlin: Springer-Verlag, 2003, pp. 224–235.
[135] D. Jin and P. K. Saha, “A new fuzzy skeletonization algorithm and its applications to medical imaging,” in Proc of 17th Int Conf Image Anal Proc (ICIAP), vol. LNCS 8156, Naples, Italy, 2013, pp. 662–671.
[136] S. Svensson, “Aspects on the reverse fuzzy distance transform,” Pat Recog Lett, vol. 29, pp. 888–896, 2008.
[137] A. Bonnassie, F. Peyrin, and D. Attali, “A new method for analyzing local shape in three-dimensional images based on medial axis transformation,” IEEE Trans Syst Man Cybern B, vol. 33, no. 4, pp. 700–705, 2003.
[138] R. L. Ogniewicz, Discrete Voronoi Skeletons. Konstanz, Germany: Hartung-Gorre Verlag, 1993.
[139] G. J. Grevera and J. K. Udupa, “Shape-based interpolation of multidimensional grey-level images,” IEEE Trans Med Imag, vol. 15, pp. 881–892, 1996.
[140] T.-Y. Lee and W.-H. Wang, “Morphology-based three-dimensional interpolation,” IEEE Trans Med Imag, vol. 19, pp. 711–721, 2000.
[141] G. P. Penney, J. A. Schnabel, D. Rueckert, M. A. Viergever, and W. J. Niessen, “Registration-based interpolation,” IEEE Trans Med Imag, vol. 23, no. 7, pp. 922–926, 2004.
[142] G. T. Herman, J. Zheng, and C. A. Bucholtz, “Shape-based interpolation,” IEEE Comp Graph Appl, vol. 12, no. 3, pp. 69–79, 1992.
[143] W. E. Higgins, C. Morice, and E. L. Ritman, “Shape-based interpolation of tree-like structures in three-dimensional images,” IEEE Trans Med Imag, vol. 12, no. 3, pp. 439–450, 1993.
[144] F. Chang, C. J. Chen, and C. J. Lu, “A linear-time component-labeling algorithm using contour tracing technique,” Comp Vis Imag Und, vol. 93, no. 2, pp. 206–220, 2004.
[145] E. Artzy, G. Frieder, and G. T. Herman, “The theory, design, implementation and evaluation of a three-dimensional surface detection algorithm,” Comp Graph Imag Proc, vol. 15, no. 1, pp. 1–24, 1981.

[146] J. K. Udupa, S. N. Srihari, and G. T. Herman, “Boundary detection in multidimensions,” IEEE Trans Patt Anal Mach Intell, vol. 4, pp. 41–50, 1982.
[147] G. T. Herman and D. Webster, “A topological proof of a surface tracking algorithm,” Comp Vis Graph Imag Proc, vol. 23, no. 2, pp. 162–177, 1983.
[148] T. Y. Kong and J. K. Udupa, “A justification of a fast surface tracking algorithm,” CVGIP: Graph Mod Imag Proc, vol. 54, no. 2, pp. 162–170, 1992.
[149] D. Gordon and J. K. Udupa, “Fast surface tracking in three-dimensional binary images,” Comp Vis Graph Imag Proc, vol. 45, no. 2, pp. 196–214, 1989.
[150] A. Rosenfeld, “Fuzzy digital topology,” Info Contr, vol. 40, pp. 76–87, 1979.
[151] S. Dellepiane and F. Fontana, “Extraction of intensity connectedness for image processing,” Pat Recog Lett, vol. 16, pp. 313–324, 1995.
[152] L. Vincent and P. Soille, “Watersheds in digital spaces: An efficient algorithm based on immersion simulations,” IEEE Trans Patt Anal Mach Intell, vol. 13, pp. 583–598, 1991.
[153] S. Beucher and F. Meyer, The morphological approach to segmentation: The watershed transformation. New York, NY: Marcel Dekker, Inc., 1992, pp. 433–481.
[154] S. Beucher, “Watershed, hierarchical segmentation and waterfall algorithm,” in Proc of Int Conf Math Morph Appl Imag Proc, J. Serra and P. Soille, Eds. Springer, 1994, pp. 69–76.
[155] M. Couprie and G. Bertrand, “Topological gray-scale watershed transformation,” in Proc of Int SPIE Conf Vis Geom, vol. 3168, 1997, pp. 136–146.
[156] G. Bertrand, “On topological watersheds,” J Math Imag Vis, vol. 22, no. 2-3, pp. 217–230, 2005.
[157] R. Jones, “Connected filtering and segmentation using component trees,” Comp Vis Imag Und, vol. 75, no. 3, pp. 215–228, 1999.
[158] D. Gatica-Perez, C. Gu, M.-T. Sun, and S. Ruiz-Correa, “Extensive partition operators, gray-level connected operators, and region merging/classification segmentation algorithms: Theoretical links,” IEEE Trans Imag Proc, vol. 10, no. 9, pp. 1332–1345, 2001.
[159] J. Serra, “A lattice approach to image segmentation,” J Math Imag Vis, vol. 24, no. 1, pp. 83–130, 2006.
[160] C. Ronse, “Partial partitions, partial connections and connective segmentation,” J Math Imag Vis, vol. 32, no. 2, pp. 97–125, 2008.
[161] A. X. Falcão, J. Stolfi, and R. de Alencar Lotufo, “The image foresting transform: Theory, algorithms, and applications,” IEEE Trans Patt Anal Mach Intell, vol. 26, no. 1, pp. 19–29, 2004.
[162] A. X. Falcão and F. P. Bergo, “Interactive volume segmentation with differential image foresting transforms,” IEEE Trans Med Imag, vol. 23, no. 9, pp. 1100–1108, 2004.
[163] S. Dellepiane, F. Fontana, and G. L. Vernazza, “Nonlinear image labeling for multivalued segmentation,” IEEE Trans Imag Proc, vol. 5, pp. 429–446, 1996.
[164] B. M. Carvalho, C. J. Gau, G. T. Herman, and T. Y. Kong, “Algorithms for fuzzy segmentation,” Pat Anal Appl, vol. 2, pp. 73–81, 1999.
[165] L. G. Nyúl, A. X. Falcão, and J. K. Udupa, “Fuzzy-connected 3D image segmentation at interactive speeds,” Graph Mod Imag Proc, vol. 64, pp. 259–281, 2003.
[166] P. K. Saha, J. K. Udupa, and D. Odhner, “Scale-based fuzzy connected image segmentation: Theory, algorithms, and validation,” Comp Vis Imag Und, vol. 77, pp. 145–174, 2000.
[167] K. C. Ciesielski and J. K. Udupa, “Affinity functions in fuzzy connectedness based image segmentation II: Defining and recognizing truly novel affinities,” Comp Vis Imag Und, vol. 114, pp. 155–166, 2010.
[168] Y. Zhuge, J. K. Udupa, and P. K. Saha, “Vectorial scale-based fuzzy connected image segmentation,” Comp Vis Imag Und, vol. 101, pp. 177–193, 2006.
[169] P. K. Saha and J. K. Udupa, “Iterative relative fuzzy connectedness and object definition: Theory, algorithms, and applications in image segmentation,” in Proc of IEEE Work Math Meth Biomed Imag Anal, Hilton Head, South Carolina, 2000.
[170] G. T. Herman and B. M. Carvalho, “Multiseeded segmentation using fuzzy connectedness,” IEEE Trans Patt Anal Mach Intell, vol. 23, pp. 460–474, 2001.
[171] J. K. Udupa, P. K. Saha, and R. A. Lotufo, “Relative fuzzy connectedness and object definition: Theory, algorithms, and applications in image segmentation,” IEEE Trans Patt Anal Mach Intell, vol. 24, pp. 1485–1500, 2002.
[172] K. C. Ciesielski, J. K. Udupa, P. K. Saha, and Y. Zhuge, “Iterative relative fuzzy connectedness for multiple objects with multiple seeds,” Comp Vis Imag Und, vol. 107, pp. 160–182, 2007.


[173] P. K. Saha, Z. Gao, S. K. Alford, M. Sonka, and E. A. Hoffman, “Topomorphologic separation of fused isointensity objects via multiscale opening: Separating arteries and veins in 3-D pulmonary CT,” IEEE Trans Med Imag, vol. 29, no. 3, pp. 840–851, 2010.
[174] D. M. Vasilescu, Z. Gao, P. K. Saha, L. Yin, G. Wang, B. Haefeli-Bleuer, M. Ochs, E. R. Weibel, and E. A. Hoffman, “Assessment of morphometry of pulmonary acini in mouse lungs by nondestructive imaging using multiscale microcomputed tomography,” Proc Natl Acad Sci USA, vol. 109, no. 42, pp. 17105–17110, 2012.
[175] T. Lei, J. K. Udupa, P. K. Saha, and D. Odhner, “Artery-vein separation via MRA – an image processing approach,” IEEE Trans Med Imag, vol. 20, pp. 689–703, 2001.
[176] X. Tizon and O. Smedby, “Segmentation with gray-scale connectedness can separate arteries and veins in MRA,” J Magn Reson Imag, vol. 15, no. 4, pp. 438–445, 2002.
[177] P. L. Bazin and D. L. Pham, “Topology-preserving tissue classification of magnetic resonance brain images,” IEEE Trans Med Imag, vol. 26, no. 4, pp. 487–496, 2007.
[178] R. Strand, K. C. Ciesielski, F. Malmberg, and P. K. Saha, “The minimum barrier distance,” Comp Vis Imag Und, vol. 117, pp. 429–437, 2013.
[179] K. C. Ciesielski, R. Strand, F. Malmberg, and P. K. Saha, “Efficient algorithm for finding the exact minimum barrier distance,” Comp Vis Imag Und, vol. 123, pp. 53–64, 2014.
[180] Z. Gao, R. Grout, C. Holtze, E. A. Hoffman, and P. K. Saha, “A new paradigm of interactive artery/vein separation in non-contrast pulmonary CT imaging using multi-scale topo-morphologic opening,” IEEE Trans Biomed Eng, vol. 59, no. 11, pp. 3016–3027, 2012.
[181] J. B. T. M. Roerdink and A. Meijster, “The watershed transform: Definitions, algorithms and parallelization strategies,” Fundam Info, vol. 41, pp. 187–228, 2000.
[182] F. Meyer and L. Najman, Segmentation, minimum spanning tree and hierarchies. Wiley, 2013, pp. 229–261.
[183] L. Najman and M. Schmitt, “Geodesic saliency of watershed contours and hierarchical segmentation,” IEEE Trans Patt Anal Mach Intell, vol. 18, no. 12, pp. 1163–1173, 1996.
[184] L. Najman, “On the equivalence between hierarchical segmentations and ultrametric watersheds,” J Math Imag Vis, vol. 40, no. 3, pp. 231–247, 2011.
[185] A. Bleau and L. J. Leon, “Watershed-based segmentation and region merging,” Comp Vis Imag Und, vol. 77, no. 3, pp. 317–370, 2000.
[186] P. Soille, Morphological Image Analysis: Principles and Applications. New York: Springer-Verlag, 2003.
[187] C. Wählby, I. Sintorn, F. Erlandsson, G. Borgefors, and E. Bengtsson, “Combining intensity, edge and shape information for 2D and 3D segmentation of cell nuclei in tissue sections,” J Micros, vol. 215, pp. 67–76, 2004.
[188] J. Angulo and D. Jeulin, “Stochastic watershed segmentation,” in Proc of Int Symp Math Morpho, vol. 8, 2007, pp. 265–276.
[189] K. B. Bernander, K. Gustavsson, B. Selig, I.-M. Sintorn, and C. L. Luengo Hendriks, “Improving the stochastic watershed,” Pat Recog Lett, vol. 34, pp. 993–1000, 2013.
[190] L. Gillibert and D. Jeulin, “Stochastic multiscale segmentation constrained by image content,” in Proc of Math Morph Appl Imag Sign Proc, P. Soille, M. Pesaresi, and G. K. Ouzounis, Eds., vol. LNCS 6671. Berlin: Springer-Verlag, 2011, pp. 132–142.
[191] F. Malmberg and C. L. Luengo Hendriks, “An efficient algorithm for exact evaluation of stochastic watersheds,” Pat Recog Lett, 2014.
[192] G. Noyel, J. Angulo, and D. Jeulin, Regionalized random germs by a classification for probabilistic watershed application: Angiogenesis imaging segmentation. Springer, 2010, pp. 211–216.
[193] M. Faessel and D. Jeulin, “Segmentation of 3D microtomographic images of granular materials with the stochastic watershed,” J Micros, vol. 239, no. 1, pp. 17–31, 2010.
[194] S. Morales, V. Naranjo, J. Angulo, and M. A. Raya, “Automatic detection of optic disc based on PCA and mathematical morphology,” IEEE Trans Med Imag, vol. 32, no. 4, pp. 786–796, 2013.
[195] R. Beare, “A locally constrained watershed transform,” IEEE Trans Patt Anal Mach Intell, vol. 28, no. 7, pp. 1063–1074, 2006.
[196] F. Meyer and C. Vachier, “Image segmentation based on viscous flooding simulation,” in Proc of Int Symp Math Morph, R. Beare and H. Talbot, Eds., 2002, pp. 69–77.
[197] C. Vachier and F. Meyer, “The viscous watershed transform,” J Math Imag Vis, vol. 22, no. 2-3, pp. 251–267, 2005.
[198] M. Couprie, L. Najman, and G. Bertrand, “Quasi-linear algorithms for the topological watershed,” J Math Imag Vis, vol. 22, no. 2-3, pp. 231–249, 2005.

[199] J. Cousty, L. Najman, M. Couprie, S. Clément-Guinaudeau, T. Goissen, and J. Garot, “Segmentation of 4D cardiac MRI: Automated method based on spatio-temporal watershed cuts,” Imag Vis Comp, vol. 28, no. 8, pp. 1229–1243, 2010.
[200] C. Couprie, L. Grady, L. Najman, and H. Talbot, “Power watershed: A unifying graph-based optimization framework,” IEEE Trans Patt Anal Mach Intell, vol. 33, no. 7, pp. 1384–1399, 2011.
[201] L. Vincent, “Morphological grayscale reconstruction in image analysis: Applications and efficient algorithms,” IEEE Trans Imag Proc, vol. 2, no. 2, pp. 176–201, 1993.
[202] J. Crespo, J. Serra, and R. W. Schafer, “Theoretical aspects of morphological filters by reconstruction,” Sig Proc, vol. 47, no. 2, pp. 201–225, 1995.
[203] P. Salembier and J. Serra, “Flat zones filtering, connected operators, and filters by reconstruction,” IEEE Trans Imag Proc, vol. 4, no. 8, pp. 1153–1160, 1995.
[204] J.-C. Klein, “Conception et réalisation d’une unité logique pour l’analyse quantitative d’images,” Ph.D. dissertation, Nancy University, France, 1976.
[205] E. J. Breen and R. Jones, “Attribute openings, thinnings, and granulometries,” Comp Vis Imag Und, vol. 64, no. 3, pp. 377–389, 1996.
[206] P. Salembier, A. Oliveras, and L. Garrido, “Antiextensive connected operators for image and sequence processing,” IEEE Trans Imag Proc, vol. 7, no. 4, pp. 555–570, 1998.
[207] P. Monasse and F. Guichard, “Fast computation of a contrast-invariant image representation,” IEEE Trans Imag Proc, vol. 9, no. 5, pp. 860–872, 2000.
[208] P. Salembier and L. Garrido, “Binary partition tree as an efficient representation for image processing, segmentation, and information retrieval,” IEEE Trans Imag Proc, vol. 9, no. 4, pp. 561–576, 2000.
[209] R. E. Tarjan, “Efficiency of a good but not linear set union algorithm,” J of the ACM (JACM), vol. 22, no. 2, pp. 215–225, 1975.
[210] A. Meijster and M. H. Wilkinson, “A comparison of algorithms for connected set openings and closings,” IEEE Trans Patt Anal Mach Intell, vol. 24, no. 4, pp. 484–494, 2002.
[211] L. Najman and M. Couprie, “Building the component tree in quasilinear time,” IEEE Trans Imag Proc, vol. 15, no. 11, pp. 3531–3539, 2006.
[212] F. Cheng and A. N. Venetsanopoulos, “An adaptive morphological filter for image processing,” IEEE Trans Imag Proc, vol. 1, no. 4, pp. 533–539, 1992.
[213] E. R. Urbach, J. B. T. M. Roerdink, and M. H. F. Wilkinson, “Connected shape-size pattern spectra for rotation and scale-invariant classification of gray-scale images,” IEEE Trans Patt Anal Mach Intell, vol. 29, no. 2, pp. 272–285, 2007.
[214] G. T. Herman, T. Y. Kong, and L. M. Oliveira, Provably robust simplification of component trees of multidimensional images. Springer, 2012, pp. 27–69.
[215] M. H. Wilkinson and M. A. Westenberg, “Shape preserving filament enhancement filtering,” in Proc of Med Imag Comp Comp-Assist Interven (MICCAI), W. J. Niessen and M. A. Viergever, Eds., vol. LNCS 2208. Utrecht, The Netherlands: Springer, 2001, pp. 770–777.
[216] G. K. Ouzounis and M. H. Wilkinson, “Mask-based second-generation connectivity and attribute filters,” IEEE Trans Patt Anal Mach Intell, vol. 29, no. 6, pp. 990–1004, 2007.
[217] M. A. Westenberg, J. B. Roerdink, and M. H. Wilkinson, “Volumetric attribute filtering and interactive visualization using the max-tree representation,” IEEE Trans Imag Proc, vol. 16, no. 12, pp. 2943–2952, 2007.
[218] L. M. Oliveira, T. Y. Kong, and G. T. Herman, Using component trees to explore biological structures. New York, NY: Springer, 2014, pp. 221–255.
[219] R. Kimmel, A. Amir, and A. M. Bruckstein, “Finding shortest paths on surfaces using level sets propagation,” IEEE Trans Patt Anal Mach Intell, vol. 17, no. 6, pp. 635–640, 1995.
[220] L. D. Cohen and R. Kimmel, “Global minimum for active contour models: A minimal path approach,” Int J Comp Vis, vol. 24, no. 1, pp. 57–78, 1997.
[221] A. Dufour, O. Tankyevych, B. Naegel, H. Talbot, C. Ronse, J. Baruthio, P. Dokládal, and N. Passat, “Filtering and segmentation of 3D angiographic data: Advances based on mathematical morphology,” Med Imag Anal, vol. 17, no. 2, pp. 147–164, 2013.
[222] J. A. Sethian, Level Set Methods and Fast Marching Methods. Cambridge University Press, 1996.
[223] E. Dijkstra, “A note on two problems in connexion with graphs,” Num Math, vol. 1, pp. 269–271, 1959.


[224] T. Deschamps and L. D. Cohen, “Fast extraction of minimal paths in 3D images and applications to virtual endoscopy,” Med Image Anal, vol. 5, pp. 281–299, 2001.
[225] M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: Active contour models,” Int J Comp Vis, vol. 1, pp. 321–331, 1988.
[226] B. Appleton and H. Talbot, “Globally optimal geodesic active contours,” J Math Imag Vis, vol. 23, no. 1, pp. 67–86, 2005.
[227] L. D. Cohen, “Multiple contour finding and perceptual grouping using minimal paths,” J Math Imag Vis, vol. 14, no. 3, pp. 225–236, 2001.
[228] T. Deschamps and L. D. Cohen, Grouping connected components using minimal path techniques. Springer, 2002, pp. 91–106.
[229] E. Mortensen, B. Morse, W. Barrett, and J. K. Udupa, “Adaptive boundary detection using ‘live-wire’ two-dimensional dynamic programming,” in Proc of IEEE Conf Comp Card. IEEE, 1992, pp. 635–638.
[230] E. N. Mortensen and W. A. Barrett, “Intelligent scissors for image composition,” in Proc of 22nd Annual Conf Comp Graph Interac Tech. ACM, 1995, pp. 191–198.
[231] E. N. Mortensen and W. A. Barrett, “Interactive segmentation with intelligent scissors,” Graph Mod Imag Proc, vol. 60, pp. 349–384, 1998.
[232] A. Falcão, J. K. Udupa, and F. K. Miyazawa, “An ultra-fast user-steered image segmentation paradigm: Live-wire-on-the-fly,” IEEE Trans Med Imag, vol. 19, pp. 55–62, 2000.
[233] O. Wink, A. F. Frangi, B. Verdonck, M. A. Viergever, and W. J. Niessen, “3D MRA coronary axis determination using a minimum cost path approach,” Mag Reson Med, vol. 47, no. 6, pp. 1169–1175, 2002.
[234] J. Staal, M. D. Abramoff, M. Niemeijer, M. A. Viergever, and B. van Ginneken, “Ridge-based vessel segmentation in color images of the retina,” IEEE Trans Med Imag, vol. 23, no. 4, pp. 501–509, 2004.
[235] F. Benmansour and L. D. Cohen, “Tubular structure segmentation based on minimal path method and anisotropic enhancement,” Int J Comp Vis, vol. 92, no. 2, pp. 192–210, 2011.
[236] Y. Rouchdy and L. D. Cohen, “Geodesic voting for the automatic extraction of tree structures. Methods and applications,” Comp Vis Imag Und, vol. 117, no. 10, pp. 1453–1467, 2013.
[237] X. Han, C. Xu, and J. L. Prince, “A topology preserving level set method for geometric deformable models,” IEEE Trans Patt Anal Mach Intell, vol. 25, no. 6, pp. 755–768, 2003.
[238] J. F. Mangin, V. Frouin, I. Bloch, J. Régis, and J. López-Krahe, “From 3D magnetic resonance images to structural representations of the cortex topography using topology preserving deformations,” J Math Imag Vis, vol. 5, pp. 297–318, 1995.
[239] K. Voss, “Images, objects, and surfaces in Z^n,” Int J Pat Recog Art Intel, vol. 5, no. 5, pp. 797–808, 1991.
[240] P. K. Saha and B. B. Chaudhuri, “A new approach of computing Euler characteristic,” Pat Recog, vol. 28, pp. 1955–1963, 1995.
[241] C. J. Hilditch, Linear skeletons from square cupboards. Edinburgh, U.K.: Edinburgh Univ. Press, 1969, vol. 4, pp. 403–420.
[242] G. Tourlakis and J. Mylopoulos, “Some results on computational topology,” J Assoc Comp Mach, vol. 20, pp. 439–455, 1973.
[243] D. G. Morgenthaler, “Three-dimensional simple points: Serial erosion, parallel thinning and skeletonization,” Comp Vis Lab, Univ of Maryland, College Park, MD, Tech. Rep., 1981.
[244] S. Lobregt, P. W. Verbeek, and F. C. A. Groen, “Three-dimensional skeletonization, principle, and algorithm,” IEEE Trans Patt Anal Mach Intell, vol. 2, pp. 75–77, 1980.
[245] P. K. Saha, B. B. Chaudhuri, B. Chanda, and D. D. Majumder, “Topology preservation in 3D digital space,” Pat Recog, vol. 27, pp. 295–300, 1994.
[246] P. K. Saha, B. Chanda, and D. D. Majumder, “Principles and algorithms for 2-D and 3-D shrinking,” Indian Statistical Institute, Calcutta, India, Tech. Rep. TR/KBCS/2/91, 1991.
[247] P. K. Saha, 2D thinning algorithms and 3D shrinking, INRIA, Sophia Antipolis Cedex, France, Seminar Talk, June 1991.
[248] P. K. Saha and B. B. Chaudhuri, “Detection of 3-D simple points for topology preserving transformations with application to thinning,” IEEE Trans Patt Anal Mach Intell, vol. 16, pp. 1028–1032, 1994.
[249] P. K. Saha and A. Rosenfeld, “Determining simplicity and computing topological change in strongly normal partial tilings of R^2 or R^3,” Pat Recog, vol. 33, pp. 105–118, 2000.
[250] G. Malandain and G. Bertrand, “Fast characterization of 3-D simple points,” in Proc of 11th Int Conf Pat Recog, 1992, pp. 232–235.
[251] G. Bertrand and G. Malandain, “A new characterization of three-dimensional simple points,” Pat Recog Lett, vol. 15, pp. 169–175, 1994.
[252] S. Fourey and R. Malgouyres, “A concise characterization of 3D simple points,” Discr Appl Math, vol. 125, no. 1, pp. 59–80, 2003.

[253] G. Bertrand, “Simple points, topological numbers and geodesic neighborhoods in cubic grids,” Pat Recog Lett, vol. 15, no. 10, pp. 1003–1011, 1994.
[254] G. Klette and M. Pan, “3D topological thinning by identifying non-simple voxels,” in Proc of Int Work Comb Imag Anal, R. Klette and J. Žunić, Eds., vol. LNCS 3322. Springer, 2005, pp. 164–175.
[255] R. Malgouyres and A. Lenoir, “Topology preservation within digital surfaces,” Graph Model, vol. 62, no. 2, pp. 71–84, 2000.
[256] P. J. Giblin, Graphs, Surfaces and Homology. Cambridge: Cambridge Univ Press, 2010.
[257] M. Couprie and G. Bertrand, “New characterizations of simple points in 2D, 3D, and 4D discrete spaces,” IEEE Trans Patt Anal Mach Intell, vol. 31, no. 4, pp. 637–648, 2009.
[258] P. K. Saha and B. B. Chaudhuri, “3D digital topology under binary transformation with applications,” Comp Vis Imag Und, vol. 63, pp. 418–429, 1996.
[259] G. Borgefors, I. Nyström, and G. Sanniti di Baja, “Connected components in 3D neighbourhoods,” in Proc of Scand Conf Imag Anal, 1997, pp. 567–572.
[260] G. Malandain, G. Bertrand, and N. Ayache, “Topological segmentation of discrete surfaces,” Int J Comp Vis, vol. 10, pp. 183–197, 1993.
[261] S. Svensson, I. Nyström, and G. Sanniti di Baja, “Curve skeletonization of surface-like objects in 3D images guided by voxel classification,” Pat Recog Lett, vol. 23, no. 12, pp. 1419–1426, 2002.
[262] L. Serino, G. Sanniti di Baja, and C. Arcelli, “Using the skeleton for 3D object decomposition,” in Proc of Scand Conf Imag Anal, A. Heyden and F. Kahl, Eds., vol. LNCS 6688. Ystad Saltsjöbad, Sweden: Springer, 2011, pp. 447–456.
[263] B. G. Gomberg, P. K. Saha, H. K. Song, S. N. Hwang, and F. W. Wehrli, “Topological analysis of trabecular bone MR images,” IEEE Trans Med Imag, vol. 19, pp. 166–174, 2000.
[264] F. W. Wehrli, B. R. Gomberg, P. K. Saha, H. K. Song, S. N. Hwang, and P. J. Snyder, “Digital topological analysis of in vivo magnetic resonance microimages of trabecular bone reveals structural implications of osteoporosis,” J Bone Min Res, vol. 16, pp. 1520–1531, 2001.
[265] M. Stauber and R. Müller, “Volumetric spatial decomposition of trabecular bone into rods and plates – a new method for local bone morphometry,” Bone, vol. 38, no. 4, pp. 475–484, 2006.
[266] G. Chang, S. K. Pakin, M. E. Schweitzer, P. K. Saha, and R. R. Regatte, “Adaptations in trabecular bone microarchitecture in Olympic athletes determined by 7T MRI,” J Magn Reson Imaging, vol. 27, no. 5, pp. 1089–1095, 2008.
[267] G. A. Ladinsky, B. Vasilic, A. M. Popescu, M. Wald, B. S. Zemel, P. J. Snyder, L. Loh, H. K. Song, P. K. Saha, A. C. Wright, and F. W. Wehrli, “Trabecular structure quantified with the MRI-based virtual bone biopsy in postmenopausal women contributes to vertebral deformity burden independent of areal vertebral BMD,” J Bone Miner Res, vol. 23, no. 1, pp. 64–74, 2008.
[268] X. S. Liu, P. Sajda, P. K. Saha, F. W. Wehrli, G. Bevill, T. M. Keaveny, and X. E. Guo, “Complete volumetric decomposition of individual trabecular plates and rods and its morphological correlations with anisotropic elastic moduli in human trabecular bone,” J Bone Miner Res, vol. 23, no. 2, pp. 223–235, 2008.
[269] H. Asada and M. Brady, “The curvature primal sketch,” IEEE Trans Patt Anal Mach Intell, vol. 8, no. 1, pp. 2–14, 1986.
[270] M. Leyton, Symmetry, Causality, Mind. MIT Press, 1992.
[271] J. Damon, “Smoothness and geometry of boundaries associated to skeletal structures I: Sufficient conditions for smoothness,” Annales de l’Institut Fourier, vol. 53, no. 6, pp. 1941–1985, 2003.
[272] H. Blum and R. Nagel, “Shape description using weighted symmetric axis features,” Pat Recog, vol. 10, no. 3, pp. 167–180, 1978.
[273] B. B. Kimia, A. Tannenbaum, and S. W. Zucker, “Shape, shocks, and deformations I: The components of two-dimensional shape and the reaction-diffusion space,” Int J Comp Vis, vol. 15, pp. 189–224, 1995.
[274] P. J. Giblin and B. B. Kimia, “A formal classification of 3D medial axis points and their local geometry,” IEEE Trans Patt Anal Mach Intell, vol. 26, no. 2, pp. 238–251, 2004.
[275] F. Leymarie and M. D. Levine, “Simulating the grassfire transform using an active contour model,” IEEE Trans Patt Anal Mach Intell, vol. 14, no. 1, pp. 56–75, 1992.
[276] Z. S. G. Tari, J. Shah, and H. Pien, “Extraction of shape skeletons from grayscale images,” Comp Vis Imag Und, vol. 66, pp. 133–146, 1997.
[277] K. Siddiqi, S. Bouix, A. Tannenbaum, and S. W. Zucker, “Hamilton-Jacobi skeletons,” Int J Comp Vis, vol. 48, no. 3, pp. 215–231, 2002.


[278] M. Näf, G. Székely, R. Kikinis, M. E. Shenton, and O. Kübler, "3D Voronoi skeletons and their usage for the characterization and recognition of 3D organ shape," Comp Vis Imag Und, vol. 66, pp. 147–161, 1997.
[279] N. Amenta, S. Choi, and R. K. Kolluri, "The power crust, unions of balls, and the medial axis transform," Comp Geom: Theo Appl, vol. 19, no. 2-3, pp. 127–153, 2001.
[280] M. Schmitt, "Some examples of algorithms analysis in computational geometry by means of mathematical morphological techniques," in Proc of Geom Robot, J.-D. Boissonnat and J.-P. Laumond, Eds., vol. LNCS 391. Toulouse, France: Springer, 1989, pp. 225–246.
[281] J. Shah, "Extraction of shape skeletons from grayscale images," Comp Vis Imag Und, vol. 99, pp. 96–109, 2005.
[282] Y. F. Tsao and K. S. Fu, "A parallel thinning algorithm for 3D pictures," Comp Graph Imag Proc, vol. 17, pp. 315–331, 1981.
[283] C. Arcelli and G. Sanniti di Baja, "A width-independent fast thinning algorithm," IEEE Trans Patt Anal Mach Intell, vol. 7, no. 4, pp. 463–474, 1985.
[284] T.-C. Lee, R. L. Kashyap, and C.-N. Chu, "Building skeleton models via 3-D medial surface/axis thinning algorithm," CVGIP: Graph Mod Imag Proc, vol. 56, no. 6, pp. 462–478, 1994.
[285] C. Pudney, "Distance-ordered homotopic thinning: A skeletonization algorithm for 3D digital images," Comp Vis Imag Und, vol. 72, no. 2, pp. 404–413, 1998.
[286] G. Borgefors, I. Nyström, and G. Sanniti di Baja, "Computing skeletons in three dimensions," Pat Recog, vol. 32, no. 7, pp. 1225–1236, 1999.
[287] K. Palágyi and A. Kuba, "A parallel 3D 12-subiteration thinning algorithm," Graph Mod Imag Proc, vol. 61, pp. 199–221, 1999.
[288] I. Bitter, A. E. Kaufman, and M. Sato, "Penalized-distance volumetric skeleton algorithm," IEEE Trans Vis Comp Graph, vol. 7, no. 3, pp. 195–206, 2001.
[289] C. Arcelli, G. Sanniti di Baja, and L. Serino, "Distance-driven skeletonization in voxel images," IEEE Trans Patt Anal Mach Intell, vol. 33, no. 4, pp. 709–720, 2011.
[290] G. Németh, P. Kardos, and K. Palágyi, "Thinning combined with iteration-by-iteration smoothing for 3D binary images," Graph Model, vol. 73, pp. 335–345, 2011.
[291] G. Bertrand, "A parallel thinning algorithm for medial surfaces," Pat Recog Lett, vol. 16, no. 9, pp. 979–986, 1995.
[292] C. M. Ma and M. Sonka, "A fully parallel 3D thinning algorithm and its applications," Comp Vis Imag Und, vol. 64, pp. 420–433, 1996.
[293] P. K. Saha, B. B. Chaudhuri, and D. D. Majumder, "A new shape preserving parallel thinning algorithm for 3D digital images," Pat Recog, vol. 30, pp. 1939–1955, 1997.
[294] C. M. Ma and S. Y. Wan, "Parallel thinning algorithms on 3D (18, 6) binary images," Comp Vis Imag Und, vol. 80, pp. 364–378, 2000.
[295] C. M. Ma, S. Y. Wan, and J. D. Lee, "Three-dimensional topology preserving reduction on the 4-subfields," IEEE Trans Patt Anal Mach Intell, vol. 24, pp. 1594–1605, 2002.
[296] G. Borgefors, I. Nyström, and G. Sanniti di Baja, "Surface skeletonization of volume objects," in Proc of Int Work Adv Struct Synt Pat Recog, P. Perner, P. Wang, and A. Rosenfeld, Eds., vol. LNCS 1121, Leipzig, Germany, 1996, pp. 251–259.
[297] T. Saito and J.-I. Toriwaki, "A sequential thinning algorithm for three dimensional digital pictures using the Euclidean distance transformation," in Proc of 9th Scand Conf Imag Anal (SCIA'95), IAPR, Uppsala, Sweden, 1995, pp. 507–516.
[298] V. Chatzis and I. Pitas, "A generalized fuzzy mathematical morphology and its application in robust 2-D and 3-D object representation," IEEE Trans Imag Proc, vol. 9, pp. 1798–1810, 2000.
[299] O. Wink, W. J. Niessen, and M. A. Viergever, "Multiscale vessel tracking," IEEE Trans Med Imag, vol. 23, pp. 130–133, 2004.
[300] S. Wang, J. Wu, M. W, and X. Ma, "Robust curve skeleton extraction for vascular structures," Graph Mod, vol. 74, pp. 109–120, 2012.
[301] D. Jin, K. S. Iyer, E. A. Hoffman, and P. K. Saha, "A new approach of arc skeletonization for tree-like objects using minimum cost path," in Proc of 22nd Int Conf Pat Rec (ICPR). Stockholm, Sweden: IEEE, 2014, pp. 942–947.
[302] W. Gong and G. Bertrand, "A simple parallel 3D thinning algorithm," in Proc of 10th Int Conf Pat Recog (ICPR), vol. 1. IEEE, 1990, pp. 188–190.
[303] K. Palágyi and A. Kuba, "A 3D 6-subiteration thinning algorithm for extracting medial lines," Pat Recog Lett, vol. 19, no. 7, pp. 613–627, 1998.
[304] W. Xie, R. P. Thompson, and R. Perucchio, "A topology-preserving parallel 3D thinning algorithm for extracting the curve skeleton," Pat Recog, vol. 36, pp. 1529–1544, 2003.
[305] M. J. Golay, "Hexagonal parallel pattern transformations," IEEE Trans Comp, vol. 18, no. 8, pp. 733–740, 1969.
[306] K. Palágyi and A. Kuba, "A hybrid thinning algorithm for 3D medical images," J Comp Info Tech, vol. 6, no. 2, pp. 149–164, 1998.
[307] ——, "Directional 3D thinning using 8 subiterations," in Proc of Int Conf Discr Geom Comp Imag, G. Bertrand, M. Couprie, and L. Perroton, Eds., vol. LNCS 1568. Springer, 1999, pp. 325–336.
[308] K. Palágyi, "A 3-subiteration 3D thinning algorithm for extracting medial surfaces," Pat Recog Lett, vol. 23, no. 6, pp. 663–675, 2002.
[309] ——, "A 3D fully parallel surface-thinning algorithm," Theo Comp Sc, vol. 406, no. 1, pp. 119–135, 2008.
[310] C. Ronse, "A topological characterization of thinning," Theo Comp Sc, vol. 43, pp. 31–41, 1986.
[311] ——, "Minimal test patterns for connectivity preservation in parallel thinning algorithms for binary digital images," Discr Appl Math, vol. 21, no. 1, pp. 67–79, 1988.
[312] R. W. Hall, "Tests for connectivity preservation for parallel reduction operators," Topo Appl, vol. 46, no. 3, pp. 199–217, 1992.
[313] C. M. Ma, "On topology preservation in 3D thinning," CVGIP: Imag Und, vol. 59, no. 3, pp. 328–339, 1994.
[314] C.-J. Gau and T. Y. Kong, "Minimal non-simple sets in 4D binary images," Graph Mod, vol. 65, no. 1, pp. 112–130, 2003.
[315] C. Lohou, "Detection of the non-topology preservation of Ma and Sonka's algorithm, by the use of P-simple points," Comp Vis Imag Und, vol. 114, pp. 384–399, 2010.
[316] T. Wang and A. Basu, "A note on 'A fully parallel 3D thinning algorithm and its applications'," Pat Recog Lett, vol. 28, no. 4, pp. 501–506, 2007.
[317] C. Lohou and J. Dehos, "Automatic correction of Ma and Sonka's thinning algorithm using P-simple points," IEEE Trans Patt Anal Mach Intell, vol. 32, no. 6, pp. 1148–1152, 2010.
[318] G. Bertrand, "P-simple points: A solution for parallel thinning," in Proc of 5th Conf Discrete Geom, France, 1995, pp. 233–242.
[319] T. Y. Kong, "Minimal non-deletable sets and minimal non-codeletable sets in binary images," Theo Comp Sc, vol. 406, no. 1, pp. 97–118, 2008.
[320] J. Burguet and R. Malgouyres, "Strong thinning and polyhedric approximation of the surface of a voxel object," Discr Appl Math, vol. 125, no. 1, pp. 93–114, 2003.
[321] C. Lohou and G. Bertrand, "A 3D 12-subiteration thinning algorithm based on P-simple points," Discr Appl Math, vol. 139, no. 1, pp. 171–195, 2004.
[322] ——, "A 3D 6-subiteration curve thinning algorithm based on P-simple points," Discr Appl Math, vol. 151, no. 1, pp. 198–228, 2005.
[323] ——, "Two symmetrical thinning algorithms for 3D binary images, based on P-simple points," Pat Recog, vol. 40, no. 8, pp. 2301–2314, 2007.
[324] G. Bertrand and M. Couprie, "Two-dimensional parallel thinning algorithms based on critical kernels," J Math Imag Vis, vol. 31, no. 1, pp. 35–56, 2008.
[325] H. C. Kim, B. G. Min, M. K. Lee, J. D. Seo, Y. W. Lee, and M. C. Han, "Estimation of local cardiac wall deformation and regional wall stress from biplane coronary cineangiograms," IEEE Trans Biomed Eng, vol. 32, no. 7, pp. 503–512, 1985.
[326] B. S. Tom, S. N. Efstratiadis, and A. K. Katsaggelos, "Motion estimation of skeletonized angiographic images using elastic registration," IEEE Trans Med Imag, vol. 13, no. 3, pp. 450–460, 1994.
[327] I. Nyström and O. Smedby, "Skeletonization of volumetric vascular images: Distance information utilized for visualization," J Combin Opt, vol. 5, no. 1, pp. 27–41, 2001.
[328] V. Chatzis and I. Pitas, "Interpolation of 3-D binary images based on morphological skeletonization," IEEE Trans Med Imag, vol. 19, no. 7, pp. 699–710, 2000.
[329] S. Y. Chen, J. D. Carroll, and J. C. Messenger, "Quantitative analysis of reconstructed 3-D coronary arterial tree and intracoronary devices," IEEE Trans Med Imag, vol. 21, no. 7, pp. 724–740, 2002.
[330] M. Schaap, C. T. Metz, T. van Walsum, A. G. van der Giessen, A. C. Weustink, N. R. Mollet, C. Bauer, H. Bogunovic, C. Castro, X. Deng, E. Dikici, T. O'Donnell, M. Frenay, O. Friman, M. Hernandez Hoyos, P. H. Kitslaar, K. Krissian, C. Kuhnel, M. A. Luengo-Oroz, M. Orkisz, O. Smedby, M. Styner, A. Szymczak, H. Tek, C. Wang, S. K. Warfield, S. Zambal, Y. Zhang, G. P. Krestin, and W. J. Niessen, "Standardized evaluation methodology and reference database for evaluating coronary artery centerline extraction algorithms," Med Image Anal, vol. 13, no. 5, pp. 701–714, 2009.
[331] Y. Xu, G. Liang, G. Hu, Y. Yang, J. Geng, and P. K. Saha, "Quantification of coronary arterial stenoses in CTA using fuzzy distance transform," Comp Med Imag Graph, vol. 36, pp. 11–24, 2012.

0278-0062 (c) 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

[332] H. Frimmel, J. Näppi, and H. Yoshida, "Centerline-based colon segmentation for CT colonography," Med Phy, vol. 32, pp. 2665–2672, 2005.
[333] J. Tschirren, G. McLennan, K. Palágyi, E. A. Hoffman, and M. Sonka, "Matching and anatomical labeling of human airway tree," IEEE Trans Med Imag, vol. 24, no. 12, pp. 1540–1547, 2005.
[334] A. Laib, D. C. Newitt, Y. Lu, and S. Majumdar, "New model-independent measures of trabecular bone structure applied to in vivo high-resolution MR images," Osteop Int, vol. 13, pp. 130–136, 2002.
[335] L. Pothuaud, A. Laib, P. Levitz, C. L. Benhamou, and S. Majumdar, "Three-dimensional-line skeleton graph analysis of high-resolution magnetic resonance images: A validation study from 34-μm-resolution microcomputed tomography," J Bone Miner Res, vol. 17, no. 10, pp. 1883–1895, 2002.
[336] R. M. Haralick, "Performance characterization in image analysis: Thinning, a case in point," Pat Recog Lett, vol. 13, no. 1, pp. 5–12, 1992.
[337] G. Bertrand and M. Couprie, "Powerful parallel and symmetric 3D thinning schemes based on critical kernels," J Math Imag Vis, vol. 48, no. 1, pp. 134–148, 2014.
[338] O. P. Buneman, A grammar for the topological analysis of plane figures. Edinburgh: Edinburgh University Press, 1969, pp. 383–393.
[339] R. Keshet, "Shape-tree semilattice," J Math Imag Vis, vol. 22, no. 2-3, pp. 309–331, 2005.
[340] J. Stell and M. Worboys, "Relations between adjacency trees," Theo Comp Sc, vol. 412, no. 34, pp. 4452–4468, 2011.
[341] H. J. Heijmans, "Connected morphological operators for binary images," Comp Vis Imag Und, vol. 73, no. 1, pp. 99–120, 1999.
[342] C. Ballester, V. Caselles, and P. Monasse, "The tree of shapes of an image," ESAIM: Control, Optim Calc Var, vol. 9, pp. 1–18, 2003.
[343] F. Erlandsson, C. Wahlby, S. Ekholm-Reed, A. C. Hellstrom, E. Bengtsson, and A. Zetterberg, "Abnormal expression pattern of cyclin E in tumour cells," Int J Cancer, vol. 104, no. 3, pp. 369–375, 2003.
[344] Y. Liu, D. Jin, C. Li, K. F. Janz, T. L. Burns, J. C. Torner, S. M. Levy, and P. K. Saha, "A robust algorithm for thickness computation at low resolution and its application to in vivo trabecular bone CT imaging," IEEE Trans Biomed Eng, vol. 61, no. 7, pp. 2057–2069, 2014.
[345] G. Borgefors and I. Nyström, "Efficient shape representation by minimizing the set of centres of maximal discs/spheres," Pat Recog Lett, vol. 18, no. 5, pp. 465–471, 1997.
[346] P. Dokládal, I. Bloch, M. Couprie, D. Ruijters, R. Urtasun, and L. Garnero, "Topologically controlled segmentation of 3D magnetic resonance images of the head by using morphological operators," Pat Recog, vol. 36, no. 10, pp. 2463–2478, 2003.
[347] M. Gao, C. Chen, S. Zhang, Z. Qian, D. Metaxas, and L. Axel, "Segmenting the papillary muscles and the trabeculae from high resolution cardiac CT through restoration of topological handles," in Proc of Int Conf Info Proc Med Imag (IPMI). Springer, 2013, pp. 184–195.
[348] D. Cohen-Steiner, H. Edelsbrunner, and J. Harer, "Stability of persistence diagrams," Discr Comp Geom, vol. 37, no. 1, pp. 103–120, 2007.
[349] H. Edelsbrunner and J. Harer, "Persistent homology: A survey," Contemp Math, vol. 453, pp. 257–282, 2008.
[350] H. Heijmans, M. Buckley, and H. Talbot, "Path openings and closings," J Math Imag Vis, vol. 22, no. 2-3, pp. 107–119, 2005.
