
Three-dimensional continuous max flow optimization-based serous retinal detachment segmentation in SD-OCT for central serous chorioretinopathy

MENGLIN WU,1,6 WEN FAN,2,6 QIANG CHEN,3,7 ZHENLONG DU,1 XIAOLI LI,1 SONGTAO YUAN,2 AND HYUNJIN PARK4,5,8

1School of Computer Science and Technology, Nanjing Tech University, Nanjing, China
2Department of Ophthalmology, First Affiliated Hospital with Nanjing Medical University, Nanjing, China
3School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
4School of Electronic and Electrical Engineering, Sungkyunkwan University, South Korea
5Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), South Korea
6These authors contributed equally to this manuscript
[email protected]
[email protected]

Abstract: Assessment of serous retinal detachment plays an important role in the diagnosis of central serous chorioretinopathy (CSC). In this paper, we propose an automatic, three-dimensional segmentation method to detect both neurosensory retinal detachment (NRD) and pigment epithelial detachment (PED) in spectral domain optical coherence tomography (SD-OCT) images. The proposed method involves constructing a probability map from training samples using random forest classification. The probability map is constructed from a linear combination of structural texture, intensity, and layer thickness information. Then, a continuous max flow optimization algorithm is applied to the probability map to segment the retinal detachment-associated fluid regions. Experimental results from 37 retinal SD-OCT volumes from cases of CSC demonstrate that the proposed method can achieve a true positive volume fraction (TPVF), false positive volume fraction (FPVF), positive predictive value (PPV), and Dice similarity coefficient (DSC) of 92.1%, 0.53%, 94.7%, and 93.3%, respectively, for NRD segmentation and 92.5%, 0.14%, 80.9%, and 84.6%, respectively, for PED segmentation. The proposed method can serve as an automatic tool to evaluate serous retinal detachment and has the potential to improve the clinical evaluation of CSC.

©2017 Optical Society of America

OCIS codes: (100.0100) Image processing; (110.4500) Optical coherence tomography; (170.4470) Ophthalmology.

https://doi.org/10.1364/BOE.8.004257
Received 21 Jun 2017; revised 29 Jul 2017; accepted 22 Aug 2017; published 29 Aug 2017

References and links
1. K. K. Dansingani, C. Balaratnasingam, S. Mrejen, M. Inoue, K. B. Freund, J. M. Klancnik, Jr., and L. A. Yannuzzi, "Annular lesions and catenary forms in chronic central serous chorioretinopathy," Am. J. Ophthalmol. 166, 60–67 (2016).
2. R. Hua, L. Liu, C. Li, and L. Chen, "Evaluation of the effects of photodynamic therapy on chronic central serous chorioretinopathy based on the mean choroidal thickness and the lumen area of abnormal choroidal vessels," Photodiagn. Photodyn. Ther. 11(4), 519–525 (2014).
3. Y. Kuroda, S. Ooto, K. Yamashiro, A. Oishi, H. Nakanishi, H. Tamura, N. Ueda-Arakawa, and N. Yoshimura, "Increased choroidal vascularity in central serous chorioretinopathy quantified using swept-source optical coherence tomography," Am. J. Ophthalmol. 169, 199–207 (2016).
4. S. J. Ahn, T. W. Kim, J. W. Huh, H. G. Yu, and H. Chung, "Comparison of features on SD-OCT between acute central serous chorioretinopathy and exudative age-related macular degeneration," Ophthalmic Surg. Lasers Imaging 43(5), 374–382 (2012).
5. P. Roberts, B. Baumann, J. Lammer, B. Gerendas, J. Kroisamer, W. Bühl, M. Pircher, C. K. Hitzenberger, U. Schmidt-Erfurth, and S. Sacu, "Retinal pigment epithelial features in central serous chorioretinopathy identified by polarization-sensitive optical coherence tomography," Invest. Ophthalmol. Vis. Sci. 57(4), 1595–1603 (2016).



6. A. Daruich, A. Matet, A. Dirani, E. Bousquet, M. Zhao, N. Farman, F. Jaisser, and F. Behar-Cohen, "Central serous chorioretinopathy: recent findings and new physiopathology hypothesis," Prog. Retin. Eye Res. 48, 82–118 (2015).
7. D. C. Fernández, "Delineating fluid-filled region boundaries in optical coherence tomography images of the retina," IEEE Trans. Med. Imaging 24(8), 929–945 (2005).
8. J. Novosel, Z. Wang, H. de Jong, M. van Velthoven, K. A. Vermeer, and L. J. van Vliet, "Locally-adaptive loosely-coupled level sets for retinal layer and fluid segmentation in subjects with central serous retinopathy," in Proceedings of IEEE International Symposium on Biomedical Imaging (IEEE, 2016), pp. 702–705.
9. T. Wang, Z. Ji, Q. Sun, Q. Chen, S. Yu, W. Fan, S. Yuan, and Q. Liu, "Label propagation and higher-order constraint-based segmentation of fluid-associated regions in retinal SD-OCT images," Inf. Sci. 358, 92–111 (2016).
10. J. Wang, M. Zhang, A. D. Pechauer, L. Liu, T. S. Hwang, D. J. Wilson, D. Li, and Y. Jia, "Automated volumetric segmentation of retinal fluid on optical coherence tomography," Biomed. Opt. Express 7(4), 1577–1589 (2016).
11. S. Farsiu, S. J. Chiu, J. A. Izatt, and C. A. Toth, "Fast detection and segmentation of drusen in retinal optical coherence tomography images," Proc. SPIE 6844, 68440D (2008).
12. P. A. Dufour, L. Ceklic, H. Abdillahi, S. Schröder, S. De Dzanet, U. Wolf-Schnurrbusch, and J. Kowal, "Graph-based multi-surface segmentation of OCT data using trained hard and soft constraints," IEEE Trans. Med. Imaging 32(3), 531–543 (2013).
13. F. Shi, X. Chen, H. Zhao, W. Zhu, D. Xiang, E. Gao, M. Sonka, and H. Chen, "Automated 3-D retinal layer segmentation of macular optical coherence tomography images with serous pigment epithelial detachments," IEEE Trans. Med. Imaging 34(2), 441–452 (2015).
14. B. J. Antony, A. Lang, E. K. Swingle, O. Al-Louzi, A. Carass, S. Solomon, P. A. Calabresi, S. Saidha, and J. L. Prince, "Simultaneous segmentation of retinal surfaces and microcystic macular edema in SD-OCT volumes," Proc. SPIE 9784, 97841C (2016).
15. G. Quellec, K. Lee, M. Dolejsi, M. K. Garvin, M. D. Abràmoff, and M. Sonka, "Three-dimensional analysis of retinal layer texture: identification of fluid-filled regions in SD-OCT of the macula," IEEE Trans. Med. Imaging 29(6), 1321–1330 (2010).
16. A. Lang, A. Carass, E. K. Swingle, O. Al-Louzi, P. Bhargava, S. Saidha, H. S. Ying, P. A. Calabresi, and J. L. Prince, "Automatic segmentation of microcystic macular edema in OCT," Biomed. Opt. Express 6(1), 155–169 (2015).
17. S. J. Chiu, M. J. Allingham, P. S. Mettu, S. W. Cousins, J. A. Izatt, and S. Farsiu, "Kernel regression based segmentation of optical coherence tomography images with diabetic macular edema," Biomed. Opt. Express 6(4), 1172–1194 (2015).
18. T. Hassan, M. U. Akram, B. Hassan, A. M. Syed, and S. A. Bazaz, "Automated segmentation of subretinal layers for the detection of macular edema," Appl. Opt. 55(3), 454–461 (2016).
19. X. Xu, K. Lee, L. Zhang, M. Sonka, and M. Abramoff, "Stratified sampling voxel classification for segmentation of intraretinal and subretinal fluid in longitudinal clinical OCT data," IEEE Trans. Med. Imaging 34(7), 1616–1623 (2015).
20. X. Chen, M. Niemeijer, L. Zhang, K. Lee, M. D. Abramoff, and M. Sonka, "Three-dimensional segmentation of fluid-associated abnormalities in retinal OCT: probability constrained graph-search-graph-cut," IEEE Trans. Med. Imaging 31(8), 1521–1531 (2012).
21. Z. Sun, H. Chen, F. Shi, L. Wang, W. Zhu, D. Xiang, C. Yan, L. Li, and X. Chen, "An automated framework for 3D serous pigment epithelium detachment segmentation in SD-OCT images," Sci. Rep. 6(1), 21739 (2016).
22. J. Novosel, K. A. Vermeer, J. H. de Jong, Z. Wang, and L. J. van Vliet, "Joint segmentation of retinal layers and focal lesions in 3-D OCT data of topologically disrupted retinas," IEEE Trans. Med. Imaging 36(6), 1276–1286 (2017).
23. A. Montuoro, S. M. Waldstein, B. S. Gerendas, U. Schmidt-Erfurth, and H. Bogunović, "Joint retinal layer and fluid segmentation in OCT scans of eyes with severe macular edema using unsupervised representation and auto-context," Biomed. Opt. Express 8(3), 1874–1888 (2017).
24. A. G. Roy, S. Conjeti, S. P. K. Karri, D. Sheet, A. Katouzian, C. Wachinger, and N. Navab, "ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks," Biomed. Opt. Express 8(8), 3627–3642 (2017).
25. L. Fang, D. Cunefare, C. Wang, R. H. Guymer, S. Li, and S. Farsiu, "Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search," Biomed. Opt. Express 8(5), 2732–2744 (2017).
26. J. Yuan, E. Bae, and X. C. Tai, "A study on continuous max-flow and min-cut approaches," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), pp. 2217–2224.
27. J. Yuan, E. Bae, X. C. Tai, and Y. Boykov, "A continuous max-flow approach to Potts model," in Proceedings of European Conference on Computer Vision (Springer, 2010), pp. 379–392.
28. M. Rajchl, J. Yuan, J. A. White, E. Ukwatta, J. Stirrat, C. M. Nambakhsh, F. P. Li, and T. M. Peters, "Interactive hierarchical-flow segmentation of scar tissue from late-enhancement cardiac MR images," IEEE Trans. Med. Imaging 33(1), 159–172 (2014).


29. W. Qiu, J. Yuan, E. Ukwatta, Y. Sun, M. Rajchl, and A. Fenster, "Prostate segmentation: an efficient convex optimization approach with axial symmetry using 3-D TRUS and MR images," IEEE Trans. Med. Imaging 33(4), 947–960 (2014).
30. J. Rogowska and M. E. Brezinski, "Evaluation of the adaptive speckle suppression filter for coronary optical coherence tomography imaging," IEEE Trans. Med. Imaging 19(12), 1261–1266 (2000).
31. Q. Chen, T. Leng, L. Zheng, L. Kutzscher, J. Ma, L. de Sisternes, and D. L. Rubin, "Automated drusen segmentation and quantification in SD-OCT images," Med. Image Anal. 17(8), 1058–1072 (2013).
32. C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in Proceedings of IEEE International Conference on Computer Vision (IEEE, 1998), pp. 839–846.
33. M. K. Garvin, M. D. Abràmoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka, "Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images," IEEE Trans. Med. Imaging 28(9), 1436–1447 (2009).
34. K. Li, X. Wu, D. Z. Chen, and M. Sonka, "Optimal surface segmentation in volumetric images - a graph-theoretic approach," IEEE Trans. Pattern Anal. Mach. Intell. 28(1), 119–134 (2006).
35. K. Lee, M. Niemeijer, M. K. Garvin, Y. H. Kwon, M. Sonka, and M. D. Abramoff, "Segmentation of the optic disc in 3-D OCT scans of the optic nerve head," IEEE Trans. Med. Imaging 29(1), 159–168 (2010).
36. M. D. Abràmoff, M. K. Garvin, and M. Sonka, "Retinal imaging and image analysis," IEEE Rev. Biomed. Eng. 3, 169–208 (2010).
37. F. M. Penha, P. J. Rosenfeld, G. Gregori, M. Falcão, Z. Yehoshua, F. Wang, and W. J. Feuer, "Quantitative imaging of retinal pigment epithelial detachments using spectral-domain optical coherence tomography," Am. J. Ophthalmol. 153(3), 515–523 (2012).
38. A. Chambolle, "An algorithm for total variation minimization and applications," J. Math. Imaging Vis. 20(1), 89–97 (2004).
39. J. K. Udupa, V. R. Leblanc, Y. Zhuge, C. Imielinska, H. Schmidt, L. M. Currie, B. E. Hirsch, and J. Woodburn, "A framework for evaluating image segmentation algorithms," Comput. Med. Imaging Graph. 30(2), 75–87 (2006).
40. M. Wu, Q. Chen, and X. He, "Automatic subretinal fluid segmentation of retinal SD-OCT images with neurosensory retinal detachment guided by enface fundus imaging," IEEE Trans. Biomed. Eng. (2017).

1. Introduction
Central serous chorioretinopathy (CSC) is an idiopathic disease of the posterior pole of the retina, which often appears as serous retinal detachment accompanied by leakage of altered retinal pigment epithelium (RPE) [1–3]. Spectral domain optical coherence tomography (SD-OCT), which capitalizes on recent advances in optical imaging, is the primary imaging modality for the diagnosis of CSC. State-of-the-art SD-OCT devices can non-invasively capture a high-definition, cross-sectional profile of retinal layers and pathological changes in the macular area, and thus allow radiologists to make detailed anatomical assessments for proper CSC treatment [4, 5]. Serous retinal detachment, such as pigment epithelial detachment (PED) and neurosensory retinal detachment (NRD) as shown in Fig. 1, is a prominent characteristic of CSC. Mechanical stress resulting from increased intra-choroidal pressure reduces RPE adhesion and alters hydro-ionic RPE regulation, which in turn causes PED. NRD is often associated with a mechanical abrasion resulting from an active flow through a break in the RPE [6]. Therefore, the segmentation of serous retinal detachment-associated fluid is important for evaluating the severity and progression of chorioretinal impairment.
Several automatic or semi-automatic methods have been proposed for the segmentation of fluid-filled regions with abnormal retinal structures. Many of them are 2-D methods that segment these retinal abnormalities on individual 2-D OCT slices. Fernández [7] applied a deformable model to outline the lesion boundaries with manual initialization. Novosel et al. [8] proposed an automatic, locally-adaptive approach to segment the subretinal fluid caused by CSC, exploiting the local attenuation differences between the layers near the fluid. Various spatial constraints have been incorporated into segmentation models so that spatial consistency between successive OCT slices could be exploited. Through label propagation and higher-order constraints, Wang et al. [9] presented an interactive, fluid-associated region segmentation algorithm for consecutive OCT B-scans. Wang et al. [10] utilized a voting strategy in 2-D OCT slices with different axes to eliminate false positives in cases of diabetic macular edema (DME) after fuzzy level set-based segmentation. Farsiu et al. [11] proposed a


gradient vector flow-based deformable method for the segmentation of drusen in retinal OCT images, which estimated the normal RPE layer by enforcing a local convexity condition and curve fitting.
Recently, machine learning approaches have shown competitive performance for the segmentation of retinal abnormalities. Starting with layer segmentation [12–14], retinal fluid-filled regions were detected using various classifiers, including K nearest neighbor [15], random forest [16], kernel regression [17], and support vector machine [18], with retinal layer-dependent feature selection [19]. Some reports utilized machine learning as a coarse segmentation step. Chen et al. [20] adopted a probability map from training samples as seeds for a graph-search-graph-cut method to segment symptomatic exudate-associated derangements (SEADs). Sun et al. [21] proposed a framework for serous PED segmentation that combined AdaBoost classification and a shape-constrained graph cut. Jointly segmenting both fluid and retinal layers is potentially valuable because it might allow clinically relevant interpretation of fluids within the context of retinal layers. Novosel et al. [22] proposed an approach to jointly segment retinal layers and lesions using a loosely coupled level set framework, where lesions are modeled as an additional space-variant layer delineated by auxiliary interfaces. Montuoro et al. [23] proposed a supervised learning framework to segment retinal layers together with two kinds of fluid, which utilizes the complex interactions between different retinal structures to refine the results. Roy et al. [24] proposed a deep learning-based end-to-end framework named ReLayNet, which recasts the layer and lesion segmentation problem as a classification problem. Fang et al. [25] also proposed a layer segmentation method for OCT images of AMD patients using a deep neural network, in which a convolutional neural network produces coarse probability maps that are then refined by a graph search.
Despite recent advancements, automatic serous retinal detachment segmentation remains a challenging task because of several critical problems. First, the accuracy of layer detection might be influenced by the presence of abnormal retinal structures. Second, the fluid-filled regions sometimes appear as regions with weak contours and an ill-defined shape. Furthermore, both NRD- and PED-associated fluids have similar intensities and shape profiles, which makes automatic segmentation even more difficult.
In this paper, we propose a continuous max flow approach using a probability map for serous retinal detachment segmentation. The probability map is learned from training samples and combines structural texture, intensity, and retinal layer thickness information using a region-restricted training strategy. Then, a continuous max flow model [26–29] is utilized to segment the retinal abnormalities based on the obtained probability map. The learned probability map contains processed high-level information for discriminating between NRD, PED, and other related retinal tissue. We then apply the continuous max flow approach to the probability map, as the segmentation problem is better handled in the feature space of this high-level representation. Our method affords the following advantages:
1. Our methodology consists of 3-D processing steps, except for the curve fitting used in pre-processing and post-processing.
2. We utilize a probability map representation combining various features, which improves the segmentation results of the continuous max flow optimization.
3. The proposed approach can sequentially segment PED- and NRD-associated fluid by using different restricted regions.


Fig. 1. (a) SD-OCT B-scan with PED- and NRD-associated fluid. The red curve denotes the internal limiting membrane (ILM), the green curve represents the elevated RPE floor, and the blue curve indicates the interpolated normal RPE. (b) The en face projection between the ILM and RPE floor. (c) The en face projection between the RPE floor and interpolated RPE. The dark dashed green line indicates the position of the B-scan presented in (a).

2. Methodology
2.1 Method overview
Figure 2 illustrates the flowchart of the proposed method. First, structural texture and intensity scores are calculated by random forest classification, and a thickness score is constructed by clustering the thickness map obtained after layer segmentation. These scores indicate the degree of confidence that the corresponding voxels belong to retinal detachment-associated fluid, according to the respective features. Then, these three types of feature information are linearly combined to build the probability map, which denotes the degree of membership of each voxel to the fluid. Finally, a continuous max flow approach is applied to the probability map to obtain the refined segmentation.


Fig. 2. The flowchart of the proposed segmentation framework.

2.2 Generation of probability map
In this step, we extract the structural texture and intensity scores from the denoised SD-OCT volume by random forest classification. The thickness score is computed from the thickness map generated by the layer segmentation. Then, the three types of feature information are linearly


combined to construct the probability map, which is viewed as a high-level representation of the SD-OCT volume.
2.2.1 Pre-processing
SD-OCT images are subject to speckle noise caused by the low temporal coherence and high spatial coherence of the optical beams [30], which significantly influences the performance of image processing algorithms. Fang et al. [25] applied a median filter to the normalized SD-OCT volume and learned the required filter sizes from the convolutional neural network structure. Heavy speckle noise might negatively influence the segmentation of retinal layers and fluid contours. In our protocol, we apply the widely used bilateral filter for speckle noise reduction and edge preservation, which facilitates the layer segmentation and feature extraction procedures [31, 32]. We used standard deviation values of 3.0 for the space parameter and 0.5 for the range parameter in our study.
Layer segmentation is another pre-processing step. Here, we adopt a 3-D graph search algorithm [33–35] to delineate the ILM and RPE floor (Fig. 1) using the well-established Iowa Reference Algorithm [36]. The 3-D graph search algorithm converts the multiple retinal surface segmentation problem into that of finding a minimum closed set in a node-weighted digraph with appropriate feasibility constraints, and it can effectively solve this optimization problem in low-order polynomial time. We used OCTExplorer 3.6 for layer segmentation based on the optimized 3-D graph search [33–35]. On our OCT data, the software worked well for ILM and RPE layer segmentation even when other abnormalities were present. Furthermore, our approach tolerates imperfect layer segmentation because we only need candidate regions for fluid segmentation. After the layer segmentation, we calculate an interpolated normal RPE [37] using a curve fitting procedure to handle the deformation of the retinal structure. First, a second-order polynomial fit is applied for coarse interpolation using the leftmost and rightmost 100 pixels of the elevated RPE, after which points with a distance larger than 10 pixels from the interpolated curve are removed. This assumes that there are no dramatic retinal structure changes caused by PED in those border regions. Then, we apply the second-order polynomial fit again using the refined points.
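As a concrete illustration of this two-pass fitting, the short Python sketch below estimates the interpolated normal RPE for a single B-scan. It is only an illustrative re-implementation under the stated assumptions (per-B-scan processing, a 100-pixel margin, a 10-pixel rejection distance); the function name and array layout are not taken from the paper.

```python
import numpy as np

def interpolate_normal_rpe(rpe_z, margin=100, max_dist=10):
    """Estimate the normal (non-elevated) RPE for one B-scan.

    rpe_z    : 1-D array of z-positions (pixels) of the segmented, possibly
               elevated RPE floor, one value per A-scan column.
    margin   : number of leftmost/rightmost columns assumed unaffected by PED.
    max_dist : points farther than this (pixels) from the coarse fit are dropped.
    """
    x = np.arange(len(rpe_z), dtype=float)

    # Pass 1: coarse second-order polynomial fit using only the two margins,
    # assuming the retina is not deformed by PED near the B-scan borders.
    idx = np.r_[0:margin, len(rpe_z) - margin:len(rpe_z)]
    coarse = np.polyval(np.polyfit(x[idx], rpe_z[idx], 2), x)

    # Remove points that deviate strongly from the coarse fit (elevated RPE).
    keep = np.abs(rpe_z - coarse) <= max_dist

    # Pass 2: refit the second-order polynomial on the retained points.
    return np.polyval(np.polyfit(x[keep], rpe_z[keep], 2), x)
```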
2.2.2 Classification
To extract the structural texture score from the SD-OCT volume, we apply random forest classification based on 3-D structural and textural features. The output probability is considered a measure of the similarity in texture of the given voxel with respect to the fluid.
1. Feature extraction: The 3-D structural texture features of retinal layers can be used to identify the fluid-filled regions [15, 20]. A set of 47 features is extracted from each voxel, as summarized in Table 1. The first 15 features (features 1-15) describe the local image structure, the next 30 features (features 16-45) describe the image texture, and the last two features (features 46 and 47) are the distances to the ILM and RPE floor, respectively.
2. Classifier training: We segment PED and NRD sequentially by using different restricted regions. In the training phase, we only consider voxels between the ILM and RPE floor for NRD detection, to discard voxels related to irrelevant tissue types. For PED detection, we allow an additional ±20 voxels in the z-direction (axis shown in Fig. 3(a)) between the RPE floor and the interpolated RPE. Then, the detection of NRD and PED can be treated as two separate binary classification problems, which might allow PED and NRD to overlap due to the specification of the search regions. To accelerate training, the SD-OCT volume is downsampled from 512 × 128 × 1024 voxels to 256 × 128 × 256 voxels. We apply a random forest algorithm constructing 60 decision trees with a minimum leaf size of 10. For each volume,


we used 5,000 positive (belonging to the object) voxels and 15,000 negative (belonging to the background) voxels to train the classifier in a 5-fold cross-validation fashion. We train and test the random forest classifier for each group separately. We sample more negative voxels because the background occupies a larger area than the objects in a natural setting. The prediction score produced by the random forest classifier is used as our structural texture score, which denotes the similarity in texture of the given voxel with respect to the fluid (Fig. 3(d), 3(g)). An illustrative sketch of this training step is given after Table 1.

Table 1. List of features for classification

Feature index | Details
1-5   | First eigenvalues of the Hessian matrices at 5 scales (σx = 11.7, 35.2, 70.4, 105.7, 164.4 μm; σy = 47.2, 141.7, 283.4, 425.2 μm; σz = 2.0, 5.9, 11.8, 17.6, 27.4 μm)
6-10  | Second eigenvalues of the Hessian matrices at 5 scales (σx = 11.7, 35.2, 70.4, 105.7, 164.4 μm; σy = 47.2, 141.7, 283.4, 425.2 μm; σz = 2.0, 5.9, 11.8, 17.6, 27.4 μm)
11-15 | Third eigenvalues of the Hessian matrices at 5 scales (σx = 11.7, 35.2, 70.4, 105.7, 164.4 μm; σy = 47.2, 141.7, 283.4, 425.2 μm; σz = 2.0, 5.9, 11.8, 17.6, 27.4 μm)
16-45 | Zero-, first-, and second-order derivatives of the Gaussian filter bank at 3 scales (σx = 23.5, 47.0, 93.9 μm; σy = 94.5, 189.0, 378.0 μm; σz = 3.9, 7.8, 15.7 μm)
46-47 | Distances to the ILM and RPE floor
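The sketch below outlines this training step, using scikit-learn's RandomForestClassifier purely for illustration (the paper's implementation is in MATLAB); the feature matrix, labels, and region mask are assumed inputs computed beforehand from the features in Table 1.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_texture_classifier(features, labels, region_mask, n_pos=5000, n_neg=15000, seed=0):
    """Train the structural texture classifier for one training volume/fold.

    features    : (n_voxels, 47) matrix of the Table 1 features (downsampled volume).
    labels      : (n_voxels,) binary labels, 1 = fluid voxel in the reference standard.
    region_mask : boolean vector marking the restricted search region
                  (ILM to RPE floor for NRD; RPE floor +/- 20 voxels for PED).
    """
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(region_mask & (labels == 1))
    neg = np.flatnonzero(region_mask & (labels == 0))
    # More negatives are sampled because background dominates the restricted region.
    chosen = np.r_[rng.choice(pos, min(n_pos, pos.size), replace=False),
                   rng.choice(neg, min(n_neg, neg.size), replace=False)]

    clf = RandomForestClassifier(n_estimators=60, min_samples_leaf=10, n_jobs=-1)
    clf.fit(features[chosen], labels[chosen])
    return clf

# Structural texture score S_t for all voxels in the restricted region:
# s_t = clf.predict_proba(features)[:, 1]
```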

Assume S_t ∈ [0,1] is the predicted score of each voxel given by the random forest classification, and let Ī and σ denote the mean and the standard deviation of the intensity of those voxels with S_t > 0.8, respectively. We define

S_i = exp(−||I − Ī||² / 2σ_i²)

as the intensity score for each voxel (Fig. 3(e), 3(h)), where σ_i equals two times the standard deviation σ.
2.2.3 Construction of probability map
In this step, we linearly combine the various feature information to construct the probability map. The intensity and thickness scores are obtained based on two assumptions: (1) the fluid always appears as a low-reflection region, and (2) the fluid region is associated with a larger thickness than the normal part of the retina. The thickness map is calculated between the ILM and RPE for NRD detection, and between the RPE and interpolated RPE for PED detection (Fig. 3(b), 3(c)). Then, we apply fuzzy c-means clustering to the thickness map and obtain three categories [40]. The distance between the ILM and RPE floor is plotted on a red-green-blue (from thick to thin) color scale in Fig. 3(b), 3(c). The thickness map reveals a few distinct clusters that match the general shape of our object. The category with the highest thickness is considered to be the object, and the category with the lowest thickness is considered the background. The category with a moderate thickness is combined either with the object or the background. In this study, we are mainly interested in the likelihood that a pixel of the thickness map belongs to the fluid, so the output membership degree of the class with the highest thickness is assigned to the corresponding pixels on the map (i.e., columns in B-scans), which is denoted as the thickness score S_m. The lighter blue columns in Fig. 3(f) and 3(i) indicate the restricted regions used for detecting candidate fluids. Finally, the probability map is generated through a linear combination of the three types of feature information as follows:

S_p = λ1 S_t + λ2 S_m + λ3 S_i,    (1)

where λ1, λ2, and λ3 are weight parameters subject to λ1 + λ2 + λ3 = 1. In the end, the probability map can be viewed as a high-level representation of the SD-OCT volume.
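The combination in Eq. (1) is a per-voxel weighted sum. The following Python fragment is an illustrative sketch, with the texture score S_t and the thickness score S_m assumed to be precomputed arrays; it also shows how the intensity score S_i defined above can be derived. The default weights are the NRD values reported later in the experiments.

```python
import numpy as np

def probability_map(volume, s_t, s_m, weights=(0.55, 0.20, 0.25), t=0.8):
    """Combine texture, thickness, and intensity scores into S_p (Eq. (1)).

    volume  : denoised (and downsampled) SD-OCT intensity volume.
    s_t     : structural texture score from the random forest, same shape as volume.
    s_m     : thickness score from fuzzy c-means clustering of the thickness map,
              replicated along the z-axis so that it is also voxel-wise.
    weights : (lambda1, lambda2, lambda3); defaults are the reported NRD values.
    """
    # Intensity score S_i: Gaussian similarity to the mean intensity of voxels
    # that the classifier already considers highly likely to be fluid (S_t > 0.8).
    fluid_like = volume[s_t > t]
    mean_i, sigma = fluid_like.mean(), fluid_like.std()
    sigma_i = 2.0 * sigma                      # sigma_i is twice the standard deviation
    s_i = np.exp(-((volume - mean_i) ** 2) / (2.0 * sigma_i ** 2))

    lam1, lam2, lam3 = weights
    return lam1 * s_t + lam2 * s_m + lam3 * s_i  # Eq. (1)
```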


Fig. 3. Probability map construction. (a) The SD-OCT volume. (b) The thickness map generated between the ILM and RPE for NRD detection, and (c) the thickness map generated between the RPE and interpolated RPE for PED detection. The black dashed curve denotes the contour of the fluid region in the thickness map. (d-f) The structural texture score, intensity score, and probability map for NRD detection, respectively, in the current OCT B-scan. (g-i) The structural texture score, intensity score, and probability map for PED detection, respectively, in the current OCT B-scan. The yellow dashed line indicates the restricted region in the B-scan.

2.3 Serous retinal detachment segmentation
2.3.1 Continuous max flow model applied to the generated probability map
Let R_o and R_b denote the retinal detachment-associated fluid region and the background, respectively. The label function for each voxel x can be written as:

u(x) = 1 for x ∈ R_o, and u(x) = 0 for x ∈ R_b.    (2)

This segmentation problem can be formulated as a spatially continuous min-cut problem [26], which simultaneously enforces the smoothness of the segmented region and minimizes its surface:

min_{u(x) ∈ {0,1}} E(u) = ∫_Ω (1 − u) C_s dx + ∫_Ω u C_t dx + ∫_Ω g(x) |∇u| dx,    (3)

where C_s(x) and C_t(x) are the cost functions for voxel x corresponding to the fluid and background, respectively, Ω = R_o ∪ R_b denotes the whole SD-OCT volume, and g(x) ≥ 0 represents the weight function of the total-variation term that measures the minimum surface of each fluid. In this work, we focus on the convex relaxed version of Eq. (3), with u(x) ∈ [0,1]:

min_{u(x) ∈ [0,1]} E(u) = ∫_Ω (1 − u) C_s dx + ∫_Ω u C_t dx + ∫_Ω g(x) |∇u| dx.    (4)

According to previous reports [27, 28], Eq. (4) is equivalent to the continuous max flow problem as follows:


max_{p_s, p_t, p} ∫_Ω p_s(x) dx
s.t. |p(x)| ≤ g(x), p_s(x) ≤ C_s(x), p_t(x) ≤ C_t(x),
     div p(x) − p_s(x) + p_t(x) = 0,    (5)

where p, p_s, and p_t are the spatial flow, source flow, and sink flow, respectively, g(x) is treated as the capacity of the spatial flow, and div(·) denotes the divergence operator. The continuous max flow model is shown in Fig. 4(c). In this study, we perform the segmentation on the probability map rather than on the original SD-OCT volume (Fig. 4). Therefore, we modify the cost functions of the continuous max flow model as:

C_s(x) = −log(S_p(x)),
C_t(x) = −log(1 − S_p(x)),    (6)

where S_p(x) denotes the value of the probability map at voxel x. We also set g(x) = 1 / (1 + |∇S_p(x)|) as an edge indicator function on our probability map.
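In code, the modified capacities and the edge indicator amount to a few array operations. The sketch below is illustrative; clipping S_p away from 0 and 1 to keep the logarithms finite is our own assumption, not a detail stated in the paper.

```python
import numpy as np

def max_flow_costs(s_p, eps=1e-6):
    """Capacities C_s, C_t and edge indicator g for the continuous max flow model."""
    s = np.clip(s_p, eps, 1.0 - eps)   # keep the logarithms finite (our assumption)
    c_s = -np.log(s)                   # C_s(x) = -log(S_p(x)), Eq. (6)
    c_t = -np.log(1.0 - s)             # C_t(x) = -log(1 - S_p(x)), Eq. (6)
    grad = np.gradient(s)              # partial derivatives along each axis
    grad_mag = np.sqrt(sum(gi ** 2 for gi in grad))
    g = 1.0 / (1.0 + grad_mag)         # edge indicator g(x) = 1 / (1 + |grad S_p(x)|)
    return c_s, c_t, g
```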

Fig. 4. Segmenting the probability map using continuous max flow optimization. (a) The probability map for NRD segmentation. (b) The probability map for PED segmentation. (c) The continuous max flow model. (d) The NRD and PED 3-D segmentation results. The purple and green regions denote NRD and PED, respectively. The blue region indicates the interpolated RPE.

2.3.2 Iterative optimization
To optimize Eq. (5), we define its augmented Lagrangian function as [26]:

L(p_s, p_t, p, u) = ∫_Ω p_s dx + ∫_Ω u (div p − p_s + p_t) dx − (µ/2) ||div p − p_s + p_t||²,    (7)

where the parameter µ is a constant that represents the step size of the augmented Lagrangian method. We set µ = 0.3 in our study. An optimization algorithm is applied to solve Eq. (7) iteratively until convergence. The source flow p_s and the sink flow p_t are initially set to p_s = p_t = min(C_s, C_t). We perform the following steps at each k-th iteration:


• Optimize the spatial flow p by fixing the other variables:

p^{k+1} = arg max_{|p(x)| ≤ g(x)} −(µ/2) ||div p + p_t^k − p_s^k − u^k/µ||²,    (8)

which can be solved by the gradient-projection algorithm [38].

• Optimize the source flow p_s by fixing the other variables:

p_s^{k+1} = arg max_{p_s(x) ≤ C_s(x)} ∫_Ω p_s dx − (µ/2) ||p_s − (p_t^k + div p^{k+1}) + u^k/µ||².    (9)

• Optimize the sink flow p_t by fixing the other variables:

p_t^{k+1} = arg max_{p_t(x) ≤ C_t(x)} −(µ/2) ||p_t + div p^{k+1} − p_s^k − u^k/µ||².    (10)

• Update the label function:

u^{k+1} = u^k − µ (div p^{k+1} − p_s^{k+1} + p_t^{k+1}).    (11)
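For reference, the update steps (8)-(11), together with the initialization and stopping rule described in the text, can be written compactly as below. This is an illustrative NumPy sketch rather than the authors' C++ implementation; the finite-difference operators, the projected-gradient step size tau, and the initialization of u are our own assumptions.

```python
import numpy as np

def _grad(f):
    """Forward-difference gradient with zero flux at the far boundaries."""
    g = np.zeros((3,) + f.shape)
    g[0, :-1, :, :] = f[1:, :, :] - f[:-1, :, :]
    g[1, :, :-1, :] = f[:, 1:, :] - f[:, :-1, :]
    g[2, :, :, :-1] = f[:, :, 1:] - f[:, :, :-1]
    return g

def _div(p):
    """Backward-difference divergence, the negative adjoint of _grad."""
    d = np.zeros(p.shape[1:])
    d[:-1, :, :] += p[0, :-1, :, :]; d[1:, :, :] -= p[0, :-1, :, :]
    d[:, :-1, :] += p[1, :, :-1, :]; d[:, 1:, :] -= p[1, :, :-1, :]
    d[:, :, :-1] += p[2, :, :, :-1]; d[:, :, 1:] -= p[2, :, :, :-1]
    return d

def continuous_max_flow(c_s, c_t, g, mu=0.3, tau=0.11, max_iter=100, tol=1e-4):
    """Augmented Lagrangian continuous max flow iterations, Eqs. (8)-(11)."""
    ps = np.minimum(c_s, c_t)                 # initialization given in the text
    pt = ps.copy()
    p = np.zeros((3,) + c_s.shape)            # spatial flow
    u = (c_s >= c_t).astype(float)            # assumed initialization of the label function
    for _ in range(max_iter):
        # Eq. (8): one projected-gradient step for the spatial flow p [38]
        r = _div(p) + pt - ps - u / mu
        p = p + tau * _grad(r)
        norm = np.sqrt((p ** 2).sum(axis=0))
        p *= np.minimum(1.0, g / np.maximum(norm, 1e-12))   # project onto |p(x)| <= g(x)
        divp = _div(p)
        # Eq. (9): closed-form source-flow update, clamped by the capacity C_s
        ps_new = np.minimum(divp + pt - u / mu + 1.0 / mu, c_s)
        # Eq. (10): closed-form sink-flow update (uses p_s^k), clamped by C_t
        pt = np.minimum(ps - divp + u / mu, c_t)
        ps = ps_new
        # Eq. (11): update of the label function (Lagrange multiplier)
        err = mu * (divp - ps + pt)
        u = u - err
        if np.abs(err).mean() < tol:          # stopping rule described in the text
            break
    return u                                   # threshold at 0.5 for the binary fluid mask
```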

The iteration stops when the iterative error falls below a small value (1e-4) or the maximum allowed number of iterations (100) is exceeded. After convergence, the fluid segmentation is obtained by thresholding the label function u*(x) from Eq. (11): voxels with u*(x) > 0.5 are treated as the fluid region, and the remaining voxels are treated as the background. The choice of threshold for u*(x) controls the balance between positive and negative voxels, and therefore between true positive and false positive detections. Here, we use 0.5, which is the default choice in the continuous max flow algorithm.
2.3.3 Post-processing
Low-reflection regions, such as artifacts and blood vessels, might cause misclassification. We eliminate these potential false positives based on the size and position of the detected regions. Regions smaller than 100 voxels in each B-scan and regions located beyond the candidate fluid region determined from the thickness map are removed, because such regions are more likely to be false positives. Our priority is to retain true positives rather than to remove every false positive. The size threshold depends largely on prior knowledge of the experimental data set. In practice, we extend the scope of the candidate fluid region by an additional ±117 µm (±10 pixels) along the x-axis and ±236 µm (±5 pixels) along the y-axis (axes shown in Fig. 3) on the thickness map. The x-axis and y-axis coordinates are the same on the thickness map and the original SD-OCT volume.
3. Experiments
3.1 Data set and configuration
In this study, a data set of 37 SD-OCT volumes from cases diagnosed with CSC was acquired (25 cases with only NRD, 6 cases with only PED, and 6 cases with both NRD and PED) by a Cirrus SD-OCT device (Carl Zeiss Meditec, Inc., Dublin, CA, USA). Each SD-OCT volume contains 512 × 128 × 1024 voxels, with a voxel size of 11.7 × 47.2 × 2.0 μm. This study was approved by the Institutional Review Board of The First Affiliated Hospital of Nanjing Medical University. Two specialists with more than 10 years of experience in fundus disease drew manual boundaries. They were instructed to find a smooth boundary for NRD between the neurosensory retina and the underlying RPE, which corresponds to the


accumulation of a clear or lipid-rich exudate in the subretinal space. They also drew the elevated RPE along with the overlying retina from the remaining Bruch's membrane to delineate the PED contour, which is caused by the accumulation of the fluid. We tested the proposed algorithm on a PC with an Intel Core i7-870 CPU @ 2.93 GHz and 16 GB of memory.
The 3-D textural information is known to be discriminative for identifying fluid-filled regions [15, 20]. Therefore, we assign a relatively large weight (λ1 > 0.5) to the predicted score (based on the texture). The intensity and thickness scores are treated as supplemental information to aid the segmentation, and thus they have lower weights (λ2 for intensity and λ3 for thickness) than the predicted score. We explored many sets of weights (λ2 and λ3 each in the range [0.1-0.5]) and empirically set the weights considering the trade-off between the true positive rate and the false positive rate. In our experiment, the weight parameters λ1, λ2, and λ3 in Eq. (1) are set to 0.55, 0.2, and 0.25, respectively, for NRD segmentation and 0.6, 0.3, and 0.1, respectively, for PED segmentation. The running times for random forest classification on the downsampled SD-OCT volumes (implemented in Matlab R2016a) and continuous max flow-based optimization (implemented in C++) are 440 ± 49 s and 31 ± 5 s, respectively, for NRD segmentation and 122 ± 20 s and 23 ± 2 s, respectively, for PED segmentation. The continuous max flow optimization converges very quickly because the probability map mainly contains discriminative information to distinguish between the object and the background.
3.2 Metrics for segmentation performance
For a quantitative evaluation, the performance of the segmentation results is measured by the true positive volume fraction (TPVF), false positive volume fraction (FPVF), positive predictive value (PPV) [39], and Dice similarity coefficient (DSC) as follows:

TPVF = |V_A ∩ V_T| / |V_T|,    (12)

FPVF = (|V_A| − |V_A ∩ V_T|) / (|V| − |V_T|),    (13)

PPV = |V_A ∩ V_T| / |V_A|,    (14)

DSC = 2 |V_A ∩ V_T| / (|V_A| + |V_T|),    (15)

where |·| indicates volume, V is either the total volume between the ILM and RPE (for detecting NRD) or the total volume between the ILM and interpolated RPE (for detecting PED), V_A is the automatic segmentation result, and V_T is the reference standard labeled by the specialist.
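These four measures reduce to a few set operations on boolean volumes; the following helper is a straightforward illustration (the function and argument names are ours).

```python
import numpy as np

def segmentation_metrics(v_a, v_t, v_region):
    """TPVF, FPVF, PPV, and DSC (Eqs. (12)-(15)) from boolean voxel masks.

    v_a      : automatic segmentation result.
    v_t      : reference standard labeled by the specialist.
    v_region : reference region V (ILM-RPE for NRD, ILM-interpolated RPE for PED).
    """
    inter = np.logical_and(v_a, v_t).sum()
    tpvf = inter / v_t.sum()                                   # Eq. (12)
    fpvf = (v_a.sum() - inter) / (v_region.sum() - v_t.sum())  # Eq. (13)
    ppv = inter / v_a.sum()                                    # Eq. (14)
    dsc = 2.0 * inter / (v_a.sum() + v_t.sum())                # Eq. (15)
    return tpvf, fpvf, ppv, dsc
```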

For statistical analysis, we applied a linear correlation analysis and the Bland-Altman approach to compare our results with those of the specialist.
3.3 Evaluation of serous retinal detachment
For qualitative evaluation, we show representative segmentation results from different CSC cases. Figure 5 shows results from CSC cases with only NRD. We compare our method with the initial classification by random forest and the fuzzy level set-based segmentation in [10]. The fuzzy level set method results in under-segmentation due to highly reflective artifacts inside the fluid region. The initial random forest classification results include obvious false positive regions, which have a similar intensity profile to the fluid region. Those false


positives can be suppressed by the thickness score in the probability map of our approach. Regions associated with a small retinal thickness are not considered to be fluid regions caused by serous retinal detachment. Compared to the level set-based segmentation [10] shown in Fig. 5, our method has two ways to reduce false positives. First, we use multiple features spanning multiple scales, rather than single-scale structural texture features, to obtain a more robust and discriminative representation for fluid classification. Second, the continuous max flow simultaneously enforces the smoothness of the segmented region and minimizes its surface. We also test our method on CSC cases with only PED. Figure 6 shows that highly reflective holes inside the fluid are misclassified in the initial classification, while our approach can handle this problem by adding probability information based on the thickness score and by minimizing the fluid surface using the continuous max flow optimization. We compare our method with a layer segmentation-based method [13], which detects the PED footprint between the segmented elevated RPE and the estimated normal RPE floor. Its large smoothness constraint might lead to under-segmentation of the fluid, as indicated by the yellow arrow in Fig. 6(a). The most challenging cases are the patients with both NRD and PED. Figure 7 compares the results of the reference standard, the initial classification, and our method in such a case. Although only a small NRD-associated fluid region is present, our method achieves good agreement with the manual segmentation and shows fewer false positives than the random forest classification results.

Fig. 5. Examples of fluid segmentation of CSC cases with only NRD. (a) Results in the B-scan. The results of the reference standard (first column), initial classification (second column), the fuzzy level set method (third column), and the proposed method (fourth column) are shown. The blue and yellow arrows indicate the false positives and false negatives, respectively. (b) The corresponding 3-D segmentation results of the proposed method. The blue and purple regions denote the RPE floor and area of NRD, respectively. The dashed red rectangle shows the fluid region with a weak contour.


Fig. 6. Examples of fluid segmentation of CSC cases with only PED. (a) Results in the B-scan. The results of the reference standard (first column), initial classification (second column), the layer segmentation method (third column), and the proposed method (fourth column) are shown. The yellow arrow indicates the false negatives. (b) The corresponding 3-D segmentation results of the proposed method. The blue and green regions denote the interpolated RPE floor and area of PED, respectively.

Fig. 7. Examples of fluid segmentation of CSC cases with both NRD and PED. (a) The results of the reference standard (first column), initial classification (second column), and the proposed method (third column) are shown. The blue and yellow arrows indicate the false positives and false negatives, respectively. (b) The corresponding 3-D segmentation results of the proposed method. The blue, purple and green regions denote the interpolated RPE, area of NRD and area of PED, respectively.

For a quantitative evaluation, Tables 2 and 3 summarize the TPVF, FPVF, PPV, and DSC of the various approaches with respect to the reference standards for NRD and PED, respectively. Our


method achieves a TPVF close to that of the initial classification, but with better FPVF, PPV, and DSC. Therefore, the proposed method can be viewed as a refinement procedure after the initial random forest classification, which eliminates most of the false positive regions. The proposed method is also superior to the other two algorithms in overall performance. These results indicate that the machine learning-based segmentation performs better than the pure image processing-based approaches, as machine learning methods can utilize the high-level representation learned from the training samples.

Table 2. Comparison of results from the initial classification, fuzzy level set approach, and the proposed method with respect to the reference standard for NRD segmentation

Specialist 1
Method                 | TPVF (%)    | p-value  | FPVF (%)    | p-value  | PPV (%)    | p-value  | DSC (%)     | p-value
Initial classification | 91.8 ± 4.3  | 0.846    | 0.77 ± 1.03 | 0.223    | 92.7 ± 6.4 | 0.171    | 92.1 ± 3.8  | 0.284
Fuzzy level set [10]   | 84.8 ± 16.0 | 0.021    | 1.61 ± 1.22 | 1.20e-05 | 85.6 ± 7.9 | 1.27e-06 | 84.0 ± 12.8 | 0.0002
Proposed method        | 91.7 ± 3.8  | -        | 0.51 ± 0.57 | -        | 94.7 ± 4.4 | -        | 93.1 ± 2.9  | -

Specialist 2
Method                 | TPVF (%)    | p-value  | FPVF (%)    | p-value  | PPV (%)    | p-value  | DSC (%)     | p-value
Initial classification | 92.5 ± 4.0  | 0.862    | 0.80 ± 1.13 | 0.290    | 92.7 ± 6.7 | 0.197    | 92.4 ± 3.8  | 0.258
Fuzzy level set [10]   | 85.3 ± 15.6 | 0.013    | 1.55 ± 1.13 | 0.0001   | 85.9 ± 8.1 | 6.45e-06 | 84.2 ± 12.8 | 0.0002
Proposed method        | 92.4 ± 3.8  | -        | 0.55 ± 0.69 | -        | 94.7 ± 4.9 | -        | 93.4 ± 2.7  | -

Table 3. Comparison of results from the initial classification, layer segmentation approach, and the proposed method with respect to the reference standard for PED segmentation

Specialist 1
Method                  | TPVF (%)    | p-value | FPVF (%)    | p-value | PPV (%)     | p-value | DSC (%)     | p-value
Initial classification  | 91.9 ± 10.6 | 0.975   | 0.17 ± 0.10 | 0.481   | 75.2 ± 15.4 | 0.392   | 81.2 ± 10.6 | 0.450
Layer segmentation [12] | 87.7 ± 7.2  | 0.258   | 0.16 ± 0.12 | 0.824   | 79.1 ± 10.4 | 0.761   | 82.4 ± 3.2  | 0.504
Proposed method         | 91.8 ± 9.9  | -       | 0.14 ± 0.11 | -       | 84.5 ± 10.2 | -       | 81.8 ± 16.2 | -

Specialist 2
Method                  | TPVF (%)    | p-value | FPVF (%)    | p-value | PPV (%)     | p-value | DSC (%)     | p-value
Initial classification  | 92.7 ± 9.9  | 0.895   | 0.19 ± 0.10 | 0.443   | 73.8 ± 15.4 | 0.356   | 80.7 ± 10.9 | 0.385
Layer segmentation [12] | 88.9 ± 6.8  | 0.199   | 0.17 ± 0.11 | 0.803   | 77.9 ± 9.9  | 0.717   | 82.3 ± 3.3  | 0.483
Proposed method         | 93.2 ± 9.0  | -       | 0.15 ± 0.13 | -       | 80.0 ± 10.6 | -       | 84.6 ± 10.6 | -


For statistical analysis, we apply a linear correlation analysis and the Bland-Altman reproducibility approach to compare the proposed method with the reference standard of the specialist. The linear correlation results and Bland-Altman plots for NRD and PED segmentation are shown in Fig. 8 and Fig. 9, respectively. Figures 10 and 11 present the reproducibility assessment of the manual tracings of the two specialists. Our method demonstrates a high correlation with the manual results of the specialist for NRD (r² = 0.99) and PED (r² = 0.91) segmentation. The Bland-Altman results confirm stable agreement between the automated method and the reference standard. Simultaneous segmentation of NRD and PED is challenging, and our results demonstrate similar performance whether one type of abnormality or both types are segmented.
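The correlation and Bland-Altman statistics reported above can be reproduced from the per-case fluid volumes with a few lines; the sketch below is illustrative, and the 1.96 × SD limits of agreement are the conventional choice and an assumption here, since the paper does not state them explicitly.

```python
import numpy as np

def agreement_stats(auto_vol, manual_vol):
    """Squared correlation and Bland-Altman statistics for per-case fluid volumes."""
    auto_vol = np.asarray(auto_vol, dtype=float)
    manual_vol = np.asarray(manual_vol, dtype=float)
    r = np.corrcoef(auto_vol, manual_vol)[0, 1]
    diff = auto_vol - manual_vol
    bias = diff.mean()                     # mean difference (Bland-Altman bias)
    loa = 1.96 * diff.std(ddof=1)          # conventional 95% limits of agreement
    return r ** 2, bias, (bias - loa, bias + loa)
```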

Fig. 8. Statistical correlation analysis for NRD volume evaluation. (a) Correlation analysis between the proposed method and the specialist 1. (b) Bland-Altman plot for the proposed method. The blue and purple rectangles denote the CSC subjects with only NRD and those with both NRD and PED, respectively.

Fig. 9. Statistical correlation analysis for PED volume evaluation. (a) Correlation analysis between the proposed method and the specialist 1. (b) Bland-Altman plot for the proposed method. The blue and purple rectangles denote the CSC subjects with only PED and those with both NRD and PED, respectively.


Fig. 10. Statistical correlation analysis of reproducibility comparing manual tracings of specialist 1 and specialist 2 for NRD volume. (a) Correlation analysis between the specialist 1 and the specialist 2. (b) Bland-Altman plot.

Fig. 11. Statistical correlation analysis of reproducibility comparing manual tracings of specialist 1 and specialist 2 for PED volume. (a) Correlation analysis between the specialist 1 and the specialist 2. (b) Bland-Altman plot.

4. Discussion and conclusion
Assessment of serous retinal detachment plays a vital role in CSC diagnosis and treatment. Automatic segmentation of serous retinal detachment is a challenging task, especially when NRD and PED occur simultaneously. The abnormal retinal structure, weak contours, and ill-defined fluid-filled regions make the segmentation task difficult. Furthermore, both NRD- and PED-associated fluids have similar intensities and shape profiles, which makes the segmentation task even more difficult. In this paper, we propose a 3-D automatic segmentation method for serous retinal detachment in cases of CSC. First, a probability map is constructed from learning samples using random forest classification. The probability map is a linear combination of structural texture, intensity, and layer thickness information. Then, we apply a continuous max flow optimization algorithm on this probability map to segment the fluid-associated regions of serous retinal detachment. Our method is a fully 3-D approach that handles both NRD and PED segmentation in SD-OCT images of CSC cases, whereas most previously published studies focus on only one type of abnormality.
Compared with the pure image processing-based segmentation methods, our method achieves higher overall performance. The learning procedure can effectively extract the


high-level representation from the raw intensity values for each voxel. Therefore, our method is robust and can avoid misclassifying other fluid-like regions, such as low-reflection artifacts and blood vessels. Moreover, although NRD and PED share similar intensities and shape profiles, we can discriminate between these two types of abnormalities using a region-restricted learning strategy. The post-processing steps based on the candidate region's size and position, restricted by the thickness map, are utilized to eliminate potential false positives.
The proposed method significantly improves the initial classification results by using the probability map. The probability map flexibly represents the voxel score for the segmentation task by combining various feature information depending on the target abnormality. For example, we assign a higher weight to retinal thickness for PED segmentation than for NRD segmentation, because the thickness between the elevated RPE floor and the interpolated RPE is an important factor for judging PED. Furthermore, the continuous max flow optimization simultaneously enforces the smoothness of the segmented region and minimizes its surface. Thus, the problem of highly reflective artifacts inside the fluid can be handled, as shown in Fig. 6.
Compared with other state-of-the-art approaches, our method achieves favorable performance: (1) our method obtains TPVF, FPVF, and DSC of 92.1%, 0.53%, and 93.3% for NRD segmentation, and 92.5%, 0.14%, and 84.6% for PED segmentation; (2) Chen et al. [20] achieved TPVF and FPVF of 86.5% and 1.7% for SEAD segmentation in AMD subjects; (3) Montuoro et al. [23] reported a DSC of 60% for fluid segmentation in a public data set; (4) Roy et al. [24] achieved a DSC of 77% for fluid segmentation using a deep neural network; (5) Novosel et al. [22] obtained TPR, FPR, and Dice of 93%, 12%, and 89% for fluid segmentation in CSC patients. Note that the comparison is an indirect one, using different data sets with different types of fluid.
Segmenting retinal lesions without considering retinal layers might ignore potentially valuable clinical information regarding the retina; exploring their complex interaction can yield improved segmentation results in severely diseased cases. Novosel et al. [22] extended the loosely coupled level set model to jointly segment retinal layers and lesions in topology-disrupting retinal diseases, where lesions were modeled as an additional space-variant layer denoted by auxiliary interfaces. Montuoro et al. [23] applied a 3-D graph search on the trained probability map and utilized an auto-context methodology to refine the results by exploiting the complex interaction between different retinal structures. The complementary information between retinal layers and associated lesions is useful and meaningful for fluid detection and segmentation, which we intend to explore in future work.
There are several limitations that need to be addressed in future studies. Some very small fluid regions are treated as artifacts in our method, and thus they can hardly be detected (as shown in Fig. 7(a)). However, this defect does not significantly influence the quantitative evaluation, as the large structures are segmented correctly. Figure 7(b) shows a false positive in the 3-D segmentation results, which indicates that the false positive region was assigned a high probability of belonging to the fluid. Therefore, we need to extract more robust and discriminative features to eliminate such false positives in future studies.
In addition, we believe the manual threshold of the label function u*(x) from Eq. (11) can be adjusted to alter the true positive and false positive detection rates as needed in clinical practice. Our results come from 37 cases, and thus our findings need to be validated using a larger cohort in a future study. Although our framework is currently applied to detect serous retinal detachment in cases of CSC, we believe it can be applied to the segmentation of other abnormalities, such as symptomatic exudate-associated derangement (SEAD), with only minor modification.
In summary, our 3-D automatic segmentation method demonstrates good agreement with the manual segmentation method both qualitatively and quantitatively. Our method can be


used as an automatic tool to evaluate serous retinal detachment, including NRD and PED, in cases of CSC, and has the potential to improve the clinical treatment of CSC. Funding National Science Foundation of China (NSFC) (61671242, 61672279, 61701222); Fundamental Research Funds for the Central Universities (30920140111004); Jiangsu Government Scholarship for Overseas Studies; Institute for Basic Science (IBS-R015-D1); National Research Foundation of Korea (NRF-2016R1A2B4008545); Natural Science Foundation for Universities of Jiangsu (17KJB510026). Conflicts of interest The authors declare that there are no conflicts of interest related to this article.
