http://informahealthcare.com/jmt ISSN: 0309-1902 (print), 1464-522X (electronic) J Med Eng Technol, 2014; 38(4): 211–219 ! 2014 Informa UK Ltd. DOI: 10.3109/03091902.2014.904014

RESEARCH ARTICLE

MRI-PET image fusion based on NSCT transform using local energy and local variance fusion rules

1Biomedical Engineering Department, Science and Research Branch, Islamic Azad University, Tehran, Iran; 2Biomedical Signal and Image Processing Lab (BiSIPL), Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran; and 3Bio-Medical Engineering Group, Department of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran

Abstract

Image fusion integrates information from one image into another. According to their nature, medical images are divided into structural (such as CT and MRI) and functional (such as SPECT and PET). This article fuses MRI and PET images; the purpose is to add the structural information of MRI to the functional information of PET. The images are decomposed with the nonsubsampled contourlet transform (NSCT) and then fused by applying fusion rules: the coefficients of the low frequency band are combined by a maximal energy rule and the coefficients of the high frequency bands are combined by a maximal variance rule. Finally, visual and quantitative criteria are used to evaluate the fusion result. In the visual evaluation the opinions of two radiologists were used, and in the quantitative evaluation the proposed fusion method was compared with six existing methods using entropy, mutual information, discrepancy and overall performance as criteria.

Keywords: Image fusion, fusion rules, multiscale geometric analysis, nonsubsampled contourlet transform

1. Introduction

There are various types of medical images, and each has a specific application. Images such as MRI (Magnetic Resonance Imaging) and CT (Computed Tomography) provide anatomical information with high resolution; images such as PET (Positron Emission Tomography) and SPECT (Single Photon Emission Computed Tomography) provide functional information with low spatial resolution. Functional images do not contain detailed anatomical information. To work around this, the physician can view the anatomical and functional images side by side; however, this is imprecise because the images have different resolutions, so researchers have applied image fusion in the medical field. Image fusion is the process of combining information from two or more images of a scene to construct an image that is more informative and more suitable for visual perception [1]. Image fusion techniques can be classified on three main levels: pixel level, feature level and decision level [2,3]. Among the three, pixel-level fusion is the most widely used and includes the majority of image fusion algorithms [3,4]. Pixel-level algorithms work either in

*Corresponding author. Email: [email protected]

History Received 19 October 2013 Revised 8 March 2014 Accepted 9 March 2014

the spatial domain, such as averaging, PCA, Brovey and HSI [5,6], or in the transform domain, such as multi-scale decomposition and multi-scale geometric analysis [7,8]. Spatial domain methods usually lead to undesirable effects such as reduced contrast and spectral distortion. The advantage of transform domain techniques is that they can show salient features more clearly [9]. Multi-scale decomposition (MSD) methods do not suffer from spatial domain problems such as spectral distortion and low contrast in the fusion result. Typical multi-scale decompositions include the Laplacian pyramid and the 2-D wavelet transform. The Laplacian pyramid has the distinguishing feature that each pyramid level generates only one bandpass image [7]. The wavelet transform produces a nonredundant image representation and can provide better spatial and spectral localization of image information than other multi-resolution representations. However, the 2-D wavelet transform decomposes images into only three directions (vertical, horizontal and diagonal), capturing only limited directional information [9,10]. To overcome the limitations of MSD methods, novel multi-scale geometric analysis (MGA) tools have been introduced into image fusion; these can capture 2-D geometrical structures in visual information much more effectively than MSD methods [11]. Typical multi-scale geometric analyses include the curvelet transform (CVT), the contourlet transform (CT) and the nonsubsampled contourlet transform (NSCT). The curvelet transform is effective in capturing curvilinear properties, like lines and edges. However, the curvelet transform, in terms of


J Med Eng Technol Downloaded from informahealthcare.com by National University of Singapore on 06/04/14 For personal use only.

Nasrin Amini1, E. Fatemizadeh*2, and Hamid Behnam3


computation, is complex and time-consuming [12,13]. The contourlet transform can provide a multi-scale and directional decomposition for images, which is more suitable for capturing complex contours, edges and textures. Due to down-sampling and up-sampling, the contourlet transform is shift-variant. The nonsubsampled contourlet transform (NSCT) is a shift-invariant version of the contourlet transform [14–19]. Shift-invariance is desirable in image analysis applications such as edge detection, contour characterization and image enhancement [8]. Wang et al. [16] used the NSCT for multi-focus, CT-MRI and visible-infrared images, together with the NMF algorithm for the fusion rules, in 2012. Wang et al. [14] used the NSCT for multi-focus, CT-SPECT and visible-infrared images and applied three fusion rules:
- Fusion rule 1: both the low-frequency and the high-frequency coefficients follow the average value rule,

Figure 1. Ideal frequency response of the building block of: (a) nonsubsampled pyramid; (b) nonsubsampled DFB.




- Fusion rule 2: the low-frequency coefficients follow the average value rule and the high-frequency coefficients follow the largest absolute value rule, and
- Fusion rule 3: both the low-frequency and the high-frequency coefficients follow the largest absolute value rule.

Das and Kundu [15] used the NSCT and a pulse-coupled neural network (PCNN) for multimodal medical image fusion in 2012; their fusion rule in the low-frequency sub-band was max selection, and a PCNN model was utilized for fusing the high-frequency sub-bands. Savitha et al. [17] used the NSCT for MRI and CT in 2014 with the fusion rules of averaging in the low-frequency sub-band and max selection in the high-frequency sub-bands. Li and Wang [18,19] used the NSCT for biological images and for MRI and SPECT images in 2010 and 2011.

In this paper, we present MRI and PET image fusion based on the nonsubsampled contourlet transform (NSCT). Maximum local energy and maximum local variance are incorporated as the fusion rules, and the fused image is obtained by applying the inverse transform. We also compare our fusion rules with the routine rules of averaging and maximum selection. Experimental results show that the proposed fusion algorithm provides an effective way to enable more accurate analysis of multimodality images.

The rest of this paper is organized as follows. In section 2, we briefly introduce the nonsubsampled contourlet transform, image fusion based on NSCT and the fusion rules. The experimental results and the visual and quantitative evaluation are presented in section 3. Section 4 concludes the paper.
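The routine coefficient-level rules surveyed above (averaging and largest absolute value) are simple element-wise operations on the transform coefficients. The sketch below, with made-up coefficient arrays, is our illustration rather than code from any of the cited works:

```python
import numpy as np

def fuse_average(a, b):
    # Rule: average the corresponding transform coefficients.
    return (a + b) / 2.0

def fuse_max_abs(a, b):
    # Rule: keep, at each position, the coefficient with the larger magnitude.
    return np.where(np.abs(a) >= np.abs(b), a, b)

a = np.array([[1.0, -4.0], [2.0, 0.5]])
b = np.array([[3.0,  2.0], [-1.0, 0.7]])
print(fuse_average(a, b))  # element-wise mean of the two coefficient sets
print(fuse_max_abs(a, b))  # [[3., -4.], [2., 0.7]]
```

Averaging tends to smooth and lower contrast, while max-abs keeps the stronger response at each location, which is why the survey above pairs it with the detail (high-frequency) bands.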

Figure 2. The nonsubsampled contourlet transform: (a) Block diagram. (b) Resulting frequency division.

Figure 3. Block diagram of the image fusion based on nonsubsampled contourlet transform algorithm.


Figure 4. Data-set of MRI images.

Figure 5. Data-set of PET images.


Figure 6. Normal coronal PET and MRI images (a and b), Averaging (c), HSI model (d), Laplacian Pyramid (e), Wavelet transform (f), Curvelet transform (g), contourlet transform (h) and proposed method (NSCT) (i).

2. Image fusion based on nonsubsampled contourlet transform

Recently, Do and Vetterli [20] proposed an efficient directional multi-resolution image representation called the contourlet transform, one of the most modern techniques in multi-scale geometric analysis. The contourlet transform can capture 2-D geometrical structures in visual information much more effectively than traditional multi-resolution analysis methods [11]. This transform employs Laplacian pyramids to achieve multi-resolution decomposition and directional filter banks for directional decomposition. Due to down-sampling and up-sampling, the contourlet transform is shift-variant. Shift-invariance is desirable in image processing applications such as edge detection and contour characterization. Therefore, the nonsubsampled contourlet transform (NSCT) was created as a shift-invariant version of the contourlet transform [21].

2.1. Nonsubsampled contourlet transform

The NSCT is built from nonsubsampled pyramids and nonsubsampled directional filter banks and provides a shift-invariant directional multi-resolution image representation. The contourlet transform employs Laplacian pyramids for multi-scale decomposition and directional filter banks (DFB) for directional decomposition. The nonsubsampled pyramid is completely different from the Laplacian pyramid of the contourlet transform, and the nonsubsampled directional filter bank is a shift-invariant version of the DFB in the contourlet transform. The building blocks of the nonsubsampled pyramid and nonsubsampled DFB are shown in Figure 1. The perfect reconstruction condition is given as:

H0(z)G0(z) + H1(z)G1(z) = 1    (1)
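The perfect reconstruction condition of Equation (1) can be checked numerically for a toy one-dimensional filter pair; the filters below are illustrative choices of ours, not the NSCT's actual filter design:

```python
import numpy as np

# Illustrative analysis/synthesis filter pair: H0 is a lowpass kernel,
# H1 = delta - H0, and the synthesis filters are G0 = G1 = delta. Then
# H0(z)G0(z) + H1(z)G1(z) = H0(z) + (1 - H0(z)) = 1.
h0 = np.array([0.25, 0.5, 0.25])   # analysis lowpass
delta = np.array([0.0, 1.0, 0.0])  # unit impulse, aligned with h0
h1 = delta - h0                    # analysis highpass
g0 = np.array([1.0])               # synthesis lowpass (identity)
g1 = np.array([1.0])               # synthesis highpass (identity)

# Polynomial multiplication realizes the products H0(z)G0(z), H1(z)G1(z).
total = np.polymul(h0, g0) + np.polymul(h1, g1)
print(total)  # [0. 1. 0.] -> the unit impulse, i.e. H0*G0 + H1*G1 = 1
```

Because no down-sampling is involved, satisfying Equation (1) alone is enough for exact reconstruction of the two-band split.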

The nonsubsampled contourlet transform is shown in Figure 2. First, the nonsubsampled pyramid divides the input into a lowpass sub-band and a highpass sub-band. Then a nonsubsampled DFB decomposes the highpass sub-band into


several directional sub-bands. The scheme is iterated repeatedly on the lowpass sub-band [21].

Figure 7. Normal sagittal PET and MRI images (a and b), Averaging (c), HSI model (d), Laplacian Pyramid (e), Wavelet transform (f), Curvelet transform (g), contourlet transform (h) and proposed method (NSCT) (i).

2.2. Fusion using nonsubsampled contourlet transform

A good image fusion algorithm should preserve all the salient features of the source images. In this paper, we develop a novel image fusion algorithm using the nonsubsampled contourlet transform and new fusion rules, which creates a fused image that contains more information than either source image alone and is more suitable for human vision and object detection in clinical applications. The method first performs the NSCT on the rows and columns of the images, yielding one set of approximation coefficients in the low frequency band and sets of detail coefficients in the high frequency bands; the two images are then fused by applying the fusion rules: maximal energy in the low frequency band and maximal variance in the high frequency bands. To better convey the concept and procedure of our fusion technique, the proposed method is illustrated in Figure 3.

Therefore, the detailed fusion steps for the multispectral (PET) and panchromatic (MRI) images based on the nonsubsampled contourlet transform can be summarized as follows:
- Step 1: The two images to be fused must be registered to ensure that the corresponding pixels are aligned.
- Step 2: The multispectral image is converted from RGB space to HSI space.
- Step 3: The two images are decomposed with the NSCT into one low frequency band and sets of high frequency bands.
- Step 4: The transform coefficients of the different bands are combined with the fusion rules: maximal energy in the low frequency band and maximal variance in the high frequency bands.
- Step 5: The inverse NSCT is performed on the result of step 4.
- Step 6: The fused image is constructed by converting the result back to RGB space.

In the process of image fusion, the choice of fusion rules is very important because it influences the fusion results. The common fusion rules are averaging and maximum selection. We used maximal energy in the low frequency


Figure 8. Normal axial PET and MRI images (a and b), Averaging (c), HSI model (d), Laplacian Pyramid (e), Wavelet transform (f), Curvelet transform (g), contourlet transform (h) and proposed method (NSCT) (i).

band and maximal variance in the high frequency bands and show that our fusion rules perform better than averaging and maximum selection.

2.3. Fusion of the low frequency sub-band

Fusion rules, and the way they are applied, directly affect the final fusion performance. Most of the information of the source images is kept in the low frequency band, such as the mean intensity and texture information [22,23]. Moreover, most of the energy of an image is located at low frequencies, and local energy preserves most image features, so we use local energy to fuse the low frequency band. Local energy is one method of finding important coefficients. First we calculate the local energy E(x,y), centring the current coefficient in the approximate sub-band aJ [11]:

E(x,y) = Σ_m Σ_n aJ(x+m, y+n)^2 · WL(m,n)    (2)

where WL is the window over the image and aJ contains the coefficients of the sub-band. After obtaining the local energy of each window, the coefficients are selected by:

aF = aA if |EA| > |EB|, otherwise aF = aB    (3)

2.4. Fusion of the high frequency sub-bands

The high frequency (HF) bands contain the detail coefficients, contour structure and salient information of an image. Therefore, to improve the quality of the fused result, we use local variance for selecting the coefficients in the HF bands. The fusion scheme for the HF bands can be formulated as in Equations (4) and (5) [9]:

σI(p) = (1/(S·T)) Σ_s Σ_t (DI(m+s, n+t, k, l) − meanI(p))^2    (4)

meanI(p) = (1/(S·T)) Σ_s Σ_t DI(m+s, n+t, k, l)    (5)

where meanI(p) and σI(p) denote the mean value and variance of the coefficients centred at (m,n) in the window of size S × T, respectively.


Figure 9. Alzheimer’s disease PET and MRI images (a and b), Averaging (c), HSI model (d), Laplacian Pyramid (e), Wavelet transform (f), Curvelet transform (g), contourlet transform (h) and proposed method (NSCT) (i).

After obtaining the local variance of each window, the coefficients are selected by:

aF = aA if |σA| > |σB|, otherwise aF = aB    (6)

Local contrast represents local variation: if the image is not noisy and the window size is small, high contrast means that there is an edge in the considered window.
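Putting steps 3-5 together on a single intensity channel (the RGB-HSI conversions of steps 2 and 6 are omitted), a one-level sketch can use a uniform lowpass filter as a crude shift-invariant stand-in for the NSCT decomposition; this is our simplification for illustration, not the paper's actual transform:

```python
import numpy as np

def box_mean(a, r=2):
    # Uniform lowpass filter (a crude stand-in for the nonsubsampled
    # pyramid; zero padding at the borders).
    p = np.pad(a, r)
    out = np.zeros(a.shape)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (2 * r + 1) ** 2

def fuse(ia, ib, r=1):
    # One-level shift-invariant sketch of steps 3-5: split each image into
    # a lowpass band and a highpass residual, apply the paper's rules
    # (max local energy for low, max local variance for high), then invert.
    la, lb = box_mean(ia), box_mean(ib)              # low frequency bands
    ha, hb = ia - la, ib - lb                        # high frequency bands
    ea, eb = box_mean(la**2, r), box_mean(lb**2, r)  # local energy, eq. (2)
    va = box_mean(ha**2, r) - box_mean(ha, r)**2     # local variance, eqs. (4)-(5)
    vb = box_mean(hb**2, r) - box_mean(hb, r)**2
    low = np.where(ea > eb, la, lb)                  # eq. (3)
    high = np.where(va > vb, ha, hb)                 # eq. (6)
    return low + high                                # inverse of this toy transform

rng = np.random.default_rng(1)
mri = rng.random((32, 32))
pet = rng.random((32, 32))
fused = fuse(mri, pet)
print(fused.shape)  # (32, 32)
```

Because the toy decomposition is simply lowpass plus residual, adding the two fused bands exactly inverts it; in the real method the inverse NSCT plays this role.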

3. Experimental results and discussion

The data set consists of 34 colour PET images and the corresponding high-resolution MRI images. All images were downloaded from the Harvard University site (http://www.med.harvard.edu/AANLIB/home.html). The brain images are classified into two groups: normal (coronal, sagittal and transaxial) and Alzheimer's disease data-set images. The PET images contain three spectral bands (red, green, blue) and the MRI images are panchromatic. Figures 4 and 5 show the data sets of MRI and PET images used in this article, respectively. All MRI and PET images were re-sized to 256 × 256.

Averaging, the HSI transform, the wavelet transform, the Laplacian pyramid, the curvelet transform, the contourlet transform and the proposed method (NSCT) were employed to fuse the image data-sets. The images were all decomposed into four levels by the DWT (db2), LP, CVT, CT and NSCT. The original images and fusion results are displayed in Figures 6–9. The fused results can be evaluated subjectively or objectively; besides the visual comparison between the fused and original images, quantitative analysis was also applied to the experimental results of the different fusion algorithms. Entropy, mutual information, discrepancy and overall performance were selected as the quantitative measurements. Fusion based on averaging and on HSI leads to undesirable effects, namely reduced contrast and spectral distortion, respectively. The Laplacian pyramid has the distinguishing feature that each pyramid level generates only one bandpass image. Wavelet transforms decompose images into a limited number of directions. The curvelet transform, in terms of computation, is complex and time-consuming. The contourlet and nonsubsampled contourlet transforms can provide a multi-scale and


Table 1. Quantitative comparison of the proposed method and six existing methods for fusion of the complete MRI–PET data-set with the fusion rules of averaging and maximum selection (mean ± standard deviation over 34 images).

Method              Entropy           Mutual information   Discrepancy       OP
Pixel averaging     3.1141 ± 0.6050   2.8881 ± 0.4950      6.1358 ± 3.1009   3.2292 ± 2.6033
HSI based           2.7465 ± 0.5273   2.3796 ± 0.4357      7.8387 ± 2.9109   3.3772 ± 2.2280
Laplacian pyramid   2.8755 ± 0.5634   1.9299 ± 0.2643      7.7678 ± 2.8812   0.9461 ± 2.2068
Wavelet             3.0815 ± 0.5789   1.9000 ± 0.2662      5.5806 ± 1.7392   1.1012 ± 0.7471
Curvelet            3.1371 ± 0.5607   1.9705 ± 0.2754      5.5333 ± 1.7438   1.0148 ± 0.8064
Contourlet          3.9996 ± 0.5479   1.8770 ± 0.2629      6.0533 ± 1.7742   1.1805 ± 0.8739
NSCT                3.3029 ± 0.5739   1.9817 ± 0.2803      5.0661 ± 1.6740   0.9341 ± 0.6596

Table 2. Quantitative comparison of the proposed method and six existing methods for fusion of the complete MRI–PET data-set with the fusion rules of maximal energy and maximal variance (mean ± standard deviation over 34 images).

Method              Entropy           Mutual information   Discrepancy       OP
Pixel averaging     3.1141 ± 0.6050   2.8881 ± 0.4950      6.1358 ± 3.1009   3.2292 ± 2.6033
HSI based           2.7465 ± 0.5273   2.3796 ± 0.4357      7.8387 ± 2.9109   3.3772 ± 2.2280
Laplacian pyramid   2.8755 ± 0.5634   1.9299 ± 0.2643      7.7678 ± 2.8812   0.9461 ± 2.2068
Wavelet             3.3285 ± 0.5632   1.8502 ± 0.2371      6.0827 ± 2.0100   1.7598 ± 1.1480
Curvelet            3.3090 ± 0.5881   1.9159 ± 0.2532      5.7767 ± 1.8290   1.1166 ± 0.8423
Contourlet          4.1828 ± 0.5392   1.7949 ± 0.2246      7.4307 ± 2.1142   2.5320 ± 1.3088
NSCT                3.3794 ± 0.5862   1.9901 ± 0.2558      5.0510 ± 1.5680   0.8707 ± 0.5848

directional decomposition for images. In the visual comparison we used the opinions of two radiologists (one neuroradiologist and one professor of diagnostic radiology and nuclear medicine), and the proposed method was judged better than the other methods.

The averages and standard deviations of the above-mentioned parameters over the 34 images were calculated and are listed in Tables 1 and 2. A small discrepancy and overall performance (OP) together with a high entropy and mutual information indicate a higher overall fusion quality. The difference between Tables 1 and 2 is the fusion rules: Table 1 shows the results for averaging in the low frequency band and maximum selection in the high frequency bands, while Table 2 shows the results for maximal energy in the low frequency band and maximal variance in the high frequency bands, demonstrating that our fusion rules perform better than averaging and maximum selection. Table 2 shows that the entropy of the fused images achieved by the proposed method is the highest after the contourlet transform, and the mutual information achieved by the spatial domain methods is the highest; however, the discrepancy and overall performance achieved by the proposed method are the lowest.
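Two of the four criteria, entropy and mutual information, have standard histogram-based definitions that can be sketched as follows; the bin count and grey-level range are our choices, and the paper's exact formulations of discrepancy and overall performance are not reproduced here:

```python
import numpy as np

def entropy(img, bins=256):
    # Shannon entropy of the grey-level histogram, in bits.
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(a, b, bins=256):
    # Mutual information from the joint grey-level histogram of two images
    # (e.g. a source image and the fused image).
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
print(entropy(img))                  # near 8 bits for uniform noise
print(mutual_information(img, img))  # MI of an image with itself = its entropy
```

A higher entropy of the fused image and a higher mutual information with the sources both indicate that more information was carried over, which is how Tables 1 and 2 are read.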

4. Conclusions

Previous articles have applied the NSCT to images such as multi-focus, visible-infrared and medical images. This paper presents a novel PET and MRI image fusion algorithm based on the NSCT with local energy and local variance fusion rules. PET provides colour images with low spatial resolution, while MRI provides appropriate spatial resolution without colour. Image fusion is the process of integrating information from two or more images of an object into a single image that is more appropriate for visual perception or computer analysis. We employed averaging, the HSI transform, the wavelet transform, the Laplacian pyramid, the curvelet transform, the contourlet transform and the proposed method (NSCT) to fuse the MRI and PET images. Spatial domain fusion leads to reduced contrast and spectral distortion; to overcome this problem, transform domain methods have been introduced into image fusion. In the visual comparison, based on the opinions of two radiologists (one neuroradiologist and one professor of diagnostic radiology and nuclear medicine), the proposed method is better than the other methods. The fusion result can be used alongside the two source images and helps physicians reach better diagnoses. In the quantitative evaluation, MGA methods give better results than MSD methods among the transform domain approaches, and the discrepancy and overall performance show that the proposed method is the best. We also showed that our fusion rules (maximal energy in the low frequency band and maximal variance in the high frequency bands) perform better than averaging in the low frequency band and maximum selection in the high frequency bands. A drawback of the proposed method is its long running time. A future study may extend this method to other medical images and employ new fusion rules.

Acknowledgements

The authors would like to thank the anonymous reviewers for their invaluable suggestions. We are very thankful to Professor Bijan Bijan and Amir Arsalan Zamani, MD, who gave us their time. This article is extracted from a biomedical engineering master's thesis, sponsored by the Biomedical Engineering Department, Science and Research Branch, Islamic Azad University, Tehran, Iran.

Declaration of interest The authors report no conflicts of interest. The authors alone are responsible for the content and writing of this article.


References

1. Goshtasby, A., 2007, Image fusion: Advances in the state of the art. Information Fusion, 8, 114–118.
2. Deshmukh, M., and Bhosale, U., 2009, Image fusion and image quality assessment of fused images. International Journal of Image Processing, 4, 489–508.
3. Angel, H., Ste-Croix, C., and Kittel, E., 2007, Review of fusion systems and contributing technologies for SIHS (Toronto, Canada: Defence Research and Development).
4. Yang, B., and Li, S., 2012, Pixel-level image fusion with simultaneous orthogonal matching pursuit. Information Fusion, 13, 10–19.
5. He, C., Liu, Q., et al., 2010, Multimodal medical image fusion based on IHS and PCA. Procedia Engineering, 7, 280–285.
6. Daneshvar, S., and Ghassemian, H., 2010, MRI and PET image fusion by combining IHS and retina-inspired models. Information Fusion, 11, 114–123.
7. Huang, S.-G., Wavelet for image fusion (Graduate Institute of Communication Engineering & Department of Electrical Engineering, National Taiwan University). http://djj.ee.ntu.edu.tw/Tutorial_Wavelet%20for%20Image%20Fusion.pdf.
8. Li, S., Yang, B., and Hu, J., 2011, Performance comparison of different multi-resolution transforms for image fusion. Information Fusion, 12, 74–84.
9. Yang, Y., 2011, Performing wavelet based image fusion through different integration schemes. International Journal of Digital Content Technology and its Applications, 5, 156–166.
10. Lu, B., Wang, H., and Miao, C., 2011, Medical image fusion with adaptive local geometrical structure and wavelet transform. Procedia Environmental Sciences, 8, 262–269.
11. Yang, L., Guo, B.L., and Ni, W., 2008, Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform. Neurocomputing, 72, 203–211.
12. Islam, M.M., Zhang, D., and Lu, G., Rotation invariant curvelet features for texture image retrieval (Gippsland School of Information Technology, Monash University, VIC 3842, Australia).
13. Alparone, L., Baronti, S., Garzelli, A., and Nencini, F., 2005, The curvelet transform for fusion of very-high resolution multispectral and panchromatic images. Global Developments in Environmental Earth Observation from Space, 2, 95–102.
14. Wang, P., Tian, H., and Zheng, W., 2013, A novel image fusion method based on FRFT-NSCT. Mathematical Problems in Engineering, vol. 2013, Article ID 408232, 9 pages. doi:10.1155/2013/408232.
15. Das, S., and Kundu, M.K., 2012, NSCT-based multimodal medical image fusion using pulse-coupled neural network and modified spatial frequency. Medical & Biological Engineering & Computing, 50, 1105–1114.
16. Wang, J., Lai, S., and Li, M., 2012, Improved image fusion method based on NSCT and accelerated NMF. Sensors, 12, 5872–5887.
17. Savitha, V., Kadhambari, T., and Sheeba, R., 2014, Multimodality medical image fusion using NSCT. IJREAT International Journal of Research in Engineering & Advanced Technology, 1. http://www.ijreat.org/Papers%202013/Issue5/IJREATV1I6013.pdf.
18. Li, T., and Wang, Y., 2010, Multiscaled combination of MR and SPECT images in neuroimaging: A simplex method based variable-weight fusion (Department of Electronic Engineering, Fudan University), pp. 31–39.
19. Li, T., and Wang, Y., 2011, Biological image fusion using a NSCT based variable-weight method. Information Fusion, 12, 85–92.
20. Do, M.N., and Vetterli, M., 2005, The contourlet transform: An efficient directional multiresolution image representation. IEEE Transactions on Image Processing, 14, 2091–2106.
21. da Cunha, A.L., Zhou, J., and Do, M.N., 2006, The nonsubsampled contourlet transform: Theory, design, and applications. IEEE Transactions on Image Processing, 15, 3089–3101.
22. Wang, L., Li, B., and Tian, L.F., 2012, Multi-modal medical image fusion using the inter-scale and intra-scale dependencies between image shift-invariant shearlet coefficients. Information Fusion, 17, 1–9.
23. Dyla, M.H.O.M., and Tairi, H., 2012, Multi-focus image fusion scheme using a combination of nonsubsampled contourlet transform and an image decomposition model. Journal of Theoretical and Applied Information Technology, 38, 136–144.
