Magnetic Resonance Imaging 33 (2015) 624–634


MR image reconstruction with block sparsity and iterative support detection

Yu Han a, Huiqian Du a,⁎, Wenbo Mei a, Liping Fang b

a School of Information and Electronics, Beijing Institute of Technology, Beijing, China
b School of Mathematics, Beijing Institute of Technology, Beijing, China

Article history: Received 8 May 2014; Revised 18 December 2014; Accepted 10 January 2015

Keywords: MR image reconstruction; Union-of-subspaces; Block sparsity; Iterative support detection

Abstract

This work aims to develop a novel magnetic resonance (MR) image reconstruction approach motivated by the recently proposed sampling framework with union-of-subspaces model (SUoS). Based on SUoS, we propose a mathematical formalism that effectively integrates a block sparsity constraint and support information estimated in an iterative fashion. The resulting optimization problem consists of a data fidelity term and a support detection based block sparsity (SDBS) promoting term penalizing entries within the complement of the estimated support. We provide optional strategies for block assignment, and we also derive unique and robust recovery conditions in terms of the structured restricted isometry property (RIP), namely the block-RIP. The block-RIP constant we derive is lower than that of the previous structured sparse method, which leads to a reduction in the number of measurements. Simulation results for reconstructing individual and multiple T1/T2-weighted images demonstrate consistency with our theoretical claims and show considerable improvement over methods using only block sparsity or support information.

© 2015 Elsevier Inc. All rights reserved.

1. Introduction

Magnetic resonance imaging (MRI) is widely used for medical diagnosis on account of its desirable features. However, the speed of MRI has always been a limitation for various applications [1–3]. Compressed sensing (CS) based methods have demonstrated great potential in accelerating MRI data acquisition by exploiting sparsity and have enabled accurate reconstruction from undersampled k-space data [4–6]. Lustig et al. first applied CS by sparsifying MR images using wavelets and total variation regularization for high-quality reconstructions [7]. Since then, a series of CS-MRI approaches have emerged [8–10]. Some aim to find sparser representations; for example, in Ref. [11] over-complete contourlets were employed as a new sparsifying transform for the recovery of curve-like MR images. Others focus on better surrogate functions to enforce sparsity; in Ref. [12], the lp quasi-norm (0 < p < 1) was put forward as an alternative to the l1-norm for MR image reconstruction.

The methods mentioned above exploit only the sparsity implicit in MR images. Beyond sparsity, researchers have proposed incorporating additional prior knowledge to improve reconstruction quality. Vaswani et al. established modified compressed sensing (MCS), using an l1 penalty over entries within the complement of a support known a priori [13,14]. Stojnic et al. studied the block-sparse

⁎ Corresponding author. E-mail address: [email protected] (H. Du).
http://dx.doi.org/10.1016/j.mri.2015.01.011
0730-725X/© 2015 Elsevier Inc. All rights reserved.

model, in which the signal and the measurement matrix are divided into blocks [15,16]. Prieto et al. applied group sparsity to elements in the pixel domain [17,18].

The main theme of this paper is a new MR image reconstruction method motivated by the sampling framework with union-of-subspaces model (SUoS). Distinct from traditional sampling theorems, this nonlinear framework considers images coming from a union of subspaces instead of a single vector space and enables structural modeling of images in addition to sparsity [19–23]. Our method employs an iterative strategy, alternating between wavelet coefficient estimation and subspace identification, namely support detection. At each iteration, the image is reconstructed via mixed l2–l1 norm minimization in the wavelet transform domain. We take advantage of the subspace information estimated in the previous iteration and impose structured sparse features on the complementary subspace in a block manner. Put differently, we exploit a support detection based block sparsity (SDBS) penalty over entries within the complement of the estimated support. Subsequently, the subspace where the image resides is updated and used in the next iteration. Furthermore, we provide optional strategies for block assignment and derive unique and robust recovery conditions in terms of the structured restricted isometry property (RIP), namely the block-RIP. The block-RIP constant δ_{k+s} we derive is lower than the constant δ_{2k} obtained for the previous structured sparse method; here, k denotes the block sparsity of the signal, and s < k corresponds to the block sparsity over the complementary support. This consequently


guarantees a better reconstruction under the same sampling ratio, or a lower sampling rate to achieve a given reconstruction quality. We conduct simulations on both individual and multiple T1/T2-weighted brain MRI data, demonstrating that the combination of the structured sparse feature and the support information can effectively reduce the number of measurements required without degrading the reconstruction quality. By comparison, the performance of our method is superior to methods using only block sparsity or support information.

The remainder of this paper proceeds as follows. Section 2 briefly overviews work on CS-MRI, SUoS and the block-sparse model, laying out the basic concepts that we extend in our approach. In Section 3, a detailed description of the proposed method is given. Section 4 presents simulation results which validate our theoretical claims. Finally, conclusions are summarized in Section 5.

Notation: ‖·‖_p denotes the l_p-norm with p ∈ {0, 1, 2}, where ‖x‖_0 is the l0 quasi-norm counting the number of nonzeros in x. (α)_T denotes the sub-vector of α containing the elements with indices in T. For a matrix A, we define A_T as the sub-matrix with |T| columns, formed by the columns of A picked by T, where |T| represents the cardinality of T.

2. Preliminaries

2.1. CS-MRI

In the context of CS, an image, vectorized as x ∈ ℝ^N, is reconstructed from undersampled k-space measurements y = F_u x ∈ ℂ^M, where F_u ∈ ℂ^{M×N} (M ≪ N) denotes the partially sampled Fourier operator. By representing x in an appropriate wavelet basis Ψ = {ψ_i}_{i=1}^N, we obtain the K-sparse wavelet coefficient sequence α ∈ ℝ^N. The data acquisition is then formulated as

y = F_u x = F_u Ψα = Aα.  (1)

Based on the sparsity assumption, one can search for the sparsest estimate that agrees with the measurements via l0-minimization

min_α ‖α‖_0  subject to  y = Aα.  (2)

As we know, Eq. (2) is in general an NP-hard problem, which prompts us to search for effective alternatives that solve it approximately. A natural relaxation is to rewrite the l0-norm reconstruction as the basis pursuit (BP) problem

min_α ‖α‖_1  subject to  y = Aα.  (3)
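In practice, problems like Eq. (3) are often handled through their unconstrained Lagrangian form and solved by proximal methods. A minimal sketch using the iterative soft-thresholding algorithm (ISTA) on a synthetic sparse recovery problem; the Gaussian operator and all sizes are illustrative, not the paper's partially sampled Fourier setup:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=2000):
    """Minimize 0.5 * ||y - A a||_2^2 + lam * ||a||_1 by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(A.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a + A.T @ (y - A @ a) / L, lam / L)
    return a

rng = np.random.default_rng(0)
N, M, K = 128, 64, 5                        # toy sizes, not the paper's
A = rng.standard_normal((M, N)) / np.sqrt(M)
alpha_true = np.zeros(N)
alpha_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ alpha_true                          # noiseless measurements
alpha_hat = ista(A, y, lam=1e-3)
print("relative error:", np.linalg.norm(alpha_hat - alpha_true) / np.linalg.norm(alpha_true))
```

With a small λ this closely approximates the equality-constrained BP solution; faster variants (FISTA, ADMM) are typically used for image-sized problems.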

Owing to its convexity, Eq. (3) is computationally feasible via optimization-based approaches [24–26].

2.2. SUoS

The SUoS can be viewed as a unification of a variety of existing sampling theories such as CS and finite rate of innovation (FRI) [27,28]. As a more general framework, SUoS extends the traditional sampling theorem by assuming that signals, including piecewise polynomials and sparse representations, come from a union of K-dimensional subspaces within ℝ^N [19]:

U = ∪_i V_i.  (4)

Here, each V_i is a canonical subspace associated with a specific set of K nonzeros. Given the location set, x is restricted to a K-dimensional subspace. In most cases, however, the nonzero locations cannot be determined: there are (N choose K) possible subspaces where x may belong. It is noteworthy that this union model is no longer closed under linear operations, which means that the sum of two signals from U may not be in U. By taking additional structural information into account, each subspace V_i can be expressed in terms of block subspaces [21]

V_i = ⊕_{j∈J, |J|=k} A_j,  (5)

where {A_j | j ∈ J, J ⊆ {1, …, m}} is a given set of low-dimensional subspaces, and k is regarded as the structured sparsity. Clearly, each V_i corresponds to a different choice of k subspaces that comprise the sparse sum. Structured sparsity reduces to the standard sparse case when the dimensionality of A_j equals 1 for all j.

2.3. Block-sparse model

The above-mentioned structured union-of-subspaces leads to the block-sparse model, in which the l2 norm of certain blocks (sub-vectors) is zero. We assume that the N-dimensional α ∈ U is a block-sparse vector residing in a union of k out of m subspaces, based on Eqs. (4) and (5). The dimensionality of A_i is d_i = dim(A_i), and thus N = Σ_{i=1}^m d_i. According to Ref. [21], we obtain a corresponding sub-vector α[i] by projecting α onto A_i. α is then expressed in a block-wise fashion

α = [α_1 ⋯ α_{d_1}, …, α_{N−d_m+1} ⋯ α_N]^T,  (6)

where the first d_1 entries form (α[1])^T and the last d_m entries form (α[m])^T,

and α is reconstructed via mixed norm minimization

min_α ‖α‖_{2,I}  subject to  y = Aα,  (7)
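The mixed l2–l1 norm of Eq. (7), and its proximal operator (block soft-thresholding, the building block of iterative solvers for such objectives), can be sketched for a uniform block size d; the toy vector below is illustrative:

```python
import numpy as np

def mixed_l21_norm(alpha, d):
    """||alpha||_{2,I}: sum of l2 norms of consecutive length-d blocks."""
    blocks = alpha.reshape(-1, d)
    return np.linalg.norm(blocks, axis=1).sum()

def block_soft_threshold(alpha, d, t):
    """Proximal operator of t * ||.||_{2,I}: shrink each block toward zero."""
    blocks = alpha.reshape(-1, d)
    norms = np.linalg.norm(blocks, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return (blocks * scale).ravel()

alpha = np.array([3.0, 4.0, 0.0, 0.0, 0.3, 0.4])
print(mixed_l21_norm(alpha, d=2))   # blocks (3,4),(0,0),(0.3,0.4): 5 + 0 + 0.5
```

Blocks whose l2 norm falls below the threshold are zeroed as a whole, which is exactly how the mixed norm promotes block sparsity rather than entry-wise sparsity.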

In Eq. (7), ‖α‖_{2,I} = Σ_{i=1}^m ‖α[i]‖_2 denotes the mixed l2–l1 norm over the block index set I. A natural extension of the standard RIP to the block-sparse case yields the block-RIP: a matrix A satisfies the block-RIP with constant δ_{k|I} if for any block k-sparse vector α

(1 − δ_{k|I}) ‖α‖_2^2 ≤ ‖Aα‖_2^2 ≤ (1 + δ_{k|I}) ‖α‖_2^2  (8)

holds. For ease of notation, we drop the index set I from the subscript and write δ_k when there is no confusion in the rest of this paper.

3. Method

In this section, we propose an improved MR image reconstruction method which is capable of using block-sparse features and updating the support information iteratively. Two optional strategies for the block assignment are presented. We also give a general algorithmic framework. Additionally, illustrative analyses of the uniqueness and robustness conditions are considered.

3.1. Proposed model

In line with the structured SUoS, we establish a more flexible reconstruction model, named SDBS:

min_α ‖(α)_{T̃^c}‖_{2,I}  subject to  y = Aα,  (9)



where α ∈ ℝ^N is the block-sparse wavelet coefficient sequence of the image x. We first define the structured support, namely the block support, as

S̃ = {i ∈ I : ‖α[i]‖_2 > 0},  (10)

which is the index set of blocks with nonzero l2 norms. The block support can be further decomposed as S̃ = T̃ ∪ Δ̃ \ Δ̃_d. Here, T̃ denotes the known block support of α, Δ̃ = S̃ \ T̃ is the unknown part, Δ̃_d = T̃ \ S̃ is the error in T̃, and "\" is the set difference operator. Obviously, we have T̃ ∩ Δ̃ = ∅ and Δ̃_d ⊆ T̃. T̃^c represents the complement of T̃. It is shown through theoretical analysis and numerical study [13,29] that reconstructing signals by constraining the sparsity over the complement of the known support, Eq. (11), requires significantly fewer measurements than the BP problem Eq. (3):

min_α ‖(α)_{T^c}‖_1  subject to  y = Aα.  (11)
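At the block level, the corresponding SDBS penalty sums block norms only outside a known block support T̃. A sketch with uniform block size d (an illustrative simplification; block indexing here is hypothetical, not the paper's wavelet indexing):

```python
import numpy as np

def sdbs_penalty(alpha, d, known_blocks):
    """||(alpha)_{T^c}||_{2,I}: sum of l2 norms over the blocks NOT in the
    known block support (uniform block size d, illustrative indexing)."""
    blocks = alpha.reshape(-1, d)
    mask = np.ones(blocks.shape[0], dtype=bool)
    mask[list(known_blocks)] = False          # exclude known-support blocks
    return np.linalg.norm(blocks[mask], axis=1).sum()

alpha = np.array([3.0, 4.0, 1.0, 0.0, 0.3, 0.4])
# with block 0 known a priori, only blocks 1 and 2 are penalized:
val = sdbs_penalty(alpha, d=2, known_blocks={0})
print(val)   # 1.0 + 0.5 = 1.5
```

With `known_blocks` empty the penalty reduces to the full mixed norm of Eq. (7), mirroring how Eq. (9) reduces to the standard block-sparse problem without prior support.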

We extend this attractive idea to the block-sparse level, thereby proposing Eq. (9). Our aim is to find the estimate whose block support contains the fewest new additions to the known part, while the data fidelity constraint is satisfied simultaneously. The flexible reconstruction Eq. (9) reduces to the standard block-sparse case Eq. (7) if no prior support knowledge is available, i.e., T̃ = ∅. On the other hand, given that d_i = 1 for all blocks, it becomes MCS Eq. (11), which admits a unique minimizer coinciding exactly with the ground truth under proper conditions ([13], Theorem 1).

3.2. Block assignment

In our method, we propose to assign the wavelet coefficients within the complement of the known block support into individual blocks. In this subsection, two optional block assignment strategies are provided.

3.2.1. Reordering-based strategy

We should note that the wavelet coefficients of an MR image in most cases do not exhibit the block-sparse property directly. To this end, a preprocessing step is necessary before applying the block sparsity constraint to the transformed coefficients. We adopt a reordering operation which gives the wavelet coefficient sequence a better match to the block-sparse setting: the block sparsity constraint is applied to the coefficients sorted by intensity in descending order. Put differently, reordering the wavelet coefficients can be regarded as multiplying the coefficients with a reordering matrix R, a permutation matrix of ones and zeros. Thus, the measurement matrix in Eq. (9) turns into A = F_u Ψ R. Without loss of generality, we take α to be the wavelet coefficient sequence after reordering. The transformed coefficients in consecutive locations within the complementary support are then assigned into individual blocks with a uniform block size d. By simply assuming d_i = d for all i, we have N = dm. In this case, the mixed l2–l1 norm in Eq. (9) is given as

‖(α)_{T̃^c}‖_{2,I} = Σ_{i∈T̃^c} ‖α[i]‖_2.  (12)
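A minimal sketch of the reordering step: the permutation matrix R is represented by an index vector, and blocks of uniform size d are formed from the magnitude-sorted coefficients (toy data, not a real wavelet sequence):

```python
import numpy as np

def reorder_and_block(alpha, d):
    """Sort coefficients by magnitude (descending) so that consecutive
    entries have similar intensity, then group them into uniform blocks of
    size d. The permutation (the matrix R as an index vector) is returned
    alongside the blocks."""
    perm = np.argsort(-np.abs(alpha))        # descending-intensity order
    blocks = alpha[perm].reshape(-1, d)
    return perm, blocks

alpha = np.array([0.1, 5.0, 0.0, 3.0, 0.2, 4.0, 0.0, 0.0])
perm, blocks = reorder_and_block(alpha, d=2)
print(blocks)   # the large coefficients concentrate in the leading blocks
```

After reordering, the significant coefficients occupy a few leading blocks, so the trailing blocks have small l2 norms and the sequence matches the block-sparse setting much better.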

The cardinality of the known support and the block size are supposed to be specified in advance.

3.2.2. Tree-based strategy

As we know, the wavelet coefficients of a piecewise smooth signal (such as an MR image) exhibit a tree structure. There exist dependencies between parent and child coefficients; that is, the intensities of parent–child pairs at adjacent scales tend to be simultaneously high or low. So if we group the coefficients in a parent–child manner, they constitute overlapping blocks. Since we only constrain coefficients within the complement of the known support, there exists a special case in which the index of some coefficient is contained in the complement while the indices of its parent and children are not. To this end, we define an index set for these isolated coefficients

Ω = {i ∈ T^c | p(i) ∈ T, c(i) ⊂ T}.  (13)

Here, p(i) is the index of the parent coefficient of α_i, and c(i) is the index set of its children. The mixed norm is then expressed as

‖(α)_{T^c}‖_{2,I} = Σ_{i∈T^c\Ω, p(i)∈T^c} ‖(α_i, α_{p(i)})‖_2 + ‖α_Ω‖_1,  (14)
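For a 1-D dyadic wavelet tree with the illustrative indexing p(i) = ⌊i/2⌋ (an assumption for this sketch; 2-D wavelet trees are quadtrees with four children per parent), the isolated set Ω of Eq. (13) can be computed as:

```python
def parent(i):
    """Parent index in a 1-D dyadic wavelet tree (illustrative indexing)."""
    return i // 2

def children(i):
    """Child indices of coefficient i at the next finer scale."""
    return {2 * i, 2 * i + 1}

def isolated_set(T_c):
    """The set Omega of Eq. (13): indices in the complement T^c whose
    parent and children all lie in T, i.e. outside T^c."""
    return {i for i in T_c
            if parent(i) not in T_c and not (children(i) & T_c)}

# toy complement of the known support: 3 and 6 form a parent-child pair,
# while 20 has no relatives in T^c and is therefore isolated
print(isolated_set({3, 6, 20}))
```

Coefficients in Ω get a plain l1 penalty in Eq. (14), while the remaining complement coefficients are penalized jointly with their parents.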

In Eq. (14), the notation (α_i, α_{p(i)}) stands for the sub-vector of α projected onto a parent–child pair. The first term in Eq. (14) is the mixed l2–l1 norm of all parent–child pairs with indices in T^c; the second term is the l1 norm of the isolated coefficients.

3.3. Algorithmic framework

In our method, we employ an iterative strategy that alternates between wavelet coefficient estimation and support detection. This iterative strategy provides a self-correcting capacity which makes our method insensitive to a handful of errors present in the support. The initial support information T̃_0 can be self-provided; that is, a conventional CS-based method is applied to obtain an image estimate providing the support used at the first iteration. In the multi-contrast setting, it can be extracted from an image of the same anatomical cross section with a different contrast. In the estimation step, we employ a simple conjugate gradient algorithm. As for the mixed l2–l1 norm term involved in the objective function, we assign elements within T̃^c into individual blocks using our two optional strategies. In the detection step, we adopt a truncated strategy, retaining the indices of the largest L wavelet coefficients as the support set. The iterative strategy contributes to the consistency between the support-based setting and the true data structure. We first rewrite our reconstruction model in an unconstrained minimization form

α̂ = argmin_α ‖y − Aα‖_2^2 + λ ‖(α)_{T̃^c}‖_{2,I},  (15)

where the parameter λ > 0 balances the tradeoff between the data fidelity term and the regularization term. The algorithmic framework of the proposed method is then summarized as follows.

Input: undersampled k-space measurements y, regularization parameter λ, block size d, support set size L and maximum iteration number t_max.
1. Initialization: initialize the support information T̃_0 and set the iteration number t = 1.
2. Iteration: while the stopping criterion is not reached, do
   (a) Estimation: solve Eq. (15) for the wavelet coefficient estimate α̂_t using the support information T̃_{t−1};
   (b) Support detection: update the support information T̃_t based on α̂_t;
   (c) t = t + 1.
3. The image is reconstructed by x̂ = Ψα̂_t.

Output: x̂, the reconstructed MR image.

Based on the fact that smooth MR images should have relatively small total variation, a TV regularization term is usually introduced to enforce spatial homogeneity in practical applications, which yields SDBS-TV:

α̂ = argmin_α ‖y − Aα‖_2^2 + λ_1 ‖(α)_{T̃^c}‖_{2,I} + λ_2 ‖Ψα‖_TV,  (16)

where λ_1, λ_2 are two positive weighting factors that control the influence of the two constraints on the final solution.
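The alternation of Section 3.3 between estimation and truncated support detection can be sketched under simplifying assumptions: uniform blocks, a proximal-gradient inner solver in place of the paper's conjugate gradient, no TV term, and a synthetic Gaussian operator instead of the partial Fourier transform:

```python
import numpy as np

def block_soft(v, d, t):
    """Block soft-thresholding: proximal operator of t * ||.||_{2,I}."""
    b = v.reshape(-1, d)
    n = np.linalg.norm(b, axis=1, keepdims=True)
    return (b * np.maximum(1.0 - t / np.maximum(n, 1e-12), 0.0)).ravel()

def sdbs_reconstruct(A, y, d, L, lam, outer=3, inner=300):
    """Alternate (a) coefficient estimation for Eq. (15) by proximal
    gradient with (b) truncated support detection (keep the L largest
    entries). Entries inside the detected support are left unpenalized,
    a per-entry simplification of the block penalty."""
    N = A.shape[1]
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant
    alpha = np.zeros(N)
    support = np.zeros(N, dtype=bool)           # empty initial support
    for _ in range(outer):
        for _ in range(inner):                  # (a) estimation
            g = alpha + step * (A.T @ (y - A @ alpha))
            alpha = np.where(support, g, block_soft(g, d, lam * step))
        support = np.zeros(N, dtype=bool)       # (b) support detection
        support[np.argsort(-np.abs(alpha))[:L]] = True
    return alpha

rng = np.random.default_rng(2)
N, M, d = 60, 40, 5                             # toy sizes
A = rng.standard_normal((M, N)) / np.sqrt(M)
alpha_true = np.zeros(N)
alpha_true[:2 * d] = rng.uniform(1.0, 2.0, 2 * d)   # two active blocks
y = A @ alpha_true
alpha_hat = sdbs_reconstruct(A, y, d=d, L=2 * d, lam=0.05)
print("relative error:", np.linalg.norm(alpha_hat - alpha_true) / np.linalg.norm(alpha_true))
```

Once the detected support matches the true one, its entries are no longer shrunk, so the estimate loses the shrinkage bias of plain block-sparse recovery, which is the self-correcting behavior described above.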


3.4. Uniqueness and robustness

With the establishment of the reconstruction model, a natural question is under which conditions a block-sparse vector is uniquely determined. Theorem 1 gives a necessary and sufficient condition, pertaining to the null-space of the measurement matrix, for a unique block-sparse solution consistent with the measurements. For completeness, the theorem together with its proof can be found in Appendix A. We also give a theorem (Theorem 2) demonstrating that the solution to Eq. (9) is the ground truth; this condition is stated solely in terms of the block-RIP. Theorem 2 states that any block-k sparse vector can be exactly recovered from its measurements as long as A satisfies the block-RIP Eq. (8) with a suitable constant. The block-RIP constant we derive is lower than that of the previous structured sparse method, which consequently guarantees a better reconstruction under the same sampling ratio, or alternatively a lower sampling ratio to achieve a given reconstruction quality. The proof is inspired by that in [21]; however, some modifications are necessary to adapt the results to our case (see Appendix B). We then turn to the case in which the measurements are contaminated by noise or the vector is not strictly block-k sparse. The noisy measurements are

y = Aα + z,  (17)

where z denotes bounded noise, ‖z‖_2 ≤ ϵ. The optimization problem is cast as

min_α ‖(α)_{T̃^c}‖_{2,I}  subject to  ‖y − Aα‖_2 ≤ ϵ.  (18)
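Block-RIP constants such as δ_{k+s} and δ_{2k} cannot be computed exactly for realistic sizes (the definition ranges over all block-sparse supports), but a Monte-Carlo probe gives an empirical lower bound; the Gaussian matrix and all sizes below are illustrative assumptions:

```python
import numpy as np

def empirical_block_rip(A, d, k, trials=1000, seed=0):
    """Monte-Carlo probe of the block-RIP constant delta_{k|I} of Eq. (8):
    the largest observed deviation of ||A a||_2^2 from 1 over random
    unit-norm block-k-sparse vectors with uniform block size d. This only
    lower-bounds the true constant, which is a maximum over all such
    vectors."""
    rng = np.random.default_rng(seed)
    m = A.shape[1] // d                       # number of blocks
    worst = 0.0
    for _ in range(trials):
        a = np.zeros(A.shape[1])
        for j in rng.choice(m, size=k, replace=False):   # k active blocks
            a[j * d:(j + 1) * d] = rng.standard_normal(d)
        a /= np.linalg.norm(a)
        worst = max(worst, abs(np.linalg.norm(A @ a) ** 2 - 1.0))
    return worst

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 120)) / np.sqrt(80)   # illustrative sizes
delta_lb = empirical_block_rip(A, d=4, k=3)
print(delta_lb)
```

Because k + s < 2k when s < k, a recovery condition on δ_{k+s} constrains fewer active blocks than one on δ_{2k}, which is why it can be met with fewer measurements.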

We derive Theorem 3, indicating that the robust recovery error is bounded by the best block-k term approximation error and the noise level. The lengthy proof is similar to that of Theorem 2 (see Appendix C).

4. Results and discussion

In this section, we evaluated the performance of the proposed method using practical MRI data. The capabilities of our method were demonstrated under a variety of sampling ratios. To remove aliasing interference without degrading the image quality, we employed the variable density undersampling pattern which is usually used in the ky–kz plane for 3D imaging. Data acquisition was simulated by undersampling the 2D discrete Fourier transform of the

Fig. 1. Fully sampled image (Brain-1) (a), sampling mask (b), reconstructed images and error maps by MCS (c)(d), BS (e)(f), OGL (g)(h), Re-SDBS (i)(j) and Tr-SDBS (k)(l).



image according to the pattern. We selected the wavelet transform (Daubechies-2) as the sparsifying transform through the simulations. All simulations were carried out on Windows 7 and Matlab R2011b running on a PC with an Intel Core i5 CPU 3.2GHz and 4GB of memory. The reconstruction quality was quantified using the peak-signal-to-noise ratio (PSNR) and the relative error (Err) which indicates the difference between the estimate and the ground truth.
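The paper does not spell out its exact formulas for Err and PSNR; one common pair of definitions, used here as an assumption, is:

```python
import numpy as np

def relative_error(x_hat, x):
    """Err = ||x_hat - x||_2 / ||x||_2 (a common definition; the paper's
    exact formula is not stated, so this is an assumption)."""
    return np.linalg.norm(x_hat - x) / np.linalg.norm(x)

def psnr(x_hat, x):
    """PSNR in dB against the peak intensity of the reference image."""
    mse = np.mean((x_hat - x) ** 2)
    return 10.0 * np.log10(x.max() ** 2 / mse)

x = np.ones((4, 4))          # toy reference "image"
x_hat = x + 0.1              # uniformly biased estimate
print(relative_error(x_hat, x), psnr(x_hat, x))
```

Note that PSNR depends on the chosen peak value (image maximum here; a fixed dynamic range such as 255 is also common), so reported values are only comparable under the same convention.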

4.1. Individual T1/T2-weighted image reconstruction

We first tested our approach without a TV regularization term; that is, the evaluation is purely based on the mixed l2–l1 norm term penalizing entries within the complement of the estimated support, Eq. (15). To perform fair comparisons, all TV terms involved in the other methods were also removed. In vivo T1 brain data were provided by Prof. N. Schuff at the UCSF School of Medicine. The fully sampled brain image (Brain-1, 256 × 256) and the variable density pattern of 30% sampling ratio are shown in Fig. 1(a) and (b). The parameters in our method were set as follows: λ = 5 × 10^−3, d = 50; the support size L_1 = 9000 for the reordering-based strategy (Re-SDBS), and L_2 = 6000 for the tree-based strategy (Tr-SDBS). Three other reconstruction methods were considered: MCS Eq. (11), block-sparse reconstruction without prior support (BS) Eq. (7), and a similar tree-based reconstruction method, overlap group lasso (OGL) [30]. OGL groups the wavelet coefficients in a parent–child fashion but does not introduce support knowledge. The reconstructed images and the error maps obtained by the different methods are also given in Fig. 1. The relative errors and PSNR values are presented in Table 1. Fig. 3(a) gives the curves of the relative errors as a function of the sampling ratio; the errors have been averaged over 100 tests, each with a new pattern. From the results, our proposed method, which combines the structured sparse features and the support information, is far better than the other methods when a low percentage of data is sampled. Under high sampling ratios, all the methods perform well, and the reconstruction quality of the proposed method with the tree-based strategy degrades; however, Re-SDBS still performs the best.

Table 1. Comparison of relative errors and PSNRs.

Test image   Method       Err (%)   PSNR (dB)
Brain-1      MCS          6.66      33.4794
             BS           6.46      33.9388
             OGL          7.38      33.5352
             Re-SDBS      4.22      37.1949
             Tr-SDBS      5.20      35.7059
Brain-1      MCS-TV       4.66      36.8905
             BS-TV        5.89      34.8019
             OGL-TV       5.35      35.8585
             Re-SDBS-TV   3.71      38.4987
             Tr-SDBS-TV   4.49      37.2310
Brain-2      MCS-TV       7.09      32.6856
             BS-TV        8.01      31.3166
             SDBS-TV      5.83      34.0876

Fig. 2. Fully sampled image (Brain-1) (a), sampling mask (b), reconstructed images and error maps by MCS-TV (c)(d), BS-TV (e)(f), OGL-TV (g)(h), Re-SDBS-TV (i)(j) and Tr-SDBS-TV (k)(l).



Fig. 3. Relative errors of reconstructed images versus sampling ratio by different methods without TV (a) and with TV (b).

Then, we considered the reconstruction problem containing a TV regularization term, Eq. (16). The methods used for comparison included TV terms as well. We again tested Brain-1 using a pattern of 30% sampling ratio. Parameters in Eq. (16) were set as λ_1 = 5 × 10^−3, λ_2 = 3 × 10^−4, d = 50 and L_1 = 8000, L_2 = 6000. Fig. 2 displays the reconstructed images and the error maps produced by the different methods. From Table 1 and the relative error curves shown in Fig. 3(b), we see that the proposed method is further improved by combining it with TV regularization and outperforms the other methods under almost all sampling ratios. For Tr-SDBS, however, since we introduce support information, there exists a set of isolated coefficients making the parent–child relationship difficult to track. In addition, the blocks are assigned in an overlapping manner; thus, whenever a wavelet coefficient is involved in more than one block, the corresponding index is inevitably duplicated. The measurement matrix also needs to be replicated, which complicates the partial Fourier transform and consequently slows down the algorithm. The superiority is thus weakened by the high time cost. In view

of the above-mentioned reasons, we will focus only on SDBS with the reordering-based strategy hereinafter. Fig. 4 shows the simulation results for another set of practical MRI data under a 30% sampling ratio. The fully sampled T2-weighted brain image (Brain-2, 256 × 256) was acquired on a 3.0 T SIEMENS scanner with a spin-echo sequence and parameters TR = 4000 ms, TE = 91 ms, slice thickness = 5.0 mm, flip angle = 120° and field of view (FOV) = 176 × 220 mm². We selected the parameters as λ_1 = 5 × 10^−3, λ_2 = 3 × 10^−4, d = 50 and L = 8000. Clearly, the image reconstructed by our approach has detailed textures and is the closest to the original.

4.2. Multi-contrast image reconstruction

The above-mentioned simulations focused on the reconstruction of individual T1- or T2-weighted images. Our method is also suitable for the case in which a reference image is available. In this subsection, we evaluated the performance of our method on multi-contrast data (courtesy of Prof. Zhi-Pei Liang and Fan Lam at the University of Illinois at

Fig. 4. Fully sampled image (Brain-2) (a), sampling mask (e), reconstructed images and error maps by MCS-TV (b)(f), BS-TV (c)(g), SDBS-TV (d)(h).



Fig. 5. T2-weighted image (reference) (a), T1-weighted image (target) (b), sampling mask (c), reconstructed images and error maps by MCS-TV (d)(g), BS-TV (e)(h), SDBS-TV (f)(i).

Urbana-Champaign) acquired from a 3.0 T SIEMENS Trio scanner with a 12-channel head coil array. The matrix size was 128 × 128, FOV = 220 × 220 mm², slice thickness = 10.0 mm. We used a T2-weighted image with TR = 1000 ms, TE = 120 ms as a reference to provide the initial support, as shown in Fig. 5(a). The fully sampled T1-weighted image with TR = 800 ms, TE = 100 ms is the target, as shown in Fig. 5(b). The sampling mask of 30% sampling ratio is shown in Fig. 5(c). Reconstruction parameters were set as λ_1 = 5 × 10^−3, λ_2 = 3 × 10^−4, d = 30 and L = 2500. Fig. 5(d)–(i) presents the reconstructed images and the error maps. Table 2 gives the performance comparison between the different methods in terms of relative error and PSNR. The results clearly demonstrate the effectiveness of the proposed method. In practice, motion effects may result in inaccurate support selection; to alleviate this drawback, a motion compensation step is necessary [31,32].

4.3. Parameter evaluation

Based on the simulation results, we now evaluate the parameters involved in the proposed method. We first consider the regularization parameter λ and the block size d by testing Brain-1 under different sampling ratios. In order to show the benefit of the block-sparse structure, we removed the TV term, leaving only the proposed regularization term. The regularization parameter trades off the structured sparsity against data consistency. Fig. 6(a) displays the plots of PSNR values as a function of λ; the selected λ-values are marked with asterisks. It is observed that the optimal parameters under different sampling ratios are almost identical, lying between 10^−3 and 10^−2, which means

the optimal λ in our method is robust to the sampling ratio. One can also conduct parameter selection using the popular L-curve method [33]. Another advantage of the proposed method lies in its insensitivity to the block size, as shown in Fig. 6(b). The performance of our method approaches that of MCS as d decreases to 1, while the relative error tends to be stable as d increases; we recommend choosing d between 30 and 100. We then turn to the support set size L. The accuracy of the support is essential in that it affects the quality of the reconstruction. Table 3 gives the relative errors of both Brain-1 and Brain-2 with varying L. It can be seen from the results that the error fluctuates only slightly when L ranges from 4000 to 10000, which indicates the stability of our method with respect to the support size over a certain range. Empirically, we recommend choosing 5–15% of the image dimension N as the size of the support set. In addition, since an iterative strategy is adopted to update the support information, the proposed method has a self-correcting capacity making it less sensitive to a few errors present in the support. Based on our

Table 2. Comparison of relative errors and PSNRs.

Method    Err (%)   PSNR (dB)
MCS-TV    5.18      35.1840
BS-TV     5.76      34.1545
SDBS-TV   3.98      37.3731



Fig. 6. PSNR values versus regularization parameter λ for the reconstruction of Brain-1 under different sampling ratios. Asterisks indicate the selected λ-values (a). Relative errors of reconstructed images of Brain-1 with varying block size d under different sampling ratios (b).

simulation, three iterations for the outer loop and five iterations for the inner loop are sufficient. The support selection is also related to the sampling ratio: more wavelet coefficients should be selected for the support as the sampling ratio increases, thereby avoiding over-constraining entries within the complement. In our method, we employed only a simple truncation strategy; one can rely on more sophisticated approaches to choosing the support. We also tested the proposed method on Brain-1 and Brain-2 using radial and Cartesian patterns sampling 20% and 30% of k-space, respectively. From the results given in Table 4, we see that different patterns may affect the reconstruction quality; however, once the pattern is fixed, our method performs the best.

5. Conclusion

In this paper, we presented a new MR image reconstruction approach based upon the sampling framework with union-of-subspaces model. This approach aims at fusing the block-sparse property with the support information in the wavelet domain. Consequently, a modified sparsity-inducing regularization term is obtained, and a mixed l2–l1 norm minimization is iteratively applied for the image reconstruction. We derive unique and robust recovery conditions for the proposed optimization model relying on the notion of block-RIP. Simulation results for reconstructing individual and multiple T1/T2-weighted images consistently indicate the superior performance of our approach in terms of reconstruction accuracy in comparison with other reconstruction methods.

Acknowledgments

The authors sincerely thank Fan Lam at the University of Illinois at Urbana-Champaign for the valuable comments and suggestions on this work.

Table 3. Reconstruction errors (%) with varying support size L.

Test image   Support size L (×10^3)
             2      3      4      5      6      7      8      9      10
Brain-1      4.11   3.96   3.86   3.76   3.72   3.74   3.71   3.80   3.93
Brain-2      6.45   6.33   6.09   5.93   5.81   5.85   5.83   5.89   6.00

Table 4. PSNRs (dB) of the reconstructed images using radial and Cartesian k-space sampling.

Test image   Method    Radial    Cartesian
Brain-1      MCS-TV    32.2636   32.4447
             BS-TV     32.7421   33.5767
             SDBS-TV   34.5953   34.9249
Brain-2      MCS-TV    31.4621   31.9245
             BS-TV     30.8151   31.0265
             SDBS-TV   32.8413   33.8499

Appendix A. Theorem 1

Theorem 1. Given a block-sparse vector β whose block support is S̃ = T̃ ∪ Δ̃ \ Δ̃_d, suppose the measurement matrix A satisfies the block-RIP Eq. (8) with δ_{|T̃|} < 1, and consider reconstructing β from the data fidelity constraint y = Aβ by solving Eq. (9). β is its unique minimizer if and only if, for all nonzero w ∈ ℝ^N with A_{T̃^c}(w)_{T̃^c} = 0,

‖(w)_{Δ̃}‖_{2,I} < ‖(w)_{(T̃∪Δ̃)^c}‖_{2,I}.  (A.1)

Proof. First, we prove that if Eq. (A.1) is satisfied then β is the unique solution to Eq. (9). Let α be an arbitrary solution to Eq. (9) and divide it into three disjoint parts according to T̃, Δ̃ and (T̃∪Δ̃)^c. We have ‖β[i]‖_2 = 0 for i ∈ (T̃∪Δ̃)^c. Also, assume that (α)_{T̃^c} ≠ (β)_{T̃^c}. By the definition of the block support, we can write

‖(α)_{T̃^c}‖_{2,I} = Σ_{i∈T̃^c} ‖α[i]‖_2
= Σ_{i∈Δ̃} ‖α[i] − β[i] + β[i]‖_2 + Σ_{i∈(T̃∪Δ̃)^c} ‖α[i] − β[i]‖_2
≥ Σ_{i∈Δ̃} ‖β[i]‖_2 − Σ_{i∈Δ̃} ‖α[i] − β[i]‖_2 + Σ_{i∈(T̃∪Δ̃)^c} ‖α[i] − β[i]‖_2.  (A.2)

Since Aα = Aβ = y, we have A_{T̃^c}(α − β)_{T̃^c} = 0; that is, (α − β)_{T̃^c} lies in the null-space of A_{T̃^c}. Hence, according to Eq. (A.1), we have

    ‖(α − β)_{Δ̃}‖_{2,I} < ‖(α − β)_{(T̃∪Δ̃)^c}‖_{2,I},

which implies

    ‖(α)_{T̃^c}‖_{2,I} > Σ_{i∈Δ̃} ‖β[i]‖₂ = Σ_{i∈T̃^c} ‖β[i]‖₂ = ‖(β)_{T̃^c}‖_{2,I}.

This contradicts the fact that α is a solution to Eq. (9). Therefore, (α)_{T̃^c} = (β)_{T̃^c}. Note that A_{T̃}(α − β)_{T̃} = 0 and δ_{T̃} < 1, so A_{T̃} is full rank; then we conclude that (α)_{T̃} = (β)_{T̃}. In summary, α = β.

Now we prove the converse. Assume that there exists a w ≠ 0 with A_{T̃^c}(w)_{T̃^c} = 0 such that Eq. (A.1) does not hold. Let w = [0, w₁, w₂]^T, where w₁ contains the |Δ̃| blocks corresponding to Δ̃, while w₂ contains the |(T̃∪Δ̃)^c| blocks corresponding to (T̃∪Δ̃)^c. So we have ‖w₁‖_{2,I} = ‖(w)_{Δ̃}‖_{2,I} ≥ ‖(w)_{(T̃∪Δ̃)^c}‖_{2,I} = ‖w₂‖_{2,I}. Define β = [β_{T̃}, w₁, 0]^T as the true minimizer consistent with y, and choose α = [β_{T̃}, 0, −w₂]^T. Thus A(β − α) = A[0, w₁, w₂]^T = Aw = 0, i.e., Aα = y and ‖(β)_{T̃^c}‖_{2,I} = ‖w₁‖_{2,I} ≥ ‖w₂‖_{2,I} = ‖(α)_{T̃^c}‖_{2,I}. Since β is the true minimizer of Eq. (9), we have α = β. Then w₁ = 0 and w₂ = 0, which contradicts w ≠ 0.

Appendix B. Theorem 2

Theorem 2. Given a block-sparse vector β whose block support is S̃ = T̃ ∪ Δ̃, where |T̃∪Δ̃| = k denotes the block sparsity and the cardinality |Δ̃| = s, consider reconstructing β from the data fidelity constraint y = Aβ by solving Eq. (9). If the measurement matrix A satisfies the block-RIP Eq. (8) with δ_{k+s} < √2 − 1, there exists a unique block-k-sparse solution, which is equal to β.

Proof. Our purpose is to show that any solution α to Eq. (9) has to be equal to β. To this end, we assume that α = β + h; the ultimate goal is then to prove that h = 0, i.e., ‖h‖₂ = 0. We denote by I₀ ≜ T̃∪Δ̃ the indices of the block support. Then h can be split as

    h = Σ_{i=0}^{l−1} h_{I_i},    (B.1)

where each h_{I_i} (i ≥ 1) consists of s blocks with I_i ⊆ I₀^c; we choose I₁ as the set over which the norm of h is the largest, I₂ as the set with the second largest norm, and so on. Noting that y = Aα = Aβ, we obtain Ah = A(α − β) = 0. Note that

    ‖h‖₂ = ‖h_{I₀∪I₁} + h_{(I₀∪I₁)^c}‖₂ ≤ ‖h_{I₀∪I₁}‖₂ + ‖h_{(I₀∪I₁)^c}‖₂.    (B.2)

First, we prove ‖h_{(I₀∪I₁)^c}‖₂ ≤ ‖h_{I₀∪I₁}‖₂; second, we show that ‖h_{I₀∪I₁}‖₂ = 0, which completes the proof.

A: ‖h_{(I₀∪I₁)^c}‖₂ ≤ ‖h_{I₀∪I₁}‖₂

Note that

    ‖h_{(I₀∪I₁)^c}‖₂ = ‖Σ_{i=2}^{l−1} h_{I_i}‖₂ ≤ Σ_{i=2}^{l−1} ‖h_{I_i}‖₂,    (B.3)

and

    ‖h_{I_i}‖₂ ≤ s^{1/2} ‖h_{I_i}‖_{∞,I} ≤ s^{−1/2} ‖h_{I_{i−1}}‖_{2,I},  i ≥ 1.    (B.4)

Here, ‖α‖_{∞,I} = max_j ‖α[j]‖₂. Substituting Eq. (B.4) into Eq. (B.3) gives

    ‖h_{(I₀∪I₁)^c}‖₂ ≤ s^{−1/2} Σ_{i=1}^{l−2} ‖h_{I_i}‖_{2,I} ≤ s^{−1/2} Σ_{i=1}^{l−1} ‖h_{I_i}‖_{2,I} = s^{−1/2} ‖h_{I₀^c}‖_{2,I}.    (B.5)

The equality is due to the fact that h_{I₁}, …, h_{I_{l−1}} are nonzero on disjoint blocks. Since α is the solution to Eq. (9), we have ‖(β)_{T̃^c}‖_{2,I} ≥ ‖(α)_{T̃^c}‖_{2,I}. Then

    ‖(β)_{T̃^c}‖_{2,I} ≥ ‖(β + h)_{T̃^c}‖_{2,I} = ‖(β + h_{T̃} + h_{Δ̃} + h_{I₀^c})_{T̃^c}‖_{2,I} ≥ ‖(β)_{T̃^c}‖_{2,I} − ‖h_{Δ̃}‖_{2,I} + ‖h_{I₀^c}‖_{2,I},    (B.6)

which implies

    ‖h_{I₀^c}‖_{2,I} ≤ ‖h_{Δ̃}‖_{2,I} ≤ s^{1/2} ‖h_{Δ̃}‖₂.    (B.7)

The second inequality follows from the Cauchy–Schwarz inequality. Substituting Eq. (B.7) into Eq. (B.5) yields

    ‖h_{(I₀∪I₁)^c}‖₂ ≤ s^{−1/2} ‖h_{I₀^c}‖_{2,I} ≤ ‖h_{Δ̃}‖₂ ≤ ‖h_{I₀∪I₁}‖₂.    (B.8)

Therefore, the first part of the proof has been completed.

B: ‖h_{I₀∪I₁}‖₂ = 0

Recall that h = Σ_{i=0}^{l−1} h_{I_i} = h_{I₀∪I₁} + Σ_{i≥2} h_{I_i}. Then

    ‖Ah_{I₀∪I₁}‖₂² = ⟨Ah_{I₀∪I₁}, Ah_{I₀∪I₁}⟩ = ⟨Ah_{I₀∪I₁}, −Σ_{i≥2} Ah_{I_i}⟩ = −Σ_{i≥2} ⟨A(h_{I₀} + h_{I₁}), Ah_{I_i}⟩.    (B.9)

Using the parallelogram identity and the block-RIP, we have

    |⟨Ah_{I₀}, Ah_{I_i}⟩| ≤ δ_{|T̃|+2|Δ̃|} ‖h_{I₀}‖₂ ‖h_{I_i}‖₂ = δ_{k+s} ‖h_{I₀}‖₂ ‖h_{I_i}‖₂.    (B.10)

Similarly, |⟨Ah_{I₁}, Ah_{I_i}⟩| ≤ δ_{2s} ‖h_{I₁}‖₂ ‖h_{I_i}‖₂. Therefore,

    ‖Ah_{I₀∪I₁}‖₂² = |Σ_{i≥2} ⟨A(h_{I₀} + h_{I₁}), Ah_{I_i}⟩|
                   ≤ Σ_{i=2}^{l−1} (|⟨Ah_{I₀}, Ah_{I_i}⟩| + |⟨Ah_{I₁}, Ah_{I_i}⟩|)
                   ≤ Σ_{i=2}^{l−1} (δ_{k+s} ‖h_{I₀}‖₂ + δ_{2s} ‖h_{I₁}‖₂) ‖h_{I_i}‖₂
                   ≤ δ_{k+s} (‖h_{I₀}‖₂ + ‖h_{I₁}‖₂) Σ_{i=2}^{l−1} ‖h_{I_i}‖₂.    (B.11)

From the Cauchy–Schwarz inequality,

    ‖h_{I₀}‖₂ + ‖h_{I₁}‖₂ ≤ √2 √(‖h_{I₀}‖₂² + ‖h_{I₁}‖₂²) = √2 ‖h_{I₀∪I₁}‖₂.    (B.12)

Substituting into Eq. (B.11) and exploiting Eqs. (B.4), (B.5) and (B.7) gives

    ‖Ah_{I₀∪I₁}‖₂² ≤ √2 δ_{k+s} ‖h_{I₀∪I₁}‖₂ s^{−1/2} ‖h_{I₀^c}‖_{2,I}
                   ≤ √2 δ_{k+s} ‖h_{I₀∪I₁}‖₂ s^{−1/2} · s^{1/2} ‖h_{I₀∪I₁}‖₂
                   = √2 δ_{k+s} ‖h_{I₀∪I₁}‖₂².    (B.13)

Finally, by combining Eq. (B.13) with the block-RIP, we conclude that

    (1 − δ_{k+s}) ‖h_{I₀∪I₁}‖₂² ≤ ‖Ah_{I₀∪I₁}‖₂² ≤ √2 δ_{k+s} ‖h_{I₀∪I₁}‖₂².    (B.14)

Since δ_{k+s} < √2 − 1, Eq. (B.14) holds only if ‖h_{I₀∪I₁}‖₂ = 0, which completes the proof.
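The key step (B.4) relies on ordering the off-support blocks by decreasing norm: every block norm in I_i is then at most the average block norm over I_{i−1}, which gives ‖h_{I_i}‖₂ ≤ s^{1/2}‖h_{I_i}‖_{∞,I} ≤ s^{−1/2}‖h_{I_{i−1}}‖_{2,I}. A small NumPy check of this sorting argument on random per-block norms (the dimensions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, s = 24, 4                                 # 24 blocks, groups I_i of s = 4 blocks
block_norms = np.sort(rng.random(d))[::-1]   # per-block l2 norms, sorted decreasing

groups = block_norms.reshape(-1, s)          # rows play the role of I_1, I_2, ...
for i in range(1, groups.shape[0]):
    l2_i = np.linalg.norm(groups[i])         # ||h_{I_i}||_2
    linf_i = groups[i].max()                 # ||h_{I_i}||_{inf,I}
    l21_prev = groups[i - 1].sum()           # ||h_{I_{i-1}}||_{2,I}
    # the two inequalities of (B.4)
    assert l2_i <= np.sqrt(s) * linf_i + 1e-12
    assert np.sqrt(s) * linf_i <= l21_prev / np.sqrt(s) + 1e-12
```

Both inequalities hold for any grouping into sets of s blocks once the norms are non-increasing across groups, which is exactly how I₁, I₂, … are chosen in the proof.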

Appendix C. Theorem 3

Theorem 3. Suppose that y = Aβ + z is the noisy measurement of the vector β, whose block-k-sparse approximation is denoted by β^k. The block support of β^k is S̃ = T̃ ∪ Δ̃, where |T̃∪Δ̃| = k is the block sparsity and the cardinality |Δ̃| = s. Let α be the solution to Eq. (18). Then, if the measurement matrix A satisfies the block-RIP Eq. (8) with δ_{k+s} < √2 − 1,

    ‖β − α‖₂ ≤ [(2(√2 − 1)δ_{k+s} + 2) / (1 − (1 + √2)δ_{k+s})] s^{−1/2} ‖β − β^k‖_{2,I} + [4√(1 + δ_{k+s}) / (1 − (1 + √2)δ_{k+s})] ε.    (C.1)

Proof. We again assume that α = β + h. The ultimate goal now is to prove that ‖h‖₂ is bounded, instead of h = 0, in that the measurements are corrupted by noise and β is not exactly block-k-sparse. We prove ‖h_{(I₀∪I₁)^c}‖₂ ≤ ‖h_{I₀∪I₁}‖₂ + 2e₀ in the first part of the proof, where e₀ = s^{−1/2} ‖β − β_{I₀}‖_{2,I} = s^{−1/2} ‖β_{I₀^c}‖_{2,I}. In the second part, we show that ‖h_{I₀∪I₁}‖₂ is bounded. Note that β_{I₀} = β^k.

A: ‖h_{(I₀∪I₁)^c}‖₂ ≤ ‖h_{I₀∪I₁}‖₂ + 2e₀

Since α is the solution to Eq. (18),

    ‖(β)_{T̃^c}‖_{2,I} ≥ ‖(α)_{T̃^c}‖_{2,I} = ‖(β + h)_{T̃^c}‖_{2,I}
                      = ‖β_{Δ̃} + h_{Δ̃} + β_{I₀^c} + h_{I₀^c}‖_{2,I}
                      ≥ ‖β_{Δ̃}‖_{2,I} − ‖h_{Δ̃}‖_{2,I} + ‖h_{I₀^c}‖_{2,I} − ‖β_{I₀^c}‖_{2,I}.    (C.2)

Then, due to the fact that ‖(β)_{T̃^c}‖_{2,I} − ‖β_{Δ̃}‖_{2,I} = ‖β_{I₀^c}‖_{2,I}, we have

    ‖h_{I₀^c}‖_{2,I} ≤ 2‖β_{I₀^c}‖_{2,I} + ‖h_{Δ̃}‖_{2,I} = 2s^{1/2} e₀ + ‖h_{Δ̃}‖_{2,I} ≤ 2s^{1/2} e₀ + s^{1/2} ‖h_{Δ̃}‖₂ ≤ 2s^{1/2} e₀ + s^{1/2} ‖h_{I₀}‖₂.    (C.3)

Using Eqs. (B.5), (B.7) and (C.3), we obtain

    ‖h_{(I₀∪I₁)^c}‖₂ ≤ s^{−1/2} ‖h_{I₀^c}‖_{2,I} ≤ 2e₀ + ‖h_{I₀}‖₂ ≤ 2e₀ + ‖h_{I₀∪I₁}‖₂.    (C.4)

B: Bound on ‖h_{I₀∪I₁}‖₂

According to the decomposition of h, we have

    ‖Ah_{I₀∪I₁}‖₂² = ⟨Ah_{I₀∪I₁}, Ah⟩ − Σ_{i≥2} ⟨A(h_{I₀} + h_{I₁}), Ah_{I_i}⟩.    (C.5)

From Eq. (8),

    |⟨Ah_{I₀∪I₁}, Ah⟩| ≤ ‖Ah_{I₀∪I₁}‖₂ ‖Ah‖₂ ≤ √(1 + δ_{k+s}) ‖h_{I₀∪I₁}‖₂ ‖Ah‖₂.    (C.6)

Since

    ‖Ah‖₂ = ‖A(β − α)‖₂ ≤ ‖Aβ − y‖₂ + ‖Aα − y‖₂ ≤ 2ε,    (C.7)

we conclude that

    |⟨Ah_{I₀∪I₁}, Ah⟩| ≤ 2ε √(1 + δ_{k+s}) ‖h_{I₀∪I₁}‖₂.    (C.8)

Substituting into Eq. (C.5) and combining with Eqs. (B.11) and (B.13) gives

    ‖Ah_{I₀∪I₁}‖₂² ≤ |⟨Ah_{I₀∪I₁}, Ah⟩| + |Σ_{i≥2} ⟨A(h_{I₀} + h_{I₁}), Ah_{I_i}⟩|
                   ≤ 2ε √(1 + δ_{k+s}) ‖h_{I₀∪I₁}‖₂ + √2 δ_{k+s} s^{−1/2} ‖h_{I₀^c}‖_{2,I} ‖h_{I₀∪I₁}‖₂
                   ≤ (2ε √(1 + δ_{k+s}) + √2 δ_{k+s} (‖h_{I₀}‖₂ + 2e₀)) ‖h_{I₀∪I₁}‖₂
                   ≤ (2ε √(1 + δ_{k+s}) + √2 δ_{k+s} (‖h_{I₀∪I₁}‖₂ + 2e₀)) ‖h_{I₀∪I₁}‖₂.    (C.9)


Together with the block-RIP, we have

    (1 − δ_{k+s}) ‖h_{I₀∪I₁}‖₂² ≤ (2ε √(1 + δ_{k+s}) + √2 δ_{k+s} (‖h_{I₀∪I₁}‖₂ + 2e₀)) ‖h_{I₀∪I₁}‖₂,    (C.10)

or

    ‖h_{I₀∪I₁}‖₂ ≤ [2√(1 + δ_{k+s}) / (1 − (1 + √2)δ_{k+s})] ε + [2√2 δ_{k+s} / (1 − (1 + √2)δ_{k+s})] e₀.    (C.11)

Therefore,

    ‖h‖₂ = ‖h_{I₀∪I₁} + h_{(I₀∪I₁)^c}‖₂ ≤ 2‖h_{I₀∪I₁}‖₂ + 2e₀
         ≤ [(2(√2 − 1)δ_{k+s} + 2) / (1 − (1 + √2)δ_{k+s})] s^{−1/2} ‖β − β^k‖_{2,I} + [4√(1 + δ_{k+s}) / (1 − (1 + √2)δ_{k+s})] ε,    (C.12)

which completes the proof.

References

[1] Sodickson DK, McKenzie CA. A generalized approach to parallel magnetic resonance imaging. Med Phys 2001;28:1629–43.
[2] Haase A, Frahm J, Matthaei D, Hanicke W, Merboldt KD. FLASH imaging. Rapid NMR imaging using low flip-angle pulses. J Magn Reson 1986;67:258–66.
[3] Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P. SENSE: sensitivity encoding for fast MRI. Magn Reson Med 1999;42:952–62.
[4] Donoho DL. Compressed sensing. IEEE Trans Inf Theory 2006;52:1289–306.
[5] Candès EJ, Tao T. Decoding by linear programming. IEEE Trans Inf Theory 2005;51:4203–15.
[6] Candès EJ. The restricted isometry property and its implications for compressed sensing. C R Acad Sci Paris 2008;346:589–92.
[7] Lustig M, Donoho D, Pauly JM. Sparse MRI: the application of compressed sensing for rapid MR imaging. Magn Reson Med 2007;58:1182–95.
[8] Gamper U, Boesiger P, Kozerke S. Compressed sensing in dynamic MRI. Magn Reson Med 2008;59:365–73.
[9] Haldar JP, Hernando D, Liang ZP. Compressed-sensing MRI with random encoding. IEEE Trans Med Imaging 2011;30:893–903.
[10] Trzasko J, Manduca A. Highly undersampled magnetic resonance image reconstruction via homotopic l0-minimization. IEEE Trans Med Imaging 2009;28:106–21.

[11] Qu X, Guo D, Chen Z, Cai C. Compressed sensing MRI based on nonsubsampled contourlet transform. IEEE Int Symp IT Med Educ; 2008. p. 693–6.
[12] Chartrand R. Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data. ISBI; 2009. p. 262–5.
[13] Vaswani N, Lu W. Modified-CS: modifying compressive sensing for problems with partially known support. ISIT; 2009.
[14] Qiu CL, Vaswani N. Support-predicted modified-CS for recursive robust principal components' pursuit. ISIT; 2011.
[15] Stojnic M, Parvaresh F, Hassibi B. On the reconstruction of block-sparse signals with an optimal number of measurements. IEEE Trans Signal Process 2009;57:3075–85.
[16] Stojnic M. Strong thresholds for l2/l1-optimization in block-sparse compressed sensing. ICASSP; 2009. p. 3025–8.
[17] Usman M, Prieto C, Schaeffter T, Batchelor PG. k-t group sparse: a method for accelerating dynamic MRI. Magn Reson Med 2011;66:1163–76.
[18] Prieto C, Usman M, Wild JM. Group sparse reconstruction using intensity-based clustering. Magn Reson Med 2012;69:1169–79.
[19] Lu YM, Do MN. A theory for sampling signals from a union of subspaces. IEEE Trans Signal Process 2008;56:2334–45.
[20] Blu T, Davies ME. Sampling theorems for signals from the union of finite-dimensional linear subspaces. IEEE Trans Inf Theory 2009;55:1872–82.
[21] Eldar YC, Mishali M. Robust recovery of signals from a structured union of subspaces. IEEE Trans Inf Theory 2009;55:5302–16.
[22] Eldar YC, Kuppinger P, Bolcskei H. Block-sparse signals: uncertainty relations and efficient recovery. IEEE Trans Signal Process 2010;58:3042–54.
[23] Babacan SD, Lam F, Peng X, Do MN, Liang ZP. Interventional MRI with sparse sampling using union-of-subspaces. IEEE ISBI; 2012. p. 314–7.
[24] Ma S, Yin W, Zhang Y, Chakraborty A. An efficient algorithm for compressed MR imaging using total variation and wavelets. CVPR; 2008.
[25] Yang J, Zhang Y, Yin W. A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data. IEEE J Sel Top Signal Process 2010;4:288–97.
[26] Huang JZ, Zhang ST, Metaxas D. Efficient MR image reconstruction for compressed MR imaging. Med Image Anal 2011;15:670–9.
[27] Vetterli M, Marziliano P, Blu T. Sampling signals with finite rate of innovation. IEEE Trans Signal Process 2002;50:1417–28.
[28] Maravic I, Vetterli M. Sampling and reconstruction of signals with finite rate of innovation in the presence of noise. IEEE Trans Signal Process 2005;53:2788–805.
[29] Wang Y, Yin W. Sparse signal reconstruction via iterative support detection. SIAM J Imaging Sci 2010;3:462–91.
[30] Rao N, Nowak R, Wright S, Kingsbury N. Convex approaches to model wavelet sparsity patterns. IEEE ICIP; 2011.
[31] Du HQ, Lam F. Compressed sensing MR image reconstruction using a motion-compensated reference. Magn Reson Imaging 2012;30:954–63.
[32] Lam F, Haldar JP, Liang ZP. Motion compensation for reference constrained image reconstruction from limited data. IEEE ISBI; 2011.
[33] Hansen PC. The L-curve and its use in the numerical treatment of inverse problems. In: Johnston P, editor. Computational Inverse Problems in Electrocardiology; 2001. p. 119–42.
