sensors Article

Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization

Chung-Cheng Chiu 1,* and Chih-Chung Ting 2

1 Department of Electrical and Electronic Engineering, Chung Cheng Institute of Technology, National Defense University, Taoyuan 33551, Taiwan
2 School of Defense Science, Chung Cheng Institute of Technology, National Defense University, Taoyuan 33551, Taiwan; [email protected]
* Correspondence: [email protected]; Tel.: +886-3-390-9962

Academic Editor: Yael Nemirovsky
Received: 17 March 2016; Accepted: 16 June 2016; Published: 22 June 2016

Sensors 2016, 16, 936; doi:10.3390/s16060936; www.mdpi.com/journal/sensors

Abstract: Image enhancement methods have been widely used to improve the visual effects of images. Owing to its simplicity and effectiveness, histogram equalization (HE) is one of the methods used for enhancing image contrast. However, HE may result in over-enhancement and feature loss problems that lead to an unnatural look and loss of details in the processed images. Researchers have proposed various HE-based methods to solve the over-enhancement problem; however, they have largely ignored the feature loss problem. Therefore, a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) is proposed. It refers to a visual contrast enhancement algorithm based on histogram equalization (VCEA), which generates visually pleasing enhanced images, and improves the enhancement effects of VCEA. CegaHE adjusts the gaps between two gray values based on the adjustment equation, which takes the properties of human visual perception into consideration, to solve the over-enhancement problem. In addition, it alleviates the feature loss problem and further enhances the textures in the dark regions of the images to improve the quality of the processed images for human visual perception. Experimental results demonstrate that CegaHE is a reliable method for contrast enhancement and that it significantly outperforms VCEA and other methods.

Keywords: cumulative distribution function (CDF); contrast enhancement; histogram equalization (HE); human visual perception; gap adjustment

1. Introduction

Good lighting conditions and high sensitivity of CCD (Charge-Coupled Device)/CMOS (Complementary Metal-Oxide Semiconductor) sensors are important factors in obtaining good image quality. Strong light on the subject leads to a white or washed-out appearance; conversely, weak light on the subject results in the image appearing too dark to be seen clearly. Further, poor sensitivity of CCD/CMOS sensors causes images to have obscure details and lower contrast with narrow dynamic ranges. However, image enhancement methods can effectively improve the visual quality of images and compensate for the disadvantages arising from the inferior light sensitivity of CCD/CMOS sensors. Therefore, these techniques have been widely used in recent years in many fields such as image processing, medical analysis, meteorological imaging, and consumer electronics. They are also utilized for obtaining clearer images on the displays of various types of instruments.

Histogram equalization (HE) [1] is a simple, effective, and widely utilized image enhancement method. After performing HE, the intensity histogram of a poor-contrast image spans the full spectrum of the gray levels to gain higher contrast and achieve image enhancement. Although HE enhances an image effectively, it may cause the gray values of the enhanced image to be overstretched and lead



to excessive contrast enhancement. Further, HE may also result in the feature loss problem because many gray values with smaller probabilities are merged to a certain gray value. Many studies have focused on finding solutions to the over-enhancement problem produced by HE. Brightness preserving bi-histogram equalization (BBHE) [2] segments an original image histogram into two subhistograms based on its mean and performs HE in each of them. Dynamic histogram equalization (DHE) [3] segments the histogram into several subhistograms by using the local minima. When a subhistogram is not a normal distribution, DHE further partitions it into three parts by using µ + σ and µ − σ, where µ and σ denote the mean and the standard deviation of the subhistogram, respectively. Then, every subhistogram is reassigned its dynamic range and HE is performed in each of them. Dynamic range separate histogram equalization (DRSHE) [4] utilizes the weighted average of absolute color difference to enable a more uniform distribution of the original image. It divides the dynamic range of the histogram into k equal subhistograms and assigns the dynamic range based on their area ratios. Then, DRSHE reassigns the intensities of the histogram uniformly in the resized dynamic range. DRSHE can preserve the naturalness of the original image and avoid severe changes in brightness. In addition, statistic-separate tri-histogram equalization (SSTHE) [5] uses the values µ − w × σ and µ + w × σ to separate the histogram of an image into three regions and revises the range of each region. Here, w is an adjustable weighting value, and µ and σ are the mean and the standard deviation of the image, respectively. Then, SSTHE performs HE within each region. Bi-histogram equalization with a plateau level (BHEPL) [6] uses the mean value to segment the input histogram into two subhistograms. It clips every subhistogram according to the plateau limits to avoid over-amplification of noise.
Next, two transform functions proposed by the authors are applied to these two subhistograms to equalize them. Weighting mean-separated sub-histogram equalization (WMSHE) [7] segments the histogram based on the proposed weighting mean function. Next, WMSHE equalizes each subhistogram individually. The methods described above use various ways to divide the histogram into several subhistograms. After that, HE or other equalization methods are applied to stretch the dynamic range of an image to achieve the purpose of image enhancement. These methods can avoid the over-enhancement problem because each subhistogram is limited within a certain range; nevertheless, they cannot deal with the feature loss problem produced by HE. Further, many studies have focused on preserving the image brightness and maintaining image quality. Equal area dualistic subimage histogram equalization (DSIHE) [8] segments an image into two equal subimages by using the median value. Then, DSIHE executes HE within each subimage. Images enhanced by DSIHE maintain the brightness of the original image. Similar to BBHE, recursive mean-separate histogram equalization (RMSHE) [9] uses the mean value to segment the input histogram into two subhistograms. RMSHE performs the separation recursively r times, and then each subhistogram is equalized independently. As r increases, the output mean approaches the input mean. Hence, RMSHE can provide better brightness preservation. Minimum mean brightness error bi-histogram equalization (MMBEBHE) [10] searches gray levels 0 to L−1 for the specific value that minimizes the absolute difference between the input and output means. Then, MMBEBHE utilizes this value to segment the input histogram into two subhistograms and equalizes them separately. Images enhanced by MMBEBHE preserve the maximum brightness of the original image.
Brightness preserving histogram equalization with maximum entropy (BPHEME) [11] determines the histogram that has the maximum entropy; hence, it preserves the brightness of the original image considerably well. Recursive sub-image histogram equalization (RSIHE) [12] is similar to DSIHE in its usage of the median value to separate the input histogram into two subhistograms. RSIHE performs the separation recursively r times, and HE is performed on each subhistogram. The average brightness of the image obtained by applying RSIHE is equal to the value obtained with DSIHE. Thus, RSIHE can achieve brightness preservation. Brightness preserving dynamic histogram equalization (BPDHE) [13] combines the concepts of MPHEBP and DHE. BPDHE utilizes the local maxima of the smoothed histogram to divide the histogram, similar to the approach used by MPHEBP. In addition, the step in which BPDHE maps each segment to a new dynamic range is close to the approach utilized by DHE. Next, BPDHE normalizes the output intensity. The average intensity of the processed image is almost


the same as that of the original image. Thus, BPDHE can effectively preserve the brightness of images. Flattest histogram specification with accurate brightness preservation (FHSABP) [14] determines the flattest histogram under a mean-brightness constraint. Then, it achieves better brightness preservation by using an exact histogram specification. A histogram modification framework for image contrast enhancement [15] adjusts the levels of enhancement by manually tuning the parameters of the introduced penalty terms. The processed images have good enhancement quality. Dynamic quadrants histogram equalization plateau limit (DQHEPL) [16] segments the histogram based on the median value and iteratively generates four subhistograms. Then, DQHEPL uses the plateau limit, the average intensity of each subhistogram, to clip each subhistogram. After that, each subhistogram is assigned a new dynamic range. Finally, DQHEPL performs HE within each subhistogram individually. The images processed by DQHEPL avoid over-enhancement and maintain the mean brightness. Piecewise maximum entropy (PME) [17] obtains the maximum entropy by using a piecewise transformation function. The resultant images obtained after using PME preserve the original brightness considerably well. The methods described above endeavor to resolve the problem of severe change in brightness produced by HE and to keep the brightness of the resultant image almost at the same level as that of the original. Nevertheless, when the images are overexposed or underexposed, maintaining the brightness of the input images is not logical because, in such cases, the effects of enhancement would not meet the needs of human visual perception.
Apart from the abovementioned methods, the contextual and variational contrast enhancement algorithm (CVC) [18] employs contextual data modeling using the two-dimensional (2D) histogram of the input image to execute nonlinear data mapping that produces a pleasing enhanced image. Lee et al. [19] proposed a contrast enhancement algorithm that uses the 2D histogram of gray-level differences between adjacent pixels and amplifies those differences to achieve image enhancement. Huang et al. [20] proposed efficient contrast enhancement using adaptive gamma correction with weighting distribution (AGCWD). It improves the brightness of darkish images by means of gamma correction and the probability distribution of luminance pixels. Owing to the limitation of gamma correction, AGCWD is best suited to dimmed images. Ting et al. [21] proposed a visual contrast enhancement algorithm (VCEA), which takes the requirements of human visual perception into consideration and tries to alleviate the over-enhancement and feature loss problems. In addition, VCEA further improves the detailed textures of an image. The resultant images have better visual quality.

This paper refers to the VCEA method and proposes a contrast enhancement algorithm based on gap adjustment for histogram equalization (CegaHE) to further improve the enhancement effects of VCEA. CegaHE also considers the properties of human visual perception. It mitigates the over-enhancement and feature loss problems produced by HE. Further, CegaHE enhances the textures in the dark regions of the images and improves their clarity. Images obtained by applying CegaHE are more suitable for human visual perception and have better visual quality than those obtained by applying VCEA and other HE-based methods.

The remainder of the paper is structured as follows. The proposed algorithm, CegaHE, is introduced in Section 2.
The experimental results of CegaHE and other HE-based methods are demonstrated in Section 3. Finally, the conclusions are stated in Section 4.

2. The Proposed Algorithm

HE is a simple and effective method for improving image contrast; however, it may cause the over-enhancement and feature loss problems. The basic principle of HE is to remap the gray values of the original image to new ones based on the cumulative distribution function, stretching the dynamic range to improve the image contrast. Figure 1a,b show the dark, low-contrast original image and the HE image, respectively. Figure 1c,d are the histograms of the Y component of Figure 1a,b, respectively. Figure 1b illustrates the over-enhancement problem, in which the face and clothing of Daniel, as well as the background of the image, are very bright and look unnatural. The reason


for this problem is that HE stretches the gaps of the histogram between two adjacent gray values to a great extent, as shown in Figure 1d, thus leading to over-enhancement. In this paper, the term "gap" denotes the number of gray levels between two neighboring gray values. Further, HE also causes the feature loss problem. HE expands the dynamic range of the original image from gray level 0 to 255 according to the cumulative probability of each gray value. Hence, when the probability of a particular gray value is less than 1/256, it cannot be allocated a gray level. It is combined with other gray values until the cumulative probability is greater than 1/256 and at least one gray level can be allocated. All such gray values map to the same gray value. The above process continues until no gray level remains. Many compressed gray values mapping to the same one results in the feature loss problem. In Figure 1b, the facial and clothing features of Daniel are fewer than in the original image. Further, many details cannot be seen clearly. Many researchers have proposed various methods to overcome the limitations of HE. However, they focused on solving the over-enhancement problem and did not consider the problem of gray values being compressed, which causes the enhanced images to lose some features. Further discussions related to the small images in Figure 1c,d are presented in Section 2.1.
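The CDF-based remapping and its two side effects described above can be illustrated with a short, self-contained sketch of classical HE. This is a textbook implementation, not the authors' code, and the toy histogram is invented purely for illustration:

```python
def histogram_equalize(pixels, levels=256):
    """Classical HE: remap each gray value through the normalized CDF."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running / len(pixels))
    lut = [round(c * (levels - 1)) for c in cdf]   # HE transform function
    return [lut[p] for p in pixels]

# Toy image: 990 pixels at gray value 50, plus one pixel at each of
# 51..60 (each with probability well below 1/256).
pixels = [50] * 990 + list(range(51, 61))
eq = histogram_equalize(pixels)
print(min(eq), max(eq))                      # dominant value jumps to the bright end
print(len(set(pixels)), "->", len(set(eq)))  # distinct inputs merge into fewer outputs
```

Here the gap below gray value 50 is stretched enormously (50 maps to gray level 252 because its cumulative probability is 0.99), while the ten sparse values above it collapse into only a few output levels: exactly the over-enhancement and feature loss effects discussed above.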

Figure 1. Daniel image (image size: 1935 × 1294 pixels). (a) Original image; (b) HE image; (c) original histogram; (d) HE histogram.

The proposed algorithm takes the needs of human visual perception into consideration and alleviates the over-enhancement and feature loss problems caused by HE. Further, CegaHE can enhance the textures in the dark regions of the images and make them clearer. Figure 2 shows the block diagram of CegaHE.

Figure 2. The block diagram of CegaHE.


CegaHE is an HE-based method. Therefore, all input images processed by CegaHE need to be processed by HE in advance. After that, they are processed by each stage of CegaHE. The three major processes in CegaHE are as follows:

• Gap adjustment process (GA): This process adjusts the gaps between two neighboring gray values in the HE histogram to alleviate the over-enhancement problem produced by HE, and generates processed images that meet the minimum requirement of human visual perception. Here, the minimum requirement of human visual perception indicates that, when observing the image carefully, human visual perception can discern the differences between features of the image without difficulty. After the adjustment, the "free pixels", i.e., the available gaps, are used in the following two processes:



• Gray value recovery process (GVR): This process alleviates the feature loss problem produced by HE. HE merges the gray values with low cumulative probabilities into the same gray value, resulting in the feature loss problem. To mitigate the problem, the GVR uses free pixels to recover the lost features.



• Dark region enhancement process (DRE): Owing to the inferior light sensitivity of CCD/CMOS sensors, image quality is poor, especially when the light is dim or insufficient. To compensate for this effect, this process enhances the textures in the dark regions of the images, and increases their contrast and clarity.

In this paper, CegaHE adopts the RGB color space for the input images. Prior to applying CegaHE, the RGB color space is converted to the YUV color space. YUV consists of one luminance (Y) component and two chrominance components (U and V). For color images, the luminance (Y) component is used for further processing; for gray images, CegaHE utilizes the gray values directly. Then, CegaHE uses the HE histogram of an image as the basis to adjust the gaps between two neighboring gray values to meet the minimum requirement of human visual perception. HE can be used to stretch the intensity histogram of an image and obtain better contrast. However, HE may widen the gaps of gray values excessively and cause the over-enhancement problem. To solve this problem, the GA refers to the characteristics of human visual perception and adjusts the gaps of neighboring gray values using a proposed adjustment method to restrict overstretching. The details of the GA are illustrated in Section 2.1.

2.1. Gap Adjustment Process (GA)

The GA alleviates the problem of overstretched gaps produced by HE and ensures that the resultant image meets the minimum requirement of human visual perception. In Figure 1a, the face and clothing of Daniel can be observed, but the background details are not clearly seen. In Figure 1c, the green regions, which occupy two-thirds of the total area of the small image, have seven gray values that lie in the dark region of the dynamic range. Therefore, the details of the background are not easily seen. On the contrary, other parts of the small image with 211 gray values, which are spread from gray level 23 to 233, occupy most of the dynamic range, and therefore, most facial and clothing features of Daniel can be observed clearly. In Figure 1b, the face and clothing of Daniel are too bright to be seen after the process of HE.
According to the cumulative distribution function of an image, HE flattens the dynamic range from gray level 0 to 255. This causes the gray values with greater cumulative probabilities to be spread extensively, resulting in over-enhancement. In Figure 1d, the green parts in the small image have seven gray values that occupy approximately two-thirds of the dynamic range under the red overbrace. Seventy-six other gray values, which are compressed from the original 211 gray values, shift to the bright region of the dynamic range and occupy one-third of the dynamic range. Thus, there is an information loss corresponding to 135 gray values. Therefore, in comparison with the original image, Figure 1b loses some features and also exhibits the over-enhancement problem.


In general, human visual perception is more sensitive to luminance change in the middle of the luminance range [22–24]. Conversely, it is less sensitive to luminance change in the dark or bright regions of the luminance range. Thus, human visual perception can discern only a narrow gap in the middle of the luminance range but a wide gap in the bright and dark areas of the luminance range. HE stretches the gaps between two neighboring gray values excessively owing to the greater cumulative probabilities, thus resulting in over-enhancement. Therefore, the gaps between two adjacent gray values should be adjusted adequately to avoid over-enhancement. According to the properties of human visual perception and the concept of adjusting gaps, the proposed adjustment equation, the gap limiter, is used as the basis to restrict the gaps between gray values. The proposed gap adjustment is more consistent with human visual perception. When the gaps between two neighboring gray values are greater than the corresponding values of the gap limiter, the gaps should be narrowed based on the values of the limiter in order to make the processed image meet the minimum requirement of human visual perception. The gap limiter equation is:





L(G) = Round(a × [(G/127) − 1]^2) + b    (1)

where L(G) denotes the limiting gap of each gray value, G is the gray value ranging from 0 to 255, and a and b are parameters that control the levels of enhancement. Round denotes rounding a × [(G/127) − 1]^2 to the nearest integer. When a and b increase, the gaps between gray values also increase, and the features of the adjusted image become clearer. However, this would reduce the number of free pixels remaining for the succeeding processes to use. When the gap between two neighboring gray values is greater than the value of the gap limiter calculated by using Equation (1), the difference between the gap and the gap limiter is defined as "free pixels". The task of selecting values of a and b that help to obtain better images and more free pixels is a tradeoff challenge. The objective is to generate GA images that have as many features as possible to meet the minimum requirement of human visual perception, and to simultaneously acquire more free pixels for the succeeding processes. In order to obtain adequate adjustment gaps, the GA processes different types of images such as underexposed, overexposed, low contrast, and high contrast images to obtain suitable a and b. From the processed images, we deduce that when both a and b are equal to 3, the processed images can meet the minimum requirement of human visual perception, and more free pixels can be obtained. The choice of values of a and b is discussed further in Appendix A. Figure 3 shows the curve of the gap limiter for each gray value.

Figure 3. Gap limiter.
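Equation (1) is compact enough to transcribe directly. The sketch below assumes the recommended choice a = b = 3 discussed above:

```python
def gap_limiter(g, a=3, b=3):
    """Limiting gap L(G) of Equation (1) for gray value g in 0..255."""
    return round(a * ((g / 127) - 1) ** 2) + b

# The limiter is narrowest in mid-range, where human vision
# discriminates best, and widest at the dark and bright ends:
print(gap_limiter(0), gap_limiter(127), gap_limiter(255))   # 6 3 6
```

A gap wider than L(x) at gray value x is therefore clipped to at most 6 levels at the extremes and 3 levels in the middle of the range, matching the sensitivity curve in Figure 3.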

After the image is processed by HE, suppose his(x) denotes the number of pixels at gray value x, ranging from 0 to 255, and the gap between gray value x and the previous one is d. If d is more than the value of the gap limiter, L(x), his(x) is moved backward by d − L(x) gray levels. After all his(x) are moved backward, the GA image is obtained as shown in Figure 4a.


In Figure 4a, the details of the face and clothing of Daniel can be seen. At the same time, human visual perception can perceive the differences between the above features of the image without difficulty. Furthermore, in Figure 1b, the face and clothing of Daniel consist of 76 gray values ranging from gray level 176 to 255, which occupy the bright region of the dynamic range. After the GA, those features have the same number of gray values ranging from gray level 31 to 110, which move to the middle of the dynamic range in Figure 4b. Human visual perception has good discrimination in the middle region of the dynamic range, and therefore, Figure 4a has a better visual effect than Figure 1b. Further, the dynamic range of the green regions, which occupy two-thirds of the area of the small image in Figure 4b with seven gray values, is wider than the dynamic range of the green regions of the small image in Figure 1c. Therefore, the details of the background of Figure 4a are clearer than those of the background of Figure 1a. Thus, the GA solves the over-enhancement problem, retains the characteristic of HE that enhances image contrast, and meets the minimum requirement of human visual perception. The obtained 145 free pixels are used by the succeeding processes for further enhancement.

Figure 4. Daniel image, (a) GA image; (b) GA histogram.

2.2. Gray Value Recovery Process (GVR)

HE and other HE-based methods proposed until now result in the feature loss problem. They stretch the gaps between two adjacent gray values depending on cumulative probabilities. When cumulative probabilities of gray values are not high enough to be allocated a gray level, they are combined to a certain gray value whose cumulative probability is high enough to be allocated at least one gray level. This phenomenon of many gray values being merged to the same one causes the feature loss problem. In Figure 1a, the number of gray values of the Daniel image is 218. After performing HE, the number of gray values becomes 83. Thus, 135 gray values of the features are compressed. Therefore, this paper proposes the GVR to alleviate the feature loss problem.

The GVR recovers the lost gray values of an image by using the free pixels that are available after the GA. The recovery of the features counts on the number of free pixels. When the number of free pixels is greater than or equal to twice that of the compressed gray values, the GVR recovers all the lost features sequentially. However, when the number of free pixels is less than twice that of the compressed gray levels, the GVR recovers only the features whose pixel counts exceed the value, image size × (1/255) × β, sequentially. When β increases, fewer features can be recovered; conversely, when β decreases, more features can be recovered. After experimenting with different values of β, we observe that the GVR can recover more features for the majority of the images when the value of β is 0.1. After the GVR, the processed image recovers most features of the original image.

Although Figure 4a meets the minimum requirement of human visual perception and solves the over-enhancement problem of HE, the feature loss problem of HE still exists.
Hence, features such as the color gradient of facial complexion, the shirt, and the tie have disappeared, and the quality of the image has degraded. The red regions in Figure 5a are the regions where features are lost after performing HE. After performing the GVR, 68 compressed gray values are recovered as shown in Figure 5b, and

Sensors 2016, 16, 936


77 free pixels remain for use by the DRE. Figure 5b looks better, more vivid, and clearer than Figure 4a. This proves that the GVR can effectively alleviate the feature loss problem.
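The recovery rule described above can be sketched as follows. This is a simplified illustration of the criterion, not the authors' implementation; the function and variable names are ours.

```python
def gray_values_to_recover(compressed, free_pixels, image_size, beta=0.1):
    """Select which compressed gray values the GVR recovers.

    compressed  -- dict: compressed gray value -> number of pixels at that value
    free_pixels -- number of free pixels remaining after the GA
    image_size  -- total number of pixels in the image
    Returns the gray values to recover, in ascending order."""
    if free_pixels >= 2 * len(compressed):
        # enough free pixels: recover every lost gray value sequentially
        return sorted(compressed)
    # otherwise recover only values whose pixel count exceeds the threshold
    threshold = image_size * (1.0 / 255.0) * beta
    return sorted(g for g, count in compressed.items() if count > threshold)
```

With β = 0.1 and an image of 25,500 pixels, the threshold is 10 pixels, so only compressed gray values carried by more than 10 pixels are recovered when free pixels are scarce.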



Figure 5. GVR results for Daniel image, (a) Regions of compressed gray values; (b) GVR image; (c) GVR histogram.

2.3. Dark Region Enhancement Process (DRE)

The DRE enhances the textures in the dark region of an image and compensates for the disadvantage of inferior light sensitivity of CCD/CMOS sensors. When taking pictures, the images are generally extremely dark owing to insufficient light or low light sensitivity of CCD/CMOS sensors. Further, the textures in the dark regions of the images are not easily discerned by human visual perception. Therefore, the DRE focuses on these textures for further enhancement.

First, the DRE segments the GVR image into the dark and the bright regions by using its mean. Pixel values less than the mean value imply that the pixels lie in the dark region. The yellow area of Figure 6a shows the dark region of the Daniel image. Then, the DRE computes the gradient magnitude at each pixel and sums up the gradient values of the gray values in the dark region of the image.



Figure 6. DRE results for Daniel image, (a) Dark region of the GVR image; (b) DRE image; (c) DRE histogram.

For example, n_{i,j} represents the value of each pixel in an image and the size of the image is W × H pixels. The gradient value Gn_{i,j} of n_{i,j} is equal to the sum of |GHn_{i,j}| and |GVn_{i,j}|, where GHn_{i,j} and GVn_{i,j} are equal to n_{i,j+1} − n_{i,j−1} and n_{i+1,j} − n_{i−1,j}, respectively:

Gn_{i,j} = |GHn_{i,j}| + |GVn_{i,j}| = |n_{i,j+1} − n_{i,j−1}| + |n_{i+1,j} − n_{i−1,j}| (2)

where 0 ≤ i ≤ W − 1 and 0 ≤ j ≤ H − 1. Suppose G(x) is the gradient sum of the xth gray value of the dark region. The DRE computes the gradient probabilities, P(x), of the gray values of the dark region:

P(x) = G(x) / Σ_{r=0}^{mean−1} G(r), 0 ≤ x ≤ mean − 1 (3)

where mean is defined as:

mean = Σ_{s=0}^{255} s × p(s) (4)

where p(s) is the ratio of the total pixels of each gray level to the total pixels of the whole image. Further, the DRE adopts the remaining free pixels, rfp, to enhance the textures in the dark region. In order to avoid over-enhancement, the DRE utilizes only a part of the remaining free pixels, Rrfp, based on the ratio of the number of pixels in the dark region to the total number of pixels. Suppose D(x) is the total number of pixels at gray value x. The Rrfp is defined as:

Rrfp = [Σ_{x=0}^{mean−1} D(x) / Σ_{x=0}^{255} D(x)] × rfp (5)

Based on the gradient probabilities of each gray value in the dark region, the DRE distributes Rrfp to enhance these gray values. The allocated gray level, Space(x), of the xth gray value is defined as:

Space(x) = Rrfp × P(x), 0 ≤ x ≤ mean − 1 (6)
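Equations (2)–(6) can be combined into a short computational sketch. This is a compact illustration in pure Python with names of our choosing; the subsequent gap expansion driven by Space(x) is omitted, and the paper's actual implementation may differ in how it iterates and rounds.

```python
def dre_allocation(image, rfp):
    """Allocate part of the remaining free pixels to dark-region gray values,
    following Equations (2)-(6).

    image -- 2-D list of gray values (H rows of W columns)
    rfp   -- number of remaining free pixels after the GVR
    Returns {gray value x: Space(x)} for gray values in the dark region."""
    H, W = len(image), len(image[0])
    total = H * W
    # Eq. (4): mean gray level (equivalent to sum of s * p(s) over the histogram)
    mean = int(sum(v for row in image for v in row) / total)
    # Eq. (2): gradient magnitude, accumulated per dark gray value -> G(x)
    G = {}
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            v = image[i][j]
            if v < mean:
                grad = (abs(image[i][j + 1] - image[i][j - 1])
                        + abs(image[i + 1][j] - image[i - 1][j]))
                G[v] = G.get(v, 0) + grad
    # Eq. (5): share of the free pixels granted to the dark region
    dark = sum(1 for row in image for v in row if v < mean)
    r_rfp = int(dark / total * rfp)
    # Eq. (3) combined with Eq. (6): Space(x) = Rrfp * G(x) / sum of G
    g_sum = sum(G.values())
    return {x: int(r_rfp * G[x] / g_sum) for x in G} if g_sum else {}
```

The returned allocation then drives how far each dark gray value's gap is widened in the final expansion step.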

Finally, the DRE expands the gaps of the gray values in the dark region based on Space(x). The image processed by the DRE is shown in Figure 6b. The DRE uses 54 free pixels, and 23 free pixels remain. Owing to the expansion of the gaps of the gray values in the dark region, features such as


the contours of tables, benches, and arches behind Daniel can also be manifested. The background of

Figure 6b becomes clearer than that of Figure 5b, the GVR image. Thus, it is seen that the DRE can effectively enhance the textures in the dark region of an image.

3. Experimental Results

To illustrate the preeminence of CegaHE, both subjective and objective assessments are utilized in this paper to compare the results of CegaHE with those of other well-known methods. As far as subjective assessment is concerned, Figures 7–10 show the experimental results of CegaHE and other methods, namely, HE, BBHE [2], RMSHE [9], DSIHE [8], RSIHE [12], DQHEPL [16], and VCEA [21]. In terms of objective assessment, the discrete entropy (E) [25] is used to measure the capability of extracting details from images. A higher discrete entropy denotes that more information is extracted from an image. The detailed statements concerning the subjective and objective assessments are as follows.

3.1. Subjective Assessment

Figure 7a is a standard image with underexposure. Figure 7b, processed by HE, is over-enhanced and produces unpleasant visual artifacts.
The background noise is also amplified by HE. Figure 7c–g are the results after applying BBHE, RMSHE, DSIHE, RSIHE, and DQHEPL, individually. They are exceedingly dark, and the faces of the brothers and the textures of the sofa are not seen clearly.

Figure 7. Results of different methods for the image Brothers (image size: 960 × 640 pixels): (a) Original image; (b) HE; (c) BBHE; (d) RMSHE (r = 2); (e) DSIHE; (f) RSIHE (r = 2); (g) DQHEPL; (h) VCEA; and (i) CegaHE.


Figure 8. Results of different methods for the image Cat [26] (image size: 902 × 600 pixels): (a) Original image; (b) HE; (c) BBHE; (d) RMSHE (r = 2); (e) DSIHE; (f) RSIHE (r = 2); (g) DQHEPL; (h) VCEA; and (i) CegaHE.

Figure 9. Results of different methods for the image Landscape [27] (image size: 960 × 640 pixels): (a) Original image; (b) HE; (c) BBHE; (d) RMSHE (r = 2); (e) DSIHE; (f) RSIHE (r = 2); (g) DQHEPL; (h) VCEA; and (i) CegaHE.


Figure 10. Results of different methods for the image Boat [28] (image size: 900 × 600 pixels): (a) Original image; (b) HE; (c) BBHE; (d) RMSHE (r = 2); (e) DSIHE; (f) RSIHE (r = 2); (g) DQHEPL; (h) VCEA; and (i) CegaHE.

Figure 7h, processed by VCEA, shows the faces and the clothing of the brothers as well as the textures of the sofa in the dark region of the image clearly. However, some parts of the image are brighter, e.g., the faces of the brothers and the wall behind the sofa. Figure 7i, the result after applying CegaHE, shows not only the faces and the clothing of the brothers but also the textures of the sofa in the dark region of the image clearly. It looks more natural and has better visual quality than the images processed by other methods.

Figure 8a is an underexposed image. The detailed body textures of the cat and the background are not seen clearly. Figure 8b, the result of processing by HE, is an over-enhanced image in which some textures of the cat are not seen. Figure 8c–f are processed by BBHE, RMSHE, DSIHE, and RSIHE, individually. The problem they exhibit is that the dark regions appear extremely dark and the bright regions appear extremely bright. The images obtained using these methods appear unclear and unnatural. Figure 8g, processed by DQHEPL, is much better than the ones obtained using the other methods.
However, details of the left eye are not clear and some detailed body textures of the cat appear too dark to be seen clearly. Figure 8h, the image obtained by VCEA, has good enhancement performance. All the details can be seen clearly. However, the forehead, the body, and the right front foot of the cat are too bright. Figure 8i, the result of processing using CegaHE, has the best enhancement performance. The detailed body textures of the cat are recovered; hence, the features and color gradient of the body of the cat are seen clearly. Further, owing to the enhancement of the dark textures of the image, the details of the background and the left eye of the cat can be seen clearly. Thus, Figure 8i looks more natural and has better visual quality.

Figure 9a is a low contrast and high brightness image. Figure 9b–h are obtained after processing the image using HE, BBHE, RMSHE, DSIHE, RSIHE, DQHEPL, and VCEA, respectively. All these resultant images exhibit the overexposure problem, with the sky and the ice layers in the images being extremely bright, thus making them appear unnatural. The surface of Figure 9b is extremely dark, and therefore, some textures are not easily visible.


The surface in Figure 9d,f is extremely bright, causing some features of the surface to disappear. Figure 9i, obtained by applying CegaHE, is clearer than the images obtained by using the other methods. The textures of the sky, the surface, and the ice layers are clear. In addition to higher contrast, the resulting image has better visual quality.

Figure 10a is a low contrast and high brightness image. Figure 10b,e are obtained by applying HE and DSIHE, individually. The problem of Figure 10b,e is that it is hard to see the boat, trees, and their reflections clearly because they are too dark. Figure 10c,d,f–h also have the same problem, with the trees and their reflections in the distance being too bright to be seen clearly. Figure 10i, which is processed by CegaHE, has better visual quality. The trees and their reflections in the distance are much clearer. Further, the detailed textures of the boat, the trees behind the boat, and their reflections on the lake are recovered by CegaHE and displayed clearly. In addition, the resulting image appears more natural than the ones obtained by applying the other methods.

The remaining free pixels of each image after each process are recorded in Table 1. From Table 1, we can observe that free pixels remain for each image. These free pixels can be used for further contrast enhancement.

Table 1. Remaining free pixels after each process.

Image        Free Pixels after the GA    Free Pixels after the GVR    Free Pixels after the DRE
Brothers     90                          86                           40
Cat          107                         76                           43
Landscape    88                          82                           46
Boat         147                         90                           82

Thus, CegaHE alleviates the limitations of HE, i.e., the over-enhancement and feature loss problems, and also improves the clarity of the textures in the dark regions of the images. In the case of underexposed images, CegaHE improves their clarity, recovering a large amount of detail and making them adequate for human visual perception. In the case of overexposed images, CegaHE effectively restricts the over-enhancement effect and makes the resultant images appear more natural. Compared with VCEA and other HE-based methods, CegaHE yields images with better visual quality.

3.2. Objective Assessment

Apart from subjective assessment of the enhancement effect by means of observation, objective assessment, which is a quantitative measure, is often used to verify the enhancement effect in most studies. It is not easy to find a suitable quantitative measure. Consequently, one of the objective assessment methods, the discrete entropy [25], is utilized in this paper to quantitatively evaluate the performance of different enhancement methods. The definition of discrete entropy E is as follows:

E = −Σ_{i=0}^{j} p(X_i) × log2 p(X_i) (7)

where j is the maximum intensity value (e.g., for 8-bit images, j = 255) and p(X_i) is the probability at gray level i. Table 2 shows the discrete entropy values of the compared methods. A higher entropy value indicates that more information is extracted from an image. However, a higher entropy value does not always denote a better enhancement effect. For example, DQHEPL has the highest entropy values for Figures 7–10 in Table 2, which means that DQHEPL obtains significant information from the original images. In fact, the enhancement effects of DQHEPL are not as good as those of VCEA and CegaHE. Although objective assessment provides quantitative information to readers, it must be accompanied by subjective assessment, because subjective assessment is the most direct method for observers to evaluate image quality [21].
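For reference, Equation (7) can be computed directly from an image histogram; a minimal sketch (the function name is ours):

```python
import math

def discrete_entropy(histogram):
    """Discrete entropy E of Equation (7); histogram[i] is the pixel count
    at gray level i, so p(Xi) = histogram[i] / total."""
    total = sum(histogram)
    e = 0.0
    for count in histogram:
        if count:  # terms with p(Xi) = 0 contribute nothing
            p = count / total
            e -= p * math.log2(p)
    return e
```

A perfectly uniform 256-level histogram gives the maximum value of 8 bits, while an image concentrated on a single gray level gives 0.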


Table 2. Discrete entropy values of compared methods.

Method    Brothers (Figure 7)    Cat (Figure 8)    Landscape (Figure 9)    Boat (Figure 10)
HE        4.79                   4.99              4.86                    3.43
BBHE      4.70                   4.99              4.87                    3.50
RMSHE     4.69                   4.92              4.80                    3.57
DSIHE     4.76                   5.02              4.87                    3.41
RSIHE     4.71                   4.97              4.58                    3.31
DQHEPL    4.82                   5.05              4.88                    3.64
VCEA      4.81                   5.05              4.88                    3.64
CegaHE    4.81                   5.04              4.88                    3.63

In Table 2, CegaHE shows the highest entropy for Figure 9, equal to the values of DQHEPL and VCEA, indicating that CegaHE has the capability of extracting significant information from the original image. Besides, CegaHE has the second highest entropy values for Figures 7, 8 and 10, and all these values are close to the highest ones. This denotes that the capability of CegaHE to extract information is almost the same as that of DQHEPL and VCEA. Although the entropy values of CegaHE are not always the highest, subjective assessment shows that CegaHE has a better enhancement effect than the other methods in Figures 7–10.

The purpose of image enhancement is to improve the quality of an image and to make the enhanced image suitable for human visual perception. Therefore, both subjective and objective assessments are often used by most researchers to compare their results with those of other methods. Few studies compare the execution times of different image enhancement methods, because most studies focus on image quality instead of execution time. However, continuous images such as video frames have to be processed in real time, so the execution time needs to be taken into consideration. According to the standard set by NTSC, the frame rate is 30 frames per second; therefore, fast execution is required to process all the video frames. In our environment, the operating system is 64-bit Windows 7 and the processor is an Intel Core i7-4790 with 12 GB RAM (Genuine, Taipei, Taiwan). With the program of the proposed algorithm not yet optimized, the execution time for an image of size 600 × 400 is around 70 ms. After optimizing the algorithm, the execution time can be shortened.
Although the current execution speed of the proposed algorithm cannot meet the demand of the NTSC standard, the proposed algorithm provides better image quality for a single image. In the future, the proposed algorithm can be optimized and applied to video images to obtain better image quality in real time.

To sum up, through the subjective assessment described in Section 3.1, the images processed by CegaHE have better image quality and enhancement effect than those obtained by the other methods. Furthermore, in the objective assessment, the discrete entropy is used to demonstrate the capability of extracting details from images. From Table 2, the fact that the entropy values of CegaHE are close to the highest ones shows that CegaHE has a good capability of extracting details from images. All in all, both the subjective and objective analyses show that CegaHE has a better enhancement effect and is superior to the other methods.

4. Conclusions

In this paper, CegaHE, a new image enhancement method, is presented. CegaHE solves the problems of HE, including the over-enhancement and feature loss problems. Further, to compensate for the disadvantage of inferior sensitivity of CCD/CMOS sensors, CegaHE enhances the textures in the dark region of an image to make them clearer. Different from VCEA, CegaHE is more consistent with human visual perception. The images processed by CegaHE are suitable for human visual perception and have better visual quality. In addition, some free pixels remain after applying CegaHE and these can be used


for further contrast enhancement. The experimental results indicate that CegaHE produces better results in enhancing the contrast of images and outperforms VCEA and other HE-based methods.

Acknowledgments: This research was supported by the Ministry of Science and Technology of Taiwan, Project No. 104-2221-E-606-012.

Author Contributions: Chung-Cheng Chiu came up with the algorithm and improved the algorithm and the paper. Chih-Chung Ting devised the CegaHE algorithm, performed the HE-based methods, and wrote the paper.

Conflicts of Interest: The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this paper:

CCD/CMOS    Charge-Coupled Device/Complementary Metal-Oxide Semiconductor
CegaHE      Contrast Enhancement Algorithm Based on Gap Adjustment for Histogram Equalization
DRE         Dark Region Enhancement
GA          Gap Adjustment
GVR         Gray Value Recovery
VCEA        Visual Contrast Enhancement Algorithm Based on Histogram Equalization

Appendix A

In general, human visual perception is more sensitive to luminance change in the middle of the luminance range; conversely, it is less sensitive to luminance change in the dark or bright regions of the luminance range. Based on this property of human visual perception, the GA proposes the gap limiter equation as the adjustment basis. The gap limiter equation is shown in Equation (1), where G is the gray value ranging from 0 to 255 and L(G) denotes the limiting gap of each gray value. Round denotes rounding a × [(G/127) − 1]^2 to the nearest integer. When the gaps between two neighboring gray values are greater than the corresponding values of the gap limiter, L(G), the gaps must be shortened based on L(G) for the processed image to meet the minimum requirement of human visual perception. In this case, the minimum requirement of human visual perception indicates that, when observing the image processed by the GA carefully, human visual perception can discern the differences between features of the image without difficulty. The GA reduces the over-enhancement problem caused by HE by restricting the gaps between two neighboring gray values. Simultaneously, the processed images also meet the minimum requirement of human visual perception.

In order to obtain reasonable adjustment gaps, the GA processes different types of images, such as underexposed, overexposed, low contrast, and high contrast images, to obtain suitable values of parameters a and b. Figures A1–A3 show the images obtained after processing by the GA with values of a and b ranging from 1 to 5. The resultant images vary for different values of a and b. For a fixed value of a, as b increases, the GA images are brighter and more features can be seen. For a fixed value of b, as a increases, the GA images become brighter and more features can be seen. In both cases, choosing suitable values of a and b to obtain better GA images and more free pixels is a tradeoff challenge.
The GA should yield images with as many features as possible and make free pixels available. In Figures A1–A3, we can observe that as the values of a and b increase, detailed textures and features of the images can be seen more clearly. However, the number of free pixels decreases. After the GA, the resulting image need not be seen very clearly; it is more important to obtain more free pixels while the detailed textures and features of the image still meet the minimum requirement of human visual perception. Through empirical analyses, we deduce that when the values of both a and b are equal to 3, the detailed textures and features of the GA images can be seen, and more free pixels can be obtained for the succeeding processes to further enhance images.
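Equation (1) itself is not reproduced in this excerpt. Assuming the limiter takes the quadratic form suggested by the text, with b entering as an additive offset (our assumption), it can be sketched as:

```python
def gap_limiter(G, a=3, b=3):
    """Gap limiter L(G): smallest allowed gaps near the middle of the gray
    range, where human vision discriminates best, and larger gaps toward the
    extremes. The quadratic term follows the text; adding b as an offset is
    our assumption, since Equation (1) is not reproduced in this excerpt.
    a = b = 3 follows the empirical choice reported above."""
    return round(a * ((G / 127.0) - 1.0) ** 2 + b)
```

Under this assumed form, the limiter reaches its minimum of b at G = 127 and grows toward both ends of the gray range.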

Sensors 2016, 16, 936


Figure A1. Image, Landscape [27], processed by GA with different a and b.

Figure A2. Image, Girl [29], processed by GA with different a and b.



Figure A3. Image, Couple [30], processed by GA with different a and b.

References

1. Gonzalez, R.; Woods, R. Digital Image Processing, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2007; pp. 144–166.
2. Kim, Y. Contrast enhancement using brightness preserving bi-histogram equalization. IEEE Trans. Consum. Electron. 1997, 43, 1–8.
3. Abdullah-Al-Wadud, M.; Kabir, M.; Dewan, M.; Chae, O. A dynamic histogram equalization for image contrast enhancement. IEEE Trans. Consum. Electron. 2007, 53, 593–600.
4. Park, G.; Cho, H.; Choi, M. A contrast enhancement method using dynamic range separate histogram equalization. IEEE Trans. Consum. Electron. 2008, 54, 1981–1987.
5. Lin, P.; Lin, C.; Yen, H. Tri-histogram equalization based on first order statistics. In Proceedings of the 13th IEEE International Symposium on Consumer Electronics, Kyoto, Japan, 25–28 May 2009; pp. 387–391.
6. Ooi, C.; Kong, N.; Ibrahim, H. Bi-histogram equalization with a plateau limit for digital image enhancement. IEEE Trans. Consum. Electron. 2009, 55, 2072–2080.
7. Wu, P.; Cheng, F.; Chen, Y. A weighting mean-separated sub-histogram equalization for contrast enhancement. In Proceedings of the 2010 International Conference on Biomedical Engineering and Computer Sciences, Wuhan, China, 23–25 April 2010; pp. 1–4.
8. Wang, Y.; Chen, Q.; Zhang, B. Image enhancement based on equal area dualistic sub-image histogram equalization method. IEEE Trans. Consum. Electron. 1999, 45, 68–75.
9. Chen, S.; Ramli, A. Contrast enhancement using recursive mean-separate histogram equalization for scalable brightness preservation. IEEE Trans. Consum. Electron. 2003, 49, 1301–1309.
10. Chen, S.; Ramli, A. Minimum mean brightness error bi-histogram equalization in contrast enhancement. IEEE Trans. Consum. Electron. 2003, 49, 1310–1319.




11. Wang, C.; Ye, Z. Brightness preserving histogram equalization with maximum entropy: A variational perspective. IEEE Trans. Consum. Electron. 2005, 51, 1326–1334.
12. Sim, K.; Tso, C.; Tan, Y. Recursive sub-image histogram equalization applied to gray scale images. Pattern Recogn. Lett. 2007, 28, 1209–1221.
13. Ibrahim, H.; Kong, N. Brightness preserving dynamic histogram equalization for image contrast enhancement. IEEE Trans. Consum. Electron. 2007, 53, 1752–1758.
14. Wang, C.; Peng, J.; Ye, Z. Flattest histogram specification with accurate brightness preservation. IET Image Process. 2008, 2, 249–262.
15. Arici, T.; Dikbas, S.; Altunbasak, Y. A histogram modification framework and its application for image contrast enhancement. IEEE Trans. Image Process. 2009, 9, 1921–1935.
16. Ooi, C.; Isa, N. Adaptive contrast enhancement methods with brightness preserving. IEEE Trans. Consum. Electron. 2010, 56, 2543–2551.
17. Thomas, G.; Flores-Tapia, D.; Pistorius, S. Histogram specification: A fast and flexible method to process digital images. IEEE Trans. Instrum. Meas. 2011, 60, 1565–1578.
18. Celik, T.; Tjahjadi, T. Contextual and variational contrast enhancement. IEEE Trans. Image Process. 2011, 20, 3431–3441.
19. Lee, C.; Lee, C.; Kim, C. Contrast enhancement based on layered difference representation of 2D histograms. IEEE Trans. Image Process. 2013, 22, 5372–5384.
20. Huang, S.; Cheng, F.; Chiu, Y. Efficient contrast enhancement using adaptive gamma correction with weighting distribution. IEEE Trans. Image Process. 2013, 22, 1032–1041.
21. Ting, C.; Wu, B.; Chung, M.; Chiu, C.; Wu, Y. Visual contrast enhancement algorithm based on histogram equalization. Sensors 2015, 15, 16981–16999.
22. Chou, C.; Li, Y. A perceptually tuned subband image coder based on the measure of just-noticeable-distortion profile. IEEE Trans. Circuits Syst. Video Technol. 1995, 5, 467–476.
23. Yang, Y.; Sun, X.; Yang, H.; Li, C.; Xiao, R. A contrast-sensitive reversible visible image watermarking technique. IEEE Trans. Circuits Syst. Video Technol. 2009, 19, 656–667.
24. Lee, C.; Lin, P.; Chen, L.; Wang, W. Image enhancement approach using the just-noticeable-difference model of the human visual system. J. Electron. Imaging 2012, 21, 033007.
25. Ooi, C.; Isa, N. Quadrants dynamic histogram equalization for contrast enhancement. IEEE Trans. Consum. Electron. 2010, 56, 2552–2559.
26. Raw vs. Jpg Format Editing Experiment. Available online: http://mynameisanna.com/camera-raw-formatversus-jpeg/ (accessed on 4 March 2016).
27. Overexposed in Yellowstone + Top 10 Initial Montana Observations. Available online: http://www.mattludin.com/blog/page/2/ (accessed on 4 March 2016).
28. Cannon 5D Mark III Review. Available online: http://cdn0.mos.techradar.futurecdn.net/art/cameras/Canon/5D_MK_III/HDR/Over.JPG (accessed on 4 March 2016).
29. Retinex Image Processing. Available online: http://dragon.larc.nasa.gov/retinex/pao/news/lg-image16.jpg (accessed on 4 March 2016).
30. Volume 3: Miscellaneous. Available online: http://sipi.usc.edu/database/database.php?volume=misc&image=2#top (accessed on 4 March 2016).

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
