Improved measurement of camera response function and its performance evaluation

Shih-Hsin Chang and Hsien-Huang P. Wu*

Graduate School of Engineering Science and Technology, National Yunlin University of Science and Technology, #123 Univ. Rd. Sect. 3, Yunlin, Douliou 640, Taiwan

*Corresponding author: [email protected]

Received 23 September 2013; revised 1 December 2013; accepted 1 December 2013; posted 3 December 2013 (Doc. ID 198091); published 23 December 2013

Estimation of the camera response function (CRF) has become important in the fields of computer graphics and radiance measurement to achieve accurate modeling and high-dynamic-range imaging. In this paper, a method is proposed that uses a polariscope to provide accurate radiance for direct measurement of the CRF. The experimental results indicate that the accuracy of the estimated CRF obtained by the new approach is about 5% better than that of the previous method. © 2013 Optical Society of America

OCIS codes: (100.2000) Digital image processing; (120.4820) Optical systems.
http://dx.doi.org/10.1364/AO.53.000082

1. Introduction

The increases in the resolution of digital cameras and in the performance of computers have made digital cameras increasingly popular in optical measurement and high-definition computer graphics. Previous research on image analysis has made progress in computing geometric parameters, and many studies have now also tried to acquire radiance-related information for the objects of a scene: for example, recovering reflectance models of the scene [1], measuring illuminance and the bidirectional reflectance distribution function (BRDF) [2–4], photoelastic analysis [5,6], measurement of automotive headlamps [7], and recovering high-dynamic-range radiance maps [8,9]. Most of these applications require the relation between a pixel value and the corresponding irradiance that fell on the detector, that is, the camera response function (CRF). Therefore, estimation of the CRF has become an important topic in the field of image processing.

1559-128X/14/010082-08$15.00/0 © 2014 Optical Society of America 82

APPLIED OPTICS / Vol. 53, No. 1 / 1 January 2014

While the pixel value in an image can be used to represent the brightness of the corresponding scene, it is not directly proportional to the irradiance that fell on the detector. That is, for most digital cameras, the irradiance and the pixel value it generates are nonlinearly related. This relation is called the CRF, or camera transfer function. The methods developed to estimate (or reconstruct) the CRF can be roughly divided into three categories. The first category assumes that the CRF is a polynomial function and uses an optimization criterion to find the best coefficients [8,10–12]. The second category utilizes the comparametric relation of two images with a known exposure ratio to derive the equation for the CRF based on a given functional form [13–15]. The third approach uses the relation of pixel values among three images of a given scene taken at different exposures and applies the singular value decomposition technique to obtain the relation between pixel value and exposure directly, without assuming its functional form. The CRFs obtained by these available methods are all very similar [11,16], and we believe that these functions are also very close to the real curve of the camera response. However, these methods assume that the exposure is the only factor that affects the pixel value and neglect other factors such as dark current, shot noise, and read noise [17]. This assumption may affect the precision of the calculated CRF. Recently, some researchers took a different approach and utilized calibration charts of various brightness for image acquisition at several exposure times. These images were then employed to find the relation between the pixel value and the exposure point by point experimentally [18,19]. Nevertheless, it is not easy to find a calibration chart with known and adjustable brightness, and the dynamic range of the chart's brightness usually cannot cover the range of pixel values that the camera can provide. Furthermore, there is no theoretical foundation for calculating or measuring the brightness generated from the calibration chart. The purpose of this paper is to present a new method for measuring an accurate CRF for a digital camera. The new approach uses the same idea as [18], which directly finds the relation between the pixel value and the exposure point by point experimentally. However, a new approach based on an optical instrument that generates a known, adjustable brightness pattern with a wide dynamic range is proposed to improve the measurement results. The remainder of this paper is organized as follows. Section 2 presents the theory of polarization and the equations for the computation of the exposure. The proposed method is tested and verified by experiments in Section 3. In Section 4, experimental results that confirm the correctness and effectiveness of the proposed method are discussed. Section 5 presents concluding remarks.

2. Theory of Polarization and Computation of the Exposure

The purpose of this section is to develop the theory that relates the light intensity out of the polariscope to the exposure on the detector of the camera. This theory will form the foundation for the modeling of the CRF. Furthermore, the method for evaluating the quality of the measured CRF will also be introduced.

A. Light Intensity out of the Polariscope

A plane polariscope is composed of two components: one is named the polarizer and the other the analyzer, as shown in Fig. 1. The polarizer and the analyzer actually have exactly the same physical properties; the difference in their names only distinguishes the purpose that each of them serves. The property of the polarizer (analyzer) is mainly described by the principal transmittances T_t and T_e and the extinction ratio ρ_p. The principal transmittance T_t represents the maximum transmittance when plane-polarized light passes through the polarizer. It occurs when the vibration plane of the electric vector of the plane-polarized incident beam is parallel to the transmission axis of the polarizer. The principal transmittance T_e

Fig. 1. Plane polariscope which is composed of one polarizer and one analyzer.

represents the minimum transmittance when plane-polarized light passes through the polarizer. It occurs when the vibration plane of the electric vector of the plane-polarized incident beam is perpendicular to the transmission axis, that is, parallel to the extinction axis of the polarizer. The extinction ratio ρ_p is defined as

ρ_p = T_e/T_t = I_min/I_max,   (1)

where I_min and I_max are the minimum and maximum light intensities, respectively, when the light passes through the polarizer [20]. Given a polarizer, the smaller the extinction ratio, the higher the polarization effect, which leads to a better polarized light beam. For example, the parameters of the polarizer used in our later experiments are T_t = 0.637, T_e = 3.91 × 10⁻⁵, and ρ_p = 6.1 × 10⁻⁵. The small value of the extinction ratio ρ_p reveals that the polarizer we used is nearly perfect, and the light passing through it can be treated as a plane-polarized beam. When this plane-polarized beam passes through the analyzer, the transmittance T(θ) becomes a function of the angle θ and lies between the principal transmittances T_t and T_e. That is [20],

T(θ) = (T_t − T_e) cos² θ + T_e,   (2)

where θ is the angle between the transmission axes of the polarizer and the analyzer; we will call it the angle of the polariscope in the later discussion. If the light entering the analyzer has an intensity of I_0, it becomes I(θ) after passing through the analyzer, where

I(θ) = I_0 [(T_t − T_e) cos² θ + T_e].   (3)

The angle θ can be varied to precisely control the light intensity on the output surface of the analyzer based on Eq. (3). These controlled values of the light intensity can then be used for the measurement of the CRF.
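As a quick numerical check, Eqs. (1)–(3) can be evaluated with the polarizer parameters quoted in the text (T_t = 0.637, T_e = 3.91 × 10⁻⁵). This is an illustrative sketch only, not part of the authors' procedure:

```python
import math

# Principal transmittances of the polarizer used in the experiments (Sec. 2.A).
T_T = 0.637      # maximum transmittance T_t
T_E = 3.91e-5    # minimum transmittance T_e

# Extinction ratio of Eq. (1): rho_p = T_e / T_t.
rho_p = T_E / T_T
print(rho_p)  # ~6.1e-5, matching the value quoted in the text

def analyzer_intensity(i0, theta_deg):
    """Eq. (3): intensity after the analyzer,
    I(theta) = I0 * [(T_t - T_e) * cos^2(theta) + T_e]."""
    theta = math.radians(theta_deg)
    return i0 * ((T_T - T_E) * math.cos(theta) ** 2 + T_E)

# Parallel axes pass the full T_t fraction; crossed axes pass only the
# small leakage T_e.
print(analyzer_intensity(1.0, 0.0))    # ≈ 0.637
print(analyzer_intensity(1.0, 90.0))   # ≈ 3.91e-5
```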

B. Computation of the Exposure

Given an image formed by a thin lens, the relation between the irradiance E falling on the detector and the radiance L of the scene can be represented by [21]

E = L (π/4) (d/f)² cos⁴ α,   (4)

where α is the pixel's angle from the optical axis of the lens. Most modern camera lenses are designed to compensate for the effect of this angle [22]; therefore, its effect can be neglected and α can be treated as a constant. The other two factors, f and d, are the focal length and aperture size of the camera, respectively. To avoid the vignetting effect caused by different aperture sizes, a fixed aperture of f/6.3 was used for image acquisition. If the ambient light of the background in the dark room can be neglected, then the radiance L that appears on the surface of the analyzer is generated only by the light passing through the analyzer. Furthermore, this radiance L is proportional to the light intensity I(θ) of Eq. (3), that is, L = kI(θ). When an image is taken of the surface of the analyzer in the dark room with a known exposure time t_s, the exposure X of the detector is

X = E t_s = kI(θ) (π/4) (d/f)² cos⁴ α t_s = sI(θ) t_s,   (5)

where s = k(π/4)(d/f)² cos⁴ α can be treated as a constant. On the basis of Eqs. (3) and (5), the exposure X of the detector is proportional to the light intensity I(θ) on the analyzer surface and can be precisely controlled by rotating the angle of the polariscope. The purpose of studying the CRF φ is to investigate the relationship between the exposure X and the pixel value Z of the camera, that is, Z = φ(X). By controlling the angle of the polariscope θ and the exposure time t_s, we can determine the exposure of each pixel in the acquired image based on Eq. (5). Given each exposure and its corresponding pixel value, these data pairs can then be used to construct a precise CRF.

C. Method of Verification for the Camera Response Function

The homogeneity test is a popular method for verifying the accuracy of an estimated CRF [16]. It is conducted by acquiring two images of the same static scene with two different exposure times t_s1 and t_s2 (t_s1 < t_s2), where each image has its corresponding exposure (X_1 or X_2) and pixel value (Z_1 or Z_2). Because the same scene has the same irradiance E, the exposure X generated depends only on the exposure time t_s. That is, the exposure is directly proportional to the exposure time, or

X_2/X_1 = (E t_s2)/(E t_s1) = t_s2/t_s1 = r.   (6)
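The exposure model of Eq. (5) and the ratio property of Eq. (6) can be sketched as follows; the values chosen for s and I(θ) are arbitrary illustrative numbers, not measured ones:

```python
import math

def exposure(s, i_theta, t_s):
    """Eq. (5): X = s * I(theta) * t_s, where s lumps together k and the
    lens factors (pi/4) * (d/f)**2 * cos(alpha)**4."""
    return s * i_theta * t_s

# Eq. (6): for the same scene, the exposure ratio depends only on the
# exposure times, X2/X1 = ts2/ts1 = r.
s, i_theta = 2.5, 0.3          # hypothetical constants for illustration
x1 = exposure(s, i_theta, 1 / 4)
x2 = exposure(s, i_theta, 1 / 2)
assert math.isclose(x2 / x1, 2.0)  # r = (1/2) / (1/4) = 2
```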

This equation shows that for each pixel pair in these two images, the ratio of their exposures is determined by the exposure times and not by the scene radiance. Since the exposure times, and hence r, are known, this property can be utilized in the verification process as follows. Given an estimated CRF φ and the values of a pixel pair, Z_1 and Z_2, from the two acquired images, the exposures can be computed by the inverse function of φ, that is, X'_1 = φ⁻¹(Z_1) and X'_2 = φ⁻¹(Z_2). If the estimated CRF is correct, then Eq. (6) reveals that rX'_1 and X'_2 will be equal, or rX'_1/X'_2 = 1. In practice, this ideal condition can never be attained for a real camera because of its intrinsic noise. However, the deviation of the estimated CRF from this ideal condition can be used to evaluate its correctness by

e = rX'_1/X'_2 − 1 = r φ⁻¹(Z_1)/φ⁻¹(Z_2) − 1,   (7)

or by the root-mean-square (rms) error over all N pixels in the image pair, that is,

δ = √{(1/N) Σ_{i=1}^{N} [r φ⁻¹(Z_1i)/φ⁻¹(Z_2i) − 1]²} × 100%.   (8)
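The rms criterion of Eq. (8) is straightforward to implement. The sketch below assumes the inverse CRF is available as a callable and excludes clipped pixel values of 0 and 255, as the text explains:

```python
import numpy as np

def homogeneity_rms(inv_crf, z1, z2, r):
    """Eq. (8): rms homogeneity error (in percent) over pixel pairs
    (z1, z2) from two images whose exposure-time ratio is r = ts2/ts1.
    `inv_crf` is the inverse response phi^-1 (pixel value -> exposure)."""
    z1 = np.asarray(z1, dtype=float)
    z2 = np.asarray(z2, dtype=float)
    keep = (z1 > 0) & (z1 < 255) & (z2 > 0) & (z2 < 255)  # drop clipped pixels
    ratio = r * inv_crf(z1[keep]) / inv_crf(z2[keep])
    return float(np.sqrt(np.mean((ratio - 1.0) ** 2)) * 100.0)

# A camera with a perfectly known (here, linear) response gives zero error.
inv_linear = lambda z: z / 255.0
z1 = np.array([10, 60, 120])   # image taken at ts1
z2 = 2 * z1                    # same scene at ts2 = 2 * ts1
print(homogeneity_rms(inv_linear, z1, z2, r=2.0))  # 0.0
```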

Note that either the whole range (0 < Z_1i, Z_2i < 255) or values in a certain range (Z_S ≤ Z_1i, Z_2i ≤ Z_T) can be selected for this computation. Pixels with a value of 0 or 255 were excluded, because a pixel with value 0 was exposed to radiance that fell below the sensitivity of the detector, and a pixel with value 255 was exposed to radiance that saturated the detector. On the basis of Eq. (8), the lower the value of δ, the more accurate the recovered exposure X' = φ⁻¹(Z), and the better the estimated CRF.

3. Experiments and Results

A white light source and a Nikon D40 digital camera were used throughout the experiments, and a Konica Minolta T-10A photometer was used to measure the light intensity on the output surface of the analyzer. The process and the results obtained are described below.

A. Detection and Verification of the Light Intensity Controlled by the Polariscope

To investigate the properties of the polariscope used in our experiments, data were collected while the angle of the polariscope θ was gradually increased from 0° to 360° in steps of 2°. The transmission axis (P_1) of the polarizer was fixed along the x axis as shown in Fig. 1, and the angle of the polariscope θ is the rotation angle of the transmission axis (P_2) of the analyzer with respect to that of the polarizer.
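This 0°–360° sweep can be simulated directly from Eq. (2); the small sketch below, using the polarizer parameters quoted in Section 2, reproduces the periodic behavior seen in Fig. 3:

```python
import math

T_T, T_E = 0.637, 3.91e-5   # polarizer parameters from Sec. 2.A

def transmittance(theta_deg):
    """Eq. (2): T(theta) = (T_t - T_e) * cos^2(theta) + T_e."""
    return (T_T - T_E) * math.cos(math.radians(theta_deg)) ** 2 + T_E

# Sample the same angles as the experiment: 0 to 360 deg in 2-deg steps.
values = {a: transmittance(a) for a in range(0, 361, 2)}

# The curve peaks at 0 and 180 deg and falls to the leakage level T_e
# near 90 and 270 deg, i.e., it is periodic with a 180-deg period.
assert max(values, key=values.get) in (0, 180, 360)
assert math.isclose(values[90], T_E, rel_tol=1e-3)
```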

Four images acquired with different rotation angles are illustrated in Fig. 2 to show the brightness of the analyzer. For each angle θ, the light coming out of the surface of the analyzer was recorded by the photometer, and the results are shown in Fig. 3. There are two curves in the figure: one is formed by the experimental data collected by the photometer, and the other is a theoretical curve generated by Eq. (3) using T_t = 0.637 and T_e = 3.91 × 10⁻⁵. Both of them vary periodically with respect to the angle of the polariscope θ, and the measured data follow the theoretical values closely (their differences will be explored in a later section). This experiment verified that the plane polariscope is close to an ideal one, and it also proved that by varying the angle of the polariscope θ, we can precisely control the light intensity on the output surface of the analyzer as described by Eq. (3).

Fig. 2. Image of the analyzer output taken at the rotation angle of 20°, 40°, 60°, and 80°, respectively.

Fig. 3. Irradiance measured by the photometer (dot) and predicted by the formula (curve), respectively, for each angle.

B. Proposed Method for the Camera Response Function

By gradually increasing the angle of the polariscope θ from 0° to 90° in steps of 2°, we obtained 46 different values of radiance from the output surface of the analyzer. We then imaged the surface of the analyzer with eight different exposure times for each angle θ to obtain 368 images in total. On the basis of Eq. (5), the values of the exposure generated on the detector of the camera for these images were

X_ij = sI(θ_i) t_sj ∝ [(T_t − T_e) cos² θ_i + T_e] t_sj,   (9)

where θ_i = (i − 1) × 90°/45, i = 1–46, and t_sj (j = 1–8) are 1/2000, 1/1000, 1/500, 1/250, 1/125, 1/60, 1/30, and 1/15 s, respectively. If we take the pixel value as the y coordinate and its corresponding normalized exposure value calculated by Eq. (9) as the x coordinate, we obtain the data distribution shown in Fig. 4. In the experiment, there were 368 measurements, and the distribution of these data formed the basic curve of the camera response. By applying least-squares curve fitting to these data, we can obtain a function that passes through these points in an optimal way. The formula used to model the camera response is Z = f(X) = Σ_{k=0}^{n} a_k X^k, and the objective of the curve-fitting process is to minimize

ε = Σ_{i=1}^{46} Σ_{j=1}^{8} (Σ_{k=0}^{n} a_k X_ij^k − Z_ij)².   (10)

By applying the above curve-fitting procedure, we found an optimal curve to fit the measurement points and obtained the CRF, which has been plotted over the data in Fig. 4. Apparently, the CRF of the Nikon D40 is nonlinear: it is more sensitive in the darker region, where the pixel values fall within [0, 100], and it saturates as the pixel value approaches 250.

Fig. 4. Pixel value acquired and its corresponding normalized exposure value computed by using Eq. (9). These points were then fitted to form the CRF curve.

C. CRF Estimation Based on the Debevec and Malik Method

As described above, many approaches are available for the estimation of the CRF. Among these, the one developed by Debevec and Malik (DM method) [8] is the most cited and discussed. To evaluate the result of the newly proposed method, another response function for the same Nikon D40 camera was estimated based on the DM method for comparison. The DM method requires several images captured with different exposure times; Fig. 5 illustrates an example of seven images acquired with 1/60, 1/30, 1/15, 1/8, 1/4, 1/2, 1, and 2 s, respectively, to serve this purpose. A characteristic of these images is that the range of the pixel values covers 0 to 255. Using the DM method, we reconstructed the CRF for the Nikon D40 camera by setting the smoothing parameter λ = 100, and the result is shown in Fig. 6. The curve obtained by the proposed method illustrated in Fig. 4 is also included in the same figure for easier comparison. We find that the curve fitting is much more difficult in the DM method because of the wide distribution of the measurement data.

Fig. 5. Seven images captured with different exposure times were used for the CRF estimation based on the DM method.
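The polynomial fit of Eq. (10) in Section 3.B can be sketched with synthetic data, since the paper does not publish its raw measurements or the polynomial order n; the square-root response, the noise level, and n = 5 below are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 368 (normalized exposure, pixel value) pairs;
# the gamma-like response and noise level are assumed, not from the paper.
x = rng.uniform(0.0, 1.0, 368)
z = 255.0 * np.sqrt(x) + rng.normal(0.0, 1.0, 368)

# Least-squares fit of Z = sum_k a_k X^k, minimizing Eq. (10).
n = 5  # assumed polynomial order
coeffs = np.polynomial.polynomial.polyfit(x, z, n)

def fitted_crf(x_val):
    """Fitted camera response Z = f(X)."""
    return np.polynomial.polynomial.polyval(x_val, coeffs)

rms_residual = float(np.sqrt(np.mean((fitted_crf(x) - z) ** 2)))
print(f"rms fit residual: {rms_residual:.2f} gray levels")
```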

D. Verification of the Camera Response Functions

To numerically compare the accuracy of these two methods of CRF estimation, the homogeneity test was applied for verification.

Fig. 6. Comparison of the estimated CRFs based on the proposed method and the DM method.

Fig. 7. Patterns of the headlamp light were used in the performance evaluation for the estimated CRF. They were acquired under different exposures: (a) 1/4 s and (b) 1/2 s.

The light pattern of the headlamp was chosen as the scene for the measurement because it contains both bright and dark regions, providing a very wide dynamic range of brightness and covering all possible pixel values. Two images of the same scene, acquired with 1/4 and 1/2 s exposure times, were used for the experiment; they are shown in Figs. 7(a) and 7(b), respectively. Using the two CRFs obtained in the experiments, we can convert the values of all the image pixels in Figs. 7(a) and 7(b) to their corresponding relative exposure values through the inverse function of the CRF, φ⁻¹. To compute the rms error defined in Eq. (8) for the homogeneity test, the image pixels were first grouped by the range of their pixel values: the whole range of the distribution (0–255) was divided into 32 regions, each containing eight levels of pixel value (Z_i ≤ Z ≤ Z_i + 7). The rms errors for the homogeneity test were then computed in each of the 32 regions, and the results (one data point per region) are shown in Fig. 8.

4. Discussion

Several issues related to the range of pixel values that can be used for the measurement and to the accuracy of the radiance controlled by the polariscope are worth further discussion.

Fig. 8. Rms errors defined for the homogeneity test were computed in each given region for both methods.

A. Usable Range of Pixel Values

We can observe that for pixel values of intermediate brightness (43 ≤ Z ≤ 203), the rms errors are relatively low for both methods: the error ranges from 5.9% to 12.5% for the DM method and from 3.2% to 6.3% for the polariscope-based method. While both methods are quite accurate, the method based on the point-by-point measurement using the polariscope clearly has lower error values than the DM method. When the pixel values are lower than 43 or higher than 203, the corresponding rms errors are noticeably higher. We believe that when the pixel value is lower than 43, the detector responds mostly to the ambient light and the noise generated inside the camera rather than to the irradiance of the scene. On the other hand, when the pixel values are higher than 203, the detector becomes saturated. These factors reduce the sensitivity of the camera and contribute to the higher rms error. Given the estimated CRF based on the proposed method, we can see from the curve (Fig. 4) that a pixel value higher than 203 covers a wider range of brightness than a pixel of intermediate value does. For example, pixel values in the range [43, 203] correspond to relative exposure values in the range [0.019, 0.34], giving an average relative exposure range (photographic quantity, or photoquantity, per pixel [16]) of 0.0019. In contrast, pixel values in the range [203, 255] correspond to relative exposure values in the range [0.34, 1.0], which leads to a photographic quantity value of 0.013. Therefore, the photographic quantity in the bright region is about 7 times that of the intermediate-brightness region. Interestingly, the rms error for pixel values higher than 210 (bright region) is also about 7 times that of the intermediate-brightness region. For example, the maximum error occurred at pixel value 227 (23.5%), which is 7.4 times the minimum error, which occurred at pixel value 67 (3.2%). Given these characteristics of the rms error of the homogeneity test, if the digital camera is to be used for high-dynamic-range imaging applications, we suggest that the most suitable range of pixel values in each acquired image is [50, 200].

B. Effect of the Extinction Ratio on the Radiation Accuracy

How will the extinction ratio affect the radiation accuracy? Given Eq. (2), where

T(θ) = (T_t − T_e) cos² θ + T_e,

and the extinction ratio ρ_p = T_e/T_t defined in Eq. (1), we have

T(θ) = T_t [(1 − ρ_p) cos² θ + ρ_p].

An ideal system has ρ_p = 0, that is,

T_ideal(θ) = T_t cos² θ.

We define the effect of the extinction ratio on the radiation accuracy as ER_ρ by

ER_ρ = [T(θ) − T_ideal(θ)] / T_ideal(θ)
     = [(1 − ρ_p) cos² θ + ρ_p − cos² θ] / cos² θ
     = −ρ_p + ρ_p / cos² θ.   (11)

Since ρ_p is usually very small, the first term can be neglected. However, ρ_p plays a significant role when θ is close to 90°. We calculated the values of ER_ρ for three different extinction ratios, and the results are shown in Fig. 9. Obviously, the extinction ratio has a profound effect on the radiation accuracy when the angle of the polariscope is close to 90°. This is because the light intensity is very low under this condition and the measurement becomes very sensitive to the ambient light.

Fig. 9. Effect of the extinction ratio (0.001, 0.0001, and 0.00006) on radiation accuracy.

C. Effect of Error of the Polariscope Angle on the Radiation Accuracy

How will the mechanical rotation accuracy affect the radiation accuracy? Let us assume that the mechanical rotation has an error of Δθ; then the transmittance becomes

T̃(θ) = (T_t − T_e) cos²(θ − Δθ) + T_e.

We can define the effect of the mechanical rotation error on the radiation accuracy as ER_θ by

ER_θ = [T̃(θ) − T(θ)] / T(θ)
     = [(1 − ρ_p) cos²(θ − Δθ) + ρ_p] / [(1 − ρ_p) cos² θ + ρ_p] − 1
     ≈ [cos²(θ − Δθ) + ρ_p] / [cos² θ + ρ_p] − 1
     = [1 + cos 2(θ − Δθ) + 2ρ_p] / [1 + cos 2θ + 2ρ_p] − 1
     = [cos 2(θ − Δθ) − cos 2θ] / [1 + cos 2θ + 2ρ_p].   (12)

Similarly, we calculated the values of ER_θ for three different rotation errors Δθ, and the results are shown in Fig. 10 (where ρ_p = 0.000061). Obviously, the rotation error also has a profound effect on the radiation accuracy when the angle of the polariscope approaches 90°.


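The error measures of Eqs. (11) and (12) can be evaluated numerically. The sketch below uses the extinction ratio of the experimental setup (ρ_p = 0.000061, here rounded to 6.1 × 10⁻⁵) and shows how both errors grow as θ approaches 90°:

```python
import math

RHO_P = 6.1e-5  # extinction ratio of the setup (Sec. 4 quotes 0.000061)

def er_rho(theta_deg, rho_p=RHO_P):
    """Eq. (11): ER_rho = -rho_p + rho_p / cos^2(theta)."""
    c2 = math.cos(math.radians(theta_deg)) ** 2
    return -rho_p + rho_p / c2

def er_theta(theta_deg, d_theta_deg, rho_p=RHO_P):
    """Eq. (12): ER_theta = [cos 2(theta - dtheta) - cos 2*theta]
    / [1 + cos 2*theta + 2*rho_p]."""
    t = math.radians(theta_deg)
    dt = math.radians(d_theta_deg)
    return ((math.cos(2 * (t - dt)) - math.cos(2 * t))
            / (1.0 + math.cos(2 * t) + 2.0 * rho_p))

# Both errors are negligible at small angles but blow up near 90 deg,
# consistent with Figs. 9 and 10.
assert er_rho(30) < 0.001 and er_rho(89) > 0.1
assert abs(er_theta(30, 0.5)) < 0.02 and abs(er_theta(89, 0.5)) > 0.5
```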

Fig. 10. Effect of the mechanical rotation error (0.5°, 0.3°, and 0.1°) on radiation accuracy.

Fig. 11. Overall radiation accuracy of the experimental setup.

However, when the angle of the polariscope θ is gradually increased from 0° to 30° in steps of 2°, the mechanical rotation accuracy only slightly affects the radiation accuracy. The system setup that we used to measure the CRF has an extinction ratio of 0.000061 and a maximal rotation error of Δθ = 0.5°. On the basis of the above analysis, we anticipate that these two factors combined make the error of radiation a function of the angle of the polariscope. The errors of radiation were estimated by comparing the radiance predicted by Eq. (3) with the radiance measured by the photometer, and the experimental results are illustrated in Fig. 11. We found that if the angle of the polariscope θ is smaller than 90°, the error of radiation is limited to 2%. The images taken at angles close to 90°, which contribute the pixels with values below 50, are actually not used in the measurement.

5. Conclusion

This paper has presented a new method for generating adjustable brightness based on a polariscope to provide accurate radiance for CRF estimation. The results indicate that, based on the homogeneity test, the proposed method achieves higher accuracy than the previous method. On the basis of these results, we suggest that the estimated CRF may be accurate enough for the digital camera to be used as an imaging photometer to measure the radiance radiated from a selected object. The technique presented in this paper will be useful for applications that use a digital camera for the acquisition of radiance maps.

The authors gratefully acknowledge the helpful suggestions made by the anonymous reviewers for improving the paper from its earlier draft. They also wish to express their appreciation for the support of the National Science Council of Taiwan.

References

1. Y. Yu, P. E. Debevec, J. Malik, and T. Hawkins, "Inverse global illumination: recovering reflectance models of real scenes from photographs," in Proceedings of the 26th Annual ACM SIGGRAPH Conference on Computer Graphics and Interactive Techniques (1999), pp. 215–224.
2. P. E. Debevec, "Rendering synthetic objects into real scenes," in Proceedings of the 25th Annual ACM SIGGRAPH Conference on Computer Graphics and Interactive Techniques (1998), pp. 189–198.

3. S. Marschner, S. Westin, E. Lafortune, and K. Torrance, "Image-based BRDF measurement," Appl. Opt. 39, 2592–2600 (2000).
4. Q. T. Luong, P. Fua, and Y. Leclerc, "The radiometry of multiple images," IEEE Trans. Pattern Anal. Mach. Intell. 24, 19–33 (2002).
5. A. D. Nurse, "Full-field automated photoelasticity by use of a three wavelength approach to phase stepping," Appl. Opt. 36, 5781–5786 (1997).
6. L. Zhenkun, Y. Dazhen, and Y. Wanming, "Whole-field determination of isoclinic parameter by five-step color phase shifting and its error analysis," Opt. Lasers Eng. 40, 189–200 (2003).
7. H. P. Wu, Y. P. Lee, and S. H. Chang, "Fast measurement of automotive headlamp based on high dynamic range imaging," Appl. Opt. 51, 6870–6880 (2012).
8. P. E. Debevec and J. Malik, "Recovering high dynamic range radiance maps from photographs," in Proceedings of the 24th Annual ACM SIGGRAPH Conference on Computer Graphics and Interactive Techniques (1997), pp. 369–378.
9. A. R. Varkonyi-Koczy and A. Rovid, "High-dynamic-range image reproduction methods," IEEE Trans. Instrum. Meas. 56, 1465–1472 (2007).
10. T. Mitsunaga and S. K. Nayar, "High dynamic range imaging: spatially varying pixel exposures," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2000), pp. 472–479.
11. M. D. Grossberg and S. K. Nayar, "Modeling the space of camera response functions," IEEE Trans. Pattern Anal. Mach. Intell. 26, 1272–1282 (2004).
12. Y. Tsin, V. Ramesh, and T. Kanade, "Statistical calibration of CCD imaging process," in Proceedings of the IEEE International Conference on Computer Vision (2001), pp. 480–487.
13. S. Mann, "Comparametric equations with practical applications in quantigraphic image processing," IEEE Trans. Image Process. 9, 1389–1406 (2000).
14. S. Mann and R. Mann, "Quantigraphic imaging: estimating the camera response and exposures from differently exposed images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2001), pp. 842–849.
15. F. M. Candocia and D. A. Mandarino, "A semiparametric model for accurate camera response function modeling and exposure estimation from comparametric data," IEEE Trans. Image Process. 14, 1138–1150 (2005).
16. C. Manders, C. Aimone, and S. Mann, "Camera response function recovery from different illuminations of identical subject matter," in IEEE International Conference on Image Processing (2004), pp. 2965–2968.
17. G. Healey and R. Kondepudy, "Radiometric CCD camera calibration and noise estimation," IEEE Trans. Pattern Anal. Mach. Intell. 16, 267–276 (1994).
18. A. Bevilacqua, A. Gherardi, and L. Carozza, "A robust approach to reconstruct experimentally the camera response function," in First Workshops on Image Processing Theory, Tools and Applications (2008), pp. 1–6.
19. B. Wilburn, H. Xu, and Y. Matsushita, "Radiometric calibration using temporal irradiance mixtures," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2008), pp. 1–7.
20. J. M. Bennett, "Polarization," in Handbook of Optics, M. Bass and E. W. Van Stryland, eds. (McGraw-Hill, 1995), Vol. 1, pp. 5.12–5.13.
21. D. A. Forsyth and J. Ponce, Computer Vision: A Modern Approach (Prentice-Hall, 2003), pp. 62–63.
22. C. Kolb, D. Mitchell, and P. Hanrahan, "A realistic camera model for computer graphics," in Proceedings of the 22nd Annual ACM SIGGRAPH Conference on Computer Graphics and Interactive Techniques (1995), pp. 317–324.

