Article

Depth Errors Analysis and Correction for Time-of-Flight (ToF) Cameras

Ying He 1,*, Bin Liang 1,2, Yu Zou 2, Jin He 2 and Jun Yang 3

1 Shenzhen Graduate School, Harbin Institute of Technology, Shenzhen 518055, China; [email protected]
2 Department of Automation, Tsinghua University, Beijing 100084, China; [email protected] (Y.Z.); [email protected] (J.H.)
3 Shenzhen Graduate School, Tsinghua University, Shenzhen 518055, China; [email protected]
* Correspondence: [email protected]; Tel.: +86-755-6279-7036

Academic Editor: Gonzalo Pajares Martinsanz
Received: 2 September 2016; Accepted: 9 December 2016; Published: 5 January 2017

Abstract: Time-of-Flight (ToF) cameras, a technology that has developed rapidly in recent years, are 3D imaging sensors that provide a depth image as well as an amplitude image at a high frame rate. Because a ToF camera is limited by its imaging conditions and the external environment, its captured data are always subject to certain errors. This paper analyzes the influence of typical external distractions, including material, color, distance and lighting, on the depth error of ToF cameras. Our experiments indicate that factors such as lighting, color, material and distance affect the depth error of ToF cameras in different ways; however, since the forms of these errors are uncertain, it is difficult to summarize them in a unified law. To further improve the measurement accuracy, this paper proposes an error correction method based on a Particle Filter-Support Vector Machine (PF-SVM). The experimental results show that this method can effectively reduce the depth error of ToF cameras to 4.6 mm within the full measurement range (0.5–5 m).

Keywords: ToF camera; depth error; error modeling; error correction; particle filter; SVM

1. Introduction

ToF cameras, which have developed rapidly in recent years, are a kind of 3D imaging sensor that provides a depth image as well as an amplitude image at a high frame rate. With the advantages of small size, light weight, compact structure and low power consumption, this equipment has shown great application potential in fields such as ground robot navigation [1], pose estimation [2], 3D object reconstruction [3], identification and tracking of human organs [4], and so on.

However, limited by its imaging conditions and influenced by interference from the external environment, the data acquired by a ToF camera contain certain errors, and there is no unified correction method for the non-systematic errors caused by the external environment. Therefore, different depth errors must be analyzed, modeled and corrected case by case according to their causes.

ToF camera errors can be divided into two categories: systematic errors and non-systematic errors. A systematic error is triggered not only by the intrinsic properties of the camera but also by the imaging conditions of the camera system. The main characteristic of this kind of error is that its form is relatively fixed; such errors can be evaluated in advance, and the correction process is relatively convenient. Systematic errors can generally be reduced by calibration [5] and can be divided into five categories. A non-systematic error is an error caused by the external environment and noise. The characteristic of this kind of error is that its form is not fixed but random, and it is difficult to establish a unified model to describe and correct such errors.

Non-systematic errors are mainly divided into four categories: signal-to-noise ratio, multiple light reception, light scattering and motion blurring [5]. Signal-to-noise ratio errors can be removed by the low-amplitude filtering method [6], or an optimized integration time can be determined with a complex algorithm over the area to be optimized [7]. Other approaches generally reduce the impact of noise by averaging the data and checking whether it exceeds a fixed threshold [8–10]. Multiple light reception errors mainly exist at surface edges and depressions of the target object. The errors at surface edges of the target object can usually be removed by comparing the incidence angles of adjacent pixels [7,11,12], but there is no efficient solution for removing the errors in depressions of the target object. Light scattering errors are related only to the position of a target object in the scene; the closer the object is, the stronger the interference will be [13]. In [14], a filter approach based on amplitude and intensity, built on the choice of an optimum integration time, was proposed. Measurements based on multiple frequencies [15,16] and the ToF encoding method [17] both belong to the modeling category and can mitigate the impact of sparse scattering. A direct light and global separation method [18] can handle mutual scattering and sub-surface scattering among target objects. In [19], the authors proposed detecting transversely moving objects by combining a color camera with a ToF camera. In [20], transverse and axial motion blurring were addressed by an optical flow method and axial motion estimation. In [21], the authors proposed a blur detection method using a charge quantity relation to eliminate motion blurring. In addition, some error correction methods do not distinguish among error types and uniformly correct the depth errors of ToF cameras. In order to correct the depth error of ToF cameras, a fusion method combining a ToF camera and a color camera was also proposed in [22,23]. In [24], a 3D depth frame interpolation and interpolative temporal filtering method was proposed to increase the accuracy of ToF cameras.

Focusing on the non-systematic errors of ToF cameras, this paper starts with an analysis of the impact of varying external distractions, such as material, color, distance and lighting, on the depth errors of ToF cameras. Moreover, based on a particle filter used to select the parameters of an SVM error model, an error modeling method based on PF-SVM is proposed, and the depth error correction of ToF cameras is realized as well.

The remainder of the paper is organized as follows: Section 2 introduces the principle and development of ToF cameras. Section 3 analyzes the influence of lighting, material properties, color and distance on the depth errors of ToF cameras through four groups of experiments. In Section 4, a PF-SVM method is adopted to model and correct the depth errors. In Section 5, we present our conclusions and discuss possible future work.

2. Development and Principle of ToF Cameras

In a broad sense, ToF technology is a general term for determining distance by measuring the flight time of light between a sensor and the target object surface. According to the different methods of measuring the flight time, ToF technology can be classified into pulse/flash, continuous wave, pseudo-random number and compressed sensing approaches [25].
The continuous-wave flight time system is also called a ToF camera. ToF cameras were first developed at the Stanford Research Institute (SRI) in 1977 [26]. Limited by the detector technology of that time, the technique was not widely used. Fast sampling of the received light did not become practical until the lock-in CCD technique was invented in the 1990s [27]. Then, in 1997, Schwarte, who was at the University of Siegen (Germany), put forward a method of measuring the phases and/or magnitudes of electromagnetic waves based on the lock-in CCD technique [28]. With this technique, his team built the first CCD-based ToF camera prototype [29]. Afterwards, ToF cameras began to develop rapidly. A brief development history is shown in Figure 1.

Figure 1. Development history of ToF cameras (from the first single-point ToF camera at Stanford in 1977 and the lock-in CCD in 1995, through the Canesta and PMD prototypes (64 × 64, 64 × 25), the MESA SR3000/SR4000 (176 × 144) and the SoftKinetic DS325 (320 × 240), to the Microsoft Kinect II (512 × 424) in 2013).

In Figure 2, the working principle of ToF cameras is illustrated. The signal is modulated onto the light source (usually an LED) and emitted to the surface of the target object. Then, the phase shift between the emitted and received signals is calculated by measuring the charge accumulated at each pixel of the sensor. Thereby, we can obtain the distance from the ToF camera to the target object.

Figure 2. Principle of ToF cameras.

The received signal is sampled four times at equal intervals in every period (every 1/4 period). From the four samples (\phi_0, \phi_1, \phi_2, \phi_3) of phase \varphi, offset B and amplitude A can be calculated as follows:

\varphi = \arctan\left(\frac{\phi_0 - \phi_2}{\phi_1 - \phi_3}\right),  (1)

B = \frac{\phi_0 + \phi_1 + \phi_2 + \phi_3}{4},  (2)

A = \frac{\sqrt{(\phi_0 - \phi_2)^2 + (\phi_1 - \phi_3)^2}}{2},  (3)

Distance D can then be derived:

D = \frac{1}{2} \cdot \frac{c \Delta\varphi}{2\pi f},  (4)

where D is the distance from the ToF camera to the target object, c is the speed of light, f is the modulation frequency of the signal and \Delta\varphi is the phase difference. More details on the principle of ToF cameras can be found in [5].
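To make Equations (1)–(4) concrete, the following Python sketch computes phase, offset, amplitude and distance from the four samples of a single pixel; it is illustrative only, and the function and variable names are ours rather than part of any camera SDK:

import math

C_LIGHT = 299792458.0  # speed of light in m/s

def demodulate(phi0, phi1, phi2, phi3, mod_freq_hz):
    """Recover phase, offset, amplitude and distance from the four equally
    spaced samples of the correlation signal (Equations (1)-(4))."""
    phase = math.atan2(phi0 - phi2, phi1 - phi3)          # Equation (1)
    if phase < 0.0:                                       # keep the phase in [0, 2*pi)
        phase += 2.0 * math.pi
    offset = (phi0 + phi1 + phi2 + phi3) / 4.0            # Equation (2)
    amplitude = math.hypot(phi0 - phi2, phi1 - phi3) / 2  # Equation (3)
    distance = 0.5 * C_LIGHT * phase / (2.0 * math.pi * mod_freq_hz)  # Equation (4)
    return phase, offset, amplitude, distance

# Example with made-up samples of one pixel and a 30 MHz modulation frequency.
print(demodulate(1.00, 0.75, 0.25, 0.50, 30e6))

For instance, at a modulation frequency of 30 MHz the unambiguous range c/(2f) is about 5 m, which matches the measurement range of cameras such as the SR4000.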

We list the exterior and parameters of several typical commercial ToF cameras on the market in Table 1.

Table 1. Parameters of typical commercial ToF cameras.

ToF Camera          | Resolution of Depth Images | Maximum Frame Rate/fps | Maximum Measurement Range/m | Field of View/° | Accuracy    | Weight/g | Power/W (Typical/Maximum)
MESA-SR4000         | 176 × 144                  | 50                     | 0.1–5                       | 69 × 55         | ±1 cm       | 470      | 9.6/24
Microsoft-Kinect II | 512 × 424                  | 30                     | 0.5–4.5                     | 70 × 60         | ±3 cm @ 2 m | 550      | 16/32
PMD-Camcube 3.0     | 200 × 200                  | 15                     | 0.3–7.5                     | 40 × 40         | ±3 mm @ 4 m | 1438     | -

3. Analysis on Depth Errors of ToF Cameras

The external environment usually has a random and uncertain influence on ToF cameras; therefore, it is difficult to establish a unified model to describe and correct such errors. In this section, we take the MESA SR4000 camera (Zurich, Switzerland), a camera with good performance [30] that has been used in error analysis [31–33] and position estimation [34–36], as an example to analyze the influence of changes in the external environment on the depth error of ToF cameras. The data we obtain from the experiments provide references for the correction of depth errors in the next step.

3.1. Influence of Lighting, Color and Distance on Depth Errors

During the measurement process of ToF cameras, the measured objects tend to have different colors, stand at different distances and may be under different lighting conditions. The following question then arises: will differences in lighting, distance and color affect the measurement results? To answer this question, we conducted the following experiments.

As we know, there are several natural indoor lighting conditions, such as indoor light with sunlight, indoor light with lamp light and no light. This experiment mainly considers the influence of these three lighting conditions on the depth errors of the SR4000. Red, green and blue are the three primary colors from which any color can be composed, white is the color commonly used for measuring error [32,37,38], and reflective paper (tin foil) reflects all light. Therefore, this experiment mainly considers the influence of these five conditions on the depth errors of the SR4000.

As the measurement target, the white wall is covered by red, blue, green, white and reflective papers, respectively, as examples of backgrounds with different colors. Since the wall is not completely flat, a laser scanner is used to build a wall model: we used a 25HSX laser scanner from Surphaser (Redmond, WA, USA) to provide the reference values, because its accuracy is relatively high (0.3 mm). The SR4000 camera is set on the right side of the bracket, while the 3D laser scanner is on the left. The bracket is mounted in the middle of two tripods and the tripods are placed parallel to the white wall. The distances between the tripods and the wall are measured with two parallel tapes. The experimental scene is arranged as shown in Figure 3 below.

Figure 3. Experimental scene. (a) Experimental scene; (b) Camera bracket.

The distances from the tripods to the wall are set to 5, 4, 3, 2.5, 2, 1.5, 1 and 0.5 m, respectively. At each position, we change the lighting conditions and obtain one frame with the laser scanner and 30 frames with the SR4000 camera. To exclude the influence of the integration time, the SR_3D_View software of the SR4000 camera is set to "Auto".

In order to analyze the depth error, the acquired data are processed in MATLAB. Since the target object cannot fill the image, we select the central region of 90 × 90 pixels of the SR4000 image to be analyzed for depth errors. The distance error is defined as:

h_{i,j} = \frac{\sum_{f=1}^{n} m_{i,j,f}}{n} - r_{i,j},  (5)

g = \frac{\sum_{i=1}^{a} \sum_{j=1}^{b} h_{i,j}}{s},  (6)

where h_{i,j} is the mean error of pixel (i, j), m_{i,j,f} is the distance measured at pixel (i, j) in frame f, n = 30, r_{i,j} is the real distance, a and b are the row and column numbers of the selected region, respectively, and s is the total number of pixels. The real distance r_{i,j} is provided by the laser scanner.

Figure 4 shows the effects of different lighting conditions on the depth error of the SR4000. As shown in Figure 4, the depth error of the SR4000 is only slightly affected by the lighting conditions (the maximum effect is 2 mm). The depth error increases approximately linearly with distance, and the measured error values comply with the error tests of other Swiss Ranger cameras in [37–40]. Besides, as seen in the figure, the SR4000 is very robust against light changes and can adapt to various indoor lighting conditions when the accuracy requirements are modest.
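As an illustration of Equations (5) and (6), the per-pixel mean error and the overall mean error of the selected region can be computed as in the following sketch (NumPy is assumed, and the array names are ours):

import numpy as np

def region_depth_error(measured, real):
    """measured: n x a x b array of distances (n frames over an a x b pixel region).
    real: a x b array of reference distances from the laser scanner.
    Returns the per-pixel mean error h (Equation (5)) and the region mean g (Equation (6))."""
    h = measured.mean(axis=0) - real  # average over the n frames, minus the real distance
    g = h.sum() / h.size              # mean over the a x b selected pixels (s = a * b)
    return h, g

# Example with n = 30 frames and a 90 x 90 region, as used for the SR4000 analysis
# (the frame values here are synthetic, not measured data).
frames = np.random.normal(loc=2.0, scale=0.005, size=(30, 90, 90))
reference = np.full((90, 90), 2.0)
h, g = region_depth_error(frames, reference)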

Figure 4. Influence of lighting on depth errors (deviation [m] versus measured distance [m] under light, dark and natural indoor conditions).

Figure 5 shows the effects of various colors on the depth errors of the SR4000 camera. As shown in Figure 5, the depth error of the SR4000 is affected by the color of the target object, and it increases linearly with distance. The depth error curve under reflective conditions is quite different from the others: when the distance is 1.5–2 m, the depth error is large, while at 3–5 m it is small. When the distance is 5 m, the depth error is 15 mm less than when the color is blue. When the distance is 1.5 m, the depth error when the color is white is 5 mm higher than when the color is green.

Figure 5. Influence of color on depth errors (deviation [m] versus measured distance [m] for green, white, red, blue and reflective targets).

3.2. Influence of Material on Depth Errors

During the measurement process of ToF cameras, the measured objects tend to be made of different materials. Will this affect the measurement results? To answer this question, we conducted the following experiment. To analyze the effects of different materials on the depth errors of the SR4000, we chose four common materials: ABS plastic, stainless steel, wood and glass. The tripods are arranged as shown in Figure 3 of Section 3.1, and the targets are four 5-cm-thick boards of the different materials, as shown in Figure 6. The tripods are placed parallel to the targets, the distance is set to about 1 m, and the experiment is operated under natural light conditions. To differentiate the boards in the depth image, we leave a certain distance between them. Then we acquire one frame with the laser scanner and 30 consecutive frames with the SR4000 camera. The integration time in the SR_3D_View software of the SR4000 camera is set to "Auto".

Figure 6. Four boards made of different materials.

For the SR4000 and the laser scanner, we select the central regions of 120 × 100 pixels and 750 × 750 pixels, respectively. To calculate the mean thickness of the four boards, we also need to measure the distance between the wall and the tripods. Section 3.1 described the data processing method, and Figure 7 shows the mean errors of the four boards.

Figure 7. Depth data of two sensors (standard deviation [m] of the laser scanner and the SR-4000 for the different materials).

As shown in Figure 7, the material affects both the depth error of the SR4000 and that of the laser scanner. When the material is wood, the absolute error of the ToF camera is minimal, only 1.5 mm. When the target is the stainless steel board, the absolute error reaches its maximum value and the depth error is 13.4 mm, because, as the reflectivity of the target surface increases, the number of photons received by the light receiver decreases, which leads to a higher measurement error.

3.3. Influence of a Single Scene on Depth Errors

The following experiments were conducted to determine the influence of a single scene on depth errors. The tripods are placed as shown in Figure 3 of Section 3.1 and, as shown in Figure 8, the measuring target is a cone, 10 cm in diameter and 15 cm in height. The tripods are placed parallel to the axis of the cone and the distance is set to 1 m. The experiment is operated under natural light conditions. We acquire one frame with the laser scanner and 30 consecutive frames with the SR4000 camera. The integration time in the SR_3D_View software of the SR4000 camera is set to "Auto".

Figure 8. The measured cone.

As shown in Figure 9, we choose one of the 30 consecutive frames to analyze the errors, extract the point cloud data from the selected frame and compare it with the standard cone to calculate the error. The right side of Figure 9 is a color bar of the error distribution, of which the unit is m. As shown in Figure 9, the measurement accuracy of the SR4000 is also high; the maximal depth error is 0.06 m, and the depth errors of the SR4000 are mainly located in the rear profile of the cone. The deformation of the measured object is small but, compared with the laser scanner, its point cloud data are sparser.

Figure 9. Measurement errors of the cone.

3.4. Influence of a Complex Scene on Depth Errors

The following experiments were conducted in order to determine the influence of a complex scene on depth errors. The tripods are placed as shown in Figure 3 of Section 3.1 and the measurement target is a complex scene, as shown in Figure 10. The tripods are placed parallel to the wall, and the distance is set to about 1 m. The experiment is operated under natural light conditions. We acquire one frame with the laser scanner and 30 consecutive frames with the SR4000 camera. The integration time in the SR_3D_View software of the SR4000 camera is set to "Auto".

Figure 10. Complex scene.

We then choose one of the 30 consecutive frames for analysis and, as shown in Figure 11, obtain the point cloud data of the SR4000 and the laser scanner. As shown in Figure 11, there is a small amount of deformation in the shape of the target object measured by the SR4000 compared to the laser scanner, especially at the edge of the sensor, where the measured object is clearly curved. However, distortion exists on the border of the point cloud data and artifacts appear on the plant.

Figure 11. Depth images based on the point clouds of the depth sensors.

3.5. Analysis of Depth Errors

From the above four groups of experiments, the depth errors of the SR4000 are weakly affected by lighting conditions (2 mm maximum under the same conditions). The second factor is the target object color; under the same conditions, this affects the depth error by a maximum of 5 mm. On the other hand, the material has a great influence on the depth errors of ToF cameras: the greater the reflectivity of the measured object material, the greater the depth error, which increases approximately linearly with the distance between the measured object and the ToF camera. In a more complex scene, the depth error of a ToF camera is greater. Above all, lighting, object color, material, distance and complex backgrounds affect the depth errors of ToF cameras in different ways, but it is difficult to summarize these effects in an error law, because the forms of these errors are uncertain.

4. Depth Error Correction for ToF Cameras

In the last section, four groups of experiments were conducted to analyze the influence of several external factors on the depth errors of ToF cameras. The results of our experiments indicate that different factors have different effects on the measurement results, and it is difficult to establish a unified model to describe and correct such errors. For a complex process that is difficult to model mechanistically, an inevitable choice is to use actual measurable input and output data to build the model. Machine learning has proved to be an effective method for establishing non-linear process models: it maps the input space to the output space through a connection model, and the model can approximate a non-linear function with any precision. SVM is a generic learning method developed within the framework of statistical learning theory. It seeks the best compromise between model complexity and learning ability according to the limited sample information, so as to obtain the best generalization performance [41,42]. In the remainder of this section, we learn and model the depth errors of ToF cameras by using the LS-SVM algorithm [43].

Better parameters yield better SVM performance when building the LS-SVM model, so we need to determine the penalty parameter C and the Gaussian kernel parameter γ. Cross-validation [44] is a common method, but it suffers from large computation demands and long running times. A particle filter [45] can approximate the probability distribution of parameters in the parameter state space by spreading a large number of weighted discrete random variables. Based on this, this paper puts forward a parameter selection algorithm which can fit the depth errors of ToF cameras quickly and meets the requirements of correcting the errors. The process of the PF-SVM algorithm is shown in Figure 12 below.
Figure Figure12. 12.Process Processof ofPF-SVM PF-SVMalgorithm. algorithm.

4.1. 4.1. PF-SVM PF-SVM Algorithm Algorithm 4.1.1. 4.1.1. LS-SVM LS-SVMAlgorithm Algorithm According theory, during the process of black-box modeling for non-linear systems, Accordingtotostatistical statistical theory, during the process of black-box modeling for non-linear training set {x ,y }, i = 1,2, . . . ,n is generally given and non-linear function f is established to minimize i i set {xi,yi}, i = 1,2,…,n is generally given and non-linear function f is established to systems, training Equation (8): minimize Equation (8): f ( x ) = w T ϕ( x ) + b, (7) f  x   wT   x   b , n (7) 1 T 1 2 minw,b,δ J (w, δ) = w w + C ∑ δ , (8) 2 2 i =1 n

1 1 min w,b, J  w,    wT w  C  2 , 2 2 i 1

(8)

where ϕ(x) is a nonlinear function, and w is the weight. Moreover, Equation (8) satisfies the constraint:

Sensors 2017, 17, 92

11 of 18

where ϕ(x) is a nonlinear function, and w is the weight. Moreover, Equation (8) satisfies the constraint: yi = w T ϕ( xi ) + b + δi , i = 1, 2 · · · n,

(9)

where δi ≥ 0 is the relaxation factor, and C > 0 is the penalty parameter. The following equation introduces the Lagrange function L to solve the optimization problem in Equation (8): n 1 1 n L = kwk2 + C ∑ δi2 − ∑ αi ( ϕ( xi ) · w + b + δi − yi ), (10) 2 2 i =1 i =1 where αi is a Lagrange multiplier. For i = 1,2, . . . n by elimination of w and δ, a linear equation can be obtained: "

0 e

eT T GG + C −1 I

#

" (n+1)×(n+1)

b α

#

"

=

0 y

# ,

(11)

where e is an element of one n-dimensional column vector, and I is the n × n unit matrix: h G = ϕ ( x1 ) T

ϕ ( x2 ) T

ϕ( xn ) T

···

iT

,

(12)

According to the Mercer conditions, the kernel function is defined as follows:   K xi , x j = ϕ ( xi ) · ϕ x j ,

(13)

We substitute Equations (12) and (13) into Equation (11) to get a linear equation from which α and b can be determined by the least squares method. Then we can obtain the non-linear function approximation of the training data set: n

y( x ) =

∑ αi K(x, xi ) + b,

(14)

i =1

4.1.2. PF-SVM Algorithm The depth errors of ToF cameras mentioned above are used as training sample sets {xi, yi }, i = 1,2, . . . n, where xi is the camera measurement distance, and yi is the camera measurement error. Then the error correction becomes a black-box modeling problem of a nonlinear system. Our goal is to determine the nonlinear model f and correct the measurement error with it. The error model of ToF cameras obtained via the LS-SVM method is expressed in Equation (14). In order to seek a group of optimal parameters for the SVM model to approximate the depth errors in the training sample space, we put this model into a Particle Filter algorithm. In this paper, the kernel function is: ! −k x − yk2 , k( x, y) = exp 2γ2 (1)

(15)

Estimation state. The estimated parameter state x at time k is represented as: h j j x0 = C0 j

i j T

γ0

,

(16)

where x0 is j-th particle when k = 0, C is the penalty parameter and γ is the Gauss kernel parameter.

Sensors 2017, 17, 92

(2)

12 of 18

Estimation Model.

The relationship between parameter state x and parameter α,b in non-linear model y(x) can be expressed by state equation z(α,b): z(α, b) = F (γ, C ), (17)   −1     0 1 ··· 1 0 b     1 K ( x1 , x1 ) · · ·  K ( x1 , x n )   y1    α1      . = , (18)   .  . . .  .   . .   .  . .. .. ..   .     . yn αn 1 K ( xn , x1 ) · · · K ( xn , xn ) + C1 where Equation (17) is the deformation of Equation (11). The relationship between parameter α,b and ToF camera error y(x) can be expressed by observation equation f : y( x ) = f (α, b),

(19)

n

y( x ) =

∑ αi K(x, xi ) + b,

(20)

i =1

where Equation (20) is the non-linear model derived from LS-SVM algorithm. (3)

Description of observation target.

In this paper, we use yi of training set {xi, yi } as the real description of the observation target, namely the real value of the observation: z = { y i }, (4)

(21)

The calculation of the characteristic and the weight of the particle observation.

This process is executed when each particle is under characteristic observation. Hence the error values of the ToF camera are calculated according to the sampling of each particle in the parameter state x:     j zj αj, b = F γj, Cj , (22)   j y j (x) = f α j , b , (23) Here we compute the similarity between the ToF camera error values and the observed target camera values of each particle. The similarity evaluation RMS is defined as follows: v u 2 u1 n  RMS = t ∑ y j − yi , n i =1

(24)

where y j is the observation value of particle j and yi is the real error value. The weight value of each particle is calculated according to the Equation (24): w( j) = √

1 2πσ

e−

RMS2 2σ

,

(25)

Then the weight values are normalized: wj =

wj m

∑ wj

j =1

,

(26)

Sensors 2017, 17, 92

13 of 18

Sensors 2017, 17, 92

13 of 18

(5) Resampling

Resampling of the particles is conducted according to the normalized weights. In this process, not only the particles with great weights but also a small part of the particles with small weights should be kept.

(6) Outputting the particle set x_0^j = [C_0^j, γ_0^j]^T. This particle set is the optimal LS-SVM parameter.

(7) The measurement error model of ToF cameras can be obtained by introducing this parameter into the LS-SVM model.

4.2. Experimental Results

We performed three groups of experiments to verify the effectiveness of the algorithm. In Experiment 1, the depth error model of ToF cameras was built from the experimental data in [32], and the results were compared with the error correction results in the original text. In Experiment 2, the depth error model of ToF cameras was built from the data in Section 3.1, and the error correction results under different test conditions were compared. In Experiment 3, the error correction results under different reflectivity and different texture conditions were compared.

4.2.1. Experiment 1

In this experiment, we used the depth error data of the ToF camera obtained from Section 3.2 of [32] as the training sample set. The training set consists of 81 sets of data, where x is the distance measurement of the ToF camera and y is the depth error of the ToF camera, as shown in Figure 13 by blue dots. In the figure, the solid green line represents the error modeling results obtained with the polynomial given in [32]. The fit is good when the distance is 1.5–4 m, where the maximum absolute error is 8 mm. However, when the distance is less than 1.5 m or more than 4 m, the error model deviates from the true error values. Using our algorithm, we obtain C = 736 and γ = 0.003. By substituting these two parameters into the abovementioned algorithm, we obtain the depth error model of the ToF camera shown in the figure by the red solid line. It can be seen that this error model matches the real errors well.
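To make steps (5)–(7) concrete, the short sketch below shows one possible implementation of the resampling step and of turning a selected (C, γ) pair into an RBF-kernel LS-SVM error model. It is an illustrative reconstruction, not the authors' code: the systematic-resampling scheme, the kernel parameterization exp(−γ(x − x')²), the use of 1/C as the regularization term, and the synthetic training data are all assumptions made for the example; only the values C = 736 and γ = 0.003 come from Experiment 1.

```python
import numpy as np

def systematic_resample(particles, weights, rng):
    # Step (5): keep particles in proportion to their normalized weights;
    # large-weight particles are duplicated, yet a few small-weight
    # particles can still survive.
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                      # guard against rounding error
    return particles[np.searchsorted(cumulative, positions)]

def lssvm_fit(x, y, C, gamma):
    # Steps (6)-(7): train an RBF-kernel LS-SVM regressor with the selected
    # (C, gamma) by solving the standard LS-SVM linear system.
    K = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)
    n = len(x)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C             # 1/C plays the role of regularization
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return lambda xq: np.exp(-gamma * (xq[:, None] - x[None, :]) ** 2) @ alpha + b

rng = np.random.default_rng(0)

# Step (5) demo: particles are candidate [C, gamma] pairs with random weights.
particles = rng.uniform([1.0, 1e-4], [1000.0, 1.0], size=(50, 2))
weights = rng.random(50)
survivors = systematic_resample(particles, weights / weights.sum(), rng)

# Steps (6)-(7) demo with synthetic (distance, depth error) samples in metres.
dist = np.linspace(0.5, 4.5, 81)
err = 0.004 * np.sin(1.5 * dist) + 0.001 * rng.standard_normal(dist.size)
error_model = lssvm_fit(dist, err, C=736.0, gamma=0.003)
corrected = dist - error_model(dist)          # subtract the predicted depth error
```

A corrected range reading is then obtained by subtracting the model prediction from the raw measurement, assuming the depth error is defined as the measured distance minus the true distance.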

[Figure 13 plots the mean discrepancies [m] against the mean measured distance [m] over 0.5–4.5 m, comparing the distance error model of this paper with the distance error model in [32].]

Figure 13. Depth error and error model.

In order to verify the validity of the error model, we use the ToF camera depth error data obtained from Section 3.3 of [32] as a test sample set (the measurement conditions are the same as in Section 3.2 of [32]). The test sample set consists of 16 sets of data, as shown in Figure 14 by the blue line. In the figure, the solid green line represents the error modeling results obtained with the polynomial in [32]. The fit is good when the distance is 1.5–4 m, where the maximum absolute error is 8.6 mm. However, when the distance is less than 1.5 m or more than 4 m, the error model deviates from the true error values, which agrees with the fitting behavior of the aforementioned error model. The correction results obtained with our algorithm are shown by the red solid line in the figure. The error correction results are better over the whole 0.5–4.5 m range, and the maximum absolute error is 4.6 mm. Table 2 gives a detailed performance comparison of the two error corrections. From Table 2, we can see that, while expanding the range of error correction, this method also improves the accuracy of the error correction.

[Figure 14 plots the mean discrepancies [m] against the mean measured distance [m] over 0.5–4.5 m, showing the discrepancies before correction, after correction in [32], and after correction with the proposed method.]

Figure 14. Depth error correction results.

Table 2. Analysis of depth error correction results.

Comparison Items                  This Paper's Algorithm    Reference [32] Algorithm
Maximal Error/mm (1.5–4 m)        4.6                       4.6
Maximal Error/mm (0.5–4.5 m)      4.6                       8.6
Average Error/mm (1.5–4 m)        1.99                      2.14
Average Error/mm (0.5–4.5 m)      2.19                      4.375
Variance/mm (1.5–4 m)             2.92                      5.34
Variance/mm (0.5–4.5 m)           2.4518                    29.414
Optimal Range/m                   0.5–4.5                   1.5–4
Running Time/s                    2                         -
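The statistics reported in Table 2 (maximal error, average error and variance of the residual depth errors over a given distance range) can be reproduced with a small helper such as the one below. It is an illustrative sketch only: the function name, the assumption that residuals are supplied in metres, and the interpretation of "Variance/mm" as the variance of the residuals expressed in millimetres are assumptions, not taken from the paper.

```python
import numpy as np

def residual_stats(distance, residual, d_min, d_max):
    # Restrict to the evaluation range (e.g. 1.5-4 m or 0.5-4.5 m) and
    # report max |error|, mean |error| and variance in millimetres.
    mask = (distance >= d_min) & (distance <= d_max)
    r_mm = 1000.0 * residual[mask]
    return np.abs(r_mm).max(), np.abs(r_mm).mean(), r_mm.var()

# e.g. residual = test_error - error_model(test_distance) on the 16 test samples
# stats_inner = residual_stats(test_distance, residual, 1.5, 4.0)
# stats_full  = residual_stats(test_distance, residual, 0.5, 4.5)
```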

4.2.2. Experiment 2

The ToF depth error data of Section 3.1 obtained with the blue background are selected as the training sample set. As shown in Figure 15 by blue asterisks, the training set consists of eight sets of data. The error model established by our algorithm is shown by the blue line in Figure 15. The model fits the error data well, but the training sample set should be as rich as possible in order to improve the accuracy of the model. To verify the applicability of the error model, we use the white, green and red background ToF depth error data as test samples; the error data after correction are shown in the figure by the black, green and red lines. It can be seen from the figure that the absolute values of the three groups of residual errors are smaller than those of the uncorrected error data once the blue-background distance error model is applied. The figure also illustrates that this error model is well suited to the error correction of ToF cameras for different color backgrounds.
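The cross-background check in this experiment amounts to applying the model trained on the blue-background data to the other test sets and comparing the residuals before and after correction. The sketch below is a hedged illustration; the dictionary layout, the variable names, and the reuse of error_model from the earlier snippet are assumptions made for the example.

```python
import numpy as np

def evaluate_backgrounds(error_model, test_sets):
    # test_sets maps a background name to (distance, depth_error) arrays in metres;
    # returns the mean absolute error in mm before and after correction.
    report = {}
    for name, (dist, err) in test_sets.items():
        residual = err - error_model(dist)        # error remaining after correction
        report[name] = (1000.0 * np.mean(np.abs(err)),
                        1000.0 * np.mean(np.abs(residual)))
    return report

# report = evaluate_backgrounds(error_model, {
#     "white": (d_white, e_white),
#     "green": (d_green, e_green),
#     "red":   (d_red, e_red),
# })
```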

[Figure 15 plots the deviation [m] against the measured distance [m] over 0.5–5.5 m, showing the blue-background training data and the fitted error model together with the testing data and the deviations after correction for the white, green and red backgrounds.]

Figure 15. Depth error correction results of various colors and error model.

4.2.3. Experiment 3

An experimental process similar to that of Section 3.1 was adopted in order to verify the validity of the error modeling method under different reflectivity and different texture conditions. The sample set, including 91 groups of data, consists of the depth errors obtained from white wall surfaces photographed with a ToF camera at different distances, as shown by the blue solid line in Figure 16. The error model established with the algorithm described herein is shown by the red solid line in Figure 16; the figure indicates that this model fits the error data well. With a newspaper fixed on the wall as the test target, the depth errors obtained with the ToF camera at different distances are taken as the test data, shown by the black solid line in Figure 16, while the data corrected through the error model created here are shown by the green solid line in the same figure. It can be seen from the figure that the absolute values of the residual errors are smaller than those of the uncorrected error data once the distance error model is applied. The figure also illustrates that this error model is applicable over the full measurement range of ToF cameras.

[Figure 16 plots the deviation [m] (scale ×10^-3) against the measured distance [m] over 0.5–5 m, showing the training data (white wall), the fitted error model, the testing data (newspaper) and the deviation after correction.]

Figure 16. Depth error correction results and error model.


5. Conclusions

In this paper, we analyzed the influence of typical external distractions, such as the material and color of the target object, distance and lighting, on the depth errors of ToF cameras. Our experiments indicate that lighting, color, material and distance influence the depth errors of ToF cameras in different ways. As the distance increases, the depth errors of ToF cameras grow roughly linearly. To further improve the measurement accuracy of ToF cameras, this paper puts forward an error correction method based on the Particle Filter-Support Vector Machine (PF-SVM), in which the optimal LS-SVM parameters are selected with the particle filter algorithm on the basis of learning the depth errors of ToF cameras. The experimental results indicate that this method can reduce the depth error from 8.6 mm to 4.6 mm within the full measurement range (0.5–5 m).

Acknowledgments: This research was supported by the National Natural Science Foundation of China (No. 61305112).

Author Contributions: Ying He proposed the idea; Ying He, Bin Liang and Yu Zou conceived and designed the experiments; Ying He, Yu Zou, Jin He and Jun Yang performed the experiments; Ying He, Yu Zou, Jin He and Jun Yang analyzed the data; Ying He wrote the manuscript; and Bin Liang provided the guidance for data analysis and paper writing. Conflicts of Interest: The authors declare no conflict of interest.

References

1. Henry, P.; Krainin, M.; Herbst, E.; Ren, X.; Fox, D. RGB-D mapping: Using kinect-style depth cameras for dense 3D modeling of indoor environments. Int. J. Robot. Res. 2012, 31, 647–663. [CrossRef]
2. Brachmann, E.; Krull, A.; Michel, F.; Gumhold, S.; Shotton, J.; Rother, C. Learning 6D Object Pose Estimation Using 3D Object Coordinates; Springer: Heidelberg, Germany, 2014; Volume 53, pp. 151–173.
3. Tong, J.; Zhou, J.; Liu, L.; Pan, Z.; Yan, H. Scanning 3D full human bodies using kinects. IEEE Trans. Vis. Comput. Graph. 2012, 18, 643–650. [CrossRef] [PubMed]
4. Liu, X.; Fujimura, K. Hand gesture recognition using depth data. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, Seoul, Korea, 17–19 May 2004; pp. 529–534.
5. Foix, S.; Alenya, G.; Torras, C. Lock-in time-of-flight (ToF) cameras: A survey. IEEE Sens. J. 2011, 11, 1917–1926. [CrossRef]
6. Wiedemann, M.; Sauer, M.; Driewer, F.; Schilling, K. Analysis and characterization of the PMD camera for application in mobile robotics. IFAC Proc. Vol. 2008, 41, 13689–13694. [CrossRef]
7. Fuchs, S.; May, S. Calibration and registration for precise surface reconstruction with time of flight cameras. Int. J. Int. Syst. Technol. App. 2008, 5, 274–284. [CrossRef]
8. Guomundsson, S.A.; Aanæs, H.; Larsen, R. Environmental effects on measurement uncertainties of time-of-flight cameras. In Proceedings of the 2007 International Symposium on Signals, Circuits and Systems, Iasi, Romania, 12–13 July 2007; Volumes 1–2, pp. 113–116.
9. Rapp, H. Experimental and Theoretical Investigation of Correlating ToF-Camera Systems. Master's Thesis, University of Heidelberg, Heidelberg, Germany, September 2007.
10. Falie, D.; Buzuloiu, V. Noise characteristics of 3D time-of-flight cameras. In Proceedings of the 2007 International Symposium on Signals, Circuits and Systems, Iasi, Romania, 12–13 July 2007; Volumes 1–2, pp. 229–232.
11. Karel, W.; Dorninger, P.; Pfeifer, N. In situ determination of range camera quality parameters by segmentation. In Proceedings of the VIII International Conference on Optical 3-D Measurement Techniques, Zurich, Switzerland, 9–12 July 2007; pp. 109–116.
12. Kahlmann, T.; Ingensand, H. Calibration and development for increased accuracy of 3D range imaging cameras. J. Appl. Geodesy 2008, 2, 1–11. [CrossRef]
13. Karel, W. Integrated range camera calibration using image sequences from hand-held operation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 945–952.
14. May, S.; Werner, B.; Surmann, H.; Pervolz, K. 3D time-of-flight cameras for mobile robotics. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; Volumes 1–12, pp. 790–795.
15. Kirmani, A.; Benedetti, A.; Chou, P.A. Spumic: Simultaneous phase unwrapping and multipath interference cancellation in time-of-flight cameras using spectral methods. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), San Jose, CA, USA, 15–19 July 2013; pp. 1–6.
16. Freedman, D.; Krupka, E.; Smolin, Y.; Leichter, I.; Schmidt, M. Sra: Fast removal of general multipath for ToF sensors. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014.
17. Kadambi, A.; Whyte, R.; Bhandari, A.; Streeter, L.; Barsi, C.; Dorrington, A.; Raskar, R. Coded time of flight cameras: Sparse deconvolution to address multipath interference and recover time profiles. ACM Trans. Graph. 2013, 32, 167. [CrossRef]
18. Whyte, R.; Streeter, L.; Gree, M.J.; Dorrington, A.A. Resolving multiple propagation paths in time of flight range cameras using direct and global separation methods. Opt. Eng. 2015, 54, 113109. [CrossRef]
19. Lottner, O.; Sluiter, A.; Hartmann, K.; Weihs, W. Movement artefacts in range images of time-of-flight cameras. In Proceedings of the 2007 International Symposium on Signals, Circuits and Systems, Iasi, Romania, 13–14 July 2007; Volumes 1–2, pp. 117–120.
20. Lindner, M.; Kolb, A. Compensation of motion artifacts for time-of-flight cameras. In Dynamic 3D Imaging; Springer: Heidelberg, Germany, 2009; Volume 5742, pp. 16–27.
21. Lee, S.; Kang, B.; Kim, J.D.K.; Kim, C.Y. Motion blur-free time-of-flight range sensor. Proc. SPIE 2012, 8298, 105–118.
22. Lee, C.; Kim, S.Y.; Kwon, Y.M. Depth error compensation for camera fusion system. Opt. Eng. 2013, 52, 55–68. [CrossRef]
23. Kuznetsova, A.; Rosenhahn, B. On calibration of a low-cost time-of-flight camera. In Proceedings of the Workshop at the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Lecture Notes in Computer Science, Volume 8925; pp. 415–427.
24. Lee, S. Time-of-flight depth camera accuracy enhancement. Opt. Eng. 2012, 51, 527–529. [CrossRef]
25. Christian, J.A.; Cryan, S. A survey of LIDAR technology and its use in spacecraft relative navigation. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, Boston, MA, USA, 19–22 August 2013; pp. 1–7.
26. Nitzan, D.; Brain, A.E.; Duda, R.O. Measurement and use of registered reflectance and range data in scene analysis. Proc. IEEE 1977, 65, 206–220. [CrossRef]
27. Spirig, T.; Seitz, P.; Vietze, O. The lock-in CCD 2-dimensional synchronous detection of light. IEEE J. Quantum Electron. 1995, 31, 1705–1708. [CrossRef]
28. Schwarte, R. Verfahren und Vorrichtung zur Bestimmung der Phasen- und/oder Amplitudeninformation einer elektromagnetischen Welle. DE Patent 19,704,496, 12 March 1998.
29. Lange, R.; Seitz, P.; Biber, A.; Schwarte, R. Time-of-flight range imaging with a custom solid-state image sensor. Laser Metrol. Inspect. 1999, 3823, 180–191.
30. Piatti, D.; Rinaudo, F. SR-4000 and CamCube3.0 time of flight (ToF) cameras: Tests and comparison. Remote Sens. 2012, 4, 1069–1089. [CrossRef]
31. Chiabrando, F.; Piatti, D.; Rinaudo, F. SR-4000 ToF camera: Further experimental tests and first applications to metric surveys. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2010, 38, 149–154.
32. Chiabrando, F.; Chiabrando, R.; Piatti, D. Sensors for 3D imaging: Metric evaluation and calibration of a CCD/CMOS time-of-flight camera. Sensors 2009, 9, 10080–10096. [CrossRef] [PubMed]
33. Charleston, S.A.; Dorrington, A.A.; Streeter, L.; Cree, M.J. Extracting the MESA SR4000 calibrations. In Proceedings of the Videometrics, Range Imaging, and Applications XIII, Munich, Germany, 22–25 June 2015; Volume 9528.
34. Ye, C.; Bruch, M. A visual odometry method based on the SwissRanger SR4000. Proc. SPIE 2010, 7692, 76921I.
35. Hong, S.; Ye, C.; Bruch, M.; Halterman, R. Performance evaluation of a pose estimation method based on the SwissRanger SR4000. In Proceedings of the IEEE International Conference on Mechatronics and Automation, Chengdu, China, 5–8 August 2012; pp. 499–504.
36. Lahamy, H.; Lichti, D.; Ahmed, T.; Ferber, R.; Hettinga, B.; Chan, T. Marker-less human motion analysis using multiple SR4000 range cameras. In Proceedings of the 13th International Symposium on 3D Analysis of Human Movement, Lausanne, Switzerland, 14–17 July 2014.
37. Kahlmann, T.; Remondino, F.; Ingensand, H. Calibration for increased accuracy of the range imaging camera SwissRanger™. In Proceedings of the ISPRS Commission V Symposium Image Engineering and Vision Metrology, Dresden, Germany, 25–27 September 2006; pp. 136–141.
38. Weyer, C.A.; Bae, K.; Lim, K.; Lichti, D. Extensive metric performance evaluation of a 3D range camera. Int. Soc. Photogramm. Remote Sens. 2008, 37, 939–944.
39. Mure-Dubois, J.; Hugli, H. Real-time scattering compensation for time-of-flight camera. In Proceedings of the ICVS Workshop on Camera Calibration Methods for Computer Vision Systems, Bielefeld, Germany, 21–24 March 2007.
40. Kavli, T.; Kirkhus, T.; Thielmann, J.; Jagielski, B. Modelling and compensating measurement errors caused by scattering in time-of-flight cameras. In Proceedings of the SPIE, Two- and Three-Dimensional Methods for Inspection and Metrology VI, San Diego, CA, USA, 10 August 2008.
41. Vapnik, V.N. The Nature of Statistical Learning Theory; Springer: New York, NY, USA, 1995.
42. Smola, A.J.; Schölkopf, B. A tutorial on support vector regression. Stat. Comput. 2004, 14, 199–222.
43. Suykens, J.A.K.; Vandewalle, J. Least squares support vector machine classifiers. Neural Process. Lett. 1999, 9, 293–300. [CrossRef]
44. Zhang, J.; Wang, S. A fast leave-one-out cross-validation for SVM-like family. Neural Comput. Appl. 2016, 27, 1717–1730. [CrossRef]
45. Gustafsson, F. Particle filter theory and practice with positioning applications. IEEE Aerosp. Electron. Syst. Mag. 2010, 25, 53–82. [CrossRef]

© 2017 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
