Research Article

Vol. 55, No. 3 / January 20 2016 / Applied Optics

A127

Realization of real-time interactive 3D image holographic display [Invited]

JHEN-SI CHEN AND DAPING CHU*

P&S Group, Department of Engineering, University of Cambridge, 9 JJ Thomson Avenue, Cambridge CB3 0FA, UK
*Corresponding author: [email protected]

Received 11 September 2015; revised 22 November 2015; accepted 23 November 2015; posted 24 November 2015 (Doc. ID 249929); published 6 January 2016

Realization of a 3D image holographic display supporting real-time interaction requires fast actions in data uploading, hologram calculation, and image projection. These three key elements will be reviewed and discussed, while algorithms of rapid hologram calculation will be presented with the corresponding results. Our vision of interactive holographic 3D displays will be discussed. © 2016 Optical Society of America

OCIS codes: (090.0090) Holography; (090.2870) Holographic display; (100.2000) Digital image processing.

http://dx.doi.org/10.1364/AO.55.00A127

1. INTRODUCTION

Interactive 3D image displays are applicable to a wide range of applications related to the presentation of images and information, such as medical image visualization, industrial design modeling, telecommunications, gaming, and advertising. Their capability of showing depth and 3D location information in holographic images helps to deliver vivid and accurate visual perception. Furthermore, the interaction feature allows direct control, and the feedback feature can further enhance the user experience. This has been a technological dream since shortly after holography was invented [1]. There are various 3D image display prototypes, such as light field displays [2–5] and volume displays [6–9]. In principle, a light field can achieve a floating-in-the-air visual effect. However, most existing light field displays and all volume displays are not directly touchable, because the images are reconstructed behind, on, or across the projection media, which can be a display apparatus, a panel, or scattering materials. Semi-touchable volume 3D image displays, which project images onto smoke or water [10,11], can be touched, but they are not ideal because the images are disturbed and distorted when touched. Nevertheless, an intermediate system, such as an imaging system or a telescope setup, can be added to realize touchable features for these systems. On the other hand, holographic displays have the ability to project images directly into space to create a floating-in-the-air visual perception. The hardware can be implemented by digital devices, such as liquid crystal on silicon (LCOS) modules and digital micromirror devices (DMDs), or by analogue media, such as photorefractive holographic films [12,13], optically addressed spatial light modulators (OASLMs) [14–16], and acousto-optic modulators [17,18]. The challenge in realizing real-time interactive 3D image holographic displays is to overcome three main limitations: the speed of data transmission, hologram generation, and holographic image projection.

A. Holographic Image Projections

Projecting holographic 3D images with a large size, wide viewing angle, video rate, and full color requires a large amount of data. As an example, assume we want a holographic display with a size of 100 mm × 100 mm and a diffraction angle of 15° × 15°. This requires a pixel pitch of 1.18 μm × 1.18 μm at a wavelength of 633 nm (red) to ensure full parallax (see the standard diffraction equations [19]), which corresponds to about 92k × 92k ≈ 8.5 × 10⁹ pixels. The necessary pixel numbers are 10¹⁰ pixels and 1.4 × 10¹⁰ pixels at wavelengths of 532 nm (green) and 450 nm (blue), respectively. At 30 fps in full color (combining the RGB channels), this adds up to 3.6 × 10¹¹ pixels per second. For convenience, we use this number as the target for the discussions in this paper.

It is noted that the examples above assume pixelated holograms, which can be addressed digitally by electrically addressed spatial light modulators (EASLMs). Alternatively, information can be written onto updatable holographic media, such as OASLMs and photorefractive hologram films, using an EASLM. For an analogue hologram, which can be implemented by devices such as acousto-optic modulators (AOMs), there is no clear pixelation, but its diffraction angle, which is related to the fringes, and the information the image must carry work out to the same order of data as our target, because of the invariance of optical extent (étendue).

We should also note that the viewing angle of a projected holographic 3D image depends on its location in relation to the hologram; the viewing angle is not universal. In general, the diffraction angle can be used to indicate (but is not equal to) the viewing range. For images reconstructed very close to or even on the hologram, the viewing angle equals the diffraction angle, and this viewing angle decreases as the reconstructed 3D image moves away from the hologram.

A simple calculation shows that the necessary projection capability for our target is more than 3 × 10¹¹ pixels/s, or even more if a larger size, wider angle, and/or higher frame rate are required. Unfortunately, the capability of existing SLMs falls far behind this number. A high-definition (HD) LCOS can deliver around 10⁸ pixels/s [20]; an AOM can deliver around 50–200 MHz (equivalent to around 10⁸ pixels/s) [17]; both ferroelectric LCOS (FELCOS) [21] and DMDs [22] can support modulation at more than 20 kHz for resolutions higher than the HD standard, which can be more than 3.5 × 10¹⁰ pixels/s. These represent the highest pixel delivery capability of the off-the-shelf SLMs available today. It is worth pointing out that both DMDs and FELCOS can switch a single pixel on the order of microseconds [23,24], so an HD device could theoretically achieve on the order of 10¹² pixels/s, which would meet the requirement. However, the challenge lies in the electrical control, which at present is limited to 10–30 kHz for the global switch (switching all pixels). It should also be pointed out that both DMDs and FELCOS are binary devices, which suffer from conjugate images (these can be removed, but at the cost of half of the delivered pixels) and degraded image quality. Furthermore, how to efficiently utilize the information delivered from the optical engines (the information providers) also remains a challenge. For example, if multiple devices, such as LCOSs, DMDs, or AOMs, are tiled up to meet the delivery requirement, how to distribute the information from each device evenly in space and tile it seamlessly is not trivial. Different prototypes have implemented various methods [25–28].
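As a rough check on the pixel-budget arithmetic above, the pitch and pixel-count estimates can be reproduced in a few lines. This is a sketch assuming the grating relation sin θ = λ/(2p) for pixel pitch p and maximum diffraction angle θ; the exact constants depend on the convention used, so the result agrees with the figures in the text only to order of magnitude.

```python
import math

# Sketch of the pixel-budget estimate (assumed convention:
# sin(theta) = wavelength / (2 * pitch)). Numbers land near,
# not exactly on, the figures quoted in the text.

def full_parallax_pixels(size_m, wavelength_m, theta_deg):
    pitch = wavelength_m / (2 * math.sin(math.radians(theta_deg)))
    n = size_m / pitch                # pixels along one edge
    return pitch, n * n               # (pitch, total pixel count)

size = 100e-3                         # 100 mm x 100 mm hologram
per_frame = 0.0
for wavelength in (633e-9, 532e-9, 450e-9):   # R, G, B
    pitch, count = full_parallax_pixels(size, wavelength, 15)
    print(f"{wavelength * 1e9:.0f} nm: pitch {pitch * 1e6:.2f} um, {count:.2e} pixels")
    per_frame += count

per_second = per_frame * 30           # 30 fps, color-sequential RGB
print(f"total: {per_second:.2e} pixels/s")              # order of 1e11 to 1e12
print(f"binary data rate: {per_second / 8e9:.0f} GB/s")  # at 1 bit/pixel
```

The last line also illustrates the conversion used in Section 1.C: at 1 bit per pixel, the target pixel rate translates directly into a transmission rate in Gbit/s, or one eighth of that in GB/s.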
Therefore, the major issues in holographic image projection are the lack of sufficient information delivered from the source of the system and the need for an approach to distribute information from high-speed devices and tile all of this information in space.

B. Hologram Calculations

For hologram calculation, the various methods can be categorized into three types: point-based, polygon-based, and image-based methods.

In 3D graphics, a 3D object can be composed of a point cloud, where each point can be allocated a scattering/transmission function. A point-based method takes advantage of this structure and applies the fundamental wave propagation from a point source, so that each point propagates its wavefront onto the hologram, where the diffraction patterns are stacked up to compose the overall hologram. In point-based methods, shading can be supported by defining the correct propagation function according to the scattering/transmission function [29]; occlusion can also be provided when ray tracing techniques are used to determine the spatial order of objects and the blocking effect [30,31]. However, these would significantly slow down the calculation. Moreover, the point cloud data contain a large number of points for a fine, detailed object, and each point must be calculated individually, so the accumulated computation load is massive. To address this issue, the look-up table (LUT) method was proposed to trade memory size for calculation load [32]; note that its speed is limited by the memory transmission rate. Various other methods have been proposed to modify the procedure of the point-based method [33–37]. One of the principles is to decompose the propagation function into different sets of components, which reduces the LUT memory requirement or even further reduces the computation load, sometimes at the cost of degraded image quality.

Another way to decompose a 3D object is to break it into polygons (as in the PLY format), with each polygon carrying its defined texture [38–40]. The Fourier transform of each polygon (on its own axis) is rotated in the frequency domain, and its inverse Fourier transform in the space domain then represents the polygon at a tilted angle (on the hologram axis). The calculated patterns of the polygons are added up, and the whole procedure has the potential to be faster because the number of necessary basic units (polygons rather than points) is much smaller. Sharing a similar difficulty with the point-based method, the polygon-based method has to deal with occlusion through ray tracing [41–44], which slows down the calculation. Although it has the potential to be faster than the point-based method, in recent years the speed gains from simplifying the propagation function in point-based methods have overtaken the speed of the polygon-based methods.

The third way to reconstruct a 3D image is to use multiple 2D images from different viewing angles to approximate the 3D effect, and the image-based method is built on this concept [45,46]. In the image-based method, 2D images for the viewing angles are rendered through a graphics pipeline, which supports occlusion and shading in the first place with the support of 3D graphics.
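A minimal sketch of the point-based accumulation described above: each object point contributes a spherical wavefront exp(ikr)/r sampled on the hologram plane, and the contributions are summed. The grid size, pitch, and point coordinates here are illustrative assumptions, and the shading, occlusion, and scattering/transmission functions discussed in the text are omitted.

```python
import numpy as np

# Point-based hologram sketch: sum spherical waves from a tiny
# "point cloud" onto an N x N hologram grid at z = 0.
wavelength = 633e-9
k = 2 * np.pi / wavelength
pitch = 8e-6                        # assumed SLM pixel pitch
N = 512                             # hologram is N x N pixels

xs = (np.arange(N) - N / 2) * pitch # hologram-plane sample coordinates
X, Y = np.meshgrid(xs, xs)

# Illustrative points: (x, y, z, amplitude), z measured from the hologram
points = [(0.0, 0.0, 0.05, 1.0), (3e-4, -2e-4, 0.06, 0.8)]

field = np.zeros((N, N), dtype=complex)
for px, py, pz, amp in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    field += amp * np.exp(1j * k * r) / r   # spherical wave from the point

phase_hologram = np.angle(field)    # phase-only encoding, as for an LCOS
```

The inner loop is what the LUT method replaces: the zone pattern of a point at a given depth can be precomputed once and reused, trading memory for computation.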
The calculation involves the fast Fourier transform (FFT) and can be fast. Note that the image-based method requires a certain display system to reconstruct its multiple views (Fourier holograms), and the visual effect is similar to that of the holographic stereogram [47–50]. The advantage of the image-based method is its speed and its native support of occlusion and shading (without the need to develop extra techniques).

There are other methods, such as the diffraction specific coherent panoramagram (DSCP) [51] and the layer-based method [52–54], both of which are variations of image-based methods. DSCP acquires rendered 2D images from different viewing angles, as in the image-based method, but additionally uses the depth map to conduct a point-based calculation for each individual view. The advantage is that the shading and rendering information can be supplied by the 3D graphics, which reduces the calculation load compared with the pure point-based method. The layer-based method takes the rendered 2D images from different viewing angles and the depth map, similarly to DSCP, but it slices the image data into different layers for the Fourier–Fresnel hologram calculation. The advantage of the layer-based method is its trade-off between the computational load and the depth resolution of the accommodation cue, for which human eyes have a limited resolution [55,56].

The calculation of holograms can be largely parallelized, and general-purpose GPU (GPGPU) computing has been used widely since 2003 [57]. Various methods of all three types have adopted this approach to boost performance. Besides GPGPUs, dedicated field-programmable gate arrays (FPGAs) have also been developed for specific hologram calculation algorithms [58,59]. Overall, most reported hologram calculation speeds are currently limited to the order of 10⁷ pixels/s using a personal computer plus a GPGPU. This can be improved by one order of magnitude if multiple high-end GPUs are used, but it is still three orders of magnitude below our target.

C. Data Transmission

For data transmission to support our target (3.6 × 10¹¹ pixels/s with 1 bit per pixel, i.e., a binary device), a data transmission capability of 360 Gbit/s, or 45 GB/s, is required. Although high-speed data transmission technology keeps making progress (and there are techniques that can meet this number according to a recent report [60]), many of these developments are still at an early stage. In addition, the data transmission standards for SLMs have yet to be developed accordingly. The latest high-speed standard, PCI-e 3.0 16×, can support up to 15.75 GB/s, while its next generation (4.0 16×, to be released in 2017) aims for 31.51 GB/s, which is close to the target number. However, if gray-level SLMs are used, or a larger size and wider angle are required, the necessary data transmission rate could easily be one order of magnitude higher than the target, which is unlikely to be met in the near future. Therefore, practical data delivery using a single channel is unlikely to be realized soon, and one may consider using multiple channels, with each channel containing its own display, calculation, and data transmission, as an alternative solution.

D. Existing Holographic Display Systems with High Information Amount

Some representative holographic display systems with a high information amount (i.e., information bandwidth) are listed in Table 1 for comparison. Note that the information amount here means how much information is delivered optically, in pixels/s. Each pixel may have a different number of bits, giving a different total bandwidth (pixels/s × bits/pixel); pixels/s relates more to the visual performance (including viewing angle and image size), while bits/pixel relates more to image quality.

The horizontal-parallax-only (HPO) system from Tokyo University of Agriculture and Technology (TUAT, Japan) [26] consumes 10¹⁰ pixels/s (13,333 Hz × 1024 × 768) and is one of the early systems that successfully used a high-speed DMD to implement a prototype holographic display. The repetition of each subhologram reduces the effective visual performance (viewing angle and image size), and its information amount in terms of viewing angle and image size is equivalent to a system of around 10⁹ pixels/s. However, the repeated subholograms can be used to improve the image quality, and the latest publication [61] from the same group has shown improved results. The National Institute of Information and Communications Technology (NICT, Japan) published its tiled display system in 2013 [62], which is composed of three 8K4K LCOS panels at 60 Hz. Note that the 8K4K panel was developed in-house by NICT and is not accessible to outsiders. The effective information amount is 6 × 10⁹ pixels/s. The tiled multiple-DMD system from the Agency for Science, Technology and Research (A*STAR, Singapore), published in 2013 [27], is composed of 24 DMDs, each operated at 720 Hz with 1280 × 1080 resolution. This makes the total information amount 2.38 × 10¹⁰ pixels/s, the highest so far for a holographic display system. The Massachusetts Institute of Technology (MIT) has been working on AOM-type HPO holographic systems for decades, and its latest research result [18] shows that its current prototype reaches 4 × 10⁹ pixels/s with 40 AOM channels, each providing 10⁸ pixels/s. It is claimed that tiling more than 1000 channels together is possible, and a system of 1250 channels is under construction, which would reach 1.25 × 10¹¹ pixels/s.

One challenge that should be pointed out is the system configuration for seamless tiling. Simply increasing the number of SLMs in use can raise the overall information amount; however, the more SLMs are used, the more complicated the system becomes. It also places higher requirements on the distribution systems, such as scanners. All of this makes it a practical challenge to physically tile up tens or more SLMs for a holographic display. Attention should also be drawn to the connection between data transmission and information amount.
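The information-amount figures quoted above follow directly from frame rate × resolution × number of devices. A small sketch reproducing them, with the device parameters as quoted in the text:

```python
# Reproduce the "information amount" figures quoted above
# (pixels/s = number of devices x frame rate x resolution).
# Illustrative arithmetic only; parameters are as quoted in the text.

def info_amount(n_devices, frame_rate_hz, width_px, height_px):
    """Optically delivered information amount in pixels per second."""
    return n_devices * frame_rate_hz * width_px * height_px

systems = {
    "TUAT (2009)":   info_amount(1, 13_333, 1024, 768),  # single high-speed DMD
    "NICT (2013)":   info_amount(3, 60, 7680, 4320),     # three 8K4K LCOS panels
    "A*STAR (2013)": info_amount(24, 720, 1280, 1080),   # 24 tiled DMDs
    "MIT (2014)":    40 * 10**8,                         # 40 AOM channels, ~1e8 px/s each
}
for name, pixels in systems.items():
    # matches the ~1e10, 6e9, 2.38e10, and 4e9 figures quoted above
    print(f"{name}: {pixels / 1e9:.1f} billion pixels/s")
```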
Existing prototype holographic displays can reach more than 10 billion pixels/s, which corresponds to either 10 Gbit/s for binary pixels or 80 Gbit/s for 8-bit (gray scale) pixels. The PCI-e 3.0 16× mentioned in Section 1.C can support both of these rates. However, the existing holographic displays do not support a wide enough viewing angle or a large enough image size. Future systems for consumer use will require a much higher (>10×) information amount, and the use of multiple channels may still be necessary. It should also be noted that a few well-known holographic display prototypes are not included here, such as QinetiQ's active tiling OASLM display [63], due to their relatively low total information amount. There are other prototype displays with a high information amount that realize the 3D visual effect without holography, such as Zebra Imaging's Zscape [64], which manages to deliver more than 20 × 10⁹ pixels/s (more than 200 HD SLM units), and two displays with 360° views [65,66], each of which reaches over 10 × 10⁹ pixels/s.

Table 1. Comparison of Some Representative Holographic Displays

System (Year)         Information Amount     Number of SLMs   Information Amount Density
                      (billion pixels/s)                      per SLM (billion pixels/s)
TUAT (2009) [26]      10 (1)                 1                10 (1)
NICT (2013) [62]      6                      3                2
A*STAR (2013) [27]    23.8                   24               ~1
MIT (2014) [18]       4                      40               0.1

(Values in parentheses are the effective amounts after subhologram repetition.)

E. Haptic Systems with Holographic Display

Haptic systems can be integrated with holographic display systems. To implement an interactive holographic display with touch sensation, a few critical components are needed: a holographic display, user body (head or hand) tracking, and a touch sensation producer. For holographic displays, despite the many efforts discussed earlier (Sections 1.A and 1.D), there is currently no prototype system that meets the requirements for consumer use. For user body tracking, research has been carried out for decades, many successful commercial products have been developed and widely used, such as Kinect [67] and the Wii remote [68], and the related techniques are used in high-profile virtual augmentation device prototypes, such as Microsoft HoloLens [69] and Magic Leap [70]. For the touch sensation producer, there are also various prototypes, including physical controllers such as a machine arm [71–74], surrounding vibrators around a chair [75] from Disney Research, and ultrasound producers [76,77]. Due to the lack of appropriate holographic display systems, few works combine a haptic system with a holographic display system. Early works can be traced back to MIT in the 1990s [71,72], which used a physical controller (also serving as the feedback force provider), a pen linked to a machine arm, as the touch sensation producer and combined it with a holographic stereogram display system.

2. TOWARD A REAL-TIME INTERACTIVE 3D IMAGE HOLOGRAPHIC DISPLAY

A. Tiling Approach at the Displaying Stage

Our previous work involves optical tiling [78], which is a candidate solution for implementing the information distribution and tiling. It takes advantage of a conventional coarse integral imaging (CII) system for angular tiling, and the so-called coarse integral holography (CIH) system [78] replaces the conventional 2D image sources with holograms, which record the wavefront information of the reconstructed 3D images, as illustrated in Fig. 1.

In a CII system, each lenslet and the field lens form a 4f imaging system, which individualizes each view, meaning that each image source (or hologram source in the CIH case) is individualized. This structure can be used to implement angular tiling of digital holographic 3D images, and the calculation of each subhologram can be separated and parallelized. Our static CIH prototype has been published, and some results are shown in Fig. 2. Its dynamic prototype is under development.

B. Rapid Hologram Calculation Development

We have also developed a layer-based method and related rapid hologram generation algorithms [52–54]. As mentioned in Section 1.B, it belongs to the image-based methods. Our method is based on two features: view-dependent layerization and multiview tiling. View-dependent layerization projects the point data of a 3D image onto different layers along the viewing axis according to depth. Each layer represents a 2D image, equivalent to a cross-sectional image in tomography. Multiview tiling tiles up multiple views of limited viewing angles to form a seamless view with a large viewing angle. Figure 3 illustrates the principal procedure of our layer-based method. It should be noted that the reason simply adding one Fresnel lens pattern to the Fourier transform of an image can project the layer image at a target depth is that the phase components of the image planes are ignored. Related details can be found in [53]. The calculation for each view considers both its 2D projection image and the corresponding depth map. As a result, each view stands as an approximation of a 3D image and provides depth information to visual perception. The hologram calculation for this is rapid because the major part involves a fast Fourier transform (FFT) for each layer, and the necessary number of layers is low (less than 1 cm per layer is enough for an arm's-length viewing distance [55,56]). Slicing introduces depth error; fortunately, the eye has much less ability to resolve depth along the viewing axis (the accommodation cue) than to resolve image detail on the plane perpendicular to the viewing axis. Therefore, the rapid speed can be understood as being achieved by reducing the resolution of the accommodation cue, for which the eye has low resolution anyway. One serious drawback of layerization is that the depth error becomes obvious when viewing from an angle far off the viewing/layerization axis. This can be managed by using the

Fig. 1. (a) Illustration of a conventional CII system, which uses tiled 2D image sources. Each 2D image is projected to a certain viewing angle through its corresponding 4f imaging system. (b) In a CIH system, the image sources are replaced by holograms, each of which can support natural depth cues on its own. The lenslet and the field lens can also be incorporated as angularly tiled information on the subholograms.


Fig. 2. Images viewed from different angles of a static coarse integral hologram system, the specifications of which are given in [78] (Visualization 1).

Fig. 3. Illustration of the main steps in our layer-based method. It goes through view-dependent slicing, fast Fourier transformation, superimposing with corresponding lenses, and summing up. Note that slicing means each image data point is projected to the closest layer plane.

multiple-view structure, which limits the widest angle of each view and, hence, the maximum visual error. In this way, smooth tiling and rapid calculation can be achieved at the same time. Furthermore, we took advantage of the depth-fused 3D technique [79,80] to blend depth between two neighboring layers. This enhances the smoothness of the accommodation resolution and also reduces the necessary number of layers. In addition, tiling of holograms within each layer is also applied. This technique is sometimes used in 2D image hologram calculation to trade off image resolution against calculation load, and it can be applied to the layer-based method, which is composed of multiple 2D sliced images. After optimization, we achieved a calculation speed of 4.5 × 10⁷ pixels/s using a single standard GPU.

C. Demonstration of Real-Time Control of Reconstructed Holographic 3D Images

Attempts to use a holographic system for head-mounted displays (HMDs) can be found in [81–83]. The calculation speed of 4.5 × 10⁷ pixels/s allows us to project HD holograms at a frame rate of 21 Hz, which can be used to demonstrate real-time control of the holographic 3D images reconstructed by a prototype holographic 3D setup for HMDs [54]. The images shown in Figs. 4 and 5 have depth, and they are dragged, rotated, and zoomed in real time using a computer mouse.
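The layer-based pipeline of Section 2.B (slice the scene by depth, FFT each layer, superimpose the corresponding Fresnel lens phase, and sum) can be sketched as follows. The grid size, pitch, layer images, and paraxial lens factor are illustrative assumptions; the published method additionally applies view-dependent slicing, multiview tiling, and depth fusion.

```python
import numpy as np

# Layer-based hologram sketch: FFT each depth layer, multiply by a
# Fresnel lens phase for that layer's depth, and sum the results.
wavelength = 633e-9
k = 2 * np.pi / wavelength
pitch = 8e-6                        # assumed SLM pixel pitch
N = 512
xs = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(xs, xs)

def lens_phase(z):
    """Fresnel lens factor projecting a layer to depth z (paraxial form)."""
    return np.exp(-1j * k * (X ** 2 + Y ** 2) / (2 * z))

# Two dummy layer images, each a 2D cross-section of the sliced scene
layers = [(np.random.rand(N, N), 0.05),   # (layer image, depth in metres)
          (np.random.rand(N, N), 0.06)]

hologram = np.zeros((N, N), dtype=complex)
for image, z in layers:
    spectrum = np.fft.fftshift(np.fft.fft2(image))  # FFT of the layer
    hologram += spectrum * lens_phase(z)            # place layer at its depth

phase_hologram = np.angle(hologram)   # phase-only encoding, as for an LCOS
```

Because the cost per layer is essentially one FFT, the total load grows with the number of layers rather than the number of object points, which is where the speed advantage over point-based calculation comes from.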

3. DISCUSSIONS

To show holographic 3D images in real time, the necessary pixel delivery capability is very high (≥10¹¹ pixels/s). Currently, no single device can achieve it. We see three possible solutions: (1) developing a single high-capability device, which should be able to deliver at least 10 times more pixels/s than the highest currently available; (2) tiling up multiple existing devices and designing an approach to distribute the information from each device and tile all of it into one display system; (3) degrading the target


Fig. 4. Images taken at different times as controlled by a computer mouse (Visualization 2).

Fig. 5. (a) Original graphic image. (b) and (c) Images taken from the displayed 3D image with different depth focus to show the accommodation.

performance by accepting some visual compromises, such as lowering the image resolution or removing parallax along one axis (HPO). The second solution can be achieved sooner than the first, and it means tiling up more than 10 high-speed (>10 kHz) HD DMDs together with an effective distribution scheme. The third solution does not conflict with the first and second, so this route is also worth developing. For example, an HPO holographic display with a 100 mm × 100 mm size, a 15° diffraction angle, 30 Hz, and full color could be achieved by a single high-speed HD DMD device, simply in terms of pixel capability. (Note that the practical implementation can still be challenging.) However, an HPO system lacks vertical parallax, and this would limit its applications.

As for the hologram calculation speed, the current best is 4.5 × 10⁷ pixels/s, which is about three to four orders of magnitude behind the target (≥10¹¹ pixels/s). The possible approaches to improve this number are: (1) designing dedicated FPGAs for the developed algorithm, which can improve the speed by one to two orders of magnitude; (2) using a multiple-channel configuration (since multiple optical engines will be used), which can gain more than one order of magnitude; (3) removing unnoticeable visual effects in the 3D images to reduce the data quantity, e.g., reducing the accommodation cue as in our layer-based method.

For the data transmission rate, PCI-e 3.0 can already support over 10 GB/s, and multiple-channel configurations could bring it to a level that matches our target. The remaining difficulty is the engineering challenge of implementing it on existing or future SLMs. Up to now, no published work shows any SLM in a holographic display communicating through PCI-e. The highest standard for SLMs at present is USB 3.0, which can support up to 625 MB/s.

Another approach worth trying is to deliver information only into the eyes, rather than to the whole 3D space at once. This is more efficient in terms of information usage, because most of the 3D information of the images would never be seen if it were all delivered. Therefore, a head-mounted holographic display, which only requires us to project one view (or a few views to make up a large field of view), has the potential to be integrated with augmented reality (AR) or mixed reality (MR) to implement real-time interactive holographic displays. For a single view (or a few views), the SLM, hologram calculation, and data transmission are all able to meet the 3D viewing requirements with existing technology. The challenges for holographic HMDs are similar to those for existing stereoscopic HMDs: how to enlarge the field of view, how to track users accurately (and respond rapidly), and how to make the system compact and light.

4. CONCLUSIONS

Three important components of a real-time interactive holographic display, namely fast data uploading, fast hologram calculation, and fast image projection, were introduced and reviewed. The necessary performance requirements and the current status were addressed. Our previous works related to the display projection and hologram calculation components were presented. Finally, discussions and some potential future works toward the ideal holographic 3D image display were provided.

Funding. Engineering and Physical Sciences Research Council (EPSRC) (EP/F00897X/1).

Acknowledgment.
The authors would like to thank the UK Engineering and Physical Sciences Research Council (EPSRC) for the support through the Platform Grant for Liquid Crystal Photonics (EP/F00897X/1) and a joint collaboration project between the University of Cambridge and Disney

Research through the CAPE consortium for some of the previous work presented here.

REFERENCES

1. D. Gabor, "A new microscopic principle," Nature 161, 777–778 (1948).
2. J. Park, D. Nam, S. Y. Choi, J. H. Lee, D. S. Park, and C. Y. Kim, "Light field rendering of multi-view contents for high density light field 3D display," in SID International Symposium (2013), pp. 667–670.
3. G. Wetzstein, D. Lanman, M. Hirsch, and R. Raskar, "Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting," ACM Trans. Graph. 31, 1–11 (2012).
4. T. Balogh, P. T. Kovács, and Z. Megyesi, "Holovizio 3D display system," in Proceedings of the First International Conference on Immersive Telecommunications (ICST) (Institute for Computer Sciences, 2007), p. 19.
5. X. Xia, X. Liu, H. Li, Z. Zheng, H. Wang, Y. Peng, and W. Shen, "A 360-degree floating 3D display based on light field regeneration," Opt. Express 21, 11237–11247 (2013).
6. H. Kimura, T. Uchiyama, and H. Yoshikawa, "Laser produced 3D display in the air," in ACM SIGGRAPH Emerging Technologies (2006), p. 20.
7. K. Langhans, D. Bahr, D. Bezecny, D. Homann, K. Oltmann, K. Oltmann, C. Guill, E. Rieper, and A. Gotz, "FELIX 3D display: an interactive tool for volumetric imaging," Proc. SPIE 4660, 176–190 (2002).
8. G. E. Favalora, R. K. Dorval, D. M. Hall, M. Giovinco, and J. Napoli, "Volumetric three-dimensional display system with rasterization hardware," Proc. SPIE 4297, 227–235 (2001).
9. A. Sullivan, "DepthCube solid-state 3D volumetric display," Proc. SPIE 5291, 279–284 (2004).
10. S. DiVerdi, I. Rakkolainen, T. Höllerer, and A. Olwal, "A novel walk-through 3D display," Proc. SPIE 6055, 605519 (2006).
11. S. I. Eitoku, K. Hashimoto, and T. Tanikawa, "Controllable water particle display," in Proceedings of the 2006 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology (2006), p. 36.
12. S. Tay, P. A. Blanche, R. Voorakaranam, A. V. Tunc, W. Lin, S. Rokutanda, T. Gu, D. Flores, P. Wang, G. Li, P. St Hilaire, J. Thomas, R. A. Norwood, M. Yamamoto, and N. Peyghambarian, "An updatable holographic three-dimensional display," Nature 451, 694–698 (2008).
13. H. N. Giang, K. Kinashi, W. Sakai, and N. Tsutsumi, "Photorefractive response and real-time holographic application of a poly(4-(diphenylamino)benzyl acrylate)-based composite," Polym. J. 46, 59–66 (2014).
14. N. Collings, S. Mias, T. D. Wilkinson, A. R. Travis, J. R. Moore, and W. A. Crossland, "Optically addressed spatial light modulator: performance and applications," Proc. SPIE 5213, 40–48 (2003).
15. L. Xinan, X. Xuewu, S. Solanki, P. Yuechao, R. B. A. Tanjung, T. Chiwei, X. Baoxi, and C. T. Chong, "3D holographic display with optically addressed spatial light modulator," in 3DTV Conference: The True Vision-Capture, Transmission and Display of 3D Video (IEEE, 2009), pp. 1–4.
16. P. K. Shrestha, Y. T. Chun, and D. Chu, "A high-resolution optically addressed spatial light modulator based on ZnO nanoparticles," Light Sci. Appl. 4, e259 (2015).
17. D. E. Smalley, Q. Y. Smithwick, and V. M. Bove, Jr., "Holographic video display based on guided-wave acousto-optic devices," Proc. SPIE 6488, 64880L (2007).
18. D. E. Smalley, Q. Y. Smithwick, V. M. Bove, Jr., J. Barabas, and S. Jolly, "Anisotropic leaky-mode modulator for holographic video displays," Nature 498, 313–317 (2013).
19. E. Hecht, Optics (Addison-Wesley, 2001), Chap. 10, pp. 443–518.
20. HOLOEYE Photonics AG, "PLUTO phase only spatial light modulator (reflective)," http://holoeye.com/spatial-light-modulators/slm-pluto-phase-only/.
21. Forth Dimension Displays, http://www.forthdd.com/technology/.
22. Texas Instruments, DLP & MEMS products, http://www.ti.com/lsds/ti/analog/dlp/overview.page.
23. N. A. Clark and S. T. Lagerwall, "Submicrosecond bistable electro-optic switching in liquid crystals," Appl. Phys. Lett. 36, 899–901 (1980).
24. S. K. Nayar, V. Branzoi, and T. E. Boult, "Programmable imaging using a digital micromirror array," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2004), Vol. 1, pp. I-436–I-443.
25. P. St-Hilaire, S. A. Benton, M. E. Lucente, J. S. Underkoffler, and H. Yoshikawa, "Real-time holographic display: improvements using a multichannel acousto-optic modulator and holographic optical elements," Proc. SPIE 1461, 254–261 (1991).
26. Y. Takaki and N. Okada, "Hologram generation by horizontal scanning of a high-speed spatial light modulator," Appl. Opt. 48, 3255–3260 (2009).
27. Z. M. A. Lum, X. Liang, Y. Pan, R. Zheng, and X. Xu, "Increasing pixel count of holograms for three-dimensional holographic display by optical scan-tiling," Opt. Eng. 52, 015802 (2013).
28. J. Hahn, H. Kim, Y. Lim, G. Park, and B. Lee, "Wide viewing angle dynamic holographic stereogram with a curved array of spatial light modulators," Opt. Express 16, 12372–12386 (2008).
29. T. Kurihara and Y. Takaki, "Shading of a computer-generated hologram by zone plate modulation," Opt. Express 20, 3529–3540 (2012).
30. R. H. Chen and T. D. Wilkinson, "Computer generated hologram with geometric occlusion using GPU-accelerated depth buffer rasterization for three-dimensional display," Appl. Opt. 48, 4246–4255 (2009).
31. H. Zhang, N. Collings, J. Chen, B. Crossland, D. Chu, and J. Xie, "Full parallax three-dimensional display with occlusion effect using computer generated hologram," Opt. Eng. 50, 074003 (2011).
32. M. E. Lucente, "Interactive computation of holograms using a look-up table," J. Electron. Imaging 2, 28–34 (1993).
33. Y. Pan, X. Xu, S. Solanki, X. Liang, R. B. Tanjung, C. Tan, and T. C. Chong, "Fast CGH computation using S-LUT on GPU," Opt. Express 17, 18543–18555 (2009).
34. J. Jia, Y. Wang, and J. Liu, "Reducing the memory usage for effective computer-generated hologram calculation using compressed look-up table in full-color holographic display," Appl. Opt. 52, 1404–1412 (2013).
35. S.-C. Kim and E.-S. Kim, "Effective generation of digital holograms of three-dimensional objects using a novel look-up table method," Appl. Opt. 47, D55–D62 (2008).
36. P. Tsang, W.-K. Cheung, T.-C. Poon, and C. Zhou, "Holographic video at 40 frames per second for 4-million object points," Opt. Express 19, 15205–15211 (2011).
37. J. Weng, T. Shimobaba, N. Okada, H. Nakayama, M. Oikawa, N. Masuda, and T. Ito, "Generation of real-time large computer generated hologram using wavefront recording method," Opt. Express 20, 4018–4023 (2012).
38. K. Matsushima, "Wave-field rendering in computational holography: the polygon-based method for full-parallax high-definition CGHs," in IEEE/ACIS 9th International Conference on Computer and Information Science (ICIS) (2010), pp. 846–851.
39. K. Matsushima and S. Nakahara, "Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method," Appl. Opt. 48, H54–H63 (2009).
40. D. Im, E. Moon, Y. Park, D. Lee, J. Hahn, and H. Kim, "Phase-regularized polygon computer-generated holograms," Opt. Lett. 39, 3642–3645 (2014).
41. K. Matsushima and A. Kondoh, "A wave-optical algorithm for hidden-surface removal in digitally synthetic full parallax holograms for three-dimensional objects," Proc. SPIE 5290, 90–97 (2004).
42. A. Kondoh and K. Matsushima, "Hidden surface removal in full-parallax CGHs by silhouette approximation," Syst. Comput. Japan 38, 53–61 (2007).
43. K. Matsushima, "Computer-generated holograms for three-dimensional surface objects with shade and texture," Appl. Opt. 44, 4607–4614 (2005).
44. H. Nishi, K. Matsushima, and S. Nakahara, "Advanced rendering techniques for producing specular smooth surfaces in polygon-based high-definition computer holography," Proc. SPIE 8281, 828110 (2012).
45. F. Remondino and S. El-Hakim, "Image-based 3D modelling: a review," Photogramm. Rec. 21, 269–291 (2006).
46. H. Shum and S. B. Kang, "Review of image-based rendering techniques," Proc. SPIE 4067, 2–13 (2000).
47. M. Lucente, "Diffraction-specific fringe computation for electro-holography," Ph.D. thesis (Massachusetts Institute of Technology, 1994).
48. W. Plesniak, M. Halle, J. Bove, J. Barabas, and R. Pappu, "Reconfigurable image projection holograms," Opt. Eng. 45, 115801 (2006).
49. H. Kang, T. Yamaguchi, and H. Yoshikawa, "Accurate phase-added stereogram to improve the coherent stereogram," Appl. Opt. 47, D44–D54 (2008).
50. Q. Y. Smithwick, J. Barabas, D. E. Smalley, and V. M. Bove, Jr., "Real-time shader rendering of holographic stereograms," Proc. SPIE 7233, 723302 (2009).
51. J. Barabas, S. Jolly, D. E. Smalley, and V. M. Bove, Jr., "Diffraction specific coherent panoramagrams of real scenes," Proc. SPIE 7957, 795702 (2011).
52. J.-S. Chen, Q. Smithwick, and D. Chu, "Implementation of shading effect for reconstruction of smooth layer-based 3D holographic images," Proc. SPIE 8648, 86480R (2013).
53. J.-S. Chen, D. Chu, and Q. Smithwick, "Rapid hologram generation utilizing layer-based approach and graphic rendering for realistic three-dimensional image reconstruction by angular tiling," J. Electron. Imaging 23, 023016 (2014).
54. J. S. Chen and D. P. Chu, "Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications," Opt. Express 23, 18143–18155 (2015).
55. K. Akeley, "Achieving near-correct focus cues using multiple image planes," Ph.D. thesis (Stanford University, 2004).
56. J. P. Rolland, M. W. Krueger, and A. A. Goon, "Dynamic focusing in head mounted displays," Proc. SPIE 8648, 463–470 (1999).
57. C. Petz and M. Magnor, "Fast hologram synthesis for 3D geometry models using graphics hardware," Proc. SPIE 5005, 266–275 (2003).
58. T. Ito, N. Masuda, K. Yoshimura, A. Shiraki, T. Shimobaba, and T. Sugie, "Special-purpose computer HORN-5 for a real-time electroholography," Opt. Express 13, 1923–1932 (2005).
59. Y. Ichihashi, H. Nakayama, T. Ito, N. Masuda, T. Shimobaba, A. Shiraki, and T. Sugie, "HORN-6 special-purpose clustered computing system for electroholography," Opt. Express 17, 13895–13903 (2009).
60. X. Liu, S. Chandrasekhar, and P. J. Winzer, "Digital signal processing techniques enabling multi-Tb/s superchannel transmission: an overview of recent advances in DSP-enabled superchannels," IEEE Signal Process. Mag. 31, 16–24 (2014).
61. T. Senoh, Y. Ichihashi, R. Oi, H. Sasaki, and K. Yamamoto, "Study of a holographic TV system based on multi-view images and depth maps," Proc. SPIE 8644, 86440A (2013).
62. Y. Matsumoto and Y. Takaki, "Improvement of gray-scale representation of horizontally scanning holographic display using error diffusion," Opt. Lett. 39, 3433–3436 (2014).
63. C. W. Slinger, R. W. Bannister, C. D. Cameron, S. D. Coomber, I. Cresswell, P. W. Hallett, J. R. Hughes, V. Hui, J. C. Jones, R. Miller, V. Minter, D. A. Scattergood, D. T. Sheerin, M. A. Smith, and M. Stanley, "Progress and prospects for practical electroholographic display systems," Proc. SPIE 4296, 18–32 (2001).

64. M. Klug, T. Burnett, A. Fancello, A. Heath, K. Gardner, S. O'Connell, and C. Newswanger, "A scalable, collaborative, interactive light-field display system," SID Int. Symp. Dig. Tech. Pap. 44, 412–415 (2013).
65. Y. Takaki and S. Uchida, "Table screen 360-degree three-dimensional display using a small array of high-speed projectors," Opt. Express 20, 8848–8861 (2012).
66. C. Su, X. Xia, H. Li, X. Liu, and W. Shen, "An interactive 360° floating 3D display based on gesture recognition," SID Int. Symp. Dig. Tech. Pap. 45, 1278–1281 (2014).
67. Kinect - Windows app development, https://dev.windows.com/en-us/kinect.
68. Wii Remote, Wikipedia, https://en.wikipedia.org/wiki/Wii_Remote.
69. Microsoft HoloLens, http://www.microsoft.com/microsoft-hololens/en-us.
70. Magic Leap, http://www.magicleap.com/#/home.
71. W. J. Plesniak and A. K. Michael, "Tangible holography: adding synthetic touch to 3D display," Proc. SPIE 3011, 53 (1997).
72. R. Pappu and W. J. Plesniak, "Haptic interaction with holographic video images," Proc. SPIE 3293, 38 (1998).
73. P. Olsson, F. Nysjo, S. Seipel, and I. Carlbom, "Physically co-located haptic interaction with 3D displays," in IEEE Haptics Symposium (HAPTICS) (2012), pp. 267–272.
74. M. Page, "Haptic holography/touching the ethereal," J. Phys.: Conf. Ser. 415, 012041 (2013).
75. A. Israr, S. C. Kim, J. Stec, and I. Poupyrev, "Surround haptics: tactile feedback for immersive gaming experiences," in ACM Conference on Human Factors in Computing Systems (CHI) (2012), pp. 1087–1090.
76. T. Hoshi, M. Takahashi, K. Nakatsuma, and H. Shinoda, "Touchable holography," in ACM SIGGRAPH Emerging Technologies (2009), p. 23.
77. T. Carter, S. A. Seah, B. Long, B. Drinkwater, and S. Subramanian, "UltraHaptics: multi-point mid-air haptic feedback for touch surfaces," in Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology (ACM, 2013).
78. Q. Smithwick, J. S. Chen, and D. P. Chu, "A coarse integral holographic display," SID Int. Symp. Dig. Tech. Pap. 44, 310–313 (2013).
79. S. Suyama, S. Ohtsuka, H. Takada, K. Uehira, and S. Sakai, "Apparent 3-D image perceived from luminance-modulated two 2-D images displayed at different depths," Vis. Res. 44, 785–793 (2004).
80. C. Lee, S. DiVerdi, and T. Höllerer, "An immaterial depth-fused 3D display," in Proceedings of the ACM Symposium on Virtual Reality Software and Technology, New York, USA (2007), pp. 191–198.
81. H. E. Kim, N. Kim, H. Song, H. S. Lee, and J. H. Park, "Three-dimensional holographic display using active shutter for head mounted display application," Proc. SPIE 7863, 78631Y (2011).
82. T. Yoneyama, C. Yang, Y. Sakamoto, and F. Okuyama, "Eyepiece-type full-color electro-holographic display for binocular vision," Proc. SPIE 8644, 864413 (2013).
83. T. Yoneyama, T. Ichikawa, and Y. Sakamoto, "Semi-portable full-color electro-holographic display with small size," Proc. SPIE 9006, 900617 (2014).
