How does it work … SeeReal’s holographic display principle in a nutshell
On our Technology page we describe selected aspects of holographic display technology, which SeeReal is actively pursuing.
But how do all these things come together in a holographic 3D (H3D) display? Here is a short walk-through of how this works, using a desktop-class H3D display from SeeReal as an example.
You start with a thin laser backlight. In it, multiple diffractive optical elements stretch the beam emanating from a miniaturized RGB laser to whatever display size is required, yielding a large, sufficiently coherent planar wavefront. This rectangular area illuminates the spatial light modulator (SLM) – a kind of liquid crystal display – containing the hologram, which is updated at typical display refresh rates. The hologram, represented by display pixels with controllable phase and amplitude modulation, modulates the passing illumination wavefront in amplitude (light intensity) and phase (optical delay). The hologram pixel information contains the data of many encoded Sub-Holograms (SHs) – one SH for each of the millions of 3D scene points in space. The properly illuminated and encoded hologram reconstructs the 3D scene in space, in front of and behind the physical display plane. The reconstruction of the encoded 3D scene can then be seen from the positions in space addressed by the display's optical system; SeeReal calls these zones "Viewing Windows" (VWs).
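The Sub-Hologram idea can be sketched in a few lines of code. In the simplest view, the SH for one scene point is a small Fresnel-lens phase pattern, and the full hologram is the complex sum of all SHs at their positions on the SLM. The sketch below illustrates only that principle; the parameter values, function names, and the paraxial lens formula are illustrative assumptions, not SeeReal's actual encoding.

```python
import numpy as np

# Illustrative values only -- not SeeReal's actual specifications.
WAVELENGTH = 532e-9   # green laser wavelength, metres
PIXEL_PITCH = 50e-6   # assumed SLM pixel pitch, metres

def sub_hologram(point_depth, sh_size):
    """Phase pattern of one Sub-Hologram (SH): a small Fresnel lens
    that reconstructs a single 3D scene point at `point_depth` metres
    from the display plane (sign selects in front of / behind)."""
    half = sh_size // 2
    coords = (np.arange(sh_size) - half) * PIXEL_PITCH
    x, y = np.meshgrid(coords, coords)
    # Paraxial Fresnel-lens phase for the chosen focal distance.
    phase = np.pi * (x**2 + y**2) / (WAVELENGTH * point_depth)
    return np.mod(phase, 2 * np.pi)

def encode_scene(points, slm_shape, sh_size=64):
    """Complex-add the SHs of many scene points into one hologram.
    Each point: (row, col) SH centre on the SLM, depth in metres, amplitude."""
    field = np.zeros(slm_shape, dtype=complex)
    half = sh_size // 2
    for row, col, depth, amp in points:
        sh = amp * np.exp(1j * sub_hologram(depth, sh_size))
        r0, c0 = row - half, col - half
        field[r0:r0 + sh_size, c0:c0 + sh_size] += sh
    return field  # amplitude and phase per pixel, as the SLM displays them

# One scene point 20 cm behind the display, encoded on a 512x512 SLM region:
hologram = encode_scene([(256, 256, -0.20, 1.0)], (512, 512))
```

Note that each SH occupies only a small patch of the SLM, which is what keeps the per-frame computation tractable compared with classical full-aperture holograms.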
But what happens when the observer moves? Eye-tracking cameras and software compute the observer's eye positions in real time; this may include estimating movement direction and speed so that the display components are notified of eye positions promptly. Active beam-steering optics deflect the VW to the new position for every frame of a video or interactive application. The update rate is so fast, and the positioning so precise, that observers never notice the VW shifting and continuously see the holographically reconstructed 3D scene. New eye positions are also reported to the 3D-content source, which, depending on the application and user preference, can use them to enable look-around of the 3D scene.
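The prediction step in that tracking loop can be sketched as a simple constant-velocity extrapolation: from two eye-position samples, estimate where the eye will be after the system's known latency, and steer the VW there. All names, timings, and the predictor itself are assumptions for illustration; a real tracker would use a more sophisticated filter.

```python
from dataclasses import dataclass

@dataclass
class EyeSample:
    t: float   # timestamp, seconds
    x: float   # horizontal eye position, metres
    y: float   # vertical eye position, metres

def predict_eye(prev: EyeSample, curr: EyeSample, latency: float):
    """Constant-velocity prediction: estimate the eye position `latency`
    seconds after the current sample, so the beam-steering optics can
    move the Viewing Window there before the next frame is shown."""
    dt = curr.t - prev.t
    vx = (curr.x - prev.x) / dt
    vy = (curr.y - prev.y) / dt
    return (curr.x + vx * latency, curr.y + vy * latency)

# Example: the camera saw the eye move 2 mm to the right over one
# 16.7 ms frame; predict its position 10 ms ahead and steer the VW there.
a = EyeSample(0.0, 0.100, 0.0)
b = EyeSample(0.0167, 0.102, 0.0)
target = predict_eye(a, b, latency=0.010)
```

The design point this illustrates is that prediction compensates for the pipeline latency between camera capture and optical deflection, so the VW arrives at the eye position rather than trailing behind it.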