The next generation of computer screens could incorporate holographic displays, allowing us to interact with images and videos that appear to be presented in three dimensions.
Instead of appearing as real-time video during videoconferences, colleagues could be represented as holograms. Medical professionals could be presented with three-dimensional (3D) views of organs and scans to help diagnose conditions and plan surgeries. Holographic representations are also of interest for defense applications, to better visualize a battlefield when preparing missions.
"The promise of holography is to project images that you can almost not distinguish from reality," says Theo Marescaux, founder and chief product officer of Swave Photonics, a fabless semiconductor company based in Leuven, Belgium.
Holography can be thought of as photography's 3D counterpart. Whereas a photo captures the varying intensity of light in a scene, a hologram also incorporates the light's phase—a light wave's position in its cycle at a point in time—which allows a 3D image to be recreated. In traditional holography, a light beam from a coherent light source such as a laser, which emits light waves of the same phase, is split in two to illuminate a scene. A hologram can then be produced by recording the interference pattern of the two beams. With computer-generated holography, algorithms can be used to simulate the process in order to create holographic displays.
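The recording step described above can be sketched numerically. The toy below (a minimal sketch; the single object point, plane reference wave, and all numerical values are illustrative assumptions, not from the article) shows how the recorded intensity pattern of two interfering coherent beams encodes the object wave's phase, which an ordinary photograph would discard:

```python
import numpy as np

# Toy 1-D hologram recording: a coherent reference (plane) wave interferes
# with the spherical wave scattered by a single object point. The recorded
# intensity pattern encodes the object wave's phase. All values illustrative.

wavelength = 633e-9                      # red laser light, meters
k = 2 * np.pi / wavelength               # wavenumber

x = np.linspace(-1e-3, 1e-3, 2000)       # 2 mm-wide recording line
z_obj = 0.05                             # object point 5 cm from the plate

# Reference beam: plane wave hitting the plate head-on (constant phase).
reference = np.ones_like(x, dtype=complex)

# Object beam: spherical wave from the point (0, z_obj); its phase varies
# with the distance r from the point to each position on the plate.
r = np.sqrt(x**2 + z_obj**2)
object_wave = np.exp(1j * k * r) / r

# The plate records intensity only -- but the cross term of |R + O|^2
# contains the object phase, which is what makes reconstruction possible.
intensity = np.abs(reference + object_wave) ** 2

print(intensity.shape)
```

Illuminating the recorded fringe pattern with the reference beam alone diffracts light back into a copy of the object wave, which is what reconstructs the 3D image.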
Virtual reality (VR) is often considered a competing technology, since it also aims to mimic the real world. VR headsets provide a stereoscopic view and typically track a user's head pose to make them feel immersed in a virtual scene. However, when creating the illusion of depth, virtual objects at certain distances often look blurry: there is a mismatch between an object's perceived location in space and its focusing distance, which remains fixed at the headset's screen. The discrepancy can also make users feel nauseous after long-term use. In contrast, "With holography, you don't have that because you're reconstructing the entire image with perfect focus at any point in space," says Marescaux.
Furthermore, virtual reality generates a 3D view for a single user, whereas holographic displays could create virtual objects capable of being viewed by many people simultaneously. "You could have different perspectives represented in a big format display [so that] people can look at things from different angles," says Kaan Aksit, an associate professor at University College London in the U.K., who is researching 3D display technologies.
There are challenges to overcome, however, before holographic displays can be made real. One is that generating holograms is computationally expensive, since all the light rays in a 3D scene, passing through every point of every object and in every direction, must be recorded and reconstructed, including all the depth cues perceived by the human eye. Supercomputers have been used to create accurate simulations of the underlying physics, but even so it can take minutes to produce a single holographic image.
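The scale of the problem can be made concrete with a back-of-the-envelope count. In the naive "point-source" method of computer-generated holography, every scene point contributes a wave to every hologram pixel, so the cost grows as pixels times points. The display and scene sizes below are illustrative assumptions, not figures from the article:

```python
# Rough operation count for naive point-source hologram computation:
# every scene point contributes a spherical wave to every hologram pixel,
# so the cost scales as (number of pixels) x (number of scene points).
# Sizes below are illustrative.

pixels = 4000 * 4000            # a 16-megapixel hologram plane
scene_points = 1_000_000        # a modestly detailed 3-D point cloud

ops = pixels * scene_points     # one wave evaluation per pixel/point pair
print(f"{ops:.1e}")             # ~1.6e+13 evaluations for a single frame

# Even at a billion evaluations per second, a single frame takes hours --
# which is why supercomputers still need minutes, and why faster
# approximations are needed for real-time display.
seconds = ops / 1e9
print(round(seconds / 3600, 1))
```

This quadratic-style blowup, per frame, is what motivates the learned approximation described next.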
To speed up the process, Liang Shi, a Ph.D. student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) of the Massachusetts Institute of Technology (MIT), worked with colleagues to develop a deep learning method called tensor holography that can reduce the amount of computation required.
The researchers first created a dataset containing 4,000 images, with color and depth information for each pixel of each image, and a corresponding 3D hologram for each image. They then trained their deep learning model with the pairs of visuals, after which the system was able to uncover the underlying relationship between an image and its holographic counterpart.
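The idea of fitting the image-to-hologram mapping from example pairs, rather than simulating wave physics each time, can be caricatured in a few lines. The sketch below is emphatically not MIT's network: it replaces the deep CNN and the 4,000-image dataset with a one-parameter model learning a toy depth-to-phase rule, purely to illustrate the learn-the-mapping approach:

```python
import numpy as np

# Toy stand-in for the idea behind tensor holography: learn the mapping
# from scene data to hologram from example pairs instead of simulating
# the physics. Here each "hologram" value is just a phase proportional
# to depth (phi = k * d), and a single-weight model recovers that rule
# by gradient descent on a mean-squared error. Purely schematic.

rng = np.random.default_rng(0)
k_true = 2 * np.pi / 633e-9              # the "physics" to be learned

depths = rng.uniform(0.0, 1e-6, 256)     # toy depth-map samples (meters)
phases = k_true * depths                 # ground-truth hologram phases

w = 0.0                                  # single learnable weight
lr = 1e11                                # step size tuned for this scale
for _ in range(500):
    pred = w * depths
    grad = 2 * np.mean((pred - phases) * depths)   # d(MSE)/dw
    w -= lr * grad

# After training, inference is a single multiply -- fast, at the cost of
# a small approximation error relative to the true physical rule.
rel_error = abs(w - k_true) / k_true
print(rel_error)
```

The trade-off mirrors the one Shi describes: a tiny loss of precision in exchange for inference that runs in a fraction of the simulation time.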
When the model was tested using image color and depth data it had not seen before, it was able to generate a photorealistic hologram in milliseconds. "[Our method] is making some approximations, it's not exactly equivalent to the physical simulation," says Shi. "It loses tiny bits of precision, but you gain huge benefits [in terms of] the speed."
In follow-up work, Shi and his colleagues have improved the realism of the holograms their model generates. Initially, their image data did not include information about objects that are hidden or partially hidden from view, which created artifacts at the boundaries of fully visible objects. By including information about the background of an image, however, the model was able to create more accurate virtual 3D objects.
There are also hardware challenges standing in the way of holographic displays. While the pixels in current computer screens only control the color and intensity of light, to create a three-dimensional experience pixels must also manipulate the direction of light. Holographic pixels also need to be much smaller than those in existing displays—less than half of the wavelength of the light impinging on them—to properly diffract that light and allow light waves to interfere with each other and form a 3D image. "You need a pixel size that is basically two to three orders of magnitude smaller than any pixel [currently] out there," says Marescaux.
Marescaux and his colleagues at Swave have developed a solution: a specialized component called a spatial light modulator (SLM), a chip designed specifically for digital holography. When it is illuminated with laser light, billions of tiny pixels on the chip diffract the light, and the diffracted waves interfere to sculpt a holographic image. They have also achieved a high pixel density, which is necessary to create a wide-angle view. The distance between the center of a pixel and the center of an adjacent pixel, called the pixel pitch, is less than 250 nanometers (billionths of a meter) in their demonstrator chips. "At this pixel pitch, we have fields of view of over 100 degrees, and that's us just getting started," says Marescaux. In comparison, VR headsets provide a field of view of about 100 degrees on average, while our natural vision spans nearly 180 degrees when peripheral vision is included.
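The link between pixel pitch and field of view can be checked with the grating equation: a modulator with pitch p can steer light up to an angle where sin(θ) = λ/(2p), giving a full field of view of 2θ. The sketch below assumes blue light at 450 nm and a typical conventional SLM pitch of a few micrometers; both are illustrative choices, not figures from the article:

```python
import math

# Back-of-the-envelope check of the pitch-vs-field-of-view relationship
# via the grating equation: maximum steering angle theta satisfies
# sin(theta) = wavelength / (2 * pitch), and the field of view is 2*theta.
# Wavelength (blue, 450 nm) is an illustrative assumption.

def field_of_view_deg(pitch_m: float, wavelength_m: float = 450e-9) -> float:
    ratio = wavelength_m / (2 * pitch_m)
    if ratio >= 1.0:
        return 180.0   # pitch at or below half a wavelength: full steering
    return 2 * math.degrees(math.asin(ratio))

# Swave's demonstrator pitch (< 250 nm) versus a typical SLM pitch (~4 um).
print(round(field_of_view_deg(250e-9), 1))   # well over 100 degrees
print(round(field_of_view_deg(4e-6), 1))     # only a few degrees
```

This is why sub-wavelength pitch matters: at a conventional pitch of several micrometers, the diffracted cone is just a few degrees wide, far too narrow for an immersive display.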
The chip is also cost-effective to make, since it is manufactured using the latest standard CMOS process, with an additional top layer of phase-change material. So far, Swave has been able to create chips with an unprecedented resolution of 16 megapixels per square millimeter. A four-by-four-millimeter chip could be embedded into augmented reality (AR) eyeglasses to create a 256-megapixel system, says Marescaux, achieving a pixel density that is 100 times greater than the closest SLM competitor and about 400 times greater than a typical VR headset display. "It means that you can project augmented reality into your hands' reach."
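The quoted figures are mutually consistent, as a quick calculation shows (the check below just multiplies the numbers given in the article):

```python
# Quick arithmetic behind the chip numbers quoted above: 16 megapixels
# per square millimeter over a 4 mm x 4 mm die.

density_mp_per_mm2 = 16
die_side_mm = 4

total_megapixels = density_mp_per_mm2 * die_side_mm ** 2   # 16 mm^2 of area
print(total_megapixels)        # 256, matching the quoted 256-megapixel system

# A 250 nm pixel pitch is consistent with that density: each square
# millimeter holds (1 mm / 250 nm)^2 = 4000^2 = 16 million pixels.
pixels_per_mm2 = (1e-3 / 250e-9) ** 2
print(round(pixels_per_mm2))
```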
Marescaux and his team are building prototypes of their chips and expect the technology to be released later this year. It could enable new applications viewable with the naked eye, such as a holographic wall made up of tiles, each tile a holographic projector containing Swave's chips and some magnifying optics. The technology could also be used to create a 360-degree volumetric display: a freestanding hologram that can be viewed from all angles without special glasses. "This is what you have in Star Wars-type movies," says Marescaux.
Holographic displays could take on other forms too. Aksit and his colleagues are investigating a different approach. Conventional VR headsets and AR glasses contain active components such as batteries, electronics, and display screens, but with the push toward thinner and more lightweight designs, Aksit wondered whether the glasses themselves could be made completely passive. If future systems are to incorporate high-end graphics, for example, the required graphics card would add considerable weight to a headset (often about 2.5 kilograms, or more than 5 lbs.), making it uncomfortable to wear. "AR doesn't have to be the way it is right now," he says. "Maybe there is an alternative using computer-generated holography techniques."
Aksit and his team have developed a working prototype of a bespoke holographic projector that beams 3D representations of images at a targeted location one or two meters away. When viewed through eyeglasses with conventional lenses, high-quality, magnified versions of these images appear in mid-air with a wide field of view. Their most recent prototype is able to generate images in full color, and they have also developed an alternative eyepiece that is paper-thin.
"We are all investigating and trying to figure out what kinds of components should go into holographic displays," says Aksit. "How do we make them thin, lightweight, smaller, and providing good visuals? There is no clear winner yet."
Sandrine Ceurstemont is a freelance science writer based in London, U.K.