Author Topic: Meta VR Headset Prototypes Designed to Make VR ‘Indistinguishable From Reality’

Online javajolt

Meta says its ultimate goal with its VR hardware is to make a comfortable, compact headset with visual fidelity that’s ‘indistinguishable from reality’. Today the company revealed its latest VR headset prototypes, which it says represent steps toward that goal.

Meta has made it no secret that it’s pouring tens of billions of dollars into its XR efforts, much of which is going to long-term R&D through its Reality Labs Research division. Apparently, in an effort to shine a bit of light on what that money is actually accomplishing, the company invited a group of press to sit down for a look at its latest accomplishments in VR hardware R&D.

Reaching the Bar

To start, Meta CEO Mark Zuckerberg spoke alongside Reality Labs Chief Scientist Michael Abrash to explain that the company’s ultimate goal is to build VR hardware that meets all the visual requirements to be accepted as “real” by your visual system.

VR headsets today are impressively immersive, but there’s still no question that what you’re looking at is, well… virtual.

Inside Meta’s Reality Labs Research division, the company uses the term ‘visual Turing Test’ to represent the bar that needs to be met to convince your visual system that what’s inside the headset is actually real. The concept is borrowed from the original Turing Test, which denotes the point at which a human can no longer tell the difference between another human and an artificial intelligence.

For a headset to completely convince your visual system that what’s inside the headset is actually real, Meta says you need a headset that can pass that “visual Turing Test.”

Four Challenges

Zuckerberg and Abrash outlined what they see as four key visual challenges that VR headsets need to solve before the visual Turing Test can be passed: varifocal, distortion, retina resolution, and HDR.

Briefly, here’s what those mean:

   • Varifocal: the ability to focus on arbitrary depths of the virtual scene, supporting both essential
      focus functions of the eyes (vergence and accommodation).

   • Distortion: lenses inherently distort the light that passes through them, often creating artifacts like
      color separation and pupil swim that make the existence of the lens obvious.

   • Retina resolution: having enough resolution in the display to meet or exceed the resolving power of
      the human eye, such that there’s no evidence of underlying pixels.

   • HDR: short for high dynamic range, which describes the range of darkness and brightness that
      we experience in the real world (which almost no display today can properly emulate).

The Display Systems Research team at Reality Labs has built prototypes that function as proofs-of-concept for potential solutions to these challenges.


Image courtesy Meta

To address varifocal, the team developed a series of prototypes which it called ‘Half Dome’. In that series the company first explored a varifocal design that used a mechanically moving display to change the distance between the display and the lens, thus changing the focal depth of the image. Later the team moved to a solid-state electronic system which resulted in varifocal optics that were significantly more compact, reliable, and silent. We’ve covered the Half Dome prototypes in greater detail here if you want to know more.
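The moving-display approach described above can be illustrated with the standard thin-lens relationship: a display placed just inside the lens’s focal length produces a distant virtual image, and nudging the display shifts that image’s depth. A minimal sketch, assuming an illustrative 40 mm focal length (not a Half Dome spec):

```python
# Hedged sketch: thin-lens model of a mechanically varifocal design.
# A display just inside the lens's focal length produces a distant
# virtual image; moving the display shifts the focal depth.
# The focal length below is an assumption for illustration only.

def virtual_image_mm(focal_mm: float, display_mm: float) -> float:
    """Distance of the virtual image when the display sits inside the focal length."""
    if display_mm >= focal_mm:
        raise ValueError("display must sit inside the focal length")
    return focal_mm * display_mm / (focal_mm - display_mm)

f = 40.0  # assumed lens focal length, mm
for d in (39.5, 39.0, 38.0):
    depth_m = virtual_image_mm(f, d) / 1000
    print(f"display at {d} mm -> virtual image ~{depth_m:.2f} m away")
```

Note how sub-millimeter display movement swings the focal depth by meters, which is why the later solid-state (non-mechanical) approach was a meaningful reliability win.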

Virtual Reality… For Lenses

As for distortion, Abrash explained that experimenting with lens designs and distortion-correction algorithms that are specific to those lens designs is a cumbersome process. Novel lenses can’t be made quickly, he said, and once they are made they still need to be carefully integrated into a headset.

To allow the Display Systems Research team to work more quickly on the issue, the team built a ‘distortion simulator’, which actually emulates a VR headset using a 3DTV, and simulates lenses (and their corresponding distortion-correction algorithms) in software.

Image courtesy Meta

Doing so has allowed the team to iterate on the problem more quickly, wherein the key challenge is to dynamically correct lens distortions as the eye moves, rather than merely correcting for what is seen when the eye is looking in the immediate center of the lens.

Retina Resolution

Image courtesy Meta

On the retina resolution front, Meta revealed a previously unseen headset prototype called Butterscotch, which the company says achieves a retina resolution of 60 pixels per degree, enough for 20/20 vision. To do so, the team used extremely pixel-dense displays and reduced the field of view to about half that of Quest 2 in order to concentrate the pixels over a smaller area. The company says it also developed a “hybrid lens” that would “fully resolve” the increased resolution, and it shared through-the-lens comparisons between the original Rift, Quest 2, and the Butterscotch prototype.

Image courtesy Meta

While there are already headsets on the market that offer retina resolution, like Varjo’s VR-3, only a small area in the middle of the view (27° × 27°) hits the 60 PPD mark; anything outside of that area drops to 30 PPD or lower. Ostensibly Meta’s Butterscotch prototype maintains 60 PPD across the entirety of its field of view, though the company didn’t explain to what extent resolution is reduced toward the edges of the lens.
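The pixels-per-degree figures above are roughly horizontal pixels divided by horizontal field of view, which shows why shrinking the FOV helps. A back-of-envelope sketch (panel and FOV numbers are ballpark assumptions, not Meta’s specs):

```python
# Rough pixels-per-degree (PPD) arithmetic. This is a uniform-density
# approximation (real lenses spread pixels unevenly), and the panel
# and FOV numbers are ballpark assumptions, not published specs.

def ppd(h_pixels: int, h_fov_deg: float) -> float:
    """Average horizontal pixels per degree of field of view."""
    return h_pixels / h_fov_deg

quest2_like = ppd(1832, 90)  # roughly Quest 2 territory: ~20 PPD
print(f"Quest 2-like: {quest2_like:.0f} PPD")

# Halving the FOV alone only doubles PPD; reaching 60 PPD over ~45
# degrees also requires a panel near 2700 horizontal pixels per eye.
print(f"Pixels needed for 60 PPD over 45 deg: {60 * 45}")
```

This makes clear why Butterscotch needed both a narrower FOV and denser panels, not just one or the other.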

High Dynamic Range

Zuckerberg said that of the four key challenges he and Abrash outlined, “the most important of these all is HDR.”

To prove the impact of HDR on the VR experience, the Display Systems Research team built another prototype, appropriately called Starburst. According to Meta, it’s the first VR headset prototype (as far as the company is aware) that can reach a whopping 20,000 nits.

Image courtesy Meta

The goal of HDR, however, is not to fry your eyes, but to give realistic luminance to things that actually are starkly bright in real life: a fire, an explosion, a firework, or even bright reflections off a window on a cloudless day. All of these things seem to ‘pop’ in real life because they’re so much brighter than the world around them. Being able to replicate that ‘pop’ of brightness in VR is essential to passing the visual Turing Test, says Meta.

For comparison, Quest 2’s display maxes out at 100 nits, and high-end HDR TVs reach around 2,000 nits. That means the Starburst prototype can reach a peak brightness 10 times higher than even some of the best HDR TVs out there.
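The nit figures above can also be compared in photographic “stops” (doublings of luminance), which is closer to how the eye perceives brightness. A rough comparison, treating peak brightness only (true dynamic range also depends on black level):

```python
# Rough brightness comparison in linear ratios and photographic
# "stops" (doublings of luminance). Peak nits alone aren't full
# dynamic range, so treat this as an approximation.
import math

def stops(bright_nits: float, dim_nits: float) -> float:
    """Number of luminance doublings between two brightness levels."""
    return math.log2(bright_nits / dim_nits)

print(f"Starburst vs Quest 2: {20000 / 100:.0f}x ({stops(20000, 100):.1f} stops)")
print(f"Starburst vs HDR TV:  {20000 / 2000:.0f}x ({stops(20000, 2000):.1f} stops)")
```

Even the 10x linear gap over a high-end HDR TV works out to more than three full stops of extra headroom for highlights.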

And while Sony’s upcoming PlayStation VR 2 is expected to be the first commercially available HDR VR headset, ‘HDR’ isn’t exactly well defined, so there’s no telling if it will hit 1,000 nits, let alone 2,000.


Image courtesy Meta

While many of the company’s prototype VR headsets sacrifice weight and size in order to prove those fundamental ideas, Meta is also focused on drastically shrinking the VR headset form factor. To that end, the company has taken its proof-of-concept holographic folded optics research and turned it into a real, working VR headset called Holocake 2.

This impressively compact prototype tackles the two biggest size limitations of contemporary VR headsets: the length of the optical path and the width of lenses.

In order for the lenses in a VR headset to do their job, they must be placed a certain distance from the display; move them any closer and you simply won’t be able to focus the image correctly. ‘Pancake’ optics (also known as ‘folded’ optics) effectively shrink the distance between the lens and the display by ‘folding’ the optical path back on itself, using polarization to bounce the light back and forth before it finally reaches the eye.
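A typical pancake arrangement bounces the light through the lens-to-display gap three times (forward, reflected back, then forward again) before it exits, so the physical gap can be roughly a third of the required optical path. A minimal sketch with illustrative numbers (not Holocake 2 measurements):

```python
# Hedged sketch of why 'folded' (pancake) optics are thinner: with a
# half-mirror and polarization films, light typically crosses the
# lens-to-display gap three times before exiting, so the physical gap
# can be roughly a third of the needed optical path. Numbers below
# are illustrative assumptions only.

def physical_gap_mm(optical_path_mm: float, passes: int = 3) -> float:
    """Physical lens-to-display gap after folding the path 'passes' times."""
    return optical_path_mm / passes

needed_path = 45.0  # assumed optical path length required for focus, mm
print(f"conventional lens gap: ~{needed_path:.0f} mm")
print(f"pancake lens gap:      ~{physical_gap_mm(needed_path):.0f} mm")
```

The trade-off not shown here is efficiency: each polarization bounce discards light, which is part of why bright sources (like Holocake 2’s lasers) matter for folded designs.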

As you shrink that distance you start to see that the thickness of the lenses is actually further limiting how close you can put the display to the eye. To that end, the Holocake 2 prototype uses holographic lenses which are significantly thinner than traditional lenses.

These are essentially thin holographic films that have a hologram of a traditional lens embedded within them. Even though they’re thin, they manipulate light similarly to the thicker lens which they are modeled from.

Image courtesy Meta

The combination of holographic lenses and pancake optics (hence ‘Holocake’) is the key that makes Holocake 2 so compact.

“The creation of the holographic lens was a novel approach to reducing form factor that represented a notable step forward for VR display systems,” says Meta. “This is our first attempt at a fully functional headset that leverages holographic optics, and we believe that further miniaturization of the headset is possible.”

Image courtesy Meta

However, Holocake 2 is a PC-tethered headset, which means it would need some additional bulk (compute and battery) in order to reach the standalone form factor that Meta is gunning for. And unfortunately, Meta says, Holocake 2 requires a laser light source to make its holographic optics work well, and suitable lasers aren’t yet at the size or cost needed for a practical consumer product.

The Cambrian Era

Image courtesy Meta

All of this is an exciting look at where the future of VR hardware may be headed, but will Meta’s upcoming VR headset, Project Cambria, be the point where it all comes together?

Unfortunately, it doesn’t seem to be so. The company indicates that the majority of the technology shown off here is far from ready for prime time. And with Project Cambria expected to launch sometime this year, there just isn’t enough time for Meta to productize all of these technologies. Granted, it seems like Cambria will use folded optics (though not holographic folded optics) to make things a bit more compact than Quest, but it will be a while yet before we see something like Holocake 2 come to market.