Virtual reality (VR) headset manufacturers continue to innovate for the global extended reality (XR) industry, improving key technologies such as eye-tracking, biometrics, processing power, and degree-of-freedom (DoF) movement.
The continued exploration of 5G and edge computing will further cement the critical infrastructure needed to build and secure the Metaverse, the next generation of communications.
VR headsets will maintain a massive presence in the development of the Metaverse frontier and will inevitably expand its use cases, including virtual training, education, healthcare, and cybersecurity, potentially making the technology ubiquitous.
Several of the world’s top tech firms, including Meta, HTC, Varjo, Lenovo, Pico, and Valve, have begun collaborating with semiconductor firms such as Intel, ARM, MediaTek, Qualcomm, and AMD to build the next generation of VR headsets.
For our XR Today round table, we are pleased to welcome:
- Jason McGuigan, Head of Virtual Reality of the Lenovo Intelligent Devices Group
- Urho Konttori, Founder and Chief Technology Officer of Varjo Technologies
- Shen Ye, Senior Director and Global Head of Hardware Products at HTC VIVE
- Christoph Fleischmann, Chief Executive and Founder of Arthur
Panellists discussed the most critical factors in designing flagship headsets, the potential of standalone headsets compared with tethered devices, and the benefits of 5G and cloud computing in enabling future devices and the Metaverse.
XR Today: When designing headsets, what are some of the most demanding requirements? Which specifications are important to consider when developing a (flagship) head-mounted display?
Jason McGuigan: Naturally, design follows function and it is all about the intended use cases and user base. Depending on who uses the headset and for which reasons, we make design decisions to fulfil their needs.
Enterprises look for the most versatile devices to appeal to the widest user base. Enterprise VR solutions must be easy to use, manage, and maintain at scale.
In developing our solutions, Lenovo has considered features that will allow us to meet those goals while maintaining our high standards for quality, privacy, and security across hardware and software.
The Lenovo Mirage VR S3 solution is intended as an entry-level VR headset for enterprises and schools. These customers need a device they can roll out to many users, and can easily use and manage as part of a fleet of devices.
Because of this, we decided 3 degrees of freedom (DoF) was sufficient for people who are generally engaging in armchair immersive training and learning.
We also designed the device to be hygienic for multiple users by adding removable and cleanable rubber face guards and other wipeable plastic or rubber surfaces.
The solution is supported by our ThinkReality platform — a cloud- and device-agnostic software platform that allows institutions to manage headsets, monitor device use and battery life, push content, and perform other functions.
I would like to note that we saw 3DoF as a good entry-level VR solution for many organizations, but we recognize the demand for 6DoF as companies are incorporating more VR training and developing future products accordingly.
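The 3DoF/6DoF distinction comes down to what the headset tracks: orientation only, versus orientation plus position. A minimal sketch in Python (the class names are hypothetical, for illustration only, and not part of any vendor SDK):

```python
from dataclasses import dataclass

@dataclass
class Pose3DoF:
    # Orientation only: pitch, yaw, roll in degrees.
    # Enough for seated ("armchair") experiences where the user looks around.
    pitch: float = 0.0
    yaw: float = 0.0
    roll: float = 0.0

@dataclass
class Pose6DoF(Pose3DoF):
    # Adds positional tracking: the user can also lean, crouch, and walk.
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

seated = Pose3DoF(yaw=45.0)             # user turns their head
room_scale = Pose6DoF(yaw=45.0, z=1.2)  # user also steps forward 1.2 m
```

The extra three positional axes are what make room-scale training possible, and they are also what drive up tracking complexity and cost.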
Urho Konttori: With our device lineup — the VR-3, XR-3, and Aero headsets — we value comfort and quality above all else. We are also pushing the limits of technology to achieve paradigm shifts in high-end training and design, which do not come from cutting corners.
We also consider latency and frame rates as key for XR devices, and while [primary considerations] boil down to user comfort and quality in the end, I see these factors as separate items worth noting.
Shen Ye: There are two key specifications when developing flagship head-mounted displays (HMDs): performance and form factor. The overall ergonomics are key to the form factor, but performance is not just computing as we also consider tracking and battery life.
We have to balance these two considerations, as prioritizing form factor too heavily could lead to excessive compromise on performance.
For example, standard Fresnel lenses require a bigger panel to achieve a higher field of view (FoV). In our latest flagship headsets — the VIVE Pro 2 and VIVE Focus 3 — we used our new optics technology with stacked lens elements to achieve a higher FoV than previous generations with smaller panels.
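The trade-off Shen Ye describes follows from simple magnifier geometry: for a given focal length, horizontal FoV grows with panel width, so a wider FoV normally demands a bigger panel unless the optics shorten the effective focal length (as folded or stacked lens elements do). A rough sketch under a thin-lens approximation; the numbers are illustrative, not VIVE specifications:

```python
import math

def fov_degrees(panel_width_mm: float, focal_length_mm: float) -> float:
    """Approximate horizontal FoV of a simple magnifier HMD optic.
    Thin-lens approximation; real optics add distortion and eye relief."""
    return math.degrees(2 * math.atan(panel_width_mm / (2 * focal_length_mm)))

# Same panel, but a shorter effective focal length (as with folded or
# stacked optics) widens the field of view without a bigger display:
print(round(fov_degrees(90, 50), 1))  # prints 84.0
print(round(fov_degrees(90, 40), 1))  # wider FoV from the same 90 mm panel
```

The same relation read the other way explains the Fresnel case: keeping the focal length fixed, the only route to a higher FoV is a wider panel.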
Christoph Fleischmann: For productive enterprise use, we require a combination of the highest-quality performance and best-in-class user experience.
Primarily, we have seen that standalone (or all-in-one) 6DoF headsets are needed, as users do not want to deal with a second device such as a laptop or desktop PC. Both the headset and controllers need to be 6DoF tracked to guarantee users can be productive in our VR meetings.
Also, the more memory, the better. A comfortable fit, good weight distribution, and an easy-to-find lens sweet spot for a clear picture are a must.
XR Today: What is the potential of standalone 5G-enabled headsets for the future of enterprise use, and what are their potential use cases?
Jason McGuigan: 5G-enabled devices have a bright future as the industry continues to develop the Metaverse for enterprises. Enterprise 5G scenarios are now increasingly common while universal 5G coverage remains on the horizon.
Use cases include the ability to quickly and seamlessly push content as well as deliver location-based experiences, regardless of WiFi connectivity. This is very useful for enterprises managing global device fleets and applications for their employees and customers.
We also see an increase in applications for the cloud and edge rendering with 5G, namely for advanced XR applications pushed to devices. 5G and beyond allows high-performance computing to be centralized, enabling end-users to be distributed and not tethered to workstations.
This makes powerful XR solutions more available to users. Imagine researchers or engineers at a research and development (R&D) facility or university campus stepping into and manipulating the high-resolution, complex 3D models they are working on, wherever and whenever they meet, because rendering is done on the cloud or at the edge rather than on devices or local workstations. Scenarios like this greatly accelerate productivity and innovation, and many organizations are already benefiting.
Urho Konttori: We see 5G as a great facilitator for cloud computing for headsets, which can lead to the development of lightweight, fully-connected, but infinitely powerful glasses.
5G is also a great enabler for making devices work everywhere, which will usher in an era of virtual teleportation to succeed the era of face-to-face interaction we live in right now.
Shen Ye: With today’s connectivity and the ever-growing infrastructure of edge computing, 5G-enabled headsets will bring high-end performance to smaller devices.
We have a lot of experience in 5G and remote rendering technology, and just at the 2022 Consumer Electronics Show (CES), we debuted a training demo showing the Focus 3 streaming over 5G.
While enterprise use cases will inevitably come first, there will be an inflection point where connectivity, infrastructure, and costs equalize, allowing consumers to move to cloud computing as well.
Christoph Fleischmann: As of right now, I have not yet seen a demo in practice where a 5G-enabled XR device could provide a top-quality user experience with content rendered in the cloud.
Cloud-rendered and streamed XR has the potential to be the Holy Grail of unlocking the smallest and most elegant form factor for XR devices, provided the overall experience is absolutely high-end and that latency does not make users less collaborative or productive.
Foveated rendering, where eye-tracking concentrates rendering detail on the spot the user is looking at, might reduce the amount of data that needs to be sent to headsets. I could see a combination of foveated rendering and a powerful 5G-enabled device pulling this off.
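A back-of-the-envelope estimate shows why foveated rendering helps cloud streaming: if only a small foveal region is rendered at full resolution and the periphery is downscaled, the pixel budget drops sharply. The fractions below are illustrative assumptions, not measured headset figures:

```python
def pixels_full(width: int, height: int) -> int:
    # Pixel count when the whole frame is rendered at full resolution.
    return width * height

def pixels_foveated(width: int, height: int,
                    fovea_fraction: float = 0.1,
                    periphery_scale: float = 0.25) -> float:
    """Rough pixel budget with a small full-resolution foveal region
    and a periphery downscaled in both dimensions.
    Assumed fractions are illustrative only."""
    total = width * height
    fovea = total * fovea_fraction                            # full resolution
    periphery = total * (1 - fovea_fraction) * periphery_scale ** 2
    return fovea + periphery

full = pixels_full(2448, 2448)   # per-eye resolution in the VIVE Pro 2 class
fov = pixels_foveated(2448, 2448)
print(f"reduction: {1 - fov / full:.0%}")  # prints "reduction: 84%"
```

Under these assumptions, roughly five sixths of the per-frame data disappears, which is the kind of saving that could make 5G streaming budgets workable.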
XR Today: What is the process for developing a software development kit (SDK) for your headsets? How do you receive feedback on headset usage from developers, or analyse feedback to determine where to make improvements or changes?
Jason McGuigan: The ThinkReality team looks at building SDKs as one of our primary connections with developers. Our current generations of SDKs emphasize ease of use and enterprise deployment.
Our future developments all target integration using OpenXR standards to allow developers to move more seamlessly across different endpoint devices. We believe developing universal standards is important for the democratization of the Metaverse.
Our ThinkReality platform enables enterprises to monitor device utilization, allowing developers to track and improve experiences based on data-driven analysis.
We take a customer-centric approach to our technology and services by maintaining an open dialogue to ensure constant improvements to our development tools.
Urho Konttori: We make our SDK development processes super easy, as all OpenXR and OpenVR content just works, and Unity and Unreal Engine are plug-and-play.
Conversely, some high-end simulators that have their own graphics engines tend to take a bit longer. I am personally super happy to see OpenXR gaining critical market share and I continue to follow all of the interesting developments in this particular space.
I have nothing against OpenVR, but I find the platform is simply not as extensive as OpenXR.
Shen Ye: The key part of an SDK is providing a toolset that makes it easy for developers to build for our systems. We not only do internal testing on these to create internal demo content to show off our prototypes, but we also seed them to developers very early on and receive feedback from there.
A key piece here is OpenXR, which allows developers to create content against one SDK and have it running across multiple headsets that also support OpenXR in their runtime.
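The idea behind OpenXR can be sketched as a standard interface that each vendor's runtime implements, so application code is written once and runs on any conforming headset. This is a simplified Python analogy of that pattern, not the real OpenXR C API; the runtime names are invented:

```python
from abc import ABC, abstractmethod

class XrRuntime(ABC):
    """Stand-in for an OpenXR runtime: each vendor ships its own
    implementation behind the same standard interface."""
    @abstractmethod
    def begin_session(self) -> str: ...

class VendorARuntime(XrRuntime):
    def begin_session(self) -> str:
        return "session on Vendor A headset"

class VendorBRuntime(XrRuntime):
    def begin_session(self) -> str:
        return "session on Vendor B headset"

def run_app(runtime: XrRuntime) -> str:
    # Application code targets the shared interface only,
    # so it runs unchanged on any conforming runtime.
    return runtime.begin_session()

print(run_app(VendorARuntime()))  # session on Vendor A headset
print(run_app(VendorBRuntime()))  # session on Vendor B headset
```

The actual standard covers far more (instances, swapchains, input actions), but the portability argument is exactly this: the app depends on the specification, not on any one vendor's SDK.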
Christoph Fleischmann: We categorically have only had positive experiences with VR/MR hardware manufacturers, and tend to provide extensive feedback on our own experiences as well as that of our users.
From this, we have had multiple cases where our input was heard and implemented. This applies both to the operating system capabilities of headsets as well as to hardware improvements made throughout each generation.
XR Today: There seems to be a difference of opinion in the headlines about whether virtual or augmented reality will dominate the XR market. What would be your case for arguing in favour of a strong VR industry?
Jason McGuigan: I think this is like asking which tool will dominate: the hammer or the screwdriver? It is not really an either-or proposition, and although there are a lot of crossovers, AR and VR are separate tools capable of very different things in the Metaverse.
Enterprises and consumers will use and benefit from having both AR and VR in their XR toolbox.
VR will continue to proliferate because its strengths are:
- VR is more immersive. A deeper journey into the Metaverse will require VR, and VR experiences connect users far better with content by offering stronger emotional bonds, better understanding and retention, and fewer distractions. Enterprises and educational institutions are also embracing VR training and learning.
- VR is transformative, whereas AR depends on physical assets and locations. AR is highly effective at superimposing digital overlays on a physical asset in front of the user. However, for working with intricate 3D models, the experience is far better in VR, which is totally virtual and also more cost-effective.
- VR hardware is years ahead of AR and will likely remain so for a while. Factors such as FoV, stability, and graphic fidelity are all more advanced in VR.
Urho Konttori: Content exists in the VR ecosystem already, both on the games and professional apps side. Almost everything done for PC and console gaming works conceptually out of the box in VR, but would need to be completely rethought to integrate and function well in AR.
AR is great for information overlays, and perhaps when we get those black pixels one day, it will become great for object viewing as well, but to date, the industry is far, far away from that.
I do have a strong sense that video-based XR will take us to interesting places, for both entertainment and productivity, faster and better than AR does.
Shen Ye: For us, it really should not be a one or the other choice. Each has its targeted use cases, but share similarities. Focusing on high-fidelity VR content establishes the building blocks needed for AR.
The waveguides needed for optical AR are produced using technologies similar to those used to create semiconductors, which makes AR quite expensive.
AR also has a long way to go in terms of resolution, brightness, and even simple things like FoV, and it requires different sensors and cameras, which scan the wider environment differently compared to VR. VR devices with camera passthrough are a good starting point for prototyping.
Overall, AR hardware needs more time to mature, but there are challenges to both technologies.
Christoph Fleischmann: For us, it is pretty clear that spatial computing will not unfold with AR and VR competing for users. VR headsets add MR capabilities, which are essentially AR, while AR headsets are trying to push the quality of experiences and their FoV, eventually resulting in VR.
There will be a convergence of both technologies, and right now, passthrough VR seems to be the most promising to truly bridge the gap until AR headsets are powerful enough to provide a full VR experience. Generally speaking, we see spatial computing prevailing in the short and medium run.
Headset types that feature both AR and VR capabilities will then become the standard. AR will become the non-escapist default mode, where users will engage with the spatial world more than 80 percent of the time. On the other hand, VR will be chosen for work and experiences that benefit from full immersion and an unshackled (not limited by physical surroundings) spatial experience.
We can compare it to how we use laptops and smartphones today. Right now, I might edit a document on my laptop, as it is a more suitable tool for more intense work, similar to how VR will be the most capable medium for the “deepest” types of work.
To write a quick message, I might resort to my phone as it is mobile and perfectly capable of handling the complexity of this task. AR will be similar to this type of interaction, although much more powerful than what I can do on a smartphone interface.
So just as I switch between two different sizes of flat digital interfaces, in the future, I will switch between two different ways of viewing the spatial dimension. The great benefit here is that I will likely not need to switch any device as the convergence of AR and VR will likely allow headsets and smart glasses to power both fully.