Mixed reality (MR) headsets have become a critical focus for enterprise use cases as the underlying technology matures to merge the physical and virtual worlds seamlessly.
Adoption rates for MR devices are set to grow as businesses leverage emerging technologies and develop ecosystems to facilitate mass adoption.
For our XR Today round table, we are pleased to welcome:
- David Weinstein, Director of XR, NVIDIA
- Ryan Groom, Co-Founder and CTO, Kognitiv Spark
Our panellists explore the latest solutions across mixed reality and how MR technologies can assist current and future workforces. They also explain why such tools are vital to the development of the metaverse.
XR Today: What are the top trends in MR headset development for 2023?
David Weinstein: In 2023, one of the top trends in MR will be high-quality streamed experiences. This will include photorealistic renderings that blur the line between what’s real and what’s virtual, as well as seamless interplay between real and virtual objects.
One area of impressive progress has been creating more realistic virtual objects that accurately interact with the real world. For example, the headlights of a virtual car can light up real-world objects, and the virtual car can cast shadows and show reflections consistent with its position in the real world.
This accurate occlusion and interaction of real objects with virtual content — and vice versa — creates more immersive, believable MR experiences.
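This interplay is easiest to see in occlusion. As a rough illustration of the idea (not NVIDIA's actual pipeline), a compositor can compare the sensed real-world depth map against the virtual scene's depth buffer at every pixel and let whichever surface is closer win:

```python
# Toy illustration of per-pixel, depth-based occlusion (not NVIDIA's actual
# pipeline): where the sensed real-world surface is closer than the virtual
# one, the passthrough camera pixel wins; otherwise the rendered pixel wins.
import numpy as np

H, W = 4, 4
camera_rgb    = np.zeros((H, W, 3), dtype=np.float32)   # passthrough camera feed
virtual_rgb   = np.ones((H, W, 3), dtype=np.float32)    # rendered virtual content
real_depth    = np.full((H, W), 2.0, dtype=np.float32)  # metres, from depth sensors
virtual_depth = np.full((H, W), 3.0, dtype=np.float32)  # metres, from the z-buffer
virtual_depth[1:3, 1:3] = 1.0   # the virtual object pokes in front of the wall here

real_wins = real_depth < virtual_depth                  # occlusion mask
composite = np.where(real_wins[..., None], camera_rgb, virtual_rgb)
print(real_wins.astype(int))    # 1 = real world occludes the virtual object
```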
Another trend is likely to be the development of more comfortable, ergonomic MR headsets. As MR technology becomes more mainstream, there will be greater demand for lightweight headsets that are easy to wear for extended periods of time without causing discomfort or strain.
In 2023, we may see the exploration of new battery materials and broader support for connections to high-bandwidth, low-latency networks like telco 5G, reducing the compute and power needed on the head-mounted display itself.
A final area where we expect to see incredible progress is the integration of MR technology with artificial intelligence (AI) tools and platforms. Digital co-pilots will also become a natural extension of MR devices.
The computing for AI based on new transformer models, such as ChatGPT, NVIDIA NeMo, and others, will again require fast, robust connections to the edge.
Ryan Groom: The biggest trend is to reduce headset size and make the devices more usable outdoors. Like smartphones in their early days, units are a bit fragile, so they need a suitable IPX rating for use in industrial environments or everyday life.
Mass-scale adoption will require units that are lightly ruggedized, resistant to the elements, and deliver an experience unaffected by direct sunlight. Units need 4G/5G connections, independent computing, and an operating system (OS) designed to run on a small form factor device with an above-average sensor count to deliver the experience.
XR Today: What do you believe is the next stage of development for MR headsets?
David Weinstein: The next development stage for MR headsets will likely include enhancing natural interfaces through AI. The goal is to make the user experience more seamless and intuitive, enabling more natural, immersive user interactions with MR environments.
One example of how AI can enhance natural interfaces is by creating realistic facial animations for more natural interactions with virtual avatars. This technology could integrate with MR headsets to create more realistic, expressive virtual avatars, which would make interactions in MR environments feel more like face-to-face conversations.
Adding natural language AI that combines automatic speech recognition (ASR), large language models (LLMs), and zero-shot natural language processing (NLP) models enables systems that can understand and translate speech in real time.
These are particularly useful for applications like virtual collaboration, where participants can communicate across any language and feel like they’re in the same room. These speech interfaces can also interact with XR applications and environments, eliminating the need for obtrusive graphical interfaces and menus.
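To make that chain concrete, here is a minimal sketch of the ASR-to-translation-to-speech flow described above. The three model calls are placeholder stubs (assumed names, not any particular vendor's SDK); in practice each would wrap a deployed model service:

```python
# Sketch of the ASR -> LLM translation -> TTS chain; the three model calls
# are placeholder stubs, not a specific vendor SDK.
def speech_to_text(audio_frames: bytes, source_lang: str) -> str:
    # ASR placeholder: a real system would run a streaming recognizer here.
    return "where is the torque spec for this valve"

def translate(text: str, source_lang: str, target_lang: str) -> str:
    # LLM / zero-shot NLP placeholder: prompt a translation-capable model.
    return f"[{target_lang}] {text}"

def text_to_speech(text: str) -> bytes:
    # TTS placeholder: synthesize audio for playback in the listener's headset.
    return text.encode("utf-8")

def relay_utterance(audio_frames: bytes, source_lang: str, target_lang: str) -> bytes:
    """Pipe one participant's utterance into another participant's language."""
    text = speech_to_text(audio_frames, source_lang)
    return text_to_speech(translate(text, source_lang, target_lang))

print(relay_utterance(b"\x00" * 320, source_lang="en", target_lang="de"))
```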
Another area where AI could significantly impact MR headset development is content generation. Generative AI algorithms can create vast, immersive virtual environments that can be explored in MR.
This could be particularly useful for applications like gaming, where players want to feel fully immersed in the game world. By using generative AI, developers could create virtually limitless game worlds, ensuring that players never run out of new experiences to explore.
In addition to these areas, AI could improve other aspects of MR headset technology, such as tracking and rendering. Using machine learning (ML) algorithms to analyze sensor data, developers could create more accurate and responsive tracking systems, making interactions in MR environments feel more natural and immersive. Similarly, AI could improve rendering techniques, creating more realistic and lifelike virtual environments.
Finally, MR device sensors will likely evolve. MR devices can be considered Internet of Things (IoT) devices, pulling data in from the user's real environment for AI analysis onboard or via remote computing; the AI analysis tools can then present contextual information about the user's environment.
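As a small illustration of the kind of sensor-data filtering behind smoother tracking (a textbook baseline, not what any particular headset ships), a complementary filter blends drifting-but-smooth gyroscope integration with noisy-but-absolute accelerometer estimates:

```python
# Classic one-axis complementary filter for head pitch: gyroscope integration
# is smooth but drifts; the accelerometer is noisy but drift-free; blending
# the two gives stable, responsive tracking. Sample values are illustrative.
def fuse_pitch(prev_pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    integrated = prev_pitch + gyro_rate * dt               # dead-reckon from the gyro
    return alpha * integrated + (1 - alpha) * accel_pitch  # bleed in the absolute reading

pitch = 0.0
dt = 1.0 / 90.0  # 90 Hz sensor loop
for gyro_rate, accel_pitch in [(12.0, 0.2), (11.5, 0.4), (-3.0, 0.3)]:  # deg/s, deg
    pitch = fuse_pitch(pitch, gyro_rate, accel_pitch, dt)
    print(f"fused pitch: {pitch:.3f} deg")
```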
Ryan Groom: Cellular and GPS capabilities are a must so devices can become better at location awareness. To work seamlessly and combine the digital and physical worlds, a headset needs to be aware of its physical space, indoors and outdoors, with precision.
Size and style are both going to be important. It has to go from a headset to smart glasses to get true widespread adoption. The experience needs to be as easy as putting on a pair of glasses that integrate your digital world into your physical world without a cumbersome setup or worrying about wearing a massive headset.
The headsets or glasses need to have independent computing. They must be standalone devices, not just a smart screen tethered to a mobile phone.
XR Today: Which use cases do you help to facilitate with your enterprise solutions?
David Weinstein: It’s computationally challenging to deliver high-quality MR experiences. The seamless interplay between virtual content and the real world requires highly accurate rendering, lighting, materials, physics, tracking, and more. Today, we’re just scratching the surface of what’s possible, and it’s already stunning.
In the not-too-distant future, the experiences will be jaw-dropping as the lines between the physical and the digital continue to blur and ultimately disappear. AI will play a fundamental role in all this, from generating and rendering that content to providing natural interfaces and digital assistants that will streamline experiences.
The power required to deliver this will be substantial, with much of it being delivered from the edge with streaming. NVIDIA AI services and CloudXR streaming will play an important role in marshalling sensor data from MR devices to the cloud, creating renderings and scene understanding, and seamlessly blending the virtual and the real. These technologies will uplevel MR experiences across every imaginable vertical – freeing developers to use as much computational power as they need to deliver the highest-quality immersion.
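Conceptually, that streaming loop is simple: the device sends pose and sensor data up, and the edge GPU sends encoded frames back. The sketch below illustrates the pattern only; it is not the CloudXR SDK API, and the `Headset` and `RemoteRenderer` classes are hypothetical stand-ins:

```python
# Generic pose-up / frames-down remote-rendering loop. This illustrates the
# pattern only; it is NOT the CloudXR SDK API, and Headset / RemoteRenderer
# are hypothetical stand-ins for the device runtime and the edge GPU service.
import time

class Headset:
    def read_pose(self):
        return (0.0, 1.6, 0.0, 0.0, 0.0, 0.0)  # x, y, z, yaw, pitch, roll
    def display(self, frame: bytes):
        pass                                    # decode and present the frame

class RemoteRenderer:
    def render(self, pose) -> bytes:
        return b"encoded-frame"                 # server-side GPU render + video encode

def stream(headset: Headset, server: RemoteRenderer, fps: int = 72, frames: int = 3):
    budget = 1.0 / fps
    for _ in range(frames):                     # a real client loops until disconnect
        start = time.monotonic()
        pose = headset.read_pose()              # sample tracking as late as possible
        frame = server.render(pose)             # round trip to the edge/cloud GPU
        headset.display(frame)                  # local decode (+ reprojection)
        time.sleep(max(0.0, budget - (time.monotonic() - start)))

stream(Headset(), RemoteRenderer())
```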
The first verticals where we expect NVIDIA technology to impact MR will be those where photorealistic design is an essential element of the product pipeline, including in media and entertainment, content creation, automotive design, and architectural design.
NVIDIA RTX ray tracing and Material Definition Language are already emerging as critical components in these industries. The NVIDIA Omniverse platform, based on the Universal Scene Description (USD) framework, also enables unified, multi-source data integration and multi-participant immersive collaboration for design and exploration.
Ryan Groom: We offer worker performance support and provide enterprise content in holographic spaces to assist workers in doing their job. This can be in the form of PDFs sized and placed where you need them in your physical work environment.
Additionally, it can incorporate 3D mentoring files that show, step by step, how to perform a complex task, and the unit can even make a video call to give someone anywhere a first-person view of your situation so they can help you.
XR Today: How do your products and services help to incubate the enterprise metaverse?
David Weinstein: NVIDIA is making significant contributions toward the incubation of the enterprise metaverse. The metaverse refers to a collective, virtual space where people can interact with a computer-generated environment and other users. NVIDIA products and services are instrumental in making the metaverse accessible and immersive for everyone, regardless of their computing power.
NVIDIA Omniverse is an open platform allowing users to collaborate on real-time 3D simulations, animation, and visualization projects. The platform provides a common framework for different software tools, which enables users to create and share content easily. Omniverse is based on the USD format, which enables cross-platform collaboration and ensures compatibility between different tools.
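For a feel of why USD suits this, here is a minimal OpenUSD example using the open source `usd-core` Python package (file and prim names are illustrative): one layer authors an asset, and a second stage references it, so multiple tools or participants can compose their edits non-destructively:

```python
# Minimal OpenUSD example of the layering that underpins Omniverse-style
# collaboration (requires `pip install usd-core`; names are illustrative).
from pxr import Usd, UsdGeom

# One participant or tool authors an asset into its own USD layer.
stage = Usd.Stage.CreateNew("car_body.usda")
UsdGeom.Xform.Define(stage, "/Car")
body = UsdGeom.Sphere.Define(stage, "/Car/Body")  # placeholder geometry
body.GetRadiusAttr().Set(0.5)
stage.GetRootLayer().Save()

# A second stage references that layer, so edits compose non-destructively.
shared = Usd.Stage.CreateNew("design_review.usda")
car = shared.DefinePrim("/Review/Car")
car.GetReferences().AddReference("./car_body.usda", "/Car")
shared.GetRootLayer().Save()
```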
In addition, NVIDIA RTX is a powerful visual computing platform that enables real-time ray tracing and AI-accelerated computing. The technology allows for the creation of realistic environments and objects in virtual worlds, making the metaverse more immersive for users. RTX also enables real-time physics, which enhances interactions between objects in virtual environments.
NVIDIA AI technology is also a critical component of the enterprise metaverse. AI enables the creation of intelligent virtual agents and assistants that can interact with users in virtual worlds. These agents can provide users with support, guidance, and entertainment, making the metaverse more engaging and compelling. NVIDIA AI also allows for the creation of personalized experiences for users to enhance their engagement and satisfaction.
Finally, the NVIDIA CloudXR platform serves as an essential tool for delivering the enterprise metaverse to everyone, everywhere. CloudXR enables streaming XR experiences to devices of varying computational power, making high-quality MR accessible to a broader audience.
The platform uses NVIDIA video-streaming technologies to deliver low-latency, high-quality streaming of XR content, allowing users to access the metaverse from any device with an internet connection.
NVIDIA products and services are playing a significant role in incubating the enterprise metaverse. Through NVIDIA Omniverse, RTX, AI, and CloudXR, we’re providing the tools necessary to create immersive virtual environments accessible to everyone.
Ryan Groom: One of the most interesting features of RemoteSpark that helps incubate the enterprise metaverse is the ability to place or pin digital content in a physical space.
Imagine it’s day one on the job in a new factory and you put an MR device on your head, turn it on, and log in. Holographic pins appear throughout the factory, which you can open to help you do your job.
This content can range from maintenance history and maintenance procedures to 3D animated troubleshooting guides, 3D digital metrics dashboards, and even the contact to call if you need assistance, all floating over the machine you are working on. No more menus and no searching for the right document or phone number. Your entire workspace is annotated with the knowledge to assist you at the actual place you need the data. The simplicity is powerful.
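As a sketch of how such pinned content might be modelled (assumed fields, not RemoteSpark's actual schema), each pin ties a piece of content to a persisted spatial anchor, and the headset resolves the pins once it recognizes the space:

```python
# Illustrative model of "pinned" holographic content (assumed fields, not
# RemoteSpark's actual schema): each pin ties content to a persisted spatial
# anchor, and the headset resolves the pins once it recognizes the space.
from dataclasses import dataclass

@dataclass
class SpatialPin:
    anchor_id: str      # persisted spatial anchor in the shared room map
    offset_m: tuple     # (x, y, z) offset from the anchor, in metres
    content_type: str   # e.g. "pdf", "3d_guide", "dashboard", "contact"
    content_uri: str    # where the asset or call target lives
    label: str

def pins_for_space(space_id: str, registry: dict) -> list:
    """Return the pins to display once the headset localizes itself."""
    return registry.get(space_id, [])

registry = {
    "assembly_bay_3": [
        SpatialPin("anchor-017", (0.0, 1.2, 0.0), "pdf",
                   "docs://press-42/maintenance.pdf", "Maintenance history"),
        SpatialPin("anchor-017", (0.3, 1.5, 0.0), "contact",
                   "call://experts/press-line", "Call an expert"),
    ],
}
print([p.label for p in pins_for_space("assembly_bay_3", registry)])
```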
XR Today: Anything else you would like to add?
David Weinstein: One of the most exciting aspects of MR is the idea of collaboration. Imagine working on the design of any industrial digital twin: a house, manufacturing facility, automobile, boat, or any other object.
People can look at the design at full scale, in high fidelity, with complete photorealism, and then look across that model while talking to colleagues who join the virtual room from across the world, or even while talking with an AI co-pilot. That's the promise of MR in industrial digital twins.
AI co-pilots are going to change XR in unimaginable ways. Just a few months ago, most people thought creating virtual worlds through a short text description was impossible. Now, everyone’s talking about it, and we’re seeing incredible early demos of speech AI, LLMs, recommenders, and virtual co-pilots.
AI can also accelerate rendering through NVIDIA DLSS technology, making high-fidelity ray tracing at high frame rates with low latency practical. With more AI processing and graphics processing unit (GPU) power, the lighting models underneath the ray tracing will become increasingly complex and realistic. MR has jumped into practice at the right time to grow along with the unprecedented acceleration of AI.
Ryan Groom: The adoption of MR devices will have a unique trajectory. Mobile phones were a slow burn until the smartphone caused a consumer and enterprise explosion of adoption.
Everybody soon used them for personal and professional reasons. VR, which too often gets lumped in with MR, is not a good showcase of MR's power, and this muddies the waters of understanding and adoption.
Even the adoption of personal computers happened at personal and enterprise levels together. With MR, there is currently no strong example at the consumer level, and I think that has slowed adoption in the industrial space because the concept is very new to everyone.
It would be great if a lightweight, even feature-limited, pair of MR glasses or a headset could launch for mass adoption to dispel the consumer sentiment that an XR experience is a VR experience.