Augmented, virtual, and mixed reality (AR, VR, and MR), collectively referred to as extended reality (XR), have the potential to transform most aspects of our lives, including the way we teach, conduct science, practice medicine, entertain ourselves, train professionals, interact socially, and more. Indeed, XR is envisioned to be the next interface for most of computing.

While XR systems exist today, they have a long way to go to provide a tetherless experience approaching the perceptual abilities of humans. There is a gap of several orders of magnitude between what is needed and what is achievable in the dimensions of performance, power, and usability. Overcoming this gap will require cross-layer optimization and hardware-software-algorithm co-design, targeting improved end-to-end user-experience metrics. This requires a concerted effort from academia and industry covering all layers of the system stack (e.g., hardware, compilers, operating systems), all algorithmic components comprising XR (e.g., computer vision, machine learning, graphics, optics, haptics, and more), and end-user application software and middleware (e.g., game engines).

The current XR ecosystem is mostly proprietary. For systems researchers, there have been no open-source benchmarks or open end-to-end systems to drive or benchmark hardware, compiler, or operating system research. For algorithm researchers, there are no open systems to benchmark the impact of new algorithms on end-to-end user-experience metrics. Similarly, developers and implementers of individual hardware and software components (e.g., a new SLAM hardware accelerator or a new software implementation of foveated rendering) have no open way to benchmark their individual components in terms of end-to-end impact.

We aim to bring together academic and industry members to develop a community reference open-source testbed and benchmarking methodology for XR systems research and development that can mitigate the above challenges and propel advancements across the field.

Goals for the Testbed

Our goals for the testbed include the following:

  • Standalone components that make up representative AR/VR/MR workflows (e.g., odometry, eye tracking, hand tracking, scene reconstruction, reprojection, etc.). These components may be hardware or software implementations, written in various languages and optimized for various current and future hardware (e.g., CPUs, DSPs, GPUs, accelerators).
  • End-to-end XR system with a runtime to integrate the above components. The XR device runtime should provide: 
    • compliance with the OpenXR standard, 
    • real-time scheduling and resource management, 
    • offload and onload functionality to other edge devices, edge servers, and the cloud to enable multi-tenant and multi-party experiences, 
    • plug-and-play functionality for replacing different implementations of a given component (in hardware or software), 
    • flexibility to instantiate different workflows consisting of subsets of components representing a variety of use cases in AR, VR, or MR.
  • Edge and cloud server frameworks for multi-tenant and multi-party experiences and content serving.
  • End-user applications, including middleware to represent a variety of single- and multi-user AR, VR, and MR use cases such as games, virtual tours, education, etc.
  • Data sets and test stimuli to drive the testbed and telemetry, representing realistic use cases. 
  • Telemetry to provide extensive benchmarking and profiling information, ranging from detailed hardware measurements (e.g., performance- and power-related statistics) to end-to-end user-experience metrics (e.g., motion-to-photon latency, image quality metrics, etc.).
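
To illustrate the kind of plug-and-play runtime and telemetry the goals above describe, here is a minimal sketch in Python. All names (`Component`, `Runtime`, the fake stages) are hypothetical and for illustration only; they are not part of the proposed testbed, the OpenXR API, or any existing XR runtime.

```python
# Hypothetical sketch: a runtime that runs swappable pipeline components in
# order and records per-stage latency as a simple form of telemetry.
import time
from abc import ABC, abstractmethod


class Component(ABC):
    """A swappable pipeline stage (e.g., odometry, reprojection)."""

    @abstractmethod
    def step(self, frame: dict) -> dict:
        """Consume the shared per-frame state and return its updated form."""


class Runtime:
    """Runs registered components in order; records per-stage wall time."""

    def __init__(self):
        self.components: list[Component] = []
        self.telemetry: dict[str, list[float]] = {}

    def register(self, component: Component) -> None:
        # Plug-and-play: any Component implementation can be registered,
        # so a stage can be replaced without touching the rest of the pipeline.
        self.components.append(component)

    def run_frame(self, frame: dict) -> dict:
        for c in self.components:
            start = time.perf_counter()
            frame = c.step(frame)
            name = type(c).__name__
            self.telemetry.setdefault(name, []).append(
                time.perf_counter() - start)
        return frame


class FakeOdometry(Component):
    def step(self, frame):
        frame["pose"] = (0.0, 0.0, 0.0)  # placeholder head pose
        return frame


class FakeReprojection(Component):
    def step(self, frame):
        frame["image"] = f"reprojected@{frame['pose']}"
        return frame


runtime = Runtime()
runtime.register(FakeOdometry())
runtime.register(FakeReprojection())
out = runtime.run_frame({})
```

In a real testbed the per-stage timings would feed richer metrics such as motion-to-photon latency, and the `Component` interface would be replaced by OpenXR-compliant, scheduler-aware plugin contracts; the sketch only shows the structural idea of swappable stages plus built-in measurement.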

Our goals are ambitious and require contributions from the wider XR, hardware, and software systems community. The consortium aims to bring together these communities, seeking and curating contributions towards the above goals for a common open-source community testbed.