
Working Groups

Tentative Working Groups


Metrics

Globally, what needs to be measured (e.g., image quality, pose error, reconstruction error, energy, and so on)? For each piece of information, what quantitative metric should be used (e.g., FLIP for image quality, ATE for pose error)? Which metrics should be automatically collected by the runtime, and which should be the responsibility of the components? This group will split into multiple groups, possibly one per class of metrics.
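To make one of the named metrics concrete, absolute trajectory error (ATE) compares an estimated pose trajectory against ground truth. A minimal Python sketch, assuming the two trajectories are already time-aligned 3-D position sequences (the usual rigid-alignment preprocessing step is omitted):

```python
import math

def ate_rmse(estimated, ground_truth):
    """Root-mean-square absolute trajectory error over paired 3-D positions.

    Assumes both trajectories are already time-aligned and expressed in the
    same coordinate frame (the Umeyama alignment step is omitted here).
    """
    assert len(estimated) == len(ground_truth), "trajectories must be paired"
    sq_errors = [
        sum((e - g) ** 2 for e, g in zip(est, gt))  # squared distance per pose
        for est, gt in zip(estimated, ground_truth)
    ]
    return math.sqrt(sum(sq_errors) / len(sq_errors))
```

Whether the runtime computes such a metric itself or delegates it to a metrics component is exactly the kind of question this group would settle.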


Interfaces

What should the generic communication interface be? What should the individual component-to-component interfaces look like (e.g., camera to VIO, reprojection to hologram, and so on)? How should the interfaces be specified? How should conformance be tested, both of the runtime implementation and of the component being plugged in? Should there be an interface for metrics? This group will also split into multiple groups, possibly one per component.
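One possible shape for a generic communication interface is a topic-based publish/subscribe fabric, so components never call each other directly. The following Python sketch is purely illustrative; the names `Switchboard`, `Component`, and the `imu_cam` topic are assumptions, not part of any existing runtime:

```python
from abc import ABC, abstractmethod
from collections import defaultdict
from typing import Any, Callable

class Switchboard:
    """Minimal pub/sub fabric: components exchange data only via named topics."""
    def __init__(self):
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, payload: Any) -> None:
        for callback in self._subscribers[topic]:
            callback(payload)

class Component(ABC):
    """Base class every pluggable component would implement."""
    def __init__(self, switchboard: Switchboard):
        self.switchboard = switchboard

    @abstractmethod
    def start(self) -> None: ...

# Example pair: a camera feeding a VIO component over a hypothetical topic.
class Camera(Component):
    def start(self) -> None:
        self.switchboard.publish("imu_cam", {"frame": 0})

class VIO(Component):
    def __init__(self, switchboard: Switchboard):
        super().__init__(switchboard)
        self.poses = []
        switchboard.subscribe("imu_cam", self.on_frame)

    def on_frame(self, payload) -> None:
        # A real VIO would estimate a pose; this stub records a placeholder.
        self.poses.append({"frame": payload["frame"], "pose": (0.0, 0.0, 0.0)})

    def start(self) -> None:
        pass
```

A conformance test for a plugged-in component could then be phrased entirely in terms of the topics it must consume and produce, independent of its internals.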


Default implementations

What does the complete system look like? What should the default implementation for each component be? What alternative implementations should the testbed itself provide? This group will also split into multiple groups, possibly one per component and possibly integrated with that component's interface group.

Configurations and benchmark reporting

What is a system configuration for benchmarking? A possible definition: the set of components, their parameters (frequencies and other knobs), constraints (e.g., a power budget), and the hardware. What should the baseline configuration be? What other representative configurations should the testbed provide? For benchmarking, is a result from one configuration enough, or is a full set of results required? How should benchmark results be reported?
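Under the possible definition above, a baseline configuration might be serialized as something like the following. Every field name and value here is hypothetical, chosen only to show the four pieces (components, parameters, constraints, hardware) in one place:

```python
# Hypothetical benchmark configuration matching the definition sketched
# above: components with their knobs, system-wide constraints, and hardware.
baseline_config = {
    "components": {
        "camera":       {"impl": "reference_camera",   "frequency_hz": 30},
        "vio":          {"impl": "reference_vio",      "frequency_hz": 200},
        "reprojection": {"impl": "reference_timewarp", "frequency_hz": 90},
    },
    "constraints": {"power_budget_w": 5.0},
    "hardware":    {"platform": "desktop_gpu"},
}
```

A "full set of results" could then mean sweeping such a structure over several hardware and constraint settings while holding the component set fixed.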


Applications

What representative OpenXR applications should the testbed provide to drive the system? What should the mixture of AR, VR, and MR applications be? Which domains should the applications come from? Should benchmark submitters be allowed to run whatever application they want?

Data sets

What representative data sets should the testbed provide to drive the system? (Similar issues as applications.)


Infrastructure

Repository hosting, CI/CD, and contribution mechanisms. Analysis scripts for processing metrics, generating graphs, and creating shareable reports. Data set and application hosting. This will also likely split into multiple groups.
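An analysis script in the spirit described here might aggregate per-frame metrics into a summary for a shareable report. A minimal Python sketch, assuming a hypothetical two-column CSV schema (`metric,value`):

```python
import csv
import io
import statistics

def summarize_metrics(csv_text: str) -> dict[str, dict[str, float]]:
    """Aggregate a per-frame metrics CSV (columns: metric, value) into
    per-metric mean and max, the raw material for a report or graph."""
    values: dict[str, list[float]] = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        values.setdefault(row["metric"], []).append(float(row["value"]))
    return {
        metric: {"mean": statistics.fmean(vs), "max": max(vs)}
        for metric, vs in values.items()
    }
```

Running such a script in CI after every benchmark job would let contribution reviews compare summaries rather than raw logs.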

Research related

Research-related activities, e.g., white papers, workshops, and conferences. Working groups around various potential topics, e.g., accelerators, memory systems, scheduling, security and privacy, offloading, multiparty, testing, and compilation.