Working Groups
Tentative Working Groups
Metrics
Globally, what needs to be measured (e.g., image quality, pose error, reconstruction error, energy)? For each piece of information, what quantitative metric should be used (e.g., FLIP, ATE)? Which metrics should be automatically collected by the runtime? Which metrics should be the responsibility of the components? This group will split into multiple groups, possibly one per class of metrics.
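To make the runtime-versus-component question concrete, here is a minimal C++ sketch of one possible split; all names (RuntimeMetrics, MetricSource, report) are hypothetical illustrations, not a proposed design.

    // A minimal sketch, with hypothetical names, of one way to split
    // responsibility: metrics the runtime can observe on its own versus
    // metrics only a component can compute.
    #include <chrono>
    #include <map>
    #include <string>

    // Collected automatically by the runtime, without component help,
    // e.g., by timestamping frames as they flow through the system.
    struct RuntimeMetrics {
        std::chrono::nanoseconds motion_to_photon_latency;
        double frames_per_second;
        double energy_joules;  // if the platform exposes power counters
    };

    // Implemented by a component for metrics only it can compute,
    // e.g., ATE for a VIO component that holds ground-truth alignment.
    class MetricSource {
    public:
        virtual ~MetricSource() = default;
        virtual std::map<std::string, double> report() = 0;
    };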
Interfaces
What should the generic communication interface be? What should the individual component-to-component interfaces look like (e.g., camera to VIO, reprojection to hologram)? How should the interfaces be specified? How should conformance be tested (of both the runtime implementation and the component being plugged in)? Should there be an interface for metrics? Again, this group will split into multiple groups, possibly one per component.
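As one illustration of what a specified interface could look like, the sketch below types the camera-to-VIO channel as a plain data struct plus a small publish/subscribe contract; every name here is invented for illustration, not a standard being proposed.

    #include <chrono>
    #include <cstdint>
    #include <memory>
    #include <vector>

    // The payload a camera component publishes and a VIO component consumes.
    struct CameraFrame {
        std::chrono::nanoseconds timestamp;  // capture time on the runtime clock
        std::uint32_t width = 0;
        std::uint32_t height = 0;
        std::vector<std::uint8_t> pixels;    // grayscale, row-major
    };

    // A typed channel between the two components. Conformance could be
    // tested by publishing canned frames and checking a candidate
    // component's outputs against a tolerance.
    class CameraTopic {
    public:
        virtual ~CameraTopic() = default;
        virtual void publish(std::shared_ptr<const CameraFrame> frame) = 0;
        virtual std::shared_ptr<const CameraFrame> latest() const = 0;
    };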
Components
What should the complete system look like? What should the default implementation for each component be? What alternative implementations should the testbed itself provide? Again, this group will split into multiple groups, possibly one per component and possibly integrated with the interface group for that component.
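One way to let default and alternative implementations coexist is a factory behind an abstract interface, sketched below with invented names; the runtime would select an implementation from configuration rather than linking it in at compile time.

    #include <memory>
    #include <string>

    struct Pose {
        float position[3];     // meters, world frame
        float orientation[4];  // quaternion (x, y, z, w)
    };

    // Any VIO implementation, the default or an alternative, conforms here.
    class VIOComponent {
    public:
        virtual ~VIOComponent() = default;
        virtual Pose estimate_pose() = 0;
    };

    // Hypothetical factory: the runtime picks an implementation by name,
    // so swapping components requires no recompiling of the rest of the system.
    std::unique_ptr<VIOComponent> make_vio(const std::string& impl_name);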
Computation Offload
How should computations be offloaded from the user device to edge servers or the cloud? What offload architecture and protocols should be supported by default? What system mechanisms should be provided to determine when and what should be offloaded? What should the interfaces be for executing computations seamlessly, whether locally or remotely? Again, this group may split into multiple groups, possibly one per component, and possibly integrated with both the interface and component groups.
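To illustrate one answer to the seamless-execution question, the sketch below (hypothetical names throughout) hides the local-versus-remote decision behind a single asynchronous interface.

    #include <cstdint>
    #include <future>
    #include <vector>

    struct SceneUpdate { /* reconstruction output, elided */ };

    // Callers use the same signature whether the implementation runs the
    // computation on-device or serializes the input to an edge server.
    class ReconstructionService {
    public:
        virtual ~ReconstructionService() = default;
        virtual std::future<SceneUpdate>
        reconstruct(std::vector<std::uint8_t> depth_frame) = 0;
    };

    // A runtime policy could choose a local or remote implementation per
    // call, based on measured network latency and the frame deadline.

A design like this keeps the when-and-what-to-offload decision in the runtime's policy layer rather than inside each component.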
Distributed XR
How should collaborative multiparty experiences be enabled? How should the system scale with the number of users? What should the network communication protocols and data formats be?
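As a strawman for the data-format question, the sketch below defines a fixed-layout pose-update message; the field names, the transport, and the clock-synchronization scheme are all assumptions for illustration, not decisions.

    #include <cstdint>

    #pragma pack(push, 1)
    struct PoseUpdateMsg {
        std::uint32_t user_id;       // stable per-participant identifier
        std::uint64_t timestamp_ns;  // sender clock; cross-user sync is open
        float position[3];           // shared world coordinates, meters
        float orientation[4];        // quaternion (x, y, z, w)
    };
    #pragma pack(pop)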
Configurations and benchmark reporting
What is a system configuration for benchmarking? One possible definition: a set of components, their parameters (frequencies and other knobs), constraints (e.g., a power budget), and the hardware. What should the baseline configuration be? What other representative configurations should the testbed provide? For benchmarking, is the result from one configuration enough, or is a full set of results required? How should benchmark results be reported?
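The possible definition above could be encoded literally, as in this sketch; the field names and the baseline values are invented for illustration, and a baseline would simply be one concrete instance.

    #include <map>
    #include <string>

    struct BenchmarkConfig {
        std::map<std::string, std::string> components;  // role -> implementation
        std::map<std::string, double> knobs;            // e.g., camera_hz, display_hz
        double power_budget_watts;                      // constraint
        std::string hardware;                           // named reference device
    };

    // A hypothetical baseline: default components at fixed frequencies.
    const BenchmarkConfig baseline{
        {{"vio", "default_vio"}, {"reprojection", "default_warp"}},
        {{"camera_hz", 30.0}, {"display_hz", 90.0}},
        5.0,
        "reference_headset_v1",
    };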
Applications
What representative OpenXR applications should the testbed provide to drive the system? What should the mixture of AR, VR, and MR applications be? Which domains should the applications come from? Should benchmark submitters be allowed to run whatever application they want?
Data sets
What representative data sets should the testbed provide to drive the system? (This raises issues similar to those for applications.)
Infrastructure
Repo hosting. CI/CD. Contribution mechanisms. Analysis scripts for processing metrics, generating graphs, and creating shareable reports. Data set and application hosting. This will also likely split into multiple groups.
Research-related
Research-related activities, e.g., white papers, workshops, and conferences. Potential working groups around various topics, e.g., accelerators, the memory system, scheduling, security and privacy, offloading, multiparty, testing, and compilation.