ILLIXR Consortium Speaker Info

Abstract

Virtual, augmented, and mixed reality (VR, AR, MR), collectively referred to as extended reality (XR), have the potential to transform all aspects of our lives, including medicine, science, entertainment, teaching, social interactions, and more. There is, however, an orders-of-magnitude performance-power-quality gap between state-of-the-art and desirable XR systems. At the same time, the end of Dennard scaling and Moore's law means that "business-as-usual" innovations from systems researchers, which are typically technology-driven and confined within siloed system abstraction layers, will be insufficient. Systems of 2030 and beyond require researchers to learn how to do application-driven, end-to-end quality-of-experience-driven, and hardware-software-application co-designed domain-specific systems research. We recently released ILLIXR (Illinois Extended Reality testbed), the first open-source XR system and research testbed, and launched the ILLIXR consortium (https://illixr.org) to enable this new era of research. Our work is interdisciplinary and touches the fields of computer vision, robotics, optics, graphics, hardware, compilers, networking, operating systems, distributed systems, security and privacy, and more. I will describe recent results, open problems, and avenues for leveraging ILLIXR to address many of these problems.

Bio

Sarita V. Adve is the Richard T. Cheng Professor of Computer Science at the University of Illinois at Urbana-Champaign. Her research interests span the system stack, ranging from hardware to applications. She co-developed the memory models for the C++ and Java programming languages based on her early work on data-race-free models. Recently, her group released ILLIXR (Illinois Extended Reality testbed), the first open source extended reality system. She is also known for her work on heterogeneous systems and software-driven approaches for hardware resiliency. She is a member of the American Academy of Arts and Sciences, a fellow of the ACM and IEEE, and a recipient of the ACM/IEEE-CS Ken Kennedy award, the Anita Borg Institute Women of Vision award in innovation, the ACM SIGARCH Maurice Wilkes award, and the University of Illinois campus award for excellence in graduate student mentoring. As ACM SIGARCH chair, she co-founded the CARES movement, winner of the CRA distinguished service award, to address discrimination and harassment in Computer Science research events. She received her PhD from the University of Wisconsin-Madison and her B.Tech. from the Indian Institute of Technology, Bombay.

Muhammad Huzaifa is a Ph.D. candidate in Computer Science at the University of Illinois at Urbana-Champaign.

Resources

recording slides

Abstract

Many have predicted that the future of the Web will be the integration of Web content with the real world through technologies such as augmented reality. This overlay of virtual content on top of the physical world, called the Spatial Web (known in different contexts as the AR Cloud, Metaverse, or Digital Twin), holds promise for dramatically changing the Internet as we see it today, and has broad applications.

In this talk, I will give a brief background on mixed reality systems and then present ARENA, a new spatial computing architecture being developed by the CONIX Research Center (a six-university collaboration funded by the semiconductor industry and DARPA). ARENA is a multi-user, multi-application environment that simplifies the development of mixed reality applications. It allows cross-platform interaction with 3D content that can be generated by any number of network-connected agents (human or machine) in real time. It is network transparent, in that applications and data can execute and migrate seamlessly (and safely) across any number of connected computing resources, including edge servers, in-network processing units, and end-user devices such as headsets and mobile phones. I will discuss several systems designed using ARENA with networked sensors and actuators for digitizing real-world environments. These projects expose a number of bottlenecks and future challenges we face when developing more immersive spatial computing systems. Lastly, I will talk about the potential of integrating ARENA with ILLIXR.

Bio

Anthony Rowe is the Daniel and Karon Walker Siewiorek Professor in the Electrical and Computer Engineering Department at Carnegie Mellon University. His research interests are in networked real-time embedded systems, with a focus on wireless communication. His most recent projects have involved large-scale sensing for critical infrastructure monitoring, indoor localization, building energy efficiency, and technologies for microgrids. His past work has led to dozens of hardware and software systems deployed in industry, five best paper awards, and several widely adopted open-source research platforms. He earned a Ph.D. in Electrical and Computer Engineering from CMU in 2010, and received the Lutron Joel and Ruth Spira Excellence in Teaching Award in 2013, the CMU CIT Early Career Fellowship and the Steven J. Fenves Award for Systems Research in 2015, and the Dr. William D. and Nancy W. Strecker Early Career Chair in 2016.

Resources

recording slides

Abstract

SLAMBench [1] is an open-source benchmarking framework for SLAM, 3D reconstruction, and scene understanding methods. This talk will cover the core functionalities of SLAMBench and discuss evaluation methodologies for assessing the performance, accuracy, and robustness of SLAM systems. We will also discuss some of the current challenges in reproducibility and evaluation across computing platforms.
[1] https://github.com/pamela-project/slambench

Bio

Mihai Bujanca is a Research Assistant in the Advanced Processor Technologies group at The University of Manchester. His work is focused on efficient SLAM and 3D reconstruction methods for mobile robots.

Resources

recording slides

Abstract

Current Wi-Fi networks are not prepared to support the stringent Quality-of-Service (QoS) requirements associated with emerging use cases in Industry 4.0 and extended reality (XR). Transforming wireless networks to enable time-sensitive applications has been a major target of research and standards in recent years. In particular, Intel Labs has been developing Wireless Time-Sensitive Networking (TSN) technologies, extending 802.11 capabilities to provide deterministic service.
In this talk, we will present new features introduced in next-generation Wi-Fi (Wi-Fi 6E and beyond) and describe how they can be leveraged to achieve ultra-reliable, bounded low-latency communications. In addition, we will describe novel wireless TSN extensions developed at Intel Labs aimed at enabling accurate wireless time synchronization (802.1AS over 802.11) and time-aware scheduled transmissions (802.1Qbv) in Wi-Fi networks. Finally, we will discuss XR use cases where the aforementioned technologies could be leveraged.
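As a rough sketch of the 802.1Qbv idea (not Intel's implementation; the slot durations and traffic-class assignments below are invented for illustration), a time-aware scheduler can be modeled as a repeating gate-control list that opens specific traffic-class gates in fixed time slots:

```python
# Hedged illustration of 802.1Qbv-style time-aware scheduling: a cyclic
# gate-control list (GCL) determines which traffic classes may transmit
# during each slot. All values here are invented for illustration.
GATE_CONTROL_LIST = [
    # (slot duration in microseconds, traffic classes whose gate is open)
    (250, {7}),           # protected window for the time-critical class
    (750, {0, 1, 2, 3}),  # best-effort classes share the rest of the cycle
]
CYCLE_US = sum(duration for duration, _ in GATE_CONTROL_LIST)

def open_classes(t_us):
    """Return the set of traffic classes allowed to transmit at
    cycle-relative time t_us (the schedule repeats every CYCLE_US)."""
    t = t_us % CYCLE_US
    for duration, classes in GATE_CONTROL_LIST:
        if t < duration:
            return classes
        t -= duration
    return set()
```

Because the cycle repeats deterministically, a frame from the time-critical class arriving anywhere in the cycle waits at most until the next protected window, which is what bounds its latency.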

Bio

Javier Perez-Ramirez was born in Malaga, Spain, in 1981. He received the M.S. and Ph.D. degrees in electrical engineering from New Mexico State University, Las Cruces, NM, USA, in 2010 and 2014, respectively, and the Telecommunications Engineering degree in sound and image from the Universidad de Malaga, Malaga, in 2006. From 2005 to 2008, he was a Lecturer with the Escenica Technical Studies Center, Malaga. He is currently with Intel Labs, Hillsboro, OR, USA. His current research interests include wireless time-sensitive networks, channel coding, estimation and detection theory, navigation and positioning, and optical wireless communications.

Resources

recording slides

Abstract

Hardware accelerators have the potential to allow robotics systems to meet stringent deadline and energy constraints. However, in order to accurately evaluate the system-level performance of a heterogeneous robotics SoC, it is necessary to co-simulate the architectural behavior of hardware running a full robotics software stack together with a robotics environment modeling system dynamics and sensor data. This talk presents a co-simulation infrastructure integrating AirSim, a drone simulator, with FireSim, an FPGA-accelerated RTL simulator, followed by a discussion of evaluating robotics and XR hardware at the component, SoC, and system levels.

Bio

Dima Nikiforov is a second-year Ph.D. student in the SLICE Lab at UC Berkeley, advised by Prof. Sophia Shao and Prof. Bora Nikolic. Their research interests include the design and integration of hardware accelerators for robotics applications.
Chris Dong is a master's student and graduate student researcher in the SLICE Lab at UC Berkeley, advised by Prof. Sophia Shao. She is also currently working as a software engineer at Tesla. Her research interest lies in hardware-software co-design for autonomous robotics.

Resources

recording

Abstract

Mixed-reality applications present a visually unique experience, characterized by deep immersion and a demand for high-quality, high-performance graphics. Given the computational demands on such systems, it is often essential to trade visual quality for improvements in rendering performance. Image Quality Assessment (IQA) metrics that accurately capture potential perceptual artifacts are useful in exploring this design space. However, traditional IQA metrics are insufficient due to the unique viewing conditions as well as the nature of the perceptual artifacts. In my talk, I will motivate the need for, and requirements of, new IQA metrics that solve this problem, and present details of a recent metric, FovVideoVDP, that aims to address this need.

Bio

Anjul Patney is a Principal Research Scientist in NVIDIA's Human Performance and Experience research group, based in Redmond, Washington. Previously, he was a Research Scientist at Facebook Reality Labs (2019-2021) and a Senior Research Scientist in Real-Time Rendering at NVIDIA (2013-2019). He received a Ph.D. from UC Davis in 2013, and B.Tech. from IIT Delhi in 2007. Anjul's research areas include visual perception, computer graphics, machine learning, and virtual/augmented reality. His recent work led to advances in deep-learning for real-time graphics (co-developed DLSS 1.0), perceptual metrics for spatiotemporal image quality (co-developed FovVideoVDP), foveated rendering for VR graphics, and redirected walking in VR environments.

Resources

recording

Abstract

Project Northstar is an open-source augmented reality headset that provides a reference design for a wide-FOV, affordable augmented reality system. However, having an open-source optics system is only half the battle: tracking & sensing the world is as important as being able to augment it. Since Northstar started in 2018, we've gone through a handful of off-the-shelf sensors and are now investing in building our own. Given the nature of Northstar & our goal of democratizing AR, we're aiming to build a low-cost sensor system that will still allow us to run hand tracking & SLAM off of a single set of cameras.

Bio

Bryan has been working on Project Northstar since 2018 and has helped with development efforts at multiple layers of the software stack, including SteamVR integrations, optical calibration systems, & Unity and Unreal integrations. He graduated from Morgan State University in 2017 with a degree in Architecture and worked in architectural visualization & rendering for 5 years before joining Looking Glass Factory in March of 2021.

Resources

recording

Abstract

When a real or digital object's pose is defined relative to a geographical frame of reference, it is called a geographically-anchored pose, or "GeoPose" for short. All physical-world objects have a geographically-anchored pose; digital objects may be assigned one. The OGC GeoPose Standard defines encodings for the real-world position and orientation of a real or digital object in a machine-readable form. Using GeoPose enables the easy integration of digital elements on, and in relation to, the surface of the planet.

The core of the OGC GeoPose Standard is the abstract frame transform: a representation of the transformation taking an outer-frame coordinate system to an inner-frame coordinate system. In GeoPose v1.0, this abstraction is constrained to allow only transformations involving translation and rotation.
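As a minimal sketch of such a translation-plus-rotation transform (the function names, tuple layouts, and quaternion convention below are illustrative assumptions, not the standard's encodings), a point expressed in the inner frame can be mapped into the outer frame like so:

```python
# Hedged sketch of a GeoPose-style frame transform: rotate a point by the
# pose's orientation (a unit quaternion, (w, x, y, z) order assumed here),
# then translate it by the pose's position. Names are illustrative only.

def quat_rotate(q, v):
    """Rotate 3-vector v by unit quaternion q = (w, x, y, z)."""
    w, qx, qy, qz = q
    vx, vy, vz = v
    # t = 2 * (q_vec cross v)
    tx = 2.0 * (qy * vz - qz * vy)
    ty = 2.0 * (qz * vx - qx * vz)
    tz = 2.0 * (qx * vy - qy * vx)
    # v' = v + w * t + (q_vec cross t)
    return (vx + w * tx + (qy * tz - qz * ty),
            vy + w * ty + (qz * tx - qx * tz),
            vz + w * tz + (qx * ty - qy * tx))

def inner_to_outer(position, orientation, point_inner):
    """Map a point from the inner frame to the outer frame:
    rotation first, then translation."""
    rx, ry, rz = quat_rotate(orientation, point_inner)
    px, py, pz = position
    return (px + rx, py + ry, pz + rz)
```

For example, with a pose translated 10 units along x and rotated 90 degrees about z, the inner-frame point (1, 0, 0) lands at (10, 1, 0) in the outer frame.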

During this open meeting, members of OGC GeoPose Standard Working Group will present key aspects of the standard and discuss next steps for implementing GeoPose in the ILLIXR testbed.

Abstract

Optical hand tracking is essential for a good AR/VR experience. Over the past year or so, Moses Turner has developed a suitable optical hand tracking pipeline inside Monado. In this talk, they will demonstrate the use of this pipeline, compare it to other solutions, and explain the architecture, dataset, and training process. Also, they will go over failure cases and inaccuracies of the current tracking, and detail areas of possible further improvements and collaboration.
