ILLIXR Consortium Speaker Info
Abstract
Virtual, augmented, and mixed reality (VR, AR, MR), collectively referred to as extended reality (XR), have the potential to transform all aspects of our lives, including medicine, science, entertainment, teaching, social interactions, and more. There is, however, an orders-of-magnitude performance-power-quality gap between state-of-the-art and desirable XR systems. At the same time, the end of Dennard scaling and Moore's law means that "business-as-usual" innovations from systems researchers, which are typically technology-driven and confined to siloed system abstraction layers, will be insufficient. Systems of 2030 and beyond require researchers to learn how to do application-driven, end-to-end quality-of-experience-driven, and hardware-software-application co-designed domain-specific systems research. We recently released ILLIXR (Illinois Extended Reality testbed), the first open-source XR system and research testbed, and launched the ILLIXR consortium (https://illixr.org) to enable this new era of research. Our work is interdisciplinary and touches the fields of computer vision, robotics, optics, graphics, hardware, compilers, networking, operating systems, distributed systems, security and privacy, and more. I will describe recent results, open problems, and avenues for leveraging ILLIXR to address many of these problems.
Bio
Sarita V. Adve is the Richard T. Cheng Professor of Computer Science at the University of Illinois at Urbana-Champaign. Her research interests span the system stack, ranging from hardware to applications. She co-developed the memory models for the C++ and Java programming languages based on her early work on data-race-free models. Recently, her group released ILLIXR (Illinois Extended Reality testbed), the first open source extended reality system. She is also known for her work on heterogeneous systems and software-driven approaches for hardware resiliency. She is a member of the American Academy of Arts and Sciences, a fellow of the ACM and IEEE, and a recipient of the ACM/IEEE-CS Ken Kennedy award, the Anita Borg Institute Women of Vision award in innovation, the ACM SIGARCH Maurice Wilkes award, and the University of Illinois campus award for excellence in graduate student mentoring. As ACM SIGARCH chair, she co-founded the CARES movement, winner of the CRA distinguished service award, to address discrimination and harassment in Computer Science research events. She received her PhD from the University of Wisconsin-Madison and her B.Tech. from the Indian Institute of Technology, Bombay.
Muhammad Huzaifa is a Ph.D. candidate in Computer Science at the University of Illinois at Urbana-Champaign.
Resources
recording slides
Abstract
Many have predicted the future of the Web to be the integration of Web content with the real world through technologies such as augmented reality. This overlay of virtual content on top of the physical world, called the Spatial Web (in different contexts, the AR Cloud, Metaverse, or Digital Twin), holds promise for dramatically changing the Internet as we see it today, and has broad applications.
In this talk, I will give a brief background on mixed reality systems and then present ARENA, a new Spatial Computing architecture being developed by the CONIX Research Center (a six-university collaboration funded by the semiconductor industry and DARPA). The ARENA is a multi-user and multi-application environment that simplifies the development of mixed reality applications. It allows cross-platform interaction with 3D content that can be generated by any number of network-connected agents (human or machine) in real time. It is network transparent in that users and data can execute and migrate seamlessly (and safely) across any number of connected computing resources, including edge servers, in-network processing units, and end-user devices such as headsets and mobile phones. I will discuss several systems designed using ARENA with networked sensors and actuators for digitizing real-world environments. These projects expose a number of bottlenecks and future challenges we face when developing more immersive spatial computing systems. Lastly, I will talk about the potential of integrating ARENA with ILLIXR.
Bio
Anthony Rowe is the Daniel and Karon Walker Siewiorek Professor in the Electrical and Computer Engineering Department at Carnegie Mellon University. His research interests are in networked real-time embedded systems with a focus on wireless communication. His most recent projects relate to large-scale sensing for critical infrastructure monitoring, indoor localization, building energy efficiency, and technologies for microgrids. His past work has led to dozens of hardware and software systems deployed in industry, five best paper awards, and several widely adopted open-source research platforms. He earned a Ph.D. in Electrical and Computer Engineering from CMU in 2010, received the Lutron Joel and Ruth Spira Excellence in Teaching Award in 2013, the CMU CIT Early Career Fellowship and the Steven Fenves Award for Systems Research in 2015, and the Dr. William D. and Nancy W. Strecker Early Career Chair in 2016.
Resources
recording slides
Abstract
SLAMBench [1] is an open-source benchmarking framework for SLAM, 3D reconstruction, and scene understanding methods. This talk will cover the core functionalities of SLAMBench and discuss evaluation methodologies for assessing the performance, accuracy, and robustness of SLAM systems. We will also discuss some of the current challenges in reproducibility and evaluation across computing platforms.
[1] https://github.com/pamela-project/slambench
Bio
Mihai Bujanca is a Research Assistant in the Advanced Processor Technologies group at The University of Manchester. His work is focused on efficient SLAM and 3D reconstruction methods for mobile robots.
Resources
recording slides
Abstract
Current Wi-Fi networks are not prepared to support the stringent Quality-of-Service (QoS) requirements associated with emerging use cases in Industry 4.0 and extended reality (XR). Transforming wireless networks to enable time-sensitive applications has been a major target of research and standards in recent years. In particular, Intel Labs has been developing Wireless Time-Sensitive Networking (TSN) technologies, extending 802.11 capabilities to provide deterministic service.
In this talk, we will present new features introduced in next-generation Wi-Fi (Wi-Fi 6E and beyond) and describe how they can be leveraged to achieve ultra-reliable and bounded low-latency communications. In addition, we will describe novel TSN wireless extensions developed at Intel Labs that aim to enable accurate wireless time synchronization (802.1AS over 802.11) and time-aware scheduled transmissions (802.1Qbv) in Wi-Fi networks. Finally, we will discuss XR use cases where the aforementioned technologies could be leveraged.
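To give a feel for time-aware scheduled transmissions, the sketch below models an 802.1Qbv-style gate control list: a cyclic schedule in which each entry opens the gates for a set of traffic classes for a fixed duration, so time-critical traffic gets a protected window in every cycle. The cycle length, window sizes, and class assignments are illustrative assumptions, not values from the standard or from Intel's implementation.

```python
CYCLE_NS = 1_000_000  # 1 ms schedule cycle (assumed value)

# Each entry: (duration in ns, set of traffic classes whose gates are open).
GATE_CONTROL_LIST = [
    (250_000, {7}),        # protected window for time-critical XR traffic
    (750_000, {0, 1, 2}),  # best-effort traffic for the rest of the cycle
]

def open_classes(time_ns):
    """Return the traffic classes whose gates are open at a given time."""
    offset = time_ns % CYCLE_NS  # position within the repeating cycle
    for duration, classes in GATE_CONTROL_LIST:
        if offset < duration:
            return classes
        offset -= duration
    return set()

print(open_classes(100_000))  # {7} -- inside the protected window
print(open_classes(500_000))  # {0, 1, 2} -- best-effort window
```

Because the schedule repeats every cycle and all stations share a synchronized clock (the role of 802.1AS over 802.11), a time-critical frame queued for class 7 is guaranteed a contention-free transmission opportunity each millisecond.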
Bio
Javier Perez-Ramirez was born in Malaga, Spain, in 1981. He received the M.S. and Ph.D. degrees in electrical engineering from New Mexico State University, Las Cruces, NM, USA, in 2010 and 2014, respectively, and the Telecommunications Engineering degree in sound and image from the Universidad de Malaga, Malaga, in 2006. From 2005 to 2008, he was a Lecturer with the Escenica Technical Studies Center, Malaga. He is currently with Intel Labs, Hillsboro, OR, USA. His current research interests include wireless time-sensitive networks, channel coding, estimation and detection theory, navigation and positioning, and optical wireless communications.
Resources
recording slides
Abstract
Hardware accelerators have the potential to allow robotics systems to meet stringent deadline and energy constraints. However, in order to accurately evaluate the system-level performance of a heterogeneous robotics SoC, it is necessary to co-simulate the architectural behavior of hardware running a full robotics software stack together with a robotics environment modeling system dynamics and sensor data. This talk presents a co-simulation infrastructure integrating AirSim, a drone simulator, and FireSim, an FPGA-accelerated RTL simulator, followed by a discussion of evaluating robotics and XR hardware at the component, SoC, and system levels.
Bio
Dima Nikiforov is a second year PhD student at the SLICE Lab at UC Berkeley, advised by Prof. Sophia Shao and Prof. Bora Nikolic. Their research interests include the design and integration of hardware accelerators for robotics applications.
Chris Dong is a master's student and a graduate student researcher at the SLICE Lab at UC Berkeley, advised by Prof. Sophia Shao. She is also currently working as a software engineer at Tesla. Her research interest lies in hardware-software co-design for autonomous robotics.
Resources
recording
Abstract
Mixed-reality applications present a visually unique experience characterized by deep immersion and demand for high-quality, high-performance graphics. Given the computational demands on such systems, it is often essential to trade visual quality for improvements in rendering performance. Image Quality Assessment (IQA) metrics that accurately capture potential perceptual artifacts are useful in exploring this design space. However, traditional metrics for image quality assessment are insufficient due to the unique viewing conditions as well as the nature of the perceptual artifacts. In my talk, I will motivate the need for and the requirements of new IQA metrics that solve this problem, and present details of a recent metric, FovVideoVDP, that aims to address this need.
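To illustrate why traditional IQA metrics fall short, here is a minimal sketch of PSNR, a classic pixel-wise metric. It treats every pixel equally, ignoring gaze position, retinal eccentricity, and temporal effects, which is exactly what foveated metrics such as FovVideoVDP account for. The toy "images" are flat grayscale arrays invented for illustration.

```python
import math

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio between two equal-sized images, in dB."""
    assert len(reference) == len(test)
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

# Two very different distortions with identical mean-squared error:
ref = [128] * 100
uniform = [129] * 100            # tiny error spread across every pixel
localized = [128] * 99 + [138]   # one conspicuous 10-level error

print(round(psnr(ref, uniform), 1))    # 48.1
print(round(psnr(ref, localized), 1))  # 48.1 -- same score, different look
```

Both distortions receive exactly the same PSNR, yet a localized artifact in the fovea is far more visible to a viewer than a uniform one-level shift, which is the kind of perceptual distinction a metric for XR viewing conditions has to capture.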
Bio
Anjul Patney is a Principal Research Scientist in NVIDIA's Human Performance and Experience research group, based in Redmond, Washington. Previously, he was a Research Scientist at Facebook Reality Labs (2019-2021) and a Senior Research Scientist in Real-Time Rendering at NVIDIA (2013-2019). He received a Ph.D. from UC Davis in 2013, and B.Tech. from IIT Delhi in 2007. Anjul's research areas include visual perception, computer graphics, machine learning, and virtual/augmented reality. His recent work led to advances in deep-learning for real-time graphics (co-developed DLSS 1.0), perceptual metrics for spatiotemporal image quality (co-developed FovVideoVDP), foveated rendering for VR graphics, and redirected walking in VR environments.
Resources
recording
Abstract
Project Northstar is an open-source augmented reality headset that provides a reference design for a wide-FOV, affordable augmented reality system. However, having an open-source optics system is only half the battle: tracking and sensing the world is as important as being able to augment it. Since Northstar started in 2018, we've gone through a handful of off-the-shelf sensors and are now investing in building our own. Given the nature of Northstar and our goal of democratizing AR, we're aiming to build a low-cost sensor system that will still allow us to run hand tracking and SLAM off of a single set of cameras.
Bio
Bryan has been working on Project Northstar since 2018 and has helped with development efforts at multiple layers of the software stack, including SteamVR integrations, optical calibration systems, and Unity and Unreal integrations. He graduated from Morgan State University in 2017 with a degree in Architecture and worked in architectural visualization and rendering for 5 years before joining Looking Glass Factory in March of 2021.
Resources
recording
Abstract
When a real or digital object's pose is defined relative to a geographical frame of reference, it is called a geographically-anchored pose, or "GeoPose" for short. All physical-world objects have a geographically-anchored pose. Digital objects may be assigned a GeoPose. The OGC GeoPose Standard defines encodings for the real-world position and orientation of a real or digital object in a machine-readable form. Using GeoPose enables easy integration of digital elements on, and in relation to, the surface of the planet.
The core of the OGC GeoPose Standard is the abstract frame transform: a representation of the transformation taking an outer-frame coordinate system to an inner-frame coordinate system. This abstraction is constrained in GeoPose v1.0 to allow only transformations involving translation and rotation.
During this open meeting, members of the OGC GeoPose Standard Working Group will present key aspects of the standard and discuss next steps for implementing GeoPose in the ILLIXR testbed.
Resources
recording
Abstract
Optical hand tracking is essential for a good AR/VR experience. Over the past year or so, Moses Turner has developed a suitable optical hand tracking pipeline inside Monado. In this talk, they will demonstrate the use of this pipeline, compare it to other solutions, and explain the architecture, dataset, and training process. Also, they will go over failure cases and inaccuracies of the current tracking, and detail areas of possible further improvements and collaboration.
Bio
Resources
slides recording
Abstract
AR/VR devices sense information about the real world and fuse it with shared context-related information in order to render the user display correctly. This sensed information, along with the system hardware and run-time software, is shared between multiple applications running on the same device. Information can also be shared by multiple devices running a multi-user application. This system model introduces a number of potential opportunities for attackers to compromise the security and privacy of the system. In the first part of this talk, I will discuss our recent work showing how to exploit shared information to compromise the privacy of applications running on the same platform. We also discuss why it is not straightforward to protect against such vulnerabilities using traditional defense approaches. In the second part of the talk, I will discuss additional threat models and some thoughts about how to think about security for AR/VR systems.
Bio
Nael Abu-Ghazaleh is a Professor in the Computer Science and Engineering as well as the Electrical and Computer Engineering Departments at the University of California, Riverside. His research is in architecture and system security, high-performance computing, and systems and security for Machine Learning. He has published over 200 papers in these areas, several of which have been recognized with best paper awards or nominations. His offensive security research has resulted in the discovery of several new attacks on CPUs and GPUs that have been disclosed to companies including Intel, AMD, ARM, Apple, Microsoft, Google, and Nvidia, and resulted in patches and modifications to products, and coverage from technical news outlets. He is a member of the Micro Hall of Fame, an ACM distinguished member, and an IEEE distinguished speaker.
Resources
slides recording
Abstract
Because of the waning benefit of transistor scaling and the increasing demands on computing power, both industry and academia are seeking specialized accelerators for better performance and energy efficiency. Reconfigurable accelerators have shown promise here while retaining flexibility, but all of these accelerators require non-trivial human effort for full-stack implementations. Our research aims to automate this design process. The insight is that a wide range of prior accelerators can be approximated by composing a set of simple and common hardware primitives. With careful design, a compiler that understands these primitives can be developed. By taking advantage of the compiler's awareness of software/hardware affinity, the primitive composition can be guided to generate deeply specialized accelerators for a given application domain.
This work aims to replace the existing, engineering-effort-intensive design flow for both general-purpose and application-specific accelerators under a unified software stack. In the context of FPGAs, our presented workflow can also challenge HLS as the standard FPGA programming paradigm: generated designs can serve as an overlay with comparable performance and orders-of-magnitude faster compilation and reconfiguration time than HLS.
Finally, we will discuss future directions for extending this framework to different contexts. For example, for a real-time system like AR/XR, latency is the first-order consideration, while for data center accelerators, energy and throughput matter more.
Bio
Jian Weng is a 6th-year Ph.D. student at UCLA, advised by Tony Nowatzki. His research interests span specialized accelerator design and the associated compilation techniques. His work has been accepted at top-tier architecture conferences and selected as IEEE Micro Honorable Mentions.
Resources
slides
Abstract
In this talk, I will summarize our ongoing work on securing multi-user AR/VR platforms. Our focus is on location tampering attacks that can occur when information is shared between multiple AR clients and/or the cloud. A fundamental assumption in many AR use cases is that a device should be in a particular real-world location to view a particular hologram. Breaking this assumption can cause users to miss holograms or view the wrong ones, resulting in harmful effects (for example, a "left turn" navigation hologram is shown instead of a "right turn", causing the user to walk into an unsafe area). I will also review our work on AR/VR side-channel leakages (from the 10/26/2022 talk by Dr. Nael Abu-Ghazaleh), then open the floor for discussion on integrating potential defense mechanisms with ILLIXR.
Bio
Jiasi Chen is currently an Associate Professor in the Department of Computer Science & Engineering at the University of California, Riverside. She received her Ph.D. from Princeton University and her B.S. from Columbia University. Her research interests include mobile edge computing, wireless networks, and AR/VR. She is the recipient of an NSF CAREER award and a Meta Faculty Research Award. She will be joining the University of Michigan in Fall 2023.
Resources
recording