X.Org Developer Conference 2021

Daniel Vetter (Intel), Mark Filion, Radek Szwichtenberg (Intel), Samuel Iglesias Gonsálvez (Igalia)

The X.Org Developers Conference 2021 is the event for developers working on all things open graphics (Linux kernel, Mesa, DRM, Wayland, X11, etc.).

XDC 2021 Registration
  • Alan Coopersmith
  • Alex Deucher
  • Alyssa Rosenzweig
  • Andi Shyti
  • Andrius Stasauskas
  • Antonio Caggiano
  • Arcady Goldmints-Orlov
  • Arkadiusz Hiler
  • Arthur Rasmusson
  • bacteria bacteria
  • Baldur Karlsson
  • Bas Nieuwenhuizen
  • Bas Vermeulen
  • Benjamin Mezger
  • Benjamin Tissoires
  • Boris Brezillon
  • Bryan Stine
  • Camilla Löwy
  • Charles Giessen
  • Charles Turner
  • Christian Gmeiner
  • Connor Abbott
  • Corentin Noël
  • Daniel Schürmann
  • Daniel Stone
  • Daniel Vetter
  • Danylo Piliaiev
  • David Peter
  • David Yat Sin
  • Demi Obenour
  • Dominik Grzegorzek
  • Edward Betts
  • Efe Itietie
  • Eleni Maria Stea
  • Emma Anholt
  • Eric Engestrom
  • Erico Nunes
  • Erika Johnson
  • Ethan Lee
  • Felix Kuehling
  • Francisco Regateiro
  • François Cami
  • Gary C Wang
  • Gustavo Noronha Silva
  • Harsh Aggarwal
  • Hello World
  • Hritik V
  • Hyunjun Ko
  • Iago Toral
  • Italo Nicola
  • jack schora
  • Jakub Kuderski
  • Jason Ekstrand
  • Jason Francis
  • Javier Martinez Canillas
  • Jay Aherkar
  • Jean-Luc Duprat
  • Jens Owen
  • Jitendra Sharma
  • Jonas Ådahl
  • Jordan Crouse
  • José María Casanova Crespo
  • Juan A. Suarez
  • Karen Ghavam
  • Karol Herbst
  • Kenneth Graunke
  • Laurent Pinchart
  • Leandro Ribeiro
  • Leonid Wilde
  • Liam Middlebrook
  • Louis-Francis Ratté-Boulianne
  • Luke Leighton
  • Luming Yin
  • Luna Jernberg
  • Lyude Paul
  • Maaaace Fu
  • Maaz Mombasawala
  • Manasi Navare
  • Marcin Kidziński
  • Marcin Ślusarz
  • Marcos Alano
  • Mario Kleiner
  • Marius Vlad
  • Mark Filion
  • Martin Peres
  • Martin Weber
  • Matt Roper
  • Matt Turner
  • Matthew Auld
  • Matthieu Herrb
  • Maxime Ripard
  • Melissa Wen
  • Michael Larabel
  • Michael Proto
  • Michal Mrozek
  • Michel Dänzer
  • Mike Schuchardt
  • Naseer Ahmed
  • Neal Gompa
  • Neil Roberts
  • Nick Yamane
  • Niels De Graef
  • Paul Ivanov
  • Paul Kocialkowski
  • Pawel Stawicki
  • Pi Pony
  • Prabhu Sundararaj
  • Qing Xia
  • Quentin Colombet
  • Rajneesh Bhardwaj
  • Ray Huang
  • Regina Phalange
  • Ricardo Garcia
  • Richard Wright
  • Rob Clark
  • Robert Foss
  • Robin Perkins
  • Rodrigo Vivi
  • Rohan Garg
  • Roman Gilg
  • Ron Jailall
  • Rouven Czerwinski
  • Ryan Houdek
  • Ryan Meyer
  • Ryszard Knop
  • Sagar Ghuge
  • saikishore konda
  • Sameer Lattannavar
  • Samuel Iglesias Gonsálvez
  • Samuel Pitoiset
  • Sander van Zoest
  • Scott Mansell
  • Sebastian Krzyszkowiak
  • Simon Ser
  • Sujaritha Sundaresan
  • Sumera Priyadarsini
  • Thomas Hellström
  • Tim Renouf
  • Timon de Groot
  • Timur Kristóf
  • Tomasz Mistat
  • Tomeu Vizoso
  • Tony Wasserka
  • Trevor Woerner
  • Vasily Khoruzhick
  • venkata sai Patnana
  • Vladimir Boldyrev
  • Víctor Manuel Jáquez Leal
  • Yana Timoshenko
  • Yu-Hong Lin
  • Zack Rusin
  • Zbigniew Kempczyński
    • 13:00 19:00
      Main Track
      • 13:00
        Opening Session 10m
      • 13:15
        Raspberry Pi Vulkan driver update 45m

        Last year we presented our on-going work to bring Vulkan support to the
        Raspberry Pi 4 platform. This talk is intended to provide a progress update
        after a year of additional development, discussing main priorities and
        achievements during this period as well as future development plans.

        Speaker: Iago Toral (Igalia, S.L.)
      • 14:05
        Lima driver status update 2021 45m

        Lima is an open source graphics driver which supports Mali Utgard (Mali-4xx) embedded GPUs from ARM.
        It’s a reverse-engineered, community-developed driver.

        At XDC 2019 there was a presentation about Lima, which happened not long after its initial inclusion in upstream.
        At that time, it was still missing some important features to be a complete driver.
        Most of those have been addressed since then and the situation now is notably more stable.

        This talk aims to provide a status update on Lima, a review of the more relevant recent work on it, and some possible paths going forward.

        Speaker: Erico Nunes
      • 14:55
        The Occult and the Apple GPU 45m

        The Internet has been under a spell over the M1 system-on-chip. Is Apple's GPU architecture magically faster than the rest of the industry? Or is it all smoke and mirrors? Only a reverse-engineering witch can divine that truth. Grab your cape, because we're about to spill the chip's secrets, solve mysteries we were never supposed to know about, and gain a Mesa driver along the way.

        Speaker: Alyssa Rosenzweig (Collabora)
      • 15:45
        ChromeOS + freedreno update 45m

        Now that we are shipping Arm Chromebooks with upstream Mesa graphics drivers, we would like to give a status update, covering the work to get to this point and what lies ahead.

        Speaker: Rob Clark (Google)
      • 16:35
        SSA-based Register Allocation for GPU Architectures 45m

        SSA-based register allocation is a new strategy for register allocation which decouples register allocation from spilling and guarantees predictable register usage. It holds special promise for GPUs due to common architectural features like dynamic register sharing, but there are also challenges in real-world implementations. After first being used in Mesa by the ACO compiler backend for AMD GPUs, it is now also in use by the Freedreno driver for Qualcomm Adreno GPUs. In this talk we will explain the basic concepts, considerations for real-world implementations, and implementation choices made in freedreno and ACO.

        Speakers: Connor Abbott (Valve), Daniel Schürmann (Valve)
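        A toy illustration of the core idea (a hedged sketch, not the ACO or freedreno implementation): SSA interference graphs are chordal, so visiting values in definition (dominance) order and handing each one the lowest register not held by a live value yields an optimal coloring without an iterative spill-and-retry loop. For straight-line code with one live interval per value, this degenerates into the simple scan below; all names and intervals are invented for illustration.

```python
def allocate(intervals):
    """Assign registers to SSA values.

    intervals: list of (name, def_point, last_use), sorted by def_point,
    i.e. values visited in definition (dominance) order.
    """
    assignment = {}
    active = []   # (last_use, reg) for values still live
    free = []     # recycled register numbers
    next_reg = 0
    for name, start, end in intervals:
        # Expire values whose live range ended before this definition.
        still_live = []
        for last_use, reg in active:
            if last_use < start:
                free.append(reg)
            else:
                still_live.append((last_use, reg))
        active = still_live
        # Lowest free register, or a fresh one; in SSA form this greedy
        # choice never needs to be revisited.
        if free:
            reg = min(free)
            free.remove(reg)
        else:
            reg = next_reg
            next_reg += 1
        assignment[name] = reg
        active.append((end, reg))
    return assignment

# %a and %b overlap, so they get distinct registers;
# %c can reuse %a's register once %a is dead.
print(allocate([("a", 0, 2), ("b", 1, 4), ("c", 3, 5)]))
# -> {'a': 0, 'b': 1, 'c': 0}
```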
      • 17:25
        etnaviv: status update 20m

        Just a yearly status update about etnaviv (NIR, CI, ..).

        Speaker: Christian Gmeiner
      • 17:50
        Fast Checkpoint Restore for AMD GPUs with CRIU 45m

        CRIU, a.k.a. Checkpoint/Restore In Userspace, is the de-facto choice for checkpoint and restore, but one of its major limitations concerns tasks that have device state associated with them: the driver must manage that state, which CRIU cannot control directly, though CRIU provides a flexible plugin mechanism for exactly this purpose. So far there is no serious real-device plugin (at least in the public domain) that deals with a device as complex as a GPU. We would like to discuss our work to support CRIU with AMD ROCm, AMD's fully open-source solution for the machine learning and HPC compute space. This could potentially be extended to support video decode/encode using render nodes.

        CRIU already has a plugin architecture to support processes using device files. Using this architecture we added a plugin for supporting CRIU with GPU compute applications running on the AMD ROCm software stack. This requires new ioctls in the KFD kernel mode driver to save and restore hardware and kernel mode driver state, such as memory mappings, VRAM contents, user mode queues, and signals. We also needed a few new plugin hooks in CRIU itself to support remapping of device files and mmap offsets within them, and finalizing GPU virtual memory mappings and resuming execution of the GPU after all VMAs have been restored by the PIE code.

        The result is the first real-world plugin and the first example of GPU support in CRIU.

        While we faced several new challenges enabling this work, we were finally able to support real TensorFlow/PyTorch workloads across multi-GPU nodes using CRIU, and were also able to migrate containers running GPU-bound workloads. In this talk, we'd like to recount our journey, which started with a small 64 KB buffer object in GPU VRAM and ended with gigabytes of single VRAM buffer objects across GPUs.

        We started with the /proc/<pid>/mem interface, then switched to a faster direct approach that only worked with large-PCIe-BAR GPUs, but that was still slow: copying 16 GB of VRAM took ~15 minutes with the direct approach on large BARs, and more than 45 minutes with small BARs. We then switched to using the system DMA engines built into most AMD GPUs, which brought very significant improvements: we can now checkpoint the same amount of data within 5 seconds. For this we initially modified libdrm, but the maintainers didn't agree to change a private API to expose GEM handles to userspace, so we ended up making a kernel change that exports the buffer objects in VRAM as DMABUF objects, which our plugin then imports using libdrm.

        We are going to present the architecture of our plugin, how it interacts with CRIU and our GPU driver during the checkpoint and restore flow. We can also talk about some security considerations and initial test results and performance stats.

        Further reading: https://github.com/RadeonOpenCompute/criu/tree/criu-dev/plugins/amdgpu#readme
        Our work-in-progress code: https://github.com/RadeonOpenCompute/criu/tree/amd-criu-dev-staging

        Speakers: Rajneesh Bhardwaj (AMD), Felix Kuehling (AMD), David Yat Sin (AMD)
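        The first VRAM-copy approach the abstract mentions, reading mapped memory through /proc/<pid>/mem by seeking to its virtual address, can be sketched in a few lines. This is a hedged illustration only: it reads our own process's memory rather than a checkpointed task's, uses a stand-in byte buffer rather than a mapped VRAM buffer object, is Linux-only, and is not the AMD plugin's actual code.

```python
import ctypes
import os

# A stand-in for memory a checkpoint tool would want to copy out.
buf = ctypes.create_string_buffer(b"vram-contents-stand-in")
addr = ctypes.addressof(buf)

# Seek /proc/<pid>/mem to the virtual address and read the bytes there.
# CRIU would open the checkpointed task's pid instead of its own.
with open(f"/proc/{os.getpid()}/mem", "rb") as mem:
    mem.seek(addr)
    data = mem.read(len(buf.value))

print(data)  # -> b'vram-contents-stand-in'
```

The abstract's point is that this interface is generic but slow for gigabytes of VRAM, which is why the work moved on to the GPU's own DMA engines.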
      • 18:40
        Emulating Virtual Hardware in VKMS 20m

        The Virtual Kernel Mode-Setting (VKMS) driver aims to help with testing and development of graphics drivers without having to use actual graphics hardware. My work during Outreachy comprised adding support for emulation of virtual hardware in VKMS, which also involved writing and refactoring code in IGT GPU Tools. I want to talk about my journey as a newcomer exploring DRM and IGT GPU Tools, debugging mysterious errors, and working with the community to develop a solution.

        Speaker: Sumera Priyadarsini
    • 19:05 20:20
      Demos / Lightning talks I

      Demos have priority over lightning talks in this session.

      Lightning talks get scheduled as time permits throughout the assigned time block. Please be ready!

      • 19:05
        LibVF.IO & Hyperborea - New tech for VFIO graphics passthrough users 10m

        LibVF.IO is a library providing automated mdev (mediated device) partitioning, GPU scheduling, and memory allocation for VFIO graphics passthrough users.

        Hyperborea is a daemon driven by LibVF.IO that allows users to create, run, and manage unikernel VMs (single application per VM) with full-performance graphics acceleration.

        In our lightning talk we'd love to give a quick demo of how easy it is to create and run a VM with LibVF.IO and show some of the underlying tech!

        Speaker: Arthur Rasmusson
      • 19:15
        Another year, another ISA: Panfrost update 5m

        A lightning talk about the state-of-the-art of the Panfrost driver for Arm Mali GPUs, including support for the new Valhall instruction set architecture in the latest Mali designs.

        Speaker: Alyssa Rosenzweig (Collabora)
      • 19:20
        The Input Method Hub 5m

        Quick overview of ongoing efforts to improve upon the current state of text input and input method Wayland protocols.

        Speaker: Roman Gilg
      • 19:25
        Quick Overview of VK_EXT_multi_draw 5m

        The VK_EXT_multi_draw Vulkan extension was recently released and closes an existing gap between the OpenGL and Vulkan APIs. It can be used to improve the performance of some Vulkan apps and as a tool when implementing OpenGL on top of Vulkan as Zink does. This talk will give a quick overview of the extension.

        Speaker: Ricardo Garcia (Igalia, S.L.)
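        How a driver might walk the extension's input can be sketched against its documented layout: vkCmdDrawMultiEXT takes an array of VkMultiDrawInfoEXT records (firstVertex, vertexCount) plus a stride, so one call replaces a loop of vkCmdDraw and the records may be embedded in larger application-side structs. The Python below only mimics that memory layout for illustration; the extra user-data words and all values are invented, and this is not driver code.

```python
import struct

# Each app-side record is 16 bytes: the first 8 bytes are the
# VkMultiDrawInfoEXT fields (firstVertex, vertexCount) the driver
# reads; the rest is per-draw user data the stride skips over.
RECORD = struct.Struct("<IIII")

def draws_from_buffer(buf, draw_count, stride):
    """Walk the buffer the way a driver walks pVertexInfo with a stride."""
    for i in range(draw_count):
        first_vertex, vertex_count = struct.unpack_from("<II", buf, i * stride)
        yield (first_vertex, vertex_count)

buf = b"".join(RECORD.pack(first, count, 0xDEAD, 0xBEEF)
               for first, count in [(0, 3), (3, 6), (9, 3)])
print(list(draws_from_buffer(buf, 3, RECORD.size)))
# -> [(0, 3), (3, 6), (9, 3)]
```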
      • 19:30
        SDL: The Quest for Wayland By Default 5m

        The recently-released SDL 2.0.16 dramatically improves native Wayland support. The experience is about 90% there! This lightning talk will go over the other 90% needed to make Wayland the default video driver for Linux.

        Ethan Lee is a Linux game developer with over 60 games of experience, including Celeste, Streets of Rage 4, Transistor, and many more! He is also the maintainer of FNA and is a co-maintainer of SDL.

        Speaker: Ethan Lee (flibitijibibo)
    • 13:00 21:35
      Main Track
      • 13:00
        Opening Session 10m
      • 13:15
        Addressing Wayland robustness 45m

        One of the biggest user-facing issues holding back Wayland adoption is robustness: a crash in the compositor can take down the entire session and lead to data loss.

        With Wayland being a constantly changing landscape, and with ever more workload being put on the compositor process, this doesn't seem to be going away.

        This talk showcases work across multiple libraries and toolkits to tackle this at the root with a method of "compositor handoffs", allowing clients to safely, securely, and seamlessly reconnect to a relaunched Wayland compositor. This not only tackles the issue of robustness but also opens up a whole avenue of new opportunities that were previously impossible, such as freezing and resuming applications.

        We talk through the POC implementations made across multiple toolkits, and what changes are needed throughout Wayland and Mesa to support this.

        Speaker: David Edmundson (KDE)
      • 14:05
        Compiling Vulkan shaders in the browser: A tale of control flow graphs and WebAssembly 45m

        Ever wondered what happens when you mix Emscripten, Graphviz, and a Vulkan driver? I couldn’t help myself and tried: What started as a simple visualizer for shader control flow has since grown into a port of Mesa’s shader compiler ACO running in the browser, capable of compiling thousands of shaders on-the-fly. Don’t believe it? Demo included!

        Putting this experiment into wider context reveals a landscape of powerful debugging tools rarely utilized in low-level programming: With robust and efficient code left at the core, external web-based tools benefit from quicker iteration cycles and easier UI prototyping.

        This talk doesn’t present ground-breaking ideas: At worst, you’ll see a cool tool made with love. At best, you’ll walk away with new ideas for creating debuggable systems.

        Speaker: Tony Wasserka
      • 14:55
        Dissecting and fixing Vulkan rendering issues in drivers with RenderDoc 45m

        Broken and flickering geometry, corrupted textures, and even hangs in real-world games and apps are common issues in open-source graphics driver development. While conformance tests are mostly narrow and confined, finding driver problems when running triple-A games can be a challenging task.

        This talk will show a major misrendering example when running a game and the steps taken to pinpoint the underlying problem in shader compilation using RenderDoc. We will briefly touch on the taxonomy of different issues, typical causes, and generic methods to try.

        Speaker: Danylo Piliaiev (Igalia S.L.)
      • 15:45
        Ray-tracing in Vulkan pt. 2: Implementation 45m

        At last year's XDC, Jason gave an overview of the VK_KHR_ray_tracing extensions and how they can be used to implement a ray-tracing renderer from a client POV. In this talk, Jason will discuss the implementation of those extensions in Intel's Linux Vulkan driver. We'll cover overall architecture as well as detailed topics such as bindless thread dispatch on Intel HW, shader call/return lowering, and BVH building with OpenCL kernels. Watching last year's talk as preparation is highly recommended.

        Speaker: Jason Ekstrand (Intel)
      • 16:35
        KWinFT in 2021: Latest development, Next Steps 45m

        This talk presents an overview of the KWinFT project in 2021. The following topics will be discussed:

        • original motives for founding the KWinFT project,
        • recap of previous developments in 2020,
        • overview of current developments,
        • project organisation and scaling,
        • embedding in the ecosystem: long-term plan for KWinFT as a C++ library collection for the creation of feature-rich Wayland (and X11) compositors.
        Speaker: Roman Gilg
      • 17:25
        Enabling Level Zero Sysman APIs for tool developers to control GPUs 45m

        We talk about a new programming interface, Sysman, which is part of the Level Zero library.
        Sysman (System Resource Management) is used to monitor and control the power, frequency, temperature, etc., of accelerator devices.
        Sysman is an API that will:
        • enable HPC (High Performance Computing) GPU servers to optimize and track the power, temperature, utilization, memory bandwidth, and scheduling of Intel discrete graphics cards for the kinds of workloads that run in those environments;
        • provide system-level monitoring of important telemetry such as power, frequencies, and temperature, as well as firmware updates;
        • be integrated as part of oneAPI Level Zero with hooks into the Level Zero UMD driver.

        Speakers: Saikishore Konda, Ravindra Babu Ganapathi, Jitendra Sharma, T J Vivek Vilvaraj
      • 18:15
        Redefining the Future of Accelerator Computing with Level Zero 45m

        Modern applications in areas like Machine Learning, Artificial Intelligence, and 3D Graphics require a synergistic software/hardware ecosystem that allows developers to take full advantage of hardware accelerators. In this scenario, it is critical to have a low-level API that can easily support and adapt to any device, in order to minimize the impact on upper levels of the software stack when exposing novel hardware capabilities to higher-level programming models and frameworks.

        The Level-Zero API, part of Intel's oneAPI product, defines a device-independent, vendor-agnostic, low-level, direct-to-metal interface to accelerator devices that abstracts users and upper-level components of the software stack from the specifics of the target devices, while providing them with the access needed to fully exploit their hardware capabilities. This is essential for Intel to expose new hardware features at a faster pace and to effectively compete against the established CUDA-based ecosystem from NVIDIA.

        This presentation offers an overview of the rich set of interfaces defined in Level-Zero, focusing on capabilities such as unified shared memory, peer-to-peer communication, and inter-process communication. Additionally, the status of the implementation of Level-Zero and its adoption by higher-level compilers, analysis tools, performance libraries, and other frameworks is presented.

        Speakers: Jaime Arteaga (Intel), Ravindra Babu Ganapathi, Aravind Gopalakrishnan (Intel), Michal Mrozek (Intel), Brandon Fliflet (Intel), Ben Ashbaugh (Intel)
      • 19:05
        X.Org security 20m

        I'm going to present a summary of the last 10 years or so of participating in the moderation and animation of the xorg-security@ mailing list.
        This is an opportunity for people interested in taking over this responsibility to get an insight into the kind of issues that are submitted and how we've been dealing with them.

        Speaker: Matthieu Herrb
      • 19:30
        X.Org Foundation Board of Directors Meeting 1h
    • 13:15 19:15
      • 13:15
        Coordinating the CI efforts for Linux + userspace 2h

        With the ever-increasing focus on testing found in our community, let's try to coordinate the efforts of every individual.

        The main focus for this workgroup will be two-fold:

        • Ramp up the trace-based testing in Mesa CI / DXVK / ...
        • Bring kernel testing to more drivers than i915

        Please ping mupuf on IRC on OFTC's #freedesktop to add additional topics or show interest in one.

        Speaker: Martin Roukala (né Peres) (X.Org / Valve contractor)
      • 16:35
        SSA-based Register Allocation 2h

        After the talk "SSA-based Register Allocation for GPU Architectures", this workshop will be for people considering implementing SSA-based register allocation or wanting to understand the ACO and Freedreno implementations. We can also go more in-depth with different strategies and heuristics used to optimize the register allocation problem, if there is interest.

        Speakers: Connor Abbott (Valve), Daniel Schürmann (Valve)
    • 13:00 17:45
      Main Track
      • 13:00
        Opening Session 10m
      • 13:15
        Improving the Linux display stack reliability 45m

        Due to its nature, the display stack can be hard to test. Indeed, the component we want to test often sends the pixels to an external display without any way to retrieve the image being output, let alone make sure it's correct.

        And while a human can perform some of those tests by looking at the screen, some issues can prove to be difficult to spot, such as colours being slightly off or pixels being offset. More complex tests can also be tedious to set up or hard to trigger.

        The ecosystem of devices that Linux supports also adds further constraints on the display interfaces we want to test, but also on the system size, the tools available, the connectivity of the device, etc.

        In this talk, we will first discuss the constraints and what makes testing the display stack unique. We will then talk about the existing solutions, their limitations, and what we have been working on to improve the situation.

        Speaker: Maxime Ripard
      • 14:05
        KWinFT's wlroots backend 20m

        The big change in KWinFT this year is the replacement of all its own hardware backend plugins for its Wayland session with a single backend talking to wlroots.

        This talk goes into detail on:

        • reasons for this strategic move,
        • technical realization,
        • outcome with advantages and disadvantages,
        • long-term impact on the ecosystem.
        Speaker: Roman Gilg
      • 14:30
        TTM conversion in i915 45m

        The purpose of TTM is to provide buffer object contents in memory where it is mappable by the CPU and GPU when needed, and also to allow overcommitting by means of swapping or eviction.

        This talk will cover the process of moving memory management in i915 kernel driver to TTM.

        Speaker: Thomas Hellstrom (Intel)
      • 15:20
        Status of freedesktop.org gitlab/cloud hosting 45m

        Last year, it was fires everywhere. This year? well, it was also the same, sort of.

        In this talk, we will see what steps we took to further reduce the bill for our GitLab hosting. We will also tell some jokes like "oh, BTW, we almost lost all of our storage", or "oops, I killed the entire cluster". Oh, the fun we had.

        So yes, this is basically the continuation of the talk I gave last year to present the new infrastructure and the roadmap we have for gitlab.freedesktop.org.

        Speaker: Benjamin Tissoires (Red Hat)
      • 16:10
        Making bare-metal testing accessible to every developer 20m

        With Freedesktop's move to GitLab, every project not only got access to a lot of machine time, but also all the infrastructure to automate their runs, inspect the results, and provide automated testing reports for merge requests. This has led to a lot of projects adopting it to reduce regressions and maintenance costs, to the point of almost bankrupting Freedesktop.org! The only downside of the current testing infrastructure is that it is meant to run in the cloud, not on the GPUs we develop drivers for! Of course, some efforts are underway to make even the DRM subsystem testable in the cloud (VKMS), but if we are to prevent regressions through pre-merge testing, we need at some point to run on the real hardware!

        Hardware-testing labs do exist, but they rarely seem to happen without a corporation to back them up, as only corporations have the resources to pay for the development of the system interfacing with the hardware, its hosting, and its maintenance. To be within the reach of hobbyist projects, we estimate the cost should be limited to USD 1,000, one weekend of hardware set-up time, a couple of evenings of tweaking before reaching stability, and no more than an hour per week of maintenance after that. To reach this goal, we need to make deployment as easy as assembling plastic bricks, keep maintenance costs down through self-configuration and self-healing, and make running GitLab CI jobs in the farm as easy as inheriting from a CI template and setting a couple of environment variables!

        While we have not yet fully reached this lofty goal, we are already operating 3 farms in 3 locations with the above properties mostly implemented \o/ In this talk, we present how easy it is to deploy a kernel and run containers in our farm, show what it takes to set up a test farm at home, and what can be done to get hobbyist projects like Nouveau tested!

      • 16:35
        A new CPU performance scaling proposal for tuning VKD3D-Proton 20m

        CPU performance scaling is one of the key parts of the Linux kernel: it manages the CPU frequency according to kernel and processor status and is widely used by user-mode applications to talk to the processors. The system information APIs in Wine use the CPU performance scaling interfaces to manage multi-core processor scheduling-timing compatibility from Windows applications to the Linux environment for VKD3D-Proton (the full Direct3D 12 API on top of Vulkan) on Steam. The original CPU performance scaling module is based on the legacy kernel common ACPI cpufreq driver on AMD processors. We found it was not very performance- or power-efficient on modern AMD platforms. This talk introduces a new CPU performance scaling design for AMD platforms with better performance-per-watt scaling in 3D games such as Horizon Zero Dawn with VKD3D-Proton on Steam.

        The idea was inspired by working with Valve engineers on tuning an animation slowdown problem (https://github.com/ValveSoftware/Proton/issues/4125) of VKD3D-Proton on Steam.

        Speaker: Ray Huang
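        The frequency-selection step at the heart of such a scaling driver can be illustrated with a toy, schedutil-style governor. This is a hedged sketch: the 1.25 * f_max * (util / util_max) target mirrors the kernel's schedutil heuristic, but the frequency table and utilization values below are invented, and this is not the AMD design the talk presents.

```python
def next_freq(util, util_max, available_khz):
    """Pick the lowest available frequency at or above the schedutil-style
    target 1.25 * f_max * (util / util_max), using integer math."""
    f_max = max(available_khz)
    target = (f_max + f_max // 4) * util // util_max
    return min((f for f in available_khz if f >= target),
               default=f_max)

# A made-up OPP table in kHz, as a cpufreq driver might expose.
table = [1400000, 2100000, 2800000, 3500000]
print(next_freq(512, 1024, table))   # half load  -> 2800000
print(next_freq(1024, 1024, table))  # full load  -> 3500000
```

The 1.25 headroom factor means the CPU ramps up before it is fully saturated, trading a little power for responsiveness; a performance-per-watt tuned design adjusts exactly this kind of trade-off.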
      • 17:00
        Video decoding in Vulkan: A brief overview of the provisional VK_KHR_video_queue & VK_KHR_video_decode APIs 20m

        In April of this year, Khronos released a provisional set of extensions: VK_KHR_video_queue, VK_KHR_video_decode_queue, and VK_KHR_video_encode_queue. They all aim for hardware-accelerated video decoding and encoding with the Vulkan API. In this talk, we will introduce the basics of video decoding and give an overview of the concepts used to decode video via the new Vulkan extensions, using the API's usage in a GStreamer element as an example. The talk will be educational and focus on helping others in the X/Mesa community understand the new API concepts.

        Speaker: Victor Manuel Jáquez Leal (Igalia)
      • 17:25
        State of the X.Org 20m

        Your secretary's yearly report on the state of the X.Org Foundation. Expect updates on freedesktop.org, internship and student programs, XDC, and more!

        Speaker: Lyude Paul (Red Hat)
    • 13:15 17:30
      • 13:15
        Hostile Multi-Tenancy on a Single Commodity GPU: Can it be secure? 2h

        While GPU multi-tenancy in the server world has grown rapidly, hostile multi-tenancy on single, commodity GPUs has been virtually unexplored. Existing multi-tenancy solutions for GPUs all fall short in at least one of the following areas: Minimizing attack surface, strongly isolating potentially hostile tenants, supporting consumer GPUs, and allowing parallel sharing of a single GPU between tenants. Containers and VirtualBox’s virtual GPU are not secure enough to protect against hostile workloads. VirGL, KVMGT, XenGT, and WebGL are all incredibly complex solutions with massive attack surface. AMD and NVIDIA already support GPU virtualization, but it is limited to costly enterprise cards and the NVIDIA solution requires proprietary drivers. Hyper-V GPU partitioning support is neither free software nor production ready. Finally, PCIe pass-through to a VM requires 1 GPU per tenant, which makes it insufficient for desktop partitioning solutions such as Qubes OS.

        This workshop is a twofold challenge: First, determine if hostile multi-tenancy on a single commodity GPU can be implemented securely. If it can, figure out how; if it cannot, determine what would be needed from GPU vendors. The goal is to begin work towards a secure, capability-based GPU multiplexer that runs on commodity hardware and is agnostic to the specific CPU-side isolation mechanism, whether it be a microkernel, a hypervisor, or something else entirely.

        Speaker: Demi Obenour (Invisible Things Lab)
      • 15:30
        X.Org security BoF 2h

        I'm going to present a summary of the last 10 years or so of participating in the moderation and animation of the xorg-security@ mailing list.
        This is an opportunity for people interested in taking over this responsibility to get an insight into the kind of issues that are submitted and how we've been dealing with them.

        Speaker: Matthieu Herrb
    • 17:50 18:25
      Lightning Talks II
      • 17:50
        Spoilers: XDC 2022 5m

        In this talk we'll reveal the location and vintage of XDC 2022.

        Speakers: Jeremy White (CodeWeavers), Arkadiusz Hiler (CodeWeavers)
      • 17:55
        Conclusions about BVH building with RADV and ANV 5m

        Conclusions from the BVH building break-out

        Speakers: Jason Ekstrand (Intel), Bas Nieuwenhuizen (RADV)
      • 18:00
        Rust in Mesa 5m

        I played around with how we can make use of Rust in Mesa and wanted to give a short talk about what I've done and what the biggest missing pieces are.

        Speaker: Karol Herbst (Red Hat, Nouveau)
      • 18:05
        Summary of discussions from multi-tenancy workshop 5m

        This is a summary of what was discussed in the workshop on GPU multi-tenancy.

        Speaker: Demi Obenour (Invisible Things Lab)
      • 18:10
        Xorg security BoF summary 5m

        This will present the topics discussed during the BoF session.

        Speaker: Matthieu Herrb
      • 18:15
        Notes on the CI workshop 5m

        Just a quick recap from the testing workshop.

        Speaker: Martin Roukala (né Peres) (X.Org / Valve contractor)
      • 18:20
        Virtual conference how-to 5m

        We will explain our experience organizing this XDC as a virtual conference.

        Speaker: Ryszard Knop (Intel)
    • 18:30 18:40
      Main Track
      • 18:30
        Closing session 10m