GStreamer Conference 2025

Timezone: Europe/London

Barbican Centre
Level 4, Silk Street, London, EC2Y 8DS, UK
Description

23-24 October 2025 | London, UK

Conference Website

https://gstreamer.freedesktop.org/conference/2025/

Venue

The conference will take place at the Barbican Centre in Central London.

Call for Presentations

Submit your talk or lightning talk proposal now!

Registration

You can register for the conference via the conference website (see above).

Updates

Follow us @GStreamer on Mastodon or Bluesky, our Discourse Forum, or the GStreamer Events channel on Matrix for the latest updates and to stay connected. Or check the main conference website (see above).

    • 19:30 22:30
      Social Events: Arrival day welcome drinks/food at The Angel EC1 (1st floor)

      The Angel EC1, 73 City Road, London EC1Y 1BD
    • 08:30 09:30
      Registration
    • 09:31 09:34
      Opening Session
    • 09:35 10:15
      Room 1

      Frobisher Auditorium 2

      • 09:35
        GStreamer State of the Union 40m

        This talk will take the usual bird's-eye look at what's been happening in and around GStreamer in the last release cycle(s) and look ahead at what's next in the pipeline.

        Speaker: Tim-Philipp Müller (Centricular)
    • 10:20 11:05
      Room 1

      Frobisher Auditorium 2

      • 10:20
        GStreamer in the Medical Simulation Environment 45m

        Laerdal Medical is dedicated to helping save lives by being a leading provider of medical training resources around the world, from hardware like CPR manikins to software, such as SimCapture. Underpinning SimCapture is a media recording and streaming service called CaptureNode, which, over time, has migrated from Flash to WebRTC and continues to grow to better serve the medical community. In our talk, we will discuss the service's architecture and the growing impact of the GStreamer community in this space.

        Speakers: Thomas Goodwin (Laerdal Labs, DC) , Jeff Wilson (Laerdal Labs, DC)
    • 10:20 11:05
      Room 2

      Frobisher Rooms

      • 10:20
        Hardware-Accelerated Live Broadcasting of Uncompressed ST 2110 Streams with GStreamer leveraging NVIDIA GPUs and NICs 45m

        This presentation showcases pipelines built with open-source NVIDIA GStreamer plugins for GPU- and NIC-accelerated RTP transmit and receive (NvDsUdp) and NMOS integration (NvDsNmos) for uncompressed broadcast ST 2110 workflows with low CPU overhead and high throughput. It details technical advances such as direct GPU-to-NIC memory transfer, packet-paced transmission, and kernel bypass for optimized networking, as well as dynamic endpoint registration and connection management via NMOS. Further, we introduce a new SMPTE291 RTP payloader/depayloader for ancillary data streaming, which we demonstrate in real-time, AI-enabled live workflows.

        SMPTE: Society of Motion Picture and Television Engineers
        ST2110: Suite of standards from SMPTE describing how to send very high bitrate streams (over 10 Gb/s for UHD) with precise synchronization over an IP network.
        SMPTE291: RTP payload specification for Ancillary Data
        NMOS: Networked Media Open Specifications for interoperability on the control layer for media devices on an IP infrastructure

        Speaker: Johan Jino (Nvidia)
    • 11:05 11:25
      Coffee break 20m
    • 11:25 13:05
      Room 1

      Frobisher Auditorium 2

      • 11:25
        Threadshare, a plugin collection to increase scalability 30m

        The Threadshare framework is an asynchronous runtime and a set of elements that reduce resource usage when handling many streams.

        It was introduced in 2018 and presented at the GStreamer conference in Edinburgh the same year.

        After a reminder of the core principles of the framework, this talk will present the changes that have occurred since 2018.

        Speaker: François Laignel (Centricular ltd)
      • 12:00
        A new era for GStreamer C++ bindings 30m

        A common question from people who are interested in using GStreamer and who plan to use C++ is which C++ bindings to use. Most of the available choices so far are essentially unmaintained or not widely used, so the recommendation has usually been to use the C API directly. I hope to change this recommendation now.

        This year a new C++ bindings generator for GObject-based libraries was announced and gained some traction in GNOME: peel. Unlike the alternatives, it provides header-only, dependency-free and zero-cost bindings while making use of modern C++ features.

        In this talk I will give an overview of peel: how to use it, the kind of bindings API it provides, and why I believe you should really consider it for any future C++ project making use of GStreamer, whether an application, a library or a GStreamer plugin.

        Speaker: Sebastian Dröge (Centricular Ltd)
      • 12:35
        Brief history of GStreamer adoption at Twilio 30m

        A brief history of GStreamer use at Twilio, starting from Video rooms in 2016 to modern-day media processing on GStreamer for voice, media recordings, media streams, and integration with conversational AI providers.

        Speakers: Mr Jeff Foster (Twilio) , Andrey Kovalenko (Twilio)
    • 11:25 13:05
      Room 2

      Frobisher Rooms

      • 11:25
        Tools to profile a video encoder 30m

        This talk presents the implementation of video encoder analysis tooling within GStreamer. We'll demonstrate our video-encoder-stats element that collects real-time encoding performance metrics including bitrate, processing time, CPU usage, and VMAF quality scores, attaching this data as metadata to video buffers throughout the pipeline. The presentation covers our video-compare-mixer element that enables side-by-side visual comparison of multiple encoder outputs with interactive navigation controls, supporting various backends. We'll showcase how these elements work together in our demo tool.

        https://github.com/fluendo/gst-plugins-rs/pull/4

        Speaker: Diego Nieto Munoz (Fluendo)
      • 12:00
        Virtual Hardware: Emulating a Video4linux Hardware Decoder 20m

        Developing and debugging applications leveraging hardware-accelerated video decoding often requires access to specific hardware, creating a significant barrier to early development and continuous integration.

        This talk introduces a new V4L2 decoder implementation that provides a software backend, effectively emulating a hardware decoder. This allows developers to write and test their GStreamer pipelines on generic systems, significantly accelerating development and improving portability. This talk examines the design choices and implementation details of this new decoder.

        Speaker: Jan Schmidt (Centricular Ltd)
      • 12:35
        VVC/H.266 Alpha Channel support in GStreamer 30m

        Alpha Channel is an essential tool in modern video workflows, enabling transparency and visual effects as required in telepresence and Virtual Reality applications, or modern websites. The Versatile Video Coding (VVC/H.266) standard supports Alpha Channel natively by encoding transparency data as an independent auxiliary layer signaled via SDI and ACI SEI messages (ITU H.274).

        This presentation details the work to integrate this standard-compliant VVC/H.266 alpha channel support into VVenc and GStreamer for encoding and decoding streams with transparencies.

        The presentation will cover the following topics:
        - Introduction to Alpha Channel
        - Alpha Channel support in video codecs
        - Alpha Channel decoding support in GStreamer with GstAlphaDecodeBin
        - VVC/H.266 Alpha Channel encoding with VVenc
        - VVC/H.266 Alpha Channel decoding with GStreamer
        - Encoding and decoding demo

        Speaker: Andoni Morales Alastruey (Fluendo)
    • 13:05 14:15
      Lunch (at venue) 1h 10m
    • 14:15 15:45
      Room 1

      Frobisher Auditorium 2

      • 14:15
        From Streams to Insights: Advancing GstAnalytics 30m

        GstAnalytics has evolved into an important part of GStreamer, providing powerful elements and metadata APIs that streamline the creation of sophisticated analytics pipelines. This presentation showcases the significant advancements made to GstAnalytics this year, including:

        • Tensor negotiation capabilities for seamless ML model integration
        • More Pythonic bindings that facilitate analytics development
        • Enhanced metadata and new tensor decoders

        We will share our roadmap for future GstAnalytics improvements and conclude with a demonstration of these capabilities through sports analytics applications, showcasing the power of GstAnalytics.

        Speaker: Daniel Morin (Collabora)
      • 14:50
        GStreamer in VR devices manufacturing 30m

        At Meta, we develop and manufacture Quest VR devices. For various use cases, such as camera calibration and computer vision (CV) algorithm development, we currently use an in-house solution for camera and sensor recording in our labs. We are now transitioning to a GStreamer-based solution.

        In this presentation, we will cover:
        • Why we chose GStreamer.
        • How we use GStreamer.
        • The challenges and lessons learned.

        Speaker: Ivan Loskutov
      • 15:23
        librice: the TURNing point (ONLINE ONLY) 1m

        ICE (Interactive Connectivity Establishment) is a widely used standard for NAT (Network Address Translation) hole punching. If NAT hole punching fails, then TURN (Traversal Using Relays around NAT) can be used to relay data between peers.

        librice is a sans-IO library that handles the intricacies of ICE and has recently gained support for communicating with TURN servers. We will discuss TURN and how it is implemented within librice.

        Speaker: Matthew Waters (Centricular)
      • 15:25
        State of MPEG-TS in GStreamer 20m

        State of MPEG-TS in GStreamer

        Speaker: Edward Hervey (Centricular Ltd)
    • 14:15 15:45
      Room 2

      Frobisher Rooms

      • 14:15
        WirePlumber, present challenges and future directions 30m

        Over a year ago, we introduced WirePlumber 0.5, bringing major advancements such as the event stack for fine-grained control of PipeWire events, runtime settings to dynamically adjust behavior, and smart filters for automatic audio and video filter handling. These features marked a big step forward. However, WirePlumber is still evolving, and many use cases remain unmet.

        In this talk, we’ll share the key challenges we’re tackling, our roadmap to overcome them, and our vision for a robust, stable API that will pave the way to WirePlumber 1.0.

        Speaker: Julian Bouzas (Collabora)
      • 14:50
        The road to Enhanced FLV and RTMP in GStreamer 20m

        A new version of the Enhanced RTMP specification (v2) was announced earlier this year, and one of its features, Multitrack Capabilities, was recently implemented in GStreamer (!9682), audio-only for now.

        In this talk, I will briefly introduce the eRTMP specification, and then talk about my experience in adding new capabilities in the GStreamer FLV plugin.

        I will cover aspects like:

        • challenges we have seen in extending the FLV muxer and demuxer
        • the options we considered to address the challenges
        • how the final implementation took shape
        • interoperability issues with other implementations
        • scope for inclusion of other features of the spec
        Speaker: Mr Taruntej Kanakamalla (Centricular Ltd)
      • 15:25
        Enabling I-frame playlists with HLS CMAF 20m

        HTTP Live Streaming (HLS) is a widely adopted protocol for live video streaming and has been supported by GStreamer for a long time. HLS enables streaming of multiple formats and bit rates, allowing players to dynamically adjust their streaming quality based on network conditions to ensure an optimal viewer experience.

        The HLS specification supports I-frame-only playlists, where each media segment in the playlist describes only an I-frame. I-frame playlists are used for trick play, such as fast forward and scrubbing.

        This talk will briefly cover the implementation details for enabling I-frame-only playlist support in the GStreamer HLS CMAF plugin.
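
        For context, an I-frame-only media playlist in HLS is marked with the EXT-X-I-FRAMES-ONLY tag, and each entry addresses just the bytes of one I-frame inside a segment via EXT-X-BYTERANGE. A hand-written sketch (file names, durations and byte ranges are made up for illustration) might look like:

          #EXTM3U
          #EXT-X-VERSION:7
          #EXT-X-TARGETDURATION:4
          #EXT-X-MEDIA-SEQUENCE:0
          #EXT-X-I-FRAMES-ONLY
          #EXT-X-MAP:URI="init.cmfv"
          #EXTINF:4.000,
          #EXT-X-BYTERANGE:24000@376
          segment0.cmfv
          #EXTINF:4.000,
          #EXT-X-BYTERANGE:21500@412
          segment1.cmfv
          #EXT-X-ENDLIST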

        Speaker: Mr Sanchayan Maity (Centricular)
    • 15:45 16:00
      Coffee break 15m
    • 16:00 16:55
      Room 1

      Frobisher Auditorium 2

      • 16:00
        VVC/H.266 in GStreamer 20m

        Basic support for VVC/H.266 was added in GStreamer 1.26. This talk will give an overview of the VVC codec and ecosystem, along with the building blocks and contributions in GStreamer for supporting this codec.

        Speaker: Carlos Bentzen (Igalia)
      • 16:35
        Costly Speech: an introduction 20m

        This is a continuation from my 2023 presentation (https://indico.freedesktop.org/event/5/contributions/232/).

        In this iteration I will present the improvements that have been made to existing speech to text and translation elements, new transcription backends (deepgram, ..), and a new family of text to speech elements.

        A demo might even happen!

        Speaker: Mathieu Duponchelle (Centricular)
    • 16:00 16:55
      Room 2

      Frobisher Rooms

      • 16:00
        Making GStreamer Go! 30m

        Go is a modern systems programming language that offers awesome concurrency. This talk will show you why you should consider Go for your next GStreamer project.

        Speaker: Wilhelm Bartel
      • 16:35
        Region-Based Compression in GStreamer 20m

        In my master’s thesis, I explored region-based compression for sports broadcasting. We used FFmpeg because it provided a generic addroi filter for attaching ROI metadata that we could use for multiple encoders, but GStreamer currently lacks a simple way to define regions and pass them to encoders downstream, and I wanted to change this.

        To prototype this, I built a small Rust plugin that works similarly to addroi, appending GstVideoRegionOfInterestMeta to frames, and extended x264enc to consume it. In this lightning talk, I will demo the prototype and show some results of using it, highlighting how a generic ROI solution could enable broader support for region-based compression in GStreamer.
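
        As a rough illustration of the mechanism (not the thesis prototype itself), the minimal sketch below attaches an ROI to a buffer with the standard GstVideoRegionOfInterestMeta API; an ROI-aware encoder downstream could then read the meta and spend more bits inside that region. The coordinates and the "roi" label are placeholders.

          #include <gst/gst.h>
          #include <gst/video/video.h>

          int
          main (int argc, char **argv)
          {
            GstBuffer *buf;
            GstVideoRegionOfInterestMeta *meta;

            gst_init (&argc, &argv);

            /* A dummy buffer standing in for a decoded video frame. */
            buf = gst_buffer_new_allocate (NULL, 1280 * 720 * 4, NULL);

            /* Mark a 320x180 region at (100, 50) as a region of interest; an
             * ROI-aware encoder could lower the quantizer inside this region. */
            meta = gst_buffer_add_video_region_of_interest_meta (buf, "roi",
                100, 50, 320, 180);

            g_print ("attached ROI %ux%u at (%u,%u)\n", meta->w, meta->h,
                meta->x, meta->y);

            gst_buffer_unref (buf);
            return 0;
          }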

        Images:
        addroi_filter: a simple example of stronger compression in one region
        controlled_compression: compressing the audience in a football arena more heavily, for better quality on the field

        Speaker: Axel Tobieson (Spiideo)
    • 17:00 18:45
      Lightning Talks
      • 17:00
        Rewriting CoreAudio-based elements on macOS 5m

        A short story about what's wrong with osxaudiosrc/sink, what the new elements will try to do better, and what possibilities that will unlock.

        Speaker: Piotr Brzeziński (Centricular)
      • 17:05
        Showing the invisible: Analysing buffer flow with tracers 5m

        This talk will show how the buffer lateness, queue levels & pad push timings tracers can assist GStreamer developers in analysing buffer flows.

        Speaker: François Laignel (Centricular ltd)
      • 17:10
        Video Frame Scheduler Plugin for Improving WebRTC Playback Quality 5m

        In WebRTC-based applications, video frames are often delivered at irregular intervals due to the nature of real-time communication. This irregularity can cause issues in waylandsink, which commits frames based on the arrival of frame_redraw_cb signals.

        As a result, even when all frames are correctly delivered—for example, in a 60fps video where all 60 frames reach waylandsink—frames may still be dropped in the gst_wayland_sink_show_frame function if the redraw callback is not received in time.

        On webOS devices developed by LGE, we tested 60fps video playback using cloud gaming and observed a frame drop rate of over 35% in waylandsink. In contrast, when using non-WebRTC streaming methods with more consistent frame intervals, playback was smooth with almost no frame drops under the same conditions.

        To address this issue, we developed a GStreamer plugin called Video Frame Scheduler, which sits between the video decoder and sink. It schedules frame delivery at regular intervals, simulating consistent frame timing. This reduced the frame drop rate to below 10%, significantly improving playback smoothness and reliability.

        In this Lightning Talk, we would like to share the challenges we encountered, the design of our solution, and engage with the GStreamer community to explore alternative or complementary approaches. We look forward to exchanging ideas and insights with fellow developers and multimedia experts.

        Speaker: HAEJUNG HWANG (LG Electronics)
      • 17:15
        soothe: a proposal for encoder testing 5m

        https://github.com/Igalia/soothe

        Soothe is a testing framework written in Python for assessing encoder quality. It's a command-line application that runs a number of test suites with the supported encoders. Its purpose is to compare different encoder implementations and configurations. It uses the VMAF binary to measure the quality of the transcoded videos.

        Speaker: Victor Manuel Jáquez Leal (Igalia)
      • 17:20
        Audio source separation using snakes, crabs and torches 5m

        Audio source separation is the process of separating the individual sources from a mixed audio stream. This can be used, for example, to remove the vocals from a song in a karaoke application, to extract voice from a movie to allow for better transcription without background noise, or to remove one specific instrument from your favorite song and play that instrument yourself.

        In this lightning talk I will present the GStreamer "demucs" plugin. Written in Rust, the plugin uses a PyTorch music source separation model that is available as an easily installable Python package. This mix of technologies provides some interesting technical puzzles.

        Speaker: Sebastian Dröge (Centricular Ltd)
      • 17:25
        pexLGPL bundle 5m

        A story of how we pushed the limits of LGPL compliance and created a monster dynamic library to rule them all.

        Speaker: Mr Tulio Beloqui (Pexip)
      • 17:30
        What's up with Video4Linux support 5m

        Nicolas will share the latest updates on Video4Linux support in GStreamer, highlighting recent changes, ongoing integration work, and developments from the broader Linux Media community. This lightning talk continues the annual tradition of keeping the GStreamer community up to date with the fast-moving V4L2 ecosystem.

        Speaker: Nicolas Dufresne (Collabora)
      • 17:35
        GstVA and GStreamer-VAAPI updates 5m

        We have removed the GStreamer-VAAPI subproject; it has mostly been replaced by GstVA in gst-plugins-bad. We will talk about what's missing and what the roadmap looks like.

        Speaker: Victor Manuel Jáquez Leal (Igalia)
      • 17:40
        Full GPU driven AI workloads with GStreamer and Raven 5m

        Raven is an AI engine we are developing at Fluendo, designed for multimedia AI workflows. It combines AI inference with GPU-accelerated processing, giving full control over the entire GPU pipeline, from memory allocation to execution, allowing for deep customization across hardware and environments.

        In this lightning talk, I will showcase a set of GStreamer plugins for background removal, anonymization, and superresolution built with Raven running on Windows at full speed.

        Speaker: Andoni Morales Alastruey (Fluendo)
      • 17:45
        burn: a little case study on using GstAnalytics from Rust 5m

        Almost all existing code using the GstAnalytics API is in C or Python: inference elements, tensor decoders and all kinds of infrastructure elements.

        In this lightning talk I will talk about my experience writing an inference element around the Rust burn deep-learning / machine-learning framework, writing a tensor decoder for YOLOX in Rust, and how it integrates with the remaining GstAnalytics infrastructure.

        burn is a Rust framework that is modeled after the PyTorch API and supports many different CPU/GPU/NPU backends.

        Speaker: Sebastian Dröge (Centricular Ltd)
      • 17:50
        Vulkan Video: pipeline update 5m

        A state-of-the-art overview of the present and upcoming Vulkan video elements. We'll talk about the architecture, the codecs, and the challenges in achieving Vulkan support.

        Speaker: Stéphane Cerveau (Igalia)
      • 17:55
        Fallback Streaming for RTSP Server 5m

        We wanted to serve content immediately while waiting for the primary source to arrive.
        To solve this, we implemented a fallback mechanism using the fallbackswitch element combined with a custom parsebinloop element that continuously loops a seekable file source with seamless timestamps, making it appear as a live stream.

        The system maintains two sources: the incoming primary stream and the looping fallback. When clients connect, they immediately receive the fallback content. Once the primary source arrives, we automatically switch to it and cleanly destroy the fallback pipeline. This ensures a stable RTSP server that always serves video, eliminating the "waiting for stream" experience.

        In this lightning talk, I'll quickly go over the challenges of implementing seamless looping with continuous timestamps, lessons learned about building it, and demo the system in action showing the smooth transition from fallback to live content.

        Speaker: Axel Tobieson (Spiideo)
      • 18:00
        Video Reshaping with Skia 5m

        This talk will present skiareshape(gl), GStreamer elements that bring geometric transformations to your video pipelines using the Skia graphics library.

        Speaker: Thibault Saunier (Igalia)
      • 18:05
        Reading v4l2 data from MR813 devices in a way that doesn't suck 5m

        When your device's v4l2 implementation is so particular that it turns out to be easier to write a new element than to patch v4l2src

        Speaker: Vivia Nikolaidou
      • 18:10
        Animate Your Subtimelines in GES 5m

        This talk will present the primary timeline registration feature we are working on in GES, which enables live updates of subtimelines, an important feature for enabling true collaborative editing of complex projects.

        Speaker: Thibault Saunier (Igalia)
    • 19:00 23:00
      Social Events: Food and drinks at BrewDog Chancery Lane

      BrewDog Chancery Lane, 1 Plough Place, London EC4A 1DE, England
    • 09:00 09:45
      Registration: Coffee and tea
    • 09:45 11:10
      Room 1

      Frobisher Auditorium 2

      • 09:45
        GStreamer at Scale: Recent Lessons from Real-World Video Conferencing 40m

        At Pexip, GStreamer powers our global video conferencing platform, processing real-time media for millions of users. Since the last GStreamer Conference, we’ve continued to evolve our use of the framework, focusing on interoperability, scaling, and performance in production.

        In this talk, we’ll share highlights from the past couple of years: TWCC statistics, device monitoring and sinks, GstBin teardown optimizations, RTP session SSRC handling, pipeline auto-removal, and more; covering both the challenges faced and the improvements contributed back to the community.

        Speaker: Håvard Graff (Pexip)
      • 10:30
        GStreamer for audio distribution at Sveriges Radio 40m

        We will guide you through our journey from a proprietary software and hardware solution to an open, GStreamer-based platform for our IP-based audio distribution at the Swedish public radio broadcaster, Sveriges Radio (SR). We will share the experiences, challenges, opportunities and crossroads we have faced when building, deploying and operating this hybrid cloud solution. Our platform supports both HLS, with different flavours of AAC, and traditional ICY streaming with support for FLAC and Opus, as well as audio processing through custom GStreamer plugins. We will also outline some future work.

        Speakers: Karl Johannes Jondell (Sveriges Radio AB) , Christofer Bustad (Sveriges Radio AB)
    • 09:45 11:10
      Room 2

      Frobisher Rooms

      • 09:45
        PipeWire’s pipeline operation vs GStreamer’s explained 40m

        In GStreamer, the media pipeline operation is defined by threads that are continuously pushing or pulling data through the pads of the linked elements. This is an elegant and very flexible way of operating the pipeline, but there are certain drawbacks.

        PipeWire follows a different approach, using a single thread that wakes up elements in a dependency-based sequence to consume and produce data placed in shared memory. With that approach, it can quite efficiently schedule pipelines with a predictable processing latency, even when they span multiple processes. How does that work exactly, and are there drawbacks to that as well?

        This talk aims to explain in depth and discuss pipeline operation and how to get the most out of different systems.

        Speaker: George Kiagiadakis (Collabora)
      • 10:30
        Improving WebRTC datachannel performance 30m

        In WebRTC, data channels are used to exchange arbitrary data.

        Data channels make the perfect companion to the live video and audio features of WebRTC. Unfortunately, data channel throughput is far from satisfactory in many network environments.

        This presentation will summarize what Axis have been doing, are doing, and possibly will be doing to improve the performance in the GStreamer implementation of SCTP, the transport protocol used for data channels.

        Speaker: Emil Ljungdahl (Axis Communications)
    • 11:10 11:30
      Coffee break 20m
    • 11:30 13:10
      Room 1

      Frobisher Auditorium 2

      • 11:30
        The Art of Debugging GStreamer Software 40m

        Debugging and hardening software that relies on GStreamer often comes down to experience and the ability to pinpoint where problems really come from. Since GStreamer is Open Source, anyone can build that expertise, provided they can break problems down and trace them to their origin.

        In this talk, Nicolas will describe the methods he uses in practice such as: splitting bitstreams into smaller parts, applying advanced tracing to uncover what is really happening, and defining clear expectations of correct behaviour to guide debugging. The goal is not only to fix the immediate bug, but to improve the code so that the same area does not need to be revisited later. This talk is intended for application and plugin developers, and the approaches discussed will be useful both to newcomers and to very experienced developers.

        Speaker: Nicolas Dufresne (Collabora)
      • 12:15
        Auxiliary Stream Wrangling in playbin3 30m

        GStreamer’s playbin3 element provides a convenient and simple interface for basic media playback. Like earlier playbin elements, it has some support for external subtitles, but with some limitations. Extending it to handle multiple auxiliary streams - such as multiple audio or subtitle tracks - reveals a surprising number of complexities.

        This talk delves into the challenges faced when adding robust auxiliary stream support to playbin3, covering bugs found, pipeline negotiation intricacies, state management issues, and the corner cases that arose.

        Speaker: Jan Schmidt (Centricular Ltd)
      • 12:50
        Bringing AMD HIP into GStreamer 20m

        In this talk, we introduce how AMD's HIP has been integrated into GStreamer. We will look at the motivation behind supporting HIP, the integration approach, and what it means for building GPU-accelerated media pipelines that run efficiently on both AMD and NVIDIA hardware.

        Speakers: Max Campbell (Veo Technologies ApS, Max Campbell Technologies ENK) , Seungha Yang (Centricular)
    • 11:30 13:10
      Room 2

      Frobisher Rooms

      • 12:15
        GstWebRTC in WebKit, current status & plans 30m

        The WebKit WPE and GTK ports use GstWebRTC and webrtcbin as their WebRTC
        backend. As the first Web engine to rely on GstWebRTC, WebKit has a strong
        interest in improving spec compatibility. During this talk we will present
        the current integration status of GstWebRTC in WebKit and the achievements
        accomplished since GStreamer Conference 2024.

        Speaker: Philippe Normand (Igalia)
      • 12:50
        Why Keep a Thread Running? Meet GstBaseIdleSrc 20m

        At Pexip we've implemented a new base class, GstBaseIdleSrc, designed for elements that don't need a dedicated streaming thread but only push buffers occasionally, either when signaled from the outside or on specific events. This avoids the overhead of an always-running thread and simplifies writing event-driven sources.

        In this talk, we’ll introduce the motivation behind GstBaseIdleSrc, show how it differs from GstBaseSrc and appsrc, and share examples of where it fits well in real-world pipelines.

        Speaker: Camilo Celis Guzman (Pexip)
    • 13:10 14:20
      Lunch (at venue) 1h 10m
    • 14:20 15:25
      Room 1

      Frobisher Auditorium 2

      • 14:20
        Rebuilding Our Video Server Engine on GStreamer 30m

        Starting in the spring of 2018, our company rebuilt the core technology of our video server platform on top of GStreamer. What started as a proof-of-concept to solve a cost-of-goods problem grew into a complete re-architecture of our application, enabling new products and solutions we never thought possible before.

        In this talk, I'll share our journey with GStreamer: how we first discovered it, what made it stand out from other frameworks, and how it gradually became the foundation of our business. I'll cover the early wins, the steep learning curves, and the unique challenges we faced integrating GStreamer into our product.

        Throughout the talk I'll share what worked well, what we find lacking in the GStreamer ecosystem, and what we are doing to help improve it.

        Speaker: Ray Tiley (Tightrope Media Systems)
      • 14:50
        The Quest for Low-Latency Desktop Audio 35m

        The presenter has had the privilege of working on an app that requires low latency audio capture and render on macOS, Windows, and Linux. This has allowed him to make a direct comparison between the audio servers on these operating systems.

        In this talk, he will discuss some of the challenges involved in shipping a production-ready desktop app that runs on all three OSes.

        Speaker: Nirbheek Chauhan (Centricular Ltd)
    • 14:20 15:25
      Room 2

      Frobisher Rooms

      • 14:20
        Lessons Learned 20m

        Abstract: Transforming Our Video Management System with GStreamer
        In our journey to modernize and optimize our Video Management System (VMS), which currently manages more than 100,000 cameras from various manufacturers, we've transitioned the foundation of our live RTSP camera streaming from a custom FFmpeg-based solution to GStreamer pipelines. Additionally, we have reengineered our offline video player, replacing its FFmpeg-based framework with GStreamer to unify and enhance our media processing workflow.
        Our application runs on Windows, where we’ve leveraged GStreamer’s rapidly evolving capabilities, particularly the Direct3D 11 (d3d11) plugins, to achieve efficient hardware-accelerated decoding and rendering. While the newer Direct3D 12 (d3d12) plugins have gained maturity since we began, our next step is to investigate and integrate them into our application to further improve performance and scalability.
        In this talk, we will share our experiences with migrating to GStreamer, the challenges involved in adapting a complex VMS, our insights on using the d3d11 plugin suite on Windows, and the lessons we've learned along the way. We believe our story will resonate with others modernizing legacy systems and highlight the versatility of GStreamer in developing high-performance multimedia applications.

        Speaker: Mr BUMİN KAAN AYDIN (HAVELSAN A.Ş.)
      • 14:50
        Gst.wasm season 3 35m

        The purpose of this talk is to present the changes made since last year's presentation. We will first restate the goal of the project, and then cover the following changes:

        • WebTransport support in GStreamer
        • Optimizing GStreamer with ORC for WASM
        • Upstreaming process
        • Real-life usage
        Speakers: Jorge Zapata, Fabián Orccón
    • 15:25 15:40
      Coffee break 15m
    • 15:40 16:50
      Room 1

      Frobisher Auditorium 2

      • 15:40
        Time Remapping and GES: Implementation Details and Latest Updates 20m

        This talk will provide an update on the latest developments in GStreamer Editing Services (GES), with a special focus on the newly stabilized linear time remapping feature. Time remapping enables powerful video effects like slow motion, fast forward, and reverse playback by manipulating the relationship between input and output timestamps. We'll explore how this feature has been implemented at the GStreamer and NLE level. The presentation will cover the core GStreamer elements involved, the challenges we faced, and how these capabilities are exposed through GES for video editing applications. We'll also discuss the roadmap ahead, including exploring future work on dynamic speed changes during playback (smoothly transitioning from normal to slow motion while playing) and other improvements to the GES stack.

        Speaker: Thibault Saunier (Igalia)
      • 16:05
        What’s New in GStreamer D3D12 20m

        This session will introduce newly added Direct3D12-based elements and features, and highlight the latest improvements. Building on last year's presentation, we will showcase the progress made and share practical updates for creating efficient media pipelines on Windows.

        Speaker: Seungha Yang (Centricular)
      • 16:30
        Rusty Pipes and Oxidized Wires 20m

        Earlier this year, I began writing a native PipeWire client library in Rust. The aim is to provide a safer alternative to the bindings around the C library, while also reducing the amount of boilerplate in both the library implementation and the user-facing API.

        Achieving parity with the C API is no small task. In this talk, I will go over the overall approach to solving the problem, review the current state of the library (basic clients are already possible!), and chart a course to a complete native Rust API for PipeWire.

        I will also take a detour into the challenges of using Rust for a low-level system library, such as reconciling the PipeWire API's object lifecycle with Rust's ownership and lifetime system.

        Speaker: Arun Raghavan (Valve Corporation)
    • 15:40 16:50
      Room 2

      Frobisher Rooms

      • 15:40
        Cutting audio latency with bidirectional WebRTC 20m

        This talk will discuss the methods used to reduce latency in a bidirectional WebRTC pipeline, i.e. video, but primarily audio, in both directions. The primary topics are:

        • Splitting the pipeline using intersrc/intersink, covering the global pipeline latency, how to avoid unnecessary clock syncing, and what is needed to allow media streaming in two directions (see the sketch after this list).
        • Investigating different audio sinks, mainly pipewire and alsa, and their effect on overall latency, both pipeline latency and perceived/measured latency.
        • Possible further improvements: the jitterbuffer, external properties (PipeWire properties like quantum), and webrtcbin2 from GStreamer, which should potentially solve the first issue on its own.

        The end result reduced latency from 650-700 ms to roughly 230 ms, and as low as 180 ms depending on the setup, making real-time communication possible; the goal is 150 ms.
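
        As a rough sketch of the pipeline-splitting pattern from the first bullet (not the Axis pipelines; the intersink/intersrc element and producer-name property are assumptions based on the gst-plugins-rs inter plugin), the two halves of one direction could look like this:

          #include <gst/gst.h>

          int
          main (int argc, char **argv)
          {
            GstElement *send_half, *recv_half;

            gst_init (&argc, &argv);

            /* First half: capture audio and hand it to a named inter channel
             * instead of a sink, so the two halves can run independently. */
            send_half = gst_parse_launch (
                "audiotestsrc is-live=true ! audioconvert ! "
                "intersink producer-name=near-end", NULL);

            /* Second half: pick the channel up again and render it. */
            recv_half = gst_parse_launch (
                "intersrc producer-name=near-end ! audioconvert ! autoaudiosink",
                NULL);

            gst_element_set_state (send_half, GST_STATE_PLAYING);
            gst_element_set_state (recv_half, GST_STATE_PLAYING);

            g_main_loop_run (g_main_loop_new (NULL, FALSE));
            return 0;
          }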

        Speaker: Albert Sjölund (Axis Communications)
      • 16:05
        dcSCTP in GStreamer 20m

        How we ported and brought dcsctp into GStreamer to replace the current stack (usrsctp).

        Speaker: Mr Tulio Beloqui (Pexip)
      • 16:30
        Fluster news 20m

        This talk will present new features, important changes and bugfixes that are part of the Fluster releases 0.2.x-0.5.x, emphasizing two very useful new features:

        • A new pixel comparison method that allows tolerance with the reference decoder.
        • Addition of profile information and reports.

        We would also like to present some numbers, e.g. the increase in the number of available decoders, test suites, test vectors, and possibly other metrics.

        Speaker: Rubén Gonzalez
    • 16:55 17:00
      Closing Session
    • 09:30 17:00
      GStreamer Hackfest @ Amazon LHR16

      Amazon LHR16, 1 Principal Place, London EC2A 2FA
    • 09:30 17:30
      GStreamer Hackfest @ Amazon LHR16

      Amazon LHR16, 1 Principal Place, London EC2A 2FA