GStreamer Conference 2023



Muelle de Trasatlánticos, s/n, 15003 A Coruña, Spain

September 25-26, 2023 | A Coruña, Spain

The conference schedule is shown in the UTC+1 timezone.


The conference will take place at Palexco, a conference center in the city center of A Coruña, Spain.

Palexco building


Call for Papers

The Call for Papers is now closed for talk proposals, but lightning talks can still be submitted up to the day of the conference.

You can also follow @GStreamer on Twitter or follow @GStreamer on Mastodon for updates.

    • 20:00 23:00
      Social events: Welcome Drinks and/or Food

      The Breen's Tavern

      Pr. de María Pita, 24, 15001 A Coruña, Spain
    • 08:00 09:25
    • 09:25 09:28
      Opening session
    • 09:30 13:20
      Room 1


      • 09:30
        GStreamer State of the Union 45m

        This talk will take the usual bird's eye look at what's been happening in and around GStreamer in the last release cycle(s) and look forward at what's next in the pipeline.

        Speaker: Tim-Philipp Müller (Centricular)
      • 10:20
        Pexip+GStreamer 40m

        An update on how Pexip continues to use GStreamer.
        We will talk about some of our more interesting recent patches, spanning topics such as RTP, networking, TWCC, SCTP, iOS/Android, RTMP and audio.

        Speaker: Håvard Graff (Pexip)
      • 11:05
        Coffee break 10m
      • 11:20
        Adding W3C Media Source Extensions and Encrypted Media Extensions to GStreamer 40m

        There are several existing media player frameworks based on client-side web technology that rely on Media Source Extensions (MSE) API within web browsers. A new GStreamer library has implemented the MSE API in GObject C to make it possible for these players to run on top of GStreamer without depending on a web browser library. Separately, since there is currently no complete solution within GStreamer to support the playback of DRM-protected media, a new GStreamer API was designed which maps closely to the Encrypted Media Extensions (EME) specification. Usage of the GStreamer MSE and EME APIs may be combined by applications, though both APIs are designed to function independently.

        This presentation will discuss the WebKit origins of the MSE library, its design, and the differences between the original implementation and the GStreamer library. The presentation will also provide an overview of the GStreamer EME API design from two perspectives: one of a developer writing an application designed to play protected media, and the second of a developer making a content decryption module (CDM) available to GStreamer. Finally, an end-to-end solution will be shown of a GStreamer application using the EME API to play encrypted content using a commercially available CDM.

        Speaker: Jordan Yelloz
      • 12:05
        GstWebRTC in WebKit, current status and challenges 30m

        The WebKit WPE and GTK ports are aiming to use GstWebRTC/webrtcbin as their WebRTC backend, as an alternative to LibWebRTC. During this talk we will present the current integration status of GstWebRTC in WebKit. Several pipelines are involved, even in a basic p2p video call. We will dive into the guts of a video call, from media capture handling to streaming, including the handling of incoming audio/video/data tracks and final rendering with <video>, Canvas or even WebAudio.

        Speaker: Philippe Normand (Igalia)
      • 12:40
        Lessons learnt in new playback and adaptive components 40m

        Over the past 2 years, a new set of "adaptive demuxers" (to support HLS, DASH, MSS) has appeared, along with an in-depth refactoring of the new playback elements (playbin3, decodebin3, ...).

        During this talk, we will go over how those new features came to be, and how they have a profound impact on the resulting "Quality of Experience" for playback use-cases in GStreamer.

        Speaker: Edward Hervey (Centricular Ltd)
    • 10:20 13:20
      Room 2


      • 10:20
        HLS/LL-HLS and DASH playback 40m

        GStreamer 1.22 saw the introduction of a new plugin for adaptive playback of HLS, DASH and MSS streams. The new elements take a substantially different approach to playing those types of streams, with better buffering and bitrate selection as well as features like LL-HLS playback.

        This talk will explain how these elements improve upon the older adaptive demuxers and how they work.
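
        Playing such a stream with the new elements is transparent to applications; as a rough sketch (the manifest URL below is a placeholder, not from the talk), the new hlsdemux2 is autoplugged when playbin3 is pointed at an HLS manifest:

        ```shell
        # Play an HLS stream with the modern playback stack; for HLS URIs,
        # playbin3/decodebin3 autoplug the new hlsdemux2 element
        # (the manifest URL below is a placeholder).
        gst-launch-1.0 playbin3 uri=https://example.com/live/master.m3u8
        ```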

        Speaker: Jan Schmidt (Centricular Ltd)
      • 11:05
        Coffee break 10m
      • 11:20
        High-level WebRTC APIs in GStreamer 40m

        After an overview of the basic components needed to establish a WebRTC connection, this talk will present how GStreamer is providing user-friendly solutions to handle bi-directional communications with the webrtcsink and webrtcsrc elements. It will also present how those elements can communicate transparently with web browsers using the gstwebrtc javascript API.
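
        As a hint of how high-level these elements are, a one-line sender can stream a test pattern to browsers via a signalling server (a sketch; the signaller URI is a placeholder and webrtcsink from gst-plugins-rs must be installed):

        ```shell
        # Publish a test pattern through webrtcsink; encoding, negotiation
        # and congestion control are handled inside the element
        # (the signalling server URI below is a placeholder).
        gst-launch-1.0 videotestsrc ! videoconvert ! webrtcsink signaller::uri="ws://127.0.0.1:8443"
        ```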

        Speakers: Thibault Saunier (Igalia), Mathieu Duponchelle (Centricular), Loïc Le Page (Igalia)
      • 12:05
        NVIDIA Deepstream and GStreamer for AI workloads 30m

        GStreamer is the ideal platform for executing AI workloads and with NVIDIA's Deepstream toolkit, developers can execute complex multimedia pipelines with state-of-the-art analytics for audio and video. This session discusses the basics with tricks and techniques to take full advantage of this framework.

        Speaker: Brad Greenway
      • 12:40
        Deep Upstream: Hardware Agnostic GStreamer Analytics 40m

        With the growing power of machine-learning, the time has come for GStreamer to support complex, platform-independent analytic pipelines for tracking, super-resolution, noise filtering, speech recognition and more general analysis of timed data streams. We discuss a new flexible and efficient design to address these problems, without vendor or framework lock-in, which can easily interoperate with existing downstream approaches.

        To achieve this goal, we have designed new framework-independent graph-based infrastructure using the existing GstMeta structure to store complex metadata and their relationships. We have also generalized the existing ONNX-based object detector to easily support many new inference models targeting a variety of hardware backends, and have built a new OSD to visualize the generated analytics metadata. Care has been taken to ensure efficient pipelines with support for batch processing and zero-copy. Finally, we have built a bridge to non-GStreamer land with a new cloud metadata sink that can send analytics results to cloud servers.

        We will also present a demo at the end of the talk showcasing a complex two-phase video analysis pipeline.

        Speaker: Daniel Morin (Collabora Inc.)
    • 13:25 14:35
      Lunch 1h 10m

      At the venue

    • 14:40 17:21
      Room 1


      • 14:40
        ICE: How to find your way through the internet 20m

        The internet is a vast place full of different hardware and software routing packets to the correct device. Connecting a client to a server is easy; however, connecting a peer to another peer is not as easy because, more often than not, an address may be shared between many devices and needs to be translated. ICE is a standard for figuring out how (and if) a connection can be established with a peer. This talk will focus on the use of ICE in a WebRTC context.

        Speaker: Matthew Waters (Centricular)
      • 15:05
        Adding Rust to a C++ GStreamer WebRTC application 20m

        In this talk we dive into the development of a specific feature of our existing C++ WebRTC application based on GStreamer. The solution we chose involved setting up a GStreamer pipeline to send video over the WebRTC data channel in fMP4 format, produced by the isofmp4mux element from gst-plugins-rs. Inspired by the use of the GStreamer Rust bindings we found in the isofmp4mux code, we decided to try out Rust for developing our new feature. In the talk we give an overview of how we integrated the Rust code into our existing application and share our experiences from the journey.

        Speaker: Johan Sternerup
      • 15:30
        Applications of GStreamer in Surgical Devices 20m

        At Carl Zeiss Meditec AG we use GStreamer in several of our products. We will give an introduction of the video capabilities of our current devices and highlight the requirements which are particularly important for our customers.

        The implementations of some features require more involved solutions on the GStreamer level. In the second part of the talk we will focus on these topics. This includes:

        • strategies for robust and performant recording of large video data
        • dynamic adaptations of running pipelines to different use cases
        • generating video overlays from Qt QML scenes
        • optimizations to reduce the consumption of memory bandwidth
        Speaker: Matthias Fuchs (ZEISS Group)
      • 15:55
        MSE and EME on GStreamer based WebKit ports 20m

        Media Source and Encrypted Media Extensions are W3C JavaScript APIs for multimedia playback on the web. They are widely used on the websites and apps of different streaming platforms for content delivery. In this talk we will explain the architecture and status of the implementation of those two APIs in the GStreamer based ports for WebKit (mainly WPE and GTK).

        Speaker: Xabier Rodríguez Calvar (Igalia)
      • 16:20
        RidgeRun / Texas Instruments: Edge AI GStreamer Plugins 20m

        RidgeRun, sponsored by Texas Instruments (TI), has created more than 20 GStreamer Open Source elements focused on getting the most out of the Jacinto and Sitara ARM based System on Chip, leveraging GStreamer potential to create high performance, zero-copy, and user-oriented applications for object detection, image classification, semantic segmentation, optical flow analysis, single-input / multi-input custom inference pipelines and many more multimedia related applications.

        RidgeRun would like to present the open source elements: what they do, how they interact with each other, important design considerations, challenges, performance, and the applications created. This talk will highlight the process of adapting GStreamer to the embedded world and how it eases and extends multimedia application development, considering the performance budget while getting the most out of the platform, in the rapidly growing edge AI industry.

        Speaker: Mr Marco Herrera-Valverde (RidgeRun)
      • 16:45
        Improved RTSP connection latency for live streams 20m

        The GStreamer rtsp-server has provided configurations that ensured low connection latency, but it was not possible to also receive up-to-date, decodable data.

        By keeping an RTSP pipeline in the playing state after the initial DESCRIBE request, the connection latency can be reduced. By conditionally forcing keyframes, immediately decodable frames can be delivered. This can be achieved by manipulating the pipeline with pad probes, a useful skill to master.

        Speaker: Jacob Johnsson (Axis Communications)
      • 17:10
        Coffee break 11m
    • 14:40 17:20
      Room 2


      • 14:40
        GstPluginPylon: A Study of Dynamic Element Properties in Basler Cameras 20m

        In this talk, RidgeRun shares techniques used to develop GstPluginPylon, an open-source project with a source element that adapts its properties and behaviors based on the specific camera models connected to the system. By utilizing introspection, child proxy, advanced GObject, and other APIs, the pylonsrc element can discover devices, probe their capabilities, and expose them as GObject properties at runtime. Attendees will learn about the challenges and benefits of using these designs, gaining insights that may be applicable to their own projects.

        Speaker: Mr Miguel Taylor-Lopez (RidgeRun)
      • 15:05
        Closed Captions: What GStreamer Can Do 20m

        Closed captions are an auxiliary stream that may contain subtitles and other informational data alongside the video frames they are associated with. This talk will focus mostly on subtitling with both CTA-608 and CTA-708 data, and look at some of the things that GStreamer can do with closed captions.

        Speaker: Matthew Waters (Centricular)
      • 15:30
        Transcription, Translation, Closed Captions 20m

        A brief overview of a few new elements for use in a speech to closed captions pipeline, the challenges with respect to latency and synchronization, and aspects that could still be improved.

        Speaker: Mathieu Duponchelle (Centricular)
      • 15:55
        Updates in GStreamer VA-API (and DMABuf negotiation) 20m

        News for GStreamer VA-API

        • Present GstVA and its library
        • Talk about the deprecation of GStreamer-VAAPI
        • Present the new DMABuf negotiation mechanism
        • Discuss the deprecation of the presentation layer on libva
        • Share the new VA driver for Windows (D3D11 bridge)
        • Talk about encoders and the proposal of a new base class for hwaccel encoders
        • Discuss with the community about future va elements
        Speaker: Victor Manuel Jáquez Leal (Igalia)
      • 16:20
        DgiStreamer: a GStreamer Pipeline Editor 20m

        GStreamer is a powerful multimedia framework allowing users to build all possible types of media pipelines. However, complex pipeline development can be challenging to follow and debug. To address this, we propose DgiStreamer, a one-stop solution with a graph-based UI and connection type validation. DgiStreamer simplifies pipeline development, visualizes the flow, and ensures pad type compatibility. It empowers developers to focus on their project, reducing development and debugging time.

        Speaker: Mattia Angelini (Cyens CoE)
      • 16:45
        GStreamer and VSCode: a love story 20m

        Emacs, Vim or ... VS Code? Launched publicly in 2016, Visual Studio Code has quickly become the preferred IDE among professional developers, with a 74% share among all IDEs according to StackOverflow's 2023 annual survey.

        During this year, several efforts have been made in Meson, its VS Code plugin and GStreamer itself, resulting in a seamless user experience for hacking on GStreamer with VS Code, whether in the C/C++ libraries and plugins or the Rust plugins. The VS Code integration with Meson provides IntelliSense support, unit test integration, debugging and much more, allowing you to build and debug GStreamer with a single click of a button, even on Windows.

        In this talk, we will start with a quick introduction to how Meson's Visual Studio Code integration works and a summary of all the efforts made to reach the current stage. The talk will continue by explaining how to set up and configure VS Code to work on GStreamer for C/C++ and Rust, and how to use the different integration features for development, testing and debugging. We will finish the talk with an example of its use in a demo application.

        Speaker: Andoni Morales Alastruey (Fluendo)
      • 17:10
        Coffee break 10m
    • 17:30 19:00
      Lightning talks
      • 17:30
        WebCodecs in WebKit, with GStreamer! 5m

        In this lightning talk we will showcase the current support for the W3C WebCodecs spec, with GStreamer, in WebKit WPE and GTK ports!

        Speaker: Philippe Normand (Igalia)
      • 17:35
        Update of Four Years of V4L2 Support 5m

        Once again, I would like to share the great work and huge progress done in the V4L2 GStreamer plugin. Four years since the last update is a long time, and there is just as much to say about the progress made with Linux CODECs and all the new hardware being supported.

        Speaker: Nicolas Dufresne (Collabora)
      • 17:40
        GstPipelineStudio version 0.3.0 is out! 5m

        A quick overview of the new 0.3.0 release of GstPipelineStudio and the upcoming features.

        Speaker: Mr Stéphane Cerveau (Igalia)
      • 17:50
        GStreamer plugin in Rust on webOS OSE 5m

        LGE has a software platform called webOS, which is web-centric and usability-focused; webOS can be experienced mostly in televisions made by LGE.

        Now, to expand webOS to other devices, a new plugin is considered to contain various SoC vendor's plugins such as decoder and sink.

        We've tried to implement this plugin on webOS OSE (Open Source Edition) with Rust, which is an emerging language even in GStreamer, to check feasibility for the future of webOS.

        Speaker: Seungwook Cha (LG Electronics)
      • 17:55
        GStreamer Daemon Project Update 5m

        In this brief lightning talk, RidgeRun highlights the latest features added to the GStreamer Daemon open-source project. For those unfamiliar with Gstd, the talk will provide an overview of the project and its use cases. Meanwhile, attendees already acquainted with it will discover the capabilities and fixes in the newest version.

        Speaker: Mr Miguel Taylor-Lopez (RidgeRun)
      • 18:00
        Playing around with Artistic Style Transfer with GStreamer + NNStreamer 5m


        • Who am I? Working at LGE as a software engineer in charge of maintaining and developing the media framework in webOS TV.
        • Artistic Style Transfer with GStreamer: what is Artistic Style Transfer? The GStreamer pipeline for it, and results on webOS TV.
        • Examples of possible applications: creating fun profiles? Screen savers?
        • Future Work / Q&A

        Speaker: Hosang Lee (LG Electronics)
      • 18:05
        3 milliseconds from diaphragm to diaphragm 5m

        Glass-to-glass latency is so passé, diaphragms are in this season.

        Speaker: Nirbheek Chauhan (Centricular Ltd)
      • 18:15
        libcamerasrc: Sensor configuration and enumeration 5m

        The talk will focus on sensor configuration, from the application's point of view down to the kernel. It starts with a general introduction to the ways of configuring a sensor in libcamerasrc and how much information to expose in the API itself, so as to not overwhelm the user while still giving enough flexibility for fine-grained configuration.

        Speaker: Umang Jain (IdeasOnBoard)
      • 18:20
        Behind the GStreamer Conferences video archive 5m

        This lightning talk will present the GStreamer Conferences video archive portal, some usage statistics, and where GStreamer is used in the process.

        Speaker: Florent Thiéry (UbiCast)
      • 18:25
        GStreamer WebRTC: The Quadrennium Update 5m

        A quick look into what has been happening in the world of GStreamer WebRTC over the past quadrennium.

        Speaker: Matthew Waters (Centricular)
    • 20:35 23:35
      Social events: Dinner

      Tira do Playa

      Andén de Riazor, s/n, 15011 A Coruña, Spain
    • 09:30 13:00
      Room 1


      • 09:30
        More Efficient Streaming using Linux DRM Modifiers 30m

        Have you ever heard of frame tiling or frame buffer compression? These, usually hardware-specific, formats are commonly used behind the scenes to make your CODEC and GPU hardware run a lot faster. Until now, we have always tried to hide these formats from users. This would always lead to limitations and surprising side effects when the information was lost. Mis-negotiating these formats has resulted in many visual corruption issues with our original VA API decoders.

        In this talk, we will discuss how Linux DRM modifiers are fixing these issues. You will learn about the new negotiation method and the tooling that has been developed to make this possible. You will learn how these formats can be used and applied to DMABuf exchanges between various Linux components like cameras, VA and V4L2 CODECs, the GL stack, and of course Wayland and the Linux display drivers.

        Speaker: Nicolas Dufresne (Collabora)
      • 10:05
        Server-side Media Processing with GStreamer 30m

        We all know that GStreamer is a (relatively) popular framework for processing media on consumer devices, be it desktops, robots, cars, phones, and so on. However, when we talk about server-side media processing, arguably GStreamer is still not quite as popular as it could be.

        Daily is a video/voice calling platform-as-a-service that offers a browser- and libwebrtc-based SDK for making calls using WebRTC. The service includes a number of features that involve media processing in the backend, such as recording, live streaming, transcription, media ingestion, and SIP interoperability. All these services have been built using GStreamer.

        In this talk, we will walk through the overall architecture of these services, and some interesting problems we came across while implementing them. We will then reflect on what we’ve learned from using GStreamer in these scenarios and how we might improve the experience for others who might want to tread this path.

        Speaker: Arun Raghavan (Asymptotic Inc.)
      • 10:40
        WebRTC in Axis cameras and in the surveillance industry 30m

        In the surveillance industry, more and more cameras are becoming cloud-connected, and new streaming solutions are needed - like WebRTC. It's great for low-latency live streaming from cameras to web browsers, but Axis also use it for things like controlling camera movement and to play recorded video.

        Speaker: Jonas Cremon (Axis Communications AB)
      • 11:15
        Coffee break 10m
      • 11:30
        Video Editing with GStreamer: an update 30m

        This talk will present the work that has been done in several parts of GStreamer to make non-linear editing simpler and more efficient.

        We will also discuss what is next and the long-term vision of GES.

        Speaker: Thibault Saunier (Igalia)
      • 12:05
        Developing Low latency Video Telephony solutions using GStreamer 30m

        GStreamer provides a powerful and flexible way to develop streaming media applications for use cases such as video telephony, live audio/video streaming and video conferencing, by supporting plugins which utilize both the hardware-accelerated media components present in the SoC and software-based processing entities.

        To ensure a good overall user experience in such streaming media applications, there are various quality factors to address at both the user-space and kernel-space level, such as maintaining audio/video sync, latency optimization and performance fine-tuning, detecting and avoiding video frame skips, audio distortion and clipping, maintaining audio/video quality, and error recovery. The talk will go through the above design considerations and also cover how to prototype, debug and optimize such a low-latency audio/video streaming application.

        Taking a TI K3 based SoC as a reference example for prototyping the video telephony use-case, the talk will go through the basic building blocks of the video telephony use-case covering relevant concepts involved for each component from GStreamer and underlying Linux kernel frameworks perspective.

        Lastly it will cover tools & techniques for testing, debugging, stabilizing and optimizing such solutions.
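
        To make the discussion concrete, a minimal low-latency sender of the kind described might look like the following sketch (the encoder choice, receiver host and port are illustrative assumptions, not taken from the talk):

        ```shell
        # Live H.264 sender tuned for low latency: zero-latency x264 settings,
        # RTP payloading and plain UDP transport (receiver host/port are placeholders).
        gst-launch-1.0 -v videotestsrc is-live=true \
            ! videoconvert \
            ! x264enc tune=zerolatency speed-preset=ultrafast key-int-max=30 \
            ! rtph264pay config-interval=1 pt=96 \
            ! udpsink host=192.168.1.10 port=5000
        ```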

        Speaker: Devarsh Thakkar
      • 12:40
        Vulkan Video in GStreamer 20m

        A talk about the long road of adding the Vulkan Video extensions (decoding and encoding) to GStreamer.

        Speakers: Victor Manuel Jáquez Leal (Igalia), Stéphane Cerveau (Igalia)
    • 09:30 13:00
      Room 2


      • 09:30
        Bridging WebRTC and SIP using GStreamer & SIPjs 30m

        WebRTC has become ubiquitous as the technology powering various forms of video experiences online. While Session Initiation Protocol (SIP) predated WebRTC by 12 years, and had become the predominant protocol used to set up real-time media sessions between groups of users, WebRTC looked to add real-time media i.e. audio, video to every web browser without the need of a separate soft phone client.

        While WebRTC has become the de-facto standard for real time communication on the Internet, SIP still sees use in some scenarios, such as bridging to phone networks (PSTN) and physical conferencing equipment.

        In this talk, we discuss how we went about connecting WebRTC and SIP systems using GStreamer and SIP.js.

        Speaker: Sanchayan Maity
      • 10:05
        WirePlumber 0.5: Making PipeWire policies easier 30m

        WirePlumber is the default session manager of PipeWire, the powerful multimedia IPC framework that has become the standard for low-latency audio, Bluetooth audio, video capture and many more use cases on modern Linux systems. WirePlumber 0.4 featured a Lua scripting mechanism that was meant to make it easy to write custom policies, but in practice it turned out to be cumbersome. In the upcoming 0.5 release, WirePlumber is seeing fundamental changes to this mechanism that redefine the entire development experience. This talk will take a closer look at these changes and also discuss other interesting upcoming features.

        Speaker: Julian Bouzas (Collabora)
      • 10:40
        Fluster: A framework for multimedia decoder conformance testing 30m

        Fluster is an open-source, OS-independent testing framework written in Python for multimedia decoder conformance. Its purpose is to check various implementations against reference test suites with known and proven results. The decoders can be standalone executables, as well as GStreamer- or FFmpeg-based. The tool was originally designed to check the conformance of H.265/HEVC decoders; nowadays it also supports H.264/AVC, H.266/VVC, VP8, VP9, AV1 and AAC.

        Fluster is composed of a CLI application that runs a number of test suites with the supported decoders and compares the checksums of the resulting outputs against reference ones. Its modular design makes it easy to extend its functionality and to add more decoders and test suites.

        In this talk we will provide an overview of Fluster and its functionalities covering the following topics:

        • Introduction to Fluster
        • Running Fluster and analyzing the output
        • Extending Fluster: adding a new decoder or a new test suite
        • Next steps: PyPI, fluster package, WebCodecs, GStreamer CI integration
        • Open questions to discuss with the Fluster community


        Speakers: Rubén Gonzalez, Michalis Dimopoulos
      • 11:15
        Coffee break 10m
      • 11:30
        New MPEG-5 part 2 (LCEVC) plugin for GStreamer 30m

        MPEG-5 Part 2 LCEVC (Low Complexity Enhancement Video Coding) is the latest standard by MPEG and ISO. However, it is different from typical codecs: instead, it acts as a layer on top of existing codecs to improve their compression efficiency (better quality at lower bitrates) and reduce transcoding compute requirements. The LCEVC data is carried along with metadata in the actual video stream (e.g. in SEI messages for H.264). By complementing other codecs rather than competing with them, it circumvents the codec wars, and is changing the video processing landscape we know.

        In this talk, from Collabora and in collaboration with V-Nova who were the primary originators of the standard, Julian will describe how LCEVC was implemented in GStreamer, and the challenges he faced when integrating such an enhancement codec while keeping the GStreamer modularity and flexibility intact. He will also describe why new LCEVC caps were introduced for autoplugging elements to work, and how the LCEVC enhancement data is passed through the base decoder using a new type of GstMeta. When concluding the talk, Julian will also talk about the future plans of the new LCEVC plugin for GStreamer, and will show a demo of a working GStreamer pipeline decoding LCEVC video.

        Speaker: Julian Bouzas (Collabora)
      • 12:05
        Patching a 3rd-party plugin in runtime 30m

        This talk is about an interesting aspect of GObject that allows us to use an already registered GStreamer element as a base class at runtime, and thereby register a new GStreamer element containing certain modifications or interceptions applied to the original one.
        In particular, this can be interesting because it allows us to intercept the behaviour of an element whose code we can't or don't want to modify.

        The technical side of the idea is quite simple; you can find the explanation in a short code example here:

        The possible use cases are not that clear, so we will present the few we can imagine, but also let the listeners think about whether they can propose more.

        The talk could be scheduled as:

        • Brief introduction to how GType registration works, and how GStreamer uses it (10 min)
        • Walk through the code example (10 min)
        • Possible use cases (5 min)
        • Questions (5 min)

        Speaker: Mr Alexander Slobodeniuk (Software developer)
      • 12:40
        ONVIF metadata streams in Network security cameras 20m

        Axis network surveillance cameras provide network streams for video, audio and a multiplexed stream of various auxiliary data such as video analytics, events and much more. This metadata can be used to optimize and enhance many surveillance use cases, such as detecting motion, combining video streams with radar, licence plate recognition and much more. The streams are available over RTSP, which is also part of the ONVIF standard. This talk will cover how GStreamer is used to implement APIs that deliver video, audio and metadata together.
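
        A hedged sketch of what consuming such a stream can look like with stock GStreamer elements (the camera URI and exact caps are assumptions and depend on the device):

        ```shell
        # Connect to the camera over RTSP and hexdump the ONVIF metadata
        # stream (XML carried over RTP); filtering on the "application"
        # media selects the metadata track (the camera URI is a placeholder).
        gst-launch-1.0 rtspsrc location=rtsp://camera.local/onvif-media/media.amp \
            ! "application/x-rtp,media=application" ! fakesink dump=true
        ```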

        Speakers: Linus Svensson (Axis Communications AB) , Johan Bjäreholt (Axis Communications AB)
    • 13:05 14:25
      Lunch 1h 20m

      At the venue

    • 14:30 18:25
      Room 1


      • 14:30
        Four Years of Cross-platform Improvements 40m

        It's been four years since we last heard about improvements to the platform-specific support in GStreamer, such as Windows, macOS, and mobile. I'm here to talk about that on behalf of the authors that should be bragging about their great work!

        Speaker: Nirbheek Chauhan (Centricular Ltd)
      • 15:15
        Splitting GStreamer pipelines 40m

        The GStreamer pipeline is the top-level concept that encapsulates all the elements of a data processing flow. Or is it? There are good reasons why one might want to split the processing of data up into different pipelines, such as creating logical components, or preventing errors from affecting other processing.

        Over the years, there have been many different approaches to the problem - leading to a bevy of elements for creating connection tunnels between pipelines.

        This talk will discuss the available elements, what they each bring to the table and which ones you might want to use in which situations.

        Speaker: Jan Schmidt (Centricular Ltd)
      • 16:00
        Writing GStreamer applications with C# 30m

        .NET is a popular open-source, cross-platform framework for building many types of applications for the web, mobile, desktop, IoT or servers. It supports several programming languages, C# being the most popular one.
        With the right integration, GStreamer could become the reference framework for multimedia applications in .NET, bringing new users to our community.

        Over the last year, the C# bindings have received some love after years of being unmaintained, with several bug fixes, an update to the latest GStreamer release, support for .NET, and NuGet packages.

        In this talk, we will present the current status of the C# bindings and how to use them to write GStreamer applications covering the following topics:

        • Status of the C# bindings
        • NuGet packages
        • Using the GStreamer bindings
        • UI toolkits integration (MAUI, Avalonia, Uno)
        • Tips and Tricks
        • Future work
        Speaker: Andoni Morales Alastruey (Fluendo)
      • 16:35
        Coffee break 10m
      • 16:50
        libcamerasrc: Introduction and usage of libcamera's GStreamer element 30m

        libcamera is an open-source camera stack and framework for Linux, Android, and ChromeOS. This talk will focus on libcamerasrc, libcamera's GStreamer element, and how it can be used and configured in order to run a functioning GStreamer pipeline.

        The goal of this talk is to introduce libcamerasrc, configure the camera and set the supported controls. The talk will also provide an overview of what libcamerasrc supports today and prospects for future development.
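        As a taste of the usage the talk will cover, a minimal capture pipeline might look like the following sketch. Element availability and supported caps depend on your libcamera build and hardware, and the camera-name value shown is purely illustrative:

        ```shell
        # Preview the first camera detected by libcamera (hardware-dependent sketch)
        gst-launch-1.0 libcamerasrc ! videoconvert ! autovideosink

        # Select a specific camera and request a format
        # (the camera-name path below is an illustrative example)
        gst-launch-1.0 libcamerasrc camera-name="/base/soc/i2c0mux/i2c@1/imx219@10" ! \
            video/x-raw,width=1920,height=1080 ! videoconvert ! autovideosink
        ```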

        Speaker: Umang Jain (IdeasOnBoard)
      • 17:25
        Variations on a WebRTC relay architecture (featuring Janus and WebRTC{Src,Sink}) 20m

        Serving multimedia streams to multiple consumers often requires using a relay server. For instance, the producer might operate under constrained resources and/or behind a limited-bandwidth connection.

        This talk presents two variations on a WebRTC relay server: one using Janus WebRTC server and the other based on WebRTCSrc & WebRTCSink.

        Speaker: François Laignel (Centricular ltd)
      • 17:50
        GstWASM: GStreamer for the web 30m

        An initial overview of the challenges of bringing GStreamer to the web through Emscripten and WASM, and the particularities of building, testing, and running GStreamer in a Node.js environment or in the browser.

        A detailed description of what was needed to get a GStreamer pipeline running in the browser, and the changes required to eventually land this work in mainline.

        Speaker: Jorge Zapata
    • 14:30 18:25
      Room 2 Room 2

      Room 2


      • 14:30
        API translation layers: the benefits and challenges of using GStreamer in Wine 40m

        The Windows operating system, in its three or so decades of existence, has introduced at least four different APIs dedicated to multimedia encoding or decoding. Wine, as a Free Software replacement of Windows, implements most of those APIs using GStreamer as a backend.

        In this presentation I intend to talk about our experiences working with GStreamer as a backend, and the advantages and disadvantages we have found with it.

        I also intend to talk about some larger unsolved problems we have that are specific to the challenge of implementing another API using GStreamer. These include:

        • supporting zero-copy into application-provided buffers,

        • matching application expectations of synchronous decoding,

        • consistently retrieving stream metadata, especially optional stream metadata.

        In the talk I intend to propose some potential solutions to these problems, but more generally to raise them as questions for the GStreamer development community to think about.

        Speaker: Zeb Figura (CodeWeavers, Inc.)
      • 15:15
        Building your own USB camera with GStreamer 40m

        USB cameras are commonly used in desktops and laptops for streaming video or
        participating in video conferences. Thus, USB is more or less the standard for
        connecting a camera to a PC.

        Linux allows turning hardware that has a USB device controller (UDC), for example the
        Raspberry Pi 4, into a USB peripheral. The kernel provides a number of
        different USB gadgets to implement various USB device classes. One of them is
        the UVC (USB Video Class) gadget, which implements a USB camera.

        However, correctly configuring such a system and passing a video stream to the
        USB gadget is not that easy. Fortunately, the new uvcsink element allows you to
        easily stream an arbitrary GStreamer video pipeline into the UVC gadget and,
        thus, to any USB host system.

        Michael will show you how to prepare a system as a UVC gadget and stream video
        data to a UVC host using a simple GStreamer pipeline like "gst-launch-1.0
        videotestsrc ! uvcsink". He will also give some insight into the implementation
        details of the uvcsink element.
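        For readers who want to experiment before the talk, a rough sketch of the gadget preparation looks like the following. The configfs paths follow the kernel's composite-gadget conventions, but the vendor/product IDs and strings are placeholders, the UVC frame-format configuration is elided, and the UDC binding step depends on your board:

        ```shell
        # Create a USB composite gadget with a UVC function via configfs (run as root)
        cd /sys/kernel/config/usb_gadget
        mkdir -p g1 && cd g1
        echo 0x1d6b > idVendor           # example IDs only
        echo 0x0104 > idProduct
        mkdir -p strings/0x409
        echo "GStreamer UVC camera" > strings/0x409/product
        mkdir -p configs/c.1 functions/uvc.0
        # ... configure the supported frame formats under functions/uvc.0/streaming/ ...
        ln -s functions/uvc.0 configs/c.1/
        ls /sys/class/udc > UDC          # bind to the board's UDC to enumerate

        # Then feed the gadget from GStreamer, as in the talk's example
        gst-launch-1.0 videotestsrc ! uvcsink
        ```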

        Speaker: Michael Grzeschik
      • 16:00
        The evolution of HTTP based signalling for WebRTC in GStreamer 30m

        The adoption of WebRTC in the broadcasting/streaming industry has been hindered by the lack of a standard, plug-and-play signalling model. With the introduction of the WHIP and WHEP specifications that is changing, and their acceptance is evident from the fact that all major open-source multimedia software is implementing them.

        GStreamer has had client-side WHIP/WHEP implementations (whipsink and whepsrc), written in Rust, since release 1.22, and the server-side implementations are in progress.

        With WebRTCSink and WebRTCSrc written to support any signalling protocol through an interface separated from the sink/src functionality, it has become easy to write all the client- and server-side implementations of WHIP/WHEP on top of WebRTCSink/Src. This also makes it possible to leverage support for both raw and encoded streams, the congestion-control mechanism, and every other improvement that will be added to WebRTCSink/Src in the future.

        My talk will be an introduction to the WHIP/WHEP protocols, to the initial versions of the elements implemented in GStreamer using Rust, and to how they are evolving with the Signaller-based design of the GStreamer WebRTC Rust plugins.
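        As a hedged illustration of the client side, publishing a test stream with whipsink might look like the sketch below. The endpoint URL is a placeholder, and the exact pad and property names are those of the Rust whipsink element at the time of writing:

        ```shell
        # Encode a test stream as RTP/VP8 and publish it to a WHIP endpoint
        # (the whip-endpoint URL is a placeholder)
        gst-launch-1.0 videotestsrc ! videoconvert ! vp8enc deadline=1 ! rtpvp8pay ! \
            'application/x-rtp,media=video,encoding-name=VP8,payload=96,clock-rate=90000' ! \
            whip.sink_0 whipsink name=whip whip-endpoint="https://example.com/whip/endpoint"
        ```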

        Speaker: Mr Taruntej Kanakamalla
      • 16:35
        Coffee break 10m
      • 16:50
        How we are building a distributed multi-camera real-time sports tracking system using GStreamer and Rust 30m

        At Spiideo we offer automated sports video solutions to our customers for recording, analysis and broadcasting. We use a multi-camera setup to create a stitched panoramic video with an AI assisted cameraman.

        This talk will showcase how we moved from a segment based system with a glass-to-glass latency of almost two minutes to a frame-based system with a latency of around three seconds.

        In those three seconds we need to perform stitching of multiple 4K streams, detect objects in each camera stream and predict where to aim the virtual camera to follow the action on the pitch. And we do all of this across multiple instances in the cloud.

        This was Spiideo’s first real use of GStreamer and we will talk about what we struggled with, what helped us (a lot) and what we still do not really (really) understand.

        Speakers: Robin Gustavsson (Spiideo) , Daniel Pendse (Spiideo)
      • 17:25
        HYPE: HYbrid Parallel Encoder 20m

        Modern computers tend to have multiple GPUs or a single GPU with several encoding cores, but an encoder element can only use one of these cores. Parallelizing the encoding across multiple cores can theoretically increase transcoding speeds linearly, resulting in 2x transcoding speed for VoD on a GPU with two encoding cores.

        This talk will present HyPE, an Open Source GStreamer meta-encoder written in Rust that can parallelize the encoding process across several encoding cores to take advantage of all the available hardware resources.

        Its codec-agnostic design allows for seamless integration with a diverse range of codecs, making it a versatile choice for a wide variety of applications. It is also hardware-agnostic, providing compatibility with various systems, including NVIDIA, AMD, Intel, and ARM architectures.

        We will showcase the design of this plugin, present the achieved results, and examine the limitations of this element, along with potential areas for enhancement in future iterations.

        Speaker: Rubén Gonzalez
      • 17:50
        Flumes: Scan and index your multimedia files 20m

        Flumes is an open-source service we developed at Fluendo to improve our QA process. It was designed with our multimedia playback/decoding products in mind. The main goals of the service are to provide easy access to multimedia files matching concrete specifications and a feeding mechanism for playback tools or test-automation frameworks. As such, it becomes the connecting link between multimedia test collections and testing tools.

        It consists of diverse technologies that allow managing, editing, viewing and searching the metadata of multimedia content. It is developed in Python 3, uses GLib and the gst-discoverer tool, and stores metadata in an SQLite database. The service runs as a daemon on Linux, constantly monitoring your collection's path to ensure that the metadata database stays up to date.

        Speaker: Michalis Dimopoulos
    • 18:25 18:30
      Closing session