The conference schedule is in UTC+1, unless you enable the "Use my timezone" setting in your user preferences and set your current timezone.
The conference will take place at Palexco, a conference center in the city center of A Coruña, Spain.
The Call for Papers is now closed for talk proposals, but lightning talks can still be submitted up to the day of the conference.
You can also follow @GStreamer on Twitter or Mastodon for updates.
This talk will take the usual bird's-eye look at what's been happening in and around GStreamer in the last release cycle(s), and look ahead at what's next in the pipeline.
An update on how Pexip continues to use GStreamer.
We will talk about some of our more interesting recent patches, spanning topics such as RTP, networking, TWCC, SCTP, iOS/Android, RTMP and audio.
There are several existing media player frameworks based on client-side web technology that rely on Media Source Extensions (MSE) API within web browsers. A new GStreamer library has implemented the MSE API in GObject C to make it possible for these players to run on top of GStreamer without depending on a web browser library. Separately, since there is currently no complete solution within GStreamer to support the playback of DRM-protected media, a new GStreamer API was designed which maps closely to the Encrypted Media Extensions (EME) specification. Usage of the GStreamer MSE and EME APIs may be combined by applications, though both APIs are designed to function independently.
This presentation will discuss the WebKit origins of the MSE library, its design, and the differences between the original implementation and the GStreamer library. The presentation will also provide an overview of the GStreamer EME API design from two perspectives: one of a developer writing an application designed to play protected media, and the second of a developer making a content decryption module (CDM) available to GStreamer. Finally, an end-to-end solution will be shown of a GStreamer application using the EME API to play encrypted content using a commercially available CDM.
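To give a rough feel for the shape of a GObject-mapped MSE API, here is a hypothetical sketch of an application appending an fMP4 segment to a source buffer. All identifiers below (the gst/mse/mse.h header, gst_media_source_new, gst_media_source_add_source_buffer, gst_source_buffer_append_buffer) are assumptions based on a direct GObject translation of the W3C MSE interfaces, not the confirmed library API:

```c
#include <gst/gst.h>
/* Hypothetical header name for the GStreamer MSE library. */
#include <gst/mse/mse.h>

/* Sketch only: feed an fMP4 media segment into an MSE source buffer,
 * mirroring what a JavaScript player does in a browser. The function
 * names here are assumptions, not the confirmed API. */
static void
append_segment (GstSourceBuffer * source_buffer, const guint8 * data, gsize size)
{
  GstBuffer *segment = gst_buffer_new_memdup (data, size);
  GError *error = NULL;

  if (!gst_source_buffer_append_buffer (source_buffer, segment, &error))
    g_printerr ("append failed: %s\n", error->message);
}

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  GError *error = NULL;
  GstMediaSource *media_source = gst_media_source_new ();
  GstSourceBuffer *video = gst_media_source_add_source_buffer (media_source,
      "video/mp4; codecs=\"avc1.64001f\"", &error);

  /* ...attach the media source to a playback pipeline and call
   * append_segment() as segments arrive over the network... */
  (void) video;
  return 0;
}
```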
The WebKit WPE and GTK ports are aiming to use GstWebRTC/webrtcbin as their WebRTC backend, as an alternative to LibWebRTC. During this talk we will present the current integration status of GstWebRTC in WebKit. Several pipelines are involved, even in a basic p2p video call. We will dive into the guts of a video call, from media capture handling to streaming, including the handling of incoming audio/video/data tracks and final rendering with <video>, Canvas or even WebAudio.
Over the past 2 years, a new set of "adaptive demuxers" (to support HLS, DASH and MSS) has appeared, along with an in-depth refactoring of the new playback elements (playbin3, decodebin3, ...).
During this talk, we will go over how those new features came to be, and how they have a profound impact on the resulting "Quality of Experience" for playback use-cases in GStreamer.
GStreamer 1.22 saw the introduction of a new plugin for adaptive playback of HLS, DASH and MSS streams. The new elements take a substantially different approach to playing those types of streams, with better buffering and bitrate selection as well as features like LL-HLS playback.
This talk will explain how these elements improve upon the older adaptive demuxers and how they work.
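As a point of reference, applications need nothing special to benefit from the new elements: playbin3 auto-plugs them when they are installed. A minimal sketch (the URI is a placeholder):

```c
#include <gst/gst.h>

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  /* playbin3 auto-plugs the new adaptive demuxers (hlsdemux2, dashdemux2,
   * mssdemux2) when gst-plugins-good 1.22+ is installed. */
  GstElement *playbin = gst_element_factory_make ("playbin3", NULL);
  g_object_set (playbin, "uri", "https://example.com/master.m3u8", NULL);

  gst_element_set_state (playbin, GST_STATE_PLAYING);

  /* Block until an error or end-of-stream. */
  GstBus *bus = gst_element_get_bus (playbin);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

  gst_message_unref (msg);
  gst_object_unref (bus);
  gst_element_set_state (playbin, GST_STATE_NULL);
  gst_object_unref (playbin);
  return 0;
}
```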
After an overview of the basic components needed to establish a WebRTC connection, this talk will present how GStreamer provides user-friendly solutions for bi-directional communication with the webrtcsink and webrtcsrc elements. It will also show how those elements can communicate transparently with web browsers using the gstwebrtc JavaScript API.
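For a flavour of that user-friendliness, a minimal producer sketch might look as follows, assuming the default signaller and a gst-webrtc-signalling-server running at its default ws://127.0.0.1:8443:

```c
#include <gst/gst.h>

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  GError *error = NULL;
  /* webrtcsink (from gst-plugins-rs) handles encoding, congestion control
   * and signalling; by default it registers as a producer with the
   * signalling server at ws://127.0.0.1:8443. */
  GstElement *pipeline = gst_parse_launch (
      "videotestsrc is-live=true ! videoconvert ! webrtcsink", &error);
  if (pipeline == NULL) {
    g_printerr ("Failed to build pipeline: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}
```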
GStreamer is an ideal platform for executing AI workloads, and with NVIDIA's DeepStream toolkit, developers can execute complex multimedia pipelines with state-of-the-art analytics for audio and video. This session covers the basics, along with tricks and techniques to take full advantage of this framework.
With the growing power of machine learning, the time has come for GStreamer to support complex, platform-independent analytics pipelines for tracking, super-resolution, noise filtering, speech recognition and more general analysis of timed data streams. We discuss a new flexible and efficient design that addresses these problems without vendor or framework lock-in and can easily interoperate with existing downstream approaches.
To achieve this goal, we have designed new framework-independent graph-based infrastructure using the existing GstMeta structure to store complex metadata and their relationships. We have also generalized the existing ONNX-based object detector to easily support many new inference models targeting a variety of hardware backends, and have built a new OSD to visualize the generated analytics metadata. Care has been taken to ensure efficient pipelines with support for batch processing and zero-copy. Finally, we have built a bridge to non-GStreamer land with a new cloud metadata sink that can send analytics results to cloud servers.
We will also present a demo at the end of the talk showcasing a complex two-phase video analysis pipeline.
The internet is a vast place full of different hardware and software routing packets to the correct device. Connecting a client to a server is easy; connecting a peer to another peer is not as easy, because more often than not an address is shared between many devices and needs to be translated. ICE is a standard for figuring out how (and if) a connection can be established with a peer. This talk will focus on the use of ICE in a WebRTC context.
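In GStreamer terms, ICE is mostly handled for you by webrtcbin: the application supplies a STUN (and optionally TURN) server and forwards the gathered candidates to the remote peer over its own signalling channel. A minimal sketch:

```c
#include <gst/gst.h>

/* Called for every local ICE candidate the agent gathers; a real
 * application forwards these to the remote peer over signalling. */
static void
on_ice_candidate (GstElement * webrtc, guint mline_index,
    const gchar * candidate, gpointer user_data)
{
  g_print ("local candidate (mline %u): %s\n", mline_index, candidate);
}

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  GstElement *pipeline = gst_pipeline_new (NULL);
  GstElement *webrtc = gst_element_factory_make ("webrtcbin", NULL);

  /* The STUN server lets the ICE agent discover its public (server
   * reflexive) address when behind a NAT; a TURN relay could be added
   * via the "turn-server" property for cases where no direct path
   * exists. */
  g_object_set (webrtc, "stun-server", "stun://stun.l.google.com:19302", NULL);
  g_signal_connect (webrtc, "on-ice-candidate",
      G_CALLBACK (on_ice_candidate), NULL);

  gst_bin_add (GST_BIN (pipeline), webrtc);
  /* ...link media, create an offer and run the usual WebRTC dance... */
  return 0;
}
```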
In this talk we dive into the development of a specific feature of our existing C++ WebRTC application based on GStreamer. The solution we chose involved setting up a GStreamer pipeline to send video over the WebRTC data channel in fMP4 format, produced by the isofmp4mux element from gst-plugins-rs. Inspired by the use of the GStreamer Rust bindings we found in the isofmp4mux code, we decided to try out Rust for developing our new feature. In the talk we give an overview of how we integrated the Rust code into our existing application and share our experiences from the journey.
At Carl Zeiss Meditec AG we use GStreamer in several of our products. We will give an introduction to the video capabilities of our current devices and highlight the requirements which are particularly important for our customers.
Implementing some features requires more involved solutions at the GStreamer level. In the second part of the talk we will focus on these topics. This includes:
Media Source and Encrypted Media Extensions are W3C JavaScript APIs for multimedia playback on the web. They are widely used on the websites and in the apps of different streaming platforms for content delivery. In this talk we will explain the architecture and status of the implementation of those two APIs in the GStreamer-based ports of WebKit (mainly WPE and GTK).
RidgeRun, sponsored by Texas Instruments (TI), has created more than 20 open-source GStreamer elements focused on getting the most out of the Jacinto and Sitara ARM-based systems on chip, leveraging GStreamer's potential to create high-performance, zero-copy, user-oriented applications for object detection, image classification, semantic segmentation, optical flow analysis, single- and multi-input custom inference pipelines, and many more multimedia-related applications.
RidgeRun would like to present these open-source elements: what they do, how they interact with each other, important design considerations, challenges, performance, and the applications created with them. The talk will highlight the process of adapting GStreamer to the embedded world and how it eases and extends multimedia application development in the rapidly growing edge-AI industry, staying within the performance budget while getting the most out of the platform.
The GStreamer RTSP server has provided configurations that ensured a low connection latency, but it was not possible to also receive up-to-date, decodable data.
By keeping an RTSP pipeline in the PLAYING state after the initial DESCRIBE request, the connection latency can be reduced. By conditionally forcing keyframes, immediately decodable frames become possible. This can be achieved by manipulating the pipeline with pad probes, a useful skill to master.
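As a sketch of the kind of manipulation involved (names and setup are illustrative, not the talk's actual code): a buffer probe can drop delta frames until the next keyframe, while an upstream force-key-unit event asks the encoder to produce that keyframe immediately.

```c
#include <gst/gst.h>
#include <gst/video/video.h>

/* Drop delta frames on this pad until a keyframe arrives, so a newly
 * attached client starts with decodable data. */
static GstPadProbeReturn
drop_until_keyframe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstBuffer *buffer = GST_PAD_PROBE_INFO_BUFFER (info);

  if (GST_BUFFER_FLAG_IS_SET (buffer, GST_BUFFER_FLAG_DELTA_UNIT))
    return GST_PAD_PROBE_DROP;

  /* Keyframe reached: remove the probe and let data flow normally. */
  return GST_PAD_PROBE_REMOVE;
}

/* Hypothetical hook run when a new client attaches; "encoder" and
 * "pad_after_encoder" are assumed references into the shared pipeline. */
static void
client_attached (GstElement * encoder, GstPad * pad_after_encoder)
{
  gst_pad_add_probe (pad_after_encoder, GST_PAD_PROBE_TYPE_BUFFER,
      drop_until_keyframe, NULL, NULL);

  /* Ask the encoder for an immediate keyframe, including stream headers
   * (e.g. SPS/PPS), so the wait above is short. */
  gst_element_send_event (encoder,
      gst_video_event_new_upstream_force_key_unit (GST_CLOCK_TIME_NONE,
          TRUE /* all-headers */ , 0 /* count */ ));
}
```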
In this talk, RidgeRun shares techniques used to develop GstPluginPylon, an open-source project with a source element that adapts its properties and behaviors based on the specific camera models connected to the system. By utilizing introspection, child proxy, advanced GObject, and other APIs, the pylonsrc element can discover devices, probe their capabilities, and expose them as GObject properties at runtime. Attendees will learn about the challenges and benefits of using these designs, gaining insights that may be applicable to their own projects.
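A small sketch of the introspection side: because the camera features surface as plain GObject properties, an application can enumerate them at runtime like for any other element (pylonsrc must be installed for this to run):

```c
#include <gst/gst.h>

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  /* pylonsrc exposes camera features as GObject properties discovered
   * at runtime; listing them works like for any element. */
  GstElement *src = gst_element_factory_make ("pylonsrc", NULL);
  if (src == NULL) {
    g_printerr ("pylonsrc not available\n");
    return 1;
  }

  guint n_props;
  GParamSpec **props =
      g_object_class_list_properties (G_OBJECT_GET_CLASS (src), &n_props);

  for (guint i = 0; i < n_props; i++) {
    const gchar *blurb = g_param_spec_get_blurb (props[i]);
    g_print ("%s: %s\n", props[i]->name, blurb ? blurb : "(no description)");
  }

  g_free (props);
  gst_object_unref (src);
  return 0;
}
```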
A brief overview of a few new elements for use in a speech to closed captions pipeline, the challenges with respect to latency and synchronization, and aspects that could still be improved.
News for GStreamer VA-API
GStreamer is a powerful multimedia framework allowing users to build all possible types of media pipelines. However, complex pipeline development can be challenging to follow and debug. To address this, we propose DgiStreamer, a one-stop solution with a graph-based UI and connection type validation. DgiStreamer simplifies pipeline development, visualizes the flow, and ensures pad type compatibility. It empowers developers to focus on their projects, reducing development and debugging time.
Emacs, Vim or ... VSCode? Launched publicly in 2016, Visual Studio Code has quickly become the preferred IDE among professional developers, with a 74% share among all IDEs according to StackOverflow's 2023 annual survey.
Over the past year, several efforts have been made in Meson, its VSCode plugin and GStreamer itself, resulting in a seamless user experience for hacking on GStreamer with VSCode, whether in the C/C++ libraries and plugins or the Rust plugins. The VSCode integration with Meson provides IntelliSense support, unit test integration, debugging and much more, allowing you to build and debug GStreamer with a single click of a button, even on Windows.
In this talk, we will start with a quick introduction to how Meson's Visual Studio Code integration works and a summary of all the efforts made to reach the current stage. The talk will continue by explaining how to set up and configure VSCode to work on GStreamer for C/C++ and Rust, and how to use the different integration features for development, testing and debugging. We will finish with an example of its use in a demo application.
In this lightning talk we will showcase the current support for the W3C WebCodecs spec, with GStreamer, in WebKit WPE and GTK ports!
Once again, I would like to share the great work and huge progress made in the V4L2 GStreamer plugin. Four years since the last update is a long time, and there is just as much to say about the progress made with the Linux CODECs and all the new hardware being supported.
A quick overview of the new 0.3.0 release of GstPipelineStudio and its upcoming features.
LGE has a software platform called webOS, which is web-centric and usability-focused; webOS can be experienced mostly on televisions made by LGE.
Now, to expand webOS to other devices, a new plugin is being considered to wrap various SoC vendors' plugins such as decoders and sinks.
We've tried to implement this plugin on webOS OSE (Open Source Edition) in Rust, an emerging language even in GStreamer, to check its feasibility for the future of webOS.
In this brief lightning talk, RidgeRun highlights the latest features added to the GStreamer Daemon open-source project. For those unfamiliar with Gstd, the talk will provide an overview of the project and its use cases. Meanwhile, attendees already acquainted with it will discover the capabilities and fixes in the newest version.
(Draft)
Software engineer in charge of maintaining and developing the media framework in webOS TV.
Artistic Style Transfer with GStreamer
Results on webOS TV
Examples of possible applications (screen savers?)
Future Work / Q&A
TBD
Glass-to-glass latency is so passé; diaphragms are in this season.
The talk will focus on sensor configuration from the application's point of view down to the kernel. It starts with a general introduction to the ways of configuring a sensor in libcamerasrc and how much information to expose in the API itself, so as not to overwhelm the user while still giving enough flexibility for some fine-grained configuration.
This lightning talk will present the gstconf.ubicast.tv GStreamer Conferences video archive portal, some usage statistics, and where GStreamer is used in the process.
A quick look into what has been happening in the world of GStreamer WebRTC over the past quadrennium.
Have you ever heard of frame tiling or framebuffer compression? These usually hardware-specific formats are commonly used behind the scenes to make your CODEC and GPU hardware run a lot faster. Until now, we have always tried to hide these formats from users. This would always lead to limitations and surprising side effects when the information was lost. Mis-negotiating these formats has resulted in many visual corruption issues with our original VA-API decoders.
In this talk, we will discuss how Linux DRM modifiers fix these issues. You will learn about the new negotiation method and the tooling that has been developed to make this possible. You will learn how these formats can be used and applied to DMABuf exchanges between various Linux components like cameras, VA and V4L2 CODECs, the GL stack and, of course, Wayland and the Linux display drivers.
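For a taste of what the negotiated caps look like, DMABuf caps now carry an explicit fourcc:modifier pair in a drm-format field. A minimal sketch (the modifier value, Intel X-tiling, is purely an example):

```c
#include <gst/gst.h>

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  /* With DRM-modifier-aware negotiation, DMABuf caps use format=DMA_DRM
   * plus a drm-format field holding "FOURCC:0xMODIFIER". The modifier
   * below is I915_FORMAT_MOD_X_TILED, used here only as an example. */
  GstCaps *caps = gst_caps_from_string (
      "video/x-raw(memory:DMABuf), format=DMA_DRM, "
      "drm-format=NV12:0x0100000000000001, "
      "width=1920, height=1080");

  gchar *str = gst_caps_to_string (caps);
  g_print ("%s\n", str);

  g_free (str);
  gst_caps_unref (caps);
  return 0;
}
```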
We all know that GStreamer is a (relatively) popular framework for processing media on consumer devices, be it desktops, robots, cars, phones, and so on. However, when we talk about server-side media processing, arguably GStreamer is still not quite as popular as it could be.
Daily is a video/voice calling platform-as-a-service that offers a browser- and libwebrtc-based SDK for making calls using WebRTC. The service includes a number of features that involve media processing in the backend, such as recording, live streaming, transcription, media ingestion, and SIP interoperability. All these services have been built using GStreamer.
In this talk, we will walk through the overall architecture of these services, and some interesting problems we came across while implementing them. We will then reflect on what we’ve learned from using GStreamer in these scenarios and how we might improve the experience for others who might want to tread this path.
In the surveillance industry, more and more cameras are becoming cloud-connected, and new streaming solutions are needed - like WebRTC. It's great for low-latency live streaming from cameras to web browsers, but Axis also uses it for things like controlling camera movement and playing back recorded video.
This talk will present the work that has been done in several parts of GStreamer to make non-linear editing simpler and more efficient.
We will also discuss what is next and the long-term vision for GES.
GStreamer provides a powerful and flexible way to develop streaming media applications for use-cases such as video telephony, live audio/video streaming and video conferencing, by supporting plugins which utilize the hardware-accelerated media components present in the SoC as well as software-based processing elements.
To ensure a good overall user experience in such applications, various quality factors need to be addressed at both the user-space and kernel-space level: maintaining audio/video sync, optimizing latency and fine-tuning performance, detecting and avoiding video frame skips, audio distortion and clipping, maintaining audio/video quality, and error recovery. The talk will go through these design considerations and also cover how to prototype, debug and optimize such a low-latency audio/video streaming application.
Taking a TI K3-based SoC as a reference example for prototyping the video telephony use-case, the talk will go through its basic building blocks, covering the relevant concepts for each component from the perspective of GStreamer and the underlying Linux kernel frameworks.
Lastly, it will cover tools and techniques for testing, debugging, stabilizing and optimizing such solutions.
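As one concrete debugging aid for the latency topic, an application can query the latency a pipeline has negotiated at runtime. A small sketch, assuming a pipeline handle is available:

```c
#include <gst/gst.h>

/* Print the live-ness and the min/max latency the pipeline negotiated;
 * useful when tuning a low-latency streaming path. */
static void
print_pipeline_latency (GstElement * pipeline)
{
  GstQuery *query = gst_query_new_latency ();

  if (gst_element_query (pipeline, query)) {
    gboolean live;
    GstClockTime min_latency, max_latency;

    gst_query_parse_latency (query, &live, &min_latency, &max_latency);
    g_print ("live: %d, min: %" GST_TIME_FORMAT ", max: %" GST_TIME_FORMAT "\n",
        live, GST_TIME_ARGS (min_latency), GST_TIME_ARGS (max_latency));
  }
  gst_query_unref (query);
}
```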
A talk about the long road of adding support for the Vulkan Video extensions (decoding and encoding) to GStreamer.
WebRTC has become ubiquitous as the technology powering various forms of video experiences online. While the Session Initiation Protocol (SIP) predated WebRTC by 12 years and had become the predominant protocol for setting up real-time media sessions between groups of users, WebRTC set out to add real-time media, i.e. audio and video, to every web browser without the need for a separate softphone client.
While WebRTC has become the de-facto standard for real time communication on the Internet, SIP still sees use in some scenarios, such as bridging to phone networks (PSTN) and physical conferencing equipment.
In this talk, we describe how we went about connecting WebRTC and SIP systems using GStreamer and SIP.js.
WirePlumber is the default session manager of PipeWire, the powerful multimedia IPC framework that has become the standard for low-latency audio, Bluetooth audio, video capture and many more use cases on modern Linux systems. WirePlumber 0.4 featured a Lua scripting mechanism that was meant to make it easy to write custom policies, but in practice it turned out to be cumbersome. In the upcoming 0.5 release, WirePlumber is seeing fundamental changes to this mechanism that redefine the entire development experience. In this talk, George will take a closer look at these changes and also discuss other interesting upcoming features.
Fluster is an open-source, OS-independent testing framework written in Python for multimedia decoder conformance. Its purpose is to check various implementations against reference test suites with known and proven results. The decoders can be standalone executables as well as GStreamer- or FFmpeg-based. The tool was originally designed to check the conformance of H.265/HEVC decoders; nowadays it also supports H.264/AVC, H.266/VVC, VP8, VP9, AV1 and AAC.
Fluster consists of a CLI application that runs a number of test suites with the supported decoders and compares the checksums of the resulting outputs against reference ones. Its modular design makes it easy to extend its functionality and to add more decoders and test suites.
In this talk we will provide an overview of Fluster and its functionalities covering the following topics:
References:
https://github.com/fluendo/fluster
https://fluendo.com/en/blog/fluster-a-framework-for-multimedia-decoder-conformance/
MPEG-5 Part 2 LCEVC (Low Complexity Enhancement Video Coding) is the latest standard by MPEG and ISO. Unlike typical codecs, however, it acts as a layer on top of existing codecs to improve their compression efficiency (better quality at lower bitrates) and reduce transcoding compute requirements. The LCEVC data is carried along with metadata in the actual video stream (e.g. in SEI messages for H.264). By complementing other codecs rather than competing with them, it circumvents the codec wars and is changing the video processing landscape as we know it.
In this talk from Collabora, in collaboration with V-Nova, the primary originator of the standard, Julian will describe how LCEVC was implemented in GStreamer and the challenges he faced when integrating such an enhancement codec while keeping GStreamer's modularity and flexibility intact. He will also describe why new LCEVC caps were introduced for autoplugging to work, and how the LCEVC enhancement data is passed through the base decoder using a new type of GstMeta. Concluding the talk, Julian will discuss future plans for the new LCEVC plugin for GStreamer, and will show a demo of a working GStreamer pipeline decoding LCEVC video.
This talk is about an interesting aspect of GObject that allows us to use an already registered GStreamer element as a base class at runtime, and thereby register a new GStreamer element containing certain modifications or interceptions applied to the original one.
In particular, this can be interesting because it allows us to intercept the behaviour of an element whose code we can't or don't want to modify.
The technical side of the idea is quite simple; you can find the explanation in a short code example here:
https://github.com/aslobodeniuk/fun/blob/master/gobject-mutogene/examples/gobject-mutation.c
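For readers without access to the link, a condensed sketch of the same idea, assuming "identity" as the base element and intercepting its change_state vmethod (this is an illustration, not the linked example verbatim):

```c
#include <gst/gst.h>

/* Override: intercept state changes, then defer to the original class. */
static GstStateChangeReturn
mutated_change_state (GstElement * element, GstStateChange transition)
{
  GstElementClass *base_class = GST_ELEMENT_CLASS
      (g_type_class_peek (g_type_parent (G_OBJECT_TYPE (element))));

  g_print ("intercepted: %s\n", gst_state_change_get_name (transition));
  return base_class->change_state (element, transition);
}

static void
mutated_class_init (gpointer klass, gpointer class_data)
{
  GST_ELEMENT_CLASS (klass)->change_state = mutated_change_state;
}

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  /* Grab the GType of an already registered element... */
  GstElement *tmp = gst_element_factory_make ("identity", NULL);
  GType base = G_OBJECT_TYPE (tmp);
  gst_object_unref (tmp);

  /* ...and register a derived type at runtime, reusing the base type's
   * class and instance sizes. */
  GTypeQuery query;
  g_type_query (base, &query);

  GTypeInfo info = { query.class_size, NULL, NULL, mutated_class_init,
    NULL, NULL, query.instance_size, 0, NULL, NULL
  };
  GType mutated =
      g_type_register_static (base, "GstIdentityMutation", &info, 0);

  GstElement *element = g_object_new (mutated, NULL);
  gst_element_set_state (element, GST_STATE_READY);
  gst_element_set_state (element, GST_STATE_NULL);
  gst_object_unref (element);
  return 0;
}
```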
The possible use cases are not that clear yet, so we will present the few we can imagine, but also invite listeners to think about whether they can propose more.
The talk could be scheduled as:
Brief introduction into how the GType registration works, and how GStreamer uses it - 10 min
Walk through the code example - 10 min
Speaking about the possible use cases - 5 min
Questions - 5 min
Axis network surveillance cameras provide network streams for video, audio and a multiplexed stream of various auxiliary data such as video analytics, events and much more. This metadata can be used to optimize and enhance many surveillance use cases, such as motion detection, combining video streams with radar, licence plate recognition and much more. The streams are available over RTSP, which is also part of the ONVIF standard. This talk will cover how GStreamer is used to implement APIs that deliver video, audio and metadata together.
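To illustrate the plumbing involved, a minimal gst-rtsp-server media factory serving a video stream looks roughly like this; a camera would extend the launch line with an audio payloader (pay1) and a payloader for the metadata stream (the launch line here is illustrative, not Axis's actual pipeline):

```c
#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);

  GstRTSPServer *server = gst_rtsp_server_new ();
  GstRTSPMountPoints *mounts = gst_rtsp_server_get_mount_points (server);
  GstRTSPMediaFactory *factory = gst_rtsp_media_factory_new ();

  /* pay0 carries video; a real camera would add further payloaders
   * (audio, metadata) as pay1, pay2, ... in the same launch line. */
  gst_rtsp_media_factory_set_launch (factory,
      "( videotestsrc is-live=true ! x264enc tune=zerolatency "
      "! rtph264pay name=pay0 pt=96 )");
  gst_rtsp_mount_points_add_factory (mounts, "/stream", factory);
  g_object_unref (mounts);

  gst_rtsp_server_attach (server, NULL);
  g_print ("stream ready at rtsp://127.0.0.1:8554/stream\n");
  g_main_loop_run (loop);
  return 0;
}
```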
It's been four years since we last heard about improvements to the platform-specific support in GStreamer, such as Windows, macOS, and mobile. I'm here to talk about that on behalf of the authors who should be bragging about their great work!
The GStreamer pipeline is the top-level concept that encapsulates all the elements of a data processing flow. Or is it? There are good reasons why one might want to split the processing of data up into different pipelines, such as creating logical components, or preventing errors from affecting other processing.
Over the years, there have been many different approaches to the problem - leading to a bevy of elements for creating connection tunnels between pipelines.
This talk will discuss the available elements, what they each bring to the table and which ones you might want to use in which situations.
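As one example of the pattern, the proxysink/proxysrc pair connects two independent pipelines in the same process. A minimal sketch:

```c
#include <gst/gst.h>

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  GError *error = NULL;
  /* Producer pipeline ends in proxysink... */
  GstElement *producer = gst_parse_launch (
      "videotestsrc is-live=true ! proxysink name=psink", &error);
  /* ...consumer pipeline starts with proxysrc. */
  GstElement *consumer = gst_parse_launch (
      "proxysrc name=psrc ! videoconvert ! autovideosink", &error);

  /* Tie the two halves together by pointing proxysrc at proxysink. */
  GstElement *psink = gst_bin_get_by_name (GST_BIN (producer), "psink");
  GstElement *psrc = gst_bin_get_by_name (GST_BIN (consumer), "psrc");
  g_object_set (psrc, "proxysink", psink, NULL);

  /* Each pipeline keeps its own state, clock and bus; an error in one
   * does not tear down the other. */
  gst_element_set_state (consumer, GST_STATE_PLAYING);
  gst_element_set_state (producer, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}
```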
.NET is a popular open-source, cross-platform framework for building different types of applications for web, mobile, desktop, IoT or servers. It supports several programming languages, C# being the most popular one.
With the correct integration, GStreamer could become the reference framework for multimedia applications in .NET, bringing in new users to our community.
Over the last year, the C# bindings have received some love after years of being unmaintained, with several bug fixes, an update of the bindings to the latest GStreamer release, support for .NET, and NuGet packages.
In this talk, we will present the current status of the C# bindings and how to use them to write GStreamer applications covering the following topics:
libcamera is an open-source camera stack and framework for Linux, Android, and ChromeOS. This talk will focus on libcamerasrc, libcamera's GStreamer element, and how it can be used and configured in order to exercise a functioning GStreamer pipeline.
The goal of this talk is to introduce libcamerasrc, configuring the camera and setting the supported controls. The talk will also provide an overview of what libcamerasrc supports today and prospects for future development.
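For completeness, a minimal sketch of a functioning pipeline around libcamerasrc (the caps filter and the choice of autovideosink are illustrative):

```c
#include <gst/gst.h>

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  GError *error = NULL;
  /* libcamerasrc picks a default camera if none is selected through
   * its "camera-name" property. */
  GstElement *pipeline = gst_parse_launch (
      "libcamerasrc ! video/x-raw,width=1280,height=720 ! "
      "videoconvert ! autovideosink", &error);
  if (pipeline == NULL) {
    g_printerr ("Failed to build pipeline: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}
```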
Serving multimedia streams to multiple consumers often requires using a relay server; for instance, the producer might operate under constrained resources and/or behind a limited-bandwidth connection.
This talk presents two variations on a WebRTC relay server: one using the Janus WebRTC server, and the other based on WebRTCSrc & WebRTCSink.
An initial overview of the challenges of bringing GStreamer to the web through Emscripten and WASM, and the particularities of building, testing, and running GStreamer in a Node.js environment or in the browser.
A detailed description of what was needed to run a GStreamer pipeline in the browser, and of the changes required for eventually landing this work into mainstream.
The Windows operating system, in its three or so decades of existence, has introduced at least four different APIs dedicated to multimedia encoding or decoding. Wine, as a Free Software replacement of Windows, implements most of those APIs using GStreamer as a backend.
In this presentation I intend to talk about our experiences working with GStreamer as a backend, and the advantages and disadvantages we have found with it.
I also intend to talk about some larger unsolved problems we have that are specific to the challenge of implementing another API using GStreamer. These include:
supporting zero-copy into application-provided buffers,
matching application expectations of synchronous decoding,
consistently retrieving stream metadata, especially optional stream metadata.
In the talk I intend to propose some potential solutions to these problems, but more generally to raise them as questions for the GStreamer development community to think about.
[As a side note, I've proposed this as a 45-minute talk, because I anticipate it can easily run that long. However, if there is no time for a 45-minute talk, I would be happy to give a talk in a shorter time slot, condensing the presentation as necessary.]
USB cameras are commonly used in desktops and laptops for streaming video or
participating in video conferences. Thus, USB is more or less the standard for
connecting a camera to a PC.
Linux makes it possible to turn hardware that has a USB device controller (UDC), for example the Raspberry Pi 4, into a USB peripheral. The kernel provides a number of different USB gadgets to implement various USB device classes. One of them is the UVC (USB Video Class) gadget, which implements a USB camera.
However, correctly configuring such a system and passing a video stream to the USB gadget is not that easy. Fortunately, the new uvcsink element allows you to easily stream an arbitrary GStreamer video pipeline into the UVC gadget and, thus, to any USB host system.
Michael will show you how to prepare a system as a UVC gadget and stream video data to a UVC host using a simple GStreamer pipeline like "gst-launch-1.0 videotestsrc ! uvcsink". He will also give some insight into the implementation details of the uvcsink element.
The adoption of WebRTC in the broadcasting/streaming industry has been hindered by the lack of a standard signalling mechanism offering a simple plug-and-play model. With the introduction of the WHIP and WHEP specifications that is changing, and their acceptance is evident, with all major open-source multimedia software implementing them.
As of release 1.22, GStreamer already has client-side WHIP/WHEP implementations (whipsink and whepsrc) written in Rust, and the server-side implementations are in progress.
With WebRTCSink and WebRTCSrc designed to support any signalling protocol through an interface separate from the sink/src functionality, it has become easy to write all the client- and server-side implementations of WHIP/WHEP on top of them. This also makes it possible to leverage support for both raw and encoded streams, the congestion-control mechanism, and every other improvement that will be added to WebRTCSink/Src in the future.
My talk will be an introduction to the WHIP/WHEP protocols and the initial version of the elements implemented in GStreamer using Rust, and how they are evolving using the Signaller-based design of the GStreamer WebRTC Rust plugins.
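As an illustration, sending a stream to a WHIP ingestion endpoint with the 1.22-era whipsink could look like this (the endpoint URL is a placeholder, and the encoder choice is arbitrary):

```c
#include <gst/gst.h>

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  GError *error = NULL;
  /* whipsink wraps webrtcbin and performs the WHIP HTTP exchange with
   * the ingestion endpoint; it consumes RTP-payloaded media. */
  GstElement *pipeline = gst_parse_launch (
      "videotestsrc is-live=true ! videoconvert ! vp8enc deadline=1 ! "
      "rtpvp8pay ! whipsink whip-endpoint=https://example.com/whip/room",
      &error);
  if (pipeline == NULL) {
    g_printerr ("Failed to build pipeline: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}
```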
At Spiideo we offer automated sports video solutions to our customers for recording, analysis and broadcasting. We use a multi-camera setup to create a stitched panoramic video with an AI assisted cameraman.
This talk will showcase how we moved from a segment based system with a glass-to-glass latency of almost two minutes to a frame-based system with a latency of around three seconds.
In those three seconds we need to perform stitching of multiple 4K streams, detect objects in each camera stream and predict where to aim the virtual camera to follow the action on the pitch. And we do all of this across multiple instances in the cloud.
This was Spiideo’s first real use of GStreamer and we will talk about what we struggled with, what helped us (a lot) and what we still do not really (really) understand.
Modern computers tend to have multiple GPUs, or a single GPU with several encoding cores, but an encoder instance can only use one of these cores. Parallelizing the encoding across multiple cores can theoretically increase transcoding speeds linearly, resulting in 2x transcoding speed for VoD on a GPU with two encoding cores.
This talk will present HyPE, an open-source GStreamer meta-encoder written in Rust that parallelizes the encoding process across several encoding cores to take advantage of all the available hardware resources.
Its codec-agnostic design allows for seamless integration with a diverse range of codecs, making it a versatile choice for a wide variety of applications. It is also hardware-agnostic, for compatibility with various systems, including NVIDIA, AMD, Intel, and ARM architectures.
We will showcase the design of this plugin, present the achieved results, and examine the limitations of this element, along with potential areas for enhancement in future iterations.
Flumes is an open-source service we developed at Fluendo to improve our QA process. It was designed with our multimedia playback/decoding products in mind. The main goals of the service are to provide easy access to multimedia files with concrete specifications and a feeding mechanism for reproduction tools or test-automation frameworks. As such, it becomes the connecting link between multimedia test collections and testing tools.
It consists of diverse technologies that allow managing, editing, viewing and searching metadata of multimedia content. It is developed in Python 3, uses GLib and the gst-discoverer tool, and stores metadata in an SQLite database. The service runs as a daemon on Linux, constantly monitoring your collection's path and ensuring that the metadata database stays up to date.
https://github.com/fluendo/flumes
https://github.com/fluendo/flumes-fuse
https://github.com/fluendo/flumes-django