At Spiideo we offer automated sports video solutions to our customers for recording, analysis, and broadcasting. We use a multi-camera setup to create a stitched panoramic video with an AI-assisted cameraman.
This talk will showcase how we moved from a segment-based system with a glass-to-glass latency of almost two minutes to a frame-based system with a latency of around three seconds.
In those three seconds we need to stitch multiple 4K streams, detect objects in each camera stream, and predict where to aim the virtual camera to follow the action on the pitch. And we do all of this across multiple instances in the cloud.
This was Spiideo’s first real use of GStreamer, and we will talk about what we struggled with, what helped us (a lot), and what we still do not (really) understand.
Daniel and Robin are software engineers at Spiideo working on video processing.