Speaker
Description
In a typical Linux GUI system, each GUI application allocates an application buffer and renders its pixels into it. This application buffer carries several properties describing its color specification. Each application sends its app buffer to the graphics compositor. The compositor accepts app buffers from the various applications, composes a result framebuffer containing pixels from all of them, and sends this result framebuffer to the display driver to be shown on the connected display sink.
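As a minimal illustration of the composition step described above (not any particular compositor's implementation), the sketch below blends app buffers back-to-front with the standard "source over" operator; buffers are modeled as flat lists of premultiplied-alpha `(r, g, b, a)` pixels in the 0.0–1.0 range, and all names are hypothetical:

```python
def source_over(dst, src):
    """Blend one premultiplied-alpha pixel over another ("source over")."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    inv = 1.0 - sa
    return (sr + dr * inv, sg + dg * inv, sb + db * inv, sa + da * inv)

def compose(app_buffers, width, height):
    """Compose app buffers, back to front, into one result framebuffer."""
    framebuffer = [(0.0, 0.0, 0.0, 0.0)] * (width * height)
    for buf in app_buffers:
        framebuffer = [source_over(d, s) for d, s in zip(framebuffer, buf)]
    return framebuffer

# Example: an opaque red background under a half-transparent green overlay.
red = [(1.0, 0.0, 0.0, 1.0)] * 4
green = [(0.0, 0.5, 0.0, 0.5)] * 4   # premultiplied: 0.5 * (0, 1, 0, 1)
result = compose([red, green], 2, 2)
# → every result pixel is (0.5, 0.5, 0.0, 1.0)
```

Note that this blending math is only correct if every input buffer already uses the same color space, format, and tone range, which is exactly the constraint discussed next.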
Now, in order to compose/blend the result framebuffer correctly and accurately from the app buffers, the color property specifications of all the app buffers need to match. For example:

- the color space of every app buffer must be the same as that of the target result framebuffer
- the color format of every app buffer must be the same as that of the target result framebuffer
- the output color tone (HDR or SDR) of every app buffer must be the same as that of the target result framebuffer
All the app buffers that do not match these specifications must be transformed before the composition/blending process can take place. These transformations are jointly called "color correction" of a buffer.
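A typical color-correction step can be sketched as: decode the buffer's transfer function to linear light, convert the color space with a 3x3 matrix, then re-encode. The sketch below is illustrative only (it is not the proposed AI model); it uses the sRGB transfer function from IEC 61966-2-1 and a BT.709-to-BT.2020 primaries matrix quoted to four digits, and the function names are hypothetical:

```python
def srgb_to_linear(c):
    # sRGB electro-optical transfer function, one channel in 0..1
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # Inverse of the above (re-encode linear light for display)
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# BT.709 -> BT.2020 primaries conversion (rows operate on linear RGB)
BT709_TO_BT2020 = [
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
]

def color_correct(pixel):
    """Transform one (r, g, b) sRGB/BT.709 pixel toward a BT.2020 target."""
    lin = [srgb_to_linear(c) for c in pixel]
    out = [sum(m * c for m, c in zip(row, lin)) for row in BT709_TO_BT2020]
    return tuple(linear_to_srgb(c) for c in out)

# White maps to white: each matrix row sums to 1.0
white = color_correct((1.0, 1.0, 1.0))
```

In a real compositor these per-pixel transforms are expressed as hardware LUTs, CTM matrices, or shader passes; choosing which of those stages to use, and with what parameters, is the optimization problem this proposal targets.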
This proposal suggests training an AI model on these color transformations, so that it can predict and suggest the optimal set of color transformations for a typical composition scenario.
GSoC, EVoC or Outreachy | No |
---|---|
In-person or virtual presentation | In-person |
Code of Conduct | Yes |