[cairo] Planar YUV support
Koen Kooi
k.kooi at student.utwente.nl
Thu May 13 00:40:09 PDT 2010
On 12-05-10 22:04, Soeren Sandmann wrote:
> Hi Benjamin,
>
>> Here's an update on my work on YUV planar support, also known as "what
>> we got from the gstreamer-cairo hackfest". I've reworked my experimental
>> code from back then to incorporate the things we agreed on. It is not
>> complete yet, but only misses details. So if you're a maintainer of the
>> libraries in question, now is a good time for a review and complaints or
>> issues with the general design of the code.
>
> First of all, thanks for working on YCrCb support. There are a lot of
> interesting things that become possible with this. If all goes as
> planned, this will be the major new feature in 0.20.0.
>
>> Details missing in the implementation that I intend to fix:
>> - Getting the fetchers right for subsampled formats and different
>> filters.
>
> This has to do with an overall concern I have, which is about how
> YCrCb fits in with pixman's image processing pipeline, and how to do
> filtering and subsample locations.
>
> The existing 8-bit R'G'B' pipeline looks like this:
>
> * Convert each image sample to a8r8g8b8
> * Extend grid in all directions according to repeat mode
> * Interpolate between samples according to filter
> * Transform
> * Resample
> * Combine
> * Store
>
> Your patches add YCbCr support at the first stage in the sense that it
> converts each Y sample to a8r8g8b8 using the nearest neighbor Cb and
> Cr samples.
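>
> Roughly, that fetch path for 4:2:0 data looks something like the sketch
> below; the helper name and the full-range BT.601 constants are made up
> for illustration, not taken from the patch:
>
>   #include <stdint.h>
>
>   static uint32_t
>   fetch_y420_nearest (const uint8_t *y_p, const uint8_t *cb_p,
>                       const uint8_t *cr_p, int y_stride, int c_stride,
>                       int x, int y)
>   {
>       int luma = y_p[y * y_stride + x];
>       int cb   = cb_p[(y >> 1) * c_stride + (x >> 1)]; /* nearest chroma */
>       int cr   = cr_p[(y >> 1) * c_stride + (x >> 1)];
>
>       int r = luma + (( 91881 * (cr - 128))                      >> 16);
>       int g = luma - (( 22554 * (cb - 128) + 46802 * (cr - 128)) >> 16);
>       int b = luma + ((116132 * (cb - 128))                      >> 16);
>
>       r = r < 0 ? 0 : r > 255 ? 255 : r;
>       g = g < 0 ? 0 : g > 255 ? 255 : g;
>       b = b < 0 ? 0 : b > 255 ? 255 : b;
>
>       return 0xff000000 | (r << 16) | (g << 8) | b;
>   }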
>
> The problem with this scheme is that if we add bilinear interpolation
> and support for MPEG-2 sampling structure, then those features would
> also have to be added at the fetching stage. And then the later
> interpolation stage would either do the wrong thing by interpolating
> _again_, or it would have to be disabled and the fetch stage would
> have to deal with fetching with transformed coordinates. Basically,
> this gets really messy quickly.
>
> Instead, we need to make some changes to the pipeline. A YCbCr image
> is an RGB image that has had two things done to it:
>
> 1. It was converted to YCbCr
> 2. It had its Cb and Cr components subsampled
>
> To get such an image back to RGB where we can composite with it, we
> have to reverse each of those transformations: First we have to
> reverse the subsampling, then we have to reverse the color coding.
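>
> (For reference, reversing the color coding is just applying the inverse
> of the forward BT.601 matrix; in the full-range variant that is
>
>     R = Y                         + 1.402    * (Cr - 128)
>     G = Y - 0.344136 * (Cb - 128) - 0.714136 * (Cr - 128)
>     B = Y + 1.772    * (Cb - 128)
>
> while the video-range variant first subtracts 16 from Y and scales it
> by 255/219, and scales the (Cb - 128) and (Cr - 128) terms by 255/224.)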
>
> Reversing subsampling is a form of interpolation, so it seems natural
> to do it as part of the existing interpolation step. Since color
> conversion has to happen after interpolation, this then implies that
> the first stage can no longer be "convert to argb", but instead must
> simply be "widen to 8 bits, while keeping the existing color coding".
>
> The location of the chroma sample points varies from format to
> format. This means that the interpolation code will have to apply
> special adjustments when it computes which samples to use for any
> given intermediate point.
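>
> A sketch of the kind of adjustment I mean, for the horizontal direction
> only (the function name and parameters are made up for illustration):
> with 2x horizontal subsampling, JPEG/MPEG-1 style chroma sits halfway
> between two luma samples, while MPEG-2 style chroma is co-sited with
> the left luma sample, so the interpolator has to shift the coordinate
> before choosing which two chroma samples to blend.
>
>   #include <math.h>
>
>   static void
>   chroma_sample_coords (double luma_x, int co_sited,
>                         int *left, int *right, double *frac)
>   {
>       /* map the luma x coordinate onto the chroma sample grid */
>       double chroma_x = luma_x / 2.0 - (co_sited ? 0.0 : 0.25);
>
>       *left  = (int) floor (chroma_x);
>       *right = *left + 1;
>       *frac  = chroma_x - *left;  /* weight of the right-hand sample */
>   }
>
> (For 4:2:0 the vertical direction needs the same treatment; MPEG-2
> chroma is co-sited horizontally but still sits halfway between two luma
> lines vertically.)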
>
> The new pipeline then looks like this:
>
> * Widen to 8 bit components
> * Extend

Wouldn't that massively increase the memory bandwidth needed? For the
YUV42x case you'd go from 12bpp to 24bpp. The hardware I use (omap,
davinci, etc) has hardware support for handling YUV in the framebuffers
(including transparency, colourkeying and scaling), so keeping it YUV
would help a lot. Moving HD video through RAM is already pushing the
bandwidth a lot, but for simple codecs the CPU is idle enough to do
fancy pixman and cairo effects.
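
(The arithmetic: 4:2:0 stores one 8-bit Y per pixel plus one Cb and one
Cr per 2x2 block, i.e. 8 + 8/4 + 8/4 = 12 bits per pixel; keeping all
three components at full resolution is 3 x 8 = 24.)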
regards,
Koen