[cairo] Planar YUV support
Soeren Sandmann
sandmann at daimi.au.dk
Thu May 13 13:02:34 PDT 2010
Benjamin Otte <otte at redhat.com> writes:
> > Your patches add YCbCr support at the first stage in the sense that it
> > converts each Y sample to a8r8g8b8 using the nearest neighbor Cb and
> > Cr samples.
> >
> You did not look at the (old) YUV branch by accident and used the
> current (planar) branch, right? I'm a bit confused because you reference
> the PIXMAN_COLOR_SPACE enum from the new branch but refer to a design
> from the old branch.
Yeah, I think I looked at the old branch at one point and somehow got
it into my head that the fetch_raw functions were doing color
conversions, even though they don't in the current branch. Sorry about
that.
> > The new pipeline then looks like this:
> >
> > * Widen to 8 bit components
> > * Extend
> > * Interpolate between samples according to filter
> > * Transform
> > * Convert to RGB coding
> > * Resample
> > * Combine
> > * Store
> >
>
> In the planar branch, the colorspace conversion is done before
> combining, in general_composite_rect() to be exact. So while the branch
> still does the interpolation of subsampled images too early, it seems to
> otherwise fit your description of how things should look quite well.
Yeah, it does. I do think that interpolation should not be the
responsibility of the sample fetchers though because that ultimately
leads to them having to deal with fractional positions.
> > But the PIXMAN_COLOR_SPACE_ARGB_UNMULTIPLIED doesn't fit in here
> > because premultiplication is something that has to happen _before_
> > interpolation, whereas color decoding needs to happen after. This
> > suggests to me that those two things don't belong in the same
> > enum. I do think support for unpremultiplied formats is
> > worthwhile, but it seems orthogonal to YCrCb support.
> >
> I added unmultiplied support for one simple reason really: YCbCr with
> alpha channel is unmultiplied. So it seemed rather trivial to support
> unmultiplied ARGB, too.
>
> I'm also not sure where in interpolation or resampling an operation is
> non-linear and would result in wrong values for unmultiplied color
> spaces, but in those few cases, it seems worthwhile to use a different
> function that handles those correctly, no?
Premultiplication itself is not a linear operation because it
multiplies two of its inputs, something a linear operator can't
do. And even if it were linear, linear operators don't commute in
general, so they can't simply be reordered.
The case for interpolating premultiplied pixels, as opposed to
premultiplying interpolated pixels, is this:
Conceptually, the alpha value represents coverage, whereas
(unpremultiplied) color values represent how much of that color is
reflected back with 0 meaning all light is absorbed and 1 meaning all
reflected. This means the total amount of light reflected back at the
sample point is alpha * channel. The amount of light reflected back at
an intermediate position is then the interpolation between the two
(alpha * channel) values.
As Bill mentions, premultiplication can be done on AYUV too, and in
fact, if we add support for that format, it seems to me that
processing should work like this:
  1. Premultiply the pixels
  2. Interpolate
  3. Color convert the Y, Cb, Cr channels to RGB, and leave the
     alpha channel in place
Which again means that dealing with premultiplication and color
conversion happens at two different stages.
> > In practical terms, the above means YCrCb processing will have to go
> > through the bits_image_fetch_transformed() path and that the
> > fetch_scanline() and fetch_pixel() function pointers should be
> > considered *sample* fetchers that simply widen and complete the
> > samples wrt. their existing color coding, but don't try to do
> > interpolation or color conversion. The x and y coordinates passed to
> > them must then always be integers and refer to samples. If you pass
> > (1, 1) to one of those fetchers, the result will be the second sample
> > on the second row of the component in question.
>
> The current fetch_raw implementations for subsampled formats do the (I
> guess most common) operation of fetching the NEAREST sample for
> subsampled formats, so you can use them fine for integer
> translations.
They can certainly be used when the filter is NEAREST, but if it's
BILINEAR, then the intermediate values will be incorrect at the points
where pixels are missing, even when the transformation is an integer
translation.
In fact, the optimization of ignoring the filter when the
transformation is the identity is no longer valid for subsampled
formats because we can't guarantee that the resampled pixels exactly
hit sample points. So the setup code in bits-image.c will have to use
the fetch_transformed function whenever the format is subsampled.
> That said, my idea was to have a replacement for
> bits_image_fetch_transformed() that takes care of subsampled formats,
> but I was never quite sure on how to best implement it, which is why I
> didn't do it. I like your idea of a fetch_component function, though I'm
> not quite sure on where to do the image => component conversion for the
> x/y coordinates.
Will the pseudo code I posted not work? I.e., simply decide in
fetch_filtered() whether component-wise fetching is necessary?
> > Maybe your idea of eliminating the get_pixel() altogether and just use
> > fetch_scanline() with a length of one could make this simpler.
> >
> Considering there was no measurable performance impact, I thought of it
> as a great idea since day one ;)
Another possibility is to get rid of the scanline fetchers and use
get_pixel() exclusively. In some sense this is the more fundamental
operation, and fetch_scanline() could be considered an
optimization. (And Toshok's gradient work would actually allow them to
be treated as such).
Soren