[cairo] [RFC] high-quality pixman image downscaling
sandmann at daimi.au.dk
Fri Jun 26 16:42:56 PDT 2009
Below are some observations on where image scaling and blurring could
fit into the pipeline, and how it would relate to the
downscale-then-transform algorithm in your patch.
Here is a simplified version of the pixman pipeline as it looks
currently. Each image goes through these steps:
1. Reconstruction. The image is extended to be defined on all of
the real plane. Points in between pixels are constructed based
on the filter attribute, and points outside the pixel grid are
created from the repeat attribute.
2. Transformation. The image is transformed according to the
transform attribute.
3. Resampling. The image is resampled once per destination pixel.
4. Compositing. The resampled pixels from each input image are
composited and stored in the destination.
Step 3 is the reason downscaling looks so terrible currently. Sampling
only once throws away a lot of information when each destination pixel
corresponds to many source pixels.
A general solution to that would be to make that step configurable by
allowing users to set:
(a) A sample rate for the resampling, given in subpixels per
destination pixel, vertically and horizontally.
(b) A filter kernel that the subpixels are convolved with.
This would specify a subpixel sampling grid for each destination pixel
and a filter kernel with entries in subpixel coordinates. The filter
in (b) could be given as a symbolic value ('gaussian', 'lanczos',
'box') along with a size, so that the implementation could take
advantage of separability etc.
Having this feature would fix several issues:
* Very high quality image scaling if you do the full compositing
and filtering in one pass, while still allowing the two-pass
downscale-then-transform approach as an optimization.
* Higher quality gradients. Currently gradients with high-frequency
color changes can cause bad aliasing. Super sampling them would
help with that.
* Support for blurring, which was a recent feature request for
cairo.
Also note how a future polygon image would fit in. The antialiasing
that cairo currently does is precisely equivalent to resampling with a
15x17 box filter, so if we added a polygon image, this new interface
would naturally apply to it. Aside from the blurring use case, better
filtering for polygons is interesting in its own right, to reduce
aliasing at the edges.
Bill said that blurring geometry would probably be difficult or
slow to implement, but I don't think that is the case at all. In fact,
I think arbitrary convolution kernels can be applied to supersampled
polygons quite efficiently, but I'll spare you the details for now.
So if we need a way to have general resampling filters anyway, it
probably does not make sense to have a separate downscaling API that
would have a subset of the general functionality.
I do like the idea of first rescaling, then running the general
compositing, but it seems like if we had the resampling filters along
with your rescalers as fast paths for the scale-only case, it would be
doable without any other new API. I have not yet read the code
carefully, so it's entirely possible that I'm missing some reason this
wouldn't work.
One thing that might be difficult to do in this scheme is the
alternating box filter, but a straightforward answer might be to allow
that sampling pattern as a settable option.