[cairo] [Pixman] Supersampling - 2nd attempt

Krzysztof Kosiński tweenk.pl at gmail.com
Mon Aug 16 16:24:47 PDT 2010


2010/8/16 Bill Spitzak <spitzak at gmail.com>:
> The problem I am having is that this does not match how filtering of
> transforms is done in any image processing software I am familiar with.
>
> This is how all software I am familiar with works; it replaces the three
> steps you show above:
>
>        - Figure out the INVERSE transform
>
>        - For each output pixel, the x, y, and inverse transform figures out
> a weighing factor for every pixel in the input image. These are weights for
> input PIXELS, not weights for input "points".
>
>        - Multiply input pixels by these weights and sum to get output pixel.

Pixman actually stores the inverse transform. For example, an image
scaled to 1/4 its width and height has the following Pixman matrix:
4 0 0
0 4 0
0 0 1
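
For reference, here is roughly how that matrix is set up through the
public Pixman API (a minimal, untested sketch; the compositing call
itself is omitted):

#include <pixman.h>

int main(void)
{
    /* 64x64 source; passing NULL lets Pixman allocate the pixels. */
    pixman_image_t *src = pixman_image_create_bits(PIXMAN_a8r8g8b8,
                                                   64, 64, NULL, 0);
    pixman_transform_t t;

    /* The transform maps destination coordinates back to source
     * coordinates, so shrinking to 1/4 size means a scale of 4. */
    pixman_transform_init_scale(&t,
                                pixman_int_to_fixed(4),
                                pixman_int_to_fixed(4));
    pixman_image_set_transform(src, &t);

    /* ... composite src onto a 16x16 destination here ... */

    pixman_image_unref(src);
    return 0;
}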

There are two possible approaches:
a) determine which input pixels are in the sampling region; sample the
filter kernel for each input pixel to compute weights
b) precompute weights by sampling the filter kernel once; for each
output pixel, compute interpolated color values at the points where
the kernel was sampled, and weigh them with the precomputed weights

You seem to be advocating a) - is that correct?

For simple kernels like box or tent, the performance will be very
similar for a) and b), but for high quality filters like Gaussian or
Lanczos, it's less expensive to compute interpolated values of the
pixels and weigh them with precomputed samples of the filter kernel
than it is to sample the kernel for each subpixel. I'm not sure what
the difference in quality between the two approaches is; I could try
prototyping both of them.
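
To make b) concrete, here is the kind of 1-D prototype I have in mind
(a rough sketch: the sample count, the Lanczos-2 kernel and the linear
interpolation are all arbitrary choices for illustration):

#include <math.h>

#define NSAMPLES 33   /* precomputed kernel samples; arbitrary */

/* Lanczos-2 kernel: sinc(x) * sinc(x/2) on [-2, 2], 0 elsewhere. */
static double lanczos2(double x)
{
    if (x == 0.0)
        return 1.0;
    if (fabs(x) >= 2.0)
        return 0.0;
    return (sin(M_PI * x) / (M_PI * x)) *
           (sin(M_PI * x / 2.0) / (M_PI * x / 2.0));
}

/* Linearly interpolate the source at a continuous coordinate; this
 * is the "interpolated color value" step of approach b). */
static double interp(const double *src, int n, double x)
{
    int i = (int)floor(x);
    double f = x - i;
    double a = src[i < 0 ? 0 : i >= n ? n - 1 : i];
    double b = src[i + 1 < 0 ? 0 : i + 1 >= n ? n - 1 : i + 1];
    return a + (b - a) * f;
}

/* Downscale src (n pixels) into dst (m pixels) by scale > 1. The
 * kernel is sampled and normalized once, outside the pixel loop. */
static void downscale(const double *src, int n, double *dst, int m,
                      double scale)
{
    double w[NSAMPLES], sum = 0.0;
    int i, o;

    for (i = 0; i < NSAMPLES; i++) {
        double x = -2.0 + 4.0 * i / (NSAMPLES - 1);
        w[i] = lanczos2(x);
        sum += w[i];
    }
    for (i = 0; i < NSAMPLES; i++)
        w[i] /= sum;   /* normalize so constant images are preserved */

    for (o = 0; o < m; o++) {
        /* Inverse transform: center of output pixel o in the source. */
        double center = (o + 0.5) * scale - 0.5;
        double acc = 0.0;
        for (i = 0; i < NSAMPLES; i++) {
            /* Stretch the kernel by 'scale' so it acts as a
             * low-pass filter when downscaling. */
            double x = -2.0 + 4.0 * i / (NSAMPLES - 1);
            acc += w[i] * interp(src, n, center + x * scale);
        }
        dst[o] = acc;
    }
}

Only the inner loop runs per output pixel; the kernel itself is never
re-evaluated, which is the point of approach b).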

> For a more concrete example, our image transforms consist of translating the
> output x/y into an input axis-aligned rectangle that is as close as possible
> to the area of the output pixel inverse-transformed to the input. This
> rectangle's dimensions are then used to scale two 1-D filters to the right
> size (they are truncated sinc or similar functions, and the non-zero area
> is about twice the size of the rectangle). These two filters are used in a 2-pass
> algorithm to weigh and sum all the pixels in the rectangle and around it to
> produce the output pixel.
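
If I understand the first step correctly, it amounts to
inverse-transforming the corners of the output pixel and taking their
axis-aligned bounding box, along these lines (a sketch with made-up
types, not actual Pixman code):

#include <math.h>

/* Hypothetical 2x3 affine matrix mapping destination to source. */
typedef struct { double m[2][3]; } inv_xform_t;

static void apply(const inv_xform_t *t, double x, double y,
                  double *sx, double *sy)
{
    *sx = t->m[0][0] * x + t->m[0][1] * y + t->m[0][2];
    *sy = t->m[1][0] * x + t->m[1][1] * y + t->m[1][2];
}

/* Bounding box in source space of destination pixel (dx, dy); its
 * width and height would size the two 1-D filters. */
static void pixel_footprint(const inv_xform_t *t, int dx, int dy,
                            double *x0, double *y0,
                            double *x1, double *y1)
{
    double cx[4] = { dx, dx + 1, dx,     dx + 1 };
    double cy[4] = { dy, dy,     dy + 1, dy + 1 };
    int i;

    *x0 = *y0 = INFINITY;
    *x1 = *y1 = -INFINITY;
    for (i = 0; i < 4; i++) {
        double sx, sy;
        apply(t, cx[i], cy[i], &sx, &sy);
        *x0 = fmin(*x0, sx); *x1 = fmax(*x1, sx);
        *y0 = fmin(*y0, sy); *y1 = fmax(*y1, sy);
    }
}

For a strong skew, that box covers far more pixels than actually
contribute to the output pixel, which is what worries me: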

How does it work for skews? I'd like to avoid iterating over the
pixels inside an axis-aligned rectangle, because it could give us
terrible worst-case performance, for example with large skews. This
is an important consideration when we want to transform a bitmap
interactively, as in Inkscape when scaling an embedded raster image.

> You cannot describe the input image as a "continuous domain" and get any
> kind of sampling that is any better than putting the same number of samples
> right in the middle of the pixels.

Describing the image as a continuous domain allows us to use approach
b) outlined above. Of course, describing the image as a continuous
domain alone won't improve downscaling, because we still have to apply
a low-pass filter.

Regards, Krzysztof

