# [cairo] pixman regression

Mon Mar 10 15:21:53 PDT 2008

Soeren Sandmann wrote:
> > The problem seems to be that the 0.5 horizontal translation makes pixman
> > take the fbFetchTransformed path. In fbFetchTransformed, the new code makes
> > us compute a transformed y coordinate (v.vector[1]) which is very slightly
> > less than 1/2. Then on the bilinear filter path we do "v.vector[1] -=
> > v.vector[2] / 2" which makes v.vector[1] very slightly negative, and I think
> > this makes us sample outside the image just a little bit.
>
> I think this is exactly what is going on. Thanks for the bug report
> and the excellent test case.

Seconded.

> > Reverting that commit fixes this bug but presumably regresses other things.
> > I haven't studied pixman enough yet to come up with a real fix.
>
> I believe that patch was intended to make sure that when we are using
> a NEAREST filter, we always round 0.5 down to 0 rather than up to 1.

Correct. For reference, here's what I wrote about that patch earlier:

|Patch 1: fix cairo's  a1-image-sample  test
| The problem here is that the image source offset is subtracted from
| the sample position, so an offset of 0.5 will end up as -0.5 in the
| math and will then be rounded up, so the effective offset is 0.
|
| The patch essentially adds an epsilon to get the desired rounding
| behaviour. There's no good mathematical justification for it.
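To make that rounding behaviour concrete, here is a minimal sketch in pixman's 16.16 fixed-point representation. The constants are reproduced from pixman.h, but `nearest_index` is a simplification of the real fetch code, not the actual implementation:

```c
#include <stdint.h>

/* pixman's 16.16 fixed-point constants, reproduced locally */
typedef int32_t pixman_fixed_t;
#define pixman_fixed_1          ((pixman_fixed_t)1 << 16)
#define pixman_fixed_e          ((pixman_fixed_t)1)
#define pixman_fixed_to_int(f)  ((int)((f) >> 16))

/* NEAREST picks a pixel by truncating the sample coordinate.  A 0.5
 * source offset plus the 0.5 pixel-center offset lands exactly on the
 * boundary n + 1.0, which truncates to n + 1; subtracting one epsilon
 * makes that boundary case round down to n instead. */
static int nearest_index(pixman_fixed_t center_coord)
{
    return pixman_fixed_to_int(center_coord - pixman_fixed_e);
}
```

So for pixel 0 with a 0.5 offset the coordinate is exactly 1.0 in fixed point, and the epsilon pulls it back into pixel 0, while coordinates safely inside a pixel are unaffected.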

> However, this rounding needs to be applied *after* transforming, not
> before. For affine translations this doesn't matter, but for
> non-affine, it does.

When making the change, I had two testcases.

1) a1-image-sample  with an extra row and column of pixels added to
make the described geometry symmetric (with that change, all
four corner samples were perfectly aligned to the pixel grid)

This can be accomplished by the following change:

    -#define POINTS 10
    -#define STEP   (1.0 / POINTS)
    +#define POINTS 11
    +#define STEP   (1.0 / (POINTS-1))
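A quick way to see the effect of that change, under the simplifying assumption that the test places its i-th sample at i * STEP (a sketch, not the actual a1-image-sample code):

```c
#include <math.h>

/* With POINTS = 10 and STEP = 1/POINTS the last sample sits at 0.9,
 * so the grid is asymmetric within the unit square; with POINTS = 11
 * and STEP = 1/(POINTS-1) the samples span [0, 1] inclusive and the
 * four corner samples line up with the pixel grid. */
static double last_sample(int points, double step)
{
    return (points - 1) * step;
}
```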

2) the same test with an extra

    cairo_translate (cr, WIDTH, HEIGHT);
    cairo_scale (cr, -1, -1);

to turn the image by 180 degrees.

Both tests describe the same geometry, so I expected the output to
be the same. This expectation (which can be argued about) led to
doing the adjustment before applying the transformation.
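For reference, the composed transform in test 2 maps every point to its mirror image through the center of the WIDTH x HEIGHT rectangle. A sketch with the matrix algebra spelled out by hand rather than done by cairo (WIDTH and HEIGHT stand in for the test's actual dimensions):

```c
/* cairo_translate (cr, WIDTH, HEIGHT) followed by
 * cairo_scale (cr, -1, -1) maps the user point (x, y) to
 * (WIDTH - x, HEIGHT - y): a 180-degree turn of the image,
 * so both tests describe the same geometry. */
static void turn_180(double w, double h, double x, double y,
                     double *tx, double *ty)
{
    *tx = w + (-1.0) * x;
    *ty = h + (-1.0) * y;
}
```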

That being said, doing the adjustment after the transformation has
a better justification, so let's do that instead.

> And as you noticed it also breaks BILINEAR filtering, even for affine
> transformations. In this case, tweaking the coordinates is incorrect
> since we are interpolating between samples, not rounding.

Right.

> The thing that needs to be ensured is that
>
>     N(v' + ku') = N(v + ku) + a
>
> where N means normalizing of homogeneous coordinates, v is the initial
> vector, u is the unit vector, k is any integer, and a is the
> adjustment we want to apply (in this case (-epsilon, -epsilon, 0)).

Hmm. Yes and no. We're really only interested in the boundary case
where  N(v + ku) = (n + 1/2 + 1/2, m + 1/2 + 1/2), where one +1/2
comes from the translation of the pattern and the other +1/2 from the
sample position. We want to 'round' that result to (n, m).

So an alternative approach would be to let the adjustment have the
opposite sign of the w coordinate and make it as small as possible.
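In plain doubles (rather than pixman's 16.16 fixed point), the sign rule looks like this. The point is that dividing by a negative w flips the direction of a nudge, so the nudge must take the opposite sign of w to always move the normalized coordinate down:

```c
/* An epsilon with the opposite sign of w always decreases x / w:
 * for w > 0 we subtract from x, lowering the quotient directly; for
 * w < 0 we add to x, which still lowers the quotient because the
 * divisor is negative. */
static double adjusted_normalized_x(double x, double w, double eps)
{
    double delta = (w > 0) ? -eps : eps;  /* opposite sign of w */
    return (x + delta) / w;
}
```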

[snip correct math]

Content-Description: The patch
> diff --git a/pixman/pixman-compose.c b/pixman/pixman-compose.c
> index 1b2d581..90e176e 100644
> --- a/pixman/pixman-compose.c
> +++ b/pixman/pixman-compose.c
> @@ -4192,8 +4192,8 @@ fbFetchTransformed(bits_image_t * pict, int x, int y, int width, uint32_t *buffe
>      stride = pict->rowstride;
>
>      /* reference point is the center of the pixel */
> -    v.vector[0] = pixman_int_to_fixed(x) + pixman_fixed_1 / 2 - 1;
> -    v.vector[1] = pixman_int_to_fixed(y) + pixman_fixed_1 / 2 - 1;
> +    v.vector[0] = pixman_int_to_fixed(x) + pixman_fixed_1 / 2;
> +    v.vector[1] = pixman_int_to_fixed(y) + pixman_fixed_1 / 2;
>      v.vector[2] = pixman_fixed_1;
>
>      /* when using convolution filters or PIXMAN_REPEAT_PAD one might get here without a transform */
> @@ -4218,6 +4218,12 @@ fbFetchTransformed(bits_image_t * pict, int x, int y, int width, uint32_t *buffe
>
>      if (pict->common.filter == PIXMAN_FILTER_NEAREST || pict->common.filter == PIXMAN_FILTER_FAST)
>      {
> +	/* adjust vector so that 0.5 rounds down to 0, rather than up to 1 */
> +	v.vector[0] -= (pixman_fixed_e * v.vector[2]) >> 16;
> +	v.vector[1] -= (pixman_fixed_e * v.vector[2]) >> 16;
> +	unit.vector[0] -= (pixman_fixed_e * unit.vector[2]) >> 16;
> +	unit.vector[0] -= (pixman_fixed_e * unit.vector[2]) >> 16;
^^^ (presumably meant to be unit.vector[1])
> +
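Note that the patch scales the epsilon by the w coordinate. A sketch of just that arithmetic, with pixman's constants reproduced locally:

```c
#include <stdint.h>

typedef int32_t pixman_fixed_t;
#define pixman_fixed_1 ((pixman_fixed_t)1 << 16)
#define pixman_fixed_e ((pixman_fixed_t)1)

/* (pixman_fixed_e * w) >> 16 is one epsilon scaled by w: exactly one
 * epsilon in the common affine case w == pixman_fixed_1, and larger
 * in proportion for larger w.  The shift keeps the sign of w, which
 * is why the sign of the adjustment is worth discussing at all. */
static pixman_fixed_t epsilon_for_w(pixman_fixed_t w)
{
    return (pixman_fixed_e * w) >> 16;
}
```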

If we assume that the sign of the w coordinate doesn't change, we
could also use

    int delta;

    if (v.vector[2] > 0)
        delta = -1;
    else
        delta = 1;

    v.vector[0] += delta;
    v.vector[1] += delta;