[cairo] Explanation/justification for tolerance change?

Adrian Johnson ajohnson at redneon.com
Thu Dec 6 05:45:03 PST 2007


Behdad Esfahbod wrote:
> On Wed, 2007-12-05 at 17:32 -0500, Carl Worth wrote:
>> Some of you might have noticed that I had a couple of open questions
>> in the release notes for the 1.5.4 snapshot I sent out last
>> night. Here's the promised follow-up for one[*] of them:
> 
> Yes.  Very welcome habit.
> 
> 
>> The commit of interest says only the following:
>>
>>         [meta-surface] Adjust tolerance when replaying to surfaces
>>         with device scale
>>
>>         such that we get a consistent look with different fallback
>>         resolutions.
>>
>> Behdad, can you explain/justify this change please? In what cases is
>> the output different, and how is it better?
> 
> Sure.  For output difference please see before/after PDFs in the mail I
> sent in October (that got no comments):
> 
>   http://lists.cairographics.org/archives/cairo/2007-October/011815.html
> 
> The pages show the same piece rendered to fallback images of various
> resolutions.  Tolerance has been increased to 5.0 IIRC.
> 
> In my book, the after shot is more consistent and so preferred.  
> 
> 
>> I know that Adrian doesn't agree with the change, so I'm hoping for a
>> good debate. :-)
> 
> Adrian has a point of course, the short version being: "why would one
> ask for 1200dpi fallback image but get the same output of seemingly
> 300dpi"?  The main issue is that we have not defined tolerance for
> vector backends, and the main reason is that normally those backends
> don't need to flatten curves.

The aim of the finer-grained fallbacks is for fallback images to
seamlessly blend in with the vector parts of the page. This can only be
achieved by setting the fallback resolution to the printer resolution
and using a curve flattening tolerance similar to the tolerance used by
the printer.

Of course it is not always feasible to set the fallback resolution to
the printer resolution. Halftoning will reduce the effective resolution
and applications may choose a lower resolution to reduce file sizes and
print times.

We also don't know what tolerance the printer uses, and the
application is unlikely to know either. However, I think it is safe to
assume that, provided the printer has a reasonable amount of memory,
the tolerance will be close to one pixel at the printer resolution. So
the only tolerance that makes sense for fallback images is < 1 pixel,
but not too much less. In other words, the same as what was used
before the tolerance patch.

> It also doesn't help that cairo_set_tolerance() specifically define it
> as maximum error tolerated when flattening curves.
> 
> So lets define tolerance more usefully: maximum error distance, measured
> in device units.  To implement this I suggest:
> 
>   - Use tolerance to round numbers printed in vector output.  It just
> wastes space to write out to a precision of 1e-5 PS points when
> tolerance is just 0.1 points.

I agree that cairo_set_tolerance() should set the precision of the
numbers in paths, but only if the default tolerance is backend
specific. 0.1 point is insufficient resolution for PS/PDF when
printers are often 1200 dpi or more. PDFs may be zoomed in on screen,
and when a vector format is used for importing/exporting graphics a
higher precision is desirable. Future backends may have larger device
units and therefore require a smaller tolerance.

There will also be a little more work required in the text output
code (where relative displacements are used) to ensure that the
rounding errors caused by the reduced precision do not accumulate to
the point that the error in glyph position exceeds the tolerance.

>   - Deprecate cairo_surface_[sg]et_fallback_resolution() and use
> tolerance to deduce fallback resolution.  That is, a default tolerance
> of 0.1 will generate a fallback image of 600dpi and draw to it with a
> fixed tolerance of 0.1. (0.1 points means 720dpi, times sqrt(2) for the
> diameter and then div by two to get pixel rounding error gives ~509 dpi.
> Round up to 600.  The second 0.1 is quite arbitrary.)  

This does not make sense. The fallback resolution is independent of
the output precision. For example, a PDF or SVG may be printed at
2400 dpi, or used as a vector file format to import into another
application (where high precision is required), while the fallback
image resolution is set to 150 dpi to keep the file size down.

It is not acceptable to do arbitrary rounding of the fallback image
resolution. Not every printer has a resolution that is a multiple of
100dpi.

> This makes a whole lot of sense.  Now if you need a fast light *draft*
> PDF to print at 300dpi, you set tolerance to 0.2.  If you want a heavy
> camera-ready 2400dpi PDF, you set tolerance to 0.02.

No, the precision of vector paths and the resolution of fallback
images need to be separate. The precision of vector paths makes very
little difference to the printing speed.

> It's simply beautiful and thanks to the tolerance living in gstate (as
> opposed to surface) and fine-grained fallbacks, it means that you can
> immediately get different fallback resolutions for different parts of
> the same page.  It will all just work.

This will not work. The tolerance is applied to each drawing operation.
However a finer-grained fallback may include or intersect multiple
drawing operations. There is no sane way to obtain a fallback resolution
for each fallback image from the tolerance of the drawing operations
that are included in the image.

I also cannot see any reason why a different resolution for each
fallback image would be useful. The fallback images are largely
outside the control of the application. The application does not know
exactly when fallback images are used, or which drawing operations
are involved, without doing a lot of analysis and having a detailed
understanding of the current capabilities of the backend in use.

> I also noticed now that our tolerance guarantee is not as strong as it
> should be.  For example we don't account for rounding error that happens
> when converting paths to cairo_fixed_t.  Not hard to fix.
> 
> Another issue that Adrian raised was that 0.1 is too coarse for PDF.  I
> don't think it is, but if that's the case, we can easily make default
> tolerance come from surface, like font options.

0.1 seems too coarse to me. Looking at other applications that create
PS/PDF files with an unscaled CTM, the number of decimal places
ranges between 3 and 6.

3 decimal places seems a good compromise between preserving precision
and keeping the file size down. Most applications are not going to
alter the default, so it is better to make the default precise enough
for 99% of use cases and let the application increase the tolerance
if it really wants to reduce the file size at the expense of
precision.

I do not want to start seeing bug reports complaining about rounding
problems in the vector output. This would be like the bug reports for
broken PS/PDF output where the application is not checking the error
status of the surface (i.e. not using the API correctly) and the bug
is reported by a user with no control over how the application uses
the API.

> Now does that make any sense or I'm just crazy?
> 
>> Personally, I'm undecided because I don't think I fully understand the
>> issue yet. But I definitely want to get some good consensus on this
>> before
>> 1.6.
>>
>> -Carl
> 


