[cairo] Automated testing of Cairo
cworth at cworth.org
Fri Aug 18 14:10:12 PDT 2006
On Fri, 18 Aug 2006 13:27:01 -0700, Bryce Harrington wrote:
> Of course, in this case why not put the git trees themselves onto nfs.
> That could make sense, although care would need to be taken that the
> clients only access them read-only, or else that machines be able to
> lock before doing a modification. Also, there'd need to be care taken
> in ensuring there's no race conditions when doing git fetches; e.g., if
> you're in the middle of updating the tree, and a client uses the tree to
> pull code, will that work or will there be inconsistencies?
This is a place where git shines. If you have direct access to a
"master" repository that you want to treat as read-only then all you
have to do is make a local clone of it for doing whatever you want:
git clone -l -s /path/to/repo /path/to/clone
This is about the fastest command you can imagine for the effect you
get. The result is a fully functional clone in which you can do
anything you'd like, (check out old versions, generate diffs, do your
tar-tree stuff, etc. etc.), but the clone itself is basically
instantaneous since it doesn't copy any objects but simply references
them in the original repository.
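As a concrete sketch of what a build script could do (the paths and the
throwaway "master" repository here are purely illustrative, not part of
crucible):

```shell
#!/bin/sh
set -e
master=/tmp/demo-master
clone=/tmp/demo-clone
rm -rf "$master" "$clone"

# Stand-in for the real master repository, just for this demo.
mkdir -p "$master" && cd "$master"
git init -q .
git config user.email demo@example.com
git config user.name Demo
echo hello > README
git add README
git commit -q -m "initial commit"

# -l uses local optimizations; -s shares objects through
# .git/objects/info/alternates, so no object data is copied and the
# clone is nearly instantaneous regardless of repository size.
git clone -q -l -s "$master" "$clone"

cd "$clone"
cat .git/objects/info/alternates   # points back at the master's objects
cat README
```

From here the clone behaves like any ordinary repository: you can check
out old versions, run diffs, and build from it, all without ever writing
to the master tree.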
And the true beauty is that you don't have to worry about ongoing
fetches happening in the master repository at all. With git, all a
fetch operation does is create new objects. It never modifies any
existing objects. So you are safe to clone and check out existing
versions while new versions are being fetched, and there is no race
condition to worry about whatsoever.
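You can see the mechanism behind this guarantee directly: git stores
every object in a file named by its content hash, so new history only
ever adds files under .git/objects and never rewrites existing ones. A
small demonstration in a throwaway repository (the path is illustrative):

```shell
#!/bin/sh
set -e
repo=/tmp/demo-objects
rm -rf "$repo"; mkdir -p "$repo"; cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name Demo

echo v1 > file; git add file; git commit -q -m one
before=$(find .git/objects -type f | sort)

echo v2 > file; git commit -q -am two
after=$(find .git/objects -type f | sort)

# Every object file that existed before the second commit is still
# present afterward; the new commit only added object files.
for f in $before; do echo "$after" | grep -qx "$f"; done
echo "old objects untouched: yes"
```

This is exactly why a reader racing a fetch is safe: anything it had a
reference to before the fetch is still there, byte for byte.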
> But the approach that feels the best is to have crucible pull both the
> automake-generated version from the website, as well as build its own
> tarballs from git for use with the git patches.
OK. That's basically just a crucible-specific issue. If you do local
git clones and just checkout the tree of interest within them, then
all of the tar,patch,diff stuff can go away. But, either way, we've
figured out what the bug was in the tar,patch,diff stuff so you should
be able to avoid that now.
> One of the reasons why I like to keep the official tarball as a base, is
> because that is what lives in the wild, so it ensures our test results
> will be directly comparable to anyone else that's running tests on stock
I can understand that testing a released tar file is important. But
after that, using the old tar file as a way of getting to the current
state of the tree is fundamentally just more error prone than getting
the current tree from git, and I don't see any benefit. But no matter.
> 1. Failure to set [xlib|ps|svg] target
> 2. Image size mismatch (AxB at C) vs. (A+1xB+1 at D)
> 3. XXX pixels differ from reference image
> I'm curious about the second error there; it seems to correspond most
> closely to the failures unique to this machine. The fact that the sizes
> are off by one, with a wildly different third number (resolution maybe?)
> makes me wonder if it could be a platform specific thing?
An image size mismatch is a very bad problem, (and explains the broken
links by the way---if the image comparison code is fed mismatched
sizes of images it gives up on computing the visual diff).
Oh, but if this is SVG-specific then it reminds me that at some point
in the past there was a bug in librsvg where it did a DPI calculation
with incorrect rounding to get the final image size---which would
result in off-by-one errors in the image sizes. That sounds awfully
similar to what you're seeing here.
> The unique thing about nfs11 is that it's a 64-bit Xeon, running in an
> emulated 32-bit mode. I don't know enough about how this emulation
> works to say whether it'd cause an error like this, though. What do you
It could be that the "corrected" rounding code is still incorrect on
this machine. The thing to do is to take an identical SVG file on this
machine and on another machine that works, and see whether svg2png
generates images of different sizes in the two cases.
I can help look into that more when I get back from vacation after
next week, if you haven't been able to resolve it by then.
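One way to compare the sizes without needing any extra image tools: a
PNG stores its width and height as big-endian 32-bit integers at byte
offsets 16 and 20 of the file. The `png_size` helper below is
hypothetical (not part of cairo or crucible), just a sketch of the
check:

```shell
#!/bin/sh
# Print "WIDTHxHEIGHT" for the PNG named in $1 by reading the 8 bytes
# of the IHDR width/height fields directly.
png_size () {
    od -An -tu1 -j16 -N8 "$1" | awk '
        NR==1 { w = (($1*256+$2)*256+$3)*256+$4
                h = (($5*256+$6)*256+$7)*256+$8
                print w "x" h }'
}

# On each machine you would run something like:
#   svg2png some-test.svg some-test.png && png_size some-test.png
# and compare the two outputs; an off-by-one difference between the
# machines points at the DPI-rounding bug.

# Self-contained demo on a stub header (PNG signature + IHDR, 2x3):
printf '\211PNG\r\n\032\n\000\000\000\015IHDR\000\000\000\002\000\000\000\003' > /tmp/stub.png
png_size /tmp/stub.png   # -> 2x3
```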
> Also, while it's not necessary for this case, we also have in crucible
> the way to do post-processing of test results on the test driver. This
> way, if there is some analysis or reporting tool to run on the results,
> but that may not work identically on any arbitrary system (such as if
> you wished to use a tool that only works on x86), then the
> post-processing can be centralized there. This feature is really
> designed for doing comparisons across test runs (e.g., graphing if
> today's changes make things faster than yesterday's, or diffing results
> from several different platforms.)
Oh, you could use something like that. By and large, the problems
you've been seeing have all come after cairo was successful at creating
a .svg, .pdf, or .ps result; you've just been running into problems
trying to faithfully convert its output into a .png image. So those
problems aren't really helping us test cairo itself, but rather a
delicate collection of other tools.
So, yes, centralizing that would make these problems go away, (or
reduce them to only needing to be solved on one machine), without
losing any functionality in the testing of cairo itself.
But then again, if we can make it easier to get cairo's standard test
suite to run perfectly, (documenting and automatically checking for
the necessary versions of the various tools---bundling dependent
fonts, etc.), then that will help anyone who runs the test suite. And
I would definitely like to improve this situation, because we get a
fair amount of noisy traffic from people running the test suite and
reporting all the false positives they get.
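A minimal sketch of the "automatically checking for the necessary
tools" idea; the tool list here is a placeholder, not cairo's actual
requirements:

```shell
#!/bin/sh
# Before running the test suite, verify that each helper tool exists
# and report what was found, so a missing converter produces one clear
# message instead of a pile of false-positive test failures.
check_tool () {
    tool=$1
    if command -v "$tool" >/dev/null 2>&1; then
        echo "found: $tool"
    else
        echo "MISSING: $tool -- related tests will be skipped" >&2
        return 1
    fi
}

status=0
for tool in sh sed awk; do       # placeholder list of required tools
    check_tool "$tool" || status=1
done
exit $status
```

The same loop could be extended to compare `--version` output against
known-good versions, which would catch the librsvg-style conversion
bugs before they show up as bogus test failures.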