[cairo] ANN: "Caskbench" performance benchmark

Bryce Harrington bryce at osg.samsung.com
Wed Oct 1 10:52:48 PDT 2014


On Sat, Sep 27, 2014 at 07:51:59PM +0100, Chris Wilson wrote:
> On Fri, Sep 26, 2014 at 01:01:25PM -0700, Bryce Harrington wrote:
> > On Fri, Sep 26, 2014 at 07:33:03AM +0100, Chris Wilson wrote:
> > > On Thu, Sep 25, 2014 at 05:54:40PM -0700, Bryce Harrington wrote:
> > > > At Samsung one of the projects I've been working on is a benchmark test
> > > > for comparing the performance of Cairo and Skia with EGL + MSAA, called
> > > > Caskbench.  I presented about this testing at LinuxConf US in Chicago
> > > > last month.
> > > > 
> > > > This is nowhere near as comprehensive or meticulous as Cairo's
> > > > performance test suite, but it runs quickly, and includes Skia ports of
> > > > each of the tests.  The idea here is to enable fair apples-to-apples
> > > > comparisons of the two codebases, or to compare the performance of
> > > > Cairo's image backend with the egl backend.
> 
> I've been playing more with it and as it is, it's not immediately useful
> for tuning. This is simply because everything is randomized (at
> least by default). I would rather improve cairo-perf-micro to add any
> examples you have that are not already there, have finer control over
> what is run (wrt patterns and geometry options, e.g. aliasing and
> tolerance) and incorporate your innovation of having native tests for
> skia, Qt, adg, direct2d, quartz, etc.

Sure, for the record I'd be fine with that.  In fact you're not the
first to suggest it... rasterman suggested this as well a few months
back when it was first open sourced.  I don't have any big plans for
this test suite at the moment other than cleaning up a few lingering
warts, so if it can help revitalize cairo-perf-micro that'd be great.
There's obvious overlap between the two suites' test cases.

The main problem I've had with 'make perf' is that out of the box it
just takes too long to run.  With caskbench one of my design intents was
to make it run and complete quickly by default, relying on the
randomness to vary its coverage from run to run (although since we fix
the seed now, we don't see that benefit).  Maybe there are better ways of
achieving that (e.g. the -f arg?), but I feel that if 'make perf' by
default would run to completion faster (and more reliably... I've had
scattered issues with lockups), then maybe more people would run it, and
possibly we'd be better able to catch performance issues.

I don't know if you noticed, but there's also a "save the rendered output
to png files" option flag.  I've found this handy for checking whether a
test is working properly, and also useful for pasting into reports to
have something visual to show.

Bryce