[cairo] [PATCH cairo-traces] Move some macro-benchmark documentation here from cairo/perf/README

Bryce W. Harrington b.harrington at samsung.com
Mon Jul 8 19:06:48 PDT 2013


cairo-traces evidently used to be included in cairo proper, and the
documentation referenced it as if it still were, which is a *bit*
confusing.  Instead, extract the macro-benchmark-specific directions
and move them here.

Signed-off-by: Bryce Harrington <b.harrington at samsung.com>
---
 README |  109 +++++++++++++++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 95 insertions(+), 14 deletions(-)

diff --git a/README b/README
index 8d55df0..592ec1f 100644
--- a/README
+++ b/README
@@ -1,19 +1,100 @@
-This directory contains a collection of traces that represent applications
-in the wild. In particular, they represent behaviour that someone thought
-worthy of further study and analysis!
+This repository contains a collection of traces that represent
+applications in the wild.  These aim to give an overall feel for
+whether cairo is getting faster for everyday use.  In particular, they
+represent behaviour that someone thought worthy of further study and
+analysis!
+
+Building
+--------
+Traces are stored in the archive as compressed files.  The make command
+uncompresses them and optimizes the traces for the system cairo:
+
+  git clone git://anongit.freedesktop.org/cairo-traces
+  cd cairo-traces && make && cd ..
+
+Alternately, cairo-perf-trace also looks for cairo traces at
+/usr/src/cairo-traces and /usr/share/cairo-traces.  Or, you can point
+the environment variable CAIRO_TRACE_DIR to any location you prefer.
+
+To capture and replay traces you will need the cairo-trace and
+cairo-perf-trace tools from cairo.  If you are testing against the
+system installed cairo library (the default behavior), you'll need the
+system cairo perf tools as well; these may be packaged separately from
+cairo itself, depending on your distro (e.g. look for cairo-tools on
+RedHat-alikes or cairo-perf-utils on Debian-alikes).
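+
+For example, to install the packaged perf tools (the package names
+here are illustrative and vary by distribution):
+
+  yum install cairo-tools            # RedHat-alikes
+  apt-get install cairo-perf-utils   # Debian-alikes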
+
+On the other hand, if you want to test against a local cairo build
+rather than the system cairo, just unpack the traces you need and run
+them with cairo-perf-trace directly.  For example:
+
+  lzma -cd cairo-traces/benchmark/ocitysmap.lzma > ./ocitysmap.cs
+  cairo/perf/cairo-perf-trace ./ocitysmap.cs
+
+Capturing Traces
+----------------
+Using cairo-trace you can record the exact sequence of graphic
+operations made by an application and replay them later. These traces
+can then be used by cairo-perf-trace to benchmark the various backends
+and patches.
+
+To record a trace:
+  cairo-trace --profile <application> [args...]
 
-To capture a trace:
-$ cairo-trace --profile application [args...]
This will produce an application.$pid.lzma file.  If you think it is
-interesting add it to this repository, giving it a short but descriptive
-name, and add a note here explaining what the trace captures.
-
-To replay traces:
-Use cairo/perf/cairo-perf-trace. By default cairo-perf-trace will loop
-over all traces within cairo/perf/cairo-traces for all test targets. To run
-against a subset specify a series of traces on the command line and use the
-CAIRO_TEST_TARGET environment variable to filter the target. For example,
-CAIRO_TEST_TARGET=xlib ./cairo-perf-trace firefox.
+interesting, add it to this repository, give it a short but descriptive
+name, and append a note to this file explaining what the trace captures.
+Or place it into cairo/perf/cairo-traces (or set CAIRO_TRACE_DIR to
+point at your trace directory), and the trace will be included in your
+cairo performance testing when running make perf.
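+
+For example (the trace name and paths here are illustrative, assuming
+a cairo source checkout alongside your trace):
+
+  cp myapp.1234.trace cairo/perf/cairo-traces/
+  cd cairo && make perf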
+
+Another way to collect traces is:
+  cairo-trace --no-mark-dirty --no-callers <application> [args...]
+
+--no-mark-dirty is useful for applications that are paranoid about
+surfaces being modified by external plugins outside of their control;
+the prime example here is firefox.
+
+--no-callers disables the symbolic caller lookup, which speeds up
+tracing (dramatically so for large C++ programs) and similarly speeds
+up replay, as the files are much smaller.
+
+The output file will be called $APPLICATION.$PID.trace; the actual path
+written to will be displayed on the terminal.
+
+
+Replaying Traces
+----------------
+The macro-benchmarks are run by a single program called
+cairo-perf-trace:
+
+  cairo-perf-trace <trace-name>
+
+cairo-perf-trace produces the same output and takes the
+same arguments as cairo-perf-micro.
+
+By default cairo-perf-trace will loop over all traces within
+cairo-traces for all test targets. To run against a subset specify a
+series of traces on the command line.  Some examples of running it:
+
+  # Report on all tests with default number of iterations:
+  cairo-perf-trace
+
+  # Report on 100 iterations of all firefox tests:
+  cairo-perf-trace -i 100 firefox
+
+  # Generate raw results for 10 iterations into cairo.perf
+  cairo-perf-trace -r -i 10 > cairo.perf
+  # Append 10 more iterations of the poppler tests
+  cairo-perf-trace -r -i 10 poppler >> cairo.perf
+
+  # Run all the firefox tests, but only against the xlib backend
+  CAIRO_TEST_TARGET=xlib cairo-perf-trace firefox
+
+Analyzing Traces
+----------------
+Several tools are available for post-processing the performance test
+output, such as cairo-perf-diff.  See cairo/perf/README for more
+details.
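+
+For example, a sketch of comparing your checkout against a previous
+revision with cairo-perf-diff (the invocation here is an assumption;
+see cairo/perf/README for the exact options):
+
+  ./cairo-perf-diff HEAD~1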
+
 
 Guidelines
 ----------
-- 
1.7.9.5


--
Bryce Harrington
Senior Open Source Developer  -  b.harrington at samsung.com
Open Source Group             -  Samsung Research America


