[cairo-commit] 3 commits - INSTALL.meson test/meson.build test/README.meson

GitLab Mirror gitlab-mirror at kemper.freedesktop.org
Mon Jan 2 10:54:19 UTC 2023


 INSTALL.meson     |   12 -
 test/README.meson |  329 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 test/meson.build  |    4 
 3 files changed, 332 insertions(+), 13 deletions(-)

New commits:
commit 7f83c30943951d87b24913452f94fb529ce7504f
Merge: 8dbc5893f 39081b0c4
Author: Adrian Johnson <ajohnson at redneon.com>
Date:   Mon Jan 2 10:54:17 2023 +0000

    Merge branch 'test-readme' into 'master'
    
    Add README.meson file for meson test
    
    See merge request cairo/cairo!385

commit 39081b0c409284b8d9365597f3774bc20d3fe3cd
Author: Adrian Johnson <ajohnson at redneon.com>
Date:   Sun Jan 1 16:27:53 2023 +1030

    Add README.meson file for meson test

diff --git a/INSTALL.meson b/INSTALL.meson
index 4040b5479..252dcf737 100644
--- a/INSTALL.meson
+++ b/INSTALL.meson
@@ -44,14 +44,4 @@ The "-D" options can also be used with "meson setup"
 
 Tests
 -----
-Refer to test/README. The main difference with running a test from a
-meson build is that the cairo-test-suite executable needs to be run in
-$builddir/test to find the image conversion executables but also needs
-to be told where the reference images are.
-
-eg
-  cd $builddir/test
-  srcdir="../../test" ./cairo-test-suite
-
-Where srcdir is the path to the "test" directory in the cairo source. The
-above example assumes $builddir is a directory in the cairo source tree.
+Refer to test/README.meson
diff --git a/test/README.meson b/test/README.meson
new file mode 100644
index 000000000..26b483973
--- /dev/null
+++ b/test/README.meson
@@ -0,0 +1,329 @@
+How to use cairo's test suite
+=============================
+Using this test suite should be as simple as running:
+
+  meson test -C $builddir
+
+assuming that the cairo distribution in the directory above has been
+configured and built in $builddir. The test suite here will use the
+locally compiled library rather than the installed version.
+
+The results of the test suite run are summarized in an index.html file
+written to $builddir, which, when viewed in a web browser, makes it
+easy to see any failed renderings alongside the corresponding
+reference image (and a diff image as well).
+
+As some browsers do not permit Javascript to read from the local
+filesystem, the view-test-results.py script in $builddir can be used
+to view the results. It starts an HTTP server serving the current
+directory and then displays the test results in a browser.
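+
+eg (a minimal sketch, assuming the script needs no arguments and is
+run from the directory containing the results):
+
+  cd $builddir
+  python3 view-test-results.py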
+
+The test suite needs to be run before any code is committed and before
+any release. See below for hints and rules governing the use of the suite.
+
+Running specific tests
+----------------------
+During development it is desirable to run only a single test or a
+group of related tests. The test suite is built as a single binary,
+"cairo-test-suite", which allows you to run individual tests or
+categories of tests.
+
+The cairo-test-suite executable needs to be run in $builddir/test in
+order to find the image conversion executables. Simply running the
+executable will run all tests. eg
+
+  cd $builddir/test
+  ./cairo-test-suite
+
+Individual tests can be run by specifying the test names as arguments
+to cairo-test-suite. For example, to run specific tests:
+
+  ./cairo-test-suite record-neg-extents-unbounded record-neg-extents-bounded
+
+Or if you want to run all paint.* related tests you can use:
+
+  ./cairo-test-suite paint
+
+Or if you want to check the current status of known failures:
+
+  ./cairo-test-suite XFAIL
+
+Or to run a subset of tests, use the -k option to run only the tests
+that include the given keyword:
+
+  ./cairo-test-suite -k downscale
+
+The -l option lists all available tests. For convenience, the file
+"completion.bash" in $builddir/test can be sourced to provide bash
+completion of the test names in cairo-test-suite.
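+
+eg (assuming a bash shell):
+
+  source $builddir/test/completion.bash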
+
+By default, each test that is run will be tested on all backends that
+have been built. The CAIRO_TEST_TARGET environment variable can be
+used to select specific backends. For example:
+
+    CAIRO_TEST_TARGET=pdf ./cairo-test-suite text-rotate
+
+Each test is run twice on each backend, once for each format. The
+CAIRO_TEST_TARGET_FORMAT environment variable selects a single
+format. The formats are "rgba" and "rgb". For example, to run the
+text-rotate test only on the image backend and only for the "rgba"
+format:
+
+  CAIRO_TEST_TARGET=image CAIRO_TEST_TARGET_FORMAT=rgba ./cairo-test-suite text-rotate
+
+Running tests under modified environments or tools
+--------------------------------------------------
+To run tests under gdb, one can simply run cairo-test-suite in gdb.
+Usually a specific test will be desired, eg
+
+  CAIRO_TEST_TARGET=image CAIRO_TEST_TARGET_FORMAT=rgba gdb --args ./cairo-test-suite text-rotate
+
+The cairo test suite runs each test in a child process so that crashes
+do not stop the test run. When running the test suite under other
+tools, the "-f" option disables forking, eg to run a test under
+valgrind:
+
+  CAIRO_TEST_TARGET=image CAIRO_TEST_TARGET_FORMAT=rgba valgrind ./cairo-test-suite -f text-rotate
+
+The cairo test suite detects if it is running under gdb and disables
+forking automatically, which is why the gdb example above did not use
+"-f".
+
+Complete list of test options
+-----------------------------
+cairo-test-suite options:
+
+  ./cairo-test-suite [-afkxsl] [test-names|keywords ...]
+
+  -a    all; run the full set of tests. By default the test suite
+          skips similar surface and device offset testing.
+  -f    foreground; do not fork
+  -k    match tests by keyword
+  -l    list only; just list selected test case names without executing
+  -s    include slow, long running tests
+  -x    exit on first failure
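+
+eg, to run the full set of tests, including the slow ones, for all
+tests whose names start with "text":
+
+  ./cairo-test-suite -a -s text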
+
+Environment Variables:
+
+CAIRO_TEST_TARGET
+  Run the tests only on the listed targets. eg
+    CAIRO_TEST_TARGET="image"
+    CAIRO_TEST_TARGET="pdf,ps3,ps2"
+
+CAIRO_TEST_TARGET_FORMAT
+  Run the tests only with the listed formats. eg
+    CAIRO_TEST_TARGET_FORMAT="rgba"
+    CAIRO_TEST_TARGET_FORMAT="rgba,rgb"
+
+CAIRO_TEST_MODE
+  May contain a list of one or more of the following options
+    "full"            - equivalent to the "-a" option
+    "similar"
+    "offset"
+    "scale"
+    "foreground"      - equivalent to the "-f" option
+    "exit-on-failure" - equivalent to the "-x" option
+  eg
+    CAIRO_TEST_MODE="full,foreground"
+
+CAIRO_TESTS
+  A list of tests to run. This is equivalent to listing the tests as
+  arguments to cairo-test-suite. eg
+    CAIRO_TESTS="record-neg-extents-unbounded record-neg-extents-bounded"
+  The tests may be separated by any of " \t,:;".
+
+srcdir
+  The test suite needs to find the "test" directory in the source
+  tree. The srcdir environment variable can be used to specify the
+  location of this directory. If this environment variable is not set,
+  the binary looks for the directory "srcdir" in the current
+  directory. The meson build system symlinks "srcdir" in the
+  $builddir/test directory to the "test" directory in the source
+  tree. If the symlink is not found, the binary defaults to the
+  current directory.
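+  eg, if $builddir is a directory inside the cairo source tree:
+    cd $builddir/test
+    srcdir="../../test" ./cairo-test-suite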
+
+CAIRO_REF_DIR
+  The directory containing the reference images. By default the test
+  suite looks for "reference" in $srcdir. For convenience, you can set
+  CAIRO_REF_DIR to point at a previous test directory, relative to the
+  current test directory, and any previous output will be used by
+  preference as reference images.
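+  eg
+    CAIRO_REF_DIR=/path/to/previous/output ./cairo-test-suite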
+
+CAIRO_TEST_TIMEOUT
+  The maximum time, in seconds, an individual test may run before it is
+  stopped. The default is 60 seconds.
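+  eg, to allow each test up to 300 seconds:
+    CAIRO_TEST_TIMEOUT=300 ./cairo-test-suite -s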
+
+Getting the elusive zero failures
+---------------------------------
+It's generally been very difficult to achieve a test run with zero
+failures. The difficulties stem from the various versions of the many
+libraries that the test suite depends on, (it depends on a lot more
+than cairo itself), as well as fonts and other system-specific
+settings. If your system differs significantly from the system on
+which the reference images were generated, then you will likely see
+the test suite reporting "failures", (even if cairo is working just
+fine).
+
+Since the test suite must pass the Continuous Integration test in the
+Gitlab instance hosting the cairo repository, the easiest way to get a
+new or updated test to pass is to extract the test output from the
+cairo gitlab CI to use as reference images.
+
+What if I can't make my system match?
+-------------------------------------
+For one reason or another, you may be unable to get a clean run of the
+test suite even if cairo is working properly, (for example, you might
+be on a system without freetype). In this case, it's still useful to
+be able to determine if code changes you make to cairo result in any
+regressions to the test suite. But it's hard to notice regressions if
+there are many failures both before and after your changes.
+
+For this scenario, you can capture the output of a run of the test
+suite before your changes, and then use the CAIRO_REF_DIR environment
+variable to use that output as the reference images for a run after
+your changes. The process looks like this:
+
+  # Before code change there may be failures we don't care about
+  cd $builddir/test
+  ./cairo-test-suite
+
+  # Let's save those output images
+  mkdir /some/directory/
+  cp -r output /some/directory/
+
+  # hack, hack, hack
+
+  # Now to see if nothing changed:
+  CAIRO_REF_DIR=/some/directory/ ./cairo-test-suite
+
+Best practices for cairo developers
+===================================
+If we all follow the guidelines below, then both the test suite and
+cairo itself will stay much healthier, and we'll all have a lot more
+fun hacking on cairo.
+
+Before creating a Merge Request
+-------------------------------
+After pushing your branch to your cairo fork on gitlab.freedesktop.org,
+check that the CI passes. An MR cannot be merged until the CI passes.
+
+It is likely that you will need to download test output images from
+the Gitlab CI to use as reference images in order to get CI to pass.
+
+The reference image filenames have the form:
+
+<test name>.*.ref.png for correct images that pass.
+
+<test name>.*.xfail.png for incorrect images due to bugs.
+
+The <test name>.ref.png is the most general image that all backends
+and formats for the <test name> test will be required to match.
+
+When comparing test output to the reference images, the test suite
+performs a perceptual diff which permits some minor pixel differences
+that are not visually discernible. Some targets may have differences
+that exceed the threshold of the perceptual diff but still appear the
+same as the reference image when eyeballed. In these cases, target
+specific reference images can be created that override the general
+reference image for the specific target.
+
+Reference images for specific backends can be created using filenames of the form:
+  <test name>.<target>.ref.png
+
+eg clip.pdf.ref.png
+
+The target name "ps" can be used for both "ps2" and "ps3" targets.
+
+Reference images for specific formats can be created using filenames of the form:
+  <test name>.<target>.<format>.ref.png
+
+eg
+  clip.pdf.argb32.ref.png
+  clip.pdf.rgb24.ref.png
+
+The recommended practice for creating reference images for a new test is:
+
+1. Create a <test name>.ref.png reference image from the *.image.argb32.out.png output.
+
+2. Push to gitlab and check the CI result.
+
+3. If the image target fails, copy the *.image.argb32.out.png output
+from CI and use it as the reference image.
+
+4. Once the image target passes in CI, check that the other targets
+pass. If any fail, create *.<target>.ref.png reference images if the
+output appears correct.
+
+5. If a target output is incorrect, try to determine if the cairo
+target output is correct. eg view pdf files in Adobe Reader. If the
+cairo target output is correct, it can be assumed the image conversion
+from target to png is buggy. File a bug against the conversion tool
+and create a *.<target>.xfail.png reference image.
+
+In all cases, before creating a reference image from test output, the
+image should be viewed to ensure that it is correct.
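+
+As a sketch of step 1 (the test name "new-test" is illustrative, and
+the paths assume the "srcdir" symlink described above):
+
+  cd $builddir/test
+  ./cairo-test-suite new-test
+  # view the output image first to check that it is correct
+  cp output/new-test.image.argb32.out.png srcdir/reference/new-test.ref.png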
+
+When a new bug is found
+-----------------------
+A new test case should be added by imitating the style of an existing
+test. This means adding the following files:
+
+	new-bug.c
+	reference/new-bug.ref.png
+	reference/new-bug.xfail.png
+
+Where new-bug.c is a minimal program to demonstrate the bug, following
+the style of existing tests. The new-bug.ref.png image should contain
+the desired result of new-bug.c if the bug were fixed, while
+new-bug.xfail.png contains the current results of the test.
+
+"meson.build" should be edited by adding new-bug.c to test_sources.
+And last but not least, don't forget to "git add" the new files.
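+
+eg, from the "test" directory in the source tree:
+
+  git add new-bug.c reference/new-bug.ref.png reference/new-bug.xfail.png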
+
+When a new feature is added
+---------------------------
+It's important for the regression suite to keep pace with development
+of the library. So a new test should be added for each new feature.
+The work involved is similar to the work described above for new bugs.
+The only distinction is that the test is expected to pass, so it
+should not need a new-bug.xfail.png file.
+
+While working on a test
+-----------------------
+Before a bugfix or feature is ready, it may be useful to compare
+output from different builds. For convenience, you can set
+CAIRO_REF_DIR to point at a previous test directory, relative
+to the current test directory, and any previous output will be
+used by preference as reference images.
+
+When a bug is fixed
+-------------------
+The fix should be verified by running the test suite, which should
+result in an "unexpected pass" for the test of interest. Rejoice as
+appropriate, then remove the relevant xfail.png file from git.
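+
+eg, for the new-bug example above:
+
+  git rm reference/new-bug.xfail.png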
+
+Before releasing
+----------------
+All tests should return a result of PASS for all supported backends
+(those enabled by default), meaning all known bugs are fixed,
+resulting in the happy message:
+
+	All XX tests passed
+
+Some notes on limitations in poppler
+====================================
+One of the difficulties of our current test infrastructure is that we
+rely on external tools to convert cairo's vector output (PDF,
+PostScript, and SVG), into an image that can be used for the image
+comparison. This means that any bugs in that conversion tool will
+result in false negatives in the test suite.
+
+Here's the rule: If a cairo-pdf test reports a failure, but viewing
+the resulting PDF file with acroread suggests that the PDF itself is
+correct, then there's likely a bug in poppler. In this case, we can
+simply report the poppler bug. Once we've done this, we can capture
+poppler's buggy output as a pdf-specific reference image (as
+reference/*.xfail.png) so that the test suite will regard the test as
+passing, (and we'll ensure there is no regression).
+
+Once the poppler bug gets fixed, the test suite will start reporting a
+false negative again, and this will be easy to fix by simply removing
+the pdf-specific reference image.
commit c51b5e80211deb2d967edda1f4bf9ec69f354fe6
Author: Adrian Johnson <ajohnson at redneon.com>
Date:   Sun Jan 1 16:27:02 2023 +1030

    Copy completion.bash to $builddir/test

diff --git a/test/meson.build b/test/meson.build
index 74e059b39..28047cb04 100644
--- a/test/meson.build
+++ b/test/meson.build
@@ -658,8 +658,8 @@ exe = executable('cairo-test-suite', [cairo_test_suite_sources, test_sources, ca
     libpdiff_dep],
 )
 
-html_files = ['index.html', 'testtable.js', 'view-test-results.py']
-foreach file : html_files
+build_dir_files = ['completion.bash', 'index.html', 'testtable.js', 'view-test-results.py']
+foreach file : build_dir_files
   configure_file(input: file, output : file, copy: true)
 endforeach
 

