[cairo-bugs] [Bug 103037] Segmentation fault in _cairo_traps_compositor_glyphs

bugzilla-daemon at freedesktop.org bugzilla-daemon at freedesktop.org
Mon Oct 9 21:49:18 UTC 2017


https://bugs.freedesktop.org/show_bug.cgi?id=103037

--- Comment #17 from Mikhail Fludkov <fludkov.me at gmail.com> ---
(In reply to Bill Spitzak from comment #15)
> I think it would be better for the error message to read "use of _cairo_atomic_init_once_leave without _cairo_atomic_init_once_enter".
I like it. Sounds better.

> 1. Everybody is wrong and the above code can fail on Intel-style processors.
It can fail if the compiler decides to optimize access to 'y' and reads it
only once before entering the loop. Let's assume the compiler doesn't do
that, and that 'atomic_set' in the example code is the same as
'atomic_store' from C11. Then the code is not portable, but it will work
fine on Intel processors, because a regular 'mov' already gives us
release-acquire semantics.
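
To make the first failure mode concrete: with a plain (non-atomic) flag,
nothing forbids the compiler from hoisting the load out of the loop. A
minimal sketch, assuming a writer thread that stores a non-zero 'x' and
then sets the flag (the names here are mine, not from the patch):

#include <assert.h>
#include <stdatomic.h>

/* Hypothetical names, for illustration only */
int x;                 /* payload written by another thread     */
int y_plain;           /* plain flag: the wait below is broken  */
_Atomic int y_atomic;  /* atomic flag: loads cannot be hoisted  */

void
broken_wait (void)
{
    /* The compiler may prove that nothing in the loop body writes
     * y_plain, read it once, and effectively compile this as
     * 'if (y_plain != 1) for (;;);' -- spinning forever. */
    while (y_plain != 1) {}
}

void
correct_wait (void)
{
    /* atomic_load_explicit forces a fresh load on every iteration,
     * and memory_order_acquire orders the read of x below after the
     * load of the flag. */
    while (atomic_load_explicit (&y_atomic, memory_order_acquire) != 1) {}
    assert (x != 0); /* safe: the writer stored x before the flag */
}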

> 2. The way you get the above in a form that can be ported to other platforms
> is to do "<code>".
  Thread 1:
  /* We want to be 100% sure that all writes to 'x' are visible in all
   * other threads after we set 'y' to 1. The only semantics that gives us
   * this guarantee is memory_order_seq_cst, that's why we use atomic_store
   * and not anything weaker. */
  x = foo;
  atomic_store (&y, 1);
  /* never set y again */

  Thread 2:
  /* We only care about the value of 'y' here, because the code above
   * guarantees us that all writes to 'x' will be visible to all other
   * threads as soon as y == 1. Therefore reading with memory_order_acquire
   * is enough. */
  while (atomic_load_explicit (&y, memory_order_acquire) != 1) {
    /* code that does other atomic operations */
  }
  assert (x == foo);
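
For the record, here is a self-contained version of that sketch which
actually compiles and runs (the file layout, FOO and the thread functions
are my additions); under C11 the plain atomic_store defaults to
memory_order_seq_cst, matching the comment above:

/* Compile with: cc -std=c11 -pthread demo.c */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

#define FOO 42

static int x;           /* payload protected by the flag */
static _Atomic int y;   /* flag, zero-initialized        */

static void *
writer (void *arg)
{
    (void) arg;
    x = FOO;
    /* seq_cst store (the default), as in the sketch above */
    atomic_store (&y, 1);
    return NULL;
}

static void *
reader (void *arg)
{
    (void) arg;
    while (atomic_load_explicit (&y, memory_order_acquire) != 1) {}
    /* The acquire load that observed y == 1 synchronizes with the
     * store in writer(), so the write to x is visible here. */
    assert (x == FOO);
    return NULL;
}

int
main (void)
{
    pthread_t t1, t2;
    pthread_create (&t1, NULL, writer, NULL);
    pthread_create (&t2, NULL, reader, NULL);
    pthread_join (t1, NULL);
    pthread_join (t2, NULL);
    return 0;
}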


Having said that, we can rewrite
_cairo_atomic_init_once_enter/_cairo_atomic_init_once_leave in C11:

static cairo_always_inline cairo_bool_t
_cairo_atomic_init_once_enter (cairo_atomic_once_t *once)
{
    cairo_atomic_once_t expected = CAIRO_ATOMIC_ONCE_UNINITIALIZED;

    /* The thread writing to 'once' (_cairo_atomic_init_once_leave) should
     * guarantee visibility of all the writes that happened before it,
     * that's why memory_order_acquire */
    if (likely (atomic_load_explicit (once, memory_order_acquire) ==
                CAIRO_ATOMIC_ONCE_INITIALIZED))
        return 0;

    /* C11 compare-exchange takes a pointer to the expected value */
    if (atomic_compare_exchange_strong_explicit (once,
                  &expected,
                  CAIRO_ATOMIC_ONCE_INITIALIZING,
                  memory_order_acq_rel,
                  memory_order_acquire))
        return 1;

    /* Another thread won the race; spin until it calls _leave */
    while (atomic_load_explicit (once, memory_order_acquire) !=
           CAIRO_ATOMIC_ONCE_INITIALIZED) {}
    return 0;
}

static cairo_always_inline void
_cairo_atomic_init_once_leave (cairo_atomic_once_t *once)
{
    cairo_atomic_once_t expected = CAIRO_ATOMIC_ONCE_INITIALIZING;

    /* All writes before we enter here must be visible to all other threads,
     * that's why memory_order_seq_cst and nothing weaker. Note the '!':
     * the assert must fire when the compare-exchange fails, i.e. when
     * 'once' was not in the INITIALIZING state. */
    if (unlikely (!atomic_compare_exchange_strong_explicit (once,
                  &expected,
                  CAIRO_ATOMIC_ONCE_INITIALIZED,
                  memory_order_seq_cst,
                  memory_order_acquire)))
        assert (0 && "incorrect use of _cairo_atomic_init_once API "
                     "(once != CAIRO_ATOMIC_ONCE_INITIALIZING)");
}
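
And for context, this is roughly how a call site would use the pair; the
once variable and the setup function here are hypothetical, not from cairo:

static cairo_atomic_once_t example_once = CAIRO_ATOMIC_ONCE_UNINITIALIZED;

static void
example_ensure_initialized (void)
{
    if (_cairo_atomic_init_once_enter (&example_once)) {
        /* Exactly one thread gets here; every write it performs is
         * published to the threads spinning in _enter by the CAS
         * inside _cairo_atomic_init_once_leave. */
        do_one_time_setup (); /* hypothetical */
        _cairo_atomic_init_once_leave (&example_once);
    }
    /* Every thread that returns 0 from _enter observes the
     * fully-initialized state. */
}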
