[cairo] Re: Re: Re: Munging header files for export
(and other) attributes
mike at plan99.net
Wed Sep 7 17:15:23 PDT 2005
On Wed, 07 Sep 2005 15:29:02 -0700, Carl Worth wrote:
> At cairographics.org we're currently hosting the entire set of cairo
> releases back to the beginning of time. Yes, that's only one tar file so
> far, but I certainly expect to have them around forever.
True, but it means you have to actually compile the library. Which is
probably OK for Cairo, but as a general principle it's a pain, as maybe the
old version of the library needs old dependencies, or maybe a modern GCC
refuses to compile it.
Using old headers also makes it impossible to use relaytool (weak linking).
> And in your solution the people compiling the software have to know they
> need to set the CAIRO_TARGET flag. So I don't really see a substantive
> difference here.
Not really, the idea here is that the documentation says "if you set
CAIRO_TARGET, your program may automatically benefit from optimisations
and new features". Or perhaps CAIRO_TARGET defaults to the latest version,
and portability systems like apbuild (a GCC wrapper that improves binary
compatibility) can automatically set it to 1.
>> This problem is unique to Linux/open source and does *not* affect
>> Win32/MacOSX platform APIs so invariably developers don't expect it
>> then get taken by surprise when their users start reporting strange
>> errors about symbols they never heard of
> I don't understand the distinction being made here. The issue at hand
> should affect API compatibility for any C library, regardless of platform,
Right. But the Win32/MacOS platform APIs are very large C APIs and they
don't have this problem. If Microsoft required you to compile all your
apps on Windows 98 in order to distribute them to Windows 98 users, they
would have very many unhappy developers. Developers like being able to
create a binary on their own computer and then send it to other people.
It's convenient.
> I'm not personally familiar with Apbuild.
It's used something like this:

    CC=apgcc CXX=apg++ ./configure; make; make install

It does many things to help ensure the resulting binary works on many
different distributions.
> This still strikes me as the easiest solution. If I were to distribute an
> application built against cairo 1.2 (say) I would advertise it as
> requiring cairo 1.2 or newer. And if I wanted to allow the application to
> work with cairo 1.0 I would instead build against cairo 1.0.
Well, like I said developers tend not to realise this is a problem. They
say "Hmm, I avoided all functions marked as 'Since 1.2' therefore my
program needs cairo 1.0 so this is what I shall say it needs". The idea
that even if you avoid new APIs your binary can end up needing them is
unfamiliar to most developers in my experience.
> Even if we had CAIRO_TARGET to protect new-symbol-introducing changes,
> (which as I said I think we can avoid without any problem), what guarantee
> do I have that my application doesn't directly use new symbols introduced
> between cairo 1.0 and cairo 1.2? The only way I know to get that guarantee
> is to build against cairo 1.0.
No guarantees. That's deliberate though - you may be weak linking against
them (effectively half using them).
> That much I'm fine with. I guess what this provides is for someone to
> develop an application against 1.x, but then if "accidentally" compiled
> against 1.y, it will still run with 1.x.
Yes. It's not accidental though; as I said, developers want to build
applications against modern sets of headers (for instance, on their
own systems) whilst still distributing the resulting binaries. And weak
linking as a technique is only possible when you can use the latest
headers.
> That does seem like a useful guarantee, and it's something we should
> provide in cairo. No header file munging needed, we'll just consider
> those kinds of changes ABI breaks, and we won't make them.
OK, that's fine as well. Thanks.