[cairo] Spot colors (and CMYK)
ku.b at gmx.de
Wed Feb 17 04:57:13 PST 2010
On 16.02.10, 12:08 -0800, Bill Spitzak wrote:
> Kai-Uwe Behrmann wrote:
>> Broken software needs to be fixed. A screenshot needs the monitors ICC
>> profile attached if it has no other means to know about the image data. A
>> screenshot with the monitors ICC profile will show equal on the same
>> monitor and on monitor with larger gamut. Some shifts for very saturated
>> colours can happen on smaller gamut monitors.
> You are living in a fantasy world if you believe screen shots have ICC
> profiles attached. This is not true even on OSX.
Then file a bug report.
> More importantly, "attached profiles" are NOT WANTED, EVER! I work with
A different world.
> professional image processing software, and we long ago learned that attached
> profiles are NEVER wanted. We ignore them in png, we ignore them in exif
I have read such statements some time ago, but wonder where the workflow
is broken to come to that conclusion. Possibly a transition issue from
un-colour-managed to ICC-based workflows?
> files, we ignore them in Red camera data. One of the biggest pains at The
> Foundry is figuring out how to ignore profiles in Apple Quicktime files and
> get the raw data. Please take a look at the Nuke release notes if you don't
Hey hey. Getting raw data and tagging it with ICC profiles are two
different things. Let me explain with an easy example. Indexed images
cannot be handled efficiently in many workflows. Say you have an ICC
profile for the indexed image, add some effect and proofing profiles and
then convert to the output profile. So what is needed to solve that?
Convert the indexed pixels into the space of the correct ICC profile and
continue from there. If the indexed pixels are needed further, keep them
somewhere in the app. JPEG and other YUV encodings can be described as
ICC profiles. It should be no problem to generate a profile which
describes the YUV encoding and attach it in place of the embedded ICC
profile.
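As a minimal sketch of that strategy: expanding indexed pixels through
their palette yields ordinary RGB that can then carry a normal RGB ICC
profile. The palette and pixel values below are purely illustrative.

```python
# Expand indexed pixels through their palette into plain RGB, so the
# result can be tagged, proofed and converted like any other RGB image.
# Palette and pixel data here are made up for illustration.

palette = [
    (255, 0, 0),    # index 0: red
    (0, 255, 0),    # index 1: green
    (0, 0, 255),    # index 2: blue
]

indexed_pixels = [0, 1, 1, 2, 0]

# The conversion is a simple lookup; afterwards the image is ordinary
# RGB and the palette can be kept around in the app if still needed.
rgb_pixels = [palette[i] for i in indexed_pixels]

print(rgb_pixels)
```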
A capable consultant should be able to work out similar strategies for
your problems. If you want, you will surely find someone in your area.
> believe me. This crap is occupying at least one developer 100% of the time
> and many other are working on it. Currently Nuke is using ffmpeg whenever
Workarounds are typically expensive compared to straightforward
solutions. I prefer the latter. That's why I participate in this thread.
> possible, it reports raw YUV values and we do a FIXED and predictable
> conversion to sRGB. THIS IS WHAT USERS WANT!!!!
So you do colour management on your own. :-)
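Such a fixed, predictable conversion is just a hard-coded matrix. A
sketch using the well-known BT.601 full-range coefficients (a real
pipeline would pick the coefficients matching its material; this is not
necessarily what Nuke does):

```python
def yuv_to_rgb(y, u, v):
    """Fixed BT.601 full-range YUV -> RGB, all values in 0..255."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    # Clamp to the 8-bit range; a fixed conversion is predictable,
    # but it clips anything that falls outside the RGB cube.
    clamp = lambda x: max(0, min(255, round(x)))
    return clamp(r), clamp(g), clamp(b)

print(yuv_to_rgb(128, 128, 128))  # neutral grey passes through unchanged
```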
> In real software, "correctly displaying" is a very very very rare desire.
Here it is the opposite.
> "lossless in/out" is ENORMOUSLY more important. And I am sorry to report, but
Agreed. My editors are all, as far as I could manage, designed to work
on native data and convert on the fly to output devices with proofing
applied and so on. Data loss is thus greatly reduced. Preserving
metadata is important for that to work.
> users think that "strip the profile" and "display with a program that ignores
> the profile" should be lossless operations. No arguing from color theorists
This is displaying. The data processing should be independent of that.
Look at how major open source projects separate internal data storage
from on-the-fly displaying. Krita, GEGL for GIMP, Inkscape and so on
come to mind. They do not open a file, convert it to the display space,
and then blend and work in that. That latter, simpler concept was never
really flexible.
> is going to change this and we have to live with that fact and make software
> that is actually user friendly.
Maybe a properly designed CM concept can be a value add for your
software and its further development?
>> Of course. Your "solution" does not work on multi monitor situations.
> I don't know what you are talking about. My solution is the ONLY way I can
> see to work on multiple monitors. The input to Cairo has to be in a defined
> fixed color space, and then it can be converted as needed correctly to each
> monitor. I do not think any application should have to change the data it
> sends depending on the monitor.
A screenshot is, as I understand it, in the corresponding monitor's
device space. So it should be tagged with that monitor's ICC profile.
Where does the need to convert to sRGB come from? From your application?
A pure viewer will load the monitor-RGB screenshot and display it on the
same device in the same monitor RGB without a real conversion. It's a
pass-through. Hence everything should look identical. On a different
monitor the same screenshot looks like the original if CM is applied.
In case your user's desktop is not colour managed, as most desktops are
not today, each application will look different. This is a problem of
incomplete CM. Oyranos' Compiz plugin, as well as the more popular OS X
Snow Leopard, shows how that can be solved. Both provide a completely
colour-managed desktop. All desktops, including applications, will look
almost the same on different fully colour-managed hardware. The OS, or
in the Compiz case the X11 window manager, does this.
>> As screenshots almost always are intented for remote observers, it is clear
>> that colour management is the solution for that main use case.
> That is nonsense. Screenshots are used locally. I want to look at a pixel
Personally I make screenshots only for others.
> with a color picker and know exactly what values software needs to
> communicate to Cairo or whatever to get the same color. I want to paint on
Then the picker would need to convert the values from monitor RGB to
sRGB, if that's what your application expects.
> that screen shot and send it back to the first user and they see the same
> colors they got initially. "color management" makes this impossible.
Most of your problems appear to me to come from a halfway transition to
colour management. I can understand much of your frustration, as OSes
are sometimes out of scope to change, and technology is not available or
mature enough. And on top of that, the target is always moving.
However, once CM is implemented, user installations are updated to use
full CM for the desktop, and applications know how to communicate colour
with ICC profiles, that trouble and the associated workarounds can go
away.
>> The question must be: can I produce a screenshot which looks identical on
>> my laptop, my desktop computer monitor and on the monitor of others. Non
>> broken and correctly colour managed software together with correctly
>> configured hardware can this.
> Absolutely you can do this. Cairo will return a "screen shot" as sRGB values.
How can cairo return a screenshot? I use xwd on Xorg, or other tools.
Cairo is not able to read out a framebuffer itself. And it is certainly
not sRGB that comes out of xwd.
> You then send it to another screen as sRGB values. And it will look
> "identical". In a perfect world this means that advanced color correction
> will cause the display to emit exactly the photons defined by sRGB and all
> your devices will be emitting these exact same photons.
If it is really sRGB, that might work. If the colours come from a wide
gamut display and get converted to sRGB and then back to wide gamut,
they are at risk of being clamped.
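This risk can be shown numerically: a saturated colour outside the sRGB
gamut maps to a negative linear sRGB component, and clamping it changes
the colour irreversibly. A sketch using the standard linear sRGB/XYZ
(D65) matrices; the XYZ value is just an illustrative saturated green,
as a wide gamut display could produce it:

```python
# Standard linear sRGB <-> XYZ (D65) matrices.
XYZ_TO_SRGB = [
    ( 3.2406, -1.5372, -0.4986),
    (-0.9689,  1.8758,  0.0415),
    ( 0.0557, -0.2040,  1.0570),
]
SRGB_TO_XYZ = [
    (0.4124, 0.3576, 0.1805),
    (0.2126, 0.7152, 0.0722),
    (0.0193, 0.1192, 0.9505),
]

def apply(m, v):
    return tuple(sum(row[i] * v[i] for i in range(3)) for row in m)

# A saturated green that lies outside the sRGB gamut.
xyz = (0.20, 0.60, 0.10)

rgb = apply(XYZ_TO_SRGB, xyz)
print(rgb)        # red component is negative: out of sRGB gamut

clamped = tuple(max(0.0, min(1.0, c)) for c in rgb)
xyz_back = apply(SRGB_TO_XYZ, clamped)
print(xyz_back)   # no longer the original XYZ: clamping lost the colour
```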
> In the real world you have to remember that "identical" means that if there
> are 256 levels in the image, I better get 256 different output levels. If I
> pick a color off the image and put it into my painting program and then
> display that painting I must get the same color. If there is a beautifully
Taking colour directly from the screen is a really painful method.
I already wanted to suggest a colour picker proposal: put the actual
colour picker value in an X atom, including an sRGB reference and a
native reference with ICC profile. That way an app can put its own
values into the picker cache, and other apps can register for colour
picker change events in order to update their representation. It can
provide the native precision, including HDR values.
Sorry, too many other things here to come to that myself soon. If
someone takes this over, we could discuss the details on the OpenICC
email list.
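To make the idea concrete, the atom payload could be a small serialized
record. A purely hypothetical sketch; the field layout and names are my
own invention, not an existing X atom format:

```python
import struct

# Hypothetical colour-picker atom payload: an sRGB reference, the
# native values at full precision (may exceed 0..1 for HDR), and the
# name of the native ICC profile.
def pack_picker_value(srgb, native, profile_name):
    name = profile_name.encode("utf-8")
    # 3 x float64 sRGB, 3 x float64 native, length-prefixed profile name.
    return struct.pack("<3d3dI", *srgb, *native, len(name)) + name

def unpack_picker_value(blob):
    srgb = struct.unpack_from("<3d", blob, 0)
    native = struct.unpack_from("<3d", blob, 24)
    (n,) = struct.unpack_from("<I", blob, 48)
    return srgb, native, blob[52:52 + n].decode("utf-8")

blob = pack_picker_value((1.0, 0.2, 0.2), (1.3, 0.1, 0.1), "WideGamut.icc")
print(unpack_picker_value(blob))
```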
> dithered smooth gradient in the image, I better not see banding or
> posterization. A ramp that is as smooth as possible better not have Mach
> highlights in it (which would be caused by a lookup table that clamps or
> otherwise does not have a continuous derivative).
> I think that color theorists are ignoring the above. Users think that huge
> skews in the emitted photons are "identical" but consider problems like I
> have listed to be unacceptable differences. Color theorists have got to learn
> how the real world works. We are not looking at flat fields of color, we are
> looking at images with tons of information stored in the higher frequencies
> and that information must be preserved!
I completely agree with the latter. Still, my take on it is to provide
means to communicate things clearly between applications instead of
working on internal workarounds. The latter might break, as pure
assumptions easily get out of sync.
>> The hardware configuration is often enough automatic (EDID).
> I would agree. If possible Cairo should be using such settings to convert
> sRGB data to emit the correct colors from the screen. However this has
> nothing to do with the Cairo application API.
Yes, it's a backend thing, almost.
>> This is completely wrong. If a program wants a good usage of hardware then
>> it needs to supply colours close to the hardwares colour space.
> Stop pretending the sRGB data is clamped! It is not and can specify any color
> in any well-known 3D color space.
I should have mentioned my premise. I meant that in the context of
sending typical 8-bit-per-channel or other integer values to the device,
sRGB will result in clamping.
To put it in terms of actual capabilities: if someone came today with a
patch for cairo's Xlib backend featuring a conversion from assumed sRGB
to monitor RGB, the RGB values in cairo would still be integers. Or am I
wrong?
>>> sRGB values must be sent to the display, or color management must be
>>> done by hardware between the buffer and the display.
>> In the cable?
> Yes, if the buffer is only 8 bits then that is the ONLY possibility.
> There are only two possible designs:
> 1. The buffer contains sRGB values and output hardware converts them to the
> display. This is the ONLY solution if the buffer is 8 bits as there must be 8
> bits of fractional resolution so that images are lossless. If your display
> has a larger gamut then either it cannot display the full gamut, or a
> *SIGNED* format with at least 8 bits of fraction must be used (16 bit half is
> probably best as it has hardware support, if you insist on integers than an
> offset with a range like -.5 to 1.5 is needed). Note that unsigned larger
> than 8 bits is almost useless.
Hmm. I know of only one HDR display system, and I doubt that it will
influence the mass market in a reasonable timeframe. Today we are
transitioning from 8-bit to 10-bit or slightly more per channel for
displaying. Xorg and libpixman already support that relatively nicely.
Does VESA have something on the board about sending half floats to
displays?
> 2. The buffer contains values that are linearly related to what the hardware
> requires, this can use clamped integers. As round-trip conversion of 8-bit
> sRGB must be the identity, this requires I believe 10 bits for Adobe RGB and
> even more bits for higher gamuts. As compositing will be done in this space
> and this puts serious limits on the gamma, it cannot vary too much from sRGB
> due to applications relying on 1-x being perceptually inverted and how
> partial transparency composites.
Compiz does its compositing in 16 bits per channel. Unfortunately it
does not support 10 bits or more for in- and output. But typical
applications or images need no compositing where displaying is
concerned. Converting colours from the app's internal colour buffer to
the display's native resolution is the best one can obtain. Say a 16-bit
camera RGB colour space converted to 8-bit monitor RGB: that should be
almost smooth. The same for synthetic gradients. That's really simple.
(Besides that, I personally think 8-bit cannot be really smooth.
Dithering is just a workaround, at the cost of colour resolution.)
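The level-count argument can be made concrete: an 8-bit output
distinguishes at most 256 levels, so a smooth 16-bit ramp necessarily
collapses many input levels onto one output level, which is where
visible banding comes from:

```python
# A smooth 16-bit ramp quantized to 8 bits: many distinct input levels
# land on the same output level.
ramp16 = list(range(0, 65536, 16))   # 4096 distinct 16-bit levels
ramp8 = [v >> 8 for v in ramp16]     # naive 16 -> 8 bit quantization

print(len(set(ramp16)))  # 4096 distinct input levels
print(len(set(ramp8)))   # only 256 distinct output levels remain
```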
But maybe your premise is something different? Years ago I read about
possibly related problems in pure 8-bit workflows. Fortunately the
photographers, and in part printers (?), have moved to 16-bit workflows.
> Notice that if the buffer is 8 bits and there is no hardware conversion, then
> the only possible implementation is to claim that the buffer contains sRGB
> data. It does not matter how screwed up the display is, that is the best you
> are going to get.
Not for 16-bit workflows. The 8-bit display path was for a long, long
time a real bottleneck after the great step up from indexed visuals.
(Can anyone remember conflicting colour maps? :)
But I am really happy that affordable 10-bit-per-channel displays for
many professionals have entered the market. I hope the price of the
display in front of me drops as 10-bit becomes more mainstream and
cheap. That's a really great development for graphics people. And what
most amateur photographers have had for quite some time now, 12-14 bit
raw material to work from, will shortly be within reach of hobby movie
makers too.
To get back to cairo and CM, I would orient toward proper bandwidth
between capturing devices, storage, applications, the OS (Xorg/cairo),
graphics cards, HDMI 1.3 or DisplayPort connections, and
10-bit-or-more-per-channel output devices. Judging from current
developments, we will all be there in some years, some earlier, some
later.
developing for colour management
www.behrmann.name + www.oyranos.org