[cairo] path gradients and cairo

Bill Spitzak spitzak at d2.com
Thu Apr 29 13:21:22 PDT 2004


On Thursday 29 April 2004 04:05 am, Carl Worth wrote:
> I imagine OpenGL allows a mesh of colored triangles, right? Is there any
> reason we shouldn't just use that?

I'm not sure what new things are in modern OpenGL, but:

OpenGL, way back at the beginning, had as its basic object a triangle with the 
color specified only at the three corners, and it then Gouraud shaded (linearly 
interpolated in screen space) those triangles. You can directly specify the 
color of each vertex, but this is done at the *same* time you set the xyz 
coordinate (not later!).
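For example, roughly (a minimal sketch in classic OpenGL 1.x immediate mode; 
it assumes a GL context is already current, e.g. from GLUT or GLX):

#include <GL/gl.h>

/* Draw one triangle with a different color at each corner; GL then
 * Gouraud-shades (linearly interpolates) the colors across it. */
static void
draw_shaded_triangle (void)
{
    glShadeModel (GL_SMOOTH);        /* interpolate vertex colors */
    glBegin (GL_TRIANGLES);
    glColor3f  (1.0f, 0.0f, 0.0f);   /* the color must be set before its vertex */
    glVertex3f (-1.0f, -1.0f, 0.0f);
    glColor3f  (0.0f, 1.0f, 0.0f);
    glVertex3f ( 1.0f, -1.0f, 0.0f);
    glColor3f  (0.0f, 0.0f, 1.0f);
    glVertex3f ( 0.0f,  1.0f, 0.0f);
    glEnd ();
}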

Most often you would also specify a lighting and shading setup before 
sending the vertices, and the stored color would actually be calculated from 
that lighting setup together with the color (and a "normal") provided for the 
vertex. This was done at the moment the vertex was specified, and only the 
resulting color was remembered. In addition, the xyz was "perspective 
projected" into an xyzw value, and only this resulting value was stored.
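Roughly like this (again just a sketch, assuming a current GL context; the 
light and material values are invented):

#include <GL/gl.h>

/* With lighting enabled, GL evaluates the lighting equation once per
 * vertex, from the current normal and color, and only the resulting
 * color is kept and later interpolated across the triangle. */
static void
draw_lit_triangle (void)
{
    static const GLfloat light_pos[4] = { 0.0f, 0.0f, 1.0f, 0.0f };

    glEnable (GL_LIGHTING);
    glEnable (GL_LIGHT0);
    glLightfv (GL_LIGHT0, GL_POSITION, light_pos);
    glEnable (GL_COLOR_MATERIAL);    /* let glColor feed the material */

    glBegin (GL_TRIANGLES);
    glNormal3f (0.0f, 0.0f, 1.0f);   /* the normal is per-vertex state too */
    glColor3f  (1.0f, 0.0f, 0.0f);
    glVertex3f (-1.0f, -1.0f, 0.0f); /* lighting is evaluated here, at the vertex */
    glColor3f  (0.0f, 1.0f, 0.0f);
    glVertex3f ( 1.0f, -1.0f, 0.0f);
    glColor3f  (0.0f, 0.0f, 1.0f);
    glVertex3f ( 0.0f,  1.0f, 0.0f);
    glEnd ();
}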

The end result very closely resembled what you would see if you did the 
lighting calculation at every pixel, but it was far faster. The scheme is also 
entirely unambiguous: a 4-sided shape would get divided into two triangles 
in a predictable way (it was the caller's job to make the seam invisible), and 
the linear interpolation was identical no matter how the shape was rotated.

Modern hardware replaces this with a "vertex shader", which is invoked for 
each vertex; it can replace the perspective calculation with its own and 
make up a set of values to store with each vertex. These could be colors, or 
arbitrary arguments to the fragment shading calculation.
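Something along these lines (a rough GLSL sketch, written as the C string you 
would hand to glShaderSource; the names mvp, position, color and v_color are 
invented for illustration):

/* Vertex shader: computes its own projected position and stores an
 * arbitrary per-vertex value (here a color) for later interpolation. */
static const char *vertex_shader_src =
    "uniform mat4 mvp;\n"
    "attribute vec3 position;\n"
    "attribute vec3 color;\n"
    "varying vec3 v_color;\n"
    "void main () {\n"
    "    gl_Position = mvp * vec4 (position, 1.0);\n"
    "    v_color = color;\n"
    "}\n";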

The Gouraud shading (and texture mapping) is replaced with "fragment shaders", 
which are (in effect) called for every pixel in the triangle. They are given 
the results of the three vertex-shader calls, linearly interpolated between 
the points of the triangle (i.e. they get a single set of numbers).
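The matching fragment shader sketch (same caveats as above); by the time it 
runs, v_color has already been linearly interpolated across the triangle:

static const char *fragment_shader_src =
    "varying vec3 v_color;\n"
    "void main () {\n"
    "    gl_FragColor = vec4 (v_color, 1.0);\n"
    "}\n";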

One concern about Cairo is that your shading is not based on the vertices 
and may map very poorly to OpenGL and DirectX hardware. The main reason 
vertex shaders would be necessary is that they can look at the resulting 
xyzw in screen space and use it to pick the other values stored with the 
vertex, which are then interpolated for shading. Other than that, the Cairo 
shading hardly taxes the abilities of hardware shading languages and was 
probably supportable by older hardware.

The reason the old hardware did not provide any way to run the shader from 
the projected coordinate is that it is totally unnecessary for most 3D. The 
Z can be (and is) used for fog, but the Z was needed for z-buffer compositing 
anyway. The screen xy would only be useful for rendering a window, but that 
could be done by rendering what is outside it first and then drawing the 
window frame atop it.

Because this is so unnecessary for 3D, I would not be surprised if modern 
hardware, even though it can do it, performs poorly (perhaps blocking the 
pipeline) if shaders need to look at the projected screen coordinates. This 
could seriously slow things down. So it is quite possible that the design of 
Cairo should be altered to assume 3D hardware, even if you assume totally 
programmable and full-power shaders on the hardware.

-- 
                   ,~,~,~,~ ~ ~ ~ ~
     /\_       _|_========___         Bill Spitzak
 ~~~/\/\\~~~~~~\____________/~~~~~~~~ spitzak at d2.com



