[Bf-committers] A contribution
Wed, 30 Jun 2004 00:03:50 +0100
Bill Baxter wrote:
> Dan Brown wrote:
>> The Iris+Zbuf format seems to do it like that: 8 channels, 1 each for bits
>> 0-7, 8-15, 16-23 and 24-31, and 4 channels for RGBA. I wanted to save
>> floats though, mainly because OpenGL and Direct3D use floats
>> internally for their depth buffers and it would be more in keeping
>> with that for people to use. Using TGA, though, means it is easier to
>> visualise quickly for testing.
> Just to nitpick, all graphics hardware I'm aware of uses a 24-bit (or
> 16-bit) fixed point depth buffer. So while OpenGL and Direct3D may do
> some depth computations internally in floating point, the depth buffer
> is ultimately not stored as floats on the card.
The hardware does whatever it needs to for speed on that architecture, but the
software interface is whatever has been defined. Direct3D uses integers, so I
was wrong about that; OpenGL accepts ints or floats but uses a float format
internally. From the developer's guide:
"Each pixel is a single depth component. Floating-point data is converted
directly to an internal floating-point format with unspecified precision. Signed
integer data is mapped linearly to the internal floating-point format such that
the most positive representable value maps to 1.0, and the most negative
representable value maps to -1.0. Unsigned integer data is mapped similarly: the
largest integer value maps to 1.0, and zero maps to 0.0. The resulting
floating-point depth value is then multiplied by GL_DEPTH_SCALE and added to
GL_DEPTH_BIAS. The result is clamped to the range [0, 1]."
So using a TGA or other lossless format would be perfectly acceptable. But it
does seem like a workaround: if I go to the bother of adding a 'save
depth-buffer' button, surely it would be better to write a direct format, i.e.
32-bit ints or floats, rather than 4 planes that have to be spliced back
together before the data can be used.