[Bf-committers] Screen Space Shaders

Jason Wilkins jason.a.wilkins at gmail.com
Mon May 30 15:37:46 CEST 2011


I have a feeling I may be missing something, or that Blender can
already do what I'm proposing in some way; at least, from what I've
seen done with the game engine, it seems possible there.  Anyway, here
is what I'm thinking about: implementing a framework for screen-space
shaders for Blender's viewport.

The onion branch has a feature I call the "on-surface" brush.  To draw
it, the depth buffer of a window needs to be preserved between calls
that draw menus, cursors, etc.  For drawing modes that are not
'triple' no special action is needed, but for 'triple' I added code
that saves the depth buffer along with the color buffer.
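
To give an idea, here is a minimal sketch of grabbing the current
depth buffer into a texture, the same way triple grabs the color
buffer.  This is illustrative, not the actual wm_draw code; the
function and variable names are assumptions.

    #include <GL/gl.h>

    static GLuint depth_tex = 0;

    static void save_depth_buffer(int winsize_x, int winsize_y)
    {
        if (depth_tex == 0) {
            glGenTextures(1, &depth_tex);
            glBindTexture(GL_TEXTURE_2D, depth_tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            /* allocate storage once; GL_DEPTH_COMPONENT24 is widely supported */
            glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24,
                         winsize_x, winsize_y, 0,
                         GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
        }
        else {
            glBindTexture(GL_TEXTURE_2D, depth_tex);
        }

        /* copy the window's depth buffer into the bound texture */
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, winsize_x, winsize_y);
        glBindTexture(GL_TEXTURE_2D, 0);
    }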

Recently I've been experimenting with a shader that uses the depth
buffer to darken concave areas and brighten convex areas.  It works in
screen space and is applied as a full-screen quad.  To work, the
shader needs to copy the depth buffer to a texture, just like the
triple drawing code does for the on-surface brush.
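
Applying such a pass is simple once the depth texture exists: bind it
and draw a window-sized quad.  A rough sketch, where curvature_prog is
a hypothetical handle to my GLSL program and the uniform name is an
assumption:

    static void draw_curvature_pass(GLuint curvature_prog, GLuint depth_tex,
                                    int winsize_x, int winsize_y)
    {
        glUseProgram(curvature_prog);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, depth_tex);
        glUniform1i(glGetUniformLocation(curvature_prog, "depth_buffer"), 0);

        /* window-aligned quad; assumes an orthographic projection that
           maps (0,0)-(winsize_x,winsize_y) onto the viewport */
        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f((float)winsize_x, 0.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f((float)winsize_x, (float)winsize_y);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, (float)winsize_y);
        glEnd();

        glBindTexture(GL_TEXTURE_2D, 0);
        glUseProgram(0);
    }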

If you do something once in a program, it is fine for it to be
specific to that one case.  The code will be least complex if you just
put it in the one place it is needed, for a single purpose.  But now I
have two potential places where I want to preserve the depth buffer,
and if I want post-process shaders to work with color, I'd have more
than one spot to save the color buffer as well.

So I would like to make the part of triple that saves color and depth
into a general, reusable component that other modules could use to
access the color and depth buffers as textures.  wm_draw would use the
module to implement triple drawing mode, and a second new module could
use it to set up textures for post-processing, like my curvature
shader.  (Note: even though I mention the on-surface brush as a
motivation, the on-surface brush needs depth to be preserved in the
buffer, not in a texture.  Triple drawing mode restores the depth
buffer by drawing a full-window quad, so triple drawing mode is the
client, not the on-surface brush directly.)
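
To make the proposal concrete, the component's interface might look
something like this.  Every name below is invented for illustration;
nothing like it exists in wm_draw yet:

    #include <GL/gl.h>

    typedef struct ScreenTextures ScreenTextures;

    /* create or resize texture storage to match the window */
    ScreenTextures *screen_textures_ensure(int winsize_x, int winsize_y);

    /* snapshot the current framebuffer contents into textures */
    void screen_textures_save_color(ScreenTextures *st);
    void screen_textures_save_depth(ScreenTextures *st);

    /* hand the saved results to a screen-space shader */
    GLuint screen_textures_get_color(const ScreenTextures *st);
    GLuint screen_textures_get_depth(const ScreenTextures *st);

    void screen_textures_free(ScreenTextures *st);

The idea is that wm_draw would call the save functions at the end of
its normal drawing, and the post-processing module would only ever
touch the getters.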

The results of the curvature shader itself could be used as input to
another screen-space shader, so the module should support texture
render targets (FBOs) as well as CopyTexSubImage textures.
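
For the FBO path, I imagine something along these lines, using
EXT_framebuffer_object since that is what most cards expose today,
with a fallback to CopyTexSubImage when the attachment combination is
unsupported.  Again, just a sketch:

    static GLuint make_render_target(GLuint target_tex, GLuint depth_tex)
    {
        GLuint fbo;

        glGenFramebuffersEXT(1, &fbo);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_2D, target_tex, 0);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                  GL_TEXTURE_2D, depth_tex, 0);

        if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) !=
            GL_FRAMEBUFFER_COMPLETE_EXT) {
            /* fall back to the CopyTexSubImage path */
            glDeleteFramebuffersEXT(1, &fbo);
            fbo = 0;
        }

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
        return fbo;
    }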

Eventually the screen-space texture module should support saving
color, depth, positions, and normals, which would allow deferred
rendering to be implemented in the viewport.
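
That would mean multiple render targets.  Here is a sketch of how a
G-buffer pass could be bound, assuming glDrawBuffers (OpenGL 2.0) and
textures allocated elsewhere; the names are again made up:

    static void bind_gbuffer(GLuint fbo, GLuint color_tex, GLuint normal_tex,
                             GLuint position_tex)
    {
        static const GLenum bufs[3] = {
            GL_COLOR_ATTACHMENT0_EXT,
            GL_COLOR_ATTACHMENT1_EXT,
            GL_COLOR_ATTACHMENT2_EXT,
        };

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_2D, color_tex, 0);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT,
                                  GL_TEXTURE_2D, normal_tex, 0);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT2_EXT,
                                  GL_TEXTURE_2D, position_tex, 0);

        /* the geometry pass fragment shader then writes gl_FragData[0..2] */
        glDrawBuffers(3, bufs);
    }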

In the immediate future I just want to use my curvature shader to
provide better shading of objects during sculpting.  All I really have
to do to make that work is multiply the window contents by the
curvature.  Further on, I'd like to implement an environment map
renderer that uses two maps, one for convex "lit" areas and one for
concave "shaded" areas.  That case will require a texture that stores
normals.
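
The multiply does not even need its own copy of the window contents;
it can be plain modulate blending over what is already there.  A
sketch, reusing draw_curvature_pass() from above:

    static void apply_curvature(GLuint curvature_prog, GLuint depth_tex,
                                int winsize_x, int winsize_y)
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_DST_COLOR, GL_ZERO);  /* result = shader output * window */
        draw_curvature_pass(curvature_prog, depth_tex, winsize_x, winsize_y);
        glDisable(GL_BLEND);
    }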

Ultimately, a deferred renderer could be used to allow for more
complicated materials whose cost increases only with screen resolution
instead of geometric complexity, and it would also allow you to
efficiently change material properties as long as you didn't change
the camera.

If I did not think this was best done by implementing something more
general, then I'd do it as a one-off feature on the weekends.  But I
need advice on where I would make changes if I were going to implement
something like this as a reusable component.  Or better, somebody
might have an idea to make this easier.

I already know some objections that may be raised.  The first
objection I know of is that the viewport rendering code is already
very complicated (read: spaghetti), and adding complexity like saved
textures, multiple render targets, and multiple passes is not
something the code can bear.  The next objection is that many video
cards would be brought to their knees by this.  Also, people may
object that we do not really need anything better than
wire/solid/textured and the current GLSL implementation.  Finally, if
it is too complicated, I shouldn't be using my valuable SoC time
implementing it.

To the potential objections: I see this as a longer-term project that
is beyond the scope of my SoC project.  I'd be willing to work on it
this fall and beyond if we can come up with an acceptable design,
which may include refactoring the viewport code.  We do not need to
support all video cards forever.  I think that there are many
different ways to view one's work in different situations that are not
covered by wireframes, Phong shading, and material shaders, and a new
approach to rendering the viewport may be needed to implement them.

Thank you for your attention.

