[Bf-committers] Viewport FX Design

Jason Wilkins jason.a.wilkins at gmail.com
Fri Jun 1 13:54:05 CEST 2012


On Fri, Jun 1, 2012 at 5:28 AM, Brecht Van Lommel
<brechtvanlommel at pandora.be> wrote:
> Regarding matrix transformations:
>
> * "There would be no need to change "modes" and more matrix stackes
> could be created than the "standard" 3."
>
> I don't understand this, why would you want more stacks, and how would
> you avoid changing modes?

That is written in a contradictory way, since the example code I wrote
does include mode switches.

The reason is that you could directly address the matrix you want to
use.  It would be a more object-oriented interface that does not rely
on global state to know which matrix to address.
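
For example, a stack-per-matrix interface might look roughly like
this (all of these names are made up for illustration; none of them
exist yet):

    /* Sketch only: each stack is an object you address directly,
     * so there is no glMatrixMode-style global mode to switch. */
    GPUMatrixStack *model = gpuMatrixStackNew();
    GPUMatrixStack *view  = gpuMatrixStackNew();
    GPUMatrixStack *spare = gpuMatrixStackNew(); /* more than the "standard" 3 */

    gpuMatrixStackTranslate(model, 1.0f, 0.0f, 0.0f); /* no mode switch */
    gpuMatrixStackRotate(view, 90.0f, 0.0f, 0.0f, 1.0f);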

> * "An option to separate modelview into "model" and "view" matrices
> which remain valid until you change the modelview directly."
>
> Also don't understand this, to me it seems the model and view
> separation is nicely handled by matrix stacks.

Think about instanced geometry: if you move the camera, you would have
to update all of the instances' modelview matrices.  If you do the
model * view transformation in the vertex shader, you only have to
update the view matrix.  Including the view and model matrices in the
same stack is based on the assumption that people will be using
glBegin/glEnd to send geometry every frame.
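
As a rough sketch of the difference (the uniform locations and the
draw call are placeholders):

    /* With separate "model" and "view" uniforms, a camera move costs
     * one uniform update instead of one per instance; the vertex
     * shader computes view * model itself. */
    glUniformMatrix4fv(view_loc, 1, GL_FALSE, view_matrix); /* once per frame */

    for (int i = 0; i < instance_count; i++) {
        glUniformMatrix4fv(model_loc, 1, GL_FALSE, instance_matrix[i]);
        draw_instance(i); /* placeholder draw call */
    }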

> * "Place lights either using the legacy method of using the current
> modelview or using a custom matrix."
>
> This seems to be a rare enough thing to do that we should just follow
> whatever GL does, not try to make it nicer by adding an abstraction.

Again, this is something that relies on the idea that you will be
revisiting the whole scene graph every frame.  It should be possible
to update just one entity's matrix without having to reconstruct the
exact state that OpenGL was in when you told it where something was.
I will admit, however, that I need to study how this might best be done.
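
For reference, the legacy behavior in question: GL transforms a
light's position by whatever modelview is current when glLightfv is
called, so re-placing one light means recreating that state (the
stored matrix below is hypothetical):

    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadMatrixf(light_matrix);            /* hypothetical per-light matrix */
    glLightfv(GL_LIGHT0, GL_POSITION, pos); /* transformed by light_matrix */
    glPopMatrix();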

> * "Concatenate transformations to the top or bottom."
>
> I can't think of a situation where this would be a good thing to do.
> It means you have to know something about what happened below in the
> stack, if you follow the GL rules any transformation is nicely
> 'local'.

I've written code many times where I needed to glGetFloatv the current
matrix, add a transformation to the beginning, and then put the matrix
back.  Having the option to separate the model and view matrices
probably eliminates that pattern though, because that is all I ever
used that kind of thing for.
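
The pattern looks roughly like this:

    float m[16];

    glGetFloatv(GL_MODELVIEW_MATRIX, m); /* save the current matrix */
    glLoadIdentity();
    glTranslatef(tx, ty, tz);            /* the transform added "to the beginning" */
    glMultMatrixf(m);                    /* then reapply the saved matrix */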

> Regarding the Start/Commit/Rollback idea, why not add the functions
> gpuTranslateScale and gpuTranslateRotate, and just immediately apply
> the transforms always? From looking through the code, that covers
> nearly all cases, with a few places in the code that could construct
> the matrix manually, and the remaining ones not being performance
> critical enough to bother avoiding the state change.

You are absolutely right.  I will admit that I have not had time to
properly think about coming up with a set of functions that just
captures what Blender already does.  There is no need for a complex
interface if a handful of common operations, all taking effect
immediately, reduces nearly every case to one function call.

What I was proposing amounts to my wishlist for a nice OpenGL
transformation library.
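
Something like this would cover the common case (a sketch in legacy
GL terms; a real version would target our own matrix state):

    /* Combined helper that applies immediately, as Brecht suggests. */
    void gpuTranslateScale(float x, float y, float z,
                           float sx, float sy, float sz)
    {
        glTranslatef(x, y, z);
        glScalef(sx, sy, sz);
    }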

> On Fri, Jun 1, 2012 at 11:11 AM, Antony Riakiotakis <kalast at gmail.com> wrote:
>>  In fact, if we ever go for a pure GL3+ core profile
>> this is absolutely essential, as the gpuMatrixTransformCommit
>> (BLI_MatrixTransformCommit ?) call would probably need to calculate
>> and bind an inverse matrix to the uniform buffer object as well.
>
> Isn't such an inverse matrix needed only for either normals or some
> more advanced shading operation? I hope this would not be needed in
> e.g. UI drawing code.
>
> Brecht.

I believe an inverse matrix of some sort is needed for lighting; the
normal matrix is the inverse transpose of the modelview.  Since core
profiles won't calculate everything needed for fixed-function lighting
anymore, we may need different functions that take the current matrix
and load it into GL in different ways depending on how it will be
used.  The UI would only need to load a single modelview-projection
matrix, but a full lighting setup will need things broken down into
several matrices.  I can imagine more sophisticated methods for
complex shaders in the future.
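
A sketch of what those two cases might look like (the helper names
and uniform locations here are hypothetical):

    /* Fixed function used to derive the normal matrix automatically;
     * a core profile has to compute and upload it itself. */
    float mvp[16], normal[9];

    compute_mvp(mvp);              /* hypothetical */
    compute_normal_matrix(normal); /* hypothetical: inverse transpose of upper 3x3 */

    glUniformMatrix4fv(mvp_loc, 1, GL_FALSE, mvp);       /* UI stops here */
    glUniformMatrix3fv(normal_loc, 1, GL_FALSE, normal); /* lighting needs this too */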

One thing I'd like to do is require the programmer to be more explicit
about what is needed, so that the library stays simple and fast, even
if using it is more complex.  That is why my gpuBegin/gpuEnd requires
a gpuImmediateLock/gpuImmediateUnlock: the library doesn't need to be
super complex to figure out what the programmer really wants to do.
On matrix multiplies, I think Antony has a good point I had not
thought of: depending on what you want to do, you may use a single
glUniform call to load one matrix, or you may need several, in
addition to calculating inverses.
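
The lock pattern I mean looks approximately like this (exact names in
my branch may differ):

    gpuImmediateLock();  /* format and buffer are pinned down up front */
    gpuBegin(GL_LINES);
    gpuVertex3f(0.0f, 0.0f, 0.0f);
    gpuVertex3f(1.0f, 1.0f, 1.0f);
    gpuEnd();
    gpuImmediateUnlock();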

I do wonder if I've gone too far.  I've found myself writing little
utility functions to make sure that vertex formats are set up
correctly and that the buffer is locked, rather than using the
interface I created for that directly.  Blender only seems to use
about 7 vertex formats, but my interface can set up many more.  In the
end, I think I like the idea of building a fairly low-level interface,
then building functions on top of it that are custom made for Blender.
It just seems cleaner to say exactly what you need to have done and
have the code do it, rather than imply what you want done and have
really complex code figure it out.

I think that is why driver writers decided to rip all that stuff out:
optimizing it was just a big pile of special cases.  I want to write a
low-level library, in the sense that it does not have to "think" much
about how it will dispatch a command to OpenGL.  Such a library does
exactly what you tell it to do, without trying very hard to decide how
to do it.  We can then build Blender's special cases on top of that.

This may be getting too long for an email, but I want to give an
example of what I mean:

Let's say we have a function gpuMultMatrix, which multiplies by a
given matrix and loads the result into OpenGL state.  The slow way to
do this would be to calculate every matrix that any Blender shader
might ever need and load them all.  We could make it faster by
checking a bunch of flags and deciding what is needed "automatically",
but that results in complex code that runs every time, even when we
don't need anything special.

A lower-level interface requires us to load each type of matrix
separately and unconditionally.  If we need lighting, we explicitly
load all the matrices that lighting requires.

On top of that we build a set of utility functions for Blender's
special cases that reduce, say, 3 function calls to just 1.  In some
cases they may have to make more complicated decisions, but those more
complicated functions would only be used where they are actually
needed, instead of one big fat function that makes all the decisions
every time.
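
Concretely, something like this (all names hypothetical):

    /* The low level loads each matrix type separately and
     * unconditionally; this Blender-side helper folds the lighting
     * case's three calls into one.  UI code would call
     * gpuLoadModelViewProjectionMatrix() alone. */
    void gpuLoadMatricesForLighting(void)
    {
        gpuLoadModelViewProjectionMatrix();
        gpuLoadModelViewMatrix();
        gpuLoadNormalMatrix();
    }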

I think we should do it that way because we aren't rebuilding the
OpenGL fixed-function pipeline; we are rebuilding the subset of it
that Blender actually uses.
