[Bf-viewport] Grease Pencil Roadmap vs Viewport/OpenGL Work

Brecht Van Lommel brechtvanlommel at pandora.be
Sun Apr 3 17:58:03 CEST 2016


Hi,

On Sun, Apr 3, 2016 at 4:52 PM, Joshua Leung <aligorith at gmail.com> wrote:
> While I was originally planning on just waiting to see what replacements for
> the old OpenGL stuff came up (e.g. similar to D1880), I now think that we
> might be better off creating a set of custom shaders for Grease Pencil
> stroke drawing, so that we can have full control over what strokes can and
> cannot do, without having to worry about breaking anything else. Plus,
> Grease Pencil strokes will eventually need a bit more fanciness than simple
> UI elements will need.

Custom shaders for grease pencil make sense. We don't need the
overhead of advanced patterns and caps for basic line drawing in the
UI.

We still have to start refactoring the current GLSL shading code to
take more advantage of modern OpenGL functionality. But regardless,
it's not complicated to create custom GLSL shaders now; the messy
stuff is wrapped in fairly easy-to-use functions.

> == Open Issues/Questions ==
> 1) Can we use geometry shaders?
> It looks like the OpenSubdiv stuff already gets to do this via an extension,
> so can we assume that this can work in a similar way too?  But, would using
> geometry shaders mean we run into those legacy vs modern GL context issues
> on Mac (i.e. no geometry shaders work until all of Blender is ready to use
> the new profile)?

Generally the plan has been that we will require OpenGL 3.2 for
Blender 2.8, which would include geometry shaders.
https://wiki.blender.org/index.php/Dev:2.8/Source/OpenGL

But indeed there are issues on Mac until we have done a lot of OpenGL
refactoring and can switch to the core profile, and of course it's
still unclear when 2.8 will actually happen.
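For reference, requiring OpenGL 3.2 would allow something along these
lines for stroke drawing: a GLSL 1.50 geometry shader that expands each
line segment into a screen-aligned quad with per-vertex thickness. This
is a hypothetical sketch, not existing Blender code, written as a C
string the way shader sources are typically embedded:

```c
#include <string.h>

/* Hypothetical geometry shader source (GLSL 1.50 / OpenGL 3.2) expanding
 * each line segment into a screen-aligned quad. "thickness" would come from
 * a vertex shader output (e.g. pen pressure); all names are illustrative.
 * For simplicity this works in NDC and drops depth; a real shader would
 * carry w/z through. */
static const char *stroke_geom_glsl =
    "#version 150\n"
    "layout(lines) in;\n"
    "layout(triangle_strip, max_vertices = 4) out;\n"
    "in float thickness[];\n"
    "void main() {\n"
    "  vec2 p0 = gl_in[0].gl_Position.xy / gl_in[0].gl_Position.w;\n"
    "  vec2 p1 = gl_in[1].gl_Position.xy / gl_in[1].gl_Position.w;\n"
    "  vec2 dir = normalize(p1 - p0);\n"
    "  vec2 n = vec2(-dir.y, dir.x);\n"  /* screen-space normal */
    "  gl_Position = vec4(p0 + n * thickness[0], 0.0, 1.0); EmitVertex();\n"
    "  gl_Position = vec4(p0 - n * thickness[0], 0.0, 1.0); EmitVertex();\n"
    "  gl_Position = vec4(p1 + n * thickness[1], 0.0, 1.0); EmitVertex();\n"
    "  gl_Position = vec4(p1 - n * thickness[1], 0.0, 1.0); EmitVertex();\n"
    "  EndPrimitive();\n"
    "}\n";
```

A matching vertex shader would just pass the per-point pressure through
as `thickness`.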

> 2) What's the current plan (assuming there is one) for how we're going to
> transition Blender to using the modern GL, and how would it be best to fit
> this work in around that?
> - At least initially, I'll likely work on this in a branch - just to try out
> some approaches, get some experience with how this all works out, and
> hopefully stabilise it all. The main question then would be where this
> branch gets merged, and when (when it's ready of course).

I don't know to be honest; there hasn't been that much progress on the
OpenGL refactoring for Blender 2.8 recently, so I don't dare to give
any specific dates. You can always implement the geometry shader code
on the CPU if you want to merge it earlier; it might not be so hard to
port such code between the CPU and GPU.

> 3) Are there any special precautions/policies I should be aware of when it
> comes to managing GLSL shader stuff? (i.e. Do I need to reset things to
> certain default states, etc.)?

Not more than when doing other OpenGL drawing. Mostly you just have to
bind the GLSL shader and unbind it when you're done with it; there is
less state to worry about. When a GLSL shader is bound, some OpenGL
state like materials or lighting becomes irrelevant, while other state
like blend modes or backface culling still has an effect.

> 4) Where should the code go?
> - Of course the GLSL shaders will go with the other shaders in the GPU
> module
> - At least some wrapper functions will likely also migrate to the GPU module
> - just like with the other things there already
> - What's unclear currently is how much this stuff can be tied into the
> GPU_shader api's, and how much we'll need a separate set of API's to manage
> this stuff

Currently we keep most of this stuff in the GPU module, but as far as
I know it wasn't really a conscious decision to centralize it there.
If it's possible to make the GPU side fairly generic and not tied to
grease pencil data structures, then I'd suggest putting it in the GPU
module.

> 5) Regarding the vertex buffers/arrays...  (probably silly questions that
> would be answered by a FAQ, but since we're here):
> - Can you refer to different buffers on different frames/redraws?  Or would
> that be "bad" (though not as bad as what we do with immediate mode currently
> anyway)?
> - If we can only use a single buffer across frames, can the number of
> vertices in the buffer change (or do we need to keep that constant - i.e.
> figure out the frame with the largest number of points, create a buffer of
> that size, and only populate it with as much data as needed on all other
> frames)?
>
> I ask because it seems that a lot of the time, people are only dealing with
> animated character meshes whose geometry/topology doesn't change (i.e. no
> new bits added/removed on different frames), whereas with Grease Pencil,
> you're effectively doing replacement animation on each frame.

It's a matter of performance. You can allocate and free buffers on
every frame just fine; it's just more overhead than reusing a fixed
buffer and updating its contents. For optimal performance you might
even use double buffering, so that the CPU can write to one buffer
while the GPU or driver is drawing from the other.

Of course the actual performance depends on a lot of factors, and the
allocation/free may or may not be significant in practice. I'd just
start with reallocating the buffers every frame, or at least whenever
their size must be increased, and then you can figure out from there
if it's worth optimizing further. Looping over all the frames to
figure out the optimal buffer size has its own overhead too.
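A minimal sketch of that grow-only strategy (hypothetical helper, not
Blender API): keep a high-water-mark capacity and reallocate only when
a frame needs more room than any frame seen so far.

```c
#include <stdlib.h>

/* Grow-only vertex buffer sizing: reallocate only when the new frame needs
 * more room than any previous frame. Illustrative sketch, not Blender API. */
typedef struct StrokeBuffer {
  float *data;     /* interleaved vertex data */
  size_t capacity; /* in floats */
} StrokeBuffer;

/* Returns 1 on success, 0 on allocation failure. */
static int stroke_buffer_reserve(StrokeBuffer *buf, size_t needed)
{
  if (needed > buf->capacity) {
    float *data = realloc(buf->data, needed * sizeof(float));
    if (data == NULL) {
      return 0;
    }
    buf->data = data;
    buf->capacity = needed;
  }
  return 1;
}
```

This converges on the largest frame size without scanning all frames up
front, and frames smaller than the high-water mark cause no reallocation.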

> == Proposed Approach - Lines (Tentative) ==
> That said, Rougier's technique seems to provide a lot more of the things
> artists would want already (in particular, the cap style stuff, and builtin
> ability to do dashed lines too). There's also an example of how it can be
> used in 2D and in 3D.
>
> Stuff we'd have to hack into these would be: 1) ability to have varying
> thickness and/or opacity based on pressure, 2) ability to use absolute
> worldspace size vs absolute screenspace size.
>
> Thoughts? Suggestions?

The paper seems quite good. I haven't really thought about these
algorithms much, so I'm not sure which one would be best.

> == Proposed Approach - Dots/Discs/GLUQuadrics Replacements (Tentative) ==
> I've also been looking into techniques for replacing a lot of the current
> GLUQuadric stuff (used for the single-point strokes, as well as for drawing
> "Volumetric Strokes").
>
> It currently looks like we may end up doing something like:
>    http://stackoverflow.com/a/27099691
>
> 1) Is this one of those OpenGL API's that we'll be supporting in 2.7x, or is
> this something that's only available with later versions?
> 2) Does anyone know if the existing limits on line/point size (e.g. "10")
> still apply?

The maximum is still entirely up to the implementation. I don't know
if there is some common minimum supported size that exists on all GPUs
we support; I couldn't find information about that, so it's probably
not a good idea to rely on such a thing.

We don't really have a plan yet for replacing these specific
functions. I imagine we'd have some utility functions to easily draw
various shapes. If you need them now you can implement them, and then
later they could be folded into a more general API.
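The technique in the linked answer amounts to drawing point sprites and
discarding fragments outside the unit disc, which sidesteps GLUQuadrics
entirely. A hypothetical GLSL 1.50 sketch of that fragment stage (not
Blender code), embedded as a C string:

```c
#include <string.h>

/* Fragment shader sketch for round points, along the lines of the linked
 * Stack Overflow answer: gl_PointCoord covers [0,1]^2 across a point sprite,
 * so remap it to [-1,1]^2 and discard fragments outside the unit disc. */
static const char *disc_frag_glsl =
    "#version 150\n"
    "uniform vec4 color;\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "  vec2 centered = gl_PointCoord * 2.0 - 1.0;\n"
    "  if (dot(centered, centered) > 1.0) discard;\n"
    "  fragColor = color;\n"
    "}\n";
```

With glEnable(GL_PROGRAM_POINT_SIZE) the vertex shader can set
gl_PointSize per point, though still only within the
implementation-defined size range mentioned above.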

Regards,
Brecht.

