[Bf-committers] Proposal to use OpenGL in part of the snapping operations.

Brecht Van Lommel brechtvanlommel at gmail.com
Mon Jul 1 11:43:36 CEST 2019


There was earlier discussion on this here.
https://developer.blender.org/D2795

My understanding is that the old code would remain for faces. So if
this does not completely replace the snapping BVH, how does this
reduce memory usage or simplify the code? Wouldn't it do the opposite?

What about cases where we want to use X-ray snap to occluded vertices
or edges? Or maybe snap to the closest point from the moving vertex
rather than the closest point to the viewer? Don't we still need a BVH
for those?
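The difference between those two metrics can be shown with a toy example (illustrative Python, not Blender code; an orthographic projection stands in for the view transform):

```python
import math

def project(p):
    # Toy orthographic projection onto the view plane: drop the depth axis.
    return (p[0], p[1])

moving = (0.0, 0.0, 0.0)
candidates = [(1.0, 0.0, 0.0), (0.2, 0.2, 5.0)]

# View-space metric: the deep point (0.2, 0.2, 5.0) wins, because its
# projection lands closest to the cursor even though it is far away in 3D.
best_2d = min(candidates, key=lambda v: math.dist(project(moving), project(v)))

# 3D metric: the point (1.0, 0.0, 0.0) wins, being nearest to the moving vertex.
best_3d = min(candidates, key=lambda v: math.dist(moving, v))
```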

In general I think tools should be working in 3D space whenever
possible, be it snapping, sculpting or painting. There is usually a
point where working in 2D view space becomes too limiting.



On Mon, Jul 1, 2019 at 1:53 AM <germano.costa at ig.com.br> wrote:
>
> The idea is to use OpenGL only for snapping operations that involve edges and vertices. Therefore snapping to faces (i.e. raycasting) would not be affected.
> A while ago I started a discussion to know users' opinions about features proposals for precision modeling:
> https://devtalk.blender.org/t/discussions-for-better-snapping-and-precision-modeling-to-come/5351
> The feedback was very positive, and many other proposals were presented (these will be considered at a later stage of development).
> As shown in that topic, the first item to be worked on is the addition of two new snap options: "Middle" and "Perpendicular".
> The current CPU solution would be to modify the callback used for the BVH of edges.
> It is not really a problem to continue with this CPU solution, but there are some drawbacks to using BVHs:
> 1. The resulting BVH can consume a large amount of memory:
> To give an idea, the BVH of a Suzanne subdivided 4 times consumes 32,428.66 KB.
> For comparison, a 4K uint texture (which would be used on the GPU) consumes 33,177.60 KB.
> Currently, to avoid duplicating memory consumption, snapping to edges and vertices reuses the BVH of triangles.
> This is not as efficient as dedicated BVHs would be (one for edges and another for vertices).
> This workaround would be aggravated by the new snap options ("Middle" and "Perpendicular").
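For reference, the geometry behind the proposed "Middle" and "Perpendicular" snap options is straightforward. A minimal sketch (illustrative Python, not the actual BVH callback code):

```python
# "Middle" snaps to the midpoint of an edge (v0, v1).
# "Perpendicular" snaps to the point on the edge closest to the vertex
# being moved, i.e. the foot of the perpendicular, clamped to the segment.

def middle(v0, v1):
    return tuple((a + b) / 2.0 for a, b in zip(v0, v1))

def perpendicular(v0, v1, p):
    d = tuple(b - a for a, b in zip(v0, v1))   # edge direction
    w = tuple(c - a for a, c in zip(v0, p))    # v0 -> moving point
    denom = sum(x * x for x in d)
    if denom == 0.0:
        t = 0.0  # degenerate edge: both endpoints coincide
    else:
        # Parameter along the edge, clamped so the result stays on the segment.
        t = max(0.0, min(1.0, sum(x * y for x, y in zip(w, d)) / denom))
    return tuple(a + t * x for a, x in zip(v0, d))
```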
> 2. It is necessary to use workarounds to simulate occlusion:
> Since we can't use OpenGL, we can't use a depth map to know what is in front of or behind an object.
> The current CPU-based solution in Blender is to first cast a ray to get the polygon under the mouse cursor and snap to the vertices and edges of that polygon.
> A plane is then created from this polygon to separate which elements in the 3D View will be tested for snapping.
> These steps would be avoided with a depth map obtained with OpenGL.
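A minimal sketch of what such a depth-map occlusion test looks like (illustrative Python; the real test would read depths back from the GPU depth buffer):

```python
# A snap candidate is considered occluded if its projected depth lies
# behind the depth stored at its pixel, plus a small bias to avoid
# self-occlusion from precision error.

def is_occluded(depth_map, x, y, candidate_depth, bias=1e-4):
    # depth_map: 2D list of normalized depths (0.0 = near, 1.0 = far/empty)
    return candidate_depth > depth_map[y][x] + bias

depth_map = [
    [1.0, 1.0],
    [0.3, 1.0],  # a surface at depth 0.3 covers pixel (x=0, y=1)
]
assert is_occluded(depth_map, 0, 1, 0.7)      # vertex behind the surface
assert not is_occluded(depth_map, 0, 1, 0.2)  # vertex in front of it
```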
> 3. Big and complicated code:
> The BVH works with callbacks: the snapping system uses one callback for raycasting and another for mixed snapping.
> These callbacks also have to be compatible with different object types (Mesh, EditMesh, DispList).
> So within these callbacks there are further callbacks to get vertex coordinates depending on the object type.
> This complication would be avoided with a single texture mapping all the IDs.
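A rough sketch of the ID-texture idea (illustrative Python with made-up names; Blender's actual select-buffer code is in C and reads pixels back from the GPU):

```python
# Each snappable element is drawn into an offscreen buffer with its index
# encoded as the pixel value. Snapping then reduces to scanning the pixels
# near the cursor for the closest non-empty ID, with no per-object-type
# callbacks involved.

def snap_id(id_buffer, cx, cy):
    # id_buffer: 2D list of element ids, 0 meaning "nothing drawn here"
    best, best_d2 = None, None
    for y, row in enumerate(id_buffer):
        for x, elem_id in enumerate(row):
            if elem_id == 0:
                continue
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            if best_d2 is None or d2 < best_d2:
                best, best_d2 = elem_id, d2
    return best
```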
> On the other hand, a GPU-based solution would have the following advantages:
> 1. It takes advantage of an existing Blender solution:
> Currently Blender already does something similar in the selection system for an edited mesh.
>
> So the existing code would be reused and improved.
> 2. GPU depth test for occlusion.
> Using the GPU to snap is not strictly necessary, but I would like your opinions on this subject.
> Thanks,
> Germano
> _______________________________________________
> Bf-committers mailing list
> Bf-committers at blender.org
> https://lists.blender.org/mailman/listinfo/bf-committers
