[Soc-2018-dev] Weekly Report #10 - Further Development for Cycles' Volume Rendering

Geraldine Chua chua.gsk at gmail.com
Tue Jul 24 19:56:02 CEST 2018


Hi all,

I'm sorry for not being specific about the bug.

Render time for VDB grids converted to sparse grids:
- With Mesh: 00:48
- Without Mesh: 00:43

Render time for VDB grids directly saved as textures:
- With Mesh: 01:45
- Without Mesh: 00:25

For reference, it takes ~20 seconds to do the VDB to sparse texture
conversion.

Both types of textures show noise and solid shadows, but only when a volume
mesh is present. So for sparse VDB textures there is no slowdown issue, but
the visual problem remains.

> This is my patch for faster volume mesh generation:
> https://developer.blender.org/P756

Checking how long each function takes (direct VDB with mesh), the total mesh
building time is ~55 seconds, with 31 seconds in adding nodes and 24 seconds
in deduplicate_vertices. With Stefan's patch, the deduplication time becomes
negligible :) (1:15 rendering time!) The extra time in adding nodes could be
explained by the long iteration plus the padding.
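For reference, the quadratic cost comes purely from comparing every vertex
against every other one; even a simple hash-based pass along the lines of the
rough sketch below brings it down to roughly linear time. This is only an
illustration with stand-in types, not necessarily what P756 does (which
avoids creating the duplicate vertices in the first place):

#include <cstdint>
#include <cstring>
#include <unordered_map>
#include <vector>

struct float3 { float x, y, z; };

/* Hash the raw bits of a position so exact duplicates land in the same bucket. */
struct VertexHash {
  size_t operator()(const float3 &v) const
  {
    uint32_t bits[3];
    std::memcpy(bits, &v.x, sizeof(bits));
    size_t h = 14695981039346656037ull; /* FNV-1a offset basis */
    for (uint32_t b : bits) {
      h ^= b;
      h *= 1099511628211ull;
    }
    return h;
  }
};

struct VertexEqual {
  bool operator()(const float3 &a, const float3 &b) const
  {
    return a.x == b.x && a.y == b.y && a.z == b.z;
  }
};

/* Build the list of unique vertices and a remap table from old to new indices
 * in a single pass over the input. */
void deduplicate_vertices_hashed(const std::vector<float3> &verts,
                                 std::vector<float3> &unique_verts,
                                 std::vector<int> &remap)
{
  std::unordered_map<float3, int, VertexHash, VertexEqual> index_of;
  remap.resize(verts.size());

  for (size_t i = 0; i < verts.size(); i++) {
    auto it = index_of.find(verts[i]);
    if (it != index_of.end()) {
      remap[i] = it->second;
    }
    else {
      const int new_index = (int)unique_verts.size();
      unique_verts.push_back(verts[i]);
      index_of.emplace(verts[i], new_index);
      remap[i] = new_index;
    }
  }
}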

Putting a diffuse shader on the volume shows a (heavily scaled down) complex
mesh (http://pasteall.org/pic/show.php?id=9f0a3c8cb3e44368558fd81ca4261b50).
This may explain the long building time, but not the noise or the different
shadow. Increasing transparency bounces to 16, max bounces to 24, and volume
bounces to 4 yields the same issues. Trying the other test VDBs also shows
problems (another example here: with mesh (
http://pasteall.org/pic/show.php?id=1cea2c0f82d29ce53da1179a6162391d) and
without mesh (
http://pasteall.org/pic/show.php?id=b978667234b614c02b1d6cd9494cc620)).

> I think the place of the code is ok, it's easiest to do it just in time to
> ensure we don't do it when shader evaluation doesn't need it. We can cache
> the velocity lookup in ShaderData so it is performed only once if multiple
> volume reads are done.

Alright! I originally thought of trying to alter the ray shooting to gather
more samples that way, since that is the approach mentioned in the paper, but
I am very unfamiliar with that part of the kernel, so this seemed like a
decent alternative. How would I go about caching the lookup? Maybe by adding
a new member volume_velocity to ShaderData?
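To make sure I understand it right, something like this rough sketch is what
I have in mind (the member names, the flag, and lookup_velocity_grid() are
just placeholders, not the real kernel API):

struct float3 { float x, y, z; };

/* Stand-ins for the real kernel structures; only the caching pattern matters here. */
struct ShaderData {
  float3 P;                    /* shading position */
  float3 volume_velocity;      /* proposed new member: cached velocity at P */
  bool volume_velocity_cached; /* set once the cache is filled for this sample */
};

/* Placeholder for the actual VDB/grid attribute read. */
static float3 lookup_velocity_grid(const float3 &P)
{
  (void)P;
  return {0.0f, 0.0f, 0.0f};
}

/* Do the expensive grid lookup only once per shading point, even if several
 * volume reads during shader evaluation ask for the velocity. */
float3 volume_velocity(ShaderData *sd)
{
  if (!sd->volume_velocity_cached) {
    sd->volume_velocity = lookup_velocity_grid(sd->P);
    sd->volume_velocity_cached = true;
  }
  return sd->volume_velocity;
}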

> Request and export of the velocity grid should be disabled if the Blender
> object has motion blur disabled. This is done by checking if
> mesh->motion_steps > 0.

I remember seeing velocity grids being used to color smoke, so there may be a
use for them aside from motion blur. The velocity attribute node is supposed
to plug into a Color socket as well, so I don't think it needs to be removed
when there's no motion blur.

> In principle the volume bounds also need padding to account for the motion
> blur, but if the velocity is not too big it's difficult to notice the
> difference so we could ignore that for now.

There is an issue with smoke that (I think) happens when the smoke is near
the edges of the bounding box (example:
http://pasteall.org/pic/show.php?id=33ee8cb52f1b48a608bbbf1e8ba2faf4).
Scaling up the domain fixes the problem but greatly increases rendering time,
since every tile in the volume is now active. I wonder if padding could help
with this?
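If padding is the way to go, I imagine it would look roughly like this (a
sketch only; max_velocity and shutter_time are assumed to come from the
exported grid and the camera, and I am assuming the shutter is centered on
the frame):

struct float3 { float x, y, z; };

struct Bounds {
  float3 min, max;
};

/* Grow the volume bounds by the largest displacement the velocity field can
 * produce over the shutter interval, so blurred smoke near the edges of the
 * domain is not clipped. */
Bounds pad_volume_bounds(Bounds bounds, float max_velocity, float shutter_time)
{
  /* Half the shutter on each side of the frame. */
  const float pad = max_velocity * 0.5f * shutter_time;

  bounds.min.x -= pad; bounds.min.y -= pad; bounds.min.z -= pad;
  bounds.max.x += pad; bounds.max.y += pad; bounds.max.z += pad;
  return bounds;
}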

> It's not clear to me why there is a kernel_data.cam.motion_samples (= 3)
> used here. If I understand the paper correctly, formula (7) requires two
> volume lookups to compute the velocity, and then there is a third volume
> lookup to get the requested attribute itself.

The idea behind the Eulerian motion blur formula is that instead of storing
multiple smoke frames and interpolating, we can use the velocity data of
Eulerian structures to calculate the appearance of the smoke at a time other
than the current frame. If we use the algorithm only once at time N, the
smoke simply gets shifted to what it will look like at instant N, rather than
producing the smear effect we need. The way I went about it was to calculate
3 samples, at the start, middle, and end of the shutter time, then average
the samples together. So altogether there are 4 lookups: 1 velocity and 3
densities. Technically the velocity should also be advected, but I tried it
first without and the output seemed alright, so I forgot to try it with
advected velocity :P I believe with one sample there should be two lookups
(1 velocity and 1 density).
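In rough C++ terms, the averaging looks like the sketch below. It is only an
illustration of the scheme, not the actual kernel code: lookup_density and
lookup_velocity stand in for the real grid samplers, I assume the shutter is
centered on the frame, and I am ignoring the world-to-object transform Stefan
pointed out.

#include <array>
#include <functional>

struct float3 { float x, y, z; };

static float3 operator*(const float3 &a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float3 operator-(const float3 &a, const float3 &b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

/* One velocity lookup, then density lookups at positions traced back along
 * the velocity for the start, middle and end of the shutter, averaged. */
float motion_blurred_density(const float3 &P,
                             float shutter_time,
                             const std::function<float(const float3 &)> &lookup_density,
                             const std::function<float3(const float3 &)> &lookup_velocity)
{
  const float3 velocity = lookup_velocity(P); /* 1 velocity lookup */

  /* Sample times relative to the frame, assuming a centered shutter. */
  const std::array<float, 3> offsets = {-0.5f, 0.0f, 0.5f};

  float sum = 0.0f;
  for (const float t : offsets) {
    /* Trace back along the velocity to estimate the density at the offset
     * time (the sign may need flipping depending on the grid's convention). */
    const float3 P_offset = P - velocity * (t * shutter_time);
    sum += lookup_density(P_offset); /* 3 density lookups */
  }
  return sum / (float)offsets.size();
}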

Incidentally, I believe Eulerian motion blur was already implemented for
dynamic meshes as sync_mesh_fluid_motion() in blender_mesh.cpp, but it works
a bit differently, since there are only 2 samples per vertex and the vertices
are pre-calculated. I haven't been able to trace very well how they are
handled in the kernel either, so the volume method is quite a bit different.
Now that I think about it, it might also be possible to pre-calculate the
density. I will have to try that out soon.

Best regards,
Geraldine

On Tue, Jul 24, 2018 at 5:08 PM, Stefan Werner <stewreo at gmail.com> wrote:

> This is my patch for faster volume mesh generation:
> https://developer.blender.org/P756
>
> -Stefan
>
>
> On 23. Jul 2018, at 14:25, Stefan Werner <stewreo at gmail.com> wrote:
>
> Hi Geraldine!
>
> On 20. Jul 2018, at 18:01, Geraldine Chua <chua.gsk at gmail.com> wrote:
>
> There appears to be a bug where VDB textures are rendered with artifacts
> and solid shadows, but only if a volume mesh is created. I have attempted
> to debug the issue but cannot figure out the reason so far. In addition to
> this, generating a mesh takes a considerable amount of time compared to
> built-in smoke. A simple fix to these issues would be disabling volume mesh
> construction for VDB grids, but it would be nice to figure out the cause of
> this bug.
>
>
> The volume mesh generation uses a vertex deduplication routine with O(n^2)
> runtime, that’s probably what you’re seeing. I should have a potential fix
> for that around somewhere, it’s possible to do this without creating
> duplicate vertices to begin with.
>
> With reference to Stephen's previous work on volume motion blur (
> https://github.com/tangent-animation/blender278/pull/103), I wrote a
> minor patch with promising results. I am uploading the diff instead of
> committing since it's pretty hacky and there may be more efficient ways of
> going about implementing it (https://developer.blender.org/P755).
>
>
> Your patch works under the assumption that the motion vectors stored in
> the grid are in world coordinates, which they may not necessarily be. You
> probably need to apply the inverse scale and rotation matrix of the volume
> objects to the values read from the volume.
>
> -Stefan