[Soc-2018-dev] Weekly Report #1 - Further Development for Cycles' Volume Rendering

Lukas Stockner lukas.stockner at freenet.de
Sun May 20 23:01:17 CEST 2018


Hi,

Same here; I'll just answer the questions inline.

On 05/20/2018 07:11 PM, Geraldine Chua wrote:
> * Aside from logging the size of the data structures, is there another
> method to measure memory usage?
Well, there is the --debug-cycles flag for Blender that will make Cycles
print out its memory allocations, unless that's what you mean by logging.
> * What is the difference in function between `bvh` and `kernel/bvh`?
intern/cycles/bvh contains the BVH builder that generates the BVH from
the scene geometry; that runs in the host code while synchronizing the
scene. intern/cycles/kernel/bvh is the traversal code that gets called
from the kernel during rendering.
> * What are closures?
Closures are how Cycles represents materials internally during
rendering. Essentially, a shader in Cycles is a program that is executed
when a ray hits an object and that results in a list of closures, which
Cycles then uses for calculating direct lighting and scattering.
Closures generally correspond to BSDF nodes, but one node might result
in multiple closures (for example, a Glass BSDF node will be turned into
a reflection and a refraction closure).
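As a toy illustration of that idea (this is an assumed sketch, not
Cycles' actual C++ API; the names Closure and glass_shader are made up
here), a shader can be thought of as a function that runs at a hit
point and returns a list of closures for the integrator to sample:

```python
# Hypothetical sketch: a shader produces a list of closures, which the
# integrator then uses for direct lighting and scattering decisions.
from dataclasses import dataclass

@dataclass
class Closure:
    type: str     # e.g. "reflection", "refraction", "diffuse"
    weight: float # contribution of this closure to the material

def glass_shader(ior: float) -> list[Closure]:
    # One Glass BSDF node expands into two closures:
    # a reflective one and a refractive one.
    return [Closure("reflection", 0.5), Closure("refraction", 0.5)]

closures = glass_shader(ior=1.45)
print([c.type for c in closures])  # ['reflection', 'refraction']
```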
> * What is a split kernel and how did it make Cycles more efficient
> than before split kernel was implemented?
The split kernel mainly exists to overcome limitations of GPU computing.
The regular kernel works by creating a ray, tracing it through the scene
until it terminates, and then writing the result. However, this causes
some problems:
- The code of the kernel becomes extremely long by GPU computing
standards, which used to crash AMD's OpenCL compiler
- The amount of local variables exceeds the available registers, causing
slowdowns due to variables needing to be stored in the global memory
- Most importantly, thread divergence can be massive - as long as at
least one path is active, all 32 (Nvidia) or 64 (AMD) threads have to be
kept active. Since path tracing is a Monte-Carlo algorithm, the active
threads end up being randomly distributed by default.
Therefore, the idea of the split kernel is to split up the computation:
One kernel to generate paths, one to trace them to the next
intersection, one to evaluate shaders, one to compute direct lighting
etc. This way, finished paths can be replaced with new ones as soon as
they terminate, which helps a lot to keep occupancy high. There are
some downsides, though - the code becomes more complex, there is quite a
lot of management overhead and all the path data has to be kept in
global memory. This is why the split kernel is currently only enabled
for AMD GPUs using OpenCL, even though it also works on CUDA and even CPU.
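To make the regeneration idea concrete, here is a toy sketch (assumed
structure, not Cycles' real code) of the split approach: each stage runs
as its own pass over a pool of path states, and slots freed by finished
paths are refilled immediately so the pool stays full, which is what
keeps GPU occupancy high:

```python
# Toy model of split-kernel path regeneration. The pool stands in for
# the per-thread path states kept in global memory.
import random
random.seed(42)

POOL_SIZE = 4
TOTAL_PATHS = 8

def path_terminates() -> bool:
    # Stand-in for Russian roulette / the ray leaving the scene.
    return random.random() < 0.5

def split_kernel_render() -> int:
    generated = 0
    pool = []
    finished = 0
    while finished < TOTAL_PATHS:
        # "Generate" kernel: top up the pool with fresh paths.
        while len(pool) < POOL_SIZE and generated < TOTAL_PATHS:
            pool.append(generated)
            generated += 1
        # "Intersect" + "shade" kernels: advance each active path one bounce.
        still_active = []
        for path in pool:
            if path_terminates():
                finished += 1  # result written; this slot is now free
            else:
                still_active.append(path)
        pool = still_active
    return finished

print(split_kernel_render())  # 8
```

In the real split kernel the same regeneration happens across thousands
of GPU threads at once, which is where the management overhead and the
global-memory path state mentioned above come from.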
> * Does index space vs object space mean the coordinate system of the
> world vs the coordinate system of the object? How is the object's
> coordinate system determined?
I guess "index space" refers to the VolumeMeshBuilder? As far as I can
tell, index space there just refers to the indexing scheme that turns
the 3D voxel coordinates into a linear index for looking up values in
the grid.
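That linear indexing can be sketched like this (an assumed convention
with x varying fastest, not necessarily the exact layout the
VolumeMeshBuilder uses): a dense W*H*D grid stored as a flat array,
with 3D index-space coordinates mapped to a flat offset.

```python
# Map 3D grid coordinates to a flat array index (x fastest-varying).
W, H, D = 4, 3, 2

def linear_index(x: int, y: int, z: int) -> int:
    return x + W * (y + H * z)

grid = [0.0] * (W * H * D)
grid[linear_index(2, 1, 1)] = 0.75  # store a density value at (2, 1, 1)
print(linear_index(2, 1, 1))  # 18
```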
As for the object coordinate system, objects in Blender have a transform
that specifies the transform from object to world space.

- Lukas
