[Bf-cycles] Tiled Texture Caching for Cycles
stewreo at gmail.com
Sun May 7 11:05:25 CEST 2017
> On 6. May 2017, at 21:38, Brecht Van Lommel <brechtvanlommel at pandora.be> wrote:
> Yes, that's a good reference for how to compute the differentials of
> any operation. I fear the only way to do this is to manually implement
> multiple versions of SVM nodes though. Maybe that's acceptable if it's
> only the ones commonly used for manipulating texture coordinates.
Maybe most of this can be handled with typedef/#define? OSL’s Dual class
has the usual operator overloads, so this should be possible without too
much extra code.
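To illustrate the idea, here is a minimal sketch of a forward-mode dual
number in the spirit of OSL’s Dual class: each value carries its partial
derivatives along screen x/y, and the overloaded operators propagate them
via the chain rule, so existing shader code could compute differentials with
little modification. The struct and operator set here are illustrative, not
the actual OSL or Cycles types.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical minimal dual number: value plus screen-space partials.
struct Dual {
    float val;  // the value itself
    float dx;   // partial derivative along screen x
    float dy;   // partial derivative along screen y
};

// Sum/difference rules: derivatives add/subtract component-wise.
inline Dual operator+(Dual a, Dual b) { return {a.val + b.val, a.dx + b.dx, a.dy + b.dy}; }
inline Dual operator-(Dual a, Dual b) { return {a.val - b.val, a.dx - b.dx, a.dy - b.dy}; }

// Product rule: d(ab) = a*db + b*da.
inline Dual operator*(Dual a, Dual b) {
    return {a.val * b.val,
            a.val * b.dx + b.val * a.dx,
            a.val * b.dy + b.val * a.dy};
}

// Chain rule example for a math function: d(sqrt(a)) = da / (2*sqrt(a)).
inline Dual dual_sqrt(Dual a) {
    float s = std::sqrt(a.val);
    return {s, a.dx / (2.0f * s), a.dy / (2.0f * s)};
}
```

For example, evaluating f(x) = x*x + 3x at x = 2 with dx seeded to 1 yields
value 10 and x-derivative 7, with no node-specific derivative code.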
> Yeah, interesting comparison to REYES. We are basically doing this for
> bump mapping already, running the shader 3x, I don't think this would
> be very different in implementation?
It’s not quite the same. For bump mapping the subtrees are cloned and run
sequentially. I was thinking of running the shaders on three points in parallel
(similar to how code runs on a GPU shading unit), so that at any point in the
shader, partial derivatives can be calculated by looking at the neighbouring
points.
I was playing with the subtree cloning approach, and that may be a way
to get the basics working (at the expense of a larger shader tree). Maybe
then I can continue with introducing optimised shortcuts for common
use cases, such as direct use of UV maps, box/sphere mapping, etc.
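A rough sketch of the three-points-in-parallel idea, with illustrative names
only: the same shader stage runs in lockstep on the main shading point plus
its x/y offset neighbours (like a GPU quad minus one lane), and at any stage
the differentials fall out by differencing between lanes.

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Three shading "lanes" evaluated in lockstep (hypothetical layout):
// lane 0 = main point, lane 1 = +dx neighbour, lane 2 = +dy neighbour.
struct Lanes {
    float p[3];
};

// Apply one shader stage (a node, here modelled as a scalar function)
// to all three lanes in parallel.
Lanes run_stage(const Lanes &in, const std::function<float(float)> &node) {
    Lanes out;
    for (int i = 0; i < 3; i++)
        out.p[i] = node(in.p[i]);
    return out;
}

// At any point in the shader, partial derivatives are lane differences.
float ddx(const Lanes &l) { return l.p[1] - l.p[0]; }
float ddy(const Lanes &l) { return l.p[2] - l.p[0]; }
```

The difference from the subtree-cloning approach is that nothing is
duplicated in the shader tree; every node simply runs on three inputs, and a
texture lookup node can call ddx/ddy on its incoming coordinate to pick a
mip level.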
> Right, I was thinking we could assume a sharp reflection. This means
> loading in a higher resolution texture than we could get away with,
> but the impact might not be so bad since it's only for lights? There
> is still some ray footprint then, and it's not really worse than e.g.
> a sharp reflection from a high frequency bump map or hair.
High frequency bumps on sharp speculars are indeed one of the worst
case scenarios for texture (or geometry) caching. Disco balls are evil. :)
If there are only a few light sources, then it’s OK. It gets tricky when there
are many meshes with emitting shaders on them or, as I have seen some
users do, ubershaders with emission everywhere, with 100% black
image textures plugged into most (but not all) emission values.
> I guess we could split light sampling so that we do sample point on
> light -> evaluate BSDF -> evaluate light shader, if it's not too
> complicated or slow.
I was looking at that, and it looks like it shouldn’t be too hard. In addition
to evaluating the BSDF, we’d also need to get domega_in_dx/dy from the
BSDFs, maybe through a separate call. For the branched path tracing
case, we would then need to pick the result from the BSDF with the
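The split light-sampling pipeline described above could be sketched roughly
as follows. This is a toy 1D model, and every name and formula in it is
illustrative rather than actual Cycles API: the point is only the ordering of
steps, with the light shader evaluated last so it can use the footprint
differentials obtained from the BSDF.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical types for the split pipeline (1D toy, not Cycles').
struct LightSample { float dir; };  // direction toward the sampled light point
struct BsdfEval {
    float weight;                   // BSDF value for this direction
    float domega_dx, domega_dy;     // footprint differentials of omega_in
};

// Step 1: sample a point on the light.
LightSample sample_light_point() { return {1.0f}; }

// Step 2: evaluate the BSDF, also returning its ray-footprint
// differentials (toy model: footprint widens with roughness).
BsdfEval eval_bsdf(float /*wo*/, float wi, float roughness) {
    return {std::exp(-wi * wi), roughness, roughness};
}

// Step 3: evaluate the light shader last, with the footprint from the
// BSDF; a real light shader would use it to select a texture mip level,
// here we just fold it into a stand-in emission value.
float eval_light_shader(const LightSample &ls, float footprint) {
    return 10.0f / (1.0f + footprint) * ls.dir;
}

float direct_light(float wo, float roughness) {
    LightSample ls = sample_light_point();
    BsdfEval b = eval_bsdf(wo, ls.dir, roughness);
    float emission = eval_light_shader(ls, b.domega_dx);
    return b.weight * emission;
}
```

For the branched path tracing case, each BSDF evaluated against the same
light sample would report its own domega_in_dx/dy, so step 3 would need
one footprint chosen among them.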