[Bf-cycles] Tiled Texture Caching for Cycles
Brecht Van Lommel
brechtvanlommel at pandora.be
Sun Apr 30 13:29:56 CEST 2017
Great to see this being worked on.
1) Running the shader 3 times is interesting; it would be good to see
what kind of performance impact that has in practice. The way I
imagined it is that we'd have two variations of shaders, one with
differentials and one without. However, if we can automate it without
too much of a performance impact, that would be great. I think texture
coordinates being looked up in textures can work: you could do lookups
at P, P+dPdx and P+dPdy, all with the same differentials dPdx and
dPdy, and then compute the differentials from the results of those 3
lookups.
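To make the three-lookup idea concrete, here is a minimal sketch (not Cycles code; `eval_texcoord` is a hypothetical stand-in for whatever node chain produces s,t from P):

```cpp
struct float3 { float x, y, z; };
struct float2 { float s, t; };

static float3 add(float3 a, float3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

// Hypothetical stand-in for a procedural mapping from position P to (s, t).
static float2 eval_texcoord(float3 P) {
    return {0.5f * P.x + 0.25f * P.z, 2.0f * P.y};
}

struct TexDifferentials { float2 st, dstdx, dstdy; };

// Run the coordinate-generating code three times, at P, P+dPdx and P+dPdy,
// then take forward differences to get (dsdx, dtdx) and (dsdy, dtdy).
static TexDifferentials texcoord_with_differentials(float3 P, float3 dPdx, float3 dPdy) {
    float2 st  = eval_texcoord(P);
    float2 stx = eval_texcoord(add(P, dPdx));
    float2 sty = eval_texcoord(add(P, dPdy));
    return {st,
            {stx.s - st.s, stx.t - st.t},
            {sty.s - st.s, sty.t - st.t}};
}
```

As noted above, this breaks down when `eval_texcoord` itself reads a texture, since that inner lookup would in turn need filtered access.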
2) I would keep this an SVM implementation thing without any new type
of socket exposed to the outside. I think the input would be tagged as
requiring differentials somehow, and then the SVM compiler and nodes
could allocate and use more space for those sockets.
3) With reverse paths in bidirectional this also seems quite
difficult. An approximation would be to use differentials as if the
emitter was directly seen from the camera, making the same assumptions
as we do for adaptive subdivision. For the current case though, isn't
it possible to compute differentials somehow? I think we know the view
direction, current shading normal, and light direction when
shader_setup_from_sample() is called. So there must be some
differentials we can derive from that?
On Wed, Apr 26, 2017 at 10:28 PM, Stefan Werner <stewreo at gmail.com> wrote:
> Hi everyone,
> I’d like to tackle the challenge of adding support for texture caching to Cycles, through the means of OIIO’s TextureSystem. Eventually, that should allow us to render scenes with unreasonably large texture assets in a modest memory footprint.
> I attempted this at an earlier point already, but didn’t have the time to finish it. To get any benefit whatsoever, texture lookups must use correct dsdx, dsdy, dtdx and dtdy differentials. And that is where I remember three stumbling blocks:
> 1) Generating the differentials. We already have dudx, dudy, dvdx and dvdy in Cycles. Thus, it’s fairly straightforward to calculate the differentials when the texture lookups are being done with the UV coordinates of the underlying geometry, provided that the UV map is not degenerate. It does, however, get tricky when the texture lookup uses s,t coordinates derived by other means, for example through procedural patterns or, worse, by looking them up in another texture (which again would then need a filtered lookup). Does anyone have an idea how we could reasonably derive those differentials in those cases for SVM? We could run the shader three times, once at P, then at P+dPdx and at P+dPdy (similar to how bump maps are calculated), but that would break down when texture coordinates are themselves looked up in textures.
> 2) Passing texture coordinates around. A texture lookup would now take not just s and t as parameters but s, t, dsdx, dsdy, dtdx and dtdy, so the sockets would have to carry six floats instead of two. This would either require extra input sockets on texture nodes or a new type of socket. Both methods could break existing shaders.
> 3) Currently, shader_setup_from_sample() sets all differentials to zero. Any texture lookups from samples (for example, textured mesh lights or background lights) would therefore happen unfiltered and have terrible performance. The only correct way out of that I can think of is to defer shader evaluation for light samples until after the full path has been constructed, which may not be easy to do, especially with the split kernel.
> I’d be happy to hear your thoughts. Am I overlooking potential problems or am I missing an obvious solution?