# [Bf-cycles] Future of the non-progressive renderer and Cycles strand rendering

Brecht Van Lommel brechtvanlommel at pandora.be
Sat Jan 5 01:15:04 CET 2013

```
Hi Nathan,

On Fri, Jan 4, 2013 at 9:17 PM, Nathan Vegdahl <cessen at cessen.com> wrote:
> To deal with expanded bounds, you can just add/subtract the expansion
> radius to the max/min of each bounding box as you visit it.  It
> shouldn't require a rebuild of the BVH.

Ah, that's an interesting idea.

> It's a bit of a chicken-and-egg problem in this case, though, because
> you need a good estimate of the ray footprint to know how much to
> expand each bounding box, but you need to intersect the ray with the
> bounding boxes to estimate the footprint.
>
> But just expanding the bounds of a BVH is pretty straightforward.
>
> Incidentally, if you can estimate an expansion radius for the root
> node, then the rest should be pretty straightforward, as you can use
> the estimate from parent nodes to expand the child nodes as you go
> along.  Something like:
>
> recurse_bvh(ray, node, expand_radius):
>     if intersect(ray, node.bbox + expand_radius):
>         new_radius = node.max_footprint_inside_bounds(ray)
>         recurse_bvh(ray, node.child_1, new_radius)
>         recurse_bvh(ray, node.child_2, new_radius)
>
> Not sure how best to estimate the expansion radius for the root node, though.

I guess you'd take the farthest point on the bounding box and compute
what the radius would be there, which would give you an upper bound on
the radius. Estimating only at the root may not be precise enough,
though. If you've got a field of grass, the difference between the
near and far grass width may be quite big, and an overly conservative
estimate would be a big performance problem. Just one far-away stray
hair could mess up performance for all the others. So you'd probably
still end up doing it for lower BVH nodes too, though perhaps not for
all of them.

I do wonder if just refitting the BVH beforehand isn't still faster in
most cases. We don't actually need to do a full rebuild: for the
viewport we can do a fast multithreaded refit, even if the resulting
tree quality degrades compared to a full rebuild.

> Another thing to note is that even with the tightest possible bounds,
> the BVH quality will degrade substantially as the hairs get thicker
> and thicker, due to increased overlap in the leaf bounds.  In the
> worst-case, all of the hairs substantially overlap with each other,
> and you end up testing each ray against almost every hair.  I imagine
> that would degrade performance pretty enormously (though it would be
> good to test, regardless).  I feel like some kind of hair pruning
> would need to happen as well to maintain performance for distant hair,
> or hair reflected in a convex surface.

This is indeed problematic. If you've got lots of scaled-up,
transparent hairs, it reduces aliasing but the cost per sample can go
up dramatically. From what I've been told, Arnold has a setting that
limits the maximum number of hairs that are made transparent. I'm not
sure how much limiting this number affects the final look, but I can
imagine that after e.g. 5 layers of hair the lower layers may not be
that different.

Brecht.
```
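The expanded-bounds traversal Nathan sketches above could look roughly like the following. This is a minimal illustration, not Cycles code: it assumes a simple linear cone model for the ray footprint (radius grows with distance), and the `Ray`/`Node` structures and helper names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Ray:
    origin: tuple
    direction: tuple    # assumed normalized
    base_radius: float  # footprint radius at the ray origin
    spread: float       # footprint growth per unit distance (cone model)

@dataclass
class Node:
    bbox_min: tuple
    bbox_max: tuple
    children: list = field(default_factory=list)
    prims: list = field(default_factory=list)  # leaf payload, e.g. hair indices

def intersect_expanded(ray, node, expand):
    """Slab test against the node bounds grown by `expand` on every side."""
    tmin, tmax = 0.0, float("inf")
    for i in range(3):
        lo = node.bbox_min[i] - expand
        hi = node.bbox_max[i] + expand
        d = ray.direction[i]
        if abs(d) < 1e-12:
            # Ray parallel to this slab: origin must lie inside it.
            if not (lo <= ray.origin[i] <= hi):
                return False
            continue
        t0, t1 = (lo - ray.origin[i]) / d, (hi - ray.origin[i]) / d
        if t0 > t1:
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
        if tmin > tmax:
            return False
    return True

def max_footprint_inside_bounds(ray, node):
    """Conservative footprint: evaluate the cone at the farthest bbox corner."""
    far = 0.0
    for cx in (node.bbox_min[0], node.bbox_max[0]):
        for cy in (node.bbox_min[1], node.bbox_max[1]):
            for cz in (node.bbox_min[2], node.bbox_max[2]):
                d = ((cx - ray.origin[0]) ** 2 +
                     (cy - ray.origin[1]) ** 2 +
                     (cz - ray.origin[2]) ** 2) ** 0.5
                far = max(far, d)
    return ray.base_radius + ray.spread * far

def recurse_bvh(ray, node, expand_radius, out_prims):
    """Traverse with bounds expanded by the parent's radius estimate,
    refining the estimate at each node before descending."""
    if not intersect_expanded(ray, node, expand_radius):
        return
    new_radius = max_footprint_inside_bounds(ray, node)
    if node.children:
        for child in node.children:
            recurse_bvh(ray, child, new_radius, out_prims)
    else:
        out_prims.extend(node.prims)
```

As discussed above, the root estimate here is just the cone radius at the farthest root corner, which is exactly the kind of upper bound that can get too loose for distant geometry.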
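The refit alternative Brecht mentions can be sketched as a bottom-up bounds recomputation that leaves the tree topology untouched, with leaf bounds padded by the hair radius. This is an illustrative serial sketch (the real viewport refit would be multithreaded over subtrees); the `Node` layout is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    bbox_min: list
    bbox_max: list
    children: list = field(default_factory=list)
    points: list = field(default_factory=list)  # hair control points in a leaf

def refit(node, hair_radius):
    """Recompute bounds bottom-up without changing the tree topology.
    Leaf bounds are rebuilt from their points and padded by the hair
    radius; inner bounds are the union of their children's bounds."""
    if node.children:
        for child in node.children:
            refit(child, hair_radius)
        for i in range(3):
            node.bbox_min[i] = min(c.bbox_min[i] for c in node.children)
            node.bbox_max[i] = max(c.bbox_max[i] for c in node.children)
    else:
        for i in range(3):
            node.bbox_min[i] = min(p[i] for p in node.points) - hair_radius
            node.bbox_max[i] = max(p[i] for p in node.points) + hair_radius
```

Because no nodes are split or merged, this is much cheaper than a rebuild, at the cost of the tree quality degradation mentioned above when the underlying geometry moves a lot.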
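The layer-limit idea attributed to Arnold above could be implemented along these lines. This is not Arnold's actual algorithm, just a sketch of the concept: accumulate coverage front to back through partially transparent hair hits, and force any hit beyond the layer limit to be opaque, on the assumption that deeper layers contribute little.

```python
def shade_transparent_hairs(alphas, max_layers):
    """Front-to-back coverage accumulation over hair hits.

    `alphas` are the per-hit opacities in ray order. After `max_layers`
    partially transparent hits, the next hit is treated as fully opaque,
    terminating the transparency chain early."""
    transmittance = 1.0
    coverage = 0.0
    for i, alpha in enumerate(alphas):
        if i >= max_layers:
            alpha = 1.0  # force opaque beyond the layer limit
        coverage += transmittance * alpha
        transmittance *= 1.0 - alpha
        if transmittance <= 0.0:
            break
    return coverage
```

For e.g. three hits of opacity 0.5, an unlimited traversal yields coverage 0.875, while a limit of 2 layers yields 1.0: the third hit is treated as opaque, slightly darkening the result but bounding the per-sample cost.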