[Bf-cycles] Optimization project

Brecht Van Lommel brechtvanlommel at pandora.be
Wed Feb 5 21:21:17 CET 2014


Hi David,

On Wed, Feb 5, 2014 at 7:12 PM, David Fenner <d4vidfenner at gmail.com> wrote:
> 1) Perfect frame time estimation: Right now time estimation has an error
> margin so big it is useless. For heavy scene rendering and tight
> deadlines a better prediction is mandatory. A great way to do this
> would be to make, in F12 rendering, a first pass of "progressive refine"
> (let's say 20 samples per tile), then calculate the time based on samples
> left, and then continue on each tile until fully sampled, with no more
> "progressive refine". The problem right now is that time estimation is
> done by looking only at the tiles currently being rendered, and the error
> comes in when some tiles take 10 seconds and others take 10 minutes. If we
> did a time estimation on all the tiles based on 20 samples, there would be
> no margin for error, since the next 20 samples will take the same time on
> each tile, and so on. Having perfect prediction (thanks to the nature of
> path tracing) is a blessing for high end production.

Fair enough, the remaining time estimate was only added as a quick
feature. It could be made more accurate.
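The extrapolation itself is trivial once you have per-tile timings for
a fixed calibration pass. A rough sketch in plain Python (a hypothetical
helper, not actual Cycles code), relying on the fact that path tracing
cost per sample is roughly constant for a given tile:

    def estimate_remaining_seconds(tile_times, total_samples,
                                   calibration_samples=20):
        # tile_times: seconds each tile took for its first
        # calibration_samples samples, measured for ALL tiles.
        time_per_sample = sum(tile_times) / calibration_samples
        remaining_samples = total_samples - calibration_samples
        return time_per_sample * remaining_samples

    # e.g. three tiles took 0.5, 12.0 and 3.2 seconds for their first
    # 20 samples, and the scene is set to 500 samples:
    eta = estimate_remaining_seconds([0.5, 12.0, 3.2], 500)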

> 2) The glossy shader could have a button/tick box to make it "only
> direct". Basically this would make the glossy shader react only to direct
> light and HDRI. For many, many types of shaders this specular-like usage
> of the glossy shader is more than enough, and it would probably save a
> lot of bounces. For example, I wanted only specular on the leaves (more
> than enough, no reflection needed), but I couldn't do it without lowering
> all the glossy samples, thereby killing the reflection in the river (the
> one reflection that I did need).

Right, this has been on the To Do list; I've added it to the
optimization page as well now.
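In the meantime, a rough node-level workaround (an untested sketch, and
not equivalent to a true "only direct" option, since the camera-visible
glossy still traces its own bounces) is to show the glossy only to
camera rays and give all other rays a cheap diffuse. That stops glossy
ray chains from propagating through the foliage while the river keeps
its full glossy. Something like this with the Python API (node and
socket names as in the current 2.6x API; the material name is an
assumption):

    import bpy

    mat = bpy.data.materials["Leaf"]  # assumed material name
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links
    nodes.clear()  # build a minimal tree from scratch

    light_path = nodes.new('ShaderNodeLightPath')
    diffuse = nodes.new('ShaderNodeBsdfDiffuse')
    glossy = nodes.new('ShaderNodeBsdfGlossy')
    mix = nodes.new('ShaderNodeMixShader')
    output = nodes.new('ShaderNodeOutputMaterial')

    # Fac = Is Camera Ray: 1 for camera rays, 0 for all other rays.
    links.new(light_path.outputs['Is Camera Ray'], mix.inputs['Fac'])
    links.new(diffuse.outputs['BSDF'], mix.inputs[1])  # non-camera rays
    links.new(glossy.outputs['BSDF'], mix.inputs[2])   # camera rays
    links.new(mix.outputs['Shader'], output.inputs['Surface'])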

> 3) Currently, hair particles seem to be the only way to distribute
> objects over a surface in a procedural manner (like the C4D cloner or
> 3ds Max scatter). This is what I used for grass (a few modeled planes
> distributed, which was faster than hair and better looking), but it
> seemed that the more I increased the quantity, the more memory was used.
> Aren't these supposed to be instances? As far as I know, when you use an
> object instead of hair, only position, scale and rotation are considered,
> so I don't see why they couldn't be instances.

With the particle system each particle/hair is an instance, but
instances use memory too. If you're instancing an object with just a
few polygons, that's not going to save any memory. The memory usage of
instances could be reduced; there's a note about this on the
optimization page. However, if each grass leaf is its own instance,
that's never going to be memory efficient. I don't know if that's what
you're doing.

It's better to instance patches of grass, with each patch being a mesh
containing many grass leaves. That has to be done manually at the
moment; it would be good if the particle system could do this
automatically somehow. I'm not sure what the typical trick to render
lots of grass is in other renderers.
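For reference, setting up such patch instancing by hand could look
roughly like this (a sketch against the 2.6x Python API; the object
names are assumptions and property names may differ between versions):

    import bpy

    ground = bpy.data.objects["Ground"]      # assumed object names
    patch = bpy.data.objects["GrassPatch"]   # one mesh, many leaves

    mod = ground.modifiers.new("GrassScatter", 'PARTICLE_SYSTEM')
    settings = mod.particle_system.settings
    settings.type = 'HAIR'            # hair gives surface distribution
    settings.use_advanced_hair = True
    settings.render_type = 'OBJECT'   # instance an object per particle
    settings.dupli_object = patch     # 2.6x API name for the duplicate
    settings.count = 2000             # 2000 patch instances instead of
                                      # millions of individual leaves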

> 4) Dealing with transparency for custom render passes (object IDs,
> custom light for compositing, extra character ghost, whatever) is
> currently very, very hard. Basically you can't get a gray geometry to
> make a custom light pass without killing transparency settings (and in
> the future displacement) with the material override. Would it be
> possible for the render layer material override to respect the last
> transparency shader of the original material tree, as well as the
> displacement? This way you could get custom passes while keeping the
> shape/transparency of your render. Currently all sorts of tricks need to
> be done, like making a giant shader that has many transparency shaders
> mixed by custom attributes like UVs, vertex colors, object IDs, stuff
> like that. Bad to set up and memory intensive.

Transparency is indeed a pain. The best solution to that would be deep
compositing, but that's not something that can be solved in Cycles;
most of the work for it would be in the compositor.

I don't quite understand what you are suggesting. If you want to
preserve the transparency from the shader for override materials, I
guess that's possible, but I'm not sure what exactly that would
achieve. We'd probably need to analyze what you're trying to do here
in compositing; it seems a bit of a hack for a problem that may have a
better solution. There's no way to cleanly separate the transparent
surface without deep compositing. The way the pass is computed can be
tweaked, but there's probably no great solution.
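If you did want to script it manually today, one untested sketch,
assuming every material ends in a Mix Shader that blends a Transparent
BSDF with the "real" shader using the alpha texture as factor, would be
to copy each material and rewire only the non-transparent branch to a
gray diffuse, then assign those copies per object instead of using the
single render layer override:

    import bpy

    for mat in bpy.data.materials:
        if not mat.use_nodes:
            continue
        override = mat.copy()  # copies the node tree too
        override.name = mat.name + "_gray"
        nodes = override.node_tree.nodes
        links = override.node_tree.links

        gray = nodes.new('ShaderNodeBsdfDiffuse')
        gray.inputs['Color'].default_value = (0.18, 0.18, 0.18, 1.0)

        for node in nodes:
            if node.type != 'MIX_SHADER':
                continue
            # Rewire the non-transparent input to the gray diffuse,
            # keeping the Transparent BSDF and its alpha factor intact.
            for sock in (node.inputs[1], node.inputs[2]):
                if (sock.is_linked and
                        sock.links[0].from_node.type != 'BSDF_TRANSPARENT'):
                    links.remove(sock.links[0])
                    links.new(gray.outputs['BSDF'], sock)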

But this is a bit out of scope; it's more about compositing workflow
than performance.

> 5) With the setting above, maybe it could be easier to do an extra
> render pass for normal and vector, like a separate 30-sample render?
> This way some complexity could be taken away from the final (heavy)
> scene render, by taking out the AO pass, normal, vector, mist, object
> ID, etc., and making an override for another, less complex and less
> sampled render that respects transparency and displacement, and that
> gives anti-aliased normal and vector, mist, AO, anti-aliased object ID,
> etc. I know two passes isn't ideal, but it's a very decent workaround
> and could be part of the same render (just with an "AOV" stage). On the
> other hand, GPUs were really brought to their knees on this jungle
> render... to the point that adding an AO pass was simply impossible.
> Having it separate could ease the burden a little for GPUs, which
> clearly don't do as well as in simple scenes. (In fact, the TITAN is
> usually about 3 to 5 times faster than our 12-core Xeon CPUs, but on
> this jungle scene it was only about 1.6 times faster.)

With more complex scenes, divergence becomes a big problem on the GPU,
where only part of the cores will actually be working. It's not clear
to me, though, that a separate render pass would be faster overall;
maybe, but again it depends on what exactly you are trying to do here,
which I don't understand.
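For what it's worth, part of this can already be tried with the
per-render-layer sample override: one layer at full samples for the
beauty, and a second low-sample layer carrying only the utility passes.
A sketch (2.6x Python API; the beauty layer name is an assumption):

    import bpy

    scene = bpy.context.scene
    beauty = scene.render.layers["RenderLayer"]  # assumed layer name
    beauty.samples = 0           # 0 = use the scene sample count

    aov = scene.render.layers.new("UtilityPasses")
    aov.samples = 30             # low-sample layer for utility passes
    aov.use_pass_combined = False
    aov.use_pass_normal = True
    aov.use_pass_vector = True
    aov.use_pass_mist = True
    aov.use_pass_ambient_occlusion = True
    aov.use_pass_object_index = True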

> 6) The mist pass has artifacts when the transparency limit is hit. If
> you have many leaves and a cap of, for example, 7 transparency levels,
> and the limit is hit on one leaf, that leaf will show up white in the
> mist pass.

This sounds like a bug.

> 7) I think this is quite obvious, but I'll point it out anyway: the
> normal and vector passes are a necessity for compositing but are
> currently useless (no anti-aliasing, and they don't take transparency
> into account).

The normal and vector passes are already antialiased? For
transparency, it indeed only uses the normal/vector from the first
surface. Mixing the values based on transparency may be better if the
surface is entirely transparent; for partial transparency I'm not sure
much useful can be done without deep compositing.

Brecht.

