[Bf-cycles] Optimization project

Greg Zaal gregzzmail at gmail.com
Wed Feb 5 19:41:07 CET 2014


Some great points David. On the render layer side of things I've always
thought it would be best to have per-object material overrides. The method
I use to get around this is to build render layers from linked duplicate
geometry, with materials linked to objects rather than object data, which
lets you give each duplicate a different material (see the sketch below).
I usually run out of layers fairly quickly with this, but I'd rather not
start another debate about that ;)
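
A minimal sketch of that setup through bpy (2.7x API; the object and
material names are made up for the example):

    import bpy

    # Linked duplicate: shares the mesh data, carries its own materials.
    # Assumes the mesh has at least one material slot.
    src = bpy.data.objects["Tree"]            # hypothetical object
    dup = src.copy()                          # object copy, shared data
    bpy.context.scene.objects.link(dup)

    # Link the slot to the object instead of the mesh so the duplicate
    # can hold a different material than the original.
    slot = dup.material_slots[0]
    slot.link = 'OBJECT'
    slot.material = bpy.data.materials["Clay"]  # hypothetical material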

As for the "Only Direct" option for glossy shaders, I'd instead suggest a
per-shader bounce limit, but I believe Brecht already knows about this.

Cheers,
Greg


On 5 February 2014 20:12, David Fenner <d4vidfenner at gmail.com> wrote:

> Hi all. My name is David Fenner, I'm 3D director at Loica (www.loica.tv).
> Regarding Cycles optimizations, we recently did a very (VERY) heavy scene
> in Cycles that made us suffer a little more than we expected (about 1 hour
> per frame on a GTX TITAN, with minimal bounces and simplified shaders). I
> wrote down a few things regarding heavy scene production in Cycles that I
> believe would help a lot. The scene is a jungle from the inside... it
> doesn't really get any more complex than that. I'll share it as soon as my
> boss lets me; for now I'd like to share the ideas/comments, which are
> probably similar to the problems the Gooseberry team will be facing:
>
> 1) Perfect frame time estimation: Right now time estimation has an error
> margin so big that it is useless. For heavy scene rendering and tight
> deadlines a better prediction is mandatory. A great way to do this would
> be to make, in F12 rendering, a first pass of "progressive refine" (say
> 20 samples per tile), then calculate the time based on the samples left,
> and then continue on each tile until fully sampled, with no more
> progressive refine. The problem right now is that time estimation only
> considers the tiles currently being rendered, and the error comes from
> some tiles taking 10 seconds while others take 10 minutes. If we did a
> time estimation on all the tiles based on 20 samples there would be no
> margin for error, since the next 20 samples will take the same time on
> each tile, and so on. Having perfect prediction (thanks to the nature of
> path tracing) is a blessing for high-end production.
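>
> A back-of-the-envelope sketch of that estimator (plain Python, not
> actual Cycles code; PROBE is the hypothetical probe sample count):
>
>     # Probe every tile with PROBE samples, then extrapolate linearly:
>     # path tracing cost per sample is constant for a given tile.
>     PROBE = 20
>
>     def estimate_remaining(probe_times, total_samples):
>         scale = (total_samples - PROBE) / float(PROBE)
>         return sum(t * scale for t in probe_times)
>
>     # Tiles that took 2 s and 600 s for their first 20 of 500 samples:
>     print(estimate_remaining([2.0, 600.0], 500))  # -> 14448.0 seconds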
>
> 2) The glossy shader could have a button/tick box to make it "only
> direct". Basically this would make the glossy shader react only to direct
> light and HDRI. For many, many types of shaders this specular-like usage
> of the glossy shader is more than enough, and it would probably save a
> lot of bounces. For example, I wanted only specular on the leaves (more
> than enough, no reflection needed), but I couldn't do it without lowering
> the glossy samples globally, thereby killing the reflection in the river
> (the one reflection that I did need). A node-level approximation is
> sketched below.
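>
> Until such an option exists, a per-shader depth limit can be roughly
> approximated with the Light Path node's Ray Depth output, restricting
> the glossy lobe to camera rays (bpy sketch, assuming a recent build
> with the Ray Depth output; the material name is made up):
>
>     import bpy
>
>     mat = bpy.data.materials.new("leaf_direct_only")
>     mat.use_nodes = True
>     nodes, links = mat.node_tree.nodes, mat.node_tree.links
>     out  = nodes['Material Output']   # created by use_nodes
>     base = nodes['Diffuse BSDF']      # default diffuse, kept as base
>
>     light_path = nodes.new('ShaderNodeLightPath')
>     glossy     = nodes.new('ShaderNodeBsdfGlossy')
>     mix        = nodes.new('ShaderNodeMixShader')
>
>     # Ray Depth > 0 -> fall back to plain diffuse, so the glossy lobe
>     # is only evaluated for camera rays and spawns no deeper bounces.
>     depth_test = nodes.new('ShaderNodeMath')
>     depth_test.operation = 'GREATER_THAN'
>     depth_test.inputs[1].default_value = 0.0
>
>     links.new(light_path.outputs['Ray Depth'], depth_test.inputs[0])
>     links.new(depth_test.outputs[0], mix.inputs['Fac'])
>     links.new(glossy.outputs[0], mix.inputs[1])   # Fac 0: camera rays
>     links.new(base.outputs[0],   mix.inputs[2])   # Fac 1: deeper rays
>     links.new(mix.outputs[0], out.inputs['Surface'])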
>
> 3) Currently, hair particles seem to be the only way to distribute
> objects over a surface in a procedural manner (like the C4D Cloner or
> 3ds Max Scatter). This is what I used for the grass (a few modelled
> planes scattered around were faster and better looking than hair), but it
> seemed that the more I increased the count, the more memory was used.
> Aren't these supposed to be instances? As far as I know, when you use an
> object instead of hair only its position, scale and rotation are
> considered, so I don't see why they couldn't be instances. The setup in
> question is sketched below.
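>
> For reference, the scatter setup being described (bpy sketch, 2.7x API;
> object names are hypothetical):
>
>     import bpy
>
>     # Hair particle system used purely as an object scatterer.
>     ground = bpy.data.objects["Ground"]
>     mod = ground.modifiers.new("scatter", 'PARTICLE_SYSTEM')
>     ps = mod.particle_system.settings
>     ps.type = 'HAIR'
>     ps.count = 50000
>     ps.render_type = 'OBJECT'
>     ps.dupli_object = bpy.data.objects["GrassPatch"]
>     ps.use_rotations = True   # per-particle rotation, as noted above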
>
> 4) Dealing with transparency for custom render passes (object IDs,
> custom light for compositing, an extra character ghost, whatever) is
> currently very, very hard. Basically you can't get grey geometry for a
> custom light pass without the material override killing the transparency
> settings (and, in the future, displacement). Could the render layer
> material override respect the last transparency shader of the original
> material tree, as well as the displacement? This way you could get custom
> passes while keeping the shape/transparency of your render. Currently all
> sorts of tricks need to be used, like building a giant shader that mixes
> many transparency shaders by custom attributes (UVs, vertex colors,
> object IDs and the like). It is bad to set up and memory intensive. The
> per-material side of that workaround is sketched below.
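>
> For illustration, the per-material half of the workaround: a grey
> override material that reuses the original alpha texture so the
> silhouettes survive (bpy sketch; the texture path is made up):
>
>     import bpy
>
>     clay = bpy.data.materials.new("clay_override")
>     clay.use_nodes = True
>     nodes, links = clay.node_tree.nodes, clay.node_tree.links
>     grey = nodes['Diffuse BSDF']      # default grey diffuse
>     out  = nodes['Material Output']
>
>     tex = nodes.new('ShaderNodeTexImage')
>     tex.image = bpy.data.images.load("//leaf_alpha.png")  # hypothetical
>
>     transp = nodes.new('ShaderNodeBsdfTransparent')
>     mix    = nodes.new('ShaderNodeMixShader')
>
>     # Alpha 0 -> transparent, alpha 1 -> grey diffuse.
>     links.new(tex.outputs['Alpha'], mix.inputs['Fac'])
>     links.new(transp.outputs[0], mix.inputs[1])
>     links.new(grey.outputs[0],   mix.inputs[2])
>     links.new(mix.outputs[0], out.inputs['Surface'])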
>
> 5) With the setting above, maybe it could be easier to do an extra
> render pass for normal and vector as a separate, 30-sample render? This
> way some complexity could be taken away from the final (heavy) scene
> render by taking out the AO, normal, vector, mist and object ID passes,
> and making an override for another, less complex and less sampled render
> that respects transparency and displacement and delivers anti-aliased
> normal, vector, mist, AO and object ID passes (a render layer sketch
> follows). I know two-pass isn't ideal, but it is a very decent workaround
> and could be part of the same render (just with an "AOV" stage). On the
> other hand, GPUs really went down on their knees on this jungle render...
> to the point that adding an AO pass was simply impossible. Having it
> separate could ease the burden a little for GPUs, which clearly don't do
> as well here as in simple scenes. (In fact, the TITAN is usually about 3
> to 5 times faster than our 12-core Xeon CPUs, but on this jungle scene it
> was only about 1.6 times faster.)
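>
> Part of this can already be expressed with the per-layer sample
> override (bpy sketch, 2.7x API; the layer name is made up):
>
>     import bpy
>
>     # A second render layer carrying only the cheap utility passes.
>     scene = bpy.context.scene
>     rl = scene.render.layers.new("utility")
>     rl.samples = 30                  # Cycles per-layer sample override
>     rl.use_pass_combined = False
>     rl.use_pass_normal = True
>     rl.use_pass_vector = True
>     rl.use_pass_mist = True
>     rl.use_pass_object_index = True
>     rl.use_pass_ambient_occlusion = True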
>
> 6) The mist pass has artifacts when the transparency limit is hit. If
> you have many leaves and a cap of, for example, 7 transparency levels,
> and the limit is hit on one leaf, that leaf shows up white in the mist
> pass.
>
> 7) I think this is quite obvious, but I'll point it out anyway: the
> normal and vector passes are a necessity for compositing but are
> currently useless (no anti-aliasing, and they don't take transparency
> into account).
>
> That's about it... I hope you guys find some of these points useful, and
> good luck with the project. I'll upload the jungle scene as a benchmark
> too, as soon as I am able to.
>
> David.
>
>
>
> 2014-02-05 Brecht Van Lommel <brechtvanlommel at pandora.be>:
>
> Hi all,
>>
>> With Gooseberry coming up, it would be good to start focusing on (CPU)
>> optimization for the next few release cycles. For new features I still
>> plan to finish volume rendering and add deformation motion blur soon,
>> but besides that a more organized optimization project is something
>> I'd like to spend time on.
>>
>> See this wiki page for more details:
>>
>> http://wiki.blender.org/index.php/Dev:2.6/Source/Render/Cycles/Optimization
>>
>> There's already great work being done by Sv. Lockal and others in this
>> area on the low-level optimization front, but there are many more
>> opportunities. I'll try to add more detail and ideas over time; this
>> is just what I could think of off the top of my head.
>>
>> You can help out in various ways:
>> * Contribute code to implement optimizations
>> * Help gather test scenes representative of Gooseberry shots
>> * Suggest practical optimization ideas
>>
>> It would also be cool if someone could set up some webpage with a
>> graph that tracks Cycles performance on test scenes over time, maybe
>> doing a render with the latest Git version once a week or so (like
>> http://arewefastyet.com/).
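>>
>> Such a tracker could start as a small script that times headless
>> renders of the test scenes for each new build (sketch; the scene
>> paths are hypothetical):
>>
>>     import subprocess, time
>>
>>     SCENES = ["jungle.blend", "bmw.blend"]
>>
>>     for scene in SCENES:
>>         start = time.time()
>>         # Render frame 1 in background mode with the build under test.
>>         subprocess.check_call(["blender", "-b", scene, "-f", "1"])
>>         print("%s: %.1f s" % (scene, time.time() - start))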
>>
>> Thanks,
>> Brecht.