[Bf-committers] MB/DOF Further optimizations without changes to current render architecture

Magnus Löfgren lofgrenmeister at gmail.com
Tue Apr 27 11:02:47 CEST 2010


Some typos in my previous mail, and I forgot some info:

Regarding the DOF method: obviously you need a target as you randomly move
the camera over the disc, but this was probably obvious, I just forgot to
mention it.
The camera target is not transformed over the disc; it stays static over
the sub-passes.
This can either be an actual camera target in the scene (such as an empty),
or an "invisible" target which gets its position from the camera's local
axis at its initial position/rotation, pushed out to the DOF distance
(specified by the user).
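
Something like this pseudo-Python sketch could compute that target (the
function and matrix convention are illustrative only, not Blender's actual
code): take the camera's initial world matrix, read off its position and
local -Z view axis, and push the target out to the DOF distance.

    # Illustrative sketch only, not Blender's API. Derives a fixed DOF
    # target from the camera's initial 4x4 world matrix and the
    # user-specified DOF distance. Assumes the camera looks down its
    # local -Z axis and the matrix has no scaling.
    def dof_target(cam_world_matrix, dof_distance):
        # Translation part of the matrix = camera position.
        pos = [row[3] for row in cam_world_matrix[:3]]
        # Third column of the rotation part = local +Z; negate for the
        # view direction.
        view_dir = [-row[2] for row in cam_world_matrix[:3]]
        return [p + dof_distance * d for p, d in zip(pos, view_dir)]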


And in the Pros/Cons section, I typed "Increased render times"; of course
that should read "decreased render times" :)



2010/4/27 Magnus Löfgren <lofgrenmeister at gmail.com>

> Let's change this subject a bit :)
>
> Some other ideas, possibly easier to implement without changes to the
> architecture:
>
> Is it possible to render all the sub-passes "silently" per render tile?
> What I mean by that is you calculate all the scene transforms over the
> specified number of sub-passes before the scene is rendered, and then you
> render all of the sub-passes in each render tile/bucket and only display the
> result when it's finished.
>
> So, if you have specified 32 full-scene motion blur passes, you do (a
> rough sketch follows the list):
> 1: Calculate all transforms in the scene + raytree + shadow maps etc 32
> times
> 2:    Silently render all 32 passes in the FIRST render tile
> 3:       Display the result in the FIRST render tile
> 4:    Silently render all 32 passes in the SECOND render tile
> 5:       Display the result in the SECOND render tile
> 6:    Etc
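>
> In rough pseudo-Python, that loop might look something like this (every
> name here is made up for illustration, nothing is Blender's actual
> internals):
>
>     # Sketch: evaluate all sub-pass scene states up front, then render
>     # each tile num_passes times and display only the merged result.
>     def render_frame(scene, tiles, num_passes):
>         states = [scene.evaluate_at_subframe(i / num_passes)
>                   for i in range(num_passes)]   # transforms, raytree, maps
>         for tile in tiles:
>             accum = [0.0] * tile.num_pixels
>             for state in states:                # "silent" sub-pass renders
>                 samples = render_tile(state, tile)
>                 accum = [a + s for a, s in zip(accum, samples)]
>             # Only the merged average is ever displayed.
>             tile.display([a / num_passes for a in accum])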
>
> Possible further optimizations of this method:
> * Render objects that have no animated deformations as instances (i.e.
> object transformation only)
> * Render sub-passes with dithering or random noise, omitting pixels
> completely from the rendering process to save render time (the omitted
> pixels are filled in when the sub-passes are merged after completion of
> the render tile/bucket). Besides faster render times, this will reduce
> strobing effects somewhat (a sketch follows this list)
> * Deep shadow maps can be combined intelligently in step 1 (calculate
> transforms, raytree, shadow maps)
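>
> The dithering idea could look roughly like this in the same pseudo-Python
> (shade_pixel is a made-up stand-in for the actual shading call): each
> sub-pass shades only a random subset of the tile's pixels, and the merge
> averages whichever sub-passes actually hit each pixel.
>
>     import random
>
>     # Sketch: each sub-pass skips a random share of pixels to save time;
>     # skipped pixels are filled in at merge time by the passes that did
>     # shade them.
>     def render_tile_dithered(states, tile, keep_fraction=0.5):
>         sums = [0.0] * tile.num_pixels
>         counts = [0] * tile.num_pixels
>         for state in states:
>             for p in range(tile.num_pixels):
>                 if random.random() < keep_fraction:  # omit the rest
>                     sums[p] += shade_pixel(state, tile, p)
>                     counts[p] += 1
>         return [s / c if c else 0.0 for s, c in zip(sums, counts)]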
>
>
> And this could be extended to depth of field, for example with 32
> sub-passes (disc sampling sketched after the list):
> 1: Attach camera to a disc with a user-specified radius
> 2:    Randomly place the camera over the surface of the disc 32 times
> 3:        Silently render all 32 passes in the FIRST render tile
> 4:        Silently render all 32 passes in the SECOND render tile
> 5:        Etc
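>
> Step 2 is just uniform sampling of a disc, which a standard sqrt mapping
> handles (plain Python; the radius value is only an example):
>
>     import math, random
>
>     # Sketch: uniform random point on a disc of the user-specified
>     # radius. The sqrt keeps the density uniform in area rather than
>     # clumped at the centre.
>     def sample_disc(radius):
>         r = radius * math.sqrt(random.random())
>         theta = 2.0 * math.pi * random.random()
>         return (r * math.cos(theta), r * math.sin(theta))
>
>     offsets = [sample_disc(0.05) for _ in range(32)]  # 32 camera offsets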
>
>
> For stills only, the DOF method would greatly benefit from rendering
> instances.
>
> Combined with motion blur, this process could be (sketched below):
>
> 1: Attach camera to a disc with a user-specified radius
> 2:    Calculate all transforms in the scene (including placing the camera
> randomly on the disc) + raytree + shadow maps etc. 32 times
> 3:        Silently render all 32 passes in the FIRST render tile/bucket
> 4:            Display the result in the FIRST render tile
> 5:        Silently render all 32 passes in the SECOND render tile/bucket
> 6:             Display the result in the SECOND render tile
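>
> In the same made-up pseudo-Python as before, each sub-pass would then
> carry both its own scene time and its own lens offset:
>
>     # Sketch: one state per sub-pass, sampling motion blur (sub-frame
>     # time) and DOF (disc offset, with the camera re-aimed at the fixed
>     # target) in a single set of passes.
>     def build_subpass_states(scene, num_passes, aperture_radius):
>         states = []
>         for i in range(num_passes):
>             state = scene.evaluate_at_subframe(i / num_passes)
>             state.camera_offset = sample_disc(aperture_radius)
>             states.append(state)
>         return states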
>
>
> Pros/Cons of this method:
>
> + Will require minimal change to the render architecture (hopefully)
> + Decreased render times by optimizing the process, reducing a lot of
> steps in the motion blur pipeline
> + Increased quality, especially if the dithering and/or noise method is
> added, approaching raytraced or REYES-like motion blur results
> + Blender actually gets a real depth of field feature that's not a 2.5D
> image-based filter
>
> - Will possibly require quite a lot of render tiles, probably more than
> the default 8x8
> - The initial scene transform step might be quite memory-consuming, in
> particular with heavy displacements; the transform step might have to be
> moved per tile instead?
>
> Any feedback appreciated
>
> //Magnus
>
>
> ---------- Forwarded message ----------
> From: Magnus Löfgren <lofgrenmeister at gmail.com>
> Date: 2010/4/21
> Subject: Blender render architecture - Interleaved sampling
> To: bf-committers at blender.org
>
>
> Hi,
>
> Being a non-programmer and lacking understanding of Blender's render
> architecture, I'm not sure if this is a stupid question.
> But how much work would it take to implement interleaved sampling in
> Blender's internal renderer for motion blur and depth of field?
> http://www.cs.ubc.ca/labs/imager/tr/2001/keller2001a/
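>
> As I understand the paper, the core idea is roughly this (illustrative
> Python only, not the paper's exact construction): a small set of sample
> patterns is tiled across the image so that neighbouring pixels use
> different patterns, turning coherent strobing artifacts into incoherent,
> easily-filtered noise.
>
>     import math, random
>
>     # Sketch: build a few sample patterns and interleave them in an
>     # n x n arrangement across the image so adjacent pixels differ.
>     def make_patterns(num_patterns, samples_per_pixel):
>         return [[(random.random(), random.random())
>                  for _ in range(samples_per_pixel)]
>                 for _ in range(num_patterns)]
>
>     def pattern_for_pixel(x, y, patterns):
>         n = int(math.sqrt(len(patterns)))  # assumes a square set, e.g. 4
>         return patterns[(y % n) * n + (x % n)]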
>
> Blender has raytraced area light shadows and glossy reflections after
> all, so a similar approach must already have been taken in this area.
>
> If this is indeed a mammoth task and there isn't enough time to implement
> it, how about another approach: optimizing the multi-sample motion blur
> and multi-sample DOF already present in Blender 2.4x?
>
> For example, in Project Messiah's internal renderer (it shares the same
> roots as Arnold from Sony), when you render motion blur the raytrace
> samples are divided by the number of passes, so each "sub-frame" rendered
> is extremely noisy, but merged together they produce noise-free results.
> It's still a multi-sample "render-the-whole-scene x number of times"
> approach, but a lot smarter than Blender's, since you can keep the number
> of samples in each sub-frame to a minimum to reduce total render time
> without sacrificing quality. So you can increase the number of passes to
> reduce the common strobing artifacts of multi-pass motion blur.
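>
> In made-up pseudo-Python, just to show the arithmetic (render_pass and
> average_frames are stand-ins for the real renderer):
>
>     import random
>
>     # Sketch: split the total ray budget over the passes; a distinct
>     # seed per pass gives each sub-frame a different noise pattern, so
>     # averaging converges instead of reinforcing one fixed pattern.
>     def render_motion_blur(scene, total_samples, num_passes):
>         per_pass = max(1, total_samples // num_passes)
>         frames = []
>         for i in range(num_passes):
>             rng = random.Random(i)  # different seed per sub-frame
>             frames.append(render_pass(scene, time=i / num_passes,
>                                       samples=per_pass, rng=rng))
>         return average_frames(frames)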
>
> In Blender, the noise pattern remains identical for each sub-frame, so
> reducing the number of samples in ambient occlusion or raytraced shadows
> to take advantage of multi-pass rendering doesn't really give an advantage
> like it does in Project Messiah.
>
> Of course, interleaved sampling would still be ideal, since strobing is
> completely eliminated and the only artifact left to take care of is
> noise, so you only need to increase the pixel samples.
>
> Thanks
>
>

