[Bf-committers] DOF, a more optically correct method.

Campbell Barton bf-committers@blender.org
Fri, 18 Jun 2004 17:22:05 +1000


Hi, I was trying to figure out how to render DOF in Blender.
Let's get this out of the way: zblur is cool and fast, but I'm trying to 
simulate real DOF as seen through a lens.

The most accurate method I have seen so far is done by spinning the 
camera while it faces (is constrained to) a point (the focal point); many 
images are rendered and then merged into one. (Motion blur can be used for this.)
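The spinning-camera setup above can be sketched as computing sample
positions on a circle around the lens; each offset camera would then be
aimed back at the focal point. A minimal illustration in plain Python
(the function name and the flat-circle offset are my own, and it assumes
the camera looks down its local -Z axis, as Blender's does):

```python
import math

def spin_camera_positions(cam_pos, aperture_radius, samples):
    """Offset the camera around a circle of the given radius.
    Each offset camera is then pointed back at the focal point, so only
    geometry at the focal distance stays registered between renders;
    everything nearer or further smears out, which is the DOF blur."""
    cx, cy, cz = cam_pos
    positions = []
    for i in range(samples):
        angle = 2.0 * math.pi * i / samples
        # offset in the camera's local XY plane (assumes a -Z view axis)
        positions.append((cx + aperture_radius * math.cos(angle),
                          cy + aperture_radius * math.sin(angle),
                          cz))
    return positions
```

Rendering once from each position and averaging the results gives the
spinning-camera DOF described above.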

This has 3 problems.
1. It's slow; it needs to render 6-16 images instead of 1 (however, it can be 
used to replace AA, because it is still oversampling, just in a different way).

2. A lot of detail is rendered that does not need to be: anything that's 
out of focus ends up being rendered at high quality and then merged into what 
turns out to be a blur.

3. It's not optically accurate: a 24/32-bit RGB image only has the dynamic 
range to cope with white. A real light may be brighter than white, but 
the image clips this at white.
With this method a really bright object will still turn into a grey 
circle (rather than a white/lighter circle).

I have worked out how to resolve these problems.

Solution for 1 & 2.
Divide the scene into different depth slices (from clip start to clip 
end), with one slice centred on the focal point.
- In Python, set the clipping of the camera to the closest slice and 
render 8 steps of the rotation at a lower resolution than the output 
image.
- At the focal-point slice, render 1 image at full resolution.
- Drop the resolution and increase the number of steps around the circle 
as the slices get further away from the focal point.
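The slicing plan above could be sketched like this. The resolution
falloff and the floor of a quarter resolution are my own illustrative
choices, not values from the post:

```python
def plan_slices(clip_start, clip_end, focus, n_slices, full_res, spin_samples):
    """Split [clip_start, clip_end] into equal depth slices and assign
    each one a render resolution and spin-sample count: full resolution
    and a single render for the slice containing the focal point, lower
    resolution (but the full set of spin samples) elsewhere."""
    step = (clip_end - clip_start) / n_slices
    plan = []
    for i in range(n_slices):
        near = clip_start + i * step
        far = near + step
        centre = (near + far) / 2.0
        # distance from focus, normalised to half the clip range
        d = abs(centre - focus) / ((clip_end - clip_start) / 2.0)
        if near <= focus < far:
            res, samples = full_res, 1   # in-focus slice: one sharp render
        else:
            # resolution drops as the slice moves away from focus,
            # clamped to a quarter of full resolution (arbitrary floor)
            res = max(int(full_res * (1.0 - 0.5 * d)), full_res // 4)
            samples = spin_samples
        plan.append({"near": near, "far": far, "res": res, "samples": samples})
    return plan
```

Each entry would then drive one set of clipped renders: set the camera's
clip start/end to `near`/`far`, render `samples` steps of the spin at
resolution `res`.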

Now merge all the images in order, front first.
Voila - a DOF image, which should be a fair bit quicker to render than the 
standard spinning-camera method.
I realize splitting the image into clipping slices is a bit clunky and 
could cause some artifacts; they could probably be removed with a little 
overlap between the clipping ranges.
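Merging the slices front-first is just repeated alpha compositing. A
minimal per-pixel sketch of the standard "over" operator (premultiplied
RGBA assumed; the post doesn't specify the merge math, this is one
obvious choice):

```python
def over(front, back):
    """Porter-Duff 'over': composite one premultiplied RGBA pixel on
    top of another.  Folding the slice renders together nearest-first
    with repeated 'over' lets in-focus and blurred slices occlude each
    other correctly."""
    fr, fg, fb, fa = front
    br, bg, bb, ba = back
    inv = 1.0 - fa
    return (fr + br * inv, fg + bg * inv, fb + bb * inv, fa + ba * inv)
```

An opaque front pixel hides the back entirely; a transparent one passes
it through, which is exactly the front-first ordering described above.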

Solution for 3. Divide the exposure by the number of DOF renders you 
are doing and then merge them additively; this will stop bright 
areas from washing out into a flat grey.
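A toy single-pixel illustration of the difference, assuming a source 4x
brighter than white that only lands on a given pixel in half of 8 spin
samples (the scenario and function names are mine, for illustration):

```python
def merge_naive(brightness, hits, samples):
    """Average renders that were each clipped to 1.0 first: a bright
    source covered by only some samples washes out to grey."""
    return sum(min(brightness, 1.0) for _ in range(hits)) / samples

def merge_divided(brightness, hits, samples):
    """Divide the exposure by the sample count *before* clipping, then
    add: the same source can sum back up toward white."""
    per_sample = min(brightness / samples, 1.0)
    return min(per_sample * hits, 1.0)
```

With brightness 4.0, 4 hits out of 8 samples, the naive merge gives 0.5
(grey) while the divided-exposure merge sums to 1.0 (white), which is the
behaviour argued for in point 3.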

Another issue: spinning alone is not quite right; points inside the spun 
circle should also be rendered.
This is the difference between a small out-of-focus light being a grey 
ring and it being a grey filled spot (as it is through a real lens).
Tweaking the density and distribution of these points could result in 
different blur effects.
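Sampling the interior of the aperture disk rather than just its rim
could look like this; `density_power` is a hypothetical knob standing in
for the density/distribution tweaking mentioned above, not something
from the post:

```python
import math
import random

def disk_samples(radius, count, density_power=0.5):
    """Camera offsets spread across the whole aperture disk, so a small
    out-of-focus light renders as a filled spot instead of a ring.
    density_power shapes the radial distribution: 0.5 is area-uniform
    over the disk; smaller values crowd samples toward the rim, larger
    values toward the centre, giving different bokeh looks."""
    rng = random.Random(0)  # fixed seed so the sample set is repeatable
    pts = []
    for _ in range(count):
        r = radius * rng.random() ** density_power
        a = 2.0 * math.pi * rng.random()
        pts.append((r * math.cos(a), r * math.sin(a)))
    return pts
```

These offsets would replace the fixed ring of positions in the
spinning-camera method.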

All of this could be done in Blender Python, but I think it's relevant 
to Blender itself too.
- Cam