[Bf-funboard] Deep compositing.

Knapp magick.crow at gmail.com
Tue Feb 25 09:53:52 CET 2014


On Tue, Feb 25, 2014 at 7:56 AM, David McSween <3pointedit at gmail.com> wrote:

> I guess that the best reason would be lack of system resources. A 32-bit
> flat image at 24 fps can be quite large; now multiply that by the depth
> resolution you require. That is one HUGE data set. Also, how would you infer
> this for live footage, since there is no simple way to capture density or
> depth in the field (other than with a stereo rig)?
>
> Blender currently creates a Z-depth pass which we can already use, but I'm
> not sure that volumetrics are going to be integrated. Heck, it doesn't even
> include alpha AFAIK; you have to hack a mist pass to include textures and
> so on.


I can't even pretend to know the technical side of this, but it is very
clear that the future holds 3D cameras/recorders; you can already see this in
Hollywood and in 3D printing tech. I think Blender should plan for that
future. Computing power will keep doubling every 14 months or so, so I don't
see that as a huge problem. I was just looking at buying a 5 TB hard drive
this morning. My computer has 16 GB of RAM, can be upgraded to 32 GB cheaply
(not that I currently even need that much), and did not cost much. I really
think we should at least start planning how to grow Blender in this way. It
seems to me like a great way to improve Blender and the future of 3D editing
programs.
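
Just to get a feel for the numbers David mentioned, here is a rough
back-of-envelope sketch in Python. The resolution, channel count, and the
16 samples per pixel are made-up assumptions for illustration, not figures
from Blender or any production pipeline:

# Rough sizing sketch: flat vs. deep frames, uncompressed.
# All parameters below are assumptions, not Blender measurements.
WIDTH, HEIGHT = 1920, 1080
CHANNELS = 4                 # RGBA
BYTES_PER_CHANNEL = 4        # 32-bit float
FPS = 24
AVG_SAMPLES_PER_PIXEL = 16   # hypothetical average deep-sample count

flat_frame = WIDTH * HEIGHT * CHANNELS * BYTES_PER_CHANNEL
deep_frame = flat_frame * AVG_SAMPLES_PER_PIXEL

print(f"flat frame:  {flat_frame / 2**20:.1f} MiB")        # ~32 MiB
print(f"deep frame:  {deep_frame / 2**20:.1f} MiB")        # ~506 MiB
print(f"one second:  {deep_frame * FPS / 2**30:.1f} GiB")  # ~12 GiB per second

Even a 5 TB drive fills up after a few minutes of uncompressed frames at that
rate, so the data would presumably have to be stored sparsely and compressed.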

A Wikipedia article about it:
http://en.wikipedia.org/wiki/Deep_image_compositing
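
For anyone wondering what "deep" means in practice, the article boils down to
this: each pixel stores a list of depth samples instead of a single value, and
images are merged by combining those sample lists. A minimal sketch of the
idea (purely illustrative, not how Blender or OpenEXR actually implements it):

# A deep pixel as a list of (z, r, g, b, a) samples, flattened front to back
# with the standard "over" operator. Toy example only.
def flatten_deep_pixel(samples):
    """Collapse a list of (z, r, g, b, a) samples into one flat RGBA value."""
    r = g = b = a = 0.0
    for z, sr, sg, sb, sa in sorted(samples, key=lambda s: s[0]):  # near to far
        # each sample contributes only through the transparency left so far
        r += (1.0 - a) * sr * sa
        g += (1.0 - a) * sg * sa
        b += (1.0 - a) * sb * sa
        a += (1.0 - a) * sa
    return r, g, b, a

# Deep pixels from different renders merge by concatenating sample lists,
# with no need for holdout mattes:
fg = [(1.0, 1.0, 0.0, 0.0, 0.5)]    # semi-transparent red at depth 1
bg = [(2.0, 0.0, 0.0, 1.0, 1.0)]    # opaque blue at depth 2
print(flatten_deep_pixel(fg + bg))  # red over blue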

-- 
Douglas E Knapp

Creative Commons Film Group, Helping people make open source movies
with open source software!
http://douglas.bespin.org/CommonsFilmGroup/phpBB3/index.php

Massage in Gelsenkirchen-Buer:
http://douglas.bespin.org/tcm/ztab1.htm
Please link to me and trade links with me!

Open Source Sci-Fi mmoRPG Game project.
http://sf-journey-creations.wikispot.org/Front_Page
http://code.google.com/p/perspectiveproject/

