[Bf-vfx] Blender camera tracking approach

Remo Pini remo.pini at avexys.com
Sun May 8 20:03:06 CEST 2011


Well, I can't speak for everybody, but if I look at my VFX work, the
camera information (lens, ...) is usually available.
So I only use the tracker to provide me with the 3D camera move.
I also make sure never to use anything like a zoom, especially if VFX is
involved (a zoom should only be used in very specific situations anyway).
With the 3D data generated, I then go into the 3D package of choice to
do 3D asset generation.
Compositing is then done in yet another package.
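
As a minimal sketch of what "camera information is available" means in
practice: in Blender's Python API the known lens data can be pinned on
the camera up front, so a solver only needs to recover the move. The
object name and numbers below are made-up examples, and the sensor
attributes assume a Blender build that exposes them:

    import bpy

    # Pin known on-set camera data so tracking only has to solve the
    # camera move, not the lens ("Camera" and all values here are
    # hypothetical examples).
    cam = bpy.data.objects["Camera"].data
    cam.lens = 35.0            # focal length from the camera report (mm)
    cam.sensor_width = 23.76   # film-back width (mm), e.g. Super 35
    cam.sensor_fit = 'HORIZONTAL'

With a fixed, known prime lens there is no need to ask the solver to
estimate focal length (or a zoom curve) at all.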

I think if we want to be serious about VFX use of Blender, there are two
different possible approaches that "make sense":

1. provide 3D tracking and 3D asset generation (a limited piece of the
VFX workflow)
2. provide 3D tracking, 3D asset generation and compositing (a larger
piece of the VFX workflow)

The current state of Blender suggests that (2) is relatively far out in
the future: compositing packages alone have the complexity of Blender,
and to get there, a LOT of programming needs to be done.

One key requirement is stable support for a wide variety of video input
(and output) formats (e.g. the consumer stuff: mp4, mov, xvid, avi, ...
but also the more pro stuff like R3D, ProRes, DPX); otherwise there will
always be a transcoding step before and after Blender...
Also, the metadata from these input formats (camera data, timecodes,
shot/reel info, ...) should be used and preserved wherever present.
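
As a rough illustration of the kind of metadata that should survive the
pipeline, here is a sketch that reads container tags via ffprobe (this
assumes ffprobe is installed; which tags actually exist depends on the
container, e.g. MOV files often carry a timecode):

    import json
    import subprocess

    def probe_metadata(path):
        # Dump container and stream info as JSON using ffprobe.
        out = subprocess.check_output([
            "ffprobe", "-v", "quiet",
            "-print_format", "json",
            "-show_format", "-show_streams",
            path,
        ])
        info = json.loads(out)
        # Timecode, reel/shot names etc. usually live in the tag dicts.
        tags = info.get("format", {}).get("tags", {})
        return tags.get("timecode"), tags

Anything that is read on import but not carried through to the output
file is lost for the rest of the pipeline.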

Just my 2c
Remo

> -----Original Message-----
> From: bf-vfx-bounces at blender.org [mailto:bf-vfx-bounces at blender.org]
> On Behalf Of Tobias Kummer
> Sent: Sunday, 8 May 2011 3:49
> To: bf-vfx at blender.org
> Subject: [Bf-vfx] Blender camera tracking approach
> 
> Hey all!
> 
> Here are some of my thoughts on the matchmoving workflow in Blender,
> after some discussion in IRC chat. Since I've never used PFMatchit, I
> cannot say anything about it, but when SynthEyes was mentioned I
> looked into it a little and thought its workflow might be suboptimal.
> It re-applies lens distortion at the image post-processing level, which
> is bad quality-wise. A better way would be setting Blender's camera
> parameters, so the rendered output already has matching distortion and
> does not have to be altered afterwards (which brings up the topic of
> real-world camera settings in Blender - desirable anyway).
> See this (crappy) flowchart as illustration:
> http://www.pasteall.org/pic/12079
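> 
> As a rough sketch of the distortion in question (a one-term radial /
> Brown model; the k1 value below is a made-up example, not from any
> real solve): pushing a finished render through a mapping like this
> costs one extra resampling pass over the image, which is exactly the
> quality hit described above.
> 
>     def distort(xu, yu, k1=-0.05):
>         # Map undistorted normalized coords (origin at the optical
>         # centre) to distorted ones: r' = r * (1 + k1 * r^2).
>         # k1 is a hypothetical example coefficient.
>         r2 = xu * xu + yu * yu
>         scale = 1.0 + k1 * r2
>         return xu * scale, yu * scale
> 
> If the camera model carried these parameters instead, the distortion
> would be generated at render time rather than by warping pixels
> afterwards.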
> 
> The first step in the proposed workflow accounts for the fact that
> Blender cannot draw lens distortion in the 3D viewport. So the
> resulting undistorted footage is used purely for visualisation, for
> the artist's benefit - for rendering/compositing, the original
> distorted footage would be used. This way, we skip the step that
> compromises quality.
> As Blender will have combined 3D/tracking functionality, we should
> leverage that and not copy the tracking-only workflow of other
> packages.
> 
> Please correct me if there are any major mistakes in my thoughts!
> 
> Greets,
> 
> Tobias
> _______________________________________________
> Bf-vfx mailing list
> Bf-vfx at blender.org
> http://lists.blender.org/mailman/listinfo/bf-vfx

