[Bf-vfx] Blender camera tracking approach

Francesco Callari fgcallari at gmail.com
Sun May 8 23:57:03 CEST 2011


A couple of comments.

1. Whether the final composite is done on the original plate or a
lens-distortion-corrected (i.e. undistorted) one is an artistic
decision, not just one based on image quality. The production crew may
want to remove the plate distortion, leave it unchanged, or modify it
as the shot demands from an artistic point of view (I have seen shots
in which the lens distortion was intentionally magnified). The tools
should support their decisions, whatever they are. The image quality
argument may be moot if the original plate resolution is higher than
the final one and the distortion is moderate.

2. From a practical point of view, re-distorting the CG at render time
so it matches the original (lens-distorted) plate is just one way to
prepare the elements for compositing. It may not be the most efficient
way, and it may even be impractical when the compositing is done with
a tool other than Blender, or when some of the CG assets come from
other tools. In these cases the preferred alternative is to expose the
calibrated lens distortion to the compositing tool. A common way to do
this is to have the matchmoving tool export an "offset map", i.e. an
image whose red/green channels encode, per pixel, the amount of x/y
distortion. Many compositing tools (e.g. Shake) can import such images
in a variety of formats. This ability to export the distortion as an
image is a basic requirement.

On Sun, May 8, 2011 at 6:49 AM, Tobias Kummer <supertoilet at gmx.net> wrote:

> Hey all!
>
> Here are some of my thoughts on the matchmoving workflow in Blender,
> after some discussion in IRC chat. Since I've never used PFMatchit, I
> cannot say anything about it, but when SynthEyes was mentioned I
> looked into it a little and thought its workflow might be suboptimal.
> It re-applies lens distortion as an image post-processing step, which
> is bad quality-wise. A better way would be to set Blender's camera
> parameters so the rendered output already has matching distortion and
> does not have to be altered afterwards (which brings up the topic of
> real-world camera settings in Blender - they would be desirable
> anyway).
> See this (crappy) flowchart as illustration:
> http://www.pasteall.org/pic/12079
>
> The first step in the proposed workflow accommodates the fact that
> Blender cannot draw lens distortion in the 3D viewport. So the
> resulting undistorted footage is used purely for visualisation by the
> artist - for rendering/compositing, the original distorted footage
> would be used. This way, we skip the step that compromises quality.
> As Blender will have combined 3D/tracking functionality, we should
> leverage that rather than copy the tracking-only workflow of other
> packages.
>
> Please correct me if there are any major mistakes in my thoughts!
>
> Greets,
>
> Tobias



-- 
Franco Callari <fgcallari at gmail.com>

            EC67 BEBE 62AC 8415 7591  2B12 A6CD D5EE D8CB D0ED

I am not bound to win, but I am bound to be true. I am not bound to succeed,
but I am bound to live by the light that I have. (Abraham Lincoln)