[Bf-vfx] Blender camera tracking approach

Sebastian König koenig.sebastian at gmx.net
Mon May 9 12:55:07 CEST 2011


That video shows some impressive results.
3D stabilization gives much better results than plain 2D stabilization. Syntheyes does that as well, but I don't think it has the warping, which really seems to give good results.
However, as seen in the samples, it also crops the image quite a bit to get a stable camera. That can work if you shoot 4K for a 2K end result, but otherwise you sacrifice a lot of image information.

I have tried to visualize three different tracking workflows: with preprocessing, without preprocessing, and with a real 3D camera lens distortion.
The tracking workflow itself is not described in detail, just how it could integrate into Blender.
The flowchart shows the usage of proxy-nodes, which could be viewer-nodes or something like that, just to have something to render into some sort of image cache without having to save the temporarily preprocessed images to disk. Or better yet: the proxy-node saves to disk, just like a cache in physics baking, but makes the images immediately available without having to use an extra image input node.
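
To make that idea concrete, here is a minimal sketch in Python (all names hypothetical, not an actual Blender API) of what such a proxy/cache node would have to do: compute each preprocessed frame once, persist it to disk like a physics bake, and serve it from there afterwards:

    import os
    import numpy as np

    class FrameCache:
        """Hypothetical disk-backed frame cache: each preprocessed
        frame is computed once, saved as a .npy file, and afterwards
        served without recomputation or an extra image input node."""

        def __init__(self, cache_dir, preprocess):
            self.cache_dir = cache_dir
            self.preprocess = preprocess   # raw frame -> processed frame
            os.makedirs(cache_dir, exist_ok=True)

        def frame(self, index, load_raw):
            path = os.path.join(self.cache_dir, "frame_%06d.npy" % index)
            if os.path.exists(path):
                return np.load(path)                 # cache hit
            processed = self.preprocess(load_raw(index))
            np.save(path, processed)                 # bake to disk
            return processed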

Sebastian

-------------- next part --------------
A non-text attachment was scrubbed...
Name: 3_Workflows.pdf
Type: application/pdf
Size: 46483 bytes
Desc: not available
URL: http://lists.blender.org/pipermail/bf-vfx/attachments/20110509/83678852/attachment-0001.pdf
-------------- next part --------------



On 09.05.2011, at 12:36, ibkanat wrote:

> I was looking at a paper the other day suggesting that saving the tracking
> and distortion data might help. I sent it to Julian; he had just
> implemented a fix for rolling shutter in libmv. CMOS chips have this
> problem when panning.
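> 
> As a rough illustration, a first-order fix for that panning case is a
> per-row horizontal shift, since a CMOS sensor exposes each row slightly
> later than the one above it. A numpy sketch (parameter names made up,
> not libmv's actual API):
> 
>     import numpy as np
> 
>     def unshear_rolling_shutter(frame, pan_velocity_px_s, readout_s):
>         """Shift each row sideways to undo the skew that a constant
>         horizontal pan causes on a top-to-bottom rolling shutter."""
>         height = frame.shape[0]
>         out = np.empty_like(frame)
>         for row in range(height):
>             # this row was exposed (row / height) * readout_s after row 0
>             shift = int(round(pan_velocity_px_s * readout_s * row / height))
>             out[row] = np.roll(frame[row], -shift, axis=0)
>         return out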
> 
> http://pages.cs.wisc.edu/%7Efliu/project/3dstab.htm It uses warping to
> stabilize video. I am not sure if it is a goal for Blender's video editor
> to become more robust at video editing, but a few tweaks could make
> Blender quite a bit more capable in that area. One being the ability to
> stabilize video; others being three-point editing, ripple editing, an
> improved UI for color balance, etc.
> 
> As far as workflow goes, I haven't done as much as others. But I read
> quite often that people get a great track, stabilize, then add the VFX,
> then add back a controlled amount of whatever they want (shake,
> distortion, etc.). If there were a way to store this data there could be
> lots of non-standard uses (particle effects following optical flow
> patterns, etc.).
> 
> 
> On Mon, 2011-05-09 at 11:16 +0200, Sebastian König wrote:
>> Hey all!
>> That is a pretty good and detailed workflow, Tom. I also like Tobias' graph.
>> I want to add a few thoughts as well.
>> From my experience with Syntheyes, 2d) from Tom's list (undistorting the footage) can also happen after or during the solving process.
>> Of course you can create the distortion parameters with a grid before the tracking happens, but since the whole tracking thing is just an approximation anyway, it can also happen based on the initial track itself. At least that's how it is done in Syntheyes, though I don't have the slightest idea how that is achieved code-wise. I guess it happens in a similar way as the 3D camera solution itself is calculated, based on the movements of the tracked points in relation to each other.
>> In most cases it improves tracking quality dramatically, because the tracker doesn't get confused by the apparently "wrong" movements of the feature points. Once you have the lens distortion data available you have two options: either undistort the footage and export that, or just export the camera movement and re-distort the final CG afterwards, before bringing original footage and distorted CG together as a final step. 
>> If we could bypass that step by rendering distorted images directly it would be cool, but, as Tom said, distorting the CG costs a little image quality, though that is barely noticeable in most cases. Usually you'd have to add grain and a bit of blur to the CG anyway to match the look of the footage, which is rarely as perfect and clean as 3D renders.
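>> 
>> As a rough sketch of what that re-distortion step does to an image, here is a numpy/scipy version of a one-parameter radial warp (source position = position * (1 + k1 * r^2) in normalized coordinates); the names are made up and this is not Syntheyes' or libmv's actual model, but to first order the sign of k1 decides whether it distorts or undistorts:
>> 
>>     import numpy as np
>>     from scipy import ndimage
>> 
>>     def radial_warp(img, k1):
>>         """Resample a grayscale image through a one-parameter radial
>>         lens model: each output pixel samples the input at a radially
>>         scaled source position (bilinear, order=1)."""
>>         h, w = img.shape
>>         y, x = np.mgrid[0:h, 0:w].astype(np.float64)
>>         xn = (x - w / 2.0) / (w / 2.0)      # normalized, center origin
>>         yn = (y - h / 2.0) / (h / 2.0)
>>         scale = 1.0 + k1 * (xn ** 2 + yn ** 2)
>>         src_x = xn * scale * (w / 2.0) + w / 2.0
>>         src_y = yn * scale * (h / 2.0) + h / 2.0
>>         return ndimage.map_coordinates(img, [src_y, src_x], order=1)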
>> 
>> I wanted to add another thing, following the discussion on IRC yesterday about whether to have the tracker integrated into the compositor or vice versa, because of image preprocessing.
>> Preprocessing can indeed enhance the track quite a bit. A slightly blurred black-and-white image with enhanced contrast can make it easier for the tracker to recognize feature points in the footage and track them over time, as that preprocessing can eliminate noise in the footage.
>> If I understood correctly, Ton wants to add a new editor for tracking (clip editor?). This will then be used to track the features, create masks, start the solving, and maybe also manage the 3D (and also 2D) tracking points. I like that idea, and I think it will play nicely with the compositor and the whole concept of editors in Blender. 
>> But what you add as footage in the clip editor could just as well be the output from a viewer-node that has a preprocessed version of the footage. But for that to work efficiently, the viewer-node (or the compositor in general) needs a new feature: an image cache. We do need a good caching system anyway. 
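>> 
>> To illustrate the kind of preprocessing meant here, a small OpenCV sketch (the kernel size and the exact chain are hypothetical, just one plausible setup):
>> 
>>     import cv2
>> 
>>     def preprocess_for_tracking(frame_bgr):
>>         """Grayscale + slight blur + contrast stretch: features get
>>         easier to lock onto and sensor noise is suppressed."""
>>         gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
>>         blurred = cv2.GaussianBlur(gray, (3, 3), 0)   # suppress grain
>>         return cv2.equalizeHist(blurred)              # boost contrast
>> 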
>> So a workflow that deals with tracking and the compositor could look like this:
>> 
>> 1) Import footage in clip editor. 
>> 2) If no preprocessing is needed because tracking is just fine, then continue with 3) from Tom's workflow
>> 2a) If you realize that the feature points do not track sufficiently because there is too much noise or too little contrast, then
>> 3) Exchange the clip input in clip editor with a viewer-node-output (or a new cache-node or whatever) and
>> 4) go to the compositor
>> 5) load image sequence into an image-input-node
>> 6) preprocess with blur, scale, rgb-curves etc. 
>> 7) If we go with a workflow that lets you correct lens-distortion before tracking, we could use a lens-node for that. 
>> 8) pipe the preprocessed result into a viewer node and go back to clip editor
>> 8a) also create a scaled-down low-res proxy version of the footage to have in the viewport background for fast playback (a small sketch follows this list). I guess this has to be undistorted to fit the geometry in the viewport. 
>> --
>> All this would ideally require really fast preprocessing capabilities in the compositor and a good caching system. But if that is not possible we can always preprocess into a new proxy image sequence on the hard drive.
>> --
>> 9) If we had a Blender camera that can use distortion values from the tracker we could just render our CG and put that on top of our original footage.
>> 9a) If that is not possible we would have to use the lens distortion values and distort the rendered images with them.
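>> 
>> For 8a), generating the proxy itself is trivial; a PIL sketch (paths and scale factor hypothetical):
>> 
>>     from PIL import Image
>> 
>>     def make_proxy(src_path, dst_path, factor=2):
>>         """Write a scaled-down proxy frame for fast viewport playback."""
>>         img = Image.open(src_path)
>>         proxy = img.resize((img.width // factor, img.height // factor),
>>                            Image.LANCZOS)
>>         proxy.save(dst_path)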
>> 
>> So far.
>> Masks could be generated in the clip editor too, but accessed from the compositor as well; I don't know.
>> 
>> Cheers!
>> 
>> 
>> Seb
>> 
>> 
>> 
>> On 08.05.2011, at 23:05, Tom M wrote:
>> 
>>> Tobias,
>>> 
>>> here is a 'complete' workflow for tracking; it might be good to
>>> provide details for some of your flow areas off to the side.
>>> 
>>> 1) load image data - video or sequence of images
>>> 2a) (optional) correct rolling shutter
>>> 2b) (optional) input known camera parameters
>>> 2c) (optional) provide hints to the camera lens undistortion solver
>>> (identify straight lines in the image) to help with undistorting the
>>> image
>>> 2d) (optional) undistort the camera lens using a distortion solver
>>> 3) (optional) create a mask for data you don't want tracked (or for
>>> 'grouping'/separating of tracking, i.e. tracking the motion of a camera
>>> and tracking the motion of an object)
>>> 4) (optional) adjust color/contrast/etc. so that the feature points to
>>> track have increased contrast and are thus easier for the tracker to
>>> find (and easier for the tracker to ignore stuff you don't want
>>> tracked, i.e. uneven lighting on a green screen creating spurious
>>> motion)
>>> 5) (optional) select specific features you want tracked (either by
>>> mask or placement of tracking markers)
>>> 5a) if you specified specific features to the tracker you may also
>>> want to specify how the points should be tracked (a bounding box in
>>> which the tracker should look for the point; whether the object being
>>> tracked is 'rigid' or can deform; and knowledge of the types of
>>> camera/object motion: object translation and rotation, camera
>>> translation and rotation)
>>> 6) send image data, specified track data, camera data, and mask data to the tracker
>>> 7) tracker library can automatically identify feature points to track
>>> and/or use user-specified features, ignoring areas that are masked
>>> 8) do a 'solve' of the camera motion based on the track points,
>>> including statistics on how 'good' the tracks are in contributing to
>>> the solve
>>> 9) return track point data and camera solve to software, including the
>>> statistical analysis of the track points and their 3d projection
>>> 10) based on the statistical analysis, pick error thresholds for which
>>> track points to automatically delete (threshold picked automatically or
>>> adjusted manually; a small sketch of this follows the list)
>>> 11) (optional) manually delete any track points
>>> 12) (optional) create a mask to 'hide' unwanted track points
>>> 12a) (optional) mask can be assigned to follow a set of track points
>>> to automatically mask a moving object from the tracker/solver
>>> 12b) (optional) mask can be manually keyframed so that it moves and
>>> deforms over time to mask a moving object from the tracker/solver
>>> 12c) (optional) provide a manually created camera curve to 'hint' the
>>> tracker/solver what you expect the actual curve to look like
>>> 13) retrack if additional tracker points are now needed
>>> 14) pass the tracker points, camera hints, etc. to the camera solver
>>> 15) return the solved camera track and the 3d location of the track
>>> points to the software
>>> 16) visualize the camera motion and track points in the 3d view
>>> 17) define a ground plane reference to the 3d track points and camera
>>> 18) define the scene origin relative to the 3d track points and camera
>>> 19) define the world scale and orientation relative to the 3d track
>>> points and data
>>> 20) add a test object into the 3d view and see if it stays in the
>>> proper location
>>> 21) (optional) stabilize the camera view based on the solved camera track
>>> 22) (optional) smooth the camera track curve
>>> 23) repeat until the error in the solve is good enough
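>>> 
>>> A minimal sketch of the thresholding in step 10 (the data layout is
>>> hypothetical, this is not libmv's API): keep a track only if its mean
>>> reprojection error stays under a cutoff:
>>> 
>>>     import numpy as np
>>> 
>>>     def filter_tracks(tracks, reprojection_errors, max_error_px=1.0):
>>>         """Drop tracks whose mean reprojection error exceeds the
>>>         threshold; such tracks hurt the solve more than they help."""
>>>         kept = {}
>>>         for name, points in tracks.items():
>>>             errors = reprojection_errors[name]   # per-frame error, px
>>>             if np.mean(errors) <= max_error_px:
>>>                 kept[name] = points
>>>         return kept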
>>> 
>>> On Sun, May 8, 2011 at 12:33 PM, Tobias Kummer <supertoilet at gmx.net> wrote:
>>>> After some more tinkering, I came up with the following revised proposal:
>>>> http://www.pasteall.org/pic/12084
>>>> 
>>>> This takes some of your (Remo's) concerns into account.
>>>> 
>>>> Regards,
>>>> 
>>>> Tobi
>>>> 
>>>> On 05/08/2011 10:09 PM, Remo Pini wrote:
>>>>> Hm... I have never noticed the slight distortions causing issues with
>>>>> the tracker... might be a question of the implementation, but I don't
>>>>> see why the material would have to be converted twice, even IF
>>>>> "undistorting" is required...
>>>>> 
>>>>> (1) take original footage
>>>>> (2) undistort to track (temporary copy)
>>>>> (3) track
>>>>> (4) distort tracking data according to undistort
>>>>> (5) use original footage + track data + lens info from source (or
>>>>> undistort function)
>>>>> 
>>>>> Syntheyes seems to do that under the hood, I guess; I've never seen
>>>>> tracking issues due to distortions...
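>>>>> 
>>>>> Step (4) is just the forward distortion model applied to point
>>>>> coordinates instead of pixels; a rough sketch with a one-parameter
>>>>> radial model (all names made up):
>>>>> 
>>>>>     def redistort_point(xu, yu, k1, cx, cy, f):
>>>>>         """Map an undistorted 2D track point back into distorted
>>>>>         image coordinates."""
>>>>>         xn, yn = (xu - cx) / f, (yu - cy) / f   # normalize
>>>>>         s = 1.0 + k1 * (xn * xn + yn * yn)      # radial scale
>>>>>         return cx + xn * s * f, cy + yn * s * f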
>>>>> 
>>>>> 
>>>>>> -----Original Message-----
>>>>>> From: bf-vfx-bounces at blender.org [mailto:bf-vfx-bounces at blender.org]
>>>>>> On Behalf Of Tom M
>>>>>> Sent: Sunday, May 8, 2011 8:20
>>>>>> To: Blender motion tracking & VFX
>>>>>> Subject: Re: [Bf-vfx] Blender camera tracking approach
>>>>>> 
>>>>>> On Sun, May 8, 2011 at 10:06 AM, Remo Pini <remo.pini at avexys.com>
>>>>>> wrote:
>>>>>>> I kind of disagree...
>>>>>>> 
>>>>>>> If you have lens distortion in your footage, this would most likely
>>>>>>> be bad and you would not want the final result to have that in it
>>>>>>> anymore, sooo....
>>>>>> You are thinking of extreme distortion - this is about the very
>>>>>> slight distortion that comes from all camera lenses due to their lens
>>>>>> shape (as well as lens imperfections - but the imperfections are
>>>>>> mostly ignored by most lens solvers).
>>>>>> 
>>>>>> Our brains ignore the distortion, but it confuses trackers, since
>>>>>> motion along a straight line no longer looks straight in the image.
>>>>>> 
>>>>>> Undistorting the lens reduces the image quality because it shifts
>>>>>> pixel locations and thus 'blurs' pixels: the undistorted location of
>>>>>> a pixel will generally not fall exactly on a pixel boundary of the
>>>>>> original grid.
>>>>>> 
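>>>>>> A two-pixel example makes that blur concrete: resampling at a
>>>>>> fractional position mixes neighbouring pixels (a minimal 1-D
>>>>>> linear-interpolation sketch, not any particular resampler):
>>>>>> 
>>>>>>     def sample_1d(row, x):
>>>>>>         """Sample a scanline at fractional x by mixing the two
>>>>>>         neighbouring pixels - this mixing is the blur."""
>>>>>>         i, t = int(x), x - int(x)
>>>>>>         return (1.0 - t) * row[i] + t * row[i + 1]
>>>>>> 
>>>>>>     print(sample_1d([0, 0, 255, 255], 1.5))   # hard edge -> 127.5
>>>>>> 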
>>>>>> Thus undistortion is only used to provide straighter lines to the
>>>>>> tracker.
>>>>>> Applying the lens distortion to your rendered image is so that its
>>>>>> straight lines are curved similarly to those of the original image.
>>>>>> Again this introduces blurring of your render if it is applied as a
>>>>>> post process - so either you need to render larger or do the
>>>>>> distortion at render time.  In practice the amount of blurring from
>>>>>> doing it as a post process is probably not significant enough to be
>>>>>> noticeable in most situations.
>>>>>> 
>>>>>> LetterRip


