[Bf-blender-cvs] SVN commit: /data/svn/bf-blender [42396] trunk/blender: Camera tracking: merge hybrid tracker from tomato branch

Sergey Sharybin sergey.vfx at gmail.com
Sun Dec 4 14:26:21 CET 2011


Revision: 42396
          http://projects.blender.org/scm/viewvc.php?view=rev&root=bf-blender&revision=42396
Author:   nazgul
Date:     2011-12-04 13:26:11 +0000 (Sun, 04 Dec 2011)
Log Message:
-----------
Camera tracking: merge hybrid tracker from tomato branch

Comment from Keir's commit:

Add a new hybrid region tracker for motion tracking to libmv, and
add it as an option (under "Hybrid") in the tracking settings. The
region tracker is a combination of brute force tracking for coarse
alignment followed by refinement with the ESM/KLT algorithm already in
libmv, which gives excellent subpixel precision (typically 1/50th
of a pixel).

This also adds a new "brute force" region tracker which does a
brute force search through every pixel position in the destination
for the pattern in the first frame. It leverages SSE if available,
similar to the SAD tracker, to do this quickly. Currently it does
some unnecessary conversions to/from floating point that will get
fixed later.
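
As a rough illustration of the brute-force idea, a scalar-only sketch of
the SAD search follows; the names and signatures here are illustrative
only, not the SSE-accelerated code added by this commit:

#include <climits>
#include <cstdlib>

// Sum of absolute differences between the pattern and the equally sized
// window of the image whose top-left corner is at (x, y).
static int SadAt(const unsigned char *image, int image_stride,
                 const unsigned char *pattern, int pattern_stride,
                 int width, int height, int x, int y) {
  int sad = 0;
  for (int r = 0; r < height; ++r) {
    for (int c = 0; c < width; ++c) {
      sad += std::abs(pattern[r * pattern_stride + c] -
                      image[(y + r) * image_stride + (x + c)]);
    }
  }
  return sad;
}

// Try every valid pixel position in the destination image and keep the one
// with the smallest SAD; by construction the result is only accurate to
// about one pixel.
static void BruteForceSearch(const unsigned char *image,
                             int image_width, int image_height,
                             int image_stride,
                             const unsigned char *pattern, int pattern_stride,
                             int width, int height,
                             int *best_x, int *best_y) {
  int best_sad = INT_MAX;
  for (int y = 0; y + height <= image_height; ++y) {
    for (int x = 0; x + width <= image_width; ++x) {
      int sad = SadAt(image, image_stride, pattern, pattern_stride,
                      width, height, x, y);
      if (sad < best_sad) {
        best_sad = sad;
        *best_x = x;
        *best_y = y;
      }
    }
  }
}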

The hybrid tracker glues the two trackers (brute & ESM) together
to get an overall better tracker. The algorithm is simple:

1. Track from frame 1 to frame 2 with the brute force tracker.
   This tries every possible pixel position for the pattern from
   frame 1 in frame 2. The position with the smallest
   sum-of-absolute-differences is chosen. By definition, this
   position is only accurate up to 1 pixel or so.
2. Using the result from 1, initialize a track with ESM. This does
   a least-squares fit with subpixel precision.
3. If the ESM shift was more than 2 pixels, report failure.
4. If the ESM track shifted less than 2 pixels, then the track is
   good and we're done. The rationale here is that if the
   refinement stage shifts more than 2 pixels, then the brute force
   result likely found some random position that's not a good fit.
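
A minimal sketch of the glue logic described in the steps above; the
TrackFn signature below is a simplified stand-in for illustration, not
the actual libmv RegionTracker interface:

#include <cmath>
#include <functional>

// A tracker maps a pattern position (x1, y1) in image1 to a position
// (*x2, *y2) in image2; *x2/*y2 hold an initial guess on input.
// Returns true on success.
typedef std::function<bool(double x1, double y1, double *x2, double *y2)>
    TrackFn;

// Glue logic of the hybrid approach: coarse brute-force search, then ESM
// refinement, rejecting refinements that drift too far.
bool HybridTrack(const TrackFn &brute_track, const TrackFn &esm_track,
                 double x1, double y1, double *x2, double *y2) {
  // 1. Coarse alignment with the brute-force SAD search; accurate to
  //    about one pixel.
  if (!brute_track(x1, y1, x2, y2)) {
    return false;
  }
  // 2. Refine with ESM, starting from the brute-force result.
  double refined_x = *x2, refined_y = *y2;
  if (!esm_track(x1, y1, &refined_x, &refined_y)) {
    return false;
  }
  // 3./4. If refinement moved more than ~2 pixels, the coarse match was
  //       probably a spurious minimum, so report failure; otherwise
  //       accept the subpixel result.
  if (std::abs(refined_x - *x2) > 2.0 || std::abs(refined_y - *y2) > 2.0) {
    return false;
  }
  *x2 = refined_x;
  *y2 = refined_y;
  return true;
}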

svn command used: svn merge -r 42375:42376 -r 42377:42379 ^/branches/soc-2011-tomato

Modified Paths:
--------------
    trunk/blender/extern/libmv/CMakeLists.txt
    trunk/blender/extern/libmv/libmv-capi.cpp
    trunk/blender/extern/libmv/libmv-capi.h
    trunk/blender/release/scripts/startup/bl_ui/space_clip.py
    trunk/blender/source/blender/blenkernel/intern/tracking.c
    trunk/blender/source/blender/makesdna/DNA_tracking_types.h
    trunk/blender/source/blender/makesrna/intern/rna_tracking.c

Added Paths:
-----------
    trunk/blender/extern/libmv/libmv/tracking/brute_region_tracker.cc
    trunk/blender/extern/libmv/libmv/tracking/brute_region_tracker.h
    trunk/blender/extern/libmv/libmv/tracking/hybrid_region_tracker.cc
    trunk/blender/extern/libmv/libmv/tracking/hybrid_region_tracker.h

Property Changed:
----------------
    trunk/blender/
    trunk/blender/source/blender/editors/space_outliner/


Property changes on: trunk/blender
___________________________________________________________________
Modified: svn:mergeinfo
   - /branches/soc-2011-cucumber:37517
   + /branches/soc-2011-cucumber:37517
/branches/soc-2011-tomato:42376,42378-42379

Modified: trunk/blender/extern/libmv/CMakeLists.txt
===================================================================
--- trunk/blender/extern/libmv/CMakeLists.txt	2011-12-04 12:58:31 UTC (rev 42395)
+++ trunk/blender/extern/libmv/CMakeLists.txt	2011-12-04 13:26:11 UTC (rev 42396)
@@ -53,6 +53,8 @@
 	libmv/image/array_nd.cc
 	libmv/tracking/pyramid_region_tracker.cc
 	libmv/tracking/sad.cc
+	libmv/tracking/brute_region_tracker.cc
+	libmv/tracking/hybrid_region_tracker.cc
 	libmv/tracking/esm_region_tracker.cc
 	libmv/tracking/trklt_region_tracker.cc
 	libmv/tracking/klt_region_tracker.cc
@@ -100,6 +102,8 @@
 	libmv/image/sample.h
 	libmv/image/image.h
 	libmv/tracking/region_tracker.h
+	libmv/tracking/brute_region_tracker.h
+	libmv/tracking/hybrid_region_tracker.h
 	libmv/tracking/retrack_region_tracker.h
 	libmv/tracking/sad.h
 	libmv/tracking/pyramid_region_tracker.h

Copied: trunk/blender/extern/libmv/libmv/tracking/brute_region_tracker.cc (from rev 42376, branches/soc-2011-tomato/extern/libmv/libmv/tracking/brute_region_tracker.cc)
===================================================================
--- trunk/blender/extern/libmv/libmv/tracking/brute_region_tracker.cc	                        (rev 0)
+++ trunk/blender/extern/libmv/libmv/tracking/brute_region_tracker.cc	2011-12-04 13:26:11 UTC (rev 42396)
@@ -0,0 +1,322 @@
+// Copyright (c) 2011 libmv authors.
+//
+// Permission is hereby granted, free of charge, to any person obtaining a copy
+// of this software and associated documentation files (the "Software"), to
+// deal in the Software without restriction, including without limitation the
+// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+// sell copies of the Software, and to permit persons to whom the Software is
+// furnished to do so, subject to the following conditions:
+//
+// The above copyright notice and this permission notice shall be included in
+// all copies or substantial portions of the Software.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+// IN THE SOFTWARE.
+
+#include "libmv/tracking/brute_region_tracker.h"
+
+#ifdef __SSE2__
+#include <emmintrin.h>
+#endif
+
+#ifndef __APPLE__
+// Needed for memalign on Linux and _aligned_alloc on Windows.
+#include <malloc.h>
+#else
+// Apple's malloc is 16-byte aligned, and does not have malloc.h, so include
+// stdlib instead.
+#include <cstdlib>
+#endif
+
+#include "libmv/image/image.h"
+#include "libmv/image/convolve.h"
+#include "libmv/image/sample.h"
+#include "libmv/logging/logging.h"
+
+namespace libmv {
+namespace {
+
+// TODO(keir): It's stupid that this is needed here. Push this somewhere else.
+void *aligned_malloc(int size, int alignment) {
+#ifdef _WIN32
+  return _aligned_malloc(size, alignment);
+#elif __APPLE__
+  // On Mac OS X, both the heap and the stack are guaranteed 16-byte aligned so
+  // they work natively with SSE types with no further work.
+  CHECK_EQ(alignment, 16);
+  return malloc(size);
+#else // This is for Linux.
+  return memalign(alignment, size);
+#endif
+}
+
+void aligned_free(void *ptr) {
+#ifdef _WIN32
+  _aligned_free(ptr);
+#else
+  free(ptr);
+#endif
+}
+
+bool RegionIsInBounds(const FloatImage &image1,
+                      double x, double y,
+                      int half_window_size) {
+  // Check the minimum coordinates.
+  int min_x = floor(x) - half_window_size - 1;
+  int min_y = floor(y) - half_window_size - 1;
+  if (min_x < 0.0 ||
+      min_y < 0.0) {
+    return false;
+  }
+
+  // Check the maximum coordinates.
+  int max_x = ceil(x) + half_window_size + 1;
+  int max_y = ceil(y) + half_window_size + 1;
+  if (max_x > image1.cols() ||
+      max_y > image1.rows()) {
+    return false;
+  }
+
+  // Ok, we're good.
+  return true;
+}
+
+#ifdef __SSE2__
+
+// Compute the sum of absolute differences between the arrays "a" and "b".
+// The array "a" is assumed to be 16-byte aligned, while "b" is not. The
+// result is returned as the first and third elements of __m128i if
+// interpreted as a 4-element 32-bit integer array. The SAD is the sum of the
+// elements.
+//
+// The function requires size % 16 valid extra elements at the end of both "a"
+// and "b", since the SSE load instructionst will pull in memory past the end
+// of the arrays if their size is not a multiple of 16.
+inline static __m128i SumOfAbsoluteDifferencesContiguousSSE(
+    const unsigned char *a,  // aligned
+    const unsigned char *b,  // not aligned
+    unsigned int size,
+    __m128i sad) {
+  // Do the bulk of the work as 16-way integer operations.
+  for(unsigned int j = 0; j < size / 16; j++) {
+    sad = _mm_add_epi32(sad, _mm_sad_epu8( _mm_load_si128 ((__m128i*)(a + 16 * j)),
+                                           _mm_loadu_si128((__m128i*)(b + 16 * j))));
+  }
+  // Handle the trailing end.
+  // TODO(keir): Benchmark to verify that the below SSE is a win compared to a
+  // hand-rolled loop. It's not clear that the hand rolled loop would be slower
+  // than the potential cache miss when loading the immediate table below.
+  //
+  // An alternative to this version is to take a packet of all 1's then do a
+  // 128-bit shift. The issue is that the shift instruction needs an immediate
+  // amount rather than a variable amount, so the branch instruction here must
+// remain. See _mm_srli_si128 and _mm_slli_si128.
+  unsigned int remainder = size % 16u;
+  if (remainder) {
+    unsigned int j = size / 16;
+    __m128i a_trail = _mm_load_si128 ((__m128i*)(a + 16 * j));
+    __m128i b_trail = _mm_loadu_si128((__m128i*)(b + 16 * j));
+    __m128i mask;
+    switch (remainder) {
+#define X 0xff
+      case  1: mask = _mm_setr_epi8(X, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0); break;
+      case  2: mask = _mm_setr_epi8(X, X, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0); break;
+      case  3: mask = _mm_setr_epi8(X, X, X, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0); break;
+      case  4: mask = _mm_setr_epi8(X, X, X, X, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0); break;
+      case  5: mask = _mm_setr_epi8(X, X, X, X, X, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0); break;
+      case  6: mask = _mm_setr_epi8(X, X, X, X, X, X, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0); break;
+      case  7: mask = _mm_setr_epi8(X, X, X, X, X, X, X, 0, 0, 0, 0, 0, 0, 0, 0, 0); break;
+      case  8: mask = _mm_setr_epi8(X, X, X, X, X, X, X, X, 0, 0, 0, 0, 0, 0, 0, 0); break;
+      case  9: mask = _mm_setr_epi8(X, X, X, X, X, X, X, X, X, 0, 0, 0, 0, 0, 0, 0); break;
+      case 10: mask = _mm_setr_epi8(X, X, X, X, X, X, X, X, X, X, 0, 0, 0, 0, 0, 0); break;
+      case 11: mask = _mm_setr_epi8(X, X, X, X, X, X, X, X, X, X, X, 0, 0, 0, 0, 0); break;
+      case 12: mask = _mm_setr_epi8(X, X, X, X, X, X, X, X, X, X, X, X, 0, 0, 0, 0); break;
+      case 13: mask = _mm_setr_epi8(X, X, X, X, X, X, X, X, X, X, X, X, X, 0, 0, 0); break;
+      case 14: mask = _mm_setr_epi8(X, X, X, X, X, X, X, X, X, X, X, X, X, X, 0, 0); break;
+      case 15: mask = _mm_setr_epi8(X, X, X, X, X, X, X, X, X, X, X, X, X, X, X, 0); break;
+#undef X
+    }
+    sad = _mm_add_epi32(sad, _mm_sad_epu8(_mm_and_si128(mask, a_trail),
+                                          _mm_and_si128(mask, b_trail)));
+  }
+  return sad;
+}
+#endif
+
+// Computes the sum of absolute differences between pattern and image. Pattern
+// must be 16-byte aligned, and the stride must be a multiple of 16. The image
+// pointer does not have to be aligned.
+int SumOfAbsoluteDifferencesContiguousImage(
+    const unsigned char *pattern,
+    unsigned int pattern_width,
+    unsigned int pattern_height,
+    unsigned int pattern_stride,
+    const unsigned char *image,
+    unsigned int image_stride) {
+#ifdef __SSE2__
+  // TODO(keir): Add interleaved accumulation, where accumulation is done into
+  // two or more SSE registers that then get combined at the end. This reduces
+  // instruction dependency; in Eigen's squared norm code, splitting the
+  // accumulation produces a ~2x speedup. It's not clear it will help here,
+  // where the number of SSE instructions in the inner loop is smaller.
+  __m128i sad = _mm_setzero_si128();
+  for (int r = 0; r < pattern_height; ++r) {
+    sad = SumOfAbsoluteDifferencesContiguousSSE(&pattern[pattern_stride * r],
+                                                &image[image_stride * r],
+                                                pattern_width,
+                                                sad);
+  }
+  return _mm_cvtsi128_si32(
+             _mm_add_epi32(sad,
+                 _mm_shuffle_epi32(sad, _MM_SHUFFLE(3, 0, 1, 2))));
+#else
+  int sad = 0;
+  for (int r = 0; r < pattern_height; ++r) {
+    for (int c = 0; c < pattern_width; ++c) {
+      sad += abs(pattern[pattern_stride * r + c] - image[image_stride * r + c]);
+    }
+  }
+  return sad;
+#endif
+}
+
+// Sample a region of size width, height centered at x,y in image, converting
+// from float to byte in the process. Samples from the first channel. Puts
+// result into *pattern.
+void SampleRectangularPattern(const FloatImage &image,
+                              double x, double y,
+                              int width,
+                              int height,
+                              int pattern_stride,
+                              unsigned char *pattern) {
+  // There are two cases for width and height: even or odd. If it's odd, then
+  // the bounds [-width / 2, width / 2] works as expected. However, for even,
+  // this results in one extra access past the end. So use < instead of <= in
+  // the loops below, but increase the end limit by one in the odd case.
+  int end_width = (width / 2) + (width % 2);
+  int end_height = (height / 2) + (height % 2);
+  for (int r = -height / 2; r < end_height; ++r) {

@@ Diff output truncated at 10240 characters. @@


