[Bf-funboard] video and audio sync from multiple sources
3pointedit at gmail.com
Thu May 14 22:34:51 CEST 2015
While Blender can assign timecode, it is used for frame tracking so that the
correct frame order is retained. Also, most devices do not record "jam sync"
timecode, i.e. timecode slaved to a master generator.
A recent Gooseberry video tutorial demonstrates syncing multiple sources
manually to a recorded sync pop or clap; that's what clapper boards are for.
Automated syncing would be outside the scope of the VSE, as it is primarily
a string-out tool, not a core editor.
However, I did propose a pseudo-solution: imagine converting all selected
audio to F-Curves, then analysing them for similarities in waveform, i.e. the
distance between peaks. This data could be used to find the offset between
the waveforms, and that offset could then be applied to the VSE strips.
I'm not a Python coder, however, sorry.
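For illustration, the peak-matching idea could be sketched outside Blender
with NumPy's cross-correlation. Note this is only a sketch: `estimate_offset`
is a hypothetical helper, not an existing Blender or VSE operator, and a real
implementation would still need to read each strip's audio and apply the
resulting offset to the strips.

```python
# Minimal sketch (plain NumPy, not the Blender API): cross-correlate two
# waveforms and take the correlation peak as the best alignment.
import numpy as np

def estimate_offset(ref, other, sample_rate):
    """Seconds by which `other` lags `ref` (positive = `other` is later)."""
    # Full cross-correlation; the index of the peak encodes the lag.
    corr = np.correlate(other, ref, mode="full")
    lag = int(np.argmax(corr)) - (len(ref) - 1)
    return lag / sample_rate

# Toy data: the same sync pop recorded by two devices, one delayed 0.5 s.
rate = 1000                        # samples per second
ref = np.zeros(2 * rate)
ref[100] = 1.0                     # clap in the reference track
other = np.zeros(2 * rate)
other[100 + rate // 2] = 1.0       # same clap, half a second later
print(estimate_offset(ref, other, rate))  # → 0.5
```

On real recordings the peaks are noisy rather than clean impulses, so one
would typically correlate envelopes or band-passed audio, but the offset
search itself is the same.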
On 15 May 2015 01:51, "Daniel Pocock" <daniel at pocock.pro> wrote:
> This question is about video editing rather than animation.
> Let's say that a user has a video recording and they have audio
> recordings in separate files, collected by separate devices.
> Does Blender provide a convenient way to align the audio files with the
> video? Is this documented anywhere?
> When I tried searching for details, Google finds a lot of pages about
> the playback sync setting (where audio plays faster than the video), but
> that is not what I'm looking for.
> Does Blender need timestamps embedded in the files, or can it use them if
> they exist? Some professional recording equipment embeds millisecond
> timestamps.
> Can Blender automatically align audio strips based on analysis of the
> content? I can see this would be possible to do manually by displaying
> the waveforms and trying to align the peaks.
> Can Blender use EXIF data from video? My DSLR embeds the time a movie
> starts, but it does not have milliseconds, so this is only useful for
> getting a rough alignment with audio.