[Bf-funboard] New compositing and VSE monitor tools

David McSween 3pointedit at gmail.com
Sun Feb 21 14:01:22 CET 2016


I have illustrated this current approach to integrating the compositor and
the VSE on Blender Stack Exchange:
http://blender.stackexchange.com/questions/2105/color-correction-in-the-compositor-from-the-video-editor/47400#47400
Feel free to comment.

David

On Sun, Feb 21, 2016 at 9:15 PM, David McSween <3pointedit at gmail.com> wrote:

> Douglas,
> You must remember that the VSE strips are unique to the VSE. You can load
> movie clips elsewhere (tracker, compositor) but they do not correspond with
> the VSE directly. The only way to import media from elsewhere in Blender
> (IIRC) to the VSE is via Scene strips or Masks. A VSE Movie clip strip is
> only a local, unique instance of another datablock elsewhere in Blender.
>
> So what you are describing in the bug workflow is not strictly what is
> actually happening. The VSE only uses local instances of external media.
>
> The actual current solution is to define other scenes as wrappers for
> external media. That is, a scene would be named for the camera clip or shot
> that it holds. This scene would have a compositor set up for grading the
> shot, and its corresponding audio strip on its own VSE timeline.
>
> In your master edit scene, you add the camera-native strip and this
> composited/color-graded scene strip, and meta-strip them together. You
> would then cut with the meta-strip (using a proxy of the footage if
> required) and, at picture lock, switch to the graded scene strip.
>
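> A rough bpy sketch of that wrapper-scene setup (the file path, scene names
> and the single Color Balance grade below are purely illustrative, and the
> snippet is untested; it only uses standard bpy calls):
>
>     import bpy
>
>     SHOT_PATH = "//footage/shot_010.mov"   # illustrative path
>
>     # 1. A wrapper scene named for the shot, graded in its own compositor.
>     grade_scene = bpy.data.scenes.new("shot_010_grade")
>     grade_scene.use_nodes = True
>     nodes = grade_scene.node_tree.nodes
>     links = grade_scene.node_tree.links
>     nodes.clear()  # drop the default Render Layers / Composite nodes
>
>     clip = bpy.data.movieclips.load(SHOT_PATH)
>     clip_node = nodes.new("CompositorNodeMovieClip")
>     clip_node.clip = clip
>     grade = nodes.new("CompositorNodeColorBalance")   # the grade itself
>     out = nodes.new("CompositorNodeComposite")
>     links.new(clip_node.outputs["Image"], grade.inputs["Image"])
>     links.new(grade.outputs["Image"], out.inputs["Image"])
>
>     # 2. In the master edit scene, add the camera-native strip and the
>     #    graded scene strip; select both and meta-strip them with
>     #    bpy.ops.sequencer.meta_make(), as described above.
>     seq = bpy.context.scene.sequence_editor_create()
>     seq.sequences.new_movie("shot_010_raw", SHOT_PATH, channel=1,
>                             frame_start=1)
>     seq.sequences.new_scene("shot_010_graded", grade_scene, channel=2,
>                             frame_start=1)
>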
> This workaround(?) fails to provide correspondence between the edit and the
> source footage, however, so multiple grades from alternate sections of the
> same source media could be an issue. This depends on the length of the
> source, of course: single short shots probably don't change much, while
> long screen captures or wild recordings can last hours (changing all the
> time).
>
> Troy would suggest that a clearly defined shot list would resolve this
> latter issue.
>
> However I don't recall what the implications for color transforms are.
>
> Hope this helps.
>
> David
>
> On Sun, Feb 21, 2016 at 8:58 PM, Knapp <magick.crow at gmail.com> wrote:
>
>> I just filed a bug. [developer.blender.org] [Created] T47514: VSE and
>> Node Compositor Film loading not properly linked.
>>
>> If you grab the blend you can see my way of doing color grading. I think
>> this works for now, and it would likely be much like our solution to the
>> problem.
>>
>> Thanks Douglas
>>
>> On Sun, Feb 21, 2016 at 11:10 AM, Knapp <magick.crow at gmail.com> wrote:
>>
>> >
>> >
>> > On Sat, Feb 20, 2016 at 8:02 PM, Troy Sobotka <troy.sobotka at gmail.com>
>> > wrote:
>> >
>> >> On Sat, Feb 20, 2016 at 1:06 AM Knapp <magick.crow at gmail.com> wrote:
>> >>
>> >> > OK, so then working in the compositor and doing color grading in the
>> >> > compositor is a valid concept that will work. This leaves the VSE good
>> >> > for high-speed scrubbing.
>> >> >
>> >>
>> >> You have just stumbled into precisely the reason that an offline
>> >> production pipeline exists; there is simply too much stuff moving around
>> >> to cram all of that vital image science and complexity into an NLE.
>> >>
>> >> Full stop.
>> >>
>> >> So what is the option? Very close to the current implementation of the
>> >> VSE, which is basically a method to arrive at an editorial endpoint
>> >> (based on offlines, aka proxies) and commence the heavy lifting in a
>> >> non-realtime environment.
>> >>
>> >>
>> >> > This also means that having scopes as output nodes is a valid idea.
>> >> >
>> >>
>> >> The scopes in the UV Image Editor were relatively robust, but have
>> >> hard-coded elements which make them somewhat broken in the modern-day
>> >> version of Blender with OCIO. Having scopes / waveforms / etc. crammed
>> >> into a sidebar is far from optimal, of course.
>> >>
>> >
>> > I don't know the code, but I was a programmer in the distant past. It
>> > would seem that a lot of the code could be reused and refactored into a
>> > nodal system. I can't, of course, say how much work this is, but it does
>> > seem in line with the other nodal projects that are starting right now.
>> >
>> >
>> >> > Also a VSE channel input node in the compositor might be of some
>> >> > value. Correct?
>> >> >
>> >>
>> >> I'm unsure that the solution is optimal, and I'm not entirely sure how
>> >> best to solve this one, although some ideas have rooted over the past
>> >> decade.
>> >>
>> >> I'd personally prefer _not_ to see folks mixing and matching phases,
>> >> flip-flopping in the pipeline, back and forth. Rather, I'd like to see a
>> >> clear path from editorial, to post-production CGI / VFX, to grading /
>> >> finishing be plausible. I believe that something close to what the VSE
>> >> is today is a critical part of that; a Temporal Viewer, if you will.
>> >>
>> >
>> > Sounds good.
>> >
>> >
>> >>
>> >> Grading comes with a unique set of challenges, not the least of which
>> >> is per-shot grades. This means that the temporal placement of the shots
>> >> needs to be maintained for something like a dissolve from A to B. Shot A
>> >> requires grading, as does shot B, completely independently of one
>> >> another. Somewhat obvious perhaps, but the temporal view of independent
>> >> shots is rather important, in contrast to a strictly node-based,
>> >> non-temporal view.
>> >>
>> >
>> > Yes.
>> >
>> >
>> >>
>> >> Would a strip-to-node mapping alleviate that complexity? I don't think
>> >> so. Nodes _are_ the solution, but having 1000 shots would add 1000
>> >> nodes. It would seem that more granularity is required in each node
>> >> view, rather like our existing Material nodes, for example.
>> >>
>> >
>> > Sounds like a good plan. One compositor window for each strip; the
>> > active window would change with the active strip. BTW, this does bring up
>> > a point: should the strips not be in the Outliner? It seems to me that
>> > they should.
>> >
>> >
>> >>
>> >> We also have to remember that grading is an _extremely complex_
>> >> operation that covers all aspects of looks. This could be heavily
>> >> processor-based, using tracking, mattes or restricted regions, and many
>> >> other things, including even Z depth, etc. Now couple that with the
>> >> understanding that you are frequently operating on scene-referred data
>> >> (zero to infinity) of both footage and CGI elements, and you can begin
>> >> to see the power needs. Again, a per-shot Material-node-like view makes
>> >> sense here, with a Temporal Strip View being the navigation mechanism.
>> >>
>> >
>> > Agreed, with one side point. Most Blender users are not making 2-hour
>> > films at 4K quality. Most of my projects have about 20 strips max, and I
>> > think this is typical: mostly just HD and not too complex.
>> >
>> >
>> >>
>> >> > Why not use a linearized reference space in the VSE then? Why not use
>> >> > linearized reference spaces everywhere?
>> >>
>> >> If we think about what an NLE should do, we can probably peel apart
>> >> _how_ it might do it.
>> >>
>> >> Here are a few points that we can (hopefully) agree upon at this point:
>> >>  * If we are performing manipulations, we _must_ use a linearized
>> >> reference space. This includes blending, compositing, blurring, mixing,
>> >> etc.
>> >>
>> >
>> > Yes.
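>> >
>> > To make that concrete, a tiny plain-Python sketch (not Blender code, and
>> > the transfer functions are just the standard sRGB pair): averaging two
>> > sRGB-encoded values gives a different answer than averaging them in
>> > linear light and re-encoding.
>> >
>> >     def srgb_to_linear(v):
>> >         return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4
>> >
>> >     def linear_to_srgb(v):
>> >         return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055
>> >
>> >     a, b = 0.0, 1.0                 # black and white, sRGB-encoded
>> >     naive = (a + b) / 2             # 0.5: blend done on encoded values
>> >     proper = linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)
>> >     print(naive, round(proper, 3))  # 0.5 vs ~0.735, a visible difference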
>> >
>> >
>> >>  * Grading is complex, requiring massive horsepower.
>> >>
>> >
>> > At a pro level, yes, but at a normal Blender user's level I don't think
>> > this is a big factor. Am I wrong here? Most of us are doing shorts or
>> > tutorials, not 2-hour movies. A modern computer with two Titans and a
>> > good i7 CPU has a lot of power and about 32 GB of RAM. This should be
>> > enough for the average user, or even the above-average user. I have an
>> > Nvidia 580 and even it is getting the short end of the stick (5% speed
>> > loss) due to its age. So there is a fair amount of power, and the average
>> > user can afford to wait. Pros can throw cash at it and have a server
>> > farm.
>> >
>> >
>> >>  * CGI animation / visual effects are complex, requiring massive
>> >> horsepower.
>> >>
>> >
>> > As above.
>> >
>> >
>> >
>> >>
>> >> All three of these points, as trivial as they seem, challenge the
>> >> notion of the KitchenSink™ NLE application. If you care about the above,
>> >> you can't have everything moving along in realtime. Nor do you want it
>> >> to.
>> >>
>> > Agreed.
>> >
>> >
>> >
>> >> Linearized data, especially scene-linear zero-to-infinity data,
>> >> requires a bare minimum of half-float data. That means that all of the
>> >> incoming imagery needs to be either already at float or half float, or
>> >> the codec requires promoting an eight-bit buffer to a half-float
>> >> representation. The View on the reference then converts that
>> >> not-quite-viewable-directly data into useful display-referred output,
>> >> much like what already exists in the UV / Image Viewer chain for nodes.
>> >>
>> >> Immediately we can see, though, that we effectively double our memory
>> >> requirement, and likely significantly increase our draw on processing,
>> >> taxing our concepts of realtime. Now load 600 of those in a simple short
>> >> project[1].
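>> >>
>> >> To put a rough number on that doubling (a back-of-envelope sketch in
>> >> plain Python, assuming 1080p RGBA frames):
>> >>
>> >>     w, h, ch = 1920, 1080, 4
>> >>     mib = 1024 * 1024
>> >>     print(w * h * ch / mib)           # ~7.9 MiB per frame at 8 bpc
>> >>     print(w * h * ch * 2 / mib)       # ~15.8 MiB per frame at half float
>> >>     print(w * h * ch * 2 * 24 / mib)  # ~380 MiB/s of frames at 24 fps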
>> >>
>> >> The essence of an editorial system is to deliver editorial, which means
>> >> rapid changes and realtime display of the things that matter to
>> >> editorial: pacing, beats, duration, timing, audio cues, etc. All of
>> >> those realtime facets cannot be negotiated with the extremely granular
>> >> needs of CGI, visual effects, grading, etc. Worse, Blinn's Law seems in
>> >> full effect. Deep compositing might be a good example here.
>> >>
>> > Yes.
>> >
>> >
>> >> The TL;DR is that there _is_ a path forward for the VSE, or rather, its
>> >> next-generation iteration. There _is_ a need. It _can_ be solved for
>> >> imagers within not only a Blender paradigm, but also one that works
>> >> beyond Blender for mixing and matching tools.
>> >>
>> > OK.
>> >
>> >
>> >
>> >> We just need to sort out the noise from the design goals, and focus on
>> >> those very specific needs in relation to Blender as it exists now.
>> >> Blender imagers have the advantage of having seen some of the complexity
>> >> involved across the many facets of imaging, and as such, can likely more
>> >> easily understand statements like "You can't do that in an NLE." This
>> >> makes a typical Blender imager much more likely to be able to agree to
>> >> and understand a clear design path forward, without the ridiculous noise
>> >> that typically surrounds NLE discussions.
>> >>
>> >>
>> > Yes. It makes full sense to me to make the VSE the realtime viewer,
>> > focused on speed, and the compositor the place for the heavy lifting; to
>> > make its node view per strip in the VSE; and to include strips in the
>> > full asset management system, i.e. the Outliner view.
>> >
>> > A first step would be to have the scopes as output nodes in the
>> > compositor.
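>> >
>> > As a sketch of what such a scope node would compute (plain Python with
>> > NumPy, not Blender code; a simple Rec.709 luma waveform over a float
>> > buffer, with made-up names):
>> >
>> >     import numpy as np
>> >
>> >     def luma_waveform(rgb, bins=256):
>> >         # rgb: (height, width, 3) float array; returns a per-column
>> >         # histogram of luma, i.e. the classic waveform display.
>> >         luma = rgb @ np.array([0.2126, 0.7152, 0.0722])
>> >         idx = np.clip((luma * (bins - 1)).astype(int), 0, bins - 1)
>> >         wf = np.zeros((bins, luma.shape[1]), dtype=np.int32)
>> >         for col in range(luma.shape[1]):
>> >             wf[:, col] = np.bincount(idx[:, col], minlength=bins)
>> >         return wf  # rows = luma level, columns = image x position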
>> >
>> >
>> >
>> >> With respect,
>> >> TJS
>> >>
>> >> [1] I'd actually insist we upgrade the next strip viewer's
>> >> implementation to a half-float representation. We could then get a much
>> >> closer idea as to what an editorial temporary look might be in terms of
>> >> a rough grade or blending. Again, even though it would still be a proxy,
>> >> having half float makes the guesswork a little closer, allowing us to
>> >> reuse our existing curves and such 1:1 on the full-resolution, full
>> >> bit-depth sources. Cycles too has a need for a library such as this, as
>> >> well as painting, so it is an effort that would span across many areas.
>> >>
>> > Sounds good.
>> >
>> > Thanks!
>> >
>> >
>> > --
>> > Douglas E Knapp, MSAOM, LAc.
>> >
>>
>>
>>
>> --
>> Douglas E Knapp, MSAOM, LAc.
>> _______________________________________________
>> Bf-funboard mailing list
>> Bf-funboard at blender.org
>> http://lists.blender.org/mailman/listinfo/bf-funboard
>>
>
>

