[Bf-compositor] Extending some node functions to VSE

Sean Kennedy mack_dadd2 at hotmail.com
Fri May 23 17:59:35 CEST 2014


Regarding only this comment:
"This way, the compositor is the final stage of the pipeline, and receives the output of the VSE using a VSE input node. "

This is not good for film production pipelines. Editing, color timing, sound production, etc., should always remain at the end of the pipeline.
The layer node I proposed wasn't meant to bring any VSE functionality into the compositor. It was simply an easier, visual way to set the offset settings: a different way of presenting the data the image sequence node already shows. Nothing groundbreaking, and no change to functionality, just a quicker way to set offsets. It wasn't even a formal proposal, just an idea I used to wish for while working in R&H's proprietary software. That software had a layer view, but I always thought it would have been easier to set those offsets without even leaving the node view.
sean

From: goosey15 at gmail.com
Date: Fri, 23 May 2014 16:33:57 +0100
To: bf-compositor at blender.org
Subject: Re: [Bf-compositor] Extending some node functions to VSE

Hi all, this is a very brief concept on workflow, and it might be rather short-sighted.
The VSE is built around layer-based, temporally dependent sequences. The compositor is also aware of time; it simply lacks the layers. To me it would seem silly to attempt to create a layer node and provide a sound node, because these only apply to the VSE, and image compositing doesn't require some of these features. Moreover, layering is best done as the VSE does it, for now.



I believe that if we wanted to combine the facilities of the two in the same pipeline, it might be useful to define "layer" effect strips which are used as timing cues for the compositor. When using the VSE, if the compositor is enabled, it will have the ability to composite the entire sequence, as well as take input from a VSE layer node. This node would serve two purposes:


1. It acts like a switch, becoming enabled while its layer is being rendered in the VSE.
2. It provides timing information (a relative frame, and the total number of frames), which may also be of use since the compositor already provides switch statements and a time value.
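The two purposes above could be modelled roughly as follows. This is a plain-Python sketch only, not the Blender API: the class name `VSELayerNode` and the attributes `strip_start`/`strip_end` are hypothetical stand-ins for whatever a real implementation would expose.

```python
# Hypothetical model of the proposed "VSE layer" input node.
# Not Blender code; names are illustrative only.

class VSELayerNode:
    """Sketch of the two behaviours described above:
    1. a switch that is active only while its layer is being rendered,
    2. a source of timing information for the compositor."""

    def __init__(self, strip_start, strip_end):
        self.strip_start = strip_start  # first frame of the VSE strip
        self.strip_end = strip_end      # last frame of the VSE strip

    def is_enabled(self, frame):
        # Purpose 1: the node "switches on" only inside the strip's range.
        return self.strip_start <= frame <= self.strip_end

    def timing(self, frame):
        # Purpose 2: relative frame within the strip, and total frame count.
        relative = frame - self.strip_start
        total = self.strip_end - self.strip_start + 1
        return relative, total

node = VSELayerNode(strip_start=10, strip_end=49)
print(node.is_enabled(5))   # False: before the strip starts
print(node.is_enabled(10))  # True: first frame of the strip
print(node.timing(12))      # (2, 40): relative frame 2 of 40
```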


This way, the compositor is the final stage of the pipeline, and receives the output of the VSE using a VSE input node. 
This gives the compositor the timing information from the VSE without attempting to assimilate its behaviour, as they are suited to their own tasks.


In terms of implementation, this wouldn't require any complex switching between VSE and compositor, so no nested composites (that would be ugly!).
It also means that the VSE and the compositor could be used in tandem with the render pipeline. Currently, the user would have to shift animations to the correct frame to match the VSE, which seems incorrect. We should be treating the render output as rendered footage rather than as an immediate render, even though it is rendered on the fly. Therefore, it would seem sensible that the VSE should either:


1. Ask the Blender render to render specific frames. This seems like a bad idea because, although I haven't looked at this, I suspect that the time progression (frame) within Blender is global rather than unique to each editor.

2. Ask the Blender render to render the entire scene to disk and then retrieve specific frames, doing so behind the scenes.
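Option 2 amounts to a render-once, fetch-many frame cache. As a hedged plain-Python sketch (the `render_scene` callable and the `frame_%04d.png` naming are assumptions for illustration, not Blender's actual render pipeline):

```python
# Sketch of option 2: render the whole scene to disk once, then serve
# individual frames from that cache behind the scenes. Hypothetical names.

import os
import tempfile

class FrameCache:
    def __init__(self, render_scene, cache_dir):
        self.render_scene = render_scene  # callable that renders every frame to a directory
        self.cache_dir = cache_dir
        self.rendered = False

    def get_frame(self, frame):
        # "Behind the scenes": the full render happens on the first request only.
        if not self.rendered:
            self.render_scene(self.cache_dir)
            self.rendered = True
        path = os.path.join(self.cache_dir, "frame_%04d.png" % frame)
        with open(path, "rb") as f:
            return f.read()

# Usage with a fake renderer that writes placeholder files:
def fake_render(out_dir):
    for i in range(1, 6):
        with open(os.path.join(out_dir, "frame_%04d.png" % i), "wb") as f:
            f.write(b"frame %d" % i)

cache = FrameCache(fake_render, tempfile.mkdtemp())
print(cache.get_frame(3))  # frame 3's bytes; the scene was rendered exactly once
```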
Anyway, this was more of a thought exercise than an actual proposal, and I'm sure I shall be told why this is an awful idea!



Regards,
Angus Hollands

_______________________________________________
Bf-compositor mailing list
Bf-compositor at blender.org
http://lists.blender.org/mailman/listinfo/bf-compositor