[Bf-committers] Proposal: Canvas compositing
j.bakker at atmind.nl
Wed Aug 3 09:09:37 CEST 2011
Thanks for your response. If I recap what I have and haven't heard so far:
1. Use the Image space for the backdrop handles, not the backdrop.
2. Should a viewport define the output? It could also be controlled by the
distortion nodes.
3. Use a matrix for distortion.
IMO the Image space is not the ideal place, as it does not have the concept
of a canvas. It could technically be added there, but then the canvas
concept would also show up for images that are not used by the compositor.
And what about a single image used by two ImageNodes that place it in
different locations on the canvas? I think a separate Space would be
better. This space could be used for Images/RenderLayers that are used in
the compositor, the output of the compositor, X-ray into the image data, a
place to use handles, and the canvas.
a. Do we stick to the Image space, or do we need to introduce a new Space?
b. If a new Space is needed, it requires a good up-front design with goals,
limitations and scope.
The viewport idea perhaps complicates things a lot for the user, and has
limited benefits. I don't think the area-of-interest feature is that wanted
anymore; in the compositor branch it has been implemented via a hotspot,
and the compositor calculates that area first.
c. Do we remove the idea of the viewport?
The third item is already present in the compositor branch, but using a
3d-matrix and passing it along is a good idea. The current implementation
stacks 'Operations', and this stack is evaluated back to the original input
node; it also includes color operations and mix operations. The image is
only sampled once, via linear/bi-linear reads, for better quality. And yes,
only Filter nodes halt this pass-through, precisely as Matt describes.
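To make that deferred evaluation concrete, here is a hypothetical Python sketch (not the actual compositor-branch code; all names are illustrative): transform operations only accumulate on the stack, and the source buffer is read once when a filter needs real pixels.

```python
sample_count = 0  # how many times the source buffer is actually read


class DeferredImage:
    """Toy model of an image whose transforms are deferred."""

    def __init__(self):
        self.pending = []  # transforms accumulated since the last read

    def transform(self, name):
        self.pending.append(name)  # just record the op; no pixels touched
        return self

    def flush(self):
        """Read pixels once through all pending transforms combined."""
        global sample_count
        if self.pending:
            sample_count += 1
            self.pending = []

    def filter(self, name):
        # A filter (e.g. blur) needs concrete pixels, so it halts the
        # pass-through: all pending transforms collapse into one sampling.
        self.flush()
        return self


img = DeferredImage()
img.transform("rotate").transform("shear").transform("scale").filter("blur")
img.transform("translate").transform("rotate").flush()
# Five transforms plus a filter, but the source was sampled only twice:
# once (combining three transforms) at the blur, once at the final flush.
```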
From: Matt Ebb matt at mke3.net
Date: Wed, 3 Aug 2011 08:43:58 +1000
To: j.bakker at atmind.nl, bf-committers at blender.org
Subject: Re: [Bf-committers] Proposal: Canvas compositing
Hi Jeroen, I'm not keen on this proposal - it seems to mix two concepts and
does it in a bit of an odd way.
The first concept is region of interest, or bounding box:
It should be possible to define regions of interest within which only the
contained pixels will be processed/output/etc. This can be used for things
like the oldskool preview window in blender's compositor to only update a
cropped subset of the pixels to speed things up while tweaking, but can also
include things like having different data windows and display windows (like
EXRs contain). Sometimes you'll want to render an EXR with overscan, so the
data window is larger than the display window, which lets you do things like
add distortion, camera shake, etc using the extra pixels around the edge of
the visible frame so there aren't edge artifacts.
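As a rough sketch of the data-window/display-window distinction (illustrative Python, not the OpenEXR or Blender API; the window sizes are made up): rendering with overscan means a shake or distortion can pull from pixels outside the visible frame, as long as it stays within the overscan margin.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Window:
    """An EXR-style pixel window with inclusive bounds."""
    xmin: int
    ymin: int
    xmax: int
    ymax: int

    def contains(self, other):
        return (self.xmin <= other.xmin and self.ymin <= other.ymin and
                self.xmax >= other.xmax and self.ymax >= other.ymax)


def shifted(win, dx, dy):
    """The source region needed to fill `win` after a (dx, dy) shake."""
    return Window(win.xmin - dx, win.ymin - dy, win.xmax - dx, win.ymax - dy)


display = Window(0, 0, 1919, 1079)    # the visible frame (HD)
data = Window(-64, -64, 1983, 1143)   # rendered with 64 px of overscan

# A 40-pixel camera shake still reads only real pixels...
small_shake_ok = data.contains(shifted(display, 40, -25))
# ...but a 100-pixel shake runs off the data window (edge artifacts).
big_shake_ok = data.contains(shifted(display, 100, 0))
```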
The second concept is a matter of transform nodes:
IMO this should really be implemented as a 3D transformation matrix per node
(better to use 3D to be future proof, even if only 2D is used atm). I'm
pretty sure most other good compositors do it this way as well.
All transform nodes should just be modifying that matrix, passing on a
reference to the original buffer, and not doing any processing of their own.
Rather, whenever real pixels are needed (for example, a blur) in the
function that retrieves the buffer (or tile) for the blur node, it should
then sample the original buffer once, using the current transform matrix.
This allows concatenation of transforms (as in Shake and Nuke), where you
can have as many different transform nodes one after the other like
rotate->shear->scale->translate->rotate, and it will only sample the image
once, to prevent deterioration in image quality.
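A minimal sketch of that concatenation, using 2D affine transforms as 3x3 homogeneous matrices (plain Python, just to show the math; a real implementation would use 4x4 matrices as suggested above): applying the single combined matrix to a point gives the same result as chaining the transforms one by one, so the image needs only one sampling pass.

```python
import math


def mat_mul(a, b):
    """Multiply two 3x3 matrices (row-major nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]


def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]


def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]


def rotate(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]


def apply(m, x, y):
    """Transform a 2D point by a 3x3 homogeneous matrix."""
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])


# rotate -> scale -> translate, folded into one matrix (note the order:
# the first transform applied is the rightmost factor).
combined = mat_mul(translate(10, 5), mat_mul(scale(2, 2), rotate(math.pi / 2)))

# One lookup through the combined matrix...
x, y = apply(combined, 1.0, 0.0)

# ...matches applying the three transforms in sequence.
px, py = apply(rotate(math.pi / 2), 1.0, 0.0)
px, py = apply(scale(2, 2), px, py)
px, py = apply(translate(10, 5), px, py)
```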
There's a bit of info on these things in the Nuke docs.
On Wed, Aug 3, 2011 at 1:27 AM, j.bakker at atmind.nl
<j.bakker at atmind.nl>wrote:
> Hi all,
> Some of you might already know this, but I have put some effort into
> making canvas compositing work in Blender.
> I have made a draft proposal to see where it should go. I would like to
> have feedback on the user interface level. How can we make this usable to
> our users? How can we keep the render width/height settings in sync with
> the canvas width/height?
> In the current compositor, the composite works on a coordinate system
> defined in code. This coordinate system (ImBuf x, y) cannot be changed or
> even used by the user. This proposal will open up that coordinate system
> and implement canvas compositing in Blender. This is possible as the
> compositor redesign project does not enforce a coordinate system and
> handles data/memory in a different way (no buffers).
> This will introduce 4 new concepts in the compositor:
> - Canvas (the existing background grid)
> - Viewport (the camera that the compositor looks through; the area that
> will be the result of the composite)
> - Positionable input nodes (data can be positioned/scaled/rotated on the
> canvas)
> - Backdrop handles (interactive handles on the backdrop)
> The link below has the proposal including sample images and a (technical)
> Happy Blending!