[Bf-committers] Multiview branch feedback

Adriano Oliveira adriano.ufrb at gmail.com
Mon Jun 17 06:33:01 CEST 2013


Thank you Brecht for your analysis of my feedback.

Let me try to explain my points. I don’t want to be aggressive at all, so
if my poor English sounds that way, I apologize to you and Dalai in
advance.

My only problem with the Multiview branch is the way it affects the Image
Editor and the Node Editor. From my stereoscopic workflow's point of view,
the branch solves a problem that I don’t have, with a solution that doesn't
help me do what I have been doing.

I am in the middle of a stereo short-film production and was longing to
use the Multiview branch. To my surprise, in its current state it doesn’t
help me as much as I would like. No doubt it is a matter of workflow, and I
do not exclude the possibility that Dalai's proposal offers a better
approach, but right now I can’t see it.

The overall branch philosophy seems to offer a toy rather than a tool,
because what would be great for the BGE is not as good for offline
rendering. And the price is high, because the branch changes a lot in the
current Blender UI.

From the beginning: we can produce stereo stills and animations in Blender
right now without any new implementation. With Sebastian Schneider's camera
add-on (http://www.noeol.de/s3d/) it is even easier.

Schneider's approach relies on the Compositor to obtain stereo results.
This is perfect for my outputs because it offers me control and uses what
is already right there in Blender. My stereo outputs are: (1) side-by-side
or over-under composites for previewing on my 3D TV and for web delivery;
(2) separate L/R files for Blu-ray and DCP authoring.

The main difficulty we face today is the camera-scene relation in Blender,
which forces Schneider to use cloned scenes to manage the L/R views.

Obtaining good professional stereo is very hard. The secret is in the
camera convergence. In my workflow I don’t miss a stereo preview in the
viewport that much, because I rely on visual and numerical convergence
controls for professional results. This is because the camera convergence
that produces good stereo on my 24-inch monitor will not look as good when
I watch it on my 47-inch 3D TV. Convergence, screen size and viewer
distance are intrinsically related, unfortunately.
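
To make that relation concrete, here is a tiny sketch of the usual viewing
geometry. It is a simplified model in plain Python, and every number in it
is only an illustrative assumption, not a measurement of my setup:

# Simplified stereo viewing geometry (assumed model, for illustration only).
# For a viewer with eye separation e at distance d from the screen, a fused
# point with on-screen parallax p is perceived at Z = d * e / (e - p).
# A rendered image keeps the same parallax as a FRACTION of the screen
# width, so the absolute parallax, and with it the perceived depth, changes
# with the screen it is shown on.

EYE_SEPARATION = 0.065  # meters, average interocular distance


def perceived_distance(parallax, viewing_distance, eye_sep=EYE_SEPARATION):
    """Perceived distance of a fused point for a given on-screen parallax.

    parallax > 0: behind the screen; parallax == 0: on the screen plane;
    parallax < 0: in front of the screen.
    """
    return viewing_distance * eye_sep / (eye_sep - parallax)


# The same frame shown on two screens: the background parallax is 1% of the
# screen width in both cases, yet the absolute parallax and the perceived
# depth behind the screen come out quite differently.
parallax_fraction = 0.01
for name, screen_width, viewing_distance in (
        ("24-inch monitor", 0.53, 0.7),    # ~0.53 m wide, viewed at ~0.7 m
        ("47-inch 3D TV", 1.04, 2.5)):     # ~1.04 m wide, viewed at ~2.5 m
    p = parallax_fraction * screen_width
    print("%s: parallax %.1f mm, perceived depth %.2f m"
          % (name, p * 1000, perceived_distance(p, viewing_distance)))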

Nevertheless, even with a good camera, sometimes we need to “lie” to get
good results. For example, for a certain shot to work in stereo I may need
to control the convergence of objects and background separately. Imagine a
space scene: the camera close to an alien pilot in the cockpit, a second
spaceship 10 meters away, and the star field in the background… No real
camera rig would get this right in one shot. We are better off controlling
convergence plane by plane in the compositor; a sketch of what I mean
follows.
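
The sketch below uses only nodes that exist in Blender today, through bpy.
It assumes cloned left/right scenes in the style of Schneider's add-on,
with one render layer per depth plane; all scene names, layer names and
offsets are made up for illustration:

import bpy

# Assumed setup (hypothetical names): "Scene_R" is the cloned right-eye
# scene, with one render layer per depth plane.  The whole trick is a
# horizontal (X) shift of the right-eye plate, a different amount per
# plane, which moves that plane's convergence.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
for node in list(tree.nodes):
    tree.nodes.remove(node)

# Per-plane horizontal offsets for the right eye, in pixels, back to front.
plane_shifts = [
    ("Stars", 8.0),       # push the star field behind the screen
    ("Spaceship", 0.0),   # keep the second ship at the screen plane
    ("Cockpit", -6.0),    # pull the cockpit slightly out of the screen
]

shifted = []
for layer_name, shift in plane_shifts:
    rl = tree.nodes.new(type='CompositorNodeRLayers')
    rl.scene = bpy.data.scenes['Scene_R']
    rl.layer = layer_name
    translate = tree.nodes.new(type='CompositorNodeTranslate')
    translate.inputs['X'].default_value = shift
    tree.links.new(rl.outputs['Image'], translate.inputs['Image'])
    shifted.append(translate)

# Stack the shifted planes back to front with Alpha Over.
merged = shifted[0]
for node in shifted[1:]:
    alpha_over = tree.nodes.new(type='CompositorNodeAlphaOver')
    tree.links.new(merged.outputs['Image'], alpha_over.inputs[1])
    tree.links.new(node.outputs['Image'], alpha_over.inputs[2])
    merged = alpha_over

composite = tree.nodes.new(type='CompositorNodeComposite')
tree.links.new(merged.outputs['Image'], composite.inputs['Image'])
# The left eye goes through the same stack without the shifts.

That per-plane X offset is exactly the control I want to keep in my hands.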

Another example: take a real stereo footage shot, track it and try to place
3D elements over it. Having access to the L/R views from Render Layers is
essential for a good match.

When producing a stereo film, I miss these things:

1. A real stereo camera (or a good L/C/R camera rig) with numerical and
visual convergence/safe-area controls. Schneider's add-on is a good
starting point.
2. OpenGL stereo preview in the viewport, based on the camera rig
convergence. (I think corrected-color anaglyph or b/w anaglyph is the most
useful here; a rough compositor sketch of that same encoding follows this
list.)
3. An easy way to render L/R views with Cycles speed optimizations: not
calculating the geometry twice, for instance.
4. An easy way to access the L/R views from Render Layers in the Node
Editor, so that I can compose whatever I want. The Stereo Node that I
proposed is not essential, but something as user-friendly as an ubershader
would be welcome.
5. EXR support for L/R view archiving, no doubt.
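
About items 2 and 4: once both views are reachable as node inputs, the
anaglyph encoding itself is trivial. A minimal compositor sketch, again
through bpy and again assuming cloned left/right scenes with made-up names
("Scene_L" / "Scene_R"):

import bpy

# Minimal red-cyan anaglyph: red channel from the left view, green and
# blue from the right view.  "Scene_L" / "Scene_R" are assumed cloned
# scenes acting as the two view sources.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
for node in list(tree.nodes):
    tree.nodes.remove(node)


def view_source(scene_name):
    rl = tree.nodes.new(type='CompositorNodeRLayers')
    rl.scene = bpy.data.scenes[scene_name]
    sep = tree.nodes.new(type='CompositorNodeSepRGBA')
    tree.links.new(rl.outputs['Image'], sep.inputs['Image'])
    return sep


left, right = view_source('Scene_L'), view_source('Scene_R')

combine = tree.nodes.new(type='CompositorNodeCombRGBA')
tree.links.new(left.outputs['R'], combine.inputs['R'])    # left eye -> red
tree.links.new(right.outputs['G'], combine.inputs['G'])   # right eye -> cyan
tree.links.new(right.outputs['B'], combine.inputs['B'])

composite = tree.nodes.new(type='CompositorNodeComposite')
tree.links.new(combine.outputs['Image'], composite.inputs['Image'])

This is the poor man's version, of course. Something like it wrapped into a
user-friendly Stereo Node is all I am asking for on the compositing side.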


Back to my suggestions for the Multiview branch.

The relation between stereo views and EXR is an excellent solution that
opens up possibilities going beyond stereo. However, as I have already
pointed out, the way it is done limits access to the views from the Render
Layers node. So, to prevent the whole new logic of multi-VIEW from
compromising the good old logic of multi-LAYER, I propose:

1. Separate the stereo preview in Blender's viewport (OpenGL / 3D View)
from the render output in the Image Editor. We don’t need automated
multiview in the Image Editor, because the way it is done just messes with
the Node Editor / composite output.
2. Let me do my stereo composites in the Node Editor. If you want to help
newcomers, create a Stereo Node, as I have proposed.
3. I need to access the views from a Render Layer the same way I access
the render passes.
4. Let us use the great work Dalai is doing with EXR 2 and go further by
adding ways to merge and insert new layers into the composite output. This
way we could work on the views from Render Layers separately and recompose
them to save in EXR (see the sketch after this list). The same
functionality would allow me to add/merge other new layers (corrected
mattes, for instance) into the composite output and save them as EXR as
well. This would be helpful for communicating with other software, or
among different teams.
5. We don’t need a SELECT VIEW in the Image Editor, just the list of the
true layers in the image. Even though it may seem repetitive, it is more
correct and informative. In addition, if we implement a node to add/merge
new layers into the composition, we will need to see them listed in the
Image Editor's SELECT PASS.
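
About point 4: part of what I describe can already be approximated by
routing several images into a single File Output node set to multilayer
EXR, which is why I think the remaining step is small. A rough bpy sketch;
the slot names are placeholders, not an existing convention, and the link
lines are left as comments because the upstream nodes depend on each
particular composite:

import bpy

# Rough sketch of "recompose and archive as one EXR": route whatever the
# compositor produced (per-view results, extra mattes, ...) into a single
# File Output node saving a multilayer OpenEXR.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

out = tree.nodes.new(type='CompositorNodeOutputFile')
out.format.file_format = 'OPEN_EXR_MULTILAYER'
out.base_path = '//stereo_exr/'

# One layer slot per image we want archived in the same file
# (placeholder names).
for slot_name in ('view_left', 'view_right', 'matte_corrected'):
    out.layer_slots.new(slot_name)

# Then link whatever nodes produce those images, for example (hypothetical
# node variables):
# tree.links.new(left_result.outputs['Image'], out.inputs['view_left'])
# tree.links.new(right_result.outputs['Image'], out.inputs['view_right'])
# tree.links.new(matte_fix.outputs['Image'], out.inputs['matte_corrected'])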

Small points:

6. Let’s change any reference to "3-D" to "Stereo" or "Multiview".
7. I would prefer not to create a new Render Views tab in the Properties
panel. Its functions might better be relocated to the recently created
Render Passes tab.


Best regards,

Adriano A. Oliveira


2013/6/16 Brecht Van Lommel <brechtvanlommel at pandora.be>

> Hi,
>
> Thanks for the feedback. Dalai might have a different opinion but here
> is my take on this design.
>
> On Wed, Jun 12, 2013 at 4:22 AM, Adriano Oliveira
> <adriano.ufrb at gmail.com> wrote:
> > In my humble opinion, the problems are related to the fact that the very
> > implementations that allow stereo preview in 3d viewport are leading to
> > lack of control in Image Editor and Composite. To be more specific:
>
> The current workflow is to composite the left view and right view
> using the same node setup. Being able to run compositing for stereo
> renders without needing special nodes is very valuable I think. It
> makes it easy to switch stereo on, and switch it back off as well
> without having to fiddle with nodes. Of course there may be more
> advanced scenarios where you want special nodes to somehow use both
> left and right views in a single node setup, but I don't see why you
> would make that a requirement.
>
> Can you be more specific about why you think it has to work this way,
> like specific node operations that you need to be able to do that are
> not possible now?
>
> > *What I am not sure of and proposals:*
> >
> > -          The same approach that allows previewing stereoscopy in 3d
> > viewport is not as useful in Image Editor (render outcome) and
> > Composite. I
> > think it is better that Image Editor only shows stereo images as long as
> > they have been composed in Composite as such.
>
> Why is this better? Isn't it useful to be able to enable stereo, press
> render, and get a stereo image immediately?
>
> > -          For that reason, I would remove the “3-D” option in “Select
> > View”. Better: In Image Editor I would remove this new “Select View”, for
> > it leads to confusion when dealing with Composite outputs. Even though it
> > may seem repetitive, it is more correct and informative to add the views
> > ids in the old “Select Pass”, by adding suffixes: Composite.Left;
> > Composite.Right; Z.Left; Z.Right… These suffixes should be added whenever
> > the user activates L or R views in Render Views.
>
> I'm not sure why this is more correct, I don't see why representing
> these as passes instead of views helps? The only reason OpenEXR stores
> them this way is to keep the file format compatible, but for user
> interfaces it makes sense to represent things more organized and not
> at this low level. We could have done the same for RenderLayers and
> RenderPasses, putting them all together in one big list. Their full
> names in OpenEXR files are RenderLayer.Z.Left, etc. But this does not
> make for a better UI in my opinion.
>
> > -          In consequence, stereo automated previews should not be a UI or
> > Blender Window general option in Preferences, but an option only related
> > to
> > a single 3d Viewport, in the very Viewport.
>
> I can see how it would be useful to have control over this at the
> viewport level. But even if we have that a default stereo viewing
> method in the user preferences seems like a good idea to me? It
> depends on the monitor and operating system configuration so it
> belongs in user preferences I think.
>
> > -          In Composite, Render Layers nodes are lacking a switch for
> > choosing Left or Right, like Image nodes already have. Both should offer
> > Left and Right switches, and…
>
> I think there were some plans to allow you to use a specific view in
> the Render Layer instead of automatically left/right but I'm not sure
> what the status of that is. You would get the options Auto / Left /
> Right and then you can choose which view to use. If it's there for
> images I guess it will be there soon for Render Layer nodes too.
>
> > -          Render Layers and Image nodes should NOT offer a “3-D”
> > switch, for it is not useful in Composite and leads to confusion. 3-D is
> > not a
> > channel within Render Layer or Image, it’s just a fast composition option
> > of two layers, based on generic parameters.
>
> Not sure why this is a problem. These settings in the header are just
> a way to specify what you want to view. So why not specify there that
> you want to view Left, Right or 3D? Is it because it might confuse
> users into thinking this data will be saved in files? I can see that,
> maybe the UI should be designed to make that more clear. The option to
> view 3D should be there somewhere though.
>
> > -          It is more logical to get stereo outcome composing Left and
> > Right in a new “Stereo Node”, that would offer: 02 inputs (channels from
> > toggled L / R Render Layers or Image nodes), 01 selector with presented
> > stereo modes (side by side, over under…); 01 input numerical parameter
> > for
> > convergence correction; 01 composed stereo image output.
>
> I don't agree with this, I think that's a file format and display
> feature. Right now we only support saving stereo OpenEXR images, and
> views do not need to be composited together for that. If we support
> for example PNS or JPS in the future we can composite together the
> images appropriate for those file formats and write all the proper
> metadata along with it. Even now we could have an option to save
> images anaglyph or side by side in regular file formats.
>
> If this stereo compositing happens in the compositing nodes then the
> result is no longer file format and stereo viewing method agnostic and
> it becomes complicated for the user to ensure that the compositing
> setup matches the file format. If you want to view the render result
> on a different monitor you might need to change the compositing
> nodes setup. Even worse, you wouldn't be able to view the result on
> your monitor in one way but save it in another.
>
>
> Overall, I think the way it's designed to work in Blender makes a lot
> of sense. The whole pipeline really understands stereo, from the 3D
> viewport to rendering and compositing. That's different than some
> other applications where you have to deal with plugins and manually
> split and merge render passes, composite stereo images, etc, and I
> think we can make things more user friendly because it's natively
> supported.
>
> Think about how nice it is if you can just take existing files, enable
> stereo, press render and see the result in 3D. And then being able to
> easily turn it off when you want to work on shading and lighting, or
> switch file formats quickly, etc.
>
> Brecht.
> _______________________________________________
> Bf-committers mailing list
> Bf-committers at blender.org
> http://lists.blender.org/mailman/listinfo/bf-committers
>

