[Bf-usd] Welcome to the BF-usd mailing list

Jan Walter jdb.walter at posteo.de
Mon Dec 7 16:58:54 CET 2020


Hi Sybren,

> Welcome to the list, and thanks for the introduction.

Thanks.

> Is this something you are interested in working on yourself? Or can
> you help in other ways to make this a reality?

Yes, I would like to cooperate with someone to make this a reality.

Maybe it helps if I translate some parts of that Digital Production
article about USD into English and explain things from there. In that
article I focused mainly on the rendering side. If you compile Pixar's
USD library yourself, one executable that ships with it is called
usdview. Here is a video about that:

https://youtu.be/i-bGVUjh8NU

So, in terms of Blender (or Houdini), it does not matter where the USD
file comes from: the DCC exports such a file (with animations) and
usdview should be able to play the animation back using an OpenGL (or
DirectX, or Vulkan) preview. That preview renderer is itself nothing
more than a Hydra render delegate plugin.
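
As a minimal sketch of that round trip (assuming a Blender build with
the built-in USD exporter; the operator parameters may differ between
versions), the export side could look like this in Blender's Python
console:

    import bpy

    # Export the current scene, including its animation, to a USD
    # file that any Hydra-based viewer can then play back.
    bpy.ops.wm.usd_export(
        filepath="/tmp/scene.usd",
        export_animation=True,
    )

Afterwards, running "usdview /tmp/scene.usd" plays the result back,
independent of the DCC that wrote the file.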

For 3rd party renderers (like Pixar's RenderMan, Arnold, or Cycles)
such a render delegate can be written and copied to a location where
usdview picks it up as a shared library.
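
As a rough sketch (the plugin path below is hypothetical), USD scans
the directories listed in PXR_PLUGINPATH_NAME for plugInfo.json files,
and the delegates found that way can be listed from Python:

    import os

    # Hypothetical install location of a third-party render delegate;
    # USD's plugin registry scans these directories on startup.
    os.environ["PXR_PLUGINPATH_NAME"] = "/opt/hdCycles/plugin"

    from pxr import UsdImagingGL

    # List the Hydra render delegates that usdview would offer in
    # its renderer menu.
    for plugin_id in UsdImagingGL.Engine.GetRendererPlugins():
        print(plugin_id,
              UsdImagingGL.Engine.GetRendererDisplayName(plugin_id))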

In the article I explained a bit what has changed over the years
regarding rendering, physically based shading, path tracing, etc., but
mainly it comes down to the time to first pixel: in the past, a ray
tracer was not very reliable at delivering a pixel within a
predictable time frame. A glossy glass or metal material could make
the ray tracer spend a lot of time on a single pixel, whereas other
parts of the scene returned a color much faster.

A uni- or bidirectional path tracer combined with physically based
materials (using BSDFs, etc.) needs many more samples per pixel, but
returns a first guess much faster. Instead of exploding into thousands
of rays traced in many directions, it follows one sample from the
eye/camera, hits a material, decides to spawn a reflected or refracted
ray (or both), and follows that path through its bounces up to a
certain depth. So it returns a single pixel sample fast, then takes
many samples per pixel and averages them all into one color. That
makes it much more predictable for real-time rendering (with a lot of
noise, of course), and as soon as you release the mouse and stop
interacting with the scene, it slowly converges to a noise-free
image. A denoiser helps on top of that.
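
A minimal sketch of that progressive averaging (trace_path() is a
hypothetical stand-in for tracing one full camera path):

    import random

    def trace_path(x, y):
        # Hypothetical: follow ONE ray from the camera at pixel
        # (x, y), pick one reflected or refracted direction at each
        # bounce up to a fixed depth, and return the radiance of
        # that single path.
        return random.random()  # stand-in for a real estimate

    def render_pixel(x, y, max_samples):
        mean = 0.0
        for n in range(1, max_samples + 1):
            # Running average: after every sample the pixel holds a
            # valid (noisy) estimate, so a first guess appears
            # immediately and the image converges as samples arrive.
            mean += (trace_path(x, y) - mean) / n
            yield mean  # the viewport can display this each step

Each yielded value is already a usable, if noisy, pixel color, which
is why the viewport can show something immediately.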

Back to Blender and Cycles: such a plugin has already been written by
Tangent Animation, and to my knowledge it was open-sourced recently.

I'm in the middle of investigating how those Hydra render delegates
can be used within Houdini, but I have seen videos showing that it is
possible, and I made some tests with Houdini Indie (which allows 3rd
party renderers) and Arnold, Redshift, Octane, and other renderers.
They all go down the USD path, and integrating them into Houdini has
become easier (even though it's still in beta and there are several
ways to render, apply materials, etc.). With Blender, on the other
hand, every renderer has decided to create an add-on, expose its
shaders in the user interface, and so on. That is how it was with Maya
and other DCCs as well, and it makes it hard to switch renderers
within the same DCC. (I think Houdini always allowed switching between
renderer-specific shaders/materials per output ROP, which is basically
the renderer.)

The new way would be to export the geometry with animation as a USD
file, use some placeholder materials (e.g. from Cycles in Blender or
Mantra/Karma/Solaris in Houdini), and override those materials
externally (once they are written to disk) before a 3rd party renderer
renders the animation, e.g. on the render farm. All supported
renderers would share the same scene description instead of each
defining its own.
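
A minimal sketch of such a non-destructive override with USD's Python
bindings (all file names and prim paths here are hypothetical): a new
layer sublayers the exported file, so the placeholder binding is
overridden without touching the original on disk.

    from pxr import Usd, UsdShade

    # New stage whose root layer sublayers the exported scene; the
    # overrides live in this layer, the original file stays as-is.
    stage = Usd.Stage.CreateNew("overrides.usda")
    stage.GetRootLayer().subLayerPaths.append("scene.usd")

    # Author an override (not a new definition) for an existing prim.
    mesh = stage.OverridePrim("/World/Hero/Body")

    # Renderer-specific replacement material (a real one would carry
    # a shader network for the target renderer).
    chrome = UsdShade.Material.Define(stage, "/World/Looks/Chrome")

    # Rebind; the stronger layer wins over the placeholder binding.
    UsdShade.MaterialBindingAPI(mesh).Bind(chrome)

    stage.GetRootLayer().Save()

The render farm would then render overrides.usda instead of the
exported file itself.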

But this is only one aspect of USD, and the article talks mainly about
the rendering side. The other aspect is how to store geometry and
animation in the DCC itself, and that's where it will get difficult
for everybody. The worst case is a DCC with its own scene graph and
internal representation of geometry etc. (which surely every DCC in
use today has), where the same data then gets duplicated once or twice
more for real-time viewport rendering, the GPU, the graphics card,
etc. Sure, you have to copy things from main memory to e.g. the
graphics card, but you don't need several copies in main memory (as in
the past: one for Blender's game engine, another before rendering, a
triangulated one during rendering, etc.). Anyway, I think this is the
tricky part, where you need a strategy to proceed slowly, step by
step, replacing the internal representation with a USD counterpart.
And maybe it's risky to do that?

I know a lot about rendering, and that's where I could help. Blender's
internals have changed too much over the years for me to still be
familiar with them, but it would be good to discuss things with all
kinds of developers from a USD perspective.

What can be done?
How do others (e.g. Houdini developers) approach the problem?
How deep should the USD integration go?

I think all those questions are easier to answer from a user
perspective, as in: what do you wish for?

> Is this something you are interested in working on yourself?

I would love to work on the USD integration in Blender, but not on my
own: rather with a team of people who can provide inside knowledge
about Blender's internals and share information about USD (I guess
everybody needs to learn more about what it really is and which bits
and pieces to use).

> Or can you help in other ways to make this a reality?

And I could provide contacts in the movie/advertising industry
regarding what a potential USD integration should look like from a
user perspective, test things on real production data, and hopefully
bring some money back into a development fund.

Anyway, this email is getting far too long, but I hope you got some
useful information out of it.

