[Bf-viewport] Rendering to a cubemap

Clément FOUCAULT foucault.clem at gmail.com
Thu Jan 28 23:37:50 CET 2016


Hi Ton,

I don't have a design doc but I can share some thoughts about what I read
on this mailing list.

Please note that I am not very familiar with all the concepts below, so I
may well be mistaken. I am a 3D artist before being a coder.
Also please note that I'm primarily focusing on the viewport, not the game
engine.
Some of my points have already been made by other participants.

About deferred shading:
I think it's a bad idea overall.
Why? Because of the flexibility of Blender's node setups, a single shader
can combine all sorts of BSDFs and therefore carry a lot of parameters.
In fact the number of parameters is not fixed, so we cannot pass them to
the shading pass without a gigantic G-buffer.
So deferred shading would be nice ONLY if we had a static "ubershader"
like Unreal's, with a fixed set of inputs. But that would mean cutting
features.
Even Epic is reluctant to add anisotropic shading because of the cost of
one more parameter in the G-buffer.
I know we could support both and switch to whichever fits the task better,
but in the short run I think we should focus on only one of them.
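
To illustrate the problem, here is what a fixed G-buffer layout could look
like in GLSL (roughly in the spirit of Unreal's, but the layout and all
names are my own invention, just a sketch). Every material has to squeeze
into these slots, which is exactly what a free-form node tree cannot do:

// Hypothetical fixed G-buffer layout for a deferred geometry pass.
// In practice the inputs come from textures / node evaluation.
uniform vec3  baseColor;
uniform float metallic;
uniform float roughness;
uniform vec3  emission;
uniform float occlusion;
in vec3 normalWS;

layout(location = 0) out vec4 gbufA; // rgb: base color,   a: metallic
layout(location = 1) out vec4 gbufB; // rgb: world normal, a: roughness
layout(location = 2) out vec4 gbufC; // rgb: emission,     a: occlusion

void main()
{
    gbufA = vec4(baseColor, metallic);
    gbufB = vec4(normalWS * 0.5 + 0.5, roughness);
    gbufC = vec4(emission, occlusion);
    // One more parameter (e.g. anisotropy) means one more render
    // target or repacking everything: that is the G-buffer cost.
}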

About light probes:
They need to be HDR (kind of obvious) and dynamic (rendered from OpenGL);
otherwise it won't be worth it for the user compared to hitting the render
button and waiting at least 30 seconds for a preview.
Pre-convolving (baking) the BSDF into the cubemap can be slow even on the
GPU (about 500 ms for a 64px cubemap with a diffuse BRDF on an average
GPU), so I would prefer to have this as an option.
Also, we can't pre-convolve for every BSDF; that would mean a lot of
cubemaps.
As I said previously, I'm in favor of a global cubemap which would be the
result of the world node tree. This one would be applied by default to all
materials/objects.
Then, similarly to the Blender Internal envmap texture, we could specify a
viewpoint for a per-object cubemap.
To get accurate reflections, Unreal Engine corrects the reflection ray
(which samples the cubemap) to fit a sphere or box volume. This would be
useful for indoor scenes.
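
A rough GLSL sketch of that correction for a box volume (the AABB proxy
and all names here are my assumptions, not Epic's actual code):

// Box-parallax correction of a reflection ray for a local cubemap.
// fragPosWS is assumed to be inside the proxy box.
vec3 parallaxCorrect(vec3 reflDir, vec3 fragPosWS,
                     vec3 boxMin, vec3 boxMax, vec3 probePosWS)
{
    // Intersect the ray with the proxy box (slab method).
    vec3 firstPlane  = (boxMax - fragPosWS) / reflDir;
    vec3 secondPlane = (boxMin - fragPosWS) / reflDir;
    vec3 furthest    = max(firstPlane, secondPlane);
    float dist = min(min(furthest.x, furthest.y), furthest.z);

    // The hit point re-expressed relative to the probe center
    // gives the corrected cubemap lookup vector.
    vec3 hitPosWS = fragPosWS + reflDir * dist;
    return hitPosWS - probePosWS;
}
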
As of now, in my branch I use importance sampling (
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch20.html ) because it
does not need any precomputation.
The number of samples could be exposed to the user: raised for the most
accurate reflections at render time, lowered for performance.
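
A minimal sketch of what I mean, following the GPU Gems 3 chapter above
and Karis's SIGGRAPH 2013 course notes (needs GLSL 4.0 for
bitfieldReverse; sampleCount would be the user-exposed setting):

uniform samplerCube probe;

vec2 hammersley(uint i, uint n)
{
    // Van der Corput radical inverse for a low-discrepancy sequence.
    uint bits = bitfieldReverse(i);
    return vec2(float(i) / float(n), float(bits) * 2.3283064365386963e-10);
}

vec3 importanceSampleGGX(vec2 xi, float roughness, vec3 N)
{
    float a = roughness * roughness;
    float phi = 2.0 * 3.14159265 * xi.x;
    float cosTheta = sqrt((1.0 - xi.y) / (1.0 + (a * a - 1.0) * xi.y));
    float sinTheta = sqrt(1.0 - cosTheta * cosTheta);
    vec3 h = vec3(sinTheta * cos(phi), sinTheta * sin(phi), cosTheta);
    // Transform the half vector from tangent space around N.
    vec3 up = abs(N.z) < 0.999 ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
    vec3 tX = normalize(cross(up, N));
    vec3 tY = cross(N, tX);
    return tX * h.x + tY * h.y + N * h.z;
}

vec3 glossyReflection(vec3 N, vec3 V, float roughness, uint sampleCount)
{
    vec3 sum = vec3(0.0);
    float weight = 0.0;
    for (uint i = 0u; i < sampleCount; i++) {
        vec3 H = importanceSampleGGX(hammersley(i, sampleCount), roughness, N);
        vec3 L = reflect(-V, H);
        float NoL = max(dot(N, L), 0.0);
        if (NoL > 0.0) {
            sum += texture(probe, L).rgb * NoL;
            weight += NoL;
        }
    }
    return sum / max(weight, 1e-4);
}
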
For diffuse lighting I'm using spherical harmonics:
https://seblagarde.wordpress.com/2012/01/08/pi-or-not-to-pi-in-game-lighting-equation/
Computing the coefficients scales with the size of the map (every texel is
read) and depends on the type of mapping, but it's fairly quick: about one
second for a 2000x1000px equirectangular envmap.
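
Evaluating them in the shader is then very cheap. A sketch of the
9-coefficient (3-band) evaluation; shCoefs is assumed to hold the envmap
projection with the cosine-lobe convolution weights already folded in on
the CPU side:

uniform vec3 shCoefs[9];

vec3 diffuseSH9(vec3 n)
{
    // Standard real spherical harmonics basis, bands 0-2.
    return shCoefs[0] * 0.282095
         + shCoefs[1] * 0.488603 * n.y
         + shCoefs[2] * 0.488603 * n.z
         + shCoefs[3] * 0.488603 * n.x
         + shCoefs[4] * 1.092548 * n.x * n.y
         + shCoefs[5] * 1.092548 * n.y * n.z
         + shCoefs[6] * 0.315392 * (3.0 * n.z * n.z - 1.0)
         + shCoefs[7] * 1.092548 * n.x * n.z
         + shCoefs[8] * 0.546274 * (n.x * n.x - n.y * n.y);
}
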
Any kind of (lengthy) pre-computation should not freeze Blender's UI,
pretty much like the render engines already handle it.

About lamps:
I'm already using Epic's approach to area lights.
I use tube lights for area (rectangular) lights, as I was not able to find
a correct algorithm for true rectangles.
But energy conservation is not perfect with this solution, and it does not
quite match Cycles at grazing angles.
We could use this paper https://www.youtube.com/watch?v=O3TG1VXx1tg but I
highly doubt the computational cost is worth it.
Another technique is Michal Drobot's in GPU Pro 5, but I did not have much
success with it.
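
For reference, the representative-point trick I mean for tube lights, per
Karis's SIGGRAPH 2013 notes (my GLSL transcription; the approximate energy
conservation is where the mismatch with Cycles comes from):

// P: shaded point, R: reflection ray, L0/L1: tube segment ends,
// radius: tube radius. Returns the representative light vector.
vec3 tubeLightPoint(vec3 P, vec3 R, vec3 L0, vec3 L1, float radius)
{
    // Closest point on the segment to the reflection ray.
    vec3 l0 = L0 - P;
    vec3 l1 = L1 - P;
    vec3 Ld = l1 - l0;
    float RoLd = dot(R, Ld);
    float t = (dot(R, l0) * RoLd - dot(l0, Ld))
            / (dot(Ld, Ld) - RoLd * RoLd);
    vec3 closest = l0 + Ld * clamp(t, 0.0, 1.0);

    // Then treat that point as a sphere light of the tube's radius:
    // pull it onto the sphere, toward the reflection ray.
    vec3 centerToRay = dot(closest, R) * R - closest;
    closest += centerToRay * clamp(radius / length(centerToRay), 0.0, 1.0);
    return closest; // normalize() for direction, length() for falloff
}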

About aligning with UE4:
Their implementation of PBR is already achievable inside Cycles; they just
use baking to do less computation at runtime.
My branch already uses a lot of their methods.
Supporting Cycles shaders implies supporting UE4 shaders.
If you want to talk about the rendering pipeline, that's another topic.

About the viewport framebuffer:
We need a float framebuffer for a lot of effects: bloom, tone mapping and
color mapping, quality DoF.
We could also render quick passes with float data (like Z depth or world
normals) instead of doing a slow Cycles pass.
I suggested the packed R11G11B10 float format, but that means having a
separate alpha buffer.
To get "correct shading", with only the indirect light being multiplied
by ambient occlusion, we need to separate direct lighting from indirect
lighting.
That would mean yet another float buffer, unfortunately.
Maybe there is a way to encode everything inside one big 64-bit buffer
with alpha, but it can get a bit complicated.
I've seen people do this kind of thing, but I'm not completely sure it's
worth it. For example:

RGBA16    R16   G16   B16   A16
Direct    8bits 8bits 8bits 4bits -> R9G10B9
Indirect  8bits 8bits 8bits 4bits -> R9G10B9
Alpha     -     -     -     8bits

I don't know if those extra bits would be enough.
Also, let's not forget the depth buffer.
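
A sketch of the packing idea from the table, assuming an RGBA16UI target
(I left out the extra 4+4 bits per layer for simplicity; note the clamp to
[0,1], which is exactly why I doubt 8-9 bits per channel is enough for
HDR):

layout(location = 0) out uvec4 packedColor;

uvec4 packLayers(vec3 direct, vec3 indirect, float alpha)
{
    // 8 bits of direct + 8 bits of indirect per 16-bit channel,
    // alpha in the low byte of A.
    uvec3 d = uvec3(clamp(direct,   0.0, 1.0) * 255.0 + 0.5);
    uvec3 i = uvec3(clamp(indirect, 0.0, 1.0) * 255.0 + 0.5);
    uint  a = uint(clamp(alpha, 0.0, 1.0) * 255.0 + 0.5);
    return uvec4((d.r << 8u) | i.r,
                 (d.g << 8u) | i.g,
                 (d.b << 8u) | i.b,
                 a);
}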

I'd really like to also see screen-space local reflections (SSLR) in the
viewport, but this requires a way to tag the relevant areas of the
framebuffer to blend with a reflection buffer.
And it should overwrite only the indirect specular (envmap) part, which
basically means splitting up the color buffer again; it can get pretty
involved.
So I'm not sold on this. But if someone has an idea about it, that would
be cool.
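
Just to make the problem concrete, one possible resolve pass (everything
here is hypothetical: it assumes the indirect specular was kept in its own
buffer and that the SSR pass wrote a hit-confidence mask):

uniform sampler2D indirectSpec; // probe/envmap specular only
uniform sampler2D ssrColor;     // screen-space reflection result
uniform sampler2D ssrMask;      // hit confidence, 0 = no valid hit

in vec2 uv;
out vec4 fragColor;

void main()
{
    vec3 probeSpec = texture(indirectSpec, uv).rgb;
    vec3 ssr = texture(ssrColor, uv).rgb;
    float conf = texture(ssrMask, uv).r;
    // Replace the envmap reflection only where SSR found valid data.
    fragColor = vec4(mix(probeSpec, ssr, conf), 1.0);
}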

I can try to populate some design docs, but as I mentioned earlier I'm not
familiar with all the OpenGL drawing methods and how Blender handles them.

I'll be around on IRC when I have more time. I'm starting a job on Monday,
so it won't be easy for me.

I hope we can decide where to go and help each other.

Thank you for your interest.