<div dir="ltr"><div>Just my two cents: from a user POV, having a monolithic ubershader for the viewport seems fine; it probably covers most uses, and you make it sound like it's much easier to build. My own use case is more about getting a nice preview of actual Cycles shaders (i.e. not necessarily super-accurate) rather than a full-fledged CryEngine, but maybe not everyone is in the same boat.<br><br>By the way, thanks a lot Clément for contributing.<br><br></div>Hadrien<br></div><div class="gmail_extra"><br><div class="gmail_quote">On 28 January 2016 at 23:37, Clément FOUCAULT <span dir="ltr"><<a href="mailto:foucault.clem@gmail.com" target="_blank">foucault.clem@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div>Hi Ton,<br><br>I don't have a design doc, but I can share some thoughts about what I have read on this mailing list.<br><br>Please note that I am not very familiar with all the concepts below, so I may well be mistaken; I am a 3D artist before being a coder.<br>Also note that I'm primarily focusing on the viewport, not the game engine.<br>Some of my points have already been made by other participants.<br><br>About deferred shading:<br>I think it's a bad idea overall.<br>Why? Because of the multitude and flexibility of Blender's node setups, a single shader can contain all sorts of BSDFs and therefore a lot of parameters.<br>In fact the number of parameters is not fixed, so we cannot pass them to the shading step without using a gigantic G-buffer.<br>So deferred shading would be nice ONLY if we had a static "ubershader" with consistent inputs, like Unreal. But that would mean cutting features.<br>Even Epic is reluctant to add anisotropic shading because of the cost of another parameter in the G-buffer.<br>I know we could support both and switch to whichever fits the task better.
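To make the G-buffer concern concrete, here is a toy sketch (Python; the layouts and byte counts are my own illustrative assumptions, not Blender's or Epic's actual formats) of how per-pixel storage grows with every extra BSDF parameter a node tree introduces:

```python
# Hypothetical sketch: per-pixel G-buffer cost of a fixed "ubershader"
# parameter set vs. an open-ended node tree. All numbers are illustrative.

FIXED_LAYOUT = {          # bytes per pixel (assumed encodings)
    "base_color": 3,      # RGB8
    "normal":     3,      # packed normal
    "metallic":   1,
    "roughness":  1,
    "specular":   1,
}

def gbuffer_bytes(layout, width, height):
    """Total G-buffer memory for one frame, in bytes."""
    per_pixel = sum(layout.values())
    return per_pixel * width * height

# A fixed layout stays bounded...
fixed = gbuffer_bytes(FIXED_LAYOUT, 1920, 1080)

# ...but every extra closure parameter a node tree can produce
# (anisotropy, sheen, clearcoat, per-closure weights...) grows it:
extended = dict(FIXED_LAYOUT, anisotropy=2, sheen=1, clearcoat=1)
grown = gbuffer_bytes(extended, 1920, 1080)

print(fixed, grown)  # the gap widens with every node-tree feature
```

Since arbitrary node trees have no upper bound on parameter count, there is no layout you can fix in advance, which is the core argument against deferred shading here.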
But in the short run I think we should focus on only one of these.<br><br>About light probes:<br>They need to be HDR (kind of obvious) and dynamic (rendered via OpenGL); otherwise it won't be worth it to the user compared to hitting the render button and waiting at least 30 seconds for a preview.<br>Pre-convolving (baking) the BSDF into the cubemap can be slow even on the GPU (500 ms for a 64 px cubemap with a diffuse BRDF on an average GPU), so I would prefer to have it as an option.<br>Also, we can't convolve for each BSDF; that would mean a lot of cubemaps.<br>As I said previously, I'm in favor of a global cubemap which would be the result of the world node tree. It is applied by default to all materials / objects.<br>Then, similarly to the Blender Internal envmap texture, we could specify a viewpoint for a per-object cubemap.<br>To get accurate reflections, Unreal Engine corrects the ray that samples the cubemap to fit a sphere or box volume. This would be useful for indoor scenes.<br>As of now, in my branch I use importance sampling ( <a href="http://http.developer.nvidia.com/GPUGems3/gpugems3_ch20.html" target="_blank">http://http.developer.nvidia.com/GPUGems3/gpugems3_ch20.html</a> ) because it does not need to precompute anything.<br>The number of samples could be exposed to the user: raised for the most accurate reflections at render time, or lowered for performance.<br>For diffuse lighting I'm using spherical harmonics. <a href="https://seblagarde.wordpress.com/2012/01/08/pi-or-not-to-pi-in-game-lighting-equation/" target="_blank">https://seblagarde.wordpress.com/2012/01/08/pi-or-not-to-pi-in-game-lighting-equation/</a><br>Computing them depends on the size of the map (each texel is read once) and the type of mapping, but it's fairly quick: only about 1 second for a 2000x1000 px equirectangular envmap.<br>Any kind of (lengthy) pre-computation should not freeze Blender's UI.
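For reference, the per-texel SH projection mentioned above can be sketched like this (a minimal Python illustration with hypothetical names; only the four L0/L1 coefficients are shown, and a real implementation would work per color channel on the GPU):

```python
import math

# Minimal sketch: project an equirectangular environment map onto the
# first spherical-harmonics bands. Each texel is read once, which is why
# the cost scales with map size. Names and structure are illustrative.

def sh_project(env, width, height):
    """env(x, y) -> scalar radiance; returns [L00, L1-1, L10, L11]."""
    coeffs = [0.0, 0.0, 0.0, 0.0]
    for y in range(height):
        theta = (y + 0.5) / height * math.pi               # polar angle
        # solid angle of one texel in this row (equirectangular weighting)
        d_omega = (2 * math.pi / width) * (math.pi / height) * math.sin(theta)
        for x in range(width):
            phi = (x + 0.5) / width * 2 * math.pi
            dx = math.sin(theta) * math.cos(phi)
            dy = math.sin(theta) * math.sin(phi)
            dz = math.cos(theta)
            v = env(x, y)
            coeffs[0] += v * 0.282095 * d_omega            # Y0,0
            coeffs[1] += v * 0.488603 * dy * d_omega       # Y1,-1
            coeffs[2] += v * 0.488603 * dz * d_omega       # Y1,0
            coeffs[3] += v * 0.488603 * dx * d_omega       # Y1,1
    return coeffs

# Sanity check: a constant environment has only an ambient (L00) term.
c = sh_project(lambda x, y: 1.0, 64, 32)
```

Because each texel contributes one weighted sample, doubling the map resolution quadruples the cost, hence the point about keeping this off the UI thread.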
Pre-computation should run in the background instead, pretty much like the renderers do.<br><br>About lamps:<br>I'm already using Epic's approach to area lights.<br>I'm using tube lights for rectangular area lights, as I was not able to find a correct algorithm for the latter.<br>But energy conservation is not perfect with this solution, and it does not quite match Cycles at grazing angles.<br>We could use this technique <a href="https://www.youtube.com/watch?v=O3TG1VXx1tg" target="_blank">https://www.youtube.com/watch?v=O3TG1VXx1tg</a> but I highly doubt the computational cost is worth it.<br>Another technique is by Michael Drobot in GPU Pro 5, but I did not have much success with it.<br><br>About aligning with UE4:<br>Their implementation of PBR is already achievable inside Cycles.<br>They just use baking to do less computation at runtime.<br>My branch already borrows a lot of their methods.<br>Supporting Cycles shaders implies supporting UE4 shaders.<br>If you want to talk about the rendering pipeline, that's another topic.<br><br>About the viewport framebuffer:<br>We need a float framebuffer for a lot of effects.
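The tube-light approach mentioned above relies on picking a representative point on the light and then shading as if from that point; a hedged sketch of that building block (Python, all names are mine, using the shading point rather than the reflection ray for simplicity):

```python
# Illustrative sketch of the "representative point" step for tube lights:
# find the point on the light's segment closest to the query point, then
# treat the light as a point light located there. 3D vectors as tuples.

def closest_point_on_segment(p, a, b):
    """Closest point to p on the segment [a, b]."""
    ab = tuple(b[i] - a[i] for i in range(3))
    ap = tuple(p[i] - a[i] for i in range(3))
    ab_len2 = sum(c * c for c in ab)
    t = sum(ap[i] * ab[i] for i in range(3)) / ab_len2
    t = max(0.0, min(1.0, t))          # clamp so we stay on the light
    return tuple(a[i] + t * ab[i] for i in range(3))
```

This is exactly the kind of cheap approximation that breaks energy conservation at grazing angles: the representative point is picked geometrically, not by integrating the BRDF over the light.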
That includes bloom, tone/color mapping, and quality depth of field.<br>You could also render quick passes with float information (like Z depth or world normals, for instance) instead of doing a slow Cycles pass.<br>I suggested an R11F_G11F_B10F format, but that means having a separate alpha buffer.<br>To get "correct shading", with only the indirect light being multiplied by ambient occlusion, we need to separate the direct lighting from the indirect lighting.<br>Unfortunately, that would mean another float buffer.<br>Maybe there is a way to encode everything inside one big 64-bit buffer, alpha included, but it can get a bit complicated.<br>I've seen people do this kind of thing but I'm not completely sure it's worth it.<span style="font-family:monospace,monospace"><br>RGBA16&nbsp;&nbsp;&nbsp;&nbsp;R16&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;G16&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;B16&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;A16<br>Direct&nbsp;&nbsp;&nbsp;&nbsp;8&nbsp;bits&nbsp;&nbsp;8&nbsp;bits&nbsp;&nbsp;8&nbsp;bits&nbsp;&nbsp;4&nbsp;bits&nbsp;&nbsp;-> R9G10B9<br>Indirect&nbsp;&nbsp;8&nbsp;bits&nbsp;&nbsp;8&nbsp;bits&nbsp;&nbsp;8&nbsp;bits&nbsp;&nbsp;4&nbsp;bits&nbsp;&nbsp;-> R9G10B9<br>Alpha&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;8&nbsp;bits<br></span><br>I don't know if those extra bits would be enough.<br>Also, don't forget the depth buffer.<br><br>I'd really like to also see screen-space local reflections in the viewport, but that requires a way to tag the relevant areas of the framebuffer to blend with a reflection buffer.<br>It should also overwrite only the indirect specular (envmap), which basically means splitting up the color buffer yet again, and it can get pretty involved.<br>So I'm not positive about this one. But if someone has an idea about it, that would be cool.<br><br>I can populate some design docs (or at least try to), but as I mentioned earlier I'm not familiar with all the OpenGL drawing methods and how Blender handles them.<br><br></div>I'll be around on IRC when I have more time. I'm starting a job on Monday, so it won't be easy for me.<br></div><br></div>I hope we can decide where to go and help each other.<br><br></div>Thank you for your interest.<br></div>
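As a concrete sketch of the 64-bit packing table quoted above (my own bit layout and helper names; normalized [0, 1] values for brevity — real HDR direct light would need a shared-exponent or small-float encoding on top of this):

```python
# Sketch of the proposed 64-bit pixel: direct light in 28 bits (R9 G10 B9),
# indirect light in 28 bits (R9 G10 B9), alpha in the remaining 8 bits.
# All functions are hypothetical illustrations, not viewport code.

def quantize(v, bits):
    """Map v in [0, 1] to an unsigned integer of the given bit width."""
    return min(int(v * ((1 << bits) - 1) + 0.5), (1 << bits) - 1)

def pack_r9g10b9(rgb):
    r, g, b = rgb
    return (quantize(r, 9) << 19) | (quantize(g, 10) << 9) | quantize(b, 9)

def pack_pixel(direct, indirect, alpha):
    """direct/indirect: (r, g, b) in [0, 1]; alpha in [0, 1] -> 64-bit int."""
    return (pack_r9g10b9(direct) << 36) | (pack_r9g10b9(indirect) << 8) | quantize(alpha, 8)

def unpack_pixel(word):
    def chan(shift, bits):
        return ((word >> shift) & ((1 << bits) - 1)) / ((1 << bits) - 1)
    direct = (chan(55, 9), chan(45, 10), chan(36, 9))
    indirect = (chan(27, 9), chan(17, 10), chan(8, 9))
    return direct, indirect, chan(0, 8)
```

A round trip loses at most one quantization step per channel, which hints at the banding risk the extra G bit is meant to soften.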
<br>_______________________________________________<br>
Bf-viewport mailing list<br>
<a href="mailto:Bf-viewport@blender.org">Bf-viewport@blender.org</a><br>
<a href="http://lists.blender.org/mailman/listinfo/bf-viewport" rel="noreferrer" target="_blank">http://lists.blender.org/mailman/listinfo/bf-viewport</a><br>
<br></blockquote></div><br></div>