[Bf-cycles] Render to memory buffer
Brecht Van Lommel
brechtvanlommel at pandora.be
Sun Mar 18 13:08:29 CET 2018
It seems unnecessarily complicated. If you call session->draw() that will
mostly just pass the session->display buffer to draw_pixels on the device.
So why not just get the pixels from that display buffer directly? Something
like:
uint width = session->display->draw_width;
uint height = session->display->draw_height;
uchar4 *pixels = session->display->rgba_byte.data();
... do something ...
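To make the suggestion above concrete, here is a minimal, self-contained sketch of that pattern: copy the display buffer's byte pixels into a caller-owned array (e.g. to upload as a dynamic texture). The `MockDisplay` type here is a stand-in for Cycles' real `DisplayBuffer`; the field names mirror the snippet above but the struct itself is an assumption for illustration.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical stand-in for Cycles' DisplayBuffer: in the real API the
// pixels live in session->display->rgba_byte and the size in draw_width
// and draw_height. This struct is an assumption for illustration only.
struct uchar4 { uint8_t x, y, z, w; };

struct MockDisplay {
    int draw_width = 0;
    int draw_height = 0;
    std::vector<uchar4> rgba_byte;  // byte pixels, row-major
};

// Copy the display buffer into a caller-owned array, e.g. to upload it
// as a dynamic texture instead of letting the device draw via OpenGL.
std::vector<uchar4> copy_display_pixels(const MockDisplay &display)
{
    std::vector<uchar4> out((size_t)display.draw_width * display.draw_height);
    std::memcpy(out.data(), display.rgba_byte.data(),
                out.size() * sizeof(uchar4));
    return out;
}
```

Note that with the real session you would want to do this copy while holding the session's display lock, since the render thread updates the buffer concurrently.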
On Sun, Mar 18, 2018 at 11:43 AM, Guillaume Chéreau <
guillaume.chereau at gmail.com> wrote:
> OK, after some more tests it seems to me that the easiest would be to
> create my own CPUDevice subclass and overload the draw_pixels method
> to render into my own texture, then somehow set this as the session
> device. Does that make sense, or am I going to run into some subtle
> bugs with this approach?
> - Gui
> On Sun, Mar 18, 2018 at 2:31 AM, Lukas Stockner
> <lukas.stockner at freenet.de> wrote:
> > Answers inline.
> > On 17.03.2018 18:10, Guillaume Chéreau wrote:
> >> On Sat, Mar 17, 2018 at 11:01 PM, Lukas Stockner
> >> <lukas.stockner at freenet.de> wrote:
> >>> - Are you fine with getting a raw float buffer?
> >> Yes that's fine, I can easily do the conversion to RGB.
> > Okay, that makes things easier because you can just process the
> > RenderBuffers directly and don't have to mess around with the
> > kernels.
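The float-to-RGB conversion mentioned above is straightforward: the raw RenderBuffers hold values accumulated over all samples, so you divide by the sample count, clamp, and scale to bytes. A minimal sketch (real Cycles additionally applies exposure and sRGB transforms, which are omitted here):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Convert accumulated float pixel data to display bytes: divide by the
// sample count, clamp to [0, 1], and scale to 0..255. Exposure and
// sRGB handling are deliberately left out of this sketch.
std::vector<uint8_t> floats_to_rgba8(const std::vector<float> &rgba,
                                     int num_samples)
{
    std::vector<uint8_t> out(rgba.size());
    const float scale = 1.0f / (float)num_samples;
    for (size_t i = 0; i < rgba.size(); i++) {
        float v = std::min(std::max(rgba[i] * scale, 0.0f), 1.0f);
        out[i] = (uint8_t)(v * 255.0f + 0.5f);
    }
    return out;
}
```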
> >>> - Are you using interactive rendering?
> >> Yes, I am looking for the same kind of rendering as in Blender, where
> >> we start with a low-res version and progressively improve it; if the
> >> user changes the scene or the camera position, the rendering is reset.
> >> It works more or less out of the box with the session->draw method,
> >> but it uses OpenGL to do the rendering directly on screen. What -I
> >> think- I want is to get the raw data and put it into a dynamic
> >> texture that I can integrate into goxel's rendering.
> >>> - Do you want to support multiple devices?
> >> Not sure what that means. I only want to render on screen, and also
> >> into a file for export.
> > I was referring to the MultiDevice functionality, which is currently used
> > for rendering on multiple GPUs at once (and might in the future be used
> > for network rendering). The problem with that is that each device has its
> > own memory space, so you can't just write all tiles into a single buffer -
> > you have to allocate per-tile buffers, copy them back to the host and then
> > combine them there (which is what Blender does when doing a render).
> > Technically it is possible to allocate a single large buffer (on every
> > device), have them write into it and then copy the right parts from the
> > right devices to assemble it directly without moving stuff around, but
> > since you can only copy back linear memory regions, you'd have to modify
> > the tiling to split the image into horizontal slices (one slice per
> > device). In fact, that is what Cycles currently does for viewport
> > previews, since OpenGL drawing also works better that way.
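The horizontal-slice scheme described above can be sketched with a small helper: each device gets a contiguous run of rows, so its part of the frame is one linear memory region that can be copied back on its own. The row-distribution policy (spreading the remainder over the first devices) is an assumption, not necessarily what Cycles does internally.

```cpp
#include <utility>
#include <vector>

// Split `height` rows into one contiguous slice per device, returning
// (start_row, num_rows) pairs. A contiguous row range is a linear
// memory region, so each device's slice can be copied back directly.
std::vector<std::pair<int, int>> split_rows(int height, int num_devices)
{
    std::vector<std::pair<int, int>> slices;
    int start = 0;
    for (int i = 0; i < num_devices; i++) {
        // Give the remainder rows to the first (height % num_devices) devices.
        int rows = height / num_devices + (i < height % num_devices ? 1 : 0);
        slices.push_back({start, rows});
        start += rows;
    }
    return slices;
}
```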
> >>> The easiest approaches I can think of right now are either using a
> >>> RenderBuffer for all tiles (already sort of implemented in session.cpp)
> >>> or
> >>> just copying the pixels from their buffers into your shared buffer
> >>> similar
> >>> to how Cycles for Blender does it in
> >>> BlenderSession::do_write_update_render_result.
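The per-tile copy approach mentioned above can be sketched like this: copy each finished tile's pixels into the right rectangle of a shared full-frame buffer. The RGBA-float layout here is an assumption for illustration; the real `do_write_update_render_result` goes through Blender's render-result API rather than a raw array.

```cpp
#include <cstring>
#include <vector>

// Copy one tile's pixels into the matching rectangle of a full-frame
// buffer. Both buffers are row-major RGBA float here (an assumption);
// each tile row is a linear region, so a memcpy per row suffices.
void write_tile(std::vector<float> &frame, int frame_w,
                const std::vector<float> &tile,
                int tile_x, int tile_y, int tile_w, int tile_h)
{
    const int ch = 4;  // RGBA channels per pixel
    for (int row = 0; row < tile_h; row++) {
        const float *src = tile.data() + (size_t)row * tile_w * ch;
        float *dst = frame.data() +
                     ((size_t)(tile_y + row) * frame_w + tile_x) * ch;
        std::memcpy(dst, src, (size_t)tile_w * ch * sizeof(float));
    }
}
```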
> >> OK thanks for the tips, I will have a look at that.
> >> - Gui
> >> _______________________________________________
> >> Bf-cycles mailing list
> >> Bf-cycles at blender.org
> >> https://lists.blender.org/mailman/listinfo/bf-cycles
> guillaume.chereau at gmail.com
> +886 970422910