[Bf-committers] CUDA backend implementation for GSoC?

Martin Poirier theeth at yahoo.com
Tue Dec 16 17:50:42 CET 2008




--- On Tue, 12/16/08, Timothy Baldridge <tbaldridge at gmail.com> wrote:

> Perhaps that's the best starting point. Can we get some solid
> benchmarks that show overhead (latency and bandwidth) for transferring
> data to and from the CPU (and setting up a simple program on the GPU)
> vs doing it all in memory? Don't forget, in Blender you will have to
> grab data from and insert data back into the Blender structures,
> unless you plan on handing data to CUDA/OpenCL in the format Blender
> uses it in.

I've worked in the GPGPU field before, doing real time processing (still under NDA, so I can't say much), and I can tell you one thing: the latency issues are well worth the vast advantage in throughput.
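To give an idea of what those transfer numbers look like, here is a minimal sketch (not from the original thread) of a host<->device round-trip benchmark using the CUDA runtime API; the 8 MB buffer size, pinned host memory and 100 iterations are arbitrary assumptions:

/* Rough sketch of a host<->device transfer benchmark, timed with CUDA
 * events. Buffer size and iteration count are arbitrary; compile with nvcc. */
#include <cstdio>
#include <cuda_runtime.h>

int main(void)
{
    const size_t bytes = 8u << 20;   /* 8 MB test buffer */
    const int iterations = 100;

    float *host = NULL, *device = NULL;
    cudaMallocHost((void **)&host, bytes);  /* pinned host memory for best rates */
    cudaMalloc((void **)&device, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);
    for (int i = 0; i < iterations; i++) {
        cudaMemcpy(device, host, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(host, device, bytes, cudaMemcpyDeviceToHost);
    }
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);

    /* Each iteration moves the buffer twice (up and down). */
    double gb = (double)bytes * 2.0 * iterations / (1024.0 * 1024.0 * 1024.0);
    printf("round trip: %.3f ms/iteration, %.2f GB/s\n",
           ms / iterations, gb / (ms / 1000.0));

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(device);
    cudaFreeHost(host);
    return 0;
}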

IMHO, the thing that will yield the greatest speed advantage and be the easiest to do would be moving the sequencer and compositor to the GPU; other parts of Blender are much less suited for conversion (not to say impossible, of course).
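As a purely hypothetical illustration of why the compositor maps so well onto the GPU (one independent thread per pixel), a trivial mix operation over two RGBA float buffers could look roughly like this; the kernel name, the flat float4 layout and the frame size are assumptions, not Blender's actual compositor structures:

/* Hypothetical sketch only: a trivial "mix" node over two RGBA float buffers,
 * one thread per pixel, no dependencies between pixels. */
#include <cuda_runtime.h>

__global__ void mix_kernel(const float4 *a, const float4 *b,
                           float4 *out, float fac, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count) {
        out[i].x = a[i].x * (1.0f - fac) + b[i].x * fac;
        out[i].y = a[i].y * (1.0f - fac) + b[i].y * fac;
        out[i].z = a[i].z * (1.0f - fac) + b[i].z * fac;
        out[i].w = a[i].w * (1.0f - fac) + b[i].w * fac;
    }
}

/* Host side: one thread per pixel, e.g. for a 1920x1080 frame already
 * resident in device memory. */
void run_mix(const float4 *d_a, const float4 *d_b, float4 *d_out, float fac)
{
    const int count = 1920 * 1080;
    const int threads = 256;
    const int blocks = (count + threads - 1) / threads;
    mix_kernel<<<blocks, threads>>>(d_a, d_b, d_out, fac, count);
    cudaDeviceSynchronize();
}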

As far as CUDA vs OpenCL vs whatever goes, I don't really have an opinion. CUDA was barely getting started when I did this work, but from what I remember, memory transfer benchmarks were much better with CUDA buffers than with straight DirectX/OpenGL texture buffers.

Martin


      

