Hi Brecht,<div><br></div><div>You explained CU_DEVICE_ATTRIBUTE_KERNEL_EXEC_TIMEOUT to me over IRC, but I wanted to double-check that I got it right.</div><div>I have a system with three GPUs, all of them set up as "Display GPU".</div>
<div><br></div><div>In this case, should CUDA_MULTI* be the automatic option when CUDA is set as the device?</div><div>Or is having CUDA_0 as the default (as it is now) the expected behaviour?</div><div><br></div>
<div>
The available computing devices:</div><div>('Intel Xeon CPU', 'Tesla M2070', 'Tesla M2070', 'Tesla M2070', 'Tesla M2070 (3x)')</div><div>('CUDA_0', 'CUDA_1', 'CUDA_2', 'CUDA_MULTI_2')</div>
<div><br></div><div>And with this patch [1] I get the following info:</div><div>[1] <a href="http://www.pasteall.org/38837">http://www.pasteall.org/38837</a></div><div><br></div><div><div>*** Begin CUDA Debug</div><div>CUDA device 0 is a Display GPU</div>
<div>CUDA device 0 has "14"</div><div>CUDA device 1 is a Display GPU</div><div>CUDA device 1 has "14"</div><div>CUDA device 2 is a Display GPU</div><div>CUDA device 2 has "14"</div><div>*** End CUDA Debug</div>
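<div><br></div><div>For context, here is a minimal sketch of how a per-device query like this could look against the CUDA driver API. This is my own illustration, not the actual patch; the attribute names are the standard driver-API ones, and the multiprocessor-count query is an assumption on my part about where the "14" might come from:</div>

```c
/* Sketch: query the kernel execution timeout flag and the
 * multiprocessor count for every CUDA device.
 * Compile with: gcc query.c -lcuda  (requires the CUDA toolkit) */
#include <stdio.h>
#include <cuda.h>

int main(void)
{
    int count, timeout, sms;
    CUdevice dev;

    cuInit(0);
    cuDeviceGetCount(&count);

    for (int i = 0; i < count; i++) {
        cuDeviceGet(&dev, i);
        /* 1 if the device has a display-driven run-time limit on kernels
         * (i.e. it is a "Display GPU"), 0 otherwise. */
        cuDeviceGetAttribute(&timeout,
                             CU_DEVICE_ATTRIBUTE_KERNEL_EXEC_TIMEOUT, dev);
        cuDeviceGetAttribute(&sms,
                             CU_DEVICE_ATTRIBUTE_MULTIPROCESSOR_COUNT, dev);
        printf("CUDA device %d: exec timeout = %d, multiprocessors = %d\n",
               i, timeout, sms);
    }
    return 0;
}
```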
<div><br></div><div>Is this the expected behaviour? </div><div><br></div><div>Thanks,</div><div>Dalai</div><div><br></div><div><a href="http://blendernetwork.org/member/dalai-felinto" target="_blank">blendernetwork.org/member/dalai-felinto</a><br>
<a href="http://www.dalaifelinto.com" target="_blank">www.dalaifelinto.com</a></div>
</div>