[Bf-blender-cvs] [1e6038a426b] master: Cycles: Implement automatic global size for CUDA split kernel

Mai Lavelle noreply at git.blender.org
Tue Apr 11 09:35:22 CEST 2017


Commit: 1e6038a426b992bf991040eac18ae7d83ae6a8bb
Author: Mai Lavelle
Date:   Tue Apr 11 02:36:08 2017 -0400
Branches: master
https://developer.blender.org/rB1e6038a426b992bf991040eac18ae7d83ae6a8bb

Cycles: Implement automatic global size for CUDA split kernel

Not sure this is the best way to do things for CUDA, but it's much better than
being unimplemented.

===================================================================

M	intern/cycles/device/device_cuda.cpp

===================================================================

diff --git a/intern/cycles/device/device_cuda.cpp b/intern/cycles/device/device_cuda.cpp
index 4c1a49878f5..ef283c9d455 100644
--- a/intern/cycles/device/device_cuda.cpp
+++ b/intern/cycles/device/device_cuda.cpp
@@ -1613,10 +1613,23 @@ int2 CUDASplitKernel::split_kernel_local_size()
 	return make_int2(32, 1);
 }
 
-int2 CUDASplitKernel::split_kernel_global_size(device_memory& /*kg*/, device_memory& /*data*/, DeviceTask * /*task*/)
+int2 CUDASplitKernel::split_kernel_global_size(device_memory& kg, device_memory& data, DeviceTask * /*task*/)
 {
-	/* TODO(mai): implement something here to detect ideal work size */
-	return make_int2(256, 256);
+	size_t free;
+	size_t total;
+
+	device->cuda_push_context();
+	cuda_assert(cuMemGetInfo(&free, &total));
+	device->cuda_pop_context();
+
+	VLOG(1) << "Maximum device allocation size: "
+	        << string_human_readable_number(free) << " bytes. ("
+	        << string_human_readable_size(free) << ").";
+
+	size_t num_elements = max_elements_for_max_buffer_size(kg, data, free / 2);
+	int2 global_size = make_int2(round_down((int)sqrt(num_elements), 32), (int)sqrt(num_elements));
+	VLOG(1) << "Global size: " << global_size << ".";
+	return global_size;
 }
 
 bool device_cuda_init(void)

More information about the Bf-blender-cvs mailing list