[Bf-docboard-svn] bf-manual: [7177] trunk/blender_docs/manual/render/cycles/gpu_rendering.rst: GPU Docs: Use dropdown menus for FAQ and Errors

Aaron Carlisle noreply at blender.org
Thu Oct 1 21:02:02 CEST 2020


Revision: 7177
          https://developer.blender.org/rBM7177
Author:   Blendify
Date:     2020-10-01 21:02:02 +0200 (Thu, 01 Oct 2020)
Log Message:
-----------
GPU Docs: Use dropdown menus for FAQ and Errors

Modified Paths:
--------------
    trunk/blender_docs/manual/render/cycles/gpu_rendering.rst

Modified: trunk/blender_docs/manual/render/cycles/gpu_rendering.rst
===================================================================
--- trunk/blender_docs/manual/render/cycles/gpu_rendering.rst	2020-10-01 19:01:18 UTC (rev 7176)
+++ trunk/blender_docs/manual/render/cycles/gpu_rendering.rst	2020-10-01 19:02:02 UTC (rev 7177)
@@ -90,53 +90,45 @@
 Frequently Asked Questions
 ==========================
 
-Why is Blender unresponsive during rendering?
----------------------------------------------
+.. dropdown:: Why is Blender unresponsive during rendering?
 
-While a graphics card is rendering, it cannot redraw the user interface, which makes Blender unresponsive.
-We attempt to avoid this problem by giving back control over to the GPU as often as possible,
-but a completely smooth interaction cannot be guaranteed, especially on heavy scenes.
-This is a limitation of graphics cards for which no true solution exists,
-though we might be able to improve this somewhat in the future.
+   While a graphics card is rendering, it cannot redraw the user interface, which makes Blender unresponsive.
+   We attempt to avoid this problem by giving back control over the GPU as often as possible,
+   but a completely smooth interaction cannot be guaranteed, especially on heavy scenes.
+   This is a limitation of graphics cards for which no true solution exists,
+   though we might be able to improve this somewhat in the future.
 
-If possible, it is best to install more than one GPU,
-using one for display and the other(s) for rendering.
+   If possible, it is best to install more than one GPU,
+   using one for display and the other(s) for rendering.
 
+.. dropdown:: Why does a scene that renders on the CPU not render on the GPU?
 
-Why does a scene that renders on the CPU not render on the GPU?
----------------------------------------------------------------
+   There may be multiple causes,
+   but the most common one is that there is not enough memory on your graphics card.
+   Typically, the GPU can only use the amount of memory that is on the GPU
+   (see `below <Would multiple GPUs increase available memory?>`_ for more information).
+   This is usually much smaller than the amount of system memory the CPU can access.
+   With CUDA and OptiX devices, if the GPU memory is full, Blender will automatically try to use system memory.
+   This has a performance impact, but will usually still result in a faster render than using CPU rendering.
+   This feature does not work for OpenCL rendering.
 
-There maybe be multiple causes,
-but the most common one is that there is not enough memory on your graphics card.
-Typically, the GPU can only use the amount of memory that is on the GPU
-(see `below <Would multiple GPUs increase available memory?>`_ for more information).
-This is usually much smaller than the amount of system memory the CPU can access.
-With CUDA and OptiX devices, if the GPU memory is full Blender will automatically try to use system memory.
-This has a performance impact, but will usually still result in a faster render than using CPU rendering.
-This feature does not work for OpenCL rendering.
+.. dropdown:: Can multiple GPUs be used for rendering?
 
+   Yes, go to :menuselection:`Preferences --> System --> Compute Device Panel`,
+   and configure it as you desire.
 
-Can multiple GPUs be used for rendering?
-----------------------------------------
+.. dropdown:: Would multiple GPUs increase available memory?
 
-Yes, go to :menuselection:`Preferences --> System --> Compute Device Panel`, and configure it as you desire.
+   Typically, no; each GPU can only access its own memory. However, some GPUs can share their memory.
+   This can be enabled with :ref:`Distributed Memory Across Devices <prefs-system-cycles-distributive-memory>`.
 
+.. dropdown:: What renders faster, Nvidia or AMD, CUDA, OptiX or OpenCL?
 
-Would multiple GPUs increase available memory?
-----------------------------------------------
+   This varies depending on the hardware used. Different technologies also have different compute times
+   depending on the scene tested. For the most up-to-date information on the performance of different devices,
+   browse the `Blender Open Data <https://opendata.blender.org/>`__ resource.
 
-Typically, no, each GPU can only access its own memory, however, some GPUs can share their memory.
-This is can be enabled with :ref:`Distributed Memory Across Devices <prefs-system-cycles-distributive-memory>`.
 
-
-What renders faster, Nvidia or AMD, CUDA, OptiX or OpenCL?
-----------------------------------------------------------
-
-This varies depending on the hardware used. Different technologies also have different compute times
-depending on the scene tested. For the most up to date information on the performance of different devices,
-browse the `Blender Open Data <https://opendata.blender.org/>`__ resource.
-
-
 Error Messages
 ==============
 
@@ -144,84 +136,76 @@
 or through the package manager on Linux.
 
 
-Unsupported GNU version
------------------------
+.. dropdown:: Unsupported GNU version
 
-On Linux, depending on your GCC version you might get this error.
-See the `NVIDIA CUDA Installation Guide for Linux
-<https://docs.nvidia.com/cuda/archive/10.2/cuda-installation-guide-linux/index.html>`__
-For a list of supported GCC versions. There are two possible solutions to this error:
+   On Linux, depending on your GCC version, you might get this error.
+   See the `NVIDIA CUDA Installation Guide for Linux
+   <https://docs.nvidia.com/cuda/archive/10.2/cuda-installation-guide-linux/index.html>`__
+   for a list of supported GCC versions. There are two possible solutions to this error:
 
-Use an alternate compiler
-   If you have an older GCC installed that is compatible with the installed CUDA toolkit version,
-   then you can use it instead of the default compiler.
-   This is done by setting the ``CYCLES_CUDA_EXTRA_CFLAGS`` environment variable when starting Blender.
+   Use an alternate compiler
+      If you have an older GCC installed that is compatible with the installed CUDA toolkit version,
+      then you can use it instead of the default compiler.
+      This is done by setting the ``CYCLES_CUDA_EXTRA_CFLAGS`` environment variable when starting Blender.
 
-   Launch Blender from the command line as follows:
+      Launch Blender from the command line as follows:
 
-   .. code-block:: sh
+      .. code-block:: sh
 
-      CYCLES_CUDA_EXTRA_CFLAGS="-ccbin gcc-x.x" blender
+         CYCLES_CUDA_EXTRA_CFLAGS="-ccbin gcc-x.x" blender
 
-   (Substitute the name or path of the compatible GCC compiler).
+      (Substitute the name or path of the compatible GCC compiler).
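As a concrete illustration (assuming, purely hypothetically, that ``gcc-8`` is the compatible compiler installed on the system), the launch would look like:

.. code-block:: sh

   # Hypothetical example; use whichever GCC version matches the installed CUDA toolkit.
   CYCLES_CUDA_EXTRA_CFLAGS="-ccbin gcc-8" blender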
 
-Remove compatibility checks
-   If the above is unsuccessful, delete the following line in
-   ``/usr/local/cuda/include/host_config.h``
+   Remove compatibility checks
+      If the above is unsuccessful, delete the following line in
+      ``/usr/local/cuda/include/host_config.h``
 
-   ::
+      ::
 
-      #error -- unsupported GNU version! gcc x.x and up are not supported!
+         #error -- unsupported GNU version! gcc x.x and up are not supported!
 
-   This will allow Cycles to successfully compile the CUDA rendering kernel the first time it
-   attempts to use your GPU for rendering. Once the kernel is built successfully, you can
-   launch Blender as you normally would and the CUDA kernel will still be used for rendering.
+      This will allow Cycles to successfully compile the CUDA rendering kernel the first time it
+      attempts to use your GPU for rendering. Once the kernel is built successfully, you can
+      launch Blender as you normally would and the CUDA kernel will still be used for rendering.
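One way to perform that deletion from a shell, as a minimal sketch (assuming GNU sed and the header path given above; a ``.bak`` backup of the original header is kept):

.. code-block:: sh

   # Back up the header, then drop the line containing the unsupported-GNU-version check.
   sudo sed -i.bak '/unsupported GNU version/d' /usr/local/cuda/include/host_config.h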
 
 
-CUDA Error: Kernel compilation failed
--------------------------------------
+.. dropdown:: CUDA Error: Kernel compilation failed
 
-This error may happen if you have a new Nvidia graphics card that is not yet supported by
-the Blender version and CUDA toolkit you have installed.
-In this case Blender may try to dynamically build a kernel for your graphics card and fail.
+   This error may happen if you have a new Nvidia graphics card that is not yet supported by
+   the Blender version and CUDA toolkit you have installed.
+   Blender may then try to dynamically build a kernel for your graphics card and fail.
 
-In this case you can:
+   In this case, you can:
 
-#. Check if the latest Blender version
-   (official or `experimental builds <https://builder.blender.org/download/>`__)
-   supports your graphics card.
-#. If you build Blender yourself, try to download and install a newer CUDA developer toolkit.
+   #. Check if the latest Blender version
+      (official or `experimental builds <https://builder.blender.org/download/>`__)
+      supports your graphics card.
+   #. If you build Blender yourself, try to download and install a newer CUDA developer toolkit.
 
-Normally users do not need to install the CUDA toolkit as Blender comes with precompiled kernels.
+   Normally users do not need to install the CUDA toolkit as Blender comes with precompiled kernels.
 
+.. dropdown:: CUDA Error: Out of memory
 
-CUDA Error: Out of memory
--------------------------
+   This usually means there is not enough memory to store the scene for use by the GPU.
 
-This usually means there is not enough memory to store the scene for use by the GPU.
+   .. note::
 
-.. note::
+      One way to reduce memory usage is by using smaller resolution textures.
+      For example, 8k, 4k, 2k, and 1k image textures take up 256MB, 64MB, 16MB, and 4MB of memory, respectively.
 
-   One way to reduce memory usage is by using smaller resolution textures.
-   For example, 8k, 4k, 2k, and 1k image textures take up respectively 256MB, 64MB, 16MB and 4MB of memory.
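Those figures follow from width × height × 4 bytes for an uncompressed 8-bit RGBA image, so each halving of the resolution cuts memory use to a quarter. A quick check (the exact per-pixel cost depends on the image's channels and bit depth):

.. code-block:: sh

   # Uncompressed 8-bit RGBA: width * height * 4 bytes, printed in MB.
   echo $(( 8192 * 8192 * 4 / 1024 / 1024 ))   # 8k -> 256
   echo $(( 4096 * 4096 * 4 / 1024 / 1024 ))   # 4k -> 64
   echo $(( 2048 * 2048 * 4 / 1024 / 1024 ))   # 2k -> 16
   echo $(( 1024 * 1024 * 4 / 1024 / 1024 ))   # 1k -> 4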
+.. dropdown:: The Nvidia OpenGL driver lost connection with the display driver
 
+   If a GPU is used for both display and rendering,
+   Windows has a limit on the time the GPU can do render computations.
+   If you have a particularly heavy scene, Cycles can take up too much GPU time.
+   Reducing Tile Size in the Performance panel may alleviate the issue,
+   but the only real solution is to use separate graphics cards for display and rendering.
 
-The Nvidia OpenGL driver lost connection with the display driver
-----------------------------------------------------------------

@@ Diff output truncated at 10240 characters. @@


More information about the Bf-docboard-svn mailing list