[Bf-committers] Node Unification.. it's a bit winded

Tyler Tricker tntricker at gmail.com
Wed Jun 17 12:17:01 CEST 2009


It's 3:00am and apparently I feel like kicking a dead horse, so here goes.


Node Unification – the main idea behind unifying the nodes is combining the
separate branched editors into one and expanding on that idea. Not only can
this cover the current node trees, it can be extended to replace the front
end of the entire material/compositing/audio/game engine/rendering system.



The node food chain

Constant < Signal/Expression < Map < Texture < Filters < Material < Geometry
< Object < Group < Modifiers < World < Scene < Game Engine & Render &
Animation < Blender

(*Note: this is just an outline, for conceptual purposes only; details still
have to be hashed out. For example, would filters mix with every lower node,
or would each filter be duplicated for each level? The hierarchy is also
rather long; I'm sure there is a way to simplify the lower levels.)

A scene-to-color link wouldn't make sense in the context of a node editor;
neither would a color-to-scene link. There is a natural hierarchy.
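That natural hierarchy could be enforced mechanically. A minimal sketch, in
Python, of how a unified editor might reject nonsense links by ranking node
levels; the level names and ordering are illustrative assumptions lifted
from the outline above, not an existing Blender API:

```python
# Hypothetical "node food chain": lower-level data may feed higher-level
# nodes, never the reverse. Names and ordering are assumptions for
# illustration only.
LEVELS = ["constant", "signal", "map", "texture", "filter", "material",
          "geometry", "object", "group", "modifier", "world", "scene"]
RANK = {name: i for i, name in enumerate(LEVELS)}

def link_allowed(src_level, dst_level):
    """A link is valid only when it flows up the hierarchy."""
    return RANK[src_level] < RANK[dst_level]

print(link_allowed("texture", "material"))  # texture -> material: fine
print(link_allowed("scene", "constant"))    # scene -> color: nonsense
```

With a rule like this, the editor can grey out or refuse invalid link drags
instead of letting the user build an unevaluable graph.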

http://ww.nodejoe.net/forum/userpix/3_DropMaterialNodeJoe_1.jpg

http://www.darksim.com/assets/images/ssDarkTreeEditor.jpg

These are examples from competing applications sporting a comprehensive
node editor. There is a natural order, and a DAG scene graph looks like a
good starting solution. A big step toward a comprehensive node system in
Blender is already complete; it's staring most people in the face and they
don't even realize it. The "outliner OOPS" window already shows how data
blocks are linked and the hierarchy of... well, everything (except nodes).
What makes this solution even more noteworthy is that much of the current
code wouldn't have to change. Only a new editor would need to be written,
filling in what the outliner doesn't show and allowing links and fields to
be changed and manipulated directly. There is no rewriting the kernel,
changing file formats, passing function pointers, or even modifying the
current node system. All it does is take the old separate systems and push
many of them into a single DAG editor.



Traversals and precomputation

This proposes a new method for precomputation during user time (well, maybe
not new... but revised). The main idea behind an advanced node system with
precomputation can be thought of as a push-pull strategy. Static data
(images, meshes, audio) can be calculated into a buffer and pushed up the
tree, but dynamic data (say explicit surfaces, scripted functions, textures,
filters, modifiers) must be pulled up from the bottom of the tree for
accuracy (which follows the raytracing paradigm pretty well). I know that
sampling data above the Nyquist frequency (which was proposed by Robin on
the mailing list) will prevent aliasing, but the problem with sampling
analog functions is that the bottom nodes don't know what the Nyquist
values are, and in long filter chains the numbers can grow extremely large
and lead to a memory wall. So data must be pulled up through the chain
through the dynamic nodes (as the system currently does); but this doesn't
mean the system can't be made more efficient. Culling in sampling chains,
similar to what is already found in geometric scene graphs, can be used,
and samples previously pulled from other branches can be cached as long as
no 'critical' changes were made in the lower or higher levels.
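The pull-with-caching idea can be sketched in a few lines. This is a toy
model, assuming hypothetical Node objects with dirty flags, not real
Blender code:

```python
# Pull strategy with caching: nodes are evaluated lazily from the top down,
# and a cached result is reused until a 'critical' change dirties the node
# or anything below it. All names here are made up for illustration.
class Node:
    def __init__(self, compute, inputs=()):
        self.compute = compute      # function(list of input values) -> value
        self.inputs = list(inputs)  # the lower nodes this node pulls from
        self._cache = None
        self._dirty = True

    def mark_dirty(self):
        # A 'critical' change happened at this node.
        self._dirty = True

    def _needs_update(self):
        # Dirty if this node or anything below it changed
        # (an O(n) walk, fine for a sketch).
        return self._dirty or any(n._needs_update() for n in self.inputs)

    def pull(self):
        if self._needs_update():
            values = [n.pull() for n in self.inputs]  # pull from below
            self._cache = self.compute(values)
            self._dirty = False
        return self._cache

# A tiny texture -> filter chain: brighten a constant "texel" value.
tex = Node(lambda _: 0.25)
bright = Node(lambda v: min(v[0] * 2.0, 1.0), inputs=[tex])
print(bright.pull())   # 0.5, computed
print(bright.pull())   # 0.5, served from the cache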

*Special case: flattening nodes (e.g. texture > image) can be used to make
long dynamic chains static. This would reduce the scenario of 1000 chained
filters to a single processed image and greatly speed up processing and
rendering (without the need for baking). Modifiers, surfaces, and object
casting can follow the same sort of flattening optimization.
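As a rough illustration of the flattening idea, here is a hypothetical
node that runs its dynamic chain once and then serves the baked result
until explicitly invalidated; nothing here is a real Blender type:

```python
# Hypothetical "flatten" node: 1000 chained filters cost one pass, after
# which the static buffer is served directly, like a baked texture.
class FlattenNode:
    def __init__(self, chain):
        self.chain = chain          # list of filter functions, in order
        self._buffer = None         # baked static result

    def evaluate(self, value):
        if self._buffer is None:
            for f in self.chain:    # the expensive part, done once
                value = f(value)
            self._buffer = value
        # Intentionally ignores later inputs until invalidated,
        # just like a baked buffer would.
        return self._buffer

    def invalidate(self):
        # Call when something below changes; forces a re-flatten.
        self._buffer = None

# 1000 chained +0.001 filters collapse into one cached value.
flat = FlattenNode([lambda v: v + 0.001] * 1000)
print(round(flat.evaluate(0.0), 3))   # 1.0 on the first, costly call
print(round(flat.evaluate(0.0), 3))   # 1.0 again, straight from the buffer
```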

Progressive GLSL/CPU background raytracing could be a very useful step for
pulling data through a dynamic chain. Rendering previews and the current
scene frame could take place at a low throttle in the background while the
user is working. If the user does click to render the scene, Blender should
put the job at high priority to finish it. This should be possible with the
use of job queuing (for the internal renderer only).



Standard Idle job order (automatically jumps to this mode after editing)

<-static data calc (if needs updating)

<-node previews

<-Frame Render

<-Animation Render

<-spin till scene changes



Modified to active job (render frame)

<-static data calc (if needs updating)

<-Frame Render

<-node previews

<-Animation Render

<-spin till scene changes



Modified to active job (render animation)

<-static data calc

<-Animation Render <- elevated

<-node previews

<-Frame Render (if needed)

<-spin till scene changes



Modified to active job(game engine)

<-static data calc(if needed)

<-spin until game mode exits
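The job orders above boil down to a priority queue where user actions
re-queue a job at an elevated priority. A minimal sketch using Python's
heapq; the job names are taken from the lists above, everything else is
made up for illustration:

```python
import heapq
import itertools

class JobQueue:
    """Lowest priority number runs first; ties keep FIFO order."""
    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # tie-breaker for stable ordering

    def put(self, priority, name):
        heapq.heappush(self._heap, (priority, next(self._order), name))

    def run_next(self):
        priority, _, name = heapq.heappop(self._heap)
        return name

q = JobQueue()
# Standard idle order.
q.put(0, "static data calc")
q.put(2, "node previews")
q.put(3, "frame render")
q.put(4, "animation render")
# User clicks Render: the frame render is re-queued above the previews.
q.put(1, "frame render")

print(q.run_next())   # static data calc
print(q.run_next())   # frame render
```

The elevated copy of the frame render jumps ahead of the previews while the
static data calculation still runs first, matching the "modified to active
job (render frame)" order.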



In any solution, it is up to the users to use the system effectively. While
a chain of 1000 filters will be slow, the system should still be stable and
responsive. A progress bar on the working (or processing) node could be
shown to give the user an idea of how efficient their setup is.



Nodes as a Future Feature

Looking forward, node graph scripting can become very popular for
implementing procedural textures, objects, sounds, etc., and will pave the
way for tighter script integration with the user interface without the
threat of interface bloat.

Rendering can be made faster than in any current graphics system by a
fusion of precomputation, hardware acceleration, and network framework
management. Samples could later be computed using massively parallel job
threading with OpenCL under CPU management.

A strong modular design could also help bring the game engine and animation
systems closer together by letting the game engine manipulate node
mechanics in other systems, like particles.

Baked textures can be stored in a flattening node chain and later be revised
and modified without burdening the system or requiring file swaps.

This can open a door for other powerful features previously regarded as
complicated (or unimportant) within Blender: parametric objects via object
node scripting, complicated procedural textures, b-rep objects, NURBS via
scripts, etc.



Other thoughts

Script package management tools (archive bundling, direct-from-archive
loading, script archive .blend packing) would be extremely useful in
extending the node system.


Why this should not be implemented: because it's a massive revision of the
UI... and possibly insane. Honestly, I don't even know if I'm making any
sense anymore... or was to begin with.

