[Bf-committers] FM 2.8 Proposal : Fracturing, Cache, Rigidbody

Martin Felke martin.felke at t-online.de
Mon Feb 26 13:20:14 CET 2018


Hi, this is a first attempt to get my thoughts / ideas sorted on how to
move forward with FM in 2.8.


FM 2.8 Proposal : Fracturing, Cache, Rigidbody
==============================================

Fracturing:
----------
The main goal is to avoid thousands of real objects, which still cause
serious lag in Blender's viewport. Should this already be fixed, we
could just use real objects and would be almost done.
We need to decompose the base or previous modifier mesh into islands,
with an Operator for example. So we would have an Execute operator which
“invokes” the modifier and suppresses modifier evaluation afterwards.
The result of the fracture operator is a non-persistent Mesh
(pre-fractured) or a non-persistent Mesh Sequence (dynamic). We have the
possibility to apply it to a single mesh object or to convert it to
multiple Mesh Objects.
It should be kept as a modifier for (nondestructive ?) interaction with
the stack or nodes; otherwise it would be “nailed” to the top of the
stack, and no modifiers could be placed before it.
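
Roughly, I imagine the execute flow like this (a plain-Python sketch
with made-up names; decompose_to_islands is an assumed helper, not
existing code):

    def decompose_to_islands(mesh):
        # placeholder for the actual island decomposition (assumed helper)
        return [mesh]

    class FractureModifier:
        def __init__(self):
            self.result = None            # non-persistent Mesh / Mesh Sequence
            self.suppress_eval = False

        def execute(self, base_mesh):
            # one-shot "invoke": fracture once, then stop re-evaluating
            self.result = decompose_to_islands(base_mesh)
            self.suppress_eval = True

        def stack_eval(self, mesh_in):
            # later stack evaluations just pass the stored result through
            return self.result if self.suppress_eval else mesh_in
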
Optionally we could have a more general pack operator which packs a
group of objects and constraints (similar to how this already happens
with FM's external mode).


Cache:
-----
The fracture result should be kept in RAM (Geometry Cache). It could
also be written directly to a persistent cache file.
Alembic could be used as the cache file / disk cache for geometry. Those
per-modifier cache files could be packed into the blend, so the user
doesn't need to take care not to “forget” files when sharing the
blend.
The Alembic file backend would need to be fed with data differently,
though. We need to avoid fully dumping the mesh again and again each
frame, because this would unnecessarily bloat the file size. Instead, we
should treat each island like an object and store only its transforms
over time.
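
As a minimal sketch of that “geometry once, transforms per frame” idea
(hypothetical names, not the actual Alembic API):

    from dataclasses import dataclass, field

    @dataclass
    class IslandGeometry:
        # written to the cache file exactly once, at fracture time
        verts: list
        faces: list

    @dataclass
    class IslandTrack:
        geometry: IslandGeometry
        # per frame only a small transform sample is stored
        samples: dict = field(default_factory=dict)  # frame -> (loc, rot)

        def write_sample(self, frame, loc, rot_quat):
            self.samples[frame] = (loc, rot_quat)

So per island and frame we would store only a handful of floats instead
of a full mesh dump.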

A Fracture Modifier should hold a cache data block similar to the Mesh
Sequence Cache modifier. Each of those objects and / or modifiers would
then be linked to a disk cache file.
During fracturing, the initial state will be written to a geometry cache
file.
During simulation, another file will be created in which the initial
state from simulation is modified by writing transform info.
For the case of dynamic fracture, the cache needs to be able to extend
or shrink (modify) the initial state. At each fracture event we need to
add a new state to some event sequence, but we must avoid repeating
redundant info, such as writing the same geometry twice.
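
For the dynamic case, the event sequence could be as simple as this
(again just a sketch with assumed names):

    from dataclasses import dataclass

    @dataclass
    class FractureEvent:
        frame: int
        removed: list   # ids of islands replaced (split) at this event
        added: list     # ids of the new shards; their geometry is written
                        # once elsewhere and never repeated per event

    events = []         # appended to at each fracture event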

Generally, we need to make the storage more dynamic, so the cache can
grow and shrink. Instead of just storing an array of “points” per frame,
the cache could hold just a sequence of items where each has its own
frame range.
To ensure quick lookup (the state of the cache at frame x), we could
keep an interface similar to the old point cache.
Going over all elements with a linear search is just too slow at high
element counts, so jumping around in the cache, and even sequential
playback, could be too if we don't know what the “next” changeset is.
We could build up a lookup structure which works like an array of
frames, referencing the items according to whether a frame is part of
their frame range.
Building that structure could be done at sim start, and on addition /
removal of items in dynamic sims too. This helps to ensure fast playback
and jumping, at the expense of some overhead in dynamic simulations. But
simulation is generally expected to be slower than cache playback
anyway, so the overhead should not be the main cause of slow
performance.
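
A sketch of such a lookup structure (hypothetical names; frame ranges
taken as inclusive):

    from dataclasses import dataclass

    @dataclass
    class CacheItem:
        index: int
        frame_start: int
        frame_end: int       # inclusive; payload layers omitted here

    class FrameIndex:
        """Array of frames; each entry lists the items alive at that frame."""
        def __init__(self, items, frame_start, frame_end):
            self.frame_start = frame_start
            self.frames = [[] for _ in range(frame_end - frame_start + 1)]
            for item in items:
                self.add(item)

        def add(self, item):
            # run at sim start, and again when a dynamic sim adds items
            lo = max(item.frame_start, self.frame_start)
            hi = min(item.frame_end,
                     self.frame_start + len(self.frames) - 1)
            for f in range(lo, hi + 1):
                self.frames[f - self.frame_start].append(item)

        def items_at(self, frame):
            # O(1) jump to the cache state at an arbitrary frame
            return self.frames[frame - self.frame_start]

Lookup per frame is then constant time; the price is the memory for the
per-frame lists and updating them whenever items appear or vanish.
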
Each cache item should be able to hold any type of “custom” data, so the
same cache structure could be used for any type of simulation purpose.
Blender's CustomData layer system could be applied to each of those
cache items. It already allows appending “simple” data layers like int,
float, string and bool. Optionally we could attempt to extend it to hold
pointers to arbitrary structs. We should be able to register new custom
data types at runtime too, and expose the cache to Python (so add-ons
could read from / write into it).
For example, the Rigidbody Object structure should be treated as a
custom data layer of the cache. Each island would be an item and would
index into the rigidbody layer.
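
In Python-style pseudocode the layer idea could look like this (all
names made up, loosely modeled on CustomData):

    class LayerRegistry:
        """Runtime registry for layer types (int, float, ... or structs)."""
        types = {}

        @classmethod
        def register(cls, name, pytype):
            cls.types[name] = pytype

    class ItemCache:
        def __init__(self, item_count):
            self.item_count = item_count
            self.layers = {}                     # layer name -> per-item array

        def add_layer(self, name):
            pytype = LayerRegistry.types[name]   # type must be registered
            self.layers[name] = [pytype() for _ in range(self.item_count)]

    # the rigidbody state becomes just another layer, one entry per island:
    LayerRegistry.register("rigidbody", dict)    # dict stands in for a struct
    cache = ItemCache(item_count=100)
    cache.add_layer("rigidbody")
    cache.layers["rigidbody"][42]["mass"] = 1.0  # island 42 indexes the layer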


Rigidbody:
---------
The rigidbody world would reference all objects which have caches with
rigidbody custom data in them, with Fracture Modifiers referencing their
cache datablocks and their items.
Access would look like: World -> Objects -> Caches per Object -> Items ->
RigidBodyObjectLayer[Item.Index]
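
Or, as a duck-typed sketch of that exact path (attribute names mirror
the proposal, not any existing API):

    def gather_rigidbodies(world):
        for ob in world.objects:                  # World -> Objects
            for cache in ob.caches:               # -> Caches per Object
                layer = cache.layers.get("rigidbody")
                if layer is None:
                    continue
                for item in cache.items:          # -> Items
                    yield layer[item.index]       # -> RigidBodyObjectLayer[...]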

The constraints should be managed by the modifiers as well; we then have
“inner” and “outer” constraints, the latter being cross-object.
The cross-object constraints would be managed by a dedicated
constraint-only Modifier. Each of those constraints should hold
references to rigidbodies (or cache items, since a rigidbody is just a
layer then) of the same or of the other participating objects.
The old “regular” Object-based rigidbodies shall be removed; they can be
replaced by single-island modifiers.
Rigidbody as a modifier becomes movable in the stack / among nodes, and
is not nailed to final / deform / base.
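
An illustrative constraint record (hypothetical names) could then be:

    from dataclasses import dataclass

    @dataclass
    class IslandConstraint:
        object_a: object
        item_a: int          # cache item index on object_a
        object_b: object
        item_b: int          # cache item index on object_b

        @property
        def is_outer(self):
            # "outer" constraints connect items of different objects
            return self.object_a is not self.object_b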

Additional functionality like autohide and automerge could become
separate modifiers, unless they also need to access cache details.


Further Thoughts
----------------

Should EACH modifier have a cache component, which is passed down the
stack and can also be accessed externally?
Like ob.modifiers["mod"].cache.layers.xxx ?
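
A sketch of how such a cache component could be handed down the stack
(the evaluate() signature is an assumption):

    def evaluate_stack(ob, frame):
        cache = None
        for md in ob.modifiers:
            md.cache = md.evaluate(cache, frame)  # consume upstream cache
            cache = md.cache                      # hand own cache downstream
        return cache   # each md.cache also stays accessible from outside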

A generic script modifier would be nice (one to which a certain Python
class could be attached); it could cooperate with the exposed cache
component (read from the input cache, write to the output cache).
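
For example (the execute() signature and the layer access here are pure
assumptions for illustration):

    class DampVelocities:
        """Example class attached to such a script modifier."""
        def execute(self, cache_in, cache_out, frame):
            # read a layer from the input cache, write to the output cache
            for i, v in enumerate(cache_in.layers["velocity"]):
                cache_out.layers["velocity"][i] = [c * 0.9 for c in v]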

Caches per modifier would also allow stacking multiple fracture
modifiers without the performance penalty of continuous refracturing.
(This would work with the old-style DNA / DM storage too, but many
places in the code assume there is only one FM, like the duplicated
rigidbody code.)


