[Bf-taskforce25] RNA and API wrap-up

Campbell Barton ideasman42 at gmail.com
Tue Jun 30 00:54:06 CEST 2009


Hey Elia,

On Mon, Jun 29, 2009 at 5:06 AM, Elia Sarti <vekoon at gmail.com> wrote:
>
> Hi,
> I've been almost absent for the last couple of months (for which I
> apologize). As I'm putting together a summary of the current state of
> API-related work, I thought it would be useful to write it down so
> that everyone has a clearer picture and we can discuss further. Then
> we can settle on something definite and write specs for the other
> things that depend on this (like exporters/importers etc.).
>
>
> Data access and performance
>
> Data access for pyscripts, as of now, always happens through RNA.
> Having everything abstracted presents, as always, the same problem:
> performance. While it should work fine for accessing almost any
> data, it would get very slow for things like vertices and, in
> general, anything that is a huge collection of small pieces of data.
> Talking a bit on IRC, Campbell proposed having a special function
> implemented on the Python API side in the form of
> obj.foreach_set("name", [array]), and likewise
> obj.foreach_get("name", [array]) for getting. For instance (as I
> remember it) mesh.foreach_get("verts", vertex_array) would get all
> the vertices of the mesh "mesh" and put them into "vertex_array". We
> could also extend this to use RNA paths, like
> mesh.foreach_get("verts.co", coords). I'm not sure what the
> complications of this could be; I think trying it out is the best
> option here. As Campbell said, this is essential for
> importers/exporters to work, which in turn is essential for Blender
> as a whole.

Python classes (types) are also inspected at runtime; we're just
forwarding the lookups from Python to RNA. So even if we did a fully
autogenerated Py/C API, Python would still be doing hash lookups
internally.
Basically, if our hash lookups are as fast as Python's, there should
be no speed difference.


The array access foreach_get/set will allow faster access, and I'm
still not ruling out having RNA support Python 3's buffer protocol.
We could have something like this...
  coords = mesh.verts.as_buffer("co")
  normals = mesh.verts.as_buffer("no")

This would create a new buffer object; Python 3 supports buffer
strides and offsets (like numpy, afaik).

Still, defining our own buffer type is more of a hassle, and
foreach_get/set can also be more flexible, with other args we might
want to use...

 # eg. to only position selected verts.
 mesh.verts.foreach_set(attr="co", array=myList, check_attr="selected")
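To make the intended semantics concrete, here is a rough pure-Python
model of foreach_get/foreach_set. The Vert/VertCollection classes and
the check_attr filtering are illustrative assumptions for this sketch,
not the actual Blender API:

```python
# Pure-Python sketch of the proposed foreach_get/foreach_set semantics.
# "Vert" and "VertCollection" are illustrative stand-ins, not real bpy types.

class Vert:
    def __init__(self, co, selected=False):
        self.co = list(co)        # 3D coordinate
        self.selected = selected  # selection flag

class VertCollection(list):
    def foreach_get(self, attr, array):
        # Flatten the attribute of every vert into one flat array.
        for v in self:
            value = getattr(v, attr)
            array.extend(value if isinstance(value, list) else [value])

    def foreach_set(self, attr, array, check_attr=None):
        # Write values back, 3 floats per vert, optionally filtered
        # by a boolean attribute such as "selected".
        for i, v in enumerate(self):
            if check_attr is None or getattr(v, check_attr):
                setattr(v, attr, list(array[i * 3:i * 3 + 3]))

verts = VertCollection([Vert((0, 0, 0), selected=True), Vert((1, 0, 0))])
coords = []
verts.foreach_get("co", coords)   # coords -> [0, 0, 0, 1, 0, 0]
verts.foreach_set("co", [5, 5, 5, 9, 9, 9], check_attr="selected")
# Only the selected vert was moved; the other keeps its coordinates.
```

The one flat array per attribute keeps the Python/C boundary to a
single call, which is the whole point of the proposal.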


> Operators design
>
> Operators are more or less defined/complete, at least C side. API
> side, though, we still have things undefined or problematic. For
> instance, one issue is unique naming. Although the current naming
> convention fits perfectly and is cohesive on our core application
> side, it presents some limitations for extensibility, as we cannot
> really control how many pyoperators will be written, nor handle
> collisions between them. One way could be to simply always prepend
> pyops with PY_ (or tell py devs to use this convention). We could
> tell devs to prepend their pyoperators with their own names/ids of
> choice. Or we could use Python's own module/package system to build
> the final operator name. As of now I don't see any check on
> operator names? We could allow a module.OBJECT_OT_do_something()
> type of name syntax, which could be translated automatically from
> py? Not sure if this conflicts with something.
> Another issue for operators in general is context constraints. For
> instance, as Ton mentioned on IRC, the add_object operator currently
> works in the node editor. Is this fine? Can the user expect it? From
> my point of view, being able to run operators from contexts you
> wouldn't expect is totally acceptable architecture-wise: running
> constraints are defined in terms of "needs", and if the requirements
> are met I see no reason why the operator should not run. Such
> operators should just be hidden from the user in order not to create
> confusion. I think we could do this with some type of "flag" on both
> operators and space types defining the context constraints,
> something like CTX_HANDLE_OBJECT, CTX_HANDLE_NODE, etc. On the
> operator this is the same information as the prefix of its idname,
> so we could actually use that prefix on spaces. String comparison
> would introduce some issues, but it would be more solid, e.g. for
> py/plugin spaces. This brings me back to pyop naming, as using this
> method would leave the module/package approach as the only usable
> naming convention.
> You might wonder why not put the space check directly in the
> operator's poll()? Well, the reason is that, as I said, poll()
> should just check requirements and not interface issues, which are
> instead something spaces take care of. This is not just a design
> choice but also has practical examples. For instance (the same
> example I made on IRC), suppose at some point the Outliner space
> gains the ability to list/handle nodes. To support node tools in the
> Outliner you would have to go and change every single node operator
> to add an additional check on the space type, while with the former
> approach you just update the Outliner space to accept NODE_*
> operators. This is of course even more clear for python/plugin
> spaces.
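The category-based matching described above could be sketched roughly
like this; Space, accepted_categories, and op_category are made-up
names for illustration, not real Blender code:

```python
# Sketch of matching operators to spaces by idname category.
# All names here are hypothetical, sketching the proposed design.

def op_category(idname):
    # "NODE_OT_add_node" -> "NODE"; with dot syntax, "node.add_node" -> "node"
    for sep in ("_OT_", "."):
        if sep in idname:
            return idname.split(sep, 1)[0]
    return idname

class Space:
    def __init__(self, accepted_categories):
        self.accepted_categories = set(accepted_categories)

    def shows_operator(self, idname):
        # The operator may still run elsewhere if its requirements
        # (poll) pass; this only controls whether the UI exposes it.
        return op_category(idname) in self.accepted_categories

outliner = Space({"OBJECT"})
# Later, the Outliner learns to handle nodes: update the space once,
# instead of touching every single node operator.
outliner.accepted_categories.add("NODE")
```

This keeps poll() about requirements only, while the space decides
which operator categories it presents.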

Personally I'd prefer if we could completely avoid the C operator
naming - OBJECT_OT_do_something.

Though it's possible to do this now with the current Python API, it
would be cumbersome to fake the different operators into submodules,
eg.
- bpy.ops.object.do_something

I'd propose dropping OBJECT_OT_do_something for the operator idnames
and using dot syntax instead:
 ot->idname= "object.do_something";

OBJECT_OT_do_something can be kept for the C operator definitions
only; users and script writers would never see it.

Python can convert this into bpy.ops.object.do_something most
efficiently if the operator types - OBJECT, ANIM, SEQUENCER, etc. -
are each in their own list/hash.
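A minimal sketch of how such a per-category mapping might look on the
Python side (OpsModule/Ops are hypothetical stand-ins, not the real
bpy internals):

```python
# Sketch: exposing dot-syntax idnames as ops.<module>.<name>, with one
# dict per operator module for fast lookup. Illustrative only.

class OpsModule:
    def __init__(self):
        self._ops = {}  # operator name -> callable

    def __getattr__(self, name):
        try:
            return self._ops[name]
        except KeyError:
            raise AttributeError(name)

class Ops:
    def __init__(self):
        self._modules = {}  # module name (e.g. "object") -> OpsModule

    def register(self, idname, func):
        # "object.do_something" -> module "object", name "do_something"
        module, name = idname.split(".", 1)
        self._modules.setdefault(module, OpsModule())._ops[name] = func

    def __getattr__(self, name):
        try:
            return self._modules[name]
        except KeyError:
            raise AttributeError(name)

ops = Ops()
ops.register("object.do_something", lambda: "done")
result = ops.object.do_something()  # two hash lookups: module, then name
```

Each attribute access is a single dict lookup, so the dot syntax costs
no more than Python's own attribute resolution.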

> Type registration
>
> This is less important, and more of a personal question than a
> general issue.
> Types can be registered to add customization, as happens for Panels
> or Menus. If I'm not mistaken, this is how the current architecture
> works:
>
> script.py -> class SomePanel(Panel): [...] bpy.types.register(SomePanel) ->
>
>    RNA: Panel.register() -> register the type in some custom way ->
> (ARegionType.paneltypes)
>
>        ARegionType.draw() ->
>
>            ED_region_panels("context") ->
>
>                foreach (panel in paneltypes) if (panel.context ==
> "context") draw(panel)
>
>
> How it should work in my opinion:
>
> script.py -> class SomePanel(Panel): [...] bpy.types.register(SomePanel) ->
>
>    [RNA: insert SomePanel in optimized lookup list]
>
>        ARegionType.init() ->
>
>            retrieve all Panel-inherited RNA types and register ->
> (ARegionType.autopaneltypes)
>
>        ARegionType.draw() ->
>
>            ED_region_panels("context") ->
>
>                foreach (panel in paneltypes) draw(panel)
>                foreach (panel in autopaneltypes) if (panel.context ==
> "context") draw(panel)
>
>
>
> Advantages of this kind of architecture are that it keeps RNA more
> consistent and self-contained - no messing around (i.e. no real
> logic) with the interface. As a consequence, it also keeps interface
> code more self-contained by letting it handle its own supported
> custom types.
> The apparent disadvantage is low performance, having to do lookups
> on every init. I understand this could be why the first method was
> adopted, but to me this doesn't seem a valid reason. To solve the
> problem we could instead keep a coexisting, fast-to-access list of
> registered types. This could be ordered as some kind of tree to
> optimize lookups based on the type hierarchy. I know caches are bad,
> but I still think they're better than code fragmentation. Once we
> have this kind of cache we could optimize further, for instance (in
> pseudo-code):
>
> if type_tree.is_hierarchy_changed_for(Panel):
>    re-init ARegionType.autopaneltypes
>
> This could simply be a flag on the cache-stored type, updated by the
> cache itself. That way we would re-do the lookup only when some new
> specific type gets registered/unregistered.
>
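The proposed cache with a hierarchy-changed flag could be modeled
roughly like this (TypeRegistry and its methods are made-up names
sketching the pseudo-code above, not existing Blender code):

```python
# Sketch of a registered-type cache with a per-base "hierarchy
# changed" flag, so a region only rebuilds its type list when
# something it cares about was (un)registered. Illustrative design.

class TypeRegistry:
    def __init__(self):
        self._types = {}   # type name -> list of base type names
        self._dirty = set()  # base names whose hierarchy changed

    def register(self, name, bases):
        self._types[name] = list(bases)
        self._dirty.update(bases)  # flag every affected base

    def is_hierarchy_changed_for(self, base):
        # Check-and-clear: report each change once, like a flag the
        # cache updates itself.
        if base in self._dirty:
            self._dirty.discard(base)
            return True
        return False

    def subtypes_of(self, base):
        return [n for n, bases in self._types.items() if base in bases]

registry = TypeRegistry()
registry.register("SomePanel", bases=["Panel"])

panel_cache = []
if registry.is_hierarchy_changed_for("Panel"):  # True: rebuild needed
    panel_cache = registry.subtypes_of("Panel")
# A second check with no new registrations would skip the rebuild.
```

The lookup cost is paid only when the Panel hierarchy actually
changes, which is the trade-off the paragraph above argues for.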
>
>
> I think I mentioned the main issues that must be either solved,
> decided upon, or simply settled; if you think I forgot something,
> feel free to add it.
> I'm also thinking about writing some more detailed docs on RNA and
> its internals, in order not to turn it into a black box that only a
> few understand - this of course with the assistance of Brecht.
>
>
>
> _______________________________________________
> Bf-taskforce25 mailing list
> Bf-taskforce25 at blender.org
> http://lists.blender.org/mailman/listinfo/bf-taskforce25
>



-- 
- Campbell

