[Bf-taskforce25] RNA and API wrap-up

Campbell Barton ideasman42 at gmail.com
Tue Jun 30 22:54:19 CEST 2009


On Tue, Jun 30, 2009 at 1:02 AM, Elia Sarti<vekoon at gmail.com> wrote:
> Hi Campbell,
>
> Campbell Barton wrote, on 06/30/2009 12:54 AM:
>> Hey Elia,
>>
>> On Mon, Jun 29, 2009 at 5:06 AM, Elia Sarti<vekoon at gmail.com> wrote:
>>
>>> Hi,
>>> I've been almost absent for the last couple of months (which I
>>> apologize for). As I'm trying to make a bit of a summary of the
>>> current state of API-related work, I thought it would be useful to
>>> write it down so that everyone has a clearer idea and we can discuss
>>> it further, and then put up something definite with specs for the
>>> other things that depend on this (like exporters/importers etc).
>>>
>>>
>>> Data access and performance
>>>
>>> Data access for pyscripts currently always happens through RNA.
>>> Having everything abstracted presents, as always, the same problem:
>>> performance. While it works fine for accessing almost any data, it
>>> gets very slow for things like vertices and, in general, anything
>>> which is a huge collection of small pieces of data.
>>> Talking a bit on IRC, Campbell proposed a special function
>>> implemented on the PYAPI side in the form of obj.foreach_set("name",
>>> [array]), and the same for getting, obj.foreach_get("name", [array]).
>>> For instance (as I remember it) mesh.foreach_get("verts",
>>> vertex_array) would get all the vertices of the mesh "mesh" and put
>>> them into "vertex_array". We could also extend this to use RNA paths,
>>> like mesh.foreach_get("verts.co", coords). I'm not sure what the
>>> complications of this could be; I think trying it out is the best
>>> option here. As Campbell said, this is essential for getting
>>> importers/exporters working, which in turn is essential for having
>>> Blender working.
>>>
>>
>> Python classes (types) are also inspected at runtime; we're just
>> forwarding the lookups from Python to RNA. So even if we did a fully
>> autogenerated Py/C API, Python would still be doing hash lookups
>> internally.
>> Basically, if our hash lookups are as fast as Python's, there should
>> be no speed difference.
>>
>>
> I don't understand this. I know how RNA lookups are done on the BPY
> side; I wasn't questioning RNA lookup speed vs py internal speed. I
> was just pointing out (as we discussed on IRC) that for a generic
> speed increase we should provide direct access to some types of data
> (basically huge arrays) like vertices and so on. It was basically a
> summary of what we came up with, for others to know as well.
I misunderstood you, no matter..
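
For reference, a rough sketch of how the proposed foreach_* calls might
look from a script (none of these calls exist yet; names and exact
signatures are still up in the air):

  # hypothetical: flat float array, x,y,z per vertex
  coords = [0.0] * (len(mesh.verts) * 3)
  mesh.foreach_get("verts.co", coords)  # fill the list in one C loop
  # ... modify coords in python ...
  mesh.foreach_set("verts.co", coords)  # write them all back at once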

>> The array access foreach_get/set will allow faster access, and I'm
>> still not ruling out having RNA support python3's buffer protocol.
>> We could have something like this...
>>   coords = mesh.verts.as_buffer("co")
>>   normals = mesh.verts.as_buffer("no")
>>
>> This would create a new buffer object; python3 has support for buffer
>> stride and offset (like numpy, afaik).
>>
>>
> As I said on IRC, I'd agree to supporting buffers, but only if we can
> keep access safety (doing our own validation, basically). I don't
> think the py scripter would expect to crash the application or corrupt
> an entire mesh just because he set a coord wrong or something like
> that. Maybe we could define some type of "convention" calls where you
> can get/set direct access to data (that's why on IRC I suggested using
> static functions, to distinguish them from regular functions). Like
> for instance:
>
> Mesh.as_array_get(mesh, "verts")  # get vertex array
> Mesh.as_buffer_get(mesh, "verts.co")  # get coordinates buffer

I'm not sure about this; the reason for crashing is unlikely to be
setting a bad value.
You won't be able to get an array of verts and set their flags to
something weird, as if you had full access to the struct in C.
You'd only get an array of coords (floats), or conceivably an array of
bools for selection, so this way you can't corrupt data.

The way you could crash Blender is to get the buffer and then, say,
enter and exit editmode, so the vertex array is freed and our buffer
object now points to invalid memory. But this is a problem we need to
solve with normal RNA data too.
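
A sketch of that problem case, reusing the hypothetical as_buffer call
from above:

  coords = mesh.verts.as_buffer("co")  # points into blender's own array
  enter_editmode(); exit_editmode()    # stand-ins; mesh data is rebuilt
  coords[0] = 1.0                      # now writes into freed memory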

>> Still, it's more of a hassle to define our own buffer type, and
>> foreach_get/set can also be more flexible... with other args we
>> might want to use...
>>
>>  # eg. to only position selected verts.
>>  mesh.verts.foreach_set(attr="co", array=myList, check_attr="selected")
>>
> I agree, doing the filtering ourselves on the C side would avoid
> bottlenecks. If I were crazy I would even suggest using some limited
> type of XPath-like syntax, although I guess it would be cumbersome to
> handle code-wise:
>
> Mesh.as_array_get(mesh, "verts[selected=1].no")  # get normals of the
> currently selected vertices

I'd really rather not start encoding RNA strings into function args; we
should be able to get by with single attributes or passing an array of
bools for the selection.
array = Mesh.as_array_get(mesh, "verts[selected=1].no")
could be...
array = mesh.verts.as_array(attr="no", attr_test="selected")

Though the simplest way with foreach_* (and as_array probably) is to
pass an array of bools.
array = mesh.verts.as_array(attr="no", partial=array_bools)
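
Eg. the bool array could itself be built with a foreach_get first
(again only a sketch, none of these calls exist yet):

  sel = [False] * len(mesh.verts)
  mesh.verts.foreach_get(attr="selected", array=sel)  # fill the mask
  normals = mesh.verts.as_array(attr="no", partial=sel)  # selected only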

>>> Operators design
>>>
>>> Operators are more or less defined/complete, at least on the C side.
>>> On the API side, though, we still have things undefined or
>>> problematic. One issue is unique naming. Although the current naming
>>> convention fits perfectly and is cohesive on our core application
>>> side, it presents some limitations for extensibility, as we cannot
>>> really control how many pyoperators will be written and handle
>>> collisions between them. One way would be simply to always prepend
>>> pyops with PY_ (or tell py devs to use this convention). We could
>>> tell devs to prepend their pyoperators with their own names/ids of
>>> choice. Or we could use python's own module/package system to build
>>> the final operator name. As of now I don't see any check on operator
>>> names? We could allow a module.OBJECT_OT_do_something() type of name
>>> syntax which could be translated automatically from py? Not sure if
>>> this conflicts with anything.
>>>
>>> Another issue for operators in general is context constraints. For
>>> instance, as of now (as Ton mentioned on IRC) the add_object
>>> operator works in the node editor. Is this fine? Can the user expect
>>> it? From my point of view, being able to run operators from contexts
>>> you wouldn't expect is totally acceptable architecture-wise: running
>>> constraints are defined in terms of "needs", and if the requirements
>>> are respected I see no reason why the operator should not run. They
>>> should just be hidden from the user in order not to create
>>> confusion. I think we could do this with some type of "flag" on both
>>> operators and space types defining the constraints of context,
>>> something like CTX_HANDLE_OBJECT, CTX_HANDLE_NODE etc. On the
>>> operator this is the same information as the prefix of its idname,
>>> so we could actually use that on spaces. String comparison would
>>> introduce some issues, but it would be more solid, e.g. for
>>> py/plugin spaces, which brings me back to pyops naming, as using
>>> this method would leave the module/package approach as the only
>>> usable naming convention.
>>>
>>> You might wonder why not put the space check directly in the
>>> operator's poll? The reason is that, as I said, poll() should just
>>> check requirements, not interface issues, which are instead
>>> something spaces take care of. This is not just a design choice but
>>> also has practical implications. For instance (the same example I
>>> made on IRC), suppose at some point the Outliner space gains the
>>> capacity to list/handle nodes. To support node tools in the outliner
>>> you would have to go and change every single node operator to add an
>>> additional check on the space type, while with the former approach
>>> you just update the outliner space to accept NODE_* operators. This
>>> is of course even clearer for python/plugin spaces.
>>>
>>
>> Personally I'd prefer if we could completely avoid the C operator
>> naming - OBJECT_OT_do_something.
>>
>> Though it's possible to do this now with the current python api, it
>> would be cumbersome to fake the different operators into submodules,
>> eg.
>> - bpy.ops.object.do_something
>>
>> I'd propose dropping OBJECT_OT_do_something for the operator IDs and
>> using dot syntax instead.
>>  ot->idname= "object.do_something";
>>
>> OBJECT_OT_do_something can be kept for the C operator definitions
>> only; users and script writers would never see it.
>>
>> Python can convert this into bpy.ops.object.do_something most
>> efficiently if the operator types - OBJECT, ANIM, SEQUENCER etc. -
>> are each in their own list/hash.
>>
>>
> I suggested using the module/package name because this way it's
> automatic for the scripter and he doesn't have to specify it manually.
> Maybe we could combine the two? I'd agree with not forcing the same C
> naming syntax if the idname really never gets visible from the user's
> perspective, and as of now that seems to be the case, so it's fine by
> me. My only concern is maintaining name uniqueness between pyops, but
> maybe I'm just being paranoid about it.

I'm not worried about name collisions and would even be happy to allow
them.

I'd also rather python operators use IDs matching the C operators.
IMHO "EXPORT_OT_ply" should be called the same thing no matter whether
it's written in C or python.
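
A sketch of what a py exporter might look like with the dotted id (the
py operator class and registration api aren't settled, so every name
here is hypothetical):

  class EXPORT_OT_ply(bpy.types.Operator):  # C-style name stays internal
      __idname__ = "export.ply"  # users only see bpy.ops.export.ply

      def execute(self, context):
          # ... write the file ...
          return ('FINISHED',)

  bpy.ops.add(EXPORT_OT_ply)  # hypothetical registration call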

> _______________________________________________
> Bf-taskforce25 mailing list
> Bf-taskforce25 at blender.org
> http://lists.blender.org/mailman/listinfo/bf-taskforce25
>



-- 
- Campbell

