[Bf-taskforce25] RNA and API wrap-up

Campbell Barton ideasman42 at gmail.com
Wed Jul 1 12:42:36 CEST 2009


On Wed, Jul 1, 2009 at 1:03 AM, Elia Sarti<vekoon at gmail.com> wrote:
>
>
> Campbell Barton wrote, on 06/30/2009 10:54 PM:
>> On Tue, Jun 30, 2009 at 1:02 AM, Elia Sarti<vekoon at gmail.com> wrote:
>>
>>>> The array access foreach_get/set will allow faster access, and I'm
>>>> still not ruling out having rna support python3's buffer protocol.
>>>> We could have something like this...
>>>>   coords = mesh.verts.as_buffer("co")
>>>>   normals = mesh.verts.as_buffer("no")
>>>>
>>>> This could make a new buffer object, python3 has support for buffer
>>>> stride and offset (like numpy afaik)
>>>>
>>>>
>>>>
>>> As I said in IRC for buffers I'd agree to supporting them but only if we
>>> can keep access safety (doing our own validation basically). I don't
>>> think the py scripter would expect crashing the application or
>>> corrupting an entire mesh just because he set a coord wrong or something
>>> like that. Maybe we could define some type of "convention" calls where
>>> you can get/set direct access to data (that's why in IRC I suggested
>>> using static functions, to distinguish them from other regular
>>> functions). Like for instance:
>>>
>>> Mesh.as_array_get(mesh, "verts")  # get vertex array
>>> Mesh.as_buffer_get(mesh, "verts.co")  # get coordinates buffer
>>>
>>
>> I'm not sure about this; the reason for crashing is unlikely to be
>> setting a bad value.
>> You won't be able to get an array of verts and set their flags to
>> something weird, as if you had full access to the struct in C.
>> Only an array of coords (floats), or conceivably an array of bools
>> for selection, so this way you can't corrupt data.
>>
>> The way you could crash Blender is to get the buffer and then, say,
>> enter/exit editmode so the vertex buffer is freed and our buffer
>> object will point to an invalid array, but this is a problem we need
>> to solve with normal rna data too.
>>
> Well, I thought the idea for increasing performance when using buffers
> was to do a binary 1:1 mapping between the python object and the
> property being retrieved, but now as I understand it you meant to only
> get/set buffers of basic types? i.e. you set buffers of coords, normals
> etc. at once, which are simple floats, but you're not allowed to set a
> buffer of vertices directly? This actually makes sense. In this case
> you could at most mess up the mesh.

Python can give direct access to a buffer without allowing the user to
write into any part of the array, by having stride, length and offset
values.
You could think of a python buffer as a view on an array; limiting
each buffer to a single attribute works with this.
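
To illustrate just the "view" idea (plain python3, nothing rna specific,
not the actual rna buffer type):

  buf = bytearray(b"abcdef")
  view = memoryview(buf)[2:4]  # a view into buf: offset 2, length 2, no copy
  view[0] = ord("X")           # writes through to buf[2]
  print(buf)                   # bytearray(b'abXdef')
  # view[5] = 0 would raise IndexError, you can't write outside the view

An rna buffer for "co" could work the same way, only exposing that one
attribute of the vertex array.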

I'll probably try RNA buffer support later on, but for now let's see how
foreach_get/set go...
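
A rough sketch of how that could look from a script (argument names
follow the proposals above and aren't final):

  # read all coords into a flat float list, scale them, write them back
  coords = [0.0] * (len(mesh.verts) * 3)
  mesh.verts.foreach_get(attr="co", array=coords)
  coords = [c * 2.0 for c in coords]
  mesh.verts.foreach_set(attr="co", array=coords)

One C loop each way, rather than touching every vert wrapper from python.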

>>>> Still, this is more of a hassle to define our own buffer type, and
>>>> foreach_get/set can also be more flexible...   with other args we
>>>> might want to use...
>>>>
>>>>  # eg. to only position selected verts.
>>>>  mesh.verts.foreach_set(attr="co", array=myList, check_attr="selected")
>>>>
>>>>
>>> I agree, doing the filtering on the C side would avoid bottlenecks. If
>>> I was crazy I would even suggest using some limited type of XPath-like
>>> syntax, although it would be cumbersome to handle code-wise I guess:
>>>
>>> Mesh.as_array_get(mesh, "verts[selected=1].no")  # get normals of the
>>> currently selected vertices
>>>
>>
>> I'd really rather not start encoding rna strings into function args; we
>> should be able to get by with single attributes or passing an array of
>> bools for the selection
>> array = Mesh.as_array_get(mesh, "verts[selected=1].no")
>> could be...
>> array = mesh.verts.as_array(attr="no", attr_test=selected)
>>
>> Though the simplest way with foreach_* (and as_array probably) is to
>> pass an array of bools.
>> array = mesh.verts.as_array(attr="no", partial=array_bools)
>>
> Yes, I guess this way is absolutely enough. Maybe we could use py
> dictionaries at most? Like:
>
> mesh.verts.as_array(attr="no", filter={'selected' : 1})
>
> So we could easily add other allowed filtering attributes (not thinking about vertices at the moment but also for other properties).
yep, this looks good.
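
For comparison, a hypothetical sketch of the two ways to get the
normals of only the selected verts (attribute names as in the proposal
above):

  # one call, filtering done on the C side
  normals = mesh.verts.as_array(attr="no", filter={'selected': 1})

  # vs. looping over every vert wrapper in python
  normals = [v.no for v in mesh.verts if v.selected]

The first form only crosses the python/C boundary once.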

>>>>> Operators design
>>>>>
>>>>> Operators are more or less defined/complete, at least on the C side. API
>>>>> side though we still have things undefined or problematic. For instance an
>>>>> issue is unique naming. Although the current naming convention fits
>>>>> perfectly and is cohesive on our core application side, it presents
>>>>> some limitations for extensibility, as we cannot really control how many
>>>>> pyoperators will be written, nor handle collisions between them. A way
>>>>> could be simply to always prepend pyops with PY_ (or tell py devs to use
>>>>> this convention). We could tell devs to prepend their pyoperators with
>>>>> their own names/ids of choice. Or we could use Python's own module/package
>>>>> system to compose the final operator name. As of now I don't seem to see
>>>>> any check on operator names? We could allow a
>>>>> module.OBJECT_OT_do_something() type of name syntax which could be
>>>>> translated automatically from py? Not sure if this conflicts with something.
>>>>> Another issue for operators in general is context constraints. For
>>>>> instance, as Ton mentioned in IRC, the add_object operator currently works in
>>>>> the node editor. Is this fine? Can it be expected by the user? From my point
>>>>> of view being able to run operators from contexts you wouldn't expect is
>>>>> totally acceptable architecture-wise as running constraints are defined
>>>>> in terms of "needs" and if requirements are respected I see no reason
>>>>> why the operator should not run. They should just be hidden from the
>>>>> user in order not to create confusion. I think we could do this with
>>>>> some type of "flag" both on operators and space types defining the
>>>>> constraints of context. Something like CTX_HANDLE_OBJECT,
>>>>> CTX_HANDLE_NODE etc. On the operator this is already represented by
>>>>> the prefix of its idname, so we could actually use this on spaces,
>>>>> although string comparison would introduce some issues; it would be more
>>>>> solid e.g. for py/plugin spaces. This brings me back to pyops naming, as
>>>>> using this method would leave the module/package approach as the only
>>>>> usable naming convention.
>>>>> You might wonder why not put the space check directly on the
>>>>> operator's poll? Well the reason for this is that as I said poll()
>>>>> should just check requirements and not interface issues which is instead
>>>>> something spaces take care of. This is not just a design choice but also
>>>>> has practical implications. For instance, take the same example I made on
>>>>> IRC: suppose at some point the Outliner space gains the ability to
>>>>> list/handle nodes. To support node tools in the outliner you would have
>>>>> to go and change every single node operator to add an additional check on
>>>>> the space type, while with the former approach you just update the outliner
>>>>> space to accept NODE_* operators. This of course is even more clear for
>>>>> python/plugin spaces.
>>>>>
>>>>>
>>>> Personally I'd prefer if we could completely avoid the C operator
>>>> naming - OBJECT_OT_do_something
>>>>
>>>> though it's possible to do this now with the current python api, it
>>>> would be cumbersome to fake the different operators into submodules,
>>>> eg.
>>>> - bpy.ops.object.do_something
>>>>
>>>> I'd propose dropping OBJECT_OT_do_something for the operator IDs and using
>>>> dot syntax instead.
>>>>  ot->idname= "object.do_something";
>>>>
>>>> OBJECT_OT_do_something can be kept for the C operator definitions
>>>> only; users and script writers would never see it.
>>>>
>>>> Python can convert this into bpy.ops.object.do_something most
>>>> efficiently if each operator type, OBJECT, ANIM, SEQUENCER etc are
>>>> each in their own list/hash.
>>>>
>>>>
>>>>
>>> I suggested using the module/package name because this way it is automatic
>>> for the scripter and he doesn't have to specify it manually. Maybe we
>>> could combine the two? I'd agree with not forcing the same C naming syntax
>>> if the idname really never gets visible from the user perspective, and as
>>> of now that seems to be the case, so it's fine by me. My only concern is to
>>> maintain name uniqueness between pyops, but maybe I'm just being paranoid
>>> about it.
>>>
>>
>> I'm not worried about name collisions and would even be happy to allow them.
>>
>> I'd also rather python operators used IDs matching the C operators.
>> IMHO "EXPORT_OT_ply" should be called the same thing no matter if it's
>> written in C or python,
>>
>>
> I don't understand this last part? You said you'd rather use dot-syntax
> for py ops? Or do you mean "called" in the sense of "invoked"? As of now
> if I remember correctly calling ops was something like:
>
> bpy.ops.EXPORT_OT_ply(...)
>
> You mean we change the way you invoke ops through python? For instance
> do instead something like:
>
> bpy.ops.Operator("export.ply").invoke()

As you say operators are called like this in python...
 bpy.ops.EXPORT_OT_ply(...)

My preference is...
 bpy.ops.export.ply(...)

There is some crappy code to make this work correctly in python - so
"export.ply" does a lookup for EXPORT_OT_ply.
I'd prefer EXPORT, OBJECT etc. were in their own lists, which would be
less messy, but it's possible either way.
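
Roughly what that lookup has to do at the moment (a sketch only, not
the real bpy code):

  # hypothetical: rebuild the C idname from the dotted python name
  def idname_from_dotted(name):
      prefix, op = name.split(".", 1)  # "export.ply" -> ("export", "ply")
      return "%s_OT_%s" % (prefix.upper(), op)

With per-category lists/hashes the string juggling would go away and
bpy.ops.export could index straight into an EXPORT hash.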

> This way the naming convention would be totally transparent, I guess.
> For name collisions I'm not sure, but I have the feeling it could
> present itself as a problem in the future if we don't take care of it
> now. What was your idea if there's a collision? I mean if a user loads
> two pyops with the same name what do we do? Throw an error? Silent
> overwrite? Ask to replace/keep current?
> Doing as I suggested still leaves the default condition to be the same
> as not using module names, but still lets you retrieve a specific op if
> it's needed (for instance for macros). Except for introducing some
> additional logic to operator invocation I don't see other cons? Of
> course we don't have to use module name, it could also be specified
> manually by the scripter directly into the idname of the op (or as some
> other pyop property).

I don't really mind what happens; raise an error unless an overwrite
arg is given?
AFAIK we didn't have name duplicate problems with menu items in Blender.

It could be cool to be able to replace Blender's internal operators
with python ones in some cases.
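
Something like this maybe (a hypothetical register call, just to show
the idea):

  registered_ops = {}

  # refuse to clobber an existing operator unless explicitly asked to
  def register_op(idname, op, overwrite=False):
      if idname in registered_ops and not overwrite:
          raise ValueError("operator %r already registered" % idname)
      registered_ops[idname] = op

Passing overwrite=True would then also cover replacing one of Blender's
internal operators with a python one.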

