[Bf-taskforce25] Drag & Drop into 2.5

Elia Sarti vekoon at gmail.com
Sun Jul 26 23:39:33 CEST 2009


Hi,

Actually, my architecture is pretty similar to what you're proposing 
here; it's just a different way of looking at it.
As I see it, it would be far more useful (and more used) to treat drag 
& drop as editor-specific behavior rather than as something attached to 
UI items. For instance, dragging an image over the image editor or 3D 
view, or a material/texture over an object, etc.

For this reason, and to keep everything simple, I don't see any need 
for custom UI items; they are not really needed for the UI to handle 
drag & drop. This could simply be done in the UI button handlers, using 
RNA to determine automatically whether the drag is valid.
And yes, for editors this would be handled through operators, as you suggest.
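To make the idea concrete, here is a minimal sketch of that RNA-based validity check. All names here (uiBut, ui_but_drop_poll, the pared-down StructRNA/PointerRNA) are hypothetical stand-ins, not the actual Blender API; the point is only that matching the dragged pointer's RNA type against the type a button accepts needs no per-button custom code:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-ins for Blender's RNA types; the real
 * PointerRNA/StructRNA are richer and live in RNA_types.h. */
typedef struct StructRNA { const char *identifier; } StructRNA;
typedef struct PointerRNA { StructRNA *type; void *data; } PointerRNA;

/* A UI button that records which RNA type it can accept on drop. */
typedef struct uiBut { StructRNA *accepts; } uiBut;

/* The drag is valid if the dragged pointer's RNA type matches the
 * type the button accepts -- derived automatically from RNA,
 * with no custom per-button handler code. */
static int ui_but_drop_poll(const uiBut *but, const PointerRNA *drag)
{
    if (!but->accepts || !drag->type)
        return 0;
    return strcmp(but->accepts->identifier, drag->type->identifier) == 0;
}
```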

But there's no need to use rna-paths, I believe? The reason the 
animation system does, I think, is so that it can store that 
information; in this case we don't need to store it, so I'm using RNA 
pointers directly.
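A sketch of what carrying the pointer directly looks like (again with hypothetical names -- wmDragData, drag_begin, drag_payload are illustrations, not the actual API): since the drag and the drop happen within one live session, the payload can be the PointerRNA itself, with no rna-path serialization or resolution step on drop:

```c
#include <assert.h>

/* Hypothetical stand-ins; the real types live in Blender's RNA/WM code. */
typedef struct StructRNA { const char *identifier; } StructRNA;
typedef struct PointerRNA { StructRNA *type; void *data; } PointerRNA;

/* Drag payload: the RNA pointer travels as-is (e.g. as the event's
 * customdata) because nothing needs to be persisted across sessions. */
typedef struct wmDragData { PointerRNA ptr; } wmDragData;

static wmDragData drag_begin(PointerRNA ptr)
{
    wmDragData d;
    d.ptr = ptr;
    return d;
}

/* On drop, the handler reads the payload back directly --
 * no path lookup or string parsing involved. */
static PointerRNA drag_payload(const wmDragData *d)
{
    return d->ptr;
}
```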


Ton Roosendaal wrote, on 26/07/2009 18.24:
> Hi Elia,
>
> I can't quite follow your proposal... maybe it just names things 
> differently.
>
> This is how I've intended drag/drop to work in the 2.5 architecture:
>
> - Drag items: are designated areas, defined with interface/ module. 
> Something like
>    uiDefDraggableItem() or so.
>    Some buttons can already automatically generate this.
> It can retrieve data for the draggable item similarly to 'rna paths' as 
> in use for animation system.
>
> - the UI code then handles mouse input for such items as regular 
> buttons, so if you click-drag inside the 'button' it activates the 
> drag.
>
> - The dragging itself is an Operator, with a modal() call running on 
> toplevel of queues.
>    On drop, it makes sure context is properly set (to the region), and 
> it creates a wmEvent, type "EVT_DROP", with customdata the rna path. 
> (wmEvent will have proper mouse coord too)
>
> - You then just need event handlers that catch the EVT_DROP event, 
> which can again be coded in the current interface buttons as well, for 
> specific and appropriate types.
> Later you can add EVT_DROP handlers in any region, like for the Image 
> window (drop image), or for 3d window (check boundbox/visible object, 
> assign texture) etc.
>
>
> -Ton-
>
> ------------------------------------------------------------------------
> Ton Roosendaal  Blender Foundation   ton at blender.org    www.blender.org
> Blender Institute   Entrepotdok 57A  1018AD Amsterdam   The Netherlands
>
> On 23 Jul, 2009, at 14:45, Elia Sarti wrote:
>
>   
>> Brecht Van Lommel wrote, on 07/23/2009 02:28 PM:
>>     
>>> Hi,
>>>
>>> On Thu, 2009-07-23 at 12:00 +0200, Elia Sarti wrote:
>>>
>>>       
>>>> Yes but the problem is exactly this, that each operator in the end 
>>>> gets
>>>> own context. Maybe I'm missing something but there's need for the
>>>> MOUSEDROP event to have both mouse_drag and mouse_drop data. The 
>>>> problem
>>>> is that these two types of data lie into two different space 
>>>> contexts,
>>>> unless we store this data at some higher level (like WM), which I'm 
>>>> not
>>>> sure is ok? This is the data retrieved from the CTX_data_drag_pointer
>>>> and CTX_data_drop_pointer. We'll have to set them under C->wm. If 
>>>> this
>>>> is fine then there's no problem.
>>>>
>>>>         
>>> I think the MOUSEDROP should not have to call CTX_data_drag_pointer.
>>> That is done by the MOUSEDRAG operator, which then passes it on to the
>>> windowmanager, which will give it to MOUSEDROP as customdata in the
>>> event.
>>>
>>> Thinking of it further, I don't understand why CTX_data_drag_pointer 
>>> and
>>> CTX_data_drop_pointer are really needed. They can be computed in the
>>> MOUSEDRAG/MOUSEDROP operator directly, unless there is some reason to
>>> have that decoupled?
>>>
>>> Brecht
>>>       
>> Yes this is what I suggested as alternative solution in the first
>> e-mail, but I was not sure it was ok? I mean MOUSEDRAG operator is 
>> modal
>> and when it gets MOUSEDROP it sets custom data for MOUSEDROP and pass 
>> on
>> the event which then arrives to the current space the cursor is in with
>> the new custom data (which will contain the drag_pointer).
>> If this is ok (and doable) then we can just do it this way I guess with
>> no need for CTX_ calls. One reason to have them was to make things
>> reusable. This way we could have one generic function that accepts
>> context and then in every editor you have a specific operator that
>> determines what's the data at x, y mouse coords for that specific 
>> space.
>> But I guess the generic code can just be a function that accepts two 
>> RNA
>> pointers.
>> _______________________________________________
>> Bf-taskforce25 mailing list
>> Bf-taskforce25 at blender.org
>> http://lists.blender.org/mailman/listinfo/bf-taskforce25
>>
>>     
>
>   