[Robotics] Integration of middlewares

Herman Bruyninckx Herman.Bruyninckx at mech.kuleuven.be
Sun Apr 19 19:30:24 CEST 2009


On Fri, 17 Apr 2009, Damien Marchal wrote:

[...]
>>> But I totally agree that there are plenty of other protocols to
>>> transmit normalized data (CORBA, WS, ACE, Verse, OSC (TUIO), YARP).
>>
>> Blender should not know about any of those... :-)
>>
> I was thinking of having something called RawSensor/RawActuator that
> would behave like in and out ports through which the data would flow
> into and out of Blender.

What is your definition of "raw"? Just a byte stream, I guess, judging from
the rest of your message below. I will make some comments about this
approach below too...

> But this RawSensor/RawActuator needs to use something (UDP, TCP, shmem)
> to exchange data with the external component_wrapper, doesn't it?

Of course.

> The component_wrapper hides the external frameworks (YARP, etc.) from
> Blender, but at least one communication framework has to be used in
> order to exchange data between Blender and the component_wrapper. This
> can be a really low-level framework just exchanging byte streams or
> arrays, or a higher-level one (exchanging messages, synchronous or
> asynchronous, etc.). In this context I was citing Verse just because it
> already exists in Blender. ACE seems to be often cited and rather modern.

There is no shortage of such communication frameworks, on the contrary :-)

The lessons I have learned the last couple of years are the following:
- each "agent" that wants to send/receive data should be allowed/forced to
   "publish" the details about this data.
- it is only a small extra step to let not only data flows be described and
   published, but also "method call" interfaces and "event" interfaces.
- each piece of software (a "component") that contains a set of such
   "agents" provides a "nameserver" that the individual "agents" use to
   publish their "required"/"provided" interfaces (data, method, event).
- let's call all these interfaces "messages", to use a generic name.
- this nameserver is the piece of code that handles all incoming and
   outgoing messages, being responsible for making sure that
   - an incoming message is delivered to the "read" port of the agent that
     is expecting it;
   - that an outgoing message from an agent is sent over the appropriate
     "channel" (more about that below);
   - only well-formed messages are delivered;
    - the communication policy requested by the agent is realised (e.g.,
      the agent wants to receive only the most recent message, discarding
      all the ones that arrived before it could handle the next one);
      a sketch of such a nameserver follows this list.
- the nameserver has _configuration_ options that allow it to use different
   "middlewares" (UDP, YARP, CORBA, asynchronous message passing, ...) for
   different channels;
- the nameserver can add and remove "channels" and "agents" dynamically
   during the lifetime of the "component" it serves in.
- messages should be "self-descriptive", as in, for example, the NETCDF
   standard (an illustrative example follows below).
- the interconnection of "components" is a configuration activity that is
   to be done _outside_ of the "component" (in this case, Blender).
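
To make the "nameserver" idea concrete, here is a minimal, single-process
Python sketch (all class names, message names and policies are
hypothetical; a real implementation would add the per-channel middleware
bindings and threading):

    import queue

    class Port:
        """An agent's "read" port; with the "latest" policy it keeps only
        the most recent message and discards the unread backlog."""
        def __init__(self, msg_type, policy="buffer"):
            self.msg_type = msg_type      # expected message type
            self.policy = policy          # "buffer" or "latest"
            self._queue = queue.Queue()

        def deliver(self, msg):
            if self.policy == "latest":
                while not self._queue.empty():
                    self._queue.get_nowait()   # drop unread messages
            self._queue.put(msg)

        def read(self):
            return self._queue.get()

    class NameServer:
        """Per-component registry: agents publish the ports on which they
        expect messages; the nameserver routes and filters deliveries."""
        def __init__(self):
            self._ports = {}              # message name -> Port

        def publish(self, name, port):
            self._ports[name] = port

        def unpublish(self, name):
            self._ports.pop(name, None)

        def dispatch(self, name, msg):
            port = self._ports.get(name)
            if port is None:
                return                    # nobody asked for this message
            if not isinstance(msg, port.msg_type):
                return                    # drop malformed messages
            port.deliver(msg)

    # usage:
    ns = NameServer()
    pose_port = Port(msg_type=dict, policy="latest")
    ns.publish("6dofphantom", pose_port)
    ns.dispatch("6dofphantom", {"position": [0.1, 0.0, 1.2]})
    print(pose_port.read())
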
In summary, these are basically the motivations behind "component-based
programming" and "middleware"; it just took me quite some time to
understand why they _really_ make sense :-) (And many middleware projects
still do not get it...)
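
For instance, a self-descriptive message in the spirit of NETCDF could
carry its own schema next to the payload, so a receiving component can
decode it without compile-time knowledge of the sender (the layout below
is purely illustrative):

    msg = {
        "header": {
            "type": "6dofphantom",    # the advertised class name
            "version": 1,
            "fields": {               # the schema travels with the data
                "position":    {"dtype": "float64", "shape": [3]},
                "orientation": {"dtype": "float64", "shape": [4]},
            },
        },
        "data": {
            "position":    [0.10, 0.00, 1.25],
            "orientation": [0.0, 0.0, 0.0, 1.0],   # unit quaternion
        },
    }

The same holds for the interconnection: a small configuration file outside
Blender could state which channel uses which middleware, so none of that
knowledge ends up inside the "component" itself.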

>
>>> Let's suppose that the data_structure is bound to a "class" name.
>>> When the component_wrapper connects to a sensor or an actuator, it
>>> advertises this class name to Blender. Something like: I will send
>>> you "6dofphantom" or "framebufferimage".
>>>
>> Do you think there is a need for _Blender_ to have to know these class
>> names? Or do you think only the GameEngine Actors have to know, and Blender
>> should just pass the data uninterpreted?
>>
> The best would be to have the actor's controller know how to
> interpret the raw_structure.
It would even be better if it only received those messages that it can and
wants to handle. Hence the above-mentioned "name serving" suggestion.
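
Building on the nameserver sketch earlier in this mail, an actor's
controller would then register only for the message classes it can
interpret, and would simply never see anything else (again, all names are
hypothetical):

    class GraspController:
        """Actor-side controller that only ever receives the message
        class it registered for."""
        def __init__(self, nameserver):
            self.port = Port(msg_type=dict, policy="latest")
            nameserver.publish("6dofphantom", self.port)

        def step(self):
            pose = self.port.read()   # only "6dofphantom" arrives here
            # ... interpret the structure and drive the Blender actor ...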

> This would clearly permit keeping Blender out of the user/robotics
> specific complexity. The only drawback of doing so is that any function
> that is not supported in the controller will not be possible to implement

Is this really a drawback...? To me it seems _impossible_ to execute a
function that is not supported :-)

> (unless the Python API is extended and/or the simulation loop is
> changed).

>>> Now something to keep in mind is that the sensors are not only
>>> "data_structure": they can also do complex processing. For example,
>>> in blenderTUIO I have a sensor reacting to a condition like
>>> "finger_is_over_object", for which you have to test intersections
>>> with objects of the scene. This is something important to keep in
>>> mind (in my case, but maybe not for robotics).
>>
>> Of course also for robotics! :-) For example, grasp simulation comes to
>> mind: you could let a Blender Actor do grasping, and send the position of,
>> and forces on, the fingers to an external "haptic interface" or grasp
>> controller, or estimator, or...
>>
> This is funny: I have just been asked today to illustrate, for the 15th
> of May, a simulation of grasping for Christian Duriez:
> http://www2.lifl.fr/~duriez/EG2008/contactSkinning.pdf
> (but the simulation is totally done using their framework:
> http://www.sofa-framework.org/)

Damn, yet another framework that we (or rather Blender) should try to work
together with! :-)

> Bye for now and best, regards,
> Damien

Herman

