[Robotics] Integration of middlewares

Damien Marchal damien.marchal at lifl.fr
Fri Apr 17 13:22:00 CEST 2009


Hi all,

All the previous discussions are really interesting.

A few more details about the GameEngine:
In the GameEngine we have the following components: a Game contains
several Scenes, which contain several Actors. An Actor has a set of
Sensors connected to Controllers, which are in turn connected to
Actuators. An Actor also has a State (I'm a bit weak on this point as I
only use one State) that describes how the Sensors/Controllers/Actuators
are connected and which ones are active.
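
To make this concrete, here is a minimal sketch of what the Python side
of that wiring looks like in a 2.48-era controller script (the GameLogic
calls are the API as far as I remember it; the attached bricks are
whatever Sensors and Actuators you wired to the controller):

    # Minimal controller script: read the attached Sensors, drive the
    # attached Actuators (2.48-era GameLogic API, from memory).
    import GameLogic

    cont = GameLogic.getCurrentController()
    for sensor in cont.getSensors():              # Sensors wired to this controller
        if sensor.isPositive():                   # this sensor just triggered
            for actuator in cont.getActuators():  # Actuators wired to this controller
                GameLogic.addActiveActuator(actuator, True)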

This kind of approach was chosen to allow people to design games without
writing a line of code. This is nice for non-programmers, but as a
programmer you will always want to do something that is not supported
and end up spending time on hacking :)


>> To better define the problem, could you write down on the Wiki what a
>> "generic interface" in the context of Blender would look like for a
>> sensor like a camera, for instance? I'm not sure, for example, that the
>> framerate of a virtual (Blender) camera is something we can easily
>> control -> should it appears in the interface? etc...
>>     
>
> These are the right questions! Let me try to give a partial "answer"... :-)
> (This "answer" is basically a summary of the reasons why "component based
> programming" is needed in addition to "object oriented programming".)
> - different classes of sensors provide "data structures" that can be shared
>    by all members of the class. For example, in the camera case: an image is
>    a matrix with pixel values, with some configuration parameters such as
>    size, time stamp, gray value intervals, ...
> - these "data structures" would be the only thing that should be encoded
>    in Blender.
> - Blender should get a "Read" and "Write" interface to the "outside world",
>    where any activity within Blender can input or output such data
>    structures in a _non blocking_ way.
> - Blender should get a "component wrapper" around itself (whose use or
>    not-use could be a simple configuration setting) that allows to connect the
>    above-mentioned "Read"/"Write" data ports to "ports" offered by
>    communication middleware. In the worst case, each middleware project that
>    wants to integrate with Blender provides its own "Blender device driver"
>    to make this happen. In the best case, Blender can have a "generic"
>    interface to such middleware. (And the concrete middleware projects then
>    have to add a "device driver" for this generic interface to their own
>    code.)
> - The "component wrapper" to communication middleware can have several
>    (configurable!) policies: (un)buffered, FIFO, overwrite when new
>    data arrives, ...
> - The paragraphs above only deal with "data flow"; the next step is to
>    introduce "event flow".
>   

I like the flexible way Herman has defined for Blender to get/set
external data.

Here is an example of such a scheme:
 1) the real sensor
 2) component_wrapper: converts from the framework data format to a
    normalized Blender in/out format
        |
 3)     | (udp, tcp, shmem)
        |
 4) InOutSensor
 5) controller: deserializes the data,
                interprets the data,
                processes something,
                serializes the data for the actuators
 6) InOutActuator
        |
        | (udp, tcp, shmem)
        |
 7) component_wrapper: deserializes the data back to the framework data
    format
 8) the real actuator
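
To illustrate the middle of that chain (steps 4 to 6), here is a minimal
sketch of what such a controller script could look like. The
InOutSensor/InOutActuator bricks and their "payload" attribute are
hypothetical, they do not exist yet; only the GameLogic calls are the
2.48-era API as I remember it, and the wire format (3 little-endian
floats) is just an assumption for the example:

    # Hypothetical deserialize/process/serialize controller.
    import struct
    import GameLogic

    cont = GameLogic.getCurrentController()
    in_sensor = cont.getSensors()[0]       # the (hypothetical) InOutSensor
    out_actuator = cont.getActuators()[0]  # the (hypothetical) InOutActuator

    if in_sensor.isPositive():
        # deserialize: assume the wrapper sends 3 little-endian floats
        x, y, z = struct.unpack("<3f", in_sensor.payload)

        # interpret / process something: e.g. clamp the values
        x, y, z = [max(-10.0, min(10.0, v)) for v in (x, y, z)]

        # serialize for the actuator, which forwards it over udp/tcp/shmem
        out_actuator.payload = struct.pack("<3f", x, y, z)
        GameLogic.addActiveActuator(out_actuator, True)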

In the context of Blender, maybe a first attempt can be made to support
the Verse protocol for the network part. Verse seems to have been
"stalled" for years, but it is already part of mainstream Blender, which
is definitely a plus to avoid maintaining a fork. After a quick look it
seems that Verse can exchange texture data (in which you can store a
byte stream). But I totally agree that there are plenty of other
protocols to transmit normalized data (corba, ws, ace, verse, osc(tuio),
yarp).

About the "data_structure" that have to be implemented in blender. I 
wonder if other options are possible.
Let's suppose that the data_structure is bound to a "class" name. When 
the component_wrapper connects
to a sensor or an actuator it advertise this class name to Blender. 
Something like: I will send you "6dofphanom" or
"framebufferimage". Instead of having blender implementing a  
"data_structure" wrapper it would be possible to have
the data wrapper provided by the component_wrapper. For exemple at 
connexion the component_wrapper can send a python script
implementing the decoding/access to the data structure. The result is 
that the data_structure encoding/decoding is
now defined out of Blender. You can even combined the two approach by 
using a factory to select if you wrapper is implemented
and hardcoded C++ version or a python version it.
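
Here is a rough sketch of what such a factory could look like (plain
Python; everything here is hypothetical: the class names, the idea that
the wrapper ships a script defining a decode() function, and the
exec-based loading):

    # Hypothetical decoder factory: hardcoded decoders can be registered,
    # unknown class names fall back to a Python script sent by the
    # component_wrapper at connection time.
    _decoders = {}

    def register_decoder(class_name, decoder):
        # hardcoded (e.g. C++-backed) decoder for a data_structure class
        _decoders[class_name] = decoder

    def decoder_from_script(class_name, script_source):
        # build a decoder from the Python source sent by the wrapper;
        # the script is expected to define a function decode(raw_bytes)
        namespace = {}
        exec script_source in namespace    # Python 2.x, as in Blender 2.48
        _decoders[class_name] = namespace["decode"]

    def decode(class_name, raw_bytes):
        return _decoders[class_name](raw_bytes)

    # usage: the wrapper advertises "framebufferimage" and ships its decoder
    decoder_from_script("framebufferimage",
        "def decode(raw):\n    return {'width': ord(raw[0]), 'pixels': raw[1:]}")
    print decode("framebufferimage", chr(3) + "abc")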

Now, something to keep in mind is that sensors are not only about
"data_structures": they can also do complex processing. For example, in
blenderTUIO I have a sensor reacting to a condition like
"finger_is_over_object", for which you have to test intersections with
objects of the scene. This is something important to keep in mind (in my
case, but maybe not for robotics). Does collision detection have to be
exposed to the Python API?

About grabbing the Blender rendering context: this may require some
tricks (to be sure to get the last frame) but it sounds possible. I have
read that some people are reshaping the GE so that we can plug into such
events much more easily. A kind of Sensor triggered during
"on_begin_frame"/"on_end_frame" could do it.


Now, about working together: do we already have a shared repository for
source code? I can set one up, or maybe this can be requested from the
Blender Foundation.

I'm going on holiday for the next 2 weeks, so here is most of the code
(for 2.48) for blenderTUIO:
http://forge.lifl.fr/PIRVI/raw-attachment/wiki/MTUtils/blenderTUIO/sourcecode_blenderTUIO.zip
I wasn't prepared to share this source code in its current state (sorry
for the lame status of it), but I think it can be a hint about where to
look to support new sensors/actuators.


=========== Some low level details about blenderTUIO =============
In blenderTUIO I have a TUIO sensor.
A TUIOReceiver is shared by all the sensors and runs a thread receiving
UDP packets. These UDP packets contain TUIO frames; the frames are
decoded and then passed to the Sensors based on an observer/observable
pattern. Using this information, all the TUIO sensors (that are
connected to the objects) are fed with values like the position of the
user's hand in 2D or 3D space, etc.
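
For those who do not want to dig into the zip right away, here is the
general shape of that receiver as a minimal sketch (plain Python; the
names and the decode step are placeholders, the real code parses
OSC/TUIO bundles):

    # One shared UDP thread, observer pattern towards the TUIO sensors.
    import socket
    import threading

    class TUIOReceiver:
        def __init__(self, port=3333):            # 3333 is the usual TUIO port
            self.observers = []                   # the TUIO sensors
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            self.sock.bind(("", port))
            t = threading.Thread(target=self._run)
            t.setDaemon(True)                     # do not block GE shutdown
            t.start()

        def attach(self, sensor):
            self.observers.append(sensor)

        def _run(self):
            while True:
                packet, addr = self.sock.recvfrom(65535)
                frame = self._decode(packet)      # placeholder for OSC decoding
                for sensor in self.observers:     # notify every TUIO sensor
                    sensor.notify(frame)

        def _decode(self, packet):
            return packet                         # real code decodes TUIO frames here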

For a touchtable the sensor can be configured to trigger a controller if:
 - on_cursor: if a finger is on the surface, the sensor triggers its
   controllers. The controllers then have to use the Python API to
   access the sensor's values.
 - on_cursor_over: triggers the controller if a finger is over the
   object. The Python API is different as it allows returning the
   collision properties: location of the hit, normal at the hit, etc.
 - on_cursor_over_any: triggers the controller if one of the fingers is
   over any object in the scene... I don't like this one because it
   should be a scene property and not an object one.
 Remark: on_cursor_over requires the geometrical description of the
scene, while on_cursor is just a kind of measurement tool. Implementing
it properly is a bit tricky as the complete scene has to be tested for
collision with the fingers. Currently this is done for each active
on_cursor_over_* sensor, while it should be done once per scene (see the
sketch below).
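
To make that last point concrete, here is a minimal sketch of how the
finger/scene tests could be cached per logic frame and shared between
the sensors (the names and the scene.test_collisions() call are
hypothetical, this is not the current blenderTUIO implementation):

    # Hypothetical per-frame cache: the finger/scene collision test runs
    # once per scene instead of once per active on_cursor_over_* sensor.
    class FingerHitCache:
        def __init__(self, scene):
            self.scene = scene
            self.frame = -1
            self.hits = {}                 # finger id -> (object, position, normal)

        def hits_for_frame(self, frame, fingers):
            if frame != self.frame:        # first sensor of the frame pays the cost
                self.frame = frame
                self.hits = self.scene.test_collisions(fingers)  # hypothetical
            return self.hits               # later sensors reuse the cached result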

OK, so now let's have a quick look at the main GE 2.48a loop. The
description may not be very precise, but it gives a better understanding
of how/where we can "plug in".

In gameengine\Ketsji\KX_KetsjiEngine.cpp, most of the simulation loop is
in the NextFrame method. It works this way:
   - update the physics geometry,
   - update the scene geometry,
   - KX_Scene.cpp::LogicBeginFrame -> SCA_LogicManager.cpp::BeginFrame
         1) for each sensor manager, call BeginFrame (it checks, for
            each of its active sensors, whether they need to trigger;
            if so, their controllers are inserted into a "to_process"
            list),
         2) for each controller in the to_process list: run the
            controller,
   - update the scene geometry (again) to take the controller actions
     into account,
   - KX_Scene.cpp::LogicUpdateFrame -> SCA_LogicManager.cpp::UpdateFrame
         3) apply the actuators,
   - update the scene geometry and physics parameters,
   - run the physics simulation, integrate the timestep,
   - update the scene geometry
================ End of low level details ==================


Best regards,
Damien.

