[Robotics] Introduction

Séverin Lemaignan severin.lemaignan at laas.fr
Thu Aug 27 17:02:22 CEST 2009


Hello Herman, hello everyone,

I'll start with the end of your mail:
 > I also want to repeat my suggestion of two weeks ago, to organize a
 > workshop or conference around open source robotics software. I have
 > received close to zero reactions to that suggestion, which is in contrast
 > with the self-acclaimed openness and cooperation of the open source
 > community... :-(

I had started to answer you, but after a Thunderbird crash I have to 
start over. In brief, I was wondering how the RoSta project and its 
outcomes could bear on this symposium.
More precisely, regarding interoperability, it seems to me that it's 
something everybody talks about, even thinks about, but is not always 
committed to making happen, for several good reasons, including the 
work it requires. I think that was one of the conclusions of the RoSta 
project. Maybe you could go into some detail about what kind of results 
you would expect from the symposium.

Now, more specifically regarding the simulation: we are still at a very 
early design stage, more experimenting than anything else. But an open 
discussion on this topic with other teams (especially teams developing 
middleware) could be very interesting.
In this case, I see something more like a workshop than a symposium.

Now, to your remarks:

> - there is a significant difference between the game engine and the
>    simulation mode. The former is the most important one for robotics
>    (because it has the interaction possibilities with the outside world) and
>    hence we should concentrate on that.

That's an important aspect. We do actually work with the GameEngine 
because we need the physics simulation. That's even our main reason 
for choosing Blender. But I agree with you: interactions with the 
simulation are not easy in GameEngine mode, and it's a problem.
I once wrote a mail to Ton to ask whether Blender 2.5 could possibly 
allow the GameEngine to run while keeping the Blender interface 
active. It's still unclear to me, but it would be a very welcome 
improvement for us.

In the meantime, and for our use case, I see the workflow as follows:

1- You design your robot from the "normal" Blender interface, built 
from reusable components (I'll come back to this topic later) that you 
can drag and drop (imagine a component library integrated in the 
Blender UI). You add a GPS, a camera, an arm, and so on. Each 
component (sensor or actuator) can be configured.
2- You design the whole scene in the same way, importing your 
previously-made robots into the scene. Here you configure some global 
parameters (like a geo-reference for the GPS, or the middleware you'll 
be using).
3- You start the simulation by starting the GameEngine, and start 
talking to your robot through your middleware (see the sketch below).
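
To give a feel for step 3, here is a minimal Python sketch of the 
client side. The socket transport, the port number and the JSON 
message format are purely illustrative assumptions on my side; the 
real interface will be whatever the middleware layer provides.

import json
import socket

# Minimal sketch: query the simulated GPS from outside Blender.
# Transport, port and payload format are illustrative assumptions.
def read_gps(host="localhost", port=6000):
    """Ask the simulator for the current position of the GPS component."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(b'{"component": "gps", "request": "position"}\n')
        reply = sock.makefile().readline()
    return json.loads(reply)  # e.g. {"x": 4.2, "y": 1.0, "z": 0.3}

print(read_gps())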

> - Blender has a very difficult user interface for starters, and for "poor"
>    robotics engineers. So, coming up with a 'robotics panel' would be a good
>    idea, I think. The new 2.50 provides lots of opportunities for customized
>    panels.

We plan to rely heavily on this feature.
In particular, in our current approach, each component would come with 
its own configuration panel (as a specific Python file).
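
To illustrate, such a panel could look roughly like this with the 2.5 
Python API (the 2.5 API is still moving, so the registration call and 
the property syntax below are assumptions on my side):

import bpy

# Sketch of a per-component configuration panel for a GPS sensor.
# The parameters are assumed to be stored as custom properties on the
# object carrying the sensor.
class RoboticsGPSPanel(bpy.types.Panel):
    bl_label = "GPS Sensor"
    bl_space_type = 'PROPERTIES'
    bl_region_type = 'WINDOW'
    bl_context = "object"

    def draw(self, context):
        layout = self.layout
        obj = context.object
        layout.prop(obj, '["gps_frequency"]', text="Update rate (Hz)")
        layout.prop(obj, '["gps_noise"]', text="Noise std dev (m)")

bpy.utils.register_class(RoboticsGPSPanel)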

> - we all want to interface Blender with our own middleware, hence it makes
>    sense to develop together the Blender part of that interface such that it
>    is compatible with all possible external middleware projects, with just a
>    little amount of configuration.
> - this interface middleware is just half of the story: besides being able
>    to send "messages" to, and receive from, the outside world, we also have
>    to come up with a "standard" about the meaning of the data structures in
>    the messages.

That's one of the big challenges. Our current idea is to rely on Genom3 
(Herman, I think you attended the presentation at ICAR; cf. the 
attached slides): it's basically a code generator that takes as input a 
description of the services (and datatypes) your component offers, a 
set of so-called "codels" that implement the actual computation 
required to generate the sensor output or actuator actions, and a 
generic template per middleware.

With that, someone who wants to use the simulator with their own 
middleware just needs to write their own template and everything works, 
and someone who wants to add a new component just describes the 
services the component offers and writes the Python code to be executed 
in Blender to generate the data.
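
On the Blender side, the Python code for a component could be as small 
as this sketch (how the GameEngine controller invokes it and the exact 
output fields are assumptions for illustration, not Genom3's actual 
conventions):

from bge import logic  # GameEngine Python module in 2.5 (GameLogic in 2.4x)

# Sketch of the Blender-side "codel" for a GPS component: compute the
# sensor output from the simulated scene.
def gps_codel():
    """Return the simulated GPS reading of the object owning this controller."""
    robot = logic.getCurrentController().owner
    x, y, z = robot.worldPosition
    # A real component would add the geo-reference configured in the
    # scene (step 2 of the workflow) and some sensor noise.
    return {"x": x, "y": y, "z": z}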

More details on that on the wiki in a few days.

> - the game engine has already very interesting sensors for robotics, the
>    "message" and the "distance sensor" being the first ones that come to my
>    mind. Also making artificial "camera images" should be rather easy. But
>    more complex robotics sensors should be added, and we should be able to
>    share developments. For example, inertia sensors, laser range scans,
>    force and touch sensors, ...

The first components we want to implement to validate the approach are 
a GPS and a stereo head. Laser scanners are actually very close to 
cameras, since we can generate a fake laser scanner output from an 
OpenGL depth map.
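
As a sketch of that idea: read one row of the depth buffer back (e.g. 
with glReadPixels(..., GL_DEPTH_COMPONENT, GL_FLOAT, ...)), undo the 
perspective depth encoding, and convert each pixel into a range 
reading. The camera parameters (near/far clip planes, horizontal field 
of view) are the usual assumptions:

import math

def depth_row_to_ranges(depth_row, near, far, h_fov):
    """Convert one row of non-linear depth-buffer values into metric ranges."""
    width = len(depth_row)
    ranges = []
    for i, d in enumerate(depth_row):
        # Undo the perspective depth encoding to get the forward distance.
        z = (near * far) / (far - d * (far - near))
        # Angle of this pixel's ray from the optical axis.
        angle = math.atan((2.0 * i / (width - 1) - 1.0) * math.tan(h_fov / 2.0))
        # Range along the ray, not just the forward distance.
        ranges.append(z / math.cos(angle))
    return ranges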

Cheers,
Severin