[Robotics] pose mode and game engine

Herman Bruyninckx Herman.Bruyninckx at mech.kuleuven.be
Tue Jan 4 16:02:27 CET 2011


On Tue, 4 Jan 2011, David Hutto wrote:

> Not to butt in, but I'm looking for projects. So I was wondering if
> there are any plans to implement voice commands, and environmental
> responses from field bots in progress.

It's always good to have some extra hands :-) So, let's see whether we can
find an interesting job for you to do...

First of all, what do you mean exactly by the latter
("environmental responses from field bots in progress")? If you mean that
we want to visually represent in Blender/MORSE what is going on with a real
robot, then the answer is definitely "yes". It would be a good contribution
to work on how to visually represent the various pieces of information that
come from the real robot; e.g., what uncertainty does the robot have about
its current position in the world, or about the objects it perceives in the
world? How could we represent such uncertainty? Could we visually represent
the current 'task' of the robot, so that the human operator/developer can
check whether the robot is doing the task that the human expects it to be
doing right now? Etc.
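For example, the position uncertainty could be drawn as an ellipsoid
derived from the robot's 3x3 position covariance. A minimal sketch in
plain Python/numpy (the covariance values and the helper name are only
illustrative, not part of MORSE):

import numpy as np

def covariance_to_ellipsoid(cov, n_sigma=2.0):
    """Turn a 3x3 position covariance into ellipsoid parameters.

    Returns (scale, rotation): the half-axis lengths (n_sigma standard
    deviations along each principal direction) and the 3x3 rotation
    matrix that orients the ellipsoid, ready to be applied to some
    placeholder mesh in Blender/MORSE.
    """
    # Eigen-decomposition of the (symmetric) covariance: the eigenvalues
    # are the variances along the principal axes, the eigenvectors give
    # the orientation of the ellipsoid.
    eigvals, eigvecs = np.linalg.eigh(cov)
    scale = n_sigma * np.sqrt(np.maximum(eigvals, 0.0))
    return scale, eigvecs

# Example: a robot that is well localised in x but poorly in y.
cov = np.array([[0.01, 0.00, 0.00],
                [0.00, 0.25, 0.00],
                [0.00, 0.00, 0.04]])
scale, rot = covariance_to_ellipsoid(cov)
print(scale)   # half-axis lengths of the uncertainty ellipsoid

In Blender this could simply drive the scale and orientation of a
semi-transparent placeholder sphere every time a new pose estimate
comes in from the real robot.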

About voice commands: I have a very simple view on this, namely that the
voice input is "parsed" somewhere and transformed into "commands" that are
no different from any other command that Blender/MORSE can work with. In
other words, I do not see a reason to put a "voice controller" into
Blender...
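To make that concrete: the voice front-end can live completely outside of
Blender and emit the same plain commands as any other client. A rough
sketch (the phrase table, the port number and the message format here are
assumptions for illustration, not the actual MORSE middleware protocol):

import socket

# Hypothetical mapping from recognised phrases to ordinary motion
# commands; the command strings are placeholders, not the real MORSE
# message format.
PHRASES = {
    "stop":       "v=0.0, w=0.0",
    "go forward": "v=0.5, w=0.0",
    "turn left":  "v=0.0, w=0.5",
}

def send_voice_command(phrase, host="localhost", port=60000):
    """Translate a recognised phrase and send it like any other command."""
    command = PHRASES.get(phrase.strip().lower())
    if command is None:
        return  # unknown phrase: simply ignore it
    with socket.create_connection((host, port)) as sock:
        sock.sendall((command + "\n").encode("utf-8"))

# e.g. with the output of whatever speech recogniser is used:
send_voice_command("go forward")

Whatever speech recogniser is used, its output ends up as an ordinary
command on the same channel as every other client, so nothing
voice-specific has to live inside Blender itself.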

Herman

