[Robotics] pose mode and game engine

Herman Bruyninckx Herman.Bruyninckx at mech.kuleuven.be
Tue Jan 4 16:25:49 CET 2011


On Tue, 4 Jan 2011, David Hutto wrote:

> On Tue, Jan 4, 2011 at 10:02 AM, Herman Bruyninckx
> <Herman.Bruyninckx at mech.kuleuven.be> wrote:
>> On Tue, 4 Jan 2011, David Hutto wrote:
>>
>>> Not to butt in, but I'm looking for projects. So I was wondering if
>>> there are any plans to implement voice commands, and environmental
>>> responses from field bots in progress.
>>
>> It's always good to have some extra hands :-) So, let's see whether we can
>> find an interesting job for you to do...
>>
>> First of all, what exactly do you mean by the latter thing
>> ("environmental responses from field bots in progress")? If you mean that
>> we want to visually represent in Blender/MORSE what is going on with a real
>> robot then the answer is definitely "yes". It would be a good contribution
>> to work on how to visually represent the various pieces of information that
>> come from the real robot; e.g., what are the uncertainties that the robot
>> has about its current position in the world, or about the objects it has
>> perceived in the world? How could we represent such uncertainty? Could we
>> visually represent the current 'task' of the robot, so that the human
>> operator/developer can evaluate whether the robot is doing the task that
>> the human expects it to be doing right now? Etc.
>>
>
> This is somewhat "Mars rover-ish", meaning that the response depends on
> the onboard hardware being able to transmit in "real time". So what
> onboard hardware is expected, and what distances are involved in the process?
>
All these "platform specific parameters" are irrelevant for the work being
done on Blender. They _will_ influence whether the Blender approach is
useful or efficient for your application, but these communication
performance parameters can not be influenced by the Blender software.
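
That said, to make the idea of visually representing uncertainty (from the
quoted paragraph above) a bit more concrete: here is a minimal sketch (not
MORSE code; the object names, the diagonal covariance and the way the
estimate arrives are purely assumptions) that keeps a unit sphere in the
scene and rescales it from Blender's Python interpreter whenever a new pose
estimate comes in:

import math
import bpy  # only available inside Blender's embedded Python interpreter

def show_pose_uncertainty(robot_name, var_x, var_y, var_z, n_sigma=2.0):
    """Rescale a unit sphere so that it covers the n-sigma region of the
    robot's estimated position (a diagonal covariance is assumed)."""
    robot = bpy.data.objects[robot_name]               # the simulated robot
    ellipsoid = bpy.data.objects[robot_name + "_cov"]  # a unit sphere added for this purpose
    ellipsoid.location = robot.location                # keep it centred on the estimate
    ellipsoid.scale = (n_sigma * math.sqrt(var_x),
                       n_sigma * math.sqrt(var_y),
                       n_sigma * math.sqrt(var_z))

# e.g. called whenever the real robot sends a new state estimate:
# show_pose_uncertainty("ATRV", 0.04, 0.09, 0.0004)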

>> About voice commands: I have a very simple view on this, namely that the
>> voice input is "parsed" somewhere and transformed in "commands" that are
>> not different from any other command that Blender?MORSE can work with. In
>> other words, I do not see a reason to put a "voice controller" into
>> Blender...
>
> Voice input and output are, first of all, for the disabled; secondarily,
> they give the ability to modify things on the fly with voice commands
> that are recognized, confirmed, and then initiated. This is oriented more
> toward operations being responded to verbally... a convenience feature
> for users.

So what? Please do not mix "human interface" issues with "simulation"
issues. Both are important in a specific application, but the former should
not have to be imposed on Blender, which is, by definition, a _graphical_
program with a "scripting" command interface. Hence, all non-graphical
inputs have to be translated into graphical and/or scripting commands, and
that translation is not the responsibility of Blender/MORSE, but of the
"middleware" your application wants to put around Blender.
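
To make that concrete: a minimal sketch of such middleware, running outside
Blender, could look like the following (the phrases, the command strings and
the port number are made-up placeholders, not an existing MORSE interface).
Speech recognition happens entirely on the middleware side; only plain
textual commands cross the boundary into the simulation:

import socket

# Hypothetical mapping from recognized utterances to simulator commands.
COMMANDS = {
    "move forward": "motion.set_speed 0.5 0.0",
    "stop":         "motion.set_speed 0.0 0.0",
}

def dispatch(utterance, host="localhost", port=4000):
    """Translate one recognized utterance into a textual command and send it
    to whatever command interface the simulation exposes."""
    command = COMMANDS.get(utterance.strip().lower())
    if command is None:
        return False  # unknown phrase: the recognizer should ask the user again
    with socket.create_connection((host, port)) as sock:
        sock.sendall((command + "\n").encode("utf-8"))
    return True

# e.g. dispatch("move forward"), after the recognizer has confirmed the phrase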

Have you ever heard of "separation of concerns"? :-)
<http://en.wikipedia.org/wiki/Separation_of_concerns>

Herman

