[Bf-committers] Blender At Your Fingertips: Prototype Unistroke Commands for Tablets

Jason Wilkins jason.a.wilkins at gmail.com
Mon Nov 19 05:57:10 CET 2012


I think I came off a bit short in my response, so I thought I'd make a
better case for what I'm thinking.  Of course it is my fault if
anybody misses the point, so thank you for your feedback.

The basic (easy) multitouch gestures are:

1, 2, 3, 4 finger N-tap
1, 2, 3, 4 finger drag
2 finger pinch

You can use contextual clues such as proximity and orientation to
change the exact meaning of these gestures.

You could introduce the pinky finger, or use both hands, or even the
palm, and that would be fine if we were building a piano application.

Multitouch is good for spatial operations (this includes single-touch
operations with one finger).  However, single-touch glyphs are good
for activating commands that do not have a spatial aspect to them
(although I intend to explore the idea of using some spatial
information from glyphs when activating commands).

Single-touch glyph gestures are not replaced by multitouch gestures;
they complement them by providing a way to access commands that are
not spatially oriented and would otherwise be buried in some menu, sit
on some button that takes up space, or require attaching a keyboard
(or some other interface like speech-to-command).

It should be possible to make a few dozen single-touch stroke
gestures; binding them to non-spatial commands would reduce the need
to add buttons.
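
As a rough sketch of what I mean (everything here is hypothetical, not
an existing Blender API; the operator names are just examples), the
binding could be little more than a table that the window manager
consults whenever the recognizer reports a glyph, much like a keymap
entry:

    # Hypothetical binding table from recognized glyph names to
    # operator idnames; nothing here exists yet, it only shows the
    # shape of the idea.
    glyph_bindings = {
        "circle": "ed.undo",
        "zigzag": "screen.redo_last",
        "vee":    "object.shade_smooth",
    }

    def on_glyph_recognized(glyph_name):
        import bpy
        idname = glyph_bindings.get(glyph_name)
        if idname is None:
            return  # a valid glyph, but nothing is bound to it
        category, _, op = idname.partition(".")
        # invoke the bound operator the same way a key press would
        getattr(getattr(bpy.ops, category), op)('INVOKE_DEFAULT')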

Even though the interface in that video was a toy, I could imagine
hiding the shape toolbar and drawing the shapes you wanted instead of
selecting them.  Using the size and position of the drawn single-touch
glyph, you could even determine an initial position and size for the
new shapes.
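
A minimal sketch of extracting that placement from the stroke itself,
assuming the recognizer hands back the raw points as (x, y) pairs in
region space (the helper name is made up):

    def glyph_placement(stroke):
        # The centroid of the drawn points gives an initial position;
        # the larger side of the bounding box gives an initial size.
        xs = [x for x, _ in stroke]
        ys = [y for _, y in stroke]
        center = (sum(xs) / len(xs), sum(ys) / len(ys))
        size = max(max(xs) - min(xs), max(ys) - min(ys))
        return center, size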

Glyphs and gestures are somewhat different things.  I'm emphasizing
the glyph-drawing aspect of touch, which is most intuitive with a
single touch.  But in the future I intend to explore the possibility
of multitouch glyphs.  I think multitouch glyphs would allow for fast
command entry, and multisegmented multitouch glyphs (multiple
"letters") would allow for really complicated command entry and would
also be easy to communicate by using visual representations of the
glyphs.  If that is unclear, think of how the '+' sign can be thought
of as a single-touch glyph, while '#' could be the same motion made
with two fingers.  Then '##' could be used to indicate in writing that
you do that motion twice.

I hope that makes it clear that I'm thinking of how single and
multitouch can be used to enter glyph-like commands and not space-like
gestures, so it isn't anachronistic at all.



On Sun, Nov 18, 2012 at 11:17 AM, Harley Acheson
<harley.acheson at gmail.com> wrote:
> These simple stroke gestures, like we had years ago, now seem so
> anachronistic.  They harken back to a time when we could only track a single
> point of contact from the mouse.  In the video every gesture-drawing step
> looked so unnecessary and time-wasting.
>
> All tablets today support multi-touch interfaces, so there is no longer a
> need to draw a symbol that indicates the action you wish to take next.
> Instead we want direct interaction with the objects.
>
> The following YouTube video is an example of using multi-touch gestures for
> manipulating 3D objects.
>
> http://www.youtube.com/watch?v=6xIK07AhJjc
>
>
> On Sun, Nov 18, 2012 at 6:03 AM, Jason Wilkins <jason.a.wilkins at gmail.com> wrote:
>
>> More details about the video and the prototype.
>>
>> The recognizer used in the video is very simple to implement and
>> understand.  It is called $1 (One Dollar) and was developed at the
>> University of Washington [1].  We recently had a seminar about
>> interfaces for children where extensions to $1 were presented, and I
>> was inspired by their simplicity because it meant I could just jump
>> right in.  It works OK and is good enough for research purposes.
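>>
>> For anyone curious, here is a condensed sketch of the $1 pipeline as
>> the paper describes it (resample, rotate, scale, translate, then
>> score against stored templates).  This is the textbook form, not the
>> prototype's actual code, and it leaves out the resampling step and
>> the rotation search used for the final score:
>>
>>     import math
>>
>>     def centroid(pts):
>>         return (sum(x for x, _ in pts) / len(pts),
>>                 sum(y for _, y in pts) / len(pts))
>>
>>     def normalize(pts, size=250.0):
>>         # assumes pts was already resampled to a fixed point count
>>         cx, cy = centroid(pts)
>>         # rotate so the angle from centroid to first point is zero
>>         theta = math.atan2(pts[0][1] - cy, pts[0][0] - cx)
>>         c, s = math.cos(-theta), math.sin(-theta)
>>         pts = [((x - cx) * c - (y - cy) * s,
>>                 (x - cx) * s + (y - cy) * c) for x, y in pts]
>>         # scale the bounding box to a reference square
>>         w = max(x for x, _ in pts) - min(x for x, _ in pts)
>>         h = max(y for _, y in pts) - min(y for _, y in pts)
>>         pts = [(x * size / w, y * size / h) for x, y in pts]
>>         # translate the centroid to the origin
>>         cx, cy = centroid(pts)
>>         return [(x - cx, y - cy) for x, y in pts]
>>
>>     def score(candidate, template):
>>         # mean point-to-point distance; smaller is a closer match
>>         return sum(math.dist(a, b) for a, b in
>>                    zip(candidate, template)) / len(candidate)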
>>
>> One thing $1 does not do is input segmentation.  That means it cannot
>> tell you how to split the input stream into chunks for individual
>> recognition.  What I'm doing right now is segmenting by velocity.  If
>> the cursor stops for 1/4 of a second then I attempt to match the
>> input.  This worked great for mice but not at all for pens due to
>> noise, so instead of requiring the cursor to stop I just require it to
>> slow down a lot.  I'm experimenting with lots of different ideas for
>> rejecting bad input.  I'm leaning towards a multi-modal approach where
>> every symbol has its own separate criteria instead of attempting a
>> one-size-fits-all approach.
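>>
>> Roughly what that velocity gate looks like (the constants and names
>> are only illustrative, not the prototype's actual values):
>>
>>     import math
>>
>>     SLOW_SPEED = 40.0   # pixels per second; "slowed down a lot"
>>     DWELL_TIME = 0.25   # seconds the cursor must stay slow
>>
>>     def should_segment(samples):
>>         """samples: list of (x, y, t) tuples for the current input."""
>>         now = samples[-1][2]
>>         if now - samples[0][2] < DWELL_TIME:
>>             return False  # not enough history yet
>>         recent = [s for s in samples if now - s[2] <= DWELL_TIME]
>>         for (x0, y0, t0), (x1, y1, t1) in zip(recent, recent[1:]):
>>             speed = math.hypot(x1 - x0, y1 - y0) / max(t1 - t0, 1e-6)
>>             if speed > SLOW_SPEED:
>>                 return False  # still moving; keep accumulating
>>         return True  # the cursor dwelled; try to match this chunk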
>>
>> The recognizer is driven by the window manager and does not require a
>> large number of changes to capture the information it needs.
>> Different recognizers could be plugged into the interface.
>>
>> The "afterglow" overlay is intended to give important feedback about
>> how well the user is entering commands and to help them learn.  The
>> afterglow gives an indication that a command was successfully entered
>> (although I haven't disabled the display of valid but unbound gestures
>> yet).  The afterglow morphs into the template shape, both to give the
>> user a clearer idea of what the gesture was and to help them fix any
>> problems with their form.
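>>
>> The morph itself can be as simple as blending the drawn points
>> toward the matched template's points over the afterglow animation
>> (a sketch, assuming both share the same resampled point count):
>>
>>     def morph(stroke, template, t):
>>         # t goes from 0.0 (raw stroke) to 1.0 (template shape)
>>         return [(sx + (tx - sx) * t, sy + (ty - sy) * t)
>>                 for (sx, sy), (tx, ty) in zip(stroke, template)]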
>>
>> In the future I want to use information about the gesture itself, such
>> as its size and centroid, to drive any operator that is called.  For
>> example, drawing a circle on an object might stamp it with a texture
>> whose position and size were determined by the size and position of
>> the circle.
>>
>> Additionally I want to create a new window region type for managing,
>> training, and using gestures.  That might be doable as an add-on.
>>
>> [1] https://depts.washington.edu/aimgroup/proj/dollar/
>>
>>
>> On Sun, Nov 18, 2012 at 7:42 AM, Jason Wilkins
>> <jason.a.wilkins at gmail.com> wrote:
>> > I've been exploring some research ideas (for university) and using
>> > Blender to prototype them.  I made a short video that demonstrates
>> > what I was able to do the last couple of days.  I'm starting to create
>> > a general framework for sketch recognition in Blender.
>> >
>> > http://youtu.be/IeNjNbTz4CI
>> >
>> > The goal is an interface that could work without a keyboard or most
>> > buttons.  I think a Blender with gestures is far more like Blender
>> > than a Blender that is plastered with big buttons so it works on a
>> > tablet.  It puts everything at your fingertips.
