Channel: Teehan+LaxLabs | Teehan+Lax

Insights into Kinect UI



Ryan Challinor is a programmer at Harmonix Music Systems who is best known for his gestural menu system in Dance Central for Kinect. Immersing himself further into Kinect UI, he’s currently developing a synaesthetic experience using Kinect, Ableton Live, and Quartz Composer. I had the opportunity to chat with Ryan about his work on developing UI for the Kinect.

What was the timeline for the whole project?
Dance Central was in full production for about 14 months. The primary Kinect gestural menu work took place over the course of 6 months of prototyping.

When you got the Kinect, what was your reaction to the tech?
The first time you see your skeleton on the screen, it’s pretty jaw-dropping. After you get past the initial wow factor and discover the limitations, it can be pretty disheartening. But after a while of learning how to properly exploit the technology, I think it’s tough not to become enamored with it again.

When you started designing UI for the Kinect, was it hard to wrap your head around this concept? What was the biggest hurdle you faced?
The biggest hurdle was trying to overcome existing UI paradigms. The obvious approach is to emulate mouse input, with a cursor and a click action. A cursor is easy enough to create, but it quickly becomes apparent that skeletal input is not well suited to detecting an analog to a "click" action. Once we eventually ditched the cursor, we could start thinking about new ways to control a UI, ways better suited to the data Kinect gives you.
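The "click analog" problem Ryan describes can be made concrete. A naive approach is to treat a sharp push of the hand toward the sensor (decreasing depth) as a click. The sketch below is purely illustrative, not Harmonix's code; the function name, window length, and threshold are all invented for the example:

```python
# Illustrative only: a naive "push to click" detector over hand depth
# (z) samples from a skeleton tracker. The trouble Ryan describes is
# that natural arm motion crosses thresholds like these all the time,
# producing false clicks; every value here is an assumption.

PUSH_MIN_DEPTH_CHANGE = 0.20   # metres of forward travel (assumed)
PUSH_MAX_FRAMES = 10           # window length in frames (assumed)

def detect_push(depth_samples):
    """Return True if the hand moved sharply toward the sensor."""
    if len(depth_samples) < 2 or len(depth_samples) > PUSH_MAX_FRAMES:
        return False
    # Depth decreases as the hand approaches the sensor.
    return depth_samples[0] - depth_samples[-1] >= PUSH_MIN_DEPTH_CHANGE
```

A simple distance threshold like this cannot distinguish an intentional "click" from an arm that happens to drift forward, which is one reason the team moved away from the cursor-and-click model entirely.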

Biggest challenge for designing a gestural interface?
Overcoming the mental block of how to select something without an equivalent of a mouse click or button press.

What was your first experiment with the Kinect?
Our first approach was a mouse-like interface, with giant buttons and an arm extension. It failed pretty miserably. I actually did a whole talk on the various prototypes we progressed through in developing Dance Central's UI. You'll probably find it helpful; you can find the slides here and the video here (in their little video app, click "2011", then scroll until you find my name).

With all the tests you did, what were the most common gestures that people gravitate to when using the Kinect?
In their first exposure to the technology, when people were completely unfamiliar with Kinect and before we were properly messaging how to use the UI, they tended to do a lot of pawing at the screen. There was a lot of leaning forward and reaching out awkwardly, trying to figure out what the game was looking for. We even had some people trying to touch the TV screen. Without proper guidance from the game, the first thing people tried was pushing their hand forward, and the second most frequent interaction tended to be swiping their hand around, which is what we're actually looking for in Dance Central. That made the gesture pretty discoverable for a large percentage of new players.
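A swipe is a more tractable signal than a push because it is defined by lateral travel, which is easy to separate from incidental motion. As a hedged sketch (not Dance Central's actual implementation; the thresholds and names are invented), a swipe detector over a short window of (x, y) hand positions might look like:

```python
# Illustrative sketch: classifying a window of (x, y) hand-joint samples
# (screen-space coordinates in metres, as a skeleton tracker might
# report) as a left swipe, right swipe, or nothing. All thresholds are
# assumptions chosen for the example.

SWIPE_MIN_DISTANCE = 0.35   # metres of lateral travel (assumed)
SWIPE_MAX_FRAMES = 15       # window length, ~0.5 s at 30 fps (assumed)
SWIPE_MAX_DRIFT = 0.15      # allowed vertical wobble in metres (assumed)

def detect_swipe(hand_positions):
    """Return 'left', 'right', or None for a window of (x, y) samples."""
    if len(hand_positions) < 2 or len(hand_positions) > SWIPE_MAX_FRAMES:
        return None
    xs = [p[0] for p in hand_positions]
    ys = [p[1] for p in hand_positions]
    dx = xs[-1] - xs[0]
    if abs(dx) < SWIPE_MIN_DISTANCE:
        return None          # not enough lateral travel
    if max(ys) - min(ys) > SWIPE_MAX_DRIFT:
        return None          # too much vertical wobble to count
    return 'right' if dx > 0 else 'left'
```

For example, `detect_swipe([(0.0, 1.00), (0.2, 1.02), (0.4, 1.01)])` returns `'right'`. Requiring both sustained lateral travel and low vertical drift is one way a gesture can stay discoverable (people swipe naturally) while rejecting the awkward pawing Ryan describes.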

What was the best way you found to flush out ideas?
My approach was to lay out in my head “what do we need?”, and “what do we have?”. I would prototype how you could use the data coming from Kinect (“what we have”) to control the different elements of a UI (“what we need”). Then, when something failed, I’d ask “what doesn’t work?”, and use that to whittle down the “what we have” field until I hit upon how to properly use the data to accomplish our goals.

What do you think of the Kinect mod community? Have they shown anything that piqued your interest?
I am very interested in the stuff in the Kinect mod community; I've actually recently entered the fray with Synapse. I do have a bit of a chip on my shoulder about the reactions to the mod community, though, in that people see a cool mod and say, "That's so cool, why aren't the games that cool?" Kinect mods, for the most part, are just toys that are fun to play with for five minutes. It's easy to make a toy that people have fun with for five minutes; it's much more difficult to create a really compelling experience, a product that works well out in the real world as opposed to a neat thing that works for one kid in his bedroom. The games will come; we just need to give developers time to figure out how to properly exploit the technology.

Any advice on what works well and what doesn’t work well when designing UI for the Kinect?
Don't try to figure out how to cram Kinect into an existing UI paradigm; instead, design a UI paradigm intended from the ground up to exploit the Kinect's functionality.

Do you see Kinect being used in other types of applications?
Absolutely! One look at the Kinect mod community gives you just a taste of all the different things you could do with Kinect. I'm excited every time I discover a new application of Kinect on YouTube.

Keep up with Ryan’s work on Synapse on his blog.

