I just got back from the conference on Tangible, Embedded and Embodied Interaction (the E in TEI apparently exists in a probabilistic state between “embedded” and “embodied”). I’ve always thought this conference had a clear relevance to people interested in innovative game controllers, but this year they had a session entirely around Games and Narrative. I also did a “Bizarro Game Controllers” workshop with another former UCI student, Eric Kabisch. A post on that will go up soon, once I’ve dealt with the remaining video.
A conference on Tangibles will approach games largely from the angle of doing interesting things with game controllers, and that’s exactly what we saw in three out of four papers. Karen and Josh Tanenbaum’s paper was the last in the session (and one of the best at the conference); it dealt elegantly with both tangible objects and narrative, making the two entirely integral to one another.
Luc Geurts (from GROUP T in Brussels) presented Digital Games for Physical Therapy: Fulfilling the Need for Calibration and Adaptation. They were concerned with the fact that people with motor disabilities have trouble with “regular” controllers, and thought to incorporate games into their normally “boring” therapy exercises in order to motivate progress and make the whole experience more pleasurable. The key here was calibration: each patient’s body is affected in different ways and to different degrees, so they worked with therapists to measure each player’s capabilities and calibrate the games accordingly. The four games focused on balance, straightening the arm and reaching, head position and neck muscle control, and lifting the leg. Both therapists and patients were pretty pleased with the results of incorporating these games into therapy sessions.
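The core calibration idea, as I understood it, can be sketched in a few lines. This is purely my own illustration, not the authors’ implementation; the function names, the safety margin, and the sample values are all assumptions:

```python
# Hypothetical sketch of per-patient calibration: a therapist records sample
# readings of the patient's comfortable movement (e.g. arm-reach distance in cm),
# and the game maps raw sensor readings into the normalized 0..1 input it expects.
# Names, margin, and numbers are my own assumptions, not from the paper.

def calibrate(samples):
    """Derive a patient's usable movement range from therapist-guided samples."""
    lo, hi = min(samples), max(samples)
    margin = 0.05 * (hi - lo)  # small margin so the extremes stay comfortably reachable
    return lo + margin, hi - margin

def to_game_input(raw, lo, hi):
    """Map a raw sensor reading into the 0..1 range, clamped at the ends."""
    if hi <= lo:
        return 0.0
    return max(0.0, min(1.0, (raw - lo) / (hi - lo)))

# Example: a patient whose comfortable reach spans roughly 12..48 cm
lo, hi = calibrate([14, 12, 47, 48, 30])
print(round(to_game_input(30, lo, hi), 2))  # → 0.5, the midpoint of their range
```

The point of the mapping is that two patients with very different ranges of motion both experience the full span of the game’s input, which is what lets the same game serve very different bodies.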
I think it’s cool work but I had two concerns, one practical and one theoretical. Practically: some of the patients in the study had Multiple Sclerosis, which is degenerative. Videogames are pretty much predicated on starting easy and having the player’s performance improve over time, and they did not redesign this trait in their games. So to a patient with a degenerative disease, this seems like an exercise in frustration. They did point out that it’s important to monitor the sudden drops in performance that MS causes, and that the game helped them track this; still, a game built around improvement seems like a needlessly frustrating way to track decline. My theoretical issue is that focusing entirely on therapy tends to reduce disabled people to their disability alone. What if you have a motor disability and you just want to have some fun playing Assassin’s Creed? (OK, I’m being a little mean here because you have to scope your paper somehow, and therapy is a fine way to scope it, but looking at the body of work dealing with both HCI and disability… well…)
David Robert (from MIT) presented Exploring Mixed Reality Robot Gaming. There’s not a lot of fancy theory to this one, just a pretty cool gaming setup. I can explain this with one picture:
Ali Mazalek and Michael Nitsche (from Georgia Tech) presented I’m in the Game: Embodied Puppet Interface Improves Avatar Control. This work was very grounded in cog psych, specifically “common coding theory”, which emphasizes the cognitive and neurological links between action, perception and imagination of movement. This has certain implications for the design of control interfaces. So they built this wearable puppet-thing:
They then ran a test of people using this interface to rotate 3D objects (mental rotation is a cognitively difficult task). Their puppet controller beat out conventional game controllers and keyboards, even though participants all had far more experience with the latter two interfaces.
An interesting note: the speaker pointed out that the Kinect is experienced more like a mirror, which is cognitively quite different from projecting your body onto a puppet.
Lastly, Josh Tanenbaum (from Simon Fraser University) presented work that he and Karen Tanenbaum had done on their Reading Glove project. Here they use the metaphor of psychometry (gaining info about the past or future by touching objects or people) to design a system of objects that are gateways into a fictional world. There are a few other papers on this project, so here they focused mostly on the theoretical implications of the work, and some of the challenges of multi-dimensional non-linear interactive narrative. Each object in the system, then, is a point of negotiation and of differing perspectives, and the reader/player is meant to identify with the main character by holding the objects that he’s held. They made a really interesting point about the Heideggerian/phenomenological distinction between present-at-hand and ready-to-hand: it works great for functional objects, but less well for the semantics and meaning of objects. They attempted to make their objects “present-at-mind”, as if you were considering the wear on the handle of your hammer and what it might say about the person who’d owned it before you. I do think Heidegger addressed the links to others that are embedded in man-made objects, somewhere in Being and Time, but the specifics escape me until I find the time to wade through that tome again.
Also, I should note that Josh, Karen, and Allen Bevans won the student design competition which, this year, was to design a superhero costume. Here’s Josh in his costume, with another winner, Axon (I mean, Koen Beljaars from Eindhoven). The costume had EXPANDABLE WINGS, and it got repeatedly delayed in customs because the chest-mounted vacuum-tube clock screamed “suicide bomber”. It was pretty cool.