Woo! Next month, I will be going to Brisbane, Australia to present work that was done last summer in the DGP Lab by myself, Matthew Lakier, and Mingzhe (Franklin) Li, about Haunted User Interfaces. We were interested in developing new ways that information could be conveyed to users in a household setting and drew on ideas from haunted and paranormal phenomena to do so.
Our animatronic moose, built from LEGO and servo motors!
Along with a number of prototypes, we also ran a Mechanical Turk study to gather information about the objects people have in their living rooms and how they interact with (or, as it turned out, ignore) these objects. We also synthesized the survey results, prototypes, and construction lessons into a Haunted Design Framework that can be used to develop or re-imagine interfaces for the home.
A quick video illustrating some of the ideas and prototypes:
Abstract: Within this work, a novel metaphor, haunted design, is explored to challenge the definitions of ‘display’ used today. Haunted design draws inspiration and vision from some of the most multi-modal and sensory-diverse experiences that have been reported, the paranormal and hauntings. By synthesizing and deconstructing such phenomena, four novel opportunities to direct display design were uncovered, i.e., intensity, familiarity, tangibility, and shareability. A large-scale design probe, The Living Room, guided the ideation and prototyping of design concepts that exemplify facets of haunted design. By combining the opportunities, design concepts, and survey responses, a framework highlighting the importance of objects, their behavior, and the resulting phenomena to haunted design was developed. Given its emphasis on the odd and unusual, the haunted design metaphor should greatly spur conversation and alternative directions for future display-based user experiences.
This year at UIST (November 2015), I will be fortunate enough to give two presentations. The first is on the unintended touch ToCHI work that I did at Microsoft Research and the second is on a new project that I undertook while at Autodesk Research and the DGP Lab at the University of Toronto. As I am a papercrafter and love using my Silhouette machine, back in January / February I began working on a small idea to create a unique menu for my wedding. The end result was MoveableMaker (and a number of menus!), a novel software application that automates the creation of interactive, moveable papercraft. More wedding details will soon follow (and I can now talk about it) and the UIST publication will be posted as it becomes available.
Wedding Menu pre-MoveableMaker
Wedding Menu post-MoveableMaker (everything is much cleaner and required much less effort)
In this work, we explore moveables, i.e., interactive papercraft that harness user interaction to generate visual effects. First, we present a survey of children’s books that captured the state of the art of moveables. The results of this survey were synthesized into a moveable taxonomy and informed MoveableMaker, a new tool to assist users in designing, generating, and assembling moveable papercraft. MoveableMaker supports the creation and customization of a number of moveable effects and employs moveable-specific features including animated tooltips, automatic instruction generation, constraint-based rendering, techniques to reduce material waste, and so on. To understand how MoveableMaker encourages creativity and enhances the workflow when creating moveables, a series of exploratory workshops were conducted. The results of these explorations, including the content participants created and their impressions, are discussed, along with avenues for future research involving moveables.
I am very happy to announce that the last work from my Dissertation is set to be published at Graphics Interface this year. The publication, “Hands, Hover, and Nibs: Understanding Stylus Accuracy on Tablets”, reports on a two-stage user study, conducted at Microsoft Research during my extended internship, that evaluated the influence hand posture, the information available from the hover cursor, and nib diameter have on the user experience while inking. Walter assisted me with the analysis and discussion sections of the work. Once the full paper is available, I will provide a link to it, but for now the abstract is below.
Although tablets and styli have become pervasive, styli have not seen widespread adoption for precise input tasks such as annotation, note-taking, algebra, and so on. While many have identified that stylus accuracy is a problem, there is still much unknown about how the user and the stylus itself influence accuracy. The present work identifies a multitude of factors relating to the user, the stylus, and tablet hardware that impact the inaccuracy experienced today. Further, we report on a two-part user study that evaluated the interplay between the motor and visual systems (i.e., hand posture and visual feedback) and an increasingly important feature of the stylus, the nib diameter. The results determined that the presence of visual feedback and the dimensions of the stylus nib are crucial to the accuracy attained and pressure exerted with the stylus. The ability to rest one’s hand on the screen, while providing comfort and support, was found to have surprisingly little influence on accuracy.
Two days before my PhD defense, I found out that a manuscript that I submitted to ToCHI on unintended touch (aka palm rejection) was accepted for publication! The manuscript, entitled “Exploring and Understanding Unintended Touch during Direct Pen Interaction” (previously titled “Is it Intended or Unintended? Palm Rejection during Direct Pen Interaction”), details a data collection experiment and an algorithmic analysis of various possible solutions to unintended touch on tablets. The work was part of the larger collection of pen-based work that I performed while I was an intern at Microsoft Research (yippee! for publication #4 from my MSR time).
An example of unintended touch information from the perspective of a digitizer. The current stylus location is denoted in blue and the unintentional touch events from the palm and little finger are denoted in varying shades of orange.
The manuscript will not be published until December 2014, so here is the abstract:
The user experience on tablets that support both touch and styli is less than ideal, due in large part to the problem of unintended touch or palm rejection. Devices are often unable to distinguish between intended touch, i.e., interaction on the screen intended for action, and unintended touch, i.e., incidental interaction from the palm, forearm, or fingers. This often results in stray ink strokes and accidental navigation, frustrating users. We present a data collection experiment where participants performed inking tasks, and where natural tablet and stylus behaviors were observed and analyzed from both digitizer and behavioral perspectives. An analysis and comparison of novel and existing unintended touch algorithms revealed that the use of stylus information can greatly reduce unintended touch. Our analysis also revealed many natural stylus behaviors that influence unintended touch, underscoring the importance of application and ecosystem demands, and providing many avenues for future research and technological advancement.
Rounding out today’s news is another Graphics Interface 2014 publication that I have forthcoming. This publication, “The Pen Is Mightier: Understanding Stylus Behaviour While Inking on Tablets”, reports on a user study that was conducted at Microsoft Research during my extended internship. The study investigated the differences in hand posture, hand movements, writing size, and user preferences while participants performed note-taking and sketching tasks using traditional pen and paper, a digital tablet with a passive stylus, and a digital tablet. Dr. Anoop Gupta served as my Microsoft mentor during the project, and Fraser and Walter assisted me with the analysis and discussion sections of the work. Once the full paper is available, I will provide a link to it.
Abstract: Although pens and paper are pervasive in the analog world, their digital counterparts, styli and tablets, have yet to achieve the same adoption and frequency of use. To date, little research has identified why inking experiences differ so greatly between analog and digital media or quantified the varied experiences that exist with stylus-enabled tablets. By observing quantitative and behavioural data in addition to querying preferential opinions, our experimentation reaffirmed the significance of accuracy, latency, and unintended touch, whilst uncovering the importance of friction, aesthetics, and stroke beautification to users. The observed participant behaviour and recommended tangible goals should enhance the development and evaluation of future systems.