GI 2017 Paper: Ivy

Also at GI this year will be another project that I was part of while at Autodesk Research. Barrett Ens, who was interning with Fraser (Anderson) and Tovi Grossman, had a keen interest in VR and 3D user interfaces and a really interesting idea: to explore the next generation of programming environments, based in VR and powered by Internet of Things devices and activities. Thus, Ivy was born! Ivy explored how intelligent, aware environments could be programmed in-situ, within VR representations of the target environment (or possibly, in the future, AR representations of it). The project not only presented possible programming constructs and corresponding visualizations that would be useful for programmers of such spaces, but also explored how to integrate and represent real-world data and breakpoints in a manner appropriate for spatially-situated environments. Aside from loving the name, I personally really appreciated seeing data flowing from sensors to other machines or equipment. The simple act of perceiving data as moving entities really brings the notion of such programming paradigms to life. The paper, Ivy: Exploring Spatially Situated Visual Programming for Authoring and Understanding Intelligent Environments, will be presented by Barrett and was also co-authored by Pourang Irani (University of Manitoba) and George Fitzmaurice (Autodesk Research).


The availability of embedded, digital systems has led to a multitude of interconnected sensors and actuators being distributed among smart objects and built environments. Programming and understanding the behaviors of such systems can be challenging given their inherent spatial nature. To explore how spatial and contextual information can facilitate the authoring of intelligent environments, we introduce Ivy, a spatially situated visual programming tool using immersive virtual reality. Ivy allows users to link smart objects, insert logic constructs, and visualize real-time data flows between real-world sensors and actuators. Initial feedback sessions show that participants of varying skill levels can successfully author and debug programs in example scenarios.

GI 2017 Paper: No Handed Interaction

This past year I had the pleasure of working with Seongkook Heo, while he was an intern at Autodesk Research, on quite a cool input and interaction techniques project. The project focused on analyzing and developing an understanding of the situational factors that can constrain our opportunities for input with smartwatches, and then used this knowledge (and the resulting taxonomy) to ideate on ways that we can utilize other body parts or actions to re-enable such input. From 3D printing fake hands to reading through participant comments from Mechanical Turk about Seongkook kneading dough, the project was a very interesting exploration of input opportunities and ended up being a lot of fun. The paper, No Need to Stop What You’re Doing: Exploring No-Handed Smartwatch Interaction, was also co-authored by Ben Lafreniere, Tovi Grossman, and George Fitzmaurice from Autodesk Research, and will be presented at GI in May.


Smartwatches have the potential to enable quick micro-interactions throughout daily life. However, because they require both hands to operate, their full potential is constrained, particularly in situations where the user is actively performing a task with their hands. We investigate the space of no-handed interaction with smartwatches in scenarios where one or both hands are not free. Specifically, we present a taxonomy of scenarios in which standard touchscreen interaction with smartwatches is not possible, and discuss the key constraints that limit such interaction. We then implement a set of interaction techniques and evaluate them via two user studies: one where participants viewed video clips of the techniques and another where participants used the techniques in simulated hand-constrained scenarios. Our results found a preference for foot-based interaction and reveal novel design considerations to be mindful of when designing for no-handed smartwatch interaction scenarios.

alt. CHI 2017 Paper: Machines as Co-Designers

This year I was fortunate enough to collaborate with Jeeeun Kim and Tom Yeh (from the University of Colorado) and Haruki Takahashi and Homei Miyashita (from Meiji University) on a rather interesting alt.CHI paper. The work, entitled “Machines as Co-Designers: A Fiction on the Future of Human-Fabrication Machine Interaction”, draws attention to the ways in which current fabrication practices do not facilitate the serendipitous, in-situ creative discoveries that occur during traditional craft practices. For me, this project and the accompanying alt.CHI review process were very illuminating (I highly recommend that anyone who has not submitted an alt.CHI paper and experienced the nervousness that comes from reading the community’s reviews of their work every day do so – it’s a great learning experience). The full paper will be presented at CHI 2017 and I will link to it after it has been published. Until then, here is the abstract!

While current fabrication technologies have led to a wealth of techniques to create physical artifacts of virtual designs, they require unidirectional and constraining interaction workflows. Instead of acting as intelligent agents that support humans’ natural tendencies to iteratively refine ideas and experiment, today’s fabrication machines function as output devices. In this work, we argue that fabrication machines and tools should be thought of as live collaborators to aid in-situ creativity, adapting to physical dynamics that come from unique materiality and/or machine-specific parameters. Through a series of design narratives, we explore Human-FabMachine Interaction (HFI), a novel viewpoint from which to reflect on the importance of (i) interleaved design thinking and refinement during fabrication, (ii) enriched methods of interaction with fabrication machines regardless of skill level, and (iii) concurrent human and machine interaction.