Sanders Series Lecture
Yang Li: Enabling New Input Dimensions for Mobile Interaction
2016-04-05 12:30 at MaRS Auditorium
The limited interaction bandwidth of existing mobile user interfaces is incompatible with the rapidly growing computing power of mobile and wearable devices. To address this problem, it is important to explore new interaction dimensions that can utilize the rich sensing capabilities of these devices as well as their seamless integration into our everyday activities. In this talk, I will first describe how we can significantly reduce user effort in mobile interaction, at scale, by leveraging gestural input. I will then describe how new tools and frameworks can empower developers to leverage new input dimensions such as gestural, cross-device and contextual input. Drawing on these systems, I will discuss how these input dimensions, though natural to the user, deeply challenge traditional interactive computing, and how we can address this challenge by providing high-level tool support.
Yang Li, PhD
Yang Li is a Senior Research Scientist in Human-Computer Interaction and Mobile Computing at Google, where he leads the Predictive User Interfaces group. He is also an affiliate faculty member in Computer Science & Engineering at the University of Washington. He earned a Ph.D. in Computer Science from the Chinese Academy of Sciences and conducted postdoctoral research in EECS at the University of California, Berkeley. He has published over 50 papers in the field of Human-Computer Interaction, including 29 publications at CHI, UIST and TOCHI. He has regularly served on the program committees of top-tier HCI and mobile computing conferences.
Yang’s research focuses on novel tools and methods for creating mobile interaction behaviors, particularly regarding emerging input modalities (such as gestures and cameras), cross-device interaction and predictive user interfaces. He wrote Gesture Search, a popular Android app for random access of mobile content using gestures. He develops software tool support and recognition methods by drawing insights from user behaviors, leveraging techniques such as machine learning, computer vision and crowdsourcing to make complex tasks simple and intuitive.