TUX Member Presentation
Speakers

Haijun Xia and Seongkook Heo:
A TUX Member Presentation

2018-11-06 12:30 at DGP Lab: 40 St. George St., 5th Floor

Haijun Xia

Abstract

Supporting Direct Human-Computer Communication

From the dawn of digital computing, we have striven to communicate with computers to fully leverage their computing power. Advances in sensing technologies now enable such communication through verbal, gestural, and graphical language. Despite these many input techniques, however, conversations with computers are still structured around a fixed set of UI elements that offer little flexibility. As a result, the rich and dynamic thoughts we could articulate naturally with flexible words, gestures, and visuals must instead be formalized as structured, restrictive, rigid, and repetitive tasks around those elements. I seek to design a new interaction language that enables us to directly and flexibly articulate our creative thoughts. I approach this from two directions. First, I design new graphical representations of digital content to match our dynamic and flexible needs. Second, I invent novel interaction techniques to enable the direct articulation of user intention.

Bio

I am a PhD student advised by Prof. Daniel Wigdor in the DGP Lab at the University of Toronto. I am also a Microsoft PhD Fellow and an Adobe PhD Fellow.

My research area is Human-Computer Interaction, with a focus on creating flexible digital media to augment our creativity. I approach this from two directions: 1) I invent novel representations of abstract content to match our dynamic needs; and 2) I develop novel interaction techniques that allow us to express our thoughts and ideas through the graphical, gestural, and vocal communication we are all naturally capable of. For more information, please visit www.haijunxia.com.


Seongkook Heo

Abstract

Expanding Touch Interaction Bandwidth by Making Computers Feel Our Touch and Be Felt

Our natural touch is rich, nuanced, and full of physical properties, such as force and posture, that convey our intentions. When we manipulate physical objects, we also sense the state of the object and adjust our posture or force based on what we feel through our fingers. This rich physical interaction enables eyes-free, skillful object manipulation. Most touch interfaces, however, ignore this rich source of information: they register only the contact location of a finger and offer no physical reaction to our touch. This limited input and output bandwidth often necessitates multiple input modes and many buttons, and makes eyes-free interaction challenging. In this talk, I will introduce projects that my colleagues and I have worked on to enrich our touch interaction with computers by utilizing previously unused physical properties of touch. I will discuss how we can make computers sense more from our touch and use that information to enable richer interaction, as well as haptic feedback methods that allow virtual content to be felt.

Bio

Seongkook is a postdoctoral fellow in the DGP Lab at the University of Toronto, working with Prof. Daniel Wigdor. He received his Ph.D. in Computer Science from KAIST in 2017, under the supervision of Prof. Geehyuk Lee. He is interested in making communication between humans and computers richer and more natural through better use of new input and output modalities. His work has been published in premier conference proceedings and journals, such as CHI, UIST, CSCW, and IEEE Transactions on Haptics. (Learn more at: http://seongkookheo.com)