News and status updates (Winter, 2018)

  • I am on leave from the university, and I am not taking on new students.

  • Prairie Rose Goodwin successfully defended her Ph.D. dissertation, titled Error Recovery Microstrategies in a Touch Screen Environment.

  • Recent publication: Ward, J. L., St. Amant, R., and Fields, M. A. (2017). Spatial relationships and fuzzy methods: Experimentation and modeling. Proceedings of ICCM.

  • Recent publication: Horton, T. E., and St. Amant, R. (2017). A Partial Contour Similarity-Based Approach to Visual Affordances in Habile Agents. IEEE Transactions on Cognitive and Developmental Systems.

  • Recent publication: Chen, Z., Healey, C. G., and St. Amant, R. (2017). Performance characteristics of a camera-based tangible input device for manipulation of 3D information. Proceedings of Graphics Interface.


This is a brief summary of current research in my lab. Images for the different research areas are linked to representative videos.

Embodied cognitive models

[Image: CogTool experiment results]

At the intersection of computer science and psychology we find embodied cognitive models: computational simulations of human performance on specific tasks. Projects in my lab include modeling of vision, gesture, and interaction with mobile devices, based on existing and novel cognitive architectures; we are also exploring brain-computer interfaces.

Accessibility and intelligent user interfaces

[Image: CAVIAR device]

Techniques from artificial intelligence and related areas hold great promise for improving interactive systems. One project in my lab, TIKISI (Touch It, Key It, Speak It), helps blind users interact with graphical information such as maps; a past project, CAVIAR, used a specialized wristband and computer vision algorithms running on a mobile phone to guide a blind person's hand toward specific objects. Other work focuses on novel interaction techniques.

Tool-based user interfaces

[Image: Augmented reality in a cube]

Tool use is a hallmark of intelligent behavior, but current interactive systems do not fully exploit our tool-use abilities. A project called CAPTIVE began in the summer of 2013, in collaboration with Jae Yeol Lee at Chonnam National University, Korea. CAPTIVE is an augmented reality/tangible user interface system for working with 3D information: the user holds a physical cube and watches it through a display with a camera mounted on the back (a stereo configuration is in progress), seeing virtual objects that track the cube's movement. We have also built a tool-based user interface for managing documents on a large touch surface.


Past students