News and status updates (Summer 2017)

  • I am on leave from the university for the calendar year 2017.

  • Recent publication: Ward, J. L., St. Amant, R., and Fields, M. A. (2017). Spatial relationships and fuzzy methods: Experimentation and modeling. Proceedings of ICCM.

  • Recent publication: Horton, T. E., and St. Amant, R. (2017). A Partial Contour Similarity-Based Approach to Visual Affordances in Habile Agents. IEEE Transactions on Cognitive and Developmental Systems.

  • Recent publication: Chen, Z., Healey, C. G., and St. Amant, R. (2017). Performance characteristics of a camera-based tangible input device for manipulation of 3D information. In Proceedings of Graphics Interface.


This is a brief summary of current research in my lab. Images for the different research areas are linked to representative videos.

Embodied cognitive models

[Image: CogTool experiment results]

At the intersection of computer science and psychology we find embodied cognitive models: computational simulations of human performance on specific tasks. Projects in my lab include modeling of vision, gesture, and interaction with mobile devices, based on existing and novel cognitive architectures; we are also exploring brain-computer interfaces. Current students: Prairie Rose Goodwin, Liang Dong, Sina Bahram, Huseyin Sencan.

Accessibility and intelligent user interfaces

[Image: CAVIAR device]

Techniques from artificial intelligence and related areas hold great promise for improving interactive systems. One project in my lab, TIKISI (Touch It, Key It, Speak It), helps blind users interact with graphical information such as maps; a past project, CAVIAR, used a specialized wristband and computer vision algorithms running on a mobile phone to guide a blind person's hand toward specific objects. Other work focuses on novel interaction techniques. Current students: Sina Bahram, Brian Clee.

Tool-based user interfaces

[Image: Augmented reality in a cube]

Tool use is a hallmark of intelligent behavior, but current interactive systems do not fully exploit our abilities. A project called CAPTIVE began in the summer of 2013, in collaboration with Jae Yeol Lee at Chonnam National University, Korea. CAPTIVE is an augmented reality/tangible user interface system for working with 3D information: the user holds a physical cube, views it through a display with a camera mounted on the back (a stereo configuration is in progress), and sees virtual objects that track the cube's movement. We have also built a tool-based user interface for managing documents on a large touch surface.



Current students

  • Pat Cash, Ph.D. candidate. Intelligent user interfaces, mobile search.
  • Liang Dong, Ph.D. candidate. Artificial intelligence, human-computer interaction.
  • Prairie Rose Goodwin, Ph.D. candidate. Mobile interaction.

Past students

  • Shea McIntee, Ph.D., 2016. A task model of free-space movement-based gestures.
  • Huseyin Sencan, Ph.D., 2016. (Dis)Similarity-based classification of cross domain multivariate spatiotemporal systems using dynamic network structures and graph edit distances.
  • Kyung Wha Hong, Ph.D., 2014. Improving interface usability through model transformation using interaction design models. (Now at Samsung.)
  • Arpan Chakraborty, Ph.D., 2014. A biologically inspired active vision framework for cognitive agents. (Now at Udacity.)
  • Shishir Kakaraddi, M.S., 2012. A comparison of summarization techniques for small sets of micro blogs. (Now at VMware.)
  • Yanglei Zhao, M.S., 2011. Gibbon: A wearable device for pointing gesture recognition. (Now at TransLoc.)
  • Thomas Horton, Ph.D., 2011. A partial contour similarity-based approach to visual affordances in habile agents.
  • Marivic Bonto-Kane, Ph.D., 2010. Statistical modeling of human response times for task modeling in HCI. (Now at the Naval Medical Information Management Center.)
  • Reuben Cornel, M.S., 2009. Coglaborate: An environment for collaborative cognitive modeling. (Now at Salesforce.)
  • Lloyd Williams, Ph.D., 2009. Dynamic ontology driven learning and control of robotic tool using behavior. (Now a professor at Shaw University.)
  • Wei Mu, Ph.D., 2009. A schematic representation for cognitive tool-using agents. (Now at Microsoft.)
  • Lucas Layman, Ph.D., 2008 (co-chair with Laurie Williams). Information needs of developers for program comprehension during software maintenance tasks. (Now at the Fraunhofer Center for Experimental Software Engineering, University of Maryland.)
  • James Ward, M.S., 2008. A comparison of fuzzy logic spatial relationship methods for human robot interaction. (Now at U.S. Army Research Office.)
  • Chaya Narayanan Kutty, M.S., 2008. Toward video games on video. (Now at Cisco Systems.)
  • Kevin Damm, M.S., 2008. Incorporating student note-taking into online intelligent computer-assisted instruction. (Now at Google.)
  • Andrea Dawkins, M.S., 2007. Personalized hierarchical menu organization for mobile device users. (Now at Entrinsik.)
  • Kenya Freeman, Ph.D., 2006 (Psychology, co-chair with Eric Wiebe). The effects of automated decision aid reliability and algorithm modality on reported trust and task performance. (Now at LexisNexis Group.)
  • Curtis Boyce, M.S., 2006. Video-based augmented reality for robot navigation. (Now at GlaxoSmithKline.)
  • Sean P. McBride, M.S., 2005. Data organization and abstraction for distributed intrusion detection. (Now at the Washington Post Company.)
  • Alexander Wood, M.S., 2005. Effective tool use in a habile agent. (Now at Grayhawk Systems.)
  • Lloyd Williams, M.S., 2005. Opening the black box on statistical modeling: The theory behind VisualBayes.
  • Thomas Horton, M.S., 2004. HabilisDraw: a tool-based direct manipulation software environment.
  • Bradley Siegler, M.S., 2004. Supporting electronic CRC card sessions with natural interaction.
  • Colin G. Butler, M.S., 2004. Exploring bimanual tool-based interaction in a drawing environment.
  • Nihar Namjoshi, M.S., 2004. Web information retrieval using Web document structures. (Now at Microsoft.)
  • Martin Dulberg, Ph.D., 2003. A task-based evaluation framework for comparing input devices. (Now at DELTA, North Carolina State University.)
  • Ajay Dudani, M.S., 2003. User interface softbots. (Now at Qualcomm Innovation Center.)
  • Kunal Shah, M.S., 2003. Image processing for cognitive models in dynamic gaming environments. (Now at Adobe Systems.)
  • Sameer Rajyaguru, M.S., 2003. Image processing substrate to assist cognitive models interact with dynamic environments. (Now at Amazon.)
  • Mark O. Riedl, M.S., 2001. A computational model of navigation in social environments. (Now a professor at Georgia Tech.)
  • Troy Tolle, M.S., 2000. IDIOM: An intelligent, dynamically manipulable simulation for high school physics education. (Now at Digital Chalk.)
  • T. Edward Long, M.S., 1999. A navigation testbed.


Awards

  • Outstanding Teacher Award, North Carolina State University, 2013.
  • Best Paper (with Reuben Cornel and Jeff Shrager), 19th Behavior Representation in Modeling & Simulation (BRIMS) Conference, Charleston, SC, 2010.
  • Best Paper (with Lucas Layman and Laurie Williams), First International Symposium on Empirical Software Engineering and Measurement (ESEM), Madrid, Spain, 2007.
  • Best Applied Paper (with Frank Ritter, Penn State University), Sixth International Conference on Cognitive Modeling (ICCM), Pittsburgh, August, 2004.
  • Outstanding new teacher, Department of Computer Science, North Carolina State University, 1999.
  • Recognition of special service, Office of the Army (Heeresamt), Cologne, Germany, 1991.