Rob St. Amant
I'm an associate professor in the Department of Computer Science at North Carolina State University. My CV is online. I study topics in human-computer interaction and cognitive modeling. I've just written a popular science book called Computing for Ordinary Mortals (Oxford University Press, 2012). From the catalog description:
Computing isn't only (or even mostly) about hardware and software; it's also about the ideas behind the technology. In Computing for Ordinary Mortals, computer scientist Robert St. Amant explains this "really interesting part" of computing, introducing basic computing concepts and strategies in a way that readers without a technical background can understand and appreciate. Each of the chapters illustrates ideas from a different area of computing, and together they provide important insights into what drives the field as a whole. St. Amant starts off with an overview of basic concepts as well as a brief history of the earliest computers, and then he traces two different threads through the fabric of computing...
News and status updates (May 2013)
KyungWha Hong will present the poster "Integrating Action Graphs and Keystroke Level Model to Represent the Usability Improvement Process," by KyungWha Hong and Robert St. Amant, at Graphics Interface in May.
Arpan Chakraborty will present the paper "Modeling the Concentration Game with ACT-R," by Titus Barik, Arpan Chakraborty, Brent Harrison, David L. Roberts, and Robert St. Amant, at the International Conference on Cognitive Modeling in July.
"Intelligent Interaction in Accessible Applications," by Sina Bahram, Arpan Chakraborty, Srinath Ravindran, and Robert St. Amant, has been published in the edited volume A Multimodal End-2-End Approach to Accessible Computing, P. Biswas, C. Duarte, P. Langdon, L. Almeida, and C. Jung (eds.), Springer, Human–Computer Interaction Series.
JaeYeol Lee and I are working on a new augmented reality project.
Chris Healey and I will shortly begin work on an SBIR with Soar Tech.
Research in my lab targets models of interaction, drawing on concepts in artificial intelligence, human-computer interaction, and cognitive science. (Our results have appeared in HCI, AI, and even animal behavior publications.) Some of the pictures on this page are linked to videos.
How can computers help people with vision impairment? CAVIAR uses a specialized wristband and computer vision algorithms running on a mobile phone to guide a blind person's hand toward specific objects. More recently we have explored the area of accessible user interfaces, in particular access to graphical information, through the development of a system called TIKISI (Touch It, Key It, Speak It). TIKISI can already help blind users interact with Google Maps, and current efforts aim to extend it to other types of graphics, including flow charts and graphs. Students: Sina Bahram, Arpan Chakraborty.
Why is solving problems on a computer often harder than in the physical world? We are interested in the nature of the interface between agents (people, robots, and software agents) and their environments (real or virtual). We have developed wearable input devices, a robot that can choose simple tools for different jobs, and various drawing applications. We have even studied animal tool-using abilities. Students: Sina Bahram, Arpan Chakraborty, Prairie Rose Goodwin, Shea McIntee. (Recent graduates: Thomas Horton, Ph.D., Lloyd Williams, Ph.D., Jim Creager, B.S.)
Intelligent user interfaces and modeling for HCI
Could computers do a better job of assisting users? Would it help if we had a better understanding of users' abilities? These are core issues in HCI. We use task and cognitive modeling techniques, such as GOMS and ACT-R, to build engineering models of real users; we also build and model intelligent user interfaces. Students: Sina Bahram, Pat Cash, KyungWha Hong, Huseyin Sencan. (Recent graduates: Shishir Kakaraddi, M.S., Yanglei Zhao, M.S.)
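As a concrete illustration of the engineering models this paragraph describes, here is a minimal sketch of a Keystroke-Level Model (KLM) time prediction, the simplest member of the GOMS family. The operator times are the standard values from Card, Moran, and Newell; the function name and example task are illustrative, not taken from our systems.

```python
# Minimal Keystroke-Level Model sketch: predict task execution time
# by summing standard operator times (seconds), per Card, Moran, & Newell.
OPERATORS = {
    "K": 0.2,   # keystroke or button press (average skilled typist)
    "P": 1.1,   # point with the mouse at a target on screen
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def klm_estimate(sequence):
    """Total predicted execution time for a string of KLM operators."""
    return sum(OPERATORS[op] for op in sequence)

# Example: mentally prepare, point at a text field, click,
# home to the keyboard, then type four characters.
print(round(klm_estimate("MPKH" + "KKKK"), 2))  # prints 3.85
```

A full GOMS or ACT-R model adds much more structure (goals, methods, production rules), but even this back-of-the-envelope calculation can rank alternative interface designs by predicted speed.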
Software: CAVIAR (Computer-vision Assisted Vibrotactile Interface for Accessible Reaching) is available for download. I maintain a set of AI planning systems written in Common Lisp intended for classroom use. (People may find the AI Planning Resources page useful too.) Two older systems may be of historical interest: G2A translates high-level procedural GOMSL models into detailed cognitive ACT-R 4 models. SegMan is a perceptual substrate that uses simple image processing techniques to "see" the Microsoft Windows graphical user interface.
- Sina Bahram, Ph.D. in progress. Area: Accessibility and intelligent user interfaces.
- Pat Cash, Ph.D. in progress. Area: Context-based intelligent user interfaces.
- Arpan Chakraborty, Ph.D. in progress. Area: Cognitive vision and accessibility.
- Prairie Rose Goodwin, Ph.D. in progress. Area: TBD.
- KyungWha Hong, Ph.D. in progress. Area: Model-based user interface generation.
- Shea McIntee, Ph.D. in progress. Area: Gesture-based interaction and modeling.
- Huseyin Sencan, Ph.D. in progress. Area: Brain-computer interfaces.
- Shishir Kakaraddi, M.S., 2012. A comparison of summarization techniques for small sets of microblogs. (Now at VMware.)
- Yanglei Zhao, M.S., 2011. Gibbon: A wearable device for pointing gesture recognition. (Now at TransLoc.)
- Thomas Horton, Ph.D., 2011. A partial contour similarity-based approach to visual affordances in habile agents.
- Marivic Bonto-Kane, Ph.D., 2010. Statistical modeling of human response times for task modeling in HCI. (Now at the Naval Medical Information Management Center.)
- Reuben Cornel, M.S., 2009. Coglaborate: An environment for collaborative cognitive modeling. (Now at Salesforce.)
- Lloyd Williams, Ph.D., 2009. Dynamic ontology driven learning and control of robotic tool-using behavior. (Now a professor at Shaw University.)
- Wei Mu, Ph.D., 2009. A schematic representation for cognitive tool-using agents. (Now at Microsoft.)
- Lucas Layman, Ph.D., 2008 (co-chair with Laurie Williams). Information needs of developers for program comprehension during software maintenance tasks. (Now at the Fraunhofer Center for Experimental Software Engineering, University of Maryland.)
- James Ward, M.S., 2008. A comparison of fuzzy logic spatial relationship methods for human robot interaction. (Now at U.S. Army Research Office.)
- Chaya Narayanan Kutty, M.S., 2008. Toward video games on video. (Now at Cisco Systems.)
- Kevin Damm, M.S., 2008. Incorporating student note-taking into online intelligent computer-assisted instruction. (Now at Google.)
- Andrea Dawkins, M.S., 2007. Personalized hierarchical menu organization for mobile device users. (Now at Entrinsik.)
- Kenya Freeman, Ph.D., 2006 (Psychology, co-chair with Eric Wiebe). The effects of automated decision aid reliability and algorithm modality on reported trust and task performance. (Now at LexisNexis Group.)
- Curtis Boyce, M.S., 2006. Video-based augmented reality for robot navigation. (Now at GlaxoSmithKline.)
- Sean P. McBride, M.S., 2005. Data organization and abstraction for distributed intrusion detection. (Now at the Washington Post Company.)
- Alexander Wood, M.S., 2005. Effective tool use in a habile agent. (Now at Grayhawk Systems.)
- Lloyd Williams, M.S., 2005. Opening the black box on statistical modeling: The theory behind VisualBayes.
- Thomas Horton, M.S., 2004. HabilisDraw: a tool-based direct manipulation software environment.
- Bradley Siegler, M.S., 2004. Supporting electronic CRC card sessions with natural interaction.
- Colin G. Butler, M.S., 2004. Exploring bimanual tool-based interaction in a drawing environment.
- Nihar Namjoshi, M.S., 2004. Web information retrieval using Web document structures. (Now at Microsoft.)
- Martin Dulberg, Ph.D., 2003. A task-based evaluation framework for comparing input devices. (Now at DELTA, North Carolina State University.)
- Ajay Dudani, M.S., 2003. User interface softbots. (Now at Qualcomm Innovation Center.)
- Kunal Shah, M.S., 2003. Image processing for cognitive models in dynamic gaming environments. (Now at Adobe Systems.)
- Sameer Rajyaguru, M.S., 2003. Image processing substrate to assist cognitive models interact with dynamic environments. (Now at Amazon.)
- Mark O. Riedl, M.S., 2001. A computational model of navigation in social environments. (Now a professor at Georgia Tech.)
- Troy Tolle, M.S., 2000. IDIOM: An intelligent, dynamically manipulable simulation for high school physics education. (Now at Digital Chalk.)
- T. Edward Long, M.S., 1999. A navigation testbed.
- Outstanding Teacher Award, North Carolina State University, 2013.
- Best Paper (with Reuben Cornel and Jeff Shrager), 19th Behavior Representation in Modeling & Simulation (BRIMS) Conference, Charleston, SC, 2010.
- Best Paper (with Lucas Layman and Laurie Williams), First International Symposium on Empirical Software Engineering and Measurement (ESEM), Madrid, Spain, 2007.
- Best Applied Paper (with Frank Ritter, Penn State University), Sixth International Conference on Cognitive Modeling (ICCM), Pittsburgh, August, 2004.
- Outstanding New Teacher Award, Department of Computer Science, North Carolina State University, 1999.
- Recognition of special service, Office of the Army (Heeresamt), Cologne, Germany, 1991.