David’s papers

The following items are my (David Pautler’s) major publications.

  • A set of serious games, and a data collection infrastructure, aimed at early detection of Alzheimer’s disease by monitoring gameplay.
    • Farber, I., Fua, K.C., Gupta, S., & Pautler, D. (2016). MoCHA: Designing Games to Monitor Cognitive Health in Elders at Risk for Alzheimer's Disease. Proceedings of the 13th International Conference on Advances in Computer Entertainment Technology (ACE 2016). (PDF)
    • Fua, K., Gupta, S., Pautler, D., & Farber, I. (2013). Designing Serious Games for Elders. Proceedings of the 8th International Conference on the Foundations of Digital Games (FDG 2013), pp. 291-297. May 14-17, 2013. Chania, Crete, Greece. (PDF)
  • A 3D avatar that the user can speak with to practice skills such as interviewing. The avatar is embedded in a web page and uses the computer’s microphone, speakers, and webcam to track the user’s engagement level.
    • Ramanarayanan, V., Lange, P., Pautler, D., Yu, Z., Mulholland, M., & Suendermann-Oeft, D. (2016). Interview With An Avatar: A Real-Time Engagement Tracking-Enabled Cloud-Based Multimodal Dialog System For Learning And Assessment. (PDF tba) (Demo description PDF)
  • A computational cognitive model of how people make sense of events that they observe. Ultimately, we’d like to use video of people as input, but as a start we use animations of simple geometric figures that appear to many people to tell a story.
    • Pautler, D., Koenig, B.L., Quek, B.K., Ortony, A. (2011). Using modified incremental chart parsing to ascribe intentions. Behavior Research Methods 43(3), 643-665, DOI: 10.3758/s13428-011-0128-2. (PDF) (publisher page)
    • Pautler, D., Wirawan, E., Koenig, B.L., & Wan, K-S. (2016). Design objectives for cognitive models of intention perception. IEEE Workshop on Intention Recognition in Human-Robot Interaction. (PDF) (slides)
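To give a flavor of the chart-parsing idea behind these papers, here is a toy sketch. The action labels, intention labels, and rules are all invented for illustration; the actual model is far richer, scoring and revising competing interpretations rather than matching fixed sequences.

```python
# Toy "grammar": an intention is ascribed when an observed sequence of
# actions completes one of its rules. All labels here are invented.
RULES = {
    "chase": ["approach", "approach", "contact"],
    "play":  ["approach", "retreat", "approach"],
}

def ascribe(events):
    """Incremental chart-parsing sketch: keep partial edges
    (intention, number of actions matched so far) and advance them
    as each new event arrives; report intentions whose rules complete."""
    edges = []      # active partial edges: (intention, matched_count)
    ascribed = []
    for ev in events:
        # any rule may also start fresh at the current event
        candidates = edges + [(name, 0) for name in RULES]
        edges = []
        for name, k in candidates:
            if RULES[name][k] == ev:
                if k + 1 == len(RULES[name]):
                    ascribed.append(name)        # rule complete
                else:
                    edges.append((name, k + 1))  # extend partial edge
    return ascribed
```

For the event stream `["approach", "approach", "contact"]` this sketch ascribes `"chase"`.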
  • An AI system that generates plans starting from any combination of 13 persuasion goal types, drawing on 56 speech act types. It uses integer linear programming, and its output compared well with judgments from human test participants.
    • Pautler, D. (2007). Computable Social Communication. PhD dissertation, University of Hawaii-Manoa. (short unpub'd paper) (dissertation PDF) (presentation PDF) (downloadable source)
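The ILP-based planning can be pictured, very roughly, as choosing a minimal set of speech acts whose combined effects cover the requested persuasion goals. Below is a toy sketch of that set-cover view; the goal and act names are invented, and brute-force enumeration stands in for a real ILP solver (the dissertation's actual formulation over 13 goal types and 56 act types is far richer).

```python
from itertools import combinations

# Hypothetical toy data: which persuasion goals each speech act can serve.
# These names are illustrative only.
ACT_COVERS = {
    "compliment":    {"build_rapport"},
    "cite_expert":   {"establish_credibility", "justify_claim"},
    "offer_favor":   {"build_rapport", "create_obligation"},
    "state_benefit": {"justify_claim"},
}

def min_act_plan(goals):
    """Brute-force the 0-1 set-cover objective an ILP solver would
    optimize: pick the fewest speech acts whose combined coverage
    includes every requested goal."""
    acts = list(ACT_COVERS)
    for size in range(1, len(acts) + 1):
        for combo in combinations(acts, size):
            covered = set().union(*(ACT_COVERS[a] for a in combo))
            if goals <= covered:
                return set(combo)
    return None  # no combination of acts covers the goals

plan = min_act_plan({"build_rapport", "justify_claim"})
```

Here `plan` is `{"compliment", "cite_expert"}`: the smallest act set covering both requested goals.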
  • An earlier, rule-based implementation that used a different set of goals and fewer speech act types (40). It was able to guide the user’s topic selection, but was never evaluated with test participants.
    • Pautler, D., & Quilici, A. (1998). A computational model of social perlocutions. Presented at COLING-ACL, Montreal, Quebec. (PDF)
  • A system that helped novice users learn Unix by modeling their likely misunderstandings when doing simple tasks.
    • Pautler, D., & Quilici, A. (1998). A Computational Model Of Recognizing and Revising Inappropriate Advice. Presented at the annual conference of the Cognitive Science Society. (PDF)
  • A system that generated test scenarios for Army tank units training in combat simulators. For example, if the test leader wanted to focus on responding to helicopter attacks, the system would find a spot in the test terrain where such an attack might happen, and guide the unit there through a series of plausible commands and theater updates.
    • Pautler, D., Woods, S., & Quilici, A. (1997). Exploiting Domain-Specific Knowledge To Refine Simulation Specifications. Presented at the International Conference on Software Engineering. (PDF)
  • A system that simulated how people from different political perspectives could interpret the same situation in different, self-serving ways.
    • Ferguson, W., & Pautler, D. (1991). You've Heard It All Before: A Case-Based Approach to Argument Formation. Presented at the AAAI Spring Symposium on Argumentation and Belief. (PDF)