Imagine a virtual tutor who can recognize when you are confused, frustrated, or simply have no idea how to solve a problem or grasp a new concept.
In recent years, computer-assisted tutoring systems have evolved from electronic workbooks into sophisticated applications that harness artificial intelligence to provide learning activities matched to the needs of the individual learner. These applications, called intelligent tutoring systems, focus on learning objectives that are primarily cognitive in nature and do not address the reality that learning occurs within a social, emotional, and motivational context.
Dr. Sarrafzadeh and his research team at Massey University's Institute of Information and Mathematical Sciences in Auckland, together with colleagues at the University of Canterbury, have been working on the next generation of intelligent tutoring systems, known as affective tutoring systems. These systems use technology to detect emotions, recognize gestures, and interpret related biological signals from students in order to adapt instruction to the individual, delivered through a virtual tutor named Eve.
During the development of this system, the researchers used observations and videotaped sessions of students and teachers interacting with one another to model the teaching-learning process in the virtual environment.
A diagram of the system architecture is also available on the project's website.
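To make the idea concrete, here is a minimal, hypothetical Python sketch of the kind of sense-then-adapt loop such a system runs: nonverbal features are mapped to a coarse affective label, and the tutor chooses its next move accordingly. The feature names, thresholds, state labels, and tutoring actions below are illustrative assumptions only, not details of the Eve system.

# A minimal, hypothetical sketch of an affective tutoring loop.
# None of these names or thresholds come from the Eve project.

from dataclasses import dataclass
from typing import Dict

@dataclass
class Observation:
    """One frame's worth of already-extracted nonverbal features."""
    smile: float           # 0.0 - 1.0, e.g. from a facial expression classifier
    brow_furrow: float     # 0.0 - 1.0
    gaze_on_screen: float  # fraction of recent time spent looking at the task

def estimate_affect(obs: Observation) -> str:
    """Map low-level features to a coarse affective label (illustrative rules)."""
    if obs.brow_furrow > 0.6 and obs.gaze_on_screen > 0.5:
        return "confused"      # working hard but struggling
    if obs.gaze_on_screen < 0.3:
        return "disengaged"
    if obs.smile > 0.5:
        return "confident"
    return "neutral"

# How the virtual tutor might adapt instruction to each affective state.
TUTOR_ACTIONS: Dict[str, str] = {
    "confused":   "offer a worked example and simplify the current problem",
    "disengaged": "switch to a more interactive activity and check in verbally",
    "confident":  "increase difficulty and reduce scaffolding",
    "neutral":    "continue the current lesson plan",
}

def tutoring_step(obs: Observation) -> str:
    state = estimate_affect(obs)
    return f"[{state}] -> {TUTOR_ACTIONS[state]}"

if __name__ == "__main__":
    # Simulated frames standing in for the output of expression/gesture recognition.
    frames = [
        Observation(smile=0.1, brow_furrow=0.8, gaze_on_screen=0.9),
        Observation(smile=0.0, brow_furrow=0.2, gaze_on_screen=0.1),
        Observation(smile=0.7, brow_furrow=0.1, gaze_on_screen=0.8),
    ]
    for frame in frames:
        print(tutoring_step(frame))

In the actual research, the affect estimate comes from facial expression and gesture recognition (see the publications below) rather than hand-written thresholds, but the overall detect-classify-adapt cycle is the same.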
If you are interested in exploring this topic further, here is a sample of articles from the project's publication list, along with other listings from the website:
Sarrafzadeh, A., Alexander, S., Dadgostar, F., Fan, C., See Me, Teach Me: Facial Expression and Gesture Recognition for Intelligent Tutoring Systems, International Conference on Innovations in Information Technology, 19-21 November 2006, Dubai, UAE.
Sarrafzadeh, A., Alexander, S., Dadgostar, F., Fan, C., Bigdeli, A. (2007). "How do you know that I don't understand?" A look at the future of intelligent tutoring systems. Computers in Human Behavior (in press).
Messom, C., Sarrafzadeh, A., Fan, C., Johnson, M., Affective State Estimation from Facial Images using Neural Networks and Fuzzy Logic, to appear in D. Wang (ed.), Neural Networks Applications in Information Technology and Web Engineering, Kota Samarahan, Malaysia: University of Malaysia Sarawak, 2005.
Dadgostar, F., Ryu, H., Sarrafzadeh, A., Overmyer, S. P., Making Sense of Student Use of Nonverbal Cues for Intelligent Tutoring Systems, Proceedings of the Annual Conference of the Australian Computer-Human Interaction (OZCHI), 21-25 November 2005, Canberra, Australia.
Related References:
Alexander, S.T.V.: Emulating Human Tutor Empathy, Proceedings of the IIMS Postgraduate Conference, Albany, New Zealand (2004)
Alexander, S.T.V., Hill, S., Sarrafzadeh, A.: How do Human Tutors Adapt to Affective State?, Proceedings of User Modeling, Edinburgh, Scotland (2005)
Beal, C. R., Mitra, S., & Cohen, P. R. (2007). Modeling learning patterns of students with a tutoring system using Hidden Markov Models. In R. Luckin, K. R. Koedinger, & J. Greer (Eds.), Artificial intelligence in education: Building technology rich learning contexts that work, pp. 238-245.
Abstract: Hidden Markov Models were used to model the actions of high school students who worked with an online tutorial for mathematics. Including a hidden state estimate of learner engagement increased the accuracy and predictive power of the models, within and across tutoring sessions. Groups of students with distinct engagement trajectories were identified and replicated in two independent samples. Finalist for Best Paper Award. (A rough sketch of this hidden-state idea appears after the reference list below.)
McQuiggan, S. W., & Lester, J. C. Diagnosing Self-Efficacy in Intelligent Tutoring Systems: An Empirical Study.
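The Beal, Mitra, and Cohen abstract above treats engagement as a hidden state to be inferred from observable tutor-log actions. Below is a rough numpy sketch of that idea using the standard HMM forward (filtering) recursion; the states, action vocabulary, and probabilities are invented for illustration and are not the parameters reported in the paper.

# Rough sketch: infer a hidden "engagement" state from observed tutor-log
# actions with an HMM forward pass. All numbers here are made up.

import numpy as np

STATES = ["engaged", "disengaged"]
ACTIONS = ["solve_correct", "solve_wrong", "request_hint", "rapid_guess"]

# P(next state | current state)
TRANS = np.array([
    [0.9, 0.1],   # engaged    -> engaged, disengaged
    [0.3, 0.7],   # disengaged -> engaged, disengaged
])

# P(observed action | hidden state)
EMIT = np.array([
    [0.50, 0.25, 0.20, 0.05],  # engaged
    [0.10, 0.25, 0.15, 0.50],  # disengaged
])

START = np.array([0.6, 0.4])

def forward_posteriors(action_ids):
    """Filtered P(state_t | actions_1..t) via the forward algorithm."""
    alpha = START * EMIT[:, action_ids[0]]
    alpha /= alpha.sum()
    posteriors = [alpha]
    for a in action_ids[1:]:
        alpha = (alpha @ TRANS) * EMIT[:, a]   # predict, then update on the new action
        alpha /= alpha.sum()
        posteriors.append(alpha)
    return np.array(posteriors)

if __name__ == "__main__":
    # A toy session: the student starts off solving problems,
    # then drifts into rapid guessing and hint requests.
    session = ["solve_correct", "solve_correct", "solve_wrong",
               "rapid_guess", "rapid_guess", "request_hint", "rapid_guess"]
    ids = [ACTIONS.index(a) for a in session]
    for action, post in zip(session, forward_posteriors(ids)):
        print(f"{action:14s}  P(engaged) = {post[0]:.2f}")

Running this on the toy session shows the estimated probability of engagement dropping as rapid guesses accumulate, which is the kind of hidden-state trajectory the paper uses to group students.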