Using Embedded Agents to Support Student Learning
Animated agents and tutors can provide students with disabilities with targeted, just-in-time supports for learning.
Agents and tutors are the lifelike characters in multimedia software and online applications that pop up on the screen to explain rules, provide hints, or prompt the user to use the program's features. They can be human or nonhuman, animated or static. You may have encountered agents in the form of the Microsoft Office "paperclip" assistant or while shopping online. Some businesses (for example, IKEA) have developed agents with some degree of artificial intelligence to help shoppers find the information and answers they are looking for without having to call customer service.
As technologies improve, more and more software programs and websites are using agents to support users with tasks such as navigation, solving problems, finding resources, or contacting support.
Students can make use of these agents in a variety of ways. Multimedia agents have been found to:
- Increase interest and lessen content difficulty
- Serve as an effective mentor or tutor
- Simulate peer tutoring
Multimedia environments with well-designed agents can provide just-in-time prompts that support students' learning of content. Most students enjoy agents and find their advice valuable.
Though not all software tools make use of agents or helpers, many tools are adding this feature. The use of multimedia agents can be a great way of providing multiple options for representation, engagement, and action.
Digital agents or tutors can provide students with supplemental instruction and guided practice on any number of academic skills. For example, many reading programs now use agents to help students build skills and fluency or to prompt the application of comprehension strategies. Similarly, agents are embedded in simulation software teaching mathematics and science.
Multimedia environments that use agents can support students' understanding of content area concepts and the relationships among ideas and concepts in a discipline.
Choosing Programs with Animated Agents
When evaluating multimedia programs for your classroom, it is important to determine the type of interaction that the agent has with the user as well as the quality of interaction it solicits from users.
What to look for:
- Agents ask questions that promote higher-order thinking (why, what-if, and how questions)
- Agents do not limit or define student thinking about a topic. Beware of agents that provide a student with shallow definitions of key concepts; look instead for tools that represent multiple perspectives and encourage deep thinking and reflection.
- Agents are personalized, referring to the student by name or as "you". Example: "Now I'm going to help you get started...".
A series of side-by-side comparison studies with youth and young adults has shown that students' learning and interaction are enhanced when they work with an agent programmed to demonstrate emotions, act in somewhat unpredictable ways, and speak in a personalized tone (using "you" and "me/we"), compared with static graphic agents or voice narration alone. These findings are reviewed in Moreno (2005); here we provide some examples.
Atkinson (2002) evaluated student interactions with Peedy, a parrot with a personality, in a multimedia program designed to assist students with algebra word problems. Undergraduate students working with Peedy reported less difficulty and had higher post-test scores than students in a control condition working with the same narration but without the Peedy agent. Moreover, students who worked with a talking version of Peedy benefited on post-tests more than students who worked with a Peedy that presented written explanations in a thought bubble. Students in a study by Moundridou and Virvou (2002) likewise reported less difficulty and greater enjoyment when a multimedia program featured an agent that helped them solve algebraic equations than when they used the same program without the agent.
In another study, middle school students worked with multiple versions of a multimedia agent called "Herman the Bug" within a multimedia program about botany (Lester, Stone, & Stelling, 1999). Changes in test scores demonstrated that all students learned the material, but students who worked with a speaking Herman reported higher levels of interest and engagement than their counterparts who worked with less interactive versions of Herman.
Researchers at the University of Memphis are designing agents that may increase reading comprehension by prompting students to self-explain their learning—asking themselves why, what-if, and how questions and engaging in an interactive dialogue that reinforces reading strategies. AutoTutor and iSTART are two web-based prototypes that incorporate such agents. Both have been found effective at increasing comprehension of science content text for youth and young adults (Graesser et al., 2003; Graesser, Lu, et al., 2004). AutoTutor uses a human-like head to provide explanation, while iSTART uses a collection of three-dimensional agents, each performing a different function in the training module. Interactive dialogues are incorporated directly into the program through the agent or developed through peer interaction within student pairs. Peer dialogue around a multimedia learning experience has elsewhere been shown to improve learning for young adults (Craig, Driscoll, & Gholson, 2004).
An animated agent is an integral part of the commercial program Thinking Reader® (Tom Snyder Productions, Scholastic). The program embeds strategy instruction into award-winning novels for intermediate and middle school students and is based on research conducted with struggling adolescent readers (Dalton, Pisha, Eagleton, Coyne, & Deysher, 2001). The books are digitized and embedded with multiple supports including human voice narration, text-to-speech, a multimedia glossary, background knowledge links, strategy instruction, and a worklog. Agents prompt students to "stop and think" (apply reading strategies) and provide corrective feedback on students' performance. The use of these books has been shown to significantly improve the reading comprehension of struggling readers compared to traditional reciprocal teaching instruction (Dalton, Pisha, Eagleton, Coyne, & Deysher, 2001).
Bosseler and Massaro (2003) developed a multimedia training environment called the Language Wizard/Player that includes an agent, Baldi, who serves as a speech-language tutor. This agent provides specific feedback on students' vocabulary and speech production. Baldi's skin can be made transparent to show the articulatory movements in the mouth and throat. Young children with autism who worked with Baldi increased their vocabulary and generalized their new words to natural settings.
References
Atkinson, R. K. (2002). Optimizing learning from examples using pedagogical agents. Journal of Educational Psychology, 94(2), 416-427.
Biswas, G., Leelawong, K., Schwartz, D., Vye, N., & the Teachable Agents Group at Vanderbilt. (2005). Learning by teaching: A new agent paradigm for educational software. Applied Artificial Intelligence, 19, 363-392.
Bosseler, A., & Massaro, D. (2003). Development and evaluation of a computer-animated tutor for vocabulary and language learning in children with autism. Journal of Autism and Developmental Disorders, 33(6), 653-672.
Chi, M. T. H. (2000). Self-explaining: The dual processes of generating inference and repairing mental models. In R. Glaser (Ed.), Advances in instructional psychology: Vol. 5. Educational design and cognitive science (pp. 161-238). Mahwah, NJ: Erlbaum.
Clark, R. E., & Feldon, D. F. (2005). Five common but questionable principles of multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 97-116). New York: Cambridge University Press.
Craig, S. D., Driscoll, D. M., & Gholson, B. (2004). Constructing knowledge from dialog in an intelligent tutoring system: Interactive learning, vicarious learning, and pedagogical agents. Journal of Educational Multimedia and Hypermedia, 13(2), 163-183.
Dalton, B., Pisha, B., Eagleton, M., Coyne, P., & Deysher, S. (2001). Engaging the text: Reciprocal teaching and questioning strategies in a scaffolded learning environment. Final report to the U.S. Department of Education. Peabody, MA: CAST.
Graesser, A. C., McNamara, D.S., & Van Lehn, K. (2005). Scaffolding deep comprehension strategies through Point&Query, AutoTutor, and iSTART. Educational Psychologist, 40(4), 225-234.
Graesser, A. C., Lu, S., Jackson, G. T., Mitchell, H., Ventura, M., Olney, A., et al. (2004). AutoTutor: A tutor with dialogue in natural language. Behavior Research Methods, Instruments, & Computers, 36, 180-192.
Lester, J. C., Stone, B., & Stelling, G. (1999). Lifelike pedagogical agents for mixed-initiative problem solving in constructivist learning environments. User Modeling and User-Adapted Interaction, 9, 1–44.
McNamara, D. S., & Shapiro, A. M. (2005). Multimedia and hypermedia solutions for promoting metacognitive engagement, coherence, and learning. Journal of Educational Computing Research, 33(1), 1-29.
Moreno, R., Mayer, R. E., Spires, A. H., & Lester, J. C. (2001). The case for social agency in computer-based teaching: Do students learn more deeply when they interact with animated pedagogical agents? Cognition and Instruction, 19(2), 177-213.
Moreno, R. (2005). Multimedia learning with animated pedagogical agents. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 507-523). New York: Cambridge University Press.
Moundridou, M., & Virvou, M. (2002). Evaluating the personal effect of an interface agent in a tutoring system. Journal of Computer Assisted Learning, 18, 253-261.
Palincsar, A. S., & Brown, A. L. (1984). Reciprocal teaching of comprehension-fostering and comprehension-monitoring activities. Cognition and Instruction, 1(2), 117-175.
Pedersen, S., & Liu, M. (2002). The effects of modeling expert cognitive strategies during problem-based learning. Paper presented at the Annual Meeting of the American Educational Research Association. Seattle, WA. ERIC Document TM 032 801.
Strangman, N., & Dalton, B. (2005). Improving struggling readers' comprehension through scaffolded hypertexts and other computer-based literacy programs. In M. C. McKenna, L. D. Labbo, R. D. Kieffer, & D. Reinking (Eds.), International handbook of literacy and technology, Volume II (pp. 75-92). Mahwah, NJ: Lawrence Erlbaum Associates.