“When we teach, we learn.” (Seneca)
Besides working on my own research, the PhD gave me the chance to teach tremendously interesting subjects such as social signal processing, explainability in machine learning, and data science. Teaching required me to spend more time reviewing my own knowledge, to find different ways to express intuitions and methods, and to instill interest and passion for something I am fascinated by. In the end, I made a greater effort to master material I had previously learned only for myself, spending more time on learning activities (e.g., reading, coding). This sense of responsibility towards others that motivates learning has been called the protégé effect, or learning by teaching: it has been shown to provide an environment in which knowledge can be improved through revision, and to protect students from the psychological ramifications of failure (Chase et al. 2009, Chandra et al. 2019). It is therefore not surprising that learning performance improves significantly when we account for the social context.

Researchers have investigated whether these findings extend to people interacting with intelligent agents, and observed that people display natural social inclinations when interacting with social agents, for example in Second Life, the Sims, and World of Warcraft. In learning scenarios, social agents can assume distinct pedagogical roles. Kim and Baylor 2016 identified three main roles a pedagogical agent can play: expert, motivator, and mentor. Expert agents formally provide accurate information in a clear and succinct way. Motivator agents are designed as peers: they are presented not as particularly knowledgeable but as eager participants who suggest their own ideas and, by asking questions, encourage learners to reflect on their thinking and develop coping strategies. Mentor agents have advanced experience and knowledge and work collaboratively with the learners to achieve goals.
Agents can offer new models of how to think or act and perform pedagogically relevant behaviors such as showing, explaining, and questioning. Moreover, teachable agents (TAs) make it possible to reverse the teacher-student roles. By using artificial intelligence to learn and reason about what they have been taught, TAs both behave independently and contribute to the protégé effect. Betty’s Brain is an example of a teachable agent, designed to model chain-like mechanisms of cause-and-effect relationships. Students teach their TA by creating a concept map of nodes connected by causal links, e.g., “greenhouse gas emissions” increase “global warming rate”. Once taught, the TA can answer questions such as “If X increases/decreases, what happens to Y?”, tracing its reasoning process by sequentially highlighting each node and link in a causal chain. When the TA animates its reasoning on the screen, it externalizes its thought process, making its “thinking” visible. This can be interpreted both as an explanatory behavior that untangles the TA’s decision-making process and as a transparent mechanism linking specific learning behaviors to improved performance.
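The causal-chain reasoning described above can be sketched in a few lines of Python. The concept map, link names, and narration format below are illustrative assumptions for this sketch, not the actual Betty’s Brain implementation: each link carries a sign (“+” for increases, “−” for decreases), a question is answered by composing the signs along a chain from X to Y, and each step is printed to mimic the TA externalizing its reasoning.

```python
# Hypothetical concept map taught by a student: (cause, effect) -> sign.
# "+" means the cause increases the effect; "-" means it decreases it.
links = {
    ("greenhouse gas emissions", "global warming rate"): "+",
    ("global warming rate", "polar ice mass"): "-",
    ("polar ice mass", "sea level"): "-",
}

def trace(source, target, links, visited=None):
    """Return a causal chain from source to target as a list of
    (next_node, sign) steps, or None if the map contains no chain."""
    if visited is None:
        visited = {source}
    for (a, b), sign in links.items():
        if a != source or b in visited:
            continue
        if b == target:
            return [(b, sign)]
        rest = trace(b, target, links, visited | {b})
        if rest is not None:
            return [(b, sign)] + rest
    return None

def answer(source, change, target, links):
    """Answer "If source increases/decreases, what happens to target?"
    by composing link signs along the chain, narrating each step."""
    chain = trace(source, target, links)
    if chain is None:
        return f"I was not taught how {source} affects {target}."
    state, node = change, source
    for nxt, sign in chain:
        # A "-" link flips the direction of change; a "+" link preserves it.
        if sign == "-":
            state = "decreases" if state == "increases" else "increases"
        print(f"  {node} --{sign}--> {nxt}: {nxt} {state}")  # externalized reasoning
        node = nxt
    return f"If {source} {change}, then {target} {state}."

print(answer("greenhouse gas emissions", "increases", "sea level", links))
```

Two flipping “−” links cancel out, so the sketch concludes that increasing emissions ultimately increases sea level, while printing each node and link in order, mirroring how the TA sequentially highlights its causal chain on screen.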
The protégé effect can be contrasted with common motivational features added to computer environments, in which learning is a side effect of sustained engagement, e.g., gaining points towards some quantitative goal. When teaching others, whether artificial or human agents, we tend to believe that our efforts can change someone else’s intellectual ability, and we are more motivated to learn for its own sake. Hence, the sense of responsibility and the ego-protective buffer inspired by teachable agents make us both better teachers and better learners.