When do humanlike virtual assistants help - or hinder - online learning?
Online learning is an increasingly popular tool across most levels of education. Currently, all 50 states in the United States offer online learning at the K-12 level, and about 74% of K-8 teachers use educational software as a classroom tool. About 5.8 million higher education students are taking at least one online course, and revenue from mobile learning products in North America is predicted to rise steadily, reaching $410 million by the end of 2018.
Many teachers and practitioners believe that online learning can increase learners’ help-seeking by securing their privacy and thereby reducing the psychological costs of help-seeking. Thus, in the hope of increasing help-seeking and facilitating learning, many computer-based learning programs offer various forms of help. Leveraging knowledge from the literature on anthropomorphism, which suggests that people tend to build strong bonds with humanlike virtual assistants, many online learning programs are endowed with computerized helpers with anthropomorphic features. However, despite the ready availability of such help, the effects of these anthropomorphic features on help-seeking have never been tested.
To better understand who is more (or less) likely to seek help during online learning, and when, we explored the role of an anthropomorphized helper alongside individuals’ implicit theories of intelligence: whether individuals fundamentally believe that abilities like intelligence can change. The findings show that anthropomorphic features may not prove beneficial in online learning settings, especially among those who believe their abilities are fixed (i.e., entity theorists), compared to those who believe that abilities can change (i.e., incremental theorists). Because of their belief in fixed abilities, entity theorists worry about presenting themselves as incompetent to others – even virtual others.
Across two studies, we show that entity theorists, who are interested in demonstrating their abilities and thus concerned about what others think of them, are more sensitive to anthropomorphic cues in a computerized helper than incremental theorists are. Thus, when they encounter an anthropomorphic (vs. non-anthropomorphic) helper, entity theorists perceive high costs of help-seeking (e.g., “Others might think I am incompetent because I received help during the task”), and they are less likely to seek help even at the cost of lower performance. Conversely, because incremental theorists are interested in learning itself and thus are less concerned about others’ judgments, their help-seeking and performance are less affected by the presence of anthropomorphic cues.
This research is the first to show that, in achievement settings, the use of anthropomorphic computerized helpers can backfire and hurt performance. Past research has shown that in socio-emotional settings, where interpersonal bonds are desirable, endowing a computerized helper with anthropomorphic features is beneficial (learn more here, here, and here). In contrast to this past research, however, the current research shows that in achievement settings, where independence is desirable, entity theorists avoid seeking help when they feel like someone (i.e., an anthropomorphized helper), rather than something (i.e., a non-anthropomorphized helper), is helping them. Thus, when developing educational computer programs for online learning, educators and program designers should pay special attention to the unintended meanings that online learning features can convey, which vary across individuals as a function of their theories of intelligence, in order to more effectively encourage students to seek help and increase the benefits they experience as a result of receiving it.
Dr. Sara Kim is an Assistant Professor of Marketing at the University of Hong Kong. Her research focus is on consumer and managerial decision-making and its implications for marketing management.