Theoretical Frameworks for GenAI and Learning

A big question in education research right now is whether generative AI (genAI) can help students learn. It’s an important question because we know that many students are using genAI to assist in their college coursework (Arowosegbe et al., 2024; Johnston et al., 2024), and genAI is being integrated into many tools, often marketed as a learning aid. At EDLI, we have thought a lot about how genAI might meaningfully support learning. I am going to share two guiding theoretical frameworks for considering whether and how students might use genAI for productive learning: the Technology Acceptance Model and Self-Regulated Learning.

We are applying these frameworks in MSU’s pilot program that provides students with access to Khanmigo, Khan Academy’s generative AI tutoring chatbot. Khanmigo uses Socratic questioning to encourage students to be active learners: rather than receiving direct answers, students benefit from individualized guidance. Students in specific programs at MSU, such as the College Assistance Migrant Program (CAMP) and the Dow STEM Scholars, were given a license for unlimited Khanmigo use to assist in their coursework for the academic year.

Technology Acceptance Model: Whether or Not Students Will Use GenAI for Learning

The availability and capability of technologies such as ChatGPT or Khanmigo are not the only predictors of whether students will actually use them in their academic journey. The Technology Acceptance Model (TAM) and its various offshoots (e.g., TAM2, TAM3) aim to predict the likelihood that a given technology will be used for a specific purpose. If the question is “Will students use [a particular generative AI tool] for learning?”, then a TAM-based framework is a good approach for finding the answer.

The most basic level of the TAM measures “perceived usefulness” and “perceived ease of use” as predictors of technology use (Figure 1; Marikyan & Papagiannidis, 2024). TAM2 and TAM3 have expanded on this with additional predictors and model paths. TAM2 adds several variables to predict use: subjective norm, image, job relevance, output quality, result demonstrability, experience, and voluntariness.

Figure 1. The Technology Acceptance Model (TAM), from Marikyan and Papagiannidis (2024). The path diagram shows that perceived ease of use influences both perceived usefulness and intention to use; perceived usefulness also influences intention to use; and intention to use predicts actual use.

Let’s consider some of the nuances of TAM2 and how it helps us think about which genAI tools a student might use for assignment assistance. For students, “job relevance” refers to how relevant the tool is in completing their “job” of learning. GenAI tutoring tools may feel irrelevant to some students because they don’t give direct answers to assignments. These students might treat quick or efficient completion of coursework as a primary goal in their studies, and would therefore be more at risk of circumventing their learning through heavier use of general-purpose genAI tools. This kind of TAM2 analysis quickly makes clear that, to predict how students will integrate different types of genAI tools into their learning, we need to consider broader factors in how students approach not only technology but also their learning process.

TAM3 further expanded on predictors that designers might target in interventions, whether by improving the technologies themselves or the training available for them, to increase usage: computer self-efficacy, perception of external control, computer anxiety, computer playfulness, perceived enjoyment, and objective usability. These components may be most relevant for developers of genAI tools. You can read more about the various TAM models and their effectiveness in predicting technology use in Marikyan and Papagiannidis (2024).

Questions and measures from the TAM models can help evaluate where educators and researchers might need to focus efforts in promoting use of appropriate genAI technologies for learning purposes. However, students’ use of a genAI technology for learning does not reveal the perhaps more important factor of whether the technology is impactful in improving student learning outcomes and processes. For that question, we turn to our second theoretical framework.

Self-Regulated Learning: Whether or Not GenAI Will Help

Self-Regulated Learning (SRL) is the process through which learners plan, monitor, and reflect on their learning behaviors (Panadero, 2017). SRL’s emphasis on the process of learning and on metacognition (thinking and reflecting on learning processes) is key in fostering successful learning through genAI use. SRL has many proposed models and pathways, with several features in common: they are cyclical and include multiple phases of cognitive, metacognitive, and emotional strategies that consider a learner’s context, planning, monitoring of learning, and reflection (Panadero, 2017).

Winne and Hadwin’s (1998) model of SRL may be particularly appropriate when exploring learning and genAI because it has often been used in computer-assisted learning environments (Panadero et al., 2016). Winne and Hadwin’s model identifies four main phases:

  1. Task definition: understanding the learning task and its context
  2. Goal setting and planning: defining a specific goal and methods to achieve it
  3. Enactment of strategies: strategically carrying out the planned tasks
  4. Metacognitive adaptation: continuous revision and adaptation of students’ engagement and long-term beliefs about themselves, learning, and the context

With modern genAI systems, cognitive aspects of the SRL process can be automated, with the tools completing the enactment phase for students. Historical AI tutoring programs focused on assisting with some of the metacognitive components of SRL, such as evaluating student responses and determining the appropriate difficulty level of the learner’s next task (Molenaar, 2022). While potentially helpful for content learning, this might also harm students’ SRL skills by removing the need for students to engage in metacognition. An appropriately designed genAI learning system would support learners by scaffolding metacognitive and cognitive SRL components in a hybrid human-AI regulation model (Lin, 2023; Molenaar, 2022).

Unfortunately, most AI-enabled chatbots neither account for nor promote metacognition or SRL (e.g., Shetye, 2024). In fact, the use of genAI for learning can lead to “metacognitive laziness,” eroding students’ SRL capabilities and critical thinking skills (Deng & Yu, 2023; Fan et al., 2024). Todd Zakrajsek of UNC-Chapel Hill provides a similar analysis of genAI and SRL in a recent post on The Scholarly Teacher. Considering how genAI tools can better support SRL processes is key to our thinking about how to evaluate genAI tools’ potential to improve student learning outcomes, and to contextualizing the evaluations we do on the tools themselves.


References

  • Arowosegbe, A., Alqahtani, J. S., & Oyelade, T. (2024). Perception of generative AI use in UK higher education. Frontiers in Education, 9. https://doi.org/10.3389/feduc.2024.1463208
  • Deng, X., & Yu, Z. (2023). A Meta-Analysis and Systematic Review of the Effect of Chatbot Technology Use in Sustainable Education. Sustainability, 15(4), Article 4. https://doi.org/10.3390/su15042940
  • Fan, Y., Tang, L., Le, H., Shen, K., Tan, S., Zhao, Y., Shen, Y., Li, X., & Gašević, D. (2024). Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance. British Journal of Educational Technology, 56(2), 489-530. https://doi.org/10.1111/bjet.13544
  • Johnston, H., Wells, R. F., Shanks, E. M., Boey, T., & Parsons, B. N. (2024). Student perspectives on the use of generative artificial intelligence technologies in higher education. International Journal for Educational Integrity, 20(1), 2. https://doi.org/10.1007/s40979-024-00149-4
  • Lin, X. (2023). Exploring the Role of ChatGPT as a Facilitator for Motivating Self-Directed Learning Among Adult Learners. Adult Learning, 35(3). https://doi.org/10.1177/10451595231184928
  • Marikyan, D., & Papagiannidis, S. (2024). Technology Acceptance Model: A review. In S. Papagiannidis (Ed.), TheoryHub Book. Available at https://open.ncl.ac.uk. ISBN: 9781739604400
  • Molenaar, I. (2022). The concept of hybrid human-AI regulation: Exemplifying how to support young learners’ self-regulated learning. Computers and Education: Artificial Intelligence, 3, 100070. https://doi.org/10.1016/j.caeai.2022.100070
  • Panadero, E. (2017). A review of self-regulated learning: Six models and four directions for research. Frontiers in Psychology, 8(APR), 1–28. https://doi.org/10.3389/fpsyg.2017.00422
  • Panadero, E., Klug, J., & Järvelä, S. (2016). Third wave of measurement in the self-regulated learning field: When measurement and intervention come hand in hand. Scandinavian Journal of Educational Research, 60(6), 723–735. https://doi.org/10.1080/00313831.2015.1066436
  • Shetye, S. (2024). An Evaluation of Khanmigo, a Generative AI Tool, as a Computer-Assisted Language Learning App. Studies in Applied Linguistics and TESOL, 24(1), Article 1. https://doi.org/10.52214/salt.v24i1.12869
  • Winne, P. H., & Hadwin, A. F. (1998). Studying as self-regulated engagement in learning. In D. Hacker, J. Dunlosky, & A. Graesser (Eds.), Metacognition in educational theory and practice (pp. 277–304). Hillsdale, NJ: Erlbaum.