Sunday, October 29, 2023

The Impact of Artificial Intelligence on Teaching, Learning, and Educational Equity: A Review

Abstract

This paper provides a comprehensive review of the impact of artificial intelligence (AI) on education, with a focus on teaching, learning, and educational equity. AI refers to computer systems that exhibit human-like intelligence and capabilities. The rapid advancement of AI, fueled by increases in computing power and the availability of big data, has enabled the proliferation of AI applications in education. However, as AI becomes more deeply embedded in educational technologies, significant opportunities as well as risks emerge.

This paper reviews recent literature on AI in education and synthesizes key insights around three major themes: 
1) AI-enabled adaptive learning systems to personalize instruction; 
2) AI teaching assistants to support instructors; and 
3) the emergence of algorithmic bias and threats to educational equity. 

While AI shows promise for enhancing learning and instruction, risks around data privacy, student surveillance, and discrimination necessitate thoughtful policies and safeguards. Realizing the benefits of AI in education requires centering human values and judgement, pursuing context-sensitive and equitable designs, and conducting rigorous research on impacts.

Introduction

The field of artificial intelligence (AI) has seen tremendous advances in recent years, with technologies like machine learning and neural networks enabling computers to exhibit human-like capabilities such as visual perception, speech recognition, and language translation [1]. As these intelligent systems become increasingly sophisticated, AI is permeating various sectors of society including business, healthcare, transportation, and education [2]. Within education, AI technologies are being incorporated into software platforms, apps, intelligent tutors, robots, and other tools to support teaching and learning [3]. Proponents argue AI can enhance educational effectiveness and efficiency, for example by providing adaptivity and personalization at scale [4]. However, critics point to risks around data privacy, student surveillance, and algorithmic bias [5]. This paper reviews recent literature on AI applications in education and synthesizes key insights around impacts on teaching, learning, and equity.  

The surge of interest in AI for education is evident in the rapid increase in both academic publications and industry activity. As Chaudhry and Kazim [6] note in their review, publications on "AI" and "education" have grown exponentially since 2015. Major technology firms like Google, Amazon, Microsoft, and IBM are actively developing AI capabilities for education [7]. Venture capital investment in AI and education startups has also risen sharply, with over $1.5 billion invested globally in just the first half of 2019 [8]. The Covid-19 pandemic further accelerated AI adoption as schools rapidly transitioned online [9]. 

Within education, AI techniques have been applied across three major domains:
1) improving learning and personalization for students;
2) assisting instructors and enhancing teaching; and
3) transforming assessment and administration [10].

This paper synthesizes findings and insights from recent literature around each of these domains. It highlights opportunities where AI shows promise in advancing educational goals as well as risks that necessitate thoughtful policies and safeguards. Realizing the benefits of AI in education requires centering human values and judgement, pursuing context-sensitive and equitable designs, and conducting rigorous research on impacts.

AI for Personalized and Adaptive Learning

A major focus of AI in education has been developing intelligent tutoring systems and adaptive platforms to personalize learning for students [11]. The goal is to customize instruction, activities, pace, and feedback to each individual student's strengths, needs, interests, and prior knowledge. Studies of early intelligent tutors like Cognitive Tutor for mathematics indicated they can improve learning outcomes [12]. With today's advances in machine learning and educational data mining, researchers aim to expand the depth and breadth of personalization [13]. For example, AI techniques can analyze patterns in how students interact with online learning resources to model learner knowledge and behaviors [14]. Analytics-driven systems provide customized course content sequences [15], intelligent agents offer personalized guidance [16], and affect-sensitive technologies adapt to students' emotional states [17].
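
As a concrete illustration of how interaction data can feed a learner model, the sketch below implements classic Bayesian Knowledge Tracing, a standard technique in educational data mining for estimating a student's hidden mastery of a skill from a sequence of correct and incorrect responses. The parameter values and the response log are illustrative assumptions, not taken from any particular system.

```python
# A minimal sketch of Bayesian Knowledge Tracing (BKT): estimate the hidden
# probability that a student has mastered a skill from observed responses.
# All parameter values below are illustrative assumptions, not calibrated
# against real data.

def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Return the updated mastery estimate after one observed response."""
    if correct:
        # Bayes rule: P(knows | correct), allowing for lucky guesses.
        evidence = p_know * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        # Bayes rule: P(knows | incorrect), allowing for slips.
        evidence = p_know * p_slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    # Account for the chance the student learned the skill on this step.
    return posterior + (1 - posterior) * p_learn


if __name__ == "__main__":
    p_know = 0.3  # assumed prior probability of mastery
    responses = [False, True, True, False, True, True]  # hypothetical log
    for step, correct in enumerate(responses, start=1):
        p_know = bkt_update(p_know, correct)
        print(f"step {step}: {'correct' if correct else 'wrong'}, "
              f"estimated mastery = {p_know:.2f}")
```

A production tutor would fit the slip, guess, and learning parameters to logged data for each skill; the point here is only that a few interpretable probabilities can turn raw response logs into a running estimate of what a learner knows.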

Proponents argue AI-enabled personalization makes learning more effective, efficient, and engaging [4]. It allows students to learn at their own pace with systems responsive to their individual progress. AI tutors can provide hints, feedback, and explanations tailored to each learner's difficulties [18]. Researchers are expanding personalization beyond academic knowledge to include motivational and metacognitive factors critical to self-regulated learning [19]. AI also facilitates access, for instance by providing accommodations for diverse learners through multi-modal interactions [20]. 
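
One way to make "learning at their own pace" concrete is mastery-based sequencing: keep assigning practice on a skill until the learner's estimated mastery (for example, from a knowledge-tracing model like the sketch above) clears a threshold. The skill names, mastery values, and the 0.95 threshold below are hypothetical.

```python
# A minimal sketch of mastery-based sequencing: pick the skill the learner is
# least likely to have mastered, and stop assigning practice once every
# estimate clears a target threshold. All values here are hypothetical.

MASTERY_THRESHOLD = 0.95

def next_skill(mastery_estimates):
    """Return the weakest skill still below threshold, or None if all are mastered."""
    unmastered = {s: p for s, p in mastery_estimates.items() if p < MASTERY_THRESHOLD}
    if not unmastered:
        return None  # learner has (provisionally) mastered everything
    return min(unmastered, key=unmastered.get)

if __name__ == "__main__":
    estimates = {"fractions": 0.97, "ratios": 0.62, "percentages": 0.81}
    print(f"next practice activity: {next_skill(estimates)}")  # -> ratios
```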

However, critics note that data-driven personalization risks narrowing educational experiences in detrimental ways [21]. AI systems modeled on standardized datasets may miss contextual factors that teachers understand, and patterns recognized by algorithms do not necessarily correspond to effective pedagogy. Students could become over-dependent on AI guidance rather than developing self-direction. AI could also enable new forms of student surveillance and monitoring by tracking detailed behavioral data [22]. More research is needed on how to design AI that adapts to learners in holistic rather than reductive ways. Centering human values around agency, trust, and the ethical use of student data is critical [23].

AI Teaching Assistants for Instructors

Another major application of AI is developing virtual teaching assistants to support instructors. The goal is to automate routine administrative tasks and provide teachers with data-driven insights to enhance their practice [24]. Proposed AI assistance ranges from facial and speech recognition to track classroom interactions [25], to automated essay scoring and feedback to students [26], to AI-generated lesson plans personalized to each teacher's needs [27]. Some argue offloading repetitive tasks like grading could allow teachers to focus on higher-value practices like mentoring students [28]. AI tutors might also extend teachers' ability to individualize instruction when facing constraints of time and resources [29].
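
To ground the automated essay scoring example, the sketch below shows one common baseline approach: represent essays as TF-IDF features and fit a regression model to human-assigned scores. The tiny essay and score lists are fabricated for illustration; real systems use much richer features, large scored corpora, and careful validation, including checks for bias.

```python
# A minimal sketch of a baseline automated essay scorer: TF-IDF features plus
# ridge regression fit to human-assigned scores. The training essays and
# scores below are fabricated toy data for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

train_essays = [
    "The experiment shows that plants grow faster with more light.",
    "plants need light water and also soil to grow good",
    "Increased light exposure accelerated growth across all trial groups.",
    "the plants grew",
]
train_scores = [4.0, 2.0, 5.0, 1.0]  # hypothetical human-assigned scores

scorer = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
scorer.fit(train_essays, train_scores)

new_essay = ["Light levels strongly influenced how quickly the plants grew."]
print(f"predicted score: {scorer.predict(new_essay)[0]:.1f}")
```

Such a scorer should inform, not replace, a teacher's judgement: its predictions are only as good as the rubric, the training sample, and the features it can see.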

However, effective adoption of AI teaching assistants depends on thoughtful implementation guided by teachers' own priorities [30]. Rather than replacing human judgement, teachers need AI designed to complement their expertise [31]. This requires transparent systems that teachers can monitor, interpret, and override as needed [32]. Teachers must shape the goals and constraints of AI tools based on pedagogical considerations, not technical capabilities alone. Alignment with ethical priorities like student privacy and equitable treatment is essential. Teachers will also require extensive training to work effectively with AI systems and understand their limitations [33]. More research should center teacher voice in co-designing educational AI [34].

Algorithmic Bias and Threats to Equity

As algorithms play an expanding role in education, researchers and ethicists have raised concerns about risks of bias, discrimination, and threats to educational equity [35]. Although often presumed to be objective, AI systems can propagate and amplify biases present in underlying training data [36]. Algorithms trained on datasets with systemic gaps or distortions may lead to unfair outcomes. Discriminatory decisions could scale rapidly as AI gets embedded into school software infrastructures [37]. Students from marginalized communities may face new forms of algorithmic discrimination if systems learn and reproduce historical inequities [38]. 

Biased AI presents significant risks across education. In personalized learning platforms, some students could be unfairly stranded on remedial paths [39]. Algorithmic hiring tools could discount talented teacher candidates [40]. Automated proctoring software might exhibit racial and gender bias in flagging students for cheating [41]. As schools adopt AI technologies, they must rigorously evaluate for potential harms using tools like equity audits [42]. Reducing algorithmic bias requires improving data quality as well as designing systems that deliberately counteract structural inequality [43]. Centering stakeholders in participatory design can also help align AI to communities' values [44]. Ongoing oversight, transparency, and accountability are critical [45].
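
As a concrete example of what one step of an equity audit might look like, the sketch below compares an automated proctoring tool's false positive rate (honest students wrongly flagged) across student groups. The records are fabricated illustrative data; a real audit would involve larger samples, multiple fairness metrics, and qualitative review with affected stakeholders.

```python
# A minimal sketch of one equity-audit check: compare false positive rates
# (honest students wrongly flagged for cheating) across groups.
# The records below are fabricated illustrative data.
from collections import defaultdict

# Each record: (group label, flagged by the system?, actually cheated?)
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", False, False),
    ("group_a", True, True),  ("group_b", True, False),  ("group_b", True, False),
    ("group_b", False, False), ("group_b", True, True),
]

false_positives = defaultdict(int)
honest_students = defaultdict(int)
for group, flagged, cheated in records:
    if not cheated:
        honest_students[group] += 1
        if flagged:
            false_positives[group] += 1

for group in sorted(honest_students):
    fpr = false_positives[group] / honest_students[group]
    print(f"{group}: false positive rate = {fpr:.2f}")
# A large gap between groups is a red flag that warrants investigation
# before (or instead of) deployment.
```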

Future Directions and Policy Implications 

This review highlights both significant opportunities and serious risks as AI becomes further embedded in education. Optimists see potential to improve learning, teaching, and assessment at scale. Pessimists warn of amplified inequality, loss of privacy, and diminished human relationships. The likely trajectory depends on how key stakeholders guide AI development and adoption in education [46]. Students, families, educators, and communities must be empowered in shaping the use of AI in schools. In addition to technical skills, designers of educational AI need cross-disciplinary expertise in learning sciences, human development, and ethics [47]. Policymakers will need to evolve regulations around data privacy and algorithmic accountability in education [48].

With thoughtful, equitable implementation guided by research, AI may support more personalized, empowering, and human-centered educational experiences. But we must proactively address risks and center human judgement to prevent AI from narrowing pedagogical possibilities or harming vulnerable student populations. The promise of transformative benefits makes progress imperative, but so does the threat of baking in and scaling inequality. Across the literature reviewed, perspectives appear to be evolving from an early focus on opportunities and efficiency toward greater attention to risks, ethics, and equitable access. The papers referenced broadly agree on the opportunities but diverge on the risks and challenges, and more recent work places greater emphasis on centering human judgement and oversight. By building broad consensus on the difficult questions early and insisting that technologies align with educational values and goals, the education community can lead the way toward ethical and empowering innovation.

References

[1] Russell, S. J., Norvig, P., & Davis, E. (2010). Artificial intelligence: A modern approach. Prentice Hall, Upper Saddle River.

[2] Makridakis, S. (2017). The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures, 90, 46-60.

[3] Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign.

[4] Shah, D. (2018). By the numbers: MOOCs in 2018. Class Central. 

[5] Williamson, B. (2017). Who owns educational theory? Big data, algorithms and the politics of education. E-Learning and Digital Media, 14(3), 129-144.

[6] Chaudhry, M. A., & Kazim, E. (2021). Artificial Intelligence in Education (AIEd): A high-level academic and industry note 2021. AI and Ethics.

[7] Doorn, N. (2019). Algorithms, artificial intelligence and joint human-machine decision-making. In Ethics of Data Science Conference.

[8] Wan, T. (2019). Edtech unicorns show the health of venture capital. EdSurge.

[9] Educate Ventures Research. (2020). Shock to the system: COVID-19's long-term impacts on education in Europe. Cambridge University Press.

[10] Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson.

[11] Nkambou, R., Bourdeau, J., Mizoguchi, R. (Eds.). (2010). Advances in intelligent tutoring systems (Vol. 308). Springer Science & Business Media.

[12] Ma, W., Adesope, O. O., Nesbit, J. C., & Liu, Q. (2014). Intelligent tutoring systems and learning outcomes: A meta-analysis. Journal of Educational Psychology, 106(4), 901.

[13] Bienkowski, M., Feng, M., & Means, B. (2012). Enhancing teaching and learning through educational data mining and learning analytics: An issue brief. US Department of Education, Office of Educational Technology, 1-57.

[14] Baker, R. S., & Inventado, P. S. (2014). Educational data mining and learning analytics. In Learning analytics (pp. 61-75). Springer, New York, NY.

[15] Manouselis, N., Drachsler, H., Vuorikari, R., Hummel, H., & Koper, R. (2011). Recommender systems in technology enhanced learning. In Recommender systems handbook (pp. 387-415). Springer, Boston, MA.

[16] Veletsianos, G. (2016). The defining characteristics of emerging technologies and emerging practices in digital education. In Emergence and innovation in digital learning (pp. 3-16). AU Press, Athabasca University.

[17] Afzal, S., & Robinson, P. (2011). Designing for automatic affect inference in learning environments. Educational Technology & Society, 14(4), 21-34.

[18] Rus, V., D'Mello, S., Hu, X., & Graesser, A. C. (2013). Recent advances in intelligent tutoring systems with conversational dialogue. AI Magazine, 34(3), 42-54.

[19] Roll, I., & Wylie, R. (2016). Evolution and revolution in artificial intelligence in education. International Journal of Artificial Intelligence in Education, 26(2), 582-599.

[20] Sottilare, R. A., Brawner, K. W., Goldberg, B. S., & Holden, H. K. (2012). The generalized intelligent framework for tutoring (GIFT).

[21] Roberts-Mahoney, H., Means, A. J., & Garrison, M. J. (2016). Netflixing human capital development: Personalized learning technology and the corporatization of K-12 education. Journal of Education Policy, 31(4), 405-420.

[22] Williamson, B. (2020). Datafication and automation in higher education: Trojan horse or helping hand?. Learning, Media and Technology, 45(1), 1-14.

[23] Prinsloo, P., & Slade, S. (2017). An elephant in the learning analytics room: The obligation to act. LAK17: Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 46-55.  

[24] Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson.

[25] Chen, G., Clarke, S. N., & Resnick, L. B. (2015). Classroom discourse analyzer (CDA): A discourse analytic tool for teachers. Technology, Instruction, Cognition & Learning, 10.

[26] Ke, Z., & Ng, V. (2019). Automated essay scoring: A survey of the state of the art. In IJCAI (pp. 6300-6308).

[27] Celik, I., Dindar, M., Muukkonen, H., Järvelä, S., Makransky, G., & Larsen, D.S. (2022). The promises and challenges of artificial intelligence for teachers: A systematic review of research. TechTrends, 66, 616–630. 

[28] Bryant, J., Heitz, C., Sanghvi, S., & Wagle, D. (2020). How artificial intelligence will impact K-12 teachers. McKinsey & Company.  

[29] Timms, M. J. (2016). Letting artificial intelligence in education out of the box: Educational cobots and smart classrooms. International Journal of Artificial Intelligence in Education, 26(2), 701-712.

[30] Molenaar, I. (2022). Towards hybrid human-AI learning technologies. European Journal of Education. 

[31] Tabuenca, B., Kalz, M., Drachsler, H., & Specht, M. (2015, March). Time will tell: The role of mobile learning analytics in self-regulated learning. Computers & Education, 89, 53-74.

[32] Kazimzade, E., Koshiyama, A., & Treleaven, P. (2020). Towards algorithm auditing: A survey on managing legal, ethical and technological risks of AI, ML and associated algorithms. arXiv preprint arXiv:2012.04387.

[33] Kennedy, M. J., Rodgers, W. J., Romig, J. E., Mathews, H. M., & Peeples, K. N. (2018). Introducing preservice teachers to artificial intelligence and inclusive education. The Educational Forum, 82(4), 420-428. 

[34] Moeini, A. (2020). Theorising evidence-informed learning technology enterprises: A participatory design-based research approach (Doctoral dissertation, UCL (University College London)).

[35] Hutt, S., Mills, C., White, J., Donnelly, P. J., & D'Mello, S. K. (2016). The eyes have it: Gaze-based detection of mind wandering during learning with an intelligent tutoring system. In EDM (pp. 86-93).

[36] Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica, May, 23.

[37] Baker, R. S. (2019). Challenges for the future of educational data mining: The baker learning analytics prizes. Journal of Educational Data Mining, 11(1), 1-17.

[38] Benjamin, R. (2019). Race after technology: Abolitionist tools for the new jim code. John Wiley & Sons.

[39] Kizilcec, R. F., & Lee, E. K. (2020). Algorithmic fairness in education. arXiv preprint arXiv:2007.05443.

[40] Bornstein, M. H. (2017). Do teachers’ implicit biases contribute to income-based grade and developmental disparities. Psychological Science Agenda. 

[41] Zhang, S., Lesser, V., McCarthy, K., King, T., Zhang, D., Merrill, N., ... & Stautberg, S. (2021, April). Understanding effects of proctoring and privacy concerns on student learning. In Proceedings of the 14th ACM International Conference on Educational Data Mining (pp. 335-340).

[42] Raji, I. D., Gebru, T., Mitchell, M., Buolamwini, J., Lee, J., & Denton, E. (2020). Saving face: Investigating the ethical concerns of facial recognition auditing. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 145-151).

[43] Holstein, K., McLaren, B. M., & Aleven, V. (2018). Student learning benefits of a mixed-reality teacher awareness tool in AI-enhanced classrooms. In International conference on artificial intelligence in education (pp. 154-168). Springer, Cham. 

[44] Roschelle, J., Penuel, W. R., & Shechtman, N. (2006). Co-design of innovations with teachers: Definition and dynamics. In Proceedings of the 7th international conference on Learning sciences (pp. 606-612).

[45] O'Neil, C. (2017). The ivory tower can’t keep ignoring tech. The New York Times.

[46] Dieterle, E., Dede, C., & Walker, M. (2022). The cyclical ethical effects of using artificial intelligence in education. AI and Ethics, 1-13.

[47] Luckin, R., Holmes, W., Griffiths, M., & Forcier, L.B. (2016). Intelligence unleashed: An argument for AI in education. Pearson. 

[48] Nentrup, E. (2022). How policymakers can support educators and technology vendors towards safe AI. EdSafe AI Alliance.

Conclusion

The rapid advancement of AI is bringing transformational opportunities as well as risks to education. Thoughtful governance, equitable implementation, rigorous research, and centering human values and judgement will be critical to realizing AI's benefits while protecting students and teachers. By proactively addressing concerns around data privacy, surveillance, algorithmic bias, and other threats, the education community can lead in developing ethical, empowering, and socially beneficial AI systems. With diligence, AI may enhance learning experiences and help schools better achieve their missions. But we must insist that technologies align with educational values, not vice versa. The promise is immense, but so is the necessity of progressing prudently and equitably.
