
Letter from America: Will AI remake or destroy education?

Daron Acemoglu describes four possible futures for AI in education

In the decades to come, late 2022 and early 2023 may be remembered as the first steps in a generative AI revolution, as ChatGPT became arguably the fastest-spreading new technology in human history and expectations of AI-powered riches became commonplace. Or the rollout of ChatGPT may be remembered as a uniquely irresponsible act: unleashing a powerful but flawed technology without any guardrails in place, all in the name of hype and market share.

The future of generative AI is of course not just hard to predict, but largely unwritten. Nevertheless, it is reasonable to expect that some of its most negative, as well as potentially some of its most beneficial, effects could be on the education sector.

Hundreds of millions of students around the globe are already using ChatGPT. In the United States, a survey in May 2023 estimated that 30% of college students were already doing so. More will start experimenting with generative AI as these tools advance and become more widely available. How the education sector responds to the widespread use of this technology by students and educators alike remains to be seen, but any positive effects depend on the right responses from educational institutions as well as the tech industry.

To understand the possibilities – and the dangers – it is useful to distinguish four scenarios.

The first, a positive scenario, involves using generative AI tools to introduce new teaching methods and forms of classroom organization that could boost the productivity of instruction, especially for children who are currently left behind. The main promise here is “personalized teaching”, targeted to each student’s strengths and weaknesses in different fields. Currently, such personalized teaching is undertaken in the context of individualized education programs (IEPs), which are expensive and cannot be offered to more than a minority of students, for example those with disabilities. This is despite evidence suggesting that such programs are quite effective, particularly for children from lower socioeconomic backgrounds or those facing learning difficulties.

Implementing anything like IEPs at mass scale is currently prohibitively costly. This could change with generative AI. For example, generative AI applications could enable teachers to determine in real time how the curriculum should be adapted for different subsets of students, while at the same time identifying students who should be moved between subsets, depending on the specific problems each student is having and how the material is being modified.

This would be a completely new set of tasks for teachers, who would need to be trained and become much more skilled. More teachers would need to be hired to work alongside these human-complementary AI tools. The promise is a significant improvement in learning outcomes for students from diverse backgrounds. Additionally, because this reorganization of classrooms and implementation of new tools introduces new tasks for educators, it could be one of the areas in which generative AI contributes to enriched human work (and creates a counterweight against AI-based automation).

While I believe such advances are feasible, one big roadblock is that there is currently very little investment in these types of tools. Instead, the emphasis so far has been on AI-based automation (automatic testing technologies, computerized teaching, etc.). Worse, there may be a chicken-and-egg problem: the tools for creating new learning methods and tasks for teachers do not yet exist, so schools do not recognize the need for them; and without explicit demand from schools, the tools are not being developed. Nor is this emerging as a priority area for the tech industry.

A second scenario could be even more positive, but it faces even greater roadblocks. This scenario would build on the first, but would also leverage large language models (LLMs) to help students with self-training, learning, and information filtering. One promise of LLMs is to enable diverse users to discover accurate information and gain expertise in various areas. This could be highly valuable to students in both secondary schooling and higher education, provided that the tools are developed in a way that supplies reliable information while still leaving humans in the driving seat.

For example, almost every student in the United States has access to a vast amount of information on the Internet, but organizing, filtering, and processing that information is a daunting task, and it becomes more challenging as the amount of available information grows. Generative AI could be most useful in training and learning (as well as in decision-making in general) if it can provide this kind of information filtering and processing help.

An additional challenge is that the current path of LLM development may not be consistent with this goal, because the information these models provide is still far from reliable – as highlighted by the oft-reported hallucination problem. Moreover, judging the reliability of LLM-generated output remains elusive for most users. Reliable self-training, information discovery, and filtering may therefore require a very different generative AI architecture than the one currently being pursued.

While these positive scenarios are technically feasible, they may be much less likely than negative ones.

What may be more likely is that generative AI tools will be used in the same way that earlier generations of digital technologies and earlier waves of AI have been deployed: mostly for automation. An excessive focus on automation has an obvious cost for educators – the disappearance of various tasks and jobs. If this were in the service of much better educational outcomes, there would be a genuine trade-off to weigh. However, as with other episodes of rapid adoption of automation technologies, the real danger is “so-so automation”: eliminating human tasks without delivering transformative productivity gains.

Although the automation of knowledge tasks such as teaching has some unique features relative to the automation of blue-collar and office work, the lessons of the past still have some relevance – chief among them, greater inequality as some workers lose out from automation. This so-so automation path is more likely if education does not become a priority area for the industry, and if it materializes, it could be very costly for students, especially those who are already underperforming.

An even worse scenario is also on the cards: a failure of educational institutions, norms, and regulations to adjust to the fact that ChatGPT and other LLMs will be widely used by students. Without such adjustment, homework assignments would become impossible to monitor, rendering one of the important tools of learning inoperative, and cheating may become endemic in exams and other assignments as well. Classroom instruction may be disrupted too (and here I am not using “disruption” in the positive sense that the tech industry sometimes does).

This pernicious scenario becomes most likely if LLMs have enough knowledge and ability to answer questions and write simple essays – enough to complete assignments and pass certain exams – yet lack the adaptability, social and communication skills, and broader problem-solving capabilities of teachers and lecturers, and so cannot serve as an equally good tool for student learning. In this scenario, one or two generations of students in both middle school and higher education may miss a considerable amount of learning, with dire consequences for workforce skills in industrialized and developing countries alike.

The good news is that which of the scenarios will materialize depends on choices that educators, regulators, and innovators in the tech industry will make. The bad news is that many educators may be unready; regulators, especially in the United States, may have already fallen behind; and the tech industry may be pushing in the wrong directions – toward unnecessary automation and maximizing hype, rather than the human-complementary capabilities of this new and exciting technology.

Daron Acemoglu, 29 August 2023

References and further reading

Acemoglu, D., Autor, D., Hazell, J. and Restrepo, P. (2022). Artificial intelligence and jobs: Evidence from online vacancies. Journal of Labor Economics, 40(S1), S293-S340.

Acemoglu, D. and Johnson, S. (2023a). Power and progress: Our thousand-year struggle over technology and prosperity. PublicAffairs, New York.

Acemoglu, D. and Johnson, S. (2023b). What’s wrong with ChatGPT? Project Syndicate, February 6.

Acemoglu, D. and Restrepo, P. (2018). The race between man and machine: Implications of technology for growth, factor shares, and employment. American Economic Review, 108(6), 1488-1542.

Acemoglu, D. and Restrepo, P. (2019). Automation and new tasks: How technology displaces and reinstates labor. Journal of Economic Perspectives, 33(2), 3-30.

Acemoglu, D. and Restrepo, P. (2020). The wrong kind of AI? Artificial intelligence and the future of labour demand. Cambridge Journal of Regions, Economy and Society, 13(1), 25-35.

Acemoglu, D. and Restrepo, P. (2022). Tasks, automation, and the rise in U.S. wage inequality. Econometrica, 90(5), 1973-2016.

Bloom, B. S. (1984). The 2 Sigma Problem: The search for methods of group instruction as effective as one-to-one tutoring. Educational Researcher, 13(6), 4-16.

The Economist (2023). Large, creative AI models will transform lives and labour markets.

Forbes (2023). ChatGPT is the fastest growing app in the history of web applications. 2 February.

IEEE Spectrum (2023). Hallucinations could blunt ChatGPT’s success. 13 March.

Intelligent.com (2023). One-third of college students used ChatGPT for schoolwork during the 2022-23 academic year.

Muralidharan, K., Singh, A. and Ganimian, A. J. (2019). Disrupting education? Experimental evidence on technology-aided instruction in India. American Economic Review, 109(4), 1426-60.

The Street (2023). OpenAI targets businesses with ChatGPT’s latest huge update. August 2023.

Uteach (2023). The power of test automation in education.

Washington Post (2021). It’s tempting to replace teachers with technology, but it would be a mistake. 24 March.