School of Information Sciences

Should educators worry about ChatGPT?

Ted Underwood, Professor

The artificial intelligence chatbot ChatGPT can, among other things, generate essays and write computer code. Since being released to the public for testing late last year, it has raised concerns about students using ChatGPT to complete their homework and led some secondary public schools to ban it and college professors to change their course assignments. Ted Underwood is a professor of English and of information sciences and the associate dean of academic affairs in the School of Information Sciences. He recently commented in Inside Higher Ed on how to view the technology's place in higher education. He talked with News Bureau arts and humanities editor Jodi Heckel.

What is ChatGPT and how is it different from previous versions of chatbots?

Discussion tends to focus on ChatGPT because this product was made widely available for free last fall – and it was a little easier to use than earlier language models. But ChatGPT is far from unique.

ChatGPT is based on technology that has been around in one form or another since OpenAI released the first version of generative pre-trained transformers in 2018. The basic idea is that a model is trained to predict the next word in an observed sequence of words. Then when you write a short passage – a "prompt" – the model can predict the next word in the sequence, and then the next word, and so on. To do a really good job, a model needs to recognize high-level patterns and behave as if it understood language. Because models like this grew better at generalizing as researchers increased the size of the model, they are sometimes called "large language models." They're also called "generative AI" because they don't just analyze texts but use what they have learned to create new texts.
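The next-word prediction idea described above can be illustrated with a deliberately tiny sketch. This is not how a large language model is actually built – real models use neural networks trained on vast corpora – but a simple bigram counter, written here as an assumed toy example, shows the same loop: given the words so far, pick a likely next word, append it, and repeat.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction (not a real "large" model):
# count which word follows which in a tiny corpus, then repeatedly
# predict a likely next word to extend a prompt.

corpus = (
    "the model predicts the next word and then the next word "
    "and so on until the model stops"
).split()

# Build bigram counts: for each word, how often each successor appears.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def generate(prompt_word, length=5):
    """Greedily extend a one-word prompt by picking a most-common successor."""
    words = [prompt_word]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

Scaling this idea up – replacing word counts with a neural network and a tiny corpus with much of the web – is, very roughly, what turns a toy like this into a model that appears to understand language.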

ChatGPT improves on earlier versions of this technology by training a model specifically to treat prompts as turns in a conversation and respond, instead of just continuing your statement.

ChatGPT is not unique in this; similar models have been released by Google, Meta and Anthropic. And OpenAI itself has recently released GPT-4, which is better than ChatGPT.

Should educators be worried about students using ChatGPT or other artificial intelligence writing programs to write their research papers, or should they look at how AI applications can be used as educational tools to help students learn?

We should do both. But I would urge us to focus a little less on the short-term fate of our assignments and more on long-term consequences for students.

Some students are using models to help write their papers and do homework, and yes, that is something to worry about. We want students to learn, and if they're just pasting an assignment into a box and hitting return, they're not learning much.

But that's a small part of a bigger issue, which is that the students now entering college are likely to graduate into a world transformed by artificial intelligence. Models like ChatGPT are already being integrated into word processing software and search engines. In 10 years, they will be as familiar as autocomplete is to us now. So, telling students "just say no to AI" is not going to be a sufficient way to prepare them for the 2030s. Students will be using these models, and will need to understand them.

There are definitely some contexts, like a closed-book exam, where it's appropriate to say "don't use AI," just as we currently say "don't look up the answer on the web." But universities also will need to offer courses and assignments that teach students how to understand these tools and use them in appropriate, creative ways.

What are some other uses of AI language models?

Right now, we're approaching AI in the way we often approach a new technology: We're trying to fit it into an existing niche. Large language models are widely understood as writing machines, so we think, "maybe students will use models to write their term papers." Models also seem able to answer questions, so we think, "maybe they'll replace search engines."

A language model isn't a library or a copy of the internet; it's literally just a model of language. People will be disappointed if they expect the language model itself to provide knowledge.

I think we're going to find more interesting ways to use this technology. Instead of asking old questions for which answers already exist, the interesting way to use one of these models is often to hand it new evidence that you want analyzed, while precisely describing the analysis you want to perform.

I like British programmer Simon Willison's way of putting this, which is that a language model is a calculator for words. The model doesn't contain exhaustive knowledge. But it's a flexible little machine that can follow verbal instructions, transform text and think out loud – so to speak – in writing.

You wouldn't ask a calculator to perform a physics experiment or engineer a bridge on its own – and by the same token, we probably shouldn't ask a language model to write important documents on its own. But if we can break a project down into well-defined tasks, a language model may make those tasks easier. A model could, for instance, read through a stack of emails one by one in order to assess their relevance to a question, and then instruct itself to condense the most relevant emails into a summary.
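The email-triage idea above can be sketched as a small pipeline: break the job into well-defined steps and call the model once per step. In this sketch the `ask_model` function is a hypothetical stand-in – real code would send the instruction and text to whatever model API you use; here a simple keyword-overlap stub takes its place so the example runs on its own.

```python
def ask_model(instruction, text):
    """Hypothetical model call. Real code would send `instruction` and
    `text` to a language model; this stub scores relevance by counting
    how many instruction keywords appear in the text."""
    keywords = set(instruction.lower().split())
    overlap = len(keywords & set(text.lower().split()))
    return "relevant" if overlap >= 2 else "not relevant"

def triage(emails, question):
    # Step 1: assess each email's relevance to the question, one by one.
    relevant = [e for e in emails if ask_model(question, e) == "relevant"]
    # Step 2: condense the relevant emails (here, just their first sentences).
    return " ".join(e.split(".")[0] + "." for e in relevant)

emails = [
    "The budget meeting moved to Friday. Please update your calendars.",
    "Lunch options for the retreat are attached.",
    "Friday budget numbers are ready for the meeting. See the spreadsheet.",
]
summary = triage(emails, "budget meeting Friday")
print(summary)
```

The point of the decomposition is that each step is small and checkable: a person can audit which emails were kept and whether the condensed version is faithful, rather than trusting one opaque end-to-end answer.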

In short, these aren't substitutes for human writing or human knowledge. They're flexible tools for transforming language. We'll need to learn how to use them, and it's even possible that we'll end up using them for analysis more than we do for writing.
