Should educators worry about ChatGPT?

Ted Underwood, Professor

The artificial intelligence chatbot ChatGPT can, among other things, generate essays and write computer code. Since it was released to the public for testing late last year, it has raised concerns about students using it to complete their homework, leading some public secondary schools to ban it and some college professors to change their course assignments. Ted Underwood is a professor of English and of information sciences and the associate dean of academic affairs in the School of Information Sciences. He recently commented in Inside Higher Ed on how to view the technology's place in higher education. He talked with News Bureau arts and humanities editor Jodi Heckel.

What is ChatGPT and how is it different from previous versions of chatbots?

Discussion tends to focus on ChatGPT because this product was made widely available for free last fall – and it was a little easier to use than earlier language models. But ChatGPT is far from unique.

ChatGPT is based on technology that has been around in one form or another since OpenAI released its first generative pre-trained transformer (GPT) in 2018. The basic idea is that a model is trained to predict the next word in an observed sequence of words. Then when you write a short passage – a "prompt" – the model can predict the next word in the sequence, and then the next word, and so on. To do a really good job, a model needs to recognize high-level patterns and behave as if it understood language. Because models like this grew better at generalizing as researchers increased their size, they are sometimes called "large language models." They're also called "generative AI" because they don't just analyze texts but use what they have learned to create new texts.
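To make that next-word loop concrete, here is a minimal sketch of autoregressive generation using the small, openly available GPT-2 model through the Hugging Face transformers library. The library and model are illustrative choices, not what ChatGPT runs on; they simply demonstrate the same predict-append-repeat principle.

```python
# Minimal sketch of next-word (autoregressive) generation with the open
# GPT-2 model via Hugging Face transformers. Illustration only: ChatGPT's
# own model is not publicly downloadable, but it works on the same principle.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Large language models are"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Repeatedly predict the most likely next token and append it.
for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits
    next_token = logits[0, -1].argmax()  # best guess for the next token
    input_ids = torch.cat([input_ids, next_token.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```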

ChatGPT improves on earlier versions of this technology by training the model specifically to treat prompts as turns in a conversation and to respond to them, rather than simply continuing your statement.
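Roughly speaking, that conversational framing flattens the turns of a dialogue into one text sequence, so that "continuing the text" amounts to answering the latest message. The role names and template in this sketch are invented for illustration and are not ChatGPT's actual internal format.

```python
# Sketch of the turn-based format chat-tuned models are trained on.
# The role labels and <|...|> delimiters are illustrative inventions.
conversation = [
    {"role": "system",    "content": "You are a helpful assistant."},
    {"role": "user",      "content": "What is a large language model?"},
    {"role": "assistant", "content": "A model trained to predict the next word."},
    {"role": "user",      "content": "Why is that useful?"},
]

def to_prompt(turns):
    """Flatten the turns into one text sequence ending with an assistant
    cue, so the model responds to the last message instead of extending it."""
    text = "".join(f"<|{t['role']}|>\n{t['content']}\n" for t in turns)
    return text + "<|assistant|>\n"

print(to_prompt(conversation))
```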

ChatGPT is not unique in this; similar models have been released by Google, Meta and Anthropic. And OpenAI itself has recently released GPT-4, a more capable successor.

Should educators be worried about students using ChatGPT or other artificial intelligence writing programs to write their research papers, or should they look at how AI applications can be used as educational tools to help students learn?

We should do both. But I would urge us to focus a little less on the short-term fate of our assignments and more on long-term consequences for students.

Some students are using models to help write their papers and do homework, and yes, that is something to worry about. We want students to learn, and if they're just pasting an assignment into a box and hitting return, they're not learning much.

But that's a small part of a bigger issue, which is that the students now entering college are likely to graduate into a world transformed by artificial intelligence. Models like ChatGPT are already being integrated into word processing software and search engines. In 10 years, they will be as familiar as autocomplete is to us now. So, telling students "just say no to AI" is not going to be a sufficient way to prepare them for the 2030s. Students will be using these models, and will need to understand them.

There are definitely some contexts, like a closed-book exam, where it's appropriate to say "don't use AI," just as we currently say "don't look up the answer on the web." But universities also will need to offer courses and assignments that teach students how to understand these tools and use them in appropriate, creative ways.

What are some other uses of AI language models?

Right now, we're approaching AI in the way we often approach a new technology: We're trying to fit it into an existing niche. Large language models are widely understood as writing machines, so we think, "maybe students will use models to write their term papers." Models also seem able to answer questions, so we think, "maybe they'll replace search engines."

A language model isn't a library or a copy of the internet; it's literally just a model of language. People will be disappointed if they expect the language model itself to provide knowledge.

I think we're going to find more interesting ways to use this technology. Instead of asking old questions for which answers already exist, the interesting way to use one of these models is often to hand it new evidence that you want analyzed, while precisely describing the analysis you want to perform.

I like British programmer Simon Willison's way of putting this, which is that a language model is a calculator for words. The model doesn't contain exhaustive knowledge. But it's a flexible little machine that can follow verbal instructions, transform text and think out loud – so to speak – in writing.

You wouldn't ask a calculator to perform a physics experiment or engineer a bridge on its own – and by the same token, we probably shouldn't ask a language model to write important documents on its own. But if we can break a project down into well-defined tasks, a language model may make those tasks easier. A model could, for instance, read through a stack of emails one by one in order to assess their relevance to a question, and then instruct itself to condense the most relevant emails into a summary.
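Here is one way that email-triage workflow might look in code – a sketch only, built around a hypothetical complete() helper that stands in for whatever language-model call you have available.

```python
# Sketch of the email-triage workflow described above. The complete()
# helper is hypothetical and stands in for any language-model call.
def complete(prompt: str) -> str:
    """Placeholder for a language-model call (e.g., an API request)."""
    raise NotImplementedError("wire this up to your model of choice")

def summarize_relevant(emails: list[str], question: str) -> str:
    relevant = []
    # Step 1: assess each email's relevance to the question, one by one.
    for email in emails:
        verdict = complete(
            f"Question: {question}\n\nEmail:\n{email}\n\n"
            "Answer YES or NO: is this email relevant to the question?"
        )
        if verdict.strip().upper().startswith("YES"):
            relevant.append(email)
    # Step 2: condense the relevant emails into one summary.
    joined = "\n---\n".join(relevant)
    return complete(
        f"Summarize these emails as they bear on '{question}':\n{joined}"
    )
```

Splitting the job into two narrow prompts – a yes/no relevance check, then a summary of the survivors – is exactly the kind of decomposition into well-defined tasks described above.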

In short, these aren't substitutes for human writing or human knowledge. They're flexible tools for transforming language. We'll need to learn how to use them, and it's even possible that we'll end up using them for analysis more than we do for writing.

