To learn how ChatGPT can transform university learning, we interview a professor who has integrated it into her assignments.
On November 30th, 2022, OpenAI launched a conversational AI chatbot called ChatGPT. The chatbot, built on a large language model, quickly gained traction, with the New York Times hailing it as the finest artificial intelligence chatbot ever made available to the public.
The early days of ChatGPT were met with a mixture of excitement and skepticism. Various news outlets covered the launch of ChatGPT, with some calling it a revolutionary breakthrough in AI technology, while others expressed concerns over the potential risks and ethical implications of such advanced language processing tools.
While some were quick to embrace ChatGPT, the education world exhibited a more cautious response. Some educators and institutions were worried about the potential for students to misuse the AI, enabling academic dishonesty or undermining the learning process. There were concerns that relying on AI-driven content generation could lead to a decline in critical thinking and writing skills, as students might opt for “the easy way out” over genuine engagement with the material.
As a student or faculty member at a university, you may have heard of ChatGPT or even used it in some capacity. Since then, GPT (the model working in the background) and ChatGPT have evolved significantly, making the tool even more powerful than it was just five months ago.
Newer versions of the GPT model have since been launched. GPT-4, in the words of its creators, “exhibits human-level performance on various professional and academic benchmarks.” In fact, OpenAI claims that GPT-4 can score in the top 10 percent of test takers on the Uniform Bar Exam and around 700 on the SAT.
Additionally, several new competitors have entered the market, from Bing’s AI Chat (which offers references in its responses and is in fact based on GPT-4) to Google’s Bard (available only in the US as of April).
No matter which side of the debate you are on, there is no doubt that ChatGPT is a game-changing innovation, surpassing tools you may have used in the past.
I talked to Margaret Vail, the StFX Systems and Data Services Librarian, and Kaitlin Fuller, the StFX Scholarly Communications & Health Sciences Librarian. Both have been interested in how ChatGPT can help students do research in the library.
Kaitlin became more interested in ChatGPT when she saw discovery layers like Elicit and Consensus using language models; these are AI assistants built on technology similar to ChatGPT but geared specifically towards research. They discussed how ChatGPT can be used to enhance workflows, support student research, and help with internal processes. Margaret noted that ChatGPT can be most useful for scaffolding and getting started, as students find it difficult to start from a blank slate. She cautioned, however, that it is important to remember that chatbots like ChatGPT are “essentially predictive text and we do not know how it is predicting the text”. Other potential uses in research are finding synonyms (useful when looking up search terms), summarizing and analyzing articles, and outlining methodologies.
Both Kaitlin and Margaret explained the importance of critical thinking and information literacy when using language models like ChatGPT, as they can be helpful tools but should not replace the intellectual activity and decision making involved in research and learning.
Next month, the library will be organizing a webinar titled “ChatGPT in Academic Libraries” to explore this new tool and its applications in academic libraries.
To gain a better understanding of how ChatGPT is being (or can be) integrated into university education and the classroom, I sat down with Dr. Donna Trembinski, an Associate Professor in the History Department.
Dr. Trembinski is a medieval historian. She was, in her words, “traditionally trained to look at books,” but she also has a deep interest in technology and enjoys using new tools in her classes, which is what drew her to ChatGPT.
What was your first reaction to ChatGPT?
Dr. Trembinski: When ChatGPT came out (I'm in lots of discussion groups), I saw there were two responses. One was, this is the end of the humanities as we know it. And the other was, what can we do with it? So that's why I got interested in doing something. And when I ran it for this class as an assignment, it really was just to see how it would work.
Can you tell me a little bit more about what you did, how you used ChatGPT in the class?
Dr. Trembinski: So, there's one assignment. And it was partially just to let students know that I knew ChatGPT existed. I said go and use ChatGPT (although I know there's a lot of choice now), ask it a question about a pre-modern subject, and have it write a 700-word essay. Then they were to do any editing that was required, like copy editing, and also to comment on where sources were required and on the quality of the essay. It was an optional assignment; they didn't have to do it. But I had probably 80% of the class turn it in.
You asked them to generate the essay and then also to look at the sources. This is one of the things most of these large language models cannot do at the moment, because they're just predicting the next word. How did you get them to look up the references?
Dr. Trembinski: I didn't necessarily have them look up the references, although some students did. I only said, when you think there should be a footnote that's not there, make sure you mark it and say why you think there should be a footnote. However, I did have assignments where students went and found sources. And they found language that was very similar to some websites, especially encyclopedias such as National Geographic. So, I presume it was just searching the terms, sort of like putting them into Google.
You obviously gave the same assignment to all the students. Did you notice a lot of similarities or were they very different?
Dr. Trembinski: So, they were each able to come up with their own question, and the responses were quite different. What was similar was that the writing was grammatically pretty decent. But in terms of digging into a question historically, it was very superficial.
Some people are saying that this is the end of the essay, literature review, and summarizing because ChatGPT does a lot of that work for you. What are your thoughts on this?
Dr. Trembinski: Let me think about that a little bit. First of all, we're not there yet. The technology is not there yet. So, if you're asking me to think 10 years into the future, what does that look like? I think what's going to happen is our students are going to be getting jobs where the chatbot will produce the text, and we will be editing and fact-checking the text. And so, I think I'm probably going to be assigning more assignments like this one, which was experimental. So, if that's what you're asking, yes, I think we're probably going to have to turn to looking at chatbot-produced text and editing and refining it, rather than producing from nothing. Is that the end of research? It depends how good these machines get at reading. But so much of what we still produce in the humanities is behind paywalls now. So, it'll be interesting to see; I know ChatGPT is not mining behind paywalls, and many of them aren't.
I mean, we don't know if they do it or not.
Dr. Trembinski: No, I know they don't. I can tell by what it's producing that it's not. At least, I haven't yet seen evidence of work that's behind paywalls. What it seems to be reading, as far as I can tell, is more general information from sites like Wikipedia. So, once it starts reading the things that are behind paywalls and becomes a bit more sophisticated, then... then I don't know how I'll manage that.
Let's say they were able to access the content behind the paywall. From a student's perspective, how would things change? You said, for instance, that you might make assignments more difficult.
Dr. Trembinski: Well, I don't know about making it more difficult, but I think I'm going to do more of it, because I think that's what students are going to end up doing when they go into the work world, right? They're going to be accessing these tools, and they're going to have to fact-check and make sure the research is correct. There was a huge debate where people were saying, we have to shut it down, we have to say you can't use it. I'm like, I don't think that's the answer, because the cat's out of the bag. To be honest, I'm not a futurist, whatever that means. But I see no point in ignoring a technology that's going to be revolutionary.
Since you're in history, there are a lot of visual elements to it as well. And the newer models, such as GPT-4, incorporate images in addition to text. So, from your perspective, not just in class but also in research, how do you see that changing research and academia in general, since it can take images as input and give out images as well?
Dr. Trembinski: I see it being much more useful for teaching at this point. I think it will eventually be good for research. One of the things that I struggle with in some of my classes is producing decent textbooks, because I don't teach in a traditional way that follows a history textbook. So, one of the things that I think is going to happen very quickly is we'll be able to use something like a chatbot to produce a reasonable text. We're going to have to edit it, but it will write it much more quickly, and the images will help as well. I can see that being something that happens in the next three or four years.
What are your opinions on banning it versus increasing the amount of work you assign? For instance, say students are doing a literature review of five papers. But now they have these tools. So instead of preventing students from using them, you have them review 15 papers. Is that how you would approach it?
Dr. Trembinski: I think probably the latter. But I also think if I ask a group of students to produce an essay, research it themselves, and not use ChatGPT, they will do it. Students generally want to do well and don't want to cheat. So, I don't think it needs to be banned, for a couple of reasons. I really think students will mostly do the assignment as required. Maybe that's naïve of me, but I don't think so. And I don't want to start from a place of distrust of my students, right? The other thing is, as we see what these tools can do (and I'm still very much learning what they can do), I'm going to make my assignments, maybe not harder, but I'm going to use this tool in a way that I think is appropriate for training my students to use it.
So that brings me to my next question. A lot of the time, these large language models like GPT-4 hallucinate. They'll either make stuff up or give you actual links, but if you go to the link, it does not exist. It's making stuff up as it goes.
Dr. Trembinski: So fascinating, right?
What do you think about all of this? Like, how do you see that from an academic perspective?
Dr. Trembinski: This is why I thought fact-checking was a good idea, and this is why that assignment was done. And I did see it happen with my own assignments. My favorite so far is someone who wrote an essay about pre-modern religion, which is a large topic. And it kind of assumed pre-modern religions all meant one thing, sort of smooshed together; it was Egypt and Greece. And it absolutely made up, I think, a total theology for them. We didn't ask for sources for it, so ChatGPT didn't provide sources. But it was so clear that it was grounded in some idea of reality while it had made up a whole theology. So, if you're asking me, yes, it's clearly a problem. I've seen it myself. It will probably get better.
One of the demos I've seen of GPT-4's newer features, which aren't out yet, focused on its use from a teacher's perspective: school learning, lesson planning, and things like that, and obviously for students as well. Do you see it being used in some way from a teacher's or professor's perspective?
Dr. Trembinski: I think that for me, personally, the best use is going to be producing text or images around stuff that I can't find textbooks for. But right now, what I've seen is that it is not particularly great at producing even first-year essays. I think in a year, though, that'll be totally different, and I'm going to have to figure out what to do then. The first thing that has to happen for it to be really useful for students is it has to start looking at academic literature, which is what I was going to do with this. What will be interesting to see is whether it actually forces those paywalls to come down. That probably won't happen, because I'm not an optimist about this stuff. More likely, the technology will just be adopted and bought by some of those larger consortiums, and then you'll have to pay to access it that way. It would be nice to see it break the paywalls and actually make some of the academic literature much more widely available. Open access and open journals and things like that have tried to do that, but we haven't been able to do it successfully. I'm not sure this will either, but it'd be great if it did.
What are your thoughts on the detection aspect of it? People have attempted or are trying to detect if a work was written by a Chatbot.
Dr. Trembinski: I had a very interesting experience with this. I was marking an essay that I thought was probably produced by ChatGPT, and I ran it through a couple of detectors; I can't remember which ones. It came back at maybe 67 percent. Then I took the novel my daughter is writing (she's 12), ran it through, and hers came out about the same. So what I think these tools are actually detecting is patterns in writing. And when you haven't had enough practice as a writer, it comes out as though it's computer generated.
And it's not a huge problem because of the way I design my essays. I expect my research essays to be really focused, and I have not been able to have ChatGPT produce anything that is as focused as I would like it to be. It wants to talk about the big grand questions, and I'm like, I want to talk about this tiny little thing. So, it hasn't been an issue for me yet, just because of what I expect in terms of historical research essays. But I do think eventually it will be, and then we'll cross that bridge when we come to it.
So basically, you don't see it as good enough, at least for now, to generate the essay, but it can help with certain passages in certain areas, or maybe with brainstorming ideas.
Dr. Trembinski: It's great for finding, I wouldn't say specific information, but information on a particular topic that you'd like to see. When I was playing around with it, I had it write an essay based on a very short primary source that the students wrote a discussion on, and it was able to do that. It was not a terrible essay, based on this thousand-word primary source that it could find on the internet. So, I'm going to have to be careful, because I probably would have given that essay a B minus. It can be done, but I don't think it's quite there yet, so I'm not worried about it, because I'm scrutinizing every other paper I encounter.
What are your thoughts on its use as a summarizing tool? The newer GPT-4 can take in many more words as input. It's common in a lot of classes to read papers and summarize them, maybe give a presentation on them. Do you see that changing?
Dr. Trembinski: Yeah, well, I actually think it's going to be an important tool. I try not to have my students summarize; I try to get them to analyze based on the summary. So, as a tool to help them further understand what they've been reading, they'll eventually be able to process a lot more information. It will be interesting to see how that changes things; that's how I really expect things to go. As a historian, I do worry about what's going to get lost in the gaps if we're not reading through everything, and sometimes what's interesting is what's in the gaps. But realistically, these tools are there, and they're going to be enormously helpful.
Do you have any closing thoughts on these tools and when and how to use them?
Dr. Trembinski: Like I said, I think I'm going to do the assignment again, first of all, because the students were really engaged in thinking about it. Sometimes I don't think they know that we know about this stuff. I'm eager to see where it goes. I think we ignore technology at our own peril.
But it's pretty interesting to see some of the more creative stuff that is coming up, and it'd be interesting to have students produce something, like a sonnet, and then critique it. I can see lots of ways to use it, so I don't want my students to be afraid of it, and I don't want to be afraid of it myself. I want them to see what they can do with it and have fun.