Are today’s college students turning in essays generated by ChatGPT? Are professors using AI to grade students? Has artificial intelligence revealed a better way to learn? And is a university education in danger of becoming extraneous?
Julie Schell, UT’s assistant vice provost of academic technology and the director of the Office of Academic Technology, has been an expert on the use of technology in education since the 1990s, when the cutting edge was PowerPoint. Since then, she has held positions at Yale, Stanford, Columbia and Harvard. She was recruited from Harvard by then-vice provost Harrison Keller to be the founding director of OnRamps, UT’s program to help Texas high school students improve their college readiness through technology-enhanced education. During COVID, as assistant dean of instructional continuity and innovation in the College of Fine Arts and an assistant professor of design and higher education leadership, she helped lead that college’s shift to online learning.
The State of Play
The easiest of the above questions to answer is the second one, about grading students: “A faculty member should never upload a student’s writing, get feedback from AI and then provide the AI’s feedback to the student. We have a hard line on that,” Schell says.
“AI is very seductive because it’s so good. It can save you so much time, and you’re surprised by the quality of the response. But it’s also full of these paradoxes: It can teach you a lot, but it can also teach you bad information, so you can have negative learning. It can help you be really creative, but it can also diminish your creative voice by making you be more like other people. Faculty should never use it as a proxy for their own feedback.”
“It’s hard to make sense of this whole world,” she admits.
Schell thinks of AI as having two kinds of use, transactional and transformational, and whether either kind is good or bad depends on the situation. “There are times it’s okay to use it as a transactional tool: ‘I need a list of ideas for planning dinner for the week,’ or ‘I need a list of ideas for a meeting coming up,’ or ‘Help me brainstorm research ideas.’ Those are low-stakes transactions, and we need to help students understand when transactional use is OK.”
But in this transactional category, she once experimented with using AI to write letters of recommendation, something that can be a time-consuming task for those in academia. “When I read what it said, it wasn’t fair. It wasn’t me. It didn’t have the flavor of how I really thought about the student, and I didn’t think it was fair to my student for me to use that output,” she says. “That’s a moral bridge I would not cross.” She adds, “It takes about 15 hours of experimentation to realize AI is not as good as you. It’s good, and it can do things faster than you can, and it has a vaster knowledge base than you do, but it’s not as good as you because it doesn’t have your voice. It’s not you.” That’s equally true for faculty members and for students. “I want to see my students’ voices and identities show up in their work,” she says.
Then there is the transformational use of AI. “Let’s say I input a prompt for a journalism class that I am teaching: ‘Help me write three learning outcomes for burying the lede.’ And then it spits out the three learning outcomes. If I just take those, copy them and teach from them, that’s transactional use. Transformational use is to take that output, look at it, evaluate it, critique it, see what doesn’t fit, edit it, transform it and use it more as a scaffold, so that you transform it to integrate with your perspective.” In this example, transactional use is bad; transformational, good.
Ban or Teach?
When it comes to students’ use of AI, the issue is more nuanced. “There are contingents of educators who are very against the use of AI [by students]. There are also some institutions that bar the use of AI.” Schell’s view and that of her colleagues in the Provost’s Office is that “the cost of policing students to never use this incredibly relevant, timely, transformative tool is more than the cost of creating a culture of academic integrity and honesty.”
Where AI is concerned, the horse has left the barn. Ignoring AI or banning its use is not preparing students even for the world as it is now, much less the world of the future. Students need and expect their higher education institutions to help them engage ethically with AI, Schell says. “Telling them never to use it is not helping them with that.”
Instead, she believes in putting that effort into helping people become “the architects of their own ethical frameworks.” “When they leave here, there aren’t going to be bars on these things, so the critical thinking, the decision making, misinformation, bias — those are all baked into the AI tools. Our students are going to thrive in environments where they’re prepared to address that ambiguity.”
That said, generating an essay with an AI tool such as ChatGPT and passing it off as one’s own is an academic integrity violation. “There is a clear prohibition on doing that, but it wouldn’t just be for writing, it would be for generating code or preparing a presentation. But I think academic integrity is not a 1 / 0 on something like that — is it cheating or is it not cheating?”
Schell knows the difficulties of AI firsthand because she teaches a class in design pedagogy in which she introduces AI. She does so in phases: “First, I talk to my students about AI, and I make it really clear that if they use AI, they have to cite it, and I show them how. They need to document how they are going to use it.”
But she recalls one instance that was instructive. “We were making user personas, and a student turned in one that was a really great graphic. I said, ‘You did a great job. I really get the sense of the user by the image, and it feels very connected,’ and they said, ‘Thanks! I used AI!’ I was so surprised because I had made it really clear that that was not how to go about AI use in our class. But in that moment, I realized my students needed more help. It needs to be a conversation. It’s not a 1 / 0. It’s not ‘Follow my rule.’ A UT-quality learning experience is about helping them become architects of their own frameworks and engaging with AI effectively.”
For the second project, she actively encourages them to use AI but introduces UT’s AI-forward framework, which includes six concerns about AI that students should always consider: (1) privacy and security; (2) hallucinations (AI stating false information as fact); (3) misalignment (when a user prompts an AI application to produce a specific output but the application produces an unexpected or undesired result); (4) bias; (5) ethical dilemmas; and (6) cognitive offloading. Of the last item, she explains, “If we’re not careful, if we give AI too much, then we can lose cognitive capacities, so we have to be careful and judicious about what we decide to offload to it.”
Finally, in the last project she requires her students to use AI. With this phased approach, she hopes to build both skill with AI and savvy about its limitations.
The Upside: Introducing Sage
Asked about the upside of AI in education, Schell says, “I’m getting goosebumps talking about this. One of the things I’m most excited about is an AI tutor we’re working on named Sage.” A custom GPT (generative pretrained transformer), also known as a custom bot, is the technology behind Sage.
ChatGPT and other text-based AI tools used on campus, such as Microsoft Copilot, are large language models: when you ask a question, they find where the information lives and then answer it. With a custom GPT, you can create your own limits around what it asks and how it responds. “You can train it to ask the kinds of questions you want to ask,” Schell says. “You can train it to have resources that you want to respond with.”
The first time she created her own custom bot and used it, she remembers, “I pushed back from my desk and thought, this is going to change everything.” She sent it to her boss, vice provost Art Markman, who said, “Yeah, you’ve got to pursue this.”
The result was Sage, an AI tutor that UT is making available to all faculty so that they can create custom GPTs to share with their students.
“Let’s use the same example about the lesson on burying the lede,” Schell says. “As a faculty member, you’ll talk with Sage, and Sage will help you build a tutor that you can then share with your students that they can interact with outside of class. Let’s say you give the lecture on burying the lede and have some readings on it. Then they’ll have this interactive tutor they can engage with, practice typing up some stories to develop the lede and get the feedback, but it’s trained by the faculty member and has your unique approach to that.”
“This is going to help our educators solve some of the most unyielding and persistent problems they have with their particular content,” she says. For example, one of the faculty beta users teaches organic chemistry, a notoriously hard course for which students often lack the background knowledge, putting them at an immediate disadvantage. The faculty member can’t possibly give each struggling student the amount of time it would take to catch up, but students can engage with this AI tutor, available 24/7, to help them develop that background knowledge.
Another faculty member said students in his computer science class are so excited about what they’re learning that they often want to explore fringe topics, which he doesn’t have the bandwidth to teach. “We could use Sage to help teach them fringe topics. That’s a perfect use case,” Schell says.
“We as educators have this moral impulse to help our students learn better and to provide them with these resources, but there’s limited capacity and time to do that. AI is going to help our educators solve problems they haven’t been able to solve before, and I think that’s the proper use of AI or any technology. We shouldn’t use it when we have got it down and don’t need help. It should help us solve problems we can’t solve without it.”
UT alumnus Ahir Chatterjee is consulting on Sage, and MFA design student Alix Zang is building the user interface and user experience. Schell’s office is leading the pedagogical side of the project, and UT’s Information Technology Services – Emerging Technologies group is leading the technology side. Schell hopes all faculty will be able to use Sage this spring.
How AI Meshes With How People Learn
Schell says people learn best when (1) they’re engaged, (2) they feel like they belong, and (3) they can self-direct and self-regulate their learning. “Those are like fireworks for how people learn.” AI, she says, can contribute to each of those factors.
“On the engagement front, what we have at UT is a way for teachers and students to use generative AI in a safe environment. They can log in with their UT EID and they can engage in a learning process 24/7.”
On the belonging front, “One of the things I love about it is that it’s not rude.” It never tells you it doesn’t have time for you, and students never have to worry about asking a “dumb question.” In that way, “even though it’s not a human, it does create that sense of belonging.” Students don’t have to worry that they’re bothering someone, and it’s lower stakes for students to reveal to AI that they may not know something.
And as for self-directed learning, she recalls, “With a chatbot, I just pushed it to help me learn as much as I could: ‘Give me an analogy. I don’t understand that one — give me another analogy.’ I ended up really developing my expertise with that. I can direct it. I can identify that I don’t understand this. It elevates metacognition, which is the awareness of your learning process.” If you can identify that you’re still confused, you can direct your learning further. “Clearly I’m very excited about all this!” Schell says with a laugh.
A University Education in the Age of AI
So, in the environment of AI and YouTube, what’s the point of college? What would Schell say to 18-year-olds who think they can learn almost anything without teachers?
It has to do with durable abilities, she says. “I don’t think using AI presents the challenge of our students’ lifetime. I think dealing with ambiguity is the challenge of their lifetime. Coming to college and engaging with tools like AI and being able to navigate that ambiguity and being able to come up with creative solutions to problems, to identify misinformation, to think critically, to collaborate — these are all durable skills. Being able to make moral and ethical decisions in the face of all kinds of ambiguity and misinformation is something our students learn at UT Austin.”
It’s not about the shiny new tool, she says. “It’s about helping our students navigate the world with durable skills. There’s going to be a new technology that comes through beyond AI, but it’s this durable ability to deal with ambiguity that is going to be special in undergraduate education.”