RE Log Magazine

Taking on AI: RE faculty together confront the challenges of artificial intelligence

Since its debut last November, ChatGPT has lost a bit of the show-pony sheen of its early days. For months, the software, a “chatbot” interface developed by OpenAI that can produce rivers of text in response to virtually any prompt, made headlines and dazzled the world as it seemed to crush ever-more-elaborate intellectual benchmarks. GPT-4, the newest version, passed the bar exam with a score in the 90th percentile; received a 5 on the AP US History exam; even scored 77 percent on the Advanced Sommelier Exam for professional wine-tasters, despite not having taste buds at all.
Then, as the novelty wore off, ChatGPT did what it was inevitably going to do, slotting quietly into the workflows of white-collar professionals and taking on a percentage of the world’s intellectual labor. Now it writes legal briefs for lawyers. It handles a growing share of the actual coding for software engineers. It probably wrote at least part of the most recent mass email you didn’t read.

For educators, however, the Age of AI has only raised more questions in the months since ChatGPT’s debut. Big questions, fundamental questions – the kinds of questions that most teachers didn’t think they would have to confront only three years after the Covid-19 pandemic forced its own paradigm shift.  
 
What will critical thinking look like when so much thinking can be outsourced to machines?  
What will computer science look like if humans no longer need to learn how to write code? 
Can we even assign essays anymore?  
 
For Humanities Department Chair Jen Nero, these questions are a feature, not a bug. Nero is the chair of the school’s new Artificial Intelligence Ransom Everglades (AIRE) Task Force, which was formed last semester by teachers determined to confront concerns that were already simmering in the background of the pre-AI digital age.  

“I think it’s going to heighten educators. It’s pushing us really, really hard to raise our game, and I think that’s probably one of the things I’m most excited about,” she said.  

Convened in February, AIRE includes faculty and staff members from humanities, STEM, world languages and technology services – and the author of this piece. The group met weekly in the spring semester, both to discuss the rapidly evolving state of AI in education and to work on a presentation to the faculty that we eventually gave during end-of-year meetings, a presentation that highlighted ChatGPT’s fearsome capabilities as well as its great potential as a “thinking partner” for both students and teachers.  
 
In the wake of that presentation, RE’s response to AI is now entering its second phase: widespread, bottom-up, course-by-course change. Knowing that they cannot afford to be stagnant, and empowered by RE’s independence, teachers in every discipline are adapting their curricula to a new reality in which we can expect AI tools to be not just ubiquitous, but embedded into nearly every piece of software that students once used to produce work on their own.  

The technology presents daunting challenges. But it’s also, in its own way, liberating. In forcing teachers to adapt, AI is also asking them to pinpoint what forms of thinking truly matter in their classes – and how to elicit those forms of thinking more purely, creatively and visibly.  

In some cases, teachers are going “super retro,” in Nero’s words: taking tech out of the equation and returning to “the classical model of the Socratic classroom.” But in other cases, they’re embracing AI in ways that wouldn’t have been possible in 2022.  
 
Academic integrity and AI  
Over the past few months, RE’s Chief Technology Officer Linda Lawrence has found herself thinking about “big, uncomfortable questions” of her own: “What is the world where these tools are ubiquitous? How do we assist students in preparing for their future?”


When ChatGPT burst onto the world stage, the first salvos of public discourse among educators focused, understandably, on the question of “cheating.” How could we ensure academic integrity in a world where AI, always increasing in sophistication, could simply write papers or complete problems for our students? Software developers (and even OpenAI itself) released tools that could determine the likelihood that a piece of text was AI-authored. But such tools could only ever provide a probability – never definitive proof.  

It didn’t take long for our task force to determine that dreaming up ever more elaborate ways to defend against ChatGPT, or to catch students using it, wasn’t a productive course of action. On the one hand, it would go against the culture of honor and accountability that we aspire to create in our classrooms, and that students create with us. On the other hand, it would be a perpetually losing battle.  

We realized, however, that we could counter one of the most significant threats posed by the increasing prevalence of AI by working to ensure that RE students retain a sense of agency over their learning.

What happens to our sense of agency when we allow machines to do our thinking for us? In March, Matthew G. Kirschenbaum, a professor of English and digital studies at the University of Maryland, College Park, asked that question on a grand scale in an essay for The Atlantic titled “Prepare for the Textpocalypse.” Kirschenbaum imagined a dark future in which we have allowed AI to take on so much of the world’s communication that the internet itself becomes a cesspool of AIs talking to other AIs, “flooding the internet with synthetic text devoid of human agency or intent: gray goo, but for the written word.” Like the garbage-filled, sun-scorched Earth of the movie WALL-E, the world of knowledge becomes a barren wasteland. Meanwhile, humans, having outsourced all thinking to machines long ago, become pure consumers of content, their intellectual muscles atrophying to the point that they can no longer think at all.

Kirschenbaum’s vision of the AI-powered future is, admittedly, a dour one. But it does raise questions about the intellectual muscles of our students as AI tools become more and more ready-at-hand, and as it becomes easier and easier to automate the thinking routines that we want to cultivate in the classroom. How do we face the challenge of – as Kirschenbaum phrased it in an interview via Google Meet – “a kind of estrangement from human writing”? How do we prevent students from feeling disempowered by this incredibly powerful technology? 

Maybe take technology out of the equation – at least at times, in controlled situations in the classroom. Kirschenbaum studies digital writing professionally, but he also runs a scriptorium at UMD: a deliberately low-tech approximation of the places where monks used to produce illuminated manuscripts.  

“They get a goose quill. They get iron gall ink. They get a pretty good facsimile of parchment paper, and we turn off the lights and use LED candles,” he explained. “One scenario is that writing instruction becomes that kind of scriptorium, this kind of excruciatingly artificial environment where you draw the blinds and lock the doors and light the candles.”   

It might seem backward-looking to re-envision the 21st-century classroom as a scriptorium. But in certain contexts, a degree of strategic Luddism seems necessary to ensure unassisted critical thinking – and to ensure that by the time students step into the AI-dominated world, they know how to think for themselves, distinguish the true from the false, and chart a meaningful path.

Nero uses a different metaphor: she imagines the classroom becoming an “intellectual gym.” This year, she plans to implement more in-class writing. Fewer papers. Oral exams. Above all: an even greater investment in Socratic dialogue, Harkness discussions and other modes of discourse that have provided the foundation of Ransom Everglades pedagogy for 120 years.  

“Now that we know that AI can generate so much in the written word, we’re going to expect you to be able to sit in a room and respond in real time. To make an argument, to counteract it, to be able to keep your cool. Make eye contact. It’s going to force students to be more present – and teachers, too,” she said.

In my own course, Research into Anglophone Literature, we have gone full medieval this fall, not quite by replicating a scriptorium but by replacing a traditional essay with a storytelling contest based on The Canterbury Tales. Jester hats, turkey legs, ribaldry – and not a laptop in sight.


AI as a thinking partner 
And yet, Nero, like many teachers at RE, finds herself contemplating not just low-tech plans, but loftier, more exciting ambitions: opportunities for students to unleash AI as a thinking partner, harnessing its capabilities.  

“We cannot sit. We cannot be flat-footed. We have to go out there and show our students that we’re not afraid to use it. Even if it means we’re going to make mistakes, too,” she said.  

That was the main focus of the AIRE Task Force presentation to the faculty: not strategies for circumventing AI, but strategies for embracing it. At one point, world languages faculty members Felipe Amaro and Alfredo Palacio took the stage to demonstrate how ChatGPT can function as a bespoke language tutor for students at any level, conversing fluently and correcting the student’s grammar in real time. In early May, Khan Academy CEO Sal Khan delivered a TED Talk that celebrated the potential of AI to democratize this kind of one-to-one feedback. Even RE students, who receive significant one-on-one support from faculty, stand to benefit.  

The potential for democratization goes far beyond tutoring. In April, I allowed my students to use AI-generated images in a project that involved creating Brave New World-inspired propaganda posters. One of my students, Nicolas Poliak ’24, confessed that art wasn’t his strength. But he had a vision for a poster that combined elements of Soviet realism and 1950s American beauty ads. AI brought his vision to life in a way that would not have been possible even six months earlier.  

In a similar way, AI tools have liberated Skye McPhillips ’24 as she works to complete her Bowden Fellowship in the Humanities project, a study of creativity among Dominican tabaqueros (cigar rollers). She’s used ChatGPT to digest and index long academic papers, to provide feedback on her writing, and even – with the help of plugins and the machine learning expertise of Elliott Gross ’24 – as a translator and data analysis tool for her more than 30 interviews.  

“AI allows [students] to skip phases of searching for a quote for hours or waiting to ask a teacher about their paper right before it’s due, giving [them] more time to think about the next step of their endeavors and pushing them to innovate,” McPhillips said.  

In a survey that the AIRE Task Force sent out in May, 33 percent of faculty members said that they were already “actively encouraging” their students to use ChatGPT in various projects. That percentage stands to increase significantly in the 2023-24 school year, which promises to be a massive petri dish of pedagogical experimentation.  

The experiments will cut across disciplines. In the sciences, students in Paul Natland ’02’s physics classes will use ChatGPT “as a coding partner to create visualizations and analyze data” – to the point that Natland is planning to make “effective use of AI tools for problem-solving, brainstorming and drafting” one of his core standards. Students in all sections of biology will use AI tools to do something their peers already did in May: imagine an entirely new species – complete with illustrations. 

In the humanities, faculty member Kate Bloomfield will allow students in Political Culture in the United States and Understanding the Abrahamic Religions to use ChatGPT to fill out review sheets before assessments – with the caveat that it will be their responsibility to then “tailor” the chatbot’s “often generic” responses to each course’s specific curriculum. Faculty member Cameron Ferguson plans to teach his middle school students how to use AI as a cultural and historical research tool, one that can give them baseline knowledge of a place or time before they delve into more specific lines of inquiry.  

For upper school photography teacher Matt Stock, the most exciting prospect involves the old meeting the new. In the spring, he helped Bryce Sadler ’24, a student in his Experimental Photography class, enhance cyanotypes – “a contact-printed analog process from the 1860s” – with the image-generation capabilities of Adobe Firefly, an AI tool that now lives inside Photoshop. As the image transformed from analog to digital and back again, a new way of making art started to take shape.  

“This year, I want to allow students to mix cutting-edge digital tools with antiquarian methods, and see what they can come up with using a combination of methodologies,” Stock said.  

From the vantage point of the school’s CTO, being “future-forward” is the only option – and Lawrence said she’s here to support teachers as they use AI tools to try radically new things.  

“It’s a time where we have to stay ahead of the curve, because we are Ransom Everglades,” she said. “We’ve got such knowledgeable people and such creative people. We have to encourage them to take different strategic approaches and experiment with these tools, so that they can show students not just how to use them to pass this course, but how to use them to create pathways to understanding.”  

Some version of the WALL-E Textpocalypse seems inevitable as AI tools continue to embed themselves into work, communication and daily life. But I have also thought a lot about another Disney movie: 2014’s Big Hero 6. In that film, brilliant young people at an engineering academy bring outlandish inventions to life with the aid of AI tools that automate the dirty work: coding, manufacturing, prototyping. The main protagonist, Hiro Hamada, simply has to imagine something, and a swarm of “microbots” brings his vision into physical form.

My five-year-old son already thinks the Fernandez STEM Center literally is the school in that movie. What if he ends up being right? 
 
If one way to empower students involves creating a space for them to think for themselves, another involves helping them use the technology to think and create in ways that wouldn’t have been possible before – and in ways that we have yet to imagine.