Happy Wednesday, work friends.
I was recently accepted to an AI teaching workshop series focused on ethics and leadership in the age of generative AI. The goal of the workshop is to create a space for faculty members to learn and grow together. During the meetings, we will explore how tools like ChatGPT are reshaping not only our classrooms, but also our very understanding of knowledge, authorship, and intellectual labor.
I applied for this opportunity because I wanted to move beyond the usual AI conversations—especially the ones fixated on plagiarism detection. The emphasis in these exchanges is almost always on rules.
Detection.
Policing.
The emotional undertone in these conversations often feels like fear or anxiety.
I’ve also been following a smaller thread in these conversations among faculty members who are exploring and experimenting with ChatGPT in their classrooms. This includes teaching students about its power and limits, as well as creating an ethical framework to guide its use in higher education.
These two camps—I playfully refer to them as “the enforcers” and “the explorers”—remind me of a scene in a television series I’ve been watching lately.
I know I am extremely late, but I recently started binge-watching the HBO series Game of Thrones.
[I promise, there are no major spoilers here if you haven’t watched yet.]
In the series, the audience follows the life and experiences of Daenerys Targaryen, Mother of Dragons, who loves her three baby dragons. Each day, Daenerys feeds and nurtures her “children”—Drogon, Rhaegal, and Viserion—as they grow from small, cute hatchlings she could hold in her hands into full-sized, towering dragons.
As they grow, a subtle shift happens.
Drogon, Rhaegal, and Viserion are becoming more strong-willed. Rather than obediently following instructions, they give in to their animal instincts and begin harming people in the community.
In a quick scene during the third season, Daenerys attempts to correct her children, who are fighting over a kill they both want to eat. When she steps in to mediate, Drogon lunges at her and snaps.
The exchange lasts only a few seconds, but Daenerys’ face says everything.
She is startled.
And in this moment, she realizes that she no longer fully controls them.
They now have agency.
Free will.
And that terrifies her.
Nevertheless, she continues to lean into her love and, through trial and error, figures out how her increasingly unruly, fire-spewing children can coexist with the men and women under her rule.
Photo by mauRÍCIO SANTOS on Unsplash
When I watched this scene recently, I realized it serves as a fitting analogy for how many of us faculty members may be feeling about the shifts AI is bringing to our classrooms.
While we may have encouraged students to use Mendeley or Grammarly for citation management and copyediting, Claude and custom GPTs introduce an entirely different level of disruption to the traditional teaching and learning model in higher education.
And that may feel jarring.
And that reaction makes total sense. We’ve spent years earning degrees, publishing research, and building authority in our fields.
Expertise is our currency.
Now, a technology exists that students perceive as mimicking that same authority with less effort and less discomfort. It can draft, outline, summarize, and synthesize without a visit to office hours or notes from a class lecture.
For some faculty members, these changing dynamics may feel like the ground is shifting beneath our feet—especially in the broader context of the looming enrollment cliff and efforts to discredit certain research areas and topics.
I’m still finding my footing in the debate. As I clarify my position, I am reading, listening, and learning from those who think deeply about this topic.
With that said: I believe AI is in its own dragon moment.
We’ve watched AI grow over the last two decades. It has become a part of our daily lives through apps like Grammarly, Google Translate, and more. But in the last few years, we’ve been startled by its growing independence. With developers—many of whom aren’t in conversation with educators—granting students free access to AI tools right as final papers and exams loom, we realize now that we can’t fully control it.
So now, we are left to sort out how we will coexist.
For me, it’s always helpful to think about things in terms of my broader values, so I came up with some questions that could guide my thinking and help me create meaningful rather than reactionary rules and boundaries in my classroom:
What are the pedagogical values I’m most committed to—and where might AI align with or challenge those values?
What parts of my classroom practice feel protective, and which feel reactive?
At what point do protective and reactive practices get in the way of my students’ learning, and my own?
I hope the workshop this summer will clarify things. But until then, I’d love to discuss what is most influencing your thoughts on this topic. How are you approaching AI in your classroom and why? I would love to hear from you!
Until next time,
Brielle aka Your Cooperative Colleague
I really like that metaphor! I think some faculty might believe that ChatGPT came out of nowhere in November 2022, but you make a good point that many of us had been welcoming similar technologies into our teaching (and research) well before then. The realization that students could have ChatGPT write an essay for them was a dragon-snapping moment for many, even if they didn't consciously connect ChatGPT with previous tech like Grammarly.
Last night I was the guest speaker in my wife's course for in-service art teachers. (Emily is a former high school art teacher and teaches a methods course for Lipscomb U.) Emily had me in to talk about AI, and none of those teachers seemed to have had any freak-out moment with AI. I suspect it's because none of them had the "AI is writing my students' essays" experience, since they don't assign essays.
I think viewing AI as a part of an ongoing story is helpful, since that makes it easier to activate our existing approaches to technology in our teaching and apply them to this new(ish) thing.
I keep thinking about how certain faculty feel they're part of the Night's Watch and view AI as the White Walkers. They see themselves as the only line of defense against this huge, existential threat that others seem oblivious to. But I guess my metaphor falls apart because the White Walkers aren't really something dangerous that can *also* be used for good.
Dragons make more sense in that respect. They can be an incredible tool/ally, but only in the hands of a skilled dragon rider! And even when they're used to accomplish something positive, they usually cause collateral damage.