I’m Professor Meghan Sullivan, the Wilsey Family College Professor of Philosophy here at Notre Dame. I’m also the director of the University-wide Ethics Initiative and the Institute for Ethics and the Common Good. Many of you may know me from “God and the Good Life,” the undergraduate course in philosophy that I created with Paul Blaschko.
As a philosopher, I’m deeply interested in ethics and how it impacts our ability to live good lives. Recently I’ve been focused on the complex, thorny ethical questions that are developing at breakneck speed around artificial intelligence.
How will AI change education, both for students and teachers? What will work look like as AI begins to automate labor? Will our understanding of beauty and truth shift as we increasingly interact with AI-generated content? How can we love each other as God intends when our lives are mediated by screens? What makes a human life worth living, and how is it uniquely valuable, distinct from anything technology can offer us?
More than $25 billion was spent on AI development in 2023; $60 billion in 2024. A huge transformation is happening, and questions like these are going to be central to your lives. Many of them already are. Recently, The Observer Editorial Board ran a piece calling for Notre Dame to ban student use of AI tools in core courses, arguing that “education is not merely a matter of acquiring technical skills, but rather becoming a virtuous person and a responsible citizen.”
You are the transitional generation — the first generation of humans to have to figure out life in the midst of this new technology. It’s absolutely critical for you as young adults (and, hopefully, also as virtuous people and responsible citizens) to be actively engaged in this conversation about the world that we are creating with AI.
There’s a lot of shouting happening in that conversation right now. It’s being dominated by influencers, mainstream media, technology conglomerates — loud voices that are deeply invested in driving particular outcomes.
Where are the voices of faith?
Christians have a whole lot to offer on AI ethics, and it would be bad, not just for Christian communities but for everyone, if these voices were left out.
Our institute has spent the past year working to identify, engage with and amplify these perspectives. We conducted more than 140 interviews with technologists, ethicists, faith leaders from a variety of Christian denominations, journalists and policymakers, asking two questions: One, what unique perspectives do you think the Christian community has to offer? And two, what do you think it would take to bring these voices to the forefront of the conversation?
Through this work, we learned that there’s actually a lot of energy among Christians in tech hubs like Silicon Valley and Boston, where people of faith are working hard to mobilize others to think about these questions. Over and over, we heard that there is a deep, powerful hunger for faith-based frameworks that can help guide our conversations about AI.
In response, we created the DELTA framework, a deeply faith-informed approach rooted in enduring Christian values: dignity, embodiment, love, transcendence and agency.
Over the coming weeks, I’m going to give you a mini seminar on AI, faith and DELTA right here in The Observer. Together, we’ll unpack the framework and use each of the five concepts to examine and illuminate pressing questions around the application and integration of AI.
It’s time to get into the conversation. This debate is moving at lightning speed, and one of the ways you lose your seat at the table is by holding back, by being too tentative, by watching and waiting. Your generation cannot afford to do that. None of us can.
Meghan Sullivan
Wilsey Family College Professor of Philosophy and director of the University-wide Ethics Initiative and the Institute for Ethics and the Common Good
Oct. 27