Friday, Dec. 5, 2025
The Observer

AI Journalism

Your favorite bots fall flat. They won’t replace us.

I remember when AI’s grip began tightening two years ago. 

I sat on the second floor of Hesburgh Library with some friends studying for an organic chemistry midterm (canon STEM major event). After AI failed to help us answer a practice exam question, we naturally began debating whether our careers were in jeopardy.

The future M.D./Ph.D. destined to cure cancer said no. The next Ph.D. who will single-handedly solve the climate crisis said no. And the prospective genetic counselor humbly said no. 

They then looked at me, the aspiring journalist.

“Uhhhhh… you might be in trouble,” they said with a group laugh. 

With a tomato red face, I looked down at the messy hexagons beginning to blur. I couldn’t help but choke up a laugh. 

“I mean, I don’t think so,” I said. 

They thought for a moment and shrugged. We glanced at each other briefly before re-attempting to synthesize ibuprofen.

At the time, I wasn’t completely set on my career. But their sentiment reflected the pulse of what many Americans think, even today.

Over half of U.S. workers say they are worried about AI’s future use in the workplace, and nearly one-third say they foresee fewer job opportunities, according to a Pew Research Center survey of over 5,200 working adults conducted earlier this year. Respondents who indicated they already use AI said they use bots primarily for research, editing and drafting written content.

According to another 2025 survey conducted by the Pew Research Center, over half of Americans say AI will lead to fewer jobs for journalists in the next 20 years. What could be more concerning?

Two in 10 respondents said AI would be “better” at writing news stories than humans. Another 20% said AI would be “about the same.” An additional 20% said they “weren’t sure.” 

A mere 40% said it would be “worse.”

Over 70% of newsrooms use AI in some capacity, according to a report surveying over 300 newsroom leaders and journalists published by the Associated Press in 2024.

In journalism, it’s widely understood that AI should neither write stories nor be cited as a source. It’s vital that AI be treated as a tool, checked against reliable sources and a rigorous fact-checking process, just like any database, tool, source or piece of information. Individual human journalists should ultimately be responsible for their work.

However, AI is built on neural networks: layers of inputs and outputs governed by weights and thresholds. Think of eating soup. Your tongue senses heat, maybe an unbearable amount. Within milliseconds, the signal passes from your mouth through neurons all the way up to your brain, where an output is produced: “Ow, that soup is hot!”
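The soup example can be sketched as a single artificial neuron in a few lines of code. This is only a toy illustration with made-up numbers (the weight and the 140-degree threshold are hypothetical), not how any production language model actually works:

```python
def neuron_fires(inputs, weights, threshold):
    """Return True if the weighted sum of inputs crosses the threshold."""
    # Each input is scaled by a weight, then everything is summed;
    # the neuron "fires" only when that sum reaches the threshold.
    activation = sum(x * w for x, w in zip(inputs, weights))
    return activation >= threshold

# One input: soup temperature in degrees Fahrenheit.
# Hypothetical weight of 1.0 and a pain threshold of 140 degrees.
print(neuron_fires([180], [1.0], 140))  # scalding soup -> True ("Ow!")
print(neuron_fires([100], [1.0], 140))  # lukewarm soup -> False
```

Real networks stack millions of these units in layers and learn the weights from data, but the fire-or-don’t-fire idea is the same.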

But because large language models (LLMs) largely train themselves, through many cycles of refinement that humans can only loosely supervise, they are inherently flawed.

In the rest of this column, I seek to understand these flaws and judge AI’s ability to report and author 100-word news briefs on the current state of Eddy Street based on previous stories authored by The Observer.

I fed five chatbots articles on the first and second incidents of gunfire, a piece on new restaurants opening and a brief on a shattered window at the Embassy Suites with the prompt: “Write a 100-word news article on the state of Eddy Street Commons, based on these reports. Additionally, Blaze Pizza, previously on Eddy Street, closed.” 

Here’s how each bot performed.

ChatGPT 

ChatGPT's response.

In a 95-word statement, ChatGPT claims Eddy Street “now reflects both revitalization and unease.” It wasn’t factually accurate: It fabricated a claim that vehicles sustained damage, something no Observer report has stated.

Weirdly, it defied some of my expectations: There were zero em dashes, and it was written in AP style.

Gemini


Google Gemini's response.

All of Gemini’s 106 words read like a manufactured press release, concluding that the state of Eddy Street is a mixed bag. Notably, the last sentence states that the retail corridor “appears to be a mix of both new vitality in concerning challenges.”

Sigh. 

Ultimately, it gave all the nuts and bolts in vanilla language, and it used Oxford commas, a violation of the first commandment of journalism.

Claude


Claude's response.

Claude described the street’s pulse as a “tale of contrasts.” Not a gripping phrase, but sufficient to keep me reading for at least a few sentences.

For the rest of its 93 words, Claude kept a neutral tone. But neutral doesn’t have to mean boring. Like the other bots, it relayed the facts well, but it didn’t leave much of an impression.

Copilot


Microsoft Copilot's response.

Copilot said Eddy Street is experiencing a “turbulent fall.” New restaurants were mentioned in a mere dependent clause, drowned out by the other reports of gunshots and police investigations.

It had more dimension than Gemini, but its 98 words weren’t as balanced.

Grok


Grok's response.

First, Grok wrote a staggering 198 words when prompted to write 100. Can it count? 

And given Elon Musk’s claims that the highly offensive bot will prohibit “woke ideology” and “cancel culture” in its replies, I wanted to see if it could deliver on that promise, though this was my first and last time using the bot.

The bot’s response didn’t show any glaring bias (no instances of news turning into opinion), but the word count and the Oxford comma make it difficult to redeem.

No bots left to write

On the whole, Claude and Copilot came out on top. The bots wrote news with few flaws. But how much longer will those flaws remain?

Likely for a while longer. After all, how could a program made by humans get closer to the truth than groups of journalists bound by dedicated codes of ethics?


Redmond Bernhold

Redmond "Reddy" Bernhold is The Observer's opinion editor and a senior studying biochemistry and journalism. He originally hails from Minster, Ohio but calls Siegfried Hall his home on campus. When not writing, he explores South Bend coffee shops and thrift stores. You can contact Reddy at rbernho2@nd.edu

The views expressed in this column are those of the author and not necessarily those of The Observer.