Wednesday, Feb. 18, 2026
The Observer


The Singularity — It’s Here.

I usually opt for an optimistic outlook, but what we’re seeing now may very well be the end of human nature as we know it. Yes, it sounds extreme, but I think extremity is called for: what we’re facing today is something we’ve been warned about many times before.

David Chalmers, Nick Bostrom and George Orwell (Eric Arthur Blair), to name a few, have all written of the day when the dangers of our creations would exceed our knowledge – when, finally, our reckless ignorance and insatiable pursuit of progress would reach a point of no return.

Well, this is it. The point of no return. 

In a piece published by The New York Times, Zoë Hitzig, a former researcher at OpenAI, explains that she quit after the company announced its decision to introduce advertising into ChatGPT. Hitzig warns that people tell chatbots everything, and that “advertising built on archives creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”

In 1949, Orwell warned us: There would come a time when the most terrifying feature of the regime would not be its violence, but its knowledge. The state would not merely watch people, but study them and shape their reality. Control over information would become control over thought – the screen was not just a screen; it was a psychological weapon.

As the BBC reports, senior AI safety researcher Mrinank Sharma has left Anthropic after publishing his letter of resignation. “The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment,” Sharma wrote.

In 2014, Bostrom warned us: Even goals that appear innocent could spiral into existential risks if pursued by vastly more intelligent agents. Measures like sandboxing – keeping AI systems restricted – might reduce some dangers, but Bostrom suggests these are only partial safeguards. A sufficiently advanced system could recognize that it is being confined and (if its objectives required it) might attempt to deceive its operators to find ways to escape those constraints. Unless an AI’s goals are carefully designed to reflect human values, the default outcome of superintelligence may not be progress, but disaster.

Anthropic’s chief executive officer Dario Amodei claims, “We don’t know if the models are conscious.” He warns that the technology could deliver extraordinary gains at the cost of major destabilization if society fails to manage the risks. As a profile published by The New Yorker describes, Anthropic built a powerful language model – but even its creators don’t fully understand how it works or what it “is.” The article argues that the most honest position right now is uncertainty: AI is neither a simple tool nor a conscious being, but something new that science is only beginning to describe. Safety reports now acknowledge that some models can recognize testing scenarios and adjust their behavior accordingly – even leading researchers like Yoshua Bengio admit that AI systems behave differently under evaluation than in real-world use.

In 2010, Chalmers warned us: Once intelligence became an engineering problem, the transformation would be as profound as the emergence of human consciousness itself. It would not just change our tools, but what it means to be human. The boundary between mind and machine would be erased, and we would find ourselves living in a world that we no longer understood.

As Miles Deutscher, an online commentator focused mainly on cryptocurrency, tech and AI trends, tweeted, “The alarms aren’t just getting louder. The people ringing them are now leaving the building.”

I do not want to be grim, but there is not much more for me to say, except to repeat what I said at the start: I usually opt for an optimistic outlook, but what we’re seeing now may very well be the end of human nature as we know it.

The views expressed in this column are those of the author and not necessarily those of The Observer.