System 0 and the Grey Space

AI
Cognition
Technology
On AI, cognition, and the meaning-making gap.
Published December 31, 2025

In 1999, discussing the significance of the internet, the late David Bowie said:

“I don’t think we’ve even seen the tip of the iceberg…I think we’re actually on the cusp of something exhilarating and terrifying. It’s an alien life form, is there life on Mars? Yes, it’s just landed here.”1

It’s difficult to watch that interview and not think the same of today’s technological revolution: Large Language Models. Through their proficiency at representing, producing, and using language—as well as seemingly replicating some human capabilities in reasoning and tool use—they have provoked such existential dread as to be compared to creatures of Lovecraftian horror, an honour reserved for the most inconceivable, madness-inducing, and nihilistic of concepts.2

Whether or not our current iterations of AI are truly Cthulhu-esque horrors beyond our comprehension, what we are witnessing has profound psychological consequences, with real implications for how we understand cognition itself.

System 0

These psychological shifts are becoming harder to ignore. In a recent publication in Nature Human Behaviour titled “The case for human–AI interaction as system 0 thinking,” the authors propose an expanded model of human cognition that explicitly includes AI interaction as a new layer in the human cognitive stack.3 Termed System 0, this layer sits alongside previously elaborated components in our reasoning: System 1 and System 2.4

Where System 1 has historically been used to describe cognitive processes that are fast and intuitive, and System 2 slow and analytical, Chiriatti et al. describe System 0 as a distinct psychological system that arises from interfacing with a “dynamic, multiartefact ecosystem.”

It is an “artificial, non-biological underlying layer of distributed intelligence that interacts with and augments both intuitive and analytical thinking processes.”

What are the specific characteristics of System 0? The authors say it “preprocesses and enhances information, actively shaping inputs to traditional cognitive systems rather than simply extending them,” creating a “dynamic, personalized interface between human and information.”

This vision of a “dynamic, personalized interface” echoes Bowie’s prescient observation about the internet:

“…the interplay between the user and the provider will be so in sympatico it’s going to crush our ideas of what mediums are all about…”

System 0, it seems, is that very crushing force—reshaping not just mediums, but the very process of meaning-making itself.

The Grey Space

A key difference between System 0 and existing cognitive systems is the lack of inherent meaning-making capability. System 0 preprocesses, but doesn’t impose meaning—leaving that grey space for humans to fill. On this very point, Bowie had something to say, reflecting on trends in the art world:

“…the piece of work is not finished until the audience come to it and add their own interpretation and what the piece of art is about is the gray space in the middle. That gray space in the middle is what the 21st century is going to be about.”

Bowie’s vision is optimistic: the 21st century as an era of active meaning-making, where audiences complete the work. But is this realistic? What does it mean to live in a world where the inference layer itself has been commoditised? Does the commoditisation of inference result in a standardisation of thought? Or does it free us for greater diversity of human action and motivation? A thousand flowers blooming—or a thousand tentacles of the Shoggoth?

Chiriatti et al. point directly to these issues: the risk that we begin adopting epistemic norms more aligned with the computational logic of AI—a shift which may undermine our capacity for independent reasoning and critical thinking. They go further to point out that as data (and the intelligence trained on it) become increasingly synthetic, ensuring the validity and reliability of our decision-making processes becomes both more critical and more challenging.

These aren’t problems we can solve by rejecting System 0 or retreating from AI augmentation. The only way forward is through understanding—by grappling with the tentacles rather than fleeing from them. We must characterise not only how AI interacts with our existing cognitive systems, but how it reshapes broader systems of identity, meaning, and agency.

This requires asking difficult questions:

  • How do we support human individuality and creativity, autonomy and capability, in a world increasingly mediated by a homogeneous grey middle?

  • How do we ensure that the grey space Bowie spoke of—that space where meaning is made—remains ours to fill?

Footnotes

  1. David Bowie on the Internet (1999)↩︎

  2. Shoggoth with Smiley Face meme↩︎

  3. Chiriatti, M., Ganapini, M., Panai, E., Ubiali, M., & Riva, G. (2024). The case for human–AI interaction as system 0 thinking. Nature Human Behaviour, 8(10), 1829–1830. https://doi.org/10.1038/s41562-024-01995-5↩︎

  4. Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.↩︎