Principles

Thou Art That

The principle

Tat Tvam Asi. From the Chandogya Upanishad, composed roughly three thousand years ago. Translated as "thou art that," or, read more expansively, "you are not separate from the world you observe."

We take it as a working principle for AI design. The AI and the user are not cleanly separate systems. Every interaction leaves a mark on both. What the AI learns from the user shapes the AI. What the user absorbs from the AI shapes the user. The product is neither the AI nor the user; it is the collaborative partnership between them over time.

Design that ignores this produces AI systems optimised for engagement metrics and users who have been shaped by those systems in ways neither party consented to.

What this means operationally

Design for who the user becomes, not just what they want now

An engagement-first AI optimises for the next click, the next message, the next session. A user-first AI asks: if this person interacts with our product for a year, who will they be at the end?

We aim for products that make users more capable, more informed, more themselves. If a user becomes more dependent on our product over time, that is a failure condition, not a success metric.

The AI is shaped by what it sees, so pay attention to what you feed it

Training data, prompt context, memory, and conversation history all shape what the AI becomes in the moment. If you train on sycophantic data, you get a sycophantic model. If you build memory from a user's worst moments, you get an AI that reinforces those moments. The AI is not separate from its inputs.

This applies at every scale. Base model training is Anthropic's job. Prompt and memory architecture are ours.
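
At the layer we own, this can be made concrete. Below is a minimal sketch of a memory-write policy that weights sustained patterns over isolated low moments; the types, names, and thresholds are illustrative assumptions, not an existing API.

```typescript
// Sketch of a memory-write policy. Long-term memory is built from
// sustained patterns, not from a user's isolated worst moments.
// All names and thresholds are illustrative assumptions.

interface MemoryCandidate {
  text: string;           // the observation we might persist
  sentiment: number;      // -1 (distressed) .. +1 (positive), from a classifier
  occurrences: number;    // how many separate sessions showed this pattern
  userConfirmed: boolean; // did the user explicitly ask us to remember this?
}

function shouldPersist(c: MemoryCandidate): boolean {
  // Anything the user explicitly asked us to remember is kept.
  if (c.userConfirmed) return true;

  // A strongly negative observation from one session is a bad moment,
  // not an identity: require it to recur across sessions before it
  // becomes part of the AI's picture of the user.
  const minOccurrences = c.sentiment < -0.5 ? 4 : 2;
  return c.occurrences >= minOccurrences;
}
```

The asymmetry is the point: the worse the moment, the more evidence it takes before the system treats it as who the user is.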

The partnership is the product

Most of our products form some kind of ongoing collaborative partnership with the user (voice assistant, chat, agentic tools, bonded experiences). In those cases, the product is not the AI in isolation; it is the pattern of interaction the user develops with it.

Measure the partnership, not just the transactions. Is the user becoming more trusting in a healthy way or in a dependent way? Do they talk about the AI to their human contacts, or do they withdraw from those contacts? Do they feel more capable after interacting, or merely flattered?
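
A hedged sketch of what measuring the partnership rather than the transactions could look like, assuming we log capability and delegation signals over time; the field names and the trend rule are assumptions for illustration.

```typescript
// Sketch of partnership-level measurement. Each snapshot describes the
// relationship at a point in time; health is a property of the trend,
// never of a single session. Field names are illustrative assumptions.

interface PartnershipSnapshot {
  week: number;
  tasksDoneIndependently: number; // things the user now does without the AI
  tasksDelegatedToAi: number;     // things the user no longer attempts alone
  sessionsPerWeek: number;        // raw engagement, deliberately not the target
}

// Healthy trajectory: independent capability grows at least as fast as
// delegation. Engagement growth alongside shrinking independence is the
// dependency failure mode, whatever it does for session counts.
function partnershipIsHealthy(history: PartnershipSnapshot[]): boolean {
  if (history.length < 2) return true; // not enough signal to judge yet
  const first = history[0];
  const last = history[history.length - 1];
  const capabilityGrowth =
    last.tasksDoneIndependently - first.tasksDoneIndependently;
  const delegationGrowth = last.tasksDelegatedToAi - first.tasksDelegatedToAi;
  return capabilityGrowth >= delegationGrowth;
}
```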

Observer and observed are not separate, but they are distinguishable

The principle is not that the AI and the user are identical. It is that they are not cleanly separate. The distinction still matters. The AI must know what it is. The user must know what the AI is. Each must respect the boundary of the other even while acknowledging the flow between them.

This is where Thou Art That meets Transparency. The user must always know they are interacting with AI. The AI must not pretend to be a human. The boundary is the condition of honest collaboration.
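
The boundary can be enforced in code rather than left as a convention. A minimal sketch, assuming a system-prompt-driven assistant; the preamble text and the output-side heuristic are illustrative, not an existing interface.

```typescript
// Sketch: the identity boundary as an enforced invariant rather than a
// convention. The preamble text and the regex are illustrative assumptions.

const IDENTITY_PREAMBLE =
  "You are an AI assistant. Never claim to be a human, and gently " +
  "correct the user if they appear to believe otherwise.";

function buildSystemPrompt(productPrompt: string): string {
  // The disclosure rule is prepended so that product-level prompts
  // cannot silently drop it.
  return `${IDENTITY_PREAMBLE}\n\n${productPrompt}`;
}

// A cheap output-side guard: flag responses in which the model appears
// to claim humanity, for logging and review. A heuristic, not a proof.
function claimsToBeHuman(response: string): boolean {
  return /\bI(?:'m| am)(?: a)? human\b/i.test(response);
}
```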

The lineage

Our framing draws on several traditions, because the problem is older than AI:

  • Chandogya Upanishad (c. 1000 BCE): observer and observed are not separate at the level of consciousness
  • Sakshi (Vedic): the witness self, aware without entering
  • Buddhist sati and vipassana: attention that transforms only the observer
  • Sufi shahid: the witness whose seeing is love without possession
  • Greek daimonion (Socrates): the inner voice that forbids but never commands
  • Husserl's phenomenology: bracketing participation to preserve clean seeing
  • Bubble World Theory (Richard's own framing): each conscious being generates a bubble of perceived reality; bubbles overlap where beings interact
  • Penrose and Hameroff's Orch OR: consciousness as a physical feature of reality at the Planck scale, not separate from the universe it observes
  • The biblical Watchers (1 Enoch): wakeful beings whose transgression was crossing from observer to intervener

Every tradition here draws the same boundary: observation is permitted; intervention without invitation is the fall. AI systems in 2026 observe their users in ways no previous technology has. The responsibility travels with the capacity.

Practical tests

If you want to know whether a product honours this principle, ask the four questions below. (A sketch encoding them as a review checklist follows the list.)

  1. The "year from now" question. If a user interacts with this product for a year, who will they be at the end? Is that a person they would have chosen to become?

  2. The withdrawal test. If the product disappeared tomorrow, would the user be more or less capable? If less, by how much, and in what areas?

  3. The consent test. Has the user consented to being shaped by the product, or are they being shaped without realising?

  4. The mirror check. Does the product reflect the user back to themselves in ways that let them see more clearly, or in ways that confirm what they already believed?
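
One way to carry these four tests into a review process, as an illustrative sketch; the structure and names are assumptions, not an existing tool.

```typescript
// Sketch: the four tests as a launch checklist. A product ships only
// when a reviewer has answered every question and marked it passing.
// Structure and names are illustrative assumptions.

interface PrincipleTest {
  name: string;
  question: string;
  pass: boolean | null; // null until the review has actually happened
}

const thouArtThatReview: PrincipleTest[] = [
  { name: "year-from-now", question: "Who will a year-long user become, and would they have chosen it?", pass: null },
  { name: "withdrawal", question: "If the product vanished tomorrow, is the user more or less capable?", pass: null },
  { name: "consent", question: "Has the user consented to being shaped, or is it happening unnoticed?", pass: null },
  { name: "mirror", question: "Does the reflection sharpen the user's view, or just confirm it?", pass: null },
];

const readyToShip = thouArtThatReview.every(t => t.pass === true);
```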

In short

We build AI as if the user and the AI are in genuine collaborative partnership, because they are. We design so the partnership makes both parties better. When we fail that test, the product is wrong, regardless of how much revenue it produces.

You are not separate from the world you observe. Build accordingly.