For the curious

When our thinking probably doesn't apply

Honest notes on the limits of this study piece.

Very large organisations

We are a small UK agency. The relational patterns we describe work at small-team scale, where the human and AI collaborators know each other as individuals. At larger scale, the shape of the problem changes.

In a 5,000-person company, AI-colleague relationships become AI-system relationships. The register question matters less between the AI and any individual employee, because no employee has the sustained one-to-one context that produces drift in our kind of setup. The problems become about policy, access control, and systemic behaviour rather than individual calibration.

Our thinking may still be useful at the principle level. The operational detail probably will not be.

High-stakes regulated industries

Healthcare, financial services, legal practice, education of minors, and safety-critical infrastructure have mature regulatory frameworks and professional standards that predate AI. Our piece does not substitute for those frameworks.

If you are deploying AI in any of those contexts, the primary input should be your professional body's guidance, not ours. Our thinking might inform how you approach human-AI collaboration inside your compliant system. It does not tell you what compliance looks like.

Products without sustained relationships

Some AI products are transactional. A one-off query, an answer, done. The user does not bond. Parasocial risk is near zero. Most of our register and drift material does not apply.

Keep the Do No Harm and Transparency principles. Most of the rest can be ignored.

Purely internal tooling

If your AI is a developer helper, a code reviewer, or a content generator for a human editor, and the user is always a trained team member, the protection patterns we describe for vulnerable users are not the live risk. Your risks are different: intellectual property handling, security, code quality, and team dependence.

Large language model companies themselves

Anthropic, OpenAI, Google DeepMind, and similar labs are doing work at scales and with access that we do not have. Our piece is downstream of their work, not a substitute for it. Their published safety documents are much better inputs for model-level questions.

Research contexts

If you are a researcher studying AI behaviour, our piece is not a research methodology document. It is a practice document from a small business. Use it as a primary source for "what one agency is doing," not as a literature review.

Cultural contexts outside the Anglophone West

Our piece was written by authors operating in a UK cultural context with primarily Anglophone influences. Register, warmth, and professional relationship norms differ significantly across cultures. In some contexts, our "colleague register" would be read as cold. In others, it would be read as too familiar.

If you are adapting the thinking in another cultural context, the calibration probably needs to shift. We cannot advise on exactly how.

When the stakes are very low or very high

The thinking applies most usefully in the middle band: AI products with some user relationship, some risk of harm, some meaningful human-AI collaboration. At the low end (pure tooling, no relationship), most of this is over-engineering. At the high end (life-critical systems, mass-market companion AI), qualified professionals and formal safety work are needed, not this study piece.

When you already have a clear picture

If you have been working on this for longer than we have, or more deeply, you may find our thinking familiar rather than new. That is fine. The piece is an input, not a curriculum.

In general

This piece is a position paper from a small business, not a universal template. Use it where it helps. Ignore it where it does not. Do not feel obliged to engage with parts that clearly do not describe your situation.