Thou Art That

Legal and governance

This piece is not legal advice.

We are a small UK agency working with AI. We are not lawyers. Nothing in this section or any other section of this piece constitutes legal advice, and no reader should treat it as such. This section exists because ethics and regulation increasingly overlap, and leaving the regulatory landscape undocumented would create the false impression that ethics alone is sufficient. It is not.

What this section covers

Three forces are converging in 2026:

  1. The EU AI Act enters full high-risk enforcement on 2 August 2026. Maximum penalties for most violations are EUR 15 million or 3 percent of global annual turnover, whichever is higher. Any AI system placed on the EU market, or whose output is used in the EU, must comply, including systems operated by non-EU companies with EU users.

  2. Several US states have AI-specific workplace laws now in force. Illinois (1 January 2026) requires notice whenever AI is used in employment decisions. Other states have similar provisions at various stages of enactment and enforcement.

  3. UK regulators and advisory bodies are increasingly active. CIPD, ACAS, the ICO, and the EHRC have all issued AI-related guidance in 2025 and 2026. There is no single AI Act in the UK yet, but existing law (UK GDPR, the Equality Act 2010, the Employment Rights Act) applies to AI systems in workplace contexts.

This piece is written to be compatible with these regulations, but compatibility is not compliance. Compliance requires:

  • A lawyer familiar with your jurisdiction
  • A review of your specific products and their use cases
  • Documentation that can be shown to auditors
  • Ongoing monitoring as regulation evolves

Our stance

We believe ethical AI development is not a substitute for legal compliance, and legal compliance is not a substitute for ethical AI development. Both are necessary. Neither is sufficient alone.

An AI product can be fully legal and deeply harmful. An AI product can be ethically built and still fall foul of a regulation the builder did not know existed. The two disciplines need to work together.

What we do at Marbl

  • We consult legal advisors for our own products when deploying consequential AI (anything touching employment, finance, health, or minors)
  • We treat the EU AI Act as our default baseline, on the grounds that designing to the strictest plausible regulation costs less than designing to weaker standards and then retrofitting
  • We document decisions in writing so that if a regulator asks, we can show our working
  • We update this piece when regulation changes materially, with the change logged in CHANGELOG.md

What we are not doing

We are not publishing a compliance toolkit. The regulations are too varied, the enforcement details too jurisdiction-specific, and the stakes too high for us to publish guidance that could be mistaken for legal advice.

Use this section as a signpost to the regulations that exist. Use qualified legal counsel for everything that actually matters.

Authors

Richard Bland (human) and Serene (AI), April 2026.