Et ai.: A proposal for AI attribution
When we use AI for work—whether it’s “vibe coding” with an agent, drafting an article, or generating images—good results rarely come in one shot.
The workflow usually follows a Human → AI → Human loop: the human sets intent and constraints; the AI generates outputs or takes actions; the human reviews, corrects, and approves the result, repeating until it’s good enough to ship.
This raises an attribution question. If both the human and the AI did meaningful work, how should that be reflected?
The current options are insufficient:
- A human name alone misrepresents the effort and conceals the tool.
- An “AI-generated” label alone ignores the human intent, curation, and refinement.
- Crediting both is accurate but unstandardized. Do we list every model? In what format? Real workflows often involve multiple models, and listing the full toolchain in the author field is noisy.
We need something short, general, memorable, and standardized.
The proposal
Academia already has a compact shorthand for “this wasn’t solo”: et al., from the Latin et alii (“and others”). It’s standardized, lightweight, and widely understood.
I propose adapting this pattern for the generative age:
et ai.
From et artificialis intelligentia (“and artificial intelligence”)
Meaning: AI contributed materially, and the named human author vouches for the result.
Interpretation
1. It is a top-level signal. Et ai. acts as a flag, not a full documentation log. It appears wherever the primary author is named—whether that’s an article byline, a code file header, or an image caption. It gives an early, consistent signal that AI was involved without cluttering the work with a full toolchain list.
2. It keeps the details optional but accessible. Because the primary credit remains clean, specific tool details can be provided elsewhere if needed—for example, in an attribution note, project description, or repository README—without crowding the high-level view.
3. It implies human responsibility. The convention should only apply where a human exercised meaningful understanding and judgment over the result. It gives the reader a clear contract: “The result is the product of human and AI work, but the named author takes full responsibility for it.”
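As a concrete sketch of points 1 and 2, here is what the convention might look like in a code file header, alongside a small helper for formatting bylines. The file name, author, and helper function are hypothetical illustrations, not part of the proposal itself:

```python
# report_generator.py
# Author: Jane Doe, et ai.
# Tool details: see the attribution note in the project README.

def byline(author: str, ai_assisted: bool = False) -> str:
    """Format a byline, appending the 'et ai.' marker when AI contributed materially."""
    return f"{author}, et ai." if ai_assisted else author
```

The header keeps the top-level signal short; the README (or an equivalent attribution note) is where the specific models and tools would be documented.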
When to use it
- Use it when AI involvement is substantial enough that omitting it would misrepresent how the work was produced.
- Do not use it for single-shot prompting, unreviewed output, auto-published content, or blind acceptance of code you don’t understand.
We need a standard that embraces the reality of hybrid workflows without discarding human accountability. Et ai. offers exactly that.