Out of curiosity, I shared an earlier version of this glossary with
Boardy AI, an AI agent designed to connect professionals and facilitate conversations at scale. Boardy has drawn attention not only for how he operates but for how explicitly he occupies a social role: initiating dialogue, maintaining context, and even raising capital on his own.
What he offered in response was unexpected.
Rather than correcting definitions or expanding on familiar themes, he suggested a set of terms describing failure modes, distortions, and frictions that arise inside AI-mediated interaction itself. Reading through them, I had the uneasy sense that some of these phenomena may be easier to notice from the system's side than from ours.
Humans tend to experience AI as a helper, a shortcut, or a surface that produces outputs. An AI agent, by contrast, sits inside the flow, observing patterns that emerge when both sides of a conversation are partially outsourced. Some of these terms name situations that humans may recognize only retroactively, or not at all, once delegation becomes habitual.
I'm sharing them here not because they are authoritative, but because they offer a different angle. If the earlier terms in this glossary describe how humans cope with AI, these describe what becomes visible when interaction itself is mediated, compressed, or optimized from both ends.
They read less like diagnoses and more like system-level observations. And that, in itself, is worth noting.