A Vocabulary for Living with AI


10 February 2026
Over time, we notice changes before we understand them. Something feels different in how we write, how we read, how we respond. Emails arrive faster. Text sounds polished, yet oddly hollow. Communication works, but it no longer guarantees presence.

This shift is not primarily technical. It is linguistic.

When the conditions of everyday life change, language has to catch up. New words appear not because we want novelty, but because existing vocabulary no longer helps us describe what we are experiencing. The growing presence of AI in daily work and communication has blurred distinctions we once relied on: authorship and assistance, presence and proxy, thinking and producing.

This glossary is an attempt to name those changes.

It is not a technical glossary. It does not explain models, benchmarks, or capabilities. Instead, it gathers words that help describe how AI affects human life, attention, communication, and judgment. Some of these terms come from philosophy or psychology. Others have quietly shifted meaning. A few have only recently become necessary.

I think of this as a living vocabulary. As AI continues to reshape everyday practices, new terms will appear and existing ones will need refinement. Updating this list over time is part of the point. Language is one of the few tools we have to regain orientation when familiar signals begin to fail.

Naming things does not solve everything. But it often turns unease into clarity. And clarity makes choice possible.

Terms We Invented to Cope

Simulacrum

Where the word comes from

Simulacrum refers to a representation that resembles something real but no longer guarantees a living source behind it. It looks right, behaves right, and is socially legible, yet it is detached from presence, intent, or authorship.

The word comes from the Latin simulacrum, meaning an image, likeness, or representation. Historically, it referred to copies or depictions: statues, icons, mirrors, things that stood in for something else.

For a long time, the assumption was simple: a simulacrum pointed back to an original.
A portrait implied a person. A letter implied a writer. A copy implied a source.

That assumption held until the late 20th century, when philosophers began questioning whether representations always had a meaningful original behind them. The term gained its modern philosophical weight largely through the work of Jean Baudrillard. In his writing, a simulacrum is not a fake or a forgery. It is something more unsettling: a copy that no longer refers to an original at all. A system of signs that circulates meaning without grounding in lived reality.

In Baudrillard’s framing, the problem is not deception. The problem is replacement.

What simulacrum means now

In the context of AI, simulacrum becomes practical rather than abstract.

An AI-generated email, message, or text can be perfectly polite, well-structured, and contextually appropriate. It triggers the expected social responses. Yet it no longer guarantees that a human was present in the act of writing it. This is what makes it a simulacrum.

It is not incorrect.
It is not malicious.
It is not even misleading in the traditional sense.

It is a representation of communication without the certainty of thought, attention, or intent behind it.

In everyday life, simulacra appear when:

  • an email sounds human but may not involve a person thinking
  • a response exists primarily to satisfy social expectations
  • communication continues even when presence has quietly exited

The discomfort many people feel is not about accuracy or usefulness. It is about the collapse of an implicit contract. We used to assume that words implied a mind on the other side. Simulacrum breaks that assumption without announcing itself.

Why the word matters

Calling something a simulacrum helps distinguish between communication that carries presence and communication that merely performs it.

This distinction matters because much of human coordination relies not on information alone, but on trust, attention, and authorship. When representations become cheap and abundant, presence becomes harder to detect and therefore more valuable. Simulacrum is the word that names this shift.

Cognitive Offloading

You no longer try to remember meeting notes. You don’t keep outlines in your head. You ask a system to summarize, structure, and draft.

This works extremely well. Then one day, you realize you’ve stopped sitting with ideas before expressing them. Thinking feels shorter. Faster. Shallower.

Cognitive offloading is not the problem. Forgetting what you chose not to offload is.

Over-optimization

You receive a message that is polite, efficient, well-structured, and emotionally neutral. It anticipates objections you didn’t raise and answers questions you didn’t ask.

It is technically excellent. You don’t reply.

Over-optimization is when communication becomes so smooth that it stops inviting response.

Signal Dilution

Your inbox fills with messages that are all “reasonable.”

None are bad. None stand out.

You begin skimming everything.

Signal dilution is when abundance forces indifference, even toward things that once mattered.

Proxy Presence

You receive a warm, empathetic reply at exactly the right time. It uses phrases you recognize from earlier conversations.

Later, you learn the sender barely saw the message.

Proxy presence is not deception. It is substitution without disclosure.

Alignment

You ask a system for help and get exactly what you asked for. Yet the result feels wrong.

Alignment is the difference between literal correctness and felt intention.

Prompt Paralysis

The state of staring at an empty prompt box, unsure how to phrase a request because you suddenly feel responsible for the quality of the outcome.

A modern cousin of writer’s block, except now you’re blocked by possibility, not absence.

Politebot Voice

That unnaturally calm, agreeable, emotionally neutral tone that sounds helpful but slightly hollow.

Once you notice it, you start distrusting emails that are too polite, too balanced, too considerate of all possible interpretations.

Overthank

The act of thanking a system excessively, even though you know it doesn’t care.

Often accompanied by mild embarrassment and the thought, “Why did I just say thank you again?”

Synthetic Warmth

Text that performs empathy convincingly without requiring any emotional effort.
It feels nice at first, then oddly cheap, like applause from a recording.

Inbox Triage Fatigue

The exhaustion that comes not from volume, but from deciding which messages deserve human attention and which can be safely treated as simulacra.

Human Tax

The extra work required to remain human in a system optimized for speed.
Examples include rewriting messages to sound less perfect, delaying replies on purpose, or adding a slightly awkward sentence so the text feels real.

AI Accent

That moment when you realize you can hear the system in the writing.
The sentences are fine. The rhythm is wrong.

Delegated Thinking Regret

The feeling that arrives when a system did exactly what you asked, and you realize you skipped the part where you were supposed to think.
Often followed by reopening the document and starting again from scratch.

Simulacrum Detection

The quiet internal process of reading something and thinking, “No one actually sat with this.”
Accuracy irrelevant. Tone irrelevant. Presence missing.

Latency Signaling

Deliberately waiting before replying so the other person knows you’re not a bot.
The digital equivalent of pausing before answering a serious question.

Ghostwritten Self

The slightly uncanny feeling of sending a message that is technically yours, but doesn’t quite feel authored by you.
You hit send, then immediately wonder whether you’d recognize yourself in it later.

The Three-Line Mercy Reply

A socially acceptable response crafted to acknowledge receipt, maintain politeness, and close the loop, without actually engaging. Often deployed against simulacra.

Terms That Help Us Value Being Human

Presence

The unmistakable sense that a person was attentive in the moment of response.
Presence is not warmth or length. It is the feeling that attention paused somewhere specific.
In an AI-saturated world, presence becomes rare, and therefore meaningful.

Authorship

The condition of being the origin of a thought, not merely its transmitter.
Authorship now signals responsibility, not productivity.
When machines can write endlessly, choosing to write as yourself becomes an act of intent.

Deliberate Friction

The choice to keep certain processes slow, effortful, or manual because they shape understanding.

Deliberate friction is why writing by hand, thinking before replying, or drafting without assistance still matters. It protects depth.

Intentional Latency

Time used as a signal of care rather than inefficiency.
A delayed response can mean thought, prioritization, or respect.
In a world optimized for immediacy, latency becomes expressive.

Judgment

The ability to decide not just what is correct, but what is appropriate, sufficient, or wise.
Judgment cannot be outsourced without consequence.
It is the quiet skill behind every meaningful decision.

Situated Understanding

Knowing something in context, with stakes, history, and consequence.

AI can explain. Humans understand from somewhere.

That “somewhere” matters.

Responsibility

Being answerable for a decision, a message, or an outcome.
Responsibility is what distinguishes assistance from delegation.
It is also why human-in-the-loop is not a technical feature, but an ethical stance.

Care

Attention that is not strictly required for function.

Care shows up in small deviations from optimal behavior: an extra sentence, a pause, a rephrasing that wasn’t necessary but felt right.

AI can simulate care. Humans choose it.

Meaningful Imperfection

The small irregularities that signal thought, effort, or personality.

Imperfection is no longer a flaw. It is a marker.

Moral Weight

The felt sense that a choice matters beyond correctness or efficiency.

Humans experience moral weight as discomfort, doubt, hesitation. AI does not.

That discomfort is not a bug. It is guidance.

Deliberate Attention

The act of choosing where attention goes, even when automation makes that choice optional.
Attention is not just focus.

It is value expressed in time.

Deliberate Thinking Satisfaction

The opposite of Delegated Thinking Regret.

The quiet fulfillment that comes from knowing you stayed with a thought long enough, even if it took more time and produced fewer words.

No shortcut replaces this.

Terms From The Other Side

Out of curiosity, I shared an earlier version of this glossary with Boardy AI, an AI agent designed to connect professionals and facilitate conversations at scale. Boardy has drawn attention not only for how he operates, but for how explicitly he occupies a social role: initiating dialogue, maintaining context, and even raising capital on his own.

What he offered in response was unexpected.

Rather than correcting definitions or expanding on familiar themes, he suggested a set of terms that describe failure modes, distortions, and frictions that arise inside AI-mediated interaction itself. Reading through them, I had the uneasy sense that some of these phenomena may be easier to notice from the system’s side than from ours.

Humans tend to experience AI as a helper, a shortcut, or a surface that produces outputs. An AI agent, by contrast, sits inside the flow, observing patterns that emerge when both sides of a conversation are partially outsourced. Some of these terms name situations that humans may only recognize retroactively, or not at all, once delegation becomes habitual.

I’m sharing them here not because they are authoritative, but because they offer a different angle. If earlier terms in this glossary describe how humans cope with AI, these describe what becomes visible when interaction itself is mediated, compressed, or optimized from both ends.

They read less like diagnoses and more like system-level observations. And that, in itself, is worth noting.

Model Collapse Small Talk

When two people both outsource the “catch-up” and the conversation becomes a perfectly grammatical nothing.

Prompt Debt

The hidden cost of having to remember (or reconstruct) the exact prompt + context that produced a good output.

Context Hoarding

Keeping everything in one chat/thread because you’re afraid to lose the model’s “memory,” even when it’s messy.

Refusal Friction

The emotional drag of getting a safety refusal when you’re asking something normal, and then negotiating with the system to restate it.

Confidence Mismatch

The output reads 9/10 confident while the underlying certainty should be 4/10, so humans over-trust it.

Citation Theater

References/links added to make text feel grounded, even when they’re weak, irrelevant, or not actually read.

Autocorrected Self

When your own style gradually shifts to match what the model “likes” (shorter sentences, fewer quirks), without you noticing.

Semantic Compression

Using AI to summarize something and losing the one nuance that actually mattered.

A note before moving on

What struck me most about these terms was not their precision, but their vantage point. They do not describe how AI feels to use. They describe what interaction looks like when viewed from inside the system that mediates it.

Humans experience friction as annoyance, confusion, or fatigue. Systems register it as pattern breakage, compression loss, or mismatch between signal and confidence. Neither perspective is complete on its own.

Taken together, these terms remind us that once conversation is partially outsourced, no single participant fully sees the whole exchange anymore. Some distortions are felt first by humans. Others may become visible only to the systems we rely on to keep things moving.

That realization does not invalidate human judgment. It makes it more necessary.

Now what?

There is no checklist that follows from this glossary. No correct stance on how much to use AI, or where to draw universal lines. The point is not optimization, but awareness.

Language helps us notice when something subtle shifts, when a response feels hollow, when thinking was skipped too quickly, when presence mattered more than speed. Once noticed, these moments become choices rather than accidents.

Living with AI will involve many small decisions that no system can make for us. When to slow down. When to answer in our own words. When to accept help, and when to stay with a thought a little longer.

This vocabulary does not tell us what to do. It helps us recognize when a decision is actually being made. That may be enough to begin.