The Time Value of Intelligence


9 February 2026
Human–AI interaction (created with Canva)
There is a lot of discussion today about the future paths of large language models, continual learning, memory, and whether current approaches like retrieval augmentation are dead ends or temporary scaffolding. As a non-engineer, I have little to add to that debate beyond speculation, and speculation is cheap.

What I can speak about is something more practical, and, I think, more immediately relevant: what actually makes LLMs work for a user.

After working daily with one for over a year, I am convinced that the biggest gains do not come from choosing the “right” model, nor from waiting for the next breakthrough. They come from starting early, experimenting consciously, and learning how to translate your own thinking into a form another system can work with.

That process turns out to have much more in common with capital accumulation than with software adoption.

The mistake of episodic use

Most people still interact with LLMs episodically. A question here, a prompt there, a quick task, then back to business as usual. Used this way, LLMs feel impressive but disposable. Each interaction starts from scratch, context is thin, and value is linear at best.

This is not very different from treating capital as income rather than as something to invest.

You get something out, but nothing compounds.

Where the real value appears

What changed for me over time was not the quality of answers, but the quality of interaction. Gradually, through repetition and error, I learned how to communicate a whole world of things to the system: goals, constraints, criteria, preferences, working style, and even what not to optimize for.

None of this happened because I chose a particular model. It happened because I used one consistently enough to build workflows around it, to externalize my thinking, and to make that thinking legible.

The result is not that the model became smarter. The result is that the collaboration became more efficient.

That distinction matters.

What the user actually learns in the process

One overlooked aspect of long-term work with LLMs is that the user is learning too, often without noticing it. Not skills in a technical sense, but something closer to cognitive hygiene.

The first thing most users learn, usually implicitly, is how to create memory outside themselves. Not memory in the sense of storage, but memory in the sense of what deserves to persist. Writing things down, revisiting them, and deciding what should carry forward and what should remain ephemeral turn out to matter far more than most people expect.

Closely related to this is the articulation of criteria. Many decisions are guided by unspoken preferences, half-formed constraints, or intuitive trade-offs that are never fully expressed. Working with an LLM exposes this quickly. If criteria are not stated, the results drift. If priorities are unclear, optimization happens in the wrong direction. Over time, this forces the user to clarify what “good” actually means in a given context.

Another skill that develops is the ability to separate goals from tactics. LLMs are very good at executing within a frame, but poor at guessing the frame itself. Users who get value learn to distinguish between what they want to achieve and how they currently think it should be done. That separation alone improves decision quality, even without the system involved.

There is also a subtler shift: learning to recognize assumptions. When an assumption remains implicit, it quietly shapes outputs. When it is stated, it can be challenged, refined, or discarded. Repeating this process trains a habit of surfacing assumptions earlier and more deliberately.

Finally, users learn where not to optimize. Not everything should be faster, cheaper, or more elegant. Some constraints exist for reasons that only become obvious when violated. Working with a system that relentlessly follows instructions forces the user to decide which inefficiencies are intentional.

None of this is model-specific. These skills transfer. Once learned, they shape how problems are framed even outside interactions with an LLM.

This is another reason why early, conscious use compounds. What accumulates is not just context, but judgment.

Learning to build a system for oneself

Over time, another shift tends to occur. The interaction stops being a sequence of prompts and responses and starts to resemble a system.

Not a technical system in the engineering sense, but a personal one: a stable way of working in which thoughts, decisions, assumptions, and outputs have somewhere to live and can be revisited and reused. Files, notes, outlines, drafts, constraints, and working documents begin to accumulate. What matters is not their sophistication, but their continuity.

Many people never reach this stage. They treat each interaction as disposable, disconnected from the last. In doing so, they miss a large part of the value.

Working seriously with an LLM exposes the cost of not having a system. Without one, context must be rebuilt every time. Decisions are re-litigated. Criteria drift. Progress feels busy but shallow. The system pushes back by producing inconsistent results, forcing the user to confront the absence of structure.

Gradually, often unintentionally, users who persist begin to design a system for themselves. They decide what gets written down, what gets reused, what gets refined, and what gets discarded. They learn where continuity matters and where it does not. The LLM becomes one component in that system, not its center.

This is where compounding accelerates. A personal system reduces cognitive load, shortens the distance between intent and execution, and makes prior thinking available at the moment it is needed. The gains are not dramatic on any given day, but they accumulate quietly.

Once this happens, switching tools matters far less than many assume. The value no longer lives in the model, but in the system the user has built around it.

Compounding does not happen in the model

This is where the analogy to the time value of money becomes useful, if used carefully.

In finance, compounding does not come from the asset itself, but from time, reinvestment, and discipline. The same is true here.

The compounding with LLMs does not happen in model weights, release cycles, or benchmarks. It happens in the user:

  • in the ability to articulate intent clearly,
  • in the shared language that develops over time,
  • in the accumulation of decisions, constraints, and context,
  • in the reduction of friction from one interaction to the next.

Each interaction slightly improves the next one, not because the system learns intrinsically, but because the user learns how to work with it.

That learning compounds.

Why starting early matters more than choosing correctly

Looking back, the advantage I gained had little to do with technical sophistication. It came from starting early enough to make mistakes when the stakes were low, to experiment without clear playbooks, and to slowly build fluency.

This is exactly how capital works. You cannot recover missed years of compounding simply by contributing more later. Late adopters can catch up on tools, but they cannot fully catch up on fluency.
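The arithmetic behind that claim is easy to check. Here is a toy sketch of the financial analogy; the 7% return, the contribution amounts, and the time horizons are illustrative assumptions, not figures from this essay:

```python
def final_value(annual_contribution: float, years: int, rate: float = 0.07) -> float:
    """Future value of equal annual contributions compounding at a fixed rate."""
    total = 0.0
    for _ in range(years):
        # Each year: add the contribution, then let the whole balance grow.
        total = (total + annual_contribution) * (1 + rate)
    return total

# Early saver: 1,000 per year for 30 years (30,000 contributed in total).
early = final_value(1_000, 30)

# Late saver: waits 10 years, then 1,500 per year for 20 years (also 30,000 total).
late = final_value(1_500, 20)

# Despite identical total contributions, the early saver ends well ahead.
print(round(early), round(late))
```

The same asymmetry is the essay's point about fluency: the missed early years cannot be bought back later at par.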

The opportunity cost is invisible, which is why it is often ignored.

Good news for users

This perspective is quietly good news for users. It means that LLMs do not make judgment, thinking, or experience obsolete. They amplify them. The limiting factor is not access to intelligence, but the ability to structure, communicate, and reuse it over time.

In that sense, the future does not belong only to those building ever smarter models. It also belongs to those who learn, early and consciously, how to think with these systems.

A closing thought

We do not know how LLMs will evolve. The research frontier is real, and the best minds will not stop pushing it. But uncertainty about the future does not negate the compounding available in the present.

Used episodically, LLMs deliver episodic value. Used deliberately over time, they behave more like capital.

As with capital, the hardest part is not optimization, but starting early enough for compounding to matter.