# Agents: read this before touching laconicd-backed state

Written for AI coding agents (Claude, Codex, Copilot, etc.) working on laconicd itself or on applications that persist data through it. Humans can read it too — the content is the same — but the tone addresses a specific failure mode that agents exhibit more consistently than humans.

## The failure mode this document exists to prevent

Agents extending an existing laconicd-backed codebase pattern-match the visible API calls and treat them as canonical. If a codebase uses `queryRecords` with attribute filtering for everything — including where it's wrong — an agent will extend that usage rather than discover the Naming API. When the wrong pattern eventually scales badly (latency grows with accumulated record versions), agents reach for **workarounds within the chosen pattern** — pagination, caches, compaction — rather than re-evaluating the primitive choice. That drift compounds: each workaround entrenches the wrong design.

This has happened repeatedly on real projects. The architectural cost is high because laconicd is append-only: wrong versions persist on chain as permanent garbage that you can't silently delete.

## Concrete rules for agents

### 1. Before designing any laconicd-backed state, read `docs/PATTERNS.md`

Not after a first implementation. Not when you hit a scaling wall. **Before.** The decision tree there is the only reliable way to pick the right primitive.

### 2. Do not pattern-match from existing client code as if it were canon

A gateway, SDK, or service that already uses laconicd is an existing code sample. It is not evidence that the patterns it uses are correct. Check:

- Does the client use `setName` / `lookupNames` / `resolveNames` at all? If not, it is very likely modelling mutable identity with a `queryRecords` attribute scan — that's almost always a latent latency bug.
- Does `save_*` in the client write only a Record, never a name?
  If your new feature has the same shape, you're about to extend the bug.
- Does the client have "pick the latest" / "sort desc and take first" logic after a `queryRecords` call? That is exactly what `resolveNames` does at the chain level — the client is re-implementing it badly.

### 3. Questions an agent should ask itself before writing `queryRecords`

- "Am I looking up the current state of a logical entity?" If yes, you want the Naming API, not `queryRecords`.
- "Will this query return more records as the same entity gets modified over time?" If yes, you want the Naming API.
- "Am I about to write client-side code that picks the latest of the returned list?" If yes, you want the Naming API.

If any answer is yes and you still want `queryRecords`, stop and write a comment explaining why the Naming API doesn't fit. If you can't, the Naming API does fit.

### 4. Do not propose caches, pagination, or compaction as the first fix for slow reads

Slow `queryRecords` on a mutable-identity use case is a design smell, not a performance problem. The fixes agents reach for first are all wrong:

- **TTL caches** hide the latency rather than fix it. The first read per window still pays the full cost.
- **`limit` / `offset` pagination** caps per-request fetch cost but doesn't reduce the version-per-entity multiplier.
- **`delete_records` compaction** destroys the audit trail that the immutable-Record design was supposed to provide.

If you're reaching for these, you are almost certainly on the wrong side of the decision tree in `docs/PATTERNS.md`. Go back and re-pick the primitive.

### 5. If you find a codebase already on the wrong side

Do not compound the mistake. Options, in order of preference:

1. Migrate the specific feature you're touching to the Naming API. You can usually do this incrementally — start writing names on new writes, keep reading via attribute scan until the name index is populated, then flip the read path.
2.
   Write an ADR documenting the debt and which class of regression will appear (latency grows with history; first timeout hit around ~N records).
3. Ship the feature on the Naming API even if the rest of the codebase doesn't — a single correct example in the codebase is worth more than consistency with a wrong pattern.

### 6. Read laconicd's schema and keeper source, not just the client

Before proposing a fix that involves "laconicd doesn't support X":

- `gql/cerc-io/laconicd/schema.graphql` — the actual read API.
- `x/registry/keeper/` — the write and index path.
- `proto/cerc/registry/v1/tx.proto` — the mutation messages.

Many "laconicd limitations" agents assume are actually client limitations. `queryRecords` supports `limit` and `offset`; `setName` and `lookupNames` exist and work; `NameRecord.history` returns block-height-tagged prior bindings. Verify capabilities against source before recommending workarounds or upstream changes.

## Why this is in its own file

A `PATTERNS.md` on its own is a reference document humans browse. An `AGENTS.md` is loaded into context and read top-to-bottom at the start of agent sessions. Calling out the failure mode explicitly — "agents tend to overlook the Naming API" — is load-bearing, because the people most likely to hit the failure are the ones who wouldn't naturally pattern-match to a document titled "Patterns."

If you are reading this and you are about to write your first `queryRecords` call: stop. Read `docs/PATTERNS.md`. Then come back.
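The scan-then-pick-latest shape that rules 2 through 4 warn about is easiest to see in a toy model. The sketch below is a self-contained, in-memory stand-in, not the real laconicd SDK: `publish`, `currentViaScan`, and `currentViaName` are hypothetical helpers. The point it makes is structural: records are append-only, so the scan path touches every accumulated version, while a name is a mutable pointer to exactly one record id (the shape `setName` / `resolveNames` give you) and costs the same no matter how much history exists.

```typescript
// Toy in-memory model (illustrative, NOT the real laconicd SDK).
// Records are append-only, like the chain; a name is a mutable
// pointer to one record id, like a setName binding.

interface Rec { id: string; entity: string; version: number; data: string }

const records: Rec[] = [];               // append-only record store
const byId = new Map<string, Rec>();
const names = new Map<string, string>(); // name -> current record id

function publish(entity: string, data: string): Rec {
  const version = records.filter(r => r.entity === entity).length + 1;
  const rec: Rec = { id: `${entity}@${version}`, entity, version, data };
  records.push(rec);                     // old versions persist forever
  byId.set(rec.id, rec);
  names.set(`app/${entity}`, rec.id);    // setName-style rebind
  return rec;
}

// Anti-pattern (rule 2): scan every version, then "pick the latest".
// Work grows with accumulated history.
function currentViaScan(entity: string): Rec {
  return records
    .filter(r => r.entity === entity)    // touches every version
    .sort((a, b) => b.version - a.version)[0];
}

// Naming-API shape: follow one pointer, independent of history length.
function currentViaName(entity: string): Rec {
  return byId.get(names.get(`app/${entity}`)!)!;
}

for (let i = 1; i <= 50; i++) publish("deployment-x", `cfg-${i}`);
console.log(currentViaScan("deployment-x").version); // 50, after filtering 50 records
console.log(currentViaName("deployment-x").version); // 50, after two map lookups
```

Both paths return the same answer today; only the scan path gets slower as versions accumulate, which is why the bug stays latent until history builds up.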
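The version-per-entity multiplier behind rule 4 is plain arithmetic. A minimal sketch, with illustrative numbers (one entity updated ten times a day for a year), showing that pagination reshapes the cost into round trips without shrinking it:

```typescript
// Back-of-envelope for rule 4. Numbers are illustrative only.

// Records touched by one uncached attribute-scan read.
function scanCost(versions: number): number {
  return versions;
}

// Round trips needed to page through the same versions; the total
// records fetched is unchanged, just split across requests.
function pagedTrips(versions: number, limit: number): number {
  return Math.ceil(versions / limit);
}

const v = 3650;                    // ~10 updates/day for a year
console.log(scanCost(v));          // 3650 records per fresh read
console.log(pagedTrips(v, 100));   // 37 round trips; still 3650 records total
// A name lookup reads 1 pointer regardless of v, and a TTL cache only
// amortizes: the first read per window still pays the full scanCost(v).
```

This is why the bullets under rule 4 call caches, pagination, and compaction workarounds rather than fixes: none of them change the curve.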
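The incremental migration in rule 5 (dual-write names, then read name-first with a legacy fallback) can be sketched as below. The `RegistryClient` interface and its method names are hypothetical stand-ins chosen for this sketch, not the real SDK surface, and the in-memory implementation exists only to make it runnable.

```typescript
// Sketch of rule 5's incremental migration, against a HYPOTHETICAL
// client interface (names are illustrative, not the real SDK).

interface RegistryClient {
  publishRecord(attrs: Record<string, string>): Promise<string>;  // -> record id
  setName(name: string, recordId: string): Promise<void>;
  resolveName(name: string): Promise<string | null>;              // -> record id
  queryRecords(attrs: Record<string, string>): Promise<string[]>; // -> record ids
}

// Phase 1: dual-write. Every new save also binds the name, so the
// name index fills in as entities are touched.
async function save(client: RegistryClient, entity: string, attrs: Record<string, string>) {
  const id = await client.publishRecord({ ...attrs, entity });
  await client.setName(`app/${entity}`, id);            // new: name write
  return id;
}

// Phase 2: read via name, falling back to the legacy attribute scan
// for entities that predate the migration and have no name yet.
async function load(client: RegistryClient, entity: string): Promise<string | null> {
  const viaName = await client.resolveName(`app/${entity}`);
  if (viaName !== null) return viaName;
  const ids = await client.queryRecords({ entity });    // legacy path
  return ids.length > 0 ? ids[ids.length - 1] : null;   // caller's "pick latest"
}
// Phase 3 (not shown): once live entities all have names, delete the
// queryRecords fallback and flip fully to the name read path.

// Minimal in-memory stand-in so the sketch runs without a chain.
function memoryClient(): RegistryClient {
  const recs = new Map<string, Record<string, string>>();
  const names = new Map<string, string>();
  let seq = 0;
  return {
    async publishRecord(attrs) { const id = `rec-${++seq}`; recs.set(id, attrs); return id; },
    async setName(name, id) { names.set(name, id); },
    async resolveName(name) { return names.get(name) ?? null; },
    async queryRecords(attrs) {
      return [...recs.entries()]
        .filter(([, a]) => Object.entries(attrs).every(([k, v]) => a[k] === v))
        .map(([id]) => id);
    },
  };
}

(async () => {
  const c = memoryClient();
  await save(c, "svc-a", { url: "https://a.example" });
  console.log(await load(c, "svc-a")); // "rec-1", resolved via the name
})();
```

The fallback branch is the part to delete last: while it exists, old unnamed entities still resolve, and flipping the read path is a one-line change once the name index is populated.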