Concept · Explanation

Compound learning

Senkani mines your own sessions for patterns and surfaces four artifact types: filter rules, context docs, instruction patches, and workflow playbooks. Patterns flow through three states: .recurring → .staged → .applied.

The lifecycle

Every session's tool calls, outputs, and retries are logged to the session DB. A daily sweep (lazy: it runs at session start rather than on a timer) scans the recurring set and promotes any pattern meeting both recurrence ≥ 3 and Laplace-smoothed confidence ≥ 0.7 to .staged. You review staged proposals in the Sprint Review pane or via senkani learn review. Accepted proposals move to .applied; applied artifacts take effect on the next session.
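The promotion gate above can be sketched in a few lines. This is a minimal illustration, not Senkani's implementation; the Pattern fields and function names are assumptions, but the thresholds and the add-one (Laplace) smoothing match the rule stated above.

```python
from dataclasses import dataclass


@dataclass
class Pattern:
    recurrence: int  # distinct sessions the pattern appeared in
    successes: int   # observations where the pattern held
    trials: int      # total observations


def laplace_confidence(successes: int, trials: int) -> float:
    # Add-one smoothing: (s + 1) / (n + 2) keeps tiny samples from
    # producing an overconfident 1.0.
    return (successes + 1) / (trials + 2)


def should_stage(p: Pattern) -> bool:
    # Promote .recurring -> .staged only when BOTH thresholds hold.
    return p.recurrence >= 3 and laplace_confidence(p.successes, p.trials) >= 0.7
```

For example, a pattern seen in 3 sessions with 9 of 10 observations holding has smoothed confidence 10/12 ≈ 0.83 and is staged; the same stats across only 2 sessions are not.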

Filter rules (H, H+1)

The post-session waste analyzer detects when the output of a given command is repeatedly truncated to its first N lines or has a recurring substring stripped, and proposes a filter rule (e.g. head(50), stripMatching("progress")) for future invocations of that command. Proposals are regression-gated on real commands.output_preview samples: the proposed rule must not break outputs that previously passed through cleanly.
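A minimal sketch of the two rule shapes and the regression gate, assuming a rule is just a text transform. The Python names (head, strip_matching, passes_regression) are illustrative stand-ins for the head(...) and stripMatching(...) rules named above.

```python
import re


def head(n: int):
    """Rule: keep only the first n lines of the output."""
    def rule(text: str) -> str:
        return "\n".join(text.splitlines()[:n])
    return rule


def strip_matching(pattern: str):
    """Rule: drop every line matching the given regex."""
    rx = re.compile(pattern)
    def rule(text: str) -> str:
        return "\n".join(l for l in text.splitlines() if not rx.search(l))
    return rule


def passes_regression(rule, clean_samples) -> bool:
    # Regression gate: the rule must be a no-op on sampled outputs
    # that previously passed through cleanly.
    return all(rule(s) == s for s in clean_samples)
```

A stripMatching("progress") proposal fails the gate if any clean historical sample itself contains "progress", which is exactly the breakage the gate exists to catch.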

Context docs (H+2b)

Files read across ≥ 3 distinct sessions become priming documents at .senkani/context/<title>.md, injected into the next session's brief as a one-line "Learned:" section. The body is scanned by SecretDetector on every read and write.
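The "≥ 3 distinct sessions" criterion is a count of unique session IDs per file, not raw reads. A small sketch under that assumption (the event shape and function name are hypothetical):

```python
from collections import defaultdict


def promotable_context_docs(read_events, min_sessions: int = 3):
    # read_events: iterable of (session_id, file_path) pairs.
    sessions_per_file = defaultdict(set)
    for session_id, path in read_events:
        sessions_per_file[path].add(session_id)
    # Only files read in >= min_sessions DISTINCT sessions qualify;
    # ten reads inside one session count as a single session.
    return sorted(p for p, s in sessions_per_file.items() if len(s) >= min_sessions)
```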

Instruction patches (H+2c) — never auto-apply

Tool hints derived from per-session retry patterns. Example: if the agent consistently retries Read after receiving an outline when it wanted full content, an instruction patch proposes tweaking the tool's description to clarify the full: true parameter. These patches are never applied automatically from the daily sweep; the Schneier constraint forces an explicit senkani learn apply <id>. The rationale: instruction drift is a subtle prompt-injection surface, so you want a human in the loop.
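The never-auto-apply rule is easiest to see as a tiny state machine over the three lifecycle states. This is an illustrative sketch, not Senkani's code; the function name and the explicit flag are assumptions standing in for senkani learn apply <id>.

```python
def promote(state: str, artifact_type: str, *, explicit: bool = False) -> str:
    # .recurring -> .staged: the daily sweep may do this for every type.
    if state == "recurring":
        return "staged"
    # .staged -> .applied: instruction patches demand an explicit
    # `senkani learn apply <id>`; other types can be accepted in review.
    if state == "staged":
        if artifact_type == "instruction_patch" and not explicit:
            raise PermissionError("instruction patches never auto-apply")
        return "applied"
    return state  # .applied is terminal
```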

Workflow playbooks (H+2c)

Named multi-step recipes mined from ordered tool-call pairs that occur within a 60-second window. Applied playbooks land at .senkani/playbooks/learned/<title>.md, namespace-isolated from shipped skills so the two don't collide.
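Mining ordered pairs within a 60-second window can be sketched as counting adjacent calls whose timestamps are close enough. A minimal illustration under that assumption (event shape and function name are hypothetical):

```python
from collections import Counter


def mine_tool_pairs(calls, window_s: int = 60):
    # calls: list of (timestamp_seconds, tool_name), sorted by time.
    pairs = Counter()
    for (t1, a), (t2, b) in zip(calls, calls[1:]):
        # Only count an ordered pair when the second call follows the
        # first within the window; long gaps break the recipe.
        if t2 - t1 <= window_s:
            pairs[(a, b)] += 1
    return pairs
```

Pairs that recur across sessions would then feed the same recurrence/confidence gate as every other artifact type.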

Enrichment via Gemma (H+2a)

Gemma 4 can optionally enrich, via MLX, the rationale strings that accompany each proposal. The enriched text lives in a dedicated enrichedRationale field and never enters the FilterPipeline. This is deliberate: enrichment improves the human review experience without altering runtime behavior.
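The field separation is the whole safety argument: LLM output stays in a display-only field that the runtime never reads. A structural sketch (the class and function names are illustrative; enriched_rationale stands in for the enrichedRationale field):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Proposal:
    artifact_type: str
    rationale: str                             # machine-derived, shown and stored
    enriched_rationale: Optional[str] = None   # Gemma output, review UI only


def runtime_payload(p: Proposal) -> dict:
    # Only fields the runtime consumes. enriched_rationale is deliberately
    # omitted, so model-generated text can never reach the FilterPipeline.
    return {"artifact_type": p.artifact_type, "rationale": p.rationale}
```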

Cadence

KB ↔ learning bridge

High-mention entities from the knowledge base boost compound-learning confidence; applied context docs seed KB entity stubs; and rolling back a KB entity cascades to invalidate any context docs derived from it. The two systems are not isolated; each feeds the other.
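The rollback cascade amounts to reverse-lookup over provenance: every context doc records which KB entities seeded it, and rolling an entity back invalidates exactly those docs. A minimal sketch under that assumption (data shape and function name are hypothetical):

```python
def invalidate_on_rollback(entity: str, derived_from: dict) -> set:
    # derived_from maps context-doc path -> set of KB entities that
    # seeded it. Rolling back `entity` invalidates every doc that
    # lists it in its provenance.
    return {doc for doc, entities in derived_from.items() if entity in entities}
```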