Concept · Explanation

MCP intelligence

MCP (Model Context Protocol) is a standard JSON-RPC interface that LLM agents use to discover and call tools. Senkani exposes 19 MCP tools designed specifically for the friction points of agent-driven coding sessions.

Why MCP

An MCP-aware agent doesn't hard-code tool definitions; it asks the server at runtime which tools exist and what their schemas look like. Senkani's MCP server registers once via senkani init, and every MCP-compatible agent (Claude Code out of the box, Cursor or Copilot once wired up, or any custom client) can call the same 19 tools.
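Discovery happens over JSON-RPC with the standard MCP tools/list method. A minimal Python sketch of what a client sends and how it reads the answer (the response fragment below is hypothetical; the real Senkani schemas are richer):

```python
import json

def tools_list_request(request_id: int) -> str:
    """Build a JSON-RPC 2.0 tools/list request, the standard MCP discovery call."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
        "params": {},
    })

def tool_names(response: str) -> list[str]:
    """Extract the advertised tool names from a tools/list response."""
    result = json.loads(response)["result"]
    return [tool["name"] for tool in result["tools"]]

# Hypothetical response fragment; actual Senkani inputSchema bodies will differ.
sample = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"tools": [
        {"name": "senkani_embed", "inputSchema": {"type": "object"}},
        {"name": "senkani_vision", "inputSchema": {"type": "object"}},
    ]},
})

print(tool_names(sample))  # ['senkani_embed', 'senkani_vision']
```

Because the client asks rather than assumes, the same discovery loop works against any MCP server, which is exactly why registering once via senkani init is enough.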

The 19 tools, classified

How the indexer backs Perception

Four of the six perception tools are thin wrappers over the same tree-sitter-backed symbol index: 25 vendored grammars, FTS5 full-text search, BM25 ranking with optional RRF fusion against MiniLM file embeddings, a bidirectional dependency graph built at index time, and FSEvents-driven incremental updates. Cold searches complete in under 5 ms; cached searches in under 1 ms.
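The RRF fusion step is worth spelling out. Reciprocal Rank Fusion merges the BM25 ranking and the embedding-similarity ranking by summing 1/(k + rank) per document; a short Python sketch (file names hypothetical, k=60 is the conventional RRF constant, not a confirmed Senkani setting):

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: each ranked list contributes 1/(k + rank)
    to a document's score; documents are returned by descending score."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["parser.swift", "index.swift", "search.swift"]   # text ranking
embed_hits = ["search.swift", "parser.swift", "graph.swift"]  # vector ranking

print(rrf_fuse([bm25_hits, embed_hits]))
# ['parser.swift', 'search.swift', 'index.swift', 'graph.swift']
```

Documents that appear high in both lists win without either ranking needing comparable score scales, which is why RRF is a common choice for fusing lexical and embedding retrieval.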

How MLX backs Local ML

senkani_embed runs MiniLM-L6-v2 via MLX on the Neural Engine (sub-200 ms; 384-dimensional Float32 vectors). senkani_vision runs Gemma via MLX (sub-500 ms on M-series). A shared MLXInferenceLock FIFO-serializes every MLX call so concurrent requests never contend for the Neural Engine, and loaded model containers are dropped on macOS memory-pressure warnings.
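The FIFO-serialization idea can be sketched in a few lines. This is a Python analogue of what an MLXInferenceLock-style primitive does, not the actual implementation (which lives on the Swift side): waiters take a ticket and acquire strictly in arrival order, so inference calls queue instead of racing.

```python
import threading
from collections import deque

class FIFOLock:
    """Illustrative FIFO lock: acquire() grants the lock in strict
    arrival order, serializing callers (here standing in for MLX calls)."""
    def __init__(self) -> None:
        self._cond = threading.Condition()
        self._queue: deque = deque()
        self._holder = None

    def acquire(self) -> None:
        with self._cond:
            ticket = object()              # unique marker for this waiter
            self._queue.append(ticket)
            # Wait until the lock is free AND we are at the front of the queue.
            while self._holder is not None or self._queue[0] is not ticket:
                self._cond.wait()
            self._queue.popleft()
            self._holder = ticket

    def release(self) -> None:
        with self._cond:
            self._holder = None
            self._cond.notify_all()        # wake waiters; only the head proceeds

lock = FIFOLock()
done = []
def infer(i):                              # stand-in for one MLX inference call
    lock.acquire()
    done.append(i)                         # critical section: one caller at a time
    lock.release()

threads = [threading.Thread(target=infer, args=(i,)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(done))  # [0, 1, 2, 3, 4]
```

A plain mutex would also serialize calls but makes no ordering guarantee under contention; the ticket queue is what keeps latency fair when several agent requests pile up behind one slow model call.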

Version negotiation

senkani_version returns server_version, tool_schemas_version, and schema_db_version. Clients cache tool schemas keyed on tool_schemas_version, which increments only on breaking changes. schema_db_version surfaces PRAGMA user_version on the session DB for migration diagnostics.
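The caching contract is simple enough to sketch. A hypothetical client-side cache keyed on tool_schemas_version refetches schemas only when the version reported by senkani_version changes (the fetch callable and schema contents below are placeholders):

```python
class SchemaCache:
    """Sketch of a client cache keyed on tool_schemas_version:
    schemas are refetched only when that version changes."""
    def __init__(self, fetch_schemas) -> None:
        self._fetch = fetch_schemas    # callable: () -> dict of tool schemas
        self._version = None
        self._schemas = None

    def get(self, tool_schemas_version: str) -> dict:
        if tool_schemas_version != self._version:
            self._schemas = self._fetch()          # cache miss: refetch
            self._version = tool_schemas_version
        return self._schemas                       # cache hit otherwise

calls = []
def fetch():                                       # placeholder fetch
    calls.append(1)
    return {"senkani_version": {"type": "object"}}

cache = SchemaCache(fetch)
cache.get("3")       # first call: fetches
cache.get("3")       # same version: served from cache
cache.get("4")       # version bump (breaking change): refetch
print(len(calls))    # 2
```

Because tool_schemas_version increments only on breaking changes, a client can hold its cached schemas across server restarts and non-breaking upgrades without revalidating each tool.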