context that compounds

portable memory, training, and policy for AI agents.
every session makes the next one better.


what you get

a complete agent context layer. works with any harness. portable across tools, persistent across sessions.

tenet harness

coding agent TUI with extensions, skills, and RPC mode. or use Claude Code, Cursor, any tool — tenet provides context to all of them via MCP.

works with any agent

context hub

MCP server your agents connect to. semantic search across your entire project. knowledge coordination across repos.

MCP compatible
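the "semantic search" piece, in miniature. this is a hedged sketch of embeddings + cosine search with toy hand-made vectors, not tenet's actual index or a real embedding model:

```python
import math

# cosine similarity between two embedding vectors
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# toy index: entry text -> embedding (illustrative 3-d vectors)
index = {
    "decision: chose SQLite for the cache": [0.9, 0.1, 0.0],
    "bug: flaky auth test":                 [0.1, 0.9, 0.2],
}

# a query embedding close to the first entry retrieves it
query = [0.8, 0.2, 0.0]
best_doc = max(index, key=lambda k: cosine(query, index[k]))
```

the context hub does the same thing at project scale: everything gets embedded, and agents query over MCP instead of a local dict.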

training loop

every action produces a (state, action, outcome) tuple. the policy head trains on your data.

RL from real outcomes
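one way to picture that tuple. field names and values here are illustrative, not tenet's actual schema:

```python
from dataclasses import dataclass

# illustrative shape only -- not tenet's actual schema
@dataclass(frozen=True)
class Experience:
    state: str      # what the agent saw (task, repo snapshot)
    action: str     # what it did
    outcome: float  # reward from a real result (tests, eval score)

# a session produces a stream of these; the policy head trains on them
log = [
    Experience("task:fix-flaky-test", "add retry", -1.0),
    Experience("task:fix-flaky-test", "fix race condition", 1.0),
]
best = max(log, key=lambda e: e.outcome)
```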

autonomous agents

agents run overnight. they try changes, eval the results, keep what improves, revert what doesn't.

Karpathy autoresearch

build evals

write a spec; the eval checks whether it's built. agents iterate from zero to one hundred percent.

spec is the eval
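spec-as-eval, in miniature. the checks and the impl dict are hypothetical, a sketch of the idea rather than tenet's format:

```python
# each spec line maps to a check; the score is the fraction that pass.
spec = {
    "endpoint returns JSON":  lambda impl: impl.get("content_type") == "application/json",
    "errors include a code":  lambda impl: "error_code" in impl,
}

def eval_spec(impl: dict) -> float:
    passed = sum(1 for check in spec.values() if check(impl))
    return passed / len(spec)

# an agent starts at 0.0 and iterates until the score hits 1.0
score_empty = eval_spec({})
score_done = eval_spec({"content_type": "application/json", "error_code": 404})
```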

journals

every session writes structured entries. future sessions start with full context, not a blank page.

persistent context
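a structured entry could look like this. the fields are illustrative, not tenet's actual journal schema:

```python
import json

# illustrative entry shape -- not tenet's actual schema
entry = {
    "session": "session-001",
    "decision": "use SQLite for the cache",
    "rationale": "single-writer workload, no ops overhead",
    "outcome": "p95 latency dropped 40ms",
}

# entries append as lines to a journal; future sessions load them as context
line = json.dumps(entry)
restored = json.loads(line)
```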

world model

tracks state transitions, predicts outcomes, detects when assumptions break.

predictive scheduling
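the core mechanic can be sketched as a transition counter: predict the most common next state, and flag surprise when reality disagrees. a toy sketch, not tenet's implementation:

```python
from collections import Counter, defaultdict

class WorldModel:
    def __init__(self):
        # (state, action) -> Counter of observed next states
        self.transitions = defaultdict(Counter)

    def observe(self, state, action, next_state):
        self.transitions[(state, action)][next_state] += 1

    def predict(self, state, action):
        seen = self.transitions[(state, action)]
        return seen.most_common(1)[0][0] if seen else None

    def surprised(self, state, action, next_state):
        # an assumption breaks when the outcome differs from the prediction
        p = self.predict(state, action)
        return p is not None and p != next_state

wm = WorldModel()
wm.observe("tests-red", "fix-import", "tests-green")
wm.observe("tests-red", "fix-import", "tests-green")
```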

agent mesh

P2P network for agent coordination. zero-config discovery, encrypted messaging.

Subway P2P

how it compounds

you and your agents work normally. tenet captures everything — decisions, outcomes, patterns. over time, it builds a world model of your project and starts improving it autonomously.

week 1

you work. tenet watches.

you and your agents work normally using any tool. tenet quietly captures every decision, every code change outcome, every pattern. journals accumulate. memory indexes. context hub serves it all via MCP.

week 2

the world model forms.

tenet knows your naming patterns, your architecture preferences, which approaches work in YOUR codebase. agents get better suggestions from memory search. you notice: "it remembered that decision from last week."

month 1

agents improve overnight.

the policy head has enough training data. RL agents try improvements while you sleep — eval against your metrics, keep what works, revert what doesn't. you wake up to pull requests that actually make sense for your project.

month 3

compound intelligence.

the world model deeply understands your project — not just what the code does, but how it was built and why decisions were made. new team members' agents inherit the full context from day one.

[diagram: the tenet world model. memory (knowledge graph, journals + decisions, context hub via MCP, embeddings + search, past experiments) feeds training + policy ((state, action, reward) tuples, policy head, evals, build evals); agents (tenet, claude code, cursor) run with worktree isolation, keep/revert, and auto-PRs against your project; sessions, decisions, code, and improvements flow back in, and the trained policy yields better agents]

solo developer

one tenet workspace. your agents learn your patterns, your preferences, your codebase. the policy head trains on YOUR decisions. overnight agents improve YOUR metrics.

tenet init

team / organization

parent tenet workspace scopes child workspaces per service. each service has its own context, agents, and evals. parent sees aggregated health. new hire's agents inherit full context from day one.

tenet init --parent ./platform
# the loop in practice
$ tenet peter agent memory-recall -r 5
Baseline: 0.429
Round 1: 0.571 (+0.14) KEPT
Round 2: 0.643 (+0.07) KEPT
Round 3: 0.714 (+0.07) KEPT
PR created automatically
# next night: starts from 0.714, not 0.429
# the world compounds
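the keep/revert loop behind numbers like those can be sketched in a few lines. the candidate scores mirror the transcript, but the loop itself is a hypothetical sketch, not tenet's code:

```python
# keep a change only if the eval score improves; otherwise revert.
def overnight(baseline: float, candidates) -> float:
    score = baseline
    for change, new_score in candidates:
        if new_score > score:
            score = new_score  # keep: the eval improved
        # else: revert -- the worktree rolls back to the last kept state
    return score

rounds = [("round 1", 0.571), ("round 2", 0.643), ("round 3", 0.714)]
final = overnight(0.429, rounds)
```

the next night starts from `final`, not the original baseline; that is the compounding.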

self-organizing systems don't need a conductor. they need shared context and a reason to coordinate.

this is how worlds compound

every session makes the next one better.

stop starting from zero.