a complete agent context layer. works with any harness. portable across tools, persistent across sessions.
searchable knowledge graph: every decision, discovery, and learned pattern persists. graph edges, embeddings, hybrid search. your agents remember what worked.

MCP compatible: an MCP server your agents connect to. semantic search across your entire project. works with any harness.

RL from real outcomes: every action produces a (state, action, outcome) tuple. the policy head trains on your data. learns YOUR patterns.

Karpathy autoresearch: agents run overnight. they try changes, eval the results, keep what improves, revert what doesn't. wake up to PRs.

spec is the eval: write a spec; the eval checks whether it's built. agents iterate from 0 to 100 percent autonomously.

persistent context: every session writes structured entries. decisions, features, discoveries. future sessions start with full context.

predictive scheduling: tracks state transitions, predicts outcomes, detects when assumptions break. agents reason, not just act.

Subway P2P: a P2P network for agent coordination. zero-config discovery, encrypted messaging, pub/sub.

10ET sits between your agents and your project. it captures context, accumulates training data, and improves the policy, automatically.
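the "graph edges, embeddings, hybrid search" memory above can be sketched as a blend of two relevance scores. a minimal sketch, assuming a toy corpus; `embed` here is a character-bigram stand-in for a real embedding model, not 10ET's actual API:

```python
def embed(text: str) -> set:
    # toy "embedding": character-bigram overlap stands in for a vector model
    t = text.lower()
    return {t[i:i + 2] for i in range(len(t) - 1)}

def semantic_score(query: str, doc: str) -> float:
    q, d = embed(query), embed(doc)
    return len(q & d) / len(q | d) if q | d else 0.0

def keyword_score(query: str, doc: str) -> float:
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_search(query: str, docs: list, alpha: float = 0.5) -> list:
    # blend semantic and keyword relevance; alpha weights the semantic side
    scored = [(alpha * semantic_score(query, d) + (1 - alpha) * keyword_score(query, d), d)
              for d in docs]
    return [d for s, d in sorted(scored, reverse=True) if s > 0]

memory = [
    "decision: use sqlite for the training buffer",
    "discovery: eval flakes when run in parallel",
    "pattern: revert commits that regress the eval score",
]
print(hybrid_search("training buffer decision", memory)[0])
```

real deployments typically replace the keyword side with BM25 and the semantic side with a vector index; the blend-and-rank shape stays the same.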
use any harness: Pi, Claude Code, Cursor, or a custom script. 10ET provides context via MCP. the agent reads memory, past decisions, and experiment history before making changes.
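MCP requests are JSON-RPC 2.0; `tools/call` invokes a named tool on the server. a minimal sketch of what a harness sends to query memory; the tool name `search_memory` and its arguments are illustrative assumptions, not 10ET's real schema:

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    # MCP tool invocation: JSON-RPC 2.0 envelope around "tools/call".
    # "search_memory" below is a hypothetical tool name, not 10ET's actual API.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

msg = make_tool_call(1, "search_memory", {"query": "past decisions about the eval harness"})
print(msg)
```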
what was the state? what action was taken? what was the outcome? this tuple gets written to the training buffer. decisions get journaled. new knowledge gets indexed into memory.
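the tuple written to the training buffer can be sketched as one JSON object per line, appended as each action completes. a minimal sketch, assuming an append-only JSONL buffer; the field contents are illustrative:

```python
import io
import json
from dataclasses import asdict, dataclass

@dataclass
class Transition:
    state: str    # snapshot of project context before the action
    action: str   # what the agent did
    outcome: str  # measured result (eval delta, test pass/fail, ...)

def append_to_buffer(buf, t: Transition) -> None:
    # one JSON object per line: cheap to append now, easy to stream at train time
    buf.write(json.dumps(asdict(t)) + "\n")

buffer = io.StringIO()  # stands in for an append-only file on disk
append_to_buffer(buffer, Transition(
    state="eval score 0.62, flaky test in ci",
    action="pinned the test seed",
    outcome="eval score 0.62, ci green",
))
print(buffer.getvalue())
```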
nightly: the policy head retrains on accumulated tuples. eval scripts measure real metrics. agents try changes, keep what improves, revert what doesn't. PRs get created automatically.
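the keep-what-improves, revert-what-doesn't loop reduces to a greedy hill climb over the eval metric. a minimal sketch under that assumption; `run_eval`, `apply`, and `revert` are hypothetical stand-ins for the real eval script and git plumbing:

```python
def decide(before: float, after: float, epsilon: float = 1e-9) -> str:
    # keep a change only if the eval metric strictly improves; otherwise revert
    return "keep" if after > before + epsilon else "revert"

def nightly(candidates, run_eval, apply, revert):
    # greedy loop: each kept change raises the baseline, so improvements compound
    baseline = run_eval()
    kept = []
    for change in candidates:
        apply(change)
        score = run_eval()
        if decide(baseline, score) == "keep":
            baseline = score
            kept.append(change)
        else:
            revert(change)
    return kept
```

one design note: re-running the eval after every change (rather than batching) keeps bad changes from masking good ones, at the cost of more eval time overnight.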
memory of what worked. trained preferences. past experiment history. the agent doesn't start from zero — it starts from the accumulated intelligence of every session before it.
self-organizing systems don't need a conductor. they need shared context and a reason to coordinate.
this is how worlds compound
stop starting from zero. give your agents memory, training, and a world model that compounds.