An observational memory server that uses LLM agents to capture, compress, and recall project-specific decisions and context across AI coding sessions. It stores structured observations in SQLite to maintain long-term session continuity and architectural awareness.
Configuration is read from `process.env`; you'll be asked to provide required values before the server can run.

| Variable | Default | Description |
| --- | --- | --- |
| `CODEWATCH_AUTO_REFLECT` | `true` | Enable auto-reflection |
| `CODEWATCH_CHARS_PER_TOKEN` | | |
| `CODEWATCH_DATA_DIR` | `~/mcp-data/codewatch-memory/` | SQLite storage location |
| `CODEWATCH_DEFAULT_BRANCH` | | |
| `CODEWATCH_FALLBACK_PROVIDER` | `openai` | Fallback LLM (`google`, `openai`, `groq`, `none`) |
| `CODEWATCH_GOOGLE_MODEL` | `gemini-2.5-flash` | Google model |
| `CODEWATCH_GROQ_MODEL` | `llama-3.3-70b-versatile` | Groq model |
| `CODEWATCH_LLM_PROVIDER` | `groq` | |
| `CODEWATCH_LOG_LEVEL` | `info` | Logging verbosity |
| `CODEWATCH_MAX_COMPRESSION` | `3` | Max compression level (0-3) |
| `CODEWATCH_OPENAI_MODEL` | `gpt-4o-mini` | OpenAI model |
| `CODEWATCH_PROJECT_DIR` | | |
| `CODEWATCH_REFLECT_THRESHOLD` | `40000` | Auto-reflect trigger (tokens) |
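As a rough sketch of how these variables and their documented defaults might be resolved at startup, the snippet below reads each key from `process.env` and falls back to the listed default. The `env` helper and `config` shape are hypothetical, for illustration only; the server's actual implementation may differ.

```typescript
// Hypothetical resolution of codewatch-memory settings from process.env.
// Variable names and defaults are taken from the table above.
const env = (key: string, fallback: string): string =>
  process.env[key] ?? fallback;

const config = {
  llmProvider: env("CODEWATCH_LLM_PROVIDER", "groq"),
  fallbackProvider: env("CODEWATCH_FALLBACK_PROVIDER", "openai"),
  dataDir: env("CODEWATCH_DATA_DIR", "~/mcp-data/codewatch-memory/"),
  autoReflect: env("CODEWATCH_AUTO_REFLECT", "true") === "true",
  reflectThreshold: Number(env("CODEWATCH_REFLECT_THRESHOLD", "40000")),
  maxCompression: Number(env("CODEWATCH_MAX_COMPRESSION", "3")),
  logLevel: env("CODEWATCH_LOG_LEVEL", "info"),
};

console.log(config.llmProvider, config.reflectThreshold);
```

Unset variables simply fall through to their defaults, so only the keys you want to override need to be set in your MCP client's environment.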