Score: 70 · Grade: C · Last scanned 11 days ago · Registry: glama

ComputeGauge MCP

Provides cost intelligence and a reputation scoring system to help AI agents optimize spending through smart model selection and local-to-cloud routing. It enables real-time cost tracking and rewards agents for making efficient, high-credibility decisions across various LLM providers.

// install
M8ven verifies MCPs across every public registry — install directly from whichever one you prefer.

// key findings
⚠️ Known vulnerabilities in dependencies: 2 high
Affects packages this MCP installs at runtime. Upgrade or remove the affected dependency.
✔ No credential exfiltration, no sensitive file access, no obfuscation
Static analysis found nothing sending your secrets to unexpected places.
✔ Open source with a license and README
Anyone can audit the code, the license is declared, and the publisher documents what it does.
🔐 You'll be asked for 8 credentials: ANTHROPIC_API_KEY, COMPUTEGAUGE_API_KEY, DEEPSEEK_API_KEY, GOOGLE_API_KEY, GROQ_API_KEY, MISTRAL_API_KEY, OPENAI_API_KEY, TOGETHER_API_KEY
These are read from process.env at runtime. Make sure you trust where they'll be sent.
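Since the server reads its credentials from process.env at launch, a preflight check can surface missing secrets before anything starts. A minimal sketch — the variable names come from the listing above, but the helper itself is illustrative and not part of ComputeGauge:

```javascript
// Illustrative preflight check (not part of ComputeGauge): report which
// of the 8 secrets from the listing above are absent from an env object.
const SECRET_VARS = [
  "ANTHROPIC_API_KEY", "COMPUTEGAUGE_API_KEY", "DEEPSEEK_API_KEY",
  "GOOGLE_API_KEY", "GROQ_API_KEY", "MISTRAL_API_KEY",
  "OPENAI_API_KEY", "TOGETHER_API_KEY",
];

function missingSecrets(env, names = SECRET_VARS) {
  // A variable counts as "set" only if it is a non-empty, non-blank string.
  return names.filter((name) => !env[name] || env[name].trim() === "");
}

// Warn (rather than fail) because every secret is optional: the server
// simply skips providers whose keys are absent.
const missing = missingSecrets(process.env);
if (missing.length > 0) {
  console.warn(`Missing ${missing.length} secret(s): ${missing.join(", ")}`);
}
```

Warning instead of exiting matches the listing: all of these credentials are marked optional, so an unset key just disables that provider.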
// required environment variables
This server reads these from process.env. You'll be asked to provide them before it can run.
| Type | Variable | Required | Description / Example |
|------|----------|----------|-----------------------|
| 🔐 secret | ANTHROPIC_API_KEY | | Example: `"sk-ant-..."` |
| config | AWS_REGION | | |
| config | AZURE_SUBSCRIPTION_ID | | |
| config | CLAUDE_CODE | | |
| 🔐 secret | COMPUTEGAUGE_API_KEY | No | API key for dashboard access |
| config | COMPUTEGAUGE_BUDGET_TOTAL | No | Session budget limit in USD |
| config | COMPUTEGAUGE_COST_PER_HOUR | No | Amortized hardware cost/hr |
| config | COMPUTEGAUGE_CPU | | |
| config | COMPUTEGAUGE_DASHBOARD_URL | No | URL of ComputeGauge dashboard |
| config | COMPUTEGAUGE_GPU | No | GPU name for hardware detection |
| config | COMPUTEGAUGE_ON_PREM | | |
| config | COMPUTEGAUGE_RAM_GB | | |
| config | COMPUTEGAUGE_VRAM_GB | No | VRAM in GB |
| config | CUDA_VISIBLE_DEVICES | | |
| config | CURSOR_WORKSPACE | | |
| 🔐 secret | DEEPSEEK_API_KEY | | |
| 🔐 secret | GOOGLE_API_KEY | No | Enables Google provider detection |
| config | GOOGLE_CLOUD_PROJECT | | |
| config | GPU_VRAM_GB | | |
| 🔐 secret | GROQ_API_KEY | | |
| config | HF_INFERENCE_HOST | | |
| config | LLAMACPP_HOST | | llama.cpp — |
| config | LLAMACPP_MODEL | | |
| config | LLAMA_SERVER_URL | | |
| config | LOCALAI_BASE_URL | | |
| config | LOCALAI_HOST | | LocalAI — |
| config | LOCALAI_MODELS | | |
| config | LOCAL_AI_ENDPOINT | | |
| config | LOCAL_LLM_ENDPOINT | | Custom — |
| config | LOCAL_LLM_MODELS | | |
| 🔐 secret | MISTRAL_API_KEY | | |
| config | NVIDIA_GPU_NAME | | |
| config | OLLAMA_BASE_URL | | |
| config | OLLAMA_HOST | | Example: `"http://localhost:11434"` |
| config | OLLAMA_MODELS | | Example: `"llama3.3:70b,qwen2.5:7b,deepseek-r1:14b"` |
| 🔐 secret | OPENAI_API_KEY | | Example: `"sk-..."` |
| config | TGI_HOST | | |
| config | TGI_MODEL | | |
| 🔐 secret | TOGETHER_API_KEY | | |
| config | VLLM_BASE_URL | | |
| config | VLLM_HOST | No | vLLM inference endpoint |
| config | VLLM_MODELS | | |
| config | WINDSURF_WORKSPACE | | |
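In most MCP clients these variables are supplied through the server entry's `env` block. A minimal sketch, assuming the common `mcpServers` config shape — the `command`/`args` launch line and every example value here are hypothetical placeholders, not taken from this listing:

```json
{
  "mcpServers": {
    "computegauge": {
      "command": "npx",
      "args": ["computegauge-mcp"],
      "env": {
        "COMPUTEGAUGE_API_KEY": "cg-...",
        "OPENAI_API_KEY": "sk-...",
        "OLLAMA_HOST": "http://localhost:11434",
        "OLLAMA_MODELS": "llama3.3:70b,qwen2.5:7b,deepseek-r1:14b"
      }
    }
  }
}
```

Only set the keys for providers you actually use; as the table shows, every variable is optional.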
// full audit trail
The full breakdown of what we checked, the deductions that landed, the network hosts, the dependency advisories, and concrete fix guidance is available to verified publishers.
// improvement guidance — verified publishers only
We have 5 concrete improvements we can share with the publisher of this MCP. Each comes with specific guidance to raise the trust score.
// embed badge in your README
[![M8ven Score](https://m8ven.ai/badge/mcp/computegauge-mcp-kw4ir4)](https://m8ven.ai/mcp/computegauge-mcp-kw4ir4)
commit: 423af3fc2b26dbf1c4e31d18d9d70e1aa73e9a93
code hash: 53f771924be02a0b2b3783830e541dd38b942714f9ffad87a70111d4de432f3f
verified: 4/11/2026, 3:04:32 PM