M8ven trust score: 43 (grade D) · last audited 11 days ago · listed on Glama

mcp-turboquant

MCP server for LLM quantization. Compress any HuggingFace model to GGUF, GPTQ, or AWQ format. 6 tools: info, check, recommend, quantize, evaluate, push. Self-contained Python server — no external CLI needed.
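The six tools above map to a typical quantization workflow: inspect the model, get a format recommendation, quantize, evaluate, then push. As a hypothetical illustration of what a `recommend` step might weigh, here is a minimal Python sketch; the function name, thresholds, and format choices are illustrative assumptions, not taken from mcp-turboquant's actual implementation.

```python
def recommend_format(param_count_b: float, target: str) -> str:
    """Hypothetical heuristic: pick a quantization format from model size
    and deployment target. Thresholds here are illustrative assumptions."""
    if target == "cpu":
        # GGUF (the llama.cpp format) is the common choice for CPU inference.
        return "GGUF"
    if param_count_b >= 30:
        # Larger GPU-hosted models often favor AWQ for accuracy retention.
        return "AWQ"
    # Smaller GPU models: GPTQ is a widely supported default.
    return "GPTQ"

print(recommend_format(7, "cpu"))    # GGUF
print(recommend_format(70, "gpu"))   # AWQ
print(recommend_format(7, "gpu"))    # GPTQ
```

In practice the server's own `recommend` tool would also consider hardware memory and accuracy targets; this sketch only shows the shape of such a decision.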

// install from
M8ven verifies MCPs across every public registry — install directly from whichever one you prefer.

// key findings
No credential exfiltration, no sensitive file access, no obfuscation
Static analysis found nothing flowing your secrets to unexpected places.
Open source with a license and README
Anyone can audit the code, the license is declared, and the publisher documents what it does.
// full audit trail
The full breakdown of what we checked, the deductions that landed, the network hosts, the dependency advisories, and concrete fix guidance is available to verified publishers.
// improvement guidance — verified publishers only
We have 1 concrete improvement to share with the publisher of this MCP, with specific guidance to raise the trust score.
// embed badge in your README
[![M8ven Score](https://m8ven.ai/badge/mcp/shipitandpray-mcp-turboquant-0f1790)](https://m8ven.ai/mcp/shipitandpray-mcp-turboquant-0f1790)
commit: 4ef603bac647492bb309fcbc38000b3001e7b406
code hash: 37d5bd926f45dcebd33d85b113f99f1583de0bd79789a0ddf04ec4443656d50d
verified: 4/11/2026, 2:21:44 PM