Inference providers, attested.
Where you call your model matters. Quant variants, custom kernels, speculative decoding, and silent provider-side updates can shift benchmark scores by 2–15 points without any change to the model name. We attest every hosted model on every supported provider, then surface the drift against the canonical first-party API.
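The drift measurement described above can be sketched as a simple per-benchmark comparison. This is an illustrative sketch only: the function name, score dictionaries, and the 2-point flagging threshold are assumptions, not the product's actual pipeline.

```python
# Hypothetical sketch: flag benchmark drift between a hosted provider
# and the canonical first-party API. Names, scores, and the threshold
# are illustrative, not the real attestation pipeline.

def drift_report(canonical: dict, provider: dict, threshold: float = 2.0) -> dict:
    """Return per-benchmark deltas (provider minus canonical), flagging
    any whose magnitude meets or exceeds `threshold` points."""
    report = {}
    for bench, canon_score in canonical.items():
        if bench not in provider:
            continue  # benchmark not run on this provider; skip
        delta = provider[bench] - canon_score
        report[bench] = {"delta": round(delta, 2), "flagged": abs(delta) >= threshold}
    return report

# Example: a quant variant that quietly loses ~5 points on gsm8k.
canonical = {"mmlu": 86.4, "gsm8k": 92.1, "humaneval": 88.0}
hosted    = {"mmlu": 85.9, "gsm8k": 87.3, "humaneval": 88.2}
print(drift_report(canonical, hosted))
```

Small deltas stay within run-to-run noise and go unflagged; only the gsm8k regression crosses the threshold.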
Drift transparency for hosted-model marketplaces.
Every first-party provider (Anthropic API, OpenAI API, Google AI Studio) is the canonical reference for the models it publishes. Aggregators (OpenRouter), hosts (Together, Groq, Fireworks, Baseten), and self-hosted stacks (Ollama, vLLM) are measured against that canonical reference, and the delta is published.
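The canonical-reference lookup and the published delta record might look like the following. Everything here is a hedged sketch: the mapping, field names, and `publish_delta` helper are hypothetical, chosen only to illustrate the model-to-canonical relationship described above.

```python
# Hypothetical sketch of the canonical-reference model: each model maps
# to exactly one first-party API, and every other provider's score is
# published as a delta against it. All names and values are illustrative.

CANONICAL_PROVIDER = {
    "claude-sonnet": "Anthropic API",
    "gpt-4o": "OpenAI API",
    "gemini-pro": "Google AI Studio",
}

def publish_delta(model: str, provider: str,
                  provider_score: float, canonical_score: float) -> dict:
    """Build the public drift record for one (model, provider) pair."""
    return {
        "model": model,
        "provider": provider,
        "canonical": CANONICAL_PROVIDER[model],
        "delta": round(provider_score - canonical_score, 2),
    }

record = publish_delta("claude-sonnet", "OpenRouter", 84.1, 86.4)
print(record)
```

The key design point is that the delta is always relative to one fixed first-party reference, so scores from different hosts are directly comparable.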
If you operate a provider and want every hosted model attested with drift alerts piped into your Slack, the Provider Verified tier is $499/mo with unlimited multi-model attestations + a customer-facing badge widget. See the pitch →