The LLM Stats API gives you direct, read-only access to the same dataset that powers llm-stats.com — every model, every benchmark score, every pricing change.
Base URL
https://api.llm-stats.com/stats

All endpoint paths below (e.g. /v1/models) are appended to this base URL.
Authentication
Send your API key as a Bearer token on every request.
```
Authorization: Bearer YOUR_API_KEY
```
Request access and create keys from the developer console.
Keys starting with `ze_` are LLM Stats keys; the same key works for both the Stats API and the Gateway API.
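A minimal authenticated request can be sketched with Python's standard library. The helper names, the placeholder key, and the exact way the path joins onto the host are illustrative assumptions, not part of the API:

```python
import json
import urllib.request

# Assumed base; endpoint paths such as /v1/models are appended to it.
BASE_URL = "https://api.llm-stats.com/stats"

def build_request(path: str, api_key: str) -> urllib.request.Request:
    """Build a GET request with the Bearer token attached."""
    return urllib.request.Request(
        BASE_URL + path,
        headers={"Authorization": f"Bearer {api_key}"},
    )

def get_json(path: str, api_key: str) -> dict:
    """Send the request and decode the JSON response body."""
    with urllib.request.urlopen(build_request(path, api_key)) as resp:
        return json.load(resp)

# e.g. get_json("/v1/models", "ze_your_key") fetches the model catalog
```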
Endpoints at a glance
| Method | Path | Description |
|---|---|---|
| GET | /v1/models | Catalog with metadata, pricing, and category scores |
| GET | /v1/models/{id} | Full model detail with every benchmark score |
| GET | /v1/benchmarks | All benchmarks with categories and model counts |
| GET | /v1/scores | Score matrix — filter across models and benchmarks |
| GET | /v1/rankings | TrueSkill rankings by category |
| GET | /v1/updates | Recently added models (1–30 day lookback) |
See the Endpoints section for full request and response schemas.
Errors
Every error uses the same envelope, so you only need to write the handling code once.
```json
{
  "error": {
    "code": "not_found",
    "message": "Human-readable explanation.",
    "param": "model_id"
  }
}
```
`code` is the contract: branch on it, never on `message`.
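Because `code` is the stable contract, handling can key off it alone. A sketch, assuming the envelope shown above; the helper names and the dispatch target are ours:

```python
import json

def parse_error(body: str) -> tuple:
    """Pull the stable fields out of the shared error envelope."""
    err = json.loads(body).get("error", {})
    return err.get("code", "unknown"), err.get("message", ""), err.get("param")

def describe(body: str) -> str:
    """Branch on code, never on message."""
    code, message, param = parse_error(body)
    if code == "not_found":
        return f"no such resource (param: {param})"
    # Fallback: surface the human-readable message for debugging only.
    return f"unexpected error {code}: {message}"
```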
Rate limits
Limits are applied per API key, per endpoint:
| Endpoint | Limit |
|---|---|
| /v1/models/{id} | 120 / minute |
| /v1/rankings | 120 / minute |
| /v1/models | 60 / minute |
| /v1/benchmarks | 60 / minute |
| /v1/updates | 60 / minute |
| /v1/scores | 30 / minute |
Exceeding a limit returns HTTP 429 with the standard error envelope and a `Retry-After` header. Need higher limits? Get in touch.
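One way to honor a 429 is to sleep for the server-provided `Retry-After` value when present, and fall back to exponential backoff otherwise. A sketch using only the standard library; the helper names and retry counts are our choices:

```python
import json
import time
import urllib.error
import urllib.request

def retry_delay(headers, attempt: int, base: float = 1.0) -> float:
    """Prefer the server's Retry-After value; otherwise back off exponentially."""
    retry_after = headers.get("Retry-After")
    if retry_after is not None:
        return float(retry_after)
    return base * (2 ** attempt)

def get_with_retry(req: urllib.request.Request, max_attempts: int = 3) -> dict:
    """GET, sleeping and retrying on HTTP 429 up to max_attempts times."""
    for attempt in range(max_attempts):
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as exc:
            # Re-raise anything that is not a rate limit, or the final attempt.
            if exc.code != 429 or attempt == max_attempts - 1:
                raise
            time.sleep(retry_delay(exc.headers, attempt))
```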