# AI API
Endpoints for interacting with AI models through a unified provider layer. These endpoints are served by the backend API and do not require session authentication.

## List models (frontend)
### Response
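The original response example is missing here. A plausible shape is sketched below; the field names and values are illustrative assumptions, not confirmed by the source:

```json
{
  "models": [
    {
      "id": "openrouter/auto",
      "provider": "openrouter",
      "name": "Auto Router"
    }
  ]
}
```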
### Errors
| Code | Description |
|---|---|
| 500 | Failed to fetch models or invalid response from provider |
## Health check
### Response
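The original response example is missing. Only the `status` field (with values `healthy` or `degraded`) is documented in the text below; the per-provider breakdown shown here is an assumption:

```json
{
  "status": "healthy",
  "providers": {
    "openrouter": "up"
  }
}
```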
The `status` field is `healthy` when all providers are reachable and `degraded` when one or more are down.
### Errors
| Code | Description |
|---|---|
| 503 | AI service unavailable |
## List models
### Response
## List models by provider
### Path parameters
| Parameter | Type | Description |
|---|---|---|
| `provider` | string | Provider name (for example, `openrouter`) |
### Response
## Select model
### Request body
| Field | Type | Required | Description |
|---|---|---|---|
| `taskType` | string | No | Type of task (default: `general`) |
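A minimal request body using the only documented field; the value shown is the documented default:

```json
{
  "taskType": "general"
}
```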
### Response
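The original response example is missing. A sketch of a plausible shape, returning the selected model ID; the field name is an assumption, not confirmed by the source:

```json
{
  "model": "openrouter/auto"
}
```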
### Errors
| Code | Description |
|---|---|
| 404 | No models available |
## Chat completion
### Request body
| Field | Type | Required | Description |
|---|---|---|---|
| `messages` | array | Yes | Array of message objects, each with a `role` (`user`, `assistant`, or `system`) and `content` |
| `model` | string | No | Model ID; auto-selected based on `taskType` if omitted |
| `taskType` | string | No | Used for auto-selection when `model` is omitted |
| `temperature` | number | No | Sampling temperature |
| `top_p` | number | No | Nucleus sampling parameter |
| `max_tokens` | number | No | Maximum number of tokens in the response |
### Example request
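The original example body is missing. A request built from the documented fields; the message content and parameter values are illustrative:

```json
{
  "messages": [
    { "role": "system", "content": "You are a concise assistant." },
    { "role": "user", "content": "Summarize the release notes." }
  ],
  "taskType": "general",
  "temperature": 0.7,
  "max_tokens": 512
}
```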
### Response
Returns the AI provider's chat completion response. The exact shape depends on the provider used.

### Errors
| Code | Description |
|---|---|
| 400 | Messages array is required and must be non-empty |
| 404 | No models available |
| 500 | AI provider error |
## Estimate cost
### Request body
| Field | Type | Required | Description |
|---|---|---|---|
| `model` | string | Yes | Model ID |
| `inputTokens` | number | Yes | Number of input tokens |
| `outputTokens` | number | Yes | Number of output tokens |
### Response
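The original response example is missing. A sketch of a plausible shape; the field name and value are assumptions:

```json
{
  "estimatedCost": 0.0042
}
```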
### Errors
| Code | Description |
|---|---|
| 400 | `model`, `inputTokens`, and `outputTokens` are all required |
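To make the arithmetic behind this endpoint concrete, here is a minimal client-side sketch of the same calculation. The per-token prices and the model name are placeholders, not real provider rates, and this is not the backend's actual implementation:

```python
# Placeholder price table: USD per token (not real provider rates).
PRICES_PER_TOKEN = {
    "example/model": {"input": 0.000002, "output": 0.000006},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate USD cost as input_tokens * input_price + output_tokens * output_price."""
    if model not in PRICES_PER_TOKEN:
        raise KeyError(f"no pricing data for model {model!r}")
    prices = PRICES_PER_TOKEN[model]
    return input_tokens * prices["input"] + output_tokens * prices["output"]

# 1,000 input tokens and 500 output tokens at the placeholder rates:
print(estimate_cost("example/model", 1000, 500))
```

Note that the endpoint requires all three fields, which is why the sketch takes them as required positional arguments rather than defaults.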