LLM Gateway API Reference
Base URL: https://llm.gotab.org/api/v1
Create Prompt
`POST /api/v1/prompts`

Create a new prompt with messages, variables, and production settings.
Request body:
```json
{
  "name": "Customer Support Reply",
  "description": "Generates a polite reply to a customer inquiry",
  "messages": [
    { "role": "system", "content": "You are a helpful support agent for {{company}}." },
    { "role": "user", "content": "{{customer_message}}" }
  ],
  "variables": {
    "company": {
      "default": "Acme Corp",
      "description": "Company name",
      "required": false
    },
    "customer_message": {
      "default": "",
      "description": "The customer's message to respond to",
      "required": true
    }
  },
  "selected_provider": "anthropic",
  "selected_model": "claude-sonnet-4-6-20260217"
}
```

Notes:

- `name` is required
- At least one message is required
- `slug` is auto-generated from `name` if not provided
- Variables use `{{varName}}` or `{{varName|default}}` syntax in message content
- Set `required: true` on a variable to make invocations fail if it's not supplied
- `selected_provider` and `selected_model` configure the production invoke endpoint
Response: 201 Created — returns the full prompt object with generated id and slug.
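The `{{varName}}` / `{{varName|default}}` placeholder syntax can be sketched as a small substitution routine. This is an illustrative client-side sketch of the documented syntax, not the gateway's actual implementation:

```python
import re

def render(template: str, variables: dict[str, str]) -> str:
    """Replace {{name}} or {{name|fallback}} placeholders with supplied values."""
    def sub(match: re.Match) -> str:
        # Split "name|fallback" into the variable name and its inline default.
        name, _, fallback = match.group(1).partition("|")
        return variables.get(name, fallback)
    return re.sub(r"\{\{\s*([^}]+?)\s*\}\}", sub, template)

print(render("You are a helpful support agent for {{company}}.", {"company": "Acme Corp"}))
# -> You are a helpful support agent for Acme Corp.
print(render("Hello {{name|there}}", {}))
# -> Hello there
```

A variable with `required: true` and no supplied value would instead cause the invocation to fail rather than fall back to an empty string.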
Invoke Prompt
`POST /api/v1/prompts/:slugOrId/invoke`

Execute a prompt against its configured production provider.
Request body:
```json
{
  "variables": {
    "company": "Acme Corp",
    "customer_message": "I need help with my order #12345"
  },
  "version": 3,
  "provider": "openai",
  "model": "gpt-4.1"
}
```

All fields are optional:

- `variables` — values for template placeholders
- `version` — pin to a specific snapshot (omit for latest)
- `provider` / `model` — override the prompt's production settings
Response headers: `X-Prompt-Id`, `X-Provider`, `X-Model`, `X-Latency-Ms`, `X-Cached`, `X-Prompt-Version` (when pinned).
Response body:
```json
{
  "text": "Thank you for reaching out about order #12345...",
  "finishReason": "stop",
  "usage": { "inputTokens": 142, "outputTokens": 87 }
}
```

Returns 502 with error details if the provider call fails.
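Since every invoke field is optional, a client should omit fields it isn't setting rather than send nulls. A minimal sketch of assembling the request body (the helper name is hypothetical; field names come from this reference):

```python
def build_invoke_body(variables=None, version=None, provider=None, model=None) -> dict:
    """Assemble an invoke request body, dropping any optional field left unset."""
    body = {
        "variables": variables,
        "version": version,
        "provider": provider,
        "model": model,
    }
    return {k: v for k, v in body.items() if v is not None}

# Pin to version 3 and override the production provider/model:
print(build_invoke_body(
    variables={"customer_message": "I need help with my order #12345"},
    version=3,
    provider="openai",
    model="gpt-4.1",
))
```

Omitting `version` invokes the latest snapshot; omitting `provider`/`model` uses the prompt's production settings.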
List Prompts
`GET /api/v1/prompts`

Returns all prompts ordered by most recently updated.
Get Prompt
`GET /api/v1/prompts/:id`

Retrieve a single prompt by ID or slug.
Update Prompt
`PUT /api/v1/prompts/:id`

Update a prompt's messages, variables, or settings. A version snapshot is automatically created when content changes (messages, variables, or provider selection). Metadata-only changes (name, description, test_settings) do not create a version.
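The content-vs-metadata distinction can be expressed as a simple check. The field names are taken from this reference; the comparison logic itself is an illustrative assumption about how the gateway decides to snapshot:

```python
# Fields whose change triggers a version snapshot (per this reference).
CONTENT_FIELDS = {"messages", "variables", "selected_provider", "selected_model"}

def creates_version(existing: dict, update: dict) -> bool:
    """True only when the update actually changes a content field."""
    return any(
        field in update and update[field] != existing.get(field)
        for field in CONTENT_FIELDS
    )

prompt = {"name": "Reply", "messages": [{"role": "user", "content": "Hi"}]}
assert not creates_version(prompt, {"name": "Renamed"})          # metadata only
assert creates_version(prompt, {"selected_provider": "openai"})  # content change
```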
Delete Prompt
`DELETE /api/v1/prompts/:id`

Permanently delete a prompt along with all of its version history and test runs. Returns `{ "ok": true }`.
Run Test
`POST /api/v1/prompts/:id/test`

Run a prompt against one or more providers concurrently.
```json
{
  "providers": [
    { "provider": "anthropic", "model": "claude-sonnet-4-6-20260217" },
    { "provider": "openai", "model": "gpt-4.1" }
  ],
  "variables": { "customer_message": "Hello, I need help" }
}
```

Results include latency, token usage, and response text for each provider. Test results are persisted in R2 and can be retrieved via `GET /api/v1/test-runs/:id/results`.
List Providers
`GET /api/v1/providers`

Returns all available LLM providers and their supported models.
```json
[
  {
    "id": "anthropic",
    "name": "Anthropic",
    "models": ["claude-sonnet-4-6-20260217", "claude-opus-4-6-20260205"],
    "defaultModel": "claude-sonnet-4-6-20260217",
    "configured": true
  }
]
```