
Agent API Reference

Execute agents via the OpenAI-compatible agent endpoint. Pass your agent ID via model in the request body alongside your messages.

OpenAI SDK compatible

Agents have their own /api/agents endpoint. Point the OpenAI SDK's baseURL at it, set model to your agent slug, and everything just works — streaming, tool calling, and all.

Prerequisites
  • API Key — Generate from Dashboard → Settings → API Keys
  • Agent ID — The slug shown in the agent editor (e.g. my-research-agent)
  • Agent must be active — Disabled agents return a 403 error
Try It First

Before writing code, test your agent in the Dashboard Playground:

  1. Go to Dashboard → Playground
  2. Switch to Agent mode using the toggle
  3. Select your agent from the dropdown
  4. Send a message and see the response in real time

Note: The Playground uses Firebase auth internally. For external API calls, use your API key (mp_*) in the Authorization: Bearer header.

Endpoint
POST https://agentlify.co/api/agents

The OpenAI SDK automatically appends /chat/completions to the baseURL, so it actually hits /api/agents/chat/completions. Both paths route to the same handler.

Request Body

json
{
  "model": "your-agent-id",
  "messages": [
    { "role": "user", "content": "Your message here" }
  ],
  "stream": false
}

Headers

  • Authorization — Bearer mp_YOUR_API_KEY
  • Content-Type — application/json
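
If you prefer raw HTTP over the SDK, the request can be assembled as a plain fetch call. A minimal sketch — buildAgentRequest is an illustrative helper, not part of any SDK:

```javascript
// Illustrative helper: assemble the HTTP request for the agent endpoint.
function buildAgentRequest(apiKey, agentId, messages, stream = false) {
  return {
    url: 'https://agentlify.co/api/agents/chat/completions',
    options: {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ model: agentId, messages, stream }),
    },
  };
}

// Usage:
// const { url, options } = buildAgentRequest(
//   process.env.AGENTLIFY_API_KEY,
//   'my-research-agent',
//   [{ role: 'user', content: 'Hello' }]
// );
// const res = await fetch(url, options);
// const completion = await res.json();
```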
Code Examples
javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.AGENTLIFY_API_KEY,   // mp_xxx key
  baseURL: 'https://agentlify.co/api/agents',
});

const completion = await client.chat.completions.create({
  model: 'my-research-agent',
  messages: [{ role: 'user', content: 'Summarize the latest on AI agents' }],
});

console.log(completion.choices[0].message.content);

// Agent metadata (not on the SDK's typed response — in TypeScript,
// cast first: (completion as any).agent_metadata)
const meta = completion.agent_metadata;
// { execution_id, agent_id, agent_name, steps_executed, total_latency, ... }
Response Format

Standard OpenAI chat completion format with an extra agent_metadata field for execution details.

json
{
  "id": "agent-exec-uuid",
  "object": "chat.completion",
  "created": 1706000000,
  "model": "agent:my-research-agent",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The agent's final response..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 150,
    "completion_tokens": 200,
    "total_tokens": 350
  },
  "agent_metadata": {
    "execution_id": "exec-uuid",
    "agent_id": "agent:my-research-agent",
    "agent_name": "my-research-agent",
    "steps_executed": 2,
    "total_latency": 1234,
    "skills_invoked": 1
  }
}
Router vs Agent

Rule of thumb: use a router for single-turn LLM calls with automatic model selection; use an agent for multi-step workflows that plan, research, and execute autonomously. Both use the OpenAI SDK — just change the baseURL:

javascript
import OpenAI from 'openai';

// Router — auto-selects the best model for you
const router = new OpenAI({
  apiKey: 'mp_YOUR_API_KEY',
  baseURL: 'https://agentlify.co/api/router/YOUR_ROUTER_ID',
});
const routerRes = await router.chat.completions.create({
  messages: [{ role: 'user', content: 'Hello' }],
});

// Agent — multi-step autonomous execution
const agent = new OpenAI({
  apiKey: 'mp_YOUR_API_KEY',
  baseURL: 'https://agentlify.co/api/agents',
});
const agentRes = await agent.chat.completions.create({
  model: 'my-research-agent',
  messages: [{ role: 'user', content: 'Research this topic' }],
});
Streaming Events

Unlike a simple LLM stream, agent streaming emits named SSE events for each phase of execution. Content chunks follow the standard OpenAI format, but you also get observability into planning, steps, and tool calls.

  • agent_start — Execution begins; includes execution_id and event_version
  • plan_summary — Planning complete; summary of planned steps (if planning mode is on)
  • step_start — A workflow step begins; step_id, step_name, step_type
  • data: {chunk} — Content delta; standard OpenAI chat.completion.chunk format
  • tool_call_start — Agent invokes a tool; tool_name, tool_call_id
  • tool_call_complete — Tool call arguments fully received; about to execute
  • tool_result — Tool returns; success, output_preview, execution_time
  • skill_invocation — Instructional skill retrieved; skill_name, success
  • clarification_request — Agent needs more context; questions[] to present to the user
  • step_complete — Step finished; output_preview, tokens, latency
  • agent_complete — Execution done; metrics, usage, total_latency

Example stream output

bash
event: agent_start
data: {"execution_id":"exec-abc","agent_name":"my-research-agent","event_version":"2.0"}

event: plan_summary
data: {"summary":"Planning complete","steps_planned":2}

event: step_start
data: {"step_id":"step-1","step_name":"research","step_type":"llm"}

data: {"choices":[{"delta":{"content":"Here are the"}}]}
data: {"choices":[{"delta":{"content":" latest trends..."}}]}

event: tool_call_start
data: {"tool_name":"builtin_web_search","tool_call_id":"call-123"}

event: tool_result
data: {"tool_name":"builtin_web_search","success":true,"execution_time":820}

event: step_complete
data: {"step_id":"step-1","step_name":"research","latency":2450}

event: step_start
data: {"step_id":"step-2","step_name":"summarize","step_type":"llm"}

data: {"choices":[{"delta":{"content":"In summary..."}}]}

event: step_complete
data: {"step_id":"step-2","step_name":"summarize","latency":1200}

event: agent_complete
data: {"status":"success","total_latency":3850}

data: [DONE]

Tip: Set streamFinalOnly: false in your agent's Settings → Advanced panel to receive step-level events. When true (default), only the final step's content chunks and the agent_start / agent_complete events are streamed.
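
When consuming the stream without the SDK, the named events above can be pulled out of the raw SSE text. A minimal sketch — parseAgentStream is an illustrative helper, and it assumes each data payload fits on one line, as in the example output above:

```javascript
// Illustrative SSE parser: split a raw stream buffer into { event, data } records.
// An "event:" line names the next data payload; bare "data:" lines are standard
// OpenAI content chunks; "data: [DONE]" terminates the stream.
function parseAgentStream(text) {
  const records = [];
  let event = 'message'; // SSE default when no event: line precedes the data
  for (const line of text.split('\n')) {
    if (line.startsWith('event:')) {
      event = line.slice('event:'.length).trim();
    } else if (line.startsWith('data:')) {
      const payload = line.slice('data:'.length).trim();
      if (payload === '[DONE]') break;
      records.push({ event, data: JSON.parse(payload) });
      event = 'message'; // reset after each data line
    }
  }
  return records;
}
```

Named events like step_start come back with their event name set, while plain content chunks arrive under the default 'message' event, so you can route observability events and text deltas separately.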

Error Codes

Errors follow the OpenAI error format. The OpenAI SDK raises typed exceptions automatically.

  • 400 Bad Request — Missing messages array or invalid format.
  • 401 Unauthorized — Invalid or missing API key (mp_*).
  • 402 Insufficient Credits — Add credits in Dashboard → Billing.
  • 403 Agent Disabled — Enable the agent from the editor first.
  • 404 Agent Not Found — No agent with that ID, or no access.
  • 408 Timeout — Execution exceeded the configured timeout.
  • 429 Rate Limited — Too many requests. Back off and retry.
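
With the OpenAI SDK these surface as API errors carrying the HTTP status. A minimal sketch of mapping status codes to the actions above — classifyAgentError and its retry policy are illustrative, not prescribed by the API:

```javascript
// Illustrative: decide how to react to an agent-endpoint error status.
function classifyAgentError(status) {
  const map = {
    400: { retry: false, reason: 'Bad request: check the messages array format' },
    401: { retry: false, reason: 'Unauthorized: check the mp_* API key' },
    402: { retry: false, reason: 'Insufficient credits: add credits in Dashboard -> Billing' },
    403: { retry: false, reason: 'Agent disabled: enable it from the editor' },
    404: { retry: false, reason: 'Agent not found or no access: check the agent ID' },
    408: { retry: true,  reason: 'Timeout: retry, or raise the configured timeout' },
    429: { retry: true,  reason: 'Rate limited: back off and retry' },
  };
  // Treat unknown 5xx statuses as transient.
  return map[status] ?? { retry: status >= 500, reason: `Unexpected status ${status}` };
}

// Usage (err.status per OpenAI SDK error conventions):
// try {
//   await client.chat.completions.create({ model: 'my-research-agent', messages });
// } catch (err) {
//   const { retry, reason } = classifyAgentError(err.status);
//   console.error(reason);
//   if (retry) { /* back off, then retry the request */ }
// }
```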