# Create Your First Agent
Build a working AI agent with tool calls in under 10 minutes. This guide walks you through every step — from creating the agent in the dashboard to calling it from your code.
In this guide, you'll build a support assistant agent that can look up order status using a custom tool. The agent will:
- Understand natural language questions about orders
- Call a get_order_status tool when it needs order data
- Return the tool call to your code, which executes it locally and sends back the result
- The agent uses the result to compose a helpful response
## Prerequisites
**Agentlify Account & API Key**

Sign up at agentlify.co and generate an API key from Dashboard → Settings → API Keys. Your key starts with `mp_`.
**Node.js 18+**

We'll use the Agentlify JS SDK. You can also use the OpenAI SDK or raw HTTP — the API is fully OpenAI-compatible.
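Since the API is OpenAI-compatible, you can also call it over raw HTTP with no SDK at all. Below is a minimal sketch; note that the chat-completions path under the base URL is an assumption, so verify the exact endpoint in your dashboard:

```javascript
// Sketch of a raw HTTP call (no SDK). The exact chat-completions path under
// the Agentlify base URL is an assumption; check your dashboard.
function buildChatRequest(agentId, messages, apiKey) {
  return {
    url: 'https://agentlify.co/api/agents/chat/completions',
    options: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${apiKey}`, // your mp_xxx key
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ model: agentId, messages })
    }
  };
}

// Usage (Node 18+ has a global fetch):
// const { url, options } = buildChatRequest('support-assistant',
//   [{ role: 'user', content: 'Hello!' }], process.env.AGENTLIFY_API_KEY);
// const data = await (await fetch(url, options)).json();
```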
## Step 1: Create the Agent
**Go to Dashboard → Agents → New Agent**

Enter a display name like "Support Assistant". A slug is generated automatically (e.g. `support-assistant`) — this becomes your agent ID for API calls.
**Write the system prompt**

Your agent comes with a default "Main Task" step. Edit its prompt to describe what the agent does. For our example:

```
You are a helpful support assistant. When a user asks about an order, use the get_order_status tool to look up the order details, then provide a clear and friendly response.
```

**Link a router**

In the agent settings, select a router. The router determines which LLM model is used for each step. If you don't have a router yet, create one from Dashboard → Routers → New Router.
**Activate the agent**

Toggle the agent to Active in the editor header. Inactive agents cannot be called via the API.
## Step 2: Install the SDK
```bash
npm install agentlify-js
```

The SDK includes TypeScript types out of the box. You can also use the OpenAI SDK pointed at the Agentlify base URL.
## Step 3: Basic Call (No Tools)
Let's start with a simple call to verify everything works:
```js
const Agentlify = require('agentlify-js');

const client = new Agentlify({
  apiKey: process.env.AGENTLIFY_API_KEY, // mp_xxx
  routerId: process.env.AGENTLIFY_ROUTER_ID
});

const response = await client.agents.run({
  agentId: 'support-assistant',
  messages: [{ role: 'user', content: 'Hello! Can you help me?' }]
});

console.log(response.choices[0].message.content);
// "Of course! I'd be happy to help. What can I assist you with today?"
```

## Step 4: Add a Tool with a Callback
Now let's add a tool that the agent can call. Define the tool using the standard OpenAI function calling format, and add a callback that runs in your code:
```js
const response = await client.agents.run({
  agentId: 'support-assistant',
  messages: [
    { role: 'user', content: 'What is the status of order #12345?' }
  ],
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_order_status',
        description: 'Look up the status of a customer order by order ID',
        parameters: {
          type: 'object',
          properties: {
            orderId: {
              type: 'string',
              description: 'The order ID to look up'
            }
          },
          required: ['orderId']
        }
      },
      // This callback runs in YOUR code, not on the server
      callback: async ({ orderId }) => {
        // Replace with your actual database/API call
        return {
          orderId,
          status: 'shipped',
          trackingNumber: 'TRK-98765',
          estimatedDelivery: '2026-02-18'
        };
      }
    }
  ]
});

console.log(response.choices[0].message.content);
// "Order #12345 has been shipped! Your tracking number is TRK-98765
// and the estimated delivery date is February 18, 2026."
```

## What Happens Under the Hood
When you call `client.agents.run()` with tool callbacks, here's the full flow:

1. The SDK sends your messages and tool definitions to the agent API.
2. The agent's LLM decides to call `get_order_status` with `{"orderId": "12345"}`.
3. The server checkpoints the full workflow state and returns the tool call plus a `resume_id`.
4. The SDK executes your callback locally with the parsed arguments.
5. The SDK sends the `resume_id` and tool results back to the server.
6. The server restores the checkpoint, and the LLM uses the tool result to compose the final answer.
All of this is handled automatically by the SDK. If you need more control, use `client.agents.execute()` and `client.agents.resume()` to manage the loop yourself. See the Checkpoints & Resume guide for details.
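To make that concrete, here is a sketch of what a manual loop might look like. The response shapes (`resume_id`, `tool_calls`) follow the checkpoint flow described above, but the exact `execute()`/`resume()` signatures are assumptions; the Checkpoints & Resume guide has the real contract.

```javascript
// Sketch only: the argument and response shapes for execute()/resume() below
// are assumptions based on the checkpoint flow, not a documented contract.
async function runWithManualLoop(client, request, handlers) {
  let response = await client.agents.execute(request);

  // While the server is paused on a checkpoint, run the tools locally
  // and send the results back.
  while (response.resume_id) {
    const toolResults = await Promise.all(
      response.choices[0].message.tool_calls.map(async (call) => ({
        tool_call_id: call.id,
        output: await handlers[call.function.name](JSON.parse(call.function.arguments))
      }))
    );
    response = await client.agents.resume({
      resumeId: response.resume_id,
      toolResults
    });
  }
  return response; // final answer, no pending tool calls
}
```

Here `handlers` maps tool names to local functions, playing the same role as the `callback` fields in Step 4.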
## Step 5: Multiple Tools
You can provide as many tools as you need. The agent decides which ones to call based on the user's request:
```js
const response = await client.agents.run({
  agentId: 'support-assistant',
  messages: [
    { role: 'user', content: 'Cancel order #12345 and refund me' }
  ],
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_order_status',
        description: 'Look up order status',
        parameters: {
          type: 'object',
          properties: { orderId: { type: 'string' } },
          required: ['orderId']
        }
      },
      callback: async ({ orderId }) => {
        return { orderId, status: 'shipped', canCancel: false };
      }
    },
    {
      type: 'function',
      function: {
        name: 'cancel_order',
        description: 'Cancel an order and issue a refund',
        parameters: {
          type: 'object',
          properties: {
            orderId: { type: 'string' },
            reason: { type: 'string' }
          },
          required: ['orderId']
        }
      },
      callback: async ({ orderId, reason }) => {
        // Your cancellation logic
        return { orderId, cancelled: true, refundAmount: 49.99 };
      }
    }
  ],
  maxToolIterations: 5 // Default is 10
});
```

## Alternative: Using the OpenAI SDK
Agentlify is fully OpenAI-compatible. You can use the official OpenAI SDK by pointing it at the Agentlify base URL:
```js
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.AGENTLIFY_API_KEY, // mp_xxx key
  baseURL: 'https://agentlify.co/api/agents',
});

const completion = await client.chat.completions.create({
  model: 'support-assistant', // Your agent ID
  messages: [
    { role: 'user', content: 'What is the status of order #12345?' }
  ],
  tools: [{
    type: 'function',
    function: {
      name: 'get_order_status',
      description: 'Look up order status',
      parameters: {
        type: 'object',
        properties: { orderId: { type: 'string' } },
        required: ['orderId']
      }
    }
  }]
});

// With the OpenAI SDK, you handle the tool call loop manually
if (completion.choices[0].finish_reason === 'tool_calls') {
  const toolCalls = completion.choices[0].message.tool_calls;
  const resumeId = completion.resume_id; // Use this to resume
  // Execute tools, then call the resume endpoint...
}
```

The OpenAI SDK doesn't have built-in support for the `resume_id` checkpoint flow. For automatic tool handling, use the Agentlify SDK (`agentlify-js`), which handles checkpoints and tool callbacks automatically.
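As a sketch, executing the returned tool calls locally might look like the helper below. The result field names (`tool_call_id`, `output`) and the resume endpoint mentioned in the comments are assumptions; consult the Checkpoints & Resume guide for the actual contract.

```javascript
// Run each returned tool call against a local handler and collect results.
// The result field names here (tool_call_id / output) are assumptions.
async function executeToolCalls(toolCalls, handlers) {
  return Promise.all(toolCalls.map(async (call) => ({
    tool_call_id: call.id,
    output: await handlers[call.function.name](JSON.parse(call.function.arguments))
  })));
}

// Usage sketch:
// const results = await executeToolCalls(toolCalls, {
//   get_order_status: async ({ orderId }) => ({ orderId, status: 'shipped' })
// });
// Then POST the resumeId and results to the resume endpoint (path not shown
// here; see the Checkpoints & Resume guide).
```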
## Built-in Tools (No Code Required)
Agents also support built-in tools that run server-side with no code on your end. Enable them from the agent editor's Skills tab:
- **Web Search**: Search the web for real-time information
- **Calculator**: Evaluate mathematical expressions
- **Code Executor**: Run JavaScript code snippets
- **JSON Formatter**: Parse and format JSON data
Built-in tools execute automatically on the server. They don't require callbacks or checkpoints — the agent handles them internally. See Built-in Tools for the full list.
## Key Settings
A few settings to be aware of when getting started:
| Setting | Default | Description |
| --- | --- | --- |
| `planningMode` | `initial_plan` | Controls whether the agent creates an execution plan before running. `off` skips planning, `initial_plan` plans once at the start, `per_step_plan` re-plans before each step. |
| `timeout` | `120` | Maximum execution time in seconds. If the agent exceeds this, it returns a timeout error. |
| `defaultToolBackend` | `client` | Where custom tools execute by default. `client` returns tool calls to your code (with checkpoints). `webhook` calls your webhook URL server-side. |
| `maxSteps` | `10` | Maximum number of steps the agent can execute in a single run. Prevents runaway loops. |
See Agent Settings for the complete reference.
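For quick reference, here are the defaults above expressed as a plain object. This is a hypothetical shape for illustration only; in practice you configure these in the agent editor, not in code.

```javascript
// Hypothetical shape for illustration only: these settings live in the
// agent editor, not in your application code.
const agentSettingDefaults = {
  planningMode: 'initial_plan', // 'off' | 'initial_plan' | 'per_step_plan'
  timeout: 120,                 // seconds before a timeout error
  defaultToolBackend: 'client', // 'client' (checkpoints) or 'webhook'
  maxSteps: 10                  // hard cap on steps per run
};
```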
## Checklist

- Agent created with at least one step and a system prompt
- Router linked to the agent (for LLM model selection)
- Agent set to Active
- API key generated (Dashboard → Settings → API Keys)
- agentlify-js installed in your project
- Tools defined with callbacks (if using custom tools)
- Test run passes in the Dashboard Playground
## Next Steps
- **Checkpoints & Resume**: Deep dive into the checkpoint system for tool calls and approvals.
- **Custom Tools & Webhooks**: Define tools with webhook backends for server-side execution.
- **Workflow Nodes**: Build complex workflows with branching, loops, and approval gates.
- **Streaming**: Stream agent responses token-by-token for real-time UX.