
Agents as Users

At the core of Relay is a simple, powerful idea: AI agents are treated like team members inside your app.

Just like you might @mention a human colleague, assign work to them, or ask them to help — your app can do the same with AI agents. The app decides when and which agent to involve. Relay just delivers.


The Model

In Traditional Apps (Direct LLM API)

Your app calls an LLM API directly:

User: "Please summarize this"

App: calls gpt-4 API

LLM: returns response

App: shows response

The app talks directly to the LLM. There's no notion of "team member" or "agent" — just a one-off function call.
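Concretely, the one-off pattern looks something like this. This is a minimal sketch, not a real provider SDK: the endpoint URL, key, and response shape are placeholders for whichever provider you use.

```python
import json
from urllib import request

# Placeholders: substitute your provider's real endpoint and key.
API_URL = "https://api.example-llm.com/v1/chat/completions"
API_KEY = "sk-placeholder"

def build_request(user_text: str) -> dict:
    # The app hardcodes one provider's request shape.
    return {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": f"Please summarize this: {user_text}"}],
    }

def summarize(user_text: str) -> str:
    # One-off function call: no routing, no agents, no delivery layer.
    body = json.dumps(build_request(user_text)).encode()
    req = request.Request(API_URL, data=body, headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    })
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Note how the provider's request and response shapes leak into the app: swapping providers means rewriting both.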

Problems:

  • Only works with one LLM service
  • Can't easily swap to a different provider
  • Can't route to different agents
  • Hard to manage permissions across apps

With Relay (Agents as Team Members)

Your app mentions an agent by ID, just like it would a human:

User: "Hey @athena, summarize this"

App: sends event to Relay with agent_id="athena"

Relay: delivers to Athena

Athena: responds

Relay: streams reply back

App: shows response to user

The agent is a first-class citizen in your app's world. Your app decides when to involve it. Relay handles delivery.

Benefits:

  • Works with any agent (OpenAI, Anthropic, custom, etc.)
  • Easy to swap agents or add new ones
  • Agents are managed separately from apps
  • Built-in permissions, logging, audit trails
  • One integration handles many agents
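What sending such an event might look like from the app side, sketched under assumptions: the Relay endpoint URL is a placeholder, and `send_to_relay` is an illustrative helper. The event envelope (type, agent_id, thread_id, payload) matches the examples on this page.

```python
import json
from urllib import request

RELAY_URL = "https://relay.example.com/events"  # placeholder endpoint
APP_KEY = "rlk_placeholder"                     # app API key, as in the discovery example

def build_event(agent_id: str, thread_id: str, payload: dict) -> dict:
    # The envelope shape used throughout this page.
    return {"type": "event", "agent_id": agent_id,
            "thread_id": thread_id, "payload": payload}

def send_to_relay(agent_id: str, thread_id: str, payload: dict) -> dict:
    # Same call regardless of which agent or provider sits behind Relay.
    event = build_event(agent_id, thread_id, payload)
    req = request.Request(RELAY_URL, data=json.dumps(event).encode(), headers={
        "Authorization": f"Bearer {APP_KEY}",
        "Content-Type": "application/json",
    })
    with request.urlopen(req) as resp:
        return json.load(resp)
```

The app only knows the envelope; which model answers, and how, is Relay's and the agent's concern.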

How It Works in Practice

Let's walk through three real examples:

Example 1: Portal (Comment Mention)

In a task management app, a user @mentions an AI agent in a comment:

User: "@athena Can you summarize the discussion above?"

Portal's logic:

  1. User mentions @athena
  2. Portal recognizes it as an agent mention (via the allowlist)
  3. Portal sends an event to Relay:
    {
      "type": "event",
      "agent_id": "athena",
      "thread_id": "task-123",
      "payload": {
        "event": "comment.mention",
        "task_id": "task-123",
        "task_title": "Q2 Roadmap",
        "comments": [...],
        "mention": "@athena"
      }
    }
  4. Relay delivers to Athena
  5. Athena processes and replies
  6. Portal receives the reply and posts it as a comment

Key insight: Portal decides when to involve Athena. Relay just delivers.


Example 2: Flow (Task Assignment)

In a project management app, a user assigns a task to an AI agent:

New Task:
Title: "Design database schema for comments"
Assigned to: Klyve (AI)
Priority: High
Due: Next Friday

Flow's logic:

  1. User creates task and assigns to "Klyve"
  2. Flow checks: is Klyve in our allowlist? (Yes)
  3. Flow sends event to Relay:
    {
      "type": "event",
      "agent_id": "klyve",
      "thread_id": "task-789",
      "payload": {
        "event": "task.assign",
        "task_id": "task-789",
        "task_title": "Design database schema for comments",
        "description": "We need a scalable schema...",
        "assigned_to": "klyve",
        "due_date": "2026-04-18",
        "priority": "high"
      }
    }
  4. Klyve receives, processes, and returns a design proposal
  5. Flow posts the reply as a task comment or attachment

Key insight: Flow decides when to assign work to Klyve. Relay just ensures delivery.
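Flow's side of steps 1-3 can be sketched as a small builder that maps its internal task record onto the event envelope. The `task` dict shape here is a hypothetical internal representation, not part of Relay:

```python
def task_assign_event(task: dict) -> dict:
    # Maps Flow's internal task record (hypothetical shape) onto the
    # event envelope shown above. The task id doubles as the thread id,
    # so follow-up messages about the same task share one conversation.
    return {
        "type": "event",
        "agent_id": task["assignee_id"],
        "thread_id": task["id"],
        "payload": {
            "event": "task.assign",
            "task_id": task["id"],
            "task_title": task["title"],
            "description": task["description"],
            "assigned_to": task["assignee_id"],
            "due_date": task["due_date"],
            "priority": task["priority"],
        },
    }
```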


Example 3: Academy (Quiz Generation)

In a learning platform, an instructor wants the AI to generate quiz questions:

Instructor: "Generate 5 quiz questions for the Python Functions module"

Academy's logic:

  1. Instructor clicks "Generate with AI"
  2. Academy sends event to Relay:
    {
      "type": "event",
      "agent_id": "athena",
      "thread_id": "course-101:module-functions",
      "payload": {
        "event": "quiz.generate",
        "course_id": "course-101",
        "module": "Python Basics",
        "topic": "Functions",
        "difficulty": "intermediate",
        "num_questions": 5,
        "instructions": "Create multiple choice questions testing understanding of..."
      }
    }
  3. Athena generates quiz questions (as JSON or markdown)
  4. Academy parses and displays them

Key insight: Academy decides what to ask the AI to generate. Relay delivers the request.
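Since step 3 says the agent may answer as JSON or markdown, step 4's parsing benefits from a defensive approach. A sketch, where `parse_quiz_reply` is an illustrative helper rather than part of Relay:

```python
import json

def parse_quiz_reply(reply_text: str) -> list:
    # Try structured JSON first; fall back to treating each non-empty
    # line of a markdown reply as a question stem.
    try:
        data = json.loads(reply_text)
        if isinstance(data, dict):
            return data.get("questions", [])
        return data  # agent returned a bare JSON list
    except json.JSONDecodeError:
        return [{"question": line.strip()}
                for line in reply_text.splitlines() if line.strip()]
```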


App-Side Filtering: Only AI-Targeted Events Go to Relay

Here's a critical design principle: not all events go through Relay.

Your app has all the intelligence. It decides which events involve agents.

For example, in Portal:

  • User mentions @human-colleague → No Relay (normal mention)
  • User mentions @athena → Yes, Relay (AI agent)
  • User mentions @project-admin-bot → Maybe Relay (depends on config)

The app's code decides:

def handle_mention(user, mentioned_entity, content):
    # Check if mentioned_entity is an AI agent
    if is_ai_agent(mentioned_entity):
        # Send to Relay
        send_to_relay(
            agent_id=mentioned_entity.id,
            payload={
                "event": "comment.mention",
                "content": content,
                ...
            }
        )
    else:
        # Handle as normal user mention
        notify_user(mentioned_entity)

This keeps your app in control. Relay doesn't make decisions about routing or filtering. The app does.


Agent Discovery: How Apps Know Which Agents Exist

Your app needs to know:

  • What agents are available?
  • Which ones am I allowlisted for?
  • What do they do?

Relay provides a discovery endpoint:

GET /organizations/{org_id}/agents
Authorization: Bearer rlk_...

Response:

{
  "agents": [
    {
      "id": "athena",
      "name": "Athena",
      "description": "General-purpose AI assistant",
      "status": "online",
      "allowlisted": true
    },
    {
      "id": "klyve",
      "name": "Klyve",
      "description": "Code and architecture specialist",
      "status": "online",
      "allowlisted": true
    },
    {
      "id": "future-agent",
      "name": "Future Agent",
      "description": "Not yet available",
      "status": "offline",
      "allowlisted": false
    }
  ]
}

Your app can use this to:

  • Build @mention autocomplete (show only allowlisted agents)
  • Display agent status in the UI ("Athena is online")
  • Decide whether to show "assign to AI" options
  • Show help text about what each agent does
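For instance, building the @mention autocomplete list from the discovery response might look like this sketch (`mentionable_agents` is an illustrative helper, not a Relay API):

```python
def mentionable_agents(discovery_response: dict) -> list:
    # Only allowlisted agents appear in autocomplete; status is kept
    # so the UI can show "Athena is online" next to each entry.
    return [
        {"id": a["id"], "label": f'@{a["id"]}', "online": a["status"] == "online"}
        for a in discovery_response["agents"]
        if a["allowlisted"]
    ]
```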

The Intelligence Stays in Your App

This is the biggest difference from a monolithic AI system:

Relay does NOT:

  • Decide which agent to use
  • Decide when to involve agents
  • Filter or re-route events
  • Apply rate limits per agent per app
  • Make intelligence-based decisions

Your app DOES:

  • Decide when to invoke agents (user mentions, assign task, etc.)
  • Choose which agent to use (maybe based on agent capabilities or user preference)
  • Filter events (only AI-targeted ones go to Relay)
  • Handle agent responses (route, format, display)
  • Manage conversation context and state
  • Apply business logic (rate limiting, permissions, etc.)

This keeps your app flexible and in control.


Why This Model?

1. Apps Understand Their Users Better Than Relay

Your app knows:

  • Which users should involve AI (power users, paying customers, etc.)
  • Which AI capabilities make sense in each context (summarization, generation, etc.)
  • What payload structure the agent needs
  • Where and how to display the response

Relay doesn't know any of this. It shouldn't have to.

2. Multiple Apps, Multiple Patterns

Portal might invoke agents on @mentions. Flow might assign tasks. Academy might generate content. Each app has its own user experience and business logic.

If Relay tried to enforce a single pattern, it would be too rigid.

3. Decoupling Intelligence from Delivery

Separating "when to involve an agent" (app's job) from "how to deliver to the agent" (Relay's job) keeps both concerns clean.

Your app can evolve its AI strategy without changing Relay. Relay can scale and improve without affecting your app.

4. Easier for Apps to Integrate

Apps don't need to understand AI or LLMs. They just need to:

  1. Identify when to involve an agent (user action, business rule, etc.)
  2. Send the relevant context in the payload
  3. Receive and display the response

The intelligence stays in the agent. The delivery stays in Relay.


Best Practices

1. Use Descriptive Agent IDs

Instead of: agent-1, agent-2
Use: athena, klyve, code-review-bot

Makes your code readable and helps with discovery.

2. Design Payloads Around Your Domain

Don't try to be generic. Send what your agent needs to be useful:

  • Portal sends: task title, comments, mentioned user, context
  • Flow sends: task description, requirements, assigned person, deadline
  • Academy sends: course, module, topic, difficulty, instructions

3. Check Allowlist Before Invoking

Before sending to Relay, verify the agent is allowlisted:

available_agents = get_agents_for_app()
if mentioned_agent.id in available_agents:
    send_to_relay(...)
else:
    show_error("This agent is not available in your workspace")

4. Handle Offline Agents Gracefully

Agents can go offline. Your app should:

  • Show user: "Agent is currently offline"
  • Queue the request locally or queue via Relay
  • Let user know when it will be processed
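A sketch of that fallback, assuming an in-memory queue; a real app would persist the queue, and could instead hand the event to Relay for queueing:

```python
def dispatch_or_queue(agent: dict, event: dict, queue: list) -> str:
    # Graceful degradation: deliver when the agent is online,
    # otherwise queue locally and tell the user.
    if agent["status"] == "online":
        # send_to_relay(event)  # normal delivery path (helper not shown here)
        return "sent"
    queue.append(event)
    return "queued: agent is currently offline"
```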

5. Log Agent Interactions

Track:

  • When agents are invoked
  • What payload was sent
  • What response was received
  • How long it took
  • Any errors

This helps with debugging and audit trails.
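One way to capture all five points is a single structured record per invocation. A sketch; where you ship the record (stdout, a log service, a database) is up to your logging pipeline:

```python
import json
import time

def log_agent_interaction(agent_id, payload, response, started_at, error=None):
    # One record per invocation: who was invoked, what was sent,
    # what came back, how long it took, and any error.
    record = {
        "agent_id": agent_id,
        "payload": payload,
        "response": response,
        "duration_ms": round((time.time() - started_at) * 1000),
        "error": error,
    }
    print(json.dumps(record))  # or ship to your logging pipeline
    return record
```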


Summary

The "agents as users" model means:

✓ Your app decides when to involve agents
✓ Your app chooses which agent to use
✓ Your app defines the payload structure
✓ Your app handles the response
✓ Relay just delivers

This keeps intelligence in your app, delivery in Relay, and both systems simple and flexible.


Next Steps