
Contributing

Practical guide for extending AgentCore: new channels, LLM providers, namespaces, and intent training data.


Adding a New Channel

Channels are Fastify plugins backed by BullMQ queues. WhatsApp and Telegram are the reference implementations — copy their structure, adapt the transport.

1. Create the adapter directory

src/channels/<channel-name>/
├── index.ts                   # Service class + factory
├── plugin.ts                  # Fastify plugin (registers routes + starts service)
├── handler.ts                 # Transport normalization + AgentTask creation
├── sender.ts                  # Outbound message formatting + delivery
├── queue.ts                   # BullMQ inbound/outbound workers
├── conversation-repository.ts # Prisma conversation persistence
└── types.ts                   # Channel-specific TypeScript types

2. Implement the handler

The handler should stay transport-focused. For every inbound message:

  1. Find or create a User and Conversation from the channel's user identifier
  2. Store the inbound user Message
  3. Create an AgentTask
  4. Enqueue a job for the shared agent-tasks worker
  5. Provide an outbound dependency so the worker can send auto-bypassed or approved replies

The shared worker owns profile injection, RAG, generation, injection routing, persona escalation, intent classification, confidence fallback, trust matrix, HITL approval, and outbound routing. See ADR-001.
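The five steps above can be sketched as one transport-agnostic function. Everything below is a hypothetical simplification (type names, repository methods, queue shape are invented for illustration); the real handler works against Prisma models and a BullMQ queue:

```typescript
// Hypothetical, simplified types — the real ones live in the channel's
// types.ts and the shared Prisma schema.
interface InboundMessage { channelUserId: string; text: string; }
interface AgentTask { conversationId: string; messageId: string; }

interface ConversationStore {
  findOrCreate(channelUserId: string): { userId: string; conversationId: string };
  storeMessage(conversationId: string, text: string): { messageId: string };
}

interface TaskQueue { add(name: string, task: AgentTask): void; }

// Steps 1–4: persist, create the task, enqueue — the handler does nothing else.
function handleInbound(
  msg: InboundMessage,
  conversations: ConversationStore,
  agentTasks: TaskQueue,
): AgentTask {
  const { conversationId } = conversations.findOrCreate(msg.channelUserId); // step 1
  const { messageId } = conversations.storeMessage(conversationId, msg.text); // step 2
  const task: AgentTask = { conversationId, messageId }; // step 3
  agentTasks.add('agent-task', task); // step 4
  return task;
}
```

The outbound dependency from step 5 is deliberately left out here; it is injected the same way, so the worker can call back into the channel's sender.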

3. Register the plugin in src/app.ts

The WhatsApp and Telegram plugins are registered like this (lines 160–161 of app.ts):

import { whatsAppPlugin } from './channels/whatsapp/plugin.ts';
import { telegramPlugin } from './channels/telegram/plugin.ts';

// ...
void app.register(whatsAppPlugin, { prefix: '/api/v1' });
void app.register(telegramPlugin, { prefix: '/api/v1' });

Add your channel in the same block:

import { slackPlugin } from './channels/slack/plugin.ts';

// ...
void app.register(slackPlugin, { prefix: '/api/v1' });

4. Add the channel enum value in prisma/schema.prisma

enum ConversationChannel {
  web
  whatsapp
  telegram
  slack // ← add here
  email
}

Then migrate:

npx prisma migrate dev --name add_slack_channel

5. Add environment variables

Add channel-specific env vars to src/config.ts (Zod schema) and .env.example.

Example — adding a Slack channel:

// src/config.ts — inside envSchema
SLACK_BOT_TOKEN: z.string().min(1).default('test-slack-bot-token'),
SLACK_SIGNING_SECRET: z.string().min(1).default('test-signing-secret'),
SLACK_APP_TOKEN: z.string().optional(),

Diff walkthrough — what changes for a new channel

The minimal diff when adding a Slack channel looks like this:

src/channels/slack/plugin.ts (new file — mirrors telegram/plugin.ts):

export const slackPlugin: FastifyPluginAsync = async (app) => {
  const redis = new Redis(config.REDIS_URL);
  const { PrismaConversationRepository } = await import('./conversation-repository.ts');

  const defaultDept = await prisma.department.findFirst({ select: { id: true } });
  const conversations = defaultDept
    ? new PrismaConversationRepository(prisma, defaultDept.id)
    : undefined;

  const slackService = new SlackService({
    config: {
      botToken: config.SLACK_BOT_TOKEN,
      signingSecret: config.SLACK_SIGNING_SECRET,
      redisUrl: config.REDIS_URL,
    },
    redis,
    ...(conversations ? { conversations } : {}),
    logger: app.log as unknown as import('pino').Logger,
    memoryExtractionQueue: app.memoryExtractionQueue,
    memoryExtractEveryN: config.MEMORY_EXTRACT_EVERY_N_MESSAGES,
    agentTasksQueue: app.agentTasksQueue,
  });

  await slackService.start();
  app.addHook('onClose', async () => { await slackService.stop(); await redis.quit(); });

  app.post('/slack/events', ..., async (req, reply) => {
    void slackService.handleEvent(req.body).catch(...);
    return reply.send({ ok: true });
  });
};

src/app.ts — add two lines:

+import { slackPlugin } from './channels/slack/plugin.ts';
...
+void app.register(slackPlugin, { prefix: '/api/v1' });

prisma/schema.prisma — one line in the enum:

 enum ConversationChannel {
   telegram
+  slack
 }

src/config.ts — new env var entries:

+  SLACK_BOT_TOKEN: z.string().min(1).default('test-slack-bot-token'),
+  SLACK_SIGNING_SECRET: z.string().min(1).default('test-signing-secret'),

Adding a New LLM Provider

AgentCore uses the OpenAI SDK, which supports any OpenAI-compatible API endpoint. The supported env vars are defined in src/config.ts:

| Variable | Purpose | Default |
| --- | --- | --- |
| OPENAI_API_KEY | API key (required) | — |
| OPENAI_BASE_URL | Override base URL for compatible providers | OpenAI default |
| OPENAI_MODEL | Chat completion model | gpt-4o |
| OPENAI_EMBEDDING_MODEL | Embedding model | text-embedding-3-small |
| ANTHROPIC_API_KEY | Anthropic Claude (native SDK, optional) | — |

Ollama (local, no API key needed)

OPENAI_API_KEY=ollama
OPENAI_BASE_URL=http://localhost:11434/v1
OPENAI_MODEL=llama3.1
OPENAI_EMBEDDING_MODEL=nomic-embed-text

Install and pull the model first:

curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.1
ollama pull nomic-embed-text

Note: Ollama embedding models produce 768-dimensional vectors (nomic-embed-text) rather than 1536. Update the pgvector column dimension in prisma/schema.prisma if you switch embedding providers.
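For example, switching to nomic-embed-text means shrinking the vector column. The model and field names below are illustrative, not the actual schema:

```prisma
model DocumentChunk {
  id        String @id @default(uuid())
  content   String
  // was Unsupported("vector(1536)") for text-embedding-3-small
  embedding Unsupported("vector(768)")?
}
```

After migrating, re-embed existing documents: vectors produced by different models are not comparable, even when the dimensions happen to match.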

Anthropic Claude (via OpenAI-compatible proxy)

The simplest approach uses an OpenAI-compatible proxy (e.g. LiteLLM):

OPENAI_API_KEY=<your-anthropic-api-key>
OPENAI_BASE_URL=http://localhost:4000/v1 # LiteLLM proxy
OPENAI_MODEL=claude-3-7-sonnet-20250219

Alternatively, ANTHROPIC_API_KEY is already declared in src/config.ts as an optional var. To use the native Anthropic SDK, add a provider-selection branch in src/knowledge/rag.ts that instantiates Anthropic when ANTHROPIC_API_KEY is set.
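A minimal shape for that branch might look like this. The config fields match src/config.ts, but the function and its return type are assumptions for illustration, not the actual rag.ts code:

```typescript
type Provider = 'anthropic' | 'openai';

interface ProviderConfig {
  OPENAI_API_KEY: string;
  ANTHROPIC_API_KEY?: string; // optional in src/config.ts
}

// Prefer the native Anthropic SDK when its key is present,
// otherwise fall back to the OpenAI-compatible client.
function selectProvider(config: ProviderConfig): Provider {
  return config.ANTHROPIC_API_KEY ? 'anthropic' : 'openai';
}
```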

vLLM (self-hosted, OpenAI-compatible)

OPENAI_API_KEY=<any-non-empty-string>
OPENAI_BASE_URL=http://your-vllm-host:8000/v1
OPENAI_MODEL=mistralai/Mistral-7B-Instruct-v0.3
OPENAI_EMBEDDING_MODEL=BAAI/bge-m3

Start vLLM:

python -m vllm.entrypoints.openai.api_server \
  --model mistralai/Mistral-7B-Instruct-v0.3 \
  --host 0.0.0.0 --port 8000

Azure OpenAI

Azure uses a different endpoint format and requires api-version in every request. Use the openai SDK's Azure support:

OPENAI_API_KEY=<your-azure-api-key>
OPENAI_BASE_URL=https://<resource-name>.openai.azure.com # resource root; the SDK appends the deployment path
OPENAI_MODEL=gpt-4o # must match your Azure deployment name
OPENAI_EMBEDDING_MODEL=text-embedding-3-small

In src/knowledge/rag.ts, instantiate the client with Azure credentials:

import { AzureOpenAI } from 'openai';

// endpoint is the resource root; the client routes each request to the
// deployment matching the request's model name (or pass `deployment` explicitly)
const openai = new AzureOpenAI({
  apiKey: config.OPENAI_API_KEY,
  endpoint: config.OPENAI_BASE_URL,
  apiVersion: '2024-12-01-preview',
});

Custom (non-OpenAI-compatible) provider

For providers without an OpenAI-compatible API:

  1. Create a class implementing the same interface as OpenAiRagPipeline in src/knowledge/rag.ts
  2. Add provider selection logic in the pipeline factory
  3. Ensure the embedding model produces 1536-dimensional vectors (the pgvector column default) — or update the schema and re-migrate
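Steps 1 and 2 could look roughly like this. The RagPipeline interface is a guess at what OpenAiRagPipeline exposes, so check src/knowledge/rag.ts for the real method signatures:

```typescript
// Hypothetical pipeline interface — mirror the actual OpenAiRagPipeline methods.
interface RagPipeline {
  embed(texts: string[]): Promise<number[][]>;
  generate(prompt: string, context: string[]): Promise<string>;
}

// Step 2: a factory with a provider-selection branch. Providers register a
// constructor; unknown names fail fast at startup.
function createPipeline(
  provider: string,
  registry: Record<string, () => RagPipeline>,
): RagPipeline {
  const factory = registry[provider];
  if (!factory) throw new Error(`Unknown LLM provider: ${provider}`);
  return factory();
}
```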

Adding a New Namespace (Department)

A namespace is a departmental AI configuration: its own system prompt, persona, knowledge base, and intent examples. This checklist covers the full path from zero to first bot response.

New code that reads or mutates department-scoped data must follow the forDepartment() route pattern.
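The intent of that pattern, illustrated with a toy in-memory repository (the real forDepartment() helper's signature may differ):

```typescript
interface Namespace { id: string; departmentId: string; name: string; }

// Every read goes through a department-scoped view, so cross-department
// leakage is impossible by construction rather than by per-query discipline.
class NamespaceRepo {
  constructor(private rows: Namespace[]) {}

  forDepartment(departmentId: string): NamespaceRepo {
    return new NamespaceRepo(this.rows.filter((r) => r.departmentId === departmentId));
  }

  list(): Namespace[] {
    return this.rows;
  }
}
```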

Step 1 — Create or identify the department

Departments can be provisioned through seed data or the department API. Save the department id as DEPT_ID.

curl -X POST http://localhost:3000/api/v1/departments \
  -H 'Authorization: Bearer <admin-token>' \
  -H 'Content-Type: application/json' \
  -d '{"name": "Finance", "slug": "finance", "color": "green"}'

Step 2 — Create the namespace

curl -X POST http://localhost:3000/api/v1/namespaces \
  -H 'Authorization: Bearer <admin-token>' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "finance",
    "departmentId": "<DEPT_ID>",
    "systemPrompt": "You are a financial assistant for Acme Corp employees. Answer questions about payroll, expenses, and budget processes. You do not provide tax advice.",
    "persona": {
      "language": "en",
      "style": { "formality": 80 },
      "boundaries": [
        "Do not provide tax advice",
        "Do not share salary information of other employees"
      ]
    }
  }'

Save the returned id as NS_ID.

Step 3 — Create a knowledge base

curl -X POST http://localhost:3000/api/v1/knowledge/bases \
  -H 'Authorization: Bearer <admin-token>' \
  -H 'Content-Type: application/json' \
  -d '{"departmentId": "<DEPT_ID>", "name": "Finance KB"}'

Step 4 — Upload documents

Upload PDFs, DOCX, or TXT files. The ingestion pipeline chunks, embeds, and indexes them automatically:

curl -X POST http://localhost:3000/api/v1/knowledge/upload \
  -H 'Authorization: Bearer <admin-token>' \
  -F 'knowledgeBaseId=<KB_ID>' \
  -F 'title=Expense policy 2024' \
  -F 'file=@./docs/expense-policy-2024.pdf'

For bulk uploads, repeat the command for each file or write a shell loop:

for f in ./finance-docs/*.pdf; do
  curl -X POST http://localhost:3000/api/v1/knowledge/upload \
    -H 'Authorization: Bearer <admin-token>' \
    -F 'knowledgeBaseId=<KB_ID>' \
    -F "title=$(basename "$f")" \
    -F "file=@$f"
done

Step 5 — Customize the persona (optional)

Update the namespace's system prompt and formality after reviewing initial responses:

curl -X PATCH http://localhost:3000/api/v1/namespaces/<NS_ID> \
  -H 'Authorization: Bearer <admin-token>' \
  -H 'Content-Type: application/json' \
  -d '{"systemPrompt": "...", "persona": {"style": {"formality": 30}}}'

Step 6 — Seed intent examples

Cold-start the intent classifier with known phrase examples (see Adding New Intent Examples below).

Step 7 — Test with a channel

Send a test message through any active channel (WhatsApp, Telegram) or via the API. Watch the application logs for intent classification scores and confidence values.

The namespace is live when:

  • A message routes to the correct namespace based on the department assignment
  • The RAG pipeline returns a grounded response citing knowledge-base content
  • Low-confidence responses correctly trigger HITL review

Adding New Intent Examples

Intent examples are used by the vector-similarity classifier in src/knowledge/intent-classifier.ts. More high-quality examples → better classification accuracy.
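Conceptually, the classifier scores each stored example by cosine similarity against the query embedding and picks the best-scoring intent. The sketch below is a simplification; see src/knowledge/intent-classifier.ts for the real implementation (which queries pgvector rather than scanning in memory):

```typescript
interface IntentExample { intent: string; embedding: number[]; }

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Nearest-example classification: the intent of the closest example wins.
function classify(query: number[], examples: IntentExample[]): { intent: string; score: number } {
  let best = { intent: 'unknown', score: -1 };
  for (const ex of examples) {
    const score = cosine(query, ex.embedding);
    if (score > best.score) best = { intent: ex.intent, score };
  }
  return best;
}
```

This is also why more varied examples help: each one adds a point in embedding space that real user phrasings can land near.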

Via API (single phrase)

curl -X POST http://localhost:3000/api/v1/intents/examples \
  -H 'Authorization: Bearer <token>' \
  -H 'Content-Type: application/json' \
  -d '{
    "namespaceId": "<NS_ID>",
    "intentName": "leave_policy",
    "phrase": "How many vacation days am I entitled to?"
  }'

The embedding is generated and stored automatically.

Via bulk seed script

Use scripts/seed-intents.ts to import from a JSON file. The script accepts two formats:

Format 1 — intent map:

{
  "intentMap": {
    "leave_policy": ["How many vacation days?", "When does PTO reset?"],
    "expense_reimbursement": {
      "examplePhrases": ["How do I submit expenses?", "What's the reimbursement limit?"]
    }
  }
}

Format 2 — golden Q&A:

{
  "goldenQA": [
    { "intent": "leave_policy", "question": "Can I carry over unused vacation?" },
    { "intent": "payroll", "question": "When is payday?" }
  ]
}

Run the seeder:

npx tsx scripts/seed-intents.ts \
  --input ./data/finance-intents.json \
  --namespace finance

The script batches embeddings (32 phrases per API call), upserts rows (no duplicates), and shows a progress bar.
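The batching behavior is easy to picture. The helper below is a sketch, with the 32-phrase batch size taken from the script's description, not its actual code:

```typescript
// Split phrases into batches of `size` for the embedding API;
// the final batch holds whatever remains.
function chunk<T>(items: T[], size = 32): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```

Each batch then becomes one embeddings request, and the resulting vectors are upserted alongside their phrases.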

From real conversation data

The best intent examples come from actual user conversations. Use scripts/analyze-chats.ts to extract them automatically:

# Analyze a folder of conversation exports
npx tsx scripts/analyze-chats.ts \
  --input ./chats/finance-q1/ \
  --output ./analysis/finance-q1/ \
  --format json \
  --batch-size 5 \
  --namespace finance

The analyzer:

  1. Reads conversations in JSON, CSV, or TXT format
  2. Runs an LLM analysis per conversation to extract client intent + example phrases
  3. Outputs intentMap.json and goldenQA.json to the output directory
  4. Seeds the extracted phrases directly into the intent_examples pgvector table

Typical workflow for a new namespace:

# 1. Export past conversations from your CRM / chat platform
cp /path/to/crm-export/*.json ./chats/finance-q1/

# 2. Run the analyzer
npx tsx scripts/analyze-chats.ts \
  --input ./chats/finance-q1/ \
  --output ./analysis/finance-q1/ \
  --namespace finance

# 3. Review the output
cat ./analysis/finance-q1/intentMap.json | jq 'keys'

# 4. Edit / curate if needed, then re-seed
npx tsx scripts/seed-intents.ts \
  --input ./analysis/finance-q1/intentMap.json \
  --namespace finance

Quality tips:

  • Aim for ≥20 example phrases per intent for reliable classification
  • Vary phrasing — include formal and informal variants
  • Remove duplicates and near-duplicates before seeding
  • Re-run the seed script after curating; it upserts so re-running is safe
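A cheap way to do the near-duplicate pass before seeding; the normalization rules here are one possible example, not what the seeder itself does (serious curation might also compare embedding distances):

```typescript
// Keep the first occurrence of each phrase, treating phrases that differ only
// in casing, punctuation, or whitespace as duplicates.
function dedupePhrases(phrases: string[]): string[] {
  const seen = new Set<string>();
  const kept: string[] = [];
  for (const phrase of phrases) {
    const key = phrase
      .toLowerCase()
      .replace(/[^\p{L}\p{N}\s]/gu, '') // drop punctuation, keep letters/digits
      .replace(/\s+/g, ' ')
      .trim();
    if (!seen.has(key)) {
      seen.add(key);
      kept.push(phrase);
    }
  }
  return kept;
}
```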

Code Quality

Before submitting changes:

npm run typecheck   # TypeScript strict mode check
npm run lint:fix    # ESLint auto-fix
npm run format      # Prettier formatting
npm test            # Vitest unit + integration tests

Project Conventions

  • TypeScript strict mode — all code is typed
  • Zod validation — all env vars and API inputs validated with Zod
  • Fastify plugins — route groups are Fastify plugins registered in src/app.ts
  • BullMQ — async work goes through queues (not inline in request handlers)
  • Prisma — all database access through Prisma client
  • ESM — project uses ES modules ("type": "module")