Overview

KaireonAI includes a built-in AI assistant that understands the platform and can help you build, configure, and troubleshoot your decisioning setup. Open it with Cmd+I (macOS) or Ctrl+I (Windows/Linux).

Capabilities

The assistant has 105+ tools organized by module:
| Module | Tools | What It Can Do |
| --- | --- | --- |
| Data | 10 | Create schemas, add fields, set up connectors, build pipelines, test connections |
| Studio | 16 | Create offers, configure channels, set up qualification rules, manage contact policies, triggers, guardrails, outcome types |
| V2 Pipeline | 9 | Create V2 flows, add/remove/update pipeline nodes, list scoring/ranking/allocation methods |
| Algorithms | 8 | Train models, configure experiments, manage predictors, update model config |
| Content | 10 | Generate creative copy, manage content items and templates, sync CMS sources |
| Metrics | 5 | Create behavioral metrics, preview values, trigger computation, create metric rules |
| Dashboards | 2 | Query metrics, list alerts |
| Intelligence | 15 | Explain decisions, trace journeys, analyze policy conflicts, simulate changes, detect model drift, run health checks |
| Mutations | 8 | Update offers, channels, contact policies, qualification rules; delete entities; publish flows |
| Docs | 1 | Search platform documentation (hybrid local + Mintlify MCP) |

V2 Composable Pipeline

The assistant fully supports the V2 composable pipeline with 14 node types in 3 phases, 4 scoring methods (including external endpoints) with channel overrides and champion/challenger, 4 ranking algorithms, and Hungarian optimal allocation for multi-placement scenarios. Ask the assistant:
  • “What scoring methods are available?” — Lists priority_weighted, propensity, and formula with details
  • “Create a V2 flow with diversity ranking” — Creates a 4-node pipeline with your specified config
  • “Add an enrich node to my flow” — Adds enrichment at the correct pipeline position
  • “Set up multi-placement with Hungarian allocation” — Configures group node with optimal strategy
See the AI Assistant feature page for full V2 pipeline documentation.

Context Routing

The assistant automatically adapts to your current page. When you are on the Data module, it prioritizes data tools. On the Studio Decision Flows page, it includes the full V2 pipeline toolset. On Algorithms pages, model management and intelligence tools are added. This makes interactions more relevant and efficient. The general fallback (any page not matching a specific route) provides access to all 105+ tools.
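The routing described above can be pictured as a small lookup: match the current route against a list of patterns and return that route's prioritized modules, falling back to the full toolset. This is an illustrative sketch only — the function name, route patterns, and module identifiers are assumptions, not the platform's actual implementation.

```typescript
// Hypothetical route-to-toolset mapping; module names mirror the
// capabilities table above, but patterns and names are illustrative.
type Module = "data" | "studio" | "pipeline" | "algorithms" | "intelligence";

const ROUTE_TOOLSETS: Array<[RegExp, Module[]]> = [
  [/^\/data/, ["data"]],
  [/^\/studio\/flows/, ["studio", "pipeline"]],
  [/^\/algorithms/, ["algorithms", "intelligence"]],
];

const ALL_MODULES: Module[] = ["data", "studio", "pipeline", "algorithms", "intelligence"];

// Returns the prioritized toolset for the current page; any route that
// matches no pattern gets the general fallback (all modules).
function toolsetForRoute(route: string): Module[] {
  for (const [pattern, modules] of ROUTE_TOOLSETS) {
    if (pattern.test(route)) return modules;
  }
  return ALL_MODULES;
}
```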

Guided Autonomy

Write operations follow a preview → approve → execute pattern:
  1. The assistant shows what it plans to create or change
  2. You review and click Approve or Cancel
  3. Only after approval is the mutation executed, via confirmMutation
Read operations and analysis run immediately without approval.
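The approval gate amounts to staging each proposed mutation and running it only on explicit confirmation. The sketch below follows the steps listed above; the `PendingMutation` shape is an assumption, and only the `confirmMutation` name comes from the text.

```typescript
// Illustrative preview/approve/execute gate; not the platform's code.
type PendingMutation = { id: string; summary: string; execute: () => string };

const pending = new Map<string, PendingMutation>();

// Step 1: the assistant registers a preview instead of mutating directly.
function previewMutation(m: PendingMutation): string {
  pending.set(m.id, m);
  return `Proposed change: ${m.summary}`;
}

// Steps 2-3: nothing runs until the user approves; a cancel (or an
// unknown id) executes nothing.
function confirmMutation(id: string, approved: boolean): string {
  const m = pending.get(id);
  pending.delete(id);
  if (!m) return "no such pending mutation";
  return approved ? m.execute() : "cancelled";
}
```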

Conversation Persistence

Conversations are stored in the database and can be resumed:
  • Auto-created on first message if no conversationId is provided
  • Auto-titled from the first user message
  • Full message history (including tool invocations) is loaded when resuming
  • Up to 50 recent conversations are shown per tenant
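Auto-titling from the first user message is typically just a collapse-and-truncate pass. A minimal sketch, assuming a 40-character cap (the actual limit is not documented here):

```typescript
// Hypothetical auto-title helper: collapse whitespace in the first
// user message and truncate with an ellipsis when it runs long.
function autoTitle(firstMessage: string, maxLen = 40): string {
  const collapsed = firstMessage.replace(/\s+/g, " ").trim();
  if (collapsed.length <= maxLen) return collapsed;
  return collapsed.slice(0, maxLen - 1).trimEnd() + "…";
}
```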

Supported Providers

Configure your preferred LLM provider in Settings > AI Configuration:
| Provider | Default Model | Notes |
| --- | --- | --- |
| Google | Gemini 2.5 Flash | Default; good balance of speed and quality |
| Anthropic | Claude Sonnet / Opus | Strong reasoning; best for complex analysis |
| OpenAI | GPT-4o | Widely available |
| Amazon Bedrock | Configurable | Enterprise; uses IAM auth or access keys |
| Ollama | Any local model | Local at localhost:11434; no API costs |
| LM Studio | Any local model | Local at localhost:1234/v1; no API costs |
Configuration priority: Database settings > Environment variables (AI_PROVIDER, AI_MODEL, AI_API_KEY, AI_BASE_URL) > Defaults (Google Gemini 2.5 Flash).
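The resolution order can be expressed as a cascade of nullish fallbacks. The environment variable names below are the documented ones; the function itself and its field names are an illustrative sketch.

```typescript
// Sketch of the documented priority: database settings, then
// environment variables, then the Google Gemini 2.5 Flash default.
interface AIConfig { provider: string; model: string; apiKey?: string; baseUrl?: string }

function resolveAIConfig(
  db: Partial<AIConfig>,
  env: Record<string, string | undefined>,
): AIConfig {
  return {
    provider: db.provider ?? env.AI_PROVIDER ?? "google",
    model: db.model ?? env.AI_MODEL ?? "gemini-2.5-flash",
    apiKey: db.apiKey ?? env.AI_API_KEY,
    baseUrl: db.baseUrl ?? env.AI_BASE_URL,
  };
}
```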

Security

  • Prompt injection defense — 7 injection patterns detected and filtered; messages truncated to 10,000 characters
  • PII redaction — Emails, SSN, credit cards, phone numbers, API keys, connection strings, and sensitive field names are stripped from all tool outputs before reaching the LLM
  • RBAC enforcement — Chat requires admin/editor/viewer; mutations inherit the user’s role
  • Rate limiting — 30 requests per minute per user; the limiter fails closed, so requests are rejected rather than allowed through if the limit cannot be checked
  • Tenant scoping — tenantId is auto-injected into every tool call
  • Audit logging — All AI interactions logged with module, route, message count, and conversation ID
  • Max tool steps — Limited to 5 sequential tool calls per message
  • Request timeout — 60 seconds
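The PII redaction step above amounts to a pattern-replacement pass over every tool output before it reaches the LLM. This sketch covers only a few of the listed classes (emails, SSNs, card-like digit runs); the platform's real rules are broader, and these regexes are illustrative simplifications.

```typescript
// Illustrative redaction pass; patterns are simplified examples, not
// the platform's production rules.
const REDACTIONS: Array<[RegExp, string]> = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[email]"],        // email addresses
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[ssn]"],            // US SSN format
  [/\b(?:\d[ -]?){13,16}\b/g, "[card]"],          // card-like digit runs
];

function redact(text: string): string {
  return REDACTIONS.reduce((t, [re, label]) => t.replace(re, label), text);
}
```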

API Integration

The AI assistant is available via REST API:
curl -N -X POST https://playground.kaireonai.com/api/v1/ai/chat \
  -H "Content-Type: application/json" \
  -H "X-Tenant-Id: my-tenant" \
  -d '{
    "messages": [{ "role": "user", "content": "List my active offers" }],
    "route": "/studio"
  }'
The response is a Server-Sent Events stream in Vercel AI SDK wire format. The X-Conversation-Id header in the response can be used for follow-up messages.
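Consuming the stream means reading SSE `data:` lines and accumulating the text events. The `data:` framing and `[DONE]` sentinel are generic SSE conventions, but the event payload shape assumed here (`{ type: "text-delta", delta }`) is an assumption about the Vercel AI SDK wire format — verify it against the SDK's stream protocol documentation before relying on it.

```typescript
// Sketch of collecting assistant text from an SSE response body.
// The { type: "text-delta", delta } event shape is an assumption.
function collectText(sseBody: string): string {
  let text = "";
  for (const line of sseBody.split("\n")) {
    if (!line.startsWith("data:")) continue;
    const payload = line.slice(5).trim();
    if (payload === "[DONE]") break;
    try {
      const event = JSON.parse(payload);
      if (event.type === "text-delta") text += event.delta;
    } catch {
      // Ignore keep-alive comments and non-JSON lines.
    }
  }
  return text;
}
```

For a follow-up turn, take the `X-Conversation-Id` value from the response headers and send it as the `conversationId` field of the next request body so the stored history is resumed.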

MCP Server

For integration with AI IDEs (Claude Code, Cursor, VS Code Copilot), KaireonAI provides a separate MCP server with the same tool set. See the MCP Server Reference for details.

Next Steps

AI Assistant Details

Full tool reference, V2 pipeline support, example prompts, and configuration.

MCP Server

Connect AI IDEs (Claude Code, Cursor, VS Code) to KaireonAI.