Overview
KaireonAI includes a built-in AI assistant that understands the platform and can help you build, configure, and troubleshoot your decisioning setup. Open it with Cmd+I (macOS) or Ctrl+I (Windows/Linux).

Capabilities
The assistant has 105+ tools organized by module:

| Module | Tools | What It Can Do |
|---|---|---|
| Data | 10 | Create schemas, add fields, set up connectors, build pipelines, test connections |
| Studio | 16 | Create offers, configure channels, set up qualification rules, manage contact policies, triggers, guardrails, outcome types |
| V2 Pipeline | 9 | Create V2 flows, add/remove/update pipeline nodes, list scoring/ranking/allocation methods |
| Algorithms | 8 | Train models, configure experiments, manage predictors, update model config |
| Content | 10 | Generate creative copy, manage content items and templates, sync CMS sources |
| Metrics | 5 | Create behavioral metrics, preview values, trigger computation, create metric rules |
| Dashboards | 2 | Query metrics, list alerts |
| Intelligence | 15 | Explain decisions, trace journeys, analyze policy conflicts, simulate changes, detect model drift, run health checks |
| Mutations | 8 | Update offers, channels, contact policies, qualification rules; delete entities; publish flows |
| Docs | 1 | Search platform documentation (hybrid local + Mintlify MCP) |
V2 Composable Pipeline
The assistant fully supports the V2 composable pipeline: 14 node types across 3 phases, 4 scoring methods (including external endpoints) with channel overrides and champion/challenger, 4 ranking algorithms, and Hungarian optimal allocation for multi-placement scenarios. Ask the assistant:

- “What scoring methods are available?” — Lists priority_weighted, propensity, and formula with details
- “Create a V2 flow with diversity ranking” — Creates a 4-node pipeline with your specified config
- “Add an enrich node to my flow” — Adds enrichment at the correct pipeline position
- “Set up multi-placement with Hungarian allocation” — Configures group node with optimal strategy
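As an illustration, a 4-node V2 flow with diversity ranking might be represented like this. The shape below is a hypothetical sketch, not the actual flow schema; node and field names are assumptions:

```typescript
// Hypothetical sketch of a 4-node V2 flow with diversity ranking.
// Node and field names are illustrative, not the real KaireonAI schema.
type PipelineNode = { type: string; config?: Record<string, unknown> };

const flow: { name: string; nodes: PipelineNode[] } = {
  name: "spring-campaign-flow",
  nodes: [
    { type: "eligibility" },                                    // phase 1: filter qualified offers
    { type: "score", config: { method: "priority_weighted" } }, // phase 2: score candidates
    { type: "rank", config: { algorithm: "diversity" } },       // phase 2: diversity ranking
    { type: "allocate", config: { strategy: "hungarian" } },    // phase 3: optimal multi-placement
  ],
};

console.log(flow.nodes.length); // 4
```

The assistant builds an equivalent structure for you from a natural-language prompt, so you rarely need to write this by hand.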
Context Routing
The assistant automatically adapts to your current page. When you are on the Data module, it prioritizes data tools. On the Studio Decision Flows page, it includes the full V2 pipeline toolset. On Algorithms pages, model management and intelligence tools are added. This makes interactions more relevant and efficient. The general fallback (any page not matching a specific route) provides access to all 105+ tools.

Guided Autonomy
Write operations follow a preview -> approve -> execute pattern:

- The assistant shows what it plans to create or change
- You review and click Approve or Cancel
- Only after approval is the mutation executed via `confirmMutation`
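The approval gate amounts to a simple state check. A minimal sketch, assuming hypothetical types; the `executeMutation` callback stands in for the platform's actual write path:

```typescript
// Minimal sketch of the preview -> approve -> execute gate.
// Types and the executeMutation callback are illustrative assumptions.
type Mutation = { kind: string; payload: Record<string, unknown> };
type Decision = "approve" | "cancel";

function confirmMutation(
  planned: Mutation,
  decision: Decision,
  executeMutation: (m: Mutation) => string,
): string {
  // Nothing is written until the user explicitly approves the preview.
  if (decision !== "approve") return "cancelled";
  return executeMutation(planned);
}

const result = confirmMutation(
  { kind: "update_offer", payload: { discount: 10 } },
  "approve",
  (m) => `executed ${m.kind}`,
);
console.log(result); // "executed update_offer"
```

The key design point is that the preview and the execution are separate steps: the assistant can plan freely, but side effects only happen after your explicit approval.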
Conversation Persistence
Conversations are stored in the database and can be resumed:

- Auto-created on first message if no `conversationId` is provided
- Auto-titled from the first user message
- Full message history (including tool invocations) is loaded when resuming
- Up to 50 recent conversations are shown per tenant
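These rules can be modeled with a tiny in-memory store. This is an illustrative sketch: the function names and the 40-character title cutoff are assumptions, while the 50-conversation cap matches the behavior described above:

```typescript
// Illustrative in-memory model of conversation persistence.
// The autoTitle cutoff (40 chars) is an assumption; the
// 50-conversation listing cap matches the documented behavior.
type Conversation = { id: string; title: string; messages: string[] };

const store = new Map<string, Conversation>();

function sendMessage(message: string, conversationId?: string): Conversation {
  // Auto-create a conversation when no conversationId is provided,
  // titling it from the first user message.
  if (!conversationId || !store.has(conversationId)) {
    const conv: Conversation = {
      id: `conv-${store.size + 1}`,
      title: message.slice(0, 40),
      messages: [message],
    };
    store.set(conv.id, conv);
    return conv;
  }
  const conv = store.get(conversationId)!;
  conv.messages.push(message); // full history is kept for resuming
  return conv;
}

function listRecent(): Conversation[] {
  // Only the 50 most recent conversations are shown per tenant.
  return [...store.values()].slice(-50);
}

const first = sendMessage("Create a V2 flow with diversity ranking");
const resumed = sendMessage("Add an enrich node", first.id);
console.log(resumed.messages.length); // 2
```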
Supported Providers
Configure your preferred LLM provider in Settings > AI Configuration:

| Provider | Default Model | Notes |
|---|---|---|
| Google | Gemini 2.5 Flash | Default; good balance of speed and quality |
| Anthropic | Claude Sonnet / Opus | Strong reasoning, best for complex analysis |
| OpenAI | GPT-4o | Widely available |
| Amazon Bedrock | Configurable | Enterprise, uses IAM auth or access keys |
| Ollama | Any local model | Local at localhost:11434, no API costs |
| LM Studio | Any local model | Local at localhost:1234/v1, no API costs |
Provider settings resolve in order of precedence: Settings > environment variables (`AI_PROVIDER`, `AI_MODEL`, `AI_API_KEY`, `AI_BASE_URL`) > defaults (Google Gemini 2.5 Flash).
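The fallback chain can be sketched as follows. Function and field names here are illustrative assumptions; only the environment variable names and the Gemini default come from the docs:

```typescript
// Illustrative sketch of provider resolution:
// tenant Settings > environment variables > built-in default.
type AIConfig = { provider: string; model: string };

function resolveConfig(
  settings: Partial<AIConfig>,
  env: { AI_PROVIDER?: string; AI_MODEL?: string },
): AIConfig {
  return {
    provider: settings.provider ?? env.AI_PROVIDER ?? "google",
    model: settings.model ?? env.AI_MODEL ?? "gemini-2.5-flash",
  };
}

console.log(resolveConfig({}, {}));                                   // built-in default
console.log(resolveConfig({}, { AI_PROVIDER: "ollama" }));            // env wins over default
console.log(resolveConfig({ provider: "anthropic", model: "x" }, {})); // settings win over all
```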
Security
- Prompt injection defense — 7 injection patterns detected and filtered; messages truncated to 10,000 characters
- PII redaction — Emails, SSN, credit cards, phone numbers, API keys, connection strings, and sensitive field names are stripped from all tool outputs before reaching the LLM
- RBAC enforcement — Chat requires admin/editor/viewer; mutations inherit the user’s role
- Rate limiting — 30 requests per minute per user (non-fail-open)
- Tenant scoping — `tenantId` is auto-injected into every tool call
- Audit logging — All AI interactions are logged with module, route, message count, and conversation ID
- Max tool steps — Limited to 5 sequential tool calls per message
- Request timeout — 60 seconds
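Two of these defenses, input truncation and PII redaction, can be sketched with plain string operations. The regexes below are simplified illustrations, not the platform's actual detection patterns:

```typescript
// Illustrative sketches of message truncation and PII redaction.
// These regexes are simplified examples, not the platform's real rules.
const MAX_MESSAGE_CHARS = 10_000;

function truncateMessage(message: string): string {
  return message.slice(0, MAX_MESSAGE_CHARS);
}

function redactPII(text: string): string {
  return text
    // email addresses
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[REDACTED_EMAIL]")
    // US-style SSNs
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED_SSN]")
    // 13-16 digit card numbers
    .replace(/\b\d{13,16}\b/g, "[REDACTED_CARD]");
}

console.log(redactPII("Reach me at jane@example.com, SSN 123-45-6789"));
// "Reach me at [REDACTED_EMAIL], SSN [REDACTED_SSN]"
```

In the platform this scrubbing runs on tool outputs before they reach the LLM, so sensitive values never enter the model context.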
API Integration
The AI assistant is available via REST API. The `X-Conversation-Id` header in the response can be used for follow-up messages.
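For example, a follow-up request might thread the conversation ID back like this. The endpoint path and body shape below are assumptions; only the `X-Conversation-Id` header comes from the docs:

```typescript
// Sketch of threading a conversation over the REST API.
// The /api/ai/chat path and body shape are hypothetical; the
// X-Conversation-Id header is the documented continuation mechanism.
type ChatRequest = {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
};

function buildChatRequest(message: string, conversationId?: string): ChatRequest {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  // Echo the ID from a previous response to resume that conversation.
  if (conversationId) headers["X-Conversation-Id"] = conversationId;
  return {
    url: "/api/ai/chat",
    init: { method: "POST", headers, body: JSON.stringify({ message }) },
  };
}

// First message: no ID; read X-Conversation-Id from the response headers.
const opening = buildChatRequest("What scoring methods are available?");
// Follow-up: send the ID back to continue the same conversation.
const followUp = buildChatRequest("Add an enrich node", "conv-123");
console.log(followUp.init.headers["X-Conversation-Id"]); // "conv-123"
```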
MCP Server
For integration with AI IDEs (Claude Code, Cursor, VS Code Copilot), KaireonAI provides a separate MCP server with the same tool set. See the MCP Server Reference for details.

Next Steps
AI Assistant Details
Full tool reference, V2 pipeline support, example prompts, and configuration.
MCP Server
Connect AI IDEs (Claude Code, Cursor, VS Code) to KaireonAI.