POST /api/v1/ai/chat
Stream a conversation with the AI assistant. Uses the Vercel AI SDK to stream responses with tool calling support. Rate limited to 30 requests per 60 seconds.
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| messages | array | Yes | AI SDK message array (user/assistant/tool messages) |
| route | string | No | Current page route for context injection (e.g., "/studio/decision-flows") |
| conversationId | string | No | Existing conversation ID to continue; auto-created if omitted |
Response
Returns a streaming UIMessageStreamResponse. The X-Conversation-Id response header contains the conversation ID for follow-up messages.
Example
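A representative request body (the message content and conversationId value are illustrative):

```json
{
  "messages": [
    {
      "role": "user",
      "content": "Why was this customer suppressed from the spring campaign?"
    }
  ],
  "route": "/studio/decision-flows",
  "conversationId": "conv_01HXYZ"
}
```

The response body streams; read the X-Conversation-Id response header to continue the same conversation in subsequent requests.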
GET /api/v1/ai/conversations
List recent conversations for the tenant (up to 50, newest first). Requires session authentication.
POST /api/v1/ai/conversations
Create a new conversation.
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| title | string | No | Conversation title. Default: "New conversation" |
201 Created
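A sketch of the created-conversation payload; the field names below are assumptions, not guaranteed by this reference:

```json
{
  "id": "conv_01HXYZ",
  "title": "New conversation",
  "createdAt": "2025-06-01T12:00:00Z"
}
```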
POST /api/v1/ai/analyze/{type}
Run AI-powered analysis on tenant data. Supports three analysis types with automatic LLM/ML Worker routing based on data volume.
Path Parameters
| Parameter | Values | Description |
|---|---|---|
| type | "policies", "segments", "content" | Type of analysis to perform |
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| schemaId | string | For segments | Schema to analyze |
| confirmed | boolean | No | Set true to bypass confirmation for large datasets |
| preferredTier | string | No | Force "llm" or "ml_worker" tier |
Response (confirmation required)
When the dataset exceeds the LLM tier threshold, a confirmation prompt is returned instead of results.
Response (analysis complete)
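Hedged sketches of the two response shapes; all field names are illustrative, only the behavior is specified above. Confirmation prompt for a dataset above the LLM tier threshold:

```json
{
  "confirmationRequired": true,
  "recordCount": 120000,
  "message": "Large dataset detected. Resend with confirmed: true, or set preferredTier to \"ml_worker\"."
}
```

And a completed analysis:

```json
{
  "type": "segments",
  "tier": "ml_worker",
  "findings": [
    { "summary": "High-value segment shows declining engagement" }
  ]
}
```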
GET /api/v1/ai/recommendations
List AI-generated recommendations for the tenant.
Query Parameters
| Parameter | Type | Description |
|---|---|---|
| type | string | Filter by type: "policy", "rule", "segment", "content" |
| status | string | Filter by status: "new", "applied", "dismissed" |
Response
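An illustrative response payload (field names are assumptions):

```json
{
  "recommendations": [
    {
      "id": "rec_01ABC",
      "type": "policy",
      "status": "new",
      "title": "Relax the weekly email frequency cap for the loyalty segment"
    }
  ]
}
```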
POST /api/v1/ai/recommendations/{id}/apply
Apply an AI recommendation by creating the corresponding entity (policy, rule, segment, or content item) in draft status. Requires admin role.
Response
POST /api/v1/ai/intelligence
Dispatch to intelligence backend tools. Used by the MCP server for programmatic access.
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| tool | string | Yes | Tool name (see available tools below) |
| params | object | Yes | Tool-specific parameters |
Available Tools
| Tool | Description |
|---|---|
| explainDecision | Explain why a specific decision was made for a customer |
| compareOfferEligibility | Compare which offers a customer qualifies for |
| listCustomerSuppressions | List contact policy suppressions for a customer |
| traceCustomerJourney | Trace a customer's journey history |
| analyzeQualificationFunnel | Analyze qualification rule pass/fail rates |
| analyzeContactPolicySuppression | Analyze contact policy suppression rates |
| analyzePolicyConflicts | Detect policy conflicts and overlaps |
| analyzeOfferPerformance | Analyze offer conversion and engagement metrics |
| simulateRuleChange | Simulate the impact of a rule change |
| simulateFrequencyCapChange | Simulate frequency cap adjustments |
| analyzeModelHealth | Check model performance and drift |
| explainModelScoring | Explain how a model scores a customer |
| suggestModelImprovements | Get model improvement suggestions |
| detectModelDrift | Detect feature or concept drift |
| runHealthCheck | Run a system-wide health check |
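An example dispatch request using a tool name from the table above; the params keys are tool-specific and illustrative here:

```json
{
  "tool": "explainDecision",
  "params": {
    "customerId": "cust_01XYZ",
    "decisionId": "dec_01ABC"
  }
}
```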
POST /api/v1/ai/parse-rule
Parse a natural language rule description into structured JSON using AI.
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| text | string | Yes | Natural language rule description |
Example
Response
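Illustrative request and response; the parsed-rule JSON structure is an assumption, since the actual schema is not documented here. Request:

```json
{
  "text": "Exclude customers who received an email in the last 7 days"
}
```

A possible parsed result:

```json
{
  "rule": {
    "field": "lastEmailSentAt",
    "operator": "within_days",
    "value": 7,
    "action": "exclude"
  }
}
```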
GET /api/v1/ai/ml-worker/status
Check ML Worker connectivity status.
Response
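A minimal sketch of the status payload (field names assumed):

```json
{
  "connected": true,
  "latencyMs": 42
}
```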
GET /api/v1/ai/analyzer-settings
Get current AI analyzer settings for the tenant (with defaults filled in).
PUT /api/v1/ai/analyzer-settings
Update AI analyzer settings. Requires admin role.
Request Body
Settings for each analyzer module (segmentation, policy, content), including thresholds, lookback windows, and model parameters.
Roles
| Endpoint | Allowed Roles |
|---|---|
| POST /ai/chat | admin, editor, viewer |
| POST /ai/analyze/{type} | admin, editor |
| GET /ai/recommendations | admin, editor, viewer |
| POST /ai/recommendations/{id}/apply | admin |
| POST /ai/intelligence | any authenticated |
| POST /ai/parse-rule | admin, editor |
| GET /ai/ml-worker/status | admin, editor, viewer |
| GET /ai/analyzer-settings | admin, editor, viewer |
| PUT /ai/analyzer-settings | admin |