Documentation Index
Fetch the complete documentation index at: https://docs.kaireonai.com/llms.txt
Use this file to discover all available pages before exploring further.
Overview
The AI Insights Dashboard (/ai/insights) is your central hub for proactive platform intelligence. It automatically runs four intelligence tools in parallel and presents the findings as categorized insight cards sorted by severity.
Navigate to AI > Insights in the sidebar to open the dashboard.
How It Works
On page load (and every 5 minutes thereafter), the dashboard calls four intelligence tools in parallel via POST /api/v1/ai/intelligence:
| Tool | Section | What It Checks |
|---|---|---|
| runHealthCheck | Health | Model health, policy conflicts, budget burn, suppression rates, stale entities, experiment status. Results cached server-side for 5 minutes. |
| analyzeOfferPerformance | Performance | Offer impressions, conversions, conversion rates, revenue, and trends. Identifies top and bottom performers. |
| analyzePolicyConflicts | Policies | Cross-entity conflicts: contradictions, overlaps, gaps, and priority ties across offers, rules, policies, and experiments. |
| analyzeCrossModule | Correlations | Cross-module analysis — connects dots across offers, policies, and models to surface insights that span module boundaries. |
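As a rough sketch of that fan-out, the dashboard can be pictured issuing one request per tool. Only the endpoint path and the four tool names come from this page; the `{ tool }` request-body shape below is an assumption for illustration.

```typescript
// Hypothetical sketch of the dashboard's parallel fan-out.
// Endpoint path and tool names are documented; the body shape is assumed.
const INTELLIGENCE_TOOLS = [
  "runHealthCheck",
  "analyzeOfferPerformance",
  "analyzePolicyConflicts",
  "analyzeCrossModule",
] as const;

type IntelligenceTool = (typeof INTELLIGENCE_TOOLS)[number];

interface IntelligenceRequest {
  url: string;
  method: "POST";
  body: { tool: IntelligenceTool };
}

function buildIntelligenceRequests(): IntelligenceRequest[] {
  // One request per tool; a real client would dispatch these with Promise.all.
  return INTELLIGENCE_TOOLS.map((tool) => ({
    url: "/api/v1/ai/intelligence",
    method: "POST" as const,
    body: { tool },
  }));
}
```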
Dashboard Layout
The Insights page is a summary dashboard that provides a unified view, with links to dedicated drill-down pages for each area.
KPI Strip
Four top-level metrics at a glance:
| KPI | Description | Links To |
|---|---|---|
| Health Score | 0-100% based on critical/warning/info issue counts | — |
| Total Impressions | Sum of all offer impressions in the current period | Content Intelligence |
| Revenue | Total revenue from offer conversions | Content Intelligence |
| Policy Conflicts | Number of active policy conflicts detected | Policy Recommendations |
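The exact weighting behind the Health Score is internal to the dashboard; the sketch below shows one plausible penalty-based scoring under assumed weights (critical 20, warning 5, info 1), just to make the "0-100% based on issue counts" idea concrete.

```typescript
// Illustrative only: the real weighting is not documented here.
// Assumed penalty weights: critical 20, warning 5, info 1.
function healthScore(critical: number, warning: number, info: number): number {
  const penalty = critical * 20 + warning * 5 + info * 1;
  return Math.max(0, 100 - penalty); // clamp to the 0-100 range
}
```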
Health Progress Bar
Shows overall system health with a colored progress bar and a breakdown of critical, warning, and info issue counts.
Summary Cards
Three cards provide at-a-glance previews, each linking to the relevant drill-down page:
| Card | Shows | Links To |
|---|---|---|
| Health | Top 3 critical issues | AI > Segments |
| Performance | Top and bottom performers by CVR | AI > Content Intelligence |
| Policies | Active conflicts with entity names | AI > Policy Recommendations |
Cross-Module Correlations
This section is unique to the Insights page and provides the platform’s most valuable intelligence — insights that connect the dots across modules.
| Correlation Type | What It Detects | Example |
|---|---|---|
| Policy blocks top offer | A contact policy is suppressing impressions for a high-CVR offer | “Category suppression may limit ‘Win-Back Lapsed Policy’ (9.8% CVR)” |
| Low-AUC model in active flow | A model with near-random AUC is available while a decision flow is published | “Model ‘Cross-Sell GBM’ has AUC 0.503 while ‘Banking Main Flow’ is live” |
| Zero-CVR high spend | An offer has many impressions but zero conversions (wasted opportunity cost) | “‘Dental & Vision Rider’ has 378 impressions but 0 conversions” |
Each correlation card includes:
- Impact statement — why it matters (e.g., “Potential revenue loss: 10-20%”)
- Action button — one-click action to resolve the issue
One-Click Actions
Cross-module correlations and drill-down pages include action buttons that execute changes directly:
| Action | What It Does | API Call |
|---|---|---|
| Review Policy | Navigates to Contact Policies page | Navigation |
| Pause Offer | Sets offer status to paused | PUT /api/v1/offers |
| Disable Model | Sets model status to paused | PUT /api/v1/algorithm-models/:id |
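For illustration, a one-click action can be modeled as a request descriptor. The method and path come from the table above; the body field names (`id`, `status`) are assumptions — the page documents only the method and endpoint.

```typescript
// Hypothetical sketch of the "Pause Offer" one-click action.
// PUT /api/v1/offers is from the actions table; the body fields are assumed.
interface OfferActionRequest {
  method: "PUT";
  url: string;
  body: { id: string; status: "paused" };
}

function pauseOfferRequest(offerId: string): OfferActionRequest {
  return {
    method: "PUT",
    url: "/api/v1/offers", // endpoint as documented above
    body: { id: offerId, status: "paused" },
  };
}
```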
Actionable Insights
Aggregated recommendations from the offer performance analyzer, with a link to see more on the Content Intelligence page.
Recommendation Lifecycle
AI recommendations generated by the analyzers (policy, segmentation, content, rule building) follow a four-state lifecycle:
| Status | Meaning |
|---|---|
| New | Freshly generated, awaiting review |
| Reviewed | Opened and read by a user |
| Applied | Accepted and converted into a draft entity |
| Dismissed | Rejected — no further action |
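The allowed transitions between these states are not spelled out here; a plausible reading, sketched below, is that New and Reviewed are working states while Applied and Dismissed are terminal. Treat the transition table as an assumption.

```typescript
// Assumed lifecycle rules: Applied and Dismissed are terminal states.
type RecStatus = "new" | "reviewed" | "applied" | "dismissed";

const ALLOWED_TRANSITIONS: Record<RecStatus, RecStatus[]> = {
  new: ["reviewed", "applied", "dismissed"],
  reviewed: ["applied", "dismissed"],
  applied: [],    // terminal
  dismissed: [],  // terminal
};

function canTransition(from: RecStatus, to: RecStatus): boolean {
  return ALLOWED_TRANSITIONS[from].includes(to);
}
```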
Status updates are made via PATCH /api/v1/ai/recommendations/:id.
Applying Recommendations
When you click Apply on a recommendation:
- KaireonAI creates a draft entity in the appropriate module (Contact Policy, Qualification Rule, Customer Segment, or Creative)
- You are redirected to the relevant editor to review and finalize the draft
- The recommendation status updates to Applied with a link to the created entity via the appliedEntityId field
ML Worker Status Indicator
The dashboard shows the ML Worker connection status (fetched from GET /api/v1/ai/ml-worker/status):
| Status | Indicator | Meaning |
|---|---|---|
| Connected | Green | ML Worker is running and healthy at the configured URL |
| LLM Only | Amber | No ML Worker configured or disabled; AI features use LLM-based analysis only |
| Disconnected | Red | ML Worker is configured but not responding to health checks |
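The three states above suggest a simple derivation from two inputs: whether a worker URL is configured and whether its health probe succeeded. The sketch below encodes that reading; the function and parameter names are illustrative, not part of the platform API.

```typescript
// Assumed derivation of the indicator from config + health-check result,
// mirroring the status table above.
type WorkerStatus = "connected" | "llm_only" | "disconnected";

function workerStatus(configuredUrl: string | null, healthOk: boolean): WorkerStatus {
  if (!configuredUrl) return "llm_only";     // amber: no worker configured
  return healthOk ? "connected" : "disconnected"; // green or red
}
```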
Configure the ML Worker connection in Settings > Integrations > ML Worker. The health check polls /health with a 30-second cache TTL and a 3-second timeout.
Auto-Refresh
The dashboard automatically refreshes every 5 minutes. The “Last updated” timestamp shows when the most recent refresh completed. You can also navigate away and return to trigger a fresh load.
Analytics Foundation Endpoints
Beyond the dashboard tools above, KaireonAI exposes four analytical primitives that power deeper insight workflows. Each is a type= value on GET /api/v1/dashboard-data (full schemas in the Dashboard Data API reference).
Selection Frequency
type=selection_frequency aggregates decision traces to answer “how often was each offer eligible, scored, and selected — and at what rank?” For each offer in the window it returns eligibleCount, scoredCount, selectedCount, selectionRate, avgRank, and a 10-element rankDistribution histogram. Optional filters: channelId, categoryId, decisionFlowId, segmentId.
Use it to:
- Spot offers that are always eligible but rarely chosen — these are candidates for score inputs or strategy tuning.
- Compare rank distributions side-by-side to understand competitive pressure between offers.
- Slice by segmentId to compare how ranking differs for VIP vs. general customers.
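To make the per-offer counters concrete, here is a minimal sketch of aggregating decision-trace rows into the documented fields. The trace-row shape and the rule that ranks beyond 10 fall into the last histogram bucket are assumptions; only the output field names come from the doc.

```typescript
// Minimal sketch of selection_frequency aggregation (trace-row shape assumed).
interface TraceRow {
  offerId: string;
  eligible: boolean;
  scored: boolean;
  selected: boolean;
  rank?: number; // 1-based rank when scored
}

interface OfferStats {
  eligibleCount: number;
  scoredCount: number;
  selectedCount: number;
  selectionRate: number;
  rankDistribution: number[]; // 10 buckets, as documented
}

function aggregateSelectionFrequency(rows: TraceRow[]): Map<string, OfferStats> {
  const out = new Map<string, OfferStats>();
  for (const r of rows) {
    let s = out.get(r.offerId);
    if (!s) {
      s = {
        eligibleCount: 0, scoredCount: 0, selectedCount: 0,
        selectionRate: 0, rankDistribution: new Array(10).fill(0),
      };
      out.set(r.offerId, s);
    }
    if (r.eligible) s.eligibleCount++;
    if (r.scored) s.scoredCount++;
    if (r.selected) s.selectedCount++;
    // Assumption: ranks past 10 collapse into the final bucket.
    if (r.rank !== undefined) s.rankDistribution[Math.min(r.rank, 10) - 1]++;
  }
  for (const s of out.values()) {
    s.selectionRate = s.eligibleCount ? s.selectedCount / s.eligibleCount : 0;
  }
  return out;
}
```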
Anomaly Candidates
type=anomaly_candidates compares the current period (days) to a prior baseline (baselineDays) and surfaces metric moves large enough to warrant attention. Severity is classified from the larger of z-score magnitude and absolute percent change:
| Severity | Trigger |
|---|---|
| info | \|z\| ≥ 2 or \|%\| ≥ 15 |
| warning | \|z\| ≥ 3 or \|%\| ≥ 30 |
| critical | \|z\| ≥ 4 or \|%\| ≥ 50 |
Monitored metrics: acceptance_rate (overall, per-offer, per-channel), revenue (per-offer), and degraded_scoring_rate (overall). The endpoint is stateless — it returns candidates for a dashboard to display.
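The severity cascade above can be sketched directly: check the strictest tier first, since a move that satisfies a higher tier also satisfies the lower ones. The function name and the "none" fallback are illustrative.

```typescript
// Severity classification mirroring the documented thresholds:
// the higher-tier trigger wins, whichever of |z| or |%| fires it.
type Severity = "none" | "info" | "warning" | "critical";

function classifySeverity(zScore: number, pctChange: number): Severity {
  const z = Math.abs(zScore);
  const pct = Math.abs(pctChange);
  if (z >= 4 || pct >= 50) return "critical";
  if (z >= 3 || pct >= 30) return "warning";
  if (z >= 2 || pct >= 15) return "info";
  return "none"; // below all thresholds: not an anomaly candidate
}
```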
The anomaly surface feeds two downstream consumers:
- The Executive Dashboard’s anomaly feed, which renders candidates directly on load.
- The Alert Rules evaluator, which converts anomaly-shaped metric moves into notifications.
Alert firing only happens when /api/cron/tick is invoked — by AWS EventBridge when wired, or manually via curl in the meantime. During pilot / initial deployment the cron is usually not wired, so the anomaly-candidates endpoint still returns data (and drives the dashboard panel) but configured alert rules sit dormant until a tick occurs. See EventBridge Setup for the optional automation path.
Why-Not-Ranked Aggregate
type=why_not_ranked complements the per-customer Why-Not API by answering the question in aggregate: for a given target offer over the window, how often was it eligible but not selected, and why? The response breaks misses into scoredTooLow, filteredByContactPolicy, filteredByQualification, and beatenBy — the top five offers that won when this one was scored but passed over. rankDistribution surfaces where in the scored ranking this offer typically lands.
The trace sample is capped at 1,000 rows per call to keep latency predictable; the sampleSize and sampleCap fields in the response tell you when to narrow the window.
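A minimal sketch of the miss-bucketing is below. The per-trace input shape is an assumption, as is the reading that beatenBy winners are counted from scoredTooLow misses; only the output field names come from the response description above.

```typescript
// Sketch of why_not_ranked aggregation (input trace shape assumed).
type MissReason = "scoredTooLow" | "filteredByContactPolicy" | "filteredByQualification";

interface MissTrace {
  reason: MissReason;
  winnerOfferId?: string; // offer that won when this one was scored but passed over
}

interface WhyNotAggregate {
  scoredTooLow: number;
  filteredByContactPolicy: number;
  filteredByQualification: number;
  beatenBy: { offerId: string; wins: number }[];
}

function aggregateWhyNot(traces: MissTrace[]): WhyNotAggregate {
  const agg: WhyNotAggregate = {
    scoredTooLow: 0, filteredByContactPolicy: 0, filteredByQualification: 0, beatenBy: [],
  };
  const wins = new Map<string, number>();
  for (const t of traces) {
    agg[t.reason]++;
    if (t.reason === "scoredTooLow" && t.winnerOfferId) {
      wins.set(t.winnerOfferId, (wins.get(t.winnerOfferId) ?? 0) + 1);
    }
  }
  agg.beatenBy = [...wins.entries()]
    .map(([offerId, w]) => ({ offerId, wins: w }))
    .sort((a, b) => b.wins - a.wins)
    .slice(0, 5); // top five winners, per the documented response shape
  return agg;
}
```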
Cross-Decision Narratives
Three helper functions in platform/src/lib/ai/intelligence/decision-explainer.ts build human-readable narratives on top of the endpoints above:
- explainOfferUnderperformance({tenantId, offerId, days, segmentId?}) — names the offer, states its selection rate, and identifies the top competitor when relevant.
- explainSegmentCoverage({tenantId, segmentId, days}) — lists the top offers delivered to a segment and the active offers that never reached it.
- explainAnomaly({tenantId, metric, dimension, dimensionKey, days}) — produces root-cause hypotheses for an anomaly tuple (policy pressure, scoring rank pressure, model degradation).
Each helper returns {narrative: string, support: {...}}. The narrative string is safe to render directly in dashboards or executive emails; support carries structured evidence that the report narrator uses for richer output.
These helpers also back the Explain button on the Executive Dashboard’s anomaly feed. The feed today uses a deterministic fallback drawn from the anomaly tuple’s fields; wiring the helpers through a dedicated /api/v1/ai/explain HTTP route is tracked in the roadmap.
Next Steps
Executive Dashboard
Where the LLM narrative + anomaly feed + segment × offer view live.
Alert Rules
Convert anomaly-shaped metric moves into paged notifications.
Reports
Compose, narrate, and deliver scheduled reports built from these endpoints.
Smart Policy Recommender
AI-powered contact policy optimization.
Natural Language Rule Building
Create rules by describing them in plain English via the AI chat panel.
Auto-Segmentation
Discover customer segments from your data.