
Documentation Index

Fetch the complete documentation index at: https://docs.kaireonai.com/llms.txt

Use this file to discover all available pages before exploring further.

Overview

The AI Insights Dashboard (/ai/insights) is your central hub for proactive platform intelligence. It automatically runs four intelligence tools in parallel and presents the findings as categorized insight cards sorted by severity. Navigate to AI > Insights in the sidebar to open the dashboard.

How It Works

On page load (and every 5 minutes thereafter), the dashboard calls four intelligence tools in parallel via POST /api/v1/ai/intelligence:
| Tool | Section | What It Checks |
| --- | --- | --- |
| runHealthCheck | Health | Model health, policy conflicts, budget burn, suppression rates, stale entities, experiment status. Results cached server-side for 5 minutes. |
| analyzeOfferPerformance | Performance | Offer impressions, conversions, conversion rates, revenue, and trends. Identifies top and bottom performers. |
| analyzePolicyConflicts | Policies | Cross-entity conflicts: contradictions, overlaps, gaps, and priority ties across offers, rules, policies, and experiments. |
| analyzeCrossModuleCorrelations | Cross-Module | Connects the dots across offers, policies, and models to surface insights that span module boundaries. |
Each tool call is independent — if one fails, the others still display their results.
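The fan-out pattern above can be sketched as follows. `runIntelligenceTools` and its `call` parameter are hypothetical names, but `Promise.allSettled` is the mechanism that keeps one tool's failure from discarding the others' results:

```typescript
// Hypothetical result shape for one tool invocation against
// POST /api/v1/ai/intelligence.
type ToolResult = { tool: string; ok: boolean; data?: unknown; error?: string };

async function runIntelligenceTools(
  call: (tool: string) => Promise<unknown>,
  tools: string[] = [
    "runHealthCheck",
    "analyzeOfferPerformance",
    "analyzePolicyConflicts",
    "analyzeCrossModuleCorrelations",
  ],
): Promise<ToolResult[]> {
  // allSettled (not all): each tool call is independent, so one
  // rejection does not prevent the other cards from rendering.
  const settled = await Promise.allSettled(tools.map((t) => call(t)));
  return settled.map((s, i) =>
    s.status === "fulfilled"
      ? { tool: tools[i], ok: true, data: s.value }
      : { tool: tools[i], ok: false, error: String(s.reason) },
  );
}
```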

Dashboard Layout

The Insights page is a summary dashboard that provides a unified view, with links to dedicated drill-down pages for each area.

KPI Strip

Four top-level metrics at a glance:
| KPI | Description | Links To |
| --- | --- | --- |
| Health Score | 0-100% based on critical/warning/info issue counts | |
| Total Impressions | Sum of all offer impressions in the current period | Content Intelligence |
| Revenue | Total revenue from offer conversions | Content Intelligence |
| Policy Conflicts | Number of active policy conflicts detected | Policy Recommendations |

Health Progress Bar

Shows overall system health with a colored progress bar and breakdown of critical, warning, and info issue counts.
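The exact scoring formula is internal to the dashboard; a minimal sketch, assuming simple per-severity penalties (the weights here are illustrative, not the product's):

```typescript
// Assumed penalty weights -- the dashboard's real formula may differ.
const PENALTY = { critical: 25, warning: 10, info: 2 };

function healthScore(issues: { critical: number; warning: number; info: number }): number {
  const penalty =
    issues.critical * PENALTY.critical +
    issues.warning * PENALTY.warning +
    issues.info * PENALTY.info;
  // Clamp to the 0-100 range shown on the progress bar.
  return Math.max(0, 100 - penalty);
}
```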

Summary Cards

Three cards provide at-a-glance previews, each linking to the relevant drill-down page:
| Card | Shows | Links To |
| --- | --- | --- |
| Health | Top 3 critical issues | AI > Segments |
| Performance | Top and bottom performers by CVR | AI > Content Intelligence |
| Policies | Active conflicts with entity names | AI > Policy Recommendations |

Cross-Module Correlations

This section is unique to the Insights page and provides the platform’s most valuable intelligence — insights that connect the dots across modules.
| Correlation Type | What It Detects | Example |
| --- | --- | --- |
| Policy blocks top offer | A contact policy is suppressing impressions for a high-CVR offer | "Category suppression may limit 'Win-Back Lapsed Policy' (9.8% CVR)" |
| Low-AUC model in active flow | A model with near-random AUC is available while a decision flow is published | "Model 'Cross-Sell GBM' has AUC 0.503 while 'Banking Main Flow' is live" |
| Zero-CVR high spend | An offer has many impressions but zero conversions (wasted opportunity cost) | "'Dental & Vision Rider' has 378 impressions but 0 conversions" |
Each correlation includes:
  • Impact statement — why it matters (e.g., “Potential revenue loss: 10-20%”)
  • Action button — one-click action to resolve the issue

One-Click Actions

Cross-module correlations and drill-down pages include action buttons that execute changes directly:
| Action | What It Does | API Call |
| --- | --- | --- |
| Review Policy | Navigates to the Contact Policies page | Navigation |
| Pause Offer | Sets offer status to paused | PUT /api/v1/offers |
| Disable Model | Sets model status to paused | PUT /api/v1/algorithm-models/:id |
After a successful action, the insight is removed from the list. All actions create audit log entries.
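A sketch of how an action button might map onto the API calls above; the request-body shape, and where the offer id travels for the bare `/api/v1/offers` path, are assumptions:

```typescript
type Action = "pauseOffer" | "disableModel";

// Maps a one-click action to the HTTP call documented above.
// Body shape ({ status: "paused" }) and id placement are assumed.
function buildActionRequest(
  action: Action,
  id: string,
): { method: "PUT"; path: string; body: Record<string, unknown> } {
  switch (action) {
    case "pauseOffer":
      // Doc gives no :id segment for offers, so the id is assumed to ride in the body.
      return { method: "PUT", path: "/api/v1/offers", body: { id, status: "paused" } };
    case "disableModel":
      return { method: "PUT", path: `/api/v1/algorithm-models/${id}`, body: { status: "paused" } };
  }
}
```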

Actionable Insights

Aggregated recommendations from the offer performance analyzer, with a link to see more in the Content Intelligence page.

Recommendation Lifecycle

AI recommendations generated by the analyzers (policy, segmentation, content, rule building) follow a four-state lifecycle:
| Status | Meaning |
| --- | --- |
| New | Freshly generated, awaiting review |
| Reviewed | Opened and read by a user |
| Applied | Accepted and converted into a draft entity |
| Dismissed | Rejected — no further action |
Status transitions are managed via PATCH /api/v1/ai/recommendations/:id.
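The lifecycle can be modeled as a small state machine. The allowed-transition map below is inferred from the table, not confirmed by the API:

```typescript
type RecStatus = "new" | "reviewed" | "applied" | "dismissed";

// Assumed transitions: a recommendation moves forward through review
// and ends in one of two terminal states.
const TRANSITIONS: Record<RecStatus, RecStatus[]> = {
  new: ["reviewed", "applied", "dismissed"],
  reviewed: ["applied", "dismissed"],
  applied: [],   // terminal
  dismissed: [], // terminal
};

function canTransition(from: RecStatus, to: RecStatus): boolean {
  return TRANSITIONS[from].includes(to);
}
```

A PATCH handler for /api/v1/ai/recommendations/:id could reject any request where `canTransition` returns false.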

Applying Recommendations

When you click Apply on a recommendation:
  1. KaireonAI creates a draft entity in the appropriate module (Contact Policy, Qualification Rule, Customer Segment, or Creative)
  2. You are redirected to the relevant editor to review and finalize the draft
  3. The recommendation status updates to Applied with a link to the created entity via the appliedEntityId field
Draft entities are never auto-activated. You always review and explicitly activate them.

ML Worker Status Indicator

The dashboard shows the ML Worker connection status (fetched from GET /api/v1/ai/ml-worker/status):
| Status | Indicator | Meaning |
| --- | --- | --- |
| Connected | Green | ML Worker is running and healthy at the configured URL |
| LLM Only | Amber | No ML Worker configured or disabled; AI features use LLM-based analysis only |
| Disconnected | Red | ML Worker is configured but not responding to health checks |
When the ML Worker is connected, analyzers that support dual-tier routing (segmentation, policy, content) automatically use the ML Worker for datasets exceeding 5,000 rows.
Configure the ML Worker connection in Settings > Integrations > ML Worker. The health check polls /health with a 30-second cache TTL and 3-second timeout.
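The routing decision reduces to a one-line predicate. The status string values below are assumptions; the 5,000-row threshold comes from the text above:

```typescript
// Analyzers that support dual-tier routing send large datasets to the
// ML Worker when it is connected; everything else stays on the LLM tier.
const ML_WORKER_ROW_THRESHOLD = 5000;

type WorkerStatus = "connected" | "llm_only" | "disconnected"; // assumed values

function chooseTier(status: WorkerStatus, rowCount: number): "ml-worker" | "llm" {
  return status === "connected" && rowCount > ML_WORKER_ROW_THRESHOLD
    ? "ml-worker"
    : "llm";
}
```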

Auto-Refresh

The dashboard automatically refreshes every 5 minutes. The “Last updated” timestamp shows when the most recent refresh completed. You can also navigate away and return to trigger a fresh load.

Analytics Foundation Endpoints

Beyond the dashboard tools above, KaireonAI exposes four analytical primitives that power deeper insight workflows. Each is a type= value on GET /api/v1/dashboard-data (full schemas in the Dashboard Data API reference).

Selection Frequency

type=selection_frequency aggregates decision traces to answer “how often was each offer eligible, scored, and selected — and at what rank?” For each offer in the window it returns eligibleCount, scoredCount, selectedCount, selectionRate, avgRank, and a 10-element rankDistribution histogram. Optional filters: channelId, categoryId, decisionFlowId, segmentId. Use it to:
  • Spot offers that are always eligible but rarely chosen — these are candidates for score inputs or strategy tuning.
  • Compare rank distributions side-by-side to understand competitive pressure between offers.
  • Slice by segmentId to compare how ranking differs for VIP vs. general customers.
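The first use case can be sketched as a filter over the response rows. The field names come from the description above; the thresholds are illustrative:

```typescript
// Partial row shape from type=selection_frequency.
interface SelectionRow {
  offerId: string;
  eligibleCount: number;
  selectedCount: number;
  selectionRate: number; // selectedCount / eligibleCount
}

// Flags offers that are frequently eligible but rarely chosen.
// minEligible and maxRate are illustrative cutoffs, not product defaults.
function eligibleButRarelySelected(
  rows: SelectionRow[],
  minEligible = 100,
  maxRate = 0.05,
): string[] {
  return rows
    .filter((r) => r.eligibleCount >= minEligible && r.selectionRate <= maxRate)
    .map((r) => r.offerId);
}
```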

Anomaly Candidates

type=anomaly_candidates compares the current period (days) to a prior baseline (baselineDays) and surfaces metric moves large enough to warrant attention. Severity is classified from the larger of z-score magnitude and absolute percent change:
| Severity | Trigger |
| --- | --- |
| info | z ≥ 2 or Δ% ≥ 15 |
| warning | z ≥ 3 or Δ% ≥ 30 |
| critical | z ≥ 4 or Δ% ≥ 50 |
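The bands translate directly into a classifier: the higher severity implied by either the z-score magnitude or the absolute percent change wins. The `none` label for sub-threshold moves is an assumption:

```typescript
type Severity = "none" | "info" | "warning" | "critical";

// Checks the strictest band first, so the larger of |z| and |Δ%|
// determines the result, matching the severity table.
function classifySeverity(z: number, pctChange: number): Severity {
  const az = Math.abs(z);
  const ap = Math.abs(pctChange);
  if (az >= 4 || ap >= 50) return "critical";
  if (az >= 3 || ap >= 30) return "warning";
  if (az >= 2 || ap >= 15) return "info";
  return "none";
}
```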
Covered metrics: acceptance_rate (overall, per-offer, per-channel), revenue (per-offer), degraded_scoring_rate (overall). The endpoint is stateless — it returns candidates for a dashboard to display. The anomaly surface feeds two downstream consumers:
  • The Executive Dashboard’s anomaly feed, which renders candidates directly on load.
  • The Alert Rules evaluator, which converts anomaly-shaped metric moves into notifications.
To fire alerts on these anomalies, configure Alert Rules with the same metrics and a notification destination.
Alert firing only happens when /api/cron/tick is invoked — by AWS EventBridge when wired, or manually via curl in the meantime. During a pilot or initial deployment the cron is usually not wired, so the anomaly-candidates endpoint still returns data (and drives the dashboard panel) while configured alert rules sit dormant until a tick occurs. See EventBridge Setup for the optional automation path.
Rules evaluate on every cron tick and respect cooldown, so you can tune threshold + window to match the anomaly severity bands above.

Why-Not-Ranked Aggregate

type=why_not_ranked complements the per-customer Why-Not API by answering the question in aggregate: for a given target offer over the window, how often was it eligible but not selected, and why? The response breaks misses into scoredTooLow, filteredByContactPolicy, filteredByQualification, and beatenBy — the top five offers that won when this one was scored but passed over. rankDistribution surfaces where in the scored ranking this offer typically lands. The trace sample is capped at 1,000 rows per call to keep latency predictable; the sampleSize and sampleCap fields in the response tell you when to narrow the window.
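A small helper can summarize the miss breakdown. The field names come from the response shape above; `dominantMissReason` is a hypothetical name:

```typescript
// Miss-reason counts as returned by type=why_not_ranked.
interface WhyNotMisses {
  scoredTooLow: number;
  filteredByContactPolicy: number;
  filteredByQualification: number;
}

// Returns the single largest reason the target offer missed selection,
// e.g. to headline a drill-down card.
function dominantMissReason(m: WhyNotMisses): keyof WhyNotMisses {
  const entries = Object.entries(m) as [keyof WhyNotMisses, number][];
  entries.sort((a, b) => b[1] - a[1]);
  return entries[0][0];
}
```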

Cross-Decision Narratives

Three helper functions in platform/src/lib/ai/intelligence/decision-explainer.ts build human-readable narratives on top of the endpoints above:
  • explainOfferUnderperformance({tenantId, offerId, days, segmentId?}) — names the offer, states its selection rate, and identifies the top competitor when relevant.
  • explainSegmentCoverage({tenantId, segmentId, days}) — lists the top offers delivered to a segment and the active offers that never reached it.
  • explainAnomaly({tenantId, metric, dimension, dimensionKey, days}) — produces root-cause hypotheses for an anomaly tuple (policy pressure, scoring rank pressure, model degradation).
Each returns {narrative: string, support: {...}}. The narrative string is safe to render directly in dashboards or executive emails; support carries structured evidence that the report narrator uses for richer output. These helpers also back the Explain button on the Executive Dashboard’s anomaly feed. The feed today uses a deterministic fallback drawn from the anomaly tuple’s fields; wiring the helpers through a dedicated /api/v1/ai/explain HTTP route is tracked in the roadmap.

Next Steps

Executive Dashboard

Home of the LLM narrative, the anomaly feed, and the segment × offer view.

Alert Rules

Convert anomaly-shaped metric moves into paged notifications.

Reports

Compose, narrate, and deliver scheduled reports built from these endpoints.

Smart Policy Recommender

AI-powered contact policy optimization.

Natural Language Rule Building

Create rules by describing them in plain English via the AI chat panel.

Auto-Segmentation

Discover customer segments from your data.