KaireonAI ships with five monitoring dashboards, each designed for a different audience and set of questions. Together they give you real-time visibility into every layer of the platform — from infrastructure health to business outcomes to model drift. All dashboards auto-refresh every 30 seconds. Navigate to Dashboards in the sidebar to access them.
| Dashboard | Audience | Questions It Answers |
|---|---|---|
| Operations | Platform engineers | Is the system healthy? How fast are decisions? Is the DLQ growing? |
| Business | Analysts, marketers | Which offers convert? Which channels perform? What is the funnel? |
| Data Health | Data engineers | Are connectors up? Are pipelines running? How many schemas exist? |
| Model Health | Data scientists | Is model accuracy stable? Which features matter? Are experiments running? |
| Attribution | Growth, analytics | Which channels drive conversions? How should credit be split? |

Operations Dashboard

Path: /dashboards/operations

The Operations Dashboard is the primary system-health view for platform engineers. It answers: Is the decisioning engine running fast and correctly?

Performance KPI Cards

Four cards across the top row:
| Card | What It Shows | Source |
|---|---|---|
| Total Runs | Count of all Decision Flow executions | Runs API |
| Avg Latency | Mean latency in milliseconds | Runs API |
| P95 Latency | 95th-percentile latency | Runs API |
| Success Rate | completed / (completed + failed) as a percentage | Runs API |
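As a sketch of how the KPI cards are derived, the snippet below computes all four values from a list of run records. The `latency_ms` and `status` field names are illustrative assumptions (the actual Runs API payload may differ), and the percentile convention shown is one of several common ones.

```python
from statistics import mean

def run_kpis(runs):
    """Compute the four KPI-card values from a list of run records.

    Assumes (illustratively) that each run is a dict with `latency_ms`
    and `status` ("completed" or "failed").
    """
    latencies = sorted(r["latency_ms"] for r in runs)
    completed = sum(1 for r in runs if r["status"] == "completed")
    failed = sum(1 for r in runs if r["status"] == "failed")
    # Simple percentile estimate: take the value at the 95% position.
    p95 = latencies[min(len(latencies) - 1, int(0.95 * len(latencies)))]
    return {
        "total_runs": len(runs),
        "avg_latency_ms": round(mean(latencies), 1),
        "p95_latency_ms": p95,
        "success_rate_pct": round(100 * completed / (completed + failed), 1),
    }

runs = [{"latency_ms": 40 + i, "status": "completed"} for i in range(19)]
runs.append({"latency_ms": 500, "status": "failed"})
print(run_kpis(runs))  # success_rate_pct: 95.0, p95_latency_ms: 500
```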

Latency Visualizations

| Chart | Description |
|---|---|
| Latency Distribution | Bar chart where each bar represents one run, with height proportional to latency relative to P95. Hover for run ID and exact latency. |
| Latency Trend (7-day) | Line chart with P50 and P99 series over the past week. Shows a "sample data" badge when no real time-series data exists. |

Acceptance Rate by Offer

Bar chart plus detail table showing per-offer acceptance rates from the interaction_summaries table:
| Column | Description |
|---|---|
| Offer ID | Unique identifier |
| Impressions | Total presentations |
| Positive | Accepted/converted interactions |
| Negative | Rejected/dismissed interactions |
| Accept Rate | positive / impressions as a percentage |

Budget Utilization

Progress bars for offers with budget allocations. Color-coded:
| Color | Condition | Meaning |
|---|---|---|
| Green | Below 70% | Healthy spend rate |
| Amber | 70–90% | Approaching limit |
| Red | Above 90% | Near or at budget exhaustion |
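The color logic is a simple threshold function; a minimal sketch of the rules in the table above (not the platform's actual implementation):

```python
def budget_color(spent: float, allocated: float) -> str:
    """Traffic-light color for budget utilization, per the table above:
    below 70% green, 70-90% amber, above 90% red."""
    utilization = 100 * spent / allocated
    if utilization < 70:
        return "green"
    if utilization <= 90:
        return "amber"
    return "red"

print(budget_color(650, 1000))  # 65% -> green
print(budget_color(850, 1000))  # 85% -> amber
print(budget_color(980, 1000))  # 98% -> red
```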

Decision Pipeline Panel

Metrics from /api/v1/metrics/summary (Prometheus as JSON):
  • Duration P50/P99 from kaireon_decision_pipeline_duration_ms histogram
  • Candidates per stage (Initial, After Qualification, After Contact Policy)
  • Filter rates shown as progress bars (red > 80%, amber > 50%)

Dead Letter Queue (DLQ) Panel

| Status | Condition | Color |
|---|---|---|
| Healthy | 0 events | Green |
| Warning | 1–10 events | Amber |
| Critical | More than 10 events | Red |
Shows topic breakdown (e.g., decision.outcome, pipeline.error) with Retry All and Purge action buttons.
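The same status rule is useful outside the dashboard, for example when alerting on the kaireon_dlq_depth gauge. A minimal illustrative sketch (not the platform's code):

```python
def dlq_status(depth: int) -> tuple[str, str]:
    """Map a DLQ depth (e.g. the kaireon_dlq_depth gauge) to the
    panel's (status, color) pair."""
    if depth == 0:
        return ("Healthy", "green")
    if depth <= 10:
        return ("Warning", "amber")
    return ("Critical", "red")

print(dlq_status(0))   # ('Healthy', 'green')
print(dlq_status(7))   # ('Warning', 'amber')
print(dlq_status(42))  # ('Critical', 'red')
```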

Circuit Breaker Panel

Displays state change history for each breaker (e.g., enrichment-redis, connector-snowflake). Color-coded badges: red for open, amber for half_open, green for closed.

Investigating a Drop in Acceptance Rate

Suppose you notice offer_summer_promo dropped from 12% to 3% acceptance overnight. Here is how to investigate:
  1. Check the Decision Pipeline panel. If “After Qualification” is normal but “After Contact Policy” drops sharply, a new frequency cap is filtering aggressively.
  2. Open a Decision Trace. The trace shows candidates at each stage. If the offer is present at Qualification but absent after Contact Policy, expand the contact policy section to see which rule suppressed it.
  3. Check the Filter Rate bars. Contact Policy filter rate above 80% (red) means most candidates are being suppressed — likely a misconfigured policy.
If DLQ depth rises while acceptance rates drop, check whether the outcome recording pipeline is failing. Missing outcomes make acceptance rate calculations unreliable.

Business Dashboard

Path: /dashboards/business

The Business Dashboard answers: How are my offers performing? It is designed for analysts and marketers who need to track conversion rates, channel effectiveness, and revenue.

Summary Cards

| Card | Source |
|---|---|
| Active Offers | Offer records with status = "active" |
| Active Channels | Channel records with status = "active" |
| Experiments | Active Experiment records |
| Journeys | Total Journey records |
| Active Triggers | Active TriggerRule records |
| Pending Approvals | Pending ApprovalRequest records |

Offer Funnel

Four-stage funnel showing conversion from configuration to delivery readiness:
  1. Total Offers — all offers in the tenant
  2. Active — offers with status = "active"
  3. With Creatives — active offers that have at least one active creative
  4. Active Creatives — total count of active creatives
If you have 20 active offers but only 5 have creatives, 15 offers cannot be delivered. The funnel makes this gap immediately visible.
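The funnel stages can be reproduced from the raw offer records; a sketch assuming (illustratively) that each offer is a dict with a status field and a creatives list whose items also carry a status:

```python
def offer_funnel(offers):
    """Derive the four funnel stages from offer records.

    Illustrative shape: each offer is a dict with `status` and a
    `creatives` list whose items also carry `status`.
    """
    active = [o for o in offers if o["status"] == "active"]
    with_creatives = [
        o for o in active
        if any(c["status"] == "active" for c in o["creatives"])
    ]
    active_creatives = sum(
        1 for o in active for c in o["creatives"] if c["status"] == "active"
    )
    return {
        "total_offers": len(offers),
        "active": len(active),
        "with_creatives": len(with_creatives),
        "active_creatives": active_creatives,
    }
```

With 20 offers in "active" but only 5 in "with_creatives", the 15-offer delivery gap from the example above appears directly in the returned dict.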

Charts

| Chart | What It Shows |
|---|---|
| Channel Effectiveness (pie) | Each channel's share of conversions (positive / impressions * 100) |
| Offer Performance (bar) | Horizontal bars ranking offers by conversion count |
| Daily Trend (line, 7-day) | Impressions and conversions over time from InteractionHistory |

Offer Performance Detail Table

| Column | Description |
|---|---|
| Offer | Offer name |
| Priority | Configured priority value |
| Creatives | Count of active creatives |
| Impressions | Total presentations |
| Conversions | Positive interactions |
| Conv. Rate | conversions / impressions * 100 |
| Revenue | Sum of totalValue from interaction summaries |
High impressions but zero conversions? Check if the creative is compelling and the channel matches the audience. Conversions but zero revenue? Verify that Respond API calls include a value field.
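Both troubleshooting checks above can be automated as a triage pass over the detail-table rows; a sketch with illustrative field names and an assumed 100-impression floor to avoid flagging offers with too little traffic to judge:

```python
def triage_offer_rows(rows):
    """Flag the two failure patterns called out above.

    Field names (`offer`, `impressions`, `conversions`, `revenue`) are
    illustrative; the 100-impression floor is an assumption.
    """
    findings = []
    for r in rows:
        if r["impressions"] >= 100 and r["conversions"] == 0:
            findings.append((r["offer"], "zero conversions: check creative and channel/audience fit"))
        if r["conversions"] > 0 and r["revenue"] == 0:
            findings.append((r["offer"], "zero revenue: verify Respond API calls include a value field"))
    return findings
```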

Data Health Dashboard

Path: /dashboards/data-health

Monitors the ingestion layer: connectors, pipelines, and schemas.

Summary Cards

| Card | Description |
|---|---|
| Total Connectors | All configured connectors (S3, Kafka, Snowflake, etc.) |
| Active Connectors | Connectors with status = "active" |
| Pipelines | Total pipeline definitions |
| Schemas | Total data schema definitions |

Connectors Table

| Column | Description |
|---|---|
| Name | Connector name |
| Type | e.g., AWS S3, Snowflake, Kafka |
| Status | active (green), error (red), inactive (gray) |
| Auth | Authentication method (e.g., iam_role, api_key, oauth2) |
A connector showing error usually means the last connection test failed. Navigate to Data > Connectors, select the connector, and run Test Connection to see the specific error. Common causes: expired credentials, changed IP allowlists, and network policy changes.

Model Health Dashboard

Path: /dashboards/model-health

Tracks scoring model status, accuracy trends, and experiment activity. Designed for data scientists monitoring model performance.

Summary Cards

| Card | Description |
|---|---|
| Total Models | All registered algorithm models |
| Active Models | Models with status = "active" |
| Experiments | Total experiment definitions |

Charts

| Chart | What It Shows |
|---|---|
| AUC Trend (line, 7-day) | Area Under Curve over time for the primary active model |
| Feature Importance (bar) | Top predictors ranked by importance weight |

Models Table

| Column | Description |
|---|---|
| Name | Model name |
| Type | Scorecard, Bayesian, or Gradient Boosted |
| Status | active, error, or draft |
| AUC | Latest AUC metric as a percentage |
A sudden AUC drop (more than 5 points in a week) suggests feature drift. Check the Feature Importance chart for ranking changes, then consider retraining with recent data.
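That rule of thumb is easy to codify; an illustrative sketch that compares the latest AUC (in percentage points) against the peak of the window:

```python
def auc_drop_alert(auc_series, threshold_points=5.0):
    """True when the latest AUC sits more than `threshold_points` below
    the peak of the window (e.g. the 7-day trend series)."""
    return max(auc_series) - auc_series[-1] > threshold_points

print(auc_drop_alert([92.0, 91.5, 90.8, 85.5]))  # True  (6.5-point drop)
print(auc_drop_alert([92.0, 91.0, 90.4, 89.5]))  # False (2.5-point drop)
```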

Attribution Dashboard

Path: /dashboards/attribution

Multi-touch attribution analysis comparing how different models distribute credit across channels. Helps answer: Which channels actually drive conversions, and how should I allocate budget?

Attribution Models

| Model | How Credit Is Assigned |
|---|---|
| Last Touch | 100% to the final touchpoint before conversion |
| First Touch | 100% to the first touchpoint |
| Linear | Equal split across all touchpoints |
| Time Decay | More credit to touchpoints closer to conversion |
| Position Based | 40% to the first, 40% to the last, 20% split across the middle |
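For intuition, the five rules can be sketched as a single credit-assignment function. The position-based splits follow the table above; the time-decay weighting (doubling per step) is an illustrative assumption, since the platform's actual decay constant is not documented here.

```python
def assign_credit(touchpoints, model):
    """Distribute one conversion's credit across an ordered touchpoint
    list. Returns a {channel: credit} dict whose values sum to 1.0."""
    n = len(touchpoints)
    credit = {ch: 0.0 for ch in touchpoints}
    if model == "last_touch":
        credit[touchpoints[-1]] += 1.0
    elif model == "first_touch":
        credit[touchpoints[0]] += 1.0
    elif model == "linear":
        for ch in touchpoints:
            credit[ch] += 1.0 / n
    elif model == "time_decay":
        # Later touchpoints get exponentially more weight (doubling per
        # step, an assumed decay constant for illustration).
        weights = [2.0 ** i for i in range(n)]
        total = sum(weights)
        for ch, w in zip(touchpoints, weights):
            credit[ch] += w / total
    elif model == "position_based":
        if n == 1:
            credit[touchpoints[0]] += 1.0
        elif n == 2:
            credit[touchpoints[0]] += 0.5
            credit[touchpoints[-1]] += 0.5
        else:
            credit[touchpoints[0]] += 0.4
            credit[touchpoints[-1]] += 0.4
            for ch in touchpoints[1:-1]:
                credit[ch] += 0.2 / (n - 2)
    return credit

# email and sms each get 40%, web gets the remaining 20%
print(assign_credit(["email", "web", "sms"], "position_based"))
```

Running the same journey through each model is exactly the comparison the dashboard draws: the channel mix stays fixed while the credit distribution shifts.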

Summary KPIs

| Card | Description |
|---|---|
| Total Conversions | Number of attribution results |
| Channels | Distinct channels across all touchpoints |
| Total Credit | Sum of all credit values |
| Avg Touchpoints / Conversion | Mean touchpoints per conversion journey |

Channel Contribution Chart

Horizontal bar chart showing each channel’s percentage contribution to total credit, with raw credit values.

Conversions Per Channel Table

| Column | Description |
|---|---|
| Channel | Channel identifier |
| Touchpoints | Interaction count for this channel |
| Total Credit | Sum of credit assigned |
| Contribution % | channel credit / total credit * 100 |
Compare the same data across attribution models. If a channel ranks number 1 under Last Touch but number 4 under First Touch, it is strong at closing but weak at introducing — useful for budget allocation decisions.

Prometheus Metrics

KaireonAI exposes a Prometheus-compatible scrape endpoint at /api/metrics (requires admin role).

Scrape Configuration

```yaml
scrape_configs:
  - job_name: "kaireon"
    scheme: https
    metrics_path: /api/metrics
    authorization:
      type: Bearer
      credentials: "<your-admin-api-key>"
    static_configs:
      - targets: ["your-kaireon-instance.com"]
    scrape_interval: 15s
```

Key Metrics

Decision Engine

| Metric | Type | Description |
|---|---|---|
| kaireon_decision_latency_ms | Histogram | Decision engine latency (by channel) |
| kaireon_decision_pipeline_duration_ms | Histogram | Full pipeline execution time |
| kaireon_decision_candidates | Gauge | Candidate count at each stage |
| kaireon_decision_qualification_filter_rate | Gauge | Qualification filter ratio |
| kaireon_decision_contact_policy_filter_rate | Gauge | Contact policy filter ratio |
| kaireon_decision_delivery_total | Counter | Total responses delivered |

HTTP & API

| Metric | Type | Description |
|---|---|---|
| kaireon_http_request_duration_seconds | Histogram | HTTP request duration (by method, route, status) |
| kaireon_http_error_total | Counter | HTTP 4xx/5xx errors |

Pipelines & Connectors

| Metric | Type | Description |
|---|---|---|
| kaireon_pipeline_execution_latency_ms | Histogram | Pipeline execution time |
| kaireon_pipeline_rows_processed_total | Counter | Total rows processed |
| kaireon_connector_test_latency_ms | Histogram | Connector test duration |

Infrastructure

| Metric | Type | Description |
|---|---|---|
| kaireon_cache_hits_total / kaireon_cache_misses_total | Counter | Cache performance |
| kaireon_dlq_depth | Gauge | Dead letter queue depth (by tenant) |
| kaireon_circuit_breaker_state_change_total | Counter | Circuit breaker transitions |
| kaireon_active_worker_jobs | Gauge | Active background jobs |

Experiments & Models

| Metric | Type | Description |
|---|---|---|
| kaireon_experiment_assignment_total | Counter | Experiment variant assignments |
| kaireon_scoring_model_failure_total | Counter | Scoring model failures (fallback triggered) |
| kaireon_mandatory_cap_hit_total | Counter | Mandatory offer daily cap hits |

Compliance

| Metric | Type | Description |
|---|---|---|
| kaireon_gdpr_erasure_total | Counter | GDPR erasure requests |
| kaireon_dsar_request_total | Counter | DSAR requests (by type, status) |
| kaireon_sso_auth_total | Counter | SSO auth attempts (by provider, result) |
The /api/v1/metrics/summary endpoint returns the same metrics as JSON (filtered to kaireon_* prefix). This is what the Operations Dashboard uses internally.

Segment Dimension

Six of the Business and Operations dashboard queries accept an optional segmentId query parameter to scope their aggregation to customers in a specific segment. When provided, the underlying SQL joins the segment’s materialized view (seg_<id_prefix>) onto the interaction tables by customerId.
| Dashboard data type | Segment filter behavior |
|---|---|
| acceptance_rate | Per-offer acceptance scoped to the segment. |
| offer_performance | Top-20 offer impressions/conversions/revenue within the segment. |
| offer_performance_grouped | groupBy=segment fans the response out by every active materialized segment; each offer row is decorated with segmentId + segmentName. Alternatively, pass segmentId=<id> with groupBy=channel or groupBy=category to filter within one segment. |
| channel_effectiveness | Channel-level impressions/conversions scoped to the segment. |
| daily_trend | 7-day impression/conversion line scoped to the segment. |
| revenue_trend | N-day revenue line from interaction summaries scoped to the segment. |

Behavior when a segment is not ready

Segments are materialized to PostgreSQL views asynchronously. If a segment exists but its view has not been built (for example a draft segment), the endpoint returns {"data": [], "warning": "segment view not materialized"} — dashboards can render an informational banner instead of an error. An unknown segmentId returns {"data": [], "warning": "unknown segmentId"} under the same pattern.
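Client code can branch on the documented response shapes; a minimal sketch of a helper that surfaces the warning as banner text instead of an error:

```python
def segment_rows(payload):
    """Unpack a dashboard-data response that may carry a segment warning.

    Mirrors the documented shapes: {"data": [...]} on success and
    {"data": [], "warning": "..."} when the segment view is not
    materialized or the segmentId is unknown.
    """
    warning = payload.get("warning")
    if warning:
        # Render an informational banner instead of treating this as an error.
        return [], f"Segment data unavailable: {warning}"
    return payload["data"], None

rows, banner = segment_rows({"data": [], "warning": "segment view not materialized"})
print(banner)  # Segment data unavailable: segment view not materialized
```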

Example

```bash
curl -H "X-API-Key: $API_KEY" -H "X-Tenant-Id: $TENANT_ID" \
  "https://playground.kaireonai.com/api/v1/dashboard-data?type=offer_performance&segmentId=seg_vip&days=7"
```
See the Dashboard Data API reference for complete parameter and response definitions.

Export and Save as Report

Every dashboard in the platform ships with two header-bar buttons that turn the current view into a report artifact:

Export works unconditionally; no cron wiring is required. Clicking Export renders the artifact server-side and streams the file back to the browser immediately.

Save as Report also creates the template and schedule immediately, but the resulting ReportSchedule only fires on cadence once /api/cron/tick is being invoked by an external scheduler. During pilot or initial deployment this is usually not wired. Until you follow EventBridge Setup (optional), use the Run Now button on the saved template in /settings/reports for on-demand delivery.

Export

Dropdown with PDF / CSV / Markdown / HTML options. Clicking a format:
  1. Builds a transient report template from the dashboard’s current filters (date range, segment, etc.) via the view-to-template.ts bridge.
  2. POSTs the transient template to /api/v1/reports/preview — no database row is created.
  3. Receives a base64-encoded artifact and triggers a browser download via Blob + URL.createObjectURL.
CSV exports skip LLM narration to keep latency low; PDF / Markdown / HTML include a narrated executive summary when an AI provider is configured.
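Step 3 can be mirrored outside the browser, for example in a script that calls the preview endpoint directly. The sketch below handles only the decode-and-save part; the artifact and filename field names are illustrative assumptions about the response shape.

```python
import base64
from pathlib import Path

def save_artifact(preview_response: dict, out_dir: str = ".") -> Path:
    """Decode a base64-encoded report artifact and write it to disk.

    `artifact` and `filename` are assumed field names; check the actual
    /api/v1/reports/preview response shape in your deployment.
    """
    raw = base64.b64decode(preview_response["artifact"])
    path = Path(out_dir) / preview_response["filename"]
    path.write_bytes(raw)
    return path
```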

Save as Scheduled Report

Opens a modal pre-filled from the current view. Pick:
  • Name (required; defaults to {Dashboard label} · {days}d).
  • Formats (multi-select; default PDF).
  • Narrative toggle.
  • Schedule (preset: Daily 8am / Weekly Mon 8am / Monthly 1st 8am / Custom cron).
  • Destinations (multi-select of configured notification providers — Slack, Teams, webhook, Ops-email).
Submitting creates a ReportTemplate and a ReportSchedule. When the cron is wired (AWS EventBridge → /api/cron/tick), the schedule runs on cadence and delivers artifacts to every destination. Until then, trigger delivery with Run Now on the persisted template at /settings/reports, or call POST /api/v1/reports/templates/[id]/run-now directly.

Which data sources are sent?

Each dashboard declares its source list in src/lib/dashboards/view-to-template.ts:
| Dashboard | Data sources |
|---|---|
| Executive | offer_performance, channel_effectiveness, revenue_trend, anomaly_candidates, selection_frequency |
| Business | offer_performance, channel_effectiveness, revenue_trend, funnel |
| Operations | daily_trend, decision_traces_summary |
| Model Health | selection_frequency, anomaly_candidates |
| Data Health | daily_trend |
| Attribution | offer_performance, channel_effectiveness |
Every source is tenant-scoped (requireTenant() + tenantId filter on every DB query).

Executive Dashboard

C-suite summary with narrated weekly highlights and KPI deltas.

Reports

Templates, schedules, formats, and delivery — the engine behind Save-as-Report.

Decision Traces

Forensic tracing for debugging qualification and ranking.

Scaling & Performance

Full Prometheus reference, Grafana dashboards, and scaling guidance.

Algorithms & Models

Understand the scoring models tracked by Model Health.