Monitor platform health, business performance, data quality, model accuracy, and channel attribution from five purpose-built dashboards.
KaireonAI ships with five monitoring dashboards, each designed for a different audience and set of questions. Together they give you real-time visibility into every layer of the platform, from infrastructure health to business outcomes to model drift. All dashboards auto-refresh every 30 seconds. Navigate to Dashboards in the sidebar to access them.
| Dashboard | Audience | Questions It Answers |
| --- | --- | --- |
| Operations | Platform engineers | Is the system healthy? How fast are decisions? Is the DLQ growing? |
| Business | Analysts, marketers | Which offers convert? Which channels perform? What is the funnel? |
| Data Health | Data engineers | Are connectors up? Are pipelines running? How many schemas exist? |
| Model Health | Data scientists | Is model accuracy stable? Which features matter? Are experiments running? |
| Attribution | Growth, analytics | Which channels drive conversions? How should credit be split? |
Path: `/dashboards/operations`

The Operations Dashboard is the primary system-health view for platform engineers. It answers: Is the decisioning engine running fast and correctly?
Displays state-change history for each breaker (e.g., `enrichment-redis`, `connector-snowflake`). Color-coded badges: red for `open`, amber for `half_open`, green for `closed`.
Suppose you notice `offer_summer_promo` dropped from 12% to 3% acceptance overnight. Here is how to investigate:

1. Check the Decision Pipeline panel. If "After Qualification" is normal but "After Contact Policy" drops sharply, a new frequency cap is filtering aggressively.
2. Open a Decision Trace. The trace shows candidates at each stage. If the offer is present at Qualification but absent after Contact Policy, expand the contact policy section to see which rule suppressed it.
3. Check the Filter Rate bars. A Contact Policy filter rate above 80% (red) means most candidates are being suppressed, which usually indicates a misconfigured policy.
4. If DLQ depth rises while acceptance rates drop, check whether the outcome recording pipeline is failing. Missing outcomes make acceptance rate calculations unreliable.
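The trace inspection in the steps above can be automated as a small helper that walks the stages and reports where an offer disappeared. The trace shape (stage name plus surviving candidate IDs) is an assumption based on the stages named above:

```typescript
interface TraceStage {
  name: string;        // e.g. "Qualification", "Contact Policy"
  candidates: string[]; // offer IDs still in play after this stage
}

// Return the first stage at which a previously-present offer vanished,
// or null if the offer was never dropped (or never appeared at all).
function stageThatDropped(trace: TraceStage[], offerId: string): string | null {
  let seen = false;
  for (const stage of trace) {
    const present = stage.candidates.includes(offerId);
    if (seen && !present) return stage.name; // dropped here
    if (present) seen = true;
  }
  return null;
}
```

For the scenario above, a trace where the offer survives Qualification but is absent after Contact Policy would return `"Contact Policy"`, pointing you at the suppressing rule.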
Path: `/dashboards/business`

The Business Dashboard answers: How are my offers performing? Designed for analysts and marketers who need to track conversion rates, channel effectiveness, and revenue.
High impressions but zero conversions? Check if the creative is compelling and the channel matches the audience. Conversions but zero revenue? Verify that Respond API calls include a value field.
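The zero-revenue case above usually traces back to conversion payloads that omit the monetary value. As a hedged sketch of what a well-formed payload looks like, assuming field names other than `value` (which the docs name explicitly), check your Respond API reference for the exact schema:

```typescript
interface RespondPayload {
  customerId: string;
  offerId: string;
  outcome: "accepted" | "rejected";
  value?: number; // monetary value: omit this and conversions report zero revenue
}

// Illustrative helper: always attach the revenue value when recording a conversion.
function buildConversionPayload(
  customerId: string,
  offerId: string,
  revenue: number
): RespondPayload {
  return { customerId, offerId, outcome: "accepted", value: revenue };
}
```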
A connector showing error usually means the last connection test failed. Navigate to Data > Connectors, select it, and run Test Connection to see the specific error. Common causes: expired credentials, changed IP allowlists, network policy changes.
Path: `/dashboards/model-health`

Tracks scoring model status, accuracy trends, and experiment activity. Designed for data scientists monitoring model performance.
A sudden AUC drop (more than 5 points in a week) suggests feature drift. Check the Feature Importance chart for ranking changes, then consider retraining with recent data.
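The 5-points-in-a-week heuristic can be expressed as a small check over daily AUC readings. This is a sketch of the rule as stated above; the function name and input shape (oldest reading first) are assumptions:

```typescript
// Flag drift when AUC fell more than `thresholdPoints` (1 point = 0.01 AUC)
// across the trailing window of daily readings.
function aucDriftAlert(
  dailyAuc: number[],
  windowDays = 7,
  thresholdPoints = 5
): boolean {
  const w = dailyAuc.slice(-windowDays);
  if (w.length < 2) return false; // not enough history to judge
  const dropPoints = (w[0] - w[w.length - 1]) * 100;
  return dropPoints > thresholdPoints;
}
```

When the alert fires, the next step is the one described above: inspect the Feature Importance chart for ranking changes before retraining.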
Path: `/dashboards/attribution`

Multi-touch attribution analysis comparing how different models distribute credit across channels. Helps answer: Which channels actually drive conversions, and how should I allocate budget?
Compare the same data across attribution models. If a channel ranks number 1 under Last Touch but number 4 under First Touch, it is strong at closing but weak at introducing — useful for budget allocation decisions.
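Why the same channel can rank differently under the two models is easiest to see from how each assigns credit. A minimal sketch of First Touch and Last Touch over a customer journey (an ordered list of channel touches); both are standard single-touch models, and the function names are illustrative:

```typescript
// First Touch: 100% of the conversion credit goes to the introducing channel.
function firstTouch(journey: string[]): Record<string, number> {
  return journey.length ? { [journey[0]]: 1 } : {};
}

// Last Touch: 100% of the credit goes to the closing channel.
function lastTouch(journey: string[]): Record<string, number> {
  return journey.length ? { [journey[journey.length - 1]]: 1 } : {};
}
```

For a journey like `["email", "social", "search"]`, First Touch credits `email` and Last Touch credits `search`: a channel that appears mostly at the end of journeys will dominate Last Touch rankings while barely registering under First Touch.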
The /api/v1/metrics/summary endpoint returns the same metrics as JSON (filtered to kaireon_* prefix). This is what the Operations Dashboard uses internally.
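The prefix filter the endpoint applies can be mirrored client-side when you already hold a full metrics map. This sketch assumes the summary is a flat name-to-value map (the exact JSON shape is an assumption; the `kaireon_*` prefix is as documented):

```typescript
type MetricsSummary = Record<string, number>;

// Keep only platform metrics, i.e. names starting with the kaireon_ prefix.
function filterKaireonMetrics(all: MetricsSummary): MetricsSummary {
  return Object.fromEntries(
    Object.entries(all).filter(([name]) => name.startsWith("kaireon_"))
  );
}
```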
Six of the Business and Operations dashboard queries accept an optional segmentId query parameter to scope their aggregation to customers in a specific segment. When provided, the underlying SQL joins the segment’s materialized view (seg_<id_prefix>) onto the interaction tables by customerId.
| Dashboard data `type=` | Segment filter behavior |
| --- | --- |
| `acceptance_rate` | Per-offer acceptance scoped to the segment. |
| `offer_performance` | Top-20 offer impressions/conversions/revenue within the segment. |
| `offer_performance_grouped` | `groupBy=segment` fans the response out by every active materialized segment; each offer row is decorated with `segmentId` + `segmentName`. Alternatively, pass `segmentId=<id>` with `groupBy=channel` or `groupBy=category` to filter within one segment. |
| `channel_effectiveness` | Channel-level impressions/conversions scoped to the segment. |
| `daily_trend` | 7-day impression/conversion line scoped to the segment. |
| `revenue_trend` | N-day revenue line from interaction summaries scoped to the segment. |
Segments are materialized to PostgreSQL views asynchronously. If a segment exists but its view has not been built (for example a draft segment), the endpoint returns {"data": [], "warning": "segment view not materialized"} — dashboards can render an informational banner instead of an error. An unknown segmentId returns {"data": [], "warning": "unknown segmentId"} under the same pattern.
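A client can handle the documented warning pattern uniformly, rendering an informational banner rather than an error state. The response shape follows the `{"data": [], "warning": "..."}` pattern above; the function and type names are illustrative:

```typescript
interface SegmentScopedResponse<T> {
  data: T[];
  warning?: string; // e.g. "segment view not materialized" or "unknown segmentId"
}

type RenderState<T> =
  | { kind: "data"; rows: T[] }
  | { kind: "banner"; message: string };

// Decide whether a dashboard panel shows rows or an informational banner.
function renderState<T>(res: SegmentScopedResponse<T>): RenderState<T> {
  if (res.warning) return { kind: "banner", message: res.warning };
  return { kind: "data", rows: res.data };
}
```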
Every dashboard in the platform ships with two header-bar buttons that turn the current view into a report artefact: Export and Save as Report.

Export works unconditionally; no cron wiring is required. Clicking Export renders the artefact server-side and streams the file back to the browser immediately.

Save as Report also creates the template + schedule immediately, but the resulting ReportSchedule only fires on cadence once `/api/cron/tick` is being invoked by an external scheduler. During pilot / initial deployment this is usually not wired. Use the Run Now button on the saved template in `/settings/reports` for on-demand delivery until you follow EventBridge Setup (optional).
Submitting creates a ReportTemplate and a ReportSchedule. When the cron is wired (AWS EventBridge → /api/cron/tick), the schedule runs on cadence and delivers artefacts to every destination. Until then, trigger delivery from /settings/reports using Run Now, or call POST /api/v1/reports/templates/[id]/run-now directly. View the persisted template at /settings/reports.
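Until the cron is wired, on-demand delivery can also be triggered programmatically via the run-now endpoint named above. A hedged sketch; the base URL and bearer-token auth are placeholders for your deployment, and only the path follows the docs:

```typescript
// Build the run-now URL for a saved report template.
function runNowUrl(baseUrl: string, templateId: string): string {
  return `${baseUrl}/api/v1/reports/templates/${templateId}/run-now`;
}

// Trigger delivery of the template's artefacts to its destinations.
async function runReportNow(
  baseUrl: string,
  templateId: string,
  apiKey: string
): Promise<Response> {
  return fetch(runNowUrl(baseUrl, templateId), {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
  });
}
```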