Documentation Index
Fetch the complete documentation index at: https://docs.kaireonai.com/llms.txt
Use this file to discover all available pages before exploring further.
See also: Reports REST API reference for request/response shapes, status codes, and error semantics.
- Data sources — a template references one or more registered sources (offer performance, channel effectiveness, anomaly candidates, and more). Each fetches a tabular result scoped to the current tenant.
- Narrative — optionally, the LLM narrative engine takes the section data and produces an executive summary, per-section paragraphs, and 3–5 key takeaways.
- Formats — each template picks one or more formats (PDF, CSV, Markdown, HTML). Each format renders a single artifact containing every section plus the narrative.
- Delivery — for scheduled runs and `/run-now` invocations, each destination configured on the schedule receives the report through its notification provider: email gets every artifact as an attachment, Slack/Teams get the narrative plus a deep link to the run, and webhooks receive the full base64 payloads inline.
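Taken together, a template can be pictured as one object tying sources, formats, narrative, and schedule together. The sketch below is illustrative; the field names are assumptions, not the actual schema:

```typescript
// Illustrative template shape (field names are assumptions, not the real schema).
interface ReportTemplate {
  name: string;
  sources: { key: string; params?: { windowDays?: number; topN?: number } }[];
  formats: ("pdf" | "csv" | "markdown" | "html")[];
  narrative: { enabled: boolean; promptGuidance?: string };
  schedule?: { cron: string; timezone: string; destinations: string[] };
}

// A template that renders two sections as PDF + CSV every Monday morning.
const weeklyPerformance: ReportTemplate = {
  name: "Weekly offer performance",
  sources: [
    { key: "offer_performance", params: { windowDays: 7, topN: 10 } },
    { key: "revenue_trend", params: { windowDays: 7 } },
  ],
  formats: ["pdf", "csv"],
  narrative: { enabled: true },
  schedule: { cron: "0 8 * * 1", timezone: "UTC", destinations: ["email"] },
};
```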
Built-in data sources
| Key | Description |
|---|---|
| `offer_performance` | Top-N active offers with impressions, conversions, revenue, conversion rate |
| `channel_effectiveness` | Impressions + conversions by channel, with effectiveness % |
| `selection_frequency` | How often each offer was eligible, scored, selected, plus avg rank |
| `anomaly_candidates` | Current-vs-baseline divergences with info/warning/critical severity |
| `why_not_ranked` | For a specific offer: times eligible, filtered by contact policy, filtered by qualification, scored too low, selected |
| `decision_traces_summary` | Funnel averages (candidate → qualified → suppressed → policy → scored → final) |
| `funnel` | Offer counts at each catalog stage |
| `revenue_trend` | Daily revenue for N days, zero-filled |
| `daily_trend` | Impressions + conversions per day for N days |
| `budget_burn` | Allocation vs spent, remaining, burn rate per allocation |
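The zero-filling noted for `revenue_trend` and `daily_trend` means every day in the window appears even when nothing was recorded. A minimal sketch of that behavior (UTC dates, simplified keys; not the actual implementation):

```typescript
// Sketch: expand a sparse map of day -> revenue into a dense, zero-filled
// series covering the last `days` days ending at `end`. Illustrative only.
function zeroFillDaily(
  byDay: Record<string, number>,
  days: number,
  end: Date,
): { day: string; revenue: number }[] {
  const out: { day: string; revenue: number }[] = [];
  for (let i = days - 1; i >= 0; i--) {
    const d = new Date(end.getTime() - i * 86_400_000); // step back i days
    const key = d.toISOString().slice(0, 10); // YYYY-MM-DD
    out.push({ day: key, revenue: byDay[key] ?? 0 });
  }
  return out;
}
```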
Built-in formats
| Format | MIME | Notes |
|---|---|---|
| `pdf` | application/pdf | Rendered server-side via `@react-pdf/renderer`. Cover page + section tables + narrative. |
| `csv` | text/csv | Single document; each section is preceded by a `# Section: …` divider comment. |
| `markdown` | text/markdown | Title, exec summary, key takeaways, per-section tables. |
| `html` | text/html | Self-contained HTML with inline CSS; survives email clients that strip remote assets. |
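The CSV layout above (one document, `# Section: …` divider comments) can be sketched as follows; escaping is deliberately simplified and the function is illustrative, not the renderer's code:

```typescript
// Sketch: render every section into one CSV document, each preceded by a
// "# Section: …" divider comment. Cell escaping is intentionally omitted.
function renderCsv(
  sections: { title: string; header: string[]; rows: string[][] }[],
): string {
  const lines: string[] = [];
  for (const s of sections) {
    lines.push(`# Section: ${s.title}`);
    lines.push(s.header.join(","));
    for (const row of s.rows) lines.push(row.join(","));
    lines.push(""); // blank line between sections
  }
  return lines.join("\n");
}
```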
Narrative engine
- Uses the `ai` SDK (v6) through the tenant-configured AI provider (Anthropic, OpenAI, Google Gemini, Amazon Bedrock, or Ollama).
- Reads provider + model + credentials from the encrypted platform settings vault (category `ai`) — the same place the AI assistant reads from.
- Input sampling caps any section at 50 rows. When exceeded, the prompt receives the top 10 + bottom 10 rows plus an explicit truncation note so the model cannot fabricate details it never saw.
- Failure-tolerant: LLM provider errors or unparseable output produce an empty-narrative result with a surfaced `error` field; the caller decides whether to emit a narrativeless report or fail the run.
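The sampling rule can be sketched as a small helper. The thresholds mirror the docs, but the function itself is an assumption, not the engine's code:

```typescript
// Sketch: cap a section at 50 rows for the prompt. When over the cap, keep
// the top 10 + bottom 10 and attach an explicit truncation note.
function sampleForPrompt<T>(rows: T[], cap = 50): { rows: T[]; note?: string } {
  if (rows.length <= cap) return { rows };
  return {
    rows: [...rows.slice(0, 10), ...rows.slice(-10)],
    note: `Truncated: showing 20 of ${rows.length} rows (top 10 + bottom 10).`,
  };
}
```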
Template lifecycle
- Create — at `/settings/reports` → New Report.
- Compose — pick data sources and per-source params (window in days).
- Format & narrative — multi-select format(s), toggle AI narrative, optionally append prompt guidance.
- Schedule — pick notification destinations (configured under Settings → Integrations → Notifications), set a cron expression + timezone, save the schedule.
- Preview — the editor’s preview pane hits `/preview` every 600ms after form changes, rendering sections + narrative + artifact metadata without persisting a run or dispatching.
- Run — Run now executes immediately. Scheduled runs fire when `nextRunAt` elapses and `/api/cron/tick` is invoked (by EventBridge when wired, or manually via `curl` in the meantime).
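The scheduled-run condition reduces to a due-check inside the cron tick handler. A minimal sketch, assuming each schedule carries a `nextRunAt` timestamp (the shape is an assumption):

```typescript
// Sketch: when the cron tick fires, a schedule is due once its nextRunAt
// has elapsed. The real handler would also run the report and advance nextRunAt.
interface Schedule {
  id: string;
  nextRunAt: Date;
}

function dueSchedules(schedules: Schedule[], now: Date): Schedule[] {
  return schedules.filter((s) => s.nextRunAt.getTime() <= now.getTime());
}
```

If the tick is never invoked, `dueSchedules` is never called, which is exactly why schedules sit dormant without an external scheduler.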
Run Now is unconditional. Clicking Run Now in the UI (or invoking the POST `/api/v1/reports/templates/[id]/run-now` route) fires the runner immediately, persists a report run record, and dispatches to every configured destination. It does not depend on EventBridge, `CRON_TOKEN`, or any background scheduler.

Scheduled runs are conditional on the cron being invoked. Without an external scheduler hitting `/api/cron/tick`, schedules sit dormant — they still show `nextRunAt`, but the runner is never called. During pilot / initial deployment this is usually the case; use Run Now until you’re ready to wire the cron. See EventBridge Setup for the optional automation path.

Run history
Every run creates a report run record with status (`pending`, `running`, `completed`, `completed_with_errors`, `failed`), `durationMs`, the full `sectionsData` used, the narrator’s output, every rendered artifact (base64 inline), and per-destination delivery results.
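The run record can be sketched as a type; field names beyond those quoted above (`status`, `durationMs`, `sectionsData`) are assumptions for illustration:

```typescript
// Sketch of a run record. Only status/durationMs/sectionsData come from the
// docs; the remaining field names are assumed, not the actual schema.
type RunStatus =
  | "pending"
  | "running"
  | "completed"
  | "completed_with_errors"
  | "failed";

interface ReportRun {
  status: RunStatus;
  durationMs: number;
  sectionsData: Record<string, unknown[]>; // tabular result per section
  narrative: { summary: string } | null; // null when narration failed or was off
  artifacts: { format: string; base64: string }[]; // rendered inline
  deliveries: { destination: string; ok: boolean; error?: string }[];
}

// Helper: the three states a run can end in.
const isTerminal = (s: RunStatus): boolean =>
  s === "completed" || s === "completed_with_errors" || s === "failed";
```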
Artifacts are downloadable individually at `/api/v1/reports/runs/[id]/artifacts/[format]`.
Deployment notes
Reports share the same EventBridge → `/api/cron/tick` trigger as alert
evaluation. No new infrastructure is required to get started —
preview, Run Now, and ad-hoc API invocations work immediately. Enable
the one-minute EventBridge rule per the
EventBridge setup guide when you’re
ready for scheduled runs to fire automatically (optional during pilot).
Dependencies
- `@react-pdf/renderer` for server-side PDF rendering.
- `cron-parser` for cron-expression parsing + next-run computation.
- `ai` SDK (v6) for LLM narration (already present for the AI assistant).