The Reports subsystem lets operators turn platform analytics into
scheduled executive summaries delivered through the same notification
providers used for alerts (Slack, Teams, webhook, ops email).
Every report run flows through the same pipeline:
- Data sources — a template references one or more registered
sources (offer performance, channel effectiveness, anomaly
candidates, and more). Each fetches a tabular result scoped to the
current tenant.
- Narrative — optionally, the LLM narrative engine takes the
section data and produces an executive summary, per-section
paragraphs, and 3–5 key takeaways.
- Formats — each template picks one or more formats (PDF, CSV,
Markdown, HTML). Each format renders a single artifact containing
every section plus the narrative.
- Delivery — for scheduled runs and /run-now invocations, each destination configured on the schedule receives the report through its notification provider: email gets every artifact as an attachment, Slack/Teams get the narrative plus a deep link to the run, and webhook receives full base64 payloads inline.
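The four stages above can be sketched as a single orchestration function. This is an illustrative sketch only — `ReportSection`, `Artifact`, `runReport`, and the other names here are assumptions, not the actual module API:

```typescript
// Sketch of the report pipeline: sources -> narrative -> formats -> delivery.
// All names are illustrative; the real runner lives elsewhere in src/lib/reports.

interface ReportSection {
  key: string;                        // e.g. "offer_performance"
  columns: string[];
  rows: Record<string, unknown>[];
}

interface Narrative {
  executiveSummary: string;
  takeaways: string[];                // 3-5 key takeaways
}

interface Artifact {
  format: "pdf" | "csv" | "markdown" | "html";
  mime: string;
  base64: string;                     // artifacts are stored base64-inline
}

async function runReport(
  fetchSections: () => Promise<ReportSection[]>,
  narrate: (sections: ReportSection[]) => Promise<Narrative | null>,
  render: (
    format: Artifact["format"],
    sections: ReportSection[],
    narrative: Narrative | null,
  ) => Artifact,
  formats: Artifact["format"][],
): Promise<Artifact[]> {
  const sections = await fetchSections();        // tenant-scoped tabular data
  const narrative = await narrate(sections);     // may be null if the LLM fails
  return formats.map((f) => render(f, sections, narrative));
}
```

Delivery would then fan each artifact out to the schedule's destinations; that step is omitted here.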
Built-in data sources
| Key | Description |
|---|---|
| offer_performance | Top-N active offers with impressions, conversions, revenue, conversion rate |
| channel_effectiveness | Impressions + conversions by channel, with effectiveness % |
| selection_frequency | How often each offer was eligible, scored, selected, plus avg rank |
| anomaly_candidates | Current-vs-baseline divergences with info/warning/critical severity |
| why_not_ranked | For a specific offer: times eligible, filtered by contact policy, filtered by qualification, scored too low, selected |
| decision_traces_summary | Funnel averages (candidate → qualified → suppressed → policy → scored → final) |
| funnel | Offer counts at each catalog stage |
| revenue_trend | Daily revenue for N days, zero-filled |
| daily_trend | Impressions + conversions per day for N days |
| budget_burn | Allocation vs spent, remaining, burn rate per allocation |
Data sources are an extension point. Add new ones by creating
src/lib/reports/data-sources/<name>.ts, implementing the
ReportDataSource interface, and importing the module from
src/lib/reports/data-sources/register.ts.
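A new source might look roughly like the following. The shape of `ReportDataSource` shown here is an assumption for illustration — check the real interface in `src/lib/reports/data-sources/` before implementing; the `top_segments` source and its fields are hypothetical:

```typescript
// Hypothetical shape of the data-source contract; the real ReportDataSource
// interface may differ. Each source returns a tabular, tenant-scoped result.

interface DataSourceResult {
  columns: string[];
  rows: Record<string, unknown>[];
}

interface ReportDataSource {
  key: string;    // referenced by templates
  label: string;  // shown in the template editor
  fetch(tenantId: string, params: { windowDays?: number }): Promise<DataSourceResult>;
}

// Illustrative example: src/lib/reports/data-sources/top-segments.ts
const topSegments: ReportDataSource = {
  key: "top_segments",
  label: "Top segments by conversions",
  async fetch(tenantId, params) {
    const windowDays = params.windowDays ?? 30;
    // Replace this stub with a real tenant-scoped query.
    const rows = [{ segment: "vip", conversions: 42, windowDays, tenantId }];
    return { columns: ["segment", "conversions"], rows };
  },
};
```

The module would then be imported from `src/lib/reports/data-sources/register.ts` so the source appears in the template editor.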
| Format | MIME | Notes |
|---|---|---|
| pdf | application/pdf | Rendered server-side via @react-pdf/renderer. Cover page + section tables + narrative. |
| csv | text/csv | Single document; each section is preceded by a # Section: … divider comment. |
| markdown | text/markdown | Title, exec summary, key takeaways, per-section tables. |
| html | text/html | Self-contained HTML with inline CSS; survives email clients that strip remote assets. |
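As a concrete illustration of the CSV layout, a minimal renderer could look like this. The function name and escaping rules are a sketch, not the production renderer, which may handle quoting and encoding differently:

```typescript
// Sketch of the CSV format: one document, each section preceded by a
// "# Section: ..." divider comment. Escaping is simplified (quotes, commas,
// and newlines only).

function csvEscape(v: unknown): string {
  const s = String(v ?? "");
  return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
}

function renderCsv(
  sections: { key: string; columns: string[]; rows: Record<string, unknown>[] }[],
): string {
  return sections
    .map((sec) =>
      [
        `# Section: ${sec.key}`,
        sec.columns.join(","),
        ...sec.rows.map((r) => sec.columns.map((c) => csvEscape(r[c])).join(",")),
      ].join("\n"),
    )
    .join("\n\n");
}
```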
Narrative engine
- Uses the ai v6 SDK through the tenant-configured AI provider (Anthropic, OpenAI, Google Gemini, Amazon Bedrock, or Ollama).
- Reads provider + model + credentials from the encrypted
PlatformSetting vault (category ai) — the same place the AI
assistant reads from.
- Input sampling caps any section at 50 rows. When exceeded, the prompt
receives the top 10 + bottom 10 rows plus an explicit truncation note
so the model cannot fabricate details it never saw.
- Failure-tolerant: LLM provider errors or unparseable output produce an empty-narrative result with a surfaced error field; the caller decides whether to emit a narrative-less report or fail the run.
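The sampling rule above is simple enough to sketch directly. This is an illustrative implementation of the behavior described (50-row cap, top 10 + bottom 10, explicit truncation note); the function name and exact note wording are assumptions:

```typescript
// Input sampling for the narrative prompt: sections over 50 rows are reduced
// to the top 10 + bottom 10 plus a truncation note, so the model cannot
// fabricate details about rows it never saw.

const MAX_ROWS = 50;

function sampleForPrompt<T>(rows: T[]): { rows: T[]; note: string | null } {
  if (rows.length <= MAX_ROWS) return { rows, note: null };
  const sampled = [...rows.slice(0, 10), ...rows.slice(-10)];
  return {
    rows: sampled,
    note: `Showing 20 of ${rows.length} rows (top 10 and bottom 10); ` +
      `${rows.length - 20} rows omitted.`,
  };
}
```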
Template lifecycle
- Create — at /settings/reports → New Report.
- Compose — pick data sources and per-source params (window in days).
- Format & narrative — multi-select format(s), toggle AI narrative,
optionally append prompt guidance.
- Schedule — pick notification destinations (configured under
Settings → Integrations → Notifications), set a cron expression +
timezone, save the schedule.
- Preview — the editor’s preview pane hits /preview every 600ms after form changes, rendering sections + narrative + artifact metadata without persisting a run or dispatching.
- Run — Run Now executes immediately. Scheduled runs fire when nextRunAt elapses and /api/cron/tick is invoked (by EventBridge when wired, or manually via curl in the meantime).
Run Now is unconditional. Clicking Run Now in the UI (or calling
POST /api/v1/reports/templates/[id]/run-now) fires the runner
immediately, persists a ReportRun, and dispatches to every configured
destination. It does not depend on EventBridge, CRON_TOKEN, or any
background scheduler.

Scheduled runs are conditional on the cron being invoked. Without an
external scheduler hitting /api/cron/tick, schedules sit dormant — they
still show nextRunAt, but the runner is never called. During pilot /
initial deployment this is usually the case; use Run Now until you’re
ready to wire the cron. See EventBridge Setup for the optional
automation path.
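Conceptually, each tick just selects schedules whose nextRunAt has elapsed and hands them to the runner. The sketch below shows that due-check only; the field names are assumptions, and the real handler also recomputes nextRunAt (via cron-parser) after each run:

```typescript
// Conceptual sketch of the due-schedule check performed on each cron tick.
// A schedule is due when its precomputed nextRunAt is at or before "now".

interface ReportSchedule {
  id: string;
  cron: string;       // e.g. "0 8 * * MON"
  timezone: string;   // e.g. "America/New_York"
  nextRunAt: Date;
}

function dueSchedules(schedules: ReportSchedule[], now: Date): ReportSchedule[] {
  return schedules.filter((s) => s.nextRunAt.getTime() <= now.getTime());
}
```

This is why schedules sit dormant without an external trigger: nextRunAt is stored and visible, but nothing evaluates the comparison until /api/cron/tick runs.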
Run history
Every run creates a ReportRun row with status (pending, running,
completed, completed_with_errors, failed), durationMs, the full
sectionsData used, the narrator’s output, every rendered artifact
(base64 inline), and per-destination delivery results.
Artifacts are downloadable individually at
/api/v1/reports/runs/[id]/artifacts/[format].
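Because artifacts are stored base64-inline on the run (and delivered that way to webhooks), a consumer can decode them directly. A minimal Node sketch — the payload field names here are assumptions about the shape, not a documented contract:

```typescript
// Decode an inline artifact from a run payload or webhook delivery.
// Field names are illustrative.

function decodeArtifact(artifact: { format: string; base64: string }): Buffer {
  return Buffer.from(artifact.base64, "base64");
}
```

A webhook receiver could then write the buffer to disk (e.g. `report.${artifact.format}`) or forward it to storage.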
Deployment notes
Reports share the same EventBridge → /api/cron/tick trigger as alert
evaluation. No new infrastructure is required to get started —
preview, Run Now, and ad-hoc API invocations work immediately. Enable
the one-minute EventBridge rule per the
EventBridge setup guide when you’re
ready for scheduled runs to fire automatically (optional during pilot).
Dependencies
- @react-pdf/renderer — server-side PDF rendering.
- cron-parser — cron-expression parsing + next-run computation.
- ai v6 SDK — LLM narration (already present for the AI assistant).