

Alert rules monitor platform metrics and trigger notifications when thresholds are breached. Each rule defines a metric, comparison operator, threshold, time window, and destinations. See Platform — Alert Rules for the operator guide and Notifications API for configuring destinations. All endpoints require tenant context and RBAC:
  • viewer / editor / admin can list and read
  • editor / admin can create, update, and delete
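
The role rules above can be sketched as a small check (illustrative only; the constant and function names here are assumptions, not the platform's actual RBAC middleware):

```python
# Roles allowed per action class, as listed above.
READ_ROLES = {"viewer", "editor", "admin"}
WRITE_ROLES = {"editor", "admin"}

def can_read(role: str) -> bool:
    """viewer, editor, and admin can list and read alert rules."""
    return role in READ_ROLES

def can_write(role: str) -> bool:
    """Only editor and admin can create, update, and delete."""
    return role in WRITE_ROLES
```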

GET /api/v1/alerts

List all alert rules for the tenant.

Response

[
  {
    "id": "clx...",
    "tenantId": "my-tenant",
    "name": "High Error Rate",
    "metric": "degraded_scoring_rate",
    "operator": "gt",
    "threshold": 0.05,
    "windowMinutes": 5,
    "channels": [
      "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
    ],
    "cooldownMinutes": 60,
    "enabled": true,
    "lastFiredAt": null,
    "status": "ok",
    "createdAt": "2026-04-17T12:00:00.000Z",
    "updatedAt": "2026-04-17T12:00:00.000Z"
  }
]

POST /api/v1/alerts

Create a new alert rule. Editor or Admin.

Request Body

Field             Type      Required  Description
name              string    Yes       Alert rule name
metric            string    Yes       See Supported metrics
operator          string    Yes       One of: gt, lt, gte, lte, eq
threshold         number    Yes       Threshold value to compare against
windowMinutes     number    No        Observation window in minutes (default: 5)
channels          array     Yes       Destinations — see Channel shapes
cooldownMinutes   number    No        Minimum time between consecutive fires (default: 60)
enabled           boolean   No        Whether the rule is active (default: true)
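
The five comparison operators map onto threshold checks in the obvious way. A minimal sketch (function and dict names are illustrative, not server code):

```python
import operator

# Map the documented operator strings to Python's comparison functions.
OPERATORS = {
    "gt": operator.gt,
    "lt": operator.lt,
    "gte": operator.ge,
    "lte": operator.le,
    "eq": operator.eq,
}

def breaches(metric_value: float, op: str, threshold: float) -> bool:
    """Return True when the observed value breaches the rule's threshold."""
    return OPERATORS[op](metric_value, threshold)
```

For example, a rule with operator gt and threshold 0.05 fires when the observed value is strictly greater than 0.05.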

Supported metrics

Metric                  Unit             Description
acceptance_rate         0–1              Positive outcomes / impressions
ctr                     0–1              Clicks / impressions
revenue                 currency units   Sum of conversion values
selection_frequency     0–1              Decisions with ≥1 selected offer / total traces
latency_p99             milliseconds     p99 of decision latency
degraded_scoring_rate   0–1              Traces with degradedScoring=true / total
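
The ratio metrics follow directly from their definitions. A sketch of two of them, assuming raw counts and trace records are available for the window (the field names here are assumptions, not the platform's trace schema):

```python
def acceptance_rate(positive_outcomes: int, impressions: int) -> float:
    """Positive outcomes divided by impressions; 0.0 when there is no traffic."""
    return positive_outcomes / impressions if impressions else 0.0

def degraded_scoring_rate(traces: list) -> float:
    """Fraction of traces in the window flagged degradedScoring=true."""
    if not traces:
        return 0.0
    degraded = sum(1 for t in traces if t.get("degradedScoring"))
    return degraded / len(traces)
```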

Channel shapes

The channels array accepts three shapes (mix and match allowed):
// 1. Provider UUID (preferred — references a NotificationProvider)
"aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"

// 2. { providerId } — same semantics, explicit key
{ "providerId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" }

// 3. Legacy { type, target } — direct webhook / email dispatch
{ "type": "webhook", "target": "https://hooks.slack.com/services/..." }
{ "type": "email",   "target": "ops@example.com" }
Configure destinations under Settings → Integrations → Notifications or via POST /api/v1/notifications/providers.
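
A sketch of how the three shapes might be normalized into one internal form (purely illustrative; this is not the server's actual parser):

```python
def normalize_channel(channel):
    """Accept a bare UUID string, { providerId }, or legacy { type, target }."""
    if isinstance(channel, str):
        # Shape 1: bare provider UUID.
        return {"providerId": channel}
    if "providerId" in channel:
        # Shape 2: explicit { providerId } — same semantics as shape 1.
        return {"providerId": channel["providerId"]}
    # Shape 3: legacy direct dispatch via { type, target }.
    return {"type": channel["type"], "target": channel["target"]}
```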

Example

curl -X POST https://playground.kaireonai.com/api/v1/alerts \
  -H "Content-Type: application/json" \
  -H "X-Requested-With: XMLHttpRequest" \
  -H "X-Tenant-Id: my-tenant" \
  -d '{
    "name": "p99 Latency Spike",
    "metric": "latency_p99",
    "operator": "gt",
    "threshold": 500,
    "windowMinutes": 10,
    "cooldownMinutes": 30,
    "channels": [
      "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
    ]
  }'

Response (201)

Returns the created alert rule.

GET /api/v1/alerts/:id

Get a single alert rule by ID.

PUT /api/v1/alerts/:id

Update an existing alert rule. All fields are optional. Editor or Admin.

DELETE /api/v1/alerts/:id

Delete an alert rule. Editor or Admin. Returns 204 No Content.

Rule lifecycle status

The status field reflects the outcome of the most recent evaluation:
Status               Meaning
ok                   Last evaluation did not trigger
fired                Last evaluation triggered and at least one destination accepted dispatch
cooldown             Triggered, but still inside cooldownMinutes since lastFiredAt
delivery_failed      Triggered, but every destination returned a failure
unsupported_metric   metric is not recognized by the evaluator
The evaluator runs on every cron tick — see Cron for the /api/cron/tick entry point.
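
The status transitions above can be sketched as a single decision function evaluated per tick (a minimal sketch under the assumption that the trigger result and dispatch outcomes are already known; names are illustrative, not evaluator internals):

```python
from datetime import datetime, timedelta

def next_status(metric_known: bool, triggered: bool, last_fired_at,
                cooldown_minutes: int, now: datetime,
                any_dispatch_ok: bool) -> str:
    """Derive the rule's status field from one evaluation pass."""
    if not metric_known:
        return "unsupported_metric"
    if not triggered:
        return "ok"
    if last_fired_at and now - last_fired_at < timedelta(minutes=cooldown_minutes):
        return "cooldown"
    return "fired" if any_dispatch_ok else "delivery_failed"
```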