

A freshly created algorithmModel row does nothing until an operator advances it through four orthogonal lifecycle dimensions. This page explains what those dimensions are, what the safe defaults look like, and the explicit sequence for taking a model from creation to scoring real customer requests.

The four lifecycle controls at a glance

| Dimension | Field | Default | What it gates |
| --- | --- | --- | --- |
| Operational status | status | "draft" | Whether /recommend will even consider this model. status: "draft" is invisible to live scoring. |
| Registry lifecycle | registryStatus | "draft" | The promotion ladder: draft → shadow → challenger → champion → archived. Only champion is the default scorer for its registry family; shadow records scores silently. |
| Learning cadence | autoLearn + learnMode + learnSchedule | false / "none" / null | Whether the model keeps improving after creation. Bandits and online-learners ignore these and always learn continuously; tabular models need them explicitly turned on. See Learning cadence. |
| Outcome interpretation | outcomeWeights | null (→ weight = 1 for positive, weight = −1 for negative) | How /respond outcome types map to training signal. Misconfiguration can invert learning — see the warning below. |
A model in status: "active", registryStatus: "draft" is “operationally live but not a champion” — it can be referenced by name from a decision flow’s score node, but it isn’t the default scorer for its family. A model in status: "draft", registryStatus: "champion" is impossible to construct via the API — the promote endpoint refuses to advance a draft-status model. These two axes are deliberately separate so operators can stage operational rollouts independently of model-evaluation lifecycle decisions.
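The promote-endpoint guard described above can be sketched as a small validation function. This is illustrative Python, not the platform's implementation; the forward-only rule is an assumption beyond what the docs state, while the draft-status refusal and ladder order come straight from the text.

```python
# Registry promotion ladder, in order (from the lifecycle table above).
LADDER = ["draft", "shadow", "challenger", "champion", "archived"]

def can_promote(status: str, registry_status: str, to_status: str) -> bool:
    """Sketch of the promote guard: the endpoint refuses to advance a
    draft-status model, and (assumed here) transitions only move forward."""
    if status != "active":
        return False  # draft-status models are never advanced
    return LADDER.index(to_status) > LADDER.index(registry_status)

# An active shadow model may enter the A/B as a challenger:
print(can_promote("active", "shadow", "challenger"))  # True
# The "impossible to construct" combination: draft status never reaches champion:
print(can_promote("draft", "draft", "champion"))      # False
```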
Out-of-the-box defaults are intentionally inert. Every new model row starts as status: "draft", registryStatus: "draft", autoLearn: false, learnMode: "none", outcomeWeights: null. There is no automatic “go live” path. This is by design — you should never wake up to find a model you forgot about scoring production traffic.

What happens when you POST a model with no overrides

curl -X POST https://playground.kaireonai.com/api/v1/algorithm-models \
  -H "Content-Type: application/json" \
  -d '{
    "key": "my-new-model",
    "name": "My New Model",
    "modelType": "gradient_boosted"
  }'
The persisted row will be:
{
  "id": "...",
  "key": "my-new-model",
  "name": "My New Model",
  "modelType": "gradient_boosted",
  "status": "draft",
  "registryStatus": "draft",
  "autoLearn": false,
  "learnMode": "none",
  "learnSchedule": null,
  "outcomeWeights": null,
  "metrics": {},
  "metricsHistory": [],
  "modelState": {},
  "predictors": [],
  "trainingSamples": 0,
  "lastTrainedAt": null,
  "lastLearnedAt": null
}
This model:
  • ❌ Is invisible to /recommend (filtered out by the status: "active" predicate).
  • ❌ Is not a champion for any registry family (registryStatus: "draft").
  • ❌ Will not retrain on schedule (autoLearn: false).
  • ❌ Has no learned state, no metrics, no AUC.
  • ✅ Exists in the database and can be inspected via GET /algorithm-models/{id}.
It’s a placeholder. Nothing more.
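The "inert by default" property can be checked mechanically. A minimal sketch, assuming the field names from the persisted row above; `is_inert` is a hypothetical helper, not a platform API:

```python
def is_inert(model: dict) -> bool:
    """True when a row still sits at the conservative defaults on all
    four lifecycle dimensions (illustrative check only)."""
    return (model["status"] == "draft"
            and model["registryStatus"] == "draft"
            and model["autoLearn"] is False
            and model["learnMode"] == "none"
            and model["outcomeWeights"] is None)

fresh_row = {
    "status": "draft", "registryStatus": "draft",
    "autoLearn": False, "learnMode": "none", "outcomeWeights": None,
}
print(is_inert(fresh_row))  # True
```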

The four-step path to live champion

To turn the inert row into a model that actually scores production traffic, an operator does four explicit things — and they correspond exactly to the four lifecycle dimensions above.

Step 1 — Activate operationally

Set status: "active". This makes the model visible to /recommend and to the registry-promote logic. You can do this on creation by passing "status": "active" in the POST body, or via PUT later:
curl -X PUT https://playground.kaireonai.com/api/v1/algorithm-models/$MODEL_ID \
  -H "Content-Type: application/json" \
  -d '{ "status": "active" }'
After this step the model is operational but still inert from a scoring standpoint — no decision flow refers to it yet, and it isn’t the registry champion.

Step 2 — Promote through the registry

Move the model through the lifecycle: draft → shadow → challenger → champion. Each transition is enforced by POST /algorithm-models/{id}/promote and writes an AuditLog row. The “one champion per family” invariant means only one model in each registryFamily can sit at champion at a time — promoting a new one auto-demotes the old.
# Inspect candidates on real traffic without affecting decisions
curl -X POST https://playground.kaireonai.com/api/v1/algorithm-models/$MODEL_ID/promote \
  -H "Content-Type: application/json" \
  -d '{ "toStatus": "shadow", "metricsSnapshot": { "offline_auc": 0.78 } }'

# Enter the A/B as a challenger (traffic split governed by an Experiment row)
curl -X POST https://playground.kaireonai.com/api/v1/algorithm-models/$MODEL_ID/promote \
  -H "Content-Type: application/json" \
  -d '{ "toStatus": "challenger" }'

# Win the experiment, become the default scorer for the family
curl -X POST https://playground.kaireonai.com/api/v1/algorithm-models/$MODEL_ID/promote \
  -H "Content-Type: application/json" \
  -d '{ "toStatus": "champion", "family": "credit_propensity" }'
See Experiments — shadow vs champion/challenger for the full registry lifecycle invariants (auto-rollback guard, one-champion-per-family rule, audit-log row written on every transition). Alternative: instead of going through the registry, you can wire the model into a specific decision flow’s score node by its key. The decision-flow engine looks up the score node’s modelKey directly, bypassing the registry-champion resolution. Use this for per-flow specialization (e.g. “this flow’s credit propensity is bayesian-v3 even though the default credit family champion is gbm-v7”).
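The one-champion-per-family invariant can be illustrated with an in-memory sketch. This is not the platform's code: `demoted_status` is an assumption (the docs say the old champion is auto-demoted but not which status it lands in), and the real endpoint also writes an AuditLog row per transition.

```python
def promote_to_champion(models: list, model_id: str, family: str,
                        demoted_status: str = "challenger") -> None:
    """Promote one model to champion for `family`, auto-demoting any
    existing champion in that family (sketch of the documented invariant)."""
    for m in models:
        if m.get("registryFamily") == family and m["registryStatus"] == "champion":
            m["registryStatus"] = demoted_status  # assumed landing status
    target = next(m for m in models if m["id"] == model_id)
    target["registryFamily"] = family
    target["registryStatus"] = "champion"

models = [
    {"id": "gbm-v7", "registryFamily": "credit_propensity", "registryStatus": "champion"},
    {"id": "bayes-v3", "registryFamily": "credit_propensity", "registryStatus": "challenger"},
]
promote_to_champion(models, "bayes-v3", "credit_propensity")
champions = [m["id"] for m in models if m["registryStatus"] == "champion"]
print(champions)  # ['bayes-v3'] -- exactly one champion survives
```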

Step 3 — Enable learning (or accept stasis)

For tabular models, learning is off by default. Without flipping the toggle, your model will keep producing the same scores forever:
# Nightly retrain over the last 30 days of interactions
curl -X PUT https://playground.kaireonai.com/api/v1/algorithm-models/$MODEL_ID \
  -H "Content-Type: application/json" \
  -d '{
    "autoLearn": true,
    "learnMode": "scheduled",
    "learnSchedule": "24h"
  }'
Bandits, online-learners, and Bayesian-with-priors do NOT need this — their continuous-update path is hardcoded in the respond handler and runs on every outcome regardless of autoLearn. See Learning cadence for the full per-algorithm cadence table.
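The learning-cadence split reduces to a one-line predicate. A sketch, assuming these modelType keys for the always-learning families (the real key strings may differ; see the per-algorithm cadence table):

```python
# Assumed modelType keys whose continuous-update path is hardcoded
# in the respond handler; tabular types fall through to autoLearn.
ALWAYS_LEARNING = {"bandit", "online_learner", "bayesian_with_priors"}

def learns_after_creation(model_type: str, auto_learn: bool) -> bool:
    """Bandits and online-learners update on every outcome regardless of
    autoLearn; tabular models only learn once autoLearn is flipped on."""
    return model_type in ALWAYS_LEARNING or auto_learn

print(learns_after_creation("bandit", False))            # True
print(learns_after_creation("gradient_boosted", False))  # False
```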

Step 4 — Configure outcome weights

outcomeWeights is a JSON map from outcome-type key to a signed numeric weight. The default behavior — when outcomeWeights is null — falls back to +1 for any outcome classified as positive and −1 for any classified as negative. That’s almost always wrong for nuanced workloads.
curl -X PUT https://playground.kaireonai.com/api/v1/algorithm-models/$MODEL_ID \
  -H "Content-Type: application/json" \
  -d '{
    "outcomeWeights": {
      "convert":      1.0,
      "click":        0.3,
      "renewed":      1.2,
      "unsubscribed": -1.5,
      "complaint":    -2.0,
      "no_action":     0.0
    }
  }'
Misconfigured outcome weights silently invert your learning. If outcomeWeights is null but your most common positive outcome key isn’t classified "positive" in outcome_types, the respond handler logs a warning (“no explicit weight for outcome X; using default”) and treats it as a neutral signal. Repeat this 10,000 times and your bandit’s posteriors lock onto whichever offer happens to NOT be your business’s best one. Always set explicit weights for the outcomes your business actually cares about.
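The fallback chain and the silent-inversion trap can be sketched as follows. This mirrors the behavior described above in illustrative Python; `resolve_weight` is a hypothetical name, not the respond handler's actual code.

```python
def resolve_weight(outcome_key, outcome_weights, classification):
    """Sketch of the documented fallback: explicit weight wins, else
    +1/-1 from the outcome_types classification, else neutral 0.0
    after a warning -- the silent-inversion trap."""
    if outcome_weights is not None and outcome_key in outcome_weights:
        return outcome_weights[outcome_key]
    if classification == "positive":
        return 1.0
    if classification == "negative":
        return -1.0
    print(f"no explicit weight for outcome {outcome_key}; using default")
    return 0.0

# Explicit weights behave as configured:
print(resolve_weight("complaint", {"complaint": -2.0}, "negative"))  # -2.0
# The trap: outcomeWeights null + an unclassified key trains on nothing:
print(resolve_weight("convert", None, None))  # 0.0
```

Repeated across thousands of outcomes, that neutral 0.0 is what lets a bandit's posteriors drift away from the offer your business actually values.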

Reading the lifecycle of an existing model

GET /algorithm-models/{id} returns everything you need to inspect a model’s lifecycle position. Useful field combinations:
| If you see… | It means… |
| --- | --- |
| status: "draft" | Model exists but is invisible to /recommend. |
| status: "active", registryStatus: "draft" | Operational but never promoted; only used if a flow’s score node references its key directly. |
| status: "active", registryStatus: "shadow" | Scoring silently — recorded into decision_traces.scoringResults[].shadowScores but never affecting decisions. |
| status: "active", registryStatus: "champion" | Live default scorer for its registry family. Real customer requests are hitting this model. |
| lastLearnedAt: null, trainingSamples: 0 | Model has never learned — neither offline retrain nor online update has fired. Most common cause: autoLearn: false. |
| lastTrainedAt set but lastLearnedAt null | Impossible by construction — every train pass writes both. If you see this, file a bug. |
| lastLearnedAt advancing but lastTrainedAt null | Bandit or online-learner doing incremental updates with no offline retrain pathway. Correct for those types. |
| metricsHistory: [] but trainingSamples > 0 | Either a bandit / online-learner type (which don’t accumulate AUC snapshots), or a tabular model whose previous retrains hit the offline path before the lastTrainedAt bookkeeping fix landed (see PROOF_BUNDLE entry for “Model training visibility — platform-wide fix”). |
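Those field combinations translate directly into a small inspection helper. A sketch only, assuming the status and registryStatus values listed above; `lifecycle_summary` is a hypothetical name for a function you might run over GET /algorithm-models/{id} responses:

```python
def lifecycle_summary(model: dict) -> str:
    """One-line reading of a model row's lifecycle position
    (mirrors the field combinations in the table above)."""
    if model["status"] == "draft":
        return "invisible to /recommend"
    rs = model["registryStatus"]
    if rs == "draft":
        return "operational but never promoted; only scores via a direct modelKey reference"
    if rs == "shadow":
        return "scoring silently; never affects decisions"
    if rs == "champion":
        return "live default scorer for its registry family"
    return rs

print(lifecycle_summary({"status": "active", "registryStatus": "shadow"}))
# scoring silently; never affects decisions
```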

What the platform does NOT do automatically

To prevent surprises, the platform deliberately does none of the following:
  • ❌ Activate models on creation. You must set status: "active" explicitly.
  • ❌ Promote models to champion. Even an active model never becomes the default scorer until you POST /promote.
  • ❌ Enable auto-learning. Tabular models stay frozen until you flip autoLearn: true.
  • ❌ Infer outcome weights. The default +1/−1 mapping is a fallback, not a recommendation.
  • ❌ Train on creation. Even gradient-boosted with autoLearn: true waits for the first cron tick after learnSchedule elapses; if you want a one-off immediate retrain, call POST /algorithm-models/{id}/train.
If you want any of these to happen, configure them — every dimension is independently controllable, every default is conservative.

Bulk operations

Setting up several models at once (e.g. shadow-mode rollout of a model family) is supported but requires the same per-model explicit configuration. The platform does not have a “bulk go-live” endpoint and is unlikely to add one — each model going live should be a deliberate, audited decision. For programmatic setup, the recommended pattern is:
# 1. Create as draft
NEW_ID=$(curl -sX POST .../algorithm-models -d '{"key":"...","name":"...","modelType":"gradient_boosted"}' | jq -r .id)

# 2. Configure outcome weights + learning
curl -X PUT .../algorithm-models/$NEW_ID -d '{
  "outcomeWeights": { "convert": 1.0, "click": 0.3, "unsubscribed": -1.5 },
  "autoLearn": true,
  "learnMode": "scheduled",
  "learnSchedule": "24h"
}'

# 3. Trigger a one-off retrain so the model isn't empty at activation
curl -X POST .../algorithm-models/$NEW_ID/train

# 4. Activate operationally
curl -X PUT .../algorithm-models/$NEW_ID -d '{ "status": "active" }'

# 5. Promote through the registry — usually starting with shadow
curl -X POST .../algorithm-models/$NEW_ID/promote -d '{"toStatus": "shadow"}'
After enough shadow-mode evidence (compare shadowScores in decision_traces against the current champion’s scores), continue to challenger and then champion.
See also: Learning cadence | Algorithm Models API | Experiments — shadow vs champion/challenger | Decision Traces — provenance deep-dive