Documentation Index
Fetch the complete documentation index at: https://docs.kaireonai.com/llms.txt
Use this file to discover all available pages before exploring further.
A freshly-created algorithmModel row does nothing until an operator advances it through four orthogonal lifecycle dimensions. This page explains what those dimensions are, what the safe defaults look like, and the explicit sequence for taking a model from creation to scoring real customer requests.
The four lifecycle controls at a glance
| Dimension | Field | Default | What it gates |
|---|---|---|---|
| Operational status | status | "draft" | Whether /recommend will even consider this model. status: "draft" is invisible to live scoring. |
| Registry lifecycle | registryStatus | "draft" | The promotion ladder: draft → shadow → challenger → champion → archived. Only champion is the default scorer for its registry family. shadow records scores silently. |
| Learning cadence | autoLearn + learnMode + learnSchedule | false / "none" / null | Whether the model keeps improving after creation. Bandits and online-learners ignore these and always learn continuously; tabular models need them explicitly turned on. See Learning cadence. |
| Outcome interpretation | outcomeWeights | null (→ weight=1 for positive, weight=−1 for negative) | How /respond outcome types map to training signal. Misconfiguration can invert learning — see the warning below. |
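The inert defaults in this table can be pictured as a record plus the two predicates that gate visibility — a minimal Python sketch using the field names above (the record shape is illustrative, not the exact API schema):

```python
# Sketch of the inert defaults from the table above. Field names come
# from this page; the record shape itself is an illustration.
NEW_MODEL_DEFAULTS = {
    "status": "draft",          # invisible to /recommend
    "registryStatus": "draft",  # bottom rung of the promotion ladder
    "autoLearn": False,
    "learnMode": "none",
    "learnSchedule": None,
    "outcomeWeights": None,     # falls back to +1 positive / -1 negative
}

def is_visible_to_recommend(model):
    """/recommend only considers operationally active models."""
    return model.get("status") == "active"

def is_family_champion(model):
    """Only a champion is the default scorer for its registry family."""
    return model.get("registryStatus") == "champion"

assert not is_visible_to_recommend(NEW_MODEL_DEFAULTS)
assert not is_family_champion(NEW_MODEL_DEFAULTS)
```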
A model in status: "active", registryStatus: "draft" is “operationally live but not a champion” — it can be referenced by name from a decision flow’s score node, but it isn’t the default scorer for its family. A model in status: "draft", registryStatus: "champion" is impossible to construct via the API — the promote endpoint refuses to advance a draft-status model. These two axes are deliberately separate so operators can stage operational rollouts independently of model-evaluation lifecycle decisions.
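Why the draft-status champion cannot exist can be sketched as a guard in the promote path — illustrative logic only, not the platform's actual implementation (the sketch stops the ladder at champion and omits archived):

```python
# Illustrative guard, not platform code: promote checks operational
# status before advancing the registry ladder, so a draft-status model
# can never reach champion.
LADDER = ["draft", "shadow", "challenger", "champion"]

def promote(model):
    if model["status"] != "active":
        raise ValueError("promote refused: model is not status 'active'")
    step = LADDER.index(model["registryStatus"])
    if step == len(LADDER) - 1:
        raise ValueError("already champion")
    model["registryStatus"] = LADDER[step + 1]
    return model

active = {"status": "active", "registryStatus": "draft"}
assert promote(active)["registryStatus"] == "shadow"
```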
Out-of-the-box defaults are intentionally inert. Every new model row starts as
status: "draft", registryStatus: "draft", autoLearn: false, learnMode: "none", outcomeWeights: null. There is no automatic “go live” path. This is by design — you should never wake up to find a model you forgot about scoring production traffic.
What happens when you POST a model with no overrides
- ❌ Is invisible to /recommend (filtered out by the status: "active" predicate).
- ❌ Is not a champion for any registry family (registryStatus: "draft").
- ❌ Will not retrain on schedule (autoLearn: false).
- ❌ Has no learned state, no metrics, no AUC.
- ✅ Exists in the database and can be inspected via GET /algorithm-models/{id}.
The four-step path to live champion
To turn the inert row into a model that actually scores production traffic, an operator does four explicit things — and they correspond exactly to the four lifecycle dimensions above.
Step 1 — Activate operationally
Set status: "active". This makes the model visible to /recommend and to the registry-promote logic. You can do this on creation by passing "status": "active" in the POST body, or via PUT later.
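A minimal sketch of the later-activation PUT, built with Python's urllib — the base URL is a hypothetical placeholder, and the request is constructed but not sent:

```python
import json
from urllib import request

BASE = "https://api.example.com"  # hypothetical host; substitute your deployment's

def activation_request(model_id):
    """Build (but do not send) the PUT that sets status: "active"."""
    body = json.dumps({"status": "active"}).encode()
    return request.Request(
        f"{BASE}/algorithm-models/{model_id}",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )

req = activation_request("model-123")
assert req.get_method() == "PUT"
assert req.full_url.endswith("/algorithm-models/model-123")
```

Send it with `urllib.request.urlopen(req)` plus whatever authentication your deployment requires, or translate the same shape to curl or your HTTP client of choice.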
Step 2 — Promote through the registry
Move the model through the lifecycle: draft → shadow → challenger → champion. Each transition is enforced by POST /algorithm-models/{id}/promote and writes an AuditLog row. The “one champion per family” invariant means only one model in each registryFamily can sit at champion at a time — promoting a new one auto-demotes the old.
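The auto-demotion side of the invariant can be sketched as follows — illustrative only; in particular, the state the old champion lands in is an assumption, since this page only says it is demoted:

```python
def promote_to_champion(models, family, model_id):
    """Enforce one champion per registryFamily (illustrative sketch)."""
    for m in models:
        if m["registryFamily"] == family and m["registryStatus"] == "champion":
            m["registryStatus"] = "challenger"  # demotion target is an assumption
    for m in models:
        if m["id"] == model_id:
            m["registryStatus"] = "champion"

models = [
    {"id": "gbm-v7", "registryFamily": "credit", "registryStatus": "champion"},
    {"id": "bayesian-v3", "registryFamily": "credit", "registryStatus": "challenger"},
]
promote_to_champion(models, "credit", "bayesian-v3")
assert sum(m["registryStatus"] == "champion" for m in models) == 1
```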
An active model that is not the champion can still be referenced directly from a flow’s score node by its key. The decision-flow engine looks up the score node’s modelKey directly, bypassing the registry-champion resolution. Use this for per-flow specialization (e.g. “this flow’s credit propensity is bayesian-v3 even though the default credit family champion is gbm-v7”).
Step 3 — Enable learning (or accept stasis)
For tabular models, learning is off by default. Without flipping autoLearn to true, your model will keep producing the same scores forever. See Learning cadence for the full per-algorithm cadence table.
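The per-type rule from the cadence table condenses to one predicate — a sketch in which both the algorithm-type field name and the type strings for the always-on learners are assumptions, not the API's enum values:

```python
# Bandits and online learners always learn; tabular models learn only
# when autoLearn is on. "algorithmType" and the type strings below are
# assumptions for this sketch.
ALWAYS_LEARNING = {"bandit", "online-learner"}

def will_keep_learning(model):
    if model["algorithmType"] in ALWAYS_LEARNING:
        return True  # these types ignore the autoLearn toggle
    return bool(model.get("autoLearn"))

assert will_keep_learning({"algorithmType": "bandit"})
assert not will_keep_learning({"algorithmType": "tabular", "autoLearn": False})
assert will_keep_learning({"algorithmType": "tabular", "autoLearn": True})
```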
Step 4 — Configure outcome weights
outcomeWeights is a JSON map from outcome-type key to a signed numeric weight. The default behavior — when outcomeWeights is null — falls back to +1 for any outcome classified as positive and −1 for any classified as negative. That’s almost always wrong for nuanced workloads.
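The fallback rule, and why it flattens nuance, in a short sketch — the outcome-type keys here are invented for illustration:

```python
def training_signal(outcome_type, is_positive, outcome_weights):
    """Map an outcome to a signed training weight, per the fallback rule."""
    if outcome_weights and outcome_type in outcome_weights:
        return outcome_weights[outcome_type]
    return 1.0 if is_positive else -1.0  # null fallback: +1 / -1

# A nuanced map: a chargeback should hurt far more than a soft decline.
weights = {"purchase": 1.0, "soft_decline": -0.2, "chargeback": -5.0}
assert training_signal("chargeback", False, weights) == -5.0
assert training_signal("chargeback", False, None) == -1.0  # fallback flattens severity
```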
Reading the lifecycle of an existing model
GET /algorithm-models/{id} returns everything you need to inspect a model’s lifecycle position. Useful field combinations:
| If you see… | It means… |
|---|---|
status: "draft" | Model exists but is invisible to /recommend. |
status: "active", registryStatus: "draft" | Operational but never promoted; only used if a flow’s score node references its key directly. |
status: "active", registryStatus: "shadow" | Scoring silently — recorded into decision_traces.scoringResults[].shadowScores but never affecting decisions. |
status: "active", registryStatus: "champion" | Live default scorer for its registry family. Real customer requests are hitting this model. |
lastLearnedAt: null, trainingSamples: 0 | Model has never learned — neither offline retrain nor online update has fired. Most common cause: autoLearn: false. |
lastTrainedAt set but lastLearnedAt null | Impossible by construction — every train pass writes both. If you see this, file a bug. |
lastLearnedAt advancing but lastTrainedAt null | Bandit or online-learner doing incremental updates with no offline retrain pathway. Correct for those types. |
metricsHistory: [] but trainingSamples > 0 | Either a bandit / online-learner type (which don’t accumulate AUC snapshots), or a tabular model whose previous retrains hit the offline path before the lastTrainedAt bookkeeping fix landed (see PROOF_BUNDLE entry for “Model training visibility — platform-wide fix”). |
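The table above can be folded into a small triage helper — an illustrative sketch over the documented fields, not an official client:

```python
def lifecycle_summary(model):
    """Summarize a model's position from GET /algorithm-models/{id} fields."""
    if model["status"] != "active":
        return "invisible to /recommend"
    rs = model["registryStatus"]
    if rs == "shadow":
        return "scoring silently (shadowScores only)"
    if rs == "champion":
        return "live default scorer for its registry family"
    return "active; reachable only via a score node's modelKey"

assert lifecycle_summary({"status": "draft", "registryStatus": "draft"}) == "invisible to /recommend"
assert "silently" in lifecycle_summary({"status": "active", "registryStatus": "shadow"})
```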
What the platform does NOT do automatically
To prevent surprises, the platform deliberately does none of the following:
- ❌ Activate models on creation. You must set status: "active" explicitly.
- ❌ Promote models to champion. Even an active model never becomes the default scorer until you POST /promote.
- ❌ Enable auto-learning. Tabular models stay frozen until you flip autoLearn: true.
- ❌ Infer outcome weights. The default +1/−1 mapping is a fallback, not a recommendation.
- ❌ Train on creation. Even gradient-boosted with autoLearn: true waits for the first cron tick after learnSchedule elapses; if you want a one-off immediate retrain, call POST /algorithm-models/{id}/train.
Bulk operations
Setting up several models at once (e.g. a shadow-mode rollout of a model family) is supported but requires the same per-model explicit configuration. The platform does not have a “bulk go-live” endpoint and is unlikely to add one — each model going live should be a deliberate, audited decision. For programmatic setup, the recommended pattern is: create each model, activate it, promote it to shadow, and, once shadow performance holds up (compare its shadowScores in decision_traces against the current champion’s scores), continue to challenger and then champion.
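That pattern can be scripted as a loop — a sketch in which create_model, update_model, and promote_model stand in for hypothetical client wrappers around POST /algorithm-models, PUT /algorithm-models/{id}, and POST /algorithm-models/{id}/promote:

```python
def shadow_rollout(specs, create_model, update_model, promote_model):
    """Create, activate, and shadow-promote each model spec (sketch)."""
    rolled_out = []
    for spec in specs:
        model_id = create_model(spec)                 # row starts fully inert
        update_model(model_id, {"status": "active"})  # step 1: activate
        promote_model(model_id)                       # step 2: draft -> shadow
        rolled_out.append(model_id)
    # Inspect shadowScores in decision_traces before promoting further.
    return rolled_out
```

Stopping at shadow is the point: the further challenger and champion promotions should follow only after a human has reviewed the shadow comparison.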
See also: Learning cadence | Algorithm Models API | Experiments — shadow vs champion/challenger | Decision Traces — provenance deep-dive