KaireonAI’s Score node can route through any of ten algorithm types via its `modelKey` field. They are NOT equivalent — each one optimizes for a different goal, requires different data, and produces a different style of score. This page walks through the choice operator-first.
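For orientation, here is a minimal sketch of what that selection looks like on a Score node. The object shape is assumed for illustration; `modelKey` and `method` are the fields this page and the Scoring Strategies page discuss, and the values shown are placeholders.

```typescript
// Minimal sketch of a Score node configuration (object shape assumed, not the exact schema).
// modelKey routes scoring to one of the ten algorithm types listed below; method decides
// how that score is used during ranking (see "Scoring Strategies").
const scoreNode = {
  method: "propensity",            // priority_weighted | propensity | formula
  modelKey: "logistic_regression", // one of the ten algorithm keys on this page
};
```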
Quick decision table
Use this when you know the business goal and want the algorithm (the same decisions are sketched as code after the table):

| Business goal | Recommended algorithm | Why |
|---|---|---|
| Day-1 launch, no interaction history | scorecard | Operator-defined rules → predictable, auditable, no training data needed. |
| Maximize conversion rate, have ≥ 1k labeled outcomes | logistic_regression or bayesian | Both produce calibrated probabilities. Logistic is faster on numeric features; Bayesian is more robust to small samples. |
| Maximize conversion rate, have ≥ 10k labeled outcomes | gradient_boosted | Tree ensembles capture interactions linear models miss. Best raw accuracy on tabular data. |
| Explore vs exploit with a small set of arms (≤ 50 offers) | thompson_bandit or epsilon_greedy | Both balance exploitation of known winners with exploration of uncertain arms. Thompson has stronger theoretical guarantees; ε-greedy is simpler. |
| Streaming traffic, can’t afford a retrain | online_learner | Updates weights on every interaction. Cheap inference. Lower ceiling than batch-trained models. |
| Need (user × offer) collaborative signal, not just per-offer | neural_cf | Two-tower embeddings learn that “users like Alice also responded to offer Y”. Needs enough interaction density. |
| Already have a production model in another stack | external_endpoint (HTTP) or onnx_imported (in-process ONNX) | Don’t retrain — call your existing service or load the model file. |
| Auditor / regulator needs to see exactly why an offer was scored | scorecard (or logistic_regression with low feature count) | Both produce per-feature contribution explanations. Tree ensembles do too via SHAP, but the value is harder to defend in writing. |
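The rows above can also be read as a small decision procedure. The helper below is a hypothetical sketch, not a KaireonAI API; it encodes the same thresholds (1k / 10k outcomes, 50 arms) purely to make the branching explicit. The neural_cf and auditability rows are omitted because they depend on qualitative signals.

```typescript
// Hypothetical helper mirroring the decision table above; not part of the product.
function recommendModelKey(opts: {
  labeledOutcomes: number;    // labeled conversion outcomes available for training
  offers: number;             // number of distinct offers ("arms")
  streamingOnly: boolean;     // true when you cannot afford a batch retrain
  hasExternalModel: boolean;  // a production model already exists in another stack
}): string {
  if (opts.hasExternalModel) return "external_endpoint";          // or "onnx_imported" to run in-process
  if (opts.streamingOnly) return "online_learner";
  if (opts.labeledOutcomes >= 10_000) return "gradient_boosted";
  if (opts.labeledOutcomes >= 1_000) return "logistic_regression"; // or "bayesian" for smaller samples
  if (opts.offers <= 50 && opts.labeledOutcomes > 0) return "thompson_bandit"; // or "epsilon_greedy"
  return "scorecard";         // day-1 launch: operator-defined rules, no training data needed
}
```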
The ten algorithm types
Each links to its dedicated page with fixture config, training, score interpretation, and pitfalls (the ten keys are also collected in a type sketch after the list):

- Scorecard — `scorecard` — weighted rules + sigmoid normalization. Operator-defined, fully transparent.
- Bayesian (Naive Bayes) — `bayesian` — Laplace-smoothed likelihoods, posterior probability.
- Logistic Regression — `logistic_regression` — weighted sum → sigmoid. Calibrated probabilities, linear feature interactions.
- Gradient Boosted Trees (GBT) — `gradient_boosted` — tree ensemble → sigmoid. Captures nonlinear interactions; SHAP-explainable.
- Thompson Sampling Bandit — `thompson_bandit` — Beta-Bernoulli posterior sampling. Theoretically optimal explore/exploit.
- Epsilon-Greedy Bandit — `epsilon_greedy` — exploit known winners, randomly explore with probability ε.
- Online Learner — `online_learner` — online SGD. Updates on every reward, no batch retrain.
- Neural Collaborative Filtering — `neural_cf` — two-tower MLP over user + item embeddings.
- External Endpoint — `external_endpoint` — HTTP POST to an operator-hosted scoring service. Async batch.
- ONNX Imported — `onnx_imported` — loads an ONNX model file. Async via onnxruntime-node.
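If you keep these keys in code, for example when building Score node configs programmatically, a union type is a cheap way to catch typos. The type below simply enumerates the ten keys listed above; it is a suggestion, with no claim that the SDK exports such a type.

```typescript
// The ten modelKey algorithm values from this page, as a TypeScript union. Illustrative only.
type AlgorithmKey =
  | "scorecard"
  | "bayesian"
  | "logistic_regression"
  | "gradient_boosted"
  | "thompson_bandit"
  | "epsilon_greedy"
  | "online_learner"
  | "neural_cf"
  | "external_endpoint"
  | "onnx_imported";
```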
Two non-obvious considerations
Algorithm choice interacts with the scoring strategy
The Score node’s `method` (`priority_weighted` / `propensity` / `formula`) controls how the model output is USED, not whether the model runs. The most common mistake is picking a great model but leaving `method: priority_weighted` — in which case the model never affects ranking. See Scoring Strategies for the full strategy decision guide.
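A concrete illustration of that mistake, with the same assumed object shape as the sketch near the top of this page:

```typescript
// Misconfigured: a model is attached, but ranking still uses hand-set priorities,
// so the model's output never influences which offer wins.
const misconfigured = {
  method: "priority_weighted",
  modelKey: "gradient_boosted",
};

// Corrected: switch the strategy so the model's propensity output drives ranking.
const corrected = {
  method: "propensity",
  modelKey: "gradient_boosted",
};
```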
“Per-offer evidence” matters more than total data volume
A scorecard with 5 rules beats a `gradient_boosted` model when the GBT was trained on 100 outcomes spread thin across 50 offers. The per-(customer, offer) signal density is what drives model quality, not the row count. Cold-start mitigations (the propensity score floor, the maturity ramp, propensity smoothing) buy you time, but they don’t replace data.
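A rough way to check this before reaching for a trained model is to count outcomes per offer rather than in total. The sketch below is a hypothetical check with an assumed outcome shape; the 50-per-offer threshold echoes the Week 3+ guidance in the next section.

```typescript
// Hypothetical readiness check: per-offer evidence, not total row count.
interface Outcome {
  offerId: string;
  converted: boolean;
}

function majorOffersReady(outcomes: Outcome[], minPerOffer = 50): boolean {
  const counts = new Map<string, number>();
  for (const o of outcomes) {
    counts.set(o.offerId, (counts.get(o.offerId) ?? 0) + 1);
  }
  // 100 outcomes spread across 50 offers fails this check regardless of the total.
  return counts.size > 0 && Array.from(counts.values()).every((n) => n >= minPerOffer);
}
```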
What to do when you’re starting fresh

- Week 1: Set `method: priority_weighted`. Pick winners by hand-set `priority` on each offer. Watch the response interactions accumulate.
- Week 2: Build a `scorecard` model with 5–10 rules from your domain expertise. Switch the Score node to `method: propensity`, `modelKey: <scorecard>`. Score now reflects the rules.
- Week 3+: Once you have ≥ 50 outcomes per major offer, train a `bayesian` or `logistic_regression` model offline (via the Algorithms → Train page) and swap `modelKey`. Compare against the scorecard in Studio’s Recommendation Preview before publishing.
- Month 2+: If you have ≥ 10k outcomes total and significant feature interactions, train `gradient_boosted`. Add it to `shadowModelKeys` first (so it scores in parallel without affecting ranking) and inspect the offline differential. Promote to `modelKey` only when offline metrics agree with the previous model (the whole progression is sketched as configs after this list).
- Always: keep the propensity score floor at `0.05` (the default) so any offer that accumulates negative-only outcomes can still earn an impression and prove itself.
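Put together, the rollout corresponds roughly to the following sequence of Score node configurations. The object shapes and the model key values (`scorecard_v1`, `bayesian_v1`, `gbt_v1`) are placeholders for illustration; only `method`, `modelKey`, `shadowModelKeys`, and `priority` come from this page.

```typescript
// Week 1: no model; hand-set priority per offer drives ranking.
const week1 = { method: "priority_weighted" };

// Week 2: operator-defined scorecard rules drive the propensity score.
const week2 = { method: "propensity", modelKey: "scorecard_v1" };

// Week 3+: swap in a trained model once each major offer has >= 50 outcomes.
const week3 = { method: "propensity", modelKey: "bayesian_v1" };

// Month 2+: shadow a gradient-boosted model before promoting it to modelKey.
const month2 = {
  method: "propensity",
  modelKey: "bayesian_v1",
  shadowModelKeys: ["gbt_v1"], // scores in parallel without affecting ranking
};
```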
Where the per-algorithm pages go deeper
Each algorithm page in `ai-ml/algorithms/` covers:

- Math — the actual update rule and inference formula.
- When to use — domain signals that favor or rule out this algorithm.
- Fixture config — minimal viable model state for testing.
- Training — what `train.ts` does, how outcomes flow back through it.
- Score interpretation — what the returned `score` value means.
- Pitfalls — common ways to misconfigure it.