Overview
KaireonAI’s adaptive learning system learns from every customer interaction to predict which offers each customer is most likely to engage with. Unlike batch-only ML systems, KaireonAI updates propensity estimates in real time — every impression, click, conversion, and dismissal immediately improves future recommendations.
The system uses a hierarchical architecture that shares learning across offers, categories, and channels while maintaining per-offer specialization.
How It Works
Customer Outcome Recorded (via Respond API)
│
▼
┌──────────────────────────────────────┐
│ Atomic Adaptation Updates │
│ │
│ offer:Auto Renewal → evidence +1 │
│ category:Retention → evidence +1 │
│ channel:Email → evidence +1 │
│ global → evidence +1 │
└──────────────────────────────────────┘
│
▼
Next Recommend API Call
│
▼
┌──────────────────────────────────────┐
│ Hierarchical Propensity Lookup │
│ │
│ 1. Offer level (if evidence >= 50) │
│ 2. Category level (if evidence >= 20)│
│ 3. Global level (if evidence >= 10) │
│ 4. Model score fallback │
│ 5. Default: 0.5 │
└──────────────────────────────────────┘
Default Propensity
New offers start with a default propensity of 0.5 — a neutral score that neither favors nor penalizes the offer. As evidence accumulates, the learned propensity replaces the default.
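The hierarchical lookup and default above reduce to a simple fallback chain. A minimal sketch (the function signature and field names are illustrative; the evidence thresholds are those shown in the diagram):

```python
def resolve_propensity(offer, category, global_, model_score=None):
    """Walk the hierarchy from most to least specific; each level
    must meet a minimum evidence count before it is trusted."""
    # 1. Offer level: needs >= 50 interactions
    if offer and offer["evidence"] >= 50:
        return offer["positiveRate"]
    # 2. Category level: needs >= 20 interactions
    if category and category["evidence"] >= 20:
        return category["positiveRate"]
    # 3. Global level: needs >= 10 interactions
    if global_ and global_["evidence"] >= 10:
        return global_["positiveRate"]
    # 4. ML model score, if a trained model is available
    if model_score is not None:
        return model_score
    # 5. Neutral default for a true cold start
    return 0.5

# A brand-new offer in a warm category falls back to the category rate:
print(resolve_propensity({"evidence": 3, "positiveRate": 1.0},
                         {"evidence": 40, "positiveRate": 0.22},
                         {"evidence": 900, "positiveRate": 0.15}))  # 0.22
```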
Evidence Blending
When an offer has some evidence but is still below the maturity threshold (fewer than 50 interactions), the system blends offer-level data with category-level priors:
propensity = (offerRate × offerEvidence + categoryRate × smoothingWeight)
/ (offerEvidence + smoothingWeight)
This gives new offers a warm start from their category’s average performance, rather than starting cold.
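A worked sketch of the blending formula (the smoothing weight of 20 is an assumed value for illustration, not a documented constant):

```python
def blended_propensity(offer_rate, offer_evidence, category_rate,
                       smoothing_weight=20):
    """Shrink the offer-level rate toward the category prior. The
    smoothing weight behaves like `smoothing_weight` pseudo-observations
    at the category rate, so thin offer data cannot dominate."""
    return ((offer_rate * offer_evidence + category_rate * smoothing_weight)
            / (offer_evidence + smoothing_weight))

# With zero offer evidence the result is exactly the category prior:
print(blended_propensity(0.0, 0, 0.18))  # 0.18
# With 20 interactions, the offer rate and prior are weighted equally:
print(blended_propensity(0.30, 20, 0.18))  # 0.24
```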
Maturity Levels
| Evidence | Status | Behavior |
|---|---|---|
| 0 | Cold start | Uses category prior or 0.5 default |
| 1-49 | Immature | Blends offer data with category prior |
| 50-199 | Maturing | Uses offer-level propensity directly |
| 200+ | Mature | Stable, reliable predictions |
PRIE Scoring
KaireonAI uses a weighted geometric mean of four factors (Propensity, Relevance, Impact, Emphasis) to produce a final priority score:
Score = P^wp × R^wr × I^wi × E^we
| Factor | Name | Range | Source | Default Weight |
|---|---|---|---|---|
| P | Propensity | 0–1 | Adaptive learning or ML model | 0.4 |
| R | Relevance | 0–1 | Channel match, recency, segment | 0.2 |
| I | Impact | 0–1 | Business value, margin, revenue | 0.3 |
| E | Emphasis | 0–1 | Offer priority (marketer lever) | 0.1 |
Weights must sum to 1.0. The geometric mean ensures:
A zero in any dimension eliminates the candidate (0^x = 0)
Default propensity (0.5) produces a baseline score of ~0.5
Each factor contributes proportionally to its weight
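These properties can be verified in a few lines (a sketch using the documented default weights):

```python
def prie_score(p, r, i, e, wp=0.4, wr=0.2, wi=0.3, we=0.1):
    """Weighted geometric mean of the four PRIE factors.
    The weights are the documented defaults and must sum to 1.0."""
    assert abs(wp + wr + wi + we - 1.0) < 1e-9, "weights must sum to 1.0"
    return (p ** wp) * (r ** wr) * (i ** wi) * (e ** we)

# All factors at 0.5 give exactly 0.5, since the weights sum to 1:
print(prie_score(0.5, 0.5, 0.5, 0.5))  # 0.5
# A zero in any dimension eliminates the candidate:
print(prie_score(0.9, 0.0, 0.8, 0.7))  # 0.0
```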
Weight Profiles
Configure PRIE weights on the Score node or via a Strategy Profile:
{
  "method": "formula",
  "formula": {
    "propensityWeight": 0.4,
    "relevanceWeight": 0.2,
    "impactWeight": 0.3,
    "emphasisWeight": 0.1
  }
}
Propensity-heavy (P=0.8, R=0.05, I=0.1, E=0.05): Model-driven — offers the AI predicts will perform best dominate.
Emphasis-heavy (P=0.1, R=0.1, I=0.1, E=0.7): Marketer-driven — offer priority determines ranking.
Impact-heavy (P=0.1, R=0.1, I=0.7, E=0.1): Revenue-driven — highest business value offers surface first.
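To see how a profile changes ranking, the sketch below scores two hypothetical offers under the propensity-heavy and emphasis-heavy profiles (offer names and factor values are invented for illustration):

```python
def score(factors, weights):
    """Weighted geometric-mean PRIE score for one offer (a sketch)."""
    s = 1.0
    for k in ("P", "R", "I", "E"):
        s *= factors[k] ** weights[k]
    return s

# Two hypothetical offers: one the model likes, one the marketer pinned.
offers = {
    "model-favorite":  {"P": 0.8, "R": 0.5, "I": 0.5, "E": 0.2},
    "marketer-pinned": {"P": 0.3, "R": 0.5, "I": 0.5, "E": 0.9},
}
profiles = {
    "propensity-heavy": {"P": 0.8, "R": 0.05, "I": 0.1, "E": 0.05},
    "emphasis-heavy":   {"P": 0.1, "R": 0.1,  "I": 0.1, "E": 0.7},
}
winners = {name: max(offers, key=lambda o: score(offers[o], w))
           for name, w in profiles.items()}
print(winners)
# {'propensity-heavy': 'model-favorite', 'emphasis-heavy': 'marketer-pinned'}
```

The same two candidates flip positions purely from the weight profile, which is the intended lever.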
ArbitrationProfile Weight Mapping
When using an Arbitration Profile, the weights JSON maps to PRIE as follows:
| ArbitrationProfile key | PRIE factor | Default |
|---|---|---|
| conversion | P (Propensity) | 0.4 |
| recency | R (Relevance) | 0.2 |
| margin | I (Impact) | 0.3 |
| fairness | E (Emphasis) | 0.1 |
Model Adaptation Table
Per-offer learning is stored in the model_adaptations table — not as a JSON blob, but as independent rows that support atomic concurrent updates:
| Field | Type | Description |
|---|---|---|
| scope | enum | global, category, offer, channel |
| scopeId | string | Entity ID (offerId, categoryId, etc.); null for global |
| positives | int | Count of positive outcomes |
| negatives | int | Count of negative outcomes |
| evidence | int | Total interactions tracked |
| positiveRate | float | Computed: positives / evidence |
| paused | boolean | When true, learning is frozen |
| predictorAucs | JSON | Per-predictor univariate AUC scores |
Each (modelId, scope, scopeId) combination gets its own row, updated atomically via INSERT ON CONFLICT UPDATE.
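A minimal sketch of this upsert pattern, using SQLite for illustration (the production database and exact SQL may differ; an empty string stands in for the null global scopeId here, since NULL values never conflict with each other):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE model_adaptations (
        modelId   TEXT,
        scope     TEXT,
        scopeId   TEXT,
        positives INTEGER NOT NULL DEFAULT 0,
        negatives INTEGER NOT NULL DEFAULT 0,
        evidence  INTEGER NOT NULL DEFAULT 0,
        PRIMARY KEY (modelId, scope, scopeId)
    )""")

def record_outcome(model_id, scope, scope_id, positive):
    """Insert the row if absent, otherwise bump its counters atomically."""
    conn.execute("""
        INSERT INTO model_adaptations
            (modelId, scope, scopeId, positives, negatives, evidence)
        VALUES (?, ?, ?, ?, ?, 1)
        ON CONFLICT (modelId, scope, scopeId) DO UPDATE SET
            positives = positives + excluded.positives,
            negatives = negatives + excluded.negatives,
            evidence  = evidence + 1
    """, (model_id, scope, scope_id, int(positive), int(not positive)))

# One conversion touches every level of the hierarchy:
for scope, scope_id in [("offer", "offer-auto-renewal"),
                        ("category", "cat-retention"),
                        ("channel", "email"),
                        ("global", "")]:
    record_outcome("model-1", scope, scope_id, positive=True)
record_outcome("model-1", "offer", "offer-auto-renewal", positive=False)

row = conn.execute("SELECT positives, negatives, evidence "
                   "FROM model_adaptations WHERE scope='offer'").fetchone()
print(row)  # (1, 1, 2)
```

Because each scope lives in its own row, concurrent outcomes for different offers never contend on the same record.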
Evidence Decay
To prevent stale historical patterns from dominating, the system applies exponential evidence decay daily:
Decay rate: 0.5% per day (evidence halves in ~139 days)
Applied by: GET /api/v1/cron/scheduled-retrains (cron job)
Effect: Recent interactions matter more than old ones
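The decay arithmetic can be checked directly (a sketch; only the 0.5% daily rate comes from the documentation):

```python
import math

DAILY_DECAY = 0.005  # 0.5% of remaining evidence per day

def decay_evidence(evidence, days):
    """Exponential evidence decay, as applied by the daily cron job."""
    return evidence * (1 - DAILY_DECAY) ** days

# Half-life: solve (1 - 0.005)^n = 0.5 for n
half_life = math.log(0.5) / math.log(1 - DAILY_DECAY)
print(round(half_life, 1))  # 138.3, i.e. evidence halves in roughly 139 days
```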
Predictor Auto-Activation
During batch training, each predictor’s univariate AUC is computed:
| AUC | Status | Meaning |
|---|---|---|
| < 0.52 | Inactive | No better than random; excluded from scoring |
| ≥ 0.52 | Active | Informative; contributes to propensity |
Predictor AUCs are stored in the global adaptation row and surfaced in the model detail API.
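Univariate AUC is the probability that a randomly chosen positive outcome is scored above a randomly chosen negative one. A self-contained sketch of the computation and the activation check (the implementation details are illustrative; only the 0.52 threshold is documented):

```python
def univariate_auc(scores, labels):
    """Probability that a random positive outranks a random negative,
    with ties counted as half a win (the rank-comparison form of AUC)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def is_active(auc, threshold=0.52):
    """Predictors scoring below the threshold are excluded from scoring."""
    return auc >= threshold

# A predictor that perfectly separates outcomes has AUC 1.0:
auc = univariate_auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])
print(auc, is_active(auc))  # 1.0 True
```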
Reset & Pause
Reset Offer Learning
When an offer was misconfigured (wrong QR rules, wrong audience), reset its learned state:
POST /api/v1/algorithm-models/{id}/reset-offer
{
  "offerId": "offer-auto-renewal",
  "resetTo": "category_prior",
  "reason": "Fixed qualification rules"
}
Options for resetTo:
category_prior — Fall back to category average (recommended)
global_prior — Fall back to tenant-wide average
zero — Full cold start (0.5 default)
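A sketch of issuing the reset call from Python (the base URL and bearer-token auth scheme are assumptions; substitute your deployment's values):

```python
import json
import urllib.request

def build_reset_request(base_url, api_key, model_id, offer_id,
                        reset_to="category_prior", reason=""):
    """Build the reset-offer POST request. The base URL and
    Authorization header are assumptions about the deployment."""
    body = json.dumps({"offerId": offer_id,
                       "resetTo": reset_to,
                       "reason": reason}).encode()
    return urllib.request.Request(
        f"{base_url}/api/v1/algorithm-models/{model_id}/reset-offer",
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST")

req = build_reset_request("https://api.example.com", "<token>",
                          "model-1", "offer-auto-renewal",
                          reason="Fixed qualification rules")
# Send with urllib.request.urlopen(req) when ready.
```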
Pause Learning
Freeze learning for an offer while investigating:
POST /api/v1/algorithm-models/{id}/reset-offer
{
  "action": "pause",
  "offerId": "offer-auto-renewal",
  "reason": "Investigating data quality"
}
Resume with "action": "resume".
Reset Category
Reset all offers in a category:
POST /api/v1/algorithm-models/{id}/reset-offer
{
  "scope": "category",
  "categoryId": "cat-retention",
  "reason": "Category restructure"
}
Scheduled Retraining
The cron endpoint GET /api/v1/cron/scheduled-retrains handles:
Schedule-based retraining: Models with learnSchedule (e.g., “1h”, “24h”, “7d”) are retrained when the interval elapses
Evidence-based retraining: Models are retrained when 100+ new outcomes accumulate, regardless of schedule
Evidence decay: Applied daily to all adaptation rows
Configure per model:
{
  "autoLearn": true,
  "learnMode": "both",
  "learnSchedule": "1h"
}
| learnMode | Behavior |
|---|---|
| none | No automatic learning |
| incremental | Online updates on every outcome (via Respond API) |
| scheduled | Batch retraining on schedule (via cron) |
| both | Incremental + scheduled (recommended) |
Attribution-Aware Learning
When a conversion outcome has attribution data, the system looks up the attribution credit for the specific offer. This enables weighted learning — an offer that contributed 33% to a conversion gets proportional credit, not full credit.
This prevents feedback inversion where offers that appear frequently (high impression count) get disproportionate positive signal from conversions they didn’t actually cause.
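A sketch of the weighted update (field names follow the model_adaptations table; the exact update rule is illustrative):

```python
def attribution_weighted_update(row, credit):
    """Credit a conversion proportionally to the offer's attribution
    share instead of counting it as a full positive. `credit` is the
    attribution fraction in [0, 1] for this specific offer."""
    row["positives"] += credit   # fractional credit, e.g. 0.33
    row["evidence"] += 1
    row["positiveRate"] = row["positives"] / row["evidence"]
    return row

# An offer with 33% attribution credit moves the rate only slightly:
row = {"positives": 10.0, "evidence": 100, "positiveRate": 0.10}
row = attribution_weighted_update(row, credit=0.33)
print(round(row["positiveRate"], 4))  # 0.1023
```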
Next Steps
Decision Flows Configure the Score node with PRIE weights and model selection.
Algorithm Models Create and manage ML models for propensity scoring.