Overview
KaireonAI’s adaptive learning system learns from every customer interaction to predict which offers each customer is most likely to engage with. Unlike batch-only ML systems, KaireonAI updates propensity estimates in real time: every impression, click, conversion, and dismissal immediately improves future recommendations. The system uses a hierarchical architecture that shares learning across offers, categories, and channels while maintaining per-offer specialization.

How It Works
Default Propensity
New offers start with a default propensity of 0.5, a neutral score that neither favors nor penalizes the offer. As evidence accumulates, the learned propensity replaces the default.

Evidence Blending
When an offer has some evidence but is still below the maturity threshold (50 interactions), the system blends offer-level data with category-level priors.

Maturity Levels
| Evidence | Status | Behavior |
|---|---|---|
| 0 | Cold start | Uses category prior or 0.5 default |
| 1-49 | Immature | Blends offer data with category prior |
| 50-199 | Maturing | Uses offer-level propensity directly |
| 200+ | Mature | Stable, reliable predictions |
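The blending behavior in the table can be sketched as a shrinkage estimator, where the category prior acts as pseudo-observations that fade out as offer-level evidence approaches the maturity threshold. The exact KaireonAI formula is not specified here, so this is an illustrative sketch under that assumption:

```python
def blended_propensity(positives, evidence, category_prior, maturity=50):
    """Illustrative blend of offer-level data with a category prior.

    Assumption: the prior contributes (maturity - evidence) pseudo-observations
    while evidence < maturity, so the blend shifts smoothly from prior to data.
    """
    if evidence == 0:
        return category_prior            # cold start: category prior (or 0.5)
    if evidence >= maturity:
        return positives / evidence      # maturing/mature: offer data alone
    prior_weight = maturity - evidence
    return (positives + prior_weight * category_prior) / (evidence + prior_weight)
```

For example, an offer with 8 positives over 10 interactions and a category prior of 0.3 blends to 0.4 rather than jumping straight to its raw 0.8 rate.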
PRIE Scoring Formula
KaireonAI uses a weighted geometric mean of four factors to produce a final priority score:

| Factor | Name | Range | Source | Default Weight |
|---|---|---|---|---|
| P | Propensity | 0–1 | Adaptive learning or ML model | 0.4 |
| R | Relevance | 0–1 | Channel match, recency, segment | 0.2 |
| I | Impact | 0–1 | Business value, margin, revenue | 0.3 |
| E | Emphasis | 0–1 | Offer priority (marketer lever) | 0.1 |
- A zero in any dimension eliminates the candidate (0^x = 0)
- Default propensity (0.5) produces a baseline score of ~0.5
- Each factor contributes proportionally to its weight
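With the default weights, the weighted geometric mean described above can be written directly:

```python
def prie_score(p, r, i, e, weights=(0.4, 0.2, 0.3, 0.1)):
    """Weighted geometric mean of the four PRIE factors:
    score = P^wP * R^wR * I^wI * E^wE, with weights summing to 1.
    A zero in any factor zeroes the whole score (0^w = 0)."""
    wp, wr, wi, we = weights
    return (p ** wp) * (r ** wr) * (i ** wi) * (e ** we)
```

This reproduces the properties listed above: all factors at the 0.5 default yield a baseline score of 0.5, and any zero factor eliminates the candidate.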
Weight Profiles
Configure PRIE weights on the Score node or via a Strategy Profile.

Arbitration Profile Weight Mapping

When using an Arbitration Profile, the weights JSON maps to PRIE as follows:
| ArbitrationProfile key | PRIE factor | Default |
|---|---|---|
| conversion | P (Propensity) | 0.4 |
| recency | R (Relevance) | 0.2 |
| margin | I (Impact) | 0.3 |
| fairness | E (Emphasis) | 0.1 |
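The mapping in the table can be sketched as a small translation helper; the function name is illustrative, but the key names and defaults follow the table:

```python
# Translate an Arbitration Profile's weights JSON into PRIE factor weights.
PROFILE_TO_PRIE = {
    "conversion": "P",
    "recency": "R",
    "margin": "I",
    "fairness": "E",
}

def to_prie_weights(profile_weights):
    """Map a profile dict like {"conversion": 0.6, ...} to PRIE weights,
    falling back to the documented defaults for any missing keys."""
    weights = {"P": 0.4, "R": 0.2, "I": 0.3, "E": 0.1}
    for key, factor in PROFILE_TO_PRIE.items():
        if key in profile_weights:
            weights[factor] = profile_weights[key]
    return weights
```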
Model Adaptation Table
Per-offer learning is stored in the `model_adaptations` table, not as a JSON blob but as independent rows that support atomic concurrent updates:
| Field | Type | Description |
|---|---|---|
| scope | enum | global, category, offer, channel |
| scopeId | string | Entity ID (offerId, categoryId, etc.) — null for global |
| positives | int | Count of positive outcomes |
| negatives | int | Count of negative outcomes |
| evidence | int | Total interactions tracked |
| positiveRate | float | Computed: positives / evidence |
| paused | boolean | When true, learning is frozen |
| predictorAucs | JSON | Per-predictor univariate AUC scores |
Each (modelId, scope, scopeId) combination gets its own row, updated atomically via `INSERT ... ON CONFLICT ... DO UPDATE`.
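The atomic per-row upsert pattern can be illustrated with SQLite (the production schema and SQL dialect may differ; column names follow the table above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE model_adaptations (
        modelId TEXT, scope TEXT, scopeId TEXT,
        positives INTEGER DEFAULT 0,
        negatives INTEGER DEFAULT 0,
        evidence  INTEGER DEFAULT 0,
        PRIMARY KEY (modelId, scope, scopeId)
    )
""")

def record_outcome(model_id, scope, scope_id, positive):
    """One atomic upsert per outcome: insert the row if new,
    otherwise increment counters in place."""
    conn.execute(
        """
        INSERT INTO model_adaptations (modelId, scope, scopeId, positives, negatives, evidence)
        VALUES (?, ?, ?, ?, ?, 1)
        ON CONFLICT (modelId, scope, scopeId) DO UPDATE SET
            positives = positives + excluded.positives,
            negatives = negatives + excluded.negatives,
            evidence  = evidence + 1
        """,
        (model_id, scope, scope_id, 1 if positive else 0, 0 if positive else 1),
    )
```

Because each update is a single statement keyed on the row's primary key, concurrent outcome writes for different offers never contend with each other.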
Evidence Decay
To prevent stale historical patterns from dominating, the system applies exponential evidence decay daily:

- Decay rate: 0.5% per day (evidence halves in ~139 days)
- Applied by: `GET /api/v1/cron/scheduled-retrains` (cron job)
- Effect: Recent interactions matter more than old ones
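The half-life figure follows directly from the decay rate, since evidence remaining after n days is evidence × 0.995^n:

```python
import math

DAILY_DECAY = 0.995  # 0.5% evidence decay per day

def decayed_evidence(evidence, days):
    """Evidence remaining after applying the daily decay for `days` days."""
    return evidence * DAILY_DECAY ** days

# Half-life: ln(0.5) / ln(0.995) ≈ 138.3 days, i.e. evidence halves in ~139 days.
half_life = math.log(0.5) / math.log(DAILY_DECAY)
```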
Predictor Auto-Activation
During batch training, each predictor’s univariate AUC is computed:

| AUC | Status | Meaning |
|---|---|---|
| < 0.52 | Inactive | No better than random — excluded from scoring |
| ≥ 0.52 | Active | Informative — contributes to propensity |
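A univariate AUC is simply the probability that the predictor ranks a random positive above a random negative (ties count half), so the activation check can be sketched as:

```python
def univariate_auc(scores, labels):
    """Rank-based AUC of a single predictor against binary outcomes."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return 0.5  # degenerate case: no better than random
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def is_active(auc, threshold=0.52):
    """Predictors at or above the threshold contribute to propensity scoring."""
    return auc >= threshold
```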
Reset & Pause
Reset Offer Learning
When an offer was misconfigured (wrong QR rules, wrong audience), reset its learned state. The resetTo option accepts:

- `category_prior` — Fall back to category average (recommended)
- `global_prior` — Fall back to tenant-wide average
- `zero` — Full cold start (0.5 default)
Pause Learning
Freeze learning for an offer while investigating; resume it with `"action": "resume"`.
Reset Category
Reset all offers in a category.

Scheduled Retraining
The cron endpoint `GET /api/v1/cron/scheduled-retrains` handles:
- Schedule-based retraining: Models with a learnSchedule (e.g., “1h”, “24h”, “7d”) are retrained when the interval elapses
- Evidence-based retraining: Models are retrained when 100+ new outcomes accumulate, regardless of schedule
- Evidence decay: Applied daily to all adaptation rows
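The two retrain triggers above can be sketched as a single check; the field names (learnSchedule, last trained timestamp) and the interval format are taken from the list above, while the helper itself is illustrative:

```python
from datetime import datetime, timedelta

INTERVALS = {"h": timedelta(hours=1), "d": timedelta(days=1)}

def parse_schedule(schedule):
    """Parse a learnSchedule like '1h', '24h', '7d' into a timedelta."""
    return int(schedule[:-1]) * INTERVALS[schedule[-1]]

def should_retrain(last_trained_at, schedule, new_outcomes, now=None):
    """True when the schedule interval has elapsed OR 100+ new outcomes
    have accumulated since the last training run."""
    now = now or datetime.utcnow()
    if schedule and now - last_trained_at >= parse_schedule(schedule):
        return True             # schedule-based trigger
    return new_outcomes >= 100  # evidence-based trigger, schedule or not
```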
| learnMode | Behavior |
|---|---|
| none | No automatic learning |
| incremental | Online updates on every outcome (via Respond API) |
| scheduled | Batch retraining on schedule (via cron) |
| both | Incremental + scheduled (recommended) |
Attribution-Aware Learning
When a conversion outcome has attribution data, the system looks up the attribution credit for the specific offer. This enables weighted learning: an offer that contributed 33% to a conversion gets proportional credit, not full credit. This prevents feedback inversion, where offers that appear frequently (high impression count) would otherwise get disproportionate positive signal from conversions they didn’t actually cause.

Next Steps
Decision Flows
Configure the Score node with PRIE weights and model selection.
Algorithm Models
Create and manage ML models for propensity scoring.