Overview

KaireonAI’s adaptive learning system learns from every customer interaction to predict which offers each customer is most likely to engage with. Unlike batch-only ML systems, KaireonAI updates propensity estimates in real time — every impression, click, conversion, and dismissal immediately improves future recommendations. The system uses a hierarchical architecture that shares learning across offers, categories, and channels while maintaining per-offer specialization.

How It Works

Customer Outcome Recorded (via Respond API)


┌──────────────────────────────────────┐
│  Atomic Adaptation Updates           │
│                                      │
│  offer:Auto Renewal  → evidence +1   │
│  category:Retention  → evidence +1   │
│  channel:Email       → evidence +1   │
│  global              → evidence +1   │
└──────────────────────────────────────┘


Next Recommend API Call


┌───────────────────────────────────────┐
│  Hierarchical Propensity Lookup       │
│                                       │
│  1. Offer level (if evidence >= 50)   │
│  2. Category level (if evidence >= 20)│
│  3. Global level (if evidence >= 10)  │
│  4. Model score fallback              │
│  5. Default: 0.5                      │
└───────────────────────────────────────┘
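The lookup order above can be sketched as a simple fallback chain. This is an illustrative Python sketch, not the actual implementation: the `rows` mapping, key shapes, and function name are assumptions, while the evidence thresholds (50 / 20 / 10) and the 0.5 default come from the diagram.

```python
def resolve_propensity(rows, offer_id, category_id, model_score=None):
    """Hierarchical propensity lookup with per-level evidence thresholds.

    `rows` maps (scope, scope_id) -> {"positiveRate": float, "evidence": int}.
    Falls through offer -> category -> global -> model score -> 0.5 default.
    """
    levels = [
        (("offer", offer_id), 50),
        (("category", category_id), 20),
        (("global", None), 10),
    ]
    for key, min_evidence in levels:
        row = rows.get(key)
        if row and row["evidence"] >= min_evidence:
            return row["positiveRate"]
    if model_score is not None:  # fall back to the ML model score
        return model_score
    return 0.5                   # neutral default for cold starts
```

Each level is consulted only when it has accumulated enough evidence to be trustworthy; otherwise the lookup falls through to the next, broader level.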

Default Propensity

New offers start with a default propensity of 0.5 — a neutral score that neither favors nor penalizes the offer. As evidence accumulates, the learned propensity replaces the default.

Evidence Blending

When an offer has some evidence but below the maturity threshold (50 interactions), the system blends offer-level data with category-level priors:
propensity = (offerRate × offerEvidence + categoryRate × smoothingWeight)
             / (offerEvidence + smoothingWeight)
This gives new offers a warm start from their category’s average performance, rather than starting cold.
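The blending formula above translates directly into a one-line helper. A minimal sketch; the function name is illustrative, and the smoothing weight acts as a pseudo-count whose actual value the system configures internally (20 is an assumed default here):

```python
def blended_propensity(offer_rate, offer_evidence, category_rate,
                       smoothing_weight=20):
    # Weighted average of the offer's observed rate and its category prior.
    # With zero offer evidence this returns the category rate exactly;
    # as evidence grows, the result converges to the offer's own rate.
    return (offer_rate * offer_evidence + category_rate * smoothing_weight) \
           / (offer_evidence + smoothing_weight)
```

For example, an offer with 10 interactions at an 80% positive rate in a category averaging 50% blends to (0.8×10 + 0.5×20) / 30 = 0.6.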

Maturity Levels

| Evidence | Status | Behavior |
|----------|--------|----------|
| 0 | Cold start | Uses category prior or 0.5 default |
| 1–49 | Immature | Blends offer data with category prior |
| 50–199 | Maturing | Uses offer-level propensity directly |
| 200+ | Mature | Stable, reliable predictions |
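The maturity bands map onto a straightforward classifier. A sketch using the thresholds from the table; the status labels are lowercased identifiers of my own choosing:

```python
def maturity_status(evidence):
    """Map an evidence count onto the maturity bands from the table above."""
    if evidence == 0:
        return "cold_start"
    if evidence < 50:
        return "immature"   # blended with category prior
    if evidence < 200:
        return "maturing"   # offer-level propensity used directly
    return "mature"         # stable, reliable predictions
```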

PRIE Scoring Formula

KaireonAI uses a weighted geometric mean of four factors to produce a final priority score:
Score = P^wp × R^wr × I^wi × E^we
| Factor | Name | Range | Source | Default Weight |
|--------|------|-------|--------|----------------|
| P | Propensity | 0–1 | Adaptive learning or ML model | 0.4 |
| R | Relevance | 0–1 | Channel match, recency, segment | 0.2 |
| I | Impact | 0–1 | Business value, margin, revenue | 0.3 |
| E | Emphasis | 0–1 | Offer priority (marketer lever) | 0.1 |
Weights must sum to 1.0. The geometric mean ensures:
  • A zero in any dimension eliminates the candidate (0^x = 0)
  • Default propensity (0.5) produces a baseline score of ~0.5
  • Each factor contributes proportionally to its weight
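The weighted geometric mean can be sketched in a few lines. The function name is illustrative; the formula and default weights come from the table above:

```python
def prie_score(p, r, i, e, wp=0.4, wr=0.2, wi=0.3, we=0.1):
    """Weighted geometric mean of the four PRIE factors."""
    assert abs((wp + wr + wi + we) - 1.0) < 1e-9, "weights must sum to 1.0"
    return (p ** wp) * (r ** wr) * (i ** wi) * (e ** we)
```

Because the weights sum to 1, scoring 0.5 on every factor yields exactly 0.5, and a zero in any single factor zeroes out the whole score.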

Weight Profiles

Configure PRIE weights on the Score node or via a Strategy Profile:
{
  "method": "formula",
  "formula": {
    "propensityWeight": 0.4,
    "relevanceWeight": 0.2,
    "impactWeight": 0.3,
    "emphasisWeight": 0.1
  }
}
  • Propensity-heavy (P=0.8, R=0.05, I=0.1, E=0.05): Model-driven — offers the AI predicts will perform best dominate.
  • Emphasis-heavy (P=0.1, R=0.1, I=0.1, E=0.7): Marketer-driven — offer priority determines ranking.
  • Impact-heavy (P=0.1, R=0.1, I=0.7, E=0.1): Revenue-driven — highest business value offers surface first.

ArbitrationProfile Weight Mapping

When using an Arbitration Profile, the weights JSON maps to PRIE as follows:
| ArbitrationProfile key | PRIE factor | Default |
|------------------------|-------------|---------|
| conversion | P (Propensity) | 0.4 |
| recency | R (Relevance) | 0.2 |
| margin | I (Impact) | 0.3 |
| fairness | E (Emphasis) | 0.1 |

Model Adaptation Table

Per-offer learning is stored in the model_adaptations table — not as a JSON blob, but as independent rows that support atomic concurrent updates:
| Field | Type | Description |
|-------|------|-------------|
| scope | enum | global, category, offer, channel |
| scopeId | string | Entity ID (offerId, categoryId, etc.); null for global |
| positives | int | Count of positive outcomes |
| negatives | int | Count of negative outcomes |
| evidence | int | Total interactions tracked |
| positiveRate | float | Computed: positives / evidence |
| paused | boolean | When true, learning is frozen |
| predictorAucs | JSON | Per-predictor univariate AUC scores |
Each (modelId, scope, scopeId) combination gets its own row, updated atomically via INSERT ... ON CONFLICT DO UPDATE.

Evidence Decay

To prevent stale historical patterns from dominating, the system applies exponential evidence decay daily:
  • Decay rate: 0.5% per day (evidence halves in ~139 days)
  • Applied by: GET /api/v1/cron/scheduled-retrains (cron job)
  • Effect: Recent interactions matter more than old ones
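The decay above is plain exponential decay, which can be sketched as follows. The function and constant names are illustrative; the 0.5% daily rate comes from the bullets above, and the half-life follows from ln(2) / −ln(0.995) ≈ 138 days:

```python
import math

DAILY_DECAY = 0.005  # 0.5% per day, per the configuration above

def decayed_evidence(evidence, days):
    """Evidence remaining after applying the daily decay `days` times."""
    return evidence * (1 - DAILY_DECAY) ** days

# Half-life implied by the decay rate (~138-139 days)
half_life = math.log(2) / -math.log(1 - DAILY_DECAY)
```

Because decay compounds daily, an offer that stops receiving traffic gradually loses its accumulated evidence and drifts back toward the category prior.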

Predictor Auto-Activation

During batch training, each predictor’s univariate AUC is computed:
| AUC | Status | Meaning |
|-----|--------|---------|
| < 0.52 | Inactive | No better than random — excluded from scoring |
| ≥ 0.52 | Active | Informative — contributes to propensity |
Predictor AUCs are stored in the global adaptation row and surfaced in the model detail API.
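Conceptually, activation is a threshold filter over the stored per-predictor AUCs. A minimal sketch; the function name and dict shape are assumptions, while the 0.52 cutoff comes from the table above:

```python
AUC_THRESHOLD = 0.52

def active_predictors(predictor_aucs):
    """Keep only predictors whose univariate AUC clears the activation cutoff.

    `predictor_aucs` mirrors the predictorAucs JSON field: name -> AUC.
    """
    return {name: auc for name, auc in predictor_aucs.items()
            if auc >= AUC_THRESHOLD}
```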

Reset & Pause

Reset Offer Learning

When an offer has been misconfigured (wrong qualification rules, wrong audience), reset its learned state:
POST /api/v1/algorithm-models/{id}/reset-offer
{
  "offerId": "offer-auto-renewal",
  "resetTo": "category_prior",
  "reason": "Fixed qualification rules"
}
Options for resetTo:
  • category_prior — Fall back to category average (recommended)
  • global_prior — Fall back to tenant-wide average
  • zero — Full cold start (0.5 default)

Pause Learning

Freeze learning for an offer while investigating:
POST /api/v1/algorithm-models/{id}/reset-offer
{
  "action": "pause",
  "offerId": "offer-auto-renewal",
  "reason": "Investigating data quality"
}
Resume with "action": "resume".

Reset Category

Reset all offers in a category:
POST /api/v1/algorithm-models/{id}/reset-offer
{
  "scope": "category",
  "categoryId": "cat-retention",
  "reason": "Category restructure"
}

Scheduled Retraining

The cron endpoint GET /api/v1/cron/scheduled-retrains handles:
  1. Schedule-based retraining: Models with learnSchedule (e.g., “1h”, “24h”, “7d”) are retrained when the interval elapses
  2. Evidence-based retraining: Models are retrained when 100+ new outcomes accumulate, regardless of schedule
  3. Evidence decay: Applied daily to all adaptation rows
Configure per model:
{
  "autoLearn": true,
  "learnMode": "both",
  "learnSchedule": "1h"
}
| learnMode | Behavior |
|-----------|----------|
| none | No automatic learning |
| incremental | Online updates on every outcome (via Respond API) |
| scheduled | Batch retraining on schedule (via cron) |
| both | Incremental + scheduled (recommended) |

Attribution-Aware Learning

When a conversion outcome has attribution data, the system looks up the attribution credit for the specific offer. This enables weighted learning — an offer that contributed 33% to a conversion gets proportional credit, not full credit. This prevents feedback inversion where offers that appear frequently (high impression count) get disproportionate positive signal from conversions they didn’t actually cause.
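The proportional-credit idea can be sketched as a fractional counter update. This is a speculative illustration of the described behavior, not the actual implementation: the field names are made up, and whether the evidence counter increments by 1 or by the credit fraction is an assumption here.

```python
def attributed_update(positives, evidence, credit):
    """Apply a conversion with fractional attribution credit.

    `credit` is the offer's attribution share of the conversion (e.g. 0.33).
    The positive count grows by that fraction instead of a full +1, so
    frequently shown offers cannot absorb full credit for conversions
    they only partially influenced.
    """
    return positives + credit, evidence + 1
```

Under this sketch, an offer with 10 positives over 40 interactions that earns 33% credit for a conversion moves to 10.33 positives over 41 interactions, and its positive rate barely rises.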

Next Steps

Decision Flows

Configure the Score node with PRIE weights and model selection.

Algorithm Models

Create and manage ML models for propensity scoring.