KaireonAI’s ranking engine computes a per-candidate score using a four-factor weighted composite called PRIE: Propensity, Relevance, Impact, Emphasis. This page documents the design rationale, the public-source literature each factor is grounded in, and why the four-factor structure is the appropriate level of expressiveness for a multi-objective recommendation system. This page exists so any reader — contributor, customer, partner, or auditor — can verify that PRIE is derived from public machine-learning literature and standard recommender-systems practice, not from any proprietary or confidential source.
What PRIE computes
For a candidate offer c, the PRIE score is:

score(c) = propensity(c)^Wp × relevance(c)^Wr × impact(c)^Wi × emphasis(c)^We
- propensity(c) ∈ [0, 1] is the predicted probability that the customer responds positively to candidate c. Computed from one or more learned models (Bayesian, logistic-regression, or gradient-boosted, depending on the active scoring strategy).
- relevance(c) ∈ [0, 1] is the contextual fit of c to the current request — channel match, recency match, segment match. Computed from request-time signals.
- impact(c) ∈ [0, 1] is the normalized business value of c — revenue, margin, or operator-defined value mapped into the unit interval.
- emphasis(c) ∈ [0, 1] is a manual priority lever that lets operators boost or suppress specific candidates without retraining models.
- Wp, Wr, Wi, We are the per-factor weights, with Wp + Wr + Wi + We = 1, configurable per RankingProfile (see Ranking Profiles API).
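The composite is small enough to sketch directly. The TypeScript below is a minimal illustration of the weighted-product form above, not the production implementation in platform/src/lib/ranking.ts; the type and function names are placeholders.

```ts
// Illustrative sketch of the PRIE composite. Names are placeholders;
// the production implementation lives in platform/src/lib/ranking.ts.
interface PrieFactors {
  propensity: number; // [0, 1] learned response probability
  relevance: number;  // [0, 1] request-time contextual fit
  impact: number;     // [0, 1] normalized business value
  emphasis: number;   // [0, 1] operator-set priority lever
}

interface PrieWeights {
  wp: number;
  wr: number;
  wi: number;
  we: number; // wp + wr + wi + we should sum to 1
}

function prieScore(f: PrieFactors, w: PrieWeights): number {
  // Weighted product: any zero factor with a nonzero weight zeroes the score.
  return (
    Math.pow(f.propensity, w.wp) *
    Math.pow(f.relevance, w.wr) *
    Math.pow(f.impact, w.wi) *
    Math.pow(f.emphasis, w.we)
  );
}
```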
Why four factors
A standard observation from the multi-criteria recommender-systems literature (Lin, Wu, Yu, Sun, RecSys 2019; Wikipedia: Multi-objective optimization) is that real-world recommendation requires more than a single learned-propensity score. A commercial recommendation system that ranks only by predicted response rate ignores three persistent operational realities:
- Same-customer-same-time context shifts. A customer’s response probability for a given offer depends on the channel, time of day, current campaign, recent interaction history, and other contextual signals that the offline-trained propensity model does not always see at request time. Giving these signals their own factor lets the runtime apply contextual nudges without retraining the model.
- Operator-set business value. Two offers with the same predicted response rate can have very different unit economics (margin, lifetime value, strategic priority). A scoring system that doesn’t surface a business-value axis forces operators to bake economic preferences into propensity model training data, which is brittle and slow to adjust.
- Manual operator override. Real campaigns have moments when an operator needs to immediately boost or suppress a specific candidate — for legal compliance, regulatory blackout, supply chain shock, or campaign launch. Without an explicit lever factor, operators have to deploy code or retrain models to make these adjustments; the lever factor lets them do it through configuration.
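As an illustration of that configuration path, an emphasis override could be expressed as a small payload like the sketch below. The shape is a hypothetical assumption for this page; consult the Ranking Profiles API for the actual schema.

```ts
// Hypothetical override shape, not the real Ranking Profiles API schema.
// Setting emphasis to 0 hard-suppresses the candidate under the
// multiplicative composite; values above the default boost it.
const emphasisOverride = {
  rankingProfile: "default",
  candidateId: "offer-4711", // illustrative candidate id
  emphasis: 0,               // e.g. regulatory blackout: suppress immediately
};
```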
Public-source citations per factor
Propensity
A learned probability of positive response. The general-purpose machine-learning literature is the source:
- Hosmer, Lemeshow, Sturdivant (2013), Applied Logistic Regression, 3rd edition, Wiley — for logistic-regression propensity.
- Friedman (2001), “Greedy Function Approximation: A Gradient Boosting Machine”, Annals of Statistics — for gradient-boosted propensity.
- Wikipedia: Propensity score, Naive Bayes classifier.
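As a hedged illustration of the logistic-regression case only: the propensity reduces to a sigmoid applied to a learned linear function of customer and candidate features. The feature vector and coefficients below are placeholders, not the platform's actual model.

```ts
// Illustrative logistic-regression propensity. Coefficients and
// features are placeholders, not a trained model.
function sigmoid(z: number): number {
  return 1 / (1 + Math.exp(-z));
}

function logisticPropensity(
  features: number[],
  coefficients: number[],
  intercept: number,
): number {
  // Linear predictor: intercept + dot(features, coefficients).
  const z = features.reduce((acc, x, i) => acc + x * coefficients[i], intercept);
  return sigmoid(z); // maps the linear predictor into (0, 1)
}
```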
Relevance (contextual fit)
A unit-interval score computed from request-time signals (channel, recency, segment). Conceptually a context-aware prior:
- Adomavicius & Tuzhilin (2011), “Context-Aware Recommender Systems”, in Recommender Systems Handbook, Springer.
- Russo, Van Roy, Kazerouni, Osband, Wen (2018), “A Tutorial on Thompson Sampling”, FnT Machine Learning (arXiv:1707.02038) — discusses contextual bandits where context modifies the action-value function.
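A minimal sketch of how a unit-interval relevance score could be assembled from the three request-time signals named above. The signal shapes and the equal-weight average are assumptions for illustration, not the platform's actual computation.

```ts
// Illustrative relevance from request-time signals. Signal shapes
// and the equal-weight average are assumptions, not the platform's.
interface RequestContext {
  channel: string;
  segment: string;
  daysSinceLastInteraction: number;
}

interface CandidateSignals {
  channels: string[];
  segments: string[];
  recencyWindowDays: number;
}

function relevance(c: CandidateSignals, ctx: RequestContext): number {
  const channelMatch = c.channels.includes(ctx.channel) ? 1 : 0;
  const segmentMatch = c.segments.includes(ctx.segment) ? 1 : 0;
  // Recency decays linearly to 0 outside the candidate's window.
  const recencyMatch = Math.max(
    0,
    1 - ctx.daysSinceLastInteraction / c.recencyWindowDays,
  );
  // Equal-weight average keeps the result in [0, 1].
  return (channelMatch + segmentMatch + recencyMatch) / 3;
}
```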
Impact (business value)
A normalized economic-value score per candidate, set by operator configuration:
- Standard utility-theory literature; see Gilbert & Mosteller (1966), “Recognizing the Maximum of a Sequence”, JASA — early multi-criteria utility decomposition.
- The principle that recommendation systems must surface a separate “business value” axis is documented across recommender-systems industry literature (e.g. Salesforce Einstein NBA docs, AWS Personalize Next-Best-Action recipe).
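One common way to map raw operator-defined value into the unit interval is min-max normalization over the candidate pool. The sketch below assumes that scheme; the platform's normalization is operator-configurable and may differ.

```ts
// Illustrative min-max normalization of business value into [0, 1].
// The normalization scheme itself is an assumption for this sketch.
function normalizeImpact(values: Map<string, number>): Map<string, number> {
  const raw = [...values.values()];
  const min = Math.min(...raw);
  const range = Math.max(...raw) - min;
  const out = new Map<string, number>();
  for (const [id, v] of values) {
    // A degenerate pool (all values equal) maps to a neutral 1.
    out.set(id, range === 0 ? 1 : (v - min) / range);
  }
  return out;
}
```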
Emphasis (manual lever)
An operator-set boost factor for tactical adjustments. The need for manual override on top of an automated scoring system is a long-standing operations-research observation:
- Sutton & Barto (2018), Reinforcement Learning: An Introduction, 2nd edition, MIT Press — separates “policy” (learned) from “policy override” (engineered).
- Industrial-control and adtech literature uses “lever” / “override factor” terminology consistently; the design pattern long predates any specific commercial product.
Why a multiplicative composite (not additive)
Two reasons:
- Hard-stop semantics. A multiplicative composite has the property that score(c) = 0 whenever any factor is zero. This is the right behavior for ranking: if a candidate has zero propensity (the model is sure the customer won’t respond), zero relevance (request-time signals say “wrong channel”), zero impact (business value is negative or absent), or zero emphasis (the operator has explicitly suppressed it), the candidate should not appear in the ranking regardless of how strong the other factors are. An additive composite cannot achieve this without explicit if-zero-then-zero filtering, which is fragile.
- Scale invariance. Multiplicative scoring is invariant to constant rescaling of any single factor, which makes per-factor calibration independent — operators can tune Wp without affecting the relative scores produced by relevance, impact, or emphasis. Additive scoring requires global rescaling whenever any single weight changes.
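A short worked comparison with illustrative numbers makes the hard-stop property concrete: an operator-suppressed candidate (emphasis = 0) scores 0 under the multiplicative composite, but 0.6 under an equal-weight additive one, where it could still outrank legitimate candidates.

```ts
// Illustrative numbers only: a suppressed candidate under both composites.
const w = { wp: 0.25, wr: 0.25, wi: 0.25, we: 0.25 };
const c = { propensity: 0.9, relevance: 0.8, impact: 0.7, emphasis: 0 };

// Multiplicative composite: one zero factor zeroes the whole score.
const multiplicative =
  Math.pow(c.propensity, w.wp) *
  Math.pow(c.relevance, w.wr) *
  Math.pow(c.impact, w.wi) *
  Math.pow(c.emphasis, w.we); // 0

// Additive composite: the suppressed candidate still scores 0.6.
const additive =
  w.wp * c.propensity +
  w.wr * c.relevance +
  w.wi * c.impact +
  w.we * c.emphasis; // 0.6

console.log({ multiplicative, additive });
```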
Why these specific factor names
The names Propensity, Relevance, Impact, Emphasis (PRIE) describe what each factor measures in plain English:
- Propensity — the customer’s likelihood to respond. A standard ML term used across the causal-inference, recommender-systems, and credit-scoring literature.
- Relevance — the contextual fit between the candidate and the current request. Standard information-retrieval term (Salton & McGill 1983, Introduction to Modern Information Retrieval).
- Impact — the business value impact of the candidate. Standard economics / decision-theory term.
- Emphasis — the operator’s manual priority lever for the candidate. Standard project-management / operations-research term.
Implementation
The PRIE formula lives at platform/src/lib/ranking.ts. The Lagrangian dual-ascent solver for coupled-resource constraints (budget, inventory, frequency) is at platform/src/lib/ranking/lagrangian.ts. EXP3-IX online weight tuning is at platform/src/lib/ranking/online-weights.ts.
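For orientation only, the sketch below shows what one round of EXP3-IX weight tuning can look like. It is a generic illustration of the technique named above, not the code in platform/src/lib/ranking/online-weights.ts, and the function shape is an assumption.

```ts
// Generic single round of EXP3-IX (implicit-exploration EXP3).
// This is an illustrative sketch, not the platform's implementation.
function exp3ixRound(
  cumLoss: number[],                    // cumulative loss estimates per arm
  eta: number,                          // learning rate
  gamma: number,                        // implicit-exploration parameter
  observeLoss: (arm: number) => number, // plays an arm, returns loss in [0, 1]
): number {
  // Sampling distribution: softmax over negative cumulative losses.
  const weights = cumLoss.map((L) => Math.exp(-eta * L));
  const total = weights.reduce((a, b) => a + b, 0);
  const probs = weights.map((x) => x / total);

  // Sample an arm from the distribution.
  let r = Math.random();
  let arm = 0;
  for (; arm < probs.length - 1; arm++) {
    r -= probs[arm];
    if (r <= 0) break;
  }

  // Implicit exploration: add gamma to the denominator of the
  // importance-weighted loss estimate (the "IX" part).
  const loss = observeLoss(arm);
  cumLoss[arm] += loss / (probs[arm] + gamma);
  return arm;
}
```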
For the API surface that exposes ranking profiles to clients (operator-set weights, channel overrides, champion-challenger configurations), see Ranking Profiles API.
For the end-to-end provenance of every algorithm in the platform, see Sources & Provenance.