
KaireonAI’s ranking engine computes a per-candidate score using a four-factor weighted composite called PRIE: Propensity, Relevance, Impact, Emphasis. This page documents the design rationale, the public-source literature each factor is grounded in, and why the four-factor structure is the appropriate level of expressiveness for a multi-objective recommendation system. This page exists so any reader — contributor, customer, partner, or auditor — can verify that PRIE is derived from public machine-learning literature and standard recommender-systems practice, not from any proprietary or confidential source.

What PRIE computes

For a candidate offer c, the PRIE score is:
score(c) = propensity(c)^Wp × relevance(c)^Wr × impact(c)^Wi × emphasis(c)^We
Where:
  • propensity(c) ∈ [0, 1] is the predicted probability that the customer responds positively to candidate c. Computed from one or more learned models (naive-Bayes, logistic-regression, or gradient-boosted, depending on the active scoring strategy).
  • relevance(c) ∈ [0, 1] is the contextual fit of c to the current request — channel match, recency match, segment match. Computed from request-time signals.
  • impact(c) ∈ [0, 1] is the normalized business value of c — revenue, margin, or operator-defined value mapped into the unit interval.
  • emphasis(c) ∈ [0, 1] is a manual priority lever that lets operators boost or suppress specific candidates without retraining models.
  • Wp, Wr, Wi, We are the per-factor weights, with Wp + Wr + Wi + We = 1, configurable per RankingProfile (see Ranking Profiles API).
The geometric-mean form (each factor raised to its weight) ensures that a near-zero value in any single factor pulls the composite toward zero — useful for hard-stop semantics where any one of “no response likelihood,” “no contextual fit,” “no business value,” or “explicit suppression” should knock the candidate out of contention.
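
As a concrete sketch, the composite fits in a few lines of TypeScript. Everything below is illustrative: the PrieFactors and PrieWeights shapes and the prieScore name are assumptions made for this page, not the actual API of platform/src/lib/ranking.ts.

```ts
// Illustrative sketch only; the real implementation lives in
// platform/src/lib/ranking.ts and may differ in shape and naming.

interface PrieFactors {
  propensity: number; // [0, 1] learned response probability
  relevance: number;  // [0, 1] request-time contextual fit
  impact: number;     // [0, 1] normalized business value
  emphasis: number;   // [0, 1] operator-set priority lever
}

interface PrieWeights {
  wp: number; // weight on propensity
  wr: number; // weight on relevance
  wi: number; // weight on impact
  we: number; // weight on emphasis; wp + wr + wi + we = 1
}

function prieScore(f: PrieFactors, w: PrieWeights): number {
  // Weighted geometric mean: a zero in any factor zeroes the composite.
  return (
    f.propensity ** w.wp *
    f.relevance ** w.wr *
    f.impact ** w.wi *
    f.emphasis ** w.we
  );
}

const w: PrieWeights = { wp: 0.4, wr: 0.3, wi: 0.2, we: 0.1 };
prieScore({ propensity: 0.8, relevance: 0.6, impact: 0.9, emphasis: 0.5 }, w); // ≈ 0.72
prieScore({ propensity: 0.8, relevance: 0.6, impact: 0.9, emphasis: 0 }, w);   // = 0 (suppressed)
```

One consequence worth noting: a factor assigned weight 0 contributes x ** 0 = 1, so a zero weight disables that factor entirely, including its hard-stop behavior.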

Why four factors

A standard observation from multi-criteria recommender-systems literature (Lin, Wu, Yu, Sun, RecSys 2019; Wikipedia: Multi-objective optimization) is that real-world recommendation requires more than a single learned-propensity score. A commercial recommendation system that ranks only by predicted response rate ignores three persistent operational realities:
  1. Same-customer-same-time context shifts. A customer’s response probability for a given offer depends on the channel, time of day, current campaign, recent interaction history, and other contextual signals that the offline-trained propensity model does not always see at request time. Giving these their own factor lets the runtime apply contextual nudges without retraining the model.
  2. Operator-set business value. Two offers with the same predicted response rate can have very different unit economics (margin, lifetime value, strategic priority). A scoring system that doesn’t surface a business-value axis forces operators to bake economic preferences into propensity model training data, which is brittle and slow to adjust.
  3. Manual operator override. Real campaigns have moments when an operator needs to immediately boost or suppress a specific candidate — for legal compliance, regulatory blackout, supply chain shock, or campaign launch. Without an explicit lever factor, operators have to deploy code or retrain models to make these adjustments; the lever factor lets them do it through configuration.
Three-factor systems (typically propensity × value × context) are common, but they conflate “automated context fit” with “manual operator emphasis,” which are different signals coming from different sources. Five-factor systems exist (often adding fairness or diversity as separate axes), but those concerns are better handled at the ranking-pipeline level with hard-constraint filters and Lagrangian dual-ascent (see Ranking — Lagrangian) rather than as a fifth multiplied factor. The four-factor split — likelihood × contextual fit × business value × manual lever — is the smallest factor set that decomposes cleanly along the four lines a real operator team needs to tune independently.
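
To make "tune independently" concrete, here is a hypothetical pair of weight configurations of the kind a RankingProfile might carry. The field names and numbers are illustrative assumptions; see the Ranking Profiles API page for the actual schema.

```ts
// Hypothetical profile weights; see the Ranking Profiles API for the real schema.
const steadyState = { wp: 0.5, wr: 0.2, wi: 0.2, we: 0.1 }; // lean on the learned model

const campaignLaunch = {
  // Shift weight toward business value and the manual lever for a launch
  // window, with no model retraining and no code deploy.
  wp: 0.3,
  wr: 0.2,
  wi: 0.3,
  we: 0.2,
};
```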

Public-source citations per factor

Propensity

A learned probability of positive response. The general-purpose machine-learning literature is the source:
  • Hosmer, Lemeshow, Sturdivant (2013), Applied Logistic Regression, 3rd edition, Wiley — for logistic-regression propensity.
  • Friedman (2001), “Greedy Function Approximation: A Gradient Boosting Machine”, Annals of Statistics — for gradient-boosted propensity.
  • Wikipedia: Propensity score, Naive Bayes classifier.

Relevance (contextual fit)

A unit-interval score computed from request-time signals (channel, recency, segment). Conceptually a context-aware prior:
  • Adomavicius & Tuzhilin (2011), “Context-Aware Recommender Systems”, in Recommender Systems Handbook, Springer.
  • Russo, Van Roy, Kazerouni, Osband, Wen (2018), “A Tutorial on Thompson Sampling”, Foundations and Trends in Machine Learning (arXiv:1707.02038) — discusses contextual bandits where context modifies the action-value function.
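
As a hedged illustration of what "a unit-interval score computed from request-time signals" can look like, the sketch below averages three match signals. The signal names and the equal-weight combination are assumptions made for illustration, not the platform's actual relevance logic.

```ts
// Hypothetical relevance sketch; signal names and weighting are illustrative.
interface RequestSignals {
  channelMatch: number; // 1 if the candidate's channel matches the request, else 0
  recencyMatch: number; // graded [0, 1], decaying with time since the last interaction
  segmentMatch: number; // 1 if the customer is in the candidate's target segment
}

function relevance(s: RequestSignals): number {
  // An equal-weight average keeps the result in [0, 1].
  return (s.channelMatch + s.recencyMatch + s.segmentMatch) / 3;
}
```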

Impact (business value)

A normalized economic-value score per candidate, set by operator configuration:
  • Standard utility-theory literature; see also Gilbert & Mosteller (1966), “Recognizing the Maximum of a Sequence”, JASA, an early formal treatment of selecting the best candidate in a sequence under a value criterion (optimal stopping).
  • The principle that recommendation systems must surface a separate “business value” axis is documented across recommender-systems industry literature (e.g. Salesforce Einstein NBA docs, AWS Personalize Next-Best-Action recipe).

Emphasis (manual lever)

An operator-set boost factor for tactical adjustments. The need for manual override on top of an automated scoring system is a long-standing operations-research observation:
  • Sutton & Barto (2018), Reinforcement Learning: An Introduction, 2nd edition, MIT Press — for the general distinction between a learned policy and engineered control logic layered on top of it.
  • Industrial-control and adtech literature uses “lever” / “override factor” terminology consistently; the design pattern long predates any specific commercial product.

Why a multiplicative composite (not additive)

Two reasons:
  1. Hard-stop semantics. A multiplicative composite has the property that score(c) = 0 whenever any factor is zero. This is the right behavior for ranking: if a candidate has zero propensity (model is sure the customer won’t respond), zero relevance (request-time signals say “wrong channel”), zero impact (business value is negative or absent), or zero emphasis (operator has explicitly suppressed), the candidate should not appear in the ranking regardless of how strong the other factors are. An additive composite cannot achieve this without explicit if-zero-then-zero filtering, which is fragile.
  2. Scale invariance. Rescaling any single factor by a positive constant multiplies every candidate’s composite by the same constant (the constant raised to that factor’s weight), leaving the ranking unchanged. Per-factor calibration is therefore independent: operators can recalibrate propensity, relevance, impact, or emphasis in isolation. Additive scoring lacks this property; rescaling one term shifts candidates unevenly, so any single-factor change forces a global recalibration of all weights.
The multiplicative-composite + per-factor-weight pattern is the textbook choice for multi-criteria utility scoring; see Keeney & Raiffa (1976), Decisions with Multiple Objectives: Preferences and Value Tradeoffs, Wiley (reissued by Cambridge University Press, 1993), and the more recent Recommender Systems Handbook coverage of multi-objective recommendation.
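
A worked example with made-up numbers shows the hard-stop difference directly. Candidate A is strong on three factors but operator-suppressed (emphasis = 0); the weights match the illustrative sketch earlier on this page.

```ts
// Made-up numbers; illustrative only.
const w = { wp: 0.4, wr: 0.3, wi: 0.2, we: 0.1 };
const a = { p: 0.9, r: 0.8, i: 0.7, e: 0.0 }; // emphasis zeroed by the operator

// Multiplicative composite: the zero factor zeroes the whole score.
const mult = a.p ** w.wp * a.r ** w.wr * a.i ** w.wi * a.e ** w.we; // = 0

// Additive composite: the suppressed candidate still scores 0.74 and
// needs a separate if-zero-then-zero filter to be excluded.
const add = w.wp * a.p + w.wr * a.r + w.wi * a.i + w.we * a.e; // = 0.74
```

The scale-invariance point follows the same shape: doubling every candidate's impact value multiplies each multiplicative score by the same 2^0.2 ≈ 1.15, preserving the order, while the additive composite gains 0.2 × impact, a different amount per candidate, which can reorder results.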

Why these specific factor names

The names Propensity, Relevance, Impact, Emphasis (PRIE) describe what each factor measures in plain English:
  • Propensity — the customer’s likelihood to respond. A standard ML term used across causal-inference, recommender-systems, and credit-scoring literature.
  • Relevance — the contextual fit between the candidate and the current request. Standard information-retrieval term (Salton & McGill 1983, Introduction to Modern Information Retrieval).
  • Impact — the business value impact of the candidate. Standard economics / decision-theory term.
  • Emphasis — the operator’s manual priority lever for the candidate. Standard project-management / operations-research term.
Each name is generic English vocabulary describing what its factor does. KaireonAI did not coin these terms; they describe orthogonal axes that any commercial recommendation system needs to tune independently.

Implementation

The PRIE formula lives at platform/src/lib/ranking.ts. The Lagrangian dual-ascent solver for coupled-resource constraints (budget, inventory, frequency) is at platform/src/lib/ranking/lagrangian.ts. EXP3-IX online weight tuning is at platform/src/lib/ranking/online-weights.ts. For the API surface that exposes ranking profiles to clients (operator-set weights, channel overrides, champion-challenger configurations), see Ranking Profiles API. For the end-to-end provenance of every algorithm in the platform, see Sources & Provenance.