KaireonAI is built from publicly available sources. This page documents the references that informed each algorithm, scoring approach, and architectural pattern in the platform. It exists so any reader — contributor, customer, partner, or auditor — can verify that the platform’s design draws on public materials and standard machine-learning literature, not proprietary or confidential sources. If a citation is missing or incorrect, please open an issue on the docs repository or contact the maintainers.

Scoring & ranking

Multi-factor weighted-composite scoring (PRIE)

KaireonAI’s PRIE composite — score = propensityWeight × propensity + relevanceWeight × relevance + impactWeight × impact + emphasisWeight × emphasis — is a standard weighted-multi-criteria scoring function. Multi-criteria ranking has decades of public literature; representative public sources:
  • Yu, Lin, Wei, Wu, Yu, Sun (2019), “A Pareto-Efficient Algorithm for Multiple Objective Optimization in E-commerce Recommendation” — RecSys 2019. (ACM Digital Library)
  • Sutton & Barto (2018), Reinforcement Learning: An Introduction, MIT Press, 2nd edition — foundational background on reward-driven sequential decision-making.
  • Wikipedia: Multi-objective optimization, Multi-criteria decision analysis.
The factor names — Propensity, Relevance, Impact, Emphasis — describe four orthogonal axes that any commercial recommendation system needs (likelihood, contextual fit, business value, manual prioritization). The acronym PRIE is KaireonAI-specific naming; the underlying mathematics is industry-standard.
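As a hedged illustration, the weighted composite described above can be sketched in a few lines of TypeScript. The field names, weight values, and action data here are illustrative only, not KaireonAI's actual API:

```typescript
// Sketch of a PRIE-style weighted composite score. All names and values
// below are hypothetical; only the weighted-sum structure comes from the
// formula documented above.
interface PrieFactors {
  propensity: number; // likelihood of acceptance, 0..1
  relevance: number;  // contextual fit, 0..1
  impact: number;     // business value, normalized to 0..1
  emphasis: number;   // manual prioritization boost, 0..1
}

interface PrieWeights {
  propensity: number;
  relevance: number;
  impact: number;
  emphasis: number;
}

function prieScore(f: PrieFactors, w: PrieWeights): number {
  return (
    w.propensity * f.propensity +
    w.relevance * f.relevance +
    w.impact * f.impact +
    w.emphasis * f.emphasis
  );
}

// Rank candidate actions by descending composite score.
const weights: PrieWeights = { propensity: 0.4, relevance: 0.3, impact: 0.2, emphasis: 0.1 };
const actions = [
  { id: "upsell", factors: { propensity: 0.9, relevance: 0.5, impact: 0.8, emphasis: 0.0 } },
  { id: "retain", factors: { propensity: 0.6, relevance: 0.9, impact: 0.9, emphasis: 1.0 } },
];
const ranked = [...actions].sort(
  (a, b) => prieScore(b.factors, weights) - prieScore(a.factors, weights),
);
```

Because the score is a plain linear combination, changing a weight re-ranks actions predictably, which is the main operational appeal of this family of scoring functions.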

Multi-armed bandit & Thompson sampling

KaireonAI’s experiment routing and adaptive scoring use the standard multi-armed-bandit framework with Thompson sampling for online learning. Representative public sources:
  • Thompson (1933), “On the Likelihood that One Unknown Probability Exceeds Another in View of the Evidence of Two Samples”, Biometrika.
  • Chapelle & Li (2011), “An Empirical Evaluation of Thompson Sampling”, NeurIPS 2011.
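As a sketch of the standard Beta-Bernoulli variant of Thompson sampling (not KaireonAI's actual implementation): each arm keeps a Beta(alpha, beta) posterior over its success rate, one value is drawn from each posterior, and the arm with the highest draw is pulled.

```typescript
// Beta-Bernoulli Thompson sampling sketch. The simulation setup at the
// bottom (two arms with fixed true rates) is purely illustrative.
interface Arm { alpha: number; beta: number; }

// Beta(a, b) draw via two Gamma draws. The sum-of-exponentials trick is
// valid only for integer shapes, which holds here because alpha and beta
// start at 1 and are only ever incremented by 1.
function sampleGammaInt(shape: number): number {
  let s = 0;
  for (let i = 0; i < shape; i++) s += -Math.log(1 - Math.random());
  return s;
}
function sampleBeta(a: number, b: number): number {
  const x = sampleGammaInt(a);
  const y = sampleGammaInt(b);
  return x / (x + y);
}

function chooseArm(arms: Arm[]): number {
  let best = 0;
  let bestDraw = -Infinity;
  arms.forEach((arm, i) => {
    const draw = sampleBeta(arm.alpha, arm.beta);
    if (draw > bestDraw) { bestDraw = draw; best = i; }
  });
  return best;
}

function update(arm: Arm, reward: 0 | 1): void {
  if (reward === 1) arm.alpha += 1; else arm.beta += 1;
}

// Simulate: arm 1 has the higher true success rate, so its posterior
// should concentrate there and it should be pulled far more often.
const trueRates = [0.3, 0.7];
const arms: Arm[] = [{ alpha: 1, beta: 1 }, { alpha: 1, beta: 1 }];
for (let t = 0; t < 2000; t++) {
  const i = chooseArm(arms);
  update(arms[i], Math.random() < trueRates[i] ? 1 : 0);
}
```

Sampling from the posterior (rather than taking its mean) is what balances exploration against exploitation: uncertain arms occasionally produce high draws and get tried.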

EXP3-IX (adversarial bandit)

The EXP3-IX algorithm used in KaireonAI’s contextual-bandit ranking variant comes from public research:
  • Neu (2015), “Explore no more: Improved high-probability regret bounds for non-stochastic bandits”, NeurIPS 2015. (NeurIPS proceedings)
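A minimal sketch of the EXP3-IX update from that paper, assuming losses in [0, 1]; the step size, implicit-exploration parameter, and loss sequence below are illustrative choices, not KaireonAI's production settings:

```typescript
// EXP3-IX sketch (Neu, 2015). The "IX" term gamma in the denominator of
// the importance-weighted loss estimate biases it downward, which is what
// yields high-probability (not just expected) regret bounds.
class Exp3IX {
  private cumLoss: number[];
  constructor(private k: number, private eta: number, private gamma: number) {
    this.cumLoss = new Array(k).fill(0);
  }
  probabilities(): number[] {
    const minLoss = Math.min(...this.cumLoss); // shift for numerical stability
    const w = this.cumLoss.map((L) => Math.exp(-this.eta * (L - minLoss)));
    const z = w.reduce((a, b) => a + b, 0);
    return w.map((x) => x / z);
  }
  choose(): number {
    const p = this.probabilities();
    let r = Math.random();
    for (let i = 0; i < this.k; i++) {
      r -= p[i];
      if (r <= 0) return i;
    }
    return this.k - 1;
  }
  // Bandit feedback: only the chosen arm's loss is observed.
  update(chosen: number, loss: number): void {
    const p = this.probabilities()[chosen];
    this.cumLoss[chosen] += loss / (p + this.gamma); // IX loss estimate
  }
}

// Run against a fixed loss sequence where arm 0 is consistently better.
const bandit = new Exp3IX(3, 0.05, 0.025);
for (let t = 0; t < 1000; t++) {
  const i = bandit.choose();
  bandit.update(i, i === 0 ? 0.1 : 0.9);
}
const finalP = bandit.probabilities();
```

After enough rounds the probability mass should concentrate on the low-loss arm; gamma = eta/2 is a common default in the literature.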

Lagrangian dual ascent for constrained ranking

KaireonAI’s coupled-constraint solver in lib/ranking/lagrangian.ts implements standard dual-ascent Lagrangian relaxation for linearly-constrained resource allocation:
  • Boyd & Vandenberghe (2004), Convex Optimization, Cambridge University Press — Chapter 5 covers Lagrangian duality. (free PDF, Stanford)
  • Bertsekas (2014), Constrained Optimization and Lagrange Multiplier Methods, Athena Scientific.
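The dual-ascent pattern can be sketched on a toy budgeted-selection problem (maximize total value subject to a linear spend cap). This is a generic illustration of the textbook method, under assumed data, and not the contents of lib/ranking/lagrangian.ts:

```typescript
// Dual-ascent Lagrangian relaxation for: maximize sum(value_i * x_i)
// subject to sum(cost_i * x_i) <= budget, x_i in {0, 1}.
// With the multiplier lambda fixed, the Lagrangian decouples per item;
// lambda is then updated along the constraint-violation subgradient.
interface Item { value: number; cost: number; }

function dualAscentSelect(
  items: Item[],
  budget: number,
  steps = 500,
  stepSize = 0.01,
): boolean[] {
  let lambda = 0;
  let x: boolean[] = items.map(() => false);
  for (let t = 0; t < steps; t++) {
    // Primal step: include an item iff its lambda-adjusted value is positive.
    x = items.map((it) => it.value - lambda * it.cost > 0);
    // Dual step: raise lambda when over budget, lower (toward 0) when under.
    const spend = items.reduce((s, it, i) => s + (x[i] ? it.cost : 0), 0);
    lambda = Math.max(0, lambda + stepSize * (spend - budget));
  }
  return x;
}

const items: Item[] = [
  { value: 10, cost: 4 },
  { value: 6, cost: 5 },
  { value: 3, cost: 1 },
];
const selected = dualAscentSelect(items, 5);
```

Here lambda rises until the middle item (value/cost ratio 1.2) drops out, leaving a selection that exactly meets the budget of 5. For non-toy instances, integer relaxations can leave a duality gap, which is why dual ascent is usually paired with a feasibility-repair step.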

Model explainability

TreeSHAP

The exact TreeSHAP implementation in KaireonAI’s gradient-boosted-model explainer comes from:
  • Lundberg, Erion, Chen, DeGrave, Prutkin, Nair, Katz, Himmelfarb, Bansal, Lee (2020), “From local explanations to global understanding with explainable AI for trees”, Nature Machine Intelligence. (Nature)
  • Open-source reference implementation: SHAP library.

KernelSHAP

For non-tree models (logistic regression, neural collaborative filtering), KaireonAI uses KernelSHAP per:
  • Lundberg & Lee (2017), “A Unified Approach to Interpreting Model Predictions”, NeurIPS 2017. (arXiv:1705.07874)

LIME

The LIME explainer follows:
  • Ribeiro, Singh, Guestrin (2016), “Why Should I Trust You? Explaining the Predictions of Any Classifier”, KDD 2016. (arXiv:1602.04938)

Counterfactual explanations

The counterfactual-explanation generator follows standard public methodology:
  • Wachter, Mittelstadt, Russell (2017), “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR”, Harvard Journal of Law & Technology. (arXiv:1711.00399)
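The standard formulation seeks the smallest input change that flips the model's decision. For a linear scorer this has a closed form (move along the weight vector to the decision boundary), which the following hypothetical sketch uses; general models require an optimization loop instead, and the weights below are illustrative:

```typescript
// Minimal-L2 counterfactual for a linear decision function
// f(x) = w·x + b: the closest point on the boundary f = 0 lies along w,
// so the counterfactual is x + step * w with step = -(w·x + b) / ||w||^2,
// nudged slightly past the boundary so the sign actually flips.
function counterfactual(x: number[], w: number[], b: number, margin = 1e-6): number[] {
  const dot = x.reduce((s, xi, i) => s + xi * w[i], b); // f(x)
  const normSq = w.reduce((s, wi) => s + wi * wi, 0);
  const step = -(dot / normSq) * (1 + margin);
  return x.map((xi, i) => xi + step * w[i]);
}

const w = [2, -1];
const b = -1;
const x = [0.2, 0.5]; // f(x) = 2*0.2 - 0.5 - 1 = -1.1 (rejected)
const cf = counterfactual(x, w, b);
// f(cf) is now slightly positive: the smallest L2 change that flips the decision
```

Real generators additionally constrain the perturbation to plausible, actionable feature changes; this sketch shows only the geometric core.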

Fairness & bias measurement

KaireonAI’s fairness suite implements standard metrics from public research:
  • Hardt, Price, Srebro (2016), “Equality of Opportunity in Supervised Learning”, NeurIPS 2016. (arXiv:1610.02413) — equalized odds, equal opportunity.
  • Dwork, Hardt, Pitassi, Reingold, Zemel (2012), “Fairness Through Awareness”, ITCS 2012. (arXiv:1104.3913) — individual fairness via the Lipschitz condition.
  • Wikipedia: Fairness in machine learning.
  • DeLong, DeLong, Clarke-Pearson (1988), “Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach”, Biometrics — the paired-AUC test used in fairness comparison.
  • Two-sample Kolmogorov–Smirnov test: classical statistical literature, Wikipedia.
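As a small illustration of one of the metrics above, the equal-opportunity gap of Hardt, Price & Srebro is just the difference in true-positive rate between two groups. The data layout and values below are hypothetical:

```typescript
// Equal-opportunity gap: |TPR(group A) - TPR(group B)|, computed over
// label-positive examples only. Group names and data are illustrative.
interface Example { group: "A" | "B"; label: 0 | 1; predicted: 0 | 1; }

function truePositiveRate(data: Example[], group: "A" | "B"): number {
  const positives = data.filter((e) => e.group === group && e.label === 1);
  if (positives.length === 0) return NaN; // undefined without positives
  const tp = positives.filter((e) => e.predicted === 1).length;
  return tp / positives.length;
}

function equalOpportunityGap(data: Example[]): number {
  return Math.abs(truePositiveRate(data, "A") - truePositiveRate(data, "B"));
}

const data: Example[] = [
  { group: "A", label: 1, predicted: 1 },
  { group: "A", label: 1, predicted: 1 },
  { group: "A", label: 1, predicted: 0 },
  { group: "A", label: 0, predicted: 0 },
  { group: "B", label: 1, predicted: 1 },
  { group: "B", label: 1, predicted: 0 },
  { group: "B", label: 0, predicted: 1 },
];
// TPR(A) = 2/3, TPR(B) = 1/2, so the gap is 1/6
```

Equalized odds additionally requires the false-positive rates to match; the computation is analogous over label-negative examples.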

Online learning & adaptive models

KaireonAI’s online-learning components draw on standard ML literature:
  • Cesa-Bianchi & Lugosi (2006), Prediction, Learning, and Games, Cambridge University Press.
  • Shalev-Shwartz (2012), “Online Learning and Online Convex Optimization”, Foundations and Trends in Machine Learning.

Uplift modeling

KaireonAI’s uplift measurement (a two-proportion z-test comparing treated and control conversion rates) follows:
  • Wikipedia: Uplift modelling.
  • Radcliffe & Surry (2011), “Real-World Uplift Modelling with Significance-Based Uplift Trees”, Stochastic Solutions White Paper.
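The pooled two-proportion z-test itself is classical statistics and fits in a few lines; the group sizes and conversion counts below are illustrative:

```typescript
// Pooled two-proportion z-test: is the treated group's conversion rate
// significantly different from the control group's?
function twoProportionZ(convT: number, nT: number, convC: number, nC: number): number {
  const pT = convT / nT;
  const pC = convC / nC;
  const pooled = (convT + convC) / (nT + nC);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nT + 1 / nC));
  return (pT - pC) / se;
}

// Treatment: 120/1000 converted; control: 90/1000 converted.
const z = twoProportionZ(120, 1000, 90, 1000);
// |z| > 1.96 rejects equality at the two-sided 5% level
```

Note this tests the average treatment effect on conversion; the uplift-tree methods cited above go further and model how the effect varies per customer.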

Decision flow / decisioning concepts

The conceptual building blocks of KaireonAI’s decisioning engine — eligibility filtering, fit scoring, contact policies, frequency caps, A/B holdout testing, multi-objective ranking — are widely documented in the personalization and recommendation-systems industry literature. Public references:
  • Wikipedia: Next-best-action marketing (the generic concept; not a Pega-trademarked term).
  • McKinsey, “The value of getting personalization right—or wrong—is multiplying” (2021). (McKinsey)
  • BCG, Personalization Index 2025 (2025). (BCG)
  • AWS Personalize documentation (public). (AWS)
  • Google Recommendations AI documentation (public). (Google Cloud)
  • Salesforce Einstein documentation (public). (Salesforce help docs)
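To make the building blocks above concrete, here is a hypothetical sketch of an eligibility → frequency-cap → rank pipeline; every name and rule in it is illustrative, not KaireonAI's actual decisioning API:

```typescript
// Toy decisioning pipeline: filter out ineligible actions, enforce a
// per-action frequency cap, then rank the survivors by score.
interface Action { id: string; score: number; requiresConsent: boolean; }
interface Customer { hasConsent: boolean; recentContacts: Record<string, number>; }

function decide(actions: Action[], customer: Customer, maxContacts: number): Action[] {
  return actions
    // Eligibility: drop actions the customer cannot receive.
    .filter((a) => customer.hasConsent || !a.requiresConsent)
    // Frequency cap: drop actions already shown too often recently.
    .filter((a) => (customer.recentContacts[a.id] ?? 0) < maxContacts)
    // Rank the survivors by score, descending.
    .sort((a, b) => b.score - a.score);
}

const customer: Customer = { hasConsent: false, recentContacts: { promo: 3 } };
const actions: Action[] = [
  { id: "promo", score: 0.9, requiresConsent: false },
  { id: "email", score: 0.8, requiresConsent: true },
  { id: "service", score: 0.5, requiresConsent: false },
];
const result = decide(actions, customer, 3);
// "promo" is capped out and "email" needs consent, so only "service" survives
```

The key design point, common across the public products cited above, is that hard constraints (eligibility, caps) are applied as filters before any scoring-based ranking.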

Implementation libraries

Open-source libraries directly used or referenced for algorithm correctness:
  • SHAP — the open-source reference implementation of TreeSHAP and KernelSHAP cited under Model explainability above.

What KaireonAI is not derived from

  • Internal source code, design documents, or compiled artifacts of any commercial proprietary decisioning platform (Pega, Adobe, Salesforce, Braze, Oracle, SAP, IBM, etc.).
  • Customer-engagement-specific implementation patterns from any consulting engagement.
  • Confidential or login-gated training materials of any commercial vendor.
  • Reverse engineering of any proprietary product binary.
The architectural choices in KaireonAI are common to the open ML and recommender-systems literature and to multiple competing public products. Where a concept has a Pega-flavored name in Pega’s public marketing (e.g. its “Action Arbitration” formula), KaireonAI uses the industry-neutral term (ranking) and cites the public ML literature above.

Trademark attribution

For the legally-required trademark attribution of third-party marks referenced in this documentation (Pega®, Adobe®, Braze®, Salesforce®, etc.), see Trademarks & Notices.