KaireonAI is built from publicly available sources. This page documents the references that informed each algorithm, scoring approach, and architectural pattern in the platform. It exists so any reader — contributor, customer, partner, or auditor — can verify that the platform’s design draws on public materials and standard machine-learning literature, not proprietary or confidential sources. If a citation is missing or incorrect, please open an issue on the docs repository or contact the maintainers.
Scoring & ranking
Multi-factor weighted-composite scoring (PRIE)
KaireonAI’s PRIE composite — score = propensityWeight × propensity + relevanceWeight × relevance + impactWeight × impact + emphasisWeight × emphasis — is a standard weighted multi-criteria scoring function; a minimal sketch follows the references below. Multi-criteria ranking has decades of public literature; representative public sources:
- Yu, Lin, Wei, Wu, Yu, Sun (2019), “A Pareto-Efficient Algorithm for Multiple Objective Optimization in E-commerce Recommendation” — RecSys 2019. (ACM Digital Library)
- Sutton & Barto (2018), Reinforcement Learning: An Introduction, MIT Press, 2nd edition — chapters on multi-objective and multi-criteria decision-making.
- Wikipedia: Multi-objective optimization, Multi-criteria decision analysis.
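As a minimal sketch, here is the formula above transcribed into TypeScript. The interfaces and field names are illustrative, not KaireonAI’s actual API, and the example weights are chosen to sum to 1 so the score stays on the same [0, 1] scale as its inputs.

```typescript
// Illustrative types only; KaireonAI's real signatures may differ.
interface PrieInputs {
  propensity: number; // P(accept), in [0, 1]
  relevance: number;  // contextual fit, in [0, 1]
  impact: number;     // normalized business value, in [0, 1]
  emphasis: number;   // operator-set boost, in [0, 1]
}

interface PrieWeights {
  propensityWeight: number;
  relevanceWeight: number;
  impactWeight: number;
  emphasisWeight: number;
}

// Direct transcription of the weighted composite above.
function prieScore(x: PrieInputs, w: PrieWeights): number {
  return (
    w.propensityWeight * x.propensity +
    w.relevanceWeight * x.relevance +
    w.impactWeight * x.impact +
    w.emphasisWeight * x.emphasis
  );
}

// Example: weights on the simplex keep the composite in [0, 1].
const score = prieScore(
  { propensity: 0.62, relevance: 0.8, impact: 0.45, emphasis: 0.1 },
  { propensityWeight: 0.4, relevanceWeight: 0.3, impactWeight: 0.2, emphasisWeight: 0.1 },
);
console.log(score.toFixed(3)); // 0.588
```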
Multi-armed bandit & Thompson sampling
KaireonAI’s experiment routing and adaptive scoring use the standard multi-armed-bandit framework with Thompson sampling for online learning; a minimal sampler sketch follows the references below.
- Russo, Van Roy, Kazerouni, Osband, Wen (2018), “A Tutorial on Thompson Sampling”, Foundations and Trends in Machine Learning. (arXiv:1707.02038)
- Wikipedia: Multi-armed bandit, Thompson sampling.
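A sketch under simple assumptions (Bernoulli rewards, Beta(1, 1) priors): sample once from each arm’s posterior, play the argmax, and update the counts. The Gamma-based Beta sampler and class shape below are illustrative, not KaireonAI’s implementation.

```typescript
// Box–Muller standard normal, used by the Gamma sampler below.
function gaussian(): number {
  const u = 1 - Math.random(); // avoid log(0)
  const v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Marsaglia–Tsang sampler for Gamma(shape, 1); valid for shape >= 1,
// which holds here because the Beta counts start at 1 and only grow.
function sampleGamma(shape: number): number {
  const d = shape - 1 / 3;
  const c = 1 / Math.sqrt(9 * d);
  for (;;) {
    let x: number, v: number;
    do {
      x = gaussian();
      v = 1 + c * x;
    } while (v <= 0);
    v = v * v * v;
    const u = Math.random();
    if (u < 1 - 0.0331 * x ** 4) return d * v;
    if (Math.log(u) < 0.5 * x * x + d * (1 - v + Math.log(v))) return d * v;
  }
}

// Beta(a, b) as a ratio of Gamma draws.
function sampleBeta(a: number, b: number): number {
  const g = sampleGamma(a);
  return g / (g + sampleGamma(b));
}

class ThompsonBandit {
  private wins: number[];
  private losses: number[];
  constructor(nArms: number) {
    this.wins = new Array(nArms).fill(1);   // Beta(1, 1) uniform prior
    this.losses = new Array(nArms).fill(1);
  }
  // Draw one sample per arm from its posterior; play the argmax.
  selectArm(): number {
    let best = 0;
    let bestDraw = -Infinity;
    for (let i = 0; i < this.wins.length; i++) {
      const draw = sampleBeta(this.wins[i], this.losses[i]);
      if (draw > bestDraw) {
        bestDraw = draw;
        best = i;
      }
    }
    return best;
  }
  // Conjugate update: success increments alpha, failure increments beta.
  update(arm: number, reward: 0 | 1): void {
    if (reward === 1) this.wins[arm] += 1;
    else this.losses[arm] += 1;
  }
}
```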
EXP3-IX (adversarial bandit)
The EXP3-IX algorithm used in KaireonAI’s contextual-bandit ranking variant comes from public research; a sketch of the update appears after the citation below.
- Neu (2015), “Explore no more: Improved high-probability regret bounds for non-stochastic bandits”, NeurIPS 2015. (NeurIPS proceedings)
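A compact sketch of the EXP3-IX update from the paper, assuming K arms and losses in [0, 1]; eta (learning rate) and gamma (implicit-exploration bias) are the paper’s two parameters, while the class shape is ours.

```typescript
class Exp3IX {
  private cumLoss: number[]; // cumulative *estimated* losses per arm

  constructor(
    private k: number,     // number of arms
    private eta: number,   // learning rate
    private gamma: number, // implicit-exploration parameter
  ) {
    this.cumLoss = new Array(k).fill(0);
  }

  // Exponential weights over negated cumulative loss estimates.
  probabilities(): number[] {
    const minLoss = Math.min(...this.cumLoss); // shift for numerical stability
    const w = this.cumLoss.map((l) => Math.exp(-this.eta * (l - minLoss)));
    const z = w.reduce((a, b) => a + b, 0);
    return w.map((x) => x / z);
  }

  selectArm(): number {
    const p = this.probabilities();
    let u = Math.random();
    for (let i = 0; i < this.k; i++) {
      u -= p[i];
      if (u <= 0) return i;
    }
    return this.k - 1;
  }

  // The "IX" step: dividing the observed loss by (p + gamma) instead of p
  // biases the estimate downward, which is what yields the paper's
  // high-probability regret bounds.
  update(arm: number, loss: number): void {
    const p = this.probabilities()[arm];
    this.cumLoss[arm] += loss / (p + this.gamma);
  }
}
```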
Lagrangian dual ascent for constrained ranking
KaireonAI’s coupled-constraint solver in lib/ranking/lagrangian.ts implements standard dual-ascent Lagrangian relaxation for linearly constrained resource allocation; a toy version of the multiplier update appears after the references below.
- Boyd & Vandenberghe (2004), Convex Optimization, Cambridge University Press — Chapter 5 covers Lagrangian duality. (free PDF, Stanford)
- Bertsekas (2014), Constrained Optimization and Lagrange Multiplier Methods, Athena Scientific.
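As a toy version of dual ascent for a single linear budget constraint (maximize total score subject to total cost ≤ budget), the sketch below shows only the multiplier update; the item type and step parameters are illustrative, and the real solver handles coupled constraints.

```typescript
interface Item {
  score: number; // objective contribution if selected
  cost: number;  // resource consumed if selected
}

function dualAscent(items: Item[], budget: number, steps = 200, stepSize = 0.05) {
  let lambda = 0; // multiplier for the budget constraint
  const select = (l: number) => items.filter((it) => it.score - l * it.cost > 0);
  for (let t = 0; t < steps; t++) {
    // Primal step: with lambda fixed, pick items whose
    // Lagrangian-adjusted score (score - lambda * cost) is positive.
    const usage = select(lambda).reduce((s, it) => s + it.cost, 0);
    // Dual (projected subgradient) step: raise lambda when over budget,
    // lower it when under, and keep lambda >= 0.
    lambda = Math.max(0, lambda + stepSize * (usage - budget));
  }
  return { lambda, chosen: select(lambda) };
}
```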
Model explainability
TreeSHAP
The exact TreeSHAP implementation in KaireonAI’s gradient-boosted-model explainer comes from the sources below; a quick invariant check on explainer output follows the list.
- Lundberg, Erion, Chen, DeGrave, Prutkin, Nair, Katz, Himmelfarb, Bansal, Lee (2020), “From local explanations to global understanding with explainable AI for trees”, Nature Machine Intelligence. (Nature)
- Open-source reference implementation: SHAP library.
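One cheap correctness guard worth noting: SHAP values satisfy local accuracy, meaning the base value plus the per-feature attributions must reconstruct the model’s output for that row. A sketch of that invariant check (names are illustrative):

```typescript
// Local accuracy: baseValue + sum(shapValues) === model output, up to
// floating-point tolerance. Useful as a sanity check on explainer output.
function checkShapAdditivity(
  baseValue: number,
  shapValues: number[],
  modelOutput: number,
  tol = 1e-6,
): boolean {
  const reconstructed = baseValue + shapValues.reduce((a, b) => a + b, 0);
  return Math.abs(reconstructed - modelOutput) < tol;
}
```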
KernelSHAP
For non-tree models (logistic regression, neural collaborative filtering), KaireonAI uses KernelSHAP per the paper below; the Shapley kernel itself is sketched after the citation.
- Lundberg & Lee (2017), “A Unified Approach to Interpreting Model Predictions”, NeurIPS 2017. (arXiv:1705.07874)
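The core of KernelSHAP is the Shapley kernel, the regression weight assigned to a coalition of size s out of M features. A direct transcription of the formula from the paper (helper names are ours):

```typescript
// n choose k, computed iteratively to avoid factorial overflow.
function binomial(n: number, k: number): number {
  let r = 1;
  for (let i = 1; i <= k; i++) r = (r * (n - i + 1)) / i;
  return r;
}

// Shapley kernel: pi(s) = (M - 1) / (C(M, s) * s * (M - s)).
// It is infinite at s = 0 and s = M, which is why those two coalitions
// are enforced as hard constraints in the weighted regression.
function shapleyKernelWeight(M: number, s: number): number {
  if (s === 0 || s === M) return Number.POSITIVE_INFINITY;
  return (M - 1) / (binomial(M, s) * s * (M - s));
}
```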
LIME
The LIME explainer follows the paper below; its perturbation-and-weighting step is sketched after the citation.
- Ribeiro, Singh, Guestrin (2016), “Why Should I Trust You? Explaining the Predictions of Any Classifier”, KDD 2016. (arXiv:1602.04938)
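A sketch of LIME’s perturbation-and-weighting step under simple assumptions: binary masks over M interpretable features, with each sample weighted by an exponential kernel on its distance from the original instance (here, the fraction of features turned off). The subsequent sparse weighted linear fit is omitted, and all names are illustrative.

```typescript
// Generate perturbed samples around an instance and weight them by
// proximity: the inputs LIME feeds to its local linear model.
function limePerturbations(M: number, nSamples: number, kernelWidth: number) {
  const samples: { mask: number[]; weight: number }[] = [];
  for (let i = 0; i < nSamples; i++) {
    // Random binary mask: 1 keeps a feature, 0 replaces it with a baseline.
    const mask = Array.from({ length: M }, () => (Math.random() < 0.5 ? 1 : 0));
    // Distance from the all-ones (unperturbed) instance.
    const dist = 1 - mask.reduce((a, b) => a + b, 0) / M;
    // Exponential proximity kernel from the paper.
    const weight = Math.exp(-(dist * dist) / (kernelWidth * kernelWidth));
    samples.push({ mask, weight });
  }
  return samples;
}
```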
Counterfactual explanations
The counterfactual-explanation generator follows standard public methodology; a toy search over the Wachter objective appears after the references below.
- Wachter, Mittelstadt, Russell (2017), “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR”. (arXiv:1711.00399)
- Molnar (2024), Interpretable Machine Learning, Chapter 15 (free online textbook). (christophm.github.io/interpretable-ml-book)
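As a toy illustration of the Wachter et al. objective, the random-search sketch below minimizes lambda * (f(x') - yTarget)^2 + L1(x, x'); a real generator would use gradients or a proper optimizer, and every name here is hypothetical.

```typescript
type Model = (x: number[]) => number;

// L1 distance keeps counterfactuals sparse: few features change.
function l1(a: number[], b: number[]): number {
  return a.reduce((s, v, i) => s + Math.abs(v - b[i]), 0);
}

function counterfactual(
  f: Model,
  x: number[],       // original instance
  yTarget: number,   // desired model output
  lambda = 10,       // trade-off between validity and proximity
  iters = 5000,
  stepScale = 0.1,
): number[] {
  let best = [...x];
  let bestLoss = lambda * (f(best) - yTarget) ** 2 + l1(x, best);
  for (let t = 0; t < iters; t++) {
    // Propose a small random perturbation of the current candidate.
    const cand = best.map((v) => v + stepScale * (Math.random() * 2 - 1));
    const loss = lambda * (f(cand) - yTarget) ** 2 + l1(x, cand);
    if (loss < bestLoss) {
      bestLoss = loss;
      best = cand;
    }
  }
  return best;
}
```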
Fairness & bias measurement
KaireonAI’s fairness suite implements standard metrics from public research; a minimal equalized-odds check appears after this list.
- Hardt, Price, Srebro (2016), “Equality of Opportunity in Supervised Learning”, NeurIPS 2016. (arXiv:1610.02413) — equalized odds, equal opportunity.
- Dwork, Hardt, Pitassi, Reingold, Zemel (2012), “Fairness Through Awareness”, ITCS 2012. (arXiv:1104.3913) — individual fairness, Lipschitz ratio.
- Wikipedia: Fairness in machine learning.
- DeLong, DeLong, Clarke-Pearson (1988), “Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach”, Biometrics — the paired-AUC test used in fairness comparison.
- Two-sample Kolmogorov–Smirnov test: classical statistical literature, Wikipedia.
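As a minimal sketch of the equalized-odds metric from Hardt et al.: compare true-positive and false-positive rates across two groups; equal opportunity looks at the TPR gap alone. Row and group names are illustrative.

```typescript
interface Outcome {
  predicted: 0 | 1;
  actual: 0 | 1;
  group: "A" | "B";
}

// P(predicted = 1 | actual = the given value) within a set of rows.
function positiveRate(rows: Outcome[], actual: 0 | 1): number {
  const relevant = rows.filter((r) => r.actual === actual);
  if (relevant.length === 0) return NaN;
  return relevant.filter((r) => r.predicted === 1).length / relevant.length;
}

function equalizedOddsGaps(rows: Outcome[]) {
  const a = rows.filter((r) => r.group === "A");
  const b = rows.filter((r) => r.group === "B");
  return {
    tprGap: Math.abs(positiveRate(a, 1) - positiveRate(b, 1)), // equal opportunity
    fprGap: Math.abs(positiveRate(a, 0) - positiveRate(b, 0)),
  };
}
```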
Online learning & adaptive models
KaireonAI’s online-learning components draw on standard ML literature; a shadow-mode sketch appears after this list.
- Cesa-Bianchi & Lugosi (2006), Prediction, Learning, and Games, Cambridge University Press — foundational text on online learning.
- Sutton & Barto (2018), Reinforcement Learning: An Introduction, Chapter 6: Temporal-difference learning.
- The “shadow mode” deployment pattern (a candidate model receives production traffic for evaluation only) is industry-standard ML-Ops practice; see Google’s Site Reliability Engineering for Machine Learning and AWS SageMaker Shadow Variants documentation.
- The “champion-challenger” model deployment pattern predates modern ML and has been used in banking and credit-scoring since the 1990s; see for example Forbes 2014: A/B Testing Vs. Champion/Challenger Marketing.
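A sketch of the shadow-mode pattern described above (all names hypothetical): the challenger scores every request so its predictions can be compared offline, but the served decision always comes from the champion.

```typescript
type Scorer = (features: Record<string, number>) => number;

function serveWithShadow(
  champion: Scorer,
  challenger: Scorer,
  features: Record<string, number>,
  logShadow: (championScore: number, challengerScore: number) => void,
): number {
  const championScore = champion(features);
  const challengerScore = challenger(features); // evaluated, never served
  logShadow(championScore, challengerScore);    // recorded for offline comparison
  return championScore; // production traffic only ever sees the champion
}
```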
Uplift modeling
KaireonAI’s two-proportion z-test for incremental uplift follows the sources below; the statistic itself is sketched after this list.
- Wikipedia: Uplift modelling.
- Radcliffe & Surry (2011), “Real-World Uplift Modelling with Significance-Based Uplift Trees”, Stochastic Solutions White Paper.
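The statistic itself, as a short sketch: conversion counts out of n in treatment vs. control, with the pooled standard error under the null hypothesis of equal rates.

```typescript
// Two-proportion z-test for uplift: |z| > 1.96 is significant at the
// 5% level (two-sided).
function twoProportionZ(
  convTreat: number, nTreat: number,
  convCtrl: number, nCtrl: number,
): number {
  const pTreat = convTreat / nTreat;
  const pCtrl = convCtrl / nCtrl;
  const pooled = (convTreat + convCtrl) / (nTreat + nCtrl);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nTreat + 1 / nCtrl));
  return (pTreat - pCtrl) / se;
}

// Example: 540/10,000 treated vs. 480/10,000 control conversions.
console.log(twoProportionZ(540, 10000, 480, 10000).toFixed(2)); // ~1.93
```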
Decision flow / decisioning concepts
The conceptual building blocks of KaireonAI’s decisioning engine — eligibility filtering, fit scoring, contact policies, frequency caps, A/B holdout testing, multi-objective ranking — are widely documented in the personalization and recommendation-systems industry literature; an illustrative pipeline appears after this list. Public references:
- Wikipedia: Next-best-action marketing (the generic concept; not a Pega-trademarked term).
- McKinsey, “The value of getting personalization right—or wrong—is multiplying” (2021). (McKinsey)
- BCG, Personalization Index 2025 (2025). (BCG)
- AWS Personalize documentation (public). (AWS)
- Google Recommendations AI documentation (public). (Google Cloud)
- Salesforce Einstein documentation (public). (Salesforce help docs)
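To make the building blocks concrete, an illustrative (and entirely hypothetical) pipeline combining three of them: eligibility filtering, a frequency-cap contact policy, and fit-score ranking.

```typescript
interface Offer {
  id: string;
  eligible: boolean;     // passed eligibility rules
  sentLast7Days: number; // recent contact count for this customer
  fit: number;           // fit score from the ranking model
}

function nextBestActions(offers: Offer[], frequencyCap: number, topN: number): Offer[] {
  return offers
    .filter((o) => o.eligible)                     // eligibility filtering
    .filter((o) => o.sentLast7Days < frequencyCap) // contact policy / frequency cap
    .sort((a, b) => b.fit - a.fit)                 // fit scoring / ranking
    .slice(0, topN);                               // arbitration to top-N actions
}
```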
Implementation libraries
Open-source libraries directly used or referenced for algorithm correctness:
- scikit-learn — logistic regression, gradient boosting reference behavior. scikit-learn.org
- LightGBM — gradient-boosted-tree training (KaireonAI’s gradient_boosted model type). lightgbm.readthedocs.io
- XGBoost — alternative GBM reference. xgboost.readthedocs.io
- shap — TreeSHAP and KernelSHAP reference implementation. github.com/shap/shap
- Anthropic AI SDK — LLM integration for narrative explanations. docs.anthropic.com
- Model Context Protocol (MCP) — standard protocol for AI tool exposure. modelcontextprotocol.io
What KaireonAI is not derived from
- Internal source code, design documents, or compiled artifacts of any commercial proprietary decisioning platform (Pega, Adobe, Salesforce, Braze, Oracle, SAP, IBM, etc.).
- Customer-engagement-specific implementation patterns from any consulting engagement.
- Confidential or login-gated training materials of any commercial vendor.
- Reverse engineering of any proprietary product binary.
Every algorithm in the platform (scoring, explainability, fairness, online learning, and ranking) is built from the public sources documented on this page and cites the public ML literature above.