
Documentation Index

Fetch the complete documentation index at: https://docs.kaireonai.com/llms.txt

Use this file to discover all available pages before exploring further.

KaireonAI computes exact Shapley values for every tree-based and neural-collaborative-filtering model:
  • gradient_boostedTreeSHAP (Lundberg, Erion, Lee 2018, Algorithm 2, path-dependent variant). Deterministic, O(T·L·D²).
  • neural_cfKernelSHAP (Lundberg, Lee 2017). Exact over the 2·embDim feature set when ≤ 12 features, deterministically sampled otherwise.
The additivity identity holds in both cases:
sum(shapValues) + baseline = rawMargin
score(x) = sigmoid(rawMargin)
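Both identities can be checked numerically. A minimal sketch, using made-up shapValues and a made-up baseline (not output from any real model):

```typescript
// Hypothetical SHAP output for one scored offer (illustrative numbers only).
const shapValues: Record<string, number> = { tenure_months: 0.9, average_balance: -0.3 };
const baseline = -0.2;

// sum(shapValues) + baseline = rawMargin
const rawMargin = Object.values(shapValues).reduce((acc, v) => acc + v, 0) + baseline;

// score(x) = sigmoid(rawMargin)
const sigmoid = (z: number) => 1 / (1 + Math.exp(-z));
const score = sigmoid(rawMargin);
```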

When SHAP fires

| Opt-in | Behavior |
| --- | --- |
| llmExplanationsEnabled = false (default) | No SHAP computation. Zero /recommend latency impact. |
| llmExplanationsEnabled = true | SHAP computed for every gradient_boosted + neural_cf candidate; persisted on scoringResults[i].shapValues. |
| POST /decisions/:id/shap (always available) | On-demand SHAP for one offer with caller-supplied attributes. |
Enable in Settings → AI Configuration or via the /api/v1/ai/explanations-settings endpoint:
{ "llmExplanationsEnabled": true }

On-demand SHAP: POST /decisions/:id/shap

For audit workflows where raw attributes live outside the trace (PII minimization), supply them in the request body:
curl -X POST https://your-host/api/v1/decisions/trace_001/shap \
  -H "Content-Type: application/json" \
  -H "X-Api-Key: $KAIREON_API_KEY" \
  -d '{
    "modelId": "mdl_premium_card_gbm",
    "offerId": "off_premium_card",
    "attributes": {
      "recent_transaction_count": 12,
      "tenure_months": 36,
      "average_balance": 15400
    }
  }'
Response:
{
  "shapValues": {
    "recent_transaction_count": 1.314,
    "tenure_months": 0.428,
    "average_balance": -0.211
  },
  "baseline": -0.602,
  "rawMargin": 0.929,
  "additivityResidual": 0.0,
  "featureCount": 3
}
additivityResidual reports |rawMargin − baseline − Σ shapValues| on every call. A non-zero residual indicates a malformed model and should be treated as a warning.
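Callers can recompute the residual themselves before trusting the values. A sketch, where the ShapResponse shape mirrors the response fields above and the helper itself is hypothetical, not part of any KaireonAI SDK:

```typescript
// Mirrors the documented response fields of POST /decisions/:id/shap.
interface ShapResponse {
  shapValues: Record<string, number>;
  baseline: number;
  rawMargin: number;
  additivityResidual: number;
  featureCount: number;
}

// Recompute |rawMargin − baseline − Σ shapValues| and compare to a tolerance.
// Returns true when the additivity identity holds within `tolerance`.
function checkAdditivity(resp: ShapResponse, tolerance = 1e-6): boolean {
  const sum = Object.values(resp.shapValues).reduce((a, v) => a + v, 0);
  const residual = Math.abs(resp.rawMargin - resp.baseline - sum);
  return residual <= tolerance;
}
```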

Performance

The TreeSHAP implementation is in-place buffered: each recursion writes into its own non-overlapping window of a pre-allocated structure-of-arrays buffer (Float64Array + Int32Array), and sibling subtrees share scratch space because they execute serially. There is no per-node object allocation along the hot path. Measured on synthetic LightGBM-shaped ensembles (best of 3, locally):
| Shape | Per-call (ms) |
| --- | --- |
| 100 trees × depth 5 (LightGBM default) | ~1.0 |
| 100 trees × depth 7 | ~5.4 |
| 100 trees × depth 8 | ~12.3 |
| 200 trees × depth 6 | ~4.8 |
Two perf-regression tests (tree-shap-perf.test.ts) lock these budgets in CI with a 4× cushion, so any future change that reintroduces per-node cloning trips them immediately.
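The buffering scheme can be illustrated in miniature: one pre-allocated Float64Array, each recursion depth owning a fixed non-overlapping window, and sibling subtrees reusing the same window because they execute serially. This is a simplified sketch of the idea, not the actual TreeSHAP code:

```typescript
// Illustrative only: one scratch buffer for the whole recursion,
// sized by maximum depth, with no per-node allocation on the hot path.
const MAX_DEPTH = 8;
const WIDTH = 4; // scratch entries per recursion level (arbitrary here)
const scratch = new Float64Array(MAX_DEPTH * WIDTH);

function visit(depth: number, value: number): number {
  const base = depth * WIDTH; // this level's non-overlapping window
  scratch[base] = value;      // write into pre-allocated space, never `new`
  if (depth + 1 === MAX_DEPTH) return scratch[base];
  // Left and right children run serially, so both reuse the depth+1 window:
  // by the time the right child writes, the left child's result is already out.
  const left = visit(depth + 1, value * 0.5);
  const right = visit(depth + 1, value * 0.25);
  return left + right + scratch[base];
}
```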

Persisted SHAP (hot path)

When opted in, DecisionTrace.scoringResults[i] carries:
{
  "offerId": "off_premium_card",
  "score": 0.73,
  "modelType": "gradient_boosted",
  "explanations": [ ... path-heuristic factors ... ],
  "shapValues": { "recent_transaction_count": 1.31, ... },
  "shapBaseline": -0.602
}
The Narrative API automatically includes SHAP in the LLM prompt when present, and the in-app Explain dialog renders a signed bar chart of the top 8 contributions in the Regulator tab.
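Selecting the top 8 signed contributions for such a chart is a sort by absolute magnitude that keeps the sign. A sketch (the helper name is hypothetical, not part of any KaireonAI SDK):

```typescript
// Pick the k largest SHAP contributions by |value|, preserving sign
// so positive and negative bars render correctly in a signed bar chart.
function topContributions(
  shapValues: Record<string, number>,
  k = 8,
): Array<[string, number]> {
  return Object.entries(shapValues)
    .sort(([, a], [, b]) => Math.abs(b) - Math.abs(a))
    .slice(0, k);
}
```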

Probability-space approximation

SHAP values are in raw-margin (logit) space — the only space where the additivity identity holds. For probability-space effect size:
delta_p_i ≈ sigmoid(baseline + shap_i) − sigmoid(baseline)
This delta is monotone in shap_i but not additive in probability space. The UI surfaces both: raw φ for compliance, sigmoid-mapped delta for human-readable display.
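The mapping can be sketched as follows. The per-feature deltas preserve the sign of each shap_i but in general do not sum to the total probability change sigmoid(rawMargin) − sigmoid(baseline); the helper and any numbers used with it are illustrative:

```typescript
const sigmoid = (z: number) => 1 / (1 + Math.exp(-z));

// Per-feature probability-space effect sizes:
//   delta_p_i ≈ sigmoid(baseline + shap_i) − sigmoid(baseline)
// Sign-preserving and monotone in shap_i, but NOT additive in probability space.
function probabilityDeltas(
  shapValues: Record<string, number>,
  baseline: number,
): Record<string, number> {
  const base = sigmoid(baseline);
  return Object.fromEntries(
    Object.entries(shapValues).map(([f, phi]) => [f, sigmoid(baseline + phi) - base]),
  );
}
```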

Neural CF SHAP (KernelSHAP)

For neural_cf models, features are the 2·embDim user+item embedding coordinates:
  • user.emb_0, user.emb_1, …, user.emb_{D−1}
  • item.emb_0, item.emb_1, …, item.emb_{D−1}
Baseline = zero embeddings (uninformative prior). The exact solver runs when 2·embDim ≤ 12 (2^12 = 4096 coalitions); above that threshold, sampled KernelSHAP runs with 512 samples by default. The sampling seed is exposed in the result for reproducibility.
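The feature naming and the exact-vs-sampled switch can be sketched as below; the helper and its return shape are illustrative, not the SDK API, and the 512-sample figure is the documented default:

```typescript
// Build the 2·embDim KernelSHAP feature names (user.emb_i, item.emb_i)
// and decide between the exact solver and sampled KernelSHAP.
function kernelShapPlan(embDim: number) {
  const features = [
    ...Array.from({ length: embDim }, (_, i) => `user.emb_${i}`),
    ...Array.from({ length: embDim }, (_, i) => `item.emb_${i}`),
  ];
  // Exact enumeration is feasible up to 2^12 = 4096 coalitions.
  const exact = features.length <= 12;
  return {
    features,
    exact,
    coalitions: exact ? 2 ** features.length : 512, // 512 = sampling default
  };
}
```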

Compute SHAP from the Decision UI

neural_cf decisions do not persist SHAP onto the trace (PII minimization — raw user attributes are not retained), so the studio's "Explain this decision" dialog can compute KernelSHAP on demand. When the dialog detects modelType === "neural_cf" and the trace has no shapValues, the SHAP sub-tab swaps the bar chart for a small developer panel that takes the model id (auto-prefilled from the trace's scoringResults[i].modelId when present) and the attributes JSON used at scoring time. Clicking Run posts to /api/v1/decisions/:id/shap and renders the returned values in the same bar chart, labelled "KernelSHAP contributions (sampled)", with the additivity residual shown alongside. Gradient-boosted decisions skip the panel because TreeSHAP values are already on the trace whenever tenantSettings.aiAnalyzerSettings.llmExplanationsEnabled = true.
See also: LLM Explanations | Fairness + Drift | Decision Traces API