KaireonAI computes exact Shapley values for every tree-based and neural-collaborative-filtering model:
- gradient_boosted — TreeSHAP (Lundberg, Erion, Lee 2018, Algorithm 2, path-dependent variant). Deterministic, O(T·L·D²).
- neural_cf — KernelSHAP (Lundberg, Lee 2017). Exact over the 2·embDim feature set when ≤ 12 features, deterministically sampled otherwise.
When SHAP fires
| Opt-in | Behavior |
|---|---|
| llmExplanationsEnabled = false (default) | No SHAP computation. Zero /recommend latency impact. |
| llmExplanationsEnabled = true | SHAP computed for every gradient_boosted + neural_cf candidate; persisted on scoringResults[i].shapValues. |
| POST /decisions/:id/shap (always available) | On-demand SHAP for one offer with caller-supplied attributes. |
The opt-in is controlled per tenant via the /api/v1/ai/explanations-settings endpoint.
On-demand SHAP: POST /decisions/:id/shap
For audit workflows where raw attributes live outside the trace (PII
minimization), supply them in the request body:
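A minimal sketch of such a call; the body field names (modelId, attributes) are assumptions based on the on-demand panel described later on this page, so check the API reference for the exact schema:

```typescript
// Hedged sketch: builds the on-demand SHAP request for one decision.
// modelId and attributes are assumed body fields, not a confirmed schema.
function buildShapRequest(
  decisionId: string,
  modelId: string,
  attributes: Record<string, number>
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: `/api/v1/decisions/${decisionId}/shap`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ modelId, attributes }),
    },
  };
}

// Usage: pass req.url and req.init to fetch() from an authenticated session.
const req = buildShapRequest("dec_123", "neural_cf_v2", { "user.emb_0": 0.42 });
```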
additivityResidual reports |rawMargin − baseline − Σ shapValues|
on every call. Non-zero residual indicates a malformed model and
should be treated as a warning.
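The check itself is a one-liner; a sketch mirroring the formula above:

```typescript
// Sketch of the additivity check described above. For exact TreeSHAP the
// residual should be ~0; a non-zero value signals a malformed model.
function additivityResidual(
  rawMargin: number,
  baseline: number,
  shapValues: number[]
): number {
  const sum = shapValues.reduce((a, b) => a + b, 0);
  return Math.abs(rawMargin - baseline - sum);
}
```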
Performance
The TreeSHAP implementation is in-place buffered: each recursion writes into its own non-overlapping window of a pre-allocated structure-of-arrays buffer (Float64Array + Int32Array), and sibling subtrees share scratch space because they execute serially. There is no per-node object allocation along the hot path. Measured on synthetic LightGBM-shaped ensembles (best of 3, locally):

| Shape | Per-call (ms) |
|---|---|
| 100 trees × depth 5 (LightGBM default) | ~1.0 |
| 100 trees × depth 7 | ~5.4 |
| 100 trees × depth 8 | ~12.3 |
| 200 trees × depth 6 | ~4.8 |
Performance tests (tree-shap-perf.test.ts) lock these budgets in CI with a 4× cushion, so a future change that reintroduces per-node cloning trips immediately.
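The buffering scheme can be illustrated in miniature (illustrative names, not the real implementation): each recursion depth owns a fixed window of one pre-allocated Float64Array, and sibling subtrees reuse the deeper windows because they run serially.

```typescript
// Miniature illustration of the in-place buffered recursion described
// above. Depth d writes only into scratch[d*WIDTH .. (d+1)*WIDTH), so no
// per-node objects are allocated on the hot path.
type TreeNode = { leaf: number } | { left: TreeNode; right: TreeNode };

const MAX_DEPTH = 8;
const WIDTH = 2; // slots per recursion frame
const scratch = new Float64Array((MAX_DEPTH + 1) * WIDTH);

function sumLeaves(n: TreeNode, depth = 0): number {
  if ("leaf" in n) return n.leaf;
  const base = depth * WIDTH; // this depth's non-overlapping window
  // Left and right subtrees execute serially, so both recursive calls
  // safely reuse the same windows at depth+1 and deeper.
  scratch[base] = sumLeaves(n.left, depth + 1);
  scratch[base + 1] = sumLeaves(n.right, depth + 1);
  return scratch[base] + scratch[base + 1];
}
```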
Persisted SHAP (hot path)
When opted in, DecisionTrace.scoringResults[i] carries the persisted shapValues.
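A hedged sketch of that payload's shape; only shapValues is named elsewhere on this page, while rawMargin, baseline, and additivityResidual are assumptions drawn from the residual formula above:

```typescript
// Hedged sketch of the per-candidate SHAP payload. Field names other
// than shapValues are assumed from the additivity formula, not confirmed.
interface ShapPayload {
  shapValues: Record<string, number>; // feature -> raw-margin contribution
  rawMargin: number;
  baseline: number;
  additivityResidual: number; // |rawMargin - baseline - Σ shapValues|
}

const example: ShapPayload = {
  shapValues: { income: 0.31, tenureMonths: -0.12 },
  rawMargin: 0.69,
  baseline: 0.5,
  additivityResidual: 0,
};
```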
Probability-space approximation
SHAP values are in raw-margin (logit) space, the only space where the additivity identity holds. For a probability-space effect size, each contribution can be mapped through the sigmoid, e.g. delta_i = sigmoid(baseline + shap_i) − sigmoid(baseline); this is readable but not additive in probability space.
The UI surfaces both: raw φ for compliance, and the sigmoid-mapped delta for human-readable display.
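A sketch of that mapping, assuming the standard logistic sigmoid; the non-additivity is easy to demonstrate numerically:

```typescript
// Maps a raw-margin SHAP contribution to a probability-space delta.
// Deltas are human-readable but do not sum to the total probability
// change, because the sigmoid is nonlinear.
const sigmoid = (x: number): number => 1 / (1 + Math.exp(-x));

function probDelta(baseline: number, shap: number): number {
  return sigmoid(baseline + shap) - sigmoid(baseline);
}
```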
Neural CF SHAP (KernelSHAP)
For neural_cf models, features are the 2·embDim user+item embedding
coordinates:
- user.emb_0, user.emb_1, …, user.emb_{D−1}
- item.emb_0, item.emb_1, …, item.emb_{D−1}
Exact enumeration runs when 2·embDim ≤ 12 (2^12 = 4096 coalitions); sampled
KernelSHAP with a default of 512 samples runs above that threshold. The
sampling seed is exposed in the result for reproducibility.
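The threshold logic in sketch form (the constant names and seed value are illustrative, not the real implementation):

```typescript
// Sketch of the exact-vs-sampled switch described above: enumerate all
// coalitions up to 12 features, otherwise sample with a fixed seed.
const EXACT_FEATURE_LIMIT = 12;
const DEFAULT_SAMPLES = 512;

function kernelShapPlan(embDim: number) {
  const nFeatures = 2 * embDim; // user + item embedding coordinates
  return nFeatures <= EXACT_FEATURE_LIMIT
    ? { mode: "exact" as const, coalitions: 2 ** nFeatures }
    : { mode: "sampled" as const, samples: DEFAULT_SAMPLES, seed: 42 };
}
```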
Compute SHAP from the Decision UI
neural_cf decisions do not persist SHAP onto the trace (PII
minimization — raw user attributes are not retained), so the studio
Explain this decision dialog can compute KernelSHAP on demand.
When the dialog detects modelType === "neural_cf" and the trace has
no shapValues, the SHAP sub-tab swaps the bar chart for a small
developer panel that takes the model id (auto-prefilled from the trace’s
scoringResults[i].modelId when present) and the attributes JSON used
at scoring time. Clicking Run posts to /api/v1/decisions/:id/shap
and renders the returned values in the same bar chart, labelled
“KernelSHAP contributions (sampled)” with the additivity residual
shown alongside.
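The dialog's branching amounts to a single predicate; a hedged sketch, with property names following the trace fields mentioned on this page:

```typescript
// Sketch of the SHAP sub-tab decision: show the on-demand developer
// panel only for neural_cf traces that carry no persisted shapValues.
interface ScoringResult {
  modelType: string;
  modelId?: string;
  shapValues?: Record<string, number>;
}

function showOnDemandPanel(r: ScoringResult): boolean {
  return r.modelType === "neural_cf" && r.shapValues === undefined;
}
```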
Gradient-boosted decisions skip the panel because TreeSHAP values are
already on the trace whenever tenantSettings.aiAnalyzerSettings.llmExplanationsEnabled = true.
See also: LLM Explanations | Fairness + Drift | Decision Traces API