KaireonAI computes Shapley values for every tree-based and neural-collaborative-filtering model, exactly wherever feasible:
  • gradient_boostedTreeSHAP (Lundberg, Erion, Lee 2018, Algorithm 2, path-dependent variant). Deterministic, O(T·L·D²).
  • neural_cfKernelSHAP (Lundberg, Lee 2017). Exact over the 2·embDim feature set when ≤ 12 features, deterministically sampled otherwise.
The additivity identity holds in both:
sum(shapValues) + baseline = rawMargin
score(x) = sigmoid(rawMargin)
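The identity can be checked numerically. A minimal sketch, using the illustrative values from the example response below (not output from a real call):

```python
import math

# Illustrative values mirroring the example response in this page.
shap_values = {
    "recent_transaction_count": 1.314,
    "tenure_months": 0.428,
    "average_balance": -0.211,
}
baseline = -0.602

# Additivity identity: sum(shapValues) + baseline = rawMargin
raw_margin = sum(shap_values.values()) + baseline

# score(x) = sigmoid(rawMargin)
score = 1.0 / (1.0 + math.exp(-raw_margin))
```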

When SHAP fires

  • llmExplanationsEnabled = false (default): no SHAP computation. Zero /recommend latency impact.
  • llmExplanationsEnabled = true: SHAP computed for every gradient_boosted + neural_cf candidate; persisted on scoringResults[i].shapValues.
  • POST /decisions/:id/shap (always available): on-demand SHAP for one offer with caller-supplied attributes.
Enable in Settings → AI Configuration or via the /api/v1/ai/explanations-settings endpoint:
{ "llmExplanationsEnabled": true }

On-demand SHAP: POST /decisions/:id/shap

For audit workflows where raw attributes live outside the trace (PII minimization), supply them in the request body:
curl -X POST https://your-host/api/v1/decisions/trace_001/shap \
  -H "Content-Type: application/json" \
  -H "X-Api-Key: $KAIREON_API_KEY" \
  -d '{
    "modelId": "mdl_premium_card_gbm",
    "offerId": "off_premium_card",
    "attributes": {
      "recent_transaction_count": 12,
      "tenure_months": 36,
      "average_balance": 15400
    }
  }'
{
  "shapValues": {
    "recent_transaction_count": 1.314,
    "tenure_months": 0.428,
    "average_balance": -0.211
  },
  "baseline": -0.602,
  "rawMargin": 1.012,
  "additivityResidual": 0.0,
  "featureCount": 3
}
additivityResidual reports |rawMargin − baseline − Σ shapValues| on every call. A non-zero residual indicates a malformed model and should be treated as a warning.
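Callers can recompute the residual independently rather than trusting the reported value. A minimal client-side sketch, assuming the response shape shown above (the tolerance is an assumption, not a documented default):

```python
def check_additivity(resp, tol=1e-9):
    """Recompute |rawMargin - baseline - sum(shapValues)| from a parsed
    /decisions/:id/shap response and compare against a tolerance."""
    residual = abs(resp["rawMargin"] - resp["baseline"]
                   - sum(resp["shapValues"].values()))
    return residual <= tol

# Illustrative response fragment (not real API output).
resp = {
    "shapValues": {"recent_transaction_count": 0.5, "tenure_months": -0.2},
    "baseline": -0.1,
    "rawMargin": 0.2,
}
ok = check_additivity(resp)
```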

Persisted SHAP (hot path)

When opted in, DecisionTrace.scoringResults[i] carries:
{
  "offerId": "off_premium_card",
  "score": 0.73,
  "modelType": "gradient_boosted",
  "explanations": [ ... path-heuristic factors ... ],
  "shapValues": { "recent_transaction_count": 1.31, ... },
  "shapBaseline": -0.602
}
The Narrative API automatically includes SHAP in the LLM prompt when present, and the in-app Explain dialog renders a signed bar chart of the top 8 contributions in the Regulator tab.
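The top-contributions selection is simple to reproduce client-side. A sketch of the ranking the Explain dialog applies (top 8 by magnitude, signs preserved); this is an illustration, not the shipped implementation:

```python
def top_contributions(shap_values, k=8):
    """Sort features by |phi| descending while keeping the signed values,
    mirroring the signed bar chart in the Regulator tab (k=8 per the docs)."""
    return sorted(shap_values.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]

phi = {"recent_transaction_count": 1.31,
       "tenure_months": 0.428,
       "average_balance": -0.211}
top2 = top_contributions(phi, k=2)
```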

Probability-space approximation

SHAP values are in raw-margin (logit) space — the only space where the additivity identity holds. For probability-space effect size:
delta_p_i ≈ sigmoid(baseline + shap_i) − sigmoid(baseline)
This mapping is monotone in shap_i but not additive in probability space. The UI surfaces both: raw φ for compliance, the sigmoid-mapped delta for human-readable display.
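A sketch of the approximation, using the illustrative values from the example response above; the final assertion-style comparison shows why the deltas must not be summed:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def prob_deltas(shap_values, baseline):
    # delta_p_i ≈ sigmoid(baseline + shap_i) − sigmoid(baseline)
    p0 = sigmoid(baseline)
    return {f: sigmoid(baseline + phi) - p0 for f, phi in shap_values.items()}

baseline = -0.602
phi = {"recent_transaction_count": 1.314,
       "tenure_months": 0.428,
       "average_balance": -0.211}
deltas = prob_deltas(phi, baseline)

# Monotone: each delta carries the same sign as its shap value,
# but summing the deltas does not reproduce the full probability shift.
full_shift = sigmoid(baseline + sum(phi.values())) - sigmoid(baseline)
```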

Neural CF SHAP (KernelSHAP)

For neural_cf models, features are the 2·embDim user+item embedding coordinates:
  • user.emb_0, user.emb_1, …, user.emb_{D−1}
  • item.emb_0, item.emb_1, …, item.emb_{D−1}
Baseline = zero embeddings (uninformative prior). The exact solver runs when 2·embDim ≤ 12 (2^12 = 4096 coalitions); sampled KernelSHAP with 512 default samples runs above that threshold. Seed is exposed in the result for reproducibility.
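The exact regime enumerates every coalition, which is why it is capped at 2·embDim ≤ 12. A toy brute-force Shapley computation over all 2^n coalitions, with absent features masked to a baseline (a sketch of the exact solver's principle, not KaireonAI's implementation; the linear model here is purely illustrative):

```python
import math
from itertools import combinations

def exact_shapley(f, x, baseline_x):
    """Brute-force Shapley values: for each feature i, average its marginal
    contribution f(S ∪ {i}) − f(S) over all coalitions S, with the standard
    Shapley weights |S|!(n−|S|−1)!/n!. Cost grows as 2^n, hence the
    2·embDim ≤ 12 cap (4096 coalitions) on the exact solver."""
    n = len(x)
    idx = list(range(n))
    phi = [0.0] * n
    for i in idx:
        others = [j for j in idx if j != i]
        for k in range(len(others) + 1):
            w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
            for S in combinations(others, k):
                with_i = [x[j] if (j in S or j == i) else baseline_x[j] for j in idx]
                without_i = [x[j] if j in S else baseline_x[j] for j in idx]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Toy linear model: Shapley values recover each term's contribution exactly.
f = lambda v: 2.0 * v[0] - 1.0 * v[1] + 0.5 * v[2]
phi = exact_shapley(f, x=[1.0, 1.0, 1.0], baseline_x=[0.0, 0.0, 0.0])
```

For a linear model with a zero baseline, each φ_i equals its coefficient times x_i, and the additivity identity Σφ + f(baseline) = f(x) holds exactly.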
See also: LLM Explanations | Fairness + Drift | Decision Traces API