Documentation Index

Fetch the complete documentation index at: https://docs.kaireonai.com/llms.txt

Use this file to discover all available pages before exploring further.

Multi-language narratives (W4.1)

POST /api/v1/decisions/:id/narrative now accepts a language field in the request body. Honored only when the tenant has tenantSettings.aiAnalyzerSettings.multiLanguageEnabled = true.
{
  "mode": "regulator",
  "language": "es-MX"
}

Supported languages

12 ISO 639-1 codes: en, es, fr, de, pt, it, nl, ja, zh, ko, hi, ar. BCP-47 tags are normalized to their primary subtag — en-US → en, es-MX → es, etc. Any unrecognized locale falls back to English.
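The normalization rule above can be sketched as follows. This is a minimal illustration, not the shipped implementation — the function name `normalizeLanguage` and the exact tag handling are assumptions:

```typescript
// Hypothetical sketch of the locale normalization described above.
const SUPPORTED = new Set([
  "en", "es", "fr", "de", "pt", "it", "nl", "ja", "zh", "ko", "hi", "ar",
]);

function normalizeLanguage(tag: string | undefined): string {
  if (!tag) return "en";
  // BCP-47 tags reduce to their primary ISO 639-1 subtag: en-US -> en.
  const primary = tag.toLowerCase().split("-")[0];
  // Any unrecognized locale falls back to English.
  return SUPPORTED.has(primary) ? primary : "en";
}

console.log(normalizeLanguage("es-MX")); // -> "es"
console.log(normalizeLanguage("en-US")); // -> "en"
```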

Response additions

{
  "narrative": "...",
  "mode": "regulator",
  "model": "claude-haiku-4-5",
  "cached": false,
  "language": "es",
  "quality": { "score": 87, "grade": "B" },
  "leakWarnings": []
}
  • language: BCP-47 language actually used after normalization.
  • quality: deterministic 0-100 score + A-F grade based on structural rules per mode (regulator narratives expect numbered bullets, agent narratives expect JSON, etc.).
  • leakWarnings: lines flagged as likely English-leak in non-Latin script outputs. Best-effort — never blocks delivery.
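One plausible shape for the leak check is a per-line Latin-letter ratio. The heuristic below is purely illustrative — the shipped detector is not specified here, and the 0.7 threshold is an assumption:

```typescript
// Illustrative heuristic only: flag lines that are mostly ASCII Latin
// letters inside an output expected to be in a non-Latin script
// (e.g. ja, zh, ko, hi, ar). Best-effort, never blocks delivery.
function findLeakWarnings(narrative: string): string[] {
  return narrative.split("\n").filter((line) => {
    const latinLetters = line.replace(/[^A-Za-z]/g, "").length;
    // A non-empty line that is mostly Latin letters is a likely English leak.
    return line.trim().length > 0 && latinLetters / line.length > 0.7;
  });
}
```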

Cache key

The narrative cache key now includes the language, so the same decision trace can be cached per (tenant × trace × mode × model × language). No re-translation cost when the same narrative is requested twice.

GitOps drift detection cron (W4.4)

GET /api/v1/cron/gitops-drift-check (Bearer CRON_SECRET) sweeps every tenant nightly, comparing live production resources (ArbitrationProfile, ContactPolicy, QualificationRule, Offer) against the last-applied YAML snapshot. Drift events are recorded in AuditLog with action: "gitops_drift_detected".

Note: until the GitOpsLastApplied Prisma model lands, the last-applied snapshot is treated as empty — the sweep still detects added_in_prod resources but cannot detect field-level drift. When the storage migration arrives, the cron route will compare full specs.

In helm/values.yaml, add to the cron tier:
schedules:
  gitopsDriftCheck: "0 5 * * *"   # daily 5am UTC
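The interim classification logic can be sketched like this. Type and function names are illustrative, not the shipped implementation:

```typescript
// Minimal sketch of the drift sweep described above.
type Resource = { kind: string; name: string };

function detectAddedInProd(live: Resource[], lastApplied: Resource[]): Resource[] {
  const applied = new Set(lastApplied.map((r) => `${r.kind}/${r.name}`));
  // Until GitOpsLastApplied lands, lastApplied is effectively [] — so every
  // live resource surfaces as added_in_prod; field-level diffs come later.
  return live.filter((r) => !applied.has(`${r.kind}/${r.name}`));
}
```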

Admin SBOM endpoint (W4.5)

GET /api/v1/admin/sbom returns the CycloneDX 1.5 SBOM for the running deployment, computed on-demand from package-lock.json. Admin role required.
curl -H "Authorization: Bearer $ADMIN_TOKEN" \
     -H "X-Requested-With: XMLHttpRequest" \
     https://playground.kaireonai.com/api/v1/admin/sbom \
  | jq '.components | length'
# → 1643
Response shape:
{
  "sbom": { "bomFormat": "CycloneDX", "specVersion": "1.5", "..." },
  "digest": "<sha256 hex>",
  "components": 1643
}
The X-SBOM-Digest response header carries the same digest so caching proxies can serve the right artifact. For release-time SBOM artifacts (signed alongside the Docker image), see the W3 release pipeline at .github/workflows/release.yml.
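A client can recompute the digest to verify a cached artifact. The sketch below assumes the digest is SHA-256 over the serialized SBOM JSON; the server's exact canonicalization is not specified here:

```typescript
import { createHash } from "node:crypto";

// Sketch: recompute an SBOM digest to compare against X-SBOM-Digest.
// Assumption: sha256 hex over the serialized SBOM JSON.
function sbomDigest(sbom: unknown): string {
  return createHash("sha256").update(JSON.stringify(sbom)).digest("hex");
}
```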

Counterfactual + LIME explanation modes (W4.2 + W4.3)

POST /api/v1/decisions/:id/narrative now accepts an explanationType field with three values:
  • "shap" (default) — existing behavior: the trace's persisted SHAP values flow into the LLM context. No extra inputs required.
  • "counterfactual" — runs findCounterfactuals() against a real gradient_boosted scorer; finds the smallest single-feature change that would flip the decision.
  • "lime" — runs computeLime() against the same scorer; returns local linear coefficients (per-feature effect on the score).
Counterfactual + LIME are gated by a separate tenant flag because they recompute against the live model on every call:
{
  "aiAnalyzerSettings": {
    "llmExplanationsEnabled": true,
    "advancedExplanationsEnabled": true
  }
}

Required body for non-SHAP modes

{
  "mode": "regulator",
  "explanationType": "counterfactual",
  "modelId": "<gradient_boosted AlgorithmModel id>",
  "attributes": { "income": 40000, "credit_score": 600 },
  "featureRanges": [
    { "feature": "income", "lower": 10000, "upper": 200000 },
    { "feature": "credit_score", "lower": 500, "upper": 850 }
  ]
}
  • modelId — must be a tenant-owned gradient_boosted AlgorithmModel with at least one trained tree. Any other model type returns a 400.
  • attributes — same shape used at scoring time (the trace doesn’t persist raw attributes by default, so the caller passes them).
  • featureRanges — required only for counterfactual; defines the empirical p05/p95 bounds the binary search may explore.
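The rules above amount to a simple pre-flight check. This is an illustrative client-side mirror of the validation; the endpoint performs its own checks and returns a 400 on failure:

```typescript
// Illustrative pre-flight validation mirroring the requirements above.
type NarrativeBody = {
  mode: string;
  explanationType?: "shap" | "counterfactual" | "lime";
  modelId?: string;
  attributes?: Record<string, number>;
  featureRanges?: { feature: string; lower: number; upper: number }[];
};

function validateNarrativeBody(body: NarrativeBody): string[] {
  const errors: string[] = [];
  const type = body.explanationType ?? "shap";
  if (type === "shap") return errors; // no extra inputs required
  if (!body.modelId) errors.push("modelId is required for non-SHAP modes");
  if (!body.attributes) errors.push("attributes are required for non-SHAP modes");
  if (type === "counterfactual" && !body.featureRanges?.length) {
    errors.push("featureRanges are required for counterfactual");
  }
  return errors;
}
```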

Response shape additions

{
  "narrative": "...",
  "explanationType": "counterfactual",
  "advancedExplanation": {
    "type": "counterfactual",
    "counterfactuals": [
      {
        "feature": "income",
        "originalValue": 40000,
        "proposedValue": 50500.25,
        "originalScore": 0.18,
        "proposedScore": 0.51,
        "cost": 0.055
      }
    ]
  }
}
LIME response:
{
  "explanationType": "lime",
  "advancedExplanation": {
    "type": "lime",
    "lime": {
      "coefficients": { "income": 1.42, "credit_score": -0.03 },
      "r2": 0.81,
      "samples": 500
    }
  }
}

Honest limits

  • We do not synthesize a scorer from persisted SHAP values. If the caller doesn’t pass a real modelId + attributes, the request is rejected — we’d rather 400 than mislead.
  • Counterfactual search is single-feature, numeric-only. Multi-feature joint counterfactuals are out of scope for V1.
  • LIME samples default to 500. Bump via options.limeSamples if you need a tighter weighted R² fit.

Cache key

The narrative cache key now includes the explanationType, so each explanation primitive caches independently per (tenant × trace × mode × model × language × explanationType).
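The full tuple can be sketched as a single composite key. The delimiter and field order here are illustrative, not the shipped format:

```typescript
// Sketch of the cache-key tuple described above:
// (tenant x trace x mode x model x language x explanationType).
function narrativeCacheKey(p: {
  tenantId: string;
  traceId: string;
  mode: string;
  model: string;
  language: string;
  explanationType: string;
}): string {
  return [p.tenantId, p.traceId, p.mode, p.model, p.language, p.explanationType].join(":");
}
```

Because explanationType is part of the key, SHAP, counterfactual, and LIME narratives for the same trace cache independently.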

UI: NarrativeDialog tabs

<NarrativeDialog /> renders a 3-button strip inside each mode tab: SHAP / Counterfactual / LIME. Switching tabs hits the cache before re-fetching, so explanations switch instantly after the first load.

When advancedDefaults are not supplied to the dialog, the Counterfactual + LIME tabs render a developer panel where the operator can paste a modelId, an attributes JSON, and (for counterfactual) a featureRanges JSON, then click Run. This is the same shape the SHAP endpoint already accepts.
<NarrativeDialog
  decisionTraceId={trace.id}
  advancedDefaults={{
    modelId: "gbm-prod-v3",
    attributes: { income: 40000, credit_score: 600 },
    featureRanges: [
      { feature: "income", lower: 10000, upper: 200000 },
    ],
  }}
/>