Documentation Index
Fetch the complete documentation index at: https://docs.kaireonai.com/llms.txt
Use this file to discover all available pages before exploring further.
Fairness hard-gate (publish-time enforcement)
POST /api/v1/decision-flows/publish runs the fairness hard-gate as a
pre-publish check when the tenant has opted in. If configured
thresholds breach, the publish is blocked with HTTP 422 and a
structured violation report — the new flow version is not written.
Configuration
Set on tenant.settings.fairnessPolicy:
| Field | Default | Purpose |
|---|---|---|
| enabled | false | Master switch. When unset/false the gate is a no-op. |
| sensitiveAttribute | — | The protected-attribute key the gate looks up in decision_trace.qualificationResults[*].context.attributes. |
| thresholds.disparateImpactRatio | 0.8 | Four-fifths rule (29 CFR § 1607.4(D)). Below this → block. |
| thresholds.demographicParityGap | 0.2 | Max allowed maxRate − minRate across groups. Above this → block. |
| thresholds.equalOpportunityGap | 0.2 | Max allowed TPR gap when ground-truth labels are present. |
| minSampleSize | 100 | Skip the gate (not block) when fewer than this many traces in the last 7 days carry the sensitive attribute. |
| override | — | Four-eyes bypass: { approvedBy, expiresAt }. Active overrides skip the gate (audit-logged). |
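Putting the fields together, a fairnessPolicy value might look like this (illustrative values only; the field names follow the table above, and the override emails/timestamps are made up):

```json
{
  "enabled": true,
  "sensitiveAttribute": "gender",
  "thresholds": {
    "disparateImpactRatio": 0.8,
    "demographicParityGap": 0.2,
    "equalOpportunityGap": 0.2
  },
  "minSampleSize": 100,
  "override": {
    "approvedBy": "compliance-officer@example.com",
    "expiresAt": "2025-07-01T00:00:00Z"
  }
}
```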
Behavior
- Not configured / disabled — gate is a no-op, publish proceeds.
- Active override (not expired) — gate is skipped, publish proceeds, audit log records the bypass.
- Insufficient samples — gate is skipped (enforced: false, reason explains samples < minSampleSize).
- Thresholds breached — publish blocked with HTTP 422 and a structured violation report.
- Infrastructure error — fail-open (publish proceeds, warning logged) so a transient DB blip doesn’t block legitimate compliance work.
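The decision order above can be sketched as follows. This is a minimal illustration, not the platform's actual helper: the type and function names are hypothetical, and the fail-open behavior for infrastructure errors is assumed to wrap a call like this from the outside.

```typescript
// Hypothetical shapes mirroring the documented config and behavior.
interface FairnessPolicy {
  enabled: boolean;
  sensitiveAttribute?: string;
  thresholds: {
    disparateImpactRatio: number; // default 0.8
    demographicParityGap: number; // default 0.2
    equalOpportunityGap: number;  // default 0.2
  };
  minSampleSize: number;          // default 100
  override?: { approvedBy: string; expiresAt: string };
}

type GateOutcome =
  | { enforced: false; reason: string }                        // gate skipped
  | { enforced: true; blocked: boolean; violations: string[] }; // gate ran

function runFairnessGate(
  policy: FairnessPolicy | undefined,
  sampleCount: number,
  metrics: { diRatio: number; dpGap: number },
  now: Date = new Date(),
): GateOutcome {
  // 1. Not configured / disabled: no-op.
  if (!policy || !policy.enabled) return { enforced: false, reason: "gate disabled" };
  // 2. Active (unexpired) four-eyes override: skip; the bypass is audit-logged elsewhere.
  if (policy.override && new Date(policy.override.expiresAt) > now)
    return { enforced: false, reason: `override by ${policy.override.approvedBy}` };
  // 3. Too few traces carrying the sensitive attribute: skip rather than block.
  if (sampleCount < policy.minSampleSize)
    return { enforced: false, reason: `samples < minSampleSize (${sampleCount} < ${policy.minSampleSize})` };
  // 4. Evaluate thresholds; any breach blocks the publish (surfaced as HTTP 422).
  const violations: string[] = [];
  if (metrics.diRatio < policy.thresholds.disparateImpactRatio)
    violations.push(`disparateImpactRatio ${metrics.diRatio} < ${policy.thresholds.disparateImpactRatio}`);
  if (metrics.dpGap > policy.thresholds.demographicParityGap)
    violations.push(`demographicParityGap ${metrics.dpGap} > ${policy.thresholds.demographicParityGap}`);
  return { enforced: true, blocked: violations.length > 0, violations };
}
```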
Sample-source caveats
The MVP derives the protected group from decision_trace.qualificationResults[*].context.attributes[sensitiveAttribute].
If your traces don’t yet include the sensitive attribute in qualification
context, the gate skips with a “no usable samples” reason. Upcoming work
extends sample sourcing to InteractionHistory and to a tenant-supplied
custom attribute resolver.
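The lookup path can be illustrated as follows. This is a sketch only: the trace shape follows the documented path, but the function name and the first-usable-occurrence rule are assumptions, not the platform's actual resolver.

```typescript
// Extract the protected-group value for one trace, following the documented path:
// decision_trace.qualificationResults[*].context.attributes[sensitiveAttribute].
interface QualificationResult {
  context?: { attributes?: Record<string, string> };
}
interface DecisionTrace {
  qualificationResults?: QualificationResult[];
}

function resolveSensitiveGroup(
  trace: DecisionTrace,
  sensitiveAttribute: string,
): string | undefined {
  for (const qr of trace.qualificationResults ?? []) {
    const value = qr.context?.attributes?.[sensitiveAttribute];
    if (value !== undefined) return value; // first usable occurrence wins (assumption)
  }
  return undefined; // trace contributes no sample; too many of these and the gate skips
}
```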
Backed by the platform’s fairness hard-gate enforcement helper.
Tiered fairness evaluation
POST /api/v1/fairness/evaluate?metrics=basic|advanced runs the full
fairness pipeline. The query string controls which metrics tier is
returned.
Basic tier (default)
Existing demographic-parity, four-fifths-rule, equal-opportunity, and equalized-odds gap calculations from lib/fairness/metrics.ts.
Unchanged behavior — every existing caller sees bit-identical results.
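For reference, the headline basic-tier metrics reduce to simple comparisons over per-group positive-outcome rates. A minimal sketch (not the actual lib/fairness/metrics.ts code):

```typescript
// Per-group positive-outcome rates, e.g. { A: 0.50, B: 0.40 }.
type GroupRates = Record<string, number>;

// Demographic parity gap: maxRate − minRate across groups.
// The gate blocks when this exceeds thresholds.demographicParityGap.
function demographicParityGap(rates: GroupRates): number {
  const vals = Object.values(rates);
  return Math.max(...vals) - Math.min(...vals);
}

// Disparate impact ratio (four-fifths rule): minRate / maxRate.
// Values below 0.8 indicate adverse impact under 29 CFR § 1607.4(D).
function disparateImpactRatio(rates: GroupRates): number {
  const vals = Object.values(rates);
  return Math.min(...vals) / Math.max(...vals);
}
```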
Advanced tier
Adds intersectional analysis + mitigation recommendations from lib/fairness/advanced.ts:
- Intersectional cells require per-sample intersectionalGroups: { axisName: groupValue }. The route runs the intersectional evaluator with a default minimum cell size of 10 samples and surfaces the cells plus the worst disparate-impact ratio.
- Mitigation recommendations are derived from the report shape (DI ratio, four-fifths violation, equal-opportunity gap) — no extra inputs needed.
When no samples supply intersectionalGroups, the response includes
advancedAwaitingConfig: ["intersectional: no per-sample intersectionalGroups supplied"]
so operators know why the analysis is empty. No silent fallback.
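As a sketch of how intersectional cells could be formed, assuming the documented per-sample shape and the default minimum cell size of 10 (the types and function below are illustrative, not the actual lib/fairness/advanced.ts exports):

```typescript
// Each sample carries its group on every intersectional axis plus an outcome.
interface IntersectionalSample {
  intersectionalGroups: Record<string, string>; // e.g. { gender: "f", ageBand: "18-25" }
  positive: boolean;                            // favorable decision outcome
}

// Bucket samples into cells (one per unique axis combination), drop cells below
// the minimum size, and report the worst disparate-impact ratio across cells
// (lowest cell rate / highest cell rate).
function intersectionalCells(samples: IntersectionalSample[], minCellSize = 10) {
  const cells = new Map<string, { n: number; positives: number }>();
  for (const s of samples) {
    const key = Object.entries(s.intersectionalGroups)
      .sort(([a], [b]) => a.localeCompare(b))
      .map(([axis, group]) => `${axis}=${group}`)
      .join("|");
    const cell = cells.get(key) ?? { n: 0, positives: 0 };
    cell.n += 1;
    if (s.positive) cell.positives += 1;
    cells.set(key, cell);
  }
  const rates = Array.from(cells.entries())
    .filter(([, c]) => c.n >= minCellSize)
    .map(([key, c]) => ({ key, rate: c.positives / c.n }));
  if (rates.length < 2) return { rates, worstDiRatio: null as number | null };
  const vals = rates.map((r) => r.rate);
  return { rates, worstDiRatio: Math.min(...vals) / Math.max(...vals) };
}
```

Cells smaller than minCellSize are excluded rather than reported, so sparse intersections do not produce noisy ratios.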
Counterfactual + Lipschitz
These primitives also live in lib/fairness/advanced.ts but are NOT
auto-run from this route — they need a real scorer + paired
counterfactual samples that the evaluate route does not have. Pipeline
callers invoke them directly.
EU AI Act report
POST /api/v1/fairness/report runs the same fairness pipeline and
returns a formatted report:
| format | Content type | Notes |
|---|---|---|
| csv | text/csv | Per-group + summary metrics, suitable for compliance archive ingestion. |
| html | text/html | Markup-only; pipe through headless Chromium / wkhtmltopdf for PDF. |
Inputs match /evaluate. Optional title + subtitle
override the defaults (“Fairness Assessment Report” / “EU AI Act
Article 10 § 2(f)”).
Audit trail
Every call to /evaluate and /report writes one audit-log row
(action: fairness_evaluate or fairness_report) so DSAR exports
can cite the exact report contents.