Overview
AI Configuration lets administrators tune the behavior of each AI analyzer across the platform. Settings are saved per tenant (organization), so each organization can customize analysis thresholds, caps, and algorithms independently. Navigate to Settings > AI Configuration in the KaireonAI sidebar.
Setup
Review the four analyzer sections
The page shows collapsible cards for each analyzer: Segmentation, Policy, Content Intelligence, and Rule Building. Each card lists the tunable parameters with their current values.
Adjust parameters
Change any value by editing its input field. Each parameter shows a plain-language description; click Technical detail to see the underlying implementation details.
Settings are loaded once when the page opens and are edited locally until you save. If multiple admins edit at the same time, their saves overwrite each other; the last save wins.
Segmentation Parameters
| Parameter | Default | Description | Technical Detail |
|---|---|---|---|
| Min Clusters | 2 | Fewest groups to split customers into | K-Means n_clusters lower bound. Fewer clusters = broader segments. |
| Max Clusters | 8 | Most groups customers can be split into | K-Means n_clusters upper bound. More clusters = finer-grained segments. |
| Algorithm | kmeans | Method used to find natural groupings | kmeans (centroid-based, fast) or dbscan (density-based, handles outliers). |
| Included Features | All | Which attributes to consider when grouping | Feature whitelist for clustering input matrix. Null = all numeric features. |
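As a sketch of how these bounds might be applied before clustering runs, the snippet below clamps a requested cluster count into the configured range and resolves the feature whitelist. The function name, config keys, and call shape are illustrative assumptions, not the platform's actual API:

```python
def resolve_clustering_params(config, requested_k, available_features):
    """Hypothetical helper: clamp a requested cluster count to the tenant's
    configured bounds and resolve the feature whitelist (None = all features)."""
    if config["algorithm"] not in ("kmeans", "dbscan"):
        raise ValueError(f"unsupported algorithm: {config['algorithm']}")
    # Clamp into [min_clusters, max_clusters]
    k = max(config["min_clusters"], min(requested_k, config["max_clusters"]))
    # A null/empty whitelist means "use every available feature"
    features = config.get("included_features") or available_features
    return {"algorithm": config["algorithm"], "n_clusters": k, "features": features}

cfg = {"algorithm": "kmeans", "min_clusters": 2,
       "max_clusters": 8, "included_features": None}
resolve_clustering_params(cfg, 12, ["recency", "frequency", "monetary"])
```

With the defaults above, a request for 12 clusters would be clamped to 8, and the null whitelist expands to all available features.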
Policy Parameters
| Parameter | Default | Description | Technical Detail |
|---|---|---|---|
| Daily Cap | 3 | Max messages a customer receives per day | Hard cap on contact frequency per customer per calendar day. |
| Weekly Cap | 10 | Max messages a customer receives per week | Hard cap per customer per rolling 7-day window. |
| Monthly Cap | 30 | Max messages a customer receives per month | Hard cap per customer per rolling 30-day window. |
| Lookback Days | 90 | How far back to analyze interaction history | Interaction history window (days) for policy violation detection. |
| Min Sample Size | 100 | Minimum interactions before analysis is meaningful | Statistical minimum sample size for reliable policy analysis. |
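The three frequency caps can be checked together before each send. This is a minimal sketch, assuming a hypothetical `violates_caps` helper; per the table, the daily cap counts calendar-day sends while the weekly and monthly caps use rolling windows:

```python
from datetime import datetime, timedelta

def violates_caps(send_times, now, daily_cap=3, weekly_cap=10, monthly_cap=30):
    """Hypothetical check: would sending one more message now exceed any cap?
    Daily cap uses the calendar day; weekly/monthly use rolling windows."""
    today = [t for t in send_times if t.date() == now.date()]
    last_7d = [t for t in send_times if now - t <= timedelta(days=7)]
    last_30d = [t for t in send_times if now - t <= timedelta(days=30)]
    return (len(today) + 1 > daily_cap
            or len(last_7d) + 1 > weekly_cap
            or len(last_30d) + 1 > monthly_cap)
```

For example, a customer who already received three messages today is blocked by the daily cap even if the weekly and monthly windows still have headroom.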
Content Intelligence Parameters
| Parameter | Default | Description | Technical Detail |
|---|---|---|---|
| Min Impressions | 100 | Times content must be shown before judging performance | Minimum impression count before metrics are statistically valid. |
| Metric Weights (CTR) | 0.33 | Weight given to click-through rate | Weighted blend coefficient for CTR in content scoring. |
| Metric Weights (CVR) | 0.34 | Weight given to conversion rate | Weighted blend coefficient for CVR in content scoring. |
| Metric Weights (Revenue) | 0.33 | Weight given to revenue per impression | Weighted blend coefficient for revenue-per-impression. |
| Confidence Level | 0.95 | How sure we need to be before recommending a change | Statistical confidence threshold (1 - alpha) for hypothesis tests. |
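The weighted blend can be sketched as a single scoring function that gates on the minimum impression count. This is an assumption-laden illustration: the function name is hypothetical, and CVR is computed per impression here for simplicity, since the platform's exact definition is not specified in this table:

```python
def content_score(impressions, clicks, conversions, revenue,
                  min_impressions=100, weights=(0.33, 0.34, 0.33)):
    """Hypothetical blend of CTR, CVR, and revenue-per-impression.
    Returns None until the impression count is statistically meaningful."""
    if impressions < min_impressions:
        return None                     # too little data to judge performance
    ctr = clicks / impressions          # click-through rate
    cvr = conversions / impressions     # conversion rate (per impression here)
    rpi = revenue / impressions         # revenue per impression
    w_ctr, w_cvr, w_rev = weights       # default weights sum to 1.0
    return w_ctr * ctr + w_cvr * cvr + w_rev * rpi
```

Note that the default weights (0.33 + 0.34 + 0.33) sum to exactly 1.0, so the score stays on a comparable scale when you rebalance them.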
Rule Building Parameters
| Parameter | Default | Description | Technical Detail |
|---|---|---|---|
| Max Conditions | 5 | Maximum conditions allowed in a single rule | Upper bound on rule clause count to prevent overly complex rules. |
| Allowed Operators | equals, gt, lt, gte, lte, contains, in | Comparison types available when building rules | Operator whitelist for AI-generated rule conditions. |
| Field Type Constraints | All | Which field types can be used in rules | Field type whitelist for rule condition targets. Null = no restrictions. |
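A validator applying these constraints to an AI-generated rule might look like the sketch below. The function and condition shape (`field`/`op`/`value` dicts) are assumptions for illustration:

```python
ALLOWED_OPERATORS = {"equals", "gt", "lt", "gte", "lte", "contains", "in"}

def validate_rule(conditions, max_conditions=5, allowed_ops=ALLOWED_OPERATORS):
    """Hypothetical check of a generated rule against tenant constraints.
    Returns a list of problems; an empty list means the rule is acceptable."""
    problems = []
    if len(conditions) > max_conditions:
        problems.append(f"too many conditions ({len(conditions)} > {max_conditions})")
    for cond in conditions:
        if cond["op"] not in allowed_ops:
            problems.append(f"operator not allowed: {cond['op']}")
    return problems
```

A rule using an operator outside the whitelist (say, a regex match) would be rejected even if it has few conditions.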
Per-Run Overrides
Each AI analyzer (Segmentation, Policy Recommender, Content Intelligence, Rule Building) includes configurable parameters. When using the AI chat panel to build rules, these parameters control the constraints applied to generated rules. Per-run overrides:
- Take effect for that run only and do not change the saved tenant configuration
- Default to the current tenant-level values from AI Configuration
- Include a Reset to Defaults button to restore tenant-level values
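The override semantics above amount to a non-destructive merge: per-run values shadow tenant values without ever writing them back. A minimal sketch, with hypothetical names:

```python
def effective_params(tenant_config, overrides=None):
    """Hypothetical resolution of parameters for a single run: start from
    tenant values, apply per-run overrides, never mutate the saved config."""
    resolved = dict(tenant_config)    # shallow copy; tenant config stays untouched
    resolved.update(overrides or {})  # per-run values win, for this run only
    return resolved
```

Calling `effective_params(tenant_config)` with no overrides models the Reset to Defaults behavior: the run falls back to the tenant-level values.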
Large Dataset Warning Flow
When a dataset contains 5,000 or more rows, the platform displays a confirmation dialog before proceeding with analysis. The dialog provides three pieces of information:
- Accuracy — The ML Worker uses K-Means clustering, logistic regression, and TF-IDF, which are more accurate than LLM pattern matching for large datasets.
- Cost estimate — Proceeding with the LLM shows the estimated token count and cost (e.g., “~150,000 tokens, ~$0.02”).
- Speed — The ML Worker processes data locally in seconds vs. LLM round-trip latency.
The dialog then offers two actions:
- Use ML Worker — Routes the analysis to the ML Worker for full-dataset processing (only available if the ML Worker is connected and healthy).
- Proceed with LLM — Continues with LLM-based analysis using sampled data. Useful when the ML Worker is unavailable.
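The flow above can be summarized as a small decision function: below the row threshold no dialog appears, and above it the available actions depend on ML Worker health. The function name, action identifiers, and threshold parameter are illustrative assumptions:

```python
def dialog_options(row_count, ml_worker_healthy, threshold=5000):
    """Hypothetical sketch: decide whether the large-dataset dialog appears
    and which actions it can offer."""
    if row_count < threshold:
        return {"show_dialog": False, "actions": ["run"]}
    actions = ["proceed_with_llm"]            # always available (sampled data)
    if ml_worker_healthy:
        actions.insert(0, "use_ml_worker")    # only when connected and healthy
    return {"show_dialog": True, "actions": actions}
```

For a 10,000-row dataset with a healthy ML Worker, both actions are offered; if the Worker is down, only the sampled LLM path remains.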