
Overview

AI Configuration lets administrators tune the behavior of each AI analyzer across the platform. Settings are saved per tenant (organization), so each organization can customize analysis thresholds, caps, and algorithms independently. Navigate to Settings > AI Configuration in the KaireonAI sidebar.

Setup

  1. Open AI Configuration: In the sidebar, expand Settings and click AI Configuration.
  2. Review the four analyzer sections: The page shows collapsible cards for each analyzer: Segmentation, Policy, Content Intelligence, and Rule Building. Each card lists the tunable parameters with their current values.
  3. Adjust parameters: Change any value by editing its input field. Each parameter shows a plain-language description; click Technical detail to see the underlying implementation.
  4. Save or reset: Click Save Configuration to persist your changes. Use Reset Section on an individual card, or Reset All to Defaults to restore factory defaults.
Note: Settings are loaded once on page load and edited locally until you save. If multiple admins edit at the same time, the last save wins and overwrites the others' changes.

Segmentation Parameters

| Parameter | Default | Description | Technical Detail |
|---|---|---|---|
| Min Clusters | 2 | Fewest groups to split customers into | K-Means `n_clusters` lower bound. Fewer clusters = broader segments. |
| Max Clusters | 8 | Most groups customers can be split into | K-Means `n_clusters` upper bound. More clusters = finer-grained segments. |
| Algorithm | kmeans | Method used to find natural groupings | `kmeans` (centroid-based, fast) or `dbscan` (density-based, handles outliers). |
| Included Features | All | Which attributes to consider when grouping | Feature whitelist for the clustering input matrix. Null = all numeric features. |
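How these bounds and the feature whitelist might be applied at run time is sketched below; the function and key names (`resolve_clustering_config`, `min_clusters`, and so on) are illustrative, not the platform's actual schema.

```python
def resolve_clustering_config(requested_k, config):
    """Clamp a requested cluster count to the tenant's configured bounds."""
    k = max(config["min_clusters"], min(requested_k, config["max_clusters"]))
    return {"algorithm": config["algorithm"], "n_clusters": k}

def select_features(row, included_features=None):
    """Apply the Included Features whitelist (None = all numeric features)."""
    if included_features is None:
        return {k: v for k, v in row.items() if isinstance(v, (int, float))}
    return {k: v for k, v in row.items() if k in included_features}
```

With the defaults above, a request for 12 clusters would be clamped to the Max Clusters value of 8 before clustering runs.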

Policy Parameters

| Parameter | Default | Description | Technical Detail |
|---|---|---|---|
| Daily Cap | 3 | Max messages a customer receives per day | Hard cap on contact frequency per customer per calendar day. |
| Weekly Cap | 10 | Max messages a customer receives per week | Hard cap per customer per rolling 7-day window. |
| Monthly Cap | 30 | Max messages a customer receives per month | Hard cap per customer per rolling 30-day window. |
| Lookback Days | 90 | How far back to analyze interaction history | Interaction history window (days) for policy violation detection. |
| Min Sample Size | 100 | Minimum interactions before analysis is meaningful | Statistical minimum sample size for reliable policy analysis. |
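The cap semantics (calendar day for the daily cap, rolling windows for the weekly and monthly caps) can be sketched as a simple check; the names `may_contact` and the cap keys are illustrative.

```python
from datetime import datetime, timedelta

def may_contact(send_times, now, caps):
    """True if sending one more message now stays within all three caps.

    Daily cap counts the current calendar day; weekly and monthly caps
    use rolling 7- and 30-day windows, as in the table above.
    """
    sent_today = sum(1 for t in send_times if t.date() == now.date())
    sent_week = sum(1 for t in send_times if now - t < timedelta(days=7))
    sent_month = sum(1 for t in send_times if now - t < timedelta(days=30))
    return (sent_today < caps["daily_cap"]
            and sent_week < caps["weekly_cap"]
            and sent_month < caps["monthly_cap"])
```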

Content Intelligence Parameters

| Parameter | Default | Description | Technical Detail |
|---|---|---|---|
| Min Impressions | 100 | Times content must be shown before judging performance | Minimum impression count before metrics are statistically valid. |
| Metric Weights (CTR) | 0.33 | Weight given to click-through rate | Weighted blend coefficient for CTR in content scoring. |
| Metric Weights (CVR) | 0.34 | Weight given to conversion rate | Weighted blend coefficient for CVR in content scoring. |
| Metric Weights (Revenue) | 0.33 | Weight given to revenue per impression | Weighted blend coefficient for revenue-per-impression. |
| Confidence Level | 0.95 | How sure we need to be before recommending a change | Statistical confidence threshold (1 - alpha) for hypothesis tests. |
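A minimal sketch of the weighted blend and the Min Impressions gate follows; the key names are illustrative, and CVR is computed per impression here for simplicity (the platform may define it differently).

```python
def content_score(stats, config):
    """Blend CTR, CVR, and revenue-per-impression with the configured weights.

    Returns None until the Min Impressions threshold is met, since the
    metrics are not yet statistically meaningful below that count.
    """
    if stats["impressions"] < config["min_impressions"]:
        return None
    ctr = stats["clicks"] / stats["impressions"]
    cvr = stats["conversions"] / stats["impressions"]  # per-impression simplification
    rpi = stats["revenue"] / stats["impressions"]
    w = config["metric_weights"]
    return w["ctr"] * ctr + w["cvr"] * cvr + w["revenue"] * rpi
```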

Rule Building Parameters

| Parameter | Default | Description | Technical Detail |
|---|---|---|---|
| Max Conditions | 5 | Maximum conditions allowed in a single rule | Upper bound on rule clause count to prevent overly complex rules. |
| Allowed Operators | equals, gt, lt, gte, lte, contains, in | Comparison types available when building rules | Operator whitelist for AI-generated rule conditions. |
| Field Type Constraints | All | Which field types can be used in rules | Field type whitelist for rule condition targets. Null = no restrictions. |
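Enforcing these constraints against an AI-generated rule might look like the sketch below; the rule shape and function name are illustrative, not the platform's actual rule schema.

```python
ALLOWED_OPERATORS = {"equals", "gt", "lt", "gte", "lte", "contains", "in"}

def validate_rule(rule, max_conditions=5, allowed_operators=ALLOWED_OPERATORS,
                  allowed_field_types=None):
    """Return a list of constraint violations (empty = rule is acceptable)."""
    errors = []
    if len(rule["conditions"]) > max_conditions:
        errors.append(f"too many conditions (max {max_conditions})")
    for cond in rule["conditions"]:
        if cond["operator"] not in allowed_operators:
            errors.append(f"operator not allowed: {cond['operator']}")
        # Null/None field-type constraint means no restrictions
        if (allowed_field_types is not None
                and cond.get("field_type") not in allowed_field_types):
            errors.append(f"field type not allowed: {cond.get('field_type')}")
    return errors
```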

Per-Run Overrides

Each AI analyzer (Segmentation, Policy Recommender, Content Intelligence, Rule Building) exposes its configurable parameters at run time. For example, when you build rules from the AI chat panel, these parameters control the constraints applied to the generated rules. Per-run overrides:
  • Take effect for that run only and do not change the saved tenant configuration
  • Default to the current tenant-level values from AI Configuration
  • Include a Reset to Defaults button to restore tenant-level values
This lets analysts experiment with different settings without affecting the organization-wide defaults.
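The merge semantics amount to layering this run's overrides on top of the tenant defaults without touching the saved configuration; `effective_params` is an illustrative name.

```python
def effective_params(tenant_config, overrides=None):
    """Per-run parameters: tenant defaults with this run's overrides on top.

    The saved tenant configuration is never mutated; omitting `overrides`
    (or resetting) falls back to the tenant-level values.
    """
    return {**tenant_config, **(overrides or {})}
```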

Large Dataset Warning Flow

When a dataset contains 5,000 or more rows, the platform displays a confirmation dialog before proceeding with analysis. The dialog provides three pieces of information:
  1. Accuracy — The ML Worker uses K-Means clustering, logistic regression, and TF-IDF, which are more accurate than LLM pattern matching for large datasets.
  2. Cost estimate — The estimated token count and cost of proceeding with the LLM (e.g., “~150,000 tokens, ~$0.02”).
  3. Speed — The ML Worker processes data locally in seconds vs. LLM round-trip latency.
The dialog offers two choices:
  • Use ML Worker — Routes the analysis to the ML Worker for full-dataset processing (only available if the ML Worker is connected and healthy).
  • Proceed with LLM — Continues with LLM-based analysis using sampled data. Useful when the ML Worker is unavailable.
For datasets over 5,000 rows, LLM-based analysis samples the data rather than processing it all. This reduces cost but may miss patterns present in the full dataset. Deploy the ML Worker for the most accurate results on large datasets.
If the ML Worker is not connected, the dialog still appears but explains that the ML Worker is not available and suggests enabling it in Settings > Integrations.
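The routing decision described above can be sketched as follows; the threshold matches the dialog trigger, but the function names, choice labels, and the per-row token and pricing figures in the estimator are illustrative assumptions, not the platform's actual rates.

```python
LARGE_DATASET_THRESHOLD = 5_000  # rows; the dialog trigger described above

def route_analysis(row_count, ml_worker_healthy, user_choice=None):
    """Decide where an analysis run goes, mirroring the dialog flow."""
    if row_count < LARGE_DATASET_THRESHOLD:
        return "llm_full"      # small dataset: LLM processes all rows
    if user_choice == "ml_worker" and ml_worker_healthy:
        return "ml_worker"     # full-dataset local processing
    return "llm_sampled"       # LLM proceeds on sampled data

def estimate_llm_cost(row_count, tokens_per_row=30, usd_per_1k_tokens=0.00015):
    """Rough token/cost estimate of the kind shown in the dialog
    (per-row token count and pricing here are illustrative)."""
    tokens = row_count * tokens_per_row
    return tokens, tokens / 1000 * usd_per_1k_tokens
```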

Next Steps