Overview
The Composable Pipeline is the execution model for Decision Flows. You assemble a pipeline from typed node blocks arranged across three ordered phases. Each node performs one job — load inventory, filter candidates, score offers, rank results — and you choose which nodes to include and how to configure them. The pipeline is configured through a visual canvas editor with three phase lanes (Narrow, Score & Rank, Output). Nodes are displayed as connected cards on a React Flow canvas — click a node to configure it in the side panel, use the toolbar to add nodes, and drag to reorder within a phase.
Key capabilities:
- Sequential enrichment — Add multiple Enrich nodes to load customer data from different tables in sequence (e.g., customer profile → account details → transaction history)
- Channel-aware scoring — Configure different scoring models per channel without visual branching
- Optimal placement allocation — The Group node can maximize total score across all placements simultaneously, not just greedily
When a Decision Flow declares version: 2 in its config, KaireonAI uses the composable pipeline engine.
Three Phases
Every composable pipeline organizes its nodes into three phases. Nodes within a phase run in position order; phases always execute in sequence.
| Phase | Name | Purpose | Allowed Nodes |
|---|---|---|---|
| 1 | Narrow | Load candidates and reduce the set | inventory, filter, match_creatives, enrich, qualify, contact_policy, conditional, call_flow |
| 2 | Score & Rank | Score, optimize, rank, and group survivors | score, optimize, rank, group, call_flow |
| 3 | Output | Compute final values and format response | compute, set_properties, response |
| — | Cross-phase | Can appear in multiple phases (call_flow is limited to Phases 1–2) | call_flow, extension_point |
Node Types
Phase 1 — Narrow
Inventory
Loads candidate offers from the database. Every pipeline must start with exactly one Inventory node.
Config:
- scope — "all" loads every active offer, or "category" restricts to a specific category.
Filter
Removes candidates that fail condition-based rules. Supports 13 operators across four namespaces.
Operators: eq, neq, gt, gte, lt, lte, in, not_in, contains, starts_with, regex, is_null, is_not_null
Namespaces: offer.*, customer.*, request.*, channel.*
Config:
- conditions — Array of { field, operator, value } rules
- combinator — "AND" (all must pass) or "OR" (any can pass)
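To make the config shape concrete, here is an illustrative Filter node declaration. The field names, values, and the surrounding { "type": ... } wrapper are assumptions for illustration, not verbatim from the product:

```json
{
  "type": "filter",
  "config": {
    "combinator": "AND",
    "conditions": [
      { "field": "offer.priority", "operator": "gte", "value": 30 },
      { "field": "customer.region", "operator": "in", "value": ["US", "CA"] }
    ]
  }
}
```

With combinator "AND", a candidate survives only if every condition passes.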
Match Creatives
Attaches eligible creative assets to each candidate offer based on channel targeting and audience rules.
Config:
- requireCreative — Whether to remove candidates without a creative (default true)
- placementMatchMode — "exact", "any", or "none" (default "any")
Enrich
Loads customer data from schema tables. The loaded fields become available as customer.* variables in all downstream nodes.
Config:
- sources — Array of enrichment sources, each with:
  - schemaId — Which schema table to query
  - lookupKey — The lookup field (default "customer_id")
  - fields — Which columns to load
  - prefix — Namespace prefix for loaded fields (default "customer")
  - cacheTtlSeconds — Redis cache duration (default 60)
  - optional — Whether to continue if the lookup fails (default true)
  - orderBy — Column to sort by when multiple rows match (useful for collection schemas)
  - orderDirection — "ASC" or "DESC" (default "DESC")
  - filterCondition — WHERE clause for row selection (e.g., "is_primary = true", max 200 chars)
  - multiRow — Whether to return multiple rows (default false)
  - aggregation — When multiRow is true, aggregate functions per field: sum, count, avg, min, max, first
Type handling: Boolean columns are returned as true/false (not coerced to strings), numeric/decimal values are converted to JavaScript numbers, and null is preserved. This matters for qualification rules that compare with eq true or eq false.
Transforms: The Enrich node also supports an optional transforms array for in-memory record-level transforms (rename_field, cast_type, expression, map_values, hash, mask_pii, split_field, merge_fields, drop_field, add_field, filter_condition) applied to the enriched data before it enters the pipeline.
Reading from summary tables: When a Collection schema has summary columns materialized by the Summarize pipeline transform, the Enrich node can read from the summary table for pre-aggregated data instead of querying the full collection.
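An illustrative Enrich node config (the schema name, field names, and node wrapper are hypothetical) might look like:

```json
{
  "type": "enrich",
  "config": {
    "sources": [
      {
        "schemaId": "customer_profile",
        "lookupKey": "customer_id",
        "fields": ["tier", "region", "age"],
        "prefix": "customer",
        "cacheTtlSeconds": 60,
        "optional": true
      }
    ]
  }
}
```

After this runs, downstream nodes can reference customer.tier, customer.region, and customer.age.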
Qualify
Applies qualification rules with AND/OR logic trees. Rules can be combined into nested groups for complex eligibility logic.
Config:
- mode — "all" (run every rule), "selected" (pick specific rules), or "none" (skip)
- qualificationRuleIds — Array of rule IDs when mode is "selected"
- logic — Optional AND/OR logic group for combining rule results
For example, a nested logic group can express: customer must pass age AND region AND (premium OR loyalty).
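The example config this section originally referenced was elided. A sketch that would express "age AND region AND (premium OR loyalty)" — assuming hypothetical rule IDs and a nested logic-group shape that mirrors the description — might look like:

```json
{
  "type": "qualify",
  "config": {
    "mode": "selected",
    "qualificationRuleIds": ["age", "region", "premium", "loyalty"],
    "logic": {
      "combinator": "AND",
      "rules": [
        "age",
        "region",
        { "combinator": "OR", "rules": ["premium", "loyalty"] }
      ]
    }
  }
}
```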
Contact Policy
Enforces frequency and timing guardrails to prevent over-contacting customers.
Config:
- mode — "all", "selected", or "none"
- contactPolicyIds — Array of policy IDs when mode is "selected"
Conditional
Evaluates conditions against enriched customer attributes and routes matching candidates to a true-branch sub-flow. Non-matching candidates can either be filtered out or kept in the pipeline, controlled by the keepNonMatching flag.
The Conditional node uses the same condition syntax as the Filter node (field/operator/value with AND/OR combinators), but instead of removing candidates, it routes matches through a separate Decision Flow for segment-specific processing. For example, Platinum customers can be routed through a premium flow (e.g., with higher-value offers and personalized scoring), while Standard customers continue through the main pipeline unchanged. You can chain multiple Conditional nodes to handle additional segments (e.g., Platinum to one flow, Gold to another, Standard to basic).
Config:
- conditions — Array of { field, operator, value } rules (same syntax as Filter)
- combinator — "AND" or "OR" for combining conditions
- trueBranchFlowId — The Decision Flow to execute for matching candidates
- falseBranchFlowId — Optional Decision Flow to execute for non-matching candidates
- keepNonMatching — Whether non-matching candidates continue in the pipeline (true) or are discarded (false, default false)
- label — Optional human-readable label for the condition
- Matching candidates are passed into the target flow as context (the sub-flow skips its own inventory)
- Maximum nesting depth is 2 levels, consistent with Call Flow nodes
- Circular reference detection prevents infinite loops — the engine tracks visited flow IDs across the call chain
- Results from the sub-flow replace the matching candidates in the pipeline
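Putting the pieces together, a Conditional node that routes Platinum customers to a premium sub-flow while everyone else continues unchanged might look like this (the tier field, flow ID, and node wrapper are hypothetical):

```json
{
  "type": "conditional",
  "config": {
    "combinator": "AND",
    "conditions": [
      { "field": "customer.tier", "operator": "eq", "value": "Platinum" }
    ],
    "trueBranchFlowId": "premium-flow",
    "keepNonMatching": true,
    "label": "Route Platinum customers"
  }
}
```

With keepNonMatching true, non-Platinum candidates stay in the main pipeline rather than being discarded.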
Call Flow (Phase 1)
Delegates to another Decision Flow as a sub-pipeline. Useful for reusable filtering, qualification logic, or secondary scoring.
Config:
- flowId — The target Decision Flow to call
- passContext — Whether to pass the current candidate set to the sub-flow (default true). When true, the sub-flow skips its own inventory and works on the parent’s candidates.
- mergeMode — "replace" (use sub-flow results only, default) or "append" (add sub-flow results to candidates)
- optional — Whether to continue if the sub-flow fails (default true). When true, candidates pass through unchanged on error. When false, the pipeline aborts.
- Maximum nesting depth is 2 levels (enforced at save time and runtime)
- Circular reference detection prevents infinite loops — circular references are rejected at save time, and the engine tracks visited flow IDs across recursive calls at runtime
Phase 2 — Score & Rank
Score
Runs scoring models against each candidate. Every pipeline must have exactly one Score node.
Config:
- method — "priority_weighted", "propensity", "formula" (weighted composite), or "external_endpoint"
- defaultModel — The model key to use when no override matches
- overrides — Array of model overrides scoped to offer, category, or channel
- overridePriority — Resolution order, e.g. ["offer", "category", "channel", "default"]
- formula — PRIE weight-based blending when method is "formula". Weights must sum to 1.0. Legacy field names (contextWeight, valueWeight, leverWeight) are accepted for backwards compatibility and map to relevanceWeight, impactWeight, and emphasisWeight respectively.
Channel-scoped overrides keep the pipeline canvas clean — no visual branching is needed for channel-specific scoring.
Champion/Challenger experiments: Traffic is split deterministically using a hash of the customer ID, so each customer always sees the same model variant.
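As an illustration, a Score node with a channel override and formula weights might look like the sketch below. The model keys and the exact shape of an override entry ({ scope, value, model }) are assumptions; only the top-level config field names come from the list above:

```json
{
  "type": "score",
  "config": {
    "method": "formula",
    "defaultModel": "baseline_v1",
    "overrides": [
      { "scope": "channel", "value": "email", "model": "email_v2" }
    ],
    "overridePriority": ["offer", "category", "channel", "default"],
    "formula": {
      "relevanceWeight": 0.4,
      "impactWeight": 0.4,
      "emphasisWeight": 0.2
    }
  }
}
```

Note the weights sum to 1.0, as required.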
Optimize
Applies multi-objective portfolio optimization using saved profiles or inline weight sliders. Balances revenue, margin, propensity, engagement, and custom objectives to produce a single composite score per candidate.
Config:
- profileId — Reference to a saved Portfolio Optimization profile (optional; use inline objectives if not set)
- objectives — Inline objective weights, e.g. { "revenue": 40, "margin": 30, "propensity": 20, "engagement": 10, "custom": 0 }. Each value is 0–100.
If both profileId and inline objectives are provided, the profile takes precedence. The Optimize node is optional. If omitted, candidates proceed to the Rank node with their raw scores from the Score node.
Rank
Produces the final ordered list. One Rank node allowed per pipeline.
Config:
- method — "topN", "diversity", "round_robin", or "explore_exploit"
- maxCandidates — Maximum number of candidates to keep in the output (1–50, default 5). All offers beyond this limit are discarded; downstream nodes can only work with the candidates that survive this cut.
- maxPerCategory — Optional cap on offers per category
- maxPerChannel — Optional cap on offers per channel
- explorationRate — Optional 0–1 value for explore/exploit (default 0.1)
| Method | Algorithm | Backfill? | Use Case |
|---|---|---|---|
| topN | Sort by score descending, take top N | N/A | Default. Simple, deterministic. |
| diversity | Round-robin across categories, then backfill remaining slots by score | Yes | Cross-sell campaigns needing category spread |
| round_robin | Strict equal picks per category. May return fewer than maxCandidates | No | Fairness-oriented campaigns with equal exposure |
| explore_exploit | Epsilon-greedy: top offers for exploit slots, random for explore slots | N/A | New offer discovery, cold-start optimization |
explore_exploit details: The explorationRate controls what fraction of slots are used for exploration (random selection from the remaining pool). A rate of 0.2 means 80% exploit (top scores) and 20% explore (random). Exploration is deterministic per customer — the same customer always sees the same exploration picks, ensuring a stable experience. Different customers see different exploration picks for diversity.
All methods enforce maxPerCategory and maxPerChannel caps after the primary ranking.
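The explore/exploit mechanics can be sketched in a few lines of Python. This is a minimal illustration, not the engine's implementation — in particular, hashing the customer ID with SHA-256 to seed the RNG and rounding the explore-slot count are assumptions:

```python
import hashlib
import random

def explore_exploit_rank(candidates, max_candidates, exploration_rate, customer_id):
    """Epsilon-greedy ranking sketch: top scores fill the exploit slots,
    deterministic per-customer randomness fills the explore slots."""
    ranked = sorted(candidates, key=lambda c: c["score"], reverse=True)
    explore_slots = int(round(max_candidates * exploration_rate))
    exploit_slots = max_candidates - explore_slots

    picked = ranked[:exploit_slots]
    pool = ranked[exploit_slots:]

    # Seed the RNG from a hash of the customer ID so the same customer
    # always sees the same exploration picks (assumed scheme).
    seed = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    picked += rng.sample(pool, min(explore_slots, len(pool)))
    return picked
```

With maxCandidates 5 and explorationRate 0.2, the top 4 offers are taken by score and the fifth slot is a stable per-customer random pick from the rest.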
Group
Allocates candidates across multiple named placements (e.g., hero banner, sidebar, email). Use Group instead of Rank when you need multi-placement allocation.
Config:
- placements — Array of { placementId, count } definitions
- allocationStrategy — How offers are assigned to placements:
  - "optimal" (default) — Uses the Hungarian algorithm (Kuhn-Munkres) to find the globally optimal assignment across all placements simultaneously. Complexity is O(n³) where n is the number of candidates. The best choice for most use cases.
  - "greedy" — Fills placements in config order. Each placement grabs the highest-scoring remaining offers. Simpler and predictable, but may not produce the globally best assignment.
  - "priority_fill" — Alias for greedy; fills placements in config order, O(n log n).
- allowPartial — Whether some placements can receive no offers (default true)
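The greedy strategy is easy to sketch; the following Python is an illustration of its fill-in-config-order behavior, not the engine's code (the default "optimal" strategy would instead solve the assignment globally with the Hungarian algorithm):

```python
def greedy_allocate(candidates, placements):
    """Greedy multi-placement allocation sketch: placements are filled in
    config order, each taking the highest-scoring remaining offers."""
    remaining = sorted(candidates, key=lambda c: c["score"], reverse=True)
    result = {}
    for placement in placements:
        take = placement["count"]
        # Each placement grabs the top of what's left, in config order.
        result[placement["placementId"]] = remaining[:take]
        remaining = remaining[take:]
    return result
```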
With responseFormat "grouped", the response contains a placements object instead of a flat array.
Phase 3 — Output
Compute
Evaluates formula-based computed fields for each candidate offer. One Compute node allowed.
Config:
- overrides — Flow-level formula overrides (array of { name, formula, outputType })
- extras — Additional computed fields beyond what categories define (same shape)
Set Properties
Attaches static or derived key-value pairs to each candidate before the response is assembled.
Config:
- properties — Array of { key, value } pairs. Each property can also include a formula field for dynamically computed values.
Response
Formats the final output. Every pipeline must end with exactly one Response node.
Config:
- includeDebugTrace — Whether to include debug trace data (default false)
- responseFormat — "standard" (flat array) or "grouped" (placements object, requires Group node)
Extension Point
Injects custom logic at critical moments in the pipeline without modifying the core flow. No-op when unconfigured.
Config:
- hookName — One of pre_score, score_override, post_rank
- label — Human-readable label for the extension point
- description — Description of the extension point’s purpose
- configured — Whether the extension point is active (default false)
- subFlowId — Optional sub-flow to execute at this hook
Node Implementation Status
All 16 node types are fully functional in the current release:
| Node | Status | Notes |
|---|---|---|
| inventory | Fully functional | Loads offers by scope, respects status filters |
| match_creatives | Fully functional | Matches creatives to placements |
| enrich | Fully functional | Queries schema tables, caches via Redis, supports multiple sources |
| qualify | Fully functional | AND/OR logic trees, rule loading from database |
| contact_policy | Fully functional | Full history-based policy evaluation |
| filter | Fully functional | 13 operators, AND/OR combinator |
| conditional | Fully functional | Condition-based routing to sub-flows, keepNonMatching, depth limit (2), circular reference guard |
| call_flow | Fully functional | Sub-flow invocation with depth limit (2), circular reference guard, fail-open/closed |
| score | Fully functional | 3 methods, channel overrides, champion/challenger |
| optimize | Fully functional | Multi-objective portfolio optimization with saved profiles or inline weights |
| rank | Fully functional | 4 algorithms (topN, diversity, round_robin, explore_exploit) |
| group | Fully functional | Hungarian optimal allocation, greedy, allowPartial |
| compute | Fully functional | Formula overrides and extras |
| set_properties | Fully functional | Static and formula-derived properties |
| response | Fully functional | Standard and grouped formats |
| extension_point | Fully functional | pre_score, score_override, post_rank hooks with optional sub-flow |
Worked Example
This example walks through 8 offers being processed by a pipeline with grouping and computed fields.
Pipeline Config
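The original config listing was elided from this page. The sketch below is consistent with the steps that follow (filter on priority, priority_weighted scoring, a hero/sidebar Group, and a display_rate computed field); the top-level "pipeline" and "phase" key names are assumptions about the config shape:

```json
{
  "version": 2,
  "pipeline": [
    { "type": "inventory", "phase": 1, "config": { "scope": "all" } },
    { "type": "filter", "phase": 1, "config": {
        "combinator": "AND",
        "conditions": [{ "field": "offer.priority", "operator": "gte", "value": 30 }]
    } },
    { "type": "score", "phase": 2, "config": { "method": "priority_weighted" } },
    { "type": "group", "phase": 2, "config": {
        "allocationStrategy": "optimal",
        "placements": [
          { "placementId": "hero", "count": 1 },
          { "placementId": "sidebar", "count": 3 }
        ]
    } },
    { "type": "compute", "phase": 3, "config": {
        "extras": [{ "name": "display_rate", "formula": "round(base_rate * 0.9, 2)", "outputType": "number" }]
    } },
    { "type": "response", "phase": 3, "config": { "responseFormat": "grouped" } }
  ]
}
```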
Step-by-Step Execution
Step 1 — Inventory: Loads 8 active offers from the database.
| Offer | Priority | Weight | base_rate |
|---|---|---|---|
| Premium Card | 90 | 100 | 14.99 |
| Travel Rewards | 80 | 80 | 17.99 |
| Cash Back | 70 | 90 | 15.49 |
| Student Card | 25 | 100 | 22.99 |
| Balance Transfer | 60 | 70 | 12.99 |
| Secured Card | 20 | 100 | 24.99 |
| Business Platinum | 85 | 60 | 16.99 |
| Everyday Card | 40 | 50 | 19.99 |
Step 2 — Filter (offer.priority >= 30): Removes Student Card (25) and Secured Card (20). 6 remain.
Step 3 — Score (priority_weighted): Each candidate gets score = (priority/100) * (weight/100).
| Offer | Score |
|---|---|
| Premium Card | 0.90 |
| Travel Rewards | 0.64 |
| Cash Back | 0.63 |
| Business Platinum | 0.51 |
| Balance Transfer | 0.42 |
| Everyday Card | 0.20 |
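The scores above can be reproduced directly from the Step 3 formula:

```python
def priority_weighted_score(priority, weight):
    """priority_weighted scoring from the worked example:
    score = (priority / 100) * (weight / 100)."""
    return round((priority / 100) * (weight / 100), 2)

# Priority and weight for the six surviving candidates.
offers = {
    "Premium Card": (90, 100),
    "Travel Rewards": (80, 80),
    "Cash Back": (70, 90),
    "Business Platinum": (85, 60),
    "Balance Transfer": (60, 70),
    "Everyday Card": (40, 50),
}
scores = {name: priority_weighted_score(p, w) for name, (p, w) in offers.items()}
```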
Step 4 — Group: Allocates the top-scoring candidates across the two placements.
| Placement | Offers |
|---|---|
| hero (1 slot) | Premium Card (0.90) |
| sidebar (3 slots) | Travel Rewards (0.64), Cash Back (0.63), Business Platinum (0.51) |
Step 5 — Compute: Evaluates display_rate = round(base_rate * 0.9, 2) for each placed candidate.
| Offer | display_rate |
|---|---|
| Premium Card | 13.49 |
| Travel Rewards | 16.19 |
| Cash Back | 13.94 |
| Business Platinum | 15.29 |
Pipeline Validation
KaireonAI validates the pipeline structure when you save a flow. Invalid pipelines are rejected with specific error codes:
| Code | Rule |
|---|---|
| EMPTY_PIPELINE | Pipeline must contain at least one node |
| MISSING_INVENTORY | Must start with an Inventory node |
| MISSING_RESPONSE | Must end with a Response node |
| MISSING_SCORE | Must contain a Score node |
| DUPLICATE_SINGLETON | Only one of each: inventory, score, rank, group, compute, response |
| PHASE_ORDER_VIOLATION | Phases must be non-decreasing (1 -> 2 -> 3) |
| FILTER_WRONG_PHASE | Filter nodes must be in Phase 1 |
| RANK_AND_GROUP_CONFLICT | Rank and Group nodes cannot coexist in the same flow |
| CALL_FLOW_WRONG_PHASE | Call Flow nodes must be in Phase 1 or 2 |
| CALL_FLOW_MAX_DEPTH | Sub-flow nesting cannot exceed 2 levels |
| CALL_FLOW_CIRCULAR | Circular call_flow references are not allowed |
| INVALID_NODE_CONFIG | A node’s config doesn’t match its type-specific schema |
Config Format
A composable pipeline flow config uses a JSON structure in which the version: 2 field identifies the config as a composable pipeline.
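The JSON example was elided from this page. A minimal skeleton consistent with the node and phase rules described above — with "pipeline" and "phase" as assumed key names — might look like:

```json
{
  "version": 2,
  "pipeline": [
    { "type": "inventory", "phase": 1, "config": { "scope": "all" } },
    { "type": "score", "phase": 2, "config": { "method": "priority_weighted" } },
    { "type": "rank", "phase": 2, "config": { "method": "topN", "maxCandidates": 5 } },
    { "type": "response", "phase": 3, "config": { "responseFormat": "standard" } }
  ]
}
```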
API
Saving a Flow
Use the standard Decision Flows API. The draftConfig field accepts the composable pipeline format:
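The request example was elided; a sketch of a save payload — where the flow name and the fields wrapping draftConfig are assumptions — might look like:

```json
{
  "name": "Credit Card Recommendations",
  "draftConfig": {
    "version": 2,
    "pipeline": [
      { "type": "inventory", "phase": 1, "config": { "scope": "all" } },
      { "type": "score", "phase": 2, "config": { "method": "priority_weighted" } },
      { "type": "response", "phase": 3, "config": { "responseFormat": "standard" } }
    ]
  }
}
```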
Executing a Flow
Flows are executed through the Recommend API.
Next Steps
Decision Flows
Learn the basics of Decision Flows and the pipeline model.
Computed Values
Formula syntax and supported functions for computed fields.
Formula Reference
Complete operator and function reference for the formula engine.