

Overview

The Composable Pipeline is the execution model for Decision Flows. You assemble a pipeline from typed node blocks arranged across three ordered phases. Each node performs one job — load inventory, filter candidates, score offers, rank results — and you choose which nodes to include and how to configure them.

The pipeline is configured through a visual canvas editor with three phase lanes (Narrow, Score & Rank, Output). Nodes are displayed as connected cards on a React Flow canvas — click a node to configure it in the side panel, use the toolbar to add nodes, and drag to reorder within a phase.

Key capabilities:
  • Sequential enrichment — Add multiple Enrich nodes to load customer data from different tables in sequence (e.g., customer profile → account details → transaction history)
  • Channel-aware scoring — Configure different scoring models per channel without visual branching
  • Optimal placement allocation — The Group node can maximize total score across all placements simultaneously, not just greedily
When you save a flow with version: 2 in its config, KaireonAI uses the composable pipeline engine.

Three Phases

Every composable pipeline organizes its nodes into three phases. Nodes within a phase run in position order; phases always execute in sequence.
| Phase | Name | Purpose | Allowed Nodes |
| --- | --- | --- | --- |
| 1 | Narrow | Load candidates and reduce the set | inventory, filter, match_creatives, enrich, qualify, contact_policy, conditional, call_flow |
| 2 | Score & Rank | Score, optimize, rank, and group survivors | score, optimize, rank, group, call_flow |
| 3 | Output | Compute final values and format response | compute, set_properties, response |
| Cross-phase | | Can appear in any phase | call_flow, extension_point |

Node Types

Phase 1 — Narrow

Inventory

Loads candidate offers from the database. Every pipeline must start with exactly one Inventory node.
Config: scope — "all" loads every active offer, or "category" restricts to a specific category.
Filter

Removes candidates that fail condition-based rules. Supports 13 operators across four namespaces.
Operators: eq, neq, gt, gte, lt, lte, in, not_in, contains, starts_with, regex, is_null, is_not_null
Namespaces: offer.*, customer.*, request.*, channel.*
Config:
  • conditions — Array of { field, operator, value } rules
  • combinator — "AND" (all must pass) or "OR" (any can pass)
Filter nodes are only allowed in Phase 1.
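The condition semantics above can be sketched in a few lines. This is an illustrative sketch, not the engine's implementation; the flattened dotted field keys (e.g. "offer.priority") and the dict shapes are assumptions, and only a subset of the 13 operators is shown.

```python
import re

# Operator table: each entry maps a documented operator name to a predicate.
OPS = {
    "eq": lambda a, b: a == b,
    "neq": lambda a, b: a != b,
    "gte": lambda a, b: a is not None and a >= b,
    "in": lambda a, b: a in b,
    "starts_with": lambda a, b: isinstance(a, str) and a.startswith(b),
    "regex": lambda a, b: isinstance(a, str) and re.search(b, a) is not None,
    "is_null": lambda a, b: a is None,
}

def passes(candidate, conditions, combinator="AND"):
    # "AND": all conditions must pass; "OR": any condition suffices.
    results = [OPS[c["operator"]](candidate.get(c["field"]), c.get("value"))
               for c in conditions]
    return all(results) if combinator == "AND" else any(results)

offers = [{"offer.priority": 90}, {"offer.priority": 25}]
survivors = [o for o in offers
             if passes(o, [{"field": "offer.priority", "operator": "gte", "value": 30}])]
print(len(survivors))  # 1
```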
Match Creatives

Attaches eligible creative assets to each candidate offer based on channel targeting and audience rules.
Config:
  • requireCreative — Whether to remove candidates without a creative (default true)
  • placementMatchMode — "exact", "any", or "none" (default "any")
Enrich

Loads customer data from schema tables. The loaded fields become available as customer.* variables in all downstream nodes.
Config: sources — Array of enrichment sources, each with:
  • schemaId — Which schema table to query
  • lookupKey — The lookup field (default "customer_id")
  • fields — Which columns to load
  • prefix — Namespace prefix for loaded fields (default "customer")
  • cacheTtlSeconds — Redis cache duration (default 60)
  • optional — Whether to continue if the lookup fails (default true)
  • orderBy — Column to sort by when multiple rows match (useful for collection schemas)
  • orderDirection — "ASC" or "DESC" (default "DESC")
  • filterCondition — WHERE clause for row selection (e.g., "is_primary = true", max 200 chars)
  • multiRow — Whether to return multiple rows (default false)
  • aggregation — When multiRow is true, aggregate functions per field: sum, count, avg, min, max, first
Type preservation: Enriched values preserve their PostgreSQL types. Booleans remain true/false (not coerced to strings), numeric/decimal values are converted to JavaScript numbers, and null is preserved. This matters for qualification rules that compare with eq true or eq false.
Transforms: The Enrich node also supports an optional transforms array for in-memory record-level transforms (rename_field, cast_type, expression, map_values, hash, mask_pii, split_field, merge_fields, drop_field, add_field, filter_condition) applied to the enriched data before it enters the pipeline.
Reading from summary tables: When a Collection schema has summary columns materialized by the Summarize pipeline transform, the Enrich node can read from the summary table for pre-aggregated data instead of querying the full collection.
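The multiRow aggregation behavior can be illustrated with a small sketch. This assumes semantics implied by the config reference (each field in `aggregation` is collapsed across the matched rows with the named function); the dict shapes are illustrative, not the engine's internals.

```python
# Aggregate functions named in the Enrich config's `aggregation` option.
AGGS = {
    "sum": sum,
    "count": len,
    "avg": lambda xs: sum(xs) / len(xs) if xs else None,
    "min": min,
    "max": max,
    "first": lambda xs: xs[0] if xs else None,
}

def aggregate(rows, aggregation):
    # Collapse multiple matched rows into one record, per-field.
    out = {}
    for field, fn in aggregation.items():
        values = [r[field] for r in rows if field in r]
        out[field] = AGGS[fn](values)
    return out

rows = [{"amount": 120.0}, {"amount": 80.0}, {"amount": 40.0}]
print(aggregate(rows, {"amount": "avg"}))  # {'amount': 80.0}
```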
Qualify

Applies qualification rules with AND/OR logic trees. Rules can be combined into nested groups for complex eligibility logic.
Config:
  • mode — "all" (run every rule), "selected" (pick specific rules), or "none" (skip)
  • qualificationRuleIds — Array of rule IDs when mode is "selected"
  • logic — Optional AND/OR logic group for combining rule results
AND/OR Logic: Rules and groups can be nested recursively:
{
  "operator": "AND",
  "ruleIds": ["rule_age", "rule_region"],
  "groups": [
    {
      "operator": "OR",
      "ruleIds": ["rule_premium", "rule_loyalty"],
      "groups": []
    }
  ]
}
This means: customer must pass age AND region AND (premium OR loyalty).
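The recursive structure above lends itself to a short recursive evaluator. This is a minimal sketch assuming rule results are booleans computed elsewhere (the `rule_results` lookup is an illustrative assumption).

```python
def evaluate(group, rule_results):
    # Evaluate direct rules, then recurse into nested groups.
    parts = [rule_results[rid] for rid in group.get("ruleIds", [])]
    parts += [evaluate(g, rule_results) for g in group.get("groups", [])]
    return all(parts) if group["operator"] == "AND" else any(parts)

logic = {
    "operator": "AND",
    "ruleIds": ["rule_age", "rule_region"],
    "groups": [{"operator": "OR",
                "ruleIds": ["rule_premium", "rule_loyalty"],
                "groups": []}],
}
results = {"rule_age": True, "rule_region": True,
           "rule_premium": False, "rule_loyalty": True}
print(evaluate(logic, results))  # True
```

A customer who passes neither rule_premium nor rule_loyalty fails the inner OR group, so the whole AND group fails.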
Contact Policy

Enforces frequency and timing guardrails to prevent over-contacting customers.
Config:
  • mode — "all", "selected", or "none"
  • contactPolicyIds — Array of policy IDs when mode is "selected"
Conditional

Evaluates conditions against enriched customer attributes and routes matching candidates to a true-branch sub-flow. Non-matching candidates can either be filtered out or kept in the pipeline, controlled by the keepNonMatching flag.
The Conditional node uses the same condition syntax as the Filter node (field/operator/value with AND/OR combinators), but instead of removing candidates, it routes matches through a separate Decision Flow for segment-specific processing.
Config:
  • conditions — Array of { field, operator, value } rules (same syntax as Filter)
  • combinator — "AND" or "OR" for combining conditions
  • trueBranchFlowId — The Decision Flow to execute for matching candidates
  • falseBranchFlowId — Optional Decision Flow to execute for non-matching candidates
  • keepNonMatching — Whether non-matching candidates continue in the pipeline (true) or are discarded (false, default false)
  • label — Optional human-readable label for the condition
Sub-flow execution:
  • Matching candidates are passed into the target flow as context (the sub-flow skips its own inventory)
  • Maximum nesting depth is 2 levels, consistent with Call Flow nodes
  • Circular reference detection prevents infinite loops — the engine tracks visited flow IDs across the call chain
  • Results from the sub-flow replace the matching candidates in the pipeline
Use case — segment-based routing:
{
  "conditions": [
    { "field": "customer.segment", "operator": "eq", "value": "Platinum" }
  ],
  "combinator": "AND",
  "trueBranchFlowId": "df_premium_flow",
  "keepNonMatching": true
}
Platinum customers are routed through a premium flow (e.g., with higher-value offers and personalized scoring), while Standard customers continue through the main pipeline unchanged. You can chain multiple Conditional nodes to handle additional segments (e.g., Platinum to one flow, Gold to another, Standard to basic).
Call Flow

Delegates to another Decision Flow as a sub-pipeline. Useful for reusable filtering, qualification logic, or secondary scoring.
Config:
  • flowId — The target Decision Flow to call
  • passContext — Whether to pass the current candidate set to the sub-flow (default true). When true, the sub-flow skips its own inventory and works on the parent’s candidates.
  • mergeMode — "replace" (use sub-flow results only, default) or "append" (add sub-flow results to candidates)
  • optional — Whether to continue if the sub-flow fails (default true). When true, candidates pass through unchanged on error. When false, the pipeline aborts.
Safety guards:
  • Maximum nesting depth is 2 levels (enforced at save time and runtime)
  • Circular reference detection prevents infinite loops — circular references are rejected at save time, and the engine also tracks visited flow IDs across recursive calls at runtime
Call flow nodes are allowed in Phase 1 and Phase 2 only.
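The two safety guards can be sketched as follows. This is a hypothetical illustration of the described behavior, not the engine's actual code; the `flows` mapping (flow ID to the flow IDs it calls) and the error names reusing the validation codes are assumptions.

```python
MAX_DEPTH = 2  # documented nesting limit

def execute_flow(flow_id, flows, visited=(), depth=0):
    # Depth guard: reject nesting beyond MAX_DEPTH levels.
    if depth > MAX_DEPTH:
        raise ValueError("CALL_FLOW_MAX_DEPTH")
    # Circular guard: reject a flow already on the call chain.
    if flow_id in visited:
        raise ValueError("CALL_FLOW_CIRCULAR")
    visited = visited + (flow_id,)
    for callee in flows.get(flow_id, []):  # call_flow targets inside this flow
        execute_flow(callee, flows, visited, depth + 1)
    return visited

execute_flow("parent", {"parent": ["child"], "child": []})  # one level: fine
```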

Phase 2 — Score & Rank

Rank and Group are mutually exclusive. Use Rank for single-placement (top N) or Group for multi-placement allocation — not both. A flow containing both is rejected at save time: if Rank ran first, it would throttle the candidate pool before Group, leaving placements unfilled.
  • Single-placement flow: Inventory → Enrich → Qualify → Contact Policy → Score → Rank → Response
  • Multi-placement flow: Inventory → Enrich → Qualify → Contact Policy → Score → Group → Response
Score

Runs scoring models against each candidate. Every pipeline must have exactly one Score node.
Config:
  • method — "priority_weighted", "propensity", "formula" (weighted composite), or "external_endpoint"
  • defaultModel — The model key to use when no override matches
  • overrides — Array of model overrides scoped to offer, category, or channel
  • overridePriority — Resolution order, e.g. ["offer", "category", "channel", "default"]
  • formula — PRIE weight-based blending when method is "formula":
    {
      "propensityWeight": 0.4,
      "relevanceWeight": 0.2,
      "impactWeight": 0.3,
      "emphasisWeight": 0.1
    }
    
    Weights must sum to 1.0. Legacy field names (contextWeight, valueWeight, leverWeight) are accepted for backwards compatibility and map to relevanceWeight, impactWeight, and emphasisWeight respectively.
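The weighted composite can be reproduced directly. This is a minimal sketch assuming each PRIE component score is already normalized to 0-1; the component names and dict shapes are illustrative, not the engine's API.

```python
def formula_score(components, weights):
    # Blend the four PRIE components with the configured weights.
    total = sum(weights.values())
    assert abs(total - 1.0) < 1e-9, "weights must sum to 1.0"
    return (components["propensity"] * weights["propensityWeight"]
            + components["relevance"] * weights["relevanceWeight"]
            + components["impact"] * weights["impactWeight"]
            + components["emphasis"] * weights["emphasisWeight"])

score = formula_score(
    {"propensity": 0.8, "relevance": 0.5, "impact": 0.6, "emphasis": 1.0},
    {"propensityWeight": 0.4, "relevanceWeight": 0.2,
     "impactWeight": 0.3, "emphasisWeight": 0.1},
)
print(round(score, 2))  # 0.7
```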
Channel Overrides: Apply a different scoring method for specific channels. At runtime, the engine checks the request’s channel against the overrides. If a match is found, it uses that override’s method and model. Otherwise, the default method applies.
{
  "channelOverrides": [
    { "channelId": "ch_email", "method": "propensity", "modelKey": "email-propensity-v2" },
    { "channelId": "ch_push", "method": "propensity", "modelKey": "push-recency-model" },
    { "channelId": "ch_inapp", "method": "priority_weighted" }
  ]
}
This keeps the pipeline canvas clean — no visual branching needed for channel-specific scoring.
Champion/Challenger experiments:
{
  "championChallenger": {
    "enabled": true,
    "champion": { "modelKey": "model_v2", "weight": 80 },
    "challengers": [
      { "modelKey": "model_v3", "weight": 20 }
    ]
  }
}
Traffic is split deterministically using a hash of the customer ID, so each customer always sees the same model variant.
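A deterministic hash-based split can be sketched as below. The specific hash function and bucketing scheme are assumptions for illustration; the document only guarantees that the split is a deterministic function of the customer ID.

```python
import hashlib

def pick_variant(customer_id, champion, challengers):
    # Map the customer ID to a stable 0-99 bucket, then walk cumulative weights.
    bucket = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16) % 100
    cumulative = 0
    for variant in [champion] + challengers:
        cumulative += variant["weight"]
        if bucket < cumulative:
            return variant["modelKey"]
    return champion["modelKey"]  # fallback if weights total under 100

champion = {"modelKey": "model_v2", "weight": 80}
challengers = [{"modelKey": "model_v3", "weight": 20}]
# Same customer, same variant, every time:
assert pick_variant("cust_12345", champion, challengers) == \
       pick_variant("cust_12345", champion, challengers)
```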
Optimize

Applies multi-objective portfolio optimization using saved profiles or inline weight sliders. Balances revenue, margin, propensity, engagement, and custom objectives to produce a single composite score per candidate.
Config:
  • profileId — Reference to a saved Portfolio Optimization profile (optional; use inline objectives if not set)
  • objectives — Inline objective weights, e.g. { "revenue": 40, "margin": 30, "propensity": 20, "engagement": 10, "custom": 0 }. Each value is 0–100.
The optimizer normalizes each dimension before blending. When both profileId and inline objectives are provided, the profile takes precedence.
The Optimize node is optional. If omitted, candidates proceed to the Rank node with their raw scores from the Score node.
Rank

Produces the final ordered list. One Rank node allowed per pipeline.
Config:
  • method — "topN", "diversity", "round_robin", or "explore_exploit"
  • maxCandidates — Maximum number of candidates to keep in the output (1–50, default 5). All offers beyond this limit are discarded. This is important: if a Group node follows Rank, it can only allocate from the candidates that survive this cut.
  • maxPerCategory — Optional cap on offers per category
  • maxPerChannel — Optional cap on offers per channel
  • explorationRate — Optional 0–1 value for explore/exploit (default 0.1)
Ranking methods:
| Method | Algorithm | Backfill? | Use Case |
| --- | --- | --- | --- |
| topN | Sort by score descending, take top N | N/A | Default. Simple, deterministic. |
| diversity | Round-robin across categories, then backfill remaining slots by score | Yes | Cross-sell campaigns needing category spread |
| round_robin | Strict equal picks per category. May return fewer than maxCandidates | No | Fairness-oriented campaigns with equal exposure |
| explore_exploit | Epsilon-greedy: top offers for exploit slots, random for explore slots | N/A | New offer discovery, cold-start optimization |
explore_exploit details: The explorationRate controls what fraction of slots are used for exploration (random selection from the remaining pool). A rate of 0.2 means 80% exploit (top scores) and 20% explore (random). Exploration is deterministic per customer — the same customer always sees the same exploration picks, ensuring a stable experience. Different customers see different exploration picks for diversity.
All methods enforce maxPerCategory and maxPerChannel caps after the primary ranking.
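The deterministic epsilon-greedy behavior can be sketched as below. This is an illustrative sketch, not the engine's implementation; seeding a per-customer random generator with the customer ID is one plausible way to get the documented stability.

```python
import random

def explore_exploit(candidates, max_candidates, exploration_rate, customer_id):
    ordered = sorted(candidates, key=lambda c: c["score"], reverse=True)
    n_explore = int(max_candidates * exploration_rate)
    # Exploit slots: the top scorers. Explore slots: random picks from the rest.
    exploit = ordered[:max_candidates - n_explore]
    pool = ordered[max_candidates - n_explore:]
    rng = random.Random(customer_id)  # seeded per customer for stable picks
    explore = rng.sample(pool, min(n_explore, len(pool)))
    return exploit + explore

cands = [{"id": i, "score": i / 10} for i in range(10)]
a = explore_exploit(cands, 5, 0.2, "cust_12345")
b = explore_exploit(cands, 5, 0.2, "cust_12345")
assert [c["id"] for c in a] == [c["id"] for c in b]  # stable per customer
```

With maxCandidates 5 and rate 0.2, four slots are exploit (the four highest scores) and one is an exploration pick.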
Group

Allocates candidates across multiple named placements (e.g., hero banner, sidebar, email). Use Group instead of Rank when you need multi-placement allocation.
Config:
  • placements — Array of { placementId, count } definitions
  • allocationStrategy — How offers are assigned to placements:
    • "optimal" (default) — Uses the Hungarian algorithm (Kuhn-Munkres) to find the globally optimal assignment across all placements simultaneously. Complexity is O(n³) where n is the number of candidates. The best choice for most use cases.
    • "greedy" — Fills placements in config order. Each placement grabs the highest-scoring remaining offers. Simpler and predictable, but may not produce the globally best assignment.
    • "priority_fill" — Alias for greedy; fills placements in config order, O(n log n).
  • allowPartial — Whether some placements can receive no offers (default true)
Why optimal matters: With greedy allocation, the order of placements in the request affects results. A placement listed first gets the best offer, even if that offer would produce a higher total score in a different placement. Optimal allocation considers all placements together and finds the best global assignment.
When a Group node is present, the Recommend API response includes a placements object instead of a flat array:
{
  "placements": {
    "hero": [{ "offerId": "...", "score": 0.95 }],
    "sidebar": [{ "offerId": "...", "score": 0.82 }]
  }
}
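To make the order-dependence concrete, here is a sketch of the greedy strategy only (the optimal strategy would instead solve the assignment problem over all placements at once, e.g. with the Hungarian algorithm). The dict shapes are illustrative assumptions; note the effect is most visible when scores differ per placement, which this simplified single-score sketch does not model.

```python
def greedy_allocate(candidates, placements):
    # Fill placements in config order; each takes the best remaining offers.
    remaining = sorted(candidates, key=lambda c: c["score"], reverse=True)
    result = {}
    for p in placements:
        result[p["placementId"]] = remaining[:p["count"]]
        remaining = remaining[p["count"]:]
    return result

cands = [{"offerId": "a", "score": 0.9}, {"offerId": "b", "score": 0.8},
         {"offerId": "c", "score": 0.6}, {"offerId": "d", "score": 0.5}]
out = greedy_allocate(cands, [{"placementId": "hero", "count": 1},
                              {"placementId": "sidebar", "count": 3}])
print([o["offerId"] for o in out["sidebar"]])  # ['b', 'c', 'd']
```

Because "hero" is listed first, it always captures the top-scoring offer "a", regardless of whether "a" would contribute more total score in the sidebar.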

Phase 3 — Output

Compute

Evaluates formula-based computed fields for each candidate offer. One Compute node allowed.
Config:
  • overrides — Flow-level formula overrides (array of { name, formula, outputType })
  • extras — Additional computed fields beyond what categories define (same shape)
See the Computed Values Guide and Formula Reference for formula syntax.
Set Properties

Attaches static or derived key-value pairs to each candidate before the response is assembled.
Config: properties — Array of { key, value } pairs. Each property can also include a formula field for dynamically computed values.
Response

Formats the final output. Every pipeline must end with exactly one Response node.
Config:
  • includeDebugTrace — Whether to include debug trace data (default false)
  • responseFormat — "standard" (flat array) or "grouped" (placements object, requires Group node)
Extension Point

Injects custom logic at critical moments in the pipeline without modifying the core flow. No-op when unconfigured.
Config:
  • hookName — One of pre_score, score_override, post_rank
  • label — Human-readable label for the extension point
  • description — Description of the extension point’s purpose
  • configured — Whether the extension point is active (default false)
  • subFlowId — Optional sub-flow to execute at this hook
See Decision Flows — Extension Points for details on each hook.

Node Implementation Status

All 16 node types are fully functional in the current release:
| Node | Status | Notes |
| --- | --- | --- |
| inventory | Fully functional | Loads offers by scope, respects status filters |
| match_creatives | Fully functional | Matches creatives to placements |
| enrich | Fully functional | Queries schema tables, caches via Redis, supports multiple sources |
| qualify | Fully functional | AND/OR logic trees, rule loading from database |
| contact_policy | Fully functional | Full history-based policy evaluation |
| filter | Fully functional | 13 operators, AND/OR combinator |
| conditional | Fully functional | Condition-based routing to sub-flows, keepNonMatching, depth limit (2), circular reference guard |
| call_flow | Fully functional | Sub-flow invocation with depth limit (2), circular reference guard, fail-open/closed |
| score | Fully functional | 4 methods, channel overrides, champion/challenger |
| optimize | Fully functional | Multi-objective portfolio optimization with saved profiles or inline weights |
| rank | Fully functional | 4 algorithms (topN, diversity, round_robin, explore_exploit) |
| group | Fully functional | Hungarian optimal allocation, greedy, allowPartial |
| compute | Fully functional | Formula overrides and extras |
| set_properties | Fully functional | Static and formula-derived properties |
| response | Fully functional | Standard and grouped formats |
| extension_point | Fully functional | pre_score, score_override, post_rank hooks with optional sub-flow |

Worked Example

This example walks through 8 offers being processed by a pipeline with grouping and computed fields.

Pipeline Config

{
  "version": 2,
  "nodes": [
    { "id": "n1", "type": "inventory", "phase": 1, "position": 0,
      "config": { "scope": "all", "includeStatuses": ["active"] } },
    { "id": "n2", "type": "filter", "phase": 1, "position": 1,
      "config": {
        "conditions": [
          { "field": "offer.priority", "operator": "gte", "value": 30 }
        ],
        "combinator": "AND"
      } },
    { "id": "n3", "type": "score", "phase": 2, "position": 0,
      "config": { "method": "priority_weighted" } },
    { "id": "n4", "type": "group", "phase": 2, "position": 1,
      "config": {
        "placements": [
          { "placementId": "hero", "count": 1 },
          { "placementId": "sidebar", "count": 3 }
        ],
        "allocationStrategy": "priority_fill"
      } },
    { "id": "n5", "type": "compute", "phase": 3, "position": 0,
      "config": {
        "extras": [
          { "name": "display_rate", "formula": "round(base_rate * 0.9, 2)", "outputType": "number" }
        ]
      } },
    { "id": "n6", "type": "response", "phase": 3, "position": 1,
      "config": { "responseFormat": "grouped" } }
  ]
}

Step-by-Step Execution

Step 1 — Inventory: Loads 8 active offers from the database.
| Offer | Priority | Weight | base_rate |
| --- | --- | --- | --- |
| Premium Card | 90 | 100 | 14.99 |
| Travel Rewards | 80 | 80 | 17.99 |
| Cash Back | 70 | 90 | 15.49 |
| Student Card | 25 | 100 | 22.99 |
| Balance Transfer | 60 | 70 | 12.99 |
| Secured Card | 20 | 100 | 24.99 |
| Business Platinum | 85 | 60 | 16.99 |
| Everyday Card | 40 | 50 | 19.99 |
Step 2 — Filter (offer.priority >= 30): Removes Student Card (25) and Secured Card (20). 6 remain.
Step 3 — Score (priority_weighted): Each candidate gets score = (priority/100) * (weight/100).
| Offer | Score |
| --- | --- |
| Premium Card | 0.90 |
| Travel Rewards | 0.64 |
| Cash Back | 0.63 |
| Business Platinum | 0.51 |
| Balance Transfer | 0.42 |
| Everyday Card | 0.20 |
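These scores follow directly from the priority_weighted formula. The sketch below just reproduces the arithmetic from the offer table for verification.

```python
# (priority, weight) pairs from the worked example's offer table.
offers = {"Premium Card": (90, 100), "Travel Rewards": (80, 80),
          "Cash Back": (70, 90), "Business Platinum": (85, 60),
          "Balance Transfer": (60, 70), "Everyday Card": (40, 50)}

# score = (priority/100) * (weight/100)
scores = {name: (p / 100) * (w / 100) for name, (p, w) in offers.items()}
print(round(scores["Travel Rewards"], 2))  # 0.64
```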
Step 4 — Group (priority_fill): Allocates all 6 scored candidates to placements in score order, no duplicates across placements. The total placement slots (1 + 3 = 4) naturally limit the output.
| Placement | Offers |
| --- | --- |
| hero (1 slot) | Premium Card (0.90) |
| sidebar (3 slots) | Travel Rewards (0.64), Cash Back (0.63), Business Platinum (0.51) |
Balance Transfer (0.42) and Everyday Card (0.20) are unplaced.
Step 5 — Compute: Evaluates display_rate = round(base_rate * 0.9, 2) for each placed candidate.
| Offer | display_rate |
| --- | --- |
| Premium Card | 13.49 |
| Travel Rewards | 16.19 |
| Cash Back | 13.94 |
| Business Platinum | 15.29 |
Step 6 — Response (grouped format): Returns the final response:
{
  "customerId": "cust_12345",
  "placements": {
    "hero": [
      {
        "offerId": "offer_premium_card",
        "offerName": "Premium Card",
        "score": 0.90,
        "rank": 1,
        "personalization": { "display_rate": 13.49 }
      }
    ],
    "sidebar": [
      {
        "offerId": "offer_travel_rewards",
        "offerName": "Travel Rewards",
        "score": 0.64,
        "rank": 2,
        "personalization": { "display_rate": 16.19 }
      },
      {
        "offerId": "offer_cash_back",
        "offerName": "Cash Back",
        "score": 0.63,
        "rank": 3,
        "personalization": { "display_rate": 13.94 }
      },
      {
        "offerId": "offer_biz_platinum",
        "offerName": "Business Platinum",
        "score": 0.51,
        "rank": 4,
        "personalization": { "display_rate": 15.29 }
      }
    ]
  },
  "traceSummary": {
    "totalCandidates": 8,
    "afterQualification": 0,
    "afterContactPolicy": 0,
    "topScores": [
      { "offerId": "offer_premium_card", "score": 0.90 },
      { "offerId": "offer_travel_rewards", "score": 0.64 },
      { "offerId": "offer_cash_back", "score": 0.63 },
      { "offerId": "offer_biz_platinum", "score": 0.51 }
    ]
  }
}

Pipeline Validation

KaireonAI validates the pipeline structure when you save a flow. Invalid pipelines are rejected with specific error codes:
| Code | Rule |
| --- | --- |
| EMPTY_PIPELINE | Pipeline must contain at least one node |
| MISSING_INVENTORY | Must start with an Inventory node |
| MISSING_RESPONSE | Must end with a Response node |
| MISSING_SCORE | Must contain a Score node |
| DUPLICATE_SINGLETON | Only one of each: inventory, score, rank, group, compute, response |
| PHASE_ORDER_VIOLATION | Phases must be non-decreasing (1 -> 2 -> 3) |
| FILTER_WRONG_PHASE | Filter nodes must be in Phase 1 |
| RANK_AND_GROUP_CONFLICT | Rank and Group nodes cannot coexist in the same flow |
| CALL_FLOW_WRONG_PHASE | Call Flow nodes must be in Phase 1 or 2 |
| CALL_FLOW_MAX_DEPTH | Sub-flow nesting cannot exceed 2 levels |
| CALL_FLOW_CIRCULAR | Circular call_flow references are not allowed |
| INVALID_NODE_CONFIG | A node’s config doesn’t match its type-specific schema |
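A few of these structural checks can be sketched as a validator over the node list. This is illustrative only, not the actual validator; it covers a subset of the rules and assumes nodes are given in pipeline order.

```python
SINGLETONS = {"inventory", "score", "rank", "group", "compute", "response"}

def validate(nodes):
    errors = []
    if not nodes:
        return ["EMPTY_PIPELINE"]
    types = [n["type"] for n in nodes]
    if "inventory" not in types: errors.append("MISSING_INVENTORY")
    if "response" not in types: errors.append("MISSING_RESPONSE")
    if "score" not in types: errors.append("MISSING_SCORE")
    for t in SINGLETONS:
        if types.count(t) > 1:
            errors.append("DUPLICATE_SINGLETON")
            break
    phases = [n["phase"] for n in nodes]
    if phases != sorted(phases):  # phases must be non-decreasing
        errors.append("PHASE_ORDER_VIOLATION")
    if "rank" in types and "group" in types:
        errors.append("RANK_AND_GROUP_CONFLICT")
    if any(n["type"] == "filter" and n["phase"] != 1 for n in nodes):
        errors.append("FILTER_WRONG_PHASE")
    return errors

nodes = [{"type": "inventory", "phase": 1}, {"type": "score", "phase": 2},
         {"type": "rank", "phase": 2}, {"type": "group", "phase": 2},
         {"type": "response", "phase": 3}]
print(validate(nodes))  # ['RANK_AND_GROUP_CONFLICT']
```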

Config Format

A composable pipeline flow config uses this JSON structure:
{
  "version": 2,
  "nodes": [
    { "id": "n1", "type": "inventory", "phase": 1, "position": 0, "config": { "scope": "all" } },
    { "id": "n2", "type": "filter", "phase": 1, "position": 1, "config": { "conditions": [{ "field": "offer.status", "operator": "eq", "value": "active" }], "combinator": "AND" } },
    { "id": "n3", "type": "score", "phase": 2, "position": 0, "config": { "method": "priority_weighted" } },
    { "id": "n4", "type": "rank", "phase": 2, "position": 1, "config": { "method": "topN", "maxCandidates": 5 } },
    { "id": "n5", "type": "response", "phase": 3, "position": 0, "config": {} }
  ],
  "flowConfig": {
    "experiment": { "enabled": false },
    "timeout": { "maxMs": 500 },
    "caching": { "enabled": false }
  }
}
The version: 2 field identifies the config as a composable pipeline.

API

Saving a Flow

Use the standard Decision Flows API. The draftConfig field accepts the composable pipeline format:
PUT /api/v1/decision-flows
{
  "id": "df_12345",
  "draftConfig": {
    "version": 2,
    "nodes": [
      { "id": "n1", "type": "inventory", "phase": 1, "position": 0, "config": { "scope": "all" } },
      { "id": "n2", "type": "score", "phase": 2, "position": 0, "config": { "method": "priority_weighted" } },
      { "id": "n3", "type": "rank", "phase": 2, "position": 1, "config": { "method": "topN", "maxCandidates": 5 } },
      { "id": "n4", "type": "response", "phase": 3, "position": 0, "config": {} }
    ],
    "flowConfig": {}
  }
}
KaireonAI validates the config with both schema validation and pipeline structural rules before saving.

Executing a Flow

Flows are executed through the Recommend API:
POST /api/v1/recommend
{
  "customerId": "cust_12345",
  "decisionFlowKey": "credit_cards",
  "attributes": { "channel": "web" },
  "limit": 5
}

Next Steps

Decision Flows

Learn the basics of Decision Flows and the pipeline model.

Computed Values

Formula syntax and supported functions for computed fields.

Formula Reference

Complete operator and function reference for the formula engine.