This page is the working-example companion to the platform docs. Every cURL block below was actually executed against a live local platform during the 2026-05-02 testing pass — copy, paste, change the IDs, run.
Documentation Index
Fetch the complete documentation index at: https://docs.kaireonai.com/llms.txt
Use this file to discover all available pages before exploring further.
Setup assumptions
- Platform running at http://localhost:3000 (or substitute your host).
- A user is signed in; cookies are persisted in kc.txt (see API Authentication for getting a session cookie programmatically). For server-to-server, swap the -b kc.txt for -H "X-API-Key: krn_...".
- AWS CLI configured with creds that can aws sts assume-role against the role you’re about to create.
1. Create the IAM role the connector will assume
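As a sketch, the trust policy might look like the following — the account ID, principal, and external ID are placeholders you must replace:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::123456789012:user/you" },
    "Action": "sts:AssumeRole",
    "Condition": { "StringEquals": { "sts:ExternalId": "my-external-id" } }
  }]
}
```

The permissions policy attached to the role then needs only minimal read access (for example s3:ListBucket and s3:GetObject) scoped to the one bucket.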
The trust policy lets your local IAM identity assume the role with an external ID; the permissions policy grants minimal S3 access on one bucket.
2. Upload a sample CSV
The file’s header row is: customer_id, first_name, last_name, email, phone, address, city, state, zip_code, country, signup_date, segment, status, lifetime_value, orders_count.
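A minimal sample file matching that header might look like this (the data row is invented for illustration):

```csv
customer_id,first_name,last_name,email,phone,address,city,state,zip_code,country,signup_date,segment,status,lifetime_value,orders_count
1001,Ada,Lovelace,ada@example.com,555-0101,12 Analytical Way,London,LDN,10001,UK,2026-01-15,premium,active,10576.95,14
```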
3. Create the S3 connector via the API
POST /api/v1/connectors with authMethod: "iam_role" triggers the
STS AssumeRole + HeadBucket + ListObjectsV2 verification path.
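A request body of roughly this shape drives that path — authMethod: "iam_role" comes from these docs; the other field names are assumptions, so check the Connectors reference:

```json
{
  "name": "Customer S3 Bucket",
  "type": "s3",
  "config": {
    "bucket": "my-flow-bucket",
    "region": "us-east-1",
    "authMethod": "iam_role",
    "roleArn": "arn:aws:iam::123456789012:role/flow-connector-role",
    "externalId": "my-external-id"
  }
}
```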
4. Create the schema with customer_id as the primary key
When any field is marked isPrimaryKey: true, the auto-generated id BIGSERIAL column is skipped and your column becomes the table’s
PK. See Data Model.
Here customer_id is the PK and there is no auto id column:
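A hedged sketch of the schema payload — only isPrimaryKey comes from this page; the rest of the shape is assumed, and the remaining CSV columns are omitted for brevity:

```json
{
  "name": "ds_customer",
  "fields": [
    { "name": "customer_id", "type": "bigint", "isPrimaryKey": true },
    { "name": "first_name", "type": "text" },
    { "name": "email", "type": "text" },
    { "name": "signup_date", "type": "date" },
    { "name": "lifetime_value", "type": "numeric" },
    { "name": "orders_count", "type": "integer" }
  ]
}
```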
5. Create the pipeline IR
The IR has four nodes: source → transform → validate → target
(upsert keyed on customer_id, with the safety bundle: empty-source
guard on, validate quarantine to a DLQ table). Note the required
irVersion: "1.0" and the errorHandling block.
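An illustrative envelope — irVersion, the four node types, the upsert key, the DLQ quarantine, and the errorHandling block come from this page; node ids and config field names are assumptions:

```json
{
  "irVersion": "1.0",
  "nodes": [
    { "id": "src", "type": "source", "config": { "connectorId": "conn_123", "emptySourceGuard": true } },
    { "id": "tx",  "type": "transform", "config": { "steps": [] } },
    { "id": "val", "type": "validate", "config": { "onFailure": "quarantine", "dlqTable": "ds_customer_dlq" } },
    { "id": "tgt", "type": "target", "config": { "table": "ds_customer", "loadMode": "upsert", "upsertKey": "customer_id" } }
  ],
  "errorHandling": { "onError": "fail_run" }
}
```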
6. Run the pipeline
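A run is triggered with a bare POST to /api/v1/pipelines/:id/run. A successful response might look like the following (the field names here are assumptions, not confirmed by the API reference):

```json
{ "runId": "run_abc123", "status": "queued" }
```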
7. Switch the load mode and re-run
The IR is updated via POST /api/v1/pipelines/:id/ir (creates a new
version atomically). Pattern: GET the current IR, mutate the target
node, POST the new envelope. Re-upload the source file each run because
the source executor archives it after success.
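The mutate step of that pattern reduces to a small pure transformation. A sketch — the envelope shape (nodes, type, config.loadMode) is assumed from the IR described above:

```javascript
// Switch the target node's load mode without touching the rest of the IR.
// The caller GETs the current envelope, applies this, and POSTs the result.
function setLoadMode(ir, mode) {
  const next = structuredClone(ir); // keep the fetched envelope untouched
  const target = next.nodes.find((n) => n.type === "target");
  target.config.loadMode = mode;
  return next;
}

const ir = {
  irVersion: "1.0",
  nodes: [{ id: "tgt", type: "target", config: { loadMode: "upsert" } }],
};
console.log(setLoadMode(ir, "truncate").nodes[0].config.loadMode); // "truncate"
console.log(ir.nodes[0].config.loadMode); // original unchanged: "upsert"
```

Cloning before mutating means a failed POST leaves you holding the still-valid current version.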
Switch to truncate (with the empty-source guard)
Test the empty-source guard
Don’t re-upload the file. The next run produces an empty source, and the runtime aborts the truncate BEFORE the destructive statement, so your table is preserved:
A warning System Health alert (“Pipeline source had no files”) and an error alert (“Pipeline run failed: Customer File Test”) fire
automatically. Read them via GET /api/v1/system-health (see
System Health).
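What GET /api/v1/system-health returns for those two alerts might look like this — the alert messages come from this page, but the envelope field names are assumptions:

```json
{
  "alerts": [
    { "severity": "warning", "message": "Pipeline source had no files" },
    { "severity": "error", "message": "Pipeline run failed: Customer File Test" }
  ]
}
```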
Switch to blue_green
The run loads into ds_customer_new, then issues the
atomic 3-step rename (target → target_old, target_new → target,
DROP target_old). Empty-source guard applies — same protection as
truncate.
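Spelled out against a target table named ds_customer (the table name is an assumption), the 3-step rename is equivalent to:

```sql
BEGIN;
ALTER TABLE ds_customer RENAME TO ds_customer_old;  -- target -> target_old
ALTER TABLE ds_customer_new RENAME TO ds_customer;  -- target_new -> target
DROP TABLE ds_customer_old;                         -- drop the old copy
COMMIT;
```

Because all three statements run in one transaction, readers never observe a missing table.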
Switch to incremental_watermark
The first run computes MAX(signup_date) from the target and writes the
high-water mark to pipeline_watermarks. Subsequent runs only load rows
where signup_date > <persisted watermark>. The INSERT and watermark
upsert run inside a single Postgres transaction so a failed load rolls
back the watermark advance.
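An illustrative shape of one incremental run — only pipeline_watermarks and the single-transaction behavior come from this page; the staging table, columns, and literal watermark are placeholders:

```sql
BEGIN;
-- load only rows past the persisted watermark
INSERT INTO ds_customer
  SELECT * FROM staging_customer WHERE signup_date > '2026-04-30';
-- advance the watermark inside the same transaction
INSERT INTO pipeline_watermarks (pipeline_id, watermark)
VALUES ('pl_123', (SELECT MAX(signup_date) FROM ds_customer))
ON CONFLICT (pipeline_id) DO UPDATE SET watermark = EXCLUDED.watermark;
COMMIT;
```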
Try cdc_mirror (env-gated)
cdc_mirror requires FLOW_STREAMING_ENABLED=true and a configured
Debezium connector. Without those, the runtime returns a clear
“streaming disabled” error — useful as a smoke test that the gate is
wired:
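Running with the gate closed might yield a response like this — the “streaming disabled” wording comes from this page, but the exact error envelope and detail text are assumptions:

```json
{
  "error": "streaming disabled",
  "detail": "cdc_mirror requires FLOW_STREAMING_ENABLED=true and a configured Debezium connector"
}
```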
8. Verify rows landed
lifetime_value arrives as a string (“10576.95”) because the route
runs through the safeJson helper that converts Postgres numeric /
Decimal to its string form (preserves precision past
Number.MAX_SAFE_INTEGER). See Flow Lineage.
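The precision argument is easy to demonstrate in Node — a sketch of why numerics wider than Number.MAX_SAFE_INTEGER must travel as strings:

```javascript
// A Postgres numeric wider than 2**53 - 1 cannot round-trip through a JS number.
const fromPg = "9007199254740993"; // Number.MAX_SAFE_INTEGER + 2
const asNumber = Number(fromPg);
console.log(asNumber === 9007199254740992);  // true: rounded, last digit lost
console.log(String(asNumber) === fromPg);    // false: the round-trip fails
console.log(fromPg);                         // the string form keeps every digit
```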
9. Read the System Health feed
Every operational error from the run path lands here.
10. Optional: drive the same pipeline via MCP
The MCP server exposes the same operations as tools. From an MCP-aware client (Claude Desktop, Cursor):
| Tool | What it does |
|---|---|
| listFlowPipelines | Returns the same data as GET /api/v1/pipelines |
| runFlowPipeline | Same as POST /api/v1/pipelines/:id/run |
| getFlowPipelineIr | Same as GET /api/v1/pipelines/:id/ir |
| updateFlowPipelineIr | POST a new IR version |
| inspectFlowError | Returns DLQ rows for the latest failed run |
| replayFlowRun | Re-runs a pipeline from a specific run id |
Start the server with cd platform && npm run mcp (set
KAIREONAI_API_KEY and KAIREONAI_TENANT_ID in env). See
MCP Flow Server.
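For a client configured via JSON (such as Claude Desktop), the entry might look like the following — the config schema belongs to the client, so treat the shape as an assumption and check your client’s docs:

```json
{
  "mcpServers": {
    "kaireonai-flow": {
      "command": "npm",
      "args": ["run", "--prefix", "platform", "mcp"],
      "env": {
        "KAIREONAI_API_KEY": "krn_your_key",
        "KAIREONAI_TENANT_ID": "your_tenant_id"
      }
    }
  }
}
```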