Connector status: 80 connector types are registered in the UI. Two of them, amazon_kinesis and braze, ship as coming-soon: they expose create/edit forms (and a working Test Connection probe for Amazon Kinesis), but pipeline runs that source from them no-op (the executor logs a message and returns zero rows). The 26 W16 expansion entries documented on Connectors Expanded are also coming-soon. Every other registered type is production-ready.

GET /api/v1/connectors
List all connectors for the current tenant. Supports cursor-based pagination.

Query Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| limit | integer | 20 | Max results per page (max 100) |
| cursor | string | — | Cursor for pagination (ID of last item from previous page) |
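For illustration, a minimal sketch of walking the full list with the cursor parameter. The base URL, the bearer-token auth header, and the response field names ("data", "nextCursor") are assumptions, since the response schema is not spelled out on this page; only limit and cursor come from the table above.

```typescript
// Minimal sketch, not an official client. BASE_URL, the auth scheme, and
// the response field names are assumptions.
const BASE_URL = "https://app.example.com"; // placeholder
const API_TOKEN = "..."; // placeholder

async function listAllConnectors(): Promise<unknown[]> {
  const all: unknown[] = [];
  let cursor: string | undefined;
  do {
    const params = new URLSearchParams({ limit: "100" }); // 100 is the documented max
    if (cursor) params.set("cursor", cursor);
    const res = await fetch(`${BASE_URL}/api/v1/connectors?${params}`, {
      headers: { Authorization: `Bearer ${API_TOKEN}` },
    });
    if (!res.ok) throw new Error(`List failed with ${res.status}`);
    const page = await res.json();
    all.push(...page.data); // assumed field name
    cursor = page.nextCursor; // assumed field name; absent on the last page
  } while (cursor);
  return all;
}
```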
Response
The authConfig field is never returned in list or detail responses to prevent secret leakage.

POST /api/v1/connectors
Create a new connector.

Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Unique connector name |
| type | string | Yes | Connector type (see supported types below) |
| description | string | No | Human-readable description |
| config | object | No | Type-specific configuration (host, port, bucket, etc.). Also accepts connectionConfig as an alias. |
| authMethod | string | No | Authentication method. Default: "access_key" |
| authConfig | object | No | Credentials (encrypted at rest, never returned in responses) |
The config field contains type-specific settings, such as bucket, region, and prefix for S3, or account, warehouse, and database for Snowflake. You can also send connectionConfig as an alias; the API accepts both names and merges them.

Supported Connector Types
| Type | Status | Required Config Fields |
|---|---|---|
| postgresql, mysql, redshift | Ready | host, port, database |
| snowflake | Ready | account, warehouse, database, sourceTable, rowLimit (optional) |
| bigquery | Ready | project, dataset, sourceTable, rowLimit (optional) |
| mongodb | Ready | host or connectionString |
| kafka, confluent_kafka | Ready (batch polling) | bootstrapServers |
| aws_s3 | Ready | bucket, region |
| gcs | Ready | bucket, project |
| azure_blob | Ready | container, storageAccount |
| sftp | Ready | host, port |
| rest_api, webhook | Ready | url |
| salesforce, hubspot | Ready | instanceUrl or apiKey |
| databricks | Ready | host, httpPath |
| segment, shopify, stripe, mailchimp | Ready | See /platform/data for fields |
| amazon_kinesis | Coming soon (connection test works) | streamName, region |
| braze | Coming soon | See /platform/data for fields |
Kafka is batch polling, not true streaming. Each pipeline run opens a consumer, reads up to maxMessages records (default 1000) with a configurable wait timeout (default 15 seconds), commits offsets, and closes. True long-lived streaming requires a persistent worker that is not yet implemented.

Snowflake and BigQuery row limits. Both connectors accept a sourceTable (required) and a rowLimit (optional). The executor issues SELECT * FROM <sourceTable> LIMIT <rowLimit>. The default rowLimit is 100,000 rows (demo-safe). Set it to 0 to remove the cap; only do this once you have sized the target database and pipeline run budget to handle full-table reads.

Example
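A hedged sketch of creating a Snowflake connector. The config keys match the table above; the names, values, and auth header are illustrative assumptions.

```typescript
// Illustrative request only: values and the bearer-token header are assumptions.
const BASE_URL = "https://app.example.com"; // placeholder
const API_TOKEN = "..."; // placeholder

const res = await fetch(`${BASE_URL}/api/v1/connectors`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${API_TOKEN}`,
  },
  body: JSON.stringify({
    name: "analytics-snowflake",
    type: "snowflake",
    description: "Warehouse source for the nightly pipeline",
    config: {
      account: "acme-xy12345",
      warehouse: "ANALYTICS_WH",
      database: "PROD",
      sourceTable: "PUBLIC.ORDERS",
      rowLimit: 100000, // the default cap; set 0 only after sizing for full-table reads
    },
    authMethod: "access_key",
    // Encrypted at rest and never returned in responses. The field names
    // inside authConfig are illustrative assumptions.
    authConfig: { username: "pipeline_user", password: "REDACTED" },
  }),
});
```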
201 Created with the connector object (excluding authConfig).
PUT /api/v1/connectors
Update an existing connector. Only provided fields are updated.

Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| id | string | Yes | Connector ID |
| name | string | No | Updated name |
| type | string | No | Updated type |
| description | string | No | Updated description |
| config | object | No | Updated configuration |
| authMethod | string | No | Updated auth method |
| authConfig | object | No | Updated credentials (re-encrypted) |
| status | string | No | Updated status |
200 OK with the updated connector (excluding authConfig).
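For instance, rotating credentials without touching any other field. A sketch; the connector ID, the shape of authConfig, and the auth header are illustrative.

```typescript
// Hypothetical partial update: only the fields provided in the body change.
const BASE_URL = "https://app.example.com", API_TOKEN = "..."; // placeholders

const res = await fetch(`${BASE_URL}/api/v1/connectors`, {
  method: "PUT",
  headers: { "Content-Type": "application/json", Authorization: `Bearer ${API_TOKEN}` },
  body: JSON.stringify({
    id: "conn_123", // illustrative ID
    authConfig: { password: "rotated-secret" }, // re-encrypted server-side
  }),
});
// Expect 200 OK with the updated connector (authConfig omitted).
```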
DELETE /api/v1/connectors
Delete a connector by ID.

Query Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| id | string | Yes | Connector ID to delete |
204 No Content
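A brief sketch, worth spelling out because the ID travels as a query parameter rather than a path segment; placeholders are illustrative.

```typescript
// Hypothetical: the connector ID goes in the query string, not the path.
const BASE_URL = "https://app.example.com", API_TOKEN = "..."; // placeholders

await fetch(`${BASE_URL}/api/v1/connectors?id=${encodeURIComponent("conn_123")}`, {
  method: "DELETE",
  headers: { Authorization: `Bearer ${API_TOKEN}` },
});
// Expect 204 No Content.
```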
POST /api/v1/connectors/test
Test a connector’s connection by performing a real probe (TCP, HTTP, or SDK-specific check). Rate limited to 100 requests per 60 seconds.

Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| id | string | Yes | ID of the connector to test |
Response
- Validates required configuration fields for the connector type
- Performs a real connection probe (TCP for databases, HTTP HEAD for REST/webhook, bucket checks for cloud storage, SDK-specific probes for Databricks and Amazon Kinesis)
- Updates the connector’s status to "active" (success) or "error" (failure)
- Uses a circuit breaker to prevent hammering failed connectors
- Includes SSRF protection (blocks private IPs, validates DNS resolution)
- Returns masked authConfig values (first 2 + last 2 characters visible; see the sketch after this list)
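As a sketch of the masking rule in the last item: only the first-2/last-2 rule is documented, so the mask character and the handling of very short values are assumptions.

```typescript
// Sketch of the documented rule: first 2 and last 2 characters stay visible.
function maskSecret(value: string): string {
  if (value.length <= 4) return "*".repeat(value.length); // assumed: reveal nothing
  return value.slice(0, 2) + "*".repeat(value.length - 4) + value.slice(-2);
}

maskSecret("AKIAIOSFODNN7EXAMPLE"); // -> "AK****************LE"
```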
Connection testing for the coming-soon braze connector reports a generic failure until a dedicated probe is wired. Amazon Kinesis has a working test probe today.

Error Responses
| Status | Cause |
|---|---|
| 400 | Missing id or invalid JSON |
| 404 | Connector not found |
| 429 | Rate limit exceeded |
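Putting the status codes together, a hedged sketch that probes a connector and backs off once on 429. The wait time mirrors the documented 60-second window; the base URL, token, and retry policy are assumptions.

```typescript
// Hypothetical caller: probe a connector, retrying once if rate limited.
const BASE_URL = "https://app.example.com", API_TOKEN = "..."; // placeholders

async function testConnector(id: string): Promise<boolean> {
  for (let attempt = 0; attempt < 2; attempt++) {
    const res = await fetch(`${BASE_URL}/api/v1/connectors/test`, {
      method: "POST",
      headers: { "Content-Type": "application/json", Authorization: `Bearer ${API_TOKEN}` },
      body: JSON.stringify({ id }),
    });
    if (res.status === 429) {
      await new Promise((r) => setTimeout(r, 60_000)); // back off for the 60 s window
      continue;
    }
    if (res.status === 404) throw new Error(`Connector ${id} not found`);
    return res.ok; // success sets status to "active"; failure sets "error"
  }
  return false; // still rate limited after one retry
}
```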
Roles
| Endpoint | Allowed Roles |
|---|---|
| GET /connectors | admin, editor, viewer |
| POST /connectors | admin, editor |
| PUT /connectors | admin, editor |
| DELETE /connectors | admin, editor |
| POST /connectors/test | admin, editor |