Overview
The Data module is the foundation of Kaireon. It lets you connect to external data sources, define entity schemas that create real database tables, and build visual pipelines to transform and load data.

Connectors
Connectors define how Kaireon reaches your external data. Over 20 connector types are supported:

| Category | Types |
|---|---|
| Cloud Storage | Amazon S3, Google Cloud Storage, Azure Blob |
| Databases | PostgreSQL, MySQL, Oracle, SQL Server, MongoDB |
| Data Warehouses | Snowflake, BigQuery, Redshift, Databricks |
| Streaming | Kafka, Amazon Kinesis, Azure Event Hubs |
| APIs | REST API, GraphQL, Salesforce, HubSpot |
| Files | SFTP, Local File Upload |
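As a rough sketch of how a connector might be declared, the snippet below validates a connector definition against the categories in the table above. The field names (`name`, `category`, `type`, `options`) and the validation helper are illustrative assumptions, not Kaireon's actual API.

```python
# Hypothetical connector definition, for illustration only. Field names and
# the category/type vocabulary are assumptions based on the table above.
def validate_connector(config: dict) -> dict:
    """Check that a connector config names a supported category and type."""
    supported = {
        "cloud_storage": {"s3", "gcs", "azure_blob"},
        "database": {"postgresql", "mysql", "oracle", "sqlserver", "mongodb"},
        "warehouse": {"snowflake", "bigquery", "redshift", "databricks"},
        "streaming": {"kafka", "kinesis", "event_hubs"},
        "api": {"rest", "graphql", "salesforce", "hubspot"},
        "file": {"sftp", "upload"},
    }
    if config["type"] not in supported.get(config["category"], set()):
        raise ValueError(f"unsupported connector type: {config['type']}")
    return config

s3_source = validate_connector({
    "name": "raw-events",
    "category": "cloud_storage",
    "type": "s3",
    "options": {"bucket": "acme-raw", "region": "us-east-1"},
})
```

Connection credentials would live alongside `options` in a real definition; they are omitted here.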
Schemas
Schemas define your entity structure — customers, accounts, transactions, and so on. When you create a schema, Kaireon creates an actual PostgreSQL table with the fields you define. Supported field types: text, integer, decimal, boolean, date, timestamp, json.
Schemas are referenced by:
- Enrichment stages in decision flows (to load customer data at decision time)
- Computed values (formulas that reference customer.* fields)
- Pipelines (as target destinations)
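To make the schema-to-table mapping concrete, here is a minimal sketch that turns a field list into a PostgreSQL `CREATE TABLE` statement. The type mapping and the generated DDL are illustrative assumptions; Kaireon's actual column types may differ.

```python
# Assumed mapping from Kaireon field types to PostgreSQL column types.
PG_TYPES = {
    "text": "TEXT", "integer": "BIGINT", "decimal": "NUMERIC",
    "boolean": "BOOLEAN", "date": "DATE", "timestamp": "TIMESTAMPTZ",
    "json": "JSONB",
}

def schema_to_ddl(name: str, fields: dict) -> str:
    """Render a schema definition as a CREATE TABLE statement (sketch)."""
    cols = ", ".join(f"{col} {PG_TYPES[t]}" for col, t in fields.items())
    return f"CREATE TABLE {name} ({cols});"

ddl = schema_to_ddl("customers", {"id": "integer", "email": "text", "signup": "date"})
# ddl == "CREATE TABLE customers (id BIGINT, email TEXT, signup DATE);"
```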
Pipelines
Pipelines are visual ETL workflows built with a drag-and-drop flow editor. Each pipeline has:

- Source nodes — Read from a connector
- Transform nodes — 14 transform types including cast, expression, rename, filter, hash, mask PII, join, aggregate, sort, deduplicate, pivot, unpivot, lookup, and custom SQL
- Target nodes — Write to a schema table or external destination
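The snippet below is a toy version of three of the transform types above (rename, filter, and a hash standing in for mask PII), chained the way pipeline stages compose. It is a sketch of the idea, not Kaireon's execution engine.

```python
import hashlib

# Toy transforms operating on lists of dict rows, chained source-to-target.
def rename(rows, mapping):
    """Rename columns per mapping (the 'rename' transform)."""
    return [{mapping.get(k, k): v for k, v in r.items()} for r in rows]

def filter_rows(rows, predicate):
    """Keep only rows matching the predicate (the 'filter' transform)."""
    return [r for r in rows if predicate(r)]

def hash_field(rows, field):
    """Replace a value with a one-way digest, so downstream stages
    never see the raw data (standing in for the 'mask PII' transform)."""
    return [{**r, field: hashlib.sha256(r[field].encode()).hexdigest()}
            for r in rows]

rows = [{"Email": "a@example.com", "amt": 5},
        {"Email": "b@example.com", "amt": 0}]
out = hash_field(
    filter_rows(rename(rows, {"Email": "email"}), lambda r: r["amt"] > 0),
    "email",
)
# One row survives the filter; its email is now a 64-char hex digest.
```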
Execution Config
Pipelines can run in batch or streaming mode with configurable:

- Batch size and parallelism
- Partitioning strategy
- Error handling (skip, fail, dead-letter queue)
- Scheduling (cron expressions)
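The three error-handling modes can be sketched as follows. The function and parameter names are hypothetical; the point is the behavioral difference between skip, fail, and dead-letter queue.

```python
# Minimal sketch of batch execution with the three error-handling modes
# listed above. Names ("run_batch", "on_error") are illustrative only.
def run_batch(records, process, on_error="skip"):
    results, dead_letters = [], []
    for rec in records:
        try:
            results.append(process(rec))
        except Exception:
            if on_error == "fail":
                raise          # abort the whole run on the first bad record
            if on_error == "dead_letter":
                dead_letters.append(rec)  # park the record for later review
            # "skip": silently drop the bad record and continue
    return results, dead_letters

ok, dlq = run_batch(["1", "x", "3"], int, on_error="dead_letter")
# ok == [1, 3], dlq == ["x"]
```

Under "fail", the same input would raise on `"x"` and produce no output at all, which is why dead-letter queues are the usual choice for unattended scheduled runs.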