This section explains how to create a new Pipeline and import data through the selected connector. Before you begin, make sure the Data Store is properly configured, as it is required for the subsequent steps.
To add a new Pipeline, navigate to the left side panel and click on the “Add Pipeline” button.

Note: The Pipeline name must contain only alphanumeric characters. Click the tick icon to confirm and save the Pipeline details.
Enter the Pipeline name and select the confirm icon. Then select the Pipeline name to proceed with the next steps.

You will be directed to the Extract page where you can configure the settings for the source connector.

The source connector is configured in YAML format. The data source name is displayed in the Connector list panel; select the connector name and click Add Template to load a sample configuration. Multiple connectors can be added to a single Pipeline to transfer data to the destination database.


Example: Configuration File

```yaml
version: 1
encrypt_credentials: false
plugins:
  extractors:
    - name: MySQL
      connectorname: MySQL
      schemaname:
      config:
        host: localhost
        port: 3306
        username: root
        database: sakila
        password: +NQCHLZ1l/RaR1L0HK+0jg==
        drivername: mysql+pymysql
      select:
        - inventory
        - payment
    - name: PostgreSQL
      connectorname: PostgreSQL
      schemaname: test
      config:
        host: localhost
        port: 5438
        username: postgres
        database: demo
        password: +NQCHLZ1l/RaR1L0HK+0jg==
        drivername: postgresql+pg8000
      select:
        - ticket_metrics
```

After clicking the Save button, a popup window will appear. In this window, open the dropdown menu labeled Select Destination and choose the DataStoreName that was previously configured in the Data Store settings page.

Next, navigate to the “Schedule” page and select the “Run Now” option to start the import immediately.

Once the run completes, a data source will be created in “Bold BI” or “Bold Reports.”

You can check the current run status in the “Logs” tab.

isDropTable
Description:
Controls whether existing tables in the destination are dropped and recreated during pipeline execution.
When/Why to Use:
Note:
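Example (a minimal sketch only; the placement of isDropTable at the top level of the configuration file, alongside the other global flags shown in the full example later in this section, is an assumption):

```yaml
version: 1
encrypt_credentials: false
# Assumption: isDropTable is a top-level flag. When true, existing destination
# tables are dropped and recreated on each pipeline run instead of being reused.
isDropTable: true
plugins:
  extractors:
    - name: MySQL
      connectorname: MySQL
      schemaname:
      config:
        host: localhost
        port: 3306
        username: root
        database: sakila
        password: <password>
        drivername: mysql+pymysql
      select:
        - inventory
```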
direct_load_to_destination
Description:
Determines the data flow path: whether data should be moved from a staging layer (DuckDB) to the target datastore or remain in staging.
When/Why to Use:
Note: Applied across all supported datastores.
direct_target_import
Description:
Determines whether data should be moved directly to the target datastore or processed first through a staging layer before being exported (DuckDB → Parquet → destination).
When/Why to Use:
Note:
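The sketch below shows how these two flags appear together at the top level of the configuration file, mirroring the full example at the end of this section; the flag values are illustrative only:

```yaml
version: 1.0.1
encrypt_credentials: false
# With these values, data is processed through the DuckDB staging layer and
# then exported to the target datastore (DuckDB -> Parquet -> destination)
# rather than being imported into the target directly.
direct_target_import: false
direct_load_to_destination: true
plugins:
  extractors:
    - name: PostgreSQL
      connectorname: PostgreSQL
      schemaname: test
      config:
        host: localhost
        port: 5438
        username: postgres
        database: demo
        password: <password>
        drivername: postgresql+pg8000
      select:
        - ticket_metrics
```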
set_not_null_as_primary_key
Description:
Determines whether columns that contain no null values in DuckDB should be automatically treated as primary keys in the destination when no explicit primary key is defined.
If a column contains no nulls (even if it’s not explicitly defined as NOT NULL in the schema), it is automatically promoted as a primary key candidate in the destination.
When/Why to Use: Use this when importing from sources that often lack explicit primary keys (CSV, JSON, or semi-structured data). It helps auto-detect candidate keys and reduces manual configuration.
Behavior & Caution:
Note:
| Supported Datastores | Not Supported Datastores |
|---|---|
| Apache Doris | Amazon Redshift |
| Google BigQuery | Azure Synapse |
| MySQL | ClickHouse |
| Oracle | Firebolt |
| PostgreSQL | IBM DB2 |
| SQL Server | MinIO |
| Snowflake | SAP HANA Cloud |
| Teradata | |
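A brief sketch with the flag enabled, following the top-level placement used in the full example at the end of this section; the connector details are reused from the earlier sample and are illustrative only:

```yaml
version: 1.0.1
encrypt_credentials: false
# When true, any column with no null values is promoted to a primary key in
# the destination if the table has no explicit primary key defined.
set_not_null_as_primary_key: true
plugins:
  extractors:
    - name: MySQL
      connectorname: MySQL
      schemaname:
      config:
        host: localhost
        port: 3306
        username: root
        database: sakila
        password: <password>
        drivername: mysql+pymysql
      select:
        - inventory
        - payment
```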
The primary_keys property in the extractor’s properties section allows you to define custom primary keys for the tables being extracted. This property specifies which column(s) should be used as primary key(s) in the destination table.
Syntax:

```yaml
properties:
  primary_keys: '{table_name_1}: {key_1}, {key_2}; {table_name_2}: {key_3}'
```

Example:

```yaml
properties:
  primary_keys: 'customers: customerId, firstName;orders: orderId'
```

For the customers table, the primary keys are customerId and firstName (composite key). For the orders table, the primary key is orderId.
Behavior:
Primary keys are applied to the tables specified in the select list. If primary_keys is not specified, the set_not_null_as_primary_key property is checked: when it is true, columns with no null values are set as primary keys; otherwise, the tables remain without primary keys.
Important: Ensure the column names are correct and present in the table schema. Mismatched columns may cause data extraction errors.
Note:
| Supported Datastores | Not Supported Datastores |
|---|---|
| Apache Doris | Amazon Redshift |
| Google BigQuery | Azure Synapse |
| MySQL | ClickHouse |
| Oracle | Firebolt |
| PostgreSQL | IBM DB2 |
| SQL Server | MinIO |
| Snowflake | SAP HANA Cloud |
| Teradata | |
Example: MySQL Connector with Primary Keys

```yaml
version: 1.0.1
encrypt_credentials: false
direct_target_import: false
union_all_tables: true
add_dbname_column: false
direct_load_to_destination: true
use_snake_casing: true
set_not_null_as_primary_key: false
plugins:
  extractors:
    - name: MySQL
      connectorname: MySQL
      schemaname:
      config:
        host: localhost
        port: 3306
        username: root
        database: work
        password: <password>
        drivername: mysql+pymysql
      properties:
        primary_keys: 'customers: customerId, firstName;orders: orderId'
      select:
        - customers
        - orders
```