MySQL

This page describes the job and data source options for ingesting data from MySQL.

Syntax

CREATE JOB <job_name>
    [{ job_options }]
    AS COPY FROM MYSQL <connection_identifier>
       [{ source_options }]
    INTO <table_identifier>
    [ WITH EXPECTATION <exp_name> EXPECT <sql_predicate> ON VIOLATION { DROP | WARN } ];
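
For orientation, here is a minimal concrete job following this syntax. It is a sketch only: the connection name, target table, expectation name, and column used here are illustrative assumptions rather than values defined in this guide.

CREATE SYNC JOB mysql_ingest_with_expectation
   AS COPY FROM MYSQL upsolver_mysql_samples
   INTO default_glue_catalog.upsolver_samples.orders_raw_data
   -- Drop any row whose orderid is NULL instead of loading it
   WITH EXPECTATION exp_orderid_not_null EXPECT orderid IS NOT NULL ON VIOLATION DROP;

Create a job with a data quality expectation that drops rows with a NULL orderid.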


Upsolver supports MySQL 5.6+ and 8.0.x. You can connect to:

  • Generic MySQL (self-hosted)

  • Amazon RDS MySQL

  • Amazon Aurora MySQL

You must set up your MySQL database before you configure SQLake. Please follow the Setting Up MySQL instructions. Once complete, return to this page to configure the SQLake connector.

Job options

The following job properties configure the behavior of the ingestion job.

[ COLUMN_TRANSFORMATIONS = (<column> = <expression>, ...) ]
[ COMMENT = '<comment>' ]
[ COMPUTE_CLUSTER = <cluster_identifier> ]  
[ DDL_FILTERS = ('<ddl_expression>', ...) ]
[ END_AT = { NOW | <timestamp> } ]
[ EXCLUDE_COLUMNS = ( <col>, ...) ]       
[ SKIP_SNAPSHOTS = { TRUE | FALSE } ]
[ SNAPSHOT_PARALLELISM = <integer> ]
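
For illustration, a job that combines several of these options might look like the following sketch. The connection name, compute cluster, and target table shown here are assumptions for the example, not values defined elsewhere in this guide.

CREATE SYNC JOB ingest_mysql_with_job_options
   COMMENT = 'Replicate MySQL CDC data with two concurrent snapshots'
   COMPUTE_CLUSTER = my_cluster
   -- Snapshot up to two tables at the same time before streaming change events
   SNAPSHOT_PARALLELISM = 2
   AS COPY FROM MYSQL upsolver_mysql_samples
   INTO default_glue_catalog.upsolver_samples.raw_cdc_tables;

Create a job that sets a comment, runs on a specific cluster, and snapshots two tables in parallel.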


DDL_FILTERS

Type: array[string]

Default: ''

(Optional) Comma-separated list of DDL expressions for the job to ignore when they cause errors in the Debezium engine.
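
As a sketch of how this option is set (the filter expressions and identifiers below are illustrative assumptions, not recommended values), DDL_FILTERS is supplied as part of the job options:

CREATE SYNC JOB ingest_mysql_ignore_ddl
   -- Ignore these DDL expressions if they cause errors in the Debezium engine
   DDL_FILTERS = ('ANALYZE TABLE', 'OPTIMIZE TABLE')
   AS COPY FROM MYSQL upsolver_mysql_samples
   INTO default_glue_catalog.upsolver_samples.raw_cdc_tables;

Create a job that ignores errors caused by the listed DDL expressions.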

SKIP_SNAPSHOTS (editable)

Type: Boolean

Default: false

(Optional) By default, snapshots are enabled for new tables: SQLake takes a full snapshot of the source table(s) and ingests it into the staging table before it continues to listen for change events. When set to TRUE, SQLake does not take an initial snapshot and only processes change events from the time the ingestion job is created.

In the majority of cases, when you connect to your source tables, you want to take a full snapshot and ingest it as the baseline of your table. This creates a full copy of the source table in your data lake before you begin to stream the most recent change events. If you skip taking a snapshot, you will not have the historical data in the target table, only the newly added or changed rows.

Skipping the snapshot is useful when your primary database instance has crashed or become unreachable and you have failed over to the secondary. You need to re-establish the CDC connection, but you do not want to take a full snapshot because your table already contains the history; instead, you want to resume processing from the point at which the connection to the primary database went down.

SNAPSHOT_PARALLELISM

Type: integer

Default: 1

(Optional) Configures how many snapshots are performed concurrently. The more snapshots that run in parallel, the sooner all tables are streaming; however, higher parallelism also increases the load on the source database.
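
For example, the following sketch snapshots up to four tables concurrently (the connection and target identifiers are assumptions reused from the examples below):

CREATE SYNC JOB ingest_mysql_parallel_snapshots
   -- Run up to four table snapshots at the same time before streaming change events
   SNAPSHOT_PARALLELISM = 4
   AS COPY FROM MYSQL upsolver_mysql_samples
   INTO default_glue_catalog.upsolver_samples.raw_cdc_tables;

Create a job that snapshots four tables in parallel, at the cost of additional load on the source database.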

Source options

The following data source properties configure how to replicate data from MySQL.

[ TABLE_INCLUDE_LIST = ('regexFilter1', 'regexFilter2') ]
[ COLUMN_EXCLUDE_LIST = ('regexFilter1', 'regexFilter2') ]


TABLE_INCLUDE_LIST (editable)

Type: text

Default: ''

(Optional) Comma-separated list of regular expressions that match fully-qualified table identifiers of tables whose changes you want to capture. This maps to the Debezium table.include.list property.

By default, the connector captures changes in every non-system table in all databases. To match the name of a table, SQLake applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the table. It does not match substrings that might be present in a table name.

Each RegEx pattern matches against the full string databaseName.tableName, for example:

RegEx Pattern                    Results
db_name.*                        Selects all tables under the db_name database
db_name.users, db_name.items     Selects the users and items tables under the db_name database
db1.items_.*                     Selects all tables from db1 that start with items_
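
As a sketch (the db1 database, its items_ tables, and the connection and target identifiers are illustrative assumptions), a single anchored regular expression can select a whole family of tables:

CREATE SYNC JOB replicate_items_tables
   AS COPY FROM MYSQL upsolver_mysql_samples
      -- Anchored regex: matches tables such as db1.items_2023 and db1.items_archive
      TABLE_INCLUDE_LIST = ('db1.items_.*')
   INTO default_glue_catalog.upsolver_samples.raw_cdc_tables;

Create a job that ingests every table in db1 whose name starts with items_.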

COLUMN_EXCLUDE_LIST (editable)

Type: text

Default: ''

(Optional) Comma-separated list of regular expressions that match the fully-qualified names of columns to exclude from change event record values. This maps to the Debezium column.exclude.list property.

By default, the connector matches all columns of the tables listed in TABLE_INCLUDE_LIST. To match the name of a column, SQLake applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; it does not match substrings that might be present in a column name.

Each RegEx pattern matches against the full string databaseName.tableName.columnName, for example:

RegEx Pattern            Results
db.users.address_.*      Selects all of the columns that start with address_ in the users table of database db
db.*.(.*_pii)            Selects all of the columns ending with _pii across all tables within the db database

Examples

Ingest multiple tables

The following statement creates a synchronized job that replicates the data from the samples.orders and samples.customers tables into the target table in the data lake. Use the TABLE_INCLUDE_LIST source option to specify which tables to ingest.

CREATE SYNC JOB replicate_mysql_tables
   AS COPY FROM MYSQL upsolver_mysql_samples 
      TABLE_INCLUDE_LIST = ('samples.orders', 'samples.customers')
   INTO default_glue_catalog.upsolver_samples.orders_raw_data;

Create a job to ingest two tables.

Disable the initial snapshot

This example uses the SKIP_SNAPSHOTS option to instruct the job not to take an initial snapshot. Setting this value to TRUE ensures that only events arriving after the job starts are ingested; all historical data is ignored.

CREATE SYNC JOB copy_all_table_no_history_job
   SKIP_SNAPSHOTS = TRUE
   AS COPY FROM MYSQL upsolver_mysql_samples
   INTO default_glue_catalog.upsolver_samples.raw_cdc_tables;

Disable the initial snapshot of the source tables.

Exclude columns from ingestion

You can use the TABLE_INCLUDE_LIST option to specify the tables you want to ingest. In the example below, the job copies data from the customers and orders tables. However, these tables store personally identifiable information, and holding that data in the staging or target tables would violate privacy regulations. The COLUMN_EXCLUDE_LIST option excludes the columns containing sensitive information from the ingestion; in this case, the credit_card columns and the customer address columns are ignored. The same option can equally be used to drop extraneous data.

CREATE SYNC JOB copy_table_exclude_cols
   AS COPY FROM MYSQL upsolver_mysql_samples
     TABLE_INCLUDE_LIST = ('db.customers', 'db.orders')
     COLUMN_EXCLUDE_LIST = ('db.*.credit_card', 'db.customers.address_.*')
   INTO default_glue_catalog.upsolver_samples.raw_cdc_tables;

Exclude specific columns from selected tables.
