Ingest Your Microsoft SQL Server CDC Data to Snowflake
This article shows you how to ingest change data capture (CDC) data from your Microsoft SQL Server database into a staging table in the data lake, prior to transforming and loading it into Snowflake.
Prerequisites
Before you ingest data into Upsolver, you must enable change data capture on your SQL Server database. Please follow this guide if you have not already enabled CDC.
The steps for ingesting your CDC data are as follows:
Connect to SQL Server
Create a staging table to store the CDC data
Create an ingestion job
View the job status to check the snapshotting process
View the CDC data in the staging table
Connect to Snowflake
Create a transformation job
Step 1
Connect to SQL Server
The first step is to connect to the database from which you want to ingest your CDC data. You will need the connection string to your SQL Server database, along with the username and password. Ensure your login has the appropriate permissions to read from the change data capture tables.
Here's the code:
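The following is a sketch of the connection statement, using Upsolver's SQL syntax. The host, database name, and credentials are placeholders to replace with your own values; the connection name my_mssql_connection is reused in later steps.

```sql
-- Placeholder host, database, and credentials: replace with your own
CREATE MSSQL CONNECTION my_mssql_connection
  CONNECTION_STRING = 'jdbc:sqlserver://<hostname>:1433;database=<database_name>'
  USER_NAME = '<username>'
  PASSWORD = '<password>';
```

Connections in Upsolver are created once and shared, so other users in your organization can reference my_mssql_connection without re-entering credentials.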
Step 2
Create a staging table to store the CDC data
After connecting to your SQL Server source database, the next step is to create a table in the data lake for staging the CDC data.
Here's the code to create the staging table:
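A minimal sketch of the staging table definition, assuming the default glue catalog and the upsolver_samples database used in this guide:

```sql
-- No columns defined: Upsolver infers columns and types during ingestion
CREATE TABLE default_glue_catalog.upsolver_samples.orders_raw_data()
  PARTITIONED BY $event_date;
```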
Let's understand what this code does. Firstly, a table named orders_raw_data is created in the upsolver_samples database. Notice that no columns have been defined for the table: the empty parentheses instruct Upsolver to infer the columns and types during data ingestion. This is helpful if you are unsure of the data in the source and want Upsolver to manage type changes and schema updates.
Upsolver recommends partitioning by the system column $event_date, or another date column, in order to optimize your query performance. The $event_date column is added by default as a system column, along with $event_time, which will be used later when you create your transformation job. You can view all the system columns that Upsolver adds to the tables in your default glue catalog by expanding the table name in the Entities tree in Upsolver, and then expanding SYSTEM COLUMNS.
Step 3
Create an ingestion job
Next, you can create an ingestion job to copy the CDC data into your staging table.
Here's the code to create the ingestion job:
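A sketch of the ingestion job, assuming the source tables live in the dbo schema (adjust the schema-qualified names to match your database):

```sql
CREATE SYNC JOB load_raw_data_from_mssql
  COMMENT = 'Ingest CDC data from SQL Server into the staging table'
AS COPY FROM MSSQL my_mssql_connection
  -- Only ingest these three tables; ignore any other discovered tables
  TABLE_INCLUDE_LIST = ('dbo.orders', 'dbo.products', 'dbo.customers')
  -- Exclude PII: the credit_card column and all address_* columns
  COLUMN_EXCLUDE_LIST = ('dbo.orders.credit_card', 'dbo.customers.address_.*')
INTO default_glue_catalog.upsolver_samples.orders_raw_data;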
Let's take a look at what this code does. A job named load_raw_data_from_mssql is created with an optional comment that you can use to describe the purpose of your job. Other users in your organization can see comments.
An ingestion job uses the COPY FROM command to copy source data to the target, in this case, the orders_raw_data table in the AWS Glue Data Catalog, using the my_mssql_connection connection.

In this example, the TABLE_INCLUDE_LIST source option instructs the job to ingest from the orders, products, and customers tables, and ignore any other discovered tables. Because we want to exclude some PII data, the COLUMN_EXCLUDE_LIST source option tells Upsolver to ignore the credit_card column in the orders table, and all columns in the customers table that are prefixed with address_.
Step 4
View the job status
When the job is created, Upsolver takes a snapshot of each of the included tables prior to streaming. You can check the status of the snapshotting process by clicking Jobs in the main menu on the left-hand side of the Upsolver UI. Then click the job you created, e.g. load_raw_data_from_mssql, and the job page displays each table and its status, e.g. Snapshotting or Streaming. After the snapshot process has completed and all tables are streaming, you can continue to use this page to monitor and troubleshoot your job.
Step 5
View the CDC data in the staging table
During the snapshotting process, Upsolver reads the column names and types from the CDC tables in the source database and creates corresponding columns in the staging table. Alongside your CDC columns, Upsolver appends system information columns, including the source database, schema, and table names, the log sequence number (LSN) recording when the change was committed on the source database, and an $is_delete column.
Prior to creating a transformation job to load data into the target, you can check the data in the staging table.
Here's the code:
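A simple query like the following returns a sample of the ingested rows, including the appended system columns:

```sql
-- Inspect a sample of the CDC data landed in the staging table
SELECT *
FROM default_glue_catalog.upsolver_samples.orders_raw_data
LIMIT 50;
```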
Confirm your data is as expected, before moving on to the next steps of creating a transformation job to load the data into your target.
This example uses a Snowflake database as the target; however, the process for writing to other destinations is similar.
Step 6
Connect to Snowflake
The next step is to connect to your target database, in this case, Snowflake. You can create a persistent connection that is shared with other users in your organization.
Here's the code:
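A sketch of the Snowflake connection statement; the account identifier and credentials are placeholders, and the connection string includes the DEMO_DB database referenced later, so the transformation job only needs to specify the schema and table:

```sql
-- Placeholder account and credentials: replace with your own
CREATE SNOWFLAKE CONNECTION my_snowflake_connection
  CONNECTION_STRING = 'jdbc:snowflake://<account>.snowflakecomputing.com/?db=DEMO_DB'
  USER_NAME = '<username>'
  PASSWORD = '<password>';
```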
Step 7
Create a transformation job
Now that you have a connection to Snowflake, you can load your data using a transformation job. If you haven't already done so, create the target table in Snowflake.
Here's the code to create a table in Snowflake:
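The following is a sketch of the target table DDL, to run in Snowflake. Only the columns discussed in this guide are shown; the column types are assumptions, and in practice you would add a column for each source column you intend to load:

```sql
-- Run in Snowflake: target table for the transformation job
CREATE OR REPLACE TABLE DEMO_DB.SALES.ORDERS_TRANSFORMED (
  CUSTOMER_ID   VARCHAR,
  ORDER_ID      VARCHAR,
  CUSTOMER_NAME VARCHAR
);
```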
Next, create a transformation job to replicate your CDC data to the target table.
Here's the code:
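A sketch of the transformation job, using Upsolver's SQLake syntax. The source column names (orderid, customer_id, customer_firstname, customer_lastname) are assumptions; substitute the names inferred into your staging table:

```sql
CREATE SYNC JOB transform_and_insert_into_snowflake
  START_FROM = BEGINNING     -- replicate all historical data
  RUN_INTERVAL = 1 MINUTE    -- execute the job every minute
AS INSERT INTO SNOWFLAKE my_snowflake_connection.SALES.ORDERS_TRANSFORMED
  MAP_COLUMNS_BY_NAME        -- match SELECT aliases to target columns by name
  SELECT
    orderid     AS ORDER_ID,
    customer_id AS CUSTOMER_ID,
    -- Concatenate first and last names into a single column
    customer_firstname || ' ' || customer_lastname AS CUSTOMER_NAME
  FROM default_glue_catalog.upsolver_samples.orders_raw_data
  -- Only load rows that arrived within this execution's time interval
  WHERE $event_time BETWEEN run_start_time() AND run_end_time();
```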
Let's understand what this job does.
This code creates a job named transform_and_insert_into_snowflake and includes a couple of job options: START_FROM instructs the job to replicate all historical data by specifying the BEGINNING parameter, while RUN_INTERVAL tells Upsolver that this job should execute every minute (1 MINUTE).
The job inserts the data into the ORDERS_TRANSFORMED table in the SALES schema. We don't need to specify the database (DEMO_DB) here because this is included in the connection string.
The MAP_COLUMNS_BY_NAME option maps each column in the SELECT statement to the column with the same name in the target table. This means the job does not map columns by ordinal position: if you compare the order of the columns in the script that creates the table with the order of the columns in the SELECT statement of the job, you'll notice that CUSTOMER_ID and ORDER_ID are in different positions.
The SELECT statement specifies which columns will be loaded into the target, and the alias names enable column mapping by name. A string function concatenates the customer's first and last names into the CUSTOMER_NAME column.
In the WHERE clause, all rows with an $event_time between the start and end of the job's execution interval are included in the load. The $event_time system column is populated with a timestamp when the data lands in the staging table.
Conclusion
In this guide you learned how to connect to SQL Server and ingest your change data capture tables into a staging table in the data lake. You learned how to check the status of your tables during the snapshotting process, and how to view the ingested data. Then you discovered how to write a transformation job to replicate the data from your staging table to your target.
Try it yourself
To ingest your CDC data from SQL Server:
Create a connection to your CDC-enabled SQL Server database
Create a staging table and ingest your change data capture to the data lake
Connect to your target data lake or warehouse destination
Write a transformation job to replicate the data to your target
Monitor your jobs using the job status metrics