Apache Kafka
This page describes how to ingest your data from Apache Kafka.
Prerequisites
Ensure that you have a connection with the correct permissions to read from your Kafka cluster.
Additionally, if you are ingesting to the data lake, you need a metastore connection that can be used to create a staging table as well as a corresponding storage connection that can be used to store your table's underlying files.
You can create a job to ingest your data from Kafka into a staging table in the data lake or ingest directly into your target.
After completing the prerequisites, you can create your staging tables. The example below creates a table without defining columns or data types, as these will be inferred automatically by Upsolver, though you can define columns if required:
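A minimal sketch of such a staging table, assuming a hypothetical Glue catalog, database, and table name (replace these with your own):

```sql
-- Hypothetical identifiers: adjust the catalog, database, and table names to your environment.
-- The empty column list lets Upsolver infer columns and data types automatically.
CREATE TABLE default_glue_catalog.upsolver_samples.orders_raw_data()
    PARTITIONED BY $event_date;
```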
Upsolver recommends partitioning by the system column $event_date or another date column within the data to optimize your query performance.
Next, you can create an ingestion job as follows:
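A sketch of such a job, assuming a hypothetical Kafka connection named my_kafka_connection, a topic named orders, and the staging table location shown; adjust these to match your setup:

```sql
CREATE SYNC JOB ingest_kafka_to_staging
    CONTENT_TYPE = AUTO  -- let Upsolver detect the format, or specify it explicitly, e.g. JSON
AS COPY FROM KAFKA my_kafka_connection
    TOPIC = 'orders'
INTO default_glue_catalog.upsolver_samples.orders_raw_data;
```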
Note that multiple ingestion jobs can write to the same table, resulting in a final table that contains a UNION ALL
of all data copied into that table. This means that any duplicate rows that are written are not removed and the column list may expand if new columns are detected.
This may not be your intended behavior, so ensure you are writing to the correct table before running your job.
The example above only uses a small subset of all job options available when reading from Kafka. Depending on your use case, there may be other options you want to configure. For instance, you may want to specify the compression of your source data rather than have it be auto-detected.
Directly ingesting your data enables you to copy your data straight into the target system, bypassing the need for a staging table. The syntax and job options are identical to ingesting into a staging table; however, the target connector differs:
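For example, a sketch of a job that ingests directly into a hypothetical Snowflake target; the connection, database, schema, and table names are placeholders, and the exact target clause depends on your connector:

```sql
CREATE SYNC JOB ingest_kafka_to_snowflake
    CONTENT_TYPE = AUTO
AS COPY FROM KAFKA my_kafka_connection
    TOPIC = 'orders'
-- The target clause references the target connection instead of a data lake table.
INTO SNOWFLAKE my_snowflake_connection.demo_db.orders;
```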
Transformations can be applied to your ingestion job to correct issues, exclude columns, or mask data before it lands in the target. Furthermore, you can use expectations to define data quality rules on your data stream and take appropriate action.
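A sketch combining a masking transformation with an expectation, using hypothetical names throughout; the column names, masking function, and quality rule are illustrative only:

```sql
CREATE SYNC JOB ingest_kafka_with_quality_checks
    -- Mask PII before it lands in the target.
    COLUMN_TRANSFORMATIONS = (customer_email = MD5(customer_email))
AS COPY FROM KAFKA my_kafka_connection
    TOPIC = 'orders'
INTO default_glue_catalog.upsolver_samples.orders_raw_data
-- Drop rows that violate the data quality rule rather than loading them.
WITH EXPECTATION exp_orderid_not_null
    EXPECT orderid IS NOT NULL
    ON VIOLATION DROP;
```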
Some job options are considered mutable, enabling you to run a SQL command to alter an existing ingestion job rather than create a new job. The job options apply equally to jobs that ingest into the data lake or directly to the target, and the syntax to alter a job is identical.
For example, take the job we created earlier:
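Assuming, for illustration, a staging ingestion job defined with hypothetical names like the following:

```sql
CREATE SYNC JOB ingest_kafka_to_staging
AS COPY FROM KAFKA my_kafka_connection
    TOPIC = 'orders'
INTO default_glue_catalog.upsolver_samples.orders_raw_data;
```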
If you want to keep the job as is, but only change the cluster that is running the job, execute the following command:
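Assuming the hypothetical job and cluster names below, an ALTER statement along these lines changes only the compute cluster:

```sql
-- COMPUTE_CLUSTER is a mutable job option, so the job does not need to be recreated.
ALTER JOB ingest_kafka_to_staging
    SET COMPUTE_CLUSTER = "my_new_cluster";
```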
Note that some options, such as COMPRESSION,
cannot be altered once the job has been created.
If you no longer need a job, you can easily drop it using the following SQL command. This applies to jobs that ingest into the data lake and directly into the target:
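For example, with a hypothetical job name:

```sql
DROP JOB ingest_kafka_to_staging;
```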
Learn More
To learn about the available job options, see the jobs page, which describes each option in detail and includes examples.
To check which job options are mutable, see the jobs page.