Pipeline Basics

Read this guide to understand how pipelines work in Upsolver.

Real-time ingestion and analytics in the data lake

Most organizations manage data that is continuously updated in real time, such as clickstream events collected from websites to understand user interaction and improve personalization. This is called streaming data.

But companies also process and analyze data in large batches -- for example, enriching user data with third-party information. This is called batch data.

Both batch and streaming are integral to a company's data architecture. In this section, we illustrate how to implement both streaming and batch data analytics in the Upsolver data lake.

Below is a simple diagram that shows a high-level architecture of a data pipeline you can use to implement data analytics:

How does Upsolver merge streaming and batch processing?

Upsolver enables you to ingest both streaming and batch data with just one tool, using only familiar SQL syntax. Let's zoom in to understand how Upsolver manages data.

Now, let's look at the core components of Upsolver:

Connectors

Connections store the locations and credentials Upsolver uses to read from your sources and write to your targets; you create them once and reference them by name in your jobs. For example:

-- Create a connection to the source data in the S3 raw zone
CREATE S3 CONNECTION raw_s3_zone
    AWS_ROLE = 'arn:aws:iam::111111111111:role/raw_zone_role';
-- Here, you create an S3 connection named "raw_s3_zone", authenticating with the specified IAM role.

-- Create a connection to the data target on the Snowflake data warehouse
CREATE SNOWFLAKE CONNECTION prod_snowflake_connection
    CONNECTION_STRING = 'jdbc:snowflake://ACCOUNT_WITH_REGION.snowflakecomputing.com?db=DB_NAME&warehouse=WAREHOUSE_NAME&role=ROLE_NAME'
    USER_NAME = 'username'
    PASSWORD = 'password'
    MAX_CONCURRENT_CONNECTIONS = 10;
-- Here, you create a Snowflake connection named "prod_snowflake_connection",
-- specifying a username, a password, and the maximum number of concurrent connections.

Ingestion jobs

Ingestion jobs copy raw data from a connected source and stage it in a table in the data lake. Here's an example:

-- Create a job that ingests and stages the source data in the data lake
CREATE JOB ingest_raw_data
    COMPUTE_CLUSTER = 'ProductionCluster'
    CREATE_TABLE_IF_MISSING = TRUE
    STARTING_FROM = NOW
AS COPY FROM S3 'raw_s3_zone'
    BUCKET = 'company_data_lake'
    PREFIX = 'raw/sales_orders'
INTO catalog.prod_db.sales_orders_staging;
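
Once the job is running, a quick way to verify the staged data is to query the staging table directly in the data lake. The following is a minimal sketch of such a check, assuming the staging table above is available for ad-hoc queries in your environment:

-- Hypothetical sanity check: preview a few rows from the staging table
-- created by the ingestion job above.
SELECT *
FROM catalog.prod_db.sales_orders_staging
LIMIT 10;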

Transformation jobs

Transformation jobs read the staged data, apply SQL transformations such as aggregations, and load the results into a target such as Snowflake. For example:

-- Create a job to transform the staged data and load it into Snowflake
CREATE JOB transform_and_load
    COMPUTE_CLUSTER = 'ProductionCluster'
    EXECUTION_INTERVAL = 1 MINUTE
    STARTING_FROM = NOW
AS INSERT INTO SNOWFLAKE 'prod_snowflake_connection'
    TABLE_PATH = 'PROD'.'SALE_ORDERS_REPORT'
    SELECT
        SUM(TO_NUMBER(nettotal)) AS sum_total_orders,
        AVG(TO_NUMBER(nettotal)) AS avg_total_orders,
        company_name
    FROM catalog.prod_db.sales_orders_staging
    WHERE $time BETWEEN execution_start_time() AND execution_end_time()
    GROUP BY company_name;
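
Because the WHERE clause restricts each run to rows whose $time falls between execution_start_time() and execution_end_time(), every execution interval inserts one set of per-company aggregates for that window into Snowflake. To spot-check the output, you could run a query like the following in Snowflake (a hypothetical check against the target table defined above):

-- Hypothetical spot check, run in Snowflake against the target table above.
SELECT company_name, sum_total_orders, avg_total_orders
FROM PROD.SALE_ORDERS_REPORT
ORDER BY company_name;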

Benefits of Upsolver Pipelines

1. Always on

Upsolver pipelines are always on. One of the main benefits of a streaming-first design is that pipelines do not need external scheduling or orchestration. This reduces the complexity of deploying and maintaining pipelines. Instead, Upsolver infers the necessary transformations and task progression from the SQL you write. There are no directed acyclic graphs (DAGs) to create and maintain, and you don't need a third-party orchestration tool such as Dagster, Astronomer, or Apache Airflow.

2. Observability and data quality

If you can understand the source and output data -- its structure, schema, data types, value distribution, and whether key fields contain null values -- then you can deliver reliable, fresh, and consistent datasets. Upsolver job monitoring provides graphs and metrics that indicate status at a glance. Upsolver also exposes system tables that record the tasks executed at each stage of the pipeline (an example query follows the list below), such as:

  • Reading from a source system

  • Writing to the staging table

  • Transforming the data

  • Maintaining it in the data lake
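
For example, here is a minimal sketch of such a query, assuming a system table named system.monitoring.tasks with job name, stage, status, and start-time columns (the exact table and column names depend on your Upsolver version, so treat them as placeholders):

-- Hypothetical query: list recent tasks for one job to see each pipeline stage.
-- The table and column names are assumptions; check the system tables exposed
-- in your environment for the exact schema.
SELECT job_name,
       stage,
       status,
       start_time
FROM system.monitoring.tasks
WHERE job_name = 'transform_and_load'
ORDER BY start_time DESC
LIMIT 50;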


