Azure Blob storage data source

This article introduces how Upsolver works with Azure Blob storage and provides a guide on how to create an Azure Blob storage data source in Upsolver.


How Upsolver works with Azure Blob storage

Upsolver can read events directly from your Azure Blob storage, assuming events are partitioned by event date (this partitioning defines the folder structure in the cloud storage).

Upsolver auto-detects the date pattern in the file paths and shows you a preview of the top files that will be ingested, along with their corresponding dates. It also auto-detects the point in time to start ingesting from.

Create an Azure Blob storage data source:

1. From the Data Sources page, click New.

2. Select Azure Blob Storage.

As you fill in the fields, the top matching files with their dates appear in the Preview pane on the right.

An icon next to each file shows whether it is recognized and will be included in the data source ingestion, or whether it will be excluded.

You can click Refresh to check on their status as you update fields.

3. Select the Azure Blob storage connection to read from (or create a new one).

4. (Optional) Enter the path to the data folder in Azure Blob storage (e.g. billing data). If this is not specified, the data is assumed to be at the top level of the hierarchy.

Upsolver adds an implicit / before and after the folder.

For example, if your container is a and the folder is b, Upsolver looks for paths of the form a/b/yyyy-MM-dd.

The full path displayed is read-only. You can use it to review the path Upsolver uses to read from Azure Blob storage.

5. Select or enter the date pattern of the files to be ingested. This is auto-detected but can be modified if required.

The following characters are valid: yyyy MM dd HH mm

Any other letters or characters must be wrapped in single quotes (e.g. 'year='yyyy'month='MM'day='dd'hour='HH'minutes='mm).

It is also possible to specify the date as follows: d-M-yy
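To make the pattern syntax concrete, here is a purely illustrative Python sketch (not Upsolver code; the timestamp and the billing-data folder are invented for the example) that maps the two patterns above to the dated folder names they describe, using rough strftime equivalents of yyyy, MM, dd, HH, and mm:

```python
from datetime import datetime, timezone

# Illustrative sketch only: approximate how a date pattern resolves to a dated
# folder name, using strftime equivalents (yyyy -> %Y, MM -> %m, dd -> %d,
# HH -> %H, mm -> %M). The timestamp and "billing-data" prefix are invented.
event_time = datetime(2024, 5, 17, 9, 30, tzinfo=timezone.utc)

patterns = {
    "yyyy-MM-dd": "%Y-%m-%d",
    "'year='yyyy'month='MM'day='dd'hour='HH'minutes='mm":
        "year=%Ymonth=%mday=%dhour=%Hminutes=%M",
}

for date_pattern, strftime_equivalent in patterns.items():
    folder = event_time.strftime(strftime_equivalent)
    print(f"{date_pattern}  ->  billing-data/{folder}/")
```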

6. (Optional) Select a time to start ingesting files from. This is usually auto-detected, but if there is no preview, the date cannot be established and it defaults to today's date. In this case, set the required start date.

7. (Optional) Under File Name Pattern, select the type of pattern to use (All, Starts With, Ends With, Regular Expression, or Glob Expression) and then enter the file name pattern that specifies the file set to ingest.

If all the files in the specified folders are relevant, select All.

The pattern given is matched against the file path starting from the full path shown above.
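The sketch below is a rough illustration of how the different pattern types narrow down the file set; the file paths and patterns are invented for the example, and Python's fnmatch and re modules only approximate Upsolver's own matching:

```python
import fnmatch
import re

# Illustrative sketch only: invented file paths and patterns showing how the
# different File Name Pattern types narrow down the set of ingested files.
paths = [
    "billing-data/2024-05-17/events_00001.json.gz",
    "billing-data/2024-05-17/events_00002.csv",
    "billing-data/2024-05-17/summary.txt",
]

starts_with = "billing-data/2024-05-17/events_"
glob_expression = "*/events_*.json.gz"
regular_expression = re.compile(r".*/events_\d+\.(json\.gz|csv)$")

for path in paths:
    matches = []
    if path.startswith(starts_with):
        matches.append("Starts With")
    if fnmatch.fnmatch(path, glob_expression):
        matches.append("Glob Expression")
    if regular_expression.match(path):
        matches.append("Regular Expression")
    print(f"{path} -> {', '.join(matches) or 'not ingested'}")
```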

8. Select the content format. This is typically auto-detected, but you can manually select a format.

If necessary, configure the content format options. See Data formats.

9. From the dropdown, select a compute cluster (or create a new one) to run the calculation on.

A warning may appear. For POCs (proof of concept), this warning can be ignored.

For production environments, the warning indicates that if you run your task retroactively (Start Ingestion From), your compute cluster will process a burst of additional tasks, possibly causing delays in the outputs and lookup tables running on this cluster.

To prevent this, go to the Clusters page to edit your cluster and set the Additional Processing Units For Replay to a number greater than 0.

10. Select a target storage connection (or create a new one) where the data read will be stored (output storage).

11. (Optional) In addition to the streaming data, you may have initial data. In this case, to list the relevant data, enter a file name prefix (e.g. for DMS loads, enter LOAD) under Initial Load Configuration.

12. (Optional) Under Initial Load Configuration, enter a regex pattern to select the required files.
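As a sketch of how such a prefix and regex single out the initial-load files (only the LOAD prefix comes from the DMS example above; the file names and the regex are invented for illustration):

```python
import re

# Illustrative sketch only: separate initial-load files (e.g. AWS DMS full-load
# files, which start with "LOAD") from ongoing streaming files. The file names
# and the regex are invented for this example.
files = ["LOAD00000001.csv", "LOAD00000002.csv", "20240517-093000123.csv"]

initial_load_prefix = "LOAD"
initial_load_regex = re.compile(r"^LOAD\d+\.csv$")

initial_load_files = [
    f for f in files
    if f.startswith(initial_load_prefix) and initial_load_regex.match(f)
]
streaming_files = [f for f in files if f not in initial_load_files]

print("initial load:", initial_load_files)
print("streaming:", streaming_files)
```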

13. Name this data source.

14. Click Continue. A preview of the data will appear.

15. For CSV, select a Header.

16. Click Continue again.

Parsed data samples will appear and you can review the Sample Size, Parsed Successfully, and # of Errors.

17. (Optional) If there are any errors, click Back to change the settings as required.

18. Click Create.

You can now use your Azure Blob storage data source.
