
Install the Upsolver CLI

This page describes how to install and use the Upsolver command line interface (CLI).

Upsolver CLI is a command-line client that provides a terminal-based interactive shell for running queries. It communicates with the Upsolver API service over HTTP to execute SQL statements and perform all DDL and DML operations, including loading data as a continuous stream into various data destinations.

Install Upsolver CLI

Install using Brew (MacOS only)

brew tap upsolver/cli https://github.com/Upsolver/cli
brew install upsolver-cli

Install using pip

Python 3.8+ is required for installing the Upsolver CLI.

Make sure you have Python 3.8 or later installed, then use pip to install the CLI tool:

pip3 install -U upsolver-cli

You can also grab the latest release archive from https://github.com/Upsolver/cli/releases
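If you would rather not install into your system Python, a virtual environment keeps the CLI isolated from other packages. A minimal sketch (the venv path is illustrative):

```shell
# Create an isolated environment for the CLI (path is illustrative)
python3 -m venv /tmp/upsolver-venv
. /tmp/upsolver-venv/bin/activate

# Confirm the interpreter meets the 3.8+ requirement
python --version

# Then install the CLI inside the environment:
# pip install -U upsolver-cli
```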

Use Upsolver CLI

Help

For a complete description of any command and its flags, run it without arguments or with the --help flag.

Examples

> upsolver
# Help will be displayed
> upsolver --help
# Help will be displayed
> upsolver execute
# Help will be displayed
> upsolver execute --help
# Help will be displayed

Execute

To execute a SQL statement, provide an API token along with either an inline command or a file to run.

Examples

Run an inline command:

> upsolver execute \
    -t mytoken \
    -c 'CREATE TABLE default_glue_catalog.upsolver_samples.orders_raw_data()'
# Result will be displayed

Run command from a file:

> upsolver execute -t mytoken -f create_table_command.usql
# Result will be displayed
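The file passed with -f holds the SQL text to run. A sketch of preparing one (the path is illustrative, and the final command assumes a valid API token):

```shell
# Put the statement in a .usql file (path is illustrative)
cat > /tmp/create_table_command.usql <<'EOF'
CREATE TABLE default_glue_catalog.upsolver_samples.orders_raw_data()
EOF

# Then run it (requires a valid API token):
# upsolver execute -t mytoken -f /tmp/create_table_command.usql
```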

If you want to avoid passing the token (and perhaps other optional flags) on every command, you can create a profile with the configure command.

Configure

You can use the configure command to save a configuration file, so you won't have to provide a token and other optional flags to the execute command every time.

upsolver configure -t <token>

The configuration file is saved to ~/.upsolver/config.

Examples

Create a default profile with an API token and CSV as the output format:

> upsolver configure -t mytoken -o csv
> cat ~/.upsolver/config
[profile]
token = mytoken
output = CSV

Create another profile with an API URL and a different token:

> upsolver -p anotherprofile configure -t anothertoken -u https://specificapi.upsolver.com
> cat ~/.upsolver/config
[profile]
token = mytoken
output = CSV

[profile.anotherprofile]
token = anothertoken
base_url = https://specificapi.upsolver.com
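Because the config is a plain INI-style text file, you can inspect or script against it with ordinary shell tools. A sketch that recreates the layout above in a scratch file and extracts one profile's token (paths and values are illustrative):

```shell
# Recreate the two-profile layout in a scratch copy (values are illustrative)
cat > /tmp/upsolver-config <<'EOF'
[profile]
token = mytoken
output = CSV

[profile.anotherprofile]
token = anothertoken
base_url = https://specificapi.upsolver.com
EOF

# Pull the token for a named profile
awk '/^\[profile.anotherprofile\]/{s=1;next} /^\[/{s=0} s && /^token/{print $3}' /tmp/upsolver-config
# prints: anothertoken
```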

Using Profiles

You can run the execute command with a different profile by passing the -p or --profile flag to the upsolver command.

Examples

upsolver -p anotherprofile execute -c 'CREATE TABLE default_glue_catalog.upsolver_samples.orders_raw_data()'

Without the -p flag, the default profile is used:

> upsolver execute -c 'CREATE TABLE default_glue_catalog.upsolver_samples.orders_raw_data()'

Last updated 11 months ago