Private VPC

This article walks through what it means to deploy Upsolver to your own VPC and how to do so.


Last updated 4 years ago


You can deploy your cluster and API servers to your own VPC in AWS.

Private VPC mode gives you full security control over the data that Upsolver handles and ensures the data being processed never leaves your AWS account. Upsolver's global API server will not have access to your data in this deployment mode; therefore, this mode requires running an API server in the VPC.

If you prefer Upsolver to manage and host your servers, you can use Upsolver's managed VPC instead.

If you have already integrated your account with AWS using the Upsolver VPC, you may now want to switch your servers to run in your own VPC.

To do this, add a Spotinst Private VPC connection.

Requirements for integration

In order to integrate your Upsolver account with AWS, you must have an AWS user with permissions to:

  • Run CloudFormation scripts

  • Create and manage user roles

  • Create and manage S3 buckets

  • Create VPCs and related resources (ACLs, subnets, and so on)
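As a rough sketch, the permission areas above could be expressed as an IAM policy document like the following. The action names and the wide-open resource scope here are illustrative only; the exact actions Upsolver's CloudFormation stack requires may differ, so treat this as a sketch of the categories rather than a drop-in policy.

```python
import json

# Illustrative IAM policy covering the four permission areas listed above.
# Action names are real IAM actions, but the selection and "Resource": "*"
# scoping are assumptions -- not Upsolver's exact required policy.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RunCloudFormation",
            "Effect": "Allow",
            "Action": ["cloudformation:*"],
            "Resource": "*",
        },
        {
            "Sid": "ManageUserRoles",
            "Effect": "Allow",
            "Action": ["iam:CreateRole", "iam:PutRolePolicy", "iam:PassRole"],
            "Resource": "*",
        },
        {
            "Sid": "ManageS3Buckets",
            "Effect": "Allow",
            "Action": ["s3:CreateBucket", "s3:PutBucketPolicy"],
            "Resource": "*",
        },
        {
            "Sid": "ManageVpcResources",
            "Effect": "Allow",
            "Action": ["ec2:CreateVpc", "ec2:CreateSubnet", "ec2:CreateNetworkAcl"],
            "Resource": "*",
        },
    ],
}

print(json.dumps(policy, indent=2))
```

In practice you would scope the `Resource` entries down and attach the policy to the AWS user performing the integration.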

See: Prerequisites for AWS deployment

Integrate with AWS using your own VPC

You must integrate Upsolver into your AWS account to provide read and write access to the data to be processed.

The following resources are created when you integrate with AWS using the My VPC option:

  • A role used by your EC2 servers during startup; it also provides data access.

  • A role used by Upsolver to create and manage its servers in your VPC; it provides no data access, and your data remains inaccessible from outside the account.

  • A bucket used as the default output location for Upsolver outputs.

To integrate AWS with your own VPC:

1. Log in to your Upsolver account. You will be prompted to integrate your account with AWS.

2. In the message displayed, click the click here link.

3. Decide whether or not to grant Upsolver permission to:

  • Create and manage Athena tables for Athena outputs created in Upsolver.

  • Retrieve a list of your Kinesis streams in order to create data sources.

4. Check Create a Dedicated Upsolver User. This option must be selected.

Note: When Upsolver creates a role in your AWS account it will contain the required permissions and any optional permissions selected in the previous step.

5. Check Create a Dedicated Upsolver Bucket to create a new bucket in your account. This will be the default bucket that intermediate files and Upsolver outputs are written to.

6. Select the cluster deployment method. To use your own VPC in your AWS account, select My VPC, My VPC with Spotinst Account, or My Existing VPC.

Note: This only affects where the processing servers are deployed, as your data is never stored on the Upsolver AWS account and never leaves your AWS bucket.

7. Under VPC CIDR, enter the range of IPv4 addresses for your VPC in CIDR block format. Leave the default range if you don't need to peer the VPC.

8. In the Ingress Traffic CIDR List field, enter a comma-separated list of CIDR ranges from which the local API server will be accessible.

By default, this field contains 0.0.0.0/0 (meaning the API server is accessible from any IP address) and your current IP address.

You should add your office IP and any other IP addresses from which you want to use Upsolver.
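Before submitting the form, you can sanity-check the values for steps 7 and 8 locally. This sketch uses Python's standard ipaddress module; the VPC CIDR, ingress list, and office IP below are placeholders for your own values.

```python
import ipaddress

# Placeholder values -- substitute your own VPC CIDR, ingress list, and office IP.
vpc_cidr = "10.0.0.0/16"
ingress_cidrs = ["10.0.0.0/16", "203.0.113.25/32"]  # the comma-separated form field
office_ip = ipaddress.ip_address("203.0.113.25")

# The VPC CIDR must parse as a valid IPv4 network.
vpc_net = ipaddress.ip_network(vpc_cidr)

# Every ingress entry must parse, and at least one should cover your office IP,
# or you won't be able to reach the local API server from the office.
ingress_nets = [ipaddress.ip_network(c) for c in ingress_cidrs]
assert any(office_ip in net for net in ingress_nets), "office IP not covered"

print(f"VPC {vpc_net} holds {vpc_net.num_addresses} addresses; ingress OK")
```

ipaddress.ip_network raises ValueError on a malformed block, so a typo in either field fails fast instead of surfacing later as an unreachable API server.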

9. Check Allow Upsolver Access to add Upsolver's office IP address to the ingress permissions for the VPC. This will allow Upsolver to easily assist you with any issues or questions.

10. If you selected My VPC with Spotinst Account as your cluster deployment type, fill in your Spotinst token and account ID.

11. If you selected My Existing VPC as your cluster deployment type, configure the following options:

  • Enter the ID of the VPC you wish to use.

  • Fill in the list of subnets to deploy Upsolver servers in by entering their availability zone and subnet ID. You can also import this information from a CSV file. Note: The subnets must have outbound internet access.

  • If you're using Spotinst, you can optionally enter your Spotinst token and account ID to give Upsolver permission to create machines in your AWS environment.

12. Select the region in which to create the VPC where Upsolver servers will run in your account.

13. Click Continue.

14. Click Launch Integration; you will be directed to a CloudFormation stack page to create the necessary resources.

15. At the bottom of the page, check the I acknowledge statement and then click Create stack.

16. Wait for the stack creation status to change from CREATE_IN_PROGRESS to CREATE_COMPLETE.
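Step 16 is a simple poll loop, which matters if you script the deployment. The sketch below shows the waiting logic with a hypothetical get_status callable standing in for a CloudFormation describe-stacks call; in practice you would back it with the AWS CLI or an SDK.

```python
import time

def wait_for_stack(get_status, poll_seconds=15, max_polls=120):
    """Poll a stack-status callable until the stack settles.

    get_status is any zero-argument callable returning a CloudFormation
    stack status string (hypothetical here; back it with the AWS CLI or SDK).
    """
    for _ in range(max_polls):
        status = get_status()
        if status == "CREATE_COMPLETE":
            return status
        if status in ("CREATE_FAILED", "ROLLBACK_IN_PROGRESS", "ROLLBACK_COMPLETE"):
            raise RuntimeError(f"stack creation failed: {status}")
        time.sleep(poll_seconds)  # still CREATE_IN_PROGRESS: keep waiting
    raise TimeoutError("stack did not finish creating in time")

# Simulated status sequence for illustration:
statuses = iter(["CREATE_IN_PROGRESS", "CREATE_IN_PROGRESS", "CREATE_COMPLETE"])
print(wait_for_stack(lambda: next(statuses), poll_seconds=0))
```

Raising on the rollback states rather than looping forever is the important part: a failed stack never reaches CREATE_COMPLETE, and the CloudFormation console's Events tab will show which resource caused the rollback.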

17. Return to the Upsolver tab and wait for the integration window to indicate that the integration is complete (The Relaunch Integration button will change to Done).

18. Click Done.

You can now work in Upsolver.

See AWS role permissions for a detailed description of the permissions granted to the roles.
