Amazon AppFlow data source

To help you get started with Upsolver, you can try it out for free. Sign up for an Upsolver Dedicated Compute subscription to use Amazon AppFlow; it gives you free Upsolver units (UUs), units of processing capability per hour based on VM instance type.

You can also use Upsolver through the AWS Marketplace. All of these options let you ingest data to Amazon S3, perform transformations with simple SQL, and output to your favorite database or analytics tools in minutes.

What is Amazon AppFlow

Amazon AppFlow is a fully managed integration service that enables you to securely transfer data between Software-as-a-Service (SaaS) applications like Salesforce, Marketo, Slack, and ServiceNow, and AWS services like Amazon Athena, Amazon S3 and Amazon Redshift, in just a few clicks.
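
If you want to confirm which connectors are available in your AWS account before setting anything up, the minimal boto3 sketch below lists them. This is an optional illustration, not part of the Upsolver setup, and it assumes boto3 is installed and your AWS credentials are configured.

```python
# List the connector types AppFlow supports in this account and region,
# e.g. Salesforce, Marketo, Slack, ServiceNow.
import boto3

appflow = boto3.client("appflow")
response = appflow.describe_connectors()
for connector_type in response["connectorConfigurations"]:
    print(connector_type)
```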

Create an Amazon AppFlow data source connection

This example reads data from Google Analytics using Amazon AppFlow and uses Upsolver for data transformation.

1. Navigate to DATA SOURCE > NEW.

2. Find Amazon AppFlow and click on SELECT.

3. Click on Create your first Amazon AppFlow Connection under APPFLOW CONNECTION.

Make sure you're logged into the AWS account running AppFlow.

4. Define the connection integration information. Click on LAUNCH INTEGRATION.

  • NAME is the name of the connection between Upsolver and Amazon AppFlow. You will use this connection for the data coming from Amazon AppFlow to Upsolver. You can create multiple data sources using the same connection.

  • BUCKET NAME is the S3 bucket to which AppFlow sends its data for Upsolver to process. In the example shown below, the BUCKET NAME is googleanalytics. The integration will automatically create an Amazon S3 bucket named upsolver-appflow-googleanalytics. This is the bucket that you will select when you set up your flow later on.

5. Scroll down on the CloudFormation page and check the I acknowledge box and click on Create Stack.

6. Wait until the integration is complete. (You can also check the stack status programmatically; see the sketch after these steps.)

7. Navigate back to Upsolver and click on DONE.
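
If you'd rather verify the integration from code than from the console, here is a minimal boto3 sketch under the assumptions of this example: the stack name below is hypothetical (use the name shown in your CloudFormation console), and the bucket name is the upsolver-appflow-googleanalytics bucket created above.

```python
# Optional check: confirm the integration stack finished and the
# auto-created bucket exists. Assumes AWS credentials are configured
# for the account running AppFlow.
import boto3

cloudformation = boto3.client("cloudformation")
s3 = boto3.client("s3")

# The stack name is hypothetical; substitute the one from your console.
stack = cloudformation.describe_stacks(StackName="upsolver-appflow-integration")
print(stack["Stacks"][0]["StackStatus"])  # expect CREATE_COMPLETE

# head_bucket raises a ClientError if the bucket does not exist.
s3.head_bucket(Bucket="upsolver-appflow-googleanalytics")
print("Bucket upsolver-appflow-googleanalytics exists")
```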

Create a flow on Amazon AppFlow

1. Navigate to your AppFlow service in your AWS environment and click on Create flow.

2. Give the flow a name and description. This example collects data from Google Analytics. Click on Next.

3. You may choose any supported source. For this example, choose Google Analytics as your source and click on Connect.

4. Enter your Google Analytics information, give the connection a name and click on Continue.

5. Fill out the required fields for your source. For Google Analytics, you will see AppFlow requesting access to your Google account.

6. Choose Upsolver as your destination. In step 4 of the first section, Create an Amazon AppFlow data source connection, you created an AppFlow connection with the BUCKET NAME googleanalytics. Behind the scenes, it automatically created an Amazon S3 bucket called upsolver-appflow-googleanalytics. This bucket will show up as an option when you create an Upsolver destination. Select the upsolver-appflow-googleanalytics bucket.

7. Choose your Flow trigger (how frequently you want the flow to run) and click on Next.

8. Check the fields that you want to work with and click on Map fields directly.

9. Verify that all needed fields from Google Analytics are selected and click on Next.

10. Click on Next and skip the filters. We can filter data with Upsolver if needed.

11. Review your Flow definition and click on Create flow.

12. Click on Run flow and verify that the flow ran successfully under the Run history tab. (You can also trigger and check the flow programmatically; see the sketch below.)
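
As an alternative to clicking Run flow in the console, the boto3 sketch below starts an on-demand run and reads its run history. The flow name is hypothetical; use the name you gave your flow in step 2.

```python
# Optional: trigger the flow and check its run history from code.
import boto3

appflow = boto3.client("appflow")

# Start an on-demand run. The flow name is hypothetical.
appflow.start_flow(flowName="googleanalytics-to-upsolver")

# Each execution record carries an executionStatus (InProgress,
# Successful, or Error), mirroring the Run history tab.
records = appflow.describe_flow_execution_records(
    flowName="googleanalytics-to-upsolver"
)
for record in records["flowExecutions"]:
    print(record["executionId"], record["executionStatus"])
```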

Create an AppFlow data source

1. Navigate back to the data source setup page by clicking on DATA SOURCE > NEW.

2. Find Amazon AppFlow and click on SELECT.

3. Fill out the information, choose the connection that you created in the first section, and click on CONTINUE.

4. Verify the sample data coming from the Google Analytics flow that you created in the previous section and click on CREATE. (If no sample data appears, you can check whether AppFlow delivered any files; see the sketch below.)
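
If no sample data appears, the minimal boto3 sketch below checks whether AppFlow has delivered any files to the integration bucket. It assumes the upsolver-appflow-googleanalytics bucket from this example; substitute your own.

```python
# Optional troubleshooting: list the newest objects AppFlow wrote
# to the integration bucket.
import boto3

s3 = boto3.client("s3")
response = s3.list_objects_v2(
    Bucket="upsolver-appflow-googleanalytics", MaxKeys=10
)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"], obj["LastModified"])
```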

If you don't know your Google Analytics Client ID and Client Secret, please follow these steps to set them up.

Congratulations! You have created an Upsolver Amazon AppFlow data source. Now you can transform or enrich the data using any built-in data outputs.
