Use Upsolver to index less data into Splunk

This article provides a guide on how to use Upsolver to index less data into Splunk.

This guide provides an example of how to index less data into Splunk and thereby reduce your Splunk cost.

Before we begin, you should have already deployed Upsolver and created data sources.

Upsolver architecture for various data structures

The Upsolver architecture frees your data from vendor lock-in: the same data can be analyzed in many ways, including with SQL engines, machine learning tools, and search. Many Upsolver users use Athena to run SQL on their log data.
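
For example, once the raw logs land in Amazon S3 and are cataloged, an analyst can query them with Athena without indexing anything into Splunk. The query below is only an illustrative sketch; the vpc_flow_logs table and its columns are hypothetical and depend on how your data is cataloged.

Sample Athena SQL (illustrative)
-- Count flow-log events per action for one day of data.
-- Table and column names are hypothetical.
SELECT action,
    COUNT(*) AS events
FROM vpc_flow_logs
WHERE partition_date = '2021-01-01'
GROUP BY action
ORDER BY events DESC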

Create an Amazon S3 data output

1. Click on Outputs on the left and then click New in the upper right hand corner.

2. Select Amazon S3 as your data output.

3. Give the data output a name and define your output format (tabular or hierarchical).

4. Select your data sources. This guide uses AWS VPC Flow Logs.

5. Click Next to continue.

Use the UI or SQL to aggregate data before sending to Splunk

1. Select the SQL window from the upper right hand corner. Keep in mind that everything you do in the UI is reflected in the SQL and vice versa.

2. The sample SQL below aggregates many raw events into one summary row per group for each period of time, reducing the amount of data sent to Splunk. (A filtered variant appears after these steps.)

Sample SQL
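-- Group raw flow-log events by account and action, summing bytes and
-- packets; each output interval then emits one summary row per group
-- instead of every raw event.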
SELECT data."account-id" AS ACCOUNT_ID, 
    data.action AS action, 
    SUM(TO_NUMBER(data.bytes)) AS SUM_BYTES, 
    SUM(TO_NUMBER(data.packets)) AS SUM_PACKETS, 
    COUNT(*) AS count
FROM "bhopp-vpc-flowlogs"
GROUP BY data."account-id", data.action

3. Click on Properties on the upper right hand corner.

4. Under Scheduling, change the Output Interval to your desired length.

This property defines how frequently Upsolver outputs the aggregated data, with the default being 1 minute.

5. Click Run on the upper right hand corner.
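
To cut the indexed volume even further, you can filter events before aggregating them. The variant below is a sketch, not part of the original guide; it assumes 'REJECT' is a valid data.action value in your flow logs, so adjust the predicate to your schema.

Sample SQL (filtered variant, illustrative)
SELECT data."account-id" AS ACCOUNT_ID,
    data.action AS action,
    SUM(TO_NUMBER(data.bytes)) AS SUM_BYTES,
    SUM(TO_NUMBER(data.packets)) AS SUM_PACKETS,
    COUNT(*) AS count
FROM "bhopp-vpc-flowlogs"
-- Keep only rejected traffic (illustrative predicate).
WHERE data.action = 'REJECT'
GROUP BY data."account-id", data.action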

Define Amazon S3 output parameters

1. Define the Output Format and the S3 Connection information; then click Next.

Keep in mind that Upsolver supports all common file formats.

2. Define the compute cluster that you would like to use and the time range of the data you would like to output.

Keep in mind that setting Ending At to Never means the output will be a continuous stream.

3. Click Deploy.

Configure Splunk environment to read data from S3

While waiting for the data to write to the output, configure the Splunk environment to read from S3; this guide uses a t2.large Splunk instance.

If you don’t have a Splunk environment, you can easily start up a Splunk instance in the same environment in which Upsolver is deployed.

1. After logging in, click on Find More Apps.

2. Find the Splunk Add-on for Amazon Web Services app and click Install.

3. Fill out your login information for Splunk.com. Check the license and agreement box and click Login and Install.

If you don’t have an account, click on FREE SPLUNK on the upper right hand corner and sign up for a free account.

4. The installation might take a few seconds and Splunk will prompt you to restart. Click Restart Now.

5. Log in to your Splunk environment again and click on the Splunk Enterprise logo. Then click Splunk Add-on for AWS.

6. Click on the Configuration tab and then click Add on the right.

7. Give your Account a name (make sure to remember this name; we will use it for the data input next). Fill out your AWS Access Key (Key ID) and Secret Key information, then click Add.

8. Click on Settings > Data inputs in the upper right hand corner of your Splunk UI.

9. Find and select AWS S3 data input (most likely on page 2).

10. Give the data input a name and fill out your AWS Account information. It should be the same Account Name from step 7.

11. Fill in the bucket name. It must match the name of the S3 bucket in your AWS account where the output data is being stored.

12. Change the Polling interval to 10. Define Key prefix as your S3 folder path.

13. Scroll down and check More settings to configure additional settings.

14. Change Set sourcetype to From list, and select json_no_timestamp from the Select sourcetype from list dropdown. Then click Next.

15. Click Start searching.

Verify data in Splunk

1. Click on Data Summary under What to Search.

2. Click on Sourcetype and json_no_timestamp.

3. Verify your indexed data is the same as the aggregated data from Upsolver. Success!
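
For reference, each event indexed in Splunk should now be an aggregated record containing only the fields selected in the SQL above. A hypothetical event (values are invented for illustration) might look like:

{
  "ACCOUNT_ID": "123456789012",
  "action": "ACCEPT",
  "SUM_BYTES": 8814423,
  "SUM_PACKETS": 11203,
  "count": 427
}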

What’s next?

Upsert data to Snowflake