Amazon Redshift Spectrum data output
This article introduces Amazon Redshift Spectrum and provides a guide to creating an output to Redshift Spectrum with Upsolver.
Why use Amazon Redshift Spectrum?
Using Amazon Redshift Spectrum, you can query and retrieve structured and semistructured data from files in Amazon S3 without having to load the data into Amazon Redshift tables.
Redshift Spectrum queries employ parallelism to execute quickly against large datasets. Much of the processing occurs in the Redshift Spectrum layer and most of the data remains in Amazon S3. Multiple clusters can concurrently query the same dataset in Amazon S3 without the need to make copies of the data for each cluster.
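For context, this is roughly what querying S3 data through Redshift Spectrum looks like once an external schema is in place (the schema, database, table, and IAM role names below are placeholders):

```sql
-- Register an external schema backed by a data catalog database
CREATE EXTERNAL SCHEMA spectrum_schema
FROM DATA CATALOG
DATABASE 'upsolver_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-spectrum-role';

-- Query the S3-backed table directly, without loading it into Redshift
SELECT event_type, COUNT(*) AS event_count
FROM spectrum_schema.my_table
GROUP BY event_type;
```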
Create an output to Amazon Redshift Spectrum
1. Go to the Outputs page and click New.
2. Select Redshift Spectrum as your output type.
3. Name your output and select your Data Sources, then click Next.
Click Properties to review this output's properties. See: Output properties
4. Review the information displayed for each field in your data source:
How many of the events in this data source include this field, expressed as a percentage (e.g. 20.81%).
The percentage distribution of the field values. These distribution values can be exported by clicking Export.
The number of fields in the selected hierarchy.
5. Add the required fields from your data source to your output.
Note: All numbers are mapped to doubles by default. Change this to BIGINT if you know that your numbers are integers.
6. Toggle from UI to SQL at any point to view the corresponding SQL code for your selected output. You can also edit your output directly in SQL. See: Transform with SQL
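As a rough illustration, the SQL view of an output is a SELECT statement over the data source. The sketch below uses hypothetical field and data source names, with a cast overriding the default double mapping; the exact dialect Upsolver accepts is described in Transform with SQL:

```sql
-- Hypothetical output definition as it might appear in the SQL view;
-- CAST overrides the default double mapping for a known-integer field
SELECT data.user_id AS user_id,
       CAST(data.order_total AS BIGINT) AS order_total,
       data.country AS country
FROM my_data_source
```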
7. Add any required calculated fields and review them in the Calculated Fields tab. See: Adding calculated fields
8. Add any required lookups and review them under the Calculated Fields tab.
9. In the Filters tab, add a filter to the data source; this works like a WHERE clause in SQL.
See: Adding filters
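For example, a filter equivalent to the following SQL WHERE clause keeps only one event type (the field name and value here are hypothetical):

```sql
-- Keep only purchase events; all other events are excluded from the output
WHERE data.event_type = 'purchase'
```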
10. Click Make Aggregated to turn the output into an aggregated output. Read the warning before clicking OK and then add the required aggregation. This aggregation field will then be added to the Schema tab. See: Aggregation functions
11. In the Aggregation Calculated Fields area under the Calculated Fields tab, add any required calculated fields on aggregations. See: Functions, Aggregation functions
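To illustrate what an aggregated output computes, its SQL equivalent is a GROUP BY query along these lines (field and data source names are hypothetical):

```sql
-- Hypothetical aggregated output: one row per country,
-- with an event count and a summed total
SELECT data.country AS country,
       COUNT(*) AS event_count,
       SUM(data.order_total) AS total_revenue
FROM my_data_source
GROUP BY data.country
```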
12. Partition the data by clicking More > Manage Partitions and then selecting the following:
Key: Partitions the data table using one or more fields (or calculated fields)
Partitioning Time: Partitions the data table using a specific time field
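Partitioning pays off at query time, since Redshift Spectrum can skip S3 files that fall outside the requested partitions. Assuming a hypothetical date partition column, a pruned query looks like:

```sql
-- Only S3 files under the matching partitions are scanned
SELECT COUNT(*)
FROM spectrum_schema.my_table
WHERE partition_date BETWEEN '2024-01-01' AND '2024-01-07';
```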
13. To keep only the latest event per upsert key, click More > Manage Upserts and then select the following:
Keys: A unique key identifying a row in the table
Deletions: The delete key (events with the value true in their deletion key field will be deleted)
See: Data types and features, How do upserts work?
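Conceptually, an upsert output behaves as if the table were reduced to the latest event per key, minus deleted rows. A rough SQL sketch of that semantics (table and column names hypothetical):

```sql
-- Latest event per upsert key, excluding rows whose deletion key is true
SELECT *
FROM events e
WHERE e.event_time = (SELECT MAX(event_time)
                      FROM events
                      WHERE upsert_key = e.upsert_key)
  AND (e.is_deleted IS NULL OR e.is_deleted = FALSE);
```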
Click Preview at any time to view a preview of your current output.
14. Click Run and fill out the following fields:
S3 Storage: The storage for the table's data; the access key of this storage must belong to the same AWS account as the access key of the selected connection
Connection: The connection to use for this output. See: How to create a new connection
Database Name: The name of the database to write to
Table Name: The name of the table to write to
See: Running an output
Note: If you select Redshift Spectrum as your connection type, you will be prompted to select an Athena connection.
15. Click Next and select the compute cluster to run the calculation on. Alternatively, click the drop-down and create a new compute cluster.
16. Finally, click Deploy to run the output. It will show as Running in the output panel; it is now live in production and consuming compute resources.
You should now be able to access your table from your Redshift Query Editor.
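For example, assuming the schema and table names used above (placeholders), a quick sanity check from the Redshift Query Editor could be:

```sql
-- Verify the table is visible and returning rows
SELECT *
FROM spectrum_schema.my_table
LIMIT 10;
```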