This article provides an introduction to Amazon Redshift Spectrum along with a guide on creating an output to Amazon Redshift Spectrum with Upsolver.
What is Amazon Redshift Spectrum?
Redshift Spectrum is a feature of Amazon Redshift that allows you to query data stored on Amazon S3 directly and supports nested data types.
Why use Amazon Redshift Spectrum?
Using Amazon Redshift Spectrum, you can query and retrieve structured and semistructured data from files in Amazon S3 without having to load the data into Amazon Redshift tables.
Redshift Spectrum queries employ parallelism to execute quickly against large datasets. Much of the processing occurs in the Redshift Spectrum layer and most of the data remains in Amazon S3. Multiple clusters can concurrently query the same dataset in Amazon S3 without the need to make copies of the data for each cluster.
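As a rough illustration of what "querying S3 directly" looks like on the Redshift side, the following sketch registers an external schema and table and queries it in place. The schema name, database name, IAM role ARN, S3 path, and columns are all placeholders, not values from this guide:

```sql
-- Register an external schema backed by the AWS Glue Data Catalog
-- (database name and IAM role ARN are placeholders).
CREATE EXTERNAL SCHEMA spectrum_demo
FROM DATA CATALOG
DATABASE 'demo_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/MySpectrumRole';

-- Define an external table over Parquet files in S3 (path is a placeholder).
CREATE EXTERNAL TABLE spectrum_demo.events (
    event_id   BIGINT,
    event_type VARCHAR(64),
    event_time TIMESTAMP
)
STORED AS PARQUET
LOCATION 's3://my-bucket/events/';

-- Query the S3 data directly, without loading it into Redshift tables.
SELECT event_type, COUNT(*)
FROM spectrum_demo.events
GROUP BY event_type;
```

Because the table is external, the data stays in S3; only the query processing touches the Redshift Spectrum layer.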
Create an output to Amazon Redshift Spectrum
1. Go to the Outputs page and click New.
2. Select Redshift Spectrum as your output type.
3. Name your output and select your Data Sources, then click Next.
4. Click the information icon next to a field in the fields tree to view information about that field.
The following will be displayed:
Density in Events: How many of the events in this data source include this field, expressed as a percentage (e.g. 20.81%).
Density in Data: How many of the events in this branch of the data hierarchy include this field, expressed as a percentage.
Distinct Values: How many unique values appear in this field.
Total Values: The total number of values ingested for this field.
First Seen: The first time this field included a value (for example, a year ago).
Last Seen: The last time this field included a value (for example, 2 minutes ago).
Value Distribution: The percentage distribution of the field values. These distribution values can be exported by clicking Export.
Field Content Samples Over Time: A time-series graph of the total number of events that include the selected field.
Sample Values: The most recent data values for the selected field and columns. You can change the columns that appear by clicking Choose Columns.
5. Click the information icon next to a hierarchy element (such as the overall data) to review the following metrics:
# of Fields: The number of fields in the selected hierarchy.
# of Keys: The number of keys in the selected hierarchy.
# of Arrays: The number of arrays in the selected hierarchy.
A stacked bar chart of the number of fields by data type, plotted against their density or distinct values.
A list of the fields in the hierarchy element, including Type, Density, Top Values, Key, Distinct Values, Array, First Seen, and Last Seen.
6. Click the plus icon in the fields tree to add a field from the data source to your output. This will be reflected under the Data Source Field column in the Schema tab.
7. If required, modify the Output Column Name along with the Column Type.
Note: All numbers are mapped to doubles by default. Change this to BIGINT if you know that your numbers are integers.
8. Toggle from UI to SQL at any point to view the corresponding SQL code for your selected output.
9. In the Filters tab, add a filter (the equivalent of a SQL WHERE clause) to the data source.
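As a rough SQL analogy for this step, a filter restricts which events reach the output the way a WHERE clause restricts rows; the table and field names below are hypothetical:

```sql
-- Approximate SQL effect of a filter on the output
-- ("events" and "event_type" are hypothetical names).
SELECT *
FROM events
WHERE event_type = 'purchase';
```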
10. Click Make Aggregated to turn the output into an aggregated output.
Read the warning before clicking OK and then add the required aggregation.
This aggregation field will then be added to the Schema tab.
11. In the Aggregation Calculated Fields area under the Calculated Fields tab, add any required calculated fields on aggregations.
See: Functions, Aggregation functions.
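In SQL terms, an aggregated output with calculated fields on aggregations corresponds roughly to a GROUP BY query; the table, key, and field names in this sketch are hypothetical:

```sql
-- Rough SQL equivalent of an aggregated output:
-- group by a key and compute aggregations over each group
-- (table and field names are hypothetical).
SELECT user_id,
       COUNT(*)    AS event_count,
       SUM(amount) AS total_amount
FROM events
GROUP BY user_id;
```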
12. Partition the data by clicking More > Manage Partitions and then selecting the following:
Key: Partitions the data table using one or more fields (or calculated fields)
Partitioning Time: Partitions the data table using a specific time field
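For context, a partitioned output maps to a partitioned external table in Redshift Spectrum, with the partition key reflected in the S3 path layout. The schema, table, column, and S3 path below are placeholders:

```sql
-- A partitioned external table in Redshift Spectrum
-- (schema, table, columns, and S3 path are placeholders).
CREATE EXTERNAL TABLE spectrum_demo.events_by_day (
    event_id   BIGINT,
    event_type VARCHAR(64)
)
PARTITIONED BY (event_date DATE)
STORED AS PARQUET
LOCATION 's3://my-bucket/events/';
-- Partitioned data then lives under paths such as
-- s3://my-bucket/events/event_date=2023-01-15/
```

Partitioning by a time field lets queries that filter on that field scan only the relevant S3 prefixes.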
13. To keep only the latest event per upsert key, click More > Manage Upserts, and then select the following:
Keys: A unique key identifying a row in the table
Deletions: The delete key (events with the value true in their deletion key field will be deleted)
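The upsert behavior described above (keep only the latest event per key, and drop rows whose deletion key is true) can be sketched in SQL roughly as follows; the table, key, timestamp, and deletion field names are hypothetical:

```sql
-- Rough SQL sketch of upsert semantics: keep the most recent event
-- per key, excluding events flagged for deletion (names hypothetical).
SELECT *
FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY user_id
                              ORDER BY event_time DESC) AS rn
    FROM events
) latest
WHERE rn = 1
  AND is_deleted IS NOT TRUE;
```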