Full guide: S3 data source
This article provides a full guide to creating an Amazon S3 data source in Upsolver, including how to configure the advanced settings.
1. From the Data Sources page, click New.
2. Select Amazon S3.
3. Scroll down and select the Advanced option.
4. From the dropdown, select the desired S3 connection (or create a new connection).
5. (Optional) Enter the path to the data folder in the Amazon S3 bucket (e.g. billing data). If this is not specified, the data is assumed to be at the top level of the bucket hierarchy.
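The folder path in step 5 behaves like a key prefix: S3 has no real folders, so the path simply scopes which object keys are read. A minimal sketch (not Upsolver code; the bucket contents below are made up):

```python
# Simulate how a folder path scopes an S3 listing: only keys that start
# with the prefix are read. An empty prefix reads from the top level.
keys = [
    "billing data/2023/01/01/events.json",
    "billing data/2023/01/02/events.json",
    "inventory/2023/01/01/stock.json",
]

def under_prefix(keys, prefix):
    """Return only the keys that fall under the given folder prefix."""
    return [k for k in keys if k.startswith(prefix)]

print(under_prefix(keys, "billing data/"))  # only the billing files
```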
The full path displayed is read only. You can use it to review the path Upsolver is using to read from S3.
6. Select or enter the date pattern of the files to be ingested. This is auto-detected but can be modified if required.
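The date pattern in step 6 describes where each file's date sits in its S3 key, so files can be ingested in event-time order. A sketch of the idea, assuming an illustrative yyyy/MM/dd key layout under a "billing data" folder (both are assumptions, not Upsolver internals):

```python
# Extract the date embedded in an S3 key, given a known folder prefix
# and a yyyy/MM/dd date pattern (illustrative layout).
from datetime import datetime

def date_from_key(key, folder="billing data/"):
    """Parse the yyyy/MM/dd portion that follows the folder prefix."""
    date_part = key[len(folder):len(folder) + len("2023/01/01")]
    return datetime.strptime(date_part, "%Y/%m/%d")

print(date_from_key("billing data/2023/01/02/events.json"))
```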
7. (Optional) Select a time to start ingesting files from. This is usually auto-detected, but if there is no preview, the date cannot be established and it defaults to today's date. In this case, set the required start date.
8. (Optional) Under File Name Pattern, select the type of pattern to use (All, Starts With, Ends With, Regular Expression, or Glob Expression) and then enter the file name pattern that specifies the file set to ingest.
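The pattern types in step 8 select files in different ways. A sketch of how each type might match file names (the sample names and patterns are made up; Upsolver's matching is configured in the UI, not via this code):

```python
# Illustrate the four non-trivial File Name Pattern types against a
# small set of sample file names.
import fnmatch
import re

names = ["events-001.json", "events-002.csv", "audit-001.json"]

starts = [n for n in names if n.startswith("events-")]                # Starts With
ends = [n for n in names if n.endswith(".json")]                      # Ends With
globbed = fnmatch.filter(names, "events-*.json")                      # Glob Expression
regexed = [n for n in names if re.fullmatch(r"events-\d+\.json", n)]  # Regular Expression

print(starts, ends, globbed, regexed)
```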
9. Select the content format. This is typically auto-detected, but you can manually select a format (Avro, Parquet, ORC, JSON, CSV, TSV, x-www-form-urlencoded, Protobuf, Avro-record, Avro with Schema Registry, or XML).
If necessary, configure the content format options.
10. From the dropdown, select a compute cluster (or create a new one) to run the calculation on.
A warning may appear. This warning can be ignored for POCs (proofs of concept).
For production environments, this warning indicates that if you run your task retroactively (Start Ingestion From), your compute cluster will process a burst of additional tasks, possibly causing delays in outputs and lookup tables running on this cluster.
To prevent this, go to the Clusters page, edit your cluster, and set Additional Processing Units For Replay to a number greater than 0.
11. Select a target storage connection (or create a new one) where the data read will be stored (output storage).
12. (Optional) In addition to the streaming data, you may have initial data. In this case, to list the relevant data, enter a file name prefix (e.g. for DMS loads, enter LOAD) under Initial Load Configuration.
13. (Optional) Under Initial Load Configuration, enter a regex pattern to select the required files.
14. Name this data source.
15. Click Continue. A preview of the data appears.
16. For CSV, select a Header.
17. Click Continue again.
18. (Optional) If there are any errors, click Back and change the settings as required.
19. Click Create.
You can now use your Amazon S3 data source.
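The initial-load selection described in steps 12–13 (a file name prefix plus an optional regex) can be sketched as follows. The file names follow the common DMS full-load naming convention, and the regex is an illustrative assumption:

```python
# Select one-off initial-load files by prefix and regex, leaving the
# ongoing streaming files (e.g. timestamped change files) untouched.
import re

files = ["LOAD00000001.csv", "LOAD00000002.csv", "20230101-120000000.csv"]

def initial_load_files(files, prefix="LOAD", pattern=r"LOAD\d{8}\.csv"):
    """Return files matching both the name prefix and the regex."""
    return [f for f in files if f.startswith(prefix) and re.fullmatch(pattern, f)]

print(initial_load_files(files))
```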