Amazon SageMaker lets developers work at several levels of abstraction when training and deploying machine learning models.
At its highest level of abstraction, SageMaker provides pre-trained machine-learning (ML) models that can be deployed as-is.
In addition, SageMaker provides a number of built-in ML algorithms that developers can train on their own data. SageMaker also offers managed TensorFlow and Apache MXNet environments in which developers can create their own ML algorithms from scratch.
Regardless of which level of abstraction is used, a developer can connect their SageMaker-enabled ML models to other AWS services, such as the Amazon DynamoDB database for structured data storage, AWS Batch for offline batch processing, or Amazon Kinesis for real-time processing.
1. Go to the Outputs page and click New.
2. Select Amazon SageMaker as your output type.
3. Name your output and select whether the output should be Tabular or Hierarchical. After adding your Data Sources, click Next.
Click Properties to review this output's properties. See: Output properties
4. Click the information icon in the fields tree to view information about a field.
The following will be displayed:
How many of the events in this data source include this field, expressed as a percentage (e.g. 20.81%).
The density in the hierarchy (how many of the events in this branch of the data hierarchy include this field), expressed as a percentage.
How many unique values appear in this field.
The total number of values ingested for this field.
The first time this field included a value, for example, a year ago.
The last time this field included a value, for example, 2 minutes ago.
The percentage distribution of the field values. These distribution values can be exported by clicking Export.
A time-series graph of the total number of events that include the selected field.
The most recent data values for the selected field and columns. You can change the columns that appear by clicking Choose Columns.
5. Click the information icon next to a hierarchy element (such as the overall data) to review the following metrics:
The number of fields in the selected hierarchy.
The number of keys in the selected hierarchy.
The number of arrays in the selected hierarchy.
A stacked bar chart, broken down by data type, of the number of fields versus their density or distinct values.
A list of the fields in the hierarchy element, including Type, Density, Top Values, Key, Distinct Values, Array, First Seen, and Last Seen.
6. Click the plus icon in the fields tree to add a field from the data source to your output. This will be reflected under the Data Source Field in the Schema tab.
If required, modify the Output Column Name.
Toggle from UI to SQL at any point to view the corresponding SQL code for your selected output.
You can also edit your output directly in SQL. See: Transform with SQL
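For orientation, the SQL view of a simple tabular output reads as a SELECT over the data source, with each added field aliased to its output column name. The following is only an illustrative sketch with placeholder names; the exact syntax shown in your environment may differ:

    SELECT data.user_id    AS user_id,     -- field added from the fields tree
           data.event_time AS event_time   -- alias matches the Output Column Name
    FROM "clickstream-events"              -- placeholder data source name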
7. Add any required calculated fields and review them in the Calculated Fields tab. See: Adding Calculated Fields
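As a rough illustration, a calculated field is an expression computed from existing source fields and exposed as an additional output column. The field names and functions below are placeholders; the functions available depend on your environment:

    SELECT data.price * data.quantity AS total_amount,   -- calculated field
           UPPER(data.country)        AS country_code    -- simple string transformation
    FROM "orders"                                        -- placeholder data source name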
8. Add any required lookups and review them under the Calculated Fields tab (see the sketch after this list). Lookups can be added:
from data sources
from lookup tables
from reference data
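Conceptually, a lookup enriches each event with values keyed from another dataset. In generic SQL terms this resembles a join against the lookup source; the sketch below is only an analogy with placeholder names, not the product's exact lookup syntax:

    SELECT e.user_id,
           p.plan_name                  -- value looked up from the reference dataset
    FROM "events" AS e                  -- placeholder data source name
    LEFT JOIN "customer-plans" AS p     -- placeholder lookup table
        ON e.user_id = p.user_id        -- lookup key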
9. Through the Filters tab, add a filter to the data source (the equivalent of a WHERE clause in SQL).
See: Adding Filters
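For example, a filter that keeps only completed purchases corresponds to a WHERE clause like the following (field names and values are placeholders):

    SELECT data.order_id,
           data.status
    FROM "orders"                      -- placeholder data source name
    WHERE data.status = 'COMPLETED'    -- only matching events reach the output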
10. Click Make Aggregated to turn the output into an aggregated output. Read the warning before clicking OK, then add the required aggregation. This aggregation field will then be added to the Schema tab. See: Aggregation Functions
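In SQL terms, an aggregated output turns the SELECT into a grouped query: each aggregation becomes an aggregate function computed per group. A hypothetical sketch with placeholder names:

    SELECT data.country,                     -- grouping key
           COUNT(*)        AS event_count,   -- aggregation added to the Schema tab
           SUM(data.price) AS total_revenue  -- a second aggregation
    FROM "orders"                            -- placeholder data source name
    GROUP BY data.country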
11. In the Aggregation Calculated Fields area under the Calculated Fields tab, add any required calculated fields on aggregations. See: Functions, Aggregation Functions
Click Preview at any time to view a preview of your current output.
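A calculated field on aggregations is an expression over already-aggregated values, for example a ratio of two aggregates. Again, a placeholder sketch in generic SQL:

    SELECT data.country,
           SUM(data.price) / COUNT(*) AS avg_order_value   -- computed from two aggregations
    FROM "orders"                                          -- placeholder data source name
    GROUP BY data.country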
12. Click Run and select a pre-existing S3 connection or create a new one.
See: Running an output, How to create a new Amazon S3 connection
13. Click Next and complete the following:
Select the compute cluster to run the calculation on. Alternatively, click the drop-down and create a new compute cluster.
Select the range of data to process. The range can start from the beginning of the data source, from now, or from a custom date and time; it can run indefinitely, end now, or end at a custom date and time.
14. Finally, click Deploy to run the output. It will show as Running in the output panel; it is now live in production and consuming compute resources.
Your output has now been added to your S3 bucket. To easily navigate to your output, click Properties then View output in AWS S3 Console, which will take you to the SageMaker outputs folder in your S3 storage bucket.