Elasticsearch data output

This article provides an introduction to Elasticsearch along with a guide on creating an Elasticsearch data output using Upsolver.

What is Elasticsearch?

Elasticsearch is an open-source, RESTful, distributed search and analytics engine built on Apache Lucene. It is commonly used for log analytics, full-text search, security intelligence, business analytics, and operational intelligence use cases.
You can send data in the form of JSON documents to Elasticsearch using the API or ingestion tools such as Logstash and Amazon Kinesis Firehose. Elasticsearch automatically stores the original document and adds a searchable reference to the document in the cluster’s index.
You can then search and retrieve the document using the Elasticsearch API. You can also use Kibana, an open-source visualization tool, with Elasticsearch to visualize your data and build interactive dashboards.
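To make the API interaction concrete, here is a minimal sketch of building an indexing request for the Elasticsearch REST API using only the Python standard library. The cluster address, index name, and document fields are illustrative assumptions, not values from this article.

```python
import json
from urllib import request

ES_HOST = "http://localhost:9200"  # assumed local cluster address

def build_index_request(index, doc_id, document):
    """Build a PUT request that stores `document` under `doc_id` in `index`."""
    url = f"{ES_HOST}/{index}/_doc/{doc_id}"
    body = json.dumps(document).encode("utf-8")
    return request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

# A hypothetical log event to index:
event = {"level": "ERROR", "message": "timeout", "ts": "2020-01-01T00:00:00Z"}
req = build_index_request("app-logs", "1", event)

# Sending the request requires a running cluster, so it is left commented out:
# with request.urlopen(req) as resp:
#     print(json.load(resp))
```

Once stored, the same document could be retrieved with a GET to the same `/{index}/_doc/{id}` path, or searched via the `_search` endpoint.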

Supported Versions

Upsolver supports direct output into specific versions of Elasticsearch.
OpenSearch is a fork of Elasticsearch. While it isn't officially supported, some versions may still function properly as targets for this output. Specifically, versions 1.3.0 and 2.3.0 have been tested successfully but future versions may not work. We recommend testing new versions of OpenSearch with a standalone output before updating existing clusters.

Create an Elasticsearch data output

1. Go to the Outputs page and click New.
2. Select Elasticsearch as your output type.
3. Name your output and select your Data Sources, then click Next.
Click Properties to review this output's properties. See: Output properties
4. Click the information icon in the fields tree to view information about a field. The following will be displayed:
Density in Events: How many of the events in this data source include this field, expressed as a percentage (e.g. 20.81%).
Density in Data: The density in the hierarchy (how many of the events in this branch of the data hierarchy include this field), expressed as a percentage.
Distinct Values: How many unique values appear in this field.
Total Values: The total number of values ingested for this field.
First Seen: The first time this field included a value, for example, a year ago.
Last Seen: The last time this field included a value, for example, 2 minutes ago.
Value Distribution: The percentage distribution of the field values. These distribution values can be exported by clicking Export.
Field Content Samples Over Time: A time-series graph of the total number of events that include the selected field, along with the most recent data values for the selected field and columns. You can change the columns that appear by clicking Choose Columns.
5. Click the information icon next to a hierarchy element (such as the overall data) to review the following metrics:
# of Fields: The number of fields in the selected hierarchy.
# of Keys: The number of keys in the selected hierarchy.
# of Arrays: The number of arrays in the selected hierarchy.
Fields Breakdown: A stacked bar chart (by data type) of the number of fields versus the density or distinct values.
Fields Statistics: A list of the fields in the hierarchy element, including Type, Density, Top Values, Key, Distinct Values, Array, First Seen, and Last Seen.
6. Click the plus icon in the fields tree to add a field from the data source to your output. This will be reflected under the Data Source Field in the Schema tab. If required, modify the Output Column Name.
Toggle from UI to SQL at any point to view the corresponding SQL code for your selected output.
You can also edit your output directly in SQL. See: Transform with SQL
7. Add any required calculated fields and review them in the Calculated Fields tab. See: Adding Calculated Fields
8. Add any required lookups and review them under the Calculated Fields tab.
9. Through the Filters tab, add a filter (similar to a WHERE clause in SQL) to the data source. See: Adding Filters
10. Click Make Aggregated to turn the output into an aggregated output. Read the warning before clicking OK and then add the required aggregation. This aggregation field will then be added to the Schema tab. See: Aggregation Functions
11. In the Aggregation Calculated Fields area under the Calculated Fields tab, add any required calculated fields on aggregations. See: Functions, Aggregation Functions
Click Preview at any time to view a preview of your current output.
12. Click Run and fill out the following fields:
  • Index Name: See warning below
  • Index Partition Size
  • S3 connection: Select an intermediate storage location where Upsolver will store the intermediate bulk files before loading them into Elasticsearch
The index name must be that of a pre-existing index (Upsolver will not automatically create a new index), and the name should be suffixed with the year (e.g. upsolver_2020).
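The year-suffix naming convention from the warning above can be sketched as follows. The base name `upsolver` mirrors the example in the warning; deriving the year from an event timestamp (rather than the wall clock) is an assumption for illustration.

```python
from datetime import datetime, timezone

def yearly_index_name(base: str, event_time: datetime) -> str:
    """Return the index name suffixed with the event's year, e.g. upsolver_2020."""
    return f"{base}_{event_time.year}"

# Example: an event from June 2020 maps to the upsolver_2020 index,
# which must already exist in the cluster before the output runs.
name = yearly_index_name("upsolver", datetime(2020, 6, 1, tzinfo=timezone.utc))
```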
13. Click Next and complete the following:
Compute Cluster
Processing Time Range
Select the compute cluster to run the calculation on. Alternatively, click the drop-down and create a new compute cluster.
The range of data to process. It can start from the beginning of the data source, from now, or from a custom date and time, and it can run indefinitely, end now, or end at a custom date and time.
14. Finally, click Deploy to run the output. It will show as Running in the output panel; it is now live in production and consuming compute resources.
You have now successfully outputted your data to Elasticsearch.


Upsolver writes to Elasticsearch using Bulk requests.
For Elasticsearch versions 7 and above, Upsolver does not currently report the status of the individual requests inside these bulks. The only indication received is whether the bulk request as a whole succeeded or failed. It is therefore possible for some items within a bulk request to fail without any notification appearing in the Upsolver UI.
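To make the failure mode concrete, here is a sketch of what the body of an Elasticsearch Bulk request looks like: newline-delimited JSON in which each action line is followed by its document. The index name and documents are made up for illustration. In the cluster's bulk response, the top-level `errors` flag and the per-item entries under `items` are what reveal partial failures; it is this per-item detail that is not surfaced in the Upsolver UI.

```python
import json

def build_bulk_body(index, documents):
    """Serialize documents into the NDJSON body of an Elasticsearch _bulk request."""
    lines = []
    for doc in documents:
        lines.append(json.dumps({"index": {"_index": index}}))  # action line
        lines.append(json.dumps(doc))                           # document source line
    return "\n".join(lines) + "\n"  # bulk bodies must end with a trailing newline

docs = [{"user": "a"}, {"user": "b"}]
body = build_bulk_body("upsolver_2020", docs)
# Two documents produce four NDJSON lines (one action + one source per document);
# each of those items gets its own status in the bulk response.
```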