Elasticsearch is a text indexing and query engine which is useful for data analysis and exploration. Upsolver can export batches of your processed data to Elasticsearch for you.
Creating A New Elasticsearch Output
- In the Outputs tab click Add Output.
- Give the output a name in the Name field and optionally provide a description for this output in the Description field.
- If you want to aggregate your data (see Aggregating Data) check the Aggregate Data checkbox.
- In the Data Source drop down menu select the Data Source you would like to create this output from. If you do not yet have a Data Source you can create one from this menu.
- In the Output To menu select Elasticsearch.
- In the Connection menu select the connection to the Elasticsearch Cluster you would like to export to. If you have not yet configured a connection you can create one from this menu. The connection string must be in the format:
elasticsearch://host:port,host:port
with the hosts of your cluster.
- Click Next.
- In the Index Name Prefix field type a prefix for the indices that will be created in your Elasticsearch cluster.
- If you would like to give the exported events a Type, you can name it in the Index Type field. Read more about types in Elasticsearch here.
- In the Index Partition Size menu, select how large you want each index to be.
- In the Bulk Max Size In Bytes field configure the maximum batch size to output to Elasticsearch.
- In the S3 Connection menu, select the connection to the S3 bucket where you would like Upsolver to store the batch files to load into Elasticsearch.
- If you would like to have the files written to a certain prefix within the bucket you provided, type that prefix in the Path field.
- Click Next.
- You will now see a list of all the columns in that table. You can configure which field gets mapped to each column in the same manner described in Configuring Schema And Filters. Optionally, you can also add filters to limit the data exported to Elasticsearch.
- When ready, start running your output.
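Before pasting the connection string into the Connection menu, it can help to sanity-check that it matches the `elasticsearch://host:port,host:port` format described above. A minimal sketch in Python (the `parse_es_connection_string` helper is a hypothetical illustration, not part of Upsolver or the Elasticsearch client):

```python
# Validate an Elasticsearch connection string of the form
#   elasticsearch://host:port,host:port
# This helper is illustrative only; Upsolver performs its own validation.

def parse_es_connection_string(conn: str) -> list[tuple[str, int]]:
    """Return the (host, port) pairs encoded in the connection string."""
    scheme = "elasticsearch://"
    if not conn.startswith(scheme):
        raise ValueError("connection string must start with 'elasticsearch://'")
    hosts = []
    for part in conn[len(scheme):].split(","):
        host, sep, port = part.partition(":")
        if not host or not sep or not port.isdigit():
            raise ValueError(f"expected host:port, got {part!r}")
        hosts.append((host, int(port)))
    return hosts

print(parse_es_connection_string(
    "elasticsearch://es1.example.com:9200,es2.example.com:9200"))
# → [('es1.example.com', 9200), ('es2.example.com', 9200)]
```

A string that omits a port or the scheme raises a `ValueError`, which mirrors the kind of mistake that would otherwise only surface when the output fails to connect.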