Dynatrace

To export monitoring data from your Upsolver account, you must first create a connection that provides the appropriate credentials to access your Dynatrace account.

Syntax

CREATE DYNATRACE CONNECTION <connection_identifier>
    API_TOKEN = '<generated_token>'
    ENVIRONMENT_ID = '<env_id>'
    [ COMMENT = '<comment>' ];


Connection Options

API_TOKEN

Type: text

The API token for your Dynatrace account. When you generate the API token in Dynatrace, ensure that it has the Ingest metrics scope. See the Dynatrace documentation for more information.

ENVIRONMENT_ID

Type: text

The environment ID for your Dynatrace account. For Dynatrace SaaS, this is the identifier that appears in your environment URL, for example https://<env_id>.live.dynatrace.com.

COMMENT (editable)

Type: text

(Optional) A description or comment regarding this connection.

Examples

Create a connection

The following example creates a new connection named my_dynatrace_connection that will be used to send monitoring information to Dynatrace.

CREATE DYNATRACE CONNECTION my_dynatrace_connection
    API_TOKEN = 'my_api_token'
    ENVIRONMENT_ID = 'my_env_id'
    COMMENT = 'Dynatrace connection for Upsolver job metrics';

Create a job

The following script creates a job named send_monitoring_data_to_dynatrace that sends job metrics to Dynatrace using the my_dynatrace_connection connection created in the example above:

CREATE JOB send_monitoring_data_to_dynatrace 
    START_FROM = NOW 
AS INSERT INTO my_dynatrace_connection
    MAP_COLUMNS_BY_NAME
       SELECT job_id AS tags.id, 
              job_name AS tags.job_name, 
              discovered_files_lifetime AS files,
              avg_file_size_today AS avg_file_size,
              avg_rows_scanned_per_execution_today AS avg_rows_scanned,
              cluster_id AS tags.cluster_id,
              cluster_name AS tags.cluster_name,              
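              -- RUN_START_TIME() must be aliased as "time" when selecting from system tables (see the note below)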
              RUN_START_TIME() AS time
       FROM system.monitoring.jobs;

The job includes the cluster_id and cluster_name as tags, which is helpful if you have multiple clusters in your organization.
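You can also use these columns to limit which metrics are shipped. The following sketch is a variation of the job above that sends metrics for a single cluster only; the job name and the cluster name my_production_cluster are hypothetical, and the WHERE clause assumes the SELECT from the system table can be filtered in the usual way:

CREATE JOB send_cluster_metrics_to_dynatrace
    START_FROM = NOW
AS INSERT INTO my_dynatrace_connection
    MAP_COLUMNS_BY_NAME
       SELECT job_id AS tags.id,
              job_name AS tags.job_name,
              discovered_files_lifetime AS files,
              cluster_name AS tags.cluster_name,
              RUN_START_TIME() AS time
       FROM system.monitoring.jobs
       WHERE cluster_name = 'my_production_cluster';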

Note

Ensure that RUN_START_TIME() AS time is included in the SELECT statement that reads from the system tables; omitting it results in an error.
