
Unifying Observability with Dynatrace Workflows and ServiceNow

When we talk about Dynatrace and ServiceNow in the observability space, the conversation usually turns to ITOM Event, ITSM Incident, and CMDB integration. Dynatrace supports these integrations, but traditionally it's a one-way street: data flows from Dynatrace into ServiceNow.


Enter Dynatrace Workflows, a powerful automation tool that goes beyond monitoring. Typically used for health assessments or self-healing deployments, Workflows can eliminate the need for custom scripts by gathering business data, events, and metrics directly into Dynatrace, or by pushing them to any external API or datastore.


In this example, let's explore how Workflows can interact with the ServiceNow Incident API to fetch all unresolved incidents. Why do this? In a recent proof of concept, a customer was overwhelmed by numerous monitoring tools alongside ServiceNow, leading to a fragmented view of system health. Not everyone had access to all tools, which slowed down issue resolution. By integrating real-time data from all tools, including network data from Meraki and Netscout (or any other API), we constructed a unified visibility dashboard.


This approach not only streamlines operations but also democratizes access to critical information, ensuring everyone can respond to issues more effectively.


Before you start, make sure you have access to the ServiceNow API and the rights to add a workflow. Once you have confirmed this and have the URL and token for authentication on the ServiceNow side, we are ready.


Here is the workflow; I'll explain each step:


Schedule: this is the timer for execution (every 2 mins in our case)

api_incident_api_production: this is where we add our API information. You need your query URL and token (note this connection goes through an API Gateway). Our query:

/servicenow-tableapi/1.0/incident?sysparm_limit=100&sysparm_display_value=true&sysparm_fields=number,location,incident_state,active,severity,priority,short_description,assignment_group,business_service,service_offering,sys_created_on,state,category&sysparm_query=ORDERBYDESCsys_created_on^incident_stateNOT IN6,7

The sysparm_query orders by creation date and excludes incidents in state 6 (Resolved) and 7 (Closed), so only unresolved incidents come back.
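If you want to sanity-check the query before wiring it into the workflow, here is a minimal sketch against the raw ServiceNow Table API. The sketch assumes Node 18+ with global fetch, a bearer token, and the standard /api/now/table/incident path rather than your gateway path; INSTANCE and TOKEN are placeholders:

// Minimal sketch, not the workflow task itself: test the same Table API
// query from Node 18+. INSTANCE and TOKEN are placeholders for your own
// ServiceNow environment.
const INSTANCE = "https://your-instance.service-now.com";
const TOKEN = "<servicenow-token>";

const params = new URLSearchParams({
  sysparm_limit: "100",
  sysparm_display_value: "true",
  sysparm_fields: "number,location,incident_state,priority,short_description",
  sysparm_query: "incident_stateNOT IN6,7", // exclude Resolved (6) and Closed (7)
});

const res = await fetch(`${INSTANCE}/api/now/table/incident?${params}`, {
  headers: { Authorization: `Bearer ${TOKEN}`, Accept: "application/json" },
});
const { result } = await res.json();
console.log(`Fetched ${result.length} unresolved incidents`);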

add_variables: where we define a table variable so that our data is collected in the same pot, which makes it easier to query later. The code for this task:

// optional import of SDK modules
import { execution } from '@dynatrace-sdk/automation-utils';

export default async function ({ execution_id }) {
  // get the current execution so we can read results from earlier tasks
  const ex = await execution(execution_id);

  // fetch the result of the ServiceNow API task
  const task_result = await ex.result('api_incident_api_production');

  // insert the variable 'apiSource' into each JSON object so the records
  // can be filtered to this data pot later in DQL
  for (const key in task_result.json.result) {
    task_result.json.result[key].apiSource = "/api/v3/measurements/incidents";
    //task_result.json.result[key].timestampBeginning = task_result.json.request.start;
  }

  return task_result;
}



ingest_as_bizevents: where we post the results, in this case to the Dynatrace business events ingest API (you could send them to an S3 bucket if you wanted).
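For illustration, here is a hypothetical sketch of what that ingest step boils down to: a POST of the enriched records to the Dynatrace business events ingest endpoint. DT_URL and DT_TOKEN are placeholders, the sample record and event.type value are assumptions, and the token needs the bizevents.ingest scope:

// Hypothetical sketch of the ingest step: POST enriched incident records to
// the Dynatrace business events ingest API. DT_URL/DT_TOKEN are placeholders.
const DT_URL = "https://<your-environment>.live.dynatrace.com";
const DT_TOKEN = "<api-token with bizevents.ingest scope>";

// a sample record shaped like the output of the add_variables task
const events = [
  {
    number: "INC0010001",
    priority: "4-Low",
    apiSource: "/api/v3/measurements/incidents",
    "event.type": "servicenow.incident", // assumed event type for this sketch
  },
];

const res = await fetch(`${DT_URL}/api/v2/bizevents/ingest`, {
  method: "POST",
  headers: {
    Authorization: `Api-Token ${DT_TOKEN}`,
    "Content-Type": "application/json; charset=utf-8",
  },
  body: JSON.stringify(events),
});
console.log(`Ingest returned ${res.status}`); // 202 indicates the events were accepted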



Now that we have a workflow, there is an option to "Run". Do this to confirm that each step executes correctly; everything will appear in GREEN when completed successfully. Sometimes you might need to adjust your query to pull more fields, so make sure you understand which fields are meaningful in your own environment.


Now, if you click on "api_incident_api_production", the UI on the right shows several options; one of them is called "Result".

Click on "Result" to review the payload. If the query worked, you should see the JSON payload. Note: make sure your sysparm_fields match the fields in your environment.

Once everything runs, we are ready to create a dashboard and visualise the data. A simple query can validate that the "pot" of data has been populated, filtering on our apiSource value: /api/v3/measurements/incidents
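A minimal sketch of that validation query (assuming the records landed as bizevents with the apiSource field we added above):

fetch bizevents
| filter apiSource == "/api/v3/measurements/incidents"
| limit 10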


DQL to fetch and count all priority 4-Low incidents (the panel in red below):

fetch bizevents
| filter apiSource == "/api/v3/measurements/incidents"
| filter priority == "4-Low"
| summarize countDistinctExact(number)


Dynatrace Dashboard:

Other panels on the same dashboard helped unify intelligence from the cloud platforms, Netscout, and Meraki, giving a network, site, and application overview.

Adding variables allowed us to filter the entire dashboard by user location, assignment group, etc.

Example for Business_Service:


fetch bizevents
| filter apiSource == "/api/v3/measurements/incidents"
| fieldsAdd timestamp_openedAt = toTimestamp(opened_at)
| fieldsAdd open_since = timestamp - timestamp_openedAt
| parse location, "JSON:locationJSON"
| parse business_service, "JSON:businessServiceJSON"
| parse assignment_group, "JSON:assignment_groupJSON"
| fieldsFlatten assignment_groupJSON
| fieldsFlatten businessServiceJSON
| fieldsFlatten locationJSON
| fields businessServiceJSON.display_value
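The same pattern works for the other variables; here is a sketch for assignment_group (assuming the same parsed field names as above; dedup keeps one row per distinct value):

fetch bizevents
| filter apiSource == "/api/v3/measurements/incidents"
| parse assignment_group, "JSON:assignment_groupJSON"
| fieldsFlatten assignment_groupJSON
| fields assignment_groupJSON.display_value
| dedup assignment_groupJSON.display_value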


As demonstrated, Dynatrace Workflows democratize data access and enhance intelligence sharing, all while being quick and easy to deploy.


There's no reason you can't replicate this workflow to interact with the EVENT or CHANGE tables in ServiceNow. This approach will give Dynatrace users visibility into alerts from other tools, serving as early warning signals or helping track when changes occur, all without the need to manually cross-reference with ITOM or Change Management systems.
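As a sketch, a query against the change table could mirror the incident query above. The table name change_request is standard in ServiceNow, but the fields and filter here are illustrative and should be matched to your instance:

/servicenow-tableapi/1.0/change_request?sysparm_limit=100&sysparm_display_value=true&sysparm_fields=number,short_description,state,assignment_group,start_date,end_date&sysparm_query=active=true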


If you're interested in exploring integration possibilities, reach out to us at hello@visibilityplatforms.com.
