Integration API

Here we describe how to import and export data from/to LeanIX using the Integration API

Overview

The Integration API provides the ability to import and export data using a generic LeanIX Data Interchange Format (LDIF). LDIF is a JSON format with a very simple structure described in the following sections. All mapping and processing of the incoming and outgoing data is done using "Data Processors" that are configured behind the API. Configuration of the processors can be done using the UI. The configurations can be managed using the Integration API as well.

Updates

February 2020

  • Inbound Integration API automatic deletion support: processing now supports automatic deletion of relations by defining specific scopes and relation filters. See section "Automatic deletion".
  • The Inbound Integration API now supports the processing mode "full" when creating the configuration. For configurations with mode "full", a new section "deletionScope" is read from the processor configuration. If that section contains a key "factSheets", all Fact Sheets matching the scope query inside will be removed if they are not found in the processed LDIF. See section "Automatic deletion".

January 2020

  • Inbound data processors now allow configuring a "read" section for each data processor. In this section, an administrator can define the Fact Sheet fields, relations, tags, subscriptions and documents information that will be read from the Fact Sheet before writing to it. This makes it possible to, for example, increase cost fields (write existing + new value) or perform updates differently based on the current content of the Fact Sheet.

December 2019

  • Functionality to export tag and subscription information from LeanIX using the outbound processor was added (see section "Outbound Configuration" for a sample).
  • In addition to iterating over lists, "forEach" now supports iterating over maps. Each map entry will be iterated; the names of the keys are available in "integration.keyOfForEach" and "integration.output.keyOfForEach" (for inner "forEach" loops in the output section). See section "ForEach logic".

Advantages

  • Complexity is minimized, as developers no longer have to understand the LeanIX data model
  • Configuration for LeanIX is contained in every connector
  • Mapping logic no longer has to live inside connector code
  • Integration is possible even if the external system does not allow direct access
  • No need for a direct REST connection
  • Flexible error handling: no failure due to a single or even multiple data issues. Data that does not meet the requirements is simply ignored
  • Cross-cutting concerns are part of the API rather than of each connector

Lean Data Interchange Format (LDIF)

All data sent to the LeanIX Integration API needs to be in a standard format, called LDIF. For synchronization of data from LeanIX to other systems (outbound), the Integration API will provide all data in the same format as well.

The LDIF contains the following information:

  • Data sent from the external system to LeanIX or, in the case of outbound, data extracted from LeanIX
  • Metadata to identify the connector instance that wrote the LDIF. The metadata is used to define ownership of entities in LeanIX if name spacing/deletion needs to be ensured
  • Identification of the target workspace and the target API version
  • An optional, arbitrary description customers can add for grouping, notification or unstructured notes for display purposes; it is not processed by the API

LDIF Mandatory Elements


"connectorType"

Contains a string identifying a connector type in the workspace. A connector type identifies code that can be deployed to multiple locations and is able to communicate with a specific external program. For example, a connector type "lxKubernetes" could identify a connector that was written to read data from a Kubernetes installation and can be deployed into various Kubernetes clusters. In conjunction with "connectorId", the Integration API will match the configurations administrators created and stored in LeanIX and use them when processing incoming data.

Incoming data cannot be processed if the corresponding connectorType has not been configured for the LeanIX workspace.

"connectorVersion"

A string used for informational purposes only. The connector writes its own software version to the LDIF file for better referencing and understanding potential changes in the written LDIF. LDIF output of different connector versions is expected to be compatible and to be processed with the same configuration set on the LeanIX side if different versions of a connector send data in parallel. In case of incompatible data, a new connectorType needs to be sent and a corresponding configuration added on the LeanIX Integration API side.

"connectorId"

Contains a string to identify a specific deployment of a connector type. As an example: a Kubernetes connector might be deployed multiple times and collect different data from the different Kubernetes clusters. In conjunction with "connectorType", the Integration API will match the configurations administrators created and stored in LeanIX and use them when processing incoming data. Administrators in LeanIX can manage each instance of a connector by creating and editing processing configurations, monitoring ongoing runs (progress, status) and interacting with them (pause, resume, cancel...). Only one data transfer per instance can run at any point in time.

Incoming data cannot be processed if the corresponding connectorId has not been configured for the LeanIX workspace.

"lxVersion"

Defines the version of the Integration API the connector expects to send data to. It is used to ensure that a component grabbing LDIF files from a cloud storage sends them to the right Integration API version (in case no direct communication is available).

"content"

The content section contains a list of Data Objects. This section contains the data collected from the external system or the data to be sent to the external system. Each Data Object is defined by an "id" (typically a UUID in the source system), a type to group the source information and a map of data elements (flat or hierarchical maps). Values of map entries may be single string values or lists of string values.


LDIF Content Section

The content section contains a list of Data Objects: the data collected from the external system or the data to be sent to the external system. Each Data Object is defined by an "id" (typically a UUID in the source system), a type to group the source information and a map of data elements. The keys "id", "type" and "data" are mandatory. The data map may be flat or hierarchical. Values of map entries may be single string values, maps or lists.

Elements in Content Section

"id"

Contains a string, typically a unique identifier, that allows tracking the item back to the source system and ensures that updates in LeanIX always go to the same Fact Sheet. LeanIX Data Processors provide an efficient matching option to allow configuration of specific mapping rules based on IDs or groups of IDs that can be identified by patterns.

"type"

A string representing any required high-level structuring of the source data. Example content in the case of Kubernetes is e.g. "Cluster" (containing data that identifies the whole Kubernetes instance) or "Deployment" (which can represent a type of application in Kubernetes). The type will typically be used to create different types or subtypes of Fact Sheets or relations. LeanIX Data Processors provide an efficient matching option to allow configuration of specific mapping rules based on the type or groups of type strings that can be identified by patterns.

"data"

The data extracted from the source system. The format is simple: all data has to be in a map. Each map entry can contain a single string as a value, a list of strings, or another map. Nested maps again have to follow the rules just described.

Mandatory attributes in Content Section

The above three attributes are always required in each content block. Additional attributes like name and description can be placed under data.
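A single content entry might look like this (the field names under "data" are illustrative; the Integration API does not prescribe them):

{
    "id": "634c16bf-198c-1129-9d08-92630b573fbf",
    "type": "Deployment",
    "data": {
        "app": "HR Service",
        "clusterName": "leanix-westeurope-int-aks",
        "labels": ["prod", "critical"],
        "resources": {
            "cpu": "2",
            "memory": "4Gi"
        }
    }
}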

LDIF Optional Elements


"processingMode"

May contain "PARTIAL" (the default if not present) or "FULL". Full mode allows automatically removing all Fact Sheets that match a configured query and are not touched by the integration.

"chunkInformation"

If present, it contains "firstDataObject", "lastDataObject" and "maxDataObject". Each value is a number defining what is contained in this potentially chunked LDIF.

"description"

The description can contain any string that may help to identify the source or type of data. Sometimes it is helpful to add information for analysis purposes or when setting up the configuration on the LeanIX side.

"customFields"

This optional section may contain a map of fields and values defined by the producer of the LDIF. All data can be referenced in any data processor. It is used for globally available custom metadata.

"lxWorkspace"

Defines the LeanIX workspace the data is supposed to be sent to or received from. The content will be used for additional validation by the Integration API to check whether data is sent to the right workspace. The content has to match the string visible in the URL when accessing the workspace in a browser.

Users need to enter the UUID of the workspace in order to make use of this additional security mechanism. The UUID can e.g. be read from the administration page where API Tokens are created. Example "lxWorkspace": "19fcafab-cb8a-4b7c-97fe-7c779345e20e"
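Taken together, the optional header elements might look like this (all values are illustrative):

{
    "connectorType": "lxKubernetes",
    "connectorId": "Kub Dev-001",
    "connectorVersion": "1.2.0",
    "lxVersion": "1.0.0",
    "lxWorkspace": "19fcafab-cb8a-4b7c-97fe-7c779345e20e",
    "processingMode": "PARTIAL",
    "description": "Nightly import from the development cluster",
    "customFields": {
        "myGlobaldata1": "a value available to every data processor"
    },
    "chunkInformation": {
        "firstDataObject": 1,
        "lastDataObject": 500,
        "maxDataObject": 1234
    },
    "content": []
}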

LDIF Notes

  • Additional fields in the LDIF that do not match the structure defined here will be silently ignored.
  • Each of the values listed above, except the values in the "content" section, must not be longer than 500 characters.

A sample inbound LDIF:
{
  "connectorType": "cloudockit",
  "connectorId": "CDK",
  "connectorVersion": "1.0.0",
  "lxVersion": "1.0.0",
  "lxWorkspace": "workspace-id",
  "description": "Imports Cloudockit data into LeanIX",
  "processingDirection": "inbound",
  "processingMode": "partial",
  "content": [
    {
      "type": "ITComponent",
      "id": "b6992b1d-4e4d",
      "data": {
        "name": "Gatsby.j",
        "description": "Gatsby is a free and open source framework based on React that helps developers build websites and apps.",
        "category": "sample_software",
        "provider": "gcp",
        "applicationId": "28db27b1-fc55-4e44"
      }
    },
    {
      "type": "ITComponent",
      "id": "cd4fab6c-4336",
      "data": {
        "name": "Contentful",
        "description": "Beyond headless CMS, Contentful is an API-first content management infrastructure to create, manage and distribute content to any platform or device.",
        "category": "cloud_service",
        "provider": "gcp",
        "applicationId": "28db27b1-fc55-4e44"
      }
    },
    {
      "type": "ITComponent",
      "id": "3eaaa629-5338-41f4",
      "data": {
        "name": "GitHub",
        "description": "GitHub is a tool that provides hosting for software development version control using Git.",
        "category": "cloud_service",
        "provider": "gcp",
        "applicationId": "28db27b1-fc55-4e44"
      }
    },
    {
      "type": "Application",
      "id": "28db27b1-fc55-4e44",
      "data": {
        "name": "Book a Room Internal",
        "description": "Web application that's used internal to book rooms for a meeting."
      }
    }
  ]
}

LDIF including Lifecycles

{
  "connectorType": "cloudockit",
  "connectorId": "CDK",
  "connectorVersion": "1.0.0",
  "lxVersion": "1.0.0",
  "lxWorkspace": "workspace-id",
  "description": "Imports Cloudockit data into LeanIX",
  "processingDirection": "inbound",
  "processingMode": "partial",
  "content": [
    {
      "type": "Application",
      "id": "company_app_1",
      "data": {
        "name": "TurboTax",
        "plan": null,
        "phaseIn": "2016-12-29",
        "active": "2019-12-29",
        "phaseOut": "2020-06-29",
        "endOfLife": "2020-12-29"
      }
    },
    {
      "type": "Application",
      "id": "company_app_2",
      "data": {
        "name": "QuickBooks",
        "plan": null,
        "phaseIn": "2016-11-29",
        "active": "2019-11-29",
        "phaseOut": "2020-05-29",
        "endOfLife": "2020-11-29"
      }
    }
  ]
}

Types Of Data Processors

Data Processors come in two main types: inbound and outbound. Inbound Processors are configured to look at the provided data (via the LDIF) and produce a result that is filed into LeanIX. Outbound Data Processors, in turn, allow exporting specific data from Fact Sheets, such as data on relations.

Using current Fact Sheet information when writing to a Fact Sheet

All inbound processors allow reading information from the Fact Sheet (currently supported: fields, relations, subscriptions, tags, documents) and using that information when writing back to the Fact Sheet. The example below shows two use cases: a cost field is increased by the incoming value, and the risk fields are only updated if the Fact Sheet is not marked for manual input (here indicated by a "MANUAL_INPUT" tag). The example also shows how to configure the "read" section for this feature.

{
	"processors": [
		{
			"processorType": "inboundFactSheet",
			"processorName": "Apps from Deployments",
			"processorDescription": "Creates LeanIX Applications from Kubernetes Deployments",
			"type": "Project",
			"filter": {
				"type": "prj"
			},
			"identifier": {
				"external": {
					"id": {
						"expr": "${content.id}"
					},
					"type": {
						"expr": "externalId"
					}
				}
			},
			"run": 0,
			"updates": [
				{
					"key": {
						"expr": "budgetOpEx"
					},
					"values": [
						{
							"expr": "${lx.factsheet.budgetOpEx+data.monthlyOpEx}"
						}
					]
				},
				{
					"key": {
						"expr": "projectRisk"
					},
					"values": [
						{
							"expr": "${(lx.tags.toString().contains('\"name\":\"MANUAL_INPUT'))?null:data.risk}",
							"regexMatch": ".+"
						}
					],
					"optional": true
				},
				{
					"key": {
						"expr": "projectRiskDescription"
					},
					"values": [
						{
							"expr": "${(lx.tags.toString().contains('\"name\":\"MANUAL_INPUT'))?null:data.riskDescription}",
							"regexMatch": ".+"
						}
					],
					"optional": true
				}
			],
			"logLevel": "debug",
			"read": {
				"fields": [
					"budgetOpEx"
				],
				"tags": {
					"groups": [
						"Other tags"
					]
				}
			}
		}
	]
}

Example LDIF to test the above processor. The workspace needs to contain a Project Fact Sheet with external ID "12345"; alternatively, change the LDIF data to an external ID of a Project Fact Sheet existing in the workspace:

{
	"connectorType": "showcaseUpdate",
	"connectorId": "showcaseUpdate",
	"connectorVersion": "1.0.0",
	"lxVersion": "1.0.0",
	"content": [
		{
			"type": "prj",
			"id": "12345",
			"data": {
				"monthlyOpEx": 50000,
				"risk": "lowProjectRisk",
				"riskDescription": "The risk is considered to be low."
			}
		}
	]
}

Availability of information read from the Fact Sheet

Information read from the Fact Sheet is available in the output section. It is not available in the outer forEach, in the identifier and in the filter section. The reason is that at the time the content of these sections is evaluated, the target Fact Sheet has not yet been identified.

Automatic deletion when loading data using inbound Data Processors

The Integration API supports the processing mode "full" when creating the configuration. Only if the configuration is set to mode "full" is a section "deletionScope" read from the processor configuration.

Deletion of Fact Sheets: if that section contains a key "factSheets", all Fact Sheets matching the scope query inside will be removed if they are not found in the processed LDIF. All Fact Sheets that match the deletion scope but are not touched by an inbound Data Processor during processing will be removed (set to "Archived").

Relations can be automatically removed as well. The structure to define relations to be deleted is similar; see the example configuration below. The example removes all relations of the given type, but by narrowing the scope to fewer Fact Sheets, relations will only be removed for those Fact Sheets.

To ensure the scope of the deletions is set properly, use the "Test Run" mode to check that the scope only affects items that are supposed to be deleted. Admins may check the statistics result (shown in the UI as well) to see the number of potentially removed items and compare it with the expected results based on the test data.

Multiple deletion scopes

Please note that you can define multiple deletion scopes for every type (e.g. 2 Fact Sheet deletion scopes and 3 relation deletion scopes). Processed items during synchronization runs will be compared against each scope separately. Any item in a deletion scope will be removed if it was not touched during processing. It is allowed to define overlapping scopes; each item will be handled once only.
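For illustration, a "deletionScope" with two Fact Sheet scopes might look like this (a sketch; the facet filter values are placeholders):

{
    "deletionScope": {
        "factSheets": [
            {
                "facetFilters": [
                    {
                        "keys": ["Project"],
                        "facetKey": "FactSheetTypes",
                        "operator": "OR"
                    }
                ],
                "ids": []
            },
            {
                "facetFilters": [
                    {
                        "keys": ["Application"],
                        "facetKey": "FactSheetTypes",
                        "operator": "OR"
                    }
                ],
                "ids": []
            }
        ]
    }
}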

Example configuration that removes all Projects from the LeanIX EA workspace that are no longer part of the incoming LDIF data. In addition, the relations from Applications to ITComponents will be removed:

{
	"deletionScope": {
		"factSheets": [
			{
				"facetFilters": [
					{
						"keys": [
							"Project"
						],
						"facetKey": "FactSheetTypes",
						"operator": "OR"
					}
				],
				"ids": []
			}
		],
		"relations": [
			{
				"relationTypes": [
					"relApplicationToITComponent"
				],
				"scope": {
					"facetFilters": [
						{
							"keys": [
								"Application"
							],
							"facetKey": "FactSheetTypes",
							"operator": "OR"
						},
						{
							"keys": [
								"c735330f-cd65-4c83-9be6-8a3f5ecf6560"
							],
							"facetKey": "relApplicationToITComponent",
							"operator": "OR"
						}
					],
					"ids": []
				}
			}
		]
	},
	"processors": [
		{
			"processorType": "inboundFactSheet",
			"processorName": "Apps from Deployments",
			"processorDescription": "Creates LeanIX Applications from Deployments",
			"type": "Project",
			"filter": {
				"type": "prj"
			},
			"identifier": {
				"external": {
					"id": {
						"expr": "${content.id}"
					},
					"type": {
						"expr": "externalId"
					}
				}
			},
			"run": 0,
			"updates": [
				{
					"key": {
						"expr": "name"
					},
					"values": [
						{
							"expr": "${data.name}"
						}
					]
				}
			],
			"enabled": true,
			"logLevel": "debug"
		}
	]
}

LDIF to try out deletion:

{
  "connectorType": "prjFull",
  "connectorId": "prjFull",
  "connectorVersion": "1.0.0",
  "lxVersion": "1.0.0",
  "content": [
    {
      "type": "Project",
      "id": "prj-42",
      "data": {
        "name": "Project 42"
      }
    },
    {
      "type": "Project",
      "id": "prj-43",
      "data": {
        "name": "Project 43"
      }
    },
    {
      "type": "Project",
      "id": "prj-44",
      "data": {
        "name": "Project 44"
      }
    }
  ]
}

To try this out, execute the run multiple times: first create all projects, then remove one item from the LDIF and run again.

Example will delete all projects

Executing this example needs to be done with care: all project Fact Sheets existing in the workspace will be in scope for deletion. To limit the scope, you may want to change the sample so that a tag such as "TEST_PRJ" is set on the test projects. This tag can then be added as a filter criterion to the deletion scope definition.

Functionality to automatically remove subscriptions, tags and documents will follow shortly.

Valid content for the deletion scope

To create valid JSON content defining the scope of Fact Sheets to be deleted if they no longer exist in the incoming LDIF, admins may want to use an outbound configuration. In that configuration, a "Scope" button is available that opens the facet filter UI. Once confirmed, the scope is automatically pasted into the processor configuration. Admins may copy and paste it into the inbound configuration where automatic deletion is needed.

Usage of RegEx and JUEL expressions

All RegEx filters allow negation and case insensitivity. The Java RegEx syntax can be applied: to match everything but "notMe", "^((?!notMe).)*$" would be used. To match in a case-insensitive manner, add "(?i)" to the beginning of the regular expression.
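A sketch of how such a pattern could be used as a "regexMatch" on a value (the target key and the data field are illustrative):

{
    "key": {
        "expr": "description"
    },
    "values": [
        {
            "expr": "${data.note}",
            "regexMatch": "(?i)^((?!notMe).)*$"
        }
    ]
}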

Each inbound Data Processor JUEL expression can contain the following references to the data object that is in scope for processing:

"header"

Example: "${header.connectorId}" would result in an evaluated string "Kub Dev-001". The "header" section also gives access to the global custom data section: with "header.customFields.myGlobaldata1", the value of "myGlobaldata1" is usable in any expression, given such a global value is provided in the LDIF. If not present (no customFields section or no defined key), this will always evaluate to an empty string.

"content"

Example: "${content.id}" will result in a string such as "688c16bf-198c-11e9-9d08-926310573fbf".

"data"

Example: "${data.chart}" will result in a string such as "chartmuseum-1.8.4" (given the processed data object contains a field "chart" with that value).

Each of these references allows access to all data elements in the same section or in subsections.

Users can use any type of operation that can be executed on String objects in Java. Documentation of all the Java String methods is not in scope of this documentation. Methods for Java 8 can be found here: https://docs.oracle.com/javase/8/docs/api/java/lang/String.html
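As a small sketch, such String methods can be chained directly inside a value expression (the target key is illustrative; "data.app" is taken from the examples in this document):

{
    "key": {
        "expr": "alias"
    },
    "values": [
        {
            "expr": "${data.app.trim().toLowerCase()}"
        }
    ]
}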

Working with Expressions using "integration"

Expression
Details
More Examples

"integration.now"

Contains the information about the date and time the synchronization run started.

"integration.now" contains a Java LocalDateTime object and allows to call methods with parameters of types String or long. E.g.

integration.now.plusHours(1) would return an object showing date and time UTC plus one hour. Content like the date of last sync can be made visible in any LeanIX field like this "Last sync: ${integration.now.getMonth}.${integration.now.getDayOfMonth()}.${integration.now.getYear()}". The values can be used for filtering and/or to write date and time to the output of a data processor.

"integration.contentIndex"

Contains the index number of the currently processed data object. This could be used to e.g. create a filter for a data processor to always run for the first data object of a synchronization run.

"integration.maxContentIndex"

Contains the contentIndex of the last data object in scope of the sync run. Matching this in an advanced filter for a data processor would ensure the processor only runs e.g. when processing the last data object.

"${integration.toJson(data.Properties)}"

Offers a helper method to convert any given section of the LDIF ("data.Properties" in the example) into a valid JSON string. The JSON string can be rendered in a Fact Sheet field to dump arbitrary data, without any option to search it.
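As a sketch, any of these "integration" expressions can be used as a value in a processor update; for example, writing the last sync date into a field (the target key "description" is illustrative):

{
    "key": {
        "expr": "description"
    },
    "values": [
        {
            "expr": "Last sync: ${integration.now.getMonth()}.${integration.now.getDayOfMonth()}.${integration.now.getYear()}"
        }
    ]
}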

Regular Expressions are supported as optional keys together with every "expr" key

In any situation where a key "expr" is used, the configuration may also contain a key "regexReplace" containing a map with the keys "match" and "replace". Both can be used to further process the output of the expression evaluation. Values can additionally contain a key "regexMatch" to test the expression result and not continue in case of no match.
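Put together, a single value entry might combine both keys like this (a sketch using the "architecture" example from this document; note that the backslashes in the pattern are escaped for strict JSON):

{
    "expr": "${data.architecture}",
    "regexMatch": ".+",
    "regexReplace": {
        "match": "(\\[|\\])",
        "replace": ""
    }
}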

Examples how to use JUEL in a more advanced fashion:

JUEL Advanced
Details

Working with keys that contain spaces. Sometimes the keys in LDIF may contain spaces. That means that "." syntax "data.key with space" does not work.

Instead the syntax "data.['key with space']" can be used.

Capitalize an incoming value

${data.name.toUpperCase().charAt(0)}${data.name.substring(1)}

How to use different data based on a condition to map into a field

${data.name1.length()>3 ? data.name1 : data.name2}

Display all list values of a key in LDIF as comma separated string (e.g. input in LDIF: "architecture": ["amd64","Intel"])

${data.architecture} and configure the regexReplace section like this: "regexReplace": { "match": "(\[|\])", "replace": "" }. The regex matches all '[' and ']' characters and replaces them with an empty string. The result will be "amd64, Intel".

Add a Hash value to make something unique

${data.name} (${data.app.hashCode()>0 ? data.app.hashCode() : data.app.hashCode()*-1})

Combine two fields into one (here the second is in brackets)

${data.name} (${data.app})

Replace some characters with something else

${data.name.replace('chart','xx')}

Remove characters

${data.name.replace('chart','')}

Use one entry of a string containing values separated by a certain value (in this example a comma)

${data.clusterName.split(',')[1].trim()} (given clusterName has a value of "abc, def, ghi", the resulting string will be "def")

Map a comma separated String found in LDIF to a multi value field in LeanIX

${data.clusterName.split(',')} (given clusterName has a value of "abc,def,ghi", the multi value field in LeanIX will be filled with these values. An additional regexReplace may be used to remove unwanted space characters if they exist in each field)

Fill defined values based on some prefix of incoming data

${data.clusterName.toLowerCase().startsWith('lean') ? 'High' : 'Low'}

Accessing hierarchical data in LDIF data section. Given a data section like this:
"data": {"level0": {"level1a":"abc","level1b":"def"}}

${data.level0.level1a} will result in a string "abc"

Exchanging Data between Data Objects and Aggregation

In some situations it may be required to use information from multiple Data Objects and store a joint result in another entity like a Fact Sheet or relation. Even creating specific relations when certain value combinations are found in different Data Objects is possible.

In order to perform such operations, a "variables" section is available to write and add to while iterating over Data Objects. Data Processors in the following runs (not in the same run!) can then read the values and perform defined operations on them.

This works in the following steps:

Working with Variables
Details
Example

Define the variable with a default value

This avoids errors if a variable was never written but an access to it is configured later (an example is available in the admin section of the UI)

Write additional values to the variable

This is available on all Data Processors by adding a "variables" section (same structure as in step 1) and assigning a value to the variable.

In a subsequent "Run", processors can access the variable and perform operations on it or even use the variable in the "forEach" section (see below) to execute steps for every entry for the variable

Variables can have dynamic names based on content. In combination with the "forEach" feature, this allows powerful use cases.

As an example, the user needs to collect cost data from various data objects. The cost data needs to be grouped by the subscription it belongs to. Each data object contains the cost in a field "cost" and the id of the subscription in a field "subscriptionId". The user simply collects all subscriptions in a variable "subscriptionList" and adds each found cost to another variable named "<subscriptionId>_cost". In the next run, a data processor iterates over all unique entries in "subscriptionList" ("forEach": "${variables.subscriptionList.distinct()}"). The aggregated cost variable can then be accessed by using the name taken from "integration.valueOfForEach" plus "_cost".

Please see the example below

Writing Variables using Expressions

"variables": [
    {
        "key":"prefix_§{dataMyNameFromDataObjectValue}"
        "value": "${data.myValueFromDataObject}"
    }
]

Dynamic Variable Handling

${variables[integration.valueOfForEach.concat('_cost')].sum()}
(which is the same as variables['12345_cost'].sum() in case valueOfForEach is "12345")
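A sketch of the aggregation use case described above, assuming the incoming data objects carry "subscriptionId" and "cost" fields. A processor in an early run collects the values into variables:

"variables": [
    {
        "key": "subscriptionList",
        "value": "${data.subscriptionId}"
    },
    {
        "key": "${data.subscriptionId}_cost",
        "value": "${data.cost}"
    }
]

A processor in a later run can then iterate over the distinct subscriptions and write the aggregated cost per subscription (the target key "budgetOpEx" is illustrative):

"forEach": "${variables.subscriptionList.distinct()}",
"updates": [
    {
        "key": {
            "expr": "budgetOpEx"
        },
        "values": [
            {
                "expr": "${variables[integration.valueOfForEach.concat('_cost')].sum()}"
            }
        ]
    }
]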

Supported Dynamic Variable Operations

Supported operations are listed below. Each invalid entry will be counted as "0" when calculating.

Method
Details

myVariable.sum()

Creates a number adding all values in the variable

myVariable.get()

Reads the variable as a single value (first value)

myVariable.join(String delimiter)

Creates a String concatenating all values using the passed string. E.g. myVariable=["1","2","3"] will be converted to "1, 2, 3" by variables.myVariable.join(', ')

myVariable.distinct()

Returns the same list of values but with duplicate entries removed. The result can be used to do further calculations like e.g. variables.myVariable.distinct().join(', ') to show all unique entries

myVariable.contains(String value)

Returns a boolean that e.g. can be used in advanced filters for Data Processors to execute a Data Processor only if certain values occur in a variable

myVariable.count()

Returns the number of entries in the variable

myVariable.average()

Calculates the mathematical average of all values. Non-numerical values will be ignored

myVariable.toList()

Converts the variable to a Java List in order to execute standard Java List methods

myVariable.max()

Selects the highest number value in the variable and returns it

myVariable.min()

Selects the lowest number value in the variable and returns it

myVariable.getNumbers()

Filters out all non-numeric values in the variable and returns a list of values on which the other methods explained here can be executed. This allows safely calculating average, min, max etc., avoiding errors caused by values that cannot be converted to a number. E.g. myVariable.getNumbers().average() uses only the numeric values that have been added to the variable

myVariable.selectFirst()

Picks the first available String match of the given parameters. If nothing was matched, the first parameter will be selected (the default). Please note that the list of options to match needs to be provided as a list, as JUEL does not support variable-length parameter lists.
A helper function allows creating a list from any string split result (array). Example: ${variables.myVariable.selectFirst(helper:toList('default','optionHighPrio','optionMediumPrio','optionLowPrio'))}

ForEach Logic

Each data processor provides additional capabilities to handle values that are lists. Using the standard functionality, every data processor will be executed exactly one time for each data object sent to the Integration API.

Sometimes, however, there is a need to update multiple Fact Sheets or multiple fields in a Fact Sheet for each value found in a list in the LDIF. Consider the following LDIF data snippet:

...
"data": {
        "attachedDocuments": [
            {
              "extension": "vsdx",
              "name": "thediagram.vsdx",
              "displayName": "Diagram",
              "url": "sotrage.azure.com/123/thediagram.vsdx",
              "content": null
            },
            {
              "extension": "docx",
              "name": "thedoc.docx",
              "displayName": "Documentation",
              "url": "sotrage.azure.com/123/thedoc.docx",
              "content": null
            },
            {
              "extension": "html",
              "name": "webpage.html",
              "displayName": "Web Page",
              "url": null,
              "content": "<body>the vm 789 ...</body>"
            }
        ],
        "version": "1.8.4",
        "myForEachField": "attachedDocuments",
        "maturity": "3",
        "note": "I did the first comment here",
        "Home Country": "D",
        "Other Country": "UK",
        "clusterName": "leanix-westeurope-int-aks"
      }
...
{
    "processorType": "inboundFactSheet",
    "processorName": "Deployment To Application",
    "processorDescription": "The processor creates or updates an Application from every data object of type 'Deployment'",
    "type": "Application",
    "name": "My Awesome App",
    "run": 0,
    "enabled": true,
    "identifier": {
        "external": {
            "id": {
                "expr": "${content.id}"
            },
            "type": {
                "expr": "externalId"
            }
        }
    },
    "filter": {
        "type": "Deployment"
    },
    "forEach": "${data.attachedDocuments}",
    "updates": [
        {
            "key": {
                "expr": "name"
            },
            "values": [
                {
                    "expr": "${data.attachedDocuments[integration.indexOfForEach].name}", // or in short: ${integration.valueOfForEach.name}
                    "regexReplace": {
                        "match": "",
                        "replace": ""
                    }
                },
                {
                    "expr": "${data.value}"
                }
            ]
        }
    ]
}

Using the "forEach" section in each data processor as in the example above. Will result in executing the data processor "Deployment To Application" four times for the given data object and each run will allow the user to use the index of the current iteration in all expressions (integration.indexOfForEach).

To fill some output field of the data processor with the specific url (see example above), the configuration would look like this: ${data.attachedDocuments[integration.indexOfForEach].name}. This will generate the three different names of the attached documents in each run of the data processor. This could be used to create separate factsheets and relations from the source data.

There is another way to access the value of the element referenced by the current index:

${integration.valueOfForEach}

Which is the same as:

${data.attachedDocuments[integration.indexOfForEach]}

The index variable can, however, be used to reference the same index in another list element. Important note: the admin can configure a "regexReplace" section inside the forEach section. This allows manipulating the JSON representation of the value object resulting from the expression. In case such a manipulation is configured, it only affects "integration.valueOfForEach"; it does not alter the original data, which may still be referenced manually using the indexOfForEach variable.

Of course, the logic could be used to always execute a data processor n times. Just add '[1,2,3]' as configuration and the data processor will execute three times with the index variable integration.indexOfForEach set to 0-2 for reference.

In case the field 'attachedDocuments' is not available or contains an empty list, the data processor will not execute (it operates on an empty list). In case the value is a single value and not a list, the data processor will execute once.

The Integration API allows iterating over list values and map values. In case of iterating over a map, indexOfForEach will always return -1 as maps are not sorted. For maps there is an additional variable "keyOfForEach" available, providing access to the name of the key. The value can be accessed with "valueOfForEach". Example LDIF and processor configuration:

{
    "connectorType": "ee",
    "connectorId": "Kub Dev-001",
    "connectorVersion": "1.2.0",
    "lxVersion": "1.0.0",
    "content": [
        {
            "type": "Deployment",
            "id": "634c16bf-198c-1129-9d08-92630b573fbf",
            "data": {
                "app": "HR Service",
                "version": "1.8.4",
                "myList": ["lValue1","lValue2"],
                "myMap": {
                    "key1":"value1",
                    "key2":"value2"
                }
            }
        }
    ]
}
{
    "processors": [
        {
            "processorType": "inboundFactSheet",
            "processorName": "Apps from Deployments",
            "processorDescription": "Creates LeanIX Applications from Kubernetes Deployments",
            "type": "Application",
            "filter": {
                "type": "Deployment"
            },
            "identifier": {
                "external": {
                    "id": {
                        "expr": "${content.id}"
                    },
                    "type": {
                        "expr": "externalId"
                    }
                }
            },
            "run": 0,
            "updates": [
                {
                    "key": {
                        "expr": "name"
                    },
                    "values": [
                        {
                            "expr": "${data.app}"
                        }
                    ]
                },
                {
                    "key": {
                        "expr": "description"
                    },
                    "values": [
                        {
                            "expr": "${integration.keyOfForEach}: ${integration.valueOfForEach}"
                        }
                    ]
                }
            ],
            "forEach": "${data.myMap}",
            "logLevel": "debug"
        }
    ]
}

Example Mapping Use Cases

Scenario: Mixed input from single and multi value fields written to a multi value field
Input from LDIF: "Home Country": "D" and "Other Countries": ["UK","DK"]
Configured JUEL: "${data.['Home Country']}" and "${data.['Other Countries']}"
Target field: multi value
Result: D, UK, DK

Scenario: Multi value input in LDIF to multi value in LeanIX, mapping defined input values to alternative values in LeanIX and filtering out any undefined values
Input from LDIF: "Area": [" EU ","US "," APAC "," MARS "]
Configured JUEL: "${data.Area.trim()}" (configured three times, once per mapped value)
Regex Match: ^EU$ / ^US$ / ^APAC$
Regex Replace: EU / Europe, US / United States, APAC / Asia Pacific
Target field: multi value
Result: EU / Europe, US / United States, APAC / Asia Pacific (" MARS " is filtered out)

Scenario: Multi value input data in LDIF to multi value field in LeanIX
Input from LDIF: "flag": ["Important","Urgent"]
Configured JUEL: "${data.flag}"
Target field: multi value
Result: Important, Urgent

Scenario: Multiple single value fields in LDIF to one multi value field in LeanIX
Input from LDIF: "importance": "High" and "urgency": "High"
Configured JUEL: "${data.importance} Importance" and "${data.urgency} Urgency"
Target field: multi value
Result: High Importance, High Urgency

Scenario: Multi value input data into a single value field in LeanIX (the first matching value is selected)
Input from LDIF: "importance": "High" and "urgency": "High"
Configured JUEL: "${data.importance} Importance" and "${data.urgency} Urgency"
Target field: single value
Result: High Importance

Scenario: Multi value input data into a single value field in LeanIX (the first matching value is selected; the match happens on the second configured input, because "importance" would only match if the value started with "Top")
Input from LDIF: "importance": "High" and "urgency": "High"
Configured JUEL: "${data.importance} Importance" (with Regex Match "^Top .*") and "${data.urgency} Urgency"
Target field: single value
Result: High Urgency

Scenario: Single value input data in LDIF to single value field in LeanIX
Input from LDIF: "importance": "high"
Configured JUEL: "${data.importance}"
Target field: single value
Result: high

Scenario: Single value input data into a multi value field in LeanIX
Input from LDIF: "importance": "high"
Configured JUEL: "${data.importance}"
Target field: multi value
Result: high

Scenario: Single field to single field, but only written if the input data contains defined value(s)
Input from LDIF: "importance": "high"
Configured JUEL: "${data.importance}"
Regex Match: ^very high
Target field: multi value
Result: nothing written

Order of RegEx execution

The replace regex modifies the output after the match regex has been applied.

Setting Up the Connector

Inbound Connector

The screenshot below shows the Data Processors on the left side and the input (LDIF) on the right. The processor can be freely edited to match the incoming LDIF. The Test Run button at the top right will not insert any data into LeanIX; it shows the response that comes back in the output log after running the processors against the LDIF. Run will show the response and proceed to insert the data into LeanIX if the processors have been configured correctly.

The Integration API with an LDIF and an input processor

Outbound Connector

The UI is a bit different when setting up the outbound connector. Instead of inputting an LDIF, you can specify the scope of what you would like to retrieve from the workspace. The scope defines the set of fact sheets that will be looked at when iterating over the configured outbound processors and creating content in the resulting LDIF.

The processor shown below is an example of an outbound processor, which is described in the processors section further down in this document.

An outbound connector with one outbound processor called createCloudComponents

When pressing the Set Scope button, you are presented with search options including filter facets and smart search, similar to the Inventory view in a workspace. When you're done, hit the Use Fact Sheet filter button on the bottom right to apply these changes.

Saving your configuration

Configuration and data are strictly separated in the Integration API. This makes synchronization runs reliable, repeatable and auditable. Whenever you save the configuration of the Data Processors, only the processors are stored; the system never stores the LDIF data.

Outbound Processor Example

{
  "processorType": "outboundFactSheet",
  "processorName": "<Unnamed processor>",
  "processorDescription": "",
  "filter": null,
  "enabled": true,
  "fields": [
    "nonExistingField",
    "lifecycle",
    "location",
    "createdAt",
    "technicalSuitabilityDescription"
  ],
  "output": [
    {
      "key": {
        "expr": "content.id"
      },
      "mode": "selectFirst",
      "values": [
        {
          "expr": "${lx.factsheet.id}"
        }
      ]
    },
    {
      "key": {
        "expr": "content.type"
      },
      "mode": "selectFirst",
      "values": [
        {
          "expr": "${lx.factsheet.type}"
        }
      ]
    },
    {
      "key": {
        "expr": "lifecycle.times"
      },
      "mode": "list",
      "values": [
        {
          "expr": "${lx.factsheet.lifecycle.endOfLife}"
        },
        {
          "expr": "${lx.factsheet.lifecycle.active}"
        },
        {
          "expr": "${lx.factsheet.lifecycle.phaseOut}"
        },
        {
          "expr": "${lx.factsheet.lifecycle.phaseIn}"
        }
      ]
    },
    {
      "key": {
        "expr": "location"
      },
      "mode": "selectFirst",
      "values": [
        {
          "expr": "${lx.factsheet.location.rawAddress}, with place id: ${lx.factsheet.location.placeId}"
        }
      ]
    },
    {
      "key": {
        "expr": "creationTime"
      },
      "mode": "selectFirst",
      "values": [
        {
          "expr": "${lx.factsheet.createdAt}"
        }
      ]
    },
    {
      "key": {
        "expr": "technicalSuitabilityDescription"
      },
      "mode": "selectFirst",
      "values": [
        {
          "expr": "${lx.factsheet.technicalSuitabilityDescription}"
        }
      ]
    }
  ]
}

Sending an LDIF to the endpoint

Endpoint: <base_url>/services/integration-api/v1, e.g. https://app.leanix.net/services/integration-api/v1
For more information on interacting with the Integration API endpoints, please see:
Swagger API Reference
Swagger JSON
