Field and Value Processing
Data Processors map the incoming data found in the LDIF to fields of a specific entity in SAP LeanIX (e.g. a Fact Sheet) or, in the case of outbound Data Processors, the other way round. The configuration allows mapping incoming values to different types of SAP LeanIX fields (single value, float, multi value, lifecycle, ...).
Applying field and value mappings may result in errors in case source fields do not exist. Reasons may be low input data quality or optional data in the source system. This type of "error" is expected and is only noted in test mode. Processing always continues with the next configuration line of the data processor and with the next data processor.
Type conversions to the proper output type (variable for SAP LeanIX, String for LDIF) happen transparently. JUEL provides implicit type conversion and even allows calculations on Strings that contain numbers.
JUEL and RegEx
To provide high flexibility and predictability while keeping the configuration easy to understand, all relevant configuration options of the Data Processors support a combination of JUEL (http://juel.sourceforge.net/) and RegEx, executed one after the other.
JUEL allows accessing and combining all input fields and values of the incoming data and data of the target entity (e.g. a Fact Sheet).
RegEx allows final string mapping on the JUEL result.
All conversion of data types happens transparently.
While at least a simple JUEL expression is always required to define the value to be used as output, the RegEx replace may be left empty if no value conversion is supposed to happen. Both methods provide partly overlapping functionality. This is intentional and allows the user to focus on a potential solution based on technical knowledge.
Field and value mappings in the Data Processors are configured as a list of single field configurations. Each configuration allows a JUEL/RegEx for the key and a list of JUEL/RegEx match/RegEx replace entries for the value(s). This allows multi value field support; a minimal configuration sketch follows the specification table below.
Each JUEL Expression returns data.
This logic allows for configurations that fit many types of scenarios.
Processing of each field configuration works following this specification:
Value Type | Details |
---|---|
In case of a "List" (multi select field in pathfinder) | Each item in the List is tested against the regEx Match. If it matches, the regEx replace is executed and the result added to the list of target values for the configured field |
In case of a single value, | It is tested against the RegEx Match. If it matches, the regEx replace is executed and the result added to the list of target values for the configured field All Strings in the list of target values are written to the configured target field |
In case of a Multi value target field | All non-empty Strings will be written to Pathfinder |
In case of a Single Value Field | the first non-empty String will be written to Pathfinder |
In case no regEx match is configured | The match is considered to be true |
In case no regEx replace is configured | The original String will be part of the output list The Logic allows to configure all kinds of scenarios. |
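A minimal sketch of a single field configuration (the Fact Sheet field "description" and the source key "comment" are made up for illustration): the JUEL expression selects the value, the optional regexMatch filters it, and the optional regexReplace transforms it, here by stripping leading whitespace:
{
"key": {
"expr": "description"
},
"values": [
{
"expr": "${data.comment}",
"regexMatch": ".+",
"regexReplace": {
"match": "^\\s+",
"replace": ""
}
}
]
}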
RegEx and JUEL
All RegEx filters allow negation and case insensitivity. The Java RegEx syntax can be applied: to match everything but "notMe", "^((?!notMe).)*$" would be used. To match in a case insensitive manner, add "(?i)" to the beginning of the regular expression.
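For illustration (the source key "status" is made up), such a negated, case insensitive match could be placed in the regexMatch of a value entry like this:
{
"expr": "${data.status}",
"regexMatch": "(?i)^((?!notMe).)*$"
}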
Each inbound Data Processor JUEL expression can contain the following references to the data object that is in scope for processing:
Reference | Example | Details |
---|---|---|
header | "${header.connectorId}" | Results in the evaluated string "Kub Dev-001". The header section also provides access to the global custom data section via "header.customFields" (see below). |
content | "${content.id}" | Results in the string "688c16bf-198c-11e9-9d08-926310573fbf" |
data | "${data.chart}" | Results in the string "chartmuseum-1.8.4" (given the first data object in the above LDIF is being processed) |
Each of them allows access to all data elements in the same section or in subsections. It allows, for example, accessing the id of the connector creating the LDIF: "${header.connectorId}" would result in an evaluated string "Kub Dev-001".
Using the "header" section, there is as well access to the global custom data section. Using "header.customFields.myGlodaldata1" the value of "myGlobaldata1" would be useable in any expression, given such a global value is provided in the JUEL. If not present (no customFields section or no defined key), this will always evaluate to an empty string.
Users can use any type of operation that can be executed on String objects in Java. Documenting all Java String methods is not in the scope of this documentation; the methods for Java 8 can be found here: https://docs.oracle.com/javase/8/docs/api/java/lang/String.html
Advanced JUEL
JUEL Advanced functions | Details |
---|---|
Working with keys that contain spaces. Sometimes the keys in LDIF may contain spaces. That means that "." syntax "data.key with space" does not work. | Instead the syntax "data['key with space']" can be used. |
Capitalize an incoming value | ${data.name.toUpperCase().charAt(0)}${data.name.substring(1)} |
How to use different data based on a condition to map into a field | ${data.name1.length()>3 ? data.name1 : data.name2} |
Display all list values of a key in LDIF as a comma separated string (e.g. input in LDIF: "architecture": ["amd64","Intel"]) | ${data.architecture} and configure the regexReplace section like this: "regexReplace": { "match": "(\[|\])", "replace": "" } (the regex matches all '[' and ']' characters and replaces them with an empty string; the result will be "amd64, Intel") |
Add a Hash value to make something unique | ${data.name} (${data.app.hashCode()>0 ? data.app.hashCode() : data.app.hashCode()*-1}) |
Combine two fields into one (here the second is in brackets) | ${data.name} (${data.app})
Replace some characters with something else | ${data.name.replace('chart','xx')} |
Remove characters | ${data.name.replace('chart','')} |
Use one entry of a string containing values separated by a certain character (in this example a comma) | ${data.clusterName.split(',')[1].trim()} (given clusterName has a value of "abc, def, ghi", the resulting string will be "def") |
Map a comma separated String found in LDIF to a multi value field in SAP LeanIX | ${data.clusterName.split(',')} (given clusterName has a value of "abc,def,ghi", the multi value field in SAP LeanIX will be filled with these values. An additional regEx replace may be used to remove unwanted space characters if existing in each field) |
Fill defined values based on some prefix of incoming data | ${data.clusterName.toLowerCase().startsWith('lean') ? 'High' : 'Low'} |
Accessing hierarchical data in LDIF data section. Given a data section like this: "data": {"level0": {"level1a":"abc","level1b":"def"}} | ${data.level0.level1a} will result in a string "abc" |
How to efficiently check if a source value is not null and not an empty string. | This could be done by "${data.myKey != null && data.myKey != ''}". But it can be combined into a short expression: ${not empty data.myKey}
How to do a filter that finds a certain word in a multi line text field like the description | "onRead": "${lx.factsheet.alias.matches('(?s).*\\bwordToSearch\\b.*')}"
JUEL Use Cases
Scenario | Input From LDIF | Configured JUEL | Regex Match | Regex Replace | Target Field | Result |
---|---|---|---|---|---|---|
Mixed input from single and multi value field written to multi value field | "Home Country": "D" "Other Countries": ["UK","DK"] | "${data['Home Country']}" "${data['Other Countries']}" | | | multi value | D UK DK |
Multi value input in LDIF to multi value in SAP LeanIX with mapping of defined input values to alternative multi values in SAP LeanIX, filtering out any undefined values | "Area": [" EU ","US "," APAC "," MARS "] | "${data.Area.trim()}" "${data.Area.trim()}" "${data.Area.trim()}" | ^EU$ ^US$ ^APAC$ | EU / Europe US / United States APAC / Asia Pacific | multi value | EU / Europe US / United States APAC / Asia Pacific |
Multi value input data in LDIF to multi value field in SAP LeanIX | "flag": ["Important","Urgent"] | "${data.flag}" | | | multi value | Important Urgent |
Multiple single value fields in LDIF to one multi value field in SAP LeanIX | "importance": "High" "urgency": "High" | "${data.importance} Importance" "${data.urgency} Urgency" | | | multi value | High Importance High Urgency |
Multi value input data into single value field in SAP LeanIX (first matching value will be selected) | "importance": "High" "urgency": "High" | "${data.importance} Importance" "${data.urgency} Urgency" | | | single value | High Importance |
Multi value input data into single value field in SAP LeanIX (first matching value will be selected; here the match happens on the second configured input. Importance would only match if the value started with "Top") | "importance": "High" "urgency": "High" | "${data.importance} Importance" "${data.urgency} Urgency" | ^Top .* | | single value | High Urgency |
Single value input data in LDIF to single value field in SAP LeanIX | "importance": "high" | "${data.importance}" | | | single value | high |
Single value input data into multi value field in SAP LeanIX | "importance": "high" | "${data.importance}" | | | multi value | high |
Single field to single field but only write if the input data contains defined value(s) | "importance": "high" | "${data.importance}" | ^very high | | multi value | nothing written |
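As a sketch of how the second scenario above (mapping of defined "Area" values) could translate into a field configuration: the multi value field name "countries" is an assumption, and the value entries simply mirror the table row:
{
"key": {
"expr": "countries"
},
"values": [
{
"expr": "${data.Area.trim()}",
"regexMatch": "^EU$",
"regexReplace": {
"match": "^EU$",
"replace": "EU / Europe"
}
},
{
"expr": "${data.Area.trim()}",
"regexMatch": "^US$",
"regexReplace": {
"match": "^US$",
"replace": "US / United States"
}
},
{
"expr": "${data.Area.trim()}",
"regexMatch": "^APAC$",
"regexReplace": {
"match": "^APAC$",
"replace": "APAC / Asia Pacific"
}
}
]
}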
Best Name for Fact Sheet
There are situations where it is not easy to find the best potential name for a Fact Sheet based on incoming data. The best name may not be available because it is not unique. Another use case might be that the source provides different candidates for a name, and we want to automatically select from the best possible option down to lower ranked options, based on the information available for each data object.
On the other hand, we want to ensure that we do not suddenly change the names of already created Fact Sheets just because a better name option became available during an update.
All of the above use cases can be covered simply by providing a list of potential name candidates. Every candidate that evaluates to null (in the 'values' section) or is already taken by another Fact Sheet will be skipped. To keep a name once it is set and not change it after creation, admins configure the processor to read the current Fact Sheet content and use the existing name as the first option. This option is automatically skipped if the Fact Sheet does not yet exist.
Example
Please see the example of a processor and a sample LDIF below. You may play around with matching against existing Fact Sheet names, remove or rename some of the keys in the source data, and do test runs:
{
"processors": [
{
"processorType": "inboundFactSheet",
"processorName": "Apps from Deployments",
"processorDescription": "Creates LeanIX Applications from Kubernetes Deployments",
"type": "Application",
"filter": {
"exactType": "Deployment"
},
"identifier": {
"external": {
"id": {
"expr": "${content.id}"
},
"type": {
"expr": "externalId"
}
}
},
"run": 0,
"updates": [
{
"key": {
"expr": "name"
},
"values": [
{
"expr": "${lx.factsheet.name}"
},
{
"expr": "${data.app}"
},
{
"expr": "${data.app2}"
},
{
"expr": "${data.app3}"
}
]
}
],
"read": {
"fields": [
"name"
]
},
"logLevel": "debug"
}
]
}
{
"connectorType": "ee",
"connectorId": "Kub Dev-001",
"connectorVersion": "1.2.0",
"lxVersion": "1.0.0",
"description": "Imports kubernetes data into LeanIX",
"processingDirection": "inbound",
"processingMode": "partial",
"customFields": {},
"content": [
{
"type": "Deployment",
"id": "634c16bf-198c-1129-9d08-92630b573fbf",
"data": {
"app3": "veryLongAndUnhandyNameIDoNotWantToSeeIfPossible",
"app2": "littleBitBetterNameButStillNotGood",
"app": "Best Name"
}
}
]
}
Using "Integration" Object
Expression | Details | More Examples |
---|---|---|
"${integration.now}" | Contains the information about the date and time the synchronization run started. "integration.now" contains a Java LocalDateTime object and allows to call methods with parameters of types String or long. E.g. | integration.now.plusHours(1) would return an object showing date and time UTC plus one hour. Content like the date of last sync can be made visible in any SAP LeanIX field like this "Last sync: ${integration.now.getMonth}.${integration.now.getDayOfMonth()}.${integration.now.getYear()}". The values can be used for filtering and/or to write date and time to the output of a data processor. |
"${integration.contentIndex}" | Contains the index number of the currently processed data object. This could be used to e.g. create a filter for a data processor to always run for the first data object of a synchronization run. | |
"${integration.maxContentIndex}" | Contains the contentIndex of the last data object in scope of the sync run. Matching this in an advanced filter for a data processor would ensure the processor only runs e.g. when processing the last data object. | |
"${integration.toJson(data.Properties)}" | Offers a helper method to convert any given section from the LDIF (data.Properties in the example) into a valid JSON string. The JSON can be used to be rendered in a Fact Sheet without any option to search but dump arbitrary data. | |
"${integration.toObject(data.Properties)}" | The opposite of "toJson". The method converts any Json String back to the corresponding object representation. This might be lists or maps e.g.. | Given a String "{"key1":"value1"}" (a serialized JSON) in a data property 'json'. The method "${integration.toObject(data.json).key1}" will provide "value1" as the result string after evaluating the JUEL expression |
"${helper:toActiveLifecyclePhase(lx.factsheet.lifecycle, integration.now)}" | Offers a helper method to read the name of the lifecycle phase active at a given point in time. A potential parameter for the current date is "integration.now" | "${helper:toActiveLifecyclePhase(lx.factsheet.lifecycle, '2020-02-01')}" "${helper:toActiveLifecyclePhase(lx.factsheet.lifecycle, integration.now)}" Note: In case custom lifecycle phases are defined, please use the "helper:toActiveLifecyclePhaseOrdered" function to ensure ordering when lifecycle phases occur on the same date. |
"${helper:toActiveLifecyclePhaseOrdered(lx.factsheet.lifecycle, integration.now, helper:toList("planned","phaseIn","active","phaseOut","endOfLife"))}" | Same as "helper:toActiveLifecyclePhase" with an additional parameter to define the order of the phases. | "${helper:toActiveLifecyclePhaseOrdered(lx.factsheet.lifecycle, integration.now, helper:toList("phase1","phase2","phase3"))}" |
"${helper:toList('default','optionHighPrio','optionMediumPrio','optionLowPrio')}" | Converts a set of strings into a list to be used as parameters in a java String method | This works as well if an array is passed to the helper: helper:toList(myString.split(',')) |
integration.processing.* | The methods sum() distinct() average() max() min() getNumbers() allow to operate on every list in the JUEL scope in order to aggregate data and work with all lists in a way that is already supported for variables | Even chaining like data.myvalueWithAList.getNumbers().distinct().size() works to e.g. find out how many different number values are in a given input list. |
integration.processing.mergeList(firstList, secondList) | The method merges two source lists and can be used to iterate over all values in multiple different input lists using one forEach loop | Even merging multiple lists is possible by nesting the calls |
integration.tags.getTagGroupId(tagGroupName) | Allows resolving internally used tag group ids from the tag group name | The conversion is used in search based scope filters to allow filters based on tag group names and not their internal IDs. Internal IDs are not easily exposed and will change from workspace to workspace |
integration.tags.getTagId(tagGroupName,tagName) | Allows resolving internally used tag ids from the tag group and tag name | The conversion is used in search based scope filters to allow filters based on tag names and not their internal IDs. Internal IDs are not easily exposed and will change from workspace to workspace |
integration.tags.getAllTagGroups() | Allows to work with an object containing all tag groups and all tags defined in the workspace | See example configuration "Export all tag groups" below this table |
helper:localDateTimeFromString('2016-03-04 11:30', 'yy-MM-dd HH:mm', 'yyyy-MM-dd HH:mm') | Returns a LocalDateTime object representing the time 2016-03-04 11:30 | Can be used to convert any input string into a LocalDateTime object. The object and the available Java methods can then be used in the JUEL expression. See the Java documentation: https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/time/LocalDateTime.html and https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/SimpleDateFormat.html |
helper:localDateTimeToString(integration.now, 'yy-MM-dd HH:mm') | Returns a string like "20-07-02 11:30". The helper allows converting any LocalDateTime object back to a string in the required format | See the Java documentation for the string pattern description: https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/text/SimpleDateFormat.html |
helper:getDuration(localDateTime,localDateTime) | Returns a Duration object to allow flexible work with the result. See java documentation to Duration class methods. Parameters can be the return of the localDateTimeFromString or integration.now as well as localDateTimeObjects returned by other methods called. | The helper will be used to calculate time differences between two points of time. Writing the age in days to a field is one potential use case. See the Java documentation for methods on the returned "duration" object: https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/time/Duration.html |
math:nameOfTheMethod | Calculates abs, round and other functions, always with the highest available precision (double). In case a method returns a double, users can add ".intValue()" to convert to an integer for display purposes or to push to an integer field | For details on supported methods see the Oracle Java documentation. |
lx.toOrdinal('fieldName') | returns the position of the currently set value for a SINGLE_SELECT field. This allows to do calculations on SINGLE_SELECT fields given the order of the select options reflect a kind of order like "low", "medium", "high" would be returned as 0, 1, 2 | See the configuration example "Calculating with single select fields" below this table. Please be aware that the fields to be used need to be defined as fields in the "read"-section of the processor as shown in the example. |
helper:getFromMap(myMap,myKeyInTheMapAsString) | Returns the object referenced by the specified key in the provided map. The helper can be used to access keys with spaces, e.g. a map containing a key named "my Important Value", which cannot be referenced by dot syntax. | The helper allows avoiding [] syntax to access elements of a map where dot syntax is not possible, e.g. in cases where the map is "variables" and the name of the variable needs to be determined dynamically. |
Example
To test the below example, please change the id in "ids" to an existing internal id of a Fact Sheet in your workspace. You may just open a Fact Sheet and copy the id from the browser URL.
In real world scenarios, you may not want to export the whole object, but iterate over tag groups or export a subset of the information.
{
"scope": {
"ids": [
"869ee28b-c60a-4e88-8d18-f9e4ff466456"
],
"facetFilters": []
},
"processors": [
{
"processorType": "outboundFactSheet",
"processorName": "Export tag groups and tags",
"processorDescription": "Sample how to export all available tag groups and all tags as part of one fact sheet export",
"fields": [
"name"
],
"output": [
{
"key": {
"expr": "content.id"
},
"values": [
{
"expr": "${lx.factsheet.id}"
}
]
},
{
"key": {
"expr": "content.type"
},
"values": [
{
"expr": "${lx.factsheet.type}"
}
]
},
{
"key": {
"expr": "name"
},
"values": [
{
"expr": "${lx.factsheet.name}"
}
]
},
{
"key": {
"expr": "tagGroupsAndTags"
},
"values": [
{
"object": "${integration.tags.getAllTagGroups()}"
}
]
}
]
}
]
}
A similar configuration helps to gather all tag groups in which a specific Fact Sheet has tags set. The lx.tagGroups list will be filled with all tag groups where the Fact Sheet has at least one tag set, and inside each tag group element there will be a list of the tags found for the Fact Sheet. The below example filters and returns only a subset (the default tag group). Just adding "${true}" in the filter will ensure that all tag groups and included tags the Fact Sheet has set are returned:
{
"scope": {
"ids": [
"bb8b0b74-f737-4f1b-a937-a06bddf3fe47"
],
"facetFilters": [
{
"keys": [
"Application"
],
"facetKey": "FactSheetTypes",
"operator": "OR"
}
]
},
"processors": [
{
"processorType": "outboundFactSheet",
"processorName": "Export to LDIF",
"processorDescription": "This is an example how to use the processor",
"enabled": true,
"fields": [
"lifecycle",
"location",
"createdAt",
"technicalSuitabilityDescription",
"description"
],
"relations": {
"filter": [
"relToParent",
"relApplicationToITComponent"
],
"fields": [
"description"
],
"targetFields": [
"displayName",
"externalId"
],
"constrainingRelations": false
},
"tags": {
"groups": [
"SomeTagGroupName"
],
"multipleGroups": "${dm.tagGroup.name =='Other tags'}"
},
"subscriptions": {
"types": [
"RESPONSIBLE"
]
},
"documents": {
"filter": ".*"
},
"output": [
{
"key": {
"expr": "content.id"
},
"values": [
{
"expr": "${lx.factsheet.id}"
}
]
},
{
"key": {
"expr": "content.type"
},
"values": [
{
"expr": "${lx.factsheet.type}"
}
]
},
{
"key": {
"expr": "Description"
},
"values": [
{
"expr": "${integration.toJson(lx.tagGroups).toString()}"
}
],
"optional": true
}
]
}
]
}
Calculating with single-select fields
This processor enables calculations with single-select fields
Example
{
"connectorId": "id-92476445-10b3-40f7-9386-6f13c61e4b89",
"connectorType": "ee",
"connectorVersion": "1.2.0",
"processingDirection": "inbound",
"processingMode": "partial",
"processors": [
{
"enabled": true,
"filter": {
"type": "DataObject"
},
"identifier": {
"internal": "${content.id}"
},
"logLevel": "debug",
"processorDescription": "Creates LeanIX Applications from Kubernetes Deployments",
"processorName": "Apps from Deployments",
"processorType": "inboundFactSheet",
"read": {
"fields": [
"businessValue",
"projectRisk",
"dataClassification"
],
"noNullForOrdinal": true
},
"run": 0,
"type": "DataObject",
"updates": [
{
"key": {
"expr": "description"
},
"values": [
{
"expr": "No null - ${lx.toOrdinal('dataClassification')}"
}
]
}
],
"variables": [
{
"key": "deploymentMaturity",
"value": "${data.maturity}"
}
]
}
]
}
Updating multi_select fields from an array
This processor supports updating multi_select fields in SAP LeanIX.
Example
Prerequisite: Have the multi-select field myMultiSelect available in the workspace with the options FOO and BAR.
Update the myMultiSelect field with values from myField in the input.
{
"processors": [
{
"processorType": "inboundFactSheet",
"processorName": "Apps from Deployments",
"processorDescription": "Creates LeanIX Applications from Kubernetes Deployments",
"type": "Application",
"filter": {
"exactType": "Deployment"
},
"identifier": {
"external": {
"id": {
"expr": "${content.id}"
},
"type": {
"expr": "externalId"
}
}
},
"updates": [
{
"key": {
"expr": "name"
},
"values": [
{
"expr": "${data.app}"
}
]
},
{
"key": {
"expr": "myMultiSelect"
},
"values": [
{
"expr": "${integration.output.valueOfForEach}",
"forEach": {
"elementOf": "${data['myField']}"
}
}
]
}
]
}
]
}
{
"connectorType": "Kubernetes",
"connectorId": "Kub Dev-001",
"connectorVersion": "1.2.0",
"lxVersion": "1.0.0",
"description": "Imports Kubernetes data into LeanIX",
"processingDirection": "inbound",
"processingMode": "partial",
"customFields": {},
"content": [
{
"type": "Deployment",
"id": "634c16bf-198c-1129-9d08-92630b573fbf",
"data": {
"app": "HR Service",
"tags": [],
"myField": [
"FOO"
],
"version": "1.8.4",
"maturity": "3",
"clusterName": "westeurope"
}
},
{
"type": "Deployment",
"id": "784616bf-198c-11f9-9da8-9263b0573fbe",
"data": {
"app": "Finance Service",
"tags": [
"Important"
],
"myField": [
"FOO",
"BAR"
],
"version": "10.5",
"maturity": "5",
"clusterName": "westeurope"
}
}
]
}
Remove content from fields
The Integration API can be used to remove content. In case the values array contains "null" values after evaluating all configured elements in the values array, the Integration API will try to reset the configured field to an initial "not filled" state. This is specifically helpful for single or multi select fields. String fields can simply be cleaned by passing an empty String; numbers may rather be set to 0.
To avoid a warning that no value could be found, ensure the "optional" field is used.
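A minimal sketch (the field "projectRisk" and the source key "risk" appear in the write-back example further down; the condition is made up): the value evaluates to null when the source provides no risk value, which resets the single select field, and "optional" suppresses the warning:
{
"key": {
"expr": "projectRisk"
},
"values": [
{
"expr": "${empty data.risk ? null : data.risk}"
}
],
"optional": true
}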
variableProcessor
The variableProcessor is used to write values to internal variables only. It is used for aggregation use cases where the LDIF content only needs to be collected, without writing anything directly to SAP LeanIX.
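A minimal sketch of such a processor, assuming the variableProcessor accepts the same "filter", "run" and "variables" sections as the other processors shown on this page (all names are made up):
{
"processorType": "variableProcessor",
"processorName": "Collect IT Component costs",
"processorDescription": "Only collects cost values into a variable, nothing is written to SAP LeanIX",
"enabled": true,
"run": 0,
"filter": {
"exactType": "ITComponent"
},
"variables": [
{
"key": "aggregatedCosts",
"value": "${data.cost}"
}
]
}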
Write-back to Fact Sheet
The inboundFactSheet, inboundRelation and writeToLdif processors allow reading information from the Fact Sheet (currently supported: fields, relations, subscriptions, tags, documents and metrics) and using that information when writing back to the Fact Sheet. In case you need to work with the read information in other processors, please write the results to a variable first. The example below shows two use cases: a cost field is increased by the incoming value, and the risk section is only updated if the Fact Sheet is not marked as manual input (via the "MANUAL_INPUT" tag).
The example also contains information on how to use this feature.
In case you define the read section for the inboundRelation processor, the fields will be read for the Fact Sheet defined in the "from" section. You can still read the fields from the target Fact Sheet using "relations/targetFields" as shown below.
Example
For the example to work, the workspace needs to contain a Project Fact Sheet with external ID "12345". Or change the LDIF data to an external ID of a Project Fact Sheet existing in the workspace:
{
"processors": [
{
"processorType": "inboundFactSheet",
"processorName": "Apps from Deployments",
"processorDescription": "Creates LeanIX Applications from Kubernetes Deployments",
"type": "Project",
"filter": {
"exactType": "prj"
},
"identifier": {
"external": {
"id": {
"expr": "${content.id}"
},
"type": {
"expr": "externalId"
}
}
},
"run": 0,
"updates": [
{
"key": {
"expr": "budgetOpEx"
},
"values": [
{
"expr": "${lx.factsheet.budgetOpEx+data.monthlyOpEx}"
}
]
},
{
"key": {
"expr": "projectRisk"
},
"values": [
{
"expr": "${(lx.tags.toString().contains('\"name\":\"MANUAL_INPUT'))?null:data.risk}",
"regexMatch": ".+"
}
],
"optional": true
},
{
"key": {
"expr": "projectRiskDescription"
},
"values": [
{
"expr": "${(lx.tags.toString().contains('\"name\":\"MANUAL_INPUT'))?null:data.riskDescription}",
"regexMatch": ".+"
}
],
"optional": true
},
{
"key": {
"expr": "metrics"
},
"values": [
{
"expr": "${integration.toJson(lx.metrics.variableName.values)}"
}
]
}
],
"logLevel": "debug",
"read": {
"fields": [
"budgetOpEx"
],
"tags": {
"groups": [
"Other tags"
]
},
"relations": {
"filter": [
"relToParent",
"relApplicationToITComponent"
],
"fields": [
"description"
],
"targetFields": [
"displayName",
"externalId",
"location"
]
},
"subscriptions": {
"types": [
"RESPONSIBLE"
]
},
"metrics": [
{
"name": "variableName",
"measurement": "money",
"fieldName": "dollars_per_day",
"aggregationFunction": "MEAN",
"groupBy": "1h",
"start": "2020-01-20T00:00:00Z",
"duration": "P0DT24H30M",
"rules": {
"key": "factSheetId",
"comparator": "=",
"compareWith": "${lx.factsheet.id}"
}
}
],
"impacts": {
"readAll": true
}
}
}
]
}
{
"connectorType": "showcaseUpdate",
"connectorId": "showcaseUpdate",
"connectorVersion": "1.0.0",
"lxVersion": "1.0.0",
"content": [
{
"type": "prj",
"id": "12345",
"data": {
"monthlyOpEx": 50000,
"risk": "lowProjectRisk",
"riskDescription": "The risk is considered to be low."
}
}
]
}
Example to access fields on relations and on the target Fact Sheet of a relation:
{
"lx.relationsElement": {
"id": "9316291b-361a-4050-ac79-bf9f96811fb1",
"type": "relApplicationToITComponent",
"target": {
"id": "161abc0d-7bed-4440-b756-5c14a741e1ad",
"name": "Application Development",
"type": "ITComponent"
},
"activeFrom": "2021-01-18",
"description": ""
}
}
Access to fields on relations and relation target fields
By defining the fields on the relations and on the target Fact Sheets of a relation, admins can use the values in JUEL expressions in the output section. The found relations need to be iterated using "forEach". Each element then contains the standard information, e.g. the name and type of a relation, plus the requested fields. They can be accessed following the structure shown in the example above.
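A hedged sketch of such an access in an inbound processor, assuming an outer "forEach" over "${lx.relations}" and the element structure shown above (writing to the "description" field is only an illustration):
"forEach": "${lx.relations}",
"updates": [
{
"key": {
"expr": "description"
},
"values": [
{
"expr": "${integration.valueOfForEach.type}: ${integration.valueOfForEach.target.name}"
}
]
}
]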
Dynamic definition of Factsheet fields to read
Sometimes it is helpful to decide at run time which fields from a Fact Sheet to read and not hard code the names of the fields in the configuration.
For this purpose, the Integration API allows defining a key "multipleFields" with a value that is a JUEL expression resolving to boolean true or false. The Integration API will iterate over all available fields taken from the data model and allow the expression to apply any filter logic required. As input, the currently iterated field can be used with "dm.factSheetField.name"; the type of the field can be identified with "dm.factSheetField.type".
In following JUEL expressions like the forEach or the update section, the list of read fields can be used with dm.factSheetFields, which contains objects with the keys name, type and factsheetType.
Example
The below example would read all fields of type "STRING" from a Fact Sheet.
{
"processors": [
{
"processorType": "inboundFactSheet",
"processorName": "Apps from Deployments",
"processorDescription": "Creates LeanIX Applications from Kubernetes Deployments",
"type": "Application",
"filter": {
"exactType": "Deployment"
},
"identifier": {
"external": {
"id": {
"expr": "${content.id}"
},
"type": {
"expr": "externalId"
}
}
},
"read": {
"multipleFields": "${dm.factSheetField.type=='STRING'}"
},
"updates": [
{
"key": {
"expr": "${integration.valueOfForEach.name}"
},
"values": [
{
"expr": "${lx[integration.valueOfForEach.name]} - added by Integration API"
}
]
}
],
"forEach": "${dm.factSheetFields}",
"logLevel": "debug"
}
]
}
In the sample, all String fields of the Fact Sheet get a string " - added by Integration API" appended.
multipleFields can be used for relations as well.
After evaluation of dynamic relation reading, the fields dm.relationFilters[], lx.relationTargetFields[] and lx.relationFields[] are available:
"multipleFields": "${data.fieldsToRead.contains(dm.relationField.name) && dm.relationField.type=='relToParent'}"
"multipleTargetFields": "${data.fieldsToRead.contains(dm.factSheetField.name) && dm.relationType=='relToParent'}"
"multipleFilters": "${data.myDynamicRelationList.contains(dm.relationType)}",
In the same way, tags can be defined dynamically. After execution, the collected tag groups are available in the "lx.tagGroups" list.
In the filter "multipleGroups", the object of the currently iterated tag group, "dm.tagGroup", can be used:
{
"read": {
"tags": {
"multipleGroups": "${true}",
"groups": [
"Cloud Transformation"
]
}
}
}
Availability of information read from the Fact Sheet
Information read from the Fact Sheet is available in the output section. The information is not available in the outer forEach, in the identifier, or in the filter section. The reason for this is that at the time the content of these sections is evaluated, the target Fact Sheet has not yet been identified.
Auto deletion with inboundProcessors
The Integration API supports the processing mode "full" when creating the configuration. Only in case the configuration is set to mode "full", a section "deletionScope" is read from the processor configuration. The following operations are supported:
- Deletion of Fact Sheets: If that section contains a key "factSheets", all Fact Sheets matching the scope query inside will be removed if they are not found in the processed LDIF
- All Fact Sheets that match the deletion scope but are not touched by an inbound Data Processor during processing will be removed (set to "Archived")
- Relations can be automatically removed as well. The structure to define relations to be deleted is similar; see an example configuration below. The example removes all relations, but by narrowing the scope to fewer Fact Sheets, relations will only be removed for these Fact Sheets
- Documents can be deleted by defining a scope of Fact Sheets and adding a regular expression pattern that matches, by name, the documents that may be removed if they are no longer referenced by the incoming LDIF data
How does this work? Do I need to delete any data manually or first delete in a processor?
The concept of deletion is unique to the Integration API in that no active deletion by the user is needed. All deletion is done by the Integration API automatically through the configuration of a 'deletion scope'.
The logic of deletion works as follows:
- In an inbound 'Run', Fact Sheets and other data like tags or relations are created and updated. These create and update actions mark artifacts as 'touched'.
- Upon finishing the 'Run', the API determines which artifacts within the defined deletion scope have not been touched at all in the course of the run, and these untouched artifacts are deleted.
- This prevents constant "delete-create-delete-create" cycles that would be visible in the audit log of Fact Sheets and avoids any manual deletion of data
Sending data for an archived Fact Sheet in the SyncRun
In case an archived Fact Sheet with the same externalId as sent in one of the input LDIFs exists in the workspace, the Integration API will create a new Fact Sheet rather than recovering the archived one. This is done in order to avoid unexpected data from the archived Fact Sheet being reflected on the active Fact Sheet.
Note: This scenario can only happen when the "uniqueFactSheet" attribute for a Fact Sheet is set to "false". When set to "true", externalIds are already cleared while archiving a Fact Sheet.
Multiple deletion scopes
Please note that you can define multiple sets of deletion scopes for every type (e.g. 2 Fact Sheet deletion scopes and 3 relation deletion scopes). Items processed during synchronization runs will be compared against each set separately. Any item in each deletion scope definition will be removed if not touched during processing. It is even allowed to define overlapping scopes; each item will be handled only once.
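A structural sketch of two independent Fact Sheet deletion scopes in one configuration (the facet filters are examples only):
{
"deletionScope": {
"factSheets": [
{
"scope": {
"facetFilters": [
{
"keys": [
"Project"
],
"facetKey": "FactSheetTypes",
"operator": "OR"
}
],
"ids": []
}
},
{
"scope": {
"facetFilters": [
{
"keys": [
"Application"
],
"facetKey": "FactSheetTypes",
"operator": "OR"
}
],
"ids": []
}
}
]
},
"processors": []
}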
Example
The example processor in this section removes the following data from a workspace:
- All project fact sheets that are no longer part of the incoming LDIF data
- All relations from applications to IT components
- All documents prefixed with "MyDocs_"
Caution
Before running this example processor, consider implications and always proceed with caution. The processor is configured to delete all project fact sheets present in the workspace. To limit the deletion scope, modify the code so that a tag such as "TEST_PRJ" is assigned to test projects. You can add this tag as a facetFilter to the deletion scope definition as shown below:
{
"facetKey": "${integration.tags.getTagGroupId('Testing')}",
"operator": "OR",
"keys": [
"${integration.tags.getTagId('Testing','TEST_PRJ')}"
]
}
Run this example processor multiple times to first create all projects, then remove one item and try again.
{
"deletionScope": {
"maximumDeletionRatio": {
"relations": 40,
"factSheets": 30
},
"factSheets": [
{
"scope": {
"facetFilters": [
{
"keys": [
"Project"
],
"facetKey": "FactSheetTypes",
"operator": "OR"
}
],
"ids": []
}
}
],
"relations": [
{
"relationTypes": [
"relApplicationToITComponent"
],
"scope": {
"facetFilters": [],
"ids": []
}
}
],
"documents": [
{
"documentMatches": [
"^MyDocs_.*"
],
"scope": {
"facetFilters": [
{
"keys": [
"Project"
],
"facetKey": "FactSheetTypes",
"operator": "OR"
}
],
"ids": []
}
}
]
},
"processors": [
{
"processorType": "inboundFactSheet",
"processorName": "Apps from Deployments",
"processorDescription": "Creates LeanIX Applications from Deployments",
"type": "Project",
"filter": {
"exactType": "prj"
},
"identifier": {
"external": {
"id": {
"expr": "${content.id}"
},
"type": {
"expr": "externalId"
}
}
},
"run": 0,
"updates": [
{
"key": {
"expr": "name"
},
"values": [
{
"expr": "${data.name}"
}
]
}
],
"enabled": true,
"logLevel": "debug"
}
]
}
{
"connectorType": "prjFull",
"connectorId": "prjFull",
"connectorVersion": "1.0.0",
"lxVersion": "1.0.0",
"content": [
{
"type": "Project",
"id": "prj-42",
"data": {
"name": "Project 42"
}
},
{
"type": "Project",
"id": "prj-43",
"data": {
"name": "Project 43"
}
},
{
"type": "Project",
"id": "prj-44",
"data": {
"name": "Project 44"
}
}
]
}
Remove tags
Deletion of tags works similarly. One or more deletion scope sections of type "tags" need to be configured. The scope defines the set of Fact Sheets to look at when removing tags and allows configuring a tag group and a tag name to be deleted. Tag names support regular expression matching to allow removal of tags based on name patterns. Please note that tags will be removed from the Fact Sheets where they are no longer referenced by the processors adding them, but the tags themselves will not be deleted.
Example
To remove tags and subscriptions from Fact Sheets
{
"deletionScope": {
"maximumDeletionRatio": {
"tags": 40
},
"tags": [
{
"tagScopes": [
{
"group": "myGroup",
"tag": "Prefix_.*"
}
],
"scope": {
"facetFilters": [
{
"keys": [
"Project"
],
"facetKey": "FactSheetTypes",
"operator": "OR"
}
],
"ids": []
},
"advanced": "${lx.tag.tagGroup.name==null}"
}
],
"subscriptions": [
{
"subscriptionScopes": [
{
"type": "RESPONSIBLE",
"roles": [
"My Role"
]
}
],
"scope": {
"facetFilters": [
{
"keys": [
"Project"
],
"facetKey": "FactSheetTypes",
"operator": "OR"
}
],
"ids": []
}
}
]
},
"processors": []
}
Preventing removal of items
The Integration API can be configured to not delete any item if the ratio of items to be deleted in a deletion scope, compared to the total items in that scope, exceeds a defined threshold. This can be used to protect existing data in case incomplete data is erroneously sent into a full scope synchronization with deletion defined. The mechanism is enabled by defining "maximumDeletionRatio" for each type of deletion scope; a format example can be found in the examples above. A threshold of 30 means that all deletion will be stopped if 30% or more of the items would be deleted for a given deletion scope.
Valid content for the deletion scope
To create valid JSON content defining the scope of Fact Sheets to be deleted if they no longer exist in the incoming LDIF, admins may want to use an outbound configuration. Using this configuration, a "Scope" button is available that opens the facet filter UI. Once confirmed, the scope is automatically pasted into the processor configuration. Admins may copy and paste it into the inbound configuration where they need to use automatic deletion.
Advanced deletion
The functionality to delete content for elements that are no longer referenced in the LDIF can be used in an even more advanced way if required. The deletion scope for relations, tags and documents may contain an optional key "advanced". If configured in the specific deletion scope, its value is evaluated as a JUEL expression resulting in "true" or "false". Only elements for which the JUEL expression evaluates to "true" will be added to the scope of elements potentially to be deleted if not touched.
Multiple deletion scopes of the same or different types can be defined. Please note that deletions of Fact Sheets will always happen last. This allows using the Fact Sheet meta data lx.factsheet.* even for Fact Sheets that will be deleted. It even allows using the owner field of the Fact Sheet deletion in a relation, tag, document or subscription deletion scope, evaluated in the state before the owner of the current run is removed from the field (see the ownership concept of advanced Fact Sheet deletion).
Please note, that currently no content from fields of type projectStatus is available for advanced deletion.
The information about the current Fact Sheet (for relations always the source Fact Sheet of a relation) is available using "lx.factsheet.*". All fields of the Fact Sheet can be used. In addition, all meta data fields of relations, documents and tags can be used in the JUEL for the related type of deletion scope.
{
"deletionScope": {
"factSheets": [
{
"scope":
{
"facetFilters": [
{
"keys": [
"Project"
],
"facetKey": "FactSheetTypes",
"operator": "OR"
}
]
}
}
],
"relations": [
{
"relationTypes": [
"relProjectToITComponent"
],
"scope": {
"ids": [],
"facetFilters": []
},
"advanced": "${lx.relation.description.contains('from hr service')}"
}
],
"documents": [
{
"documentMatches": [
".*"
],
"scope": {
"facetFilters": [
{
"keys": [
"Project"
],
"facetKey": "FactSheetTypes",
"operator": "OR"
}
]
},
"advanced": "${lx.document.documentType.equals('jira') || lx.document.name.equals('someName') }"
}
]
},
"processors": [
{
"processorType": "inboundFactSheet",
"processorName": "Read Projects",
"processorDescription": "Creates LeanIX Projects from Project Management Solution",
"type": "Project",
"filter": {
"exactType": "prj"
},
"identifier": {
"external": {
"id": {
"expr": "${content.id}"
},
"type": {
"expr": "externalId"
}
}
}
}
]
}
Deletion for multiple external sources
Advanced deletion is also available for Fact Sheets but works slightly differently. Advanced Fact Sheet deletion supports multiple external sources for one SAP LeanIX Fact Sheet. Deletion does not happen unless the last referencing source no longer contains information about the Fact Sheet.
This functionality is helpful in cases where a Fact Sheet might be created and removed again by potentially more than one foreign system, each providing a separate LDIF to update the SAP LeanIX side.
In cases where the Fact Sheet is no longer referenced by only one of the sources, a deletion would not be a valid solution unless all sources no longer contain the information.
For such cases, advanced deletion allows every source to set a unique id as a marker. If this marker, called "owner", is found in the configuration, the Integration API first checks the field with all markers and only removes (archives) the Fact Sheet if the list of markers found in the field is empty. The field to store the markers needs to be created in the data model as a standard String type. The Integration API will read the content and treat it as a JSON list.
The below example shows the usage, including the way to add the marker for a specific owner as part of the updates section of the processor.
Example
Please note that the example uses the "alias" field to store the owner information. This is for testing and demonstration only, as it allows easy inspection and requires no data model work. For production usage, the markers should be written to a newly created field that is not visible in the UI.
{
"deletionScope": {
"factSheets": [
{
"scope": {
"ids": [],
"facetFilters": [
{
"keys": [
"Process"
],
"facetKey": "FactSheetTypes",
"operator": "OR"
}
]
},
"owner": {
"fieldName": "alias",
"ownerId": "myOwner"
}
}
]
},
"processors": [
{
"processorType": "inboundFactSheet",
"processorName": "Apps from Deployments",
"processorDescription": "Creates LeanIX Applications from Kubernetes Deployments",
"type": "Process",
"identifier": {
"external": {
"id": {
"expr": "fullSyncOwnerTest ${integration.valueOfForEach}"
},
"type": {
"expr": "externalId"
}
}
},
"run": 0,
"updates": [
{
"key": {
"expr": "name"
},
"values": [
{
"expr": "Full Sync Owner Test ${integration.valueOfForEach}"
}
]
},
{
"key": {
"expr": "alias"
},
"values": [
{
"expr": "${helper:addIfNotExisting(lx.factsheet.alias, 'myOwner')}"
}
]
}
],
"enabled": false,
"forEach": "${data.tags}",
"logLevel": "debug",
"read": {
"fields": [
"alias"
]
}
}
]
}
Delete tags of multiple groups
Use case: Removing all tags of multiple tag groups from Fact Sheets of a given type can be performed as easily as shown in the example below. The deletion scope marks all Fact Sheets of the given Fact Sheet type with tags in the tag groups to be deleted if not touched. The processor configuration then stays empty, so no Fact Sheet is touched during processing. At the end of the run, all the tags will be removed.
Example
Please do not forget to increase the deletion limit (set to 50% by default) to 101 to allow removing even 100% of the tags.
{
"deletionScope": {
"maximumDeletionRatio": {
"tags": 101
},
"tags": [
{
"tagScopes": [
{
"group": "Cloud:Region",
"tag": ".*"
}
],
"scope": {
"facetFilters": [
{
"facetKey": "FactSheetTypes",
"operator": "OR",
"keys": [
"CloudComponent"
]
}
],
"ids": []
}
},
{
"tagScopes": [
{
"group": "Cloud:Cloud Service",
"tag": ".*"
}
],
"scope": {
"facetFilters": [
{
"facetKey": "FactSheetTypes",
"operator": "OR",
"keys": [
"CloudComponent"
]
}
],
"ids": []
}
},
{
"tagScopes": [
{
"group": "Cloud:Tech Category",
"tag": ".*"
}
],
"scope": {
"facetFilters": [
{
"facetKey": "FactSheetTypes",
"operator": "OR",
"keys": [
"CloudComponent"
]
}
],
"ids": []
}
}
]
},
"processors": []
}
Deletion of Subscriptions
Example configuration using advanced deletion on subscriptions; here, to remove all subscriptions of type "RESPONSIBLE" on Application Fact Sheets that have an anonymised user as a subscriber.
Please note that the below configuration does not configure any processors and can thus work with an empty LDIF as input to trigger the Integration API.
Example
{
"deletionScope": {
"subscriptions": [
{
"subscriptionScopes": [
{
"type": "RESPONSIBLE",
"roles": []
}
],
"scope": {
"facetFilters": [
{
"keys": [
"Application"
],
"facetKey": "FactSheetTypes",
"operator": "OR"
}
]
},
"advanced": "${lx.subscription.user.userName=='AnonymizedUser'}"
}
]
},
"processors": []
}
{
"connectorId": "subscription",
"connectorType": "subscription",
"connectorVersion": "1.0.0",
"content": [],
"lxVersion": "1.0.0"
}
Using External IDs in pathfinder search scopes
When working with integrations, specifically with deletion scopes, it is handy to know that Pathfinder is capable of filtering by external id and not only by internal id, which is most of the time not known to foreign systems.
To filter by external ids, just use the field "externalIds" instead of "ids" in the search scope definition.
Please note that Pathfinder requires a special syntax when defining external ids: the name of the externalId field, a slash ("/"), then the value of the externalId.
Example for the default externalId field:
"externalIds": ["${'externalId/'.concat(header.customFields.myExternalId)}"]
Data exchange and Aggregation between Objects
In some situations it may be required to use information from multiple data objects and store a joint result in another entity like a Fact Sheet or relation. Even creating specific relations if certain value combinations are found in different data objects is possible.
In order to perform such operations, a "variables" section is available that can be written to and extended while iterating over data objects. Data Processors in the following runs (not in the same run!) can then read the values and perform defined operations on them.
This works in the following steps:
Working with Variables | Details | Example |
---|---|---|
Define the variable with a default value | This avoids errors if a variable was never written but is accessed later (example in the admin section of the UI) |
Write additional values to the variable | This is available on all Data Processors by adding a "variables" section (same structure as in step 1) and assigning a value to the variable. | |
In a subsequent "Run", processors can access the variable and perform operations on it or even use the variable in the "forEach" section (see below) to execute steps for every entry for the variable | Variables can have dynamic names based on content. In combination with the "forEach" feature, this allows powerful use cases. | As an example, the user needs to collect cost data from various data objects. The cost data needs to be grouped by the subscription they belong to. Each data object contains the cost in field "cost" and the id of the subscription in a field "subscriptionId". The user simply needs to collect all subscriptions in a variable "subscriptionList" and add each found cost to another variable named "_cost". in the next run, a data processor iterates over all unique entries in "subscriptionList" ("forEach": "${variables.subscriptionList.distinct()}". Then the aggregated cost variable can be accessed by using the name taken from "integration.valueOfForEach" plus "_cost" Please see the example below |
Writing Variables using Expressions
{
"variables": [
{
"key": "prefix_§{dataMyNameFromDataObjectValue}",
"value": "${data.myValueFromDataObject}"
}
]
}
Example
Processor Writing Variables
Below is an example of a processor with a matching LDIF that shows how variables work. In the first run, marked in the processor with "run": 0, the variables section with the key aggregatedCosts gathers together all the costs found in the data section of each LDIF entry that matches the filter in place; in this case, the filter matches data objects of type ITComponent. In the next run, marked with "run": 1, the processor calls the sum function on the variable aggregatedCosts and writes the sum to the description field of the Fact Sheets that fall under the specified filter, which in this case is all the Applications in the LDIF.
The example's result is three Fact Sheets created: two IT Components and one Application with a description of 11. Note that the costs of the IT Components were not written to the IT Components' Fact Sheets.
{
"processors": [
{
"processorType": "inboundFactSheet",
"processorName": "Create IT Components",
"processorDescription": "One Processor for IT Components",
"enabled": true,
"type": "ITComponent",
"identifier": {
"external": {
"id": {
"expr": "${content.id}"
},
"type": {
"expr": "externalId"
}
}
},
"run": 0,
"filter": {
"exactType": "ITComponent"
},
"updates": [
{
"key": {
"expr": "name"
},
"values": [
{
"expr": "${data.name}"
}
]
}
],
"variables": [
{
"key": "aggregatedCosts",
"value": "${data.cost}"
}
]
},
{
"processorType": "inboundFactSheet",
"processorName": "Create Applications",
"processorDescription": "Aggregated IT Costs in Application's Description",
"enabled": true,
"type": "Application",
"identifier": {
"external": {
"id": {
"expr": "${content.id}"
},
"type": {
"expr": "externalId"
}
}
},
"run": 1,
"filter": {
"exactType": "Application"
},
"updates": [
{
"key": {
"expr": "name"
},
"values": [
{
"expr": "${data.name}"
}
]
},
{
"key": {
"expr": "description"
},
"values": [
{
"expr": "${variables.aggregatedCosts.sum()}"
}
]
}
]
}
],
"variables": {}
}
{
"connectorType": "ee",
"connectorId": "Kub Dev-001",
"connectorVersion": "1.2.0",
"lxVersion": "1.0.0",
"content": [
{
"type": "ITComponent",
"id": "itc1",
"data": {
"name": "IT1",
"cost": 5
}
},
{
"type": "ITComponent",
"id": "itc2",
"data": {
"name": "IT2",
"cost": 6
}
},
{
"type": "Application",
"id": "app",
"data": {
"name": "My App"
}
}
]
}
Dynamic Variable Handling
${variables[integration.valueOfForEach.concat('_cost')].sum()}
(which is the same as variables['12345_cost'].sum() in case valueOfForEach is "12345")
Supported Dynamic Variable Operations
Supported operations are listed below. Each invalid entry will be counted as "0" when calculating.
Method | Details |
---|---|
myVariable.sum() | Creates a number adding all values in the variable |
myVariable.get() | Reads the variable as a single value (first value) |
myVariable.join(String delimiter) | Creates a String concatenating all values using the passed string, e.g. myVariable=["1","2","3"] will be converted to "1, 2, 3" by variables.myVariable.join(', ') |
myVariable.distinct() | Returns the same list of values but with duplicate entries removed. The result can be used to do further calculations like e.g. variables.myVariable.distinct().join(', ') to show all unique entries |
myVariable.contains(String value) | Returns a boolean that e.g. can be used in advanced filters for Data Processors to execute a Data Processor only if certain values occur in a variable |
myVariable.count() | Returns the number of entries in the variable |
myVariable.average() | Calculates the mathematical average of all values. Non-numerical values will be ignored |
myVariable.toList() | Converts to a Java-List in order to execute standard java list methods |
myVariable.max() | Selects the highest number value in the variable and returns it |
myVariable.min() | Selects the lowest number value in the variable and returns it |
myVariable.getNumbers() | Filters out all non-numeric values in the variable and returns a list on which the other methods explained here can be executed. This allows safely calculating average, min, max etc., avoiding errors from values that cannot be converted to a number. myVar.getNumbers().average() uses only the numbers that were added to the variable |
myVariable.selectFirst() | Picks the first available String from the method parameters that matches any of the values of myVariable. If nothing matches, the first parameter is selected (default). Please note that the list of options to match needs to be provided as a list, as JUEL does not allow a variable number of parameters. A helper function was added to allow creating a list from any string split result (array). Example: ${variables.myVariable.selectFirst(helper:toList('default','optionHighPrio','optionMediumPrio','optionLowPrio'))} |
forEach Logic
Each data processor provides additional capabilities to handle values that are lists. Using the standard functionality, every data processor will be executed exactly one time for each data object sent to the Integration API.
Sometimes however, there is a need to update multiple Fact Sheets or multiple fields in a Fact Sheet for each value we find in a list of values found in the LDIF.
{
"data": {
"attachedDocuments": [
{
"extension": "vsdx",
"name": "thediagram.vsdx",
"displayName": "Diagram",
"url": "sotrage.azure.com/123/thediagram.vsdx",
"content": null
},
{
"extension": "docx",
"name": "thedoc.docx",
"displayName": "Documentation",
"url": "sotrage.azure.com/123/thedoc.docx",
"content": null
},
{
"extension": "html",
"name": "webpage.html",
"displayName": "Web Page",
"url": null,
"content": "<body>the vm 789 ...</body>"
}
],
"version": "1.8.4",
"myForEachField": "attachedDocuments",
"maturity": "3",
"note": "I did the first comment here",
"Home Country": "D",
"Other Country": "UK",
"clusterName": "leanix-westeurope-int-aks"
}
}
{
"processorType": "inboundFactSheet",
"processorName": "Deployment To Application",
"processorDescription": "The processor creates or updates an Application from every data object of type 'Deployment'",
"type": "Application",
"name": "My Awesome App",
"run": 0,
"enabled": true,
"identifier": {
"external": {
"id": {
"expr": "${content.id}"
},
"type": {
"expr": "externalId"
}
}
},
"filter": {
"exactType": "Deployment"
},
"forEach": "${data.myForEachField}",
"updates": [
{
"key": {
"expr": "name"
},
"values": [
{
"expr": "${data.attachedDocuments[integration.indexOfForEach].name}", // or in short: ${integration.valueOfForEach.name}, remove this comment before trying
"regexReplace": {
"match": "",
"replace": ""
}
},
{
"expr": "${data.value}"
}
]
}
]
}
Using the "forEach" section in each data processor as in the example above. Will result in executing the data processor "Deployment To Application" four times for the given data object and each run will allow the user to use the index of the current iteration in all expressions (integration.indexOfForEach).
To fill some output field of the data processor with the specific url (see example above), the configuration would look like this: ${data.attachedDocuments[integration.indexOfForEach].name}. This will generate the three different names of the attached documents in each run of the data processor. This could be used to create separate Fact Sheets and relations from the source data.
There is another way to access the value of the element referenced by the current index:
${integration.valueOfForEach}
Which is the same as:
${data.attachedDocuments[integration.indexOfForEach].name}
The index variable can also be used to reference the same index in another list of the data object. Important note: The admin can configure a "regexReplace" section inside the forEach section. This allows manipulating the JSON representation of the value object resulting from the expression. If such a manipulation is configured, it only affects "integration.valueOfForEach"; it does not alter the original data, which can still be referenced manually using the indexOfForEach variable.
Of course, the logic could be used to always execute a data processor n times. Just add '[1,2,3]' as the configuration and the data processor will execute three times with the index variable integration.indexOfForEach set to 0, 1, and 2 for reference.
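A minimal sketch of such a fixed-count configuration, assuming the literal list is accepted directly as the forEach value as described above:
"forEach": "[1,2,3]"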
In case the field 'attachedDocuments' is not available or contains an empty list, the data processor will not execute (it operates on an empty list). In case the value is a single value and not a list, the data processor will execute once.
The Integration API allows iterating over list values and map values. When iterating over a map, indexOfForEach will always return -1 as maps are not ordered. For maps, an additional variable "keyOfForEach" is available, providing access to the name of the key. The value can be accessed with "valueOfForEach".
Example: forEach
{
"connectorType": "ee",
"connectorId": "Kub Dev-001",
"connectorVersion": "1.2.0",
"lxVersion": "1.0.0",
"content": [
{
"type": "Deployment",
"id": "634c16bf-198c-1129-9d08-92630b573fbf",
"data": {
"app": "HR Service",
"version": "1.8.4",
"myList": [
"lValue1",
"lValue2"
],
"myMap": {
"key1": "value1",
"key2": "value2"
}
}
}
]
}
{
"processors": [
{
"processorType": "inboundFactSheet",
"processorName": "Apps from Deployments",
"processorDescription": "Creates LeanIX Applications from Kubernetes Deployments",
"type": "Application",
"filter": {
"exactType": "Deployment"
},
"identifier": {
"external": {
"id": {
"expr": "${content.id}"
},
"type": {
"expr": "externalId"
}
}
},
"run": 0,
"updates": [
{
"key": {
"expr": "name"
},
"values": [
{
"expr": "${data.app}"
}
]
},
{
"key": {
"expr": "description"
},
"values": [
{
"expr": "${integration.keyOfForEach}: ${integration.valueOfForEach}"
}
]
}
],
"forEach": "${data.myMap}",
"logLevel": "debug"
}
]
}
forEach logic can also be applied inside the "values" section of an update key. The list or map will be iterated and the resulting values will contain n entries that are then mapped to the defined key. Please ensure to set the mode to "list" if more than just the first value is to be used as a result (the default mode for every key in the update section of a processor is "selectFirst", which only takes the first non-null result from what was defined in the values array). The "inner forEach" behaves exactly as if the admin had defined a fixed number of elements in the "values" section.
Example: Nested forEach
Using all three options to iterate with the "forEach" functionality, the Integration API allows ingesting data from an LDIF where data structures are nested up to three levels.
{
"key": {
"expr": "targetITComponents"
},
"mode": "list",
"values": [
{
"forEach": {
"elementOf": "${lx.relations}",
"filter": "${true}"
},
"map": [
{
"key": "id",
"value": "${integration.output.valueOfForEach.target.id}"
},
{
"key": "type",
"value": "${integration.output.valueOfForEach.target.type}"
},
{
"key": "name",
"value": "${integration.output.valueOfForEach.target.displayName}"
}
]
}
]
}
Given the processor was configured to read relations (read section) and put the results into a list "lx.relations", the above example of an outboundFactSheet processor will output all relation results into an array as the value of a key named "targetITComponents".
Please note that admins may configure the "filter" JUEL expression (evaluating to a boolean) in order to exclude some of the input list elements from the output. The JUEL expression may contain references to "integration.output.valueOfForEach" and filter on any content.
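As a minimal sketch, such a filter could restrict the output to one relation type; the relation type name used here is only an assumption for illustration:
"forEach": {
  "elementOf": "${lx.relations}",
  "filter": "${integration.output.valueOfForEach.type=='relApplicationToITComponent'}"
}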
A third option to iterate using "forEach" is to add the key at the level of "updates". It allows creating a dynamic set of field updates to be pushed to e.g. a Fact Sheet. In the below example, the fields to be updated are read from the incoming LDIF. In order to execute the example, the referenced Fact Sheet needs to already exist. The easiest way is to execute the "starter example" configuration once on the workspace.
Example: forEach : dynamic fields
{
"processors": [
{
"processorType": "inboundFactSheet",
"processorName": "Apps from Deployments",
"processorDescription": "Creates LeanIX Applications from Kubernetes Deployments",
"type": "Application",
"filter": {
"exactType": "Deployment"
},
"logLevel": "debug",
"identifier": {
"external": {
"id": {
"expr": "${content.id}"
},
"type": {
"expr": "externalId"
}
}
},
"updates": [
{
"key": {
"expr": "${integration.updates.keyOfForEach}"
},
"values": [
{
"expr": "${integration.updates.valueOfForEach}"
}
],
"forEach": {
"elementOf": "${data}",
"filter": "${integration.updates.valueOfForEach!='toBeFiltered'}"
}
}
]
}
]
}
{
"connectorType": "Kubernetes",
"connectorId": "Kub Dev-001",
"connectorVersion": "1.2.0",
"lxVersion": "1.0.0",
"description": "Imports Kubernetes data into LeanIX",
"processingDirection": "inbound",
"processingMode": "partial",
"customFields": {},
"content": [
{
"type": "Deployment",
"id": "634c16bf-198c-1129-9d08-92630b573fbf",
"data": {
"name": "HR Service",
"version": "toBeFiltered",
"description": "test description"
}
}
]
}
Use "object" key to output all objects
The "object" key might be used instead of "map" or "expr" to retrieve a representation of any potential input object as a defined value. This allows to easily export all information without need to know about the details inside the object.
{
"scope": {
"facetFilters": [
{
"facetKey": "FactSheetTypes",
"operator": "OR",
"keys": [
"Application"
]
}
],
"ids": [
"90a8296c-92fe-4009-a4cf-21db710719ec"
]
},
"processors": [
{
"processorType": "outboundFactSheet",
"logLevel": "debug",
"fields": [
"lifecycle"
],
"output": [
{
"key": {
"expr": "content.id"
},
"values": [
{
"expr": "${lx.factsheet.id}"
}
]
},
{
"key": {
"expr": "content.type"
},
"values": [
{
"expr": "${lx.factsheet.type}"
}
]
},
{
"key": {
"expr": "description"
},
"values": [
{
"object": "${lx.factsheet.lifecycle}"
}
]
}
]
}
]
}
Filter Processor execution based on current Fact Sheet content
Using the onRead filter in an inbound processor allows executing a processor based on currently existing Fact Sheets. The processor can be configured to only execute if a Fact Sheet already exists or if the Fact Sheet has defined values in certain fields.
Or exactly the other way round: the processor may only be executed if the Fact Sheet does not yet exist.
Example
The following example shows a processor that will execute if a Fact Sheet exists that has a defined name and is flagged with a certain tag.
It adds " (Cloud)" to the name of a Fact Sheet if the current name is exactly as defined in the data object and a tag "Public Cloud" in the tag group "Cloud Transformation" is set on the Fact Sheet.
{
"processors": [
{
"processorType": "inboundFactSheet",
"processorName": "Apps from Deployments",
"processorDescription": "Creates LeanIX Applications from Kubernetes Deployments",
"type": "Application",
"filter": {
"exactType": "Deployment",
"onRead": "${lx.factsheet.name==data.name && lx.tags.size()>0 && lx.tags[0].name=='Public Cloud'}"
},
"identifier": {
"external": {
"id": {
"expr": "${content.id}"
},
"type": {
"expr": "externalId"
}
}
},
"updates": [
{
"key": {
"expr": "name"
},
"values": [
{
"expr": "${lx.factsheet.name} (Cloud)"
}
]
}
],
"logLevel": "debug",
"read": {
"fields": [
"name"
],
"tags": {
"groups": [
"Cloud Transformation"
]
}
}
}
]
}
onRead is available for inbound processors only
The onRead filter is available for inbound processors only. Outbound processors will ignore this filter configuration.
However, equivalent functionality is available for outbound processors, as read content can be used in the "advanced" filter, e.g. "filter": {"advanced": "${lx.relations.size()>0}"} for cases where you only want to export if any requested relation was found.
Order of RegEx execution
Using the replace regEx allows modifying the output after the match regEx has been applied.
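A minimal sketch of a value configuration that first matches and then replaces, based on the clusterName field from the earlier LDIF example (the pattern itself is only illustrative):
{
  "key": {
    "expr": "description"
  },
  "values": [
    {
      "expr": "${data.clusterName}",
      "regexReplace": {
        "match": "^leanix-(.*)-aks$",
        "replace": "$1"
      }
    }
  ]
}
With the example value "leanix-westeurope-int-aks", the match regEx is applied first; only because it matches, the replace is executed and would yield "westeurope-int".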
Load large LDIFs
Depending on the source of the incoming data, LDIF files can be very large. The Integration API may not accept an LDIF larger than 50 MB. If your file is bigger, it needs to be provided as a URL reference to an Azure Blob storage.
The configuration can be added to the Processor configuration part:
{
"dataProvider": {
"url": "${header.customFields.url}"
},
"processors": [
]
}
The value can be configured as a fixed value or, as in the example, passed in as part of the custom fields information in the LDIF. Please ensure not to send the "content" section in case you want to read from Azure. If content is part of the API call, this content will be used instead of the content in the Azure storage.
The URL needs to contain the path to the blob storage entry plus the Azure SAS token. See Azure documentation for details. https://docs.microsoft.com/bs-latn-ba/azure/storage/blobs/storage-blob-user-delegation-sas-create-cli
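For illustration, a minimal LDIF that passes the URL via custom fields might look like the sketch below; the storage account, container, and SAS token are placeholders, and no "content" section is sent:
{
  "connectorType": "Kubernetes",
  "connectorId": "Kub Dev-001",
  "connectorVersion": "1.2.0",
  "lxVersion": "1.0.0",
  "processingDirection": "inbound",
  "processingMode": "partial",
  "customFields": {
    "url": "https://mystorageaccount.blob.core.windows.net/ldif/large-ldif.json?sv=2020-08-04&sig=..."
  }
}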
No support for IP whitelisting
Please note that we process from our Azure infrastructure, where IP addresses can change dynamically. Reading data from a URL only works if no IP whitelisting is set. Instead, SAS tokens with a limited TTL can be used.
Long running API calls
The default behaviour of the Integration API is to execute all changes with the user that is provided by the API client when logging in. E.g. the history of Fact Sheets will show this user as the one executing the changes, as if the user had logged into the SAP LeanIX UI and made the changes manually. This is easy to understand and communicate. In some situations, however, it may not be sufficient.
Use Cases may be:
- The process takes more than 60 minutes.
- All changes the API makes are triggered by different login users but should rather be shown to the user as changes made by the Integration API instead of showing all the different API users.
In order to support long running inbound or outbound processes, an API token may be provided in the configuration. It will be used instead of the access token that comes with the API call (which expires after one hour and does not contain any refresh token to grant access to the workspace data for more than 60 minutes).
{
"credentials": {
"apiToken": "..."
},
"processors": [
]
}
As an alternative, the Integration API can be configured to use a "Technical User" for accessing the SAP LeanIX Pathfinder backend. This user, called INTEGRATION_API, will be automatically created in the workspace by the API if it does not exist.
To use the Technical User, please add the following section:
{
"credentials": {
"useTechnicalUser": true
},
"processors": [
]
}
Executing Processor Configurations with Custom Technical Users
Besides using the default technical user that is created by the integration, it is possible to use technical users that are already set up in the workspace. To execute a processor configuration with a custom technical user, the user ID of the technical user has to be added to the credentials section of the configuration. In the following example, the CUSTOM_TECHNICAL_USER_ID placeholder would need to be replaced by the ID of the technical user that will be used when reading data from or writing data to your workspace.
{
"credentials": {
"technicalUserId": "CUSTOM_TECHNICAL_USER_ID"
},
"processors": [
]
}
Restricting the Execution of Processor Configurations
To restrict the execution of processor configurations to specific users, you can add the executionRestrictions object next to the processors section of the configuration.
{
"executionRestrictions": {
"defaultTechnicalUser": true,
"userIds": ["USER_ID"]
},
"processors": [
]
}
The defaultTechnicalUser parameter specifies whether the default technical user should be allowed to execute the configuration, while the userIds parameter specifies the user IDs of the users that should be allowed to execute the configuration. To restrict execution to multiple users, simply add their user IDs to the userIds array. Both parameters are optional; when the executionRestrictions object is specified but kept empty, no user will be able to execute the processor.
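For instance, a sketch restricting execution to two named users while excluding the default technical user (the user IDs are placeholders):
{
  "executionRestrictions": {
    "defaultTechnicalUser": false,
    "userIds": ["USER_ID_1", "USER_ID_2"]
  },
  "processors": [
  ]
}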
Search Based Matching of Fact Sheets
When using the "search" based identification of the Fact Sheet that are supposed to be updated by the incoming data object, then the section may contain a section to limit the scope of searched Fact Sheets and an expression filtering the Fact Sheets that should be updated.
In case Integration API iterates over a search result, two variables can be used in all JUEL expressions: search.resultSize (indicating the total number of items we iterate over in the processor) and search.resultIndex (number of current item being iterated)
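A minimal sketch of an update value using these variables (the description text is only illustrative):
{
  "key": {
    "expr": "description"
  },
  "values": [
    {
      "expr": "Updated by external sync (item ${search.resultIndex+1} of ${search.resultSize})"
    }
  ]
}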
When configuring an inboundFactSheet processor, a key "search" is allowed. The value of this key is an object as defined in the example below. One or more Fact Sheets may be identified by the search and updated based on the same data object in the LDIF.
The search works in two steps:
- The "scope" defines a search against the Pathfinder backend and limits the number of Fact Sheets to be matched. A valid scope can e.g. be created by using an outbound Integration API configuration and clicking on "set scope". The scope can then be copied from there.
- After reducing the set of Fact Sheets potentially in scope as far as possible with the Pathfinder filtering options, an additional JUEL expression is executed to further narrow down the scope (key: "filter"). This is the far more costly step but allows much more flexibility in identifying the right Fact Sheets. Admins should always try to limit the scope in the first step as far as possible to avoid long processing times. In the JUEL expression, all fields defined in the "read" section can be used for filtering (e.g. ${lx.factsheet.description.startsWith('Autoupdate: ')}).
As the outcome, all identified Fact Sheets will be processed as if they had been found by the processor one by one.
The "search" key can be used in conjunction with the "external" key. In this scenario, the Integration API first tries to find based on the external ID. If that fails, the search will be executed. If that fails as well, the integration API will try to create the Fact Sheet based on information in the "external" value. The last step can be avoided if creation is not allowed for the use case by adding an onRead filter and check for "lx.factsheet" not being null.
From a use case perspective, it allows to search for a Fact Sheet with a specific external id, if not existing then search for a Fact Sheet with e.g. a specific name and add the external id in case it is existing. OR create a new Fact Sheet in case a Fact Sheet with the name was not found.
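A minimal sketch of the relevant sections of such an inboundFactSheet processor, combining both identifier keys and using an onRead filter to prevent creation; the scope and the name-based filter are only illustrative:
{
  "filter": {
    "exactType": "Deployment",
    "onRead": "${lx.factsheet!=null}"
  },
  "identifier": {
    "external": {
      "id": {
        "expr": "${content.id}"
      },
      "type": {
        "expr": "externalId"
      }
    },
    "search": {
      "scope": {
        "facetFilters": [
          {
            "facetKey": "FactSheetTypes",
            "operator": "OR",
            "keys": [
              "Application"
            ]
          }
        ],
        "ids": []
      },
      "filter": "${lx.factsheet.name==data.name}",
      "multipleMatchesAllowed": false
    }
  },
  "read": {
    "fields": [
      "name"
    ]
  }
}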
The key "multipleMatchesAllowed" allows to define the API behaviour in case multiple Fact Sheets are matching the search criteria. Some use cases may only want to update if exactly one Fact Sheet was found (then the value will be set to "false"). By allowing multiple matches, bulk updates on multiple Fact Sheets are possible. Default if not existing is "true"
Example
The below processor will update the descriptions of all Application Fact Sheets that have the tag "AsiaPacific" in the tag group "Region".
{
"processors": [
{
"processorType": "inboundFactSheet",
"processorName": "Update all Cloud Apps",
"processorDescription": "Updates all Apps with tag 'Cloud'",
"type": "Application",
"filter": {
"exactType": "AppUpdate"
},
"identifier": {
"search": {
"scope": {
"facetFilters": [
{
"facetKey": "FactSheetTypes",
"operator": "OR",
"keys": [
"Application"
]
},
{
"facetKey": "${integration.tags.getTagGroupId('Region')}",
"operator": "OR",
"keys": [
"${integration.tags.getTagId('Region','AsiaPacific')}"
]
}
],
"ids": []
},
"filter": "${true}",
"multipleMatchesAllowed": true
}
},
"logLevel": "debug",
"updates": [
{
"key": {
"expr": "description"
},
"values": [
{
"expr": "External sync executed ${data.dateTime}"
}
]
}
]
}
]
}
{
"connectorType": "searchBasedScope",
"connectorId": "searchBasedScope",
"connectorVersion": "1.0.0",
"lxVersion": "1.0.0",
"description": "Updates external sync date",
"processingDirection": "inbound",
"processingMode": "partial",
"customFields": {},
"content": [
{
"type": "AppUpdate",
"id": "apps",
"data": {
"dateTime": "06/08/2019"
}
}
]
}
The example can easily be extended to e.g. only update the Fact Sheets where the description already starts with a specific text, indicating that an automatic update is allowed:
{
"processors": [
{
"processorType": "inboundFactSheet",
"processorName": "Update all Cloud Apps",
"processorDescription": "Updates all Apps with tag 'Cloud'",
"type": "Application",
"filter": {
"exactType": "AppUpdate"
},
"identifier": {
"search": {
"scope": {
"facetFilters": [
{
"facetKey": "FactSheetTypes",
"operator": "OR",
"keys": [
"Application"
]
},
{
"facetKey": "${integration.tags.getTagGroupId('Region')}",
"operator": "OR",
"keys": [
"${integration.tags.getTagId('Region','AsiaPacific')}"
]
}
],
"ids": []
},
"filter": "${lx.factsheet.description.startsWith('External sync')}",
"multipleMatchesAllowed": true
}
},
"logLevel": "debug",
"read": {
"fields": [
"description"
]
},
"updates": [
{
"key": {
"expr": "description"
},
"values": [
{
"expr": "External sync executed ${data.dateTime}"
}
]
}
]
}
]
}
Include Archived Fact Sheets in scope result
It is even possible to tell the Integration API to include archived Fact Sheets in the result by enabling a specific flag. Just set "omitArchivedFactSheets" to false in the scope:
{
"scope": {
"omitArchivedFactSheets": false,
"ids": [],
"facetFilters": [
{
"keys": [
"BusinessCapability"
],
"facetKey": "FactSheetTypes",
"operator": "OR"
},
{
"keys": [
"archived"
],
"facetKey": "TrashBin",
"operator": "OR"
}
]
},
"processors": [
....
]
}
Using ExternalIDs in search based scope (and in deletion scope)
Scope filters may not only filter for items by their internal IDs (key "ids") but by external IDs as well. To use this, the key "externalIds" needs to be defined and contain an array of searched external IDs. Each needs to be prefixed with the name of the external ID field and a slash. See the example below.
The below example can even be made dynamic and inject content from the LDIF custom fields:
"externalIds": ["${'externalId/'.concat(header.customFields.myExternalId)}"]
{
"scope": {
"facetFilters": [],
"externalIds": [
"externalId/Ext-ID-0m0NiY6Z"
]
}
}
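Putting the dynamic expression from above in place, the scope would look like this sketch; it assumes a custom field myExternalId is present in the LDIF header:
{
  "scope": {
    "facetFilters": [],
    "externalIds": [
      "${'externalId/'.concat(header.customFields.myExternalId)}"
    ]
  }
}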
Bookmarks to define search scope
It is possible to not hardcode the search scope into the Integration API configuration but to allow users with access to specific bookmarks to dynamically change the scope of Integration API runs by modifying the bookmark in the frontend of the application. In the Integration API, the ID or name of a bookmark can be configured to set the scope for search based scoping, as shown in the example below:
Example
In case a bookmark is used, the whole bookmark object is available for use in JUEL expressions in the processor. The below example shows this by accessing the bookmark name in the updates section.
{
"processors": [
{
"processorType": "inboundFactSheet",
"processorName": "Apps from Deployments",
"processorDescription": "Creates LeanIX Applications from Kubernetes Deployments",
"type": "Application",
"filter": {
"exactType": "Deployment"
},
"identifier": {
"search": {
"filter": "${true}",
"multipleMatchesAllowed": true,
"scopeFromBookmark": "${integration.bookmarks.getBookmarkId('book-1')}"
}
},
"updates": [
{
"key": {
"expr": "description"
},
"values": [
{
"expr": "bookmark name: '${bookmark.name}' AND id=${integration.bookmarks.getBookmarkId('book-1')}"
}
]
}
]
}
]
}
Find out details about Bookmarks
You may output the bookmark object into the description field to inspect the structure and available information in a Bookmark. Use the "test run" mode to not alter any Fact Sheets.
Read-Only FactSheet Processor
In case you want to collect information from the resulting Fact Sheets into variables, you can enable the "readOnly" mode on the inboundFactSheet processor. In the example below, all entries of the "releases" field of each Application are collected into a variable "releases". The results could then be used in a subsequent run (see the sketch after the example).
Using "filter": "${myExpression}", the set of collected Fact Sheet values can be narrowed down even further according to the use case. This feature comes in handy to save processing time, as the processor does not need to prepare any write operation.
Example
{
"processorType": "inboundFactSheet",
"processorName": "Process variables with Search Scope",
"processorDescription": "Collect deploymentMaturity ",
"type": "Application",
"filter": {
"exactType": "Deployment"
},
"identifier": {
"search": {
"scope": {
"ids": [],
"facetFilters": [
{
"keys": [
"Application"
],
"facetKey": "FactSheetTypes",
"operator": "OR"
}
]
}
}
},
"run": 0,
"enabled": true,
"variables": [
{
"key": "releases",
"value": "${lx.release}"
}
],
"logLevel": "debug",
"readOnly": true
}
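For illustration, a processor in a later run could then use the collected variable. This is only a minimal sketch; the way the value is rendered into the description is an assumption for demonstration purposes:
{
  "processorType": "inboundFactSheet",
  "processorName": "Use collected releases",
  "processorDescription": "Writes the highest collected release into the description",
  "type": "Application",
  "filter": {
    "exactType": "Deployment"
  },
  "identifier": {
    "external": {
      "id": {
        "expr": "${content.id}"
      },
      "type": {
        "expr": "externalId"
      }
    }
  },
  "run": 1,
  "updates": [
    {
      "key": {
        "expr": "description"
      },
      "values": [
        {
          "expr": "Highest collected release: ${variables.releases.max()}"
        }
      ]
    }
  ]
}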
Accessing the "Hierarchy Level" of a Fact Sheet
It is easily possible to access the hierarchy level of a Fact Sheet by reading the field "level" provided by the Pathfinder backend. This information can be used to filter for certain hierarchy levels or to do calculations.
In the below example, it is used to filter for Level 2 Project Fact Sheets for export.
This could of course have been done by just applying the restriction to the Pathfinder scope query. The below example is just to showcase the capability and allow extending it for more advanced filtering.
Example
{
"scope": {
"facetFilters": [
{
"facetKey": "FactSheetTypes",
"operator": "OR",
"keys": [
"Project"
]
}
],
"ids": []
},
"processors": [
{
"processorType": "outboundFactSheet",
"processorName": "Export Projects L2",
"processorDescription": "Exports only Level 2",
"enabled": true,
"filter": {
"advanced": "${lx.factsheet.level==2}"
},
"fields": [
"name",
"level"
],
"output": [
{
"key": {
"expr": "content.id"
},
"values": [
{
"expr": "${lx.factsheet.id}"
}
]
},
{
"key": {
"expr": "content.type"
},
"values": [
{
"expr": "project}"
}
]
},
{
"key": {
"expr": "name"
},
"values": [
{
"expr": "${lx.factsheet.name}"
}
]
},
{
"key": {
"expr": "level"
},
"values": [
{
"expr": "${lx.factsheet.level}"
}
]
}
]
}
]
}