Get started with automatic user provisioning through a direct API-connector to BambooHR

  1. Background
  2. Pre-configuration activities
  3. BambooHR API Key
  4. Deploy the provisioning agents/nodes to support on-premises user synchronization and provisioning
  5. Configure the API-driven provisioning endpoint in the Enterprise application page
  6. Deploy the Logic App template from GitHub
  7. Assign application permissions to your service principal in your logic app
  8. Adjust the Logic app template
10. Use Postman to send a bulk request to the BambooHR directory
  10. Attribute mapping and finalization of our custom logic
  11. Testing
12. FAQ and troubleshooting guide

Background

Managing a workforce across your identity and application landscape often necessitates reliance on authoritative sources such as HRIS/HCM systems. To fully leverage these systems, it’s essential to connect them with other operational facets, including managed services like IT operations and support.

This is where API-driven inbound user provisioning, which recently became generally available, comes into play.

It connects your HCM system to either Active Directory (on-premises) or Entra ID (cloud-only), facilitating synchronization. This automation allows for the creation, updating, and deletion of user accounts, saving time and reducing the risk of errors and manual inputs through CSV files or offline data aggregation. The distinguishing factor of the new HR connector, as compared to the standard SAP SuccessFactors and Workday connectors, is that the new API-driven connector doesn’t rely on the OData API to connect to your HR system. Instead, it heavily depends on the Microsoft Graph API.

So, how does it work? The connector employs an API client to map user objects from your chosen HR source to either Entra ID or your on-premises Active Directory. This efficient method ensures your user data remains current across your systems, based on input from your single source of truth.
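Under the hood, what the provisioning endpoint consumes is a standard SCIM 2.0 bulk request (RFC 7644). A minimal sketch of the envelope, with illustrative values, looks like this:

{
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:BulkRequest"],
    "failOnErrors": null,
    "Operations": [
        {
            "method": "POST",
            "bulkId": "701984",
            "path": "/Users",
            "data": {
                "schemas": [
                    "urn:ietf:params:scim:schemas:core:2.0:User",
                    "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
                ],
                "externalId": "701984",
                "userName": "bjensen@example.com",
                "active": true
            }
        }
    ]
}

Each entry in Operations represents one user record; the provisioning service decides between create and update by matching on the attribute you designate as the matching identifier (externalId by default).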

In this blog post, we will focus on Scenario 1, albeit with a twist! Instead of creating a blob storage that contains our CSV HR workload, we will use our logic app to trigger an HTTP request directly against the open APIs in BambooHR on a recurring schedule, which will serve as our refresh task to keep data consistent and accurate.

Here is an example of a design topology I’ve created to illustrate the direct API integration from the BambooHR system to the on-premises Active Directory system – see the design below:


Pre-configuration activities

  • Prepare API keys and permissions from your HCM provider so that your provisioning service can read from the API endpoint of your HCM system. The method of data retrieval can vary, often involving a direct API URL that points to an employee report or web service directory.
  • Create an enterprise application for Entra ID or, depending on your infrastructure, with support for on-premises Active Directory (this will act as your inbound provisioning API endpoint, where mappings, source credentials and mail notifications are configured).
  • Choose an automation tool to schedule your payload job, selecting from PowerShell, Logic Apps, or runbooks. We focus on the Logic App approach with a direct API call to our HR system, without the use of custom Logic App connectors.
  • If using a managed identity, a system-assigned managed identity with the necessary app permissions to read from your API endpoint in Microsoft Entra ID is recommended to expedite the payload.
  • If using a Service Principal, such as an Enterprise Application, store the secret in a key vault and access the secret value during the bulk payload sync with your preferred automation tool by leveraging Azure RBAC in your key vault.
  • For on-premises inbound provisioning, conduct a proof of concept in your test AD, or at least apply a source scope filter to target specific test users from your HR system and create a dedicated test OU for them, so you can validate your JML scenarios with the help of the new API connector.
  • Engage your Application Administrator to set up the user provisioning app in Entra ID for cloud-only sync. For on-premises integration, a Hybrid Identity Administrator is required.

BambooHR API Key

  1. Sign in to your BambooHR instance with administrator permissions. (Ask HR to create a dedicated service account so that the keys are not tied to an employee; in case of garden leave or termination, a personal key is invalidated and will result in a connectivity error.)
  2. On the Home page, click Account and select API Keys.
  3. Click Add New Key.
  4. Enter a name for the API key in the API Key Name field and click Generate Key.
  5. Copy your newly generated API key and store it in an Azure Key Vault.

    Important note. Remember to assign the "Key Vault Secrets User" RBAC role to the managed identity that was created with the deployment of your logic app. This can be done through the CLI or from Access Control (IAM) in the key vault where you store your API key (a PowerShell sketch follows at the end of this section). Once assigned, you will be able to connect to the key vault from your logic app and retrieve the secret value for authentication and connectivity.
    Important note. A key generated under a personal user profile is tied to that profile; if the user is terminated, the API key becomes invalid right away and your provisioning might fail with a 403 error when running the sync in your automation tool. Ask your HCM team to create a dedicated service profile in BambooHR and generate the API key under it.
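If you prefer scripting over the portal for storing the key and granting access, here is a minimal PowerShell sketch. The vault name and object ID are placeholders, and it assumes your key vault uses the Azure RBAC permission model:

# Store the BambooHR API key as a secret (prompting keeps the key out of your shell history)
$ApiKey = Read-Host -Prompt "Paste the BambooHR API key" -AsSecureString
Set-AzKeyVaultSecret -VaultName "kv-hr-provisioning" -Name "BambooHrApiKey" -SecretValue $ApiKey

# Grant the Logic App's system-assigned managed identity read access to secrets
$Vault = Get-AzKeyVault -VaultName "kv-hr-provisioning"
New-AzRoleAssignment -ObjectId "<logic-app-managed-identity-object-id>" `
    -RoleDefinitionName "Key Vault Secrets User" `
    -Scope $Vault.ResourceId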

Deploy the provisioning agents/nodes to support on-premises user synchronization and provisioning

1. Download the on-premises agent by navigating to the Entra admin center and heading to Hybrid management –> Entra Connect –> Cloud sync | Agents. Click on Download on-premises agents, then copy the installation files to your selected domain-joined server.

Recommendation: Depending on your AD topology, it is advisable to deploy at least two provisioning agents/nodes across your domain controllers or domain-joined servers for redundancy and load balancing.


2. Log in with appropriate administrative credentials—either as a Global Administrator or by activating your “Hybrid Identity Administrator” role to adhere to the principle of least privilege—before launching the agent configuration wizard.

3. Proceed with the setup by clicking ‘Next’ and following the subsequent instructions.

4. Choose HR-driven provisioning when prompted. (Note: The interface may reference Workday and SuccessFactors; however, the API-driven provisioning service supports a broader range of HR systems through the Entra Cloud Sync agent capabilities.)

5. Authenticate with a cloud-only administrator account from Entra ID that holds one of the following Entra ID roles to complete the configuration:

  • Global Administrator
  • Hybrid Identity Administrator

Note: For high availability and failover support, deploy at least two agents/nodes onto your domain-joined servers.

6. Verify the agent’s status after completing the configuration wizard by opening services.msc on the server and ensuring the provisioning agent services are running. This can also be confirmed under “Hybrid Management” in the Entra admin portal.

Local Services:
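You can also check the agent services from an elevated PowerShell prompt. Display names can vary slightly between agent versions, so a wildcard match is the safest bet:

# Confirm the provisioning agent services are present and running
Get-Service | Where-Object { $_.DisplayName -like "*Provisioning Agent*" } |
    Select-Object Status, Name, DisplayName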

Let’s continue and complete the remaining configuration in the API-driven enterprise app that we created at the beginning, before heading to the logic app deployment template.

Configure the API-driven provisioning endpoint in the Enterprise application page

  1. Start out by navigating to the Entra portal and head to “Enterprise Applications”.
  2. Click on “New Application” and search for “API-driven provisioning”.

3. Give it a convenient name. In our case we will name it “[Test] BambooHR User provisioning”, since we are going to integrate BambooHR as our HCM system for automatic user provisioning. Click Create once you have given it a name.

Now that we have our API-driven enterprise application created, let’s check out the tab below that says “Get started” with provisioning user accounts. Click on Get Started.

Switch the provisioning mode from Manual to Automatic. It’s important to remember that this will not start any provisioning job, since we haven’t yet integrated our logic app to send the bulk payload from HR to the connector (API-driven inbound provisioning).

Furthermore, it isn’t possible to use on-demand provisioning for a single user object the way you can with the native SAP SuccessFactors and Workday connectors.


Since we are going to provision on-premises user objects from the HR system, we need two provisioning agents, also referred to as provisioning nodes, to expedite our payload from HR to Active Directory using SCIM operations.

  1. Remember, we have already set up the two provisioning nodes on our domain-joined servers. So, let’s fill out the admin credentials to establish connectivity to our Active Directory domain.
  2. In our case we only need to choose our FQDN (Fully Qualified Domain Name) from the dropdown-list in the first field. The FQDN will automatically be visible once the agents are deployed successfully.
  3. The second field is where we define the organizational unit (OU) container in Active Directory that new users are provisioned to. Create a new OU or use an existing one, and name it for example “New Users”, or “Internals” if you have set up separate inbound provisioning connectors (enterprise applications) for different types of workforce in your company.

Note – The default OU can be overridden by using the parentDistinguishedName in the attribute mapping as an alternative way to choose the container you want to point new users to.
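As a hedged illustration of that note, a Switch expression mapped to parentDistinguishedName could route users to different containers based on a source attribute. The OU paths and the WorkerType value below are placeholders for your own environment:

Switch([urn:ietf:params:scim:schemas:extension:csv:1.0:User:WorkerType], "OU=New Users,DC=contoso,DC=com", "Contractor", "OU=Contractors,DC=contoso,DC=com")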

  1. Head to Settings and add the recipient that should receive notifications whenever the API endpoint enters a quarantine state and the provisioning job fails. This will typically happen when the threshold for accidental deletions is reached; provisioning then switches to quarantine and requires manual remediation by your IAM administrators.
  2. You can also add a threshold for accidental deletions to prevent provisioned users from being deleted when they fall out of the OU scope.
  3. Let’s say your automation tool tries to bulk delete/disable user objects; you can then set a threshold of your choice. Start out with 2-3 user objects as part of your threshold filter to prevent accidental deletions.

Now that everything has been tested and the connectivity is established to your Active Directory, let’s start deploying the Logic App Template from a GitHub project that Microsoft has configured.

Deploy the Logic App template from GitHub

  1. Click on Deploy To Azure – you will immediately get redirected to the Azure portal. Fill out all the necessary information – the exact values are not critical, since we are going to remove and replace some of the steps in the logic app itself.

2. Choose the subscription and the underlying resource group you want to deploy the logic app in. This logic app will be based on the consumption plan. Click Create and wait for the deployment to complete; it may take 1-2 minutes.

3. Once it is successfully deployed, let’s dive into the next chapter, where we assign the application permissions needed to authenticate to Microsoft Graph and perform the necessary operations.

Assign application permissions to your service principal in your logic app

You need to assign the Graph API permissions to either the managed identity or the service principal (enterprise app using OAuth-based auth) running the logic app, so it can send the API payload to the inbound provisioning endpoint.

We need to assign SynchronizationData-User.Upload and AuditLog.Read.All, which allow the logic app to upload the employee data to the provisioning service and read provisioning audit log activities. Lastly, it is also recommended to assign the User-LifeCycleInfo.ReadWrite.All permission in order to read and write the employeeLeaveDateTime and employeeHireDate values through Graph.

In this case we are using a system-assigned managed identity that was enabled in the Logic App itself – head to Identity in the left panel of your Logic App, then assign the permissions via PowerShell or Graph.

Install-Module -Name "Microsoft.Graph" -Scope AllUsers

Connect-MgGraph -Scopes "Application.Read.All","AppRoleAssignment.ReadWrite.All","RoleManagement.ReadWrite.Directory"

# The Microsoft Graph service principal (well-known AppId)
$GraphApp = Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'"

# Find our Logic App's managed service principal
$LogicAppName = "<name-of-your-logic-app>" # the service principal has the same name as the logic app we created earlier
$ManagedSp = Get-MgServicePrincipal -Filter "DisplayName eq '$LogicAppName'"

# Search for app role permission "SynchronizationData-User.Upload" and assign it to our service principal
$PermissionName = "SynchronizationData-User.Upload"
$AppRole = $GraphApp.AppRoles | Where-Object { $_.Value -eq $PermissionName -and $_.AllowedMemberTypes -contains "Application" }
New-MgServicePrincipalAppRoleAssignment -PrincipalId $ManagedSp.Id -ServicePrincipalId $ManagedSp.Id -ResourceId $GraphApp.Id -AppRoleId $AppRole.Id

# Search for app role permission "AuditLog.Read.All" and assign it to our service principal
$PermissionName = "AuditLog.Read.All"
$AppRole = $GraphApp.AppRoles | Where-Object { $_.Value -eq $PermissionName -and $_.AllowedMemberTypes -contains "Application" }
New-MgServicePrincipalAppRoleAssignment -PrincipalId $ManagedSp.Id -ServicePrincipalId $ManagedSp.Id -ResourceId $GraphApp.Id -AppRoleId $AppRole.Id

# Search for app role permission "User-LifeCycleInfo.ReadWrite.All" and assign it to our service principal
$PermissionName = "User-LifeCycleInfo.ReadWrite.All"
$AppRole = $GraphApp.AppRoles | Where-Object { $_.Value -eq $PermissionName -and $_.AllowedMemberTypes -contains "Application" }
New-MgServicePrincipalAppRoleAssignment -PrincipalId $ManagedSp.Id -ServicePrincipalId $ManagedSp.Id -ResourceId $GraphApp.Id -AppRoleId $AppRole.Id
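To confirm that the three assignments landed, you can list the app roles granted to the managed identity:

# Verify the app role assignments on the Logic App's managed identity
Get-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $ManagedSp.Id |
    Format-List AppRoleId, ResourceDisplayName, CreatedDateTime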

Adjust the Logic app template

1. Let’s take a look at our new Logic App and update some of the actions. As you may notice, the built-in logic app from our GitHub project leverages a storage account to convert the payload from CSV format to JSON.

This is not our goal and we will simply remove the following actions from our Logic app to begin with – see the marked actions below.

2. Add a Key Vault connector after the Recurrence trigger to retrieve the secret (our API key from BambooHR), choosing the managed identity that was created with our Logic App. This will be used in the next step for authentication purposes in our HTTP POST call.

Note. Remember to assign your managed identity the Key Vault Secrets User Azure RBAC role in the key vault where the API key is stored.


3. Add an HTTP POST call right after the Azure Key Vault action so that we are able to schedule our record sync for any source updates with a direct connection to our HR system through the URI. For the body request, set the Content-Type header to application/json, which is the expected format for our payload.

Finally add your required fields from the HR system to your body field – these are the fields that we are going to use as part of our SCIM schema mapping later in the logic app.

Note. Once you hover over the username field in the Authentication section, you will be able to retrieve the key secret value from the dynamic content panel on your right side. (A sketch of the resulting action in code view follows below.)
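For reference, the code view of the HTTP action could end up looking roughly like the sketch below. The action and connector names (HTTP_Get_BambooHR_Directory, Get_secret), the company domain and the field list are assumptions you would replace with your own; BambooHR uses Basic authentication with the API key as the username and any string (commonly "x") as the password:

"HTTP_Get_BambooHR_Directory": {
    "type": "Http",
    "runAfter": { "Get_secret": [ "Succeeded" ] },
    "inputs": {
        "method": "POST",
        "uri": "https://api.bamboohr.com/api/gateway.php/<companyDomain>/v1/reports/custom?format=JSON",
        "headers": {
            "Accept": "application/json",
            "Content-Type": "application/json"
        },
        "body": {
            "title": "Directory",
            "fields": [ "firstName", "lastName", "preferredName", "hireDate", "status", "workEmail" ]
        },
        "authentication": {
            "type": "Basic",
            "username": "@body('Get_secret')?['value']",
            "password": "x"
        }
    }
}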

4. The next step is to select “Body” in the content field from the dynamic content panel, parse the body request to JSON, and generate the sample schema based on the Postman steps below.


Use Postman to send a bulk request to the BambooHR directory

  1. Create a collection and fetch your API URI and key. Use Basic Auth for your authorization header in order to authenticate with your API key from BambooHR.

2. Create a request with the required fields from BambooHR and send your POST request to the custom report endpoint to pull the directory in bulk.

(Note. The HTTP POST request will not perform any changes in the directory; it only retrieves the entire directory, which we will parse to JSON in our logic app.)

Check out this link for the URI and field types and names in BambooHR: Field Names (bamboohr.com). Only list the fields that you need for your data mapping to Active Directory.
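If you want to sanity-check the same call outside Postman, a PowerShell equivalent could look like this (replace <companyDomain>, the key and the field list with your own values):

# Build the Basic auth header: BambooHR takes the API key as username and any string as password
$ApiKey  = "<your-bamboohr-api-key>"
$Pair    = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("$($ApiKey):x"))
$Headers = @{ Authorization = "Basic $Pair"; Accept = "application/json" }

# Request a custom report containing only the fields we need for the mapping
$Body = @{ title = "Directory"; fields = @("firstName","lastName","preferredName","hireDate","status","workEmail") } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri "https://api.bamboohr.com/api/gateway.php/<companyDomain>/v1/reports/custom?format=JSON" `
    -Headers $Headers -ContentType "application/json" -Body $Body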

3. You should receive a 200 OK response code and the following body:

4. Copy and paste the body of the response into the JSON parsing step. (Don’t worry about the values – attribute values are cleared and only the required fields are generated in the JSON schema below, meaning the employee directory payload itself will not be included once entered.)

5. After the body request is pasted into the section, you should end up with the following JSON structure. All the required fields from our HTTP body response are now successfully added with the value types [“string”, “null”].

Important info: These JSON value types matter because the payload we receive comes from the source system, which is controlled by the HCM specialists/HR staff. If a field value is empty in the onboarding order in BambooHR, it will generate a null value, which our connector must support in the bulk payload upon creation/update/deletion. So remember to add a “null” entry for each property in the JSON parsing.
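In practice, that means each property in the generated Parse JSON schema should allow both types, along these lines:

"preferredName": {
    "type": [ "string", "null" ]
}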

6. Now we want to initialize the variable coming from our JSON parsing as an array. This can be done by selecting “employees” under dynamic content in the value field, which points to the root of our JSON schema. Remember to remove the “rows” value from the initial JSONInputArray, since we don’t want to initialize the variable from a CSV file, but directly from the JSON parsing of the HTTP response.

7. The actions that were shipped with the Logic App template don’t need any modifications, except for this variable, which needs to be changed to the following expression in order to count the records coming from our JSON parse, pointing to the employees property.

Copy and paste the following expression into the Value field by switching to code view:
length(body('Parse_JSON')?['employees'])

8. The remaining initializations of the IterationCount, SCIMBulkPayload and InvocationDateTime variables can be skipped for now, since they don’t depend on any CSV input/actions. The remaining steps that we need to modify are under our conditions, where we define our SCIM attribute schema.

Attribute mapping and finalization of our custom logic

  1. Before constructing the SCIM schema in our logic app, which we will send to our API endpoint, let’s quickly head back to the enterprise application that we created earlier in the process. It serves as our provisioning endpoint, where we apply the logic for how users are provisioned to Active Directory.

    Note. This is a data operation action where we want to apply the mappings between our SCIM attributes and BambooHR fields, but in order to do so we need to extend the schema by configuring the fields that are not yet found in the SCIM schema below.

2. Navigate to your enterprise application and click on Attribute mapping in the provisioning tab. The listed mappings between the API (SCIM) attributes and AD are the default ones that are part of the extension schema below, which were shipped after the creation of your API-driven enterprise application:

urn:ietf:params:scim:schemas:extension:enterprise:2.0:User

3. Before adding the fields that we retrieved from our HTTP POST call, let’s add the three cloud extension AD attributes, which will be mapped to our HireDate, TerminationDate and WorkerType. These are not part of the default extension schema, hence we need to add them by clicking on “Edit attribute list for On-premises Active Directory”.

Note. The inbound and outbound sync rules that we created earlier for our three extension attributes in AD are the ones we will add to our extension schema in the enterprise application. Refer to the table below and pay attention to the API attribute name, which refers to our SCIM extension schema.

Inbound AD Attribute name | AD Extension Attribute name | Outbound Entra ID Attribute name | API extension Attribute Name
EmployeeType | msDS-cloudExtension4 | EmployeeType | urn:ietf:params:scim:schemas:extension:csv:1.0:User:WorkerType
EmployeeHireDate | msDS-cloudExtension1 | EmployeeHireDate | urn:ietf:params:scim:schemas:extension:csv:1.0:User:HireStatus
EmployeeLeaveDateTime | msDS-cloudExtension2 | EmployeeLeaveDateTime | urn:ietf:params:scim:schemas:extension:csv:1.0:User:TerminationStatus

4. Once clicked, it will probably take 40-60 seconds before the page loads, since it shows the full AD attribute schema – be patient, it’s legacy in a modern cloud suite 😉 – and remember to click Save once the attributes are entered.

5. Now let’s add the mappings between our source and target (Active Directory) attributes. Start with extension attributes 1 and 2, which hold our EmployeeHireDate and EmployeeLeaveDateTime values.

As you may have noticed, we need to format the date-time for extension attributes 1 and 2, since the provisioning API expects the date-time in the following output: 20221201080000.0Z. This is not a date string that BambooHR produces, hence we need an expression to convert the date string from BambooHR to an AD-friendly format, so that we can populate the value like this on the user property in Entra ID.

6. Refer to the expression below, which converts a date string from M/d/yyyy h:mm:ss tt into yyyyMMddhhmmss.fZ, the format the provisioning API expects (the DateAdd adjusts for the timezone offset):

FormatDateTime(DateAdd("h", 2, [urn:ietf:params:scim:schemas:extension:csv:1.0:User:HireStatus]), , "M/d/yyyy h:mm:ss tt", "yyyyMMddhhmmss.fZ")

7. Once the cloud extension attributes are added and saved in our mappings, let’s scroll up and click on the target attribute mapping “accountDisabled”. This AD attribute is mapped to a Switch expression in our default SCIM schema:

Switch([active], "False", "False", "True", "True", "False")

However, this switch does not look at the effective date of the first working day, hence this expression is useless for covering future hire scenarios.

8. Add the custom expression below, or adjust it to cover your needs. The custom expression supports the future hire use case by allowing BambooHR to create the user account in disabled mode in Active Directory 14 days before the first working day. Additionally, the day before the first working day, it will automatically sync the new joiner to Entra ID and enable his/her account. (Note. You can change the integer in the expression to suit your requirements.)


Switch([active], , "True", IIF(DateDiff("d", Now(), CDate([urn:ietf:params:scim:schemas:extension:csv:1.0:User:HireStatus]))<"1", "False", "True"), "False", "True")

9. You might have wondered about the UPN and mail creation for the new joiner. This is covered in the custom expression below, which generates a four-letter alias from the givenName and familyName of the user, falling back to a variant with a random character if the first value is already taken. For example, givenName “Megan” and familyName “Bowen” would produce mebo@contoso.com, or “mbo” plus a random lowercase letter if that value is taken. Let’s use the great expression tester tool and see how our custom expression deals with a given name and last name:

SelectUniqueValue(Join("@", ToLower(Join("", Left([name.givenName], 2), Mid([name.familyName], 1, 1), Mid([name.familyName], 2, 1))), "contoso.com"), Join("@", ToLower(Join("", Left([name.givenName], 1), Mid([name.familyName], 1, 1), Mid([name.familyName], 2, 1), RandomString(1, 0, 0, 0, 1, ""))), "contoso.com"))

10. Apply the same expression for both mail and the UserPrincipalName attribute in the mappings.

11. There might be requirements for how the full name of a user should be displayed. Let’s look at how we can take advantage of a custom expression that replaces the given name with a preferred name if the value exists in BambooHR.

If you want to use it, map the expression below to the givenName attribute in AD:

Coalesce([urn:ietf:params:scim:schemas:extension:csv:1.0:User:preferredName], [name.givenName])

Note. I always recommend taking advantage of the expression builder in the attribute mapping to validate the effectiveness of your custom expressions or switches.

To test that preferredName replaces givenName when the value exists, you can insert the test values below and check the expression output:

12. As described earlier, we are operating with two attribute schemas. The first is the enterprise 2.0 schema, urn:ietf:params:scim:schemas:extension:enterprise:2.0:User, which is the default schema shipped with the deployment template of our Logic App.

Secondly, we have our custom schema, urn:ietf:params:scim:schemas:extension:csv:1.0:User. This is the schema we will use to add our custom API attributes like HireStatus and all the relevant fields we retrieved from the BambooHR URI in our Postman HTTP call that are not yet mapped in the enterprise application by default.

Note. You can freely rename the parameter between extension:<Insert your own schema name>:1.0:User – I just called mine csv for demo purposes.

13. Navigate to the Attribute mapping section and click on Edit attribute list for API to extend our custom schema with the BambooHR fields that we retrieved from our HTTP call in step 3 previously.

14. Add all the attributes as string types, as you see in the highlights below, except for the Manager and WorkerID attributes. Those need to be added as references, since the API attribute named WorkerID must be mapped to the default attribute called externalId, which is the unique identifier of our employees in the master record of BambooHR. Finally, the manager should reference the user object, since it shares the same object type.

15. Now, with our custom logic applied for the future hire scenario, the date string converted to a supported format, and a custom expression defining how UserPrincipalNames are generated, let’s save all our configurations. Next, we’ll proceed to the final step: constructing the user schema in our logic app and applying the fields we’ve just added to the enterprise application in steps 1-13.

16. To save you some time, I’ve created the custom schema mappings as code in the data operation action below. You’ll still need to apply the adjustments in code view, since this can’t be done in the Logic App designer. Remember to save the configuration once the code is applied successfully – the editor will throw an error if it identifies any unknown parameters that are unsupported in the logic app.

},
"Construct_SCIMUser": {
    "inputs": {
        "bulkId": "@{guid()}",
        "data": {
            "active": "@if(equals(items('For_each')?['status'],'Active'),true,false)",
            "addresses": [
                {
                    "country": "@{items('For_each')?['country']}",
                    "locality": "@{items('For_each')?['location']}",
                    "postalCode": "@{items('For_each')?['ZipCode']}",
                    "streetAddress": "@{items('For_each')?['StreetAddress']}",
                    "type": "work"
                }
            ],
            "displayName": "@{items('For_each')?['fullName5']}",
            "externalId": "@{items('For_each')?['id']}",
            "name": {
                "familyName": "@{items('For_each')?['lastName']}",
                "givenName": "@{items('For_each')?['firstName']}",
                "middleName": "@{items('For_each')?['middleName']}"
            },
            "nickName": "@{items('For_each')?['UserID']}",
            "phoneNumbers": [
                {
                    "type": "work",
                    "value": "@{items('For_each')?['workPhone']}"
                }
            ],
            "schemas": [
                "urn:ietf:params:scim:schemas:core:2.0:User",
                "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User",
                "urn:ietf:params:scim:schemas:extension:csv:1.0:User"
            ],
            "title": "@{items('For_each')?['jobTitle']}",
            "urn:ietf:params:scim:schemas:extension:csv:1.0:User": {
                "GenderPronoun": "@{items('For_each')?['GenderPronoun']}",
                "HireStatus": "@{items('For_each')?['hireDate']}",
                "JobCode": "@{items('For_each')?['Custom01']}",
                "TerminationStatus": "@{items('For_each')?['TerminationDate']}",
                "WorkerType": "@{items('For_each')?['employmentHistoryStatus']}",
                "fullName5": "@{items('For_each')?['fullName5']}",
                "lastName": "@{items('For_each')?['lastName']}",
                "preferredName": "@{items('For_each')?['preferredName']}"
            },
            "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
                "costCenter": "@{items('For_each')?['CostCenter']}",
                "department": "@{items('For_each')?['department']}",
                "employeeID": "@{items('For_each')?['id']}",
                "manager": {
                    "value": "@{items('For_each')?['supervisorEID']}"
                },
                "middleName": "@{items('For_each')?['middleName']}",
                "organization": "@{items('For_each')?['Company']}"
            },
            "userName": "@{items('For_each')?['UserID']}"
        },
        "method": "POST",
        "path": "/Users"
    },

17. Expand the Condition action. As you will quickly notice, the iteration count is set to 50, meaning each bulk request sent to our API-driven inbound provisioning endpoint can contain a maximum of 50 user records – this can’t be adjusted, since the value is set in the backend of the provisioning engine.

18. Once you have expanded the Condition action, let’s jump right into the HTTP request action. This is where we send the bulk request with our user records (including any changes) to our API endpoint. Expand this action and paste in the URI, which is the API endpoint URL from our enterprise application – as stated earlier, this can be found under View Technical Information in the provisioning tab.

19. Copy and paste the entire Provisioning API endpoint URL into the URI field, and change the HTTP method to POST. This step is necessary because we want to deploy the bulk payload that we processed from the Master Record in BambooHR directly to our Enterprise application.

20. Our API-driven provisioning engine will apply all the custom logic and expressions we’ve configured in the previous steps within our Attribute mappings. Finally, authenticate with your system-assigned managed identity and paste https://graph.microsoft.com/ in the audience field. This allows our managed identity to authenticate to Microsoft Graph and leverage the graph permissions we assigned at the beginning of the blog. One of these permissions is SynchronizationData-User.Upload, which performs the bulk upload of our JSON payload to the identity synchronization service.
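To make the mechanics concrete, here is a hedged PowerShell sketch of the same call the HTTP action performs. The endpoint URL and token are placeholders; the URL comes from View Technical Information, and the identity behind the token must hold SynchronizationData-User.Upload:

# The provisioning API endpoint follows this pattern:
# https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipalId}/synchronization/jobs/{jobId}/bulkUpload
$Uri   = "<your-provisioning-API-endpoint-URL>"
$Token = "<access-token-for-graph.microsoft.com>"

Invoke-RestMethod -Method Post -Uri $Uri `
    -Headers @{ Authorization = "Bearer $Token" } `
    -ContentType "application/scim+json" `
    -Body $ScimBulkPayload   # the SCIM BulkRequest JSON assembled in the previous steps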

Testing

Once a new onboarding order has been filled out in BambooHR, the logic app will run and pick up the new entry by calling the API URL directly from our HTTP action, detecting any changes from the source system.

I would recommend using a scoping filter and including the test users that you want to manage through the provisioning service.

We can see in the screenshot below that our new “TestPerson” user was successfully synced from BambooHR with the employee status and hire date, and that the employee ID, also known as the source ID and our unique identifier, appears in the provisioning log.

Be aware that your provisioning service might fail if you don’t include the manager of a user object in your scoping filter – in our case we include the value of supervisorEID, which represents the unique ID of the manager object.

FAQ and troubleshooting guide

Convert hire and leave date-time values to a supported ISO format

You will receive an export error in your connector space in Entra Connect when trying to sync the value of EmployeeHireDateTime.

AD supports only a specific date-time format when transforming the string to a DateTime object in Active Directory. You have to convert it to a valid ISO 8601 format – check the reference below for all the functions that are available for your data mapping. Additionally, you can use FormatDateTime to present the date value in a specific way, for example:

FormatDateTime([extensionAttribute1], , "yyyyMMddHHmmss.fZ", "yyyy-MM-dd")

Reference for writing expressions for attribute mappings in Microsoft Entra Application Provisioning – Microsoft Entra ID | Microsoft Learn

You need to create and sync the extension attributes for EmployeeHireDate and EmployeeLeaveDateTime in the Synchronization Rules Editor. Once that is done, you need to assign the following Graph API permissions to your inbound provisioning app in Entra ID in order to map HireDate and TerminationStatus to your AD attributes for a successful sync.

API | Permission | Description | Type | Consent
Microsoft Graph | User-LifeCycleInfo.ReadWrite.All | Read and write all users' lifecycle information | Application | Requires admin consent
Microsoft Graph | User.Read.All | Read all users' full profiles | Application | Requires admin consent

Note. The value of EmployeeLeaveDateTime can only be retrieved via Graph and is by default not shown in the user properties, since it’s a hidden field. For more info about lifecycle workflow attributes, check out the following reference: How to synchronize attributes for Lifecycle workflows – Microsoft Entra ID Governance | Microsoft Learn

Cross referral chasing in subordinate domains

If you choose to sync your on-premises user objects, be aware that the cloud sync provisioning agent chases referrals, for example when resolving cross-domain manager references.


Let’s say you are operating in multiple countries, and for each country you have a child/subordinate domain or a separate forest (which requires a forest trust), where you want to make sure that reporting employees across your organization can be linked to their managers. The underlying LDAP API will then try to bind/correlate the target object to the object from the other domain by doing a cross-domain directory search.

Additional use cases: referral chasing not only resolves cross-domain manager references, but also validates the uniqueness of UPNs across multiple domains when users are created.

2 responses to “Get started with automatic user provisioning through a direct API-connector to BambooHR”

  1. Sebastian Stauber

    Pretty complex. Do you think it is worth it for an enterprise to go that route with Microsoft IGA, or is it better to just buy a mature IGA system like SailPoint or Saviynt?


    1. The API-driven inbound provisioning approach discussed in the blog is a more accessible and cost-effective solution for SMBs. It allows them to automate user provisioning without the need for extensive technical knowledge or significant business resources. While it might seem complex at first glance, the aim is to break down the process into manageable steps, making it feasible for smaller organizations to apply the JML logic through a Logic App and the API-driven enterprise app. Remember that automating lifecycle management processes (HR integration) is not the same thing as IGA.

      The way we define IGA today is about how we control and enforce compliance for our digital identities in the corporate workspace in terms of policies, procedures and access control. Lifecycle management is just a minor part and more of a supplement to the Identity governance concept. So yes I agree – solutions like SailPoint are more scalable and the way SailPoint describes the tiering role model is completely different in terms of Identity governance capabilities.

      For larger enterprises with more complex needs and resources, mature IGA systems like SailPoint or Saviynt might indeed be a better fit. They offer comprehensive features and capabilities that can effectively handle the intricacies of large-scale identity governance.

      Thanks again for your valuable input! 😊

