The quality of source data has been found lacking

Over the past few weeks I’ve been working with a prospective customer on their identity governance requirements. During one of our conversations about automating identity provisioning processes, an interesting comment was made: they preferred a human to perform all identity provisioning actions, so that a human would have a chance to decide whether the requested permissions made sense. This raised some additional questions for me, not least which framework and internal procedure the IT support engineer would use to decide which permissions are correct, and how they currently determine whether granted access and permissions are in line with those frameworks.

Digging a little further, the root cause of this uncertainty about automating JML (joiner-mover-leaver) processes had less to do with clarity about what should be provisioned for different types of employees and roles. Rather, the uncertainty was due to a deep distrust of the quality of the source data. Their HR system is not consistent in how job functions are defined and attached to employees: two employees with the same function in the organization could have different job titles or roles in the HR system. This obviously leads to a lot of complexity in defining provisioning rules inside any identity platform to get the right outcomes.

Generally my advice in such situations is to first make sure that the source uses a single consistent dictionary across all employees, both for the initial setup and on an ongoing basis for future employees, because garbage in is garbage out. On top of that, you want to make sure that processes exist to update this dictionary across the board when organizational changes need to be deployed across all systems. And more importantly, that those processes are actually followed and implemented, because all too often the effort is made, then disregarded after a few months, and all of a sudden the IT support engineer is in charge of making identity decisions again. So clean up the source data and remove inaccurate data, because the ownership of correct identity data does not sit with the IT department, but where the relationship with the employee is owned. And follow this not just for employees: it should be common practice for non-HR identities as well. Someone owns the relationship with a specific vendor that needs access to systems, so make sure that ownership of those identities lies there too.

Fortunately for this customer the number of employees was not too high, so it would have been a straightforward task. Even better, an enterprising IT engineer had already done a lot of clean-up of this data in their ITSM platform. The job functions and job titles from the ITSM system were validated and approved by HR, new procedures were written for future actions, and the only thing left to do was to update the source data inside HR. By connecting multiple sources, enriching the data pulled from the HR system into the identity platform with data from the ITSM platform, and then performing a write-back to the HR system with the updated attributes, this project was completed quickly by using the identity platform as a two-way ETL engine.
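The enrichment and write-back flow can be sketched roughly as below. This is a minimal illustration only: the record structure, field names and employee IDs are hypothetical stand-ins for whatever the HR and ITSM connectors actually expose, not any real product API.

```python
# Sketch of the enrichment step: overwrite inconsistent HR job titles
# with the HR-approved titles cleaned up in the ITSM platform.
# All record shapes and IDs below are illustrative.

def enrich_identities(hr_records, itsm_titles):
    """Return the HR records whose job title needs a write-back."""
    updates = []
    for record in hr_records:
        clean_title = itsm_titles.get(record["employee_id"])
        if clean_title and clean_title != record["job_title"]:
            # Keep the record intact, only the title changes.
            updates.append({**record, "job_title": clean_title})
    return updates

hr_records = [
    {"employee_id": "E001", "job_title": "Sr. Acct Mgr"},
    {"employee_id": "E002", "job_title": "Senior Account Manager"},
]
itsm_titles = {
    "E001": "Senior Account Manager",
    "E002": "Senior Account Manager",
}

# Only E001 needs a write-back; E002 already matches the dictionary.
for update in enrich_identities(hr_records, itsm_titles):
    print(update["employee_id"], "->", update["job_title"])
```

In the real project the list of updates would then be pushed back to the HR system through its own API, which is the "two-way" half of the ETL engine.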

I understand this will not work everywhere and the exercise is meaningless if the procedures are not changed and implemented at an HR department level. And more importantly, this is the opportunity for us as identity practitioners to educate other departments on how their work supports building more secure operations at every level in the organization. Make this a shared responsibility and a shared journey, because security is too big of a job to leave to any single team within the organization.

The Common Case of having more Roles than Identities

Oktane truly kept me busy last week, so a few days later than anticipated I’m able to share my first Notes from the Field. Recently a fresh RFP landed in my inbox: an organization was looking to replace their existing legacy identity management solution with a new, more modern identity governance platform. One of their Must Have capabilities immediately raised suspicion: an absolute requirement was a migration of their current roles matrix, with all application mappings and assigned roles, without any custom work.

This wasn’t the first time I’ve seen such a requirement, or the requirement to do role mining. In my experience, in almost all cases the only reason an organization asks for this is that they have no control over their authorization model. As a result, permissions have been assigned over the years, culminating in access for identities that is not in line with their role and their additional projects. How do we know they have lost control over their authorizations? Because if they had control and knew exactly how to map internal job functions to required authorizations, they would simply tell us to implement their existing, updated policies and have the authorizations applied dynamically. There might still be a few edge cases and exceptions that require some custom work, but the majority of application assignments and entitlements would be applied automatically.
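The dynamic, policy-driven alternative can be sketched in a few lines. This is a toy illustration of the idea, not any vendor's policy engine: the rules, departments and entitlement names are invented for the example.

```python
# Minimal sketch of policy-based assignment: entitlements are derived
# from an identity's attributes by a small set of rules, instead of
# thousands of hand-maintained roles. All names are illustrative.

POLICIES = [
    # (condition on the user, entitlements granted when it matches)
    (lambda u: True,                            {"email", "intranet"}),
    (lambda u: u["department"] == "Finance",    {"erp-reporting"}),
    (lambda u: u["job_function"] == "AP Clerk", {"erp-invoicing"}),
]

def entitlements_for(user):
    """Evaluate every policy and union the granted entitlements."""
    granted = set()
    for condition, entitlements in POLICIES:
        if condition(user):
            granted |= entitlements
    return granted

clerk = {"department": "Finance", "job_function": "AP Clerk"}
print(sorted(entitlements_for(clerk)))
# -> ['email', 'erp-invoicing', 'erp-reporting', 'intranet']
```

When the clerk moves to a different department, the same evaluation simply produces a different set: access follows the attributes, and no new role object is created per person.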

To validate my suspicions, I started looking through my Field Notes for this customer. I had spoken with them at multiple events over the past two years, and my initial thoughts proved correct: during a conversation last year they mentioned they had close to 10,000 different roles for their 6,000 identities. Migrating that over to a newer tool only gives you exactly that: a more modern tool in which you are still not in control of your identities and their authorizations. It may look prettier, and your teams might be able to create roles a bit quicker, but it doesn’t improve your organization’s security posture and you will still struggle with the same challenges.

My feedback on the RFP was to rethink the project, take a few steps back and think about the models they actually want to implement. I love to sell, but I feel it’s more important to help customers achieve their goals than to sell another product. It also got me thinking about RBAC and why so many organizations struggle to keep their RBAC model under control. This is not a dig at RBAC or at structuring your organization in roles. I do think most modern organizations require a more flexible approach, such as policy-based access controls, but for many organizations an RBAC model can work perfectly. Why, then, is it so common to speak with organizations that have two or three times as many roles as employees?

I started to think back to the time I was doing this myself, in one of my first IT jobs at the end of the last century. For many organizations the challenges started in that era. Where roles had previously been mostly contained within a single application, with the rise of enterprise directories we started to build out roles touching multiple applications, granting permissions across file servers and allowing very specific permissions within databases. That increased the complexity of managing roles, and instead of following the best practice of creating a new role and deactivating the old one when additional or changed permissions were needed, we started to amend existing roles.

From there the problem grew exponentially. Because no existing role matched the profile of a new joiner exactly, we would just create a new role, typically by copying an old role and making a few changes. Because every identity had a slightly different profile and was unique in its own way, the path to parity between the number of identities and the number of roles was completed quickly. With new hires joining the organization and old roles never being deleted, because you never knew who else would be impacted, the roles now outnumber the employees.

As a result, it takes a long time to implement new governance solutions and get value from modern technologies, and we’re looking at AI to help us sort out this mess. Is there an easy solution? No, I don’t think so. But to avoid being stuck in the same situation in 3, 5, or even 10 years’ time, the only solution is to go back to basics. Think about a dynamic authorization model that fits your dynamic organization today, not the organization of 5 or 10 years ago. Understand and accept that not all of your employees will keep the same permissions they currently enjoy. Nor do they have to, because their roles and responsibilities have typically also shifted over all those years. Map the critical authorizations and make sure you have a plan to enforce them when the automated deployment of entitlements doesn’t fully cover them. Only by mapping your model again will you set your organization up for success and reach your goals.

Welcome back!

The title is more for myself than for anyone else. If you’ve seen some of my earlier posts, then welcome back. If not, then welcome to my blog.

Since my last post a lot has changed. I’m no longer employed by Citrix or ShareFile. After a short stint at OpenText, following their acquisition of Carbonite where I was employed, this November will mark my 4-year anniversary in presales at Okta. In that role I encounter many of the challenges organizations are struggling with around identity security. From lifecycle processes and authentication paths to governance, I’m having discussions on these every day. Every challenge is similar, but also very different and unique. Add a sprinkling of AI to the conversation and you have an idea of an average working day at the moment. Talking with the people around me at Okta, we came to the conclusion that those discussions, the thought processes around them and potential solutions would be interesting to share with a broader public.

I keep my notes in Field Notes notebooks, and to link that to my writing I’m starting the Notes from the Field series. Occasionally I will also write about more technical topics or Okta product releases. Small warning: I’ll be traveling to Oktane this afternoon, so my first post, reflecting on an RFP that hit my inbox a while back, will go up later this week, depending on how my travels to Las Vegas go.

My first reference architecture

I’ve been working with customers looking at ShareFile and Citrix Content Collaboration since 2013. In many of those conversations the main topic is “the best or recommended way” to deploy an on-premises repository to store files. Over the years, we’ve collected a lot of knowledge and insight into how customers deploy their storage zones and what our guidance and best practices should be.

As a member of the Technical Marketing team at Citrix and with the launch of Citrix Tech Zone, it was time to put this knowledge into a reference architecture. I’ve been working with our technical pre-sales, post-sales deployment and product management teams to gather all available knowledge and create a consistent architecture. The result is not the end stage, but rather a starting point to incorporate more customer and partner feedback and improve on this initial version.

So head over to the reference architecture and let us know your feedback: both what you like, and what’s missing or where you have a different opinion on how a storage zone should be deployed.

First time for everything

Today is the first time I’ve actually pre-ordered a new Apple iPhone. This time was different, as my iPhone 7 is broken and I’m currently using my ageing iPhone 6S Plus. Its battery performance is going down and it’s struggling to keep up with iOS 12 and its new features, so it was easy to convince myself to order one as soon as possible.

I’ve selected the iPhone XS Max in Space Grey with 256GB of storage. Having used the Plus exclusively over the past two months, I’ve come to like the bigger screen. I’m also planning to reduce the amount of stuff I bring on business trips. The current plan is to bring either the iPad Pro or my 2017 MacBook Pro (or even the slower 2015 MacBook), depending on the purpose of the trip.

I’m still thinking about getting a new Watch as well. My first-generation Watch (stainless steel in black) is due for replacement. But there are no stainless steel models available in the Netherlands and I’m not convinced I want to switch to an aluminium model. My current plan is to wait, have a look at the new models when I’m in the US next month and then decide. I have a few more trips to the US planned for the next couple of months, so plenty of opportunity to get one.

My expense receipts workflow

Filing expense reports must be a task that many employees dread, a conclusion drawn from my own feelings and from the number of expense reports that land on my desk for approval containing expenses incurred over 30 days ago. While the experience has improved, with mobile apps and digital receipt attachments, my own workflow still had some areas for improvement.

At the company I work for, we use Concur to file our reports. And while I like the mobile app, most of the time I’m using the website, as it’s part of my daily routine. My challenge is not so much with creating the reports: the corporate card entries are imported and identified correctly most of the time, making the actual report a task I can complete in a matter of minutes.

The challenge is getting the expense receipts into the system. Yes, the Concur mobile app has a camera feature for taking a photo, and in most cases the app does a good job of optimizing the image and cropping it to the actual receipt. The one thing that’s missing is that the images are then only available in Concur; no permanent copy is kept. To keep my own copy, I use a different tool on my iPhone.

To automate my workflow, I use ShareFile, the Citrix ScanDirect app and Zapier to send the images to Concur. I use the SMTP server of my personal email account for this, but you can use other services like Gmail. And you can recreate something similar with other cloud services; I just like to drink our own champagne.

The steps are really simple: I take the picture with ScanDirect and store it in a dedicated folder on ShareFile. I’ve marked that folder as a favourite, making it easy to navigate to.

Zapier does the rest: it monitors the folder for new files, and when a file arrives, it sends an email to Concur.
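That Zapier step can be approximated with a small script, for readers who prefer to self-host the glue. This is a sketch under stated assumptions: the folder path, SMTP host, sender and recipient addresses below are placeholders, not the real ShareFile mount or Concur receipts address.

```python
# Rough stand-in for the Zapier step: scan a synced receipts folder
# and email each new image to an expense system's receipt inbox.
# Every path, host and address here is a placeholder.
import smtplib
from email.message import EmailMessage
from pathlib import Path

RECEIPTS_DIR = Path("~/ShareFile/Receipts").expanduser()  # placeholder
CONCUR_ADDRESS = "receipts@example.com"                   # placeholder
SMTP_HOST = "smtp.example.com"                            # placeholder

def build_receipt_email(image_path: Path) -> EmailMessage:
    """Wrap one receipt image in an email addressed to the inbox."""
    msg = EmailMessage()
    msg["From"] = "me@example.com"                        # placeholder
    msg["To"] = CONCUR_ADDRESS
    msg["Subject"] = f"Receipt: {image_path.name}"
    msg.add_attachment(image_path.read_bytes(),
                       maintype="image", subtype="jpeg",
                       filename=image_path.name)
    return msg

def send_new_receipts(seen: set) -> set:
    """Email any receipt not sent before; return the updated set."""
    for image in RECEIPTS_DIR.glob("*.jpg"):
        if image.name not in seen:
            with smtplib.SMTP(SMTP_HOST) as smtp:
                smtp.send_message(build_receipt_email(image))
            seen.add(image.name)
    return seen
```

Run periodically (cron, for example) with a persisted `seen` set, it mirrors what the Zapier folder trigger does; the hosted service just saves you from maintaining the script.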

After filing the report, I keep the receipts in that folder until the report has been approved and any personal payments are reimbursed into my bank account. The paper receipt goes into recycling after I’ve captured the images.