Choosing the right program structure for your organization and culture is an important decision: it can be a catalyst for growth or an impediment that creates friction and dissatisfied stakeholders.
As with most of the program design decisions organizations will tackle when establishing their RPA programs, there is not a universal best structure. However, the right one won’t clash with the culture and will support the expectations, risk tolerance, and compliance needs of the organization. Here are a few general considerations that can help you make a good choice.
Centralized versus Decentralized Control
When designing your program, it’s good practice to proactively decide whether you want a highly centralized or highly decentralized program. While there is not a perfect level of control, there are certain guidelines that apply fairly broadly to most organizations irrespective of their cultures and management structure.
Compliance with Regulations
We recommend, at a minimum, centralizing the governance of anything that applies consistently across your organization, such as adherence to external regulatory mandates like the General Data Protection Regulation (GDPR), or development methodologies affecting critical processes and systems that impact Sarbanes-Oxley (SOX) compliance. These external compliance requirements can find their way into governance, development methodologies, and the artifacts and checkpoints required prior to deploying bots to production.
Shared Capabilities and Resources
Another area that can drive the level of centralization is the development of shared capabilities (e.g., coding best practices and reusable code libraries for common activities such as securely accessing a credential vault, error logging, alerting, and logging business value to a corporate automation dashboard). Centralizing the responsibility for creating and maintaining the things all stakeholders need can have broad benefits in terms of cost, efficiency, risk mitigation, and compliance. At a minimum, stakeholders will appreciate how these services and capabilities make their lives easier, and at a program level, the company will have a way to maintain minimum standards and enforce enterprise-wide policies.
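To make this concrete, a shared library of the kind described above might look like the following minimal sketch in Python. The function names, the environment-variable stand-in for a credential vault, and the shape of the dashboard record are all illustrative assumptions, not any specific RPA vendor's API:

```python
import logging
import os

# Hypothetical shared helper module a COE might publish so every bot
# team reuses the same vetted plumbing instead of reinventing it.
logger = logging.getLogger("rpa.shared")


def get_credential(name: str) -> str:
    """Fetch a secret from the central vault.

    Stubbed here with an environment variable; a real implementation
    would call the organization's credential vault so secrets are
    never hard-coded inside individual bots.
    """
    value = os.environ.get(f"RPA_SECRET_{name.upper()}")
    if value is None:
        raise KeyError(f"credential {name!r} not found in vault")
    return value


def log_business_value(bot_name: str, minutes_saved: float) -> dict:
    """Record the value a bot run produced, in a shape a corporate
    automation dashboard could ingest."""
    record = {"bot": bot_name, "minutes_saved": minutes_saved}
    logger.info("business value: %s", record)
    return record
```

Because every bot calls the same two helpers, the COE can change how secrets are retrieved or how value is reported in one place and have the change apply program-wide.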
Guardrails versus Central Control
When evaluating how your automation program should be organized, we advise clients to have a bias toward guardrails over strict controls. This doesn’t mean you should not build standards and requirements into governance, but it recognizes the practical reality that policing the quality of every bot built within the organization is not realistic for most, and is likely to lead to bottlenecks in your bot development pipeline. Some key areas to consider include:
1. Center of Excellence: Gatekeeper or Enabler
Serving as a gatekeeper might provide a sense of security that bots will remain under control, but this tends to be more of an illusion than an actual achievement of quality and compliance. We recommend structuring the program to enable stakeholders to succeed by providing guidelines, knowledge sharing, and minimum requirements that make the organization’s expectations for quality, security, risk, and ROI easy to understand. Rules of engagement that are easy to understand are the foundation of frictionless governance: everyone knows their roles and responsibilities and has the resources necessary to comply.
2. Control versus Guidance
Are there real internal or external factors that require specific controls be put in place, and what is the consequence if those requirements are not met? You want to centralize controls for things that are truly critical to mitigate risks, but whenever possible simplify governance to help improve adoption and the velocity of your program. For example, are you attempting to automate external financial filings in a public company? Ensuring that bad actors do not exploit non-public financial information may be a good reason to require specific controls to be incorporated in a bot in order to increase security.
3. Cultural Fit
What will work in your culture? Is your company a command-and-control organization, or highly decentralized and entrepreneurial? Whichever end of the spectrum you fall on, do your best to design a program that will integrate well while still ensuring your program goals can be achieved. For example, if your company is highly security conscious, with end-user computers locked down and strict firewall settings controlling the flow of information into and out of the company, you might encounter obstacles building bots in a decentralized manner. Something as simple as adjusting a screen resolution so a bot can run successfully may be restricted, and global security policy may prohibit end users from changing these settings.
Another example we have seen in some companies is an end-user machine policy that automatically locks an idle machine after 15 minutes. This is a common end-user desktop policy that is sometimes applied to the virtual machines where the bots live. In practice, these VMs act more like servers, which are generally governed by a different set of policies than user desktops. However, if the VM runs a Windows desktop operating system, corporate desktop policies may be applied to it by default. The screen-lock policy is intended to secure unattended workstations that users forget to lock before leaving their desks, but in the RPA environment the bot may fail to complete its task when the screen locks automatically.
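For reference, an idle lock like this is typically delivered through Group Policy, which lands as registry values on the machine. The fragment below is a hedged illustration of what such a policy looks like; the exact policy objects and timeout values vary by organization:

```ini
; Illustrative Group Policy registry values behind a 15-minute idle lock
[HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\Control Panel\Desktop]
; screen saver enabled
"ScreenSaveActive"="1"
; idle seconds before the saver fires (900 = 15 minutes)
"ScreenSaveTimeOut"="900"
; require logon when the session resumes, i.e. the lock itself
"ScreenSaverIsSecure"="1"
```

Because these values sit in the policy hive, a bot (or its developer) cannot simply change them on the VM; an exemption has to come from whoever administers the Group Policy objects, which is why the conversation with the security organization matters.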
These VMs are not physical machines sitting unattended on employee desks; they are virtual environments in a secured data center accessible only to system administrators. So a policy aimed at reducing risks introduced by end users is impeding a program intended to drive efficiencies; essentially, the policy is at odds with the program. If the security organization refuses to adjust the policy, bot developers may resort to circumventing the screen lock by simulating screen activity. Common sense says this approach defeats the purpose of the control, but developers may have no other way to ensure the bots run reliably.
Ultimately, culture will dictate more of how your program will need to be structured than you might expect. Early education of the technology and program goals can help overcome some of this, but you will experience fewer obstacles if you structure your program to mesh with the culture.
Interested in learning more about establishing a Robotic Operating Center of Excellence?
Download our guidebook for insight into why a COE matters, the steps and key considerations necessary to establish a world-class program, and how to maximize your business value.