D2M Blog

Building an AI Center of Excellence:
How to Expand on an Existing CoE


At D2M, we work with global organizations to implement their data and Machine Learning (ML) projects. We often encounter organizations that have invested in the early stages of automation through Robotic Process Automation (RPA) and now wish to expand the scope of their Center of Excellence (CoE) to include cognitive technologies, ML and Artificial Intelligence (AI).

We call this ‘climbing the cognitive ladder’. When an organization begins to automate at scale, it typically starts with basic automation that does not require any decision-making. However, as the CoE matures and new use cases emerge, it becomes clear that immense value can be achieved by leveraging cognitive technologies.

Organizations then begin by expanding their bot offerings to include cognitive automation, such as Optical Character Recognition (OCR) and Intelligent Character Recognition (ICR).
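As a rough illustration, an OCR bot built on an open-source engine such as Tesseract can turn a scanned form into machine-readable text in a few lines. This is only a minimal sketch; the file name is hypothetical and a real deployment would typically sit behind an RPA platform or a managed OCR service.

```python
from PIL import Image          # pip install pillow
import pytesseract             # pip install pytesseract (requires the Tesseract engine)

# Hypothetical scanned form pulled from an RPA bot's work queue.
scanned_form = Image.open("vendor_form.png")

# OCR converts the image into plain text that downstream automations can parse.
extracted_text = pytesseract.image_to_string(scanned_form)
print(extracted_text)
```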

This makes user stories involving automated form processing easy to deliver. Organizations also use natural language processing (NLP) to extract knowledge from text, such as invoice numbers from emails or terms from contracts, and NLP-based chatbots to facilitate client interaction with systems and processes.
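A minimal sketch of this kind of extraction, assuming a hypothetical invoice-number format, might look like the following. A production system would tune the pattern per vendor, or replace it with an NLP named-entity model for free-form text.

```python
import re

# Hypothetical invoice-number format: "INV-" followed by 6-8 digits.
INVOICE_PATTERN = re.compile(r"\bINV-\d{6,8}\b")

def extract_invoice_numbers(email_body: str) -> list[str]:
    """Return every candidate invoice number found in an email body."""
    return INVOICE_PATTERN.findall(email_body)

email = "Hi team, please pay INV-0042917 and INV-0042918 before Friday."
print(extract_invoice_numbers(email))  # ['INV-0042917', 'INV-0042918']
```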

All these advances have two important things in common:

  1. Unstructured data is transformed into structured data that can be used in automation.
  2. These techniques are probabilistic, so planning for exceptions must take place early on.

Structured data is the key to a data-first organization, so these techniques deliver great value. The second point, however, is critical: the CoE must ensure that each project has a detailed risk assessment in place that accounts for model accuracy.

Risk assessment and model accuracy requirements go together. A model that recommends banner ads can tolerate an 80% accuracy rate, whereas a model that helps a self-driving car recognize a bicyclist cannot. When creating requirements, stakeholders must agree on the level of accuracy required and address the inherently probabilistic nature of AI, including the consequences of an incorrect answer from a model. Every CoE must therefore complete a risk assessment and document its exception handling to ensure good governance.
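In practice, documented exception handling often takes the form of a confidence threshold agreed during the risk assessment: outputs below the threshold are routed to a person rather than processed automatically. Here is a minimal sketch, assuming a hypothetical Prediction structure and threshold value.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float  # model's estimated probability for the label, 0.0-1.0

# Hypothetical threshold agreed during the risk assessment; an ad recommender
# might accept 0.80, while a safety-critical model would demand far more.
CONFIDENCE_THRESHOLD = 0.95

def route(prediction: Prediction) -> str:
    """Auto-process confident outputs; send everything else down the exception path."""
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        return "auto-process"
    return "human-review"  # the documented exception path

print(route(Prediction("invoice approved", 0.98)))  # auto-process
print(route(Prediction("invoice approved", 0.71)))  # human-review
```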

You’ll note that we haven’t talked about building custom AI/ML models yet. Everything discussed to this point can be achieved using off-the-shelf software and services. The CoE should be able to guide different business units to the right tools and techniques for the job and provide the requisite support. In our next blog, we will dive deeper into how we’ve seen organizations evolve their RPA CoE into a fully functional automation CoE.
