Best Practices in Prototyping

Model the Impact to the Business Process

Similar to the advanced warning problem described above, prototyping teams must be mindful of the day-to-day impact their models have on the business. A customer relationship manager may only have time to call on five customers per day. But if the model requires them to call on an average of 15 per day to reduce the risk of customer attrition, the business cannot act on the model’s recommendations without adopting fundamental changes that may not be easy to implement.

We recommend evaluating models to understand the case load of alerts that may be generated – again, using a replay of history to simulate a real-world scenario.

In Figure 34, shown in the previous section, it may appear at first glance that there are only four alerts that correspond to the four failures predicted. In reality, though, we have to take into account the time-based nature of the predictions. If the model is generating results daily, then the actual number of daily alerts could easily total 40 (10x the number of predicted failures). If the model is generating results hourly, we could be looking at 2,400 alerts!
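
As a concrete illustration, the sketch below replays a table of historical risk scores and counts how many alerts a simple threshold rule would have raised at daily versus hourly scoring intervals. The column names, threshold, and synthetic history are illustrative assumptions, not a reference to any particular implementation.

```python
import numpy as np
import pandas as pd

# Illustrative assumption: hourly risk scores for one asset over ten days,
# rising steadily past an assumed alert threshold of 0.8 before a failure.
ALERT_THRESHOLD = 0.8

timestamps = pd.date_range("2023-01-01", periods=10 * 24, freq="h")
risk = np.linspace(0.2, 0.95, len(timestamps))
history = pd.DataFrame({"timestamp": timestamps, "risk_score": risk})

def count_alerts(history: pd.DataFrame, scoring_freq: str) -> int:
    """Count scoring intervals whose worst risk score breaches the threshold."""
    worst_per_interval = (
        history.set_index("timestamp")["risk_score"]
        .resample(scoring_freq)
        .max()
        .dropna()
    )
    return int((worst_per_interval >= ALERT_THRESHOLD).sum())

print("Alerts if scored daily: ", count_alerts(history, "D"))   # a few
print("Alerts if scored hourly:", count_alerts(history, "h"))   # dozens
```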

There may be multiple solutions to the problem:

  • Use software post-processing to generate new alerts only when risk scores exceed thresholds for the first time (see the sketch following this list)
  • Evaluate scores at an interval that is compatible with business operations
  • Improve the model further before promoting it to production
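
For the first option, a minimal post-processing sketch might emit a new alert only the first time an asset’s risk score crosses the threshold and re-arm once the score falls back below it. The asset identifier, threshold, and score stream below are hypothetical.

```python
# Suppress repeat alerts: raise a new one only on a fresh threshold crossing.
ALERT_THRESHOLD = 0.8

class FirstCrossingAlerter:
    def __init__(self, threshold: float = ALERT_THRESHOLD):
        self.threshold = threshold
        self.active = set()  # assets currently above the threshold

    def process(self, asset_id: str, risk_score: float) -> bool:
        """Return True only when this score is a fresh threshold crossing."""
        if risk_score >= self.threshold:
            if asset_id not in self.active:
                self.active.add(asset_id)
                return True            # first crossing -> new alert
            return False               # still above threshold -> suppress
        self.active.discard(asset_id)  # dropped below -> re-arm
        return False

# Example: ten hourly scores yield a single alert instead of five
alerter = FirstCrossingAlerter()
scores = [0.4, 0.6, 0.82, 0.85, 0.9, 0.88, 0.83, 0.7, 0.5, 0.4]
alerts = [s for s in scores if alerter.process("pump_1", s)]
print(f"{len(alerts)} alert(s) raised from {len(scores)} scores")
```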

Regardless of the final solution, it is important to think about the end users and what the model will (or should) require of them and to do so during the prototyping phase.

It’s also critical to design the AI/ML-enabled business process appropriately to ensure value capture and the organizational buy-in required to support any necessary change management. AI/ML techniques enable fundamental business transformation. Ideally, algorithms should be designed – within the constraints of feasibility – to simplify business process transitions while maximizing value capture.

Part of redesigning the business process could include designing an office review process in which trained analysts evaluate algorithm results and formally adjudicate cases for promotion to others within the organization. These analyst roles may not exist prior to the AI/ML transformation but are central to the fully AI/ML-enabled organization.

Building on the previous customer attrition example, a financial institution could deploy AI/ML-enabled applications for customer attrition prediction to be used by a team of central office analysts. These analysts could review risk scores, examine evidence packages, affirm or reject algorithm recommendations, and capture valuable feedback for data scientists.
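
One hypothetical way to represent this workflow in software is a case record that carries the algorithm’s output alongside the analyst’s adjudication and feedback. The fields below are illustrative assumptions rather than a prescribed schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class Disposition(Enum):
    PENDING = "pending"
    AFFIRMED = "affirmed"   # promote to a customer relationship representative
    REJECTED = "rejected"   # feedback routed back to the data science team

@dataclass
class AttritionCase:
    customer_id: str
    risk_score: float
    evidence: dict = field(default_factory=dict)  # features behind the score
    disposition: Disposition = Disposition.PENDING
    analyst_notes: str = ""

# Example: an analyst affirms a high-risk case and records feedback
case = AttritionCase("cust-1042", 0.91, evidence={"days_since_last_login": 45})
case.disposition = Disposition.AFFIRMED
case.analyst_notes = "Usage dropped sharply after the fee change; contact this week."
```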

After a thorough review, analysts could then promote selected cases to customer relationship representatives who could interact with clients to reduce customer churn, improve customer satisfaction, and capture value for the financial institution.

This type of multi-tier business process is just one example of how to design an AI/ML-enabled organizational process. Other examples could include direct dispatch of cases to field representatives, support of remote monitoring/engineering functions, or automated control/application of results in certain cases – and those are just a start.

The principal requirement is a detailed business process evaluation at the time of algorithm prototyping. The business process should guide algorithm design, including how algorithm performance is evaluated, how often the algorithm is run, what business value can be captured, and choices such as case volumes, alert thresholds, setpoints, precision/recall tradeoffs, and model retraining paradigms.
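
For example, a simple threshold sweep over held-out predictions can make several of these tradeoffs explicit by reporting precision, recall, and the implied daily case load side by side. The synthetic labels, scores, and daily scoring assumption below are hypothetical.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

# Illustrative assumptions: 300 held-out daily predictions spanning 30 days,
# with binary churn labels and model risk scores in [0, 1].
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=300)                    # 1 = customer churned
y_score = np.clip(0.3 * y_true + 0.7 * rng.random(300), 0, 1)
DAYS_OF_HISTORY = 30

for threshold in (0.5, 0.7, 0.9):
    y_pred = (y_score >= threshold).astype(int)
    daily_case_load = y_pred.sum() / DAYS_OF_HISTORY     # cases analysts review
    print(f"threshold={threshold:.1f}  "
          f"precision={precision_score(y_true, y_pred):.2f}  "
          f"recall={recall_score(y_true, y_pred):.2f}  "
          f"cases/day={daily_case_load:.1f}")
```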