THE UNFULFILLED PROMISE OF OPERATIONAL OPTIMIZATION

  • Edwin Hui, Executive Director & APAC Data Lead, Capco
  • Published: 06 June 2022

 

The threats posed by automation have loomed large in the popular imagination for many decades, but where the idea of robots replacing humans was once the stuff of mere futuristic fantasy, it is today increasingly a reality across many services and industries.

Take, for example, how Alibaba sells cars via Tmall, where a car is purchased via a smartphone app and then picked up from an automated multi-storey ‘vending machine’. This has huge implications not only for the many thousands of staff working in the car sales trade, but also for existing and future client relationships and for interactions with related business networks.

On the other hand, while financial institutions in Asia are adopting new technologies to re-invent their processes to improve efficiency and the customer experience, a large amount of manual work remains untouched. For example, when a bank account is opened, a lot of the checking of submitted documents is still conducted by humans in the back office. Many of these processes require some human experience and intelligence to complete, but are at the same time very good candidates for automation through artificial intelligence (AI).

However, several factors have so far held the region back from realizing the potential of AI-driven automation:

Everything behind the firewall

Strict regulation, especially in the banking sector, means that financial institutions have been relatively slow to progress their cloud journeys. While insurers have accelerated cloud migration in the past few years, the majority of banks in Asia still keep most of their applications and data within their own data centres. However, many automation use cases rely on deep learning algorithms that require a lot of processing power – be that via traditional CPUs or GPUs (or TPUs in the case of Google) – for training and execution.

If everything is maintained on-premises, the necessary initial investment in server farms will scare off even the most forward-thinking institutions. Cloud, on the other hand, is the perfect medium for coping with spikes in processing requirements thanks to its pay-as-you-use model and the consequent reduction in both fixed costs and the need for maintenance/hardware upgrades.

That said, banks and insurers are exploring how to overcome this hurdle, for example by scrambling data before putting it on the cloud. The cost of this preparatory work, together with regulatory compliance requirements, needs to be considered, and both factors will shape the optimal hybrid infrastructure, e.g., what goes to the cloud and what needs to stay on-premises.
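
As a minimal sketch of what such data scrambling might look like in practice (assuming a simple keyed-hash pseudonymization approach in Python, with purely illustrative field names and records), sensitive identifiers can be made irreversible before anything leaves the data centre, while the numeric fields needed for modelling remain usable:

    import hashlib
    import hmac

    # Hypothetical secret key kept in an on-premises vault; only scrambled
    # values ever leave the data centre.
    SECRET_KEY = b"replace-with-a-key-from-the-on-prem-vault"

    def pseudonymise(value: str) -> str:
        """Replace a sensitive value with a keyed, irreversible hash."""
        return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

    def scramble_record(record: dict, sensitive_fields: set) -> dict:
        """Return a copy of the record with sensitive fields pseudonymised."""
        return {
            key: pseudonymise(str(value)) if key in sensitive_fields else value
            for key, value in record.items()
        }

    # Illustrative customer record and field list (hypothetical names).
    record = {"customer_name": "Chan Tai Man", "id_number": "A123456(7)", "loan_amount": 500000}
    cloud_safe = scramble_record(record, {"customer_name", "id_number"})
    print(cloud_safe)  # identifiers are scrambled; loan_amount stays usable for modelling

In a real deployment, the choice between hashing, tokenization and format-preserving encryption would be driven by the regulatory requirements mentioned above.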

Language barriers

Unlike physical factories, processes in financial institutions revolve around words and language, which gives rise to special challenges in Asia. While there are many different languages across Europe and the US, most are written with a relatively small set of characters and with spaces between words. This makes word tokenization – the process of splitting a large sample of text into words, which is a prerequisite for natural language processing tasks – a lot simpler than for Asian languages. Asian languages such as Chinese usually do not have spaces between words, draw on a far larger set of characters, and a single character can itself constitute a word.
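
As a small illustration of the difference (a sketch assuming the open-source jieba library for Chinese word segmentation, though any comparable segmenter would serve), English text can be split into words almost trivially, whereas Chinese text needs a dedicated segmentation step:

    # English: whitespace and punctuation already delimit most words.
    english = "The bank approved the commercial loan application"
    print(english.split())
    # ['The', 'bank', 'approved', 'the', 'commercial', 'loan', 'application']

    # Chinese: no spaces between words, so a segmentation library is needed.
    # jieba (pip install jieba) is one widely used open-source option.
    import jieba

    chinese = "银行批准了商业贷款申请"
    print(list(jieba.cut(chinese)))
    # Expected segmentation along the lines of: ['银行', '批准', '了', '商业', '贷款', '申请']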

Word tokenization is only the first step to recognizing, understanding and extracting content in text-based documents. While larger vendors like Google and Microsoft are attempting to provide a natural language processing solution in each country, niche solutions such as handwriting recognition are still mainly developed by smaller, local companies. However, these niche solutions are usually an important part of the automation process: imagine if we had to pass every document containing handwriting back to human operators. Good technology vendor ecosystems will need to be established in each country to facilitate the region’s automation journey.

No single platform to serve all use cases

Perhaps the biggest challenge for most financial institutions is that AI is still very much a platform for the builder – that is, you need to build or customize a function before you can use it in the final application. This is in sharp contrast to traditional tools and applications, e.g., visualization or enterprise resource planning tools, where all functions are provided ‘as is’ with a manual on how to use them. Even though big internet vendors and software companies are striving to make ever more functions available off the shelf (a trend usually referred to as autoML), the goal of an all-in-one, easy-to-use AI platform is still some way off. In the interim, successful adoption and integration of AI will rely on a fit-for-purpose architecture design that integrates different technologies and vendor solutions.
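
To illustrate the autoML direction (a sketch assuming the open-source FLAML library and a generic public dataset, not a recommendation of any specific vendor platform), a ready-made routine can search for, tune and train a model with very little bespoke code:

    # A minimal autoML sketch using FLAML (pip install "flaml[automl]"), an
    # open-source library; the dataset, time budget and settings are illustrative only.
    from flaml import AutoML
    from sklearn.datasets import load_breast_cancer
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    automl = AutoML()
    # The library searches over model families and hyperparameters within the time budget.
    automl.fit(X_train=X_train, y_train=y_train, task="classification", time_budget=60)

    print(automl.best_estimator)                           # e.g. 'lgbm' or 'xgboost'
    print(accuracy_score(y_test, automl.predict(X_test)))  # accuracy of the selected model

Even so, such routines cover individual modelling steps rather than the end-to-end process, which is why the surrounding architecture design still matters.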



Despite these challenges, the benefits of well-planned automation still far exceed the potential costs and risks. To take one example, during commercial loan applications, data points need to be extracted from financial statements as inputs for credit risk modelling, and submitted documents such as business registration forms need to be validated. All of this is currently done by humans, be it the relationship manager or a team of back-office staff.
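
As a simplified sketch of that extraction step (the statement text, field names and patterns below are hypothetical, and a production system would sit behind OCR and far more robust parsing), key figures can be pulled out of statement text and handed to a credit risk model as structured inputs:

    import re

    # Hypothetical snippet of text already OCR'd from a scanned financial statement.
    statement_text = """
    Total Revenue        HKD 12,450,000
    Net Profit           HKD  1,830,000
    Total Liabilities    HKD  7,200,000
    """

    # Illustrative patterns for the line items a credit risk model might need.
    FIELDS = {
        "total_revenue":     r"Total Revenue\s+HKD\s*([\d,]+)",
        "net_profit":        r"Net Profit\s+HKD\s*([\d,]+)",
        "total_liabilities": r"Total Liabilities\s+HKD\s*([\d,]+)",
    }

    def extract_data_points(text: str) -> dict:
        """Extract named financial figures from statement text as numbers."""
        results = {}
        for name, pattern in FIELDS.items():
            match = re.search(pattern, text)
            if match:
                results[name] = float(match.group(1).replace(",", ""))
        return results

    print(extract_data_points(statement_text))
    # {'total_revenue': 12450000.0, 'net_profit': 1830000.0, 'total_liabilities': 7200000.0}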

However, an AI application, built for perhaps the cost of the full-year salary of two to three full-time employees, can do the work of more than ten such back-office employees and run around the clock. Beyond the cost factor, the whole process becomes much quicker, which in itself can attract more customers.
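
Put into rough numbers (purely illustrative figures, not benchmarks from any real project), the comparison looks like this:

    # Purely illustrative figures for the comparison described above.
    annual_cost_per_fte = 400_000         # hypothetical fully loaded annual cost of one back-office FTE
    build_cost = 3 * annual_cost_per_fte  # building/customising the AI application (~2-3 FTE salaries)
    replaced_capacity = 10                # FTEs' worth of work the application can absorb, running 24/7

    first_year_saving = replaced_capacity * annual_cost_per_fte - build_cost
    print(f"Indicative first-year saving: {first_year_saving:,}")  # 2,800,000 (in whatever currency)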

There are many other examples of cost savings, and even of differentiation that effectively creates new business models: imagine, for example, credit card approvals completed in a couple of hours, with virtual cards available immediately afterwards.

The combined potential of cost savings and innovation means that financial institutions should start the journey towards automation sooner rather than later. Some companies will do this in a ‘big bang’ approach while others will start more modestly with a proof-of-concept on selected processes. In my next article, I will examine a better approach to starting this journey.

The next two articles in this series explore how data mining can improve the first step of an optimization program – process discovery – and how to establish expert teams to roll out process optimization.