In today's rapidly evolving digital landscape, financial institutions willing to embrace GenAI solutions face many challenges. In the first of a new series of articles on industrializing GenAI technology, we show how to effectively mitigate concerns about data privacy and intellectual property, and how to manage escalating costs.
The emergence of Large Language Models (LLMs) has provided a transformative solution for processing and understanding vast amounts of data. However, the predominance of closed-source models such as ChatGPT frequently raises concerns regarding data confidentiality and the associated costs. As we highlight in our recent white paper, open-source LLMs are a compelling alternative, offering financial services executives a pathway to achieve both enhanced security and cost-effectiveness.1
Closed-source models, such as ChatGPT, typically require data to be processed on external servers, posing potential risks of disclosing sensitive information or intellectual property. In contrast, open-source LLMs can operate within the organization's own infrastructure, allowing institutions to maintain strict control over their data and IP and significantly strengthening privacy. At the same time, it is important to acknowledge that the quality of responses from proprietary models is generally higher than that from open-source models, although open-source models are catching up quickly and the gap between the two is narrowing.
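To make the in-house deployment option concrete, the following is a minimal sketch of running an open-source LLM entirely on an institution's own hardware, so that prompts and documents never leave the internal network. The model name and prompt are illustrative assumptions, not recommendations.

```python
from transformers import pipeline

# Load an open-source model onto locally available hardware; once the weights are
# downloaded, inference runs fully inside the institution's own environment.
generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # illustrative model choice
    device_map="auto",                           # spread weights across local GPUs/CPU
)

prompt = "Summarize the key obligations in the following loan agreement clause: ..."
output = generator(prompt, max_new_tokens=256, do_sample=False)
print(output[0]["generated_text"])
```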
The monetary implications of using proprietary models can be substantial, primarily due to licensing fees and costs associated with scaling. Open-source models offer a cost-efficient alternative that avoids some of these expenses. However, deploying them on platforms like AWS Bedrock introduces additional considerations: models hosted on such cloud services still incur operational costs and raise questions about data sovereignty and security, since it is crucial to determine whether they effectively sit within the customer's network boundary. While proprietary models often deliver strong performance on general content, open-source models can match or surpass that performance in a specific area of application. For example, it was recently demonstrated that a specialized LLM tailored for COBOL code completion can outperform ChatGPT-4 in this niche.2
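For illustration, the sketch below calls an open-weight model hosted on AWS Bedrock via boto3. The model ID and region are assumptions for the example; whether the traffic actually remains within the bank's network boundary depends on the account's VPC endpoint and PrivateLink configuration, which is assumed rather than shown here.

```python
import json
import boto3

# Invoke an open-weight model hosted on AWS Bedrock. Operational costs still
# accrue per invocation, and the network boundary question is settled by the
# account's networking setup, not by the model being open source.
bedrock = boto3.client("bedrock-runtime", region_name="eu-central-1")

response = bedrock.invoke_model(
    modelId="meta.llama3-8b-instruct-v1:0",  # illustrative open-weight model ID
    body=json.dumps({
        "prompt": "Explain the business purpose of this COBOL paragraph: ...",
        "max_gen_len": 256,
        "temperature": 0.0,
    }),
)
print(json.loads(response["body"].read()))
```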
Deploying open-source LLMs requires finding the right balance between cost savings and maintaining high-quality outputs, which becomes especially crucial as usage scales. Although dedicated hardware investments can be costly, the per-use-case cost tends to decrease as more use cases share the infrastructure. This contrasts with proprietary models, where costs typically increase linearly with usage. A strategic optimization exercise is therefore essential, particularly if financial considerations are paramount.
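A back-of-envelope calculation makes the contrast between the two cost curves tangible. Every figure below is a hypothetical placeholder, not a quoted price.

```python
# Hypothetical monthly cost comparison: amortized self-hosting vs usage-based API pricing.
MONTHLY_INFRA_COST = 20_000.0     # fixed cost of dedicated hardware for a self-hosted LLM
API_COST_PER_1K_CALLS = 15.0      # usage-based pricing of a proprietary API
CALLS_PER_USE_CASE = 50_000       # monthly call volume of a typical use case

def self_hosted_cost_per_use_case(num_use_cases: int) -> float:
    # The fixed infrastructure cost is amortized across all use cases sharing it.
    return MONTHLY_INFRA_COST / num_use_cases

def proprietary_cost_per_use_case() -> float:
    # Usage-based pricing scales linearly with volume and does not benefit from sharing.
    return CALLS_PER_USE_CASE / 1_000 * API_COST_PER_1K_CALLS

for n in (1, 5, 20):
    print(f"{n:>2} shared use cases: "
          f"self-hosted ~{self_hosted_cost_per_use_case(n):,.0f} "
          f"vs proprietary ~{proprietary_cost_per_use_case():,.0f} per use case per month")
```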
For our clients, factors like regulatory compliance and accuracy often hold greater importance than mere cost-efficiency. Recognizing this challenge, Capco is well-positioned to assist. Our expertise in implementing open-source LLM solutions enables financial institutions to fully capitalize on the benefits of these technologies while keeping costs and efforts under control.
Open-source LLMs have significant practical value in the banking sector. For instance, our research highlights their effectiveness in updating legacy systems, such as understanding and optimizing legacy code - a critical task for many institutions still reliant on aging technological infrastructures.
The application of open-source LLMs will predominantly focus on use cases that handle highly sensitive data which financial companies would not consider processing outside their network, rather than ‘classic’ use cases in customer service or knowledge pre-classification contexts.
Moreover, in the realm of strict compliance and regulatory reporting, open-source LLMs help banks navigate and adhere to complex legal frameworks, avoiding potential fines and reputational risks.
Case study: Benchmarking open-source LLMs
Capco’s recent white paper on open-source LLMs includes a case study that directly addresses financial services firms’ concerns about handling sensitive data.1 In this study, various LLMs were evaluated for their ability to explain, adapt or replace COBOL code - a task crucial to financial operations and one that typically involves highly sensitive data. The models were benchmarked against expert human responses and across different query types, providing a thorough assessment of their performance. This rigorous evaluation demonstrated practical benefits of deploying open-source models beyond the data security aspects already mentioned.
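The skeleton below illustrates the general shape of such a benchmark: COBOL-related queries paired with expert reference answers, against which each candidate model's output is scored. It is a simplified sketch rather than the evaluation pipeline used in the white paper, and the string-similarity scorer is a crude stand-in for the more rigorous grading a real benchmark would apply.

```python
from difflib import SequenceMatcher

# Each benchmark item pairs a COBOL-related query with an expert reference answer.
benchmark = [
    {
        "query": "Explain what the following COBOL paragraph does: ...",
        "reference": "It accumulates daily transaction totals per customer account ...",
    },
]

def score(candidate: str, reference: str) -> float:
    # Crude textual similarity between a model answer and the expert reference.
    return SequenceMatcher(None, candidate.lower(), reference.lower()).ratio()

def evaluate(model_answers: dict[str, list[str]]) -> dict[str, float]:
    # model_answers maps a model name to its answers, aligned with the benchmark items.
    return {
        name: sum(score(a, item["reference"]) for a, item in zip(answers, benchmark)) / len(benchmark)
        for name, answers in model_answers.items()
    }

print(evaluate({"open-source-llm": ["It sums daily transactions for each account ..."]}))
```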
The ability to fine-tune open-source LLMs with proprietary data and domain-specific knowledge is another significant advantage. For example, specialized adaptations for tasks like COBOL code completion demonstrate that customized LLMs can outperform general-purpose models like ChatGPT-4.2 These tailored solutions not only enhance workflow automation and data analysis but also improve internal communications, thus driving overall productivity.
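As a hedged sketch of what such customization can look like in practice, the snippet below prepares an open-source code model for parameter-efficient fine-tuning (LoRA) on an institution's own COBOL corpus. The base model name is an illustrative assumption, and the training loop and dataset plumbing are deliberately omitted.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "codellama/CodeLlama-7b-hf"  # illustrative code-oriented open model

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name, device_map="auto")

lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank adapter matrices
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],   # attention projections typically adapted
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights will be trained

# From here, a standard supervised fine-tuning loop over the institution's proprietary
# COBOL completion pairs would produce the specialized model; the training data never
# leaves the internal environment.
```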
Adopting open-source LLMs - where appropriate - represents a smart strategic move for banking executives focused on safeguarding data privacy and intellectual property, while reducing operational costs. By selecting the right LLM for specific use cases, financial institutions can not only meet regulatory standards but also optimize resource allocation, enhancing overall business efficiency and competitiveness.
Open-source LLMs are not just tools but strategic assets that can profoundly impact the operational dynamics of financial institutions.
At Capco, we specialize in supporting our clients by offering comprehensive solutions and services such as benchmarking, LLM evaluations, and sizing of open-source LLMs. These services ensure that each financial institution can identify and deploy the most powerful models tailored to their specific needs.
Capco’s methodology allows for the continuous evaluation of LLMs against the latest advancements, ensuring that financial institutions can leverage the most effective technologies. This methodology systematically assesses the costs and benefits associated with each model, providing stakeholders with clear insights into their value propositions.
By partnering with Capco, banks gain the expertise needed to harness the full potential of these powerful technologies, aligning them with both current requirements and future goals.