- Stevenson Munro
- Global Head of Economic Sanctions Compliance, FCC High Risk Clients and Emerging Threats, Standard Chartered
Generative AI is set to supercharge the transformation of banking with unprecedented innovation and efficiency. Here Stevenson Munro, Global Head of Economic Sanctions Compliance, FCC High Risk Clients and Emerging Threats, Standard Chartered, examines the impact it will have on the industry.
Generative AI is fast disrupting traditional banking paradigms. By introducing capabilities that enable financial institutions (FIs) to proactively simulate, predict, and respond, it brings both opportunities and challenges.
Unlocking new potential for tomorrow’s world…
Banking’s initial foray into AI, characterised by basic chatbots and simple task automation, was a foundational step. Now, the emergence of generative AI promises a revolutionary leap, vastly expanding the sector’s innovation frontier.
At its core, generative AI learns from existing data to produce new data patterns or content, such as images or text, and has the potential to redefine the banking sector in unprecedented ways. By applying advanced algorithms to vast troves of data, banks can use the technology to optimise operations, introduce novel services, and mitigate risks more effectively than ever before.
Within trade finance, generative AI is poised to transform document-related processes. Early use cases include automating the generation of crucial trade documents and subsequently verifying their authenticity, thereby reducing errors and ensuring compliance. Such capabilities can speed up trade transactions and foster trust among parties.
In treasury, generative AI offers corporate clients a more proactive approach to liquidity and FX management by simulating cash-flow scenarios and suggesting optimal hedging strategies against market fluctuations.
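As a minimal, purely illustrative sketch of the kind of scenario simulation described above, a treasury team might generate candidate FX paths and compare hedged versus unhedged outcomes for a single receivable. All figures, rates, and distributional assumptions below are hypothetical placeholders, not a description of any bank's actual models.

```python
# Illustrative Monte Carlo sketch: FX scenario simulation for a single
# EUR-denominated receivable due in 90 days. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=42)

receivable_eur = 10_000_000      # exposure to be converted to USD at maturity
spot = 1.08                      # assumed current EUR/USD spot
forward_rate = 1.075             # assumed forward rate available today
annual_vol = 0.09                # assumed EUR/USD volatility
horizon_years = 90 / 365
n_scenarios = 100_000

# Simulate terminal spot rates under a simple lognormal (zero-drift GBM) model.
shocks = rng.standard_normal(n_scenarios)
terminal_spot = spot * np.exp(
    -0.5 * annual_vol**2 * horizon_years
    + annual_vol * np.sqrt(horizon_years) * shocks
)

unhedged_usd = receivable_eur * terminal_spot                      # convert at future spot
hedged_usd = np.full(n_scenarios, receivable_eur * forward_rate)   # locked in today

for label, outcome in [("Unhedged", unhedged_usd), ("Fully hedged", hedged_usd)]:
    p5, p50 = np.percentile(outcome, [5, 50])
    print(f"{label:12s} median USD: {p50:,.0f} | 5th percentile: {p5:,.0f}")
```

A generative model would extend this idea by proposing and stress-testing many such scenarios and candidate hedging strategies, rather than relying on a single hand-built simulation.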
Meanwhile, the technology can also assist in the auto-correction of erroneous payment messages, reducing manual interventions and increasing the rate of straight-through processing (STP), as well as automating the generation of regulatory reports related to cross-border transactions to ensure consistency and compliance.
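To make the auto-correction idea concrete, the sketch below shows a deliberately simple, rule-based version: spotting a mistyped currency code and suggesting the closest valid one. The message fields and currency list are hypothetical, and a production system would parse full ISO 20022 messages and use far richer models than this.

```python
# Minimal, rule-based sketch of payment-message auto-correction: fix a
# mistyped currency code by matching it to the closest valid ISO code.
import difflib

VALID_CURRENCIES = ["USD", "EUR", "GBP", "SGD", "HKD", "JPY", "AED"]

def suggest_currency_fix(code: str) -> str | None:
    """Return the closest valid currency code, or None if no good match."""
    matches = difflib.get_close_matches(code.upper(), VALID_CURRENCIES, n=1, cutoff=0.6)
    return matches[0] if matches else None

message = {"amount": "1500.00", "currency": "UDS", "beneficiary": "ACME Ltd"}

if message["currency"] not in VALID_CURRENCIES:
    fix = suggest_currency_fix(message["currency"])
    if fix:
        print(f"Auto-correcting currency '{message['currency']}' -> '{fix}'")
        message["currency"] = fix
```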
…with applications for the reality of today
Though widespread adoption in the banking sector is still nascent, the potential of generative AI is arguably most transformative as a countermeasure against fraud.
A major challenge in managing financial risk and compliance is the resources it requires. Much of the current discussion therefore focuses on use cases that replace document-intensive, human-led processes that follow defined protocols.
Across the financial landscape, the technology can generate countless synthetic scenarios, aiding systems in identifying subtle anomalies in transactions or documents that might hint at fraudulent activity.
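The sketch below illustrates this pattern in miniature: generating synthetic "normal" and anomalous transaction scenarios, then training an unsupervised detector to flag outliers. The features, distributions, and thresholds are hypothetical, and the detector shown (scikit-learn's IsolationForest) stands in for whatever models an FI actually deploys.

```python
# Illustrative sketch: augment scarce fraud examples with synthetic transaction
# scenarios, then train an unsupervised anomaly detector on "normal" behaviour.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=7)

# Synthetic "normal" transactions: amount (log-normal) and hour of day.
n_normal = 20_000
normal = np.column_stack([
    rng.lognormal(mean=6.0, sigma=1.0, size=n_normal),   # typical amounts
    rng.integers(8, 19, size=n_normal),                   # business hours
])

# Synthetic anomalous scenarios: unusually large amounts at odd hours.
n_anomalous = 200
anomalous = np.column_stack([
    rng.lognormal(mean=10.0, sigma=0.5, size=n_anomalous),
    rng.integers(0, 5, size=n_anomalous),
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)  # learn what "normal" looks like

# Score unseen transactions: -1 flags a potential anomaly, 1 looks normal.
flags = detector.predict(np.vstack([normal[:5], anomalous[:5]]))
print(flags)
```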
AI tools can be used to model, map, and monitor past, current, and predicted behaviour; validate the identities of counterparties and understand their commercial profiles; and then assess this compilation of information against known but dynamic risk factors in real time.
The sheer volume and complexity of the data involved are beyond human processing capability, especially within the necessary time frame. Generative AI doesn’t just process this data; it uses it to predict and counter future threats, making instantaneous decisions to flag or halt suspicious activity, even when it is novel or previously unseen.
This enables a more effective perpetual know-your-customer (KYC) approach to risk management, which for many FIs currently relies on manually requesting, collecting, and organising large amounts of data. All of this could potentially be enhanced and simplified through AI techniques.
Safeguarding trust in banking
However, incorporating generative AI into such a critical sector requires rigorous oversight. The banking industry’s zero tolerance for error, combined with the high stakes, means that unintended outputs from AI could have significant financial, regulatory, and trust implications. FIs must therefore strike the right balance between innovation and risk management.
In practical terms, this will require regular audits of AI systems for accuracy, bias, and reliability – including periodic cross-checks of AI-generated insights or data against real-world outcomes.
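One simple, hypothetical way to operationalise such cross-checks is a periodic back-test of AI-generated flags against confirmed outcomes, tracking precision and recall between audit cycles. The data below is an invented placeholder, not audit output from any real system.

```python
# Illustrative back-test sketch: compare AI-generated fraud flags against
# confirmed real-world outcomes from a periodic audit sample.
from sklearn.metrics import precision_score, recall_score

# 1 = flagged / confirmed fraud, 0 = not flagged / legitimate (hypothetical sample)
ai_flags          = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
confirmed_outcome = [1, 0, 0, 1, 0, 0, 1, 1, 0, 1]

precision = precision_score(confirmed_outcome, ai_flags)  # share of flags that were genuine
recall = recall_score(confirmed_outcome, ai_flags)         # share of genuine cases caught

print(f"Precision: {precision:.2f}, Recall: {recall:.2f}")
# Material drift in either metric between audit cycles would trigger a model review.
```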
An example of this is Standard Chartered’s Responsible Artificial Intelligence Framework, which ensures every AI use case deployed into production adheres to the pillars of fairness, ethics, transparency, and self-accountability.
We have developed a defined and comprehensive programme, starting with an inventory of all proposed AI applications and supported by a governance and oversight model specifically designed to assess and challenge their use, implications, and potential risks. The inventory is large and growing, with many third-party engagements.
Furthermore, amid the growing public availability of powerful generative AI models, FIs must establish safeguards against their use by fraudulent actors.
This is a huge threat. Deepfakes, for example, can be used to deceive clients or misrepresent the bank and its systems, and detecting and interdicting them is difficult within operational processes designed to handle huge volumes of data and transactional activity in real time.
To support experimentation and innovation while also – importantly – identifying and addressing associated risks, Standard Chartered has initiated a task force comprising relevant stakeholders to review opportunities, align enablers, and share knowledge and insights.
Beyond engagements with counterparties across the financial ecosystem, proactive collaboration by FIs with regulators can help shape adaptive, visionary policies that address the nuances of AI-generated data while maintaining the trust and integrity that are foundational to the banking sector.
As an active proponent of the use of AI to better support clients and stakeholders, Standard Chartered is playing a key role in helping regulators to shape guidelines for responsible use, through its membership of initiatives such as the Veritas consortium in Singapore and the UK’s Artificial Intelligence Public-Private Forum (AIPPF).
Laying the foundations for a transformative future
Generative AI’s ability to create new data patterns and scenarios propels FIs into an era where proactive strategy, enhanced verification, and optimal resource allocation become not just feasible, but standard operational procedure.
To gear up for a generative AI future in banking, FIs must adopt a forward-thinking and strategic approach. By forging partnerships with global banks, fintech companies and research institutions, FIs can meld industry-specific expertise with cutting-edge insights. Such a multi-pronged strategy will not only facilitate a smooth transition into the generative AI era but also position FIs to harness its full potential – paving the way for a more adaptive, efficient, and secure financial ecosystem.