Eliminating the AI Trust Gap in Treasury
Published: June 19, 2025
At the crossroads of innovation and accountability, the financial sector faces a critical obstacle to AI adoption known as the ‘trust gap’. Thomas Gavaghan, Senior Vice-President, Product Solutions & Strategy, Kyriba, explores how to bridge it.
Undoubtedly, there is still untapped potential for AI to further enhance treasury and finance functions. A recent IDC InfoBrief confirmed the sector’s sentiment, with the majority of treasury professionals (around 84%) agreeing that Generative AI (GenAI) will significantly affect their processes within the next 24 months. Despite this positive response, the journey to widespread adoption is stalled by what many see as a ‘trust gap’: a divide between AI’s transformative promise and apprehensions about security and privacy risks.
Many professionals in the treasury field are trained to act with a risk-mitigation mindset, which inevitably shapes how AI is viewed and the role it is given in decision-making. Adding to these concerns is ever-growing pressure to meet industry regulations, which now extend to legal and industry standards around AI use.
Today, along with risk and compliance, data quality and security further complicate the picture. Because the information treasury teams handle is highly sensitive, businesses need to ensure the data used is free of inaccuracies and bias when integrating AI solutions. We also shouldn’t neglect the skills gap: professionals may not be up to date on how to leverage AI effectively and securely in a treasury context.
But AI success goes beyond adopting new technologies; it requires a broader cultural transformation. For example, structured training programmes are important for developing trust and competence in leveraging AI. Gaining hands-on experience with AI tools in real-world scenarios also enables professionals to apply their knowledge and adjust to different environments.
Create a treasury playbook
To effectively bridge the trust gap between the vast opportunities AI offers and the potential risks, businesses should develop three core AI capabilities. This starts with AI communication and interaction. Treasurers should learn how to transparently engage with AI systems by asking effective questions and refining requests. They should also familiarise themselves with how to guide AI tools to support their reporting and analysis.
Another essential skill is data storytelling. Being able to translate complex AI results into clear, actionable insights makes financial data more meaningful to stakeholders. It is not only about interpreting those results but also about presenting them in a clear and compelling manner.
Treasury teams can create standardised prompt templates for recurring analyses – such as liquidity forecasting, FX exposure assessment, or counterparty risk evaluation – ensuring consistent outputs while maintaining an audit trail of how AI-generated insights were produced. These templates become part of the treasury playbook, enabling teams to systematically leverage AI while maintaining control over the process.
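To make this concrete, below is a minimal sketch of what such a template and its audit trail might look like in code. Every name and field here (PromptTemplate, AuditRecord, the 13-week liquidity-forecast wording, the model label) is an illustrative assumption rather than a feature of any particular treasury platform.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json

@dataclass
class PromptTemplate:
    """A reusable prompt for a recurring treasury analysis."""
    name: str
    template: str  # text with {placeholders} for run-specific inputs

    def render(self, **inputs) -> str:
        return self.template.format(**inputs)

@dataclass
class AuditRecord:
    """Captures how an AI-generated insight was produced."""
    template_name: str
    rendered_prompt: str
    model: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical template for a recurring 13-week liquidity forecast request
liquidity_forecast = PromptTemplate(
    name="liquidity_forecast_13w",
    template=(
        "Using the cash position data for {entity} as of {as_of_date}, "
        "produce a 13-week liquidity forecast in {currency}. "
        "List all assumptions explicitly and flag any data gaps."
    ),
)

prompt = liquidity_forecast.render(
    entity="EMEA Treasury Centre", as_of_date="2025-06-13", currency="EUR"
)
record = AuditRecord(
    template_name=liquidity_forecast.name,
    rendered_prompt=prompt,
    model="internal-genai-gateway",  # placeholder for whichever model the team uses
)
print(json.dumps(record.__dict__, indent=2))  # persist alongside the AI output
```

Storing the rendered prompt and a timestamp next to each AI output is what gives the playbook its audit trail: anyone reviewing an insight later can see exactly how it was requested.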
Finally, establishing a systematic approach to validating AI-generated insights is key. This ensures that results align with regulatory requirements, a process important not only for maintaining compliance but also for fostering confidence in AI-driven decisions.
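As one illustration of what a systematic validation step could look like, the sketch below runs simple rule-based checks over an AI-generated forecast before it is accepted. All rules, thresholds, and field names are hypothetical examples, not regulatory requirements.

```python
from typing import Callable

# Each check returns (passed, message); thresholds and rules are illustrative only.
def within_counterparty_limit(forecast: dict) -> tuple[bool, str]:
    limit = 50_000_000  # hypothetical single-counterparty exposure limit
    breaches = [c for c, amt in forecast.get("counterparty_exposure", {}).items() if amt > limit]
    return (not breaches, f"Exposure limit breaches: {breaches}" if breaches else "OK")

def balances_reconcile(forecast: dict) -> tuple[bool, str]:
    # Opening balance plus net flows should equal the reported closing balance.
    expected = forecast["opening_balance"] + sum(forecast["net_flows"])
    ok = abs(expected - forecast["closing_balance"]) < 0.01
    return (ok, "OK" if ok else f"Closing balance {forecast['closing_balance']} != {expected}")

CHECKS: list[Callable[[dict], tuple[bool, str]]] = [within_counterparty_limit, balances_reconcile]

def validate(forecast: dict) -> bool:
    """Run every check and report results; accept the insight only if all pass."""
    results = [(check.__name__, *check(forecast)) for check in CHECKS]
    for name, passed, message in results:
        print(f"{name}: {'PASS' if passed else 'FAIL'} - {message}")
    return all(passed for _, passed, _ in results)

# Example AI-generated output, reduced to the fields the checks need
ai_output = {
    "opening_balance": 12_000_000.0,
    "net_flows": [1_500_000.0, -750_000.0],
    "closing_balance": 12_750_000.0,
    "counterparty_exposure": {"Bank A": 20_000_000, "Bank B": 55_000_000},
}
print("Accepted" if validate(ai_output) else "Needs review")
```

In this example the balances reconcile but the hypothetical counterparty limit is breached, so the insight is routed for human review: the kind of documented, repeatable gate that builds confidence in AI-driven decisions.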
It may sound obvious, but businesses looking to build further trust in AI adoption really should prioritise security and transparency in their technology. For example, choosing AI tools and platforms that offer enterprise-grade security and provide explainable insights is critical. Equally important is making sure that customer data remains private and is not used to train external models.
Additionally, intuitive AI interfaces help teams interact more effectively with new technologies, and visual analytics and dashboards further boost the ability to tell stories with data, while clear validation processes support regulatory and business requirements. Establishing a trusted platform foundation is the final piece of the puzzle: a strong, accurate data infrastructure gives teams the base on which to build and apply core AI skills.
Transformative potential
The impact of advanced AI skills combined with secure AI solutions is significant. Improved cash visibility enhances decision-making, while systematic validation improves compliance and fraud detection. Collaboration between AI and humans further boosts efficiency, and advanced data storytelling elevates the quality of financial reporting.
In today’s business environment, where trust is crucial, success depends on building strong foundational capabilities integrated with robust AI solutions. Responsible AI platforms are therefore vital for treasury teams, providing both cutting-edge technology and the framework for AI skill development needed to bridge the gap.
This holistic approach, marrying foundational AI skills with trustworthy AI solutions, enables businesses to fully leverage AI’s transformative potential securely and transparently. Consequently, teams are better equipped to make data-driven decisions that strengthen financial performance and agility, ushering in a new era of operational excellence and strategic financial management.