Safety First: Developing a Fit-for-Treasury AI Strategy

Published: January 16, 2026

Sue Turner OBE
Founding Director, AI Governance Limited, and Professor in Practice - AI and Digital Technologies, University of Bristol Business School
Tom Alford
Deputy Editor, Treasury Management International

AI is working its way into almost every aspect of professional life. By adopting robust governance at an early stage, treasurers and their organisations can help mitigate its inherent risks and focus more on the advantages. Sue Turner OBE, Founding Director, AI Governance Limited, and Professor in Practice - AI and Digital Technologies, University of Bristol Business School, offers an expert view on fostering an appropriate AI strategy.

The almost unceasing conversation around AI adoption has well and truly reached niche areas such as corporate treasury. But the talk is as much about the advantages of this developing technology as it is about the fear it generates.

The positives are often based on how AI will improve current processes, with a confidence reported (often by vendors) that it could generate new and as yet unimagined benefits for business users. However, AI’s unknowns, not least around controlling data usage and flows, remain a major source of concern for many.

Somewhere between the enthusiastic early adopters, the fear of missing out (FOMO) brigade, and the committed naysayers, treasurers have to find a middle ground if they are to achieve a well-managed, smooth, and secure transition into the world of AI.

Reaching maturity

Despite the volume of AI chatter, “there’s still a surprising number of leaders that are spectators,” notes Turner. Referring to her own Stages of AI maturity slide (fig. 1, below), she says these businesses remain sceptical. They are not sure if AI is just a fad, or how it even impacts them. But, she comments, “spectators are becoming fewer in number as more businesses realise there can’t be this much hype without some substance behind it”.

As spectators convert to ‘explorer’ status, Turner notes that their investigation of different AI use cases is “rarely methodically executed”. However, she is nonetheless encouraged that more professionals are now engaging with AI. As explorers progress on their journeys, many will reach stage three, where they’re participating in trials. They are running cost-benefit analyses and beginning to determine the value of AI to their organisations.

Where results are positive, the fourth or ‘scaler’ stage may be reached. Here, Turner explains, AI is brought into the organisation in “a highly systematic way”, adding scale to its operations. Once this stage has yielded the desired outcomes, the business may move to the final stage as an ‘optimiser’.

Few have yet achieved this level, and most that have are AI-first tech companies, having built AI into their business model from scratch. Indeed, notes Turner, outside this rarefied zone, companies today are often still struggling with how AI integrates with their legacy systems and workflows, even while aspiring to stage five.


Figure 1: Stages of AI maturity


Speaking of numbers

There are many statistics illuminating the progress of AI adoption. For example, in November 2025, McKinsey reported that 58% of finance functions are now using AI. Its survey also claimed that around 44% of CFOs are using generative AI (GenAI) for more than five use cases.

There is no doubt that over the past few years, the uptake of AI has risen dramatically. But there’s still a sizeable maturity gap, suggests Turner. A 2024 EY survey found that 75% of global knowledge workers were using AI at work, up from just 22% in 2023. Behind that impressive figure is a worrying statistic, she adds. “Although we see that three-quarters of companies are using GenAI, only one-third of them say they have responsible controls in place. So, while they have moved on to stage two, they are not yet underpinning their explorations with a scientific approach.”

On the positive side, some of those explorations are delivering results. Turner reveals one such case, where a client was regularly publishing 43 monthly reports for board consumption. “They had a hunch that many of these weren’t being read,” she recalls. An internal team used AI to identify which reports were being opened and read, and of those, which resulted in an appropriate and timely action. It turned out that of the 43 reports, 31 were not used for anything at all. “From there, the report team was able to synthesise the key information and then produce a new set of reports that the board actually wanted.”
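
For readers who want to picture the mechanics, a minimal sketch of that kind of usage analysis follows. It assumes a hypothetical engagement log with an opened flag and an actioned flag per report distribution; the column names and the ‘unused’ rule are illustrative assumptions, not details from the client engagement Turner describes.

```python
import pandas as pd

# Hypothetical engagement log: one row per report distribution.
# Column names and the "unused" rule below are illustrative assumptions,
# not details from the client engagement described above.
log = pd.DataFrame({
    "report":   ["liquidity", "liquidity", "fx_exposure", "covenants", "covenants"],
    "opened":   [True, True, False, True, False],    # was the report opened and read?
    "actioned": [True, False, False, False, False],  # did it prompt a timely action?
})

usage = log.groupby("report").agg(
    distributions=("opened", "size"),
    open_rate=("opened", "mean"),
    action_rate=("actioned", "mean"),
)

# Flag reports that are rarely opened and have never driven an action:
# candidates for retirement, or for consolidation into a synthesised report.
unused = usage[(usage["open_rate"] < 0.5) & (usage["action_rate"] == 0)]
print(unused)
```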

Even with clear successes, a measured approach to adoption is necessary to avoid overconfidence in a technology that is still finding its feet, cautions Turner. Sometimes that advice is not heeded.

“I met one CFO who had been somewhat sceptical of AI but had been persuaded by a vendor to experiment. He uploaded some complex, lengthy financial reports to a GenAI tool and found the summary results to be extremely accurate and timely. That single success somehow completely reversed his opinion. I cautioned against this. We need to explore the tools, but we also need to retain a degree of scepticism about AI, and keep asking questions.”

Educate, learn, revise, repeat

But it’s not just AI’s accuracy that demands caution. For treasurers, financial data is obviously paramount. Understanding where and how that data is being used, and where it comes from, is now an essential part of the thought process underpinning an AI deployment.

“Core AI literacy gives people at all levels of the organisation that knowledge,” states Turner. “But we can’t just tell them once and consider the work to be done. Too many employees are still confused by AI, so we must keep repeating the message.” A warning not to use certain tools with sensitive information should be clear, concise, and constant.

Having conducted a review in 2025 of AI literacy across one client organisation, Turner found only 19% of its employees understood which AI tools they were allowed to use, and for which purposes. “Everybody had been told, but the message just hadn’t registered.”

Failing to repeat the message of cautious use, at least until it becomes standard practice, is a common mistake. So, too, is framing that message, and its overarching policy, in purely negative terms, says Turner. “It’s often ‘don’t do this, or don’t use that tool’, and usually conveyed without explanation. This tends to create a lot of ‘shadow’ AI users.”

Shadow use of AI is where employees use unauthorised AI tools, simply because they find them useful in their work. Turner cites one research paper which found that, among covert users, 46% would carry on regardless of any explicit instruction from the company to cease. “They’re seemingly finding these tools so useful for their jobs that they don’t want to give them up.”

To combat the negative approach to control, Turner suggests helping employees to explore and understand the tools and use cases that will help them, and the wider business, achieve improved results, and then “continually repeating that message”.

Repetition is necessary not just to ensure the message is received and being acted upon, but also to ensure that the latest AI understanding is incorporated into working practice, explains Turner. For this reason, she feels a need for more education on the differences between the various forms of AI, such as GenAI and agentic AI, and how each works. But, she adds, the differences between forms of AI software need to be clarified too.

“Many organisations have Microsoft 365 Enterprise with the Copilot integrated AI assistant. Copilot can be self-contained, sharing no information with the wider world, or it may enable data to move beyond the IT boundaries of the business. Does the ordinary user know whether the tiny icon they’re seeing on the screen in an Excel spreadsheet is the same as the icon they see when using the Microsoft Edge AI browser? They really should.”

Another common mistake with AI is to adopt it out of FOMO. Research undertaken in 2025 with Queen Mary University of London revealed to Turner that a major driver for companies adopting GenAI is indeed competitive pressure. “Many believe they are missing out on something and ought to jump onto the AI bandwagon as soon as possible. But it’s a mistake to make that leap without really understanding what you’re trying to achieve.”

At the other end of the spectrum, the study highlighted the issue of what has been termed Second Mouse Syndrome. “The first mouse goes into the trap and meets an unhappy ending. With the trap already triggered, the second mouse believes it can now safely nibble at the cheese. But that effect doesn’t exist in the AI world,” warns Turner. “Each organisation holds data in different ways, and uses a different workflow structure. Because no two data scenarios are exactly alike, waiting for somebody else to make all the mistakes before forging ahead in the belief it is now risk-free just does not work.”

A standard for success

Where an organisation has taken the plunge with AI, and is now permitting its employees to experiment with AI tools beyond GenAI – perhaps even to create some of their own – it is vital that an AI inventory is maintained, urges Turner.

There is already a global standard for the management of AI systems. ISO 42001, the first standard for an Artificial Intelligence Management System (AIMS), is the core standard, offering a framework for establishing, implementing, maintaining, and continually improving such a system.

Despite its existence since 2023, few organisations are yet certified against ISO 42001. But they will be in the future, Turner expects. “Where there are standards, audits will follow. And when that becomes a requirement, all those organisations that are building AI systems now will wish that they’d kept an inventory, mapping where their AI tools are, their connections, and what they are doing.”
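
What such an inventory might record can be sketched very simply. The structure below is a hypothetical illustration of the kind of mapping Turner describes – it is not a schema prescribed by ISO 42001, and every field name is an assumption.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an AI inventory. Fields are illustrative, not an ISO 42001 schema."""
    name: str                        # e.g. "board-report summariser"
    owner: str                       # accountable individual or team
    purpose: str                     # what the tool is used for
    data_sources: list[str]          # where its inputs come from
    external_connections: list[str]  # anything reaching beyond the IT boundary
    last_reviewed: str               # date of the last governance review
    issues: list[str] = field(default_factory=list)

inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="board-report summariser",
        owner="treasury reporting team",
        purpose="synthesise monthly reports for the board",
        data_sources=["internal reporting database"],
        external_connections=[],     # self-contained: nothing leaves the business
        last_reviewed="2026-01-10",
    ),
]
```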

Even businesses still at the explorer stage are advised by Turner to “at least have a look” at the ISO 42001 documentation. “It’s very accessible. The language is not all technical jargon,” she reassures. Even if there is no intention of seeking certification, she encourages all AI-interested parties to read the standard’s Annex B. “It’s a great summary of all the elements an organisation might need to better manage AI systems.”

Question time

While some AI users in treasury and finance will have accrued a working knowledge of the subject, most will still be pondering the basics. But as new business and operational cases emerge for AI’s adoption, thorough questioning of its core elements is critical. This should raise a fundamental discussion point, says Turner. Is the firm looking to build an AI system itself or buy from a third party?

For a self-build, questions must be asked, for instance, about how well the organisation knows the data it holds, and about the outcomes of applying AI to this resource. The issue of responsible AI should also emerge from such a conversation. It ought to probe whether there will be an understanding of the predictions AI will derive from this data, to what extent these predictions are desirable in terms of their impact outside the organisation, and what, if anything, the plan is for bias-testing the results.

“There is a responsible AI framework that we use to shape these discussions,” explains Turner. “It’s quite straightforward and easy to apply to each AI system individually, and then across all AI systems within the organisation.”


Figure 2: Responsible AI framework


The framework (fig. 2) helps raise elementary, but not always posed, questions. These include: who is accountable for the system, and what level of training have they had to carry out this task? What is the organisational view of human agency and oversight, to what extent will AI affect this, and does this matter to the organisation? And what degree of technical robustness and security is desirable?

While the latter inquiry will certainly require input from IT specialists, questions around privacy and data governance are business matters, and as such are often forgotten about in the run-up to deployment, says Turner.

Data privacy discussions are, or should be, central to the AI debate, certainly in light of GDPR. Its principle of data minimisation demands that an organisation identify the minimum amount of personal data it needs to fulfil its purpose, and no more. ‘Just-in-case’ data collections are not permissible. 

But data governance within the context of AI extends beyond the regime of GDPR and its global equivalents. While individual data consents may have been given, have these been given, explicitly or implicitly, for use in an AI model too? Do permissions need to be sought afresh, or could a model be created that does not need personal data at all?
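
In practice, data minimisation can be enforced mechanically before anything reaches a model. The sketch below drops every field outside an allow-list; the field names are hypothetical, and each organisation would need to derive its own list from its GDPR analysis rather than copy this one.

```python
# Data-minimisation sketch: pass a model only the fields it needs.
# The allow-list is a hypothetical example for a cash-forecasting model,
# not a legal determination; personal data the model does not need is dropped.
FORECAST_FIELDS = {"account_id", "currency", "balance", "value_date"}

def minimise(record: dict) -> dict:
    """Strip every field outside the allow-list before the data leaves treasury."""
    return {k: v for k, v in record.items() if k in FORECAST_FIELDS}

raw = {
    "account_id": "ACC-001",
    "currency": "EUR",
    "balance": 1_250_000,
    "value_date": "2026-02-01",
    "owner_name": "J. Smith",              # personal data: not needed by the model
    "owner_email": "j.smith@example.com",  # personal data: not needed by the model
}

assert "owner_name" not in minimise(raw)
```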

Additional factors also come into play when using AI. The discussion should cover the degree of transparency around organisational use of AI: whether those individuals or organisations that come into contact with the system need to know that they, or their data, are interacting with AI; whether they should be informed how the algorithm works; and how its output is being managed.

AI adoption has also generated a notably vocal debate around non-discriminatory output. Thoughts on possible bias need to be defined and addressed, bearing in mind that there can be proxies in data (such as education and employment history, or postcode) for the explicit factors being screened out.
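
A first-pass check for such effects can be mechanical. The sketch below computes a simple selection-rate ratio across groups – one common screening heuristic, not a legal test – using hypothetical columns, where the group column stands in for a protected attribute retained solely for testing.

```python
import pandas as pd

# Hypothetical scored outcomes: "approved" is the model's decision,
# "group" a protected attribute held back solely for bias testing.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates = df.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()   # selection-rate (disparate impact) ratio

# The 0.8 threshold mirrors the US "four-fifths" rule of thumb; treat it
# as a signal that prompts investigation, not a verdict on the model.
if ratio < 0.8:
    print(f"Possible adverse impact: selection-rate ratio = {ratio:.2f}")
```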

Societal wellbeing is another key discussion point in AI adoption. In the treasury function, automating some jobs may be desirable, but before deciding, open and honest debate is advisable. This becomes essential where there is a likelihood of certain jobs disappearing. Will current incumbents be reallocated to other posts or offered redundancy? And think practically about the future too, advises Turner. What does it mean for the pipeline of treasury expertise if most entry-level jobs and training are displaced by AI?

“If we believe our data is perfect, we will make mistakes,” asserts Turner. “But if we assume our data harbours biases – and it always does, even if not necessarily harmful – we can establish whether or not these could produce any undesirable effects, and then take appropriate action.”

Buyer beware

There are still many questions to ask if the intention is to buy AI from a vendor. At the very least, it is essential to be wary of the AI “snake oil” being peddled by some of the vendor community, warns Turner. Claims are being made that certain tools will, for example, solve all forecasting, fraud detection or risk management challenges. Be suspicious, she advises.

It’s likely that some solutions are simply not good enough. This may be revealed at an early stage of exploration. However, a product that has been proven with one organisation may yet fail with another because that user’s legacy system, data or workflow structure does not enable it to function properly. Taking the opportunity to thoroughly test an AI product in your own particular environment is therefore important.

Ideally, Turner seeks a shift in thinking among newcomers to AI, from being a passive recipient of technology to an active participant in the process. “Many client organisations will assume that their vendor knows what they’re doing and will take their assurances, tick the boxes, and sign up. But all users need to be much more assertive with their vendors, asking them many questions.”

These questions will, in many cases, be similar to those asked of an in-house development. But vendor-specific enquiries are also needed. Clients should certainly explore the data that the AI tool was trained on, looking at where that data was sourced, whether or not the vendor has secured proper rights to use it, and the degree to which that data has been tested for bias.

“Also, probe how open the vendor is about its AI modelling by asking for an explanation of how its AI makes decisions. Find out what happens if a user detects an ethical problem – will that vendor do something about it? And crucially, ask if the vendor has, and uses, its own responsible AI framework, and how it aims to assure clients that its products are compliant.”

A job well done?

At the heart of a good AI project is a responsible AI framework (such as that in fig. 2) that the organisation is both proud of and willing to share, says Turner. But she also considers a structured AI governance programme to be essential for success.

Control of the overall system stems from an application inventory and a log of each stakeholder and their responsibilities. It also demands robust security and strong communication of any issues, with all eyes open to every regulatory change. Control also requires the ability to evaluate diverse aspects such as compliance metrics, the ROI from AI, and the wider impact of its use. Fig. 3 below highlights the eight key stages of good AI governance.


Figure 3: The eight key stages of good AI governance


“Underpinning the entire governance of AI must be a degree of humility because none of us have our arms fully around this yet,” states Turner. “With the technology changing so quickly, even if we think our controls and governance are well set up today, they won’t necessarily work a year on. Humility enables us to admit we might be wrong, and to keep exploring and seeing what we can do better.”

It may appear challenging to explain to the board that the view on AI held today, and the policy built around that view, will need to be revisited more often than any other policy document. But, says Turner, frequent revision does not mean policy has failed. “It simply means the organisation is actively managing a multifaceted area that is changing faster than any other, and that the board and C-suite need to keep an open mind.”

It’s everyone’s role now

Many organisations, at least the larger ones, are already acting on this. Around 48% of FTSE 100 companies had appointed a Chief AI Officer (CAIO) or an equivalent AI-focused role by 2025. Some 42% of those CAIO appointments occurred in the year leading up to April 2025, with 65% made within the previous two years.

While a rapid acceleration in the creation of this role in leading companies is apparent, Turner argues that the need to train employees at every level remains urgent. “AI and data literacy has to become foundational, and right away, at board and C-suite level, and throughout the organisation.”

But while some of the largest companies understand the position, many others do not. A 2025 Nitro survey looked at levels of AI literacy in the C-suite. Some 89% of executives surveyed rated the AI training in their workplace as ‘good’ or ‘excellent’. Only 63% of employees polled rated their AI training at the same level. Further, when EY asked C-suite employees in 2025 to match the appropriate controls against five AI-related risks, only 12% of respondents were able to answer them all correctly. 

This data alone suggests a mismatch between the C-suite’s perception of available AI skills and the education, at every level, that successful adoption requires. The bottom line for Turner is simple: “We all have to invest in our own upskilling”.


Team talk

There will be many reading this who have been handed responsibility for AI within their organisations. Regardless of their job title, Turner says a common question is: ‘Where do I start?’, with most quickly realising that it is too much for one person.

“My response to anybody who’s given what may seem like a poisoned chalice is to really push back. They should insist: ‘If we are to take full advantage of AI, I need a team approach.’ One person can’t possibly manage all the reading, learning and keeping up to date with every rapidly changing aspect. So, while that individual should take the lead, and retain accountability, their insisting on help in tackling AI’s diverse elements is entirely reasonable.”

Indeed, unless there is commitment to understanding what AI means for the business, it will never fully yield its potential (which itself is constantly evolving). It may even create more issues than it solves. But as Turner concludes, for any individual who has the “right mixture of curiosity and humility to accept they will never know everything about it”, AI could be the perfect challenge.
