Published: July 31, 2015
Donya Rose, Chief Operating Officer UK and Ireland, Global Transaction Banking, Deutsche Bank, explains what is really meant by ‘Big Data’ and why grappling with it has become a ‘must do’ for banks and businesses everywhere.
It is a truth universally acknowledged that most of the data in the world today was created over the last few years. The amount of data being created and stored globally has soared due to the falling cost and increased capability of storage, the granular richness of data flows and, of course, the increased frequency and ease of digital interactions. Each year brings storage improvements, in both methods and cost; three decades ago a gigabyte of storage cost around 1,000,000 times what it does today and required hardware the size of a large fridge rather than the miniature disks now available. With the growth of ‘data lakes’, information of all kinds – but particularly unstructured data – is being generated on all fronts and stored in unprecedented volumes, and its effects and power are only beginning to be explored.
As industries in every area undergo digital transformation, they face decisions about what data to store and where to store it – whether in on-site, remote or cloud-based ‘containers’ – and, most importantly of all, the question of how to process it. The flood of data being captured every second can become a millstone without the power and technology to leverage it correctly. Indeed, without the right tools, even filtering incoming information to separate the meaningful from the white noise becomes a needle-in-a-haystack task. Yet to manage such information inadequately is to risk falling behind, whether in terms of market trends, impending threats or potential operational improvements.
It should be noted that ‘Big Data’ (a term increasingly searched for online) is a more nebulous concept than the sheer volume of digital information being produced. Big Data can be defined as anything from ‘a greater scope of information’ to real-time information to social media data, and the term also covers technologies with data-related functions. Regardless, the importance of both harvesting and correctly utilising data has been acknowledged: nine out of ten C-suite executives consider data to be the fourth factor of production, after the traditional pillars of land, labour and capital. This shift in mindset is being steadily followed by an appetite to make better use of this data, with global investment in Big Data expected to grow at a CAGR of 17% to reach $76bn by the end of 2020.
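As a rough, back-of-the-envelope illustration of what a 17% compound annual growth rate implies, the sketch below simply compounds a baseline market size forward to 2020; the 2015 starting figure is an assumption chosen for illustration only, not a figure from the article.

```python
# Illustrative CAGR compounding; the 2015 baseline is a hypothetical assumption.
baseline_usd_bn = 34.0   # assumed market size in 2015 (illustrative only)
cagr = 0.17              # compound annual growth rate cited above
years = 5                # 2015 -> 2020

projected = baseline_usd_bn * (1 + cagr) ** years
print(f"Implied 2020 market size: ${projected:.1f}bn")  # roughly $75bn at these assumptions
```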
What this signifies in practice is that we have moved on from questioning the value of investing in such tools to asking how we can best put them to use. Part of that investment will be spent on human skills, such as hiring Chief Analytics Officers, but on the tech side it covers four key areas: infrastructure, storage, processing and management, and analytics. Clearly, Big Data cannot exist without the first three factors, and meeting the demand for secure and inexpensive storage and sufficient processing power continues to require technological developments. But without effective data analytics, even the largest hoards of data are worthless.
Four main forces are driving this investment in Big Data technology. The first is the explosive growth of data, for the reasons mentioned above. As the number of consumers and bank account holders – and the devices they use – continues to swell, so does the volume of data produced by internal and external sources. The recent surge of interest in data from wearable devices will only accelerate this growth further; the world’s data volume has been predicted to grow 50-fold between 2010 and 2020. Meanwhile, existing legacy systems struggle to keep up. For example, much of banks’ data sits in difficult-to-access databases, meaning it is not put to best use, and most of the data held by financial institutions (80-90%) is unstructured and therefore difficult to analyse.
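To make the structured/unstructured distinction concrete, the following is a minimal sketch of extracting a few structured fields from free-text payment narratives; the message format and field names are invented for illustration.

```python
import re

# Hypothetical free-text payment narratives (unstructured data).
messages = [
    "Pmt ref 78431 EUR 12,500.00 to ACME GmbH value date 2015-07-29",
    "Refund ref 78432 EUR 310.50 to J Smith value date 2015-07-30",
]

# Pull out reference, amount and value date so the records can be queried and aggregated.
pattern = re.compile(
    r"ref (?P<ref>\d+).*?EUR (?P<amount>[\d,]+\.\d{2}).*?value date (?P<date>\d{4}-\d{2}-\d{2})"
)

records = []
for msg in messages:
    m = pattern.search(msg)
    if m:
        records.append({
            "ref": m.group("ref"),
            "amount": float(m.group("amount").replace(",", "")),
            "date": m.group("date"),
        })

print(records)
```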
The second driver is the need for point-in-time access to information in order to comply with changing regulations, for example regarding liquidity planning, asset and liability management or risk reporting. Not too long ago, a static analysis of financial ratios was sufficient, but newer regulations such as Basel III require intraday reporting, edging ever closer to real-time visibility. For the sake of risk management – whether compelled by regulation or not – a key treasury goal is increasingly to achieve real-time monitoring of the financial environment, from counterparty exposure to interest rate volatility.
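The intraday monitoring described above can be illustrated with a minimal sketch: a running counterparty exposure total checked against a limit as trades or settlement messages arrive. The counterparties, limits and figures are hypothetical.

```python
from collections import defaultdict

# Hypothetical per-counterparty exposure limits, in millions.
limits = {"BANK_A": 50.0, "BANK_B": 25.0}
exposure = defaultdict(float)

def on_trade(counterparty: str, notional_mm: float) -> None:
    """Update intraday exposure and flag a limit breach as each trade arrives."""
    exposure[counterparty] += notional_mm
    limit = limits.get(counterparty)
    if limit is not None and exposure[counterparty] > limit:
        print(f"ALERT: {counterparty} exposure {exposure[counterparty]:.1f}mm exceeds limit {limit:.1f}mm")

# Simulated intraday trade stream.
for cpty, notional in [("BANK_A", 30.0), ("BANK_B", 20.0), ("BANK_A", 25.0)]:
    on_trade(cpty, notional)
```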
Thirdly, the financial industry has already begun to leverage Big Data for counter-terrorism and fraud detection measures. Again, this comes down to a mixture of self-interest and regulatory pressure to spot both internal and external risk. With the right technology, banks can process vast amounts of data to identify individual behavioural patterns that indicate potential risks (the characteristics of a ‘rogue trader’, for example), thereby pre-empting disaster; conversely, the same analytics can be used to identify opportunities for profit. Similarly, by collating and analysing weblog data from banks’ internet channels and geospatial data from smartphone applications, banks can combat credit fraud and security breaches.
Such technological capabilities have already caused significant change in this arena; where fraud analysis was previously performed over a small sample of transactions, it can now be conducted over entire transaction-history data sets, significantly increasing its scope and therefore its success. Big Data has not replaced banks’ analytical infrastructure here, but has extended its reach to the full extent of the data available, strengthening security functions.
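As a simplified illustration of scoring an entire transaction history rather than a sample, the sketch below flags payments that deviate sharply from each customer’s own historical pattern; the data and the three-standard-deviation threshold are invented for illustration.

```python
import statistics

# Hypothetical full transaction histories (amounts per customer).
history = {
    "C1": [100.0, 98.0, 103.0, 101.0, 99.0, 102.0, 97.0, 100.0, 104.0, 96.0, 4500.0],
    "C2": [60.0, 75.0, 58.0, 66.0, 70.0],
}

# Flag any amount more than three standard deviations above the customer's own mean.
for cust, amounts in history.items():
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    for amount in amounts:
        if stdev > 0 and (amount - mean) / stdev > 3:
            print(f"Review: customer {cust}, amount {amount:.2f} deviates sharply from history")
```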
Finally, both in banking and elsewhere, close attention to Big Data analysis can improve the quality and effectiveness of the service delivered to clients. Through external information sources (such as social media references, customer call records, emails and claims data) combined with internal data, banks can gain a more comprehensive and acute insight into clients’ needs, preferences and challenges. Such information should, in turn, be used to develop tailored, client-centric solutions, interfaces and interactions.
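A minimal sketch of this kind of data fusion is shown below, joining hypothetical internal account data with external interaction records by client identifier; every field and value is invented for illustration.

```python
# Hypothetical internal account data.
accounts = {
    "client_1": {"segment": "corporate", "avg_balance": 1_250_000},
    "client_2": {"segment": "SME", "avg_balance": 80_000},
}

# Hypothetical external interaction data (e.g., call-centre topics).
interactions = [
    {"client": "client_1", "topic": "FX hedging"},
    {"client": "client_1", "topic": "FX hedging"},
    {"client": "client_2", "topic": "payment delays"},
]

# Combine both sources into a single per-client profile.
profiles = {}
for client, details in accounts.items():
    topics = [i["topic"] for i in interactions if i["client"] == client]
    profiles[client] = {**details, "recent_topics": topics}

print(profiles["client_1"])  # a corporate client repeatedly asking about FX hedging
```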
It should be understood that this phenomenon is far from limited to the developed world; it is predicted that emerging markets’ (EMs’) share of the expanding digital universe will grow from 36% to 62% between 2012 and 2020. And in many cases, EMs’ ability to leapfrog developed-market legacy systems, together with circumstances (such as a lack of physical infrastructure) that engender a dependency on handheld devices, will position them ahead of the game in terms of technological innovation.
Regardless of circumstance, there can be said to be four stages of Big Data adoption: education, exploration, engagement and execution. In 2012, the largest group of survey participants (47%) considered themselves to be in the second, exploration, phase. Over the next ten years we should see corporates all over the world pilot Big Data initiatives to validate their value and requirements, go on to execute two or more such initiatives, and continue to apply advanced analytics.
Such adoption should have different implications across different industries. Within the financial arena, it has already led to advances in credit-scoring tests (with improvements of up to 50% in loan default rates), holistic customer service experiences, risk mitigation and cost reduction – as well as offering proprietary insight into end-consumer trends for corporate clients, or charting shifting trade patterns through global transaction data.
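To illustrate how richer data can feed a credit score, the sketch below applies a toy logistic scorecard to a few applicant features; the features and weights are entirely illustrative, not a calibrated model.

```python
import math

# Illustrative scorecard weights (hypothetical, not a calibrated model).
weights = {"intercept": -2.0, "utilisation": 2.5, "late_payments": 0.8, "years_as_client": -0.15}

def default_probability(applicant: dict) -> float:
    """Toy logistic model: estimate probability of default from a few features."""
    z = (weights["intercept"]
         + weights["utilisation"] * applicant["utilisation"]
         + weights["late_payments"] * applicant["late_payments"]
         + weights["years_as_client"] * applicant["years_as_client"])
    return 1.0 / (1.0 + math.exp(-z))

applicant = {"utilisation": 0.9, "late_payments": 2, "years_as_client": 4}
print(f"Estimated default probability: {default_probability(applicant):.1%}")
```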
For all involved, the pitfalls – technical, operational and ethical – of Big Data should not be overlooked. Expertise and labour skills in this area are likely to be in short supply and, leveraged incorrectly, Big Data can throw up blind spots and interpretation errors. The volume of data that needs to be protected is rising far faster than data volumes overall, and security remains a key concern – particularly as regulations around data protection, privacy and intellectual property lag behind the pace of technological change, as do current methods of anonymising data. Organisations need to ensure that their use of Big Data leaves their integrity and public trust intact.
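On the anonymisation point, a common first step is pseudonymisation, for example replacing direct identifiers with a keyed hash; the sketch below is illustrative only and, as noted above, such methods do not by themselves guarantee anonymity.

```python
import hashlib
import hmac

# Secret key held separately from the data (hypothetical placeholder value).
SECRET_KEY = b"replace-with-a-securely-stored-secret"

def pseudonymise(customer_id: str) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym, not true anonymisation)."""
    return hmac.new(SECRET_KEY, customer_id.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"customer_id": "GB-000123", "balance": 10_500}
safe_record = {**record, "customer_id": pseudonymise(record["customer_id"])}
print(safe_record)
```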
Despite the challenges, engagement with this phenomenon is unavoidable, and those that wish to catch the wave early must strategise how best to capture, analyse and leverage Big Data in order to remain competitive, minimise risks and ease the burden of compliance. Big Data is a new and exciting frontier, and only time will show in which direction its capabilities and effects will take us. One thing is for certain: it is a trend that is ignored at one’s peril.