In what has become a depressingly regular occurrence, major UK banks are once again recovering from IT system outages that left customers unable to access their online accounts. Cue the standard gnashing of teeth, enraged tweets and apologetic, red-faced bank reps citing “complex technical issues”.
The cycle continues and IT teams are left nervously anticipating the next game of IT error Whac-a-Mole.
As Treasury Select Committee chair Andrew Tyrie put it, many UK banks are currently at the mercy of a “systemic weakness in IT infrastructure”. It’s a difficult assessment to argue with, but what exactly is this weakness and why is a permanent fix proving so stubbornly elusive?
Dated and complex IT
The fact is many of the fundamental IT systems used by these banks are, in technology terms, ancient – in some cases more than 30 years old.
During this time, expectations of banking services have changed and consumers demand much more than just a safe place to store their money. Services such as online banking, contactless payments, and Apple Pay, to name just a few, have introduced new channels which existing IT platforms must accommodate, adding further layers of complexity to an already convoluted system.
>See also: Re-imagining the bank: why financial services need digital transformation
Patched on to this creaking infrastructure are numerous software components coded at different times, for different purposes and often in different programming languages.
The result is a lumbering IT Frankenstein’s monster – beyond the control of its creator and ready to cause disruption and havoc at any time. Worse, this havoc will be reported in minutes on social media, compounding misery with public humiliation.
Lacking visibility of risk
Recurrent outages indicate banks simply do not fully understand the amount or severity of the risk in their IT systems, or even where the weaknesses lie.
Highly complex and overlapping software components can be almost impossible to untangle and still more difficult to review without specialised tools and processes.
This worrying lack of transparency is the primary problem banks must address if they are to get their erratic IT under control.
Traditional software quality assurance revolves around functional and load testing, but these methods do not account for structural faults within the software architecture.
A structural fault hides deep in the code and could remain undetected for years until the addition of new functionality, such as online payments, triggers the fault and results in a system crash.
Around a third of glitches are the result of structural flaws, which the traditional testing approach is simply not designed to catch.
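To make the distinction concrete, the sketch below is a purely illustrative example (all class and field names are invented, not drawn from any bank's actual code) of how a structural fault stays dormant: a legacy statement routine assumes every transaction carries a branch sort code, so functional tests built around branch-originated payments pass for years, yet the first record from a newly added online channel brings the whole batch down.

```java
import java.util.Arrays;
import java.util.List;

public class StatementProcessor {

    // Hypothetical transaction record; the field names are assumptions for illustration.
    static class Transaction {
        final String sortCode;   // always populated by branch systems, null for the new online channel
        final long amountPence;

        Transaction(String sortCode, long amountPence) {
            this.sortCode = sortCode;
            this.amountPence = amountPence;
        }
    }

    // Legacy logic: labels each posting by the branch that originated it.
    // Functional and load tests built on branch-originated data never exercise
    // the null case, so the flaw stays hidden until a new channel is bolted on.
    static String describe(Transaction t) {
        // Structural fault: sortCode is dereferenced with no null check.
        return "Branch " + t.sortCode.substring(0, 2) + ": " + t.amountPence + "p";
    }

    public static void main(String[] args) {
        List<Transaction> batch = Arrays.asList(
                new Transaction("401276", 2_500),   // classic branch payment: processed fine
                new Transaction(null, 1_999)        // online payment added years later
        );
        for (Transaction t : batch) {
            System.out.println(describe(t));        // second record throws NullPointerException and halts the batch
        }
    }
}
```

Functional tests of the original branch channel would pass indefinitely; only a structural review of how the routine handles data it was never designed for would surface the weakness before it reached production.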
Leverage software quality standards
Thorough, automated analysis comparing source code against code quality standards, such as those agreed by the Consortium for IT Software Quality (CISQ), is required to measure the architectural integrity of systems and provide insight into which applications represent a business risk and are liable to cause problems.
Measuring software quality against the CISQ standards helps detect poorly written and potentially damaging code, identifying and measuring technical debt.
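As a rough illustration of what such analysis looks for, the fragment below (a hypothetical scenario with invented names, not a specific CISQ rule set) contains two weaknesses of the kind structural reliability measures are designed to flag: a file handle leaked on an early return and an exception swallowed in an empty catch block. Both are invisible to functional tests that only exercise the happy path.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class BalanceLoader {

    // Weakness 1: the reader is never closed on the early-return path,
    // leaking a file handle each time the balance file is empty.
    // Weakness 2: the exception is swallowed, so a missing or corrupt file
    // is silently reported as a zero balance instead of being escalated.
    static long loadBalancePence(String path) {
        try {
            BufferedReader reader = new BufferedReader(new FileReader(path));
            String line = reader.readLine();
            if (line == null) {
                return 0L;                     // resource leak: reader not closed
            }
            long balance = Long.parseLong(line.trim());
            reader.close();
            return balance;
        } catch (IOException | NumberFormatException e) {
            // swallowed exception: the failure never reaches monitoring
            return 0L;
        }
    }

    public static void main(String[] args) {
        System.out.println(loadBalancePence("balances.txt") + "p");
    }
}
```

A static, structural review of the code flags both issues immediately and quantifies them as technical debt; a functional test that feeds in a well-formed file sees nothing wrong.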
The UK’s major banks must review their IT systems against such benchmarks, or risk alienating account holders and damaging brand reputations built up over decades.
Improving software quality should also include greater transparency and communication between banks and software vendors, structural testing of software before deployment, and the careful rollout of updates and patches.
Fix, or pay the price
Logically, all businesses want to minimise unnecessary expenditure and streamline operations. A penny that isn’t working for the business is a penny wasted. What many banks have failed to understand is that ensuring the integrity of every facet of their software infrastructure is a business necessity, not a luxury.
Real-world examples abound. In 2014, RBS was fined a total of £56 million by regulators for a 2012 system failure triggered by a software upgrade.
Following the incident, the bank was also compelled to set aside £125 million to compensate those affected – far more than a software quality initiative capable of detecting such issues would have cost.
IT integrity is still something that is taken for granted at the executive level, but the proof is there: failing to consistently review software platforms can be very costly and embarrassing.
An open playing field
With well-established financial institutions being frequently tripped up by their own IT, their customers are beginning to wonder if they would be better served by moving their money to one of the up-and-coming challenger banks.
Some already offer better rates of interest for savers, and few are encumbered by the monolithic and glitch-ridden software systems of their established counterparts.
>See also: The rise of digital challenger banks – are they just for millennial ‘mobivores’?
The British public may start to look beyond the familiar high street names to a new generation of banks, and the government’s account switching campaign is reinforcing this new freedom of choice.
Relative newcomers such as Metro and Virgin Money may not have the presence, prestige or reach of the UK’s big banks, but with the government-initiated Current Account Switch Service prompting two million account switches in just two years, it’s clear that brand loyalty is no longer enough to retain British consumers.
An expanding and highly competitive marketplace, the potential for brand damage and the risk of monetary penalties make the argument for the UK’s major banks to implement software quality assurance standards no longer just compelling, but vital for their survival.
Sourced from Vishal Bhatnagar, SVP and country manager, CAST Software