Transforming fragmented legacy controls at large banks

Why is it risky for large banks to rely on their legacy controls? We speak to Alex Hammond, partner at Airwalk Reply, to find out

Progress in large, established banks can be slow, with a vast number of systems and processes to work through.

Though the prospect of change can be off-putting, we’ve learned from the likes of the FCA’s £61.1 million fine for Citigroup that relying on legacy controls can have major consequences.

We spoke to Alex Hammond, partner at Airwalk Reply, about legacy controls at large banks and why they need to be reviewed.

Why do large banks still use legacy controls, from your experience?

Firstly, they’re culturally embedded. In a lot of cases, you’ve got large teams that are responsible for executing those controls. So, there isn’t necessarily the imperative for those teams to innovate and change what they’re doing to move to a more modern approach.

In some cases this might mean working themselves out of a job; in most cases it means changing what they do quite significantly. That kind of change is challenging for many people.

Traditional audit structures, approval gates and manual checks are deeply embedded in the culture of how a lot of these big banks operate. Changing that culture requires a step-change, and it should come from the top down.

The other thing to consider is that taking a more modern approach, with more modern controls, often requires a significant change in how things are done. You’re not just talking about replacing certain components of a process with technology.

There’s also a cost to this change, and it’s not always at the top of the list when budgets come around. Usually, spend goes on areas that are revenue-generating or more in the innovation space. It can be something of a hard sell to the higher-ups as to why they would spend money to change something, and a lot of organisations aren’t great at articulating the business case for it. You end up in a cycle where the teams that are responsible for the controls don’t have a huge incentive to do much about it.

If you do get that approval, the change is potentially more drastic than a lot of organisations are up for, and the appetite isn’t necessarily there.

How do you get the team that’s responsible for the controls on-side?

It depends on what the business case is. Changing the nature of the roles within the team and getting people focused on more ‘value add’ activities, rather than running audits or undertaking approvals and checks, could be useful. A lot of these people probably didn’t spend their life at university desperate to get into a control role in a bank.

It’s about enabling them to be directed towards some of the more interesting elements and bringing to life the benefits the changes will bring them. If the answer is that there isn’t a huge amount of benefit to them, then this needs to be managed in a slightly different way. I would say that’s rare, but there are cases where it does happen, and you need to plough on regardless. Sometimes that means doing something about the team whilst you’re making the changes, such as backfilling with a third party to manage the process.

From an organisational perspective, the proportion of time you’re spending on those value-add and customer-facing activities is important. That might indirectly mean building out the infrastructure and automation to enable, say, a mortgage development team to test different concepts. They’re not directly coding for customers, but they’re providing the support functions to do that, and to do it at pace.

As you’ve said, the cost is a factor. How can banks improve their legacy controls without incurring too much cost?

If we’re talking about cost there and then – which is a mistake some big FSIs make – the question becomes how much it’s going to cost this year to do this project, rather than looking at the medium- or longer-term business case.

If you’re going to spend money on changing anything relatively substantial, it’s going to take more than a year to pay back, while a year is often the budgeting cycle. But if you start to look at how much it might save and how much efficiency it will drive in the medium term – the two-to-five-year period – you start to see a very different picture.

Ultimately, if you’re freeing up people to do more interesting things or to enable the bank to move at a faster pace, you’re going to be able to get products out more quickly, generating more revenues. You can also start to take inefficiencies out. So, the same number of people can start to deliver more quickly at greater scale. Very little can be done without adding cost, but the pertinent part is the benefit that cost is driving.

That cost isn’t necessarily monetary. It could also be downtime, interruptions to the business, or inconvenience to customers. How do you minimise that side of things?

And fines! Sometimes, organisations get these things wrong. It results in restrictions from a regulatory perspective, which means they can’t sell certain products. They’re losing revenues as a result or it’s the direct cost of punitive fines, etc.

So, a lot of these things are about spending money to prevent bad things happening, which is a bit like trying to convince someone to get leak insurance on their home before they’ve had a leak. When things do go wrong – maybe for a competitor, more broadly in the market, or in the organisation itself – that often helps, as you tend to get more of an appetite to spend the money to prevent later harm.

Alongside fines, what risks would you say could potentially come up if legacy controls aren’t improved?

I think technology has meant scale. It’s meant pace at an unprecedented level. That in turn means a proliferation of attack vectors and points of failure, making things exponentially more difficult to manage in a traditional manner.

If you take an operational resilience perspective, for example, that’s about being able to get your arms around your important business services, to use the regulatory language. What is supporting them? What does it take to maintain them, keep them resilient and available, and recover them? The reality is that this used to be infinitely more straightforward.

Most of the systems may have been in your own data centre in your own building. Now, the ecosystems that support most of these services are much more complex. You’ve obviously got cloud providers, SaaS providers, and third parties that you’ve outsourced to. You’ve also got a huge number of different services where, even if you’ve bought them and they’re in-house, there’s a myriad of internal teams to navigate.

If you’re trying to manage all of that using traditional controls and methods – that could be manual checks, audits, governance gates and more – at best you’ll drastically slow things down. We do see banks where their approach to it is, ‘Well, it all has to go through this process and it’s going to be manually checked and you’re looking at 12 to 16 weeks for a change to get through that process.’

You’re managing and controlling it, but you’ve basically shut down the bank in terms of its ability to get new things out of the door, which impacts its capacity to make money and be competitive. What’s more likely is that you’re going to miss a load of things because they fall through the cracks. The more people move to these modern technologies, platforms and ecosystems, the greater the risk that you just don’t have control – regardless of how much you’re putting in place from a legacy perspective – and your ability to respond to that is going to be pretty limited.
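To make that concrete – this is purely an illustrative sketch, not something Hammond describes, and every service, field and figure in it is a hypothetical assumption – a resilience check of the kind he is alluding to can itself be codified. Rather than relying on a periodic manual audit, a script can walk an inventory of important business services and flag any supporting dependency with no recovery owner, or whose recovery time doesn’t fit the service’s tolerance:

```python
# Hypothetical sketch: codifying an operational-resilience check.
# The data model, field names and sample records are illustrative assumptions,
# not a real bank's schema.

from dataclasses import dataclass, field


@dataclass
class Dependency:
    name: str                          # e.g. a cloud service, SaaS product or internal team
    recovery_owner: str | None = None  # who is accountable for recovering it
    rto_hours: float | None = None     # recovery time objective, in hours


@dataclass
class BusinessService:
    name: str
    max_tolerable_outage_hours: float
    dependencies: list[Dependency] = field(default_factory=list)


def resilience_gaps(service: BusinessService) -> list[str]:
    """Return human-readable gaps for one important business service."""
    gaps = []
    for dep in service.dependencies:
        if dep.recovery_owner is None:
            gaps.append(f"{service.name}: '{dep.name}' has no recovery owner")
        if dep.rto_hours is None:
            gaps.append(f"{service.name}: '{dep.name}' has no recovery time objective")
        elif dep.rto_hours > service.max_tolerable_outage_hours:
            gaps.append(
                f"{service.name}: '{dep.name}' recovery time ({dep.rto_hours}h) exceeds "
                f"the tolerable outage ({service.max_tolerable_outage_hours}h)"
            )
    return gaps


if __name__ == "__main__":
    payments = BusinessService(
        name="Faster payments",
        max_tolerable_outage_hours=2,
        dependencies=[
            Dependency("Cloud messaging queue", recovery_owner="Platform team", rto_hours=1),
            Dependency("Third-party sanctions screening", recovery_owner=None, rto_hours=6),
        ],
    )
    for gap in resilience_gaps(payments):
        print(gap)
```

Run against a live inventory every time something changes, a check like this scales with the ecosystem in a way a 12-to-16-week manual process cannot.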

We’ve touched on this a little bit already, but I can imagine AI plays a role in improving legacy controls.

Yes, absolutely. I think this is a space where, if you look at the most compelling use cases for AI in financial services, it’s about tackling codifiable tasks. There are tasks that are highly complex or contextual that AI is probably not the answer for, certainly not at the moment. But anything that’s inherently codifiable means you can point automation, hyperautomation, or possibly AI at those tasks and remove the need for what would traditionally be more human, manually-oriented work.
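As an illustration of what ‘codifiable’ can mean in practice – a hypothetical sketch rather than anything Hammond or Airwalk Reply prescribes, with made-up field names and records – a manual control such as ‘every production change needs an approver who isn’t the author, plus test evidence’ reduces to a few lines of Python that can run against every change record automatically:

```python
# Hypothetical sketch: automating a codifiable change-approval control.
# The field names and sample records below are illustrative assumptions.

def control_violations(change: dict) -> list[str]:
    """Check one change record against a simple, codified control."""
    violations = []
    if not change.get("approver"):
        violations.append("no approver recorded")
    elif change["approver"] == change.get("author"):
        violations.append("author approved their own change")
    if not change.get("test_evidence"):
        violations.append("no test evidence attached")
    return violations


changes = [
    {"id": "CHG-1001", "author": "asmith", "approver": "bjones", "test_evidence": "run-4312"},
    {"id": "CHG-1002", "author": "asmith", "approver": "asmith", "test_evidence": None},
]

for change in changes:
    for violation in control_violations(change):
        print(f"{change['id']}: {violation}")
```

The point is not the specific rule but that, once a control is expressed this way, it can run on every change rather than on a sampled audit.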

If you can take AI or automation and remove the need for people to be doing those tasks, then suddenly you’ll see a massive amount of inefficiency taken out of these operations. You’re not just standing still; you’re also enhancing the performance posture of the organisation. At the same time, humans inherently make errors; they miss things. They go off sick, they go on holiday, their skills become out of date. Some of that is also true for AI – models must continue to be fed and made smarter all the time – but they don’t have some of those other challenges, and they’re able to scale and deal with the pace much more easily by their very nature.

The reality is that in financial services, the opportunity is more about how legacy controls are fundamentally very inefficient, with tens of thousands of people cranking the handle on vast operations. The opportunity for AI to transform and disrupt that is massive.

It’s not as straightforward as saying, ‘build it and they will come’, but it doesn’t have to be inherently disruptive to existing operations if you do it in the right way.



Anna Jordan

Anna is Senior Reporter, covering topics affecting SMEs such as grant funding, managing employees and the day-to-day running of a business.
