When technology fails us, or something doesn’t work properly, it’s easier than ever to find fault with our service providers. Let’s face it – it only takes a wireless router to drop out for thirty seconds, or a website to fail to load when we want it to, for the modern customer to tweet, email or place a call to point the finger and demand that something be done to rectify the situation. But what about when you are the service provider responsible for the outage?
For service providers, the concept of ‘finding fault’ can take on an entirely new meaning, but it’s increasingly a task that is easier said than done, as a direct result of the complexity and the sheer volume of technological layers involved. So how do you turn an elaborate ‘treasure hunt’ for faulty systems into an easy-to-navigate troubleshooting process that can remedy the situation quickly and easily?
> See also: IBM Watson and Genesys partner to power smarter customer experiences
First of all, let’s clarify one thing: it doesn’t matter which company you are – almost every service available, regardless of how advanced it is, or how much money customers are paying for it, will develop some kind of fault at some point. The trick, of course, is to find a fix as quickly as possible, and to restore the troublesome systems to full functionality before they can have a negative impact upon the brand. You might think that this would be a simple, straightforward process.
After all, when a fault develops with a specific system, it’s not unreasonable to expect your service provider to know how to provide an immediate fix. The sad and unfortunate truth is that when something goes wrong, many service providers have to navigate their way through so many layers of technology that finding the root of the problem can prove to be as much of an issue as the issue itself.
There are a number of reasons for this. Many telecommunications companies, for example, have gone through a large number of acquisitions that require them to integrate disparate technologies into a single working system. Others have added layers to existing legacy applications as a means of responding to changes in the way that systems are used, as evidenced by some banking mainframes which are now under strain from having to support and enable internet and mobile banking.
Add to this the fact that, as products and their users become more sophisticated, the complexity behind them increases, creating a ‘nexus of technologies and services’ that is difficult to decompose, let alone manage, and the problem becomes clear. One of the consequences of this complexity is that it creates significant challenges when change is required. After all, if you don’t have access to enough information to assess the risk a change would introduce, would you still decide to make that change? It would be like pressing a button on an aircraft’s flight deck when you had no idea what it does.
The upshot of this is that organisations are increasingly finding themselves having to follow a ‘treasure hunt’ style trail of clues before they can identify the problem and get things back in working order. Clearly, in a society where customer experience has moved from a peripheral consideration to a central concern, this will not do. Indeed, customer experience is increasingly becoming the only frontier that top tier service providers can use to differentiate themselves, which makes it more important than ever that they get it right. So what’s the answer?
> See also: Survival of the fittest: Will DevOps save IT from going the way of the dodo?
Above all else, it’s vital that organisations have a clear picture of everything that is going on within the business. It’s no longer enough for one employee to have a sketch in his or her mind of how the technology connects, or for the answer to be buried in paperwork gathering dust on a shelf. Service providers need a clear, easily referenceable snapshot of their systems and the way in which they operate so that, when faults occur, they are prepared and ready to deal with them through their Incident and Change processes.
This can be provided by developing asset and configuration management processes that are based on an up-to-date model of the technology estate and the interdependencies within it, including the all-important link to services. This model is commonly referred to as a Configuration Management Database (CMDB). Think of it as having a clearly marked map of everything within your technology ecosystem and how those things relate to each other, instead of a complex, confusing and sometimes misleading diagram that leads you round and round in circles.
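To make the idea concrete, the kind of dependency model a CMDB captures can be sketched as a simple graph: components and services as nodes, dependencies as edges. Walking the edges backwards from a failed component reveals which services are affected – the ‘treasure hunt’ reduced to a lookup. This is a minimal illustrative sketch only; all component and service names below are hypothetical, and real CMDB tooling is far richer.

```python
from collections import deque

# Toy CMDB: each item maps to the things it depends on.
# All names are invented for illustration.
DEPENDS_ON = {
    "online-banking": ["web-frontend", "auth-service"],
    "web-frontend": ["load-balancer"],
    "auth-service": ["mainframe-db"],
    "mobile-banking": ["api-gateway", "auth-service"],
    "api-gateway": ["load-balancer"],
}

def impacted_items(failed_component, graph):
    """Return everything that directly or transitively depends on
    the failed component -- the fault's 'blast radius'."""
    # Invert the dependency edges so we can walk upwards.
    dependents = {}
    for item, deps in graph.items():
        for dep in deps:
            dependents.setdefault(dep, []).append(item)
    seen, queue = set(), deque([failed_component])
    while queue:
        node = queue.popleft()
        for parent in dependents.get(node, []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return sorted(seen)

# A fault on the mainframe touches auth, and both banking channels.
print(impacted_items("mainframe-db", DEPENDS_ON))
# → ['auth-service', 'mobile-banking', 'online-banking']
```

With a model like this kept current, assessing the risk of a change stops being guesswork: before touching a component, you can see exactly which customer-facing services sit above it.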
Having this in place can also help service providers to anticipate problems and deal with them before they can negatively impact the customer experience and the brand as a whole. As a result, customers will experience fewer problems, engagement will increase, and service providers will be able to build longer, more sustainable relationships with them. All of which sounds a lot more desirable than having unhappy customers looking to point the finger of blame at you every time a fault occurs!
Sourced from Alastair Masson, client partner at NTT DATA