James Hodge is Group Vice President & Chief Strategy Advisor at cybersecurity and observability leader Splunk. The data-driven technologist talks to Information Age about business goals, digital resilience and the implications of AI.
Talk us through ‘big data’
A few years ago, there was a huge uptick in the use of the term ‘big data’. What it really meant was people understanding the value of all the operational machine data that was out there, and what it represented.
It’s not just data, it’s a record of an interaction that took place. You can start to look at it and say, ‘What does that mean to me if I take a security lens to it, or an operational lens?’ Or, ‘What did someone do with my service, and how did they interact with me? How can I provide a better service to them?’ I think there’s been that big explosion in awareness, on a large scale, of what data can do to enrich the way we interact with services and enrich our lives.
Brand loyalty is not what it used to be – it’s experience loyalty that you now have, and the only way to deliver that is digital. We’re seeing this big rise in reliance on digital systems, and the need for those systems to be resilient. It’s not good enough to have a great experience; people need to have trust. They need to have cognitive trust, digital trust, societal trust in you as a brand. That almost exclusively comes down to digital data, the use of your data, and how you apply it. That goes across all industries – even mining has incredibly advanced drill bits that generate lots of data. In Formula One, if they’re doing an engine test and they can’t get telemetry from that engine – say, the Wi-Fi is not working – they won’t boot the engine up to test it, because it’s a safety issue and a huge potential expense.
How do businesses take what they’ve got and set out their goals?
I think sometimes that’s where businesses go wrong – they think that data is the answer to everything, so they say, ‘We’re going to go and see what we can find in the data.’ It’s really about understanding what you want to be as a business, then what goals you want to achieve. Do you have data that can support those goals and help you make better decisions?
What are you going to do if you disagree with it? Start to think about that. We’ve all got biases in everything that we do. You might get a data point that comes out, you look at it and go, ‘Oh, that data is wrong. Because I believe I’m right, the data must be wrong.’ We’re starting to see, all the way from frontline staff up to executives and boards, people working out how to deal with the complexity of imperfect data. You don’t want to blindly follow the data either.
What’s really interesting is the application of AI. So, you want to bring in AI to help you make decisions. What happens when you disagree with the AI? What are you then going to do? If you’re always going to disagree with it and do what you wanted to do anyway, then why bother bringing the AI in? Have you perhaps mis-written your requirements for what that AI system is going to do for you? A lot of this comes down to foundational strategy: organisational design, people design, decision making.
As an executive leader, it’s really easy to stand up on stage and say, ‘Here’s our 2050 vision or our 2030 vision.’ At the end of the day, an executive doesn’t do much; they just create the environment for things to happen. It’s frontline staff that make decisions. There are two reasons why you wouldn’t make a decision: you don’t have the right data and context, or you don’t have the authority to make that decision. Typically, you only escalate a decision when you don’t have the data and context – it’s your manager that has more data and context, which is what enables their authority. So, with more data and context, I can push more authority and autonomy down to the frontline to actually drive transformation. The frontline is making micro decisions every single day, and they end up having a macro impact on your business.
Data can be an amazing enabler to get that flywheel of transformation going, because you can then work out why two people see something differently from a data perspective, rather than from an ‘I’m right, you’re wrong’ perspective.
How does data help businesses with their cybersecurity efforts?
I think it’s incredibly complicated. Nowadays, if you look at the pace of change in technology, what you can do in cloud computing, and the demands from consumers, you’re in a world of continuous development and continuous releases. That creates more and more possible vectors for someone to compromise your service. Not all of it is malicious – it’s easy to just misconfigure something, forget to set the right password, or leave something publicly open to the internet.
From a security point of view, step one is understanding where your risk lies. You can put all of your effort into one area, but if the risk to your business isn’t really there, then you’re wasting money. That’s the first exercise: what risk profile do I have? Starting with the data you do have allows you to understand what’s happening within each risk area.
NIST, the US National Institute of Standards and Technology, has a great framework, and it says the first thing you need to do is be able to detect that something’s gone wrong. Once I’ve identified that, do I want to mitigate that risk? Now that I’ve really understood it, can I prevent it happening in the future through policies, technology, people, AI, whatever it is? Or do I want to accept the risk? It comes down to looking at what the risk is to my business – how much capacity do I have, and where am I going to place the next effort?
You’re a real proponent of data ethics. How does a business decide how much data to take from customers and clients in order to meet its needs?
I think it’s really important because of regulatory bodies. There are things like the EU AI Act coming out, of course, with the UK following on. I think the UK is taking a very pragmatic look at it – they’ve gone for a sectoral approach, rather than trying to legislate for the technology as a whole. I’m oversimplifying here, but take, as an example, a chatbot for when my washing machine is broken and I want to organise a repair. That’s very different in terms of risk profile from getting medical advice from a chatbot. The chatbot remains the same, but it’s the application where risk gets introduced, and where we have to think about the ethics of applying it.
That’s why Splunk is working with UK regulators and those bodies. It’s really important to understand the guardrails, and then look at data sovereignty, which is one area, along with data boundaries. Do I want data to leave the UK or France? Or do I have data-sharing agreements between the UK and the US?
The last, and most important, part is: does this solve my business need? Is there a reason for using this data? That’s where the role of the Data Protection Officer and, in the UK especially, the Information Commissioner’s Office (ICO) is critical in helping define the right way to use data.