Bernd Greifeneder, founder and CTO of Dynatrace, spoke to Information Age about the keys to success in his role, and keeping data secure
When it comes to keeping all assets within the company network secure, visibility across all datasets and activity is paramount. Cyber attacks on companies of all sizes are always evolving, and can catch a business out if it falls behind. This risk can be mitigated with a full-stack observability platform, which helps maintain complete visibility while keeping strain off staff.
Bernd Greifeneder, CTO of Dynatrace, founded his software intelligence company in 2005 with the aim of delivering data visibility without silos, helping businesses stay innovative and secure. Nearly 20 years on, he has had to keep a keen eye on widespread change across the tech sector, in line with evolving customer needs and cyber threats.
On the day that Dynatrace released its Grail data lakehouse, Information Age sat down with Greifeneder to learn more about his keys to success as a tech leader; the challenges he has overcome in his role; and how to balance security and cost-effectiveness using analytics and intelligence capabilities.
What has been key to keeping up with industry trends and customer pain points over the years, as Dynatrace founder and CTO?
One aspect here is anticipating and understanding that the growth of data and the complexity of systems are not linear. Now that data is exploding exponentially, all the dependencies amongst that data are also exploding exponentially. This means that traditional approaches fail, so anticipating these changes has always been key.
Dynatrace is all about never taking the typical approach. It drives me nuts when the world thinks of observability as purely metrics, traces and logs, because it’s also about tying this to the business. Therefore you need to understand user experience and user behaviour, and have a much bigger picture. These are what I see as the key elements of observability when it comes to driving innovation.
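To illustrate the idea of tying observability data to business context, here is a minimal sketch using the open-source OpenTelemetry Python API rather than Dynatrace’s own instrumentation; the service name, attribute names and values are hypothetical, chosen only to show how user and business context can travel with a trace.

```python
# A minimal sketch, assuming the opentelemetry-api and opentelemetry-sdk
# packages are installed; attribute names below are illustrative only.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Configure a basic tracer that prints finished spans to the console.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

def process_checkout(user_id: str, cart_value: float) -> None:
    # The span carries not just timing data but user and business context,
    # so the trace can be tied back to user experience and business impact.
    with tracer.start_as_current_span("checkout") as span:
        span.set_attribute("user.id", user_id)
        span.set_attribute("user.segment", "returning")
        span.set_attribute("cart.value", cart_value)
        # ... actual checkout logic would go here ...

process_checkout("user-123", 59.90)
```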
You should never stick to what the world thinks is the status quo. You should always be looking forward to the next step. This is also why it’s often so hard for us to make people understand that, hey, this is actually possible – because so often in our history, customers have said before they evaluated Dynatrace that “this is not possible”, and then they used it.
How do you go about communicating your tech strategy to the entire organisation, and keeping the workforce on board with that strategy?
One of the key lessons learned over the entire history of Dynatrace is that with every doubling in the size of the team, you have to reinvent how you work together. A year ago we surpassed 1,000 members of staff in the R&D team alone, and two years ago I set out a plan for what my organisation needs to look like as we move towards the 3,000+ mark, and what has to change.
I’ve spent three years so far coming up with a system that enables entrepreneurial thinking and acting at scale. This is important because, looking from the top down, I don’t know all the little things that every member of staff needs in order to innovate. So, I need to foster and build an organisation whose many different parts can innovate on their own.
Therefore, I structured the team into areas that I call ‘solutions’ to map out the architecture. These include infrastructure monitoring, application monitoring, and security. These areas focus on the target audiences and the use cases for those audiences, so that their experience is optimal. From here, we can consider how we bring them all together with the other data sources and the right APIs. This kind of structure enables us, on the one hand, to optimise directly for use cases and audiences; on the other hand, it gives the more technical capability or entity teams enough autonomy to come up with new approaches for dealing with petabytes of data properly.
Then, a big part of the communication is having a joint vision for everyone. To achieve this, we’ve been holding our Perform conferences for the past three years. This has been the key to consistency and to helping our vision become reality, because it takes time for everyone to ingest it. You need to repeat it and rephrase it constantly across the different teams, and combine it with customer practices. This is also why I run a very rigid internal self-adoption program for our staff. It doesn’t start when we have built the product – self-adoption begins in the ideation phase.
All of this together helps drive communication of the vision forward, with a more autonomous, entrepreneurial approach.
What have been the biggest challenges you have faced in your role as CTO, and how have you managed to overcome them?
Interestingly, the biggest challenge is not technology, despite us building technology that no one has built before. The hardest challenge is always figuring out how to get the right talent in the right areas. This goes back to the need to change the organisation with every doubling in size, which has been key to retaining our entrepreneurial spirit.
I realised that as we grew towards around 300 people, this doubling in size was relatively easy, because it was the founding team and some of the first employees, and we all had this entrepreneurial attitude and the desire to get things done better than the competition. But then, as you keep hiring and hiring, and the new hires onboard newer hires, suddenly, at 300 people and beyond, that becomes really hard. At 500 people, something hit me – the new guys had no clue anymore who we are, why they come to the office every morning, or what motivates them. Not even the mentorship programs we had in place worked.
So, I needed to make sure that we were explicit, and it took me a while to figure out how to make it explicit. We now ingrain a key piece of guidance that everyone has to learn as part of the onboarding program: anyone can make any decision, but you have to consult everyone who is impacted by your decision. Of course, we all need to keep the framework of legal requirements and other regulations in mind to stay compliant. But this mindset maximises the autonomy of people. For example, we encourage everyone to consider: is this consistent with the UI? Is this impacting data storage? This is the right approach to communication, without needing top-down direction.
By consulting with colleagues and realising that a decision may impact performance, you can rethink your decisions while retaining autonomy. This also helps with innovation, because we are not hiring people to tell them what to do. We’re hiring people to come up with ideas that are unique, and tell us the right thing to do. And we help them with the right context to make the right decisions.
You founded Dynatrace in 2005 – has the tech skills gap always been there, in your view?
It has always been there, but it’s just getting worse all the time. And this is also why I am so convinced that no enterprise can survive without automation. Tech demand is continuing to grow, but you want to have talent focus on the business’s core value proposition, and invest all of your energy into digitalising that.
Which would you say have been the biggest cyber security challenges faced by organisations in the past two years, in the post-pandemic landscape?
I’ll never forget the SolarWinds attack, or Log4Shell for sure. These incidents have shown that the firewall alone is no longer enough to secure your perimeter. That is now history – you have to treat each and every software service as if it’s public. This is happening anyway, as most services get migrated to the cloud, but it implies that you need to secure each and every service.
But how do you do this? When you have over 500 employees writing little rules for each individual service, the approach just does not scale. So you need to immunise from the inside. This is why Dynatrace has the OneAgent that thousands of customers have already deployed millions of times, and that agent has been enabled to block security attacks.
Now, you could argue that your teams know how to deal with modern cloud-native applications because they have 50 security tools running. The interesting part is that most of the security challenges are in the newest applications, and in the oldest ones that companies run. You may say that Log4Shell is history and that every company has fixed those issues already, but many organisations still download old legacy libraries that have the vulnerability, and don’t know it. This is also why it’s important to protect the runtime. So, this is a big challenge that comes with the speed of deployment, and it also demonstrates the issue of so many third-party libraries still being out there.
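To illustrate the point about legacy libraries, the sketch below is a hedged example rather than a Dynatrace capability: a build-time check that scans a directory for bundled log4j-core jars older than the commonly recommended patched 2.17.1 release. The path and version threshold are assumptions, and a check like this only catches what it can see on disk, which is why runtime protection still matters.

```python
# A minimal sketch: walk a build output directory and flag log4j-core jars
# older than the patched 2.17.1 release. The path and threshold are
# illustrative; a real pipeline would use a software composition analysis
# tool rather than relying on filename matching.
import os
import re

PATCHED = (2, 17, 1)
JAR_PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")

def find_vulnerable_jars(root: str):
    """Yield paths of log4j-core jars whose version predates PATCHED."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            match = JAR_PATTERN.search(name)
            if match and tuple(int(g) for g in match.groups()) < PATCHED:
                yield os.path.join(dirpath, name)

if __name__ == "__main__":
    for jar in find_vulnerable_jars("./build/libs"):
        print(f"Potentially vulnerable dependency: {jar}")
```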
With more data constantly being generated at an exponential rate, how can observability and security analytics datasets be better optimised for complete visibility, while keeping operations cost-effective?
So one of the challenges with security is that every company has dozens, if not 100+, security tools. It’s massive, and every security tool looks at security in a very isolated fashion. But you need to put this all into context – not just for the actual attacks, but also for the vulnerabilities within.
You need to bring the data together across the entire data flow, from the end user all the way to the back end. An end-to-end approach is needed, and interestingly, that is exactly what observability aims for. This is why I started changing my approach a couple of years ago – back then, we thought that having the data was enough, but then I realised that security companies needed to adopt more monitoring capabilities to overcome the challenges they were facing. We have the interoperability knowledge that the security companies don’t, so it’s much easier for us to help them find the security context behind the data.
This, in turn, convinced me to enter the security world first with the agent that’s already in place. Previously, I had been analysing runtime application self-protection (RASP), and figured that those security companies had a great idea about how to approach protecting applications, but they could not build the proper agents required. We have been building agents since 2005, helping organisations keep overhead low in the process. That leaves the need to address the data itself – allowing us to detect vulnerabilities in real time and instantaneously provide risk analysis. All of this brings the data together for optimised security.
I believe security information and event management (SIEM), for instance, is one of the security areas that is going to be big going forward. We want to disrupt that space, not only by doing it better, but also by doing it differently, with the entirety of data at the organisation’s disposal.