Forget Ballmer, Ellison and Kagermann: when it comes to enterprise software, IBM’s Steve Mills is undeniably the world’s most powerful figure. The 34-year IBM veteran heads a software business that contributed $20 billion to the company’s $98.8 billion revenues in 2007 (easily outstripping the equivalent enterprise software numbers at rivals Microsoft, Oracle and SAP). More importantly, it banked 40% of the company’s profits.
Leading a team of 45,000, including 25,000 software developers, Mills’s style is pragmatic and uncompromising: his ‘hype antennae’ are acutely sensitive, and his passion is more aligned to business transformation and the pursuit of great business outcomes than to any deployment of technology for its own sake.
And that has been applied in his own organisation as well as to IBM’s customer accounts. Since the late 1990s, he has been instrumental in transforming what was a hardware-subservient division into an aggressive, fast-growing, independent software group – fuelled by over 60 acquisitions. He has built the industry’s largest business software portfolio through the buy-outs of – among many others – Tivoli, Rational Software, FileNet and, more recently, Cognos and Telelogic; plus the in-house development of the WebSphere and Lotus suites, and a whole host of software for managing its mainframe and mid-range systems.
Information Age (IA): As the head of the world’s biggest enterprise software operation, you must have unique exposure to customers’ business challenges. What are their strategic priorities for 2008?
Steve Mills (SM): It comes in a couple of flavours. There are different perspectives based on whether you are looking at the agenda corporations are setting for themselves or at the agendas distinct to IT operations and IT efficiency that the CIO typically has responsibility for dealing with.
Around the world, businesses all articulate a perspective on the way they wish to operate in the future. They have an aspirational set of goals related to end-to-end automation – straight through processing, frictionless order-to-cash, etc. They see information technology as the tool by which they can realise improved operational models, and therefore greater efficiency.
For some companies, that translates into customer-facing systems with the potential to leverage top-line, in addition to bottom-line, results. Others are focused on more internal operational concerns: if you are in the manufacturing business you are desperately trying to reduce your overall inventory-carrying costs and optimise plant and equipment.
You have similar discussions in government, especially in emerging markets, where people want to leapfrog – they don’t want to simply follow the path that others have [gone down]. They are looking at how services are being delivered on a best-of-breed basis, saying, for example: “Why don’t we go directly to an electronic tax system rather than travel through a few decades of paper and local collection?” Those kinds of things. And IT is very much at the hub of this.
Alongside that, you have the incredible expansion of mobile communications and intelligent handheld devices. We have talked for more than a decade about the possibilities of these devices becoming an instrument of commerce.
IA: That rings true with the recent survey feedback we have had from Information Age readers, in which they rated mobile working and remote deployment of the workforce as the most adopted strategy and one of the top five most effective strategies.
SM: There has been a remarkable change during this decade towards the virtual workplace. Increasingly, the majority of employees are not sitting next to the person that manages them. And the overwhelming majority of people are not at headquarter locations – they are dispersed where the business is conducted. In many cases people no longer have an office in the classical sense.
Certainly, in our outbound jobs, whether they’re sales or consulting or front-of-customer execution, we would never go back to the kind of anchored branch model of decades past. We still have to deal with aspects of connecting people to people; human interaction and collaboration is a critical issue, and the challenge is how to create collaborative infrastructures where people can interact with each other without necessarily always having to be in close proximity.
We are pretty aggressive around the use of social networking technologies. All our folks have trouble finding experts and getting answers to questions. So we have a kind of virtual environment, with hundreds of communities. Literally, people post tens of thousands of documents and links to these. So if I am into a particular topic from a technology perspective, I can tap into that community and see who the experts are, I can start to chat with people, I can send them email, I can look at the material that they have posted.
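What that looks like in practice can be sketched in a few lines of code: a simple index of community postings that surfaces the most prolific contributors on a topic. This is a minimal, hypothetical illustration – the names and structure below are ours, not IBM’s actual system.

```python
# Minimal sketch of an expertise index: community members post documents
# tagged by topic, and anyone can ask who the experts on a topic are.
# All names are illustrative; this is not IBM's actual system.
from collections import defaultdict

class CommunityIndex:
    def __init__(self):
        self._docs_by_topic = defaultdict(list)  # topic -> [(author, title)]

    def post(self, author, topic, title):
        """Record a document an employee has posted to a community."""
        self._docs_by_topic[topic].append((author, title))

    def experts(self, topic, top_n=3):
        """Rank authors by how much they have posted on a topic."""
        counts = defaultdict(int)
        for author, _ in self._docs_by_topic[topic]:
            counts[author] += 1
        return sorted(counts, key=counts.get, reverse=True)[:top_n]

index = CommunityIndex()
index.post("alice", "websphere", "Tuning WebSphere thread pools")
index.post("alice", "websphere", "Session replication notes")
index.post("bob", "websphere", "Getting started with MQ")
print(index.experts("websphere"))  # -> ['alice', 'bob']
```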
IA: So is this knowledge management meets social networking/wikis?
SM: It is bottom-up knowledge management. The old notion of knowledge management was top down. In other words, someone has to figure out what everybody needs to know and then capture it and manage the taxonomy. The bottom-up techniques are much more effective in ensuring the information is pertinent to the topic and timely, and that the community self-regulates.
I went through the previous waves of knowledge management – I’ve lost track of how many waves there were – and it just never worked. You could never figure out all the different ways people wanted to work and needed information. This [new wave] captures some of the excitement of the public Internet, social networking and the wiki community while applying it to business. It has been a very powerful tool for us and we have been startled by the number of customers that have jumped on [our offering in this area].
IA: Can we explore another Internet-driven applications phenomenon: the growth of software-as-a-service? What role does IBM expect to play in this area?
SM: A thoughtful analysis of this would reveal that this idea has been with us for many decades. I think the venture capitalists like the expression ‘software-as-a-service’, because it paints a new-age label on an old technique. You can hype your market value that way because you are part of the next wave.
You are certainly familiar with ADP [the payroll processing company] and time-sharing. ADP scratch their heads and say: “We were here a long time ago. We are the most successful software-as-a-service model in the world.” There are thousands of companies in the world that do that today. Many of them are in industry, many of them are local. I don’t think this market space has been properly characterised.
Healthcare is a good example of that. Banks, too. All over the world they are trying to provide processing services for small businesses – handling their receivables, payables, etc. We have a number of different areas where we are delivering these things as a business process-related service. We don’t hang a software-as-a-service label on that.
The reason is that people want to buy the outcome; they don’t want software-as-a-service. So this is actually a very big marketplace. Software-as-a-service as defined by the IT analyst community appears small, but if you took business process outsourcing, as done by the thousands of companies that deliver it today, then it is a huge market.
These [options] have been with us for years but standards and high bandwidth are making them ever more possible. Where the server is located is not that important any more, and response times are great because of high bandwidth. So we are going to continue to see significant growth in this whole business process outsourcing arena. [But] there’ll still be people fanning the flames of this software-as-a-service acronym, which in my opinion is tech-industry doublespeak. The tech industry loves these [kinds of] things.
IA: Google, Yahoo and Microsoft are building huge data centres across the globe to deliver hosted services online. Is that a direction IBM intends to follow?
SM: For the consumer, no. We have made it clear in the market that our orientation is not toward [the consumer]. It is not that we won’t help others serve the consumer, we are happy to do that, but we don’t want to be consumer facing. That is not our forte.
Google, Yahoo or MSN have the potential to offer you and me enhanced online services of various kinds, so why not do this through those companies? [But for companies the] considerations are “Where are my documents? Where is my data?”
People [in the corporate world] are intrigued by these ideas and they are looking at them. But your storage problem is a little different in large businesses because of the phenomenon of replicated copies, because of the aspects of changed data and currency.
In the corporate world, 75% of all the data maintained by businesses – terabytes if not petabytes – is replicated data. Corporations have a master copy and multiple copies of data sitting within departments. Individuals take parts of the data and put it on their PC to interact with that data to suit their particular job requirements. Some of this is done under tight control, some of it just happens ad hoc. If you are in the financial services market, that changed data is critically important. You want to capture the change and make sure every change ripples through. You as the customer can see your portfolio positions, and the rep that serves you out of a bank or office is seeing the same data. So it is not clear that, for large businesses, an outside service provider solves your storage problem.
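The “ripple through” requirement Mills describes is, in essence, change propagation from a master copy to every replica. A minimal sketch of the pattern (class and method names are ours, for illustration only):

```python
# Minimal sketch of change propagation: a master record notifies every
# registered replica when it changes, so a customer and the bank rep
# serving them see the same portfolio position. Names are illustrative.

class MasterRecord:
    def __init__(self, data):
        self._data = dict(data)
        self._replicas = []

    def register(self, replica):
        """Attach a departmental or desktop copy to the master."""
        self._replicas.append(replica)
        replica.apply(dict(self._data))  # start from the current state

    def update(self, field, value):
        """Change the master copy, then ripple the change everywhere."""
        self._data[field] = value
        for replica in self._replicas:
            replica.apply({field: value})

class Replica:
    def __init__(self, name):
        self.name, self.view = name, {}

    def apply(self, change):
        self.view.update(change)

master = MasterRecord({"position": 100})
customer_portal, branch_rep = Replica("portal"), Replica("rep")
master.register(customer_portal)
master.register(branch_rep)
master.update("position", 120)
assert customer_portal.view == branch_rep.view == {"position": 120}
```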
Now, if I were a small business, maybe I would push all this data upstream to sit with some specialist company that works in my industry. Perhaps, if I were a lawyer using a service provider, I would trust someone to store all my company case management documents.
Would I do that through Google or Yahoo? Maybe, but not yet. It is a little murky. It is just that I might have a hard time connecting my organisation’s specific needs with a company whose business is about the public Internet, and which actually operates predominantly in a stateless mode [data only retained while it is active]. There are different aspects of managing things for long-term preservation. And sometimes you have to manage each iteration of the information. It needs to be auditable and protected. So it is not a straight line from here to [there]. New companies can come into the market with a great brand name [like] Google. But while [that company] carries a certain mystery about what it is working on, I guarantee you one thing: Google is working on advertising – 99% of the revenue is advertising.
IA: Acquisitions have become a central part of IBM Software’s expansion strategy, and that has moved the portfolio up the stack from middleware and systems software into the applications layer, with FileNet and Cognos products. Can you give us an idea of whether there is a clear applications line that you do or do not want to cross?
SM: So I am the guy that makes these decisions, and [I can tell you] there is a clear line. However, the line is more related to an understanding of the ecosystems in which we participate.
Take FileNet as an example. Clearly, there are people who look at FileNet as an application. But most of the code, by weight, is infrastructure not application code, even though it has application-like characteristics.
Whenever we buy a company we ask ourselves who is going to hate us when we buy this company; who does it annoy? There are companies that we don’t care if they hate us, because they already hate us. They are not going to stop hating us whether we buy or not… So we view EMC and Microsoft and Oracle as companies competing with us.
We would hesitate on acquisitions in many areas because we see ourselves as needing to work with many companies.
If we were to go into core ERP or core CRM, obviously not only would SAP be very upset but we have relationships with Oracle, Lawson, Sage and the other players. These relationships actually deliver billions of dollars in revenue to IBM.
For every $1 of enterprise application money an enterprise customer spends, there is $5 of related services, hardware and software that goes with that product. We are clearly SAP’s biggest partner; we are Oracle’s biggest partner. We clearly do not want to lose the revenue and profit we get through those relationships. What this comes back to is that we are not afraid to deliver applications, but we are clearly very selective about where we do it. It is in spaces which are highly exploitative of infrastructure.
Steve Mills on the Green IT agenda
Information Age (IA): Energy has become a priority item on most IT agendas, particularly in relation to data centres. What role will software play, firstly in addressing the environmental impact of IT, and secondly as a means of reducing an organisation’s carbon footprint?
Steve Mills: The IT issues associated with operational effectiveness and the Green discussion are getting more [boardroom] attention, as companies come under greater external scrutiny [around] what exactly they are doing in the context of the environment, and as the world begins to worry about the environmental impact of all technology.
And certainly, if you look at computers, they are massive consumers of electricity – in aggregate. Computing devices of all kinds are probably one of the largest consumers [of electricity] in our industrialised world. And they are not necessarily doing work all the time. So “How do I optimise the carbon footprint of my computer infrastructure?” is becoming an ever bigger issue.
You attack that problem from a couple of dimensions. Obviously, there is the basic design of the devices: are they sensitive to power consumption and can you [include] intelligence that can understand when the machine really is idling and [turn] off different elements?
The other dimension of this is simply having fewer things. The more things you’ve got, the more electricity you are going to use. It is pretty hard to beat that equation. No matter how efficient each one of them is, if you just have fewer then your carbon footprint will be less. And therefore, one of the hottest topics within an IT shop today is whether we can consolidate work together, bring things together and run more inside the machine. So we have machines that have tremendous inherent power, but we tend not to use them very efficiently. How do we load up the machine to use it more effectively? And this is where software plays a key role in maximising the use of the machine.
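The “fewer things” equation is easy to make concrete. A back-of-envelope calculation, with wattage and server counts assumed purely for illustration:

```python
# Back-of-envelope consolidation maths. Figures are assumed for
# illustration only: 20 under-used servers vs. the same work
# consolidated onto 2 well-loaded ones.
SERVER_WATTS = 400          # assumed draw per server
HOURS_PER_YEAR = 24 * 365

def annual_kwh(servers):
    return servers * SERVER_WATTS * HOURS_PER_YEAR / 1000

before = annual_kwh(20)     # 20 machines, each lightly utilised
after = annual_kwh(2)       # 2 machines carrying the same total load
print(f"before: {before:,.0f} kWh, after: {after:,.0f} kWh")
print(f"saving: {1 - after / before:.0%}")   # -> 90%
```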
When I started in the business 34 years ago, I was selling 2.7 MIPS processors that cost $12 million. And the idea was to load it up, run the thing to the max – 24/7. How did we do that? We built a software sub-system infrastructure that was designed to work with the hardware to maximise the number of things the hardware was doing at any given moment. You can’t beat the laws of physics, but you can cleverly use time to your advantage. The idea is to slice time up into small pieces; a machine cycle goes only so fast and [the challenge becomes] how many things can you do within a machine cycle?
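The time-slicing idea can be shown with a toy round-robin scheduler: carve processor time into small slices and hand them out so the machine is busy whenever any job has work left. A simplified sketch, not a model of IBM’s actual sub-system:

```python
# Toy round-robin time slicing: interleave many jobs in small slices so
# the machine is busy whenever any job has work remaining. A
# simplification of the idea, not IBM's actual sub-system design.
from collections import deque

def run(jobs, slice_ms=10):
    """jobs: dict of name -> remaining work in ms. Returns finish order."""
    queue, finished = deque(jobs.items()), []
    while queue:
        name, remaining = queue.popleft()
        remaining -= slice_ms          # give the job one slice of time
        if remaining > 0:
            queue.append((name, remaining))
        else:
            finished.append(name)
    return finished

print(run({"payroll": 30, "billing": 10, "reporting": 20}))
# -> ['billing', 'reporting', 'payroll']: short jobs finish first, and
#    the processor never sits idle while any work remains.
```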
So with clever mathematics and good design you can optimise the use of machines. We do that with the IBM mainframe – nothing is like a mainframe in terms of its ability to optimise around its resources. You can literally run thousands of concurrent applications. It is enormously efficient per unit of work.
And we have taken those techniques and pushed them down to our other systems commensurate with the reality of what those systems are able to do.
There are other ways you get at the problem. We think we build some unique things into our X-Series [Intel-based servers], which increase operational efficiency. We use our middleware to get more operational efficiency – more work done at the same time.
The other thing that happens in that space is to go in the other direction – chassis and blades. Instead of having rack mounts, you get a much more efficient package when you plug in blades. And you can put power management into the chassis, so you are ‘varying off’ blades that are not being used at a particular time.
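Chassis-level power management of the kind Mills describes amounts to powering only as many blades as current demand requires. A minimal sketch, with capacities and demand figures assumed for illustration (this is not a real chassis API):

```python
# Minimal sketch of chassis power management: vary blades on and off to
# track demand. Capacities and demand figures are assumed; this is not
# a real blade-chassis API.
BLADE_CAPACITY = 100        # assumed work units one blade can carry

def blades_needed(demand):
    """How many blades must be powered to carry current demand."""
    return max(1, -(-demand // BLADE_CAPACITY))   # ceiling division

def manage(chassis_size, demand_samples):
    powered = chassis_size
    for demand in demand_samples:
        target = min(chassis_size, blades_needed(demand))
        if target != powered:
            verb = "varying on" if target > powered else "varying off"
            print(f"demand={demand}: {verb} -> {target} blades powered")
            powered = target

manage(chassis_size=8, demand_samples=[750, 420, 90, 520])
# At low demand most of the chassis is powered off, cutting idle draw.
```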
So you really load up a machine, really use every last bit of its capability, and the direction is to go ultra cheap and almost disposable, and put some effective power management over the top of it. In doing so, you can thoroughly reduce the cost basis for doing that work, and therefore lower the carbon footprint.
It may not be as effective as the consolidation approach, but it is a significant improvement over other techniques in that space.
Steve Mills on virtualisation
Information Age: One of the key elements that IBM has historically been associated with is virtualisation – at least on the mainframe. IBM’s VM operating system was arguably the first commercial manifestation of that whole concept. But if I look to who is on the customer buying list today, it’s VMware, Citrix Xen or even Microsoft. Do you think you have a strong enough story in that area today?
Steve Mills: We have four levels of servers: the high-end System z, then the p, the i and the x, where the x is an Intel server. These products, VMware, Xen, are geared to the Intel server. We are delivering leading-edge and truly superior virtualisation in the other platforms. What VMware and these other things do does not compare. It is not even in the same universe as far as the level of virtualisation sophistication [available] in IBM systems at the high end.
And then you have the Intel world. Intel delivers a fast micro [processor] that tends to be very poorly utilised. One of the reasons is that the applications typically are not built for sharing. Programmers have long been attracted to the [notion of] building applications on the same platform that they will be deployed on. The assumption is that they are not going to have to share space with another application.
Sharing is hard because it requires more testing and more advance planning, and your infrastructure choices begin to change because not everything supports a shared environment equally well. So VMware came along, as did the open-source Xen project, with the idea of delivering a hypervisor that would allow you to run more than one thing at once. And in most cases you have gone from one to two.
Now that is 100% improvement – but on a machine that was 5% utilised. IBM is able to demonstrate higher levels of concurrency and shared work under our WebSphere product – running Linux or Windows on the Intel micro – than VMware can demonstrate in terms of leveraging its hypervisor.
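The arithmetic behind that remark is worth spelling out, using the figures from Mills’s own example:

```python
# The arithmetic behind "100% improvement on a machine that was 5%
# utilised": doubling the work on a nearly idle box still leaves it
# nearly idle.
base_utilisation = 0.05                  # one application, 5% busy
with_hypervisor = base_utilisation * 2   # two VMs instead of one
print(f"{with_hypervisor:.0%} utilised") # -> 10%: 90% still idle
```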
You have to design the application to use WebSphere and run a lot of concurrent work. We have brought a lot of that historical experience in building sophisticated transaction management services down to that Intel processor. There are different ways in which you can demonstrate efficiency and effectiveness.
Further reading:
IBM’s pitch for the last mile – The launch of IBM’s free desktop productivity suite portends a new war in the software market