In terms of IT infrastructure management, no technology from the past decade has had as much impact as the mainstream adoption of x86 server virtualisation.
Little wonder that end-users and suppliers have long sought to recreate server virtualisation’s dramatic improvements in utilisation and manageability in the storage and networking fields.
For one thing, IT organisations are still under pressure to cut their running costs; for another, the inflexibility of storage and networking infrastructure limits what can be done with virtual servers.
Of course, virtualisation has been applied to both storage and networking, with some success. Just like server virtualisation, both storage and network virtualisation allow organisations to manage the underlying hardware as though it were an abstracted pool of computing resources.
In 2012, the industry jargon for doing this changed from ‘virtualisation’ to ‘software-defined [storage and networking]’. The idea is that computing resources will be defined, controlled and managed using software, improving utilisation and the ability to automate management.
It is debatable whether the crop of tools now being described as software-defined storage and networking really represents a new category of technology, or whether the term ‘virtualisation’ has simply run its course.
Nevertheless, the term ‘software-defined’ has the endorsement of no less an industry player than VMware, the company that did more than any other to bring virtualisation to the mainstream.
Soft networks
In 2007, computer scientist Martin Casado completed his PhD at Stanford University. In his doctoral thesis, he devised a new way to manage network equipment.
Conventional network routers and switches have two functions: the control plane and the data plane. The control plane maintains the network map, determining how packets of data should be routed to their destinations. The data plane handles incoming traffic, forwarding each packet according to those decisions as it enters the device.
Both these functions are traditionally served by the networking hardware, but Casado’s innovation was to split out the control plane as a software-based service, running on a separate server. This creates a software layer that manages and controls how a network behaves programmatically, allowing multiple switches and routers to be managed from a single location.
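To make the split concrete, here is a minimal sketch in Python of the pattern Casado’s work popularised: a centralised, software-based control plane programs simple forwarding devices. Every class and method name here is an illustrative assumption, not a real SDN or OpenFlow API:

```python
# Minimal sketch of the SDN split (all names hypothetical, not a real API).
# The control plane runs as software on a separate server; the data plane
# in each switch only matches packets against rules it has been given.

class Switch:
    """Data plane: forwards packets using rules installed by a controller."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination address -> output port

    def install_rule(self, dst, port):
        self.flow_table[dst] = port

    def forward(self, packet):
        port = self.flow_table.get(packet["dst"])
        if port is None:
            return f"{self.name}: no rule for {packet['dst']}, dropping"
        return f"{self.name}: packet for {packet['dst']} out of port {port}"


class Controller:
    """Control plane: holds the network map and programs every switch."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def set_route(self, dst, port):
        # One decision, made centrally, reconfigures every device at once.
        for switch in self.switches:
            switch.install_rule(dst, port)


controller = Controller()
s1, s2 = Switch("switch-1"), Switch("switch-2")
controller.register(s1)
controller.register(s2)
controller.set_route("10.0.0.5", port=3)

print(s1.forward({"dst": "10.0.0.5"}))  # forwarded via port 3
print(s2.forward({"dst": "10.0.0.9"}))  # no rule installed, dropped
```

The point of the split is visible in `set_route`: the forwarding behaviour of the whole network is changed from one place, in software, rather than by configuring each box individually.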
Casado’s first move was to establish an open standard for ‘software-defined networking’, named OpenFlow. Then he and his colleagues set up a company, Nicira.
In February 2012, Nicira unveiled its Network Virtualisation Solution. The company claimed that its technology could save large enterprises between $15 million and $30 million by optimising the use of their network equipment.
Just six months later, VMware announced its intention to acquire Nicira for a staggering $1.3 billion. The significance of the acquisition to VMware was clear at its VMworld user conference in August, where the new buzzword was the ‘software-defined data centre’.
The acquisition was seen as a challenge to Cisco. By splitting the network management software from the hardware, SDN could in theory allow companies to use cheap, commodity IP switches and spend their money instead on the software layer. As the world’s leading switch vendor, Cisco has the most to lose from this.
Cisco CEO John Chambers was defiant, saying that buyers still want their network hardware and software to be integrated: “Customers understand that optimising for the hardware-and-software combination to drive consistent experience, policy, quality of service, security and mobility is the only way, in our opinion, to meet their total cost of ownership, reliability and scalability requirements.”
He also noted that the company’s Open Network Environment (ONE) – a software platform that allows users to manage network equipment programmatically – covers most of the features of SDN that customers want.
Still, the fact that Chambers felt moved to respond to the chatter implied that Cisco had not entirely dismissed the threat. In December, market watcher IDC predicted that SDN’s market value would leap from $360 million in 2013 to $3.7 billion by 2016, driven by the growth of cloud services and applications, a focus on converged infrastructures and, of course, the software-defined data centre.
Commoditising storage
The buzz surrounding ‘software-defined networking’ soon spread to the storage sector, and a number of storage management software start-ups began describing their wares as ‘software-defined storage’.
The idea of SDS is to abstract storage resources from the underlying hardware (again, not a million miles from virtualisation). Those resources can then be provisioned, managed and retired in software.
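As a rough illustration of that idea, the sketch below pools capacity from commodity disks behind a software interface, so volumes are created and retired without touching the hardware directly. The StoragePool class and its methods are hypothetical, invented for this example:

```python
# Hypothetical sketch of software-defined storage: a software layer
# aggregates commodity devices into one pool, and volumes are provisioned,
# managed and retired purely through that layer.

class StoragePool:
    def __init__(self, devices):
        # devices: mapping of device name -> raw capacity in GB
        self.capacity_gb = sum(devices.values())
        self.volumes = {}  # volume name -> size in GB

    def free_gb(self):
        return self.capacity_gb - sum(self.volumes.values())

    def provision(self, name, size_gb):
        if size_gb > self.free_gb():
            raise ValueError("insufficient free capacity in pool")
        self.volumes[name] = size_gb

    def retire(self, name):
        # Capacity returns to the pool as a software operation; no
        # physical reconfiguration of the disks is involved.
        self.volumes.pop(name, None)


pool = StoragePool({"disk-a": 500, "disk-b": 500, "disk-c": 1000})
pool.provision("db-volume", 800)
print(pool.free_gb())  # 1200
pool.retire("db-volume")
print(pool.free_gb())  # 2000
```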
As with networking, SDS may allow enterprises to use commodity hardware in their storage architectures. According to Evan Powell, CEO of open source storage system vendor Nexenta, this redresses a long-standing injustice in the sector.
“I think that it’s wrong, antediluvian and archaic that the storage world is dominated by vendor lock-in,” Powell told Information Age in October. “Not theoretical vendor lock-in, either. I mean the data that’s stored on the disk in the proprietary format where you only have one way to get your data back, which is the vendor’s product.”
Although not quite as high profile as SDN, there was a flurry of activity around SDS during 2012. Red Hat released the latest version of its Red Hat Storage Server solution in September, while US start-up Coraid unveiled its EtherCloud platform, which it says allows customers to manage storage volumes with the simplicity of consumer cloud services.
In the same month, Nutanix – a US-based vendor that aims to create the ‘SAN-free’ data centre by converging compute and storage into commodity x86 servers running virtual machines – raised $33 million in Series C funding, giving it a total of $71 million as rumours of an IPO began to circulate.
While not receiving anywhere near the attention of networking or storage, ‘software-defined security’ (SDSec) was another term bandied about during the later months of 2012.
Writing on Gartner’s blog in November, Neil MacDonald of the analyst firm’s information security and privacy research team noted that SDSec was already seeing firewalls become virtualised.
“The bigger trend is the shift from proprietary hardware to software running on commodity hardware (in almost all cases, x86),” he wrote. “That’s the big shift. Whether or not a given security control is packaged as a virtual machine is a matter of requirements (and to some extent preference).”
According to MacDonald, SDSec could play a part in reducing the cost of enforcing security compared with physical appliances, and will “speed up the provisioning of security controls by making their provisioning as easy as provisioning a new virtual machine”.
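A small sketch may help show what “as easy as provisioning a new virtual machine” could look like in practice. Everything here is hypothetical, and assumes a security control defined as data alongside the workload it protects:

```python
# Hypothetical sketch of software-defined security: the firewall is a
# software artefact created in the same step as the virtual machine,
# rather than a physical appliance to be racked and cabled.

from dataclasses import dataclass, field


@dataclass
class FirewallPolicy:
    allow_ports: list = field(default_factory=list)


@dataclass
class VirtualMachine:
    name: str
    firewall: FirewallPolicy


def provision_vm(name, allow_ports):
    # The security control comes from the same definition, at the same
    # moment, as the workload itself.
    return VirtualMachine(name=name, firewall=FirewallPolicy(list(allow_ports)))


vm = provision_vm("web-01", allow_ports=[80, 443])
print(vm.name, vm.firewall.allow_ports)  # web-01 [80, 443]
```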
However, despite the claims and developments in the software-defined space during 2012, some industry players remained sceptical.
“The IT industry is very conservative and is used to a new technology being presented as the second coming, only for it not to do what it says on the tin after it has been stressed and pushed,” Sean Horne, EMC’s unified product director, told Information Age in November.
“People are not going to replace an infrastructure that they’ve spent 20 or maybe 40 years building, developing and investing in, to really understand how to deliver business continuity and data integrity to a very high degree.”