The promise of the cloud is one of infinite scale, microscopic cost and everlasting reliability, but these messages come round with every new technology. So what is the reality behind the marketing?
Looking at these promises in more detail gives a clearer view of the kind of environment companies are being driven to adopt in order to deliver a modern infrastructure that can support the development and delivery of 21st-century applications.
Harps and clouds?
Scalability has always been a pressing concern for any business that plans for growth. The advent of the internet piled on even more pressure with its demand for an always-on approach to service delivery.
However, a growing awareness of spiralling costs has forced businesses to look again at the price of scalability – doing ‘more for less’ has never been a more repeated phrase.
The promise of cloud deployment is that capacity can be added as and when it is needed, especially since public cloud resources are effectively unlimited and individually priced very competitively.
The low cost of services from the public cloud providers is often quoted as a reason for companies to start building applications using those vendors’ specific toolsets. Since the charging model is tied directly to usage, this feels like a very cost-effective way to handle the problems of scale, while allowing costs to fall during periods of lower demand.
Flexibility is a third key benefit of a cloud approach to development and deployment. The ability to deliver capacity exactly where and when it is needed gives most companies better control over how they meet business demands, balancing the day-to-day running of the organisation with the needs of internal development teams.
This flexibility is even more important when public cloud infrastructure is not available, as the constrained resources of internal data centres need careful allocation to meet the needs of the whole business rather than one small segment.
Storms on the horizon
All of these seem like good reasons to believe that moving into the cloud will bring immediate benefits, but each one throws up its own challenges for companies.
Scalability without control tends to lead to wasteful overuse of resources. Projects grow in scope very quickly and are left consuming resources long after testing is complete. This cloud "creep" has to be measured before it can be brought back under control.
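As a concrete illustration, a minimal sketch of that kind of measurement might look like the following, assuming an AWS estate and the boto3 SDK. The 14-day threshold and the "owner" tag are purely illustrative conventions, not part of any standard.

```python
# Minimal sketch of measuring "cloud creep": flag long-running or untagged
# instances in a single AWS account (pagination omitted for brevity).
# Assumes boto3 credentials are already configured in the environment.
from datetime import datetime, timedelta, timezone

import boto3

MAX_AGE = timedelta(days=14)  # anything older than this gets reviewed

ec2 = boto3.client("ec2")
now = datetime.now(timezone.utc)

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        if instance["State"]["Name"] != "running":
            continue
        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
        age = now - instance["LaunchTime"]
        if age > MAX_AGE or "owner" not in tags:
            print(f'{instance["InstanceId"]}: up {age.days} days, '
                  f'owner={tags.get("owner", "none")}')
```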
That same measurement very quickly shows that, without planning, public cloud usage can be more expensive than originally believed. Not only that, but comparing potential costs between public cloud vendors is a difficult task, and the idea of some kind of "cloud resource brokerage" becomes impossible to achieve when the best prices come from longer contracts.
Finally, flexibility is a non-trivial thing to deliver. Applications vary wildly in their needs, and what runs fine in the controlled environment of a more monolithic-style data centre could raise different configuration issues for every public cloud vendor – effectively preventing easy, flexible migration.
Not only that, but greenfield applications written specifically for the public cloud can quickly become as locked into the APIs of an individual cloud vendor as they would have been in the days of proprietary software stacks.
Is there hope for a silver lining?
Fortunately, enterprises are at a point where a combination of technologies can start to solve some of the bigger questions posed by cloud infrastructure.
First, cloud management. To be successful, it needs to encompass both the public cloud vendors and any in-house private clouds, whether built from raw VM infrastructure or with something more sophisticated such as OpenStack.
To deliver wide value, it must be possible to implement it without invasive changes to existing infrastructure, and it should be able to interrogate and control any existing management layers where desired (and permitted).
While the term ‘single pane of glass’ is much overused, the management layer should provide visibility of every element of a company’s cloud and expose the full deployment of any application across those elements.
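To make that idea concrete, here is a minimal sketch of cross-cloud visibility using the open-source Apache Libcloud library, pulling a node inventory from a public cloud and an in-house OpenStack cloud through one interface. The credentials, region and endpoint shown are placeholders.

```python
# Minimal sketch of a "single pane of glass" inventory: list compute nodes
# from AWS and an OpenStack private cloud through one common interface.
from libcloud.compute.providers import get_driver
from libcloud.compute.types import Provider

clouds = {
    "aws": get_driver(Provider.EC2)("ACCESS_KEY", "SECRET_KEY",
                                    region="eu-west-1"),
    "openstack": get_driver(Provider.OPENSTACK)(
        "admin", "PASSWORD",
        ex_force_auth_url="https://openstack.example.com:5000",
        ex_force_auth_version="3.x_password",
        ex_tenant_name="demo"),
}

# One loop covers every registered cloud, public or private.
for name, driver in clouds.items():
    for node in driver.list_nodes():
        print(f"{name}: {node.name} ({node.state})")
```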
Second, some form of automation tool can help to ensure that applications are deployed in a repeatable manner, no matter where in the cloud they physically run.
Like the management tool, it should understand the needs of all the different public clouds and should be extensible to handle new situations as they arise. It should work alongside the management tool as the ‘delivery arm’ of the combined toolset.
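A rough sketch of that ‘delivery arm’ idea in Python: one declarative application description, applied through small per-cloud adapters so that new targets can be registered without changing the deployment logic. The provider names and the adapter bodies are placeholders for real provisioning calls.

```python
# Sketch of an extensible deployment "delivery arm": a registry of
# per-cloud adapters that all consume the same application spec.
from typing import Callable, Dict

APP_SPEC = {"name": "orders-api",
            "image": "registry.example.com/orders:1.4",
            "replicas": 3}

ADAPTERS: Dict[str, Callable[[dict], None]] = {}

def adapter(provider: str):
    """Register a deployment adapter for a given cloud target."""
    def register(fn: Callable[[dict], None]):
        ADAPTERS[provider] = fn
        return fn
    return register

@adapter("aws")
def deploy_to_aws(spec: dict) -> None:
    # Real provisioning calls would go here.
    print(f"AWS: deploying {spec['name']} x{spec['replicas']}")

@adapter("openstack")
def deploy_to_openstack(spec: dict) -> None:
    print(f"OpenStack: deploying {spec['name']} x{spec['replicas']}")

def deploy_everywhere(spec: dict) -> None:
    # The same spec is delivered by every registered adapter.
    for provider, deploy in ADAPTERS.items():
        deploy(spec)

deploy_everywhere(APP_SPEC)
```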
Finally, businesses should look to build software architectures that transcend the APIs of any one vendor. The rise of containers as an app development and delivery mechanism is a promising direction.
In particular, the use of Kubernetes as the orchestration engine for those containers is a proven, vendor-neutral way to deliver the flexibility that businesses need.
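As an illustration of that vendor neutrality, the sketch below uses the official Kubernetes Python client to create a simple Deployment; the same object can be applied to any conformant cluster, public or private, simply by switching kubeconfig contexts. The application name and image are illustrative.

```python
# Minimal sketch: the same Deployment definition works against any
# conformant Kubernetes cluster the current kubeconfig context points at.
from kubernetes import client, config

config.load_kube_config()  # select the target cluster via kubeconfig context

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="orders-api"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "orders-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "orders-api"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="orders-api",
                                   image="registry.example.com/orders:1.4"),
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default",
                                                body=deployment)
```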
With these pieces in place, organisations can finally start to realise the broader promises of a truly hybrid cloud environment, which they can both measure and control to meet the needs of any kind of business application.
Sourced from Martin Percival, senior cloud architect, Red Hat