Today’s always-on, dispersed enterprise has made it starkly clear that business continuity and data protection must evolve.
Data protection has traditionally been built with the 9-to-5 business in mind. Whilst avoiding unplanned downtime during opening hours was crucial, the IT department had windows outside those hours in which to fit any planned downtime.
For instance, data protection and management processes to ensure business continuity could be carried out in a wide window, whilst the IT team could implement and test upgrades without significantly impacting the business.
Essentially, simple data protection processes, such as backup and recovery, used to be seen as the final piece of a project: last on the list at the planning stage, if included at all, and otherwise simply bolted on once the project was complete.
Now that organisations are migrating to modern, always-on data centres for the next generation of business, it is essential that they have a robust understanding of what they run today and a rock-solid ‘copy’ of their existing environment before they commence migration.
If this vaulted as-is environment can then be used as the basis for migration planning, even better. IT needs to be able to test and plan against ‘as-is’ to ensure that ‘to-be’ will be effective for the organisation. Downtime, whether planned or unplanned, now has a direct effect on essential services.
This, in turn, has changed the importance of data protection. From replication to backup and recovery, data protection and business continuity are crucial to the success of a modern business and need to be among the first considerations of any IT project or strategy.
When implementing data protection as part of an IT project, there are many considerations an organisation must address beyond choosing which backup and recovery solutions to use. At the outset of any project, four questions must be asked to identify the risks involved.
First, what would be the effect of services failing? If implementing a new email service, for example, how would it impact the business to be without email for minutes, hours or even days?
Second, what costs will data protection save? While the value of preventing unplanned downtime can be difficult to quantify for decision-makers, IT departments can demonstrate how a modern data protection strategy also reduces planned downtime, proving its worth.
Third, what, if any, is the maximum downtime the business could survive? What is the absolute worst-case scenario, and what should a data protection plan be designed to prevent at all costs?
Finally, what is the maximum planned downtime the business will accept? For most modern organisations this should be near zero, but settling on a figure is vital for setting SLAs and benchmarks for the business as a whole, as the sketch below shows.
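To make that last question concrete, the following is a minimal sketch, using purely illustrative availability targets, of how an SLA percentage translates into an annual and monthly downtime budget:

```python
# Convert an availability target (e.g. an SLA of 99.95%) into a
# downtime budget. The targets below are illustrative, not prescriptive.
MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_budget(availability_pct: float) -> dict:
    """Return the allowed downtime, in minutes, for a given uptime SLA."""
    unavailable_fraction = 1 - availability_pct / 100
    per_year = MINUTES_PER_YEAR * unavailable_fraction
    return {
        "per_year_minutes": round(per_year, 1),
        "per_month_minutes": round(per_year / 12, 1),
    }

for target in (99.0, 99.9, 99.95, 99.999):
    print(f"{target}% uptime allows {downtime_budget(target)}")
```

Even a seemingly strict 99.9% target still permits nearly nine hours of downtime a year, which is why always-on businesses push their benchmarks towards zero.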
Reducing the risks
Once an organisation has answered these questions, it will have an understanding of the risks facing an IT project, the benefits data protection will provide, and the level of performance it needs.
With this information, it can build a data protection and business continuity policy into its IT project. At that point, more practical considerations take over, which fall broadly into three areas.
First, there is how the organisation will minimise downtime. The team should be clear on the acceptable recovery time objectives (RTOs, i.e. how long downtime may last) and recovery point objectives (RPOs, i.e. how much data could be lost) for each part of the infrastructure behind the service.
It must then decide how much of that infrastructure will be replicated, and so available almost instantly in the event of a disaster, and how much will simply be backed up, accepting a higher risk of longer downtime and permanently lost data.
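As a minimal sketch of how those objectives can drive the replication-versus-backup decision (the service names and figures are assumptions, not recommendations), each tier’s backup interval can be checked against its RPO:

```python
from dataclasses import dataclass

@dataclass
class ProtectionTarget:
    service: str
    rto_minutes: int              # maximum acceptable time to restore the service
    rpo_minutes: int              # maximum acceptable window of lost data
    backup_interval_minutes: int  # how often a restore point is taken

# Illustrative figures only; every organisation must set its own.
targets = [
    ProtectionTarget("email", rto_minutes=60, rpo_minutes=15, backup_interval_minutes=15),
    ProtectionTarget("orders-db", rto_minutes=5, rpo_minutes=1, backup_interval_minutes=60),
]

for t in targets:
    if t.backup_interval_minutes > t.rpo_minutes:
        print(f"{t.service}: a backup every {t.backup_interval_minutes} min cannot "
              f"meet an RPO of {t.rpo_minutes} min; consider replication instead.")
```

Any tier whose backup interval exceeds its RPO is flagged as a candidate for replication rather than backup alone.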
In all cases, the team should ensure they are using techniques designed for modern data centres and IT services that can deliver as close to 100% uptime as possible.
Second, the team should consider how reliable its data protection is. Organisations should be able to validate each backup to ensure it is recoverable: after all, data protection is worthless if, when push comes to shove, the data cannot in fact be recovered.
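Validation need not be elaborate to be worthwhile. As a deliberately simplified stand-in for a full test restore (the paths and digests here are hypothetical), a checksum can be recorded when the backup is written and re-verified before the copy is relied upon:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in 1 MB chunks so large backup files never fill memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup_is_intact(backup: Path, recorded_digest: str) -> bool:
    """Compare the backup's current hash with the digest recorded at backup time."""
    return sha256_of(backup) == recorded_digest

# Hypothetical usage; the digest would have been stored when the backup was made.
# backup_is_intact(Path("/backups/orders-db.bak"), "ab12...")
```

A matching checksum proves only that the copy is intact; the strongest validation remains actually restoring the backup and starting the service from it.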
The strategy should also account for the downtime caused by planned changes. For instance, how can an organisation avoid disruption when it needs to test and implement software upgrades?
As virtualisation lowers the cost of IT infrastructure and storage prices fall, creating a separate testing environment is one way of making sure that upgrades don’t end up causing more downtime than actual outages do.
Lastly, the organisation must decide where it places its backups, and in what format. Do they stay on-premises or are they safer off-site? Are they in a private cloud; on public cloud storage; or is the backup itself operated as a service? Is legacy storage, such as tape, being used in any capacity? Has the organisation considered a worst-case option and allowed for, essentially, a backup of its backups as a final failsafe?
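One way to keep these placement questions honest is to write the policy down in a form that can be checked automatically. The sketch below, with entirely hypothetical repository names, asserts that every backup set has at least one off-site copy and spans more than one medium:

```python
# All repository names and media below are hypothetical illustrations.
placement = {
    "orders-db": [
        {"target": "onprem-repo", "medium": "disk", "offsite": False},
        {"target": "cloud-bucket", "medium": "object storage", "offsite": True},
        {"target": "tape-vault", "medium": "tape", "offsite": True},  # the backup of the backups
    ],
}

for name, copies in placement.items():
    media = {copy["medium"] for copy in copies}
    assert any(copy["offsite"] for copy in copies), f"{name}: no off-site copy"
    assert len(media) >= 2, f"{name}: every copy shares a single medium"
```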
Again, the falling cost of virtual infrastructure and storage can make this a valid option. The final decision will depend on the individual enterprise and its overall needs and strategy, yet it is no less important for that.
Whatever an enterprise’s answers to these questions are, the most important thing is that data protection becomes an intrinsic part of its IT strategy for each project. Without this, it will never be able to use modern IT services from a modern data centre with 100% confidence.
Sourced from Ian Wells, VP North-West Europe, Veeam