After all, they only wanted to build a road. In the science fiction classic 'The Hitchhiker’s Guide to the Galaxy', the incident that causes the biggest IT project in history to fail is a banal one.
Five minutes before the Earth, itself a giant computing experiment, is due to come up with the answer to the ultimate question of life, the universe and everything, the planet is demolished to make way for a hyperspace bypass. Ten million years of computing are lost.
Today’s IT architects would probably have built two of the supercomputers and sent them off to different galaxies. In much the same spirit, they now sit back and relax, reassured that their Cloud computing centres are ultra-safe, highly redundant and highly available.
But what use is a highly available Cloud instance if the digger outside the office severs the fibre-optic cable? Yes, we know: the aim was only to build a road.
A five-fold increase in traffic
More and more business-critical applications are moving to the Cloud. The average user of Outlook 2007 used 2.6MB per day; the same user with Outlook Web Access now needs 12.5MB per day.
This five-fold increase in traffic shows that users and systems are in near-constant contact with Cloud services. Any moderate interruption immediately interferes with business activity, causing staff to become less productive and processes to falter.
The challenge is to create a structure that ensures crucial business processes continue to run and business-critical data in the Cloud remains accessible in difficult circumstances. For users, the focus is on the availability of applications; this is why the term “Application Delivery Network” (ADN) is often used in this context.
ADNs are not just about giving branch offices improved and accelerated access to the company’s computing centre at its head office. Now, they also need to ensure that a wide range of end devices and work locations retain access to Cloud instances.
The most important components of an ADN are firewalls. Traditionally, the network firewall was primarily used to block unwanted traffic. However, advances in technology have turned it into a communication gateway.
As a gateway, this ‘next-generation’ firewall has developed a deep understanding of network traffic and is able to provide multi-link support, traffic shaping and application awareness. This means that it can now play a key role in business continuity.
Multi-link support: enabling near-perfect availability
In almost every aspect of IT, redundancy is the key to high availability. Firewalls with multi-link support can coordinate two or more links to the Internet. For business continuity, this could mean, for example, that as soon as the primary line (MPLS or DSL) fails or suffers performance problems, the firewall switches seamlessly to 4G or a satellite link.
By combining several alternative inexpensive links, 99.999% availability can be achieved even without MPLS. In normal operations, too, multi-link support helps cut connection costs and prioritise business-critical applications.
For example, a company could have two links running parallel to each other. Important IP telephony packets and SAP transactions might then be transmitted via MPLS, while emails, for which a one-minute delay is typically unimportant, are sent and received by the firewall via DSL.
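The arithmetic behind that availability claim, and the per-application link choice just described, can be sketched roughly as follows. This is a minimal illustration in Python; the availability figures, link names and policy table are assumptions for the sake of the example, not configuration from any particular product.

```python
# Minimal sketch (not vendor code): why parallel links multiply up to very
# high availability, and how a per-application link policy with failover
# might look. All figures, link names and rules are illustrative assumptions.

LINKS = {
    "mpls": {"up": True, "availability": 0.999},
    "dsl":  {"up": True, "availability": 0.997},
    "4g":   {"up": True, "availability": 0.997},
}

def combined_availability(names):
    """The bundle is down only if every member link is down at the same time."""
    downtime = 1.0
    for name in names:
        downtime *= 1.0 - LINKS[name]["availability"]
    return 1.0 - downtime

print(f"{combined_availability(['dsl', '4g']):.4%}")  # ~99.999% without MPLS

# Preferred link order per application class; the firewall falls back to the
# next entry when the preferred link fails or degrades.
POLICY = {
    "voip": ["mpls", "4g"],   # latency-sensitive IP telephony
    "sap":  ["mpls", "dsl"],  # business-critical transactions
    "mail": ["dsl", "4g"],    # a one-minute delay is acceptable
}

def pick_link(app):
    for name in POLICY.get(app, ["dsl"]):
        if LINKS[name]["up"]:
            return name
    raise RuntimeError("no usable link")

LINKS["mpls"]["up"] = False   # simulate an MPLS outage
print(pick_link("voip"))      # falls back to '4g'
```

The point of the sketch is simply that two modest links, treated as one logical connection, can approach the availability of a single premium line at a fraction of the cost.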
Traffic shaping: prioritising important data
Traffic shaping utilises rule-based allocation of bandwidth, allowing different applications to be configured on a highly granular basis. This process ensures that when a line fails, the majority of the bandwidth of the intact Internet connection is reserved for business-critical applications only. Bandwidth-gobblers such as Facebook, Spotify and YouTube can be immediately blocked. The performance of business-critical applications is then virtually unimpaired, despite the reduction in bandwidth.
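Reduced to its essentials, such a rule set might look like the sketch below. The link capacity, bandwidth shares and application classes are assumptions made for illustration; a real next-generation firewall expresses the same idea through its own policy interface.

```python
# Rough sketch of rule-based bandwidth allocation on a degraded connection,
# assuming a single surviving 20 Mbit/s DSL link. Shares and classes are
# illustrative assumptions, not product defaults.

DEGRADED_LINK_MBPS = 20

SHAPING_RULES = [
    # (application class, share of remaining bandwidth, allowed?)
    ("voip",      0.40, True),
    ("sap",       0.35, True),
    ("mail",      0.20, True),
    ("web_other", 0.05, True),
    ("streaming", 0.00, False),  # Facebook, Spotify, YouTube blocked outright
]

def allocate(link_mbps=DEGRADED_LINK_MBPS):
    """Return a per-class bandwidth plan in Mbit/s for the surviving link."""
    return {
        app: round(link_mbps * share, 1) if allowed else 0.0
        for app, share, allowed in SHAPING_RULES
    }

print(allocate())
# {'voip': 8.0, 'sap': 7.0, 'mail': 4.0, 'web_other': 1.0, 'streaming': 0.0}
```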
Traffic shaping also takes the pressure off the WAN in another way. In traditional IT architectures, the traffic from each branch office is backhauled to the company head office and then to the Cloud.
However, deploying a firewall at each of these dispersed locations (including small desktop firewalls in home offices) provides direct, decentralised access to Cloud resources.
When using a public Cloud service, ideally there should be another virtual firewall in front of the tenant environment to ensure the traffic is as secure as communications inside your own network.
Application awareness: separating the wheat from the chaff
Rules for prioritising, blocking and managing data streams should be based on application awareness. Current next-generation firewalls have a deep understanding of the data that reaches them.
The firewall analyses the traffic to identify which application is transmitting data, via which protocol and on behalf of which user. Based on these factors, it not only decides whether the action should be blocked; it also determines how much bandwidth should be made available and on which of the various data lines.
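In pseudocode terms, that decision takes a classified flow (application, protocol, user) and returns three things: allow or block, a bandwidth cap and a line. The sketch below uses invented application names, limits and link labels purely for illustration.

```python
# Minimal sketch of an application-aware verdict. Application names, limits
# and link labels are illustrative assumptions, not any vendor's rule syntax.

from dataclasses import dataclass

@dataclass
class Flow:
    application: str  # e.g. "sap", "youtube"
    protocol: str     # e.g. "https", "sip"
    user: str

@dataclass
class Verdict:
    allow: bool
    max_mbps: float
    link: str

def decide(flow: Flow) -> Verdict:
    if flow.application in {"youtube", "spotify", "facebook"}:
        return Verdict(allow=False, max_mbps=0.0, link="none")
    if flow.application == "sap" or flow.protocol == "sip":
        return Verdict(allow=True, max_mbps=10.0, link="mpls")  # business-critical
    return Verdict(allow=True, max_mbps=2.0, link="dsl")        # everything else

print(decide(Flow("sap", "https", "alice")))
# Verdict(allow=True, max_mbps=10.0, link='mpls')
```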
The firewall as a communication gateway
In an Application Delivery Network, the perimeter becomes less important and firewalls no longer act as a wall. It is more appropriate to picture them as portals: entry points to the tunnels used for secure communication. The network firewall builds paths to wherever the users, the data and the applications are, from branch offices to home offices and, of course, to Cloud-based infrastructure.
In the same way, the mechanisms for protecting against attacks, such as IDS/IPS (Intrusion Detection and Intrusion Prevention Systems), filtering and antivirus, no longer operate just at the perimeter. They become just as active in defending the connection between a home office and a Cloud instance. An architecture of this sort takes the pressure off the company network and optimises performance.
This is because traffic from various office locations is no longer controlled completely from the head office, but is instead managed seamlessly by a distributed firewall environment that is partly in the Cloud.
Any attempt to configure the individual firewalls separately in order to construct an ADN would be doomed to failure from the start. Such a structure remains feasible only if all the firewalls are fully managed centrally and if security and productivity functions can be controlled from a simple user interface.
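What central management amounts to, in the simplest possible terms, is one policy defined once and pushed to every firewall in the environment, from the head-office cluster down to the desktop unit in a home office. The site names and the push function below are stand-ins for illustration only.

```python
# Sketch of central policy distribution: one policy object, applied to every
# firewall. Site names and apply_policy() are illustrative stand-ins for a
# vendor's management console.

CENTRAL_POLICY = {
    "blocked_apps":   ["facebook", "spotify", "youtube"],
    "critical_apps":  ["voip", "sap"],
    "failover_order": ["mpls", "dsl", "4g"],
}

SITES = ["head-office", "branch-north", "branch-south", "home-office-07"]

def apply_policy(site: str, policy: dict) -> None:
    # Stand-in for pushing configuration to one managed device.
    print(f"{site}: {len(policy['blocked_apps'])} apps blocked, "
          f"failover order {policy['failover_order']}")

for site in SITES:
    apply_policy(site, CENTRAL_POLICY)
```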
Centralised networks no longer work
Cloud computing, virtualisation and mobility open up enormous opportunities for cost-efficient business continuity management. Those who replicate their virtualised systems in the Cloud, or rely entirely on Cloud solutions, will always have productive systems available, even in the event of a serious incident such as a major fire at the company’s head office.
Many companies see Office 365 as another opportunity in this respect. This solution creates continuous connections between users’ endpoints and the Microsoft Cloud. Such architectures shift the challenge of business continuity management to the issue of connectivity.
Businesses should aim to be able to maintain all business-critical processes, provided that staff and failover systems have access to electricity. Next-generation firewalls can then ensure that business-critical data is kept flowing between devices, on-premises systems and Cloud instances.
IT departments face the challenge of providing secure and reliable access to business applications, wherever they are located, from anywhere, at any time and in any circumstance. The traditional centralised network model no longer works. IT teams must instead provide a reliable route to the Internet at each office location and put policies in place to prioritise business-critical traffic.
Sourced from Wieland Alge, VP and GM EMEA, Barracuda Networks