Today’s complex online applications need to be accessible with low latency and high reliability across key customer markets around the world. Adding service locations, perhaps with a new hosting partner, can be a powerful way to improve the online experience, reducing the likelihood that customers will be wooed away by a faster, more reliable competitor.
But selecting an expansion market is never a simple undertaking. Whether hosting in the cloud, building a new presence in an existing facility or designing your own data centre, picking the wrong spot for expansion can be a costly mistake, and smart teams start the planning process by asking a few basic diligence questions.
Identify the audience
The first step is to look externally at where the service’s most important customers or users connect to the Internet. If online services are browser-based, basic audience geolocation information may already be available through a service such as Google Analytics.
If not, teams will need to drill into service logs to accumulate a representative set of client Internet Protocol (IP) addresses and use an IP geolocation service to map them to cities and countries. Reducing latency is the first key to great end-user experience, and when customers are served from geographically remote hosting locations, the speed of light creates long delays that cannot be engineered away.
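As a rough illustration of that first step, a short script along these lines can turn raw access logs into a ranked list of audience markets. It is only a sketch: it assumes a Common Log Format access log and a local MaxMind GeoLite2 City database, and the file names are placeholders rather than anything prescribed here.

```python
# Sketch: count client IPs from an access log and map them to markets.
# The log path, log format and GeoIP database are assumptions for illustration.
import re
from collections import Counter

import geoip2.database  # pip install geoip2
import geoip2.errors

LOG_PATH = "access.log"           # hypothetical web server log (Common Log Format)
GEO_DB = "GeoLite2-City.mmdb"     # hypothetical local MaxMind GeoLite2 database

ip_pattern = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3})\s")  # client IP at line start

client_ips = Counter()
with open(LOG_PATH) as log:
    for line in log:
        match = ip_pattern.match(line)
        if match:
            client_ips[match.group(1)] += 1

audience = Counter()
reader = geoip2.database.Reader(GEO_DB)
for ip, hits in client_ips.items():
    try:
        record = reader.city(ip)
    except geoip2.errors.AddressNotFoundError:
        continue
    audience[(record.country.iso_code, record.city.name)] += hits
reader.close()

# The heaviest markets are the ones a new hosting location should sit close to.
for (country, city), hits in audience.most_common(10):
    print(f"{country} / {city}: {hits} requests")
```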
> See also: Secondhand DDoS: why hosting providers need to take action
But audience geolocation only provides half the picture. The provider relationships that support customer access to the Internet can be just as important, because poor provider handoffs can create additional delay. Two Asian providers who only interconnect in California, for example, may add hundreds of milliseconds to customer-visible latency because every request-and-response involves four unnecessary Pacific crossings.
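A back-of-the-envelope calculation shows why those crossings hurt. Light in fibre covers roughly 200 km per millisecond and a trans-Pacific path is on the order of 10,000 km, so the figures below, which are approximations rather than measurements of any particular provider pair, put the added round trip at around 200 ms before any routing or queueing overhead.

```python
# Rough arithmetic for the example above; the distance and fibre speed are
# approximations, not measurements of any specific provider pair.
FIBRE_KM_PER_MS = 200        # light in fibre travels at roughly 2/3 of c
TRANS_PACIFIC_KM = 10_000    # approximate trans-Pacific path length

per_crossing_ms = TRANS_PACIFIC_KM / FIBRE_KM_PER_MS   # ~50 ms each way
added_round_trip_ms = per_crossing_ms * 4              # request out and back, response out and back
print(f"Extra round-trip latency: ~{added_round_trip_ms:.0f} ms")   # ~200 ms
```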
Internet intelligence services use Border Gateway Protocol (BGP) routing information to identify the set of key Autonomous System Numbers (ASNs) responsible for routing customer IP addresses. Instead of thinking solely in terms of the originating ASN at the customer’s own organisation or primary ISP, it’s important to identify the most critical transit and peering ISPs who mediate the end-user’s Internet experience. These intermediate providers are key partners in having traffic delivered quickly and reliably, and a new data centre location should be one where they already serve many customers.
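One way to approximate the first part of that analysis offline is to map the collected client IPs to their originating ASNs with the open-source pyasn library and a prefix-to-ASN database built from a public BGP table snapshot. Identifying the key transit and peering networks along the path needs full AS-path data from route collectors or a commercial service, which is beyond this sketch; the database file name below is a placeholder.

```python
# Sketch: tally originating ASNs for the client IPs gathered earlier.
# "ipasn_latest.dat" is a placeholder for a prefix-to-ASN database built with
# pyasn's utilities from a RouteViews/RIPE RIS table dump.
from collections import Counter

import pyasn  # pip install pyasn

asndb = pyasn.pyasn("ipasn_latest.dat")

asn_counts = Counter()
for ip, hits in client_ips.items():     # client_ips from the log-parsing sketch above
    asn, prefix = asndb.lookup(ip)
    if asn is not None:
        asn_counts[asn] += hits

# Networks originating the most customer traffic are the ones worth reaching
# directly (or through well-connected transit) from a new data centre.
for asn, hits in asn_counts.most_common(10):
    print(f"AS{asn}: {hits} requests")
```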
Understand where content is hosted
When contemplating expansion, it’s important that new hosting locations effectively complement the current strategy. An organisation that operates its own servers may be familiar with their physical location and provider relationships, but it can be harder to understand the geolocation and connectivity of services hosted in the cloud. Applying the same approach used to pin down audience location and ISPs, and measuring the paths to key customers from existing servers, creates a baseline map of the audience markets whose latency and reliability are most in need of improvement.
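A crude way to start that baseline, before bringing in a measurement platform, is to time TCP handshakes from each existing server towards representative client networks. The targets below are placeholder addresses from the TEST-NET documentation ranges; real probes would go to addresses drawn from the audience analysis above.

```python
# Sketch: measure rough latency from an existing server by timing TCP handshakes.
# Targets and port are placeholders; production measurement would use dedicated
# probes or an Internet intelligence platform.
import socket
import time
from statistics import median

TARGETS = ["203.0.113.10", "198.51.100.20"]   # placeholder addresses (TEST-NET ranges)
PORT = 443
SAMPLES = 5

def tcp_handshake_ms(host: str, port: int, timeout: float = 2.0) -> float | None:
    """Return the TCP connect time in milliseconds, or None on failure."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass
    except OSError:
        return None
    return (time.perf_counter() - start) * 1000

for host in TARGETS:
    samples = [s for s in (tcp_handshake_ms(host, PORT) for _ in range(SAMPLES)) if s is not None]
    if samples:
        print(f"{host}: median {median(samples):.1f} ms over {len(samples)} probes")
    else:
        print(f"{host}: unreachable")
```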
The team should not assume that the service providers in use today will be the best choices in the new expansion location. Different providers across the world have unique strengths and weaknesses, depending on their networks’ physical footprint and the state of their relationships with other regional providers. In general, the best choice for Internet service in an expansion site will be providers that can claim a large percentage of the target audience as direct or indirect customers, and that also have reasonable latencies to existing service locations to support content replication.
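How those two criteria are weighed is a judgment call, but even a simple scoring sketch like the one below, with made-up weights, thresholds and figures, helps make the trade-off explicit when comparing candidate providers.

```python
# Sketch: rank candidate ISPs by audience coverage and latency back to existing
# service locations. The weights, threshold and sample figures are assumptions.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    audience_coverage: float      # fraction of target audience reached, 0.0-1.0
    latency_to_origin_ms: float   # median latency to existing service locations

def score(c: Candidate, max_acceptable_ms: float = 150.0) -> float:
    """Higher is better: reward coverage, penalise slow replication paths."""
    latency_factor = max(0.0, 1.0 - c.latency_to_origin_ms / max_acceptable_ms)
    return 0.7 * c.audience_coverage + 0.3 * latency_factor

candidates = [
    Candidate("ISP-A", audience_coverage=0.62, latency_to_origin_ms=95),
    Candidate("ISP-B", audience_coverage=0.48, latency_to_origin_ms=40),
]
for c in sorted(candidates, key=score, reverse=True):
    print(f"{c.name}: score {score(c):.2f}")
```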
Choose the best path
The gold standard for understanding a given provider’s strengths and weaknesses is historical data. Latency trends between that provider and various segments of the target audience over a three- to six-month trailing window can be a reasonable predictor of performance and reliability.
For this final stage of diligence, an Internet intelligence service can provide detailed historical intercity latency measurements. Evaluate a range of ISPs in each candidate city to find those that can reach customer target markets with acceptably low latencies, without huge swings in day-to-day performance. Machine-to-machine traffic that supports service replication and failover also needs to be fast and reliable, so teams should examine paths and performance between the candidate location and existing service locations as well.
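If the measurement history can be exported, a few lines of pandas are enough to compare candidate ISPs on both speed and stability. The CSV name and column layout below are assumptions about the export format, not a reference to any particular product.

```python
# Sketch: summarise three to six months of intercity latency measurements per ISP.
# Assumes a CSV export with columns: isp, target_market, date, latency_ms.
import pandas as pd

df = pd.read_csv("intercity_latency.csv", parse_dates=["date"])

summary = (
    df.groupby(["isp", "target_market"])["latency_ms"]
      .agg(median="median", p95=lambda s: s.quantile(0.95), stdev="std")
      .reset_index()
)

# Favour ISPs that are both fast and stable: low median latency, and a p95 that
# does not swing far above it.
summary["stability_ratio"] = summary["p95"] / summary["median"]
print(summary.sort_values(["target_market", "median"]).to_string(index=False))
```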
Expanding your service footprint to a new city or a new facility represents a significant commitment of time and resources. Too often, expansion cities are selected on the basis of conventional wisdom, or by putting a finger on the map somewhere reasonably central.
> See also: Software-defined storage is driving data centre infrastructure innovation
To make sure you’re delivering the greatest efficiency and availability improvements to end users, let measurement drive the expansion and ask the right questions early in the process. Successful teams set clear goals for latency reduction, carefully measure the connectivity between ISPs and their most important customers, devise a plan for tracking customer experience, and use hard data to shop for and engage with providers: these are the keys to managing an effective online growth strategy.
Sourced from Jim Cowie, chief scientist, Dyn