In the absence of good measurement and adaptive control, businesses that simply inject traffic into the Internet and hope for the best are likely to experience inconsistent end-to-end performance. It’s not uncommon for Internet traffic to be sent on thousand-kilometre detours, crisscrossing oceans multiple times, or even travelling the long way around the planet, so that some service provider in the middle of each transaction can save a few pennies.
How did the Internet evolve into the complex system we have today, and how can we avoid the worst of the Internet’s performance pitfalls? It may help to start with a brief review of how traffic actually gets delivered on the Internet.
Surveying the global internet
The Internet functions as a single, global network, but it’s actually built from the arm’s-length interoperation of over 48,000 smaller networks known as autonomous systems. Some of these networks fit in a single building, while others have switches and routers spread out over multiple continents. But they all play by the same rules of interconnection, using a simple common protocol called the Border Gateway Protocol, or BGP, to build up a mutual map of their connected customers. As soon as two autonomous systems connect their border routers using BGP, traffic can flow directly between their networks, following the new path.
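To make the mechanics concrete, here is a deliberately minimal Python sketch of that idea. This is not real BGP: each autonomous system simply prepends its own number to the AS path it hears, ignores announcements that already contain it (loop prevention), and keeps the shortest path per prefix. The class names, AS numbers, and prefix are all invented for illustration.

```python
# Minimal illustration of BGP-style route propagation (not real BGP):
# each AS prepends its own number to the path and keeps the shortest
# AS path it has heard for every destination prefix.

class ASNode:
    def __init__(self, asn):
        self.asn = asn
        self.neighbours = []        # directly connected ASes
        self.routes = {}            # prefix -> AS path (list of ASNs)

    def connect(self, other):
        self.neighbours.append(other)
        other.neighbours.append(self)

    def announce(self, prefix, as_path):
        # Loop prevention: ignore announcements that already include us.
        if self.asn in as_path:
            return
        new_path = [self.asn] + as_path
        best = self.routes.get(prefix)
        if best is None or len(new_path) < len(best):
            self.routes[prefix] = new_path
            for n in self.neighbours:   # re-announce the better path
                n.announce(prefix, new_path)

if __name__ == "__main__":
    a, b, c, d = (ASNode(n) for n in (64500, 64501, 64502, 64503))
    a.connect(b); b.connect(c); c.connect(d); a.connect(d)
    # AS 64503 originates a prefix; the announcement ripples outward.
    d.announce("203.0.113.0/24", [])
    print(a.routes)   # {'203.0.113.0/24': [64500, 64503]}
```

Real BGP decisions also weigh business policy, local preference, and many other attributes; shortest AS path is just the tie-breaker this toy model keeps.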
After 45 years of growth and interconnection, more than 350,000 of these individual bilateral relationships now exist among the Internet’s autonomous systems, each voluntarily negotiated for mutual benefit. Sometimes money changes hands, but the majority of these relationships involve the simple exchange of traffic at no cost.
Many of these relationships are consummated at the world’s major Internet exchanges: large switches that make it easy to interconnect with many autonomous systems for the cost of a single network port. In the world’s largest cities, Internet exchanges can span multiple datacentres, and attract thousands of organisations, all looking to increase connectivity.
All these cities have to be connected together, of course, and again, the connections are voluntarily paid for by autonomous systems that want to exchange traffic with their remote neighbours. Undersea cables connect the continents and traverse inland lakes and seas, while terrestrial fibre optic routes follow familiar infrastructure such as highways, railways, power lines, and energy pipelines. Each of the physical cables that carry traffic between distant cities can support tens of thousands of these virtual connections, and there are hundreds of major submarine and terrestrial cables linking major cities, with new ones being built every year.
Complexity is the secret to the internet’s success
Taken together, the map of all these autonomous systems’ interconnections seems chaotic, redundant, even wasteful, with far more fibre in the ground and under the oceans than can be profitably lit at any one time. But this redundancy is one of the key ingredients of the Internet’s famous survivability in the face of disasters and damage.
Competition is another key ingredient, and it’s important to remember that each link in this global puzzle originally came into existence to serve some specific consumer demand. Some providers build new routes and new relationships in order to offer shorter paths, which command a premium price from customers who want to shave milliseconds of latency off their delivery of Internet traffic. Others seek to collect Internet traffic in a market where connectivity is expensive (such as central Asia) and deliver it to a market where connectivity is cheap (such as western Europe), making money on the spread.
When a customer connects to your Internet-delivered service, it’s common for three or four autonomous systems to handle the traffic as it flows back and forth. Very often, the forward and reverse paths in a single transaction will be handled by entirely different ISPs, each with its own paths and its own characteristic performance and reliability. That creates a real challenge for businesses that want to guarantee a consistent user experience across the Internet’s global infrastructure.
End-to-end performance depends on which paths are available on a given day, and which are cheapest for the intermediate service providers to deliver, so careful mapping is required: the autonomous systems on both sides of each Internet transaction, the paid and unpaid relationships that mediate the exchange of traffic, and the latencies of the underlying paths all need to be considered.
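One way to picture that mapping exercise, assuming latency measurements already exist for each relationship, is as a weighted-graph search. The sketch below is hypothetical (the provider names and millisecond figures are invented): each relationship becomes an edge weighted by measured latency, and Dijkstra’s algorithm picks the lowest-latency end-to-end route.

```python
# Hypothetical sketch: model each inter-AS relationship as a weighted
# edge (measured latency in ms) and find the lowest-latency path.
import heapq

def lowest_latency_path(edges, src, dst):
    """Dijkstra over an AS-level graph; edges: (as1, as2, latency_ms)."""
    graph = {}
    for a, b, ms in edges:
        graph.setdefault(a, []).append((b, ms))
        graph.setdefault(b, []).append((a, ms))
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        total, node, path = heapq.heappop(queue)
        if node == dst:
            return total, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, ms in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (total + ms, nxt, path + [nxt]))
    return None

# Illustrative numbers only: two candidate routes between a content
# host ("origin") and an eyeball network ("access").
relationships = [
    ("origin", "transit-A", 5), ("transit-A", "access", 40),
    ("origin", "transit-B", 12), ("transit-B", "access", 18),
]
print(lowest_latency_path(relationships, "origin", "access"))
# -> (30, ['origin', 'transit-B', 'access'])
```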
Managing the internet’s complexity: the next big data challenge?
Active performance monitoring is rapidly becoming an essential element of Internet operations, as enterprises mine server- and client-side latency measurements for insight into the Internet’s constantly changing paths. When congestion affects one part of the global network or a cable is damaged, end-to-end path analysis can suggest alternative routes. Recovery is then a matter of flipping a switch and summoning up a different set of provider relationships that are unaffected by the Internet’s local problems.
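A minimal sketch of what such monitoring might look like in practice, with placeholder hostnames (probe-a.example.net and the like are not real endpoints): time a handful of TCP connects along each candidate provider path and prefer the provider with the lowest median latency.

```python
# Minimal active-monitoring sketch: probe each candidate provider path
# with a timed TCP connect; prefer the lowest recent median latency.
# The endpoint addresses are placeholders, not real infrastructure.
import socket
import statistics
import time

def probe(host, port=443, timeout=2.0):
    """Return one TCP connect latency in milliseconds, or None on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

def best_path(paths, samples=5):
    """paths: {provider_name: probe_host}; return the healthiest provider."""
    medians = {}
    for name, host in paths.items():
        results = [probe(host) for _ in range(samples)]
        results = [r for r in results if r is not None]
        if results:                    # skip paths that are entirely down
            medians[name] = statistics.median(results)
    return min(medians, key=medians.get) if medians else None

candidate_paths = {                    # placeholder probe targets
    "provider-A": "probe-a.example.net",
    "provider-B": "probe-b.example.net",
}
print(best_path(candidate_paths))
```

A production system would of course probe continuously, track baselines, and alert on deviation rather than making one-shot decisions; this only shows the shape of the measurement.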
With careful attention to the selection of hosting and content distribution partners, intraday monitoring of Internet path performance, and a strategic plan for improving interconnection with key service providers, the non-flat Internet can deliver a consistent customer experience that rivals that of older, more expensive point-to-point networks. Thankfully, because the Internet isn’t flat, it rewards careful measurement and thoughtful management.
Sourced from Jim Cowie, chief scientist, Dyn