In February this year, the United States Federal Communications Commission approved rules that lay the groundwork for the concept known in tech circles as “net neutrality”. This oft-discussed topic has received plenty of press coverage, with arguments both for and against the rules.
On one hand are those who welcome the prospect of an Internet that treats every packet equally, arguing that this prevents the allegedly abusive practice of making content providers pay ISPs a fee to prioritize their content – so-called fast-lane access to customers. On the other hand are those who say the rules are a gateway to further regulation of Internet services and will do nothing to fix the growing problems facing the United States’ infrastructure.
But What About Traffic?
But there’s one argument missing here. While the hustle and bustle continues across the political spectrum, we’re overlooking the most technical aspect of net neutrality: it does nothing to help us deal with traffic-based attacks or concerns about traffic volume.
In essence, ISPs and networks will be unable to adjust or throttle traffic to ensure quality of service for their clients. If all traffic really were treated equally – all of it – the Internet would be fair game for botnets, malware, DDoS attacks, and other sources of abusive traffic. ISPs, unable to act on the strain, would theoretically have no choice but to forward packets straight to their destinations, without being able to block or slow down unwanted traffic.
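If the network itself won’t throttle abusive traffic, the service on the receiving end has to. A classic building block for that is a token-bucket rate limiter at the application edge. The sketch below is a minimal illustration, not a production design – the rate and capacity numbers are arbitrary, and a real deployment would track one bucket per client IP or API key.

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to
    `capacity`; anything beyond that is rejected at the edge."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1            # spend one token per request
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)
# A burst of 10 requests is absorbed; an 11th arriving immediately
# after finds the bucket empty and is rejected.
results = [bucket.allow() for _ in range(11)]
```

The same idea scales up: dedicated DDoS-mitigation services are essentially far more sophisticated versions of this filter, applied before traffic ever reaches your servers.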
All of this is theoretical, of course, but the trend is real. We’ve seen a roughly 50-fold increase in DDoS attacks over the past decade, and botnets are becoming a major issue. Case in point: services like the PlayStation Network have already experienced downtime as a result of these challenges – both from malicious attacks and from scenarios in which servers are simply overloaded by unwanted traffic such as botnet activity.
Considering these challenges, risk mitigation should be at the top of every growing company’s priority list. Under net neutrality, there’s nothing more that can be done at the ISP level. Instead, forward-looking businesses will have to respond to these risks with a more robust, distributed infrastructure that includes adequate load balancing and failover mechanisms. Having multiple endpoints working in your favor reduces the risk of outages and overloads: even if legitimate user traffic cannot be granted a fast lane at the network level, you can still make sure it gets served every time.
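The failover half of that idea can be sketched in a few lines: keep an ordered list of endpoints (primary first), probe each one’s health, and send traffic to the first endpoint that responds. The hostnames below are hypothetical, and the health probe is passed in as a callable so the logic stays independent of how you actually check liveness (an HTTP ping, a TCP connect, a monitoring API, and so on).

```python
def pick_endpoint(endpoints, probe):
    """Return the first endpoint whose health probe succeeds.

    `endpoints` is an ordered list (primary first, standbys after);
    `probe` is any callable returning True for a live endpoint.
    """
    for url in endpoints:
        if probe(url):
            return url
    raise RuntimeError("all endpoints are down")

# Hypothetical hosts: the primary is down, so traffic fails over
# to the first healthy standby.
down = {"https://app-1.example.com"}
servers = ["https://app-1.example.com",
           "https://app-2.example.com",
           "https://app-3.example.com"]
assert pick_endpoint(servers, lambda u: u not in down) == \
    "https://app-2.example.com"
```

Real load balancers run these probes continuously in the background rather than per request, but the principle – never route to an endpoint you haven’t verified – is the same.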
With load balancing, incoming requests are routed automatically across a distributed server infrastructure, so the work of serving resources is shared among several servers. More sophisticated load balancers dynamically route each request to the server that has enough available resources to process it. This way, high availability is achieved and service interruptions are kept to a negligible level.
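One simple form of that “route to the server with spare resources” strategy is least-connections balancing: track how many requests each backend is currently handling and always pick the least busy one. The sketch below is a minimal, single-process illustration with made-up server names, using in-flight request count as a stand-in for available resources.

```python
class LoadBalancer:
    """Route each request to the backend with the fewest in-flight
    requests (a basic least-connections strategy)."""

    def __init__(self, backends):
        # Map each backend name to its current in-flight request count.
        self.active = {name: 0 for name in backends}

    def route(self):
        # Pick the least busy server; ties go to the first one listed.
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def finish(self, server):
        # A request completed, so that server has capacity again.
        self.active[server] -= 1

lb = LoadBalancer(["app-1", "app-2", "app-3"])
first = lb.route()   # all idle, so the first backend is picked
lb.route()
lb.route()
lb.finish(first)
# The next request goes to the server that just freed up.
assert lb.route() == first
```

Production balancers layer health checks, weights, and connection draining on top of this, but the core decision – compare load, pick the minimum – is exactly what this sketch shows.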
The application of this technology is akin to building channels between rain-collection reservoirs spread across a large area. During the rainier seasons, one reservoir may be full while others haven’t collected even half their capacity. Load balancing channels water from the full reservoir to the emptier ones, sheltering the entire network from chaotic overload.
With net neutrality rules barring ISPs from blocking certain types of traffic or prioritizing others, forward-looking businesses will need robust, highly available infrastructure to maintain a high quality of service for their online customers. Technologies like load balancing, DDoS protection, and application firewalls let startups and established enterprises stay productive, safeguarding business continuity and protecting the interests of their customers at the same time. It’s a win-win-win!