Applications in Cloud: Keeping Data (& Work) Flowing

by John Addington

Loss of connectivity means a loss of access to all the resources in the cloud. Files are unreachable, applications are inaccessible, and productivity comes to a screeching halt.

Moving applications and services to the cloud can deliver significant benefits for most organizations. These benefits aren't to be taken lightly: cost savings, simplified management, more efficient use of resources, and additional technological flexibility. However (you knew there was a "however"), to see ongoing success with a cloud-based strategy, it's important to account for all the potential hazards and complications and plan accordingly. Overlooking them can lead to serious problems.

Two common models for implementing a cloud-based strategy are a hosted solution and a private cloud. Although there are exceptions, in a hosted solution access to the third-party service is usually achieved over a public Internet circuit. With a private cloud, locations or users connect to one or more data centers over public circuits, private circuits, or a combination of both. What's common to both scenarios is a reliance on WAN connectivity, whether private, public, or mixed.

Many services that might previously have been accessed on the local area network are now accessed via the WAN, and bandwidth requirements will almost certainly escalate accordingly. As those bandwidth needs increase, maintaining reliable connectivity to support the additional traffic becomes imperative. Take file sharing, for example. Data that may have previously been shared on a local file server would have been accessed directly via the local network, not utilizing WAN bandwidth at all. With that data now stored in the cloud, the WAN resources suddenly become a critical factor.

What can you do?

There are three ways to address increased bandwidth requirements.

The first is to reduce the amount of data actually sent over the WAN, typically through caching and compression. While this can have a significant impact, the approach does not work equally well for all traffic types (already-compressed or encrypted traffic gains little), and there are limits to how much the data can be reduced. The second is simply to increase the capacity of the WAN connection. While a faster single connection can address the issue, budget and local availability may limit the options. The final approach is to add additional WAN connections and load-balance traffic across them. It's often less expensive to add a lower-speed connection from a different provider than to increase the speed of an existing connection, although this approach, too, depends on additional connectivity being available.
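To make the third approach concrete, here is a minimal sketch of how a link controller might spread new traffic flows across two connections in proportion to their capacity, using simple weighted round-robin. The link names and speeds are illustrative assumptions, not a real product's configuration:

```python
from itertools import cycle

def build_schedule(links):
    """Build a weighted round-robin schedule: each link appears in the
    rotation in proportion to its capacity, so a 100 Mbps link receives
    twice as many new flows as a 50 Mbps link."""
    unit = min(capacity for _, capacity in links)
    slots = []
    for name, capacity in links:
        slots.extend([name] * round(capacity / unit))
    return cycle(slots)

# Hypothetical example: a fast primary circuit plus a cheaper, slower
# secondary circuit from a different provider.
links = [("fiber_primary", 100), ("cable_secondary", 50)]
schedule = build_schedule(links)

# Assign the next ten new flows to links according to the schedule.
assignments = [next(schedule) for _ in range(10)]
```

Real WAN link controllers balance per flow or per session rather than per packet (so that a single TCP session stays on one path), but the proportional idea is the same.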

One additional advantage of the third approach is that it also addresses the need for redundancy. With so much critical data now riding on WAN connectivity, reliance on a single connection becomes a dangerous gamble: if that connection fails, files are unreachable, applications are inaccessible, and productivity comes to a screeching halt. Adding the components for automatic failover, both a second connection and a system to automatically detect failures and manage the failover process, creates a communications infrastructure robust enough that a single connection can fail without disrupting users' workflow.
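The detection-and-failover logic described above can be sketched in a few lines. This is a simplified model, with an assumed failure threshold and illustrative link names; production controllers probe with ICMP or DNS, apply hold-down timers to avoid flapping, and rewrite routes rather than return labels:

```python
def select_active_link(probe_results, fail_threshold=3):
    """Walk a sequence of primary-link health probes (True = reachable)
    and decide which link should carry traffic after each probe.
    After `fail_threshold` consecutive failures the traffic fails over
    to the secondary link; a successful probe fails back to primary."""
    active = "primary"
    failures = 0
    decisions = []
    for ok in probe_results:
        if ok:
            failures = 0
            active = "primary"
        else:
            failures += 1
            if failures >= fail_threshold:
                active = "secondary"
        decisions.append(active)
    return decisions

# Hypothetical probe history: healthy, then an outage, then recovery.
history = [True, True, False, False, False, False, True]
decisions = select_active_link(history)
```

Note that the threshold trades detection speed against stability: a lower threshold fails over faster but risks switching on a single dropped probe.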

The Right Way to Fail

Beyond the obvious necessity of a secondary path for failover to be possible at all, it's also important to plan so that each path can individually carry the traffic critical to productivity. In other words, failover alone isn't enough; effective redundancy should be the goal. Failing over from a fast connection to a slow one may not help, and can even make things worse through congestion-induced delays, timeouts, and disconnects. Ample bandwidth on every connection is ideal but not always feasible, so the failover system should also be able to prioritize traffic and manage bandwidth allocation so that critical traffic is not crowded out by less important data. That level of functionality is useful on a day-to-day basis, but it can be a true lifesaver during an outage, when users are forced to work with significantly diminished bandwidth capacity.
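Prioritization during an outage can be sketched as a strict-priority allocation: each traffic class gets up to its demand, in order, until the diminished link is full. The class names and numbers below are hypothetical, chosen only to show critical traffic surviving a failover to a smaller circuit:

```python
def allocate_bandwidth(capacity_mbps, demands):
    """Allocate a link's capacity among traffic classes in strict
    priority order.  `demands` is an ordered list of
    (class_name, demand_mbps), highest priority first; each class
    receives up to its demand, and lower priorities share the rest."""
    remaining = capacity_mbps
    allocation = {}
    for name, demand in demands:
        granted = min(demand, remaining)
        allocation[name] = granted
        remaining -= granted
    return allocation

# Hypothetical scenario: failover to a 20 Mbps backup circuit.
# Voice and cloud applications are protected; bulk backup is squeezed.
demands = [("voip", 5), ("cloud_apps", 12), ("bulk_backup", 15)]
allocation = allocate_bandwidth(20, demands)
```

Real traffic shapers typically use weighted rather than strict priority schemes so low-priority classes are never starved entirely, but the principle, critical traffic first, is the same.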

With multiple connections, ideally from diverse carriers, and a WAN link controller to automatically load-balance, fail over, and manage them, access to the cloud becomes rock solid and the ongoing benefits of a cloud-based strategy can be fully realized. Planning ahead and addressing these issues before an outage or congestion causes disruption ensures smooth sailing over the long term.

Disclaimer: This article was written by a guest contributor in his/her personal capacity. The opinions expressed in this article are the author’s own and do not necessarily reflect those of

John Addington



John Addington is a field applications engineer at Ecessa, a company that designs and manufactures networking hardware for constant and seamless Internet connectivity for businesses. John is a passionate technologist who specializes in IP networking and...