The impact of the cloud on your network, part 1: Infrastructure

The cloud has been on the agenda for some years now, so much so that even the government has gotten in on the act with its G-Cloud initiative. Web services are rapidly moving over to become virtual applications running on flexible cloud-based systems.

Consumers and businesses alike are keeping their data online “in the cloud,” and applications are beginning to be delivered in the same way, such as Google Docs. Many vendors, however, have stopped short of live delivery of applications and instead offer them on a subscription model, such as Microsoft Office 365 and Adobe Creative Cloud.

Software and services delivered via the cloud have the advantage, in theory at least, of being available wherever you are, through whatever device you happen to have at hand. They become user-centric, rather than device-centric, which means users can work more flexibly, without having to lug a specific portable device with them, or be so tied to their desk. But this also assumes ubiquitous network availability.

Basic access won’t be a problem for a business with wired and wireless networking across its premises. But as your use of cloud-based services grows, the question is, can your network cope when the majority of mission-critical applications are delivered over the network?

In this feature, we look at the impact of cloud services on network infrastructure provision. In the second part of this series, we will examine the security issues of shifting your business computing to the cloud.

Nothing new under the sun

The impact of the cloud is normally seen as a shift in processing from the client to the server, echoing the temporary interest in Network Computing at the end of the 1990s, and harking back to the era of mainframes and terminals. This has now evolved into the more generic Software as a Service (SaaS), but the underlying principles are the same.

A server that is being used primarily for delivering files will need large amounts of fast, RAID-protected storage. But the server itself can be relatively low powered, and could even be Network Attached Storage (NAS), which is usually delivered in a very light “headless” form. Running SaaS applications is different. For SaaS, the server does most of the processing, and sends the results to the client device, which is essentially acting as a display and input for the server-side activity.

A business running Citrix XenApp or Microsoft App-V, for example, will have a much more complicated server infrastructure than one where user authentication and access to shared storage are the main network services. There may be a licensing server to ensure software policies are adhered to, but most significantly there will be an application server, which takes on the processing that was previously done by the client devices themselves.

The advantage is that client devices don’t need to be as powerful, so economies can be made on these. Client devices typically spend a significant portion of their time idle, but that spare capacity can’t easily be redistributed when the processing power sits locally on each device.

When this is provided by a server, however, the system only needs to be able to cope with peak requirements, which will almost certainly not be everyone in the organization using the services at once, particularly in a large company. So there will be less expenditure overall on processing power.

In other words, hardware expenditure moves from the client device towards the server. However, there’s another hidden cost, which is often overlooked, or at least placed much further down the priority list than the server, client, and virtualized software involved with such a shift in focus. This is the performance of the networking infrastructure necessary to support the increased amount of data that will be flowing across the network.

The bandwidth required depends on what services are being supplied from the cloud. If this is just email, the load will be relatively low. This kind of application needs only around 100Kbits per second per user for adequate performance. Microsoft even supplies a bandwidth calculator to estimate how much bandwidth an Exchange client needs.

If, on the other hand, a whole virtualized desktop is being delivered, for example via Microsoft App-V, the bandwidth required can be much greater, particularly if the desktops in question are running at a high resolution or displaying multimedia. Citrix estimates anything from 43Kbits per second for general office applications running on its XenDesktop, to nearly 600Kbits per second when printing, and over 1.8Mbits per second for high-definition video. Cloud storage needs vary greatly too, depending on the size and quantity of files users are accessing. Basic word processing and spreadsheet files will be relatively small, but frequent use of photographs and videos will be bandwidth-heavy.
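Putting the figures above together gives a rough way to size a connection. The sketch below uses the per-user rates quoted in this article; the 60 per cent peak-concurrency factor is an illustrative assumption, not a recommendation, and real figures should come from testing.

```python
# Rough aggregate-bandwidth sizing, using the per-user figures quoted
# above. The concurrency factor (what fraction of users are active at
# the busiest moment) is an assumed value for illustration only.

PER_USER_KBPS = {
    "email": 100,          # typical email client load
    "vdi_office": 43,      # Citrix XenDesktop, general office apps
    "vdi_printing": 600,   # XenDesktop while printing
    "vdi_hd_video": 1800,  # XenDesktop, high-definition video
}

def peak_bandwidth_mbps(users: int, workload: str,
                        concurrency: float = 0.6) -> float:
    """Estimate peak bandwidth in Mbit/s for a pool of users,
    assuming only a fraction are active at the busiest moment."""
    kbps = PER_USER_KBPS[workload] * users * concurrency
    return kbps / 1000

# e.g. 100 office-VDI users at 60% peak concurrency:
print(round(peak_bandwidth_mbps(100, "vdi_office"), 2))  # 2.58
```

Note how the same 100 users need an order of magnitude more capacity if their desktops are playing HD video rather than running office applications, which is why the workload mix matters as much as the headcount.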

The dangers of latency

It’s not just bandwidth that is the issue, either. There’s latency to consider too. Lower latency is always better, but while email and Web-based applications will be pretty tolerant of variations, other types of application won’t be so happy. Latency in any form of media stream, particularly real-time communications, can be frustrating or can ruin the experience entirely. Voice over IP, for example, typically requires under 100Kbits per second of bandwidth, but becomes unusable if high latency causes frequent delays or stutters in delivery. A delay of tens of milliseconds is an inconvenience, but anything over 150 milliseconds will make voice calls intolerable.

Virtual desktops are equally sensitive, because we are so used to the responsiveness of locally run desktop operating systems. A Web application can use clever techniques such as AJAX (Asynchronous JavaScript and XML) to mask the slowness of the network, guessing the information you might need next and downloading it quietly in the background. A virtual desktop, by contrast, makes constant demands on the network. If bandwidth or latency problems occur, mouse movement, button clicking or typing can become erratic, leading to user errors and frustration. And while a single slow Web application can be minimized in favor of another task until it has fully responded, if a user’s entire computing experience is delivered via a virtual desktop, sluggish performance will adversely affect everything they do.

For locally delivered cloud-based applications, the switch and router setup will be increasingly important. A large router buffer can actually introduce latency as it tries to cope with faltering bandwidth, which will be fine for email users but potentially disastrous for voice over IP, video-conferencing or a virtual desktop.

A wireless network will add its own difficulties to the mix. A mixed wireless environment that allows some legacy 802.11g devices will be fine for email and Web applications, but running virtual desktops on a busy network is a different matter. In that case, ensure your wireless networks can run at full 802.11n speed, perhaps using a secondary network for legacy devices, and be ready to adopt 802.11ac once it becomes widely available.

Quality of Service is key

Quality of Service, where mission-critical network data is given priority over less important traffic, becomes key. Your virtual desktop connections and voice over IP, for example, can be prioritized over less bandwidth- and latency-sensitive applications such as email and Web browsing. Not all virtual desktop protocols are created equal, either. Microsoft RDP is generally considered to remain effective even over low-bandwidth, high-latency connections, whereas VMware View’s PCoIP is more bandwidth-hungry, although it provides better multimedia than RDP in return.
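On the application side, QoS usually starts with traffic being marked so that switches and routers can recognize it. A minimal sketch, assuming a Linux host: tagging a UDP socket with DSCP class EF (Expedited Forwarding, the class conventionally used for VoIP). Setting the mark on the host guarantees nothing by itself; the network gear must be configured to honor it.

```python
# Sketch: mark a UDP socket's traffic with DSCP EF (value 46), the
# class commonly used for VoIP. The DSCP value occupies the top six
# bits of the IP TOS byte, hence the shift by two.

import socket

DSCP_EF = 46  # Expedited Forwarding (RFC 3246)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# Read the mark back to confirm it took effect.
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(tos)  # 184 on platforms that honor IP_TOS
```

All packets sent through this socket then carry the EF mark, which a QoS-enabled switch can map to its priority queue ahead of bulk email or Web traffic.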

These issues become even more relevant for services that are hosted in the public cloud – that is, on the Internet – rather than the private cloud, within your internal network. The low requirements of email will mean that even a relatively standard ADSL connection could support 100 users reasonably comfortably. But virtual desktops will stretch this, and it could even be preferable to have a separate Internet connection dedicated to this traffic.

The importance of testing

Unfortunately, the only way to get a true picture of how much bandwidth your network needs is often to test it, because real-world usage varies greatly even between companies using the same basic set of applications. You can also use standard network tools to measure the latency of the servers your clients will be accessing, to ensure it is within tolerable levels. This isn’t something that can be done merely as a snapshot prior to implementation, either. Network performance matters more as the number of cloud-based applications grows, so monitoring it constantly will be key to keeping business activities running.
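A simple latency check doesn’t need specialist tooling. The sketch below times a few TCP handshakes to a server and reports the median round trip; the host and port are placeholders to be replaced with your own application server.

```python
# Minimal latency probe: time several TCP handshakes to a server and
# report the median round trip in milliseconds. Host and port below
# are placeholders, not real infrastructure.

import socket
import statistics
import time

def tcp_latency_ms(host: str, port: int, samples: int = 5,
                   timeout: float = 2.0) -> float:
    """Median TCP connect time to host:port, in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=timeout):
            pass  # connection established; close immediately
        times.append((time.perf_counter() - start) * 1000)
    return statistics.median(times)

# Usage (against one of your own servers):
#   print(f"{tcp_latency_ms('app.example.com', 443):.1f} ms")
# Recall the rule of thumb above: much over 150 ms is trouble for VoIP.
```

Running a probe like this on a schedule, and recording the results, gives exactly the continuous monitoring the paragraph above describes, rather than a one-off pre-deployment snapshot.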

Either way, the performance of the local network and its connection to the wider Internet should not be neglected when greater usage of cloud-based services is considered. The extra costs of any necessary improvements should be factored into the equation.

There will still be considerable cost benefits from implementing cloud applications, but to get the most out of them it’s fundamentally important to ensure your networking infrastructure is up to the job.

In the second feature of this series, we examine how to ensure your network security can meet the added demands of the cloud.
