Reducing VDI cost by exploring alternatives to centralized VM storage

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.

One of the most significant cost elements of VDI is the centralized storage required to maintain virtual machines. Hypervisor vendors argue that, to get disaster recovery and high availability, the additional expense is really a bargain, and in many instances that might be true. But in many cases cost is paramount, complexity is to be avoided, and three or four nines of availability may be far down the list of priorities.

Availability and uptime are always important, but don't forget that moving from traditional desktops to a VDI architecture will improve uptime and availability regardless of whether centralized storage is used. So the real question is whether a fairly significant improvement in availability is good enough if it comes with low cost, reduced complexity and fewer moving parts. Many IT shops will probably nod in the affirmative.

Centralized storage is typically implemented with storage area networks (SANs) that provide redundancy, network accessibility and management features that go beyond conventional, local storage. For example, SAN management tools let you add disks to your arrays without interrupting service. And hypervisors leverage these capabilities to deliver improved uptime and high availability through support of features like live virtual machine migration (referred to as vMotion in VMware parlance).

In the event that a hardware failure occurs on the server running your VM, you can use these live migration techniques to transfer a VM to a different server, thus maintaining continuity of service. These features don't, however, make you immune to application crashes, OS crashes or many of the inherent instabilities in the OS and platform stack that don't have anything to do with hardware.

So, even with live migration and a SAN back end, it's not as if you're getting complete immunity from downtime. Yes, hardware issues on your host server are eliminated from the downtime equation, but this improvement in uptime comes at a premium. SANs are costly when purchased from the category leaders. There are smaller companies offering more economical solutions, but regardless of the vendor, this class of storage is considerably more expensive than distributed or local storage.

Distributed storage can be defined as self-contained islands of local storage associated with multiple servers, such that each local store remains accessible only to the server it is attached to, but where the aggregate capacity of all such islands can be used to store the total set of VMs required.

For example, if the servers are running a collection of hypervisors for VDI, then individual user VMs may find themselves mapped to specific local stores. The aggregate of all local stores is the distributed storage capacity. The big advantages: setup is simple, no SAN or specialized storage management tools are necessary, the server configuration is straightforward to order and support, and the cost per terabyte is much lower than for SAN-based centralized storage.
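To make the model concrete, here is a minimal Python sketch (the host names, capacities and placement rule are all hypothetical, not taken from any product) of a handful of hypervisor hosts, each with its own local store, where every user VM is pinned to one host's disk while the aggregate capacity is simply the sum of the islands:

# Toy model of distributed storage: each hypervisor host has its own local
# store, and a VM lives on exactly one host's disk. Values are made up.
servers = {
    "hv-01": {"capacity_gb": 2000, "used_gb": 0},
    "hv-02": {"capacity_gb": 2000, "used_gb": 0},
    "hv-03": {"capacity_gb": 2000, "used_gb": 0},
}
placement = {}  # VM name -> host whose local store holds its disk image

def free_gb(host):
    return servers[host]["capacity_gb"] - servers[host]["used_gb"]

def aggregate_free_gb():
    """The 'distributed storage capacity' is just the sum of the islands."""
    return sum(free_gb(h) for h in servers)

def place_vm(vm_name, size_gb):
    """Assign a VM to whichever host currently has the most free local space."""
    host = max(servers, key=free_gb)
    if free_gb(host) < size_gb:
        raise RuntimeError(f"no single local store has room for {vm_name}")
    servers[host]["used_gb"] += size_gb
    placement[vm_name] = host
    return host

for user in ("alice", "bob", "carol"):
    print(user, "->", place_vm(f"{user}-vm", 60))
print("aggregate free capacity:", aggregate_free_gb(), "GB")

The point the sketch makes is the trade-off discussed next: you get the combined capacity of all the local stores, but every VM remains tied to the one host that physically holds its disk.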

The downside is uptime. If the hypervisor running your VM crashes, then you cannot move that VM to a different hypervisor using live migration, and the end user will experience some downtime. But to truly understand the real-world implications of this kind of failure, we need to consider two scenarios under which it may occur.

The first is an intermittent issue at the failing server that requires a reboot or a process restart. In this event, a few minutes of downtime waiting for the server to come back online is all the end user will have to deal with; not completely unlike rebooting a conventional PC. The chances of this sort of issue happening on a server, though, are usually far lower than on a PC, as almost all servers use higher-grade components, error-correcting memory, redundant network interfaces and so on.

The second and more problematic scenario is when the server crashes entirely and cannot be recovered via a reboot. In this event, the user's VM is now stuck on a local store that is no longer accessible. Clearly, this is a troublesome outcome because, in order to restore the user's session, the entire VM needs to be re-created and access to the user's data needs to be restored too. But as with all things in IT, one can't rush to conclusions. It turns out you can architect things in a way that makes even this scenario not quite as disastrous as it appears at first blush.

To understand how, let's look at what a user typically has on their PC, or on their VM. There is an operating system of course, layered with applications, drivers, updates and configuration information. On top of this, there are user-specific environment settings -- or profile data. And finally, there is user data.

A typical network of desktops uses methodologies like roaming profiles, or NAS-resident user data repositories (i.e. where your "Documents" folder points to a network location). If you use Active Directory or a similar centralized permissioning and authentication system, you can typically log into different systems and the group policies ensure your Documents and data folders are accessible and that they point to the correct NAS location.

As far as the OS goes, typically all your office computers will be on the same version, with the same updates applied. But there still might be some differences in the applications available on PC A vs. PC B. A corporate IT department will typically maintain OS images for each type of user, and these images will contain the applications necessary to serve that type of user.

While this approach certainly beats having to manually install each OS configuration every time a new user is commissioned, it is still a little kludgy and time-consuming. So, how can we solve each of these issues (OS availability, app availability and user-data availability) in a distributed VDI context?

It so happens that NAS-stored user data and roaming profiles are entirely feasible in conjunction with VDI. You can, in fact, use generic VM images that represent the various application configurations you need to support, and allow network-based user data shares to be mounted on these VMs after bootup via group policy or startup scripts.
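As a rough illustration of that last step, a logon script can do the mapping itself. The sketch below (the file server name, share layout and drive letter are assumptions, not anything prescribed by a particular product) simply shells out to the standard Windows net use command:

import getpass
import subprocess

# Hypothetical file server and share layout; substitute your own NAS paths.
FILE_SERVER = r"\\nas01"
DRIVE_LETTER = "U:"

def map_user_data_share():
    """Map the logged-in user's data share to a drive letter at logon."""
    user = getpass.getuser()
    share = rf"{FILE_SERVER}\users\{user}"
    # 'net use' is the standard Windows command for mapping network drives;
    # /persistent:no keeps the mapping scoped to this session, which is what
    # you want on a pooled or freshly cloned VM.
    subprocess.run(["net", "use", DRIVE_LETTER, share, "/persistent:no"],
                   check=True)

if __name__ == "__main__":
    map_user_data_share()

In practice the same one-liner is often just a batch or PowerShell logon script assigned through group policy; the language matters far less than the fact that the mapping travels with the user rather than with any particular VM.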

As for user profile customizations, you get quite a bit of this with default Windows roaming profiles. But if you are not using roaming profiles or prefer an alternative approach, various free resources are available that allow you to clone a user profile from a source Windows system and restore it to a destination.
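For illustration only, the file-system half of such a clone can be as simple as the sketch below (the source and destination paths are made up); a real migration also has to carry over the NTUSER.DAT registry hive, permissions and locked files, which is exactly what dedicated tools and robocopy-style scripts handle:

import shutil
from pathlib import Path

# Hypothetical paths: the old machine's profile reached over an admin share,
# and the local profile folder on the destination VM.
SOURCE = Path(r"\\old-pc\c$\Users\alice")
DEST = Path(r"C:\Users\alice")

# The folders most users actually care about; extend the list as needed.
FOLDERS = ["Desktop", "Documents", "Favorites", "AppData\\Roaming"]

def copy_profile_folders():
    for folder in FOLDERS:
        src, dst = SOURCE / folder, DEST / folder
        if src.exists():
            # dirs_exist_ok merges into folders Windows has already created.
            shutil.copytree(src, dst, dirs_exist_ok=True)
            print("copied", folder)

if __name__ == "__main__":
    copy_profile_folders()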

An IT tech with basic group policy and scripting skills can achieve profile migration for free. If, however, the customer requires the "comfort" of a packaged product to do what a few scripts will do, there are numerous profile management and migration solutions available from small and large companies alike. Even though a packaged product adds cost (which, as mentioned, can be avoided), it will probably still handily beat the extra investment centralized storage requires.

Finally, how do we deal with applications? In simple environments with only a few application configurations, you can likely get away with a small number of pre-configured VMs, one per app configuration. You start with this master copy, clone it for an individual user, then mount the user data volumes and copy the profile over via group policy at login, and you're done.
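Sketched as a workflow, those provisioning steps look roughly like the outline below. Every function is a deliberately empty placeholder: the clone step stands in for whatever clone command your hypervisor's CLI or API exposes, and the last two steps are the share-mapping and profile-copy sketches shown earlier, typically triggered by group policy at first logon rather than by a central script:

def clone_master_image(master_name: str, vm_name: str) -> None:
    """Clone a pre-configured master VM onto a host's local store
    (placeholder for your hypervisor's own clone command or API call)."""
    raise NotImplementedError

def map_user_data_share(user: str) -> None:
    """Mount the user's NAS data share (see the earlier 'net use' sketch)."""
    raise NotImplementedError

def apply_profile(user: str) -> None:
    """Restore roaming or cloned profile data (see the earlier profile sketch)."""
    raise NotImplementedError

def provision_desktop(user: str, role: str) -> None:
    """One master image per application configuration, e.g. 'finance' or 'dev'."""
    vm_name = f"{user}-{role}-vm"
    clone_master_image(master_name=f"master-{role}", vm_name=vm_name)
    # These two steps normally run inside the guest at first logon, driven
    # by group policy, rather than from the provisioning side.
    map_user_data_share(user)
    apply_profile(user)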

In enterprise environments where hundreds of application configurations need to be supported across tens of thousands of images, it will be simpler to use a base VM for the OS configuration and add application streaming or application virtualization to customize the image from an app perspective.

VDI delivers unquestionable advantages and a far more flexible delivery architecture for enterprise desktops, and hopefully this article shows that you can deploy VDI without centralized storage, at reduced cost, while still delivering most of the advantages of desktop virtualization.

While the approach highlighted here is just one of many, it is vendor-agnostic and does not require a specific type of hypervisor. There are vendor-specific methodologies too, which readers may investigate based on their hypervisor preference. Hopefully, by leveraging this approach, or a variation thereof, the cost of centralized storage will not prevent smaller companies and distributed teams from experiencing the exciting world of virtual desktop computing.

Amir Husain is the president and CEO of VDIworks, an Austin, Texas-based developer of VDI management software. He holds over a dozen filed and awarded patents in virtualization and cloud computing. Amir was the CTO of ClearCube Technology, the world's first developer of PC blade and connection brokering technology, and currently sits on its board. He is also a board member at Pepper.pk, the maker of three world No. 1 mobile applications, and at Wheel InnovationZ, a Texas-based stealth startup focused on mobile cloud computing.
