VM Junkie

March 9, 2011

View Client for iPad out – limitations & caveats

Filed under: view, vmware — ermac318 @ 8:55 am

So a couple posts ago you may recall I made the following statement:

There are also a few cool bits under the hood that make it work a bit better with things like Teradici’s new firmware 3.3 for their zero clients, as well as some enhancements that will become obvious in the next month or so when another client becomes available (which I’m not sure I can talk about publicly in detail).

What I was referring to, as you may now guess, is the set of enhancements that make the iPad client work. Please note that the iPad client (which has been posted about many times in the few hours since it launched, even on mainstream tech blogs like Engadget) only supports View Agents running View 4.6 or newer. Also, while it does support the Security Server, the iPad client is PCoIP only (no RDP support), which means that if you want to avoid using a VPN on your iPad, you will also need the new View 4.6 Security Server.

Another important note: this is the first time a View client has been released separately from a particular version of View. The version number on the iPad client is 1.0.1 – not 4.6. This is the start of a larger trend in View client releases: you should see them become increasingly decoupled from releases of the View Manager infrastructure components.

Now I just wish I had an iPad. =)

March 7, 2011

Windows 7 SP1 support in View

Filed under: view, vmware — ermac318 @ 9:12 am

Some updates for those on the bleeding edge.

When VMware first published the release notes for View 4.6, the following line was in the What’s New section:

  • Experimental support for Microsoft Windows 7 SP1 RC operating systems

Since that time, the release notes have been updated to the following:

  • Support for Microsoft Windows 7 SP1 operating systems

This is a very important change. When the View 4.6 bits shipped and the release notes were first published, Windows 7 SP1 was not fully GA (only limited availability), so the official support statement had to remain experimental. Once the SP1 bits were generally available, VMware updated the release notes to confirm full support. I have also personally confirmed with the View Product Manager that Windows 7 SP1 is fully supported.

Also of note: while View 4.5 does not officially support Windows 7 SP1, this KB article outlines some View 4.5 patches that will make it work. While the article does not explicitly call out Windows 7 SP1, the underlying incompatibility is the same – the Microsoft patches it describes are rolled up into SP1, so SP1 exhibits the same issue. Note that there are some limitations around Local Mode, however.

February 25, 2011

View 4.6 and ThinApp 4.6.1 are out!

Filed under: thinapp, view, vmware — ermac318 @ 1:29 am

Good news, everyone!

After an agonizing wait for some of us in the know, View 4.6 is finally out as of an hour ago. This is not a super-huge release but rather an incremental one that fixes a lot of bugs; it does, however, add a couple of pretty important features:

  • The Security Server can now proxy PCoIP connections. This was semi-announced as a feature back when Mark Benson made this post on the Office of the CTO blog.
  • (Experimental) Windows 7 SP1 support. Note that View 4.5 is not compatible with Windows 7 SP1, so don’t go deploying it to all your View desktops before upgrading your View Agents to 4.6!
  • USB improvements. Now (if you’re crazy) you can sync iPods over View’s USB redirection.

There are also a few cool bits under the hood that make it work a bit better with things like Teradici’s new firmware 3.3 for their zero clients, as well as some enhancements that will become obvious in the next month or so when another client becomes available (which I’m not sure I can talk about publicly in detail).

So with that out of the way, what’s missing? Unfortunately, profile integration, such as any technology from the RTO acquisition, is still not included in View 4.6. We can only hope that when the next major version of View comes out, we will see that technology, or something like it, included. In the interim, we’ll have to rely on solutions like Liquidware Labs ProfileUnity, AppSense, or others.

As per usual, the new View release is accompanied by a release of VMware ThinApp, although in this case it is an even more minor update, from 4.6 to 4.6.1. Primarily this release adds better support for Office 2010 and for virtualized IE7 and IE8; you can read the official VMware blog posting about it here.

December 3, 2010

Passed the VCAP-DCA exam!

Filed under: vmware, vSphere — ermac318 @ 1:18 pm

If you follow my Twitter, you’ll have noticed that I finally received word that I passed the VCAP-DCA exam! I took the test back on Nov 4th and had to wait almost a month for my results. But all is forgiven!

Some experiences from my exam:

  • Manage your time well. I actually spent too much time hunting through the provided documentation to answer a couple questions and ended up running out of time. If I had just given up on one particular question and moved on, I might have gotten 2 more right at the end that I couldn’t get to.
  • The blueprint is very important! Make sure you know everything in it.
  • When they say you need to know all the command lines, they’re not kidding.
  • Know your Distributed Virtual Switch stuff well.

Also, I can’t recommend the guides over at vFail enough – they are excellent. Make sure you go there and study before the exam.

Good luck, everyone!

October 13, 2010

vSphere Network I/O Control vs. HP VirtualConnect Flex-10 or FlexFabric

Filed under: bladesystem, hp, vmware, vSphere — ermac318 @ 8:48 am

As of vSphere 4.1, VMware introduced a new feature called Network I/O Control. Many of its features overlap with those of HP VirtualConnect Flex-10 (and subsequently FlexFabric as well). This article compares and contrasts the two systems, with their pros and cons.

HP Flex-10

With HP Flex-10 onboard NICs, you can take a single 10Gb pipe and carve it up into four distinct FlexNICs, each of which appears as its own PCI function in hardware. Using VirtualConnect Server Profiles, you can then specify how much bandwidth you want each FlexNIC to have.

This allows customers in vSphere environments to partition bandwidth between different logical functions in hardware. For example, we could give 500Mb of bandwidth to management traffic, 2Gb to vMotion, 4Gb to iSCSI traffic, and 3.5Gb to virtual machine traffic per FlexNIC. In a FlexFabric environment, one of your four FlexNICs can assume the personality of a FlexHBA, which can act as a Fibre Channel HBA or a hardware iSCSI initiator.
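
A quick way to sanity-check a proposed carve-up is to verify it against the Flex-10 constraints. Below is a minimal sketch in plain Python, assuming the rules as I understand them (at most four FlexNICs per 10Gb port, speeds set in 100Mb increments, totals not exceeding the physical 10Gb); it’s an illustration, not an HP tool:

    # Sanity-check a Flex-10 carve-up against the constraints described above.
    MAX_FLEXNICS = 4      # FlexNICs per 10Gb physical port
    PORT_MBPS = 10000     # physical port speed in Mb/s
    STEP_MBPS = 100       # assumed allocation granularity

    def validate_carve(flexnic_speeds_mbps):
        """flexnic_speeds_mbps: list of per-FlexNIC speeds in Mb/s."""
        if len(flexnic_speeds_mbps) > MAX_FLEXNICS:
            raise ValueError("a Flex-10 port supports at most four FlexNICs")
        for speed in flexnic_speeds_mbps:
            if speed <= 0 or speed % STEP_MBPS:
                raise ValueError("%dMb is not a positive multiple of 100Mb" % speed)
        if sum(flexnic_speeds_mbps) > PORT_MBPS:
            raise ValueError("carve-up exceeds the 10Gb physical port")
        return True

    # The example above: 500Mb mgmt, 2Gb vMotion, 4Gb iSCSI, 3.5Gb VM traffic.
    validate_carve([500, 2000, 4000, 3500])  # sums to exactly 10Gb, so it passes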

Pros:

  • Bandwidth shaping occurs in hardware and is stored in the VC Profile, and therefore is OS independent. For example, FlexNICs can be used by a physical Windows blade.
  • Since the physical NIC function is capped, both ingress and egress traffic are limited by the speed of the FlexNIC you set in hardware.

Cons:

  • Requires Flex-10 or FlexFabric capable blades and interconnect modules.
  • FlexNIC speeds can only be dialed up or down while the blade is powered off.
  • When bandwidth utilization on one FlexNIC is low, another FlexNIC cannot utilize its unused bandwidth.

vSphere Network I/O Control

Introduced in vSphere 4.1, Network I/O Control (or NIOC) is designed to solve many of the same problems as Flex-10. How can I make sure all types of traffic have an appropriate amount of bandwidth allocated, without letting any single network function rob the others of throughput?

By enabling Network I/O Control on a vDistributed Switch (vDS), you can specify limits and shares for particular port groups or for host functions. For example, you can specify that vMotion traffic has a limit of 5Gbps and a share value of 100, while your VM traffic has a share value of 50 and your iSCSI traffic has a share value of 50. If all three functions attempted to push maximum throughput, the vMotion traffic would push 5Gbps (since vMotion holds 100 of the 200 total shares), while VM and iSCSI traffic would each get 2.5Gbps. A quick model of this arithmetic appears below.

An example screenshot, taken with a 1Gb (not 10Gb) NIC.
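
To make the shares arithmetic concrete, here is a toy model in plain Python. It is only a simplification of how an egress scheduler might divide a saturated uplink (proportional to shares, honoring per-function limits, and redistributing any surplus), not VMware’s actual implementation:

    def nioc_allocate(functions, link_mbps=10000):
        """functions: dict mapping name -> (shares, limit_mbps or None).
        Returns per-function bandwidth assuming every function is trying
        to saturate the link."""
        alloc = {}
        remaining = dict(functions)
        capacity = link_mbps
        while remaining:
            total_shares = sum(s for s, _ in remaining.values())
            fair = {n: capacity * s / total_shares
                    for n, (s, _) in remaining.items()}
            # Functions whose limit binds get exactly their limit; the
            # leftover bandwidth is re-divided among the rest by shares.
            capped = [n for n, (s, lim) in remaining.items()
                      if lim is not None and fair[n] > lim]
            if not capped:
                alloc.update(fair)
                break
            for n in capped:
                alloc[n] = remaining[n][1]
                capacity -= remaining[n][1]
                del remaining[n]
        return alloc

    # The example from the text: a 10Gb uplink, vMotion with 100 shares and
    # a 5Gbps limit, VM and iSCSI traffic with 50 shares each and no limit.
    print(nioc_allocate({"vMotion": (100, 5000),
                         "VM": (50, None),
                         "iSCSI": (50, None)}))
    # -> {'vMotion': 5000.0, 'VM': 2500.0, 'iSCSI': 2500.0}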

Pros:

  • Shares allow bandwidth on a single function to utilize the entire 10Gb pipe if the link is not oversubscribed.
  • You can change the speed of a function while the vDS is online and servicing traffic.
  • No special hardware required – can be utilized on rack-mount servers with standard 1Gb or 10Gb NIC interfaces.

Cons:

  • Requires vSphere Enterprise Plus, and requires use of the vDS – NIOC is not available with traditional vSwitches.
  • NIOC can only regulate egress traffic. Ingress traffic will not be affected by NIOC settings.

Conclusions

Both options provide similar capabilities but approach the problem in different ways. While a FlexNIC cannot dial itself up dynamically based on load, it can prevent ingress traffic from overwhelming other functions, whereas NIOC cannot.

The biggest problem with NIOC is that it is only available with the vDistributed Switch, making it challenging for many customers to implement. Not only do they need to be on the most expensive edition of vSphere, but they must then also implement the vDS, which many customers are not doing, or are avoiding intentionally due to the added complexity. However, VMware will most likely target only the vDS for future feature enhancements.

In HP Blade environments, it makes sense to utilize the HP VirtualConnect technology as it provides other benefits (MAC address virtualization, server profile migration, and now FlexFabric) beyond just the FlexNIC capability. However, if customers are utilizing competing Blade solutions, or traditional rack-mount servers, then NIOC provides new capabilities to them that they cannot get in hardware.

It is also possible to utilize both solutions in tandem. One could conceivably use FlexNICs to segregate certain types of traffic for security purposes (say, if your organization doesn’t allow traffic from different security zones on the same vSwitch) and then use NIOC to do bandwidth shaping. Another use case: if you want your management traffic to stay on a standard vSwitch but move all VM/vMotion/etc. traffic to a vDS, you can use two FlexNICs per pipe and run NIOC on the larger of the two.

September 9, 2010

View 4.5 is out – Premier includes vShield Endpoint

Filed under: view, vmware — ermac318 @ 8:40 pm

Yes, View 4.5 is out; you can read more about it here on the official View blog. But there’s something important I wanted to point out that a lot of people have so far overlooked, probably because I don’t think VMware ever announced they were doing this until now. At least this is the first I heard of it, but maybe that’s not saying much.

View 4.5 Premier edition (the good one) includes vShield Endpoint protection for all your desktop VMs. This means antivirus and malware protection offload is now included with the product at no additional charge. If you already own View Premier, you now get this for free. That’s a big deal!

Now that the bits are out and I can actually start posting stuff about it, hopefully you’ll see some more content around View 4.5 in the coming days.

August 31, 2010

VMworld session TA7805 – Tech Preview: Storage DRS

Filed under: vmworld — ermac318 @ 4:52 pm

Irfan Ahmad presenting; he presented last year on what became Storage I/O Control.

  • Storage DRS is a “stealth” project at VMware
  • Big problem is a VM admin doesn’t necessarily know what class of disk or how many spindles are behind a particular datastore.
  • Creates a new primitive called a “datastore group,” which is a new domain like a DRS cluster. Note from me: this will of course dovetail nicely into vCD service levels!
  • Storage DRS would automatically load balance across multiple datastores in a datastore group.
  • When you create a new VM, you place it on a datastore group and it does auto-placement. Takes both free space and I/O into consideration
  • ESX host will gather both free space and I/O stats to help balance for initial placement as well as Storage VMotions.
  • Cluster-datastore group relationship is many to many – a datastore group can span clusters and a cluster can have multiple datastore groups.
  • The recommendation is that all datastores in a group be visible from all hosts, but this is not enforced – much like existing datastore presentation in clusters. Storage DRS will make a best effort if you don’t follow this, though.
  • You can have Storage DRS affinity rules – keep these two VMs on different arrays or keep all disks from this VM on the same datastore.
  • Datastore Maintenance Mode! Say this datastore is going down and it’ll auto SVMotion all VMs off into other LUNs in the same pool.
  • You can of course add datastores to an existing group like adding a host to a cluster.
  • When you enable it, you can balance on Capacity alone or on Capacity and I/O (important for virtualized arrays like the EVA, where a group of LUNs shares the same performance pool).
  • You can set an I/O latency metric so that if latency gets above 15ms, it’ll move things to a datastore with lower latency. A really smart way to determine whether demand has outgrown a datastore’s I/O capacity. (A toy sketch of this heuristic appears after these notes.)
  • Balancing will only happen every few days, not every hour like a VMotion.
  • Initial placement will take into account both DRS and Storage DRS metrics, as well as how well connected the datastore is. For example, if a datastore is hooked up to all hosts, it will be preferred over a datastore connected to only one or two.
  • You can balance capacity based on keeping datastores all within a certain percentage of each other (in other words, prefer balance) or just keeping them below a certain percentage full (just try to avoid filling a datastore).
  • It even takes into account growth rate of thinly provisioned disks when determining a good placement! Wow, that’s smart. Weights powered-off VMs less since their I/O is generally 0 when off.
  • Prefers moving VMs with low Storage VMotion overhead (e.g., move smaller VMs before big ones, as DRS does).
  • Does load balancing by knowing that a more powerful datastore (one with more disks behind it) will see latency degrade more slowly than a less powerful one. This insight is used to model performance and make smart migration choices.
  • Also models metrics of individual virtual disks and feeds them into the model.
  • They did a man-vs.-machine test: they made 13 VMs with standard workloads and gave all the info to two storage admins at VMware, pitted against the Storage DRS algorithm. While IOPS were about the same between the experts and the algorithm, the algorithm beat them significantly on latency!
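
As a thought experiment, here is a toy sketch in plain Python of the latency-threshold heuristic as I understood it from the session – purely my own illustration, not VMware’s algorithm: when a datastore’s latency exceeds ~15ms, recommend moving its cheapest VM to the lowest-latency datastore in the group.

    LATENCY_THRESHOLD_MS = 15  # the threshold mentioned in the session

    def recommend_moves(datastores):
        """datastores: dict name -> {"latency_ms": float, "vms": {vm: disk_gb}}.
        Recommends (vm, source, destination) Storage VMotions for each
        datastore over the latency threshold, preferring small VMs since
        they are cheaper to move."""
        coolest = min(datastores, key=lambda d: datastores[d]["latency_ms"])
        moves = []
        for name, ds in datastores.items():
            if name != coolest and ds["vms"] and ds["latency_ms"] > LATENCY_THRESHOLD_MS:
                vm = min(ds["vms"], key=ds["vms"].get)  # smallest disk first
                moves.append((vm, name, coolest))
        return moves

    example = {"ds-fast": {"latency_ms": 4.0, "vms": {"vm-a": 40}},
               "ds-hot": {"latency_ms": 22.0, "vms": {"vm-b": 80, "vm-c": 20}}}
    print(recommend_moves(example))  # -> [('vm-c', 'ds-hot', 'ds-fast')]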

This was the best session I went to all day. This once again reminds me why VMware makes such cool products – they really understand problems and they have really smart people trying to solve them.

VMworld session DV7180 – ThinApp Futures

Filed under: thinapp, Uncategorized, vmware, vmworld — ermac318 @ 3:17 pm

Cool highlights from this session (I’m going to skip all the pre-ThinApp 4.6 stuff since it’s old news):

  • ThinApp (as of 4.6) now does transparent page sharing for applications (like in TS environments) just like ESX does for VMs. Pretty neat!
  • In ThinApp 4.6 you can “harvest” IE6 straight out of an XP system and run it on Win7. The only WinXP-specific file that gets put in the package is Shell32.dll; without it, icons and menus don’t work correctly. The resulting ThinApp performs like IE6 SP1.
  • ThinApp Converter, which is included in ThinApp 4.6, allows you to automatically build a package from an automated installer. You can take existing install packages (like those from Wise, Altiris, or LANDesk) and convert them to ThinApp packages easily with a simple command line and a blank VM.
  • The futures portion covered a new feature called “ThinApp Factory.” It’s a prototype that can consume RSS feeds of applications, automatically download each app’s installer, silently install and capture the package using ThinApp Converter, and then publish it to users or let users download it from an “App Store” kind of thing. This will probably feed into the Horizon stuff they showed this morning in the keynote.

VMworld session DV7907: View Reference Architecture

Filed under: Uncategorized, view, vmworld — ermac318 @ 12:34 pm

This session is about updating the reference architecture from View 4.0 to View 4.5.

  • Goal of View is to deliver a Consumer Cloud experience for the Enterprise.
  • Goal of the 4.0 reference architecture was to simulate a realistic desktop workload, validate 2,048 users.
  • Session turning into a pitch for UCS very quickly…
  • Now they’re off talking about RAWC, which is really old news. The new version of RAWC supports simulating workloads on Win7. It still can only simulate a preselected set of apps – no custom app load testing. You can learn more about RAWC on its YouTube channel.
  • View 4 reference architecture was run on UCS, CX4, vSphere 4.0 and WinXP SP3.
  • Just released – the Win7 optimization guide, which includes a BAT file that optimizes the VM for you! Already found it here.
  • Going through all the stuff they had to do to the Storage to make it perform. Wouldn’t it have been nice if it was on virtualized storage and you didn’t have to worry about RAID groups and all that crap? 🙂
  • In the View 4.0 reference architecture they got up to 16 VMs/core. I think this is super aggressive, and I don’t recommend customers size for this number.
  • Finally we get to the View 4.5 stuff! Talking about the new tiered storage capabilities of View 4.5.
  • They’re putting SSDs in each physical server… again, more Cisco-specific stuff. I think sticking SSDs in every server drives up the cost too much. Plus, wouldn’t it kill vMotion?
  • They did the View 4.5 test with a single non-persistent pool.
  • They see CPU being the bottleneck on optimized Win7 32-bit deployments… but they were only giving each Win7 VM 1GB of RAM.
  • During the Q&A, they were asked about HA/vMotion. This reference architecture doesn’t allow for vMotion or HA, and non-persistent pools require some sort of 3rd-party profile management to work. If you want to take a system down, you’ll have to do it after hours. I don’t like it! I’ll stick with SANs to give full functionality instead of neutering half the Enterprise functionality.

June 29, 2010

HP Client Virtualization Reference Architectures

Filed under: bladesystem, hp, vdi, view, vmware — ermac318 @ 2:36 pm

One of the great things that came out of HP Tech Forum last week was the official announcement of HP’s various reference designs for their VDI solutions. The hub at HP’s website is here; as of right now only the VMware View architecture PDF is up, but the XenDesktop ones are coming (one for XenServer, the other for Hyper-V). Some aspects of this reference design were announced all the way back at VMworld 2009 and are only now coming to fruition. This is mostly because of this bad boy:

HP P4800 BladeSystem SAN

These are the building blocks of the HP P4800 blade-based SAN solution. HP took their SAN/iQ software and loaded it directly onto blade servers, which then attach to external SAS-attached storage to create a 10Gbit iSCSI SAN inside the blade chassis. No 10Gbit core or switches required! The P4800 is designed specifically for VDI and is currently only sold for that kind of solution (although there’s nothing stopping you from using it as a general-purpose iSCSI SAN, it’s not recommended, because the I/O patterns of VDI and normal server workloads are very different).

This is HP’s flagship VDI design. Going forward there will be more reference designs for smaller deployments, going all the way down to Micro-Branch type deployments with just two servers and no dedicated shared storage but still full redundancy. All are based on the P4000 SAN.

So I’m not trying to make this an advertisement (although I do think it’s really cool); the reason I’m linking to this is that HP has done a ton of validation and testing around the solution and has provided some great numbers on per-user storage requirements for VDI environments. They’ve broken users down into Task, Productivity, and Knowledge workers, which on average will take 6, 9, and 22 IOPS respectively to support. This can be very demanding on your SAN, and the #1 hardware-design problem users run into is sizing their VDI environment for capacity rather than performance. These sizing guidelines should help anyone looking to architect a good VDI solution; a quick worked example follows.
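
To see why performance dominates the design, here is a quick back-of-the-envelope sizing sketch in plain Python. The worker mix and user count are made-up inputs, and a real design would also factor in read/write ratio, RAID write penalties, boot storms, and AV scans:

    # Average steady-state IOPS per user, from HP's reference numbers above.
    IOPS_PER_USER = {"task": 6, "productivity": 9, "knowledge": 22}

    def required_frontend_iops(user_mix):
        """user_mix: dict worker_type -> user count. Returns the average
        front-end IOPS the storage must sustain."""
        return sum(IOPS_PER_USER[kind] * count
                   for kind, count in user_mix.items())

    # Hypothetical 1,000-user deployment:
    mix = {"task": 400, "productivity": 400, "knowledge": 200}
    print(required_frontend_iops(mix))  # -> 10400 average IOPS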
