VM Junkie

October 13, 2010

vSphere Network I/O Control vs. HP VirtualConnect Flex-10 or FlexFabric

Filed under: bladesystem, hp, vmware, vSphere — ermac318 @ 8:48 am

In vSphere 4.1, VMware introduced a new feature called Network I/O Control. Many of its features overlap with those of HP VirtualConnect Flex-10 (and, subsequently, FlexFabric as well). This article compares and contrasts the two approaches and their pros and cons.

HP Flex-10

With HP Flex-10 onboard NICs, you can take a single 10Gb pipe and carve it up into 4 distinct FlexNICs, each of which appears as its own PCI function in hardware. Using VirtualConnect Server Profiles, you can then specify how much bandwidth you want each FlexNIC to have.

This allows customers in vSphere environments to partition bandwidth between different logical functions in hardware. For example, in the above diagram we could give 500Mb of bandwidth to management traffic, 2Gb to vMotion, 4Gb to iSCSI traffic, and 3.5Gb to virtual machine traffic, one FlexNIC each. In a FlexFabric environment, one of your four FlexNICs can assume the personality of a FlexHBA, which can act as a Fibre Channel HBA or hardware iSCSI initiator.
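To make the carve-up concrete, here is a minimal sketch (plain Python, not any HP tool) that models splitting a 10Gb port into the four FlexNICs from the example above and checks that the requested speeds fit the physical pipe; the names and numbers are simply the ones used in this post.

```python
# Minimal sketch: model carving one 10Gb Flex-10 port into FlexNICs and
# verify the requested speeds fit the physical pipe. Illustrative only.

PORT_SPEED_GBPS = 10.0

flexnics = {
    "management": 0.5,   # 500Mb for management traffic
    "vmotion":    2.0,
    "iscsi":      4.0,   # on FlexFabric this one could instead be a FlexHBA
    "vm_traffic": 3.5,
}

total = sum(flexnics.values())
if total > PORT_SPEED_GBPS:
    raise ValueError(f"FlexNIC speeds ({total}Gb) exceed the {PORT_SPEED_GBPS}Gb port")

for name, speed in flexnics.items():
    # Each FlexNIC is a hard cap in hardware: it never borrows unused
    # bandwidth from its neighbours, even when they are idle.
    print(f"{name}: capped at {speed}Gb (ingress and egress)")
```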

Pros:

  • Bandwidth shaping occurs in hardware and is stored in the VC Profile, and therefore is OS independent. For example, FlexNICs can be used by a physical Windows blade.
  • Since the physical NIC function is capped, both ingress and egress traffic are limited to the speed of the FlexNIC you set in hardware.

Cons:

  • Requires Flex-10 or FlexFabric capable blades and interconnect modules.
  • Can only dial up or dial down FlexNIC speeds while the blade is powered off.
  • When bandwidth utilization on one FlexNIC is low, another FlexNIC cannot utilize its unused bandwidth.

vSphere Network I/O Control

Introduced in vSphere 4.1, Network I/O Control (or NIOC) is designed to solve many of the same problems as Flex-10. How can I make sure all types of traffic have an appropriate amount of bandwidth allocated, without letting any single network function rob the others of throughput?

By enabling Network I/O Control on a vDistributed Switch (vDS), you can specify limits and shares for particular port groups (illustrated above on the right) or host functions (illustrated above on the left). You can specify that vMotion traffic has a limit of 5Gbps and that it has a share value of 100. You can then specify that your VM traffic has a share value of 50, and your iSCSI traffic has a share value of 50. If all three functions were attempting to push maximum throughput, the vMotion traffic would push 5Gbps (since vMotion is given 100 out of 200 shares), and VM and iSCSI traffic would each get 2.5Gbps.

An example screenshot, taken with a 1Gb (not 10Gb) NIC.
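As a rough illustration of the share arithmetic above, the following sketch (a simplified model, not VMware's implementation) computes what each function would get on a fully contended 10Gb uplink, using the shares and the 5Gbps vMotion limit from the example.

```python
# Minimal sketch of NIOC-style share math on a single 10Gb uplink, assuming
# every function is trying to push line rate. Values come from the example
# above; this is an illustration, not the NIOC algorithm itself.

LINK_GBPS = 10.0

functions = {
    # name: (shares, optional limit in Gbps)
    "vmotion": (100, 5.0),
    "vm":      (50, None),
    "iscsi":   (50, None),
}

total_shares = sum(shares for shares, _ in functions.values())

for name, (shares, limit) in functions.items():
    # Shares divide the pipe proportionally under contention...
    allocation = LINK_GBPS * shares / total_shares
    # ...and an explicit limit caps the function even if shares allow more.
    if limit is not None:
        allocation = min(allocation, limit)
    print(f"{name}: ~{allocation:.1f} Gbps under full contention")
```

When the link is not oversubscribed, shares do not constrain anything: a single function can use the whole 10Gb pipe, which is the key behavioral difference from a hardware-capped FlexNIC.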

Pros:

  • Shares allow bandwidth on a single function to utilize the entire 10Gb pipe if the link is not oversubscribed.
  • You can change the speed of a function while the vDS is online and servicing traffic.
  • No special hardware required – can be utilized on rack-mount servers with standard 1Gb or 10Gb NIC interfaces.

Cons:

  • Requires vSphere Enterprise Plus, and requires use of the vDS – NIOC is not available with traditional vSwitches.
  • NIOC can only regulate egress traffic. Ingress traffic will not be affected by NIOC settings.

Conclusions

Both options provide similar capabilities but approach the problem in different ways. While a FlexNIC cannot dial itself up dynamically based on load, it can prevent ingress traffic from overwhelming other functions, whereas NIOC cannot.

The biggest problem with NIOC is that it is only available with the vDistributed Switch, making it challenging for many customers to implement. Not only do they need to be on the most expensive edition of vSphere, but they must also implement the vDS, which many customers have not done or deliberately avoid due to the added complexity. However, VMware will most likely target the vDS for future feature enhancements.

In HP Blade environments, it makes sense to utilize the HP VirtualConnect technology as it provides other benefits (MAC address virtualization, server profile migration, and now FlexFabric) beyond just the FlexNIC capability. However, if customers are utilizing competing Blade solutions, or traditional rack-mount servers, then NIOC provides new capabilities to them that they cannot get in hardware.

It is also possible to utilize both solutions in tandem. One could conceivably use FlexNICs to segregate certain types of traffic for security purposes (for example, if your organization doesn’t allow traffic from different security zones on the same vSwitch) and then use NIOC to do bandwidth shaping. Another use case: if you want your management traffic to stay on a standard vSwitch but move all VM/vMotion/etc. traffic to a vDS, you can use two FlexNICs per pipe and apply NIOC to the larger of the two.


June 29, 2010

HP Client Virtualization Reference Architectures

Filed under: bladesystem, hp, vdi, view, vmware — ermac318 @ 2:36 pm

One of the great things that came out of HP Tech Forum last week was the official announcement of HP’s various reference designs around their VDI solutions. The hub at HP’s website is here; as of right now only the VMware View architecture PDF is up, but the XenDesktop ones are coming (one for XenServer, the other for Hyper-V). Some aspects of this reference design were announced all the way back at VMworld 2009 and are only now coming to fruition. This is mostly because of this bad boy:

HP P4800 BladeSystem SAN

These are the building blocks of the HP P4800 blade-based SAN solution. HP took their SAN/iQ software and loaded it directly onto blade servers, which then attach to external SAS-attached storage to create a 10Gbit iSCSI SAN inside the blade chassis. No 10Gbit core or switches required! The P4800 is designed specifically for VDI and is currently only available for that kind of solution (and although there’s nothing stopping you from using it as a general-purpose iSCSI SAN, it’s not recommended, because the I/O patterns for VDI and normal server workloads are very different).

This is HP’s flagship VDI design. Going forward there will be more reference designs for smaller deployments, going all the way down to Micro-Branch type deployments with just two servers and no dedicated shared storage but still full redundancy. All are based on the P4000 SAN.

I’m not trying to make this an advertisement (although I do think it’s really cool); the reason I’m linking to this is that HP has done a ton of validation and testing around the solution and has provided some great numbers around per-user storage requirements for VDI environments. They’ve broken users down into Task, Productivity, and Knowledge workers, which on average take 6, 9, and 22 IOPS respectively to support. This can be very demanding on your SAN, and the #1 hardware-design-related problem users run into is sizing their VDI environments based on capacity rather than performance. These sizing guidelines should help anyone looking to architect a good VDI solution.
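As a back-of-the-envelope illustration of performance-based sizing, here is a small sketch using HP’s per-user IOPS figures quoted above; the user mix and the per-spindle IOPS number are made-up assumptions for the example, not part of HP’s reference architecture.

```python
# Rough performance-based sizing sketch using the per-user IOPS figures
# quoted above (Task 6, Productivity 9, Knowledge 22). The user counts and
# per-spindle throughput below are hypothetical, illustrative numbers.

IOPS_PER_USER = {"task": 6, "productivity": 9, "knowledge": 22}

# Hypothetical environment: adjust to your own user mix.
users = {"task": 400, "productivity": 300, "knowledge": 100}

total_iops = sum(IOPS_PER_USER[kind] * count for kind, count in users.items())

# Assume ~150 IOPS per spindle before RAID/write penalties -- an
# illustrative figure only; use your array vendor's numbers.
IOPS_PER_SPINDLE = 150
spindles = -(-total_iops // IOPS_PER_SPINDLE)  # ceiling division

print(f"Steady-state demand: {total_iops} IOPS")
print(f"Rough spindle count (performance-based): {spindles}")
```

The point of the exercise is simply that the spindle count falls out of the IOPS demand, not the capacity requirement, which is exactly the sizing mistake the reference architecture warns against.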

January 6, 2009

BIOS settings for VMware ESX servers

Filed under: bladesystem, esx, hp, vmware — ermac318 @ 11:32 am

Dave, AKA The VMguy, made a post about BIOS recommendations for IBM servers. I thought I’d make a few comments on his recommendations:

Dave recommends turning off “PowerNow” or “Cool’N’Quiet” and points to a KB article about how this may affect timekeeping. However, the KB article does state that this does not apply to ESX server, but only to the various hosted products such as VMware (GSX) Server, Workstation, etc.

I would recommend doing the opposite, at least for HP servers. I have seen situations in our lab where a server with power management turned off will use more power than necessary. For example, with HP Dynamic Power Savings off, one of my test blades showed this:

(more…)
