VM Junkie

October 13, 2010

vSphere Network I/O Control vs. HP VirtualConnect Flex-10 or FlexFabric

Filed under: bladesystem, hp, vmware, vSphere — ermac318 @ 8:48 am

In vSphere 4.1, VMware introduced a new feature called Network I/O Control. Many of its features overlap with those of HP VirtualConnect Flex-10 (and subsequently FlexFabric as well). This article compares and contrasts the two systems and their pros and cons.

HP Flex-10

With HP Flex-10 onboard NICs, you can take a single 10Gb pipe and carve it up into 4 distinct FlexNICs, each of which appears as its own PCI function in hardware. Using VirtualConnect Server Profiles, you can then specify how much bandwidth you want each FlexNIC to have.

This allows customers in vSphere environments to partition bandwidth between different logical functions in hardware. For example, on a single 10Gb port we could give 500Mb of bandwidth to management traffic, 2Gb to vMotion, 4Gb to iSCSI traffic, and 3.5Gb to virtual machine traffic, one FlexNIC per function. In a FlexFabric environment, one of your four FlexNICs can assume the personality of a FlexHBA, which can act as a Fibre Channel HBA or hardware iSCSI initiator.
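
To make the hard-cap behavior concrete, here is a minimal Python sketch of such a partition. It is purely illustrative (the names are made up and this is not HP's API): each FlexNIC speed is fixed in the VC profile, the four speeds must fit within the 10Gb pipe, and a FlexNIC never bursts past its cap even when the rest of the pipe sits idle.

    PORT_CAPACITY_MBPS = 10_000  # one Flex-10 physical port (10Gb)

    def validate_partition(flexnics):
        """flexnics maps a FlexNIC label to its configured speed in Mbps."""
        if len(flexnics) > 4:
            raise ValueError("a Flex-10 port carves into at most 4 FlexNICs")
        total = sum(flexnics.values())
        if total > PORT_CAPACITY_MBPS:
            raise ValueError("partition oversubscribes the port: %d Mbps" % total)

    def effective_throughput(configured_mbps, offered_mbps):
        """A FlexNIC speed is a hard cap in both directions: traffic never
        exceeds it, even when the rest of the pipe is idle."""
        return min(configured_mbps, offered_mbps)

    # The example partition from the text (Mbps); it exactly fills the pipe.
    partition = {"mgmt": 500, "vmotion": 2_000, "iscsi": 4_000, "vm": 3_500}
    validate_partition(partition)
    print(effective_throughput(partition["vmotion"], offered_mbps=9_000))  # 2000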

Pros:

  • Bandwidth shaping occurs in hardware and is stored in the VC Profile, and therefore is OS independent. For example, FlexNICs can be used by a physical Windows blade.
  • Since the physical NIC function is capped in hardware, both ingress and egress traffic are limited to the speed you set for the FlexNIC.

Cons:

  • Requires Flex-10 or FlexFabric capable blades and interconnect modules.
  • FlexNIC speeds can only be dialed up or down while the blade is powered off.
  • When bandwidth utilization on one FlexNIC is low, another FlexNIC cannot use its unused bandwidth.

vSphere Network I/O Control

Introduced in vSphere 4.1, Network I/O Control (or NIOC) is designed to solve many of the same problems as Flex-10: how can I make sure every type of traffic gets an appropriate amount of bandwidth, without letting any single network function rob the others of throughput?

By enabling Network I/O Control on a vDistributed Switch (vDS), you can specify limits and shares for particular port groups or host functions. You can specify that vMotion traffic has a limit of 5Gbps and a share value of 100, that your VM traffic has a share value of 50, and that your iSCSI traffic has a share value of 50. If all three functions were attempting to push maximum throughput, vMotion traffic would push 5Gbps (since vMotion holds 100 of the 200 total shares), while VM and iSCSI traffic would each get 2.5Gbps.

[Example screenshot of the NIOC settings, taken on a 1Gb (not 10Gb) NIC.]
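
For comparison, here is that share arithmetic as a small Python sketch. This is a first-order simplification, not VMware's scheduler (for one thing, shares only come into play when the link is actually contended), and the function and field names are invented for illustration.

    def nioc_allocation(functions, link_mbps=10_000):
        """Divide a fully contended uplink by shares, then apply limits."""
        total_shares = sum(f["shares"] for f in functions.values())
        allocation = {}
        for name, f in functions.items():
            fair_share = link_mbps * f["shares"] / total_shares
            allocation[name] = min(fair_share, f.get("limit", link_mbps))
        return allocation

    # The example from the text: all three functions pushing flat out.
    print(nioc_allocation({
        "vmotion": {"shares": 100, "limit": 5_000},  # 100/200 of 10Gb = 5Gbps
        "vm":      {"shares": 50},                   #  50/200 of 10Gb = 2.5Gbps
        "iscsi":   {"shares": 50},                   #  50/200 of 10Gb = 2.5Gbps
    }))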

Pros:

  • Shares allow a single function to utilize the entire 10Gb pipe if the link is not oversubscribed.
  • You can change the speed of a function while the vDS is online and servicing traffic.
  • No special hardware required – can be utilized on rack-mount servers with standard 1Gb or 10Gb NIC interfaces.

Cons:

  • Requires vSphere Enterprise Plus and use of the vDS – NIOC is not available with traditional vSwitches.
  • NIOC can only regulate egress traffic. Ingress traffic will not be affected by NIOC settings.

Conclusions

Both options provide similar capabilities but approach the problem in different ways. While a FlexNIC cannot dial itself up dynamically based on load, it can prevent ingress traffic from overwhelming other functions, whereas NIOC cannot.

The biggest problem with NIOC is that it is only available with the vDistributed Switch, which makes it challenging for many customers to implement. Not only do they need to be on the most expensive edition of vSphere, they must also implement the vDS, which many customers are not doing, or are avoiding intentionally because of the added complexity. However, VMware will most likely target the vDS for future feature enhancements.

In HP blade environments, it makes sense to utilize HP VirtualConnect, as it provides other benefits (MAC address virtualization, server profile migration, and now FlexFabric) beyond the FlexNIC capability alone. For customers on competing blade solutions or traditional rack-mount servers, however, NIOC provides capabilities they cannot otherwise get in hardware.

It is also possible to utilize both solutions in tandem. One could conceivably use FlexNICs to segregate certain types of traffic for security purposes (say, if your organization doesn't allow traffic from different security zones on the same vSwitch) and then use NIOC for bandwidth shaping. Another use case: if you want your management traffic to stay on a standard vSwitch but want to move all VM/vMotion/etc. traffic to a vDS, you can carve two FlexNICs out of each pipe and run NIOC on the larger of the two, as in the sketch below.
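
As a rough sketch of that second layout (hypothetical numbers, illustrative code only): VirtualConnect carves the 10Gb port into a 2Gb FlexNIC for the management vSwitch and an 8Gb FlexNIC for the vDS uplink, and NIOC shares then divide only the larger FlexNIC.

    def shares_split(link_mbps, shares):
        """Divide one uplink's capacity proportionally by NIOC shares."""
        total = sum(shares.values())
        return {name: link_mbps * s / total for name, s in shares.items()}

    # Two FlexNICs carved from one 10Gb port in the VC profile (Mbps).
    flexnics = {"mgmt_vswitch": 2_000, "vds_uplink": 8_000}
    assert sum(flexnics.values()) <= 10_000

    # Management is isolated in hardware; NIOC shapes only the vDS FlexNIC.
    print(shares_split(flexnics["vds_uplink"],
                       {"vmotion": 100, "vm": 50, "iscsi": 50}))
    # -> vmotion 4000.0, vm 2000.0, iscsi 2000.0 Mbps under full contention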
