VM Junkie

October 13, 2010

vSphere Network I/O Control vs. HP VirtualConnect Flex-10 or FlexFabric

Filed under: bladesystem, hp, vmware, vSphere — Justin Emerson @ 8:48 am

In vSphere 4.1, VMware introduced a new feature called Network I/O Control. Many of its features overlap with those of HP VirtualConnect Flex-10 (and subsequently FlexFabric as well). This article compares and contrasts the two systems and their pros and cons.

HP Flex-10

With HP Flex-10 onboard NICs, you can take a single 10Gb pipe and carve it up into 4 distinct FlexNICs, which appear as their own PCI function in hardware. Using VirtualConnect Server Profiles, you can then specify how much bandwidth you want each FlexNIC to have.

This allows customers in vSphere environments to partition bandwidth between different logical functions in hardware. For example, out of one 10Gb pipe you could give 500Mb of bandwidth to management traffic, 2Gb to vMotion, 4Gb to iSCSI traffic, and 3.5Gb to Virtual Machine traffic, one FlexNIC each. In a FlexFabric environment, one of your four FlexNICs can assume the personality of a FlexHBA, which can act as a Fibre Channel HBA or hardware iSCSI initiator.
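As a sanity check on a carve-up like that, the FlexNIC speeds on a port simply have to fit inside the 10Gb link. Here's the arithmetic as a quick sketch (my own illustration, not an HP tool; the function name and limits are my assumptions):

```python
# Hypothetical sanity check for a Flex-10 carve-up (illustration only, not an HP tool).
# Flex-10 splits one 10Gb port into at most four FlexNICs; their speeds share the pipe.
def check_flexnic_plan(speeds_gb, link_gb=10.0):
    if len(speeds_gb) > 4:
        raise ValueError("Flex-10 carves at most 4 FlexNICs per 10Gb port")
    return sum(speeds_gb) <= link_gb

# The plan from the text: 0.5Gb management, 2Gb vMotion, 4Gb iSCSI, 3.5Gb VM traffic
print(check_flexnic_plan([0.5, 2, 4, 3.5]))  # exactly fills the pipe -> True
```

Remember that whatever a FlexNIC doesn't use is simply idle, so there's no benefit to leaving headroom unallocated.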


Pros:

  • Bandwidth shaping occurs in hardware and is stored in the VC Profile, and is therefore OS independent. For example, FlexNICs can be used by a physical Windows blade.
  • Since the physical NIC function is capped, both ingress and egress traffic are limited by the speed you set for the FlexNIC in hardware.


Cons:

  • Requires Flex-10 or FlexFabric capable blades and interconnect modules.
  • FlexNIC speeds can only be dialed up or down while the blade is powered off.
  • When bandwidth utilization on one FlexNIC is low, another FlexNIC cannot utilize the unused bandwidth.

vSphere Network I/O Control

Introduced in vSphere 4.1, Network I/O Control (or NIOC) is designed to solve many of the same problems as Flex-10. How can I make sure all types of traffic have an appropriate amount of bandwidth allocated, without letting any single network function rob the others of throughput?

By enabling Network I/O Control on a vDistributed Switch (vDS), you can specify limits and shares for particular port groups or host functions. For example, you can specify that vMotion traffic has a limit of 5Gbps and a share value of 100, that your VM traffic has a share value of 50, and that your iSCSI traffic has a share value of 50. If all three functions were attempting to push maximum throughput, vMotion traffic would push 5Gbps (since vMotion holds 100 out of 200 shares), while VM and iSCSI traffic would each get 2.5Gbps.
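The shares math is just proportional scheduling: under full contention each function gets shares/total of the physical link, clamped by any hard limit. Here it is as a sketch (my own illustration of the arithmetic, not a VMware API, and it ignores redistribution of capped excess to other functions):

```python
# Illustration of NIOC shares arithmetic under full contention (not a VMware API).
# Each function gets (shares / total_shares) * link speed, capped by its limit.
# Simplification: excess from a capped function is not redistributed here.
def nioc_allocation(functions, link_gbps=10.0):
    total = sum(shares for shares, _ in functions.values())
    alloc = {}
    for name, (shares, limit) in functions.items():
        fair = link_gbps * shares / total
        alloc[name] = min(fair, limit) if limit is not None else fair
    return alloc

# The example from the text: vMotion 100 shares with a 5Gbps limit, VM 50, iSCSI 50
print(nioc_allocation({
    "vMotion": (100, 5.0),
    "VM":      (50, None),
    "iSCSI":   (50, None),
}))  # vMotion gets 5.0Gbps, VM and iSCSI 2.5Gbps each
```

Note that shares only matter under contention; with an idle link, any single function can burst to the full pipe (up to its limit).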

An example screenshot, taken with a 1Gb (not 10Gb) NIC.


Pros:

  • Shares allow a single function to utilize the entire 10Gb pipe if the link is not oversubscribed.
  • You can change the allocation for a function while the vDS is online and servicing traffic.
  • No special hardware required – it can be utilized on rack-mount servers with standard 1Gb or 10Gb NIC interfaces.


Cons:

  • Requires vSphere Enterprise Plus and the use of the vDS – NIOC is not available with traditional vSwitches.
  • NIOC can only regulate egress traffic; ingress traffic is not affected by NIOC settings.


Both options provide similar capabilities but approach the problem in different ways. While a FlexNIC cannot dial itself up dynamically based on load, it can prevent ingress traffic from overwhelming other functions, whereas NIOC cannot.

The biggest problem with NIOC is that it is only available with the vDistributed Switch, making it challenging for many customers to implement. Not only do they need to be on the most expensive edition of vSphere, they must also implement the vDS, which many customers have not adopted or intentionally avoid due to the added complexity. However, VMware will most likely target the vDS for future feature enhancements.

In HP Blade environments, it makes sense to utilize the HP VirtualConnect technology as it provides other benefits (MAC address virtualization, server profile migration, and now FlexFabric) beyond just the FlexNIC capability. However, if customers are utilizing competing Blade solutions, or traditional rack-mount servers, then NIOC provides new capabilities to them that they cannot get in hardware.

It is also possible to utilize both solutions in tandem. One could conceivably use FlexNICs to segregate certain types of traffic for security purposes (maybe if your organization doesn’t allow traffic from different security zones on the same vSwitch) and then use NIOC to do bandwidth shaping. Another use case is if you want your Management Traffic to stay on a standard vSwitch, but move all VM/vMotion/etc traffic to a vDS, you can use two FlexNICs per pipe and use NIOC on the larger of the two.


June 29, 2010

HP Client Virtualization Reference Architectures

Filed under: bladesystem, hp, vdi, view, vmware — Justin Emerson @ 2:36 pm

One of the great things that came out of HP Tech Forum last week was the official announcement of HP’s various reference designs around their VDI solutions. The hub at HP’s website is here; as of right now only the VMware View architecture PDF is up, but the XenDesktop ones are coming (one for XenServer, the other for Hyper-V). Some aspects of this reference design were announced all the way back at VMworld 2009 and are only now coming to fruition. This is mostly because of this bad boy:

HP P4800 BladeSystem SAN

These are the building blocks of the HP P4800 Blade-based SAN solution. HP took their SAN/iQ software and loaded it directly on blade servers, which then attach to external SAS storage to create a 10Gb iSCSI SAN inside the blade chassis. No 10Gb core or switches required! The P4800 is designed specifically for VDI and is currently only sold for that kind of solution (although there’s nothing stopping you from using it as a general-purpose iSCSI SAN, it’s not recommended because the I/O patterns of VDI and normal server workloads are very different).

This is HP’s flagship VDI design. Going forward there will be more reference designs for smaller deployments, going all the way down to Micro-Branch type deployments with just two servers and no dedicated shared storage but still full redundancy. All are based on the P4000 SAN.

I’m not trying to make this an advertisement (although I do think it’s really cool); the reason I’m linking to it is that HP has done a ton of validation and testing around the solution and has provided some great numbers on storage requirements per user for VDI environments. They’ve broken users down into Task, Productivity, and Knowledge workers, which on average take 6, 9, and 22 IOPS respectively to support. This can be very demanding on your SAN, and the #1 hardware design problem users run into is sizing their VDI environment based on capacity rather than performance. These sizing guidelines should help anyone looking to architect a good VDI solution.
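To put those averages to work, start your sizing from aggregate IOPS, not gigabytes. A back-of-the-envelope calculator using HP's per-user numbers (my own sketch, not an HP sizing tool; the 20% headroom factor is my assumption to cover boot/login storms beyond steady-state averages):

```python
# Back-of-the-envelope VDI storage sizing from HP's per-user IOPS averages.
# Illustration only, not an HP sizing tool.
IOPS_PER_USER = {"task": 6, "productivity": 9, "knowledge": 22}

def required_iops(user_counts, headroom=1.2):
    # headroom is my assumption: a 20% buffer over steady-state averages
    base = sum(IOPS_PER_USER[kind] * n for kind, n in user_counts.items())
    return base * headroom

# A hypothetical 350-seat deployment
print(required_iops({"task": 200, "productivity": 100, "knowledge": 50}))
```

Run that against the IOPS your proposed array can actually deliver, and you'll quickly see why capacity-based sizing gets people into trouble.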

June 18, 2010

The latest jerk move by Oracle

Filed under: hp — Justin Emerson @ 9:15 am

I don’t have a very high opinion of Oracle as a company. Their products can be quite good and have a great following, but as a company they are very unfriendly to their customer base. You can go back and look at their spats with EMC and VMware, or their ridiculous licensing terms for any virtual environment other than OVM. But their recent acquisition of Sun has certainly been an eye-opener as to how they treat their customers:

  • You can no longer download drivers for their Sun servers unless you have a valid support contract (!!)
  • Oracle has ended a lot of programs Sun used to run in the education space that kept them in the game. I know plenty of EDU customers who are now looking at other options because Oracle has just priced themselves out.

The latest chapter, outlined on The Register today, is that Oracle has terminated the agreement with HP to support Solaris x86 on HP ProLiant servers. This is a big deal for customers who are looking to migrate off of Sun hardware, or who are already using HP hardware to run Solaris. And that’s exactly why Oracle terminated the agreement – it was driving people away from SunFire servers and to the competition who made a better product. The linked article mentions that Dell still has an agreement in place, but I think the writing’s on the wall for the OEM agreements with Dell and IBM, as well.

Bottom line: If you’re a Solaris customer and you want to have a future where you’re not forced into buying Oracle servers, storage, OS, and application stack just to be supported, maybe it’s time to start looking at Linux alternatives.

February 2, 2010

How to fix PCoIP VRAM issue with PowerShell, t5545 supports PCoIP now

Filed under: hp, powershell, thinclients, vdi, view, vmware — Justin Emerson @ 4:46 pm

Hi folks! Been a while, but I have a couple nifty tidbits here for you all:

A lot of folks have been having trouble getting VMware View’s PCoIP to work at higher resolutions. The cause is your Virtual Machine not having enough VRAM to drive those resolutions (unlike RDP, which presents a virtual display device, PCoIP uses the “physical” video card of your Virtual Machine). There have been several workarounds:

  • Reset (not reboot) each VM once after it’s deployed. When View clones your template it fixes the VRAM issue, but the VM is already powered on, so the setting change won’t take effect until the VM Monitor process restarts – which a reset does. This was outlined in this XTRAVIRT KB article.
  • The better fix is to set your template’s VRAM properly. This is easier said than done, because bugs in the vSphere Client’s video settings dialog box cause some settings (like the number of monitors) not to be written to the VMX file. So the only reliable way to do this has been to add the VM as an Individual Desktop in View Manager and let it apply the settings properly. This was outlined at this That’s my View blog post.

That last one is obviously ideal, but it’s a pain and a waste of time to go through creating the pool. Even worse, if you already made the mistake of building a pool out of the VM, View Manager prevents you from using the template VM as the source for an Individual Desktop!

So I wrote this PowerShell function that, when you feed it one or more VM objects, correctly sets each VM to have enough video RAM and display ports for 2 monitors, each running at the max resolution of 1920×1200.

function Set-VRamSize ($vms)
{
    $vmviews = $vms | Get-View

    $vmConfigSpec = New-Object VMware.Vim.VirtualMachineConfigSpec

    # NOTE: the four key/value assignments below were lost from this post as
    # recovered; these svga.* values are a reconstruction consistent with two
    # displays at 1920x1200 and the 36000KB of video RAM set further down.
    $line1 = New-Object VMware.Vim.OptionValue
    $line1.Key = "svga.numDisplays"
    $line1.Value = "2"
    $line2 = New-Object VMware.Vim.OptionValue
    $line2.Key = "svga.maxWidth"
    $line2.Value = "3840"
    $line3 = New-Object VMware.Vim.OptionValue
    $line3.Key = "svga.maxHeight"
    $line3.Value = "1200"
    $line4 = New-Object VMware.Vim.OptionValue
    $line4.Key = "svga.vramSize"
    $line4.Value = "36864000"

    $vmConfigSpec.ExtraConfig += $line1
    $vmConfigSpec.ExtraConfig += $line2
    $vmConfigSpec.ExtraConfig += $line3
    $vmConfigSpec.ExtraConfig += $line4

    # Edit the VM's existing video card device (device key 500)
    $deviceSpec = New-Object VMware.Vim.VirtualDeviceConfigSpec
    $videoCard = New-Object VMware.Vim.VirtualMachineVideoCard
    $videoCard.NumDisplays = 2
    $videoCard.VideoRamSizeInKB = 36000
    $videoCard.Key = 500
    $deviceSpec.Device = $videoCard
    $deviceSpec.Operation = "edit"

    $vmConfigSpec.DeviceChange += $deviceSpec

    # Push the reconfiguration to each VM
    foreach ($vm in $vmviews) {
        $vm.ReconfigVM_Task($vmConfigSpec) | Out-Null
    }
}

# Usage (any VM name will do): Set-VRamSize (Get-VM "MyViewTemplate")
The script is also available on my Sky Drive.

In related news, the HP t5545 (along with a slew of other Linux-based Thin Clients) now supports PCoIP using the new client that you can download here.

September 14, 2009

New HP t5545 Addon for View 3.1

Filed under: hp, thinclients, view — Justin Emerson @ 12:52 pm

On Friday HP posted a new version of the View Client for the t5545 Thin Client. You can download it and view installation instructions here. I’m out of the office at a customer site this week so I haven’t been able to play with it, but the release notes are pretty boring (basically just adding official View 3.1 support).

When I get a chance to play with it, I will post more, but feel free to check it out and post your results!

July 10, 2009

Using the HP t5545 Thin Client with VMware View

Filed under: hp, thinclients, vdi, view, vmware — Justin Emerson @ 2:22 pm

This is my first in a series of articles outlining how to setup various Thin Clients with VMware View.

One of my favorite thin clients as of late is the HP t5545. It’s a very inexpensive thin client with a lot of great features – local web browser, full VMware View support, Multimedia redirection, USB redirection, and more. Up until recently, however, getting all that to work was a bit of a challenge. (more…)

July 2, 2009

Using HP RGS with VMware View 3.1

Filed under: rgs, vdi, view, vmware, vSphere — Justin Emerson @ 3:26 pm

So as I’m sure many people are aware, VMware will be shipping a protocol in the near future (with View 4, most likely) called PC-over-IP. In the meantime, however, VMware View 3.1 has included support for HP Remote Graphics Software, which provides a similar experience today (albeit not for free). VMware hasn’t come out with tiering or pricing for View 4 yet, and since customers today may require this kind of functionality, I think it’s wise for people to test out how well it works.

First, a disclaimer. In the release notes for View 3.1, VMware states the following policy:

View Client can now use HP Remote Graphics Software (RGS) as the display protocol when connecting to HP Blade PCs, HP Workstations, and HP Blade Workstations. The connection is brokered by View Manager. HP RGS is a display protocol from HP that allows a user to access the desktop of a remote computer over a standard network. VMware View 3.1 supports HP RGS Version 5.2.5. VMware does not bundle or license HP RGS with View 3.1. Please contact HP to license a copy of HP RGS software version 5.2.5 to use with View 3.1. This release does not support HP RGS connections to virtual machines.

So be aware that VMware does not support running RGS Senders in Virtual Machines. That said, HP does support it, and prior to View 3.1’s launch the plan was to have this fully supported by both parties. From what I heard, last-minute issues caused them to push this off. As a result, if you try to create a pool using VirtualCenter VMs (i.e. an Individual Desktop with VirtualCenter VM selected, any kind of Automated Desktop Pool, or a Manual Desktop Pool with VirtualCenter VM selected), RGS does not show as an available default protocol. (more…)

January 6, 2009

BIOS settings for VMware ESX servers

Filed under: bladesystem, esx, hp, vmware — Justin Emerson @ 11:32 am

Dave, AKA The VMguy, made a post about BIOS recommendations for IBM servers. I thought I’d make a few comments on his recommendations:

Dave recommends turning off “PowerNow” or “Cool’N’Quiet” and points to a KB article about how this may affect timekeeping. However, the KB article does state that this does not apply to ESX server, but only to the various hosted products such as VMware (GSX) Server, Workstation, etc.

I would recommend doing the opposite, at least for HP servers. I have seen situations in our lab where a server with power management turned off will use more power than necessary. For example, with HP Dynamic Power Savings off, one of my test blades showed this:


December 16, 2008

Using the Virtual Serial Port to access the Service Console

Filed under: esx, hp, vmware — Justin Emerson @ 10:44 pm

Like a lot of folks out there, I spend most of my time accessing servers remotely. A lot of this is because of laziness, but it’s also because sometimes you’re dealing with a server in a totally different location.

I have to deal with servers in two different labs, so remote access is critical for me. When all my servers are blades, this is pretty easy, but not all rack-mount servers have the same kind of remote accessibility that blades do. I’ll use HP servers as an example. HP’s c-Class blades all have iLO Blade Edition, which includes remote graphical console support, but their DL and ML servers don’t have a graphical iLO console unless you upgrade to iLO Advanced.

I had an ESX server (a demo system for when I go to customer sites on sales calls) that I needed to have access to internally, but I couldn’t have it both in our DMZ and our corporate network. To get around this I plugged the iLO into the protected network, since you can’t get back into the iLO from within the OS. This seemed okay to our security guys, and they let me go ahead. However, I needed to figure out how to get something useful out of the iLO port, which led me to some forum posts and the following solution:

Step 1: Configure the Server BIOS

Enter the Server’s BIOS and under System Options, view the Virtual Serial Port option. Take note of the COM port listed.

If you want to be able to access the BIOS of the server remotely from the serial port, on the main BIOS menu select “BIOS Serial Console & EMS” option, and tell it to use the COM port noted in the previous step.

Step 2: Modify the ESX Server’s inittab

Once ESX Server is booted and functional on the system, log in as the root account and edit the file /etc/inittab using a text editor such as nano. If the Virtual Serial Port of the system was assigned to COM2, add the following lines to the end of the file:

# Add virtual serial console

7:2345:respawn:/sbin/agetty ttyS1 115200

Note: if your COM port is COM1, use ttyS0 instead of ttyS1. Similarly, ttyS2 is COM3, and so on.
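In other words, the mapping is just an off-by-one: COMn corresponds to ttyS(n-1). As a trivial sketch (my own helper, nothing vendor-specific):

```python
# COMn maps to ttyS(n-1) on Linux-style consoles such as the ESX Service Console.
def com_to_tty(com_number):
    if com_number < 1:
        raise ValueError("COM ports are numbered from 1")
    return "ttyS%d" % (com_number - 1)

print(com_to_tty(2))  # -> ttyS1, matching a Virtual Serial Port on COM2
```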

Reboot the ESX server.

You can now access an ESX console from the Virtual Serial Port, either by using the iLo web page’s Java applet, or by using an SSH client to connect to the iLo port, logging in, and typing “VSP” to access the serial port (my preferred method).

NOTE: This serial port connection will not accept a root logon! Much like SSH, root logon is disabled by default on this interface. Use a separate user account and use su to access the root account.

These instructions are for HP servers specifically, but any server brand with a Virtual Serial Port should have a similar solution.
