VM Junkie

January 29, 2009

Balancing LUN Paths on your ESX Hosts with PowerShell

Filed under: esx, powershell, vmware — ermac318 @ 4:28 pm

After watching this video that was posted by the VI Toolkit team, I immediately thought of this script that was posted quite a while back on Yellow Bricks. I decided to recreate that script in PowerShell and, while I was at it, expand it so it would modify all nodes in a cluster at once. As such I wrote the following script; please feel free to give feedback or make modifications! You can download it from my SkyDrive or check it out below:

(more…)
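
The gist of the approach is something like this (a simplified sketch, not the full script; it assumes a VI Toolkit build that provides the Get-ScsiLun, Get-ScsiLunPath, and Set-ScsiLunPath cmdlets, and that your LUNs use the Fixed policy):

# Sketch: spread preferred paths across HBAs for every host in a cluster.
# The cluster name is hypothetical; adjust for your environment.
$cluster = Get-Cluster "Production"
foreach ($vmhost in ($cluster | Get-VMHost)) {
    $i = 0
    foreach ($lun in ($vmhost | Get-ScsiLun -LunType disk)) {
        $paths = @($lun | Get-ScsiLunPath)
        if ($paths.Count -gt 1) {
            # Round-robin the preferred path so all LUNs don't favor one HBA.
            $paths[$i % $paths.Count] | Set-ScsiLunPath -Preferred:$true
            $i++
        }
    }
}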


Installable View Client hangs after connection

Filed under: vdi, view, vmware — ermac318 @ 11:13 am

I had been running into this problem during the View Manager 3 beta, and it was still hanging around, albeit in a less annoying fashion, in the final version. I finally figured out what was causing it: the ThinPrint services.

After connecting to your virtual desktop using the installable View Client, you’ll log in, the system will start its post-logon processes, and then it will just sit there and hang, sometimes for a good 15 to 20 seconds. After that, everything will be fine. Why?

During this period, the ThinPrint services in the VM start up and try to connect to all the local printers on your machine. If you’re connecting from a system with three or more local printers (local meaning not network print queues; printers on TCP/IP ports count as local), this process can take a very long time. It gets even worse if the locally attached printer is a TCP/IP printer that is unreachable (e.g. your home printer, while you’re in the office).

The solution? Delete unnecessary printers on the system you’re connecting from, or skip the ThinPrint component when you install the View Client.
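
If you want to see which printers your client considers local (and therefore which ones ThinPrint will try to bring in at logon), a quick WMI query works; this is just a sketch to illustrate the idea:

# List "local" printers, including TCP/IP-port printers, on the client machine.
# These are the ones the ThinPrint service will try to connect after logon.
Get-WmiObject Win32_Printer |
    Where-Object { $_.Local } |
    Select-Object Name, PortName, DriverName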

January 28, 2009

Why you should install PowerShell on your Windows VM Templates

Filed under: esx, microsoft, powershell, vmware — ermac318 @ 11:19 am

So I was up last night when VI Toolkit 1.5 was released, and I was so excited that I updated my demos this morning to use it. The company I work for does Technical Briefings for current and potential customers, and I just demonstrated the new Storage VMotion capabilities of Move-VM in front of 100+ people. Worked like a champ! Thanks, VI Toolkit team.

But what I’m most excited about is this command in the 1.5 Toolkit:

Invoke-VMScript

Executes the specified PowerShell script in the guest OS of each of the specified virtual machines. The virtual machines must be powered on and have PowerShell and VM Tools installed. In order to authenticate with the host or the guest OS, one of the HostUser/HostPassword (GuestUser/GuestPassword) pair and HostCredential (GuestCredential) parameters must be provided.

This, I think, uses the VIX framework in the back end to run a PowerShell script inside a VM. This is unbelievably powerful. I can’t wait to start testing it out. I highly recommend everyone check out the release notes for the new version!
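
Just to illustrate (the VM name is made up, and I’m going from the cmdlet help for the parameter names), a call looks something like this:

# Run a PowerShell snippet inside the guest. Requires VM Tools and PowerShell
# in the guest, which is exactly why you want PowerShell in your templates.
$hostCred  = Get-Credential   # ESX host credentials
$guestCred = Get-Credential   # guest OS credentials
Invoke-VMScript -VM (Get-VM "XP-Desktop-01") `
    -ScriptText 'Get-Service | Where-Object { $_.Status -eq "Running" }' `
    -HostCredential $hostCred -GuestCredential $guestCred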

January 27, 2009

Now that’s what I call multipathing!

Filed under: esx, vmware — ermac318 @ 5:08 pm

Saw something really interesting at a customer last week. The customer was using a storage vendor I didn’t have a lot of experience with (Compellent), and they said they were using software iSCSI as a backup to Fibre Channel because they only had one FC hookup on their storage. I told them this was probably a pretty bad idea, because if they did have to fail over they would see some serious performance impact. What was most surprising to me, however, was that what they had set up actually worked. Look at this output:

Disk vmhba2:3:18 /dev/sdd (1048576MB) has 2 paths and policy of Fixed
FC 16:0.0 10000000c95ffcca<->5000d310000abd0b vmhba2:3:18 On active preferred
iScsi sw iqn.1998-01.com.vmware:mtahost31-4224a6b9<->iqn.2002-03.com.compellent:5000d310000abd03 vmhba32:0:18 On
Disk vmhba2:0:0 /dev/sdb (1048576MB) has 2 paths and policy of Fixed
FC 16:0.0 10000000c95ffcca<->5000d310000abd17 vmhba2:0:0 On active preferred
iScsi sw iqn.1998-01.com.vmware:mtahost31-4224a6b9<->iqn.2002-03.com.compellent:5000d310000abd10 vmhba32:2:0 On
Disk vmhba2:3:19 /dev/sdf (1048576MB) has 2 paths and policy of Fixed
FC 16:0.0 10000000c95ffcca<->5000d310000abd0b vmhba2:3:19 On active preferred
iScsi sw iqn.1998-01.com.vmware:mtahost31-4224a6b9<->iqn.2002-03.com.compellent:5000d310000abd03 vmhba32:0:19 On
Disk vmhba2:0:1 /dev/sdc (512000MB) has 2 paths and policy of Fixed
FC 16:0.0 10000000c95ffcca<->5000d310000abd17 vmhba2:0:1 On active preferred
iScsi sw iqn.1998-01.com.vmware:mtahost31-4224a6b9<->iqn.2002-03.com.compellent:5000d310000abd10 vmhba32:2:1 On

Now I have no idea if this is a supported configuration (my money is on not supported), but the customer said they had done failover testing before and it kept chugging just fine. Now that’s multipathing!
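
Incidentally, if you’d rather pull this kind of path report from the VI Toolkit than from the service console, something along these lines should do it (a sketch, assuming a Toolkit build with the Get-ScsiLun and Get-ScsiLunPath cmdlets; the hostname is made up):

# Print each disk LUN, its policy, and its paths, roughly like the output above.
foreach ($lun in (Get-VMHost "esx01.example.com" | Get-ScsiLun -LunType disk)) {
    "{0} ({1})" -f $lun.CanonicalName, $lun.MultipathPolicy
    $lun | Get-ScsiLunPath | Select-Object Name, State, Preferred
}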

January 20, 2009

VMware View article at BrianMadden

Filed under: thinapp, vdi, vmware — ermac318 @ 1:27 pm

Roland van der Kruk over at BrianMadden.com has been writing up some articles about VMware View; his latest one has some conclusions at the end which I wanted to take issue with:

  1. Providing linked clones to users that have full control to the system, resulting in user initiated changes to the OS like copying data, removing it, etc., will end up in a system disk that eventually has a bigger size than if the OS was provided to the user without the linked cloning technology.
    This is not the case. The way a linked disk in VMware ESX works is that it can only grow to the maximum size of the disk it was based on. So a fully provisioned 20GB system disk is 20GB, and the maximum size of the thin-provisioned version is also 20GB. If Roland would like to test this, just use IOMeter to write a giant file to the system disk and watch the size increase and then stop.
    At the same time, you can specify at what size a VM is automatically recomposed, so if it grows over, say, 10GB, you can automatically recompose it.
  2. If a View Administrator decided to refresh the OS because he added some hotfixes or extra software, all user modifications to the OS are deleted. In fact the System Disk is simply deleted and a new linked clone is generated off the new state of the ‘master image’.
    This is correct, but it is clearly outlined in the documentation. The purpose of the User Data Disk (or just using roaming profiles of some kind) is to keep the user’s profile persistent between recompositions. As for how you handle applications, see below.
  3. What ‘Persistent desktop’ actually means is that the state of a disk provided by a View Administrator is ‘persistent’. A desktop can be made persistent by recomposing or (scheduled) refreshing the deployed linked clones, resulting in exactly the state that a View Administrators expects it to be. From the view of end users using Linked Cloned Desktops, no persistence can actually be guaranteed, because all user actions will be undone by ‘Desktop Refresh’ or ‘Desktop Recompose’.
    This is only half-true. Changes made in user space are preserved across recomposition: if it’s a change made to HKEY_CURRENT_USER, or to anything in Documents and Settings, it will persist across recompositions or refreshes.
  4. As soon as user modifications to the System Disk need to be persistent, no linked clone technology should be used. Instead, 1-on-1 desktops need to be provided, in which deployment tools like SCCM or Altiris will have to be available to maintain the system.
    This exact reason is why you cannot purchase View Composer without purchasing ThinApp. To make sure applications, and their settings, persist between rebuilds, you need to use ThinApp (or at least some kind of application virtualization technology).

View Composer deployments do not make sense unless the base operating system is disposable. You have 4 components in any desktop:

[Figure: the “Desktop Blob” diagram showing user data, applications, hardware, and the OS]

User data is handled by roaming profiles or the User Data Disk. Applications need to be either disposable (you can throw them out because they are stateless) or delivered through app virtualization. The hardware is just VM virtual hardware, which is always identical or easily changed. If those three things are taken care of, the OS itself is disposable, and a desktop recomposition should have no negative effects.

I will agree that View Composer is not perfect, but I think it comes closest to the best possible solution available so far. That being said, you can ignore the recomposition options entirely and just use thin-provisioned desktops, but they will grow over time simply due to the way Windows writes to the filesystem.

On the upside, there is a really great session from VMworld on VDI performance and best practices, which includes modifications you can make to a default Windows XP installation to reduce the amount of space the OS uses at idle. Check out slide 39 of the PDF.

January 15, 2009

Blue Screen during W2K3 Setup in a Hyper-V Virtual Machine

Filed under: hyper-v, microsoft — ermac318 @ 3:50 pm

I’m planning on getting Hyper-V Certified (MS exam 70-652 if you’re curious) so I’ve been using Hyper-V in the lab a lot and I ran into this problem:

After the initial text-based portion of Windows 2003 Setup, the system would repeatedly blue screen. It turns out you have to remove the virtual network adapter and replace it with the legacy network adapter; then, after installing the Hyper-V integration services, you can switch it back.

This seems like something the New Virtual Machine Wizard should handle for me. XenServer does this with the hard drive controller: before you install XenTools it uses IDE, and after they are installed your boot controller switches over to the paravirtualized SCSI driver. VMware does something similar with the vLance driver. Why can’t MS do the same, especially for an OS that’s supposed to be as ubiquitous as Windows Server? It’s their OS!

January 12, 2009

ESX 3.5 Update 3 now on Microsoft’s SVVP List

Filed under: esx, microsoft, vmware — ermac318 @ 11:33 am

Hot on the heels of Update 2’s new certified configurations comes Update 3’s inclusion on the MS-approved Virtualization Platform list. Now we’re just waiting on Mike D’s promised ESXi support!

I’m really glad Microsoft finally got its act together and started to provide some kind of support for hypervisors other than its own, but there is something I want to point out to everyone who says the SVVP program is “just as good” as internal Microsoft support:

Additionally, for vendors with whom Microsoft has established a support relationship that covers virtualization solutions, or for vendors who have Server Virtualization Validation Program (SVVP) validated solutions, Microsoft will support server operating systems subject to the Microsoft Support Lifecycle policy for its customers who have support agreements when the operating system runs virtualized on non-Microsoft hardware virtualization software. This support will include coordinating with the vendor to jointly investigate support issues. As part of the investigation, Microsoft may still require the issue to be reproduced independently from the non-Microsoft hardware virtualization software. Where issues are confirmed to be unrelated to the non-Microsoft hardware virtualization software, Microsoft will support its software in a manner that is consistent with support provided when that software is not running together with non-Microsoft hardware virtualization software.

From Support policy for Microsoft software running in non-Microsoft hardware virtualization software, emphasis mine.

That means that, should Microsoft so desire, they can still require you to reproduce your issue on physical hardware, even if you’re running on an SVVP-approved hypervisor. So what was the point of the SVVP program again? Guess it means V2P migrations if you want to get support from MS!

Update:

Some great follow-up from Mike D in the comments, who took issue with what I had said about being forced to run your workload on a physical machine during troubleshooting. If you have a problem, the workflow goes like this:

  • Customer calls MS.
  • MS can’t solve the problem so they call VMware.
  • VMware and MS together can’t solve the problem so you V2P to replicate.
  • Now let’s say someone in MS tech support isn’t trained right, or just decides to be anti-competitive (both have happened), and hangs up on you once they find out you’re running VMware. All you have to do is call VMware support (or whoever you get your VMware support through), and they in turn can call MS through the SVVP program. MS can’t hang up on the VMware support team (VMware support won’t let them, and they come in at a deeper level of the support organization). That’s the recourse: if you get hung up on at MS, just call VMware and VMware will bring all of the parties together.

That’s really great info, and I will definitely be getting the word out to customers on how to deal with cranky Microsoft support people! Thanks, Mike.

January 11, 2009

Performing V2P migrations

Filed under: vmware — ermac318 @ 2:12 pm

So lots of people perform P2V migrations. In fact, there have been lots of third-party products (such as those from LeoStream and PlateSpin) and first-party products (VMware’s original P2V Assistant and now vCenter Converter), as well as community-supported free products (such as Mike L’s Ultimate P2V).

Back when I did my first server consolidation, we used Ultimate P2V, since at the time VMware’s P2V Assistant wasn’t free. (BTW VMware, making Converter free was a VERY smart move for ESX adoption!) One of the interesting things I noticed in the original fix-vmscsi BartPE plugin was the ability to inject not only the VMware SCSI controller drivers into a Windows installation, but also drivers for some standard HP and Dell servers. I actually used these to perform a P2P migration, but never a V2P, though there was no reason I couldn’t have.

Now VMware has (uncharacteristically) posted some information in a whitepaper on performing V2P migrations. I say “uncharacteristically” because VMware has never talked much about V2P; as far as I know, this is the first time any official V2P information has been posted by them.

Their strategy revolves around using Sysprep along with the [SysprepMassStorage] section in sysprep.inf to make the system bootable post-clone. I actually think the Ultimate P2V approach (using fix-vmscsi post-clone) is more efficient. I know that, now that vCenter Converter is free and has a lot of great features, Ultimate P2V has kind of gone out of style, but as you can see from this, it still has some great uses!
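
For reference, the relevant sysprep.inf pieces look roughly like this (a sketch from memory of the Windows deployment docs, so double-check against the whitepaper):

; sysprep.inf (excerpt) - sketch from memory, verify against the whitepaper
[Sysprep]
BuildMassStorageSection=Yes    ; lets "sysprep -bmsd" build the section below

[SysprepMassStorage]
; "sysprep -bmsd" fills this in with PnP-ID-to-INF mappings, so the cloned
; image can boot from a storage controller the source machine never used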

January 6, 2009

BIOS settings for VMware ESX servers

Filed under: bladesystem, esx, hp, vmware — ermac318 @ 11:32 am

Dave, AKA The VMguy, made a post about BIOS recommendations for IBM servers. I thought I’d make a few comments on his recommendations:

Dave recommends turning off “PowerNow” or “Cool’n’Quiet” and points to a KB article about how this may affect timekeeping. However, the KB article does state that this does not apply to ESX Server, but only to the various hosted products such as VMware (GSX) Server, Workstation, etc.

I would recommend doing the opposite, at least for HP servers. I have seen situations in our lab where a server with power management turned off will use more power than necessary. For example, with HP Dynamic Power Savings off, one of my test blades showed this:

(more…)

January 5, 2009

Why ThinApp is revolutionary from a security perspective

Filed under: thinapp, vmware — ermac318 @ 1:28 pm

Hi everyone, I hope you all had a great holiday break and a happy new year. I took some time off myself, so in hindsight maybe I should have started the blog post-holiday! But them’s the breaks.

Today I wanted to make a post about ThinApp, and one reason why I think it’s so incredibly important for the desktop; one that few people seem to be pointing out. I’m going to take a leap of faith at this point and assume you’re somewhat familiar with ThinApp. If you aren’t, I highly recommend looking at some of the product info at the previous link, or the product documentation. There’s also an official ThinApp Blog at VMware.

Among ThinApp’s many strengths are surprise, fear, application/OS independence, easy application updates and regression testing, and app compatibility. But I think one of the most important things about ThinApp is that, somewhat by accident, it changes the permission paradigm for applications talking to each other. Traditionally, in Windows and pretty much any OS, if you’re running an application, it has access to the other applications you have running. The simplest example of this is cutting and pasting text from one program to another: every program has access to the clipboard. What if my computer gets infected with a trojan, and that trojan constantly monitors my clipboard for anything resembling an SSN or a credit card number?

There are also much more sinister examples. It’s possible for any application you run to overwrite the memory of any other application you’re running (assuming neither is a system process or running in another user context). This can be great: it allows applications to talk to each other directly and enables some cool integration between different applications. But the security implication is that every app has full permission to every other app by default, with no real way to deny it. This is why we run into many of the security problems (and, for that matter, compatibility and stability problems) on modern desktops. When we see an application vulnerability today, the scope is usually everything the application has access to. A browser remote code execution vulnerability can’t take over the entire machine (because it wasn’t the OS itself that was compromised), but there’s plenty of damage it can do, because for an application, the whole user session is the sandbox.

ThinApp changes this because each application gets its own Virtual OS (VOS) sandbox to play in. Depending on how you build your packages, you can make it so applications can’t see each other, or even each other’s files. You can even isolate certain parts of the underlying OS from the application so it has no visibility into those areas. This is important because we are now limiting the scope of infection in the case of a security problem: the VOS is the sandbox, as opposed to the entire user session. Even better, if you build your packages correctly, an infection can be removed simply by deleting the ThinApp sandbox.
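
To make that concrete, here is roughly what those knobs look like in a package’s package.ini (parameter names from memory of the ThinApp docs, so treat the details as approximate):

; package.ini (excerpt) - a sketch; check the ThinApp docs for exact parameters
[BuildOptions]
SandboxName=MyApp                ; per-app sandbox; delete it to wipe app state
DirectoryIsolationMode=WriteCopy ; app sees the OS, but its writes land in the sandbox
;OptionalAppLinks=OtherApp.exe   ; AppLink: explicitly grant visibility into another package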

The other implication of this is that ThinApp changes permissions between applications to deny-by-default. By bundling each app in its own ThinApp package, apps can’t talk to each other anymore, potentially preventing many malicious or compromised applications from doing their dastardly deeds. Now, I say this changes things to deny by default, not just deny, because with AppLink what we’re essentially doing is granting permission for one application to talk to another.

This is really an important concept to grasp, I think, because from a security perspective this can mean a whole new world of permission granularity. Do I ever need this app to talk to this other app? If not, why do I leave that hole open for someone to potentially exploit? ThinApp reduces your exposure at the desktop in a way that just hasn’t been possible without application virtualization.
