Where I work, I spend a lot of time talking to customers about “cloud,” whatever the hell that means these days. Unfortunately, the definition has gotten so polluted lately (and those Microsoft commercials aren’t helping) that most people don’t know what the heck it means anymore. But what everyone can agree on is that no matter what kind of cloud you’re talking about, unless it’s a private cloud of some sort, you’re giving some measure of control over, and responsibility for, your data and your applications to someone else. This sounds great on paper: it’s like outsourcing my IT! We know how much business folks like outsourcing. It’s so popular it got its own sitcom!
But recently there have been a couple of high-profile events that should give us all pause before recommending people move everything to the cloud. A while back it was the story of a man whose entire Flickr account was accidentally deleted by a support technician, and then apparently Flickr had no backups. Fortunately the story had a happy ending (they restored his stuff), but I’ll bet you that man isn’t storing the only copies of his photos in Flickr anymore. Then there was word that Gmail went down and deleted the mailboxes of more than 100,000 users. Restoration is still ongoing. What prompted me to write this blog post, though, was the nail in the coffin of Danger.
You may remember Danger (now part of Microsoft) was in the news before, when they performed what was later reported as an array firmware upgrade without a good backup and lost the cloud data of every T-Mobile Sidekick phone. After finally recovering somewhat from that debacle, Microsoft and T-Mobile have announced that they are shutting down Sidekick service permanently at the end of May. Thankfully, they are working on giving customers ways of getting their data out, but what if they decided it wasn’t worth their time or money?
The Cloud presents a huge business opportunity, but also a huge business risk for customers. That’s why VMware’s vCloud strategy is so critical. I’m not going to repeat the Hotel California analogy (because frankly it doesn’t really jibe with the song lyrics; I prefer the “roach motel” analogy), but it’s very important not only to be able to go “all in” to the cloud, but also all out. Otherwise, you and your data could be at the mercy of your cloud provider. What if you’re a Terremark customer who can’t stand Verizon? What if Amazon decides to shut down EC2 next week because it’s not making them enough money anymore? All of these are real concerns that should give CIOs pause before heaving their critical applications over the wall for someone else to run.
I see a lot of people give the analogy that cloud computing makes computing a utility, like electricity. You just pay for what you use!
My retort: How many of those customers have their own UPSes and generators on-site because they don’t trust the power company? Be careful with your analogies.
Today I had an article published in Escapist Magazine about an older computer game called Allegiance, which Microsoft made more than 10 years ago. If that sort of thing interests you, you can check it out here.
I hope everyone has a great holiday, expect more content in the new year!
Some of you may be playing around with the new version of View’s ThinApp integration, and wondering why you run into this issue:
Streaming option is grayed out
You add your packages, but you can’t select Streaming. Why is this? The documentation tells you to add your packages to the ThinApp Repository along with the MSI installers created by ThinApp. What you need to do is add the following line to the package.ini file for your ThinApp application (and by the way, make sure you’re using ThinApp 4.6):
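Going by the MSIStreaming=1 parameter mentioned below, the entry looks like this. A minimal sketch, assuming the standard [BuildOptions] section from the ThinApp documentation, with the rest of the file elided:

```ini
[BuildOptions]
; Build an MSI that contains only pointers to the EXE and DAT
; files next to it, instead of embedding the package contents
MSIStreaming=1
```

Rebuild the package afterward so the new MSI picks up the change.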
What this does is change the MSI so it doesn’t store any files at all, but simply includes pointers to the EXE and DAT files next to it. This has the advantage of reducing the space you need (because you aren’t storing the files twice, once on the share and once in the MSI), but also, as you probably guessed from the name of the parameter, it makes Streaming work!
So rebuild your packages with MSIStreaming=1 and you should be streaming your virtual apps into virtual desktops in no time.
Mike Laverick, Stu from Vinternals, John Troyer from VMware, and I did a panel discussion on the events of VMworld on the last day of the conference. You can check out the replay of it here.
Cool highlights from this session, I’m going to skip all the pre-ThinApp 4.6 stuff since it’s old news:
- ThinApp (as of 4.6) now does transparent page sharing for applications (like in TS environments) just like ESX does for VMs. Pretty neat!
- In ThinApp 4.6 you can “harvest” IE6 straight out of an XP system and run it on Win7. The only WinXP-specific file that gets put in the package is Shell32.dll; without it, icons and menus don’t work correctly. The resulting ThinApp performs like IE6 SP1.
- ThinApp Converter, which is included in ThinApp 4.6, allows you to automatically build a package from an automated installer. You can take existing install packages (like those from Wise, Altiris, or LANDesk) and convert them to ThinApp packages easily with a simple command line and a blank VM.
- The futures stuff covered a new feature called “ThinApp Factory.” It’s a prototype that can consume RSS feeds of applications, automatically download each app’s installer, silently install it and capture the package using ThinApp Converter, and then publish it to users or let users download it from an “App Store” kind of thing. This will probably feed into the Horizon stuff they showed this morning in the keynote.
Session is around updating the reference architecture for View 4.0 to View 4.5.
- Goal of View is to deliver a Consumer Cloud experience for the Enterprise.
- Goal of the 4.0 reference architecture was to simulate a realistic desktop workload, validate 2,048 users.
- Session turning into a pitch for UCS very quickly…
- Now they’re off talking about RAWC, which is really old news. The new version of RAWC supports simulating workloads on Win7. It still can only simulate a preselected set of apps, with no custom app load testing. You can learn more about RAWC on its YouTube channel.
- View 4 reference architecture was run on UCS, CX4, vSphere 4.0 and WinXP SP3.
- Just released – Win7 optimization guide, includes a BAT file that optimizes the VM for you! Already found it here.
- Going through all the stuff they had to do to the Storage to make it perform. Wouldn’t it have been nice if it was on virtualized storage and you didn’t have to worry about RAID groups and all that crap? :)
- In the View 4.0 reference architecture they got up to 16 VMs/core. I think this is super aggressive, and I don’t recommend customers size for this number.
- Finally we get to View 4.5 stuff! Talking about the new Tiered storage capabilities of View 4.5
- They’re putting SSDs in each physical server… again, more Cisco-specific stuff. I think sticking SSDs in every server drives up the cost too much. Plus, wouldn’t it kill vMotion?
- They did the View 4.5 test with a single non-persistent pool.
- They see CPU being a bottleneck on optimized Win7 32b deployments… but they were only giving each Win7 VM 1GB of RAM.
- During the Q&A they were asked about HA/vMotion. This reference architecture doesn’t allow for vMotion or HA. And non-persistent pools require some sort of 3rd-party profile management to make them work. If you want to take a system down, you’ll have to do it after hours. Don’t like it! I’ll stick with SANs to keep full functionality instead of neutering half the Enterprise feature set.
Something amusing that I noticed in the compatibility matrix a couple weeks ago:
Holy cow, they finally got VMware Server support in vCenter! It only took them… 4 years? I wonder why they even bothered. Anyone wanna try it out?
I’m here in San Francisco and I will be blogging and tweeting about VMworld! Follow along on Twitter (I’m @justinemerson) or check the recent tweets on the right.
Sunday and Monday I’ll be busy with PTAB meetings (Partner Technical Advisory Board) doing stuff I can’t tell any of you about. =) But Tuesday through Thursday expect content from here. Hope to see you all there! I’m the nerdy looking redhead. Come up and say hi!
Greetings everyone. I’ve been very busy preparing for a technical conference my company is putting on, but I thought I’d post about one of the issues I’ve come across in the lab.
When upgrading a vCenter 4 server to vCenter 4.1, you aren’t given the option of selecting the LocalSystem account because the installer forces it to be the currently logged-on account. This broke my first upgrade of vCenter and I had to roll back.
There have been some recommendations on the forums to simply change the account type back to LocalSystem afterwards; however, I thought my solution was slightly more elegant: run the installer as LocalSystem in the first place. This is more difficult than it sounds. With each release, Microsoft has made it a bit harder to do (the trick using the “at” command doesn’t work anymore, nor does the Scheduled Task trick). The easiest way, it turns out, is to use PsTools from SysInternals. To launch the vCenter 4.1 installer as LocalSystem, run the following:
psexec -i -s D:\autorun.exe
(where autorun.exe is the installer shell on the vCenter CD)
Note you must run psexec from a UAC elevated prompt. You can substitute any command you want at the end there, but it’s recommended to launch the installers from the autorun screen.
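If you want to sanity-check the SYSTEM context before launching the installer, the same psexec switches can open an interactive command prompt instead. A quick sketch (Windows-only commands; whoami reports the built-in name for the LocalSystem account):

```bat
psexec -i -s cmd.exe
rem In the window that opens, check which account you're running as:
whoami
rem nt authority\system
```

If whoami shows anything other than nt authority\system, the elevated prompt or the -s switch didn’t take effect.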
So it was with quite a lot of surprise that I opened my email this weekend and saw that I have been awarded the title of vExpert for 2010. I find it almost ridiculous that I’m up there with the other folks that have this award. I won’t bother naming them because invariably I will leave someone out who deserves this award much more than I.
The award, I guess, is kind of a wake-up call – as you can see, my last update here was some time ago. I could give excuses about work schedules or vacations or whatever, but really the onus is on me, and hopefully in the coming months I will show that I was worthy of this honor. Thanks again to all of you who read and comment.