Friday, January 18, 2008

MS Office Tip: Paste Special

When pasting Excel tables or graphs into Word, I prefer to paste them as pictures rather than as embedded objects. To do this, I used to click 'Paste Special' (which in Office 2003 had to be done from the menu) and select the appropriate format.

Today, I found out by accident that CTRL-ALT-V opens the Paste Special dialog. This means I can now avoid using the mouse even more...

Probably I'm the last person in the world to know this ;)

Thursday, January 17, 2008

VI 3.5: Consolidation Extension

At a client, we are installing VI 3.5 with all the bells and whistles. All is going well, and the number of features added in this version is quite impressive.

One of the new things is the addition of extensions (on the server side) and plugins (on the client side). The new Update Manager is one of them, but there is also an extension for light capacity planning, including a VC snap-in for VMware Converter.

We started the capacity planner wizard (because it is required before one can actually do a conversion). The wizard asks for an account to start the service. After that, it apparently starts scanning the AD (automated inventory) without ever asking. This inventory has now been running for over a day! I agree, it is a large domain, but shouldn't the wizard have asked me whether I wanted to do that or not?

BTW: when using the Capacity Planner tool itself, you have to start an inventory job explicitly, which is something I hardly ever use.

Wednesday, January 16, 2008

VMware Capacity Planner vs. Platespin PowerRecon

I have been asked to compare both products, so here you go... you will notice that it is virtually impossible to say which product is better, because they are fundamentally different in their approach to monitoring performance and creating consolidation scenarios. Except for one big difference...



Existing Comparisons


Peter van den Bosch has already done an analysis (see this PDF) of the differences between the two products. However, it feels to me like comparing apples to bananas. Criteria like 'Needs a database' and 'Needs IIS' are listed alongside real features like 'Automatic discovery of servers'. Furthermore, features like being able to export to HTML are said to be unavailable in the VMware product, although (by construction) everything is already presented in HTML. I do not blame Peter for this; the two are simply different products doing the same thing in different ways.


I would compare the situation to the choice of an email client: Outlook versus YAWEC (yet another web email client; think Hotmail, Gmail, etc.). Both offer an interface to your email inbox, but they do so in completely different ways. A feature that is very relevant for Outlook, such as synchronization and mail fetching, is completely irrelevant for the web client because no synchronization needs to occur.


That said, I will attempt to explain the differences between the two products a bit further, along with what I like and don't like about each.


Functionality and Differences

The following components can be found in both products:

  1. A monitor to get the performance data
  2. A database to store the performance data
  3. A 'client' able to analyze the data and create consolidation scenarios and other reports based on the data

The monitors of both Capacity Planner and PowerRecon can monitor Windows and UNIX machines without installing an agent. If required, monitors can be load-balanced. Before real performance monitoring can start, both tools require at least one inventory scan. This scan looks up the configuration of each system, the software that is installed, the services that are running, etc.

Obviously, it is possible to use different accounts to connect to different servers. This is all relatively easy to set up and use.

Both tools also have a database, but the main difference is that the database for Capacity Planner is hosted on a VMware server (https://optimize.vmware.com), whereas PowerRecon installs/uses a local database on the monitor (or on another database server). There is also a difference in how the data is stored, but that is out of scope for this post.

Consequently, the client part (the third component mentioned above) is implemented differently as well: Capacity Planner is accessed through a web interface, whereas PowerRecon uses a client that can be installed on any Windows box (and also offers a web interface on the monitor server). The two types of client each have their own advantages.


Which one is better?

Whether you prefer one or the other is primarily a matter of personal preference, and every benefit has an associated drawback (this is real life, after all):

What is nice about VMware Capacity Planner:

  • Very light and easy monitor installation. The drawback is that on the monitor, no overview of the performance data is available, only an average and the latest value.
  • Information is uploaded to the VMware Capacity Planner website, which enables you to analyze the performance data from anywhere, at any time.
  • Additionally, it gives you the opportunity to compare your statistics with those of other systems in the database (the industry average).

The main drawback is that in my experience the site is not always very responsive.

What is nice about PowerRecon:

  • Information is stored locally in a database (but can be moved and analyzed off-site).
  • Because the data is locally available, and a fat client is installed to analyze this data, one can get a near real-time view of performance characteristics.

The drawback is that the installation is more involved and requires setting up IIS on the monitor server. Consequently, the requirements for this server are higher.


Main Difference: Monitoring VMs

So is there no difference that really distinguishes the two and has nothing to do with the way they work?

Yes, there is one big difference: PowerRecon has the ability to monitor a virtual infrastructure and virtual machines. While you can monitor a virtual machine with Capacity Planner, the results for CPU utilization are not to be trusted. PowerRecon (given the correct license) connects to the VI server to get its performance data, which is the only correct way of working.

Clearly, Capacity Planner is aimed at consolidation projects, and the people at VMware have produced a product that is perfectly suited to that purpose.


Note: I have only scratched the surface in this post; no details are given. Both products have different data models, slightly different feature sets, completely different licensing schemes, etc. If you are interested in additional information, just let me know in the comments...

Thursday, January 10, 2008

VMware: Extracting info from VMX files

In order to get some information on the exact names of VMs and the VMDK files mapped to them, I created a dump of all .VMX files on the SAN and used the following sed script to extract the relevant data:

# every VMX file in the dump starts with a '#!' line; turn it into a separator
s/#!.*/Configuration:/p
# print (slightly indented) only the lines we are interested in
s/displayName/ &/p
s/scsi.:.\.fileName/ &/p
s/sched.swap.derivedName/ &/p

The script can be run using the following command:
$  sed -n -f "sed_script" "config-list"

The result is:
Configuration:
displayName = "Service Desk 5.2"
scsi0:0.fileName = "Service Desk 5.2.vmdk"
sched.swap.derivedName = "/vmfs/volumes/46363464-80...

If you want, it can even be formatted in other ways:
s/displayName = \"\(.*\)\"/\n\1\n/p
s/scsi.:.\.fileName = \"\(.*\)\"/ \1/p
s/sched.swap.derivedName = \"\(.*\)\"/ \1/p

Resulting in:
Service Desk 5.2
Service Desk 5.2.vmdk
/vmfs/volumes/46363464-80...
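
For reference, the 'config-list' dump itself can be created on the ESX service console. This is only a minimal sketch, assuming all .VMX files live under /vmfs/volumes:

$  find /vmfs/volumes -name '*.vmx' -exec cat {} \; > config-list
$  sed -n -f sed_script config-list

The first command concatenates every .vmx file it finds into one dump file; the second runs the extraction script from above on that dump.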


Tuesday, January 08, 2008

Capacity Monitoring for Desktops?

With the rise of VDI (Virtual Desktop Infrastructure, see here for a comprehensive overview) and the momentum it has, one starts to ask questions similar to those asked in conventional server consolidation: what type of virtualization platform is required, how many users/desktops can I host, will there still be room to scale, etc.

The way to approach this in server virtualization projects is by means of capacity monitoring and planning using virtualization scenarios. See earlier posts for more information on this topic.

The question we are asking: can we use the same concepts and ideas for desktop virtualization? My answer is 'NO', because:

  1. A user behaves completely differently from a process or service: less predictable, depending on mood, depending on the time of day, etc.
  2. Some end-user applications claim 100% of the CPU even while they are not doing anything. The classic example used to be the game 'Pinball'. Even a screensaver can take a large amount of CPU power.
  3. In a VDI context, this becomes even more important. For instance: why would you scale your virtual desktop's CPU and memory to include desktop search if nothing personal can actually be found on the hosted desktop?
  4. When a user starts and runs 10 applications at the same time, they will probably only use 5 of them later, but still... the others require CPU power (and memory) while seeming idle.
  5. If a user has meetings half the time, does that mean his session is closed? Does it still require processing power? How can this be analyzed?
  6. Etc.

In other words, a desktop environment is inherently different from a server environment. This is why in my opinion it is harder to maintain a Citrix farm than a VMware farm: applications and users tend to be less predictable and 'stable' than servers.

Does that mean that we cannot do any sizing or planning in a VDI context? Of course we can; we just have to keep in mind that applying exactly the same reasoning as in server consolidation planning is not a good idea.

A few more tips:

  • Make sure you have an overview of running processes and their CPU/memory utilization. This helps in deciding what is really important.
  • Be careful with averages and peaks: a PC that is powered on around the clock will show a low average even if it is used heavily during the day, and some applications can push a PC to 100% during working hours (see the sketch after this list).
  • Take into account inactive time during the day.
  • Do not try to analyze the 'average' user; instead, create classes (say 5 or so) that each have typical characteristics. Use these classes to create the virtualization scenarios.
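
To make the point about averages and peaks concrete, here is a minimal sketch. It assumes a hypothetical file cpu_samples.txt with one CPU-utilization sample (in percent) per line, collected around the clock; a desktop that idles all night can hide heavy daytime use behind a low average:

$  awk '{ sum += $1; if ($1 > max) max = $1 } END { printf "average: %.1f%%  peak: %.1f%%\n", sum/NR, max }' cpu_samples.txt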

Friday, January 04, 2008

Octopus Platform: VDI 2.0

I came into contact with the people of e-BO yesterday. Their Octopus platform is truly amazing. In fact, this company was already deploying VDI infrastructures years ago, when the term VDI did not even exist! Moreover, the whole concept is not even tied to VDI (hosted desktops); it can include other types of application/desktop provisioning as well.

Under the hood, an abstraction layer is created between the user front-end (thin or fat client) and the desktop/application provisioning back-end. This layer makes it possible to do advanced things like load balancing, setting connection policies, secure channels, failover, etc.

Definitely worth a look!

Wednesday, January 02, 2008

Chargeback

On the VMworld website, you can find a featured presentation concerning chargeback. The presentation is by the CEO of VKernel, a company that I have mentioned on this blog before.

Chargeback, as pointed out in the presentation, is about 'measuring' and accounting for two types of costs: consumables and non-consumables. The latter has to do with floor space, licenses, support and administration, etc. Consumables refer to the utilization of CPU, memory, power, etc. Obviously, performance monitoring is critical in this respect. Check out the presentation (registration required)!
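
As a toy illustration of the two cost types (the rates and numbers below are entirely hypothetical and not from the presentation), the charge for a single VM could be a fixed non-consumable share plus a usage-based consumable part:

$  awk 'BEGIN { fixed = 50; cpu_hours = 120; gb_ram_hours = 1440;
        printf "monthly charge: %.2f EUR\n", fixed + cpu_hours*0.10 + gb_ram_hours*0.01 }'
monthly charge: 76.40 EUR

In practice, the rates would of course come from your own cost model, and the consumable part would be fed by the performance monitoring mentioned above.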
