Wednesday, November 26, 2008

Why Personal Productivity May Be Hard, and Why 'The Corporation' Has the Answer

Eli has drawn my attention to 'The Corporation'. Shame on me for not having known of or seen this documentary before (1). One word struck me when I heard it, because it is a topic I had wanted to discuss for quite some time in a different context: accountability.

What does it have to do with personal productivity then?

I have been doing quite a lot of thinking about IT and business processes lately. Defining a process is generally easy; measuring its performance, both from a process and from a quality/content point of view, is much more difficult (2). One thing that is a core component of every step in a process, though, is defining the person who is accountable, who is usually not the same as the person responsible for doing the work. People typically use a so-called RACI diagram to define the respective roles (Responsible, Accountable, Consulted, Informed) for every step in a process or task.

Personal productivity is all about processes (think GTD, for instance) and personal workflow. Here too, it's not easy to measure the quality of the process or the deliverable. But what really makes personal productivity hard is the fact that one person is both accountable and responsible, or in other words: we have to 'control' our own work.

Maybe that is why we need a personal assistant?


(1) No, I'm not going into the details of the documentary, and I am not commenting on the reasoning of Eli at this time.
(2) No, that is also not what I wanted to talk about now (but will do in the future).

Thursday, November 13, 2008

Book Review: Personal Development for Smart People (Steve Pavlina)

On a side note: the past weeks have been completely unproductive when it comes to writing. That does not mean I don't have anything to blog about: my second child was born at the end of September and I changed jobs during that period as well. Enough changes for now. Back to business...


Introduction



I managed to read Personal Development for Smart People (PDfSP) during late-night and early-morning hours, and I'm happy I did. PDfSP has a special meaning for me in two ways.

First of all, because years ago, when I started my own 'personal development project', I subscribed to a course about Louise Hay's book 'You Can Heal Your Life'. Ten years later, PDfSP is published by Hay House, the publishing house founded by that same Louise Hay. That closes a first circle.

The second reason why PDfSP resonates so well (to use a verb often found in PDfSP) is its quest for the universal foundation of things: looking for structure in the world that surrounds us. If you've read this post, you know what I'm talking about.


About The Book


I'm not going to review the book in a conventional way; that has been done by many others before me. Instead, I would like to connect Steve's book to something I have been thinking about a lot lately.

Let us start with a few questions and answers:
  • Is the book any different from the myriad of self-help books and CDs around? - Yes it is.
  • Is the book for everyone? - Probably not.
  • Do you need to be acquainted with the subject in order to read it? - Not at all, but it helps.

Let me explain...

The reason why the book is probably not for everyone is exactly the same as the reason why not everyone likes the content of Steve's blog. Let me give an example. For some, 'oneness' is an obvious thing, no doubt about it. For others, being connected to other people sounds completely insane.

It helps to be acquainted with the subject, because then you at least know that things like 'oneness', the 'law of attraction', life purpose, etc. are a fundamental ingredient of a great many books in this field.


Book classification


Some time ago, I started to think about a classification of personal development books. Intuitively, I've always thought of 'You Can Heal Your Life' (by Louise Hay) as being written more towards women: holistic, intuitive, not everything can be seen or proven, etc. On the other hand, 'The 7 Habits' (by S. Covey) seems to appeal more to a male audience: logical, analytical, no 'unfounded' messages, etc. For me, they are prototypes of two ways of expressing a personal development message. In other words, their form is different, but the core message is often similar.

Coming back to PDfSP, I find it hard to classify it using the above types. (That probably proves that my classification is good!) PDfSP starts out as a male book, with a search for the foundations of personal development, but rather quickly mixes in the female aspects mentioned earlier.


Final Remarks


If you ask me how I would describe the effect of PDfSP on me, suffice it to say that it hit me. That alone is an accomplishment few writers manage to achieve. If you're interested in personal development and looking for some more background and insight, you won't be disappointed. Just keep in mind that the book has 'male and female aspects'.

Monday, September 15, 2008

How to Measure Productivity?

A comment by reader 'rbis' on my previous article about the definition of productivity refers to the website and blog of productivity guru Matthew Cornell. The reference reminded me of an article by Matt in which he asks the question: how do you measure personal productivity?

The 3 Layers Again


In my previous article, I discussed the existence of 3 layers in what people consider (personal) productivity: Layer 1 (L1) deals with so-called life hacks, tricks on how to deal with tasks and things in a smarter way. Layer 2 (L2) is about the approach to these tasks and the organization of them, or in other words the process. In Layer 3 (L3), we look at the purpose behind everything, the driving force behind the whole thing.

When thinking about these layers and the measurement of productivity, two conclusions pop up.

Measurement Depends on the Layer


It is easier to measure in L1 than it is in L3. To give an example: testing one's typing speed, or counting the number of blog topics read, is just a matter of counting. The fact that one's purpose is not clear or not lived out is a much harder nut to crack.

This may sound obvious, but in practice, productivity is often seen as one big beast which has to be tackled with one method.

To come back to the article by Matt, careful reading reveals that if the layered approach had been used, things would have become even more transparent. Where he discusses that measures are required, he mostly deals with layer 1 and 2 activities (email processed, poor planning, inefficient meetings), whereas when talking about why measurements are hard, layer 3 comes into play (personal goals, quality instead of quantity).

Layer 2 and Layer 3 are not about Counting


It will be clear by now that L3 cannot be measured with numbers; here, qualitative aspects are what matter. And qualitative, by definition, means subjective. This is a good thing, but it also makes it hard and confusing.

Stephen Covey may try hard to make sure we get our personal mission statement clear, but how many of us really have one? Or how many times a year is this mission statement revised and, if needed, adapted? OK, I plead guilty myself.

I think we feel it when our mission or life's purpose is clear and our activities support it. I have noticed lately that two famous bloggers, in two different areas, came to a similar conclusion: they were no longer certain that what they were blogging about really made sense to them, that it was what they wanted to do. An insightful article by Robert Scoble in this sense can be found here, whereas Merlin Mann discusses some of the thought process in this article.


Enough about layers, let's get productive!

Friday, September 12, 2008

Definition of Productivity



I've always liked looking for structure and relations between the things that surround me. I guess I'm not the only one? One of the things I would like to find structure in today is the term 'productivity' and what is usually associated with it. This includes so-called productivity systems, lifehacks, workflow tools, etc. In this article, I want to argue that 3 levels can be distinguished.

Efficiency is a term that is often used together with productivity. The Wikipedia page tells us:


... While productivity is the amount of output produced relative to the amount of resources (time and money) that go into the production, efficiency is the value of output relative to the cost of inputs used. ...

Most definitions of productivity are based on the production or manufacturing of (physical) goods. In sharp contrast with this, our Western civilization has evolved into a service-oriented society. Most of us no longer produce anything physical (except for documents perhaps). This is often referred to as knowledge work. By its very nature, knowledge-work productivity is much harder to measure, as it involves creativity, thinking, finding solutions to problems, etc.

In my opinion, productivity-related information can be divided into 3 groups, which I call levels:

Level 1: Tips and Tricks


This level deals with the question: how can I optimally perform the task at hand? This task could be: processing email, having a meeting, brainstorming, etc. Note that this level does not deal with which task is done first or why the task is important.

The various popular life hack blogs and sites are usually concerned with this level of productivity and give plenty of tips on how to collaborate online, clean your house, etc. in a productive way.

Level 2: The Process


At this level, we ask ourselves: 'What', 'When' and 'in which order'? In other words: how do we approach things?

This level is about the tools and techniques that let you plan your life and work: todo lists, Getting Things Done, Do It Tomorrow, etc.

Level 3: Purpose


At this level, we ask ourselves 'Why'. In other words: what drives us, what is our vision and mission, what is our purpose?

In many cases, this level is simply forgotten. Think about the successful manager who at the age of 60 regrets not having spent more time with his kids. Or think about people trying to do 1001 things in a day without pausing to ask whether these things are really valuable.



Each of these levels can be further split into parts, and obviously some things will sit on the boundary or cross these levels. Generally speaking, though, every level influences the level below it: understanding your purpose (level 3) tells you which activities are valuable (level 2) and enables you to find tricks (level 1) to do them more quickly and productively.

The question remains which of these questions should be asked first. That is for a later post.



This article is the first in a series of almost literal translations of my own Dutch blog posts on choose2live.blogspot.com.

Monday, September 08, 2008

Free review copy of "Personal Development for Smart People"

Seeing this post on Steve Pavlina's blog, I couldn't resist and sent in my request for a review copy of his upcoming book "Personal Development for Smart People".

Guess what?! I was accepted. The book has to be shipped from the States and I will have to pay 10 Euro taxes because Belgian customs wants to take a look, but that still saves me some money compared to buying the book myself.

If you don't know Steve Pavlina, I suggest you check out some of his most popular blog posts to get an idea of the kind of person he is.


New (temporary) blog name

Application Availability isn't the topic of this blog anymore, so it was time to change its name. The current name is temporary, until creativity hits me with a better one...

Any suggestions?

Tuesday, August 26, 2008

VMware Update Manager for Windows VMs

[ Note: yet another technical post ! I just can't resist it... ]

I had never taken the time to look into Update Manager. Today, I decided to dive in and try it out with a Windows Template I'm building.

It struck me how long the update process (actually called 'remediate') took. Nothing could be seen on the server console and no CPU was being utilized. Strange.

After a couple of minutes of waiting, I went back to the VM console and saw a CD drive mapped. I should have known! Windows updates are deployed the same way the VMware Tools are: via a virtual CD (an ISO file mounted from the VirtualCenter server). Indeed, on the VC server I found an ISO with the exact patches I had selected to be installed on the VM.

That's why the first phase of the remediate process took so long: it was preparing the ISO.

On a side note: this ISO could just as well be used for physical machines that need to be patched! Just copy the ISO file and burn it to a CD. The only thing still missing is the so-called Update Manager guest agent, which is installed on the VM to be patched. It seems that 'vum-launcher.exe' does most of the work. Has anyone tested this already?

Wednesday, August 13, 2008

[Update] VMware Bug: Waiting for a patch

I was right in expecting that a host reboot would not be required to install the patch.
I was wrong to think this meant I did not have to move my VMs away or shut them down: maintenance mode is required to install it. Lame!

Fortunately, I still have some 3.0 servers around.

Tuesday, August 12, 2008

VMware Bug: Waiting for a patch...

Ok, I couldn't resist... I had to get my opinion out on the 'by-now-famous' bug in the ESX 3.5u2 hypervisor.

This morning, I was worried. I imagined having to shut down several updated ESX servers hosting more than 100 VMs, patching the servers and bringing the VMs up again.

Looking around and discussing with Tim Jacobs, I looked back at one of the posts he refers to. It struck me that the error is logged in '/var/log/vmware/hostd.log'. This means that it is the host management process that is logging the error. To me, it would make sense if the VMkernel didn't care about licensing and just did its job.

As a consequence, it must be possible for VMware to create a patch that does not require a host reboot.

This afternoon, I look forward to such a patch.

On to something different ...

According to Google Analytics, this blog has between 25 and 50 visitors per day, with over 70% coming from search engines. FeedBurner tells me that there are a little over 50 feed subscriptions, which I find nice. The small Google banner on the right has generated around $20 in five months or so. All these things make me happy. This might soon be over.

Virtualization and VMware products in particular are nice to work with and blog about. Especially when it comes to capacity planning and analysis, a lot is still to be discussed. But not by me.

I made a jump (content-wise) about 5 years ago, going from an academic research group to the IT industry. The jump I'm about to make in a couple of weeks is probably even bigger. I will probably start in a project management function, with the intention of getting into BPM, Service Management or who knows what...


Why Management Consulting?

Because of the money? No.
Because of the challenge? Probably.
Because it interests me? Sure enough.
Because it's related to what really keeps me busy? Sure!

But also because I do not see myself implementing VMware infrastructure products (or any other product for that matter) for 5 more years. It is not the thing I have in mind for my older days. Is management consulting the answer? Perhaps, we'll see.


Why not IT?

There are some possible answers:

1) What really interests me with products like VMware and such are the foundation layers, the theory behind them, the statistics, the stories, etc. This probably stems from my theoretical background. Suffice it to say I was most interested in the talk by Irfan at VMworld Europe 2008.
This is one of the main reasons I was attracted to VMware in the first place, just take a look at some of the earlier posts to see that people at VMware are publishing papers in scientific journals.

2) What I liked a lot in past projects is what we call the 'design phase': discussing requirements, expectations, boundary conditions, etc. with a client and coming up with a good compromise that is cost-effective. This requires thinking and communication, two things I miss a lot in the 'implementation phase' of a project.

But then again, looking at the Belgian context, making a design for 3 ESX servers and 2 VLANs isn't really that exciting. We simply don't have that many large corporations in Belgium and the cake has to be shared with other consulting firms.

3) When designing or implementing a software product, one is bound to what is delivered by the vendor. It is very frustrating to deploy a solution, only to find out that the software contains a bug that only pops up under the specific circumstances at the customer's site. At best, you can get a hotfix or patch from the vendor to cover it up, but bottom line, we depend on the quality of other people's work.

By the way, coming from a Linux background, I'm convinced that this last argument is an important one in the discussion about open source versus closed source software. Agreed, most of us are not capable of coding our own software, but if required we could hire an independent developer to fix our bug when the vendor does not support us!

Did I have bad experiences with VMware? With Citrix? Or with any other product? Yes, I did, with most of them. Think of vendors refusing to support applications that run in a virtualized environment, of the famous VMotion bug that caused a client of ours a lot of issues with an SQL server, and of so many other things that make us spend 80% of the project time getting the last 20% of the product under control.

Update: I typed most of this post last week; how could I possibly have known that today (August 12, 2008) would be called "D-Day for VMware" by Tim Jacobs, and even worse by others? Ironically, I upgraded most of the servers at a client site yesterday (the 11th), only to find out that a bug would cause major havoc as of today!

4) I like to talk to people, think, brainstorm, discuss at the blackboard, etc. Feeling the synergy when people get together. Giving training (as I used to) comes close, but is often too one-directional. Workshops (which I do often now) are good, but doing the same thing 10 times in a row is not what I have in mind.

5) Technology changes really fast. At first, I found this exciting. Nowadays, I sometimes think it has become a burden. Don't get me wrong: I have nothing against new features and clever products, and I'm the first to check out what's new. The expectations of some clients, however, are such that you're sometimes expected to know all about these things before they are even released, and to know them by heart.

The above deals with the content of IT work. I have a lot to say about the form as well (how project consulting is misunderstood and how people are treated as resources rather than assets). That is an entirely different discussion.


Back to the Future

What about the future? Well, as my current interests and future work will be about different things, I'm still unsure as to what I'll do with this blog.

Somehow I like blogging about what keeps me busy. I am co-author of a (Dutch) blog (http://www.slimmerwerken.be) and what does not fit there usually ends up on my personal (Dutch) blog (http://choose2live.blogspot.com).

What holds me back from continuing verbeiren.blogspot.com is the fact that in IT, nobody really cares if your English is not written perfectly, without spelling or grammar mistakes. When writing about business processes, communication itself or personal productivity, this may not hold true.

So the first question is whether I should blog about my future activities (in English) at all. The second question naturally follows: should I keep this blog (and change the subject/content), or stop it here and create a different one?

If you have any advice, please let me know in the comments.

Thursday, August 07, 2008

VMware, backup and VSS

People looking for information concerning backup and the latest features included in ESX 3.5u2 should take a look at this article by Tim Jacobs. Not only does it introduce VSS, but it also details why VSS support for VMware backups is a good thing.

Keep up the good work, Tim!

Tuesday, May 27, 2008

Using Excel to manage a Virtual Infrastructure

I have always been a big fan of using Excel as a graphical user interface for things one would probably not think of at first sight: generating Word documents, serving as a database front-end, generating scripts based on input, etc.

This is exactly what Carter Shanklin has done in this video (and the accompanying script). The idea is simple and the solution elegant. It can be applied to a variety of other tasks that require a lot of similar actions.

Under the hood, a little VBScript is used to launch a PowerShell script that does the job. You need the VI Toolkit for it to work, of course...

Read more at the source.

Wednesday, May 07, 2008

Seamless VDI Windows (or application publishing)

The famous Brian Madden has written an article that touches on arguments similar to what I wrote before concerning seamless applications in a VDI context, in an attempt to understand why the prices for XenDesktop are so low.

It appears Ericom and Quest have products that already enable this feature. VMware is technically able to as well (as I pointed out). Apparently, Citrix does not offer the feature in its XenDesktop product; maybe Brian is right in suggesting that Citrix cannot afford to take business away from its Presentation Server (currently XenApp) product:

The bottom line is the fact that XenDesktop is so cool yet so cheap is really going to come back to haunt Citrix. And they're stuck. They can't raise the price because they have to compete with VMware and Quest. Quest already has the single app VDI publishing feature, but no one is paying too much attention to them (yet). But can you imagine what would happen if VMware added single-user app publishing to their VDI solution? And if they kept the price down to under $200 or so? What would Citrix do then? Talk about game-changing!
Follow the discussion here.

Monday, April 07, 2008

The importance of (VMware) Unity

I'll start this post by talking about Citrix instead of VMware...

Part I: About Citrix's Seamless Applications

One of the things that Citrix has offered enterprises for many years now is the integration of a remote desktop with the local desktop by means of seamless windows. In theory, one does not need to know whether an application is running locally or on the server: it looks the same and reacts the same. This is important, especially if you note that Microsoft only released a similar feature with the just-released (2008) version of the Windows Server OS.

Part II: About VDI

As an alternative to server based computing, VDI has been gaining a lot of momentum lately. For specific users and workloads, running a desktop on a virtualization platform can have significant benefits over traditional fat clients or server based computing.

The idea remains the same, however, and remote desktop protocols are required to transfer relevant data over the wire. In practice, what we 'see' on the client side is a published desktop, not an application. This is where the final part comes in:

Part III: Seamless VDI

What if we need just 1 application from our corporate VDI desktop and not the full desktop environment (with icons, Start menu, etc.)? We would need seamless applications for a VDI desktop.

Citrix obviously is able to achieve this with XenDesktop (they did it with Presentation Server), and now VMware is ready too. The technical roadblock to embedding applications running in a virtual desktop on the local machine has been tackled. The rest is a matter of using an appropriate remote desktop protocol.

I wouldn't be surprised if the next version of VMware VDI/VDM supports 'seamless' VDI applications, embedded in the local client desktop.

VMware Workstation 6.5 beta: Unity

Unity is a feature in the new beta of VMware Workstation that allows users to see only specific windows inside the VM, as opposed to a complete virtualized desktop. The feature exists already in VMware Fusion, the Mac counterpart of Workstation. This feature is long awaited and a welcome addition to the functionality of the product.

I was expecting a lot of comments about how this is a major milestone in the ongoing battle with Citrix and Microsoft. I didn't find any references along the lines of what I was thinking, so I will put my thoughts here. See my next post for why I think Unity is important.

Thursday, April 03, 2008

VMware vscsiStats: The paper

I wrote about vscsiStats before, but it seems I was amongst the first to do so. Luckily, one of the creators has put some more info on his blog. In this post, he refers to his paper about the technology.

Thursday, March 27, 2008

VMware & PowerShell: Creating a PSdrive

I'm not the first to report it, but I wanted to share this here. My first reaction was: 'Cool!', and so is my second reaction... It is related to the fact that VMware released the beta of the PowerShell toolkit for VI.

Imagine you can browse your virtual infrastructure using a command line interface like this:

cd vi:
cd Folder01\DataCenter01\host\Web\LiveHost01


The original article can be found here. Have fun!

Monday, March 10, 2008

VMware VscsiStats: Measuring at the virtual SCSI level

I mentioned earlier that one of the presentations at VMworld Europe 2008 was about measuring at the level of the virtual SCSI adapter of a VM. A wealth of information becomes available when looking at this level.

The tool available on ESX 3.5 that creates histograms by default (and complete traces if wanted) is vscsiStats. As options, one provides the vSCSI handle ID and the VM world ID. In order to get any statistics at all, one first needs to start the monitoring:


./vscsiStats -s

After some time, the relevant statistics can be fetched by issuing a command like:

./vscsiStats -i 8260 -w 1438 -p ioLength

This, for instance, yields a histogram of the size of the IO packets sent to the virtual SCSI adapter (and thus to the storage array). To finish, the monitoring has to be stopped as well:

./vscsiStats -x


The result of the command above, in my test, was a graph like the one below:


This is a VM running Windows 2003. Remember this is a histogram: the measured points are put into 'buckets' according to their size, and their relative frequency is plotted.

Note that there is some IO with sizes of 4095 and 8191 bytes. This is a sign that the file systems are not aligned properly. The fact that 4K sizes are the majority is nice, because VMFS is optimized for 4 KB IO.
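If you want to collect several histogram types in one run, a small wrapper script helps. The sketch below is untested and reuses the handle/world IDs from my test above; use './vscsiStats -l' to list your own IDs, and check the tool's help output in case the histogram type names differ on your build:

./vscsiStats -s                                   # start collecting
sleep 600                                         # let the workload run for a while
for h in ioLength seekDistance outstandingIOs latency interarrival; do
    ./vscsiStats -w 1438 -i 8260 -p $h > /tmp/vscsi_${h}.txt
done
./vscsiStats -x                                   # stop collecting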

Thursday, March 06, 2008

VMware: Virtual VCB Server/Proxy for iSCSI storage

(I know, I did not get past day 1 of VMworld Europe 2008, perhaps I will write some more about the other two days later this week...)

Something different now: it is known that VCB for VI 3.5 adds support for iSCSI devices. In itself this is not a big deal, but there is something really 'cool' about it: it means we can use a virtual server to act as a VCB proxy! This is a big step forward in my opinion.

How can this be done?
1. Install a virtual server
2. Connect the virtual network card of the server to the iSCSI network (either giving it the correct VLAN tag, or connecting it to the proper vSwitch).
3. Install a software initiator (e.g., the one you can download from the MS website). This step is pretty much the same as for a physical server.
4. Install & Configure VCB.
5. Configure the SAN to correctly present the LUNs to the backup proxy (based on the iqn).

Now, combine this with the possibility of adding an iSCSI or NFS storage appliance to your virtual infrastructure, and you have a completely virtualized backup solution that is no longer tied to physical hardware.
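Once the LUNs are visible on the proxy, a full VM backup can be started with vcbMounter just as on a physical proxy. A sketch (the server name, credentials and paths below are made up for illustration; check the VCB documentation for the exact syntax of your version):

vcbMounter -h vcserver01 -u backupadmin -p secret -a ipaddr:10.0.0.42 -r D:\vcb-mnt\vm42 -t fullvm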

Tuesday, February 26, 2008

VMworld Europe 2008: Day 1

Here are some quick thoughts, remarks and things I picked up today, in random order. The moment I'm home with a decent internet connection, I might update some of this info.

General things:

  • From this morning's keynote: 3% of all energy generated goes into our data centers. This is why green data centers are important.
  • As you can read elsewhere, Novell has acquired PlateSpin. Let's hope prices will drop.
  • Site Recovery Manager is nice, but does not offer anything that cannot be done by hand or with a script.

Performance related:

  • File system alignment tends to become more and more important. Two of the speakers today claimed it is very important for file system performance. Just make sure you use the VI Client to create the file systems; it does the alignment for you.
  • Some Linux kernels use a 1 kHz CPU timer, which causes CPU overhead in the guest. The kernel boot option 'divider' modifies this behavior (see the example below).
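As an illustration of that 'divider' option (my own example, not from the talk; divider=10 is the value I have seen suggested for RHEL-type kernels, so check what applies to your distribution), it is simply appended to the kernel line in /boot/grub/grub.conf:

kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00 divider=10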

SAN/NAS related:

  • A lot of people were interested in the talk discussing differences between FC SAN, iSCSI (SW & HW) and NFS. NFS, however, was not really covered in this talk. All in all, there were no big surprises here: FC is usually better than all the other alternatives, especially for large block sizes (less SCSI overhead), software iSCSI uses more CPU cycles than hardware iSCSI, etc. A whitepaper has been published with this info, but I don't have the link ready.
  • iSCSI has been optimized for 8K block sizes, as this block size is encountered a lot. The result is clearly reflected in the stats.
  • An experimental tool is available to analyze guest disk I/O statistics. It basically creates histograms of throughput, latency, average read/write distance, etc. The command line tool is 'vscsiStats'. I could not test it out yet, as I don't have an ESX server in my hotel room. This alone makes it worth being here...
  • In order to troubleshoot SAN performance issues, allocate a small LUN (e.g. 100MB), so that everything can be cached. This way, you avoid effects of physical disks, spindles, etc.

Network related:

  • In order to use the enhanced vmxnet driver in 3.5, you need to first remove the existing vNIC and add a new one. Then you can select the new enhanced interface with support for all the new features.
  • When setting up network failover policies, it is important to take into account that by default the spanning tree protocol takes 30 seconds to open the uplink port on a physical switch. During this time, the virtual switch sees the link (to the physical switch) as up. 30 seconds is twice the default timeout for VMware HA, so rebooting a switch may cause a lot of havoc in this case.

As you might notice, I am particularly interested in everything that relates to performance. Furthermore, I have a lot of references to interesting KB articles, but I need to check them out first before posting any info.

Friday, February 22, 2008

VMworld Europe 2008 Timetable - Abstracts

Somebody noticed (in the comments here) that the abstracts for the talks are not in the Excel file I created. I don't have time to add this info now, but I created a quick and dirty html page (with some sed, grep and bash scripting) that can be found here: Session_Abstracts.html

Hope this helps...

Wednesday, February 20, 2008

VMworld Europe 2008 Timetable

For those attending VMworld Europe 2008 next week, it is hard to select relevant talks and presentations from the list provided by VMware. It's a shame no more user-friendly overview is provided. Here's a screenshot of what it looks like:



Rene has done a good job converting the flat list to a table in Excel.

This did not quite serve my own purpose of getting my schedule right for next week, so I created my own Excel planning file. It consists of a planning sheet, the resulting schedule, and a sheet with an overview of the number of clones of myself I would need. I did a first scan of the topics, and the resulting table looks like this:



MMMmmm, it seems I will have to cut hard in my selection...

Anyway, if you're interested in the Excel (2007) file, it's rough, undocumented, but you can download it here: ProgramVMworldEurope2008.xlsx

Some notes for those interested: the column 'Attend?' should be filled in with 'Y' or left empty; 'Y' means you want to attend. The column with '#' stands for the number of sessions with the same topic over the three days. In the last sheet, you get an overview of these topics with the dates. In order to update the tables for the planning sheet and the issues, just press CTRL-ALT-F5.

Monday, February 18, 2008

Software iSCSI in VI 3: Multipathing and Redundancy

Last week, I did my first VI (3.5) installation using the MD3000i iSCSI SAN. Don't expect many features or a Navisphere-like interface, but expect a light-weight, cost-effective iSCSI solution that is up and running in a matter of minutes.  Moreover, it is a supported storage device for VI 3.5.

We set up the storage network, just like I usually do with a Fibre Channel array. The topology is presented below:


I always thought that iSCSI is very similar to FC in its network configuration, and in principle it is, as long as you have two HBAs.

With software iSCSI (as opposed to hardware iSCSI), you can only have 1 iSCSI initiator (think of it as a virtual HBA). Redundancy is obtained by connecting multiple physical NICs to the storage virtual switch. So far so good: replace HBA 1 and HBA 2 with physical NIC 1 and physical NIC 2 from Server 1, knowing that both pNICs are connected to the storage vSwitch.



Scanning the SAN reveals ... 2 paths (instead of the naively expected 4). Failover testing reveals that no failover occurs when, for instance, the link between pNIC 1 and the physical switch is disconnected. The SAN simply disappears!

We quickly realized that the whole problem is caused by the fact that there is only one iSCSI initiator (with a specific MAC and IP address) and no real load balancing (the originating port ID teaming policy is used). Only when we remove the primary uplink of the server does it switch over to the second pNIC, which connects to a different physical switch and also a different NIC on the SAN. In other words, one only sees the third and fourth path in case of a link or NIC failure!

In order for the server to see 4 paths to the SAN, and have complete redundancy for every physical component, one needs an interlink between both physical switches. This effectively solved our issue.

Note: one might be tempted to think that setting the teaming policy to IP hash would solve the above situation of having the second NIC on standby. This is true, but in that case one would need a NIC bond across the two physical switches, which also requires an interlink. The effect, in other words, is the same.
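For completeness, this is roughly how the storage vSwitch with its two uplinks can be built and verified from the service console. This is only a sketch: the vSwitch, port group and vmnic names are made up, and the exact flags may differ per ESX version, so check the esxcfg-vswitch help output first:

esxcfg-vswitch -a vSwitch1                                  # create the storage vSwitch
esxcfg-vswitch -L vmnic2 vSwitch1                           # first uplink (pNIC 1)
esxcfg-vswitch -L vmnic3 vSwitch1                           # second uplink (pNIC 2)
esxcfg-vswitch -A iSCSI vSwitch1                            # port group for the VMkernel interface
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 iSCSI    # the single software iSCSI initiator lives here
esxcfg-vswitch -l                                           # list the configuration and verify both uplinks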

Friday, February 15, 2008

Virtual Machine BIOS too fast

As you can read in this post, and as I have encountered many a time: the BIOS boot sequence of a VM is VERY fast. The speed even improved in version 3.5! Trying to get into the BIOS is harder than playing Quake to the last level...

In the post mentioned above, the author suggests modifying the VMX file and adding an option. It turns out that this option is even available from the VI Client, under the VM options. This is much easier than changing the config file.

Thursday, February 07, 2008

VMware memory management: Memory Tax

I'm looking into the specifics of how the ESX hypervisor handles memory, and how resource allocation is performed.

One of the things that has kept me busy is the so-called 'memory tax'. This concept is explained in esx3_memory.pdf, but it was not clear to me when reading that document. This is a quote from the document:

If a virtual machine is not actively using its currently allocated memory, ESX Server charges a memory tax — more for idle memory than for memory that is in use. That is, the idle memory counts more towards the share allocation than memory in use. The default tax rate is 75 percent, that is, an idle page of memory costs as much as four active pages. This rate can be changed by modifying a parameter setting.

Looking further on the web, I found that in 2002, at a conference, Carl A. Waldspurger was awarded the best paper award for the following: Memory Resource Management in VMware ESX Server. The slides of the presentation can be found here. Maybe it is because I'm used to reading papers (or at least I used to be), but I found the explanation much clearer in this document.
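For those wondering where the 'four active pages' in the quote comes from, this is how I read the formula in Waldspurger's paper (my transcription, so double-check against the original). With an idle memory tax rate \tau, an idle page is charged

k = \frac{1}{1-\tau}

times as much as an active page, so the default \tau = 0.75 gives k = 1/0.25 = 4. The shares-per-page ratio that drives reclamation then becomes

\rho = \frac{S}{P\,\bigl(f + k\,(1-f)\bigr)}

where S are the shares of the VM, P its allocated pages and f the fraction of those pages that is active: the more idle memory a VM hoards, the lower its ratio and the sooner memory gets reclaimed from it.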

I encourage everyone interested in memory management to read the paper.

Friday, January 18, 2008

MS Office Tip: Paste Special

When pasting Excel tables or graphs into Word, I prefer to paste them as pictures rather than embedded objects. In order to do this, I used to click 'Paste Special' (which in Office 2003 had to be done from the menu) and select the appropriate format.

Today, I found out by accident that CTRL-ALT-V opens the Paste Special dialog. This means I can now avoid using the mouse even more...

Probably I'm the last person in the world to know this ;)

Thursday, January 17, 2008

VI 3.5: Consolidation Extension

At a client we are installing VI 3.5 with all bells and whistles. All is going well, and the number of features that were added in this version is quite impressive.

One of the new things is the addition of extensions (on the server side) and plugins (on the client side). The new Update Manager is one of them, but there is also an extension for light capacity planning, including a VC snap-in for VMware Converter.

We started the capacity planner wizard (because it is required before one can actually do a conversion). The wizard asks for an account to start the service. After that, it apparently starts to scan the AD (automated inventory) without ever asking. This inventory has now been running for over a day! I agree, it is a large domain, but shouldn't the wizard have asked me whether I wanted to do that or not?

BTW: when using the capacity planner tool, you have to start an inventory job explicitly, something I hardly ever use.

Wednesday, January 16, 2008

VMware Capacity Planner vs. Platespin PowerRecon

I have been asked to compare both products, so here you go... You will notice that it is virtually impossible to say which product is better, because they are fundamentally different in their approach to monitoring performance and creating consolidation scenarios. Except for one big difference...



Existing Comparisons


Peter van den Bosch has already done an analysis (see this PDF) of the differences between both products. However, it feels to me like comparing apples to bananas. Criteria like 'Needs a database' and 'Needs IIS' are listed alongside real features like 'Automatic discovery of servers'. Furthermore, features like being able to export to HTML are said not to be available in the VMware product, although (by construction) everything is presented in HTML. I do not blame Peter for this, precisely because both are different products doing the same thing in a different way.

 

I would like to compare the situation with the use of an email client: Outlook versus YAWEC (yet another web email client, think hotmail, gmail, etc.). Both offer interfaces to our email inbox, but both products do it in a completely different way. A feature that is very relevant for Outlook, synchronization and mail fetching for instance, is completely irrelevant for the web client because no synchronization has to occur.

 

That said, I will make an attempt to further explain the differences between both products and what I like and don't like about each.

 

Functionality and Differences

The following components can be found in both products:

  1. A monitor to get the performance data
  2. A database to store the performance data
  3. A 'client' able to analyze the data and create consolidation scenarios and other reports based on the data

The monitors of both Capacity Planner and PowerRecon enable one to monitor Windows and UNIX machines without having to install an agent. If required, monitors can be load-balanced. Before actually monitoring performance, in both tools one has to run an inventory scan at least once. This inventory scan looks up the configuration of the systems, the software that is installed, the services that are running, etc.

Obviously it is possible to select different accounts for different servers to connect with. This is all relatively easy to set up and use.

Both tools also have a database, but the main difference is that the database for Capacity Planner is hosted on a VMware server (https://optimize.vmware.com) whereas PowerRecon installs/uses a local database on the monitor (or another database server). There is a difference in the way of storing data, but this is out of scope for the current post.

Consequently, when we come to the client part (the third component mentioned above), there are differences in how this is implemented: Capacity Planner is accessed using a web-interface whereas PowerRecon uses a client that can be installed on any Windows box (and also uses a web interface on the monitor server). The two types of client each have their own advantages.

 

Which one is better?

Whether you like one or the other primarily has to do with personal preference, and every benefit has an associated drawback (this is real life, after all):

What is nice about VMware Capacity Planner:

  • Very light and easy monitor installation. The drawback is that on the monitor, no overview of the performance data is available, only an average and the latest value.
  • Information is uploaded to the VMware capacity planner website, which enables one to analyze the performance data from anywhere, at any time.
  • Additionally, it gives you the opportunity to compare your statistics to other systems in the database (Industry Average).

The main drawback is that in my experience the site is not always very responsive.

What is nice about PowerRecon:

  • Information is stored locally (but can be moved and analyzed off-site) in a database.
  • Because the data is locally available, and a fat client is installed to analyze this data, one can get a near real-time view of performance characteristics.

The drawback is that the installation is more involved and implies setting up IIS on the monitor server. Consequently, the requirements for this server are higher.

 

Main Difference: Monitoring VMs

Is there no difference that really distinguishes the two and has nothing to do with the way they work?

Yes, there is one big difference: PowerRecon has the ability to monitor a virtual infrastructure and virtual machines. Whereas you can monitor a virtual machine with Capacity Planner, the results for CPU utilization are not to be trusted. PowerRecon (given the correct license) connects to the VI server in order to get performance data, which is the only correct way of working.

Clearly, Capacity Planner is aimed at consolidation projects, and the people at VMware have produced a product that serves that purpose perfectly.

 

Note: I only scratched the surface in this post; no details are given. Both products have a different data model, slightly different feature sets, completely different licensing schemes, etc. If you are interested in additional information, just let me know in the comments...

Thursday, January 10, 2008

VMware: Extracting info from VMX files

In order to get some information on the exact names of VMs and the VMDK files they have mapped, I created a dump of all .VMX files on the SAN and used the following sed script to extract some relevant data:

# Print a header for each VMX file (each dump starts with a '#!...' line)
s/#!.*/Configuration:/p
# Indent and print only the settings we are interested in
s/displayName/ &/p
s/scsi.:.\.fileName/ &/p
s/sched.swap.derivedName/ &/p

The script can be run using the following command:
$  sed -n -f "sed_script" "config-list"

The result is:
Configuration:
displayName = "Service Desk 5.2"
scsi0:0.fileName = "Service Desk 5.2.vmdk"
sched.swap.derivedName = "/vmfs/volumes/46363464-80...

If you want, it can even be formatted in other ways:
s/displayName = \"\(.*\)\"/\n\1\n/p
s/scsi.:.\.fileName = \"\(.*\)\"/ \1/p
s/sched.swap.derivedName = \"\(.*\)\"/ \1/p

Resulting in:
Service Desk 5.2
Service Desk 5.2.vmdk
/vmfs/volumes/46363464-80...


Tuesday, January 08, 2008

Capacity Monitoring for Desktops?

With the rise of VDI (Virtual Desktop Infrastructure, see here for a comprehensive overview) and the momentum it has, one starts to ask similar questions as with conventional server consolidation: what type of virtualization platform is required, how many users/desktops can I host, will there still be room for scaling, etc.

The way to approach this in server virtualization projects is by means of capacity monitoring and planning using virtualization scenarios. We refer to earlier posts for more information about this topic.

The question we are asking: can we use the same concepts and ideas for desktop virtualization? My answer is 'NO', because:

  1. A user behaves completely differently from a process or service: less predictable, depending on mood, depending on the time of day, etc.
  2. Some end-user applications ask for 100% of the CPU even while they are not doing anything. The classic example used to be the game 'Pinball'. Even a screensaver can take a large amount of CPU power.
  3. In a VDI context, this becomes even more important. For instance: why would you scale your virtual desktop CPU and memory to include desktop search if really nothing personal can be found on the hosted desktop?
  4. When starting and running 10 applications at the same time, we will probably only use 5 of them later, but still... they require CPU power (and memory) while seeming idle.
  5. If a user has meetings half the time, does that mean his session is closed? Does it still require processing power? How can this be analyzed?
  6. Etc.

In other words, a desktop environment is inherently different from a server environment. This is why in my opinion it is harder to maintain a Citrix farm than a VMware farm: applications and users tend to be less predictable and 'stable' than servers.

Does that mean that we cannot do any sizing or planning in a VDI context? On the contrary; we should just keep in mind that applying exactly the same reasoning as for server consolidation planning is not a good idea.

A few more tips:

  • Make sure you have an overview of running processes and their CPU/memory utilization. This helps in deciding what is really important.
  • Be careful with averages and peaks: a PC that is powered on around the clock will show a low average even if it is used heavily during the day, and a PC may run at 100% during the day simply because of a few applications.
  • Take into account inactive time during the day.
  • Do not try to analyze the 'average' user; instead, create classes (say 5 or so) that each have typical characteristics. Use these classes to create the virtualization scenarios.

Friday, January 04, 2008

Octopus Platform: VDI 2.0

I came in contact with the people of e-BO yesterday. Their Octopus platform is truly amazing. In fact, this company already deployed VDI infrastructures years ago, when the term VDI did not even exist! Moreover, the whole concept is not even tied to VDI (hosted desktops), but can include other types of application/desktop provisioning as well.

Under the hood, an abstraction layer is created between the user front-end (thin or fat client) and the desktop/application provisioning backend. This layer makes it possible to do advanced things like load balancing, setting connection policies, secure channels, failover, etc.

Definitely worth a look!

Wednesday, January 02, 2008

Chargeback

On the VMworld website, you can find a featured presentation concerning chargeback. The presentation is by the CEO of VKernel, a company that I have mentioned on this blog before.

Chargeback, as pointed out in the presentation, is about 'measuring' and accounting for two types of costs: consumables and non-consumables. The latter have to do with floor space, licenses, support and administration, etc. Consumables refer to the utilization of CPU, memory, power, etc. Obviously, performance monitoring is critical in this respect. Check out the presentation (registration required)!
