ESXi Host loses config after reboot, no remediation/update possible. altbootbank damaged.

Short story: A freshly installed ESXi host lost its config after the first reboot. Curiously, after a factory reset the config survived. However, once the host was joined to vCenter, no update was possible. Solution down below! 😉

Longer story: It has been quiet here. That is due to multiple factors, one of them being that our vSphere installation is running nice and smoothly.

But now we decided to re-install the hosts with a new image and join them to a new vCenter Server. Both hosts and vCenter had been upgraded again and again, and sometimes you just want to start over.

So, I installed the newest HP image, configured the host’s management interface, joined it to the vCenter and configured other things like the vMotion network and so on. After a reboot, the host did not reconnect to the vCenter Server. The DCUI showed no IP address whatsoever, and I couldn’t even assign a new one. No VMkernel NICs showed up in the ESXi CLI.

After a factory reset everything was there again, so like before I configured everything, joined the server to the vCenter, and everything seemed jolly. However, I noticed it wasn’t the newest build, so I tried using Update Manager to remediate the host.

It wouldn’t even stage the patches, so I went to the console and looked at the /var/log/esxupdate.log file. I sure found the problem:

There was an error checking file system on altbootbank, please see log for detail.

Solution: With this error message and Google right at hand, the solution was easy to find: VMware KB 2033564. Somehow the bootbank/altbootbank had been damaged; why, I cannot say. The important part is: it is fixable, and the host is now up and running with all the latest patches.
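For the record, the repair in the KB boils down to running a FAT filesystem check against the damaged bootbank partition from the ESXi console. A rough sketch of such a session (the device name below is purely an example; identify your own host’s device first, and treat the whole thing as an assumption to verify against the KB):

```shell
# Hypothetical sketch per VMware KB 2033564; the device name is an example,
# identify your own host's bootbank device before running anything.
vmkfstools -P /altbootbank                        # shows which device backs /altbootbank
ls /dev/disks/                                    # list disk devices on the host
dosfsck -a -v /dev/disks/mpx.vmhba32:C0:T0:L0:5   # auto-repair the FAT partition
reboot                                            # reboot so the host picks up the repaired bootbank
```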

vSphere 5.5 and ESXi 5.5

Hi all,

today I am not writing because of a certain problem or thing I stumbled upon. The “news” I want to share is somewhat “old” (26 August 2013), too: VMware announced vSphere 5.5 and ESXi 5.5!

Why am I posting this? Besides some cool new features in Hardware Version 10 and on the VDP and hypervisor side, a major change that will affect how we use vCenter in our company is: full Mac OS X client integration (including the plugin for the vCenter Web Client).

Now, isn’t that great news? 😉

Here’s a short sheet about what’s new:

And here’s the long story:

All the best,


Execution error: E10056: Restore failed due to existing snapshot. Job Id: (Full Client Path:)

After a while of backing up VMs via vSphere Data Protection (VDP) the backup jobs for four VMs failed. The message said they needed consolidation.

After the consolidation everything started to work for three of the VMs, but not for the fourth. Now I was getting the following error:

Execution error: E10056: Restore failed due to existing snapshot. Job Id: <job-id> (Full Client Path:)

The GUI said nothing about needed consolidation, no snapshots were created either, and if you look into the VM’s config you see that the hard disk points to a plain vmdk, not to a -000001.vmdk snapshot file. So, everything seemed to be in order, right?

After reading some articles I found a VMware KB entry: VDP Backup fails

The solution therein: old -000001.vmdk files lying around unused, referenced nowhere. Simply deleting them helps (but moving them to another location first is recommended, just to be on the safe side).
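That cleanup can be illustrated roughly like this (file names and the /tmp directory are made up for the demo; on the host the VM folder lives under /vmfs/volumes/&lt;datastore&gt;/). The idea: for each delta file, check whether the VM’s .vmx still mentions it, and if not, park it in a side directory instead of deleting it outright:

```shell
# Hypothetical sketch: move delta files the .vmx no longer references aside
# instead of deleting them. Names and paths are made up for this demo;
# on the host the VM folder lives under /vmfs/volumes/<datastore>/.
DIR=/tmp/vmdemo
mkdir -p "$DIR/orphaned"
touch "$DIR/MyVM.vmdk" "$DIR/MyVM-000001.vmdk"
printf 'scsi0:0.fileName = "MyVM.vmdk"\n' > "$DIR/MyVM.vmx"

for delta in "$DIR"/*-00000*.vmdk; do
  name=$(basename "$delta")
  # if the vmx does not mention this delta file, park it in orphaned/
  grep -q "$name" "$DIR/MyVM.vmx" || mv "$delta" "$DIR/orphaned/"
done
ls "$DIR/orphaned"
```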

So with this everything is up and running again! Thanks vmware!

vSphere Data Protection 5.1: Backup fails for Windows Server 2008 R2 VMs

So today I got to the bottom of another interesting case concerning backups with vSphere Data Protection.

After deploying the virtual appliance, registering it with the vCenter Server and creating backup jobs, something interesting happened: Linux VMs got backed up, whereas Server 2008 R2 VMs got errors.

To make a long story short: it has to do with the UUIDs of the virtual hard disks and Windows VSS, and the fix is quite easy, as can be seen in this KB from VMware:
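If I recall the KB correctly, the workaround is to turn off the disk UUID setting in the affected VM’s configuration (treat this as an assumption and verify it against the KB for your version); the VM has to be powered off while you edit the .vmx:

```ini
; Hypothetical sketch of the KB workaround: disable disk UUID reporting
; for the affected Windows Server 2008 R2 VM (edit with the VM powered off)
disk.EnableUUID = "false"
```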

esxtop not working in OS X terminal

Also, as I did some troubleshooting lately and came across this issue, here is how to resolve the problem with the OS X Terminal and esxtop:

Simply change the terminal emulation setting from xterm-256color to plain xterm. Voilà, it works!
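In practice that is a one-liner in the SSH session (esxtop itself of course only exists on the ESXi host):

```shell
# Override the terminal type for this session so esxtop renders correctly
# instead of printing escape-sequence garbage.
export TERM=xterm
echo "$TERM"
# esxtop   # run this on the ESXi host
```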

Thanks to Punching Clouds:

Registering vSphere Data Protection to vCenter does not work…

So it seems that when you install vSphere Data Protection and want to register it with a dedicated user that is not Administrator or root, you need to grant that user permissions at the vCenter level directly (in this installation the user was called datarecovery, left over from the old version). Merely putting that user into an Active Directory group will not suffice; registration with vCenter will then fail with an error.


vSphere/VMware: failed to connect virtual device ethernet0

failed to connect virtual device ethernet0

That message said hello for every single VM after a major breakdown in a data center. The breakdown was seen as a welcome opportunity to upgrade everything from 4.1 to 5.1. And since everything was broken anyway (although the VMs continued to run, yay VMware ;-)), no one bothered going the proper path: some ESXi hosts were simply evacuated and re-installed with 5.1, a new vCenter was created, and the VMs were imported.

What was happening?

The GUI gave no hint as to what was wrong. But in the ESXi host logfiles something gave away what was going on: “vShield filters cannot be found for ethernet0”. Now, that is a clue, indeed!

The old 4.1 setup ran with everything filtered through vShield, whereas it was decided not to use vShield in the new 5.1 setup anymore. But from every single vmx file of every VM the following two lines had to be removed in order for everything to work as it should:

ethernet0.filter0.name = "vshield-dvfilter-module"
ethernet0.filter0.param1 = "uuid=5006f477-a2df-b018-b331-b2b61f1b95f9.000"

So, people, beware of vShield when moving VMs from one cluster to another.
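Removing those entries from dozens of .vmx files by hand gets old fast; a one-line sed does it. Sketched here against a sample file (on the host you would point it at /vmfs/volumes/&lt;datastore&gt;/&lt;vm&gt;/&lt;vm&gt;.vmx with the VM powered off, and keep a backup):

```shell
# Hypothetical sketch: strip the vShield dvfilter entries from a .vmx file.
# Demonstrated on a sample file; on the host, edit the real vmx with the
# VM powered off and re-register it afterwards.
VMX=/tmp/demo.vmx
cat > "$VMX" <<'EOF'
ethernet0.present = "TRUE"
ethernet0.filter0.name = "vshield-dvfilter-module"
ethernet0.filter0.param1 = "uuid=5006f477-a2df-b018-b331-b2b61f1b95f9.000"
EOF

cp "$VMX" "$VMX.bak"                       # keep a backup before editing
sed -i '/^ethernet0\.filter0\./d' "$VMX"   # remove both filter0 lines
cat "$VMX"                                 # only ethernet0.present is left
```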

All the best,