vSphere 5.5 and ESXi 5.5

Hi all,

Today I am not writing about a particular problem or thing I stumbled upon. The news I want to share is already somewhat old (26 August 2013), too: VMware has announced vSphere 5.5 and ESXi 5.5!

Why am I posting this? Besides some cool new features around virtual hardware version 10, VDP, and the hypervisor itself, a major change that will affect how we use vCenter in our company is: full Mac OS X client integration (including the plug-in for the vCenter Web Client).

Now, isn't that great news? 😉

Here's a short sheet about what's new: http://blogs.vmware.com/vsphere/files/2013/09/vSphere-5.5-Quick-Reference-0.5.pdf

And here's the long story: http://www.vmware.com/files/pdf/vsphere/VMware-vSphere-Platform-Whats-New.pdf

All the best,

maybeageek

Execution error: E10056: Restore failed due to existing snapshot. Job Id: (Full Client Path:)

After a while of backing up VMs via vSphere Data Protection (VDP), the backup jobs for four VMs failed. The error message said the VMs needed disk consolidation.

After the consolidation everything started to work again for three VMs, but not for the fourth. Now I was getting the following error:

Execution error: E10056: Restore failed due to existing snapshot. Job Id: <job-id> (Full Client Path:)

The GUI said nothing about needed consolidation, no snapshots were listed either, and if you look into the VM's configuration you see that the hard disk points to the base .vmdk, not to a -000001.vmdk snapshot file. So everything seemed to be in order, right?

After reading some articles I found a VMware KB entry: VDP Backup fails

The solution therein: old -000001.vmdk files were lying around unused on the datastore, referenced by nothing at all. Simply deleting them helps (though moving them to another location first is recommended, just to be on the safe side).
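If you want to hunt for such leftovers yourself, here is a minimal sketch from the ESXi shell; the datastore and VM names are just placeholders, and you should double-check (or better, move aside) every hit before deleting anything:

# list all delta/snapshot disks on the datastore ("datastore1" is a placeholder)
find /vmfs/volumes/datastore1 -name "*-00000*.vmdk"
# check whether a delta disk is still referenced by its VM's .vmx ("myvm" is a placeholder);
# no match means it is a likely orphan
grep "myvm-000001.vmdk" /vmfs/volumes/datastore1/myvm/*.vmx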

So with this everything is up and running again! Thanks, VMware!

vSphere Data Protection 5.1: Backup fails for Windows Server 2008 R2 VMs

So today I got to the bottom of another interesting case concerning backups with vSphere Data Protection.

After deploying the virtual appliance, registering it with the vCenter Server, and creating backup jobs, something interesting happened: Linux VMs were backed up just fine, whereas Server 2008 R2 VMs threw errors.

To make a long story short: it has to do with the UUIDs of the virtual hard disks and Windows VSS, and the fix is quite easy, as can be seen in this KB from VMware:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2035736
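If I remember the KB correctly, the fix boils down to a single advanced configuration parameter per affected VM (set while the VM is powered off, either by editing the .vmx directly or via the configuration parameters dialog in the client); please double-check against the KB itself:

disk.EnableUUID = "FALSE"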

esxtop not working in OS X terminal

Also, as I did some troubleshooting lately and came across this issue, here is how to resolve the problem with the OS X Terminal and esxtop:

Simply change the terminal emulation setting from xterm-256color to plain xterm. Voilà, it works!
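If you don't want to touch the Terminal profile itself, you can also just override the TERM variable for the session (the host name below is a placeholder):

TERM=xterm ssh root@esxi-host
# or, once already logged in on the ESXi shell:
export TERM=xterm
esxtop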

Thanks to Punching Clouds: http://www.punchingclouds.com/2013/01/30/esxtop-data-display-issues-with-osx-terminal-application/

Registering vSphere Data Protection to vCenter does not work…

So it seems that when you install vSphere Data Protection and want to use a dedicated user that is not Administrator or root, you need to grant that user (in this installation it was called datarecovery, left over from the old version) permissions directly at the vCenter level. Just putting that user into an Active Directory group will not suffice; registration with vCenter will then simply fail with an error.


vSphere/VMware: failed to connect virtual device ethernet0

failed to connect virtual device ethernet0

That message greeted us for every single VM after a major breakdown in a data center. The breakdown was seen as a welcome opportunity to upgrade everything from 4.1 to 5.1. And since everything was broken anyway (although the VMs continued to run, yay VMware ;-)), no one bothered going the proper path; instead, some ESXi hosts were evacuated, re-installed with 5.1, a new vCenter was created, and the VMs were imported into it.

What was happening?

The GUI gave no hint as to what was wrong. But in the ESXi host log files something gave away what was going on: “vShield filters cannot be found for ethernet0”. Now, that is a clue, indeed!

The old 4.1 environment was running with everything filtered through vShield, whereas it was decided not to use vShield in the new 5.1 setup anymore. But the following two lines had to be removed from every single VM's .vmx file for everything to work as it should:

ethernet0.filter0.param1 = "uuid=5006f477-a2df-b018-b331-b2b61f1b95f9.000"
ethernet0.filter0.name = "vshield-dvfilter-module"
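If you have to do this for a lot of VMs, a small sweep from the ESXi shell saves some clicking. This is only a sketch: the datastore path is a placeholder, the VMs should be powered off, and the loop keeps a backup of every .vmx it touches:

# remove all ethernet0.filter0.* lines from every .vmx on the datastore ("datastore1" is a placeholder)
for vmx in /vmfs/volumes/datastore1/*/*.vmx; do
  cp "$vmx" "$vmx.bak"
  sed '/^ethernet0\.filter0\./d' "$vmx.bak" > "$vmx"
done

Afterwards each VM has to be reloaded or re-registered so the change is picked up (vim-cmd vmsvc/getallvms and vim-cmd vmsvc/reload <vmid> on the host will do).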

So, people, beware of vShield when moving VMs from one cluster to another.

All the best,

Thomas

vShield Manager 4.1 spikes in CPU usage, acting up…

For a couple of weeks now we have let vSphere monitor our VMs in addition to the Nagios monitoring we have had from the beginning.

The vShield Manager VM started to show odd behavior: CPU spikes to 100% every 10 to 30 minutes, then back to normal. Operation otherwise worked as usual, and nothing showed up in the log files (although you don't really get to see the system log files in the interface).

Somewhere I read something about the MySQL database having problems, and that a reboot would fix that. So, reboot… After all, what could go wrong, right? The individual firewall VMs work independently of the Manager, so nobody should even notice… So far, so good.

The vShield Manager VM did not boot up properly again. You could not log in via the web interface. On the console you'd see only the message “System startup is not complete. Please logout and log back in after a few minutes.”

After some hours, still the same. The console (you don't get a full bash, just a stripped-down management shell that cannot do much) did not show much either. One thing made us curious though: “show filesystems” said something about a mount file that could not be read.

A bad foreboding struck my colleague and me: filesystem corrupt? Without a bash there is not much you can do. So we downloaded a live Linux CD, rebooted the VM into it, and saw it in an instant: the filesystem was simply full. 0 bytes free. OK, that definitely is a reason for MySQL and Tomcat not to start, and definitely a reason why nothing shows up in the log files.

Now, with a bash at hand, we investigated, and our first suspicion was the log files. But we found only 20 MB worth of log files in /var/log. Everything was rotated, gzipped, and deleted as it should be. So what was going on?

Turns out VMware forgot about the Tomcat log files. Tomcat saves its own log files into its own directory, and that directory was full of catalina log files going back to the very day the system was installed.

Simple fix: delete the old junk no one needs, and the vShield Manager reboots like a charm.
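For anyone in the same spot, the check and the cleanup are trivial once you have a real shell (via the live CD or otherwise). The Tomcat log path below is only an assumption, so locate the catalina files on your appliance first:

# find the filesystem that is full
df -h
# locate the Tomcat log directory (the exact path on the appliance is an assumption, hence the search)
find / -name "catalina.*" 2>/dev/null
# then remove the old rotated logs, e.g. everything older than 30 days
find /path/to/tomcat/logs -name "catalina.*" -mtime +30 -exec rm {} \;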

So, VMware, I'd guess you have already fixed that in a newer version, although I have not checked yet. If not: shame on you. Leaving admins without a proper bash or any other good way to determine what is going on is a shortfall on your side!

P.S.: No hard feelings though, VMware. I still love you 🙂