Virtual machine “Edit Settings” is greyed out or not available for a virtual machine in the Virtual Infrastructure Client (VIC)

February 3, 2014

1) PuTTY to a host in the cluster (I always PuTTY to the host the target vm is on) and open the vm's configuration backup file, the copy ending in vmx~ (the one with the tilde on it). Do not edit the .vmx file directly.

2) Using the vi editor, change the value of the CD-ROM device placeholder line from "TRUE" to "FALSE". The line looks like this:

ide0:1.present = "TRUE"

2a) Press i to enter insert mode, then make the edit
2b) Press the Esc key to exit insert mode
2c) Type :wq to save the file and exit vi
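If you prefer a non-interactive edit, the same change can be scripted with sed. This is just a sketch: the function and the example path are placeholders, and it assumes a sed that supports in-place editing (classic ESX service console has GNU sed; busybox variants may differ).

```shell
#!/bin/bash
# Hedged sketch of step 2 without vi: flip the CD-ROM placeholder line to
# FALSE. Pass in the file from step 1 (see the note above about the tilde
# copy); a .bak copy is kept so the change can be reverted.
disable_cdrom_placeholder() {
    local vmx=$1
    # Edit in place, keeping a backup of the original file
    sed -i.bak 's/^ide0:1.present = "TRUE"/ide0:1.present = "FALSE"/' "$vmx"
}

# Example (placeholder path):
# disable_cdrom_placeholder /vmfs/volumes/yourdatastore/yourvm/yourvm.vmx~
```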

3) Restart the management agents on the ESXi host, either over PuTTY or via iLO to the ESXi host's DCUI.

4) Check whether the vm's "Edit Settings" option is available in VIC (it probably won't be yet).

5) Disconnect the host.

6) Reconnect the host.

7) "Edit Settings" is available now. Change the CD-ROM to Client Device, passthrough IDE mode.

8) Migrate vm or enter Maintenance Mode.


VMware HA Cluster Failure – Split Brain Interrogation

September 13, 2010

If one or more VMware ESX cluster nodes have suffered a hard crash or failure, you must reintroduce them into the cluster by following the steps below, for each host one at a time. This guide is helpful when multiple ESX hosts in an HA cluster have crashed due to a power outage, massive hardware failure, etc., and the HA service on some or all of the ESX nodes in the cluster is non-functional. Furthermore, virtual machines may have been displaced by the (God forbid this ever happens to you) "split-brain" scenario.

It may be useful to query the cluster for your HA Primaries with PowerShell first. I use VMware PowerCLI and run this simple script I call Get-HA-Primaries.ps1:

Connect-VIServer YourVirtualCenterServerNameHere
((Get-View (Get-Cluster YourESXClusterNameHere).id).RetrieveDasAdvancedRuntimeInfo()).DasHostInfo.PrimaryHosts

This will output what the cluster currently knows about HA Primaries.

1) At the root of the cluster, set DRS to "Manual" so that automatic migrations do not start until all nodes are correctly configured and back in the cluster. In Virtual Center, right-click the root of the cluster and choose "Edit Settings", click "VMware DRS", set it to "Manual" and click OK.

2)      Power on the ESX host if it is off and watch it from the console to make sure it boots properly.

3)      Next, log into the SIM page of the host (if applicable) as root to validate that the hardware is not displaying any obvious problems.

4) In Virtual Center, verify that the ESX host is back in the cluster. If the host shows disconnected or has any HA errors, do steps 5 thru 8 in their exact order.

5) Restart the Virtual Center Server service, "VMware VirtualCenter Server".

6) Run the following commands from the problematic ESX host's console (KVM, local console or PuTTY) as root (or via sudo):

        service vmware-vpxa restart

        service mgmt-vmware restart

        service xinetd restart

7) Verify that the VMware core services are running on the host server by typing:

         ps -ef | grep hostd

It should show results similar to the following, which confirms that hostd is running:

root      1887     1  0 Oct31 ?        00:00:01 cmahostd -p 15 -s OK
root      2713     1  0 Oct31 ?        00:00:00 /bin/sh /usr/bin/vmware-watchdog -s hostd -u 60 -q 5 -c /usr/sbin/hostd-support /usr/sbin/vmware-hostd -u
root      2724  2713  0 Oct31 ?        00:11:41 /usr/lib/vmware/hostd/vmware-hostd /etc/vmware/hostd/config.xml -u
root     21263 12546  0 11:34 pts/0    00:00:00 grep hostd
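One small refinement to the check above: bracketing the first letter of the pattern keeps grep from matching its own entry in the process list, so an empty result really means hostd is down. This is a generic grep idiom, not something specific to ESX.

```shell
# Same check as above, but '[h]ostd' cannot match the grep command's own
# line in the ps listing; if nothing prints, hostd is not running.
ps -ef | grep '[h]ostd' || echo "hostd is not running"
```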

End of host commands

        8) Reconfigure HA within VMCenter by right-clicking the VM host and selecting "Reconfigure for HA". If any HA or connection errors persist, try disconnecting and reconnecting the host; these are both right-click operations on the host from within VMCenter. You may be asked to re-authenticate the host to VMCenter; simply provide the root password for the host if the wizard prompts you.

If the host cannot be re-connected after following these steps, call either the VMware lead or VMware support at 1-877-4VM-Ware.

If the host becomes connected and operational, you may have VM guest registration issues.

There are several different scenarios that may require you to remove and re-add the virtual machines back into inventory. If multiple hosts crash simultaneously, you will most likely have HA issues that create a state known as "split-brain", where virtual machines are split around the cluster due to the SAN locking mechanism used by the ESX host servers. This results in more than one host "thinking" it has the same virtual machine registered to it. Also, the SAN locking on the hosts could have locks on a guest's vswap file on several hosts at the same time. You must release the lock manually on each host with the outdated vswap file location info, which is time consuming. The virtual machine(s) will not boot until the lock is freed. The following commands let you see where the lock is located (always on either vmnic0 or vmnic1) by enumerating the MAC address, which tells you which host has the invalid data:

vmkfstools -D /vmfs/volumes/sanvolumename/vmname/swapfile

tail -f /var/log/vmkernel

Once you identify the host, reboot it to flush the memory and locks to force the release of bad, outdated vm inventory data. Be sure to migrate all of the guests off and put the host into maintenance mode prior to rebooting it.

If the MAC indicates that the vm guest is actually locked on the host the guest is attempting to boot from, simply delete the vswap file and let the guest re-create it upon booting. You can tell that the host running the command is the owner when the output contains all zeroes in the hex field where the MAC address would otherwise be. The vswap file is in the virtual machine's folder in /vmfs/volumes/sanvolumename/vmname.

To view vm registration on a host, view /etc/vmware/hostd/vmInventory.xml

This is the ESX host's local database file for vm inventory.

You can also list the registered vms by running vmware-cmd -l from any directory.
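To pull just the registered .vmx paths out of that inventory file, a quick grep works. This is a sketch: the <vmxCfgPath> element name is taken from the ESX 3.x/4.x vmInventory.xml format, so verify it matches your host's file before relying on it.

```shell
#!/bin/bash
# Hedged sketch: list the .vmx paths registered in the host's local
# inventory database, one per line. Assumes the ESX 3.x/4.x file format
# where each entry carries a <vmxCfgPath> element.
list_registered_vms() {
    local inv=${1:-/etc/vmware/hostd/vmInventory.xml}
    # Pull the elements out, then strip the surrounding XML tags
    grep -o '<vmxCfgPath>[^<]*</vmxCfgPath>' "$inv" \
        | sed -e 's/<[^>]*>//g'
}
```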

Good luck.


How To – Commit VMware snapshots

September 13, 2010

How To – Commit VMware snapshots
Snapshot activities are far more consistent and reliable when using the ESX host's Service Console in lieu of the vCenter GUI.
VMware recommends having free space equal to the snapshots and base disk size before committing snapshots.
If you do not have enough free space on the source LUN, migrate to another disk that has enough free space and consolidate the snapshots into a new virtual disk file (VMDK).

Virtual machines with snapshots ironically cannot be migrated with Storage VMotion. The virtual machine will need to be powered off and its files manually migrated in a cold state.

For more information on consolidating disk files, see Consolidating snapshots (1007849).
To commit snapshots to a base disk from the command-line:
1. Find the path to the VMX file of the virtual machine either from the Virtual Infrastructure Client or by running the following command:
sudo vmware-cmd -l

2. Determine if the virtual machine has snapshots:
sudo vmware-cmd /vmfs/volumes/VM_DATA_C000_01/SomeVirtualMachine1.vmx hassnapshot
The output will look like one of the following:
hassnapshot() =
hassnapshot() = 1
If the result is not equal to one (1), there are no snapshots for the virtual machine and there is no reason to proceed further.

3. Remove (or commit) the snapshot by running the following command:
sudo vmware-cmd /vmfs/volumes/VM_DATA_C000_01/SomeVirtualMachine1.vmx removesnapshots
removesnapshots() = 1
If the result is one (1), the snapshots have been successfully committed. If the result is something other than one (1), file a Support Request with VMware Support and note this KB Article ID in the problem description. Note: The above procedure deletes all snapshots on the virtual machine and commits the changes in the delta disks to the base disk, so the base disk then holds all changes to the data.
This process can take over an hour to complete. It all depends on the amount of snapshot deltas and the size of the disks to be committed.
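The check-then-commit sequence above can be wrapped in a small script so removesnapshots is never run against a vm with nothing to commit. This is a sketch: the VMWARE_CMD variable and the example path are placeholders; on a real host it simply calls vmware-cmd, and the .vmx path comes from "vmware-cmd -l".

```shell
#!/bin/bash
# Hedged sketch of steps 2-3: commit snapshots only when hassnapshot reports
# one. VMWARE_CMD defaults to the real tool but can be pointed at a stub for
# a dry run.
commit_snapshots() {
    local vmx=$1
    local cmd=${VMWARE_CMD:-vmware-cmd}
    # "hassnapshot() = 1" means delta disks exist for this vm
    if "$cmd" "$vmx" hassnapshot | grep -q '= 1'; then
        "$cmd" "$vmx" removesnapshots
    else
        echo "no snapshots on $vmx -- nothing to commit"
    fi
}

# Example (placeholder path):
# commit_snapshots /vmfs/volumes/VM_DATA_C000_01/SomeVirtualMachine1.vmx
```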


How To – Properly Rename a VMware virtual machine

September 13, 2010

How To – Properly Rename a virtual machine:
This wiki describes the process by which a virtual machine should be properly renamed in the virtual infrastructure
and does not cover the overall process of renaming a server (DNS, monitoring, backups, AD, OS name, etc.).
1) Power off the virtual machine.
2) Make note of the Datastore the virtual machine resides in. Example Datastore:(VM_DATA_C000_01)
    Note all Datastore examples need to be replaced with the actual Datastore your virtual machine is in.
3) Right-click the virtual machine you want to rename in VMCenter and remove it from inventory. This unregisters the virtual machine from VMCenter. (Do not delete the files from disk).
4) SSH (Putty) into one of the hosts of the cluster the virtual machine is in.
5) Switch your user context to root with sudo su -
6) Rename the virtual machine's directory, which moves all of its files, with mv /vmfs/volumes/VM_DATA_C000_01/OLD_VIRTUAL_MACHINE_NAME /vmfs/volumes/VM_DATA_C000_01/NEW_VIRTUAL_MACHINE_NAME. Example:
mv /vmfs/volumes/VM_DATA_C000_01/VMWEB01VD /vmfs/volumes/VM_DATA_C000_01/VMWEB01VT
   This will move your vmdk, vmx, vmsd, vmxf, and nvram files to the new directory
7) Navigate to the virtual machine's new folder with cd /vmfs/volumes/VM_DATA_C000_01/VMWEB01VT
8) Rename the virtual disk file(s) with vmkfstools -E VMWEB01VD.vmdk VMWEB01VT.vmdk
   This command renames the .vmdk, flat.vmdk, and updates the .vmdk pointer. You may have to run this for any additional virtual disks present.
9) Use nano to modify the .vmx and .vmxf files to reflect the new virtual machine name. Nano is similar to the vi editor, so pay attention to the menu legend at the bottom of the screen for options. Be careful with nano; it can put an inadvertent carriage return in the file, which then keeps the vmx file from being properly recognized as a vmx file. Example: nano /vmfs/volumes/VM_DATA_C000_01/VMWEB01VT/VMWEB01VT.vmx
   nano /vmfs/volumes/VM_DATA_C000_01/VMWEB01VT/VMWEB01VT.vmxf
   Find all virtual machine name references in these files and replace them with the new virtual machine name.
Also, rename the files themselves after editing them: the .vmx, .vmxf, .vmsd, and .nvram files.
10) Register the virtual machine in VMCenter.
    Navigate to any of the hosts in the target cluster.
    Right click on the datastore the virtual machine is in.
    Choose “Browse Datastore”.
    Open the virtual machine's folder, right click on the [servername].vmx file and choose “Add to Inventory”. Follow the steps in the wizard.
Refer to Appendix A of the Virtual_Machine_Migration.doc if you have any issues getting console access to the virtual machine in VMCenter once it is back into inventory.
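The file-renaming part of the procedure (steps 6, 7 and 9) can be sketched as a small function. This is a sketch only: the names are placeholders, the sed pass stands in for the manual nano edits, and the virtual disks themselves still require step 8 (vmkfstools -E); a plain mv is NOT safe for .vmdk files.

```shell
#!/bin/bash
# Hedged sketch: rename the sidecar files and rewrite old-name references
# inside the config files. old/new/dir are placeholders for your vm.
rename_vm_files() {
    local old=$1 new=$2 dir=$3
    cd "$dir" || return 1
    # Step 9's file renames: vmx, vmxf, vmsd and nvram (vmdk needs vmkfstools -E)
    for ext in vmx vmxf vmsd nvram; do
        [ -f "$old.$ext" ] && mv "$old.$ext" "$new.$ext"
    done
    # Replace every old-name reference inside the config files
    for f in "$new.vmx" "$new.vmxf"; do
        [ -f "$f" ] && sed -i "s/$old/$new/g" "$f"
    done
    return 0
}

# Example (matches the names used above):
# rename_vm_files VMWEB01VD VMWEB01VT /vmfs/volumes/VM_DATA_C000_01/VMWEB01VT
```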


How To change the Service Console IP settings in a VMWare DRS, HA, VMotion Cluster

September 13, 2010

How To change the Service Console IP settings in a VMWare DRS, HA, VMotion Cluster:
Have Networking change DNS on only one server at a time as you change the IPs and VLAN IDs.
There's no sense disconnecting all of the hosts from VMCenter at the same time. Have someone in Networking available during the whole process. It only takes approx. 2 minutes per host. A reboot is not required.
1) Put the ESX host into Maintenance Mode (right click host in VMCenter, "Enter Maintenance Mode"). If the cluster node cannot be put into Maintenance Mode because of over provisioning, disconnect it from VMCenter.
Right click host in VMCenter, "Disconnect". All the guests will remain running during this time. You may have to enter the root password when reconnecting the host into the cluster.
2) Wait for all the guests to migrate to another node in the cluster. If disconnected, you cannot migrate any VM's so you must proceed to re-IP the host and change the VLAN ID.
3) Console into the host. Do not use Putty because you will lose connectivity to the host. Use a KVM or local console.
4) Change the Service Console IP and VLAN ID. See steps A thru G below.
Once the VLAN ID for the service console is changed, the host will no longer be able to rejoin the cluster until all ESX hosts in the cluster have the same VLAN ID (an HA requirement).
From KVM or local console on the ESX host:

A) Change IP address (-i sets the new IP, -n the netmask):
esxcfg-vswif -i XXX.XXX.XXX.XXX -n XXX.XXX.XXX.XXX vswif0
B) Change Default Gateway on Service Console:
nano -w /etc/sysconfig/network
C) Change VLAN ID to "200" on vSwitch0:
esxcfg-vswitch -v 200 -p "Service Console" vSwitch0
D) Restart networking (or reboot the host):
service network restart
To see the changes reflected in VMCenter:
service mgmt-vmware restart
E) Now disconnect the host from VMCenter and then re-connect it by right-clicking the host and choosing the Disconnect and then the Connect option. This is only necessary if you did not have to disconnect the host earlier.
F) Right click host and Exit Maintenance Mode in VMCenter
G) Make sure Cluster DRS, HA and VMotion are all re-enabled after all hosts are done.


VMware ESX and Virtual Center Connectivity

September 13, 2010

Common Problem – You cannot connect Virtual Center to a specific ESX host server or the ESX host is "Not Responding"
If an existing ESX host, for some reason, goes into (not responding) mode, use the following steps to troubleshoot.
This also applies if an attempt to add a new host fails.
General questions to think about:
Can you connect via SSH to the host directly?
Will the host answer to PING requests?
Is the host running at all?
Can you connect your VI client directly to the ESX server host?
Yes: If so, there is some other network problem between Virtual Center and the ESX host, or there may be problems with the vpxa daemon on the ESX host. However, you have just proven that the ESX host server is running and its Service Console communications are intact.

1) Validate the network connectivity between the Virtual Center server and the ESX host. Try a PING request from Virtual Center to the ESX host directly.
2) Make sure the vpxa daemon is running on the host. Gain console access to the ESX host and issue the following command:
service vmware-vpxa restart
Wait a few minutes. You will most likely see screen refreshes in your Virtual Infrastructure client as the target ESX host
and Virtual Center communicate. This will usually fix this issue and the ESX host will generally become accessible via Virtual Center.
No: If your Virtual Infrastructure client cannot connect directly to the ESX server host, try using an SSH connection. If SSH is functional but the Virtual Infrastructure client is not, then the hostd daemon on the host is probably not running, but the Service Console network communications are intact. Restart the hostd daemon by issuing the following command:
service mgmt-vmware restart
– or –
Reboot the ESX server host
If SSH is NOT functional but PING requests are answered, then you have multiple problems. Perhaps SSH and hostd are both down, while the host is running and the Service Console network communications are intact.
Reboot the ESX host server
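The branching above can be condensed into one lookup. The sketch below encodes it as a function: each argument is the result of one of the checks (1 = works, 0 = fails), and the output is the next step the text recommends. The function name and messages are mine, not part of any VMware tooling.

```shell
#!/bin/bash
# Sketch: the yes/no triage above as a lookup. Arguments are the results of
# the VI-client, SSH and PING checks (1 = works, 0 = fails).
triage() {
    local vic_ok=$1 ssh_ok=$2 ping_ok=$3
    if [ "$vic_ok" = 1 ]; then
        echo "check VC-to-host network, then: service vmware-vpxa restart"
    elif [ "$ssh_ok" = 1 ]; then
        echo "hostd likely down: service mgmt-vmware restart (or reboot)"
    elif [ "$ping_ok" = 1 ]; then
        echo "SSH and hostd both down: reboot the ESX host server"
    else
        echo "host may be down entirely: check power and hardware"
    fi
}
```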


EMC PowerPath/VE (Virtual Edition) for VMware vSphere Install Reference Guide

September 13, 2010
EMC PowerPath/VE (Virtual Edition) for VMware vSphere
The purpose of this document is to describe the method for installing, configuring, and licensing PowerPath/VE on an ESX vSphere host system. For a quick reference of commands in sequential order, refer to page 9. The same methodology applies to ESXi.
PowerPath/VE (Virtual Edition) is used to manage the I/O and failover of storage devices on a VMware vSphere host. PowerPath/VE allows all LUNs to be managed and owned by the PowerPath plug-in in place of VMware’s NMP (native multi-pathing plug-in). There are several benefits to using PowerPath/VE. Most importantly, it provides full multi-pathing intelligence and utilization of all available paths, versus VMware’s Round Robin, MRU (most recently used) or Fixed Path algorithms. Performance characteristics of PowerPath/VE far exceed those of VMware NMP. Also, zoning ESX hosts on EMC storage is far easier with PowerPath/VE resident on the hosts.
VMware vSphere CLI and PowerPath/VE installation
PowerPath/VE is intended to be installed on both ESX and ESXi servers, so all interaction with PowerPath/VE is via remote access, never locally on the host's ESX service console. This is where the vSphere CLI comes in handy.
The installation can be executed on Windows XP or Windows 7, 32 and 64 bit. Commands must be run from within the bin directory unless you define an environment variable for it. Refer to the install path post install for bin location.
Using the CLI you can, among many other things, query the hosts with the following command (it gives the same output as the old ESX service console command esxupdate query):
1)      Put the target ESX host into maintenance mode
2) --query --server yourvspherehost
This command queries the host for all installed components, OS and patch version.
        You will be prompted for credentials for each and every command. Use root.

End of Page 1

Page 2

The output will list the installed components; note that PowerPath is not listed until it is installed.
Following these instructions is all that is required for a successful install.
The PowerPath/VE installation command is below; replace the server name, install path and version where needed. It’s best to copy this string into Notepad and then into the CLI to remove any formatting.
3) --server yourvspherehost --install --bundle=\\your-unc-path\
After this command executes (it takes approx. 2 minutes to complete), you will see the install confirmation output.
DO NOT REBOOT at this point. If the host you are performing the install on is attached to an EMC Symmetrix or CLARiiON array, you must follow the procedures described on pages 5 thru 8 before rebooting. Claim rules can ONLY be modified on I/O active LUNs after PowerPath is installed and prior to the post install reboot.
Run the query command again to validate the install.
4) --query --server yourvspherehost
The output will now include PowerPath/VE, confirming the install.

End of Page 2

Page 3

5)      Reboot
PowerPath/VE licensing and rpowermt remote administration
Licenses are installed and registered via the EMC rpowermt utility also referred to as RTOOLS. You can access the tool via a standard Microsoft Command Prompt. rpowermt is set in the server’s path, so commands can be executed from any context.
The EMC remote administration tool "rpowermt" is aware of this license file location because the following command was run to set the path variable:
set PPMT_LIC_PATH=E:\PowerPathVE\Licenses
Once defined, the host can be licensed using the following command:
rpowermt host=yourvspherehost register
Once registered, verify with this command:
rpowermt host=yourvspherehost check_registration
The output will be similar to this:
PowerPath License Information:
Host ID     : some ID hash here
Type        : unserved (uncounted)
State       : licensed
Days until expiration : (non-expiring)
License search path: E:\PowerPathVE\Licenses
License file(s):            E:\PowerPathVE\Licenses\license.lic

End of Page 3

Page 4

The ESX or ESXi host should now have PowerPath/VE installed and licensed in “unserved” mode. PowerPath/VE will be set as the owner of all Datastore LUNs.
During installation on ESX hosts attached to EMC Symmetrix arrays with LUNZ presented (C0:T0:L0), ESX claimrules must be altered to account for this. EMC CLARiiON attached hosts are not affected. LUNZ is also known as LUN Zero (0).
The presence of these devices causes several issues and errors on the ESX console.
Use the vSphere CLI to execute commands and queries against ESX 4.0 vSphere hosts.
Below is a post PowerPath/VE installation corestorage claimrule definition. Note that PowerPath “owns” rules 250 and above after the installation.
esxcli --server yourvspherehost corestorage claimrule list

End of Page 4

Page 5

ESX 4 (vSphere) introduces the concept of a PSA, a pluggable storage architecture. VMware corestorage claim rules control how the ESX/ESXi server utilizes storage presented to it. These claim rules are stored in the esx.conf file, which is parsed during each reboot or rescan of an HBA device by the host.
Refer to page 83 of the VMware esxcli reference guide.
Also, refer to VMware kb 1015084: Unpresenting a LUN containing a Datastore from ESX 4.x and ESXi 4.x
Run the following commands in this order on each individual ESX/ESXi host. Replace the hostname in the examples below with the desired host name. The claim rule altering commands must be performed for each HBA on the host that is attached to the SAN; note the separate commands for vmhba1 and vmhba2.
1) esxcli --server yourvspherehost corestorage claimrule list
This command is to list the claim rules for review.
2) esxcli --server yourvspherehost corestorage claimrule add --plugin MASK_PATH --rule 102 --type location -A vmhba1 -C 0 -T 0 -L 0
This command creates a new claim rule “102” that masks LUNZ from the host on HBA1.
3) esxcli --server yourvspherehost corestorage claimrule add --plugin MASK_PATH --rule 103 --type location -A vmhba2 -C 0 -T 0 -L 0

End of Page 5

Page 6

This command creates a new claim rule “103” that masks LUNZ from the host on HBA2.
4) esxcli --server yourvspherehost corestorage claimrule list
List the claim rules again for review. Note that newly added rules 102 and 103 are only 50% built prior to a mandatory reboot to fully reload the host’s storage system. You will only see the file reference of the rule and not the runtime rule until a reboot occurs.
Before reboot output:
102   file    location MASK_PATH adapter=vmhba1 channel=0 target=0 lun=0
103   file    location MASK_PATH adapter=vmhba2 channel=0 target=0 lun=0
After reboot output:
102   runtime location MASK_PATH adapter=vmhba1 channel=0 target=0 lun=0
102   file    location MASK_PATH adapter=vmhba1 channel=0 target=0 lun=0
103   runtime location MASK_PATH adapter=vmhba2 channel=0 target=0 lun=0
103   file    location MASK_PATH adapter=vmhba2 channel=0 target=0 lun=0
5) esxcli --server yourvspherehost corestorage claimrule load
This command loads the new claim rules into memory.
6) esxcli --server yourvspherehost corestorage claimrule run
This command runs the new esx.conf corestorage claimrule definitions.
7)      Reboot the ESX/ESXi host and verify that the desired LUN is masked by looking in vCenter > Configuration tab > Storage Adapters. Highlight vmhba1 and vmhba2 separately and verify that the LUN you wanted masked does not appear.

End of Page 6

Page 7

See the before view, with LUN 0 listed as a 2.81MB device owned by NMP.
After claim rules are set to mask the LUNZ devices on each host’s HBA following the instructions above, the console errors cease and the LUNZ device is no longer presented to the host.
If you skipped licensing, go back and complete the steps on pages 3 and 4 “PowerPath/VE licensing and rpowermt remote administration“.

End of Page 7

Page 8

PowerPath install commands quick reference:
--query --server yourvspherehost
--server yourvspherehost --install --bundle=\\your-unc-path\
--query --server yourvspherehost
PowerPath licensing commands quick reference
set PPMT_LIC_PATH=E:\PowerPathVE\Licenses
rpowermt host=yourvspherehost register
rpowermt host=yourvspherehost check_registration
ESX Claimrules quick reference
esxcli --server yourvspherehost corestorage claimrule list
esxcli --server yourvspherehost corestorage claimrule add --plugin MASK_PATH --rule 102 --type location -A vmhba1 -C 0 -T 0 -L 0
esxcli --server yourvspherehost corestorage claimrule add --plugin MASK_PATH --rule 103 --type location -A vmhba2 -C 0 -T 0 -L 0
esxcli --server yourvspherehost corestorage claimrule list
esxcli --server yourvspherehost corestorage claimrule load
esxcli --server yourvspherehost corestorage claimrule run
If deletion of a claimrule is required:
esxcli --server yourvspherehost corestorage claimrule delete --rule ###

End of Page 8