How To – Properly Rename a VMware virtual machine

September 13, 2010

This wiki describes the process for properly renaming a virtual machine in the virtual infrastructure. It does not cover the overall process of renaming a server (DNS, monitoring, backups, AD, OS name, etc.).
1) Power off the virtual machine.
2) Make note of the Datastore the virtual machine resides in. Example Datastore: VM_DATA_C000_01
    Note: replace all Datastore examples below with the actual Datastore your virtual machine is in.
3) Right-click the virtual machine you want to rename in VMCenter and remove it from inventory. This unregisters the virtual machine from VMCenter. (Do not delete the files from disk).
4) SSH (Putty) into one of the hosts of the cluster the virtual machine is in.
5) Switch your user context to root with sudo su -
6) Rename the virtual machine's directory, which moves all of its files in one step:
mv /vmfs/volumes/VM_DATA_C000_01/OLD_VIRTUAL_MACHINE_NAME /vmfs/volumes/VM_DATA_C000_01/NEW_VIRTUAL_MACHINE_NAME
Example:
mv /vmfs/volumes/VM_DATA_C000_01/VMWEB01VD /vmfs/volumes/VM_DATA_C000_01/VMWEB01VT
   This moves your vmdk, vmx, vmsd, vmxf, and nvram files to the new directory.
7) Navigate to the virtual machine's new folder with cd /vmfs/volumes/VM_DATA_C000_01/VMWEB01VT
8) Rename the virtual disk file(s) with vmkfstools -E VMWEB01VD.vmdk VMWEB01VT.vmdk
   This command renames the .vmdk and -flat.vmdk files and updates the descriptor's pointer to the flat file. Run it again for each additional virtual disk present.
9) Use nano to edit the .vmx and .vmxf files to reflect the new virtual machine name. Nano is simpler than the vi editor; its commands are shown in the menu legend at the bottom of the screen. Be careful with nano: it can insert an inadvertent carriage return, which keeps the vmx file from being properly recognized as a vmx file. Example:
   nano /vmfs/volumes/VM_DATA_C000_01/VMWEB01VT/VMWEB01VT.vmx
   nano /vmfs/volumes/VM_DATA_C000_01/VMWEB01VT/VMWEB01VT.vmxf
   Find all references to the old virtual machine name in these files and replace them with the new name.
   Also rename the files themselves after editing them (the vmx, vmxf, vmsd, and nvram files).
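The host-side portion of this procedure (steps 6 through 9) can be sketched as a short shell session. The sketch below simulates it with placeholder files under /tmp, since /vmfs and vmkfstools exist only on an ESX host; on a real host you would work under /vmfs/volumes/<datastore> and rename the virtual disk with vmkfstools -E instead of plain mv. The datastore and VM names are the examples used above.

```shell
# Local simulation of host-side steps 6 through 9, using placeholder files
# under /tmp instead of a real datastore. On an actual ESX host the paths
# live under /vmfs/volumes/<datastore>, and the virtual disk rename is
# done with vmkfstools -E rather than mv.
DS=/tmp/VM_DATA_C000_01   # stand-in for the datastore
OLD=VMWEB01VD
NEW=VMWEB01VT

# Fabricate the old virtual machine folder and its files
mkdir -p "$DS/$OLD"
printf 'displayName = "%s"\n' "$OLD" > "$DS/$OLD/$OLD.vmx"
touch "$DS/$OLD/$OLD.vmdk" "$DS/$OLD/$OLD.vmxf" "$DS/$OLD/$OLD.nvram"

# Step 6: rename the virtual machine's directory
mv "$DS/$OLD" "$DS/$NEW"

# Steps 8-9 equivalent: rename each file, then fix name references in the .vmx
cd "$DS/$NEW"
for f in "$OLD".*; do mv "$f" "$NEW${f#$OLD}"; done
sed -i "s/$OLD/$NEW/g" "$NEW.vmx"
```

The same rename-and-edit pattern applies regardless of how many files the folder holds, which is why the loop renames by prefix rather than listing extensions.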
10) Register the virtual machine in VMCenter.
    Navigate to any of the hosts in the target cluster.
    Right click on the datastore the virtual machine is in.
    Choose “Browse Datastore”.
    Open the virtual machine's folder, right-click the [servername].vmx file, and choose “Add to Inventory”. Follow the steps in the wizard.
Refer to Appendix A of the Virtual_Machine_Migration.doc if you have any issues getting console access to the virtual machine in VMCenter once it is back into inventory.


How To change the Service Console IP settings in a VMware DRS, HA, VMotion Cluster

September 13, 2010

Have Networking change DNS for only one server at a time as you change the IPs and VLAN IDs.
There is no sense in disconnecting all of the hosts from VMCenter at the same time. Have someone from Networking available during the whole process. It only takes approximately 2 minutes per host, and a reboot is not required.
1) Put the ESX host into Maintenance Mode (right-click the host in VMCenter, "Enter Maintenance Mode"). If the cluster node cannot be put into Maintenance Mode because of overprovisioning, disconnect it from VMCenter instead:
right-click the host in VMCenter, "Disconnect". All guests remain running during this time. You may have to enter the root password when reconnecting the host to the cluster.
2) Wait for all the guests to migrate to another node in the cluster. If the host is disconnected, you cannot migrate any VMs, so proceed directly to re-IP the host and change the VLAN ID.
3) Console into the host. Do not use Putty because you will lose connectivity to the host. Use a KVM or local console.
4) Change the Service Console IP and VLAN ID. See steps A thru G below.
Once the Service Console VLAN ID is changed, the host will no longer be able to rejoin the cluster until all ESX hosts in the cluster have the same VLAN ID (an HA requirement).
From KVM or local console on the ESX host:

A) Change the IP address and netmask (-i sets the IP address, -n the netmask):
esxcfg-vswif -i XXX.XXX.XXX.XXX -n XXX.XXX.XXX.XXX vswif0
B) Change the default gateway on the Service Console (edit the GATEWAY= line):
nano -w /etc/sysconfig/network
C) Change VLAN ID to "200" on vSwitch0:
esxcfg-vswitch -v 200 -p "Service Console" vSwitch0
D) Restart networking (or reboot the host):
service network restart
To have the changes reflected in VMCenter:
service mgmt-vmware restart
or reboot the host.
E) If you did not have to disconnect the host earlier, disconnect it from VMCenter now and reconnect it: right-click the host and choose Disconnect, then Connect.
F) Right click host and Exit Maintenance Mode in VMCenter
G) Make sure Cluster DRS, HA and VMotion are all re-enabled after all hosts are done.
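The A-through-D commands can be wrapped in a small helper that prints the sequence for review before you type it at the local console. This is a sketch only: it echoes the commands rather than running them, and the IP, netmask, and VLAN values shown are placeholders.

```shell
# Print the Service Console re-IP sequence for a given IP, netmask and VLAN.
# This only echoes the commands for review; run them from a KVM or local
# console on the ESX host, never over Putty.
print_reip_steps() {
  ip=$1; mask=$2; vlan=$3
  echo "esxcfg-vswif -i $ip -n $mask vswif0"                    # A) IP/netmask
  echo "nano -w /etc/sysconfig/network"                         # B) default gateway
  echo "esxcfg-vswitch -v $vlan -p 'Service Console' vSwitch0"  # C) VLAN ID
  echo "service network restart"                                # D) apply changes
  echo "service mgmt-vmware restart"                            # reflect in VMCenter
}

print_reip_steps 10.0.200.15 255.255.255.0 200
```

Printing the plan first is useful here because a typo in the IP or VLAN leaves the host unreachable over the network.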


VMware ESX and Virtual Center Connectivity

September 13, 2010

Common Problem – You cannot connect Virtual Center to a specific ESX host server, or the ESX host is "Not Responding"
If an existing ESX host goes into a "not responding" state for some reason, use the following steps to troubleshoot.
This also applies if an attempt to add a new host fails.
General questions to think about:
Can you connect via SSH to the host directly?
Will the host answer to PING requests?
Is the host running at all?
Can you connect your VI client directly to the ESX server host?
Yes: There is some other network problem between Virtual Center and the ESX host, or there may be a problem with the vpxa daemon on the ESX host. Either way, you have just proven that the ESX host server is running and its Service Console communications are intact.

1) Validate the network connectivity between the Virtual Center server and the ESX host. Try a PING request from Virtual
    Center to the ESX host directly.
2) Make sure the vpxa daemon is running on the host. Gain console access to the ESX host and issue the following command:
service vmware-vpxa restart
Wait a few minutes. You will most likely see screen refreshes in your Virtual Infrastructure client as the target ESX host
and Virtual Center communicate. This will usually fix this issue and the ESX host will generally become accessible via Virtual Center.
No: If your Virtual Infrastructure client cannot connect directly to the ESX server host, try using an SSH connection. If SSH is functional but the Virtual Infrastructure client is not, then the hostd daemon on the host is probably not running, although the Service Console network communications are intact. Restart the hostd daemon by issuing the following command:
service mgmt-vmware restart
– or –
Reboot the ESX server host
If SSH is NOT functional but PING requests are answered, then you have multiple problems: SSH and hostd may both be down even though the host is running and the Service Console network communications are intact. In that case:
Reboot the ESX host server
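The yes/no checks above form a simple decision tree, which can be encoded as a tiny helper: pass y or n for whether the host answers PING, SSH, and a direct VI client connection, and it prints the next action. A sketch only; the function names and messages are my own wording of the steps above.

```shell
# Decision helper for the troubleshooting flow above. Arguments are y/n
# for: PING answered, SSH works, VI client connects directly.
esx_next_action() {
  ping_ok=$1; ssh_ok=$2; vi_ok=$3
  if [ "$vi_ok" = y ]; then
    echo "Host is up: check the network between Virtual Center and the host, then: service vmware-vpxa restart"
  elif [ "$ssh_ok" = y ]; then
    echo "hostd likely down: service mgmt-vmware restart (or reboot the host)"
  elif [ "$ping_ok" = y ]; then
    echo "SSH and hostd both down: reboot the ESX host"
  else
    echo "Host may be down entirely: check power and the local console"
  fi
}

esx_next_action y n n
```

The ordering matters: a working VI client connection proves the most (Service Console and hostd are fine), so it is checked first.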


EMC PowerPath/VE (Virtual Edition) for VMware vSphere Install Reference Guide

September 13, 2010
The purpose of this document is to describe how to install, configure, and license PowerPath/VE on an ESX vSphere host system. For a quick reference of the commands in sequential order, refer to page 9. The same methodology applies to ESXi.
PowerPath/VE (Virtual Edition) manages the I/O and failover of storage devices on a VMware vSphere host. It allows all LUNs to be managed and owned by the PowerPath plug-in in place of VMware’s NMP (native multipathing plug-in). There are several benefits to using PowerPath/VE. Most importantly, it provides full multipathing intelligence and uses all available paths, versus VMware’s Round Robin, MRU (most recently used), or Fixed Path algorithms; its performance characteristics far exceed those of VMware NMP. Zoning ESX hosts on EMC storage is also far easier with PowerPath/VE resident on the hosts.
VMware vSphere CLI and PowerPath/VE installation
PowerPath/VE is intended to be installed on both ESX and ESXi servers, so all interaction with PowerPath/VE is via remote access, never locally on the host's ESX service console. This is where the vSphere CLI comes in handy.
The installation can be executed from Windows XP or Windows 7, 32- or 64-bit. Commands must be run from within the vSphere CLI bin directory unless you define an environment variable for it; refer to the install path after installing the CLI for the bin location.
Using the CLI you can, among many other things, query the hosts. The following command gives the same output as the old ESX service console command esxupdate query.
1)      Put the target ESX host into maintenance mode
2) vihostupdate.pl --query --server yourvspherehost
This command queries the host for all installed components and the OS and patch versions.
        You will be prompted for credentials on each and every command. Use root.

End of Page 1

Page 2

The output lists the installed components; note that PowerPath is not listed until it is installed.
Following these instructions is all that is required for a successful install.
The PowerPath/VE installation command is shown below. Replace the server name, install path, and version where needed; it is best to copy the string into Notepad and then into the CLI to strip any formatting.
3) vihostupdate.pl --server yourvspherehost --install --bundle=\\your-unc-path\
This command takes approximately 2 minutes to complete. DO NOT REBOOT at this point. If the host you are performing the install on is attached to an EMC Symmetrix or CLARiiON array, you must follow the procedures described on pages 5 thru 8 before rebooting. Claim rules can ONLY be modified on I/O-active LUNs after PowerPath is installed and prior to the post-install reboot.
Run the query command again to validate the install.
4) vihostupdate.pl --query --server yourvspherehost
The output should now include PowerPath/VE in the installed components list.

End of Page 2

Page 3

5)      Reboot
PowerPath/VE licensing and rpowermt remote administration
Licenses are installed and registered via the EMC rpowermt utility, also referred to as RTOOLS. You can access the tool via a standard Microsoft Command Prompt; rpowermt is in the server’s path, so commands can be executed from any directory.
The EMC remote administration tool rpowermt finds the license file because the following command was run to set the path variable:
set PPMT_LIC_PATH=E:\PowerPathVE\Licenses
Once the path is defined, the host can be licensed using the following command:
rpowermt host=yourvspherehost register
Once registered, verify with this command:
rpowermt host=yourvspherehost check_registration
The output will be similar to this:
PowerPath License Information:
Host ID     : some ID hash here
Type        : unserved (uncounted)
State       : licensed
Days until expiration : (non-expiring)
License search path: E:\PowerPathVE\Licenses
License file(s):            E:\PowerPathVE\Licenses\license.lic

End of Page 3

Page 4

The ESX or ESXi host should now have PowerPath/VE installed and licensed in “unserved” mode. PowerPath/VE will be set as the owner of all Datastore LUNs.
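A quick way to confirm registration from a script is to check for the State line in the check_registration output. A minimal sketch: here a heredoc copy of the output shown above is fed in rather than a live rpowermt call, and the host ID is a placeholder.

```shell
# Check rpowermt check_registration output for a licensed state. On a real
# admin workstation you would pipe rpowermt itself:
#   rpowermt host=yourvspherehost check_registration | check_pp_license
check_pp_license() {
  if grep -q '^State *: *licensed'; then
    echo "PowerPath/VE licensed"
  else
    echo "PowerPath/VE NOT licensed"
  fi
}

# Sample output copied from this document (host ID is a placeholder)
check_pp_license <<'EOF'
PowerPath License Information:
Host ID     : <placeholder-id-hash>
Type        : unserved (uncounted)
State       : licensed
Days until expiration : (non-expiring)
EOF
```

Checking the State line rather than the command's exit status guards against rpowermt succeeding while the host remains unlicensed.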
During installation on ESX hosts attached to an EMC Symmetrix with LUNZ presented (C0:T0:L0), ESX claim rules must be altered to account for this. EMC CLARiiON-attached hosts are not affected. LUNZ is also known as LUN Zero (0).
The presence of these devices causes several issues and errors on the ESX console.
Use the vSphere CLI to execute the commands and queries below against an ESX 4.0 vSphere host.
Below is a post-installation corestorage claim rule listing; note that PowerPath “owns” rules 250 and above after the installation.
esxcli --server yourvspherehost corestorage claimrule list

End of Page 4

Page 5

ESX 4 vSphere introduces the concept of a PSA (pluggable storage architecture). VMware corestorage claim rules control how the ESX/ESXi server uses the storage presented to it. These claim rules are stored in the esx.conf file and are parsed during each reboot or HBA rescan on the host.
Refer to page 83 of the VMware esxcli reference guide.
Also refer to VMware KB 1015084: Unpresenting a LUN containing a Datastore from ESX 4.x and ESXi 4.x.
Run the following commands in this order on each individual ESX/ESXi host, replacing the hostname in the examples below with the desired host name. The claim rule altering commands must be performed for each HBA on the host that is attached to the SAN; refer to HBA1 and HBA2 below.
1)      esxcli --server yourvspherehost corestorage claimrule list
This command lists the claim rules for review.
2)      esxcli --server yourvspherehost corestorage claimrule add --plugin MASK_PATH --rule 102 --type location -A vmhba1 -C 0 -T 0 -L 0
This command creates a new claim rule “102” that masks LUNZ from the host on HBA1.
3)      esxcli --server yourvspherehost corestorage claimrule add --plugin MASK_PATH --rule 103 --type location -A vmhba2 -C 0 -T 0 -L 0

End of Page 5

Page 6

This command creates a new claim rule “103” that masks LUNZ from the host on HBA2.
4)      esxcli --server yourvspherehost corestorage claimrule list
List the claim rules again for review. Note that the newly added rules 102 and 103 are only partially applied until a mandatory reboot fully reloads the host’s storage stack: you will see only the file entry for each rule, not the runtime entry, until a reboot occurs.
Before reboot output:
102   file    location MASK_PATH adapter=vmhba1 channel=0 target=0 lun=0
103   file    location MASK_PATH adapter=vmhba2 channel=0 target=0 lun=0
After reboot output:
102   runtime location MASK_PATH adapter=vmhba1 channel=0 target=0 lun=0
102   file    location MASK_PATH adapter=vmhba1 channel=0 target=0 lun=0
103   runtime location MASK_PATH adapter=vmhba2 channel=0 target=0 lun=0
103   file    location MASK_PATH adapter=vmhba2 channel=0 target=0 lun=0
5)      esxcli --server yourvspherehost corestorage claimrule load
This command loads the new claim rules into memory.
6)      esxcli --server yourvspherehost corestorage claimrule run
This command runs the new esx.conf corestorage claim rule definitions.
7)      Reboot the ESX/ESXi host and verify that the desired LUN is masked by looking in vCenter > Configuration tab > Storage Adapters. Highlight vmhba1 and vmhba2 separately and verify that the LUN you wanted masked does not appear.
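The per-HBA pattern in steps 2 and 3 can be generated with a short loop. The sketch below only builds and prints the esxcli commands for review (the host name is a placeholder); extend the HBA list if more SAN-attached adapters are present.

```shell
# Build the MASK_PATH claim rule commands for each SAN-attached HBA,
# numbering rules from 102 as in the steps above. Nothing is executed;
# the commands are printed for review before running them via vSphere CLI.
HOST=yourvspherehost   # placeholder host name
rule=102
cmds=""
for hba in vmhba1 vmhba2; do
  cmds="$cmds
esxcli --server $HOST corestorage claimrule add --plugin MASK_PATH --rule $rule --type location -A $hba -C 0 -T 0 -L 0"
  rule=$((rule + 1))
done
cmds="$cmds
esxcli --server $HOST corestorage claimrule load
esxcli --server $HOST corestorage claimrule run"
echo "$cmds"
```

Generating the commands this way keeps the rule numbering consistent across hosts with different HBA counts.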

End of Page 6

Page 7

Before the change, LUN 0 appears as a 2.81MB device owned by NMP.
After the claim rules are set to mask the LUNZ device on each of the host’s HBAs per the instructions above, the console errors cease and the LUNZ device is no longer presented to the host.
If you skipped licensing, go back and complete the steps on pages 3 and 4, “PowerPath/VE licensing and rpowermt remote administration“.

End of Page 7

Page 8

PowerPath install commands quick reference
vihostupdate.pl --query --server yourvspherehost
vihostupdate.pl --server yourvspherehost --install --bundle=\\your-unc-path\
vihostupdate.pl --query --server yourvspherehost
PowerPath licensing commands quick reference
set PPMT_LIC_PATH=E:\PowerPathVE\Licenses
rpowermt host=yourvspherehost register
rpowermt host=yourvspherehost check_registration
ESX Claimrules quick reference
esxcli --server yourvspherehost corestorage claimrule list
esxcli --server yourvspherehost corestorage claimrule add --plugin MASK_PATH --rule 102 --type location -A vmhba1 -C 0 -T 0 -L 0
esxcli --server yourvspherehost corestorage claimrule add --plugin MASK_PATH --rule 103 --type location -A vmhba2 -C 0 -T 0 -L 0
esxcli --server yourvspherehost corestorage claimrule list
esxcli --server yourvspherehost corestorage claimrule load
esxcli --server yourvspherehost corestorage claimrule run
If deletion of a claim rule is required:
esxcli --server yourvspherehost corestorage claimrule delete --rule ###

End of Page 8