vmware

Cannot remove Datastore because filesystem is busy

This error may appear when you are trying to unmount or delete a VMFS datastore.

There are a few things we need to keep in mind before unmounting a datastore:

Dumpfile

[root@HOSTB:~] esxcli system coredump file list
Path                                                                                                                Active  Configured        Size
------------------------------------------------------------------------------------------------------------------  ------  ----------  ----------
/vmfs/volumes/60992065-35330208-6275-78ac443ba554/vmkdump/4C4C4544-0052-5810-805A-CAC04F5A4233-3095396352.dumpfile    true        true  3095396352

# To remove the coredump file use command 
esxcli system coredump file remove --force
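
If the dump file is still marked active, it may need to be deactivated first. A small sketch using the same esxcli namespace (whether this step is needed depends on the host state):

# deactivate the configured dump file, then remove it
esxcli system coredump file set --unconfigure
esxcli system coredump file remove --force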

vsantrace

[root@HOSTB:~] lsof |grep vsantraced |grep volumes
[root@HOSTB:~]
# To remove the log, simply stop vsantraced, unmount the datastore and start it again:
~ # /etc/init.d/vsantraced stop
# Remove/Unmount Datastore
~ # /etc/init.d/vsantraced start
If you do not use vSAN you can also disable the service:
# chkconfig vsantraced off

Scratch 

Check the advanced setting ScratchConfig.CurrentScratchLocation (ESXi > Configure > System > Advanced System Settings) to see whether the datastore is used as the scratch location of the ESXi host. If it is, edit the value of ScratchConfig.ConfiguredScratchLocation and reboot the host.
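
The same settings can be checked from the shell; a quick sketch using vim-cmd (the option names match the advanced settings above):

# show current and configured scratch locations
vim-cmd hostsvc/advopt/view ScratchConfig.CurrentScratchLocation
vim-cmd hostsvc/advopt/view ScratchConfig.ConfiguredScratchLocation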

Other

  • No virtual machine, template, snapshot, or CD/DVD image mounted in a VM points to the datastore
  • Datastore is not used as a scratch location
  • Datastore is not used as the VMkernel dump file location
  • Datastore is not used as active vsantraced location
  • Datastore is not used to store VM swap
  • Datastore is not used for vSphere HA heartbeat
  • Datastore is not a part of Storage I/O Control
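
Once all of the above checks pass, the datastore can be unmounted from the CLI as well (the volume label below is a placeholder):

# list mounted filesystems, then unmount by volume label
esxcli storage filesystem list
esxcli storage filesystem unmount -l MyDatastore
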
vmware

vSAN Component Metadata Invalid state

I had to resolve an issue where, after a crash, vSAN was complaining that a few disks were reporting component metadata in an Invalid state.

From the vCenter RVC console, vsan.health.health_summary was showing the following errors:

Physical disk - Component metadata health: red
  +--------------+--------------------------------------+--------+---------------+
  | Host         | Component                            | Health | Notes         |
  +--------------+--------------------------------------+--------+---------------+
  | 172.30.91.22 | 77d46e60-7941-acf3-1e3b-48df37176888 | Error  | Invalid state |
  | 172.30.91.22 | 90969b5f-3a4f-10f4-4b17-48df37176ad4 | Error  | Invalid state |
  | 172.30.91.22 | 824b7262-01dd-9b27-844d-48df37176888 | Error  | Invalid state |
  | 172.30.91.22 | 824b7262-2449-9d27-b928-48df37176888 | Error  | Invalid state |
  +--------------+--------------------------------------+--------+---------------+

Using vsan.cmmds_find, search for the component UUIDs reported in the health check (the components with errors) to get the disk UUID. In my case nothing was found:


/127.0.0.1/Datacenter1/computers> vsan.cmmds_find 0 -u 77d46e60-7941-acf3-1e3b-48df37176888
+---+------+------+-------+--------+---------+
| # | Type | UUID | Owner | Health | Content |
+---+------+------+-------+--------+---------+
+---+------+------+-------+--------+---------+

/127.0.0.1/Datacenter1/computers>  vsan.cmmds_find 0 -u 90969b5f-3a4f-10f4-4b17-48df37176ad4
+---+------+------+-------+--------+---------+
| # | Type | UUID | Owner | Health | Content |
+---+------+------+-------+--------+---------+
+---+------+------+-------+--------+---------+

/127.0.0.1/Datacenter1/computers>  vsan.cmmds_find 0 -u 824b7262-01dd-9b27-844d-48df37176888
+---+------+------+-------+--------+---------+
| # | Type | UUID | Owner | Health | Content |
+---+------+------+-------+--------+---------+
+---+------+------+-------+--------+---------+

/127.0.0.1/Datacenter1/computers> vsan.cmmds_find 0 -u 824b7262-2449-9d27-b928-48df37176888
+---+------+------+-------+--------+---------+
| # | Type | UUID | Owner | Health | Content |
+---+------+------+-------+--------+---------+
+---+------+------+-------+--------+---------+

If you do have the disk UUID, you can use it in the next command to find the disk device name:

vsan.cmmds_find 0 -t DISK -u xxx

In my case, I had to use esxcli vsan debug disk list to determine which disk had its Metadata Health in an Invalid state.
Once the disk is identified, we need to evacuate the data, then remove and re-add the disk to its disk group.
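
A quick filter over the debug output helps here; a sketch, assuming the per-disk Name and Metadata Health fields printed by the command:

# show each disk together with its metadata health state
esxcli vsan debug disk list | grep -E "Name:|Metadata Health"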

Process for removing a disk:
Cluster > Configure > vSAN > Disk Management > Select the Disk-Group > Select the disk in the lower-pane > Click ‘Remove’ button > Select Full data migration option > OK
Once this task has been completed the disk should be available for adding back to the Disk-Group:
Cluster > Configure > vSAN > Disk Management > Select the Disk-Group > Click ‘Add disk’ button and select the correct disk

vmware

Troubleshooting NTP on an ESXi host

In some cases an ESXi host loses time sync with a Windows-based time server.

In some cases the flash value is 400, and in that case the NTP client won’t work.

First we need to check the NTP associations using the command ntpq -c assoc and, based on the assid, check the peer statistics using ntpq.

The outputs below were taken after remediation; the flash value is 00, which means NTP is working properly.

If flash=400, insert a “tos maxdist 30” line into /etc/ntp.conf and restart NTP (a sketch follows the reference below).

ref https://kb.vmware.com/s/article/1035833
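
A minimal sketch of that remediation on the ESXi host (assuming the default /etc/ntp.conf location):

# relax the maximum root distance and restart the NTP daemon
echo "tos maxdist 30" >> /etc/ntp.conf
/etc/init.d/ntpd restart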

[root@Witness:~] ntpq -c as
ind assid status  conf reach auth condition  last_event cnt
===========================================================
  1 14989  961a   yes   yes  none  sys.peer    sys_peer  1
[root@Witness:~] ntpq
ntpq> rv 14989
associd=14989 status=961a conf, reach, sel_sys.peer, 1 event, sys_peer,
srcadr=172.23.180.121, srcport=123, dstadr=172.23.180.108, dstport=123,
leap=00, stratum=5, precision=-23, rootdelay=70.358, rootdisp=137.589,
refid=172.23.180.118,
reftime=e5c98d6e.e4338491  Wed, Mar  2 2022  6:30:06.891,
rec=e5c9904a.fa749454  Wed, Mar  2 2022  6:42:18.978, reach=037,
unreach=0, hmode=3, pmode=4, hpoll=6, ppoll=6, headway=19, flash=00 ok,
keyid=0, offset=7.346, delay=0.552, dispersion=438.310, jitter=3.720,
xleave=0.079,
filtdelay=     0.55    0.74    0.78    0.70    0.79    0.00    0.00    0.00,
filtoffset=    7.35    7.12    7.24    6.38   -0.03    0.00    0.00    0.00,
filtdisp=      0.00    1.01    1.98    2.99    3.96 16000.0 16000.0 16000.0

Below you can find values while the issue was occurring.

associd=27366 status=901a conf, reach, sel_reject, 1 event, sys_peer,
srcadr=172.23.180.121, srcport=123, dstadr=172.23.180.105, dstport=123,
leap=00, stratum=5, precision=-23, rootdelay=68.451, rootdisp=7899.643,
refid=172.23.180.118,
reftime=e5c994da.b16bb129  Wed, Mar  2 2022  7:01:46.693,
rec=e5c994ec.854ba036  Wed, Mar  2 2022  7:02:04.520, reach=077,
unreach=0, hmode=3, pmode=4, hpoll=6, ppoll=6, headway=0,
flash=400 peer_dist, keyid=0, offset=-46.479, delay=0.312,
dispersion=188.367, jitter=43.868, xleave=0.033,
filtdelay=     0.31    0.51    0.36    0.32    0.52    0.58    0.00    0.00,
filtoffset=  -46.48  -17.81    0.90    0.58    0.34   -0.11    0.00    0.00,
filtdisp=      0.00    0.98    1.95    2.91    3.87    4.88 16000.0 16000.0

The same settings need to be applied on the vCenter appliance.

To restart the service on the VCSA we can use the following command:

service ntpd restart

vmware

Failed to open vmdk – Inaccessible vSAN objects

Recently I had an issue where a few VMs were unable to boot. This happened after one of my vSAN hosts hung; even after a power cycle of the host, the VM disk objects were inaccessible, and in vmware.log I found the following errors:

2022-02-01T06:49:25.479Z| vmx| I125: DISKLIB-LINK  : "/vmfs/volumes/vsan:5243fe7b1c27eea3-a47b9372d3d99021/e319cf5a-4867-19a5-c6eb-246e968de1a4/VM_1-000002.vmdk" : failed to open (Input/output error).
2022-02-01T06:49:25.479Z| vmx| I125: DISKLIB-CHAIN : "/vmfs/volumes/vsan:5243fe7b1c27eea3-a47b9372d3d99021/e319cf5a-4867-19a5-c6eb-246e968de1a4/VM_1-000002.vmdk" : failed to open (Input/output error).

To dump all vSAN object details, run the following command in an SSH session on one of the vSAN data nodes:

esxcli vsan debug object list > /tmp/obj
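
To pull only the unhealthy entries out of that dump, a simple grep filter works (the context line counts are an assumption based on the output layout shown below):

# show a few lines of context around every inaccessible object
grep -B2 -A1 "inaccessible" /tmp/obj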

In the object list I found the inaccessible objects and their owner:

Object UUID: 8a1acf5a-d25d-c738-cc9e-246e968de1a4
   Version: 5
   Health: inaccessible - Lost quorum.
   Owner: RSVMH03.RSDC.local

To change the ownership of the object we need to run the command listed below. It has to be run from the host that is the current owner:

vsish -e set /vmkModules/vsan/dom/ownerAbdicate 8a1acf5a-d25d-c738-cc9e-246e968de1a4

Once I fixed all inaccessible objects in the vSAN cluster, I was able to start the VMs that had been throwing input/output errors.

Another way to perform the same task is to use the RVC tool:

vsan.check_state ~cluster~ 
# fix 
vsan.check_state -r  ~cluster~ 

# -r Refresh state and then check state
# -e Un-register and re-register VMs in inventory
# https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/vsan/vmware-ruby-vsphere-console-command-reference-for-virtual-san.pdf 
vmware

Troubleshooting CIM on ESXi

Recently I updated to the latest ESXi 6.7 release. After that, I detected the following error messages flooding vmkernel.log:

2021-10-30T06:49:17.261Z cpu9:2103791)User: 3173: sfcb-intelcim: wantCoreDump:sfcb-intelcim signal:11 exitCode:0 coredump:enabled
2021-10-30T06:49:17.472Z cpu9:2103791)UserDump: 3110: sfcb-intelcim: Dumping cartel 2103782 (from world 2103791) to file /var/core/sfcb-intelcim-zdump.001 ...

Before I managed to find a solution I had to verify a few things. Just a small background: sfcbd stands for “Small Footprint CIM Broker (SFCB) daemon”. For performance and health monitoring, ESXi enables an agentless approach using industry standards like CIM (Common Information Model) and WBEM (Web-Based Enterprise Management). On the ESXi side there is the CIM agent, represented by sfcbd. CIM providers are the counterpart, often supplied by third parties like hardware vendors. CIM providers come as .VIB files. After detecting a third-party CIM provider, sfcbd (and with it the WBEM services) is automatically started by ESXi.

To get an overview of the CIM providers installed, log in to the ESXi host as user root and run the commands:

# get CIM status 
esxcli system wbem get
   Authorization Model: password
   CIMObject Manager PID: 2101998
   Enabled: true
   Enabled Running SSLProtocols: tlsv1.2
   Enabled SSLProtocols:
   Enabled System SSLProtocols: tlsv1.2
   Loglevel: warning
   Port: 5989
   Service Location Protocol PID: 0
   WSManagement PID: 2101662
   WSManagement Service: true

#get CIM provider list with status  
esxcli system  wbem  provider list
Name              Enabled  Loaded
----------------  -------  ------
sfcb_base            true    true
vmw_base             true    true
vmw_hdr              true    true
vmw_iodmProvider     true    true
vmw_kmodule          true    true
vmw_omc              true    true
vmw_pci              true    true
vmw_smx-provider     true    true 

Since sfcb was crashing and the crash was caused by the sfcb-intelcim provider, I decided to remove that module; alternatively, you can disable it.

# uninstall package intelcim-provider that was causing issues 
/etc/init.d/sfcbd-watchdog stop
esxcli software vib list | grep -i intel
esxcli software vib remove -n intelcim-provider 
/etc/init.d/sfcbd-watchdog start
#ref https://support.hpe.com/hpesc/public/docDisplay?docId=a00048925en_us&docLocale=en_US

# alternatively provider can be disabled 
esxcli system wbem provider set --enable false --name="intelcim-provider"


Once intelcim-provider had been removed, no dump files were created anymore. I hope this was useful.
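
To double-check that the crashes really stopped, the log and core directory can be watched for a while (default ESXi paths):

# watch vmkernel.log for new sfcb messages
tail -f /var/log/vmkernel.log | grep -i sfcb
# confirm no new zdump files appear
ls -l /var/core/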

vmware

Manually remove and recreate a vSAN disk group

One of my vSAN clusters experienced an issue with an SSD disk; once the disk had been replaced, the capacity tier disks still belonged to the old vSAN disk group.

To get this resolved we first need to identify the “old” SSD disk. This can be done using RVC:

/127.0.0.1/Datacenter/computers/IDC/hosts> ls
0 esxp01 (host): cpu 2*8*2.09 GHz, memory 274.00 GB
1 esxp02 (host): cpu 2*8*2.09 GHz, memory 274.00 GB
2 esxp03 (host): cpu 2*8*2.09 GHz, memory 274.00 GB
3 esxp04 (host): cpu 2*8*2.09 GHz, memory 274.00 GB

/127.0.0.1/Datacenter/computers/IDC/hosts> vsan.disks_info 0
2021-04-17 05:53:46 +0000: Gathering disk information for host esxp01
2021-04-17 05:53:46 +0000: Done gathering disk information
Disks on host esxp01:
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| DisplayName                                                       | isSSD | Size    | State                                                                  |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| Local TOSHIBA Disk (naa.50000399c8482f41)                         | MD    | 1117 GB | inUse                                                                  |
| TOSHIBA AL15SEB120N                                               |       |         | vSAN Format Version: v7                                                |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| Local TOSHIBA Disk (naa.50000399c8482f89)                         | MD    | 1117 GB | inUse                                                                  |
| TOSHIBA AL15SEB120N                                               |       |         | vSAN Format Version: vN/A                                              |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| Local TOSHIBA Disk (naa.50000399c8482f0d)                         | MD    | 1117 GB | inUse                                                                  |
| TOSHIBA AL15SEB120N                                               |       |         | vSAN Format Version: vN/A                                              |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| Local TOSHIBA Disk (naa.50000399c8482f4d)                         | MD    | 1117 GB | inUse                                                                  |
| TOSHIBA AL15SEB120N                                               |       |         | vSAN Format Version: vN/A                                              |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| Local TOSHIBA Disk (naa.50000399c848313d)                         | MD    | 1117 GB | inUse                                                                  |
| TOSHIBA AL15SEB120N                                               |       |         | vSAN Format Version: vN/A                                              |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| Local TOSHIBA Disk (naa.50000399c8482ef9)                         | MD    | 1117 GB | inUse                                                                  |
| TOSHIBA AL15SEB120N                                               |       |         | vSAN Format Version: v7                                                |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| Local SEAGATE Disk (naa.5000c5009882d057)                         | MD    | 1117 GB | inUse                                                                  |
| SEAGATE ST1200MM0088                                              |       |         | vSAN Format Version: v7                                                |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| Local TOSHIBA Disk (naa.50000399c8483109)                         | MD    | 1117 GB | inUse                                                                  |
| TOSHIBA AL15SEB120N                                               |       |         | vSAN Format Version: vN/A                                              |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| Local TOSHIBA Disk (naa.50000399c84831c5)                         | MD    | 1117 GB | inUse                                                                  |
| TOSHIBA AL15SEB120N                                               |       |         | vSAN Format Version: v7                                                |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| Local USB Direct-Access (eui.00a0504658335330)                    | MD    | 29 GB   | ineligible (Existing partitions found on disk 'eui.00a0504658335330'.) |
| Cypress RAID                                                      |       |         |                                                                        |
|                                                                   |       |         | Partition table:                                                       |
|                                                                   |       |         | 5: 0.24 GB, type = vfat                                                |
|                                                                   |       |         | 6: 0.24 GB, type = vfat                                                |
|                                                                   |       |         | 7: 0.11 GB, type = coredump                                            |
|                                                                   |       |         | 8: 0.28 GB, type = vfat                                                |
|                                                                   |       |         | 9: 5.16 GB, type = coredump                                            |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| Local SEAGATE Disk (naa.5000c5009883164f)                         | MD    | 1117 GB | inUse                                                                  |
| SEAGATE ST1200MM0088                                              |       |         | vSAN Format Version: v7                                                |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| Local SEAGATE Disk (naa.5000c5009882b12f)                         | MD    | 1117 GB | inUse                                                                  |
| SEAGATE ST1200MM0088                                              |       |         | vSAN Format Version: v7                                                |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| Local ATA Disk (naa.55cd2e404c25b806)                             | SSD   | 447 GB  | eligible                                                               |
| ATA INTEL SSDSC2BX48                                              |       |         |                                                                        |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| Local TOSHIBA Disk (naa.50000399c84831dd)                         | MD    | 1117 GB | inUse                                                                  |
| TOSHIBA AL15SEB120N                                               |       |         | vSAN Format Version: v7                                                |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| Local SEAGATE Disk (naa.5000c5009882a7a3)                         | MD    | 1117 GB | inUse                                                                  |
| SEAGATE ST1200MM0088                                              |       |         | vSAN Format Version: v7                                                |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| Local TOSHIBA Disk (naa.50000399c851c6f5)                         | MD    | 1117 GB | inUse                                                                  |
| TOSHIBA AL15SEB120N                                               |       |         | vSAN Format Version: v7                                                |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| Local ATA Disk (naa.55cd2e404c20424f)                             | SSD   | 447 GB  | inUse                                                                  |
| ATA INTEL SSDSC2BB48                                              |       |         | vSAN Format Version: v7                                                |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| Local TOSHIBA Disk (naa.5000039768098a85)                         | MD    | 1117 GB | inUse                                                                  |
| TOSHIBA AL14SEB120N                                               |       |         | vSAN Format Version: v7                                                |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| Local ATA Disk (naa.55cd2e404c35970a)                             | SSD   | 447 GB  | inUse                                                                  |
| ATA INTEL SSDSC2BX48                                              |       |         | vSAN Format Version: v7                                                |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+
| Absent vSAN Disk (vSAN UUID:52a8ce51-bc37-253b-4246-80f0eede122d) | SSD   | 0 GB    | inUse                                                                  |
|                                                                   |       |         | vSAN Format Version: vN/A                                              |
+-------------------------------------------------------------------+-------+---------+------------------------------------------------------------------------+


Next we can SSH to the ESXi host, check vSAN disk 52a8ce51-bc37-253b-4246-80f0eede122d using the vdq command, and find the disks that belong to the same disk group:


# list disk groups 
(https://kb.vmware.com/s/article/2108910)

vdq -Hi
Mappings:
   DiskMapping[0]:
           SSD:  52a8ce51-bc37-253b-4246-80f0eede122d
            MD:  naa.50000399c8482f89
            MD:  naa.50000399c8482f0d
            MD:  naa.50000399c8482f4d
            MD:  naa.50000399c848313d
            MD:  naa.50000399c8483109

   DiskMapping[2]:
           SSD:  naa.55cd2e404c20424f
            MD:  naa.50000399c8482f41
            MD:  naa.50000399c8482ef9
            MD:  naa.50000399c84831c5
            MD:  naa.50000399c84831dd
            MD:  naa.50000399c851c6f5

   DiskMapping[4]:
           SSD:  naa.55cd2e404c35970a
            MD:  naa.5000c5009882d057
            MD:  naa.5000c5009883164f
            MD:  naa.5000c5009882b12f
            MD:  naa.5000c5009882a7a3
            MD:  naa.5000039768098a85

Once we know the disk IDs, we can check the disk group UUID and whether the disks are still in use by vSAN:

MD: naa.50000399c8482f89
MD: naa.50000399c8482f0d
MD: naa.50000399c8482f4d
MD: naa.50000399c848313d
MD: naa.50000399c8483109

# check if disk is in use 
esxcli vsan storage list -d naa.50000399c8482f89 | grep CMMDS
   In CMMDS: false
esxcli vsan storage list -d naa.50000399c8482f0d | grep CMMDS
   In CMMDS: false
esxcli vsan storage list -d naa.50000399c8482f4d | grep CMMDS
   In CMMDS: false
esxcli vsan storage list -d naa.50000399c848313d | grep CMMDS
   In CMMDS: false
esxcli vsan storage list -d naa.50000399c8483109 | grep CMMDS
   In CMMDS: false

#check vSAN disk group UUID
esxcli vsan storage list -d naa.50000399c8482f89 | grep "Disk Group UUID"
   VSAN Disk Group UUID: 52a8ce51-bc37-253b-4246-80f0eede122d
esxcli vsan storage list -d naa.50000399c8482f0d | grep "Disk Group UUID"
   VSAN Disk Group UUID: 52a8ce51-bc37-253b-4246-80f0eede122d
esxcli vsan storage list -d naa.50000399c8482f4d | grep "Disk Group UUID"
   VSAN Disk Group UUID: 52a8ce51-bc37-253b-4246-80f0eede122d
esxcli vsan storage list -d naa.50000399c848313d | grep "Disk Group UUID"
   VSAN Disk Group UUID: 52a8ce51-bc37-253b-4246-80f0eede122d
esxcli vsan storage list -d naa.50000399c8483109 | grep "Disk Group UUID" 
   VSAN Disk Group UUID: 52a8ce51-bc37-253b-4246-80f0eede122d

Next we can check the disk evacuation result and simply remove the disks once they are not being used (a scripted version of these five steps is sketched below):

##1

# check evacuation result for  disk in disk group  52a8ce51-bc37-253b-4246-80f0eede122d 
[root@esxp01:~] esxcli vsan storage list -d naa.50000399c8482f89 | grep "VSAN UUID"
   VSAN UUID: 5264fa92-5823-88a9-c86c-56fe18c94eb4
localcli vsan debug evacuation precheck -e  5264fa92-5823-88a9-c86c-56fe18c94eb4 -a noAction
Errors:
 Not able to determine if 5264fa92-5823-88a9-c86c-56fe18c94eb4 is a disk/diskgroup/hostname uuid or its name.

# remove disk 
 esxcli vsan storage remove -u 5264fa92-5823-88a9-c86c-56fe18c94eb4


##2
esxcli vsan storage list -d naa.50000399c8482f0d | grep "VSAN UUID"
 VSAN UUID: 523fc628-8dd2-a179-118a-946b8cefb8ff
localcli vsan debug evacuation precheck -e  523fc628-8dd2-a179-118a-946b8cefb8ff -a noAction
Errors:
 Not able to determine if 523fc628-8dd2-a179-118a-946b8cefb8ff is a disk/diskgroup/hostname uuid or its name.
# remove disk 
 esxcli vsan storage remove -u 523fc628-8dd2-a179-118a-946b8cefb8ff
 
##3
esxcli vsan storage list -d naa.50000399c8482f4d | grep "VSAN UUID"
VSAN UUID: 52ea30ba-775c-59d1-ab6d-b70c11330d57
localcli vsan debug evacuation precheck -e 52ea30ba-775c-59d1-ab6d-b70c11330d57 -a noAction
Errors:
 Not able to determine if 52ea30ba-775c-59d1-ab6d-b70c11330d57 is a disk/diskgroup/hostname uuid or its name.
# remove disk 
 esxcli vsan storage remove -u 52ea30ba-775c-59d1-ab6d-b70c11330d57
 
##4
esxcli vsan storage list -d naa.50000399c848313d | grep "VSAN UUID"
VSAN UUID: 5223cdd1-fb0d-93a7-02fa-83f78c5b459b
localcli vsan debug evacuation precheck -e 5223cdd1-fb0d-93a7-02fa-83f78c5b459b -a noAction
Errors:
 Not able to determine if 5223cdd1-fb0d-93a7-02fa-83f78c5b459b is a disk/diskgroup/hostname uuid or its name.
# remove disk 
 esxcli vsan storage remove -u 5223cdd1-fb0d-93a7-02fa-83f78c5b459b 

##5
esxcli vsan storage list -d naa.50000399c8483109 | grep "VSAN UUID"
VSAN UUID: 52a5def9-3518-d4e1-caf3-15c5efc4286d
localcli vsan debug evacuation precheck -e 52a5def9-3518-d4e1-caf3-15c5efc4286d -a noAction
Errors:
 Not able to determine if 52a5def9-3518-d4e1-caf3-15c5efc4286d is a disk/diskgroup/hostname uuid or its name.
# remove disk 
esxcli vsan storage remove -u 52a5def9-3518-d4e1-caf3-15c5efc4286d  
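
The five remove steps above can also be run as one loop; a sketch, where the awk field index relies on the "VSAN UUID: <uuid>" output shape shown earlier:

# look up each disk's vSAN UUID and remove it from the orphaned disk group
for d in naa.50000399c8482f89 naa.50000399c8482f0d naa.50000399c8482f4d naa.50000399c848313d naa.50000399c8483109; do
  u=$(esxcli vsan storage list -d $d | grep "VSAN UUID" | awk '{print $3}')
  esxcli vsan storage remove -u $u
done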

Now it’s time to create the new vSAN disk group.

# check SSD that is not assigned to disk group (Is SSD: true )
 esxcli storage core device list
... 
SSD's 
 
 naa.55cd2e404c25b806
 naa.55cd2e404c20424f
 naa.55cd2e404c35970a
 
 
vdq -Hi
Mappings:
   DiskMapping[0]:
           SSD:  naa.55cd2e404c20424f
            MD:  naa.50000399c8482f41
            MD:  naa.50000399c8482ef9
            MD:  naa.50000399c84831c5
            MD:  naa.50000399c84831dd
            MD:  naa.50000399c851c6f5

   DiskMapping[2]:
           SSD:  naa.55cd2e404c35970a
            MD:  naa.5000c5009882d057
            MD:  naa.5000c5009883164f
            MD:  naa.5000c5009882b12f
            MD:  naa.5000c5009882a7a3
            MD:  naa.5000039768098a85


 # Create new disk group 
 esxcli vsan storage add -s naa.55cd2e404c25b806 -d naa.50000399c8482f89 -d naa.50000399c8482f0d -d naa.50000399c8482f4d -d naa.50000399c848313d -d naa.50000399c8483109
 
 # check that all is OK 
vdq -Hi
Mappings:
   DiskMapping[0]:
           SSD:  naa.55cd2e404c25b806
            MD:  naa.50000399c8482f89
            MD:  naa.50000399c8482f0d
            MD:  naa.50000399c8482f4d
            MD:  naa.50000399c848313d
            MD:  naa.50000399c8483109

   DiskMapping[2]:
           SSD:  naa.55cd2e404c20424f
            MD:  naa.50000399c8482f41
            MD:  naa.50000399c8482ef9
            MD:  naa.50000399c84831c5
            MD:  naa.50000399c84831dd
            MD:  naa.50000399c851c6f5

   DiskMapping[4]:
           SSD:  naa.55cd2e404c35970a
            MD:  naa.5000c5009882d057
            MD:  naa.5000c5009883164f
            MD:  naa.5000c5009882b12f
            MD:  naa.5000c5009882a7a3
            MD:  naa.5000039768098a85
windows

Change network profile on Windows Server 2016

A network profile change can be done using PowerShell:

PS C:\> Get-NetConnectionProfile


Name             : Network
InterfaceAlias   : Ethernet0 2
InterfaceIndex   : 5
NetworkCategory  : Private
IPv4Connectivity : Internet
IPv6Connectivity : NoTraffic

Set-NetConnectionProfile -InterfaceIndex 5 -NetworkCategory DomainAuthenticated

Set-NetConnectionProfile : Unable to set NetworkCategory to 'DomainAuthenticated'.  This NetworkCategory type will be
set automatically when authenticated to a domain network.

In my case restarting the Network Location Awareness service helped, and the profile reverted back to DomainAuthenticated:

# set firewall to disabled 
Set-NetFirewallProfile -Profile Domain,Public,Private -Enabled False
# restart NLA service 
Get-Service -Name NLASvc  | Restart-Service -Force 

PS C:\> Get-NetConnectionProfile


Name             : mydomain.local
InterfaceAlias   : Ethernet0 2
InterfaceIndex   : 5
NetworkCategory  : DomainAuthenticated
IPv4Connectivity : Internet
IPv6Connectivity : NoTraffic

vmware

vSAN user objects consuming a large amount of space

Recently I’ve been working on a vSAN cluster where unassociated objects were consuming around 20 TB, 800 objects altogether. After verification it was clear that those objects were orphaned VMs that had been removed from the vCenter inventory instead of being deleted from disk. I was looking for a quick way to remove them with a script.

Unassociated objects can be verified using the rvc tool.

Referencing KB 70726, I followed the steps below to get the attributes from all 800 unassociated objects.

Identification of these Objects can be achieved via RVC:
1. Create a list of the unassociated Object UUIDs by copying the formatted list from the end of the output from RVC to a txt file:

 vsan.obj_status_report -t <pathToCluster>


2. Unformat the list by copying the formatted output into a text file (e.g. using vi or another available text editor) and run the below script against this file (on vCSA/ESXi) to generate a file with all UUIDs in a single line:

cat /tmp/inputlist.txt | awk '{print $2}'  |awk '/^'.'/ {printf "%s ",$0} END {print ""}' > /tmp/outputlist.txt

3. Use the input file data in conjunction with objtool on any host in the cluster via CLI:

for i in `cat /tmp/inputlist.txt | awk '{print $2}'`; do python -c "print('*' * 100)";/usr/lib/vmware/osfs/bin/objtool getAttr -u $i | grep -E 'UUID|Object class|Object path'>> outputlist.txt ; done

Removing the objects in a loop, based on the outputlist.txt prepared in the previous step:

for x in $(grep UUID outputlist.txt | cut -f2 -d:); do /usr/lib/vmware/osfs/bin/objtool delete -u $x -f ; done

vmware

VCSA regenerate expired certificates

Recently I had to replace a vCenter certificate. In my case the Security Token Service (STS) certificate had expired after its two-year lifespan and caused authentication problems on vCenter Server.

Logging in through the Web Client displayed the following errors for me:

503 Service Unavailable (Failed to connect to endpoint: [N7Vmacore4Http20NamedPipeServiceSpecE:0x00007fb444041040] _serverNamespace = / action = Allow _pipeName =/var/run/vmware/vpxd-webserver-pipe)

Once I hit this error I found the VMware KB that explains how to replace an expired STS certificate:

https://kb.vmware.com/s/article/79248?lang=en_US

Using the Python script attached to the KB article, I confirmed that the STS certificate needed to be replaced:

root@photon-machine [ /tmp ]# python checksts.py

1 VALID CERTS
================

        LEAF CERTS:

        None

        ROOT CERTS:

        [] Certificate F0:D2:09:CD:AC:2E:1B:E0:44:C1:80:F2:0C:A7:4B:F1:20:A7:17:11 will expire in 2906 days (8 years).

1 EXPIRED CERTS
================

        LEAF CERTS:

        [] Certificate: 86:0A:72:F6:1B:F1:E7:6B:27:A4:A0:25:2C:C7:95:16:F0:A7:B5:54 expired on 2021-02-12 17:06:58 GMT!

        ROOT CERTS:

        None

    WARNING!
    You have expired STS certificates.  Please follow the KB corresponding to your OS:
    VCSA:  https://kb.vmware.com/s/article/76719
    Windows:  https://kb.vmware.com/s/article/79263

Knowing that the STS certificate had expired, we need to execute another script provided by VMware to solve the problem.

root@photon-machine [ /tmp ]# chmod +x fixsts.sh
root@photon-machine [ /tmp ]# ./fixsts.sh
NOTE: This works on external and embedded PSCs
This script will do the following
1: Regenerate STS certificate
What is needed?
1: Offline snapshots of VCs/PSCs
2: SSO Admin Password
IMPORTANT: This script should only be run on a single PSC per SSO domain
==================================
Resetting STS certificate for photon-machine.vcenter.prod started on Tue Feb 23 01:28:38 EST 2021


Detected DN: cn=10.20.111.10,ou=Domain Controllers,dc=vsphere,dc=local
Detected PNID: 10.20.111.10
Detected PSC: 10.20.111.10
Detected SSO domain name: vsphere.local
Detected Machine ID: ec81ca7e-5a63-4ff1-9eef-b7ec381781cc
Detected IP Address: 10.20.111.10
Domain CN: dc=vsphere,dc=local
==================================
==================================

Detected Root's certificate expiration date: 2031 Feb 17
Detected today's date: 2021 Feb 23
==================================

Exporting and generating STS certificate

Status : Success
Using config file : /tmp/vmware-fixsts/certool.cfg
Status : Success


Enter password for administrator@vsphere.local: ldap_bind: Invalid credentials (49)

ldap_bind: Invalid credentials (49)
Amount of tenant credentials:

Amount of trustedcertchains:


Applying newly generated STS certificate to SSO domain
ldap_bind: Invalid credentials (49)

Replacement finished - Please restart services on all vCenters and PSCs in your SSO domain
==================================
IMPORTANT: In case you're using HLM (Hybrid Linked Mode) without a gateway, you would need to re-sync the certs from Cloud to On-Prem after following this procedure
==================================
==================================

Once the certificate has been replaced we need to restart the services:

root@photon-machine [ /tmp ]# service-control --stop --all
Operation not cancellable. Please wait for it to finish...
Performing stop operation on service vmware-pod...
Successfully stopped service vmware-pod
Performing stop operation on profile: ALL...
Successfully stopped service vmware-vmon
Successfully stopped profile: ALL.
Performing stop operation on service vmdnsd...
Successfully stopped service vmdnsd
Performing stop operation on service vmware-stsd...
Successfully stopped service vmware-stsd
Performing stop operation on service vmware-sts-idmd...
Successfully stopped service vmware-sts-idmd
Performing stop operation on service vmcad...
Successfully stopped service vmcad
Performing stop operation on service vmdird...
Successfully stopped service vmdird
Performing stop operation on service vmafdd...
Successfully stopped service vmafdd
Performing stop operation on service lwsmd...
Successfully stopped service lwsmd
root@photon-machine [ /tmp ]# service-control --start --all
Operation not cancellable. Please wait for it to finish...
Performing start operation on service lwsmd...
Successfully started service lwsmd
Performing start operation on service vmafdd...
Successfully started service vmafdd
Performing start operation on service vmdird...
Successfully started service vmdird
Performing start operation on service vmcad...
Successfully started service vmcad
Performing start operation on service vmware-sts-idmd...
Successfully started service vmware-sts-idmd
Performing start operation on service vmware-stsd...
Successfully started service vmware-stsd
Performing start operation on service vmdnsd...
Successfully started service vmdnsd
Performing start operation on profile: ALL...
Successfully started service vmware-vmon
Service-control failed. Error: Failed to start services in profile ALL. RC=1, stderr=Failed to start vapi-endpoint, vpxd-svcs, cm services. Error: Operation timed out
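
The ALL profile timed out on a few services here. In my experience simply re-running the start command once the appliance settles is usually enough; if it keeps failing, the vMon logs are the place to look:

service-control --start --all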



The next step is to determine which other certificates of the vCenter Server Appliance have expired:

root@photon-machine [ /tmp ]# for i in $(/usr/lib/vmware-vmafd/bin/vecs-cli store list); do echo STORE $i; sudo /usr/lib/vmware-vmafd/bin/vecs-cli entry list --store $i --text | egrep "Alias|Not After"; done
STORE MACHINE_SSL_CERT
Alias : __MACHINE_CERT
            Not After : Feb 19 01:21:02 2021 GMT
STORE TRUSTED_ROOTS
Alias : c0e060fc53e39ea5d335ceda19c4a37dcef5e083
            Not After : Feb 13 13:21:01 2029 GMT
STORE TRUSTED_ROOT_CRLS
Alias : b597fb1dddc2bc820fde06f3c14f5730b8ccc51e
STORE machine
Alias : machine
            Not After : Feb 18 13:12:01 2021 GMT
STORE vsphere-webclient
Alias : vsphere-webclient
            Not After : Feb 18 13:12:01 2021 GMT
STORE vpxd
Alias : vpxd
            Not After : Feb 18 13:12:01 2021 GMT
STORE vpxd-extension
Alias : vpxd-extension
            Not After : Feb 18 13:12:02 2021 GMT
STORE SMS
Alias : sms_self_signed
            Not After : Feb 19 13:28:19 2029 GMT
STORE APPLMGMT_PASSWORD
STORE data-encipherment
Alias : data-encipherment
            Not After : Feb 13 13:21:01 2029 GMT

In my case other certificates had expired as well, so I had to regenerate them using the certificate-manager tool:

https://kb.vmware.com/s/article/2097936



root@photon-machine [ /tmp ]# /usr/lib/vmware-vmca/bin/certificate-manager
                 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
                |                                                                     |
                |      *** Welcome to the vSphere 6.7 Certificate Manager  ***        |
                |                                                                     |
                |                   -- Select Operation --                            |
                |                                                                     |
                |      1. Replace Machine SSL certificate with Custom Certificate     |
                |                                                                     |
                |      2. Replace VMCA Root certificate with Custom Signing           |
                |         Certificate and replace all Certificates                    |
                |                                                                     |
                |      3. Replace Machine SSL certificate with VMCA Certificate       |
                |                                                                     |
                |      4. Regenerate a new VMCA Root Certificate and                  |
                |         replace all certificates                                    |
                |                                                                     |
                |      5. Replace Solution user certificates with                     |
                |         Custom Certificate                                          |
                |                                                                     |
                |      6. Replace Solution user certificates with VMCA certificates   |
                |                                                                     |
                |      7. Revert last performed operation by re-publishing old        |
                |         certificates                                                |
                |                                                                     |
                |      8. Reset all Certificates                                      |
                |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _|
Note : Use Ctrl-D to exit.
Option[1 to 8]: 4
Do you wish to generate all certificates using configuration file : Option[Y/N] ? : y

Please provide valid SSO and VC privileged user credential to perform certificate operations.
Enter username [Administrator@vsphere.local]:
Enter password:
certool.cfg file exists, Do you wish to reconfigure : Option[Y/N] ? : y

Press Enter key to skip optional parameters or use Previous value.

Enter proper value for 'Country' [Previous value : US] :

Enter proper value for 'Name' [Previous value : photon-machine] :

Enter proper value for 'Organization' [Previous value : VMware] :

Enter proper value for 'OrgUnit' [Previous value : VMware] :

Enter proper value for 'State' [Previous value : California] :

Enter proper value for 'Locality' [Previous value : US] :

Enter proper value for 'IPAddress' (Provide comma separated values for multiple IP addresses) [optional] : 10.20.111.10

Enter proper value for 'Email' [Previous value : email@acme.com] :

Enter proper value for 'Hostname' (Provide comma separated values for multiple Hostname entries) [Enter valid Fully Qualified Domain Name(FQDN), For Example : example.domain.com] : photon-machine.vcenter.prod

Enter proper value for VMCA 'Name' :photon-machine
vmware

Manage vSphere Storage Solutions

  • LUN Masking

LUN masking is a process that masks away LUNs, making them inaccessible to certain ESXi hosts. Typically this is done on the storage array or on the FC switches, but it can be done at the ESXi level as well.

The LUN with the identifier naa.6589cfc000000a1cd96c6ef171f063eb is the one I’m targeting.

[root@esx01:~] esxcli storage vmfs extent list
Volume Name             VMFS UUID                            Extent Number  Device Name                           Partition
----------------------  -----------------------------------  -------------  ------------------------------------  ---------
DatastoreTrueNAS02      5ffde51b-48e6cc53-8cd2-441ea139f918              0  naa.6589cfc00000094b1a28df7aabbb4ce8          1
DatastoreTrueNAS01      5ffde4f1-b97ebafc-3592-441ea139f918              0  naa.6589cfc000000a1cd96c6ef171f063eb          1

[root@esx01:~] esxcli storage core device list -d naa.6589cfc000000a1cd96c6ef171f063eb
naa.6589cfc000000a1cd96c6ef171f063eb
   Display Name: TrueNAS iSCSI Disk (naa.6589cfc000000a1cd96c6ef171f063eb)
   Has Settable Display Name: true
   Size: 10240
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/naa.6589cfc000000a1cd96c6ef171f063eb
   Vendor: TrueNAS
   Model: iSCSI Disk
   Revision: 0123
   SCSI Level: 7
   Is Pseudo: false
   Status: on
   Is RDM Capable: true
   Is Local: false
   Is Removable: false
   Is SSD: true
   Is VVOL PE: false
   Is Offline: false
   Is Perennially Reserved: false
   Queue Full Sample Size: 0
   Queue Full Threshold: 0
   Thin Provisioning Status: yes
   Attached Filters:
   VAAI Status: supported
   Other UIDs: vml.010000000030303530353661303738303230303000695343534920
   Is Shared Clusterwide: true
   Is SAS: false
   Is USB: false
   Is Boot Device: false
   Device Max Queue Depth: 128
   No of outstanding IOs with competing worlds: 32
   Drive Type: unknown
   RAID Level: unknown
   Number of Physical Drives: unknown
   Protection Enabled: false
   PI Activated: false
   PI Type: 0
   PI Protection Mask: NO PROTECTION
   Supported Guard Types: NO GUARD SUPPORT
   DIX Enabled: false
   DIX Guard Type: NO GUARD SUPPORT
   Emulated DIX/DIF Enabled: false

We can mask by path (removing individual path visibility), by vendor (masking all LUNs from a specific vendor), or by storage transport (e.g. all iSCSI or all FC). To list the claimrules we can use the command below.

[root@esx01:~] esxcli storage core claimrule list
Rule Class   Rule  Class    Type       Plugin     Matches                            XCOPY Use Array Reported Values  XCOPY Use Multiple Segments  XCOPY Max Transfer Size KiB
----------  -----  -------  ---------  ---------  ---------------------------------  -------------------------------  ---------------------------  ---------------------------
MP             50  runtime  transport  NMP        transport=usb                                                false                        false                            0
MP             51  runtime  transport  NMP        transport=sata                                               false                        false                            0
MP             52  runtime  transport  NMP        transport=ide                                                false                        false                            0
MP             53  runtime  transport  NMP        transport=block                                              false                        false                            0
MP             54  runtime  transport  NMP        transport=unknown                                            false                        false                            0
MP            101  runtime  vendor     MASK_PATH  vendor=DELL model=Universal Xport                            false                        false                            0
MP            101  file     vendor     MASK_PATH  vendor=DELL model=Universal Xport                            false                        false                            0
MP          65535  runtime  vendor     NMP        vendor=* model=*                                             false                        false                            0

As we have two LUNs from the same vendor and the goal is to mask a single LUN, masking by path is the correct solution in this case.

[root@esx01:~] esxcfg-mpath -m -d naa.6589cfc000000a1cd96c6ef171f063eb
vmhba65:C1:T0:L0 vmhba65 iqn.1998-01.com.vmware:csmktcesx01-721f100e 00023d000003,iqn.2005-10.org.freenas.ctl:datastore1,t,1 naa.6589cfc000000a1cd96c6ef171f063eb
vmhba65:C0:T0:L0 vmhba65 iqn.1998-01.com.vmware:csmktcesx01-721f100e 00023d000002,iqn.2005-10.org.freenas.ctl:datastore1,t,1 naa.6589cfc000000a1cd96c6ef171f063eb

# mask both paths to device naa.6589cfc000000a1cd96c6ef171f063eb
esxcli storage core claimrule add -r 110 -t location -A vmhba65 -C 1 -T 0 -L 0 -P MASK_PATH
esxcli storage core claimrule add -r 111 -t location -A vmhba65 -C 0 -T 0 -L 0 -P MASK_PATH
# load the new rules so they move from file to runtime
esxcli storage core claimrule load

Claimrules are now loaded, but they will not be applied until the device is reclaimed. So a reboot would work here, or we can run a reclaim on the device:

esxcli storage core claiming reclaim -d naa.6589cfc000000a1cd96c6ef171f063eb

To roll back the changes we need to remove claimrules 110 and 111, load the claimrules again, and unclaim the previously claimed paths:

esxcli storage core claimrule remove -r 110
esxcli storage core claimrule remove -r 111
esxcli storage core claimrule load
esxcli storage core claiming unclaim -t location -A vmhba65  -C 1 -T 0 -L 0 
esxcli storage core claiming unclaim -t location -A vmhba65  -C 0 -T 0 -L 0 
esxcfg-rescan --all
esxcli storage vmfs extent list
Volume Name             VMFS UUID                            Extent Number  Device Name                           Partition
----------------------  -----------------------------------  -------------  ------------------------------------  ---------

DatastoreTrueNAS02      5ffde51b-48e6cc53-8cd2-441ea139f918              0  naa.6589cfc00000094b1a28df7aabbb4ce8          1
DatastoreTrueNAS01      5ffde4f1-b97ebafc-3592-441ea139f918              0  naa.6589cfc000000a1cd96c6ef171f063eb          1

In a second scenario we can create a LUN mask based on the vendor name:

esxcli storage core claimrule add -r 112 -t vendor -V TrueNAS -M "iSCSI Disk" -P MASK_PATH
esxcli storage core claimrule load
esxcli storage core claiming unclaim -t device -d naa.6589cfc000000a1cd96c6ef171f063eb
esxcli storage core claiming unclaim -t device -d naa.6589cfc00000094b1a28df7aabbb4ce8
Unable to perform unclaim. Error message was : Unable to unclaim all requested paths. Some paths were busy or were the last path to an in use device.  See VMkernel logs for more information.
esxcfg-mpath -m -d naa.6589cfc00000094b1a28df7aabbb4ce8 
vmhba65:C1:T1:L1 vmhba65 iqn.1998-01.com.vmware:csmktcesx02-35e54f13 00023d000003,iqn.2005-10.org.freenas.ctl:datastore2,t,1 naa.6589cfc00000094b1a28df7aabbb4ce8
esxcli storage core claiming unclaim -t location -A vmhba65  -C 1 -T 1 -L 1

Roll back the changes. As the static discovery addresses of the iSCSI array were removed along with the unclaim, we need to discover them once again:

esxcli storage core claimrule remove -r 112
esxcli storage core claimrule load
esxcli storage core claiming unclaim -t location -A vmhba65 
#
esxcli iscsi adapter discovery sendtarget list
Adapter  Sendtarget
-------  -------------------
vmhba65  192.168.40.131:3260
#re-add static discovery as they were removed from list 
esxcli iscsi adapter discovery sendtarget add -A vmhba65 -a 192.168.40.131:3260
#
esxcfg-rescan --all
#
esxcli storage vmfs extent list
Volume Name             VMFS UUID                            Extent Number  Device Name                           Partition
----------------------  -----------------------------------  -------------  ------------------------------------  ---------
DatastoreTrueNAS01      5ffde4f1-b97ebafc-3592-441ea139f918              0  naa.6589cfc000000a1cd96c6ef171f063eb          1
DatastoreTrueNAS02      5ffde51b-48e6cc53-8cd2-441ea139f918              0  naa.6589cfc00000094b1a28df7aabbb4ce8          1

iSCSI port binding

Configure port Binding using CLI

[root@esx02:~] esxcli network nic list
Name    PCI Device    Driver  Admin Status  Link Status  Speed  Duplex  MAC Address         MTU  Description
------  ------------  ------  ------------  -----------  -----  ------  -----------------  ----  ----------------------------------------------------------------------------------
vmnic0  0000:03:00.0  qflge   Up            Up            1000  Full    3c:d9:2b:07:61:2a  1500  QLogic Corporation NC382i Integrated Multi Port PCI Express Gigabit Server Adapter
vmnic1  0000:03:00.1  qflge   Up            Up            1000  Full    3c:d9:2b:07:61:2c  1500  QLogic Corporation NC382i Integrated Multi Port PCI Express Gigabit Server Adapter

 esxcli network vswitch standard list
vSwitch0
   Name: vSwitch0
   Class: cswitch
   Num Ports: 5376
   Used Ports: 8
   Configured Ports: 128
   MTU: 1500
   CDP Status: listen
   Beacon Enabled: false
   Beacon Interval: 1
   Beacon Threshold: 3
   Beacon Required By:
   Uplinks: vmnic1, vmnic0
   Portgroups: VM Network, Storage B1 network, Storage A1 Network, Management Network
#Add vmnic0 and vmnic1 as uplink to vSwitch0
esxcli network vswitch standard uplink add -u=vmnic0 -v=vSwitch0
esxcli network vswitch standard uplink add -u=vmnic1 -v=vSwitch0
#Make the uplinks as active
esxcli network vswitch standard policy failover set -a vmnic0,vmnic1 -v vSwitch0
# Add 2 portgroups (Storage B1 network, Storage A1 Network ) to vSwitch0
esxcli network vswitch standard portgroup add -p="Storage B1 network" -v=vSwitch0
esxcli network vswitch standard portgroup add -p="Storage A1 network" -v=vSwitch0
# Associate the uplinks with the correct portgroups
esxcli network vswitch standard portgroup policy failover set -a vmnic0 -p "Storage A1 Network"
esxcli network vswitch standard portgroup policy failover set -a vmnic1 -p "Storage B1 Network"
#Create the VMkernel interfaces and associate them with the portgroups 
esxcli network ip interface add -p "Storage A1 network" -i vmk1
esxcli network ip interface add -p "Storage B1 network" -i vmk2
esxcli network ip interface ipv4 set -i vmk1 -I 192.168.40.138 -N 255.255.255.224 -t static
esxcli network ip interface ipv4 set -i vmk2 -I 192.168.41.138 -N 255.255.255.224 -t static
# enable software iscsi 
esxcli iscsi software set -e true
# Configure IP addresses of the iSCSI targets
esxcli iscsi adapter discovery sendtarget add -a 192.168.40.131:3260 -A vmhba65
# Bind the VMkernel network adapter to the iSCSI adapter
 esxcli iscsi networkportal add -n vmk1  -A vmhba65 
 # validate networkportal configuration 
 esxcli iscsi networkportal list -A vmhba65
 # check session and relogon iscsi after port binding configuration 
 esxcli iscsi session list -A vmhba65
# rescan adapters to reconnect 
 esxcfg-rescan --all

Mark HDD as SSD

#esxcli storage nmp device list

mpx.vmhba0:C0:T1:L0
   Device Display Name: Local VMware Disk (mpx.vmhba0:C0:T1:L0)
   Storage Array Type: VMW_SATP_LOCAL
   Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration.
   Path Selection Policy: VMW_PSP_FIXED
   Path Selection Policy Device Config: {preferred=vmhba0:C0:T1:L0;current=vmhba0:C0:T1:L0}
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba0:C0:T1:L0
   Is USB: false

mpx.vmhba0:C0:T0:L0
   Device Display Name: Local VMware Disk (mpx.vmhba0:C0:T0:L0)
   Storage Array Type: VMW_SATP_LOCAL
   Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration.
   Path Selection Policy: VMW_PSP_FIXED
   Path Selection Policy Device Config: {preferred=vmhba0:C0:T0:L0;current=vmhba0:C0:T0:L0}
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba0:C0:T0:L0
   Is USB: false

#esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d mpx.vmhba0:C0:T0:L0 -o enable_ssd 
#esxcli storage core claiming reclaim -d mpx.vmhba0:C0:T0:L0
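
After the reclaim, the flag can be verified with the same esxcli namespace:

# confirm the device is now reported as SSD
esxcli storage core device list -d mpx.vmhba0:C0:T0:L0 | grep "Is SSD"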