LUN masking is the process of making specific LUNs inaccessible to certain ESXi hosts. It is typically configured on the storage array or on the FC switches, but it can also be done at the ESXi level.
The LUN shown in the listing below with the identifier "naa.6589cfc000000a1cd96c6ef171f063eb" is the one I'm targeting.
[root@esx01:~] esxcli storage vmfs extent list
Volume Name VMFS UUID Extent Number Device Name Partition
---------------------- ----------------------------------- ------------- ------------------------------------ ---------
DatastoreTrueNAS02 5ffde51b-48e6cc53-8cd2-441ea139f918 0 naa.6589cfc00000094b1a28df7aabbb4ce8 1
DatastoreTrueNAS01 5ffde4f1-b97ebafc-3592-441ea139f918 0 naa.6589cfc000000a1cd96c6ef171f063eb 1
[root@esx01:~] esxcli storage core device list -d naa.6589cfc000000a1cd96c6ef171f063eb
naa.6589cfc000000a1cd96c6ef171f063eb
Display Name: TrueNAS iSCSI Disk (naa.6589cfc000000a1cd96c6ef171f063eb)
Has Settable Display Name: true
Size: 10240
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/naa.6589cfc000000a1cd96c6ef171f063eb
Vendor: TrueNAS
Model: iSCSI Disk
Revision: 0123
SCSI Level: 7
Is Pseudo: false
Status: on
Is RDM Capable: true
Is Local: false
Is Removable: false
Is SSD: true
Is VVOL PE: false
Is Offline: false
Is Perennially Reserved: false
Queue Full Sample Size: 0
Queue Full Threshold: 0
Thin Provisioning Status: yes
Attached Filters:
VAAI Status: supported
Other UIDs: vml.010000000030303530353661303738303230303000695343534920
Is Shared Clusterwide: true
Is SAS: false
Is USB: false
Is Boot Device: false
Device Max Queue Depth: 128
No of outstanding IOs with competing worlds: 32
Drive Type: unknown
RAID Level: unknown
Number of Physical Drives: unknown
Protection Enabled: false
PI Activated: false
PI Type: 0
PI Protection Mask: NO PROTECTION
Supported Guard Types: NO GUARD SUPPORT
DIX Enabled: false
DIX Guard Type: NO GUARD SUPPORT
Emulated DIX/DIF Enabled: false
We can mask by path (removing visibility of individual paths), by vendor (masking all LUNs from a specific vendor), or by storage transport (for example, all iSCSI or all FC). To list the current claim rules, use the command below.
[root@esx01:~] esxcli storage core claimrule list
Rule Class Rule Class Type Plugin Matches XCOPY Use Array Reported Values XCOPY Use Multiple Segments XCOPY Max Transfer Size KiB
---------- ----- ------- --------- --------- --------------------------------- ------------------------------- --------------------------- ---------------------------
MP 50 runtime transport NMP transport=usb false false 0
MP 51 runtime transport NMP transport=sata false false 0
MP 52 runtime transport NMP transport=ide false false 0
MP 53 runtime transport NMP transport=block false false 0
MP 54 runtime transport NMP transport=unknown false false 0
MP 101 runtime vendor MASK_PATH vendor=DELL model=Universal Xport false false 0
MP 101 file vendor MASK_PATH vendor=DELL model=Universal Xport false false 0
MP 65535 runtime vendor NMP vendor=* model=* false false 0
Since we have two LUNs from the same vendor and the goal is to mask a single LUN, masking by path is the correct approach here. First, list the paths to the device:
[root@esx01:~] esxcfg-mpath -m -d naa.6589cfc000000a1cd96c6ef171f063eb
vmhba65:C1:T0:L0 vmhba65 iqn.1998-01.com.vmware:csmktcesx01-721f100e 00023d000003,iqn.2005-10.org.freenas.ctl:datastore1,t,1 naa.6589cfc000000a1cd96c6ef171f063eb
vmhba65:C0:T0:L0 vmhba65 iqn.1998-01.com.vmware:csmktcesx01-721f100e 00023d000002,iqn.2005-10.org.freenas.ctl:datastore1,t,1 naa.6589cfc000000a1cd96c6ef171f063eb
# mask device naa.6589cfc000000a1cd96c6ef171f063eb
esxcli storage core claimrule add -r 110 -t location -A vmhba65 -C 1 -T 0 -L 0 -P MASK_PATH
esxcli storage core claimrule add -r 111 -t location -A vmhba65 -C 0 -T 0 -L 0 -P MASK_PATH
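To confirm the rules were created, we can list the claim rules again. Newly added rules appear with the class "file" (stored in the configuration) and only gain a matching "runtime" entry once loaded, as seen earlier with rule 101:

```shell
# Rules 110 and 111 should now appear with class "file"
esxcli storage core claimrule list
```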
Newly added claim rules are not applied until they are loaded and the device's paths are reclaimed. A reboot would work here, or we can load the rules and run a reclaim on the device directly:
esxcli storage core claimrule load
esxcli storage core claiming reclaim -d naa.6589cfc000000a1cd96c6ef171f063eb
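To verify the mask took effect, a quick check (using the same device ID as above) is to list the device and the VMFS extents; the masked device and its datastore should no longer appear:

```shell
# After a successful reclaim, the masked device is gone from the host
esxcli storage core device list -d naa.6589cfc000000a1cd96c6ef171f063eb
esxcli storage vmfs extent list
```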
To roll back the changes, we remove claim rules 110 and 111, load the updated claim rules, and unclaim the previously masked paths:
esxcli storage core claimrule remove -r 110
esxcli storage core claimrule remove -r 111
esxcli storage core claimrule load
esxcli storage core claiming unclaim -t location -A vmhba65 -C 1 -T 0 -L 0
esxcli storage core claiming unclaim -t location -A vmhba65 -C 0 -T 0 -L 0
esxcfg-rescan --all
esxcli storage vmfs extent list
Volume Name VMFS UUID Extent Number Device Name Partition
---------------------- ----------------------------------- ------------- ------------------------------------ ---------
DatastoreTrueNAS02 5ffde51b-48e6cc53-8cd2-441ea139f918 0 naa.6589cfc00000094b1a28df7aabbb4ce8 1
DatastoreTrueNAS01 5ffde4f1-b97ebafc-3592-441ea139f918 0 naa.6589cfc000000a1cd96c6ef171f063eb 1
In the second scenario, we create a masking rule based on vendor and model:
esxcli storage core claimrule add -r 112 -t vendor -V TrueNAS -M "iSCSI Disk" -P MASK_PATH
esxcli storage core claimrule load
esxcli storage core claiming unclaim -t device -d naa.6589cfc000000a1cd96c6ef171f063eb
esxcli storage core claiming unclaim -t device -d naa.6589cfc00000094b1a28df7aabbb4ce8
Unable to perform unclaim. Error message was : Unable to unclaim all requested paths. Some paths were busy or were the last path to an in use device. See VMkernel logs for more information.
One path could not be unclaimed because it was busy, so we list the remaining path to the device and unclaim it by location instead:
esxcfg-mpath -m -d naa.6589cfc00000094b1a28df7aabbb4ce8
vmhba65:C1:T1:L1 vmhba65 iqn.1998-01.com.vmware:csmktcesx02-35e54f13 00023d000003,iqn.2005-10.org.freenas.ctl:datastore2,t,1 naa.6589cfc00000094b1a28df7aabbb4ce8
esxcli storage core claiming unclaim -t location -A vmhba65 -C 1 -T 1 -L 1
Roll back the changes: remove claim rule 112, load the updated claim rules, and unclaim all masked paths on the adapter. Because the discovery addresses of the iSCSI array were removed along with the unclaimed paths, we need to discover the targets once again.
esxcli storage core claimrule remove -r 112
esxcli storage core claimrule load
esxcli storage core claiming unclaim -t location -A vmhba65
esxcli iscsi adapter discovery sendtarget list
Adapter Sendtarget
------- -------------------
vmhba65 192.168.40.131:3260
# re-add the discovery target, as it was removed from the list
esxcli iscsi adapter discovery sendtarget add -A vmhba65 -a 192.168.40.131:3260
esxcfg-rescan --all
esxcli storage vmfs extent list
Volume Name VMFS UUID Extent Number Device Name Partition
---------------------- ----------------------------------- ------------- ------------------------------------ ---------
DatastoreTrueNAS01 5ffde4f1-b97ebafc-3592-441ea139f918 0 naa.6589cfc000000a1cd96c6ef171f063eb 1
DatastoreTrueNAS02 5ffde51b-48e6cc53-8cd2-441ea139f918 0 naa.6589cfc00000094b1a28df7aabbb4ce8 1
iSCSI port binding
Configure port binding using the CLI
[root@esx02:~] esxcli network nic list
Name PCI Device Driver Admin Status Link Status Speed Duplex MAC Address MTU Description
------ ------------ ------ ------------ ----------- ----- ------ ----------------- ---- ----------------------------------------------------------------------------------
vmnic0 0000:03:00.0 qflge Up Up 1000 Full 3c:d9:2b:07:61:2a 1500 QLogic Corporation NC382i Integrated Multi Port PCI Express Gigabit Server Adapter
vmnic1 0000:03:00.1 qflge Up Up 1000 Full 3c:d9:2b:07:61:2c 1500 QLogic Corporation NC382i Integrated Multi Port PCI Express Gigabit Server Adapter
esxcli network vswitch standard list
vSwitch0
Name: vSwitch0
Class: cswitch
Num Ports: 5376
Used Ports: 8
Configured Ports: 128
MTU: 1500
CDP Status: listen
Beacon Enabled: false
Beacon Interval: 1
Beacon Threshold: 3
Beacon Required By:
Uplinks: vmnic1, vmnic0
Portgroups: VM Network, Storage B1 network, Storage A1 Network, Management Network
#Add vmnic0 and vmnic1 as uplink to vSwitch0
esxcli network vswitch standard uplink add -u=vmnic0 -v=vSwitch0
esxcli network vswitch standard uplink add -u=vmnic1 -v=vSwitch0
#Make the uplinks as active
esxcli network vswitch standard policy failover set -a vmnic0,vmnic1 -v vSwitch0
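To double-check the teaming configuration, the failover policy can be read back with the corresponding `get` command (a quick verification sketch):

```shell
# Show the active/standby uplinks configured on the vSwitch
esxcli network vswitch standard policy failover get -v vSwitch0
```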
# Add 2 portgroups (Storage B1 network, Storage A1 Network ) to vSwitch0
esxcli network vswitch standard portgroup add -p="Storage B1 network" -v=vSwitch0
esxcli network vswitch standard portgroup add -p="Storage A1 Network" -v=vSwitch0
# Associate the uplinks with the correct portgroups
esxcli network vswitch standard portgroup policy failover set -a vmnic0 -p "Storage A1 Network"
esxcli network vswitch standard portgroup policy failover set -a vmnic1 -p "Storage B1 network"
#Create the VMkernel interfaces and associate them with the portgroups
esxcli network ip interface add -p "Storage A1 Network" -i vmk1
esxcli network ip interface add -p "Storage B1 network" -i vmk2
esxcli network ip interface ipv4 set -i vmk1 -I 192.168.40.138 -N 255.255.255.224 -t static
esxcli network ip interface ipv4 set -i vmk2 -I 192.168.41.138 -N 255.255.255.224 -t static
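The assigned addresses can be verified before moving on (a sketch; lists all VMkernel interfaces, including vmk1 and vmk2 created above):

```shell
# List the IPv4 configuration of the VMkernel interfaces
esxcli network ip interface ipv4 get
```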
# enable software iscsi
esxcli iscsi software set -e true
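As a quick sanity check, confirm that the software iSCSI adapter is actually enabled before configuring discovery:

```shell
# Should report that software iSCSI is enabled
esxcli iscsi software get
```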
# Configure the IP addresses of the iSCSI targets
esxcli iscsi adapter discovery sendtarget add -a 192.168.40.131:3260 -A vmhba65
# Bind the VMkernel network adapter to the iSCSI adapter
esxcli iscsi networkportal add -n vmk1 -A vmhba65
# validate networkportal configuration
esxcli iscsi networkportal list -A vmhba65
# check the iSCSI sessions; they are re-established after the port binding configuration
esxcli iscsi session list -A vmhba65
# rescan adapters to reconnect
esxcfg-rescan --all
Mark HDD as SSD
esxcli storage nmp device list
mpx.vmhba0:C0:T1:L0
Device Display Name: Local VMware Disk (mpx.vmhba0:C0:T1:L0)
Storage Array Type: VMW_SATP_LOCAL
Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration.
Path Selection Policy: VMW_PSP_FIXED
Path Selection Policy Device Config: {preferred=vmhba0:C0:T1:L0;current=vmhba0:C0:T1:L0}
Path Selection Policy Device Custom Config:
Working Paths: vmhba0:C0:T1:L0
Is USB: false
mpx.vmhba0:C0:T0:L0
Device Display Name: Local VMware Disk (mpx.vmhba0:C0:T0:L0)
Storage Array Type: VMW_SATP_LOCAL
Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration.
Path Selection Policy: VMW_PSP_FIXED
Path Selection Policy Device Config: {preferred=vmhba0:C0:T0:L0;current=vmhba0:C0:T0:L0}
Path Selection Policy Device Custom Config:
Working Paths: vmhba0:C0:T0:L0
Is USB: false
# Add a SATP claim rule that tags the local device as SSD, then reclaim it so the rule applies
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d mpx.vmhba0:C0:T0:L0 -o enable_ssd
esxcli storage core claiming reclaim -d mpx.vmhba0:C0:T0:L0
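After the reclaim, the device should report the SSD flag. A quick check, using the same device ID as above:

```shell
# "Is SSD: true" confirms the enable_ssd option was applied
esxcli storage core device list -d mpx.vmhba0:C0:T0:L0 | grep "Is SSD"
```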