Storage Foundation SFCFSHA 7.1/InfoScale Enterprise 7.1 and Flexible Storage Sharing

Veritas InfoScale 7.1 Flexible Storage Sharing

  • What is FSS?

Veritas Flexible Storage Sharing (FSS) is a feature introduced in SFHA 6.1 that lets you configure 'shared-nothing' storage volumes. It is built on the Cluster File System (CFS) but, unlike base CFS, it doesn't require shared storage (SAN, iSCSI, etc.).

You can read more about FSS here:
http://vcojot.blogspot.ca/2015/01/storage-foundation-ha-61-and-flexible.html
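
For reference, once a node's local disks are exported, building an FSS volume follows the usual CVM/CFS workflow. Here's a rough sketch from memory; the names (fssdg, fssvol, ssd_01/ssd_02, /mnt/fss) are placeholders and the exact options may vary between releases:

# Export this node's local disks (repeat on the other nodes), create a
# shared disk group from them, then carve out a mirrored volume and
# cluster-mount a VxFS file system on top of it. In a real FSS setup you
# would make sure the mirrors land on disks from different hosts.
vxdisk export ssd_01
vxdisk export ssd_02
vxdg -s init fssdg ssd_01 ssd_02
vxassist -g fssdg make fssvol 10g nmirror=2
mkfs -t vxfs /dev/vx/rdsk/fssdg/fssvol
mount -t vxfs -o cluster /dev/vx/dsk/fssdg/fssvol /mnt/fss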

Upon upgrading to Veritas InfoScale Enterprise 7.1, I was faced with a nasty surprise: the environments that used to work on 6.1.1 and 6.2.1 were no longer re-creatable. After some investigation, I finally figured out the reason and how to make it work again. This post gives you those details.

  • Not all disks are created equal for FSS 7.x

Veritas InfoScale introduced a new extension to 'vxddladm' that lets you verify whether a disk can be used for FSS:

[root@vcs18 ~]# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
disk_00      auto:cdsdisk    SAN00dg14    SAN00dg      online
disk_01      auto:cdsdisk    SAN00dg07    SAN00dg      online
disk_02      auto:cdsdisk    SAN00dg08    SAN00dg      online
disk_03      auto:cdsdisk    SAN00dg10    SAN00dg      online
[...]
disk_14      auto:cdsdisk    SAN00dg01    SAN00dg      online
disk_15      auto:cdsdisk    -            -            online
sdb          auto:cdsdisk    -            -            online
ssd_01       auto:cdsdisk    -            -            online
ssd_02       auto:cdsdisk    -            -            online
ssd_03       auto:cdsdisk    -            -            online
ssd_04       auto:cdsdisk    -            -            online
ssd_05       auto:cdsdisk    -            -            online
ssd_06       auto:cdsdisk    -            -            online
ssd_07       auto:cdsdisk    -            -            online
[...]
ssd_27       auto:cdsdisk    -            -            online
ssd_28       auto:cdsdisk    -            -            online
ssd_29       auto:cdsdisk    -            -            online
ssd_30       auto:cdsdisk    -            -            online
[root@vcs18 ~]# vxddladm checkfss ssd_01
VxVM vxddladm INFO V-5-1-18713 ssd_01 is a valid disk for FSS.
[root@vcs18 ~]# vxddladm checkfss sdb
VxVM vxddladm INFO V-5-1-18714 sdb is not a valid disk for FSS.
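
If you have a lot of devices, a quick loop makes it easy to see which ones pass the check. This is just a convenience wrapper around the commands above (it assumes the device name is the first column of 'vxdisk list', as shown):

# Run the FSS eligibility check against every device VxVM knows about.
for d in $(vxdisk list | awk 'NR>1 {print $1}'); do
    vxddladm checkfss "$d"
done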


And consequently we get the following (remember that the da (disk access) names are just labels):

[root@vcs18 ~]# vxdisk export sdb
VxVM vxdisk ERROR V-5-1-531 Device sdb: export failed:
        Disk not supported for FSS operations
[root@vcs18 ~]# vxdisk export ssd_01
[root@vcs18 ~]# vxdisk unexport ssd_01
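
Taking that one step further, you could export only the spare disks that pass the check, keying off the messages shown above (a hypothetical, untested helper):

# Export every unassigned disk that vxddladm reports as FSS-capable.
# Assumes spare disks show '-' in the DISK column of 'vxdisk list'.
for d in $(vxdisk list | awk 'NR>1 && $3=="-" {print $1}'); do
    if vxddladm checkfss "$d" 2>&1 | grep -q 'is a valid disk for FSS'; then
        vxdisk export "$d"
    fi
done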


So why does that difference happen? Both disks are locally attached to the system, and VxVM even detects them as being part of the same enclosure:

[root@vcs18 ~]# vxdmpadm getsubpaths|egrep 'sdb|ssd_01'
sdb          ENABLED(A)   -          sdb          disk         c32             -         -
sdw          ENABLED(A)   -          ssd_01       disk         c64             -         -


The root cause of this is that they have different kinds of attachments, resulting in different UDID forms:

[root@vcs18 ~]# vxdmpadm listctlr all
CTLR_NAME       ENCLR_TYPE      STATE        ENCLR_NAME      PATH_COUNT
=========================================================================
c32             Disk            ENABLED      disk                 1
c62             Disk            ENABLED      disk                 8
c63             Disk            ENABLED      disk                 8
c64             Disk            ENABLED      disk                 15
c65             Disk            ENABLED      disk                 15


[root@vcs18 ~]# cd /dev/vx/.dmp
[root@vcs18 .dmp]# ls -l
total 0
drwxr-xr-x 2 root root 40 Jun  8 13:31 HBA
lrwxrwxrwx 1 root root 16 Jun  8 13:31 c2 -> pci-0000:00:10.0
lrwxrwxrwx 1 root root 16 Jun  8 14:01 c32 -> pci-0000:02:08.0
lrwxrwxrwx 1 root root 16 Jun  8 13:31 c62 -> pci-0000:03:00.0
lrwxrwxrwx 1 root root 16 Jun  8 13:31 c63 -> pci-0000:0b:00.0
lrwxrwxrwx 1 root root 16 Jun  8 13:31 c64 -> pci-0000:13:00.0
lrwxrwxrwx 1 root root 16 Jun  8 13:31 c65 -> pci-0000:1b:00.0
lrwxrwxrwx 1 root root 47 Jun  8 13:31 sda -> /dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0
lrwxrwxrwx 1 root root 63 Jun  8 13:31 sdaa -> /dev/disk/by-path/pci-0000:13:00.0-sas-0x5000c292d168dbfc-lun-0
lrwxrwxrwx 1 root root 63 Jun  8 13:31 sdab -> /dev/disk/by-path/pci-0000:13:00.0-sas-0x5000c290a40f6c16-lun-0
[...]
lrwxrwxrwx 1 root root 63 Jun  8 13:31 sdat -> /dev/disk/by-path/pci-0000:1b:00.0-sas-0x5000c29deeb69e80-lun-0
lrwxrwxrwx 1 root root 63 Jun  8 13:31 sdau -> /dev/disk/by-path/pci-0000:1b:00.0-sas-0x5000c29d56e34557-lun-0
lrwxrwxrwx 1 root root 63 Jun  8 13:31 sdav -> /dev/disk/by-path/pci-0000:1b:00.0-sas-0x5000c2910fca2108-lun-0
lrwxrwxrwx 1 root root 47 Jun  8 14:01 sdb -> /dev/disk/by-path/pci-0000:02:08.0-scsi-0:0:0:0
lrwxrwxrwx 1 root root 63 Jun  8 13:31 sdc -> /dev/disk/by-path/pci-0000:03:00.0-sas-0x5000c29222068c5e-lun-0
lrwxrwxrwx 1 root root 63 Jun  8 13:31 sdd -> /dev/disk/by-path/pci-0000:03:00.0-sas-0x5000c29315b8032e-lun-0
lrwxrwxrwx 1 root root 63 Jun  8 13:31 sde -> /dev/disk/by-path/pci-0000:03:00.0-sas-0x5000c29d5e7f2997-lun-0
[...]
lrwxrwxrwx 1 root root 63 Jun  8 13:31 sdu -> /dev/disk/by-path/pci-0000:13:00.0-sas-0x5000c2910bd0fcea-lun-0
lrwxrwxrwx 1 root root 63 Jun  8 13:31 sdv -> /dev/disk/by-path/pci-0000:13:00.0-sas-0x5000c292348014dd-lun-0
lrwxrwxrwx 1 root root 63 Jun  8 13:31 sdw -> /dev/disk/by-path/pci-0000:13:00.0-sas-0x5000c29e67edcae0-lun-0
lrwxrwxrwx 1 root root 63 Jun  8 13:31 sdx -> /dev/disk/by-path/pci-0000:13:00.0-sas-0x5000c29b8104b56a-lun-0
lrwxrwxrwx 1 root root 63 Jun  8 13:31 sdy -> /dev/disk/by-path/pci-0000:13:00.0-sas-0x5000c296e05cc225-lun-0
lrwxrwxrwx 1 root root 63 Jun  8 13:31 sdz -> /dev/disk/by-path/pci-0000:13:00.0-sas-0x5000c2922aea4b65-lun-0
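
If you want to confirm what actually sits behind each DMP controller, the .dmp symlinks above give you the PCI address, and sysfs tells you which kernel driver claimed that device. This is a plain Linux check rather than a VxVM one, and the driver names are only what I'd expect on this VM (an AHCI/ATA driver for the SATA controller, an mpt* driver for the LSI SAS ones):

# c32 sits behind 'sdb', c64 behind 'ssd_01' (per 'vxdmpadm getsubpaths').
readlink /dev/vx/.dmp/c32                              # pci-0000:02:08.0
readlink /dev/vx/.dmp/c64                              # pci-0000:13:00.0
# The driver symlink in sysfs reveals the attachment type.
basename "$(readlink /sys/bus/pci/devices/0000:02:08.0/driver)"
basename "$(readlink /sys/bus/pci/devices/0000:13:00.0/driver)"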


The reason becomes clearer when looking at the disks' UDID formats:

[root@vcs18 ~]# vxdisk list sdb|grep udid
udid:      ATA%5FVMware%20Virtual%20S%5FDISKS%5F5000C297821298C0
[root@vcs18 ~]# vxdisk list ssd_01|grep udid
udid:      VMware%2C%5FVMware%20Virtual%20S%5FDISKS%5F6000C29E67EDCAE016E6C473CDDFD175
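
The same comparison scales to every disk if you pull just the UDID field out of 'vxdisk list' (same assumption about the device-name column as before):

# Print each device together with its UDID so the ATA-style ones stand out.
for d in $(vxdisk list | awk 'NR>1 {print $1}'); do
    printf '%-10s %s\n' "$d" "$(vxdisk list "$d" | awk '/^udid:/ {print $2}')"
done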


So, for some reason (better supportability, perhaps), my previously working FSS 6.2.1 setup with SATA disks wasn't easily re-creatable on InfoScale Enterprise 7.1.0.

The solution was to change the HBA definitions in the .vmx config file from 'sata' to 'lsilogic' or 'lsisas1068'.

Old FSS 6.2.1 setup (excerpt from the VMX file):
[...]
sata1.present = "TRUE"
sata1.sharedBus = "none"
sata1.pciSlotNumber = "18"
sata1:1.fileName = "c1t0d0.vmdk"
sata1:1.present = "TRUE"
sata1:2.fileName = "c1t1d0.vmdk"
sata1:2.present = "TRUE"
sata1:3.fileName = "c1t2d0.vmdk"
sata1:3.present = "TRUE"
sata1:4.fileName = "c1t3d0.vmdk"
sata1:4.present = "TRUE"
[...]

New FSS 7.1.0 setup:
[...]
scsi2.present = "TRUE"
scsi2.sharedBus = "virtual"
scsi2.virtualDev = "lsisas1068"
scsi2:0.fileName = "c1t0d0.vmdk"
scsi2:0.present = "TRUE"
scsi2:1.fileName = "c1t1d0.vmdk"
scsi2:1.present = "TRUE"
scsi2:2.fileName = "c1t2d0.vmdk"
scsi2:2.present = "TRUE"
scsi2:3.fileName = "c1t3d0.vmdk"
scsi2:3.present = "TRUE"
scsi2:4.fileName = "c1t4d0.vmdk"
scsi2:4.present = "TRUE"
[...]
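
After switching the controllers over and powering the VMs back on, something along these lines should be enough to have VxVM rediscover the devices and confirm that they now pass the FSS check (disk names are the ones from my setup, adjust to taste):

# Force a device rescan, then re-run the eligibility check and export.
vxdisk scandisks
vxddladm checkfss ssd_01
vxdisk export ssd_01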


With that change in place, I also had to reduce the number of disks per controller from 16 to 15, since SAS/SCSI HBAs only support up to 15 attachments ('7' being the default SCSI ID reserved for the HBA itself).
Not a huge deal, but I'll deal with that at a later time.
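
For the record, dealing with it just means skipping unit 7 when laying out the disks on each SAS controller, along these lines (hypothetical excerpt, the vmdk names are made up):

[...]
scsi2:6.fileName = "c1t6d0.vmdk"
scsi2:6.present = "TRUE"
scsi2:8.fileName = "c1t7d0.vmdk"
scsi2:8.present = "TRUE"
[...]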


