Error: failed to stop vxfen
If you change the vxfenmode file later to scsi3, the vxfen driver throws the following spurious error when you stop vxfen:

ERROR: failed to stop vxfen

However, the actual I/O fencing stop operation is successful. [1301698, 1504943]

Workaround: This error message may be safely ignored.

Preexisting split brain after rebooting nodes

The fencing driver in 5.0 uses Veritas DMP to handle SCSI commands to the disk driver if fencing is configured in dmp mode. This allows fencing to use Veritas DMP for access to the coordinator disks. With certain disk arrays, when paths fail over after a path failure, the SCSI-3 persistent reservation keys for the previously active paths are not removed. If all the nodes in a cluster are rebooted at the same time, the cluster does not start and reports a "Preexisting split brain" message. [609407]

Workaround: Use the vxfenclearpre script to remove the keys from the coordinator disks as well as from the data disks.

Stopping vxfen while the fencing module is being configured

Trying to stop the vxfen driver while the fencing module is being configured results in the following errors:

VCS FEN vxfenconfig ERROR V-11-2-1013 Unable to unconfigure vxfen
VCS FEN vxfenconfig ERROR V-11-2-1022 Active cluster is currently fencing.

Workaround: These messages may be safely ignored.

Fencing configuration fails if the fencing module is running on another node

The vxfenconfig -c command fails if either of the following commands is running on another node in the cluster:

/sbin/vxfenconfig -U
/sbin/vxfenconfig -c

Some vxfenadm options do not work with DMP paths

Some options of the vxfenadm utility do not work well with DMP paths such as /dev/vx/rdmp/sdt3.

Workaround: Use the -a option instead of the -m option to register keys on DMP paths.
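For the first issue above, one way to tell whether the "ERROR: failed to stop vxfen" message was spurious is to check whether the driver is actually still loaded. A minimal sketch, assuming a Linux node where loaded modules are listed in /proc/modules; the file path is parameterised so the check can also be exercised against a sample file:

```shell
#!/bin/sh
# Sketch: decide whether "ERROR: failed to stop vxfen" was spurious by
# checking whether the vxfen module is actually still loaded.
# Assumption: Linux-style /proc/modules listing; on other platforms use
# the platform's module listing instead.
vxfen_still_loaded() {
    modules_file="${1:-/proc/modules}"
    # Fall back to an empty source if the module list is unreadable here.
    [ -r "$modules_file" ] || modules_file=/dev/null
    grep -q '^vxfen ' "$modules_file"
}

if vxfen_still_loaded "${1:-/proc/modules}"; then
    echo "vxfen still loaded: the stop really failed"
else
    echo "vxfen not loaded: the error message was spurious"
fi
```

If the module is gone, the stop succeeded and the message can be ignored as the workaround states.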
Nodes in a sub-cluster panic if the racer node loses the race or panics

At the time of a network partition, the lowest node in each sub-cluster races for the coordination points on behalf of that sub-cluster. If the lowest node is unable to contact a majority of the coordination points, or the lowest node itself unexpectedly panics during the race, then all the nodes in that sub-cluster panic. [1965954]

Coordination Point agent does not provide detailed log messages for inaccessible CP servers

The Coordination Point agent does not log detailed information about CP servers that are inaccessible. When a CP server is not accessible, the agent does not mention the UUID or the virtual IP of the CP server in the engine log. [1907648]

Preferred fencing does not work as expected for large clusters in certain cases

If you have configured a system-based or group-based preferred fencing policy, preferred fencing does not work if all of the following are true:

The fencing setup uses customized mode with one or more CP servers.
The application cluster has more than eight nodes.
The node weight for a single node (say galaxy with node ID 0) is more than the sum total of the node weights for the rest of the nodes.
A network fault occurs and the cluster partitions into two, with the single node (galaxy) on one side and the rest of the nodes on the other.

Under such circumstances, for group-based preferred fencing, the single node panics even though more high-priority services are online on that node. For system-based preferred fencing, the single node panics even though more weight is assigned to the node. [2161816]

See the Veritas Storage Foundation for Oracle RAC Administrator's Guide for more information on preferred fencing.
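The weight condition in the preferred-fencing issue above is plain arithmetic: the failure mode can trigger only when one node's weight exceeds the combined weight of all the other nodes. A sketch with made-up illustration weights, not taken from any real cluster:

```shell
#!/bin/sh
# Sketch of the failing condition: a single node (e.g. "galaxy") whose
# weight exceeds the sum of the weights of every other node.
# All weights below are invented for illustration.
single_node_weight=100
other_node_weights="10 10 10 10 10 10 10 10"   # a nine-node cluster overall

sum=0
for w in $other_node_weights; do
    sum=$((sum + w))
done

if [ "$single_node_weight" -gt "$sum" ]; then
    echo "condition met: single node outweighs the rest ($single_node_weight > $sum)"
else
    echo "condition not met"
fi
```

Here 100 > 80, so this illustrative cluster would hit the issue if it also met the other three conditions.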
Server-based I/O fencing fails to start after configuration on nodes with different locale settings

On each (application cluster) node, the vxfen module retrieves and stores the list of UUIDs of the coordination points. When different nodes have different locale settings, the list of UUIDs on one (application) node does not match that of the other (application) nodes. Hence, I/O fencing does not start after configuration. [2112742]

Workaround: Start I/O fencing after fixing the locale settings to use the same values on all the (application) cluster nodes.

Reconfiguring SF Oracle RAC with I/O fencing fails if you use the same CP servers

When you reconfigure an application cluster that uses server-based I/O fencing (customized fencing mode), the installer does not remove the application cluster information from the CP servers.
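A minimal sketch of the locale workaround, pinning the locale to one agreed value before starting fencing. "C" is an illustrative choice; any value works as long as it is identical on every (application) cluster node:

```shell
#!/bin/sh
# Sketch: pin the locale to one agreed value before starting fencing, so
# every application-cluster node builds the same coordination-point UUID
# list. REQUIRED_LOCALE is an assumption; pick one value cluster-wide.
REQUIRED_LOCALE="C"
export LANG="$REQUIRED_LOCALE" LC_ALL="$REQUIRED_LOCALE"

if [ "$LANG" = "$REQUIRED_LOCALE" ] && [ "$LC_ALL" = "$REQUIRED_LOCALE" ]; then
    echo "locale pinned to $REQUIRED_LOCALE; safe to start fencing"
else
    echo "locale mismatch; fix before starting fencing" >&2
    exit 1
fi
```

In practice the value would also be set in the shell profile or the fencing startup environment on every node, so it survives reboots.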
Veritas Cluster Server (VCS) fails to start

This occurs when a node cannot join the VCS cluster. If you see the following error in your Solaris or Linux error log, the node cannot join the cluster because VCS cannot start I/O fencing:

CVMCluster:???:monitor:node - state: out of cluster

1. Use vxdg list to verify that the vxfencoorddg disk group is enabled on all nodes:

vxdg list

2. If the vxfencoorddg disk group is not enabled, deport and re-import the group:

vxdg deport vxfencoorddg
vxdg -t import vxfencoorddg

3. Use vxdisk list to verify that the vxfencoorddg disks are online:

vxdisk list

4. If the vxfencoorddg disks are not online, restart VCS on each node:

hastop -local
/etc/init.d/vxfen stop
/etc/init.d/vxfen start
hastart

5. Use vxfenadm -g to verify that reservations exist on the vxfencoorddg disks:

vxfenadm -g all -f /etc/vxfentab

6. If the reservations do not exist, stop VCS, use vxfenclearpre to clear the stale reservations, and restart VCS:

hastop -local
/etc/init.d/vxfen stop
vxfenadm -g all -f /etc/vxfentab
vxfenclearpre
/etc/init.d/vxfen start
hastart

7. If a new node joins an existing cluster, you may need to use gabconfig -x to reapply the group membership configuration:

gabconfig -x

If you see this message in your Solaris or Linux error log, the Veritas configuration daemon cannot start:

ERROR: IPC Failure: Configuration daemon is not accessible

Clear and restart the configuration daemon:

vxconfigd -k
vxiod set 10
vxconfigd -m disable
vxdctl init
vxdctl initdmp
vxdctl enable

If you see this error in your VCS error log, the disk group initialization failed when you ran vxdg init:

Device sda cannot be added to a CDS disk group

By default, vxdg initializes disks as CDS disks. Rerun the vxdg command with cds=off:

vxdg init disk_group cds=off

Copyright © 2010. Sybase Inc. All rights reserved.
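The stop/clear/restart sequence above can be sketched as a dry-run script. The run() wrapper only echoes each command here, so the ordering can be reviewed safely; on a real node you would remove the echo, and the init-script paths follow the text and may differ by platform:

```shell
#!/bin/sh
# Dry-run sketch of the recovery sequence from the steps above: stop VCS,
# read then clear stale SCSI-3 registrations, restart fencing and VCS.
# run() only echoes; this is a review aid, not a live procedure.
run() { echo "would run: $*"; }

run hastop -local
run /etc/init.d/vxfen stop
run vxfenadm -g all -f /etc/vxfentab   # inspect current registrations first
run vxfenclearpre                      # clears keys; destructive on a live node
run /etc/init.d/vxfen start
run hastart
```

Keeping the sequence in one reviewable script helps avoid running vxfenclearpre out of order, since it removes keys from both coordinator and data disks.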
Format]: VF006400
* [Node Format]: Cluster ID: 100 Node ID: 0 Node Name: EPEP2SP
key[1]:
  [Numeric Format]: 86,70,48,48,54,52,48,49
  [Character Format]: VF006401
* [Node Format]: Cluster ID: 100 Node ID: 1 Node Name: EPEP2SF

Device Name: /dev/rdsk/c3d23s2
Total Number Of Keys: 2
key[0]:
  [Numeric Format]: 86,70,48,48,54,52,48,48
  [Character Format]: VF006400
* [Node Format]: Cluster ID: 100 Node ID: 0 Node Name: EPEP2SP
key[1]:
  [Numeric Format]: 86,70,48,48,54,52,48,49
  [Character Format]: VF006401
* [Node Format]: Cluster ID: 100 Node ID: 1 Node Name: EPEP2SF

Device Name: /dev/rdsk/c3d22s2
Total Number Of Keys: 2
key[0]:
  [Numeric Format]: 86,70,48,48,54,52,48,48
  [Character Format]: VF006400
* [Node Format]: Cluster ID: 100 Node ID: 0 Node Name: EPEP2SP
key[1]:
  [Numeric Format]: 86,70,48,48,54,52,48,49
  [Character Format]: VF006401
* [Node Format]: Cluster ID: 100 Node ID: 1 Node Name: EPEP2SF

-bash-4.1# hastop -all
VCS ERROR V-16-1-10600 Cannot connect to VCS engine

-bash-4.1# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen 563f0d membership 01
Port a gen 563f0d jeopardy ;1

-bash-4.1# /etc/init.d/vxfen.rc stop
-bash: /etc/init.d/vxfen.rc: No such file or directory
[ OR ]
vxfen-shutdown

-bash-4.1# /opt/VRTSvcs/vxfen/bin/vxfenclearpre

******** WARNING!!!!!!!! ********

This script recovers from a preexisting split brain condition. This script removes the SCSI-3 registrations on the coordinator disks contained in the file /etc/vxfentab on this node EPEP2SF. This script also removes the SCSI-3 registrations and reservations on the data disks contained in shared disk groups.

This script will remove all the keys from the coordinator and data disks.

VERIFY ALL OTHER NODES ARE POWERED OFF OR INCAPABLE OF ACCESSING SHARED STORAGE. If this is not the case, data corruption will result.

This script will clear keys from the disks as follows.
/dev/rdsk/c3d22s2: VF006400 VF006401
EMC%5FSYMMETRIX%5F000295700464%5F6401A4A008:
VXFEN vxfenadm ERROR V-11-2-1116 Cannot open: EMC%5FSYMMETRIX%5F000295700464%5F6401A4A008
VXFEN vxfenadm ERROR V-11-2-1132 Open of file failed, errno = 2
VXFEN vxfenadm ERROR V-11-2-1205 READ_KEYS failed for: EMC%5FSYMMETRIX%5F000295700464%5F6401A4A008
VXFEN vxfenadm ERROR V-11-2-1133 Error returned
/dev/rdsk/c3d23s2: VF006400 VF006401
EMC%5FSYMMETRIX%5F000295700464%5F6401A4B008: VXF
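The listings above print each key in both Numeric Format (comma-separated ASCII codes) and Character Format; the mapping between the two is plain ASCII. A minimal sketch of the conversion:

```shell
#!/bin/sh
# Sketch: convert a vxfenadm key from "Numeric Format" (comma-separated
# ASCII codes) to "Character Format", matching the pairs in the log above.
numeric_to_chars() {
    # Split on commas, emit each code as its ASCII character, end with newline.
    echo "$1" | tr ',' '\n' | awk '{ printf "%c", $1 } END { print "" }'
}

numeric_to_chars "86,70,48,48,54,52,48,48"   # → VF006400
```

This makes it easy to confirm which node a registration belongs to, since each cluster node stamps its own key (here VF006400 for node 0 and VF006401 for node 1).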