GFS2: Error Mounting lockproto lock_dlm
/sbin/mount.gfs2: fs is for a different cluster
The GFS2 init script fails with the following error:

    [root@eser ~]# /etc/init.d/gfs2 start
    Mounting GFS2 filesystem (/sharedweb): fs is for a different cluster
    error mounting lockproto lock_dlm
                                                               [FAILED]
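The "fs is for a different cluster" message means the cluster name recorded in the filesystem's lock table (the part before the colon) does not match the name of the running cluster. A minimal sketch of that check, with the lock table and cluster name hard-coded from this thread (on a real node you would read them with `gfs2_tool sb <device> table` and `cman_tool status` rather than typing them in):

```shell
# Lock table as recorded in the GFS2 superblock (format: cluster:fsname).
# Hard-coded here from the thread; on a live node read it with e.g.
#   gfs2_tool sb /dev/mapper/mpathcp1 table
locktable="sharedweb:mygfs2"

# Name of the running cluster; on a live node take it from cluster.conf
# or `cman_tool status`.
cluster="mycluster"

# The part before the colon must match the running cluster name.
fs_cluster="${locktable%%:*}"
if [ "$fs_cluster" != "$cluster" ]; then
    echo "mismatch: filesystem=$fs_cluster running=$cluster"
fi
```

When they differ, either remake the filesystem with the correct `-t cluster:fsname`, or, on releases that ship `tunegfs2`, rewrite the table in place (filesystem unmounted on all nodes) with something like `tunegfs2 -o locktable=mycluster:mygfs2 /dev/mapper/mpathcp1`.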
Steve replied:

    Did you restart the cluster daemons after you changed the config file?
    It looks like it is still looking at the old data from the messages
    you've posted.

The original poster's log and follow-up:

    [root@eser ~]# tail -f /var/log/messages
    Jan 30 15:50:27 eser modcluster: Updating cluster version
    Jan 30 15:50:27 eser corosync[7121]: [QUORUM] Members[2]: 1 2
    Jan 30 15:50:28 eser rgmanager[7379]: Reconfiguring
    Jan 30 15:50:28 eser rgmanager[7379]: Loading Service Data
    Jan 30 15:50:29 eser rgmanager[7379]: Stopping changed resources.
    Jan 30 15:50:29 eser rgmanager[7379]: Restarting changed resources.
    Jan 30 15:50:29 eser rgmanager[7379]: Starting changed resources.
    Jan 30 15:56:21 eser gfs_controld[7254]: join: fs requires cluster="mycluster" current="sharedweb"
    Jan 30 16:02:43 eser gfs_controld[7254]: join: fs requires cluster="mycluster" current="sharedweb"
    Jan 30 18:46:48 eser gfs_controld[7254]: join: fs requires cluster="mycluster" current="sharedweb"

"sharedweb" is the cluster I created earlier, and I created the GFS2 file system under that cluster name. I then deleted the "sharedweb" cluster and created a new cluster called "mycluster", but when mounting the GFS2 partition with the new cluster it shows the error above.

I created the new GFS2 file system using the command shown below:

    mkfs.gfs2 -t mycluster:mygfs2 -p lock_dlm -j 2 /dev/mapper/mpathcp1

My cluster config is as follows:

    # cat /etc/cluster/cluster.conf
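Steve's diagnosis fits the log: gfs_controld reports current="sharedweb", meaning the running cluster stack still carries the old name, so the daemons were not restarted after cluster.conf changed. A sketch of verifying the running cluster name, fed a hard-coded sample shaped like `cman_tool status` output rather than the live command (the version and id values below are illustrative, not from the thread):

```shell
# Sample output in the shape printed by `cman_tool status`; on a live
# node you would pipe the real command instead of this variable.
status='Version: 6.2.0
Cluster Name: mycluster
Cluster Id: 21548'

# Pull out the "Cluster Name" field; after a successful daemon restart
# this should match the cluster part of the filesystem's lock table.
printf '%s\n' "$status" | awk -F': *' '/^Cluster Name/ {print $2}'
```

On a live node, stop rgmanager and gfs2, restart cman so the new configuration is loaded, and then re-run the check before attempting the mount.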
Mounting a GFS2 File System: gfs_controld Dies
See also:
https://www.redhat.com/archives/linux-cluster/2013-January/msg00069.html
https://access.redhat.com/solutions/19324
A related report from the Fedora users list
(https://lists.fedoraproject.org/pipermail/users/2008-April/030487.html):

Hi, I'm trying to mount a GFS2 disk using the clustering in Fedora 8, but gfs_controld dies (see below). For example:

    # service cman start (ver. 6.0.1 config 20)
    Starting cluster:
       Loading modules... done
       Mounting configfs... done
       Starting ccsd... done
       Starting cman... done
       Starting daemons... done
       Starting fencing... done
                                                               [  OK  ]

    # mkfs.gfs2 -t home:gfs -p lock_dlm -j 2 /dev/drbd0
    This will destroy any data on /dev/drbd0.
    Are you sure you want to proceed? [y/n] y
    Device:                    /dev/drbd0
    Blocksize:                 4096
    Device Size                4.00 GB (1048535 blocks)
    Filesystem Size:           4.00 GB (1048532 blocks)
    Journals:                  2
    Resource Groups:           16
    Locking Protocol:          "lock_dlm"
    Lock Table:                "home:gfs"

Check that gfs_controld is running (as you see, it is):

    # ps -ef | grep gfs
    root      4915     2  0 08:54 ?        00:00:00 [gfs2_scand]
    root      6081     1  0 09:32 ?        00:00:00 /sbin/gfs_controld

Now try to mount it:

    # mount -t gfs2 /dev/drbd0 /net -o locktable=home:gfs,lockproto=lock_dlm -vv
    mount: no LABEL=, no UUID=, going to mount /dev/drbd0 by path
    /sbin/mount.gfs2: mount /dev/drbd0 /net
    /sbin/mount.gfs2: parse_opts: opts = "rw,locktable=home:gfs,lockproto=lock_dlm"
    /sbin/mount.gfs2: clear flag 1 for "rw", flags = 0
    /sbin/mount.gfs2: add extra locktable=home:gfs
    /sbin/mount.gfs2: add extra lockproto=lock_dlm
    /sbin/mount.gfs2: parse_opts: flags = 0
    /sbin/mount.gfs2: parse_opts: extra = "locktable=home:gfs,lockproto=lock_dlm"
    /sbin/mount.gfs2: parse_opts: hostdata = ""
    /sbin/mount.gfs2: parse_opts: lockproto = "lock_dlm"
    /sbin/mount.gfs2: parse_opts: locktable = "home:gfs"
    /sbin/mount.gfs2: waiting for gfs_controld to start   <-- dies here
    /sbin/mount.gfs2: waiting for gfs_controld to start
    /sbin/mount.gfs2: waiting for gfs_controld to start
    /sbin/mount.gfs2: waiting for gfs_controld to start
    /sbin/mount.gfs2: gfs_controld not ru
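Since mount.gfs2 simply loops printing "waiting for gfs_controld to start", it helps to confirm the daemon is alive immediately before mounting; the `ps` check above was run earlier, and the daemon can die in between. A minimal pre-check sketch, assuming `pgrep` is available (the daemon name is taken from the thread above):

```shell
# Check for a live gfs_controld process right before mounting, so a
# dead daemon is reported up front instead of an endless mount.gfs2 loop.
if pgrep -x gfs_controld >/dev/null 2>&1; then
    echo "gfs_controld running"
else
    echo "gfs_controld not running"
fi
```

If the check fails, restart the cman services and look in /var/log/messages for why gfs_controld exited; on many releases running the daemon in the foreground with its debug flag also captures the reason it dies.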