GlusterFS "Reading From Socket Failed" Error
Why can't I create this gluster volume?

I'm setting up my first Gluster 3.4 install, and all is good up until I want to create a distributed replicated volume. I have 4 servers: 192.168.0.11, 192.168.0.12, 192.168.0.13 and 192.168.0.14. From 192.168.0.11 I ran:

    gluster peer probe 192.168.0.12
    gluster peer probe 192.168.0.13
    gluster peer probe 192.168.0.14

On each server I have a mounted storage volume at /export/brick1. I then ran on 192.168.0.11:

    gluster volume create gv0 replica 2 192.168.0.11:/export/brick1 192.168.0.12:/export/brick1 192.168.0.13:/export/brick1 192.168.0.14:/export/brick1

But I get the error:

    volume create: gv0: failed: Host 192.168.0.11 is not in 'Peer in Cluster' state

Sure enough, if you run gluster peer status on 192.168.0.11 it shows 3 peers, the other connected hosts, i.e.

    Number of Peers: 3

    Hostname: 192.168.0.12
    Port: 24007
    Uuid: bcea6044-f841-4465-88e4-f76a0c8d5198
    State: Peer in Cluster (Connected)

    Hostname: 192.168.0.13
    Uuid: 3b5c188e-9be8-4d0f-a7bd-b738a88f2199
    State: Peer in Cluster (Connected)

    Hostname: 192.168.0.14
    Uuid: f6f326eb-0181-4f99-8072-f27652dab064
    State: Peer in Cluster (Connected)

But from 192.168.0.12 the same command also shows 3 hosts, and 192.168.0.11 is one of them, i.e.

    Number of Peers: 3

    Hostname: 192.168.0.11
    Port: 24007
    Uuid: 09a3bacb-558d-4257-8a85-ca8b56e219f2
    State: Peer in Cluster (Connected)

    Hostname: 192.168.0.13
    Uuid: 3b5c188e-9be8-4d0f-a7bd-b738a88f2199
    State: Peer in Cluster (Connected)

    Hostname: 192.168.0.14
    Uuid: f6f326eb-0181-4f99-8072-f27652dab064
    State: Peer in Cluster (Connected)

So 192.168.0.11 is definitely part of the cluster.
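Since the asker is comparing gluster peer status output across four nodes by eye, a small parser makes that comparison mechanical. This is only a sketch assuming the output format shown above; the helper names are hypothetical, not part of any Gluster tooling:

```python
def parse_peer_status(output):
    """Parse the text output of `gluster peer status` into a list of dicts."""
    peers = []
    current = None
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("Hostname:"):
            current = {"hostname": line.split(":", 1)[1].strip()}
            peers.append(current)
        elif line.startswith("Uuid:") and current is not None:
            current["uuid"] = line.split(":", 1)[1].strip()
        elif line.startswith("State:") and current is not None:
            current["state"] = line.split(":", 1)[1].strip()
    return peers

def all_in_cluster(peers):
    """True if every listed peer reports 'Peer in Cluster (Connected)'."""
    return all(p.get("state") == "Peer in Cluster (Connected)" for p in peers)

# Sample taken from the peer status output quoted in the question.
sample = """Number of Peers: 2

Hostname: 192.168.0.12
Port: 24007
Uuid: bcea6044-f841-4465-88e4-f76a0c8d5198
State: Peer in Cluster (Connected)

Hostname: 192.168.0.13
Uuid: 3b5c188e-9be8-4d0f-a7bd-b738a88f2199
State: Peer in Cluster (Connected)
"""

peers = parse_peer_status(sample)
print(len(peers), all_in_cluster(peers))  # prints: 2 True
```

Running this against the output captured on each node would show at a glance whether any node sees a peer outside the 'Peer in Cluster' state.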
On 01/21/2013 01:50 PM, Kanagaraj Mayilsamy wrote:
> Hi Jithin,
>
> By looking at the logs, it looks like you already had a volume named 'vol1' in Gluster, and you have tried to create another volume with the same name from the UI. That is why you were able to see the volume 'vol1' even after the creation failed.
>
> I am not sure which version of ovirt-engine you are using. The recent releases (3.2) and the upstream code currently have support for reflecting old volumes in the UI, whether they were created via the UI or directly from the CLI. With this change, vol1 should have appeared in the UI even before the creation.
>
> So it looks like there are no issues with the creation of the volume. I am not familiar with the mount issues; someone else will help you out.

Can you please provide the glusterfs version installed on the host from where you are trying to mount? Note that glusterfs 3.3 or 3.4 is not compatible with glusterfs 3.2, and hence you cannot have a mix of these versions in the cluster or between client and servers.

Thanks,
Vijay

> Thanks,
> Kanagaraj
>
> ----- Original Message -----
>> From: "Jithin Raju"
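The compatibility rule stated above (glusterfs 3.3/3.4 cannot be mixed with 3.2) could be encoded as a small pre-flight check before mounting. This is a sketch of that one stated rule only; the function names, and the assumption that other version pairs interoperate, are mine and not taken from GlusterFS documentation:

```python
def parse_version(version):
    """Return (major, minor) from a version string like '3.4.2'."""
    parts = version.split(".")
    return (int(parts[0]), int(parts[1]))

def compatible(client_version, server_version):
    """Apply the rule from the mailing-list reply: 3.3 and 3.4 cannot be
    mixed with 3.2. All other pairings are optimistically assumed fine,
    which is an assumption, not a guarantee."""
    pair = tuple(sorted((parse_version(client_version),
                         parse_version(server_version))))
    incompatible_pairs = {((3, 2), (3, 3)), ((3, 2), (3, 4))}
    return pair not in incompatible_pairs

print(compatible("3.2.7", "3.4.0"))  # prints: False
print(compatible("3.4.0", "3.4.2"))  # prints: True
```

A check like this run against the client's and servers' reported versions would catch the mixed-version mount problem Vijay describes before it surfaces as a cryptic failure.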
[glusterd-rpc-ops.c:1243:glusterd3_1_commit_op_cbk] 0-glusterd: Received ACC from uuid: fc5e6659-a90a-4e25-a3a7-11de9a7de81d
[2011-12-06 17:56:59.48811] I [glusterd-rpc-ops.c:1243:glusterd3_1_commit_op_cbk] 0-glusterd: Received ACC from uuid: d1216f43-2ae6-42bd-a597-c0ab6a101d6b
[2011-12-06 17:56:59.49073] I [glusterd-rpc-ops.c:1243:glusterd3_1_commit_op_cbk] 0-glusterd: Received ACC from uuid: 4bf94e6e-69ca-4d51-9a85-c1d98a95325d
[2011-12-06 17:56:59.49137] I [glusterd-rpc-ops.c:1243:glusterd3_1_commit_op_cbk] 0-glusterd: Received ACC from uuid: 154cdbb2-6a53-449d-b6e3-bfd84091d90c
[2011-12-06 17:56:59.49567] I [glusterd-rpc-ops.c:1243:glusterd3_1_commit_op_cbk] 0-glusterd: Received ACC from uuid: 4c9d68d6-d573-43d0-aec5-07173c1699d0
[2011-12-06 17:56:59.49803] I [glusterd-rpc-ops.c:818:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received ACC from uuid: fc5e6659-a90a-4e25-a3a7-11de9a7de81d
[2011-12-06 17:56:59.49850] I [glusterd-rpc-ops.c:818:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received ACC from uuid: d1216f43-2ae6-42bd-a597-c0ab6a101d6b
[2011-12-06 17:56:59.50228] I [glusterd-rpc-ops.c:818:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received ACC from uuid: 4bf94e6e-69ca-4d51-9a85-c1d98a95325d
[2011-12-06 17:56:59.50285] I [glusterd-rpc-ops.c:818:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received ACC from uuid: 154cdbb2-6a53-449d-b6e3-bfd84091d90c
[2011-12-06 17:56:59.50346] I [glusterd-rpc-ops.c:818:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received ACC from uuid: 4c9d68d6-d573-43d0-aec5-07173c1699d0
[2011-12-06 17:56:59.50375] I [glusterd-op-sm.c:7250:glusterd_op_txn_complete] 0-glusterd: Cleared local lock
[2011-12-06 17:56:59.52105] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (127.0.0.1:694)
[2011-12-06 17:56:59.168257] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (10.0.30.11:730)
[2011-12-06 17:56:59.168357] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: re
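The socket warnings at the end of this excerpt follow a fixed format, so they can be pulled out of a glusterd log mechanically rather than by scrolling. A minimal sketch, assuming only the log line format shown above (the regex and helper name are my own, not part of GlusterFS):

```python
import re

# Matches glusterd warnings of the form seen in the excerpt above:
# [TIMESTAMP] W [socket.c:LINE:__socket_proto_state_machine]
#   0-socket.management: reading from socket failed. Error (ERROR), peer (PEER)
WARN_RE = re.compile(
    r"\[(?P<ts>[\d\- :.]+)\] W \[socket\.c:\d+:__socket_proto_state_machine\] "
    r"0-socket\.management: reading from socket failed\. "
    r"Error \((?P<error>[^)]+)\), peer \((?P<peer>[^)]+)\)"
)

def socket_failures(log_text):
    """Return (timestamp, error, peer) tuples for each socket-read warning."""
    return [m.group("ts", "error", "peer") for m in WARN_RE.finditer(log_text)]

# One of the warning lines quoted in the log excerpt.
line = ("[2011-12-06 17:56:59.52105] W "
        "[socket.c:1494:__socket_proto_state_machine] 0-socket.management: "
        "reading from socket failed. Error (Transport endpoint is not "
        "connected), peer (127.0.0.1:694)")

failures = socket_failures(line)
print(failures)
```

Feeding the whole log through a helper like this shows which peers are dropping connections, which is usually the first question when chasing a "Transport endpoint is not connected" error.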