LVM error
LVM error
Postby marcos86 » 2015/05/22 20:18:18

Good evening, I'm writing to share my problem and try to find a solution. I have a Dell R200 server with two HDDs, one of 500 GB and one of 1 TB. Until today everything went fine. I did a yum update, and as there was a new kernel I rebooted the server. Since then, pvscan reports "/dev/sdb: read failed after 0 of 4096 at 0: Input/output error" and the server cannot boot any more because of the LVM error. I share some details with you to let you better understand:

Code: Select all
fdisk -l
Disk /dev/sda: 500.1 GB
Disk identifier: 0x00077f18

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *            1          13      102400   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2               13         536     4194304   82  Linux swap
Partition 2 does not end on cylinder boundary.
/dev/sda3              536         797     2097152   83  Linux
Partition 3 does not end on cylinder boundary.
/dev/sda4              797       60802   481991704    5  Extended
/dev/sda5              797       60802   481990656   83  Linux

Disk /dev/sdb: 1000.2 GB
Disk identifier: 0x000634aa

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *            1      121490   975868393+  8e  Linux LVM
/dev/sdb2           121491      121601      891607+  8e  Linux LVM

Code: Select all
lvm vgscan -v
Wiping cache of LVM-capable devices
Wiping internal VG cache
Reading all physical volumes. This may take a while....
Finding all volume groups
/dev/sdb: read failed after 0 of 4096 at 0: Input/output error
/dev/sdb: read failed after 0 of 4096 at 1000204795904: Input/output error
/dev/sdb: read failed after 0 of 4096 at 1000204795904: Input/output error
/dev/sdb: read failed after 0 of 4096 at 0: Input/output error
/dev/sdb: read failed after 0 of 4096 at 4096: Input/output error
/dev/sdb: read failed after 0 of 4096 at 0: Input/output error
/dev/sdb: read failed after 0 of 512 at 999289126912: Input/output error
/dev/sdb: read failed after 0 of 512 at 999289225216: Input/output error
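Read failures at offset 0 and at the very end of the disk mean LVM cannot read /dev/sdb at all, so before trying any LVM recovery it is worth confirming whether the kernel can still read the raw device. A minimal check, assuming the /dev/sdb device name from the post above (smartctl requires the smartmontools package):

```shell
# Try a raw, uncached read of the first 4 KiB; if this fails, the
# problem is below LVM (disk, cabling, or controller), not LVM itself.
dd if=/dev/sdb of=/dev/null bs=4096 count=1 iflag=direct

# The kernel log usually records the underlying I/O errors.
dmesg | grep -iE 'sdb|i/o error' | tail -n 20

# SMART health summary for the suspect disk.
smartctl -H /dev/sdb
```

If the raw dd read fails too, no LVM command will help until the disk or controller problem is fixed.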
Removing vg and lv after physical drive has been removed
Date: Sat, 1 Sep 2012 20:54:21 +0700

We had a disk fail in a server, and replaced it before removing the drive from LVM. The server has 4 physical
drives (PVs), each with its own volume group (VG). Each VG has two or more logical volumes (LVs). Now
LVM is complaining about the missing drive. So we have a VG (vg04) with two LVs that have become orphans that we need to clear out of the system. The problem is that every time we run any LVM command we get some 'read failed' errors. I've tried different commands to clear it out, but so far without luck:

# lvscan
/dev/vg04/swap: read failed after 0 of 4096 at 4294901760: Input/output error
/dev/vg04/swap: read failed after 0 of 4096 at 4294959104: Input/output error
/dev/vg04/swap: read failed after 0 of 4096 at 0: Input/output error
/dev/vg04/swap: read failed after 0 of 4096 at 4096: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 995903864832: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 995903922176: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 0: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 4096: Input/output error

# vgreduce vg04 --removemissing --force
/dev/vg04/swap: read failed after 0 of 4096 at 4294901760: Input/output error
/dev/vg04/swap: read failed after 0 of 4096 at 4294959104: Input/output error
/dev/vg04/swap: read failed after 0 of 4096 at 0: Input/output error
/dev/vg04/swap: read failed after 0 of 4096 at 4096: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 995903864832: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 995903922176: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 0: Input/output error
/dev/vg04/vz: read failed after 0 of 4096 at 4096: Input/output error
Volume group "vg04" not found

# vgchange -a n /dev/vg04
/dev/vg04/swap: read failed after 0 of 4096 at 4294901760: Input/output error
[...]
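The read errors come from stale device-mapper nodes for vg04's logical volumes: the LVs are still mapped in the kernel even though the backing PV is gone, and every LVM command trips over them while scanning. A way to inspect what the kernel still has mapped is sketched below (the vg04-swap and vg04-vz names follow LVM's <vg>-<lv> device-mapper naming convention; these commands are read-only and safe to run):

```shell
# List all device-mapper entries, keeping the header row and any
# entry that belongs to the dead VG (dm devices are named <vg>-<lv>).
dmsetup info -c | awk 'NR==1 || $1 ~ /^vg04-/'

# Show the mapping table for the suspect devices; a target that
# points at a block device that no longer exists explains the
# "read failed" messages.
dmsetup table vg04-swap
dmsetup table vg04-vz
```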
How to remove backup snapshots and not get LVM I/O error
Updated July 05, 2016 12:43

Question
How do I remove backup snapshots on a hypervisor without getting an LVM I/O error?

Environment
All OnApp versions, LVM datastore

Answer
First we will show how OnApp performs backups, to illustrate the process and the commands used. This does not cover incremental backups. Three methods are described, depending on the VM operating system and the HV type:

**** Linux backups on a Xen HV (the LVM logical volume is formatted with a filesystem)
1) An LVM snapshot is created and devices appear in /dev/mapper:
/dev/mapper/onapp-aaaaaaaa/ccccccccccc-real
/dev/mapper/onapp-aaaaaaaa/backup-dddddddd
/dev/mapper/onapp-aaaaaaaa/backup-dddddddd-cow
2) The LVM snapshot is mounted on a temporary directory
3) The data is tar'ed to a backup file
4) The snapshot is unmounted
5) The LVM snapshot is deleted

**** Linux backups on a KVM HV (a partition is created on the LVM logical volume and this partition is formatted with a filesystem)
1) An LVM snapshot is created and devices appear in /dev/mapper:
/dev/mapper/onapp-aaaaaaaa/ccccccccccc-real
/dev/mapper/onapp-aaaaaaaa/backup-dddddddd
/dev/mapper/onapp-aaaaaaaa/backup-dddddddd-cow
2) 'kpartx -a -p X ...' is used to create a device for the partition, and /dev/mapper/backup-ddddddddX1 appears
3) The partition is mounted on a temporary directory
4) The data is tar'ed to a backup file
5) The partition is unmounted
6) 'kpartx -d -p X ...' is used to delete the device for the partition
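The Xen-style cycle above can be sketched as a script. All names below (the onapp volume group, the lv_example volume, the mount and backup paths, and the 1G snapshot size) are illustrative placeholders, not actual OnApp identifiers:

```shell
# 1) Create a copy-on-write snapshot of the logical volume.
lvcreate --snapshot --size 1G --name backup-snap /dev/onapp/lv_example

# 2) Mount the snapshot read-only on a temporary directory.
mkdir -p /mnt/backup-snap
mount -o ro /dev/onapp/backup-snap /mnt/backup-snap

# 3) Archive the data to the backup file.
tar -czf /backups/lv_example.tar.gz -C /mnt/backup-snap .

# 4) Unmount the snapshot.
umount /mnt/backup-snap

# 5) Delete the snapshot. Skipping this step is what leaves the
#    backup-*-cow mappings behind and causes the later I/O errors.
lvremove -f /dev/onapp/backup-snap
```

On a KVM hypervisor, steps 2 and 4 are additionally bracketed by `kpartx -a` and `kpartx -d` to map and unmap the partition inside the volume.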
LVM: Removing a 'Zombie' (Unavailable) VG

I have an LVM related problem. We have a server that 'learned' about a foreign volume group through a hard drive that was briefly attached. Now, with that hard drive gone again, all LVM utilities complain about lots of IO errors (read failed) for this missing volume group.

root@coruscant:~# vgs
/dev/mapper/vg_old-lv1: read failed after 0 of 4096 at 2147418112: Input/output error
/dev/mapper/vg_old-lv2: read failed after 0 of 4096 at 0: Input/output error
[...]
/dev/mapper/vg_old-lvn: read failed after 0 of 4096 at 0: Input/output error
  VG        #PV #LV #SN Attr   VSize   VFree
  real_vg_1   1  36   1 wz--n-   1.73T   1.04T
  real_vg_2   1   3   0 wz--n- 111.66G  61.66G

As you can see from the output of vgs, the 'missing' zombie VG is not actually listed by the utilities. I also cannot deactivate it with vgchange or remove it with vgremove; these tools just return "volume group not found". Any hints on how to remove this 'zombie' VG from the system without rebooting?
System: Ubuntu 8.04 LTS, x64, kernel 2.6.24-29-xen; LVM version 2.02.26 (2007-06-15), Library version 1.02.20 (2007-06-15), Driver version 4.12.0

asked Nov 4 '11 by AndiW

Answer (7 votes):
The solution for removing ghost LVM volumes is dmsetup. Do something like:

# dmsetup remove --force /dev/VolGroup01/LogVol00

After that, your vgs command should run without the read errors.
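Since the zombie VG usually has several LVs, the dmsetup step can be scripted over every mapping carrying the VG's prefix. A sketch using the vg_old name from the question above (run as root, and inspect the matched list before removing anything):

```shell
# Collect the device-mapper names that belong to the zombie VG.
# LVM names its dm devices <vg>-<lv>, so filter on the prefix.
stale=$(dmsetup ls | awk '$1 ~ /^vg_old-/ { print $1 }')

# Remove each stale mapping; --force first replaces the table with
# an error target, so a nonzero open count does not block removal.
for dev in $stale; do
    dmsetup remove --force "$dev"
done

# vgs should now list only the real volume groups, without errors.
vgs
```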