Error 2 While Executing Fsck Linux_raid_member
[SOLVED] hdd fails to mount
Ubuntu Forums > Ubuntu Specialised Support > Ubuntu Servers, Cloud and Juju > Server Platforms
https://ubuntuforums.org/showthread.php?t=2197570

January 4th, 2014, #1, johans.postbox (Join Date: Jan 2014)

Hi, I have a strange issue. I am running a 10.04 server, and after the latest (kernel) upgrade one disk fails to mount. It used to be part of a RAID array, but that array has been deleted. Trying to mount it manually fails with:

Code:
    mount: /dev/sdc1 already mounted or /media/storage/ busy

I tried to force it, but without success. I suspect the old RAID setup (software RAID with mdadm) is still claiming the disk, because when I run fsck on it I get this:

Code:
    fsck from util-linux-ng 2.17.2
    fsck: fsck.linux_raid_member: not found
    fsck: Error 2 while executing fsck.linux_raid_member for /dev/sdc

Could it be update-initramfs that messes something up? That is run during a kernel upgrade. The drive works fine when used in a USB enclosure connected to another computer. It does not have the raid flag set anymore.
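The error above happens because the partition still carries an md (software RAID) superblock, so the kernel and fsck see it as a `linux_raid_member` rather than a plain filesystem, and mdadm may auto-assemble it at boot and hold it busy. A minimal sketch of the usual cleanup, assuming the disk is /dev/sdc1 and any stale array came up as /dev/md0 (both names are assumptions; check `cat /proc/mdstat` and `blkid` on your own system first):

```shell
DEV=/dev/sdc1   # hypothetical device name; adjust to your system

if [ -b "$DEV" ]; then
    # blkid reports TYPE="linux_raid_member" while the old md metadata remains:
    blkid "$DEV"
    # Release the disk if mdadm auto-assembled a stale array around it:
    mdadm --stop /dev/md0 2>/dev/null
    # Destructive: erase the md superblock (only md metadata, not file data),
    # so the underlying filesystem becomes visible again. Only do this once
    # you are certain the array is gone for good.
    mdadm --zero-superblock "$DEV"
    status="superblock cleared"
else
    status="skipped: $DEV not present on this machine"
fi
echo "$status"
```

After this, `fsck /dev/sdc1` and `mount /dev/sdc1 /media/storage` should behave normally, since fsck now dispatches to the real filesystem checker instead of the nonexistent fsck.linux_raid_member helper.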
rescue data from corrupted HDD
Ask Ubuntu
http://askubuntu.com/questions/379049/rescue-data-from-corupted-hdd

I have an Ubuntu server with 3 x 1.5 TB SATA2 drives in software RAID5 with LVM on top; it runs as a VPS node. This morning a guy from the datacenter mailed me that sda was corrupted and needed to be replaced. So he changed it, then found out that sdc was also corrupted, and I demanded that he put the old sda back so that I could try to restore some files.
I ran grub-install on sdb, but it is unconfigured, and on sdb1 there is only "lost+found" (I mounted it in the live rescue system and checked). That is the only partition I can mount, as you can see in the following (+ fsck):

    root@rescue ~ # ll /mnt/
    total 0
    root@rescue ~ # cd /mnt/
    root@rescue /mnt # mkdir sda sda1 sda2 sdb sdb1 sdb2 sdc
    root@rescue /mnt # mount /dev/sda /mnt/sda
    mount: /dev/sda already mounted or /mnt/sda busy
    root@rescue /mnt # mount /dev/sda1 /mnt/sda1
    mount: unknown filesystem type 'linux_raid_member'
    root@rescue /mnt # mount /dev/sda2 /mnt/sda2
    mount: unknown filesystem type 'linux_raid_member'
    root@rescue /mnt # mount /dev/sdb /mnt/sdb
    mount: you must specify the filesystem type
    root@rescue /mnt # mount /dev/sdb1 /mnt/sdb1
    root@rescue /mnt # mount /dev/sdb2 /mnt/sdb2
    mount: you must specify the filesystem type
    root@rescue /mnt # mount /dev/sdc /mnt/sdc
    mount: you must specify the filesystem type
    root@resc
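The "unknown filesystem type 'linux_raid_member'" lines in the transcript are expected: RAID member partitions cannot be mounted directly; the md array has to be assembled from them first, and because this setup had LVM on top of RAID5, the logical volumes then have to be activated before anything is mountable. A sketch of that rescue sequence, assuming the surviving members are /dev/sda1 and /dev/sdc1 and the volume group path is /dev/mapper/vg0-root (the member and VG/LV names are hypothetical; read them off `mdadm --examine` and `lvs` on the real system):

```shell
M1=/dev/sda1; M2=/dev/sdc1   # hypothetical surviving RAID5 members

if [ -b "$M1" ] && [ -b "$M2" ]; then
    # Confirm both partitions carry the same array UUID before assembling:
    mdadm --examine "$M1" "$M2"
    # Start the array degraded (--run) and read-only, to avoid writing to
    # disks that are already damaged:
    mdadm --assemble --run --readonly /dev/md0 "$M1" "$M2"
    # Activate the LVM volumes sitting on top of the array:
    vgchange -ay
    # Mount read-only and copy data off:
    mount -o ro /dev/mapper/vg0-root /mnt/sda   # hypothetical LV name
    result="array assembled read-only"
else
    result="skipped: RAID members not present on this machine"
fi
echo "$result"
```

With two of three RAID5 members intact the array can run degraded, so this is worth trying before anything more invasive; keeping everything read-only preserves the option of professional recovery if it fails.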
mdadm keeps removing disk
Server Fault
http://serverfault.com/questions/479920/mdadm-keeps-removing-disk

As the title suggests, mdadm keeps marking a drive as "Removed" (in mdadm --detail), and I was hoping to get suggestions as to why that might happen. I wanted to fsck the drives, but I got the following error:

    $ fsck /dev/sda1
    fsck from util-linux 2.20.1
    fsck: fsck.linux_raid_member: not found
    fsck: error 2 while executing fsck.linux_raid_member for /dev/sda1

I've since learned that an internal bitmap would help stop me from needing to --add the third drive back, avoiding the resync process/time; however, I'm assuming the third disk needs to be added back first for the bitmap to be of any use. Any other suggestions on how to avoid a costly resync would be appreciated. This RAID is used for media serving, so a high-read, low-write application.

Update: at the request of MadHatter, here's the output from /proc/mdstat (the RAID is in the process of rebuilding):

    Personalities : [raid6] [raid5] [raid4]
    md1 : active raid5 sdc1[3] sda1[2] sdb1[1]
          3907023872 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
          [=====>...............]  recovery = 25.2% (493990636/1953511936) finish=1893.9min speed=12843K/sec

    unused devices:
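The asker's assumption is right: the write-intent bitmap has to exist on the array before a member drops out for it to help. Once it does, a disk that fell out and is re-added only resyncs the regions marked dirty in the bitmap, instead of rebuilding all ~2 TB. A sketch of enabling it on the array from the post, assuming /dev/md1 as above (the re-added member name /dev/sdc1 is illustrative):

```shell
MD=/dev/md1   # the array from the /proc/mdstat output above

if [ -b "$MD" ]; then
    # Enable an internal write-intent bitmap (wait for any rebuild to finish first):
    mdadm --grow "$MD" --bitmap=internal
    # Later, when a member drops out and comes back, re-add it; with the
    # bitmap in place this triggers only a partial resync:
    #   mdadm /dev/md1 --re-add /dev/sdc1
    note="bitmap enabled"
else
    note="skipped: $MD not present on this machine"
fi
echo "$note"
```

The bitmap costs a small write-latency overhead, which matters little for a high-read, low-write media server like this one.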
Repair a broken ext4 superblock in Ubuntu
https://linuxexpresso.wordpress.com/2010/03/31/repair-a-broken-ext4-superblock-in-ubuntu/

won't boot, all your filesystem checks tell you you've got a bad superblock, but you can't seem to find out how to fix it. Well, here goes 🙂 This guide is for ext4, though I'll explain how other filesystems can be cured along the way.

The easiest way to carry all this out, seeing as your computer probably won't boot at this stage, is to download and burn a copy of Parted Magic. Boot from that, and you'll have access to a number of useful tools.

First, figure out which partition we're dealing with:

    sudo fdisk -l

The above will list all the partitions on all the drives in your computer. To recover a lost partition, you're going to need TestDisk. TestDisk is included in Parted Magic, and there's a great guide on their site. For this, though, we just need the partition name, such as /dev/sda3 or /dev/hdb1.

Now, make sure your superblock is the problem by starting a filesystem check, replacing xxx with your partition name. Here, you can change ext4 to ext3 or ext2 to suit the filesystem.

    sudo fsck.ext4 -v /dev/xxx

If your superblock is corrupt, the output will look like this:

    fsck /dev/sda5
    fsck 1.41.4 (27-Jan-2009)
    e2fsck 1.41.4 (27-Jan-2009)
    fsck.ext4: Group descriptors look bad... trying backup blocks...
    fsck.ext4: Bad magic number in super-block while trying to open /dev/sda5
    The superblock could not be read or does not describe a correct ext4
    filesystem.  If the device is valid and it really contains an ext4
    filesystem (and not swap or ufs or something else), then the superblock
    is corrupt, and you might try running e2fsck with an alternate superblock:
        e2fsck -b 8193
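ext2/3/4 keep backup copies of the superblock at fixed intervals across the disk, so a corrupt primary superblock is usually recoverable. A sketch of the repair step the error message points at, assuming the damaged filesystem is /dev/sda5 as in the example output (adjust the device name, and note that 8193 is the first backup only for 1 KiB blocks; filesystems with 4 KiB blocks typically keep it at 32768):

```shell
DEV=/dev/sda5   # hypothetical partition from the example above

if [ -b "$DEV" ]; then
    # List where the backup superblocks live WITHOUT touching the disk:
    # mke2fs -n only prints the layout it would create, it writes nothing.
    mke2fs -n "$DEV"
    # Repair using the first backup (32768 for the common 4 KiB block size;
    # use 8193 if mke2fs -n shows a 1 KiB block size):
    e2fsck -b 32768 "$DEV"
    outcome="repair attempted"
else
    outcome="skipped: $DEV not present on this machine"
fi
echo "$outcome"
```

Be careful to pass `-n` to mke2fs; without it the command would reformat the partition. Once e2fsck finishes against a backup superblock, it rewrites the primary, and a plain `fsck` afterwards should come back clean.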