GRUB Hard Disk Error on CentOS RAID 1
RAID 1 mirrors data between two drives. When one of the drives fails (which it eventually will), here is how you fix it. I have taken info from these pages for this blog post:

http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array
http://blog.mydream.com.hk/howto/linux/howto-reinstall-grub-in-rescue-mode-wh…

In this example I have two hard drives, /dev/sda and /dev/sdb, with the partitions /dev/sda1 and /dev/sda2 as well as /dev/sdb1 and /dev/sdb2. /dev/sda1 and /dev/sdb1 make up the RAID 1 array /dev/md0; /dev/sda2 and /dev/sdb2 make up the RAID 1 array /dev/md1:

/dev/sda1 + /dev/sdb1 = /dev/md0
/dev/sda2 + /dev/sdb2 = /dev/md1

/dev/sdb has failed, and we want to replace it.

First of all: 'cat /proc/mdstat' is your friend - it will show you the status of your RAID during the whole process. In its output you will see an (F) behind a failed device, or the device will be missing altogether.

First, fail and remove the failed device(s):

mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1

Repeat for the other md devices containing sdb parts. Now the output from 'cat /proc/mdstat' should only contain parts from sda.

Power down, replace the drive, and turn the machine back on. To create the same partitions on sdb as you have on sda, do this:

sfdisk -d /dev/sda | sfdisk /dev/sdb

'fdisk -l' should now show the same partitions on sda and sdb. Next, add the proper parts from sdb to the relevant md device. So if md0 contains sda1, do this:

mdadm --manage /dev/md0 --add /dev/sdb1

Repeat for all md devices so you have the same parts from sda and sdb in all of them. Check with 'cat /proc/mdstat'. Let it sync back up (check with 'watch -n 10 cat /proc/mdstat' until it finishes).

Now, fix GRUB:

grub
grub> root (hd0,0)
grub> setup (hd0)

If you're unlucky and can't boot because the wrong device comes first (the BIOS is trying to boot from the clean/new hard drive), follow these steps. First, boot into a live CD with your OS.
Then activate the RAID:

1) mkdir /etc/mdadm
2) mdadm --examine --scan > /etc/mdadm/mdadm.conf
3) mdadm --assemble --scan

Then reinstall GRUB. In this example, you have /boot on md0 and / on md1:

1) mkdir /mnt/sysimage
2) mount /dev/md1 /mnt/sysimage
3) mount -o bind /dev /mnt/sysimage/dev
4) mount -o bind /proc /mnt/sysimage/proc
5) chroot /mnt/sysimage /bin/bash
6) mount /dev/md0 /boot

Then fix GRUB (same as above):

grub
grub> root (hd0,0)
grub> setup (hd0)
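Since the whole procedure hinges on reading /proc/mdstat correctly, here is a small sketch (not from the original post) that pulls out members flagged with (F). It runs against a sample file so it is self-contained; on a real system you would point it at /proc/mdstat instead.

```shell
#!/bin/sh
# Sketch: list "array member" pairs for every device marked failed
# with "(F)" in mdstat-style output. The sample file below stands in
# for /proc/mdstat so this can be run anywhere.
failed_members() {
    awk '/^md/ {
        for (i = 5; i <= NF; i++)
            if ($i ~ /\(F\)/) {
                dev = $i
                sub(/\[.*/, "", dev)   # strip the [n](F) suffix
                print $1, dev
            }
    }' "$1"
}

cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1](F)
      974713720 blocks [2/1] [U_]
EOF

failed_members /tmp/mdstat.sample   # prints: md0 sdb1
```

Any array/member pair this prints is a candidate for the `mdadm --manage ... --fail/--remove` steps above.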
(The walkthrough above is from https://mortencb.wordpress.com/2012/10/04/fixing-drive-failure-when-using-mdadmraid1-on/. The thread below is from https://community.spiceworks.com/topic/808527-centos-raid-1-configuration-regarding)

Q (lakshmanashivaa): I have a question about the CentOS RAID 1 configuration. I am using two hard disks in the system for RAID 1 and have added a new hard disk after replacing the failed one. Can the system be switched on with only one hard disk after the data has been recovered onto the new disk? If it is not possible to boot from one hard disk, why? If it is possible, how do I do that?

A (Dataless, Feb 24, 2015): Assuming the RAID was created properly you should be able to boot from either one. You say you added a new drive; after it has fully synced you can easily confirm this by unplugging the older drive and seeing if the boot loader comes up. The sync status can be confirmed using mdadm. Here's a link to a page with some information about using mdadm and RAID in general: http://www.ducea.com/2009/03/08/mdadm-cheat-sheet/

Q (lakshmanashivaa): Thanks for your reply. I have configured software RAID, but the newly added hard disk does not boot after syncing. When I connect only the new hard disk, the system shows a "Disk Boot Failure" error. Why is it showing that?

A (Dataless): It's giving that error because the boot loader isn't installed on the second drive yet.
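A quick sanity check related to that answer: a drive that throws "Disk Boot Failure" often has nothing bootable in its MBR at all. The BIOS only considers sector 0 bootable if its last two bytes are 55 AA. This sketch (an assumption on my part, not from the thread) checks for that signature; it is demoed on a fabricated 512-byte image, but on a real box you would pass /dev/sda or /dev/sdb.

```shell
#!/bin/sh
# Sketch: test whether the first sector of a device/image ends with
# the 0x55AA MBR boot signature.
has_boot_signature() {
    sig=$(dd if="$1" bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' ')
    [ "$sig" = "55aa" ]
}

# build a fake 512-byte "MBR" carrying a valid signature
dd if=/dev/zero of=/tmp/fake-mbr bs=512 count=1 2>/dev/null
printf '\125\252' | dd of=/tmp/fake-mbr bs=1 seek=510 conv=notrunc 2>/dev/null

has_boot_signature /tmp/fake-mbr && echo "bootable signature present"
```

If the signature is missing on the new drive, that matches the answer: GRUB was never installed there, and the `grub> root`/`grub> setup` steps from the walkthrough above are the fix.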
(Source: http://grokbase.com/t/centos/centos/1197qjg37e/boot-problem-after-disk-change-on-raid1 — see also http://serverfault.com/questions/612529/linux-raid-1-how-to-make-a-secondary-hd-boot)

One of the disks was broken, so I replaced the broken disk with a working one. I started the server in rescue mode, created the partition table, and added all the partitions to the software RAID, then rebooted:

# mdadm /dev/md0 --add /dev/sdb1
# mdadm /dev/md1 --add /dev/sdb2
# mdadm /dev/md2 --add /dev/sdb3
# mdadm /dev/md3 --add /dev/sdb4

After the reboot, the server did not boot. So I did the following:

# mount /dev/md1 /mnt/rescue
# mount /dev/md0 /mnt/rescue/boot
# mount -o bind /dev /mnt/rescue/dev
# mount -o bind /proc /mnt/rescue/proc
# mount -o bind /dev/shm /mnt/rescue/dev/shm
# mount -o bind /sys /mnt/rescue/sys
# chroot /mnt/rescue

I checked the device.map:

# cat /boot/grub/device.map
(hd0) /dev/sda
(hd1) /dev/sdb

And installed GRUB:

# grub
Probing devices to guess BIOS drives. This may take a long time.
GNU GRUB version 0.97 (640K lower / 3072K upper memory)

[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename. ]

grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... yes
Checking if "/boot/grub/stage2" exists... yes
Checking if "/boot/grub/e2fs_stage1_5" exists... yes
Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded. succeeded
Running "install /boot/grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded
Done.
grub> root (hd1,0)
Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd1)
Checking if "/boot/grub/stage1" exists... yes
Checking if "/boot/grub/stage2" exists... yes
Checking if "/boot/grub/e2fs_stage1_5" exists... yes
Running "embed /boot/grub/e2fs_stage1_5 (hd1)"... 15 sectors are embedded. succeeded
Running "install /boot/grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded
Done.
grub> quit

But it still does not boot. What should I do at this point? What do you suggest?

Disk information:

# fdisk -l /dev/sd[ab]
Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 5
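The GRUB output above confirms each partition is type 0xfd (Linux raid autodetect), which is worth verifying on both disks after cloning the table with `sfdisk -d /dev/sda | sfdisk /dev/sdb`. Here is a sketch (my own addition, not from the post) that compares the type columns of two sfdisk-style dumps; it runs on sample dumps rather than real disks, and the sample start/size values are made up for illustration.

```shell
#!/bin/sh
# Sketch: extract "partition type" pairs from an sfdisk -d dump,
# then check that two disks carry the same layout.
part_types() {
    awk '/Id=/ { sub(/ :.*Id=/, " "); sub(/,.*/, ""); print }' "$1"
}

cat > /tmp/sda.dump <<'EOF'
/dev/sda1 : start=       63, size=   208782, Id=fd, bootable
/dev/sda2 : start=   208845, size=  4192965, Id=fd
EOF
cat > /tmp/sdb.dump <<'EOF'
/dev/sdb1 : start=       63, size=   208782, Id=fd, bootable
/dev/sdb2 : start=   208845, size=  4192965, Id=fd
EOF

# compare layouts ignoring the device letter
a=$(part_types /tmp/sda.dump | sed 's#/dev/sd[a-z]##')
b=$(part_types /tmp/sdb.dump | sed 's#/dev/sd[a-z]##')
[ "$a" = "$b" ] && echo "layouts match"
```

A mismatch here (wrong type, different sizes) is one reason a freshly added member can sync but still leave the box unbootable.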
Linux RAID 1: How to make a secondary HD boot?

I have the following RAID 1 on a CentOS 6.5 server:

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[3]
      974713720 blocks super 1.0 [2/1] [_U]
      bitmap: 7/8 pages [28KB], 65536KB chunk

md1 : active raid1 sdb2[3] sda2[2]
      2045944 blocks super 1.1 [2/2] [UU]

unused devices:
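The bracketed map in that output is the key detail: `[2/1] [_U]` for md0 means the array wants 2 members, has 1, and slot 0 (the underscore) is missing — which matches sda1 being absent while sdb1 is up. This small sketch (my own addition) decodes such a map character by character:

```shell
#!/bin/sh
# Sketch: translate an mdstat member map like "[_U]" into per-slot
# status lines. U = member up, _ = member missing/failed.
decode_map() {
    map=${1#"["}
    map=${map%"]"}
    i=0
    while [ -n "$map" ]; do
        c=${map%"${map#?}"}   # take the first character
        map=${map#?}          # drop it from the remainder
        case $c in
            U) echo "slot $i: up" ;;
            _) echo "slot $i: missing" ;;
        esac
        i=$((i + 1))
    done
}

decode_map "[_U]"
# prints:
#   slot 0: missing
#   slot 1: up
```

A degraded map like this, combined with a "secondary HD" that won't boot on its own, usually points back to the same two fixes covered above: re-add the missing member with mdadm, and install GRUB on both drives.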