Fixing drive failure in an mdadm RAID1 mirror (and reinstalling GRUB) on CentOS 5
When you use mdadm RAID1, it handles the mirroring between two drives. When one of the drives fails (which it eventually will), here is how you fix it. I have taken info from these pages for this blog post: http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array and http://blog.mydream.com.hk/howto/linux/howto-reinstall-grub-in-rescue-mode-wh… In this example I have two hard
drives, /dev/sda and /dev/sdb, with the partitions /dev/sda1 and /dev/sda2 as well as /dev/sdb1 and /dev/sdb2. /dev/sda1 and /dev/sdb1 make up the RAID1 array /dev/md0, and /dev/sda2 and /dev/sdb2 make up the RAID1 array /dev/md1:

/dev/sda1 + /dev/sdb1 = /dev/md0
/dev/sda2 + /dev/sdb2 = /dev/md1

/dev/sdb has failed, and we want to replace it. First of all: 'cat /proc/mdstat' is your friend - it will show you the status of your RAID during the whole process. In the output from 'cat /proc/mdstat' you will see an (F) behind a failed device, or it will be missing altogether. First, fail and remove the failed device(s):

mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1

Repeat for the other md-devices containing sdb parts.
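As a minimal sketch, assuming the layout above (sdb1 in md0, sdb2 in md1), the complete fail-and-remove sequence would look like this:

# mark both sdb members as failed, then pull them out of their arrays
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md1 --fail /dev/sdb2
mdadm --manage /dev/md1 --remove /dev/sdb2

# verify: the sdb parts should now be gone
cat /proc/mdstat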
Now the output from 'cat /proc/mdstat' should only contain parts from sda. Power down, change the drive, and turn it back on. To make the same partitions on sdb as you have on sda, do this:

sfdisk -d /dev/sda | sfdisk /dev/sdb

'fdisk -l' should now show the same partitions on sda and sdb. Next, add the proper parts from sdb to the relevant md-device. So if md0 contains sda1, do this:

mdadm --manage /dev/md0 --add /dev/sdb1

Repeat for all md-devices so you have the same parts from sda and sdb in all of them. Check with 'cat /proc/mdstat'. Let it sync back up (check with 'watch -n 10 cat /proc/mdstat' until it finishes). Now, fix GRUB:

grub
grub> root (hd0,0)
grub> setup (hd0)

If you're unlucky and can't boot because the wrong device is first (trying to boot from the clean/new hard drive), follow these steps. First boot into a live CD with your OS. Then activate the RAID:

1) mkdir /etc/mdadm
2) mdadm --examine --scan > /etc/mdadm/mdadm.conf
3) mdadm --assemble --scan

Then reinstall GRUB. In this example, you have /boot on md0 and / on md1:

1) mkdir /mnt/sysimage
2) mount /dev/md1 /mnt/sysimage
3) mount /dev/md0 /mnt/sysimage/boot
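From here, a sketch of how the GRUB reinstall typically finishes on CentOS 5 with GRUB legacy (these steps are standard practice, not part of the cut-off original):

4) chroot /mnt/sysimage
5) grub
   grub> device (hd0) /dev/sda
   grub> root (hd0,0)
   grub> setup (hd0)
   grub> device (hd0) /dev/sdb
   grub> root (hd0,0)
   grub> setup (hd0)
   grub> quit
6) exit the chroot, unmount, and reboot

Mapping (hd0) to each disk in turn installs GRUB to both MBRs, so the machine can boot from either half of the mirror.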
How to recover from a broken RAID array with MDADM
(source: http://www.woktron.com/secure/knowledgebase/196/How-to-recover-from-a-broken-RAID-array-with-MDADM.html)

This article will attempt to guide you to determine whether an mdadm-based RAID array (in our case a RAID1 array) is broken, and how to rebuild it. This procedure has been tested on CentOS 5 and 6.

Determine the status of your RAID array

To view the status of your RAID array, enter the following as root:

# cat /proc/mdstat
Personalities : [raid1]
read_ahead 1024 sectors
md2 : active raid1 hda3[1] hdb3[0]
      262016 blocks [2/2] [UU]
md1 : active raid1 hda2[1] hdb2[0]
      119684160 blocks [2/2] [UU]
md0 : active raid1 hda1[1] hdb1[0]
      102208 blocks [2/2] [UU]
unused devices: <none>

Above you see the output for RAID arrays in an active state. In case a drive has failed, it would look like this:

Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 hda1[1]
      102208 blocks [2/1] [_U]
md2 : active raid1 hda3[1]
      262016 blocks [2/1] [_U]
md1 : active raid1 hda2[1]
      119684160 blocks [2/1] [_U]
unused devices: <none>

Notice that it doesn't list the failed drive parts, and that an underscore appears beside each U. This shows that only one drive is active in these arrays: there is no mirror. Another command that will show us the state of the RAID drives is mdadm:

# mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.00
  Creation Time : Thu Aug 21 12:22:43 2003
     Raid Level : raid1
     Array Size : 102208 (99.81 MiB 104.66 MB)
    Device Size : 102208 (99.81 MiB 104.66 MB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor
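If you want to script this check, here is a minimal sketch (the array names follow the example output above; adjust them for your system):

# print a health summary for each array
for md in /dev/md0 /dev/md1 /dev/md2; do
    mdadm --detail "$md" | grep -E 'State :|Active Devices|Failed Devices'
done

# or just flag arrays with an empty member slot ('_' in the status field)
grep -B1 '\[.*_.*\]' /proc/mdstat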
Booting covers the operations performed by your computer between the moment when you switch it on and the moment it's ready for you to log in (this section is drawn from http://www.tuxradar.com/content/how-fix-linux-boot-problems). During this time, all kinds of incomprehensible messages scroll up the screen, but they're not something you usually take much notice of, and most Linux distros cover them up with a pretty splash screen and a nice encouraging progress bar. This is all fine, of course, until it stops working. In this tutorial we'll examine the boot process in more detail, looking in particular at what can go wrong, and how to diagnose and fix the problem.

Grokking the problem

When I'm teaching Linux on one of my courses, many attendees tell me they are interested in troubleshooting of one form or another. Some of them are looking for a cookbook approach ("if you see error message X, run command Y"), but troubleshooting rarely works that way. My initial advice to anyone who needs to troubleshoot is always the same: "The most important thing in troubleshooting is to understand how the system is supposed to work in the first place. The second most important thing is figuring out exactly what the system was trying to do when it went wrong."

Figure 1: the normal sequence of events when booting Linux.

With this in mind, let's take a look at how Linux boots. Knowing the normal sequence of events, and determining how far it got before it ran into trouble, are key to diagnosing and fixing boot-time problems. Figure 1 shows the normal sequence of events (green arrows) and indicates some of the possible failure paths (red arrows).

Picking yourself up by your bootstraps

Booting is a multi-stage affair. When a PC is powered up, control initially passes to a program (called the BIOS) stored in read-only memory on the motherboard. The BIOS performs a self-test of the hardw
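If you want to watch those hidden messages instead of the splash screen, one standard trick on a GRUB-legacy distro such as CentOS 5 (an assumption here; this excerpt names no distro, and the kernel line below is illustrative) is to edit the kernel line at boot:

At the GRUB menu, press 'e' on the default entry, select the kernel line, and press 'e' again. Remove the options that hide the output, for example:

kernel /vmlinuz-2.6.18-... ro root=/dev/md1 rhgb quiet

becomes

kernel /vmlinuz-2.6.18-... ro root=/dev/md1

Press Enter to accept the edit, then 'b' to boot. After booting, 'dmesg' replays the kernel's boot messages.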
Converting a single drive system to RAID 1 in place, as described next (drawn from https://wiki.archlinux.org/index.php/Convert_a_single_drive_system_to_RAID), avoids the need to temporarily store the data on a third drive. The procedure can also be adapted, and simplified, for converting simple non-root partitions and for other RAID levels.

Tip: You may consider using Raider (available in the AUR), which can convert a single disk into a RAID system with a two-pass command.

Contents
1 Scenario
2 Prepare the new disk
  2.1 Partition the disk
  2.2 Create the RAID device
  2.3 Make file system
3 Copy the data on the array
4 Boot on the new disk
  4.1 Update the boot loader
    4.1.1 GRUB legacy
    4.1.2 GRUB
  4.2 Alter fstab
  4.3 Rebuild the initramfs
    4.3.1 Chroot into the RAID system
    4.3.2 Record mdadm's config
    4.3.3 Rebuild initcpio
  4.4 Install the boot loader on the RAID array
    4.4.1 GRUB Legacy
  4.5 Verify success
5 Add original disk to array
  5.1 Partition original disk
  5.2 Add disk partition to array
6 See also

Scenario

This example assumes that the pre-existing disk is /dev/sda, which contains only one partition, /dev/sda1, used for the whole system. The newly added disk is /dev/sdb.

Warning: Back up important data before proceeding.

Prepare the new disk

Partition the disk

The first step is creating the partition on the new disk, /dev/sdb1, that will be used as the mirror for the RAID array. In general, it is not necessary to recreate the exact partitioning scheme of the pre-existing drive in this step; RAID can even be configured on whole disks, with partitions or logical volumes created later. Make sure that the partition type is set to FD. See RAID#Prepare the Devices and RAID#Create the Partition Table (GPT) for more information.

Create the RAID device

Next, create the RAID array in a degraded state, using only the new disk. Note how the missing keyword is specified for the first device: that slot will be filled with the original disk later.

# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
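The excerpt stops after creating the degraded array. As a sketch of the next two steps named in the Contents above, "Make file system" and "Copy the data on the array" (the filesystem type and rsync options below are assumptions, since the original sections are cut off):

# make a filesystem on the degraded array (ext4 is an assumption)
mkfs.ext4 /dev/md0

# mount it and copy the running system over, excluding pseudo-filesystems
mount /dev/md0 /mnt
rsync -aAXH --exclude={'/proc/*','/sys/*','/dev/*','/run/*','/mnt/*'} / /mnt/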