RAID Array Error Messages
Tom's Hardware > Forum > Storage > RAID 1 error message. Tags: Hard Drives, NAS/RAID, Error Message, Windows XP, Storage. Last response: 20 December 2010.
cablebeacher — 18 December 2010, 06:15:31

Hi all, I have a slight problem that I have searched to find an answer for but have been unsuccessful. If anyone can assist I would be grateful. I have had Windows XP running on RAID 1 for some time quite happily. I got an error message the other day on bootup: "disk member of array has failed or is not responding". When I go into the RAID manager it also confirms a disk status of "critical". If I unplug each hard drive and restart, the system runs normally each time. It also runs normally with both hard drives plugged in. I would have thought unplugging and running each hard drive would show which one was at fault? I ran CHKDSK on each individual drive with no errors. Could it be the RAID controller? I am only using the onboard ATI RAID controller, which XP says is running OK. Any ideas? Regards, mal

Paperdoc — 20 December 2010, 12:03:18

When one of the two units in a RAID 1 array fails, the normal behaviour is that the RAID controller reverts to using ONLY the remaining good unit so that you can keep working with no interruption. However, you have lost the automatic mirroring function, so the controller also puts out a warning message to let you know that you need to fix the problem. The fact that your system seems to continue working OK is exactly what it is supposed to do to help you. The RAID manager tells you more than just that you have a failure: it also tells you exactly WHICH of the two units has failed. This may be by specifying a mobo SATA port number, or by specifying the serial number of the HDD that has failed. If you still cannot figure it out from that, try unplugging only ONE of the two drives in your RAID array. If you leave the good one plugged in, it will still work, but it will give an error message about one drive missing.
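One way to match the serial number the RAID manager reports to a physical drive is to query each disk with smartctl (from smartmontools), which prints a "Serial Number" line in its -i output. A minimal sketch of extracting that field; the drive model and serial below are made-up sample text, not real output, and on the poster's Windows XP machine a Windows build of smartctl (or the RAID BIOS utility) would be needed:

```shell
# Hypothetical excerpt of `smartctl -i /dev/sda` output (values invented)
info='Device Model:     WDC WD5000AAKS-00V1A0
Serial Number:    WD-WCAWF1234567'

# Pull out just the serial number so it can be compared against the
# serial the RAID manager flags as failed
serial=$(printf '%s\n' "$info" | awk -F': *' '/Serial Number/ {print $2}')
echo "$serial"
```

Running the same extraction against each member disk tells you which physical unit carries the failed serial, so you know which cable to pull.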
How to Troubleshoot Hard Drives and RAID Controller Errors on Dell PowerEdge 12G

This article provides information on how to troubleshoot hard drive and RAID controller errors on Dell PowerEdge™ 12G servers.
Issue 1: POST Error Messages

Message: "There are X enclosures connected to connector Y, but only a maximum of 4 enclosures can be connected to a single SAS connector. Please remove the extra enclosures then restart your system."
Meaning: When the BIOS detects more than 4 enclosures connected to a single SAS connector, it displays this message. You must remove all additional enclosures and restart your system.

Message: "Cache data was lost, but the controller has recovered. This could be due to the fact that your controller had protected cache after an unexpected power loss and your system was without power longer than the battery backup time. Press any key to continue or 'C' to load the configuration utility."
Meaning: This message displays under the following conditions: the adapter detects that the cache in the controller has not yet been written to the disk subsystem; the controller detects an Error-Correcting Code (ECC) error while performing its cache checking routine during initialization; the controller discards the cache rather than sending it to the disk subsystem because the data integrity cannot be guaranteed. To resolve this problem, allow the battery to charge fully. If the problem persists, the battery or adapter DIMM might be faulty.

Message: "The following virtual disks have missing disks: (x). If you proceed (or load the configuration utility), these virtual disks will be marked OFFLINE and will be inaccessible. Please check your cables and ensure all disks are present. Press any key to continue, or 'C' to load the configuration utility."
Meaning: The message indicates that some configured disks were removed. If the disks were not removed, they are no longer accessible. The SAS cables for your system might be improperly connected. Check the cable connections and fix any problems. Restart the system. If there are no cable problems, press any key or 'C' to load the configuration utility.
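The first message encodes a simple rule: at most 4 enclosures per SAS connector. A minimal sketch of that check as a hypothetical helper (the function name and the counts are illustrative; nothing here reads real hardware):

```shell
# Hypothetical helper mirroring the BIOS rule described above:
# a single SAS connector supports a maximum of 4 enclosures.
check_enclosures() {
  count="$1"
  if [ "$count" -gt 4 ]; then
    echo "error: $count enclosures on one SAS connector; remove extras and restart"
  else
    echo "ok: $count enclosures"
  fi
}

check_enclosures 3
check_enclosures 5
```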
power failure. There are several ways to recover from this situation.

Method (1): Use the raid tools. These can be used to sync the raid arrays. They do not fix file-system damage; after the raid arrays are sync'ed, the file system still has to be fixed with fsck. Raid arrays can be checked with ckraid /etc/raid1.conf (for RAID-1; else /etc/raid5.conf, etc.). Calling ckraid /etc/raid1.conf --fix will pick one of the disks in the array (usually the first), use that as the master copy, and copy its blocks to the others in the mirror. To designate which of the disks should be used as the master, you can use the --force-source flag: for example, ckraid /etc/raid1.conf --fix --force-source /dev/hdc3. The ckraid command can be safely run without the --fix option to verify the inactive RAID array without making any changes. When you are comfortable with the proposed changes, supply the --fix option.

Method (2): Paranoid, time-consuming, not much better than the first way. Let's assume a two-disk RAID-1 array, consisting of partitions /dev/hda3 and /dev/hdc3. You can try the following: fsck /dev/hda3 and fsck /dev/hdc3, then decide which of the two partitions had fewer errors, was more easily recovered, or recovered the data that you wanted. Pick one, either one, to be your new "master" copy. Say you picked /dev/hdc3: run dd if=/dev/hdc3 of=/dev/hda3, then mkraid raid1.conf -f --only-superblock. Instead of the last two steps, you can instead run ckraid /etc/raid1.conf --fix --force-source /dev/hdc3, which should be a bit faster.

Method (3): Lazy man's version of the above. If you don't want to wait for long fscks to complete, it is perfectly fine to skip the first three steps above and move directly to the last two. Just be sure to run fsck /dev/md0 after you are done. Method (3) is actually just method (1) in disguise.
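Method (2) can be summarized as a dry-run script that only prints the command sequence it would run (the device names are the examples from the text; every step is echoed rather than executed, since dd over a partition is destructive):

```shell
# Dry-run sketch of Method (2): print the commands rather than run them.
# $1 is the member chosen as the master copy, $2 the member to overwrite.
plan_resync() {
  master="$1"; other="$2"
  echo "fsck $master"                            # check both members first
  echo "fsck $other"
  echo "dd if=$master of=$other"                 # clone master onto the other member
  echo "mkraid raid1.conf -f --only-superblock"  # rewrite only the RAID superblock
  echo "fsck /dev/md0"                           # then repair the file system on the md device
}

# As in the text, /dev/hdc3 was picked as the master copy
plan_resync /dev/hdc3 /dev/hda3
```

Removing the echo prefixes would actually perform the recovery; do that only after the fsck comparison has convinced you which member to trust.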
In any case, the above steps will only sync up the raid arrays. The file system probably needs fixing as well: for this, fsck needs to be run on the active, unmounted md device. With a three-disk RAID-1 array, there are more possibilities, such as using two disks to "vote" a majority answer. Tools to automate this do not currently (September 97) exist.

Q: I have a RAID-4 or a RAID-5 (parity) setup, and lost power while there was disk activity. Now what do I do?

A: The redundancy of RAID levels is designed to protect against a disk failure, not against a power failure. Since the disks i
Replacing A Failed Hard Drive In A Software RAID1 Array

This guide shows how to remove a failed hard drive from a Linux RAID1 array (software RAID), and how to add a new hard disk to the RAID1 array without losing data. NOTE: There is a new version of this tutorial available that uses gdisk instead of sfdisk to support GPT partitions.

1 Preliminary Note

In this example I have two hard drives, /dev/sda and /dev/sdb, with the partitions /dev/sda1 and /dev/sda2 as well as /dev/sdb1 and /dev/sdb2. /dev/sda1 and /dev/sdb1 make up the RAID1 array /dev/md0. /dev/sda2 and /dev/sdb2 make up the RAID1 array /dev/md1.

/dev/sda1 + /dev/sdb1 = /dev/md0
/dev/sda2 + /dev/sdb2 = /dev/md1

/dev/sdb has failed, and we want to replace it.

2 How Do I Tell If A Hard Disk Has Failed?

If a disk has failed, you will probably find a lot of error messages in the log files, e.g. /var/log/messages or /var/log/syslog. You can also run cat /proc/mdstat: instead of the string [UU] you will see [U_] if you have a degraded RAID1 array.

3 Removing The Failed Disk

To remove /dev/sdb, we will mark /dev/sdb1 and /dev/sdb2 as failed and remove them from their respective RAID arrays (/dev/md0 and /dev/md1). First we mark /dev/sdb1 as failed:

mdadm --manage /dev/md0 --fail /dev/sdb1

The output of cat /proc/mdstat should look like this:

server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid5] [raid4] [raid6] [raid10]
md0 : active raid1 sda1[0] sdb1[2](F)
      24418688 blocks [2/1] [U_]

md1 : active raid1 sda2[0] sdb2[1]
      24418688 blocks [2/2] [UU]

unused devices: <none>
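The [UU] versus [U_] check described above is easy to script. A minimal sketch that scans mdstat-style text for a degraded member; the input here is a sample string matching the output shown above rather than the real /proc/mdstat, so it stays runnable anywhere:

```shell
# Sample mdstat text (mirroring the degraded output above);
# on a real system you would read /proc/mdstat instead.
mdstat='md0 : active raid1 sda1[0] sdb1[2](F)
      24418688 blocks [2/1] [U_]
md1 : active raid1 sda2[0] sdb2[1]
      24418688 blocks [2/2] [UU]'

# An underscore inside the status brackets means a missing/failed member;
# the pattern matches [U_], [_U], [UU_], and similar degraded states.
if printf '%s\n' "$mdstat" | grep -q '\[U*_U*\]'; then
  echo "degraded array detected"
else
  echo "all arrays healthy"
fi
```

On a live system, swapping the sample string for `cat /proc/mdstat` turns this into a simple cron-able health check.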