Error Allocating Block Bitmap: Memory Allocation Failed
Running Out of Memory Running fsck on Large Filesystems
(Server Fault question)
Question (TimB, May 17 '09):

I look after an old Debian Linux box (running Etch) with only 512 MB of RAM, but a lot of external storage attached. One ext3 filesystem is 2.7 TB in size, and fsck can't check it because it runs out of memory, with an error such as this one:

    Error allocating directory block array: Memory allocation failed
    e2fsck: aborted

I've added a 4 GB swap partition and it still doesn't complete, but this is a 32-bit kernel, so I don't expect adding any more will help. Apart from booting into a 64-bit kernel, are there any other ways of getting fsck to complete its check?

Accepted answer (womble, May 17 '09):

A 64-bit kernel and large quantities of RAM will allow the fsck to finish nice and fast. Alternatively, there is now an option in e2fsck that tells it to store all of its intermediate results in a directory instead of in RAM, which helps immensely. Create /etc/e2fsck.conf with the following contents:

    [scratch_files]
    directory = /var/cache/e2fsck

(And, obviously, make sure that directory exists and is on a partition with a good few GB of free space.) e2fsck will run slowly, but at least it will complete. Of course, this won't work with the root FS, but if you've got swap then you're past mounting the root FS anyway.

Follow-up answer (TimB):

I ended up trying what womble suggested; here are some more details that may be useful if, like me, you haven't seen this new functionality in e2fsck before.
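The accepted answer's steps can be sketched as the following shell session. The config contents are from the answer itself; the device name /dev/sdb1 and the `-f` flag are illustrative assumptions, not from the thread.

```shell
# Create the scratch directory first; it must exist before e2fsck starts,
# and should sit on a partition with several GB of free space.
mkdir -p /var/cache/e2fsck

# Tell e2fsck to keep its intermediate data structures on disk
# instead of in RAM (contents as given in the answer).
cat > /etc/e2fsck.conf <<'EOF'
[scratch_files]
directory = /var/cache/e2fsck
EOF

# Re-run the check; /dev/sdb1 is a placeholder for the large volume.
e2fsck -f /dev/sdb1
```

This trades speed for memory: the check will be far slower, but it can complete on a 32-bit machine that cannot address enough RAM for the in-core tables.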
QNAP: Add Swap to Your NAS for Large Volume fsck Activities
(http://www.penguinpunk.net/blog/qnap-add-swap-to-your-nas-for-large-volume-fsck-activities/, posted 06/03/2015 by dan)

That's right, another heading from the department of not terribly catchy blog article titles. I've been having a mighty terrible time with one of my QNAP arrays lately. After updating to 4.1.2, I've been getting some weird symptoms. For example, every time the NAS reboots, the filesystem is marked as unclean. Worse, it mounts as read-only from time to time, and it seems generally flaky. So I've spent the last week trying to evacuate the data, with the thought that maybe I can re-initialise it and clear out some of the nasty stuff that's built up over the last 5 years. Incidentally, while we all like to moan about how slow SATA disks are, try moving a few TB via a USB 2 interface; eSATA seems positively snappy after that.

Of course, QNAP released version 4.1.3 of their platform recently, and a lot of the symptoms I've been experiencing have stopped occurring. I'm going to continue down this path though, as I hadn't experienced these problems on my other QNAP, and I just don't have a good feeling about the state of the filesystem. And you thought that I would be all analytical about it, didn't you?

In any case, I've been running e2fsck on the filesystem fairly frequently, particularly when it goes read-only and I have to stop the services, unmount and remount the volume:

    [/] # cd /share/MD0_DATA/
    [/share/MD0_DATA] # cd Qmultimedia/
    [/share/MD0_DATA/Qmultimedia] # mkdir temp
    mkdir: Cannot create directory `temp': Read-only file system
    [/share/MD0_DATA/Qmultimedia] # cd /
    [/] # /etc/init.d/services.sh stop
    Stop qpkg service: chmod: /share/MD0_DATA/.qpkg: Read-only file system
    Shutting down Download Station: OK
    Disable QUSBCam ...
    Shutting down SlimServer...
    Error: Cannot stop, SqueezeboxServer is not running.
    WARNING: rc.ssods ERROR: script /opt/ssods4/etc/init.d/K20slimserver failed.
    Stopping thttpd-ssods .. OK.
    rm: cannot remove `/opt/ssods4/var/run/thttpd-ssods.pid': Read-only file system
    WARNING: rc.ssods ERROR: script /opt/ssods4/etc/init.d/K21thttpd-ssods failed.
    Shutting down QiTunesAir services: Done
    Disable Optware/ipkg .
    Stop service: cloud3p.sh vpn_openvpn.sh vpn_pptp.sh ldap_server.sh antivirus.sh iso_mount.sh qbox.sh qsyncman.sh rsyslog.sh snmp lunportman.sh iscsitrgt.sh twonkymedia.sh init_iTune.sh ImRd.sh crond.sh nvrd.sh StartMediaService.sh bt_scheduler.sh btd.sh mysqld.sh recycled.sh Qthttpd.sh atalk.sh nfs ftp.sh smb.sh versiond.sh .
    [/] # umount /dev/md0

So then I run e2fsck to check the filesystem.
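The post's title refers to adding swap before running e2fsck on a big volume. A minimal sketch of creating a temporary swap file on a NAS data volume follows; the path /share/MD0_DATA/swapfile and the 4 GB size are assumptions for illustration, not commands from the post.

```shell
# Create a 4 GB swap file on the data volume (path is an assumption).
dd if=/dev/zero of=/share/MD0_DATA/swapfile bs=1M count=4096
chmod 600 /share/MD0_DATA/swapfile

# Format it as swap and enable it.
mkswap /share/MD0_DATA/swapfile
swapon /share/MD0_DATA/swapfile

# Confirm the extra swap is visible before starting e2fsck.
free -m

# ...run the filesystem check, then remove the temporary swap:
swapoff /share/MD0_DATA/swapfile
rm /share/MD0_DATA/swapfile
```

A swap file on the data volume only helps if that volume is still mountable read-write; when checking the volume itself, the swap file must live on a different filesystem.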
fsck on a +6TB Device (ext3-users mailing list)
(https://www.redhat.com/archives/ext3-users/2008-June/msg00014.html, Mon, 09 Jun 2008)

Dear Sirs,

That's the scenario: a +6TB device on a 3ware 9550SX RAID controller, running Debian Etch 32-bit with a 2.6.25.4 kernel and the default e2fsprogs version, "1.39+1.40-WIP-2006.11.14+dfsg-2etch1". Running "tune2fs" reports that the filesystem is in EXT3_ERROR_FS state, "clean with errors":

    # tune2fs -l /dev/sda4
    tune2fs 1.40.10 (21-May-2008)
    Filesystem volume name:
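The "clean with errors" state above can be checked without reading the whole tune2fs listing by filtering for the state line (the device name is a placeholder):

```shell
# "clean with errors" means the kernel recorded filesystem errors
# (EXT3_ERROR_FS) and a full fsck pass is needed.
tune2fs -l /dev/sda4 | grep -i 'Filesystem state'
```

On a healthy volume this prints "Filesystem state: clean"; any other value is a signal to schedule a forced check.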
Local Storage SR_BACKEND_FAILURE_52 (Citrix XenServer forum)
(Started by Shannon Kimber, 02 April 2013)

Best answer (Corin Goodier, 02 April 2013): Run lvscan and vgscan so you can see the actual bits that would be mounted, then run fsck on the actual logical volume, as the partition is a PV, not a logical volume.
Shannon Kimber (02 April 2013): We have a Tier 2 XenServer 6.0.2 host (build 15138, hotfixes XS602E005 and XS602E003 applied) with 4 x 2 TB drives (software RAID 0) and 6 VMs (some backed up). One VM is a file server to which we were copying 5 GB of data (largest file: 200 MB). During the copy the host froze, and we had no choice but to reboot. XenServer now applies a red "X" to Local Storage.

Troubleshooting:
1) xe pbd-list
    uuid ( RO) : 73e11f19-1f12-6cd3-bd40-94bbee40bf8a
    host-uuid ( RO): 232ce21a-75f3-4616-8241-77385a5538b8
    sr-uu
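The accepted advice in this thread (run lvscan and vgscan, then fsck the logical volume rather than the partition) can be sketched as the following session; the volume group and logical volume names are placeholders, not taken from the thread.

```shell
# List volume groups and logical volumes to see what would actually
# be mounted; the raw partition is an LVM physical volume, not a
# filesystem, so fsck must not be pointed at it.
vgscan
lvscan

# Activate the logical volume if lvscan shows it inactive
# (VG/LV names below are placeholders).
lvchange -ay /dev/VG_XenStorage/LV_local

# Run the check against the logical volume device.
fsck /dev/VG_XenStorage/LV_local
```

Running fsck on the partition itself would scan LVM metadata instead of a filesystem superblock, which is why the storage repository fails to attach rather than repair.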