pg_dump: Error message from server: ERROR: out of shared memory
pgsql-admin thread: pg_dump fails with ERROR: out of shared memory (HINT: you might need to increase max_locks_per_transaction) on one system, works fine on another
https://www.postgresql.org/message-id/1set379ibe1biogcgbq7se6k0m5610b5ds@4ax.com

From: jtkells(at)verizon(dot)net
To: pgsql-admin(at)postgresql(dot)org
Subject: pg_dump: Error message from server: ERROR: out of shared memory on one system works fine on another
Date: 2011-08-07 16:23:32
Message-ID: 1set379ibe1biogcgbq7se6k0m5610b5ds@4ax.com
Thread: 2011-08-07 16:23:32 from jtkells(at)verizon(dot)net; 2011-08-07 16:37:22 from Tom Lane
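The hint points at the shared lock table. Per the PostgreSQL documentation, the server sizes that table for roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) object locks at startup, and pg_dump takes an ACCESS SHARE lock on every table it dumps inside a single transaction. With stock 9.0 defaults (64, 100 and 0 respectively) that is only about 6,400 lock slots, so a dump touching tens of thousands of tables exhausts the table. A quick way to compare the two numbers on a given server (a sketch to run in psql; the count query only looks at ordinary tables):

    SHOW max_locks_per_transaction;    -- 64 by default
    SHOW max_connections;              -- 100 by default
    SHOW max_prepared_transactions;    -- 0 by default on 9.0
    -- number of ordinary tables pg_dump would have to lock
    SELECT count(*) FROM pg_class WHERE relkind = 'r';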
Backup a database with a huge number of tables
http://dba.stackexchange.com/questions/29699/backup-a-database-with-a-huge-number-of-tables

Is there a way to back up a PostgreSQL database with a huge number of tables? An attempt with pg_dump on a database of about 28000 tables resulted in the following error message:

    pg_dump: WARNING: out of shared memory
    pg_dump: SQL command failed
    pg_dump: Error message from server: ERROR: out of shared memory
    HINT: You might need to increase max_locks_per_transaction.
    pg_dump: The command was: LOCK TABLE public.link10292 IN ACCESS SHARE MODE
    pg_dump: *** aborted because of error

Increasing max_locks_per_transaction from 64 to 256 resulted in a failure to start the server. Anything else I can try? (PostgreSQL 9.0, Mac OS X.)

asked Dec 3 '12 at 10:06 by krlmlr

Comment (dezso, Dec 3 '12): Could taking file-level backups instead of using pg_dump be an option for you? On the other hand, what sort of error do you get when you say "failure to start the server"?

Comment (krlmlr): I always thought file-level backups should be avoided with PostgreSQL. How would you do that? -- I didn't find a log message that would explain the reason for the failure to start, but I haven't looked too hard either. I had to restart the machine because pg_ctl di
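The "failure to start the server" after raising max_locks_per_transaction is consistent with the enlarged lock table pushing the server's System V shared memory request past the kernel's limit: on pre-9.3 releases such as 9.0 the lock table lives in a SysV segment, and OS X ships with a small SHMMAX by default. This is an assumption, since the actual startup log isn't quoted, but it can be checked and adjusted along these lines (the numeric limits below are illustrative, not recommendations):

    # check the current System V shared memory limits
    sysctl kern.sysv.shmmax kern.sysv.shmall

    # raise them by putting lines like these in /etc/sysctl.conf and rebooting;
    # shmall is counted in 4 kB pages, so 262144 pages matches a 1 GB shmmax
    kern.sysv.shmmax=1073741824
    kern.sysv.shmall=262144

If the server still refuses to start, the postmaster log normally spells out which limit was hit.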
pgsql-novice thread (via Grokbase): backup server error - might need to increase max_locks_per_transaction
http://grokbase.com/t/postgresql/pgsql-novice/118bxcwqkd/backup-server-error-might-need-to-increase-max-locks-per-transaction

... which the main contents are ~10,000 tables with several thousand records each. I was able to back up this server using PgAdmin on PostgreSQL 8.4, but now that I've migrated to 9.0 I get:

    E:\Program Files\PostgreSQL\9.0\bin\pg_dumpall.exe --host localhost --port 5433 --username "postgres" --verbose --file "E:\temp\postgresql9\backup90.sql"
    ...
    pg_dump: reading constraints
    pg_dump: reading triggers
    pg_dump: reading large objects
    pg_dump: reading dependency data
    pg_dump: saving encoding = UTF8
    pg_dump: saving standard_conforming_strings = off
    pg_dump: saving database definition
    pg_dump: WARNING: out of shared memory
    pg_dump: SQL command failed
    pg_dump: Error message from server: ERROR: out of shared memory
    HINT: You might need to increase max_locks_per_transaction.
    pg_dump: The command was: SELECT sequence_name, start_value, last_value, increment_by, CASE WHEN increment_by > 0 AND max_value = 9223372036854775807 THEN NULL WHEN increment_by < 0 AND max_value = -1 THEN NULL ELSE max_value END AS max_value, CASE WHEN increment_by > 0 AND min_value = 1 THEN NULL WHEN increment_by < 0 AND min_value = -9223372036854775807 THEN NULL ELSE min_value END AS min_value, cache_value, is_cycled, is_called from arl_record_seq
    pg_dump: *** aborted because of error
    pg_dumpall: pg_dump failed on database "FinancialData", exiting
    Process returned exit code 1.

I have tried increasing max_locks_per_transaction first to 1500, then 10000 and then 50000, with no success. This seems like it might be a bug, since it worked in 8.4. Thanks for any help; Bob

1 response

Tom Lane, Aug 11, 2011 at 9:31 pm (replying to "Robert Frantz"): That should fix it. Did you remember to restart the server after adjusting the config file entry? ("pg_ctl reload" won't do.) You can verify the active value with "SHOW max_locks_per_transaction".

regards, tom lane
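For completeness, the steps Tom Lane is describing, as a rough sketch (the data directory path and the target value are placeholders; max_locks_per_transaction only takes effect at server start, so a plain reload does not apply it):

    # in postgresql.conf inside the data directory:
    #   max_locks_per_transaction = 1500
    pg_ctl restart -D /path/to/data                            # full restart, not "pg_ctl reload"
    psql -U postgres -c "SHOW max_locks_per_transaction;"      # confirm the new value is live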
Using pg_dump with huge # of tables?
http://serverfault.com/questions/56323/using-pg-dump-with-huge-of-tables

I'm dealing with a database system that can have thousands of tables. The problem is, when I try to back it up using pg_dump, I sometimes get the following error:

    pg_dump: WARNING: out of shared memory
    pg_dump: SQL command failed
    pg_dump: Error message from server: ERROR: out of shared memory
    HINT: You might need to increase max_locks_per_transaction.
    pg_dump: The command was: LOCK TABLE public.foo IN ACCESS SHARE MODE

I could of course increase the max_locks_per_transaction setting. But the problem is that the number of tables can vary. I don't want to constantly have to revisit this setting every time there is a failure (assuming we notice the failure at all, given that this is in a cron job!). What would be the best way to tackle this problem? Currently I'm working on a Perl script that will list all the tables and then call pg_dump in "chunks" to keep a limit on the number of table locks, but I bet I could do better.

asked Aug 19 '09 at 20:01 by Matt Solnit

Answer (accepted) -- Magnus Hagander, Aug 19 '09: If you want a consistent backup, you must increase max_locks_per_transaction. Doing it in chunks from a script will make the backup inconsistent if you have concurrent access, which is probably not what you want. Your other option is to use PITR and do a filesystem-level backup. That will not take out any locks at all in the database.

Comment (Matt Solnit, Aug 19 '09): Thanks for the pointer. I am checking out PITR right now. I will point out, however, that I am fortunate -- these tables are mostly write-only, so consistency is not an issue.

Comment (Matt Solnit): Sorry, make that "write-once" :-).
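The "chunks" idea from the question can be sketched in a few lines of shell instead of Perl. This is only an illustration (database name, chunk size and file paths are invented), and, as the accepted answer points out, the pieces are dumped in separate transactions, so the combined result is not a consistent snapshot under concurrent writes:

    #!/bin/sh
    # Dump the public schema in chunks of tables so that no single pg_dump run
    # needs more lock slots than the server has.  Assumes simple table names
    # (no spaces, quoting or mixed case).
    DB=mydb
    CHUNK=500
    psql -At -d "$DB" -c \
      "SELECT schemaname || '.' || tablename FROM pg_tables WHERE schemaname = 'public'" \
      > /tmp/tables.txt
    split -l "$CHUNK" /tmp/tables.txt /tmp/tables.chunk.
    i=0
    for f in /tmp/tables.chunk.*; do
      # turn each listed table into a --table=schema.table argument
      args=$(sed 's/^/--table=/' "$f")
      pg_dump $args -f "/tmp/${DB}_part_${i}.sql" "$DB"
      i=$((i+1))
    done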