PostgreSQL Error: Out of Memory
From Database Administrators Stack Exchange:

PostgreSQL Error: out of memory

I'm trying to run a query that should return around 2000 rows, but my RDS-hosted PostgreSQL 9.3 database is giving me the error "out of memory DETAIL: Failed on request of size 2048.". What does that mean? My instance has 3GB of memory, so what would be limiting it enough to run out of memory with such a small query?

Edit: SHOW work_mem; "1024GB"

I can't show the full SQL, but it's attempting to perform a pivot. I have two primary tables, library and book; each book points back to a library record. My query attempts to find the most popular book for each of the last 12 months for each library record, and join each one in as a separate column of the result set, to produce something like: library_id, month_1_book_id, month_2_book_id, month_3_book_id, ...

Explain shows this results in quite a few loops:

    explain select * from myapp_library_get_monthly_popular where id in (5495060, 5495059, 5495048)

    Nested Loop Left Join (cost=3645798.54..3750412.91 rows=3 width=2980)
      -> Nested Loop Left Join (cost=3645798.10..3750388.98 rows=3 width=2994)
        -> Nested Loop Left Join (cost=3645797.66..3750365.05 rows=3 width=2976)
          -> Nested Loop Left Join (cost=3645797.23..3750341.13 rows=3 wi
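The twelve repeated left joins (one per month) are what produce the stacked nested loops in that plan: each join re-runs the "most popular book" lookup. A common rewrite is to compute the per-month winner once with DISTINCT ON and pivot afterwards. This is only a sketch under assumed names, since the real schema isn't shown -- monthly_book_stats, checkouts, and month are hypothetical:

```sql
-- Sketch only: table and column names are assumptions, not from the question.
-- Assume a monthly aggregate with (library_id, book_id, month, checkouts).
SELECT DISTINCT ON (library_id, month)
       library_id,
       month,
       book_id              -- most-checked-out book for this library/month
FROM   monthly_book_stats
ORDER  BY library_id, month, checkouts DESC;
```

Pivoting the resulting (library_id, month, book_id) rows into month_1_book_id ... month_12_book_id columns can then happen in a single pass (for example with crosstab() from the tablefunc extension) instead of twelve correlated left joins.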
From a related pgsql-general thread: "Re: ERROR: out of memory DETAIL: Failed on request of size ???" (posts from Tomas Vondra and "bricklen")
From Stack Overflow (http://stackoverflow.com/questions/29485644/postgres-gets-out-of-memory-errors-despite-having-plenty-of-free-memory):

Postgres gets out of memory errors despite having plenty of free memory

I have a server running Postgres 9.1.15. The server has 2GB of RAM and no swap. Intermittently Postgres will start getting "out of memory" errors on some SELECTs, and will continue doing so until I restart Postgres or some of the clients that are connected to it. What's weird is that when this happens, free still reports over 500MB of free memory.

    select version();
    PostgreSQL 9.1.15 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit

    uname -a
    Linux db 3.2.0-23-virtual #36-Ubuntu SMP Tue Apr 10 22:29:03 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

postgresql.conf (everything else is commented out/default):

    max_connections = 100
    shared_buffers = 500MB
    work_mem = 2000kB
    maintenance_work_mem = 128MB
    wal_buffers = 16MB
    checkpoint_segments = 32
    checkpoint_completion_target = 0.9
    random_page_cost = 2.0
    effective_cache_size = 1000MB
    default_statistics_target = 100
    log_temp_files = 0

I got these values from pgtune (I chose "mixed type of applications") and have been fiddling with them based on what I've read, without making much real progress. At the moment there are 68 connections, which is a typical number (I'm not using pgbouncer or any other connection pooler yet).
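A useful sanity check on a configuration like this is the worst-case memory it allows. The figures below are a rough worked budget, not measured values; the per-backend overhead line in particular is an assumed round number, and a single complex query can use more than one work_mem allocation at a time:

```
# Rough worst-case budget for the 2GB host above (assumptions noted):
#   shared_buffers                         500 MB
#   wal_buffers                             16 MB
#   work_mem: 100 conns * 2000 kB   ~=     200 MB   (one sort/hash each;
#                                                    complex plans use more)
#   maintenance_work_mem                   128 MB   per maintenance op
#   per-backend overhead (assumed ~5 MB)  ~500 MB   across 100 backends
# The total can plausibly approach the full 2 GB before the kernel,
# page cache, and other processes get anything.
```

That alone doesn't explain failures while free shows spare memory, but it shows how little headroom these settings leave on a 2GB, swapless machine.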
/etc/sysctl.conf:

    kernel.shmmax=1050451968
    kernel.shmall=256458
    vm.overcommit_ratio=100
    vm.overcommit_memory=2

I first changed overcommit_memory to 2 about a fortnight ago after the OOM killer killed the Postgres
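Those two vm settings interact directly with the symptom described above. With strict overcommit (vm.overcommit_memory=2), the kernel refuses new allocations once total committed memory reaches CommitLimit = swap + RAM * overcommit_ratio / 100, regardless of how many pages are actually in use -- which is exactly how malloc (and therefore Postgres's internal allocator) can fail while free still reports free memory. A worked example for this host's numbers:

```
CommitLimit = swap + RAM * overcommit_ratio / 100
            =   0  + 2048 MB * 100 / 100
            = 2048 MB
# shared_buffers commits ~500 MB at startup, so all backends' private
# allocations share the remaining ~1.5 GB of commit headroom; a burst of
# concurrent sorts/hashes can exhaust it while `free` still shows free RAM.
```

Raising overcommit_ratio, adding swap, or lowering shared_buffers all widen that headroom; which is appropriate depends on whether the OOM-killer behavior that motivated the change is acceptable.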