Python Memory Error Stack Overflow
Q: Memory error in Python (asked on Stack Overflow, 11 upvotes)

The errors below appeared when I tried to run the following program. Can someone explain what a MemoryError is, and how to overcome this problem?

    Traceback (most recent call last):
      File "/run-1341144766-1067082874/solution.py", line 27, in <module>
        main()
      File "/run-1341144766-1067082874/solution.py", line 11, in main
        if len(s[i:j+1]) > 0:
    MemoryError
    Error in sys.excepthook:
    Traceback (most recent call last):
      File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook
        from apport.fileutils import likely_packaged, get_recent_crashes
      File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module>
        from apport.report import Report
    MemoryError
    Original exception was:
    Traceback (most recent call last):
      File "/run-1341144766-1067082874/solution.py", line 27, in <module>
        main()
      File "/run-1341144766-1067082874/solution.py", line 11, in main
        if len(s[i:j+1]) > 0:
    MemoryError

The program takes strings as input, finds all possible substrings, builds a set out of them (in lexicographical order), and prints the value at the index the user asks for; otherwise it prints 'Invalid':

    def main():
        no_str = int(raw_input())
        sub_strings = []
        for k in xrange(0, no_str):
            s = raw_input()
            a = len(s)
            for i in xrange(0, a):
                for j in xrange(0, a):
                    if j >= i:
                        if len(s[i:j+1]) > 0:
                            sub_strings.append(s[i:j+1])
        sub_strings = list(set(sub_strings))
        sub_strings.sort()
        queries = int(raw_input())
        resul = []
        for i in xrange(0, queries):

(Source: http://stackoverflow.com/questions/11283220/memory-error-in-python; the code snippet is cut off at this point in the original.)
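The number of substrings of a string of length n is O(n^2), so the intermediate list above stores every duplicate before deduplication, which multiplies peak memory use. One way to reduce it is to add substrings to a set directly. The sketch below, in Python 3 syntax, is a hedged illustration of that idea, not the poster's exact program:

```python
def sorted_substrings(strings):
    """Collect the distinct substrings of each input string directly
    into a set, so duplicates never pile up in an intermediate list."""
    subs = set()
    for s in strings:
        n = len(s)
        for i in range(n):
            for j in range(i + 1, n + 1):
                subs.add(s[i:j])   # each distinct substring is stored once
    return sorted(subs)            # lexicographic order, as the task asks

def answer_query(subs, k):
    """Return the k-th substring (1-based), or 'Invalid' if out of range."""
    return subs[k - 1] if 1 <= k <= len(subs) else 'Invalid'
```

For example, `sorted_substrings(["ab"])` yields the three distinct substrings `['a', 'ab', 'b']`. For truly large inputs even the set itself is quadratic in size, so the real fix is an algorithm that avoids materializing all substrings, but deduplicating eagerly already removes the redundant list.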
Q: Why does Python raise MemoryError from list append() with lots of RAM left? (asked on Stack Overflow, 16 upvotes)

I am building a large data dictionary from a set of text files. As I read in the lines and process them, I append(dataline) to a list. At some point the append() raises a MemoryError exception. However, watching the program run in the Windows Task Manager, at the point of the crash I see 4.3 GB available and 1.1 GB free, so I do not understand the reason for the exception. The Python version is 2.6.6. My guess is that Python is simply not able to use more of the available RAM. If that is so, is it possible to increase the allocation? (Source: http://stackoverflow.com/questions/4441947/why-python-memory-error-with-list-append-lots-of-ram-left)

Comments:
- Try using a 64-bit build of Python. Though if you are using any extension modules, they'll then need to be built 64-bit as well. –Adam Vandenberg
- Can you print the MemoryError exception string? That should give us more info. –chrisaycock
- Are you appending before or after you process the lines? –nmichaels
- @nmichaels: it looks like this: data.append(processraw(raw)). Each raw is one line. –Pete (the asker)
- Show us more code and maybe we will be able to show you how to improve your memory consumption. How big is your set of text files? @aix is right about 32-bit versus 64-bit. –kevpie

Accepted answer (19 upvotes): If you're using a 32-bit build of Python, you might want to try a 64-bit version. A process can address at most 4 GB of memory with 32-bit addresses.
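A quick way to check whether the interpreter itself is a 32-bit or 64-bit build, using only the standard library (shown in Python 3 syntax):

```python
import struct
import sys

# Size of a C pointer in bytes: 4 on a 32-bit build, 8 on a 64-bit build.
pointer_bytes = struct.calcsize("P")
bits = pointer_bytes * 8

# sys.maxsize gives the same answer: 2**31 - 1 on 32-bit, 2**63 - 1 on 64-bit.
assert (sys.maxsize > 2**32) == (bits == 64)

print("This is a %d-bit Python build" % bits)
```

Note that even a 64-bit OS will enforce the ~2-4 GB ceiling on a process running a 32-bit interpreter, which matches the asker's symptom of a MemoryError while free RAM remains.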
Q: How to avoid Memory Error (asked on Software Engineering Stack Exchange, 1 upvote)

I am working with quite large files (PyTables) and I am running into MemoryError when I try to load the data for processing. I would like some tips on how to avoid this on my 32-bit Python, since I am new to working with pandas and PyTables and I do not know how to split the data into small pieces. Another concern: if I do get to split the data, how do I calculate statistics like the mean, standard deviation, etc. without having the entire list or array in memory? (Source: http://programmers.stackexchange.com/questions/245339/how-to-avoid-memory-error)
This is a sample of the code I am using now; it works fine with small tables:

    def getPageStats(pathToH5, pages, versions, sheets):
        with openFile(pathToH5, 'r') as f:
            tab = f.getNode("/pageTable")
            dversions = dict((i, None) for i in versions)
            dsheets = dict((i, None) for i in sheets)
            dpages = dict((i, None) for i in pages)
            df = pd.DataFrame([[row['page'], row['index0'], row['value0']]
                               for row in tab.where('(firstVersion == 0) & (ok == 1)')
                               if row['version'] in dversions
                               and row['sheetNum'] in dsheets
                               and row['pages'] in dpages],
                              columns=['page', 'index0', 'value0'])
            df2 = pd.DataFrame([[row['page'], row['index1'], row['value1']]
                                for row in tab.where('(firstVersion == 1) & (ok == 1)')
                                if row['version'] in dversions
                                and row['sheetNum'] in dsheets
                                and row['pages'] in dpages],
                               columns=['page', 'index1', 'value1'])
            for i in dpages:
                m10 = df.loc[df['page'] == i]['index0'].mean()
                s10 = df.loc[df['page'] == i]['index0'].std()
                m20 = df.loc[df['page'] == i]['value0'].mean()
                s20 = df.loc[df['page'] == i]['value0'].std()
                m11 = df2.loc[df2['page'] == i]['index1'].mean()
                s11 = df2.loc[df2['page'] == i]['index1'].std()
                m21 = df2.loc[df2['page'] == i]['value1'].mean()
                s21 = df2.loc[df2['page'] == i]['value1'].std()
                yield (i, m10, s10), (i, m11, s11), (i, m20, s20), (i, m21, s21)

As you can see, I am loading all the necessary data…
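The asker's second concern, computing the mean and standard deviation without holding the whole column in memory, can be addressed with a one-pass streaming accumulator such as Welford's online algorithm. The sketch below is a generic, hedged illustration in Python 3 (the chunk source is a stand-in, not the question's PyTables code):

```python
class RunningStats:
    """One-pass (Welford) accumulator for mean and standard deviation."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def std(self, ddof=1):
        """Sample standard deviation (ddof=1 matches pandas' default)."""
        return (self.m2 / (self.n - ddof)) ** 0.5 if self.n > ddof else float('nan')

# Feed values chunk by chunk instead of loading everything at once.
stats = RunningStats()
for chunk in ([1.0, 2.0], [3.0, 4.0]):   # stand-in for table chunks
    for value in chunk:
        stats.update(value)
```

With PyTables one would feed `tab.where(...)` row iterators into such an accumulator per page, so memory use stays constant no matter how large the table is.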
Stack Overflow (from the Guile manual, https://www.gnu.org/software/guile/docs/master/guile.html/Stack-Overflow.html)

Returning a value from a function pops the top frame off the stack. Stack frames take up memory, and as nobody has an infinite amount of memory, deep recursion could cause Guile to run out of memory. Running out of stack memory is called stack overflow.

Stack Limits

Most languages have a terrible stack overflow story. For example, in C, if you use too much stack, your program will exhibit "undefined behavior", which, if you are lucky, means that it will crash. It is especially bad in C, as you know neither ahead of time how much stack your functions use, nor the stack limit imposed by the user's system, and the stack limit is often quite small relative to the total memory size. Managed languages like Python have a better error story, as they are defined to raise an exception on stack overflow; but like C, Python and most dynamic languages still have a fixed stack size limit that is usually much smaller than the heap.

Arbitrary stack limits would have an unfortunate effect on Guile programs. For example, the following implementation of the inner loop of map is clean and elegant:

    (define (map f l)
      (if (pair? l)
          (cons (f (car l)) (map f (cdr l)))
          '()))

However, if there were a stack limit, that would limit the size of lists that can be processed with this map. Eventually, you would have to rewrite it to use iteration with an accumulator:

    (define (map f l)
      (let lp ((l l) (out '()))
        (if (pair? l)
            (lp (cdr l) (cons (f (car l)) out))
            (reverse out))))

This second version is sadly not as clear, and it also allocates more heap memory (once to build the list in reverse, and then again to reverse the list). You would be tempted to use the destructive reverse! to save memory and time, but then your code would not be continuation-safe: if f returned again after the map had finished, it…
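The fixed stack limit that the manual attributes to Python is directly observable: recursing past `sys.getrecursionlimit()` raises `RecursionError` (a `RuntimeError` on Python 2). A minimal Python 3 sketch, analogous to the recursive Scheme map above:

```python
import sys

def depth(l):
    """Recursive list length: one stack frame per element, like the
    non-accumulator Scheme map."""
    return 0 if not l else 1 + depth(l[1:])

print(sys.getrecursionlimit())   # commonly 1000 by default

try:
    # Twice the limit guarantees we exceed the interpreter's stack bound.
    depth(list(range(sys.getrecursionlimit() * 2)))
except RecursionError as e:
    print("stack overflow:", e)
```

As in the Guile discussion, the cure in Python is the same accumulator rewrite: an iterative loop handles inputs of any size, because CPython does not perform tail-call optimization.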