Python Multiprocessing: Error in sys.exitfunc
Error in atexit._run_exitfuncs: TypeError: 'NoneType' object is not callable
Stack Overflow question: Python Multiprocessing atexit Error "Error in atexit._run_exitfuncs"

I am trying to run a simple multi-process application in Python. The main thread spawns 1 to N processes and waits until they have all finished. Each process runs an infinite loop, so it can potentially run forever without user interruption, so I put in some code to handle a KeyboardInterrupt:

```python
#!/usr/bin/env python
import sys
import time
from multiprocessing import Process

def main():
    # Set up inputs...
    # Spawn processes
    Proc(1).start()
    Proc(2).start()

class Proc(Process):
    def __init__(self, procNum):
        self.id = procNum
        Process.__init__(self)

    def run(self):
        doneWork = False
        while True:
            try:
                # Do work...
                time.sleep(1)
                sys.stdout.write('.')
                if doneWork:
                    print "PROC#" + str(self.id) + " Done."
                    break
            except KeyboardInterrupt:
                print "User aborted."
                sys.exit()

# Main entry
if __name__ == "__main__":
    main()
```

The problem is that when using CTRL-C to exit, I get an additional error even though the processes seem to exit immediately:

```
......User aborted.
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "C:\Python26\lib\atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "C:\Python26\lib\multiprocessing\util.py", line 281, in _exit_function
    p.join()
  File "C:\Python26\lib\multiprocessing\process.py", line 119, in join
    res = self._popen
```
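The traceback appears because multiprocessing registers an atexit handler that tries to `join()` any still-running children while the interpreter is already shutting down. One way to avoid it is to have the parent own the shutdown: keep references to the children, catch the KeyboardInterrupt in the parent, and signal and join the workers before exiting. A minimal sketch of that pattern, in Python 3 syntax — the `Event`-based stop flag is my addition for illustration, not part of the original question:

```python
import multiprocessing
import time

def worker(proc_num, stop_event):
    # Loop until the parent signals shutdown, instead of looping forever.
    while not stop_event.is_set():
        time.sleep(0.1)
    print('PROC#%d done.' % proc_num)

def main():
    stop_event = multiprocessing.Event()
    procs = [multiprocessing.Process(target=worker, args=(i, stop_event))
             for i in (1, 2)]
    for p in procs:
        p.start()
    try:
        time.sleep(1)  # stand-in for the parent's real work
    except KeyboardInterrupt:
        print('User aborted.')
    finally:
        # Shut the children down *before* interpreter exit, so the
        # multiprocessing atexit handler finds nothing left to join.
        stop_event.set()
        for p in procs:
            p.join()

if __name__ == '__main__':
    main()
```

Because the parent joins every child on the way out, the atexit handler's own `p.join()` never runs against a live process mid-shutdown.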
GitHub issue wercker/box-python #5: "Error using multiprocessing module on Python 2.7.3" (open; filed by nishigori, Sep 12, 2014) — https://github.com/wercker/box-python/issues/5

nishigori commented Sep 12, 2014:

http://stackoverflow.com/questions/883370/python-multiprocessing-atexit-error-error-in-atexit-run-exitfuncs

This error occurred on Python 2.7.3 while using tox. It is already fixed in 2.7.4 and later: http://hg.python.org/cpython/file/9290822f2280/Misc/NEWS#l360

Would you update the box's Python 2.7 to 2.7.4 or newer (e.g. 2.7.5)?

Example wercker.yml:

```
box: wercker/python@1.1.0
build:
  steps:
    - script:
        name: build py27
        code: |
          sudo pip install tox
          tox -e py27
```

Build output:

```
$ tox -e py27
...
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/lib/python2.7/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/usr/lib/python2.7/multiprocessing/util.py", line 284, in _exit_function
    info('process shutting down')
TypeError: 'NoneType' object is not callable
Error in sys.exitfunc:
Traceback (most recent call last):
  File "/usr/lib/python2.7/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/usr/lib/python2.7/multiprocessing/util.py", line 284, in _exit_function
    info('process shutting down')
TypeError: 'NoneType' object is not callable
___________________________________ summary ___________________________________
  py27: commands succeeded
  congratulations :)
```

nishigori commented Sep 12, 2014:

For now, the Ubuntu 12.04 deb package is still Python 2.7.3. Please fix this once the distribution or deb package is updated.
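Since the fix landed in CPython 2.7.4, a project stuck on an older box can at least detect the affected interpreter at startup. The helper below is hypothetical (not part of the issue), and assumes — per the NEWS entry linked above — that only 2.7 releases before 2.7.4 exhibit the shutdown bug:

```python
import sys

def affected_by_exitfunc_bug(version_info=None):
    """Return True for a CPython 2.7 release earlier than 2.7.4.

    Assumption: only 2.7.0-2.7.3 are affected by the multiprocessing
    atexit shutdown bug referenced in the issue above.
    """
    vi = tuple(version_info or sys.version_info)
    return vi[:2] == (2, 7) and vi[:3] < (2, 7, 4)

if affected_by_exitfunc_bug():
    sys.stderr.write('Warning: this Python is affected by the '
                     'multiprocessing atexit bug; upgrade to 2.7.4+\n')
```

Such a check only warns; the real fix, as the issue says, is upgrading the interpreter.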
Blog post: Python, Multiprocessing, and KeyboardInterrupt (http://bryceboe.com/2010/08/26/python-multiprocessing-and-keyboardinterrupt/) — see also the commentary regarding Georges's comment on this Stack Overflow thread.

Update 2011/01/28: There is an issue with this code when passing large objects through the queue. While the code listed below will work in most situations, consider using sentinels to indicate the end of the jobs in your queue rather than relying on the Queue.Empty error. You can read about that in my post titled "The Python Multiprocessing Queue and Large Objects".

I was recently working on improving the efficiency of my botnet analysis code by utilizing 100% of the CPU resources available on my machine. To do that in Python I needed to spawn multiple processes, as multithreading would provide no benefit for these CPU-bound tasks. While Python uses true threads that have the ability to run on different cores concurrently, the Global Interpreter Lock, or GIL, makes it such that only one of these threads can run Python code at a time. Thus the simplest solution seemed to be Python's multiprocessing module.

Python's multiprocessing module is actually quite simple to use, especially if you've previously used Python's threading module. Additionally, the multiprocessing module contains a Pool class which automatically sets up processes to manage a pool of jobs. There is, however, one HUGE caveat: the pool of workers cannot be terminated until all the tasks have been consumed. After some simple experimentation I noticed two key things about the multiprocessing.Pool feature. First, while the worker processes can handle the KeyboardInterrupt and call sys.exit, these processes persist and thus receive future tasks. Second, the KeyboardInterrupt is not delivered to the parent process until all jobs are completed.
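The sentinel approach mentioned in the update can be sketched as follows (Python 3 syntax; the function names and the squaring job are illustrative, not from the post): each worker consumes jobs until it pulls a designated end-of-work marker, so nothing depends on catching Queue.Empty.

```python
import multiprocessing

SENTINEL = None  # end-of-work marker; one is queued per worker

def worker(jobs, results):
    # iter(jobs.get, SENTINEL) keeps calling jobs.get() until the
    # sentinel arrives, then the worker exits cleanly.
    for job in iter(jobs.get, SENTINEL):
        results.put(job * job)

def run(items, num_workers=2):
    jobs = multiprocessing.Queue()
    results = multiprocessing.Queue()
    workers = [multiprocessing.Process(target=worker, args=(jobs, results))
               for _ in range(num_workers)]
    for w in workers:
        w.start()
    for item in items:
        jobs.put(item)
    for _ in workers:
        jobs.put(SENTINEL)
    # Drain results before joining: a child blocks at exit until the
    # data it queued has been flushed to the underlying pipe.
    out = sorted(results.get() for _ in items)
    for w in workers:
        w.join()
    return out

if __name__ == '__main__':
    print(run([1, 2, 3, 4]))
```

Putting one sentinel per worker guarantees every worker sees exactly one end marker, regardless of how the real jobs were distributed.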
```python
#!/usr/bin/env python
import multiprocessing, os, time

def do_work():
    print 'Work Started: %d' % os.getpid()
    time.sleep(2)
    return 'Success'

def pool_function():
    try:
        return do_work()
    except KeyboardInterrupt:
        return 'KeyboardException'

def main():
    pool = multiprocessing.Pool(3)
    try:
        jobs = []
        for i in range(6):
            jobs.append(pool.apply_async(pool_function, args=()))
        pool.close()
        pool.join()
    except KeyboardInterrupt:
        print 'parent received control-c'
        pool.terminate()
    for i in jobs:
        if i.successful():
            print i.get()
        else:
            print 'Job failed: %s %s' % (type(i._value), i._value)

if __name__ == "__main__":
    main()
```

I constructed a fairly simple example of this behavior, shown above. Running the code will spawn three worker processes to handle a total of six jobs. Each job is very simple: display a message, sleep for two seconds, and return a message to the parent. You'll notice that when you send a KeyboardInterrupt…
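A commonly cited workaround for the second problem above (the parent not seeing CTRL-C until all jobs finish) is to block in `AsyncResult.get()` with a timeout rather than in `Pool.join()`, since a timed `get()` remains interruptible. A hedged Python 3 sketch — the `square` job and the 60-second timeout are illustrative choices, not from the post:

```python
import multiprocessing

def square(x):
    return x * x

def main():
    pool = multiprocessing.Pool(3)
    try:
        # map_async(...).get(timeout=...) keeps the parent responsive:
        # blocking in pool.join() can postpone KeyboardInterrupt
        # delivery until every job has completed.
        result = pool.map_async(square, range(6))
        return result.get(timeout=60)
    except KeyboardInterrupt:
        pool.terminate()  # kill workers immediately on CTRL-C
        raise
    finally:
        pool.close()
        pool.join()

if __name__ == '__main__':
    print(main())
```

On interrupt, `terminate()` stops the persistent workers so they cannot pick up further tasks, which addresses the first problem as well.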