
Add multiprocessing.Process.interrupt #131913

@pulkin

Description


Feature or enhancement

Proposal:

We have .terminate() and .kill(), but no .interrupt(), for stopping a multiprocessing.Process. I believe the latter would be a useful addition for the case where we have a Python subprocess that we want to terminate in a "normal" way: i.e. with finalizers triggered, etc.

The following roughly demonstrates the difference between the two approaches (Linux):

from signal import SIGINT
from multiprocessing import Process
from time import sleep

def payload():
    try:
        print("working")
        sleep(10)
    finally:
        print("a very important teardown")

print("> kill or terminate")
p = Process(target=payload)
p.start()
sleep(1)
p.kill()  # or terminate

output

> kill or terminate
working

Note that the finally block never runs: "a very important teardown" is not printed. Sending SIGINT instead, via a small subclass:

class MyProcess(Process):
    def interrupt(self):
        # note: _popen._send_signal() is a private API
        return self._popen._send_signal(SIGINT)

print("> interrupt")
p = MyProcess(target=payload)
p.start()
sleep(1)
p.interrupt()

output

> interrupt
working
a very important teardown
Process MyProcess-2:
Traceback (most recent call last):
  File "/usr/lib64/python3.13/multiprocessing/process.py", line 313, in _bootstrap
    self.run()
    ~~~~~~~~^^
  File "/usr/lib64/python3.13/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
    ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "test.py", line 9, in payload
    sleep(10)
    ~~~~~^^^^
KeyboardInterrupt

Motivation for sending SIGINT to subprocesses in general:

  • it allows the usual Python patterns in the payload: we can define finally, with, etc. and expect them to trigger normally
  • SIGINT is a very lean way to cancel heavy payloads. It is widely supported, so even with a non-pure-Python stack you have a better chance of exiting gracefully
  • it also enables better interaction with nested subprocesses: if a subprocess S manages some Pool or concurrent object, sending SIGINT to S lets the nested subprocesses exit gracefully, as opposed to, for example, leaving orphans on Linux. A plain nested Process will still leave orphans, but you can handle this with a little code in the payload:
def payload():
    p = MyProcess(target=inner)
    p.start()
    try:
        p.join()
    finally:
        if p.is_alive():
            p.interrupt()

Motivation for having Process.interrupt() in the standard library:

  • it is very easy to misuse terminate or kill without understanding the consequences or particularities of the two. .interrupt could become the default or recommended way to stop long-running tasks. Even without any payload-specific code, it prints an error and a Python stack trace on termination, which is a good starting point for writing and debugging multiprocessing code, making it a friendlier environment
  • an average developer arguably knows more about the difference between .interrupt and .kill than between .terminate and .kill. This is subjective, of course, but, setting aside internal reasons to have both .terminate and .kill, I do not see why .interrupt should not join them.

Has this already been discussed elsewhere?

This is a minor feature that does not need prior discussion elsewhere.

Links to previous discussion of this feature:

No response

Linked PRs

Metadata

Labels: stdlib (Standard Library Python modules in the Lib/ directory), topic-multiprocessing, type-feature (A feature request or enhancement)

Status: Done
