Alternative in python to subprocess
I am trying to write a script that has to make a lot of calls to bash commands, parse and process the output, and finally give some output.
I am using subprocess.Popen and subprocess.call.
If I understand correctly, these methods spawn a bash process, run the command, get the output, and then kill the process.
Is there a way to have a bash process running continuously in the background, so that the Python calls could go directly to that process? Something like bash running as a server and Python calling into it.
I feel this would optimize the calls a bit, as there would be no bash process setup and teardown. Or will it give no performance advantage?
If I understand correctly, these methods spawn a bash process, run the command, get the output and then kill the process.
subprocess.Popen is more complicated. It actually creates I/O threads to avoid deadlocks. See https://www.python.org/dev/peps/pep-0324/:

A communicate() method, which makes it easy to send stdin data and read stdout and stderr data, without risking deadlocks. Most people are aware of the flow control issues involved with child process communication, but not all have the patience or skills to write a fully correct and deadlock-free select loop. This means that many Python applications contain race conditions. A communicate() method in the standard library solves this problem.
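A minimal sketch of what communicate() does for you: it feeds stdin, drains stdout, and waits for the child to exit, all without risking a pipe-buffer deadlock. The child here is another Python interpreter rather than a shell, purely to keep the example portable.

```python
import subprocess
import sys

# Spawn a child with pipes on both ends; communicate() handles the I/O
# in both directions without deadlocking on full pipe buffers.
p = subprocess.Popen(
    [sys.executable, "-c", "print(input().upper())"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)
out, _ = p.communicate("hello")  # send stdin, read stdout, wait for exit
print(out.strip())  # HELLO
```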
Is there a way to have a bash process running in the background continuously and then the python calls could just go directly to that process?
Sure, you can still use subprocess.Popen and send messages to your child process and receive messages back without terminating the child process. In the simplest case your messages can be lines.
This allows request-response style protocols as well as publish-subscribe, where the child process can keep sending you messages as events of interest happen.
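A sketch of such a line-based request-response protocol with a long-lived child. The worker here is a small Python loop standing in for a persistent bash process; the mechanics (write a line, flush, read a line back, process stays alive) would be the same.

```python
import subprocess
import sys

# Long-lived worker: reads lines from stdin, answers each with one line.
worker_src = (
    "import sys\n"
    "for line in sys.stdin:\n"
    "    print(len(line.strip()))\n"
    "    sys.stdout.flush()\n"
)
p = subprocess.Popen(
    [sys.executable, "-c", worker_src],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    text=True, bufsize=1,  # line-buffered text mode
)

def ask(msg):
    # One request line out, one response line back; no setup/teardown per call.
    p.stdin.write(msg + "\n")
    p.stdin.flush()
    return p.stdout.readline().strip()

print(ask("hello"))  # 5
print(ask("hi"))     # 2
p.stdin.close()
p.wait()
```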
I feel this would optimize the calls a bit as there is no bash process setup and teardown.
subprocess never runs the shell unless you ask for it explicitly, e.g.

#!/usr/bin/env python
import subprocess
subprocess.check_call(['ls', '-l'])

This call runs the ls program without invoking /bin/sh.
Or will it give no performance advantage?
If your subprocess calls actually use the shell, e.g., to specify a pipeline concisely, or because defining the command directly with the subprocess module would be verbose and error-prone, then invoking bash is unlikely to be the performance bottleneck -- measure it first.
There are Python packages that also allow specifying such commands concisely, e.g., plumbum could be used to emulate a shell pipeline.
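For comparison, here is the same kind of pipeline built with the stdlib alone: each stage is its own Popen chained through pipes. This is the verbosity that packages like plumbum hide behind a pipe operator. The specific commands (ls, wc) are just an illustrative choice and assume a POSIX environment.

```python
import subprocess

# Emulating the shell pipeline `ls -l | wc -l` without invoking a shell:
p1 = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["wc", "-l"], stdin=p1.stdout,
                      stdout=subprocess.PIPE, text=True)
p1.stdout.close()  # let p1 get SIGPIPE if p2 exits early
out, _ = p2.communicate()
print(out.strip())  # number of lines ls produced
```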
If you want to use bash as a server process, then pexpect is useful for dialog-based interactions with an external process -- though it is unlikely that it affects time performance. fabric allows running both local and remote commands (over ssh).
There are other subprocess wrappers such as sarge, which can parse a pipeline specified in a string without invoking the shell, e.g., it enables cross-platform support for bash-like syntax (&&, ||, & in command lines), or sh -- a complete subprocess replacement on Unix that provides a TTY by default (it seems full-featured, but the shell-like piping is less straightforward). You can even use Python-ish BASHwards-looking syntax to run commands with the xonsh shell.
Again, in most cases it is unlikely to affect performance in any meaningful way.
The problem of starting and communicating with external processes in a portable manner is complex -- the interactions between processes, pipes, ttys, signals, threads, async IO, and buffering in various places all have rough edges. Introducing a new package may complicate things if you don't know how that specific package solves the numerous issues related to running shell commands.