Python: Reading a subprocess' stdout without printing to a file
I have an executable called BOB.exe that prints large amounts of text to stdout with only brief pauses. BOB also has a habit of freezing, so I wrote a Python watchdog function that uses the subprocess module to call the BOB executable, redirects its output to a temporary file, and watches the size of that temp file to see whether it has crashed. This is my current solution:
#!/usr/bin/python
from subprocess import Popen
import os, tempfile, time

def runBOB(argsList):
    # Create a temporary file where BOB stdout will be piped
    BOBout = tempfile.NamedTemporaryFile()
    BOBoutSize = 0
    # Start the subprocess of BOB
    BOBsp = Popen(argsList, stdout=BOBout)
    while True:
        # See if the subprocess has finished
        if BOBsp.poll() is not None:
            BOBout.close()  # Destroy the temp file
            return 0
        # If the size of the stdout file has increased, BOB.exe is still running
        BOBoutSizeNew = os.path.getsize(BOBout.name)
        if BOBoutSizeNew > BOBoutSize:
            BOBoutSize = BOBoutSizeNew
        else:  # if not, kill it
            BOBsp.kill()
            BOBout.close()  # Destroy the temp file
            return 1
        # Check every 10 seconds
        time.sleep(10)
However, this is very slow, and I think writing to the file is the cause. Is there a more efficient way to do this, such as watching the stdout stream and immediately sending it to null? Anything that cuts down on the amount of printing would probably help. Is there another way to tell whether the exe has crashed? I should probably note that I don't care about the stdout; it gets ignored anyway.

Thanks for any help!
You can pass stdout=subprocess.PIPE to tell subprocess to let you read the child's output without storing it to a file. The tricky part is doing so asynchronously, so that you don't deadlock when BOB.exe freezes. A simple way to do that is with a helper thread; despite Python's poor reputation for threading, this is actually a good use case for threads, one where the GIL doesn't get in the way.
Simply create a helper thread that does nothing but read the output from the file handle corresponding to Bob's stdout. The helper thread immediately discards the output and increments a byte counter. The main thread implements exactly the same logic as before, but consults the in-memory counter instead of re-checking the file size. When Bob finishes or is killed by the main thread, the helper thread receives EOF and exits.
Here is an untested implementation of the above:
#!/usr/bin/python
import subprocess
import threading
import time
import os

bytes_read = 0

def readBOB(pipe):
    global bytes_read
    bytes_read = 0
    while True:
        # Wait for some data to arrive. This must use os.read rather
        # than pipe.read(1024) because file.read would block us if less
        # than 1024 bytes of data arrives. (Reading one byte at a time
        # with pipe.read(1) would work, but would be too slow at
        # consuming large amounts of data.)
        s = os.read(pipe.fileno(), 1024)
        if not s:
            return  # EOF
        # We are the only writer, so the GIL serves as the lock.
        bytes_read += len(s)

def runBOB(argsList):
    # Start the subprocess of BOB
    BOBsp = subprocess.Popen(argsList, stdout=subprocess.PIPE)
    thr = threading.Thread(target=readBOB, args=(BOBsp.stdout,))
    thr.start()
    old_bytes_read = -1
    while True:
        # See if the subprocess has finished
        if BOBsp.poll() is not None:
            return 0
        # If the size of the stdout has increased, BOB.exe is still running
        new_bytes_read = bytes_read
        if new_bytes_read > old_bytes_read:
            old_bytes_read = new_bytes_read
        else:  # if not, kill it (readBOB will exit automatically)
            BOBsp.kill()
            return 1
        # Check every 10 seconds
        time.sleep(10)