Multiple pipes in subprocess

I am trying to use Sailfish, which takes multiple fastq files as arguments, in a pipeline. I use the subprocess module in Python to execute Sailfish, but `<()` in the subprocess call does not work even when I set shell=True.

This is the command that I want to execute using Python:

 sailfish quant [options] -1 <(cat sample1a.fastq sample1b.fastq) -2 <(cat sample2a.fastq sample2b.fastq) -o [output_file] 

or (preferably):

 sailfish quant [options] -1 <(gunzip sample1a.fastq.gz sample1b.fastq.gz) -2 <(gunzip sample2a.fastq.gz sample2b.fastq.gz) -o [output_file] 

Generalized:

 someprogram <(someprocess) <(someprocess) 

How would I go about doing this in Python? Is subprocess the right approach?

To emulate the bash process substitution:

    #!/usr/bin/env python
    from subprocess import check_call

    check_call('someprogram <(someprocess) <(anotherprocess)',
               shell=True, executable='/bin/bash')
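A minimal runnable sketch of the same trick, assuming `/bin/bash` is available; `cat` and `printf` stand in for the actual program and input processes so the example works without Sailfish installed:

```python
from subprocess import check_output

# executable='/bin/bash' makes bash (not /bin/sh) run the command,
# so the <(...) process substitutions are expanded for us.
out = check_output("cat <(printf 'hello\\n') <(printf 'world\\n')",
                   shell=True, executable='/bin/bash')
print(out)  # b'hello\nworld\n'
```

Note that `shell=True` alone is not enough: the default `/bin/sh` on many systems does not support `<(...)`, which is why the original call failed.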

In Python, you could use named pipes:

    #!/usr/bin/env python
    from subprocess import Popen

    with named_pipes(n=2) as paths:
        someprogram = Popen(['someprogram'] + paths)

        processes = []
        for path, command in zip(paths, ['someprocess', 'anotherprocess']):
            with open(path, 'wb', 0) as pipe:
                processes.append(Popen(command, stdout=pipe, close_fds=True))

        for p in [someprogram] + processes:
            p.wait()

where `named_pipes(n)` is:

    import os
    import shutil
    import tempfile
    from contextlib import contextmanager

    @contextmanager
    def named_pipes(n=1):
        dirname = tempfile.mkdtemp()
        try:
            paths = [os.path.join(dirname, 'named_pipe' + str(i))
                     for i in range(n)]
            for path in paths:
                os.mkfifo(path)
            yield paths
        finally:
            shutil.rmtree(dirname)
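Put together, a self-contained demonstration of this named-pipe approach might look like the following sketch, with `cat` standing in for someprogram and `echo` for the producer processes (assumed substitutes, so it runs without Sailfish):

```python
import os
import shutil
import tempfile
from contextlib import contextmanager
from subprocess import Popen, PIPE

@contextmanager
def named_pipes(n=1):
    # create n FIFOs in a throwaway directory, removed on exit
    dirname = tempfile.mkdtemp()
    try:
        paths = [os.path.join(dirname, 'named_pipe' + str(i))
                 for i in range(n)]
        for path in paths:
            os.mkfifo(path)
        yield paths
    finally:
        shutil.rmtree(dirname)

with named_pipes(n=2) as paths:
    # the consumer is started first: open(path, 'wb') below blocks
    # until cat opens the FIFO for reading
    consumer = Popen(['cat'] + paths, stdout=PIPE)

    producers = []
    for path, word in zip(paths, ['hello', 'world']):
        with open(path, 'wb', 0) as pipe:
            producers.append(Popen(['echo', word], stdout=pipe,
                                   close_fds=True))

    out, _ = consumer.communicate()
    for p in producers:
        p.wait()

print(out)  # b'hello\nworld\n'
```

`cat` reads the FIFOs in order: it sees EOF on the first pipe once `echo` exits and the parent's write end is closed, then moves on to the second one.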

Another, more preferable way (no need to create a named entry on disk) to implement the bash process substitution is to use /dev/fd/N filenames (if they are available), as suggested by @Dunes. On FreeBSD, fdescfs(5) (/dev/fd/#) creates entries for all file descriptors opened by the process. To test availability, run:

 $ test -r /dev/fd/3 3</dev/null && echo /dev/fd is available 

If it fails, try to symlink /dev/fd to proc(5), as is done on some Linux systems:

 $ ln -s /proc/self/fd /dev/fd 

Here's a /dev/fd-based implementation of the `someprogram <(someprocess) <(anotherprocess)` bash command:

    #!/usr/bin/env python3
    from contextlib import ExitStack
    from subprocess import CalledProcessError, Popen, PIPE

    def kill(process):
        if process.poll() is None:  # still running
            process.kill()

    with ExitStack() as stack:  # for proper cleanup
        processes = []
        for command in [['someprocess'], ['anotherprocess']]:
            # start child processes
            processes.append(stack.enter_context(Popen(command, stdout=PIPE)))
            stack.callback(kill, processes[-1])  # kill on someprogram exit

        fds = [p.stdout.fileno() for p in processes]
        someprogram = stack.enter_context(
            Popen(['someprogram'] + ['/dev/fd/%d' % fd for fd in fds],
                  pass_fds=fds))
        for p in processes:  # close pipes in the parent
            p.stdout.close()
    # exit stack: wait for processes
    if someprogram.returncode != 0:  # errors shouldn't go unnoticed
        raise CalledProcessError(someprogram.returncode, someprogram.args)

Note: on my Ubuntu machine, the subprocess code works only in Python 3.4+, although pass_fds has been available since Python 3.2.
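To see the recipe in action, here is a concrete, runnable variant under the assumption that `/dev/fd` is available; `echo` stands in for someprocess/anotherprocess and `cat` for someprogram:

```python
from contextlib import ExitStack
from subprocess import CalledProcessError, Popen, PIPE

def kill(process):
    if process.poll() is None:  # still running
        process.kill()

with ExitStack() as stack:  # for proper cleanup
    processes = []
    for command in [['echo', 'hello'], ['echo', 'world']]:
        processes.append(stack.enter_context(Popen(command, stdout=PIPE)))
        stack.callback(kill, processes[-1])  # kill if consumer exits early

    fds = [p.stdout.fileno() for p in processes]
    # the consumer reads the producers' pipes via /dev/fd/N paths;
    # pass_fds lets it inherit those descriptors
    consumer = stack.enter_context(
        Popen(['cat'] + ['/dev/fd/%d' % fd for fd in fds],
              stdout=PIPE, pass_fds=fds))
    for p in processes:  # close pipes in the parent
        p.stdout.close()
    out, _ = consumer.communicate()

if consumer.returncode != 0:  # errors shouldn't go unnoticed
    raise CalledProcessError(consumer.returncode, consumer.args)
print(out)  # b'hello\nworld\n'
```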

While J.F. Sebastian has already provided an answer using named pipes, it is possible to do this with anonymous pipes.

    import shlex
    from subprocess import Popen, PIPE

    inputcmd0 = "zcat hello.gz"  # gzipped file containing "hello"
    inputcmd1 = "zcat world.gz"  # gzipped file containing "world"

    def get_filename(file_):
        return "/dev/fd/{}".format(file_.fileno())

    def get_stdout_fds(*processes):
        return tuple(p.stdout.fileno() for p in processes)

    # setup producer processes
    inputproc0 = Popen(shlex.split(inputcmd0), stdout=PIPE)
    inputproc1 = Popen(shlex.split(inputcmd1), stdout=PIPE)

    # setup consumer process
    # pass input processes' pipes by "filename" eg. /dev/fd/5
    cmd = "cat {file0} {file1}".format(file0=get_filename(inputproc0.stdout),
                                       file1=get_filename(inputproc1.stdout))
    print("command is:", cmd)

    # pass_fds argument tells Popen to let the child process inherit the pipes' fds
    someprogram = Popen(shlex.split(cmd), stdout=PIPE,
                        pass_fds=get_stdout_fds(inputproc0, inputproc1))

    output, error = someprogram.communicate()

    for p in [inputproc0, inputproc1, someprogram]:
        p.wait()

    assert output == b"hello\nworld\n"
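The snippet above assumes that `hello.gz` and `world.gz` already exist. A minimal sketch for creating them with Python's stdlib `gzip` module (the file names and contents are just the ones the example expects):

```python
import gzip

# create the two gzipped test files so that zcat has something to read
for name, data in [("hello.gz", b"hello\n"), ("world.gz", b"world\n")]:
    with gzip.open(name, "wb") as f:
        f.write(data)
```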