[ https://issues.apache.org/jira/browse/SPARK-7898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15265863#comment-15265863 ]
Sam Steingold commented on SPARK-7898:
--------------------------------------
No, this is _NOT_ what I am talking about!
I (the pyspark driver) want to *read* the {{stdout}} of my subprocess, and pyspark messes up that subprocess's {{stdout}} and {{stderr}}.
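For reference, the driver-side pattern being described is roughly the following (a minimal sketch; {{echo}} stands in for the real {{hadoop fs -text}} invocation, which is an assumption here):
{code}
from subprocess import Popen, PIPE

# Spawn a subprocess and keep its two streams separate:
# stdout carries the payload the driver wants, stderr carries diagnostics.
p = Popen(['echo', 'payload'], stdout=PIPE, stderr=PIPE)
out, err = p.communicate()

# The driver consumes only stdout; any log noise must stay on stderr,
# otherwise it corrupts the payload read here.
print(out.decode().strip())
{code}
If the child's {{stderr}} is silently merged into {{stdout}}, the log lines end up interleaved with the data and this read is no longer reliable.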
> pyspark merges stderr into stdout
> ---------------------------------
>
> Key: SPARK-7898
> URL: https://issues.apache.org/jira/browse/SPARK-7898
> Project: Spark
> Issue Type: Bug
> Components: PySpark
> Affects Versions: 1.3.0
> Reporter: Sam Steingold
>
> When I type
> {code}
> hadoop fs -text /foo/bar/baz.bz2 2>err 1>out
> {code}
> I get two non-empty files: {{err}} with
> {code}
> 2015-05-26 15:33:49,786 INFO [main] bzip2.Bzip2Factory (Bzip2Factory.java:isNativeBzip2Loaded(70)) - Successfully loaded & initialized native-bzip2 library system-native
> 2015-05-26 15:33:49,789 INFO [main] compress.CodecPool (CodecPool.java:getDecompressor(179)) - Got brand-new decompressor [.bz2]
> {code}
> and {{out}} with the content of the file (as expected).
> When I call the same command from Python (2.6):
> {code}
> from subprocess import Popen
> with open("out","w") as out:
>     with open("err","w") as err:
>         p = Popen(['hadoop','fs','-text',"/foo/bar/baz.bz2"],
>                   stdin=None, stdout=out, stderr=err)
>         print p.wait()
> {code}
> I get the exact same (correct) behavior.
> *However*, when I run the same code under *PySpark* (or using {{spark-submit}}), I get an *empty* {{err}} file, and the {{out}} file starts with the log messages above (followed by the actual data).
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)