spark-issues mailing list archives

From "Josh Rosen (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (SPARK-6327) Running PySpark with python directly is broken
Date Mon, 16 Mar 2015 23:27:38 GMT

     [ https://issues.apache.org/jira/browse/SPARK-6327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Josh Rosen resolved SPARK-6327.
-------------------------------
       Resolution: Fixed
    Fix Version/s: 1.4.0

Issue resolved by pull request 5019
[https://github.com/apache/spark/pull/5019]

> Running PySpark with python directly is broken
> ----------------------------------------------
>
>                 Key: SPARK-6327
>                 URL: https://issues.apache.org/jira/browse/SPARK-6327
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.4.0
>            Reporter: Davies Liu
>            Assignee: Davies Liu
>            Priority: Critical
>             Fix For: 1.4.0
>
>
> It used to work, but it is broken now:
> {code}
> davies@localhost:~/work/spark$ python r.py
> NOTE: SPARK_PREPEND_CLASSES is set, placing locally compiled Spark classes ahead of assembly.
> Usage: spark-submit [options] <app jar | python file> [app arguments]
> Usage: spark-submit --kill [submission ID] --master [spark://...]
> Usage: spark-submit --status [submission ID] --master [spark://...]
> Options:
>   --master MASTER_URL         spark://host:port, mesos://host:port, yarn, or local.
>   --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
>                               on one of the worker machines inside the cluster ("cluster")
>                               (Default: client).
>   --class CLASS_NAME          Your application's main class (for Java / Scala apps).
>   --name NAME                 A name of your application.
>   --jars JARS                 Comma-separated list of local jars to include on the driver
>                               and executor classpaths.
>   --packages                  Comma-separated list of maven coordinates of jars to include
>                               on the driver and executor classpaths. Will search the local
>                               maven repo, then maven central and any additional remote
>                               repositories given by --repositories. The format for the
>                               coordinates should be groupId:artifactId:version.
>   --repositories              Comma-separated list of additional remote repositories to
>                               search for the maven coordinates given with --packages.
>   --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place
>                               on the PYTHONPATH for Python apps.
>   --files FILES               Comma-separated list of files to be placed in the working
>                               directory of each executor.
>   --conf PROP=VALUE           Arbitrary Spark configuration property.
>   --properties-file FILE      Path to a file from which to load extra properties. If not
>                               specified, this will look for conf/spark-defaults.conf.
>   --driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 512M).
>   --driver-java-options       Extra Java options to pass to the driver.
>   --driver-library-path       Extra library path entries to pass to the driver.
>   --driver-class-path         Extra class path entries to pass to the driver. Note that
>                               jars added with --jars are automatically included in the
>                               classpath.
>   --executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).
>   --proxy-user NAME           User to impersonate when submitting the application.
>   --help, -h                  Show this help message and exit
>   --verbose, -v               Print additional debug output
>   --version,                  Print the version of current Spark
>  Spark standalone with cluster deploy mode only:
>   --driver-cores NUM          Cores for driver (Default: 1).
>   --supervise                 If given, restarts the driver on failure.
>   --kill SUBMISSION_ID        If given, kills the driver specified.
>   --status SUBMISSION_ID      If given, requests the status of the driver specified.
>  Spark standalone and Mesos only:
>   --total-executor-cores NUM  Total cores for all executors.
>  YARN-only:
>   --driver-cores NUM          Number of cores used by the driver, only in cluster mode
>                               (Default: 1).
>   --executor-cores NUM        Number of cores per executor (Default: 1).
>   --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
>   --num-executors NUM         Number of executors to launch (Default: 2).
>   --archives ARCHIVES         Comma separated list of archives to be extracted into the
>                               working directory of each executor.
> {code}
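
For reference, the contents of r.py are not attached to the issue; a script launched as "python r.py" would typically look something like the minimal sketch below. The script body, app name, and master URL here are assumptions for illustration only. The sketch assumes SPARK_HOME is set and that $SPARK_HOME/python (plus the bundled py4j zip under $SPARK_HOME/python/lib) is on PYTHONPATH, since running python directly bypasses the bin/pyspark and bin/spark-submit wrappers that normally arrange this.

{code}
# Hypothetical minimal r.py (the real script is not included in the report).
# Assumes SPARK_HOME is set and $SPARK_HOME/python plus the py4j zip are on
# PYTHONPATH, because "python r.py" does not go through bin/spark-submit.
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("direct-python-repro").setMaster("local[2]")
sc = SparkContext(conf=conf)

# A trivial job, just enough to force the Python-to-JVM gateway to launch.
print(sc.parallelize(range(10)).sum())

sc.stop()
{code}

When a PySpark script is launched this way, creating the SparkContext spawns bin/spark-submit behind the scenes to start the JVM gateway; the usage output above suggests that that internal invocation was being rejected.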



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

