spark-user mailing list archives

From Ted Yu <yuzhih...@gmail.com>
Subject Re: Maven out of memory error
Date Sat, 17 Jan 2015 05:46:23 GMT
I tried the following but still didn't see test output :-(

diff --git a/pom.xml b/pom.xml
index f4466e5..dae2ae8 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1131,6 +1131,7 @@

 <spark.driver.allowMultipleContexts>true</spark.driver.allowMultipleContexts>
             </systemProperties>
             <failIfNoTests>false</failIfNoTests>
+            <redirectTestOutputToFile>true</redirectTestOutputToFile>
           </configuration>
         </plugin>
         <!-- Scalatest runs all Scala tests -->
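Note that `redirectTestOutputToFile` only affects surefire, i.e. the Java suites. The Scala suites run through the scalatest-maven-plugin, whose file output is controlled separately; a sketch of the corresponding configuration (parameter names taken from the scalatest-maven-plugin, not verified against this pom):

```xml
<!-- Sketch: scalatest-maven-plugin file reports.
     junitxml/filereports paths are relative to the reports directory. -->
<plugin>
  <groupId>org.scalatest</groupId>
  <artifactId>scalatest-maven-plugin</artifactId>
  <configuration>
    <junitxml>.</junitxml>
    <filereports>SparkTestSuite.txt</filereports>
  </configuration>
</plugin>
```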

On Fri, Jan 16, 2015 at 12:41 PM, Ted Yu <yuzhihong@gmail.com> wrote:

> I got the same error:
>
> testGuavaOptional(org.apache.spark.JavaAPISuite)  Time elapsed: 261.111
> sec  <<< ERROR!
> org.apache.spark.SparkException: Job aborted due to stage failure: Master
> removed our application: FAILED
> at org.apache.spark.scheduler.DAGScheduler.org
> $apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1199)
> at
> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1188)
>
> Looking under core/target/surefire-reports/, I don't see test output.
> Trying to figure out how test output can be generated.
>
> Cheers
>
> On Fri, Jan 16, 2015 at 12:26 PM, Andrew Musselman <
> andrew.musselman@gmail.com> wrote:
>
>> Thanks Ted, got farther along but now have a failing test; is this a
>> known issue?
>>
>> -------------------------------------------------------
>>  T E S T S
>> -------------------------------------------------------
>> Running org.apache.spark.JavaAPISuite
>> Tests run: 72, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 123.462
>> sec <<< FAILURE! - in org.apache.spark.JavaAPISuite
>> testGuavaOptional(org.apache.spark.JavaAPISuite)  Time elapsed: 106.5
>> sec  <<< ERROR!
>> org.apache.spark.SparkException: Job aborted due to stage failure: Master
>> removed our application: FAILED
>>     at org.apache.spark.scheduler.DAGScheduler.org
>> $apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1199)
>>     at
>> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1188)
>>     at
>> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1187)
>>     at
>> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>>     at
>> org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1187)
>>     at
>> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
>>     at
>> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
>>     at scala.Option.foreach(Option.scala:236)
>>     at
>> org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
>>     at
>> org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1399)
>>     at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>>     at
>> org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1360)
>>     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>>     at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>>     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>>     at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>>     at
>> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
>>     at
>> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>>     at
>> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>>     at
>> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>>     at
>> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>>
>> Running org.apache.spark.JavaJdbcRDDSuite
>> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.846 sec
>> - in org.apache.spark.JavaJdbcRDDSuite
>>
>> Results :
>>
>>
>> Tests in error:
>>   JavaAPISuite.testGuavaOptional » Spark Job aborted due to stage
>> failure: Maste...
>>
>> On Fri, Jan 16, 2015 at 12:06 PM, Ted Yu <yuzhihong@gmail.com> wrote:
>>
>>> Can you try doing this before running mvn ?
>>>
>>> export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M
>>> -XX:ReservedCodeCacheSize=512m"
>>>
>>> What OS are you using ?
>>>
>>> Cheers
>>>
>>> On Fri, Jan 16, 2015 at 12:03 PM, Andrew Musselman <
>>> andrew.musselman@gmail.com> wrote:
>>>
>>>> Just got the latest from Github and tried running `mvn test`; is this
>>>> error common and do you have any advice on fixing it?
>>>>
>>>> Thanks!
>>>>
>>>> [INFO] --- scala-maven-plugin:3.2.0:compile (scala-compile-first) @
>>>> spark-core_2.10 ---
>>>> [WARNING] Zinc server is not available at port 3030 - reverting to
>>>> normal incremental compile
>>>> [INFO] Using incremental compilation
>>>> [INFO] compiler plugin:
>>>> BasicArtifact(org.scalamacros,paradise_2.10.4,2.0.1,null)
>>>> [INFO] Compiling 400 Scala sources and 34 Java sources to
>>>> /home/akm/spark/core/target/scala-2.10/classes...
>>>> [WARNING]
>>>> /home/akm/spark/core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala:22:
>>>> imported `DataReadMethod' is permanently hidden by definition of object
>>>> DataReadMethod in package executor
>>>> [WARNING] import org.apache.spark.executor.DataReadMethod
>>>> [WARNING]                                  ^
>>>> [WARNING]
>>>> /home/akm/spark/core/src/main/scala/org/apache/spark/TaskState.scala:41:
>>>> match may not be exhaustive.
>>>> It would fail on the following input: TASK_ERROR
>>>> [WARNING]   def fromMesos(mesosState: MesosTaskState): TaskState =
>>>> mesosState match {
>>>> [WARNING]                                                          ^
>>>> [WARNING]
>>>> /home/akm/spark/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala:89:
>>>> method isDirectory in class FileSystem is deprecated: see corresponding
>>>> Javadoc for more information.
>>>> [WARNING]     if (!fileSystem.isDirectory(new Path(logBaseDir))) {
>>>> [WARNING]                     ^
>>>> [ERROR] PermGen space -> [Help 1]
>>>> [ERROR]
>>>> [ERROR] To see the full stack trace of the errors, re-run Maven with
>>>> the -e switch.
>>>> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>>>> [ERROR]
>>>> [ERROR] For more information about the errors and possible solutions,
>>>> please read the following articles:
>>>> [ERROR] [Help 1]
>>>> http://cwiki.apache.org/confluence/display/MAVEN/OutOfMemoryError
>>>>
>>>>
>>>
>>
>
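For reference, the PermGen fix suggested above amounts to setting Maven's JVM options in the shell before invoking mvn; a minimal sketch (the sizes are the ones suggested in the thread, adjust as needed):

```shell
# Give Maven's JVM more heap and permanent generation space.
# MaxPermSize addresses the "PermGen space" error on Java 7; the flag
# is ignored (and later removed) on Java 8+, which uses Metaspace.
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
echo "$MAVEN_OPTS"
```

The export only lasts for the current shell session; put it in your shell profile if you build Spark regularly.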
