spark-dev mailing list archives

From Mark Hamstra <m...@clearstorydata.com>
Subject Re: Experimental Scala-2.10.3 branch based on master
Date Fri, 04 Oct 2013 17:41:04 GMT
Yeah, sorry to say, but I think you've largely or completely duplicated
work that has already been done. If anything, Prashant's current work is
mostly ahead of yours since, among other things, he has already
incorporated the changes I made to use ClassTag.



On Fri, Oct 4, 2013 at 10:34 AM, Reynold Xin <rxin@apache.org> wrote:

> Hi Martin,
>
> Thanks for updating us. Prashant has also been updating the scala 2.10
> branch at https://github.com/mesos/spark/tree/scala-2.10
>
> Did you take a look at his work?
>
>
> On Fri, Oct 4, 2013 at 8:01 AM, Martin Weindel
> <martin.weindel@arcor.de> wrote:
>
> > Here you can find an experimental branch of Spark for Scala 2.10.
> >
> >
> > https://github.com/MartinWeindel/incubator-spark/tree/0.9_Scala-2.10.3
> >
> > I have also updated Akka to version 2.1.4.
> >
> > The branch compiles with both sbt and mvn, but a few tests are failing
> > and, even worse, some produce deadlocks.
> >
> > There are also a lot of warnings, most related to the use of
> > ClassManifest, which should be replaced with ClassTag.
> > But I don't think it is a good idea to fix these warnings at the
> > moment, as doing so would make merging with the master branch harder.
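> >
> > A minimal sketch of that migration, assuming Scala 2.10's
> > scala.reflect.ClassTag (the method name here is hypothetical):
> >
> >   import scala.reflect.ClassTag
> >
> >   // Scala 2.9 style; ClassManifest is deprecated on 2.10 and
> >   // triggers the warnings mentioned above:
> >   // def toArrayOf[T: ClassManifest](xs: Seq[T]): Array[T] = xs.toArray
> >
> >   // Scala 2.10 style; ClassTag works as a drop-in context bound,
> >   // supplying the runtime class needed to build the Array:
> >   def toArrayOf[T: ClassTag](xs: Seq[T]): Array[T] = xs.toArray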
> >
> > I would like to know the official road map for supporting Scala 2.10.
> > Does it make sense to investigate the test problems in more detail on
> > my experimental branch?
> >
> > Best regards,
> > Martin
> >
> >
> > P.S.: Below are the failing tests (the list is probably incomplete
> > because of the deadlocks):
> >
> >
> > DriverSuite:
> > - driver should exit after finishing *** FAILED ***
> >   TestFailedDueToTimeoutException was thrown during property
> > evaluation. (DriverSuite.scala:36)
> >   Message: The code passed to failAfter did not complete within 30
> > seconds.
> >   Location: (DriverSuite.scala:37)
> >   Occurred at table row 0 (zero based, not counting headings), which had
> > values (
> >     master = local
> >   )
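> >
> > For reference, the shape of the failing test is roughly the following
> > sketch, using ScalaTest's Timeouts and TableDrivenPropertyChecks (the
> > table contents and the test body are assumptions, not copied from
> > DriverSuite):
> >
> >   import org.scalatest.FunSuite
> >   import org.scalatest.concurrent.Timeouts
> >   import org.scalatest.prop.TableDrivenPropertyChecks
> >   import org.scalatest.time.{Seconds, Span}
> >
> >   class TimeoutPatternSuite extends FunSuite
> >       with Timeouts with TableDrivenPropertyChecks {
> >     test("driver should exit after finishing") {
> >       // "table row 0 ... master = local" in the report refers to a
> >       // property table like this one:
> >       val masters = Table("master", "local")
> >       forAll(masters) { master =>
> >         failAfter(Span(30, Seconds)) {
> >           // launch a driver against `master` and wait for it to exit
> >         }
> >       }
> >     }
> >   }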
> >
> > UISuite:
> > - jetty port increases under contention *** FAILED ***
> >   java.net.BindException: Address already in use
> >   at sun.nio.ch.Net.bind0(Native Method)
> >   at sun.nio.ch.Net.bind(Net.java:444)
> >   at sun.nio.ch.Net.bind(Net.java:436)
> >   at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
> >   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> >   at org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
> >   at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
> >   at org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
> >   at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
> >   at org.eclipse.jetty.server.Server.doStart(Server.java:286)
> >   ...
> >
> > AccumulatorSuite:
> > - add value to collection accumulators *** FAILED ***
> >   org.apache.spark.SparkException: Job failed: Task not serializable:
> > java.io.NotSerializableException: org.scalatest.Engine
> >   at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:762)
> >   at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:760)
> >   at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> >   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> >   at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:760)
> >   at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:555)
> >   at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:502)
> >   at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:360)
> >   at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$run(DAGScheduler.scala:440)
> >   at org.apache.spark.scheduler.DAGScheduler$$anon$1.run(DAGScheduler.scala:148)
> >   ...
> > - localValue readable in tasks *** FAILED ***
> >   org.apache.spark.SparkException: Job failed: Task not serializable:
> > java.io.NotSerializableException: org.scalatest.Engine
> >   (stack trace identical to the previous failure)
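> >
> > For what it is worth, a NotSerializableException on org.scalatest.Engine
> > usually means a task closure captured the enclosing suite (FunSuite
> > holds an Engine internally). A minimal sketch of that failure mode and a
> > common workaround; the names are hypothetical, not from AccumulatorSuite:
> >
> >   import org.apache.spark.SparkContext
> >   import org.scalatest.FunSuite
> >
> >   class CaptureSuite extends FunSuite {
> >     val factor = 2  // a field of the (non-serializable) suite
> >
> >     test("closure capture") {
> >       val sc = new SparkContext("local", "test")
> >       // Referencing the field drags `this` (the whole suite, Engine
> >       // included) into the closure: "Task not serializable".
> >       // sc.parallelize(1 to 10).map(_ * factor).collect()
> >
> >       // Workaround: copy the field into a local val first.
> >       val f = factor
> >       assert(sc.parallelize(1 to 10).map(_ * f).collect().sum === 110)
> >       sc.stop()
> >     }
> >   }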
> >
> > ShuffleNettySuite:
> > *deadlock* on "shuffle serializer"
> >
> > FileServerSuite:
> > *deadlock* on "Distributing files on a standalone cluster"
> >
> >
>
