spark-dev mailing list archives

From Reynold Xin <r...@apache.org>
Subject Re: issue regarding akka, protobuf and Hadoop version
Date Tue, 05 Nov 2013 00:33:37 GMT
I chatted with Matt Massie about this, and here are some options:

1. Use dependency injection with Google Guice to make Akka use one version of
protobuf and YARN use the other.

2. Look into OSGi to accomplish the same goal.

3. Rewrite the messaging part of Spark to use a simple, custom RPC library
instead of Akka. We really only use a small subset of Akka's features, so we
could probably implement an RPC library tailored to Spark fairly quickly. We
should do this only as a last resort.

4. Talk to the Akka team and hope they can make a maintenance release of Akka
that supports protobuf 2.5.


None of these are ideal, but we'd have to pick one. It would be great if
you have other suggestions.
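For context on option 3, here is a minimal sketch of what the core abstraction of such a tailored RPC layer might look like. All names here (RpcEndpoint, RpcEnv) are hypothetical illustrations, not an existing Spark API, and the dispatcher is in-process rather than networked:

```scala
// Hypothetical sketch of the "simple RPC library" idea from option 3.
// A registry of named endpoints with a synchronous ask(), standing in
// for a networked request/reply layer.
import scala.collection.mutable

trait RpcEndpoint {
  def receive(msg: Any): Any
}

class RpcEnv {
  private val endpoints = mutable.Map[String, RpcEndpoint]()

  // Register an endpoint under a name, e.g. "driver".
  def register(name: String, endpoint: RpcEndpoint): Unit =
    endpoints(name) = endpoint

  // Send a message to a named endpoint and return its reply.
  def ask(name: String, msg: Any): Any =
    endpoints(name).receive(msg)
}

object Demo {
  def main(args: Array[String]): Unit = {
    val env = new RpcEnv
    env.register("driver", new RpcEndpoint {
      def receive(msg: Any): Any = msg match {
        case ("register-executor", id: String) => s"ack:$id"
        case _                                 => "unknown"
      }
    })
    println(env.ask("driver", ("register-executor", "exec-1"))) // prints ack:exec-1
  }
}
```

The point of the sketch is that Spark's messaging needs reduce to named endpoints exchanging small messages, which is a much smaller surface than Akka's full feature set.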


On Sun, Nov 3, 2013 at 11:46 PM, Liu, Raymond <raymond.liu@intel.com> wrote:

> Hi
>
>         I am working on porting Spark onto Hadoop 2.2.0. With some
> renaming and calls into the new YARN API done, I can bring up the Spark
> master, but I hit an issue where the Executor actor cannot connect to
> the Driver actor.
>
>         After some investigation, I found the root cause: akka-remote
> does not support protobuf 2.5.0 before Akka 2.3, while Hadoop moved to
> protobuf 2.5.0 as of 2.1-beta.
>
>         The issue is that if I exclude the protobuf dependency pulled
> in by hadoop and force protobuf to 2.4.1, compiling/packaging fails,
> since the hadoop-common jar requires a new interface from protobuf
> 2.5.0.
>
>          Any suggestions on this?
>
> Best Regards,
> Raymond Liu
>
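The exclude/force combination Raymond describes might look like the following in an sbt build. This is a sketch of the failed approach, not a working fix; the exact artifact names and module IDs are assumptions:

```scala
// Hypothetical build.sbt fragment: drop the protobuf 2.5.0 that Hadoop
// pulls in and force 2.4.1 so akka-remote works -- but, as described
// above, hadoop-common classes compiled against 2.5.0 APIs then break.
libraryDependencies ++= Seq(
  "org.apache.hadoop" % "hadoop-client" % "2.2.0"
    exclude("com.google.protobuf", "protobuf-java"),
  "com.google.protobuf" % "protobuf-java" % "2.4.1" force()
)
```

Because both Akka and Hadoop need incompatible protobuf versions on the same classpath, no single pinned version can satisfy them, which is what motivates the isolation options above.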
