spark-user mailing list archives

From "Evo Eftimov" <>
Subject RE: How to share large resources like dictionaries while processing data with Spark ?
Date Fri, 05 Jun 2015 10:04:26 GMT
Oops, @Yiannis, sorry to be a party pooper, but the Job Server is for Spark batch jobs (besides,
anyone can put something like that together in 5 min), while I am under the impression that Dmitry
is working on a Spark Streaming app.


Besides, the Job Server is essentially for sharing the Spark Context between multiple threads.


Re Dmitry's initial question – you can load large data sets as a batch (static) RDD from any
Spark Streaming app and then join DStream RDDs against them to emulate "lookups". You
can also try the "Lookup RDD" – there is a GitHub project.


From: Dmitry Goldenberg [] 
Sent: Friday, June 5, 2015 12:12 AM
To: Yiannis Gkoufas
Cc: Olivier Girardot;
Subject: Re: How to share large resources like dictionaries while processing data with Spark


Thanks so much, Yiannis, Olivier, Huang!


On Thu, Jun 4, 2015 at 6:44 PM, Yiannis Gkoufas <> wrote:

Hi there,


I would recommend checking out which I
think gives the functionality you are looking for.

I haven't tested it though.




On 5 June 2015 at 01:35, Olivier Girardot <> wrote:

You can use it as a broadcast variable, but if it's "too" large (more than 1 GB, I guess), you
may need to share it by joining it to the other RDDs on some kind of key.

But this is the kind of thing broadcast variables were designed for.






On Thu, Jun 4, 2015 at 23:50, dgoldenberg <> wrote:

We have some pipelines defined where sometimes we need to load potentially
large resources such as dictionaries.

What would be the best strategy for sharing such resources among the
transformations/actions within a consumer?  Can they be shared somehow
across the RDDs?

I'm looking for a way to load such a resource once into the cluster memory
and have it be available throughout the lifecycle of a consumer...




