spark-user mailing list archives

From matthes <mdiekst...@sensenetworks.com>
Subject Set up a huge unserializable object in a mapper
Date Mon, 22 Sep 2014 16:11:59 GMT
Hello everybody!

I’m a newbie in Spark, and I hope my problem is solvable!
I need to set up an instance that I want to use inside a mapper function. The
problem is that it is not serializable, and broadcasting it is not an option
for me, because the instance can become very large (e.g. 1 GB-10 GB). Is there
a way to run the getTree setup only once per process, as in Hadoop? At the
moment it is called for every partition, and then I run out of memory. My
second question: is there also a reliable way to limit the number of mapper
tasks, so that I never get more than a defined limit?
If this approach is totally wrong, please let me know. I’m open to any ideas.
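
For the first question, one idea I had is to hide the tree behind a singleton
object, so that each executor JVM builds it at most once, no matter how many
partitions it processes. Just a sketch of what I mean (TreeHolder is a name I
made up, and S2Tree stands in for whatever type getTree actually returns):

    // Builds the tree lazily, at most once per executor JVM.
    object TreeHolder {
      private var tree: S2Tree = _

      def get(filename: String): S2Tree = synchronized {
        if (tree == null) {
          tree = getTree(filename) // the expensive 1-10 GB load
        }
        tree
      }
    }

Inside mapPartitions I would then call TreeHolder.get(bcTreefilename.value)
instead of getTree directly.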

My first try is:

val countresult = file.mapPartitions { valueIterator =>
  // Problem: getTree loads the huge tree, and this runs once per partition.
  val s2tree = getTree(bcTreefilename.value)

  valueIterator.map { x =>
    val split = x.split("\t")
    val key   = split(1)
    val value = CountContainer(split(3).toInt)

    // Mark the count as exposed when the tree covers this cell.
    if (s2tree.lookupContainingCellsSimple(new S2CellId(split(2).toLong))) {
      value.exposureCnt = value.totalCnt
    }

    (key, value)
  }
}.reduceByKey { (x, y) => x.add(y); x }.cache()
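
For the second question, would coalesce be the right way to cap the number of
tasks? Something like this, where N is just a placeholder for my limit:

    // At most N partitions, so at most N map tasks for this stage.
    val limited = file.coalesce(N)

As far as I understand, the number of tasks in a stage equals the number of
partitions, so this should bound them, but please correct me if that is wrong.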

Best,

Matthias




