spark-user mailing list archives

From tangweihan <>
Subject Re: how to know the Spark worker Mechanism
Date Tue, 18 Nov 2014 08:57:16 GMT
OK, I don't put it on the path, because this is not a lib I want to link at startup.
Here is my code inside the RDD:

            val fileaddr = SparkFiles.get("")
            val config = SparkFiles.get("qsegconf.ini")
            val segment = new Segment    // this is the native class
            segment.init(config)         // this fails if the driver doesn't load the lib first

I just use System.load to load this lib. But now I also call some functions in
the lib that modify objects inside it, and that produces the fatal error. Once
I load the lib in the driver first, it works again in standalone mode. I want
to understand how a job travels from the driver to the workers, and how the
worker's memory is wired up to the native lib. I also tried another sample lib:
if its functions do not modify objects inside the lib, the lib can be loaded in
the workers only. One more thing I would like to know: does Spark have
anything like the cache archive (-cacheArchive) in Hadoop Streaming?
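From what I can tell, the closest Spark analog to Hadoop Streaming's
-cacheArchive/-cacheFile is SparkContext.addFile: the driver ships the file
to every executor, and each task resolves the executor-local copy with
SparkFiles.get. Here is a minimal sketch of what I believe the pattern looks
like; the file paths, lib name, and the commented-out Segment calls are
placeholders from my snippet above, not real APIs:

```scala
import org.apache.spark.{SparkConf, SparkContext, SparkFiles}

object NativeLibSketch {
  // Load the .so at most once per executor JVM; calling System.load twice
  // for the same lib from the same classloader is a no-op, but guarding it
  // explicitly makes the intent clear and avoids cross-classloader errors.
  @volatile private var loaded = false
  def ensureLoaded(path: String): Unit = synchronized {
    if (!loaded) { System.load(path); loaded = true }
  }

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("native-lib-sketch"))

    // Spark's analog of Hadoop Streaming's cache file: ship both files
    // from the driver to every executor's local working directory.
    sc.addFile("/path/to/libqseg.so")   // hypothetical path
    sc.addFile("/path/to/qsegconf.ini") // hypothetical path

    sc.parallelize(1 to 100).mapPartitions { iter =>
      // This closure runs on the worker: resolve the executor-local copies.
      ensureLoaded(SparkFiles.get("libqseg.so"))
      val config = SparkFiles.get("qsegconf.ini")
      // val segment = new Segment   // native class from my snippet
      // segment.init(config)        // native calls would go here
      iter
    }.count()

    sc.stop()
  }
}
```

If this is right, then the lib is loaded per executor JVM, not per task, which
would explain why state mutated by one call is visible to later calls in the
same worker but not to the driver.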

