Hi Renato,

I have done that as well, but so far no luck. I believe Spark is finding the library correctly; otherwise the error message would be "no libraryname found" or something like that. The problem seems to be something else, and I'm not sure how to find it.

Thanks,
Bernardo

On 14 October 2015 at 16:28, Renato Marroquín Mogrovejo <firstname.lastname@example.org> wrote:

You can also try setting the env variable LD_LIBRARY_PATH to point to where your compiled libraries are.

Renato M.

2015-10-14 21:07 GMT+02:00 Bernardo Vecchia Stein <email@example.com>:

Hi Deenar,

Yes, the native library is installed on all machines of the cluster. I tried a simpler approach by just using System.load() and passing the exact path of the library, and things still won't work (I get exactly the same error and message).

Any ideas of what might be failing?

Thank you,
Bernardo

On 14 October 2015 at 02:50, Deenar Toraskar <firstname.lastname@example.org> wrote:

Hi Bernardo,

Is the native library installed on all machines of your cluster, and are you setting both spark.driver.extraLibraryPath and spark.executor.extraLibraryPath?

Deenar
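[Editor's note: for anyone following the thread, here is a minimal sketch of what the two suggestions above look like in practice. The directory /opt/native/lib is a placeholder for wherever the .so actually lives on each node; the configuration keys themselves are standard Spark settings.]

    import org.apache.spark.{SparkConf, SparkContext}

    // Placeholder: the directory that holds the .so on every node.
    val nativeLibDir = "/opt/native/lib"

    val conf = new SparkConf()
      .setAppName("NativeLibExample")
      // Prepended to the native library search path of each executor JVM,
      // so System.loadLibrary can resolve the .so there. Executors start
      // after the SparkContext is created, so setting it here works.
      .set("spark.executor.extraLibraryPath", nativeLibDir)

    val sc = new SparkContext(conf)

[Note that spark.driver.extraLibraryPath cannot usefully be set this way, because the driver JVM has already started by the time the SparkConf is built; it has to be passed at launch, e.g. spark-submit --conf spark.driver.extraLibraryPath=/opt/native/lib. Renato's LD_LIBRARY_PATH variant is equivalent, but has to be exported in the environment of every worker before it starts.]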
On 14 October 2015 at 05:44, Bernardo Vecchia Stein <email@example.com> wrote:

Hello,

I am trying to run some Scala code in cluster mode using spark-submit. This code uses addLibrary to link with a .so that exists on the machine, and this library has a function to be called natively (there's a native definition as needed in the code).

The problem I'm facing is: whenever I try to run this code in cluster mode, Spark fails with the following message when trying to execute the native function:

java.lang.UnsatisfiedLinkError: org.name.othername.ClassName.nativeMethod([B[B)[B

Apparently, the library is being found by Spark, but the required function isn't found. When trying to run in client mode, however, this doesn't fail and everything works as expected.

Does anybody have any idea of what might be the problem here? Is there any bug that could be related to this when running in cluster mode?

I appreciate any help.

Thanks,
Bernardo
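[Editor's note: to make the setup in this first message concrete, here is a minimal sketch of the pattern it describes. The package, class, and method names are taken from the (anonymised) stack trace; the library name "mylib" and the companion-object loader are assumptions. The ([B[B)[B in the error is the JVM signature for a method taking two byte arrays and returning one.]

    package org.name.othername

    class ClassName {
      // Declared here, implemented in the .so. For this declaration the JVM
      // looks for a native symbol named
      //   Java_org_name_othername_ClassName_nativeMethod
      // An UnsatisfiedLinkError thrown at call time (rather than when the
      // library is loaded) means the .so was opened but that symbol was
      // not found inside it.
      @native def nativeMethod(a: Array[Byte], b: Array[Byte]): Array[Byte]
    }

    object ClassName {
      // Runs once per JVM, the first time the companion object is touched.
      // In cluster mode this has to happen inside every executor JVM, not
      // just on the driver.
      System.loadLibrary("mylib") // assumed name; resolves libmylib.so
      // System.load("/opt/native/lib/libmylib.so") // the exact-path variant tried above

      // Construct instances via the companion so loading always happens first,
      // e.g. val out = ClassName().nativeMethod(bytesA, bytesB)
      def apply(): ClassName = new ClassName
    }

[One hedged observation: if the .so was built against a header generated for a slightly different declaration, for example for a Scala object (whose JNI symbol encodes the trailing $ as _00024) or for another package name, the file loads fine but the symbol lookup fails with exactly this error, which would match the "library found, function not found" symptom described above.]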