spark-user mailing list archives

From Akhil Das <ak...@sigmoidanalytics.com>
Subject Re: IPv6 support
Date Wed, 17 Jun 2015 07:33:14 GMT
If you look at this,

15/06/16 22:25:14 INFO Executor: Starting executor ID driver on host
localhost

15/06/16 22:25:14 ERROR SparkContext: Error initializing SparkContext.

java.lang.AssertionError: assertion failed: Expected hostname

your spark.driver.host
<https://github.com/apache/spark/blob/3c0156899dc1ec1f7dfe6d7c8af47fa6dc7d00bf/core/src/main/scala/org/apache/spark/util/RpcUtils.scala#L33>
is being set to the machine's raw IPv6 address, which fails the
host.indexOf(':') == -1 check
<https://github.com/apache/spark/blob/branch-1.4/core/src/main/scala/org/apache/spark/util/Utils.scala#L882>
since an IPv6 literal always contains ':'. Try setting spark.driver.host to
dispark001.ash3 on whichever machine you are running the code from.
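
For reference, the check that is failing is essentially a one-line assertion
in Utils.checkHost (simplified here from branch-1.4; no raw IPv6 literal can
ever pass it):

    // org.apache.spark.util.Utils, branch-1.4 (simplified)
    def checkHost(host: String, message: String = "") {
      // "Expected hostname": trips on e.g. "2401:db00:2030:709b:face:0:9:0"
      assert(host.indexOf(':') == -1, message)
    }

And if you build the context yourself instead of going through run-example,
here's a sketch of pinning the driver host (assuming dispark001.ash3 resolves
on that machine):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("SparkPi")
      .set("spark.driver.host", "dispark001.ash3") // hostname, no ':' to trip the check
    val sc = new SparkContext(conf)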




Thanks
Best Regards

On Wed, Jun 17, 2015 at 10:57 AM, Kevin Liu <kevinliu@fb.com> wrote:

>  Thanks Akhil, it seems like what you suggested should have fixed it.
>
>  1) There was a known issue directly related to the errors below, but it
> has been fixed in 1.4: https://issues.apache.org/jira/browse/SPARK-6440
> (see the quick URL check right after this list)
> 2) Now, with 1.4, I see the following errors even after setting
> SPARK_MASTER_IP - your description seems to be right on; any more thoughts?
>
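> For what it's worth, the 1.3.1 failure was just java.net.URL rejecting an
> unbracketed IPv6 literal (RFC 2732 requires brackets around the address).
> A quick check in the Scala REPL, with a made-up address, shows the
> difference:
>
>   new java.net.URL("http://2401:db00::1:50916/jars/x.jar")   // MalformedURLException
>   new java.net.URL("http://[2401:db00::1]:50916/jars/x.jar") // parses fine
>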
>   [root@dispark001.ash3 ~/spark-1.4.0-bin-hadoop2.6]# ping6
> dispark001.ash3
>
> PING dispark001.ash3(dispark001.ash3.facebook.com) 56 data bytes
>
> 64 bytes from dispark001.ash3.facebook.com: icmp_seq=1 ttl=64 time=0.012
> ms
>
> 64 bytes from dispark001.ash3.facebook.com: icmp_seq=2 ttl=64 time=0.021
> ms
>
> 64 bytes from dispark001.ash3.facebook.com: icmp_seq=3 ttl=64 time=0.011
> ms
>
> ^C
>
> --- dispark001.ash3 ping statistics ---
>
> 3 packets transmitted, 3 received, 0% packet loss, time 2113ms
>
> rtt min/avg/max/mdev = 0.011/0.014/0.021/0.006 ms
>
> [root@dispark001.ash3 ~/spark-1.4.0-bin-hadoop2.6]# export
> SPARK_MASTER_IP="dispark001.ash3"
>
> [root@dispark001.ash3 ~/spark-1.4.0-bin-hadoop2.6]# ./bin/run-example
> SparkPi 10
>
> Using Spark's default log4j profile:
> org/apache/spark/log4j-defaults.properties
>
> 15/06/16 22:25:13 INFO SparkContext: Running Spark version 1.4.0
>
> 15/06/16 22:25:13 WARN NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
>
> 15/06/16 22:25:13 INFO SecurityManager: Changing view acls to: root
>
> 15/06/16 22:25:13 INFO SecurityManager: Changing modify acls to: root
>
> 15/06/16 22:25:13 INFO SecurityManager: SecurityManager: authentication
> disabled; ui acls disabled; users with view permissions: Set(root); users
> with modify permissions: Set(root)
>
> 15/06/16 22:25:13 INFO Slf4jLogger: Slf4jLogger started
>
> 15/06/16 22:25:13 INFO Remoting: Starting remoting
>
> 15/06/16 22:25:13 INFO Remoting: Remoting started; listening on addresses
> :[akka.tcp://sparkDriver@2401:db00:2030:709b:face:0:9:0:50916]
>
> 15/06/16 22:25:13 INFO Utils: Successfully started service 'sparkDriver'
> on port 50916.
>
> 15/06/16 22:25:13 INFO SparkEnv: Registering MapOutputTracker
>
> 15/06/16 22:25:13 INFO SparkEnv: Registering BlockManagerMaster
>
> 15/06/16 22:25:13 INFO DiskBlockManager: Created local directory at
> /tmp/spark-325ae594-5b00-42e2-bd47-70c92c4e5f66/blockmgr-4bcbf896-03ad-4b11-8db5-a6eaa5f0222b
>
> 15/06/16 22:25:13 INFO MemoryStore: MemoryStore started with capacity
> 265.1 MB
>
> 15/06/16 22:25:13 INFO HttpFileServer: HTTP File server directory is
> /tmp/spark-325ae594-5b00-42e2-bd47-70c92c4e5f66/httpd-2b6a5162-0686-4cc5-accb-1bb66fddf705
>
> 15/06/16 22:25:13 INFO HttpServer: Starting HTTP Server
>
> 15/06/16 22:25:13 INFO Utils: Successfully started service 'HTTP file
> server' on port 35895.
>
> 15/06/16 22:25:13 INFO SparkEnv: Registering OutputCommitCoordinator
>
> 15/06/16 22:25:13 INFO Utils: Successfully started service 'SparkUI' on
> port 4040.
>
> 15/06/16 22:25:13 INFO SparkUI: Started SparkUI at http://[2401:db00:2030:709b:face:0:9:0]:4040
>
> 15/06/16 22:25:14 INFO SparkContext: Added JAR
> file:/root/spark-1.4.0-bin-hadoop2.6/lib/spark-examples-1.4.0-hadoop2.6.0.jar
> at http://[2401:db00:2030:709b:face:0:9:0]:35895/jars/spark-examples-1.4.0-hadoop2.6.0.jar
> with timestamp 1434518714122
>
> 15/06/16 22:25:14 INFO Executor: Starting executor ID driver on host
> localhost
>
> 15/06/16 22:25:14 ERROR SparkContext: Error initializing SparkContext.
>
> java.lang.AssertionError: assertion failed: Expected hostname
>
> at scala.Predef$.assert(Predef.scala:179)
>
> at org.apache.spark.util.Utils$.checkHost(Utils.scala:882)
>
> at org.apache.spark.util.RpcUtils$.makeDriverRef(RpcUtils.scala:35)
>
> at org.apache.spark.executor.Executor.<init>(Executor.scala:413)
>
> at
> org.apache.spark.scheduler.local.LocalEndpoint.<init>(LocalBackend.scala:53)
>
> at
> org.apache.spark.scheduler.local.LocalBackend.start(LocalBackend.scala:103)
>
> at
> org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
>
> at org.apache.spark.SparkContext.<init>(SparkContext.scala:497)
>
> at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:28)
>
> at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
>
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
> at java.lang.reflect.Method.invoke(Method.java:483)
>
> at
> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
>
> at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
>
> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
>
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
>
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>
> 15/06/16 22:25:14 INFO SparkUI: Stopped Spark web UI at http://[2401:db00:2030:709b:face:0:9:0]:4040
>
> 15/06/16 22:25:14 INFO DAGScheduler: Stopping DAGScheduler
>
> 15/06/16 22:25:14 ERROR SparkContext: Error stopping SparkContext after
> init error.
>
> java.lang.NullPointerException
>
> at
> org.apache.spark.scheduler.local.LocalBackend.stop(LocalBackend.scala:107)
>
> at
> org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:416)
>
> at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1404)
>
> at org.apache.spark.SparkContext.stop(SparkContext.scala:1642)
>
> at org.apache.spark.SparkContext.<init>(SparkContext.scala:565)
>
> at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:28)
>
> at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
>
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
> at java.lang.reflect.Method.invoke(Method.java:483)
>
> at
> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
>
> at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
>
> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
>
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
>
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>
> Exception in thread "main" java.lang.AssertionError: assertion failed:
> Expected hostname
>
> at scala.Predef$.assert(Predef.scala:179)
>
> at org.apache.spark.util.Utils$.checkHost(Utils.scala:882)
>
> at org.apache.spark.util.RpcUtils$.makeDriverRef(RpcUtils.scala:35)
>
> at org.apache.spark.executor.Executor.<init>(Executor.scala:413)
>
> at
> org.apache.spark.scheduler.local.LocalEndpoint.<init>(LocalBackend.scala:53)
>
> at
> org.apache.spark.scheduler.local.LocalBackend.start(LocalBackend.scala:103)
>
> at
> org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
>
> at org.apache.spark.SparkContext.<init>(SparkContext.scala:497)
>
> at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:28)
>
> at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
>
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
> at java.lang.reflect.Method.invoke(Method.java:483)
>
> at
> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
>
> at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
>
> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
>
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
>
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>
> 15/06/16 22:25:14 INFO DiskBlockManager: Shutdown hook called
>
> 15/06/16 22:25:14 INFO Utils: path =
> /tmp/spark-325ae594-5b00-42e2-bd47-70c92c4e5f66/blockmgr-4bcbf896-03ad-4b11-8db5-a6eaa5f0222b,
> already present as root for deletion.
>
> 15/06/16 22:25:14 INFO Utils: Shutdown hook called
>
> 15/06/16 22:25:14 INFO Utils: Deleting directory
> /tmp/spark-325ae594-5b00-42e2-bd47-70c92c4e5f66
>
> [root@dispark001.ash3 ~/spark-1.4.0-bin-hadoop2.6]#
>
>
>   From: Akhil Das <akhil@sigmoidanalytics.com>
> Date: Monday, May 25, 2015 at 9:33 AM
> To: Kevin Liu <kevinliu@fb.com>
> Cc: "user@spark.apache.org" <user@spark.apache.org>
> Subject: Re: IPv6 support
>
>    Hi Kevin,
>
>  Did you try adding a host name for the IPv6 address? I have a few IPv6
> boxes; Spark failed for me when I used just the IPv6 addresses, but it
> works fine when I use host names.
>
>  Here's an entry in my /etc/hosts:
>
>  2607:5300:0100:0200:0000:0000:0000:0a4d hacked.work
>
>
>  My spark-env.sh file:
>
>  export SPARK_MASTER_IP="hacked.work"
>
>
>  Here's the master listening on my v6:
>
>  [inline screenshot: master listening on the IPv6 address]
>
>
>  The Master UI with running spark-shell:
>
>  [inline screenshot: Master UI with a running spark-shell]
>
>
>  I even ran a simple sc.parallelize(1 to 100).collect().
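>
>  As a sketch (not my exact session), in spark-shell that's:
>
>    scala> val xs = sc.parallelize(1 to 100)
>    scala> xs.collect()   // should come back as Array(1, 2, 3, ..., 100)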
>
>
>
>  Thanks
> Best Regards
>
> On Wed, May 20, 2015 at 11:09 PM, Kevin Liu <kevinliu@fb.com> wrote:
>
>> Hello, I have to work with IPv6-only servers, and when I installed the
>> 1.3.1 hadoop 2.6 build I couldn't get the example to run due to IPv6
>> issues (errors below). I tried to add the
>> -Djava.net.preferIPv6Addresses=true setting but it still doesn't work. A
>> search on Spark's support for IPv6 is inconclusive. Can someone help
>> clarify the current status of IPv6 support?
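>>
>> (For what it's worth, one way to pass that flag -- my exact invocation
>> may have differed, so treat this as a sketch:
>>
>>   ./bin/spark-submit \
>>     --driver-java-options "-Djava.net.preferIPv6Addresses=true" \
>>     --class org.apache.spark.examples.SparkPi \
>>     lib/spark-examples-1.3.1-hadoop2.6.0.jar 10 )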
>>
>> Thanks
>> Kevin
>>
>>
>> -- errors --
>>
>> 15/05/20 10:17:30 INFO Executor: Fetching
>> http://2401:db00:2030:709b:face:0:9:0:51453/jars/spark-examples-1.3.1-hadoop2.6.0.jar
>> with timestamp 1432142250197
>> 15/05/20 10:17:30 INFO Executor: Fetching
>> http://2401:db00:2030:709b:face:0:9:0:51453/jars/spark-examples-1.3.1-hadoop2.6.0.jar
>> with timestamp 1432142250197
>> 15/05/20 10:17:30 ERROR Executor: Exception in task 5.0 in stage 0.0 (TID 5)
>> java.net.MalformedURLException: For input string: "db00:2030:709b:face:0:9:0:51453"
>>         at java.net.URL.<init>(URL.java:620)
>>         at java.net.URL.<init>(URL.java:483)
>>         at java.net.URL.<init>(URL.java:432)
>>         at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:603)
>>         at org.apache.spark.util.Utils$.fetchFile(Utils.scala:431)
>>         at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:374)
>>         at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:366)
>>         at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
>>         at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>>         at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>>         at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
>>         at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
>>         at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
>>         at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
>>         at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:366)
>>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:184)
>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>         at java.lang.Thread.run(Thread.java:745)
>> Caused by: java.lang.NumberFormatException: For input string: "db00:2030:709b:face:0:9:0:51453"
>>         at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>>         at java.lang.Integer.parseInt(Integer.java:580)
>>         at java.lang.Integer.parseInt(Integer.java:615)
>>         at java.net.URLStreamHandler.parseURL(URLStreamHandler.java:216)
>>         at java.net.URL.<init>(URL.java:615)
>>         ... 18 more
>>
>>
>>
>>
>>
>>
