ignite-user mailing list archives

From mehdi sey <seydali.me...@gmail.com>
Subject IGFS as cache for HDFS runs on Apache Ignite Hadoop Accelerator but not on Apache Ignite 2.6
Date Thu, 14 Mar 2019 11:12:18 GMT
I want to execute a Hadoop wordcount example over Apache Ignite. I have used IGFS as a cache for HDFS in my Ignite configuration, but after submitting the job via Hadoop for execution on Ignite I ran into the error below. Thanks in advance to anyone who can help! Note that I can run IGFS as a cache for HDFS on Apache Ignite Hadoop Accelerator version 2.6.

Using configuration: examples/config/filesystem/example-igfs-hdfs.xml
[00:47:13]    __________  ________________
[00:47:13]   /  _/ ___/ |/ /  _/_  __/ __/
[00:47:13]  _/ // (7 7    // /  / / / _/
[00:47:13] /___/\___/_/|_/___/ /_/ /___/
[00:47:13]
[00:47:13] ver. 2.6.0#20180710-sha1:669feacc
[00:47:13] 2018 Copyright(C) Apache Software Foundation
[00:47:13]
[00:47:13] Ignite documentation: http://ignite.apache.org
[00:47:13]
[00:47:13] Quiet mode.
[00:47:13]   ^-- Logging to file '/usr/local/apache-ignite-fabric-2.6.0-bin/work/log/ignite-f3712946.log'
[00:47:13]   ^-- Logging by 'Log4JLogger [quiet=true, config=/usr/local/apache-ignite-fabric-2.6.0-bin/config/ignite-log4j.xml]'
[00:47:13]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false or "-v" to ignite.{sh|bat}
[00:47:13]
[00:47:13] OS: Linux 4.15.0-46-generic amd64
[00:47:13] VM information: Java(TM) SE Runtime Environment 1.8.0_192-ea-b04 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.192-b04
[00:47:13] Configured plugins:
[00:47:13]   ^-- Ignite Native I/O Plugin [Direct I/O]
[00:47:13]   ^-- Copyright(C) Apache Software Foundation
[00:47:13]
[00:47:13] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0]]
[00:47:22] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[00:47:22] Security status [authentication=off, tls/ssl=off]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/slf4j-nop-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/htrace%20dependency/slf4j-nop-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-rest-http/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-yarn/ignite-yarn-2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/apache-ignite-fabric-2.6.0-bin/libs/ignite-zookeeper/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.helpers.NOPLoggerFactory]
[00:47:23] HADOOP_HOME is set to /usr/local/hadoop
[00:47:23] Resolved Hadoop classpath locations: /usr/local/hadoop/share/hadoop/common, /usr/local/hadoop/share/hadoop/hdfs, /usr/local/hadoop/share/hadoop/mapreduce
[00:47:26] Performance suggestions for grid (fix if possible)
[00:47:26] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[00:47:26]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM options)
[00:47:26]   ^-- Set max direct memory size if getting 'OOME: Direct buffer memory' (add '-XX:MaxDirectMemorySize=[g|G|m|M|k|K]' to JVM options)
[00:47:26]   ^-- Disable processing of calls to System.gc() (add '-XX:+DisableExplicitGC' to JVM options)
[00:47:26]   ^-- Enable ATOMIC mode if not using transactions (set 'atomicityMode' to ATOMIC)
[00:47:26]   ^-- Disable fully synchronous writes (set 'writeSynchronizationMode' to PRIMARY_SYNC or FULL_ASYNC)
[00:47:26] Refer to this page for more performance suggestions: https://apacheignite.readme.io/docs/jvm-and-system-tuning
[00:47:26]
[00:47:26] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[00:47:26]
[00:47:26] Ignite node started OK (id=f3712946)
[00:47:26] Topology snapshot [ver=1, servers=1, clients=0, CPUs=8, offheap=1.6GB, heap=1.0GB]
[00:47:26]   ^-- Node [id=F3712946-0810-440F-A440-140FE4AB6FA7, clusterState=ACTIVE]
[00:47:26] Data Regions Configured:
[00:47:27]   ^-- default [initSize=256.0 MiB, maxSize=1.6 GiB, persistenceEnabled=false]
[00:47:35] New version is available at ignite.apache.org: 2.7.0
[2019-03-13 00:47:46,978][ERROR][igfs-igfs-ipc-#53][IgfsImpl] File info operation in DUAL mode failed [path=/output]
class org.apache.ignite.IgniteException: For input string: "30s"
	at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:100)
	at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate.getWithMappedName(HadoopCachingFileSystemFactoryDelegate.java:53)
	at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.get(HadoopBasicFileSystemFactoryDelegate.java:75)
	at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.get(HadoopBasicFileSystemFactoryDelegate.java:43)
	at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopIgfsSecondaryFileSystemDelegateImpl.fileSystemForUser(HadoopIgfsSecondaryFileSystemDelegateImpl.java:517)
	at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopIgfsSecondaryFileSystemDelegateImpl.info(HadoopIgfsSecondaryFileSystemDelegateImpl.java:296)
	at org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem.info(IgniteHadoopIgfsSecondaryFileSystem.java:240)
	at org.apache.ignite.internal.processors.igfs.IgfsImpl.resolveFileInfo(IgfsImpl.java:1600)
	at org.apache.ignite.internal.processors.igfs.IgfsImpl.access$800(IgfsImpl.java:110)
	at org.apache.ignite.internal.processors.igfs.IgfsImpl$6.call(IgfsImpl.java:524)
	at org.apache.ignite.internal.processors.igfs.IgfsImpl$6.call(IgfsImpl.java:517)
	at org.apache.ignite.internal.processors.igfs.IgfsImpl.safeOp(IgfsImpl.java:1756)
	at org.apache.ignite.internal.processors.igfs.IgfsImpl.info(IgfsImpl.java:517)
	at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$2.apply(IgfsIpcHandler.java:341)
	at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$2.apply(IgfsIpcHandler.java:332)
	at org.apache.ignite.igfs.IgfsUserContext.doAs(IgfsUserContext.java:54)
	at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.processPathControlRequest(IgfsIpcHandler.java:332)
	at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.execute(IgfsIpcHandler.java:241)
	at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler.access$000(IgfsIpcHandler.java:57)
	at org.apache.ignite.internal.processors.igfs.IgfsIpcHandler$1.run(IgfsIpcHandler.java:167)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.IgniteCheckedException: For input string: "30s"
	at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7307)
	at org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:259)
	at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:171)
	at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
	at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.getValue(HadoopLazyConcurrentMap.java:191)
	at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:93)
	... 22 more
Caused by: java.lang.NumberFormatException: For input string: "30s"
	at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
	at java.lang.Long.parseLong(Long.java:589)
	at java.lang.Long.parseLong(Long.java:631)
	at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:1538)
	at org.apache.hadoop.hdfs.DFSClient$Conf.<init>(DFSClient.java:430)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:540)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:524)
	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476)
	at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:217)
	at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:214)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:214)
	at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.create(HadoopBasicFileSystemFactoryDelegate.java:117)
	at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.getWithMappedName(HadoopBasicFileSystemFactoryDelegate.java:95)
	at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate.access$001(HadoopCachingFileSystemFactoryDelegate.java:32)
	at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate$1.createValue(HadoopCachingFileSystemFactoryDelegate.java:37)
	at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopCachingFileSystemFactoryDelegate$1.createValue(HadoopCachingFileSystemFactoryDelegate.java:35)
	at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.init(HadoopLazyConcurrentMap.java:173)
	at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.access$100(HadoopLazyConcurrentMap.java:154)
	at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:82)
	... 22 more

To run the Hadoop wordcount example I created a folder in HDFS named /user/input/, put a text file in it, and executed wordcount with the following command:

time hadoop --config /home/mehdi/ignite-conf/ignite-configs-master/igfs-hadoop-fs-cache/ignite_conf jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar wordcount /user/input/ /output

I used the configuration below for running the wordcount example of Hadoop over Ignite. I have a folder named ignite_conf containing two files, core-site.xml and mapred-site.xml, with the attached content.

core-site.xml <http://apache-ignite-users.70518.x6.nabble.com/file/t2160/core-site.xml>
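
My core-site.xml follows the usual IGFS-over-Hadoop pattern, roughly like the sketch below. The filesystem implementation class names are the ones documented for the Ignite Hadoop Accelerator; the igfs://igfs@localhost authority is only illustrative here, please see the attachment for my actual values:

<configuration>
  <!-- Route the default filesystem through IGFS instead of HDFS directly. -->
  <property>
    <name>fs.default.name</name>
    <value>igfs://igfs@localhost</value>
  </property>
  <!-- Old (FileSystem) API implementation for the igfs:// scheme. -->
  <property>
    <name>fs.igfs.impl</name>
    <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
  </property>
  <!-- New (AbstractFileSystem) API implementation for the igfs:// scheme. -->
  <property>
    <name>fs.AbstractFileSystem.igfs.impl</name>
    <value>org.apache.ignite.hadoop.fs.v2.IgniteHadoopFileSystem</value>
  </property>
</configuration>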
Is it necessary to run IGFS as a cache for HDFS only on the Apache Ignite Hadoop Accelerator build, or can we also use plain Apache Ignite? Does anybody know?
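
The bottom "Caused by" in the trace is just a bare number parse failing: as far as I can tell, the older Hadoop client code bundled with the accelerator reads timeout properties via Configuration.getLong(), which ends in a plain Long.parseLong() call, while newer Hadoop versions ship defaults with time-unit suffixes such as "30s". A tiny JDK-only snippet (no Hadoop needed) reproduces the exact parse failure from the trace:

```java
public class ParseDemo {
    // Mirrors the bare Long.parseLong() that Configuration.getLong()
    // ultimately performs on the raw property value. A suffixed value
    // like "30s" is not a valid long, so the parse throws
    // NumberFormatException with the message: For input string: "30s"
    static Long tryParseLong(String raw) {
        try {
            return Long.parseLong(raw.trim());
        } catch (NumberFormatException e) {
            return null; // the trace shows this exception propagating instead
        }
    }

    public static void main(String[] args) {
        System.out.println(tryParseLong("30"));   // prints 30
        System.out.println(tryParseLong("30s"));  // prints null
    }
}
```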



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/