metron-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (METRON-744) Allow Stellar functions to be loaded from HDFS
Date Thu, 02 Mar 2017 21:00:49 GMT

    [ https://issues.apache.org/jira/browse/METRON-744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15893007#comment-15893007 ]

ASF GitHub Bot commented on METRON-744:
---------------------------------------

Github user dlyle65535 commented on the issue:

    https://github.com/apache/incubator-metron/pull/468
  
    Oh, hey, that stack trace looks familiar. I think Justin fixed this here:
    https://github.com/apache/incubator-metron/pull/461.
    
    -D...
    
    
    On Thu, Mar 2, 2017 at 2:08 PM, Michael Miklavcic <notifications@github.com>
    wrote:
    
    > +1 tested this in Vagrant quick-dev.
    >
    > @cestella <https://github.com/cestella> I'm good with waiting on the
    > default dir. I was also able to run this through e2e with your latest
    > version. I also ran into issues with vagrant up not loading the geo
    > enrichment. I get the following exception. (I'll add a Jira.)
    >
    > 2017-03-01 14:59:24.634 o.a.m.e.a.g.GeoLiteDatabase [ERROR] [Metron] Unable to open new database file /apps/metron/geo/default/GeoLite2-City.mmdb.gz
    > java.io.FileNotFoundException: File does not exist: /apps/metron/geo/default/GeoLite2-City.mmdb.gz
    >         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
    >         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
    >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1860)
    >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1831)
    >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1744)
    >         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:693)
    >         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    >         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    >         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
    >         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
    >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
    >         at java.security.AccessController.doPrivileged(Native Method)
    >         at javax.security.auth.Subject.doAs(Subject.java:422)
    >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    >         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
    >
    >         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_77]
    >         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_77]
    >         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_77]
    >         at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_77]
    >         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) ~[stormjar.jar:?]
    >         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) ~[stormjar.jar:?]
    >         at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1242) ~[stormjar.jar:?]
    >         at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1227) ~[stormjar.jar:?]
    >         at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1215) ~[stormjar.jar:?]
    >         at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:303) ~[stormjar.jar:?]
    >         at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:269) ~[stormjar.jar:?]
    >         at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:261) ~[stormjar.jar:?]
    >         at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1540) ~[stormjar.jar:?]
    >         at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304) ~[stormjar.jar:?]
    >         at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:299) ~[stormjar.jar:?]
    >         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[stormjar.jar:?]
    >         at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:312) ~[stormjar.jar:?]
    >         at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:767) ~[stormjar.jar:?]
    >         at org.apache.metron.enrichment.adapters.geo.GeoLiteDatabase.update(GeoLiteDatabase.java:93) [stormjar.jar:?]
    >         at org.apache.metron.enrichment.bolt.ThreatIntelJoinBolt.prepare(ThreatIntelJoinBolt.java:68) [stormjar.jar:?]
    >         at org.apache.metron.enrichment.bolt.JoinBolt.prepare(JoinBolt.java:87) [stormjar.jar:?]
    >         at org.apache.storm.daemon.executor$fn__6571$fn__6584.invoke(executor.clj:798) [storm-core-1.0.1.2.5.0.0-1245.jar:1.0.1.2.5.0.0-1245]
    >         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:482) [storm-core-1.0.1.2.5.0.0-1245.jar:1.0.1.2.5.0.0-1245]
    >         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
    >         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
    > Caused by: org.apache.hadoop.ipc.RemoteException: File does not exist: /apps/metron/geo/default/GeoLite2-City.mmdb.gz
    >         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
    >         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
    >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1860)
    >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1831)
    >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1744)
    >         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:693)
    >         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
    >         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    >         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
    >         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
    >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
    >         at java.security.AccessController.doPrivileged(Native Method)
    >         at javax.security.auth.Subject.doAs(Subject.java:422)
    >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    >         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
    >
    >         at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[stormjar.jar:?]
    >         at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[stormjar.jar:?]
    >         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[stormjar.jar:?]
    >         at com.sun.proxy.$Proxy46.getBlockLocations(Unknown Source) ~[?:?]
    >         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255) ~[stormjar.jar:?]
    >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_77]
    >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_77]
    >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_77]
    >         at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_77]
    >         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[stormjar.jar:?]
    >         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[stormjar.jar:?]
    >         at com.sun.proxy.$Proxy47.getBlockLocations(Unknown Source) ~[?:?]
    >         at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1240) ~[stormjar.jar:?]
    >         ... 18 more
    > 2017-03-01 14:59:24.636 o.a.s.util [ERROR] Async loop died!
    > java.lang.IllegalStateException: [Metron] Unable to update MaxMind database
    >         at org.apache.metron.enrichment.adapters.geo.GeoLiteDatabase.update(GeoLiteDatabase.java:107) ~[stormjar.jar:?]
    >         at org.apache.metron.enrichment.bolt.ThreatIntelJoinBolt.prepare(ThreatIntelJoinBolt.java:68) ~[stormjar.jar:?]
    >         at org.apache.metron.enrichment.bolt.JoinBolt.prepare(JoinBolt.java:87) ~[stormjar.jar:?]
    >         at org.apache.storm.daemon.executor$fn__6571$fn__6584.invoke(executor.clj:798) ~[storm-core-1.0.1.2.5.0.0-1245.jar:1.0.1.2.5.0.0-1245]
    >         at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:482) [storm-core-1.0.1.2.5.0.0-1245.jar:1.0.1.2.5.0.0-1245]
    >         at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
    >         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_77]
    >
    > —
    > You are receiving this because you were mentioned.
    > Reply to this email directly, view it on GitHub
    > <https://github.com/apache/incubator-metron/pull/468#issuecomment-283748490>.
    >



> Allow Stellar functions to be loaded from HDFS
> ----------------------------------------------
>
>                 Key: METRON-744
>                 URL: https://issues.apache.org/jira/browse/METRON-744
>             Project: Metron
>          Issue Type: New Feature
>            Reporter: Casey Stella
>
> The benefit of Stellar is that adding new functionality is as simple as providing a jar. This lets people who want to integrate with Metron easily add enrichments or other functionality. The current snag is that we ship a single jar, so every available Stellar function must either be a dependency of the main jar that drives the topology or live in one of the local directories configurable via the Storm configs. This makes adding third-party jars harder than it should be.
> Adjust the following to additionally load classes from a location in HDFS, /apps/metron/stellar, using something like Accumulo's classloader (https://accumulo.apache.org/blog/2014/05/03/accumulo-classloader.html):
> * Profiler topology
> * Parser topology
> * Enrichment topology
> * Enrichment Flat file loader
> * Enrichment MR loader
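> As a rough illustration of the idea (this is not Metron's actual implementation; the class and method names below are hypothetical), loading function jars from a shared location amounts to building a URLClassLoader over the jar URLs and resolving functions through it. In a real deployment the URLs would come from listing /apps/metron/stellar with Hadoop's FileSystem API; the sketch uses an empty URL list so it stays self-contained:

```java
import java.net.URL;
import java.net.URLClassLoader;

// Hypothetical sketch: in a real deployment the jar URLs would come from
// listing /apps/metron/stellar via Hadoop's FileSystem.listStatus(), and the
// resulting loader would be handed to the Stellar function resolver.
public class StellarJarLoaderSketch {

    // Build a classloader over the supplied jars. Standard parent-first
    // delegation applies, so topology classes still win on conflicts.
    public static ClassLoader forJars(URL[] jarUrls, ClassLoader parent) {
        return new URLClassLoader(jarUrls, parent);
    }

    public static void main(String[] args) throws Exception {
        // With no extra jars the loader simply delegates to its parent,
        // so a JDK class still resolves through it.
        ClassLoader cl = forJars(new URL[0], StellarJarLoaderSketch.class.getClassLoader());
        System.out.println(cl.loadClass("java.lang.String").getName());
    }
}
```

> One design point the Accumulo approach adds on top of this: it monitors the source location and reloads when jars change, so an implementation would need to decide whether hot reload is worth that extra complexity.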



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
