From: Casey Stella
Date: Mon, 16 Jan 2017 19:11:59 -0500
Subject: Re: [DISCUSS] Moving GeoIP management away from MySQL
To: "dev@metron.incubator.apache.org"

I'd recommend storing the MM data location in HDFS in the global config. When the config property changes, then you know you need to re-read the database from HDFS. This would keep you from re-reading frequently.
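As a rough sketch of that reload-on-change idea (the property name "geo.hdfs.file", the class name, and the loader below are illustrative placeholders, not Metron's actual config keys or code), a bolt could re-read the database only when the global-config value changes:

    import java.util.Map;
    import java.util.Objects;

    // Sketch only: re-read the geo database when the global-config property changes.
    // "geo.hdfs.file" is an illustrative property name, not Metron's actual key.
    public class GeoDatabaseHolder {
      private volatile String loadedPath;   // HDFS path currently loaded
      private volatile Object database;     // placeholder for whichever reader is chosen

      public synchronized void update(Map<String, Object> globalConfig) {
        String newPath = (String) globalConfig.get("geo.hdfs.file");
        if (newPath != null && !Objects.equals(newPath, loadedPath)) {
          database = loadFromHdfs(newPath);  // only re-read when the path actually changes
          loadedPath = newPath;
        }
      }

      private Object loadFromHdfs(String hdfsPath) {
        // Copy the file locally and open it with the chosen reader
        // (MaxMind DatabaseReader, MapDB, etc.); omitted here.
        return new Object();
      }
    }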
On Mon, Jan 16, 2017 at 18:45, Matt Foley wrote:

I agree too. I confirmed the GeoIP2 Java API is ASF2.0 licensed, as you all no doubt knew already.

Just a couple of comments and a question:

First, note that storing data in HDFS, while it avoids the deployment question, also induces a network hop to read it. Presumably that only happens once per update per geo bolt instance, but how do you avoid re-reading it frequently, to make sure you see updates?

Second, I just want to comment that there is not a single point of failure for an enterprise db that has been properly set up for HA. Granted, that's neither here nor there if we don't need a db, but it isn't a valid argument against using a db. :-)

Thanks,
--Matt

On 1/16/17, 1:36 PM, "Michael Miklavcic" wrote:

I'm also in agreement on this.

On Mon, Jan 16, 2017 at 2:11 PM, Nick Allen wrote:

+1 to using the Java API with the MMDB file provided by Maxmind. This is what I had thought we were doing when we discussed this a few months back. I'd rather use the Maxmind tools as-provided instead of engineering something on top of it.

--
Nick Allen

On Mon, Jan 16, 2017 at 3:59 PM, JJ Meyer wrote:

Matt, I agree with your points on why we shouldn't get rid of the database just for the sake of getting rid of a database. But IMO, I think we may be reinventing the wheel a little bit by even putting the maxmind data into MySQL. Right now we are already downloading a maxmind file. To me it seems simpler to push the file to HDFS where we can pick it up and have the maxmind client use that, instead of importing data into a DB and then running a query. Also, I believe the data gets updated weekly, so syncing may become easier too.

James, I believe it works with the paid and free versions of geoip. I know NiFi uses this client library in their Geo enrichment processor.

Also, if it is decided that using a SQL database is still the best solution, I think there is a benefit to using their library. We would just have to implement a `DatabaseProvider` that hits some SQL db instead of using their standard implementation.

Thanks,
JJ
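For concreteness, a minimal sketch of reading the MMDB file with the GeoIP2 Java client; it assumes the .mmdb has already been copied out of HDFS to the worker's local filesystem, and the file name and IP address are placeholders:

    import java.io.File;
    import java.net.InetAddress;

    import com.maxmind.db.CHMCache;
    import com.maxmind.geoip2.DatabaseReader;
    import com.maxmind.geoip2.model.CityResponse;

    public class MaxmindLookupExample {
      public static void main(String[] args) throws Exception {
        // Assumes GeoLite2-City.mmdb was already copied out of HDFS to the local working dir.
        File mmdb = new File("GeoLite2-City.mmdb");

        DatabaseReader reader = new DatabaseReader.Builder(mmdb)
            .withCache(new CHMCache())   // in-memory node cache shipped with the library
            .build();

        CityResponse response = reader.city(InetAddress.getByName("128.101.101.101"));
        System.out.println(response.getCountry().getIsoCode());   // e.g. "US"
        System.out.println(response.getCity().getName());
        System.out.println(response.getLocation().getLatitude() + ","
            + response.getLocation().getLongitude());

        reader.close();
      }
    }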
On Mon, Jan 16, 2017 at 2:27 PM, James Sirota wrote:

Hi Guys, I just wanted to clarify one point that I think is lost in this thread. Geo enrichment is NOT a key-value enrichment. It requires a range scan and a join (which is why it's implemented via mySql and not Hbase). To account for this access pattern via a key-value store you would inevitably have to do something funky, or in the case of Hbase I don't think there is a way to avoid doing a range scan.

With respect to mapdb, it only has support for Maps, Sets, Lists, and Queues. Are we sure it provides enough functionality for us to do this enrichment?

With respect to the Maxmind client, are we sure we can use it on the mySql-backed version of their DB? I thought the Maxmind database itself is proprietary and is something you have to pay for. My understanding is that the client is designed for that proprietary version.

I somewhat agree with Matt's point. If mySql is a problem because of licensing, the path of least resistance to remove mySql dependencies would be to simply switch to postgresql. We will always have conventional sql databases in our stack because other big data tools use them. Why not take advantage of them too?

Thanks,
James

-------------------
Thank you,
James Sirota
PPMC- Apache Metron (Incubating)
jsirota AT apache DOT org
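To make the range-scan-plus-join pattern James describes concrete, the MySQL-backed lookup boils down to something like the sketch below; the JDBC URL, table, and column names are illustrative, not the actual Metron schema:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class GeoRangeQueryExample {
      public static void main(String[] args) throws Exception {
        // Illustrative schema: blocks(start_ip, end_ip, loc_id), locations(loc_id, city, country).
        // The lookup is a range predicate plus a join, not a simple key/value get.
        String sql =
            "SELECT l.city, l.country "
          + "FROM blocks b JOIN locations l ON b.loc_id = l.loc_id "
          + "WHERE ? BETWEEN b.start_ip AND b.end_ip";

        try (Connection conn = DriverManager.getConnection("jdbc:postgresql://db-host/geo");
             PreparedStatement ps = conn.prepareStatement(sql)) {
          ps.setLong(1, ipToLong("128.101.101.101"));
          try (ResultSet rs = ps.executeQuery()) {
            if (rs.next()) {
              System.out.println(rs.getString("city") + ", " + rs.getString("country"));
            }
          }
        }
      }

      // Dotted-quad IPv4 to the integer form the range columns use.
      private static long ipToLong(String ip) {
        long result = 0;
        for (String octet : ip.split("\\.")) {
          result = (result << 8) | Integer.parseInt(octet);
        }
        return result;
      }
    }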
16.01.2017, 12:27, "Matt Foley":

Hi Justin, and team,

Several components of the Hadoop Stack utilize a SQL database, usually for metadata of some sort. Ambari knows this and arranges for them to share a single database installation (on or off the cluster), unless they explicitly configure use of different databases (which is allowed for sites that desire it). Ambari defaults to using PostgreSQL, although it's happy to use MySQL, Oracle, or Microsoft, along with whatever each component historically defined as its default (such as Derby).

If we want to start with a replacement of current functionality, I would suggest switching the default database to PostgreSQL. Replacing fast, efficient, and proven db services with a file-based api library (but no standard way to propagate the underlying storage files) seems to me to be taking a step backwards.

Sticking with a SQL-based service will surely minimize the amount of code changes needed. And making the SQL either dialect-independent or capable of switching among dialects then enables us to do what the rest of the Hadoop stack does: allow enterprise customers to substitute Oracle or Microsoft enterprise-class databases where they wish. Regarding the drivers, we should study what the other Stack components do; I'm not an expert in those areas.

Using the same db as the rest of the stack also means administrators can be confident they've set up adequate backup and recovery processes. All these are valuable reasons not to roll our own storage system for this enrichment data. IMO, of course.

Cheers,
--Matt

On 1/16/17, 9:52 AM, "Kyle Richardson" <kylerichardson2@gmail.com> wrote:

+1 Agree with David's order

-Kyle

On Mon, Jan 16, 2017 at 12:41 PM, David Lyle <dlyle65535@gmail.com> wrote:

Def agree on the parity point.

I'm a little worried about Supervisor relocations for non-HBase solutions, but having much of the work done for us by MaxMind changes my preference to (in order):

1) MM API
2) HBase Enrichment
3) MapDB, should the others prove not feasible

-D...

On Mon, Jan 16, 2017 at 12:15 PM, Justin Leet <justinjleet@gmail.com> wrote:

I definitely agree on checking out the MaxMind API. I'll take a look at it, but at first glance it looks like it does include everything we use. Great find, JJ.

More details on various people's points:

As a note to anyone hopping in, Simon's point on the range lookup vs a key lookup is why it becomes a Scan in HBase vs a Get. As an addendum to what Simon mentioned, denormalizing is easy enough and turns it into an easy range lookup.

To David's point, MapDB does require a network hop, but it's once per refresh of the data (got a relevant callback? grab new data, load it, swap it out) instead of (up to) once per message. I would expect the same to be true of the MaxMind db files.

I'd also argue MapDB is not really more complex than refreshing the HBase table, because we potentially have to start worrying about things like hashing and/or indices and even just general data representation. It's definitely correct that the file processing has to occur on either path, so it really boils down to handling the callback and reloading the file vs handling some of the standard HBasey things. I don't think either is an enormous amount of work (and both are almost certainly more work than MaxMind's API).

Regarding extensibility, I'd argue for parity with what we have first, then build what we need from there. Does anybody have any disagreement with that approach for right now?

Justin
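A small sketch of the denormalization Justin mentions: if the blocks and locations are joined offline into a single sorted map keyed by range start, the lookup becomes one floor search plus a bounds check. The rows below are made-up illustrations:

    import java.util.Map;
    import java.util.TreeMap;

    public class DenormalizeExample {

      // Everything the enrichment needs lives in the value, so no join at lookup time.
      static class GeoEntry {
        final long endIp;
        final String city;
        final String country;
        GeoEntry(long endIp, String city, String country) {
          this.endIp = endIp;
          this.city = city;
          this.country = country;
        }
      }

      public static void main(String[] args) {
        // Made-up, already-joined rows: range start -> (range end, city, country).
        TreeMap<Long, GeoEntry> geo = new TreeMap<>();
        geo.put(16777216L, new GeoEntry(16777471L, "Brisbane", "AU"));
        geo.put(3486502400L, new GeoEntry(3486502655L, "Dallas", "US"));

        long ip = 16777300L;
        Map.Entry<Long, GeoEntry> hit = geo.floorEntry(ip);   // largest range start <= ip
        if (hit != null && ip <= hit.getValue().endIp) {
          System.out.println(hit.getValue().city + ", " + hit.getValue().country);
        }
      }
    }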
On Mon, Jan 16, 2017 at 12:04 PM, David Lyle <dlyle65535@gmail.com> wrote:

It is interesting - it saves us a ton of effort, and has the right license. I think it's worth at least checking out.

-D...

On Mon, Jan 16, 2017 at 12:00 PM, Simon Elliston Ball <simon@simonellistonball.com> wrote:

I like that approach even more. That way we would only have to worry about distributing the database file in binary format to all the supervisor nodes on update.

It would also make it easier for people to switch to the enterprise DB potentially, if they had the license.

One slight issue with this might be for people who wanted to extend the database. For example, organisations may want to add geo-enrichment to their own private network addresses based on modified versions of the geo database. Currently we don't really allow this, since we hard-code ignoring private network classes into the geo enrichment adapter, but I can see a case where a global org might want to add their own ranges and locations to the data set. Does that make sense to anyone else?

Simon

On 16 Jan 2017, at 16:50, JJ Meyer <jjmeyer0@gmail.com> wrote:

Hello all,

Can we leverage maxmind's Java client (https://github.com/maxmind/GeoIP2-java/tree/master/src/main/java/com/maxmind/geoip2) in this case? I believe it can directly read the maxmind file. Plus I think it also has some support for caching as well.

Thanks,
JJ

On Mon, Jan 16, 2017 at 10:32 AM, Simon Elliston Ball <simon@simonellistonball.com> wrote:

I like the idea of MapDB, since we can essentially pull an instance into each supervisor, so it makes a lot of sense for relatively small scale, relatively static enrichments in general.

Generally this feels like a caching problem, and would be for a simple key-value lookup. In that case I would agree with David Lyle on using HBase as a source of truth and relying on caching.

That said, GeoIP is a different lookup pattern, since it's a range lookup then a key lookup (or, if we denormalize the MaxMind data, just a range lookup). For that kind of thing, MapDB with something like the BTree seems a good fit.

Simon
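A minimal sketch of the range lookup Simon describes against a pre-built MapDB file, assuming the MapDB 3.x API; the map name, serializers, and value encoding are illustrative choices, not a settled design:

    import java.util.Map;

    import org.mapdb.BTreeMap;
    import org.mapdb.DB;
    import org.mapdb.DBMaker;
    import org.mapdb.Serializer;

    public class MapDbGeoLookupExample {
      public static void main(String[] args) {
        // Open a pre-built, read-only MapDB file that was shipped to the worker.
        DB db = DBMaker.fileDB("geo.mapdb")
            .readOnly()
            .make();

        // Keyed by the start of each IP range; the value packs range end plus location.
        BTreeMap<Long, String> geo = db.treeMap("geo", Serializer.LONG, Serializer.STRING)
            .open();

        long ip = 16777300L;
        Map.Entry<Long, String> hit = geo.floorEntry(ip);   // BTreeMap is a NavigableMap
        if (hit != null) {
          String[] parts = hit.getValue().split("\\|");     // e.g. "16777471|Brisbane|AU"
          if (ip <= Long.parseLong(parts[0])) {
            System.out.println(parts[1] + ", " + parts[2]);
          }
        }

        db.close();
      }
    }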
On 16 Jan 2017, at 16:28, David Lyle <dlyle65535@gmail.com> wrote:

I'm +1 on removing the MySQL dependency, BUT - I'd prefer to see it as an HBase enrichment. If our current caching isn't enough to mitigate the above issues, we have a problem, don't we? Or do we not recommend HBase enrichment for per-message enrichment in general?

Also - can you elaborate on how MapDB would not require a network hop? Doesn't this mean we would have to sync the enrichment data to each Storm supervisor? HDFS could (probably would) have a network hop too, no?

Fwiw -

"In its place, I've looked at using MapDB, which is a really easy to use library for creating Java collections backed by a file (This is NOT a separate installation of anything, it's just a jar that manages interaction with the file system). Given the slow churn of the GeoIP files (I believe they get updated once a week), we can have a script that can be run when needed, downloads the MaxMind tar file, builds the MapDB file that will be used by the bolts, and places it into HDFS. Finally, we update a config to point to the new file, the bolts get the updated config callback and can update their db files. Inside the code, we wrap the MapDB portions to make it transparent to downstream code."

Seems a bit more complex than "refresh the hbase table". Afaik, either approach would require some sort of translation between the GeoIP source format and the target format, so that part is a wash imo.

So, I'd really like to see, at least, an attempt to leverage HBase enrichment.

-D...
On Mon, Jan 16, 2017 at 11:02 AM, Casey Stella <cestella@gmail.com> wrote:

I think that it's a sensible thing to use MapDB for the geo enrichment. Let me state my reasoning:

- An HBase implementation would necessitate an HBase scan, possibly hitting HDFS, which is expensive per-message.
- An HBase implementation would necessitate a network hop, and MapDB would not.

I also think this might be the beginning of more general-purpose support in Stellar for locally shipped, read-only MapDB lookups, which might be interesting.

In short, all quotes about premature optimization are sure to apply to my reasoning, but I can't help but have my spidey senses tingle when we introduce a scan-per-message architecture.

Casey

On Mon, Jan 16, 2017 at 10:53 AM, Dima Kovalyov <Dima.Kovalyov@sstech.us> wrote:

Hello Justin,

Considering that Metron uses hbase tables for storing enrichment and threatintel feeds, can we use Hbase for geo enrichment as well? Or can MapDB be used for the enrichment and threatintel feeds instead of hbase?

- Dima

On 01/16/2017 04:17 PM, Justin Leet wrote:

Hi all,

As a bit of background, right now GeoIP data is loaded into and managed by MySQL (the connectors are LGPL licensed and we need to sever our Maven dependency on them before the next release). We currently depend on and install an instance of MySQL (in each of the Management Pack, Ansible, and Docker installs). In the topology, we use the JDBCAdapter to connect to MySQL and query for a given IP. Additionally, it's a single point of failure for that particular enrichment right now. If MySQL is down, geo enrichment can't occur.

I'm proposing that we eliminate the use of MySQL entirely, through all installation paths (which, unless I missed some, includes Ansible, the Ambari Management Pack, and Docker). We'd do this by dropping all the various MySQL setup and management through the code, along with all the DDL, etc. The JDBCAdapter would stay, so that anybody who wants to set up their own databases for enrichments and install connectors is able to do so.
In its place, I've looked at using MapDB, which is a really easy to use library for creating Java collections backed by a file (This is NOT a separate installation of anything, it's just a jar that manages interaction with the file system). Given the slow churn of the GeoIP files (I believe they get updated once a week), we can have a script that can be run when needed, downloads the MaxMind tar file, builds the MapDB file that will be used by the bolts, and places it into HDFS. Finally, we update a config to point to the new file, the bolts get the updated config callback and can update their db files. Inside the code, we wrap the MapDB portions to make it transparent to downstream code.
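The "place it into HDFS" step of such a script could use the Hadoop FileSystem API along these lines; the paths are hypothetical, and the MaxMind download and MapDB build steps are omitted:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PublishGeoFileExample {
      public static void main(String[] args) throws Exception {
        // Hypothetical paths; in practice these would be script arguments.
        Path local = new Path("/tmp/geo-20170116.mapdb");
        Path remote = new Path("/apps/metron/geo/geo-20170116.mapdb");

        Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);

        // Copy to a temporary name, then rename so readers never see a half-written file.
        Path tmp = new Path(remote.getParent(), remote.getName() + "._COPYING_");
        fs.copyFromLocalFile(false, true, local, tmp);
        if (!fs.rename(tmp, remote)) {
          throw new IllegalStateException("Could not publish " + remote);
        }

        // The global config would then be updated to point at the new path,
        // which is what triggers the config callback in the bolts.
        fs.close();
      }
    }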
The particularly nice parts about using MapDB are its ease of use, plus the fact that it offers, out of the box, the utilities we need to support the operations we need on this (keep in mind the GeoIP files use IP ranges and we need to be able to easily grab the appropriate range).

The main point of concern I have about this is that when we grab the HDFS file during an update, given that multiple JVMs can be running, we don't want them to clobber each other. I believe this can be avoided by simply using each worker's working directory to store the file (and appropriately ensuring threads on the same JVM manage multithreading). This should keep the JVMs (and the underlying DB files) entirely independent.

This script would get called by the various installations during startup to do the initial setup. After install, it can then be called on demand.

At this point, we should be all set, with everything running and updatable.

Justin