storm-commits mailing list archives

From bo...@apache.org
Subject [1/2] storm git commit: Fix minor typos in storm-hdfs and storm-hbase docs that show usage of configKeys
Date Mon, 15 May 2017 18:22:18 GMT
Repository: storm
Updated Branches:
  refs/heads/master e3c7ff1dd -> 7742398d0


Fix minor typos in storm-hdfs and storm-hbase docs that show usage of configKeys

Also make sure docs/storm-hdfs.md and docs/storm-hbase.md are in sync with external/storm-hdfs/README.md and external/storm-hbase/README.md.


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/a5005669
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/a5005669
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/a5005669

Branch: refs/heads/master
Commit: a50056691e02109a52c665aadf06f0c1135ee0ad
Parents: 9755ff5
Author: Arun Mahadevan <arunm@apache.org>
Authored: Fri May 12 14:27:12 2017 +0530
Committer: Arun Mahadevan <arunm@apache.org>
Committed: Fri May 12 14:44:12 2017 +0530

----------------------------------------------------------------------
 docs/storm-hbase.md            |  8 +++----
 docs/storm-hdfs.md             |  8 +++----
 external/storm-hbase/README.md | 37 ++++++++++++++++++++++++-------
 external/storm-hdfs/README.md  | 43 +++++++++++++++++++++++++++----------
 4 files changed, 69 insertions(+), 27 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/a5005669/docs/storm-hbase.md
----------------------------------------------------------------------
diff --git a/docs/storm-hbase.md b/docs/storm-hbase.md
index ec88054..1584d85 100644
--- a/docs/storm-hbase.md
+++ b/docs/storm-hbase.md
@@ -79,15 +79,15 @@ As an alternative to adding the configuration files (core-site.xml, hdfs-site.xm
 
 ```
 hbaseCredentialsConfigKeys : ["cluster1", "cluster2"] (the hbase clusters you want to fetch the tokens from)
-cluster1: [{"config1": "value1", "config2": "value2", ... }] (A map of config key-values specific to cluster1)
-cluster2: [{"config1": "value1", "hbase.keytab.file": "/path/to/keytab/for/cluster2/on/nimubs", "hbase.kerberos.principal": "cluster2user@EXAMPLE.com"}] (here along with other configs, we have custom keytab and principal for "cluster2" which will override the keytab/principal specified at topology level)
+"cluster1": {"config1": "value1", "config2": "value2", ... } (A map of config key-values
specific to cluster1)
+"cluster2": {"config1": "value1", "hbase.keytab.file": "/path/to/keytab/for/cluster2/on/nimubs",
"hbase.kerberos.principal": "cluster2user@EXAMPLE.com"} (here along with other configs, we
have custom keytab and principal for "cluster2" which will override the keytab/principal specified
at topology level)
 ```
 
 Instead of specifying key values you may also directly specify the resource files, e.g.:
 
 ```
-cluster1: [{"resources": ["/path/to/core-site1.xml", "/path/to/hbase-site1.xml"]}]
-cluster2: [{"resources": ["/path/to/core-site2.xml", "/path/to/hbase-site2.xml"]}]
+"cluster1": {"resources": ["/path/to/core-site1.xml", "/path/to/hbase-site1.xml"]}
+"cluster2": {"resources": ["/path/to/core-site2.xml", "/path/to/hbase-site2.xml"]}
 ```
 
 Storm will download the tokens separately for each of the clusters and populate them into the subject, and also renew the tokens periodically.
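
For readers applying the corrected config from Java rather than storm.yaml, the same shape can be set directly on the topology Config. A minimal sketch, assuming Storm's Java Config (a plain key-value map); the class name and the config1/value1 entries are illustrative placeholders:

```java
// Sketch: the corrected hbaseCredentialsConfigKeys shape, built programmatically.
// Only "hbaseCredentialsConfigKeys" and the hbase.* keys come from the docs above;
// everything else is a placeholder.
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import org.apache.storm.Config;

public class HBaseCredentialsConfigExample {
    public static Config build() {
        Config conf = new Config();
        conf.put("hbaseCredentialsConfigKeys", Arrays.asList("cluster1", "cluster2"));

        // Each cluster key maps to a plain map of configs -- not a one-element
        // list of maps, which is the typo this commit removes from the docs.
        Map<String, Object> cluster1 = new HashMap<>();
        cluster1.put("config1", "value1");
        conf.put("cluster1", cluster1);

        Map<String, Object> cluster2 = new HashMap<>();
        cluster2.put("hbase.keytab.file", "/path/to/keytab/for/cluster2/on/nimbus");
        cluster2.put("hbase.kerberos.principal", "cluster2user@EXAMPLE.com");
        conf.put("cluster2", cluster2);
        return conf;
    }
}
```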

http://git-wip-us.apache.org/repos/asf/storm/blob/a5005669/docs/storm-hdfs.md
----------------------------------------------------------------------
diff --git a/docs/storm-hdfs.md b/docs/storm-hdfs.md
index 8391efd..219a137 100644
--- a/docs/storm-hdfs.md
+++ b/docs/storm-hdfs.md
@@ -437,15 +437,15 @@ as a part of the topology configuration. E.g. in you custom storm.yaml (or -c op
 
 ```
 hdfsCredentialsConfigKeys : ["cluster1", "cluster2"] (the hdfs clusters you want to fetch the tokens from)
-cluster1: [{"config1": "value1", "config2": "value2", ... }] (A map of config key-values specific to cluster1)
-cluster2: [{"config1": "value1", "hdfs.keytab.file": "/path/to/keytab/for/cluster2/on/nimubs", "hdfs.kerberos.principal": "cluster2user@EXAMPLE.com"}] (here along with other configs, we have custom keytab and principal for "cluster2" which will override the keytab/principal specified at topology level)
+"cluster1": {"config1": "value1", "config2": "value2", ... } (A map of config key-values
specific to cluster1)
+"cluster2": {"config1": "value1", "hdfs.keytab.file": "/path/to/keytab/for/cluster2/on/nimubs",
"hdfs.kerberos.principal": "cluster2user@EXAMPLE.com"} (here along with other configs, we
have custom keytab and principal for "cluster2" which will override the keytab/principal specified
at topology level)
 ```
 
 Instead of specifying key values you may also directly specify the resource files, e.g.:
 
 ```
-cluster1: [{"resources": ["/path/to/core-site1.xml", "/path/to/hdfs-site1.xml"]}]
-cluster2: [{"resources": ["/path/to/core-site2.xml", "/path/to/hdfs-site2.xml"]}]
+"cluster1": {"resources": ["/path/to/core-site1.xml", "/path/to/hdfs-site1.xml"]}
+"cluster2": {"resources": ["/path/to/core-site2.xml", "/path/to/hdfs-site2.xml"]}
 ```
 
 Storm will download the tokens separately for each of the clusters and populate them into the subject, and also renew the tokens periodically. This way it is possible to run multiple bolts connecting to separate HDFS clusters within the same topology.
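
The resources variant translates to Java the same way; a minimal sketch under the same assumptions (file paths are placeholders):

```java
// Sketch: per-cluster Hadoop resource files instead of inline key-values.
import java.util.Arrays;
import java.util.Collections;

import org.apache.storm.Config;

public class HdfsCredentialsResourcesExample {
    public static Config build() {
        Config conf = new Config();
        conf.put("hdfsCredentialsConfigKeys", Arrays.asList("cluster1", "cluster2"));
        // Again a plain map per cluster; "resources" lists the config files to load.
        conf.put("cluster1", Collections.singletonMap("resources",
                Arrays.asList("/path/to/core-site1.xml", "/path/to/hdfs-site1.xml")));
        conf.put("cluster2", Collections.singletonMap("resources",
                Arrays.asList("/path/to/core-site2.xml", "/path/to/hdfs-site2.xml")));
        return conf;
    }
}
```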

http://git-wip-us.apache.org/repos/asf/storm/blob/a5005669/external/storm-hbase/README.md
----------------------------------------------------------------------
diff --git a/external/storm-hbase/README.md b/external/storm-hbase/README.md
index 72d0f5f..d7cec31 100644
--- a/external/storm-hbase/README.md
+++ b/external/storm-hbase/README.md
@@ -52,28 +52,49 @@ The approach described above requires that all potential worker hosts have "stor
 multiple topologies on a cluster, each with a different hbase user, you will have to create multiple keytabs and distribute them to all workers. Instead of doing that you could use the following approach:
 
-Your administrator can configure nimbus to automatically get delegation tokens on behalf of the topology submitter user.
-The nimbus need to start with following configurations:
+Your administrator can configure nimbus to automatically get delegation tokens on behalf of the topology submitter user. The nimbus should be started with the following configuration:
 
+```
 nimbus.autocredential.plugins.classes : ["org.apache.storm.hbase.security.AutoHBase"] 
 nimbus.credential.renewers.classes : ["org.apache.storm.hbase.security.AutoHBase"] 
 hbase.keytab.file: "/path/to/keytab/on/nimbus" (This is the keytab of hbase super user that can impersonate other users.)
 hbase.kerberos.principal: "superuser@EXAMPLE.com"
-nimbus.credential.renewers.freq.secs : 518400 (6 days, hbase tokens by default expire every 7 days and can not be renewed, 
-if you have custom settings for hbase.auth.token.max.lifetime in hbase-site.xml than you should ensure this value is 
-atleast 1 hour less then that.)
+nimbus.credential.renewers.freq.secs : 518400 (6 days; hbase tokens by default expire every 7 days and cannot be renewed. If you have a custom setting for hbase.auth.token.max.lifetime in hbase-site.xml, then you should ensure this value is at least 1 hour less than that.)
+```
 
 Your topology configuration should have:
-topology.auto-credentials :["org.apache.storm.hbase.security.AutoHBase"] 
+
+```
+topology.auto-credentials :["org.apache.storm.hbase.security.AutoHBase"]
+```
 
 If nimbus did not have the above configuration you need to add it and then restart it. Ensure the hbase configuration 
-files(core-site.xml,hdfs-site.xml and hbase-site.xml) and the storm-hbase jar with all the dependencies is present in nimbus's classpath. 
+files (core-site.xml, hdfs-site.xml and hbase-site.xml) and the storm-hbase jar with all the dependencies are present in nimbus's classpath.
+
+As an alternative to adding the configuration files (core-site.xml, hdfs-site.xml and hbase-site.xml) to the classpath, you could specify the configurations as a part of the topology configuration. E.g. in your custom storm.yaml (or -c option while submitting the topology):
+
+```
+hbaseCredentialsConfigKeys : ["cluster1", "cluster2"] (the hbase clusters you want to fetch the tokens from)
+"cluster1": {"config1": "value1", "config2": "value2", ... } (A map of config key-values
specific to cluster1)
+"cluster2": {"config1": "value1", "hbase.keytab.file": "/path/to/keytab/for/cluster2/on/nimubs",
"hbase.kerberos.principal": "cluster2user@EXAMPLE.com"} (here along with other configs, we
have custom keytab and principal for "cluster2" which will override the keytab/principal specified
at topology level)
+```
+
+Instead of specifying key values you may also directly specify the resource files, e.g.:
+
+```
+"cluster1": {"resources": ["/path/to/core-site1.xml", "/path/to/hbase-site1.xml"]}
+"cluster2": {"resources": ["/path/to/core-site2.xml", "/path/to/hbase-site2.xml"]}
+```
+
+Storm will download the tokens separately for each of the clusters and populate them into the subject, and also renew the tokens periodically. 
+This way it is possible to run multiple bolts connecting to separate HBase clusters within the same topology.
+
 Nimbus will use the keytab and principal specified in the config to authenticate with HBase. From then on for every
 topology submission, nimbus will impersonate the topology submitter user and acquire delegation tokens on behalf of the
 topology submitter user. If topology was started with topology.auto-credentials set to AutoHBase, nimbus will push the
 delegation tokens to all the workers for your topology and the hbase bolt/state will authenticate with these tokens.
 
-As nimbus is impersonating topology submitter user, you need to ensure the user specified in storm.kerberos.principal 
+As nimbus is impersonating the topology submitter user, you need to ensure the user specified in hbase.kerberos.principal 
 has permissions to acquire tokens on behalf of other users. To achieve this you need to follow configuration directions 
 listed on this link
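
The topology-side half of the setup above can also be done at submit time. A hedged sketch, assuming Storm's Config.TOPOLOGY_AUTO_CREDENTIALS constant and the StormSubmitter API; the topology name and builder contents are placeholders:

```java
// Sketch: submitting a topology with AutoHBase auto-credentials enabled.
import java.util.Arrays;

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class SecureHBaseSubmit {
    public static void main(String[] args) throws Exception {
        Config conf = new Config();
        // Java equivalent of `topology.auto-credentials : [...]` in storm.yaml.
        conf.put(Config.TOPOLOGY_AUTO_CREDENTIALS,
                Arrays.asList("org.apache.storm.hbase.security.AutoHBase"));

        TopologyBuilder builder = new TopologyBuilder();
        // ... declare spouts and HBase bolts here ...
        StormSubmitter.submitTopology("secure-hbase-topology", conf, builder.createTopology());
    }
}
```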
 

http://git-wip-us.apache.org/repos/asf/storm/blob/a5005669/external/storm-hdfs/README.md
----------------------------------------------------------------------
diff --git a/external/storm-hdfs/README.md b/external/storm-hdfs/README.md
index f97ed51..f930daa 100644
--- a/external/storm-hdfs/README.md
+++ b/external/storm-hdfs/README.md
@@ -411,23 +411,44 @@ If your topology is going to interact with secure HDFS, your bolts/states needs
 currently have 2 options to support this:
 
 ### Using HDFS delegation tokens 
-Your administrator can configure nimbus to automatically get delegation tokens on behalf of the topology submitter user.
-The nimbus need to start with following configurations:
+Your administrator can configure nimbus to automatically get delegation tokens on behalf of the topology submitter user. The nimbus should be started with the following configuration:
 
-nimbus.autocredential.plugins.classes : ["org.apache.storm.hdfs.common.security.AutoHDFS"] 
-nimbus.credential.renewers.classes : ["org.apache.storm.hdfs.common.security.AutoHDFS"] 
+```
+nimbus.autocredential.plugins.classes : ["org.apache.storm.hdfs.security.AutoHDFS"]
+nimbus.credential.renewers.classes : ["org.apache.storm.hdfs.security.AutoHDFS"]
 hdfs.keytab.file: "/path/to/keytab/on/nimbus" (This is the keytab of hdfs super user that can impersonate other users.)
 hdfs.kerberos.principal: "superuser@EXAMPLE.com" 
-nimbus.credential.renewers.freq.secs : 82800 (23 hours, hdfs tokens needs to be renewed every 24 hours so this value should be
-less then 24 hours.)
-topology.hdfs.uri:"hdfs://host:port" (This is an optional config, by default we will use value of "fs.defaultFS" property
-specified in hadoop's core-site.xml)
+nimbus.credential.renewers.freq.secs : 82800 (23 hours; hdfs tokens need to be renewed every 24 hours, so this value should be less than 24 hours.)
+topology.hdfs.uri:"hdfs://host:port" (This is an optional config; by default we will use the value of the "fs.defaultFS" property specified in hadoop's core-site.xml)
+```
 
 Your topology configuration should have:
-topology.auto-credentials :["org.apache.storm.hdfs.common.security.AutoHDFS"] 
 
-If nimbus did not have the above configuration you need to add it and then restart it. Ensure the hadoop configuration 
-files(core-site.xml and hdfs-site.xml) and the storm-hdfs jar with all the dependencies is present in nimbus's classpath. 
+```
+topology.auto-credentials :["org.apache.storm.hdfs.common.security.AutoHDFS"]
+```
+
+If nimbus did not have the above configuration you need to add it and then restart it. Ensure the hadoop configuration 
+files (core-site.xml and hdfs-site.xml) and the storm-hdfs jar with all the dependencies are present in nimbus's classpath.
+
+As an alternative to adding the configuration files (core-site.xml and hdfs-site.xml) to the classpath, you could specify the configurations
+as a part of the topology configuration. E.g. in your custom storm.yaml (or -c option while submitting the topology):
+
+```
+hdfsCredentialsConfigKeys : ["cluster1", "cluster2"] (the hdfs clusters you want to fetch the tokens from)
+"cluster1": {"config1": "value1", "config2": "value2", ... } (A map of config key-values
specific to cluster1)
+"cluster2": {"config1": "value1", "hdfs.keytab.file": "/path/to/keytab/for/cluster2/on/nimubs",
"hdfs.kerberos.principal": "cluster2user@EXAMPLE.com"} (here along with other configs, we
have custom keytab and principal for "cluster2" which will override the keytab/principal specified
at topology level)
+```
+
+Instead of specifying key values you may also directly specify the resource files, e.g.:
+
+```
+"cluster1": {"resources": ["/path/to/core-site1.xml", "/path/to/hdfs-site1.xml"]}
+"cluster2": {"resources": ["/path/to/core-site2.xml", "/path/to/hdfs-site2.xml"]}
+```
+
+Storm will download the tokens separately for each of the clusters and populate them into the subject, and also renew the tokens periodically. This way it is possible to run multiple bolts connecting to separate HDFS clusters within the same topology.
+
 Nimbus will use the keytab and principal specified in the config to authenticate with Namenode. From then on for every
 topology submission, nimbus will impersonate the topology submitter user and acquire delegation tokens on behalf of the
 topology submitter user. If topology was started with topology.auto-credentials set to AutoHDFS, nimbus will push the


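For symmetry with the HBase example earlier, a hedged HDFS submit sketch; the AutoHDFS class path follows the nimbus plugin lines in this hunk, and the topology name and URI are placeholders:

```java
// Sketch: submitting a topology with AutoHDFS auto-credentials enabled.
import java.util.Arrays;

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class SecureHdfsSubmit {
    public static void main(String[] args) throws Exception {
        Config conf = new Config();
        conf.put(Config.TOPOLOGY_AUTO_CREDENTIALS,
                Arrays.asList("org.apache.storm.hdfs.security.AutoHDFS"));
        // Optional: pin the target cluster instead of relying on fs.defaultFS.
        conf.put("topology.hdfs.uri", "hdfs://host:port");

        TopologyBuilder builder = new TopologyBuilder();
        // ... declare spouts and an HDFS bolt here ...
        StormSubmitter.submitTopology("secure-hdfs-topology", conf, builder.createTopology());
    }
}
```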