metron-dev mailing list archives

From James Sirota <>
Subject Re: Need suggestion on how to configure HCP Big Data for Development and Testing
Date Fri, 06 Oct 2017 18:12:32 GMT
As I mentioned in my previous response, the Hortonworks community forums
are where you want to go for help with Hortonworks products.

06.10.2017, 05:34, "Dima Kovalyov" <>:
> Hello Ashikin,
> HCP is a Hortonworks product, and their installation document is here:
> The chapter you are looking for is below:
> The Dell-specific wording (Dell PowerEdge VRTX, M630 and HDD 6006) does
> not tell me much about the hardware you have. But if I had to guess, I
> would suggest:
> 1 node for Ambari and the hdfs, hbase, zookeeper, metrics, zeppelin,
> spark, hive, tez and yarn masters.
> 1 node for Metron: storm, kafka, metron, es and kibana.
> 2 nodes + the Ambari node for all the slaves and masters that require
> replication (hdfs datanodes, zookeeper servers, kafka brokers, es slaves).
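> A layout like that could be sketched as an Ambari blueprint host_groups
> fragment. This is purely illustrative: the component names below are
> assumptions based on common Ambari / HCP management-pack component names,
> and should be checked against the stack definition you actually install.

```json
{
  "Blueprints": { "blueprint_name": "hcp-4node", "stack_name": "HDP" },
  "host_groups": [
    {
      "name": "master",
      "cardinality": "1",
      "components": [
        {"name": "AMBARI_SERVER"}, {"name": "NAMENODE"},
        {"name": "HBASE_MASTER"}, {"name": "ZOOKEEPER_SERVER"},
        {"name": "DATANODE"}, {"name": "KAFKA_BROKER"}
      ]
    },
    {
      "name": "metron",
      "cardinality": "1",
      "components": [
        {"name": "NIMBUS"}, {"name": "SUPERVISOR"},
        {"name": "KAFKA_BROKER"}, {"name": "METRON_PARSERS"},
        {"name": "ES_MASTER"}, {"name": "KIBANA_MASTER"}
      ]
    },
    {
      "name": "worker",
      "cardinality": "2",
      "components": [
        {"name": "DATANODE"}, {"name": "ZOOKEEPER_SERVER"},
        {"name": "KAFKA_BROKER"}, {"name": "SUPERVISOR"}
      ]
    }
  ]
}
```

> With a blueprint like this registered in Ambari, posting a matching
> host-to-host-group mapping to the Ambari REST API installs the whole
> layout in one step.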
> I have not tried HCP myself, but if I had, I would post all my
> HCP-related questions to the Hortonworks forums
> (, they have really good
> support there) rather than to the Apache Metron devs, as the two are not
> the same thing (Hortonworks uses Apache Metron, an open-source project,
> in their HDP framework).
> - Dima
> On 10/05/2017 04:59 PM, Ashikin Abdullah wrote:
>>  Hi, can anyone suggest an appropriate deployment of the Hortonworks
>>  Cybersecurity Package for this environment? We have a Dell PowerEdge VRTX
>>  with 4 nodes (M630 x 4) and shared storage (HDD 6006 x 25).
>>  How should we allocate these resources to properly configure HCP?
>>  Thanks in advance.

Thank you,

James Sirota
PPMC- Apache Metron (Incubating)
jsirota AT apache DOT org
