[ https://issues.apache.org/jira/browse/AMBARI-2746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13760437#comment-13760437 ]
Sumit Mohanty commented on AMBARI-2746:
---------------------------------------
This should already have been committed, but I am assigning it to myself to verify.
> After hostcleanup over existing install, hive metastore nagios check fails
> --------------------------------------------------------------------------
>
> Key: AMBARI-2746
> URL: https://issues.apache.org/jira/browse/AMBARI-2746
> Project: Ambari
> Issue Type: Bug
> Components: agent
> Affects Versions: 1.2.5
> Reporter: Artem Baranchuk
> Assignee: Sumit Mohanty
> Fix For: 1.4.1
>
> Attachments: AMBARI-2746.patch
>
>
> Steps to reproduce:
> # install cluster on single host
> # stop and reset cluster
> # re-install cluster, perform HostCleanup
> # continue with install
> # be sure to choose new NameNode and DataNode dirs on Customize Services > HDFS. If you leave them at the defaults, HDFS won't start because those dirs (from the step 1 install) aren't empty
> # the cluster installs and all checks clear, except the Hive Metastore Nagios alert, which won't go away. So something must have been left behind from the initial install.
> CRITICAL: Error accessing hive-metaserver status [Exception in thread "main" java.io.IOException: Permission denied
> at java.io.UnixFileSystem.createFileExclusively(Native Method)
> at java.io.File.checkAndCreate(File.java:1704)
> at java.io.File.createTempFile(File.java:1792)
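
For context, a minimal sketch (not Ambari code) of how the failure in the trace above can occur: if the Hive Metastore status probe calls File.createTempFile against a temp directory left over from the first install that is now owned by a different user, the call fails with "Permission denied". The directory path and class name below are hypothetical and only illustrate the failure mode.

{code:java}
import java.io.File;
import java.io.IOException;

public class MetastoreTempFileProbe {
    public static void main(String[] args) {
        // Hypothetical leftover temp directory from the initial install,
        // now owned by a different user and not writable by this one.
        File staleTmpDir = new File("/tmp/hive-leftover");

        try {
            // Throws java.io.IOException: Permission denied when the target
            // directory is not writable, mirroring the stack trace above
            // (createTempFile -> checkAndCreate -> createFileExclusively).
            File probe = File.createTempFile("metastore-status", ".tmp", staleTmpDir);
            System.out.println("OK: created " + probe.getAbsolutePath());
            probe.delete();
        } catch (IOException e) {
            System.err.println("CRITICAL: " + e.getMessage());
        }
    }
}
{code}

If that is indeed the cause here, removing or re-owning such leftover temp paths during HostCleanup would clear the alert.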