cloudstack-users mailing list archives

From Carlos Reategui <create...@gmail.com>
Subject Host failed issues
Date Wed, 03 Jun 2015 20:39:09 GMT
Hi All,

I have a cluster with a pod of 6 XenServer 6.2 hosts on CloudStack 4.4.0.
One of the hosts stopped/rebooted suddenly and now can't find its disks
(Dell is on the way).  Luckily the host was not the pool master, but it was
running the VR, so no instances could be created or started.  My network is
a basic network without security groups.

CS recognized that the host was down, and that the VR was down with it, but
it was unable to move the VR to a different host.  After I destroyed the VR
from the UI, it came up on a different host.  I would have expected CS to
move the VR automatically, but maybe I don't have something set up
correctly.
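For what it's worth, I assume the API equivalent of what I ended up doing by
hand is a network restart with cleanup, which should destroy and recreate
the VR.  Something like this from cloudmonkey (the network UUID below is
just a placeholder):

    > list routers networkid=<network-uuid>
    > restart network id=<network-uuid> cleanup=true

But I'd still expect CS to handle this on its own when a host dies.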

In the Hosts tab I put the failed host in maintenance mode, then tried
starting an instance that had previously been stopped.  CS started it on
the bad host and reported success, even though the instance never actually
came up.  Why did it do this if the host was down and in maintenance mode?
Now I can't stop that instance or migrate it.  I took the host out of
maintenance mode and marked it as disabled instead, but I still can't stop
the instance that thinks it is on the bad host.  Do I need to edit the DB
to fix this instance?
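
Before resorting to the DB, I will probably try a forced stop through
cloudmonkey, which I believe accepts a forced flag (instance UUID is a
placeholder):

    > stop virtualmachine id=<instance-uuid> forced=true

If DB surgery really is the answer, I assume it would be something along
these lines against the cloud database on the management server -- the
instance name here is a placeholder, and I have not tried this yet:

    -- find the stuck row; state should show Running, host_id the dead host
    SELECT id, instance_name, state, host_id
      FROM cloud.vm_instance
     WHERE name = 'stuck-instance-name';

    -- with the management server stopped, mark the VM stopped and unpin it
    UPDATE cloud.vm_instance
       SET state = 'Stopped', host_id = NULL
     WHERE id = <id-from-select>;

Is that the right table to be touching, or is there a supported way?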

Any ideas?

thanks,
Carlos
