On Wed, Jun 3, 2015 at 1:52 PM, Andrija Panic <andrija.panic@gmail.com>
wrote:
> Hi Carlos,
>
> I'm not familiar with Xen, but on KVM you would "virsh destroy i-xx-yy-VM"
> and then edit the DB to set the VM state to Stopped.
>
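For anyone on KVM hitting this thread later, those steps would look roughly
like the sketch below. The instance name, DB name, and table/column names are
assumptions based on this thread, not verified against every CloudStack
version, so double-check your schema (and back up the DB) first.

```shell
# Hypothetical names throughout; adjust the instance name and DB
# credentials to your environment before running anything.
virsh destroy i-3-201-VM

# Mark the instance as Stopped in the CloudStack DB (schema may differ
# between versions; verify the table/column names first).
mysql -u cloud -p cloud -e \
  "UPDATE vm_instance SET state = 'Stopped' WHERE name = 'i-3-201-VM';"
```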
Hmm... Interesting... "xe vm-list" does show the VM as running. However I
was initially unable to stop it:
# xe vm-shutdown name-label=i-3-201-VM force=true
You attempted an operation which involves a host which could not be
contacted.
host: bf9bb7a9-ee8e-46de-855d-9712ed037943 (labxen03)
I had to force the power state on the VM first:
# xe vm-reset-powerstate vm=i-3-201-VM force=true
# xe vm-destroy uuid=dfb377f9-7f7b-bed7-88d9-a03b3a5d96e5
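Once that succeeded, re-running vm-list for the same name-label (assumed here)
should come back empty, which is a quick way to confirm the record is gone:

```shell
# Should print nothing once the VM record has been destroyed.
xe vm-list name-label=i-3-201-VM
```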
> That should be sufficient to stop VM manually.
>
Yup. Thank you
>
> Check from XenCenter if it is possible to shut down the VM...
>
XenCenter does not show the VM. It only shows the ones on functional
hosts.
> If you still start VM for some reason on bad host - you can play with
> last_host_id field in cloud.instance table (or similar name...), while the
> VM is off, edit to point to some other host ID, and ACS should later try to
> start it on that "new" host
>
This did it. The table is vm_instance. I went through the stopped VMs
that had last_host_id set to the bad host and changed it. Hopefully that
will take care of it.
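For anyone else hitting this, the edit was along these lines. The host IDs
below are placeholders (look up the real IDs in the host table), and it is
safest to take a DB backup and only touch rows whose state is Stopped:

```shell
# Placeholder IDs: 3 is the failed host, 5 a healthy one. Look up the
# real values in the `host` table before running, and back up the DB.
mysql -u cloud -p cloud -e \
  "UPDATE vm_instance SET last_host_id = 5
   WHERE last_host_id = 3 AND state = 'Stopped';"
```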
Thanks!
>
> Hope that helps
>
> On 3 June 2015 at 22:39, Carlos Reategui <creategui@gmail.com> wrote:
>
> > Hi All,
> >
> > I have a cluster with a pod of 6 XenServer 6.2 hosts on CloudStack 4.4.0.
> > One of the hosts stopped/rebooted suddenly and now can't find its disks
> > (Dell is on the way). Luckily the host was not the pool master, but it
> > did have the VR running on it, so no instances could be created or
> > started. My network is a basic network without security groups.
> >
> > CS recognized that the host was down and also the VR, but it was unable
> > to move the VR to a different machine. After destroying the VR from the
> > UI it came up on a different machine. I would have expected CS to have
> > moved the VR automatically, but maybe I don't have something set up
> > correctly.
> >
> > In the hosts tab I put the failed host in maintenance mode. I tried
> > starting an instance that had previously been stopped. CS started it on
> > the bad host and reported that it was successful even though it was not.
> > Why did it do this if the host was down and in maintenance mode? Now I
> > can't stop that instance or migrate it. I took the host out of
> > maintenance mode and marked it as disabled instead. I still can't stop
> > the instance that thinks it is on the bad host. Do I need to edit the DB
> > to fix this instance?
> >
> > Any ideas?
> >
> > thanks,
> > Carlos
> >
>
>
>
> --
>
> Andrija Panić
>