cloudstack-users mailing list archives

From Rakesh Venkatesh <www.rakeshv....@gmail.com>
Subject Re: Dedicated hosts for Domain/Account
Date Mon, 12 Aug 2019 16:02:18 GMT
Thanks for the quick reply.
I was browsing through the code and found the following


        // check if an affinity group of type Explicit dedication exists. If no,
        // put the dedicated pod/cluster/host in the avoid list
        List<AffinityGroupVMMapVO> vmGroupMappings =
                _affinityGroupVMMapDao.findByVmIdType(vm.getId(), "ExplicitDedication");

        if (vmGroupMappings != null && !vmGroupMappings.isEmpty()) {
            isExplicit = true;
        }


So this feature works only if the VMs are associated with an affinity group of
type ExplicitDedication. I created two VMs with the same affinity group, and
after enabling maintenance mode they were migrated to the other dedicated
hosts. So there is no need to create a GitHub issue, I guess.
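To make the behavior above concrete, here is a minimal, self-contained sketch
(hypothetical method and types, not the actual CloudStack classes) of what the
quoted planner code implies: unless the VM carries an "ExplicitDedication"
affinity group mapping, every dedicated host ends up in the avoid set, which
is why maintenance-mode migration skips the other dedicated hypervisor.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DedicationSketch {

    /**
     * Hypothetical simplification of the planner logic: return the hosts the
     * deployment planner should avoid for this VM.
     *
     * @param vmAffinityGroupTypes affinity group types attached to the VM
     * @param dedicatedHostIds     ids of all dedicated hosts in the zone
     */
    static Set<Integer> avoidSet(List<String> vmAffinityGroupTypes,
                                 Set<Integer> dedicatedHostIds) {
        // Mirrors the vmGroupMappings null/empty check in the quoted snippet.
        boolean isExplicit = vmAffinityGroupTypes != null
                && vmAffinityGroupTypes.contains("ExplicitDedication");
        // Without explicit dedication, all dedicated hosts are avoided.
        return isExplicit ? Collections.<Integer>emptySet()
                          : new HashSet<>(dedicatedHostIds);
    }
}
```

With hosts 17 and 20 dedicated (as in the logs below), a VM with no
ExplicitDedication group avoids both and can only land on host 26, while a VM
in such a group avoids neither.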

On Mon, Aug 12, 2019 at 5:04 PM Andrija Panic <andrija.panic@gmail.com>
wrote:

> Considering that manual VM LIVE migrations via CloudStack from
> non-dedicated to dedicated hosts SHOULD/DO work, I would say this is an
> "unhandled" case, which should indeed be handled: live migration should
> happen instead of stopping the VMs.
>
> I assume someone else might jump in - but if not, please raise a GitHub
> issue as a bug report.
>
>
> Thx
>
> On Mon, 12 Aug 2019 at 16:52, Rakesh Venkatesh <www.rakeshv.com@gmail.com>
> wrote:
>
> > Hello
> >
> > In my CloudStack setup, I have three KVM hypervisors, out of which two
> > are dedicated to the Root/admin account and the third is not dedicated.
> > When I enable maintenance mode on a dedicated hypervisor, it always
> > migrates the VMs from the dedicated to the non-dedicated hypervisor,
> > but never to the second dedicated hypervisor. I don't think this is the
> > expected behavior. Can anyone please verify? The dedicated hypervisors
> > are added to the avoid set, and the deployment planning manager skips
> > them.
> >
> > If I dedicate the third hypervisor to a different domain and enable
> > maintenance mode on the first hypervisor, then all the VMs are stopped
> > instead of being migrated to the second dedicated hypervisor of the
> > same domain/account.
> >
> >
> > I have included the relevant logs below. You can see from the logs
> > that the hosts with ids 17 and 20 are dedicated but host 26 is not.
> > When maintenance mode is enabled on host 20, the planner skips hosts
> > 17 and 20 and migrates the VMs to host 26.
> >
> >
> >
> > 2019-08-12 14:35:23,754 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > (Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
> > (logid:a16d7711) Deploy avoids pods: null, clusters: null, hosts: [20],
> > pools: null
> > 2019-08-12 14:35:23,757 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > (Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
> > (logid:a16d7711) DeploymentPlanner allocation algorithm:
> > com.cloud.deploy.FirstFitPlanner@6fecace4
> > 2019-08-12 14:35:23,757 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > (Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
> > (logid:a16d7711) Trying to allocate a host and storage pools from dc:8,
> > pod:8,cluster:null, requested cpu: 16000, requested ram: 8589934592
> > 2019-08-12 14:35:23,757 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > (Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
> > (logid:a16d7711) Is ROOT volume READY (pool already allocated)?: Yes
> > 2019-08-12 14:35:23,757 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > (Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
> > (logid:a16d7711) This VM has last host_id specified, trying to choose the
> > same host: 20
> > 2019-08-12 14:35:23,759 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > (Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
> > (logid:a16d7711) The last host of this VM is in avoid set
> > 2019-08-12 14:35:23,759 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > (Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
> > (logid:a16d7711) Cannot choose the last host to deploy this VM
> > 2019-08-12 14:35:23,759 DEBUG [c.c.d.FirstFitPlanner]
> > (Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
> > (logid:a16d7711) Searching resources only under specified Pod: 8
> > 2019-08-12 14:35:23,759 DEBUG [c.c.d.FirstFitPlanner]
> > (Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
> > (logid:a16d7711) Listing clusters in order of aggregate capacity, that
> have
> > (atleast one host with) enough CPU and RAM capacity under this Pod: 8
> > 2019-08-12 14:35:23,761 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > (Work-Job-Executor-7:ctx-9f4363d1 job-473/job-246899 ctx-cef9b496)
> > (logid:bbb870bf) Deploy avoids pods: [], clusters: [], hosts: [17, 20],
> > pools: null
> > 2019-08-12 14:35:23,763 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > (Work-Job-Executor-7:ctx-9f4363d1 job-473/job-246899 ctx-cef9b496)
> > (logid:bbb870bf) DeploymentPlanner allocation algorithm:
> > com.cloud.deploy.FirstFitPlanner@6fecace4
> > 2019-08-12 14:35:23,763 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > (Work-Job-Executor-7:ctx-9f4363d1 job-473/job-246899 ctx-cef9b496)
> > (logid:bbb870bf) Trying to allocate a host and storage pools from dc:8,
> > pod:8,cluster:null, requested cpu: 500, requested ram: 536870912
> > 2019-08-12 14:35:23,763 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > (Work-Job-Executor-7:ctx-9f4363d1 job-473/job-246899 ctx-cef9b496)
> > (logid:bbb870bf) Is ROOT volume READY (pool already allocated)?: Yes
> > 2019-08-12 14:35:23,763 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > (Work-Job-Executor-7:ctx-9f4363d1 job-473/job-246899 ctx-cef9b496)
> > (logid:bbb870bf) This VM has last host_id specified, trying to choose the
> > same host: 26
> > 2019-08-12 14:35:23,763 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > (Work-Job-Executor-8:ctx-1cc07ab1 job-246119/job-246902 ctx-9dbb7241)
> > (logid:b7e8e3a2) Deploy avoids pods: [], clusters: [], hosts: [17, 20],
> > pools: null
> > 2019-08-12 14:35:23,766 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > (Work-Job-Executor-8:ctx-1cc07ab1 job-246119/job-246902 ctx-9dbb7241)
> > (logid:b7e8e3a2) DeploymentPlanner allocation algorithm:
> > com.cloud.deploy.FirstFitPlanner@6fecace4
> > 2019-08-12 14:35:23,766 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > (Work-Job-Executor-8:ctx-1cc07ab1 job-246119/job-246902 ctx-9dbb7241)
> > (logid:b7e8e3a2) Trying to allocate a host and storage pools from dc:8,
> > pod:8,cluster:null, requested cpu: 500, requested ram: 536870912
> > 2019-08-12 14:35:23,766 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > (Work-Job-Executor-8:ctx-1cc07ab1 job-246119/job-246902 ctx-9dbb7241)
> > (logid:b7e8e3a2) Is ROOT volume READY (pool already allocated)?: Yes
> > 2019-08-12 14:35:23,766 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > (Work-Job-Executor-8:ctx-1cc07ab1 job-246119/job-246902 ctx-9dbb7241)
> > (logid:b7e8e3a2) This VM has last host_id specified, trying to choose the
> > same host: 26
> > 2019-08-12 14:35:23,780 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> > (Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c)
> > (logid:a16d7711) Checking resources in Cluster: 8 under Pod: 8
> > 2019-08-12 14:35:23,782 DEBUG [c.c.c.CapacityManagerImpl]
> > (Work-Job-Executor-7:ctx-9f4363d1 job-473/job-246899 ctx-cef9b496)
> > (logid:bbb870bf) Host: 26 has cpu capability (cpu:48, speed:2900) to
> > support requested CPU: 1 and requested speed: 500
> > 2019-08-12 14:35:23,782 DEBUG [c.c.c.CapacityManagerImpl]
> > (Work-Job-Executor-7:ctx-9f4363d1 job-473/job-246899 ctx-cef9b496)
> > (logid:bbb870bf) Checking if host: 26 has enough capacity for requested
> > CPU: 500 and requested RAM: 536870912 , cpuOverprovisioningFactor: 1.0
> > 2019-08-12 14:35:23,782 DEBUG [c.c.a.m.a.i.FirstFitAllocator]
> > (Work-Job-Executor-9:ctx-786e4f7a job-246740/job-246905 ctx-73b6368c
> > FirstFitRoutingAllocator) (logid:a16d7711) Looking for hosts in dc: 8
> >  pod:8  cluster:8
> >
> > --
> > Thanks and regards
> > Rakesh venkatesh
> >
>
>
> --
>
> Andrija Panić
>


-- 
Thanks and regards
Rakesh venkatesh
