cloudstack-users mailing list archives

From Eric Lee Green <eric.lee.gr...@gmail.com>
Subject Re: Cloudstack 4.11.3 to 4.13.1 SystemVMs Error
Date Wed, 12 Aug 2020 03:35:25 GMT
Correct: the 4.11.3 template is used for 4.11.3, 4.12, and 4.13. 4.14 moves 
to the 4.14.0 template.

There seems to be something odd happening key-wise sometimes with 
upgrades from 4.11.3 to 4.13.1 or 4.14.0. I managed an upgrade from 
4.11.3 to 4.13.1 that *almost* worked, but the secondary storage VM 
wouldn't work, and thus I couldn't spawn new virtual machines. Same 
symptom -- a key error when the agent tried to ssh into it -- and deleting 
it so it would respawn didn't help. Then I tried 4.11.3 to 4.14.0, and 
*all* the system VMs failed at that point (of course, that was with the new 
template).

Right now I'm back at 4.11.3 until this can be figured out.

On 8/11/2020 5:53 AM, Ammad Syed wrote:
> Hi,
>
> I think 4.12 and 4.13 use the same systemVM template, i.e. the 4.11.3 version,
> which I already have registered. Currently I am running ACS version 4.11.3.
>
> MariaDB [cloud]> SELECT id,name,type,cross_zones,state FROM
> cloud.vm_template WHERE name like '%systemvm-xenserver%' AND removed IS
> NULL;
> +------+-----------------------------+--------+-------------+----------+
> | id   | name                        | type   | cross_zones | state    |
> +------+-----------------------------+--------+-------------+----------+
> |  337 | systemvm-xenserver-3.0.0    | SYSTEM |           0 | Inactive |
> |  418 | systemvm-xenserver-4.2      | SYSTEM |           0 | Active   |
> |  472 | systemvm-xenserver-4.3      | USER   |           1 | Inactive |
> |  473 | systemvm-xenserver-4.3      | USER   |           1 | Inactive |
> |  474 | systemvm-xenserver-4.3      | USER   |           1 | Inactive |
> |  475 | systemvm-xenserver-4.3      | USER   |           1 | Inactive |
> |  476 | systemvm-xenserver-4.3      | USER   |           0 | Inactive |
> |  479 | systemvm-xenserver-4.3-2    | USER   |           1 | Inactive |
> |  480 | systemvm-xenserver-4.3      | SYSTEM |           0 | Active   |
> |  549 | systemvm-xenserver-4.5.1    | USER   |           0 | Active   |
> |  550 | systemvm-xenserver-4.5.1    | SYSTEM |           0 | Active   |
> |  651 | systemvm-xenserver-4.7.0    | USER   |           0 | Inactive |
> |  652 | systemvm-xenserver-4.7.0    | USER   |           0 | Inactive |
> |  653 | systemvm-xenserver-4.7.0    | SYSTEM |           0 | Inactive |
> |  737 | systemvm-xenserver-4.9.2    | SYSTEM |           1 | Inactive |
> |  739 | systemvm-xenserver-4.9.2-sb | SYSTEM |           1 | Active   |
> | 1245 | systemvm-xenserver-4.11.1   | SYSTEM |           1 | Active   |
> | 1584 | systemvm-xenserver-4.11.2   | SYSTEM |           1 | Active   |
> | 1677 | systemvm-xenserver-4.11.3   | SYSTEM |           1 | Active   |
> +------+-----------------------------+--------+-------------+----------+
>
> - Ammad
>
> On Tue, Aug 11, 2020 at 5:17 PM Pierre-Luc Dion <pdion891@apache.org> wrote:
>> Hi Syed,
>>  From 4.12, the systemvm template had to be upgraded because of an OS change
>> in the template (it moved to a newer version of Debian). Because of that, some
>> VR scripts have changed, making older versions of the VRs obsolete, so you will
>> most likely have to register an updated systemvm template and upgrade your
>> system VMs and VRs.
>>
>> Regards,
>>
>> On Tue, Aug 11, 2020 at 6:24 AM Ammad Syed <syedammad83@gmail.com> wrote:
>>
>>> Hi Guys,
>>>
>>> I was previously on CloudStack 4.9.3 and upgraded to 4.11.1, then 4.11.3.
>>> Version 4.11.3 has been working fine for six months.
>>>
>>> Now I have tried to upgrade my system from 4.11.3 to 4.13.1. The upgrade
>>> was successful, and I didn't upload any system VM template. However, a
>>> problem occurred when I recreated the system VMs of a pod: the VMs were
>>> recreated and their state was Running, but the agent state never came up;
>>> the column just shows blank.
>>>
>>> Digging further via the job logs, the job failed with an error that it was
>>> unable to execute a command via ssh. Below are the logs.
>>>
>>> 2020-07-25 02:30:48,126 ERROR [c.c.u.s.SshHelper] (DirectAgent-211:ctx-62f09b31) (logid:9fa7dece) SSH execution of command /opt/cloud/bin/router_proxy.sh keystore-setup 169.254.2.199 /usr/local/cloud/systemvm/conf/agent.properties /usr/local/cloud/systemvm/conf/cloud.jks TJaQYChYBwKh7Cx9 365 /usr/local/cloud/systemvm/conf/cloud.csr has an error status code in return. Result output:
>>> 2020-07-25 02:30:48,127 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-211:ctx-62f09b31) (logid:9fa7dece) Seq 906-3195585410596077730: Response Received:
>>> 2020-07-25 02:30:48,127 DEBUG [c.c.a.t.Request] (DirectAgent-211:ctx-62f09b31) (logid:9fa7dece) Seq 906-3195585410596077730: Processing:  { Ans: , MgmtId: 77927107943497, via: 906(xen-21-10-a3-khi02), Ver: v1, Flags: 10, [{"org.apache.cloudstack.ca.SetupKeystoreAnswer":{"result":false,"wait":0}}] }
>>> 2020-07-25 02:30:48,127 DEBUG [c.c.a.t.Request] (Work-Job-Executor-41:ctx-4e3c666d job-1208155/job-1208258 ctx-df740f75) (logid:9fa7dece) Seq 906-3195585410596077730: Received:  { Ans: , MgmtId: 77927107943497, via: 906(xen-21-10-a3-khi02), Ver: v1, Flags: 10, { SetupKeystoreAnswer } }
>>> 2020-07-25 02:30:48,127 ERROR [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-41:ctx-4e3c666d job-1208155/job-1208258 ctx-df740f75) (logid:9fa7dece) Failed to setup keystore and generate CSR for system vm: s-24142-VM
>>> 2020-07-25 02:30:48,127 DEBUG [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-41:ctx-4e3c666d job-1208155/job-1208258 ctx-df740f75) (logid:9fa7dece) Done executing VM work job: com.cloud.vm.VmWorkStart{"dcId":0,"userId":1,"accountId":1,"vmId":24142,"handlerName":"VirtualMachineManagerImpl"}
>>> 2020-07-25 02:30:48,128 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-41:ctx-4e3c666d job-1208155/job-1208258 ctx-df740f75) (logid:9fa7dece) Complete async job-1208258, jobStatus: SUCCEEDED, resultCode: 0, result: null
>>>
>>> I tried to dig further: I was unable to log in to the new systemVM via ssh
>>> from the xenserver host with the key /root/.ssh/id_rsa.cloud in place. It
>>> looks like a private key issue. However, I am able to log in to my old
>>> systemVMs (i.e. those created on ACS 4.11.3).
>>>
>>> Also, I have an SSL certificate enabled for the console proxy on my ACS
>>> 4.11.3, and I am using only xenserver 7.0 hosts.
>>>
>>> I tried disabling SSL on secstorage and console proxy from global
>>> settings, but it still didn't work.
>>>
>>> I did a fresh installation of ACS 4.13.1 with xenserver 7.0, and the
>>> systemVMs are working fine in it.
>>>
>>> Please advise.
>>> --
>>> Regards,
>>>
>>>
>>> Syed Ammad Ali
>>>
>
