cloudstack-users mailing list archives

From <cristia...@istream.today>
Subject RE: cloud: Password server at 192.xx.1xx.79 did not have any password for the VM - After upgrade to ACS 4.14
Date Mon, 22 Jun 2020 14:06:45 GMT
Hi Andrija,

   I already created one a few days ago: https://github.com/apache/cloudstack/issues/4158

Regards,
Cristian

-----Original Message-----
From: Andrija Panic <andrija.panic@gmail.com> 
Sent: Monday, June 22, 2020 12:22 PM
To: users <users@cloudstack.apache.org>
Subject: Re: cloud: Password server at 192.xx.1xx.79 did not have any password for the VM
- After upgrade to ACS 4.14

That sounds very weird and looks like a possible bug - can you please open an issue here: https://github.com/apache/cloudstack/issues ?

Perhaps someone else can advise if they have seen a similar issue.

Regards,
Andrija

On Fri, 19 Jun 2020 at 22:53, Cristian Ciobanu <cristian.c@istream.today>
wrote:

> I found out that there is a firewall issue and an sshd config issue on
> the VR in this ACS version (4.14) when it is configured with basic
> networking.
>
> By default the management server can establish an ssh connection to the
> VR only via its local IP (eth1, 172.11.0.167/24), but to run the health
> checks it tries to connect via the public IPs of the VR. This is not
> possible because of the following:
>
> sshd config:
> Port 3922
> #AddressFamily any
> ListenAddress 172.11.0.167   (I changed this to 0.0.0.0)
>
> iptables:
> -A INPUT -i eth1 -p tcp -m tcp --dport 3922 -m state --state NEW,ESTABLISHED -j ACCEPT
> The equivalent rule for eth0 is missing; in basic networking it will not
> work without it, so I added a rule to also allow eth0.
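>
> To make the fix concrete, here is a sketch of the two changes as shell
> commands run on the VR (assuming the stock Debian system VM with the
> config at /etc/ssh/sshd_config; verify paths on your VR before applying):
>
> # Listen on all addresses instead of only the eth1 IP, then restart sshd
> sed -i 's/^ListenAddress 172.11.0.167/ListenAddress 0.0.0.0/' /etc/ssh/sshd_config
> systemctl restart ssh
>
> # Mirror the existing eth1 rule for eth0 so port 3922 is reachable on the public IPs
> iptables -A INPUT -i eth0 -p tcp -m tcp --dport 3922 -m state --state NEW,ESTABLISHED -j ACCEPT
>
> You can then verify reachability from the management server with
> something like the following (the key path is the usual
> management-server location; adjust if yours differs):
>
> ssh -p 3922 -i /var/lib/cloudstack/management/.ssh/id_rsa root@<VR public IP>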
>
> Regarding the password issue:
> in the VR iptables there is only this rule:
> -A INPUT -s 158.xx.xx.224/28 -i eth0 -p tcp -m tcp --dport 8080 -m state --state NEW -j ACCEPT
> It exists only for the first (main) public IP, not for all of them, so I
> added a rule to allow port 8080 on each public IP of this router.
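>
> A minimal sketch of that workaround, adding one rule per address
> currently configured on eth0 (note this matches on destination IP
> instead of copying the -s source restriction from the original rule;
> tighten it with -s to your guest subnets if needed):
>
> # Allow the password server port on every public IP held by this router
> for ip in $(ip -4 -o addr show eth0 | awk '{print $4}' | cut -d/ -f1); do
>     iptables -A INPUT -d "$ip" -i eth0 -p tcp -m tcp --dport 8080 -m state --state NEW -j ACCEPT
> done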
>
> Everything works now, until I destroy the router and have to
> reconfigure it again.
>
>
> root@r-3480-VM:~#
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
> group default qlen 1
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast 
> state UP group default qlen 1000
>     link/ether 1e:00:91:00:00:33 brd ff:ff:ff:ff:ff:ff
>     inet 158.xx.xx.226/28 brd 158.xx.xx.239 scope global eth0
>        valid_lft forever preferred_lft forever
>     inet 167.xxx.xx.246/28 brd 167.xxx.xx.255 scope global eth0
>        valid_lft forever preferred_lft forever
>     inet 149.xx.xxx.80/27 brd 149.xx.xxx.95 scope global eth0
>        valid_lft forever preferred_lft forever
>     inet 192.xx.xxx.79/26 brd 192.xx.xxx.127 scope global eth0
>        valid_lft forever preferred_lft forever
>     inet 198.xx.xxx.162/27 brd 198.xx.xxx.191 scope global eth0
>        valid_lft forever preferred_lft forever
>     inet 149.xx.xxx.99/27 brd 149.xx.xxx.127 scope global eth0
>        valid_lft forever preferred_lft forever
>     inet 144.xxx.xx.199/27 brd 144.xxx.xx.223 scope global eth0
>        valid_lft forever preferred_lft forever
>     inet 144.xxx.xxx.177/27 brd 144.xxx.xxx.191 scope global eth0
>        valid_lft forever preferred_lft forever
>     inet 66.xxx.xxx.133/27 brd 66.xx.xxx.159 scope global eth0
>        valid_lft forever preferred_lft forever
> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast 
> state UP group default qlen 1000
>     link/ether 02:00:57:d0:02:14 brd ff:ff:ff:ff:ff:ff
>     inet 172.11.0.167/24 brd 172.11.0.255 scope global eth1
>        valid_lft forever preferred_lft forever
> root@r-3480-VM:~#
>
>
> Regards,
> Cristian
>
> On Fri, 19 Jun 2020 at 21:40, Cristian Ciobanu 
> <cristian.c@istream.today>
> wrote:
>
> > Hello,
> >
> >    This is what I tried first: restarting the VM before trying to reset
> > the password.
> >    The line you ask about was from the messages log file (VR). BTW, I
> > saw that there is now a local IP assigned to the system VM router.
> > Until now only a public IP was assigned; I am not sure if this has
> > something to do with it.
> >
> > Regards,
> > Cristian
> >
> > On Fri, 19 Jun 2020, 19:04 Andrija Panic, <andrija.panic@gmail.com> wrote:
> >
> >> After the upgrade to 4.14, I assume you have restarted all existing
> >> VRs or networks (new VRs are created from the new systemVM template
> >> for 4.14)?
> >>
> >> If you see the password inside this file (and you do, as it seems) -
> >> that means that the VM (script) did not fetch the password. When the
> >> VM fetches the password, the file will say "saved" instead of the
> >> actual password.
> >> Can you reboot the VM once more - and do NOT reset the password in the
> >> meantime? Check the logs after that and see whether the password was
> >> changed.
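> >>
> >> If you want to test the fetch by hand from inside the guest, the
> >> cloud-set-guest-password script essentially does an HTTP request like
> >> this (a sketch; 192.xx.xxx.79 is the password server / VR IP from your
> >> logs):
> >>
> >> wget -q -t 3 -T 20 -O - --header "DomU_Request: send_my_password" http://192.xx.xxx.79:8080
> >>
> >> The server returns the password, or "saved_password" if the guest has
> >> already acknowledged it (which it does by sending a second request
> >> with the header "DomU_Request: saved_password").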
> >>
> >> What about this line?
> >>
> >> Jun 19 14:08:04 systemvm passwd_server_ip.py: serve_password:
> >> password saved for VM IP 192.xx.xxx.80
> >>
> >> This indicates that the password was sent to the VM.
> >>
> >> Regards,
> >> Andrija
> >>
> >>
> >> On Fri, 19 Jun 2020 at 17:11, <cristian.c@istream.today> wrote:
> >>
> >> > Hi Andrija,
> >> >
> >> >    Please see :
> >> >
> >> >
> >> > root@r-2705-VM:~# cat /var/cache/cloud/passwords-192.xx.xxx.79
> >> > 192.xx.xxx.108=Yj4AZj
> >> > 192.xx.xxx.101=pnj6dD
> >> > 192.xx.xxx.115=Q7wyGw
> >> > 192.xx.xxx.80=y2sS7E
> >> >
> >> >
> >> > y2sS7E
> >> >
> >> > VM IP : 192.xx.xxx.80
> >> >
> >> >
> >> > Regards,
> >> > Cristian
> >> >
> >> > -----Original Message-----
> >> > From: Andrija Panic <andrija.panic@gmail.com>
> >> > Sent: Friday, June 19, 2020 5:56 PM
> >> > To: users <users@cloudstack.apache.org>
> >> > Subject: Re: cloud: Password server at 192.xx.1xx.79 did not have 
> >> > any password for the VM - After upgrade to ACS 4.14
> >> >
> >> > Can you reset a password for a VM, boot the VM and then provide 
> >> > the content of the file /var/cache/cloud/password* from the VR?
> >> >
> >> > Regards,
> >> > Andrija
> >> >
> >> > On Fri, 19 Jun 2020 at 16:20, <cristian.c@istream.today> wrote:
> >> >
> >> > > Hello folks,
> >> > >
> >> > >
> >> > >
> >> > >      I have successfully upgraded my CloudStack 4.11 to 4.14 (VMware
> >> > > with Basic Networking). Everything works except passwords for VMs. I
> >> > > did multiple tests with different OSes; it looks like it is not
> >> > > working anymore. Any idea why?
> >> > >
> >> > >
> >> > >
> >> > > VM log:
> >> > >
> >> > >
> >> > >
> >> > >
> >> > >
> >> > > Jun 19 10:57:14 localhost cloud-set-guest-password: Starting
> >> > > cloud-set-guest-password:  [  OK  ]
> >> > >
> >> > > Jun 19 10:57:14 localhost cloud-set-guest-sshkey: Starting
> >> > > cloud-set-guest-sshkey:  [  OK  ]
> >> > >
> >> > > Jun 19 10:57:14 localhost cloud: Sending request to ssh key server
> >> > > at 192.xx.xxx.79
> >> > >
> >> > > Jun 19 10:57:14 localhost cloud: Found password server IP
> >> > > 192.xx.xxx.79 in
> >> > > /var/lib/NetworkManager/dhclient-6395a6b2-9b5d-4daa-86bd-343e5b823d5e-eno16777752.lease
> >> > >
> >> > > Jun 19 10:57:14 localhost cloud: Sending request to password server
> >> > > at 192.xx.xxx.79
> >> > >
> >> > > Jun 19 10:57:15 localhost systemd: Started Dynamic System Tuning
> >> > > Daemon.
> >> > >
> >> > > Jun 19 10:57:15 localhost systemd: Started Postfix Mail Transport
> >> > > Agent.
> >> > >
> >> > > Jun 19 10:57:15 localhost kdumpctl: No memory reserved for crash
> >> > > kernel.
> >> > >
> >> > > Jun 19 10:57:15 localhost kdumpctl: Starting kdump: [FAILED]
> >> > >
> >> > > Jun 19 10:57:15 localhost systemd: kdump.service: main process
> >> > > exited, code=exited, status=1/FAILURE
> >> > >
> >> > > Jun 19 10:57:15 localhost systemd: Failed to start Crash 
> >> > > recovery kernel arming.
> >> > >
> >> > > Jun 19 10:57:15 localhost systemd: Unit kdump.service entered
> >> > > failed state.
> >> > >
> >> > > Jun 19 10:57:15 localhost systemd: kdump.service failed.
> >> > >
> >> > > Jun 19 10:57:42 localhost systemd: Created slice user-0.slice.
> >> > >
> >> > > Jun 19 10:57:42 localhost systemd: Starting user-0.slice.
> >> > >
> >> > > Jun 19 10:57:42 localhost systemd: Started Session 1 of user root.
> >> > >
> >> > > Jun 19 10:57:42 localhost systemd-logind: New session 1 of user root.
> >> > >
> >> > > Jun 19 10:57:42 localhost systemd: Starting Session 1 of user root.
> >> > >
> >> > > Jun 19 10:58:17 localhost cloud: Failed to get ssh keys from 
> >> > > any server
> >> > >
> >> > > Jun 19 10:58:17 localhost systemd: cloud-set-guest-sshkey.service:
> >> > > main process exited, code=exited, status=1/FAILURE
> >> > >
> >> > > Jun 19 10:58:17 localhost systemd: Failed to start Cloud Set 
> >> > > Guest SSHKey Service.
> >> > >
> >> > > Jun 19 10:58:17 localhost systemd: Unit cloud-set-guest-sshkey.service
> >> > > entered failed state.
> >> > >
> >> > > Jun 19 10:58:17 localhost systemd: cloud-set-guest-sshkey.service failed.
> >> > >
> >> > > Jun 19 10:58:17 localhost cloud: Got response from server at
> >> > > 192.xx.xxx.79
> >> > >
> >> > > Jun 19 10:58:17 localhost cloud: Password server at
> >> > > 192.xx.xxx.79 did not have any password for the VM
> >> > >
> >> > > Jun 19 10:58:17 localhost cloud: Did not need to change password.
> >> > >
> >> > > Jun 19 10:58:17 localhost systemd: Started Cloud Set Guest 
> >> > > Password Service.
> >> > >
> >> > > Jun 19 10:58:17 localhost systemd: Reached target Multi-User System.
> >> > >
> >> > > Jun 19 10:58:17 localhost systemd: Starting Multi-User System.
> >> > >
> >> > > Jun 19 10:58:17 localhost systemd: Started Stop Read-Ahead Data 
> >> > > Collection 10s After Completed Startup.
> >> > >
> >> > > Jun 19 10:58:17 localhost systemd: Starting Update UTMP about 
> >> > > System Runlevel Changes...
> >> > >
> >> > > Jun 19 10:58:17 localhost systemd: Started Update UTMP about 
> >> > > System Runlevel Changes.
> >> > >
> >> > > Jun 19 10:58:17 localhost systemd: Startup finished in 521ms (kernel)
> >> > > + 1.563s (initrd) + 1min 10.596s (userspace) = 1min 12.681s.
> >> > >
> >> > >
> >> > >
> >> > >
> >> > >
> >> > > Router Log :
> >> > >
> >> > > Jun 19 12:15:24 systemvm cloud: VR config: create file success
> >> > >
> >> > > Jun 19 12:15:24 systemvm cloud: VR config: executing:
> >> > > /opt/cloud/bin/update_config.py 
> >> > > vm_metadata.json.a997727c-51e2-4730-b1e4-5a033cf8672f
> >> > >
> >> > > Jun 19 12:15:24 systemvm cloud: VR config: execution success
> >> > >
> >> > > Jun 19 12:15:24 systemvm cloud: VR config: creating file:
> >> > > /var/cache/cloud/vm_metadata.json.079f12c9-45d1-48e0-986c-0ed68f764128
> >> > >
> >> > > Jun 19 12:15:24 systemvm cloud: VR config: create file success
> >> > >
> >> > > Jun 19 12:15:24 systemvm cloud: VR config: executing:
> >> > > /opt/cloud/bin/update_config.py
> >> > > vm_metadata.json.079f12c9-45d1-48e0-986c-0ed68f764128
> >> > >
> >> > > Jun 19 12:08:48 systemvm cloud: VR config: execution success
> >> > >
> >> > > Jun 19 12:08:50 systemvm cloud: VR config: Flushing conntrack 
> >> > > table
> >> > >
> >> > > Jun 19 12:08:50 systemvm cloud: VR config: Flushing conntrack 
> >> > > table completed
> >> > >
> >> > > Jun 19 12:13:46 systemvm kernel: [  320.771392] nf_conntrack: default
> >> > > automatic helper assignment has been turned off for security reasons
> >> > > and CT-based firewall rule not found. Use the iptables CT target to
> >> > > attach helpers instead.
> >> > >
> >> > > Jun 19 13:38:36 systemvm passwd_server_ip.py: serve_password:
> >> > > password saved for VM IP 192.xx.xxx.101
> >> > >
> >> > > Jun 19 13:47:24 systemvm passwd_server_ip.py: serve_password:
> >> > > password saved for VM IP 192.xx.xxx.101
> >> > >
> >> > > Jun 19 13:53:00 systemvm passwd_server_ip.py: serve_password:
> >> > > password saved for VM IP 192.xx.xxx.108
> >> > >
> >> > > Jun 19 14:05:22 systemvm passwd_server_ip.py: serve_password:
> >> > > password saved for VM IP 192.xx.xxx.108
> >> > >
> >> > > Jun 19 14:08:04 systemvm passwd_server_ip.py: serve_password:
> >> > > password saved for VM IP 192.xx.xxx.80
> >> > >
> >> > >
> >> > >
> >> > > 2020-06-19 14:08:19,737 INFO     Executing: systemctl start
> >> > > cloud-password-server@192.xx.xxx.79
> >> > >
> >> > > 2020-06-19 14:08:19,742 INFO     Service cloud-password-server@192.xx.xxx.79 start
> >> > >
> >> > > 2020-06-19 14:08:19,742 INFO     Checking if default ipv4 route is present
> >> > >
> >> > > 2020-06-19 14:08:19,742 INFO     Executing: ip -4 route list 0/0
> >> > >
> >> > > 2020-06-19 14:08:19,744 INFO     Default route found: default via
> >> > > 158.xx.xx.238 dev eth0
> >> > >
> >> > > 2020-06-19 14:08:19,744 INFO     Address found in DataBag ==>
> >> > > {u'public_ip': u'198.xx.xxx.162', u'nic_dev_id': u'0', u'network':
> >> > > u'198.xxx.xxx.160/27', u'netmask': u'255.255.255.224', u'broadcast':
> >> > > u'198.xxx.xxx.191', u'add': True, u'nw_type': u'guest', u'device':
> >> > > u'eth0', u'cidr': u'198.xxx.xxx.162/27', u'size': u'27'}
> >> > >
> >> > > 2020-06-19 14:08:19,744 INFO     Address 198.xx.xxx.162/27 on device
> >> > > eth0 already configured
> >> > >
> >> > > 2020-06-19 14:08:19,744 INFO     Adding route table: 100 Table_eth0
> >> > > to /etc/iproute2/rt_tables if not present
> >> > >
> >> > > 2020-06-19 14:08:19,744 INFO     Executing: ip rule show
> >> > >
> >> > > 2020-06-19 14:08:19,746 INFO     Executing: ip rule show
> >> > >
> >> > > 2020-06-19 14:08:19,748 INFO     Executing: ip link show eth0 | grep
> >> > > 'state DOWN'
> >> > >
> >> > >
> >> > > Best regards,
> >> > >
> >> > > Cristian
> >> > >
> >> > >
> >> >
> >> > --
> >> >
> >> > Andrija Panić
> >> >
> >> >
> >>
> >> --
> >>
> >> Andrija Panić
> >>
> >
>


-- 

Andrija Panić

