Hi,
since your lvs output shows duplicate PVs, I assume the filter in your
lvm.conf isn't set correctly.
This would also explain the listing of the (XenServer-unrelated)
vg_srv1/lv_* volumes, which leak in through nested LVM.
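
As an illustration only (the device names are my assumptions -- adjust
them to whatever actually backs your SR), a filter that hides both the
redundant SCSI paths and the nested guest PVs could look like:

    # /etc/lvm/lvm.conf -- sketch only; first matching pattern wins
    devices {
        filter = [
            "r|/dev/VG_XenStorage.*|",  # don't scan inside the SR's own LVs
            "r|/dev/sd.*|",             # hide raw paths already wrapped by multipath
            "a|.*|"                     # accept the rest (mapper devices etc.)
        ]
    }

You can check which device paths report the same PV UUID with
"pvs -o pv_name,vg_name,pv_uuid" before and after the change.
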
Btw, I'd rather use something like

vhd-util scan -f -m "VHD-*" -l VG_XenStorage-3e26eaad-befd-fb47-82ad-b8f2bec1378e -p

to get a proper view of your VHD LVs.
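(From memory: -f scans fast via the VHD footers, -m filters LV names by
glob, -l limits the scan to that volume group, and -p pretty-prints the
parent/child tree, so you can see which VHDs chain to which. If the VG
name isn't at hand, something like

    vgs -o vg_name | grep VG_XenStorage

should list the SR-backed volume groups.)
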
- Stephan
On Tuesday, 09.08.2016 at 21:43 +0300, Mindaugas Milinavičius wrote:
> 56G
> 50G and 4G - I have no idea what those are...
>
>
> # lvs
>   Found duplicate PV aVgL0a29JUALu5j3MJZb5iFHRKQhOJi0: using /dev/sdi3 not /dev/sdb3
>   LV                                        VG                                                  Attr    LSize   Origin Snap%  Move Log Copy%  Convert
>   MGT                                       VG_XenStorage-3e26eaad-befd-fb47-82ad-b8f2bec1378e  -wi-a-   4.00M
>   VHD-0b4dab04-4b0b-4fbb-a847-5818a9b28e66  VG_XenStorage-3e26eaad-befd-fb47-82ad-b8f2bec1378e  -ri-ao   2.94G
>   VHD-337c4cac-8027-4dfa-8739-f1846ba2dc24  VG_XenStorage-3e26eaad-befd-fb47-82ad-b8f2bec1378e  -wi---   2.94G
>   VHD-6fecae1a-cbbf-4e20-8201-de634e2a2be4  VG_XenStorage-3e26eaad-befd-fb47-82ad-b8f2bec1378e  -wi---   2.94G
>   VHD-756e318f-a958-45d5-9837-46b0c91b4293  VG_XenStorage-3e26eaad-befd-fb47-82ad-b8f2bec1378e  -wi-ao   2.94G
>   hb-0a0de3c4-e181-4424-af7c-798ebd38269b   VG_XenStorage-3e26eaad-befd-fb47-82ad-b8f2bec1378e  -wi---   4.00M
>   hb-8940634c-1203-44da-bb9b-73193f160eb7   VG_XenStorage-3e26eaad-befd-fb47-82ad-b8f2bec1378e  -wi---   4.00M
>   hb-d5929bfd-bae8-4ed0-bc1e-6e7f5a1987b7   VG_XenStorage-3e26eaad-befd-fb47-82ad-b8f2bec1378e  -wi-a-   4.00M
>   lv_home                                   vg_srv1                                             -wi---  56.14G
>   lv_root                                   vg_srv1                                             -wi---  50.00G
>   lv_swap                                   vg_srv1                                             -wi---   4.00G
>
> Regards,
> Mindaugas Milinavičius
> UAB STARNITA
> Director
> http://www.clustspace.com
> LT: +37068882880
> RU: +79199993933
>
> Tomorrow's possibilities today
>
> - 1 core CPU, 512MB RAM, 20GB (€ 5.00)
> - 1 core CPU, 1GB RAM, 30GB (€ 10.00)
> - 2 core CPU, 2GB RAM, 40GB (€ 20.00)
> - 2 core CPU, 4GB RAM, 60GB (€ 40.00)
> - 4 core CPU, 8GB RAM, 80GB (€ 80.00)
> - 8 core CPU, 16GB RAM, 160GB (€ 160.00)
>
>
> On Tue, Aug 9, 2016 at 9:32 PM, Makrand <makrandsanap@gmail.com>
> wrote:
>
> >
> > I've learned a few facts about XenServer in the last couple of days.
> >
> > e.g. on XenServer, when you take a snapshot, XenServer will create 2
> > VDIs (the base VDI + a placeholder for the snapshot) on the same
> > primary storage (SR in XenServer terms) as the disk. You will of
> > course also have a vhd file saved on secondary storage. Here is the
> > funny part: when you delete the snapshot from CloudStack, XenServer
> > won't do anything to remove these additionally created VDIs.
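> >
> > In my experience an SR scan sometimes kicks off the VHD garbage
> > collector / coalesce job that reclaims such leftover VDIs, so it is
> > worth trying before deleting anything by hand:
> >
> > xe sr-scan uuid=<SR-UUID>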
> >
> > Plus, XenServer will copy and create a template for the VR on each
> > individual host's SR. This space is not visible in CloudStack.
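> >
> > You can list the templates the pool is holding with e.g.
> >
> > xe template-list params=uuid,name-label
> >
> > (which ones are the CloudStack-created copies you'll have to judge
> > from the name-labels -- verify before touching anything).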
> >
> > Check things at the SR level in XenCenter. You can delete any
> > template entries etc. (BE CAREFUL)
> >
> > Also try digging in with the command line:
> >
> > 1) xe vdi-list sr-name-label=<LUNNAME> params=uuid,name-label,name-description,physical-utilisation,virtual-size,is-a-snapshot,sm-config
> >
> > This will give you all VDIs present on that storage (pay attention
> > to the is-a-snapshot=true ones).
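> >
> > You can also filter the list down to snapshots directly, e.g.
> >
> > xe vdi-list sr-name-label=<LUNNAME> is-a-snapshot=true params=uuid,name-label
> >
> > and since the LVs on an LVM-based SR are named VHD-<vdi-uuid>, a
> > suspicious VDI can be cross-checked against the LV list with
> > something like "lvs | grep <VDI-UUID>".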
> >
> > 2) lvs
> >
> > This will give you a summary of all the LVs on the SRs. Note that
> > the last entry in this example (Attr=-ri---) is a snapshot.
> >
> > lvs
> >   LV                                        VG                                                  Attr    LSize   Origin Snap%  Move Log Copy%  Convert
> >   MGT                                       VG_XenStorage-27c5343c-422a-1ee9-0df5-50a15c7f2437  -wi-a-    4.00M
> >   VHD-00ac9fd1-26d3-4c45-9680-bbf3b253c7e1  VG_XenStorage-27c5343c-422a-1ee9-0df5-50a15c7f2437  -wi---    3.34G
> >   VHD-15bb4af8-99a0-4425-8227-50a97dc04a8c  VG_XenStorage-27c5343c-422a-1ee9-0df5-50a15c7f2437  -ri---    2.70G
> >   VHD-19ed4499-7592-4fe3-8fc3-fbcbcdfcdc51  VG_XenStorage-27c5343c-422a-1ee9-0df5-50a15c7f2437  -wi---    8.00M
> >   VHD-1b7e2b7d-3dc5-4f33-b126-4197b59c787f  VG_XenStorage-27c5343c-422a-1ee9-0df5-50a15c7f2437  -wi---    8.00M
> >   VHD-1c9fb2a0-1a9f-49f2-80a4-6047d56ca0c8  VG_XenStorage-27c5343c-422a-1ee9-0df5-50a15c7f2437  -wi---  250.50G
> >   VHD-28a79d76-d3ef-4d9d-8773-a888a559d15d  VG_XenStorage-27c5343c-422a-1ee9-0df5-50a15c7f2437  -ri---    3.13G
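> >
> > For reference, my reading of the lv_attr column here (see "man lvs"
> > for the full table):
> >
> >   -wi-ao   w = writable, a = active, o = open   (a disk in use)
> >   -ri---   r = read-only, inactive, not open    (snapshot node)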
> >
> > I have had cases where I had to remove some entries manually just to
> > regain free space. For me it's ACS 4.4.2 and XenServer 6.2.
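> >
> > When I clean up by hand I go through the XAPI layer rather than
> > plain lvremove, so the SR metadata stays consistent -- roughly:
> >
> > xe vdi-destroy uuid=<VDI-UUID>
> >
> > followed by an sr-scan (as above) to resync the SR metadata.
> > Triple-check the uuid first; vdi-destroy is unrecoverable.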
> >
> > Good luck with your troubleshooting
> >
> >
> > --
> > Best,
> > Makrand
> >
> >
> > On Tue, Aug 9, 2016 at 11:38 PM, Mindaugas Milinavičius <
> > mindaugas@clustspace.com> wrote:
> >
> > >
> > > Hello,
> > >
> > > CloudStack version: 4.7.1
> > > Hypervisor type: XenServer
> > > Primary storage: ScaleIO with lvmohba (PreSetup).
> > >
> > > I think it can be related to the expunge time, because it was set
> > > to expunge the VM 24h after deletion.
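> > >
> > > For reference, the global settings involved (values in seconds, if
> > > I read the docs right):
> > >
> > > expunge.delay    = 86400   # 24h before a destroyed VM is expunged
> > > expunge.interval = 86400   # how often the expunge thread runs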
> > >