cloudstack-users mailing list archives

From Anshul Gangwar <anshul.gang...@accelerite.com>
Subject Re: Snapshot and secondary storage utilisation.
Date Mon, 10 Jul 2017 08:05:00 GMT
By default, XenServer takes delta snapshots, i.e. a snapshot of only the differential disk since the last snapshot taken. This is configurable via the global setting “snapshot.delta.max”, which controls how many delta snapshots are taken before a full snapshot is taken again; the default value is 16. If you don’t want that behaviour, set it to 0. Taking only differential-disk snapshots makes the snapshot operation faster, but at the cost of additional storage, because every delta in the chain back to the last full snapshot has to be kept on secondary storage.
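
In your listing, the 248G file from Apr 16 looks like the full snapshot and the smaller weekly files are the deltas on top of it; none of them can be cleaned up until a new full snapshot starts a fresh chain. If you prefer full snapshots every time, the setting can be changed in the UI under Global Settings or with CloudMonkey, for example (a sketch only; exact syntax may vary with your CloudMonkey version, and the change may need a management server restart to take effect):

    # change the global setting (syntax may differ between CloudMonkey versions)
    cloudmonkey update configuration name=snapshot.delta.max value=0
    # on the management server, if the new value does not take effect immediately
    service cloudstack-management restart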

Regards,
Anshul 

On 10/07/17, 12:43 PM, "Makrand" <makrandsanap@gmail.com> wrote:

    Hi all,
    
    My setup is: ACS 4.4, XenServer 6.2 SP1, 4 TB of secondary storage coming
    from NFS.
    
    I am observing some issues with the way .vhd files are stored and cleaned
    up on secondary storage. Let's take the example of VM-813. It has a 250G
    root disk (disk ID 1015). The snapshot is scheduled to happen once every
    week (Saturday night) and is supposed to keep only 1 snapshot. From the
    GUI I can see it is only keeping the latest week's snapshot.
    
    But the resource utilization shown in the CS GUI is increasing day by day.
    So I ran du -smh and found that there are multiple vhd files of different
    sizes under secondary storage.
    
    Here is a snippet:
    
    root@gcx-bom-cloudstack:/mnt/secondary2/snapshots/22# du -smhh *
    1.5K    1002
    1.5K    1003
    1.5K    1004
    243G    1015
    1.5K    1114
    
    root@gcx-bom-cloudstack:/mnt/secondary2/snapshots/22# ls -lht *
    1015:
    total 243G
    -rw-r--r-- 1 nobody nogroup  32G Jul  8 21:19 8a7e6580-5191-4eb0-9eb1-3ec8e75ce104.vhd
    -rw-r--r-- 1 nobody nogroup  40G Jul  1 21:30 f52b82b0-0eaf-4297-a973-1f5477c10b5e.vhd
    -rw-r--r-- 1 nobody nogroup  43G Jun 24 21:35 3dc72a3b-91ad-45ae-b618-9aefb7565edb.vhd
    -rw-r--r-- 1 nobody nogroup  40G Jun 17 21:30 c626a9c5-1929-4489-b181-6524af1c88ad.vhd
    -rw-r--r-- 1 nobody nogroup  29G Jun 10 21:16 697cf9bd-4433-426d-a4a1-545f03aae3e6.vhd
    -rw-r--r-- 1 nobody nogroup  29G Jun  3 21:00 bff859b3-a51c-4186-8c19-1ba94f99f9e7.vhd
    -rw-r--r-- 1 nobody nogroup  43G May 27 21:35 127e3f6e-4fa5-45ed-a95d-7d0b850a053d.vhd
    -rw-r--r-- 1 nobody nogroup  60G May 20 22:01 619fe1ed-6807-441c-9526-526486d7a6d2.vhd
    -rw-r--r-- 1 nobody nogroup  35G May 13 21:23 71b0d6a8-3c93-493f-b82c-732b7a808f6d.vhd
    -rw-r--r-- 1 nobody nogroup  31G May  6 21:19 ccbfb3ec-abd8-448c-ba79-36631b227203.vhd
    -rw-r--r-- 1 nobody nogroup  32G Apr 29 21:18 52215821-ed4d-4283-9aed-9f9cc5acd5bd.vhd
    -rw-r--r-- 1 nobody nogroup  38G Apr 22 21:26 4cb6ea42-8450-493a-b6f2-5be5b0594a30.vhd
    -rw-r--r-- 1 nobody nogroup 248G Apr 16 00:44 243f50d6-d06a-47af-ab45-e0b8599aac8d.vhd
    
    
    I observed the same behaviour for the root disks of 4 other VMs. So the
    number of vhds on secondary storage is ever growing, and one will
    eventually run out of secondary storage space.
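    
    To see which volumes are taking up the space, something like this can be
    run against the snapshots directory (adjust the mount path to your
    environment):
    
    du -sh /mnt/secondary2/snapshots/*/* | sort -h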
    
    Simple question:
    
    1) Why is CloudStack creating multiple vhd files? Shouldn't it keep only
    one vhd on secondary storage, as defined in the snapshot policy?
    
    Any thoughts? As explained earlier, from the GUI I can see only last
    week's snapshot showing as backed up.
    
    
    
    --
    Makrand
    
