subversion-users mailing list archives

From Johan Corveleyn <>
Subject Re: Questions about a script for regular backups
Date Tue, 27 Aug 2019 09:06:30 GMT
On Mon, Aug 26, 2019 at 9:01 PM Mark Phippard <> wrote:
> On Mon, Aug 26, 2019 at 1:29 PM Anton Shepelev <> wrote:
>> I have now set up a post-commit hook that makes an
>> --incremental hotcopy.  With the destination on the same
>> machine's HDD, it takes about two seconds, but with a
>> network share it lasts 30 seconds.  Is it expected behavior
>> for committing a tiny change in a text file?  If not, then
>> where shall I look for the possible performance problems?  I
>> have svn 1.8.16.
> It is probably due to the slowness of the I/O across the network to
> read what is in the target repository and then copy over the files.
> Other than tuning NFS (or whatever you are using) there is not much
> you can do. This is why my first recommendation was to use svnsync.
> You could have a second backup server running and then use svnsync
> via the https or svn protocol to that server. This basically replays
> the commit transaction, so it performs comparably to the original
> commit. It also makes it a lot easier to send the backup around the
> world or to another data center, since it uses a protocol that is
> meant for that sort of latency.
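For reference, a minimal svnsync mirror setup along those lines might look like this (the repository URLs and paths below are placeholders, not anything from this thread):

```shell
# On the backup server: create an empty repository to receive the mirror.
svnadmin create /svn/mirror-repos

# svnsync must set revision properties on the mirror, which Subversion
# disallows by default; permit it with a pre-revprop-change hook.
cat > /svn/mirror-repos/hooks/pre-revprop-change <<'EOF'
#!/bin/sh
exit 0
EOF
chmod +x /svn/mirror-repos/hooks/pre-revprop-change

# One-time step: point the mirror at the master repository.
svnsync initialize file:///svn/mirror-repos https://master.example.com/svn/my-repos

# Replay any outstanding revisions, e.g. from a post-commit hook or cron.
svnsync synchronize file:///svn/mirror-repos
```

In practice you would run `svnsync synchronize` from the master's post-commit hook (or a scheduled job) so the mirror stays close to real time.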

Does svnsync also copy locks and hook scripts?

Just to mention another option: since 1.8 there is the command
'svnadmin freeze', which locks the repository against writes while you
run another command. That way, you can use regular backup/copy tools
(like rsync) to create a consistent copy. See the example in the 1.8
release notes [1]:

    svnadmin freeze /svn/my-repos -- rsync -av /svn/my-repos /backup/my-repos

Of course, in contrast to hotcopy, the original repository is locked
for a (hopefully short) while, so users might see errors or timeouts
if the backup takes too long.
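And for completeness, the post-commit hook Anton describes (an
--incremental hotcopy on every commit) is roughly the following; the
backup path is a placeholder:

```shell
#!/bin/sh
# post-commit hook: Subversion invokes this with the repository path
# and the committed revision number as its two arguments.
REPOS="$1"
REV="$2"

# --incremental (new in 1.8) copies only what changed since the last
# hotcopy into an existing destination, instead of a full copy.
svnadmin hotcopy --incremental "$REPOS" /backup/my-repos
```

With a destination on a network share, each invocation still has to
stat and read the existing backup over the network, which is where the
30 seconds likely go.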


