subversion-users mailing list archives

From Stefan Sperling <>
Subject Re: svn version 1.10 lack of robustness in presence of flaky network
Date Wed, 24 Apr 2019 06:54:16 GMT
On Wed, Apr 24, 2019 at 12:55:47AM +0200, Johan Corveleyn wrote:
> On Mon, Apr 22, 2019 at 9:22 AM Marlow, Andrew
> <> wrote:
> > Hello everyone,
> >
> > I got this error below during an svn co command. It left my workspace in a bad state
> > from which I had to do svn cleanup before trying again (the retry worked):
> >
> > svn: E200033: Another process is blocking the working copy database, or the underlying
> > filesystem does not support file locking; if the working copy is on a network filesystem,
> > make sure file locking has been enabled on the file server
> > svn: E200033: sqlite[S5]: database is locked
> > svn: E200042: Additional errors:
> > svn: E200033: sqlite[S5]: database is locked
> > svn: E200030: sqlite[S1]: cannot start a transaction within a transaction
> > svn: E200030: sqlite[S1]: cannot start a transaction within a transaction
> >
> > I think this happens when the network is flaky. This error happened on Windows but
> > I have also seen it happen on Solaris 10. Has anyone else seen this? If it is due to network
> > flakiness then perhaps svn should retry to work around this transparently, and thus be more
> > robust? Perhaps it could retry up to 3 times with a sleep of 1 second between retries?
> >
> Is your working copy on a network filesystem (CIFS, NFS, ...)? And are
> you talking about a flaky network between your machine and its
> networked filesystem? If so, I think there is not much we can do about
> it ... the filesystem you're checking out to should be reliable. There is
> already a retry loop in some places for putting checked-out files into
> place, to work around locks from antivirus software etc. (but the
> sqlite database itself should be reliably available).

While working copies on network filesystems should generally work,
such use is strongly discouraged.

So far, all the reasons I've heard for putting working copies on network
drives have turned out to be backed by bad or misinformed decisions about
the development process or the allocation of hardware resources.
In such cases, moving to local disks not only improved the SVN user
experience but also repaired a broken process.

So put working copies on a local disk, preferably an SSD. Working copies
should be considered temporary and disposable copies of your data.
The repository on the server is the important and permanent database
which must be protected and backed up, not the working copy.

If your working copy is too large for a modern SSD (really?), consider
sparse working copies and/or reorganize your project such that parts of
it can be checked out and built in isolation.
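For reference, a sparse working copy is set up with the --depth option
on checkout and deepened later with 'svn update --set-depth'. A minimal
sketch against a throwaway local repository (all paths and names below
are illustrative, not part of the original message):

```shell
# Create a throwaway local repository with a trunk/module-a layout
# (repository and working-copy paths are purely illustrative).
svnadmin create /tmp/sparse-demo-repo
svn mkdir --parents -m "initial layout" \
    file:///tmp/sparse-demo-repo/trunk/module-a

# Check out only the immediate children of trunk...
svn checkout --depth=immediates \
    file:///tmp/sparse-demo-repo/trunk /tmp/sparse-demo-wc

# ...then deepen just the subtree you actually need.
svn update --set-depth=infinity /tmp/sparse-demo-wc/module-a
```

Subdirectories left at their shallow depth take no disk space and are
skipped by update, which keeps the working copy small.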

That said, if you really must use a network filesystem, you should
look into tweaking the following options in Subversion's 'config' file:

### Section for configuring working copies.
### Set to a list of the names of specific clients that should use
### exclusive SQLite locking of working copies.  This increases the
### performance of the client but prevents concurrent access by
### other clients.  Third-party clients may also support this
### option.
### Possible values:
###   svn                (the command line client)
# exclusive-locking-clients =
### Set to true to enable exclusive SQLite locking of working
### copies by all clients using the 1.8 APIs.  Enabling this may
### cause some clients to fail to work properly. This does not have
### to be set for exclusive-locking-clients to work.
# exclusive-locking = false
### Set the SQLite busy timeout in milliseconds: the maximum time
### the client waits to get access to the SQLite database before
### returning an error.  The default is 10000, i.e. 10 seconds.
### Longer values may be useful when exclusive locking is enabled.
# busy-timeout = 10000
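
These options live in the [working-copy] section of the runtime config
file (~/.subversion/config on Unix-like systems, %APPDATA%\Subversion\config
on Windows). As a hedged sketch, not a recommendation, a configuration
that enables exclusive locking for the command line client and doubles
the busy timeout would look like this (the timeout value is illustrative):

```ini
[working-copy]
exclusive-locking-clients = svn
busy-timeout = 20000
```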
