mercurial/repocache.py @ 44087:ffd04bc9f57d
copies: move from a copy on branchpoint to a copy on write approach
Before this change, any branch point resulted in a copy of the dictionary containing the
copy information. This can be very costly for branchy histories with little rename
information. Instead, we take a "copy on write" approach, copying the input data
only when we are about to update it.

In practice we were already doing the copying in half of these cases (because
`_chain` makes a copy), so we do not add a significant cost here even in the
linear case. However, the speedup in the branchy case is very significant. Here are
some timings on the pypy repository.
revision: large amount; added files: large amount; rename small amount; c3b14617fbd7 9ba6ab77fd29
before: ! wall 1.399863 comb 1.400000 user 1.370000 sys 0.030000 (median of 10)
after: ! wall 0.766453 comb 0.770000 user 0.750000 sys 0.020000 (median of 11)
revision: large amount; added files: small amount; rename small amount; c3b14617fbd7 f650a9b140d2
before: ! wall 1.876748 comb 1.890000 user 1.870000 sys 0.020000 (median of 10)
after: ! wall 1.167223 comb 1.170000 user 1.150000 sys 0.020000 (median of 10)
revision: large amount; added files: large amount; rename large amount; 08ea3258278e d9fa043f30c0
before: ! wall 0.242457 comb 0.240000 user 0.240000 sys 0.000000 (median of 39)
after: ! wall 0.211476 comb 0.210000 user 0.210000 sys 0.000000 (median of 45)
revision: small amount; added files: large amount; rename large amount; df6f7a526b60 a83dc6a2d56f
before: ! wall 0.013193 comb 0.020000 user 0.020000 sys 0.000000 (median of 224)
after: ! wall 0.013290 comb 0.010000 user 0.010000 sys 0.000000 (median of 222)
revision: small amount; added files: large amount; rename small amount; 4aa4e1f8e19a 169138063d63
before: ! wall 0.001673 comb 0.000000 user 0.000000 sys 0.000000 (median of 1000)
after: ! wall 0.001677 comb 0.000000 user 0.000000 sys 0.000000 (median of 1000)
revision: small amount; added files: small amount; rename small amount; 4bc173b045a6 964879152e2e
before: ! wall 0.000119 comb 0.000000 user 0.000000 sys 0.000000 (median of 8023)
after: ! wall 0.000119 comb 0.000000 user 0.000000 sys 0.000000 (median of 7997)
revision: medium amount; added files: large amount; rename medium amount; c95f1ced15f2 2c68e87c3efe
before: ! wall 0.201898 comb 0.210000 user 0.200000 sys 0.010000 (median of 48)
after: ! wall 0.167415 comb 0.170000 user 0.160000 sys 0.010000 (median of 58)
revision: medium amount; added files: medium amount; rename small amount; d343da0c55a8 d7746d32bf9d
before: ! wall 0.036820 comb 0.040000 user 0.040000 sys 0.000000 (median of 100)
after: ! wall 0.035797 comb 0.040000 user 0.040000 sys 0.000000 (median of 100)
The extra cost in the linear case can be reclaimed later with some extra logic.
Differential Revision: https://phab.mercurial-scm.org/D7124
author:   Pierre-Yves David <pierre-yves.david@octobus.net>
date:     Tue, 15 Oct 2019 18:23:34 +0200
parents:  8ff1ecfadcd1
children: (none)
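The copy-on-write idea described in the changeset above can be shown with a minimal sketch. This is not the actual mercurial/copies.py code; `add_copies`, its arguments, and the chaining rule are simplified assumptions for illustration only.

    # Illustrative sketch only (not mercurial/copies.py): while walking the
    # changesets, the copy map inherited from a parent is shared as-is and is
    # duplicated only the first time the current revision adds a rename on top
    # of it.

    def add_copies(parent_copies, new_copies):
        """Return the copy map for a child revision.

        ``parent_copies`` is never mutated; other branches keep sharing it.
        A private copy is made lazily, on the first write.
        """
        result = parent_copies  # share the parent's dict until a write is needed
        writable = False
        for dst, src in new_copies.items():
            # chain through the parent: if ``src`` was itself copied from
            # ``old``, record ``dst`` as ultimately coming from ``old``.
            value = parent_copies.get(src, src)
            if result.get(dst) == value:
                continue  # nothing changes, keep sharing the parent's dict
            if not writable:
                result = dict(parent_copies)  # copy on (first) write
                writable = True
            result[dst] = value
        return result

When a revision introduces no renames, all of its descendants keep sharing the parent's dictionary; a copy is only paid for by revisions that actually add entries, which is what makes branchy histories with little rename information cheap.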
Source of mercurial/repocache.py at this revision:
# repocache.py - in-memory repository cache for long-running services
#
# Copyright 2018 Yuya Nishihara <yuya@tcha.org>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import collections
import gc
import threading

from . import (
    error,
    hg,
    obsolete,
    scmutil,
    util,
)


class repoloader(object):
    """Load repositories in background thread

    This is designed for a forking server. A cached repo cannot be obtained
    until the server fork()s a worker and the loader thread stops.
    """

    def __init__(self, ui, maxlen):
        self._ui = ui.copy()
        self._cache = util.lrucachedict(max=maxlen)
        # use deque and Event instead of Queue since deque can discard
        # old items to keep at most maxlen items.
        self._inqueue = collections.deque(maxlen=maxlen)
        self._accepting = False
        self._newentry = threading.Event()
        self._thread = None

    def start(self):
        assert not self._thread
        if self._inqueue.maxlen == 0:
            # no need to spawn loader thread as the cache is disabled
            return
        self._accepting = True
        self._thread = threading.Thread(target=self._mainloop)
        self._thread.start()

    def stop(self):
        if not self._thread:
            return
        self._accepting = False
        self._newentry.set()
        self._thread.join()
        self._thread = None
        self._cache.clear()
        self._inqueue.clear()

    def load(self, path):
        """Request to load the specified repository in background"""
        self._inqueue.append(path)
        self._newentry.set()

    def get(self, path):
        """Return a cached repo if available

        This function must be called after fork(), where the loader thread
        is stopped. Otherwise, the returned repo might be updated by the
        loader thread.
        """
        if self._thread and self._thread.is_alive():
            raise error.ProgrammingError(
                b'cannot obtain cached repo while loader is active'
            )
        return self._cache.peek(path, None)

    def _mainloop(self):
        while self._accepting:
            # Avoid heavy GC after fork(), which would cancel the benefit of
            # COW. We assume that GIL is acquired while GC is underway in the
            # loader thread. If that isn't true, we might have to move
            # gc.collect() to the main thread so that fork() would never stop
            # the thread where GC is in progress.
            gc.collect()
            self._newentry.wait()
            while self._accepting:
                self._newentry.clear()
                try:
                    path = self._inqueue.popleft()
                except IndexError:
                    break
                scmutil.callcatch(self._ui, lambda: self._load(path))

    def _load(self, path):
        start = util.timer()
        # TODO: repo should be recreated if storage configuration changed
        try:
            # pop before loading so inconsistent state wouldn't be exposed
            repo = self._cache.pop(path)
        except KeyError:
            repo = hg.repository(self._ui, path).unfiltered()
        _warmupcache(repo)
        repo.ui.log(
            b'repocache',
            b'loaded repo into cache: %s (in %.3fs)\n',
            path,
            util.timer() - start,
        )
        self._cache.insert(path, repo)


# TODO: think about proper API of preloading cache
def _warmupcache(repo):
    repo.invalidateall()
    repo.changelog
    repo.obsstore._all
    repo.obsstore.successors
    repo.obsstore.predecessors
    repo.obsstore.children
    for name in obsolete.cachefuncs:
        obsolete.getrevs(repo, name)
    repo._phasecache.loadphaserevs(repo)


# TODO: think about proper API of attaching preloaded attributes
def copycache(srcrepo, destrepo):
    """Copy cached attributes from srcrepo to destrepo"""
    destfilecache = destrepo._filecache
    srcfilecache = srcrepo._filecache
    if b'changelog' in srcfilecache:
        destfilecache[b'changelog'] = ce = srcfilecache[b'changelog']
        ce.obj.opener = ce.obj._realopener = destrepo.svfs
    if b'obsstore' in srcfilecache:
        destfilecache[b'obsstore'] = ce = srcfilecache[b'obsstore']
        ce.obj.svfs = destrepo.svfs
    if b'_phasecache' in srcfilecache:
        destfilecache[b'_phasecache'] = ce = srcfilecache[b'_phasecache']
        ce.obj.opener = destrepo.svfs
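For context on how this class is meant to be driven, here is a minimal usage sketch of a pre-forking server. The `serve` loop, the repository paths, and the fork handling are assumptions for illustration; only `repoloader` and its methods come from the file above.

    import os

    from mercurial import repocache
    from mercurial import ui as uimod


    def serve(paths):
        # paths: hypothetical list of repository paths, as bytes
        # (e.g. b'/srv/repos/foo')
        ui = uimod.ui.load()
        loader = repocache.repoloader(ui, maxlen=len(paths))
        loader.start()
        for path in paths:
            # queue background loading; entries beyond maxlen are discarded
            loader.load(path)

        # ... wait until a request arrives, then fork a worker for it ...
        pid = os.fork()
        if pid == 0:
            # child: the loader thread did not survive fork(), so get() is
            # allowed; it returns None if the repo is not (yet) cached.
            repo = loader.get(paths[0])
            # ... serve the request with `repo`, or open it normally if None ...
            os._exit(0)
        os.waitpid(pid, 0)

        # parent: shut down the loader thread and drop the cache when done
        loader.stop()

The point of the design is that the expensive repository loading and cache warm-up happen once in the parent process, and each fork()ed worker inherits the warmed objects through copy-on-write memory, which is also why `_mainloop` calls gc.collect() before blocking.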