* [PATCH 0/2] make repeated fetch faster on "read only" repos
From: Jeff King @ 2012-12-07 13:53 UTC
To: git
Like many dev shops, we run a CI server that basically does:
  git fetch $some_branch &&
  git checkout $some_branch &&
  make test
all day long. Sometimes the fetches would get very slow, and the problem
turned out to be a combination of:
  1. Never running "git gc". This means you can end up with a ton of
     loose objects, or even a bunch of small packs[1].

  2. One of the loops in fetch caused us to re-scan the entire
     objects/pack directory repeatedly, a number of times proportional
     to the number of refs on the remote.
I think the fundamental fix is to gc more often, since that makes the
re-scans less expensive and speeds up general object lookup. But the
repeated re-scans strike me as kind of hacky. This series tries to
address both:
[1/2]: fetch: run gc --auto after fetching
[2/2]: fetch-pack: avoid repeatedly re-scanning pack directory
-Peff
[1] It turns out we had our transfer.unpacklimit set unreasonably low,
    leading to a large number of tiny packs, but even with the defaults,
    you will end up with a ton of loose objects if you do repeated small
    fetches.
* [PATCH 1/2] fetch: run gc --auto after fetching
From: Jeff King @ 2012-12-07 13:58 UTC
To: git
We generally try to run "gc --auto" after any commands that
might introduce a large number of new objects. An obvious
place to do so is after running "fetch", which may introduce
new loose objects or packs (depending on the size of the
fetch).
While an active developer repository will probably
eventually trigger a "gc --auto" on another action (e.g.,
git-rebase), there are two good reasons why it is nicer to
do it at fetch time:
  1. Read-only repositories which track an upstream (e.g., a
     continuous integration server which fetches and builds,
     but never makes new commits) will accrue loose objects
     and small packs, but never coalesce them into a more
     efficient larger pack.

  2. Fetching is often already perceived as slow by the
     user, since they have to wait on the network. It's much
     more pleasant to include a potentially slow auto-gc as
     part of the already-long network fetch than in the
     middle of productive work with git-rebase or similar.
Signed-off-by: Jeff King <peff@peff.net>
---
The original "gc --auto" patch in 2007 suggested adding this to fetch,
but we never did (there was some brief mention that maybe tweaking
fetch.unpacklimit would be relevant, but I think you would eventually
want to gc, whether you are accruing packs or loose objects).
I didn't bother protecting this with an option (as we do with
receive.autogc on the receiving side). I don't see much point, as
anybody who doesn't want gc on their client repos will already have
set gc.auto to prevent it from triggering via other commands.
builtin/fetch.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/builtin/fetch.c b/builtin/fetch.c
index 4b5a898..1ddbf0d 100644
--- a/builtin/fetch.c
+++ b/builtin/fetch.c
@@ -959,6 +959,9 @@ int cmd_fetch(int argc, const char **argv, const char *prefix)
 	struct string_list list = STRING_LIST_INIT_NODUP;
 	struct remote *remote;
 	int result = 0;
+	static const char *argv_gc_auto[] = {
+		"gc", "--auto", NULL,
+	};
 
 	packet_trace_identity("fetch");
 
@@ -1026,5 +1029,7 @@ int cmd_fetch(int argc, const char **argv, const char *prefix)
 	list.strdup_strings = 1;
 	string_list_clear(&list, 0);
 
+	run_command_v_opt(argv_gc_auto, RUN_GIT_CMD);
+
 	return result;
 }
--
1.8.1.rc0.4.g5948dfd.dirty
* [PATCH 2/2] fetch-pack: avoid repeatedly re-scanning pack directory
From: Jeff King @ 2012-12-07 14:04 UTC
To: git
When we look up a sha1 object for reading, we first check
packfiles, and then loose objects. If we still haven't found
it, we re-scan the list of packfiles in `objects/pack`. This
final step ensures that we can co-exist with a simultaneous
repack process which creates a new pack and then prunes the
old object.
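To illustrate the pattern (a hand-written sketch only, not git's
actual code; the helper functions here are invented stand-ins for the
real pack and loose-object lookup routines):

  /*
   * Sketch of the lookup-with-rescan pattern. find_in_packed(),
   * find_loose(), and rescan_pack_directory() are hypothetical
   * stand-ins, not git's internal API.
   */
  extern void *find_in_packed(const unsigned char *sha1);
  extern void *find_loose(const unsigned char *sha1);
  extern void rescan_pack_directory(void); /* readdir() of objects/pack */

  static void *lookup_object_data(const unsigned char *sha1)
  {
      void *obj;

      if ((obj = find_in_packed(sha1)))
          return obj;
      if ((obj = find_loose(sha1)))
          return obj;
      /*
       * Miss: a simultaneous repack may have moved the object
       * into a new pack, so re-read objects/pack and retry.
       */
      rescan_pack_directory();
      return find_in_packed(sha1);
  }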
This extra re-scan usually does not have a performance
impact for two reasons:
  1. If an object is missing, then typically the re-scan
     will find a new pack, after which no more misses
     occur. Or if it truly is missing, then our next step
     is usually to die().

  2. Re-scanning is cheap enough that we do not even notice.
However, these do not always hold. The assumption in (1) is
that the caller is expecting to find the object. This is
usually the case, but the call to `parse_object` in
`everything_local` does not follow this pattern. It is
looking to see whether we have objects that the remote side
is advertising, not something we expect to have. Therefore
if we are fetching from a remote which has many refs
pointing to objects we do not have, we may end up
re-scanning the pack directory many times.
Even with this extra re-scanning, the impact is often not
noticeable due to (2); we just readdir() the packs directory
and skip any packs that are already loaded. However, if
there are a large number of packs, then even enumerating the
directory can be expensive (especially if we do it
repeatedly). Having this many packs is a good sign the user
should run `git gc`, but it would still be nice to avoid
having to scan the directory at all.
This patch checks has_sha1_file (which does not have the
re-scan and re-check behavior) in the critical loop, and
avoids calling parse_object at all if we do not have the
object.
Signed-off-by: Jeff King <peff@peff.net>
---
I'm lukewarm on this patch. The re-scan _shouldn't_ be that expensive,
so maybe patch 1 is enough to be a reasonable fix. The fact that we
re-scan repeatedly seems ugly and hacky to me, but it really is just
opendir/readdir/closedir in the case that nothing has changed (and if
something has changed, then it's a good thing to be checking). And with
my patch, fetch-pack would not notice new packs from a simultaneous
repack process (though that is OK: the result is not incorrect, we
may just end up asking the server for an object we already have).
Another option would be to make the reprepare_packed_git re-scan less
expensive by checking the mtime of the directory before scanning it.
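A rough sketch of that idea (again with hypothetical helper names;
this is not a tested patch):

  #include <sys/stat.h>
  #include <time.h>

  /* stand-in for the existing readdir()-based scan */
  extern void rescan_pack_directory(const char *path);

  static void maybe_reprepare_packed_git(const char *pack_dir)
  {
      static time_t last_mtime;
      struct stat st;

      if (!stat(pack_dir, &st) && st.st_mtime == last_mtime)
          return; /* directory unchanged since the last scan */
      last_mtime = st.st_mtime;
      rescan_pack_directory(pack_dir);
  }

Though note that the one-second granularity of st_mtime would make
this racy against a repack that lands within the same second as our
last scan.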
fetch-pack.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/fetch-pack.c b/fetch-pack.c
index 099ff4d..b4383c6 100644
--- a/fetch-pack.c
+++ b/fetch-pack.c
@@ -594,6 +594,9 @@ static int everything_local(struct fetch_pack_args *args,
 	for (ref = *refs; ref; ref = ref->next) {
 		struct object *o;
 
+		if (!has_sha1_file(ref->old_sha1))
+			continue;
+
 		o = parse_object(ref->old_sha1);
 		if (!o)
 			continue;
--
1.8.1.rc0.4.g5948dfd.dirty