From: "Shawn O. Pearce" <spearce@spearce.org>
To: Sam Hocevar <sam@zoy.org>
Cc: git@vger.kernel.org
Subject: Re: Git memory usage (1): fast-import
Date: Sat, 7 Mar 2009 13:01:30 -0800
Message-ID: <20090307210130.GO16213@spearce.org>
In-Reply-To: <20090307201920.GE12880@zoy.org>
Sam Hocevar <sam@zoy.org> wrote:
> I joined a project that uses very large binary files (up to 1 GiB) in
> a p4 repository and as I would like to use Git, I am trying to make it
> more memory-efficient when handling huge files.
Yikes. As you saw, this won't play well...
> In practice, it takes even more memory than that. Experiment shows
> that importing six 100 MiB files made of urandom data takes 370 MiB of
> memory [...]
Yes.
As you saw, that memory is the last object, the current object, the
delta index of the last object (so that the current one can be
compared against it efficiently), and the deflate buffer for the
current object, oh, and probably memory fragmentation on top of all
that....
I'm not surprised a 100 MiB file turned into 370 MiB heap usage.
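Very roughly, and with the delta index assumed to cost about as much
as the blob it indexes (a rough estimate on my part, not a measured
number), the accounting for one 100 MiB blob looks something like:

    ~100 MiB  last blob, kept around as the delta base
  + ~100 MiB  current blob being read in
  + ~100 MiB  delta index built over the last blob
  + the deflate output buffer for the current blob
  + whatever fragmentation the allocator adds on top

which is quite consistent with the 370 MiB you measured.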
> - stop trying to compute deltas in fast-import and leave that task
> to other tools
This isn't practical for source code imports, unless we do...
> (optionally, define a file size threshold beyond
> which the last file is not kept in memory, and maybe make that a
> configuration option).
what you suggest here. fast-import is faster than other methods
because we get some delta compression on the content, so the output
pack uses up less virtual memory when the front-end or end-user
finally gets around to doing `git repack -a -d -f` to recompute
the delta chains.
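For reference, that post-import step is simply

    git repack -a -d -f

run in the freshly imported repository; until someone does that, the
pack fast-import wrote is the one every later command has to map.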
> - use a temporary file to store the deflate data when it reaches a
> given size threshold (and maybe make that a configuration option).
Zoiks. There's no reason for that.
A better method would be to just look at the size of the incoming
blob, and if it's over some configured threshold (a default of,
say, 100 MB is perhaps sane) just stream the data through deflate()
and into the pack file, with no delta compression.
That would also bypass the "massive" buffer in the last object slot,
as you point out above.
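To make that concrete, here is a minimal sketch of the chunked
deflate loop such a code path could use, written against plain zlib
rather than fast-import's own buffers; the pack object header, the
SHA-1 computation and the threshold check itself are all left out,
and the 128 KiB chunk size is an arbitrary choice:

/*
 * Sketch only: stream data through zlib in fixed-size chunks, so the
 * working set stays bounded no matter how large the blob is.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

#define CHUNK (128 * 1024)

static void stream_deflate(FILE *in, FILE *out)
{
	unsigned char ibuf[CHUNK], obuf[CHUNK];
	z_stream s;
	int flush;

	memset(&s, 0, sizeof(s));
	if (deflateInit(&s, Z_DEFAULT_COMPRESSION) != Z_OK)
		exit(1);
	do {
		s.avail_in = fread(ibuf, 1, sizeof(ibuf), in);
		s.next_in = ibuf;
		flush = feof(in) ? Z_FINISH : Z_NO_FLUSH;
		do {
			s.avail_out = sizeof(obuf);
			s.next_out = obuf;
			deflate(&s, flush);
			fwrite(obuf, 1, sizeof(obuf) - s.avail_out, out);
		} while (s.avail_out == 0);
	} while (flush != Z_FINISH);
	deflateEnd(&s);
}

int main(void)
{
	stream_deflate(stdin, stdout);
	return 0;
}

The point is that peak memory is two CHUNK-sized buffers plus zlib's
internal state, independent of the blob size, which is exactly what
the 1 GiB case needs.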
> - also, I haven't tracked all strbuf_* uses in fast-import, but I got
> the feeling that strbuf_release() could be used in a few places
> instead of strbuf_setlen(0) in order to free some memory.
Examples? I haven't gone through the code in detail since it
was modified to use strbufs. But I had the feeling that the code
wasn't freeing strbufs that it would just reuse on the next command,
and that are likely to be "smallish", e.g. just a few KiB in size.
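For anyone following along, the difference in a nutshell, written
against git's strbuf API (so this only compiles inside the git tree;
the function and its `huge' flag are made up for illustration):

#include "git-compat-util.h"
#include "strbuf.h"

static struct strbuf cmd = STRBUF_INIT;

static void after_command(int huge)
{
	if (!huge)
		strbuf_setlen(&cmd, 0);	/* keep the allocation for the next command */
	else
		strbuf_release(&cmd);	/* free it; the next strbuf_add*() re-grows it */
}

For a buffer that only ever holds a command line, the setlen(0) reuse
is the cheaper and perfectly reasonable choice.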
--
Shawn.