From: "Dana How" <danahow@gmail.com>
To: "Junio C Hamano" <junkio@cox.net>
Cc: "Git Mailing List" <git@vger.kernel.org>, danahow@gmail.com
Subject: Re: [PATCH] Prevent megablobs from gunking up git packs
Date: Wed, 23 May 2007 16:55:27 -0700
Message-ID: <56b7f5510705231655o589de801w88adc1aa6c18162b@mail.gmail.com>
In-Reply-To: <7v7iqz19d2.fsf@assigned-by-dhcp.cox.net>

On 5/23/07, Junio C Hamano <junkio@cox.net> wrote:
> Dana How <danahow@gmail.com> writes:
> > This patch implements the following:
> > 1. git pack-objects takes a new --max-blob-size=N flag,
> > with the effect that only blobs less than N KB are written
> > to the packfile(s). If a blob was in a pack but violates
> > this limit (perhaps the packs were created by fast-import
> > or max-blob-size was reduced), then a new loose object
> > is written out if needed so the data is not lost.
>
> Why?
>
> I really do not like that "write a new loose object" part
> without proper justification. From your description, I thought
> the most natural way to do this is to pretend you did not hear
> about large objects at all, by rejecting them early, perhaps
> inside add_object_entry() or inside get_object_details() --
> either case you would do sha1_object_info() early instead of
> doing it in check_object().
>
> By the way, is there fundamental reason that this needs to be
> "blob size" limit? Wouldn't "max-object-size" be more clean in
> theory, and work the same way in practice?
I agree with your sentiments. Some nasty details pushed
me to implement it this way. Let me explain them and perhaps
you can come up with a different combination of solutions.
Each object to be packed can be in one of 4 states:
{loose or packed} X {small or too big}.
Regardless of the {loose or packed} selection,
a "small" object can always be placed in the pack.
A loose object which is too big does not require
any special action either -- if we drop it from the pack,
we don't lose anything since it already exists in objects/xx.
The packed X too big combination is the problem. As the
commit message says, this could happen if the packs
came from fast-import, or if the max-blob-size were reduced.
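In tabular form:

                 small        too big
    loose        pack it      no action (stays loose)
    packed       pack it      the problem case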
We have three options in this case:
(1) Drop the object (do not put it in the new pack(s)).
(2) Pass the object into the new pack(s).
(3) Write out the object as a new loose object.
Option (1) is unacceptable. When you call git-repack -a,
it blindly deletes all the non-kept packs at the end. So
the megablobs would be lost.
Let's suppose we always used Option (2), and Option (3)
were never available. Then once a large object got into
a pack, the only way to get it out would be to explode
the pack using git-unpack-objects -- but hey, that doesn't
work because the objects already exist in this repository, packed.
So we make a temporary repository to explode the pack,
then copy over the loose objects, then delete the pack,
then repack with the correct/updated max-blob-size specification.
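In commands, the workaround would be something like this
(a sketch only; pack-XXXX and N are placeholders, and
--max-blob-size on git-repack is what v2 of this patch adds):

  mkdir ../explode && cd ../explode && git-init-db
  # in the scratch repository the objects do not already
  # exist, so git-unpack-objects really writes them out loose:
  git-unpack-objects < ../repo/.git/objects/pack/pack-XXXX.pack
  # copy the loose objects back, delete the offending pack:
  cp -r .git/objects/[0-9a-f][0-9a-f] ../repo/.git/objects/
  rm ../repo/.git/objects/pack/pack-XXXX.*
  # repack under the new limit:
  cd ../repo && git-repack -a -d --max-blob-size=N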
I would claim this is even more ugly, and more importantly,
more error-prone than supporting Option (3) in at least some cases.
The way large objects get into a pack is that you realize after
the fact that you need a max-blob-size, or that you need to
decrease it since your pack files have become unwieldy.
I think I've shown we need Option (3) in at least some cases.
To better address _which_ cases, let's move on to your
second point: why did I implement --max-blob-size instead
of --max-object-size? I take this to mean that I should use
the blob size if undeltified, and the delta size if previously deltified?
That's actually what I do internally, but the total effect
over many repackings is what one would expect from --max-blob-size.
In normal use on a personal repository starting from scratch,
all blobs are first packed from loose objects. When I am
deciding if I want to include the loose object or not, I want
to use the object's direct size, not its deltified size. If I use
the latter, then I have to deltify all the megablobs to see
if their deltas are small enough. This takes a long time
and is one of the things this patch aims to eliminate.
So I take the list of loose objects, throw out the megablobs,
try to deltify the rest, and write out the pack.
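Just to illustrate that size test -- this is only a shell
analogy, not what the patch executes, and 1048576 stands in
for an example 1 MB limit:

  # list the blobs whose direct (undeltified) size hits the limit:
  git-rev-list --all --objects |
  while read sha1 path
  do
    test "$(git-cat-file -t $sha1)" = blob || continue
    test "$(git-cat-file -s $sha1)" -ge 1048576 && echo "$sha1 $path"
  done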
When I repack, any delta is smaller than the object it stands
for. So consider any sequence of packings from loose objects
and repackings of packs in which every pack was created with
a max-blob-size limit that stayed the same or increased over
time. An object only ever entered a pack because its full
size was under the limit, and its delta can only be smaller
still, so it passes every later check too. Hence there is no
difference between max-blob-size and max-object-size:
checking loose object size or in-pack size gives the same
pack contents.
Now let's say you decrease max-blob-size and repack,
or repack from some packs that had no max-blob-size
(e.g. from fast-import). Here it becomes hard to define
Option (3) cleanly. You might have a base object B in an
old pack, and a deltified object D based on it, where B is
now larger than the newly-decreased max-blob-size while D's
deltified size is still smaller. So B is written out loose,
but D, having lost its base, can no longer be stored
deltified and may grow past the limit as well.
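Concretely, with made-up numbers: say B is 2 MB, D's delta
against B is 40 KB, and max-blob-size drops to 1 MB. B gets
evicted from the pack, D must then be stored whole, and if
D is 1.9 MB undeltified it now violates the limit too.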
But in this case you should do the following. Since you are
going to decrease max-blob-size when you decide your
repository packs have become "gunked up" with megablobs,
I would propose git-repack -a -f (git-pack-objects
--no-object-reuse) be used. This means all deltas are
recomputed from the new, smaller universe of blobs, which
makes sense; and since no deltas are reused, the size check
can only ever see actual blob sizes, by the argument three
paragraphs back.
(Note: I made sure the current patch loses no data if you decrease
max-blob-size and forget to specify -f -- you just get a few large
packed objects resulting from "lost" deltifications.)
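So once the patch is in, shrinking the limit should come
down to just this (repack.maxblobsize is the configuration
this patch proposes; 512 KB is an arbitrary example):

  git-config repack.maxblobsize 512   # proposed knob, in KB
  git-repack -a -d -f                 # -f => pack-objects --no-object-reuse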
In actuality, the object_entry.size field is checked, so objects
_are_ being filtered by delta size if deltified. But I went through
the (suggested) usage scenarios above to show that the *result* is that
you get a pack limited in raw object size. --max-object-size
is what I implemented because it's easier, but in use it acts
like --max-blob-size so I named it that.
Now, concerning Option (2) vs Option (3): I need to write
out loose objects in *some* cases. Right now I always write
out a large object as a loose one. It would be reasonable to
change this so that it only happens with --no-object-reuse
(i.e. git-repack -f). This is what you should specify when you're
decreasing max-blob-size. Shall I change it so that large objects
are passed to the result pack without --no-object-reuse,
and written out as loose objects with --no-object-reuse?
I did not do this because it's a strange interaction --
just look how long it took me to build up to explaining it!
> > 2. git repack inspects repack.maxblobsize . If set, its
> > value is passed to git pack-objects on the command line.
> > The user should change repack.maxblobsize, NOT specify
> > --max-blob-size=N.
> Why not?
Well, indeed. That's why [PATCH *v2*] lets you specify
--max-blob-size to git-repack, and gives an example of
why you might want to do that [I don't].
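For completeness, the v2 one-off override would simply be
(1024 KB is again just an example value):

  git-repack -a -d --max-blob-size=1024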
> > This patch is on top of the earlier max-pack-size patch,
> > because I thought I needed some behavior it supplied,
> > but could be rebased on master if desired.
>
> Your earlier "split according to max-pack-size" will hopefully be
> on master shortly.
Thanks!
--
Dana L. How danahow@gmail.com +1 650 804 5991 cell