From: Sergio Callegari <sergio.callegari@gmail.com>
To: Jeff King <peff@peff.net>
Cc: git@vger.kernel.org
Subject: Re: Multiblobs
Date: Fri, 07 May 2010 00:56:59 +0200
Message-ID: <4BE3493B.8010409@gmail.com>
In-Reply-To: <20100506062644.GB16151@coredump.intra.peff.net>
Many thanks for the clear and evidently very well thought-out answer.
I wonder if I can take another minute of your time (and Avery's, and
that of anybody else who is interested) to feed my curiosity a little
more. I apologize in advance for any mistakes in my understanding of
git internals.
Jeff King wrote:
> And for both of those cases, the upside is a speed increase, but the
> downside is a breakage of the user-visible git model (i.e., blobs get
> different sha1's depending on how they've been split).
Is this different from what already happens with clean/smudge filters?
I wonder what hash a cleanable object gets: the hash of its cleaned
version, or its original hash? If it is the former, the hash can
already change depending on whether the filter is used and on how it
is written, so an enhanced "clean" filter capable of splitting an
object into a multiblob would be no different in this respect. If it
is the latter, I again wonder whether an enhanced "clean" filter
capable of splitting an object into a multiblob could not do the same
(i.e. keep the original hash).
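For what it is worth, the first case seems easy to check with a small
script along these lines (purely illustrative, assuming a repository
where the path already goes through a clean filter and has been added
to the index):

  #!/usr/bin/env python3
  # Illustrative check: for a path that goes through a clean filter,
  # compare the sha1 of the raw worktree bytes with the sha1 that git
  # actually recorded in the index after "git add".
  import subprocess, sys

  path = sys.argv[1]  # e.g. "report.odt"

  # sha1 of the bytes on disk, bypassing any clean filter
  raw = subprocess.run(
      ["git", "hash-object", "--no-filters", path],
      capture_output=True, text=True, check=True).stdout.strip()

  # sha1 stored in the index ("<mode> <sha1> <stage>\t<path>")
  staged = subprocess.run(
      ["git", "ls-files", "-s", "--", path],
      capture_output=True, text=True, check=True).stdout.split()[1]

  print("raw worktree sha1:  ", raw)
  print("staged (index) sha1:", staged)

If the two sha1s differ, the index is holding the cleaned content, and
the recorded hash will indeed move whenever the filter changes.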
> But being two
> years wiser than when I wrote the original message, I don't think that
> breakage is justified. Instead, you should retain the simple git object
> model, and consider on-the-fly content-specific splits. In other words,
> at rename (or delta) time notice that blob 123abc is a PDF, and that it
> can be intelligently split into several chunks, and then look for other
> files which share chunks with it. As a bonus, this sort of scheme is
> very easy to cache, just as textconv is. You cache the smart-split of
> the blob, which is immutable for some blob/split-scheme combination. And
> then you can even do rename detection on large blob 123abc without even
> retrieving it from storage.
>
Now I see why, for things like diffing, showing textual
representations, or rename detection, caching can be much more
practical. My initial list of "potential applications" was definitely
too wide and vague.
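If I understand the caching you describe correctly, it would be
something roughly like this (a sketch only; the cache location, the
scheme name and the splitter are all made up):

  # Sketch of caching a content-aware split keyed by (blob sha1,
  # split-scheme version): the result is immutable, so it never needs
  # to be recomputed, and rename detection can score two blobs by
  # comparing chunk fingerprints without re-reading either blob.
  import hashlib, json, pathlib

  CACHE = pathlib.Path(".git/smart-split-cache")  # made-up location
  SCHEME = "pdf-split-v1"           # bump when the splitter changes

  def chunk_fingerprints(blob_sha1, blob_data, split):
      """sha1s of the chunks of blob_data, memoised per blob/scheme."""
      key = CACHE / ("%s-%s" % (blob_sha1, SCHEME))
      if key.exists():
          return json.loads(key.read_text())
      sums = [hashlib.sha1(c).hexdigest() for c in split(blob_data)]
      CACHE.mkdir(parents=True, exist_ok=True)
      key.write_text(json.dumps(sums))
      return sums

  def similarity(a, b):
      """Fraction of shared chunks, as a crude rename score."""
      return len(set(a) & set(b)) / max(len(a), len(b), 1)

Is this close to what you mean?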
> Another benefit is that you still _store_ the original (you just don't
> look at it as often).
... but of course if you keep storing the original, I guess there is no
advantage in storage efficiency.
> Which means there is no annoyance with perfectly
> reconstructing a file. I had originally envisioned straight splitting,
> with concatenation as the reverse operation. But I have seen things like
> zip and tar files mentioned in this thread. They are quite challenging,
> because it is difficult to reproduce them byte-for-byte.
I agree, but this is already being done. For instance, on odf and zip
files, clean filters that remove the compression greatly improve the
storage efficiency of the delta machinery included in git. And of
course, recreating the original file byte-for-byte is potentially
challenging, but most of the time it does not really matter. For
instance, when I use this technique with odf files, I do not need to
care whether the smudge filter recreates the original file or not; the
important thing is that it recreates a file that can then be cleaned
to the same thing (and this makes me think that cleanable objects get
the sha1 of the cleaned blob, see above).
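For concreteness, the kind of clean filter I am talking about looks
roughly like this (the "rezip" driver name and the script itself are
only a sketch):

  #!/usr/bin/env python3
  # Sketch of a clean filter for zip-based files (odf, zip, ...): it
  # rewrites the archive with compression disabled, so the delta
  # machinery sees the uncompressed member data.  Wired up with e.g.
  #   git config filter.rezip.clean rezip-clean.py
  #   echo "*.odt filter=rezip" >> .gitattributes
  # The smudge side can be "cat": an uncompressed odt is still a valid
  # odt, and cleaning it again gives back exactly the same bytes.
  import io, sys, zipfile

  data = sys.stdin.buffer.read()
  out = io.BytesIO()
  with zipfile.ZipFile(io.BytesIO(data)) as src, \
       zipfile.ZipFile(out, "w", compression=zipfile.ZIP_STORED) as dst:
      for info in src.infolist():
          # keep member names and order, drop only the compression
          dst.writestr(info.filename, src.read(info.filename))
  sys.stdout.buffer.write(out.getvalue())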
In other terms, we keep underlining that git is about tracking
/content/. However, when you have a structured file and you want to
track its /content/, most of the time you are not interested at all in
the /envelope/ (e.g. the compression level of the odf/zip file): the
content is what is inside (typically a tree-structured thing). Maybe
scms could be made better at tracking structured files by providing an
easy way to tell the scm how to discard the envelope.
In fact, having the hash of a structured file depend only on its real
content (the inner tree or list of files/streams/whatever) seems to me
completely respectful of the git model. This is why I originally
thought that enhanced filters allowing the inner matter of a
structured file to be stored as a multiblob could make sense.
> The other application I saw in this thread is structured files where you
> actually _want_ to see all of the innards as individual files (e.g.,
> being able to do "git show HEAD:foo.zip/file.txt"). And for those, I
> don't think any sort of automated chunking is really desirable. If you
> want git to store and process those files individually, then you should
> provide them to git individually. In other words, there is no need for
> git to know or care at all that "foo.zip" exists, but you should simply
> feed it a directory containing the files. The right place to do that
> conversion is either totally outside of git, or at the edges of git
> (i.e., git-add and when git places the file in the repository).
Originally, I thought of creating wrappers for some git commands.
However, things like "status" or "commit -a" seemed quite complicated
to handle in a wrapper.
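Something along these lines would cover "add" (again only a sketch,
with made-up names):

  #!/usr/bin/env python3
  # Sketch of a wrapper doing the conversion "at the edges": unpack a
  # zip into a sibling directory and stage the directory, so git only
  # ever sees the individual member files.
  import pathlib, subprocess, sys, zipfile

  archive = pathlib.Path(sys.argv[1])              # e.g. "foo.zip"
  target = archive.with_name(archive.name + ".d")  # e.g. "foo.zip.d"

  with zipfile.ZipFile(archive) as z:
      z.extractall(target)

  subprocess.run(["git", "add", "--", str(target)], check=True)

It is keeping "status" and "commit -a" consistent with the unpacked
layout that I could not see how to do cleanly.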
> Our
> current hooks may not be sufficient, but that means those hooks should
> be improved, which to me is much more favorable than a scheme that
> alters the core of the git data model.
>
Having a sufficient set of hooks could help a lot. However, if I
remember correctly, one of the reasons why the clean/smudge filters
were introduced was to avoid having to implement similar functionality
with hooks.
Thanks in advance for any further explanations that might come!
Sergio