git.vger.kernel.org archive mirror
* Narrow clone implementation difficulty estimate
@ 2009-05-14 10:04 Alexander Gavrilov
  2009-05-14 10:39 ` Jakub Narebski
  0 siblings, 1 reply; 3+ messages in thread
From: Alexander Gavrilov @ 2009-05-14 10:04 UTC (permalink / raw)
  To: git; +Cc: Asger Ottar Alstrup

Hello,

We are considering using Git to manage a large set of mostly binary
files (large images, PDF files, OpenOffice documents, etc.). The
amount of data is large enough that it is infeasible to force every
user to download all of it, so a partial retrieval scheme is
necessary.

In particular, we need to decide whether it is better to invest
effort into implementing narrow clone, or into partitioning and
reorganizing the data set into submodules (the latter may prove
almost impossible for this data set). We will most likely
develop a new, very simplified GUI for non-technical users,
so the details of either approach will be hidden
under the hood.


After some looking around, I think that narrow clone would probably involve:

1. Modifying the revision walk engine used by the pack generator to
allow filtering blobs using a set of path masks. (Handling the same
tree object appearing at different paths may be tricky.)

2. Modifying the fetch protocol to allow sending such filter
expressions to the server.

3. Adding the necessary configuration entries and command parameters
to expose the new functionality.

4. Resurrecting the sparse checkout series and merging it with the
new filtering logic. Narrow clone must imply a sparse checkout that
is a subset of the cloned paths.

5. Fixing all breakage that may be caused by missing blobs.
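For illustration, the filtering in step 1 could be sketched roughly as
follows (toy Python, not actual Git code; trees are modeled as plain
dicts and all names here are made up). Note how the walk is keyed on
the path rather than the tree object, since the same tree may appear
at several paths and be wanted at only some of them:

```python
def walk_blobs(tree, masks, prefix=""):
    """Yield (path, blob_id) for blobs whose path matches a mask prefix.

    The same tree object may be reachable via different paths, so the
    decision to descend is made per *path*, not per tree id -- a tree
    excluded at one path may still be needed at another.
    """
    for name, entry in tree.items():
        path = prefix + name
        if isinstance(entry, dict):          # subtree: maybe recurse
            sub = path + "/"
            # descend only if some mask could still match below this path
            if any(m.startswith(sub) or sub.startswith(m) for m in masks):
                yield from walk_blobs(entry, masks, sub)
        else:                                # blob: include if a mask covers it
            if any(path.startswith(m) for m in masks):
                yield path, entry

tree = {"docs": {"a.pdf": "blob1", "img": {"b.png": "blob2"}},
        "src": {"main.c": "blob3"}}
print(list(walk_blobs(tree, ["docs/img/"])))
# [('docs/img/b.png', 'blob2')]
```

The real pack generator would of course operate on parsed tree
objects, but the pruning rule would be the same.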

I feel that the last point involves the most uncertainty, and may also
prove the most difficult one to implement. However, I cannot judge the
actual difficulty due to an incomplete understanding of Git internals.


I currently see the following additional problems with this approach:

1. Merge conflicts outside the filtered area cannot be handled.
However, in the case of this project they are estimated to be
extremely unlikely.

2. Changing the filter set is tricky, because extending the watched
area requires connecting to the server and requesting the missing
blobs. This action appears to be mostly identical to an initial clone
with a more complex filter. On the other hand, shrinking the area
leaves unnecessary data in the repository, which is difficult to reuse
safely if the area is later extended again. Finally, editing the set
without downloading the missing data essentially corrupts the
repository.

3. One of the goals of using Git is building a distributed mirroring
system, similar to the gittorrent or mirror-sync proposals. Narrow
clone significantly complicates this because of incomplete data sets.
A simple solution may be restricting downloads to peers whose filter
set is a superset of what is needed, but that may cause the system to
degrade into a fully centralized one.


In relation to the last point, namely building a mirroring
network, it also occurred to me that, in the current state
of things, bundles may be better suited to it, because they can be
directly reused by many peers, and deciding what to put in
a bundle is not much of a problem for this particular project.
I expect that implementing narrow bundle support would
not differ much from implementing narrow clone.
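The bundle idea could work roughly like this (again a toy Python
sketch, under the invented assumption that each pre-built narrow
bundle advertises the path masks it contains): a client picks any
advertised bundle whose mask set covers what it needs.

```python
# each narrow bundle advertises which path masks it contains
BUNDLES = {
    "docs.bundle": ["docs/"],
    "full.bundle": ["docs/", "src/"],
}

def covers(bundle_masks, wanted):
    """True if every wanted mask falls under some mask in the bundle."""
    return all(any(w == m or w.startswith(m) for m in bundle_masks)
               for w in wanted)

def pick_bundle(wanted):
    # prefer the narrowest adequate bundle, so that the same small
    # bundles can be reused verbatim by as many peers as possible
    ok = [b for b, masks in BUNDLES.items() if covers(masks, wanted)]
    return min(ok, key=lambda b: len(BUNDLES[b]), default=None)

print(pick_bundle(["docs/img/"]))  # docs.bundle
print(pick_bundle(["src/"]))       # full.bundle
```

This is exactly the superset-of-peers restriction from point 3, just
applied to static bundle files instead of live peers.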


We are currently evaluating possible approaches to this
problem, and would like to know whether this analysis makes sense.
We are willing to contribute the results to the Git community
if/when we implement it.

Alexander



Thread overview: 3+ messages
2009-05-14 10:04 Narrow clone implementation difficulty estimate Alexander Gavrilov
2009-05-14 10:39 ` Jakub Narebski
2009-05-16  5:17   ` Nguyen Thai Ngoc Duy
