* Git Pack: Improving cache performance (maybe a good GSoC practice)
From: Sebastian Thiel @ 2011-03-29 22:21 UTC (permalink / raw)
To: git; +Cc: Tay Ray Chuan
Hi,
What follows is a summary of how I approached the git cache in order to
write my own improved version. The conclusions can be found further
down, in case you want to skip all the extra words for now.
I am currently working on a C++ implementation of the git core, which
for now includes reading and writing of loose objects, as well as
reading and verifying pack files. This is actually not the first time I
have done this, as I gathered my first experience in this area with a
pure Python implementation of the git core
(https://github.com/gitpython-developers/gitdb). This time, though, I
wanted to see whether I could achieve better performance, and how I
could make git more suitable for handling big files.
When profiling the initial, uncached version of my pack reading
implementation, I noticed that most of the time was actually spent in
zlib's inflate method. Clearly a cache was a good idea, and git already
has one. Without looking at its implementation, I wrote my own naive
one right away, which stored only base objects and inflated deltas.
Interestingly, this alone could double the performance of my test case,
which just streams the data of all objects contained in the pack, SHA
by SHA, resembling a random access pattern.
To compare my cache with git's, I implemented pack verification, which
generates the SHA-1 of every uncompressed object in the pack and runs a
CRC-32 over each compressed stream. As the objects are processed in
offset order, the access pattern can be described as sequential.
It came as no surprise that git verified my test pack (the aggressively
packed git source repository: 137k objects in one 27MB pack) much
faster, by about 25%. After some profiling and optimization, I brought
that down to just 15%. Considering that my cache, which only got faster
in the course of the optimizations, could speed up random access by a
factor of about 2.5, it was hard to understand why I couldn't reach
git's performance.
The major difference turned out to be the way the cache works. Git has a
small delta cache with only 256 [hardcoded] cache entries and a default
memory limit of 16MB. There it stores fully decompressed objects. It
maps objects to entries by hashing their pack offsets into the available
range of entries. When the pack is accessed sequentially, the cache
fills with related uncompressed objects, which can in turn reduce the
time required to apply the next delta by a huge amount, as only a single
delta has to be applied instead of a possibly long delta chain. As git
appears to pack deltas of related objects close to each other (regarding
their offset in the pack), the cache is hit quite often automatically.
As the number of entries is small, and as entries are connected via a
doubly linked list, reclaiming memory is rather efficient: it evicts the
oldest entries first, which are unlikely to ever be needed again.
Reclamation doesn't need to run very often either, as most entries are
simply overwritten with new data on hash collisions.
This cache implementation is clearly suitable for sequential access.
My cache was optimized for random access; hence it stores only base
objects and uncompressed delta streams, using many more entries to
achieve good cache hit ratios. The sequential cache, by contrast, gains
its performance from storing fully built objects; these fill the cache
memory up much faster, so having a lot of cache entries there makes no
sense.
Both cache types are optimized for different kinds of access, and both
are required to efficiently deal with everything git usually has to do.
Hence I changed my cache to support both modes and reran the pack
verification test.
The result was better than expected, as my implementation now takes the
lead by a tiny amount (25.3s vs. 26.0s) with a 16MB cache size. On my
way to making it even faster, I experimented with different cache
sizes, numbers of entries and, of course, different packs ranging from
20MB to 600MB, which helped me fine-tune the relations between these
variables.
In the end, with a cache of 27MB, my implementation took 20.6s, whereas
the git implementation improved only slightly, finishing after 25.3s.
I believe the cause of this is the fixed number of entries. My cache
adjusts this number depending on the pack's size, the number of
objects, and the size of the cache. In this case, my cache used nearly
1000 entries, which helped spread the available memory. Due to its
limited number of entries, git does not even benefit from further
increases of the cache size, whereas I could get as low as 13.5s by
increasing the cache size to 48MB, for instance.
Just for the fun of it, I increased the number of entries in the git
cache to the number my cache was using, and suddenly git performed
equally well, finishing after just 20.8s with a 27MB cache size.
As my random access cache performed worse in sequential access mode, I
ran a test to see whether the opposite is true as well: does the
sequential cache harm performance in random access mode? The answer is:
yes it does! To give some numbers: 34MB of objects per second could be
streamed without a cache, which dropped to 28MB/s with the sequential
access cache. The cache in that case just causes overhead (especially
when reclaiming memory) and is hit only rarely.
To test my assumptions not only with my code, but also with git itself,
I used a test written for git-python, which streams blobs from the 27MB
git pack. With the default cache, I get 14MB/s. With the cache removed,
this went up to 15MB/s, which was less than expected, but we must not
forget the git-python overhead here. Finally, with the sequential
access cache enabled, its entry count increased to 1000 and the cache
size raised to 27MB, I suddenly got 34.2MB/s! A new record, for
git-python at least ;).
As a final disclaimer, please let me emphasize that the tests I ran are
neither statistically rigorous, nor are the pack verification tests
necessarily comparable in all details. Additionally, the git-python
object throughput tests cannot be directly compared to the C++ test,
which has much less overhead. The tests were made to show performance
relations and uncover ways to improve performance, not to claim that
one implementation is 'better' than the other.
-- Conclusions --
* the delta cache needs to deal with both random and sequential access.
* the current implementation deals with sequential access only, which is
only suitable for pack verification, and in fact hurts performance in
other cases if at least the number of entries is not dynamically
adjusted to the parameters of the actual pack.
* random access caches work well with plenty of entries when storing
only uncompressed deltas and base objects, as reapplying a delta is very
fast.
* sequential access caches have to dynamically adjust their number of
entries according to the available cache memory and the average packed
object size, to make the best use of that memory.
* it should be possible to adjust the caching mode at runtime, or to
disable the cache entirely.
* it might be useful, or even necessary, to have one cache per pack
sharing global memory limits, instead of one global cache, as caches
need to be tuned to the actual pack.
In case anyone is interested in having a look at the way I determine
the cache parameters (which really are the key to optimizing
performance), this is the line to focus on:
https://github.com/Byron/gitplusplus/blob/deltastream/src/git/db/pack_file.cpp#L103
The cache is used by the pack stream, whose core is in the
unpack_object_recursive method (equivalent to unpack_delta_entry in the
git source):
https://github.com/Byron/gitplusplus/blob/deltastream/src/git/db/pack_stream.cpp#L247
Kind Regards,
Sebastian
* Re: Git Pack: Improving cache performance (maybe a good GSoC practice)
From: Shawn Pearce @ 2011-03-29 22:45 UTC (permalink / raw)
To: Sebastian Thiel; +Cc: git, Tay Ray Chuan
On Tue, Mar 29, 2011 at 15:21, Sebastian Thiel <byronimo@googlemail.com> wrote:
> I am currently working on a c++ implementation of the git core, which for
> now includes reading and writing of loose objects, as well as reading and
> verifying pack files.
Have you considered wrapping libgit2 with a C++ binding? Just curious.
> Actually, this is not the first time I do this, as I
> made my first experience in that matter with a pure python implementation of
> the git core (https://github.com/gitpython-developers/gitdb).
I think I saw this the other week... why this project vs. using Dulwich[1]?
[1] http://samba.org/~jelmer/dulwich/
> This time
> though, I wanted to see whether I can achieve better performance, and how I
> can make git more suitable to handle big files.
A noble goal...
> When profiling my initial uncached version of my pack reading
> implementation, I noticed that most of the time was actually spent in zlibs
> inflate method.
Yes. The profile is somewhere in this ballpark if Git is doing
rev-list --objects, aka the "Counting" phase of a git clone:
- 30% in zlib inflate()
- 30% in object map lookup/insertion
- 30% misc. elsewhere
> The major difference turned out to be the way the cache works. Git has a
> small delta cache with only 256 [hardcoded] cache entries and a default
> memory limit of 16mb. There it stores fully decompressed objects. It maps
> objects to entries by hashing their pack offsets into the available range of
> entries.
Right, a very simple cache. FWIW, I've tried to use more complex cache
rules inside of JGit, to no avail. A more complex cache implementation
(e.g. one that supports a limited number of collisions in the hash
buckets and uses a full LRU) runs slow enough relative to this simple
cache that performance actually gets worse.
> When the pack is accessed sequentially, the cache will be filled
> with related uncompressed objects, which can in turn reduce the time
> required to apply the next delta by a huge amount, as only a single delta
> has to be applied instead of a possibly long delta chain.
Yes... mostly.
> As git appears to
> pack deltas of related objects close to each other (regarding their offset
> in the pack),
This isn't true. Git packs objects by time, *not* in delta order.
However objects are delta compressed by commonality on tree path *and*
time. An example repository I like to play with is the linux-2.6
repository; in that repository the pack is around 370 MiB. If you
break the pack up into 1 MiB slices by offset, you will find that an
object at the end of a 50 deep delta chain touches about 50 unique 1
MiB slices in order to build itself up. :-)
This is caused by things being clustered by both time and path. If a
path is heavily modified within a short time period, sure, those will
be clustered together in the file. But if a path is rarely modified,
its objects will be distributed throughout the file.
> the cache will be hit quite often automatically.
The hit rate happens to work well because most uses access fewer than
256 distinct similar things at once. I forget what the stats are for
the linux-2.6 repository, but I think there are fewer than 256 unique
directories. As Git walks through the history sequentially from
most-recent to least-recent, it's priming the cache with objects that
have very short delta chains and are thus more likely to be used as
delta bases for objects later in the file. Since each directory or
file acts as a delta base for someone else later, it's likely to be in
this cache as the reader walks backwards through time. As bases
switch, the cache is updated at a relatively low penalty, because the
new base was itself recently accessed using the base that is already
in the cache.
The simple % 256 rule the cache uses is effective because objects are
pretty randomly distributed as far as offsets go in the file. We're
just damn lucky. :-)
> This cache implementation is clearly suitable for sequential access.
Yes.
> Both cache types are optimized for different kinds of access modes, and both
> are required to efficiently deal with everything git usually has to do.
> Hence I changed my cache to support both modes, and rerun the pack
> verification test.
>
> The result was better than expected, as my implementation now takes the
> lead by a tiny amount (25.3s vs. 26.0s) with a 16mb cache size. On my way to
This isn't a very significant speed difference given the differences
in implementation. We're not really looking to shave 3% off the
running time for operation X, we're looking to shave >10%.
> make it even faster, I experimented with different cache sizes, amounts of
> entries and of course different packs, which ranged from 20mb to 600mb,
> which helped me fine tune the relations of these variables.
> In the end, with a cache of 27mb, my implementation took 20.6s, whereas the
OK, this is pretty significant. Saving 21% of the running time, at the
expense of an extra 11M of working set.
But the verify pack workload is pretty useless, nobody accesses data
by SHA-1 order. Most uses of Git are going backwards through time. log
and blame are the two notable things that happen *a lot* and that
users complain about being slow. These also aren't random accesses,
there is a definite pattern and the pattern can be exploited. I'm
really only interested in improving these two patterns.
As for improving verify-pack, Junio already did so by switching it to
use index-pack with the new --verify flag. There really isn't a faster
way to scan through a pack than the way index-pack does it.
So, all I'm trying to say is, verify-pack isn't the right thing to
target when you are looking at "how do I make Git faster".
> -- Conclusions --
> * delta cache needs to deal with random and sequential access.
I'm not sure where the random access case is coming from. Who is doing
random access except verify-pack?
> * current implementation deals with sequential access only, which is only
> suitable for pack verification,
Not true. First, pack verification is horrifically random, since it's
by SHA-1 order and not sequential order. Second, every other use of
the pack data is generally sequential in time, because every other use
is starting from the current revisions as found from the refs and
walking backwards in time, which is forwards sequentially in the pack.
--
Shawn.
* Re: Git Pack: Improving cache performance (maybe a good GSoC practice)
From: Sebastian Thiel @ 2011-03-30 8:45 UTC (permalink / raw)
To: Shawn Pearce; +Cc: git
Hi Shawn,
Thank you for your detailed answer, especially about how deltas are
ordered within the pack. First things first: where does my random
(SHA-1 by SHA-1) access pattern come from?
It's clearly just part of my test, as it's easy to just iterate SHAs in
the index and query their data in the pack. The pack verification,
though, does not use SHA-1 order but offset order, iterating the pack
from the smallest to the largest offset. This is true for the git
implementation as well as for mine, which is why I believe the access
pattern is quite sequential here.
When reading the reply, at first I thought we agreed that pack
verification is sequential, but what confused me is one of your last
statements: "First, pack verification is horrifically random, since its
by SHA-1 order and not sequential order".
Nonetheless, you are absolutely right that the sha1 ordered access is
nothing that would usually happen in real life, but I didn't yet
implement commit walking or tree iteration.
What stays is my observation that a larger, or let's say, more adaptive
number of entries can greatly improve performance. The git-python test
actually iterates commits from new to old, walks the respective trees
depth-first, and streams all blobs. As I understand it, this is a
common access pattern, and one which greatly benefits from a larger
entry cache: it improved performance from 14MB/s to 34MB/s, using
about 1000 entries in the cache and a memory cap of 27MB.
The default pack-verify implementation would also benefit from more
cache entries. Maybe the default of 256 entries is sufficient if the
trees are iterated breadth-first, but to my mind depth-first would be a
valid access pattern as well.
The simplicity of the cache is, to my mind, the right approach, but I
cannot agree with its statically allocated number of entries, as it
apparently doesn't suit any but the smallest packs I tried. Even though
it might not be statistically significant: completely disabling the
cache reproducibly boosted the git-python test described previously by
1MB/s, which seems to show that the cache can hurt if there aren't
enough entries.
To my mind, changing the cache to be per-pack, with a dynamically
allocated number of entries depending on the average size of
uncompressed objects, would help performance enough to be worth the
effort.
Please see some more comments further down the email.
Kind Regards,
Sebastian
On 03/30/2011 12:45 AM, Shawn Pearce wrote:
> On Tue, Mar 29, 2011 at 15:21, Sebastian Thiel<byronimo@googlemail.com> wrote:
>> I am currently working on a c++ implementation of the git core, which for
>> now includes reading and writing of loose objects, as well as reading and
>> verifying pack files.
> Have considered wrapping libgit2 with a C++ binding? Just curious.
The project appears to be silent for nearly 5 months now, and it is in a
rather early stage of development. There is no delta cache yet, nor is
there a sliding window mmap implementation which would be required on 32
bit systems, at least if you want to have big file support.
>> Actually, this is not the first time I do this, as I
>> made my first experience in that matter with a pure python implementation of
>> the git core (https://github.com/gitpython-developers/gitdb).
> I think I saw this the other week... why this project vs. using Dulwich[1]?
>
> [1] http://samba.org/~jelmer/dulwich/
Jelmer and I talked about how both projects could benefit from each
other, but we dropped the idea once it turned out that the licenses are
quite incompatible (gpl vs. bsd). Besides, I like big file support,
which also means that the system should internally stream all data,
using a stream-like interface. Dulwich currently puts all data into RAM,
and so does git. Gitdb uses stream interfaces exclusively, but
admittedly I still didn't implement a delta decompression that would
work without plenty of buffers ... but that's a different topic.
>> This time
>> though, I wanted to see whether I can achieve better performance, and how I
>> can make git more suitable to handle big files.
> A noble goal...
... which can be reached :). Git-like databases could greatly improve
the performance of existing technologies, like package managers or
update systems (for games, for instance), if people didn't have to
re-download whole packages when only a few bytes or files changed in
the new version. Having a customizable git library for this would allow
anyone to easily implement a custom git-like database solution to
optimize these kinds of transfers. This is what drives me.
>> When profiling my initial uncached version of my pack reading
>> implementation, I noticed that most of the time was actually spent in zlibs
>> inflate method.
> Yes. The profile is somewhere in this ballpark if Git is doing
> rev-list --objects, aka the "Counting" phase of a git clone:
>
> - 30% in zlib inflate()
> - 30% in object map lookup/insertion
> - 30% misc. elsewhere
>
>> The major difference turned out to be the way the cache works. Git has a
>> small delta cache with only 256 [hardcoded] cache entries and a default
>> memory limit of 16mb. There it stores fully decompressed objects. It maps
>> objects to entries by hashing their pack offsets into the available range of
>> entries.
> Right, a very simple cache. FWIW, I've tried to use more complex cache
> rules inside of JGit, to no avail. A more complex cache implementation
> (e.g. one that supports a limited number of collisions in the hash
> buckets and uses a full LRU) runs slow enough relative to this simple
> cache that performance actually gets worse.
>
>> When the pack is accessed sequentially, the cache will be filled
>> with related uncompressed objects, which can in turn reduce the time
>> required to apply the next delta by a huge amount, as only a single delta
>> has to be applied instead of a possibly long delta chain.
> Yes... mostly.
>
>> As git appears to
>> pack deltas of related objects close to each other (regarding their offset
>> in the pack),
> This isn't true. Git packs object by time, *not* delta ordering.
> However objects are delta compressed by commonality on tree path *and*
> time. An example repository I like to play with is the linux-2.6
> repository; in that repository the pack is around 370 MiB. If you
> break the pack up into 1 MiB slices by offset, you will find that an
> object at the end of a 50 deep delta chain touches about 50 unique 1
> MiB slices in order to build itself up. :-)
>
> This is caused by things being clustered by both time and path. If a
> path is heavily modified within a short time period, sure, those will
> be clustered together in the file. But if a path is rarely modified,
> its objects will be distributed throughout the file.
>
>> the cache will be hit quite often automatically.
> The hit rate happens to work well because most uses access less than
> 256 distinct similar things at once. I forget what the stats are for
> the linux-2.6 repository, but I think there are less than 256 unique
> directories. As Git walks through the history sequentially from
> most-recent to least-recent, its priming the cache with objects that
> have very short delta chains and are thus more likely to be used as
> delta bases for objects later in the file. Since each directory or
> file acts as a delta base for someone else later, its likely to be in
> this cache as the reader walks backwards through time. As bases
> switch, the cache is updated at a relatively low penalty, because the
> new base was itself recently accessed using the base that is already
> in the cache.
>
> The simple % 256 rule the cache uses is effective because objects are
> pretty randomly allocated as far as offsets go in the file. We just
> damn lucky. :-)
>
>> This cache implementation is clearly suitable for sequential access.
> Yes.
>
>> Both cache types are optimized for different kinds of access modes, and both
>> are required to efficiently deal with everything git usually has to do.
>> Hence I changed my cache to support both modes, and rerun the pack
>> verification test.
>>
>> The result was better than expected, as my implementation now takes the
>> lead by a tiny amount (25.3s vs. 26.0s) with a 16mb cache size. On my way to
> This isn't a very significant speed difference given the differences
> in implementation. We're not really looking to shave 3% off the
> running time for operation X, we're looking to shave >10%.
>
>> make it even faster, I experimented with different cache sizes, amounts of
>> entries and of course different packs, which ranged from 20mb to 600mb,
>> which helped me fine tune the relations of these variables.
>> In the end, with a cache of 27mb, my implementation took 20.6s, whereas the
> OK, this is pretty significant. Saving 21% of the running time, at the
> expense of an extra 11M of working set.
>
> But the verify pack workload is pretty useless, nobody accesses data
> by SHA-1 order. Most uses of Git are going backwards through time. log
> and blame are the two notable things that happen *a lot* and that
> users complain about being slow. These also aren't random accesses,
> there is a definite pattern and the pattern can be exploited. I'm
> really only interested in improving these two patterns.
>
> As far as verify-pack improving, Junio improved it by switching to use
> index-pack with the new --verify flag. There really isn't a faster way
> to scan through a pack than the way index-pack does it.
>
> So, all I'm trying to say is, verify-pack isn't the right thing to
> target when you are looking at "how do I make Git faster".
>
I couldn't find the index-pack --verify flag in 1.7.4.2, but maybe it
is even more bleeding edge, or I am looking in the wrong place.
>> -- Conclusions --
>> * delta cache needs to deal with random and sequential access.
> I'm not sure where the random access case is coming from. Who is doing
> random access except verify-pack?
>
See top of reply.
>> * current implementation deals with sequential access only, which is only
>> suitable for pack verification,
> Not true. First, pack verification is horrifically random, since its
> by SHA-1 order and not sequential order. Second, every other use of
> the pack data is generally sequential in time, because every other use
> is starting from the current revisions as found from the refs and
> walking backwards in time, which is forwards sequentially in the pack.
>
* Re: Git Pack: Improving cache performance (maybe a good GSoC practice)
From: Vicent Marti @ 2011-03-30 9:46 UTC (permalink / raw)
To: Sebastian Thiel; +Cc: Shawn Pearce, git
On Wed, Mar 30, 2011 at 11:45 AM, Sebastian Thiel
<byronimo@googlemail.com> wrote:
>> Have considered wrapping libgit2 with a C++ binding? Just curious.
>
> The project appears to be silent for nearly 5 months now, and it is in a
> rather early stage of development. There is no delta cache yet, nor is there
> a sliding window mmap implementation which would be required on 32 bit
> systems, at least if you want to have big file support.
wat?
http://libgit2.github.com
https://github.com/libgit2/libgit2
Cheers,
Vicent
* Re: Git Pack: Improving cache performance (maybe a good GSoC practice)
From: Sebastian Thiel @ 2011-03-30 9:53 UTC (permalink / raw)
To: Vicent Marti; +Cc: Shawn Pearce, git
Thank you very much for the heads-up; it appears I was using old mirrors:
git://repo.or.cz/libgit2.git
git://repo.or.cz/libgit2/raj.git
It's quite terrible that I was left thinking the project had stalled
for so long, and it was hard for me to understand why people would keep
bringing it up :).
Now it all makes sense!
Cheers,
Sebastian
On 30.03.11 11:46, Vicent Marti wrote:
> On Wed, Mar 30, 2011 at 11:45 AM, Sebastian Thiel
> <byronimo@googlemail.com> wrote:
>>> Have considered wrapping libgit2 with a C++ binding? Just curious.
>> The project appears to be silent for nearly 5 months now, and it is in a
>> rather early stage of development. There is no delta cache yet, nor is there
>> a sliding window mmap implementation which would be required on 32 bit
>> systems, at least if you want to have big file support.
> wat?
>
> http://libgit2.github.com
> https://github.com/libgit2/libgit2
>
> Cheers,
> Vicent
* Re: Git Pack: Improving cache performance (maybe a good GSoC practice)
From: Erik Faye-Lund @ 2011-03-30 12:07 UTC (permalink / raw)
To: Sebastian Thiel; +Cc: Vicent Marti, Shawn Pearce, git
On Wed, Mar 30, 2011 at 11:53 AM, Sebastian Thiel
<byronimo@googlemail.com> wrote:
> Thank you very much for the heads-up - I was using old mirrors it appears:
>
> git://repo.or.cz/libgit2.git
> git://repo.or.cz/libgit2/raj.git
>
> Its quite terrible that I was left thinking that the project stalled for
> so long, and it was hard for me to understand why people would continue
> to bring it up :).
> Now it all makes sense !
>
I was also confused by this at first. Shawn, would you mind updating
the readme field of the repo.or.cz mirror to reflect that the project
has moved to GitHub?