* Fwd: Git SCM and zlib dictionaries
From: Jon Smirl @ 2006-08-15 15:19 UTC (permalink / raw)
To: Shawn Pearce, git
---------- Forwarded message ----------
From: Mark Adler <madler@alumni.caltech.edu>
Date: Aug 15, 2006 10:43 AM
Subject: Re: Git SCM and zlib dictionaries
To: Jon Smirl <jonsmirl@gmail.com>
Cc: Jean-loup Gailly <jloup@gzip.org>
On Aug 15, 2006, at 6:11 AM, Jon Smirl wrote:
> What we are doing is similar to full-text
> search indexing.
If the point of very small (1K-ish) compressed chunks is random
access and individual decompression of those pieces, then there are
other approaches. You can, for example, compress many of them together
for better compression (say 32 at a time), and accept some speed
degradation from having to decompress, on average, half of them (16)
to get to the one you want.
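Here is a minimal sketch of that grouping idea, in Python purely for
illustration (git itself uses zlib's C API, and the chunk contents,
group size, and framing below are made up): compress N chunks as one
stream, and to read chunk k walk past the chunks in front of it.

    import zlib

    def compress_group(chunks):
        # Store a 4-byte length before each chunk so boundaries can be
        # recovered after decompression, then compress the lot as one stream.
        raw = b"".join(len(c).to_bytes(4, "big") + c for c in chunks)
        return zlib.compress(raw, 9)

    def extract_chunk(blob, index):
        # This simple version decompresses the whole group and walks to the
        # wanted chunk; a streaming decompressor could stop once the chunk
        # has been produced, which is the "decompress half on average" cost.
        raw = zlib.decompress(blob)
        pos = 0
        for i in range(index + 1):
            size = int.from_bytes(raw[pos:pos + 4], "big")
            pos += 4
            if i == index:
                return raw[pos:pos + size]
            pos += size
        raise IndexError(index)

    # 32 similar ~1K chunks compress far better together than one by one.
    chunks = [(b"revision %d of the same file, mostly unchanged text " % i) * 20
              for i in range(32)]
    assert extract_chunk(compress_group(chunks), 5) == chunks[5]
    grouped = len(compress_group(chunks))
    separate = sum(len(zlib.compress(c, 9)) for c in chunks)
    print(grouped, "bytes grouped vs", separate, "bytes individually")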
------------------------------
We have delta runs of about 20 revisions; compress those 20 blobs as
a group instead of individually. The pack index would point all 20
SHA-1s to the same blob with a different type code. You have to load
and unzip most of these objects anyway to compute a revision from the
diffs, so putting them into a single zip means that they share a
single compression table.
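A hypothetical sketch of the index side of this, again in Python for
illustration only; it is not git's actual pack-index format, and the
entry fields and names are invented here:

    import hashlib
    import zlib

    def build_group_index(delta_run, pack_offset):
        # Hypothetical index: every object in a delta run maps to the same
        # pack offset, plus its position within the shared group.
        index = {}
        for position, blob in enumerate(delta_run):
            sha1 = hashlib.sha1(blob).hexdigest()
            index[sha1] = {
                "type": "grouped",      # the "different type code" above
                "offset": pack_offset,  # all 20 entries share this offset
                "position": position,   # which member of the group to extract
            }
        return index

    delta_run = [b"delta for revision %d" % i for i in range(20)]
    group_blob = zlib.compress(b"".join(delta_run), 9)
    index = build_group_index(delta_run, pack_offset=0)

    # Every SHA-1 resolves to the same compressed group; only `position` differs.
    assert {entry["offset"] for entry in index.values()} == {0}
    print(len(index), "objects ->", len(group_blob), "byte shared group")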
-------------------------------
Or you can process the whole thing to create a custom coding scheme,
as illustrated in "Managing Gigabytes":
http://www.cs.mu.oz.au/mg/
mark
--
Jon Smirl
jonsmirl@gmail.com