* Computing delta sizes in pack files
@ 2006-11-21 5:39 Shawn Pearce
2006-11-22 16:44 ` Jonas Fonseca
0 siblings, 1 reply; 3+ messages in thread
From: Shawn Pearce @ 2006-11-21 5:39 UTC (permalink / raw)
To: git
[-- Attachment #1: Type: text/plain, Size: 1200 bytes --]
Recently I wanted to know how well Git's pack files were doing at
storing rather large JAR files. So I wrote the attached script to
parse the output of `git verify-pack -v` and use that to determine
how many bytes are needed for each revision of any given file.
For example running it on builtin-blame.c:
$ perl ../delta-sizes.pl builtin-blame.c
Caching cache-cdc41646a9de201b06a936fc3bddcbd51aeb532c.v...
Pack index cache created.
builtin-blame.c
16660221... s 2 44
066dee74... s 1 62
176f51a4... 0 12797
----------------------------------------
3 revs 12 KiB
3 revs 12 KiB
There are 3 revisions of this file, totalling 12 KiB in disk space
within the pack files. One of those revisions uses 44 bytes and the
other uses 62 bytes. Given that this includes the complete overhead
(including the 20-byte OBJ_REF_DELTA base reference) we're talking about
~20 bytes of delta data in revision 16660221. Pretty good. :)
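The arithmetic behind that estimate, as a quick sketch (the 2-byte object header is an assumption for illustration; the real header is a variable-length integer, so the exact split varies):

```python
# Rough cost breakdown of the 44-byte deltified revision shown above.
total_on_disk = 44   # pack_size column computed by the script
base_sha1 = 20       # OBJ_REF_DELTA stores the full 20-byte base SHA-1
obj_header = 2       # assumed varint type+size header for a small object
delta_bytes = total_on_disk - base_sha1 - obj_header
print(delta_bytes)   # 22 -- i.e. roughly ~20 bytes of compressed delta data
```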
Of course this only looks at a single blob object and does not take
into account the tree and commit overheads for a given revision,
but it does give a really good idea of what is going on.
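For reference, a quick Python sketch of how the per-object lines from `git verify-pack -v` get pulled apart by the attached script; the sample line is fabricated for illustration (non-delta objects omit the trailing depth and base columns):

```python
# One fabricated `git verify-pack -v` object line, in the column order the
# script expects: SHA-1, type, uncompressed size, offset, depth, base SHA-1.
line = ("16660221a3b3db3d9a56b2b97a35d6f10c4b86ca blob 12000 300 "
        "2 066dee74c9d7436d1f9c4889a19e2ce8e12a6a32")

fields = line.split()
sha1, objtype = fields[0], fields[1]
size, offset = int(fields[2]), int(fields[3])
depth = int(fields[4]) if len(fields) > 4 else 0   # base objects omit these
base = fields[5] if len(fields) > 5 else None

print(objtype, size, depth)  # blob 12000 2
```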
--
Shawn.
[-- Attachment #2: delta-sizes.pl --]
[-- Type: text/x-perl, Size: 3228 bytes --]
#!/usr/bin/perl
use strict;
unless ($ENV{GIT_DIR})
{
$ENV{GIT_DIR} = '.git' if -f '.git/config';
}
unless ($ENV{GIT_DIR})
{
$ENV{GIT_DIR} = shift || die "usage: $0 gitdir file...\n";
}
my %revs_by_path;
my %path_by_rev;
my %by_hash;
open(R, "git rev-list --objects --all |") or die "git rev-list: $!\n";
while (<R>)
{
chomp;
my ($sha1, $path) = split / /, $_, 2;
next unless $path;
push(@{$revs_by_path{$path}}, $sha1);
$path_by_rev{$sha1}{$path} = 1;
}
close R;
sub index_pack
{
my $idx = shift;
my $pack = $idx;
local (*R, *V, $_);  # parens required: without them only *R is localized
$pack =~ s/\.idx$/.pack/;
$pack =~ /pack-([a-z0-9]{40})\.pack$/;
my $cache = "cache-$1.v";
my @objects;
unless (open(R, $cache))
{
print STDERR "Caching $cache...\n";
open(R, ">$cache");
open(V, "git verify-pack -v $idx|");
print R while $_ = <V>;
close V;
close R;
print STDERR "Pack index cache created.\n\n";
open(R, $cache);
}
while (<R>)
{
last if /^chain length/;
chomp;
my ($sha1, $type, $size, $offset, $depth, $base) = split /\s+/;
my $o = {
sha1 => $sha1,
type => $type,
uncompressed_size => $size,
offset => $offset,
depth => $depth,
base => $base,
};
push @objects, $o;
$by_hash{$sha1} = $o;
}
close R;
my $last = undef;
foreach my $o (sort {$a->{offset} <=> $b->{offset}} @objects)
{
$last->{pack_size} = $o->{offset} - $last->{offset} if $last;
$last = $o;
}
$last->{pack_size} = ((-s $pack) - 20) - $last->{offset} if $last;
}
opendir(D, "$ENV{GIT_DIR}/objects/pack") or die "no pack directory: $!\n";
while (my $entry = readdir D)
{
next unless $entry =~ /^pack-[a-z0-9]{40}\.idx$/;
index_pack "$ENV{GIT_DIR}/objects/pack/$entry";
}
closedir D;
if (@ARGV)
{
my $g_total = 0;
my $g_revs = 0;
foreach my $path (@ARGV)
{
print $path, "\n";
my $total = 0;
my $revs = 0;
foreach my $sha1 (
sort {$by_hash{$b}{depth} <=> $by_hash{$a}{depth}}
grep {$by_hash{$_}}
@{$revs_by_path{$path}})
{
my $o = $by_hash{$sha1};
printf "%8s... %1s%2i %10i\n",
substr($sha1, 0, 8),
($o->{depth}
? ($path_by_rev{$o->{base}}{$path}
? 's'
: 'o')
: ''),
$o->{depth},
$o->{pack_size};
$total += $o->{pack_size};
$revs++;
}
$g_total += $total;
$g_revs += $revs;
my $units = 'bytes';
if ($total >= 1024)
{
$units = 'KiB';
$total /= 1024;
if ($total >= 1024)
{
$units = 'MiB';
$total /= 1024;
}
}
print '-'x40, "\n";
printf "%15s %10i %s\n",
"$revs revs",
$total, $units;
print "\n";
}
my $units = 'bytes';
if ($g_total >= 1024)
{
$units = 'KiB';
$g_total /= 1024;
if ($g_total >= 1024)
{
$units = 'MiB';
$g_total /= 1024;
}
}
printf "%15s %10i %s\n",
"$g_revs revs",
$g_total, $units;
}
else
{
foreach my $path (sort keys %revs_by_path)
{
my $total = 0;
my $revs = 0;
foreach my $sha1 (
sort {$by_hash{$b}{depth} <=> $by_hash{$a}{depth}}
grep {$by_hash{$_}}
@{$revs_by_path{$path}})
{
$total += $by_hash{$sha1}{pack_size};
$revs++;
}
my $units = 'bytes';
if ($total >= 1024)
{
$units = 'KiB';
$total /= 1024;
if ($total >= 1024)
{
$units = 'MiB';
$total /= 1024;
}
}
$total = int $total;
printf "%3i revs %10i %-5s %s\n",
$revs,
$total, $units,
$path;
}
}
* Re: Computing delta sizes in pack files
From: Jonas Fonseca @ 2006-11-22 16:44 UTC (permalink / raw)
To: Shawn Pearce; +Cc: git
On 11/21/06, Shawn Pearce <spearce@spearce.org> wrote:
> Of course this only looks at a single blob object and does not take
> into account the tree and commit overheads for a given revision,
> but it does give a really good idea of what is going on.
I have some numbers that also include the other object types. They are
based on running a set of scripts in 5 different repositories. First,
each repository has been both packed and unpacked with respect to the
different object types to show the compression level and disk-space
saving. The main results are compression level for the different object
types and a test to see which pack sizes provide "optimal" packing. I
will not post the numbers here. They are available in
http://jonas.nitro.dk/tmp/stats.pdf for those interested. The following
is my "analysis" of the numbers.
It can be seen that the blob objects generally control the overall
packing properties of the repository, especially when it comes to the
compression level. Generally, there are more tree objects than blob
objects, likely because both the ELinks and Linux kernel trees are
structured into many subdirectories containing few files. The Tig
repository is exceptional in that it has only a few files and one
tree object per revision, which has the effect of reducing the tree
object compression level. At 83% on average, tree objects compress
very well and in the general case better than blob objects. As
expected, the randomness of the content of both commit and tag objects
results in a very poor packing performance of only 2%. In terms of
disk-space usage between packed and unpacked object stores, it is
obvious that the overhead of many small object files is unavoidable.
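For clarity, "compression level" above is read as the fraction of loose (unpacked) disk space that packing saves. A small Python sketch with made-up sizes, not numbers from stats.pdf:

```python
# Fraction of the loose object store's disk space saved by packing.
def compression_level(loose_bytes, packed_bytes):
    return 1.0 - packed_bytes / loose_bytes

# Hypothetical: tree objects shrinking from 1000 KiB loose to 170 KiB
# packed would show the ~83% level quoted above.
print(round(compression_level(1000, 170) * 100))  # 83
```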
Next is an examination of how different pack sizes affect the
compression level. This also includes looking at how the size of the
index file varies. The columns on per-object sizes are of interest. The
per-object sizes can be compared with the average object size of each
project to get a rough idea of the compression level. In some of the
tables, rows are missing because not all repositories contain enough
objects with respect to the pack sizes being examined.
The data show that for minimal index files, the packs need to contain
more than 2500 objects. The 24 bytes per object in the optimal case
includes 20 bytes for the object SHA1, and thus cannot be expected to
become lower. For commit objects, the numbers show that there is almost
no difference between small and big packs. The same is assumed to be the
case for tag objects. For tree objects, it becomes very clear that in
repositories with structured source trees, the tree objects compress
much better for big packs, whereas a project, such as Git, does not save
much after pack files reach a size of 250 objects. Across all
repositories, bigger packs always lead to better compression and fewer
bytes per object, but only for blob objects.
In conclusion, the heuristic of packing based on object type is very
good. Neither commit nor tag objects compress very well when packed.
While both tree and blob objects compress well, tree objects do not
require bigger packs to provide better compression. The pack index files
do not require many objects to become optimal. It should also be
noted, from the columns giving minimum and maximum pack sizes, that
these can vary a lot compared to the average size.
--
* Re: Computing delta sizes in pack files
From: Shawn Pearce @ 2006-11-25 7:33 UTC (permalink / raw)
To: Jonas Fonseca; +Cc: git
Jonas Fonseca <jonas.fonseca@gmail.com> wrote:
> I will not post the numbers here. They are available in
> http://jonas.nitro.dk/tmp/stats.pdf for those interested. The following
> is my "analysis" of the numbers.
Thanks, this was interesting stuff.
> As expected, the randomness of the content of both commit and tag objects
> results in a very poor packing performance of only 2%.
This is one reason why Jon Smirl was pushing the idea of dictionary-based
compression. git.git has only 276 unique author lines, yet 37 of them
are really the top committers. Not surprisingly Junio C Hamano leads
the pack with 3529+ commits... :-)
Dictionary-based compression would allow us to easily compress
Junio's authorship line away from those 3529+ commits into a single
string, getting much better compression on the commits.
In trees this may work very well too for very common file names, e.g.
"Makefile". Yes, each tree delta compresses very well against its
base (and likely copies the file name from the base even when the
SHA1 changed) but if the bases were able to use a common dictionary
that would help even more.
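The idea can be sketched with zlib, which supports preset dictionaries directly via `zdict`; the author line and commit text below are fabricated for illustration, not real git data:

```python
import zlib

# Seed the compressor with a dictionary holding a frequent author line;
# the first occurrence in the commit text then becomes a short
# backreference instead of literal bytes.
dictionary = b"author Junio C Hamano <junkio@cox.net> "
commit_text = (
    b"tree 0123456789012345678901234567890123456789\n"
    b"parent 89abcdef89abcdef89abcdef89abcdef89abcdef\n"
    b"author Junio C Hamano <junkio@cox.net> 1164086340 -0800\n"
    b"committer Junio C Hamano <junkio@cox.net> 1164086340 -0800\n"
    b"\nMerge branch 'maint'\n"
)

plain = zlib.compress(commit_text, 9)

c = zlib.compressobj(level=9, zdict=dictionary)
with_dict = c.compress(commit_text) + c.flush()

# The receiver must hold the same dictionary to decompress.
d = zlib.decompressobj(zdict=dictionary)
assert d.decompress(with_dict) == commit_text

print(len(plain), len(with_dict))  # the dictionary version comes out smaller
```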
> The data show that for minimal index files, the packs need to contain
> more than 2500 objects. The 24 bytes per-object for the optimal case
> includes 20-bytes for the object SHA1, and thus cannot be expected to
> become lower.
This is just a fundamental property of the pack index file format.
The file *MUST* carry 1064 bytes of fixed overhead, with 24 bytes of
data per object indexed. So the fixed overhead amortizes very
quickly over the individual object entries, at which point it's
exactly 24 bytes per entry. This all of course assumes a 32 bit
index (which is the current format).
The thing is, the Mozilla index is 44 MiB. That's roughly 1.9 million
objects. The index itself is larger than the entire git.git pack.
On a large repository the index ain't trivial... yet it's essential
to performance!
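A sketch of that cost model, the 1064 fixed bytes being the 256-entry fanout table of 4-byte counts plus the two trailing 20-byte SHA-1 checksums:

```python
# v1 pack index size: 1024-byte fanout table + 24 bytes per object
# (4-byte offset + 20-byte SHA-1) + 20-byte pack checksum
# + 20-byte index checksum.
def idx_size_bytes(n_objects):
    return 256 * 4 + 24 * n_objects + 20 + 20

print(idx_size_bytes(0))                          # 1064 fixed bytes
print(idx_size_bytes(1_900_000) / (1024 * 1024))  # ~43.5 MiB, the Mozilla case
```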
On the other hand the 1064 bytes of fixed overhead in the index
is nothing compared to the overhead in say an RCS file. Or an
SVN repository... :-)
What I failed to point out in my script (or in my email) is that
the 24 bytes of index entry cannot be eliminated, and thus must
be added to the "revision cost". In some cases it's about the same
size as the deltafied revision in the pack file. :-(
--