From: Keith Packard <keithp@keithp.com>
To: Nicolas Pitre <nico@cam.org>
Cc: keithp@keithp.com, git@vger.kernel.org
Subject: Re: observations on parsecvs testing
Date: Thu, 15 Jun 2006 15:03:26 -0700
Message-ID: <1150409006.30681.132.camel@neko.keithp.com>
In-Reply-To: <Pine.LNX.4.64.0606151529350.16002@localhost.localdomain>

On Thu, 2006-06-15 at 16:37 -0400, Nicolas Pitre wrote:
> My machine is a P4 @ 3GHz with 1GB ram.
> 
> Feeding parsecvs with the Mozilla repository, it first ran for 175 
> minutes with about 98% CPU spent in user space reading the 100458 ,v 
> files and writing 700000+ blob objects.  Memory usage grew to 1789MB 
> total while the resident memory saturated around 700MB.  This part was 
> fine even with 1GB of ram since unused memory was gently pushed to swap.  
> Only problem is that spawned git-pack-objects instances started 
> failing with memory allocation errors by that time, which is 
> unfortunate but not fatal.

Right, the ,v -> blob conversion process uses around 160 bytes per
revision as best I can count (one rev_commit, one rev_file and
a 41-byte sha1 string); 700000 revisions would therefore use 1.1GB just
for the revision objects. It should be possible to shrink this data
structure fairly significantly by converting the sha1 value to binary
and compressing the CVS revision number to minimal length. Switching
from the general git/cvs structure to this cvs-specific structure is
'on the list' of things I'd like to do.
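
For illustration, a compacted per-revision record might look something
like this. This is a sketch only; the struct and helper names are
hypothetical, not the actual parsecvs types:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical compacted per-revision record: a binary sha1 plus
     * a variable-length CVS revision number, instead of a 41-byte hex
     * string and fixed-size fields. */
    struct compact_rev {
        uint8_t  sha1[20];     /* binary SHA-1, not 41 hex chars */
        uint8_t  ndigits;      /* components in the CVS rev number */
        uint16_t digits[];     /* e.g. 1.2.4.7 -> {1, 2, 4, 7} */
    };

    /* Allocate a record sized exactly to its revision number. */
    static struct compact_rev *compact_rev_new(const uint8_t sha1[20],
                                               const uint16_t *digits,
                                               int n)
    {
        struct compact_rev *r = malloc(sizeof(*r) +
                                       n * sizeof(r->digits[0]));
        memcpy(r->sha1, sha1, 20);
        r->ndigits = n;
        memcpy(r->digits, digits, n * sizeof(r->digits[0]));
        return r;
    }

That cuts the sha1 from 41 bytes to 20 and makes each revision number
pay only for its actual length.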

> But then things started to go bad after all ,v files were parsed: 
> parsecvs dropped to 3% CPU while the rest of the time was spent 
> waiting on swap IO, and therefore no substantial progress was made 
> at that point.

Yeah, after this point, parsecvs is merging the computed revision
histories of the individual files into a global history. This means it
walks across the whole set of files to compute each git commit. For
each branch, it computes the set of files visible at the head of that
branch and then sorts the last revisions of the visible files to
discover the most recent change set along that branch, constructing a
commit for each logical changeset backwards from the present into the
past. Because it works backwards, it must reach the very beginning of
history before it can emit any commits to the repository. So it has to
save them somewhere; right now, it saves them in memory. What it could
do instead is construct the tree object for each commit as it goes,
saving only the resulting sha1 and dumping the rest of the data. That
should save plenty of memory, but would require a radical restructuring
of the code (which is desperately needed, btw). With this change,
parsecvs should actually *shrink* over time, instead of growing.
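
As a rough sketch of that restructuring (the changeset type and the
helper functions are stand-ins for parsecvs internals, not its real
API):

    #include <stdint.h>
    #include <time.h>

    struct changeset;   /* per-changeset file list (hypothetical) */

    /* Hypothetical helpers: write a git tree object for the files in
     * the changeset, fetch its metadata, and release the per-file
     * revision data. */
    extern void write_tree_object(struct changeset *cs,
                                  uint8_t sha1_out[20]);
    extern char *changeset_log(struct changeset *cs);
    extern time_t changeset_date(struct changeset *cs);
    extern void free_file_revisions(struct changeset *cs);

    /* All that must survive until the commit objects are finally
     * written, oldest first, is this small record per changeset. */
    struct pending_commit {
        uint8_t tree_sha1[20];
        char   *log;
        time_t  date;
    };

    static void finish_changeset(struct changeset *cs,
                                 struct pending_commit *pc)
    {
        write_tree_object(cs, pc->tree_sha1); /* tree written now */
        pc->log  = changeset_log(cs);
        pc->date = changeset_date(cs);
        free_file_revisions(cs); /* memory shrinks as we walk back */
    }

Once the walk reaches the beginning of history, the commits can be
written oldest-first from the small pending_commit records alone.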

> So the Mozilla repository clearly requires 2GB of ram to 
> realistically be converted to GIT using parsecvs, unless its second 
> phase is reworked to avoid totally random access in memory in order 
> to improve swap behavior, or its in-memory data set is shrunk by at 
> least half.

Changing the data structures used in the first phase will shrink them
significantly; replacing the second-phase data structures with sha1
tree hash values and disposing of the first-phase objects incrementally
should produce a shrinking memory pattern rather than a growing one. It
might well be easier at this point to just take the basic CVS parser
and start afresh, though; the code is a horror show of incremental
refinements.

> Also rcs2git() is very inefficient especially with files having many 
> revisions as it reconstructs the delta chain on every call.  For example 
> mozilla/configure,v has at least 1690 revisions, and actually converting 
> it into GIT blobs goes at a rate of 2.4 objects per second _only_ on my 
> machine.  Can't objects be created as the delta list is walked/applied 
> instead?  That would significantly reduce the initial conversion time.

Yes, I wanted to do this, but also wanted to ensure that the constructed
versions exactly matched the native rcs output. Starting with 'real' rcs
code seemed likely to ensure the latter. This "should" be easy to fix...
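
For what it's worth, a one-pass version might look roughly like this
(apply_rcs_delta(), write_blob() and friends are hypothetical stand-ins
for the rcs2git internals):

    #include <stddef.h>
    #include <stdint.h>

    struct rcs_delta;   /* one delta record from the ,v file */

    extern struct rcs_delta *next_delta(struct rcs_delta *d);
    extern void apply_rcs_delta(char **text, size_t *len,
                                struct rcs_delta *d);
    extern void write_blob(const char *text, size_t len,
                           uint8_t sha1_out[20]);
    extern void record_blob(struct rcs_delta *d,
                            const uint8_t sha1[20]);

    /* Walk the delta chain once, writing a blob per revision, rather
     * than replaying the whole chain for every revision. */
    static void convert_all_revisions(char *text, size_t len,
                                      struct rcs_delta *chain)
    {
        struct rcs_delta *d;
        uint8_t sha1[20];

        /* The head revision is stored whole in the ,v file; each
         * delta then steps one revision further into the past.
         * (Recording the head revision's blob is elided here.) */
        write_blob(text, len, sha1);
        for (d = chain; d; d = next_delta(d)) {
            apply_rcs_delta(&text, &len, d);
            write_blob(text, len, sha1);
            record_blob(d, sha1);
        }
    }

That makes the per-file cost linear in the number of revisions instead
of quadratic, which is what the configure,v case is hitting.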

-- 
keith.packard@intel.com
