* observations on parsecvs testing
From: Nicolas Pitre @ 2006-06-15 20:37 UTC
  To: Keith Packard; +Cc: git


My machine is a P4 @ 3GHz with 1GB ram.

Fed the Mozilla repository, parsecvs first ran for 175 minutes with
about 98% CPU spent in user space, reading the 100458 ,v files and
writing 700000+ blob objects.  Memory usage grew to 1789MB total while
resident memory saturated around 700MB.  This part was fine even with
1GB of ram since unused memory was gently pushed to swap.  The only
problem is that spawned git-pack-objects instances started failing
with memory allocation errors by that time, which is unfortunate but
not fatal.

But then things went bad once all the ,v files were parsed: parsecvs
dropped to 3% CPU, with the rest of the time spent waiting on swap
I/O, so no substantial progress was being made at that point.

So the Mozilla repository clearly requires 2GB of ram to be
realistically converted to GIT with parsecvs, unless its second
phase is reworked to avoid totally random memory access (to improve
swap behavior), or its in-memory data set is shrunk by at least half.

Also, rcs2git() is very inefficient, especially with files having
many revisions, as it reconstructs the delta chain on every call.
For example, mozilla/configure,v has at least 1690 revisions, and
converting it into GIT blobs proceeds at a rate of _only_ 2.4
objects per second on my machine.  Couldn't objects be created as
the delta list is walked and applied instead?  That would
significantly reduce the initial conversion time.
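
Something along these lines is what I have in mind; just a sketch,
where struct rcs_delta, apply_rcs_delta() and write_blob() are
made-up placeholder names and not the actual parsecvs interfaces:

    #include <stdlib.h>
    #include <string.h>

    extern void *xmalloc(size_t size);  /* git-style, dies on failure */

    /* placeholder: one node of the parsed RCS delta chain */
    struct rcs_delta {
        struct rcs_delta *next;         /* next (older) delta */
        const char *text;               /* delta text from the ,v file */
        size_t len;
    };

    /* placeholder: apply one delta, returning a new malloc'ed buffer */
    extern char *apply_rcs_delta(const char *buf, size_t len,
                                 const struct rcs_delta *d,
                                 size_t *outlen);

    /* placeholder: write one blob object, filling in sha1[20] */
    extern int write_blob(const char *buf, size_t len,
                          unsigned char *sha1);

    /* Walk the chain once, emitting one blob per revision, instead
     * of rebuilding the whole chain from scratch for every call. */
    static void emit_all_revisions(const char *head, size_t head_len,
                                   const struct rcs_delta *chain)
    {
        const struct rcs_delta *d;
        unsigned char sha1[20];
        size_t len = head_len;
        char *buf = xmalloc(len);

        memcpy(buf, head, len);
        write_blob(buf, len, sha1);     /* the head revision itself */
        for (d = chain; d; d = d->next) {
            size_t nlen;
            char *next = apply_rcs_delta(buf, len, d, &nlen);
            free(buf);
            buf = next;
            len = nlen;
            write_blob(buf, len, sha1); /* one blob per older revision */
        }
        free(buf);
    }

That would turn the current O(n^2) worth of delta replays per file
into a single O(n) walk of the chain.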


Nicolas

* Re: observations on parsecvs testing
From: Sean @ 2006-06-15 20:47 UTC
  To: Nicolas Pitre; +Cc: keithp, git

On Thu, 15 Jun 2006 16:37:30 -0400 (EDT)
Nicolas Pitre <nico@cam.org> wrote:

> Also, rcs2git() is very inefficient, especially with files having
> many revisions, as it reconstructs the delta chain on every call.
> For example, mozilla/configure,v has at least 1690 revisions, and
> converting it into GIT blobs proceeds at a rate of _only_ 2.4
> objects per second on my machine.  Couldn't objects be created as
> the delta list is walked and applied instead?  That would
> significantly reduce the initial conversion time.

Hi Nicolas,

That was a planned optimization which I mentioned to Keith previously.
I was kinda waiting to hear back on how it was working for him, and
whether there was interest in putting more work into it for inclusion
in his mainline.

Sean

* Re: observations on parsecvs testing
From: Nicolas Pitre @ 2006-06-15 20:55 UTC
  To: Sean; +Cc: keithp, git

On Thu, 15 Jun 2006, Sean wrote:

> On Thu, 15 Jun 2006 16:37:30 -0400 (EDT)
> Nicolas Pitre <nico@cam.org> wrote:
> 
> > Also, rcs2git() is very inefficient, especially with files having
> > many revisions, as it reconstructs the delta chain on every call.
> > For example, mozilla/configure,v has at least 1690 revisions, and
> > converting it into GIT blobs proceeds at a rate of _only_ 2.4
> > objects per second on my machine.  Couldn't objects be created as
> > the delta list is walked and applied instead?  That would
> > significantly reduce the initial conversion time.
> 
> Hi Nicolas,
> 
> That was a planned optimization which I mentioned to Keith previously.
> I was kinda waiting to hear back on how it was working for him, and
> whether there was interest in putting more work into it for inclusion
> in his mainline.

I think it is really worth it.  I'd expect the first half of the
conversion to go significantly faster then.


Nicolas

* Re: observations on parsecvs testing
From: Keith Packard @ 2006-06-15 22:03 UTC
  To: Nicolas Pitre; +Cc: keithp, git

On Thu, 2006-06-15 at 16:37 -0400, Nicolas Pitre wrote:
> My machine is a P4 @ 3GHz with 1GB ram.
> 
> Fed the Mozilla repository, parsecvs first ran for 175 minutes with
> about 98% CPU spent in user space, reading the 100458 ,v files and
> writing 700000+ blob objects.  Memory usage grew to 1789MB total
> while resident memory saturated around 700MB.  This part was fine
> even with 1GB of ram since unused memory was gently pushed to swap.
> The only problem is that spawned git-pack-objects instances started
> failing with memory allocation errors by that time, which is
> unfortunate but not fatal.

Right, the ,v -> blob conversion process uses around 160 bytes per
revision as best I can count (one rev_commit, one rev_file and a
41-byte sha1 string); 700000 revisions would therefore use over 100MB
just for the revision objects.  It should be possible to reduce the
size of this data structure fairly significantly by converting the
sha1 value to binary and compressing the CVS revision number to
minimal length.  Switching from the general git/cvs structure to this
cvs-specific structure is 'on the list' of things I'd like to do.
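
Roughly like this; only a sketch, and the field layout below is
invented for illustration rather than taken from the parsecvs source:

    #include <stdint.h>

    /* Hypothetical compact per-revision record: a 20-byte binary
     * sha1 instead of a 41-byte hex string, and the dot-separated
     * CVS revision number stored as a minimal-length field array. */
    struct compact_rev {
        unsigned char sha1[20];     /* binary, not hex */
        uint32_t date;              /* seconds since the epoch */
        uint8_t n_fields;           /* fields in the revision number */
        uint16_t fields[];          /* e.g. "1.2.4.3" -> {1, 2, 4, 3} */
    };

A four-field revision number would then cost a bit over 30 bytes
instead of ~160, before allocator overhead.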

> But then things went bad once all the ,v files were parsed: parsecvs
> dropped to 3% CPU, with the rest of the time spent waiting on swap
> I/O, so no substantial progress was being made at that point.

Yeah, after this point, parsecvs is merging the computed revision
histories of the individual files into a global history.  This means
it walks across the whole set of files to compute each git commit.
For each branch, it computes the set of files visible at the head of
that branch, then sorts the last revisions of the visible files to
discover the most recent change set along that branch, constructing a
commit for each logical changeset backwards from the present into the
past.  Because it constructs commits from the present backwards, it
must reach all the way into the past before it can emit any commits
to the repository, so it has to save them somewhere; right now, it
saves them in memory.  What it could do instead is construct tree
objects for each commit, saving only the resulting sha1 and dumping
the rest of the data.  That should save plenty of memory, but it
would require a radical restructuring of the code (which is
desperately needed, btw).  With this change, parsecvs's memory use
should actually *shrink* over time, instead of growing.
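
In rough pseudo-C, with every helper name below invented for
illustration, the restructured walk would look something like:

    #include <stdlib.h>
    #include <time.h>

    extern void *xcalloc(size_t n, size_t size); /* dies on failure */

    struct changeset;   /* placeholder: per-changeset file-level data */
    extern struct changeset *next_changeset_backwards(void);
    extern void write_tree_for_changeset(struct changeset *cs,
                                         unsigned char *tree_sha1);
    extern time_t changeset_date(struct changeset *cs);
    extern void release_changeset(struct changeset *cs);

    struct pending_commit {
        struct pending_commit *older;   /* link toward the past */
        unsigned char tree_sha1[20];    /* all that survives of the tree */
        time_t date;
    };

    static struct pending_commit *collect_branch(void)
    {
        struct pending_commit *head = NULL;
        struct changeset *cs;

        while ((cs = next_changeset_backwards()) != NULL) {
            struct pending_commit *pc = xcalloc(1, sizeof(*pc));

            /* the tree object goes into the repository right away;
             * only its 20-byte sha1 stays in memory */
            write_tree_for_changeset(cs, pc->tree_sha1);
            pc->date = changeset_date(cs);
            release_changeset(cs);      /* file-level data can be freed */
            pc->older = head;
            head = pc;
        }
        /* the caller then walks "head" from oldest to newest, emitting
         * commit objects, each parented on the one before it */
        return head;
    }

Since each changeset's file-level data is freed as soon as its tree
is written, memory use falls as the walk proceeds into the past.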

> So the Mozilla repository clearly requires 2GB of ram to be
> realistically converted to GIT with parsecvs, unless its second
> phase is reworked to avoid totally random memory access (to improve
> swap behavior), or its in-memory data set is shrunk by at least half.

Changing the data structures used in the first phase will shrink them
significantly; replacing the second stage data structures with sha1
tree hash values and disposing of the first phase objects
incrementally should produce a shrinking memory pattern rather than a
growing one.  It might well be easier at this point to just take the
basic CVS parser and start afresh, though; the code is a horror show
of incremental refinements.

> Also, rcs2git() is very inefficient, especially with files having
> many revisions, as it reconstructs the delta chain on every call.
> For example, mozilla/configure,v has at least 1690 revisions, and
> converting it into GIT blobs proceeds at a rate of _only_ 2.4
> objects per second on my machine.  Couldn't objects be created as
> the delta list is walked and applied instead?  That would
> significantly reduce the initial conversion time.

Yes, I wanted to do this, but also wanted to ensure that the constructed
versions exactly matched the native rcs output. Starting with 'real' rcs
code seemed likely to ensure the latter. This "should" be easy to fix...

-- 
keith.packard@intel.com


* Re: observations on parsecvs testing
From: Keith Packard @ 2006-06-15 22:04 UTC
  To: Sean; +Cc: keithp, Nicolas Pitre, git

On Thu, 2006-06-15 at 16:47 -0400, Sean wrote:
> Hi Nicolas,
> 
> That was a planned optimization which I mentioned to Keith previously.
> I was kinda waiting to hear back on how it was working for him, and
> whether there was interest in putting more work into it for inclusion
> in his mainline.

The rcs2git code is working great and is on 'master' at this point;
optimizations to generate all of the revisions in one pass would be
greatly appreciated.

-- 
keith.packard@intel.com

