From: Sam Hocevar <sam@zoy.org>
To: git@vger.kernel.org
Subject: Re: [PATCH] git-p4: improve performance with large files
Date: Fri, 6 Mar 2009 09:53:59 +0100
Message-ID: <20090306085357.GA12880@zoy.org>
In-Reply-To: <1d48f7010903051725v510f99f0h2a05b9381ff75ac1@mail.gmail.com>

On Thu, Mar 05, 2009, Han-Wen Nienhuys wrote:

> i'd say
> 
>   data = []
> 
> add a comment that you're trying to save memory. There is no reason to
> remove data from the namespace.

   Okay. Here is an improved version.

Signed-off-by: Sam Hocevar <sam@zoy.org>
---
 contrib/fast-import/git-p4 |   13 ++++++++++++-
 1 files changed, 12 insertions(+), 1 deletions(-)

diff --git a/contrib/fast-import/git-p4 b/contrib/fast-import/git-p4
index 3832f60..db0ea0a 100755
--- a/contrib/fast-import/git-p4
+++ b/contrib/fast-import/git-p4
@@ -984,11 +984,22 @@ class P4Sync(Command):
         while j < len(filedata):
             stat = filedata[j]
             j += 1
+            data = []
             text = ''
+            # Flush "data" into "text" every 8192 chunks to 1) keep
+            # performance decent by limiting string concatenations and
+            # 2) avoid excessive memory usage by purging "data" often
+            # enough. p4 sends 4k chunks, so we use at most 32 MiB of
+            # additional memory while rebuilding the file data.
             while j < len(filedata) and filedata[j]['code'] in ('text', 'unicode', 'binary'):
-                text += filedata[j]['data']
+                data.append(filedata[j]['data'])
                 del filedata[j]['data']
+                if len(data) > 8192:
+                    text += ''.join(data)
+                    data = []
                 j += 1
+            text += ''.join(data)
+            data = None

             if not stat.has_key('depotFile'):
                 sys.stderr.write("p4 print fails with: %s\n" % repr(stat))
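
For the curious, the pattern boils down to the following standalone
sketch (join_chunks and its names are illustrative only, not code
that exists in git-p4):

    def join_chunks(chunks, batch=8192):
        # Accumulate chunks in a list and fold them into the result
        # string only once per batch: a single ''.join() per 8192
        # chunks keeps string concatenations rare, while the pending
        # list never grows past batch * 4 KiB (~32 MiB for p4's 4k
        # chunks).
        text = ''
        pending = []
        for chunk in chunks:
            pending.append(chunk)
            if len(pending) >= batch:
                text += ''.join(pending)
                pending = []
        text += ''.join(pending)  # flush the remainder
        return text

Joining once per batch instead of concatenating every chunk avoids
the quadratic behaviour of repeated "text +=" on large files.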

-- 
Sam.
