Date: Wed, 15 Aug 2012 15:48:56 -0500
From: Michael Roth <mdroth@linux.vnet.ibm.com>
Message-ID: <20120815204856.GK16157@illuin>
References: <1345056344-31849-1-git-send-email-mdroth@linux.vnet.ibm.com>
 <1345056344-31849-2-git-send-email-mdroth@linux.vnet.ibm.com>
 <502C00E4.1020009@redhat.com>
In-Reply-To: <502C00E4.1020009@redhat.com>
Subject: Re: [Qemu-devel] [PATCH for-1.2 v3 2/3] json-parser: don't replicate
 tokens at each level of recursion
To: Eric Blake <eblake@redhat.com>
Cc: armbru@redhat.com, aliguori@us.ibm.com, qemu-devel@nongnu.org,
 lcapitulino@redhat.com

On Wed, Aug 15, 2012 at 02:04:52PM -0600, Eric Blake wrote:
> On 08/15/2012 12:45 PM, Michael Roth wrote:
> > Currently, when parsing a stream of tokens we make a copy of the token
> > list at the beginning of each level of recursion so that we do not
> > modify the original list in cases where we need to fall back to an
> > earlier state.
> >
> > In the worst case, we will only read 1 or 2 tokens off the list before
> > recursing again, which means an upper bound of roughly N^2 token
> > allocations.
> >
> > For a "reasonably" sized QMP request (in this case a QMP representation
> > of cirrus_vga's device state, generated via QIDL and passed in via
> > qom-set), this caused my 16GB of memory to be exhausted before the
> > parser made any noticeable progress.
> >
> > This patch works around the issue by keeping a single copy of the token
> > list in the form of an indexable array, so that we can save/restore
> > parser state by manipulating indices.
> >
> > A subsequent commit adds a "large_dict" test case which exhibits the
> > same behavior as above. With this patch applied, the test case
> > completes successfully in under a second.
> >
> > Tested with valgrind, make check, and QMP.
> >
> > Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
> > ---
> >  json-parser.c | 230 +++++++++++++++++++++++++++++++++++----------------------
> >  1 file changed, 142 insertions(+), 88 deletions(-)
>
> I'm not the most familiar with this code, so take my review with a grain
> of salt, but I read through it and the transformation looks sane (and my
> non-code findings from v2 were fixed).
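To make the core of the change easier to follow before the hunks below,
here is the backtracking pattern in isolation. This is a standalone toy
with made-up names, not the actual json-parser.c code:

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Toy token stream: one flat array plus a cursor. */
typedef struct TokenStream {
    const char **buf;   /* all tokens, lexed up front */
    size_t count;       /* number of tokens in buf */
    size_t pos;         /* index of the next token to consume */
} TokenStream;

static const char *stream_next(TokenStream *ts)
{
    return ts->pos < ts->count ? ts->buf[ts->pos++] : NULL;
}

/* Speculative parse of a key:value pair: remember the cursor, try it,
 * rewind on failure. The old scheme effectively copied the whole
 * remaining token list at this "remember" step, which is where the
 * roughly N^2 allocations came from on deeply nested input. */
static bool try_parse_pair(TokenStream *ts, const char **key,
                           const char **value)
{
    size_t saved_pos = ts->pos;          /* O(1) save */
    const char *k = stream_next(ts);
    const char *colon = stream_next(ts);
    const char *v = stream_next(ts);

    if (!k || !colon || !v || strcmp(colon, ":") != 0) {
        ts->pos = saved_pos;             /* O(1) restore */
        return false;
    }
    *key = k;
    *value = v;
    return true;
}

A caller would set this up along the lines of:

    const char *toks[] = { "\"key\"", ":", "\"value\"" };
    TokenStream ts = { toks, 3, 0 };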
>
> Reviewed-by: Eric Blake <eblake@redhat.com>
>
> > +static JSONParserContext parser_context_save(JSONParserContext *ctxt)
> > +{
> > +    JSONParserContext saved_ctxt = {0};
> > +    saved_ctxt.tokens.pos = ctxt->tokens.pos;
> > +    saved_ctxt.tokens.count = ctxt->tokens.count;
> > +    saved_ctxt.tokens.buf = ctxt->tokens.buf;
>
> Is it any simpler to condense 3 lines to 1:
>
> saved_ctxt.tokens = ctxt->tokens;
>
> > +    return saved_ctxt;
> > +}
> > +
> > +static void parser_context_restore(JSONParserContext *ctxt,
> > +                                   JSONParserContext saved_ctxt)
> > +{
> > +    ctxt->tokens.pos = saved_ctxt.tokens.pos;
> > +    ctxt->tokens.count = saved_ctxt.tokens.count;
> > +    ctxt->tokens.buf = saved_ctxt.tokens.buf;
>
> and again, ctxt->tokens = saved_ctxt.tokens;

Poor function naming on my part: save/restore only apply to the token
state, and the other fields in ctxt are unused, so I opted to set the
fields explicitly rather than copy the whole struct. We could probably
make this read a little better by breaking the token state off into its
own struct (a rough sketch of what I mean is at the bottom of this mail),
but I think we can clean that up later.

Thanks for the review!

> --
> Eric Blake   eblake@redhat.com    +1-919-301-3266
> Libvirt virtualization library http://libvirt.org
>
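P.S. To illustrate the cleanup mentioned above (not part of this patch;
the names and types below are placeholders, not json-parser.c as it
stands): if the token bookkeeping moves into its own struct, the
save/restore helpers reduce to exactly the struct assignment you
suggested, and their names stop overstating what they touch.

#include <stddef.h>

/* Placeholder sketch only. */
typedef struct JSONTokenState {
    void **buf;       /* token array, filled once before parsing begins */
    size_t pos;       /* index of the next token to hand out */
    size_t count;     /* total number of tokens in buf */
} JSONTokenState;

typedef struct JSONParserContext {
    /* ...the context's other fields, untouched by save/restore... */
    JSONTokenState tokens;
} JSONParserContext;

/* With the backtracking state isolated, save/restore are just struct
 * assignment, and the type documents exactly what gets saved. */
static JSONTokenState parser_context_save(JSONParserContext *ctxt)
{
    return ctxt->tokens;
}

static void parser_context_restore(JSONParserContext *ctxt,
                                   JSONTokenState saved)
{
    ctxt->tokens = saved;
}

The rest of JSONParserContext stays as it is; the struct boundary just
makes it explicit that the state being saved and restored is the token
cursor and nothing else.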