From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:35004)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1XyXnI-0000NZ-DB for qemu-devel@nongnu.org;
	Tue, 09 Dec 2014 22:18:33 -0500
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1XyXnD-0006HX-Iu
	for qemu-devel@nongnu.org; Tue, 09 Dec 2014 22:18:28 -0500
Received: from mx1.redhat.com ([209.132.183.28]:46158)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1XyXnD-0006HQ-BM for qemu-devel@nongnu.org;
	Tue, 09 Dec 2014 22:18:23 -0500
Date: Wed, 10 Dec 2014 08:48:10 +0530
From: Amit Shah
Message-ID: <20141210031810.GC27208@grmbl.mre>
References: <1416830152-524-1-git-send-email-arei.gonglei@huawei.com>
 <1416830152-524-5-git-send-email-arei.gonglei@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1416830152-524-5-git-send-email-arei.gonglei@huawei.com>
Subject: Re: [Qemu-devel] [PATCH RESEND for 2.3 4/6] xbzrle: check 8 bytes at
 a time after an concurrency scene
List-Id:
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: arei.gonglei@huawei.com
Cc: ChenLiang, weidong.huang@huawei.com, quintela@redhat.com,
	peter.huangpeng@huawei.com, qemu-devel@nongnu.org,
	pbonzini@redhat.com, dgilbert@redhat.com

On (Mon) 24 Nov 2014 [19:55:50], arei.gonglei@huawei.com wrote:
> From: ChenLiang
>
> The logic of old code is correct. But Checking byte by byte will
> consume time after an concurrency scene.
>
> Signed-off-by: ChenLiang
> Signed-off-by: Gonglei
> ---
>  xbzrle.c | 28 ++++++++++++++++++----------
>  1 file changed, 18 insertions(+), 10 deletions(-)
>
> diff --git a/xbzrle.c b/xbzrle.c
> index d27a140..0477367 100644
> --- a/xbzrle.c
> +++ b/xbzrle.c
> @@ -50,16 +50,24 @@ int xbzrle_encode_buffer(uint8_t *old_buf, uint8_t *new_buf, int slen,
>
>          /* word at a time for speed */
>          if (!res) {
> -            while (i < slen &&
> -                   (*(long *)(old_buf + i)) == (*(long *)(new_buf + i))) {
> -                i += sizeof(long);
> -                zrun_len += sizeof(long);
> -            }
> -
> -            /* go over the rest */
> -            while (i < slen && old_buf[i] == new_buf[i]) {
> -                zrun_len++;
> -                i++;
> +            while (i < slen) {
> +                if ((*(long *)(old_buf + i)) == (*(long *)(new_buf + i))) {
> +                    i += sizeof(long);
> +                    zrun_len += sizeof(long);
> +                } else {
> +                    /* go over the rest */
> +                    for (j = 0; j < sizeof(long); j++) {
> +                        if (old_buf[i] == new_buf[i]) {
> +                            i++;
> +                            zrun_len++;

I don't see how this is different from the code it's replacing.  The
check and increments are all the same.  It's difficult to see why there
would be a speed benefit.

Can you please explain?  Do you have any performance numbers for
before/after?

Thanks,

		Amit