From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <515EEC8C.8010203@redhat.com>
Date: Fri, 05 Apr 2013 17:23:56 +0200
From: Paolo Bonzini
References: <1363963683-26157-1-git-send-email-owasserm@redhat.com> <1363963683-26157-8-git-send-email-owasserm@redhat.com> <20130405134445.GD2351@dhcp-200-207.str.redhat.com>
In-Reply-To: <20130405134445.GD2351@dhcp-200-207.str.redhat.com>
Subject: Re: [Qemu-devel] [PATCH v5 7/7] Use qemu_put_buffer_async for guest memory pages
To: Kevin Wolf
Cc: Orit Wasserman, quintela@redhat.com, chegu_vinod@hp.com, qemu-devel@nongnu.org, mst@redhat.com

On 05/04/2013 15:44, Kevin Wolf wrote:
> This seems to have killed savevm performance. I noticed that
> qemu-iotests case 007 took forever on my test box (882 seconds instead
> of something like 10 seconds). It can be reproduced by this script:
>
> export MALLOC_PERTURB_=11
> qemu-img create -f qcow2 -o compat=1.1 test.qcow2 1M
> time qemu-system-x86_64 -nographic -hda $TEST_IMG -serial none -monitor stdio <<EOF
> savevm test
> quit
> EOF
>
> This used to take about 0.6s for me, after this patch it's around 10s.
The solution could be to make bdrv_load_vmstate take an iov/iovcnt pair.
Alternatively, you can try the attached patch. I haven't yet tested it
though, and won't be able to do so today.

Paolo

[Attachment: savevm-performance.patch]

diff --git a/savevm.c b/savevm.c
index b1d8988..af99d64 100644
--- a/savevm.c
+++ b/savevm.c
@@ -525,27 +525,24 @@ static void qemu_file_set_error(QEMUFile *f, int ret)
 static void qemu_fflush(QEMUFile *f)
 {
     ssize_t ret = 0;
-    int i = 0;
 
     if (!f->ops->writev_buffer && !f->ops->put_buffer) {
         return;
     }
 
-    if (f->is_write && f->iovcnt > 0) {
+    if (f->is_write) {
         if (f->ops->writev_buffer) {
-            ret = f->ops->writev_buffer(f->opaque, f->iov, f->iovcnt);
-            if (ret >= 0) {
-                f->pos += ret;
+            if (f->iovcnt > 0) {
+                ret = f->ops->writev_buffer(f->opaque, f->iov, f->iovcnt);
             }
         } else {
-            for (i = 0; i < f->iovcnt && ret >= 0; i++) {
-                ret = f->ops->put_buffer(f->opaque, f->iov[i].iov_base, f->pos,
-                                         f->iov[i].iov_len);
-                if (ret >= 0) {
-                    f->pos += ret;
-                }
+            if (f->buf_index > 0) {
+                ret = f->ops->put_buffer(f->opaque, f->buf, f->pos, f->buf_index);
             }
         }
+        if (ret >= 0) {
+            f->pos += ret;
+        }
         f->buf_index = 0;
         f->iovcnt = 0;
     }
@@ -631,6 +628,11 @@ static void add_to_iovec(QEMUFile *f, const uint8_t *buf, int size)
         f->iov[f->iovcnt].iov_base = (uint8_t *)buf;
         f->iov[f->iovcnt++].iov_len = size;
     }
+
+    f->is_write = 1;
+    if (f->buf_index >= IO_BUF_SIZE || f->iovcnt >= MAX_IOV_SIZE) {
+        qemu_fflush(f);
+    }
 }
 
 void qemu_put_buffer_async(QEMUFile *f, const uint8_t *buf, int size)
@@ -645,13 +647,11 @@ void qemu_put_buffer_async(QEMUFile *f, const uint8_t *buf, int size)
         abort();
     }
 
-    add_to_iovec(f, buf, size);
-
-    f->is_write = 1;
-    f->bytes_xfer += size;
-
-    if (f->buf_index >= IO_BUF_SIZE || f->iovcnt >= MAX_IOV_SIZE) {
-        qemu_fflush(f);
+    if (f->ops->writev_buffer) {
+        f->bytes_xfer += size;
+        add_to_iovec(f, buf, size);
+    } else {
+        qemu_put_buffer(f, buf, size);
     }
 }
 
@@ -674,9 +674,17 @@ void qemu_put_buffer(QEMUFile *f, const uint8_t *buf, int size)
         if (l > size)
             l = size;
         memcpy(f->buf + f->buf_index, buf, l);
-        f->is_write = 1;
-        f->buf_index += l;
-        qemu_put_buffer_async(f, f->buf + (f->buf_index - l), l);
+        f->bytes_xfer += size;
+        if (f->ops->writev_buffer) {
+            add_to_iovec(f, f->buf + f->buf_index, l);
+            f->buf_index += l;
+        } else {
+            f->is_write = 1;
+            f->buf_index += l;
+            if (f->buf_index == IO_BUF_SIZE) {
+                qemu_fflush(f);
+            }
+        }
         if (qemu_file_get_error(f)) {
             break;
         }
@@ -697,14 +705,16 @@ void qemu_put_byte(QEMUFile *f, int v)
         abort();
     }
 
-    f->buf[f->buf_index++] = v;
-    f->is_write = 1;
+    f->buf[f->buf_index] = v;
     f->bytes_xfer++;
-
-    add_to_iovec(f, f->buf + (f->buf_index - 1), 1);
-
-    if (f->buf_index >= IO_BUF_SIZE || f->iovcnt >= MAX_IOV_SIZE) {
-        qemu_fflush(f);
+    if (f->ops->writev_buffer) {
+        add_to_iovec(f, f->buf + f->buf_index, 1);
+        f->buf_index++;
+    } else {
+        f->buf_index++;
+        if (f->buf_index == IO_BUF_SIZE) {
+            qemu_fflush(f);
+        }
     }
 }