From: Paolo Bonzini <pbonzini@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: Orit Wasserman <owasserm@redhat.com>,
quintela@redhat.com, chegu_vinod@hp.com, qemu-devel@nongnu.org,
mst@redhat.com
Subject: Re: [Qemu-devel] [PATCH v5 7/7] Use qemu_put_buffer_async for guest memory pages
Date: Fri, 05 Apr 2013 17:42:29 +0200
Message-ID: <515EF0E5.9090205@redhat.com>
In-Reply-To: <20130405153927.GE2351@dhcp-200-207.str.redhat.com>
[-- Attachment #1: Type: text/plain, Size: 699 bytes --]
On 05/04/2013 17:39, Kevin Wolf wrote:
>> > The solution could be to make bdrv_load_vmstate take an iov/iovcnt pair.
> Ah, so you're saying that instead of linearising the buffer it breaks up
> the requests into tiny pieces?
Only for RAM (the stream is header/page/header/page...), because each page
is sent straight from guest memory instead of being copied into the QEMUFile
buffer. Device state is still buffered and fast.
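
To make the copy-avoidance concrete: with a vectored write the backend
receives the small page header and the page itself in a single call, while
the page data is still referenced in place in guest memory. The snippet
below is a minimal standalone sketch of that gather-write pattern using
POSIX writev(2); it is not QEMU code, and the function name, header layout
and 4096-byte page size are illustrative assumptions only.

/* Standalone sketch: send an 8-byte header plus a page with one writev(2),
 * so the page is transmitted from its original buffer without a memcpy
 * into a staging buffer first. */
#include <stdint.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

#define SKETCH_PAGE_SIZE 4096

static ssize_t send_page(int fd, uint64_t header, uint8_t *page)
{
    uint8_t hdr[8];
    struct iovec iov[2];
    int i;

    /* 8-byte big-endian header, followed by the page itself */
    for (i = 0; i < 8; i++) {
        hdr[i] = (uint8_t)(header >> (56 - i * 8));
    }
    iov[0].iov_base = hdr;
    iov[0].iov_len  = sizeof(hdr);
    iov[1].iov_base = page;            /* the page data is not copied */
    iov[1].iov_len  = SKETCH_PAGE_SIZE;

    return writev(fd, iov, 2);         /* one syscall, two buffers */
}

int main(void)
{
    static uint8_t page[SKETCH_PAGE_SIZE];

    memset(page, 0xab, sizeof(page));
    return send_page(STDOUT_FILENO, 0x1000, page) < 0;
}
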
> Implementing vectored bdrv_load/save_vmstate should be easy in theory.
>
>> > Alternatively, you can try the attached patch. I haven't yet tested it
>> > though, and won't be able to do so today.
> Attempted to write to buffer while read buffer is not empty
>
> Program received signal SIGABRT, Aborted.
Second try.
Paolo
[-- Attachment #2: savevm-performance.patch --]
[-- Type: text/x-patch, Size: 3472 bytes --]
diff --git a/savevm.c b/savevm.c
index b1d8988..5871642 100644
--- a/savevm.c
+++ b/savevm.c
@@ -525,27 +525,24 @@ static void qemu_file_set_error(QEMUFile *f, int ret)
 static void qemu_fflush(QEMUFile *f)
 {
     ssize_t ret = 0;
-    int i = 0;
 
     if (!f->ops->writev_buffer && !f->ops->put_buffer) {
         return;
     }
 
-    if (f->is_write && f->iovcnt > 0) {
+    if (f->is_write) {
         if (f->ops->writev_buffer) {
-            ret = f->ops->writev_buffer(f->opaque, f->iov, f->iovcnt);
-            if (ret >= 0) {
-                f->pos += ret;
+            if (f->iovcnt > 0) {
+                ret = f->ops->writev_buffer(f->opaque, f->iov, f->iovcnt);
             }
         } else {
-            for (i = 0; i < f->iovcnt && ret >= 0; i++) {
-                ret = f->ops->put_buffer(f->opaque, f->iov[i].iov_base, f->pos,
-                                         f->iov[i].iov_len);
-                if (ret >= 0) {
-                    f->pos += ret;
-                }
+            if (f->buf_index > 0) {
+                ret = f->ops->put_buffer(f->opaque, f->buf, f->pos, f->buf_index);
             }
         }
+        if (ret >= 0) {
+            f->pos += ret;
+        }
         f->buf_index = 0;
         f->iovcnt = 0;
     }
@@ -631,6 +628,11 @@ static void add_to_iovec(QEMUFile *f, const uint8_t *buf, int size)
         f->iov[f->iovcnt].iov_base = (uint8_t *)buf;
         f->iov[f->iovcnt++].iov_len = size;
     }
+
+    f->is_write = 1;
+    if (f->buf_index >= IO_BUF_SIZE || f->iovcnt >= MAX_IOV_SIZE) {
+        qemu_fflush(f);
+    }
 }
 
 void qemu_put_buffer_async(QEMUFile *f, const uint8_t *buf, int size)
@@ -645,13 +647,11 @@ void qemu_put_buffer_async(QEMUFile *f, const uint8_t *buf, int size)
         abort();
     }
 
-    add_to_iovec(f, buf, size);
-
-    f->is_write = 1;
-    f->bytes_xfer += size;
-
-    if (f->buf_index >= IO_BUF_SIZE || f->iovcnt >= MAX_IOV_SIZE) {
-        qemu_fflush(f);
+    if (f->ops->writev_buffer) {
+        f->bytes_xfer += size;
+        add_to_iovec(f, buf, size);
+    } else {
+        qemu_put_buffer(f, buf, size);
     }
 }
 
@@ -674,9 +674,17 @@ void qemu_put_buffer(QEMUFile *f, const uint8_t *buf, int size)
         if (l > size)
             l = size;
         memcpy(f->buf + f->buf_index, buf, l);
-        f->is_write = 1;
-        f->buf_index += l;
-        qemu_put_buffer_async(f, f->buf + (f->buf_index - l), l);
+        f->bytes_xfer += size;
+        if (f->ops->writev_buffer) {
+            add_to_iovec(f, f->buf + f->buf_index, l);
+            f->buf_index += l;
+        } else {
+            f->is_write = 1;
+            f->buf_index += l;
+            if (f->buf_index == IO_BUF_SIZE) {
+                qemu_fflush(f);
+            }
+        }
         if (qemu_file_get_error(f)) {
             break;
         }
@@ -697,14 +705,17 @@ void qemu_put_byte(QEMUFile *f, int v)
         abort();
     }
 
-    f->buf[f->buf_index++] = v;
-    f->is_write = 1;
+    f->buf[f->buf_index] = v;
     f->bytes_xfer++;
-
-    add_to_iovec(f, f->buf + (f->buf_index - 1), 1);
-
-    if (f->buf_index >= IO_BUF_SIZE || f->iovcnt >= MAX_IOV_SIZE) {
-        qemu_fflush(f);
+    if (f->ops->writev_buffer) {
+        add_to_iovec(f, f->buf + f->buf_index, 1);
+        f->buf_index++;
+    } else {
+        f->is_write = 1;
+        f->buf_index++;
+        if (f->buf_index == IO_BUF_SIZE) {
+            qemu_fflush(f);
+        }
     }
 }
 
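
To summarize what the attached patch does: the buffering strategy now depends
on the backend. When f->ops->writev_buffer is available, qemu_put_buffer_async
queues the caller's buffer directly in the iovec (and qemu_put_buffer /
qemu_put_byte queue slices of f->buf), so qemu_fflush hands the whole iovec to
the backend in one call; without it, data accumulates in the linear f->buf and
is flushed with a single put_buffer call. The following standalone model of
that flush dispatch uses simplified types and placeholder constants, not the
real QEMUFile definitions.

/* Standalone model of the flush dispatch in the patch above; the struct and
 * constants are simplified placeholders, not QEMU's QEMUFile. */
#include <stdint.h>
#include <sys/types.h>
#include <sys/uio.h>

#define IO_BUF_SIZE  32768
#define MAX_IOV_SIZE 64

typedef struct SketchFile {
    ssize_t (*writev_buffer)(void *opaque, struct iovec *iov, int iovcnt);
    ssize_t (*put_buffer)(void *opaque, const uint8_t *buf,
                          int64_t pos, int size);
    void *opaque;

    uint8_t buf[IO_BUF_SIZE];          /* staging buffer (put_buffer path) */
    int buf_index;                     /* bytes used in buf */
    struct iovec iov[MAX_IOV_SIZE];    /* queued segments (writev path) */
    int iovcnt;
    int64_t pos;                       /* stream position */
} SketchFile;

/* Mirrors the patched qemu_fflush(): vectored backends get the whole iovec,
 * buffered backends get one contiguous put_buffer call. */
void sketch_flush(SketchFile *f)
{
    ssize_t ret = 0;

    if (f->writev_buffer) {
        if (f->iovcnt > 0) {
            ret = f->writev_buffer(f->opaque, f->iov, f->iovcnt);
        }
    } else {
        if (f->buf_index > 0) {
            ret = f->put_buffer(f->opaque, f->buf, f->pos, f->buf_index);
        }
    }
    if (ret >= 0) {
        f->pos += ret;
    }
    f->buf_index = 0;
    f->iovcnt = 0;
}
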
Thread overview: 14+ messages
2013-03-22 14:47 [Qemu-devel] [PATCH v5 0/7] Migration: Remove copying of guest ram pages Orit Wasserman
2013-03-22 14:47 ` [Qemu-devel] [PATCH v5 1/7] Add QemuFileWritevBuffer QemuFileOps Orit Wasserman
2013-03-22 14:47 ` [Qemu-devel] [PATCH v5 2/7] Add socket_writev_buffer function Orit Wasserman
2013-03-22 14:47 ` [Qemu-devel] [PATCH v5 3/7] Update bytes_xfer in qemu_put_byte Orit Wasserman
2013-03-22 14:48 ` [Qemu-devel] [PATCH v5 4/7] Store the data to send also in iovec Orit Wasserman
2013-03-22 14:48 ` [Qemu-devel] [PATCH v5 5/7] Use writev ops if available Orit Wasserman
2013-03-22 14:48 ` [Qemu-devel] [PATCH v5 6/7] Add qemu_put_buffer_async Orit Wasserman
2013-03-22 14:48 ` [Qemu-devel] [PATCH v5 7/7] Use qemu_put_buffer_async for guest memory pages Orit Wasserman
2013-04-05 13:44 ` Kevin Wolf
2013-04-05 15:23 ` Paolo Bonzini
2013-04-05 15:39 ` Kevin Wolf
2013-04-05 15:42 ` Paolo Bonzini [this message]
2013-04-05 15:56 ` Kevin Wolf
2013-03-27 21:27 ` [Qemu-devel] [PATCH v5 0/7] Migration: Remove copying of guest ram pages Eric Blake