From: "Michael S. Tsirkin" <mst@redhat.com>
To: Orit Wasserman <owasserm@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
chegu_vinod@hp.com, qemu-devel@nongnu.org, quintela@redhat.com
Subject: Re: [Qemu-devel] [RFC 07/12] Store the data to send also in iovec
Date: Thu, 21 Mar 2013 13:14:27 +0200 [thread overview]
Message-ID: <20130321111427.GD24024@redhat.com> (raw)
In-Reply-To: <514AEAB0.2000108@redhat.com>
On Thu, Mar 21, 2013 at 01:10:40PM +0200, Orit Wasserman wrote:
> On 03/21/2013 11:56 AM, Paolo Bonzini wrote:
> > Il 21/03/2013 10:09, Orit Wasserman ha scritto:
> >> All data is still copied into the static buffer.
> >>
> >> Signed-off-by: Orit Wasserman <owasserm@redhat.com>
> >> ---
> >> savevm.c | 13 +++++++++++--
> >> 1 file changed, 11 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/savevm.c b/savevm.c
> >> index ec64533..d5834ca 100644
> >> --- a/savevm.c
> >> +++ b/savevm.c
> >> @@ -114,6 +114,7 @@ void qemu_announce_self(void)
> >> /* savevm/loadvm support */
> >>
> >> #define IO_BUF_SIZE 32768
> >> +#define MAX_IOV_SIZE 64
> >
> > You could use IOV_MAX, or min(IOV_MAX, 64).
> Sure
> > The 64 should be tuned on a
> > good 10G network...
>
> You need to remember that an iovec of 64 entries is equivalent to 32 guest pages,
> which is 128K of data. That is a large TCP packet that will be fragmented even with
> jumbo frames. I can make it configurable, but I'm not sure it will be useful.
> Michael, what do you think?
>
> Orit
You are both right :). 64 looks like a sane value, and we can
try and tune it some more if we have the time.
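A minimal sketch of the cap Paolo suggests, assuming the host's <limits.h> exposes
IOV_MAX as a compile-time constant and that a MIN() macro is available; the value 64
presumably corresponds to roughly two iovec entries per transferred page (a header
entry plus 4K of page data), i.e. about 32 pages or 128K per flush:

    #include <limits.h>             /* IOV_MAX, where the host defines it */

    #ifndef MIN
    #define MIN(a, b) ((a) < (b) ? (a) : (b))
    #endif

    /* Never build an iovec longer than what a single writev() call on
     * this host can accept, and cap it at 64 entries either way. */
    #define MAX_IOV_SIZE MIN(IOV_MAX, 64)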
> >
> > Paolo
> >
> >> struct QEMUFile {
> >> const QEMUFileOps *ops;
> >> @@ -129,6 +130,9 @@ struct QEMUFile {
> >> int buf_size; /* 0 when writing */
> >> uint8_t buf[IO_BUF_SIZE];
> >>
> >> + struct iovec iov[MAX_IOV_SIZE];
> >> + unsigned int iovcnt;
> >> +
> >> int last_error;
> >> };
> >>
> >> @@ -546,6 +550,7 @@ static void qemu_fflush(QEMUFile *f)
> >> f->pos += f->buf_index;
> >> }
> >> f->buf_index = 0;
> >> + f->iovcnt = 0;
> >> }
> >> if (ret < 0) {
> >> qemu_file_set_error(f, ret);
> >> @@ -638,12 +643,14 @@ void qemu_put_buffer(QEMUFile *f, const uint8_t *buf, int size)
> >> if (l > size)
> >> l = size;
> >> memcpy(f->buf + f->buf_index, buf, l);
> >> + f->iov[f->iovcnt].iov_base = f->buf + f->buf_index;
> >> + f->iov[f->iovcnt++].iov_len = l;
> >> f->is_write = 1;
> >> f->buf_index += l;
> >> f->bytes_xfer += l;
> >> buf += l;
> >> size -= l;
> >> - if (f->buf_index >= IO_BUF_SIZE) {
> >> + if (f->buf_index >= IO_BUF_SIZE || f->iovcnt >= MAX_IOV_SIZE) {
> >> qemu_fflush(f);
> >> if (qemu_file_get_error(f)) {
> >> break;
> >> @@ -667,8 +674,10 @@ void qemu_put_byte(QEMUFile *f, int v)
> >> f->buf[f->buf_index++] = v;
> >> f->is_write = 1;
> >> f->bytes_xfer += 1;
> >> + f->iov[f->iovcnt].iov_base = f->buf + (f->buf_index - 1);
> >> + f->iov[f->iovcnt++].iov_len = 1;
> >>
> >> - if (f->buf_index >= IO_BUF_SIZE) {
> >> + if (f->buf_index >= IO_BUF_SIZE || f->iovcnt >= MAX_IOV_SIZE) {
> >> qemu_fflush(f);
> >> }
> >> }
> >>
> >
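For context, a rough sketch of how the accumulated iovec might be drained once the
flush path switches to a writev-style backend, as later patches in this series do;
the writev_buffer op name, its signature, and the error handling below are assumptions
for illustration, not the actual code from the series:

    static void qemu_fflush(QEMUFile *f)
    {
        ssize_t ret = 0;

        if (f->is_write && f->iovcnt > 0) {
            /* Hand the whole scatter/gather list to the backend in one
             * call instead of copying everything through a flat buffer. */
            ret = f->ops->writev_buffer(f->opaque, f->iov, f->iovcnt);
            if (ret >= 0) {
                f->pos += ret;
            }
            /* Both cursors restart from zero after a flush. */
            f->buf_index = 0;
            f->iovcnt = 0;
        }
        if (ret < 0) {
            qemu_file_set_error(f, ret);
        }
    }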