From: Leonardo Bras <leobras@redhat.com>
To: "Daniel P. Berrangé" <berrange@redhat.com>,
"Juan Quintela" <quintela@redhat.com>,
"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
"Peter Xu" <peterx@redhat.com>
Cc: Leonardo Bras <leobras@redhat.com>, qemu-devel@nongnu.org
Subject: [RFC PATCH 0/4] MultiFD zero-copy improvements
Date: Tue, 25 Oct 2022 01:47:27 -0300
Message-ID: <20221025044730.319941-1-leobras@redhat.com>
This is an RFC for an improvement suggested by Juan during the KVM Forum, plus
a few optimizations I found along the way.
Patch #1 just moves code into a helper function; it should have no functional
impact.
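
To give an idea of the shape, here is a simplified sketch of the helper; the
name and exact contents are illustrative, not copied from the patch:

/*
 * Simplified sketch, not the actual code: wrap the existing flush so
 * the later patches only need to touch a single call site.
 * qio_channel_flush() blocks until every MSG_ZEROCOPY write queued on
 * @c has been acknowledged by the kernel, and returns 1 when the sends
 * fell back to copying.
 */
static int multifd_zero_copy_flush(QIOChannel *c, Error **errp)
{
    return qio_channel_flush(c, errp);
}
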
Patch #2 is my implementation of Juan's suggestion. For the array size I went
with the simplest approach I could think of: a fixed, predefined value. I am
not sure whether that is fine, or whether the array size should instead be
provided by the user, via QMP or the command line.
That is an important point I really need feedback on.
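
To make the array-size discussion concrete, here is a simplified sketch of the
idea; the ring length, struct names and header size below are placeholders,
not values taken from the patch:

#include <stddef.h>
#include <sys/uio.h>

#define HDR_RING_LEN 16             /* placeholder for the fixed size */

struct packet_hdr {
    unsigned char raw[256];         /* stands in for the packet header */
};

struct hdr_ring {
    struct packet_hdr hdr[HDR_RING_LEN];
    unsigned long packets_sent;     /* selects the next slot to use */
    unsigned long packets_flushed;  /* slots confirmed free again */
};

/*
 * Build a single send: iov[0] carries the header, iov[1..n_pages] the
 * pages, so header + pages go out in one zero-copy writev() instead of
 * two separate writes.  With MSG_ZEROCOPY the chosen header slot must
 * not be touched again until a later flush confirms the kernel is done
 * with it, which is why a whole array of headers is needed.
 */
static size_t prepare_send(struct hdr_ring *r, struct iovec *iov,
                           const struct iovec *pages, size_t n_pages)
{
    struct packet_hdr *hdr = &r->hdr[r->packets_sent % HDR_RING_LEN];

    iov[0].iov_base = hdr->raw;
    iov[0].iov_len = sizeof(hdr->raw);
    for (size_t i = 0; i < n_pages; i++) {
        iov[i + 1] = pages[i];      /* iov must hold n_pages + 1 entries */
    }
    r->packets_sent++;
    return n_pages + 1;
}

The open question is only how the equivalent of HDR_RING_LEN gets chosen
(fixed at build time vs. configurable by the user).
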
Patch #3 improves the qio_channel_flush() interface so a flush can wait for
only some of the outstanding writes to finish instead of all of them. This
reduces the waiting time, since the most recent writes/sends will take longer
to complete, while the older ones have probably already finished by the time
the first recvmsg() syscall returns.
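
The interface change is small: qio_channel_flush() gains a max_pending
argument, where 0 keeps the current "wait for everything" behaviour. The
snippet below models the intended semantics (flush until at most max_pending
writes remain in flight); it is a simplified, self-contained stand-in, not the
actual io/channel-socket.c code:

/*
 * In the real code the completions come from recvmsg(fd, ..., MSG_ERRQUEUE)
 * notifications generated by MSG_ZEROCOPY sends; this stub stands in for
 * that so the sketch compiles on its own.
 */
static long reap_one_notification(void)
{
    return 1;                   /* pretend one queued send completed */
}

/*
 * Wait until at most @max_pending zero-copy sends are still in flight.
 * max_pending == 0 is the current "flush everything" behaviour.
 */
static int flush_up_to(unsigned long queued, unsigned long *completed,
                       unsigned long max_pending)
{
    while (queued - *completed > max_pending) {
        long done = reap_one_notification();

        if (done < 0) {
            return -1;          /* recvmsg() error in the real code */
        }
        *completed += done;
    }
    return 0;
}
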
Patch #4 uses #3 in multifd zero-copy: it flushes only the least recently used
(LRU) half of the header array, so new writes can be queued while the most
recent ones are still in flight, instead of waiting for everything to finish
before sending more.
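
Reusing the sketch from patch #2 and the max_pending flush from patch #3, the
policy looks roughly like this (again simplified, names are placeholders):

/*
 * Once the header ring is full, flush only until the older (least
 * recently used) half has been acknowledged, i.e. leave up to
 * HDR_RING_LEN / 2 of the newest sends still in flight while new
 * writes keep being queued.
 */
static int maybe_flush_older_half(struct hdr_ring *r, QIOChannel *c,
                                  Error **errp)
{
    if (r->packets_sent - r->packets_flushed < HDR_RING_LEN) {
        return 0;               /* free header slots remain, no flush */
    }

    /* max_pending flavour of qio_channel_flush() from patch #3 */
    if (qio_channel_flush(c, HDR_RING_LEN / 2, errp) < 0) {
        return -1;
    }
    r->packets_flushed = r->packets_sent - HDR_RING_LEN / 2;
    return 0;
}
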
It all works fine in my tests, but I may have missed some corner case.
Please provide any feedback you see fit.
Thank you all!
Best regards,
Leo
Leonardo Bras (4):
  migration/multifd/zero-copy: Create helper function for flushing
  migration/multifd/zero-copy: Merge header & pages send in a single
    write
  QIOChannel: Add max_pending parameter to qio_channel_flush()
  migration/multifd/zero-copy: Flush only the LRU half of the header
    array
 include/io/channel.h |  7 +++-
 migration/multifd.h  |  5 ++-
 io/channel-socket.c  |  5 ++-
 io/channel.c         |  5 ++-
 migration/multifd.c  | 88 ++++++++++++++++++++++++++------------------
 5 files changed, 68 insertions(+), 42 deletions(-)
--
2.38.0
Thread overview: 12+ messages
2022-10-25 4:47 Leonardo Bras [this message]
2022-10-25 4:47 ` [RFC PATCH 1/4] migration/multifd/zero-copy: Create helper function for flushing Leonardo Bras
2022-10-25 9:44 ` Juan Quintela
2022-10-25 4:47 ` [RFC PATCH 2/4] migration/multifd/zero-copy: Merge header & pages send in a single write Leonardo Bras
2022-10-25 9:51 ` Juan Quintela
2022-10-25 13:28 ` Leonardo Brás
2022-10-25 4:47 ` [RFC PATCH 3/4] QIOChannel: Add max_pending parameter to qio_channel_flush() Leonardo Bras
2022-10-25 4:47 ` [RFC PATCH 4/4] migration/multifd/zero-copy: Flush only the LRU half of the header array Leonardo Bras
2022-10-25 8:35 ` Daniel P. Berrangé
2022-10-25 10:07 ` Juan Quintela
2022-10-25 13:47 ` Leonardo Brás
2022-10-25 16:36 ` [RFC PATCH 0/4] MultiFD zero-copy improvements Peter Xu