From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Yanan Wang" <wangyanan55@huawei.com>,
"Marcel Apfelbaum" <marcel.apfelbaum@gmail.com>,
"Juan Quintela" <quintela@redhat.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Peter Xu" <peterx@redhat.com>,
"Eduardo Habkost" <eduardo@habkost.net>,
"Leonardo Bras" <leobras@redhat.com>
Subject: [PATCH v8 0/3] Eliminate multifd flush
Date: Tue, 25 Apr 2023 18:31:11 +0200
Message-ID: <20230425163114.2609-1-quintela@redhat.com>
Hi
In this v8:
- Rebased on latest upstream.
Please review.
[v7]
- Rebased on latest upstream.
- Renamed the capability to a property. All the problems raised in the
  last review disappear because it is no longer a capability. It now
  works as expected: enabled for old machine types, disabled for new
  ones. Users will only find it if they go looking through the
  migration properties (see the sketch below).
Please review.
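For reference, a minimal sketch of how such a machine-type-gated
property is usually wired up. Only the { driver, property, value }
shape follows QEMU's GlobalProperty convention; the array name here is
a placeholder, and the real entry goes into the hw_compat array for
the last released machine version in hw/core/machine.c:

    /* Sketch: pin the old per-section flush behaviour on older
     * machine types.  New machine types omit the entry, so the
     * property defaults to off and the once-per-round flush is used.
     * "hw_compat_example" is a hypothetical name. */
    static GlobalProperty hw_compat_example[] = {
        { "migration", "multifd-flush-after-each-section", "on" },
    };

On a new machine type a user should still be able to force the old
behaviour with something like
-global migration.multifd-flush-after-each-section=on.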
In this v6:
- Renamed multifd-sync-after-each-section to
  multifd-flush-after-each-section.
- Reworked the comments (thanks Markus).
- Reworked how capabilities that are enabled/disabled during
  development are documented (thanks Markus).
Please, review.
In this v5:
- Removed the RAM Flags documentation (already in a PULL request).
- Rebased on top of that pull request.
Please review.
Based-on: <20230213025150.71537-1-quintela@redhat.com>
Migration 20230213 patches
In this v4:
- Rebased on top of the migration-20230209 PULL request.
- Integrated two patches into that pull request.
- Rebased.
- Addressed Eric's review comments.
Please review.
In this v3:
- Updated to latest upstream.
- Fixed checkpatch errors.
Please, review.
In this v2:
- Updated to latest upstream.
- Changed the bare 0, 1, 2 values to defines (see the sketch below).
- Added documentation for SAVE_VM_FLAGS.
- Added a missing qemu_fflush(); its absence caused random hangs in
  the migration tests (only with TLS, no clue why).
Please, review.
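The defines change is simply named constants instead of magic numbers.
The names below are hypothetical; the real ones are in the patches:

    /* Hypothetical names illustrating the 0/1/2-to-defines change. */
    #define MULTIFD_FLUSH_NONE     0
    #define MULTIFD_FLUSH_SECTION  1
    #define MULTIFD_FLUSH_ROUND    2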
[v1]
The upstream multifd code synchronizes all threads after each RAM
section, which is suboptimal. Change it to flush only once after each
full pass through all of RAM.
All semantics are preserved for old machine types.
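To make the difference concrete, here is a small standalone model of
the two flush policies. Everything in it is invented for illustration;
none of these names are QEMU code:

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy model of when the multifd channels get synchronized. */
    typedef struct {
        bool flush_after_each_section;  /* old behaviour */
        int flushes;
    } FlushModel;

    static void end_of_section(FlushModel *m)
    {
        if (m->flush_after_each_section) {
            m->flushes++;   /* sync all channels after every section */
        }
    }

    static void end_of_ram_round(FlushModel *m)
    {
        if (!m->flush_after_each_section) {
            m->flushes++;   /* sync only once per full pass over RAM */
        }
    }

    int main(void)
    {
        FlushModel old_policy = { .flush_after_each_section = true };
        FlushModel new_policy = { .flush_after_each_section = false };

        for (int round = 0; round < 3; round++) {
            for (int section = 0; section < 10; section++) {
                end_of_section(&old_policy);
                end_of_section(&new_policy);
            }
            end_of_ram_round(&old_policy);
            end_of_ram_round(&new_policy);
        }
        printf("old: %d flushes, new: %d flushes\n",
               old_policy.flushes, new_policy.flushes);
        return 0;
    }

With 3 rounds of 10 sections each this prints "old: 30 flushes,
new: 3 flushes", which is exactly the saving the series is after.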
Juan Quintela (3):
multifd: Create property multifd-flush-after-each-section
multifd: Protect multifd_send_sync_main() calls
multifd: Only flush once each full round of memory
hw/core/machine.c | 1 +
migration/migration.c | 9 +++++++++
migration/migration.h | 12 ++++++++++++
migration/ram.c | 44 +++++++++++++++++++++++++++++++++++++------
4 files changed, 60 insertions(+), 6 deletions(-)
--
2.40.0