From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
"Juan Quintela" <quintela@redhat.com>,
"Eduardo Habkost" <eduardo@habkost.net>,
"Eric Blake" <eblake@redhat.com>,
"Marcel Apfelbaum" <marcel.apfelbaum@gmail.com>,
"Yanan Wang" <wangyanan55@huawei.com>,
"Markus Armbruster" <armbru@redhat.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>
Subject: [PATCH 00/11] Multifd zero page support
Date: Mon, 28 Nov 2022 11:04:11 +0100
Message-ID: <20221128100422.13522-1-quintela@redhat.com>
Based on top of my next-8.0 branch.
- rebased on top of latest upstream
- lots of minor fixes
- start support for atomic counters
* we need to move ram_limit_used/max to migration.c
* that means fixing rdma.c
* and test-vmstate.
So I am doing that right now.
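
For reference, the idea behind the atomic-counter patches is that the
migration statistics get updated with atomic operations, so they no
longer have to be read or written under the multifd mutex.  A minimal
sketch of that pattern, using C11 stdatomic rather than QEMU's own
qatomic_* helpers, and with made-up names (transferred_bytes,
multifd_send_page) that are not the ones used in the series:

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative counter; the series works on the real migration stats. */
    static _Atomic uint64_t transferred_bytes;

    static void multifd_send_page(size_t page_size)
    {
        /* ... send the page over the multifd channel ... */

        /* Account for the bytes without holding the channel mutex. */
        atomic_fetch_add_explicit(&transferred_bytes, page_size,
                                  memory_order_relaxed);
    }

    static uint64_t query_transferred_bytes(void)
    {
        /* Readers (e.g. "info migrate") can load the value lock-free. */
        return atomic_load_explicit(&transferred_bytes,
                                    memory_order_relaxed);
    }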
Juan Quintela (11):
migration: Update atomic stats out of the mutex
migration: Make multifd_bytes atomic
multifd: We already account for this packet on the multifd thread
multifd: Count the number of bytes sent correctly
migration: Make ram_save_target_page() a pointer
multifd: Make flags field thread local
multifd: Prepare to send a packet without the mutex held
multifd: Add capability to enable/disable zero_page
multifd: Support for zero pages transmission
multifd: Zero pages transmission
So we use multifd to transmit zero pages.
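
To give an idea of what the zero-page patches do on the send side:
instead of handling zero pages on the main channel, each multifd send
thread can detect them and record only their offsets in the packet, so
no page data is transmitted for them.  The sketch below uses a
hypothetical packet layout and field names, not the series' actual
MultiFDPacket_t; QEMU has an optimized buffer_is_zero() helper for this
kind of check, a plain loop is shown here for clarity:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical per-packet bookkeeping, not the series' structures. */
    typedef struct {
        uint32_t normal_num;     /* pages whose data is sent          */
        uint32_t zero_num;       /* pages sent as "offset only"       */
        uint64_t normal[128];    /* offsets of normal pages           */
        uint64_t zero[128];      /* offsets of zero pages             */
    } ExamplePacket;

    static bool page_is_zero(const uint8_t *page, size_t page_size)
    {
        for (size_t i = 0; i < page_size; i++) {
            if (page[i]) {
                return false;
            }
        }
        return true;
    }

    static void classify_page(ExamplePacket *p, const uint8_t *page,
                              uint64_t offset, size_t page_size)
    {
        if (page_is_zero(page, page_size)) {
            p->zero[p->zero_num++] = offset;     /* no data transmitted */
        } else {
            p->normal[p->normal_num++] = offset; /* data goes in the iov */
        }
    }

On the destination side, a page whose offset arrives in the zero list
can simply be cleared (or left untouched if it is already zero) instead
of being read from the stream.  The capability added in patch 08 gates
this behaviour.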
qapi/migration.json | 8 ++-
migration/migration.h | 1 +
migration/multifd.h | 36 ++++++++++--
migration/ram.h | 1 +
hw/core/machine.c | 1 +
migration/migration.c | 16 +++++-
migration/multifd.c | 123 +++++++++++++++++++++++++++++++----------
migration/ram.c | 51 +++++++++++++++--
migration/trace-events | 8 +--
9 files changed, 197 insertions(+), 48 deletions(-)
--
2.38.1