From: "Wang, Lei" <lei4.wang@intel.com>
To: Hao Xiang <hao.xiang@bytedance.com>,
farosas@suse.de, peter.maydell@linaro.org, quintela@redhat.com,
peterx@redhat.com, marcandre.lureau@redhat.com,
bryan.zhang@bytedance.com, qemu-devel@nongnu.org
Subject: Re: [PATCH v2 13/20] migration/multifd: Prepare to introduce DSA acceleration on the multifd path.
Date: Mon, 18 Dec 2023 11:20:48 +0800
Message-ID: <3a309e76-be33-4226-b341-567d2791dbfe@intel.com>
In-Reply-To: <20231114054032.1192027-14-hao.xiang@bytedance.com>
On 11/14/2023 13:40, Hao Xiang wrote:
> 1. Refactor multifd_send_thread function.
> 2. Implement buffer_is_zero_use_cpu to handle CPU based zero page
> checking.
> 3. Introduce the batch task structure in MultiFDSendParams.
>
> Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
> ---
> migration/multifd.c | 82 ++++++++++++++++++++++++++++++++++++---------
> migration/multifd.h | 3 ++
> 2 files changed, 70 insertions(+), 15 deletions(-)
>
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 1198ffde9c..68ab97f918 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -14,6 +14,8 @@
> #include "qemu/cutils.h"
> #include "qemu/rcu.h"
> #include "qemu/cutils.h"
> +#include "qemu/dsa.h"
> +#include "qemu/memalign.h"
> #include "exec/target_page.h"
> #include "sysemu/sysemu.h"
> #include "exec/ramblock.h"
> @@ -574,6 +576,11 @@ void multifd_save_cleanup(void)
> p->name = NULL;
> multifd_pages_clear(p->pages);
> p->pages = NULL;
> + g_free(p->addr);
> + p->addr = NULL;
> + buffer_zero_batch_task_destroy(p->batch_task);
> + qemu_vfree(p->batch_task);
> + p->batch_task = NULL;
> p->packet_len = 0;
> g_free(p->packet);
> p->packet = NULL;
> @@ -678,13 +685,66 @@ int multifd_send_sync_main(QEMUFile *f)
> return 0;
> }
>
> +static void set_page(MultiFDSendParams *p, bool zero_page, uint64_t offset)
> +{
> + RAMBlock *rb = p->pages->block;
> + if (zero_page) {
> + p->zero[p->zero_num] = offset;
> + p->zero_num++;
> + ram_release_page(rb->idstr, offset);
> + } else {
> + p->normal[p->normal_num] = offset;
> + p->normal_num++;
> + }
> +}
> +
> +static void buffer_is_zero_use_cpu(MultiFDSendParams *p)
> +{
> + const void **buf = (const void **)p->addr;
> + assert(!migrate_use_main_zero_page());
> +
> + for (int i = 0; i < p->pages->num; i++) {
> + p->batch_task->results[i] = buffer_is_zero(buf[i], p->page_size);
> + }
> +}
> +
> +static void set_normal_pages(MultiFDSendParams *p)
> +{
> + for (int i = 0; i < p->pages->num; i++) {
> + p->batch_task->results[i] = false;
> + }
> +}
> +
> +static void multifd_zero_page_check(MultiFDSendParams *p)
> +{
> + /* older qemu don't understand zero page on multifd channel */
> + bool use_multifd_zero_page = !migrate_use_main_zero_page();
> +
> + RAMBlock *rb = p->pages->block;
> +
> + for (int i = 0; i < p->pages->num; i++) {
> + p->addr[i] = (ram_addr_t)(rb->host + p->pages->offset[i]);
> + }
> +
> + if (use_multifd_zero_page) {
> + buffer_is_zero_use_cpu(p);
> + } else {
> + // No zero page checking. All pages are normal pages.
Please pay attention to the comment style here and in other patches;
QEMU's coding style uses /* */ block comments, not //.
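For example, the line above would be:

/* No zero page checking. All pages are normal pages. */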
> + set_normal_pages(p);
> + }
> +
> + for (int i = 0; i < p->pages->num; i++) {
> + uint64_t offset = p->pages->offset[i];
> + bool zero_page = p->batch_task->results[i];
> + set_page(p, zero_page, offset);
> + }
> +}
> +
> static void *multifd_send_thread(void *opaque)
> {
> MultiFDSendParams *p = opaque;
> MigrationThread *thread = NULL;
> Error *local_err = NULL;
> - /* qemu older than 8.2 don't understand zero page on multifd channel */
> - bool use_multifd_zero_page = !migrate_use_main_zero_page();
> int ret = 0;
> bool use_zero_copy_send = migrate_zero_copy_send();
>
> @@ -710,7 +770,6 @@ static void *multifd_send_thread(void *opaque)
> qemu_mutex_lock(&p->mutex);
>
> if (p->pending_job) {
> - RAMBlock *rb = p->pages->block;
> uint64_t packet_num = p->packet_num;
> uint32_t flags;
>
> @@ -723,18 +782,7 @@ static void *multifd_send_thread(void *opaque)
> p->iovs_num = 1;
> }
>
> - for (int i = 0; i < p->pages->num; i++) {
> - uint64_t offset = p->pages->offset[i];
> - if (use_multifd_zero_page &&
> - buffer_is_zero(rb->host + offset, p->page_size)) {
> - p->zero[p->zero_num] = offset;
> - p->zero_num++;
> - ram_release_page(rb->idstr, offset);
> - } else {
> - p->normal[p->normal_num] = offset;
> - p->normal_num++;
> - }
> - }
> + multifd_zero_page_check(p);
>
> if (p->normal_num) {
> ret = multifd_send_state->ops->send_prepare(p, &local_err);
> @@ -976,6 +1024,10 @@ int multifd_save_setup(Error **errp)
> p->pending_job = 0;
> p->id = i;
> p->pages = multifd_pages_init(page_count);
> + p->addr = g_new0(ram_addr_t, page_count);
> + p->batch_task =
> + (struct buffer_zero_batch_task *)qemu_memalign(64, sizeof(*p->batch_task));
> + buffer_zero_batch_task_init(p->batch_task, page_count);
> p->packet_len = sizeof(MultiFDPacket_t)
> + sizeof(uint64_t) * page_count;
> p->packet = g_malloc0(p->packet_len);
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 13762900d4..62f31b03c0 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -119,6 +119,9 @@ typedef struct {
> * pending_job != 0 -> multifd_channel can use it.
> */
> MultiFDPages_t *pages;
> + /* Address of each pages in pages */
s/pages/page
> + ram_addr_t *addr;
I think there is no need to introduce this variable, since it can be simply
derived by:
p->addr[i] = (ram_addr_t)(rb->host + p->pages->offset[i]);
and it is useless when we check the zero page in the main thread
(main-zero-page=y).
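Untested sketch of what I mean, using the same fields your patch already
has:

static void buffer_is_zero_use_cpu(MultiFDSendParams *p)
{
    RAMBlock *rb = p->pages->block;

    assert(!migrate_use_main_zero_page());

    for (int i = 0; i < p->pages->num; i++) {
        /* Derive the host address on the fly instead of caching it. */
        const void *buf = rb->host + p->pages->offset[i];
        p->batch_task->results[i] = buffer_is_zero(buf, p->page_size);
    }
}

This would also let you drop the g_new0()/g_free() of p->addr in
multifd_save_setup()/multifd_save_cleanup().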
> + struct buffer_zero_batch_task *batch_task;
>
> /* thread local variables. No locking required */
>