From: Fabiano Rosas <farosas@suse.de>
To: Hao Xiang <hao.xiang@bytedance.com>
Cc: quintela@redhat.com, peterx@redhat.com,
marcandre.lureau@redhat.com, bryan.zhang@bytedance.com,
qemu-devel@nongnu.org
Subject: Re: [External] Re: [PATCH 01/16] Cherry pick a set of patches that enables multifd zero page feature.
Date: Mon, 30 Oct 2023 10:58:00 -0300 [thread overview]
Message-ID: <87msw0nrfb.fsf@suse.de> (raw)
In-Reply-To: <CAAYibXh+E-ZJ7SKMJie=NG8x8_hP9B5AxYZMXxXY2cK9QuuPrw@mail.gmail.com>

Hao Xiang <hao.xiang@bytedance.com> writes:
> On Fri, Oct 27, 2023 at 5:30 AM Fabiano Rosas <farosas@suse.de> wrote:
>>
>> Hao Xiang <hao.xiang@bytedance.com> writes:
>>
>> > Juan Quintela had a patchset enabling zero page checking in multifd
>> > threads.
>> >
>> > https://lore.kernel.org/all/20220802063907.18882-13-quintela@redhat.com/
>>
>> Hmm, risky to base your series on code more than a year old. We should
>> bother Juan so he sends an updated version for review.
>>
>> I have concerns about that series. The first is why we are doing payload
>> processing (i.e. zero page detection) in the multifd threads at all. And
>> that affects your series directly, because AFAICS we're now doing even
>> more processing there.
>>
>
> I am pretty new to QEMU so my take could be wrong. We can wait for Juan
> to comment here. My understanding is that the migration main loop was
> originally designed around a single sender thread (before the multifd
> feature). Zero page checking is a pretty CPU-intensive operation. So in
> the case of multifd, we scaled up the number of sender threads in order
> to saturate the network.
Right. That's all fine.
> Doing zero page checking in the main loop is not going to scale with
> this new design.
Yep. Moving work outside of the main loop is reasonable. Juan is
focusing on separating the migration code from the QEMUFile internals,
so moving zero page detection into multifd is a step in the right
direction from that perspective.
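
Just so we're talking about the same per-page work: the check is
essentially a scan for any non-zero byte, something like the sketch
below (simplified; the real code would use QEMU's vectorized
buffer_is_zero() from util/bufferiszero.c rather than a byte loop):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Simplified stand-in for the zero page check. QEMU's real helper,
 * buffer_is_zero(), is SIMD-accelerated, but even so, running it for
 * every guest page in a single thread is where the CPU time goes.
 */
static bool page_is_zero(const uint8_t *page, size_t page_size)
{
    for (size_t i = 0; i < page_size; i++) {
        if (page[i]) {
            return false;
        }
    }
    return true;
}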
> In fact, we (Bytedance) have merged Juan's change into our internal QEMU
> and we have been using this feature since last year. I was told that it
> improved performance pretty significantly. Ideally, I would love to see
> zero page checking done in a separate thread pool so we can scale it
> independently from the sender threads, but doing it in the sender
> threads is an inexpensive way to scale.
Yep, you got the point. And I acknowledge that reusing the sender
threads is the natural next step. Even if we go that route, let's make
sure it still leaves us space to separate pre-processing from actual
sending.
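
To make that last point concrete, here is the kind of split I have in
mind, with made-up types and names (a sketch only, not the actual
multifd structures): classify the pages first, send second, so the
classification step could later move to a dedicated worker pool without
touching the send path.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define SKETCH_PACKET_PAGES 128

/* Hypothetical packet state -- not the real multifd pages structure. */
typedef struct {
    uint8_t *pages[SKETCH_PACKET_PAGES]; /* guest pages queued for this packet */
    uint32_t num_pages;
    uint32_t normal[SKETCH_PACKET_PAGES]; /* indices of pages with data to send */
    uint32_t num_normal;
    uint32_t zero[SKETCH_PACKET_PAGES];   /* indices of zero pages, header only */
    uint32_t num_zero;
} PacketSketch;

bool buffer_is_zero(const void *buf, size_t len); /* declared in qemu/cutils.h */

/*
 * Pre-processing step: today this would run in the sender thread, but
 * because it only fills in the normal/zero index lists, it could later
 * be handed off to a separate pool without changing the send step.
 */
static void classify_pages(PacketSketch *p, size_t page_size)
{
    p->num_normal = 0;
    p->num_zero = 0;
    for (uint32_t i = 0; i < p->num_pages; i++) {
        if (buffer_is_zero(p->pages[i], page_size)) {
            p->zero[p->num_zero++] = i;
        } else {
            p->normal[p->num_normal++] = i;
        }
    }
}

The send step would then only walk the normal list, and zero pages
would travel as indices in the packet header.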