From: Peter Xu <peterx@redhat.com>
To: Fabiano Rosas <farosas@suse.de>
Cc: "Yichen Wang" <yichen.wang@bytedance.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Marc-André Lureau" <marcandre.lureau@redhat.com>,
"Daniel P. Berrangé" <berrange@redhat.com>,
"Thomas Huth" <thuth@redhat.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Eric Blake" <eblake@redhat.com>,
"Markus Armbruster" <armbru@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
"Cornelia Huck" <cohuck@redhat.com>,
qemu-devel@nongnu.org, "Hao Xiang" <hao.xiang@linux.dev>,
"Liu, Yuan1" <yuan1.liu@intel.com>,
"Shivam Kumar" <shivam.kumar1@nutanix.com>,
"Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com>
Subject: Re: [PATCH v5 11/13] migration/multifd: Add migration option set packet size.
Date: Wed, 21 Aug 2024 17:16:07 -0400
Message-ID: <ZsZZFwws5tlOMmZk@x1n>
In-Reply-To: <87msmg2heh.fsf@suse.de>
On Wed, Jul 17, 2024 at 11:59:50AM -0300, Fabiano Rosas wrote:
> Yichen Wang <yichen.wang@bytedance.com> writes:
>
> > From: Hao Xiang <hao.xiang@linux.dev>
> >
> > During live migration, if the latency between sender and receiver is
> > high and the bandwidth is also high (a long and fat pipe), using a
> > bigger packet size can help reduce total migration time. The current
> > multifd packet size is 128 * 4KB = 512KB. In addition, Intel DSA
> > offloading performs better with large batch tasks.
>
> Last time we measured, mapped-ram also performed slightly better with a
> larger packet size:
>
>            2 MiB    1 MiB  512 KiB  256 KiB  128 KiB
> AVG(10)    50814    50396    48732    46423    34574
> DEV          736      552      619      473     1430
I wonder whether we could make the new parameter pages-per-packet,
rather than packet-size, just to make our lives easier for a possibly
static offset[] buffer in the future for MultiFDPages_t.

With that, if we throttle it at MAX_N_PAGES, we can have MultiFDPages_t
statically allocated, always with the max buffer. After all, it won't
consume much memory anyway; for MAX_N_PAGES=1K pages that's 1K offsets
of 8 bytes each, or 8KB per channel.
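
For illustration, here's a rough sketch of what that could look like
(MAX_N_PAGES and the exact field layout are my assumptions, not what's
in the tree today; RAMBlock and ram_addr_t are QEMU's existing types):

    /*
     * Hypothetical sketch: bound the packet by page count rather than
     * bytes, and replace the heap-allocated offset array with a
     * fixed-size one sized for the maximum.
     */
    #define MAX_N_PAGES 1024

    typedef struct {
        /* number of pages used in this packet */
        uint32_t num;
        /* the ramblock the offsets below refer to */
        RAMBlock *block;
        /* static offset buffer: 1K entries * 8 bytes = 8KB / channel */
        ram_addr_t offset[MAX_N_PAGES];
    } MultiFDPages_t;

The parameter setter would then only need a range check against
MAX_N_PAGES, and each channel could allocate its MultiFDPages_t once at
setup without reallocating when the parameter changes.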
--
Peter Xu