From: Peter Xu <peterx@redhat.com>
To: "Liu, Yuan1" <yuan1.liu@intel.com>
Cc: "Wang, Yichen" <yichen.wang@bytedance.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Daniel P. Berrangé" <berrange@redhat.com>,
"Eduardo Habkost" <eduardo@habkost.net>,
"Marc-André Lureau" <marcandre.lureau@redhat.com>,
"Thomas Huth" <thuth@redhat.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Fabiano Rosas" <farosas@suse.de>,
"Eric Blake" <eblake@redhat.com>,
"Markus Armbruster" <armbru@redhat.com>,
"Laurent Vivier" <lvivier@redhat.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"Hao Xiang" <hao.xiang@linux.dev>,
"Zou, Nanhai" <nanhai.zou@intel.com>,
"Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com>
Subject: Re: [PATCH v4 0/4] Implement using Intel QAT to offload ZLIB
Date: Wed, 10 Jul 2024 11:18:51 -0400
Message-ID: <Zo6mWzuxFET1q81j@x1n>
In-Reply-To: <PH7PR11MB594133AD3E08A6E35D07DD97A3A42@PH7PR11MB5941.namprd11.prod.outlook.com>
On Wed, Jul 10, 2024 at 01:55:23PM +0000, Liu, Yuan1 wrote:
[...]
> migrate_set_parameter max-bandwidth 1250M
> |-----------|--------|---------|----------|----------|------|------|
> |8 Channels |Total   |down     |throughput|pages per | send | recv |
> |           |time(ms)|time(ms) |(mbps)    |second    | cpu %| cpu% |
> |-----------|--------|---------|----------|----------|------|------|
> |qatzip     |   16630|       28|     10467|   2940235|   160|   360|
> |-----------|--------|---------|----------|----------|------|------|
> |zstd       |   20165|       24|      8579|   2391465|   810|   340|
> |-----------|--------|---------|----------|----------|------|------|
> |none       |   46063|       40|     10848|    330240|    45|    85|
> |-----------|--------|---------|----------|----------|------|------|
>
> QATzip's dirty-page processing throughput is much higher than with no compression.
> In this test the vCPUs are idle, so the migration can succeed even without
> compression.
Thanks!  This may be good material to put into docs/ too, if Yichen is going
to pick up your doc patch when reposting.
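As a side note, the implied compression ratio can be sanity-checked from the
table; a rough back-of-the-envelope sketch below (it assumes 4 KiB pages,
which the table does not state explicitly):

  # Rough check of the table above.  Assumes 4 KiB guest pages (a guess; the
  # table does not state the page size).  pages_per_second and throughput
  # (megabits/s) are taken from the measured rows.
  PAGE_BITS = 4096 * 8

  rows = {
      "qatzip": (2940235, 10467),
      "zstd":   (2391465,  8579),
      "none":   ( 330240, 10848),
  }

  for name, (pps, wire_mbps) in rows.items():
      guest_mbps = pps * PAGE_BITS / 1e6   # guest data drained, megabits/s
      ratio = guest_mbps / wire_mbps       # implied on-the-wire compression ratio
      print(f"{name:7s} ~{guest_mbps:6.0f} mbps of guest data, ratio ~{ratio:.1f}x")

With those assumptions qatzip and zstd both come out around 9x, and "none" is
~1.0x, consistent with the wire throughput sitting at the 1250M bandwidth cap.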
[...]
> I don't have much experience with postcopy; here are some of my thoughts.
> 1. For write-intensive VMs, this solution can improve the migration success
> rate, because in a limited-bandwidth network the dirty-page processing
> throughput drops significantly without compression.  The previous data shows
> this (pages_per_second): with an uncompressed precopy, the workload generates
> dirty pages faster than the migration can process them, resulting in
> migration failure.
Yes.
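As a quick illustration of that convergence condition (a minimal sketch;
the pages_per_second values are from your table, while the workload
dirty-page rate is a made-up number):

  # Precopy only converges when migration drains dirty pages faster than the
  # workload produces them.  pages_per_second comes from the table above; the
  # dirty rate below is hypothetical, just to illustrate the comparison.
  pages_per_second = {"qatzip": 2940235, "zstd": 2391465, "none": 330240}
  workload_dirty_rate = 500_000   # pages/s, hypothetical write-intensive guest

  for method, pps in pages_per_second.items():
      verdict = "converges" if pps > workload_dirty_rate else "never converges"
      print(f"{method:7s}: {verdict} ({pps} pages/s vs {workload_dirty_rate} dirtied/s)")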
>
> 2. If the VM is read-intensive or has low vCPU utilization (for example, in
> my current test scenario the vCPUs are all idle), I think no compression +
> precopy + postcopy also cannot improve migration performance, and it may
> still hit a timeout failure due to the long migration time, the same as an
> uncompressed precopy.
I don't think postcopy will trigger timeout failures - postcopy should take a
roughly constant time to complete a migration, namely guest memsize / bw.
The usual challenge is that page-request latency is higher than in precopy,
but in this case it might not be a big deal.  And I wonder whether on 100G*2
cards it could also perform pretty well, as the delay might be minimal even
if the bandwidth is throttled.
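To put a rough number on that (a sketch only; the guest size below is
hypothetical, and the bandwidth is the 1250M max-bandwidth cap from the test
above):

  # Postcopy completion time is roughly guest memsize / bandwidth, independent
  # of the dirty-page rate.  The guest size is a made-up example.
  guest_mem_bytes = 64 * 1024**3     # hypothetical 64 GiB guest
  bandwidth_bytes = 1250 * 10**6     # 1250MB/s max-bandwidth cap from the test
  print(f"~{guest_mem_bytes / bandwidth_bytes:.0f} seconds to stream all RAM")  # ~55s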
>
> 3. In my opinion, postcopy is a good solution in this scenario (low network
> bandwidth, the VM is not critical), because even with compression turned on
> the migration may still fail (pages_per_second may still be lower than the
> rate of newly dirtied pages), and it is hard to predict whether the VM's
> memory is compression-friendly.
Yes.
Thanks,
--
Peter Xu
Thread overview: 14+ messages
2024-07-05 18:28 [PATCH v4 0/4] Implement using Intel QAT to offload ZLIB Yichen Wang
2024-07-05 18:28 ` [PATCH v4 1/4] meson: Introduce 'qatzip' feature to the build system Yichen Wang
2024-07-05 18:28 ` [PATCH v4 2/4] migration: Add migration parameters for QATzip Yichen Wang
2024-07-08 21:10 ` Peter Xu
2024-07-05 18:29 ` [PATCH v4 3/4] migration: Introduce 'qatzip' compression method Yichen Wang
2024-07-08 21:34 ` Peter Xu
2024-07-10 15:20 ` Liu, Yuan1
2024-07-05 18:29 ` [PATCH v4 4/4] tests/migration: Add integration test for " Yichen Wang
2024-07-09 8:42 ` [PATCH v4 0/4] Implement using Intel QAT to offload ZLIB Liu, Yuan1
2024-07-09 18:42 ` Peter Xu
2024-07-10 13:55 ` Liu, Yuan1
2024-07-10 15:18 ` Peter Xu [this message]
2024-07-10 15:39 ` Liu, Yuan1
2024-07-10 18:51 ` Peter Xu