From: "Michael S. Tsirkin" <mst@redhat.com>
To: "Liu, Yuan1" <yuan1.liu@intel.com>
Cc: "Wang, Yichen" <yichen.wang@bytedance.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Marc-André Lureau" <marcandre.lureau@redhat.com>,
"Daniel P. Berrangé" <berrange@redhat.com>,
"Thomas Huth" <thuth@redhat.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Peter Xu" <peterx@redhat.com>, "Fabiano Rosas" <farosas@suse.de>,
"Eric Blake" <eblake@redhat.com>,
"Markus Armbruster" <armbru@redhat.com>,
"Cornelia Huck" <cohuck@redhat.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"Hao Xiang" <hao.xiang@linux.dev>,
"Kumar, Shivam" <shivam.kumar1@nutanix.com>,
"Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com>
Subject: Re: [PATCH v5 00/13] WIP: Use Intel DSA accelerator to offload zero page checking in multifd live migration.
Date: Mon, 15 Jul 2024 12:08:30 -0400
Message-ID: <20240715120657-mutt-send-email-mst@kernel.org>
In-Reply-To: <PH7PR11MB59410B8B35F932AF55C8A243A3A12@PH7PR11MB5941.namprd11.prod.outlook.com>
On Mon, Jul 15, 2024 at 03:23:13PM +0000, Liu, Yuan1 wrote:
> > -----Original Message-----
> > From: Michael S. Tsirkin <mst@redhat.com>
> > Sent: Monday, July 15, 2024 10:43 PM
> > To: Liu, Yuan1 <yuan1.liu@intel.com>
> > Subject: Re: [PATCH v5 00/13] WIP: Use Intel DSA accelerator to offload zero page checking in multifd live migration.
> >
> > On Mon, Jul 15, 2024 at 01:09:59PM +0000, Liu, Yuan1 wrote:
> > > > -----Original Message-----
> > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > Sent: Monday, July 15, 2024 8:24 PM
> > > > To: Liu, Yuan1 <yuan1.liu@intel.com>
> > > > Subject: Re: [PATCH v5 00/13] WIP: Use Intel DSA accelerator to offload zero page checking in multifd live migration.
> > > >
> > > > On Mon, Jul 15, 2024 at 08:29:03AM +0000, Liu, Yuan1 wrote:
> > > > > > -----Original Message-----
> > > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > > Sent: Friday, July 12, 2024 6:49 AM
> > > > > > To: Wang, Yichen <yichen.wang@bytedance.com>
> > > > > > Subject: Re: [PATCH v5 00/13] WIP: Use Intel DSA accelerator to offload zero page checking in multifd live migration.
> > > > > >
> > > > > > On Thu, Jul 11, 2024 at 02:52:35PM -0700, Yichen Wang wrote:
> > > > > > > * Performance:
> > > > > > >
> > > > > > > We use two Intel 4th generation Xeon servers for testing.
> > > > > > >
> > > > > > > Architecture: x86_64
> > > > > > > CPU(s): 192
> > > > > > > Thread(s) per core: 2
> > > > > > > Core(s) per socket: 48
> > > > > > > Socket(s): 2
> > > > > > > NUMA node(s): 2
> > > > > > > Vendor ID: GenuineIntel
> > > > > > > CPU family: 6
> > > > > > > Model: 143
> > > > > > > Model name: Intel(R) Xeon(R) Platinum 8457C
> > > > > > > Stepping: 8
> > > > > > > CPU MHz: 2538.624
> > > > > > > CPU max MHz: 3800.0000
> > > > > > > CPU min MHz: 800.0000
> > > > > > >
> > > > > > > We perform multifd live migration with the setup below:
> > > > > > > 1. The VM has 100GB of memory.
> > > > > > > 2. Use the new migration option multifd-set-normal-page-ratio to
> > > > > > >    control the total size of the payload sent over the network.
> > > > > > > 3. Use 8 multifd channels.
> > > > > > > 4. Use TCP for live migration.
> > > > > > > 5. Use the CPU to perform zero page checking as the baseline.
> > > > > > > 6. Use one DSA device to offload zero page checking and compare it
> > > > > > >    against the baseline.
> > > > > > > 7. Use "perf sched record" and "perf sched timehist" to analyze
> > > > > > >    CPU usage.
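For context, the CPU baseline in step 5 amounts to scanning every 4KiB page for
non-zero bytes. A minimal sketch of such a check (deliberately simplified; it is
not QEMU's actual buffer_is_zero(), which is SIMD-optimized):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified zero-page check: true if the page contains only zero bytes.
 * Assumes size is a multiple of 8, which a 4KiB guest page always is. */
static bool page_is_zero(const void *page, size_t size)
{
    const uint64_t *p = page;

    for (size_t i = 0; i < size / sizeof(*p); i++) {
        if (p[i]) {
            return false;
        }
    }
    return true;
}

Doing this for every page of a 100GB guest is a large part of what the
multifdsend_X threads spend their runtime on in the baseline rows below.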
> > > > > > >
> > > > > > > A) Scenario 1: 50% (50GB) normal pages on a 100GB VM.
> > > > > > >
> > > > > > > CPU usage
> > > > > > >
> > > > > > > |---------------|---------------|---------------|---------------|
> > > > > > > |               |comm           |runtime(msec)  |totaltime(msec)|
> > > > > > > |---------------|---------------|---------------|---------------|
> > > > > > > |Baseline       |live_migration |5657.58        |               |
> > > > > > > |               |multifdsend_0  |3931.563       |               |
> > > > > > > |               |multifdsend_1  |4405.273       |               |
> > > > > > > |               |multifdsend_2  |3941.968       |               |
> > > > > > > |               |multifdsend_3  |5032.975       |               |
> > > > > > > |               |multifdsend_4  |4533.865       |               |
> > > > > > > |               |multifdsend_5  |4530.461       |               |
> > > > > > > |               |multifdsend_6  |5171.916       |               |
> > > > > > > |               |multifdsend_7  |4722.769       |41922          |
> > > > > > > |---------------|---------------|---------------|---------------|
> > > > > > > |DSA            |live_migration |6129.168       |               |
> > > > > > > |               |multifdsend_0  |2954.717       |               |
> > > > > > > |               |multifdsend_1  |2766.359       |               |
> > > > > > > |               |multifdsend_2  |2853.519       |               |
> > > > > > > |               |multifdsend_3  |2740.717       |               |
> > > > > > > |               |multifdsend_4  |2824.169       |               |
> > > > > > > |               |multifdsend_5  |2966.908       |               |
> > > > > > > |               |multifdsend_6  |2611.137       |               |
> > > > > > > |               |multifdsend_7  |3114.732       |               |
> > > > > > > |               |dsa_completion |3612.564       |32568          |
> > > > > > > |---------------|---------------|---------------|---------------|
> > > > > > >
> > > > > > > The baseline total runtime is calculated by adding up the runtime of
> > > > > > > all multifdsend_X threads and the live_migration thread. The DSA
> > > > > > > offloading total runtime is calculated by adding up the runtime of
> > > > > > > all multifdsend_X threads, the live_migration thread and the
> > > > > > > dsa_completion thread: 41922 msec vs. 32568 msec, i.e. roughly a 22%
> > > > > > > saving in total CPU usage.
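To spell out the arithmetic behind that figure: (41922 - 32568) / 41922 =
9354 / 41922 ≈ 22.3%, i.e. the DSA run spends roughly 9.4 fewer CPU-seconds in
total than the CPU-only baseline in this scenario.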
> > > > > >
> > > > > >
> > > > > > Here the DSA was mostly idle.
> > > > > >
> > > > > > Sounds good, but a question: what if several QEMU instances are
> > > > > > migrated in parallel?
> > > > > >
> > > > > > Some accelerators tend to basically stall if several tasks
> > > > > > are trying to use them at the same time.
> > > > > >
> > > > > > Where is the boundary here?
> > > > >
> > > > > A DSA device can be assigned to multiple QEMU instances.
> > > > > The DSA resource used by each process is called a work queue. Each DSA
> > > > > device can support up to 8 work queues, and work queues are classified
> > > > > into dedicated queues and shared queues.
> > > > >
> > > > > A dedicated queue can only serve one process. Theoretically, there is
> > > > > no limit on the number of processes using a shared queue; sharing is
> > > > > based on ENQCMD + SVM technology.
> > > > >
> > > > > https://www.kernel.org/doc/html/v5.17/x86/sva.html
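For readers unfamiliar with the shared-queue model: each process opens the
shared WQ's character device and maps its submission portal, and the hardware
tells submitters apart by their PASID. A rough sketch, assuming the idxd driver
exposes the WQ as /dev/dsa/wq0.0 and that the kernel sets up the SVA/PASID
binding when the portal is mapped (illustrative only, not code from this patch
set):

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map the submission portal of a shared DSA work queue.  Every QEMU
 * instance can map the same shared WQ; ENQCMD-based submission carries
 * the per-process PASID, so one queue can serve them all. */
static void *map_shared_wq_portal(const char *wq_path, int *fd_out)
{
    int fd = open(wq_path, O_RDWR);
    if (fd < 0) {
        return NULL;
    }

    /* The portal is a write-only 4KiB MMIO window; descriptors are
     * pushed into it with the ENQCMD instruction. */
    void *portal = mmap(NULL, 0x1000, PROT_WRITE,
                        MAP_SHARED | MAP_POPULATE, fd, 0);
    if (portal == MAP_FAILED) {
        close(fd);
        return NULL;
    }

    /* Keep the WQ file descriptor open for as long as the portal is in use. */
    *fd_out = fd;
    return portal;
}

Typical use would be something like:
    int wq_fd;
    void *portal = map_shared_wq_portal("/dev/dsa/wq0.0", &wq_fd);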
> > > >
> > > > This server has 192 CPUs, which could conceivably migrate around 100
> > > > single-vCPU QEMU instances with no issue. What happens if you do this
> > > > with DSA?
> > >
> > > First, the DSA work queue needs to be configured in shared mode, and one
> > > queue is enough.
> > >
> > > The maximum depth of a DSA hardware work queue is 128, which means that
> > > the number of outstanding zero-page detection tasks cannot exceed 128;
> > > beyond that, enqcmd returns an error until the work queue has room again.
> > >
> > > For 100 QEMU instances migrating concurrently, I don't have any data on
> > > this yet. I think the 100 zero-page detection tasks can be successfully
> > > submitted to the DSA hardware work queue, but the throughput of DSA's
> > > zero-page detection also needs to be considered. Once the DSA maximum
> > > throughput is reached, the work queue may fill up quickly, and this will
> > > cause some QEMU instances to be temporarily unable to submit new tasks
> > > to DSA.
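To make the queue-full case concrete, below is a hedged sketch of submitting one
zero-page-check descriptor to a shared WQ and retrying while the 128-entry queue
is full. It assumes the linux/idxd.h UAPI definitions (struct dsa_hw_desc,
struct dsa_completion_record, DSA_OPCODE_COMPVAL, IDXD_OP_FLAG_*) and the
_enqcmd intrinsic (immintrin.h, compiled with -menqcmd); the submission path and
back-off policy in this patch set may well differ:

#include <immintrin.h>   /* _enqcmd, _mm_pause; build with -menqcmd */
#include <stdint.h>
#include <string.h>
#include <linux/idxd.h>  /* struct dsa_hw_desc, struct dsa_completion_record */

/* Submit a "compare with pattern 0" descriptor for one 4KiB page.
 * Returns 0 once the descriptor is accepted, -1 if the shared work queue
 * stayed full for all retries (the situation described above). */
static int dsa_submit_zero_check(void *wq_portal, const void *page,
                                 struct dsa_completion_record *comp)
{
    struct dsa_hw_desc desc;

    memset(&desc, 0, sizeof(desc));
    memset(comp, 0, sizeof(*comp));   /* completion record, 32-byte aligned */

    desc.opcode = DSA_OPCODE_COMPVAL;    /* compare memory against a pattern */
    desc.comp_pattern = 0;               /* the all-zero pattern */
    desc.src_addr = (uintptr_t)page;
    desc.xfer_size = 4096;
    desc.flags = IDXD_OP_FLAG_CRAV | IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_BOF;
    desc.completion_addr = (uintptr_t)comp;

    /* ENQCMD reports non-zero when the device did not accept the
     * descriptor, e.g. because the shared WQ (depth 128) is full. */
    for (int retries = 0; retries < 1000; retries++) {
        if (_enqcmd(wq_portal, &desc) == 0) {
            return 0;
        }
        _mm_pause();   /* brief back-off before retrying */
    }
    return -1;   /* caller could fall back to the CPU zero-page check */
}

A completion thread (dsa_completion in the tables above) would then poll
comp->status and, on success, read the completion record's result field to learn
whether the page matched the all-zero pattern.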
> >
> > The unfortunate reality here would be that there's likely no QoS; this
> > is purely FIFO, right?
>
> Yes, this scenario is likely FIFO, assuming the number of pages in each task
> is the same. The DSA hardware consists of multiple work engines that can
> process tasks concurrently, usually fetching tasks from the work queue in a
> round-robin way.
>
> DSA supports priority and flow control at work-queue granularity:
> https://github.com/intel/idxd-config/blob/stable/Documentation/accfg/accel-config-config-wq.txt
Right, but it seems clear there aren't enough work queues for a typical setup.
> > > This is likely to happen in the first round of migration
> > > memory iteration.
> >
> > Try testing this and see, then?
>
> Yes, I can test based on this patch set. Please review the test scenario:
> my server has 192 CPUs, 8 DSA devices, and a 100Gbps NIC.
> All 8 DSA devices serve 100 QEMU instances for simultaneous live migration.
> Each VM has 1 vCPU and 1GB of memory, with no workload running in the VM.
>
> You want to know whether some QEMU instances stall because of DSA, right?
And generally, just run the same benchmark you did comparing DSA against the
CPU: worst-case and average numbers would be interesting.
> > --
> > MST