* [PATCH v6 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration.
From: Yichen Wang @ 2024-10-09 23:45 UTC
  To: Dr. David Alan Gilbert, Paolo Bonzini, Marc-André Lureau,
	Daniel P. Berrangé, Philippe Mathieu-Daudé, Peter Xu,
	Fabiano Rosas, Eric Blake, Markus Armbruster, Michael S. Tsirkin,
	Cornelia Huck, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Shivam Kumar, Ho-Ren (Jack) Chuang,
	Yichen Wang

v6
* Rebase on top of 838fc0a8769d7cc6edfe50451ba4e3368395f5c1;
* Refactor code to have clean history on all commits;
* Add comments on DSA specific defines about how the value is picked;
* Address all comments from v5 reviews about api defines, questions, etc.;

v5
* Rebase on top of 39a032cea23e522268519d89bb738974bc43b6f6.
* Rename struct definitions with typedef and CamelCase names;
* Add build and runtime checks about DSA accelerator;
* Address all comments from v4 reviews about typos, licenses, comments,
error reporting, etc.

v4
* Rebase on top of 85b597413d4370cb168f711192eaef2eb70535ac.
* A separate "multifd zero page checking" patchset was split from this
patchset's v3 and merged into master. v4 re-applies the remaining
commits on top of that patchset, refactored and re-tested.
https://lore.kernel.org/all/20240311180015.3359271-1-hao.xiang@linux.dev/
* There is some feedback from v3 that I likely overlooked.

v3
* Rebase on top of 7425b6277f12e82952cede1f531bfc689bf77fb1.
* Fix error/warning from checkpatch.pl
* Fix use-after-free bug when multifd-dsa-accel option is not set.
* Handle errors from dsa_init and correctly propagate them.
* Remove unnecessary call to dsa_stop.
* Detect availability of DSA feature at compile time.
* Implement a generic batch_task structure and a DSA-specific one, dsa_batch_task.
* Remove all exit() calls and propagate errors correctly.
* Use bytes instead of page count to configure multifd-packet-size option.

v2
* Rebase on top of 3e01f1147a16ca566694b97eafc941d62fa1e8d8.
* Leave Juan's changes in their original form instead of squashing them.
* Add a new commit to refactor the multifd_send_thread function to prepare for introducing the DSA offload functionality.
* Use page count to configure multifd-packet-size option.
* Don't use the FLAKY flag in DSA tests.
* Test if the DSA integration test is set up correctly and skip the test
if not.
* Fixed broken link in the previous patch cover.

* Background:

I posted an RFC about DSA offloading in QEMU:
https://patchew.org/QEMU/20230529182001.2232069-1-hao.xiang@bytedance.com/

This patchset implements DSA offloading of zero page checking in the
multifd live migration code path.

* Overview:

Intel Data Streaming Accelerator (DSA) was introduced in Intel's 4th generation
Xeon servers, aka Sapphire Rapids.
https://cdrdv2-public.intel.com/671116/341204-intel-data-streaming-accelerator-spec.pdf
https://www.intel.com/content/www/us/en/content-details/759709/intel-data-streaming-accelerator-user-guide.html
One of the things DSA can do is offload memory comparison workloads from the
CPU to the DSA accelerator hardware. This patchset implements a solution to
offload QEMU's zero page checking from the CPU to the DSA accelerator
hardware. We gain two benefits from this change:
1. Reduced CPU usage in the multifd live migration workflow across all use
cases.
2. Reduced total migration time in some use cases.

* Design:

These are the logical steps to perform DSA offloading (a minimal
single-page sketch follows this list):
1. Configure DSA accelerators and create user space openable DSA work
queues via the idxd driver.
2. Map DSA's work queue into a user space address space.
3. Fill an in-memory task descriptor to describe the memory operation.
4. Use the dedicated CPU instruction _enqcmd to queue a task descriptor to
the work queue.
5. Poll the task descriptor's completion status field until the task
completes.
6. Check the return status.
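
For illustration, here is a minimal single-page sketch of steps 2-6. It
assumes a shared work queue already configured through the idxd driver
(step 1); the function name and wq_path argument are hypothetical, and
error handling plus the CPU fallback are omitted:

#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>
#include <x86intrin.h>
#include <linux/idxd.h>

static int __attribute__((target("enqcmd")))
page_is_zero_dsa(const char *wq_path, const void *page, uint32_t len)
{
    /* Step 2: map the device work queue portal into user space. */
    int fd = open(wq_path, O_RDWR);
    void *wq = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED | MAP_POPULATE,
                    fd, 0);
    close(fd);

    /* Step 3: fill an in-memory descriptor (compare buffer with value). */
    struct dsa_completion_record comp __attribute__((aligned(32))) = { 0 };
    struct dsa_hw_desc desc = {
        .opcode = DSA_OPCODE_COMPVAL,
        .flags = IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CRAV,
        .comp_pattern = 0,                 /* compare against zero */
        .src_addr = (uintptr_t)page,
        .xfer_size = len,
        .completion_addr = (uintptr_t)&comp,
    };

    /* Step 4: submit with _enqcmd; order descriptor stores first. */
    _mm_sfence();
    while (_enqcmd(wq, &desc)) {
        /* non-zero means the shared queue was full; retry */
    }

    /* Step 5: poll the completion record's status field. */
    while (comp.status == DSA_COMP_NONE) {
        _mm_pause();
    }
    munmap(wq, 4096);

    /* Step 6: check the status; result == 0 means every byte matched. */
    return comp.status == DSA_COMP_SUCCESS && comp.result == 0;
}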

The memory operation itself is now done entirely by the accelerator hardware,
but the new workflow introduces overheads: the extra CPU cost of preparing
and submitting the task descriptors, and the extra CPU cost of polling for
completion. The design is centered around minimizing these two overheads.

1. In order to reduce the overhead of task preparation and submission,
we use batch descriptors. A batch descriptor contains N individual
zero page checking tasks, where the default N is 128 (default packet size
/ page size), and N can be increased by setting the packet size via a new
migration option. (A sketch of the batch descriptor layout follows this
list.)
2. The multifd sender threads prepare and submit batch tasks to the DSA
hardware and wait on a synchronization object for task completion.
Whenever a DSA task is submitted, the task structure is added to a
thread safe queue. It's safe for multiple multifd sender threads to
submit tasks concurrently.
3. Multiple DSA hardware devices can be used. During multifd initialization,
every sender thread is assigned a DSA device to work with. We
use a round-robin scheme to evenly distribute the work across all used
DSA devices.
4. A dedicated thread, dsa_completion, performs busy polling for all
DSA task completions. The thread keeps dequeuing DSA tasks from the
thread safe queue and blocks when there is no outstanding DSA
task. When polling for completion of a DSA task, the thread uses the CPU
instruction _mm_pause between the iterations of the busy loop to save some
CPU power as well as to free up core resources for the sibling hyperthread.
5. The DSA accelerator can encounter errors. The most common error is a
page fault. We have tested using devices to handle page faults but
performance is bad. Right now, if DSA hits a page fault, we fall back to
the CPU to complete the rest of the work. The CPU fallback is done in
the multifd sender thread.
6. Added a new migration option multifd-dsa-accel to set the DSA device
path. If set, the multifd workflow will leverage the DSA devices for
offloading.
7. Added a new migration option multifd-normal-page-ratio to make
multifd live migration easier to test. Setting a normal page ratio makes
live migration recognize a zero page as a normal page and send
the entire payload over the network. This option is useful when we want
to send a large network payload and analyze throughput.
8. Added a new migration option multifd-packet-size. This increases
the number of pages being zero page checked and sent over the network.
The extra synchronization between the sender threads and the dsa
completion thread is an overhead. Using a large packet size can reduce
that overhead.
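
To make point 1 concrete, below is a hedged sketch of the batch layout:
a single DSA_OPCODE_BATCH descriptor points at an array of N COMPVAL
descriptors, so one _enqcmd submits N zero page checks. The function and
parameter names are illustrative; the series' real container is the
QemuDsaBatchTask structure in include/qemu/dsa.h:

#include <stdint.h>
#include <linux/idxd.h>

static void fill_zero_check_batch(struct dsa_hw_desc *batch,
                                  struct dsa_hw_desc *descs,
                                  struct dsa_completion_record *comps,
                                  struct dsa_completion_record *batch_comp,
                                  const void **pages, uint32_t n,
                                  uint32_t len)
{
    for (uint32_t i = 0; i < n; i++) {
        descs[i].opcode = DSA_OPCODE_COMPVAL;
        descs[i].flags = IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CRAV;
        descs[i].comp_pattern = 0;                /* compare against zero */
        descs[i].src_addr = (uintptr_t)pages[i];
        descs[i].xfer_size = len;
        descs[i].completion_addr = (uintptr_t)&comps[i];
        comps[i].status = DSA_COMP_NONE;
    }
    batch->opcode = DSA_OPCODE_BATCH;
    batch->flags = IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CRAV;
    batch->desc_list_addr = (uintptr_t)descs;     /* the N sub-descriptors */
    batch->desc_count = n;                        /* hardware requires n >= 2 */
    batch->completion_addr = (uintptr_t)batch_comp;
    batch_comp->status = DSA_COMP_NONE;
}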

* Performance:

We use two Intel 4th generation Xeon servers for testing.

Architecture:        x86_64
CPU(s):              192
Thread(s) per core:  2
Core(s) per socket:  48
Socket(s):           2
NUMA node(s):        2
Vendor ID:           GenuineIntel
CPU family:          6
Model:               143
Model name:          Intel(R) Xeon(R) Platinum 8457C
Stepping:            8
CPU MHz:             2538.624
CPU max MHz:         3800.0000
CPU min MHz:         800.0000

We perform multifd live migration with the following setup:
1. The VM has 100GB of memory.
2. Use the new migration option multifd-normal-page-ratio to control the total
size of the payload sent over the network.
3. Use 8 multifd channels.
4. Use tcp for live migration.
5. Use the CPU to perform zero page checking as the baseline.
6. Use one DSA device to offload zero page checking to compare with the baseline.
7. Use "perf sched record" and "perf sched timehist" to analyze CPU usage.

A) Scenario 1: 50% (50GB) normal pages on a 100GB VM.

	CPU usage

	|---------------|---------------|---------------|---------------|
	|		|comm		|runtime(msec)	|totaltime(msec)|
	|---------------|---------------|---------------|---------------|
	|Baseline	|live_migration	|5657.58	|		|
	|		|multifdsend_0	|3931.563	|		|
	|		|multifdsend_1	|4405.273	|		|
	|		|multifdsend_2	|3941.968	|		|
	|		|multifdsend_3	|5032.975	|		|
	|		|multifdsend_4	|4533.865	|		|
	|		|multifdsend_5	|4530.461	|		|
	|		|multifdsend_6	|5171.916	|		|
	|		|multifdsend_7	|4722.769	|41922		|
	|---------------|---------------|---------------|---------------|
	|DSA		|live_migration	|6129.168	|		|
	|		|multifdsend_0	|2954.717	|		|
	|		|multifdsend_1	|2766.359	|		|
	|		|multifdsend_2	|2853.519	|		|
	|		|multifdsend_3	|2740.717	|		|
	|		|multifdsend_4	|2824.169	|		|
	|		|multifdsend_5	|2966.908	|		|
	|		|multifdsend_6	|2611.137	|		|
	|		|multifdsend_7	|3114.732	|		|
	|		|dsa_completion	|3612.564	|32568		|
	|---------------|---------------|---------------|---------------|

Baseline total runtime is calculated by adding up the runtime of all
multifdsend_X and live_migration threads. DSA offloading total runtime is
calculated by adding up the runtime of all multifdsend_X, live_migration
and dsa_completion threads. Comparing 41922 msec vs 32568 msec of runtime,
(41922 - 32568) / 41922 is roughly a 22% total CPU usage saving.

	Latency
	|---------------|---------------|---------------|---------------|---------------|---------------|
	|		|total time	|down time	|throughput	|transferred-ram|total-ram	|
	|---------------|---------------|---------------|---------------|---------------|---------------|	
	|Baseline	|10343 ms	|161 ms		|41007.00 mbps	|51583797 kb	|102400520 kb	|
	|---------------|---------------|---------------|---------------|-------------------------------|
	|DSA offload	|9535 ms	|135 ms		|46554.40 mbps	|53947545 kb	|102400520 kb	|	
	|---------------|---------------|---------------|---------------|---------------|---------------|

Total time is 8% faster and down time is 16% faster.

B) Scenario 2: 100% (100GB) zero pages on a 100GB VM.

	CPU usage
	|---------------|---------------|---------------|---------------|
	|		|comm		|runtime(msec)	|totaltime(msec)|
	|---------------|---------------|---------------|---------------|
	|Baseline	|live_migration	|4860.718	|		|
	|	 	|multifdsend_0	|748.875	|		|
	|		|multifdsend_1	|898.498	|		|
	|		|multifdsend_2	|787.456	|		|
	|		|multifdsend_3	|764.537	|		|
	|		|multifdsend_4	|785.687	|		|
	|		|multifdsend_5	|756.941	|		|
	|		|multifdsend_6	|774.084	|		|
	|		|multifdsend_7	|782.900	|11154		|
	|---------------|---------------|-------------------------------|
	|DSA offloading	|live_migration	|3846.976	|		|
	|		|multifdsend_0	|191.880	|		|
	|		|multifdsend_1	|166.331	|		|
	|		|multifdsend_2	|168.528	|		|
	|		|multifdsend_3	|197.831	|		|
	|		|multifdsend_4	|169.580	|		|
	|		|multifdsend_5	|167.984	|		|
	|		|multifdsend_6	|198.042	|		|
	|		|multifdsend_7	|170.624	|		|
	|		|dsa_completion	|3428.669	|8700		|
	|---------------|---------------|---------------|---------------|

Baseline total runtime is 11154 msec and DSA offloading total runtime is
8700 msec. That is a 22% CPU usage saving.

	Latency
	|--------------------------------------------------------------------------------------------|
	|		|total time	|down time	|throughput	|transferred-ram|total-ram   |
	|---------------|---------------|---------------|---------------|---------------|------------|	
	|Baseline	|4867 ms	|20 ms		|1.51 mbps	|565 kb		|102400520 kb|
	|---------------|---------------|---------------|---------------|----------------------------|
	|DSA offload	|3888 ms	|18 ms		|1.89 mbps	|565 kb		|102400520 kb|	
	|---------------|---------------|---------------|---------------|---------------|------------|

Total time is 20% faster and down time is 10% faster.

* Testing:

1. Added unit tests to cover the added code paths in dsa.c.
2. Added integration tests to cover multifd live migration using DSA
offloading.


Hao Xiang (11):
  meson: Introduce new instruction set enqcmd to the build system.
  util/dsa: Implement DSA device start and stop logic.
  util/dsa: Implement DSA task enqueue and dequeue.
  util/dsa: Implement DSA task asynchronous completion thread model.
  util/dsa: Implement zero page checking in DSA task.
  util/dsa: Implement DSA task asynchronous submission and wait for
    completion.
  migration/multifd: Add new migration option for multifd DSA
    offloading.
  migration/multifd: Enable DSA offloading in multifd sender path.
  migration/multifd: Add migration option set packet size.
  util/dsa: Add unit test coverage for Intel DSA task submission and
    completion.
  migration/multifd: Add integration tests for multifd with Intel DSA
    offloading.

Yichen Wang (1):
  util/dsa: Add idxd into linux header copy list.

 hmp-commands.hx                 |    2 +-
 include/qemu/dsa.h              |  189 ++++++
 meson.build                     |   14 +
 meson_options.txt               |    2 +
 migration/migration-hmp-cmds.c  |   26 +-
 migration/multifd-zero-page.c   |  133 +++-
 migration/multifd-zlib.c        |    6 +-
 migration/multifd-zstd.c        |    6 +-
 migration/multifd.c             |   19 +-
 migration/multifd.h             |    5 +
 migration/options.c             |   69 ++
 migration/options.h             |    2 +
 qapi/migration.json             |   49 +-
 scripts/meson-buildoptions.sh   |    3 +
 scripts/update-linux-headers.sh |    2 +-
 tests/qtest/migration-test.c    |   80 ++-
 tests/unit/meson.build          |    6 +
 tests/unit/test-dsa.c           |  503 ++++++++++++++
 util/dsa.c                      | 1114 +++++++++++++++++++++++++++++++
 util/meson.build                |    3 +
 20 files changed, 2204 insertions(+), 29 deletions(-)
 create mode 100644 include/qemu/dsa.h
 create mode 100644 tests/unit/test-dsa.c
 create mode 100644 util/dsa.c

-- 
Yichen Wang




* [PATCH v6 01/12] meson: Introduce new instruction set enqcmd to the build system.
From: Yichen Wang @ 2024-10-09 23:45 UTC
  To: Dr. David Alan Gilbert, Paolo Bonzini, Marc-André Lureau,
	Daniel P. Berrangé, Philippe Mathieu-Daudé, Peter Xu,
	Fabiano Rosas, Eric Blake, Markus Armbruster, Michael S. Tsirkin,
	Cornelia Huck, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Shivam Kumar, Ho-Ren (Jack) Chuang,
	Yichen Wang

From: Hao Xiang <hao.xiang@linux.dev>

Enable the enqcmd instruction set in the build. The feature is opt-in and
disabled by default; configure with --enable-enqcmd (i.e. -Denqcmd=enabled)
to define CONFIG_DSA_OPT.

Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
---
 meson.build                   | 14 ++++++++++++++
 meson_options.txt             |  2 ++
 scripts/meson-buildoptions.sh |  3 +++
 3 files changed, 19 insertions(+)

diff --git a/meson.build b/meson.build
index 4ea1984fc5..a7b545cd88 100644
--- a/meson.build
+++ b/meson.build
@@ -2954,6 +2954,20 @@ config_host_data.set('CONFIG_AVX512BW_OPT', get_option('avx512bw') \
     int main(int argc, char *argv[]) { return bar(argv[0]); }
   '''), error_message: 'AVX512BW not available').allowed())
 
+config_host_data.set('CONFIG_DSA_OPT', get_option('enqcmd') \
+  .require(have_cpuid_h, error_message: 'cpuid.h not available, cannot enable ENQCMD') \
+  .require(cc.links('''
+    #include <stdint.h>
+    #include <cpuid.h>
+    #include <immintrin.h>
+    static int __attribute__((target("enqcmd"))) bar(void *a) {
+      uint64_t dst[8] = { 0 };
+      uint64_t src[8] = { 0 };
+      return _enqcmd(dst, src);
+    }
+    int main(int argc, char *argv[]) { return bar(argv[argc - 1]); }
+  '''), error_message: 'ENQCMD not available').allowed())
+
 # For both AArch64 and AArch32, detect if builtins are available.
 config_host_data.set('CONFIG_ARM_AES_BUILTIN', cc.compiles('''
     #include <arm_neon.h>
diff --git a/meson_options.txt b/meson_options.txt
index 5ee1d95c9c..2c02c654f0 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -123,6 +123,8 @@ option('avx2', type: 'feature', value: 'auto',
        description: 'AVX2 optimizations')
 option('avx512bw', type: 'feature', value: 'auto',
        description: 'AVX512BW optimizations')
+option('enqcmd', type: 'feature', value: 'disabled',
+       description: 'ENQCMD optimizations')
 option('keyring', type: 'feature', value: 'auto',
        description: 'Linux keyring support')
 option('libkeyutils', type: 'feature', value: 'auto',
diff --git a/scripts/meson-buildoptions.sh b/scripts/meson-buildoptions.sh
index 3bee1c56df..f6c7f1251e 100644
--- a/scripts/meson-buildoptions.sh
+++ b/scripts/meson-buildoptions.sh
@@ -96,6 +96,7 @@ meson_options_help() {
   printf "%s\n" '  auth-pam        PAM access control'
   printf "%s\n" '  avx2            AVX2 optimizations'
   printf "%s\n" '  avx512bw        AVX512BW optimizations'
+  printf "%s\n" '  enqcmd          ENQCMD optimizations'
   printf "%s\n" '  blkio           libblkio block device driver'
   printf "%s\n" '  bochs           bochs image format support'
   printf "%s\n" '  bpf             eBPF support'
@@ -241,6 +242,8 @@ _meson_option_parse() {
     --disable-avx2) printf "%s" -Davx2=disabled ;;
     --enable-avx512bw) printf "%s" -Davx512bw=enabled ;;
     --disable-avx512bw) printf "%s" -Davx512bw=disabled ;;
+    --enable-enqcmd) printf "%s" -Denqcmd=enabled ;;
+    --disable-enqcmd) printf "%s" -Denqcmd=disabled ;;
     --enable-gcov) printf "%s" -Db_coverage=true ;;
     --disable-gcov) printf "%s" -Db_coverage=false ;;
     --enable-lto) printf "%s" -Db_lto=true ;;
-- 
Yichen Wang




* [PATCH v6 02/12] util/dsa: Add idxd into linux header copy list.
From: Yichen Wang @ 2024-10-09 23:46 UTC
  To: Dr. David Alan Gilbert, Paolo Bonzini, Marc-André Lureau,
	Daniel P. Berrangé, Philippe Mathieu-Daudé, Peter Xu,
	Fabiano Rosas, Eric Blake, Markus Armbruster, Michael S. Tsirkin,
	Cornelia Huck, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Shivam Kumar, Ho-Ren (Jack) Chuang,
	Yichen Wang

Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
---
 scripts/update-linux-headers.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/update-linux-headers.sh b/scripts/update-linux-headers.sh
index c34ac6454e..5aba95d9cb 100755
--- a/scripts/update-linux-headers.sh
+++ b/scripts/update-linux-headers.sh
@@ -193,7 +193,7 @@ rm -rf "$output/linux-headers/linux"
 mkdir -p "$output/linux-headers/linux"
 for header in const.h stddef.h kvm.h vfio.h vfio_ccw.h vfio_zdev.h vhost.h \
               psci.h psp-sev.h userfaultfd.h memfd.h mman.h nvme_ioctl.h \
-              vduse.h iommufd.h bits.h; do
+              vduse.h iommufd.h bits.h idxd.h; do
     cp "$hdrdir/include/linux/$header" "$output/linux-headers/linux"
 done
 
-- 
Yichen Wang




* [PATCH v6 03/12] util/dsa: Implement DSA device start and stop logic.
From: Yichen Wang @ 2024-10-09 23:46 UTC
  To: Dr. David Alan Gilbert, Paolo Bonzini, Marc-André Lureau,
	Daniel P. Berrangé, Philippe Mathieu-Daudé, Peter Xu,
	Fabiano Rosas, Eric Blake, Markus Armbruster, Michael S. Tsirkin,
	Cornelia Huck, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Shivam Kumar, Ho-Ren (Jack) Chuang,
	Yichen Wang, Bryan Zhang

From: Hao Xiang <hao.xiang@linux.dev>

* DSA device open and close.
* DSA group contains multiple DSA devices.
* DSA group configure/start/stop/clean.

Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
Signed-off-by: Bryan Zhang <bryan.zhang@bytedance.com>
Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
---
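A minimal sketch of the intended call sequence for the API added below,
assuming a strList of work queue paths (wired up to a migration parameter
later in this series); the function name is illustrative and error
handling is abbreviated:

#include "qemu/dsa.h"
#include "qapi/error.h"

static void dsa_lifecycle_example(const strList *wq_paths)
{
    Error *err = NULL;

    if (qemu_dsa_init(wq_paths, &err) != 0) {  /* open + mmap each WQ */
        error_report_err(err);
        return;
    }
    qemu_dsa_start();    /* marks the device group as running */

    /* ... task submission and completion arrive in later patches ... */

    qemu_dsa_cleanup();  /* implies qemu_dsa_stop() and unmaps the WQs */
}
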
 include/qemu/dsa.h | 103 +++++++++++++++++
 util/dsa.c         | 282 +++++++++++++++++++++++++++++++++++++++++++++
 util/meson.build   |   3 +
 3 files changed, 388 insertions(+)
 create mode 100644 include/qemu/dsa.h
 create mode 100644 util/dsa.c

diff --git a/include/qemu/dsa.h b/include/qemu/dsa.h
new file mode 100644
index 0000000000..501bb8c70d
--- /dev/null
+++ b/include/qemu/dsa.h
@@ -0,0 +1,103 @@
+/*
+ * Interface for using Intel Data Streaming Accelerator to offload certain
+ * background operations.
+ *
+ * Copyright (C) Bytedance Ltd.
+ *
+ * Authors:
+ *  Hao Xiang <hao.xiang@bytedance.com>
+ *  Yichen Wang <yichen.wang@bytedance.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef QEMU_DSA_H
+#define QEMU_DSA_H
+
+#include "qemu/error-report.h"
+#include "qemu/thread.h"
+#include "qemu/queue.h"
+
+#ifdef CONFIG_DSA_OPT
+
+#pragma GCC push_options
+#pragma GCC target("enqcmd")
+
+#include <linux/idxd.h>
+#include "x86intrin.h"
+
+typedef struct {
+    void *work_queue;
+} QemuDsaDevice;
+
+typedef QSIMPLEQ_HEAD(QemuDsaTaskQueue, QemuDsaBatchTask) QemuDsaTaskQueue;
+
+typedef struct {
+    QemuDsaDevice *dsa_devices;
+    int num_dsa_devices;
+    /* The index of the next DSA device to be used. */
+    uint32_t device_allocator_index;
+    bool running;
+    QemuMutex task_queue_lock;
+    QemuCond task_queue_cond;
+    QemuDsaTaskQueue task_queue;
+} QemuDsaDeviceGroup;
+
+/**
+ * @brief Initializes DSA devices.
+ *
+ * @param dsa_parameter A list of DSA device path from migration parameter.
+ *
+ * @return int Zero if successful, otherwise non zero.
+ */
+int qemu_dsa_init(const strList *dsa_parameter, Error **errp);
+
+/**
+ * @brief Start logic to enable using DSA.
+ */
+void qemu_dsa_start(void);
+
+/**
+ * @brief Stop the device group and the completion thread.
+ */
+void qemu_dsa_stop(void);
+
+/**
+ * @brief Clean up system resources created for DSA offloading.
+ */
+void qemu_dsa_cleanup(void);
+
+/**
+ * @brief Check if DSA is running.
+ *
+ * @return True if DSA is running, otherwise false.
+ */
+bool qemu_dsa_is_running(void);
+
+#else
+
+static inline bool qemu_dsa_is_running(void)
+{
+    return false;
+}
+
+static inline int qemu_dsa_init(const strList *dsa_parameter, Error **errp)
+{
+    if (dsa_parameter != NULL) {
+        error_setg(errp, "DSA is not supported.");
+        return -1;
+    }
+
+    return 0;
+}
+
+static inline void qemu_dsa_start(void) {}
+
+static inline void qemu_dsa_stop(void) {}
+
+static inline void qemu_dsa_cleanup(void) {}
+
+#endif
+
+#endif
diff --git a/util/dsa.c b/util/dsa.c
new file mode 100644
index 0000000000..54d0e20c29
--- /dev/null
+++ b/util/dsa.c
@@ -0,0 +1,282 @@
+/*
+ * Use Intel Data Streaming Accelerator to offload certain background
+ * operations.
+ *
+ * Copyright (C) Bytedance Ltd.
+ *
+ * Authors:
+ *  Hao Xiang <hao.xiang@bytedance.com>
+ *  Bryan Zhang <bryan.zhang@bytedance.com>
+ *  Yichen Wang <yichen.wang@bytedance.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "qemu/queue.h"
+#include "qemu/memalign.h"
+#include "qemu/lockable.h"
+#include "qemu/cutils.h"
+#include "qemu/dsa.h"
+#include "qemu/bswap.h"
+#include "qemu/error-report.h"
+#include "qemu/rcu.h"
+
+#pragma GCC push_options
+#pragma GCC target("enqcmd")
+
+#include <linux/idxd.h>
+#include "x86intrin.h"
+
+#define DSA_WQ_PORTAL_SIZE 4096
+#define MAX_DSA_DEVICES 16
+
+uint32_t max_retry_count;
+static QemuDsaDeviceGroup dsa_group;
+
+
+/**
+ * @brief This function opens a DSA device's work queue and
+ *        maps the DSA device memory into the current process.
+ *
+ * @param dsa_wq_path A pointer to the DSA device work queue's file path.
+ * @return A pointer to the mapped memory, or MAP_FAILED on failure.
+ */
+static void *
+map_dsa_device(const char *dsa_wq_path)
+{
+    void *dsa_device;
+    int fd;
+
+    fd = open(dsa_wq_path, O_RDWR);
+    if (fd < 0) {
+        error_report("Open %s failed with errno = %d.",
+                dsa_wq_path, errno);
+        return MAP_FAILED;
+    }
+    dsa_device = mmap(NULL, DSA_WQ_PORTAL_SIZE, PROT_WRITE,
+                      MAP_SHARED | MAP_POPULATE, fd, 0);
+    close(fd);
+    if (dsa_device == MAP_FAILED) {
+        error_report("mmap failed with errno = %d.", errno);
+        return MAP_FAILED;
+    }
+    return dsa_device;
+}
+
+/**
+ * @brief Initializes a DSA device structure.
+ *
+ * @param instance A pointer to the DSA device.
+ * @param work_queue A pointer to the DSA work queue.
+ */
+static void
+dsa_device_init(QemuDsaDevice *instance,
+                void *dsa_work_queue)
+{
+    instance->work_queue = dsa_work_queue;
+}
+
+/**
+ * @brief Cleans up a DSA device structure.
+ *
+ * @param instance A pointer to the DSA device to cleanup.
+ */
+static void
+dsa_device_cleanup(QemuDsaDevice *instance)
+{
+    if (instance->work_queue != MAP_FAILED) {
+        munmap(instance->work_queue, DSA_WQ_PORTAL_SIZE);
+    }
+}
+
+/**
+ * @brief Initializes a DSA device group.
+ *
+ * @param group A pointer to the DSA device group.
+ * @param dsa_parameter A list of DSA device paths from the migration
+ * parameter, separated by space characters.
+ *
+ * @return Zero if successful, non-zero otherwise.
+ */
+static int
+dsa_device_group_init(QemuDsaDeviceGroup *group,
+                      const strList *dsa_parameter,
+                      Error **errp)
+{
+    if (dsa_parameter == NULL) {
+        error_setg(errp, "dsa device path is not supplied.");
+        return -1;
+    }
+
+    int ret = 0;
+    const char *dsa_path[MAX_DSA_DEVICES];
+    int num_dsa_devices = 0;
+
+    while (dsa_parameter) {
+        dsa_path[num_dsa_devices++] = dsa_parameter->value;
+        if (num_dsa_devices == MAX_DSA_DEVICES) {
+            break;
+        }
+        dsa_parameter = dsa_parameter->next;
+    }
+
+    group->dsa_devices =
+        g_new0(QemuDsaDevice, num_dsa_devices);
+    group->num_dsa_devices = num_dsa_devices;
+    group->device_allocator_index = 0;
+
+    group->running = false;
+    qemu_mutex_init(&group->task_queue_lock);
+    qemu_cond_init(&group->task_queue_cond);
+    QSIMPLEQ_INIT(&group->task_queue);
+
+    void *dsa_wq = MAP_FAILED;
+    for (int i = 0; i < num_dsa_devices; i++) {
+        dsa_wq = map_dsa_device(dsa_path[i]);
+        if (dsa_wq == MAP_FAILED) {
+            error_setg(errp, "map_dsa_device failed MAP_FAILED.");
+            ret = -1;
+            goto exit;
+        }
+        dsa_device_init(&dsa_group.dsa_devices[i], dsa_wq);
+    }
+
+exit:
+    return ret;
+}
+
+/**
+ * @brief Starts a DSA device group.
+ *
+ * @param group A pointer to the DSA device group.
+ */
+static void
+dsa_device_group_start(QemuDsaDeviceGroup *group)
+{
+    group->running = true;
+}
+
+/**
+ * @brief Stops a DSA device group.
+ *
+ * @param group A pointer to the DSA device group.
+ */
+__attribute__((unused))
+static void
+dsa_device_group_stop(QemuDsaDeviceGroup *group)
+{
+    group->running = false;
+}
+
+/**
+ * @brief Cleans up a DSA device group.
+ *
+ * @param group A pointer to the DSA device group.
+ */
+static void
+dsa_device_group_cleanup(QemuDsaDeviceGroup *group)
+{
+    if (!group->dsa_devices) {
+        return;
+    }
+    for (int i = 0; i < group->num_dsa_devices; i++) {
+        dsa_device_cleanup(&group->dsa_devices[i]);
+    }
+    g_free(group->dsa_devices);
+    group->dsa_devices = NULL;
+
+    qemu_mutex_destroy(&group->task_queue_lock);
+    qemu_cond_destroy(&group->task_queue_cond);
+}
+
+/**
+ * @brief Returns the next available DSA device in the group.
+ *
+ * @param group A pointer to the DSA device group.
+ *
+ * @return struct QemuDsaDevice* A pointer to the next available DSA device
+ *         in the group.
+ */
+__attribute__((unused))
+static QemuDsaDevice *
+dsa_device_group_get_next_device(QemuDsaDeviceGroup *group)
+{
+    if (group->num_dsa_devices == 0) {
+        return NULL;
+    }
+    uint32_t current = qatomic_fetch_inc(&group->device_allocator_index);
+    current %= group->num_dsa_devices;
+    return &group->dsa_devices[current];
+}
+
+/**
+ * @brief Check if DSA is running.
+ *
+ * @return True if DSA is running, otherwise false.
+ */
+bool qemu_dsa_is_running(void)
+{
+    return false;
+}
+
+static void
+dsa_globals_init(void)
+{
+    max_retry_count = UINT32_MAX;
+}
+
+/**
+ * @brief Initializes DSA devices.
+ *
+ * @param dsa_parameter A list of DSA device path from migration parameter.
+ *
+ * @return int Zero if successful, otherwise non zero.
+ */
+int qemu_dsa_init(const strList *dsa_parameter, Error **errp)
+{
+    dsa_globals_init();
+
+    return dsa_device_group_init(&dsa_group, dsa_parameter, errp);
+}
+
+/**
+ * @brief Start logic to enable using DSA.
+ *
+ */
+void qemu_dsa_start(void)
+{
+    if (dsa_group.num_dsa_devices == 0) {
+        return;
+    }
+    if (dsa_group.running) {
+        return;
+    }
+    dsa_device_group_start(&dsa_group);
+}
+
+/**
+ * @brief Stop the device group and the completion thread.
+ *
+ */
+void qemu_dsa_stop(void)
+{
+    QemuDsaDeviceGroup *group = &dsa_group;
+
+    if (!group->running) {
+        return;
+    }
+}
+
+/**
+ * @brief Clean up system resources created for DSA offloading.
+ *
+ */
+void qemu_dsa_cleanup(void)
+{
+    qemu_dsa_stop();
+    dsa_device_group_cleanup(&dsa_group);
+}
+
diff --git a/util/meson.build b/util/meson.build
index 5d8bef9891..3360f62923 100644
--- a/util/meson.build
+++ b/util/meson.build
@@ -88,6 +88,9 @@ if have_block or have_ga
 endif
 if have_block
   util_ss.add(files('aio-wait.c'))
+  if config_host_data.get('CONFIG_DSA_OPT')
+    util_ss.add(files('dsa.c'))
+  endif
   util_ss.add(files('buffer.c'))
   util_ss.add(files('bufferiszero.c'))
   util_ss.add(files('hbitmap.c'))
-- 
Yichen Wang




* [PATCH v6 04/12] util/dsa: Implement DSA task enqueue and dequeue.
From: Yichen Wang @ 2024-10-09 23:46 UTC
  To: Dr. David Alan Gilbert, Paolo Bonzini, Marc-André Lureau,
	Daniel P. Berrangé, Philippe Mathieu-Daudé, Peter Xu,
	Fabiano Rosas, Eric Blake, Markus Armbruster, Michael S. Tsirkin,
	Cornelia Huck, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Shivam Kumar, Ho-Ren (Jack) Chuang,
	Yichen Wang

From: Hao Xiang <hao.xiang@linux.dev>

* Use a thread-safe queue for DSA task enqueue/dequeue.
* Implement DSA task submission.
* Implement DSA batch task submission.

Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
---
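A condensed sketch of the submission contract established here, from a
future caller's point of view: submit to the hardware first, then enqueue
the task so a completion poller can find it. submit_wi_async() and
submit_batch_wi_async() below wrap exactly this pairing; the caller name
is illustrative.

static int submit_one_sketch(QemuDsaBatchTask *task)
{
    task->status = QEMU_DSA_TASK_PROCESSING;
    if (submit_wi_int(task->device->work_queue, &task->descriptors[0])) {
        return -1;                             /* ENQCMD retries exhausted */
    }
    /* A 0->1 queue transition signals task_queue_cond to wake a waiter. */
    return dsa_task_enqueue(task->group, task);
}

The consumer side, dsa_task_dequeue(), blocks on task_queue_cond while the
queue is empty and returns NULL once the device group stops running; the
dedicated completion thread added in the next patch is its only caller.
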
 include/qemu/dsa.h |  29 +++++++
 util/dsa.c         | 202 ++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 230 insertions(+), 1 deletion(-)

diff --git a/include/qemu/dsa.h b/include/qemu/dsa.h
index 501bb8c70d..f39533f4ac 100644
--- a/include/qemu/dsa.h
+++ b/include/qemu/dsa.h
@@ -27,6 +27,17 @@
 #include <linux/idxd.h>
 #include "x86intrin.h"
 
+typedef enum QemuDsaTaskType {
+    QEMU_DSA_TASK = 0,
+    QEMU_DSA_BATCH_TASK
+} QemuDsaTaskType;
+
+typedef enum QemuDsaTaskStatus {
+    QEMU_DSA_TASK_READY = 0,
+    QEMU_DSA_TASK_PROCESSING,
+    QEMU_DSA_TASK_COMPLETION
+} QemuDsaTaskStatus;
+
 typedef struct {
     void *work_queue;
 } QemuDsaDevice;
@@ -44,6 +55,24 @@ typedef struct {
     QemuDsaTaskQueue task_queue;
 } QemuDsaDeviceGroup;
 
+typedef void (*qemu_dsa_completion_fn)(void *);
+
+typedef struct QemuDsaBatchTask {
+    struct dsa_hw_desc batch_descriptor;
+    struct dsa_hw_desc *descriptors;
+    struct dsa_completion_record batch_completion __attribute__((aligned(32)));
+    struct dsa_completion_record *completions;
+    QemuDsaDeviceGroup *group;
+    QemuDsaDevice *device;
+    qemu_dsa_completion_fn completion_callback;
+    QemuSemaphore sem_task_complete;
+    QemuDsaTaskType task_type;
+    QemuDsaTaskStatus status;
+    int batch_size;
+    QSIMPLEQ_ENTRY(QemuDsaBatchTask) entry;
+} QemuDsaBatchTask;
+
+
 /**
  * @brief Initializes DSA devices.
  *
diff --git a/util/dsa.c b/util/dsa.c
index 54d0e20c29..79e305cb6e 100644
--- a/util/dsa.c
+++ b/util/dsa.c
@@ -31,6 +31,7 @@
 #include "x86intrin.h"
 
 #define DSA_WQ_PORTAL_SIZE 4096
+#define DSA_WQ_DEPTH 128
 #define MAX_DSA_DEVICES 16
 
 uint32_t max_retry_count;
@@ -212,6 +213,198 @@ dsa_device_group_get_next_device(QemuDsaDeviceGroup *group)
     return &group->dsa_devices[current];
 }
 
+/**
+ * @brief Empties out the DSA task queue.
+ *
+ * @param group A pointer to the DSA device group.
+ */
+static void
+dsa_empty_task_queue(QemuDsaDeviceGroup *group)
+{
+    qemu_mutex_lock(&group->task_queue_lock);
+    QemuDsaTaskQueue *task_queue = &group->task_queue;
+    while (!QSIMPLEQ_EMPTY(task_queue)) {
+        QSIMPLEQ_REMOVE_HEAD(task_queue, entry);
+    }
+    qemu_mutex_unlock(&group->task_queue_lock);
+}
+
+/**
+ * @brief Adds a task to the DSA task queue.
+ *
+ * @param group A pointer to the DSA device group.
+ * @param task A pointer to the DSA task to enqueue.
+ *
+ * @return int Zero if successful, otherwise a proper error code.
+ */
+static int
+dsa_task_enqueue(QemuDsaDeviceGroup *group,
+                 QemuDsaBatchTask *task)
+{
+    bool notify = false;
+
+    qemu_mutex_lock(&group->task_queue_lock);
+
+    if (!group->running) {
+        error_report("DSA: Tried to queue task to stopped device queue.");
+        qemu_mutex_unlock(&group->task_queue_lock);
+        return -1;
+    }
+
+    /* The queue is empty. This enqueue operation is a 0->1 transition. */
+    if (QSIMPLEQ_EMPTY(&group->task_queue)) {
+        notify = true;
+    }
+
+    QSIMPLEQ_INSERT_TAIL(&group->task_queue, task, entry);
+
+    /* We need to notify the waiter for 0->1 transitions. */
+    if (notify) {
+        qemu_cond_signal(&group->task_queue_cond);
+    }
+
+    qemu_mutex_unlock(&group->task_queue_lock);
+
+    return 0;
+}
+
+/**
+ * @brief Takes a DSA task out of the task queue.
+ *
+ * @param group A pointer to the DSA device group.
+ * @return QemuDsaBatchTask* The DSA task being dequeued.
+ */
+__attribute__((unused))
+static QemuDsaBatchTask *
+dsa_task_dequeue(QemuDsaDeviceGroup *group)
+{
+    QemuDsaBatchTask *task = NULL;
+
+    qemu_mutex_lock(&group->task_queue_lock);
+
+    while (true) {
+        if (!group->running) {
+            goto exit;
+        }
+        task = QSIMPLEQ_FIRST(&group->task_queue);
+        if (task != NULL) {
+            break;
+        }
+        qemu_cond_wait(&group->task_queue_cond, &group->task_queue_lock);
+    }
+
+    QSIMPLEQ_REMOVE_HEAD(&group->task_queue, entry);
+
+exit:
+    qemu_mutex_unlock(&group->task_queue_lock);
+    return task;
+}
+
+/**
+ * @brief Submits a DSA work item to the device work queue.
+ *
+ * @param wq A pointer to the DSA work queue's device memory.
+ * @param descriptor A pointer to the DSA work item descriptor.
+ *
+ * @return Zero if successful, non-zero otherwise.
+ */
+static int
+submit_wi_int(void *wq, struct dsa_hw_desc *descriptor)
+{
+    uint32_t retry = 0;
+
+    _mm_sfence();
+
+    while (true) {
+        if (_enqcmd(wq, descriptor) == 0) {
+            break;
+        }
+        retry++;
+        if (retry > max_retry_count) {
+            error_report("Submit work retry %u times.", retry);
+            return -1;
+        }
+    }
+
+    return 0;
+}
+
+/**
+ * @brief Synchronously submits a DSA work item to the
+ *        device work queue.
+ *
+ * @param wq A pointer to the DSA work queue's device memory.
+ * @param descriptor A pointer to the DSA work item descriptor.
+ *
+ * @return int Zero if successful, non-zero otherwise.
+ */
+__attribute__((unused))
+static int
+submit_wi(void *wq, struct dsa_hw_desc *descriptor)
+{
+    return submit_wi_int(wq, descriptor);
+}
+
+/**
+ * @brief Asynchronously submits a DSA work item to the
+ *        device work queue.
+ *
+ * @param task A pointer to the task.
+ *
+ * @return int Zero if successful, non-zero otherwise.
+ */
+__attribute__((unused))
+static int
+submit_wi_async(QemuDsaBatchTask *task)
+{
+    QemuDsaDeviceGroup *device_group = task->group;
+    QemuDsaDevice *device_instance = task->device;
+    int ret;
+
+    assert(task->task_type == QEMU_DSA_TASK);
+
+    task->status = QEMU_DSA_TASK_PROCESSING;
+
+    ret = submit_wi_int(device_instance->work_queue,
+                        &task->descriptors[0]);
+    if (ret != 0) {
+        return ret;
+    }
+
+    return dsa_task_enqueue(device_group, task);
+}
+
+/**
+ * @brief Asynchronously submits a DSA batch work item to the
+ *        device work queue.
+ *
+ * @param batch_task A pointer to the batch task.
+ *
+ * @return int Zero if successful, non-zero otherwise.
+ */
+__attribute__((unused))
+static int
+submit_batch_wi_async(QemuDsaBatchTask *batch_task)
+{
+    QemuDsaDeviceGroup *device_group = batch_task->group;
+    QemuDsaDevice *device_instance = batch_task->device;
+    int ret;
+
+    assert(batch_task->task_type == QEMU_DSA_BATCH_TASK);
+    assert(batch_task->batch_descriptor.desc_count <= batch_task->batch_size);
+    assert(batch_task->status == QEMU_DSA_TASK_READY);
+
+    batch_task->status = QEMU_DSA_TASK_PROCESSING;
+
+    ret = submit_wi_int(device_instance->work_queue,
+                        &batch_task->batch_descriptor);
+    if (ret != 0) {
+        return ret;
+    }
+
+    return dsa_task_enqueue(device_group, batch_task);
+}
+
 /**
  * @brief Check if DSA is running.
  *
@@ -225,7 +418,12 @@ bool qemu_dsa_is_running(void)
 static void
 dsa_globals_init(void)
 {
-    max_retry_count = UINT32_MAX;
+    /*
+     * This value follows a reference example by Intel. The POLL_RETRY_MAX is
+     * defined as 10000, so here we use the max WQ depth * 100 as the max
+     * polling retry count.
+     */
+    max_retry_count = DSA_WQ_DEPTH * 100;
 }
 
 /**
@@ -268,6 +466,8 @@ void qemu_dsa_stop(void)
     if (!group->running) {
         return;
     }
+
+    dsa_empty_task_queue(group);
 }
 
 /**
-- 
Yichen Wang




* [PATCH v6 05/12] util/dsa: Implement DSA task asynchronous completion thread model.
From: Yichen Wang @ 2024-10-09 23:46 UTC
  To: Dr. David Alan Gilbert, Paolo Bonzini, Marc-André Lureau,
	Daniel P. Berrangé, Philippe Mathieu-Daudé, Peter Xu,
	Fabiano Rosas, Eric Blake, Markus Armbruster, Michael S. Tsirkin,
	Cornelia Huck, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Shivam Kumar, Ho-Ren (Jack) Chuang,
	Yichen Wang

From: Hao Xiang <hao.xiang@linux.dev>

* Create a dedicated thread for DSA task completion.
* The DSA completion thread runs a loop and polls for completed tasks.
* Start and stop the DSA completion thread during DSA device start/stop.

A user space application can directly submit tasks to the Intel DSA
accelerator by writing to DSA's device memory (mapped in user space).
Once a task is submitted, the device starts processing it and writes
the completion status back to the task. A user space application can
poll the task's completion status to check for completion. This change
uses a dedicated thread to perform DSA task completion checking.

Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
---
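A hedged sketch of how a submitter rendezvouses with this completion
thread: the completion callback stored in the task runs on the
dsa_completion thread and posts the task's semaphore, unblocking the
sender. The function names here are illustrative; the real callback is
wired up in the zero page checking patches later in the series.

static void example_completion_cb(void *opaque)
{
    QemuDsaBatchTask *task = opaque;

    /* Runs on the dsa_completion thread, after results[] is filled in. */
    qemu_sem_post(&task->sem_task_complete);
}

static void example_submit_and_wait(QemuDsaBatchTask *task)
{
    task->completion_callback = example_completion_cb;
    /* ... submit_batch_wi_async(task) ... */
    qemu_sem_wait(&task->sem_task_complete);   /* sender blocks here */
    /* task->results[] can now be consumed. */
}
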
 include/qemu/dsa.h |   1 +
 util/dsa.c         | 274 ++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 274 insertions(+), 1 deletion(-)

diff --git a/include/qemu/dsa.h b/include/qemu/dsa.h
index f39533f4ac..7b30303791 100644
--- a/include/qemu/dsa.h
+++ b/include/qemu/dsa.h
@@ -69,6 +69,7 @@ typedef struct QemuDsaBatchTask {
     QemuDsaTaskType task_type;
     QemuDsaTaskStatus status;
     int batch_size;
+    bool *results;
     QSIMPLEQ_ENTRY(QemuDsaBatchTask) entry;
 } QemuDsaBatchTask;
 
diff --git a/util/dsa.c b/util/dsa.c
index 79e305cb6e..56828bec21 100644
--- a/util/dsa.c
+++ b/util/dsa.c
@@ -33,9 +33,20 @@
 #define DSA_WQ_PORTAL_SIZE 4096
 #define DSA_WQ_DEPTH 128
 #define MAX_DSA_DEVICES 16
+#define DSA_COMPLETION_THREAD "qemu_dsa_completion"
+
+typedef struct {
+    bool stopping;
+    bool running;
+    QemuThread thread;
+    int thread_id;
+    QemuSemaphore sem_init_done;
+    QemuDsaDeviceGroup *group;
+} QemuDsaCompletionThread;
 
 uint32_t max_retry_count;
 static QemuDsaDeviceGroup dsa_group;
+static QemuDsaCompletionThread completion_thread;
 
 
 /**
@@ -405,6 +416,265 @@ submit_batch_wi_async(QemuDsaBatchTask *batch_task)
     return dsa_task_enqueue(device_group, batch_task);
 }
 
+/**
+ * @brief Poll for the DSA work item completion.
+ *
+ * @param completion A pointer to the DSA work item completion record.
+ * @param opcode The DSA opcode.
+ *
+ * @return Zero if successful, non-zero otherwise.
+ */
+static int
+poll_completion(struct dsa_completion_record *completion,
+                enum dsa_opcode opcode)
+{
+    uint8_t status;
+    uint64_t retry = 0;
+
+    while (true) {
+        /* The DSA operation completes successfully or fails. */
+        status = completion->status;
+        if (status == DSA_COMP_SUCCESS ||
+            status == DSA_COMP_PAGE_FAULT_NOBOF ||
+            status == DSA_COMP_BATCH_PAGE_FAULT ||
+            status == DSA_COMP_BATCH_FAIL) {
+            break;
+        } else if (status != DSA_COMP_NONE) {
+            error_report("DSA opcode %d failed with status = %d.",
+                    opcode, status);
+            return 1;
+        }
+        retry++;
+        if (retry > max_retry_count) {
+            error_report("DSA wait for completion retry %lu times.", retry);
+            return 1;
+        }
+        _mm_pause();
+    }
+
+    return 0;
+}
+
+/**
+ * @brief Complete a single DSA task in the batch task.
+ *
+ * @param task A pointer to the batch task structure.
+ *
+ * @return Zero if successful, otherwise non-zero.
+ */
+static int
+poll_task_completion(QemuDsaBatchTask *task)
+{
+    assert(task->task_type == QEMU_DSA_TASK);
+
+    struct dsa_completion_record *completion = &task->completions[0];
+    uint8_t status;
+    int ret;
+
+    ret = poll_completion(completion, task->descriptors[0].opcode);
+    if (ret != 0) {
+        goto exit;
+    }
+
+    status = completion->status;
+    if (status == DSA_COMP_SUCCESS) {
+        task->results[0] = (completion->result == 0);
+        goto exit;
+    }
+
+    assert(status == DSA_COMP_PAGE_FAULT_NOBOF);
+
+exit:
+    return ret;
+}
+
+/**
+ * @brief Poll a batch task status until it completes. If DSA task doesn't
+ *        complete properly, use CPU to complete the task.
+ *
+ * @param batch_task A pointer to the DSA batch task.
+ *
+ * @return Zero if successful, otherwise non-zero.
+ */
+static int
+poll_batch_task_completion(QemuDsaBatchTask *batch_task)
+{
+    struct dsa_completion_record *batch_completion =
+        &batch_task->batch_completion;
+    struct dsa_completion_record *completion;
+    uint8_t batch_status;
+    uint8_t status;
+    bool *results = batch_task->results;
+    uint32_t count = batch_task->batch_descriptor.desc_count;
+    int ret;
+
+    ret = poll_completion(batch_completion,
+                          batch_task->batch_descriptor.opcode);
+    if (ret != 0) {
+        goto exit;
+    }
+
+    batch_status = batch_completion->status;
+
+    if (batch_status == DSA_COMP_SUCCESS) {
+        if (batch_completion->bytes_completed == count) {
+            /*
+             * Let's skip checking each descriptor's completion status
+             * if the batch descriptor says all succeeded.
+             */
+            for (int i = 0; i < count; i++) {
+                assert(batch_task->completions[i].status == DSA_COMP_SUCCESS);
+                results[i] = (batch_task->completions[i].result == 0);
+            }
+            goto exit;
+        }
+    } else {
+        assert(batch_status == DSA_COMP_BATCH_FAIL ||
+            batch_status == DSA_COMP_BATCH_PAGE_FAULT);
+    }
+
+    for (int i = 0; i < count; i++) {
+
+        completion = &batch_task->completions[i];
+        status = completion->status;
+
+        if (status == DSA_COMP_SUCCESS) {
+            results[i] = (completion->result == 0);
+            continue;
+        }
+
+        assert(status == DSA_COMP_PAGE_FAULT_NOBOF);
+
+        if (status != DSA_COMP_PAGE_FAULT_NOBOF) {
+            error_report("Unexpected DSA completion status = %u.", status);
+            ret = 1;
+            goto exit;
+        }
+    }
+
+exit:
+    return ret;
+}
+
+/**
+ * @brief Handles an asynchronous DSA batch task completion.
+ *
+ * @param task A pointer to the batch buffer zero task structure.
+ */
+static void
+dsa_batch_task_complete(QemuDsaBatchTask *batch_task)
+{
+    batch_task->status = QEMU_DSA_TASK_COMPLETION;
+    batch_task->completion_callback(batch_task);
+}
+
+/**
+ * @brief The function entry point called by a dedicated DSA
+ *        work item completion thread.
+ *
+ * @param opaque A pointer to the thread context.
+ *
+ * @return void* Not used.
+ */
+static void *
+dsa_completion_loop(void *opaque)
+{
+    QemuDsaCompletionThread *thread_context =
+        (QemuDsaCompletionThread *)opaque;
+    QemuDsaBatchTask *batch_task;
+    QemuDsaDeviceGroup *group = thread_context->group;
+    int ret = 0;
+
+    rcu_register_thread();
+
+    thread_context->thread_id = qemu_get_thread_id();
+    qemu_sem_post(&thread_context->sem_init_done);
+
+    while (thread_context->running) {
+        batch_task = dsa_task_dequeue(group);
+        assert(batch_task != NULL || !group->running);
+        if (!group->running) {
+            assert(!thread_context->running);
+            break;
+        }
+        if (batch_task->task_type == QEMU_DSA_TASK) {
+            ret = poll_task_completion(batch_task);
+        } else {
+            assert(batch_task->task_type == QEMU_DSA_BATCH_TASK);
+            ret = poll_batch_task_completion(batch_task);
+        }
+
+        if (ret != 0) {
+            goto exit;
+        }
+
+        dsa_batch_task_complete(batch_task);
+    }
+
+exit:
+    if (ret != 0) {
+        error_report("DSA completion thread exited due to internal error.");
+    }
+    rcu_unregister_thread();
+    return NULL;
+}
+
+/**
+ * @brief Initializes a DSA completion thread.
+ *
+ * @param completion_thread A pointer to the completion thread context.
+ * @param group A pointer to the DSA device group.
+ */
+static void
+dsa_completion_thread_init(
+    QemuDsaCompletionThread *completion_thread,
+    QemuDsaDeviceGroup *group)
+{
+    completion_thread->stopping = false;
+    completion_thread->running = true;
+    completion_thread->thread_id = -1;
+    qemu_sem_init(&completion_thread->sem_init_done, 0);
+    completion_thread->group = group;
+
+    qemu_thread_create(&completion_thread->thread,
+                       DSA_COMPLETION_THREAD,
+                       dsa_completion_loop,
+                       completion_thread,
+                       QEMU_THREAD_JOINABLE);
+
+    /* Wait for initialization to complete */
+    qemu_sem_wait(&completion_thread->sem_init_done);
+}
+
+/**
+ * @brief Stops the completion thread (and implicitly, the device group).
+ *
+ * @param opaque A pointer to the completion thread.
+ */
+static void dsa_completion_thread_stop(void *opaque)
+{
+    QemuDsaCompletionThread *thread_context =
+        (QemuDsaCompletionThread *)opaque;
+
+    QemuDsaDeviceGroup *group = thread_context->group;
+
+    qemu_mutex_lock(&group->task_queue_lock);
+
+    thread_context->stopping = true;
+    thread_context->running = false;
+
+    /* Prevent the compiler from setting group->running first. */
+    barrier();
+    dsa_device_group_stop(group);
+
+    qemu_cond_signal(&group->task_queue_cond);
+    qemu_mutex_unlock(&group->task_queue_lock);
+
+    qemu_thread_join(&thread_context->thread);
+
+    qemu_sem_destroy(&thread_context->sem_init_done);
+}
+
 /**
  * @brief Check if DSA is running.
  *
@@ -412,7 +682,7 @@ submit_batch_wi_async(QemuDsaBatchTask *batch_task)
  */
 bool qemu_dsa_is_running(void)
 {
-    return false;
+    return completion_thread.running;
 }
 
 static void
@@ -453,6 +723,7 @@ void qemu_dsa_start(void)
         return;
     }
     dsa_device_group_start(&dsa_group);
+    dsa_completion_thread_init(&completion_thread, &dsa_group);
 }
 
 /**
@@ -467,6 +738,7 @@ void qemu_dsa_stop(void)
         return;
     }
 
+    dsa_completion_thread_stop(&completion_thread);
     dsa_empty_task_queue(group);
 }
 
-- 
Yichen Wang




* [PATCH v6 06/12] util/dsa: Implement zero page checking in DSA task.
From: Yichen Wang @ 2024-10-09 23:46 UTC
  To: Dr. David Alan Gilbert, Paolo Bonzini, Marc-André Lureau,
	Daniel P. Berrangé, Philippe Mathieu-Daudé, Peter Xu,
	Fabiano Rosas, Eric Blake, Markus Armbruster, Michael S. Tsirkin,
	Cornelia Huck, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Shivam Kumar, Ho-Ren (Jack) Chuang,
	Yichen Wang, Bryan Zhang

From: Hao Xiang <hao.xiang@linux.dev>

Create DSA tasks with operation code DSA_OPCODE_COMPVAL.
Here we create two types of DSA tasks: a single DSA task and
a batch DSA task. A batch DSA task reduces task submission overhead
and hence should be the default option. However, due to the way the DSA
hardware works, a DSA batch task must contain at least two individual
tasks. There are times we need to submit a single task, and hence
single DSA task submission is also required.

Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
Signed-off-by: Bryan Zhang <bryan.zhang@bytedance.com>
Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
---
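A hedged usage sketch of the API added here: allocate a batch task sized
for the packet, run one synchronous zero page check over an array of page
pointers, then tear the task down. The call assumes the errp-taking
signature implied by the fallback stub below; error handling is
abbreviated and the function name is illustrative.

#include "qemu/dsa.h"
#include "qapi/error.h"

static void check_pages_sketch(const void **pages, size_t n, size_t page_size)
{
    Error *err = NULL;
    QemuDsaBatchTask *task = buffer_zero_batch_task_init(n);

    if (buffer_is_zero_dsa_batch_sync(task, pages, n, page_size, &err)) {
        error_report_err(err);        /* e.g. fall back to the CPU check */
    } else {
        /* task->results[i] is true where pages[i] was entirely zero. */
    }
    buffer_zero_batch_task_destroy(task);
}
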
 include/qemu/dsa.h |  45 ++++++--
 util/dsa.c         | 254 +++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 270 insertions(+), 29 deletions(-)

diff --git a/include/qemu/dsa.h b/include/qemu/dsa.h
index 7b30303791..89841a6ffa 100644
--- a/include/qemu/dsa.h
+++ b/include/qemu/dsa.h
@@ -16,6 +16,7 @@
 #define QEMU_DSA_H
 
 #include "qemu/error-report.h"
+#include "exec/cpu-common.h"
 #include "qemu/thread.h"
 #include "qemu/queue.h"
 
@@ -70,10 +71,11 @@ typedef struct QemuDsaBatchTask {
     QemuDsaTaskStatus status;
     int batch_size;
     bool *results;
+    /* Address of each pages in pages */
+    ram_addr_t *addr;
     QSIMPLEQ_ENTRY(QemuDsaBatchTask) entry;
 } QemuDsaBatchTask;
 
-
 /**
  * @brief Initializes DSA devices.
  *
@@ -105,8 +107,26 @@ void qemu_dsa_cleanup(void);
  */
 bool qemu_dsa_is_running(void);
 
+/**
+ * @brief Initializes a buffer zero DSA batch task.
+ *
+ * @param batch_size The number of zero page checking tasks in the batch.
+ * @return A pointer to the zero page checking tasks initialized.
+ */
+QemuDsaBatchTask *
+buffer_zero_batch_task_init(int batch_size);
+
+/**
+ * @brief Performs the proper cleanup on a DSA batch task.
+ *
+ * @param task A pointer to the batch task to cleanup.
+ */
+void buffer_zero_batch_task_destroy(QemuDsaBatchTask *task);
+
 #else
 
+typedef struct QemuDsaBatchTask {} QemuDsaBatchTask;
+
 static inline bool qemu_dsa_is_running(void)
 {
     return false;
@@ -114,19 +134,28 @@ static inline bool qemu_dsa_is_running(void)
 
 static inline int qemu_dsa_init(const strList *dsa_parameter, Error **errp)
 {
-    if (dsa_parameter != NULL) {
-        error_setg(errp, "DSA is not supported.");
-        return -1;
-    }
-
-    return 0;
+    error_setg(errp, "DSA accelerator is not enabled.");
+    return -1;
 }
 
 static inline void qemu_dsa_start(void) {}
 
 static inline void qemu_dsa_stop(void) {}
 
-static inline void qemu_dsa_cleanup(void) {}
+static inline QemuDsaBatchTask *buffer_zero_batch_task_init(int batch_size)
+{
+    return NULL;
+}
+
+static inline void buffer_zero_batch_task_destroy(QemuDsaBatchTask *task) {}
+
+static inline int
+buffer_is_zero_dsa_batch_sync(QemuDsaBatchTask *batch_task,
+                              const void **buf, size_t count, size_t len, Error **errp)
+{
+    error_setg(errp, "DSA accelerator is not enabled.");
+    return -1;
+}
 
 #endif
 
diff --git a/util/dsa.c b/util/dsa.c
index 56828bec21..d9c066ff00 100644
--- a/util/dsa.c
+++ b/util/dsa.c
@@ -48,6 +48,7 @@ uint32_t max_retry_count;
 static QemuDsaDeviceGroup dsa_group;
 static QemuDsaCompletionThread completion_thread;
 
+static void buffer_zero_dsa_completion(void *context);
 
 /**
  * @brief This function opens a DSA device's work queue and
@@ -176,7 +177,6 @@ dsa_device_group_start(QemuDsaDeviceGroup *group)
  *
  * @param group A pointer to the DSA device group.
  */
-__attribute__((unused))
 static void
 dsa_device_group_stop(QemuDsaDeviceGroup *group)
 {
@@ -212,7 +212,6 @@ dsa_device_group_cleanup(QemuDsaDeviceGroup *group)
  * @return struct QemuDsaDevice* A pointer to the next available DSA device
  *         in the group.
  */
-__attribute__((unused))
 static QemuDsaDevice *
 dsa_device_group_get_next_device(QemuDsaDeviceGroup *group)
 {
@@ -285,7 +284,6 @@ dsa_task_enqueue(QemuDsaDeviceGroup *group,
  * @param group A pointer to the DSA device group.
  * @return QemuDsaBatchTask* The DSA task being dequeued.
  */
-__attribute__((unused))
 static QemuDsaBatchTask *
 dsa_task_dequeue(QemuDsaDeviceGroup *group)
 {
@@ -340,22 +338,6 @@ submit_wi_int(void *wq, struct dsa_hw_desc *descriptor)
     return 0;
 }
 
-/**
- * @brief Synchronously submits a DSA work item to the
- *        device work queue.
- *
- * @param wq A pointer to the DSA work queue's device memory.
- * @param descriptor A pointer to the DSA work item descriptor.
- *
- * @return int Zero if successful, non-zero otherwise.
- */
-__attribute__((unused))
-static int
-submit_wi(void *wq, struct dsa_hw_desc *descriptor)
-{
-    return submit_wi_int(wq, descriptor);
-}
-
 /**
  * @brief Asynchronously submits a DSA work item to the
  *        device work queue.
@@ -364,7 +346,6 @@ submit_wi(void *wq, struct dsa_hw_desc *descriptor)
  *
  * @return int Zero if successful, non-zero otherwise.
  */
-__attribute__((unused))
 static int
 submit_wi_async(QemuDsaBatchTask *task)
 {
@@ -393,7 +374,6 @@ submit_wi_async(QemuDsaBatchTask *task)
  *
  * @return int Zero if successful, non-zero otherwise.
  */
-__attribute__((unused))
 static int
 submit_batch_wi_async(QemuDsaBatchTask *batch_task)
 {
@@ -752,3 +732,235 @@ void qemu_dsa_cleanup(void)
     dsa_device_group_cleanup(&dsa_group);
 }
 
+
+/* Buffer zero comparison DSA task implementations */
+/* =============================================== */
+
+/**
+ * @brief Sets a buffer zero comparison DSA task.
+ *
+ * @param descriptor A pointer to the DSA task descriptor.
+ * @param buf A pointer to the memory buffer.
+ * @param len The length of the buffer.
+ */
+static void
+buffer_zero_task_set_int(struct dsa_hw_desc *descriptor,
+                         const void *buf,
+                         size_t len)
+{
+    struct dsa_completion_record *completion =
+        (struct dsa_completion_record *)descriptor->completion_addr;
+
+    descriptor->xfer_size = len;
+    descriptor->src_addr = (uintptr_t)buf;
+    completion->status = 0;
+    completion->result = 0;
+}
+
+/**
+ * @brief Resets a single buffer zero comparison DSA task.
+ *
+ * @param task A pointer to the DSA batch task.
+ */
+static void
+buffer_zero_task_reset(QemuDsaBatchTask *task)
+{
+    task->completions[0].status = DSA_COMP_NONE;
+    task->task_type = QEMU_DSA_TASK;
+    task->status = QEMU_DSA_TASK_READY;
+}
+
+/**
+ * @brief Resets a buffer zero comparison DSA batch task.
+ *
+ * @param task A pointer to the batch task.
+ * @param count The number of DSA tasks this batch task will contain.
+ */
+static void
+buffer_zero_batch_task_reset(QemuDsaBatchTask *task, size_t count)
+{
+    task->batch_completion.status = DSA_COMP_NONE;
+    task->batch_descriptor.desc_count = count;
+    task->task_type = QEMU_DSA_BATCH_TASK;
+    task->status = QEMU_DSA_TASK_READY;
+}
+
+/**
+ * @brief Sets a buffer zero comparison DSA task.
+ *
+ * @param task A pointer to the DSA task.
+ * @param buf A pointer to the memory buffer.
+ * @param len The buffer length.
+ */
+static void
+buffer_zero_task_set(QemuDsaBatchTask *task,
+                     const void *buf,
+                     size_t len)
+{
+    buffer_zero_task_reset(task);
+    buffer_zero_task_set_int(&task->descriptors[0], buf, len);
+}
+
+/**
+ * @brief Sets a buffer zero comparison batch task.
+ *
+ * @param batch_task A pointer to the batch task.
+ * @param buf An array of memory buffers.
+ * @param count The number of buffers in the array.
+ * @param len The length of the buffers.
+ */
+static void
+buffer_zero_batch_task_set(QemuDsaBatchTask *batch_task,
+                           const void **buf, size_t count, size_t len)
+{
+    assert(count > 0);
+    assert(count <= batch_task->batch_size);
+
+    buffer_zero_batch_task_reset(batch_task, count);
+    for (int i = 0; i < count; i++) {
+        buffer_zero_task_set_int(&batch_task->descriptors[i], buf[i], len);
+    }
+}
+
+/**
+ * @brief Asynchronously performs a buffer zero DSA operation.
+ *
+ * @param task A pointer to the batch task structure.
+ * @param buf A pointer to the memory buffer.
+ * @param len The length of the memory buffer.
+ *
+ * @return int Zero if successful, otherwise an appropriate error code.
+ */
+__attribute__((unused))
+static int
+buffer_zero_dsa_async(QemuDsaBatchTask *task,
+                      const void *buf, size_t len)
+{
+    buffer_zero_task_set(task, buf, len);
+
+    return submit_wi_async(task);
+}
+
+/**
+ * @brief Asynchronously submits a memory comparison batch task to a
+ *        DSA device.
+ *
+ * @param batch_task The batch task to be submitted to the DSA device.
+ * @param buf An array of memory buffers to check for zero.
+ * @param count The number of buffers.
+ * @param len The buffer length.
+ *
+ * @return int Zero if successful, otherwise non-zero.
+ */
+__attribute__((unused))
+static int
+buffer_zero_dsa_batch_async(QemuDsaBatchTask *batch_task,
+                            const void **buf, size_t count, size_t len)
+{
+    assert(count <= batch_task->batch_size);
+    buffer_zero_batch_task_set(batch_task, buf, count, len);
+
+    return submit_batch_wi_async(batch_task);
+}
+
+/**
+ * @brief The completion callback function for buffer zero
+ *        comparison DSA task completion.
+ *
+ * @param context A pointer to the callback context.
+ */
+static void
+buffer_zero_dsa_completion(void *context)
+{
+    assert(context != NULL);
+
+    QemuDsaBatchTask *task = (QemuDsaBatchTask *)context;
+    qemu_sem_post(&task->sem_task_complete);
+}
+
+/**
+ * @brief Wait for the asynchronous DSA task to complete.
+ *
+ * @param batch_task A pointer to the buffer zero comparison batch task.
+ */
+__attribute__((unused))
+static void
+buffer_zero_dsa_wait(QemuDsaBatchTask *batch_task)
+{
+    qemu_sem_wait(&batch_task->sem_task_complete);
+}
+
+/**
+ * @brief Initializes a buffer zero comparison DSA task.
+ *
+ * @param descriptor A pointer to the DSA task descriptor.
+ * @param completion A pointer to the DSA task completion record.
+ */
+static void
+buffer_zero_task_init_int(struct dsa_hw_desc *descriptor,
+                          struct dsa_completion_record *completion)
+{
+    descriptor->opcode = DSA_OPCODE_COMPVAL;
+    descriptor->flags = IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CRAV;
+    descriptor->comp_pattern = (uint64_t)0;
+    descriptor->completion_addr = (uint64_t)completion;
+}
+
+/**
+ * @brief Initializes a buffer zero DSA batch task.
+ *
+ * @param batch_size The number of zero page checking tasks in the batch.
+ * @return A pointer to the initialized batch task.
+ */
+QemuDsaBatchTask *
+buffer_zero_batch_task_init(int batch_size)
+{
+    QemuDsaBatchTask *task = qemu_memalign(64, sizeof(QemuDsaBatchTask));
+    int descriptors_size = sizeof(*task->descriptors) * batch_size;
+
+    memset(task, 0, sizeof(*task));
+    task->addr = g_new0(ram_addr_t, batch_size);
+    task->results = g_new0(bool, batch_size);
+    task->batch_size = batch_size;
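+    /*
+     * Alignment note (per the DSA spec): the batch descriptor list must
+     * be 64-byte aligned and completion records 32-byte aligned, hence
+     * the qemu_memalign() calls below.
+     */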
+    task->descriptors =
+        (struct dsa_hw_desc *)qemu_memalign(64, descriptors_size);
+    memset(task->descriptors, 0, descriptors_size);
+    task->completions = (struct dsa_completion_record *)qemu_memalign(
+        32, sizeof(*task->completions) * batch_size);
+
+    task->batch_completion.status = DSA_COMP_NONE;
+    task->batch_descriptor.completion_addr = (uint64_t)&task->batch_completion;
+    /* TODO: Ensure that we never send a batch with count <= 1 */
+    task->batch_descriptor.desc_count = 0;
+    task->batch_descriptor.opcode = DSA_OPCODE_BATCH;
+    task->batch_descriptor.flags = IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CRAV;
+    task->batch_descriptor.desc_list_addr = (uintptr_t)task->descriptors;
+    task->status = QEMU_DSA_TASK_READY;
+    task->group = &dsa_group;
+    task->device = dsa_device_group_get_next_device(&dsa_group);
+
+    for (int i = 0; i < task->batch_size; i++) {
+        buffer_zero_task_init_int(&task->descriptors[i],
+                                  &task->completions[i]);
+    }
+
+    qemu_sem_init(&task->sem_task_complete, 0);
+    task->completion_callback = buffer_zero_dsa_completion;
+
+    return task;
+}
+
+/**
+ * @brief Performs the proper cleanup on a DSA batch task.
+ *
+ * @param task A pointer to the batch task to cleanup.
+ */
+void
+buffer_zero_batch_task_destroy(QemuDsaBatchTask *task)
+{
+    g_free(task->addr);
+    g_free(task->results);
+    qemu_vfree(task->descriptors);
+    qemu_vfree(task->completions);
+    task->results = NULL;
+    qemu_sem_destroy(&task->sem_task_complete);
+    qemu_vfree(task);
+}
-- 
Yichen Wang



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v6 07/12] util/dsa: Implement DSA task asynchronous submission and wait for completion.
  2024-10-09 23:45 [PATCH v6 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration Yichen Wang
                   ` (5 preceding siblings ...)
  2024-10-09 23:46 ` [PATCH v6 06/12] util/dsa: Implement zero page checking in DSA task Yichen Wang
@ 2024-10-09 23:46 ` Yichen Wang
  2024-10-09 23:46 ` [PATCH v6 08/12] migration/multifd: Add new migration option for multifd DSA offloading Yichen Wang
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Yichen Wang @ 2024-10-09 23:46 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Paolo Bonzini, Marc-André Lureau,
	Daniel P. Berrangé, Philippe Mathieu-Daudé, Peter Xu,
	Fabiano Rosas, Eric Blake, Markus Armbruster, Michael S. Tsirkin,
	Cornelia Huck, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Shivam Kumar, Ho-Ren (Jack) Chuang,
	Yichen Wang, Bryan Zhang

From: Hao Xiang <hao.xiang@linux.dev>

* Add a DSA task completion callback.
* The DSA completion thread will call the task's completion callback
on every task/batch task completion.
* Make the DSA submission path wait for completion.
* Implement CPU fallback if DSA is not able to complete the task.
A usage sketch of the resulting API follows.
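A minimal sketch of how a caller drives the synchronous API added
here (not part of the patch; error handling elided, and the helper
name is made up for illustration):

    #include "qemu/dsa.h"

    /* Check a batch of equally sized buffers for zero content. */
    static bool all_buffers_zero(const void **bufs, int n, size_t len)
    {
        QemuDsaBatchTask *task = buffer_zero_batch_task_init(n);
        bool all_zero = true;

        /* Submits to DSA, waits, and falls back to the CPU on error. */
        if (buffer_is_zero_dsa_batch_sync(task, bufs, n, len) == 0) {
            for (int i = 0; i < n; i++) {
                all_zero &= task->results[i];
            }
        }
        buffer_zero_batch_task_destroy(task);
        return all_zero;
    }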

Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
Signed-off-by: Bryan Zhang <bryan.zhang@bytedance.com>
Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
---
 include/qemu/dsa.h |  14 +++++
 util/dsa.c         | 125 +++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 135 insertions(+), 4 deletions(-)

diff --git a/include/qemu/dsa.h b/include/qemu/dsa.h
index 89841a6ffa..a3b502ee41 100644
--- a/include/qemu/dsa.h
+++ b/include/qemu/dsa.h
@@ -123,6 +123,20 @@ buffer_zero_batch_task_init(int batch_size);
  */
 void buffer_zero_batch_task_destroy(QemuDsaBatchTask *task);
 
+/**
+ * @brief Performs buffer zero comparison on a DSA batch task synchronously.
+ *
+ * @param batch_task A pointer to the batch task.
+ * @param buf An array of memory buffers.
+ * @param count The number of buffers in the array.
+ * @param len The buffer length.
+ *
+ * @return Zero if successful, otherwise non-zero.
+ */
+int
+buffer_is_zero_dsa_batch_sync(QemuDsaBatchTask *batch_task,
+                              const void **buf, size_t count, size_t len);
+
 #else
 
 typedef struct QemuDsaBatchTask {} QemuDsaBatchTask;
diff --git a/util/dsa.c b/util/dsa.c
index d9c066ff00..cbaa47c360 100644
--- a/util/dsa.c
+++ b/util/dsa.c
@@ -435,6 +435,42 @@ poll_completion(struct dsa_completion_record *completion,
     return 0;
 }
 
+/**
+ * @brief Helper function to use CPU to complete a single
+ *        zero page checking task.
+ *
+ * @param completion A pointer to a DSA task completion record.
+ * @param descriptor A pointer to a DSA task descriptor.
+ * @param result A pointer to the result of a zero page checking.
+ */
+static void
+task_cpu_fallback_int(struct dsa_completion_record *completion,
+                      struct dsa_hw_desc *descriptor, bool *result)
+{
+    const uint8_t *buf;
+    size_t len;
+
+    if (completion->status == DSA_COMP_SUCCESS) {
+        return;
+    }
+
+    /*
+     * DSA was able to partially complete the operation. Check the
+     * result. If we already know this is not a zero page, we can
+     * return now.
+     */
+    if (completion->bytes_completed != 0 && completion->result != 0) {
+        *result = false;
+        return;
+    }
+
+    /* Let's fallback to use CPU to complete it. */
+    buf = (const uint8_t *)descriptor->src_addr;
+    len = descriptor->xfer_size;
+    *result = buffer_is_zero(buf + completion->bytes_completed,
+                             len - completion->bytes_completed);
+}
+
 /**
  * @brief Complete a single DSA task in the batch task.
  *
@@ -563,7 +599,7 @@ dsa_completion_loop(void *opaque)
         (QemuDsaCompletionThread *)opaque;
     QemuDsaBatchTask *batch_task;
     QemuDsaDeviceGroup *group = thread_context->group;
-    int ret;
+    int ret = 0;
 
     rcu_register_thread();
 
@@ -831,7 +867,6 @@ buffer_zero_batch_task_set(QemuDsaBatchTask *batch_task,
  *
  * @return int Zero if successful, otherwise an appropriate error code.
  */
-__attribute__((unused))
 static int
 buffer_zero_dsa_async(QemuDsaBatchTask *task,
                       const void *buf, size_t len)
@@ -850,7 +885,6 @@ buffer_zero_dsa_async(QemuDsaBatchTask *task,
  * @param count The number of buffers.
  * @param len The buffer length.
  */
-__attribute__((unused))
 static int
 buffer_zero_dsa_batch_async(QemuDsaBatchTask *batch_task,
                             const void **buf, size_t count, size_t len)
@@ -881,13 +915,61 @@ buffer_zero_dsa_completion(void *context)
  *
  * @param batch_task A pointer to the buffer zero comparison batch task.
  */
-__attribute__((unused))
 static void
 buffer_zero_dsa_wait(QemuDsaBatchTask *batch_task)
 {
     qemu_sem_wait(&batch_task->sem_task_complete);
 }
 
+/**
+ * @brief Use CPU to complete the zero page checking task if DSA
+ *        is not able to complete it.
+ *
+ * @param batch_task A pointer to the batch task.
+ */
+static void
+buffer_zero_cpu_fallback(QemuDsaBatchTask *batch_task)
+{
+    if (batch_task->task_type == QEMU_DSA_TASK) {
+        if (batch_task->completions[0].status == DSA_COMP_SUCCESS) {
+            return;
+        }
+        task_cpu_fallback_int(&batch_task->completions[0],
+                              &batch_task->descriptors[0],
+                              &batch_task->results[0]);
+    } else if (batch_task->task_type == QEMU_DSA_BATCH_TASK) {
+        struct dsa_completion_record *batch_completion =
+            &batch_task->batch_completion;
+        struct dsa_completion_record *completion;
+        uint8_t status;
+        bool *results = batch_task->results;
+        uint32_t count = batch_task->batch_descriptor.desc_count;
+
+        /* DSA is able to complete the entire batch task. */
+        if (batch_completion->status == DSA_COMP_SUCCESS) {
+            assert(count == batch_completion->bytes_completed);
+            return;
+        }
+
+        /*
+         * DSA encounters some error and is not able to complete
+         * the entire batch task. Use CPU fallback.
+         */
+        for (int i = 0; i < count; i++) {
+
+            completion = &batch_task->completions[i];
+            status = completion->status;
+
+            assert(status == DSA_COMP_SUCCESS ||
+                status == DSA_COMP_PAGE_FAULT_NOBOF);
+
+            task_cpu_fallback_int(completion,
+                                  &batch_task->descriptors[i],
+                                  &results[i]);
+        }
+    }
+}
+
 /**
  * @brief Initializes a buffer zero comparison DSA task.
  *
@@ -964,3 +1046,38 @@ buffer_zero_batch_task_destroy(QemuDsaBatchTask *task)
     qemu_sem_destroy(&task->sem_task_complete);
     qemu_vfree(task);
 }
+
+/**
+ * @brief Performs buffer zero comparison on a DSA batch task synchronously.
+ *
+ * @param batch_task A pointer to the batch task.
+ * @param buf An array of memory buffers.
+ * @param count The number of buffers in the array.
+ * @param len The buffer length.
+ *
+ * @return Zero if successful, otherwise non-zero.
+ */
+int
+buffer_is_zero_dsa_batch_sync(QemuDsaBatchTask *batch_task,
+                              const void **buf, size_t count, size_t len)
+{
+    assert(batch_task != NULL);
+    assert(len != 0);
+    assert(buf != NULL);
+
+    /* count is unsigned; only zero and oversized batches are invalid. */
+    if (count == 0 || count > batch_task->batch_size) {
+        return -1;
+    }
+
+    if (count == 1) {
+        /* DSA doesn't accept a batch operation with only one task. */
+        buffer_zero_dsa_async(batch_task, buf[0], len);
+    } else {
+        buffer_zero_dsa_batch_async(batch_task, buf, count, len);
+    }
+
+    buffer_zero_dsa_wait(batch_task);
+    buffer_zero_cpu_fallback(batch_task);
+
+    return 0;
+}
-- 
Yichen Wang



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v6 08/12] migration/multifd: Add new migration option for multifd DSA offloading.
  2024-10-09 23:45 [PATCH v6 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration Yichen Wang
                   ` (6 preceding siblings ...)
  2024-10-09 23:46 ` [PATCH v6 07/12] util/dsa: Implement DSA task asynchronous submission and wait for completion Yichen Wang
@ 2024-10-09 23:46 ` Yichen Wang
  2024-10-11 17:14   ` Dr. David Alan Gilbert
  2024-10-09 23:46 ` [PATCH v6 09/12] migration/multifd: Enable DSA offloading in multifd sender path Yichen Wang
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 26+ messages in thread
From: Yichen Wang @ 2024-10-09 23:46 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Paolo Bonzini, Marc-André Lureau,
	Daniel P. Berrangé, Philippe Mathieu-Daudé, Peter Xu,
	Fabiano Rosas, Eric Blake, Markus Armbruster, Michael S. Tsirkin,
	Cornelia Huck, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Shivam Kumar, Ho-Ren (Jack) Chuang,
	Yichen Wang

From: Hao Xiang <hao.xiang@linux.dev>

Intel DSA offloading is an optional feature that is enabled when the
proper hardware and software stack is available. Turn on DSA
offloading in multifd live migration by setting:

zero-page-detection=dsa-accel
dsa-accel-path=[dsa_dev_path1] [dsa_dev_path2] ... [dsa_dev_pathX]

This feature is turned off by default.
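
For reference, a sketch of the QMP equivalent (the device paths are
examples and depend on the host's DSA work queue configuration):

    { "execute": "migrate-set-parameters",
      "arguments": { "zero-page-detection": "dsa-accel",
                     "dsa-accel-path": [ "/dev/dsa/wq4.0",
                                         "/dev/dsa/wq4.1" ] } }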

Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
---
 hmp-commands.hx                |  2 +-
 include/qemu/dsa.h             | 13 +++++++++++++
 migration/migration-hmp-cmds.c | 19 ++++++++++++++++++-
 migration/options.c            | 30 ++++++++++++++++++++++++++++++
 migration/options.h            |  1 +
 qapi/migration.json            | 32 ++++++++++++++++++++++++++++----
 util/dsa.c                     | 31 +++++++++++++++++++++++++++++++
 7 files changed, 122 insertions(+), 6 deletions(-)

diff --git a/hmp-commands.hx b/hmp-commands.hx
index 06746f0afc..0e04eac7c7 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -1009,7 +1009,7 @@ ERST
 
     {
         .name       = "migrate_set_parameter",
-        .args_type  = "parameter:s,value:s",
+        .args_type  = "parameter:s,value:S",
         .params     = "parameter value",
         .help       = "Set the parameter for migration",
         .cmd        = hmp_migrate_set_parameter,
diff --git a/include/qemu/dsa.h b/include/qemu/dsa.h
index a3b502ee41..b1bb6daad2 100644
--- a/include/qemu/dsa.h
+++ b/include/qemu/dsa.h
@@ -100,6 +100,13 @@ void qemu_dsa_stop(void);
  */
 void qemu_dsa_cleanup(void);
 
+/**
+ * @brief Check if DSA is supported.
+ *
+ * @return True if DSA is supported, otherwise false.
+ */
+bool qemu_dsa_is_supported(void);
+
 /**
  * @brief Check if DSA is running.
  *
@@ -141,6 +148,12 @@ buffer_is_zero_dsa_batch_sync(QemuDsaBatchTask *batch_task,
 
 typedef struct QemuDsaBatchTask {} QemuDsaBatchTask;
 
+static inline bool qemu_dsa_is_supported(void)
+{
+    return false;
+}
+
 static inline bool qemu_dsa_is_running(void)
 {
     return false;
diff --git a/migration/migration-hmp-cmds.c b/migration/migration-hmp-cmds.c
index 20d1a6e219..983f13b73c 100644
--- a/migration/migration-hmp-cmds.c
+++ b/migration/migration-hmp-cmds.c
@@ -312,7 +312,16 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
         monitor_printf(mon, "%s: '%s'\n",
             MigrationParameter_str(MIGRATION_PARAMETER_TLS_AUTHZ),
             params->tls_authz);
-
+        if (params->has_dsa_accel_path) {
+            strList *dsa_accel_path = params->dsa_accel_path;
+            monitor_printf(mon, "%s:",
+                MigrationParameter_str(MIGRATION_PARAMETER_DSA_ACCEL_PATH));
+            while (dsa_accel_path) {
+                monitor_printf(mon, " '%s'", dsa_accel_path->value);
+                dsa_accel_path = dsa_accel_path->next;
+            }
+            monitor_printf(mon, "\n");
+        }
         if (params->has_block_bitmap_mapping) {
             const BitmapMigrationNodeAliasList *bmnal;
 
@@ -563,6 +572,14 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
         p->has_x_checkpoint_delay = true;
         visit_type_uint32(v, param, &p->x_checkpoint_delay, &err);
         break;
+    case MIGRATION_PARAMETER_DSA_ACCEL_PATH:
+        p->has_dsa_accel_path = true;
+        g_autofree char **strv = g_strsplit(valuestr ? : "", " ", -1);
+        strList **tail = &p->dsa_accel_path;
+        for (int i = 0; strv[i]; i++) {
+            QAPI_LIST_APPEND(tail, strv[i]);
+        }
+        break;
     case MIGRATION_PARAMETER_MULTIFD_CHANNELS:
         p->has_multifd_channels = true;
         visit_type_uint8(v, param, &p->multifd_channels, &err);
diff --git a/migration/options.c b/migration/options.c
index 147cd2b8fd..a0b3a7d291 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -13,6 +13,7 @@
 
 #include "qemu/osdep.h"
 #include "qemu/error-report.h"
+#include "qemu/dsa.h"
 #include "exec/target_page.h"
 #include "qapi/clone-visitor.h"
 #include "qapi/error.h"
@@ -832,6 +833,13 @@ const char *migrate_tls_creds(void)
     return s->parameters.tls_creds;
 }
 
+const strList *migrate_dsa_accel_path(void)
+{
+    MigrationState *s = migrate_get_current();
+
+    return s->parameters.dsa_accel_path;
+}
+
 const char *migrate_tls_hostname(void)
 {
     MigrationState *s = migrate_get_current();
@@ -945,6 +953,8 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
     params->zero_page_detection = s->parameters.zero_page_detection;
     params->has_direct_io = true;
     params->direct_io = s->parameters.direct_io;
+    params->has_dsa_accel_path = true;
+    params->dsa_accel_path = QAPI_CLONE(strList, s->parameters.dsa_accel_path);
 
     return params;
 }
@@ -953,6 +963,7 @@ void migrate_params_init(MigrationParameters *params)
 {
     params->tls_hostname = g_strdup("");
     params->tls_creds = g_strdup("");
+    params->dsa_accel_path = NULL;
 
     /* Set has_* up only for parameter checks */
     params->has_throttle_trigger_threshold = true;
@@ -1165,6 +1176,14 @@ bool migrate_params_check(MigrationParameters *params, Error **errp)
         return false;
     }
 
+    if (params->has_zero_page_detection &&
+        params->zero_page_detection == ZERO_PAGE_DETECTION_DSA_ACCEL) {
+        if (!qemu_dsa_is_supported()) {
+            error_setg(errp, "DSA acceleration is not supported.");
+            return false;
+        }
+    }
+
     return true;
 }
 
@@ -1278,6 +1297,11 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
     if (params->has_direct_io) {
         dest->direct_io = params->direct_io;
     }
+
+    if (params->has_dsa_accel_path) {
+        dest->has_dsa_accel_path = true;
+        dest->dsa_accel_path = params->dsa_accel_path;
+    }
 }
 
 static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
@@ -1410,6 +1434,12 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
     if (params->has_direct_io) {
         s->parameters.direct_io = params->direct_io;
     }
+    if (params->has_dsa_accel_path) {
+        qapi_free_strList(s->parameters.dsa_accel_path);
+        s->parameters.has_dsa_accel_path = true;
+        s->parameters.dsa_accel_path =
+            QAPI_CLONE(strList, params->dsa_accel_path);
+    }
 }
 
 void qmp_migrate_set_parameters(MigrateSetParameters *params, Error **errp)
diff --git a/migration/options.h b/migration/options.h
index a0bd6edc06..8198b220bd 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -86,6 +86,7 @@ const char *migrate_tls_creds(void);
 const char *migrate_tls_hostname(void);
 uint64_t migrate_xbzrle_cache_size(void);
 ZeroPageDetection migrate_zero_page_detection(void);
+const strList *migrate_dsa_accel_path(void);
 
 /* parameters helpers */
 
diff --git a/qapi/migration.json b/qapi/migration.json
index b66cccf107..d8b42ceae6 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -626,10 +626,14 @@
 #     multifd migration is enabled, else in the main migration thread
 #     as for @legacy.
 #
+# @dsa-accel: Perform zero page checking with the DSA accelerator
+#     offloading in multifd sender thread if multifd migration is
+#     enabled, else in the main migration thread as for @legacy.
+#
 # Since: 9.0
 ##
 { 'enum': 'ZeroPageDetection',
-  'data': [ 'none', 'legacy', 'multifd' ] }
+  'data': [ 'none', 'legacy', 'multifd', 'dsa-accel' ] }
 
 ##
 # @BitmapMigrationBitmapAliasTransform:
@@ -837,6 +841,12 @@
 #     See description in @ZeroPageDetection.  Default is 'multifd'.
 #     (since 9.0)
 #
+# @dsa-accel-path: If enabled, use DSA accelerator offloading for
+#     certain memory operations. Enable DSA accelerator for zero
+#     page detection offloading by setting @zero-page-detection to
+#     dsa-accel. This parameter defines the DSA device paths, and
+#     defaults to an empty list.  (Since 9.2)
+#
 # @direct-io: Open migration files with O_DIRECT when possible.  This
 #     only has effect if the @mapped-ram capability is enabled.
 #     (Since 9.1)
@@ -855,7 +865,7 @@
            'cpu-throttle-initial', 'cpu-throttle-increment',
            'cpu-throttle-tailslow',
            'tls-creds', 'tls-hostname', 'tls-authz', 'max-bandwidth',
-           'avail-switchover-bandwidth', 'downtime-limit',
+           'avail-switchover-bandwidth', 'downtime-limit', 'dsa-accel-path',
            { 'name': 'x-checkpoint-delay', 'features': [ 'unstable' ] },
            'multifd-channels',
            'xbzrle-cache-size', 'max-postcopy-bandwidth',
@@ -1018,6 +1028,12 @@
 #     See description in @ZeroPageDetection.  Default is 'multifd'.
 #     (since 9.0)
 #
+# @dsa-accel-path: If enabled, use DSA accelerator offloading for
+#     certain memory operations. Enable DSA accelerator for zero
+#     page detection offloading by setting @zero-page-detection to
+#     dsa-accel. This parameter defines the DSA device paths, and
+#     defaults to an empty list.  (Since 9.2)
+#
 # @direct-io: Open migration files with O_DIRECT when possible.  This
 #     only has effect if the @mapped-ram capability is enabled.
 #     (Since 9.1)
@@ -1063,7 +1079,8 @@
             '*vcpu-dirty-limit': 'uint64',
             '*mode': 'MigMode',
             '*zero-page-detection': 'ZeroPageDetection',
-            '*direct-io': 'bool' } }
+            '*direct-io': 'bool',
+            '*dsa-accel-path': [ 'str' ] } }
 
 ##
 # @migrate-set-parameters:
@@ -1228,6 +1245,12 @@
 #     See description in @ZeroPageDetection.  Default is 'multifd'.
 #     (since 9.0)
 #
+# @dsa-accel-path: If enabled, use DSA accelerator offloading for
+#     certain memory operations. Enable DSA accelerator for zero
+#     page detection offloading by setting @zero-page-detection to
+#     dsa-accel. This parameter defines the DSA device paths, and
+#     defaults to an empty list.  (Since 9.2)
+#
 # @direct-io: Open migration files with O_DIRECT when possible.  This
 #     only has effect if the @mapped-ram capability is enabled.
 #     (Since 9.1)
@@ -1270,7 +1293,8 @@
             '*vcpu-dirty-limit': 'uint64',
             '*mode': 'MigMode',
             '*zero-page-detection': 'ZeroPageDetection',
-            '*direct-io': 'bool' } }
+            '*direct-io': 'bool',
+            '*dsa-accel-path': [ 'str' ] } }
 
 ##
 # @query-migrate-parameters:
diff --git a/util/dsa.c b/util/dsa.c
index cbaa47c360..eeede3c0c7 100644
--- a/util/dsa.c
+++ b/util/dsa.c
@@ -23,6 +23,7 @@
 #include "qemu/bswap.h"
 #include "qemu/error-report.h"
 #include "qemu/rcu.h"
+#include <cpuid.h>
 
 #pragma GCC push_options
 #pragma GCC target("enqcmd")
@@ -691,6 +692,36 @@ static void dsa_completion_thread_stop(void *opaque)
     qemu_sem_destroy(&thread_context->sem_init_done);
 }
 
+/**
+ * @brief Check if DSA is supported.
+ *
+ * @return True if DSA is supported, otherwise false.
+ */
+bool qemu_dsa_is_supported(void)
+{
+    /*
+     * movdir64b is indicated by bit 28 of ecx in CPUID leaf 7, subleaf 0.
+     * enqcmd is indicated by bit 29 of ecx in CPUID leaf 7, subleaf 0.
+     * Doc: https://cdrdv2-public.intel.com/819680/architecture-instruction-\
+     *      set-extensions-programming-reference.pdf
+     */
+    uint32_t eax, ebx, ecx, edx;
+    bool movdir64b_enabled;
+    bool enqcmd_enabled;
+
+    __get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
+    movdir64b_enabled = (ecx >> 28) & 0x1;
+    if (!movdir64b_enabled) {
+        return false;
+    }
+    enqcmd_enabled = (ecx >> 29) & 0x1;
+    if (!enqcmd_enabled) {
+        return false;
+    }
+
+    return true;
+}
+
 /**
  * @brief Check if DSA is running.
  *
-- 
Yichen Wang



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v6 09/12] migration/multifd: Enable DSA offloading in multifd sender path.
  2024-10-09 23:45 [PATCH v6 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration Yichen Wang
                   ` (7 preceding siblings ...)
  2024-10-09 23:46 ` [PATCH v6 08/12] migration/multifd: Add new migration option for multifd DSA offloading Yichen Wang
@ 2024-10-09 23:46 ` Yichen Wang
  2024-10-17 19:11   ` Fabiano Rosas
  2024-10-09 23:46 ` [PATCH v6 10/12] migration/multifd: Add migration option set packet size Yichen Wang
                   ` (4 subsequent siblings)
  13 siblings, 1 reply; 26+ messages in thread
From: Yichen Wang @ 2024-10-09 23:46 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Paolo Bonzini, Marc-André Lureau,
	Daniel P. Berrangé, Philippe Mathieu-Daudé, Peter Xu,
	Fabiano Rosas, Eric Blake, Markus Armbruster, Michael S. Tsirkin,
	Cornelia Huck, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Shivam Kumar, Ho-Ren (Jack) Chuang,
	Yichen Wang

From: Hao Xiang <hao.xiang@linux.dev>

The multifd sender path gets an array of pages queued by the migration
thread. It performs zero page checking on every page in the array.
The pages are classified as either a zero page or a normal page. This
change uses Intel DSA to offload the zero page checking from the CPU
to the DSA accelerator. The sender thread submits a batch of pages to
the DSA hardware and waits for the DSA completion thread to signal for
work completion.
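
In outline, the per-batch flow added to the sender thread below is
(simplified excerpt; error handling elided):

    /* Collect the host addresses of the pages in this batch. */
    for (int i = 0; i < pages->num; i++) {
        p->dsa_batch_task->addr[i] =
            (ram_addr_t)(rb->host + pages->offset[i]);
    }
    /* Submit to DSA and block until results[] is filled in. */
    buffer_is_zero_dsa_batch_sync(p->dsa_batch_task,
                                  (const void **)p->dsa_batch_task->addr,
                                  pages->num, multifd_ram_page_size());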

Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
---
 migration/multifd-zero-page.c | 133 ++++++++++++++++++++++++++++++----
 migration/multifd.c           |  19 ++++-
 migration/multifd.h           |   5 ++
 3 files changed, 141 insertions(+), 16 deletions(-)

diff --git a/migration/multifd-zero-page.c b/migration/multifd-zero-page.c
index f1e988a959..e4bfff23a4 100644
--- a/migration/multifd-zero-page.c
+++ b/migration/multifd-zero-page.c
@@ -21,7 +21,9 @@
 
 static bool multifd_zero_page_enabled(void)
 {
-    return migrate_zero_page_detection() == ZERO_PAGE_DETECTION_MULTIFD;
+    ZeroPageDetection cur_method = migrate_zero_page_detection();
+
+    return (cur_method == ZERO_PAGE_DETECTION_MULTIFD ||
+            cur_method == ZERO_PAGE_DETECTION_DSA_ACCEL);
 }
 
 static void swap_page_offset(ram_addr_t *pages_offset, int a, int b)
@@ -37,26 +39,49 @@ static void swap_page_offset(ram_addr_t *pages_offset, int a, int b)
     pages_offset[b] = temp;
 }
 
+#ifdef CONFIG_DSA_OPT
+
+static void swap_result(bool *results, int a, int b)
+{
+    bool temp;
+
+    if (a == b) {
+        return;
+    }
+
+    temp = results[a];
+    results[a] = results[b];
+    results[b] = temp;
+}
+
 /**
- * multifd_send_zero_page_detect: Perform zero page detection on all pages.
+ * zero_page_detect_dsa: Perform zero page detection using
+ * Intel Data Streaming Accelerator (DSA).
  *
- * Sorts normal pages before zero pages in p->pages->offset and updates
- * p->pages->normal_num.
+ * Sorts normal pages before zero pages in pages->offset and updates
+ * pages->normal_num.
  *
  * @param p A pointer to the send params.
  */
-void multifd_send_zero_page_detect(MultiFDSendParams *p)
+static void zero_page_detect_dsa(MultiFDSendParams *p)
 {
     MultiFDPages_t *pages = &p->data->u.ram;
     RAMBlock *rb = pages->block;
-    int i = 0;
-    int j = pages->num - 1;
+    bool *results = p->dsa_batch_task->results;
 
-    if (!multifd_zero_page_enabled()) {
-        pages->normal_num = pages->num;
-        goto out;
+    for (int i = 0; i < pages->num; i++) {
+        p->dsa_batch_task->addr[i] =
+            (ram_addr_t)(rb->host + pages->offset[i]);
     }
 
+    buffer_is_zero_dsa_batch_sync(p->dsa_batch_task,
+                                  (const void **)p->dsa_batch_task->addr,
+                                  pages->num,
+                                  multifd_ram_page_size());
+
+    int i = 0;
+    int j = pages->num - 1;
+
     /*
      * Sort the page offset array by moving all normal pages to
      * the left and all zero pages to the right of the array.
@@ -64,23 +89,39 @@ void multifd_send_zero_page_detect(MultiFDSendParams *p)
     while (i <= j) {
         uint64_t offset = pages->offset[i];
 
-        if (!buffer_is_zero(rb->host + offset, multifd_ram_page_size())) {
+        if (!results[i]) {
             i++;
             continue;
         }
 
+        swap_result(results, i, j);
         swap_page_offset(pages->offset, i, j);
         ram_release_page(rb->idstr, offset);
         j--;
     }
 
     pages->normal_num = i;
+}
 
-out:
-    stat64_add(&mig_stats.normal_pages, pages->normal_num);
-    stat64_add(&mig_stats.zero_pages, pages->num - pages->normal_num);
+void multifd_dsa_cleanup(void)
+{
+    qemu_dsa_cleanup();
+}
+
+#else
+
+static void zero_page_detect_dsa(MultiFDSendParams *p)
+{
+    g_assert_not_reached();
+}
+
+void multifd_dsa_cleanup(void)
+{
+    /*
+     * No-op: this is called unconditionally from the multifd
+     * shutdown paths, so it must be safe without CONFIG_DSA_OPT.
+     */
+}
 
+#endif
+
 void multifd_recv_zero_page_process(MultiFDRecvParams *p)
 {
     for (int i = 0; i < p->zero_num; i++) {
@@ -92,3 +133,67 @@ void multifd_recv_zero_page_process(MultiFDRecvParams *p)
         }
     }
 }
+
+/**
+ * zero_page_detect_cpu: Perform zero page detection using CPU.
+ *
+ * Sorts normal pages before zero pages in p->pages->offset and updates
+ * p->pages->normal_num.
+ *
+ * @param p A pointer to the send params.
+ */
+static void zero_page_detect_cpu(MultiFDSendParams *p)
+{
+    MultiFDPages_t *pages = &p->data->u.ram;
+    RAMBlock *rb = pages->block;
+    int i = 0;
+    int j = pages->num - 1;
+
+    if (!multifd_zero_page_enabled()) {
+        pages->normal_num = pages->num;
+        return;
+    }
+
+    /*
+     * Sort the page offset array by moving all normal pages to
+     * the left and all zero pages to the right of the array.
+     */
+    while (i <= j) {
+        uint64_t offset = pages->offset[i];
+
+        if (!buffer_is_zero(rb->host + offset, multifd_ram_page_size())) {
+            i++;
+            continue;
+        }
+
+        swap_page_offset(pages->offset, i, j);
+        ram_release_page(rb->idstr, offset);
+        j--;
+    }
+
+    pages->normal_num = i;
+}
+
+/**
+ * multifd_send_zero_page_detect: Perform zero page detection on all pages.
+ *
+ * @param p A pointer to the send params.
+ */
+void multifd_send_zero_page_detect(MultiFDSendParams *p)
+{
+    MultiFDPages_t *pages = &p->data->u.ram;
+
+    if (!multifd_zero_page_enabled()) {
+        pages->normal_num = pages->num;
+        return;
+    }
+
+    if (qemu_dsa_is_running()) {
+        zero_page_detect_dsa(p);
+    } else {
+        zero_page_detect_cpu(p);
+    }
+
+    stat64_add(&mig_stats.normal_pages, pages->normal_num);
+    stat64_add(&mig_stats.zero_pages, pages->num - pages->normal_num);
+}
diff --git a/migration/multifd.c b/migration/multifd.c
index 9b200f4ad9..e255ccf0c7 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -13,6 +13,7 @@
 #include "qemu/osdep.h"
 #include "qemu/cutils.h"
 #include "qemu/rcu.h"
+#include "qemu/dsa.h"
 #include "exec/target_page.h"
 #include "sysemu/sysemu.h"
 #include "exec/ramblock.h"
@@ -462,6 +463,8 @@ static bool multifd_send_cleanup_channel(MultiFDSendParams *p, Error **errp)
     p->name = NULL;
     g_free(p->data);
     p->data = NULL;
+    buffer_zero_batch_task_destroy(p->dsa_batch_task);
+    p->dsa_batch_task = NULL;
     p->packet_len = 0;
     g_free(p->packet);
     p->packet = NULL;
@@ -493,6 +496,8 @@ void multifd_send_shutdown(void)
 
     multifd_send_terminate_threads();
 
+    multifd_dsa_cleanup();
+
     for (i = 0; i < migrate_multifd_channels(); i++) {
         MultiFDSendParams *p = &multifd_send_state->params[i];
         Error *local_err = NULL;
@@ -814,11 +819,21 @@ bool multifd_send_setup(void)
     uint32_t page_count = multifd_ram_page_count();
     bool use_packets = multifd_use_packets();
     uint8_t i;
+    Error *local_err = NULL;
 
     if (!migrate_multifd()) {
         return true;
     }
 
+    if (s &&
+        s->parameters.zero_page_detection == ZERO_PAGE_DETECTION_DSA_ACCEL) {
+        const strList *dsa_parameter = migrate_dsa_accel_path();
+        if (qemu_dsa_init(dsa_parameter, &local_err)) {
+            error_report_err(local_err);
+            return false;
+        }
+        qemu_dsa_start();
+    }
+
     thread_count = migrate_multifd_channels();
     multifd_send_state = g_malloc0(sizeof(*multifd_send_state));
     multifd_send_state->params = g_new0(MultiFDSendParams, thread_count);
@@ -829,12 +844,12 @@ bool multifd_send_setup(void)
 
     for (i = 0; i < thread_count; i++) {
         MultiFDSendParams *p = &multifd_send_state->params[i];
-        Error *local_err = NULL;
 
         qemu_sem_init(&p->sem, 0);
         qemu_sem_init(&p->sem_sync, 0);
         p->id = i;
         p->data = multifd_send_data_alloc();
+        p->dsa_batch_task = buffer_zero_batch_task_init(page_count);
 
         if (use_packets) {
             p->packet_len = sizeof(MultiFDPacket_t)
@@ -865,7 +880,6 @@ bool multifd_send_setup(void)
 
     for (i = 0; i < thread_count; i++) {
         MultiFDSendParams *p = &multifd_send_state->params[i];
-        Error *local_err = NULL;
 
         ret = multifd_send_state->ops->send_setup(p, &local_err);
         if (ret) {
@@ -1047,6 +1061,7 @@ void multifd_recv_cleanup(void)
             qemu_thread_join(&p->thread);
         }
     }
+    multifd_dsa_cleanup();
     for (i = 0; i < migrate_multifd_channels(); i++) {
         multifd_recv_cleanup_channel(&multifd_recv_state->params[i]);
     }
diff --git a/migration/multifd.h b/migration/multifd.h
index 50d58c0c9c..e293ddbc1d 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -15,6 +15,7 @@
 
 #include "exec/target_page.h"
 #include "ram.h"
+#include "qemu/dsa.h"
 
 typedef struct MultiFDRecvData MultiFDRecvData;
 typedef struct MultiFDSendData MultiFDSendData;
@@ -155,6 +156,9 @@ typedef struct {
     bool pending_sync;
     MultiFDSendData *data;
 
+    /* Zero page checking batch task */
+    QemuDsaBatchTask *dsa_batch_task;
+
     /* thread local variables. No locking required */
 
     /* pointer to the packet */
@@ -313,6 +317,7 @@ void multifd_send_fill_packet(MultiFDSendParams *p);
 bool multifd_send_prepare_common(MultiFDSendParams *p);
 void multifd_send_zero_page_detect(MultiFDSendParams *p);
 void multifd_recv_zero_page_process(MultiFDRecvParams *p);
+void multifd_dsa_cleanup(void);
 
 static inline void multifd_send_prepare_header(MultiFDSendParams *p)
 {
-- 
Yichen Wang



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v6 10/12] migration/multifd: Add migration option set packet size.
  2024-10-09 23:45 [PATCH v6 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration Yichen Wang
                   ` (8 preceding siblings ...)
  2024-10-09 23:46 ` [PATCH v6 09/12] migration/multifd: Enable DSA offloading in multifd sender path Yichen Wang
@ 2024-10-09 23:46 ` Yichen Wang
  2024-10-17 19:16   ` Fabiano Rosas
  2024-10-09 23:46 ` [PATCH v6 11/12] util/dsa: Add unit test coverage for Intel DSA task submission and completion Yichen Wang
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 26+ messages in thread
From: Yichen Wang @ 2024-10-09 23:46 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Paolo Bonzini, Marc-André Lureau,
	Daniel P. Berrangé, Philippe Mathieu-Daudé, Peter Xu,
	Fabiano Rosas, Eric Blake, Markus Armbruster, Michael S. Tsirkin,
	Cornelia Huck, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Shivam Kumar, Ho-Ren (Jack) Chuang,
	Yichen Wang

From: Hao Xiang <hao.xiang@linux.dev>

During live migration, if the latency between sender and receiver is
high and the bandwidth is also high (a long, fat pipe), using a bigger
packet size can help reduce the total migration time. The current
multifd packet size is 128 * 4KB. In addition, Intel DSA offloading
performs better with a large batch task.

This change adds an option to set the packet size, which is also useful
for performance tuning. Both sender and receiver need to set the same
packet size for things to work.

Set the option:
migrate_set_parameter multifd-packet-size 4190208
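
The QMP equivalent (a sketch; 4190208 is 1023 4KiB pages, which falls
within the range enforced in migration/options.c below):

    { "execute": "migrate-set-parameters",
      "arguments": { "multifd-packet-size": 4190208 } }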

Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
---
 migration/migration-hmp-cmds.c |  7 ++++++
 migration/multifd-zlib.c       |  6 ++++--
 migration/multifd-zstd.c       |  6 ++++--
 migration/options.c            | 39 ++++++++++++++++++++++++++++++++++
 migration/options.h            |  1 +
 qapi/migration.json            | 21 +++++++++++++++---
 6 files changed, 73 insertions(+), 7 deletions(-)

diff --git a/migration/migration-hmp-cmds.c b/migration/migration-hmp-cmds.c
index 983f13b73c..561ed45250 100644
--- a/migration/migration-hmp-cmds.c
+++ b/migration/migration-hmp-cmds.c
@@ -292,6 +292,9 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
         monitor_printf(mon, "%s: %u ms\n",
             MigrationParameter_str(MIGRATION_PARAMETER_X_CHECKPOINT_DELAY),
             params->x_checkpoint_delay);
+        monitor_printf(mon, "%s: %" PRIu64 "\n",
+            MigrationParameter_str(MIGRATION_PARAMETER_MULTIFD_PACKET_SIZE),
+            params->multifd_packet_size);
         monitor_printf(mon, "%s: %u\n",
             MigrationParameter_str(MIGRATION_PARAMETER_MULTIFD_CHANNELS),
             params->multifd_channels);
@@ -580,6 +583,10 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
             QAPI_LIST_APPEND(tail, strv[i]);
         }
         break;
+    case MIGRATION_PARAMETER_MULTIFD_PACKET_SIZE:
+        p->has_multifd_packet_size = true;
+        visit_type_size(v, param, &p->multifd_packet_size, &err);
+        break;
     case MIGRATION_PARAMETER_MULTIFD_CHANNELS:
         p->has_multifd_channels = true;
         visit_type_uint8(v, param, &p->multifd_channels, &err);
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index 8cf8a26bb4..58c278533a 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -39,6 +39,7 @@ static int multifd_zlib_send_setup(MultiFDSendParams *p, Error **errp)
     struct zlib_data *z = g_new0(struct zlib_data, 1);
     z_stream *zs = &z->zs;
     const char *err_msg;
+    uint64_t multifd_packet_size = migrate_multifd_packet_size();
 
     zs->zalloc = Z_NULL;
     zs->zfree = Z_NULL;
@@ -48,7 +49,7 @@ static int multifd_zlib_send_setup(MultiFDSendParams *p, Error **errp)
         goto err_free_z;
     }
     /* This is the maximum size of the compressed buffer */
-    z->zbuff_len = compressBound(MULTIFD_PACKET_SIZE);
+    z->zbuff_len = compressBound(multifd_packet_size);
     z->zbuff = g_try_malloc(z->zbuff_len);
     if (!z->zbuff) {
         err_msg = "out of memory for zbuff";
@@ -162,6 +163,7 @@ out:
 
 static int multifd_zlib_recv_setup(MultiFDRecvParams *p, Error **errp)
 {
+    uint64_t multifd_packet_size = migrate_multifd_packet_size();
     struct zlib_data *z = g_new0(struct zlib_data, 1);
     z_stream *zs = &z->zs;
 
@@ -176,7 +178,7 @@ static int multifd_zlib_recv_setup(MultiFDRecvParams *p, Error **errp)
         return -1;
     }
     /* To be safe, we reserve twice the size of the packet */
-    z->zbuff_len = MULTIFD_PACKET_SIZE * 2;
+    z->zbuff_len = multifd_packet_size * 2;
     z->zbuff = g_try_malloc(z->zbuff_len);
     if (!z->zbuff) {
         inflateEnd(zs);
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index abed140855..1f97a5417c 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -39,6 +39,7 @@ struct zstd_data {
 
 static int multifd_zstd_send_setup(MultiFDSendParams *p, Error **errp)
 {
+    uint64_t multifd_packet_size = migrate_multifd_packet_size();
     struct zstd_data *z = g_new0(struct zstd_data, 1);
     int res;
 
@@ -58,7 +59,7 @@ static int multifd_zstd_send_setup(MultiFDSendParams *p, Error **errp)
         return -1;
     }
     /* This is the maximum size of the compressed buffer */
-    z->zbuff_len = ZSTD_compressBound(MULTIFD_PACKET_SIZE);
+    z->zbuff_len = ZSTD_compressBound(multifd_packet_size);
     z->zbuff = g_try_malloc(z->zbuff_len);
     if (!z->zbuff) {
         ZSTD_freeCStream(z->zcs);
@@ -149,6 +150,7 @@ out:
 
 static int multifd_zstd_recv_setup(MultiFDRecvParams *p, Error **errp)
 {
+    uint64_t multifd_packet_size = migrate_multifd_packet_size();
     struct zstd_data *z = g_new0(struct zstd_data, 1);
     int ret;
 
@@ -170,7 +172,7 @@ static int multifd_zstd_recv_setup(MultiFDRecvParams *p, Error **errp)
     }
 
     /* To be safe, we reserve twice the size of the packet */
-    z->zbuff_len = MULTIFD_PACKET_SIZE * 2;
+    z->zbuff_len = multifd_packet_size * 2;
     z->zbuff = g_try_malloc(z->zbuff_len);
     if (!z->zbuff) {
         ZSTD_freeDStream(z->zds);
diff --git a/migration/options.c b/migration/options.c
index a0b3a7d291..b1eaf1c095 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -80,6 +80,13 @@
 #define DEFAULT_MIGRATE_ANNOUNCE_ROUNDS    5
 #define DEFAULT_MIGRATE_ANNOUNCE_STEP    100
 
+/*
+ * Parameter for multifd packet size.
+ */
+#define DEFAULT_MIGRATE_MULTIFD_PACKET_SIZE (128 * 4 * 1024)
+/* A DSA device supports batches of up to 1024 descriptors, i.e. 1024 * 4K pages */
+#define MAX_MIGRATE_MULTIFD_PACKET_SIZE (1024 * 4 * 1024)
+
 #define DEFINE_PROP_MIG_CAP(name, x)             \
     DEFINE_PROP_BOOL(name, MigrationState, capabilities[x], false)
 
@@ -173,6 +180,9 @@ Property migration_properties[] = {
     DEFINE_PROP_ZERO_PAGE_DETECTION("zero-page-detection", MigrationState,
                        parameters.zero_page_detection,
                        ZERO_PAGE_DETECTION_MULTIFD),
+    DEFINE_PROP_SIZE("multifd-packet-size", MigrationState,
+                     parameters.multifd_packet_size,
+                     DEFAULT_MIGRATE_MULTIFD_PACKET_SIZE),
 
     /* Migration capabilities */
     DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
@@ -783,6 +793,13 @@ int migrate_multifd_channels(void)
     return s->parameters.multifd_channels;
 }
 
+uint64_t migrate_multifd_packet_size(void)
+{
+    MigrationState *s = migrate_get_current();
+
+    return s->parameters.multifd_packet_size;
+}
+
 MultiFDCompression migrate_multifd_compression(void)
 {
     MigrationState *s = migrate_get_current();
@@ -911,6 +928,8 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
     params->downtime_limit = s->parameters.downtime_limit;
     params->has_x_checkpoint_delay = true;
     params->x_checkpoint_delay = s->parameters.x_checkpoint_delay;
+    params->has_multifd_packet_size = true;
+    params->multifd_packet_size = s->parameters.multifd_packet_size;
     params->has_multifd_channels = true;
     params->multifd_channels = s->parameters.multifd_channels;
     params->has_multifd_compression = true;
@@ -973,6 +992,7 @@ void migrate_params_init(MigrationParameters *params)
     params->has_max_bandwidth = true;
     params->has_downtime_limit = true;
     params->has_x_checkpoint_delay = true;
+    params->has_multifd_packet_size = true;
     params->has_multifd_channels = true;
     params->has_multifd_compression = true;
     params->has_multifd_zlib_level = true;
@@ -1055,6 +1075,19 @@ bool migrate_params_check(MigrationParameters *params, Error **errp)
 
     /* x_checkpoint_delay is now always positive */
 
+    if (params->has_multifd_packet_size &&
+        ((params->multifd_packet_size < DEFAULT_MIGRATE_MULTIFD_PACKET_SIZE) ||
+            (params->multifd_packet_size > MAX_MIGRATE_MULTIFD_PACKET_SIZE) ||
+            (params->multifd_packet_size % qemu_target_page_size() != 0))) {
+        error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
+                    "multifd_packet_size",
+                    "an integer in the range of "
+                    stringify(DEFAULT_MIGRATE_MULTIFD_PACKET_SIZE)
+                    " to "stringify(MAX_MIGRATE_MULTIFD_PACKET_SIZE)", "
+                    "and must be a multiple of guest VM's page size.");
+        return false;
+    }
+
     if (params->has_multifd_channels && (params->multifd_channels < 1)) {
         error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
                    "multifd_channels",
@@ -1236,6 +1269,9 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
         dest->x_checkpoint_delay = params->x_checkpoint_delay;
     }
 
+    if (params->has_multifd_packet_size) {
+        dest->multifd_packet_size = params->multifd_packet_size;
+    }
     if (params->has_multifd_channels) {
         dest->multifd_channels = params->multifd_channels;
     }
@@ -1364,6 +1400,9 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
         colo_checkpoint_delay_set();
     }
 
+    if (params->has_multifd_packet_size) {
+        s->parameters.multifd_packet_size = params->multifd_packet_size;
+    }
     if (params->has_multifd_channels) {
         s->parameters.multifd_channels = params->multifd_channels;
     }
diff --git a/migration/options.h b/migration/options.h
index 8198b220bd..8158d4879d 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -87,6 +87,7 @@ const char *migrate_tls_hostname(void);
 uint64_t migrate_xbzrle_cache_size(void);
 ZeroPageDetection migrate_zero_page_detection(void);
 const strList *migrate_dsa_accel_path(void);
+uint64_t migrate_multifd_packet_size(void);
 
 /* parameters helpers */
 
diff --git a/qapi/migration.json b/qapi/migration.json
index d8b42ceae6..1d14d8e82f 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -851,6 +851,10 @@
 #     only has effect if the @mapped-ram capability is enabled.
 #     (Since 9.1)
 #
+# @multifd-packet-size: Packet size in bytes used to migrate data.
+#     The value must be a multiple of the guest page size.  The
+#     default value is 524288 and the maximum value is 4194304.
+#     (Since 9.2)
+#
 # Features:
 #
 # @unstable: Members @x-checkpoint-delay and
@@ -877,7 +881,8 @@
            'vcpu-dirty-limit',
            'mode',
            'zero-page-detection',
-           'direct-io'] }
+           'direct-io',
+           'multifd-packet-size'] }
 
 ##
 # @MigrateSetParameters:
@@ -1038,6 +1043,10 @@
 #     only has effect if the @mapped-ram capability is enabled.
 #     (Since 9.1)
 #
+# @multifd-packet-size: Packet size in bytes used to migrate data.
+#     The value must be a multiple of the guest page size.  The
+#     default value is 524288 and the maximum value is 4194304.
+#     (Since 9.2)
+#
 # Features:
 #
 # @unstable: Members @x-checkpoint-delay and
@@ -1080,7 +1089,8 @@
             '*mode': 'MigMode',
             '*zero-page-detection': 'ZeroPageDetection',
             '*direct-io': 'bool',
-            '*dsa-accel-path': [ 'str' ] } }
+            '*dsa-accel-path': [ 'str' ],
+            '*multifd-packet-size': 'uint64' } }
 
 ##
 # @migrate-set-parameters:
@@ -1255,6 +1265,10 @@
 #     only has effect if the @mapped-ram capability is enabled.
 #     (Since 9.1)
 #
+# @multifd-packet-size: Packet size in bytes used to migrate data.
+#     The value must be a multiple of the guest page size.  The
+#     default value is 524288 and the maximum value is 4194304.
+#     (Since 9.2)
+#
 # Features:
 #
 # @unstable: Members @x-checkpoint-delay and
@@ -1294,7 +1308,8 @@
             '*mode': 'MigMode',
             '*zero-page-detection': 'ZeroPageDetection',
             '*direct-io': 'bool',
-            '*dsa-accel-path': [ 'str' ] } }
+            '*dsa-accel-path': [ 'str' ],
+            '*multifd-packet-size': 'uint64' } }
 
 ##
 # @query-migrate-parameters:
-- 
Yichen Wang



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v6 11/12] util/dsa: Add unit test coverage for Intel DSA task submission and completion.
  2024-10-09 23:45 [PATCH v6 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration Yichen Wang
                   ` (9 preceding siblings ...)
  2024-10-09 23:46 ` [PATCH v6 10/12] migration/multifd: Add migration option set packet size Yichen Wang
@ 2024-10-09 23:46 ` Yichen Wang
  2024-10-09 23:46 ` [PATCH v6 12/12] migration/multifd: Add integration tests for multifd with Intel DSA offloading Yichen Wang
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 26+ messages in thread
From: Yichen Wang @ 2024-10-09 23:46 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Paolo Bonzini, Marc-André Lureau,
	Daniel P. Berrangé, Philippe Mathieu-Daudé, Peter Xu,
	Fabiano Rosas, Eric Blake, Markus Armbruster, Michael S. Tsirkin,
	Cornelia Huck, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Shivam Kumar, Ho-Ren (Jack) Chuang,
	Yichen Wang, Bryan Zhang

From: Hao Xiang <hao.xiang@linux.dev>

* Test DSA start and stop path.
* Test DSA configure and cleanup path.
* Test DSA task submission and completion path.
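
Assuming a DSA work queue has been configured at /dev/dsa/wq4.0 (the
path currently hardcoded in the test), the suite can be run from the
build tree with something like:

    $ ./tests/unit/test-dsa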

Signed-off-by: Bryan Zhang <bryan.zhang@bytedance.com>
Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
---
 tests/unit/meson.build |   6 +
 tests/unit/test-dsa.c  | 503 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 509 insertions(+)
 create mode 100644 tests/unit/test-dsa.c

diff --git a/tests/unit/meson.build b/tests/unit/meson.build
index d5248ae51d..394219e903 100644
--- a/tests/unit/meson.build
+++ b/tests/unit/meson.build
@@ -50,6 +50,12 @@ tests = {
   'test-fifo': [],
 }
 
+if config_host_data.get('CONFIG_DSA_OPT')
+  tests += {
+    'test-dsa': [],
+  }
+endif
+
 if have_system or have_tools
   tests += {
     'test-qmp-event': [testqapi],
diff --git a/tests/unit/test-dsa.c b/tests/unit/test-dsa.c
new file mode 100644
index 0000000000..181a547528
--- /dev/null
+++ b/tests/unit/test-dsa.c
@@ -0,0 +1,503 @@
+/*
+ * Test DSA functions.
+ *
+ * Copyright (C) Bytedance Ltd.
+ *
+ * Authors:
+ *  Hao Xiang <hao.xiang@bytedance.com>
+ *  Bryan Zhang <bryan.zhang@bytedance.com>
+ *  Yichen Wang <yichen.wang@bytedance.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/host-utils.h"
+
+#include "qemu/cutils.h"
+#include "qemu/memalign.h"
+#include "qemu/dsa.h"
+
+/*
+ * TODO Communicate that DSA must be configured to support this batch size.
+ * TODO Alternatively, poke the DSA device to figure out batch size.
+ */
+#define batch_size 128
+#define page_size 4096
+
+#define oversized_batch_size (batch_size + 1)
+#define num_devices 2
+#define max_buffer_size (64 * 1024)
+
+/* TODO Make these not-hardcoded. */
+static const strList path1[] = {
+    {.value = (char *)"/dev/dsa/wq4.0", .next = NULL}
+};
+static const strList path2[] = {
+    {.value = (char *)"/dev/dsa/wq4.0", .next = (strList*)&path2[1]},
+    {.value = (char *)"/dev/dsa/wq4.1", .next = NULL}
+};
+
+static Error **errp;
+
+static QemuDsaBatchTask *task;
+
+/* A helper for running a single task and checking for correctness. */
+static void do_single_task(void)
+{
+    task = buffer_zero_batch_task_init(batch_size);
+    char buf[page_size];
+    char *ptr = buf;
+
+    buffer_is_zero_dsa_batch_sync(task,
+                                  (const void **)&ptr,
+                                  1,
+                                  page_size);
+    g_assert(task->results[0] == buffer_is_zero(buf, page_size));
+
+    buffer_zero_batch_task_destroy(task);
+}
+
+static void test_single_zero(void)
+{
+    g_assert(!qemu_dsa_init(path1, errp));
+    qemu_dsa_start();
+
+    task = buffer_zero_batch_task_init(batch_size);
+
+    char buf[page_size];
+    char *ptr = buf;
+
+    memset(buf, 0x0, page_size);
+    buffer_is_zero_dsa_batch_sync(task,
+                                  (const void **)&ptr,
+                                  1, page_size);
+    g_assert(task->results[0]);
+
+    buffer_zero_batch_task_destroy(task);
+
+    qemu_dsa_cleanup();
+}
+
+static void test_single_zero_async(void)
+{
+    test_single_zero();
+}
+
+static void test_single_nonzero(void)
+{
+    g_assert(!qemu_dsa_init(path1, errp));
+    qemu_dsa_start();
+
+    task = buffer_zero_batch_task_init(batch_size);
+
+    char buf[page_size];
+    char *ptr = buf;
+
+    memset(buf, 0x1, page_size);
+    buffer_is_zero_dsa_batch_sync(task,
+                                  (const void **)&ptr,
+                                  1, page_size);
+    g_assert(!task->results[0]);
+
+    buffer_zero_batch_task_destroy(task);
+
+    qemu_dsa_cleanup();
+}
+
+static void test_single_nonzero_async(void)
+{
+    test_single_nonzero();
+}
+
+/* count == 0 should return quickly without calling into DSA. */
+static void test_zero_count_async(void)
+{
+    char buf[page_size];
+    buffer_is_zero_dsa_batch_sync(task,
+                                  (const void **)&buf,
+                                  0,
+                                  page_size);
+}
+
+static void test_null_task_async(void)
+{
+    if (g_test_subprocess()) {
+        g_assert(!qemu_dsa_init(path1, errp));
+
+        char buf[page_size * batch_size];
+        char *addrs[batch_size];
+        for (int i = 0; i < batch_size; i++) {
+            addrs[i] = buf + (page_size * i);
+        }
+
+        buffer_is_zero_dsa_batch_sync(NULL, (const void **)addrs,
+                                      batch_size,
+                                      page_size);
+    } else {
+        g_test_trap_subprocess(NULL, 0, 0);
+        g_test_trap_assert_failed();
+    }
+}
+
+static void test_oversized_batch(void)
+{
+    g_assert(!qemu_dsa_init(path1, errp));
+    qemu_dsa_start();
+
+    task = buffer_zero_batch_task_init(batch_size);
+
+    char buf[page_size * oversized_batch_size];
+    /* Sized for the oversized batch so the fill loop below stays in bounds. */
+    char *addrs[oversized_batch_size];
+    for (int i = 0; i < oversized_batch_size; i++) {
+        addrs[i] = buf + (page_size * i);
+    }
+
+    int ret = buffer_is_zero_dsa_batch_sync(task,
+                                            (const void **)addrs,
+                                            oversized_batch_size,
+                                            page_size);
+    g_assert(ret != 0);
+
+    buffer_zero_batch_task_destroy(task);
+
+    qemu_dsa_cleanup();
+}
+
+static void test_oversized_batch_async(void)
+{
+    test_oversized_batch();
+}
+
+static void test_zero_len_async(void)
+{
+    if (g_test_subprocess()) {
+        g_assert(!qemu_dsa_init(path1, errp));
+
+        task = buffer_zero_batch_task_init(batch_size);
+
+        char buf[page_size];
+
+        buffer_is_zero_dsa_batch_sync(task,
+                                      (const void **)&buf,
+                                      1,
+                                      0);
+
+        buffer_zero_batch_task_destroy(task);
+    } else {
+        g_test_trap_subprocess(NULL, 0, 0);
+        g_test_trap_assert_failed();
+    }
+}
+
+static void test_null_buf_async(void)
+{
+    if (g_test_subprocess()) {
+        g_assert(!qemu_dsa_init(path1, errp));
+
+        task = buffer_zero_batch_task_init(batch_size);
+
+        buffer_is_zero_dsa_batch_sync(task, NULL, 1, page_size);
+
+        buffer_zero_batch_task_destroy(task);
+    } else {
+        g_test_trap_subprocess(NULL, 0, 0);
+        g_test_trap_assert_failed();
+    }
+}
+
+static void test_batch(void)
+{
+    g_assert(!qemu_dsa_init(path1, errp));
+    qemu_dsa_start();
+
+    task = buffer_zero_batch_task_init(batch_size);
+
+    char buf[page_size * batch_size];
+    char *addrs[batch_size];
+    for (int i = 0; i < batch_size; i++) {
+        addrs[i] = buf + (page_size * i);
+    }
+
+    /*
+     * Using whatever is on the stack is somewhat random.
+     * Manually set some pages to zero and some to nonzero.
+     */
+    memset(buf + 0, 0, page_size * 10);
+    memset(buf + (10 * page_size), 0xff, page_size * 10);
+
+    buffer_is_zero_dsa_batch_sync(task,
+                                  (const void **)addrs,
+                                  batch_size,
+                                  page_size);
+
+    bool is_zero;
+    for (int i = 0; i < batch_size; i++) {
+        is_zero = buffer_is_zero((const void *)&buf[page_size * i], page_size);
+        g_assert(task->results[i] == is_zero);
+    }
+
+    buffer_zero_batch_task_destroy(task);
+
+    qemu_dsa_cleanup();
+}
+
+static void test_batch_async(void)
+{
+    test_batch();
+}
+
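+/*
+ * Run the batch over both an mmap'd MAP_ANON buffer, whose pages have
+ * not been faulted in yet, and an ordinary malloc'd buffer, so the DSA
+ * page-fault handling (CPU fallback) path is exercised as well.
+ */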
+static void test_page_fault(void)
+{
+    g_assert(!qemu_dsa_init(path1, errp));
+    qemu_dsa_start();
+
+    char *buf[2];
+    int prot = PROT_READ | PROT_WRITE;
+    int flags = MAP_SHARED | MAP_ANON;
+    buf[0] = (char *)mmap(NULL, page_size * batch_size, prot, flags, -1, 0);
+    g_assert(buf[0] != MAP_FAILED);
+    buf[1] = (char *)malloc(page_size * batch_size);
+    g_assert(buf[1] != NULL);
+
+    for (int j = 0; j < 2; j++) {
+        task = buffer_zero_batch_task_init(batch_size);
+
+        char *addrs[batch_size];
+        for (int i = 0; i < batch_size; i++) {
+            addrs[i] = buf[j] + (page_size * i);
+        }
+
+        buffer_is_zero_dsa_batch_sync(task,
+                                      (const void **)addrs,
+                                      batch_size,
+                                      page_size);
+
+        bool is_zero;
+        for (int i = 0; i < batch_size; i++) {
+            is_zero = buffer_is_zero((const void *)&buf[j][page_size * i],
+                                      page_size);
+            g_assert(task->results[i] == is_zero);
+        }
+        buffer_zero_batch_task_destroy(task);
+    }
+
+    g_assert(!munmap(buf[0], page_size * batch_size));
+    free(buf[1]);
+    qemu_dsa_cleanup();
+}
+
+static void test_various_buffer_sizes(void)
+{
+    g_assert(!qemu_dsa_init(path1, errp));
+    qemu_dsa_start();
+
+    char *buf = malloc(max_buffer_size * batch_size);
+    g_assert(buf != NULL);
+    char *addrs[batch_size];
+
+    for (int len = 16; len <= max_buffer_size; len *= 2) {
+        task = buffer_zero_batch_task_init(batch_size);
+
+        for (int i = 0; i < batch_size; i++) {
+            addrs[i] = buf + (len * i);
+        }
+
+        buffer_is_zero_dsa_batch_sync(task,
+                                      (const void **)addrs,
+                                      batch_size,
+                                      len);
+
+        bool is_zero;
+        for (int j = 0; j < batch_size; j++) {
+            is_zero = buffer_is_zero((const void *)&buf[len * j], len);
+            g_assert(task->results[j] == is_zero);
+        }
+
+        buffer_zero_batch_task_destroy(task);
+    }
+
+    free(buf);
+
+    qemu_dsa_cleanup();
+}
+
+static void test_various_buffer_sizes_async(void)
+{
+    test_various_buffer_sizes();
+}
+
+static void test_double_start_stop(void)
+{
+    g_assert(!qemu_dsa_init(path1, errp));
+    /* Double start */
+    qemu_dsa_start();
+    qemu_dsa_start();
+    g_assert(qemu_dsa_is_running());
+    do_single_task();
+
+    /* Double stop */
+    qemu_dsa_stop();
+    g_assert(!qemu_dsa_is_running());
+    qemu_dsa_stop();
+    g_assert(!qemu_dsa_is_running());
+
+    /* Restart */
+    qemu_dsa_start();
+    g_assert(qemu_dsa_is_running());
+    do_single_task();
+    qemu_dsa_cleanup();
+}
+
+static void test_is_running(void)
+{
+    g_assert(!qemu_dsa_init(path1, errp));
+
+    g_assert(!qemu_dsa_is_running());
+    qemu_dsa_start();
+    g_assert(qemu_dsa_is_running());
+    qemu_dsa_stop();
+    g_assert(!qemu_dsa_is_running());
+    qemu_dsa_cleanup();
+}
+
+static void test_multiple_engines(void)
+{
+    g_assert(!qemu_dsa_init(path2, errp));
+    qemu_dsa_start();
+
+    QemuDsaBatchTask *tasks[num_devices];
+    char bufs[num_devices][page_size * batch_size];
+    char *addrs[num_devices][batch_size];
+
+    /*
+     *  This is a somewhat implementation-specific way
+     *  of testing that the tasks have unique engines
+     *  assigned to them.
+     */
+    tasks[0] = buffer_zero_batch_task_init(batch_size);
+    tasks[1] = buffer_zero_batch_task_init(batch_size);
+    g_assert(tasks[0]->device != tasks[1]->device);
+
+    for (int i = 0; i < num_devices; i++) {
+        for (int j = 0; j < batch_size; j++) {
+            addrs[i][j] = bufs[i] + (page_size * j);
+        }
+
+        buffer_is_zero_dsa_batch_sync(tasks[i],
+                                      (const void **)addrs[i],
+                                      batch_size, page_size);
+
+        bool is_zero;
+        for (int j = 0; j < batch_size; j++) {
+            is_zero = buffer_is_zero((const void *)&bufs[i][page_size * j],
+                                     page_size);
+            g_assert(tasks[i]->results[j] == is_zero);
+        }
+    }
+
+    buffer_zero_batch_task_destroy(tasks[0]);
+    buffer_zero_batch_task_destroy(tasks[1]);
+
+    qemu_dsa_cleanup();
+}
+
+static void test_configure_dsa_twice(void)
+{
+    g_assert(!qemu_dsa_init(path2, errp));
+    g_assert(!qemu_dsa_init(path2, errp));
+    qemu_dsa_start();
+    do_single_task();
+    qemu_dsa_cleanup();
+}
+
+static void test_configure_dsa_bad_path(void)
+{
+    const strList *bad_path = &(strList) {
+        .value = (char *)"/not/a/real/path", .next = NULL
+    };
+    g_assert(qemu_dsa_init(bad_path, errp));
+}
+
+static void test_cleanup_before_configure(void)
+{
+    qemu_dsa_cleanup();
+    g_assert(!qemu_dsa_init(path2, errp));
+}
+
+static void test_configure_dsa_num_devices(void)
+{
+    g_assert(!qemu_dsa_init(path1, errp));
+    qemu_dsa_start();
+
+    do_single_task();
+    qemu_dsa_stop();
+    qemu_dsa_cleanup();
+}
+
+static void test_cleanup_twice(void)
+{
+    g_assert(!qemu_dsa_init(path2, errp));
+    qemu_dsa_cleanup();
+    qemu_dsa_cleanup();
+
+    g_assert(!qemu_dsa_init(path2, errp));
+    qemu_dsa_start();
+    do_single_task();
+    qemu_dsa_cleanup();
+}
+
+static int check_test_setup(void)
+{
+    const strList *path[2] = {path1, path2};
+    for (int i = 0; i < ARRAY_SIZE(path); i++) {
+        if (qemu_dsa_init(path[i], errp)) {
+            return -1;
+        }
+        qemu_dsa_cleanup();
+    }
+    return 0;
+}
+
+int main(int argc, char **argv)
+{
+    g_test_init(&argc, &argv, NULL);
+
+    if (check_test_setup() != 0) {
+        /*
+         * This test requires DSA work queues to be set up beforehand.
+         * They are not available here, so skip the test.
+         */
+        exit(0);
+    }
+
+    if (num_devices > 1) {
+        g_test_add_func("/dsa/multiple_engines", test_multiple_engines);
+    }
+
+    g_test_add_func("/dsa/async/batch", test_batch_async);
+    g_test_add_func("/dsa/async/various_buffer_sizes",
+                    test_various_buffer_sizes_async);
+    g_test_add_func("/dsa/async/null_buf", test_null_buf_async);
+    g_test_add_func("/dsa/async/zero_len", test_zero_len_async);
+    g_test_add_func("/dsa/async/oversized_batch", test_oversized_batch_async);
+    g_test_add_func("/dsa/async/zero_count", test_zero_count_async);
+    g_test_add_func("/dsa/async/single_zero", test_single_zero_async);
+    g_test_add_func("/dsa/async/single_nonzero", test_single_nonzero_async);
+    g_test_add_func("/dsa/async/null_task", test_null_task_async);
+    g_test_add_func("/dsa/async/page_fault", test_page_fault);
+
+    g_test_add_func("/dsa/double_start_stop", test_double_start_stop);
+    g_test_add_func("/dsa/is_running", test_is_running);
+
+    g_test_add_func("/dsa/configure_dsa_twice", test_configure_dsa_twice);
+    g_test_add_func("/dsa/configure_dsa_bad_path", test_configure_dsa_bad_path);
+    g_test_add_func("/dsa/cleanup_before_configure",
+                    test_cleanup_before_configure);
+    g_test_add_func("/dsa/configure_dsa_num_devices",
+                    test_configure_dsa_num_devices);
+    g_test_add_func("/dsa/cleanup_twice", test_cleanup_twice);
+
+    return g_test_run();
+}
-- 
Yichen Wang
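
Note: when the hardcoded work queues are absent, check_test_setup() makes
the binary exit(0) before any tests are registered, so it reads as a pass
rather than a skip.  On a configured host the test can be run on its own,
e.g. (assuming the usual build layout):

    meson test -C build test-dsa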



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v6 12/12] migration/multifd: Add integration tests for multifd with Intel DSA offloading.
  2024-10-09 23:45 [PATCH v6 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration Yichen Wang
                   ` (10 preceding siblings ...)
  2024-10-09 23:46 ` [PATCH v6 11/12] util/dsa: Add unit test coverage for Intel DSA task submission and completion Yichen Wang
@ 2024-10-09 23:46 ` Yichen Wang
  2024-10-11 14:13 ` [PATCH v6 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration Fabiano Rosas
  2024-10-11 16:32 ` Peter Xu
  13 siblings, 0 replies; 26+ messages in thread
From: Yichen Wang @ 2024-10-09 23:46 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Paolo Bonzini, Marc-André Lureau,
	Daniel P. Berrangé, Philippe Mathieu-Daudé, Peter Xu,
	Fabiano Rosas, Eric Blake, Markus Armbruster, Michael S. Tsirkin,
	Cornelia Huck, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Shivam Kumar, Ho-Ren (Jack) Chuang,
	Yichen Wang, Bryan Zhang

From: Hao Xiang <hao.xiang@linux.dev>

* Add a test case that starts and completes multifd live migration with
DSA offloading enabled.
* Add a test case that starts and cancels multifd live migration with
DSA offloading enabled.

Signed-off-by: Bryan Zhang <bryan.zhang@bytedance.com>
Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
---
 tests/qtest/migration-test.c | 80 +++++++++++++++++++++++++++++++++++-
 1 file changed, 79 insertions(+), 1 deletion(-)

diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index 814ec109a6..47a9dd8c50 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -611,6 +611,13 @@ typedef struct {
     bool suspend_me;
 } MigrateStart;
 
+/*
+ * Configuring and enabling a DSA device requires separate steps.
+ * This test assumes that the configuration is done already.
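+ * For example, with the accel-config tool (illustrative device, queue
+ * and option values; adjust to the local setup):
+ *   accel-config config-engine dsa4/engine4.0 --group-id=0
+ *   accel-config config-wq dsa4/wq4.0 --group-id=0 --mode=shared \
+ *       --type=user --name=qemu --priority=10 --wq-size=16 --threshold=15
+ *   accel-config enable-device dsa4
+ *   accel-config enable-wq dsa4/wq4.0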
+ */
+static const char *dsa_dev_path_p = "['/dev/dsa/wq4.0']";
+static const char *dsa_dev_path = "/dev/dsa/wq4.0";
+
 /*
  * A hook that runs after the src and dst QEMUs have been
  * created, but before the migration is started. This can
@@ -3230,7 +3237,7 @@ static void test_multifd_tcp_tls_x509_reject_anon_client(void)
  *
  *  And see that it works
  */
-static void test_multifd_tcp_cancel(void)
+static void test_multifd_tcp_cancel_common(bool use_dsa)
 {
     MigrateStart args = {
         .hide_stderr = true,
@@ -3250,6 +3257,11 @@ static void test_multifd_tcp_cancel(void)
     migrate_set_capability(from, "multifd", true);
     migrate_set_capability(to, "multifd", true);
 
+    if (use_dsa) {
+        migrate_set_parameter_str(from, "zero-page-detection", "dsa-accel");
+        migrate_set_parameter_str(from, "dsa-accel-path", dsa_dev_path_p);
+    }
+
     /* Start incoming migration from the 1st socket */
     migrate_incoming_qmp(to, "tcp:127.0.0.1:0", "{}");
 
@@ -3300,6 +3312,49 @@ static void test_multifd_tcp_cancel(void)
     test_migrate_end(from, to2, true);
 }
 
+/*
+ * This test does:
+ *  source               target
+ *                       migrate_incoming
+ *     migrate
+ *     migrate_cancel
+ *                       launch another target
+ *     migrate
+ *
+ *  And see that it works
+ */
+static void test_multifd_tcp_cancel(void)
+{
+    test_multifd_tcp_cancel_common(false);
+}
+
+#ifdef CONFIG_DSA_OPT
+
+static void *test_migrate_precopy_tcp_multifd_start_dsa(QTestState *from,
+                                                        QTestState *to)
+{
+    migrate_set_parameter_str(from, "zero-page-detection", "dsa-accel");
+    migrate_set_parameter_str(from, "dsa-accel-path", dsa_dev_path_p);
+    return test_migrate_precopy_tcp_multifd_start_common(from, to, "none");
+}
+
+static void test_multifd_tcp_zero_page_dsa(void)
+{
+    MigrateCommon args = {
+        .listen_uri = "defer",
+        .start_hook = test_migrate_precopy_tcp_multifd_start_dsa,
+    };
+
+    test_precopy_common(&args);
+}
+
+static void test_multifd_tcp_cancel_dsa(void)
+{
+    test_multifd_tcp_cancel_common(true);
+}
+
+#endif
+
 static void calc_dirty_rate(QTestState *who, uint64_t calc_time)
 {
     qtest_qmp_assert_success(who,
@@ -3727,6 +3782,19 @@ static bool kvm_dirty_ring_supported(void)
 #endif
 }
 
+#ifdef CONFIG_DSA_OPT
+static int test_dsa_setup(void)
+{
+    int fd;
+    fd = open(dsa_dev_path, O_RDWR);
+    if (fd < 0) {
+        return -1;
+    }
+    close(fd);
+    return 0;
+}
+#endif
+
 int main(int argc, char **argv)
 {
     bool has_kvm, has_tcg;
@@ -3939,6 +4007,16 @@ int main(int argc, char **argv)
                        test_multifd_tcp_zero_page_legacy);
     migration_test_add("/migration/multifd/tcp/plain/zero-page/none",
                        test_multifd_tcp_no_zero_page);
+
+#ifdef CONFIG_DSA_OPT
+    if (g_str_equal(arch, "x86_64") && test_dsa_setup() == 0) {
+        migration_test_add("/migration/multifd/tcp/plain/zero-page/dsa",
+                       test_multifd_tcp_zero_page_dsa);
+        migration_test_add("/migration/multifd/tcp/plain/cancel/dsa",
+                       test_multifd_tcp_cancel_dsa);
+    }
+#endif
+
     migration_test_add("/migration/multifd/tcp/plain/cancel",
                        test_multifd_tcp_cancel);
     migration_test_add("/migration/multifd/tcp/plain/zlib",
-- 
Yichen Wang



^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration.
  2024-10-09 23:45 [PATCH v6 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration Yichen Wang
                   ` (11 preceding siblings ...)
  2024-10-09 23:46 ` [PATCH v6 12/12] migration/multifd: Add integration tests for multifd with Intel DSA offloading Yichen Wang
@ 2024-10-11 14:13 ` Fabiano Rosas
  2024-10-15 22:05   ` [External] " Yichen Wang
  2024-10-11 16:32 ` Peter Xu
  13 siblings, 1 reply; 26+ messages in thread
From: Fabiano Rosas @ 2024-10-11 14:13 UTC (permalink / raw)
  To: Yichen Wang, Dr. David Alan Gilbert, Paolo Bonzini,
	Marc-André Lureau, Daniel P. Berrangé,
	Philippe Mathieu-Daudé, Peter Xu, Eric Blake,
	Markus Armbruster, Michael S. Tsirkin, Cornelia Huck, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Shivam Kumar, Ho-Ren (Jack) Chuang,
	Yichen Wang

Yichen Wang <yichen.wang@bytedance.com> writes:

> v6
> * Rebase on top of 838fc0a8769d7cc6edfe50451ba4e3368395f5c1;
> * Refactor code to have clean history on all commits;
> * Add comments on DSA specific defines about how the value is picked;
> * Address all comments from v5 reviews about api defines, questions, etc.;
>
> v5
> * Rebase on top of 39a032cea23e522268519d89bb738974bc43b6f6.
> * Rename struct definitions with typedef and CamelCase names;
> * Add build and runtime checks about DSA accelerator;
> * Address all comments from v4 reviews about typos, licenses, comments,
> error reporting, etc.
>
> v4
> * Rebase on top of 85b597413d4370cb168f711192eaef2eb70535ac.
> * A separate "multifd zero page checking" patchset was split from this
> patchset's v3 and got merged into master. v4 re-applied the rest of all
> commits on top of that patchset, re-factored and re-tested.
> https://lore.kernel.org/all/20240311180015.3359271-1-hao.xiang@linux.dev/
> * There are some feedback from v3 I likely overlooked.
>
> v3
> * Rebase on top of 7425b6277f12e82952cede1f531bfc689bf77fb1.
> * Fix error/warning from checkpatch.pl
> * Fix use-after-free bug when multifd-dsa-accel option is not set.
> * Handle error from dsa_init and correctly propogate the error.
> * Remove unnecessary call to dsa_stop.
> * Detect availability of DSA feature at compile time.
> * Implement a generic batch_task structure and a DSA specific one dsa_batch_task.
> * Remove all exit() calls and propagate errors correctly.
> * Use bytes instead of page count to configure multifd-packet-size option.
>
> v2
> * Rebase on top of 3e01f1147a16ca566694b97eafc941d62fa1e8d8.
> * Leave Juan's changes in their original form instead of squashing them.
> * Add a new commit to refactor the multifd_send_thread function to prepare for introducing the DSA offload functionality.
> * Use page count to configure multifd-packet-size option.
> * Don't use the FLAKY flag in DSA tests.
> * Test if DSA integration test is setup correctly and skip the test if
> * not.
> * Fixed broken link in the previous patch cover.
>
> * Background:
>
> I posted an RFC about DSA offloading in QEMU:
> https://patchew.org/QEMU/20230529182001.2232069-1-hao.xiang@bytedance.com/
>
> This patchset implements the DSA offloading on zero page checking in
> multifd live migration code path.
>
> * Overview:
>
> Intel Data Streaming Accelerator(DSA) is introduced in Intel's 4th generation
> Xeon server, aka Sapphire Rapids.
> https://cdrdv2-public.intel.com/671116/341204-intel-data-streaming-accelerator-spec.pdf
> https://www.intel.com/content/www/us/en/content-details/759709/intel-data-streaming-accelerator-user-guide.html
> One of the things DSA can do is to offload memory comparison workload from
> CPU to DSA accelerator hardware. This patchset implements a solution to offload
> QEMU's zero page checking from CPU to DSA accelerator hardware. We gain
> two benefits from this change:
> 1. Reduces CPU usage in multifd live migration workflow across all use
> cases.
> 2. Reduces migration total time in some use cases. 
>
> * Design:
>
> These are the logical steps to perform DSA offloading:
> 1. Configure DSA accelerators and create user space openable DSA work
> queues via the idxd driver.
> 2. Map DSA's work queue into a user space address space.
> 3. Fill an in-memory task descriptor to describe the memory operation.
> 4. Use dedicated CPU instruction _enqcmd to queue a task descriptor to
> the work queue.
> 5. Pull the task descriptor's completion status field until the task
> completes.
> 6. Check return status.
>
> The memory operation is now totally done by the accelerator hardware but
> the new workflow introduces overheads. The overhead is the extra cost CPU
> prepares and submits the task descriptors and the extra cost CPU pulls for
> completion. The design is around minimizing these two overheads.
>
> 1. In order to reduce the overhead on task preparation and submission,
> we use batch descriptors. A batch descriptor will contain N individual
> zero page checking tasks where the default N is 128 (default packet size
> / page size) and we can increase N by setting the packet size via a new
> migration option.
> 2. The multifd sender threads prepares and submits batch tasks to DSA
> hardware and it waits on a synchronization object for task completion.
> Whenever a DSA task is submitted, the task structure is added to a
> thread safe queue. It's safe to have multiple multifd sender threads to
> submit tasks concurrently.
> 3. Multiple DSA hardware devices can be used. During multifd initialization,
> every sender thread will be assigned a DSA device to work with. We
> use a round-robin scheme to evenly distribute the work across all used
> DSA devices.
> 4. Use a dedicated thread dsa_completion to perform busy pulling for all
> DSA task completions. The thread keeps dequeuing DSA tasks from the
> thread safe queue. The thread blocks when there is no outstanding DSA
> task. When pulling for completion of a DSA task, the thread uses CPU
> instruction _mm_pause between the iterations of a busy loop to save some
> CPU power as well as optimizing core resources for the other hypercore.
> 5. DSA accelerator can encounter errors. The most popular error is a
> page fault. We have tested using devices to handle page faults but
> performance is bad. Right now, if DSA hits a page fault, we fallback to
> use CPU to complete the rest of the work. The CPU fallback is done in
> the multifd sender thread.
> 6. Added a new migration option multifd-dsa-accel to set the DSA device
> path. If set, the multifd workflow will leverage the DSA devices for
> offloading.
> 7. Added a new migration option multifd-normal-page-ratio to make
> multifd live migration easier to test. Setting a normal page ratio will
> make live migration recognize a zero page as a normal page and send
> the entire payload over the network. If we want to send a large network
> payload and analyze throughput, this option is useful.
> 8. Added a new migration option multifd-packet-size. This can increase
> the number of pages being zero page checked and sent over the network.
> The extra synchronization between the sender threads and the dsa
> completion thread is an overhead. Using a large packet size can reduce
> that overhead.
>
> * Performance:
>
> We use two Intel 4th generation Xeon servers for testing.
>
> Architecture:        x86_64
> CPU(s):              192
> Thread(s) per core:  2
> Core(s) per socket:  48
> Socket(s):           2
> NUMA node(s):        2
> Vendor ID:           GenuineIntel
> CPU family:          6
> Model:               143
> Model name:          Intel(R) Xeon(R) Platinum 8457C
> Stepping:            8
> CPU MHz:             2538.624
> CPU max MHz:         3800.0000
> CPU min MHz:         800.0000
>
> We perform multifd live migration with below setup:
> 1. VM has 100GB memory. 
> 2. Use the new migration option multifd-set-normal-page-ratio to control the total
> size of the payload sent over the network.
> 3. Use 8 multifd channels.
> 4. Use tcp for live migration.
> 4. Use CPU to perform zero page checking as the baseline.
> 5. Use one DSA device to offload zero page checking to compare with the baseline.
> 6. Use "perf sched record" and "perf sched timehist" to analyze CPU usage.
>
> A) Scenario 1: 50% (50GB) normal pages on an 100GB vm.
>
> 	CPU usage
>
> 	|---------------|---------------|---------------|---------------|
> 	|		|comm		|runtime(msec)	|totaltime(msec)|
> 	|---------------|---------------|---------------|---------------|
> 	|Baseline	|live_migration	|5657.58	|		|
> 	|		|multifdsend_0	|3931.563	|		|
> 	|		|multifdsend_1	|4405.273	|		|
> 	|		|multifdsend_2	|3941.968	|		|
> 	|		|multifdsend_3	|5032.975	|		|
> 	|		|multifdsend_4	|4533.865	|		|
> 	|		|multifdsend_5	|4530.461	|		|
> 	|		|multifdsend_6	|5171.916	|		|
> 	|		|multifdsend_7	|4722.769	|41922		|
> 	|---------------|---------------|---------------|---------------|
> 	|DSA		|live_migration	|6129.168	|		|
> 	|		|multifdsend_0	|2954.717	|		|
> 	|		|multifdsend_1	|2766.359	|		|
> 	|		|multifdsend_2	|2853.519	|		|
> 	|		|multifdsend_3	|2740.717	|		|
> 	|		|multifdsend_4	|2824.169	|		|
> 	|		|multifdsend_5	|2966.908	|		|
> 	|		|multifdsend_6	|2611.137	|		|
> 	|		|multifdsend_7	|3114.732	|		|
> 	|		|dsa_completion	|3612.564	|32568		|
> 	|---------------|---------------|---------------|---------------|
>
> Baseline total runtime is calculated by adding up all multifdsend_X
> and live_migration threads runtime. DSA offloading total runtime is
> calculated by adding up all multifdsend_X, live_migration and
> dsa_completion threads runtime. 41922 msec VS 32568 msec runtime and
> that is 23% total CPU usage savings.
>
> 	Latency
> 	|---------------|---------------|---------------|---------------|---------------|---------------|
> 	|		|total time	|down time	|throughput	|transferred-ram|total-ram	|
> 	|---------------|---------------|---------------|---------------|---------------|---------------|	
> 	|Baseline	|10343 ms	|161 ms		|41007.00 mbps	|51583797 kb	|102400520 kb	|
> 	|---------------|---------------|---------------|---------------|-------------------------------|
> 	|DSA offload	|9535 ms	|135 ms		|46554.40 mbps	|53947545 kb	|102400520 kb	|	
> 	|---------------|---------------|---------------|---------------|---------------|---------------|
>
> Total time is 8% faster and down time is 16% faster.
>
> B) Scenario 2: 100% (100GB) zero pages on an 100GB vm.
>
> 	CPU usage
> 	|---------------|---------------|---------------|---------------|
> 	|		|comm		|runtime(msec)	|totaltime(msec)|
> 	|---------------|---------------|---------------|---------------|
> 	|Baseline	|live_migration	|4860.718	|		|
> 	|	 	|multifdsend_0	|748.875	|		|
> 	|		|multifdsend_1	|898.498	|		|
> 	|		|multifdsend_2	|787.456	|		|
> 	|		|multifdsend_3	|764.537	|		|
> 	|		|multifdsend_4	|785.687	|		|
> 	|		|multifdsend_5	|756.941	|		|
> 	|		|multifdsend_6	|774.084	|		|
> 	|		|multifdsend_7	|782.900	|11154		|
> 	|---------------|---------------|-------------------------------|
> 	|DSA offloading	|live_migration	|3846.976	|		|
> 	|		|multifdsend_0	|191.880	|		|
> 	|		|multifdsend_1	|166.331	|		|
> 	|		|multifdsend_2	|168.528	|		|
> 	|		|multifdsend_3	|197.831	|		|
> 	|		|multifdsend_4	|169.580	|		|
> 	|		|multifdsend_5	|167.984	|		|
> 	|		|multifdsend_6	|198.042	|		|
> 	|		|multifdsend_7	|170.624	|		|
> 	|		|dsa_completion	|3428.669	|8700		|
> 	|---------------|---------------|---------------|---------------|
>
> Baseline total runtime is 11154 msec and DSA offloading total runtime is
> 8700 msec. That is 22% CPU savings.
>
> 	Latency
> 	|--------------------------------------------------------------------------------------------|
> 	|		|total time	|down time	|throughput	|transferred-ram|total-ram   |
> 	|---------------|---------------|---------------|---------------|---------------|------------|	
> 	|Baseline	|4867 ms	|20 ms		|1.51 mbps	|565 kb		|102400520 kb|
> 	|---------------|---------------|---------------|---------------|----------------------------|
> 	|DSA offload	|3888 ms	|18 ms		|1.89 mbps	|565 kb		|102400520 kb|	
> 	|---------------|---------------|---------------|---------------|---------------|------------|
>
> Total time 20% faster and down time 10% faster.
>
> * Testing:
>
> 1. Added unit tests for cover the added code path in dsa.c
> 2. Added integration tests to cover multifd live migration using DSA
> offloading.
>
>
> Hao Xiang (11):
>   meson: Introduce new instruction set enqcmd to the build system.
>   util/dsa: Implement DSA device start and stop logic.
>   util/dsa: Implement DSA task enqueue and dequeue.
>   util/dsa: Implement DSA task asynchronous completion thread model.
>   util/dsa: Implement zero page checking in DSA task.
>   util/dsa: Implement DSA task asynchronous submission and wait for
>     completion.
>   migration/multifd: Add new migration option for multifd DSA
>     offloading.
>   migration/multifd: Enable DSA offloading in multifd sender path.
>   migration/multifd: Add migration option set packet size.
>   util/dsa: Add unit test coverage for Intel DSA task submission and
>     completion.
>   migration/multifd: Add integration tests for multifd with Intel DSA
>     offloading.
>
> Yichen Wang (1):
>   util/dsa: Add idxd into linux header copy list.
>
>  hmp-commands.hx                 |    2 +-
>  include/qemu/dsa.h              |  189 ++++++
>  meson.build                     |   14 +
>  meson_options.txt               |    2 +
>  migration/migration-hmp-cmds.c  |   26 +-
>  migration/multifd-zero-page.c   |  133 +++-
>  migration/multifd-zlib.c        |    6 +-
>  migration/multifd-zstd.c        |    6 +-
>  migration/multifd.c             |   19 +-
>  migration/multifd.h             |    5 +
>  migration/options.c             |   69 ++
>  migration/options.h             |    2 +
>  qapi/migration.json             |   49 +-
>  scripts/meson-buildoptions.sh   |    3 +
>  scripts/update-linux-headers.sh |    2 +-
>  tests/qtest/migration-test.c    |   80 ++-
>  tests/unit/meson.build          |    6 +
>  tests/unit/test-dsa.c           |  503 ++++++++++++++
>  util/dsa.c                      | 1114 +++++++++++++++++++++++++++++++
>  util/meson.build                |    3 +
>  20 files changed, 2204 insertions(+), 29 deletions(-)
>  create mode 100644 include/qemu/dsa.h
>  create mode 100644 tests/unit/test-dsa.c
>  create mode 100644 util/dsa.c

Still doesn't build without DSA:

qemu/include/qemu/dsa.h: In function
‘buffer_is_zero_dsa_batch_sync’:
/home/fabiano/kvm/qemu/include/qemu/dsa.h:183:16: error: ‘errp’
undeclared (first use in this function); did you mean ‘errno’?

     error_setg(errp, "DSA accelerator is not enabled.");
                ^
qemu/include/qapi/error.h:318:26: note: in definition of macro ‘error_setg’
     error_setg_internal((errp), __FILE__, __LINE__, __func__,   \
                          ^~~~
qemu/include/qemu/dsa.h:183:16: note: each undeclared identifier is reported only once for each function it appears in
     error_setg(errp, "DSA accelerator is not enabled.");
                ^
qemu/include/qapi/error.h:318:26: note: in definition of macro ‘error_setg’
     error_setg_internal((errp), __FILE__, __LINE__, __func__,   \
                          ^~~~
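
The stub in include/qemu/dsa.h references errp without declaring it.  A
minimal sketch of one possible shape for the !CONFIG_DSA_OPT stub
(signature and types assumed from the callers in tests/unit/test-dsa.c;
not the actual fix in this series):

    static inline int
    buffer_is_zero_dsa_batch_sync(QemuDsaBatchTask *batch_task,
                                  const void **buf, int count, int len)
    {
        /* No Error ** is in scope here, so don't call error_setg(). */
        return -1;
    }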


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration.
  2024-10-09 23:45 [PATCH v6 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration Yichen Wang
                   ` (12 preceding siblings ...)
  2024-10-11 14:13 ` [PATCH v6 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration Fabiano Rosas
@ 2024-10-11 16:32 ` Peter Xu
  2024-10-11 16:53   ` Dr. David Alan Gilbert
  2024-10-15 22:02   ` [External] " Yichen Wang
  13 siblings, 2 replies; 26+ messages in thread
From: Peter Xu @ 2024-10-11 16:32 UTC (permalink / raw)
  To: Yichen Wang
  Cc: Dr. David Alan Gilbert, Paolo Bonzini, Marc-André Lureau,
	Daniel P. Berrangé, Philippe Mathieu-Daudé,
	Fabiano Rosas, Eric Blake, Markus Armbruster, Michael S. Tsirkin,
	Cornelia Huck, qemu-devel, Hao Xiang, Liu, Yuan1, Shivam Kumar,
	Ho-Ren (Jack) Chuang

On Wed, Oct 09, 2024 at 04:45:58PM -0700, Yichen Wang wrote:
> v6
> * Rebase on top of 838fc0a8769d7cc6edfe50451ba4e3368395f5c1;
> * Refactor code to have clean history on all commits;
> * Add comments on DSA specific defines about how the value is picked;
> * Address all comments from v5 reviews about api defines, questions, etc.;
> 
> v5
> * Rebase on top of 39a032cea23e522268519d89bb738974bc43b6f6.
> * Rename struct definitions with typedef and CamelCase names;
> * Add build and runtime checks about DSA accelerator;
> * Address all comments from v4 reviews about typos, licenses, comments,
> error reporting, etc.
> 
> v4
> * Rebase on top of 85b597413d4370cb168f711192eaef2eb70535ac.
> * A separate "multifd zero page checking" patchset was split from this
> patchset's v3 and got merged into master. v4 re-applied the rest of all
> commits on top of that patchset, re-factored and re-tested.
> https://lore.kernel.org/all/20240311180015.3359271-1-hao.xiang@linux.dev/
> * There are some feedback from v3 I likely overlooked.
> 
> v3
> * Rebase on top of 7425b6277f12e82952cede1f531bfc689bf77fb1.
> * Fix error/warning from checkpatch.pl
> * Fix use-after-free bug when multifd-dsa-accel option is not set.
> * Handle error from dsa_init and correctly propogate the error.
> * Remove unnecessary call to dsa_stop.
> * Detect availability of DSA feature at compile time.
> * Implement a generic batch_task structure and a DSA specific one dsa_batch_task.
> * Remove all exit() calls and propagate errors correctly.
> * Use bytes instead of page count to configure multifd-packet-size option.
> 
> v2
> * Rebase on top of 3e01f1147a16ca566694b97eafc941d62fa1e8d8.
> * Leave Juan's changes in their original form instead of squashing them.
> * Add a new commit to refactor the multifd_send_thread function to prepare for introducing the DSA offload functionality.
> * Use page count to configure multifd-packet-size option.
> * Don't use the FLAKY flag in DSA tests.
> * Test if DSA integration test is setup correctly and skip the test if
> * not.
> * Fixed broken link in the previous patch cover.
> 
> * Background:
> 
> I posted an RFC about DSA offloading in QEMU:
> https://patchew.org/QEMU/20230529182001.2232069-1-hao.xiang@bytedance.com/
> 
> This patchset implements the DSA offloading on zero page checking in
> multifd live migration code path.
> 
> * Overview:
> 
> Intel Data Streaming Accelerator(DSA) is introduced in Intel's 4th generation
> Xeon server, aka Sapphire Rapids.
> https://cdrdv2-public.intel.com/671116/341204-intel-data-streaming-accelerator-spec.pdf
> https://www.intel.com/content/www/us/en/content-details/759709/intel-data-streaming-accelerator-user-guide.html
> One of the things DSA can do is to offload memory comparison workload from
> CPU to DSA accelerator hardware. This patchset implements a solution to offload
> QEMU's zero page checking from CPU to DSA accelerator hardware. We gain
> two benefits from this change:
> 1. Reduces CPU usage in multifd live migration workflow across all use
> cases.
> 2. Reduces migration total time in some use cases. 
> 
> * Design:
> 
> These are the logical steps to perform DSA offloading:
> 1. Configure DSA accelerators and create user space openable DSA work
> queues via the idxd driver.
> 2. Map DSA's work queue into a user space address space.
> 3. Fill an in-memory task descriptor to describe the memory operation.
> 4. Use dedicated CPU instruction _enqcmd to queue a task descriptor to
> the work queue.
> 5. Pull the task descriptor's completion status field until the task
> completes.
> 6. Check return status.
> 
> The memory operation is now totally done by the accelerator hardware but
> the new workflow introduces overheads. The overhead is the extra cost CPU
> prepares and submits the task descriptors and the extra cost CPU pulls for
> completion. The design is around minimizing these two overheads.
> 
> 1. In order to reduce the overhead on task preparation and submission,
> we use batch descriptors. A batch descriptor will contain N individual
> zero page checking tasks where the default N is 128 (default packet size
> / page size) and we can increase N by setting the packet size via a new
> migration option.
> 2. The multifd sender threads prepares and submits batch tasks to DSA
> hardware and it waits on a synchronization object for task completion.
> Whenever a DSA task is submitted, the task structure is added to a
> thread safe queue. It's safe to have multiple multifd sender threads to
> submit tasks concurrently.
> 3. Multiple DSA hardware devices can be used. During multifd initialization,
> every sender thread will be assigned a DSA device to work with. We
> use a round-robin scheme to evenly distribute the work across all used
> DSA devices.
> 4. Use a dedicated thread dsa_completion to perform busy pulling for all
> DSA task completions. The thread keeps dequeuing DSA tasks from the
> thread safe queue. The thread blocks when there is no outstanding DSA
> task. When pulling for completion of a DSA task, the thread uses CPU
> instruction _mm_pause between the iterations of a busy loop to save some
> CPU power as well as optimizing core resources for the other hypercore.
> 5. DSA accelerator can encounter errors. The most popular error is a
> page fault. We have tested using devices to handle page faults but
> performance is bad. Right now, if DSA hits a page fault, we fallback to
> use CPU to complete the rest of the work. The CPU fallback is done in
> the multifd sender thread.
> 6. Added a new migration option multifd-dsa-accel to set the DSA device
> path. If set, the multifd workflow will leverage the DSA devices for
> offloading.
> 7. Added a new migration option multifd-normal-page-ratio to make
> multifd live migration easier to test. Setting a normal page ratio will
> make live migration recognize a zero page as a normal page and send
> the entire payload over the network. If we want to send a large network
> payload and analyze throughput, this option is useful.
> 8. Added a new migration option multifd-packet-size. This can increase
> the number of pages being zero page checked and sent over the network.
> The extra synchronization between the sender threads and the dsa
> completion thread is an overhead. Using a large packet size can reduce
> that overhead.
> 
> * Performance:
> 
> We use two Intel 4th generation Xeon servers for testing.
> 
> Architecture:        x86_64
> CPU(s):              192
> Thread(s) per core:  2
> Core(s) per socket:  48
> Socket(s):           2
> NUMA node(s):        2
> Vendor ID:           GenuineIntel
> CPU family:          6
> Model:               143
> Model name:          Intel(R) Xeon(R) Platinum 8457C
> Stepping:            8
> CPU MHz:             2538.624
> CPU max MHz:         3800.0000
> CPU min MHz:         800.0000
> 
> We perform multifd live migration with below setup:
> 1. VM has 100GB memory. 
> 2. Use the new migration option multifd-set-normal-page-ratio to control the total
> size of the payload sent over the network.

I didn't find this option.  Is it removed?

> 3. Use 8 multifd channels.
> 4. Use tcp for live migration.
> 4. Use CPU to perform zero page checking as the baseline.
> 5. Use one DSA device to offload zero page checking to compare with the baseline.
> 6. Use "perf sched record" and "perf sched timehist" to analyze CPU usage.
> 
> A) Scenario 1: 50% (50GB) normal pages on an 100GB vm.
> 
> 	CPU usage
> 
> 	|---------------|---------------|---------------|---------------|
> 	|		|comm		|runtime(msec)	|totaltime(msec)|
> 	|---------------|---------------|---------------|---------------|
> 	|Baseline	|live_migration	|5657.58	|		|
> 	|		|multifdsend_0	|3931.563	|		|
> 	|		|multifdsend_1	|4405.273	|		|
> 	|		|multifdsend_2	|3941.968	|		|
> 	|		|multifdsend_3	|5032.975	|		|
> 	|		|multifdsend_4	|4533.865	|		|
> 	|		|multifdsend_5	|4530.461	|		|
> 	|		|multifdsend_6	|5171.916	|		|
> 	|		|multifdsend_7	|4722.769	|41922		|
> 	|---------------|---------------|---------------|---------------|
> 	|DSA		|live_migration	|6129.168	|		|
> 	|		|multifdsend_0	|2954.717	|		|
> 	|		|multifdsend_1	|2766.359	|		|
> 	|		|multifdsend_2	|2853.519	|		|
> 	|		|multifdsend_3	|2740.717	|		|
> 	|		|multifdsend_4	|2824.169	|		|
> 	|		|multifdsend_5	|2966.908	|		|
> 	|		|multifdsend_6	|2611.137	|		|
> 	|		|multifdsend_7	|3114.732	|		|
> 	|		|dsa_completion	|3612.564	|32568		|
> 	|---------------|---------------|---------------|---------------|
> 
> Baseline total runtime is calculated by adding up all multifdsend_X
> and live_migration threads runtime. DSA offloading total runtime is
> calculated by adding up all multifdsend_X, live_migration and
> dsa_completion threads runtime. 41922 msec VS 32568 msec runtime and
> that is 23% total CPU usage savings.
> 
> 	Latency
> 	|---------------|---------------|---------------|---------------|---------------|---------------|
> 	|		|total time	|down time	|throughput	|transferred-ram|total-ram	|
> 	|---------------|---------------|---------------|---------------|---------------|---------------|	
> 	|Baseline	|10343 ms	|161 ms		|41007.00 mbps	|51583797 kb	|102400520 kb	|
> 	|---------------|---------------|---------------|---------------|-------------------------------|
> 	|DSA offload	|9535 ms	|135 ms		|46554.40 mbps	|53947545 kb	|102400520 kb	|	
> 	|---------------|---------------|---------------|---------------|---------------|---------------|
> 
> Total time is 8% faster and down time is 16% faster.

Are these test results averaged over many runs, or from a single test?

I wonder how stable the total time and downtime are across runs.

> 
> B) Scenario 2: 100% (100GB) zero pages on an 100GB vm.
> 
> 	CPU usage
> 	|---------------|---------------|---------------|---------------|
> 	|		|comm		|runtime(msec)	|totaltime(msec)|
> 	|---------------|---------------|---------------|---------------|
> 	|Baseline	|live_migration	|4860.718	|		|
> 	|	 	|multifdsend_0	|748.875	|		|
> 	|		|multifdsend_1	|898.498	|		|
> 	|		|multifdsend_2	|787.456	|		|
> 	|		|multifdsend_3	|764.537	|		|
> 	|		|multifdsend_4	|785.687	|		|
> 	|		|multifdsend_5	|756.941	|		|
> 	|		|multifdsend_6	|774.084	|		|
> 	|		|multifdsend_7	|782.900	|11154		|
> 	|---------------|---------------|-------------------------------|
> 	|DSA offloading	|live_migration	|3846.976	|		|
> 	|		|multifdsend_0	|191.880	|		|
> 	|		|multifdsend_1	|166.331	|		|
> 	|		|multifdsend_2	|168.528	|		|
> 	|		|multifdsend_3	|197.831	|		|
> 	|		|multifdsend_4	|169.580	|		|
> 	|		|multifdsend_5	|167.984	|		|
> 	|		|multifdsend_6	|198.042	|		|
> 	|		|multifdsend_7	|170.624	|		|
> 	|		|dsa_completion	|3428.669	|8700		|
> 	|---------------|---------------|---------------|---------------|
> 
> Baseline total runtime is 11154 msec and DSA offloading total runtime is
> 8700 msec. That is 22% CPU savings.
> 
> 	Latency
> 	|--------------------------------------------------------------------------------------------|
> 	|		|total time	|down time	|throughput	|transferred-ram|total-ram   |
> 	|---------------|---------------|---------------|---------------|---------------|------------|	
> 	|Baseline	|4867 ms	|20 ms		|1.51 mbps	|565 kb		|102400520 kb|
> 	|---------------|---------------|---------------|---------------|----------------------------|
> 	|DSA offload	|3888 ms	|18 ms		|1.89 mbps	|565 kb		|102400520 kb|	
> 	|---------------|---------------|---------------|---------------|---------------|------------|
> 
> Total time 20% faster and down time 10% faster.
> 
> * Testing:
> 
> 1. Added unit tests for cover the added code path in dsa.c
> 2. Added integration tests to cover multifd live migration using DSA
> offloading.
> 
> 
> Hao Xiang (11):
>   meson: Introduce new instruction set enqcmd to the build system.
>   util/dsa: Implement DSA device start and stop logic.
>   util/dsa: Implement DSA task enqueue and dequeue.
>   util/dsa: Implement DSA task asynchronous completion thread model.
>   util/dsa: Implement zero page checking in DSA task.
>   util/dsa: Implement DSA task asynchronous submission and wait for
>     completion.
>   migration/multifd: Add new migration option for multifd DSA
>     offloading.
>   migration/multifd: Enable DSA offloading in multifd sender path.
>   migration/multifd: Add migration option set packet size.
>   util/dsa: Add unit test coverage for Intel DSA task submission and
>     completion.
>   migration/multifd: Add integration tests for multifd with Intel DSA
>     offloading.
> 
> Yichen Wang (1):
>   util/dsa: Add idxd into linux header copy list.
> 
>  hmp-commands.hx                 |    2 +-
>  include/qemu/dsa.h              |  189 ++++++
>  meson.build                     |   14 +
>  meson_options.txt               |    2 +
>  migration/migration-hmp-cmds.c  |   26 +-
>  migration/multifd-zero-page.c   |  133 +++-
>  migration/multifd-zlib.c        |    6 +-
>  migration/multifd-zstd.c        |    6 +-
>  migration/multifd.c             |   19 +-
>  migration/multifd.h             |    5 +
>  migration/options.c             |   69 ++
>  migration/options.h             |    2 +
>  qapi/migration.json             |   49 +-
>  scripts/meson-buildoptions.sh   |    3 +
>  scripts/update-linux-headers.sh |    2 +-
>  tests/qtest/migration-test.c    |   80 ++-
>  tests/unit/meson.build          |    6 +
>  tests/unit/test-dsa.c           |  503 ++++++++++++++
>  util/dsa.c                      | 1114 +++++++++++++++++++++++++++++++
>  util/meson.build                |    3 +
>  20 files changed, 2204 insertions(+), 29 deletions(-)
>  create mode 100644 include/qemu/dsa.h
>  create mode 100644 tests/unit/test-dsa.c
>  create mode 100644 util/dsa.c

The doc update is still missing under docs/; we may need that for a final
merge.

Are you using this in production?  How does it perform in real life?  What
is the major issue you are trying to solve?  Is it "zero-page detection
eats too much CPU", or "migration is too slow", or "we're experimenting
with the new hardware to see how it goes when applied on top of migration"?

There's a lot of new code added for DSA just for this optimization of zero
page detection.  We'd better understand the major benefits, and also
whether this is applicable to other parts of QEMU or to migration only.  I
actually wonder, if we're going to support enqcmd, whether migration is
the best starting point (rather than other places where we emulate tons of
devices, where maybe some backends could speed up IOs with enqcmd in some
form?).. but it's more of a pure question.

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration.
  2024-10-11 16:32 ` Peter Xu
@ 2024-10-11 16:53   ` Dr. David Alan Gilbert
  2024-10-15 22:02   ` [External] " Yichen Wang
  1 sibling, 0 replies; 26+ messages in thread
From: Dr. David Alan Gilbert @ 2024-10-11 16:53 UTC (permalink / raw)
  To: Peter Xu
  Cc: Yichen Wang, Paolo Bonzini, Marc-André Lureau,
	Daniel P. Berrangé, Philippe Mathieu-Daudé,
	Fabiano Rosas, Eric Blake, Markus Armbruster, Michael S. Tsirkin,
	Cornelia Huck, qemu-devel, Hao Xiang, Liu, Yuan1, Shivam Kumar,
	Ho-Ren (Jack) Chuang

* Peter Xu (peterx@redhat.com) wrote:

> The doc update is still missing under docs/; we may need that for a final
> merge.
> 
> Are you using this in production?  How does it perform in real life?  What
> is the major issue you are trying to solve?  Is it "zero-page detection
> eats too much CPU", or "migration is too slow", or "we're experimenting
> with the new hardware to see how it goes when applied on top of migration"?
> 
> There's a lot of new code added for DSA just for this optimization of zero
> page detection.  We'd better understand the major benefits, and also
> whether this is applicable to other parts of QEMU or to migration only.  I
> actually wonder, if we're going to support enqcmd, whether migration is
> the best starting point (rather than other places where we emulate tons of
> devices, where maybe some backends could speed up IOs with enqcmd in some
> form?).. but it's more of a pure question.

The other thing that worries me here is that there's not much abstraction;
I'm sure there are a whole bunch of offload cards that could do tricks like
this, so how do we avoid having this much extra code for each one?

Dave

> 
> Thanks,
> 
> -- 
> Peter Xu
> 
-- 
 -----Open up your eyes, open up your mind, open up your code -------   
/ Dr. David Alan Gilbert    |       Running GNU/Linux       | Happy  \ 
\        dave @ treblig.org |                               | In Hex /
 \ _________________________|_____ http://www.treblig.org   |_______/


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 08/12] migration/multifd: Add new migration option for multifd DSA offloading.
  2024-10-09 23:46 ` [PATCH v6 08/12] migration/multifd: Add new migration option for multifd DSA offloading Yichen Wang
@ 2024-10-11 17:14   ` Dr. David Alan Gilbert
  2024-10-15 22:09     ` [External] " Yichen Wang
  0 siblings, 1 reply; 26+ messages in thread
From: Dr. David Alan Gilbert @ 2024-10-11 17:14 UTC (permalink / raw)
  To: Yichen Wang
  Cc: Paolo Bonzini, Marc-André Lureau, Daniel P. Berrangé,
	Philippe Mathieu-Daudé, Peter Xu, Fabiano Rosas, Eric Blake,
	Markus Armbruster, Michael S. Tsirkin, Cornelia Huck, qemu-devel,
	Hao Xiang, Liu, Yuan1, Shivam Kumar, Ho-Ren (Jack) Chuang

* Yichen Wang (yichen.wang@bytedance.com) wrote:
> From: Hao Xiang <hao.xiang@linux.dev>

Please split the cpuid stuff out into a separate patch; it feels like
it should live in some x86-specific place.

> Intel DSA offloading is an optional feature that turns on if
> proper hardware and software stack is available. To turn on
> DSA offloading in multifd live migration by setting:
> 
> zero-page-detection=dsa-accel
> dsa-accel-path=[dsa_dev_path1] [dsa_dev_path2] ... [dsa_dev_pathX]

I'd like to suggest changing that to:

accel-path=dsa:dev_path1 dsa:dev_path2 somethingelse:dev_path3 etc

that means we don't need a new option when someone adds a different
accelerator.
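
A rough sketch of what parsing that form could look like (illustrative
only; nothing below is from this series):

    g_autofree char **entries = g_strsplit(valuestr, " ", -1);
    for (int i = 0; entries[i]; i++) {
        char *sep = strchr(entries[i], ':');
        if (!sep) {
            error_setg(errp, "accel-path entry '%s' has no type prefix",
                       entries[i]);
            return;
        }
        *sep = '\0';
        if (g_str_equal(entries[i], "dsa")) {
            /* sep + 1 is the device path, e.g. /dev/dsa/wq4.0 */
            QAPI_LIST_APPEND(tail, g_strdup(sep + 1));
        } else {
            error_setg(errp, "unknown accelerator type '%s'", entries[i]);
            return;
        }
    }

An unknown type prefix then fails cleanly instead of needing a brand-new
parameter per accelerator.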

Dave
> This feature is turned off by default.
> 
> Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
> Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
> ---
>  hmp-commands.hx                |  2 +-
>  include/qemu/dsa.h             | 13 +++++++++++++
>  migration/migration-hmp-cmds.c | 19 ++++++++++++++++++-
>  migration/options.c            | 30 ++++++++++++++++++++++++++++++
>  migration/options.h            |  1 +
>  qapi/migration.json            | 32 ++++++++++++++++++++++++++++----
>  util/dsa.c                     | 31 +++++++++++++++++++++++++++++++
>  7 files changed, 122 insertions(+), 6 deletions(-)
> 
> diff --git a/hmp-commands.hx b/hmp-commands.hx
> index 06746f0afc..0e04eac7c7 100644
> --- a/hmp-commands.hx
> +++ b/hmp-commands.hx
> @@ -1009,7 +1009,7 @@ ERST
>  
>      {
>          .name       = "migrate_set_parameter",
> -        .args_type  = "parameter:s,value:s",
> +        .args_type  = "parameter:s,value:S",

Can you show the case you need this for?
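
(For context: in HMP args_type, 's' stops at the first space while 'S'
consumes the rest of the input line.  An illustrative command such as

    migrate_set_parameter dsa-accel-path /dev/dsa/wq4.0 /dev/dsa/wq4.1

would be truncated to the first path with 's'.)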

Dave

>          .params     = "parameter value",
>          .help       = "Set the parameter for migration",
>          .cmd        = hmp_migrate_set_parameter,
> diff --git a/include/qemu/dsa.h b/include/qemu/dsa.h
> index a3b502ee41..b1bb6daad2 100644
> --- a/include/qemu/dsa.h
> +++ b/include/qemu/dsa.h
> @@ -100,6 +100,13 @@ void qemu_dsa_stop(void);
>   */
>  void qemu_dsa_cleanup(void);
>  
> +/**
> + * @brief Check if DSA is supported.
> + *
> + * @return True if DSA is supported, otherwise false.
> + */
> +bool qemu_dsa_is_supported(void);
> +
>  /**
>   * @brief Check if DSA is running.
>   *
> @@ -141,6 +148,12 @@ buffer_is_zero_dsa_batch_sync(QemuDsaBatchTask *batch_task,
>  
>  typedef struct QemuDsaBatchTask {} QemuDsaBatchTask;
>  
> +static inline bool qemu_dsa_is_supported(void)
> +{
> +    return false;
> +}
> +
> +
>  static inline bool qemu_dsa_is_running(void)
>  {
>      return false;
> diff --git a/migration/migration-hmp-cmds.c b/migration/migration-hmp-cmds.c
> index 20d1a6e219..983f13b73c 100644
> --- a/migration/migration-hmp-cmds.c
> +++ b/migration/migration-hmp-cmds.c
> @@ -312,7 +312,16 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
>          monitor_printf(mon, "%s: '%s'\n",
>              MigrationParameter_str(MIGRATION_PARAMETER_TLS_AUTHZ),
>              params->tls_authz);
> -
> +        if (params->has_dsa_accel_path) {
> +            strList *dsa_accel_path = params->dsa_accel_path;
> +            monitor_printf(mon, "%s:",
> +                MigrationParameter_str(MIGRATION_PARAMETER_DSA_ACCEL_PATH));
> +            while (dsa_accel_path) {
> +                monitor_printf(mon, " '%s'", dsa_accel_path->value);
> +                dsa_accel_path = dsa_accel_path->next;
> +            }
> +            monitor_printf(mon, "\n");
> +        }
>          if (params->has_block_bitmap_mapping) {
>              const BitmapMigrationNodeAliasList *bmnal;
>  
> @@ -563,6 +572,14 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
>          p->has_x_checkpoint_delay = true;
>          visit_type_uint32(v, param, &p->x_checkpoint_delay, &err);
>          break;
> +    case MIGRATION_PARAMETER_DSA_ACCEL_PATH:
> +        p->has_dsa_accel_path = true;
> +        g_autofree char **strv = g_strsplit(valuestr ? : "", " ", -1);
> +        strList **tail = &p->dsa_accel_path;
> +        for (int i = 0; strv[i]; i++) {
> +            QAPI_LIST_APPEND(tail, strv[i]);
> +        }
> +        break;
>      case MIGRATION_PARAMETER_MULTIFD_CHANNELS:
>          p->has_multifd_channels = true;
>          visit_type_uint8(v, param, &p->multifd_channels, &err);
> diff --git a/migration/options.c b/migration/options.c
> index 147cd2b8fd..a0b3a7d291 100644
> --- a/migration/options.c
> +++ b/migration/options.c
> @@ -13,6 +13,7 @@
>  
>  #include "qemu/osdep.h"
>  #include "qemu/error-report.h"
> +#include "qemu/dsa.h"
>  #include "exec/target_page.h"
>  #include "qapi/clone-visitor.h"
>  #include "qapi/error.h"
> @@ -832,6 +833,13 @@ const char *migrate_tls_creds(void)
>      return s->parameters.tls_creds;
>  }
>  
> +const strList *migrate_dsa_accel_path(void)
> +{
> +    MigrationState *s = migrate_get_current();
> +
> +    return s->parameters.dsa_accel_path;
> +}
> +
>  const char *migrate_tls_hostname(void)
>  {
>      MigrationState *s = migrate_get_current();
> @@ -945,6 +953,8 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
>      params->zero_page_detection = s->parameters.zero_page_detection;
>      params->has_direct_io = true;
>      params->direct_io = s->parameters.direct_io;
> +    params->has_dsa_accel_path = true;
> +    params->dsa_accel_path = QAPI_CLONE(strList, s->parameters.dsa_accel_path);
>  
>      return params;
>  }
> @@ -953,6 +963,7 @@ void migrate_params_init(MigrationParameters *params)
>  {
>      params->tls_hostname = g_strdup("");
>      params->tls_creds = g_strdup("");
> +    params->dsa_accel_path = NULL;
>  
>      /* Set has_* up only for parameter checks */
>      params->has_throttle_trigger_threshold = true;
> @@ -1165,6 +1176,14 @@ bool migrate_params_check(MigrationParameters *params, Error **errp)
>          return false;
>      }
>  
> +    if (params->has_zero_page_detection &&
> +        params->zero_page_detection == ZERO_PAGE_DETECTION_DSA_ACCEL) {
> +        if (!qemu_dsa_is_supported()) {
> +            error_setg(errp, "DSA acceleration is not supported.");
> +            return false;
> +        }
> +    }
> +
>      return true;
>  }
>  
> @@ -1278,6 +1297,11 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
>      if (params->has_direct_io) {
>          dest->direct_io = params->direct_io;
>      }
> +
> +    if (params->has_dsa_accel_path) {
> +        dest->has_dsa_accel_path = true;
> +        dest->dsa_accel_path = params->dsa_accel_path;
> +    }
>  }
>  
>  static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
> @@ -1410,6 +1434,12 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
>      if (params->has_direct_io) {
>          s->parameters.direct_io = params->direct_io;
>      }
> +    if (params->has_dsa_accel_path) {
> +        qapi_free_strList(s->parameters.dsa_accel_path);
> +        s->parameters.has_dsa_accel_path = true;
> +        s->parameters.dsa_accel_path =
> +            QAPI_CLONE(strList, params->dsa_accel_path);
> +    }
>  }
>  
>  void qmp_migrate_set_parameters(MigrateSetParameters *params, Error **errp)
> diff --git a/migration/options.h b/migration/options.h
> index a0bd6edc06..8198b220bd 100644
> --- a/migration/options.h
> +++ b/migration/options.h
> @@ -86,6 +86,7 @@ const char *migrate_tls_creds(void);
>  const char *migrate_tls_hostname(void);
>  uint64_t migrate_xbzrle_cache_size(void);
>  ZeroPageDetection migrate_zero_page_detection(void);
> +const strList *migrate_dsa_accel_path(void);
>  
>  /* parameters helpers */
>  
> diff --git a/qapi/migration.json b/qapi/migration.json
> index b66cccf107..d8b42ceae6 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -626,10 +626,14 @@
>  #     multifd migration is enabled, else in the main migration thread
>  #     as for @legacy.
>  #
> +# @dsa-accel: Perform zero page checking with the DSA accelerator
> +#     offloading in multifd sender thread if multifd migration is
> +#     enabled, else in the main migration thread as for @legacy.
> +#
>  # Since: 9.0
>  ##
>  { 'enum': 'ZeroPageDetection',
> -  'data': [ 'none', 'legacy', 'multifd' ] }
> +  'data': [ 'none', 'legacy', 'multifd', 'dsa-accel' ] }
>  
>  ##
>  # @BitmapMigrationBitmapAliasTransform:
> @@ -837,6 +841,12 @@
>  #     See description in @ZeroPageDetection.  Default is 'multifd'.
>  #     (since 9.0)
>  #
> +# @dsa-accel-path: If enabled, use DSA accelerator offloading for
> +#     certain memory operations. Enable DSA accelerator for zero
> +#     page detection offloading by setting the @zero-page-detection
> +#     to dsa-accel. This parameter defines the dsa device path, and
> +#     defaults to an empty list.  (Since 9.2)
> +#
>  # @direct-io: Open migration files with O_DIRECT when possible.  This
>  #     only has effect if the @mapped-ram capability is enabled.
>  #     (Since 9.1)
> @@ -855,7 +865,7 @@
>             'cpu-throttle-initial', 'cpu-throttle-increment',
>             'cpu-throttle-tailslow',
>             'tls-creds', 'tls-hostname', 'tls-authz', 'max-bandwidth',
> -           'avail-switchover-bandwidth', 'downtime-limit',
> +           'avail-switchover-bandwidth', 'downtime-limit', 'dsa-accel-path',
>             { 'name': 'x-checkpoint-delay', 'features': [ 'unstable' ] },
>             'multifd-channels',
>             'xbzrle-cache-size', 'max-postcopy-bandwidth',
> @@ -1018,6 +1028,12 @@
>  #     See description in @ZeroPageDetection.  Default is 'multifd'.
>  #     (since 9.0)
>  #
> +# @dsa-accel-path: If enabled, use DSA accelerator offloading for
> +#     certain memory operations. Enable DSA accelerator for zero
> +#     page detection offloading by setting the @zero-page-detection
> +#     to dsa-accel. This parameter defines the dsa device path, and
> +#     defaults to an empty list.  (Since 9.2)
> +#
>  # @direct-io: Open migration files with O_DIRECT when possible.  This
>  #     only has effect if the @mapped-ram capability is enabled.
>  #     (Since 9.1)
> @@ -1063,7 +1079,8 @@
>              '*vcpu-dirty-limit': 'uint64',
>              '*mode': 'MigMode',
>              '*zero-page-detection': 'ZeroPageDetection',
> -            '*direct-io': 'bool' } }
> +            '*direct-io': 'bool',
> +            '*dsa-accel-path': [ 'str' ] } }
>  
>  ##
>  # @migrate-set-parameters:
> @@ -1228,6 +1245,12 @@
>  #     See description in @ZeroPageDetection.  Default is 'multifd'.
>  #     (since 9.0)
>  #
> +# @dsa-accel-path: If enabled, use DSA accelerator offloading for
> +#     certain memory operations. Enable DSA accelerator for zero
> +#     page detection offloading by setting the @zero-page-detection
> +#     to dsa-accel. This parameter defines the dsa device path, and
> +#     defaults to an empty list.  (Since 9.2)
> +#
>  # @direct-io: Open migration files with O_DIRECT when possible.  This
>  #     only has effect if the @mapped-ram capability is enabled.
>  #     (Since 9.1)
> @@ -1270,7 +1293,8 @@
>              '*vcpu-dirty-limit': 'uint64',
>              '*mode': 'MigMode',
>              '*zero-page-detection': 'ZeroPageDetection',
> -            '*direct-io': 'bool' } }
> +            '*direct-io': 'bool',
> +            '*dsa-accel-path': [ 'str' ] } }
>  
>  ##
>  # @query-migrate-parameters:
> diff --git a/util/dsa.c b/util/dsa.c
> index cbaa47c360..eeede3c0c7 100644
> --- a/util/dsa.c
> +++ b/util/dsa.c
> @@ -23,6 +23,7 @@
>  #include "qemu/bswap.h"
>  #include "qemu/error-report.h"
>  #include "qemu/rcu.h"
> +#include <cpuid.h>
>  
>  #pragma GCC push_options
>  #pragma GCC target("enqcmd")
> @@ -691,6 +692,36 @@ static void dsa_completion_thread_stop(void *opaque)
>      qemu_sem_destroy(&thread_context->sem_init_done);
>  }
>  
> +/**
> + * @brief Check if DSA is supported.
> + *
> + * @return True if DSA is supported, otherwise false.
> + */
> +bool qemu_dsa_is_supported(void)
> +{
> +    /*
> +     * movdir64b is indicated by bit 28 of ecx in CPUID leaf 7, subleaf 0.
> +     * enqcmd is indicated by bit 29 of ecx in CPUID leaf 7, subleaf 0.
> +     * Doc: https://cdrdv2-public.intel.com/819680/architecture-instruction-\
> +     *      set-extensions-programming-reference.pdf
> +     */
> +    uint32_t eax, ebx, ecx, edx;
> +    bool movedirb_enabled;
> +    bool enqcmd_enabled;
> +
> +    __get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
> +    movedirb_enabled = (ecx >> 28) & 0x1;
> +    if (!movedirb_enabled) {
> +        return false;
> +    }
> +    enqcmd_enabled = (ecx >> 29) & 0x1;
> +    if (!enqcmd_enabled) {
> +        return false;
> +    }
> +
> +    return true;
> +}
> +
>  /**
>   * @brief Check if DSA is running.
>   *
> -- 
> Yichen Wang
> 
-- 
 -----Open up your eyes, open up your mind, open up your code -------   
/ Dr. David Alan Gilbert    |       Running GNU/Linux       | Happy  \ 
\        dave @ treblig.org |                               | In Hex /
 \ _________________________|_____ http://www.treblig.org   |_______/


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [External] Re: [PATCH v6 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration.
  2024-10-11 16:32 ` Peter Xu
  2024-10-11 16:53   ` Dr. David Alan Gilbert
@ 2024-10-15 22:02   ` Yichen Wang
  2024-10-16 19:44     ` Peter Xu
  1 sibling, 1 reply; 26+ messages in thread
From: Yichen Wang @ 2024-10-15 22:02 UTC (permalink / raw)
  To: Peter Xu
  Cc: Dr. David Alan Gilbert, Paolo Bonzini, Marc-André Lureau,
	Daniel P. Berrangé, Philippe Mathieu-Daudé,
	Fabiano Rosas, Eric Blake, Markus Armbruster, Michael S. Tsirkin,
	Cornelia Huck, qemu-devel, Hao Xiang, Liu, Yuan1, Shivam Kumar,
	Ho-Ren (Jack) Chuang

On Fri, Oct 11, 2024 at 9:32 AM Peter Xu <peterx@redhat.com> wrote:
>
> On Wed, Oct 09, 2024 at 04:45:58PM -0700, Yichen Wang wrote:
>
> The doc update is still missing under docs/, we may need that for a final
> merge.
>

I will work with Intel to prepare a doc in my next patch.

> Are you using this in production?  How does it perform in real life?
> What is the major issue to solve for you?  Is it "zero detect eats cpu
> too much", or "migration too slow", or "we're experimenting with the
> new hardware and seeing how it goes if we apply it on top of migrations"?
>

Yes, we do use it in production. Our codebase is based on an old QEMU
release (5.x), so we backported the series there. The major use case
is simply to accelerate live migration, and it is currently under QA
scale testing. The main motivation is that we reserve 4 cores for all
control-plane services, including QEMU. While doing secondary
scheduling (i.e. live migration to reduce fragmentation, which is very
common among cloud providers), we found that QEMU eats a lot of CPU,
which causes jitter and slowness on the control plane. Even though
this does not happen too frequently, we still want it to be stable.
With the help of DSA, we save CPU while accelerating the process, so
we want to use it in production.

> There's a lot of new code added for dsa just for this optimization on
> zero page detection.  We'd better understand the major benefits, and
> also whether that's applicable to other parts of qemu or
> migration-only.  I actually wonder, if we're going to support enqcmd,
> whether migration is the best starting point (rather than other places
> where we emulate tons of devices, and maybe some backends can speed up
> IOs with enqcmd in some form?).. but it's more of a pure question.
>

I tried to put most of the code in dsa.c and make minimal changes to
all other files. Even within dsa.c, there is an abstraction for
"submit a task" and an implementation of "submit a buffer_zero task".
This is the best structure I can think of. I am open to suggestions on
how we can move this forward. :)
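
To make the layering concrete, here is a rough, self-contained sketch;
the names are illustrative only and not the actual API from the series.
A generic submission entry point sits at the bottom, and the
buffer-zero check is just one task type built on it. The real code
enqueues to the DSA work queue and signals completion from a separate
thread, while this sketch completes synchronously so that it stays
runnable:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct DemoTask DemoTask;
struct DemoTask {
    void (*complete)(DemoTask *task); /* invoked on task completion */
    const void *buf;
    size_t len;
    bool is_zero;                     /* result of a buffer-zero task */
};

/* Generic "submit task" entry point; real code would enqueue the
 * descriptor to the DSA hardware work queue here. */
static void demo_submit(DemoTask *task)
{
    task->complete(task);
}

/* One task type built on the generic layer: buffer-zero checking. */
static void demo_buffer_zero_complete(DemoTask *task)
{
    const unsigned char *p = task->buf;

    task->is_zero = true;
    for (size_t i = 0; i < task->len; i++) {
        if (p[i]) {
            task->is_zero = false;
            break;
        }
    }
}

int main(void)
{
    unsigned char page[4096] = { 0 };
    DemoTask t = { demo_buffer_zero_complete, page, sizeof(page), false };

    demo_submit(&t);
    printf("page is %szero\n", t.is_zero ? "" : "non-");
    return 0;
}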

> Thanks,
>
> --
> Peter Xu
>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [External] Re: [PATCH v6 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration.
  2024-10-11 14:13 ` [PATCH v6 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration Fabiano Rosas
@ 2024-10-15 22:05   ` Yichen Wang
  0 siblings, 0 replies; 26+ messages in thread
From: Yichen Wang @ 2024-10-15 22:05 UTC (permalink / raw)
  To: Fabiano Rosas
  Cc: Dr. David Alan Gilbert, Paolo Bonzini, Marc-André Lureau,
	Daniel P. Berrangé, Philippe Mathieu-Daudé, Peter Xu,
	Eric Blake, Markus Armbruster, Michael S. Tsirkin, Cornelia Huck,
	qemu-devel, Hao Xiang, Liu, Yuan1, Shivam Kumar,
	Ho-Ren (Jack) Chuang

On Fri, Oct 11, 2024 at 7:14 AM Fabiano Rosas <farosas@suse.de> wrote:
>
> Yichen Wang <yichen.wang@bytedance.com> writes:
>
>
> Still doesn't build without DSA:
>
> qemu/include/qemu/dsa.h: In function
> ‘buffer_is_zero_dsa_batch_sync’:
> /home/fabiano/kvm/qemu/include/qemu/dsa.h:183:16: error: ‘errp’
> undeclared (first use in this function); did you mean ‘errno’?
>
>      error_setg(errp, "DSA accelerator is not enabled.");
>                 ^
> qemu/include/qapi/error.h:318:26: note: in definition of macro ‘error_setg’
>      error_setg_internal((errp), __FILE__, __LINE__, __func__,   \
>                           ^~~~
> qemu/include/qemu/dsa.h:183:16: note: each undeclared identifier is reported only once for each function it appears in
>      error_setg(errp, "DSA accelerator is not enabled.");
>                 ^
> qemu/include/qapi/error.h:318:26: note: in definition of macro ‘error_setg’
>      error_setg_internal((errp), __FILE__, __LINE__, __func__,   \
>                           ^~~~

Sorry about that; I will make sure I test both builds (with and
without DSA) for my next version before running git send-email...
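
For reference, the fix I have in mind looks like the sketch below. The
stub's full parameter list is cut off in the quoted error, so the
buf/count/len parameters and the bool return type are assumptions for
illustration; the key point is that the !CONFIG_DSA_OPT stub in
include/qemu/dsa.h must declare Error **errp itself before using it:

/*
 * include/qemu/dsa.h, !CONFIG_DSA_OPT stub (sketch only).  Parameters
 * other than batch_task and errp are assumed for illustration.
 */
static inline bool
buffer_is_zero_dsa_batch_sync(QemuDsaBatchTask *batch_task,
                              const void **buf, size_t count,
                              size_t len, Error **errp)
{
    error_setg(errp, "DSA accelerator is not enabled.");
    return false;
}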


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [External] Re: [PATCH v6 08/12] migration/multifd: Add new migration option for multifd DSA offloading.
  2024-10-11 17:14   ` Dr. David Alan Gilbert
@ 2024-10-15 22:09     ` Yichen Wang
  2024-10-15 22:51       ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 26+ messages in thread
From: Yichen Wang @ 2024-10-15 22:09 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Paolo Bonzini, Marc-André Lureau, Daniel P. Berrangé,
	Philippe Mathieu-Daudé, Peter Xu, Fabiano Rosas, Eric Blake,
	Markus Armbruster, Michael S. Tsirkin, Cornelia Huck, qemu-devel,
	Hao Xiang, Liu, Yuan1, Shivam Kumar, Ho-Ren (Jack) Chuang

On Fri, Oct 11, 2024 at 10:14 AM Dr. David Alan Gilbert
<dave@treblig.org> wrote:
>
> * Yichen Wang (yichen.wang@bytedance.com) wrote:
> > From: Hao Xiang <hao.xiang@linux.dev>
>
> Please split the cpuid stuff out into a separate patch; it feels like
> it should be in some x86 specific place.

DSA is an Intel feature/device, and it only makes sense on x86. I
mean, dsa.c already implies that it is an x86-specific feature, so I
guess keeping it in dsa.c is the better option?

>
> > Intel DSA offloading is an optional feature that turns on if the
> > proper hardware and software stack is available. Turn on DSA
> > offloading in multifd live migration by setting:
> >
> > zero-page-detection=dsa-accel
> > dsa-accel-path=[dsa_dev_path1] [dsa_dev_path2] ... [dsa_dev_pathX]
>
> I'd like to suggest changing that to:
>
> accel-path=dsa:dev_path1 dsa:dev_path2 somethingelse:dev_path2 etc
>
> that means we don't need a new option when someone adds a different
> accelerator.

It all depends on the implementation of "other accelerators". I am OK
with this change if you believe that is better.
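
If we go that route, the parsing side could look roughly like the
following. This is a hypothetical glib-based sketch of the suggested
"<accelerator>:<device path>" entry format, not code from the series:

#include <glib.h>
#include <stdio.h>

int main(void)
{
    const char *entries[] = { "dsa:/dev/dsa/wq0.1", "dsa:/dev/dsa/wq2.1" };

    for (size_t i = 0; i < G_N_ELEMENTS(entries); i++) {
        /* Split on the first ':' only, so the path may contain more. */
        char **kv = g_strsplit(entries[i], ":", 2);

        if (kv[0] && kv[1] && *kv[0] && *kv[1]) {
            printf("accelerator '%s' -> device '%s'\n", kv[0], kv[1]);
        } else {
            fprintf(stderr, "malformed accel-path entry: %s\n", entries[i]);
        }
        g_strfreev(kv);
    }
    return 0;
}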

>
> Dave
> > This feature is turned off by default.
> >
> > Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
> > Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
> > ---
> >  hmp-commands.hx                |  2 +-
> >  include/qemu/dsa.h             | 13 +++++++++++++
> >  migration/migration-hmp-cmds.c | 19 ++++++++++++++++++-
> >  migration/options.c            | 30 ++++++++++++++++++++++++++++++
> >  migration/options.h            |  1 +
> >  qapi/migration.json            | 32 ++++++++++++++++++++++++++++----
> >  util/dsa.c                     | 31 +++++++++++++++++++++++++++++++
> >  7 files changed, 122 insertions(+), 6 deletions(-)
> >
> > diff --git a/hmp-commands.hx b/hmp-commands.hx
> > index 06746f0afc..0e04eac7c7 100644
> > --- a/hmp-commands.hx
> > +++ b/hmp-commands.hx
> > @@ -1009,7 +1009,7 @@ ERST
> >
> >      {
> >          .name       = "migrate_set_parameter",
> > -        .args_type  = "parameter:s,value:s",
> > +        .args_type  = "parameter:s,value:S",
>
> Can you show the case you need this for?
>

That is used to pass a strList through the QEMU monitor. Without this
change, the monitor complains when I try:
migrate_set_parameter dsa-accel-path /dev/dsa/wq0.1 /dev/dsa/wq0.1

> Dave
>
> >          .params     = "parameter value",
> >          .help       = "Set the parameter for migration",
> >          .cmd        = hmp_migrate_set_parameter,
> > diff --git a/include/qemu/dsa.h b/include/qemu/dsa.h
> > index a3b502ee41..b1bb6daad2 100644
> > --- a/include/qemu/dsa.h
> > +++ b/include/qemu/dsa.h
> > @@ -100,6 +100,13 @@ void qemu_dsa_stop(void);
> >   */
> >  void qemu_dsa_cleanup(void);
> >
> > +/**
> > + * @brief Check if DSA is supported.
> > + *
> > + * @return True if DSA is supported, otherwise false.
> > + */
> > +bool qemu_dsa_is_supported(void);
> > +
> >  /**
> >   * @brief Check if DSA is running.
> >   *
> > @@ -141,6 +148,12 @@ buffer_is_zero_dsa_batch_sync(QemuDsaBatchTask *batch_task,
> >
> >  typedef struct QemuDsaBatchTask {} QemuDsaBatchTask;
> >
> > +static inline bool qemu_dsa_is_supported(void)
> > +{
> > +    return false;
> > +}
> > +
> > +
> >  static inline bool qemu_dsa_is_running(void)
> >  {
> >      return false;
> > diff --git a/migration/migration-hmp-cmds.c b/migration/migration-hmp-cmds.c
> > index 20d1a6e219..983f13b73c 100644
> > --- a/migration/migration-hmp-cmds.c
> > +++ b/migration/migration-hmp-cmds.c
> > @@ -312,7 +312,16 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
> >          monitor_printf(mon, "%s: '%s'\n",
> >              MigrationParameter_str(MIGRATION_PARAMETER_TLS_AUTHZ),
> >              params->tls_authz);
> > -
> > +        if (params->has_dsa_accel_path) {
> > +            strList *dsa_accel_path = params->dsa_accel_path;
> > +            monitor_printf(mon, "%s:",
> > +                MigrationParameter_str(MIGRATION_PARAMETER_DSA_ACCEL_PATH));
> > +            while (dsa_accel_path) {
> > +                monitor_printf(mon, " '%s'", dsa_accel_path->value);
> > +                dsa_accel_path = dsa_accel_path->next;
> > +            }
> > +            monitor_printf(mon, "\n");
> > +        }
> >          if (params->has_block_bitmap_mapping) {
> >              const BitmapMigrationNodeAliasList *bmnal;
> >
> > @@ -563,6 +572,14 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
> >          p->has_x_checkpoint_delay = true;
> >          visit_type_uint32(v, param, &p->x_checkpoint_delay, &err);
> >          break;
> > +    case MIGRATION_PARAMETER_DSA_ACCEL_PATH:
> > +        p->has_dsa_accel_path = true;
> > +        g_autofree char **strv = g_strsplit(valuestr ? : "", " ", -1);
> > +        strList **tail = &p->dsa_accel_path;
> > +        for (int i = 0; strv[i]; i++) {
> > +            QAPI_LIST_APPEND(tail, strv[i]);
> > +        }
> > +        break;
> >      case MIGRATION_PARAMETER_MULTIFD_CHANNELS:
> >          p->has_multifd_channels = true;
> >          visit_type_uint8(v, param, &p->multifd_channels, &err);
> > diff --git a/migration/options.c b/migration/options.c
> > index 147cd2b8fd..a0b3a7d291 100644
> > --- a/migration/options.c
> > +++ b/migration/options.c
> > @@ -13,6 +13,7 @@
> >
> >  #include "qemu/osdep.h"
> >  #include "qemu/error-report.h"
> > +#include "qemu/dsa.h"
> >  #include "exec/target_page.h"
> >  #include "qapi/clone-visitor.h"
> >  #include "qapi/error.h"
> > @@ -832,6 +833,13 @@ const char *migrate_tls_creds(void)
> >      return s->parameters.tls_creds;
> >  }
> >
> > +const strList *migrate_dsa_accel_path(void)
> > +{
> > +    MigrationState *s = migrate_get_current();
> > +
> > +    return s->parameters.dsa_accel_path;
> > +}
> > +
> >  const char *migrate_tls_hostname(void)
> >  {
> >      MigrationState *s = migrate_get_current();
> > @@ -945,6 +953,8 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
> >      params->zero_page_detection = s->parameters.zero_page_detection;
> >      params->has_direct_io = true;
> >      params->direct_io = s->parameters.direct_io;
> > +    params->has_dsa_accel_path = true;
> > +    params->dsa_accel_path = QAPI_CLONE(strList, s->parameters.dsa_accel_path);
> >
> >      return params;
> >  }
> > @@ -953,6 +963,7 @@ void migrate_params_init(MigrationParameters *params)
> >  {
> >      params->tls_hostname = g_strdup("");
> >      params->tls_creds = g_strdup("");
> > +    params->dsa_accel_path = NULL;
> >
> >      /* Set has_* up only for parameter checks */
> >      params->has_throttle_trigger_threshold = true;
> > @@ -1165,6 +1176,14 @@ bool migrate_params_check(MigrationParameters *params, Error **errp)
> >          return false;
> >      }
> >
> > +    if (params->has_zero_page_detection &&
> > +        params->zero_page_detection == ZERO_PAGE_DETECTION_DSA_ACCEL) {
> > +        if (!qemu_dsa_is_supported()) {
> > +            error_setg(errp, "DSA acceleration is not supported.");
> > +            return false;
> > +        }
> > +    }
> > +
> >      return true;
> >  }
> >
> > @@ -1278,6 +1297,11 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
> >      if (params->has_direct_io) {
> >          dest->direct_io = params->direct_io;
> >      }
> > +
> > +    if (params->has_dsa_accel_path) {
> > +        dest->has_dsa_accel_path = true;
> > +        dest->dsa_accel_path = params->dsa_accel_path;
> > +    }
> >  }
> >
> >  static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
> > @@ -1410,6 +1434,12 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
> >      if (params->has_direct_io) {
> >          s->parameters.direct_io = params->direct_io;
> >      }
> > +    if (params->has_dsa_accel_path) {
> > +        qapi_free_strList(s->parameters.dsa_accel_path);
> > +        s->parameters.has_dsa_accel_path = true;
> > +        s->parameters.dsa_accel_path =
> > +            QAPI_CLONE(strList, params->dsa_accel_path);
> > +    }
> >  }
> >
> >  void qmp_migrate_set_parameters(MigrateSetParameters *params, Error **errp)
> > diff --git a/migration/options.h b/migration/options.h
> > index a0bd6edc06..8198b220bd 100644
> > --- a/migration/options.h
> > +++ b/migration/options.h
> > @@ -86,6 +86,7 @@ const char *migrate_tls_creds(void);
> >  const char *migrate_tls_hostname(void);
> >  uint64_t migrate_xbzrle_cache_size(void);
> >  ZeroPageDetection migrate_zero_page_detection(void);
> > +const strList *migrate_dsa_accel_path(void);
> >
> >  /* parameters helpers */
> >
> > diff --git a/qapi/migration.json b/qapi/migration.json
> > index b66cccf107..d8b42ceae6 100644
> > --- a/qapi/migration.json
> > +++ b/qapi/migration.json
> > @@ -626,10 +626,14 @@
> >  #     multifd migration is enabled, else in the main migration thread
> >  #     as for @legacy.
> >  #
> > +# @dsa-accel: Perform zero page checking with the DSA accelerator
> > +#     offloading in multifd sender thread if multifd migration is
> > +#     enabled, else in the main migration thread as for @legacy.
> > +#
> >  # Since: 9.0
> >  ##
> >  { 'enum': 'ZeroPageDetection',
> > -  'data': [ 'none', 'legacy', 'multifd' ] }
> > +  'data': [ 'none', 'legacy', 'multifd', 'dsa-accel' ] }
> >
> >  ##
> >  # @BitmapMigrationBitmapAliasTransform:
> > @@ -837,6 +841,12 @@
> >  #     See description in @ZeroPageDetection.  Default is 'multifd'.
> >  #     (since 9.0)
> >  #
> > +# @dsa-accel-path: If enabled, use DSA accelerator offloading for
> > +#     certain memory operations. Enable DSA accelerator for zero
> > +#     page detection offloading by setting the @zero-page-detection
> > +#     to dsa-accel. This parameter defines the dsa device path, and
> > +#     defaults to an empty list.  (Since 9.2)
> > +#
> >  # @direct-io: Open migration files with O_DIRECT when possible.  This
> >  #     only has effect if the @mapped-ram capability is enabled.
> >  #     (Since 9.1)
> > @@ -855,7 +865,7 @@
> >             'cpu-throttle-initial', 'cpu-throttle-increment',
> >             'cpu-throttle-tailslow',
> >             'tls-creds', 'tls-hostname', 'tls-authz', 'max-bandwidth',
> > -           'avail-switchover-bandwidth', 'downtime-limit',
> > +           'avail-switchover-bandwidth', 'downtime-limit', 'dsa-accel-path',
> >             { 'name': 'x-checkpoint-delay', 'features': [ 'unstable' ] },
> >             'multifd-channels',
> >             'xbzrle-cache-size', 'max-postcopy-bandwidth',
> > @@ -1018,6 +1028,12 @@
> >  #     See description in @ZeroPageDetection.  Default is 'multifd'.
> >  #     (since 9.0)
> >  #
> > +# @dsa-accel-path: If enabled, use DSA accelerator offloading for
> > +#     certain memory operations. Enable DSA accelerator for zero
> > +#     page detection offloading by setting the @zero-page-detection
> > +#     to dsa-accel. This parameter defines the dsa device path, and
> > +#     defaults to an empty list.  (Since 9.2)
> > +#
> >  # @direct-io: Open migration files with O_DIRECT when possible.  This
> >  #     only has effect if the @mapped-ram capability is enabled.
> >  #     (Since 9.1)
> > @@ -1063,7 +1079,8 @@
> >              '*vcpu-dirty-limit': 'uint64',
> >              '*mode': 'MigMode',
> >              '*zero-page-detection': 'ZeroPageDetection',
> > -            '*direct-io': 'bool' } }
> > +            '*direct-io': 'bool',
> > +            '*dsa-accel-path': [ 'str' ] } }
> >
> >  ##
> >  # @migrate-set-parameters:
> > @@ -1228,6 +1245,12 @@
> >  #     See description in @ZeroPageDetection.  Default is 'multifd'.
> >  #     (since 9.0)
> >  #
> > +# @dsa-accel-path: If enabled, use DSA accelerator offloading for
> > +#     certain memory operations. Enable DSA accelerator for zero
> > +#     page detection offloading by setting the @zero-page-detection
> > +#     to dsa-accel. This parameter defines the dsa device path, and
> > +#     defaults to an empty list.  (Since 9.2)
> > +#
> >  # @direct-io: Open migration files with O_DIRECT when possible.  This
> >  #     only has effect if the @mapped-ram capability is enabled.
> >  #     (Since 9.1)
> > @@ -1270,7 +1293,8 @@
> >              '*vcpu-dirty-limit': 'uint64',
> >              '*mode': 'MigMode',
> >              '*zero-page-detection': 'ZeroPageDetection',
> > -            '*direct-io': 'bool' } }
> > +            '*direct-io': 'bool',
> > +            '*dsa-accel-path': [ 'str' ] } }
> >
> >  ##
> >  # @query-migrate-parameters:
> > diff --git a/util/dsa.c b/util/dsa.c
> > index cbaa47c360..eeede3c0c7 100644
> > --- a/util/dsa.c
> > +++ b/util/dsa.c
> > @@ -23,6 +23,7 @@
> >  #include "qemu/bswap.h"
> >  #include "qemu/error-report.h"
> >  #include "qemu/rcu.h"
> > +#include <cpuid.h>
> >
> >  #pragma GCC push_options
> >  #pragma GCC target("enqcmd")
> > @@ -691,6 +692,36 @@ static void dsa_completion_thread_stop(void *opaque)
> >      qemu_sem_destroy(&thread_context->sem_init_done);
> >  }
> >
> > +/**
> > + * @brief Check if DSA is supported.
> > + *
> > + * @return True if DSA is supported, otherwise false.
> > + */
> > +bool qemu_dsa_is_supported(void)
> > +{
> > +    /*
> > +     * movdir64b is indicated by bit 28 of ecx in CPUID leaf 7, subleaf 0.
> > +     * enqcmd is indicated by bit 29 of ecx in CPUID leaf 7, subleaf 0.
> > +     * Doc: https://cdrdv2-public.intel.com/819680/architecture-instruction-\
> > +     *      set-extensions-programming-reference.pdf
> > +     */
> > +    uint32_t eax, ebx, ecx, edx;
> > +    bool movedirb_enabled;
> > +    bool enqcmd_enabled;
> > +
> > +    __get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
> > +    movedirb_enabled = (ecx >> 28) & 0x1;
> > +    if (!movedirb_enabled) {
> > +        return false;
> > +    }
> > +    enqcmd_enabled = (ecx >> 29) & 0x1;
> > +    if (!enqcmd_enabled) {
> > +        return false;
> > +    }
> > +
> > +    return true;
> > +}
> > +
> >  /**
> >   * @brief Check if DSA is running.
> >   *
> > --
> > Yichen Wang
> >
> --
>  -----Open up your eyes, open up your mind, open up your code -------
> / Dr. David Alan Gilbert    |       Running GNU/Linux       | Happy  \
> \        dave @ treblig.org |                               | In Hex /
>  \ _________________________|_____ http://www.treblig.org   |_______/


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [External] Re: [PATCH v6 08/12] migration/multifd: Add new migration option for multifd DSA offloading.
  2024-10-15 22:09     ` [External] " Yichen Wang
@ 2024-10-15 22:51       ` Dr. David Alan Gilbert
  0 siblings, 0 replies; 26+ messages in thread
From: Dr. David Alan Gilbert @ 2024-10-15 22:51 UTC (permalink / raw)
  To: Yichen Wang
  Cc: Paolo Bonzini, Marc-André Lureau, Daniel P. Berrangé,
	Philippe Mathieu-Daudé, Peter Xu, Fabiano Rosas, Eric Blake,
	Markus Armbruster, Michael S. Tsirkin, Cornelia Huck, qemu-devel,
	Hao Xiang, Liu, Yuan1, Shivam Kumar, Ho-Ren (Jack) Chuang

* Yichen Wang (yichen.wang@bytedance.com) wrote:
> On Fri, Oct 11, 2024 at 10:14 AM Dr. David Alan Gilbert
> <dave@treblig.org> wrote:
> >
> > * Yichen Wang (yichen.wang@bytedance.com) wrote:
> > > From: Hao Xiang <hao.xiang@linux.dev>
> >
> > Please split the cpuid stuff out into a separate patch; it feels like
> > it should be in some x86 specific place.
> 
> DSA is an Intel feature/device, and it only makes sense on x86. I
> mean, dsa.c already implies that it is an x86-specific feature, so I
> guess keeping it in dsa.c is the better option?

Yes, dsa.c might be OK; we just shouldn't have very x86-specific
code in the other places.

> >
> > > Intel DSA offloading is an optional feature that turns on if the
> > > proper hardware and software stack is available. Turn on DSA
> > > offloading in multifd live migration by setting:
> > >
> > > zero-page-detection=dsa-accel
> > > dsa-accel-path=[dsa_dev_path1] [dsa_dev_path2] ... [dsa_dev_pathX]
> >
> > I'd like to suggest changing that to:
> >
> > accel-path=dsa:dev_path1 dsa:dev_path2 somethingelse:dev_path2 etc
> >
> > that means we don't need a new option when someone adds a different
> > accelerator.
> 
> It all depends on the implementation of "other accelerators". I am OK
> with this change if you believe that is better.

I think so; it makes no assumption about what any other accelerator might
need, but means we don't need to add yet another command for them.

> >
> > Dave
> > > This feature is turned off by default.
> > >
> > > Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
> > > Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
> > > ---
> > >  hmp-commands.hx                |  2 +-
> > >  include/qemu/dsa.h             | 13 +++++++++++++
> > >  migration/migration-hmp-cmds.c | 19 ++++++++++++++++++-
> > >  migration/options.c            | 30 ++++++++++++++++++++++++++++++
> > >  migration/options.h            |  1 +
> > >  qapi/migration.json            | 32 ++++++++++++++++++++++++++++----
> > >  util/dsa.c                     | 31 +++++++++++++++++++++++++++++++
> > >  7 files changed, 122 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/hmp-commands.hx b/hmp-commands.hx
> > > index 06746f0afc..0e04eac7c7 100644
> > > --- a/hmp-commands.hx
> > > +++ b/hmp-commands.hx
> > > @@ -1009,7 +1009,7 @@ ERST
> > >
> > >      {
> > >          .name       = "migrate_set_parameter",
> > > -        .args_type  = "parameter:s,value:s",
> > > +        .args_type  = "parameter:s,value:S",
> >
> > Can you show the case you need this for?
> >
> 
> That is used to pass a strList through the QEMU monitor. Without this
> change, the monitor complains when I try:
> migrate_set_parameter dsa-accel-path /dev/dsa/wq0.1 /dev/dsa/wq0.1

OK, please make sure you get a sensible error if you try and pass
a string list to one of the existing parameters.
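
For illustration, one shape such a guard could take (a hypothetical
sketch against the switch quoted above; relying on the string input
visitor's own parse failure would also be fine):

/*
 * Hypothetical guard in hmp_migrate_set_parameter(): a scalar
 * parameter rejects space-separated values up front instead of
 * handing them to the visitor.  Sketch only; not code from the series.
 */
case MIGRATION_PARAMETER_MULTIFD_CHANNELS:
    if (valuestr && strchr(valuestr, ' ')) {
        error_setg(&err, "Parameter 'multifd-channels' expects a single value");
        break;
    }
    p->has_multifd_channels = true;
    visit_type_uint8(v, param, &p->multifd_channels, &err);
    break;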

Dave

> > Dave
> >
> > >          .params     = "parameter value",
> > >          .help       = "Set the parameter for migration",
> > >          .cmd        = hmp_migrate_set_parameter,
> > > diff --git a/include/qemu/dsa.h b/include/qemu/dsa.h
> > > index a3b502ee41..b1bb6daad2 100644
> > > --- a/include/qemu/dsa.h
> > > +++ b/include/qemu/dsa.h
> > > @@ -100,6 +100,13 @@ void qemu_dsa_stop(void);
> > >   */
> > >  void qemu_dsa_cleanup(void);
> > >
> > > +/**
> > > + * @brief Check if DSA is supported.
> > > + *
> > > + * @return True if DSA is supported, otherwise false.
> > > + */
> > > +bool qemu_dsa_is_supported(void);
> > > +
> > >  /**
> > >   * @brief Check if DSA is running.
> > >   *
> > > @@ -141,6 +148,12 @@ buffer_is_zero_dsa_batch_sync(QemuDsaBatchTask *batch_task,
> > >
> > >  typedef struct QemuDsaBatchTask {} QemuDsaBatchTask;
> > >
> > > +static inline bool qemu_dsa_is_supported(void)
> > > +{
> > > +    return false;
> > > +}
> > > +
> > > +
> > >  static inline bool qemu_dsa_is_running(void)
> > >  {
> > >      return false;
> > > diff --git a/migration/migration-hmp-cmds.c b/migration/migration-hmp-cmds.c
> > > index 20d1a6e219..983f13b73c 100644
> > > --- a/migration/migration-hmp-cmds.c
> > > +++ b/migration/migration-hmp-cmds.c
> > > @@ -312,7 +312,16 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
> > >          monitor_printf(mon, "%s: '%s'\n",
> > >              MigrationParameter_str(MIGRATION_PARAMETER_TLS_AUTHZ),
> > >              params->tls_authz);
> > > -
> > > +        if (params->has_dsa_accel_path) {
> > > +            strList *dsa_accel_path = params->dsa_accel_path;
> > > +            monitor_printf(mon, "%s:",
> > > +                MigrationParameter_str(MIGRATION_PARAMETER_DSA_ACCEL_PATH));
> > > +            while (dsa_accel_path) {
> > > +                monitor_printf(mon, " '%s'", dsa_accel_path->value);
> > > +                dsa_accel_path = dsa_accel_path->next;
> > > +            }
> > > +            monitor_printf(mon, "\n");
> > > +        }
> > >          if (params->has_block_bitmap_mapping) {
> > >              const BitmapMigrationNodeAliasList *bmnal;
> > >
> > > @@ -563,6 +572,14 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
> > >          p->has_x_checkpoint_delay = true;
> > >          visit_type_uint32(v, param, &p->x_checkpoint_delay, &err);
> > >          break;
> > > +    case MIGRATION_PARAMETER_DSA_ACCEL_PATH:
> > > +        p->has_dsa_accel_path = true;
> > > +        g_autofree char **strv = g_strsplit(valuestr ? : "", " ", -1);
> > > +        strList **tail = &p->dsa_accel_path;
> > > +        for (int i = 0; strv[i]; i++) {
> > > +            QAPI_LIST_APPEND(tail, strv[i]);
> > > +        }
> > > +        break;
> > >      case MIGRATION_PARAMETER_MULTIFD_CHANNELS:
> > >          p->has_multifd_channels = true;
> > >          visit_type_uint8(v, param, &p->multifd_channels, &err);
> > > diff --git a/migration/options.c b/migration/options.c
> > > index 147cd2b8fd..a0b3a7d291 100644
> > > --- a/migration/options.c
> > > +++ b/migration/options.c
> > > @@ -13,6 +13,7 @@
> > >
> > >  #include "qemu/osdep.h"
> > >  #include "qemu/error-report.h"
> > > +#include "qemu/dsa.h"
> > >  #include "exec/target_page.h"
> > >  #include "qapi/clone-visitor.h"
> > >  #include "qapi/error.h"
> > > @@ -832,6 +833,13 @@ const char *migrate_tls_creds(void)
> > >      return s->parameters.tls_creds;
> > >  }
> > >
> > > +const strList *migrate_dsa_accel_path(void)
> > > +{
> > > +    MigrationState *s = migrate_get_current();
> > > +
> > > +    return s->parameters.dsa_accel_path;
> > > +}
> > > +
> > >  const char *migrate_tls_hostname(void)
> > >  {
> > >      MigrationState *s = migrate_get_current();
> > > @@ -945,6 +953,8 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
> > >      params->zero_page_detection = s->parameters.zero_page_detection;
> > >      params->has_direct_io = true;
> > >      params->direct_io = s->parameters.direct_io;
> > > +    params->has_dsa_accel_path = true;
> > > +    params->dsa_accel_path = QAPI_CLONE(strList, s->parameters.dsa_accel_path);
> > >
> > >      return params;
> > >  }
> > > @@ -953,6 +963,7 @@ void migrate_params_init(MigrationParameters *params)
> > >  {
> > >      params->tls_hostname = g_strdup("");
> > >      params->tls_creds = g_strdup("");
> > > +    params->dsa_accel_path = NULL;
> > >
> > >      /* Set has_* up only for parameter checks */
> > >      params->has_throttle_trigger_threshold = true;
> > > @@ -1165,6 +1176,14 @@ bool migrate_params_check(MigrationParameters *params, Error **errp)
> > >          return false;
> > >      }
> > >
> > > +    if (params->has_zero_page_detection &&
> > > +        params->zero_page_detection == ZERO_PAGE_DETECTION_DSA_ACCEL) {
> > > +        if (!qemu_dsa_is_supported()) {
> > > +            error_setg(errp, "DSA acceleration is not supported.");
> > > +            return false;
> > > +        }
> > > +    }
> > > +
> > >      return true;
> > >  }
> > >
> > > @@ -1278,6 +1297,11 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
> > >      if (params->has_direct_io) {
> > >          dest->direct_io = params->direct_io;
> > >      }
> > > +
> > > +    if (params->has_dsa_accel_path) {
> > > +        dest->has_dsa_accel_path = true;
> > > +        dest->dsa_accel_path = params->dsa_accel_path;
> > > +    }
> > >  }
> > >
> > >  static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
> > > @@ -1410,6 +1434,12 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
> > >      if (params->has_direct_io) {
> > >          s->parameters.direct_io = params->direct_io;
> > >      }
> > > +    if (params->has_dsa_accel_path) {
> > > +        qapi_free_strList(s->parameters.dsa_accel_path);
> > > +        s->parameters.has_dsa_accel_path = true;
> > > +        s->parameters.dsa_accel_path =
> > > +            QAPI_CLONE(strList, params->dsa_accel_path);
> > > +    }
> > >  }
> > >
> > >  void qmp_migrate_set_parameters(MigrateSetParameters *params, Error **errp)
> > > diff --git a/migration/options.h b/migration/options.h
> > > index a0bd6edc06..8198b220bd 100644
> > > --- a/migration/options.h
> > > +++ b/migration/options.h
> > > @@ -86,6 +86,7 @@ const char *migrate_tls_creds(void);
> > >  const char *migrate_tls_hostname(void);
> > >  uint64_t migrate_xbzrle_cache_size(void);
> > >  ZeroPageDetection migrate_zero_page_detection(void);
> > > +const strList *migrate_dsa_accel_path(void);
> > >
> > >  /* parameters helpers */
> > >
> > > diff --git a/qapi/migration.json b/qapi/migration.json
> > > index b66cccf107..d8b42ceae6 100644
> > > --- a/qapi/migration.json
> > > +++ b/qapi/migration.json
> > > @@ -626,10 +626,14 @@
> > >  #     multifd migration is enabled, else in the main migration thread
> > >  #     as for @legacy.
> > >  #
> > > +# @dsa-accel: Perform zero page checking with the DSA accelerator
> > > +#     offloading in multifd sender thread if multifd migration is
> > > +#     enabled, else in the main migration thread as for @legacy.
> > > +#
> > >  # Since: 9.0
> > >  ##
> > >  { 'enum': 'ZeroPageDetection',
> > > -  'data': [ 'none', 'legacy', 'multifd' ] }
> > > +  'data': [ 'none', 'legacy', 'multifd', 'dsa-accel' ] }
> > >
> > >  ##
> > >  # @BitmapMigrationBitmapAliasTransform:
> > > @@ -837,6 +841,12 @@
> > >  #     See description in @ZeroPageDetection.  Default is 'multifd'.
> > >  #     (since 9.0)
> > >  #
> > > +# @dsa-accel-path: If enabled, use DSA accelerator offloading for
> > > +#     certain memory operations. Enable DSA accelerator for zero
> > > +#     page detection offloading by setting the @zero-page-detection
> > > +#     to dsa-accel. This parameter defines the dsa device path, and
> > > +#     defaults to an empty list.  (Since 9.2)
> > > +#
> > >  # @direct-io: Open migration files with O_DIRECT when possible.  This
> > >  #     only has effect if the @mapped-ram capability is enabled.
> > >  #     (Since 9.1)
> > > @@ -855,7 +865,7 @@
> > >             'cpu-throttle-initial', 'cpu-throttle-increment',
> > >             'cpu-throttle-tailslow',
> > >             'tls-creds', 'tls-hostname', 'tls-authz', 'max-bandwidth',
> > > -           'avail-switchover-bandwidth', 'downtime-limit',
> > > +           'avail-switchover-bandwidth', 'downtime-limit', 'dsa-accel-path',
> > >             { 'name': 'x-checkpoint-delay', 'features': [ 'unstable' ] },
> > >             'multifd-channels',
> > >             'xbzrle-cache-size', 'max-postcopy-bandwidth',
> > > @@ -1018,6 +1028,12 @@
> > >  #     See description in @ZeroPageDetection.  Default is 'multifd'.
> > >  #     (since 9.0)
> > >  #
> > > +# @dsa-accel-path: If enabled, use DSA accelerator offloading for
> > > +#     certain memory operations. Enable DSA accelerator for zero
> > > +#     page detection offloading by setting the @zero-page-detection
> > > +#     to dsa-accel. This parameter defines the dsa device path, and
> > > +#     defaults to an empty list.  (Since 9.2)
> > > +#
> > >  # @direct-io: Open migration files with O_DIRECT when possible.  This
> > >  #     only has effect if the @mapped-ram capability is enabled.
> > >  #     (Since 9.1)
> > > @@ -1063,7 +1079,8 @@
> > >              '*vcpu-dirty-limit': 'uint64',
> > >              '*mode': 'MigMode',
> > >              '*zero-page-detection': 'ZeroPageDetection',
> > > -            '*direct-io': 'bool' } }
> > > +            '*direct-io': 'bool',
> > > +            '*dsa-accel-path': [ 'str' ] } }
> > >
> > >  ##
> > >  # @migrate-set-parameters:
> > > @@ -1228,6 +1245,12 @@
> > >  #     See description in @ZeroPageDetection.  Default is 'multifd'.
> > >  #     (since 9.0)
> > >  #
> > > +# @dsa-accel-path: If enabled, use DSA accelerator offloading for
> > > +#     certain memory operations. Enable DSA accelerator for zero
> > > +#     page detection offloading by setting the @zero-page-detection
> > > +#     to dsa-accel. This parameter defines the dsa device path, and
> > > +#     defaults to an empty list.  (Since 9.2)
> > > +#
> > >  # @direct-io: Open migration files with O_DIRECT when possible.  This
> > >  #     only has effect if the @mapped-ram capability is enabled.
> > >  #     (Since 9.1)
> > > @@ -1270,7 +1293,8 @@
> > >              '*vcpu-dirty-limit': 'uint64',
> > >              '*mode': 'MigMode',
> > >              '*zero-page-detection': 'ZeroPageDetection',
> > > -            '*direct-io': 'bool' } }
> > > +            '*direct-io': 'bool',
> > > +            '*dsa-accel-path': [ 'str' ] } }
> > >
> > >  ##
> > >  # @query-migrate-parameters:
> > > diff --git a/util/dsa.c b/util/dsa.c
> > > index cbaa47c360..eeede3c0c7 100644
> > > --- a/util/dsa.c
> > > +++ b/util/dsa.c
> > > @@ -23,6 +23,7 @@
> > >  #include "qemu/bswap.h"
> > >  #include "qemu/error-report.h"
> > >  #include "qemu/rcu.h"
> > > +#include <cpuid.h>
> > >
> > >  #pragma GCC push_options
> > >  #pragma GCC target("enqcmd")
> > > @@ -691,6 +692,36 @@ static void dsa_completion_thread_stop(void *opaque)
> > >      qemu_sem_destroy(&thread_context->sem_init_done);
> > >  }
> > >
> > > +/**
> > > + * @brief Check if DSA is supported.
> > > + *
> > > + * @return True if DSA is supported, otherwise false.
> > > + */
> > > +bool qemu_dsa_is_supported(void)
> > > +{
> > > +    /*
> > > +     * movdir64b is indicated by bit 28 of ecx in CPUID leaf 7, subleaf 0.
> > > +     * enqcmd is indicated by bit 29 of ecx in CPUID leaf 7, subleaf 0.
> > > +     * Doc: https://cdrdv2-public.intel.com/819680/architecture-instruction-\
> > > +     *      set-extensions-programming-reference.pdf
> > > +     */
> > > +    uint32_t eax, ebx, ecx, edx;
> > > +    bool movedirb_enabled;
> > > +    bool enqcmd_enabled;
> > > +
> > > +    __get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
> > > +    movedirb_enabled = (ecx >> 28) & 0x1;
> > > +    if (!movedirb_enabled) {
> > > +        return false;
> > > +    }
> > > +    enqcmd_enabled = (ecx >> 29) & 0x1;
> > > +    if (!enqcmd_enabled) {
> > > +        return false;
> > > +    }
> > > +
> > > +    return true;
> > > +}
> > > +
> > >  /**
> > >   * @brief Check if DSA is running.
> > >   *
> > > --
> > > Yichen Wang
> > >
> > --
> >  -----Open up your eyes, open up your mind, open up your code -------
> > / Dr. David Alan Gilbert    |       Running GNU/Linux       | Happy  \
> > \        dave @ treblig.org |                               | In Hex /
> >  \ _________________________|_____ http://www.treblig.org   |_______/
-- 
 -----Open up your eyes, open up your mind, open up your code -------   
/ Dr. David Alan Gilbert    |       Running GNU/Linux       | Happy  \ 
\        dave @ treblig.org |                               | In Hex /
 \ _________________________|_____ http://www.treblig.org   |_______/


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 03/12] util/dsa: Implement DSA device start and stop logic.
  2024-10-09 23:46 ` [PATCH v6 03/12] util/dsa: Implement DSA device start and stop logic Yichen Wang
@ 2024-10-16 18:59   ` Peter Xu
  2024-10-16 21:00   ` Fabiano Rosas
  1 sibling, 0 replies; 26+ messages in thread
From: Peter Xu @ 2024-10-16 18:59 UTC (permalink / raw)
  To: Yichen Wang
  Cc: Dr. David Alan Gilbert, Paolo Bonzini, Marc-André Lureau,
	Daniel P. Berrangé, Philippe Mathieu-Daudé,
	Fabiano Rosas, Eric Blake, Markus Armbruster, Michael S. Tsirkin,
	Cornelia Huck, qemu-devel, Hao Xiang, Liu, Yuan1, Shivam Kumar,
	Ho-Ren (Jack) Chuang, Bryan Zhang

On Wed, Oct 09, 2024 at 04:46:01PM -0700, Yichen Wang wrote:
> diff --git a/util/meson.build b/util/meson.build
> index 5d8bef9891..3360f62923 100644
> --- a/util/meson.build
> +++ b/util/meson.build
> @@ -88,6 +88,9 @@ if have_block or have_ga
>  endif
>  if have_block
>    util_ss.add(files('aio-wait.c'))
> +  if config_host_data.get('CONFIG_DSA_OPT')
> +    util_ss.add(files('dsa.c'))
> +  endif

It's not clear to me why this needs to be under have_block.

Since we want to make sure this file is ignored on !x86, maybe it should be
put under the "elif cpu in ['x86', 'x86_64']" section.

>    util_ss.add(files('buffer.c'))
>    util_ss.add(files('bufferiszero.c'))
>    util_ss.add(files('hbitmap.c'))
> -- 
> Yichen Wang
> 

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [External] Re: [PATCH v6 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration.
  2024-10-15 22:02   ` [External] " Yichen Wang
@ 2024-10-16 19:44     ` Peter Xu
  0 siblings, 0 replies; 26+ messages in thread
From: Peter Xu @ 2024-10-16 19:44 UTC (permalink / raw)
  To: Yichen Wang
  Cc: Dr. David Alan Gilbert, Paolo Bonzini, Marc-André Lureau,
	Daniel P. Berrangé, Philippe Mathieu-Daudé,
	Fabiano Rosas, Eric Blake, Markus Armbruster, Michael S. Tsirkin,
	Cornelia Huck, qemu-devel, Hao Xiang, Liu, Yuan1, Shivam Kumar,
	Ho-Ren (Jack) Chuang

On Tue, Oct 15, 2024 at 03:02:37PM -0700, Yichen Wang wrote:
> On Fri, Oct 11, 2024 at 9:32 AM Peter Xu <peterx@redhat.com> wrote:
> >
> > On Wed, Oct 09, 2024 at 04:45:58PM -0700, Yichen Wang wrote:
> >
> > The doc update is still missing under docs/, we may need that for a final
> > merge.
> >
> 
> I will work with Intel to prepare a doc in my next patch.
> 
> > Are you using this in production?  How does it perform in real life?
> > What is the major issue to solve for you?  Is it "zero detect eats cpu
> > too much", or "migration too slow", or "we're experimenting with the
> > new hardware and seeing how it goes if we apply it on top of migrations"?
> >
> 
> Yes, we do use it in production. Our codebase is based on an old QEMU
> release (5.x), so we backported the series there. The major use case
> is simply to accelerate live migration, and it is currently under QA
> scale testing. The main motivation is that we reserve 4 cores for all
> control-plane services, including QEMU. While doing secondary
> scheduling (i.e. live migration to reduce fragmentation, which is very
> common among cloud providers), we found that QEMU eats a lot of CPU,
> which causes jitter and slowness on the control plane. Even though
> this does not happen too frequently, we still want it to be stable.
> With the help of DSA, we save CPU while accelerating the process, so
> we want to use it in production.

Thanks. Please consider adding something like this (the issues, why
DSA helps and how, etc.) into the doc file.

> 
> > There's a lot of new code added for dsa just for this optimization on
> > zero page detection.  We'd better understand the major benefits, and
> > also whether that's applicable to other parts of qemu or
> > migration-only.  I actually wonder, if we're going to support enqcmd,
> > whether migration is the best starting point (rather than other places
> > where we emulate tons of devices, and maybe some backends can speed up
> > IOs with enqcmd in some form?).. but it's more of a pure question.
> >
> 
> I tried to put most of the code in dsa.c and make minimal changes to
> all other files. Even within dsa.c, there is an abstraction for
> "submit a task" and an implementation of "submit a buffer_zero task".
> This is the best structure I can think of. I am open to suggestions on
> how we can move this forward. :)

That's ok.

Though I think you ignored some of my questions in the email about a
parameter that was mentioned but that I never found in this series.  If
you plan to repost soon, please help make sure the patchset is properly
tested (including builds), and that the results reflect what was posted.

Thanks,

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v6 03/12] util/dsa: Implement DSA device start and stop logic.
  2024-10-09 23:46 ` [PATCH v6 03/12] util/dsa: Implement DSA device start and stop logic Yichen Wang
  2024-10-16 18:59   ` Peter Xu
@ 2024-10-16 21:00   ` Fabiano Rosas
  1 sibling, 0 replies; 26+ messages in thread
From: Fabiano Rosas @ 2024-10-16 21:00 UTC (permalink / raw)
  To: Yichen Wang, Dr. David Alan Gilbert, Paolo Bonzini,
	Marc-André Lureau, Daniel P. Berrangé,
	Philippe Mathieu-Daudé, Peter Xu, Eric Blake,
	Markus Armbruster, Michael S. Tsirkin, Cornelia Huck, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Shivam Kumar, Ho-Ren (Jack) Chuang,
	Yichen Wang, Bryan Zhang

Yichen Wang <yichen.wang@bytedance.com> writes:

> From: Hao Xiang <hao.xiang@linux.dev>
>
> * DSA device open and close.
> * DSA group contains multiple DSA devices.
> * DSA group configure/start/stop/clean.
>
> Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
> Signed-off-by: Bryan Zhang <bryan.zhang@bytedance.com>
> Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
> ---
>  include/qemu/dsa.h | 103 +++++++++++++++++
>  util/dsa.c         | 282 +++++++++++++++++++++++++++++++++++++++++++++
>  util/meson.build   |   3 +
>  3 files changed, 388 insertions(+)
>  create mode 100644 include/qemu/dsa.h
>  create mode 100644 util/dsa.c
>
> diff --git a/include/qemu/dsa.h b/include/qemu/dsa.h
> new file mode 100644
> index 0000000000..501bb8c70d
> --- /dev/null
> +++ b/include/qemu/dsa.h
> @@ -0,0 +1,103 @@
> +/*
> + * Interface for using Intel Data Streaming Accelerator to offload certain
> + * background operations.
> + *
> + * Copyright (C) Bytedance Ltd.
> + *
> + * Authors:
> + *  Hao Xiang <hao.xiang@bytedance.com>
> + *  Yichen Wang <yichen.wang@bytedance.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> + * See the COPYING file in the top-level directory.
> + */
> +
> +#ifndef QEMU_DSA_H
> +#define QEMU_DSA_H
> +
> +#include "qemu/error-report.h"
> +#include "qemu/thread.h"
> +#include "qemu/queue.h"
> +
> +#ifdef CONFIG_DSA_OPT
> +
> +#pragma GCC push_options
> +#pragma GCC target("enqcmd")
> +
> +#include <linux/idxd.h>
> +#include "x86intrin.h"
> +
> +typedef struct {
> +    void *work_queue;
> +} QemuDsaDevice;
> +
> +typedef QSIMPLEQ_HEAD(QemuDsaTaskQueue, QemuDsaBatchTask) QemuDsaTaskQueue;
> +
> +typedef struct {
> +    QemuDsaDevice *dsa_devices;
> +    int num_dsa_devices;
> +    /* The index of the next DSA device to be used. */
> +    uint32_t device_allocator_index;
> +    bool running;
> +    QemuMutex task_queue_lock;
> +    QemuCond task_queue_cond;
> +    QemuDsaTaskQueue task_queue;
> +} QemuDsaDeviceGroup;
> +
> +/**
> + * @brief Initializes DSA devices.
> + *
> + * @param dsa_parameter A list of DSA device paths from the migration parameter.
> + *
> + * @return int Zero if successful, otherwise non zero.
> + */
> +int qemu_dsa_init(const strList *dsa_parameter, Error **errp);
> +
> +/**
> + * @brief Start logic to enable using DSA.
> + */
> +void qemu_dsa_start(void);
> +
> +/**
> + * @brief Stop the device group and the completion thread.
> + */
> +void qemu_dsa_stop(void);
> +
> +/**
> + * @brief Clean up system resources created for DSA offloading.
> + */
> +void qemu_dsa_cleanup(void);
> +
> +/**
> + * @brief Check if DSA is running.
> + *
> + * @return True if DSA is running, otherwise false.
> + */
> +bool qemu_dsa_is_running(void);
> +
> +#else
> +
> +static inline bool qemu_dsa_is_running(void)
> +{
> +    return false;
> +}
> +
> +static inline int qemu_dsa_init(const strList *dsa_parameter, Error **errp)
> +{
> +    if (dsa_parameter != NULL && strlen(dsa_parameter) != 0) {
> +        error_setg(errp, "DSA is not supported.");
> +        return -1;
> +    }
> +
> +    return 0;
> +}
> +
> +static inline void qemu_dsa_start(void) {}
> +
> +static inline void qemu_dsa_stop(void) {}
> +
> +static inline void qemu_dsa_cleanup(void) {}
> +
> +#endif
> +
> +#endif
> diff --git a/util/dsa.c b/util/dsa.c
> new file mode 100644
> index 0000000000..54d0e20c29
> --- /dev/null
> +++ b/util/dsa.c
> @@ -0,0 +1,282 @@
> +/*
> + * Use Intel Data Streaming Accelerator to offload certain background
> + * operations.
> + *
> + * Copyright (C) Bytedance Ltd.
> + *
> + * Authors:
> + *  Hao Xiang <hao.xiang@bytedance.com>
> + *  Bryan Zhang <bryan.zhang@bytedance.com>
> + *  Yichen Wang <yichen.wang@bytedance.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> + * See the COPYING file in the top-level directory.
> + */
> +
> +#include "qemu/osdep.h"
> +#include "qapi/error.h"
> +#include "qemu/queue.h"
> +#include "qemu/memalign.h"
> +#include "qemu/lockable.h"
> +#include "qemu/cutils.h"
> +#include "qemu/dsa.h"
> +#include "qemu/bswap.h"
> +#include "qemu/error-report.h"
> +#include "qemu/rcu.h"
> +
> +#pragma GCC push_options
> +#pragma GCC target("enqcmd")
> +
> +#include <linux/idxd.h>
> +#include "x86intrin.h"
> +
> +#define DSA_WQ_PORTAL_SIZE 4096
> +#define MAX_DSA_DEVICES 16
> +
> +uint32_t max_retry_count;
> +static QemuDsaDeviceGroup dsa_group;
> +
> +
> +/**
> + * @brief This function opens a DSA device's work queue and
> + *        maps the DSA device memory into the current process.
> + *
> + * @param dsa_wq_path A pointer to the DSA device work queue's file path.
> + * @return A pointer to the mapped memory, or MAP_FAILED on failure.
> + */
> +static void *
> +map_dsa_device(const char *dsa_wq_path)
> +{
> +    void *dsa_device;
> +    int fd;
> +
> +    fd = open(dsa_wq_path, O_RDWR);
> +    if (fd < 0) {
> +        error_report("Open %s failed with errno = %d.",
> +                dsa_wq_path, errno);
> +        return MAP_FAILED;
> +    }
> +    dsa_device = mmap(NULL, DSA_WQ_PORTAL_SIZE, PROT_WRITE,
> +                      MAP_SHARED | MAP_POPULATE, fd, 0);
> +    close(fd);
> +    if (dsa_device == MAP_FAILED) {
> +        error_report("mmap failed with errno = %d.", errno);
> +        return MAP_FAILED;
> +    }
> +    return dsa_device;
> +}
> +
> +/**
> + * @brief Initializes a DSA device structure.
> + *
> + * @param instance A pointer to the DSA device.
> + * @param work_queue A pointer to the DSA work queue.
> + */
> +static void
> +dsa_device_init(QemuDsaDevice *instance,
> +                void *dsa_work_queue)
> +{
> +    instance->work_queue = dsa_work_queue;
> +}
> +
> +/**
> + * @brief Cleans up a DSA device structure.
> + *
> + * @param instance A pointer to the DSA device to cleanup.
> + */
> +static void
> +dsa_device_cleanup(QemuDsaDevice *instance)
> +{
> +    if (instance->work_queue != MAP_FAILED) {
> +        munmap(instance->work_queue, DSA_WQ_PORTAL_SIZE);
> +    }
> +}
> +
> +/**
> + * @brief Initializes a DSA device group.
> + *
> + * @param group A pointer to the DSA device group.
> + * @param dsa_parameter A list of DSA device paths from the migration
> + * parameter, separated by space characters.
> + *
> + * @return Zero if successful, non-zero otherwise.
> + */
> +static int
> +dsa_device_group_init(QemuDsaDeviceGroup *group,
> +                      const strList *dsa_parameter,
> +                      Error **errp)
> +{
> +    if (dsa_parameter == NULL) {
> +        error_setg(errp, "dsa device path is not supplied.");
> +        return -1;
> +    }
> +
> +    int ret = 0;
> +    const char *dsa_path[MAX_DSA_DEVICES];
> +    int num_dsa_devices = 0;
> +
> +    while (dsa_parameter) {
> +        dsa_path[num_dsa_devices++] = dsa_parameter->value;
> +        if (num_dsa_devices == MAX_DSA_DEVICES) {
> +            break;
> +        }
> +        dsa_parameter = dsa_parameter->next;
> +    }
> +
> +    group->dsa_devices =
> +        g_new0(QemuDsaDevice, num_dsa_devices);
> +    group->num_dsa_devices = num_dsa_devices;
> +    group->device_allocator_index = 0;
> +
> +    group->running = false;
> +    qemu_mutex_init(&group->task_queue_lock);
> +    qemu_cond_init(&group->task_queue_cond);
> +    QSIMPLEQ_INIT(&group->task_queue);
> +
> +    void *dsa_wq = MAP_FAILED;
> +    for (int i = 0; i < num_dsa_devices; i++) {
> +        dsa_wq = map_dsa_device(dsa_path[i]);
> +        if (dsa_wq == MAP_FAILED) {
> +            error_setg(errp, "map_dsa_device failed MAP_FAILED.");
> +            ret = -1;
> +            goto exit;

This will leave the remaining dsa_devices uninitialized after
map_dsa_device() fails for the i-th device, but the loop in
dsa_device_group_cleanup() still passes all of them into
dsa_device_cleanup(); since g_new0() zeroed the array, work_queue is
NULL rather than MAP_FAILED, so the != MAP_FAILED check there is true
and munmap() gets called on NULL pointers.

> +        }
> +        dsa_device_init(&dsa_group.dsa_devices[i], dsa_wq);

I think you mean &group->dsa_devices[i] here.
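
Something along these lines would keep the cleanup path safe (just a
sketch against the quoted function, untested):

/*
 * Mark every slot unmapped up front so dsa_device_cleanup() can run
 * after a partial failure, and index through 'group' rather than the
 * global 'dsa_group'.
 */
group->dsa_devices = g_new0(QemuDsaDevice, num_dsa_devices);
group->num_dsa_devices = num_dsa_devices;
for (int i = 0; i < num_dsa_devices; i++) {
    group->dsa_devices[i].work_queue = MAP_FAILED;
}

for (int i = 0; i < num_dsa_devices; i++) {
    void *dsa_wq = map_dsa_device(dsa_path[i]);
    if (dsa_wq == MAP_FAILED) {
        error_setg(errp, "map_dsa_device failed for %s.", dsa_path[i]);
        return -1; /* assuming the caller then runs qemu_dsa_cleanup() */
    }
    dsa_device_init(&group->dsa_devices[i], dsa_wq);
}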

> +    }
> +
> +exit:
> +    return ret;
> +}
> +
> +/**
> + * @brief Starts a DSA device group.
> + *
> + * @param group A pointer to the DSA device group.
> + */
> +static void
> +dsa_device_group_start(QemuDsaDeviceGroup *group)
> +{
> +    group->running = true;
> +}
> +
> +/**
> + * @brief Stops a DSA device group.
> + *
> + * @param group A pointer to the DSA device group.
> + */
> +__attribute__((unused))
> +static void
> +dsa_device_group_stop(QemuDsaDeviceGroup *group)
> +{
> +    group->running = false;
> +}
> +
> +/**
> + * @brief Cleans up a DSA device group.
> + *
> + * @param group A pointer to the DSA device group.
> + */
> +static void
> +dsa_device_group_cleanup(QemuDsaDeviceGroup *group)
> +{
> +    if (!group->dsa_devices) {
> +        return;
> +    }
> +    for (int i = 0; i < group->num_dsa_devices; i++) {
> +        dsa_device_cleanup(&group->dsa_devices[i]);
> +    }
> +    g_free(group->dsa_devices);
> +    group->dsa_devices = NULL;
> +
> +    qemu_mutex_destroy(&group->task_queue_lock);
> +    qemu_cond_destroy(&group->task_queue_cond);
> +}
> +
> +/**
> + * @brief Returns the next available DSA device in the group.
> + *
> + * @param group A pointer to the DSA device group.
> + *
> + * @return struct QemuDsaDevice* A pointer to the next available DSA device
> + *         in the group.
> + */
> +__attribute__((unused))
> +static QemuDsaDevice *
> +dsa_device_group_get_next_device(QemuDsaDeviceGroup *group)
> +{
> +    if (group->num_dsa_devices == 0) {
> +        return NULL;
> +    }
> +    uint32_t current = qatomic_fetch_inc(&group->device_allocator_index);
> +    current %= group->num_dsa_devices;
> +    return &group->dsa_devices[current];
> +}
> +
> +/**
> + * @brief Check if DSA is running.
> + *
> + * @return True if DSA is running, otherwise false.
> + */
> +bool qemu_dsa_is_running(void)
> +{
> +    return false;
> +}
> +
> +static void
> +dsa_globals_init(void)
> +{
> +    max_retry_count = UINT32_MAX;
> +}
> +
> +/**
> + * @brief Initializes DSA devices.
> + *
> + * @param dsa_parameter A list of DSA device paths from the migration
> + * parameter.
> + *
> + * @return int Zero if successful, otherwise non-zero.
> + */
> +int qemu_dsa_init(const strList *dsa_parameter, Error **errp)
> +{
> +    dsa_globals_init();
> +
> +    return dsa_device_group_init(&dsa_group, dsa_parameter, errp);
> +}
> +
> +/**
> + * @brief Start logic to enable using DSA.
> + *
> + */
> +void qemu_dsa_start(void)
> +{
> +    if (dsa_group.num_dsa_devices == 0) {
> +        return;
> +    }
> +    if (dsa_group.running) {
> +        return;
> +    }
> +    dsa_device_group_start(&dsa_group);
> +}
> +
> +/**
> + * @brief Stop the device group and the completion thread.
> + *
> + */
> +void qemu_dsa_stop(void)
> +{
> +    QemuDsaDeviceGroup *group = &dsa_group;
> +
> +    if (!group->running) {
> +        return;
> +    }
> +}
> +
> +/**
> + * @brief Clean up system resources created for DSA offloading.
> + *
> + */
> +void qemu_dsa_cleanup(void)
> +{
> +    qemu_dsa_stop();
> +    dsa_device_group_cleanup(&dsa_group);
> +}
> +
> diff --git a/util/meson.build b/util/meson.build
> index 5d8bef9891..3360f62923 100644
> --- a/util/meson.build
> +++ b/util/meson.build
> @@ -88,6 +88,9 @@ if have_block or have_ga
>  endif
>  if have_block
>    util_ss.add(files('aio-wait.c'))
> +  if config_host_data.get('CONFIG_DSA_OPT')
> +    util_ss.add(files('dsa.c'))
> +  endif
>    util_ss.add(files('buffer.c'))
>    util_ss.add(files('bufferiszero.c'))
>    util_ss.add(files('hbitmap.c'))



* Re: [PATCH v6 09/12] migration/multifd: Enable DSA offloading in multifd sender path.
  2024-10-09 23:46 ` [PATCH v6 09/12] migration/multifd: Enable DSA offloading in multifd sender path Yichen Wang
@ 2024-10-17 19:11   ` Fabiano Rosas
  0 siblings, 0 replies; 26+ messages in thread
From: Fabiano Rosas @ 2024-10-17 19:11 UTC (permalink / raw)
  To: Yichen Wang, Dr. David Alan Gilbert, Paolo Bonzini,
	Marc-André Lureau, Daniel P. Berrangé,
	Philippe Mathieu-Daudé, Peter Xu, Eric Blake,
	Markus Armbruster, Michael S. Tsirkin, Cornelia Huck, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Shivam Kumar, Ho-Ren (Jack) Chuang,
	Yichen Wang

Yichen Wang <yichen.wang@bytedance.com> writes:

> From: Hao Xiang <hao.xiang@linux.dev>
>
> Multifd sender path gets an array of pages queued by the migration
> thread. It performs zero page checking on every page in the array.
> The pages are classified as either a zero page or a normal page. This
> change uses Intel DSA to offload the zero page checking from CPU to
> the DSA accelerator. The sender thread submits a batch of pages to DSA
> hardware and waits for the DSA completion thread to signal for work
> completion.
>
> Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
> Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
> ---
>  migration/multifd-zero-page.c | 133 ++++++++++++++++++++++++++++++----
>  migration/multifd.c           |  19 ++++-
>  migration/multifd.h           |   5 ++
>  3 files changed, 141 insertions(+), 16 deletions(-)
>
> diff --git a/migration/multifd-zero-page.c b/migration/multifd-zero-page.c
> index f1e988a959..e4bfff23a4 100644
> --- a/migration/multifd-zero-page.c
> +++ b/migration/multifd-zero-page.c
> @@ -21,7 +21,9 @@
>  
>  static bool multifd_zero_page_enabled(void)
>  {
> -    return migrate_zero_page_detection() == ZERO_PAGE_DETECTION_MULTIFD;
> +    ZeroPageDetection curMethod = migrate_zero_page_detection();
> +    return (curMethod == ZERO_PAGE_DETECTION_MULTIFD ||
> +            curMethod == ZERO_PAGE_DETECTION_DSA_ACCEL);
>  }
>  
>  static void swap_page_offset(ram_addr_t *pages_offset, int a, int b)
> @@ -37,26 +39,49 @@ static void swap_page_offset(ram_addr_t *pages_offset, int a, int b)
>      pages_offset[b] = temp;
>  }
>  
> +#ifdef CONFIG_DSA_OPT
> +
> +static void swap_result(bool *results, int a, int b)
> +{
> +    bool temp;
> +
> +    if (a == b) {
> +        return;
> +    }
> +
> +    temp = results[a];
> +    results[a] = results[b];
> +    results[b] = temp;
> +}
> +
>  /**
> - * multifd_send_zero_page_detect: Perform zero page detection on all pages.
> + * zero_page_detect_dsa: Perform zero page detection using
> + * Intel Data Streaming Accelerator (DSA).
>   *
> - * Sorts normal pages before zero pages in p->pages->offset and updates
> - * p->pages->normal_num.
> + * Sorts normal pages before zero pages in pages->offset and updates
> + * pages->normal_num.
>   *
>   * @param p A pointer to the send params.
>   */
> -void multifd_send_zero_page_detect(MultiFDSendParams *p)
> +static void zero_page_detect_dsa(MultiFDSendParams *p)
>  {
>      MultiFDPages_t *pages = &p->data->u.ram;
>      RAMBlock *rb = pages->block;
> -    int i = 0;
> -    int j = pages->num - 1;
> +    bool *results = p->dsa_batch_task->results;
>  
> -    if (!multifd_zero_page_enabled()) {
> -        pages->normal_num = pages->num;
> -        goto out;
> +    for (int i = 0; i < pages->num; i++) {
> +        p->dsa_batch_task->addr[i] =
> +            (ram_addr_t)(rb->host + pages->offset[i]);
>      }
>  
> +    buffer_is_zero_dsa_batch_sync(p->dsa_batch_task,
> +                                  (const void **)p->dsa_batch_task->addr,
> +                                  pages->num,
> +                                  multifd_ram_page_size());
> +
> +    int i = 0;
> +    int j = pages->num - 1;
> +
>      /*
>       * Sort the page offset array by moving all normal pages to
>       * the left and all zero pages to the right of the array.
> @@ -64,23 +89,39 @@ void multifd_send_zero_page_detect(MultiFDSendParams *p)
>      while (i <= j) {
>          uint64_t offset = pages->offset[i];
>  
> -        if (!buffer_is_zero(rb->host + offset, multifd_ram_page_size())) {
> +        if (!results[i]) {
>              i++;
>              continue;
>          }
>  
> +        swap_result(results, i, j);
>          swap_page_offset(pages->offset, i, j);
>          ram_release_page(rb->idstr, offset);
>          j--;
>      }
>  
>      pages->normal_num = i;
> +}
>  
> -out:
> -    stat64_add(&mig_stats.normal_pages, pages->normal_num);
> -    stat64_add(&mig_stats.zero_pages, pages->num - pages->normal_num);
> +void multifd_dsa_cleanup(void)
> +{
> +    qemu_dsa_cleanup();
> +}
> +
> +#else
> +
> +static void zero_page_detect_dsa(MultiFDSendParams *p)
> +{
> +    g_assert_not_reached();
> +}
> +
> +void multifd_dsa_cleanup(void)
> +{
> +    g_assert_not_reached();
>  }
>  
> +#endif
> +
>  void multifd_recv_zero_page_process(MultiFDRecvParams *p)
>  {
>      for (int i = 0; i < p->zero_num; i++) {
> @@ -92,3 +133,67 @@ void multifd_recv_zero_page_process(MultiFDRecvParams *p)
>          }
>      }
>  }
> +
> +/**
> + * zero_page_detect_cpu: Perform zero page detection using CPU.
> + *
> + * Sorts normal pages before zero pages in p->pages->offset and updates
> + * p->pages->normal_num.
> + *
> + * @param p A pointer to the send params.
> + */
> +static void zero_page_detect_cpu(MultiFDSendParams *p)
> +{
> +    MultiFDPages_t *pages = &p->data->u.ram;
> +    RAMBlock *rb = pages->block;
> +    int i = 0;
> +    int j = pages->num - 1;
> +
> +    if (!multifd_zero_page_enabled()) {
> +        pages->normal_num = pages->num;
> +        return;
> +    }

This is already done in the caller.

> +
> +    /*
> +     * Sort the page offset array by moving all normal pages to
> +     * the left and all zero pages to the right of the array.
> +     */
> +    while (i <= j) {
> +        uint64_t offset = pages->offset[i];
> +
> +        if (!buffer_is_zero(rb->host + offset, multifd_ram_page_size())) {
> +            i++;
> +            continue;
> +        }
> +
> +        swap_page_offset(pages->offset, i, j);
> +        ram_release_page(rb->idstr, offset);
> +        j--;
> +    }
> +
> +    pages->normal_num = i;
> +}
> +
> +/**
> + * multifd_send_zero_page_detect: Perform zero page detection on all pages.
> + *
> + * @param p A pointer to the send params.
> + */
> +void multifd_send_zero_page_detect(MultiFDSendParams *p)
> +{
> +    MultiFDPages_t *pages = &p->data->u.ram;
> +
> +    if (!multifd_zero_page_enabled()) {
> +        pages->normal_num = pages->num;
> +        return;

goto out;

> +    }
> +
> +    if (qemu_dsa_is_running()) {
> +        zero_page_detect_dsa(p);
> +    } else {
> +        zero_page_detect_cpu(p);
> +    }
> +

out:

> +    stat64_add(&mig_stats.normal_pages, pages->normal_num);
> +    stat64_add(&mig_stats.zero_pages, pages->num - pages->normal_num);
> +}
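
With both labels in place, the whole function would read (sketch):

    void multifd_send_zero_page_detect(MultiFDSendParams *p)
    {
        MultiFDPages_t *pages = &p->data->u.ram;

        if (!multifd_zero_page_enabled()) {
            pages->normal_num = pages->num;
            goto out;
        }

        if (qemu_dsa_is_running()) {
            zero_page_detect_dsa(p);
        } else {
            zero_page_detect_cpu(p);
        }

    out:
        stat64_add(&mig_stats.normal_pages, pages->normal_num);
        stat64_add(&mig_stats.zero_pages, pages->num - pages->normal_num);
    }

That also keeps the page counters updated when zero page detection is
disabled, which the early return above misses.
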
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 9b200f4ad9..e255ccf0c7 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -13,6 +13,7 @@
>  #include "qemu/osdep.h"
>  #include "qemu/cutils.h"
>  #include "qemu/rcu.h"
> +#include "qemu/dsa.h"
>  #include "exec/target_page.h"
>  #include "sysemu/sysemu.h"
>  #include "exec/ramblock.h"
> @@ -462,6 +463,8 @@ static bool multifd_send_cleanup_channel(MultiFDSendParams *p, Error **errp)
>      p->name = NULL;
>      g_free(p->data);
>      p->data = NULL;
> +    buffer_zero_batch_task_destroy(p->dsa_batch_task);
> +    p->dsa_batch_task = NULL;
>      p->packet_len = 0;
>      g_free(p->packet);
>      p->packet = NULL;
> @@ -493,6 +496,8 @@ void multifd_send_shutdown(void)
>  
>      multifd_send_terminate_threads();
>  
> +    multifd_dsa_cleanup();
> +
>      for (i = 0; i < migrate_multifd_channels(); i++) {
>          MultiFDSendParams *p = &multifd_send_state->params[i];
>          Error *local_err = NULL;
> @@ -814,11 +819,21 @@ bool multifd_send_setup(void)
>      uint32_t page_count = multifd_ram_page_count();
>      bool use_packets = multifd_use_packets();
>      uint8_t i;
> +    Error *local_err = NULL;
>  
>      if (!migrate_multifd()) {
>          return true;
>      }
>  
> +    if (s &&
> +        s->parameters.zero_page_detection == ZERO_PAGE_DETECTION_DSA_ACCEL) {
> +        const strList *dsa_parameter = migrate_dsa_accel_path();
> +        if (qemu_dsa_init(dsa_parameter, &local_err)) {
> +            return false;
> +        }
> +        qemu_dsa_start();
> +    }
> +
>      thread_count = migrate_multifd_channels();
>      multifd_send_state = g_malloc0(sizeof(*multifd_send_state));
>      multifd_send_state->params = g_new0(MultiFDSendParams, thread_count);
> @@ -829,12 +844,12 @@ bool multifd_send_setup(void)
>  
>      for (i = 0; i < thread_count; i++) {
>          MultiFDSendParams *p = &multifd_send_state->params[i];
> -        Error *local_err = NULL;
>  
>          qemu_sem_init(&p->sem, 0);
>          qemu_sem_init(&p->sem_sync, 0);
>          p->id = i;
>          p->data = multifd_send_data_alloc();
> +        p->dsa_batch_task = buffer_zero_batch_task_init(page_count);
>  
>          if (use_packets) {
>              p->packet_len = sizeof(MultiFDPacket_t)
> @@ -865,7 +880,6 @@ bool multifd_send_setup(void)
>  
>      for (i = 0; i < thread_count; i++) {
>          MultiFDSendParams *p = &multifd_send_state->params[i];
> -        Error *local_err = NULL;
>  
>          ret = multifd_send_state->ops->send_setup(p, &local_err);
>          if (ret) {
> @@ -1047,6 +1061,7 @@ void multifd_recv_cleanup(void)
>              qemu_thread_join(&p->thread);
>          }
>      }
> +    multifd_dsa_cleanup();
>      for (i = 0; i < migrate_multifd_channels(); i++) {
>          multifd_recv_cleanup_channel(&multifd_recv_state->params[i]);
>      }
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 50d58c0c9c..e293ddbc1d 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -15,6 +15,7 @@
>  
>  #include "exec/target_page.h"
>  #include "ram.h"
> +#include "qemu/dsa.h"
>  
>  typedef struct MultiFDRecvData MultiFDRecvData;
>  typedef struct MultiFDSendData MultiFDSendData;
> @@ -155,6 +156,9 @@ typedef struct {
>      bool pending_sync;
>      MultiFDSendData *data;
>  
> +    /* Zero page checking batch task */
> +    QemuDsaBatchTask *dsa_batch_task;
> +
>      /* thread local variables. No locking required */
>  
>      /* pointer to the packet */
> @@ -313,6 +317,7 @@ void multifd_send_fill_packet(MultiFDSendParams *p);
>  bool multifd_send_prepare_common(MultiFDSendParams *p);
>  void multifd_send_zero_page_detect(MultiFDSendParams *p);
>  void multifd_recv_zero_page_process(MultiFDRecvParams *p);
> +void multifd_dsa_cleanup(void);
>  
>  static inline void multifd_send_prepare_header(MultiFDSendParams *p)
>  {



* Re: [PATCH v6 10/12] migration/multifd: Add migration option set packet size.
  2024-10-09 23:46 ` [PATCH v6 10/12] migration/multifd: Add migration option set packet size Yichen Wang
@ 2024-10-17 19:16   ` Fabiano Rosas
  0 siblings, 0 replies; 26+ messages in thread
From: Fabiano Rosas @ 2024-10-17 19:16 UTC (permalink / raw)
  To: Yichen Wang, Dr. David Alan Gilbert, Paolo Bonzini,
	Marc-André Lureau, Daniel P. Berrangé,
	Philippe Mathieu-Daudé, Peter Xu, Eric Blake,
	Markus Armbruster, Michael S. Tsirkin, Cornelia Huck, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Shivam Kumar, Ho-Ren (Jack) Chuang,
	Yichen Wang

Yichen Wang <yichen.wang@bytedance.com> writes:

> From: Hao Xiang <hao.xiang@linux.dev>
>
> During live migration, if the latency between sender and receiver is
> high and bandwidth is also high (a long and fat pipe), using a bigger
> packet size can help reduce migration total time. The current multifd
> packet size is 128 * 4KB. In addition, Intel DSA offloading performs
> better with a large batch task.
>
> This change adds an option to set the packet size, which is also useful
> for performance tuning. Both sender and receiver need to set the same
> packet size for migration to work.
>
> Set the option:
> migrate_set_parameter multifd-packet-size 4190208
>
> Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
> Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>

Looks good to me. However, could you do a migration-test pass setting
the maximum value for all the tests to see if anything breaks?
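
For a quick manual check, the maximum from this patch can be set on both
sides before migrating (parameter name and value per the commit
message):

    (qemu) migrate_set_parameter multifd-packet-size 4190208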

> ---
>  migration/migration-hmp-cmds.c |  7 ++++++
>  migration/multifd-zlib.c       |  6 ++++--
>  migration/multifd-zstd.c       |  6 ++++--
>  migration/options.c            | 39 ++++++++++++++++++++++++++++++++++
>  migration/options.h            |  1 +
>  qapi/migration.json            | 21 +++++++++++++++---
>  6 files changed, 73 insertions(+), 7 deletions(-)
>
> diff --git a/migration/migration-hmp-cmds.c b/migration/migration-hmp-cmds.c
> index 983f13b73c..561ed45250 100644
> --- a/migration/migration-hmp-cmds.c
> +++ b/migration/migration-hmp-cmds.c
> @@ -292,6 +292,9 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
>          monitor_printf(mon, "%s: %u ms\n",
>              MigrationParameter_str(MIGRATION_PARAMETER_X_CHECKPOINT_DELAY),
>              params->x_checkpoint_delay);
> +        monitor_printf(mon, "%s: %" PRIu64 "\n",
> +            MigrationParameter_str(MIGRATION_PARAMETER_MULTIFD_PACKET_SIZE),
> +            params->multifd_packet_size);
>          monitor_printf(mon, "%s: %u\n",
>              MigrationParameter_str(MIGRATION_PARAMETER_MULTIFD_CHANNELS),
>              params->multifd_channels);
> @@ -580,6 +583,10 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
>              QAPI_LIST_APPEND(tail, strv[i]);
>          }
>          break;
> +    case MIGRATION_PARAMETER_MULTIFD_PACKET_SIZE:
> +        p->has_multifd_packet_size = true;
> +        visit_type_size(v, param, &p->multifd_packet_size, &err);
> +        break;
>      case MIGRATION_PARAMETER_MULTIFD_CHANNELS:
>          p->has_multifd_channels = true;
>          visit_type_uint8(v, param, &p->multifd_channels, &err);
> diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
> index 8cf8a26bb4..58c278533a 100644
> --- a/migration/multifd-zlib.c
> +++ b/migration/multifd-zlib.c
> @@ -39,6 +39,7 @@ static int multifd_zlib_send_setup(MultiFDSendParams *p, Error **errp)
>      struct zlib_data *z = g_new0(struct zlib_data, 1);
>      z_stream *zs = &z->zs;
>      const char *err_msg;
> +    uint64_t multifd_packet_size = migrate_multifd_packet_size();
>  
>      zs->zalloc = Z_NULL;
>      zs->zfree = Z_NULL;
> @@ -48,7 +49,7 @@ static int multifd_zlib_send_setup(MultiFDSendParams *p, Error **errp)
>          goto err_free_z;
>      }
>      /* This is the maximum size of the compressed buffer */
> -    z->zbuff_len = compressBound(MULTIFD_PACKET_SIZE);
> +    z->zbuff_len = compressBound(multifd_packet_size);
>      z->zbuff = g_try_malloc(z->zbuff_len);
>      if (!z->zbuff) {
>          err_msg = "out of memory for zbuff";
> @@ -162,6 +163,7 @@ out:
>  
>  static int multifd_zlib_recv_setup(MultiFDRecvParams *p, Error **errp)
>  {
> +    uint64_t multifd_packet_size = migrate_multifd_packet_size();
>      struct zlib_data *z = g_new0(struct zlib_data, 1);
>      z_stream *zs = &z->zs;
>  
> @@ -176,7 +178,7 @@ static int multifd_zlib_recv_setup(MultiFDRecvParams *p, Error **errp)
>          return -1;
>      }
>      /* To be safe, we reserve twice the size of the packet */
> -    z->zbuff_len = MULTIFD_PACKET_SIZE * 2;
> +    z->zbuff_len = multifd_packet_size * 2;
>      z->zbuff = g_try_malloc(z->zbuff_len);
>      if (!z->zbuff) {
>          inflateEnd(zs);
> diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
> index abed140855..1f97a5417c 100644
> --- a/migration/multifd-zstd.c
> +++ b/migration/multifd-zstd.c
> @@ -39,6 +39,7 @@ struct zstd_data {
>  
>  static int multifd_zstd_send_setup(MultiFDSendParams *p, Error **errp)
>  {
> +    uint64_t multifd_packet_size = migrate_multifd_packet_size();
>      struct zstd_data *z = g_new0(struct zstd_data, 1);
>      int res;
>  
> @@ -58,7 +59,7 @@ static int multifd_zstd_send_setup(MultiFDSendParams *p, Error **errp)
>          return -1;
>      }
>      /* This is the maximum size of the compressed buffer */
> -    z->zbuff_len = ZSTD_compressBound(MULTIFD_PACKET_SIZE);
> +    z->zbuff_len = ZSTD_compressBound(multifd_packet_size);
>      z->zbuff = g_try_malloc(z->zbuff_len);
>      if (!z->zbuff) {
>          ZSTD_freeCStream(z->zcs);
> @@ -149,6 +150,7 @@ out:
>  
>  static int multifd_zstd_recv_setup(MultiFDRecvParams *p, Error **errp)
>  {
> +    uint64_t multifd_packet_size = migrate_multifd_packet_size();
>      struct zstd_data *z = g_new0(struct zstd_data, 1);
>      int ret;
>  
> @@ -170,7 +172,7 @@ static int multifd_zstd_recv_setup(MultiFDRecvParams *p, Error **errp)
>      }
>  
>      /* To be safe, we reserve twice the size of the packet */
> -    z->zbuff_len = MULTIFD_PACKET_SIZE * 2;
> +    z->zbuff_len = multifd_packet_size * 2;
>      z->zbuff = g_try_malloc(z->zbuff_len);
>      if (!z->zbuff) {
>          ZSTD_freeDStream(z->zds);
> diff --git a/migration/options.c b/migration/options.c
> index a0b3a7d291..b1eaf1c095 100644
> --- a/migration/options.c
> +++ b/migration/options.c
> @@ -80,6 +80,13 @@
>  #define DEFAULT_MIGRATE_ANNOUNCE_ROUNDS    5
>  #define DEFAULT_MIGRATE_ANNOUNCE_STEP    100
>  
> +/*
> + * Parameter for multifd packet size.
> + */
> +#define DEFAULT_MIGRATE_MULTIFD_PACKET_SIZE (128 * 4 * 1024)
> +/* DSA device supports up to 1024 batches, i.e. 1024 * 4K pages */
> +#define MAX_MIGRATE_MULTIFD_PACKET_SIZE (1024 * 4 * 1024)
> +
>  #define DEFINE_PROP_MIG_CAP(name, x)             \
>      DEFINE_PROP_BOOL(name, MigrationState, capabilities[x], false)
>  
> @@ -173,6 +180,9 @@ Property migration_properties[] = {
>      DEFINE_PROP_ZERO_PAGE_DETECTION("zero-page-detection", MigrationState,
>                         parameters.zero_page_detection,
>                         ZERO_PAGE_DETECTION_MULTIFD),
> +    DEFINE_PROP_SIZE("multifd-packet-size", MigrationState,
> +                     parameters.multifd_packet_size,
> +                     DEFAULT_MIGRATE_MULTIFD_PACKET_SIZE),
>  
>      /* Migration capabilities */
>      DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
> @@ -783,6 +793,13 @@ int migrate_multifd_channels(void)
>      return s->parameters.multifd_channels;
>  }
>  
> +uint64_t migrate_multifd_packet_size(void)
> +{
> +    MigrationState *s = migrate_get_current();
> +
> +    return s->parameters.multifd_packet_size;
> +}
> +
>  MultiFDCompression migrate_multifd_compression(void)
>  {
>      MigrationState *s = migrate_get_current();
> @@ -911,6 +928,8 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
>      params->downtime_limit = s->parameters.downtime_limit;
>      params->has_x_checkpoint_delay = true;
>      params->x_checkpoint_delay = s->parameters.x_checkpoint_delay;
> +    params->has_multifd_packet_size = true;
> +    params->multifd_packet_size = s->parameters.multifd_packet_size;
>      params->has_multifd_channels = true;
>      params->multifd_channels = s->parameters.multifd_channels;
>      params->has_multifd_compression = true;
> @@ -973,6 +992,7 @@ void migrate_params_init(MigrationParameters *params)
>      params->has_max_bandwidth = true;
>      params->has_downtime_limit = true;
>      params->has_x_checkpoint_delay = true;
> +    params->has_multifd_packet_size = true;
>      params->has_multifd_channels = true;
>      params->has_multifd_compression = true;
>      params->has_multifd_zlib_level = true;
> @@ -1055,6 +1075,19 @@ bool migrate_params_check(MigrationParameters *params, Error **errp)
>  
>      /* x_checkpoint_delay is now always positive */
>  
> +    if (params->has_multifd_packet_size &&
> +        ((params->multifd_packet_size < DEFAULT_MIGRATE_MULTIFD_PACKET_SIZE) ||
> +            (params->multifd_packet_size >  MAX_MIGRATE_MULTIFD_PACKET_SIZE) ||
> +            (params->multifd_packet_size % qemu_target_page_size() != 0))) {
> +        error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
> +                    "multifd_packet_size",
> +                    "an integer in the range of "
> +                    stringify(DEFAULT_MIGRATE_MULTIFD_PACKET_SIZE)
> +                    " to "stringify(MAX_MIGRATE_MULTIFD_PACKET_SIZE)", "
> +                    "and must be a multiple of guest VM's page size.");
> +        return false;
> +    }
> +
>      if (params->has_multifd_channels && (params->multifd_channels < 1)) {
>          error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
>                     "multifd_channels",
> @@ -1236,6 +1269,9 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
>          dest->x_checkpoint_delay = params->x_checkpoint_delay;
>      }
>  
> +    if (params->has_multifd_packet_size) {
> +        dest->multifd_packet_size = params->multifd_packet_size;
> +    }
>      if (params->has_multifd_channels) {
>          dest->multifd_channels = params->multifd_channels;
>      }
> @@ -1364,6 +1400,9 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
>          colo_checkpoint_delay_set();
>      }
>  
> +    if (params->has_multifd_packet_size) {
> +        s->parameters.multifd_packet_size = params->multifd_packet_size;
> +    }
>      if (params->has_multifd_channels) {
>          s->parameters.multifd_channels = params->multifd_channels;
>      }
> diff --git a/migration/options.h b/migration/options.h
> index 8198b220bd..8158d4879d 100644
> --- a/migration/options.h
> +++ b/migration/options.h
> @@ -87,6 +87,7 @@ const char *migrate_tls_hostname(void);
>  uint64_t migrate_xbzrle_cache_size(void);
>  ZeroPageDetection migrate_zero_page_detection(void);
>  const strList *migrate_dsa_accel_path(void);
> +uint64_t migrate_multifd_packet_size(void);
>  
>  /* parameters helpers */
>  
> diff --git a/qapi/migration.json b/qapi/migration.json
> index d8b42ceae6..1d14d8e82f 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -851,6 +851,10 @@
>  #     only has effect if the @mapped-ram capability is enabled.
>  #     (Since 9.1)
>  #
> +# @multifd-packet-size: Packet size in bytes used to migrate data.
> +#     The value needs to be a multiple of guest page size.
> +#     The default value is 524288 and max value is 4190208.  (Since 9.2)
> +#
>  # Features:
>  #
>  # @unstable: Members @x-checkpoint-delay and
> @@ -877,7 +881,8 @@
>             'vcpu-dirty-limit',
>             'mode',
>             'zero-page-detection',
> -           'direct-io'] }
> +           'direct-io',
> +           'multifd-packet-size'] }
>  
>  ##
>  # @MigrateSetParameters:
> @@ -1038,6 +1043,10 @@
>  #     only has effect if the @mapped-ram capability is enabled.
>  #     (Since 9.1)
>  #
> +# @multifd-packet-size: Packet size in bytes used to migrate data.
> +#     The value needs to be a multiple of guest page size.
> +#     The default value is 524288 and max value is 4190208.  (Since 9.2)
> +#
>  # Features:
>  #
>  # @unstable: Members @x-checkpoint-delay and
> @@ -1080,7 +1089,8 @@
>              '*mode': 'MigMode',
>              '*zero-page-detection': 'ZeroPageDetection',
>              '*direct-io': 'bool',
> -            '*dsa-accel-path': [ 'str' ] } }
> +            '*dsa-accel-path': [ 'str' ],
> +            '*multifd-packet-size' : 'uint64'} }
>  
>  ##
>  # @migrate-set-parameters:
> @@ -1255,6 +1265,10 @@
>  #     only has effect if the @mapped-ram capability is enabled.
>  #     (Since 9.1)
>  #
> +# @multifd-packet-size: Packet size in bytes used to migrate data.
> +#     The value needs to be a multiple of guest page size.
> +#     The default value is 524288 and max value is 4190208.  (Since 9.2)

These will rot, better leave them out.
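
e.g. keep just:

    # @multifd-packet-size: Packet size in bytes used to migrate data.
    #     The value needs to be a multiple of guest page size.  (Since 9.2)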

> +#
>  # Features:
>  #
>  # @unstable: Members @x-checkpoint-delay and
> @@ -1294,7 +1308,8 @@
>              '*mode': 'MigMode',
>              '*zero-page-detection': 'ZeroPageDetection',
>              '*direct-io': 'bool',
> -            '*dsa-accel-path': [ 'str' ] } }
> +            '*dsa-accel-path': [ 'str' ],
> +            '*multifd-packet-size': 'uint64' } }
>  
>  ##
>  # @query-migrate-parameters:



end of thread

Thread overview: 26+ messages
2024-10-09 23:45 [PATCH v6 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration Yichen Wang
2024-10-09 23:45 ` [PATCH v6 01/12] meson: Introduce new instruction set enqcmd to the build system Yichen Wang
2024-10-09 23:46 ` [PATCH v6 02/12] util/dsa: Add idxd into linux header copy list Yichen Wang
2024-10-09 23:46 ` [PATCH v6 03/12] util/dsa: Implement DSA device start and stop logic Yichen Wang
2024-10-16 18:59   ` Peter Xu
2024-10-16 21:00   ` Fabiano Rosas
2024-10-09 23:46 ` [PATCH v6 04/12] util/dsa: Implement DSA task enqueue and dequeue Yichen Wang
2024-10-09 23:46 ` [PATCH v6 05/12] util/dsa: Implement DSA task asynchronous completion thread model Yichen Wang
2024-10-09 23:46 ` [PATCH v6 06/12] util/dsa: Implement zero page checking in DSA task Yichen Wang
2024-10-09 23:46 ` [PATCH v6 07/12] util/dsa: Implement DSA task asynchronous submission and wait for completion Yichen Wang
2024-10-09 23:46 ` [PATCH v6 08/12] migration/multifd: Add new migration option for multifd DSA offloading Yichen Wang
2024-10-11 17:14   ` Dr. David Alan Gilbert
2024-10-15 22:09     ` [External] " Yichen Wang
2024-10-15 22:51       ` Dr. David Alan Gilbert
2024-10-09 23:46 ` [PATCH v6 09/12] migration/multifd: Enable DSA offloading in multifd sender path Yichen Wang
2024-10-17 19:11   ` Fabiano Rosas
2024-10-09 23:46 ` [PATCH v6 10/12] migration/multifd: Add migration option set packet size Yichen Wang
2024-10-17 19:16   ` Fabiano Rosas
2024-10-09 23:46 ` [PATCH v6 11/12] util/dsa: Add unit test coverage for Intel DSA task submission and completion Yichen Wang
2024-10-09 23:46 ` [PATCH v6 12/12] migration/multifd: Add integration tests for multifd with Intel DSA offloading Yichen Wang
2024-10-11 14:13 ` [PATCH v6 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration Fabiano Rosas
2024-10-15 22:05   ` [External] " Yichen Wang
2024-10-11 16:32 ` Peter Xu
2024-10-11 16:53   ` Dr. David Alan Gilbert
2024-10-15 22:02   ` [External] " Yichen Wang
2024-10-16 19:44     ` Peter Xu
