* [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB
@ 2024-06-27 22:34 Yichen Wang
  2024-06-27 22:34 ` [PATCH v3 1/4] meson: Introduce 'qatzip' feature to the build system Yichen Wang
                   ` (4 more replies)
  0 siblings, 5 replies; 11+ messages in thread
From: Yichen Wang @ 2024-06-27 22:34 UTC (permalink / raw)
  To: Paolo Bonzini, Daniel P. Berrangé, Eduardo Habkost,
	Marc-André Lureau, Thomas Huth, Philippe Mathieu-Daudé,
	Peter Xu, Fabiano Rosas, Eric Blake, Markus Armbruster,
	Laurent Vivier, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Zou, Nanhai, Ho-Ren (Jack) Chuang,
	Yichen Wang

v3:
- Rebase changes on top of master
- Merge two patches per Fabiano Rosas's comment
- Add versions into comments and documentation

v2:
- Rebase changes on top of recent multifd code changes.
- Use QATzip API 'qzMalloc' and 'qzFree' to allocate QAT buffers.
- Remove parameter tuning and use QATzip's defaults for better
  performance.
- Add parameter to enable QAT software fallback.

v1:
https://lists.nongnu.org/archive/html/qemu-devel/2023-12/msg03761.html

* Performance

We present updated performance results. For circumstantial reasons, v1
presented performance on a low-bandwidth (1Gbps) network.

Here, we present updated results with a similar setup as before but with
two main differences:

1. Our machines have a ~50Gbps connection, tested using 'iperf3'.
2. We had a bug in our memory allocation causing us to only use ~1/2 of
the VM's RAM. Now we properly allocate and fill nearly all of the VM's
RAM.

Thus, the test setup is as follows:

We perform multifd live migration over TCP using a VM with 64GB memory.
We prepare the machine's memory by powering it on, allocating a large
amount of memory (60GB) as a single buffer, and filling the buffer with
the repeated contents of the Silesia corpus[0]. This is in lieu of a more
realistic memory snapshot, which proved troublesome to acquire.
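
As a rough illustration of the fill step, the guest-side filler amounts to
something like the sketch below (a minimal sketch, not the exact tool we
used; 'silesia.bin' stands for the concatenated corpus files and the 60GB
size is hard-coded for brevity):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const size_t total = 60ULL << 30;       /* ~60GB guest buffer */
        unsigned char *buf = malloc(total);
        FILE *f = fopen("silesia.bin", "rb");   /* concatenated Silesia corpus */
        long corpus_len;
        size_t off = 0;

        if (!buf || !f) {
            return 1;
        }
        fseek(f, 0, SEEK_END);
        corpus_len = ftell(f);
        rewind(f);

        unsigned char *corpus = malloc(corpus_len);
        if (!corpus || fread(corpus, 1, corpus_len, f) != (size_t)corpus_len) {
            return 1;
        }

        /* Tile the corpus across the buffer so the guest pages are
         * realistically compressible, rather than zeroed or random. */
        while (off < total) {
            size_t n = total - off < (size_t)corpus_len ? total - off
                                                        : (size_t)corpus_len;
            memcpy(buf + off, corpus, n);
            off += n;
        }

        pause();                                /* keep the memory resident */
        return 0;
    }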

We analyze CPU usage by averaging the output of 'top' every second
during migration. This is admittedly imprecise, but we feel that it
accurately portrays the different degrees of CPU usage of varying
compression methods.
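
The per-second sampling itself is nothing more than averaging the %CPU
column that 'top' prints for the QEMU process. A rough C sketch of that
(not the exact script we ran; it assumes top's default column layout,
where %CPU is the 9th field):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        if (argc < 3) {
            fprintf(stderr, "usage: %s <qemu-pid> <samples>\n", argv[0]);
            return 1;
        }

        char cmd[128];
        /* -b: batch mode, -d 1: 1s delay, -n: sample count, -p: one pid */
        snprintf(cmd, sizeof(cmd), "top -b -d 1 -n %s -p %s", argv[2], argv[1]);

        FILE *fp = popen(cmd, "r");
        if (!fp) {
            return 1;
        }

        char line[512];
        double sum = 0.0;
        long samples = 0;
        size_t pidlen = strlen(argv[1]);

        while (fgets(line, sizeof(line), fp)) {
            char *p = line + strspn(line, " ");
            double cpu;

            /* Only the per-process rows start with the pid we asked for. */
            if (strncmp(p, argv[1], pidlen) != 0 || p[pidlen] != ' ') {
                continue;
            }
            /* Skip PID USER PR NI VIRT RES SHR S, then read %CPU. */
            if (sscanf(p, "%*s %*s %*s %*s %*s %*s %*s %*s %lf", &cpu) == 1) {
                sum += cpu;
                samples++;
            }
        }
        pclose(fp);

        if (samples) {
            printf("average cpu%%: %.2f over %ld samples\n", sum / samples, samples);
        }
        return 0;
    }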

We present the latency, throughput, and CPU usage results for all of the
compression methods, with varying numbers of multifd threads (4, 8, and
16).

[0] The Silesia corpus can be accessed here:
https://sun.aei.polsl.pl//~sdeor/index.php?page=silesia

** Results

4 multifd threads:

    |---------------|---------------|----------------|---------|---------|
    |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
    |---------------|---------------|----------------|---------|---------|
    |qatzip         | 23.13         | 8749.94        |117.50   |186.49   |
    |---------------|---------------|----------------|---------|---------|
    |zlib           |254.35         |  771.87        |388.20   |144.40   |
    |---------------|---------------|----------------|---------|---------|
    |zstd           | 54.52         | 3442.59        |414.59   |149.77   |
    |---------------|---------------|----------------|---------|---------|
    |none           | 12.45         |43739.60        |159.71   |204.96   |
    |---------------|---------------|----------------|---------|---------|

8 multifd threads:

    |---------------|---------------|----------------|---------|---------|
    |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
    |---------------|---------------|----------------|---------|---------|
    |qatzip         | 16.91         |12306.52        |186.37   |391.84   |
    |---------------|---------------|----------------|---------|---------|
    |zlib           |130.11         | 1508.89        |753.86   |289.35   |
    |---------------|---------------|----------------|---------|---------|
    |zstd           | 27.57         | 6823.23        |786.83   |303.80   |
    |---------------|---------------|----------------|---------|---------|
    |none           | 11.82         |46072.63        |163.74   |238.56   |
    |---------------|---------------|----------------|---------|---------|

16 multifd threads:

    |---------------|---------------|----------------|---------|---------|
    |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
    |---------------|---------------|----------------|---------|---------|
    |qatzip         |18.64          |11044.52        | 573.61  |437.65   |
    |---------------|---------------|----------------|---------|---------|
    |zlib           |66.43          | 2955.79        |1469.68  |567.47   |
    |---------------|---------------|----------------|---------|---------|
    |zstd           |14.17          |13290.66        |1504.08  |615.33   |
    |---------------|---------------|----------------|---------|---------|
    |none           |16.82          |32363.26        | 180.74  |217.17   |
    |---------------|---------------|----------------|---------|---------|

** Observations

- In general, not using compression outperforms using compression in a
  non-network-bound environment.
- 'qatzip' outperforms other compression workers with 4 and 8 workers,
  achieving a ~91% latency reduction over 'zlib' with 4 workers, and a
  ~58% latency reduction over 'zstd' with 4 workers (the arithmetic is
  spelled out after this list).
- 'qatzip' maintains comparable performance with 'zstd' at 16 workers,
  showing a ~32% increase in latency. This performance difference
  becomes more noticeable with more workers, as CPU compression is
  highly parallelizable.
- 'qatzip' compression uses considerably less CPU than other compression
  methods. At 8 workers, 'qatzip' demonstrates a ~75% reduction in
  compression CPU usage compared to 'zstd' and 'zlib'.
- 'qatzip' decompression CPU usage is less impressive, and is even
  slightly worse than 'zstd' and 'zlib' CPU usage at 4 and 8 workers.
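
For reference, the 4-worker latency-reduction figures come straight from
the first table:

    zlib vs. qatzip: (254.35 - 23.13) / 254.35 = 0.909, i.e. ~91% less time
    zstd vs. qatzip: ( 54.52 - 23.13) /  54.52 = 0.576, i.e. ~58% less time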


Bryan Zhang (4):
  meson: Introduce 'qatzip' feature to the build system
  migration: Add migration parameters for QATzip
  migration: Introduce 'qatzip' compression method
  tests/migration: Add integration test for 'qatzip' compression method

 hw/core/qdev-properties-system.c |   6 +-
 meson.build                      |  10 +
 meson_options.txt                |   2 +
 migration/meson.build            |   1 +
 migration/migration-hmp-cmds.c   |   8 +
 migration/multifd-qatzip.c       | 382 +++++++++++++++++++++++++++++++
 migration/multifd.h              |   1 +
 migration/options.c              |  57 +++++
 migration/options.h              |   2 +
 qapi/migration.json              |  38 +++
 scripts/meson-buildoptions.sh    |   6 +
 tests/qtest/meson.build          |   4 +
 tests/qtest/migration-test.c     |  35 +++
 13 files changed, 551 insertions(+), 1 deletion(-)
 create mode 100644 migration/multifd-qatzip.c

-- 
Yichen Wang




* [PATCH v3 1/4] meson: Introduce 'qatzip' feature to the build system
  2024-06-27 22:34 [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB Yichen Wang
@ 2024-06-27 22:34 ` Yichen Wang
  2024-06-27 22:34 ` [PATCH v3 2/4] migration: Add migration parameters for QATzip Yichen Wang
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 11+ messages in thread
From: Yichen Wang @ 2024-06-27 22:34 UTC (permalink / raw)
  To: Paolo Bonzini, Daniel P. Berrangé, Eduardo Habkost,
	Marc-André Lureau, Thomas Huth, Philippe Mathieu-Daudé,
	Peter Xu, Fabiano Rosas, Eric Blake, Markus Armbruster,
	Laurent Vivier, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Zou, Nanhai, Ho-Ren (Jack) Chuang,
	Yichen Wang, Bryan Zhang

From: Bryan Zhang <bryan.zhang@bytedance.com>

Add a 'qatzip' feature, which is disabled by default, and which
depends on the QATzip library when enabled.

Signed-off-by: Bryan Zhang <bryan.zhang@bytedance.com>
Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
---
 meson.build                   | 10 ++++++++++
 meson_options.txt             |  2 ++
 scripts/meson-buildoptions.sh |  6 ++++++
 3 files changed, 18 insertions(+)

diff --git a/meson.build b/meson.build
index 97e00d6f59..009f07f506 100644
--- a/meson.build
+++ b/meson.build
@@ -1219,6 +1219,14 @@ if not get_option('uadk').auto() or have_system
      uadk = declare_dependency(dependencies: [libwd, libwd_comp])
   endif
 endif
+
+qatzip = not_found
+if get_option('qatzip').enabled()
+  qatzip = dependency('qatzip', version: '>=1.1.2',
+                      required: get_option('qatzip'),
+                      method: 'pkg-config')
+endif
+
 virgl = not_found
 
 have_vhost_user_gpu = have_tools and host_os == 'linux' and pixman.found()
@@ -2353,6 +2361,7 @@ config_host_data.set('CONFIG_STATX_MNT_ID', has_statx_mnt_id)
 config_host_data.set('CONFIG_ZSTD', zstd.found())
 config_host_data.set('CONFIG_QPL', qpl.found())
 config_host_data.set('CONFIG_UADK', uadk.found())
+config_host_data.set('CONFIG_QATZIP', qatzip.found())
 config_host_data.set('CONFIG_FUSE', fuse.found())
 config_host_data.set('CONFIG_FUSE_LSEEK', fuse_lseek.found())
 config_host_data.set('CONFIG_SPICE_PROTOCOL', spice_protocol.found())
@@ -4468,6 +4477,7 @@ summary_info += {'lzfse support':     liblzfse}
 summary_info += {'zstd support':      zstd}
 summary_info += {'Query Processing Library support': qpl}
 summary_info += {'UADK Library support': uadk}
+summary_info += {'qatzip support':    qatzip}
 summary_info += {'NUMA host support': numa}
 summary_info += {'capstone':          capstone}
 summary_info += {'libpmem support':   libpmem}
diff --git a/meson_options.txt b/meson_options.txt
index 7a79dd8970..3670e5058b 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -263,6 +263,8 @@ option('qpl', type : 'feature', value : 'auto',
        description: 'Query Processing Library support')
 option('uadk', type : 'feature', value : 'auto',
        description: 'UADK Library support')
+option('qatzip', type: 'feature', value: 'disabled',
+       description: 'QATzip compression support')
 option('fuse', type: 'feature', value: 'auto',
        description: 'FUSE block device export')
 option('fuse_lseek', type : 'feature', value : 'auto',
diff --git a/scripts/meson-buildoptions.sh b/scripts/meson-buildoptions.sh
index 58d49a447d..226605249e 100644
--- a/scripts/meson-buildoptions.sh
+++ b/scripts/meson-buildoptions.sh
@@ -162,6 +162,8 @@ meson_options_help() {
   printf "%s\n" '  pixman          pixman support'
   printf "%s\n" '  plugins         TCG plugins via shared library loading'
   printf "%s\n" '  png             PNG support with libpng'
+  printf "%s\n" '  pvrdma          Enable PVRDMA support'
+  printf "%s\n" '  qatzip          QATzip compression support'
   printf "%s\n" '  qcow1           qcow1 image format support'
   printf "%s\n" '  qed             qed image format support'
   printf "%s\n" '  qga-vss         build QGA VSS support (broken with MinGW)'
@@ -428,6 +430,10 @@ _meson_option_parse() {
     --enable-png) printf "%s" -Dpng=enabled ;;
     --disable-png) printf "%s" -Dpng=disabled ;;
     --prefix=*) quote_sh "-Dprefix=$2" ;;
+    --enable-pvrdma) printf "%s" -Dpvrdma=enabled ;;
+    --disable-pvrdma) printf "%s" -Dpvrdma=disabled ;;
+    --enable-qatzip) printf "%s" -Dqatzip=enabled ;;
+    --disable-qatzip) printf "%s" -Dqatzip=disabled ;;
     --enable-qcow1) printf "%s" -Dqcow1=enabled ;;
     --disable-qcow1) printf "%s" -Dqcow1=disabled ;;
     --enable-qed) printf "%s" -Dqed=enabled ;;
-- 
Yichen Wang




* [PATCH v3 2/4] migration: Add migration parameters for QATzip
  2024-06-27 22:34 [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB Yichen Wang
  2024-06-27 22:34 ` [PATCH v3 1/4] meson: Introduce 'qatzip' feature to the build system Yichen Wang
@ 2024-06-27 22:34 ` Yichen Wang
  2024-06-27 22:34 ` [PATCH v3 3/4] migration: Introduce 'qatzip' compression method Yichen Wang
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 11+ messages in thread
From: Yichen Wang @ 2024-06-27 22:34 UTC (permalink / raw)
  To: Paolo Bonzini, Daniel P. Berrangé, Eduardo Habkost,
	Marc-André Lureau, Thomas Huth, Philippe Mathieu-Daudé,
	Peter Xu, Fabiano Rosas, Eric Blake, Markus Armbruster,
	Laurent Vivier, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Zou, Nanhai, Ho-Ren (Jack) Chuang,
	Yichen Wang, Bryan Zhang

From: Bryan Zhang <bryan.zhang@bytedance.com>

Adds support for migration parameters to control QATzip compression
level and to enable/disable software fallback when QAT hardware is
unavailable. This is a preparatory commit for a subsequent commit that
will actually use QATzip compression.

Signed-off-by: Bryan Zhang <bryan.zhang@bytedance.com>
Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
---
 migration/migration-hmp-cmds.c |  8 +++++
 migration/options.c            | 57 ++++++++++++++++++++++++++++++++++
 migration/options.h            |  2 ++
 qapi/migration.json            | 35 +++++++++++++++++++++
 4 files changed, 102 insertions(+)

diff --git a/migration/migration-hmp-cmds.c b/migration/migration-hmp-cmds.c
index 7d608d26e1..664e2390a3 100644
--- a/migration/migration-hmp-cmds.c
+++ b/migration/migration-hmp-cmds.c
@@ -576,6 +576,14 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
         p->has_multifd_zlib_level = true;
         visit_type_uint8(v, param, &p->multifd_zlib_level, &err);
         break;
+    case MIGRATION_PARAMETER_MULTIFD_QATZIP_LEVEL:
+        p->has_multifd_qatzip_level = true;
+        visit_type_uint8(v, param, &p->multifd_qatzip_level, &err);
+        break;
+    case MIGRATION_PARAMETER_MULTIFD_QATZIP_SW_FALLBACK:
+        p->has_multifd_qatzip_sw_fallback = true;
+        visit_type_bool(v, param, &p->multifd_qatzip_sw_fallback, &err);
+        break;
     case MIGRATION_PARAMETER_MULTIFD_ZSTD_LEVEL:
         p->has_multifd_zstd_level = true;
         visit_type_uint8(v, param, &p->multifd_zstd_level, &err);
diff --git a/migration/options.c b/migration/options.c
index 645f55003d..334d70fb6d 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -55,6 +55,15 @@
 #define DEFAULT_MIGRATE_MULTIFD_COMPRESSION MULTIFD_COMPRESSION_NONE
 /* 0: means nocompress, 1: best speed, ... 9: best compress ratio */
 #define DEFAULT_MIGRATE_MULTIFD_ZLIB_LEVEL 1
+/*
+ * 1: best speed, ... 9: best compress ratio
+ * There is some nuance here. Refer to QATzip documentation to understand
+ * the mapping of QATzip levels to standard deflate levels.
+ */
+#define DEFAULT_MIGRATE_MULTIFD_QATZIP_LEVEL 1
+/* QATzip's SW fallback implementation is extremely slow, so avoid fallback */
+#define DEFAULT_MIGRATE_MULTIFD_QATZIP_SW_FALLBACK false
+
 /* 0: means nocompress, 1: best speed, ... 20: best compress ratio */
 #define DEFAULT_MIGRATE_MULTIFD_ZSTD_LEVEL 1
 
@@ -123,6 +132,12 @@ Property migration_properties[] = {
     DEFINE_PROP_UINT8("multifd-zlib-level", MigrationState,
                       parameters.multifd_zlib_level,
                       DEFAULT_MIGRATE_MULTIFD_ZLIB_LEVEL),
+    DEFINE_PROP_UINT8("multifd-qatzip-level", MigrationState,
+                      parameters.multifd_qatzip_level,
+                      DEFAULT_MIGRATE_MULTIFD_QATZIP_LEVEL),
+    DEFINE_PROP_BOOL("multifd-qatzip-sw-fallback", MigrationState,
+                      parameters.multifd_qatzip_sw_fallback,
+                      DEFAULT_MIGRATE_MULTIFD_QATZIP_SW_FALLBACK),
     DEFINE_PROP_UINT8("multifd-zstd-level", MigrationState,
                       parameters.multifd_zstd_level,
                       DEFAULT_MIGRATE_MULTIFD_ZSTD_LEVEL),
@@ -787,6 +802,20 @@ int migrate_multifd_zlib_level(void)
     return s->parameters.multifd_zlib_level;
 }
 
+int migrate_multifd_qatzip_level(void)
+{
+    MigrationState *s = migrate_get_current();
+
+    return s->parameters.multifd_qatzip_level;
+}
+
+bool migrate_multifd_qatzip_sw_fallback(void)
+{
+    MigrationState *s = migrate_get_current();
+
+    return s->parameters.multifd_qatzip_sw_fallback;
+}
+
 int migrate_multifd_zstd_level(void)
 {
     MigrationState *s = migrate_get_current();
@@ -892,6 +921,11 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
     params->multifd_compression = s->parameters.multifd_compression;
     params->has_multifd_zlib_level = true;
     params->multifd_zlib_level = s->parameters.multifd_zlib_level;
+    params->has_multifd_qatzip_level = true;
+    params->multifd_qatzip_level = s->parameters.multifd_qatzip_level;
+    params->has_multifd_qatzip_sw_fallback = true;
+    params->multifd_qatzip_sw_fallback =
+        s->parameters.multifd_qatzip_sw_fallback;
     params->has_multifd_zstd_level = true;
     params->multifd_zstd_level = s->parameters.multifd_zstd_level;
     params->has_xbzrle_cache_size = true;
@@ -946,6 +980,8 @@ void migrate_params_init(MigrationParameters *params)
     params->has_multifd_channels = true;
     params->has_multifd_compression = true;
     params->has_multifd_zlib_level = true;
+    params->has_multifd_qatzip_level = true;
+    params->has_multifd_qatzip_sw_fallback = true;
     params->has_multifd_zstd_level = true;
     params->has_xbzrle_cache_size = true;
     params->has_max_postcopy_bandwidth = true;
@@ -1038,6 +1074,14 @@ bool migrate_params_check(MigrationParameters *params, Error **errp)
         return false;
     }
 
+    if (params->has_multifd_qatzip_level &&
+        ((params->multifd_qatzip_level > 9) ||
+        (params->multifd_qatzip_level < 1))) {
+        error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "multifd_qatzip_level",
+                   "a value between 1 and 9");
+        return false;
+    }
+
     if (params->has_multifd_zstd_level &&
         (params->multifd_zstd_level > 20)) {
         error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "multifd_zstd_level",
@@ -1195,6 +1239,12 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
     if (params->has_multifd_compression) {
         dest->multifd_compression = params->multifd_compression;
     }
+    if (params->has_multifd_qatzip_level) {
+        dest->multifd_qatzip_level = params->multifd_qatzip_level;
+    }
+    if (params->has_multifd_qatzip_sw_fallback) {
+        dest->multifd_qatzip_sw_fallback = params->multifd_qatzip_sw_fallback;
+    }
     if (params->has_multifd_zlib_level) {
         dest->multifd_zlib_level = params->multifd_zlib_level;
     }
@@ -1315,6 +1365,13 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
     if (params->has_multifd_compression) {
         s->parameters.multifd_compression = params->multifd_compression;
     }
+    if (params->has_multifd_qatzip_level) {
+        s->parameters.multifd_qatzip_level = params->multifd_qatzip_level;
+    }
+    if (params->has_multifd_qatzip_sw_fallback) {
+        s->parameters.multifd_qatzip_sw_fallback =
+            params->multifd_qatzip_sw_fallback;
+    }
     if (params->has_multifd_zlib_level) {
         s->parameters.multifd_zlib_level = params->multifd_zlib_level;
     }
diff --git a/migration/options.h b/migration/options.h
index a2397026db..24d98c6a29 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -78,6 +78,8 @@ uint64_t migrate_max_postcopy_bandwidth(void);
 int migrate_multifd_channels(void);
 MultiFDCompression migrate_multifd_compression(void);
 int migrate_multifd_zlib_level(void);
+int migrate_multifd_qatzip_level(void);
+bool migrate_multifd_qatzip_sw_fallback(void);
 int migrate_multifd_zstd_level(void);
 uint8_t migrate_throttle_trigger_threshold(void);
 const char *migrate_tls_authz(void);
diff --git a/qapi/migration.json b/qapi/migration.json
index 0f24206bce..8c9f2a8aa7 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -789,6 +789,16 @@
 #     speed, and 9 means best compression ratio which will consume
 #     more CPU. Defaults to 1.  (Since 5.0)
 #
+# @multifd-qatzip-level: Set the compression level to be used in live
+#     migration. The level is an integer between 1 and 9, where 1 means
+#     the best compression speed, and 9 means the best compression
+#     ratio which will consume more CPU. Defaults to 1. (Since 9.1)
+#
+# @multifd-qatzip-sw-fallback: Enable software fallback if QAT hardware
+#     is unavailable. Defaults to false. Software fallback performance
+#     is very poor compared to regular zlib, so be cautious about
+#     enabling this option. (Since 9.1)
+#
 # @multifd-zstd-level: Set the compression level to be used in live
 #     migration, the compression level is an integer between 0 and 20,
 #     where 0 means no compression, 1 means the best compression
@@ -849,6 +859,7 @@
            'xbzrle-cache-size', 'max-postcopy-bandwidth',
            'max-cpu-throttle', 'multifd-compression',
            'multifd-zlib-level', 'multifd-zstd-level',
+           'multifd-qatzip-level', 'multifd-qatzip-sw-fallback',
            'block-bitmap-mapping',
            { 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] },
            'vcpu-dirty-limit',
@@ -964,6 +975,16 @@
 #     speed, and 9 means best compression ratio which will consume
 #     more CPU. Defaults to 1.  (Since 5.0)
 #
+# @multifd-qatzip-level: Set the compression level to be used in live
+#     migration. The level is an integer between 1 and 9, where 1 means
+#     the best compression speed, and 9 means the best compression
+#     ratio which will consume more CPU. Defaults to 1. (Since 9.1)
+#
+# @multifd-qatzip-sw-fallback: Enable software fallback if QAT hardware
+#     is unavailable. Defaults to false. Software fallback performance
+#     is very poor compared to regular zlib, so be cautious about
+#     enabling this option. (Since 9.1)
+#
 # @multifd-zstd-level: Set the compression level to be used in live
 #     migration, the compression level is an integer between 0 and 20,
 #     where 0 means no compression, 1 means the best compression
@@ -1037,6 +1058,8 @@
             '*max-cpu-throttle': 'uint8',
             '*multifd-compression': 'MultiFDCompression',
             '*multifd-zlib-level': 'uint8',
+            '*multifd-qatzip-level': 'uint8',
+            '*multifd-qatzip-sw-fallback': 'bool',
             '*multifd-zstd-level': 'uint8',
             '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
             '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
@@ -1168,6 +1191,16 @@
 #     speed, and 9 means best compression ratio which will consume
 #     more CPU. Defaults to 1.  (Since 5.0)
 #
+# @multifd-qatzip-level: Set the compression level to be used in live
+#     migration. The level is an integer between 1 and 9, where 1 means
+#     the best compression speed, and 9 means the best compression
+#     ratio which will consume more CPU. Defaults to 1. (Since 9.1)
+#
+# @multifd-qatzip-sw-fallback: Enable software fallback if QAT hardware
+#     is unavailable. Defaults to false. Software fallback performance
+#     is very poor compared to regular zlib, so be cautious about
+#     enabling this option. (Since 9.1)
+#
 # @multifd-zstd-level: Set the compression level to be used in live
 #     migration, the compression level is an integer between 0 and 20,
 #     where 0 means no compression, 1 means the best compression
@@ -1238,6 +1271,8 @@
             '*max-cpu-throttle': 'uint8',
             '*multifd-compression': 'MultiFDCompression',
             '*multifd-zlib-level': 'uint8',
+            '*multifd-qatzip-level': 'uint8',
+            '*multifd-qatzip-sw-fallback': 'bool',
             '*multifd-zstd-level': 'uint8',
             '*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
             '*x-vcpu-dirty-limit-period': { 'type': 'uint64',
-- 
Yichen Wang




* [PATCH v3 3/4] migration: Introduce 'qatzip' compression method
  2024-06-27 22:34 [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB Yichen Wang
  2024-06-27 22:34 ` [PATCH v3 1/4] meson: Introduce 'qatzip' feature to the build system Yichen Wang
  2024-06-27 22:34 ` [PATCH v3 2/4] migration: Add migration parameters for QATzip Yichen Wang
@ 2024-06-27 22:34 ` Yichen Wang
  2024-06-27 22:34 ` [PATCH v3 4/4] tests/migration: Add integration test for " Yichen Wang
  2024-07-02 19:16 ` [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB Peter Xu
  4 siblings, 0 replies; 11+ messages in thread
From: Yichen Wang @ 2024-06-27 22:34 UTC (permalink / raw)
  To: Paolo Bonzini, Daniel P. Berrangé, Eduardo Habkost,
	Marc-André Lureau, Thomas Huth, Philippe Mathieu-Daudé,
	Peter Xu, Fabiano Rosas, Eric Blake, Markus Armbruster,
	Laurent Vivier, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Zou, Nanhai, Ho-Ren (Jack) Chuang,
	Yichen Wang, Bryan Zhang

From: Bryan Zhang <bryan.zhang@bytedance.com>

Adds support for 'qatzip' as an option for the multifd compression
method parameter, and implements using QAT for 'qatzip' compression and
decompression.

Signed-off-by: Bryan Zhang <bryan.zhang@bytedance.com>
Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
---
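Note for reviewers unfamiliar with the QATzip API: the call flow used by
this backend boils down to roughly the condensed sketch below (error
handling and the real MULTIFD_PACKET_SIZE are elided; PACKET_SIZE is a
stand-in). The full implementation is in multifd-qatzip.c in the diff
that follows.

    #include <stdint.h>
    #include <qatzip.h>

    #define PACKET_SIZE (512 * 1024)    /* placeholder for MULTIFD_PACKET_SIZE */

    static void qatzip_roundtrip_sketch(void)
    {
        QzSession_T sess = {0};
        QzSessionParamsDeflate_T params;

        qzInit(&sess, 0);                      /* 0: no software fallback */
        qzGetDefaultsDeflate(&params);
        params.common_params.comp_lvl = 1;     /* multifd-qatzip-level */
        qzSetupSessionDeflate(&sess, &params);

        /* Buffers come from qzMalloc() so the device can DMA to/from them. */
        unsigned int in_len  = PACKET_SIZE;
        unsigned int out_len = qzMaxCompressedLength(PACKET_SIZE, &sess);
        uint8_t *in  = qzMalloc(in_len, 0, PINNED_MEM);
        uint8_t *out = qzMalloc(out_len, 0, PINNED_MEM);

        /* Sender: one non-streaming call per multifd packet (last = 1). */
        qzCompress(&sess, in, &in_len, out, &out_len, 1);

        /* The receiver does the mirror image with qzDecompress(). */

        qzFree(in);
        qzFree(out);
        qzTeardownSession(&sess);
        qzClose(&sess);
    }
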
 hw/core/qdev-properties-system.c |   6 +-
 migration/meson.build            |   1 +
 migration/multifd-qatzip.c       | 382 +++++++++++++++++++++++++++++++
 migration/multifd.h              |   1 +
 qapi/migration.json              |   3 +
 tests/qtest/meson.build          |   4 +
 6 files changed, 396 insertions(+), 1 deletion(-)
 create mode 100644 migration/multifd-qatzip.c

diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index f13350b4fb..eb50d6ec5b 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -659,7 +659,11 @@ const PropertyInfo qdev_prop_fdc_drive_type = {
 const PropertyInfo qdev_prop_multifd_compression = {
     .name = "MultiFDCompression",
     .description = "multifd_compression values, "
-                   "none/zlib/zstd/qpl/uadk",
+                   "none/zlib/zstd/qpl/uadk"
+#ifdef CONFIG_QATZIP
+                   "/qatzip"
+#endif
+                   ,
     .enum_table = &MultiFDCompression_lookup,
     .get = qdev_propinfo_get_enum,
     .set = qdev_propinfo_set_enum,
diff --git a/migration/meson.build b/migration/meson.build
index 5ce2acb41e..c9454c26ae 100644
--- a/migration/meson.build
+++ b/migration/meson.build
@@ -41,6 +41,7 @@ system_ss.add(when: rdma, if_true: files('rdma.c'))
 system_ss.add(when: zstd, if_true: files('multifd-zstd.c'))
 system_ss.add(when: qpl, if_true: files('multifd-qpl.c'))
 system_ss.add(when: uadk, if_true: files('multifd-uadk.c'))
+system_ss.add(when: qatzip, if_true: files('multifd-qatzip.c'))
 
 specific_ss.add(when: 'CONFIG_SYSTEM_ONLY',
                 if_true: files('ram.c',
diff --git a/migration/multifd-qatzip.c b/migration/multifd-qatzip.c
new file mode 100644
index 0000000000..19e54889dc
--- /dev/null
+++ b/migration/multifd-qatzip.c
@@ -0,0 +1,382 @@
+/*
+ * Multifd QATzip compression implementation
+ *
+ * Copyright (c) Bytedance
+ *
+ * Authors:
+ *  Bryan Zhang <bryan.zhang@bytedance.com>
+ *  Hao Xiang   <hao.xiang@bytedance.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "exec/ramblock.h"
+#include "exec/target_page.h"
+#include "qapi/error.h"
+#include "migration.h"
+#include "options.h"
+#include "multifd.h"
+#include <qatzip.h>
+
+struct qatzip_data {
+    /*
+     * Unique session for use with QATzip API
+     */
+    QzSession_T sess;
+
+    /*
+     * For compression: Buffer for pages to compress
+     * For decompression: Buffer for data to decompress
+     */
+    uint8_t *in_buf;
+    uint32_t in_len;
+
+    /*
+     * For compression: Output buffer of compressed data
+     * For decompression: Output buffer of decompressed data
+     */
+    uint8_t *out_buf;
+    uint32_t out_len;
+};
+
+/**
+ * qatzip_send_setup: Set up QATzip session and private buffers.
+ *
+ * @param p    Multifd channel params
+ * @param errp Pointer to error, which will be set in case of error
+ * @return     0 on success, -1 on error (and *errp will be set)
+ */
+static int qatzip_send_setup(MultiFDSendParams *p, Error **errp)
+{
+    struct qatzip_data *q;
+    QzSessionParamsDeflate_T params;
+    const char *err_msg;
+    int ret;
+    int sw_fallback;
+
+    q = g_new0(struct qatzip_data, 1);
+    p->compress_data = q;
+
+    sw_fallback = 0;
+    if (migrate_multifd_qatzip_sw_fallback()) {
+        sw_fallback = 1;
+    }
+
+    ret = qzInit(&q->sess, sw_fallback);
+    if (ret != QZ_OK && ret != QZ_DUPLICATE) {
+        err_msg = "qzInit failed";
+        goto err_free_q;
+    }
+
+    ret = qzGetDefaultsDeflate(&params);
+    if (ret != QZ_OK) {
+        err_msg = "qzGetDefaultsDeflate failed";
+        goto err_close;
+    }
+
+    /* Make sure to use configured QATzip compression level. */
+    params.common_params.comp_lvl = migrate_multifd_qatzip_level();
+
+    ret = qzSetupSessionDeflate(&q->sess, &params);
+    if (ret != QZ_OK && ret != QZ_DUPLICATE) {
+        err_msg = "qzSetupSessionDeflate failed";
+        goto err_close;
+    }
+
+    /* TODO Add support for larger packets. */
+    if (MULTIFD_PACKET_SIZE > UINT32_MAX) {
+        err_msg = "packet size too large for QAT";
+        goto err_close;
+    }
+
+    q->in_len = MULTIFD_PACKET_SIZE;
+    q->in_buf = qzMalloc(q->in_len, 0, PINNED_MEM);
+    if (!q->in_buf) {
+        err_msg = "qzMalloc failed";
+        goto err_close;
+    }
+
+    q->out_len = qzMaxCompressedLength(MULTIFD_PACKET_SIZE, &q->sess);
+    q->out_buf = qzMalloc(q->out_len, 0, PINNED_MEM);
+    if (!q->out_buf) {
+        err_msg = "qzMalloc failed";
+        goto err_free_inbuf;
+    }
+
+    return 0;
+
+err_free_inbuf:
+    qzFree(q->in_buf);
+err_close:
+    qzClose(&q->sess);
+err_free_q:
+    g_free(q);
+    error_setg(errp, "multifd %u: %s", p->id, err_msg);
+    return -1;
+}
+
+/**
+ * qatzip_send_cleanup: Tear down QATzip session and release private buffers.
+ *
+ * @param p    Multifd channel params
+ * @param errp Pointer to error, which will be set in case of error
+ * @return     None
+ */
+static void qatzip_send_cleanup(MultiFDSendParams *p, Error **errp)
+{
+    struct qatzip_data *q = p->compress_data;
+    const char *err_msg;
+    int ret;
+
+    ret = qzTeardownSession(&q->sess);
+    if (ret != QZ_OK) {
+        err_msg = "qzTeardownSession failed";
+        goto err;
+    }
+
+    ret = qzClose(&q->sess);
+    if (ret != QZ_OK) {
+        err_msg = "qzClose failed";
+        goto err;
+    }
+
+    qzFree(q->in_buf);
+    q->in_buf = NULL;
+    qzFree(q->out_buf);
+    q->out_buf = NULL;
+    g_free(p->compress_data);
+    p->compress_data = NULL;
+    return;
+
+err:
+    error_setg(errp, "multifd %u: %s", p->id, err_msg);
+}
+
+/**
+ * qatzip_send_prepare: Compress pages and update IO channel info.
+ *
+ * @param p    Multifd channel params
+ * @param errp Pointer to error, which will be set in case of error
+ * @return     0 on success, -1 on error (and *errp will be set)
+ */
+static int qatzip_send_prepare(MultiFDSendParams *p, Error **errp)
+{
+    MultiFDPages_t *pages = p->pages;
+    struct qatzip_data *q = p->compress_data;
+    int ret;
+    unsigned int in_len, out_len;
+
+    multifd_send_prepare_header(p);
+
+    /* memcpy all the pages into one buffer. */
+    for (int i = 0; i < pages->num; i++) {
+        memcpy(q->in_buf + (i * p->page_size),
+               p->pages->block->host + pages->offset[i],
+               p->page_size);
+    }
+
+    in_len = pages->num * p->page_size;
+    if (in_len > q->in_len) {
+        error_setg(errp, "multifd %u: unexpectedly large input", p->id);
+        return -1;
+    }
+    out_len = q->out_len;
+
+    /*
+     * Unlike other multifd compression implementations, we use a non-streaming
+     * API and place all the data into one buffer, rather than sending each page
+     * to the compression API at a time. Based on initial benchmarks, the
+     * non-streaming API outperforms the streaming API. Plus, the logic in QEMU
+     * is friendly to using the non-streaming API anyway. If either of these
+     * statements becomes no longer true, we can revisit adding a streaming
+     * implementation.
+     */
+    ret = qzCompress(&q->sess, q->in_buf, &in_len, q->out_buf, &out_len, 1);
+    if (ret != QZ_OK) {
+        error_setg(errp, "multifd %u: QATzip returned %d instead of QZ_OK",
+                   p->id, ret);
+        return -1;
+    }
+    if (in_len != pages->num * p->page_size) {
+        error_setg(errp, "multifd %u: QATzip failed to compress all input",
+                   p->id);
+        return -1;
+    }
+
+    p->iov[p->iovs_num].iov_base = q->out_buf;
+    p->iov[p->iovs_num].iov_len = out_len;
+    p->iovs_num++;
+    p->next_packet_size = out_len;
+    p->flags |= MULTIFD_FLAG_QATZIP;
+
+    multifd_send_fill_packet(p);
+
+    return 0;
+}
+
+/**
+ * qatzip_recv_setup: Set up QATzip session and allocate private buffers.
+ *
+ * @param p    Multifd channel params
+ * @param errp Pointer to error, which will be set in case of error
+ * @return     0 on success, -1 on error (and *errp will be set)
+ */
+static int qatzip_recv_setup(MultiFDRecvParams *p, Error **errp)
+{
+    struct qatzip_data *q;
+    QzSessionParamsDeflate_T params;
+    const char *err_msg;
+    int ret;
+    int sw_fallback;
+
+    q = g_new0(struct qatzip_data, 1);
+    p->compress_data = q;
+
+    sw_fallback = 0;
+    if (migrate_multifd_qatzip_sw_fallback()) {
+        sw_fallback = 1;
+    }
+
+    ret = qzInit(&q->sess, sw_fallback);
+    if (ret != QZ_OK && ret != QZ_DUPLICATE) {
+        err_msg = "qzInit failed";
+        goto err_free_q;
+    }
+
+    ret = qzGetDefaultsDeflate(&params);
+    if (ret != QZ_OK) {
+        err_msg = "qzGetDefaultsDeflate failed";
+        goto err_close;
+    }
+
+    /* Make sure to use configured QATzip compression level. */
+    params.common_params.comp_lvl = migrate_multifd_qatzip_level();
+
+    ret = qzSetupSessionDeflate(&q->sess, &params);
+    if (ret != QZ_OK && ret != QZ_DUPLICATE) {
+        err_msg = "qzSetupSessionDeflate failed";
+        goto err_close;
+    }
+
+    /*
+     * Mimic multifd-zlib, which reserves extra space for the
+     * incoming packet.
+     */
+    q->in_len = MULTIFD_PACKET_SIZE * 2;
+    q->in_buf = qzMalloc(q->in_len, 0, PINNED_MEM);
+    if (!q->in_buf) {
+        err_msg = "qzMalloc failed";
+        goto err_close;
+    }
+
+    q->out_len = MULTIFD_PACKET_SIZE;
+    q->out_buf = qzMalloc(q->out_len, 0, PINNED_MEM);
+    if (!q->out_buf) {
+        err_msg = "qzMalloc failed";
+        goto err_free_inbuf;
+    }
+
+    return 0;
+
+err_free_inbuf:
+    qzFree(q->in_buf);
+err_close:
+    qzClose(&q->sess);
+err_free_q:
+    g_free(q);
+    error_setg(errp, "multifd %u: %s", p->id, err_msg);
+    return -1;
+}
+
+/**
+ * qatzip_recv_cleanup: Tear down QATzip session and release private buffers.
+ *
+ * @param p    Multifd channel params
+ * @return     None
+ */
+static void qatzip_recv_cleanup(MultiFDRecvParams *p)
+{
+    struct qatzip_data *q = p->compress_data;
+
+    /* Ignoring return values here due to function signature. */
+    qzTeardownSession(&q->sess);
+    qzClose(&q->sess);
+    qzFree(q->in_buf);
+    qzFree(q->out_buf);
+    g_free(p->compress_data);
+}
+
+
+/**
+ * qatzip_recv: Decompress pages and copy them to the appropriate
+ * locations.
+ *
+ * @param p    Multifd channel params
+ * @param errp Pointer to error, which will be set in case of error
+ * @return     0 on success, -1 on error (and *errp will be set)
+ */
+static int qatzip_recv(MultiFDRecvParams *p, Error **errp)
+{
+    struct qatzip_data *q = p->compress_data;
+    int ret;
+    unsigned int in_len, out_len;
+    uint32_t in_size = p->next_packet_size;
+    uint32_t expected_size = p->normal_num * p->page_size;
+    uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
+
+    if (in_size > q->in_len) {
+        error_setg(errp, "multifd %u: received unexpectedly large packet",
+                   p->id);
+        return -1;
+    }
+
+    if (flags != MULTIFD_FLAG_QATZIP) {
+        error_setg(errp, "multifd %u: flags received %x flags expected %x",
+                   p->id, flags, MULTIFD_FLAG_QATZIP);
+        return -1;
+    }
+
+    ret = qio_channel_read_all(p->c, (void *)q->in_buf, in_size, errp);
+    if (ret != 0) {
+        return ret;
+    }
+
+    in_len = in_size;
+    out_len = q->out_len;
+    ret = qzDecompress(&q->sess, q->in_buf, &in_len, q->out_buf, &out_len);
+    if (ret != QZ_OK) {
+        error_setg(errp, "multifd %u: qzDecompress failed", p->id);
+        return -1;
+    }
+    if (out_len != expected_size) {
+        error_setg(errp, "multifd %u: packet size received %u size expected %u",
+                   p->id, out_len, expected_size);
+        return -1;
+    }
+
+    /* Copy each page to its appropriate location. */
+    for (int i = 0; i < p->normal_num; i++) {
+        memcpy(p->host + p->normal[i],
+               q->out_buf + p->page_size * i,
+               p->page_size);
+    }
+    return 0;
+}
+
+static MultiFDMethods multifd_qatzip_ops = {
+    .send_setup = qatzip_send_setup,
+    .send_cleanup = qatzip_send_cleanup,
+    .send_prepare = qatzip_send_prepare,
+    .recv_setup = qatzip_recv_setup,
+    .recv_cleanup = qatzip_recv_cleanup,
+    .recv = qatzip_recv
+};
+
+static void multifd_qatzip_register(void)
+{
+    multifd_register_ops(MULTIFD_COMPRESSION_QATZIP, &multifd_qatzip_ops);
+}
+
+migration_init(multifd_qatzip_register);
diff --git a/migration/multifd.h b/migration/multifd.h
index 0ecd6f47d7..2a3b904675 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -40,6 +40,7 @@ MultiFDRecvData *multifd_get_recv_data(void);
 #define MULTIFD_FLAG_NOCOMP (0 << 1)
 #define MULTIFD_FLAG_ZLIB (1 << 1)
 #define MULTIFD_FLAG_ZSTD (2 << 1)
+#define MULTIFD_FLAG_QATZIP (3 << 1)
 #define MULTIFD_FLAG_QPL (4 << 1)
 #define MULTIFD_FLAG_UADK (8 << 1)
 
diff --git a/qapi/migration.json b/qapi/migration.json
index 8c9f2a8aa7..ea62f983b1 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -558,6 +558,8 @@
 #
 # @zstd: use zstd compression method.
 #
+# @qatzip: use qatzip compression method. (Since 9.1)
+#
 # @qpl: use qpl compression method.  Query Processing Library(qpl) is
 #       based on the deflate compression algorithm and use the Intel
 #       In-Memory Analytics Accelerator(IAA) accelerated compression
@@ -570,6 +572,7 @@
 { 'enum': 'MultiFDCompression',
   'data': [ 'none', 'zlib',
             { 'name': 'zstd', 'if': 'CONFIG_ZSTD' },
+            { 'name': 'qatzip', 'if': 'CONFIG_QATZIP'},
             { 'name': 'qpl', 'if': 'CONFIG_QPL' },
             { 'name': 'uadk', 'if': 'CONFIG_UADK' } ] }
 
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index 12792948ff..23e46144d7 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -324,6 +324,10 @@ if gnutls.found()
   endif
 endif
 
+if qatzip.found()
+  migration_files += [qatzip]
+endif
+
 qtests = {
   'bios-tables-test': [io, 'boot-sector.c', 'acpi-utils.c', 'tpm-emu.c'],
   'cdrom-test': files('boot-sector.c'),
-- 
Yichen Wang




* [PATCH v3 4/4] tests/migration: Add integration test for 'qatzip' compression method
  2024-06-27 22:34 [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB Yichen Wang
                   ` (2 preceding siblings ...)
  2024-06-27 22:34 ` [PATCH v3 3/4] migration: Introduce 'qatzip' compression method Yichen Wang
@ 2024-06-27 22:34 ` Yichen Wang
  2024-07-02 19:16 ` [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB Peter Xu
  4 siblings, 0 replies; 11+ messages in thread
From: Yichen Wang @ 2024-06-27 22:34 UTC (permalink / raw)
  To: Paolo Bonzini, Daniel P. Berrangé, Eduardo Habkost,
	Marc-André Lureau, Thomas Huth, Philippe Mathieu-Daudé,
	Peter Xu, Fabiano Rosas, Eric Blake, Markus Armbruster,
	Laurent Vivier, qemu-devel
  Cc: Hao Xiang, Liu, Yuan1, Zou, Nanhai, Ho-Ren (Jack) Chuang,
	Yichen Wang, Bryan Zhang

From: Bryan Zhang <bryan.zhang@bytedance.com>

Adds an integration test for 'qatzip'.

Signed-off-by: Bryan Zhang <bryan.zhang@bytedance.com>
Signed-off-by: Hao Xiang <hao.xiang@linux.dev>
Signed-off-by: Yichen Wang <yichen.wang@bytedance.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
---
 tests/qtest/migration-test.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index 571fc1334c..cc4a971e63 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -32,6 +32,10 @@
 # endif /* CONFIG_TASN1 */
 #endif /* CONFIG_GNUTLS */
 
+#ifdef CONFIG_QATZIP
+#include <qatzip.h>
+#endif /* CONFIG_QATZIP */
+
 /* For dirty ring test; so far only x86_64 is supported */
 #if defined(__linux__) && defined(HOST_X86_64)
 #include "linux/kvm.h"
@@ -2992,6 +2996,22 @@ test_migrate_precopy_tcp_multifd_zstd_start(QTestState *from,
 }
 #endif /* CONFIG_ZSTD */
 
+#ifdef CONFIG_QATZIP
+static void *
+test_migrate_precopy_tcp_multifd_qatzip_start(QTestState *from,
+                                              QTestState *to)
+{
+    migrate_set_parameter_int(from, "multifd-qatzip-level", 2);
+    migrate_set_parameter_int(to, "multifd-qatzip-level", 2);
+
+    /* SW fallback is disabled by default, so enable it for testing. */
+    migrate_set_parameter_bool(from, "multifd-qatzip-sw-fallback", true);
+    migrate_set_parameter_bool(to, "multifd-qatzip-sw-fallback", true);
+
+    return test_migrate_precopy_tcp_multifd_start_common(from, to, "qatzip");
+}
+#endif
+
 #ifdef CONFIG_QPL
 static void *
 test_migrate_precopy_tcp_multifd_qpl_start(QTestState *from,
@@ -3089,6 +3109,17 @@ static void test_multifd_tcp_zstd(void)
 }
 #endif
 
+#ifdef CONFIG_QATZIP
+static void test_multifd_tcp_qatzip(void)
+{
+    MigrateCommon args = {
+        .listen_uri = "defer",
+        .start_hook = test_migrate_precopy_tcp_multifd_qatzip_start,
+    };
+    test_precopy_common(&args);
+}
+#endif
+
 #ifdef CONFIG_QPL
 static void test_multifd_tcp_qpl(void)
 {
@@ -4002,6 +4033,10 @@ int main(int argc, char **argv)
     migration_test_add("/migration/multifd/tcp/plain/zstd",
                        test_multifd_tcp_zstd);
 #endif
+#ifdef CONFIG_QATZIP
+    migration_test_add("/migration/multifd/tcp/plain/qatzip",
+                test_multifd_tcp_qatzip);
+#endif
 #ifdef CONFIG_QPL
     migration_test_add("/migration/multifd/tcp/plain/qpl",
                        test_multifd_tcp_qpl);
-- 
Yichen Wang




* Re: [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB
  2024-06-27 22:34 [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB Yichen Wang
                   ` (3 preceding siblings ...)
  2024-06-27 22:34 ` [PATCH v3 4/4] tests/migration: Add integration test for " Yichen Wang
@ 2024-07-02 19:16 ` Peter Xu
  2024-07-04  3:15   ` Liu, Yuan1
  4 siblings, 1 reply; 11+ messages in thread
From: Peter Xu @ 2024-07-02 19:16 UTC (permalink / raw)
  To: Yichen Wang
  Cc: Paolo Bonzini, Daniel P. Berrangé, Eduardo Habkost,
	Marc-André Lureau, Thomas Huth, Philippe Mathieu-Daudé,
	Fabiano Rosas, Eric Blake, Markus Armbruster, Laurent Vivier,
	qemu-devel, Hao Xiang, Liu, Yuan1, Zou, Nanhai,
	Ho-Ren (Jack) Chuang

On Thu, Jun 27, 2024 at 03:34:41PM -0700, Yichen Wang wrote:
> v3:
> - Rebase changes on top of master
> - Merge two patches per Fabiano Rosas's comment
> - Add versions into comments and documentations
> 
> v2:
> - Rebase changes on top of recent multifd code changes.
> - Use QATzip API 'qzMalloc' and 'qzFree' to allocate QAT buffers.
> - Remove parameter tuning and use QATzip's defaults for better
>   performance.
> - Add parameter to enable QAT software fallback.
> 
> v1:
> https://lists.nongnu.org/archive/html/qemu-devel/2023-12/msg03761.html
> 
> * Performance
> 
> We present updated performance results. For circumstantial reasons, v1
> presented performance on a low-bandwidth (1Gbps) network.
> 
> Here, we present updated results with a similar setup as before but with
> two main differences:
> 
> 1. Our machines have a ~50Gbps connection, tested using 'iperf3'.
> 2. We had a bug in our memory allocation causing us to only use ~1/2 of
> the VM's RAM. Now we properly allocate and fill nearly all of the VM's
> RAM.
> 
> Thus, the test setup is as follows:
> 
> We perform multifd live migration over TCP using a VM with 64GB memory.
> We prepare the machine's memory by powering it on, allocating a large
> amount of memory (60GB) as a single buffer, and filling the buffer with
> the repeated contents of the Silesia corpus[0]. This is in lieu of a more
> realistic memory snapshot, which proved troublesome to acquire.
> 
> We analyze CPU usage by averaging the output of 'top' every second
> during migration. This is admittedly imprecise, but we feel that it
> accurately portrays the different degrees of CPU usage of varying
> compression methods.
> 
> We present the latency, throughput, and CPU usage results for all of the
> compression methods, with varying numbers of multifd threads (4, 8, and
> 16).
> 
> [0] The Silesia corpus can be accessed here:
> https://sun.aei.polsl.pl//~sdeor/index.php?page=silesia
> 
> ** Results
> 
> 4 multifd threads:
> 
>     |---------------|---------------|----------------|---------|---------|
>     |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
>     |---------------|---------------|----------------|---------|---------|
>     |qatzip         | 23.13         | 8749.94        |117.50   |186.49   |
>     |---------------|---------------|----------------|---------|---------|
>     |zlib           |254.35         |  771.87        |388.20   |144.40   |
>     |---------------|---------------|----------------|---------|---------|
>     |zstd           | 54.52         | 3442.59        |414.59   |149.77   |
>     |---------------|---------------|----------------|---------|---------|
>     |none           | 12.45         |43739.60        |159.71   |204.96   |
>     |---------------|---------------|----------------|---------|---------|
> 
> 8 multifd threads:
> 
>     |---------------|---------------|----------------|---------|---------|
>     |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
>     |---------------|---------------|----------------|---------|---------|
>     |qatzip         | 16.91         |12306.52        |186.37   |391.84   |
>     |---------------|---------------|----------------|---------|---------|
>     |zlib           |130.11         | 1508.89        |753.86   |289.35   |
>     |---------------|---------------|----------------|---------|---------|
>     |zstd           | 27.57         | 6823.23        |786.83   |303.80   |
>     |---------------|---------------|----------------|---------|---------|
>     |none           | 11.82         |46072.63        |163.74   |238.56   |
>     |---------------|---------------|----------------|---------|---------|
> 
> 16 multifd threads:
> 
>     |---------------|---------------|----------------|---------|---------|
>     |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
>     |---------------|---------------|----------------|---------|---------|
>     |qatzip         |18.64          |11044.52        | 573.61  |437.65   |
>     |---------------|---------------|----------------|---------|---------|
>     |zlib           |66.43          | 2955.79        |1469.68  |567.47   |
>     |---------------|---------------|----------------|---------|---------|
>     |zstd           |14.17          |13290.66        |1504.08  |615.33   |
>     |---------------|---------------|----------------|---------|---------|
>     |none           |16.82          |32363.26        | 180.74  |217.17   |
>     |---------------|---------------|----------------|---------|---------|
> 
> ** Observations
> 
> - In general, not using compression outperforms using compression in a
>   non-network-bound environment.
> - 'qatzip' outperforms other compression workers with 4 and 8 workers,
>   achieving a ~91% latency reduction over 'zlib' with 4 workers, and a
> ~58% latency reduction over 'zstd' with 4 workers.
> - 'qatzip' maintains comparable performance with 'zstd' at 16 workers,
>   showing a ~32% increase in latency. This performance difference
> becomes more noticeable with more workers, as CPU compression is highly
> parallelizable.
> - 'qatzip' compression uses considerably less CPU than other compression
>   methods. At 8 workers, 'qatzip' demonstrates a ~75% reduction in
> compression CPU usage compared to 'zstd' and 'zlib'.
> - 'qatzip' decompression CPU usage is less impressive, and is even
>   slightly worse than 'zstd' and 'zlib' CPU usage at 4 and 16 workers.

Thanks for the results update.

It looks like the docs/migration/ file is still missing.  It'll be great to
have it in the next version or separately.

So how does it compare with QPL (which got merged already)?  They at least
look like they are both supported on Intel platforms, so a user who wants to
compress the RAM could start to look at both.  I'm utterly confused about why
Intel provides these two similar compressors.  It would be great to have
some answer, and perhaps put it into the doc.

I am honestly curious too whether you are planning to use it in
production.  It looks like if the network resources are rich, no-comp is
almost always better than qatzip, both in total migration time and in cpu
consumption.  I'm pretty surprised that it takes that many resources even
though the work should have been offloaded to the QAT chips, iiuc.

I think it may not be a problem to merge this series even if it performs
slower by some criteria, but I think we may still want to know when this
should be used, or the good reason this should be merged (if it's not that
it outperforms the others).

Thanks,

> 
> 
> Bryan Zhang (4):
>   meson: Introduce 'qatzip' feature to the build system
>   migration: Add migration parameters for QATzip
>   migration: Introduce 'qatzip' compression method
>   tests/migration: Add integration test for 'qatzip' compression method
> 
>  hw/core/qdev-properties-system.c |   6 +-
>  meson.build                      |  10 +
>  meson_options.txt                |   2 +
>  migration/meson.build            |   1 +
>  migration/migration-hmp-cmds.c   |   8 +
>  migration/multifd-qatzip.c       | 382 +++++++++++++++++++++++++++++++
>  migration/multifd.h              |   1 +
>  migration/options.c              |  57 +++++
>  migration/options.h              |   2 +
>  qapi/migration.json              |  38 +++
>  scripts/meson-buildoptions.sh    |   6 +
>  tests/qtest/meson.build          |   4 +
>  tests/qtest/migration-test.c     |  35 +++
>  13 files changed, 551 insertions(+), 1 deletion(-)
>  create mode 100644 migration/multifd-qatzip.c
> 
> -- 
> Yichen Wang
> 

-- 
Peter Xu




* RE: [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB
  2024-07-02 19:16 ` [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB Peter Xu
@ 2024-07-04  3:15   ` Liu, Yuan1
  2024-07-04 15:36     ` Peter Xu
  0 siblings, 1 reply; 11+ messages in thread
From: Liu, Yuan1 @ 2024-07-04  3:15 UTC (permalink / raw)
  To: Peter Xu, Wang, Yichen
  Cc: Paolo Bonzini, Daniel P. Berrangé, Eduardo Habkost,
	Marc-André Lureau, Thomas Huth, Philippe Mathieu-Daudé,
	Fabiano Rosas, Eric Blake, Markus Armbruster, Laurent Vivier,
	qemu-devel@nongnu.org, Hao Xiang, Zou, Nanhai,
	Ho-Ren (Jack) Chuang

> -----Original Message-----
> From: Peter Xu <peterx@redhat.com>
> Sent: Wednesday, July 3, 2024 3:16 AM
> To: Wang, Yichen <yichen.wang@bytedance.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>; Daniel P. Berrangé
> <berrange@redhat.com>; Eduardo Habkost <eduardo@habkost.net>; Marc-André
> Lureau <marcandre.lureau@redhat.com>; Thomas Huth <thuth@redhat.com>;
> Philippe Mathieu-Daudé <philmd@linaro.org>; Fabiano Rosas
> <farosas@suse.de>; Eric Blake <eblake@redhat.com>; Markus Armbruster
> <armbru@redhat.com>; Laurent Vivier <lvivier@redhat.com>; qemu-
> devel@nongnu.org; Hao Xiang <hao.xiang@linux.dev>; Liu, Yuan1
> <yuan1.liu@intel.com>; Zou, Nanhai <nanhai.zou@intel.com>; Ho-Ren (Jack)
> Chuang <horenchuang@bytedance.com>
> Subject: Re: [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB
> 
> On Thu, Jun 27, 2024 at 03:34:41PM -0700, Yichen Wang wrote:
> > v3:
> > - Rebase changes on top of master
> > - Merge two patches per Fabiano Rosas's comment
> > - Add versions into comments and documentations
> >
> > v2:
> > - Rebase changes on top of recent multifd code changes.
> > - Use QATzip API 'qzMalloc' and 'qzFree' to allocate QAT buffers.
> > - Remove parameter tuning and use QATzip's defaults for better
> >   performance.
> > - Add parameter to enable QAT software fallback.
> >
> > v1:
> > https://lists.nongnu.org/archive/html/qemu-devel/2023-12/msg03761.html
> >
> > * Performance
> >
> > We present updated performance results. For circumstantial reasons, v1
> > presented performance on a low-bandwidth (1Gbps) network.
> >
> > Here, we present updated results with a similar setup as before but with
> > two main differences:
> >
> > 1. Our machines have a ~50Gbps connection, tested using 'iperf3'.
> > 2. We had a bug in our memory allocation causing us to only use ~1/2 of
> > the VM's RAM. Now we properly allocate and fill nearly all of the VM's
> > RAM.
> >
> > Thus, the test setup is as follows:
> >
> > We perform multifd live migration over TCP using a VM with 64GB memory.
> > We prepare the machine's memory by powering it on, allocating a large
> > amount of memory (60GB) as a single buffer, and filling the buffer with
> > the repeated contents of the Silesia corpus[0]. This is in lieu of a more
> > realistic memory snapshot, which proved troublesome to acquire.
> >
> > We analyze CPU usage by averaging the output of 'top' every second
> > during migration. This is admittedly imprecise, but we feel that it
> > accurately portrays the different degrees of CPU usage of varying
> > compression methods.
> >
> > We present the latency, throughput, and CPU usage results for all of the
> > compression methods, with varying numbers of multifd threads (4, 8, and
> > 16).
> >
> > [0] The Silesia corpus can be accessed here:
> > https://sun.aei.polsl.pl//~sdeor/index.php?page=silesia
> >
> > ** Results
> >
> > 4 multifd threads:
> >
> >     |---------------|---------------|----------------|---------|---------|
> >     |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
> >     |---------------|---------------|----------------|---------|---------|
> >     |qatzip         | 23.13         | 8749.94        |117.50   |186.49   |
> >     |---------------|---------------|----------------|---------|---------|
> >     |zlib           |254.35         |  771.87        |388.20   |144.40   |
> >     |---------------|---------------|----------------|---------|---------|
> >     |zstd           | 54.52         | 3442.59        |414.59   |149.77   |
> >     |---------------|---------------|----------------|---------|---------|
> >     |none           | 12.45         |43739.60        |159.71   |204.96   |
> >     |---------------|---------------|----------------|---------|---------|
> >
> > 8 multifd threads:
> >
> >     |---------------|---------------|----------------|---------|---------|
> >     |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
> >     |---------------|---------------|----------------|---------|---------|
> >     |qatzip         | 16.91         |12306.52        |186.37   |391.84   |
> >     |---------------|---------------|----------------|---------|---------|
> >     |zlib           |130.11         | 1508.89        |753.86   |289.35   |
> >     |---------------|---------------|----------------|---------|---------|
> >     |zstd           | 27.57         | 6823.23        |786.83   |303.80   |
> >     |---------------|---------------|----------------|---------|---------|
> >     |none           | 11.82         |46072.63        |163.74   |238.56   |
> >     |---------------|---------------|----------------|---------|---------|
> >
> > 16 multifd threads:
> >
> >     |---------------|---------------|----------------|---------|---------|
> >     |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
> >     |---------------|---------------|----------------|---------|---------|
> >     |qatzip         |18.64          |11044.52        | 573.61  |437.65   |
> >     |---------------|---------------|----------------|---------|---------|
> >     |zlib           |66.43          | 2955.79        |1469.68  |567.47   |
> >     |---------------|---------------|----------------|---------|---------|
> >     |zstd           |14.17          |13290.66        |1504.08  |615.33   |
> >     |---------------|---------------|----------------|---------|---------|
> >     |none           |16.82          |32363.26        | 180.74  |217.17   |
> >     |---------------|---------------|----------------|---------|---------|
> >
> > ** Observations
> >
> > - In general, not using compression outperforms using compression in a
> >   non-network-bound environment.
> > - 'qatzip' outperforms other compression workers with 4 and 8 workers,
> >   achieving a ~91% latency reduction over 'zlib' with 4 workers, and a
> > ~58% latency reduction over 'zstd' with 4 workers.
> > - 'qatzip' maintains comparable performance with 'zstd' at 16 workers,
> >   showing a ~32% increase in latency. This performance difference
> > becomes more noticeable with more workers, as CPU compression is highly
> > parallelizable.
> > - 'qatzip' compression uses considerably less CPU than other compression
> >   methods. At 8 workers, 'qatzip' demonstrates a ~75% reduction in
> > compression CPU usage compared to 'zstd' and 'zlib'.
> > - 'qatzip' decompression CPU usage is less impressive, and is even
> >   slightly worse than 'zstd' and 'zlib' CPU usage at 4 and 16 workers.
> 
> Thanks for the results update.
> 
> It looks like the docs/migration/ file is still missing.  It'll be great
> to
> have it in the next version or separately.
> 
> So how it compares with QPL (which got merged already)?  They at least
> look
> like both supported on an Intel platform, so an user whoever wants to
> compress the RAM could start to look at both.  I'm utterly confused on why
> Intel provides these two similar compressors.  It would be great to have
> some answer and perhaps put into the doc.

I would like to explain some of the reasons why we want to merge both the
QAT and the IAA solutions into the community.

1. Although Intel Xeon Sapphire Rapids supports both QAT and IAA, different
   SKUs provide different numbers of QAT and IAA devices, so some users do
   not have both QAT and IAA at the same time.

2. QAT products include PCIe cards, which are compatible with older Xeon
   products and with non-Xeon products. Some users have already used QAT
   cards to accelerate live migration.

3. In addition to compression, QAT and IAA also support various other features 
   to better serve different workloads. Here is an introduction to the accelerators,
   including usage scenarios of QAT and IAA.
   https://www.intel.com/content/dam/www/central-libraries/us/en/documents/2022-12/storage-engines-4th-gen-xeon-brief.pdf

For users who have both QAT and IAA, we recommend the following when choosing
a live migration solution (see the sketch below):

1. If the number of QAT devices is equal to or greater than the number of IAA
   devices and network bandwidth is limited, the QATzip (QAT) solution is
   recommended.

2. In other scenarios, the QPL (IAA) solution is preferred.
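For reference, on the source side the choice boils down to the multifd
compression parameter. A minimal sketch, assuming the 'qatzip' value added by
this series ('qpl' being the already-merged IAA path) and an example channel
count; the destination must be started with matching settings:

    (qemu) migrate_set_capability multifd on
    (qemu) migrate_set_parameter multifd-channels 8
    (qemu) migrate_set_parameter multifd-compression qatzip
    (qemu) migrate -d tcp:<dest-ip>:<port>

Swapping 'qatzip' for 'qpl' (or 'zstd', 'zlib', 'none') switches the method
without changing anything else in the setup.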

> I am honestly curious too on whether are you planning to use it in
> production.  It looks like if the network resources are rich, no-comp is
> mostly always better than qatzip, no matter on total migration time or cpu
> consumption.  I'm pretty surprised that it'll take that much resources
> even
> if the work should have been offloaded to the QAT chips iiuc.
> 
> I think it may not be a problem to merge this series even if it performs
> slower at some criterias.. but I think we may still want to know when this
> should be used, or the good reason this should be merged (if it's not
> about
> it outperforms others).
> 
> Thanks,
> 
> >
> >
> > Bryan Zhang (4):
> >   meson: Introduce 'qatzip' feature to the build system
> >   migration: Add migration parameters for QATzip
> >   migration: Introduce 'qatzip' compression method
> >   tests/migration: Add integration test for 'qatzip' compression method
> >
> >  hw/core/qdev-properties-system.c |   6 +-
> >  meson.build                      |  10 +
> >  meson_options.txt                |   2 +
> >  migration/meson.build            |   1 +
> >  migration/migration-hmp-cmds.c   |   8 +
> >  migration/multifd-qatzip.c       | 382 +++++++++++++++++++++++++++++++
> >  migration/multifd.h              |   1 +
> >  migration/options.c              |  57 +++++
> >  migration/options.h              |   2 +
> >  qapi/migration.json              |  38 +++
> >  scripts/meson-buildoptions.sh    |   6 +
> >  tests/qtest/meson.build          |   4 +
> >  tests/qtest/migration-test.c     |  35 +++
> >  13 files changed, 551 insertions(+), 1 deletion(-)
> >  create mode 100644 migration/multifd-qatzip.c
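(Judging from the meson_options.txt and scripts/meson-buildoptions.sh hunks
above, the new build feature should be selectable at configure time roughly
as below; the exact option name is an assumption based on the 'qatzip'
feature name:)

    # Build QEMU with QATzip support (requires the QATzip library installed).
    ./configure --enable-qatzip
    make -j"$(nproc)"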
> >
> > --
> > Yichen Wang
> >
> 
> --
> Peter Xu


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB
  2024-07-04  3:15   ` Liu, Yuan1
@ 2024-07-04 15:36     ` Peter Xu
  2024-07-05  8:32       ` Liu, Yuan1
  0 siblings, 1 reply; 11+ messages in thread
From: Peter Xu @ 2024-07-04 15:36 UTC (permalink / raw)
  To: Liu, Yuan1
  Cc: Wang, Yichen, Paolo Bonzini, Daniel P. Berrangé,
	Eduardo Habkost, Marc-André Lureau, Thomas Huth,
	Philippe Mathieu-Daudé, Fabiano Rosas, Eric Blake,
	Markus Armbruster, Laurent Vivier, qemu-devel@nongnu.org,
	Hao Xiang, Zou, Nanhai, Ho-Ren (Jack) Chuang

On Thu, Jul 04, 2024 at 03:15:51AM +0000, Liu, Yuan1 wrote:
> > -----Original Message-----
> > From: Peter Xu <peterx@redhat.com>
> > Sent: Wednesday, July 3, 2024 3:16 AM
> > To: Wang, Yichen <yichen.wang@bytedance.com>
> > Cc: Paolo Bonzini <pbonzini@redhat.com>; Daniel P. Berrangé
> > <berrange@redhat.com>; Eduardo Habkost <eduardo@habkost.net>; Marc-André
> > Lureau <marcandre.lureau@redhat.com>; Thomas Huth <thuth@redhat.com>;
> > Philippe Mathieu-Daudé <philmd@linaro.org>; Fabiano Rosas
> > <farosas@suse.de>; Eric Blake <eblake@redhat.com>; Markus Armbruster
> > <armbru@redhat.com>; Laurent Vivier <lvivier@redhat.com>; qemu-
> > devel@nongnu.org; Hao Xiang <hao.xiang@linux.dev>; Liu, Yuan1
> > <yuan1.liu@intel.com>; Zou, Nanhai <nanhai.zou@intel.com>; Ho-Ren (Jack)
> > Chuang <horenchuang@bytedance.com>
> > Subject: Re: [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB
> > 
> > On Thu, Jun 27, 2024 at 03:34:41PM -0700, Yichen Wang wrote:
> > > v3:
> > > - Rebase changes on top of master
> > > - Merge two patches per Fabiano Rosas's comment
> > > - Add versions into comments and documentations
> > >
> > > v2:
> > > - Rebase changes on top of recent multifd code changes.
> > > - Use QATzip API 'qzMalloc' and 'qzFree' to allocate QAT buffers.
> > > - Remove parameter tuning and use QATzip's defaults for better
> > >   performance.
> > > - Add parameter to enable QAT software fallback.
> > >
> > > v1:
> > > https://lists.nongnu.org/archive/html/qemu-devel/2023-12/msg03761.html
> > >
> > > * Performance
> > >
> > > We present updated performance results. For circumstantial reasons, v1
> > > presented performance on a low-bandwidth (1Gbps) network.
> > >
> > > Here, we present updated results with a similar setup as before but with
> > > two main differences:
> > >
> > > 1. Our machines have a ~50Gbps connection, tested using 'iperf3'.
> > > 2. We had a bug in our memory allocation causing us to only use ~1/2 of
> > > the VM's RAM. Now we properly allocate and fill nearly all of the VM's
> > > RAM.
> > >
> > > Thus, the test setup is as follows:
> > >
> > > We perform multifd live migration over TCP using a VM with 64GB memory.
> > > We prepare the machine's memory by powering it on, allocating a large
> > > amount of memory (60GB) as a single buffer, and filling the buffer with
> > > the repeated contents of the Silesia corpus[0]. This is in lieu of a
> > more
> > > realistic memory snapshot, which proved troublesome to acquire.
> > >
> > > We analyze CPU usage by averaging the output of 'top' every second
> > > during migration. This is admittedly imprecise, but we feel that it
> > > accurately portrays the different degrees of CPU usage of varying
> > > compression methods.
> > >
> > > We present the latency, throughput, and CPU usage results for all of the
> > > compression methods, with varying numbers of multifd threads (4, 8, and
> > > 16).
> > >
> > > [0] The Silesia corpus can be accessed here:
> > > https://sun.aei.polsl.pl//~sdeor/index.php?page=silesia
> > >
> > > ** Results
> > >
> > > 4 multifd threads:
> > >
> > >     |---------------|---------------|----------------|---------|---------|
> > >     |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
> > >     |---------------|---------------|----------------|---------|---------|
> > >     |qatzip         | 23.13         | 8749.94        |117.50   |186.49   |
> > >     |---------------|---------------|----------------|---------|---------|
> > >     |zlib           |254.35         |  771.87        |388.20   |144.40   |
> > >     |---------------|---------------|----------------|---------|---------|
> > >     |zstd           | 54.52         | 3442.59        |414.59   |149.77   |
> > >     |---------------|---------------|----------------|---------|---------|
> > >     |none           | 12.45         |43739.60        |159.71   |204.96   |
> > >     |---------------|---------------|----------------|---------|---------|
> > >
> > > 8 multifd threads:
> > >
> > >     |---------------|---------------|----------------|---------|---------|
> > >     |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
> > >     |---------------|---------------|----------------|---------|---------|
> > >     |qatzip         | 16.91         |12306.52        |186.37   |391.84   |
> > >     |---------------|---------------|----------------|---------|---------|
> > >     |zlib           |130.11         | 1508.89        |753.86   |289.35   |
> > >     |---------------|---------------|----------------|---------|---------|
> > >     |zstd           | 27.57         | 6823.23        |786.83   |303.80   |
> > >     |---------------|---------------|----------------|---------|---------|
> > >     |none           | 11.82         |46072.63        |163.74   |238.56   |
> > >     |---------------|---------------|----------------|---------|---------|
> > >
> > > 16 multifd threads:
> > >
> > >     |---------------|---------------|----------------|---------|---------|
> > >     |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
> > >     |---------------|---------------|----------------|---------|---------|
> > >     |qatzip         |18.64          |11044.52        | 573.61  |437.65   |
> > >     |---------------|---------------|----------------|---------|---------|
> > >     |zlib           |66.43          | 2955.79        |1469.68  |567.47   |
> > >     |---------------|---------------|----------------|---------|---------|
> > >     |zstd           |14.17          |13290.66        |1504.08  |615.33   |
> > >     |---------------|---------------|----------------|---------|---------|
> > >     |none           |16.82          |32363.26        | 180.74  |217.17   |
> > >     |---------------|---------------|----------------|---------|---------|
> > >
> > > ** Observations
> > >
> > > - In general, not using compression outperforms using compression in a
> > >   non-network-bound environment.
> > > - 'qatzip' outperforms other compression workers with 4 and 8 workers,
> > >   achieving a ~91% latency reduction over 'zlib' with 4 workers, and a
> > > ~58% latency reduction over 'zstd' with 4 workers.
> > > - 'qatzip' maintains comparable performance with 'zstd' at 16 workers,
> > >   showing a ~32% increase in latency. This performance difference
> > > becomes more noticeable with more workers, as CPU compression is highly
> > > parallelizable.
> > > - 'qatzip' compression uses considerably less CPU than other compression
> > >   methods. At 8 workers, 'qatzip' demonstrates a ~75% reduction in
> > > compression CPU usage compared to 'zstd' and 'zlib'.
> > > - 'qatzip' decompression CPU usage is less impressive, and is even
> > >   slightly worse than 'zstd' and 'zlib' CPU usage at 4 and 16 workers.
> > 
> > Thanks for the results update.
> > 
> > It looks like the docs/migration/ file is still missing.  It'll be great
> > to
> > have it in the next version or separately.
> > 
> > So how it compares with QPL (which got merged already)?  They at least
> > look
> > like both supported on an Intel platform, so an user whoever wants to
> > compress the RAM could start to look at both.  I'm utterly confused on why
> > Intel provides these two similar compressors.  It would be great to have
> > some answer and perhaps put into the doc.

Yuan,

> 
> I would like to explain some of the reasons why we want to merge the 
> two QAT and IAA solutions into the community.

Yes, this is very helpful information.  Please consider putting it into the
cover letter if there's a repost, and perhaps also into the docs/ files.

> 
> 1. Although Intel Xeon Sapphire Rapids supports both QAT and IAA, different 
>    SKUs support different numbers of QAT and IAA, so some users do not have 
>    both QAT and IAA at the same time.
> 
> 2. QAT products include PCIe cards, which are compatible with older Xeon
>    products and other non-Xeon products. And some users have already used QAT
>    cards to accelerate live migration.

Ah, this makes some sense to me.

So a previous question always haunted me: why would a user who bought all
these fancy and expensive processors with QAT still not invest in a better
network of 50G or more, but stick with ancient 10G NICs and switches?

So what you're saying is that, in some old clusters with old chips and old
network gear, a user may buy these PCIe cards separately, and they can help
that old infra migrate VMs faster.  Is that the case?

If so, we may still want some numbers showing how this performs in a
network-limited environment, and how that helps users migrate.  Sorry if
there's some back-and-forth in asking for these numbers, but I think this is
still important information when a user wants to decide whether to use these
features.  Again, putting that into docs/ where appropriate would be nice too.

> 
> 3. In addition to compression, QAT and IAA also support various other features 
>    to better serve different workloads. Here is an introduction to the accelerators,
>    including usage scenarios of QAT and IAA.
>    https://www.intel.com/content/dam/www/central-libraries/us/en/documents/2022-12/storage-engines-4th-gen-xeon-brief.pdf

Thanks for the link.

However, this doesn't look like a reason to support it in migration?  It
needs to help migration in some form or another, no matter how many other
features it provides, since migration may not consume them.

Two major (but pure..) questions:

  1) Why the high CPU usage?

     I raised this question below [1] too but haven't got an answer yet.
     For 8-channel multifd, it's ~390% (QAT) vs. ~240% (no-comp), even
     though the latter pushes 46Gbps... so when throttled it would be even
     lower?

     The paper you provided above has this upfront:

        When a CPU can offload storage functions to built-in accelerators,
        it frees up cores for business-critical workloads...

     Isn't being able to "offload" things a major feature?  Why isn't the
     CPU freed even though the offload happened?

  2) TLS?

     I think I asked before, and I apologize if any of you have already
     answered and I forgot.. but have any of you looked into offloading TLS
     (instead of compression) to the QATs?

Thanks,

> 
> For users who have both QAT and IAA, we recommend the following for choosing a 
> live migration solution:
> 
> 1. If the number of QAT devices is equal to or greater than the number of IAA devices 
>    and network bandwidth is limited, it is recommended to use the QATZip(QAT) solution.
> 
> 2. In other scenarios, the QPL (IAA) solution can be used first.
> 
> > I am honestly curious too on whether are you planning to use it in
> > production.  It looks like if the network resources are rich, no-comp is
> > mostly always better than qatzip, no matter on total migration time or cpu
> > consumption.  I'm pretty surprised that it'll take that much resources
> > even
> > if the work should have been offloaded to the QAT chips iiuc.

[1]

> > 
> > I think it may not be a problem to merge this series even if it performs
> > slower at some criterias.. but I think we may still want to know when this
> > should be used, or the good reason this should be merged (if it's not
> > about
> > it outperforms others).
> > 
> > Thanks,
> > 
> > >
> > >
> > > Bryan Zhang (4):
> > >   meson: Introduce 'qatzip' feature to the build system
> > >   migration: Add migration parameters for QATzip
> > >   migration: Introduce 'qatzip' compression method
> > >   tests/migration: Add integration test for 'qatzip' compression method
> > >
> > >  hw/core/qdev-properties-system.c |   6 +-
> > >  meson.build                      |  10 +
> > >  meson_options.txt                |   2 +
> > >  migration/meson.build            |   1 +
> > >  migration/migration-hmp-cmds.c   |   8 +
> > >  migration/multifd-qatzip.c       | 382 +++++++++++++++++++++++++++++++
> > >  migration/multifd.h              |   1 +
> > >  migration/options.c              |  57 +++++
> > >  migration/options.h              |   2 +
> > >  qapi/migration.json              |  38 +++
> > >  scripts/meson-buildoptions.sh    |   6 +
> > >  tests/qtest/meson.build          |   4 +
> > >  tests/qtest/migration-test.c     |  35 +++
> > >  13 files changed, 551 insertions(+), 1 deletion(-)
> > >  create mode 100644 migration/multifd-qatzip.c
> > >
> > > --
> > > Yichen Wang
> > >
> > 
> > --
> > Peter Xu
> 

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 11+ messages in thread

* RE: [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB
  2024-07-04 15:36     ` Peter Xu
@ 2024-07-05  8:32       ` Liu, Yuan1
  2024-07-05 18:28         ` [External] " Yichen Wang
  0 siblings, 1 reply; 11+ messages in thread
From: Liu, Yuan1 @ 2024-07-05  8:32 UTC (permalink / raw)
  To: Peter Xu
  Cc: Wang, Yichen, Paolo Bonzini, Daniel P. Berrangé,
	Eduardo Habkost, Marc-André Lureau, Thomas Huth,
	Philippe Mathieu-Daudé, Fabiano Rosas, Eric Blake,
	Markus Armbruster, Laurent Vivier, qemu-devel@nongnu.org,
	Hao Xiang, Zou, Nanhai, Ho-Ren (Jack) Chuang

> -----Original Message-----
> From: Peter Xu <peterx@redhat.com>
> Sent: Thursday, July 4, 2024 11:36 PM
> To: Liu, Yuan1 <yuan1.liu@intel.com>
> Cc: Wang, Yichen <yichen.wang@bytedance.com>; Paolo Bonzini
> <pbonzini@redhat.com>; Daniel P. Berrangé <berrange@redhat.com>; Eduardo
> Habkost <eduardo@habkost.net>; Marc-André Lureau
> <marcandre.lureau@redhat.com>; Thomas Huth <thuth@redhat.com>; Philippe
> Mathieu-Daudé <philmd@linaro.org>; Fabiano Rosas <farosas@suse.de>; Eric
> Blake <eblake@redhat.com>; Markus Armbruster <armbru@redhat.com>; Laurent
> Vivier <lvivier@redhat.com>; qemu-devel@nongnu.org; Hao Xiang
> <hao.xiang@linux.dev>; Zou, Nanhai <nanhai.zou@intel.com>; Ho-Ren (Jack)
> Chuang <horenchuang@bytedance.com>
> Subject: Re: [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB
> 
> On Thu, Jul 04, 2024 at 03:15:51AM +0000, Liu, Yuan1 wrote:
> > > -----Original Message-----
> > > From: Peter Xu <peterx@redhat.com>
> > > Sent: Wednesday, July 3, 2024 3:16 AM
> > > To: Wang, Yichen <yichen.wang@bytedance.com>
> > > Cc: Paolo Bonzini <pbonzini@redhat.com>; Daniel P. Berrangé
> > > <berrange@redhat.com>; Eduardo Habkost <eduardo@habkost.net>; Marc-
> André
> > > Lureau <marcandre.lureau@redhat.com>; Thomas Huth <thuth@redhat.com>;
> > > Philippe Mathieu-Daudé <philmd@linaro.org>; Fabiano Rosas
> > > <farosas@suse.de>; Eric Blake <eblake@redhat.com>; Markus Armbruster
> > > <armbru@redhat.com>; Laurent Vivier <lvivier@redhat.com>; qemu-
> > > devel@nongnu.org; Hao Xiang <hao.xiang@linux.dev>; Liu, Yuan1
> > > <yuan1.liu@intel.com>; Zou, Nanhai <nanhai.zou@intel.com>; Ho-Ren
> (Jack)
> > > Chuang <horenchuang@bytedance.com>
> > > Subject: Re: [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB
> > >
> > > On Thu, Jun 27, 2024 at 03:34:41PM -0700, Yichen Wang wrote:
> > > > v3:
> > > > - Rebase changes on top of master
> > > > - Merge two patches per Fabiano Rosas's comment
> > > > - Add versions into comments and documentations
> > > >
> > > > v2:
> > > > - Rebase changes on top of recent multifd code changes.
> > > > - Use QATzip API 'qzMalloc' and 'qzFree' to allocate QAT buffers.
> > > > - Remove parameter tuning and use QATzip's defaults for better
> > > >   performance.
> > > > - Add parameter to enable QAT software fallback.
> > > >
> > > > v1:
> > > > https://lists.nongnu.org/archive/html/qemu-devel/2023-
> 12/msg03761.html
> > > >
> > > > * Performance
> > > >
> > > > We present updated performance results. For circumstantial reasons,
> v1
> > > > presented performance on a low-bandwidth (1Gbps) network.
> > > >
> > > > Here, we present updated results with a similar setup as before but
> with
> > > > two main differences:
> > > >
> > > > 1. Our machines have a ~50Gbps connection, tested using 'iperf3'.
> > > > 2. We had a bug in our memory allocation causing us to only use ~1/2
> of
> > > > the VM's RAM. Now we properly allocate and fill nearly all of the
> VM's
> > > > RAM.
> > > >
> > > > Thus, the test setup is as follows:
> > > >
> > > > We perform multifd live migration over TCP using a VM with 64GB
> memory.
> > > > We prepare the machine's memory by powering it on, allocating a
> large
> > > > amount of memory (60GB) as a single buffer, and filling the buffer
> with
> > > > the repeated contents of the Silesia corpus[0]. This is in lieu of a
> > > more
> > > > realistic memory snapshot, which proved troublesome to acquire.
> > > >
> > > > We analyze CPU usage by averaging the output of 'top' every second
> > > > during migration. This is admittedly imprecise, but we feel that it
> > > > accurately portrays the different degrees of CPU usage of varying
> > > > compression methods.
> > > >
> > > > We present the latency, throughput, and CPU usage results for all of
> the
> > > > compression methods, with varying numbers of multifd threads (4, 8,
> and
> > > > 16).
> > > >
> > > > [0] The Silesia corpus can be accessed here:
> > > > https://sun.aei.polsl.pl//~sdeor/index.php?page=silesia
> > > >
> > > > ** Results
> > > >
> > > > 4 multifd threads:
> > > >
> > > >     |---------------|---------------|----------------|---------|---------|
> > > >     |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
> > > >     |---------------|---------------|----------------|---------|---------|
> > > >     |qatzip         | 23.13         | 8749.94        |117.50   |186.49   |
> > > >     |---------------|---------------|----------------|---------|---------|
> > > >     |zlib           |254.35         |  771.87        |388.20   |144.40   |
> > > >     |---------------|---------------|----------------|---------|---------|
> > > >     |zstd           | 54.52         | 3442.59        |414.59   |149.77   |
> > > >     |---------------|---------------|----------------|---------|---------|
> > > >     |none           | 12.45         |43739.60        |159.71   |204.96   |
> > > >     |---------------|---------------|----------------|---------|---------|
> > > >
> > > > 8 multifd threads:
> > > >
> > > >     |---------------|---------------|----------------|---------|---------|
> > > >     |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
> > > >     |---------------|---------------|----------------|---------|---------|
> > > >     |qatzip         | 16.91         |12306.52        |186.37   |391.84   |
> > > >     |---------------|---------------|----------------|---------|---------|
> > > >     |zlib           |130.11         | 1508.89        |753.86   |289.35   |
> > > >     |---------------|---------------|----------------|---------|---------|
> > > >     |zstd           | 27.57         | 6823.23        |786.83   |303.80   |
> > > >     |---------------|---------------|----------------|---------|---------|
> > > >     |none           | 11.82         |46072.63        |163.74   |238.56   |
> > > >     |---------------|---------------|----------------|---------|---------|
> > > >
> > > > 16 multifd threads:
> > > >
> > > >     |---------------|---------------|----------------|---------|---------|
> > > >     |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
> > > >     |---------------|---------------|----------------|---------|---------|
> > > >     |qatzip         |18.64          |11044.52        | 573.61  |437.65   |
> > > >     |---------------|---------------|----------------|---------|---------|
> > > >     |zlib           |66.43          | 2955.79        |1469.68  |567.47   |
> > > >     |---------------|---------------|----------------|---------|---------|
> > > >     |zstd           |14.17          |13290.66        |1504.08  |615.33   |
> > > >     |---------------|---------------|----------------|---------|---------|
> > > >     |none           |16.82          |32363.26        | 180.74  |217.17   |
> > > >     |---------------|---------------|----------------|---------|---------|
> > > >
> > > > ** Observations
> > > >
> > > > - In general, not using compression outperforms using compression in
> a
> > > >   non-network-bound environment.
> > > > - 'qatzip' outperforms other compression workers with 4 and 8
> workers,
> > > >   achieving a ~91% latency reduction over 'zlib' with 4 workers, and
> a
> > > > ~58% latency reduction over 'zstd' with 4 workers.
> > > > - 'qatzip' maintains comparable performance with 'zstd' at 16
> workers,
> > > >   showing a ~32% increase in latency. This performance difference
> > > > becomes more noticeable with more workers, as CPU compression is
> highly
> > > > parallelizable.
> > > > - 'qatzip' compression uses considerably less CPU than other
> compression
> > > >   methods. At 8 workers, 'qatzip' demonstrates a ~75% reduction in
> > > > compression CPU usage compared to 'zstd' and 'zlib'.
> > > > - 'qatzip' decompression CPU usage is less impressive, and is even
> > > >   slightly worse than 'zstd' and 'zlib' CPU usage at 4 and 16
> workers.
> > >
> > > Thanks for the results update.
> > >
> > > It looks like the docs/migration/ file is still missing.  It'll be
> great
> > > to
> > > have it in the next version or separately.
> > >
> > > So how it compares with QPL (which got merged already)?  They at least
> > > look
> > > like both supported on an Intel platform, so an user whoever wants to
> > > compress the RAM could start to look at both.  I'm utterly confused on
> why
> > > Intel provides these two similar compressors.  It would be great to
> have
> > > some answer and perhaps put into the doc.
> 
> Yuan,
> 
> >
> > I would like to explain some of the reasons why we want to merge the
> > two QAT and IAA solutions into the community.
> 
> Yes, these are very helpful information.  Please consider putting them
> into
> the cover letter if there's a repost, and perhaps also in the doc/ files.
> 
> >
> > 1. Although Intel Xeon Sapphire Rapids supports both QAT and IAA,
> different
> >    SKUs support different numbers of QAT and IAA, so some users do not
> have
> >    both QAT and IAA at the same time.
> >
> > 2. QAT products include PCIe cards, which are compatible with older Xeon
> >    products and other non-Xeon products. And some users have already
> used QAT
> >    cards to accelerate live migration.
> 
> Ah, this makes some sense to me.
> 
> So a previous question always haunted me, where I wondered why an user who
> bought all these fancy and expensive processors with QAT, would still like
> to not invest on a better network of 50G or more, but stick with 10Gs
> ancient NICs and switches.
> 
> So what you're saying is logically in some old clusters with old chips and
> old network solutions, it's possible that user buys these PCIe cards
> separately so it may help with that old infra migrate VMs faster.  Is that
> the case?

Yes, users do not add a QAT card just for live migration.  They mainly use
QAT SR-IOV to let cloud users offload compression and encryption.

> If so, we may still want some numbers showing how this performs in a
> network-limited environment, and how that helps users to migrate.  Sorry
> if
> there's some back-and-forth requirement asking for these numbers, but I
> think these are still important information when an user would like to
> decide whether to use these features.  Again, put that into docs/ if
> proper
> would be nice too.

Yes, I will provide some performance data at specific bandwidths
(100Mbps/1Gbps/10Gbps), and add documentation to explain the advantages of
using QAT.
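(As a note on method: the migration bandwidth for such runs can be capped
with the existing max-bandwidth parameter instead of reshaping the NIC. This
is just a sketch of one way to do it, with 1Gbps expressed as 125MB/s:)

    (qemu) migrate_set_parameter max-bandwidth 125M

Scaling the same value to 12.5M or 1250M gives the 100Mbps and 10Gbps points.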

> >
> > 3. In addition to compression, QAT and IAA also support various other
> features
> >    to better serve different workloads. Here is an introduction to the
> accelerators,
> >    including usage scenarios of QAT and IAA.
> >    https://www.intel.com/content/dam/www/central-
> libraries/us/en/documents/2022-12/storage-engines-4th-gen-xeon-brief.pdf
> 
> Thanks for the link.
> 
> However this doesn't look like a reason to support it in migration?  It
> needs to help migration in some form or another, no matter how many
> features it provides.. since migration may not consume them.
> 
> Two major (but pure..) questions:
> 
>   1) Why high cpu usage?
> 
>      I raised this question below [1] too but I didn't yet get an answer.
>      Ror 8-chan multifd, it's ~390% (QAT) v.s. ~240% (nocomp), even if
>      46Gbps bw for the latter... so when throttled it will be even lower?
> 
>      The paper you provided above has this upfront:
> 
>         When a CPU can offload storage functions to built-in accelerators,
>         it frees up cores for business-critical workloads...
> 
>      Isn't that a major feature to be able to "offload" things?  Why cpu
>      isn't freed even if the offload happened?

Yes, it doesn't make sense; I will check this.

>   2) TLS?
> 
>      I think I asked before, I apologize if any of you've already answered
>      and if I forgot.. but have any of you looked into offload TLS
> (instead
>      of compression) with the QATs?

I'm sorry for not responding to the previous question about TLS.  QAT has
many related success cases (for example, with OpenSSL):
https://www.intel.com/content/dam/www/public/us/en/documents/solution-briefs/accelerating-openssl-brief.pdf

The software stacks for QAT compression and encryption are independent, so I
will send a separate RFC or patch for that part and we can discuss it
separately.
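(For context, QEMU migration already supports TLS via the tls-creds migration
parameter; a QAT-backed crypto stack, e.g. an accelerated OpenSSL engine,
would sit underneath that layer. That layering is an assumption about how
such an offload could be wired up, not something this series implements. A
minimal sketch of the existing TLS setup on the source side, with a
placeholder credentials directory:)

    (qemu) object_add tls-creds-x509,id=tls0,dir=/etc/pki/qemu,endpoint=client
    (qemu) migrate_set_parameter tls-creds tls0
    (qemu) migrate -d tcp:<dest-ip>:<port>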

> Thanks,
> 
> >
> > For users who have both QAT and IAA, we recommend the following for
> choosing a
> > live migration solution:
> >
> > 1. If the number of QAT devices is equal to or greater than the number
> of IAA devices
> >    and network bandwidth is limited, it is recommended to use the
> QATZip(QAT) solution.
> >
> > 2. In other scenarios, the QPL (IAA) solution can be used first.
> >
> > > I am honestly curious too on whether are you planning to use it in
> > > production.  It looks like if the network resources are rich, no-comp
> is
> > > mostly always better than qatzip, no matter on total migration time or
> cpu
> > > consumption.  I'm pretty surprised that it'll take that much resources
> > > even
> > > if the work should have been offloaded to the QAT chips iiuc.
> 
> [1]
> 
> > >
> > > I think it may not be a problem to merge this series even if it
> performs
> > > slower at some criterias.. but I think we may still want to know when
> this
> > > should be used, or the good reason this should be merged (if it's not
> > > about
> > > it outperforms others).
> > >
> > > Thanks,
> > >
> > > >
> > > >
> > > > Bryan Zhang (4):
> > > >   meson: Introduce 'qatzip' feature to the build system
> > > >   migration: Add migration parameters for QATzip
> > > >   migration: Introduce 'qatzip' compression method
> > > >   tests/migration: Add integration test for 'qatzip' compression
> method
> > > >
> > > >  hw/core/qdev-properties-system.c |   6 +-
> > > >  meson.build                      |  10 +
> > > >  meson_options.txt                |   2 +
> > > >  migration/meson.build            |   1 +
> > > >  migration/migration-hmp-cmds.c   |   8 +
> > > >  migration/multifd-qatzip.c       | 382
> +++++++++++++++++++++++++++++++
> > > >  migration/multifd.h              |   1 +
> > > >  migration/options.c              |  57 +++++
> > > >  migration/options.h              |   2 +
> > > >  qapi/migration.json              |  38 +++
> > > >  scripts/meson-buildoptions.sh    |   6 +
> > > >  tests/qtest/meson.build          |   4 +
> > > >  tests/qtest/migration-test.c     |  35 +++
> > > >  13 files changed, 551 insertions(+), 1 deletion(-)
> > > >  create mode 100644 migration/multifd-qatzip.c
> > > >
> > > > --
> > > > Yichen Wang
> > > >
> > >
> > > --
> > > Peter Xu
> >
> 
> --
> Peter Xu


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [External] [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB
  2024-07-05  8:32       ` Liu, Yuan1
@ 2024-07-05 18:28         ` Yichen Wang
  2024-07-08 15:46           ` Peter Xu
  0 siblings, 1 reply; 11+ messages in thread
From: Yichen Wang @ 2024-07-05 18:28 UTC (permalink / raw)
  To: Liu, Yuan1
  Cc: Peter Xu, Paolo Bonzini, "Daniel P. Berrangé",
	Eduardo Habkost, Marc-André Lureau, Thomas Huth,
	Philippe Mathieu-Daudé, Fabiano Rosas, Eric Blake,
	Markus Armbruster, Laurent Vivier, qemu-devel@nongnu.org,
	Hao Xiang, Zou, Nanhai, Ho-Ren (Jack) Chuang


> On Jul 5, 2024, at 1:32 AM, Liu, Yuan1 <yuan1.liu@intel.com> wrote:
> 
>> -----Original Message-----
>> From: Peter Xu <peterx@redhat.com>
>> Sent: Thursday, July 4, 2024 11:36 PM
>> To: Liu, Yuan1 <yuan1.liu@intel.com>
>> Cc: Wang, Yichen <yichen.wang@bytedance.com>; Paolo Bonzini
>> <pbonzini@redhat.com>; Daniel P. Berrangé <berrange@redhat.com>; Eduardo
>> Habkost <eduardo@habkost.net>; Marc-André Lureau
>> <marcandre.lureau@redhat.com>; Thomas Huth <thuth@redhat.com>; Philippe
>> Mathieu-Daudé <philmd@linaro.org>; Fabiano Rosas <farosas@suse.de>; Eric
>> Blake <eblake@redhat.com>; Markus Armbruster <armbru@redhat.com>; Laurent
>> Vivier <lvivier@redhat.com>; qemu-devel@nongnu.org; Hao Xiang
>> <hao.xiang@linux.dev>; Zou, Nanhai <nanhai.zou@intel.com>; Ho-Ren (Jack)
>> Chuang <horenchuang@bytedance.com>
>> Subject: Re: [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB
>> 
>> On Thu, Jul 04, 2024 at 03:15:51AM +0000, Liu, Yuan1 wrote:
>>>> -----Original Message-----
>>>> From: Peter Xu <peterx@redhat.com>
>>>> Sent: Wednesday, July 3, 2024 3:16 AM
>>>> To: Wang, Yichen <yichen.wang@bytedance.com>
>>>> Cc: Paolo Bonzini <pbonzini@redhat.com>; Daniel P. Berrangé
>>>> <berrange@redhat.com>; Eduardo Habkost <eduardo@habkost.net>; Marc-
>> André
>>>> Lureau <marcandre.lureau@redhat.com>; Thomas Huth <thuth@redhat.com>;
>>>> Philippe Mathieu-Daudé <philmd@linaro.org>; Fabiano Rosas
>>>> <farosas@suse.de>; Eric Blake <eblake@redhat.com>; Markus Armbruster
>>>> <armbru@redhat.com>; Laurent Vivier <lvivier@redhat.com>; qemu-
>>>> devel@nongnu.org; Hao Xiang <hao.xiang@linux.dev>; Liu, Yuan1
>>>> <yuan1.liu@intel.com>; Zou, Nanhai <nanhai.zou@intel.com>; Ho-Ren
>> (Jack)
>>>> Chuang <horenchuang@bytedance.com>
>>>> Subject: Re: [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB
>>>> 
>>>> On Thu, Jun 27, 2024 at 03:34:41PM -0700, Yichen Wang wrote:
>>>>> v3:
>>>>> - Rebase changes on top of master
>>>>> - Merge two patches per Fabiano Rosas's comment
>>>>> - Add versions into comments and documentations
>>>>> 
>>>>> v2:
>>>>> - Rebase changes on top of recent multifd code changes.
>>>>> - Use QATzip API 'qzMalloc' and 'qzFree' to allocate QAT buffers.
>>>>> - Remove parameter tuning and use QATzip's defaults for better
>>>>>  performance.
>>>>> - Add parameter to enable QAT software fallback.
>>>>> 
>>>>> v1:
>>>>> https://lists.nongnu.org/archive/html/qemu-devel/2023-
>> 12/msg03761.html
>>>>> 
>>>>> * Performance
>>>>> 
>>>>> We present updated performance results. For circumstantial reasons,
>> v1
>>>>> presented performance on a low-bandwidth (1Gbps) network.
>>>>> 
>>>>> Here, we present updated results with a similar setup as before but
>> with
>>>>> two main differences:
>>>>> 
>>>>> 1. Our machines have a ~50Gbps connection, tested using 'iperf3'.
>>>>> 2. We had a bug in our memory allocation causing us to only use ~1/2
>> of
>>>>> the VM's RAM. Now we properly allocate and fill nearly all of the
>> VM's
>>>>> RAM.
>>>>> 
>>>>> Thus, the test setup is as follows:
>>>>> 
>>>>> We perform multifd live migration over TCP using a VM with 64GB
>> memory.
>>>>> We prepare the machine's memory by powering it on, allocating a
>> large
>>>>> amount of memory (60GB) as a single buffer, and filling the buffer
>> with
>>>>> the repeated contents of the Silesia corpus[0]. This is in lieu of a
>>>> more
>>>>> realistic memory snapshot, which proved troublesome to acquire.
>>>>> 
>>>>> We analyze CPU usage by averaging the output of 'top' every second
>>>>> during migration. This is admittedly imprecise, but we feel that it
>>>>> accurately portrays the different degrees of CPU usage of varying
>>>>> compression methods.
>>>>> 
>>>>> We present the latency, throughput, and CPU usage results for all of
>> the
>>>>> compression methods, with varying numbers of multifd threads (4, 8,
>> and
>>>>> 16).
>>>>> 
>>>>> [0] The Silesia corpus can be accessed here:
>>>>> https://sun.aei.polsl.pl//~sdeor/index.php?page=silesia
>>>>> 
>>>>> ** Results
>>>>> 
>>>>> 4 multifd threads:
>>>>> 
>>>>>    |---------------|---------------|----------------|---------|---------|
>>>>>    |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
>>>>>    |---------------|---------------|----------------|---------|---------|
>>>>>    |qatzip         | 23.13         | 8749.94        |117.50   |186.49   |
>>>>>    |---------------|---------------|----------------|---------|---------|
>>>>>    |zlib           |254.35         |  771.87        |388.20   |144.40   |
>>>>>    |---------------|---------------|----------------|---------|---------|
>>>>>    |zstd           | 54.52         | 3442.59        |414.59   |149.77   |
>>>>>    |---------------|---------------|----------------|---------|---------|
>>>>>    |none           | 12.45         |43739.60        |159.71   |204.96   |
>>>>>    |---------------|---------------|----------------|---------|---------|
>>>>> 
>>>>> 8 multifd threads:
>>>>> 
>>>>>    |---------------|---------------|----------------|---------|---------|
>>>>>    |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
>>>>>    |---------------|---------------|----------------|---------|---------|
>>>>>    |qatzip         | 16.91         |12306.52        |186.37   |391.84   |
>>>>>    |---------------|---------------|----------------|---------|---------|
>>>>>    |zlib           |130.11         | 1508.89        |753.86   |289.35   |
>>>>>    |---------------|---------------|----------------|---------|---------|
>>>>>    |zstd           | 27.57         | 6823.23        |786.83   |303.80   |
>>>>>    |---------------|---------------|----------------|---------|---------|
>>>>>    |none           | 11.82         |46072.63        |163.74   |238.56   |
>>>>>    |---------------|---------------|----------------|---------|---------|
>>>>> 
>>>>> 16 multifd threads:
>>>>> 
>>>>>    |---------------|---------------|----------------|---------|---------|
>>>>>    |method         |time(sec)      |throughput(mbps)|send cpu%|recv cpu%|
>>>>>    |---------------|---------------|----------------|---------|---------|
>>>>>    |qatzip         |18.64          |11044.52        | 573.61  |437.65   |
>>>>>    |---------------|---------------|----------------|---------|---------|
>>>>>    |zlib           |66.43          | 2955.79        |1469.68  |567.47   |
>>>>>    |---------------|---------------|----------------|---------|---------|
>>>>>    |zstd           |14.17          |13290.66        |1504.08  |615.33   |
>>>>>    |---------------|---------------|----------------|---------|---------|
>>>>>    |none           |16.82          |32363.26        | 180.74  |217.17   |
>>>>>    |---------------|---------------|----------------|---------|---------|
>>>>> 
>>>>> ** Observations
>>>>> 
>>>>> - In general, not using compression outperforms using compression in
>> a
>>>>>  non-network-bound environment.
>>>>> - 'qatzip' outperforms other compression workers with 4 and 8
>> workers,
>>>>>  achieving a ~91% latency reduction over 'zlib' with 4 workers, and
>> a
>>>>> ~58% latency reduction over 'zstd' with 4 workers.
>>>>> - 'qatzip' maintains comparable performance with 'zstd' at 16
>> workers,
>>>>>  showing a ~32% increase in latency. This performance difference
>>>>> becomes more noticeable with more workers, as CPU compression is
>> highly
>>>>> parallelizable.
>>>>> - 'qatzip' compression uses considerably less CPU than other
>> compression
>>>>>  methods. At 8 workers, 'qatzip' demonstrates a ~75% reduction in
>>>>> compression CPU usage compared to 'zstd' and 'zlib'.
>>>>> - 'qatzip' decompression CPU usage is less impressive, and is even
>>>>>  slightly worse than 'zstd' and 'zlib' CPU usage at 4 and 16
>> workers.
>>>> 
>>>> Thanks for the results update.
>>>> 
>>>> It looks like the docs/migration/ file is still missing.  It'll be
>> great
>>>> to
>>>> have it in the next version or separately.
>>>> 
>>>> So how it compares with QPL (which got merged already)?  They at least
>>>> look
>>>> like both supported on an Intel platform, so an user whoever wants to
>>>> compress the RAM could start to look at both.  I'm utterly confused on
>> why
>>>> Intel provides these two similar compressors.  It would be great to
>> have
>>>> some answer and perhaps put into the doc.
>> 
>> Yuan,
>> 
>>> 
>>> I would like to explain some of the reasons why we want to merge the
>>> two QAT and IAA solutions into the community.
>> 
>> Yes, these are very helpful information.  Please consider putting them
>> into
>> the cover letter if there's a repost, and perhaps also in the doc/ files.
>> 
>>> 
>>> 1. Although Intel Xeon Sapphire Rapids supports both QAT and IAA,
>> different
>>>   SKUs support different numbers of QAT and IAA, so some users do not
>> have
>>>   both QAT and IAA at the same time.
>>> 
>>> 2. QAT products include PCIe cards, which are compatible with older Xeon
>>>   products and other non-Xeon products. And some users have already
>> used QAT
>>>   cards to accelerate live migration.
>> 
>> Ah, this makes some sense to me.
>> 
>> So a previous question always haunted me, where I wondered why an user who
>> bought all these fancy and expensive processors with QAT, would still like
>> to not invest on a better network of 50G or more, but stick with 10Gs
>> ancient NICs and switches.
>> 
>> So what you're saying is logically in some old clusters with old chips and
>> old network solutions, it's possible that user buys these PCIe cards
>> separately so it may help with that old infra migrate VMs faster.  Is that
>> the case?
> 
> Yes, users do not add a QAT card just for live migration. Users mainly use 
> QAT-SRIOV technology to help cloud users offload compression and encryption.
> 
>> If so, we may still want some numbers showing how this performs in a
>> network-limited environment, and how that helps users to migrate.  Sorry
>> if
>> there's some back-and-forth requirement asking for these numbers, but I
>> think these are still important information when an user would like to
>> decide whether to use these features.  Again, put that into docs/ if
>> proper
>> would be nice too.
> 
> Yes, I will provide some performance data at some specific 
> bandwidths(100Mbps/1Gbps/10Gbps). And add documentation to explain 
> the advantages of using QAT 
Just want to add some information here.  At ByteDance, the current generation
of servers is equipped with 2*100Gb NICs.  We reserve 10Gbps for control-plane
purposes, which includes live migration.  So it is not that we are using a
"good network"; it is that it is not normal to spend the full bandwidth on
control-plane traffic.  Hence we do have a requirement for QAT/IAA in these
cases.
>>> 
>>> 3. In addition to compression, QAT and IAA also support various other
>> features
>>>   to better serve different workloads. Here is an introduction to the
>> accelerators,
>>>   including usage scenarios of QAT and IAA.
>>>   https://www.intel.com/content/dam/www/central-
>> libraries/us/en/documents/2022-12/storage-engines-4th-gen-xeon-brief.pdf
>> 
>> Thanks for the link.
>> 
>> However this doesn't look like a reason to support it in migration?  It
>> needs to help migration in some form or another, no matter how many
>> features it provides.. since migration may not consume them.
>> 
>> Two major (but pure..) questions:
>> 
>>  1) Why high cpu usage?
>> 
>>     I raised this question below [1] too but I didn't yet get an answer.
>>     Ror 8-chan multifd, it's ~390% (QAT) v.s. ~240% (nocomp), even if
>>     46Gbps bw for the latter... so when throttled it will be even lower?
>> 
>>     The paper you provided above has this upfront:
>> 
>>        When a CPU can offload storage functions to built-in accelerators,
>>        it frees up cores for business-critical workloads...
>> 
>>     Isn't that a major feature to be able to "offload" things?  Why cpu
>>     isn't freed even if the offload happened?
> 
> Yes, it doesn't make sense, I will check this.
> 
>>  2) TLS?
>> 
>>     I think I asked before, I apologize if any of you've already answered
>>     and if I forgot.. but have any of you looked into offload TLS
>> (instead
>>     of compression) with the QATs?
> 
> I'm sorry for not responding to the previous question about TLS. QAT has many 
> related success cases (for example, OpenSSL). 
> https://www.intel.com/content/dam/www/public/us/en/documents/solution-briefs/accelerating-openssl-brief.pdf
> 
> I will send a separate RFC or patch about this part because the software stacks of QAT 
> compression and encryption are independent, so we discuss them separately.
> 
>> Thanks,
>> 
>>> 
>>> For users who have both QAT and IAA, we recommend the following for
>> choosing a
>>> live migration solution:
>>> 
>>> 1. If the number of QAT devices is equal to or greater than the number
>> of IAA devices
>>>   and network bandwidth is limited, it is recommended to use the
>> QATZip(QAT) solution.
>>> 
>>> 2. In other scenarios, the QPL (IAA) solution can be used first.
>>> 
>>>> I am honestly curious too on whether are you planning to use it in
>>>> production.  It looks like if the network resources are rich, no-comp
>> is
>>>> mostly always better than qatzip, no matter on total migration time or
>> cpu
>>>> consumption.  I'm pretty surprised that it'll take that much resources
>>>> even
>>>> if the work should have been offloaded to the QAT chips iiuc.
>> 
>> [1]
>> 
>>>> 
>>>> I think it may not be a problem to merge this series even if it
>> performs
>>>> slower at some criterias.. but I think we may still want to know when
>> this
>>>> should be used, or the good reason this should be merged (if it's not
>>>> about
>>>> it outperforms others).
>>>> 
>>>> Thanks,
>>>> 
>>>>> 
>>>>> 
>>>>> Bryan Zhang (4):
>>>>>  meson: Introduce 'qatzip' feature to the build system
>>>>>  migration: Add migration parameters for QATzip
>>>>>  migration: Introduce 'qatzip' compression method
>>>>>  tests/migration: Add integration test for 'qatzip' compression
>> method
>>>>> 
>>>>> hw/core/qdev-properties-system.c |   6 +-
>>>>> meson.build                      |  10 +
>>>>> meson_options.txt                |   2 +
>>>>> migration/meson.build            |   1 +
>>>>> migration/migration-hmp-cmds.c   |   8 +
>>>>> migration/multifd-qatzip.c       | 382
>> +++++++++++++++++++++++++++++++
>>>>> migration/multifd.h              |   1 +
>>>>> migration/options.c              |  57 +++++
>>>>> migration/options.h              |   2 +
>>>>> qapi/migration.json              |  38 +++
>>>>> scripts/meson-buildoptions.sh    |   6 +
>>>>> tests/qtest/meson.build          |   4 +
>>>>> tests/qtest/migration-test.c     |  35 +++
>>>>> 13 files changed, 551 insertions(+), 1 deletion(-)
>>>>> create mode 100644 migration/multifd-qatzip.c
>>>>> 
>>>>> --
>>>>> Yichen Wang
>>>>> 
>>>> 
>>>> --
>>>> Peter Xu
>>> 
>> 
>> --
>> Peter Xu



^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [External] [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB
  2024-07-05 18:28         ` [External] " Yichen Wang
@ 2024-07-08 15:46           ` Peter Xu
  0 siblings, 0 replies; 11+ messages in thread
From: Peter Xu @ 2024-07-08 15:46 UTC (permalink / raw)
  To: Yichen Wang
  Cc: Liu, Yuan1, Paolo Bonzini, "Daniel P. Berrangé",
	Eduardo Habkost, Marc-André Lureau, Thomas Huth,
	Philippe Mathieu-Daudé, Fabiano Rosas, Eric Blake,
	Markus Armbruster, Laurent Vivier, qemu-devel@nongnu.org,
	Hao Xiang, Zou, Nanhai, Ho-Ren (Jack) Chuang

On Fri, Jul 05, 2024 at 11:28:25AM -0700, Yichen Wang wrote:
> Just want to add some information here. So in ByteDance, the current
> generation server is quipped with 2*100Gb NIC. We reserve 10Gbps for
> control plane purposes which includes live migration here. So it is not
> about we are using “good network”, it is about not normal to use full
> bandwidth for control plane purposes. Hence we do have a requirements for
> QAT/IAA in these cases.

Yes, this makes sense.

But then you may also want to figure out the high CPU consumption of those
cards, so that they do not interrupt more important workloads?

I saw there's a new version posted, but I didn't see the CPU consumption
issue explained.  Meanwhile I also see that the docs/ update is still
missing.

Would you consider adding both in a reply to the new version?

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2024-07-08 15:47 UTC | newest]

Thread overview: 11+ messages
2024-06-27 22:34 [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB Yichen Wang
2024-06-27 22:34 ` [PATCH v3 1/4] meson: Introduce 'qatzip' feature to the build system Yichen Wang
2024-06-27 22:34 ` [PATCH v3 2/4] migration: Add migration parameters for QATzip Yichen Wang
2024-06-27 22:34 ` [PATCH v3 3/4] migration: Introduce 'qatzip' compression method Yichen Wang
2024-06-27 22:34 ` [PATCH v3 4/4] tests/migration: Add integration test for " Yichen Wang
2024-07-02 19:16 ` [PATCH v3 0/4] Implement using Intel QAT to offload ZLIB Peter Xu
2024-07-04  3:15   ` Liu, Yuan1
2024-07-04 15:36     ` Peter Xu
2024-07-05  8:32       ` Liu, Yuan1
2024-07-05 18:28         ` [External] " Yichen Wang
2024-07-08 15:46           ` Peter Xu
