* [PATCH v4 0/8] Live Migration With IAA
@ 2024-03-04 14:00 Yuan Liu
2024-03-04 14:00 ` [PATCH v4 1/8] docs/migration: add qpl compression feature Yuan Liu
` (7 more replies)
0 siblings, 8 replies; 17+ messages in thread
From: Yuan Liu @ 2024-03-04 14:00 UTC (permalink / raw)
To: peterx, farosas; +Cc: qemu-devel, hao.xiang, bryan.zhang, yuan1.liu, nanhai.zou
Hi,
I am writing to submit a code change aimed at enhancing live migration
acceleration by leveraging the compression capability of the Intel
In-Memory Analytics Accelerator (IAA).
The implementation of the IAA (de)compression code is based on the Intel
Query Processing Library (QPL), an open-source software project designed
for high-level software programming of the IAA: https://github.com/intel/qpl
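For readers new to the QPL C API, here is a minimal sketch of one
compression job on the IAA hardware path. It only uses QPL calls that also
appear in this series; the helper name and error handling are illustrative,
not part of the patches:

    #include <stdint.h>
    #include <stdlib.h>
    #include <qpl/qpl.h>

    static int compress_buf(uint8_t *in, uint32_t in_len,
                            uint8_t *out, uint32_t out_len,
                            uint32_t *out_written)
    {
        uint32_t job_size = 0;
        qpl_job *job;

        /* the job structure size depends on the execution path */
        if (qpl_get_job_size(qpl_path_hardware, &job_size) != QPL_STS_OK) {
            return -1;
        }
        job = calloc(1, job_size);
        if (qpl_init_job(qpl_path_hardware, job) != QPL_STS_OK) {
            free(job);      /* e.g. no enabled IAA shared work queue */
            return -1;
        }
        job->op = qpl_op_compress;
        job->next_in_ptr = in;
        job->available_in = in_len;
        job->next_out_ptr = out;
        job->available_out = out_len;
        job->level = 1;     /* the only level the IAA path supports */
        job->flags = QPL_FLAG_FIRST | QPL_FLAG_LAST | QPL_FLAG_OMIT_VERIFY;

        if (qpl_submit_job(job) != QPL_STS_OK ||
            qpl_wait_job(job) != QPL_STS_OK) {
            qpl_fini_job(job);
            free(job);
            return -1;
        }
        *out_written = job->total_out;  /* compressed length */
        qpl_fini_job(job);
        free(job);
        return 0;
    }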
Sorry that the v4 submission took so long; the delay was due to the Chinese
New Year holiday. I would like to summarize the progress so far:
1. QPL will be used as an independent compression method like ZLIB and ZSTD.
QPL always uses the IAA accelerator and will not fall back to software
compression. For a summary of the compatibility issues with ZLIB, please
refer to docs/devel/migration/qpl-compression.rst
2. The compression-accelerator-related patches have been removed from this
patch set and will be added to the QAT patch set; we will submit separate
patches that use QAT to accelerate ZLIB and ZSTD.
3. Advantages of using the IAA accelerator include:
   a. Compared with the non-compression method, it can improve downtime
      performance without adding additional host resources (both CPU and
      network).
   b. Compared with software compression methods (ZSTD/ZLIB), it can
      provide a high data compression ratio and save a lot of the CPU
      resources used for compression.
Test conditions:
1. Host CPUs are based on Sapphire Rapids
2. VM: 16 vCPUs and 64G memory
3. The source and destination each use 4 IAA devices.
4. The workload in the VM
   a. all vCPUs are in the idle state
   b. 90% of the virtual machine's memory is used, filled with the
      Silesia corpus
      An introduction to Silesia:
      https://sun.aei.polsl.pl//~sdeor/index.php?page=silesia
5. Set the "-mem-prealloc" boot parameter on the destination; this
   improves IAA performance, and the rationale is documented in
   docs/devel/migration/qpl-compression.rst (a launch sketch follows
   this list)
6. Source migration configuration commands
   a. migrate_set_capability multifd on
   b. migrate_set_parameter multifd-channels 2/4/8
   c. migrate_set_parameter downtime-limit 300
   d. migrate_set_parameter max-bandwidth 100G/1G
   e. migrate_set_parameter multifd-compression none/qpl/zstd
7. Destination migration configuration commands
   a. migrate_set_capability multifd on
   b. migrate_set_parameter multifd-channels 2/4/8
   c. migrate_set_parameter multifd-compression none/qpl/zstd
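A sketch of how the destination side can be launched for such a test; the
image path and port are placeholders, not the exact command line we used:

    # destination: preallocate guest memory to avoid IAA I/O page faults
    sudo qemu-system-x86_64 -accel kvm -smp 16 -m 64G -mem-prealloc \
        -drive file=/path/to/guest.img,format=qcow2 \
        -monitor stdio -incoming defer

    (qemu) migrate_set_capability multifd on
    (qemu) migrate_set_parameter multifd-channels 4
    (qemu) migrate_set_parameter multifd-compression qpl
    (qemu) migrate_incoming tcp:0:4444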
Early migration results; each result is the average of three runs
+--------+-------------+--------+--------+---------+----------+------+
|        | The number  | total  |downtime| network |pages per | CPU  |
| None   | of channels |time(ms)| (ms)   |bandwidth| second   | Util |
| Comp   |             |        |        | (mbps)  |          |      |
|        +-------------+--------+--------+---------+----------+------+
|Network |           2 |   8571 |     69 |   58391 |  1896525 | 256% |
|BW:100G +-------------+--------+--------+---------+----------+------+
|        |           4 |   7180 |     92 |   69736 |  1865640 | 300% |
|        +-------------+--------+--------+---------+----------+------+
|        |           8 |   7090 |    121 |   70562 |  2174060 | 307% |
+--------+-------------+--------+--------+---------+----------+------+
+--------+-------------+--------+--------+---------+----------+------+
|        | The number  | total  |downtime| network |pages per | CPU  |
| QPL    | of channels |time(ms)| (ms)   |bandwidth| second   | Util |
| Comp   |             |        |        | (mbps)  |          |      |
|        +-------------+--------+--------+---------+----------+------+
|Network |           2 |   8413 |     34 |   30067 |  1732411 | 230% |
|BW:100G +-------------+--------+--------+---------+----------+------+
|        |           4 |   6559 |     32 |   38804 |  1689954 | 450% |
|        +-------------+--------+--------+---------+----------+------+
|        |           8 |   6623 |     37 |   38745 |  1566507 | 790% |
+--------+-------------+--------+--------+---------+----------+------+
+--------+-------------+--------+--------+---------+----------+------+
|        | The number  | total  |downtime| network |pages per | CPU  |
| ZSTD   | of channels |time(ms)| (ms)   |bandwidth| second   | Util |
| Comp   |             |        |        | (mbps)  |          |      |
|        +-------------+--------+--------+---------+----------+------+
|Network |           2 |  95846 |     24 |    1800 |   521829 | 203% |
|BW:100G +-------------+--------+--------+---------+----------+------+
|        |           4 |  49004 |     24 |    3529 |   890532 | 403% |
|        +-------------+--------+--------+---------+----------+------+
|        |           8 |  25574 |     32 |    6782 |  1762222 | 800% |
+--------+-------------+--------+--------+---------+----------+------+
When network bandwidth is sufficient, QPL can improve downtime by 2x
compared to no compression. In this scenario, with 4 channels the IAA
hardware resources are fully used, so adding more channels gains no
further benefit.
+--------+-------------+--------+--------+---------+----------+------+
|        | The number  | total  |downtime| network |pages per | CPU  |
| None   | of channels |time(ms)| (ms)   |bandwidth| second   | Util |
| Comp   |             |        |        | (mbps)  |          |      |
|        +-------------+--------+--------+---------+----------+------+
|Network |           2 |  57758 |     66 |    8643 |   264617 |  34% |
|BW: 1G  +-------------+--------+--------+---------+----------+------+
|        |           4 |  57216 |     58 |    8726 |   266773 |  34% |
|        +-------------+--------+--------+---------+----------+------+
|        |           8 |  56708 |     53 |    8804 |   270223 |  33% |
+--------+-------------+--------+--------+---------+----------+------+
+--------+-------------+--------+--------+---------+----------+------+
|        | The number  | total  |downtime| network |pages per | CPU  |
| QPL    | of channels |time(ms)| (ms)   |bandwidth| second   | Util |
| Comp   |             |        |        | (mbps)  |          |      |
|        +-------------+--------+--------+---------+----------+------+
|Network |           2 |  30129 |     34 |    8345 |  2224761 |  54% |
|BW: 1G  +-------------+--------+--------+---------+----------+------+
|        |           4 |  30317 |     39 |    8300 |  2025220 |  73% |
|        +-------------+--------+--------+---------+----------+------+
|        |           8 |  29615 |     35 |    8514 |  2250122 | 131% |
+--------+-------------+--------+--------+---------+----------+------+
+--------+-------------+--------+--------+---------+----------+------+
|        | The number  | total  |downtime| network |pages per | CPU  |
| ZSTD   | of channels |time(ms)| (ms)   |bandwidth| second   | Util |
| Comp   |             |        |        | (mbps)  |          |      |
|        +-------------+--------+--------+---------+----------+------+
|Network |           2 |  95750 |     24 |    1802 |   477236 | 202% |
|BW: 1G  +-------------+--------+--------+---------+----------+------+
|        |           4 |  48907 |     24 |    3536 |  1002142 | 404% |
|        +-------------+--------+--------+---------+----------+------+
|        |           8 |  25568 |     32 |    6783 |  1696437 | 800% |
+--------+-------------+--------+--------+---------+----------+------+
When network bandwidth is limited, the "pages per second" metric
decreases for no compression, and the migration success rate drops.
Comparing the QPL and ZSTD compression methods, QPL saves a lot of the
CPU resources used for compression.
v2:
- add support for multifd compression accelerator
- add support for the QPL accelerator in the multifd
compression accelerator
- fix the issue where QPL was compiled into the migration
module by default
v3:
- use Meson instead of pkg-config to resolve QPL build
dependency issue
- fix coding style
- fix a CI issue with the get_multifd_ops function in multifd.c
v4:
- patch based on commit: da96ad4a6a Merge tag 'hw-misc-20240215' of
https://github.com/philmd/qemu into staging
- remove the compression accelerator implementation patches; they
will be included in the QAT accelerator series
- introduce QPL as a new compression method
- add QPL compression documentation
- add QPL compression migration test
- fix zlib/zstd compression level issue
Yuan Liu (8):
docs/migration: add qpl compression feature
migration/multifd: add get_iov_count in the multifd method
configure: add --enable-qpl build option
migration/multifd: add qpl compression method
migration/multifd: implement initialization of qpl compression
migration/multifd: implement qpl compression and decompression
migration/multifd: fix zlib and zstd compression levels not working
tests/migration-test: add qpl compression test
docs/devel/migration/features.rst | 1 +
docs/devel/migration/qpl-compression.rst | 231 +++++++++++
hw/core/qdev-properties-system.c | 2 +-
meson.build | 18 +
meson_options.txt | 2 +
migration/meson.build | 1 +
migration/multifd-qpl.c | 485 +++++++++++++++++++++++
migration/multifd-zlib.c | 18 +-
migration/multifd-zstd.c | 18 +-
migration/multifd.c | 24 +-
migration/multifd.h | 3 +
migration/options.c | 12 +
qapi/migration.json | 7 +-
scripts/meson-buildoptions.sh | 3 +
tests/qtest/migration-test.c | 40 ++
15 files changed, 858 insertions(+), 7 deletions(-)
create mode 100644 docs/devel/migration/qpl-compression.rst
create mode 100644 migration/multifd-qpl.c
--
2.39.3
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v4 1/8] docs/migration: add qpl compression feature
2024-03-04 14:00 [PATCH v4 0/8] Live Migration With IAA Yuan Liu
@ 2024-03-04 14:00 ` Yuan Liu
2024-03-04 14:00 ` [PATCH v4 2/8] migration/multifd: add get_iov_count in the multifd method Yuan Liu
` (6 subsequent siblings)
7 siblings, 0 replies; 17+ messages in thread
From: Yuan Liu @ 2024-03-04 14:00 UTC (permalink / raw)
To: peterx, farosas; +Cc: qemu-devel, hao.xiang, bryan.zhang, yuan1.liu, nanhai.zou
add an introduction to the QPL compression method
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
---
docs/devel/migration/features.rst | 1 +
docs/devel/migration/qpl-compression.rst | 231 +++++++++++++++++++++++
2 files changed, 232 insertions(+)
create mode 100644 docs/devel/migration/qpl-compression.rst
diff --git a/docs/devel/migration/features.rst b/docs/devel/migration/features.rst
index a9acaf618e..9819393c12 100644
--- a/docs/devel/migration/features.rst
+++ b/docs/devel/migration/features.rst
@@ -10,3 +10,4 @@ Migration has plenty of features to support different use cases.
dirty-limit
vfio
virtio
+ qpl-compression
diff --git a/docs/devel/migration/qpl-compression.rst b/docs/devel/migration/qpl-compression.rst
new file mode 100644
index 0000000000..42c7969d30
--- /dev/null
+++ b/docs/devel/migration/qpl-compression.rst
@@ -0,0 +1,231 @@
+===============
+QPL Compression
+===============
+The Intel Query Processing Library (Intel ``QPL``) is an open-source library
+that provides compression and decompression features based on the deflate
+compression algorithm (RFC 1951).
+
+The ``QPL`` compression relies on the Intel In-Memory Analytics
+Accelerator (``IAA``) and Shared Virtual Memory (``SVM``) technology. These
+are new features supported since the 4th Gen Intel Xeon Scalable processors,
+codenamed Sapphire Rapids (``SPR``).
+
+For more information about ``QPL``, please refer to:
+
+https://intel.github.io/qpl/documentation/introduction_docs/introduction.html
+
+QPL Compression Framework
+=========================
+
+::
+
+ +----------------+ +------------------+
+ | MultiFD Service| |accel-config tool |
+ +-------+--------+ +--------+---------+
+ | |
+ | |
+ +-------+--------+ | Setup IAA
+ | QPL library | | Resources
+ +-------+---+----+ |
+ | | |
+ | +-------------+-------+
+ | Open IAA |
+ | Devices +-----+-----+
+ | |idxd driver|
+ | +-----+-----+
+ | |
+ | |
+ | +-----+-----+
+ +-----------+IAA Devices|
+ Submit jobs +-----------+
+ via enqcmd
+
+
+Intel In-Memory Analytics Accelerator (Intel IAA) Introduction
+================================================================
+
+Intel ``IAA`` is an accelerator designed to benefit in-memory databases
+and analytics workloads. There are three main areas where Intel ``IAA``
+can assist: analytics primitives (scan, filter, etc.), sparse data
+compression, and memory tiering.
+
+``IAA`` Manual Documentation:
+
+https://www.intel.com/content/www/us/en/content-details/721858/intel-in-memory-analytics-accelerator-architecture-specification
+
+IAA Device Enabling
+-------------------
+
+- For enabling ``IAA`` devices in the platform configuration, please refer to:
+
+https://www.intel.com/content/www/us/en/content-details/780887/intel-in-memory-analytics-accelerator-intel-iaa.html
+
+- The ``IAA`` device driver is the ``Intel Data Accelerator Driver (idxd)``;
+ Linux kernel version 5.18 or later is recommended.
+
+- Add the ``"intel_iommu=on,sm_on"`` parameter to the kernel command line
+ to enable the ``SVM`` feature.
+
+For an easy way to verify the ``IAA`` device driver and ``SVM``, refer to:
+
+https://github.com/intel/idxd-config/tree/stable/test
+
+IAA Device Management
+---------------------
+
+The number of ``IAA`` devices will vary depending on the Xeon product model.
+On a ``SPR`` server, there can be a maximum of 8 ``IAA`` devices, with up to
+4 devices per socket.
+
+By default, all ``IAA`` devices are disabled and need to be configured and
+enabled by users manually.
+
+Check the number of devices with the following command:
+
+.. code-block:: shell
+
+ # lspci -d 8086:0cfe
+ # 6a:02.0 System peripheral: Intel Corporation Device 0cfe
+ # 6f:02.0 System peripheral: Intel Corporation Device 0cfe
+ # 74:02.0 System peripheral: Intel Corporation Device 0cfe
+ # 79:02.0 System peripheral: Intel Corporation Device 0cfe
+ # e7:02.0 System peripheral: Intel Corporation Device 0cfe
+ # ec:02.0 System peripheral: Intel Corporation Device 0cfe
+ # f1:02.0 System peripheral: Intel Corporation Device 0cfe
+ # f6:02.0 System peripheral: Intel Corporation Device 0cfe
+
+IAA Device Configuration
+------------------------
+
+The ``accel-config`` tool is used to enable ``IAA`` devices and configure
+``IAA`` hardware resources (work queues and engines). One ``IAA`` device
+has 8 work queues and 8 processing engines; multiple engines can be assigned
+to a work queue via the ``group`` attribute.
+
+One example of configuring and enabling an ``IAA`` device:
+
+.. code-block:: shell
+
+ # accel-config config-engine iax1/engine1.0 -g 0
+ # accel-config config-engine iax1/engine1.1 -g 0
+ # accel-config config-engine iax1/engine1.2 -g 0
+ # accel-config config-engine iax1/engine1.3 -g 0
+ # accel-config config-engine iax1/engine1.4 -g 0
+ # accel-config config-engine iax1/engine1.5 -g 0
+ # accel-config config-engine iax1/engine1.6 -g 0
+ # accel-config config-engine iax1/engine1.7 -g 0
+ # accel-config config-wq iax1/wq1.0 -g 0 -s 128 -p 10 -b 1 -t 128 -m shared -y user -n app1 -d user
+ # accel-config enable-device iax1
+ # accel-config enable-wq iax1/wq1.0
+
+.. note::
+ IAX is an early name for IAA
+
+- The ``IAA`` device index here is 1; use the ``ls -lh /sys/bus/dsa/devices/iax*``
+ command to query the ``IAA`` device indexes.
+
+- 8 engines and 1 work queue are configured in group 0, so all compression jobs
+ submitted to this work queue can be processed by all engines at the same time.
+
+- Set work queue attributes including the work mode, work queue size and so on.
+
+- Enable the ``IAA1`` device and work queue 1.0
+
+.. note::
+ Set the work queue mode to shared, since the ``QPL`` library only
+ supports shared mode
+
+For more detailed configuration, please refer to:
+
+https://github.com/intel/idxd-config/tree/stable/Documentation/accfg
+
+IAA Resources Allocation For Migration
+--------------------------------------
+
+There are no ``IAA`` resource configuration parameters for migration, and
+the ``accel-config`` tool configuration cannot directly specify the ``IAA``
+resources used for migration.
+
+``QPL`` will use all work queues that are enabled and set to shared mode,
+and use all engines assigned to the work queues with shared mode.
+
+By default, ``QPL`` will only use local ``IAA`` devices for compression
+job processing. A local ``IAA`` device means the CPU submitting the job
+and the ``IAA`` device are on the same socket, so one CPU can submit
+jobs to up to 4 ``IAA`` devices.
+
+Shared Virtual Memory(SVM) Introduction
+=======================================
+
+``SVM`` is the ability of an accelerator I/O device to operate in the same
+virtual memory space as applications on the host processors. It also implies
+the ability to operate on pageable memory, avoiding the functional
+requirement to pin memory for DMA operations.
+
+When using ``SVM`` technology, users do not need to reserve memory for the
+``IAA`` device or pin memory. The ``IAA`` device can directly access data
+using the virtual addresses of the process.
+
+For more about ``SVM`` technology, please refer to:
+
+https://docs.kernel.org/next/x86/sva.html
+
+
+How To Use QPL Compression In Migration
+=======================================
+
+1 - Installation of ``accel-config`` tool and ``QPL`` library
+
+ - Install ``accel-config`` tool from https://github.com/intel/idxd-config
+ - Install ``QPL`` library from https://github.com/intel/qpl
+
+2 - Configure and enable ``IAA`` devices and work queues via ``accel-config``
+
+3 - Build ``QEMU`` with the ``--enable-qpl`` parameter
+
+ E.g. ``configure --target-list=x86_64-softmmu --enable-kvm --enable-qpl``
+
+4 - Start VMs with the ``sudo`` command or ``root`` permission
+
+ Use the ``sudo`` command or ``root`` privilege to start the source and
+ destination virtual machines, since the migration service needs
+ permission to access the ``IAA`` hardware resources.
+
+5 - Enable ``QPL`` compression during migration
+
+ Set ``migrate_set_parameter multifd-compression qpl`` when migrating.
+ ``QPL`` compression does not support configuring the compression level;
+ it supports only one compression level.
+
+The Difference Between QPL And ZLIB
+===================================
+
+Although both ``QPL`` and ``ZLIB`` are based on the deflate compression
+algorithm, and ``QPL`` supports the ``ZLIB`` header and trailer, ``QPL``
+is still not fully compatible with ``ZLIB`` compression in migration.
+
+``QPL`` only supports a 4K history buffer, while ``ZLIB`` uses 32K by
+default. ``QPL`` may therefore fail to decompress ``ZLIB``-compressed
+data correctly, and vice versa.
+
+``QPL`` does not support the ``Z_SYNC_FLUSH`` operation used in ``ZLIB``
+streaming compression. The current ``ZLIB`` implementation uses
+``Z_SYNC_FLUSH``: each ``multifd`` thread has a ``ZLIB`` streaming context,
+and all page compression and decompression go through this stream. ``QPL``
+cannot decompress such data, and vice versa.
+
+For an introduction to ``Z_SYNC_FLUSH``, please refer to:
+
+https://www.zlib.net/manual.html
+
+The Best Practices
+==================
+
+When the virtual machine's pages are not populated and the ``IAA`` device is
+used, I/O page faults occur, which can hurt performance due to a large
+number of ``IOTLB`` flush operations.
+
+Since the normal pages on the source side are all populated, ``IOTLB``
+flushes caused by I/O page faults do not occur there. On the destination
+side, a large number of normal pages need to be loaded, so it is recommended
+to add the ``-mem-prealloc`` parameter on the destination side.
--
2.39.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH v4 2/8] migration/multifd: add get_iov_count in the multifd method
2024-03-04 14:00 [PATCH v4 0/8] Live Migration With IAA Yuan Liu
2024-03-04 14:00 ` [PATCH v4 1/8] docs/migration: add qpl compression feature Yuan Liu
@ 2024-03-04 14:00 ` Yuan Liu
2024-03-05 20:24 ` Fabiano Rosas
2024-03-04 14:00 ` [PATCH v4 3/8] configure: add --enable-qpl build option Yuan Liu
` (5 subsequent siblings)
7 siblings, 1 reply; 17+ messages in thread
From: Yuan Liu @ 2024-03-04 14:00 UTC (permalink / raw)
To: peterx, farosas; +Cc: qemu-devel, hao.xiang, bryan.zhang, yuan1.liu, nanhai.zou
The new function get_iov_count is used to get the number of
IOVs required by a specified multifd method.
Different multifd methods may require different numbers of IOVs.
With zlib and zstd streaming compression, all pages are compressed
into one data block, so a single IOV is required to send this data
block. With no compression, each IOV is used to send one page, so the
number of IOVs required is the same as the number of pages.
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
---
migration/multifd-zlib.c | 18 +++++++++++++++++-
migration/multifd-zstd.c | 18 +++++++++++++++++-
migration/multifd.c | 24 +++++++++++++++++++++---
migration/multifd.h | 2 ++
4 files changed, 57 insertions(+), 5 deletions(-)
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index 012e3bdea1..35187f2aff 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -313,13 +313,29 @@ static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
return 0;
}
+/**
+ * zlib_get_iov_count: get the count of IOVs
+ *
+ * For zlib streaming compression, all pages will be compressed into a data
+ * block, and an IOV is requested for sending this block.
+ *
+ * Returns the count of the IOVs
+ *
+ * @page_count: Indicate the maximum count of pages processed by multifd
+ */
+static uint32_t zlib_get_iov_count(uint32_t page_count)
+{
+ return 1;
+}
+
static MultiFDMethods multifd_zlib_ops = {
.send_setup = zlib_send_setup,
.send_cleanup = zlib_send_cleanup,
.send_prepare = zlib_send_prepare,
.recv_setup = zlib_recv_setup,
.recv_cleanup = zlib_recv_cleanup,
- .recv_pages = zlib_recv_pages
+ .recv_pages = zlib_recv_pages,
+ .get_iov_count = zlib_get_iov_count
};
static void multifd_zlib_register(void)
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index dc8fe43e94..25ed1add2a 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -304,13 +304,29 @@ static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
return 0;
}
+/**
+ * zstd_get_iov_count: get the count of IOVs
+ *
+ * For zstd streaming compression, all pages will be compressed into a data
+ * block, and an IOV is requested for sending this block.
+ *
+ * Returns the count of the IOVs
+ *
+ * @page_count: Indicate the maximum count of pages processed by multifd
+ */
+static uint32_t zstd_get_iov_count(uint32_t page_count)
+{
+ return 1;
+}
+
static MultiFDMethods multifd_zstd_ops = {
.send_setup = zstd_send_setup,
.send_cleanup = zstd_send_cleanup,
.send_prepare = zstd_send_prepare,
.recv_setup = zstd_recv_setup,
.recv_cleanup = zstd_recv_cleanup,
- .recv_pages = zstd_recv_pages
+ .recv_pages = zstd_recv_pages,
+ .get_iov_count = zstd_get_iov_count
};
static void multifd_zstd_register(void)
diff --git a/migration/multifd.c b/migration/multifd.c
index adfe8c9a0a..787402247e 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -209,13 +209,29 @@ static int nocomp_recv_pages(MultiFDRecvParams *p, Error **errp)
return qio_channel_readv_all(p->c, p->iov, p->normal_num, errp);
}
+/**
+ * nocomp_get_iov_count: get the count of IOVs
+ *
+ * For no compression, the count of IOVs required is the same as the count of
+ * pages
+ *
+ * Returns the count of the IOVs
+ *
+ * @page_count: Indicate the maximum count of pages processed by multifd
+ */
+static uint32_t nocomp_get_iov_count(uint32_t page_count)
+{
+ return page_count;
+}
+
static MultiFDMethods multifd_nocomp_ops = {
.send_setup = nocomp_send_setup,
.send_cleanup = nocomp_send_cleanup,
.send_prepare = nocomp_send_prepare,
.recv_setup = nocomp_recv_setup,
.recv_cleanup = nocomp_recv_cleanup,
- .recv_pages = nocomp_recv_pages
+ .recv_pages = nocomp_recv_pages,
+ .get_iov_count = nocomp_get_iov_count
};
static MultiFDMethods *multifd_ops[MULTIFD_COMPRESSION__MAX] = {
@@ -998,6 +1014,8 @@ bool multifd_send_setup(void)
Error *local_err = NULL;
int thread_count, ret = 0;
uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
+ /* We need one extra place for the packet header */
+ uint32_t iov_count = 1;
uint8_t i;
if (!migrate_multifd()) {
@@ -1012,6 +1030,7 @@ bool multifd_send_setup(void)
qemu_sem_init(&multifd_send_state->channels_ready, 0);
qatomic_set(&multifd_send_state->exiting, 0);
multifd_send_state->ops = multifd_ops[migrate_multifd_compression()];
+ iov_count += multifd_send_state->ops->get_iov_count(page_count);
for (i = 0; i < thread_count; i++) {
MultiFDSendParams *p = &multifd_send_state->params[i];
@@ -1026,8 +1045,7 @@ bool multifd_send_setup(void)
p->packet->magic = cpu_to_be32(MULTIFD_MAGIC);
p->packet->version = cpu_to_be32(MULTIFD_VERSION);
p->name = g_strdup_printf("multifdsend_%d", i);
- /* We need one extra place for the packet header */
- p->iov = g_new0(struct iovec, page_count + 1);
+ p->iov = g_new0(struct iovec, iov_count);
p->page_size = qemu_target_page_size();
p->page_count = page_count;
p->write_flags = 0;
diff --git a/migration/multifd.h b/migration/multifd.h
index 8a1cad0996..d82495c508 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -201,6 +201,8 @@ typedef struct {
void (*recv_cleanup)(MultiFDRecvParams *p);
/* Read all pages */
int (*recv_pages)(MultiFDRecvParams *p, Error **errp);
+ /* Get the count of required IOVs */
+ uint32_t (*get_iov_count)(uint32_t page_count);
} MultiFDMethods;
void multifd_register_ops(int method, MultiFDMethods *ops);
--
2.39.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH v4 3/8] configure: add --enable-qpl build option
2024-03-04 14:00 [PATCH v4 0/8] Live Migration With IAA Yuan Liu
2024-03-04 14:00 ` [PATCH v4 1/8] docs/migration: add qpl compression feature Yuan Liu
2024-03-04 14:00 ` [PATCH v4 2/8] migration/multifd: add get_iov_count in the multifd method Yuan Liu
@ 2024-03-04 14:00 ` Yuan Liu
2024-03-05 20:32 ` Fabiano Rosas
2024-03-04 14:00 ` [PATCH v4 4/8] migration/multifd: add qpl compression method Yuan Liu
` (4 subsequent siblings)
7 siblings, 1 reply; 17+ messages in thread
From: Yuan Liu @ 2024-03-04 14:00 UTC (permalink / raw)
To: peterx, farosas; +Cc: qemu-devel, hao.xiang, bryan.zhang, yuan1.liu, nanhai.zou
add --enable-qpl and --disable-qpl options to enable and disable
the QPL compression method for multifd migration.
The Query Processing Library (QPL) is an open-source library
that supports data compression and decompression features.
The QPL compression is based on the deflate compression algorithm
and uses Intel In-Memory Analytics Accelerator (IAA) hardware for
compression and decompression acceleration.
Please refer to the following for more information about QPL:
https://intel.github.io/qpl/documentation/introduction_docs/introduction.html
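For example, a possible build invocation (the target list is only an
example):

    ./configure --target-list=x86_64-softmmu --enable-kvm --enable-qpl
    make -j$(nproc)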
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
---
meson.build | 18 ++++++++++++++++++
meson_options.txt | 2 ++
scripts/meson-buildoptions.sh | 3 +++
3 files changed, 23 insertions(+)
diff --git a/meson.build b/meson.build
index c1dc83e4c0..2dea1e6834 100644
--- a/meson.build
+++ b/meson.build
@@ -1197,6 +1197,22 @@ if not get_option('zstd').auto() or have_block
required: get_option('zstd'),
method: 'pkg-config')
endif
+qpl = not_found
+if not get_option('qpl').auto()
+ libqpl = cc.find_library('qpl', required: false)
+ if not libqpl.found()
+ error('libqpl not found, please install it from ' +
+ 'https://intel.github.io/qpl/documentation/get_started_docs/installation.html')
+ endif
+ libaccel = cc.find_library('accel-config', required: false)
+ if not libaccel.found()
+ error('libaccel-config not found, please install it from ' +
+ 'https://github.com/intel/idxd-config')
+ endif
+ qpl = declare_dependency(dependencies: [libqpl, libaccel,
+ cc.find_library('dl', required: get_option('qpl'))],
+ link_args: ['-lstdc++'])
+endif
virgl = not_found
have_vhost_user_gpu = have_tools and host_os == 'linux' and pixman.found()
@@ -2298,6 +2314,7 @@ config_host_data.set('CONFIG_MALLOC_TRIM', has_malloc_trim)
config_host_data.set('CONFIG_STATX', has_statx)
config_host_data.set('CONFIG_STATX_MNT_ID', has_statx_mnt_id)
config_host_data.set('CONFIG_ZSTD', zstd.found())
+config_host_data.set('CONFIG_QPL', qpl.found())
config_host_data.set('CONFIG_FUSE', fuse.found())
config_host_data.set('CONFIG_FUSE_LSEEK', fuse_lseek.found())
config_host_data.set('CONFIG_SPICE_PROTOCOL', spice_protocol.found())
@@ -4438,6 +4455,7 @@ summary_info += {'snappy support': snappy}
summary_info += {'bzip2 support': libbzip2}
summary_info += {'lzfse support': liblzfse}
summary_info += {'zstd support': zstd}
+summary_info += {'Query Processing Library support': qpl}
summary_info += {'NUMA host support': numa}
summary_info += {'capstone': capstone}
summary_info += {'libpmem support': libpmem}
diff --git a/meson_options.txt b/meson_options.txt
index 0a99a059ec..06cd675572 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -259,6 +259,8 @@ option('xkbcommon', type : 'feature', value : 'auto',
description: 'xkbcommon support')
option('zstd', type : 'feature', value : 'auto',
description: 'zstd compression support')
+option('qpl', type : 'feature', value : 'auto',
+ description: 'Query Processing Library support')
option('fuse', type: 'feature', value: 'auto',
description: 'FUSE block device export')
option('fuse_lseek', type : 'feature', value : 'auto',
diff --git a/scripts/meson-buildoptions.sh b/scripts/meson-buildoptions.sh
index 680fa3f581..784f74fde9 100644
--- a/scripts/meson-buildoptions.sh
+++ b/scripts/meson-buildoptions.sh
@@ -222,6 +222,7 @@ meson_options_help() {
printf "%s\n" ' Xen PCI passthrough support'
printf "%s\n" ' xkbcommon xkbcommon support'
printf "%s\n" ' zstd zstd compression support'
+ printf "%s\n" ' qpl Query Processing Library support'
}
_meson_option_parse() {
case $1 in
@@ -562,6 +563,8 @@ _meson_option_parse() {
--disable-xkbcommon) printf "%s" -Dxkbcommon=disabled ;;
--enable-zstd) printf "%s" -Dzstd=enabled ;;
--disable-zstd) printf "%s" -Dzstd=disabled ;;
+ --enable-qpl) printf "%s" -Dqpl=enabled ;;
+ --disable-qpl) printf "%s" -Dqpl=disabled ;;
*) return 1 ;;
esac
}
--
2.39.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH v4 4/8] migration/multifd: add qpl compression method
2024-03-04 14:00 [PATCH v4 0/8] Live Migration With IAA Yuan Liu
` (2 preceding siblings ...)
2024-03-04 14:00 ` [PATCH v4 3/8] configure: add --enable-qpl build option Yuan Liu
@ 2024-03-04 14:00 ` Yuan Liu
2024-03-05 20:58 ` Fabiano Rosas
2024-03-04 14:00 ` [PATCH v4 5/8] migration/multifd: implement initialization of qpl compression Yuan Liu
` (3 subsequent siblings)
7 siblings, 1 reply; 17+ messages in thread
From: Yuan Liu @ 2024-03-04 14:00 UTC (permalink / raw)
To: peterx, farosas; +Cc: qemu-devel, hao.xiang, bryan.zhang, yuan1.liu, nanhai.zou
add the Query Processing Library (QPL) compression method
Although both qpl and zlib support deflate compression, qpl only
uses the In-Memory Analytics Accelerator (IAA) for compression and
decompression, and IAA is not compatible with the zlib stream used
in migration, so qpl is added as a new compression method.
How to enable qpl compression during migration:
migrate_set_parameter multifd-compression qpl
Since qpl supports only one compression level, no qpl compression
level parameter is added; users do not need to specify the qpl
compression level.
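For QMP users, the equivalent command would be along these lines
(a sample session, not part of this patch):

    { "execute": "migrate-set-parameters",
      "arguments": { "multifd-compression": "qpl" } }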
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
---
hw/core/qdev-properties-system.c | 2 +-
migration/meson.build | 1 +
migration/multifd-qpl.c | 158 +++++++++++++++++++++++++++++++
migration/multifd.h | 1 +
qapi/migration.json | 7 +-
5 files changed, 167 insertions(+), 2 deletions(-)
create mode 100644 migration/multifd-qpl.c
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index 1a396521d5..b4f0e5cbdb 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -658,7 +658,7 @@ const PropertyInfo qdev_prop_fdc_drive_type = {
const PropertyInfo qdev_prop_multifd_compression = {
.name = "MultiFDCompression",
.description = "multifd_compression values, "
- "none/zlib/zstd",
+ "none/zlib/zstd/qpl",
.enum_table = &MultiFDCompression_lookup,
.get = qdev_propinfo_get_enum,
.set = qdev_propinfo_set_enum,
diff --git a/migration/meson.build b/migration/meson.build
index 92b1cc4297..c155c2d781 100644
--- a/migration/meson.build
+++ b/migration/meson.build
@@ -40,6 +40,7 @@ if get_option('live_block_migration').allowed()
system_ss.add(files('block.c'))
endif
system_ss.add(when: zstd, if_true: files('multifd-zstd.c'))
+system_ss.add(when: qpl, if_true: files('multifd-qpl.c'))
specific_ss.add(when: 'CONFIG_SYSTEM_ONLY',
if_true: files('ram.c',
diff --git a/migration/multifd-qpl.c b/migration/multifd-qpl.c
new file mode 100644
index 0000000000..6b94e732ac
--- /dev/null
+++ b/migration/multifd-qpl.c
@@ -0,0 +1,158 @@
+/*
+ * Multifd qpl compression accelerator implementation
+ *
+ * Copyright (c) 2023 Intel Corporation
+ *
+ * Authors:
+ * Yuan Liu<yuan1.liu@intel.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/rcu.h"
+#include "exec/ramblock.h"
+#include "exec/target_page.h"
+#include "qapi/error.h"
+#include "migration.h"
+#include "trace.h"
+#include "options.h"
+#include "multifd.h"
+#include "qpl/qpl.h"
+
+struct qpl_data {
+ qpl_job **job_array;
+ /* the number of allocated jobs */
+ uint32_t job_num;
+ /* the size of data processed by a qpl job */
+ uint32_t data_size;
+ /* compressed data buffer */
+ uint8_t *zbuf;
+ /* the length of compressed data */
+ uint32_t *zbuf_hdr;
+};
+
+/**
+ * qpl_send_setup: setup send side
+ *
+ * Setup each channel with QPL compression.
+ *
+ * Returns 0 for success or -1 for error
+ *
+ * @p: Params for the channel that we are using
+ * @errp: pointer to an error
+ */
+static int qpl_send_setup(MultiFDSendParams *p, Error **errp)
+{
+ /* Implement in next patch */
+ return -1;
+}
+
+/**
+ * qpl_send_cleanup: cleanup send side
+ *
+ * Close the channel and return memory.
+ *
+ * @p: Params for the channel that we are using
+ * @errp: pointer to an error
+ */
+static void qpl_send_cleanup(MultiFDSendParams *p, Error **errp)
+{
+ /* Implement in next patch */
+}
+
+/**
+ * qpl_send_prepare: prepare data to be able to send
+ *
+ * Create a compressed buffer with all the pages that we are going to
+ * send.
+ *
+ * Returns 0 for success or -1 for error
+ *
+ * @p: Params for the channel that we are using
+ * @errp: pointer to an error
+ */
+static int qpl_send_prepare(MultiFDSendParams *p, Error **errp)
+{
+ /* Implement in next patch */
+ return -1;
+}
+
+/**
+ * qpl_recv_setup: setup receive side
+ *
+ * Create the compressed channel and buffer.
+ *
+ * Returns 0 for success or -1 for error
+ *
+ * @p: Params for the channel that we are using
+ * @errp: pointer to an error
+ */
+static int qpl_recv_setup(MultiFDRecvParams *p, Error **errp)
+{
+ /* Implement in next patch */
+ return -1;
+}
+
+/**
+ * qpl_recv_cleanup: cleanup receive side
+ *
+ * Close the channel and return memory.
+ *
+ * @p: Params for the channel that we are using
+ */
+static void qpl_recv_cleanup(MultiFDRecvParams *p)
+{
+ /* Implement in next patch */
+}
+
+/**
+ * qpl_recv_pages: read the data from the channel into actual pages
+ *
+ * Read the compressed buffer, and uncompress it into the actual
+ * pages.
+ *
+ * Returns 0 for success or -1 for error
+ *
+ * @p: Params for the channel that we are using
+ * @errp: pointer to an error
+ */
+static int qpl_recv_pages(MultiFDRecvParams *p, Error **errp)
+{
+ /* Implement in next patch */
+ return -1;
+}
+
+/**
+ * qpl_get_iov_count: get the count of IOVs
+ *
+ * For QPL compression, in addition to requesting the same number of IOVs
+ * as the page, it also requires an additional IOV to store all compressed
+ * data lengths.
+ *
+ * Returns the count of the IOVs
+ *
+ * @page_count: Indicate the maximum count of pages processed by multifd
+ */
+static uint32_t qpl_get_iov_count(uint32_t page_count)
+{
+ return page_count + 1;
+}
+
+static MultiFDMethods multifd_qpl_ops = {
+ .send_setup = qpl_send_setup,
+ .send_cleanup = qpl_send_cleanup,
+ .send_prepare = qpl_send_prepare,
+ .recv_setup = qpl_recv_setup,
+ .recv_cleanup = qpl_recv_cleanup,
+ .recv_pages = qpl_recv_pages,
+ .get_iov_count = qpl_get_iov_count
+};
+
+static void multifd_qpl_register(void)
+{
+ multifd_register_ops(MULTIFD_COMPRESSION_QPL, &multifd_qpl_ops);
+}
+
+migration_init(multifd_qpl_register);
diff --git a/migration/multifd.h b/migration/multifd.h
index d82495c508..0e9361df2a 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -33,6 +33,7 @@ bool multifd_queue_page(RAMBlock *block, ram_addr_t offset);
#define MULTIFD_FLAG_NOCOMP (0 << 1)
#define MULTIFD_FLAG_ZLIB (1 << 1)
#define MULTIFD_FLAG_ZSTD (2 << 1)
+#define MULTIFD_FLAG_QPL (4 << 1)
/* This value needs to be a multiple of qemu_target_page_size() */
#define MULTIFD_PACKET_SIZE (512 * 1024)
diff --git a/qapi/migration.json b/qapi/migration.json
index 5a565d9b8d..e48e3d7065 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -625,11 +625,16 @@
#
# @zstd: use zstd compression method.
#
+# @qpl: use qpl compression method. The Query Processing Library (qpl) is
+# based on the deflate compression algorithm and uses the Intel In-Memory
+# Analytics Accelerator (IAA) for hardware-accelerated (de)compression.
+#
# Since: 5.0
##
{ 'enum': 'MultiFDCompression',
'data': [ 'none', 'zlib',
- { 'name': 'zstd', 'if': 'CONFIG_ZSTD' } ] }
+ { 'name': 'zstd', 'if': 'CONFIG_ZSTD' },
+ { 'name': 'qpl', 'if': 'CONFIG_QPL' } ] }
##
# @MigMode:
--
2.39.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH v4 5/8] migration/multifd: implement initialization of qpl compression
2024-03-04 14:00 [PATCH v4 0/8] Live Migration With IAA Yuan Liu
` (3 preceding siblings ...)
2024-03-04 14:00 ` [PATCH v4 4/8] migration/multifd: add qpl compression method Yuan Liu
@ 2024-03-04 14:00 ` Yuan Liu
2024-03-04 14:00 ` [PATCH v4 6/8] migration/multifd: implement qpl compression and decompression Yuan Liu
` (2 subsequent siblings)
7 siblings, 0 replies; 17+ messages in thread
From: Yuan Liu @ 2024-03-04 14:00 UTC (permalink / raw)
To: peterx, farosas; +Cc: qemu-devel, hao.xiang, bryan.zhang, yuan1.liu, nanhai.zou
The qpl initialization includes memory allocation for compressed
data and the qpl job initialization.
The qpl initialization checks whether the In-Memory Analytics
Accelerator (IAA) hardware is available; if the platform does not
have IAA hardware, or the IAA hardware is not available, the QPL
compression initialization will fail.
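For reference, the availability check boils down to initializing a qpl job
on the hardware path; a minimal sketch using the same QPL calls as the
patch below (error reporting elided):

    uint32_t size = 0;
    qpl_job *job;

    if (qpl_get_job_size(qpl_path_hardware, &size) != QPL_STS_OK) {
        /* QPL itself is unusable on this host */
    }
    job = g_malloc0(size);
    /* returns a non-OK status when no enabled IAA shared work queue
     * can be found on the platform */
    if (qpl_init_job(qpl_path_hardware, job) != QPL_STS_OK) {
        g_free(job);
        /* fail the multifd setup */
    }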
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
---
migration/multifd-qpl.c | 128 ++++++++++++++++++++++++++++++++++++++--
1 file changed, 122 insertions(+), 6 deletions(-)
diff --git a/migration/multifd-qpl.c b/migration/multifd-qpl.c
index 6b94e732ac..f4db97ca01 100644
--- a/migration/multifd-qpl.c
+++ b/migration/multifd-qpl.c
@@ -33,6 +33,100 @@ struct qpl_data {
uint32_t *zbuf_hdr;
};
+static void free_zbuf(struct qpl_data *qpl)
+{
+ if (qpl->zbuf != NULL) {
+ munmap(qpl->zbuf, qpl->job_num * qpl->data_size);
+ qpl->zbuf = NULL;
+ }
+ if (qpl->zbuf_hdr != NULL) {
+ g_free(qpl->zbuf_hdr);
+ qpl->zbuf_hdr = NULL;
+ }
+}
+
+static int alloc_zbuf(struct qpl_data *qpl, uint8_t chan_id, Error **errp)
+{
+ int flags = MAP_PRIVATE | MAP_POPULATE | MAP_ANONYMOUS;
+ uint32_t size = qpl->job_num * qpl->data_size;
+ uint8_t *buf;
+
+ buf = (uint8_t *) mmap(NULL, size, PROT_READ | PROT_WRITE, flags, -1, 0);
+ if (buf == MAP_FAILED) {
+ error_setg(errp, "multifd: %u: alloc_zbuf failed, job num %u, size %u",
+ chan_id, qpl->job_num, qpl->data_size);
+ return -1;
+ }
+ qpl->zbuf = buf;
+ qpl->zbuf_hdr = g_new0(uint32_t, qpl->job_num);
+ return 0;
+}
+
+static void free_jobs(struct qpl_data *qpl)
+{
+ for (int i = 0; i < qpl->job_num; i++) {
+ qpl_fini_job(qpl->job_array[i]);
+ g_free(qpl->job_array[i]);
+ qpl->job_array[i] = NULL;
+ }
+ g_free(qpl->job_array);
+ qpl->job_array = NULL;
+}
+
+static int alloc_jobs(struct qpl_data *qpl, uint8_t chan_id, Error **errp)
+{
+ qpl_status status;
+ uint32_t job_size = 0;
+ qpl_job *job = NULL;
+ /* always use IAA hardware accelerator */
+ qpl_path_t path = qpl_path_hardware;
+
+ status = qpl_get_job_size(path, &job_size);
+ if (status != QPL_STS_OK) {
+ error_setg(errp, "multifd: %u: qpl_get_job_size failed with error %d",
+ chan_id, status);
+ return -1;
+ }
+ qpl->job_array = g_new0(qpl_job *, qpl->job_num);
+ for (int i = 0; i < qpl->job_num; i++) {
+ job = g_malloc0(job_size);
+ status = qpl_init_job(path, job);
+ if (status != QPL_STS_OK) {
+ error_setg(errp, "multifd: %u: qpl_init_job failed with error %d",
+ chan_id, status);
+ free_jobs(qpl);
+ return -1;
+ }
+ qpl->job_array[i] = job;
+ }
+ return 0;
+}
+
+static int init_qpl(struct qpl_data *qpl, uint32_t job_num, uint32_t data_size,
+ uint8_t chan_id, Error **errp)
+{
+ qpl->job_num = job_num;
+ qpl->data_size = data_size;
+ if (alloc_zbuf(qpl, chan_id, errp) != 0) {
+ return -1;
+ }
+ if (alloc_jobs(qpl, chan_id, errp) != 0) {
+ free_zbuf(qpl);
+ return -1;
+ }
+ return 0;
+}
+
+static void deinit_qpl(struct qpl_data *qpl)
+{
+ if (qpl != NULL) {
+ free_jobs(qpl);
+ free_zbuf(qpl);
+ qpl->job_num = 0;
+ qpl->data_size = 0;
+ }
+}
+
/**
* qpl_send_setup: setup send side
*
@@ -45,8 +139,15 @@ struct qpl_data {
*/
static int qpl_send_setup(MultiFDSendParams *p, Error **errp)
{
- /* Implement in next patch */
- return -1;
+ struct qpl_data *qpl;
+
+ qpl = g_new0(struct qpl_data, 1);
+ if (init_qpl(qpl, p->page_count, p->page_size, p->id, errp) != 0) {
+ g_free(qpl);
+ return -1;
+ }
+ p->data = qpl;
+ return 0;
}
/**
@@ -59,7 +160,11 @@ static int qpl_send_setup(MultiFDSendParams *p, Error **errp)
*/
static void qpl_send_cleanup(MultiFDSendParams *p, Error **errp)
{
- /* Implement in next patch */
+ struct qpl_data *qpl = p->data;
+
+ deinit_qpl(qpl);
+ g_free(p->data);
+ p->data = NULL;
}
/**
@@ -91,8 +196,15 @@ static int qpl_send_prepare(MultiFDSendParams *p, Error **errp)
*/
static int qpl_recv_setup(MultiFDRecvParams *p, Error **errp)
{
- /* Implement in next patch */
- return -1;
+ struct qpl_data *qpl;
+
+ qpl = g_new0(struct qpl_data, 1);
+ if (init_qpl(qpl, p->page_count, p->page_size, p->id, errp) != 0) {
+ g_free(qpl);
+ return -1;
+ }
+ p->data = qpl;
+ return 0;
}
/**
@@ -104,7 +216,11 @@ static int qpl_recv_setup(MultiFDRecvParams *p, Error **errp)
*/
static void qpl_recv_cleanup(MultiFDRecvParams *p)
{
- /* Implement in next patch */
+ struct qpl_data *qpl = p->data;
+
+ deinit_qpl(qpl);
+ g_free(p->data);
+ p->data = NULL;
}
/**
--
2.39.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH v4 6/8] migration/multifd: implement qpl compression and decompression
2024-03-04 14:00 [PATCH v4 0/8] Live Migration With IAA Yuan Liu
` (4 preceding siblings ...)
2024-03-04 14:00 ` [PATCH v4 5/8] migration/multifd: implement initialization of qpl compression Yuan Liu
@ 2024-03-04 14:00 ` Yuan Liu
2024-03-04 14:00 ` [PATCH v4 7/8] migration/multifd: fix zlib and zstd compression levels not working Yuan Liu
2024-03-04 14:00 ` [PATCH v4 8/8] tests/migration-test: add qpl compression test Yuan Liu
7 siblings, 0 replies; 17+ messages in thread
From: Yuan Liu @ 2024-03-04 14:00 UTC (permalink / raw)
To: peterx, farosas; +Cc: qemu-devel, hao.xiang, bryan.zhang, yuan1.liu, nanhai.zou
Each qpl job is used to (de)compress one normal page, and the jobs
can be processed independently by the IAA hardware. All qpl jobs
are submitted to the hardware first, and then the code waits for
all jobs to complete.
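In outline, the pattern implemented below looks like this (a sketch with
error handling and buffer bookkeeping omitted):

    /* queue every page on the IAA first ... */
    for (int i = 0; i < job_num; i++) {
        prepare_job(qpl->job_array[i], host + offset[i], page_size,
                    zbuf + i * page_size, page_size - 1, true);
        while (qpl_submit_job(qpl->job_array[i]) ==
               QPL_STS_QUEUES_ARE_BUSY_ERR) {
            /* device queues are full: retry the submission */
        }
    }
    /* ... then reap completions, so the jobs execute in parallel */
    for (int i = 0; i < job_num; i++) {
        if (qpl_wait_job(qpl->job_array[i]) != QPL_STS_OK) {
            /* MORE_OUTPUT_NEEDED or a real error, see run_comp_jobs() */
        }
    }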
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
---
migration/multifd-qpl.c | 219 +++++++++++++++++++++++++++++++++++++++-
1 file changed, 215 insertions(+), 4 deletions(-)
diff --git a/migration/multifd-qpl.c b/migration/multifd-qpl.c
index f4db97ca01..eb815ea3be 100644
--- a/migration/multifd-qpl.c
+++ b/migration/multifd-qpl.c
@@ -167,6 +167,112 @@ static void qpl_send_cleanup(MultiFDSendParams *p, Error **errp)
p->data = NULL;
}
+static inline void prepare_job(qpl_job *job, uint8_t *input, uint32_t input_len,
+ uint8_t *output, uint32_t output_len,
+ bool is_compression)
+{
+ job->op = is_compression ? qpl_op_compress : qpl_op_decompress;
+ job->next_in_ptr = input;
+ job->next_out_ptr = output;
+ job->available_in = input_len;
+ job->available_out = output_len;
+ job->flags = QPL_FLAG_FIRST | QPL_FLAG_LAST | QPL_FLAG_OMIT_VERIFY;
+ /* only supports one compression level */
+ job->level = 1;
+}
+
+/**
+ * set_raw_data_hdr: set the length of raw data
+ *
+ * If the length of the compressed output data is greater than or equal to
+ * the page size, then set the compressed data length to the data size and
+ * send raw data directly.
+ *
+ * @qpl: pointer to the qpl_data structure
+ * @index: the index of the compression job header
+ */
+static inline void set_raw_data_hdr(struct qpl_data *qpl, uint32_t index)
+{
+ assert(index < qpl->job_num);
+ qpl->zbuf_hdr[index] = cpu_to_be32(qpl->data_size);
+}
+
+/**
+ * is_raw_data: check if the data is raw data
+ *
+ * The raw data length is always equal to data size, which is the
+ * size of one page.
+ *
+ * Returns true if the data is raw data, otherwise false
+ *
+ * @qpl: pointer to the qpl_data structure
+ * @index: the index of the decompressed job header
+ */
+static inline bool is_raw_data(struct qpl_data *qpl, uint32_t index)
+{
+ assert(index < qpl->job_num);
+ return qpl->zbuf_hdr[index] == qpl->data_size;
+}
+
+static int run_comp_jobs(MultiFDSendParams *p, Error **errp)
+{
+ qpl_status status;
+ struct qpl_data *qpl = p->data;
+ MultiFDPages_t *pages = p->pages;
+ uint32_t job_num = pages->num;
+ qpl_job *job = NULL;
+ uint32_t off = 0;
+
+ assert(job_num <= qpl->job_num);
+ /* submit all compression jobs */
+ for (int i = 0; i < job_num; i++) {
+ job = qpl->job_array[i];
+ /* the compressed data size should be less than one page */
+ prepare_job(job, pages->block->host + pages->offset[i], qpl->data_size,
+ qpl->zbuf + off, qpl->data_size - 1, true);
+retry:
+ status = qpl_submit_job(job);
+ if (status == QPL_STS_OK) {
+ off += qpl->data_size;
+ } else if (status == QPL_STS_QUEUES_ARE_BUSY_ERR) {
+ goto retry;
+ } else {
+ error_setg(errp, "multifd %u: qpl_submit_job failed with error %d",
+ p->id, status);
+ return -1;
+ }
+ }
+
+ /* wait all jobs to complete */
+ for (int i = 0; i < job_num; i++) {
+ job = qpl->job_array[i];
+ status = qpl_wait_job(job);
+ if (status == QPL_STS_OK) {
+ qpl->zbuf_hdr[i] = cpu_to_be32(job->total_out);
+ p->iov[p->iovs_num].iov_len = job->total_out;
+ p->iov[p->iovs_num].iov_base = qpl->zbuf + (qpl->data_size * i);
+ p->next_packet_size += job->total_out;
+ } else if (status == QPL_STS_MORE_OUTPUT_NEEDED) {
+ /*
+ * The compression job did not fail; the output data
+ * size is larger than the provided buffer size. In this
+ * case, the raw page is sent directly to the destination.
+ */
+ set_raw_data_hdr(qpl, i);
+ p->iov[p->iovs_num].iov_len = qpl->data_size;
+ p->iov[p->iovs_num].iov_base = pages->block->host +
+ pages->offset[i];
+ p->next_packet_size += qpl->data_size;
+ } else {
+ error_setg(errp, "multifd %u: qpl_wait_job failed with error %d",
+ p->id, status);
+ return -1;
+ }
+ p->iovs_num++;
+ }
+ return 0;
+}
+
/**
* qpl_send_prepare: prepare data to be able to send
*
@@ -180,8 +286,25 @@ static void qpl_send_cleanup(MultiFDSendParams *p, Error **errp)
*/
static int qpl_send_prepare(MultiFDSendParams *p, Error **errp)
{
- /* Implement in next patch */
- return -1;
+ struct qpl_data *qpl = p->data;
+ uint32_t hdr_size = p->pages->num * sizeof(uint32_t);
+
+ multifd_send_prepare_header(p);
+
+ assert(p->pages->num <= qpl->job_num);
+ /* prepare the header that stores the lengths of all compressed data */
+ p->iov[1].iov_base = (uint8_t *) qpl->zbuf_hdr;
+ p->iov[1].iov_len = hdr_size;
+ p->iovs_num++;
+ p->next_packet_size += hdr_size;
+ p->flags |= MULTIFD_FLAG_QPL;
+
+ if (run_comp_jobs(p, errp) != 0) {
+ return -1;
+ }
+
+ multifd_send_fill_packet(p);
+ return 0;
}
/**
@@ -223,6 +346,60 @@ static void qpl_recv_cleanup(MultiFDRecvParams *p)
p->data = NULL;
}
+static int run_decomp_jobs(MultiFDRecvParams *p, Error **errp)
+{
+ qpl_status status;
+ qpl_job *job;
+ struct qpl_data *qpl = p->data;
+ uint32_t off = 0;
+ uint32_t job_num = p->normal_num;
+
+ assert(job_num <= qpl->job_num);
+ /* submit all decompression jobs */
+ for (int i = 0; i < job_num; i++) {
+ /* for the raw data, load it directly */
+ if (is_raw_data(qpl, i)) {
+ memcpy(p->host + p->normal[i], qpl->zbuf + off, qpl->data_size);
+ off += qpl->data_size;
+ continue;
+ }
+ job = qpl->job_array[i];
+ prepare_job(job, qpl->zbuf + off, qpl->zbuf_hdr[i],
+ p->host + p->normal[i], qpl->data_size, false);
+retry:
+ status = qpl_submit_job(job);
+ if (status == QPL_STS_OK) {
+ off += qpl->zbuf_hdr[i];
+ } else if (status == QPL_STS_QUEUES_ARE_BUSY_ERR) {
+ goto retry;
+ } else {
+ error_setg(errp, "multifd %u: qpl_submit_job failed with error %d",
+ p->id, status);
+ return -1;
+ }
+ }
+
+ /* wait all jobs to complete */
+ for (int i = 0; i < job_num; i++) {
+ if (is_raw_data(qpl, i)) {
+ continue;
+ }
+ job = qpl->job_array[i];
+ status = qpl_wait_job(job);
+ if (status != QPL_STS_OK) {
+ error_setg(errp, "multifd %u: qpl_wait_job failed with error %d",
+ p->id, status);
+ return -1;
+ }
+ if (job->total_out != qpl->data_size) {
+ error_setg(errp, "multifd %u: decompressed len %u, expected len %u",
+ p->id, job->total_out, qpl->data_size);
+ return -1;
+ }
+ }
+ return 0;
+}
+
/**
* qpl_recv_pages: read the data from the channel into actual pages
*
@@ -236,8 +413,42 @@ static void qpl_recv_cleanup(MultiFDRecvParams *p)
*/
static int qpl_recv_pages(MultiFDRecvParams *p, Error **errp)
{
- /* Implement in next patch */
- return -1;
+ struct qpl_data *qpl = p->data;
+ uint32_t in_size = p->next_packet_size;
+ uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
+ uint32_t hdr_len = p->normal_num * sizeof(uint32_t);
+ uint32_t data_len = 0;
+ int ret;
+
+ if (flags != MULTIFD_FLAG_QPL) {
+ error_setg(errp, "multifd %u: flags received %x flags expected %x",
+ p->id, flags, MULTIFD_FLAG_QPL);
+ return -1;
+ }
+ /* read compressed data lengths */
+ assert(hdr_len < in_size);
+ ret = qio_channel_read_all(p->c, (void *) qpl->zbuf_hdr, hdr_len, errp);
+ if (ret != 0) {
+ return ret;
+ }
+ assert(p->normal_num <= qpl->job_num);
+ for (int i = 0; i < p->normal_num; i++) {
+ qpl->zbuf_hdr[i] = be32_to_cpu(qpl->zbuf_hdr[i]);
+ data_len += qpl->zbuf_hdr[i];
+ assert(qpl->zbuf_hdr[i] <= qpl->data_size);
+ }
+
+ /* read compressed data */
+ assert(in_size == hdr_len + data_len);
+ ret = qio_channel_read_all(p->c, (void *) qpl->zbuf, data_len, errp);
+ if (ret != 0) {
+ return ret;
+ }
+
+ if (run_decomp_jobs(p, errp) != 0) {
+ return -1;
+ }
+ return 0;
}
/**
--
2.39.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH v4 7/8] migration/multifd: fix zlib and zstd compression levels not working
2024-03-04 14:00 [PATCH v4 0/8] Live Migration With IAA Yuan Liu
` (5 preceding siblings ...)
2024-03-04 14:00 ` [PATCH v4 6/8] migration/multifd: implement qpl compression and decompression Yuan Liu
@ 2024-03-04 14:00 ` Yuan Liu
2024-03-04 14:00 ` [PATCH v4 8/8] tests/migration-test: add qpl compression test Yuan Liu
7 siblings, 0 replies; 17+ messages in thread
From: Yuan Liu @ 2024-03-04 14:00 UTC (permalink / raw)
To: peterx, farosas; +Cc: qemu-devel, hao.xiang, bryan.zhang, yuan1.liu, nanhai.zou
Apply the zlib and zstd compression levels in the multifd parameter
testing and application paths, and add compression level tests.
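For example, after this fix a level set through HMP is actually applied on
the source side (a sample session; the exact output format may differ):

    (qemu) migrate_set_parameter multifd-zlib-level 5
    (qemu) info migrate_parameters
    ...
    multifd-zlib-level: 5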
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
Reported-by: Xiaohui Li <xiaohli@redhat.com>
---
migration/options.c | 12 ++++++++++++
tests/qtest/migration-test.c | 16 ++++++++++++++++
2 files changed, 28 insertions(+)
diff --git a/migration/options.c b/migration/options.c
index 3e3e0b93b4..1cd3cc7c33 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -1312,6 +1312,12 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
if (params->has_multifd_compression) {
dest->multifd_compression = params->multifd_compression;
}
+ if (params->has_multifd_zlib_level) {
+ dest->multifd_zlib_level = params->multifd_zlib_level;
+ }
+ if (params->has_multifd_zstd_level) {
+ dest->multifd_zstd_level = params->multifd_zstd_level;
+ }
if (params->has_xbzrle_cache_size) {
dest->xbzrle_cache_size = params->xbzrle_cache_size;
}
@@ -1447,6 +1453,12 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
if (params->has_multifd_compression) {
s->parameters.multifd_compression = params->multifd_compression;
}
+ if (params->has_multifd_zlib_level) {
+ s->parameters.multifd_zlib_level = params->multifd_zlib_level;
+ }
+ if (params->has_multifd_zstd_level) {
+ s->parameters.multifd_zstd_level = params->multifd_zstd_level;
+ }
if (params->has_xbzrle_cache_size) {
s->parameters.xbzrle_cache_size = params->xbzrle_cache_size;
xbzrle_cache_resize(params->xbzrle_cache_size, errp);
diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index 8a5bb1752e..23d50fe599 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -2621,10 +2621,24 @@ test_migrate_precopy_tcp_multifd_start(QTestState *from,
return test_migrate_precopy_tcp_multifd_start_common(from, to, "none");
}
+static void
+test_and_set_multifd_compression_level(QTestState *who, const char *param)
+{
+ /* The default compression level is 1, test a level other than 1 */
+ int level = 2;
+
+ migrate_set_parameter_int(who, param, level);
+ migrate_check_parameter_int(who, param, level);
+ /* only test compression level 1 during migration */
+ migrate_set_parameter_int(who, param, 1);
+}
+
static void *
test_migrate_precopy_tcp_multifd_zlib_start(QTestState *from,
QTestState *to)
{
+ /* the compression level is used only on the source side. */
+ test_and_set_multifd_compression_level(from, "multifd-zlib-level");
return test_migrate_precopy_tcp_multifd_start_common(from, to, "zlib");
}
@@ -2633,6 +2647,8 @@ static void *
test_migrate_precopy_tcp_multifd_zstd_start(QTestState *from,
QTestState *to)
{
+ /* the compression level is used only on the source side. */
+ test_and_set_multifd_compression_level(from, "multifd-zstd-level");
return test_migrate_precopy_tcp_multifd_start_common(from, to, "zstd");
}
#endif /* CONFIG_ZSTD */
--
2.39.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH v4 8/8] tests/migration-test: add qpl compression test
2024-03-04 14:00 [PATCH v4 0/8] Live Migration With IAA Yuan Liu
` (6 preceding siblings ...)
2024-03-04 14:00 ` [PATCH v4 7/8] migration/multifd: fix zlib and zstd compression levels not working Yuan Liu
@ 2024-03-04 14:00 ` Yuan Liu
7 siblings, 0 replies; 17+ messages in thread
From: Yuan Liu @ 2024-03-04 14:00 UTC (permalink / raw)
To: peterx, farosas; +Cc: qemu-devel, hao.xiang, bryan.zhang, yuan1.liu, nanhai.zou
Add qpl to the compression method tests for multifd migration.
Migration with qpl compression needs to access IAA hardware
resources, so please run "check-qtest" with sudo or root permission,
otherwise the migration test will fail.
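One way to run just this test from the build tree (assuming an x86_64
build; paths may differ):

    cd build
    sudo QTEST_QEMU_BINARY=./qemu-system-x86_64 \
        ./tests/qtest/migration-test -p /x86_64/migration/multifd/tcp/plain/qpl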
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
---
tests/qtest/migration-test.c | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index 23d50fe599..96842f9515 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -2653,6 +2653,15 @@ test_migrate_precopy_tcp_multifd_zstd_start(QTestState *from,
}
#endif /* CONFIG_ZSTD */
+#ifdef CONFIG_QPL
+static void *
+test_migrate_precopy_tcp_multifd_qpl_start(QTestState *from,
+ QTestState *to)
+{
+ return test_migrate_precopy_tcp_multifd_start_common(from, to, "qpl");
+}
+#endif /* CONFIG_QPL */
+
static void test_multifd_tcp_none(void)
{
MigrateCommon args = {
@@ -2688,6 +2697,17 @@ static void test_multifd_tcp_zstd(void)
}
#endif
+#ifdef CONFIG_QPL
+static void test_multifd_tcp_qpl(void)
+{
+ MigrateCommon args = {
+ .listen_uri = "defer",
+ .start_hook = test_migrate_precopy_tcp_multifd_qpl_start,
+ };
+ test_precopy_common(&args);
+}
+#endif
+
#ifdef CONFIG_GNUTLS
static void *
test_migrate_multifd_tcp_tls_psk_start_match(QTestState *from,
@@ -3574,6 +3594,10 @@ int main(int argc, char **argv)
migration_test_add("/migration/multifd/tcp/plain/zstd",
test_multifd_tcp_zstd);
#endif
+#ifdef CONFIG_QPL
+ migration_test_add("/migration/multifd/tcp/plain/qpl",
+ test_multifd_tcp_qpl);
+#endif
#ifdef CONFIG_GNUTLS
migration_test_add("/migration/multifd/tcp/tls/psk/match",
test_multifd_tcp_tls_psk_match);
--
2.39.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH v4 2/8] migration/multifd: add get_iov_count in the multifd method
2024-03-04 14:00 ` [PATCH v4 2/8] migration/multifd: add get_iov_count in the multifd method Yuan Liu
@ 2024-03-05 20:24 ` Fabiano Rosas
2024-03-06 1:16 ` Liu, Yuan1
0 siblings, 1 reply; 17+ messages in thread
From: Fabiano Rosas @ 2024-03-05 20:24 UTC (permalink / raw)
To: Yuan Liu, peterx
Cc: qemu-devel, hao.xiang, bryan.zhang, yuan1.liu, nanhai.zou
Yuan Liu <yuan1.liu@intel.com> writes:
> the new function get_iov_count is used to get the number of
> IOVs required by a specified multifd method
>
> Different multifd methods may require different numbers of IOVs.
> Based on streaming compression of zlib and zstd, all pages will be
> compressed to a data block, so an IOV is required to send this data
> block. For no compression, each IOV is used to send a page, so the
> number of IOVs required is the same as the number of pages.
Let's just move the responsibility of allocating p->iov to the client
code. You can move the allocation into send_setup() and the free into
send_cleanup().
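A minimal sketch of that suggestion for the no-compression path (not
the actual follow-up patch; error handling elided):

static int nocomp_send_setup(MultiFDSendParams *p, Error **errp)
{
    /* one IOV per page, plus one extra slot for the packet header */
    p->iov = g_new0(struct iovec, p->page_count + 1);
    return 0;
}

static void nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
{
    g_free(p->iov);
    p->iov = NULL;
}

Each compression method then sizes p->iov itself, and
multifd_send_setup() no longer needs a get_iov_count() hook.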
>
> Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
> Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
> ---
> migration/multifd-zlib.c | 18 +++++++++++++++++-
> migration/multifd-zstd.c | 18 +++++++++++++++++-
> migration/multifd.c | 24 +++++++++++++++++++++---
> migration/multifd.h | 2 ++
> 4 files changed, 57 insertions(+), 5 deletions(-)
>
> diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
> index 012e3bdea1..35187f2aff 100644
> --- a/migration/multifd-zlib.c
> +++ b/migration/multifd-zlib.c
> @@ -313,13 +313,29 @@ static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
> return 0;
> }
>
> +/**
> + * zlib_get_iov_count: get the count of IOVs
> + *
> + * For zlib streaming compression, all pages will be compressed into a data
> + * block, and an IOV is requested for sending this block.
> + *
> + * Returns the count of the IOVs
> + *
> + * @page_count: Indicate the maximum count of pages processed by multifd
> + */
> +static uint32_t zlib_get_iov_count(uint32_t page_count)
> +{
> + return 1;
> +}
> +
> static MultiFDMethods multifd_zlib_ops = {
> .send_setup = zlib_send_setup,
> .send_cleanup = zlib_send_cleanup,
> .send_prepare = zlib_send_prepare,
> .recv_setup = zlib_recv_setup,
> .recv_cleanup = zlib_recv_cleanup,
> - .recv_pages = zlib_recv_pages
> + .recv_pages = zlib_recv_pages,
> + .get_iov_count = zlib_get_iov_count
> };
>
> static void multifd_zlib_register(void)
> diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
> index dc8fe43e94..25ed1add2a 100644
> --- a/migration/multifd-zstd.c
> +++ b/migration/multifd-zstd.c
> @@ -304,13 +304,29 @@ static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
> return 0;
> }
>
> +/**
> + * zstd_get_iov_count: get the count of IOVs
> + *
> + * For zstd streaming compression, all pages will be compressed into a data
> + * block, and an IOV is requested for sending this block.
> + *
> + * Returns the count of the IOVs
> + *
> + * @page_count: Indicate the maximum count of pages processed by multifd
> + */
> +static uint32_t zstd_get_iov_count(uint32_t page_count)
> +{
> + return 1;
> +}
> +
> static MultiFDMethods multifd_zstd_ops = {
> .send_setup = zstd_send_setup,
> .send_cleanup = zstd_send_cleanup,
> .send_prepare = zstd_send_prepare,
> .recv_setup = zstd_recv_setup,
> .recv_cleanup = zstd_recv_cleanup,
> - .recv_pages = zstd_recv_pages
> + .recv_pages = zstd_recv_pages,
> + .get_iov_count = zstd_get_iov_count
> };
>
> static void multifd_zstd_register(void)
> diff --git a/migration/multifd.c b/migration/multifd.c
> index adfe8c9a0a..787402247e 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -209,13 +209,29 @@ static int nocomp_recv_pages(MultiFDRecvParams *p, Error **errp)
> return qio_channel_readv_all(p->c, p->iov, p->normal_num, errp);
> }
>
> +/**
> + * nocomp_get_iov_count: get the count of IOVs
> + *
> + * For no compression, the count of IOVs required is the same as the count of
> + * pages
> + *
> + * Returns the count of the IOVs
> + *
> + * @page_count: Indicate the maximum count of pages processed by multifd
> + */
> +static uint32_t nocomp_get_iov_count(uint32_t page_count)
> +{
> + return page_count;
> +}
> +
> static MultiFDMethods multifd_nocomp_ops = {
> .send_setup = nocomp_send_setup,
> .send_cleanup = nocomp_send_cleanup,
> .send_prepare = nocomp_send_prepare,
> .recv_setup = nocomp_recv_setup,
> .recv_cleanup = nocomp_recv_cleanup,
> - .recv_pages = nocomp_recv_pages
> + .recv_pages = nocomp_recv_pages,
> + .get_iov_count = nocomp_get_iov_count
> };
>
> static MultiFDMethods *multifd_ops[MULTIFD_COMPRESSION__MAX] = {
> @@ -998,6 +1014,8 @@ bool multifd_send_setup(void)
> Error *local_err = NULL;
> int thread_count, ret = 0;
> uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
> + /* We need one extra place for the packet header */
> + uint32_t iov_count = 1;
> uint8_t i;
>
> if (!migrate_multifd()) {
> @@ -1012,6 +1030,7 @@ bool multifd_send_setup(void)
> qemu_sem_init(&multifd_send_state->channels_ready, 0);
> qatomic_set(&multifd_send_state->exiting, 0);
> multifd_send_state->ops = multifd_ops[migrate_multifd_compression()];
> + iov_count += multifd_send_state->ops->get_iov_count(page_count);
>
> for (i = 0; i < thread_count; i++) {
> MultiFDSendParams *p = &multifd_send_state->params[i];
> @@ -1026,8 +1045,7 @@ bool multifd_send_setup(void)
> p->packet->magic = cpu_to_be32(MULTIFD_MAGIC);
> p->packet->version = cpu_to_be32(MULTIFD_VERSION);
> p->name = g_strdup_printf("multifdsend_%d", i);
> - /* We need one extra place for the packet header */
> - p->iov = g_new0(struct iovec, page_count + 1);
> + p->iov = g_new0(struct iovec, iov_count);
> p->page_size = qemu_target_page_size();
> p->page_count = page_count;
> p->write_flags = 0;
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 8a1cad0996..d82495c508 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -201,6 +201,8 @@ typedef struct {
> void (*recv_cleanup)(MultiFDRecvParams *p);
> /* Read all pages */
> int (*recv_pages)(MultiFDRecvParams *p, Error **errp);
> + /* Get the count of required IOVs */
> + uint32_t (*get_iov_count)(uint32_t page_count);
> } MultiFDMethods;
>
> void multifd_register_ops(int method, MultiFDMethods *ops);
* Re: [PATCH v4 3/8] configure: add --enable-qpl build option
2024-03-04 14:00 ` [PATCH v4 3/8] configure: add --enable-qpl build option Yuan Liu
@ 2024-03-05 20:32 ` Fabiano Rosas
2024-03-06 2:20 ` Liu, Yuan1
0 siblings, 1 reply; 17+ messages in thread
From: Fabiano Rosas @ 2024-03-05 20:32 UTC (permalink / raw)
To: Yuan Liu, peterx
Cc: qemu-devel, hao.xiang, bryan.zhang, yuan1.liu, nanhai.zou
Yuan Liu <yuan1.liu@intel.com> writes:
> add --enable-qpl and --disable-qpl options to enable and disable
> the QPL compression method for multifd migration.
>
> the Query Processing Library (QPL) is an open-source library
> that supports data compression and decompression features.
>
> The QPL compression is based on the deflate compression algorithm
> and use Intel In-Memory Analytics Accelerator(IAA) hardware for
> compression and decompression acceleration.
>
> Please refer to the following for more information about QPL
> https://intel.github.io/qpl/documentation/introduction_docs/introduction.html
>
> Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
> Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
> ---
> meson.build | 18 ++++++++++++++++++
> meson_options.txt | 2 ++
> scripts/meson-buildoptions.sh | 3 +++
> 3 files changed, 23 insertions(+)
>
> diff --git a/meson.build b/meson.build
> index c1dc83e4c0..2dea1e6834 100644
> --- a/meson.build
> +++ b/meson.build
> @@ -1197,6 +1197,22 @@ if not get_option('zstd').auto() or have_block
> required: get_option('zstd'),
> method: 'pkg-config')
> endif
> +qpl = not_found
> +if not get_option('qpl').auto()
> + libqpl = cc.find_library('qpl', required: false)
> + if not libqpl.found()
> + error('libqpl not found, please install it from ' +
> + 'https://intel.github.io/qpl/documentation/get_started_docs/installation.html')
> + endif
> + libaccel = cc.find_library('accel-config', required: false)
> + if not libaccel.found()
> + error('libaccel-config not found, please install it from ' +
> + 'https://github.com/intel/idxd-config')
accel-config seems to be packaged by many distros, I'm not sure we need
to reference the repository here.
https://repology.org/project/accel-config/versions
> + endif
> + qpl = declare_dependency(dependencies: [libqpl, libaccel,
> + cc.find_library('dl', required: get_option('qpl'))],
> + link_args: ['-lstdc++'])
> +endif
> virgl = not_found
>
> have_vhost_user_gpu = have_tools and host_os == 'linux' and pixman.found()
> @@ -2298,6 +2314,7 @@ config_host_data.set('CONFIG_MALLOC_TRIM', has_malloc_trim)
> config_host_data.set('CONFIG_STATX', has_statx)
> config_host_data.set('CONFIG_STATX_MNT_ID', has_statx_mnt_id)
> config_host_data.set('CONFIG_ZSTD', zstd.found())
> +config_host_data.set('CONFIG_QPL', qpl.found())
> config_host_data.set('CONFIG_FUSE', fuse.found())
> config_host_data.set('CONFIG_FUSE_LSEEK', fuse_lseek.found())
> config_host_data.set('CONFIG_SPICE_PROTOCOL', spice_protocol.found())
> @@ -4438,6 +4455,7 @@ summary_info += {'snappy support': snappy}
> summary_info += {'bzip2 support': libbzip2}
> summary_info += {'lzfse support': liblzfse}
> summary_info += {'zstd support': zstd}
> +summary_info += {'Query Processing Library support': qpl}
> summary_info += {'NUMA host support': numa}
> summary_info += {'capstone': capstone}
> summary_info += {'libpmem support': libpmem}
> diff --git a/meson_options.txt b/meson_options.txt
> index 0a99a059ec..06cd675572 100644
> --- a/meson_options.txt
> +++ b/meson_options.txt
> @@ -259,6 +259,8 @@ option('xkbcommon', type : 'feature', value : 'auto',
> description: 'xkbcommon support')
> option('zstd', type : 'feature', value : 'auto',
> description: 'zstd compression support')
> +option('qpl', type : 'feature', value : 'auto',
> + description: 'Query Processing Library support')
> option('fuse', type: 'feature', value: 'auto',
> description: 'FUSE block device export')
> option('fuse_lseek', type : 'feature', value : 'auto',
> diff --git a/scripts/meson-buildoptions.sh b/scripts/meson-buildoptions.sh
> index 680fa3f581..784f74fde9 100644
> --- a/scripts/meson-buildoptions.sh
> +++ b/scripts/meson-buildoptions.sh
> @@ -222,6 +222,7 @@ meson_options_help() {
> printf "%s\n" ' Xen PCI passthrough support'
> printf "%s\n" ' xkbcommon xkbcommon support'
> printf "%s\n" ' zstd zstd compression support'
> + printf "%s\n" ' qpl Query Processing Library support'
> }
> _meson_option_parse() {
> case $1 in
> @@ -562,6 +563,8 @@ _meson_option_parse() {
> --disable-xkbcommon) printf "%s" -Dxkbcommon=disabled ;;
> --enable-zstd) printf "%s" -Dzstd=enabled ;;
> --disable-zstd) printf "%s" -Dzstd=disabled ;;
> + --enable-qpl) printf "%s" -Dqpl=enabled ;;
> + --disable-qpl) printf "%s" -Dqpl=disabled ;;
> *) return 1 ;;
> esac
> }
* Re: [PATCH v4 4/8] migration/multifd: add qpl compression method
2024-03-04 14:00 ` [PATCH v4 4/8] migration/multifd: add qpl compression method Yuan Liu
@ 2024-03-05 20:58 ` Fabiano Rosas
2024-03-06 2:29 ` Liu, Yuan1
0 siblings, 1 reply; 17+ messages in thread
From: Fabiano Rosas @ 2024-03-05 20:58 UTC (permalink / raw)
To: Yuan Liu, peterx
Cc: qemu-devel, hao.xiang, bryan.zhang, yuan1.liu, nanhai.zou
Yuan Liu <yuan1.liu@intel.com> writes:
> add the Query Processing Library (QPL) compression method
>
> Although both qpl and zlib support deflate compression, qpl will
> only use the In-Memory Analytics Accelerator(IAA) for compression
> and decompression, and IAA is not compatible with the Zlib in
> migration, so qpl is used as a new compression method for migration.
>
> How to enable qpl compression during migration:
> migrate_set_parameter multifd-compression qpl
>
> The qpl only supports one compression level, there is no qpl
> compression level parameter added, users do not need to specify
> the qpl compression level.
>
> Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
> Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
> ---
> hw/core/qdev-properties-system.c | 2 +-
> migration/meson.build | 1 +
> migration/multifd-qpl.c | 158 +++++++++++++++++++++++++++++++
> migration/multifd.h | 1 +
> qapi/migration.json | 7 +-
> 5 files changed, 167 insertions(+), 2 deletions(-)
> create mode 100644 migration/multifd-qpl.c
>
> diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
> index 1a396521d5..b4f0e5cbdb 100644
> --- a/hw/core/qdev-properties-system.c
> +++ b/hw/core/qdev-properties-system.c
> @@ -658,7 +658,7 @@ const PropertyInfo qdev_prop_fdc_drive_type = {
> const PropertyInfo qdev_prop_multifd_compression = {
> .name = "MultiFDCompression",
> .description = "multifd_compression values, "
> - "none/zlib/zstd",
> + "none/zlib/zstd/qpl",
> .enum_table = &MultiFDCompression_lookup,
> .get = qdev_propinfo_get_enum,
> .set = qdev_propinfo_set_enum,
> diff --git a/migration/meson.build b/migration/meson.build
> index 92b1cc4297..c155c2d781 100644
> --- a/migration/meson.build
> +++ b/migration/meson.build
> @@ -40,6 +40,7 @@ if get_option('live_block_migration').allowed()
> system_ss.add(files('block.c'))
> endif
> system_ss.add(when: zstd, if_true: files('multifd-zstd.c'))
> +system_ss.add(when: qpl, if_true: files('multifd-qpl.c'))
>
> specific_ss.add(when: 'CONFIG_SYSTEM_ONLY',
> if_true: files('ram.c',
> diff --git a/migration/multifd-qpl.c b/migration/multifd-qpl.c
> new file mode 100644
> index 0000000000..6b94e732ac
> --- /dev/null
> +++ b/migration/multifd-qpl.c
> @@ -0,0 +1,158 @@
> +/*
> + * Multifd qpl compression accelerator implementation
> + *
> + * Copyright (c) 2023 Intel Corporation
> + *
> + * Authors:
> + * Yuan Liu<yuan1.liu@intel.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> + * See the COPYING file in the top-level directory.
> + */
> +
> +#include "qemu/osdep.h"
> +#include "qemu/rcu.h"
> +#include "exec/ramblock.h"
> +#include "exec/target_page.h"
> +#include "qapi/error.h"
> +#include "migration.h"
> +#include "trace.h"
> +#include "options.h"
> +#include "multifd.h"
> +#include "qpl/qpl.h"
I don't mind adding a skeleton upfront before adding the implementation,
but adding the headers here hurts the review process. Reviewers will
have to go digging through the next patches to be able to validate each
of these. It's better to include them along with their usage.
What I would do in this patch is maybe just add the new option, the
.json and meson changes and this file with just:
static void multifd_qpl_register(void)
{
/* noop */
}
Then in the next commit you can implement all the methods in one
go. That way, the docstrings come along with the implementation, which
also facilitates review.
> +
> +struct qpl_data {
typedef struct {} QplData/QPLData, following QEMU's coding style.
> + qpl_job **job_array;
> + /* the number of allocated jobs */
> + uint32_t job_num;
> + /* the size of data processed by a qpl job */
> + uint32_t data_size;
> + /* compressed data buffer */
> + uint8_t *zbuf;
> + /* the length of compressed data */
> + uint32_t *zbuf_hdr;
> +};
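In QEMU style, the declaration above would become something like:

typedef struct {
    /* the QPL hardware jobs */
    qpl_job **job_array;
    /* the number of allocated jobs */
    uint32_t job_num;
    /* the size of data processed by a qpl job */
    uint32_t data_size;
    /* compressed data buffer */
    uint8_t *zbuf;
    /* the length of compressed data */
    uint32_t *zbuf_hdr;
} QplData;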
> +
> +/**
> + * qpl_send_setup: setup send side
> + *
> + * Setup each channel with QPL compression.
> + *
> + * Returns 0 for success or -1 for error
> + *
> + * @p: Params for the channel that we are using
> + * @errp: pointer to an error
> + */
> +static int qpl_send_setup(MultiFDSendParams *p, Error **errp)
> +{
> + /* Implement in next patch */
> + return -1;
> +}
> +
> +/**
> + * qpl_send_cleanup: cleanup send side
> + *
> + * Close the channel and return memory.
> + *
> + * @p: Params for the channel that we are using
> + * @errp: pointer to an error
> + */
> +static void qpl_send_cleanup(MultiFDSendParams *p, Error **errp)
> +{
> + /* Implement in next patch */
> +}
> +
> +/**
> + * qpl_send_prepare: prepare data to be able to send
> + *
> + * Create a compressed buffer with all the pages that we are going to
> + * send.
> + *
> + * Returns 0 for success or -1 for error
> + *
> + * @p: Params for the channel that we are using
> + * @errp: pointer to an error
> + */
> +static int qpl_send_prepare(MultiFDSendParams *p, Error **errp)
> +{
> + /* Implement in next patch */
> + return -1;
> +}
> +
> +/**
> + * qpl_recv_setup: setup receive side
> + *
> + * Create the compressed channel and buffer.
> + *
> + * Returns 0 for success or -1 for error
> + *
> + * @p: Params for the channel that we are using
> + * @errp: pointer to an error
> + */
> +static int qpl_recv_setup(MultiFDRecvParams *p, Error **errp)
> +{
> + /* Implement in next patch */
> + return -1;
> +}
> +
> +/**
> + * qpl_recv_cleanup: setup receive side
> + *
> + * Close the channel and return memory.
> + *
> + * @p: Params for the channel that we are using
> + */
> +static void qpl_recv_cleanup(MultiFDRecvParams *p)
> +{
> + /* Implement in next patch */
> +}
> +
> +/**
> + * qpl_recv_pages: read the data from the channel into actual pages
> + *
> + * Read the compressed buffer, and uncompress it into the actual
> + * pages.
> + *
> + * Returns 0 for success or -1 for error
> + *
> + * @p: Params for the channel that we are using
> + * @errp: pointer to an error
> + */
> +static int qpl_recv_pages(MultiFDRecvParams *p, Error **errp)
> +{
> + /* Implement in next patch */
> + return -1;
> +}
> +
> +/**
> + * qpl_get_iov_count: get the count of IOVs
> + *
> + * For QPL compression, in addition to requesting the same number of IOVs
> + * as the page, it also requires an additional IOV to store all compressed
> + * data lengths.
> + *
> + * Returns the count of the IOVs
> + *
> + * @page_count: Indicate the maximum count of pages processed by multifd
> + */
> +static uint32_t qpl_get_iov_count(uint32_t page_count)
> +{
> + return page_count + 1;
> +}
> +
> +static MultiFDMethods multifd_qpl_ops = {
> + .send_setup = qpl_send_setup,
> + .send_cleanup = qpl_send_cleanup,
> + .send_prepare = qpl_send_prepare,
> + .recv_setup = qpl_recv_setup,
> + .recv_cleanup = qpl_recv_cleanup,
> + .recv_pages = qpl_recv_pages,
> + .get_iov_count = qpl_get_iov_count
> +};
> +
> +static void multifd_qpl_register(void)
> +{
> + multifd_register_ops(MULTIFD_COMPRESSION_QPL, &multifd_qpl_ops);
> +}
> +
> +migration_init(multifd_qpl_register);
> diff --git a/migration/multifd.h b/migration/multifd.h
> index d82495c508..0e9361df2a 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -33,6 +33,7 @@ bool multifd_queue_page(RAMBlock *block, ram_addr_t offset);
> #define MULTIFD_FLAG_NOCOMP (0 << 1)
> #define MULTIFD_FLAG_ZLIB (1 << 1)
> #define MULTIFD_FLAG_ZSTD (2 << 1)
> +#define MULTIFD_FLAG_QPL (4 << 1)
>
> /* This value needs to be a multiple of qemu_target_page_size() */
> #define MULTIFD_PACKET_SIZE (512 * 1024)
> diff --git a/qapi/migration.json b/qapi/migration.json
> index 5a565d9b8d..e48e3d7065 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -625,11 +625,16 @@
> #
> # @zstd: use zstd compression method.
> #
> +# @qpl: use qpl compression method. Query Processing Library(qpl) is based on
> +# the deflate compression algorithm and use the Intel In-Memory Analytics
> +# Accelerator(IAA) hardware accelerated compression and decompression.
Missing: (since 9.0)
> +#
> # Since: 5.0
> ##
> { 'enum': 'MultiFDCompression',
> 'data': [ 'none', 'zlib',
> - { 'name': 'zstd', 'if': 'CONFIG_ZSTD' } ] }
> + { 'name': 'zstd', 'if': 'CONFIG_ZSTD' },
> + { 'name': 'qpl', 'if': 'CONFIG_QPL' } ] }
>
> ##
> # @MigMode:
* RE: [PATCH v4 2/8] migration/multifd: add get_iov_count in the multifd method
2024-03-05 20:24 ` Fabiano Rosas
@ 2024-03-06 1:16 ` Liu, Yuan1
0 siblings, 0 replies; 17+ messages in thread
From: Liu, Yuan1 @ 2024-03-06 1:16 UTC (permalink / raw)
To: Fabiano Rosas, peterx@redhat.com
Cc: qemu-devel@nongnu.org, hao.xiang@bytedance.com,
bryan.zhang@bytedance.com, Zou, Nanhai
> -----Original Message-----
> From: Fabiano Rosas <farosas@suse.de>
> Sent: Wednesday, March 6, 2024 4:24 AM
> To: Liu, Yuan1 <yuan1.liu@intel.com>; peterx@redhat.com
> Cc: qemu-devel@nongnu.org; hao.xiang@bytedance.com;
> bryan.zhang@bytedance.com; Liu, Yuan1 <yuan1.liu@intel.com>; Zou, Nanhai
> <nanhai.zou@intel.com>
> Subject: Re: [PATCH v4 2/8] migration/multifd: add get_iov_count in the
> multifd method
>
> Yuan Liu <yuan1.liu@intel.com> writes:
>
> > the new function get_iov_count is used to get the number of
> > IOVs required by a specified multifd method
> >
> > Different multifd methods may require different numbers of IOVs.
> > Based on streaming compression of zlib and zstd, all pages will be
> > compressed to a data block, so an IOV is required to send this data
> > block. For no compression, each IOV is used to send a page, so the
> > number of IOVs required is the same as the number of pages.
>
> Let's just move the responsibility of allocating p->iov to the client
> code. You can move the allocation into send_setup() and the free into
> send_cleanup().
Yes, this is a good approach; I will implement it in the next version.
* RE: [PATCH v4 3/8] configure: add --enable-qpl build option
2024-03-05 20:32 ` Fabiano Rosas
@ 2024-03-06 2:20 ` Liu, Yuan1
2024-03-06 11:56 ` Fabiano Rosas
0 siblings, 1 reply; 17+ messages in thread
From: Liu, Yuan1 @ 2024-03-06 2:20 UTC (permalink / raw)
To: Fabiano Rosas, peterx@redhat.com
Cc: qemu-devel@nongnu.org, hao.xiang@bytedance.com,
bryan.zhang@bytedance.com, Zou, Nanhai
> -----Original Message-----
> From: Fabiano Rosas <farosas@suse.de>
> Sent: Wednesday, March 6, 2024 4:32 AM
> To: Liu, Yuan1 <yuan1.liu@intel.com>; peterx@redhat.com
> Cc: qemu-devel@nongnu.org; hao.xiang@bytedance.com;
> bryan.zhang@bytedance.com; Liu, Yuan1 <yuan1.liu@intel.com>; Zou, Nanhai
> <nanhai.zou@intel.com>
> Subject: Re: [PATCH v4 3/8] configure: add --enable-qpl build option
>
> Yuan Liu <yuan1.liu@intel.com> writes:
>
> > add --enable-qpl and --disable-qpl options to enable and disable
> > the QPL compression method for multifd migration.
> >
> > the Query Processing Library (QPL) is an open-source library
> > that supports data compression and decompression features.
> >
> > The QPL compression is based on the deflate compression algorithm
> > and use Intel In-Memory Analytics Accelerator(IAA) hardware for
> > compression and decompression acceleration.
> >
> > Please refer to the following for more information about QPL
> >
> https://intel.github.io/qpl/documentation/introduction_docs/introduction.h
> tml
> >
> > Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
> > Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
> > ---
> > meson.build | 18 ++++++++++++++++++
> > meson_options.txt | 2 ++
> > scripts/meson-buildoptions.sh | 3 +++
> > 3 files changed, 23 insertions(+)
> >
> > diff --git a/meson.build b/meson.build
> > index c1dc83e4c0..2dea1e6834 100644
> > --- a/meson.build
> > +++ b/meson.build
> > @@ -1197,6 +1197,22 @@ if not get_option('zstd').auto() or have_block
> > required: get_option('zstd'),
> > method: 'pkg-config')
> > endif
> > +qpl = not_found
> > +if not get_option('qpl').auto()
> > + libqpl = cc.find_library('qpl', required: false)
> > + if not libqpl.found()
> > + error('libqpl not found, please install it from ' +
> > +
> 'https://intel.github.io/qpl/documentation/get_started_docs/installation.h
> tml')
> > + endif
> > + libaccel = cc.find_library('accel-config', required: false)
> > + if not libaccel.found()
> > + error('libaccel-config not found, please install it from ' +
> > + 'https://github.com/intel/idxd-config')
>
> accel-config seems to be packaged by many distros, I'm not sure we need
> to reference the repository here.
>
> https://repology.org/project/accel-config/versions
Yes, accel-config has been added to many distributions; I will use pkg-config to
detect libaccel and its version (at least v4.0).

I have a question: I didn't find an accel-config installation package at
https://repology.org/project/accel-config/versions. Does using this link also
require the user to build an accel-config package and then install it?

It is easy to install accel-config from a package, but I didn't find a repo
that provides accel-config installation packages for most distributions.

Would it be OK to first check whether accel-config is available through
pkg-config, and if it is not, prompt the user to install it from
https://github.com/intel/idxd-config?
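A sketch of that check, following the zstd pattern already used in
meson.build (the pkg-config module name 'libaccel-config' and the
version floor are assumptions):

libaccel = dependency('libaccel-config', version: '>=4.0',
                      required: get_option('qpl'),
                      method: 'pkg-config')

With required: tied to the feature option, meson reports the missing
dependency itself, so no hand-written error() message is needed.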
* RE: [PATCH v4 4/8] migration/multifd: add qpl compression method
2024-03-05 20:58 ` Fabiano Rosas
@ 2024-03-06 2:29 ` Liu, Yuan1
0 siblings, 0 replies; 17+ messages in thread
From: Liu, Yuan1 @ 2024-03-06 2:29 UTC (permalink / raw)
To: Fabiano Rosas, peterx@redhat.com
Cc: qemu-devel@nongnu.org, hao.xiang@bytedance.com,
bryan.zhang@bytedance.com, Zou, Nanhai
> -----Original Message-----
> From: Fabiano Rosas <farosas@suse.de>
> Sent: Wednesday, March 6, 2024 4:58 AM
> To: Liu, Yuan1 <yuan1.liu@intel.com>; peterx@redhat.com
> Cc: qemu-devel@nongnu.org; hao.xiang@bytedance.com;
> bryan.zhang@bytedance.com; Liu, Yuan1 <yuan1.liu@intel.com>; Zou, Nanhai
> <nanhai.zou@intel.com>
> Subject: Re: [PATCH v4 4/8] migration/multifd: add qpl compression method
>
> Yuan Liu <yuan1.liu@intel.com> writes:
>
> > add the Query Processing Library (QPL) compression method
> >
> > Although both qpl and zlib support deflate compression, qpl will
> > only use the In-Memory Analytics Accelerator(IAA) for compression
> > and decompression, and IAA is not compatible with the Zlib in
> > migration, so qpl is used as a new compression method for migration.
> >
> > How to enable qpl compression during migration:
> > migrate_set_parameter multifd-compression qpl
> >
> > The qpl only supports one compression level, there is no qpl
> > compression level parameter added, users do not need to specify
> > the qpl compression level.
> >
> > Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
> > Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
> > ---
> > hw/core/qdev-properties-system.c | 2 +-
> > migration/meson.build | 1 +
> > migration/multifd-qpl.c | 158 +++++++++++++++++++++++++++++++
> > migration/multifd.h | 1 +
> > qapi/migration.json | 7 +-
> > 5 files changed, 167 insertions(+), 2 deletions(-)
> > create mode 100644 migration/multifd-qpl.c
> >
> > diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-
> system.c
> > index 1a396521d5..b4f0e5cbdb 100644
> > --- a/hw/core/qdev-properties-system.c
> > +++ b/hw/core/qdev-properties-system.c
> > @@ -658,7 +658,7 @@ const PropertyInfo qdev_prop_fdc_drive_type = {
> > const PropertyInfo qdev_prop_multifd_compression = {
> > .name = "MultiFDCompression",
> > .description = "multifd_compression values, "
> > - "none/zlib/zstd",
> > + "none/zlib/zstd/qpl",
> > .enum_table = &MultiFDCompression_lookup,
> > .get = qdev_propinfo_get_enum,
> > .set = qdev_propinfo_set_enum,
> > diff --git a/migration/meson.build b/migration/meson.build
> > index 92b1cc4297..c155c2d781 100644
> > --- a/migration/meson.build
> > +++ b/migration/meson.build
> > @@ -40,6 +40,7 @@ if get_option('live_block_migration').allowed()
> > system_ss.add(files('block.c'))
> > endif
> > system_ss.add(when: zstd, if_true: files('multifd-zstd.c'))
> > +system_ss.add(when: qpl, if_true: files('multifd-qpl.c'))
> >
> > specific_ss.add(when: 'CONFIG_SYSTEM_ONLY',
> > if_true: files('ram.c',
> > diff --git a/migration/multifd-qpl.c b/migration/multifd-qpl.c
> > new file mode 100644
> > index 0000000000..6b94e732ac
> > --- /dev/null
> > +++ b/migration/multifd-qpl.c
> > @@ -0,0 +1,158 @@
> > +/*
> > + * Multifd qpl compression accelerator implementation
> > + *
> > + * Copyright (c) 2023 Intel Corporation
> > + *
> > + * Authors:
> > + * Yuan Liu<yuan1.liu@intel.com>
> > + *
> > + * This work is licensed under the terms of the GNU GPL, version 2 or
> later.
> > + * See the COPYING file in the top-level directory.
> > + */
> > +
> > +#include "qemu/osdep.h"
> > +#include "qemu/rcu.h"
> > +#include "exec/ramblock.h"
> > +#include "exec/target_page.h"
> > +#include "qapi/error.h"
> > +#include "migration.h"
> > +#include "trace.h"
> > +#include "options.h"
> > +#include "multifd.h"
> > +#include "qpl/qpl.h"
>
> I don't mind adding a skeleton upfront before adding the implementation,
> but adding the headers here hurts the review process. Reviewers will
> have to go digging through the next patches to be able to validate each
> of these. It's better to include them along with their usage.
>
> What I would do in this patch is maybe just add the new option, the
> .json and meson changes and this file with just:
>
> static void multifd_qpl_register(void)
> {
> /* noop */
> }
>
> Then in the next commit you can implement all the methods in one
> go. That way, the docstrings come along with the implementation, which
> also facilitates review.
Thanks for the guidance, I will implement it in the next version.
> > +
> > +struct qpl_data {
>
> typedef struct {} QplData/QPLData, following QEMU's coding style.
I will fix it in the next version.
> > diff --git a/qapi/migration.json b/qapi/migration.json
> > index 5a565d9b8d..e48e3d7065 100644
> > --- a/qapi/migration.json
> > +++ b/qapi/migration.json
> > @@ -625,11 +625,16 @@
> > #
> > # @zstd: use zstd compression method.
> > #
> > +# @qpl: use qpl compression method. Query Processing Library(qpl) is
> based on
> > +# the deflate compression algorithm and use the Intel In-Memory
> Analytics
> > +# Accelerator(IAA) hardware accelerated compression and
> decompression.
>
> Missing: (since 9.0)
Sure, I will add it in the next version.
* RE: [PATCH v4 3/8] configure: add --enable-qpl build option
2024-03-06 2:20 ` Liu, Yuan1
@ 2024-03-06 11:56 ` Fabiano Rosas
2024-03-07 6:45 ` Liu, Yuan1
0 siblings, 1 reply; 17+ messages in thread
From: Fabiano Rosas @ 2024-03-06 11:56 UTC (permalink / raw)
To: Liu, Yuan1, peterx@redhat.com
Cc: qemu-devel@nongnu.org, hao.xiang@bytedance.com,
bryan.zhang@bytedance.com, Zou, Nanhai
"Liu, Yuan1" <yuan1.liu@intel.com> writes:
>> -----Original Message-----
>> From: Fabiano Rosas <farosas@suse.de>
>> Sent: Wednesday, March 6, 2024 4:32 AM
>> To: Liu, Yuan1 <yuan1.liu@intel.com>; peterx@redhat.com
>> Cc: qemu-devel@nongnu.org; hao.xiang@bytedance.com;
>> bryan.zhang@bytedance.com; Liu, Yuan1 <yuan1.liu@intel.com>; Zou, Nanhai
>> <nanhai.zou@intel.com>
>> Subject: Re: [PATCH v4 3/8] configure: add --enable-qpl build option
>>
>> > + libaccel = cc.find_library('accel-config', required: false)
>> > + if not libaccel.found()
>> > + error('libaccel-config not found, please install it from ' +
>> > + 'https://github.com/intel/idxd-config')
>>
>> accel-config seems to be packaged by many distros, I'm not sure we need
>> to reference the repository here.
>>
>> https://repology.org/project/accel-config/versions
>
> Yes, accel-config has been added to many distributions, I will use pkgconfig to
> detect the libaccel and the version(at least v4.0).
>
> I have a question, I didn't find accel-config installation package from
> https://repology.org/project/accel-config/versions. Does using this link also
> require the user to build an accel-config package, and then install it?
That is just an aggregated list of distros and the version of the
package they provide in their repos. I'm just pointing out that there
seems to be a packaged accel-config for most distros already, which
means we can simply say "install accel-config" and users should be able
to use their distro's package manager.
>
> It is easy to install accel-config using the installation package, but I didn't
> find a repo that provides accel-config installation packages for most distributions.
>
> First check accel-config is available through pktconfig, and if it is not available,
> prompts users to install it from https://github.com/intel/idxd-config, is it OK?
There's no need; just check whether it's available and suggest that the
user install it. We already have the link in the docs.
* RE: [PATCH v4 3/8] configure: add --enable-qpl build option
2024-03-06 11:56 ` Fabiano Rosas
@ 2024-03-07 6:45 ` Liu, Yuan1
0 siblings, 0 replies; 17+ messages in thread
From: Liu, Yuan1 @ 2024-03-07 6:45 UTC (permalink / raw)
To: Fabiano Rosas, peterx@redhat.com
Cc: qemu-devel@nongnu.org, hao.xiang@bytedance.com,
bryan.zhang@bytedance.com, Zou, Nanhai
> -----Original Message-----
> From: Fabiano Rosas <farosas@suse.de>
> Sent: Wednesday, March 6, 2024 7:56 PM
> To: Liu, Yuan1 <yuan1.liu@intel.com>; peterx@redhat.com
> Cc: qemu-devel@nongnu.org; hao.xiang@bytedance.com;
> bryan.zhang@bytedance.com; Zou, Nanhai <nanhai.zou@intel.com>
> Subject: RE: [PATCH v4 3/8] configure: add --enable-qpl build option
>
> "Liu, Yuan1" <yuan1.liu@intel.com> writes:
>
> >> -----Original Message-----
> >> From: Fabiano Rosas <farosas@suse.de>
> >> Sent: Wednesday, March 6, 2024 4:32 AM
> >> To: Liu, Yuan1 <yuan1.liu@intel.com>; peterx@redhat.com
> >> Cc: qemu-devel@nongnu.org; hao.xiang@bytedance.com;
> >> bryan.zhang@bytedance.com; Liu, Yuan1 <yuan1.liu@intel.com>; Zou,
> Nanhai
> >> <nanhai.zou@intel.com>
> >> Subject: Re: [PATCH v4 3/8] configure: add --enable-qpl build option
> >>
> >> > + libaccel = cc.find_library('accel-config', required: false)
> >> > + if not libaccel.found()
> >> > + error('libaccel-config not found, please install it from ' +
> >> > + 'https://github.com/intel/idxd-config')
> >>
> >> accel-config seems to be packaged by many distros, I'm not sure we need
> >> to reference the repository here.
> >>
> >> https://repology.org/project/accel-config/versions
> >
> > Yes, accel-config has been added to many distributions, I will use
> pkgconfig to
> > detect the libaccel and the version(at least v4.0).
> >
> > I have a question, I didn't find accel-config installation package from
> > https://repology.org/project/accel-config/versions. Does using this link
> also
> > require the user to build an accel-config package, and then install it?
>
> That is just an aggregated list of distros and the version of the
> package they provide in their repos. So I'm just pointing out to you
> that there seems to be a packaged accel-config for most distros
> already. Which means we just want to say "install accel-config" and
> users should be able to use their distro's package manager.
>
> >
> > It is easy to install accel-config using the installation package, but I
> didn't
> > find a repo that provides accel-config installation packages for most
> distributions.
> >
> > First check accel-config is available through pktconfig, and if it is
> not available,
> > prompts users to install it from https://github.com/intel/idxd-config,
> is it OK?
>
> There's no need, just check if its available and suggest the user to
> install it. We already have the link in the docs.
Got it, thanks.
end of thread

Thread overview: 17+ messages
2024-03-04 14:00 [PATCH v4 0/8] Live Migration With IAA Yuan Liu
2024-03-04 14:00 ` [PATCH v4 1/8] docs/migration: add qpl compression feature Yuan Liu
2024-03-04 14:00 ` [PATCH v4 2/8] migration/multifd: add get_iov_count in the multifd method Yuan Liu
2024-03-05 20:24 ` Fabiano Rosas
2024-03-06 1:16 ` Liu, Yuan1
2024-03-04 14:00 ` [PATCH v4 3/8] configure: add --enable-qpl build option Yuan Liu
2024-03-05 20:32 ` Fabiano Rosas
2024-03-06 2:20 ` Liu, Yuan1
2024-03-06 11:56 ` Fabiano Rosas
2024-03-07 6:45 ` Liu, Yuan1
2024-03-04 14:00 ` [PATCH v4 4/8] migration/multifd: add qpl compression method Yuan Liu
2024-03-05 20:58 ` Fabiano Rosas
2024-03-06 2:29 ` Liu, Yuan1
2024-03-04 14:00 ` [PATCH v4 5/8] migration/multifd: implement initialization of qpl compression Yuan Liu
2024-03-04 14:00 ` [PATCH v4 6/8] migration/multifd: implement qpl compression and decompression Yuan Liu
2024-03-04 14:00 ` [PATCH v4 7/8] migration/multifd: fix zlib and zstd compression levels not working Yuan Liu
2024-03-04 14:00 ` [PATCH v4 8/8] tests/migration-test: add qpl compression test Yuan Liu