* [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration.
@ 2024-01-04 0:44 Hao Xiang
From: Hao Xiang @ 2024-01-04 0:44 UTC
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Hao Xiang
v3
* Rebase on top of 7425b6277f12e82952cede1f531bfc689bf77fb1.
* Fix errors and warnings from checkpatch.pl.
* Fix a use-after-free bug when the multifd-dsa-accel option is not set.
* Handle errors from dsa_init and propagate them correctly.
* Remove an unnecessary call to dsa_stop.
* Detect availability of the DSA feature at compile time.
* Implement a generic batch_task structure and a DSA-specific one, dsa_batch_task.
* Remove all exit() calls and propagate errors correctly.
* Use bytes instead of page count to configure multifd-packet-size option.
v2
* Rebase on top of 3e01f1147a16ca566694b97eafc941d62fa1e8d8.
* Leave Juan's changes in their original form instead of squashing them.
* Add a new commit to refactor the multifd_send_thread function to prepare for introducing the DSA offload functionality.
* Use page count to configure multifd-packet-size option.
* Don't use the FLAKY flag in DSA tests.
* Test if the DSA integration test is set up correctly and skip the test if not.
* Fix broken link in the previous patch cover letter.
* Background:
I posted an RFC about DSA offloading in QEMU:
https://patchew.org/QEMU/20230529182001.2232069-1-hao.xiang@bytedance.com/
This patchset implements the DSA offloading on zero page checking in
multifd live migration code path.
* Overview:
Intel Data Streaming Accelerator (DSA) was introduced in Intel's 4th generation
Xeon server, aka Sapphire Rapids.
https://cdrdv2-public.intel.com/671116/341204-intel-data-streaming-accelerator-spec.pdf
https://www.intel.com/content/www/us/en/content-details/759709/intel-data-streaming-accelerator-user-guide.html
One of the things DSA can do is offload memory-comparison workloads from the
CPU to the accelerator hardware. This patchset implements a solution to offload
QEMU's zero page checking from the CPU to DSA accelerator hardware. We gain
two benefits from this change:
1. Reduces CPU usage in multifd live migration workflow across all use
cases.
2. Reduces migration total time in some use cases.
* Design:
These are the logical steps to perform DSA offloading:
1. Configure DSA accelerators and create user space openable DSA work
queues via the idxd driver.
2. Map DSA's work queue into a user space address space.
3. Fill an in-memory task descriptor to describe the memory operation.
4. Use dedicated CPU instruction _enqcmd to queue a task descriptor to
the work queue.
5. Poll the task descriptor's completion status field until the task
completes.
6. Check return status.
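Steps 1 and 2 above are normally done with the accel-config tool from the
idxd/accel-config project. A sketch, assuming a device enumerated as dsa0
(the queue name, size and priority here are illustrative, not values this
patchset requires):

```shell
# Bind one engine and one dedicated, user-mode work queue to group 0
accel-config config-engine dsa0/engine0.0 --group-id=0
accel-config config-wq dsa0/wq0.0 --group-id=0 --mode=dedicated \
        --type=user --name=qemu-migration --priority=10 --wq-size=16
# Enable the device, then the work queue
accel-config enable-device dsa0
accel-config enable-wq dsa0/wq0.0
# The queue is now openable from user space and mmap-able for _enqcmd
ls /dev/dsa/wq0.0
```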
The memory operation itself is now done entirely by the accelerator hardware,
but the new workflow introduces overheads: the extra CPU cost of preparing
and submitting the task descriptors, and the extra CPU cost of polling for
their completion. The design is built around minimizing these two overheads.
1. In order to reduce the overhead on task preparation and submission,
we use batch descriptors. A batch descriptor will contain N individual
zero page checking tasks where the default N is 128 (default packet size
/ page size) and we can increase N by setting the packet size via a new
migration option.
2. The multifd sender threads prepare and submit batch tasks to the DSA
hardware and wait on a synchronization object for task completion.
Whenever a DSA task is submitted, the task structure is added to a
thread-safe queue. It's safe for multiple multifd sender threads to
submit tasks concurrently.
3. Multiple DSA hardware devices can be used. During multifd initialization,
every sender thread will be assigned a DSA device to work with. We
use a round-robin scheme to evenly distribute the work across all used
DSA devices.
4. Use a dedicated thread, dsa_completion, to busy-poll for all
DSA task completions. The thread keeps dequeuing DSA tasks from the
thread-safe queue and blocks when there is no outstanding DSA
task. When polling for completion of a DSA task, the thread uses the CPU
instruction _mm_pause between iterations of the busy loop to save some
CPU power as well as to free core resources for the sibling hyperthread.
5. The DSA accelerator can encounter errors, the most common of which is a
page fault. We have tested letting the device handle page faults, but
performance was poor. Right now, if DSA hits a page fault, we fall back to
the CPU to complete the rest of the work. The CPU fallback is done in
the multifd sender thread.
6. Added a new migration option multifd-dsa-accel to set the DSA device
path. If set, the multifd workflow will leverage the DSA devices for
offloading.
7. Added a new migration option multifd-normal-page-ratio to make
multifd live migration easier to test. Setting a normal page ratio makes
live migration recognize zero pages as normal pages and send
the entire payload over the network. This option is useful when we want
to send a large network payload and analyze throughput.
8. Added a new migration option multifd-packet-size. This can increase
the number of pages being zero-page checked and sent over the network.
The extra synchronization between the sender threads and the DSA
completion thread is an overhead; using a larger packet size reduces
that overhead.
* Performance:
We use two Intel 4th generation Xeon servers for testing.
Architecture: x86_64
CPU(s): 192
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 143
Model name: Intel(R) Xeon(R) Platinum 8457C
Stepping: 8
CPU MHz: 2538.624
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
We perform multifd live migration with below setup:
1. VM has 100GB memory.
2. Use the new migration option multifd-normal-page-ratio to control the total
size of the payload sent over the network.
3. Use 8 multifd channels.
4. Use TCP for live migration.
5. Use CPU to perform zero page checking as the baseline.
6. Use one DSA device to offload zero page checking to compare with the baseline.
7. Use "perf sched record" and "perf sched timehist" to analyze CPU usage.
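The per-thread runtime numbers in the tables below come from perf's scheduler
analysis. A typical invocation (the recording window is illustrative):

```shell
# Record scheduling events system-wide while the migration runs
perf sched record -a -- sleep 30
# Per-thread runtime/wait-time summary, including a totals line
perf sched timehist -s
```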
A) Scenario 1: 50% (50GB) normal pages on a 100GB VM.
CPU usage
|---------------|---------------|---------------|---------------|
| |comm |runtime(msec) |totaltime(msec)|
|---------------|---------------|---------------|---------------|
|Baseline |live_migration |5657.58 | |
| |multifdsend_0 |3931.563 | |
| |multifdsend_1 |4405.273 | |
| |multifdsend_2 |3941.968 | |
| |multifdsend_3 |5032.975 | |
| |multifdsend_4 |4533.865 | |
| |multifdsend_5 |4530.461 | |
| |multifdsend_6 |5171.916 | |
| |multifdsend_7 |4722.769 |41922 |
|---------------|---------------|---------------|---------------|
|DSA |live_migration |6129.168 | |
| |multifdsend_0 |2954.717 | |
| |multifdsend_1 |2766.359 | |
| |multifdsend_2 |2853.519 | |
| |multifdsend_3 |2740.717 | |
| |multifdsend_4 |2824.169 | |
| |multifdsend_5 |2966.908 | |
| |multifdsend_6 |2611.137 | |
| |multifdsend_7 |3114.732 | |
| |dsa_completion |3612.564 |32568 |
|---------------|---------------|---------------|---------------|
Baseline total runtime is calculated by adding up the runtime of all
multifdsend_X threads and the live_migration thread. DSA offloading total
runtime is calculated by adding up the runtime of all multifdsend_X threads,
the live_migration thread and the dsa_completion thread. 41922 msec vs
32568 msec runtime is roughly a 22% total CPU usage saving.
Latency
|---------------|---------------|---------------|---------------|---------------|---------------|
| |total time |down time |throughput |transferred-ram|total-ram |
|---------------|---------------|---------------|---------------|---------------|---------------|
|Baseline |10343 ms |161 ms |41007.00 mbps |51583797 kb |102400520 kb |
|---------------|---------------|---------------|---------------|-------------------------------|
|DSA offload |9535 ms |135 ms |46554.40 mbps |53947545 kb |102400520 kb |
|---------------|---------------|---------------|---------------|---------------|---------------|
Total time is 8% faster and down time is 16% faster.
B) Scenario 2: 100% (100GB) zero pages on a 100GB VM.
CPU usage
|---------------|---------------|---------------|---------------|
| |comm |runtime(msec) |totaltime(msec)|
|---------------|---------------|---------------|---------------|
|Baseline |live_migration |4860.718 | |
| |multifdsend_0 |748.875 | |
| |multifdsend_1 |898.498 | |
| |multifdsend_2 |787.456 | |
| |multifdsend_3 |764.537 | |
| |multifdsend_4 |785.687 | |
| |multifdsend_5 |756.941 | |
| |multifdsend_6 |774.084 | |
| |multifdsend_7 |782.900 |11154 |
|---------------|---------------|-------------------------------|
|DSA offloading |live_migration |3846.976 | |
| |multifdsend_0 |191.880 | |
| |multifdsend_1 |166.331 | |
| |multifdsend_2 |168.528 | |
| |multifdsend_3 |197.831 | |
| |multifdsend_4 |169.580 | |
| |multifdsend_5 |167.984 | |
| |multifdsend_6 |198.042 | |
| |multifdsend_7 |170.624 | |
| |dsa_completion |3428.669 |8700 |
|---------------|---------------|---------------|---------------|
Baseline total runtime is 11154 msec and DSA offloading total runtime is
8700 msec. That is 22% CPU savings.
Latency
|--------------------------------------------------------------------------------------------|
| |total time |down time |throughput |transferred-ram|total-ram |
|---------------|---------------|---------------|---------------|---------------|------------|
|Baseline |4867 ms |20 ms |1.51 mbps |565 kb |102400520 kb|
|---------------|---------------|---------------|---------------|----------------------------|
|DSA offload |3888 ms |18 ms |1.89 mbps |565 kb |102400520 kb|
|---------------|---------------|---------------|---------------|---------------|------------|
Total time is 20% faster and down time is 10% faster.
* Testing:
1. Added unit tests to cover the added code paths in dsa.c.
2. Added integration tests to cover multifd live migration using DSA
offloading.
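Assuming a configured DSA work queue and a finished build tree, the new tests
might be invoked roughly like this (binary paths and test names are
assumptions based on the diffstat below, not commands from this series):

```shell
# Unit tests for util/dsa.c
meson test -C build test-dsa
# Multifd migration integration tests (includes the DSA cases)
QTEST_QEMU_BINARY=./build/qemu-system-x86_64 ./build/tests/qtest/migration-test
```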
* Patchset
Apply this patchset on top of commit
7425b6277f12e82952cede1f531bfc689bf77fb1
Hao Xiang (16):
meson: Introduce new instruction set enqcmd to the build system.
util/dsa: Add dependency idxd.
util/dsa: Implement DSA device start and stop logic.
util/dsa: Implement DSA task enqueue and dequeue.
util/dsa: Implement DSA task asynchronous completion thread model.
util/dsa: Implement zero page checking in DSA task.
util/dsa: Implement DSA task asynchronous submission and wait for
completion.
migration/multifd: Add new migration option for multifd DSA
offloading.
migration/multifd: Prepare to introduce DSA acceleration on the
multifd path.
migration/multifd: Enable DSA offloading in multifd sender path.
migration/multifd: Add test hook to set normal page ratio.
migration/multifd: Enable set normal page ratio test hook in multifd.
migration/multifd: Add migration option set packet size.
migration/multifd: Enable set packet size migration option.
util/dsa: Add unit test coverage for Intel DSA task submission and
completion.
migration/multifd: Add integration tests for multifd with Intel DSA
offloading.
Juan Quintela (4):
multifd: Add capability to enable/disable zero_page
multifd: Support for zero pages transmission
multifd: Zero pages transmission
So we use multifd to transmit zero pages.
include/qemu/dsa.h | 175 +++++
linux-headers/linux/idxd.h | 356 ++++++++++
meson.build | 14 +
meson_options.txt | 2 +
migration/migration-hmp-cmds.c | 22 +
migration/multifd-zlib.c | 6 +-
migration/multifd-zstd.c | 6 +-
migration/multifd.c | 218 +++++-
migration/multifd.h | 27 +-
migration/options.c | 114 ++++
migration/options.h | 4 +
migration/ram.c | 45 +-
migration/trace-events | 8 +-
qapi/migration.json | 62 +-
scripts/meson-buildoptions.sh | 3 +
tests/qtest/migration-test.c | 77 ++-
tests/unit/meson.build | 6 +
tests/unit/test-dsa.c | 475 +++++++++++++
util/dsa.c | 1170 ++++++++++++++++++++++++++++++++
util/meson.build | 1 +
20 files changed, 2749 insertions(+), 42 deletions(-)
create mode 100644 include/qemu/dsa.h
create mode 100644 linux-headers/linux/idxd.h
create mode 100644 tests/unit/test-dsa.c
create mode 100644 util/dsa.c
--
2.30.2
* [PATCH v3 01/20] multifd: Add capability to enable/disable zero_page
From: Hao Xiang @ 2024-01-04 0:44 UTC
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Juan Quintela
From: Juan Quintela <quintela@redhat.com>
We have to enable it by default until we introduce the new code.
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/options.c | 15 +++++++++++++++
migration/options.h | 1 +
qapi/migration.json | 8 +++++++-
3 files changed, 23 insertions(+), 1 deletion(-)
diff --git a/migration/options.c b/migration/options.c
index 8d8ec73ad9..0f6bd78b9f 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -204,6 +204,8 @@ Property migration_properties[] = {
DEFINE_PROP_MIG_CAP("x-switchover-ack",
MIGRATION_CAPABILITY_SWITCHOVER_ACK),
DEFINE_PROP_MIG_CAP("x-dirty-limit", MIGRATION_CAPABILITY_DIRTY_LIMIT),
+ DEFINE_PROP_MIG_CAP("main-zero-page",
+ MIGRATION_CAPABILITY_MAIN_ZERO_PAGE),
DEFINE_PROP_END_OF_LIST(),
};
@@ -284,6 +286,19 @@ bool migrate_multifd(void)
return s->capabilities[MIGRATION_CAPABILITY_MULTIFD];
}
+bool migrate_use_main_zero_page(void)
+{
+ /* MigrationState *s; */
+
+ /* s = migrate_get_current(); */
+
+ /*
+ * We will enable this when we add the right code.
+ * return s->enabled_capabilities[MIGRATION_CAPABILITY_MAIN_ZERO_PAGE];
+ */
+ return true;
+}
+
bool migrate_pause_before_switchover(void)
{
MigrationState *s = migrate_get_current();
diff --git a/migration/options.h b/migration/options.h
index 246c160aee..c901eb57c6 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -88,6 +88,7 @@ int migrate_multifd_channels(void);
MultiFDCompression migrate_multifd_compression(void);
int migrate_multifd_zlib_level(void);
int migrate_multifd_zstd_level(void);
+bool migrate_use_main_zero_page(void);
uint8_t migrate_throttle_trigger_threshold(void);
const char *migrate_tls_authz(void);
const char *migrate_tls_creds(void);
diff --git a/qapi/migration.json b/qapi/migration.json
index eb2f883513..80c4b13516 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -531,6 +531,12 @@
# and can result in more stable read performance. Requires KVM
# with accelerator property "dirty-ring-size" set. (Since 8.1)
#
+#
+# @main-zero-page: If enabled, the detection of zero pages will be
+# done on the main thread. Otherwise it is done on
+# the multifd threads.
+# (since 8.2)
+#
# Features:
#
# @deprecated: Member @block is deprecated. Use blockdev-mirror with
@@ -555,7 +561,7 @@
{ 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
'validate-uuid', 'background-snapshot',
'zero-copy-send', 'postcopy-preempt', 'switchover-ack',
- 'dirty-limit'] }
+ 'dirty-limit', 'main-zero-page'] }
##
# @MigrationCapabilityStatus:
--
2.30.2
* [PATCH v3 02/20] multifd: Support for zero pages transmission
From: Hao Xiang @ 2024-01-04 0:44 UTC
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Juan Quintela
From: Juan Quintela <quintela@redhat.com>
This patch adds counters and similar. The logic will be added in the
following patch.
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/multifd.c | 37 ++++++++++++++++++++++++++++++-------
migration/multifd.h | 17 ++++++++++++++++-
migration/trace-events | 8 ++++----
3 files changed, 50 insertions(+), 12 deletions(-)
diff --git a/migration/multifd.c b/migration/multifd.c
index 409460684f..5a1f50c7e8 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -267,6 +267,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
packet->normal_pages = cpu_to_be32(p->normal_num);
packet->next_packet_size = cpu_to_be32(p->next_packet_size);
packet->packet_num = cpu_to_be64(p->packet_num);
+ packet->zero_pages = cpu_to_be32(p->zero_num);
if (p->pages->block) {
strncpy(packet->ramblock, p->pages->block->idstr, 256);
@@ -326,7 +327,15 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
p->next_packet_size = be32_to_cpu(packet->next_packet_size);
p->packet_num = be64_to_cpu(packet->packet_num);
- if (p->normal_num == 0) {
+ p->zero_num = be32_to_cpu(packet->zero_pages);
+ if (p->zero_num > packet->pages_alloc - p->normal_num) {
+ error_setg(errp, "multifd: received packet "
+ "with %u zero pages and expected maximum pages are %u",
+ p->zero_num, packet->pages_alloc - p->normal_num) ;
+ return -1;
+ }
+
+ if (p->normal_num == 0 && p->zero_num == 0) {
return 0;
}
@@ -431,6 +440,7 @@ static int multifd_send_pages(QEMUFile *f)
p->packet_num = multifd_send_state->packet_num++;
multifd_send_state->pages = p->pages;
p->pages = pages;
+
qemu_mutex_unlock(&p->mutex);
qemu_sem_post(&p->sem);
@@ -552,6 +562,8 @@ void multifd_save_cleanup(void)
p->iov = NULL;
g_free(p->normal);
p->normal = NULL;
+ g_free(p->zero);
+ p->zero = NULL;
multifd_send_state->ops->send_cleanup(p, &local_err);
if (local_err) {
migrate_set_error(migrate_get_current(), local_err);
@@ -680,6 +692,7 @@ static void *multifd_send_thread(void *opaque)
uint64_t packet_num = p->packet_num;
uint32_t flags;
p->normal_num = 0;
+ p->zero_num = 0;
if (use_zero_copy_send) {
p->iovs_num = 0;
@@ -704,12 +717,13 @@ static void *multifd_send_thread(void *opaque)
p->flags = 0;
p->num_packets++;
p->total_normal_pages += p->normal_num;
+ p->total_zero_pages += p->zero_num;
p->pages->num = 0;
p->pages->block = NULL;
qemu_mutex_unlock(&p->mutex);
- trace_multifd_send(p->id, packet_num, p->normal_num, flags,
- p->next_packet_size);
+ trace_multifd_send(p->id, packet_num, p->normal_num, p->zero_num,
+ flags, p->next_packet_size);
if (use_zero_copy_send) {
/* Send header first, without zerocopy */
@@ -732,6 +746,8 @@ static void *multifd_send_thread(void *opaque)
stat64_add(&mig_stats.multifd_bytes,
p->next_packet_size + p->packet_len);
+ stat64_add(&mig_stats.normal_pages, p->normal_num);
+ stat64_add(&mig_stats.zero_pages, p->zero_num);
p->next_packet_size = 0;
qemu_mutex_lock(&p->mutex);
p->pending_job--;
@@ -762,7 +778,8 @@ out:
rcu_unregister_thread();
migration_threads_remove(thread);
- trace_multifd_send_thread_end(p->id, p->num_packets, p->total_normal_pages);
+ trace_multifd_send_thread_end(p->id, p->num_packets, p->total_normal_pages,
+ p->total_zero_pages);
return NULL;
}
@@ -938,6 +955,7 @@ int multifd_save_setup(Error **errp)
p->normal = g_new0(ram_addr_t, page_count);
p->page_size = qemu_target_page_size();
p->page_count = page_count;
+ p->zero = g_new0(ram_addr_t, page_count);
if (migrate_zero_copy_send()) {
p->write_flags = QIO_CHANNEL_WRITE_FLAG_ZERO_COPY;
@@ -1053,6 +1071,8 @@ void multifd_load_cleanup(void)
p->iov = NULL;
g_free(p->normal);
p->normal = NULL;
+ g_free(p->zero);
+ p->zero = NULL;
multifd_recv_state->ops->recv_cleanup(p);
}
qemu_sem_destroy(&multifd_recv_state->sem_sync);
@@ -1121,10 +1141,11 @@ static void *multifd_recv_thread(void *opaque)
flags = p->flags;
/* recv methods don't know how to handle the SYNC flag */
p->flags &= ~MULTIFD_FLAG_SYNC;
- trace_multifd_recv(p->id, p->packet_num, p->normal_num, flags,
- p->next_packet_size);
+ trace_multifd_recv(p->id, p->packet_num, p->normal_num, p->zero_num,
+ flags, p->next_packet_size);
p->num_packets++;
p->total_normal_pages += p->normal_num;
+ p->total_zero_pages += p->zero_num;
qemu_mutex_unlock(&p->mutex);
if (p->normal_num) {
@@ -1149,7 +1170,8 @@ static void *multifd_recv_thread(void *opaque)
qemu_mutex_unlock(&p->mutex);
rcu_unregister_thread();
- trace_multifd_recv_thread_end(p->id, p->num_packets, p->total_normal_pages);
+ trace_multifd_recv_thread_end(p->id, p->num_packets, p->total_normal_pages,
+ p->total_zero_pages);
return NULL;
}
@@ -1190,6 +1212,7 @@ int multifd_load_setup(Error **errp)
p->normal = g_new0(ram_addr_t, page_count);
p->page_count = page_count;
p->page_size = qemu_target_page_size();
+ p->zero = g_new0(ram_addr_t, page_count);
}
for (i = 0; i < thread_count; i++) {
diff --git a/migration/multifd.h b/migration/multifd.h
index a835643b48..d587b0e19c 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -48,7 +48,10 @@ typedef struct {
/* size of the next packet that contains pages */
uint32_t next_packet_size;
uint64_t packet_num;
- uint64_t unused[4]; /* Reserved for future use */
+ /* zero pages */
+ uint32_t zero_pages;
+ uint32_t unused32[1]; /* Reserved for future use */
+ uint64_t unused64[3]; /* Reserved for future use */
char ramblock[256];
uint64_t offset[];
} __attribute__((packed)) MultiFDPacket_t;
@@ -122,6 +125,8 @@ typedef struct {
uint64_t num_packets;
/* non zero pages sent through this channel */
uint64_t total_normal_pages;
+ /* zero pages sent through this channel */
+ uint64_t total_zero_pages;
/* buffers to send */
struct iovec *iov;
/* number of iovs used */
@@ -130,6 +135,10 @@ typedef struct {
ram_addr_t *normal;
/* num of non zero pages */
uint32_t normal_num;
+ /* Pages that are zero */
+ ram_addr_t *zero;
+ /* num of zero pages */
+ uint32_t zero_num;
/* used for compression methods */
void *data;
} MultiFDSendParams;
@@ -181,12 +190,18 @@ typedef struct {
uint8_t *host;
/* non zero pages recv through this channel */
uint64_t total_normal_pages;
+ /* zero pages recv through this channel */
+ uint64_t total_zero_pages;
/* buffers to recv */
struct iovec *iov;
/* Pages that are not zero */
ram_addr_t *normal;
/* num of non zero pages */
uint32_t normal_num;
+ /* Pages that are zero */
+ ram_addr_t *zero;
+ /* num of zero pages */
+ uint32_t zero_num;
/* used for de-compression methods */
void *data;
} MultiFDRecvParams;
diff --git a/migration/trace-events b/migration/trace-events
index de4a743c8a..c0a758db9d 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -128,21 +128,21 @@ postcopy_preempt_reset_channel(void) ""
# multifd.c
multifd_new_send_channel_async(uint8_t id) "channel %u"
multifd_new_send_channel_async_error(uint8_t id, void *err) "channel=%u err=%p"
-multifd_recv(uint8_t id, uint64_t packet_num, uint32_t used, uint32_t flags, uint32_t next_packet_size) "channel %u packet_num %" PRIu64 " pages %u flags 0x%x next packet size %u"
+multifd_recv(uint8_t id, uint64_t packet_num, uint32_t normal, uint32_t zero, uint32_t flags, uint32_t next_packet_size) "channel %u packet_num %" PRIu64 " normal pages %u zero pages %u flags 0x%x next packet size %u"
multifd_recv_new_channel(uint8_t id) "channel %u"
multifd_recv_sync_main(long packet_num) "packet num %ld"
multifd_recv_sync_main_signal(uint8_t id) "channel %u"
multifd_recv_sync_main_wait(uint8_t id) "channel %u"
multifd_recv_terminate_threads(bool error) "error %d"
-multifd_recv_thread_end(uint8_t id, uint64_t packets, uint64_t pages) "channel %u packets %" PRIu64 " pages %" PRIu64
+multifd_recv_thread_end(uint8_t id, uint64_t packets, uint64_t normal_pages, uint64_t zero_pages) "channel %u packets %" PRIu64 " normal pages %" PRIu64 " zero pages %" PRIu64
multifd_recv_thread_start(uint8_t id) "%u"
-multifd_send(uint8_t id, uint64_t packet_num, uint32_t normal, uint32_t flags, uint32_t next_packet_size) "channel %u packet_num %" PRIu64 " normal pages %u flags 0x%x next packet size %u"
+multifd_send(uint8_t id, uint64_t packet_num, uint32_t normalpages, uint32_t zero_pages, uint32_t flags, uint32_t next_packet_size) "channel %u packet_num %" PRIu64 " normal pages %u zero pages %u flags 0x%x next packet size %u"
multifd_send_error(uint8_t id) "channel %u"
multifd_send_sync_main(long packet_num) "packet num %ld"
multifd_send_sync_main_signal(uint8_t id) "channel %u"
multifd_send_sync_main_wait(uint8_t id) "channel %u"
multifd_send_terminate_threads(bool error) "error %d"
-multifd_send_thread_end(uint8_t id, uint64_t packets, uint64_t normal_pages) "channel %u packets %" PRIu64 " normal pages %" PRIu64
+multifd_send_thread_end(uint8_t id, uint64_t packets, uint64_t normal_pages, uint64_t zero_pages) "channel %u packets %" PRIu64 " normal pages %" PRIu64 " zero pages %" PRIu64
multifd_send_thread_start(uint8_t id) "%u"
multifd_tls_outgoing_handshake_start(void *ioc, void *tioc, const char *hostname) "ioc=%p tioc=%p hostname=%s"
multifd_tls_outgoing_handshake_error(void *ioc, const char *err) "ioc=%p err=%s"
--
2.30.2
* [PATCH v3 03/20] multifd: Zero pages transmission
From: Hao Xiang @ 2024-01-04 0:44 UTC
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Juan Quintela
From: Juan Quintela <quintela@redhat.com>
This implements the zero page detection and handling.
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/multifd.c | 41 +++++++++++++++++++++++++++++++++++++++--
migration/multifd.h | 5 +++++
2 files changed, 44 insertions(+), 2 deletions(-)
diff --git a/migration/multifd.c b/migration/multifd.c
index 5a1f50c7e8..756673029d 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -11,6 +11,7 @@
*/
#include "qemu/osdep.h"
+#include "qemu/cutils.h"
#include "qemu/rcu.h"
#include "exec/target_page.h"
#include "sysemu/sysemu.h"
@@ -279,6 +280,12 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
packet->offset[i] = cpu_to_be64(temp);
}
+ for (i = 0; i < p->zero_num; i++) {
+ /* there are architectures where ram_addr_t is 32 bit */
+ uint64_t temp = p->zero[i];
+
+ packet->offset[p->normal_num + i] = cpu_to_be64(temp);
+ }
}
static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
@@ -361,6 +368,18 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
p->normal[i] = offset;
}
+ for (i = 0; i < p->zero_num; i++) {
+ uint64_t offset = be64_to_cpu(packet->offset[p->normal_num + i]);
+
+ if (offset > (p->block->used_length - p->page_size)) {
+ error_setg(errp, "multifd: offset too long %" PRIu64
+ " (max " RAM_ADDR_FMT ")",
+ offset, p->block->used_length);
+ return -1;
+ }
+ p->zero[i] = offset;
+ }
+
return 0;
}
@@ -664,6 +683,8 @@ static void *multifd_send_thread(void *opaque)
MultiFDSendParams *p = opaque;
MigrationThread *thread = NULL;
Error *local_err = NULL;
+ /* qemu older than 8.2 don't understand zero page on multifd channel */
+ bool use_zero_page = !migrate_use_main_zero_page();
int ret = 0;
bool use_zero_copy_send = migrate_zero_copy_send();
@@ -689,6 +710,7 @@ static void *multifd_send_thread(void *opaque)
qemu_mutex_lock(&p->mutex);
if (p->pending_job) {
+ RAMBlock *rb = p->pages->block;
uint64_t packet_num = p->packet_num;
uint32_t flags;
p->normal_num = 0;
@@ -701,8 +723,16 @@ static void *multifd_send_thread(void *opaque)
}
for (int i = 0; i < p->pages->num; i++) {
- p->normal[p->normal_num] = p->pages->offset[i];
- p->normal_num++;
+ uint64_t offset = p->pages->offset[i];
+ if (use_zero_page &&
+ buffer_is_zero(rb->host + offset, p->page_size)) {
+ p->zero[p->zero_num] = offset;
+ p->zero_num++;
+ ram_release_page(rb->idstr, offset);
+ } else {
+ p->normal[p->normal_num] = offset;
+ p->normal_num++;
+ }
}
if (p->normal_num) {
@@ -1155,6 +1185,13 @@ static void *multifd_recv_thread(void *opaque)
}
}
+ for (int i = 0; i < p->zero_num; i++) {
+ void *page = p->host + p->zero[i];
+ if (!buffer_is_zero(page, p->page_size)) {
+ memset(page, 0, p->page_size);
+ }
+ }
+
if (flags & MULTIFD_FLAG_SYNC) {
qemu_sem_post(&multifd_recv_state->sem_sync);
qemu_sem_wait(&p->sem_sync);
diff --git a/migration/multifd.h b/migration/multifd.h
index d587b0e19c..13762900d4 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -53,6 +53,11 @@ typedef struct {
uint32_t unused32[1]; /* Reserved for future use */
uint64_t unused64[3]; /* Reserved for future use */
char ramblock[256];
+ /*
+ * This array contains the pointers to:
+ * - normal pages (initial normal_pages entries)
+ * - zero pages (following zero_pages entries)
+ */
uint64_t offset[];
} __attribute__((packed)) MultiFDPacket_t;
--
2.30.2
* [PATCH v3 04/20] So we use multifd to transmit zero pages.
From: Hao Xiang @ 2024-01-04 0:44 UTC
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Juan Quintela, Leonardo Bras
From: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Leonardo Bras <leobras@redhat.com>
---
migration/multifd.c | 7 ++++---
migration/options.c | 17 +++++++++--------
migration/ram.c | 45 ++++++++++++++++++++++++++++++++++++++-------
qapi/migration.json | 1 -
4 files changed, 51 insertions(+), 19 deletions(-)
diff --git a/migration/multifd.c b/migration/multifd.c
index 756673029d..eece85569f 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -13,6 +13,7 @@
#include "qemu/osdep.h"
#include "qemu/cutils.h"
#include "qemu/rcu.h"
+#include "qemu/cutils.h"
#include "exec/target_page.h"
#include "sysemu/sysemu.h"
#include "exec/ramblock.h"
@@ -459,7 +460,6 @@ static int multifd_send_pages(QEMUFile *f)
p->packet_num = multifd_send_state->packet_num++;
multifd_send_state->pages = p->pages;
p->pages = pages;
-
qemu_mutex_unlock(&p->mutex);
qemu_sem_post(&p->sem);
@@ -684,7 +684,7 @@ static void *multifd_send_thread(void *opaque)
MigrationThread *thread = NULL;
Error *local_err = NULL;
/* qemu older than 8.2 don't understand zero page on multifd channel */
- bool use_zero_page = !migrate_use_main_zero_page();
+ bool use_multifd_zero_page = !migrate_use_main_zero_page();
int ret = 0;
bool use_zero_copy_send = migrate_zero_copy_send();
@@ -713,6 +713,7 @@ static void *multifd_send_thread(void *opaque)
RAMBlock *rb = p->pages->block;
uint64_t packet_num = p->packet_num;
uint32_t flags;
+
p->normal_num = 0;
p->zero_num = 0;
@@ -724,7 +725,7 @@ static void *multifd_send_thread(void *opaque)
for (int i = 0; i < p->pages->num; i++) {
uint64_t offset = p->pages->offset[i];
- if (use_zero_page &&
+ if (use_multifd_zero_page &&
buffer_is_zero(rb->host + offset, p->page_size)) {
p->zero[p->zero_num] = offset;
p->zero_num++;
diff --git a/migration/options.c b/migration/options.c
index 0f6bd78b9f..180698a8f5 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -195,6 +195,8 @@ Property migration_properties[] = {
DEFINE_PROP_MIG_CAP("x-block", MIGRATION_CAPABILITY_BLOCK),
DEFINE_PROP_MIG_CAP("x-return-path", MIGRATION_CAPABILITY_RETURN_PATH),
DEFINE_PROP_MIG_CAP("x-multifd", MIGRATION_CAPABILITY_MULTIFD),
+ DEFINE_PROP_MIG_CAP("x-main-zero-page",
+ MIGRATION_CAPABILITY_MAIN_ZERO_PAGE),
DEFINE_PROP_MIG_CAP("x-background-snapshot",
MIGRATION_CAPABILITY_BACKGROUND_SNAPSHOT),
#ifdef CONFIG_LINUX
@@ -288,15 +290,9 @@ bool migrate_multifd(void)
bool migrate_use_main_zero_page(void)
{
- /* MigrationState *s; */
-
- /* s = migrate_get_current(); */
+ MigrationState *s = migrate_get_current();
- /*
- * We will enable this when we add the right code.
- * return s->enabled_capabilities[MIGRATION_CAPABILITY_MAIN_ZERO_PAGE];
- */
- return true;
+ return s->capabilities[MIGRATION_CAPABILITY_MAIN_ZERO_PAGE];
}
bool migrate_pause_before_switchover(void)
@@ -459,6 +455,7 @@ INITIALIZE_MIGRATE_CAPS_SET(check_caps_background_snapshot,
MIGRATION_CAPABILITY_LATE_BLOCK_ACTIVATE,
MIGRATION_CAPABILITY_RETURN_PATH,
MIGRATION_CAPABILITY_MULTIFD,
+ MIGRATION_CAPABILITY_MAIN_ZERO_PAGE,
MIGRATION_CAPABILITY_PAUSE_BEFORE_SWITCHOVER,
MIGRATION_CAPABILITY_AUTO_CONVERGE,
MIGRATION_CAPABILITY_RELEASE_RAM,
@@ -536,6 +533,10 @@ bool migrate_caps_check(bool *old_caps, bool *new_caps, Error **errp)
error_setg(errp, "Postcopy is not yet compatible with multifd");
return false;
}
+ if (new_caps[MIGRATION_CAPABILITY_MAIN_ZERO_PAGE]) {
+ error_setg(errp,
+ "Postcopy is not yet compatible with main zero page");
+ return false;
+ }
}
if (new_caps[MIGRATION_CAPABILITY_BACKGROUND_SNAPSHOT]) {
diff --git a/migration/ram.c b/migration/ram.c
index 8c7886ab79..f7a42feff2 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2059,17 +2059,42 @@ static int ram_save_target_page_legacy(RAMState *rs, PageSearchStatus *pss)
if (save_zero_page(rs, pss, offset)) {
return 1;
}
-
/*
- * Do not use multifd in postcopy as one whole host page should be
- * placed. Meanwhile postcopy requires atomic update of pages, so even
- * if host page size == guest page size the dest guest during run may
- * still see partially copied pages which is data corruption.
+ * Do not use multifd for:
+ * 1. Compression as the first page in the new block should be posted out
+ * before sending the compressed page
+ * 2. In postcopy as one whole host page should be placed
*/
- if (migrate_multifd() && !migration_in_postcopy()) {
+ if (!migrate_compress() && migrate_multifd() && !migration_in_postcopy()) {
+ return ram_save_multifd_page(pss->pss_channel, block, offset);
+ }
+
+ return ram_save_page(rs, pss);
+}
+
+/**
+ * ram_save_target_page_multifd: save one target page
+ *
+ * Returns the number of pages written
+ *
+ * @rs: current RAM state
+ * @pss: data about the page we want to send
+ */
+static int ram_save_target_page_multifd(RAMState *rs, PageSearchStatus *pss)
+{
+ RAMBlock *block = pss->block;
+ ram_addr_t offset = ((ram_addr_t)pss->page) << TARGET_PAGE_BITS;
+ int res;
+
+ if (!migration_in_postcopy()) {
return ram_save_multifd_page(pss->pss_channel, block, offset);
}
+ res = save_zero_page(rs, pss, offset);
+ if (res > 0) {
+ return res;
+ }
+
return ram_save_page(rs, pss);
}
@@ -2982,9 +3007,15 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
}
migration_ops = g_malloc0(sizeof(MigrationOps));
- migration_ops->ram_save_target_page = ram_save_target_page_legacy;
+
+ if (migrate_multifd() && !migrate_use_main_zero_page()) {
+ migration_ops->ram_save_target_page = ram_save_target_page_multifd;
+ } else {
+ migration_ops->ram_save_target_page = ram_save_target_page_legacy;
+ }
qemu_mutex_unlock_iothread();
+
ret = multifd_send_sync_main(f);
qemu_mutex_lock_iothread();
if (ret < 0) {
diff --git a/qapi/migration.json b/qapi/migration.json
index 80c4b13516..4c7a42e364 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -531,7 +531,6 @@
# and can result in more stable read performance. Requires KVM
# with accelerator property "dirty-ring-size" set. (Since 8.1)
#
-#
# @main-zero-page: If enabled, the detection of zero pages will be
# done on the main thread. Otherwise it is done on
# the multifd threads.
--
2.30.2
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v3 05/20] meson: Introduce new instruction set enqcmd to the build system.
2024-01-04 0:44 [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration Hao Xiang
` (3 preceding siblings ...)
2024-01-04 0:44 ` [PATCH v3 04/20] So we use multifd to transmit zero pages Hao Xiang
@ 2024-01-04 0:44 ` Hao Xiang
2024-01-04 0:44 ` [PATCH v3 06/20] util/dsa: Add dependency idxd Hao Xiang
` (15 subsequent siblings)
20 siblings, 0 replies; 41+ messages in thread
From: Hao Xiang @ 2024-01-04 0:44 UTC (permalink / raw)
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Hao Xiang
Enable the ENQCMD instruction set in the build.
Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
---
meson.build | 14 ++++++++++++++
meson_options.txt | 2 ++
scripts/meson-buildoptions.sh | 3 +++
3 files changed, 19 insertions(+)
diff --git a/meson.build b/meson.build
index 6c77d9687d..ea10d99cf4 100644
--- a/meson.build
+++ b/meson.build
@@ -2712,6 +2712,20 @@ config_host_data.set('CONFIG_AVX512BW_OPT', get_option('avx512bw') \
int main(int argc, char *argv[]) { return bar(argv[0]); }
'''), error_message: 'AVX512BW not available').allowed())
+config_host_data.set('CONFIG_DSA_OPT', get_option('enqcmd') \
+ .require(have_cpuid_h, error_message: 'cpuid.h not available, cannot enable ENQCMD') \
+ .require(cc.links('''
+ #include <stdint.h>
+ #include <cpuid.h>
+ #include <immintrin.h>
+ static int __attribute__((target("enqcmd"))) bar(void *a) {
+ uint64_t dst[8] = { 0 };
+ uint64_t src[8] = { 0 };
+ return _enqcmd(dst, src);
+ }
+ int main(int argc, char *argv[]) { return bar(argv[argc - 1]); }
+ '''), error_message: 'ENQCMD not available').allowed())
+
# For both AArch64 and AArch32, detect if builtins are available.
config_host_data.set('CONFIG_ARM_AES_BUILTIN', cc.compiles('''
#include <arm_neon.h>
diff --git a/meson_options.txt b/meson_options.txt
index c9baeda639..618970d0f7 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -121,6 +121,8 @@ option('avx512f', type: 'feature', value: 'disabled',
description: 'AVX512F optimizations')
option('avx512bw', type: 'feature', value: 'auto',
description: 'AVX512BW optimizations')
+option('enqcmd', type: 'feature', value: 'disabled',
+ description: 'ENQCMD optimizations')
option('keyring', type: 'feature', value: 'auto',
description: 'Linux keyring support')
option('libkeyutils', type: 'feature', value: 'auto',
diff --git a/scripts/meson-buildoptions.sh b/scripts/meson-buildoptions.sh
index 680fa3f581..c1389ff109 100644
--- a/scripts/meson-buildoptions.sh
+++ b/scripts/meson-buildoptions.sh
@@ -93,6 +93,7 @@ meson_options_help() {
printf "%s\n" ' avx2 AVX2 optimizations'
printf "%s\n" ' avx512bw AVX512BW optimizations'
printf "%s\n" ' avx512f AVX512F optimizations'
+ printf "%s\n" ' enqcmd ENQCMD optimizations'
printf "%s\n" ' blkio libblkio block device driver'
printf "%s\n" ' bochs bochs image format support'
printf "%s\n" ' bpf eBPF support'
@@ -240,6 +241,8 @@ _meson_option_parse() {
--disable-avx512bw) printf "%s" -Davx512bw=disabled ;;
--enable-avx512f) printf "%s" -Davx512f=enabled ;;
--disable-avx512f) printf "%s" -Davx512f=disabled ;;
+ --enable-enqcmd) printf "%s" -Denqcmd=enabled ;;
+ --disable-enqcmd) printf "%s" -Denqcmd=disabled ;;
--enable-gcov) printf "%s" -Db_coverage=true ;;
--disable-gcov) printf "%s" -Db_coverage=false ;;
--enable-lto) printf "%s" -Db_lto=true ;;
--
2.30.2
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v3 06/20] util/dsa: Add dependency idxd.
2024-01-04 0:44 [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration Hao Xiang
` (4 preceding siblings ...)
2024-01-04 0:44 ` [PATCH v3 05/20] meson: Introduce new instruction set enqcmd to the build system Hao Xiang
@ 2024-01-04 0:44 ` Hao Xiang
2024-02-01 5:34 ` Peter Xu
2024-01-04 0:44 ` [PATCH v3 07/20] util/dsa: Implement DSA device start and stop logic Hao Xiang
` (14 subsequent siblings)
20 siblings, 1 reply; 41+ messages in thread
From: Hao Xiang @ 2024-01-04 0:44 UTC (permalink / raw)
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Hao Xiang
Idxd is the device driver for DSA (Intel Data Streaming
Accelerator). The driver has been fully functional since Linux
kernel 5.19. This change adds the driver's header file used
for userspace development.
Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
---
 linux-headers/linux/idxd.h | 356 +++++++++++++++++++++++++++++++++++++
1 file changed, 356 insertions(+)
create mode 100644 linux-headers/linux/idxd.h
diff --git a/linux-headers/linux/idxd.h b/linux-headers/linux/idxd.h
new file mode 100644
index 0000000000..1d553bedbd
--- /dev/null
+++ b/linux-headers/linux/idxd.h
@@ -0,0 +1,356 @@
+/* SPDX-License-Identifier: LGPL-2.1 WITH Linux-syscall-note */
+/* Copyright(c) 2019 Intel Corporation. All rights rsvd. */
+#ifndef _USR_IDXD_H_
+#define _USR_IDXD_H_
+
+#ifdef __KERNEL__
+#include <linux/types.h>
+#else
+#include <stdint.h>
+#endif
+
+/* Driver command error status */
+enum idxd_scmd_stat {
+ IDXD_SCMD_DEV_ENABLED = 0x80000010,
+ IDXD_SCMD_DEV_NOT_ENABLED = 0x80000020,
+ IDXD_SCMD_WQ_ENABLED = 0x80000021,
+ IDXD_SCMD_DEV_DMA_ERR = 0x80020000,
+ IDXD_SCMD_WQ_NO_GRP = 0x80030000,
+ IDXD_SCMD_WQ_NO_NAME = 0x80040000,
+ IDXD_SCMD_WQ_NO_SVM = 0x80050000,
+ IDXD_SCMD_WQ_NO_THRESH = 0x80060000,
+ IDXD_SCMD_WQ_PORTAL_ERR = 0x80070000,
+ IDXD_SCMD_WQ_RES_ALLOC_ERR = 0x80080000,
+ IDXD_SCMD_PERCPU_ERR = 0x80090000,
+ IDXD_SCMD_DMA_CHAN_ERR = 0x800a0000,
+ IDXD_SCMD_CDEV_ERR = 0x800b0000,
+ IDXD_SCMD_WQ_NO_SWQ_SUPPORT = 0x800c0000,
+ IDXD_SCMD_WQ_NONE_CONFIGURED = 0x800d0000,
+ IDXD_SCMD_WQ_NO_SIZE = 0x800e0000,
+ IDXD_SCMD_WQ_NO_PRIV = 0x800f0000,
+ IDXD_SCMD_WQ_IRQ_ERR = 0x80100000,
+ IDXD_SCMD_WQ_USER_NO_IOMMU = 0x80110000,
+};
+
+#define IDXD_SCMD_SOFTERR_MASK 0x80000000
+#define IDXD_SCMD_SOFTERR_SHIFT 16
+
+/* Descriptor flags */
+#define IDXD_OP_FLAG_FENCE 0x0001
+#define IDXD_OP_FLAG_BOF 0x0002
+#define IDXD_OP_FLAG_CRAV 0x0004
+#define IDXD_OP_FLAG_RCR 0x0008
+#define IDXD_OP_FLAG_RCI 0x0010
+#define IDXD_OP_FLAG_CRSTS 0x0020
+#define IDXD_OP_FLAG_CR 0x0080
+#define IDXD_OP_FLAG_CC 0x0100
+#define IDXD_OP_FLAG_ADDR1_TCS 0x0200
+#define IDXD_OP_FLAG_ADDR2_TCS 0x0400
+#define IDXD_OP_FLAG_ADDR3_TCS 0x0800
+#define IDXD_OP_FLAG_CR_TCS 0x1000
+#define IDXD_OP_FLAG_STORD 0x2000
+#define IDXD_OP_FLAG_DRDBK 0x4000
+#define IDXD_OP_FLAG_DSTS 0x8000
+
+/* IAX */
+#define IDXD_OP_FLAG_RD_SRC2_AECS 0x010000
+#define IDXD_OP_FLAG_RD_SRC2_2ND 0x020000
+#define IDXD_OP_FLAG_WR_SRC2_AECS_COMP 0x040000
+#define IDXD_OP_FLAG_WR_SRC2_AECS_OVFL 0x080000
+#define IDXD_OP_FLAG_SRC2_STS 0x100000
+#define IDXD_OP_FLAG_CRC_RFC3720 0x200000
+
+/* Opcode */
+enum dsa_opcode {
+ DSA_OPCODE_NOOP = 0,
+ DSA_OPCODE_BATCH,
+ DSA_OPCODE_DRAIN,
+ DSA_OPCODE_MEMMOVE,
+ DSA_OPCODE_MEMFILL,
+ DSA_OPCODE_COMPARE,
+ DSA_OPCODE_COMPVAL,
+ DSA_OPCODE_CR_DELTA,
+ DSA_OPCODE_AP_DELTA,
+ DSA_OPCODE_DUALCAST,
+ DSA_OPCODE_CRCGEN = 0x10,
+ DSA_OPCODE_COPY_CRC,
+ DSA_OPCODE_DIF_CHECK,
+ DSA_OPCODE_DIF_INS,
+ DSA_OPCODE_DIF_STRP,
+ DSA_OPCODE_DIF_UPDT,
+ DSA_OPCODE_CFLUSH = 0x20,
+};
+
+enum iax_opcode {
+ IAX_OPCODE_NOOP = 0,
+ IAX_OPCODE_DRAIN = 2,
+ IAX_OPCODE_MEMMOVE,
+ IAX_OPCODE_DECOMPRESS = 0x42,
+ IAX_OPCODE_COMPRESS,
+ IAX_OPCODE_CRC64,
+ IAX_OPCODE_ZERO_DECOMP_32 = 0x48,
+ IAX_OPCODE_ZERO_DECOMP_16,
+ IAX_OPCODE_ZERO_COMP_32 = 0x4c,
+ IAX_OPCODE_ZERO_COMP_16,
+ IAX_OPCODE_SCAN = 0x50,
+ IAX_OPCODE_SET_MEMBER,
+ IAX_OPCODE_EXTRACT,
+ IAX_OPCODE_SELECT,
+ IAX_OPCODE_RLE_BURST,
+ IAX_OPCODE_FIND_UNIQUE,
+ IAX_OPCODE_EXPAND,
+};
+
+/* Completion record status */
+enum dsa_completion_status {
+ DSA_COMP_NONE = 0,
+ DSA_COMP_SUCCESS,
+ DSA_COMP_SUCCESS_PRED,
+ DSA_COMP_PAGE_FAULT_NOBOF,
+ DSA_COMP_PAGE_FAULT_IR,
+ DSA_COMP_BATCH_FAIL,
+ DSA_COMP_BATCH_PAGE_FAULT,
+ DSA_COMP_DR_OFFSET_NOINC,
+ DSA_COMP_DR_OFFSET_ERANGE,
+ DSA_COMP_DIF_ERR,
+ DSA_COMP_BAD_OPCODE = 0x10,
+ DSA_COMP_INVALID_FLAGS,
+ DSA_COMP_NOZERO_RESERVE,
+ DSA_COMP_XFER_ERANGE,
+ DSA_COMP_DESC_CNT_ERANGE,
+ DSA_COMP_DR_ERANGE,
+ DSA_COMP_OVERLAP_BUFFERS,
+ DSA_COMP_DCAST_ERR,
+ DSA_COMP_DESCLIST_ALIGN,
+ DSA_COMP_INT_HANDLE_INVAL,
+ DSA_COMP_CRA_XLAT,
+ DSA_COMP_CRA_ALIGN,
+ DSA_COMP_ADDR_ALIGN,
+ DSA_COMP_PRIV_BAD,
+ DSA_COMP_TRAFFIC_CLASS_CONF,
+ DSA_COMP_PFAULT_RDBA,
+ DSA_COMP_HW_ERR1,
+ DSA_COMP_HW_ERR_DRB,
+ DSA_COMP_TRANSLATION_FAIL,
+};
+
+enum iax_completion_status {
+ IAX_COMP_NONE = 0,
+ IAX_COMP_SUCCESS,
+ IAX_COMP_PAGE_FAULT_IR = 0x04,
+ IAX_COMP_ANALYTICS_ERROR = 0x0a,
+ IAX_COMP_OUTBUF_OVERFLOW,
+ IAX_COMP_BAD_OPCODE = 0x10,
+ IAX_COMP_INVALID_FLAGS,
+ IAX_COMP_NOZERO_RESERVE,
+ IAX_COMP_INVALID_SIZE,
+ IAX_COMP_OVERLAP_BUFFERS = 0x16,
+ IAX_COMP_INT_HANDLE_INVAL = 0x19,
+ IAX_COMP_CRA_XLAT,
+ IAX_COMP_CRA_ALIGN,
+ IAX_COMP_ADDR_ALIGN,
+ IAX_COMP_PRIV_BAD,
+ IAX_COMP_TRAFFIC_CLASS_CONF,
+ IAX_COMP_PFAULT_RDBA,
+ IAX_COMP_HW_ERR1,
+ IAX_COMP_HW_ERR_DRB,
+ IAX_COMP_TRANSLATION_FAIL,
+ IAX_COMP_PRS_TIMEOUT,
+ IAX_COMP_WATCHDOG,
+ IAX_COMP_INVALID_COMP_FLAG = 0x30,
+ IAX_COMP_INVALID_FILTER_FLAG,
+ IAX_COMP_INVALID_INPUT_SIZE,
+ IAX_COMP_INVALID_NUM_ELEMS,
+ IAX_COMP_INVALID_SRC1_WIDTH,
+ IAX_COMP_INVALID_INVERT_OUT,
+};
+
+#define DSA_COMP_STATUS_MASK 0x7f
+#define DSA_COMP_STATUS_WRITE 0x80
+
+struct dsa_hw_desc {
+ uint32_t pasid:20;
+ uint32_t rsvd:11;
+ uint32_t priv:1;
+ uint32_t flags:24;
+ uint32_t opcode:8;
+ uint64_t completion_addr;
+ union {
+ uint64_t src_addr;
+ uint64_t rdback_addr;
+ uint64_t pattern;
+ uint64_t desc_list_addr;
+ };
+ union {
+ uint64_t dst_addr;
+ uint64_t rdback_addr2;
+ uint64_t src2_addr;
+ uint64_t comp_pattern;
+ };
+ union {
+ uint32_t xfer_size;
+ uint32_t desc_count;
+ };
+ uint16_t int_handle;
+ uint16_t rsvd1;
+ union {
+ uint8_t expected_res;
+ /* create delta record */
+ struct {
+ uint64_t delta_addr;
+ uint32_t max_delta_size;
+ uint32_t delt_rsvd;
+ uint8_t expected_res_mask;
+ };
+ uint32_t delta_rec_size;
+ uint64_t dest2;
+ /* CRC */
+ struct {
+ uint32_t crc_seed;
+ uint32_t crc_rsvd;
+ uint64_t seed_addr;
+ };
+ /* DIF check or strip */
+ struct {
+ uint8_t src_dif_flags;
+ uint8_t dif_chk_res;
+ uint8_t dif_chk_flags;
+ uint8_t dif_chk_res2[5];
+ uint32_t chk_ref_tag_seed;
+ uint16_t chk_app_tag_mask;
+ uint16_t chk_app_tag_seed;
+ };
+ /* DIF insert */
+ struct {
+ uint8_t dif_ins_res;
+ uint8_t dest_dif_flag;
+ uint8_t dif_ins_flags;
+ uint8_t dif_ins_res2[13];
+ uint32_t ins_ref_tag_seed;
+ uint16_t ins_app_tag_mask;
+ uint16_t ins_app_tag_seed;
+ };
+ /* DIF update */
+ struct {
+ uint8_t src_upd_flags;
+ uint8_t upd_dest_flags;
+ uint8_t dif_upd_flags;
+ uint8_t dif_upd_res[5];
+ uint32_t src_ref_tag_seed;
+ uint16_t src_app_tag_mask;
+ uint16_t src_app_tag_seed;
+ uint32_t dest_ref_tag_seed;
+ uint16_t dest_app_tag_mask;
+ uint16_t dest_app_tag_seed;
+ };
+
+ uint8_t op_specific[24];
+ };
+} __attribute__((packed));
+
+struct iax_hw_desc {
+ uint32_t pasid:20;
+ uint32_t rsvd:11;
+ uint32_t priv:1;
+ uint32_t flags:24;
+ uint32_t opcode:8;
+ uint64_t completion_addr;
+ uint64_t src1_addr;
+ uint64_t dst_addr;
+ uint32_t src1_size;
+ uint16_t int_handle;
+ union {
+ uint16_t compr_flags;
+ uint16_t decompr_flags;
+ };
+ uint64_t src2_addr;
+ uint32_t max_dst_size;
+ uint32_t src2_size;
+ uint32_t filter_flags;
+ uint32_t num_inputs;
+} __attribute__((packed));
+
+struct dsa_raw_desc {
+ uint64_t field[8];
+} __attribute__((packed));
+
+/*
+ * The status field will be modified by hardware, therefore it should be
+ * volatile to prevent the compiler from optimizing the read.
+ */
+struct dsa_completion_record {
+ volatile uint8_t status;
+ union {
+ uint8_t result;
+ uint8_t dif_status;
+ };
+ uint16_t rsvd;
+ uint32_t bytes_completed;
+ uint64_t fault_addr;
+ union {
+ /* common record */
+ struct {
+ uint32_t invalid_flags:24;
+ uint32_t rsvd2:8;
+ };
+
+ uint32_t delta_rec_size;
+ uint64_t crc_val;
+
+ /* DIF check & strip */
+ struct {
+ uint32_t dif_chk_ref_tag;
+ uint16_t dif_chk_app_tag_mask;
+ uint16_t dif_chk_app_tag;
+ };
+
+ /* DIF insert */
+ struct {
+ uint64_t dif_ins_res;
+ uint32_t dif_ins_ref_tag;
+ uint16_t dif_ins_app_tag_mask;
+ uint16_t dif_ins_app_tag;
+ };
+
+ /* DIF update */
+ struct {
+ uint32_t dif_upd_src_ref_tag;
+ uint16_t dif_upd_src_app_tag_mask;
+ uint16_t dif_upd_src_app_tag;
+ uint32_t dif_upd_dest_ref_tag;
+ uint16_t dif_upd_dest_app_tag_mask;
+ uint16_t dif_upd_dest_app_tag;
+ };
+
+ uint8_t op_specific[16];
+ };
+} __attribute__((packed));
+
+struct dsa_raw_completion_record {
+ uint64_t field[4];
+} __attribute__((packed));
+
+struct iax_completion_record {
+ volatile uint8_t status;
+ uint8_t error_code;
+ uint16_t rsvd;
+ uint32_t bytes_completed;
+ uint64_t fault_addr;
+ uint32_t invalid_flags;
+ uint32_t rsvd2;
+ uint32_t output_size;
+ uint8_t output_bits;
+ uint8_t rsvd3;
+ uint16_t xor_csum;
+ uint32_t crc;
+ uint32_t min;
+ uint32_t max;
+ uint32_t sum;
+ uint64_t rsvd4[2];
+} __attribute__((packed));
+
+struct iax_raw_completion_record {
+ uint64_t field[8];
+} __attribute__((packed));
+
+#endif
--
2.30.2
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v3 07/20] util/dsa: Implement DSA device start and stop logic.
2024-01-04 0:44 [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration Hao Xiang
` (5 preceding siblings ...)
2024-01-04 0:44 ` [PATCH v3 06/20] util/dsa: Add dependency idxd Hao Xiang
@ 2024-01-04 0:44 ` Hao Xiang
2024-01-04 0:44 ` [PATCH v3 08/20] util/dsa: Implement DSA task enqueue and dequeue Hao Xiang
` (13 subsequent siblings)
20 siblings, 0 replies; 41+ messages in thread
From: Hao Xiang @ 2024-01-04 0:44 UTC (permalink / raw)
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Hao Xiang
* DSA device open and close.
* DSA group contains multiple DSA devices.
* DSA group configure/start/stop/clean.
Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
Signed-off-by: Bryan Zhang <bryan.zhang@bytedance.com>
---
include/qemu/dsa.h | 72 +++++++++++
util/dsa.c | 316 +++++++++++++++++++++++++++++++++++++++++++++
util/meson.build | 1 +
3 files changed, 389 insertions(+)
create mode 100644 include/qemu/dsa.h
create mode 100644 util/dsa.c
diff --git a/include/qemu/dsa.h b/include/qemu/dsa.h
new file mode 100644
index 0000000000..f15c05ee85
--- /dev/null
+++ b/include/qemu/dsa.h
@@ -0,0 +1,72 @@
+#ifndef QEMU_DSA_H
+#define QEMU_DSA_H
+
+#include "qemu/error-report.h"
+#include "qemu/thread.h"
+#include "qemu/queue.h"
+
+#ifdef CONFIG_DSA_OPT
+
+#pragma GCC push_options
+#pragma GCC target("enqcmd")
+
+#include <linux/idxd.h>
+#include "x86intrin.h"
+
+/**
+ * @brief Initializes DSA devices.
+ *
+ * @param dsa_parameter A list of DSA device path from migration parameter.
+ *
+ * @return int Zero if successful, otherwise non zero.
+ */
+int dsa_init(const char *dsa_parameter);
+
+/**
+ * @brief Start logic to enable using DSA.
+ */
+void dsa_start(void);
+
+/**
+ * @brief Stop the device group and the completion thread.
+ */
+void dsa_stop(void);
+
+/**
+ * @brief Clean up system resources created for DSA offloading.
+ */
+void dsa_cleanup(void);
+
+/**
+ * @brief Check if DSA is running.
+ *
+ * @return True if DSA is running, otherwise false.
+ */
+bool dsa_is_running(void);
+
+#else
+
+static inline bool dsa_is_running(void)
+{
+ return false;
+}
+
+static inline int dsa_init(const char *dsa_parameter)
+{
+ if (dsa_parameter != NULL && strlen(dsa_parameter) != 0) {
+ error_report("DSA not supported.");
+ return -1;
+ }
+
+ return 0;
+}
+
+static inline void dsa_start(void) {}
+
+static inline void dsa_stop(void) {}
+
+static inline void dsa_cleanup(void) {}
+
+#endif
+
+#endif
diff --git a/util/dsa.c b/util/dsa.c
new file mode 100644
index 0000000000..05bbf8e31a
--- /dev/null
+++ b/util/dsa.c
@@ -0,0 +1,316 @@
+/*
+ * Use Intel Data Streaming Accelerator to offload certain background
+ * operations.
+ *
+ * Copyright (c) 2023 Hao Xiang <hao.xiang@bytedance.com>
+ * Bryan Zhang <bryan.zhang@bytedance.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/queue.h"
+#include "qemu/memalign.h"
+#include "qemu/lockable.h"
+#include "qemu/cutils.h"
+#include "qemu/dsa.h"
+#include "qemu/bswap.h"
+#include "qemu/error-report.h"
+#include "qemu/rcu.h"
+
+#ifdef CONFIG_DSA_OPT
+
+#pragma GCC push_options
+#pragma GCC target("enqcmd")
+
+#include <linux/idxd.h>
+#include "x86intrin.h"
+
+#define DSA_WQ_SIZE 4096
+#define MAX_DSA_DEVICES 16
+
+typedef QSIMPLEQ_HEAD(dsa_task_queue, dsa_batch_task) dsa_task_queue;
+
+struct dsa_device {
+ void *work_queue;
+};
+
+struct dsa_device_group {
+ struct dsa_device *dsa_devices;
+ int num_dsa_devices;
+ /* The index of the next DSA device to be used. */
+ uint32_t device_allocator_index;
+ bool running;
+ QemuMutex task_queue_lock;
+ QemuCond task_queue_cond;
+ dsa_task_queue task_queue;
+};
+
+uint64_t max_retry_count;
+static struct dsa_device_group dsa_group;
+
+
+/**
+ * @brief This function opens a DSA device's work queue and
+ * maps the DSA device memory into the current process.
+ *
+ * @param dsa_wq_path A pointer to the DSA device work queue's file path.
+ * @return A pointer to the mapped memory, or MAP_FAILED on failure.
+ */
+static void *
+map_dsa_device(const char *dsa_wq_path)
+{
+ void *dsa_device;
+ int fd;
+
+ fd = open(dsa_wq_path, O_RDWR);
+ if (fd < 0) {
+ error_report("Open %s failed with errno = %d.",
+ dsa_wq_path, errno);
+ return MAP_FAILED;
+ }
+ dsa_device = mmap(NULL, DSA_WQ_SIZE, PROT_WRITE,
+ MAP_SHARED | MAP_POPULATE, fd, 0);
+ close(fd);
+ if (dsa_device == MAP_FAILED) {
+ error_report("mmap failed with errno = %d.", errno);
+ return MAP_FAILED;
+ }
+ return dsa_device;
+}
+
+/**
+ * @brief Initializes a DSA device structure.
+ *
+ * @param instance A pointer to the DSA device.
+ * @param dsa_work_queue A pointer to the DSA work queue.
+ */
+static void
+dsa_device_init(struct dsa_device *instance,
+ void *dsa_work_queue)
+{
+ instance->work_queue = dsa_work_queue;
+}
+
+/**
+ * @brief Cleans up a DSA device structure.
+ *
+ * @param instance A pointer to the DSA device to cleanup.
+ */
+static void
+dsa_device_cleanup(struct dsa_device *instance)
+{
+ if (instance->work_queue != MAP_FAILED) {
+ munmap(instance->work_queue, DSA_WQ_SIZE);
+ }
+}
+
+/**
+ * @brief Initializes a DSA device group.
+ *
+ * @param group A pointer to the DSA device group.
+ * @param dsa_parameter A list of DSA device paths from the migration
+ * parameter. Multiple DSA device paths are separated by spaces.
+ *
+ * @return Zero if successful, non-zero otherwise.
+ */
+static int
+dsa_device_group_init(struct dsa_device_group *group,
+ const char *dsa_parameter)
+{
+ if (dsa_parameter == NULL || strlen(dsa_parameter) == 0) {
+ return 0;
+ }
+
+ int ret = 0;
+ char *local_dsa_parameter = g_strdup(dsa_parameter);
+ const char *dsa_path[MAX_DSA_DEVICES];
+ int num_dsa_devices = 0;
+ char delim[2] = " ";
+
+ char *current_dsa_path = strtok(local_dsa_parameter, delim);
+
+ while (current_dsa_path != NULL) {
+ dsa_path[num_dsa_devices++] = current_dsa_path;
+ if (num_dsa_devices == MAX_DSA_DEVICES) {
+ break;
+ }
+ current_dsa_path = strtok(NULL, delim);
+ }
+
+ group->dsa_devices =
+ g_new0(struct dsa_device, num_dsa_devices);
+ group->num_dsa_devices = num_dsa_devices;
+ group->device_allocator_index = 0;
+
+ group->running = false;
+ qemu_mutex_init(&group->task_queue_lock);
+ qemu_cond_init(&group->task_queue_cond);
+ QSIMPLEQ_INIT(&group->task_queue);
+
+ void *dsa_wq = MAP_FAILED;
+ for (int i = 0; i < num_dsa_devices; i++) {
+ dsa_wq = map_dsa_device(dsa_path[i]);
+ if (dsa_wq == MAP_FAILED) {
+ error_report("map_dsa_device failed MAP_FAILED.");
+ ret = -1;
+ goto exit;
+ }
+ dsa_device_init(&group->dsa_devices[i], dsa_wq);
+ }
+
+exit:
+ g_free(local_dsa_parameter);
+ return ret;
+}
+
+/**
+ * @brief Starts a DSA device group.
+ *
+ * @param group A pointer to the DSA device group.
+ */
+static void
+dsa_device_group_start(struct dsa_device_group *group)
+{
+ group->running = true;
+}
+
+/**
+ * @brief Stops a DSA device group.
+ *
+ * @param group A pointer to the DSA device group.
+ */
+__attribute__((unused))
+static void
+dsa_device_group_stop(struct dsa_device_group *group)
+{
+ group->running = false;
+}
+
+/**
+ * @brief Cleans up a DSA device group.
+ *
+ * @param group A pointer to the DSA device group.
+ */
+static void
+dsa_device_group_cleanup(struct dsa_device_group *group)
+{
+ if (!group->dsa_devices) {
+ return;
+ }
+ for (int i = 0; i < group->num_dsa_devices; i++) {
+ dsa_device_cleanup(&group->dsa_devices[i]);
+ }
+ g_free(group->dsa_devices);
+ group->dsa_devices = NULL;
+
+ qemu_mutex_destroy(&group->task_queue_lock);
+ qemu_cond_destroy(&group->task_queue_cond);
+}
+
+/**
+ * @brief Returns the next available DSA device in the group.
+ *
+ * @param group A pointer to the DSA device group.
+ *
+ * @return struct dsa_device* A pointer to the next available DSA device
+ * in the group.
+ */
+__attribute__((unused))
+static struct dsa_device *
+dsa_device_group_get_next_device(struct dsa_device_group *group)
+{
+ if (group->num_dsa_devices == 0) {
+ return NULL;
+ }
+ uint32_t current = qatomic_fetch_inc(&group->device_allocator_index);
+ current %= group->num_dsa_devices;
+ return &group->dsa_devices[current];
+}
+
+/**
+ * @brief Check if DSA is running.
+ *
+ * @return True if DSA is running, otherwise false.
+ */
+bool dsa_is_running(void)
+{
+ return false;
+}
+
+static void
+dsa_globals_init(void)
+{
+ max_retry_count = UINT64_MAX;
+}
+
+/**
+ * @brief Initializes DSA devices.
+ *
+ * @param dsa_parameter A list of DSA device path from migration parameter.
+ *
+ * @return int Zero if successful, otherwise non zero.
+ */
+int dsa_init(const char *dsa_parameter)
+{
+ dsa_globals_init();
+
+ return dsa_device_group_init(&dsa_group, dsa_parameter);
+}
+
+/**
+ * @brief Start logic to enable using DSA.
+ *
+ */
+void dsa_start(void)
+{
+ if (dsa_group.num_dsa_devices == 0) {
+ return;
+ }
+ if (dsa_group.running) {
+ return;
+ }
+ dsa_device_group_start(&dsa_group);
+}
+
+/**
+ * @brief Stop the device group and the completion thread.
+ *
+ */
+void dsa_stop(void)
+{
+ struct dsa_device_group *group = &dsa_group;
+
+ if (!group->running) {
+ return;
+ }
+}
+
+/**
+ * @brief Clean up system resources created for DSA offloading.
+ *
+ */
+void dsa_cleanup(void)
+{
+ dsa_stop();
+ dsa_device_group_cleanup(&dsa_group);
+}
+
+#endif
+
diff --git a/util/meson.build b/util/meson.build
index 174c133368..7125308532 100644
--- a/util/meson.build
+++ b/util/meson.build
@@ -85,6 +85,7 @@ if have_block or have_ga
endif
if have_block
util_ss.add(files('aio-wait.c'))
+ util_ss.add(files('dsa.c'))
util_ss.add(files('buffer.c'))
util_ss.add(files('bufferiszero.c'))
util_ss.add(files('hbitmap.c'))
--
2.30.2
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v3 08/20] util/dsa: Implement DSA task enqueue and dequeue.
2024-01-04 0:44 [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration Hao Xiang
` (6 preceding siblings ...)
2024-01-04 0:44 ` [PATCH v3 07/20] util/dsa: Implement DSA device start and stop logic Hao Xiang
@ 2024-01-04 0:44 ` Hao Xiang
2024-01-04 0:44 ` [PATCH v3 09/20] util/dsa: Implement DSA task asynchronous completion thread model Hao Xiang
` (12 subsequent siblings)
20 siblings, 0 replies; 41+ messages in thread
From: Hao Xiang @ 2024-01-04 0:44 UTC (permalink / raw)
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Hao Xiang
* Use a thread-safe queue for DSA task enqueue/dequeue.
* Implement DSA task submission.
* Implement DSA batch task submission.
Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
---
include/qemu/dsa.h | 28 +++++++
util/dsa.c | 201 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 229 insertions(+)
diff --git a/include/qemu/dsa.h b/include/qemu/dsa.h
index f15c05ee85..37cae8d9d2 100644
--- a/include/qemu/dsa.h
+++ b/include/qemu/dsa.h
@@ -13,6 +13,34 @@
#include <linux/idxd.h>
#include "x86intrin.h"
+typedef enum DsaTaskType {
+ DSA_TASK = 0,
+ DSA_BATCH_TASK
+} DsaTaskType;
+
+typedef enum DsaTaskStatus {
+ DSA_TASK_READY = 0,
+ DSA_TASK_PROCESSING,
+ DSA_TASK_COMPLETION
+} DsaTaskStatus;
+
+typedef void (*dsa_completion_fn)(void *);
+
+typedef struct dsa_batch_task {
+ struct dsa_hw_desc batch_descriptor;
+ struct dsa_hw_desc *descriptors;
+ struct dsa_completion_record batch_completion __attribute__((aligned(32)));
+ struct dsa_completion_record *completions;
+ struct dsa_device_group *group;
+ struct dsa_device *device;
+ dsa_completion_fn completion_callback;
+ QemuSemaphore sem_task_complete;
+ DsaTaskType task_type;
+ DsaTaskStatus status;
+ int batch_size;
+ QSIMPLEQ_ENTRY(dsa_batch_task) entry;
+} dsa_batch_task;
+
/**
* @brief Initializes DSA devices.
*
diff --git a/util/dsa.c b/util/dsa.c
index 05bbf8e31a..75739a1af6 100644
--- a/util/dsa.c
+++ b/util/dsa.c
@@ -244,6 +244,205 @@ dsa_device_group_get_next_device(struct dsa_device_group *group)
return &group->dsa_devices[current];
}
+/**
+ * @brief Empties out the DSA task queue.
+ *
+ * @param group A pointer to the DSA device group.
+ */
+static void
+dsa_empty_task_queue(struct dsa_device_group *group)
+{
+ qemu_mutex_lock(&group->task_queue_lock);
+ dsa_task_queue *task_queue = &group->task_queue;
+ while (!QSIMPLEQ_EMPTY(task_queue)) {
+ QSIMPLEQ_REMOVE_HEAD(task_queue, entry);
+ }
+ qemu_mutex_unlock(&group->task_queue_lock);
+}
+
+/**
+ * @brief Adds a task to the DSA task queue.
+ *
+ * @param group A pointer to the DSA device group.
+ * @param context A pointer to the DSA task to enqueue.
+ *
+ * @return int Zero if successful, otherwise a proper error code.
+ */
+static int
+dsa_task_enqueue(struct dsa_device_group *group,
+ struct dsa_batch_task *task)
+{
+ dsa_task_queue *task_queue = &group->task_queue;
+ QemuMutex *task_queue_lock = &group->task_queue_lock;
+ QemuCond *task_queue_cond = &group->task_queue_cond;
+
+ bool notify = false;
+
+ qemu_mutex_lock(task_queue_lock);
+
+ if (!group->running) {
+ error_report("DSA: Tried to queue task to stopped device queue.");
+ qemu_mutex_unlock(task_queue_lock);
+ return -1;
+ }
+
+ /* The queue is empty. This enqueue operation is a 0->1 transition. */
+ if (QSIMPLEQ_EMPTY(task_queue)) {
+ notify = true;
+ }
+
+ QSIMPLEQ_INSERT_TAIL(task_queue, task, entry);
+
+ /* We need to notify the waiter for 0->1 transitions. */
+ if (notify) {
+ qemu_cond_signal(task_queue_cond);
+ }
+
+ qemu_mutex_unlock(task_queue_lock);
+
+ return 0;
+}
+
+/**
+ * @brief Takes a DSA task out of the task queue.
+ *
+ * @param group A pointer to the DSA device group.
+ * @return dsa_batch_task* The DSA task being dequeued.
+ */
+__attribute__((unused))
+static struct dsa_batch_task *
+dsa_task_dequeue(struct dsa_device_group *group)
+{
+ struct dsa_batch_task *task = NULL;
+ dsa_task_queue *task_queue = &group->task_queue;
+ QemuMutex *task_queue_lock = &group->task_queue_lock;
+ QemuCond *task_queue_cond = &group->task_queue_cond;
+
+ qemu_mutex_lock(task_queue_lock);
+
+ while (true) {
+ if (!group->running) {
+ goto exit;
+ }
+ task = QSIMPLEQ_FIRST(task_queue);
+ if (task != NULL) {
+ break;
+ }
+ qemu_cond_wait(task_queue_cond, task_queue_lock);
+ }
+
+ QSIMPLEQ_REMOVE_HEAD(task_queue, entry);
+
+exit:
+ qemu_mutex_unlock(task_queue_lock);
+ return task;
+}
+
+/**
+ * @brief Submits a DSA work item to the device work queue.
+ *
+ * @param wq A pointer to the DSA work queue's device memory.
+ * @param descriptor A pointer to the DSA work item descriptor.
+ *
+ * @return Zero if successful, non-zero otherwise.
+ */
+static int
+submit_wi_int(void *wq, struct dsa_hw_desc *descriptor)
+{
+ uint64_t retry = 0;
+
+ _mm_sfence();
+
+ while (true) {
+ if (_enqcmd(wq, descriptor) == 0) {
+ break;
+ }
+ retry++;
+ if (retry > max_retry_count) {
+ error_report("Submit work retry %lu times.", retry);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * @brief Synchronously submits a DSA work item to the
+ * device work queue.
+ *
+ * @param wq A pointer to the DSA work queue's device memory.
+ * @param descriptor A pointer to the DSA work item descriptor.
+ *
+ * @return int Zero if successful, non-zero otherwise.
+ */
+__attribute__((unused))
+static int
+submit_wi(void *wq, struct dsa_hw_desc *descriptor)
+{
+ return submit_wi_int(wq, descriptor);
+}
+
+/**
+ * @brief Asynchronously submits a DSA work item to the
+ * device work queue.
+ *
+ * @param task A pointer to the buffer zero task.
+ *
+ * @return int Zero if successful, non-zero otherwise.
+ */
+__attribute__((unused))
+static int
+submit_wi_async(struct dsa_batch_task *task)
+{
+ struct dsa_device_group *device_group = task->group;
+ struct dsa_device *device_instance = task->device;
+ int ret;
+
+ assert(task->task_type == DSA_TASK);
+
+ task->status = DSA_TASK_PROCESSING;
+
+ ret = submit_wi_int(device_instance->work_queue,
+ &task->descriptors[0]);
+ if (ret != 0) {
+ return ret;
+ }
+
+ return dsa_task_enqueue(device_group, task);
+}
+
+/**
+ * @brief Asynchronously submits a DSA batch work item to the
+ * device work queue.
+ *
+ * @param batch_task A pointer to the batch buffer zero task.
+ *
+ * @return int Zero if successful, non-zero otherwise.
+ */
+__attribute__((unused))
+static int
+submit_batch_wi_async(struct dsa_batch_task *batch_task)
+{
+ struct dsa_device_group *device_group = batch_task->group;
+ struct dsa_device *device_instance = batch_task->device;
+ int ret;
+
+ assert(batch_task->task_type == DSA_BATCH_TASK);
+ assert(batch_task->batch_descriptor.desc_count <= batch_task->batch_size);
+ assert(batch_task->status == DSA_TASK_READY);
+
+ batch_task->status = DSA_TASK_PROCESSING;
+
+ ret = submit_wi_int(device_instance->work_queue,
+ &batch_task->batch_descriptor);
+ if (ret != 0) {
+ return ret;
+ }
+
+ return dsa_task_enqueue(device_group, batch_task);
+}
+
/**
* @brief Check if DSA is running.
*
@@ -300,6 +499,8 @@ void dsa_stop(void)
if (!group->running) {
return;
}
+
+ dsa_empty_task_queue(group);
}
/**
--
2.30.2
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v3 09/20] util/dsa: Implement DSA task asynchronous completion thread model.
2024-01-04 0:44 [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration Hao Xiang
` (7 preceding siblings ...)
2024-01-04 0:44 ` [PATCH v3 08/20] util/dsa: Implement DSA task enqueue and dequeue Hao Xiang
@ 2024-01-04 0:44 ` Hao Xiang
2024-01-04 0:44 ` [PATCH v3 10/20] util/dsa: Implement zero page checking in DSA task Hao Xiang
` (11 subsequent siblings)
20 siblings, 0 replies; 41+ messages in thread
From: Hao Xiang @ 2024-01-04 0:44 UTC (permalink / raw)
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Hao Xiang
* Create a dedicated thread for DSA task completion.
* The DSA completion thread runs a loop and polls for completed tasks.
* Start and stop the DSA completion thread during DSA device start/stop.
A user space application can submit a task directly to the Intel DSA
accelerator by writing to DSA's device memory (mapped in user space).
Once a task is submitted, the device starts processing it and writes
the completion status back to the task. A user space application can
poll the task's completion status to check for completion. This change
uses a dedicated thread to perform DSA task completion checking.
Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
---
include/qemu/dsa.h | 1 +
util/dsa.c | 274 ++++++++++++++++++++++++++++++++++++++++++++-
2 files changed, 274 insertions(+), 1 deletion(-)
diff --git a/include/qemu/dsa.h b/include/qemu/dsa.h
index 37cae8d9d2..2513192a2b 100644
--- a/include/qemu/dsa.h
+++ b/include/qemu/dsa.h
@@ -38,6 +38,7 @@ typedef struct dsa_batch_task {
DsaTaskType task_type;
DsaTaskStatus status;
int batch_size;
+ bool *results;
QSIMPLEQ_ENTRY(dsa_batch_task) entry;
} dsa_batch_task;
diff --git a/util/dsa.c b/util/dsa.c
index 75739a1af6..003c4f47d9 100644
--- a/util/dsa.c
+++ b/util/dsa.c
@@ -44,6 +44,7 @@
#define DSA_WQ_SIZE 4096
#define MAX_DSA_DEVICES 16
+#define DSA_COMPLETION_THREAD "dsa_completion"
typedef QSIMPLEQ_HEAD(dsa_task_queue, dsa_batch_task) dsa_task_queue;
@@ -62,8 +63,18 @@ struct dsa_device_group {
dsa_task_queue task_queue;
};
+struct dsa_completion_thread {
+ bool stopping;
+ bool running;
+ QemuThread thread;
+ int thread_id;
+ QemuSemaphore sem_init_done;
+ struct dsa_device_group *group;
+};
+
uint64_t max_retry_count;
static struct dsa_device_group dsa_group;
+static struct dsa_completion_thread completion_thread;
/**
@@ -443,6 +454,265 @@ submit_batch_wi_async(struct dsa_batch_task *batch_task)
return dsa_task_enqueue(device_group, batch_task);
}
+/**
+ * @brief Poll for the DSA work item completion.
+ *
+ * @param completion A pointer to the DSA work item completion record.
+ * @param opcode The DSA opcode.
+ *
+ * @return Zero if successful, non-zero otherwise.
+ */
+static int
+poll_completion(struct dsa_completion_record *completion,
+ enum dsa_opcode opcode)
+{
+ uint8_t status;
+ uint64_t retry = 0;
+
+ while (true) {
+ /* The DSA operation completes successfully or fails. */
+ status = completion->status;
+ if (status == DSA_COMP_SUCCESS ||
+ status == DSA_COMP_PAGE_FAULT_NOBOF ||
+ status == DSA_COMP_BATCH_PAGE_FAULT ||
+ status == DSA_COMP_BATCH_FAIL) {
+ break;
+ } else if (status != DSA_COMP_NONE) {
+ error_report("DSA opcode %d failed with status = %d.",
+ opcode, status);
+ return 1;
+ }
+ retry++;
+ if (retry > max_retry_count) {
+ error_report("DSA wait for completion retry %lu times.", retry);
+ return 1;
+ }
+ _mm_pause();
+ }
+
+ return 0;
+}
+
+/**
+ * @brief Complete a single DSA task in the batch task.
+ *
+ * @param task A pointer to the batch task structure.
+ *
+ * @return Zero if successful, otherwise non-zero.
+ */
+static int
+poll_task_completion(struct dsa_batch_task *task)
+{
+ assert(task->task_type == DSA_TASK);
+
+ struct dsa_completion_record *completion = &task->completions[0];
+ uint8_t status;
+ int ret;
+
+ ret = poll_completion(completion, task->descriptors[0].opcode);
+ if (ret != 0) {
+ goto exit;
+ }
+
+ status = completion->status;
+ if (status == DSA_COMP_SUCCESS) {
+ task->results[0] = (completion->result == 0);
+ goto exit;
+ }
+
+ assert(status == DSA_COMP_PAGE_FAULT_NOBOF);
+
+exit:
+ return ret;
+}
+
+/**
+ * @brief Polls a batch task's status until it completes. If the DSA
+ * task doesn't complete properly, use the CPU to complete the task.
+ *
+ * @param batch_task A pointer to the DSA batch task.
+ *
+ * @return Zero if successful, otherwise non-zero.
+ */
+static int
+poll_batch_task_completion(struct dsa_batch_task *batch_task)
+{
+ struct dsa_completion_record *batch_completion =
+ &batch_task->batch_completion;
+ struct dsa_completion_record *completion;
+ uint8_t batch_status;
+ uint8_t status;
+ bool *results = batch_task->results;
+ uint32_t count = batch_task->batch_descriptor.desc_count;
+ int ret;
+
+ ret = poll_completion(batch_completion,
+ batch_task->batch_descriptor.opcode);
+ if (ret != 0) {
+ goto exit;
+ }
+
+ batch_status = batch_completion->status;
+
+ if (batch_status == DSA_COMP_SUCCESS) {
+ if (batch_completion->bytes_completed == count) {
+ /*
+ * Let's skip checking each descriptor's completion status
+ * if the batch descriptor says all succeeded.
+ */
+ for (int i = 0; i < count; i++) {
+ assert(batch_task->completions[i].status == DSA_COMP_SUCCESS);
+ results[i] = (batch_task->completions[i].result == 0);
+ }
+ goto exit;
+ }
+ } else {
+ assert(batch_status == DSA_COMP_BATCH_FAIL ||
+ batch_status == DSA_COMP_BATCH_PAGE_FAULT);
+ }
+
+ for (int i = 0; i < count; i++) {
+
+ completion = &batch_task->completions[i];
+ status = completion->status;
+
+ if (status == DSA_COMP_SUCCESS) {
+ results[i] = (completion->result == 0);
+ continue;
+ }
+
+ assert(status == DSA_COMP_PAGE_FAULT_NOBOF);
+
+ if (status != DSA_COMP_PAGE_FAULT_NOBOF) {
+ error_report("Unexpected DSA completion status = %u.", status);
+ ret = 1;
+ goto exit;
+ }
+ }
+
+exit:
+ return ret;
+}
+
+/**
+ * @brief Handles an asynchronous DSA batch task completion.
+ *
+ * @param task A pointer to the batch buffer zero task structure.
+ */
+static void
+dsa_batch_task_complete(struct dsa_batch_task *batch_task)
+{
+ batch_task->status = DSA_TASK_COMPLETION;
+ batch_task->completion_callback(batch_task);
+}
+
+/**
+ * @brief The function entry point called by a dedicated DSA
+ * work item completion thread.
+ *
+ * @param opaque A pointer to the thread context.
+ *
+ * @return void* Not used.
+ */
+static void *
+dsa_completion_loop(void *opaque)
+{
+ struct dsa_completion_thread *thread_context =
+ (struct dsa_completion_thread *)opaque;
+ struct dsa_batch_task *batch_task;
+ struct dsa_device_group *group = thread_context->group;
+ int ret = 0;
+
+ rcu_register_thread();
+
+ thread_context->thread_id = qemu_get_thread_id();
+ qemu_sem_post(&thread_context->sem_init_done);
+
+ while (thread_context->running) {
+ batch_task = dsa_task_dequeue(group);
+ assert(batch_task != NULL || !group->running);
+ if (!group->running) {
+ assert(!thread_context->running);
+ break;
+ }
+ if (batch_task->task_type == DSA_TASK) {
+ ret = poll_task_completion(batch_task);
+ } else {
+ assert(batch_task->task_type == DSA_BATCH_TASK);
+ ret = poll_batch_task_completion(batch_task);
+ }
+
+ if (ret != 0) {
+ goto exit;
+ }
+
+ dsa_batch_task_complete(batch_task);
+ }
+
+exit:
+ if (ret != 0) {
+ error_report("DSA completion thread exited due to internal error.");
+ }
+ rcu_unregister_thread();
+ return NULL;
+}
+
+/**
+ * @brief Initializes a DSA completion thread.
+ *
+ * @param completion_thread A pointer to the completion thread context.
+ * @param group A pointer to the DSA device group.
+ */
+static void
+dsa_completion_thread_init(
+ struct dsa_completion_thread *completion_thread,
+ struct dsa_device_group *group)
+{
+ completion_thread->stopping = false;
+ completion_thread->running = true;
+ completion_thread->thread_id = -1;
+ qemu_sem_init(&completion_thread->sem_init_done, 0);
+ completion_thread->group = group;
+
+ qemu_thread_create(&completion_thread->thread,
+ DSA_COMPLETION_THREAD,
+ dsa_completion_loop,
+ completion_thread,
+ QEMU_THREAD_JOINABLE);
+
+ /* Wait for initialization to complete */
+ qemu_sem_wait(&completion_thread->sem_init_done);
+}
+
+/**
+ * @brief Stops the completion thread (and implicitly, the device group).
+ *
+ * @param opaque A pointer to the completion thread.
+ */
+static void dsa_completion_thread_stop(void *opaque)
+{
+ struct dsa_completion_thread *thread_context =
+ (struct dsa_completion_thread *)opaque;
+
+ struct dsa_device_group *group = thread_context->group;
+
+ qemu_mutex_lock(&group->task_queue_lock);
+
+ thread_context->stopping = true;
+ thread_context->running = false;
+
+ /* Prevent the compiler from setting group->running first. */
+ barrier();
+ dsa_device_group_stop(group);
+
+ qemu_cond_signal(&group->task_queue_cond);
+ qemu_mutex_unlock(&group->task_queue_lock);
+
+ qemu_thread_join(&thread_context->thread);
+
+ qemu_sem_destroy(&thread_context->sem_init_done);
+}
+
/**
* @brief Check if DSA is running.
*
@@ -450,7 +720,7 @@ submit_batch_wi_async(struct dsa_batch_task *batch_task)
*/
bool dsa_is_running(void)
{
- return false;
+ return completion_thread.running;
}
static void
@@ -486,6 +756,7 @@ void dsa_start(void)
return;
}
dsa_device_group_start(&dsa_group);
+ dsa_completion_thread_init(&completion_thread, &dsa_group);
}
/**
@@ -500,6 +771,7 @@ void dsa_stop(void)
return;
}
+ dsa_completion_thread_stop(&completion_thread);
dsa_empty_task_queue(group);
}
--
2.30.2
* [PATCH v3 10/20] util/dsa: Implement zero page checking in DSA task.
2024-01-04 0:44 [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration Hao Xiang
` (8 preceding siblings ...)
2024-01-04 0:44 ` [PATCH v3 09/20] util/dsa: Implement DSA task asynchronous completion thread model Hao Xiang
@ 2024-01-04 0:44 ` Hao Xiang
2024-01-04 0:44 ` [PATCH v3 11/20] util/dsa: Implement DSA task asynchronous submission and wait for completion Hao Xiang
` (10 subsequent siblings)
20 siblings, 0 replies; 41+ messages in thread
From: Hao Xiang @ 2024-01-04 0:44 UTC (permalink / raw)
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Hao Xiang
Create DSA tasks with operation code DSA_OPCODE_COMPVAL.
Here we create two types of DSA tasks: a single DSA task and
a batch DSA task. A batch DSA task reduces task submission overhead
and hence should be the default option. However, due to the way the
DSA hardware works, a DSA batch task must contain at least two
individual tasks. There are times when we need to submit a single
task, so single DSA task submission is also required.
Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
Signed-off-by: Bryan Zhang <bryan.zhang@bytedance.com>
---
include/qemu/dsa.h | 18 ++++
util/dsa.c | 247 +++++++++++++++++++++++++++++++++++++++++----
2 files changed, 244 insertions(+), 21 deletions(-)
diff --git a/include/qemu/dsa.h b/include/qemu/dsa.h
index 2513192a2b..645e6fc367 100644
--- a/include/qemu/dsa.h
+++ b/include/qemu/dsa.h
@@ -73,6 +73,24 @@ void dsa_cleanup(void);
*/
bool dsa_is_running(void);
+/**
+ * @brief Initializes a buffer zero batch task.
+ *
+ * @param task A pointer to the batch task to initialize.
+ * @param results A pointer to an array of zero page checking results.
+ * @param batch_size The number of DSA tasks in the batch.
+ */
+void
+buffer_zero_batch_task_init(struct dsa_batch_task *task,
+ bool *results, int batch_size);
+
+/**
+ * @brief Performs the proper cleanup on a DSA batch task.
+ *
+ * @param task A pointer to the batch task to cleanup.
+ */
+void buffer_zero_batch_task_destroy(struct dsa_batch_task *task);
+
#else
static inline bool dsa_is_running(void)
diff --git a/util/dsa.c b/util/dsa.c
index 003c4f47d9..9db4cfcf1d 100644
--- a/util/dsa.c
+++ b/util/dsa.c
@@ -76,6 +76,7 @@ uint64_t max_retry_count;
static struct dsa_device_group dsa_group;
static struct dsa_completion_thread completion_thread;
+static void buffer_zero_dsa_completion(void *context);
/**
* @brief This function opens a DSA device's work queue and
@@ -207,7 +208,6 @@ dsa_device_group_start(struct dsa_device_group *group)
*
* @param group A pointer to the DSA device group.
*/
-__attribute__((unused))
static void
dsa_device_group_stop(struct dsa_device_group *group)
{
@@ -243,7 +243,6 @@ dsa_device_group_cleanup(struct dsa_device_group *group)
* @return struct dsa_device* A pointer to the next available DSA device
* in the group.
*/
-__attribute__((unused))
static struct dsa_device *
dsa_device_group_get_next_device(struct dsa_device_group *group)
{
@@ -320,7 +319,6 @@ dsa_task_enqueue(struct dsa_device_group *group,
* @param group A pointer to the DSA device group.
* @return dsa_batch_task* The DSA task being dequeued.
*/
-__attribute__((unused))
static struct dsa_batch_task *
dsa_task_dequeue(struct dsa_device_group *group)
{
@@ -378,22 +376,6 @@ submit_wi_int(void *wq, struct dsa_hw_desc *descriptor)
return 0;
}
-/**
- * @brief Synchronously submits a DSA work item to the
- * device work queue.
- *
- * @param wq A pointer to the DSA work queue's device memory.
- * @param descriptor A pointer to the DSA work item descriptor.
- *
- * @return int Zero if successful, non-zero otherwise.
- */
-__attribute__((unused))
-static int
-submit_wi(void *wq, struct dsa_hw_desc *descriptor)
-{
- return submit_wi_int(wq, descriptor);
-}
-
/**
* @brief Asynchronously submits a DSA work item to the
* device work queue.
@@ -402,7 +384,6 @@ submit_wi(void *wq, struct dsa_hw_desc *descriptor)
*
* @return int Zero if successful, non-zero otherwise.
*/
-__attribute__((unused))
static int
submit_wi_async(struct dsa_batch_task *task)
{
@@ -431,7 +412,6 @@ submit_wi_async(struct dsa_batch_task *task)
*
* @return int Zero if successful, non-zero otherwise.
*/
-__attribute__((unused))
static int
submit_batch_wi_async(struct dsa_batch_task *batch_task)
{
@@ -713,6 +693,231 @@ static void dsa_completion_thread_stop(void *opaque)
qemu_sem_destroy(&thread_context->sem_init_done);
}
+/**
+ * @brief Initializes a buffer zero comparison DSA task.
+ *
+ * @param descriptor A pointer to the DSA task descriptor.
+ * @param completion A pointer to the DSA task completion record.
+ */
+static void
+buffer_zero_task_init_int(struct dsa_hw_desc *descriptor,
+ struct dsa_completion_record *completion)
+{
+ descriptor->opcode = DSA_OPCODE_COMPVAL;
+ descriptor->flags = IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CRAV;
+ descriptor->comp_pattern = (uint64_t)0;
+ descriptor->completion_addr = (uint64_t)completion;
+}
+
+/**
+ * @brief Initializes a buffer zero batch task.
+ *
+ * @param task A pointer to the batch task to initialize.
+ * @param results A pointer to an array of zero page checking results.
+ * @param batch_size The number of DSA tasks in the batch.
+ */
+void
+buffer_zero_batch_task_init(struct dsa_batch_task *task,
+ bool *results, int batch_size)
+{
+ int descriptors_size = sizeof(*task->descriptors) * batch_size;
+ memset(task, 0, sizeof(*task));
+
+ task->descriptors =
+ (struct dsa_hw_desc *)qemu_memalign(64, descriptors_size);
+ memset(task->descriptors, 0, descriptors_size);
+ task->completions = (struct dsa_completion_record *)qemu_memalign(
+ 32, sizeof(*task->completions) * batch_size);
+ task->results = results;
+ task->batch_size = batch_size;
+
+ task->batch_completion.status = DSA_COMP_NONE;
+ task->batch_descriptor.completion_addr = (uint64_t)&task->batch_completion;
+ /* TODO: Ensure that we never send a batch with count <= 1 */
+ task->batch_descriptor.desc_count = 0;
+ task->batch_descriptor.opcode = DSA_OPCODE_BATCH;
+ task->batch_descriptor.flags = IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CRAV;
+ task->batch_descriptor.desc_list_addr = (uintptr_t)task->descriptors;
+ task->status = DSA_TASK_READY;
+ task->group = &dsa_group;
+ task->device = dsa_device_group_get_next_device(&dsa_group);
+
+ for (int i = 0; i < task->batch_size; i++) {
+ buffer_zero_task_init_int(&task->descriptors[i],
+ &task->completions[i]);
+ }
+
+ qemu_sem_init(&task->sem_task_complete, 0);
+ task->completion_callback = buffer_zero_dsa_completion;
+}
+
+/**
+ * @brief Performs the proper cleanup on a DSA batch task.
+ *
+ * @param task A pointer to the batch task to cleanup.
+ */
+void
+buffer_zero_batch_task_destroy(struct dsa_batch_task *task)
+{
+ qemu_vfree(task->descriptors);
+ qemu_vfree(task->completions);
+ task->results = NULL;
+
+ qemu_sem_destroy(&task->sem_task_complete);
+}
+
+/**
+ * @brief Resets a buffer zero comparison DSA batch task.
+ *
+ * @param task A pointer to the batch task.
+ * @param count The number of DSA tasks this batch task will contain.
+ */
+static void
+buffer_zero_batch_task_reset(struct dsa_batch_task *task, size_t count)
+{
+ task->batch_completion.status = DSA_COMP_NONE;
+ task->batch_descriptor.desc_count = count;
+ task->task_type = DSA_BATCH_TASK;
+ task->status = DSA_TASK_READY;
+}
+
+/**
+ * @brief Sets a buffer zero comparison DSA task.
+ *
+ * @param descriptor A pointer to the DSA task descriptor.
+ * @param buf A pointer to the memory buffer.
+ * @param len The length of the buffer.
+ */
+static void
+buffer_zero_task_set_int(struct dsa_hw_desc *descriptor,
+ const void *buf,
+ size_t len)
+{
+ struct dsa_completion_record *completion =
+ (struct dsa_completion_record *)descriptor->completion_addr;
+
+ descriptor->xfer_size = len;
+ descriptor->src_addr = (uintptr_t)buf;
+ completion->status = 0;
+ completion->result = 0;
+}
+
+/**
+ * @brief Resets a buffer zero comparison DSA batch task.
+ *
+ * @param task A pointer to the DSA batch task.
+ */
+static void
+buffer_zero_task_reset(struct dsa_batch_task *task)
+{
+ task->completions[0].status = DSA_COMP_NONE;
+ task->task_type = DSA_TASK;
+ task->status = DSA_TASK_READY;
+}
+
+/**
+ * @brief Sets a buffer zero comparison DSA task.
+ *
+ * @param task A pointer to the DSA task.
+ * @param buf A pointer to the memory buffer.
+ * @param len The buffer length.
+ */
+static void
+buffer_zero_task_set(struct dsa_batch_task *task,
+ const void *buf,
+ size_t len)
+{
+ buffer_zero_task_reset(task);
+ buffer_zero_task_set_int(&task->descriptors[0], buf, len);
+}
+
+/**
+ * @brief Sets a buffer zero comparison batch task.
+ *
+ * @param batch_task A pointer to the batch task.
+ * @param buf An array of memory buffers.
+ * @param count The number of buffers in the array.
+ * @param len The length of the buffers.
+ */
+static void
+buffer_zero_batch_task_set(struct dsa_batch_task *batch_task,
+ const void **buf, size_t count, size_t len)
+{
+ assert(count > 0);
+ assert(count <= batch_task->batch_size);
+
+ buffer_zero_batch_task_reset(batch_task, count);
+ for (int i = 0; i < count; i++) {
+ buffer_zero_task_set_int(&batch_task->descriptors[i], buf[i], len);
+ }
+}
+
+/**
+ * @brief Asynchronously performs a buffer zero DSA operation.
+ *
+ * @param task A pointer to the batch task structure.
+ * @param buf A pointer to the memory buffer.
+ * @param len The length of the memory buffer.
+ *
+ * @return int Zero if successful, otherwise an appropriate error code.
+ */
+__attribute__((unused))
+static int
+buffer_zero_dsa_async(struct dsa_batch_task *task,
+ const void *buf, size_t len)
+{
+ buffer_zero_task_set(task, buf, len);
+
+ return submit_wi_async(task);
+}
+
+/**
+ * @brief Sends a memory comparison batch task to a DSA device and wait
+ * for completion.
+ *
+ * @param batch_task The batch task to be submitted to DSA device.
+ * @param buf An array of memory buffers to check for zero.
+ * @param count The number of buffers.
+ * @param len The buffer length.
+ */
+__attribute__((unused))
+static int
+buffer_zero_dsa_batch_async(struct dsa_batch_task *batch_task,
+ const void **buf, size_t count, size_t len)
+{
+ assert(count <= batch_task->batch_size);
+ buffer_zero_batch_task_set(batch_task, buf, count, len);
+
+ return submit_batch_wi_async(batch_task);
+}
+
+/**
+ * @brief The completion callback function for buffer zero
+ * comparison DSA task completion.
+ *
+ * @param context A pointer to the callback context.
+ */
+static void
+buffer_zero_dsa_completion(void *context)
+{
+ assert(context != NULL);
+
+ struct dsa_batch_task *task = (struct dsa_batch_task *)context;
+ qemu_sem_post(&task->sem_task_complete);
+}
+
+/**
+ * @brief Wait for the asynchronous DSA task to complete.
+ *
+ * @param batch_task A pointer to the buffer zero comparison batch task.
+ */
+__attribute__((unused))
+static void
+buffer_zero_dsa_wait(struct dsa_batch_task *batch_task)
+{
+ qemu_sem_wait(&batch_task->sem_task_complete);
+}
+
/**
* @brief Check if DSA is running.
*
--
2.30.2
* [PATCH v3 11/20] util/dsa: Implement DSA task asynchronous submission and wait for completion.
2024-01-04 0:44 [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration Hao Xiang
` (9 preceding siblings ...)
2024-01-04 0:44 ` [PATCH v3 10/20] util/dsa: Implement zero page checking in DSA task Hao Xiang
@ 2024-01-04 0:44 ` Hao Xiang
2024-03-08 10:10 ` Jonathan Cameron via
2024-01-04 0:44 ` [PATCH v3 12/20] migration/multifd: Add new migration option for multifd DSA offloading Hao Xiang
` (9 subsequent siblings)
20 siblings, 1 reply; 41+ messages in thread
From: Hao Xiang @ 2024-01-04 0:44 UTC (permalink / raw)
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Hao Xiang
* Add a DSA task completion callback.
* The DSA completion thread calls the task's completion callback
on every task/batch task completion.
* Make the DSA submission path wait for completion.
* Implement a CPU fallback if DSA is not able to complete the task.
Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
Signed-off-by: Bryan Zhang <bryan.zhang@bytedance.com>
---
include/qemu/dsa.h | 14 +++++
util/dsa.c | 147 ++++++++++++++++++++++++++++++++++++++++++++-
2 files changed, 158 insertions(+), 3 deletions(-)
diff --git a/include/qemu/dsa.h b/include/qemu/dsa.h
index 645e6fc367..e002652879 100644
--- a/include/qemu/dsa.h
+++ b/include/qemu/dsa.h
@@ -91,6 +91,20 @@ buffer_zero_batch_task_init(struct dsa_batch_task *task,
*/
void buffer_zero_batch_task_destroy(struct dsa_batch_task *task);
+/**
+ * @brief Performs buffer zero comparison on a DSA batch task asynchronously.
+ *
+ * @param batch_task A pointer to the batch task.
+ * @param buf An array of memory buffers.
+ * @param count The number of buffers in the array.
+ * @param len The buffer length.
+ *
+ * @return Zero if successful, otherwise non-zero.
+ */
+int
+buffer_is_zero_dsa_batch_async(struct dsa_batch_task *batch_task,
+ const void **buf, size_t count, size_t len);
+
#else
static inline bool dsa_is_running(void)
diff --git a/util/dsa.c b/util/dsa.c
index 9db4cfcf1d..5a2bf33651 100644
--- a/util/dsa.c
+++ b/util/dsa.c
@@ -473,6 +473,57 @@ poll_completion(struct dsa_completion_record *completion,
return 0;
}
+/**
+ * @brief Helper function to use CPU to complete a single
+ * zero page checking task.
+ *
+ * @param completion A pointer to a DSA task completion record.
+ * @param descriptor A pointer to a DSA task descriptor.
+ * @param result A pointer to the result of a zero page checking.
+ */
+static void
+task_cpu_fallback_int(struct dsa_completion_record *completion,
+ struct dsa_hw_desc *descriptor, bool *result)
+{
+ const uint8_t *buf;
+ size_t len;
+
+ if (completion->status == DSA_COMP_SUCCESS) {
+ return;
+ }
+
+ /*
+ * DSA was able to partially complete the operation. Check the
+ * result. If we already know this is not a zero page, we can
+ * return now.
+ */
+ if (completion->bytes_completed != 0 && completion->result != 0) {
+ *result = false;
+ return;
+ }
+
+ /* Let's fallback to use CPU to complete it. */
+ buf = (const uint8_t *)descriptor->src_addr;
+ len = descriptor->xfer_size;
+ *result = buffer_is_zero(buf + completion->bytes_completed,
+ len - completion->bytes_completed);
+}
+
+/**
+ * @brief Use CPU to complete a single zero page checking task.
+ *
+ * @param task A pointer to the task.
+ */
+static void
+task_cpu_fallback(struct dsa_batch_task *task)
+{
+ assert(task->task_type == DSA_TASK);
+
+ task_cpu_fallback_int(&task->completions[0],
+ &task->descriptors[0],
+ &task->results[0]);
+}
+
/**
* @brief Complete a single DSA task in the batch task.
*
@@ -574,6 +625,47 @@ exit:
return ret;
}
+/**
+ * @brief Use CPU to complete the zero page checking batch task.
+ *
+ * @param batch_task A pointer to the batch task.
+ */
+static void
+batch_task_cpu_fallback(struct dsa_batch_task *batch_task)
+{
+ assert(batch_task->task_type == DSA_BATCH_TASK);
+
+ struct dsa_completion_record *batch_completion =
+ &batch_task->batch_completion;
+ struct dsa_completion_record *completion;
+ uint8_t status;
+ bool *results = batch_task->results;
+ uint32_t count = batch_task->batch_descriptor.desc_count;
+
+ /* DSA is able to complete the entire batch task. */
+ if (batch_completion->status == DSA_COMP_SUCCESS) {
+ assert(count == batch_completion->bytes_completed);
+ return;
+ }
+
+ /*
+ * DSA encounters some error and is not able to complete
+ * the entire batch task. Use CPU fallback.
+ */
+ for (int i = 0; i < count; i++) {
+
+ completion = &batch_task->completions[i];
+ status = completion->status;
+
+ assert(status == DSA_COMP_SUCCESS ||
+ status == DSA_COMP_PAGE_FAULT_NOBOF);
+
+ task_cpu_fallback_int(completion,
+ &batch_task->descriptors[i],
+ &results[i]);
+ }
+}
+
/**
* @brief Handles an asynchronous DSA batch task completion.
*
@@ -861,7 +953,6 @@ buffer_zero_batch_task_set(struct dsa_batch_task *batch_task,
*
* @return int Zero if successful, otherwise an appropriate error code.
*/
-__attribute__((unused))
static int
buffer_zero_dsa_async(struct dsa_batch_task *task,
const void *buf, size_t len)
@@ -880,7 +971,6 @@ buffer_zero_dsa_async(struct dsa_batch_task *task,
* @param count The number of buffers.
* @param len The buffer length.
*/
-__attribute__((unused))
static int
buffer_zero_dsa_batch_async(struct dsa_batch_task *batch_task,
const void **buf, size_t count, size_t len)
@@ -911,13 +1001,29 @@ buffer_zero_dsa_completion(void *context)
*
* @param batch_task A pointer to the buffer zero comparison batch task.
*/
-__attribute__((unused))
static void
buffer_zero_dsa_wait(struct dsa_batch_task *batch_task)
{
qemu_sem_wait(&batch_task->sem_task_complete);
}
+/**
+ * @brief Use CPU to complete the zero page checking task if DSA
+ * is not able to complete it.
+ *
+ * @param batch_task A pointer to the batch task.
+ */
+static void
+buffer_zero_cpu_fallback(struct dsa_batch_task *batch_task)
+{
+ if (batch_task->task_type == DSA_TASK) {
+ task_cpu_fallback(batch_task);
+ } else {
+ assert(batch_task->task_type == DSA_BATCH_TASK);
+ batch_task_cpu_fallback(batch_task);
+ }
+}
+
/**
* @brief Check if DSA is running.
*
@@ -990,5 +1096,40 @@ void dsa_cleanup(void)
dsa_device_group_cleanup(&dsa_group);
}
+/**
+ * @brief Performs buffer zero comparison on a DSA batch task asynchronously.
+ *
+ * @param batch_task A pointer to the batch task.
+ * @param buf An array of memory buffers.
+ * @param count The number of buffers in the array.
+ * @param len The buffer length.
+ *
+ * @return Zero if successful, otherwise non-zero.
+ */
+int
+buffer_is_zero_dsa_batch_async(struct dsa_batch_task *batch_task,
+ const void **buf, size_t count, size_t len)
+{
+ assert(batch_task != NULL);
+ assert(len != 0);
+ assert(buf != NULL);
+
+ if (count == 0 || count > batch_task->batch_size) {
+ return -1;
+ }
+
+ if (count == 1) {
+ /* DSA doesn't take batch operation with only 1 task. */
+ buffer_zero_dsa_async(batch_task, buf[0], len);
+ } else {
+ buffer_zero_dsa_batch_async(batch_task, buf, count, len);
+ }
+
+ buffer_zero_dsa_wait(batch_task);
+ buffer_zero_cpu_fallback(batch_task);
+
+ return 0;
+}
+
#endif
--
2.30.2
* [PATCH v3 12/20] migration/multifd: Add new migration option for multifd DSA offloading.
2024-01-04 0:44 [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration Hao Xiang
` (10 preceding siblings ...)
2024-01-04 0:44 ` [PATCH v3 11/20] util/dsa: Implement DSA task asynchronous submission and wait for completion Hao Xiang
@ 2024-01-04 0:44 ` Hao Xiang
2024-01-04 0:44 ` [PATCH v3 13/20] migration/multifd: Prepare to introduce DSA acceleration on the multifd path Hao Xiang
` (8 subsequent siblings)
20 siblings, 0 replies; 41+ messages in thread
From: Hao Xiang @ 2024-01-04 0:44 UTC (permalink / raw)
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Hao Xiang
Intel DSA offloading is an optional feature that is enabled when the
proper hardware and software stack is available. To turn on
DSA offloading in multifd live migration:
multifd-dsa-accel="[dsa_dev_path1] [dsa_dev_path2] ... [dsa_dev_pathX]"
This feature is turned off by default.
Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
---
migration/migration-hmp-cmds.c | 8 ++++++++
migration/options.c | 30 ++++++++++++++++++++++++++++++
migration/options.h | 1 +
qapi/migration.json | 26 +++++++++++++++++++++++---
4 files changed, 62 insertions(+), 3 deletions(-)
diff --git a/migration/migration-hmp-cmds.c b/migration/migration-hmp-cmds.c
index 99710c8ffb..761d6d54de 100644
--- a/migration/migration-hmp-cmds.c
+++ b/migration/migration-hmp-cmds.c
@@ -353,6 +353,9 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
monitor_printf(mon, "%s: '%s'\n",
MigrationParameter_str(MIGRATION_PARAMETER_TLS_AUTHZ),
params->tls_authz);
+ monitor_printf(mon, "%s: '%s'\n",
+ MigrationParameter_str(MIGRATION_PARAMETER_MULTIFD_DSA_ACCEL),
+ params->multifd_dsa_accel);
if (params->has_block_bitmap_mapping) {
const BitmapMigrationNodeAliasList *bmnal;
@@ -615,6 +618,11 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
p->has_block_incremental = true;
visit_type_bool(v, param, &p->block_incremental, &err);
break;
+ case MIGRATION_PARAMETER_MULTIFD_DSA_ACCEL:
+ p->multifd_dsa_accel = g_new0(StrOrNull, 1);
+ p->multifd_dsa_accel->type = QTYPE_QSTRING;
+ visit_type_str(v, param, &p->multifd_dsa_accel->u.s, &err);
+ break;
case MIGRATION_PARAMETER_MULTIFD_CHANNELS:
p->has_multifd_channels = true;
visit_type_uint8(v, param, &p->multifd_channels, &err);
diff --git a/migration/options.c b/migration/options.c
index 180698a8f5..b5f69031a9 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -179,6 +179,8 @@ Property migration_properties[] = {
DEFINE_PROP_MIG_MODE("mode", MigrationState,
parameters.mode,
MIG_MODE_NORMAL),
+ DEFINE_PROP_STRING("multifd-dsa-accel", MigrationState,
+ parameters.multifd_dsa_accel),
/* Migration capabilities */
DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
@@ -903,6 +905,13 @@ const char *migrate_tls_creds(void)
return s->parameters.tls_creds;
}
+const char *migrate_multifd_dsa_accel(void)
+{
+ MigrationState *s = migrate_get_current();
+
+ return s->parameters.multifd_dsa_accel;
+}
+
const char *migrate_tls_hostname(void)
{
MigrationState *s = migrate_get_current();
@@ -1027,6 +1036,8 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
params->vcpu_dirty_limit = s->parameters.vcpu_dirty_limit;
params->has_mode = true;
params->mode = s->parameters.mode;
+ params->multifd_dsa_accel = g_strdup(s->parameters.multifd_dsa_accel ?
+ s->parameters.multifd_dsa_accel : "");
return params;
}
@@ -1035,6 +1046,7 @@ void migrate_params_init(MigrationParameters *params)
{
params->tls_hostname = g_strdup("");
params->tls_creds = g_strdup("");
+ params->multifd_dsa_accel = g_strdup("");
/* Set has_* up only for parameter checks */
params->has_compress_level = true;
@@ -1364,6 +1376,11 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
if (params->has_mode) {
dest->mode = params->mode;
}
+
+ if (params->multifd_dsa_accel) {
+ assert(params->multifd_dsa_accel->type == QTYPE_QSTRING);
+ dest->multifd_dsa_accel = params->multifd_dsa_accel->u.s;
+ }
}
static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
@@ -1508,6 +1525,13 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
if (params->has_mode) {
s->parameters.mode = params->mode;
}
+
+ if (params->multifd_dsa_accel) {
+ g_free(s->parameters.multifd_dsa_accel);
+ assert(params->multifd_dsa_accel->type == QTYPE_QSTRING);
+ s->parameters.multifd_dsa_accel =
+ g_strdup(params->multifd_dsa_accel->u.s);
+ }
}
void qmp_migrate_set_parameters(MigrateSetParameters *params, Error **errp)
@@ -1533,6 +1557,12 @@ void qmp_migrate_set_parameters(MigrateSetParameters *params, Error **errp)
params->tls_authz->type = QTYPE_QSTRING;
params->tls_authz->u.s = strdup("");
}
+ if (params->multifd_dsa_accel
+ && params->multifd_dsa_accel->type == QTYPE_QNULL) {
+ qobject_unref(params->multifd_dsa_accel->u.n);
+ params->multifd_dsa_accel->type = QTYPE_QSTRING;
+ params->multifd_dsa_accel->u.s = strdup("");
+ }
migrate_params_test_apply(params, &tmp);
diff --git a/migration/options.h b/migration/options.h
index c901eb57c6..56100961a9 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -94,6 +94,7 @@ const char *migrate_tls_authz(void);
const char *migrate_tls_creds(void);
const char *migrate_tls_hostname(void);
uint64_t migrate_xbzrle_cache_size(void);
+const char *migrate_multifd_dsa_accel(void);
/* parameters setters */
diff --git a/qapi/migration.json b/qapi/migration.json
index 4c7a42e364..74465bc821 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -879,6 +879,12 @@
# @mode: Migration mode. See description in @MigMode. Default is 'normal'.
# (Since 8.2)
#
+# @multifd-dsa-accel: If enabled, use DSA accelerator offloading for
+# certain memory operations. Enable DSA accelerator offloading by
+# setting this string to a list of DSA device paths separated by space
+# characters. Setting this string to an empty string means disabling
+# DSA accelerator offloading. Defaults to an empty string. (since 8.2)
+#
# Features:
#
# @deprecated: Member @block-incremental is deprecated. Use
@@ -902,7 +908,7 @@
'cpu-throttle-initial', 'cpu-throttle-increment',
'cpu-throttle-tailslow',
'tls-creds', 'tls-hostname', 'tls-authz', 'max-bandwidth',
- 'avail-switchover-bandwidth', 'downtime-limit',
+ 'avail-switchover-bandwidth', 'downtime-limit', 'multifd-dsa-accel',
{ 'name': 'x-checkpoint-delay', 'features': [ 'unstable' ] },
{ 'name': 'block-incremental', 'features': [ 'deprecated' ] },
'multifd-channels',
@@ -1067,6 +1073,12 @@
# @mode: Migration mode. See description in @MigMode. Default is 'normal'.
# (Since 8.2)
#
+# @multifd-dsa-accel: If enabled, use DSA accelerator offloading for
+# certain memory operations. Enable DSA accelerator offloading by
+# setting this string to a list of DSA device paths separated by space
+# characters. Setting this string to an empty string means disabling
+# DSA accelerator offloading. Defaults to an empty string. (since 8.2)
+#
# Features:
#
# @deprecated: Member @block-incremental is deprecated. Use
@@ -1120,7 +1132,8 @@
'*x-vcpu-dirty-limit-period': { 'type': 'uint64',
'features': [ 'unstable' ] },
'*vcpu-dirty-limit': 'uint64',
- '*mode': 'MigMode'} }
+ '*mode': 'MigMode',
+ '*multifd-dsa-accel': 'StrOrNull'} }
##
# @migrate-set-parameters:
@@ -1295,6 +1308,12 @@
# @mode: Migration mode. See description in @MigMode. Default is 'normal'.
# (Since 8.2)
#
+# @multifd-dsa-accel: If enabled, use DSA accelerator offloading for
+# certain memory operations. Enable DSA accelerator offloading by
+# setting this string to a list of DSA device paths separated by space
+# characters. Setting this string to an empty string means disabling
+# DSA accelerator offloading. Defaults to an empty string. (since 8.2)
+#
# Features:
#
# @deprecated: Member @block-incremental is deprecated. Use
@@ -1345,7 +1364,8 @@
'*x-vcpu-dirty-limit-period': { 'type': 'uint64',
'features': [ 'unstable' ] },
'*vcpu-dirty-limit': 'uint64',
- '*mode': 'MigMode'} }
+ '*mode': 'MigMode',
+ '*multifd-dsa-accel': 'str'} }
##
# @query-migrate-parameters:
--
2.30.2
* [PATCH v3 13/20] migration/multifd: Prepare to introduce DSA acceleration on the multifd path.
2024-01-04 0:44 [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration Hao Xiang
` (11 preceding siblings ...)
2024-01-04 0:44 ` [PATCH v3 12/20] migration/multifd: Add new migration option for multifd DSA offloading Hao Xiang
@ 2024-01-04 0:44 ` Hao Xiang
2024-01-15 6:46 ` Shivam Kumar
2024-01-04 0:44 ` [PATCH v3 14/20] migration/multifd: Enable DSA offloading in multifd sender path Hao Xiang
` (7 subsequent siblings)
20 siblings, 1 reply; 41+ messages in thread
From: Hao Xiang @ 2024-01-04 0:44 UTC (permalink / raw)
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Hao Xiang
1. Refactor multifd_send_thread function.
2. Implement buffer_is_zero_use_cpu to handle CPU based zero page
checking.
3. Introduce the batch task structure in MultiFDSendParams.
Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
---
include/qemu/dsa.h | 43 +++++++++++++++++++++++--
migration/multifd.c | 77 ++++++++++++++++++++++++++++++++++++---------
migration/multifd.h | 2 ++
util/dsa.c | 51 +++++++++++++++++++++++++-----
4 files changed, 148 insertions(+), 25 deletions(-)
diff --git a/include/qemu/dsa.h b/include/qemu/dsa.h
index e002652879..fe7772107a 100644
--- a/include/qemu/dsa.h
+++ b/include/qemu/dsa.h
@@ -2,6 +2,7 @@
#define QEMU_DSA_H
#include "qemu/error-report.h"
+#include "exec/cpu-common.h"
#include "qemu/thread.h"
#include "qemu/queue.h"
@@ -42,6 +43,20 @@ typedef struct dsa_batch_task {
QSIMPLEQ_ENTRY(dsa_batch_task) entry;
} dsa_batch_task;
+#endif
+
+struct batch_task {
+ /* Address of each pages in pages */
+ ram_addr_t *addr;
+ /* Zero page checking results */
+ bool *results;
+#ifdef CONFIG_DSA_OPT
+ struct dsa_batch_task *dsa_batch;
+#endif
+};
+
+#ifdef CONFIG_DSA_OPT
+
/**
* @brief Initializes DSA devices.
*
@@ -74,7 +89,7 @@ void dsa_cleanup(void);
bool dsa_is_running(void);
/**
- * @brief Initializes a buffer zero batch task.
+ * @brief Initializes a buffer zero DSA batch task.
*
* @param task A pointer to the batch task to initialize.
* @param results A pointer to an array of zero page checking results.
@@ -102,7 +117,7 @@ void buffer_zero_batch_task_destroy(struct dsa_batch_task *task);
* @return Zero if successful, otherwise non-zero.
*/
int
-buffer_is_zero_dsa_batch_async(struct dsa_batch_task *batch_task,
+buffer_is_zero_dsa_batch_async(struct batch_task *batch_task,
const void **buf, size_t count, size_t len);
#else
@@ -128,6 +143,30 @@ static inline void dsa_stop(void) {}
static inline void dsa_cleanup(void) {}
+static inline int
+buffer_is_zero_dsa_batch_async(struct batch_task *batch_task,
+ const void **buf, size_t count, size_t len)
+{
+ exit(1);
+}
+
#endif
+/**
+ * @brief Initializes a general buffer zero batch task.
+ *
+ * @param task A pointer to the general batch task to initialize.
+ * @param batch_size The number of zero page checking tasks in the batch.
+ */
+void
+batch_task_init(struct batch_task *task, int batch_size);
+
+/**
+ * @brief Destroys a general buffer zero batch task.
+ *
+ * @param task A pointer to the general batch task to destroy.
+ */
+void
+batch_task_destroy(struct batch_task *task);
+
#endif
diff --git a/migration/multifd.c b/migration/multifd.c
index eece85569f..e7c549b93e 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -14,6 +14,8 @@
#include "qemu/cutils.h"
#include "qemu/rcu.h"
#include "qemu/cutils.h"
+#include "qemu/dsa.h"
+#include "qemu/memalign.h"
#include "exec/target_page.h"
#include "sysemu/sysemu.h"
#include "exec/ramblock.h"
@@ -574,6 +576,8 @@ void multifd_save_cleanup(void)
p->name = NULL;
multifd_pages_clear(p->pages);
p->pages = NULL;
+ batch_task_destroy(p->batch_task);
+ p->batch_task = NULL;
p->packet_len = 0;
g_free(p->packet);
p->packet = NULL;
@@ -678,13 +682,66 @@ int multifd_send_sync_main(QEMUFile *f)
return 0;
}
+static void set_page(MultiFDSendParams *p, bool zero_page, uint64_t offset)
+{
+ RAMBlock *rb = p->pages->block;
+ if (zero_page) {
+ p->zero[p->zero_num] = offset;
+ p->zero_num++;
+ ram_release_page(rb->idstr, offset);
+ } else {
+ p->normal[p->normal_num] = offset;
+ p->normal_num++;
+ }
+}
+
+static void buffer_is_zero_use_cpu(MultiFDSendParams *p)
+{
+ const void **buf = (const void **)p->batch_task->addr;
+ assert(!migrate_use_main_zero_page());
+
+ for (int i = 0; i < p->pages->num; i++) {
+ p->batch_task->results[i] = buffer_is_zero(buf[i], p->page_size);
+ }
+}
+
+static void set_normal_pages(MultiFDSendParams *p)
+{
+ for (int i = 0; i < p->pages->num; i++) {
+ p->batch_task->results[i] = false;
+ }
+}
+
+static void multifd_zero_page_check(MultiFDSendParams *p)
+{
+ /* older qemu don't understand zero page on multifd channel */
+ bool use_multifd_zero_page = !migrate_use_main_zero_page();
+
+ RAMBlock *rb = p->pages->block;
+
+ for (int i = 0; i < p->pages->num; i++) {
+ p->batch_task->addr[i] = (ram_addr_t)(rb->host + p->pages->offset[i]);
+ }
+
+ if (use_multifd_zero_page) {
+ buffer_is_zero_use_cpu(p);
+ } else {
+ /* No zero page checking. All pages are normal pages. */
+ set_normal_pages(p);
+ }
+
+ for (int i = 0; i < p->pages->num; i++) {
+ uint64_t offset = p->pages->offset[i];
+ bool zero_page = p->batch_task->results[i];
+ set_page(p, zero_page, offset);
+ }
+}
+
static void *multifd_send_thread(void *opaque)
{
MultiFDSendParams *p = opaque;
MigrationThread *thread = NULL;
Error *local_err = NULL;
- /* qemu older than 8.2 don't understand zero page on multifd channel */
- bool use_multifd_zero_page = !migrate_use_main_zero_page();
int ret = 0;
bool use_zero_copy_send = migrate_zero_copy_send();
@@ -710,7 +767,6 @@ static void *multifd_send_thread(void *opaque)
qemu_mutex_lock(&p->mutex);
if (p->pending_job) {
- RAMBlock *rb = p->pages->block;
uint64_t packet_num = p->packet_num;
uint32_t flags;
@@ -723,18 +779,7 @@ static void *multifd_send_thread(void *opaque)
p->iovs_num = 1;
}
- for (int i = 0; i < p->pages->num; i++) {
- uint64_t offset = p->pages->offset[i];
- if (use_multifd_zero_page &&
- buffer_is_zero(rb->host + offset, p->page_size)) {
- p->zero[p->zero_num] = offset;
- p->zero_num++;
- ram_release_page(rb->idstr, offset);
- } else {
- p->normal[p->normal_num] = offset;
- p->normal_num++;
- }
- }
+ multifd_zero_page_check(p);
if (p->normal_num) {
ret = multifd_send_state->ops->send_prepare(p, &local_err);
@@ -975,6 +1020,8 @@ int multifd_save_setup(Error **errp)
p->pending_job = 0;
p->id = i;
p->pages = multifd_pages_init(page_count);
+ p->batch_task = g_malloc0(sizeof(struct batch_task));
+ batch_task_init(p->batch_task, page_count);
p->packet_len = sizeof(MultiFDPacket_t)
+ sizeof(uint64_t) * page_count;
p->packet = g_malloc0(p->packet_len);
diff --git a/migration/multifd.h b/migration/multifd.h
index 13762900d4..97b5f888a7 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -119,6 +119,8 @@ typedef struct {
* pending_job != 0 -> multifd_channel can use it.
*/
MultiFDPages_t *pages;
+ /* Zero page checking batch task */
+ struct batch_task *batch_task;
/* thread local variables. No locking required */
diff --git a/util/dsa.c b/util/dsa.c
index 5a2bf33651..f6224a27d4 100644
--- a/util/dsa.c
+++ b/util/dsa.c
@@ -802,7 +802,7 @@ buffer_zero_task_init_int(struct dsa_hw_desc *descriptor,
}
/**
- * @brief Initializes a buffer zero batch task.
+ * @brief Initializes a buffer zero DSA batch task.
*
* @param task A pointer to the batch task to initialize.
* @param results A pointer to an array of zero page checking results.
@@ -1107,29 +1107,64 @@ void dsa_cleanup(void)
* @return Zero if successful, otherwise non-zero.
*/
int
-buffer_is_zero_dsa_batch_async(struct dsa_batch_task *batch_task,
+buffer_is_zero_dsa_batch_async(struct batch_task *batch_task,
const void **buf, size_t count, size_t len)
{
- if (count <= 0 || count > batch_task->batch_size) {
+ struct dsa_batch_task *dsa_batch = batch_task->dsa_batch;
+
+ if (count <= 0 || count > dsa_batch->batch_size) {
return -1;
}
- assert(batch_task != NULL);
+ assert(dsa_batch != NULL);
assert(len != 0);
assert(buf != NULL);
if (count == 1) {
/* DSA doesn't take batch operation with only 1 task. */
- buffer_zero_dsa_async(batch_task, buf[0], len);
+ buffer_zero_dsa_async(dsa_batch, buf[0], len);
} else {
- buffer_zero_dsa_batch_async(batch_task, buf, count, len);
+ buffer_zero_dsa_batch_async(dsa_batch, buf, count, len);
}
- buffer_zero_dsa_wait(batch_task);
- buffer_zero_cpu_fallback(batch_task);
+ buffer_zero_dsa_wait(dsa_batch);
+ buffer_zero_cpu_fallback(dsa_batch);
return 0;
}
#endif
+/**
+ * @brief Initializes a general buffer zero batch task.
+ *
+ * @param task A pointer to the general batch task to initialize.
+ * @param batch_size The number of zero page checking tasks in the batch.
+ */
+void
+batch_task_init(struct batch_task *task, int batch_size)
+{
+ task->addr = g_new0(ram_addr_t, batch_size);
+ task->results = g_new0(bool, batch_size);
+#ifdef CONFIG_DSA_OPT
+ task->dsa_batch = qemu_memalign(64, sizeof(struct dsa_batch_task));
+ buffer_zero_batch_task_init(task->dsa_batch, task->results, batch_size);
+#endif
+}
+
+/**
+ * @brief Destroys a general buffer zero batch task.
+ *
+ * @param task A pointer to the general batch task to destroy.
+ */
+void
+batch_task_destroy(struct batch_task *task)
+{
+ g_free(task->addr);
+ g_free(task->results);
+#ifdef CONFIG_DSA_OPT
+ buffer_zero_batch_task_destroy(task->dsa_batch);
+ qemu_vfree(task->dsa_batch);
+#endif
+}
+
--
2.30.2
* [PATCH v3 14/20] migration/multifd: Enable DSA offloading in multifd sender path.
2024-01-04 0:44 [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration Hao Xiang
` (12 preceding siblings ...)
2024-01-04 0:44 ` [PATCH v3 13/20] migration/multifd: Prepare to introduce DSA acceleration on the multifd path Hao Xiang
@ 2024-01-04 0:44 ` Hao Xiang
2024-01-04 0:44 ` [PATCH v3 15/20] migration/multifd: Add test hook to set normal page ratio Hao Xiang
` (6 subsequent siblings)
20 siblings, 0 replies; 41+ messages in thread
From: Hao Xiang @ 2024-01-04 0:44 UTC (permalink / raw)
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Hao Xiang
Multifd sender path gets an array of pages queued by the migration
thread. It performs zero page checking on every page in the array.
The pages are classified as either a zero page or a normal page. This
change uses Intel DSA to offload the zero page checking from CPU to
the DSA accelerator. The sender thread submits a batch of pages to DSA
hardware and waits for the DSA completion thread to signal for work
completion.
Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
---
migration/multifd.c | 51 ++++++++++++++++++++++++++++++++++++++++-----
1 file changed, 46 insertions(+), 5 deletions(-)
diff --git a/migration/multifd.c b/migration/multifd.c
index e7c549b93e..6e73d995b0 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -560,6 +560,7 @@ void multifd_save_cleanup(void)
qemu_thread_join(&p->thread);
}
}
+ dsa_cleanup();
for (i = 0; i < migrate_multifd_channels(); i++) {
MultiFDSendParams *p = &multifd_send_state->params[i];
Error *local_err = NULL;
@@ -699,6 +700,7 @@ static void buffer_is_zero_use_cpu(MultiFDSendParams *p)
{
const void **buf = (const void **)p->batch_task->addr;
assert(!migrate_use_main_zero_page());
+ assert(!dsa_is_running());
for (int i = 0; i < p->pages->num; i++) {
p->batch_task->results[i] = buffer_is_zero(buf[i], p->page_size);
@@ -707,15 +709,29 @@ static void buffer_is_zero_use_cpu(MultiFDSendParams *p)
static void set_normal_pages(MultiFDSendParams *p)
{
+ assert(migrate_use_main_zero_page());
+
for (int i = 0; i < p->pages->num; i++) {
p->batch_task->results[i] = false;
}
}
+static void buffer_is_zero_use_dsa(MultiFDSendParams *p)
+{
+ assert(!migrate_use_main_zero_page());
+ assert(dsa_is_running());
+
+ buffer_is_zero_dsa_batch_async(p->batch_task,
+ (const void **)p->batch_task->addr,
+ p->pages->num,
+ p->page_size);
+}
+
static void multifd_zero_page_check(MultiFDSendParams *p)
{
/* older qemu don't understand zero page on multifd channel */
bool use_multifd_zero_page = !migrate_use_main_zero_page();
+ bool use_multifd_dsa_accel = dsa_is_running();
RAMBlock *rb = p->pages->block;
@@ -723,7 +739,9 @@ static void multifd_zero_page_check(MultiFDSendParams *p)
p->batch_task->addr[i] = (ram_addr_t)(rb->host + p->pages->offset[i]);
}
- if (use_multifd_zero_page) {
+ if (use_multifd_dsa_accel && use_multifd_zero_page) {
+ buffer_is_zero_use_dsa(p);
+ } else if (use_multifd_zero_page) {
buffer_is_zero_use_cpu(p);
} else {
/* No zero page checking. All pages are normal pages. */
@@ -997,11 +1015,23 @@ int multifd_save_setup(Error **errp)
int thread_count;
uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
uint8_t i;
+ const char *dsa_parameter = migrate_multifd_dsa_accel();
+ int ret;
+ Error *local_err = NULL;
if (!migrate_multifd()) {
return 0;
}
+ ret = dsa_init(dsa_parameter);
+ if (ret != 0) {
+ error_setg(&local_err, "multifd: Sender failed to initialize DSA.");
+ error_propagate(errp, local_err);
+ return ret;
+ }
+
+ dsa_start();
+
thread_count = migrate_multifd_channels();
multifd_send_state = g_malloc0(sizeof(*multifd_send_state));
multifd_send_state->params = g_new0(MultiFDSendParams, thread_count);
@@ -1046,8 +1076,6 @@ int multifd_save_setup(Error **errp)
for (i = 0; i < thread_count; i++) {
MultiFDSendParams *p = &multifd_send_state->params[i];
- Error *local_err = NULL;
- int ret;
ret = multifd_send_state->ops->send_setup(p, &local_err);
if (ret) {
@@ -1055,6 +1083,7 @@ int multifd_save_setup(Error **errp)
return ret;
}
}
+
return 0;
}
@@ -1132,6 +1161,7 @@ void multifd_load_cleanup(void)
qemu_thread_join(&p->thread);
}
+ dsa_cleanup();
for (i = 0; i < migrate_multifd_channels(); i++) {
MultiFDRecvParams *p = &multifd_recv_state->params[i];
@@ -1266,6 +1296,9 @@ int multifd_load_setup(Error **errp)
int thread_count;
uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
uint8_t i;
+ const char *dsa_parameter = migrate_multifd_dsa_accel();
+ int ret;
+ Error *local_err = NULL;
/*
* Return successfully if multiFD recv state is already initialised
@@ -1275,6 +1308,15 @@ int multifd_load_setup(Error **errp)
return 0;
}
+ ret = dsa_init(dsa_parameter);
+ if (ret != 0) {
+ error_setg(&local_err, "multifd: Receiver failed to initialize DSA.");
+ error_propagate(errp, local_err);
+ return ret;
+ }
+
+ dsa_start();
+
thread_count = migrate_multifd_channels();
multifd_recv_state = g_malloc0(sizeof(*multifd_recv_state));
multifd_recv_state->params = g_new0(MultiFDRecvParams, thread_count);
@@ -1302,8 +1344,6 @@ int multifd_load_setup(Error **errp)
for (i = 0; i < thread_count; i++) {
MultiFDRecvParams *p = &multifd_recv_state->params[i];
- Error *local_err = NULL;
- int ret;
ret = multifd_recv_state->ops->recv_setup(p, &local_err);
if (ret) {
@@ -1311,6 +1351,7 @@ int multifd_load_setup(Error **errp)
return ret;
}
}
+
return 0;
}
--
2.30.2
* [PATCH v3 15/20] migration/multifd: Add test hook to set normal page ratio.
2024-01-04 0:44 [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration Hao Xiang
` (13 preceding siblings ...)
2024-01-04 0:44 ` [PATCH v3 14/20] migration/multifd: Enable DSA offloading in multifd sender path Hao Xiang
@ 2024-01-04 0:44 ` Hao Xiang
2024-02-01 5:24 ` Peter Xu
2024-01-04 0:44 ` [PATCH v3 16/20] migration/multifd: Enable set normal page ratio test hook in multifd Hao Xiang
` (5 subsequent siblings)
20 siblings, 1 reply; 41+ messages in thread
From: Hao Xiang @ 2024-01-04 0:44 UTC (permalink / raw)
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Hao Xiang
Multifd sender thread performs zero page checking. If a page is
a zero page, only the page's metadata is sent to the receiver.
If a page is a normal page, the entire page's content is sent to
the receiver. This change adds a test hook to set the normal page
ratio. A zero page will be forced to be sent as a normal page. This
is useful for live migration performance analysis and optimization.
Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
---
migration/options.c | 32 ++++++++++++++++++++++++++++++++
migration/options.h | 1 +
qapi/migration.json | 18 +++++++++++++++---
3 files changed, 48 insertions(+), 3 deletions(-)
diff --git a/migration/options.c b/migration/options.c
index b5f69031a9..aceb391338 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -79,6 +79,11 @@
#define DEFAULT_MIGRATE_ANNOUNCE_ROUNDS 5
#define DEFAULT_MIGRATE_ANNOUNCE_STEP 100
+/*
+ * Parameter for multifd normal page test hook.
+ */
+#define DEFAULT_MIGRATE_MULTIFD_NORMAL_PAGE_RATIO 101
+
#define DEFINE_PROP_MIG_CAP(name, x) \
DEFINE_PROP_BOOL(name, MigrationState, capabilities[x], false)
@@ -181,6 +186,9 @@ Property migration_properties[] = {
MIG_MODE_NORMAL),
DEFINE_PROP_STRING("multifd-dsa-accel", MigrationState,
parameters.multifd_dsa_accel),
+ DEFINE_PROP_UINT8("multifd-normal-page-ratio", MigrationState,
+ parameters.multifd_normal_page_ratio,
+ DEFAULT_MIGRATE_MULTIFD_NORMAL_PAGE_RATIO),
/* Migration capabilities */
DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
@@ -862,6 +870,12 @@ int migrate_multifd_channels(void)
return s->parameters.multifd_channels;
}
+uint8_t migrate_multifd_normal_page_ratio(void)
+{
+ MigrationState *s = migrate_get_current();
+ return s->parameters.multifd_normal_page_ratio;
+}
+
MultiFDCompression migrate_multifd_compression(void)
{
MigrationState *s = migrate_get_current();
@@ -1261,6 +1275,14 @@ bool migrate_params_check(MigrationParameters *params, Error **errp)
return false;
}
+ if (params->has_multifd_normal_page_ratio &&
+ params->multifd_normal_page_ratio > 100) {
+ error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
+ "multifd_normal_page_ratio",
+ "a value between 0 and 100");
+ return false;
+ }
+
return true;
}
@@ -1381,6 +1403,11 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
assert(params->multifd_dsa_accel->type == QTYPE_QSTRING);
dest->multifd_dsa_accel = params->multifd_dsa_accel->u.s;
}
+
+ if (params->has_multifd_normal_page_ratio) {
+ dest->has_multifd_normal_page_ratio = true;
+ dest->multifd_normal_page_ratio = params->multifd_normal_page_ratio;
+ }
}
static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
@@ -1532,6 +1559,11 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
s->parameters.multifd_dsa_accel =
g_strdup(params->multifd_dsa_accel->u.s);
}
+
+ if (params->has_multifd_normal_page_ratio) {
+ s->parameters.multifd_normal_page_ratio =
+ params->multifd_normal_page_ratio;
+ }
}
void qmp_migrate_set_parameters(MigrateSetParameters *params, Error **errp)
diff --git a/migration/options.h b/migration/options.h
index 56100961a9..21e3e7b0cf 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -95,6 +95,7 @@ const char *migrate_tls_creds(void);
const char *migrate_tls_hostname(void);
uint64_t migrate_xbzrle_cache_size(void);
const char *migrate_multifd_dsa_accel(void);
+uint8_t migrate_multifd_normal_page_ratio(void);
/* parameters setters */
diff --git a/qapi/migration.json b/qapi/migration.json
index 74465bc821..dedcdd076a 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -885,6 +885,9 @@
# characters. Setting this string to an empty string means disabling
# DSA accelerator offloading. Defaults to an empty string. (since 8.2)
#
+# @multifd-normal-page-ratio: Test hook setting the normal page ratio.
+# (Since 8.2)
+#
# Features:
#
# @deprecated: Member @block-incremental is deprecated. Use
@@ -918,7 +921,8 @@
'block-bitmap-mapping',
{ 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] },
'vcpu-dirty-limit',
- 'mode'] }
+ 'mode',
+ 'multifd-normal-page-ratio'] }
##
# @MigrateSetParameters:
@@ -1079,6 +1083,9 @@
# characters. Setting this string to an empty string means disabling
# DSA accelerator offloading. Defaults to an empty string. (since 8.2)
#
+# @multifd-normal-page-ratio: Test hook setting the normal page ratio.
+# (Since 8.2)
+#
# Features:
#
# @deprecated: Member @block-incremental is deprecated. Use
@@ -1133,7 +1140,8 @@
'features': [ 'unstable' ] },
'*vcpu-dirty-limit': 'uint64',
'*mode': 'MigMode',
- '*multifd-dsa-accel': 'StrOrNull'} }
+ '*multifd-dsa-accel': 'StrOrNull',
+ '*multifd-normal-page-ratio': 'uint8'} }
##
# @migrate-set-parameters:
@@ -1314,6 +1322,9 @@
# characters. Setting this string to an empty string means disabling
# DSA accelerator offloading. Defaults to an empty string. (since 8.2)
#
+# @multifd-normal-page-ratio: Test hook setting the normal page ratio.
+# (Since 8.2)
+#
# Features:
#
# @deprecated: Member @block-incremental is deprecated. Use
@@ -1365,7 +1376,8 @@
'features': [ 'unstable' ] },
'*vcpu-dirty-limit': 'uint64',
'*mode': 'MigMode',
- '*multifd-dsa-accel': 'str'} }
+ '*multifd-dsa-accel': 'str',
+ '*multifd-normal-page-ratio': 'uint8'} }
##
# @query-migrate-parameters:
--
2.30.2
* [PATCH v3 16/20] migration/multifd: Enable set normal page ratio test hook in multifd.
2024-01-04 0:44 [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration Hao Xiang
` (14 preceding siblings ...)
2024-01-04 0:44 ` [PATCH v3 15/20] migration/multifd: Add test hook to set normal page ratio Hao Xiang
@ 2024-01-04 0:44 ` Hao Xiang
2024-01-04 0:44 ` [PATCH v3 17/20] migration/multifd: Add migration option set packet size Hao Xiang
` (4 subsequent siblings)
20 siblings, 0 replies; 41+ messages in thread
From: Hao Xiang @ 2024-01-04 0:44 UTC (permalink / raw)
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Hao Xiang
The test hook is disabled by default. To set it, any normal page ratio
between 0 and 100 is valid. If the ratio is set to 50, it means
at least 50% of all pages are sent as normal pages.
Set the option:
migrate_set_parameter multifd-normal-page-ratio 60
Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
---
include/qemu/dsa.h | 5 ++++-
migration/migration-hmp-cmds.c | 7 +++++++
migration/multifd.c | 33 +++++++++++++++++++++++++++++++++
3 files changed, 44 insertions(+), 1 deletion(-)
diff --git a/include/qemu/dsa.h b/include/qemu/dsa.h
index fe7772107a..ac3d8b51f4 100644
--- a/include/qemu/dsa.h
+++ b/include/qemu/dsa.h
@@ -38,7 +38,7 @@ typedef struct dsa_batch_task {
QemuSemaphore sem_task_complete;
DsaTaskType task_type;
DsaTaskStatus status;
- int batch_size;
+ uint32_t batch_size;
bool *results;
QSIMPLEQ_ENTRY(dsa_batch_task) entry;
} dsa_batch_task;
@@ -50,6 +50,9 @@ struct batch_task {
ram_addr_t *addr;
/* Zero page checking results */
bool *results;
+ /* Set normal page ratio test hook. */
+ uint32_t normal_page_index;
+ uint32_t normal_page_counter;
#ifdef CONFIG_DSA_OPT
struct dsa_batch_task *dsa_batch;
#endif
diff --git a/migration/migration-hmp-cmds.c b/migration/migration-hmp-cmds.c
index 761d6d54de..8219d112d6 100644
--- a/migration/migration-hmp-cmds.c
+++ b/migration/migration-hmp-cmds.c
@@ -356,6 +356,9 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
monitor_printf(mon, "%s: '%s'\n",
MigrationParameter_str(MIGRATION_PARAMETER_MULTIFD_DSA_ACCEL),
params->multifd_dsa_accel);
+ monitor_printf(mon, "%s: %u\n",
+ MigrationParameter_str(MIGRATION_PARAMETER_MULTIFD_NORMAL_PAGE_RATIO),
+ params->multifd_normal_page_ratio);
if (params->has_block_bitmap_mapping) {
const BitmapMigrationNodeAliasList *bmnal;
@@ -675,6 +678,10 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
error_setg(&err, "The block-bitmap-mapping parameter can only be set "
"through QMP");
break;
+ case MIGRATION_PARAMETER_MULTIFD_NORMAL_PAGE_RATIO:
+ p->has_multifd_normal_page_ratio = true;
+ visit_type_uint8(v, param, &p->multifd_normal_page_ratio, &err);
+ break;
case MIGRATION_PARAMETER_X_VCPU_DIRTY_LIMIT_PERIOD:
p->has_x_vcpu_dirty_limit_period = true;
visit_type_size(v, param, &p->x_vcpu_dirty_limit_period, &err);
diff --git a/migration/multifd.c b/migration/multifd.c
index 6e73d995b0..cfae5401a9 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -683,6 +683,37 @@ int multifd_send_sync_main(QEMUFile *f)
return 0;
}
+static void multifd_normal_page_test_hook(MultiFDSendParams *p)
+{
+ /*
+ * The value is between 0 to 100. If the value is 10, it means at
+ * least 10% of the pages are normal page. A zero page can be made
+ * a normal page but not the other way around.
+ */
+ uint8_t multifd_normal_page_ratio =
+ migrate_multifd_normal_page_ratio();
+ struct batch_task *batch_task = p->batch_task;
+
+ /* Set normal page test hook is disabled. */
+ if (multifd_normal_page_ratio > 100) {
+ return;
+ }
+
+ for (int i = 0; i < p->pages->num; i++) {
+ if (batch_task->normal_page_counter < multifd_normal_page_ratio) {
+ /* Turn a zero page into a normal page. */
+ batch_task->results[i] = false;
+ }
+ batch_task->normal_page_index++;
+ batch_task->normal_page_counter++;
+
+ if (batch_task->normal_page_index >= 100) {
+ batch_task->normal_page_index = 0;
+ batch_task->normal_page_counter = 0;
+ }
+ }
+}
+
static void set_page(MultiFDSendParams *p, bool zero_page, uint64_t offset)
{
RAMBlock *rb = p->pages->block;
@@ -748,6 +779,8 @@ static void multifd_zero_page_check(MultiFDSendParams *p)
set_normal_pages(p);
}
+ multifd_normal_page_test_hook(p);
+
for (int i = 0; i < p->pages->num; i++) {
uint64_t offset = p->pages->offset[i];
bool zero_page = p->batch_task->results[i];
--
2.30.2
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v3 17/20] migration/multifd: Add migration option set packet size.
2024-01-04 0:44 [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration Hao Xiang
` (15 preceding siblings ...)
2024-01-04 0:44 ` [PATCH v3 16/20] migration/multifd: Enable set normal page ratio test hook in multifd Hao Xiang
@ 2024-01-04 0:44 ` Hao Xiang
2024-01-04 0:44 ` [PATCH v3 18/20] migration/multifd: Enable set packet size migration option Hao Xiang
` (3 subsequent siblings)
20 siblings, 0 replies; 41+ messages in thread
From: Hao Xiang @ 2024-01-04 0:44 UTC (permalink / raw)
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Hao Xiang
The current multifd packet size is fixed at 128 * 4 KiB (512 KiB). This change
adds an option to set the packet size. Both the sender and the receiver need
to be configured with the same packet size for migration to work.
Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
---
migration/options.c | 36 ++++++++++++++++++++++++++++++++++++
migration/options.h | 1 +
qapi/migration.json | 21 ++++++++++++++++++---
3 files changed, 55 insertions(+), 3 deletions(-)
diff --git a/migration/options.c b/migration/options.c
index aceb391338..fafdeb69cd 100644
--- a/migration/options.c
+++ b/migration/options.c
@@ -84,6 +84,12 @@
*/
#define DEFAULT_MIGRATE_MULTIFD_NORMAL_PAGE_RATIO 101
+/*
+ * Parameter for multifd packet size.
+ */
+#define DEFAULT_MIGRATE_MULTIFD_PACKET_SIZE (128 * 4 * 1024)
+#define MAX_MIGRATE_MULTIFD_PACKET_SIZE (1023 * 4 * 1024)
+
#define DEFINE_PROP_MIG_CAP(name, x) \
DEFINE_PROP_BOOL(name, MigrationState, capabilities[x], false)
@@ -189,6 +195,9 @@ Property migration_properties[] = {
DEFINE_PROP_UINT8("multifd-normal-page-ratio", MigrationState,
parameters.multifd_normal_page_ratio,
DEFAULT_MIGRATE_MULTIFD_NORMAL_PAGE_RATIO),
+ DEFINE_PROP_SIZE("multifd-packet-size", MigrationState,
+ parameters.multifd_packet_size,
+ DEFAULT_MIGRATE_MULTIFD_PACKET_SIZE),
/* Migration capabilities */
DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
@@ -876,6 +885,13 @@ uint8_t migrate_multifd_normal_page_ratio(void)
return s->parameters.multifd_normal_page_ratio;
}
+uint64_t migrate_multifd_packet_size(void)
+{
+ MigrationState *s = migrate_get_current();
+
+ return s->parameters.multifd_packet_size;
+}
+
MultiFDCompression migrate_multifd_compression(void)
{
MigrationState *s = migrate_get_current();
@@ -1014,6 +1030,8 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
params->x_checkpoint_delay = s->parameters.x_checkpoint_delay;
params->has_block_incremental = true;
params->block_incremental = s->parameters.block_incremental;
+ params->has_multifd_packet_size = true;
+ params->multifd_packet_size = s->parameters.multifd_packet_size;
params->has_multifd_channels = true;
params->multifd_channels = s->parameters.multifd_channels;
params->has_multifd_compression = true;
@@ -1075,6 +1093,7 @@ void migrate_params_init(MigrationParameters *params)
params->has_downtime_limit = true;
params->has_x_checkpoint_delay = true;
params->has_block_incremental = true;
+ params->has_multifd_packet_size = true;
params->has_multifd_channels = true;
params->has_multifd_compression = true;
params->has_multifd_zlib_level = true;
@@ -1173,6 +1192,17 @@ bool migrate_params_check(MigrationParameters *params, Error **errp)
/* x_checkpoint_delay is now always positive */
+ if (params->has_multifd_packet_size &&
+ ((params->multifd_packet_size < DEFAULT_MIGRATE_MULTIFD_PACKET_SIZE) ||
+ (params->multifd_packet_size > MAX_MIGRATE_MULTIFD_PACKET_SIZE) ||
+ (params->multifd_packet_size % qemu_target_page_size() != 0))) {
+ error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
+ "multifd_packet_size",
+ "a value between 524288 and 4190208, "
+ "must be a multiple of guest VM's page size.");
+ return false;
+ }
+
if (params->has_multifd_channels && (params->multifd_channels < 1)) {
error_setg(errp, QERR_INVALID_PARAMETER_VALUE,
"multifd_channels",
@@ -1354,6 +1384,9 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
if (params->has_block_incremental) {
dest->block_incremental = params->block_incremental;
}
+ if (params->has_multifd_packet_size) {
+ dest->multifd_packet_size = params->multifd_packet_size;
+ }
if (params->has_multifd_channels) {
dest->multifd_channels = params->multifd_channels;
}
@@ -1499,6 +1532,9 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
" use blockdev-mirror with NBD instead");
s->parameters.block_incremental = params->block_incremental;
}
+ if (params->has_multifd_packet_size) {
+ s->parameters.multifd_packet_size = params->multifd_packet_size;
+ }
if (params->has_multifd_channels) {
s->parameters.multifd_channels = params->multifd_channels;
}
diff --git a/migration/options.h b/migration/options.h
index 21e3e7b0cf..5816f6dac2 100644
--- a/migration/options.h
+++ b/migration/options.h
@@ -96,6 +96,7 @@ const char *migrate_tls_hostname(void);
uint64_t migrate_xbzrle_cache_size(void);
const char *migrate_multifd_dsa_accel(void);
uint8_t migrate_multifd_normal_page_ratio(void);
+uint64_t migrate_multifd_packet_size(void);
/* parameters setters */
diff --git a/qapi/migration.json b/qapi/migration.json
index dedcdd076a..354f10a83f 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -888,6 +888,10 @@
# @multifd-normal-page-ratio: Test hook setting the normal page ratio.
# (Since 8.2)
#
+# @multifd-packet-size: Packet size in bytes used to migrate data.
+# The value needs to be a multiple of guest VM's page size.
+# The default value is 524288 and max value is 4190208. (Since 8.2)
+#
# Features:
#
# @deprecated: Member @block-incremental is deprecated. Use
@@ -922,7 +926,8 @@
{ 'name': 'x-vcpu-dirty-limit-period', 'features': ['unstable'] },
'vcpu-dirty-limit',
'mode',
- 'multifd-normal-page-ratio'] }
+ 'multifd-normal-page-ratio',
+ 'multifd-packet-size'] }
##
# @MigrateSetParameters:
@@ -1086,6 +1091,10 @@
# @multifd-normal-page-ratio: Test hook setting the normal page ratio.
# (Since 8.2)
#
+# @multifd-packet-size: Packet size in bytes used to migrate data.
+# The value needs to be a multiple of guest VM's page size.
+# The default value is 524288 and max value is 4190208. (Since 8.2)
+#
# Features:
#
# @deprecated: Member @block-incremental is deprecated. Use
@@ -1141,7 +1150,8 @@
'*vcpu-dirty-limit': 'uint64',
'*mode': 'MigMode',
'*multifd-dsa-accel': 'StrOrNull',
- '*multifd-normal-page-ratio': 'uint8'} }
+ '*multifd-normal-page-ratio': 'uint8',
+ '*multifd-packet-size' : 'uint64'} }
##
# @migrate-set-parameters:
@@ -1325,6 +1335,10 @@
# @multifd-normal-page-ratio: Test hook setting the normal page ratio.
# (Since 8.2)
#
+# @multifd-packet-size: Packet size in bytes used to migrate data.
+# The value needs to be a multiple of guest VM's page size.
+# The default value is 524288 and max value is 4190208. (Since 8.2)
+#
# Features:
#
# @deprecated: Member @block-incremental is deprecated. Use
@@ -1377,7 +1391,8 @@
'*vcpu-dirty-limit': 'uint64',
'*mode': 'MigMode',
'*multifd-dsa-accel': 'str',
- '*multifd-normal-page-ratio': 'uint8'} }
+ '*multifd-normal-page-ratio': 'uint8',
+ '*multifd-packet-size': 'uint64'} }
##
# @query-migrate-parameters:
--
2.30.2
* [PATCH v3 18/20] migration/multifd: Enable set packet size migration option.
2024-01-04 0:44 [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration Hao Xiang
` (16 preceding siblings ...)
2024-01-04 0:44 ` [PATCH v3 17/20] migration/multifd: Add migration option set packet size Hao Xiang
@ 2024-01-04 0:44 ` Hao Xiang
2024-01-04 0:44 ` [PATCH v3 19/20] util/dsa: Add unit test coverage for Intel DSA task submission and completion Hao Xiang
` (2 subsequent siblings)
20 siblings, 0 replies; 41+ messages in thread
From: Hao Xiang @ 2024-01-04 0:44 UTC (permalink / raw)
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Hao Xiang
During live migration, if the latency between sender and receiver
is high and the bandwidth is also high (a long, fat pipe), using a bigger
packet size can help reduce the total migration time. In addition, Intel
DSA offloading performs better with large batch tasks. Providing an
option to set the packet size is therefore useful for performance tuning.
Set the option:
migrate_set_parameter multifd-packet-size 4190208
Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
---
migration/migration-hmp-cmds.c | 7 +++++++
migration/multifd-zlib.c | 6 ++++--
migration/multifd-zstd.c | 6 ++++--
migration/multifd.c | 6 ++++--
migration/multifd.h | 3 ---
5 files changed, 19 insertions(+), 9 deletions(-)
diff --git a/migration/migration-hmp-cmds.c b/migration/migration-hmp-cmds.c
index 8219d112d6..272b553c3a 100644
--- a/migration/migration-hmp-cmds.c
+++ b/migration/migration-hmp-cmds.c
@@ -338,6 +338,9 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
monitor_printf(mon, "%s: %s\n",
MigrationParameter_str(MIGRATION_PARAMETER_BLOCK_INCREMENTAL),
params->block_incremental ? "on" : "off");
+ monitor_printf(mon, "%s: %" PRIu64 "\n",
+ MigrationParameter_str(MIGRATION_PARAMETER_MULTIFD_PACKET_SIZE),
+ params->multifd_packet_size);
monitor_printf(mon, "%s: %u\n",
MigrationParameter_str(MIGRATION_PARAMETER_MULTIFD_CHANNELS),
params->multifd_channels);
@@ -626,6 +629,10 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
p->multifd_dsa_accel->type = QTYPE_QSTRING;
visit_type_str(v, param, &p->multifd_dsa_accel->u.s, &err);
break;
+ case MIGRATION_PARAMETER_MULTIFD_PACKET_SIZE:
+ p->has_multifd_packet_size = true;
+ visit_type_size(v, param, &p->multifd_packet_size, &err);
+ break;
case MIGRATION_PARAMETER_MULTIFD_CHANNELS:
p->has_multifd_channels = true;
visit_type_uint8(v, param, &p->multifd_channels, &err);
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index 37ce48621e..4318ccced8 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -49,6 +49,7 @@ static int zlib_send_setup(MultiFDSendParams *p, Error **errp)
struct zlib_data *z = g_new0(struct zlib_data, 1);
z_stream *zs = &z->zs;
const char *err_msg;
+ uint64_t multifd_packet_size = migrate_multifd_packet_size();
zs->zalloc = Z_NULL;
zs->zfree = Z_NULL;
@@ -58,7 +59,7 @@ static int zlib_send_setup(MultiFDSendParams *p, Error **errp)
goto err_free_z;
}
/* This is the maximum size of the compressed buffer */
- z->zbuff_len = compressBound(MULTIFD_PACKET_SIZE);
+ z->zbuff_len = compressBound(multifd_packet_size);
z->zbuff = g_try_malloc(z->zbuff_len);
if (!z->zbuff) {
err_msg = "out of memory for zbuff";
@@ -186,6 +187,7 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
*/
static int zlib_recv_setup(MultiFDRecvParams *p, Error **errp)
{
+ uint64_t multifd_packet_size = migrate_multifd_packet_size();
struct zlib_data *z = g_new0(struct zlib_data, 1);
z_stream *zs = &z->zs;
@@ -200,7 +202,7 @@ static int zlib_recv_setup(MultiFDRecvParams *p, Error **errp)
return -1;
}
/* To be safe, we reserve twice the size of the packet */
- z->zbuff_len = MULTIFD_PACKET_SIZE * 2;
+ z->zbuff_len = multifd_packet_size * 2;
z->zbuff = g_try_malloc(z->zbuff_len);
if (!z->zbuff) {
inflateEnd(zs);
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index b471daadcd..a6fdb2ac11 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -49,6 +49,7 @@ struct zstd_data {
*/
static int zstd_send_setup(MultiFDSendParams *p, Error **errp)
{
+ uint64_t multifd_packet_size = migrate_multifd_packet_size();
struct zstd_data *z = g_new0(struct zstd_data, 1);
int res;
@@ -69,7 +70,7 @@ static int zstd_send_setup(MultiFDSendParams *p, Error **errp)
return -1;
}
/* This is the maximum size of the compressed buffer */
- z->zbuff_len = ZSTD_compressBound(MULTIFD_PACKET_SIZE);
+ z->zbuff_len = ZSTD_compressBound(multifd_packet_size);
z->zbuff = g_try_malloc(z->zbuff_len);
if (!z->zbuff) {
ZSTD_freeCStream(z->zcs);
@@ -175,6 +176,7 @@ static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
*/
static int zstd_recv_setup(MultiFDRecvParams *p, Error **errp)
{
+ uint64_t multifd_packet_size = migrate_multifd_packet_size();
struct zstd_data *z = g_new0(struct zstd_data, 1);
int ret;
@@ -196,7 +198,7 @@ static int zstd_recv_setup(MultiFDRecvParams *p, Error **errp)
}
/* To be safe, we reserve twice the size of the packet */
- z->zbuff_len = MULTIFD_PACKET_SIZE * 2;
+ z->zbuff_len = multifd_packet_size * 2;
z->zbuff = g_try_malloc(z->zbuff_len);
if (!z->zbuff) {
ZSTD_freeDStream(z->zds);
diff --git a/migration/multifd.c b/migration/multifd.c
index cfae5401a9..700fb1c034 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -1046,7 +1046,8 @@ static void multifd_new_send_channel_create(gpointer opaque)
int multifd_save_setup(Error **errp)
{
int thread_count;
- uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
+ uint32_t page_count =
+ migrate_multifd_packet_size() / qemu_target_page_size();
uint8_t i;
const char *dsa_parameter = migrate_multifd_dsa_accel();
int ret;
@@ -1327,7 +1328,8 @@ static void *multifd_recv_thread(void *opaque)
int multifd_load_setup(Error **errp)
{
int thread_count;
- uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
+ uint32_t page_count =
+ migrate_multifd_packet_size() / qemu_target_page_size();
uint8_t i;
const char *dsa_parameter = migrate_multifd_dsa_accel();
int ret;
diff --git a/migration/multifd.h b/migration/multifd.h
index 97b5f888a7..4f30b24832 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -34,9 +34,6 @@ int multifd_queue_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset);
#define MULTIFD_FLAG_ZLIB (1 << 1)
#define MULTIFD_FLAG_ZSTD (2 << 1)
-/* This value needs to be a multiple of qemu_target_page_size() */
-#define MULTIFD_PACKET_SIZE (512 * 1024)
-
typedef struct {
uint32_t magic;
uint32_t version;
--
2.30.2
* [PATCH v3 19/20] util/dsa: Add unit test coverage for Intel DSA task submission and completion.
2024-01-04 0:44 [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration Hao Xiang
` (17 preceding siblings ...)
2024-01-04 0:44 ` [PATCH v3 18/20] migration/multifd: Enable set packet size migration option Hao Xiang
@ 2024-01-04 0:44 ` Hao Xiang
2024-01-04 0:44 ` [PATCH v3 20/20] migration/multifd: Add integration tests for multifd with Intel DSA offloading Hao Xiang
2024-03-08 1:47 ` [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration liulongfang via
20 siblings, 0 replies; 41+ messages in thread
From: Hao Xiang @ 2024-01-04 0:44 UTC (permalink / raw)
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Hao Xiang
* Test DSA start and stop path.
* Test DSA configure and cleanup path.
* Test DSA task submission and completion path.
Signed-off-by: Bryan Zhang <bryan.zhang@bytedance.com>
Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
---
tests/unit/meson.build | 6 +
tests/unit/test-dsa.c | 475 +++++++++++++++++++++++++++++++++++++++++
2 files changed, 481 insertions(+)
create mode 100644 tests/unit/test-dsa.c
diff --git a/tests/unit/meson.build b/tests/unit/meson.build
index a05d471090..72e22063dc 100644
--- a/tests/unit/meson.build
+++ b/tests/unit/meson.build
@@ -54,6 +54,12 @@ tests = {
'test-virtio-dmabuf': [meson.project_source_root() / 'hw/display/virtio-dmabuf.c'],
}
+if config_host_data.get('CONFIG_DSA_OPT')
+ tests += {
+ 'test-dsa': [],
+ }
+endif
+
if have_system or have_tools
tests += {
'test-qmp-event': [testqapi],
diff --git a/tests/unit/test-dsa.c b/tests/unit/test-dsa.c
new file mode 100644
index 0000000000..a3eeb4702a
--- /dev/null
+++ b/tests/unit/test-dsa.c
@@ -0,0 +1,475 @@
+/*
+ * Test DSA functions.
+ *
+ * Copyright (c) 2023 Hao Xiang <hao.xiang@bytedance.com>
+ * Copyright (c) 2023 Bryan Zhang <bryan.zhang@bytedance.com>
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+#include "qemu/osdep.h"
+#include "qemu/host-utils.h"
+
+#include "qemu/cutils.h"
+#include "qemu/memalign.h"
+#include "qemu/dsa.h"
+
+/* TODO Make these not-hardcoded. */
+static const char *path1 = "/dev/dsa/wq4.0";
+static const char *path2 = "/dev/dsa/wq4.0 /dev/dsa/wq4.1";
+static const int num_devices = 2;
+
+static struct batch_task task __attribute__((aligned(64)));
+
+/*
+ * TODO Communicate that DSA must be configured to support this batch size.
+ * TODO Alternatively, poke the DSA device to figure out batch size.
+ */
+static int batch_size = 128;
+static int page_size = 4096;
+
+/* A helper for running a single task and checking for correctness. */
+static void do_single_task(void)
+{
+ batch_task_init(&task, batch_size);
+ char buf[page_size];
+ char *ptr = buf;
+
+ buffer_is_zero_dsa_batch_async(&task,
+ (const void **)&ptr,
+ 1,
+ page_size);
+ g_assert(task.results[0] == buffer_is_zero(buf, page_size));
+}
+
+static void test_single_zero(void)
+{
+ g_assert(!dsa_init(path1));
+ dsa_start();
+
+ batch_task_init(&task, batch_size);
+
+ char buf[page_size];
+ char *ptr = buf;
+
+ memset(buf, 0x0, page_size);
+ buffer_is_zero_dsa_batch_async(&task,
+ (const void **)&ptr,
+ 1, page_size);
+ g_assert(task.results[0]);
+
+ dsa_cleanup();
+}
+
+static void test_single_zero_async(void)
+{
+ test_single_zero();
+}
+
+static void test_single_nonzero(void)
+{
+ g_assert(!dsa_init(path1));
+ dsa_start();
+
+ batch_task_init(&task, batch_size);
+
+ char buf[page_size];
+ char *ptr = buf;
+
+ memset(buf, 0x1, page_size);
+ buffer_is_zero_dsa_batch_async(&task,
+ (const void **)&ptr,
+ 1, page_size);
+ g_assert(!task.results[0]);
+
+ dsa_cleanup();
+}
+
+static void test_single_nonzero_async(void)
+{
+ test_single_nonzero();
+}
+
+/* count == 0 should return quickly without calling into DSA. */
+static void test_zero_count_async(void)
+{
+ char buf[page_size];
+ buffer_is_zero_dsa_batch_async(&task,
+ (const void **)&buf,
+ 0,
+ page_size);
+}
+
+static void test_null_task_async(void)
+{
+ if (g_test_subprocess()) {
+ g_assert(!dsa_init(path1));
+
+ char buf[page_size * batch_size];
+ char *addrs[batch_size];
+ for (int i = 0; i < batch_size; i++) {
+ addrs[i] = buf + (page_size * i);
+ }
+
+ buffer_is_zero_dsa_batch_async(NULL, (const void **)addrs,
+ batch_size,
+ page_size);
+ } else {
+ g_test_trap_subprocess(NULL, 0, 0);
+ g_test_trap_assert_failed();
+ }
+}
+
+static void test_oversized_batch(void)
+{
+ g_assert(!dsa_init(path1));
+ dsa_start();
+
+ batch_task_init(&task, batch_size);
+
+ int oversized_batch_size = batch_size + 1;
+ char buf[page_size * oversized_batch_size];
+ char *addrs[batch_size];
+ for (int i = 0; i < oversized_batch_size; i++) {
+ addrs[i] = buf + (page_size * i);
+ }
+
+ int ret = buffer_is_zero_dsa_batch_async(&task,
+ (const void **)addrs,
+ oversized_batch_size,
+ page_size);
+ g_assert(ret != 0);
+
+ dsa_cleanup();
+}
+
+static void test_oversized_batch_async(void)
+{
+ test_oversized_batch();
+}
+
+static void test_zero_len_async(void)
+{
+ if (g_test_subprocess()) {
+ g_assert(!dsa_init(path1));
+
+ batch_task_init(&task, batch_size);
+
+ char buf[page_size];
+
+ buffer_is_zero_dsa_batch_async(&task,
+ (const void **)&buf,
+ 1,
+ 0);
+ } else {
+ g_test_trap_subprocess(NULL, 0, 0);
+ g_test_trap_assert_failed();
+ }
+}
+
+static void test_null_buf_async(void)
+{
+ if (g_test_subprocess()) {
+ g_assert(!dsa_init(path1));
+
+ batch_task_init(&task, batch_size);
+
+ buffer_is_zero_dsa_batch_async(&task, NULL, 1, page_size);
+ } else {
+ g_test_trap_subprocess(NULL, 0, 0);
+ g_test_trap_assert_failed();
+ }
+}
+
+static void test_batch(void)
+{
+ g_assert(!dsa_init(path1));
+ dsa_start();
+
+ batch_task_init(&task, batch_size);
+
+ char buf[page_size * batch_size];
+ char *addrs[batch_size];
+ for (int i = 0; i < batch_size; i++) {
+ addrs[i] = buf + (page_size * i);
+ }
+
+ /*
+ * Using whatever is on the stack is somewhat random.
+ * Manually set some pages to zero and some to nonzero.
+ */
+ memset(buf + 0, 0, page_size * 10);
+ memset(buf + (10 * page_size), 0xff, page_size * 10);
+
+ buffer_is_zero_dsa_batch_async(&task,
+ (const void **)addrs,
+ batch_size,
+ page_size);
+
+ bool is_zero;
+ for (int i = 0; i < batch_size; i++) {
+ is_zero = buffer_is_zero((const void *)&buf[page_size * i], page_size);
+ g_assert(task.results[i] == is_zero);
+ }
+ dsa_cleanup();
+}
+
+static void test_batch_async(void)
+{
+ test_batch();
+}
+
+static void test_page_fault(void)
+{
+ g_assert(!dsa_init(path1));
+ dsa_start();
+
+ char *buf[2];
+ int prot = PROT_READ | PROT_WRITE;
+ int flags = MAP_SHARED | MAP_ANON;
+ buf[0] = (char *)mmap(NULL, page_size * batch_size, prot, flags, -1, 0);
+ assert(buf[0] != MAP_FAILED);
+ buf[1] = (char *)malloc(page_size * batch_size);
+ assert(buf[1] != NULL);
+
+ for (int j = 0; j < 2; j++) {
+ batch_task_init(&task, batch_size);
+
+ char *addrs[batch_size];
+ for (int i = 0; i < batch_size; i++) {
+ addrs[i] = buf[j] + (page_size * i);
+ }
+
+ buffer_is_zero_dsa_batch_async(&task,
+ (const void **)addrs,
+ batch_size,
+ page_size);
+
+ bool is_zero;
+ for (int i = 0; i < batch_size; i++) {
+ is_zero = buffer_is_zero((const void *)&buf[j][page_size * i],
+ page_size);
+ g_assert(task.results[i] == is_zero);
+ }
+ }
+
+ assert(!munmap(buf[0], page_size * batch_size));
+ free(buf[1]);
+ dsa_cleanup();
+}
+
+static void test_various_buffer_sizes(void)
+{
+ g_assert(!dsa_init(path1));
+ dsa_start();
+
+ int len = 1 << 4;
+ for (int count = 12; count > 0; count--, len <<= 1) {
+ batch_task_init(&task, batch_size);
+
+ char buf[len * batch_size];
+ char *addrs[batch_size];
+ for (int i = 0; i < batch_size; i++) {
+ addrs[i] = buf + (len * i);
+ }
+
+ buffer_is_zero_dsa_batch_async(&task,
+ (const void **)addrs,
+ batch_size,
+ len);
+
+ bool is_zero;
+ for (int j = 0; j < batch_size; j++) {
+ is_zero = buffer_is_zero((const void *)&buf[len * j], len);
+ g_assert(task.results[j] == is_zero);
+ }
+ }
+
+ dsa_cleanup();
+}
+
+static void test_various_buffer_sizes_async(void)
+{
+ test_various_buffer_sizes();
+}
+
+static void test_double_start_stop(void)
+{
+ g_assert(!dsa_init(path1));
+ /* Double start */
+ dsa_start();
+ dsa_start();
+ g_assert(dsa_is_running());
+ do_single_task();
+
+ /* Double stop */
+ dsa_stop();
+ g_assert(!dsa_is_running());
+ dsa_stop();
+ g_assert(!dsa_is_running());
+
+ /* Restart */
+ dsa_start();
+ g_assert(dsa_is_running());
+ do_single_task();
+ dsa_cleanup();
+}
+
+static void test_is_running(void)
+{
+ g_assert(!dsa_init(path1));
+
+ g_assert(!dsa_is_running());
+ dsa_start();
+ g_assert(dsa_is_running());
+ dsa_stop();
+ g_assert(!dsa_is_running());
+ dsa_cleanup();
+}
+
+static void test_multiple_engines(void)
+{
+ g_assert(!dsa_init(path2));
+ dsa_start();
+
+ struct batch_task tasks[num_devices]
+ __attribute__((aligned(64)));
+ char bufs[num_devices][page_size * batch_size];
+ char *addrs[num_devices][batch_size];
+
+ /*
+ * This is a somewhat implementation-specific way
+ * of testing that the tasks have unique engines
+ * assigned to them.
+ */
+ batch_task_init(&tasks[0], batch_size);
+ batch_task_init(&tasks[1], batch_size);
+ g_assert(tasks[0].dsa_batch->device != tasks[1].dsa_batch->device);
+
+ for (int i = 0; i < num_devices; i++) {
+ for (int j = 0; j < batch_size; j++) {
+ addrs[i][j] = bufs[i] + (page_size * j);
+ }
+
+ buffer_is_zero_dsa_batch_async(&tasks[i],
+ (const void **)addrs[i],
+ batch_size, page_size);
+
+ bool is_zero;
+ for (int j = 0; j < batch_size; j++) {
+ is_zero = buffer_is_zero((const void *)&bufs[i][page_size * j],
+ page_size);
+ g_assert(tasks[i].results[j] == is_zero);
+ }
+ }
+
+ dsa_cleanup();
+}
+
+static void test_configure_dsa_twice(void)
+{
+ g_assert(!dsa_init(path2));
+ g_assert(!dsa_init(path2));
+ dsa_start();
+ do_single_task();
+ dsa_cleanup();
+}
+
+static void test_configure_dsa_bad_path(void)
+{
+ const char *bad_path = "/not/a/real/path";
+ g_assert(dsa_init(bad_path));
+}
+
+static void test_cleanup_before_configure(void)
+{
+ dsa_cleanup();
+ g_assert(!dsa_init(path2));
+}
+
+static void test_configure_dsa_num_devices(void)
+{
+ g_assert(!dsa_init(path1));
+ dsa_start();
+
+ do_single_task();
+ dsa_stop();
+ dsa_cleanup();
+}
+
+static void test_cleanup_twice(void)
+{
+ g_assert(!dsa_init(path2));
+ dsa_cleanup();
+ dsa_cleanup();
+
+ g_assert(!dsa_init(path2));
+ dsa_start();
+ do_single_task();
+ dsa_cleanup();
+}
+
+static int check_test_setup(void)
+{
+ const char *path[2] = {path1, path2};
+ for (int i = 0; i < sizeof(path) / sizeof(char *); i++) {
+ if (dsa_init(path[i])) {
+ return -1;
+ }
+ dsa_cleanup();
+ }
+ return 0;
+}
+
+int main(int argc, char **argv)
+{
+ g_test_init(&argc, &argv, NULL);
+
+ if (check_test_setup() != 0) {
+ /*
+ * This test requires extra setup. The current
+ * setup is not correct. Just skip this test
+ * for now.
+ */
+ exit(0);
+ }
+
+ if (num_devices > 1) {
+ g_test_add_func("/dsa/multiple_engines", test_multiple_engines);
+ }
+
+ g_test_add_func("/dsa/async/batch", test_batch_async);
+ g_test_add_func("/dsa/async/various_buffer_sizes",
+ test_various_buffer_sizes_async);
+ g_test_add_func("/dsa/async/null_buf", test_null_buf_async);
+ g_test_add_func("/dsa/async/zero_len", test_zero_len_async);
+ g_test_add_func("/dsa/async/oversized_batch", test_oversized_batch_async);
+ g_test_add_func("/dsa/async/zero_count", test_zero_count_async);
+ g_test_add_func("/dsa/async/single_zero", test_single_zero_async);
+ g_test_add_func("/dsa/async/single_nonzero", test_single_nonzero_async);
+ g_test_add_func("/dsa/async/null_task", test_null_task_async);
+ g_test_add_func("/dsa/async/page_fault", test_page_fault);
+
+ g_test_add_func("/dsa/double_start_stop", test_double_start_stop);
+ g_test_add_func("/dsa/is_running", test_is_running);
+
+ g_test_add_func("/dsa/configure_dsa_twice", test_configure_dsa_twice);
+ g_test_add_func("/dsa/configure_dsa_bad_path", test_configure_dsa_bad_path);
+ g_test_add_func("/dsa/cleanup_before_configure",
+ test_cleanup_before_configure);
+ g_test_add_func("/dsa/configure_dsa_num_devices",
+ test_configure_dsa_num_devices);
+ g_test_add_func("/dsa/cleanup_twice", test_cleanup_twice);
+
+ return g_test_run();
+}
--
2.30.2
* [PATCH v3 20/20] migration/multifd: Add integration tests for multifd with Intel DSA offloading.
2024-01-04 0:44 [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration Hao Xiang
` (18 preceding siblings ...)
2024-01-04 0:44 ` [PATCH v3 19/20] util/dsa: Add unit test coverage for Intel DSA task submission and completion Hao Xiang
@ 2024-01-04 0:44 ` Hao Xiang
2024-03-08 1:47 ` [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration liulongfang via
20 siblings, 0 replies; 41+ messages in thread
From: Hao Xiang @ 2024-01-04 0:44 UTC (permalink / raw)
To: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Hao Xiang
* Add test case to start and complete multifd live migration with DSA
offloading enabled.
* Add test case to start and cancel multifd live migration with DSA
offloading enabled.
Signed-off-by: Bryan Zhang <bryan.zhang@bytedance.com>
Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
---
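Because the DSA work queue at /dev/dsa/wq4.0 must already exist for these tests to run (test_dsa_setup just probes for it), a host-side setup sketch along the following lines is typically needed first. This is an assumption-heavy example: the device and queue names, and the accel-config option values, are illustrative and depend on the host's idxd driver configuration; the script skips gracefully when the tooling is absent.

```shell
#!/bin/sh
# Hypothetical DSA setup sketch (requires the idxd kernel driver and the
# accel-config tool; device/queue names are examples only).
if ! command -v accel-config >/dev/null 2>&1; then
    echo "accel-config not installed; skipping DSA setup"
    exit 0
fi
# Configure one engine and one shared user-mode work queue on device dsa0.
accel-config config-engine dsa0/engine0.0 --group-id=0
accel-config config-wq dsa0/wq0.0 --group-id=0 --mode=shared \
    --type=user --name=qemu --priority=10 --wq-size=16
accel-config enable-device dsa0
accel-config enable-wq dsa0/wq0.0
ls /dev/dsa/ || true
echo "DSA setup attempted"
```

Once a queue node exists, it is passed to QEMU through the multifd-dsa-accel parameter, exactly as the test's start hook does.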
tests/qtest/migration-test.c | 77 +++++++++++++++++++++++++++++++++++-
1 file changed, 76 insertions(+), 1 deletion(-)
diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index d520c587f7..ab3db94326 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -639,6 +639,12 @@ typedef struct {
const char *opts_target;
} MigrateStart;
+/*
+ * It requires separate steps to configure and enable DSA device.
+ * This test assumes that the configuration is done already.
+ */
+static const char *dsa_dev_path = "/dev/dsa/wq4.0";
+
/*
* A hook that runs after the src and dst QEMUs have been
* created, but before the migration is started. This can
@@ -2775,7 +2781,7 @@ static void test_multifd_tcp_tls_x509_reject_anon_client(void)
*
* And see that it works
*/
-static void test_multifd_tcp_cancel(void)
+static void test_multifd_tcp_cancel_common(bool use_dsa)
{
MigrateStart args = {
.hide_stderr = true,
@@ -2796,6 +2802,10 @@ static void test_multifd_tcp_cancel(void)
migrate_set_capability(from, "multifd", true);
migrate_set_capability(to, "multifd", true);
+ if (use_dsa) {
+ migrate_set_parameter_str(from, "multifd-dsa-accel", dsa_dev_path);
+ }
+
/* Start incoming migration from the 1st socket */
migrate_incoming_qmp(to, "tcp:127.0.0.1:0", "{}");
@@ -2852,6 +2862,48 @@ static void test_multifd_tcp_cancel(void)
test_migrate_end(from, to2, true);
}
+/*
+ * This test does:
+ * source target
+ * migrate_incoming
+ * migrate
+ * migrate_cancel
+ * launch another target
+ * migrate
+ *
+ * And see that it works
+ */
+static void test_multifd_tcp_cancel(void)
+{
+ test_multifd_tcp_cancel_common(false);
+}
+
+#ifdef CONFIG_DSA_OPT
+
+static void *test_migrate_precopy_tcp_multifd_start_dsa(QTestState *from,
+ QTestState *to)
+{
+ migrate_set_parameter_str(from, "multifd-dsa-accel", dsa_dev_path);
+ return test_migrate_precopy_tcp_multifd_start_common(from, to, "none");
+}
+
+static void test_multifd_tcp_none_dsa(void)
+{
+ MigrateCommon args = {
+ .listen_uri = "defer",
+ .start_hook = test_migrate_precopy_tcp_multifd_start_dsa,
+ };
+
+ test_precopy_common(&args);
+}
+
+static void test_multifd_tcp_cancel_dsa(void)
+{
+ test_multifd_tcp_cancel_common(true);
+}
+
+#endif
+
static void calc_dirty_rate(QTestState *who, uint64_t calc_time)
{
qtest_qmp_assert_success(who,
@@ -3274,6 +3326,19 @@ static bool kvm_dirty_ring_supported(void)
#endif
}
+#ifdef CONFIG_DSA_OPT
+static int test_dsa_setup(void)
+{
+ int fd;
+ fd = open(dsa_dev_path, O_RDWR);
+ if (fd < 0) {
+ return -1;
+ }
+ close(fd);
+ return 0;
+}
+#endif
+
int main(int argc, char **argv)
{
bool has_kvm, has_tcg;
@@ -3466,6 +3531,16 @@ int main(int argc, char **argv)
}
qtest_add_func("/migration/multifd/tcp/plain/none",
test_multifd_tcp_none);
+
+#ifdef CONFIG_DSA_OPT
+ if (g_str_equal(arch, "x86_64") && test_dsa_setup() == 0) {
+ qtest_add_func("/migration/multifd/tcp/plain/none/dsa",
+ test_multifd_tcp_none_dsa);
+ qtest_add_func("/migration/multifd/tcp/plain/cancel/dsa",
+ test_multifd_tcp_cancel_dsa);
+ }
+#endif
+
/*
* This test is flaky and sometimes fails in CI and otherwise:
* don't run unless user opts in via environment variable.
--
2.30.2
^ permalink raw reply related [flat|nested] 41+ messages in thread
* Re: [PATCH v3 01/20] multifd: Add capability to enable/disable zero_page
2024-01-04 0:44 ` [PATCH v3 01/20] multifd: Add capability to enable/disable zero_page Hao Xiang
@ 2024-01-08 20:39 ` Fabiano Rosas
2024-01-11 5:47 ` [External] " Hao Xiang
2024-01-15 6:02 ` Shivam Kumar
1 sibling, 1 reply; 41+ messages in thread
From: Fabiano Rosas @ 2024-01-08 20:39 UTC (permalink / raw)
To: Hao Xiang, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
Cc: Juan Quintela
Hao Xiang <hao.xiang@bytedance.com> writes:
> From: Juan Quintela <quintela@redhat.com>
>
> We have to enable it by default until we introduce the new code.
>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
> migration/options.c | 15 +++++++++++++++
> migration/options.h | 1 +
> qapi/migration.json | 8 +++++++-
> 3 files changed, 23 insertions(+), 1 deletion(-)
>
> diff --git a/migration/options.c b/migration/options.c
> index 8d8ec73ad9..0f6bd78b9f 100644
> --- a/migration/options.c
> +++ b/migration/options.c
> @@ -204,6 +204,8 @@ Property migration_properties[] = {
> DEFINE_PROP_MIG_CAP("x-switchover-ack",
> MIGRATION_CAPABILITY_SWITCHOVER_ACK),
> DEFINE_PROP_MIG_CAP("x-dirty-limit", MIGRATION_CAPABILITY_DIRTY_LIMIT),
> + DEFINE_PROP_MIG_CAP("main-zero-page",
> + MIGRATION_CAPABILITY_MAIN_ZERO_PAGE),
> DEFINE_PROP_END_OF_LIST(),
> };
>
> @@ -284,6 +286,19 @@ bool migrate_multifd(void)
> return s->capabilities[MIGRATION_CAPABILITY_MULTIFD];
> }
>
> +bool migrate_use_main_zero_page(void)
> +{
> + /* MigrationState *s; */
> +
> + /* s = migrate_get_current(); */
> +
> + /*
> + * We will enable this when we add the right code.
> + * return s->enabled_capabilities[MIGRATION_CAPABILITY_MAIN_ZERO_PAGE];
> + */
> + return true;
> +}
> +
> bool migrate_pause_before_switchover(void)
> {
> MigrationState *s = migrate_get_current();
> diff --git a/migration/options.h b/migration/options.h
> index 246c160aee..c901eb57c6 100644
> --- a/migration/options.h
> +++ b/migration/options.h
> @@ -88,6 +88,7 @@ int migrate_multifd_channels(void);
> MultiFDCompression migrate_multifd_compression(void);
> int migrate_multifd_zlib_level(void);
> int migrate_multifd_zstd_level(void);
> +bool migrate_use_main_zero_page(void);
> uint8_t migrate_throttle_trigger_threshold(void);
> const char *migrate_tls_authz(void);
> const char *migrate_tls_creds(void);
> diff --git a/qapi/migration.json b/qapi/migration.json
> index eb2f883513..80c4b13516 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -531,6 +531,12 @@
> # and can result in more stable read performance. Requires KVM
> # with accelerator property "dirty-ring-size" set. (Since 8.1)
> #
> +#
> +# @main-zero-page: If enabled, the detection of zero pages will be
> +# done on the main thread. Otherwise it is done on
> +# the multifd threads.
> +# (since 8.2)
> +#
> # Features:
> #
> # @deprecated: Member @block is deprecated. Use blockdev-mirror with
> @@ -555,7 +561,7 @@
> { 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
> 'validate-uuid', 'background-snapshot',
> 'zero-copy-send', 'postcopy-preempt', 'switchover-ack',
> - 'dirty-limit'] }
> + 'dirty-limit', 'main-zero-page'] }
>
> ##
> # @MigrationCapabilityStatus:
I'll extract this zero page work into a separate series and submit for
review soon. I want to get people's opinion on it independently of this
series.
* Re: [External] Re: [PATCH v3 01/20] multifd: Add capability to enable/disable zero_page
2024-01-08 20:39 ` Fabiano Rosas
@ 2024-01-11 5:47 ` Hao Xiang
0 siblings, 0 replies; 41+ messages in thread
From: Hao Xiang @ 2024-01-11 5:47 UTC (permalink / raw)
To: Fabiano Rosas
Cc: peter.maydell, peterx, marcandre.lureau, bryan.zhang, qemu-devel,
Juan Quintela
On Mon, Jan 8, 2024 at 12:39 PM Fabiano Rosas <farosas@suse.de> wrote:
>
> Hao Xiang <hao.xiang@bytedance.com> writes:
>
> > From: Juan Quintela <quintela@redhat.com>
> >
> > We have to enable it by default until we introduce the new code.
> >
> > Signed-off-by: Juan Quintela <quintela@redhat.com>
> > ---
> > migration/options.c | 15 +++++++++++++++
> > migration/options.h | 1 +
> > qapi/migration.json | 8 +++++++-
> > 3 files changed, 23 insertions(+), 1 deletion(-)
> >
> > diff --git a/migration/options.c b/migration/options.c
> > index 8d8ec73ad9..0f6bd78b9f 100644
> > --- a/migration/options.c
> > +++ b/migration/options.c
> > @@ -204,6 +204,8 @@ Property migration_properties[] = {
> > DEFINE_PROP_MIG_CAP("x-switchover-ack",
> > MIGRATION_CAPABILITY_SWITCHOVER_ACK),
> > DEFINE_PROP_MIG_CAP("x-dirty-limit", MIGRATION_CAPABILITY_DIRTY_LIMIT),
> > + DEFINE_PROP_MIG_CAP("main-zero-page",
> > + MIGRATION_CAPABILITY_MAIN_ZERO_PAGE),
> > DEFINE_PROP_END_OF_LIST(),
> > };
> >
> > @@ -284,6 +286,19 @@ bool migrate_multifd(void)
> > return s->capabilities[MIGRATION_CAPABILITY_MULTIFD];
> > }
> >
> > +bool migrate_use_main_zero_page(void)
> > +{
> > + /* MigrationState *s; */
> > +
> > + /* s = migrate_get_current(); */
> > +
> > + /*
> > + * We will enable this when we add the right code.
> > + * return s->enabled_capabilities[MIGRATION_CAPABILITY_MAIN_ZERO_PAGE];
> > + */
> > + return true;
> > +}
> > +
> > bool migrate_pause_before_switchover(void)
> > {
> > MigrationState *s = migrate_get_current();
> > diff --git a/migration/options.h b/migration/options.h
> > index 246c160aee..c901eb57c6 100644
> > --- a/migration/options.h
> > +++ b/migration/options.h
> > @@ -88,6 +88,7 @@ int migrate_multifd_channels(void);
> > MultiFDCompression migrate_multifd_compression(void);
> > int migrate_multifd_zlib_level(void);
> > int migrate_multifd_zstd_level(void);
> > +bool migrate_use_main_zero_page(void);
> > uint8_t migrate_throttle_trigger_threshold(void);
> > const char *migrate_tls_authz(void);
> > const char *migrate_tls_creds(void);
> > diff --git a/qapi/migration.json b/qapi/migration.json
> > index eb2f883513..80c4b13516 100644
> > --- a/qapi/migration.json
> > +++ b/qapi/migration.json
> > @@ -531,6 +531,12 @@
> > # and can result in more stable read performance. Requires KVM
> > # with accelerator property "dirty-ring-size" set. (Since 8.1)
> > #
> > +#
> > +# @main-zero-page: If enabled, the detection of zero pages will be
> > +# done on the main thread. Otherwise it is done on
> > +# the multifd threads.
> > +# (since 8.2)
> > +#
> > # Features:
> > #
> > # @deprecated: Member @block is deprecated. Use blockdev-mirror with
> > @@ -555,7 +561,7 @@
> > { 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
> > 'validate-uuid', 'background-snapshot',
> > 'zero-copy-send', 'postcopy-preempt', 'switchover-ack',
> > - 'dirty-limit'] }
> > + 'dirty-limit', 'main-zero-page'] }
> >
> > ##
> > # @MigrationCapabilityStatus:
>
> I'll extract this zero page work into a separate series and submit for
> review soon. I want to get people's opinion on it independently of this
> series.
Sounds good. Thanks.
* Re: [PATCH v3 01/20] multifd: Add capability to enable/disable zero_page
2024-01-04 0:44 ` [PATCH v3 01/20] multifd: Add capability to enable/disable zero_page Hao Xiang
2024-01-08 20:39 ` Fabiano Rosas
@ 2024-01-15 6:02 ` Shivam Kumar
2024-01-23 0:33 ` [External] " Hao Xiang
1 sibling, 1 reply; 41+ messages in thread
From: Shivam Kumar @ 2024-01-15 6:02 UTC (permalink / raw)
To: Hao Xiang
Cc: farosas@suse.de, peter.maydell@linaro.org, peterx@redhat.com,
marcandre.lureau@redhat.com, bryan.zhang@bytedance.com,
qemu-devel@nongnu.org, Juan Quintela
> On 04-Jan-2024, at 6:14 AM, Hao Xiang <hao.xiang@bytedance.com> wrote:
>
> From: Juan Quintela <quintela@redhat.com>
>
> We have to enable it by default until we introduce the new code.
>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
> migration/options.c | 15 +++++++++++++++
> migration/options.h | 1 +
> qapi/migration.json | 8 +++++++-
> 3 files changed, 23 insertions(+), 1 deletion(-)
>
> diff --git a/migration/options.c b/migration/options.c
> index 8d8ec73ad9..0f6bd78b9f 100644
> --- a/migration/options.c
> +++ b/migration/options.c
> @@ -204,6 +204,8 @@ Property migration_properties[] = {
> DEFINE_PROP_MIG_CAP("x-switchover-ack",
> MIGRATION_CAPABILITY_SWITCHOVER_ACK),
> DEFINE_PROP_MIG_CAP("x-dirty-limit", MIGRATION_CAPABILITY_DIRTY_LIMIT),
> + DEFINE_PROP_MIG_CAP("main-zero-page",
> + MIGRATION_CAPABILITY_MAIN_ZERO_PAGE),
> DEFINE_PROP_END_OF_LIST(),
> };
>
> @@ -284,6 +286,19 @@ bool migrate_multifd(void)
> return s->capabilities[MIGRATION_CAPABILITY_MULTIFD];
> }
>
> +bool migrate_use_main_zero_page(void)
> +{
> + /* MigrationState *s; */
> +
> + /* s = migrate_get_current(); */
> +
> + /*
> + * We will enable this when we add the right code.
> + * return s->enabled_capabilities[MIGRATION_CAPABILITY_MAIN_ZERO_PAGE];
> + */
> + return true;
> +}
> +
> bool migrate_pause_before_switchover(void)
> {
> MigrationState *s = migrate_get_current();
> diff --git a/migration/options.h b/migration/options.h
> index 246c160aee..c901eb57c6 100644
> --- a/migration/options.h
> +++ b/migration/options.h
> @@ -88,6 +88,7 @@ int migrate_multifd_channels(void);
> MultiFDCompression migrate_multifd_compression(void);
> int migrate_multifd_zlib_level(void);
> int migrate_multifd_zstd_level(void);
> +bool migrate_use_main_zero_page(void);
> uint8_t migrate_throttle_trigger_threshold(void);
> const char *migrate_tls_authz(void);
> const char *migrate_tls_creds(void);
> diff --git a/qapi/migration.json b/qapi/migration.json
> index eb2f883513..80c4b13516 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -531,6 +531,12 @@
> # and can result in more stable read performance. Requires KVM
> # with accelerator property "dirty-ring-size" set. (Since 8.1)
> #
> +#
> +# @main-zero-page: If enabled, the detection of zero pages will be
> +# done on the main thread. Otherwise it is done on
> +# the multifd threads.
> +# (since 8.2)
> +#
Should the capability name be something like "zero-page-detection" or just “zero-page”?
CC: Fabiano Rosas
> # Features:
> #
> # @deprecated: Member @block is deprecated. Use blockdev-mirror with
> @@ -555,7 +561,7 @@
> { 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
> 'validate-uuid', 'background-snapshot',
> 'zero-copy-send', 'postcopy-preempt', 'switchover-ack',
> - 'dirty-limit'] }
> + 'dirty-limit', 'main-zero-page'] }
>
> ##
> # @MigrationCapabilityStatus:
> --
> 2.30.2
* Re: [PATCH v3 13/20] migration/multifd: Prepare to introduce DSA acceleration on the multifd path.
2024-01-04 0:44 ` [PATCH v3 13/20] migration/multifd: Prepare to introduce DSA acceleration on the multifd path Hao Xiang
@ 2024-01-15 6:46 ` Shivam Kumar
2024-01-23 0:37 ` [External] " Hao Xiang
0 siblings, 1 reply; 41+ messages in thread
From: Shivam Kumar @ 2024-01-15 6:46 UTC (permalink / raw)
To: Hao Xiang
Cc: farosas@suse.de, peter.maydell@linaro.org, peterx@redhat.com,
marcandre.lureau@redhat.com, bryan.zhang@bytedance.com,
qemu-devel@nongnu.org
> On 04-Jan-2024, at 6:14 AM, Hao Xiang <hao.xiang@bytedance.com> wrote:
>
> 1. Refactor multifd_send_thread function.
> 2. Implement buffer_is_zero_use_cpu to handle CPU based zero page
> checking.
> 3. Introduce the batch task structure in MultiFDSendParams.
>
> Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
> ---
> include/qemu/dsa.h | 43 +++++++++++++++++++++++--
> migration/multifd.c | 77 ++++++++++++++++++++++++++++++++++++---------
> migration/multifd.h | 2 ++
> util/dsa.c | 51 +++++++++++++++++++++++++-----
> 4 files changed, 148 insertions(+), 25 deletions(-)
>
> diff --git a/include/qemu/dsa.h b/include/qemu/dsa.h
> index e002652879..fe7772107a 100644
> --- a/include/qemu/dsa.h
> +++ b/include/qemu/dsa.h
> @@ -2,6 +2,7 @@
> #define QEMU_DSA_H
>
> #include "qemu/error-report.h"
> +#include "exec/cpu-common.h"
> #include "qemu/thread.h"
> #include "qemu/queue.h"
>
> @@ -42,6 +43,20 @@ typedef struct dsa_batch_task {
> QSIMPLEQ_ENTRY(dsa_batch_task) entry;
> } dsa_batch_task;
>
> +#endif
> +
> +struct batch_task {
> + /* Address of each page in pages */
> + ram_addr_t *addr;
> + /* Zero page checking results */
> + bool *results;
> +#ifdef CONFIG_DSA_OPT
> + struct dsa_batch_task *dsa_batch;
> +#endif
> +};
> +
> +#ifdef CONFIG_DSA_OPT
> +
> /**
> * @brief Initializes DSA devices.
> *
> @@ -74,7 +89,7 @@ void dsa_cleanup(void);
> bool dsa_is_running(void);
>
> /**
> - * @brief Initializes a buffer zero batch task.
> + * @brief Initializes a buffer zero DSA batch task.
> *
> * @param task A pointer to the batch task to initialize.
> * @param results A pointer to an array of zero page checking results.
> @@ -102,7 +117,7 @@ void buffer_zero_batch_task_destroy(struct dsa_batch_task *task);
> * @return Zero if successful, otherwise non-zero.
> */
> int
> -buffer_is_zero_dsa_batch_async(struct dsa_batch_task *batch_task,
> +buffer_is_zero_dsa_batch_async(struct batch_task *batch_task,
> const void **buf, size_t count, size_t len);
>
> #else
> @@ -128,6 +143,30 @@ static inline void dsa_stop(void) {}
>
> static inline void dsa_cleanup(void) {}
>
> +static inline int
> +buffer_is_zero_dsa_batch_async(struct batch_task *batch_task,
> + const void **buf, size_t count, size_t len)
> +{
> + exit(1);
> +}
> +
> #endif
>
> +/**
> + * @brief Initializes a general buffer zero batch task.
> + *
> + * @param task A pointer to the general batch task to initialize.
> + * @param batch_size The number of zero page checking tasks in the batch.
> + */
> +void
> +batch_task_init(struct batch_task *task, int batch_size);
> +
> +/**
> + * @brief Destroys a general buffer zero batch task.
> + *
> + * @param task A pointer to the general batch task to destroy.
> + */
> +void
> +batch_task_destroy(struct batch_task *task);
> +
> #endif
> diff --git a/migration/multifd.c b/migration/multifd.c
> index eece85569f..e7c549b93e 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -14,6 +14,8 @@
> #include "qemu/cutils.h"
> #include "qemu/rcu.h"
> #include "qemu/cutils.h"
> +#include "qemu/dsa.h"
> +#include "qemu/memalign.h"
> #include "exec/target_page.h"
> #include "sysemu/sysemu.h"
> #include "exec/ramblock.h"
> @@ -574,6 +576,8 @@ void multifd_save_cleanup(void)
> p->name = NULL;
> multifd_pages_clear(p->pages);
> p->pages = NULL;
> + batch_task_destroy(p->batch_task);
> + p->batch_task = NULL;
> p->packet_len = 0;
> g_free(p->packet);
> p->packet = NULL;
> @@ -678,13 +682,66 @@ int multifd_send_sync_main(QEMUFile *f)
> return 0;
> }
>
> +static void set_page(MultiFDSendParams *p, bool zero_page, uint64_t offset)
> +{
> + RAMBlock *rb = p->pages->block;
> + if (zero_page) {
> + p->zero[p->zero_num] = offset;
> + p->zero_num++;
> + ram_release_page(rb->idstr, offset);
> + } else {
> + p->normal[p->normal_num] = offset;
> + p->normal_num++;
> + }
> +}
> +
> +static void buffer_is_zero_use_cpu(MultiFDSendParams *p)
> +{
> + const void **buf = (const void **)p->batch_task->addr;
> + assert(!migrate_use_main_zero_page());
> +
> + for (int i = 0; i < p->pages->num; i++) {
> + p->batch_task->results[i] = buffer_is_zero(buf[i], p->page_size);
> + }
> +}
> +
> +static void set_normal_pages(MultiFDSendParams *p)
> +{
> + for (int i = 0; i < p->pages->num; i++) {
> + p->batch_task->results[i] = false;
> + }
> +}
Please correct me if I am wrong, but set_normal_pages will not be a part of the final patch, right? It is there for testing the performance against different zero page ratio scenarios. If so, can we isolate these parts into a separate patch?
> +
> +static void multifd_zero_page_check(MultiFDSendParams *p)
> +{
> + /* older qemu don't understand zero page on multifd channel */
> + bool use_multifd_zero_page = !migrate_use_main_zero_page();
> +
> + RAMBlock *rb = p->pages->block;
> +
> + for (int i = 0; i < p->pages->num; i++) {
> + p->batch_task->addr[i] = (ram_addr_t)(rb->host + p->pages->offset[i]);
> + }
> +
> + if (use_multifd_zero_page) {
> + buffer_is_zero_use_cpu(p);
> + } else {
> + /* No zero page checking. All pages are normal pages. */
> + set_normal_pages(p);
> + }
> +
> + for (int i = 0; i < p->pages->num; i++) {
> + uint64_t offset = p->pages->offset[i];
> + bool zero_page = p->batch_task->results[i];
> + set_page(p, zero_page, offset);
> + }
> +}
> +
> static void *multifd_send_thread(void *opaque)
> {
> MultiFDSendParams *p = opaque;
> MigrationThread *thread = NULL;
> Error *local_err = NULL;
> - /* qemu older than 8.2 don't understand zero page on multifd channel */
> - bool use_multifd_zero_page = !migrate_use_main_zero_page();
> int ret = 0;
> bool use_zero_copy_send = migrate_zero_copy_send();
>
> @@ -710,7 +767,6 @@ static void *multifd_send_thread(void *opaque)
> qemu_mutex_lock(&p->mutex);
>
> if (p->pending_job) {
> - RAMBlock *rb = p->pages->block;
> uint64_t packet_num = p->packet_num;
> uint32_t flags;
>
> @@ -723,18 +779,7 @@ static void *multifd_send_thread(void *opaque)
> p->iovs_num = 1;
> }
>
> - for (int i = 0; i < p->pages->num; i++) {
> - uint64_t offset = p->pages->offset[i];
> - if (use_multifd_zero_page &&
> - buffer_is_zero(rb->host + offset, p->page_size)) {
> - p->zero[p->zero_num] = offset;
> - p->zero_num++;
> - ram_release_page(rb->idstr, offset);
> - } else {
> - p->normal[p->normal_num] = offset;
> - p->normal_num++;
> - }
> - }
> + multifd_zero_page_check(p);
>
> if (p->normal_num) {
> ret = multifd_send_state->ops->send_prepare(p, &local_err);
> @@ -975,6 +1020,8 @@ int multifd_save_setup(Error **errp)
> p->pending_job = 0;
> p->id = i;
> p->pages = multifd_pages_init(page_count);
> + p->batch_task = g_malloc0(sizeof(struct batch_task));
> + batch_task_init(p->batch_task, page_count);
> p->packet_len = sizeof(MultiFDPacket_t)
> + sizeof(uint64_t) * page_count;
> p->packet = g_malloc0(p->packet_len);
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 13762900d4..97b5f888a7 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -119,6 +119,8 @@ typedef struct {
> * pending_job != 0 -> multifd_channel can use it.
> */
> MultiFDPages_t *pages;
> + /* Zero page checking batch task */
> + struct batch_task *batch_task;
>
> /* thread local variables. No locking required */
>
> diff --git a/util/dsa.c b/util/dsa.c
> index 5a2bf33651..f6224a27d4 100644
> --- a/util/dsa.c
> +++ b/util/dsa.c
> @@ -802,7 +802,7 @@ buffer_zero_task_init_int(struct dsa_hw_desc *descriptor,
> }
>
> /**
> - * @brief Initializes a buffer zero batch task.
> + * @brief Initializes a buffer zero DSA batch task.
> *
> * @param task A pointer to the batch task to initialize.
> * @param results A pointer to an array of zero page checking results.
> @@ -1107,29 +1107,64 @@ void dsa_cleanup(void)
> * @return Zero if successful, otherwise non-zero.
> */
> int
> -buffer_is_zero_dsa_batch_async(struct dsa_batch_task *batch_task,
> +buffer_is_zero_dsa_batch_async(struct batch_task *batch_task,
> const void **buf, size_t count, size_t len)
> {
> - if (count <= 0 || count > batch_task->batch_size) {
> + struct dsa_batch_task *dsa_batch = batch_task->dsa_batch;
> +
> + if (count <= 0 || count > dsa_batch->batch_size) {
> return -1;
> }
>
> - assert(batch_task != NULL);
> + assert(dsa_batch != NULL);
> assert(len != 0);
> assert(buf != NULL);
>
> if (count == 1) {
> /* DSA doesn't take batch operation with only 1 task. */
> - buffer_zero_dsa_async(batch_task, buf[0], len);
> + buffer_zero_dsa_async(dsa_batch, buf[0], len);
> } else {
> - buffer_zero_dsa_batch_async(batch_task, buf, count, len);
> + buffer_zero_dsa_batch_async(dsa_batch, buf, count, len);
> }
>
> - buffer_zero_dsa_wait(batch_task);
> - buffer_zero_cpu_fallback(batch_task);
> + buffer_zero_dsa_wait(dsa_batch);
> + buffer_zero_cpu_fallback(dsa_batch);
>
> return 0;
> }
>
> #endif
>
> +/**
> + * @brief Initializes a general buffer zero batch task.
> + *
> + * @param task A pointer to the general batch task to initialize.
> + * @param batch_size The number of zero page checking tasks in the batch.
> + */
> +void
> +batch_task_init(struct batch_task *task, int batch_size)
> +{
> + task->addr = g_new0(ram_addr_t, batch_size);
> + task->results = g_new0(bool, batch_size);
> +#ifdef CONFIG_DSA_OPT
> + task->dsa_batch = qemu_memalign(64, sizeof(struct dsa_batch_task));
> + buffer_zero_batch_task_init(task->dsa_batch, task->results, batch_size);
> +#endif
> +}
> +
> +/**
> + * @brief Destroys a general buffer zero batch task.
> + *
> + * @param task A pointer to the general batch task to destroy.
> + */
> +void
> +batch_task_destroy(struct batch_task *task)
> +{
> + g_free(task->addr);
> + g_free(task->results);
> +#ifdef CONFIG_DSA_OPT
> + buffer_zero_batch_task_destroy(task->dsa_batch);
> + qemu_vfree(task->dsa_batch);
> +#endif
> +}
> +
> --
> 2.30.2
* Re: [PATCH v3 03/20] multifd: Zero pages transmission
2024-01-04 0:44 ` [PATCH v3 03/20] multifd: Zero " Hao Xiang
@ 2024-01-15 7:01 ` Shivam Kumar
2024-01-23 0:46 ` [External] " Hao Xiang
2024-02-01 5:22 ` Peter Xu
1 sibling, 1 reply; 41+ messages in thread
From: Shivam Kumar @ 2024-01-15 7:01 UTC (permalink / raw)
To: Hao Xiang
Cc: farosas@suse.de, peter.maydell@linaro.org, peterx@redhat.com,
marcandre.lureau@redhat.com, bryan.zhang@bytedance.com,
qemu-devel@nongnu.org, Juan Quintela
> On 04-Jan-2024, at 6:14 AM, Hao Xiang <hao.xiang@bytedance.com> wrote:
>
> From: Juan Quintela <quintela@redhat.com>
>
> This implements the zero page detection and handling.
>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
> migration/multifd.c | 41 +++++++++++++++++++++++++++++++++++++++--
> migration/multifd.h | 5 +++++
> 2 files changed, 44 insertions(+), 2 deletions(-)
>
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 5a1f50c7e8..756673029d 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -11,6 +11,7 @@
> */
>
> #include "qemu/osdep.h"
> +#include "qemu/cutils.h"
> #include "qemu/rcu.h"
> #include "exec/target_page.h"
> #include "sysemu/sysemu.h"
> @@ -279,6 +280,12 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>
> packet->offset[i] = cpu_to_be64(temp);
> }
> + for (i = 0; i < p->zero_num; i++) {
> + /* there are architectures where ram_addr_t is 32 bit */
> + uint64_t temp = p->zero[i];
> +
> + packet->offset[p->normal_num + i] = cpu_to_be64(temp);
> + }
> }
>
> static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
> @@ -361,6 +368,18 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
> p->normal[i] = offset;
> }
>
> + for (i = 0; i < p->zero_num; i++) {
> + uint64_t offset = be64_to_cpu(packet->offset[p->normal_num + i]);
> +
> + if (offset > (p->block->used_length - p->page_size)) {
> + error_setg(errp, "multifd: offset too long %" PRIu64
> + " (max " RAM_ADDR_FMT ")",
> + offset, p->block->used_length);
> + return -1;
> + }
> + p->zero[i] = offset;
> + }
> +
> return 0;
> }
>
> @@ -664,6 +683,8 @@ static void *multifd_send_thread(void *opaque)
> MultiFDSendParams *p = opaque;
> MigrationThread *thread = NULL;
> Error *local_err = NULL;
> + /* qemu older than 8.2 don't understand zero page on multifd channel */
> + bool use_zero_page = !migrate_use_main_zero_page();
> int ret = 0;
> bool use_zero_copy_send = migrate_zero_copy_send();
>
> @@ -689,6 +710,7 @@ static void *multifd_send_thread(void *opaque)
> qemu_mutex_lock(&p->mutex);
>
> if (p->pending_job) {
> + RAMBlock *rb = p->pages->block;
> uint64_t packet_num = p->packet_num;
> uint32_t flags;
> p->normal_num = 0;
> @@ -701,8 +723,16 @@ static void *multifd_send_thread(void *opaque)
> }
>
> for (int i = 0; i < p->pages->num; i++) {
> - p->normal[p->normal_num] = p->pages->offset[i];
> - p->normal_num++;
> + uint64_t offset = p->pages->offset[i];
> + if (use_zero_page &&
> + buffer_is_zero(rb->host + offset, p->page_size)) {
> + p->zero[p->zero_num] = offset;
> + p->zero_num++;
> + ram_release_page(rb->idstr, offset);
> + } else {
> + p->normal[p->normal_num] = offset;
> + p->normal_num++;
> + }
> }
>
> if (p->normal_num) {
> @@ -1155,6 +1185,13 @@ static void *multifd_recv_thread(void *opaque)
> }
> }
>
> + for (int i = 0; i < p->zero_num; i++) {
> + void *page = p->host + p->zero[i];
> + if (!buffer_is_zero(page, p->page_size)) {
> + memset(page, 0, p->page_size);
> + }
> + }
> +
I am wondering if zeroing the zero page on the destination can also be offloaded to DSA. Can it help reduce CPU consumption on the destination in the case of multifd-based migration?
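For illustration (not part of the series), the conditional clear in the hunk above can be sketched as a standalone function. buffer_is_zero here is a simplified byte-scan stand-in for QEMU's optimized helper, and the 4 KiB page size is an assumption:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Simplified stand-in for QEMU's optimized buffer_is_zero() */
static bool buffer_is_zero(const void *buf, size_t len)
{
    const unsigned char *p = buf;
    for (size_t i = 0; i < len; i++) {
        if (p[i] != 0) {
            return false;
        }
    }
    return true;
}

/*
 * Apply a received zero page on the destination: only memset() the page
 * if it is actually dirty, so already-clean pages cost a read-only scan
 * instead of a write that would dirty the cache and fault in memory.
 */
static void apply_zero_page(void *page)
{
    if (!buffer_is_zero(page, PAGE_SIZE)) {
        memset(page, 0, PAGE_SIZE);
    }
}
```

The read-before-write check is what keeps incoming zero pages from faulting in (and dirtying) memory that the destination has never touched.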
> if (flags & MULTIFD_FLAG_SYNC) {
> qemu_sem_post(&multifd_recv_state->sem_sync);
> qemu_sem_wait(&p->sem_sync);
> diff --git a/migration/multifd.h b/migration/multifd.h
> index d587b0e19c..13762900d4 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -53,6 +53,11 @@ typedef struct {
> uint32_t unused32[1]; /* Reserved for future use */
> uint64_t unused64[3]; /* Reserved for future use */
> char ramblock[256];
> + /*
> + * This array contains the pointers to:
> + * - normal pages (initial normal_pages entries)
> + * - zero pages (following zero_pages entries)
> + */
> uint64_t offset[];
> } __attribute__((packed)) MultiFDPacket_t;
>
> --
> 2.30.2
* Re: [External] Re: [PATCH v3 01/20] multifd: Add capability to enable/disable zero_page
2024-01-15 6:02 ` Shivam Kumar
@ 2024-01-23 0:33 ` Hao Xiang
2024-01-23 15:10 ` Fabiano Rosas
0 siblings, 1 reply; 41+ messages in thread
From: Hao Xiang @ 2024-01-23 0:33 UTC (permalink / raw)
To: Shivam Kumar
Cc: farosas@suse.de, peter.maydell@linaro.org, peterx@redhat.com,
marcandre.lureau@redhat.com, bryan.zhang@bytedance.com,
qemu-devel@nongnu.org
On Sun, Jan 14, 2024 at 10:02 PM Shivam Kumar <shivam.kumar1@nutanix.com> wrote:
>
>
>
> > On 04-Jan-2024, at 6:14 AM, Hao Xiang <hao.xiang@bytedance.com> wrote:
> >
> > From: Juan Quintela <quintela@redhat.com>
> >
> > We have to enable it by default until we introduce the new code.
> >
> > Signed-off-by: Juan Quintela <quintela@redhat.com>
> > ---
> > migration/options.c | 15 +++++++++++++++
> > migration/options.h | 1 +
> > qapi/migration.json | 8 +++++++-
> > 3 files changed, 23 insertions(+), 1 deletion(-)
> >
> > diff --git a/migration/options.c b/migration/options.c
> > index 8d8ec73ad9..0f6bd78b9f 100644
> > --- a/migration/options.c
> > +++ b/migration/options.c
> > @@ -204,6 +204,8 @@ Property migration_properties[] = {
> > DEFINE_PROP_MIG_CAP("x-switchover-ack",
> > MIGRATION_CAPABILITY_SWITCHOVER_ACK),
> > DEFINE_PROP_MIG_CAP("x-dirty-limit", MIGRATION_CAPABILITY_DIRTY_LIMIT),
> > + DEFINE_PROP_MIG_CAP("main-zero-page",
> > + MIGRATION_CAPABILITY_MAIN_ZERO_PAGE),
> > DEFINE_PROP_END_OF_LIST(),
> > };
> >
> > @@ -284,6 +286,19 @@ bool migrate_multifd(void)
> > return s->capabilities[MIGRATION_CAPABILITY_MULTIFD];
> > }
> >
> > +bool migrate_use_main_zero_page(void)
> > +{
> > + /* MigrationState *s; */
> > +
> > + /* s = migrate_get_current(); */
> > +
> > + /*
> > + * We will enable this when we add the right code.
> > + * return s->enabled_capabilities[MIGRATION_CAPABILITY_MAIN_ZERO_PAGE];
> > + */
> > + return true;
> > +}
> > +
> > bool migrate_pause_before_switchover(void)
> > {
> > MigrationState *s = migrate_get_current();
> > diff --git a/migration/options.h b/migration/options.h
> > index 246c160aee..c901eb57c6 100644
> > --- a/migration/options.h
> > +++ b/migration/options.h
> > @@ -88,6 +88,7 @@ int migrate_multifd_channels(void);
> > MultiFDCompression migrate_multifd_compression(void);
> > int migrate_multifd_zlib_level(void);
> > int migrate_multifd_zstd_level(void);
> > +bool migrate_use_main_zero_page(void);
> > uint8_t migrate_throttle_trigger_threshold(void);
> > const char *migrate_tls_authz(void);
> > const char *migrate_tls_creds(void);
> > diff --git a/qapi/migration.json b/qapi/migration.json
> > index eb2f883513..80c4b13516 100644
> > --- a/qapi/migration.json
> > +++ b/qapi/migration.json
> > @@ -531,6 +531,12 @@
> > # and can result in more stable read performance. Requires KVM
> > # with accelerator property "dirty-ring-size" set. (Since 8.1)
> > #
> > +#
> > +# @main-zero-page: If enabled, the detection of zero pages will be
> > +# done on the main thread. Otherwise it is done on
> > +# the multifd threads.
> > +# (since 8.2)
> > +#
> Should the capability name be something like "zero-page-detection" or just “zero-page”?
> CC: Fabiano Rosas
I think the same concern was brought up the last time Juan sent out the
original patchset. Right now, zero page detection is done in the main
migration thread and it is always "ON". This change adds the ability to
move zero page detection from the main thread to the multifd sender
threads. Now "main-zero-page" is turned "OFF" by default, and zero page
checking is done in the multifd sender threads (much better
performance). If the user wants to run zero page detection in the main
thread (keeping the current behavior), they can turn "main-zero-page"
back "ON".
Renaming it to "zero-page-detection" or just "zero-page" would not
differentiate between the old and the new behavior.
Here are the options:
1) Keep the current behavior: "main-zero-page" is OFF by default, so
zero page detection runs on the multifd threads by default. The user
can turn the switch "ON" if they want the old behavior.
2) Make "main-zero-page" ON by default. This keeps the current behavior
by default; the user can set it to "OFF" for better performance.
> > # Features:
> > #
> > # @deprecated: Member @block is deprecated. Use blockdev-mirror with
> > @@ -555,7 +561,7 @@
> > { 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
> > 'validate-uuid', 'background-snapshot',
> > 'zero-copy-send', 'postcopy-preempt', 'switchover-ack',
> > - 'dirty-limit'] }
> > + 'dirty-limit', 'main-zero-page'] }
> >
> > ##
> > # @MigrationCapabilityStatus:
> > --
> > 2.30.2
* Re: [External] Re: [PATCH v3 13/20] migration/multifd: Prepare to introduce DSA acceleration on the multifd path.
2024-01-15 6:46 ` Shivam Kumar
@ 2024-01-23 0:37 ` Hao Xiang
2024-02-02 10:03 ` Peter Xu
0 siblings, 1 reply; 41+ messages in thread
From: Hao Xiang @ 2024-01-23 0:37 UTC (permalink / raw)
To: Shivam Kumar
Cc: farosas@suse.de, peter.maydell@linaro.org, peterx@redhat.com,
marcandre.lureau@redhat.com, bryan.zhang@bytedance.com,
qemu-devel@nongnu.org
On Sun, Jan 14, 2024 at 10:46 PM Shivam Kumar <shivam.kumar1@nutanix.com> wrote:
>
>
>
> > On 04-Jan-2024, at 6:14 AM, Hao Xiang <hao.xiang@bytedance.com> wrote:
> >
> > 1. Refactor multifd_send_thread function.
> > 2. Implement buffer_is_zero_use_cpu to handle CPU based zero page
> > checking.
> > 3. Introduce the batch task structure in MultiFDSendParams.
> >
> > Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
> > ---
> > include/qemu/dsa.h | 43 +++++++++++++++++++++++--
> > migration/multifd.c | 77 ++++++++++++++++++++++++++++++++++++---------
> > migration/multifd.h | 2 ++
> > util/dsa.c | 51 +++++++++++++++++++++++++-----
> > 4 files changed, 148 insertions(+), 25 deletions(-)
> >
> > diff --git a/include/qemu/dsa.h b/include/qemu/dsa.h
> > index e002652879..fe7772107a 100644
> > --- a/include/qemu/dsa.h
> > +++ b/include/qemu/dsa.h
> > @@ -2,6 +2,7 @@
> > #define QEMU_DSA_H
> >
> > #include "qemu/error-report.h"
> > +#include "exec/cpu-common.h"
> > #include "qemu/thread.h"
> > #include "qemu/queue.h"
> >
> > @@ -42,6 +43,20 @@ typedef struct dsa_batch_task {
> > QSIMPLEQ_ENTRY(dsa_batch_task) entry;
> > } dsa_batch_task;
> >
> > +#endif
> > +
> > +struct batch_task {
> > + /* Address of each pages in pages */
> > + ram_addr_t *addr;
> > + /* Zero page checking results */
> > + bool *results;
> > +#ifdef CONFIG_DSA_OPT
> > + struct dsa_batch_task *dsa_batch;
> > +#endif
> > +};
> > +
> > +#ifdef CONFIG_DSA_OPT
> > +
> > /**
> > * @brief Initializes DSA devices.
> > *
> > @@ -74,7 +89,7 @@ void dsa_cleanup(void);
> > bool dsa_is_running(void);
> >
> > /**
> > - * @brief Initializes a buffer zero batch task.
> > + * @brief Initializes a buffer zero DSA batch task.
> > *
> > * @param task A pointer to the batch task to initialize.
> > * @param results A pointer to an array of zero page checking results.
> > @@ -102,7 +117,7 @@ void buffer_zero_batch_task_destroy(struct dsa_batch_task *task);
> > * @return Zero if successful, otherwise non-zero.
> > */
> > int
> > -buffer_is_zero_dsa_batch_async(struct dsa_batch_task *batch_task,
> > +buffer_is_zero_dsa_batch_async(struct batch_task *batch_task,
> > const void **buf, size_t count, size_t len);
> >
> > #else
> > @@ -128,6 +143,30 @@ static inline void dsa_stop(void) {}
> >
> > static inline void dsa_cleanup(void) {}
> >
> > +static inline int
> > +buffer_is_zero_dsa_batch_async(struct batch_task *batch_task,
> > + const void **buf, size_t count, size_t len)
> > +{
> > + exit(1);
> > +}
> > +
> > #endif
> >
> > +/**
> > + * @brief Initializes a general buffer zero batch task.
> > + *
> > + * @param task A pointer to the general batch task to initialize.
> > + * @param batch_size The number of zero page checking tasks in the batch.
> > + */
> > +void
> > +batch_task_init(struct batch_task *task, int batch_size);
> > +
> > +/**
> > + * @brief Destroys a general buffer zero batch task.
> > + *
> > + * @param task A pointer to the general batch task to destroy.
> > + */
> > +void
> > +batch_task_destroy(struct batch_task *task);
> > +
> > #endif
> > diff --git a/migration/multifd.c b/migration/multifd.c
> > index eece85569f..e7c549b93e 100644
> > --- a/migration/multifd.c
> > +++ b/migration/multifd.c
> > @@ -14,6 +14,8 @@
> > #include "qemu/cutils.h"
> > #include "qemu/rcu.h"
> > #include "qemu/cutils.h"
> > +#include "qemu/dsa.h"
> > +#include "qemu/memalign.h"
> > #include "exec/target_page.h"
> > #include "sysemu/sysemu.h"
> > #include "exec/ramblock.h"
> > @@ -574,6 +576,8 @@ void multifd_save_cleanup(void)
> > p->name = NULL;
> > multifd_pages_clear(p->pages);
> > p->pages = NULL;
> > + batch_task_destroy(p->batch_task);
> > + p->batch_task = NULL;
> > p->packet_len = 0;
> > g_free(p->packet);
> > p->packet = NULL;
> > @@ -678,13 +682,66 @@ int multifd_send_sync_main(QEMUFile *f)
> > return 0;
> > }
> >
> > +static void set_page(MultiFDSendParams *p, bool zero_page, uint64_t offset)
> > +{
> > + RAMBlock *rb = p->pages->block;
> > + if (zero_page) {
> > + p->zero[p->zero_num] = offset;
> > + p->zero_num++;
> > + ram_release_page(rb->idstr, offset);
> > + } else {
> > + p->normal[p->normal_num] = offset;
> > + p->normal_num++;
> > + }
> > +}
> > +
> > +static void buffer_is_zero_use_cpu(MultiFDSendParams *p)
> > +{
> > + const void **buf = (const void **)p->batch_task->addr;
> > + assert(!migrate_use_main_zero_page());
> > +
> > + for (int i = 0; i < p->pages->num; i++) {
> > + p->batch_task->results[i] = buffer_is_zero(buf[i], p->page_size);
> > + }
> > +}
> > +
> > +static void set_normal_pages(MultiFDSendParams *p)
> > +{
> > + for (int i = 0; i < p->pages->num; i++) {
> > + p->batch_task->results[i] = false;
> > + }
> > +}
> Please correct me if I am wrong, but set_normal_pages will not be part of the final patch, right? It is there for testing out performance against different zero page ratio scenarios. If so, can we isolate these parts into a separate patch?
set_normal_pages is used for performance testing only. It won't
introduce any "incorrect" behavior and I would love to see it become
part of the upstream code. But the argument that testing-only changes
should stay private is fair, so I am totally OK with isolating these
parts into a separate patch.
> > +
> > +static void multifd_zero_page_check(MultiFDSendParams *p)
> > +{
> > + /* older qemu don't understand zero page on multifd channel */
> > + bool use_multifd_zero_page = !migrate_use_main_zero_page();
> > +
> > + RAMBlock *rb = p->pages->block;
> > +
> > + for (int i = 0; i < p->pages->num; i++) {
> > + p->batch_task->addr[i] = (ram_addr_t)(rb->host + p->pages->offset[i]);
> > + }
> > +
> > + if (use_multifd_zero_page) {
> > + buffer_is_zero_use_cpu(p);
> > + } else {
> > + /* No zero page checking. All pages are normal pages. */
> > + set_normal_pages(p);
> > + }
> > +
> > + for (int i = 0; i < p->pages->num; i++) {
> > + uint64_t offset = p->pages->offset[i];
> > + bool zero_page = p->batch_task->results[i];
> > + set_page(p, zero_page, offset);
> > + }
> > +}
> > +
> > static void *multifd_send_thread(void *opaque)
> > {
> > MultiFDSendParams *p = opaque;
> > MigrationThread *thread = NULL;
> > Error *local_err = NULL;
> > - /* qemu older than 8.2 don't understand zero page on multifd channel */
> > - bool use_multifd_zero_page = !migrate_use_main_zero_page();
> > int ret = 0;
> > bool use_zero_copy_send = migrate_zero_copy_send();
> >
> > @@ -710,7 +767,6 @@ static void *multifd_send_thread(void *opaque)
> > qemu_mutex_lock(&p->mutex);
> >
> > if (p->pending_job) {
> > - RAMBlock *rb = p->pages->block;
> > uint64_t packet_num = p->packet_num;
> > uint32_t flags;
> >
> > @@ -723,18 +779,7 @@ static void *multifd_send_thread(void *opaque)
> > p->iovs_num = 1;
> > }
> >
> > - for (int i = 0; i < p->pages->num; i++) {
> > - uint64_t offset = p->pages->offset[i];
> > - if (use_multifd_zero_page &&
> > - buffer_is_zero(rb->host + offset, p->page_size)) {
> > - p->zero[p->zero_num] = offset;
> > - p->zero_num++;
> > - ram_release_page(rb->idstr, offset);
> > - } else {
> > - p->normal[p->normal_num] = offset;
> > - p->normal_num++;
> > - }
> > - }
> > + multifd_zero_page_check(p);
> >
> > if (p->normal_num) {
> > ret = multifd_send_state->ops->send_prepare(p, &local_err);
> > @@ -975,6 +1020,8 @@ int multifd_save_setup(Error **errp)
> > p->pending_job = 0;
> > p->id = i;
> > p->pages = multifd_pages_init(page_count);
> > + p->batch_task = g_malloc0(sizeof(struct batch_task));
> > + batch_task_init(p->batch_task, page_count);
> > p->packet_len = sizeof(MultiFDPacket_t)
> > + sizeof(uint64_t) * page_count;
> > p->packet = g_malloc0(p->packet_len);
> > diff --git a/migration/multifd.h b/migration/multifd.h
> > index 13762900d4..97b5f888a7 100644
> > --- a/migration/multifd.h
> > +++ b/migration/multifd.h
> > @@ -119,6 +119,8 @@ typedef struct {
> > * pending_job != 0 -> multifd_channel can use it.
> > */
> > MultiFDPages_t *pages;
> > + /* Zero page checking batch task */
> > + struct batch_task *batch_task;
> >
> > /* thread local variables. No locking required */
> >
> > diff --git a/util/dsa.c b/util/dsa.c
> > index 5a2bf33651..f6224a27d4 100644
> > --- a/util/dsa.c
> > +++ b/util/dsa.c
> > @@ -802,7 +802,7 @@ buffer_zero_task_init_int(struct dsa_hw_desc *descriptor,
> > }
> >
> > /**
> > - * @brief Initializes a buffer zero batch task.
> > + * @brief Initializes a buffer zero DSA batch task.
> > *
> > * @param task A pointer to the batch task to initialize.
> > * @param results A pointer to an array of zero page checking results.
> > @@ -1107,29 +1107,64 @@ void dsa_cleanup(void)
> > * @return Zero if successful, otherwise non-zero.
> > */
> > int
> > -buffer_is_zero_dsa_batch_async(struct dsa_batch_task *batch_task,
> > +buffer_is_zero_dsa_batch_async(struct batch_task *batch_task,
> > const void **buf, size_t count, size_t len)
> > {
> > - if (count <= 0 || count > batch_task->batch_size) {
> > + struct dsa_batch_task *dsa_batch = batch_task->dsa_batch;
> > +
> > + if (count <= 0 || count > dsa_batch->batch_size) {
> > return -1;
> > }
> >
> > - assert(batch_task != NULL);
> > + assert(dsa_batch != NULL);
> > assert(len != 0);
> > assert(buf != NULL);
> >
> > if (count == 1) {
> > /* DSA doesn't take batch operation with only 1 task. */
> > - buffer_zero_dsa_async(batch_task, buf[0], len);
> > + buffer_zero_dsa_async(dsa_batch, buf[0], len);
> > } else {
> > - buffer_zero_dsa_batch_async(batch_task, buf, count, len);
> > + buffer_zero_dsa_batch_async(dsa_batch, buf, count, len);
> > }
> >
> > - buffer_zero_dsa_wait(batch_task);
> > - buffer_zero_cpu_fallback(batch_task);
> > + buffer_zero_dsa_wait(dsa_batch);
> > + buffer_zero_cpu_fallback(dsa_batch);
> >
> > return 0;
> > }
> >
> > #endif
> >
> > +/**
> > + * @brief Initializes a general buffer zero batch task.
> > + *
> > + * @param task A pointer to the general batch task to initialize.
> > + * @param batch_size The number of zero page checking tasks in the batch.
> > + */
> > +void
> > +batch_task_init(struct batch_task *task, int batch_size)
> > +{
> > + task->addr = g_new0(ram_addr_t, batch_size);
> > + task->results = g_new0(bool, batch_size);
> > +#ifdef CONFIG_DSA_OPT
> > + task->dsa_batch = qemu_memalign(64, sizeof(struct dsa_batch_task));
> > + buffer_zero_batch_task_init(task->dsa_batch, task->results, batch_size);
> > +#endif
> > +}
> > +
> > +/**
> > + * @brief Destroys a general buffer zero batch task.
> > + *
> > + * @param task A pointer to the general batch task to destroy.
> > + */
> > +void
> > +batch_task_destroy(struct batch_task *task)
> > +{
> > + g_free(task->addr);
> > + g_free(task->results);
> > +#ifdef CONFIG_DSA_OPT
> > + buffer_zero_batch_task_destroy(task->dsa_batch);
> > + qemu_vfree(task->dsa_batch);
> > +#endif
> > +}
> > +
> > --
> > 2.30.2
> >
> >
> >
>
* Re: [External] Re: [PATCH v3 03/20] multifd: Zero pages transmission
2024-01-15 7:01 ` Shivam Kumar
@ 2024-01-23 0:46 ` Hao Xiang
0 siblings, 0 replies; 41+ messages in thread
From: Hao Xiang @ 2024-01-23 0:46 UTC (permalink / raw)
To: Shivam Kumar
Cc: farosas@suse.de, peter.maydell@linaro.org, peterx@redhat.com,
marcandre.lureau@redhat.com, bryan.zhang@bytedance.com,
qemu-devel@nongnu.org, Juan Quintela
On Sun, Jan 14, 2024 at 11:01 PM Shivam Kumar <shivam.kumar1@nutanix.com> wrote:
>
>
>
> > On 04-Jan-2024, at 6:14 AM, Hao Xiang <hao.xiang@bytedance.com> wrote:
> >
> > From: Juan Quintela <quintela@redhat.com>
> >
> > This implements the zero page detection and handling.
> >
> > Signed-off-by: Juan Quintela <quintela@redhat.com>
> > ---
> > migration/multifd.c | 41 +++++++++++++++++++++++++++++++++++++++--
> > migration/multifd.h | 5 +++++
> > 2 files changed, 44 insertions(+), 2 deletions(-)
> >
> > diff --git a/migration/multifd.c b/migration/multifd.c
> > index 5a1f50c7e8..756673029d 100644
> > --- a/migration/multifd.c
> > +++ b/migration/multifd.c
> > @@ -11,6 +11,7 @@
> > */
> >
> > #include "qemu/osdep.h"
> > +#include "qemu/cutils.h"
> > #include "qemu/rcu.h"
> > #include "exec/target_page.h"
> > #include "sysemu/sysemu.h"
> > @@ -279,6 +280,12 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
> >
> > packet->offset[i] = cpu_to_be64(temp);
> > }
> > + for (i = 0; i < p->zero_num; i++) {
> > + /* there are architectures where ram_addr_t is 32 bit */
> > + uint64_t temp = p->zero[i];
> > +
> > + packet->offset[p->normal_num + i] = cpu_to_be64(temp);
> > + }
> > }
> >
> > static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
> > @@ -361,6 +368,18 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
> > p->normal[i] = offset;
> > }
> >
> > + for (i = 0; i < p->zero_num; i++) {
> > + uint64_t offset = be64_to_cpu(packet->offset[p->normal_num + i]);
> > +
> > + if (offset > (p->block->used_length - p->page_size)) {
> > + error_setg(errp, "multifd: offset too long %" PRIu64
> > + " (max " RAM_ADDR_FMT ")",
> > + offset, p->block->used_length);
> > + return -1;
> > + }
> > + p->zero[i] = offset;
> > + }
> > +
> > return 0;
> > }
> >
> > @@ -664,6 +683,8 @@ static void *multifd_send_thread(void *opaque)
> > MultiFDSendParams *p = opaque;
> > MigrationThread *thread = NULL;
> > Error *local_err = NULL;
> > + /* qemu older than 8.2 don't understand zero page on multifd channel */
> > + bool use_zero_page = !migrate_use_main_zero_page();
> > int ret = 0;
> > bool use_zero_copy_send = migrate_zero_copy_send();
> >
> > @@ -689,6 +710,7 @@ static void *multifd_send_thread(void *opaque)
> > qemu_mutex_lock(&p->mutex);
> >
> > if (p->pending_job) {
> > + RAMBlock *rb = p->pages->block;
> > uint64_t packet_num = p->packet_num;
> > uint32_t flags;
> > p->normal_num = 0;
> > @@ -701,8 +723,16 @@ static void *multifd_send_thread(void *opaque)
> > }
> >
> > for (int i = 0; i < p->pages->num; i++) {
> > - p->normal[p->normal_num] = p->pages->offset[i];
> > - p->normal_num++;
> > + uint64_t offset = p->pages->offset[i];
> > + if (use_zero_page &&
> > + buffer_is_zero(rb->host + offset, p->page_size)) {
> > + p->zero[p->zero_num] = offset;
> > + p->zero_num++;
> > + ram_release_page(rb->idstr, offset);
> > + } else {
> > + p->normal[p->normal_num] = offset;
> > + p->normal_num++;
> > + }
> > }
> >
> > if (p->normal_num) {
> > @@ -1155,6 +1185,13 @@ static void *multifd_recv_thread(void *opaque)
> > }
> > }
> >
> > + for (int i = 0; i < p->zero_num; i++) {
> > + void *page = p->host + p->zero[i];
> > + if (!buffer_is_zero(page, p->page_size)) {
> > + memset(page, 0, p->page_size);
> > + }
> > + }
> > +
> I am wondering if zeroing the zero page on the destination can also be offloaded to DSA. Can it help in reducing cpu consumption on the destination in case of multifd-based migration?
I have that change coded up and I have tested it for performance. It
doesn't save as many CPU cycles as I hoped. The problem is that on the
sender side we run zero page detection on every single page, but on
the receiver side we only zero the pages the sender flags. For
instance, if all 128 pages in a multifd packet are zero pages, we can
offload zeroing of the entire 128 pages in a single DSA operation, and
that's the best case. In a worse case, if a 128-page packet has only,
say, 10 zero pages, we can offload zeroing of just those 10 pages
instead of the entire 128. Considering DSA software overhead, that is
not a good deal.
In the future, if we refactor the multifd path to put zero pages into
their own packet, it will be a different story. For instance, if 1024
non-contiguous zero pages are detected today, they will be spread
across multiple packets. With a new packet format, those pages can be
put into the same packet (and we can put more than 1024 zero pages in
a packet) and DSA offloading can become much more effective. We can
add that functionality after the new packet format is implemented.
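The batching economics above can be sketched as follows: a hypothetical
receiver-side helper that mirrors the patch's CPU path (skip pages that
are already zero, memset the rest) plus a break-even check that would
gate a single DSA batch submission on the packet's zero-page count. The
threshold value and all names here are illustrative assumptions, not
the actual QEMU or DSA API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096
/* Hypothetical break-even batch size: below this, the fixed DSA
 * submission/completion overhead outweighs the zeroing it saves. */
#define DSA_MIN_BATCH 64

/* True when a packet's zero-page count is large enough that a single
 * DSA batch descriptor would beat zeroing the pages on the CPU. */
static bool dsa_batch_worthwhile(size_t zero_num)
{
    return zero_num >= DSA_MIN_BATCH;
}

/* CPU path, mirroring the receiver loop in the patch: only touch a
 * page if it is not already zero, to avoid dirtying clean pages. */
static void zero_flagged_pages_cpu(unsigned char *host,
                                   const size_t *zero_off, size_t zero_num)
{
    for (size_t i = 0; i < zero_num; i++) {
        unsigned char *page = host + zero_off[i];
        bool already_zero = true;
        for (size_t b = 0; b < PAGE_SIZE; b++) {
            if (page[b]) {
                already_zero = false;
                break;
            }
        }
        if (!already_zero) {
            memset(page, 0, PAGE_SIZE);
        }
    }
}
```

With 128 zero pages per packet the whole batch clears the threshold in
one DSA operation; with only 10 it would stay on the CPU path, matching
the worst case described above.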
> > if (flags & MULTIFD_FLAG_SYNC) {
> > qemu_sem_post(&multifd_recv_state->sem_sync);
> > qemu_sem_wait(&p->sem_sync);
> > diff --git a/migration/multifd.h b/migration/multifd.h
> > index d587b0e19c..13762900d4 100644
> > --- a/migration/multifd.h
> > +++ b/migration/multifd.h
> > @@ -53,6 +53,11 @@ typedef struct {
> > uint32_t unused32[1]; /* Reserved for future use */
> > uint64_t unused64[3]; /* Reserved for future use */
> > char ramblock[256];
> > + /*
> > + * This array contains the pointers to:
> > + * - normal pages (initial normal_pages entries)
> > + * - zero pages (following zero_pages entries)
> > + */
> > uint64_t offset[];
> > } __attribute__((packed)) MultiFDPacket_t;
> >
> > --
> > 2.30.2
> >
> >
> >
>
* Re: [External] Re: [PATCH v3 01/20] multifd: Add capability to enable/disable zero_page
2024-01-23 0:33 ` [External] " Hao Xiang
@ 2024-01-23 15:10 ` Fabiano Rosas
2024-01-23 21:00 ` Hao Xiang
2024-02-01 4:27 ` Peter Xu
0 siblings, 2 replies; 41+ messages in thread
From: Fabiano Rosas @ 2024-01-23 15:10 UTC (permalink / raw)
To: Hao Xiang, Shivam Kumar
Cc: peter.maydell@linaro.org, peterx@redhat.com,
marcandre.lureau@redhat.com, bryan.zhang@bytedance.com,
qemu-devel@nongnu.org
Hao Xiang <hao.xiang@bytedance.com> writes:
> On Sun, Jan 14, 2024 at 10:02 PM Shivam Kumar <shivam.kumar1@nutanix.com> wrote:
>>
>>
>>
>> > On 04-Jan-2024, at 6:14 AM, Hao Xiang <hao.xiang@bytedance.com> wrote:
>> >
>> > From: Juan Quintela <quintela@redhat.com>
>> >
>> > We have to enable it by default until we introduce the new code.
>> >
>> > Signed-off-by: Juan Quintela <quintela@redhat.com>
>> > ---
>> > migration/options.c | 15 +++++++++++++++
>> > migration/options.h | 1 +
>> > qapi/migration.json | 8 +++++++-
>> > 3 files changed, 23 insertions(+), 1 deletion(-)
>> >
>> > diff --git a/migration/options.c b/migration/options.c
>> > index 8d8ec73ad9..0f6bd78b9f 100644
>> > --- a/migration/options.c
>> > +++ b/migration/options.c
>> > @@ -204,6 +204,8 @@ Property migration_properties[] = {
>> > DEFINE_PROP_MIG_CAP("x-switchover-ack",
>> > MIGRATION_CAPABILITY_SWITCHOVER_ACK),
>> > DEFINE_PROP_MIG_CAP("x-dirty-limit", MIGRATION_CAPABILITY_DIRTY_LIMIT),
>> > + DEFINE_PROP_MIG_CAP("main-zero-page",
>> > + MIGRATION_CAPABILITY_MAIN_ZERO_PAGE),
>> > DEFINE_PROP_END_OF_LIST(),
>> > };
>> >
>> > @@ -284,6 +286,19 @@ bool migrate_multifd(void)
>> > return s->capabilities[MIGRATION_CAPABILITY_MULTIFD];
>> > }
>> >
>> > +bool migrate_use_main_zero_page(void)
>> > +{
>> > + /* MigrationState *s; */
>> > +
>> > + /* s = migrate_get_current(); */
>> > +
>> > + /*
>> > + * We will enable this when we add the right code.
>> > + * return s->enabled_capabilities[MIGRATION_CAPABILITY_MAIN_ZERO_PAGE];
>> > + */
>> > + return true;
>> > +}
>> > +
>> > bool migrate_pause_before_switchover(void)
>> > {
>> > MigrationState *s = migrate_get_current();
>> > diff --git a/migration/options.h b/migration/options.h
>> > index 246c160aee..c901eb57c6 100644
>> > --- a/migration/options.h
>> > +++ b/migration/options.h
>> > @@ -88,6 +88,7 @@ int migrate_multifd_channels(void);
>> > MultiFDCompression migrate_multifd_compression(void);
>> > int migrate_multifd_zlib_level(void);
>> > int migrate_multifd_zstd_level(void);
>> > +bool migrate_use_main_zero_page(void);
>> > uint8_t migrate_throttle_trigger_threshold(void);
>> > const char *migrate_tls_authz(void);
>> > const char *migrate_tls_creds(void);
>> > diff --git a/qapi/migration.json b/qapi/migration.json
>> > index eb2f883513..80c4b13516 100644
>> > --- a/qapi/migration.json
>> > +++ b/qapi/migration.json
>> > @@ -531,6 +531,12 @@
>> > # and can result in more stable read performance. Requires KVM
>> > # with accelerator property "dirty-ring-size" set. (Since 8.1)
>> > #
>> > +#
>> > +# @main-zero-page: If enabled, the detection of zero pages will be
>> > +# done on the main thread. Otherwise it is done on
>> > +# the multifd threads.
>> > +# (since 8.2)
>> > +#
>> Should the capability name be something like "zero-page-detection" or just “zero-page”?
>> CC: Fabiano Rosas
>
> I think the same concern was brought up last time Juan sent out the
> original patchset. Right now, the zero page detection is done in the
> main migration thread and it is always "ON". This change added a
> functionality to move the zero page detection from the main thread to
> the multifd sender threads. Now "main-zero-page" is turned "OFF" by
> default, and zero page checking is done in the multifd sender thread
> (much better performance). If user wants to run the zero page
> detection in the main thread (keep current behavior), user can change
> "main-zero-page" to "ON".
>
> Renaming it to "zero-page-detection" or just “zero-page” cannot
> differentiate between the old behavior and the new behavior.
Yes, the main point here is what happens when we try to migrate between
QEMU versions that have/don't have this code. We need some way to
maintain compatibility. In this case Juan chose to keep this capability
with the semantics of "old behavior" so that we can enable it on the
new QEMU to match an old binary that doesn't expect to see zero pages
on the multifd packet/stream.
> Here are the options:
> 1) Keep the current behavior. "main-zero-page" is OFF by default and
> zero page detection runs on the multifd thread by default. User can
> turn the switch to "ON" if they want old behavior.
> 2) Make "main-zero-page" switch ON as default. This would keep the
> current behavior by default. User can set it to "OFF" for better
> performance.
3) Make multifd-zero-page ON by default. User can set it to OFF to get
the old behavior. There was some consideration about how libvirt works
that would make this one unusable, but I don't understand what that's
about.
I would make this a default-ON parameter instead of a capability.
>> > # Features:
>> > #
>> > # @deprecated: Member @block is deprecated. Use blockdev-mirror with
>> > @@ -555,7 +561,7 @@
>> > { 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
>> > 'validate-uuid', 'background-snapshot',
>> > 'zero-copy-send', 'postcopy-preempt', 'switchover-ack',
>> > - 'dirty-limit'] }
>> > + 'dirty-limit', 'main-zero-page'] }
>> >
>> > ##
>> > # @MigrationCapabilityStatus:
>> > --
>> > 2.30.2
>> >
>> >
>> >
>>
* Re: [External] Re: [PATCH v3 01/20] multifd: Add capability to enable/disable zero_page
2024-01-23 15:10 ` Fabiano Rosas
@ 2024-01-23 21:00 ` Hao Xiang
2024-02-01 4:27 ` Peter Xu
1 sibling, 0 replies; 41+ messages in thread
From: Hao Xiang @ 2024-01-23 21:00 UTC (permalink / raw)
To: Fabiano Rosas
Cc: Shivam Kumar, peter.maydell@linaro.org, peterx@redhat.com,
marcandre.lureau@redhat.com, bryan.zhang@bytedance.com,
qemu-devel@nongnu.org
On Tue, Jan 23, 2024 at 7:11 AM Fabiano Rosas <farosas@suse.de> wrote:
>
> Hao Xiang <hao.xiang@bytedance.com> writes:
>
> > On Sun, Jan 14, 2024 at 10:02 PM Shivam Kumar <shivam.kumar1@nutanix.com> wrote:
> >>
> >>
> >>
> >> > On 04-Jan-2024, at 6:14 AM, Hao Xiang <hao.xiang@bytedance.com> wrote:
> >> >
> >> > From: Juan Quintela <quintela@redhat.com>
> >> >
> >> > We have to enable it by default until we introduce the new code.
> >> >
> >> > Signed-off-by: Juan Quintela <quintela@redhat.com>
> >> > ---
> >> > migration/options.c | 15 +++++++++++++++
> >> > migration/options.h | 1 +
> >> > qapi/migration.json | 8 +++++++-
> >> > 3 files changed, 23 insertions(+), 1 deletion(-)
> >> >
> >> > diff --git a/migration/options.c b/migration/options.c
> >> > index 8d8ec73ad9..0f6bd78b9f 100644
> >> > --- a/migration/options.c
> >> > +++ b/migration/options.c
> >> > @@ -204,6 +204,8 @@ Property migration_properties[] = {
> >> > DEFINE_PROP_MIG_CAP("x-switchover-ack",
> >> > MIGRATION_CAPABILITY_SWITCHOVER_ACK),
> >> > DEFINE_PROP_MIG_CAP("x-dirty-limit", MIGRATION_CAPABILITY_DIRTY_LIMIT),
> >> > + DEFINE_PROP_MIG_CAP("main-zero-page",
> >> > + MIGRATION_CAPABILITY_MAIN_ZERO_PAGE),
> >> > DEFINE_PROP_END_OF_LIST(),
> >> > };
> >> >
> >> > @@ -284,6 +286,19 @@ bool migrate_multifd(void)
> >> > return s->capabilities[MIGRATION_CAPABILITY_MULTIFD];
> >> > }
> >> >
> >> > +bool migrate_use_main_zero_page(void)
> >> > +{
> >> > + /* MigrationState *s; */
> >> > +
> >> > + /* s = migrate_get_current(); */
> >> > +
> >> > + /*
> >> > + * We will enable this when we add the right code.
> >> > + * return s->enabled_capabilities[MIGRATION_CAPABILITY_MAIN_ZERO_PAGE];
> >> > + */
> >> > + return true;
> >> > +}
> >> > +
> >> > bool migrate_pause_before_switchover(void)
> >> > {
> >> > MigrationState *s = migrate_get_current();
> >> > diff --git a/migration/options.h b/migration/options.h
> >> > index 246c160aee..c901eb57c6 100644
> >> > --- a/migration/options.h
> >> > +++ b/migration/options.h
> >> > @@ -88,6 +88,7 @@ int migrate_multifd_channels(void);
> >> > MultiFDCompression migrate_multifd_compression(void);
> >> > int migrate_multifd_zlib_level(void);
> >> > int migrate_multifd_zstd_level(void);
> >> > +bool migrate_use_main_zero_page(void);
> >> > uint8_t migrate_throttle_trigger_threshold(void);
> >> > const char *migrate_tls_authz(void);
> >> > const char *migrate_tls_creds(void);
> >> > diff --git a/qapi/migration.json b/qapi/migration.json
> >> > index eb2f883513..80c4b13516 100644
> >> > --- a/qapi/migration.json
> >> > +++ b/qapi/migration.json
> >> > @@ -531,6 +531,12 @@
> >> > # and can result in more stable read performance. Requires KVM
> >> > # with accelerator property "dirty-ring-size" set. (Since 8.1)
> >> > #
> >> > +#
> >> > +# @main-zero-page: If enabled, the detection of zero pages will be
> >> > +# done on the main thread. Otherwise it is done on
> >> > +# the multifd threads.
> >> > +# (since 8.2)
> >> > +#
> >> Should the capability name be something like "zero-page-detection" or just “zero-page”?
> >> CC: Fabiano Rosas
> >
> > I think the same concern was brought up last time Juan sent out the
> > original patchset. Right now, the zero page detection is done in the
> > main migration thread and it is always "ON". This change added a
> > functionality to move the zero page detection from the main thread to
> > the multifd sender threads. Now "main-zero-page" is turned "OFF" by
> > default, and zero page checking is done in the multifd sender thread
> > (much better performance). If user wants to run the zero page
> > detection in the main thread (keep current behavior), user can change
> > "main-zero-page" to "ON".
> >
> > Renaming it to "zero-page-detection" or just “zero-page” can not
> > differentiate the old behavior and the new behavior.
>
> Yes, the main point here is what happens when we try to migrate from
> different QEMU versions that have/don't have this code. We need some way
> to maintain the compatibility. In this case Juan chose to keep this
> capability with the semantics of "old behavior" so that we can enable it
> on the new QEMU to match with the old binary that doesn't expect to see
> zero pages on the packet/stream.
>
> > Here are the options:
> > 1) Keep the current behavior. "main-zero-page" is OFF by default and
> > zero page detection runs on the multifd thread by default. User can
> > turn the switch to "ON" if they want old behavior.
> > 2) Make "main-zero-page" switch ON as default. This would keep the
> > current behavior by default. User can set it to "OFF" for better
> > performance.
>
> 3) Make multifd-zero-page ON by default. User can set it to OFF to get
> the old behavior. There was some consideration about how libvirt works
> that would make this one unusable, but I don't understand what's that
> about.
>
> I would make this a default ON parameter instead of a capability.
Sounds good to me.
>
> >> > # Features:
> >> > #
> >> > # @deprecated: Member @block is deprecated. Use blockdev-mirror with
> >> > @@ -555,7 +561,7 @@
> >> > { 'name': 'x-ignore-shared', 'features': [ 'unstable' ] },
> >> > 'validate-uuid', 'background-snapshot',
> >> > 'zero-copy-send', 'postcopy-preempt', 'switchover-ack',
> >> > - 'dirty-limit'] }
> >> > + 'dirty-limit', 'main-zero-page'] }
> >> >
> >> > ##
> >> > # @MigrationCapabilityStatus:
> >> > --
> >> > 2.30.2
> >> >
> >> >
> >> >
> >>
* Re: [External] Re: [PATCH v3 01/20] multifd: Add capability to enable/disable zero_page
2024-01-23 15:10 ` Fabiano Rosas
2024-01-23 21:00 ` Hao Xiang
@ 2024-02-01 4:27 ` Peter Xu
1 sibling, 0 replies; 41+ messages in thread
From: Peter Xu @ 2024-02-01 4:27 UTC (permalink / raw)
To: Fabiano Rosas
Cc: Hao Xiang, Shivam Kumar, peter.maydell@linaro.org,
marcandre.lureau@redhat.com, bryan.zhang@bytedance.com,
qemu-devel@nongnu.org
On Tue, Jan 23, 2024 at 12:10:55PM -0300, Fabiano Rosas wrote:
> Hao Xiang <hao.xiang@bytedance.com> writes:
>
> > On Sun, Jan 14, 2024 at 10:02 PM Shivam Kumar <shivam.kumar1@nutanix.com> wrote:
> >>
> >>
> >>
> >> > On 04-Jan-2024, at 6:14 AM, Hao Xiang <hao.xiang@bytedance.com> wrote:
> >> >
> >> > From: Juan Quintela <quintela@redhat.com>
> >> >
> >> > We have to enable it by default until we introduce the new code.
> >> >
> >> > Signed-off-by: Juan Quintela <quintela@redhat.com>
> >> > ---
> >> > migration/options.c | 15 +++++++++++++++
> >> > migration/options.h | 1 +
> >> > qapi/migration.json | 8 +++++++-
> >> > 3 files changed, 23 insertions(+), 1 deletion(-)
> >> >
> >> > diff --git a/migration/options.c b/migration/options.c
> >> > index 8d8ec73ad9..0f6bd78b9f 100644
> >> > --- a/migration/options.c
> >> > +++ b/migration/options.c
> >> > @@ -204,6 +204,8 @@ Property migration_properties[] = {
> >> > DEFINE_PROP_MIG_CAP("x-switchover-ack",
> >> > MIGRATION_CAPABILITY_SWITCHOVER_ACK),
> >> > DEFINE_PROP_MIG_CAP("x-dirty-limit", MIGRATION_CAPABILITY_DIRTY_LIMIT),
> >> > + DEFINE_PROP_MIG_CAP("main-zero-page",
> >> > + MIGRATION_CAPABILITY_MAIN_ZERO_PAGE),
> >> > DEFINE_PROP_END_OF_LIST(),
> >> > };
> >> >
> >> > @@ -284,6 +286,19 @@ bool migrate_multifd(void)
> >> > return s->capabilities[MIGRATION_CAPABILITY_MULTIFD];
> >> > }
> >> >
> >> > +bool migrate_use_main_zero_page(void)
> >> > +{
> >> > + /* MigrationState *s; */
> >> > +
> >> > + /* s = migrate_get_current(); */
> >> > +
> >> > + /*
> >> > + * We will enable this when we add the right code.
> >> > + * return s->enabled_capabilities[MIGRATION_CAPABILITY_MAIN_ZERO_PAGE];
> >> > + */
> >> > + return true;
> >> > +}
> >> > +
> >> > bool migrate_pause_before_switchover(void)
> >> > {
> >> > MigrationState *s = migrate_get_current();
> >> > diff --git a/migration/options.h b/migration/options.h
> >> > index 246c160aee..c901eb57c6 100644
> >> > --- a/migration/options.h
> >> > +++ b/migration/options.h
> >> > @@ -88,6 +88,7 @@ int migrate_multifd_channels(void);
> >> > MultiFDCompression migrate_multifd_compression(void);
> >> > int migrate_multifd_zlib_level(void);
> >> > int migrate_multifd_zstd_level(void);
> >> > +bool migrate_use_main_zero_page(void);
> >> > uint8_t migrate_throttle_trigger_threshold(void);
> >> > const char *migrate_tls_authz(void);
> >> > const char *migrate_tls_creds(void);
> >> > diff --git a/qapi/migration.json b/qapi/migration.json
> >> > index eb2f883513..80c4b13516 100644
> >> > --- a/qapi/migration.json
> >> > +++ b/qapi/migration.json
> >> > @@ -531,6 +531,12 @@
> >> > # and can result in more stable read performance. Requires KVM
> >> > # with accelerator property "dirty-ring-size" set. (Since 8.1)
> >> > #
> >> > +#
> >> > +# @main-zero-page: If enabled, the detection of zero pages will be
> >> > +# done on the main thread. Otherwise it is done on
> >> > +# the multifd threads.
> >> > +# (since 8.2)
> >> > +#
> >> Should the capability name be something like "zero-page-detection" or just “zero-page”?
> >> CC: Fabiano Rosas
> >
> > I think the same concern was brought up last time Juan sent out the
> > original patchset. Right now, the zero page detection is done in the
> > main migration thread and it is always "ON". This change added a
> > functionality to move the zero page detection from the main thread to
> > the multifd sender threads. Now "main-zero-page" is turned "OFF" by
> > default, and zero page checking is done in the multifd sender thread
> > (much better performance). If user wants to run the zero page
> > detection in the main thread (keep current behavior), user can change
> > "main-zero-page" to "ON".
> >
> > Renaming it to "zero-page-detection" or just “zero-page” can not
> > differentiate the old behavior and the new behavior.
>
> Yes, the main point here is what happens when we try to migrate from
> different QEMU versions that have/don't have this code. We need some way
> to maintain the compatibility. In this case Juan chose to keep this
> capability with the semantics of "old behavior" so that we can enable it
> on the new QEMU to match with the old binary that doesn't expect to see
> zero pages on the packet/stream.
>
> > Here are the options:
> > 1) Keep the current behavior. "main-zero-page" is OFF by default and
> > zero page detection runs on the multifd thread by default. User can
> > turn the switch to "ON" if they want old behavior.
> > 2) Make "main-zero-page" switch ON as default. This would keep the
> > current behavior by default. User can set it to "OFF" for better
> > performance.
>
> 3) Make multifd-zero-page ON by default. User can set it to OFF to get
> the old behavior. There was some consideration about how libvirt works
> that would make this one unusable, but I don't understand what that's
> about.
>
> I would make this a default ON parameter instead of a capability.
If we want to add a knob for zero page, can it start with a string rather
than a boolean?
It might already be helpful for debugging purposes when e.g. someone would
like to completely turn off zero page detection just for a comparison. I
also believe there can be some corner cases where the guest workload
migrates faster without zero page detection: an extreme case is a guest
whose memory is always dirtied in the last byte of each page, where the
detection hits its worst-case overhead while always returning a !zero
page.
So that implies a string parameter with:
- none: no zero page detection
- legacy: only detect in main thread
- multifd: use multifd detections
Then we can grow that with more HW accelerators. We'd make machines <= 9.0
use legacy, with the default set to multifd.
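Sketched as a QAPI schema fragment, the proposed knob might look like the
following (the names and doc text are illustrative, not a merged interface):

```
##
# @ZeroPageDetection:
#
# @none: Do not perform zero page checking.
#
# @legacy: Perform zero page checking in the main migration thread.
#
# @multifd: Perform zero page checking in the multifd sender threads.
##
{ 'enum': 'ZeroPageDetection',
  'data': [ 'none', 'legacy', 'multifd' ] }
```

A machine-type compat property could then pin machines <= 9.0 to 'legacy'
while new machine types default to 'multifd'.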
--
Peter Xu
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v3 03/20] multifd: Zero pages transmission
2024-01-04 0:44 ` [PATCH v3 03/20] multifd: Zero " Hao Xiang
2024-01-15 7:01 ` Shivam Kumar
@ 2024-02-01 5:22 ` Peter Xu
2024-02-01 23:24 ` [External] " Hao Xiang
1 sibling, 1 reply; 41+ messages in thread
From: Peter Xu @ 2024-02-01 5:22 UTC (permalink / raw)
To: Hao Xiang
Cc: farosas, peter.maydell, marcandre.lureau, bryan.zhang, qemu-devel,
Juan Quintela
On Thu, Jan 04, 2024 at 12:44:35AM +0000, Hao Xiang wrote:
> From: Juan Quintela <quintela@redhat.com>
>
> This implements the zero page detection and handling.
>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
> migration/multifd.c | 41 +++++++++++++++++++++++++++++++++++++++--
> migration/multifd.h | 5 +++++
> 2 files changed, 44 insertions(+), 2 deletions(-)
>
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 5a1f50c7e8..756673029d 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -11,6 +11,7 @@
> */
>
> #include "qemu/osdep.h"
> +#include "qemu/cutils.h"
> #include "qemu/rcu.h"
> #include "exec/target_page.h"
> #include "sysemu/sysemu.h"
> @@ -279,6 +280,12 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>
> packet->offset[i] = cpu_to_be64(temp);
> }
> >> > +    for (i = 0; i < p->zero_num; i++) {
> >> > +        /* there are architectures where ram_addr_t is 32 bit */
> >> > +        uint64_t temp = p->zero[i];
> >> > +
> >> > +        packet->offset[p->normal_num + i] = cpu_to_be64(temp);
> >> > +    }
> }
I think changes like this need to be moved into the previous patch. I got
quite confused when reading the previous one and only understood what was
happening once I reached this one. Fabiano, if you're going to pick these
ones out and post separately, please also consider that. Perhaps squash
them together?
--
Peter Xu
* Re: [PATCH v3 15/20] migration/multifd: Add test hook to set normal page ratio.
2024-01-04 0:44 ` [PATCH v3 15/20] migration/multifd: Add test hook to set normal page ratio Hao Xiang
@ 2024-02-01 5:24 ` Peter Xu
2024-02-01 23:10 ` [External] " Hao Xiang
0 siblings, 1 reply; 41+ messages in thread
From: Peter Xu @ 2024-02-01 5:24 UTC (permalink / raw)
To: Hao Xiang
Cc: farosas, peter.maydell, marcandre.lureau, bryan.zhang, qemu-devel
On Thu, Jan 04, 2024 at 12:44:47AM +0000, Hao Xiang wrote:
> +# @multifd-normal-page-ratio: Test hook setting the normal page ratio.
> +# (Since 8.2)
Please remember to touch all of them to 9.0 when repost, thanks.
--
Peter Xu
* Re: [PATCH v3 06/20] util/dsa: Add dependency idxd.
2024-01-04 0:44 ` [PATCH v3 06/20] util/dsa: Add dependency idxd Hao Xiang
@ 2024-02-01 5:34 ` Peter Xu
0 siblings, 0 replies; 41+ messages in thread
From: Peter Xu @ 2024-02-01 5:34 UTC (permalink / raw)
To: Hao Xiang
Cc: farosas, peter.maydell, marcandre.lureau, bryan.zhang, qemu-devel
On Thu, Jan 04, 2024 at 12:44:38AM +0000, Hao Xiang wrote:
> Idxd is the device driver for DSA (Intel Data Streaming
> Accelerator). The driver is fully functioning since Linux
> kernel 5.19. This change adds the driver's header file used
> for userspace development.
>
> Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
> ---
> linux-headers/linux/idxd.h | 356 +++++++++++++++++++++++++++++++++++++
> 1 file changed, 356 insertions(+)
> create mode 100644 linux-headers/linux/idxd.h
This can be addressed and posted separately. I see that we already updated
it to v6.7-rc5.
Did you check scripts/update-linux-headers.sh? Please check and see the
usage. If idxd.h is not pulled in for some reason, we may want to address
that.
--
Peter Xu
* Re: [External] Re: [PATCH v3 15/20] migration/multifd: Add test hook to set normal page ratio.
2024-02-01 5:24 ` Peter Xu
@ 2024-02-01 23:10 ` Hao Xiang
0 siblings, 0 replies; 41+ messages in thread
From: Hao Xiang @ 2024-02-01 23:10 UTC (permalink / raw)
To: Peter Xu
Cc: farosas, peter.maydell, marcandre.lureau, bryan.zhang, qemu-devel
On Wed, Jan 31, 2024 at 9:24 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Thu, Jan 04, 2024 at 12:44:47AM +0000, Hao Xiang wrote:
> > +# @multifd-normal-page-ratio: Test hook setting the normal page ratio.
> > +# (Since 8.2)
>
> Please remember to touch all of them to 9.0 when repost, thanks.
>
Will do.
> --
> Peter Xu
>
* Re: [External] Re: [PATCH v3 03/20] multifd: Zero pages transmission
2024-02-01 5:22 ` Peter Xu
@ 2024-02-01 23:24 ` Hao Xiang
0 siblings, 0 replies; 41+ messages in thread
From: Hao Xiang @ 2024-02-01 23:24 UTC (permalink / raw)
To: Peter Xu
Cc: farosas, peter.maydell, marcandre.lureau, bryan.zhang, qemu-devel,
Juan Quintela
On Wed, Jan 31, 2024 at 9:22 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Thu, Jan 04, 2024 at 12:44:35AM +0000, Hao Xiang wrote:
> > From: Juan Quintela <quintela@redhat.com>
> >
> > This implements the zero page detection and handling.
> >
> > Signed-off-by: Juan Quintela <quintela@redhat.com>
> > ---
> > migration/multifd.c | 41 +++++++++++++++++++++++++++++++++++++++--
> > migration/multifd.h | 5 +++++
> > 2 files changed, 44 insertions(+), 2 deletions(-)
> >
> > diff --git a/migration/multifd.c b/migration/multifd.c
> > index 5a1f50c7e8..756673029d 100644
> > --- a/migration/multifd.c
> > +++ b/migration/multifd.c
> > @@ -11,6 +11,7 @@
> > */
> >
> > #include "qemu/osdep.h"
> > +#include "qemu/cutils.h"
> > #include "qemu/rcu.h"
> > #include "exec/target_page.h"
> > #include "sysemu/sysemu.h"
> > @@ -279,6 +280,12 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
> >
> > packet->offset[i] = cpu_to_be64(temp);
> > }
> > +    for (i = 0; i < p->zero_num; i++) {
> > +        /* there are architectures where ram_addr_t is 32 bit */
> > +        uint64_t temp = p->zero[i];
> > +
> > +        packet->offset[p->normal_num + i] = cpu_to_be64(temp);
> > +    }
> > }
>
> I think changes like this need to be moved into the previous patch. I got
> quite confused when reading the previous one and only understood what was
> happening once I reached this one. Fabiano, if you're going to pick these
> ones out and post separately, please also consider that. Perhaps squash
> them together?
>
Discussed with Fabiano on a separate thread here
https://lore.kernel.org/all/CAAYibXi=WB5wfvLFM0b=d9oJf66Lb7FTGoNzzZ-tvK4RbBXxDw@mail.gmail.com/
I am moving the original multifd zero page checking changes into a
separate patchset. There is some necessary refactoring work on top
of the original series. I will send that out this week.
> --
> Peter Xu
>
* Re: [External] Re: [PATCH v3 13/20] migration/multifd: Prepare to introduce DSA acceleration on the multifd path.
2024-01-23 0:37 ` [External] " Hao Xiang
@ 2024-02-02 10:03 ` Peter Xu
0 siblings, 0 replies; 41+ messages in thread
From: Peter Xu @ 2024-02-02 10:03 UTC (permalink / raw)
To: Hao Xiang
Cc: Shivam Kumar, farosas@suse.de, peter.maydell@linaro.org,
marcandre.lureau@redhat.com, bryan.zhang@bytedance.com,
qemu-devel@nongnu.org
On Mon, Jan 22, 2024 at 04:37:09PM -0800, Hao Xiang wrote:
> > > +static void set_normal_pages(MultiFDSendParams *p)
> > > +{
> > > +    for (int i = 0; i < p->pages->num; i++) {
> > > +        p->batch_task->results[i] = false;
> > > +    }
> > > +}
> > Please correct me if I am wrong but set_normal_pages will not be a part of the final patch, right? They are there for testing out the performance against different zero page ratio scenarios. If so, can we isolate these parts into a separate patch?
>
> set_normal_pages is used for performance testing only. It
> won't introduce any "incorrect" behavior and I would love to see it
> being part of the upstream code. But the argument that a test-only change
> should remain out of the tree is a fair one. So I am totally OK with
> isolating these parts into a separate patch.
IMHO we can allow that to be production code; as long as the new zero-page
detection parameter allows the user to choose "none" (as I mentioned in
another reply), then it's not test-only code but a way for the user to
disable zero page detection when they want. Thanks,
--
Peter Xu
* Re: [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration.
2024-01-04 0:44 [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration Hao Xiang
` (19 preceding siblings ...)
2024-01-04 0:44 ` [PATCH v3 20/20] migration/multifd: Add integration tests for multifd with Intel DSA offloading Hao Xiang
@ 2024-03-08 1:47 ` liulongfang via
20 siblings, 0 replies; 41+ messages in thread
From: liulongfang via @ 2024-03-08 1:47 UTC (permalink / raw)
To: qemu-devel
On 2024/1/4 8:44, Hao Xiang wrote:
> v3
> * Rebase on top of 7425b6277f12e82952cede1f531bfc689bf77fb1.
> * Fix error/warning from checkpatch.pl
> * Fix use-after-free bug when multifd-dsa-accel option is not set.
> * Handle error from dsa_init and correctly propagate the error.
> * Remove unnecessary call to dsa_stop.
> * Detect availability of DSA feature at compile time.
> * Implement a generic batch_task structure and a DSA specific one dsa_batch_task.
> * Remove all exit() calls and propagate errors correctly.
> * Use bytes instead of page count to configure multifd-packet-size option.
>
> v2
> * Rebase on top of 3e01f1147a16ca566694b97eafc941d62fa1e8d8.
> * Leave Juan's changes in their original form instead of squashing them.
> * Add a new commit to refactor the multifd_send_thread function to prepare for introducing the DSA offload functionality.
> * Use page count to configure multifd-packet-size option.
> * Don't use the FLAKY flag in DSA tests.
> * Test if DSA integration test is setup correctly and skip the test if
> * not.
> * Fixed broken link in the previous patch cover.
>
> * Background:
>
> I posted an RFC about DSA offloading in QEMU:
> https://patchew.org/QEMU/20230529182001.2232069-1-hao.xiang@bytedance.com/
>
> This patchset implements the DSA offloading on zero page checking in
> multifd live migration code path.
>
> * Overview:
>
> Intel Data Streaming Accelerator(DSA) is introduced in Intel's 4th generation
> Xeon server, aka Sapphire Rapids.
> https://cdrdv2-public.intel.com/671116/341204-intel-data-streaming-accelerator-spec.pdf
> https://www.intel.com/content/www/us/en/content-details/759709/intel-data-streaming-accelerator-user-guide.html
> One of the things DSA can do is to offload memory comparison workload from
> CPU to DSA accelerator hardware. This patchset implements a solution to offload
> QEMU's zero page checking from CPU to DSA accelerator hardware. We gain
> two benefits from this change:
> 1. Reduces CPU usage in multifd live migration workflow across all use
> cases.
> 2. Reduces migration total time in some use cases.
>
> * Design:
>
> These are the logical steps to perform DSA offloading:
> 1. Configure DSA accelerators and create user space openable DSA work
> queues via the idxd driver.
> 2. Map DSA's work queue into a user space address space.
> 3. Fill an in-memory task descriptor to describe the memory operation.
> 4. Use dedicated CPU instruction _enqcmd to queue a task descriptor to
> the work queue.
> 5. Poll the task descriptor's completion status field until the task
> completes.
> 6. Check return status.
>
> The memory operation is now done entirely by the accelerator hardware, but
> the new workflow introduces overhead: the extra cost of the CPU preparing
> and submitting the task descriptors, and the extra cost of the CPU polling
> for completion. The design is around minimizing these two overheads.
>
> 1. In order to reduce the overhead on task preparation and submission,
> we use batch descriptors. A batch descriptor will contain N individual
> zero page checking tasks where the default N is 128 (default packet size
> / page size) and we can increase N by setting the packet size via a new
> migration option.
> 2. The multifd sender threads prepare and submit batch tasks to DSA
> hardware and wait on a synchronization object for task completion.
> Whenever a DSA task is submitted, the task structure is added to a
> thread safe queue. It's safe to have multiple multifd sender threads
> submit tasks concurrently.
> 3. Multiple DSA hardware devices can be used. During multifd initialization,
> every sender thread will be assigned a DSA device to work with. We
> use a round-robin scheme to evenly distribute the work across all used
> DSA devices.
> 4. Use a dedicated thread dsa_completion to perform busy polling for all
> DSA task completions. The thread keeps dequeuing DSA tasks from the
> thread safe queue. The thread blocks when there is no outstanding DSA
> task. When polling for completion of a DSA task, the thread uses the CPU
> instruction _mm_pause between the iterations of the busy loop to save some
> CPU power as well as freeing core resources for the sibling hyperthread.
> 5. DSA accelerator can encounter errors. The most common error is a
> page fault. We have tested having the device handle page faults but
> performance is bad. Right now, if DSA hits a page fault, we fall back to
> the CPU to complete the rest of the work. The CPU fallback is done in
> the multifd sender thread.
> 6. Added a new migration option multifd-dsa-accel to set the DSA device
> path. If set, the multifd workflow will leverage the DSA devices for
> offloading.
> 7. Added a new migration option multifd-normal-page-ratio to make
> multifd live migration easier to test. Setting a normal page ratio will
> make live migration recognize a zero page as a normal page and send
> the entire payload over the network. If we want to send a large network
> payload and analyze throughput, this option is useful.
> 8. Added a new migration option multifd-packet-size. This can increase
> the number of pages being zero page checked and sent over the network.
> The extra synchronization between the sender threads and the dsa
> completion thread is an overhead. Using a large packet size can reduce
> that overhead.
>
> * Performance:
>
> We use two Intel 4th generation Xeon servers for testing.
>
> Architecture: x86_64
> CPU(s): 192
> Thread(s) per core: 2
> Core(s) per socket: 48
> Socket(s): 2
> NUMA node(s): 2
> Vendor ID: GenuineIntel
> CPU family: 6
> Model: 143
> Model name: Intel(R) Xeon(R) Platinum 8457C
> Stepping: 8
> CPU MHz: 2538.624
> CPU max MHz: 3800.0000
> CPU min MHz: 800.0000
>
> We perform multifd live migration with below setup:
> 1. VM has 100GB memory.
> 2. Use the new migration option multifd-normal-page-ratio to control the
> total size of the payload sent over the network.
> 3. Use 8 multifd channels.
> 4. Use tcp for live migration.
> 5. Use CPU to perform zero page checking as the baseline.
> 6. Use one DSA device to offload zero page checking to compare with the baseline.
> 7. Use "perf sched record" and "perf sched timehist" to analyze CPU usage.
>
The scenarios you tested are fairly specialized ones. It looks like there are some gains.
So is there a normal live migration scenario for testing? For example, a VM with 16GB memory
that has been running a task for a long time, so there won't be many zero pages in it.
Does the additional system overhead of using DSA outweigh the benefits?
> A) Scenario 1: 50% (50GB) normal pages on an 100GB vm.
>
> CPU usage
>
> |---------------|---------------|---------------|---------------|
> | |comm |runtime(msec) |totaltime(msec)|
> |---------------|---------------|---------------|---------------|
> |Baseline |live_migration |5657.58 | |
> | |multifdsend_0 |3931.563 | |
> | |multifdsend_1 |4405.273 | |
> | |multifdsend_2 |3941.968 | |
> | |multifdsend_3 |5032.975 | |
> | |multifdsend_4 |4533.865 | |
> | |multifdsend_5 |4530.461 | |
> | |multifdsend_6 |5171.916 | |
> | |multifdsend_7 |4722.769 |41922 |
> |---------------|---------------|---------------|---------------|
> |DSA |live_migration |6129.168 | |
> | |multifdsend_0 |2954.717 | |
> | |multifdsend_1 |2766.359 | |
> | |multifdsend_2 |2853.519 | |
> | |multifdsend_3 |2740.717 | |
> | |multifdsend_4 |2824.169 | |
> | |multifdsend_5 |2966.908 | |
> | |multifdsend_6 |2611.137 | |
> | |multifdsend_7 |3114.732 | |
> | |dsa_completion |3612.564 |32568 |
> |---------------|---------------|---------------|---------------|
>
> Baseline total runtime is calculated by adding up all multifdsend_X
> and live_migration threads runtime. DSA offloading total runtime is
> calculated by adding up all multifdsend_X, live_migration and
> dsa_completion threads runtime. 41922 msec VS 32568 msec runtime and
> that is 23% total CPU usage savings.
>
> Latency
> |---------------|---------------|---------------|---------------|---------------|---------------|
> | |total time |down time |throughput |transferred-ram|total-ram |
> |---------------|---------------|---------------|---------------|---------------|---------------|
> |Baseline |10343 ms |161 ms |41007.00 mbps |51583797 kb |102400520 kb |
> |---------------|---------------|---------------|---------------|-------------------------------|
> |DSA offload |9535 ms |135 ms |46554.40 mbps |53947545 kb |102400520 kb |
> |---------------|---------------|---------------|---------------|---------------|---------------|
>
> Total time is 8% faster and down time is 16% faster.
>
> B) Scenario 2: 100% (100GB) zero pages on an 100GB vm.
>
> CPU usage
> |---------------|---------------|---------------|---------------|
> | |comm |runtime(msec) |totaltime(msec)|
> |---------------|---------------|---------------|---------------|
> |Baseline |live_migration |4860.718 | |
> | |multifdsend_0 |748.875 | |
> | |multifdsend_1 |898.498 | |
> | |multifdsend_2 |787.456 | |
> | |multifdsend_3 |764.537 | |
> | |multifdsend_4 |785.687 | |
> | |multifdsend_5 |756.941 | |
> | |multifdsend_6 |774.084 | |
> | |multifdsend_7 |782.900 |11154 |
> |---------------|---------------|-------------------------------|
> |DSA offloading |live_migration |3846.976 | |
> | |multifdsend_0 |191.880 | |
> | |multifdsend_1 |166.331 | |
> | |multifdsend_2 |168.528 | |
> | |multifdsend_3 |197.831 | |
> | |multifdsend_4 |169.580 | |
> | |multifdsend_5 |167.984 | |
> | |multifdsend_6 |198.042 | |
> | |multifdsend_7 |170.624 | |
> | |dsa_completion |3428.669 |8700 |
> |---------------|---------------|---------------|---------------|
>
> Baseline total runtime is 11154 msec and DSA offloading total runtime is
> 8700 msec. That is 22% CPU savings.
>
> Latency
> |--------------------------------------------------------------------------------------------|
> | |total time |down time |throughput |transferred-ram|total-ram |
> |---------------|---------------|---------------|---------------|---------------|------------|
> |Baseline |4867 ms |20 ms |1.51 mbps |565 kb |102400520 kb|
> |---------------|---------------|---------------|---------------|----------------------------|
> |DSA offload |3888 ms |18 ms |1.89 mbps |565 kb |102400520 kb|
> |---------------|---------------|---------------|---------------|---------------|------------|
>
> Total time 20% faster and down time 10% faster.
>
> * Testing:
>
> 1. Added unit tests to cover the added code path in dsa.c
> 2. Added integration tests to cover multifd live migration using DSA
> offloading.
>
> * Patchset
>
> Apply this patchset on top of commit
> 7425b6277f12e82952cede1f531bfc689bf77fb1
>
> Hao Xiang (16):
> meson: Introduce new instruction set enqcmd to the build system.
> util/dsa: Add dependency idxd.
> util/dsa: Implement DSA device start and stop logic.
> util/dsa: Implement DSA task enqueue and dequeue.
> util/dsa: Implement DSA task asynchronous completion thread model.
> util/dsa: Implement zero page checking in DSA task.
> util/dsa: Implement DSA task asynchronous submission and wait for
> completion.
> migration/multifd: Add new migration option for multifd DSA
> offloading.
> migration/multifd: Prepare to introduce DSA acceleration on the
> multifd path.
> migration/multifd: Enable DSA offloading in multifd sender path.
> migration/multifd: Add test hook to set normal page ratio.
> migration/multifd: Enable set normal page ratio test hook in multifd.
> migration/multifd: Add migration option set packet size.
> migration/multifd: Enable set packet size migration option.
> util/dsa: Add unit test coverage for Intel DSA task submission and
> completion.
> migration/multifd: Add integration tests for multifd with Intel DSA
> offloading.
>
> Juan Quintela (4):
> multifd: Add capability to enable/disable zero_page
> multifd: Support for zero pages transmission
> multifd: Zero pages transmission
> So we use multifd to transmit zero pages.
>
> include/qemu/dsa.h | 175 +++++
> linux-headers/linux/idxd.h | 356 ++++++++++
> meson.build | 14 +
> meson_options.txt | 2 +
> migration/migration-hmp-cmds.c | 22 +
> migration/multifd-zlib.c | 6 +-
> migration/multifd-zstd.c | 6 +-
> migration/multifd.c | 218 +++++-
> migration/multifd.h | 27 +-
> migration/options.c | 114 ++++
> migration/options.h | 4 +
> migration/ram.c | 45 +-
> migration/trace-events | 8 +-
> qapi/migration.json | 62 +-
> scripts/meson-buildoptions.sh | 3 +
> tests/qtest/migration-test.c | 77 ++-
> tests/unit/meson.build | 6 +
> tests/unit/test-dsa.c | 475 +++++++++++++
> util/dsa.c | 1170 ++++++++++++++++++++++++++++++++
> util/meson.build | 1 +
> 20 files changed, 2749 insertions(+), 42 deletions(-)
> create mode 100644 include/qemu/dsa.h
> create mode 100644 linux-headers/linux/idxd.h
> create mode 100644 tests/unit/test-dsa.c
> create mode 100644 util/dsa.c
>
* Re: [PATCH v3 11/20] util/dsa: Implement DSA task asynchronous submission and wait for completion.
2024-01-04 0:44 ` [PATCH v3 11/20] util/dsa: Implement DSA task asynchronous submission and wait for completion Hao Xiang
@ 2024-03-08 10:10 ` Jonathan Cameron via
[not found] ` <CAAYibXhSdeTod4VNyE5ZsZAjQteRdBZEhh5UieNs8s6Ji+X5og@mail.gmail.com>
0 siblings, 1 reply; 41+ messages in thread
From: Jonathan Cameron via @ 2024-03-08 10:10 UTC (permalink / raw)
To: Hao Xiang
Cc: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
On Thu, 4 Jan 2024 00:44:43 +0000
Hao Xiang <hao.xiang@bytedance.com> wrote:
> * Add a DSA task completion callback.
> * DSA completion thread will call the tasks's completion callback
> on every task/batch task completion.
> * DSA submission path to wait for completion.
> * Implement CPU fallback if DSA is not able to complete the task.
>
> Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
> Signed-off-by: Bryan Zhang <bryan.zhang@bytedance.com>
Hi,
One naming comment inline. You had me confused about how you were handling
async processing where this is used. Answer is that I think you aren't!
>
> +/**
> + * @brief Performs buffer zero comparison on a DSA batch task asynchronously.
The hardware may be doing it asynchronously but unless that
buffer_zero_dsa_wait() call doesn't do what its name suggests, this function
is wrapping the async hardware related stuff to make it synchronous.
So name it buffer_is_zero_dsa_batch_sync()!
Jonathan
> + *
> + * @param batch_task A pointer to the batch task.
> + * @param buf An array of memory buffers.
> + * @param count The number of buffers in the array.
> + * @param len The buffer length.
> + *
> + * @return Zero if successful, otherwise non-zero.
> + */
> +int
> +buffer_is_zero_dsa_batch_async(struct dsa_batch_task *batch_task,
> +                               const void **buf, size_t count, size_t len)
> +{
> +    if (count <= 0 || count > batch_task->batch_size) {
> +        return -1;
> +    }
> +
> +    assert(batch_task != NULL);
> +    assert(len != 0);
> +    assert(buf != NULL);
> +
> +    if (count == 1) {
> +        /* DSA doesn't take batch operation with only 1 task. */
> +        buffer_zero_dsa_async(batch_task, buf[0], len);
> +    } else {
> +        buffer_zero_dsa_batch_async(batch_task, buf, count, len);
> +    }
> +
> +    buffer_zero_dsa_wait(batch_task);
> +    buffer_zero_cpu_fallback(batch_task);
> +
> +    return 0;
> +}
> +
> #endif
>
* Re: [External] Re: [PATCH v3 11/20] util/dsa: Implement DSA task asynchronous submission and wait for completion.
[not found] ` <CAAYibXhSdeTod4VNyE5ZsZAjQteRdBZEhh5UieNs8s6Ji+X5og@mail.gmail.com>
@ 2024-03-08 21:50 ` hao.xiang
0 siblings, 0 replies; 41+ messages in thread
From: hao.xiang @ 2024-03-08 21:50 UTC (permalink / raw)
To: Jonathan.Cameron
Cc: farosas, peter.maydell, peterx, marcandre.lureau, bryan.zhang,
qemu-devel
> On Fri, Mar 8, 2024 at 2:11 AM Jonathan Cameron
> <Jonathan.Cameron@huawei.com> wrote:
> >
> > On Thu, 4 Jan 2024 00:44:43 +0000
> > Hao Xiang <hao.xiang@bytedance.com> wrote:
> >
> > * Add a DSA task completion callback.
> > * DSA completion thread will call the task's completion callback
> > on every task/batch task completion.
> > * DSA submission path to wait for completion.
> > * Implement CPU fallback if DSA is not able to complete the task.
> >
> > Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
> > Signed-off-by: Bryan Zhang <bryan.zhang@bytedance.com>
> >
> > Hi,
> >
> > One naming comment inline. You had me confused about how you were handling
> > async processing where this is used. Answer is that I think you aren't!
> >
> > +/**
> > + * @brief Performs buffer zero comparison on a DSA batch task asynchronously.
> >
> > The hardware may be doing it asynchronously but unless that
> > buffer_zero_dsa_wait() call doesn't do what its name suggests, this function
> > is wrapping the async hardware related stuff to make it synchronous.
> > So name it buffer_is_zero_dsa_batch_sync()!
> >
> > Jonathan
Thanks for reviewing this. The first completion model I tried was to use a busy loop to poll for completion on the submission thread, but it turned out to have too much unnecessary overhead: think about 10 threads all submitting tasks, and we end up having 10 busy loops. I moved the completion work to a dedicated thread and named it async! However, the async model doesn't fit well with the current live migration thread model, so eventually I added a wait on the submission thread. It was intended to be async but I agree that it currently is not. I will rename it in the next revision.
> >
> > + *
> > + * @param batch_task A pointer to the batch task.
> > + * @param buf An array of memory buffers.
> > + * @param count The number of buffers in the array.
> > + * @param len The buffer length.
> > + *
> > + * @return Zero if successful, otherwise non-zero.
> > + */
> > +int
> > +buffer_is_zero_dsa_batch_async(struct dsa_batch_task *batch_task,
> > +                               const void **buf, size_t count, size_t len)
> > +{
> > +    if (count <= 0 || count > batch_task->batch_size) {
> > +        return -1;
> > +    }
> > +
> > +    assert(batch_task != NULL);
> > +    assert(len != 0);
> > +    assert(buf != NULL);
> > +
> > +    if (count == 1) {
> > +        /* DSA doesn't take batch operation with only 1 task. */
> > +        buffer_zero_dsa_async(batch_task, buf[0], len);
> > +    } else {
> > +        buffer_zero_dsa_batch_async(batch_task, buf, count, len);
> > +    }
> > +
> > +    buffer_zero_dsa_wait(batch_task);
> > +    buffer_zero_cpu_fallback(batch_task);
> > +
> > +    return 0;
> > +}
> > +
> > #endif
end of thread, other threads:[~2024-03-08 21:52 UTC | newest]
Thread overview: 41+ messages
2024-01-04 0:44 [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration Hao Xiang
2024-01-04 0:44 ` [PATCH v3 01/20] multifd: Add capability to enable/disable zero_page Hao Xiang
2024-01-08 20:39 ` Fabiano Rosas
2024-01-11 5:47 ` [External] " Hao Xiang
2024-01-15 6:02 ` Shivam Kumar
2024-01-23 0:33 ` [External] " Hao Xiang
2024-01-23 15:10 ` Fabiano Rosas
2024-01-23 21:00 ` Hao Xiang
2024-02-01 4:27 ` Peter Xu
2024-01-04 0:44 ` [PATCH v3 02/20] multifd: Support for zero pages transmission Hao Xiang
2024-01-04 0:44 ` [PATCH v3 03/20] multifd: Zero " Hao Xiang
2024-01-15 7:01 ` Shivam Kumar
2024-01-23 0:46 ` [External] " Hao Xiang
2024-02-01 5:22 ` Peter Xu
2024-02-01 23:24 ` [External] " Hao Xiang
2024-01-04 0:44 ` [PATCH v3 04/20] So we use multifd to transmit zero pages Hao Xiang
2024-01-04 0:44 ` [PATCH v3 05/20] meson: Introduce new instruction set enqcmd to the build system Hao Xiang
2024-01-04 0:44 ` [PATCH v3 06/20] util/dsa: Add dependency idxd Hao Xiang
2024-02-01 5:34 ` Peter Xu
2024-01-04 0:44 ` [PATCH v3 07/20] util/dsa: Implement DSA device start and stop logic Hao Xiang
2024-01-04 0:44 ` [PATCH v3 08/20] util/dsa: Implement DSA task enqueue and dequeue Hao Xiang
2024-01-04 0:44 ` [PATCH v3 09/20] util/dsa: Implement DSA task asynchronous completion thread model Hao Xiang
2024-01-04 0:44 ` [PATCH v3 10/20] util/dsa: Implement zero page checking in DSA task Hao Xiang
2024-01-04 0:44 ` [PATCH v3 11/20] util/dsa: Implement DSA task asynchronous submission and wait for completion Hao Xiang
2024-03-08 10:10 ` Jonathan Cameron via
[not found] ` <CAAYibXhSdeTod4VNyE5ZsZAjQteRdBZEhh5UieNs8s6Ji+X5og@mail.gmail.com>
2024-03-08 21:50 ` [External] " hao.xiang
2024-01-04 0:44 ` [PATCH v3 12/20] migration/multifd: Add new migration option for multifd DSA offloading Hao Xiang
2024-01-04 0:44 ` [PATCH v3 13/20] migration/multifd: Prepare to introduce DSA acceleration on the multifd path Hao Xiang
2024-01-15 6:46 ` Shivam Kumar
2024-01-23 0:37 ` [External] " Hao Xiang
2024-02-02 10:03 ` Peter Xu
2024-01-04 0:44 ` [PATCH v3 14/20] migration/multifd: Enable DSA offloading in multifd sender path Hao Xiang
2024-01-04 0:44 ` [PATCH v3 15/20] migration/multifd: Add test hook to set normal page ratio Hao Xiang
2024-02-01 5:24 ` Peter Xu
2024-02-01 23:10 ` [External] " Hao Xiang
2024-01-04 0:44 ` [PATCH v3 16/20] migration/multifd: Enable set normal page ratio test hook in multifd Hao Xiang
2024-01-04 0:44 ` [PATCH v3 17/20] migration/multifd: Add migration option set packet size Hao Xiang
2024-01-04 0:44 ` [PATCH v3 18/20] migration/multifd: Enable set packet size migration option Hao Xiang
2024-01-04 0:44 ` [PATCH v3 19/20] util/dsa: Add unit test coverage for Intel DSA task submission and completion Hao Xiang
2024-01-04 0:44 ` [PATCH v3 20/20] migration/multifd: Add integration tests for multifd with Intel DSA offloading Hao Xiang
2024-03-08 1:47 ` [PATCH v3 00/20] Use Intel DSA accelerator to offload zero page checking in multifd live migration liulongfang via