* [PULL 00/30] Next patches
@ 2022-11-15 15:34 Juan Quintela
2022-11-15 15:34 ` [PULL 01/30] migration/channel-block: fix return value for qio_channel_block_{readv, writev} Juan Quintela
` (32 more replies)
0 siblings, 33 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:34 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
The following changes since commit 98f10f0e2613ba1ac2ad3f57a5174014f6dcb03d:
Merge tag 'pull-target-arm-20221114' of https://git.linaro.org/people/pmaydell/qemu-arm into staging (2022-11-14 13:31:17 -0500)
are available in the Git repository at:
https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
for you to fetch changes up to d896a7a40db13fc2d05828c94ddda2747530089c:
migration: Block migration comment or code is wrong (2022-11-15 10:31:06 +0100)
----------------------------------------------------------------
Migration PULL request (take 2)
Hi
This time properly signed.
[take 1]
It includes:
- Leonardo fix for zero_copy flush
- Fiona fix for return value of readv/writev
- Peter Xu cleanups
- Peter Xu preempt patches
- The ready patches from the zero page series (me)
- AVX512 support for xbzrle (ling)
- fix for slow networking and reordering of first packets (manish)
Please, apply.
----------------------------------------------------------------
Fiona Ebner (1):
migration/channel-block: fix return value for
qio_channel_block_{readv,writev}
Juan Quintela (5):
multifd: Create page_size fields into both MultiFD{Recv,Send}Params
multifd: Create page_count fields into both MultiFD{Recv,Send}Params
migration: Export ram_transferred_ram()
migration: Export ram_release_page()
migration: Block migration comment or code is wrong
Leonardo Bras (1):
migration/multifd/zero-copy: Create helper function for flushing
Peter Xu (20):
migration: Fix possible infinite loop of ram save process
migration: Fix race on qemu_file_shutdown()
migration: Disallow postcopy preempt to be used with compress
migration: Use non-atomic ops for clear log bitmap
migration: Disable multifd explicitly with compression
migration: Take bitmap mutex when completing ram migration
migration: Add postcopy_preempt_active()
migration: Cleanup xbzrle zero page cache update logic
migration: Trivial cleanup save_page_header() on same block check
migration: Remove RAMState.f references in compression code
migration: Yield bitmap_mutex properly when sending/sleeping
migration: Use atomic ops properly for page accountings
migration: Teach PSS about host page
migration: Introduce pss_channel
migration: Add pss_init()
migration: Make PageSearchStatus part of RAMState
migration: Move last_sent_block into PageSearchStatus
migration: Send requested page directly in rp-return thread
migration: Remove old preempt code around state maintainance
migration: Drop rs->f
ling xu (2):
Update AVX512 support for xbzrle_encode_buffer
Unit test code and benchmark code
manish.mishra (1):
migration: check magic value for deciding the mapping of channels
meson.build | 16 +
include/exec/ram_addr.h | 11 +-
include/exec/ramblock.h | 3 +
include/io/channel.h | 25 ++
include/qemu/bitmap.h | 1 +
migration/migration.h | 7 -
migration/multifd.h | 10 +-
migration/postcopy-ram.h | 2 +-
migration/ram.h | 23 +
migration/xbzrle.h | 4 +
io/channel-socket.c | 27 ++
io/channel.c | 39 ++
migration/block.c | 4 +-
migration/channel-block.c | 6 +-
migration/migration.c | 109 +++--
migration/multifd-zlib.c | 14 +-
migration/multifd-zstd.c | 12 +-
migration/multifd.c | 69 +--
migration/postcopy-ram.c | 5 +-
migration/qemu-file.c | 27 +-
migration/ram.c | 794 +++++++++++++++++-----------------
migration/xbzrle.c | 124 ++++++
tests/bench/xbzrle-bench.c | 465 ++++++++++++++++++++
tests/unit/test-xbzrle.c | 39 +-
util/bitmap.c | 45 ++
meson_options.txt | 2 +
scripts/meson-buildoptions.sh | 14 +-
tests/bench/meson.build | 4 +
28 files changed, 1379 insertions(+), 522 deletions(-)
create mode 100644 tests/bench/xbzrle-bench.c
--
2.38.1
^ permalink raw reply [flat|nested] 47+ messages in thread
* [PULL 01/30] migration/channel-block: fix return value for qio_channel_block_{readv, writev}
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
@ 2022-11-15 15:34 ` Juan Quintela
2022-11-15 15:34 ` [PULL 02/30] migration/multifd/zero-copy: Create helper function for flushing Juan Quintela
` (31 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:34 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng, Fiona Ebner
From: Fiona Ebner <f.ebner@proxmox.com>
in the error case. The documentation in include/io/channel.h states
that -1 or QIO_CHANNEL_ERR_BLOCK should be returned upon error. Simply
passing along the return value from the bdrv functions has the
potential to confuse the call sites. Non-blocking mode is not
currently implemented, so -1 it is.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/channel-block.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/migration/channel-block.c b/migration/channel-block.c
index c55c8c93ce..f4ab53acdb 100644
--- a/migration/channel-block.c
+++ b/migration/channel-block.c
@@ -62,7 +62,8 @@ qio_channel_block_readv(QIOChannel *ioc,
qemu_iovec_init_external(&qiov, (struct iovec *)iov, niov);
ret = bdrv_readv_vmstate(bioc->bs, &qiov, bioc->offset);
if (ret < 0) {
- return ret;
+ error_setg_errno(errp, -ret, "bdrv_readv_vmstate failed");
+ return -1;
}
bioc->offset += qiov.size;
@@ -86,7 +87,8 @@ qio_channel_block_writev(QIOChannel *ioc,
qemu_iovec_init_external(&qiov, (struct iovec *)iov, niov);
ret = bdrv_writev_vmstate(bioc->bs, &qiov, bioc->offset);
if (ret < 0) {
- return ret;
+ error_setg_errno(errp, -ret, "bdrv_writev_vmstate failed");
+ return -1;
}
bioc->offset += qiov.size;
--
2.38.1
* [PULL 02/30] migration/multifd/zero-copy: Create helper function for flushing
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
2022-11-15 15:34 ` [PULL 01/30] migration/channel-block: fix return value for qio_channel_block_{readv, writev} Juan Quintela
@ 2022-11-15 15:34 ` Juan Quintela
2022-11-15 15:34 ` [PULL 03/30] migration: check magic value for deciding the mapping of channels Juan Quintela
` (30 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:34 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng, Leonardo Bras
From: Leonardo Bras <leobras@redhat.com>
Move flushing code from multifd_send_sync_main() to a new helper, and call
it in multifd_send_sync_main().
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/multifd.c | 30 +++++++++++++++++++-----------
1 file changed, 19 insertions(+), 11 deletions(-)
diff --git a/migration/multifd.c b/migration/multifd.c
index 586ddc9d65..509bbbe3bf 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -566,6 +566,23 @@ void multifd_save_cleanup(void)
multifd_send_state = NULL;
}
+static int multifd_zero_copy_flush(QIOChannel *c)
+{
+ int ret;
+ Error *err = NULL;
+
+ ret = qio_channel_flush(c, &err);
+ if (ret < 0) {
+ error_report_err(err);
+ return -1;
+ }
+ if (ret == 1) {
+ dirty_sync_missed_zero_copy();
+ }
+
+ return ret;
+}
+
int multifd_send_sync_main(QEMUFile *f)
{
int i;
@@ -616,17 +633,8 @@ int multifd_send_sync_main(QEMUFile *f)
qemu_mutex_unlock(&p->mutex);
qemu_sem_post(&p->sem);
- if (flush_zero_copy && p->c) {
- int ret;
- Error *err = NULL;
-
- ret = qio_channel_flush(p->c, &err);
- if (ret < 0) {
- error_report_err(err);
- return -1;
- } else if (ret == 1) {
- dirty_sync_missed_zero_copy();
- }
+ if (flush_zero_copy && p->c && (multifd_zero_copy_flush(p->c) < 0)) {
+ return -1;
}
}
for (i = 0; i < migrate_multifd_channels(); i++) {
--
2.38.1
* [PULL 03/30] migration: check magic value for deciding the mapping of channels
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
2022-11-15 15:34 ` [PULL 01/30] migration/channel-block: fix return value for qio_channel_block_{readv, writev} Juan Quintela
2022-11-15 15:34 ` [PULL 02/30] migration/multifd/zero-copy: Create helper function for flushing Juan Quintela
@ 2022-11-15 15:34 ` Juan Quintela
2022-11-15 15:34 ` [PULL 04/30] multifd: Create page_size fields into both MultiFD{Recv, Send}Params Juan Quintela
` (29 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:34 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng, manish.mishra
From: "manish.mishra" <manish.mishra@nutanix.com>
Current logic assumes that channel connections on the destination side
are always established in the same order as on the source, and that the
first one will always be the main channel, followed by the multifd or
post-copy preemption channels. This may not always be true: even if a
channel has its connection established on the source side, it can still
be pending on the destination side while a newer connection is
established first, causing an out-of-order mapping of channels on the
destination side.
Currently, all channels except the post-copy preempt channel send a
magic number; this patch uses that magic number to decide the type of
each channel. This logic applies only to precopy (multifd) live
migration since, as mentioned, the post-copy preempt channel does not
send any magic number. TLS live migration already performs the TLS
handshake before creating the other channels, so this issue cannot
occur with TLS, and the logic is skipped there. The patch uses MSG_PEEK
to check the magic number of channels so that the current data/control
stream management remains unaffected.
v2: TLS does not support MSG_PEEK, so v1 was broken for TLS live
migration. For TLS live migration, the TLS handshake is done while
initializing the main channel, before the other channels can be
created, so this issue cannot occur there. v2 adds a check that
skips the magic-number peek for TLS live migration and falls back
to the older method of deciding the channel mapping on the
destination side.
Suggested-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: manish.mishra <manish.mishra@nutanix.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
include/io/channel.h | 25 +++++++++++++++++++++++
migration/multifd.h | 2 +-
migration/postcopy-ram.h | 2 +-
io/channel-socket.c | 27 ++++++++++++++++++++++++
io/channel.c | 39 +++++++++++++++++++++++++++++++++++
migration/migration.c | 44 +++++++++++++++++++++++++++++-----------
migration/multifd.c | 12 ++++-------
migration/postcopy-ram.c | 5 +----
8 files changed, 130 insertions(+), 26 deletions(-)
diff --git a/include/io/channel.h b/include/io/channel.h
index c680ee7480..74177aeeea 100644
--- a/include/io/channel.h
+++ b/include/io/channel.h
@@ -115,6 +115,10 @@ struct QIOChannelClass {
int **fds,
size_t *nfds,
Error **errp);
+ ssize_t (*io_read_peek)(QIOChannel *ioc,
+ void *buf,
+ size_t nbytes,
+ Error **errp);
int (*io_close)(QIOChannel *ioc,
Error **errp);
GSource * (*io_create_watch)(QIOChannel *ioc,
@@ -475,6 +479,27 @@ int qio_channel_write_all(QIOChannel *ioc,
size_t buflen,
Error **errp);
+/**
+ * qio_channel_read_peek_all:
+ * @ioc: the channel object
+ * @buf: the memory region to read in data
+ * @nbytes: the number of bytes to read
+ * @errp: pointer to a NULL-initialized error object
+ *
+ * Read given @nbytes data from peek of channel into
+ * memory region @buf.
+ *
+ * The function will be blocked until read size is
+ * equal to requested size.
+ *
+ * Returns: 1 if all bytes were read, 0 if end-of-file
+ * occurs without data, or -1 on error
+ */
+int qio_channel_read_peek_all(QIOChannel *ioc,
+ void* buf,
+ size_t nbytes,
+ Error **errp);
+
/**
* qio_channel_set_blocking:
* @ioc: the channel object
diff --git a/migration/multifd.h b/migration/multifd.h
index 519f498643..913e4ba274 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -18,7 +18,7 @@ void multifd_save_cleanup(void);
int multifd_load_setup(Error **errp);
int multifd_load_cleanup(Error **errp);
bool multifd_recv_all_channels_created(void);
-bool multifd_recv_new_channel(QIOChannel *ioc, Error **errp);
+void multifd_recv_new_channel(QIOChannel *ioc, Error **errp);
void multifd_recv_sync_main(void);
int multifd_send_sync_main(QEMUFile *f);
int multifd_queue_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset);
diff --git a/migration/postcopy-ram.h b/migration/postcopy-ram.h
index 6147bf7d1d..25881c4127 100644
--- a/migration/postcopy-ram.h
+++ b/migration/postcopy-ram.h
@@ -190,7 +190,7 @@ enum PostcopyChannels {
RAM_CHANNEL_MAX,
};
-bool postcopy_preempt_new_channel(MigrationIncomingState *mis, QEMUFile *file);
+void postcopy_preempt_new_channel(MigrationIncomingState *mis, QEMUFile *file);
int postcopy_preempt_setup(MigrationState *s, Error **errp);
int postcopy_preempt_wait_channel(MigrationState *s);
diff --git a/io/channel-socket.c b/io/channel-socket.c
index b76dca9cc1..b99f5dfda6 100644
--- a/io/channel-socket.c
+++ b/io/channel-socket.c
@@ -705,6 +705,32 @@ static ssize_t qio_channel_socket_writev(QIOChannel *ioc,
}
#endif /* WIN32 */
+static ssize_t qio_channel_socket_read_peek(QIOChannel *ioc,
+ void *buf,
+ size_t nbytes,
+ Error **errp)
+{
+ QIOChannelSocket *sioc = QIO_CHANNEL_SOCKET(ioc);
+ ssize_t bytes = 0;
+
+retry:
+ bytes = recv(sioc->fd, buf, nbytes, MSG_PEEK);
+
+ if (bytes < 0) {
+ if (errno == EINTR) {
+ goto retry;
+ }
+ if (errno == EAGAIN) {
+ return QIO_CHANNEL_ERR_BLOCK;
+ }
+
+ error_setg_errno(errp, errno,
+ "Unable to read from peek of socket");
+ return -1;
+ }
+
+ return bytes;
+}
#ifdef QEMU_MSG_ZEROCOPY
static int qio_channel_socket_flush(QIOChannel *ioc,
@@ -902,6 +928,7 @@ static void qio_channel_socket_class_init(ObjectClass *klass,
ioc_klass->io_writev = qio_channel_socket_writev;
ioc_klass->io_readv = qio_channel_socket_readv;
+ ioc_klass->io_read_peek = qio_channel_socket_read_peek;
ioc_klass->io_set_blocking = qio_channel_socket_set_blocking;
ioc_klass->io_close = qio_channel_socket_close;
ioc_klass->io_shutdown = qio_channel_socket_shutdown;
diff --git a/io/channel.c b/io/channel.c
index 0640941ac5..a2d9b96f3f 100644
--- a/io/channel.c
+++ b/io/channel.c
@@ -346,6 +346,45 @@ int qio_channel_write_all(QIOChannel *ioc,
return qio_channel_writev_all(ioc, &iov, 1, errp);
}
+int qio_channel_read_peek_all(QIOChannel *ioc,
+ void* buf,
+ size_t nbytes,
+ Error **errp)
+{
+ QIOChannelClass *klass = QIO_CHANNEL_GET_CLASS(ioc);
+ ssize_t bytes = 0;
+
+ if (!klass->io_read_peek) {
+ error_setg(errp, "Channel does not support read peek");
+ return -1;
+ }
+
+ while (bytes < nbytes) {
+ bytes = klass->io_read_peek(ioc,
+ buf,
+ nbytes,
+ errp);
+
+ if (bytes == QIO_CHANNEL_ERR_BLOCK) {
+ if (qemu_in_coroutine()) {
+ qio_channel_yield(ioc, G_IO_OUT);
+ } else {
+ qio_channel_wait(ioc, G_IO_OUT);
+ }
+ continue;
+ }
+ if (bytes == 0) {
+ error_setg(errp,
+ "Unexpected end-of-file on channel");
+ return 0;
+ }
+ if (bytes < 0) {
+ return -1;
+ }
+ }
+
+ return 1;
+}
int qio_channel_set_blocking(QIOChannel *ioc,
bool enabled,
diff --git a/migration/migration.c b/migration/migration.c
index 739bb683f3..406a9e2f72 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -733,31 +733,51 @@ void migration_ioc_process_incoming(QIOChannel *ioc, Error **errp)
{
MigrationIncomingState *mis = migration_incoming_get_current();
Error *local_err = NULL;
- bool start_migration;
QEMUFile *f;
+ bool default_channel = true;
+ uint32_t channel_magic = 0;
+ int ret = 0;
- if (!mis->from_src_file) {
- /* The first connection (multifd may have multiple) */
+ if (migrate_use_multifd() && !migration_in_postcopy() &&
+ !migrate_use_tls()) {
+ /*
+ * With multiple channels, it is possible that we receive channels
+ * out of order on destination side, causing incorrect mapping of
+ * source channels on destination side. Check channel MAGIC to
+ * decide type of channel. Please note this is best effort, postcopy
+ * preempt channel does not send any magic number so avoid it for
+ * postcopy live migration. Also tls live migration already does
+ * tls handshake while initializing main channel so with tls this
+ * issue is not possible.
+ */
+ ret = qio_channel_read_peek_all(ioc, (void *)&channel_magic,
+ sizeof(channel_magic), &local_err);
+
+ if (ret != 1) {
+ error_propagate(errp, local_err);
+ return;
+ }
+
+ default_channel = (channel_magic == cpu_to_be32(QEMU_VM_FILE_MAGIC));
+ } else {
+ default_channel = !mis->from_src_file;
+ }
+
+ if (default_channel) {
f = qemu_file_new_input(ioc);
if (!migration_incoming_setup(f, errp)) {
return;
}
-
- /*
- * Common migration only needs one channel, so we can start
- * right now. Some features need more than one channel, we wait.
- */
- start_migration = !migration_needs_multiple_sockets();
} else {
/* Multiple connections */
assert(migration_needs_multiple_sockets());
if (migrate_use_multifd()) {
- start_migration = multifd_recv_new_channel(ioc, &local_err);
+ multifd_recv_new_channel(ioc, &local_err);
} else {
assert(migrate_postcopy_preempt());
f = qemu_file_new_input(ioc);
- start_migration = postcopy_preempt_new_channel(mis, f);
+ postcopy_preempt_new_channel(mis, f);
}
if (local_err) {
error_propagate(errp, local_err);
@@ -765,7 +785,7 @@ void migration_ioc_process_incoming(QIOChannel *ioc, Error **errp)
}
}
- if (start_migration) {
+ if (migration_has_all_channels()) {
/* If it's a recovery, we're done */
if (postcopy_try_recover()) {
return;
diff --git a/migration/multifd.c b/migration/multifd.c
index 509bbbe3bf..b54b6e7528 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -1228,11 +1228,9 @@ bool multifd_recv_all_channels_created(void)
/*
* Try to receive all multifd channels to get ready for the migration.
- * - Return true and do not set @errp when correctly receiving all channels;
- * - Return false and do not set @errp when correctly receiving the current one;
- * - Return false and set @errp when failing to receive the current channel.
+ * Sets @errp when failing to receive the current channel.
*/
-bool multifd_recv_new_channel(QIOChannel *ioc, Error **errp)
+void multifd_recv_new_channel(QIOChannel *ioc, Error **errp)
{
MultiFDRecvParams *p;
Error *local_err = NULL;
@@ -1245,7 +1243,7 @@ bool multifd_recv_new_channel(QIOChannel *ioc, Error **errp)
"failed to receive packet"
" via multifd channel %d: ",
qatomic_read(&multifd_recv_state->count));
- return false;
+ return;
}
trace_multifd_recv_new_channel(id);
@@ -1255,7 +1253,7 @@ bool multifd_recv_new_channel(QIOChannel *ioc, Error **errp)
id);
multifd_recv_terminate_threads(local_err);
error_propagate(errp, local_err);
- return false;
+ return;
}
p->c = ioc;
object_ref(OBJECT(ioc));
@@ -1266,6 +1264,4 @@ bool multifd_recv_new_channel(QIOChannel *ioc, Error **errp)
qemu_thread_create(&p->thread, p->name, multifd_recv_thread, p,
QEMU_THREAD_JOINABLE);
qatomic_inc(&multifd_recv_state->count);
- return qatomic_read(&multifd_recv_state->count) ==
- migrate_multifd_channels();
}
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index b9a37ef255..f84f783ab4 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -1539,7 +1539,7 @@ void postcopy_unregister_shared_ufd(struct PostCopyFD *pcfd)
}
}
-bool postcopy_preempt_new_channel(MigrationIncomingState *mis, QEMUFile *file)
+void postcopy_preempt_new_channel(MigrationIncomingState *mis, QEMUFile *file)
{
/*
* The new loading channel has its own threads, so it needs to be
@@ -1548,9 +1548,6 @@ bool postcopy_preempt_new_channel(MigrationIncomingState *mis, QEMUFile *file)
qemu_file_set_blocking(file, true);
mis->postcopy_qemufile_dst = file;
trace_postcopy_preempt_new_channel();
-
- /* Start the migration immediately */
- return true;
}
/*
--
2.38.1
* [PULL 04/30] multifd: Create page_size fields into both MultiFD{Recv, Send}Params
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (2 preceding siblings ...)
2022-11-15 15:34 ` [PULL 03/30] migration: check magic value for deciding the mapping of channels Juan Quintela
@ 2022-11-15 15:34 ` Juan Quintela
2022-11-15 15:34 ` [PULL 05/30] multifd: Create page_count " Juan Quintela
` (28 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:34 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng, Leonardo Bras
We were calling qemu_target_page_size() left and right.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Leonardo Bras <leobras@redhat.com>
---
migration/multifd.h | 4 ++++
migration/multifd-zlib.c | 14 ++++++--------
migration/multifd-zstd.c | 12 +++++-------
migration/multifd.c | 18 ++++++++----------
4 files changed, 23 insertions(+), 25 deletions(-)
diff --git a/migration/multifd.h b/migration/multifd.h
index 913e4ba274..941563c232 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -80,6 +80,8 @@ typedef struct {
bool registered_yank;
/* packet allocated len */
uint32_t packet_len;
+ /* guest page size */
+ uint32_t page_size;
/* multifd flags for sending ram */
int write_flags;
@@ -143,6 +145,8 @@ typedef struct {
QIOChannel *c;
/* packet allocated len */
uint32_t packet_len;
+ /* guest page size */
+ uint32_t page_size;
/* syncs main thread and channels */
QemuSemaphore sem_sync;
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index 18213a9513..37770248e1 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -116,7 +116,6 @@ static void zlib_send_cleanup(MultiFDSendParams *p, Error **errp)
static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
{
struct zlib_data *z = p->data;
- size_t page_size = qemu_target_page_size();
z_stream *zs = &z->zs;
uint32_t out_size = 0;
int ret;
@@ -135,8 +134,8 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
* with compression. zlib does not guarantee that this is safe,
* therefore copy the page before calling deflate().
*/
- memcpy(z->buf, p->pages->block->host + p->normal[i], page_size);
- zs->avail_in = page_size;
+ memcpy(z->buf, p->pages->block->host + p->normal[i], p->page_size);
+ zs->avail_in = p->page_size;
zs->next_in = z->buf;
zs->avail_out = available;
@@ -242,12 +241,11 @@ static void zlib_recv_cleanup(MultiFDRecvParams *p)
static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
{
struct zlib_data *z = p->data;
- size_t page_size = qemu_target_page_size();
z_stream *zs = &z->zs;
uint32_t in_size = p->next_packet_size;
/* we measure the change of total_out */
uint32_t out_size = zs->total_out;
- uint32_t expected_size = p->normal_num * page_size;
+ uint32_t expected_size = p->normal_num * p->page_size;
uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
int ret;
int i;
@@ -274,7 +272,7 @@ static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
flush = Z_SYNC_FLUSH;
}
- zs->avail_out = page_size;
+ zs->avail_out = p->page_size;
zs->next_out = p->host + p->normal[i];
/*
@@ -288,8 +286,8 @@ static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
do {
ret = inflate(zs, flush);
} while (ret == Z_OK && zs->avail_in
- && (zs->total_out - start) < page_size);
- if (ret == Z_OK && (zs->total_out - start) < page_size) {
+ && (zs->total_out - start) < p->page_size);
+ if (ret == Z_OK && (zs->total_out - start) < p->page_size) {
error_setg(errp, "multifd %u: inflate generated too few output",
p->id);
return -1;
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index d788d309f2..f4a8e1ed1f 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -113,7 +113,6 @@ static void zstd_send_cleanup(MultiFDSendParams *p, Error **errp)
static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
{
struct zstd_data *z = p->data;
- size_t page_size = qemu_target_page_size();
int ret;
uint32_t i;
@@ -128,7 +127,7 @@ static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
flush = ZSTD_e_flush;
}
z->in.src = p->pages->block->host + p->normal[i];
- z->in.size = page_size;
+ z->in.size = p->page_size;
z->in.pos = 0;
/*
@@ -241,8 +240,7 @@ static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
{
uint32_t in_size = p->next_packet_size;
uint32_t out_size = 0;
- size_t page_size = qemu_target_page_size();
- uint32_t expected_size = p->normal_num * page_size;
+ uint32_t expected_size = p->normal_num * p->page_size;
uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
struct zstd_data *z = p->data;
int ret;
@@ -265,7 +263,7 @@ static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
for (i = 0; i < p->normal_num; i++) {
z->out.dst = p->host + p->normal[i];
- z->out.size = page_size;
+ z->out.size = p->page_size;
z->out.pos = 0;
/*
@@ -279,8 +277,8 @@ static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
do {
ret = ZSTD_decompressStream(z->zds, &z->out, &z->in);
} while (ret > 0 && (z->in.size - z->in.pos > 0)
- && (z->out.pos < page_size));
- if (ret > 0 && (z->out.pos < page_size)) {
+ && (z->out.pos < p->page_size));
+ if (ret > 0 && (z->out.pos < p->page_size)) {
error_setg(errp, "multifd %u: decompressStream buffer too small",
p->id);
return -1;
diff --git a/migration/multifd.c b/migration/multifd.c
index b54b6e7528..b32fe7edaf 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -87,15 +87,14 @@ static void nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
{
MultiFDPages_t *pages = p->pages;
- size_t page_size = qemu_target_page_size();
for (int i = 0; i < p->normal_num; i++) {
p->iov[p->iovs_num].iov_base = pages->block->host + p->normal[i];
- p->iov[p->iovs_num].iov_len = page_size;
+ p->iov[p->iovs_num].iov_len = p->page_size;
p->iovs_num++;
}
- p->next_packet_size = p->normal_num * page_size;
+ p->next_packet_size = p->normal_num * p->page_size;
p->flags |= MULTIFD_FLAG_NOCOMP;
return 0;
}
@@ -139,7 +138,6 @@ static void nocomp_recv_cleanup(MultiFDRecvParams *p)
static int nocomp_recv_pages(MultiFDRecvParams *p, Error **errp)
{
uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
- size_t page_size = qemu_target_page_size();
if (flags != MULTIFD_FLAG_NOCOMP) {
error_setg(errp, "multifd %u: flags received %x flags expected %x",
@@ -148,7 +146,7 @@ static int nocomp_recv_pages(MultiFDRecvParams *p, Error **errp)
}
for (int i = 0; i < p->normal_num; i++) {
p->iov[i].iov_base = p->host + p->normal[i];
- p->iov[i].iov_len = page_size;
+ p->iov[i].iov_len = p->page_size;
}
return qio_channel_readv_all(p->c, p->iov, p->normal_num, errp);
}
@@ -281,8 +279,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
{
MultiFDPacket_t *packet = p->packet;
- size_t page_size = qemu_target_page_size();
- uint32_t page_count = MULTIFD_PACKET_SIZE / page_size;
+ uint32_t page_count = MULTIFD_PACKET_SIZE / p->page_size;
RAMBlock *block;
int i;
@@ -344,7 +341,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
for (i = 0; i < p->normal_num; i++) {
uint64_t offset = be64_to_cpu(packet->offset[i]);
- if (offset > (block->used_length - page_size)) {
+ if (offset > (block->used_length - p->page_size)) {
error_setg(errp, "multifd: offset too long %" PRIu64
" (max " RAM_ADDR_FMT ")",
offset, block->used_length);
@@ -433,8 +430,7 @@ static int multifd_send_pages(QEMUFile *f)
p->packet_num = multifd_send_state->packet_num++;
multifd_send_state->pages = p->pages;
p->pages = pages;
- transferred = ((uint64_t) pages->num) * qemu_target_page_size()
- + p->packet_len;
+ transferred = ((uint64_t) pages->num) * p->page_size + p->packet_len;
qemu_file_acct_rate_limit(f, transferred);
ram_counters.multifd_bytes += transferred;
ram_counters.transferred += transferred;
@@ -947,6 +943,7 @@ int multifd_save_setup(Error **errp)
/* We need one extra place for the packet header */
p->iov = g_new0(struct iovec, page_count + 1);
p->normal = g_new0(ram_addr_t, page_count);
+ p->page_size = qemu_target_page_size();
if (migrate_use_zero_copy_send()) {
p->write_flags = QIO_CHANNEL_WRITE_FLAG_ZERO_COPY;
@@ -1194,6 +1191,7 @@ int multifd_load_setup(Error **errp)
p->name = g_strdup_printf("multifdrecv_%d", i);
p->iov = g_new0(struct iovec, page_count);
p->normal = g_new0(ram_addr_t, page_count);
+ p->page_size = qemu_target_page_size();
}
for (i = 0; i < thread_count; i++) {
--
2.38.1
* [PULL 05/30] multifd: Create page_count fields into both MultiFD{Recv, Send}Params
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (3 preceding siblings ...)
2022-11-15 15:34 ` [PULL 04/30] multifd: Create page_size fields into both MultiFD{Recv, Send}Params Juan Quintela
@ 2022-11-15 15:34 ` Juan Quintela
2022-11-15 15:34 ` [PULL 06/30] migration: Export ram_transferred_ram() Juan Quintela
` (27 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:34 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng, Leonardo Bras
We were recalculating it left and right. We plan to change those
values in later patches.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Leonardo Bras <leobras@redhat.com>
---
migration/multifd.h | 4 ++++
migration/multifd.c | 7 ++++---
2 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/migration/multifd.h b/migration/multifd.h
index 941563c232..ff3aa2e2e9 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -82,6 +82,8 @@ typedef struct {
uint32_t packet_len;
/* guest page size */
uint32_t page_size;
+ /* number of pages in a full packet */
+ uint32_t page_count;
/* multifd flags for sending ram */
int write_flags;
@@ -147,6 +149,8 @@ typedef struct {
uint32_t packet_len;
/* guest page size */
uint32_t page_size;
+ /* number of pages in a full packet */
+ uint32_t page_count;
/* syncs main thread and channels */
QemuSemaphore sem_sync;
diff --git a/migration/multifd.c b/migration/multifd.c
index b32fe7edaf..c40d98ad5c 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -279,7 +279,6 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
{
MultiFDPacket_t *packet = p->packet;
- uint32_t page_count = MULTIFD_PACKET_SIZE / p->page_size;
RAMBlock *block;
int i;
@@ -306,10 +305,10 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
* If we received a packet that is 100 times bigger than expected
* just stop migration. It is a magic number.
*/
- if (packet->pages_alloc > page_count) {
+ if (packet->pages_alloc > p->page_count) {
error_setg(errp, "multifd: received packet "
"with size %u and expected a size of %u",
- packet->pages_alloc, page_count) ;
+ packet->pages_alloc, p->page_count) ;
return -1;
}
@@ -944,6 +943,7 @@ int multifd_save_setup(Error **errp)
p->iov = g_new0(struct iovec, page_count + 1);
p->normal = g_new0(ram_addr_t, page_count);
p->page_size = qemu_target_page_size();
+ p->page_count = page_count;
if (migrate_use_zero_copy_send()) {
p->write_flags = QIO_CHANNEL_WRITE_FLAG_ZERO_COPY;
@@ -1191,6 +1191,7 @@ int multifd_load_setup(Error **errp)
p->name = g_strdup_printf("multifdrecv_%d", i);
p->iov = g_new0(struct iovec, page_count);
p->normal = g_new0(ram_addr_t, page_count);
+ p->page_count = page_count;
p->page_size = qemu_target_page_size();
}
--
2.38.1
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PULL 06/30] migration: Export ram_transferred_ram()
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (4 preceding siblings ...)
2022-11-15 15:34 ` [PULL 05/30] multifd: Create page_count " Juan Quintela
@ 2022-11-15 15:34 ` Juan Quintela
2022-11-15 15:34 ` [PULL 07/30] migration: Export ram_release_page() Juan Quintela
` (26 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:34 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng, David Edmondson,
Leonardo Bras
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: David Edmondson <david.edmondson@oracle.com>
Reviewed-by: Leonardo Bras <leobras@redhat.com>
---
migration/ram.h | 2 ++
migration/ram.c | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/migration/ram.h b/migration/ram.h
index c7af65ac74..e844966f69 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -65,6 +65,8 @@ int ram_load_postcopy(QEMUFile *f, int channel);
void ram_handle_compressed(void *host, uint8_t ch, uint64_t size);
+void ram_transferred_add(uint64_t bytes);
+
int ramblock_recv_bitmap_test(RAMBlock *rb, void *host_addr);
bool ramblock_recv_bitmap_test_byte_offset(RAMBlock *rb, uint64_t byte_offset);
void ramblock_recv_bitmap_set(RAMBlock *rb, void *host_addr);
diff --git a/migration/ram.c b/migration/ram.c
index dc1de9ddbc..00a06b2c16 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -422,7 +422,7 @@ uint64_t ram_bytes_remaining(void)
MigrationStats ram_counters;
-static void ram_transferred_add(uint64_t bytes)
+void ram_transferred_add(uint64_t bytes)
{
if (runstate_is_running()) {
ram_counters.precopy_bytes += bytes;
--
2.38.1
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PULL 07/30] migration: Export ram_release_page()
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (5 preceding siblings ...)
2022-11-15 15:34 ` [PULL 06/30] migration: Export ram_transferred_ram() Juan Quintela
@ 2022-11-15 15:34 ` Juan Quintela
2022-11-15 15:34 ` [PULL 08/30] Update AVX512 support for xbzrle_encode_buffer Juan Quintela
` (25 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:34 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng, Leonardo Bras
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Leonardo Bras <leobras@redhat.com>
---
migration/ram.h | 1 +
migration/ram.c | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/migration/ram.h b/migration/ram.h
index e844966f69..038d52f49f 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -66,6 +66,7 @@ int ram_load_postcopy(QEMUFile *f, int channel);
void ram_handle_compressed(void *host, uint8_t ch, uint64_t size);
void ram_transferred_add(uint64_t bytes);
+void ram_release_page(const char *rbname, uint64_t offset);
int ramblock_recv_bitmap_test(RAMBlock *rb, void *host_addr);
bool ramblock_recv_bitmap_test_byte_offset(RAMBlock *rb, uint64_t byte_offset);
diff --git a/migration/ram.c b/migration/ram.c
index 00a06b2c16..67e41dd2c0 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1234,7 +1234,7 @@ static void migration_bitmap_sync_precopy(RAMState *rs)
}
}
-static void ram_release_page(const char *rbname, uint64_t offset)
+void ram_release_page(const char *rbname, uint64_t offset)
{
if (!migrate_release_ram() || !migration_in_postcopy()) {
return;
--
2.38.1
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PULL 08/30] Update AVX512 support for xbzrle_encode_buffer
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (6 preceding siblings ...)
2022-11-15 15:34 ` [PULL 07/30] migration: Export ram_release_page() Juan Quintela
@ 2022-11-15 15:34 ` Juan Quintela
2022-11-15 15:34 ` [PULL 09/30] Unit test code and benchmark code Juan Quintela
` (24 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:34 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng, ling xu, Zhou Zhao,
Jun Jin
From: ling xu <ling1.xu@intel.com>
This commit updates the AVX512 support code for the xbzrle_encode_buffer
function to accelerate xbzrle encoding. A runtime check for AVX512
support and a benchmark for this feature are added. Compared with the C
version of xbzrle_encode_buffer, the AVX512 version achieves a 50%-70%
performance improvement in benchmarks. In addition, if dirty data is
randomly located in the 4K page, the AVX512 version can achieve almost
a 140% performance gain.
Signed-off-by: ling xu <ling1.xu@intel.com>
Co-authored-by: Zhou Zhao <zhou.zhao@intel.com>
Co-authored-by: Jun Jin <jun.i.jin@intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
meson.build | 16 +++++
migration/xbzrle.h | 4 ++
migration/ram.c | 34 +++++++++-
migration/xbzrle.c | 124 ++++++++++++++++++++++++++++++++++
meson_options.txt | 2 +
scripts/meson-buildoptions.sh | 14 ++--
6 files changed, 186 insertions(+), 8 deletions(-)
diff --git a/meson.build b/meson.build
index cf3e517e56..d0d28f5c9e 100644
--- a/meson.build
+++ b/meson.build
@@ -2344,6 +2344,22 @@ config_host_data.set('CONFIG_AVX512F_OPT', get_option('avx512f') \
int main(int argc, char *argv[]) { return bar(argv[argc - 1]); }
'''), error_message: 'AVX512F not available').allowed())
+config_host_data.set('CONFIG_AVX512BW_OPT', get_option('avx512bw') \
+ .require(have_cpuid_h, error_message: 'cpuid.h not available, cannot enable AVX512BW') \
+ .require(cc.links('''
+ #pragma GCC push_options
+ #pragma GCC target("avx512bw")
+ #include <cpuid.h>
+ #include <immintrin.h>
+ static int bar(void *a) {
+
+ __m512i *x = a;
+ __m512i res= _mm512_abs_epi8(*x);
+ return res[1];
+ }
+ int main(int argc, char *argv[]) { return bar(argv[0]); }
+ '''), error_message: 'AVX512BW not available').allowed())
+
have_pvrdma = get_option('pvrdma') \
.require(rdma.found(), error_message: 'PVRDMA requires OpenFabrics libraries') \
.require(cc.compiles(gnu_source_prefix + '''
diff --git a/migration/xbzrle.h b/migration/xbzrle.h
index a0db507b9c..6feb49160a 100644
--- a/migration/xbzrle.h
+++ b/migration/xbzrle.h
@@ -18,4 +18,8 @@ int xbzrle_encode_buffer(uint8_t *old_buf, uint8_t *new_buf, int slen,
uint8_t *dst, int dlen);
int xbzrle_decode_buffer(uint8_t *src, int slen, uint8_t *dst, int dlen);
+#if defined(CONFIG_AVX512BW_OPT)
+int xbzrle_encode_buffer_avx512(uint8_t *old_buf, uint8_t *new_buf, int slen,
+ uint8_t *dst, int dlen);
+#endif
#endif
diff --git a/migration/ram.c b/migration/ram.c
index 67e41dd2c0..bb4f08bfed 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -83,6 +83,34 @@
/* 0x80 is reserved in migration.h start with 0x100 next */
#define RAM_SAVE_FLAG_COMPRESS_PAGE 0x100
+int (*xbzrle_encode_buffer_func)(uint8_t *, uint8_t *, int,
+ uint8_t *, int) = xbzrle_encode_buffer;
+#if defined(CONFIG_AVX512BW_OPT)
+#include "qemu/cpuid.h"
+static void __attribute__((constructor)) init_cpu_flag(void)
+{
+ unsigned max = __get_cpuid_max(0, NULL);
+ int a, b, c, d;
+ if (max >= 1) {
+ __cpuid(1, a, b, c, d);
+ /* We must check that AVX is not just available, but usable. */
+ if ((c & bit_OSXSAVE) && (c & bit_AVX) && max >= 7) {
+ int bv;
+ __asm("xgetbv" : "=a"(bv), "=d"(d) : "c"(0));
+ __cpuid_count(7, 0, a, b, c, d);
+ /* 0xe6:
+ * XCR0[7:5] = 111b (OPMASK state, upper 256-bit of ZMM0-ZMM15
+ * and ZMM16-ZMM31 state are enabled by OS)
+ * XCR0[2:1] = 11b (XMM state and YMM state are enabled by OS)
+ */
+ if ((bv & 0xe6) == 0xe6 && (b & bit_AVX512BW)) {
+ xbzrle_encode_buffer_func = xbzrle_encode_buffer_avx512;
+ }
+ }
+ }
+}
+#endif
+
XBZRLECacheStats xbzrle_counters;
/* struct contains XBZRLE cache and a static page
@@ -802,9 +830,9 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
memcpy(XBZRLE.current_buf, *current_data, TARGET_PAGE_SIZE);
/* XBZRLE encoding (if there is no overflow) */
- encoded_len = xbzrle_encode_buffer(prev_cached_page, XBZRLE.current_buf,
- TARGET_PAGE_SIZE, XBZRLE.encoded_buf,
- TARGET_PAGE_SIZE);
+ encoded_len = xbzrle_encode_buffer_func(prev_cached_page, XBZRLE.current_buf,
+ TARGET_PAGE_SIZE, XBZRLE.encoded_buf,
+ TARGET_PAGE_SIZE);
/*
* Update the cache contents, so that it corresponds to the data
diff --git a/migration/xbzrle.c b/migration/xbzrle.c
index 1ba482ded9..05366e86c0 100644
--- a/migration/xbzrle.c
+++ b/migration/xbzrle.c
@@ -174,3 +174,127 @@ int xbzrle_decode_buffer(uint8_t *src, int slen, uint8_t *dst, int dlen)
return d;
}
+
+#if defined(CONFIG_AVX512BW_OPT)
+#pragma GCC push_options
+#pragma GCC target("avx512bw")
+#include <immintrin.h>
+int xbzrle_encode_buffer_avx512(uint8_t *old_buf, uint8_t *new_buf, int slen,
+ uint8_t *dst, int dlen)
+{
+ uint32_t zrun_len = 0, nzrun_len = 0;
+ int d = 0, i = 0, num = 0;
+ uint8_t *nzrun_start = NULL;
+ /* add 1 to include residual part in main loop */
+ uint32_t count512s = (slen >> 6) + 1;
+ /* countResidual is tail of data, i.e., countResidual = slen % 64 */
+ uint32_t count_residual = slen & 0b111111;
+ bool never_same = true;
+ uint64_t mask_residual = 1;
+ mask_residual <<= count_residual;
+ mask_residual -= 1;
+ __m512i r = _mm512_set1_epi32(0);
+
+ while (count512s) {
+ if (d + 2 > dlen) {
+ return -1;
+ }
+
+ int bytes_to_check = 64;
+ uint64_t mask = 0xffffffffffffffff;
+ if (count512s == 1) {
+ bytes_to_check = count_residual;
+ mask = mask_residual;
+ }
+ __m512i old_data = _mm512_mask_loadu_epi8(r,
+ mask, old_buf + i);
+ __m512i new_data = _mm512_mask_loadu_epi8(r,
+ mask, new_buf + i);
+ uint64_t comp = _mm512_cmpeq_epi8_mask(old_data, new_data);
+ count512s--;
+
+ bool is_same = (comp & 0x1);
+ while (bytes_to_check) {
+ if (is_same) {
+ if (nzrun_len) {
+ d += uleb128_encode_small(dst + d, nzrun_len);
+ if (d + nzrun_len > dlen) {
+ return -1;
+ }
+ nzrun_start = new_buf + i - nzrun_len;
+ memcpy(dst + d, nzrun_start, nzrun_len);
+ d += nzrun_len;
+ nzrun_len = 0;
+ }
+ /* 64 data at a time for speed */
+ if (count512s && (comp == 0xffffffffffffffff)) {
+ i += 64;
+ zrun_len += 64;
+ break;
+ }
+ never_same = false;
+ num = __builtin_ctzll(~comp);
+ num = (num < bytes_to_check) ? num : bytes_to_check;
+ zrun_len += num;
+ bytes_to_check -= num;
+ comp >>= num;
+ i += num;
+ if (bytes_to_check) {
+ /* still has different data after same data */
+ d += uleb128_encode_small(dst + d, zrun_len);
+ zrun_len = 0;
+ } else {
+ break;
+ }
+ }
+ if (never_same || zrun_len) {
+ /*
+ * never_same only acts if
+ * data begins with diff in first count512s
+ */
+ d += uleb128_encode_small(dst + d, zrun_len);
+ zrun_len = 0;
+ never_same = false;
+ }
+ /* has diff, 64 data at a time for speed */
+ if ((bytes_to_check == 64) && (comp == 0x0)) {
+ i += 64;
+ nzrun_len += 64;
+ break;
+ }
+ num = __builtin_ctzll(comp);
+ num = (num < bytes_to_check) ? num : bytes_to_check;
+ nzrun_len += num;
+ bytes_to_check -= num;
+ comp >>= num;
+ i += num;
+ if (bytes_to_check) {
+ /* mask like 111000 */
+ d += uleb128_encode_small(dst + d, nzrun_len);
+ /* overflow */
+ if (d + nzrun_len > dlen) {
+ return -1;
+ }
+ nzrun_start = new_buf + i - nzrun_len;
+ memcpy(dst + d, nzrun_start, nzrun_len);
+ d += nzrun_len;
+ nzrun_len = 0;
+ is_same = true;
+ }
+ }
+ }
+
+ if (nzrun_len != 0) {
+ d += uleb128_encode_small(dst + d, nzrun_len);
+ /* overflow */
+ if (d + nzrun_len > dlen) {
+ return -1;
+ }
+ nzrun_start = new_buf + i - nzrun_len;
+ memcpy(dst + d, nzrun_start, nzrun_len);
+ d += nzrun_len;
+ }
+ return d;
+}
+#pragma GCC pop_options
+#endif
diff --git a/meson_options.txt b/meson_options.txt
index 66128178bf..96814dd211 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -104,6 +104,8 @@ option('avx2', type: 'feature', value: 'auto',
description: 'AVX2 optimizations')
option('avx512f', type: 'feature', value: 'disabled',
description: 'AVX512F optimizations')
+option('avx512bw', type: 'feature', value: 'auto',
+ description: 'AVX512BW optimizations')
option('keyring', type: 'feature', value: 'auto',
description: 'Linux keyring support')
diff --git a/scripts/meson-buildoptions.sh b/scripts/meson-buildoptions.sh
index 2cb0de5601..bcb5d854a5 100644
--- a/scripts/meson-buildoptions.sh
+++ b/scripts/meson-buildoptions.sh
@@ -40,7 +40,8 @@ meson_options_help() {
printf "%s\n" ' --enable-trace-backends=CHOICES'
printf "%s\n" ' Set available tracing backends [log] (choices:'
printf "%s\n" ' dtrace/ftrace/log/nop/simple/syslog/ust)'
- printf "%s\n" ' --firmwarepath=VALUES search PATH for firmware files [share/qemu-firmware]'
+ printf "%s\n" ' --firmwarepath=VALUES search PATH for firmware files [share/qemu-'
+ printf "%s\n" ' firmware]'
printf "%s\n" ' --iasl=VALUE Path to ACPI disassembler'
printf "%s\n" ' --includedir=VALUE Header file directory [include]'
printf "%s\n" ' --interp-prefix=VALUE where to find shared libraries etc., use %M for'
@@ -66,6 +67,7 @@ meson_options_help() {
printf "%s\n" ' attr attr/xattr support'
printf "%s\n" ' auth-pam PAM access control'
printf "%s\n" ' avx2 AVX2 optimizations'
+ printf "%s\n" ' avx512bw AVX512BW optimizations'
printf "%s\n" ' avx512f AVX512F optimizations'
printf "%s\n" ' blkio libblkio block device driver'
printf "%s\n" ' bochs bochs image format support'
@@ -155,6 +157,8 @@ meson_options_help() {
printf "%s\n" ' usb-redir libusbredir support'
printf "%s\n" ' vde vde network backend support'
printf "%s\n" ' vdi vdi image format support'
+ printf "%s\n" ' vduse-blk-export'
+ printf "%s\n" ' VDUSE block export support'
printf "%s\n" ' vfio-user-server'
printf "%s\n" ' vfio-user server support'
printf "%s\n" ' vhost-crypto vhost-user crypto backend support'
@@ -163,8 +167,6 @@ meson_options_help() {
printf "%s\n" ' vhost-user vhost-user backend support'
printf "%s\n" ' vhost-user-blk-server'
printf "%s\n" ' build vhost-user-blk server'
- printf "%s\n" ' vduse-blk-export'
- printf "%s\n" ' VDUSE block export support'
printf "%s\n" ' vhost-vdpa vhost-vdpa kernel backend support'
printf "%s\n" ' virglrenderer virgl rendering support'
printf "%s\n" ' virtfs virtio-9p support'
@@ -193,6 +195,8 @@ _meson_option_parse() {
--disable-auth-pam) printf "%s" -Dauth_pam=disabled ;;
--enable-avx2) printf "%s" -Davx2=enabled ;;
--disable-avx2) printf "%s" -Davx2=disabled ;;
+ --enable-avx512bw) printf "%s" -Davx512bw=enabled ;;
+ --disable-avx512bw) printf "%s" -Davx512bw=disabled ;;
--enable-avx512f) printf "%s" -Davx512f=enabled ;;
--disable-avx512f) printf "%s" -Davx512f=disabled ;;
--enable-gcov) printf "%s" -Db_coverage=true ;;
@@ -426,6 +430,8 @@ _meson_option_parse() {
--disable-vde) printf "%s" -Dvde=disabled ;;
--enable-vdi) printf "%s" -Dvdi=enabled ;;
--disable-vdi) printf "%s" -Dvdi=disabled ;;
+ --enable-vduse-blk-export) printf "%s" -Dvduse_blk_export=enabled ;;
+ --disable-vduse-blk-export) printf "%s" -Dvduse_blk_export=disabled ;;
--enable-vfio-user-server) printf "%s" -Dvfio_user_server=enabled ;;
--disable-vfio-user-server) printf "%s" -Dvfio_user_server=disabled ;;
--enable-vhost-crypto) printf "%s" -Dvhost_crypto=enabled ;;
@@ -438,8 +444,6 @@ _meson_option_parse() {
--disable-vhost-user) printf "%s" -Dvhost_user=disabled ;;
--enable-vhost-user-blk-server) printf "%s" -Dvhost_user_blk_server=enabled ;;
--disable-vhost-user-blk-server) printf "%s" -Dvhost_user_blk_server=disabled ;;
- --enable-vduse-blk-export) printf "%s" -Dvduse_blk_export=enabled ;;
- --disable-vduse-blk-export) printf "%s" -Dvduse_blk_export=disabled ;;
--enable-vhost-vdpa) printf "%s" -Dvhost_vdpa=enabled ;;
--disable-vhost-vdpa) printf "%s" -Dvhost_vdpa=disabled ;;
--enable-virglrenderer) printf "%s" -Dvirglrenderer=enabled ;;
--
2.38.1
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PULL 09/30] Unit test code and benchmark code
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (7 preceding siblings ...)
2022-11-15 15:34 ` [PULL 08/30] Update AVX512 support for xbzrle_encode_buffer Juan Quintela
@ 2022-11-15 15:34 ` Juan Quintela
2022-11-15 15:34 ` [PULL 10/30] migration: Fix possible infinite loop of ram save process Juan Quintela
` (23 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:34 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng, ling xu, Zhou Zhao,
Jun Jin
From: ling xu <ling1.xu@intel.com>
Unit test code is in test-xbzrle.c, and performance benchmark code is
in xbzrle-bench.c.
Signed-off-by: ling xu <ling1.xu@intel.com>
Co-authored-by: Zhou Zhao <zhou.zhao@intel.com>
Co-authored-by: Jun Jin <jun.i.jin@intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
tests/bench/xbzrle-bench.c | 465 +++++++++++++++++++++++++++++++++++++
tests/unit/test-xbzrle.c | 39 +++-
tests/bench/meson.build | 4 +
3 files changed, 503 insertions(+), 5 deletions(-)
create mode 100644 tests/bench/xbzrle-bench.c
diff --git a/tests/bench/xbzrle-bench.c b/tests/bench/xbzrle-bench.c
new file mode 100644
index 0000000000..d71397e6f4
--- /dev/null
+++ b/tests/bench/xbzrle-bench.c
@@ -0,0 +1,465 @@
+/*
+ * Xor Based Zero Run Length Encoding unit tests.
+ *
+ * Copyright 2013 Red Hat, Inc. and/or its affiliates
+ *
+ * Authors:
+ * Orit Wasserman <owasserm@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+#include "qemu/osdep.h"
+#include "qemu/cutils.h"
+#include "../migration/xbzrle.h"
+
+#define XBZRLE_PAGE_SIZE 4096
+
+#if defined(CONFIG_AVX512BW_OPT)
+static bool is_cpu_support_avx512bw;
+#include "qemu/cpuid.h"
+static void __attribute__((constructor)) init_cpu_flag(void)
+{
+ unsigned max = __get_cpuid_max(0, NULL);
+ int a, b, c, d;
+ is_cpu_support_avx512bw = false;
+ if (max >= 1) {
+ __cpuid(1, a, b, c, d);
+ /* We must check that AVX is not just available, but usable. */
+ if ((c & bit_OSXSAVE) && (c & bit_AVX) && max >= 7) {
+ int bv;
+ __asm("xgetbv" : "=a"(bv), "=d"(d) : "c"(0));
+ __cpuid_count(7, 0, a, b, c, d);
+ /* 0xe6:
+ * XCR0[7:5] = 111b (OPMASK state, upper 256-bit of ZMM0-ZMM15
+ * and ZMM16-ZMM31 state are enabled by OS)
+ * XCR0[2:1] = 11b (XMM state and YMM state are enabled by OS)
+ */
+ if ((bv & 0xe6) == 0xe6 && (b & bit_AVX512BW)) {
+ is_cpu_support_avx512bw = true;
+ }
+ }
+ }
+ return ;
+}
+#endif
+
+struct ResTime {
+ float t_raw;
+ float t_512;
+};
+
+static void encode_decode_zero(struct ResTime *res)
+{
+ uint8_t *buffer = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *compressed = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *buffer512 = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *compressed512 = g_malloc0(XBZRLE_PAGE_SIZE);
+ int i = 0;
+ int dlen = 0, dlen512 = 0;
+ int diff_len = g_test_rand_int_range(0, XBZRLE_PAGE_SIZE - 1006);
+
+ for (i = diff_len; i > 0; i--) {
+ buffer[1000 + i] = i;
+ buffer512[1000 + i] = i;
+ }
+
+ buffer[1000 + diff_len + 3] = 103;
+ buffer[1000 + diff_len + 5] = 105;
+
+ buffer512[1000 + diff_len + 3] = 103;
+ buffer512[1000 + diff_len + 5] = 105;
+
+ /* encode zero page */
+ time_t t_start, t_end, t_start512, t_end512;
+ t_start = clock();
+ dlen = xbzrle_encode_buffer(buffer, buffer, XBZRLE_PAGE_SIZE, compressed,
+ XBZRLE_PAGE_SIZE);
+ t_end = clock();
+ float time_val = difftime(t_end, t_start);
+ g_assert(dlen == 0);
+
+ t_start512 = clock();
+ dlen512 = xbzrle_encode_buffer_avx512(buffer512, buffer512, XBZRLE_PAGE_SIZE,
+ compressed512, XBZRLE_PAGE_SIZE);
+ t_end512 = clock();
+ float time_val512 = difftime(t_end512, t_start512);
+ g_assert(dlen512 == 0);
+
+ res->t_raw = time_val;
+ res->t_512 = time_val512;
+
+ g_free(buffer);
+ g_free(compressed);
+ g_free(buffer512);
+ g_free(compressed512);
+
+}
+
+static void test_encode_decode_zero_avx512(void)
+{
+ int i;
+ float time_raw = 0.0, time_512 = 0.0;
+ struct ResTime res;
+ for (i = 0; i < 10000; i++) {
+ encode_decode_zero(&res);
+ time_raw += res.t_raw;
+ time_512 += res.t_512;
+ }
+ printf("Zero test:\n");
+ printf("Raw xbzrle_encode time is %f ms\n", time_raw);
+ printf("512 xbzrle_encode time is %f ms\n", time_512);
+}
+
+static void encode_decode_unchanged(struct ResTime *res)
+{
+ uint8_t *compressed = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *test = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *compressed512 = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *test512 = g_malloc0(XBZRLE_PAGE_SIZE);
+ int i = 0;
+ int dlen = 0, dlen512 = 0;
+ int diff_len = g_test_rand_int_range(0, XBZRLE_PAGE_SIZE - 1006);
+
+ for (i = diff_len; i > 0; i--) {
+ test[1000 + i] = i + 4;
+ test512[1000 + i] = i + 4;
+ }
+
+ test[1000 + diff_len + 3] = 107;
+ test[1000 + diff_len + 5] = 109;
+
+ test512[1000 + diff_len + 3] = 107;
+ test512[1000 + diff_len + 5] = 109;
+
+ /* test unchanged buffer */
+ time_t t_start, t_end, t_start512, t_end512;
+ t_start = clock();
+ dlen = xbzrle_encode_buffer(test, test, XBZRLE_PAGE_SIZE, compressed,
+ XBZRLE_PAGE_SIZE);
+ t_end = clock();
+ float time_val = difftime(t_end, t_start);
+ g_assert(dlen == 0);
+
+ t_start512 = clock();
+ dlen512 = xbzrle_encode_buffer_avx512(test512, test512, XBZRLE_PAGE_SIZE,
+ compressed512, XBZRLE_PAGE_SIZE);
+ t_end512 = clock();
+ float time_val512 = difftime(t_end512, t_start512);
+ g_assert(dlen512 == 0);
+
+ res->t_raw = time_val;
+ res->t_512 = time_val512;
+
+ g_free(test);
+ g_free(compressed);
+ g_free(test512);
+ g_free(compressed512);
+
+}
+
+static void test_encode_decode_unchanged_avx512(void)
+{
+ int i;
+ float time_raw = 0.0, time_512 = 0.0;
+ struct ResTime res;
+ for (i = 0; i < 10000; i++) {
+ encode_decode_unchanged(&res);
+ time_raw += res.t_raw;
+ time_512 += res.t_512;
+ }
+ printf("Unchanged test:\n");
+ printf("Raw xbzrle_encode time is %f ms\n", time_raw);
+ printf("512 xbzrle_encode time is %f ms\n", time_512);
+}
+
+static void encode_decode_1_byte(struct ResTime *res)
+{
+ uint8_t *buffer = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *test = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *compressed = g_malloc(XBZRLE_PAGE_SIZE);
+ uint8_t *buffer512 = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *test512 = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *compressed512 = g_malloc(XBZRLE_PAGE_SIZE);
+ int dlen = 0, rc = 0, dlen512 = 0, rc512 = 0;
+ uint8_t buf[2];
+ uint8_t buf512[2];
+
+ test[XBZRLE_PAGE_SIZE - 1] = 1;
+ test512[XBZRLE_PAGE_SIZE - 1] = 1;
+
+ time_t t_start, t_end, t_start512, t_end512;
+ t_start = clock();
+ dlen = xbzrle_encode_buffer(buffer, test, XBZRLE_PAGE_SIZE, compressed,
+ XBZRLE_PAGE_SIZE);
+ t_end = clock();
+ float time_val = difftime(t_end, t_start);
+ g_assert(dlen == (uleb128_encode_small(&buf[0], 4095) + 2));
+
+ rc = xbzrle_decode_buffer(compressed, dlen, buffer, XBZRLE_PAGE_SIZE);
+ g_assert(rc == XBZRLE_PAGE_SIZE);
+ g_assert(memcmp(test, buffer, XBZRLE_PAGE_SIZE) == 0);
+
+ t_start512 = clock();
+ dlen512 = xbzrle_encode_buffer_avx512(buffer512, test512, XBZRLE_PAGE_SIZE,
+ compressed512, XBZRLE_PAGE_SIZE);
+ t_end512 = clock();
+ float time_val512 = difftime(t_end512, t_start512);
+ g_assert(dlen512 == (uleb128_encode_small(&buf512[0], 4095) + 2));
+
+ rc512 = xbzrle_decode_buffer(compressed512, dlen512, buffer512,
+ XBZRLE_PAGE_SIZE);
+ g_assert(rc512 == XBZRLE_PAGE_SIZE);
+ g_assert(memcmp(test512, buffer512, XBZRLE_PAGE_SIZE) == 0);
+
+ res->t_raw = time_val;
+ res->t_512 = time_val512;
+
+ g_free(buffer);
+ g_free(compressed);
+ g_free(test);
+ g_free(buffer512);
+ g_free(compressed512);
+ g_free(test512);
+
+}
+
+static void test_encode_decode_1_byte_avx512(void)
+{
+ int i;
+ float time_raw = 0.0, time_512 = 0.0;
+ struct ResTime res;
+ for (i = 0; i < 10000; i++) {
+ encode_decode_1_byte(&res);
+ time_raw += res.t_raw;
+ time_512 += res.t_512;
+ }
+ printf("1 byte test:\n");
+ printf("Raw xbzrle_encode time is %f ms\n", time_raw);
+ printf("512 xbzrle_encode time is %f ms\n", time_512);
+}
+
+static void encode_decode_overflow(struct ResTime *res)
+{
+ uint8_t *compressed = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *test = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *buffer = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *compressed512 = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *test512 = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *buffer512 = g_malloc0(XBZRLE_PAGE_SIZE);
+ int i = 0, rc = 0, rc512 = 0;
+
+ for (i = 0; i < XBZRLE_PAGE_SIZE / 2 - 1; i++) {
+ test[i * 2] = 1;
+ test512[i * 2] = 1;
+ }
+
+ /* encode overflow */
+ time_t t_start, t_end, t_start512, t_end512;
+ t_start = clock();
+ rc = xbzrle_encode_buffer(buffer, test, XBZRLE_PAGE_SIZE, compressed,
+ XBZRLE_PAGE_SIZE);
+ t_end = clock();
+ float time_val = difftime(t_end, t_start);
+ g_assert(rc == -1);
+
+ t_start512 = clock();
+ rc512 = xbzrle_encode_buffer_avx512(buffer512, test512, XBZRLE_PAGE_SIZE,
+ compressed512, XBZRLE_PAGE_SIZE);
+ t_end512 = clock();
+ float time_val512 = difftime(t_end512, t_start512);
+ g_assert(rc512 == -1);
+
+ res->t_raw = time_val;
+ res->t_512 = time_val512;
+
+ g_free(buffer);
+ g_free(compressed);
+ g_free(test);
+ g_free(buffer512);
+ g_free(compressed512);
+ g_free(test512);
+
+}
+
+static void test_encode_decode_overflow_avx512(void)
+{
+ int i;
+ float time_raw = 0.0, time_512 = 0.0;
+ struct ResTime res;
+ for (i = 0; i < 10000; i++) {
+ encode_decode_overflow(&res);
+ time_raw += res.t_raw;
+ time_512 += res.t_512;
+ }
+ printf("Overflow test:\n");
+ printf("Raw xbzrle_encode time is %f ms\n", time_raw);
+ printf("512 xbzrle_encode time is %f ms\n", time_512);
+}
+
+static void encode_decode_range_avx512(struct ResTime *res)
+{
+ uint8_t *buffer = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *compressed = g_malloc(XBZRLE_PAGE_SIZE);
+ uint8_t *test = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *buffer512 = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *compressed512 = g_malloc(XBZRLE_PAGE_SIZE);
+ uint8_t *test512 = g_malloc0(XBZRLE_PAGE_SIZE);
+ int i = 0, rc = 0, rc512 = 0;
+ int dlen = 0, dlen512 = 0;
+
+ int diff_len = g_test_rand_int_range(0, XBZRLE_PAGE_SIZE - 1006);
+
+ for (i = diff_len; i > 0; i--) {
+ buffer[1000 + i] = i;
+ test[1000 + i] = i + 4;
+ buffer512[1000 + i] = i;
+ test512[1000 + i] = i + 4;
+ }
+
+ buffer[1000 + diff_len + 3] = 103;
+ test[1000 + diff_len + 3] = 107;
+
+ buffer[1000 + diff_len + 5] = 105;
+ test[1000 + diff_len + 5] = 109;
+
+ buffer512[1000 + diff_len + 3] = 103;
+ test512[1000 + diff_len + 3] = 107;
+
+ buffer512[1000 + diff_len + 5] = 105;
+ test512[1000 + diff_len + 5] = 109;
+
+ /* test encode/decode */
+ time_t t_start, t_end, t_start512, t_end512;
+ t_start = clock();
+ dlen = xbzrle_encode_buffer(test, buffer, XBZRLE_PAGE_SIZE, compressed,
+ XBZRLE_PAGE_SIZE);
+ t_end = clock();
+ float time_val = difftime(t_end, t_start);
+ rc = xbzrle_decode_buffer(compressed, dlen, test, XBZRLE_PAGE_SIZE);
+ g_assert(rc < XBZRLE_PAGE_SIZE);
+ g_assert(memcmp(test, buffer, XBZRLE_PAGE_SIZE) == 0);
+
+ t_start512 = clock();
+ dlen512 = xbzrle_encode_buffer_avx512(test512, buffer512, XBZRLE_PAGE_SIZE,
+ compressed512, XBZRLE_PAGE_SIZE);
+ t_end512 = clock();
+ float time_val512 = difftime(t_end512, t_start512);
+ rc512 = xbzrle_decode_buffer(compressed512, dlen512, test512, XBZRLE_PAGE_SIZE);
+ g_assert(rc512 < XBZRLE_PAGE_SIZE);
+ g_assert(memcmp(test512, buffer512, XBZRLE_PAGE_SIZE) == 0);
+
+ res->t_raw = time_val;
+ res->t_512 = time_val512;
+
+ g_free(buffer);
+ g_free(compressed);
+ g_free(test);
+ g_free(buffer512);
+ g_free(compressed512);
+ g_free(test512);
+
+}
+
+static void test_encode_decode_avx512(void)
+{
+ int i;
+ float time_raw = 0.0, time_512 = 0.0;
+ struct ResTime res;
+ for (i = 0; i < 10000; i++) {
+ encode_decode_range_avx512(&res);
+ time_raw += res.t_raw;
+ time_512 += res.t_512;
+ }
+ printf("Encode decode test:\n");
+ printf("Raw xbzrle_encode time is %f ms\n", time_raw);
+ printf("512 xbzrle_encode time is %f ms\n", time_512);
+}
+
+static void encode_decode_random(struct ResTime *res)
+{
+ uint8_t *buffer = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *compressed = g_malloc(XBZRLE_PAGE_SIZE);
+ uint8_t *test = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *buffer512 = g_malloc0(XBZRLE_PAGE_SIZE);
+ uint8_t *compressed512 = g_malloc(XBZRLE_PAGE_SIZE);
+ uint8_t *test512 = g_malloc0(XBZRLE_PAGE_SIZE);
+ int i = 0, rc = 0, rc512 = 0;
+ int dlen = 0, dlen512 = 0;
+
+ int diff_len = g_test_rand_int_range(0, XBZRLE_PAGE_SIZE - 1);
+ /* store the index of diff */
+ int dirty_index[diff_len];
+ for (int j = 0; j < diff_len; j++) {
+ dirty_index[j] = g_test_rand_int_range(0, XBZRLE_PAGE_SIZE - 1);
+ }
+ for (i = diff_len - 1; i >= 0; i--) {
+ buffer[dirty_index[i]] = i;
+ test[dirty_index[i]] = i + 4;
+ buffer512[dirty_index[i]] = i;
+ test512[dirty_index[i]] = i + 4;
+ }
+
+ time_t t_start, t_end, t_start512, t_end512;
+ t_start = clock();
+ dlen = xbzrle_encode_buffer(test, buffer, XBZRLE_PAGE_SIZE, compressed,
+ XBZRLE_PAGE_SIZE);
+ t_end = clock();
+ float time_val = difftime(t_end, t_start);
+ rc = xbzrle_decode_buffer(compressed, dlen, test, XBZRLE_PAGE_SIZE);
+ g_assert(rc < XBZRLE_PAGE_SIZE);
+
+ t_start512 = clock();
+ dlen512 = xbzrle_encode_buffer_avx512(test512, buffer512, XBZRLE_PAGE_SIZE,
+ compressed512, XBZRLE_PAGE_SIZE);
+ t_end512 = clock();
+ float time_val512 = difftime(t_end512, t_start512);
+ rc512 = xbzrle_decode_buffer(compressed512, dlen512, test512, XBZRLE_PAGE_SIZE);
+ g_assert(rc512 < XBZRLE_PAGE_SIZE);
+
+ res->t_raw = time_val;
+ res->t_512 = time_val512;
+
+ g_free(buffer);
+ g_free(compressed);
+ g_free(test);
+ g_free(buffer512);
+ g_free(compressed512);
+ g_free(test512);
+
+}
+
+static void test_encode_decode_random_avx512(void)
+{
+ int i;
+ float time_raw = 0.0, time_512 = 0.0;
+ struct ResTime res;
+ for (i = 0; i < 10000; i++) {
+ encode_decode_random(&res);
+ time_raw += res.t_raw;
+ time_512 += res.t_512;
+ }
+ printf("Random test:\n");
+ printf("Raw xbzrle_encode time is %f ms\n", time_raw);
+ printf("512 xbzrle_encode time is %f ms\n", time_512);
+}
+
+int main(int argc, char **argv)
+{
+ g_test_init(&argc, &argv, NULL);
+ g_test_rand_int();
+ #if defined(CONFIG_AVX512BW_OPT)
+ if (likely(is_cpu_support_avx512bw)) {
+ g_test_add_func("/xbzrle/encode_decode_zero", test_encode_decode_zero_avx512);
+ g_test_add_func("/xbzrle/encode_decode_unchanged",
+ test_encode_decode_unchanged_avx512);
+ g_test_add_func("/xbzrle/encode_decode_1_byte", test_encode_decode_1_byte_avx512);
+ g_test_add_func("/xbzrle/encode_decode_overflow",
+ test_encode_decode_overflow_avx512);
+ g_test_add_func("/xbzrle/encode_decode", test_encode_decode_avx512);
+ g_test_add_func("/xbzrle/encode_decode_random", test_encode_decode_random_avx512);
+ }
+ #endif
+ return g_test_run();
+}
diff --git a/tests/unit/test-xbzrle.c b/tests/unit/test-xbzrle.c
index ef951b6e54..547046d093 100644
--- a/tests/unit/test-xbzrle.c
+++ b/tests/unit/test-xbzrle.c
@@ -16,6 +16,35 @@
#define XBZRLE_PAGE_SIZE 4096
+int (*xbzrle_encode_buffer_func)(uint8_t *, uint8_t *, int,
+ uint8_t *, int) = xbzrle_encode_buffer;
+#if defined(CONFIG_AVX512BW_OPT)
+#include "qemu/cpuid.h"
+static void __attribute__((constructor)) init_cpu_flag(void)
+{
+ unsigned max = __get_cpuid_max(0, NULL);
+ int a, b, c, d;
+ if (max >= 1) {
+ __cpuid(1, a, b, c, d);
+ /* We must check that AVX is not just available, but usable. */
+ if ((c & bit_OSXSAVE) && (c & bit_AVX) && max >= 7) {
+ int bv;
+ __asm("xgetbv" : "=a"(bv), "=d"(d) : "c"(0));
+ __cpuid_count(7, 0, a, b, c, d);
+ /* 0xe6:
+ * XCR0[7:5] = 111b (OPMASK state, upper 256-bit of ZMM0-ZMM15
+ * and ZMM16-ZMM31 state are enabled by OS)
+ * XCR0[2:1] = 11b (XMM state and YMM state are enabled by OS)
+ */
+ if ((bv & 0xe6) == 0xe6 && (b & bit_AVX512BW)) {
+ xbzrle_encode_buffer_func = xbzrle_encode_buffer_avx512;
+ }
+ }
+ }
+ return ;
+}
+#endif
+
static void test_uleb(void)
{
uint32_t i, val;
@@ -54,7 +83,7 @@ static void test_encode_decode_zero(void)
buffer[1000 + diff_len + 5] = 105;
/* encode zero page */
- dlen = xbzrle_encode_buffer(buffer, buffer, XBZRLE_PAGE_SIZE, compressed,
+ dlen = xbzrle_encode_buffer_func(buffer, buffer, XBZRLE_PAGE_SIZE, compressed,
XBZRLE_PAGE_SIZE);
g_assert(dlen == 0);
@@ -78,7 +107,7 @@ static void test_encode_decode_unchanged(void)
test[1000 + diff_len + 5] = 109;
/* test unchanged buffer */
- dlen = xbzrle_encode_buffer(test, test, XBZRLE_PAGE_SIZE, compressed,
+ dlen = xbzrle_encode_buffer_func(test, test, XBZRLE_PAGE_SIZE, compressed,
XBZRLE_PAGE_SIZE);
g_assert(dlen == 0);
@@ -96,7 +125,7 @@ static void test_encode_decode_1_byte(void)
test[XBZRLE_PAGE_SIZE - 1] = 1;
- dlen = xbzrle_encode_buffer(buffer, test, XBZRLE_PAGE_SIZE, compressed,
+ dlen = xbzrle_encode_buffer_func(buffer, test, XBZRLE_PAGE_SIZE, compressed,
XBZRLE_PAGE_SIZE);
g_assert(dlen == (uleb128_encode_small(&buf[0], 4095) + 2));
@@ -121,7 +150,7 @@ static void test_encode_decode_overflow(void)
}
/* encode overflow */
- rc = xbzrle_encode_buffer(buffer, test, XBZRLE_PAGE_SIZE, compressed,
+ rc = xbzrle_encode_buffer_func(buffer, test, XBZRLE_PAGE_SIZE, compressed,
XBZRLE_PAGE_SIZE);
g_assert(rc == -1);
@@ -152,7 +181,7 @@ static void encode_decode_range(void)
test[1000 + diff_len + 5] = 109;
/* test encode/decode */
- dlen = xbzrle_encode_buffer(test, buffer, XBZRLE_PAGE_SIZE, compressed,
+ dlen = xbzrle_encode_buffer_func(test, buffer, XBZRLE_PAGE_SIZE, compressed,
XBZRLE_PAGE_SIZE);
rc = xbzrle_decode_buffer(compressed, dlen, test, XBZRLE_PAGE_SIZE);
diff --git a/tests/bench/meson.build b/tests/bench/meson.build
index 279a8fcc33..daefead58d 100644
--- a/tests/bench/meson.build
+++ b/tests/bench/meson.build
@@ -3,6 +3,10 @@ qht_bench = executable('qht-bench',
sources: 'qht-bench.c',
dependencies: [qemuutil])
+xbzrle_bench = executable('xbzrle-bench',
+ sources: 'xbzrle-bench.c',
+ dependencies: [qemuutil,migration])
+
executable('atomic_add-bench',
sources: files('atomic_add-bench.c'),
dependencies: [qemuutil],
--
2.38.1
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PULL 10/30] migration: Fix possible infinite loop of ram save process
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (8 preceding siblings ...)
2022-11-15 15:34 ` [PULL 09/30] Unit test code and benchmark code Juan Quintela
@ 2022-11-15 15:34 ` Juan Quintela
2022-11-15 15:34 ` [PULL 11/30] migration: Fix race on qemu_file_shutdown() Juan Quintela
` (22 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:34 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
From: Peter Xu <peterx@redhat.com>
When starting ram saving procedure (especially at the completion phase),
always set last_seen_block to non-NULL to make sure we can always correctly
detect the case where "we've migrated all the dirty pages".
This guarantees that both last_seen_block and pss.block are always valid
before the loop starts.
See the comment in the code for some details.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index bb4f08bfed..c0f5d6d287 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2574,14 +2574,22 @@ static int ram_find_and_save_block(RAMState *rs)
return pages;
}
+ /*
+ * Always keep last_seen_block/last_page valid during this procedure,
+ * because find_dirty_block() relies on these values (e.g., we compare
+ * last_seen_block with pss.block to see whether we searched all the
+ * ramblocks) to detect the completion of migration. Having NULL value
+ * of last_seen_block can conditionally cause below loop to run forever.
+ */
+ if (!rs->last_seen_block) {
+ rs->last_seen_block = QLIST_FIRST_RCU(&ram_list.blocks);
+ rs->last_page = 0;
+ }
+
pss.block = rs->last_seen_block;
pss.page = rs->last_page;
pss.complete_round = false;
- if (!pss.block) {
- pss.block = QLIST_FIRST_RCU(&ram_list.blocks);
- }
-
do {
again = true;
found = get_queued_page(rs, &pss);
--
2.38.1
* [PULL 11/30] migration: Fix race on qemu_file_shutdown()
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (9 preceding siblings ...)
2022-11-15 15:34 ` [PULL 10/30] migration: Fix possible infinite loop of ram save process Juan Quintela
@ 2022-11-15 15:34 ` Juan Quintela
2022-11-15 15:34 ` [PULL 12/30] migration: Disallow postcopy preempt to be used with compress Juan Quintela
` (21 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:34 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
From: Peter Xu <peterx@redhat.com>
In qemu_file_shutdown(), there's a possible race with the current order of
operations. There are two major things to do:
(1) Do real shutdown() (e.g. shutdown() syscall on socket)
(2) Update qemufile's last_error
We must do (2) before (1) otherwise there can be a race condition like:
page receiver other thread
------------- ------------
qemu_get_buffer()
do shutdown()
returns 0 (buffer all zero)
(meanwhile we didn't check this retcode)
try to detect IO error
last_error==NULL, IO okay
install ALL-ZERO page
set last_error
--> guest crash!
To fix this, we could also check the retval of qemu_get_buffer(), but not
all APIs can be properly checked and ultimately we still need to go back to
qemu_file_get_error(). E.g. qemu_get_byte() doesn't return an error.

Maybe some day a rework of the qemufile API is really needed, but for now
keep using qemu_file_get_error() and fix it by not allowing that race
condition to happen. Here shutdown() is indeed special because the
last_error is emulated. For real -EIO errors it'll always be set when e.g.
a sendmsg() error triggers, so we won't miss those; only shutdown() is a
bit tricky here.
Cc: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/qemu-file.c | 27 ++++++++++++++++++++++++---
1 file changed, 24 insertions(+), 3 deletions(-)
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 4f400c2e52..2d5f74ffc2 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -79,6 +79,30 @@ int qemu_file_shutdown(QEMUFile *f)
int ret = 0;
f->shutdown = true;
+
+ /*
+ * We must set qemufile error before the real shutdown(), otherwise
+ * there can be a race window where we thought IO all went though
+ * (because last_error==NULL) but actually IO has already stopped.
+ *
+ * If without correct ordering, the race can happen like this:
+ *
+ * page receiver other thread
+ * ------------- ------------
+ * qemu_get_buffer()
+ * do shutdown()
+ * returns 0 (buffer all zero)
+ * (we didn't check this retcode)
+ * try to detect IO error
+ * last_error==NULL, IO okay
+ * install ALL-ZERO page
+ * set last_error
+ * --> guest crash!
+ */
+ if (!f->last_error) {
+ qemu_file_set_error(f, -EIO);
+ }
+
if (!qio_channel_has_feature(f->ioc,
QIO_CHANNEL_FEATURE_SHUTDOWN)) {
return -ENOSYS;
@@ -88,9 +112,6 @@ int qemu_file_shutdown(QEMUFile *f)
ret = -EIO;
}
- if (!f->last_error) {
- qemu_file_set_error(f, -EIO);
- }
return ret;
}
--
2.38.1
* [PULL 12/30] migration: Disallow postcopy preempt to be used with compress
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (10 preceding siblings ...)
2022-11-15 15:34 ` [PULL 11/30] migration: Fix race on qemu_file_shutdown() Juan Quintela
@ 2022-11-15 15:34 ` Juan Quintela
2022-11-15 15:34 ` [PULL 13/30] migration: Use non-atomic ops for clear log bitmap Juan Quintela
` (20 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:34 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
From: Peter Xu <peterx@redhat.com>
The preempt mode requires the capability to assign a channel for each
page, while the compression logic will currently assign pages to different
compress threads/local channels, so potentially they're incompatible.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/migration.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/migration/migration.c b/migration/migration.c
index 406a9e2f72..0bc3fce4b7 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1357,6 +1357,17 @@ static bool migrate_caps_check(bool *cap_list,
error_setg(errp, "Postcopy preempt requires postcopy-ram");
return false;
}
+
+ /*
+ * Preempt mode requires urgent pages to be sent in separate
+ * channel, OTOH compression logic will disorder all pages into
+ * different compression channels, which is not compatible with the
+ * preempt assumptions on channel assignments.
+ */
+ if (cap_list[MIGRATION_CAPABILITY_COMPRESS]) {
+ error_setg(errp, "Postcopy preempt not compatible with compress");
+ return false;
+ }
}
return true;
--
2.38.1
* [PULL 13/30] migration: Use non-atomic ops for clear log bitmap
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (11 preceding siblings ...)
2022-11-15 15:34 ` [PULL 12/30] migration: Disallow postcopy preempt to be used with compress Juan Quintela
@ 2022-11-15 15:34 ` Juan Quintela
2022-11-15 15:34 ` [PULL 14/30] migration: Disable multifd explicitly with compression Juan Quintela
` (19 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:34 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
From: Peter Xu <peterx@redhat.com>
Since we already have bitmap_mutex to protect either the dirty bitmap or
the clear log bitmap, we don't need atomic operations to set/clear/test on
the clear log bitmap. Switch all ops from atomic to non-atomic versions,
and meanwhile touch up the comments to show which lock is in charge.

Introduce a non-atomic version of bitmap_test_and_clear_atomic(), mostly
the same as the atomic version but simplified in a few places, e.g. it
drops the "old_bits" variable and the explicit memory barriers.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
include/exec/ram_addr.h | 11 +++++-----
include/exec/ramblock.h | 3 +++
include/qemu/bitmap.h | 1 +
util/bitmap.c | 45 +++++++++++++++++++++++++++++++++++++++++
4 files changed, 55 insertions(+), 5 deletions(-)
diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 1500680458..f4fb6a2111 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -42,7 +42,8 @@ static inline long clear_bmap_size(uint64_t pages, uint8_t shift)
}
/**
- * clear_bmap_set: set clear bitmap for the page range
+ * clear_bmap_set: set clear bitmap for the page range. Must be with
+ * bitmap_mutex held.
*
* @rb: the ramblock to operate on
* @start: the start page number
@@ -55,12 +56,12 @@ static inline void clear_bmap_set(RAMBlock *rb, uint64_t start,
{
uint8_t shift = rb->clear_bmap_shift;
- bitmap_set_atomic(rb->clear_bmap, start >> shift,
- clear_bmap_size(npages, shift));
+ bitmap_set(rb->clear_bmap, start >> shift, clear_bmap_size(npages, shift));
}
/**
- * clear_bmap_test_and_clear: test clear bitmap for the page, clear if set
+ * clear_bmap_test_and_clear: test clear bitmap for the page, clear if set.
+ * Must be with bitmap_mutex held.
*
* @rb: the ramblock to operate on
* @page: the page number to check
@@ -71,7 +72,7 @@ static inline bool clear_bmap_test_and_clear(RAMBlock *rb, uint64_t page)
{
uint8_t shift = rb->clear_bmap_shift;
- return bitmap_test_and_clear_atomic(rb->clear_bmap, page >> shift, 1);
+ return bitmap_test_and_clear(rb->clear_bmap, page >> shift, 1);
}
static inline bool offset_in_ramblock(RAMBlock *b, ram_addr_t offset)
diff --git a/include/exec/ramblock.h b/include/exec/ramblock.h
index 6cbedf9e0c..adc03df59c 100644
--- a/include/exec/ramblock.h
+++ b/include/exec/ramblock.h
@@ -53,6 +53,9 @@ struct RAMBlock {
* and split clearing of dirty bitmap on the remote node (e.g.,
* KVM). The bitmap will be set only when doing global sync.
*
+ * It is only used during src side of ram migration, and it is
+ * protected by the global ram_state.bitmap_mutex.
+ *
* NOTE: this bitmap is different comparing to the other bitmaps
* in that one bit can represent multiple guest pages (which is
* decided by the `clear_bmap_shift' variable below). On
diff --git a/include/qemu/bitmap.h b/include/qemu/bitmap.h
index 82a1d2f41f..3ccb00865f 100644
--- a/include/qemu/bitmap.h
+++ b/include/qemu/bitmap.h
@@ -253,6 +253,7 @@ void bitmap_set(unsigned long *map, long i, long len);
void bitmap_set_atomic(unsigned long *map, long i, long len);
void bitmap_clear(unsigned long *map, long start, long nr);
bool bitmap_test_and_clear_atomic(unsigned long *map, long start, long nr);
+bool bitmap_test_and_clear(unsigned long *map, long start, long nr);
void bitmap_copy_and_clear_atomic(unsigned long *dst, unsigned long *src,
long nr);
unsigned long bitmap_find_next_zero_area(unsigned long *map,
diff --git a/util/bitmap.c b/util/bitmap.c
index f81d8057a7..8d12e90a5a 100644
--- a/util/bitmap.c
+++ b/util/bitmap.c
@@ -240,6 +240,51 @@ void bitmap_clear(unsigned long *map, long start, long nr)
}
}
+bool bitmap_test_and_clear(unsigned long *map, long start, long nr)
+{
+ unsigned long *p = map + BIT_WORD(start);
+ const long size = start + nr;
+ int bits_to_clear = BITS_PER_LONG - (start % BITS_PER_LONG);
+ unsigned long mask_to_clear = BITMAP_FIRST_WORD_MASK(start);
+ bool dirty = false;
+
+ assert(start >= 0 && nr >= 0);
+
+ /* First word */
+ if (nr - bits_to_clear > 0) {
+ if ((*p) & mask_to_clear) {
+ dirty = true;
+ }
+ *p &= ~mask_to_clear;
+ nr -= bits_to_clear;
+ bits_to_clear = BITS_PER_LONG;
+ p++;
+ }
+
+ /* Full words */
+ if (bits_to_clear == BITS_PER_LONG) {
+ while (nr >= BITS_PER_LONG) {
+ if (*p) {
+ dirty = true;
+ *p = 0;
+ }
+ nr -= BITS_PER_LONG;
+ p++;
+ }
+ }
+
+ /* Last word */
+ if (nr) {
+ mask_to_clear &= BITMAP_LAST_WORD_MASK(size);
+ if ((*p) & mask_to_clear) {
+ dirty = true;
+ }
+ *p &= ~mask_to_clear;
+ }
+
+ return dirty;
+}
+
bool bitmap_test_and_clear_atomic(unsigned long *map, long start, long nr)
{
unsigned long *p = map + BIT_WORD(start);
--
2.38.1
* [PULL 14/30] migration: Disable multifd explicitly with compression
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (12 preceding siblings ...)
2022-11-15 15:34 ` [PULL 13/30] migration: Use non-atomic ops for clear log bitmap Juan Quintela
@ 2022-11-15 15:34 ` Juan Quintela
2022-11-15 15:34 ` [PULL 15/30] migration: Take bitmap mutex when completing ram migration Juan Quintela
` (18 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:34 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
From: Peter Xu <peterx@redhat.com>
The multifd thread model does not work for compression, so explicitly
disable it.

Note that previously, even though both could be enabled, nothing would go
wrong, because the compression code has higher priority so the multifd
feature would just be ignored. Now we'll fail even earlier, at config
time, so the user is better aware of the consequence.

Note that there can be a slight chance of breaking existing users, but
let's assume they're not the majority and not serious users, or they
should have found already that multifd is not working.
With that, we can safely drop the check in ram_save_target_page() for using
multifd, because when multifd=on then compression=off, then the removed
check on save_page_use_compression() will also always return false too.
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/migration.c | 7 +++++++
migration/ram.c | 11 +++++------
2 files changed, 12 insertions(+), 6 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 0bc3fce4b7..9fbed8819a 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1370,6 +1370,13 @@ static bool migrate_caps_check(bool *cap_list,
}
}
+ if (cap_list[MIGRATION_CAPABILITY_MULTIFD]) {
+ if (cap_list[MIGRATION_CAPABILITY_COMPRESS]) {
+ error_setg(errp, "Multifd is not compatible with compress");
+ return false;
+ }
+ }
+
return true;
}
diff --git a/migration/ram.c b/migration/ram.c
index c0f5d6d287..2fcce796d0 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2333,13 +2333,12 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss)
}
/*
- * Do not use multifd for:
- * 1. Compression as the first page in the new block should be posted out
- * before sending the compressed page
- * 2. In postcopy as one whole host page should be placed
+ * Do not use multifd in postcopy as one whole host page should be
+ * placed. Meanwhile postcopy requires atomic update of pages, so even
+ * if host page size == guest page size the dest guest during run may
+ * still see partially copied pages which is data corruption.
*/
- if (!save_page_use_compression(rs) && migrate_use_multifd()
- && !migration_in_postcopy()) {
+ if (migrate_use_multifd() && !migration_in_postcopy()) {
return ram_save_multifd_page(rs, block, offset);
}
--
2.38.1
* [PULL 15/30] migration: Take bitmap mutex when completing ram migration
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (13 preceding siblings ...)
2022-11-15 15:34 ` [PULL 14/30] migration: Disable multifd explicitly with compression Juan Quintela
@ 2022-11-15 15:34 ` Juan Quintela
2022-11-15 15:35 ` [PULL 16/30] migration: Add postcopy_preempt_active() Juan Quintela
` (17 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:34 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
From: Peter Xu <peterx@redhat.com>
Any call to ram_find_and_save_block() needs to take the bitmap mutex. We
used to not take it for most of ram_save_complete() because we thought we
were the only ones left using the bitmap, but that's no longer true after
the preempt full patchset was applied, since the return path can take it
too.
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/migration/ram.c b/migration/ram.c
index 2fcce796d0..96fa521813 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3434,6 +3434,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
/* try transferring iterative blocks of memory */
/* flush all remaining blocks regardless of rate limiting */
+ qemu_mutex_lock(&rs->bitmap_mutex);
while (true) {
int pages;
@@ -3447,6 +3448,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
break;
}
}
+ qemu_mutex_unlock(&rs->bitmap_mutex);
flush_compressed_data(rs);
ram_control_after_iterate(f, RAM_CONTROL_FINISH);
--
2.38.1
* [PULL 16/30] migration: Add postcopy_preempt_active()
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (14 preceding siblings ...)
2022-11-15 15:34 ` [PULL 15/30] migration: Take bitmap mutex when completing ram migration Juan Quintela
@ 2022-11-15 15:35 ` Juan Quintela
2022-11-15 15:35 ` [PULL 17/30] migration: Cleanup xbzrle zero page cache update logic Juan Quintela
` (16 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:35 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
From: Peter Xu <peterx@redhat.com>
Add a helper to check that postcopy preempt is both enabled and active.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 96fa521813..52c851eb56 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -190,6 +190,11 @@ out:
return ret;
}
+static bool postcopy_preempt_active(void)
+{
+ return migrate_postcopy_preempt() && migration_in_postcopy();
+}
+
bool ramblock_is_ignored(RAMBlock *block)
{
return !qemu_ram_is_migratable(block) ||
@@ -2461,7 +2466,7 @@ static void postcopy_preempt_choose_channel(RAMState *rs, PageSearchStatus *pss)
/* We need to make sure rs->f always points to the default channel elsewhere */
static void postcopy_preempt_reset_channel(RAMState *rs)
{
- if (migrate_postcopy_preempt() && migration_in_postcopy()) {
+ if (postcopy_preempt_active()) {
rs->postcopy_channel = RAM_CHANNEL_PRECOPY;
rs->f = migrate_get_current()->to_dst_file;
trace_postcopy_preempt_reset_channel();
@@ -2499,7 +2504,7 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
return 0;
}
- if (migrate_postcopy_preempt() && migration_in_postcopy()) {
+ if (postcopy_preempt_active()) {
postcopy_preempt_choose_channel(rs, pss);
}
--
2.38.1
* [PULL 17/30] migration: Cleanup xbzrle zero page cache update logic
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (15 preceding siblings ...)
2022-11-15 15:35 ` [PULL 16/30] migration: Add postcopy_preempt_active() Juan Quintela
@ 2022-11-15 15:35 ` Juan Quintela
2022-11-15 15:35 ` [PULL 18/30] migration: Trivial cleanup save_page_header() on same block check Juan Quintela
` (15 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:35 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
From: Peter Xu <peterx@redhat.com>
The major change is to replace "!save_page_use_compression()" with
"xbzrle_enabled" to make it clear.
Reasoning:

(1) When compression is enabled, "!save_page_use_compression()" is exactly
the same as checking "xbzrle_enabled".

(2) When compression is disabled, "!save_page_use_compression()" always
returns true. We used to try calling the xbzrle code, but after this
change we won't, and we shouldn't need to.

While at it, drop the xbzrle_enabled check in xbzrle_cache_zero_page()
because with this change it's not needed anymore.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 52c851eb56..9ded381e0a 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -769,10 +769,6 @@ void mig_throttle_counter_reset(void)
*/
static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
{
- if (!rs->xbzrle_enabled) {
- return;
- }
-
/* We don't care if this fails to allocate a new cache page
* as long as it updated an old one */
cache_insert(XBZRLE.cache, current_addr, XBZRLE.zero_target_page,
@@ -2329,7 +2325,7 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss)
/* Must let xbzrle know, otherwise a previous (now 0'd) cached
* page would be stale
*/
- if (!save_page_use_compression(rs)) {
+ if (rs->xbzrle_enabled) {
XBZRLE_cache_lock();
xbzrle_cache_zero_page(rs, block->offset + offset);
XBZRLE_cache_unlock();
--
2.38.1
* [PULL 18/30] migration: Trivial cleanup save_page_header() on same block check
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (16 preceding siblings ...)
2022-11-15 15:35 ` [PULL 17/30] migration: Cleanup xbzrle zero page cache update logic Juan Quintela
@ 2022-11-15 15:35 ` Juan Quintela
2022-11-15 15:35 ` [PULL 19/30] migration: Remove RAMState.f references in compression code Juan Quintela
` (14 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:35 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
From: Peter Xu <peterx@redhat.com>
The 2nd check on RAM_SAVE_FLAG_CONTINUE is a bit redundant. Use a boolean
to be clearer.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 9ded381e0a..42b6a543bd 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -689,14 +689,15 @@ static size_t save_page_header(RAMState *rs, QEMUFile *f, RAMBlock *block,
ram_addr_t offset)
{
size_t size, len;
+ bool same_block = (block == rs->last_sent_block);
- if (block == rs->last_sent_block) {
+ if (same_block) {
offset |= RAM_SAVE_FLAG_CONTINUE;
}
qemu_put_be64(f, offset);
size = 8;
- if (!(offset & RAM_SAVE_FLAG_CONTINUE)) {
+ if (!same_block) {
len = strlen(block->idstr);
qemu_put_byte(f, len);
qemu_put_buffer(f, (uint8_t *)block->idstr, len);
--
2.38.1
* [PULL 19/30] migration: Remove RAMState.f references in compression code
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (17 preceding siblings ...)
2022-11-15 15:35 ` [PULL 18/30] migration: Trivial cleanup save_page_header() on same block check Juan Quintela
@ 2022-11-15 15:35 ` Juan Quintela
2022-11-15 15:35 ` [PULL 20/30] migration: Yield bitmap_mutex properly when sending/sleeping Juan Quintela
` (13 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:35 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
From: Peter Xu <peterx@redhat.com>
Remove references to RAMState.f in compress_page_with_multi_thread() and
flush_compressed_data().
Compression code by default isn't compatible with having more than one
channel (it currently wouldn't know which channel to flush the compressed
data to), so to keep it simple we always flush on the default to_dst_file
port until someone wants to add multi-channel support, as rs->f right now
can really change (after postcopy preempt is introduced).

There should be no functional change at all after this patch is applied,
since as long as rs->f is referenced in the compression code, it must be
to_dst_file.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 42b6a543bd..ebc5664dcc 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1489,6 +1489,7 @@ static bool save_page_use_compression(RAMState *rs);
static void flush_compressed_data(RAMState *rs)
{
+ MigrationState *ms = migrate_get_current();
int idx, len, thread_count;
if (!save_page_use_compression(rs)) {
@@ -1507,7 +1508,7 @@ static void flush_compressed_data(RAMState *rs)
for (idx = 0; idx < thread_count; idx++) {
qemu_mutex_lock(&comp_param[idx].mutex);
if (!comp_param[idx].quit) {
- len = qemu_put_qemu_file(rs->f, comp_param[idx].file);
+ len = qemu_put_qemu_file(ms->to_dst_file, comp_param[idx].file);
/*
* it's safe to fetch zero_page without holding comp_done_lock
* as there is no further request submitted to the thread,
@@ -1526,11 +1527,11 @@ static inline void set_compress_params(CompressParam *param, RAMBlock *block,
param->offset = offset;
}
-static int compress_page_with_multi_thread(RAMState *rs, RAMBlock *block,
- ram_addr_t offset)
+static int compress_page_with_multi_thread(RAMBlock *block, ram_addr_t offset)
{
int idx, thread_count, bytes_xmit = -1, pages = -1;
bool wait = migrate_compress_wait_thread();
+ MigrationState *ms = migrate_get_current();
thread_count = migrate_compress_threads();
qemu_mutex_lock(&comp_done_lock);
@@ -1538,7 +1539,8 @@ retry:
for (idx = 0; idx < thread_count; idx++) {
if (comp_param[idx].done) {
comp_param[idx].done = false;
- bytes_xmit = qemu_put_qemu_file(rs->f, comp_param[idx].file);
+ bytes_xmit = qemu_put_qemu_file(ms->to_dst_file,
+ comp_param[idx].file);
qemu_mutex_lock(&comp_param[idx].mutex);
set_compress_params(&comp_param[idx], block, offset);
qemu_cond_signal(&comp_param[idx].cond);
@@ -2291,7 +2293,7 @@ static bool save_compress_page(RAMState *rs, RAMBlock *block, ram_addr_t offset)
return false;
}
- if (compress_page_with_multi_thread(rs, block, offset) > 0) {
+ if (compress_page_with_multi_thread(block, offset) > 0) {
return true;
}
--
2.38.1
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PULL 20/30] migration: Yield bitmap_mutex properly when sending/sleeping
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (18 preceding siblings ...)
2022-11-15 15:35 ` [PULL 19/30] migration: Remove RAMState.f references in compression code Juan Quintela
@ 2022-11-15 15:35 ` Juan Quintela
2022-11-15 15:35 ` [PULL 21/30] migration: Use atomic ops properly for page accountings Juan Quintela
` (12 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:35 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
From: Peter Xu <peterx@redhat.com>
Don't take the bitmap mutex when sending pages, or when being throttled by
migration_rate_limit() (which is a bit tricky to call here in the ram code,
but it still seems helpful).
This prepares for the possibility of sending pages concurrently from more
than one thread using ram_save_host_page(): all threads may need
bitmap_mutex to operate on the bitmaps, so a sendmsg() or any kind of
qemu_sem_wait() blocking one thread will not block the others from
progressing.
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.c | 46 +++++++++++++++++++++++++++++++++++-----------
1 file changed, 35 insertions(+), 11 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index ebc5664dcc..6428138194 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2480,9 +2480,14 @@ static void postcopy_preempt_reset_channel(RAMState *rs)
* a host page in which case the remainder of the hostpage is sent.
* Only dirty target pages are sent. Note that the host page size may
* be a huge page for this block.
+ *
* The saving stops at the boundary of the used_length of the block
* if the RAMBlock isn't a multiple of the host page size.
*
+ * The caller must be with ram_state.bitmap_mutex held to call this
+ * function. Note that this function can temporarily release the lock, but
+ * when the function is returned it'll make sure the lock is still held.
+ *
* Returns the number of pages written or negative on error
*
* @rs: current RAM state
@@ -2490,6 +2495,7 @@ static void postcopy_preempt_reset_channel(RAMState *rs)
*/
static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
{
+ bool page_dirty, preempt_active = postcopy_preempt_active();
int tmppages, pages = 0;
size_t pagesize_bits =
qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
@@ -2513,22 +2519,40 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
break;
}
+ page_dirty = migration_bitmap_clear_dirty(rs, pss->block, pss->page);
+
/* Check the pages is dirty and if it is send it */
- if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
+ if (page_dirty) {
+ /*
+ * Properly yield the lock only in postcopy preempt mode
+ * because both migration thread and rp-return thread can
+ * operate on the bitmaps.
+ */
+ if (preempt_active) {
+ qemu_mutex_unlock(&rs->bitmap_mutex);
+ }
tmppages = ram_save_target_page(rs, pss);
- if (tmppages < 0) {
- return tmppages;
+ if (tmppages >= 0) {
+ pages += tmppages;
+ /*
+ * Allow rate limiting to happen in the middle of huge pages if
+ * something is sent in the current iteration.
+ */
+ if (pagesize_bits > 1 && tmppages > 0) {
+ migration_rate_limit();
+ }
}
-
- pages += tmppages;
- /*
- * Allow rate limiting to happen in the middle of huge pages if
- * something is sent in the current iteration.
- */
- if (pagesize_bits > 1 && tmppages > 0) {
- migration_rate_limit();
+ if (preempt_active) {
+ qemu_mutex_lock(&rs->bitmap_mutex);
}
+ } else {
+ tmppages = 0;
+ }
+
+ if (tmppages < 0) {
+ return tmppages;
}
+
pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
} while ((pss->page < hostpage_boundary) &&
offset_in_ramblock(pss->block,
--
2.38.1
* [PULL 21/30] migration: Use atomic ops properly for page accountings
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (19 preceding siblings ...)
2022-11-15 15:35 ` [PULL 20/30] migration: Yield bitmap_mutex properly when sending/sleeping Juan Quintela
@ 2022-11-15 15:35 ` Juan Quintela
2022-11-15 15:35 ` [PULL 22/30] migration: Teach PSS about host page Juan Quintela
` (11 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:35 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
From: Peter Xu <peterx@redhat.com>
To prepare for thread-safe page accounting, at least the counters below
need to be accessed atomically:
ram_counters.transferred
ram_counters.duplicate
ram_counters.normal
ram_counters.postcopy_bytes
There are many other counters, but they are not accessed outside the
migration thread, so they remain safe to access without atomic ops.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.h | 20 ++++++++++++++++++++
migration/migration.c | 10 +++++-----
migration/multifd.c | 4 ++--
migration/ram.c | 40 ++++++++++++++++++++++++----------------
4 files changed, 51 insertions(+), 23 deletions(-)
diff --git a/migration/ram.h b/migration/ram.h
index 038d52f49f..81cbb0947c 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -32,7 +32,27 @@
#include "qapi/qapi-types-migration.h"
#include "exec/cpu-common.h"
#include "io/channel.h"
+#include "qemu/stats64.h"
+/*
+ * These are the migration statistic counters that need to be updated using
+ * atomic ops (can be accessed by more than one thread). Here since we
+ * cannot modify MigrationStats directly to use Stat64 as it was defined in
+ * the QAPI scheme, we define an internal structure to hold them, and we
+ * propagate the real values when QMP queries happen.
+ *
+ * IOW, the corresponding fields within ram_counters on these specific
+ * fields will be always zero and not being used at all; they're just
+ * placeholders to make it QAPI-compatible.
+ */
+typedef struct {
+ Stat64 transferred;
+ Stat64 duplicate;
+ Stat64 normal;
+ Stat64 postcopy_bytes;
+} MigrationAtomicStats;
+
+extern MigrationAtomicStats ram_atomic_counters;
extern MigrationStats ram_counters;
extern XBZRLECacheStats xbzrle_counters;
extern CompressionStats compression_counters;
diff --git a/migration/migration.c b/migration/migration.c
index 9fbed8819a..1f95877fb4 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1069,13 +1069,13 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
info->has_ram = true;
info->ram = g_malloc0(sizeof(*info->ram));
- info->ram->transferred = ram_counters.transferred;
+ info->ram->transferred = stat64_get(&ram_atomic_counters.transferred);
info->ram->total = ram_bytes_total();
- info->ram->duplicate = ram_counters.duplicate;
+ info->ram->duplicate = stat64_get(&ram_atomic_counters.duplicate);
/* legacy value. It is not used anymore */
info->ram->skipped = 0;
- info->ram->normal = ram_counters.normal;
- info->ram->normal_bytes = ram_counters.normal * page_size;
+ info->ram->normal = stat64_get(&ram_atomic_counters.normal);
+ info->ram->normal_bytes = info->ram->normal * page_size;
info->ram->mbps = s->mbps;
info->ram->dirty_sync_count = ram_counters.dirty_sync_count;
info->ram->dirty_sync_missed_zero_copy =
@@ -1086,7 +1086,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
info->ram->pages_per_second = s->pages_per_second;
info->ram->precopy_bytes = ram_counters.precopy_bytes;
info->ram->downtime_bytes = ram_counters.downtime_bytes;
- info->ram->postcopy_bytes = ram_counters.postcopy_bytes;
+ info->ram->postcopy_bytes = stat64_get(&ram_atomic_counters.postcopy_bytes);
if (migrate_use_xbzrle()) {
info->has_xbzrle_cache = true;
diff --git a/migration/multifd.c b/migration/multifd.c
index c40d98ad5c..7d3aec9a52 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -432,7 +432,7 @@ static int multifd_send_pages(QEMUFile *f)
transferred = ((uint64_t) pages->num) * p->page_size + p->packet_len;
qemu_file_acct_rate_limit(f, transferred);
ram_counters.multifd_bytes += transferred;
- ram_counters.transferred += transferred;
+ stat64_add(&ram_atomic_counters.transferred, transferred);
qemu_mutex_unlock(&p->mutex);
qemu_sem_post(&p->sem);
@@ -624,7 +624,7 @@ int multifd_send_sync_main(QEMUFile *f)
p->pending_job++;
qemu_file_acct_rate_limit(f, p->packet_len);
ram_counters.multifd_bytes += p->packet_len;
- ram_counters.transferred += p->packet_len;
+ stat64_add(&ram_atomic_counters.transferred, p->packet_len);
qemu_mutex_unlock(&p->mutex);
qemu_sem_post(&p->sem);
diff --git a/migration/ram.c b/migration/ram.c
index 6428138194..25fd3cf7dc 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -453,18 +453,25 @@ uint64_t ram_bytes_remaining(void)
0;
}
+/*
+ * NOTE: not all stats in ram_counters are used in reality. See comments
+ * for struct MigrationAtomicStats. The ultimate result of ram migration
+ * counters will be a merged version with both ram_counters and the atomic
+ * fields in ram_atomic_counters.
+ */
MigrationStats ram_counters;
+MigrationAtomicStats ram_atomic_counters;
void ram_transferred_add(uint64_t bytes)
{
if (runstate_is_running()) {
ram_counters.precopy_bytes += bytes;
} else if (migration_in_postcopy()) {
- ram_counters.postcopy_bytes += bytes;
+ stat64_add(&ram_atomic_counters.postcopy_bytes, bytes);
} else {
ram_counters.downtime_bytes += bytes;
}
- ram_counters.transferred += bytes;
+ stat64_add(&ram_atomic_counters.transferred, bytes);
}
void dirty_sync_missed_zero_copy(void)
@@ -753,7 +760,7 @@ void mig_throttle_counter_reset(void)
rs->time_last_bitmap_sync = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
rs->num_dirty_pages_period = 0;
- rs->bytes_xfer_prev = ram_counters.transferred;
+ rs->bytes_xfer_prev = stat64_get(&ram_atomic_counters.transferred);
}
/**
@@ -1113,8 +1120,9 @@ uint64_t ram_pagesize_summary(void)
uint64_t ram_get_total_transferred_pages(void)
{
- return ram_counters.normal + ram_counters.duplicate +
- compression_counters.pages + xbzrle_counters.pages;
+ return stat64_get(&ram_atomic_counters.normal) +
+ stat64_get(&ram_atomic_counters.duplicate) +
+ compression_counters.pages + xbzrle_counters.pages;
}
static void migration_update_rates(RAMState *rs, int64_t end_time)
@@ -1173,8 +1181,8 @@ static void migration_trigger_throttle(RAMState *rs)
{
MigrationState *s = migrate_get_current();
uint64_t threshold = s->parameters.throttle_trigger_threshold;
-
- uint64_t bytes_xfer_period = ram_counters.transferred - rs->bytes_xfer_prev;
+ uint64_t bytes_xfer_period =
+ stat64_get(&ram_atomic_counters.transferred) - rs->bytes_xfer_prev;
uint64_t bytes_dirty_period = rs->num_dirty_pages_period * TARGET_PAGE_SIZE;
uint64_t bytes_dirty_threshold = bytes_xfer_period * threshold / 100;
@@ -1237,7 +1245,7 @@ static void migration_bitmap_sync(RAMState *rs)
/* reset period counters */
rs->time_last_bitmap_sync = end_time;
rs->num_dirty_pages_period = 0;
- rs->bytes_xfer_prev = ram_counters.transferred;
+ rs->bytes_xfer_prev = stat64_get(&ram_atomic_counters.transferred);
}
if (migrate_use_events()) {
qapi_event_send_migration_pass(ram_counters.dirty_sync_count);
@@ -1313,7 +1321,7 @@ static int save_zero_page(RAMState *rs, RAMBlock *block, ram_addr_t offset)
int len = save_zero_page_to_file(rs, rs->f, block, offset);
if (len) {
- ram_counters.duplicate++;
+ stat64_add(&ram_atomic_counters.duplicate, 1);
ram_transferred_add(len);
return 1;
}
@@ -1350,9 +1358,9 @@ static bool control_save_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
}
if (bytes_xmit > 0) {
- ram_counters.normal++;
+ stat64_add(&ram_atomic_counters.normal, 1);
} else if (bytes_xmit == 0) {
- ram_counters.duplicate++;
+ stat64_add(&ram_atomic_counters.duplicate, 1);
}
return true;
@@ -1382,7 +1390,7 @@ static int save_normal_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
qemu_put_buffer(rs->f, buf, TARGET_PAGE_SIZE);
}
ram_transferred_add(TARGET_PAGE_SIZE);
- ram_counters.normal++;
+ stat64_add(&ram_atomic_counters.normal, 1);
return 1;
}
@@ -1438,7 +1446,7 @@ static int ram_save_multifd_page(RAMState *rs, RAMBlock *block,
if (multifd_queue_page(rs->f, block, offset) < 0) {
return -1;
}
- ram_counters.normal++;
+ stat64_add(&ram_atomic_counters.normal, 1);
return 1;
}
@@ -1476,7 +1484,7 @@ update_compress_thread_counts(const CompressParam *param, int bytes_xmit)
ram_transferred_add(bytes_xmit);
if (param->zero_page) {
- ram_counters.duplicate++;
+ stat64_add(&ram_atomic_counters.duplicate, 1);
return;
}
@@ -2651,9 +2659,9 @@ void acct_update_position(QEMUFile *f, size_t size, bool zero)
uint64_t pages = size / TARGET_PAGE_SIZE;
if (zero) {
- ram_counters.duplicate += pages;
+ stat64_add(&ram_atomic_counters.duplicate, pages);
} else {
- ram_counters.normal += pages;
+ stat64_add(&ram_atomic_counters.normal, pages);
ram_transferred_add(size);
qemu_file_credit_transfer(f, size);
}
--
2.38.1
* [PULL 22/30] migration: Teach PSS about host page
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (20 preceding siblings ...)
2022-11-15 15:35 ` [PULL 21/30] migration: Use atomic ops properly for page accountings Juan Quintela
@ 2022-11-15 15:35 ` Juan Quintela
2022-11-15 15:35 ` [PULL 23/30] migration: Introduce pss_channel Juan Quintela
` (10 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:35 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
From: Peter Xu <peterx@redhat.com>
Migration code has a lot to do with host pages. Teaching the PSS core
about the idea of a host page helps a lot and makes the code cleaner.
Meanwhile, this prepares for future changes that can leverage the new PSS
helpers introduced here to send a host page in another thread.
Three more fields are introduced for this:
(1) host_page_sending: set to true while QEMU is sending a host
page, false otherwise.
(2) host_page_{start|end}: these point to the start/end of the host page
we're sending, and are only valid when host_page_sending==true.
For example, when we look up the next dirty page on the ramblock with
host_page_sending==true, we won't look for anything beyond the
current host page boundary. This can be slightly more efficient than the
current code, which sets pss->page to the next dirty bit (possibly past
the current host page boundary) and then resets it to the host page
boundary if it went beyond.
With the above, we can easily make migration_bitmap_find_dirty()
self-contained by updating pss->page properly. The rs* parameter is
removed because it wasn't even used in the old code.
When sending a host page, use the pss helpers like this:
- pss_host_page_prepare(pss): called before sending a host page
- pss_within_range(pss): are we still within the current host page?
- pss_host_page_finish(pss): called after sending a host page
Then we can use ram_save_target_page() to save one small page.
Currently ram_save_host_page() is still the only user. If another
function to send host pages (e.g. in the return path thread) appears in
the future, it should follow the same style.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.c | 95 +++++++++++++++++++++++++++++++++++++++----------
1 file changed, 76 insertions(+), 19 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 25fd3cf7dc..b71edf1f26 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -509,6 +509,11 @@ struct PageSearchStatus {
* postcopy pages via postcopy preempt channel.
*/
bool postcopy_target_channel;
+ /* Whether we're sending a host page */
+ bool host_page_sending;
+ /* The start/end of current host page. Only valid if host_page_sending==true */
+ unsigned long host_page_start;
+ unsigned long host_page_end;
};
typedef struct PageSearchStatus PageSearchStatus;
@@ -886,26 +891,38 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
}
/**
- * migration_bitmap_find_dirty: find the next dirty page from start
+ * pss_find_next_dirty: find the next dirty page of current ramblock
*
- * Returns the page offset within memory region of the start of a dirty page
+ * This function updates pss->page to point to the next dirty page index
+ * within the ramblock to migrate, or the end of ramblock when nothing
+ * found. Note that when pss->host_page_sending==true it means we're
+ * during sending a host page, so we won't look for dirty page that is
+ * outside the host page boundary.
*
- * @rs: current RAM state
- * @rb: RAMBlock where to search for dirty pages
- * @start: page where we start the search
+ * @pss: the current page search status
*/
-static inline
-unsigned long migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
- unsigned long start)
+static void pss_find_next_dirty(PageSearchStatus *pss)
{
+ RAMBlock *rb = pss->block;
unsigned long size = rb->used_length >> TARGET_PAGE_BITS;
unsigned long *bitmap = rb->bmap;
if (ramblock_is_ignored(rb)) {
- return size;
+ /* Points directly to the end, so we know no dirty page */
+ pss->page = size;
+ return;
}
- return find_next_bit(bitmap, size, start);
+ /*
+ * If during sending a host page, only look for dirty pages within the
+ * current host page being send.
+ */
+ if (pss->host_page_sending) {
+ assert(pss->host_page_end);
+ size = MIN(size, pss->host_page_end);
+ }
+
+ pss->page = find_next_bit(bitmap, size, pss->page);
}
static void migration_clear_memory_region_dirty_bitmap(RAMBlock *rb,
@@ -1591,7 +1608,9 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
pss->postcopy_requested = false;
pss->postcopy_target_channel = RAM_CHANNEL_PRECOPY;
- pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
+ /* Update pss->page for the next dirty bit in ramblock */
+ pss_find_next_dirty(pss);
+
if (pss->complete_round && pss->block == rs->last_seen_block &&
pss->page >= rs->last_page) {
/*
@@ -2480,6 +2499,44 @@ static void postcopy_preempt_reset_channel(RAMState *rs)
}
}
+/* Should be called before sending a host page */
+static void pss_host_page_prepare(PageSearchStatus *pss)
+{
+ /* How many guest pages are there in one host page? */
+ size_t guest_pfns = qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
+
+ pss->host_page_sending = true;
+ pss->host_page_start = ROUND_DOWN(pss->page, guest_pfns);
+ pss->host_page_end = ROUND_UP(pss->page + 1, guest_pfns);
+}
+
+/*
+ * Whether the page pointed by PSS is within the host page being sent.
+ * Must be called after a previous pss_host_page_prepare().
+ */
+static bool pss_within_range(PageSearchStatus *pss)
+{
+ ram_addr_t ram_addr;
+
+ assert(pss->host_page_sending);
+
+ /* Over host-page boundary? */
+ if (pss->page >= pss->host_page_end) {
+ return false;
+ }
+
+ ram_addr = ((ram_addr_t)pss->page) << TARGET_PAGE_BITS;
+
+ return offset_in_ramblock(pss->block, ram_addr);
+}
+
+static void pss_host_page_finish(PageSearchStatus *pss)
+{
+ pss->host_page_sending = false;
+ /* This is not needed, but just to reset it */
+ pss->host_page_start = pss->host_page_end = 0;
+}
+
/**
* ram_save_host_page: save a whole host page
*
@@ -2507,8 +2564,6 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
int tmppages, pages = 0;
size_t pagesize_bits =
qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
- unsigned long hostpage_boundary =
- QEMU_ALIGN_UP(pss->page + 1, pagesize_bits);
unsigned long start_page = pss->page;
int res;
@@ -2521,6 +2576,9 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
postcopy_preempt_choose_channel(rs, pss);
}
+ /* Update host page boundary information */
+ pss_host_page_prepare(pss);
+
do {
if (postcopy_needs_preempt(rs, pss)) {
postcopy_do_preempt(rs, pss);
@@ -2558,15 +2616,14 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
}
if (tmppages < 0) {
+ pss_host_page_finish(pss);
return tmppages;
}
- pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
- } while ((pss->page < hostpage_boundary) &&
- offset_in_ramblock(pss->block,
- ((ram_addr_t)pss->page) << TARGET_PAGE_BITS));
- /* The offset we leave with is the min boundary of host page and block */
- pss->page = MIN(pss->page, hostpage_boundary);
+ pss_find_next_dirty(pss);
+ } while (pss_within_range(pss));
+
+ pss_host_page_finish(pss);
/*
* When with postcopy preempt mode, flush the data as soon as possible for
--
2.38.1
* [PULL 23/30] migration: Introduce pss_channel
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (21 preceding siblings ...)
2022-11-15 15:35 ` [PULL 22/30] migration: Teach PSS about host page Juan Quintela
@ 2022-11-15 15:35 ` Juan Quintela
2022-11-15 15:35 ` [PULL 24/30] migration: Add pss_init() Juan Quintela
` (9 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:35 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
From: Peter Xu <peterx@redhat.com>
Introduce pss_channel for PageSearchStatus, defined as "the migration
channel to be used to transfer this host page".
We used to have rs->f, which is a mirror of MigrationState.to_dst_file.
Since the initial postcopy preempt version, rs->f can change dynamically
depending on which channel we want to use.
But that later work still doesn't grant full concurrency of sending pages
in e.g. different threads, because rs->f can only be either the PRECOPY
channel or the POSTCOPY channel. This needs to be per-thread too.
PageSearchStatus is actually a good structure to leverage if we want to
have multiple threads sending pages. Sending a single guest page may not
make sense, so we make the granule the "host page", and the PSS structure
allows specifying a QEMUFile* to migrate a specific host page. That opens
the possibility of using different channels in different threads with
different PSS structures.
The PSS prefix can be slightly misleading here because e.g. for the
upcoming usage of the postcopy channel/thread it's not "searching" (or
scanning) at all but sending the explicit page that was requested. However,
since PSS has existed for some years, keep it as-is until someone complains.
This patch mostly just replaces rs->f with pss->pss_channel. No
functional change is intended for this patch yet. But it does prepare to
finally drop rs->f, and make ram_save_guest_page() thread safe.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.c | 70 +++++++++++++++++++++++++++----------------------
1 file changed, 38 insertions(+), 32 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index b71edf1f26..fedd61b3da 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -481,6 +481,8 @@ void dirty_sync_missed_zero_copy(void)
/* used by the search for pages to send */
struct PageSearchStatus {
+ /* The migration channel used for a specific host page */
+ QEMUFile *pss_channel;
/* Current block being searched */
RAMBlock *block;
/* Current page to search from */
@@ -803,9 +805,9 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
* @block: block that contains the page we want to send
* @offset: offset inside the block for the page
*/
-static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
- ram_addr_t current_addr, RAMBlock *block,
- ram_addr_t offset)
+static int save_xbzrle_page(RAMState *rs, QEMUFile *file,
+ uint8_t **current_data, ram_addr_t current_addr,
+ RAMBlock *block, ram_addr_t offset)
{
int encoded_len = 0, bytes_xbzrle;
uint8_t *prev_cached_page;
@@ -873,11 +875,11 @@ static int save_xbzrle_page(RAMState *rs, uint8_t **current_data,
}
/* Send XBZRLE based compressed page */
- bytes_xbzrle = save_page_header(rs, rs->f, block,
+ bytes_xbzrle = save_page_header(rs, file, block,
offset | RAM_SAVE_FLAG_XBZRLE);
- qemu_put_byte(rs->f, ENCODING_FLAG_XBZRLE);
- qemu_put_be16(rs->f, encoded_len);
- qemu_put_buffer(rs->f, XBZRLE.encoded_buf, encoded_len);
+ qemu_put_byte(file, ENCODING_FLAG_XBZRLE);
+ qemu_put_be16(file, encoded_len);
+ qemu_put_buffer(file, XBZRLE.encoded_buf, encoded_len);
bytes_xbzrle += encoded_len + 1 + 2;
/*
* Like compressed_size (please see update_compress_thread_counts),
@@ -1333,9 +1335,10 @@ static int save_zero_page_to_file(RAMState *rs, QEMUFile *file,
* @block: block that contains the page we want to send
* @offset: offset inside the block for the page
*/
-static int save_zero_page(RAMState *rs, RAMBlock *block, ram_addr_t offset)
+static int save_zero_page(RAMState *rs, QEMUFile *file, RAMBlock *block,
+ ram_addr_t offset)
{
- int len = save_zero_page_to_file(rs, rs->f, block, offset);
+ int len = save_zero_page_to_file(rs, file, block, offset);
if (len) {
stat64_add(&ram_atomic_counters.duplicate, 1);
@@ -1352,15 +1355,15 @@ static int save_zero_page(RAMState *rs, RAMBlock *block, ram_addr_t offset)
*
* Return true if the pages has been saved, otherwise false is returned.
*/
-static bool control_save_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
- int *pages)
+static bool control_save_page(PageSearchStatus *pss, RAMBlock *block,
+ ram_addr_t offset, int *pages)
{
uint64_t bytes_xmit = 0;
int ret;
*pages = -1;
- ret = ram_control_save_page(rs->f, block->offset, offset, TARGET_PAGE_SIZE,
- &bytes_xmit);
+ ret = ram_control_save_page(pss->pss_channel, block->offset, offset,
+ TARGET_PAGE_SIZE, &bytes_xmit);
if (ret == RAM_SAVE_CONTROL_NOT_SUPP) {
return false;
}
@@ -1394,17 +1397,17 @@ static bool control_save_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
* @buf: the page to be sent
* @async: send to page asyncly
*/
-static int save_normal_page(RAMState *rs, RAMBlock *block, ram_addr_t offset,
- uint8_t *buf, bool async)
+static int save_normal_page(RAMState *rs, QEMUFile *file, RAMBlock *block,
+ ram_addr_t offset, uint8_t *buf, bool async)
{
- ram_transferred_add(save_page_header(rs, rs->f, block,
+ ram_transferred_add(save_page_header(rs, file, block,
offset | RAM_SAVE_FLAG_PAGE));
if (async) {
- qemu_put_buffer_async(rs->f, buf, TARGET_PAGE_SIZE,
+ qemu_put_buffer_async(file, buf, TARGET_PAGE_SIZE,
migrate_release_ram() &&
migration_in_postcopy());
} else {
- qemu_put_buffer(rs->f, buf, TARGET_PAGE_SIZE);
+ qemu_put_buffer(file, buf, TARGET_PAGE_SIZE);
}
ram_transferred_add(TARGET_PAGE_SIZE);
stat64_add(&ram_atomic_counters.normal, 1);
@@ -1437,8 +1440,8 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss)
XBZRLE_cache_lock();
if (rs->xbzrle_enabled && !migration_in_postcopy()) {
- pages = save_xbzrle_page(rs, &p, current_addr, block,
- offset);
+ pages = save_xbzrle_page(rs, pss->pss_channel, &p, current_addr,
+ block, offset);
if (!rs->last_stage) {
/* Can't send this cached data async, since the cache page
* might get updated before it gets to the wire
@@ -1449,7 +1452,8 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss)
/* XBZRLE overflow or normal page */
if (pages == -1) {
- pages = save_normal_page(rs, block, offset, p, send_async);
+ pages = save_normal_page(rs, pss->pss_channel, block, offset,
+ p, send_async);
}
XBZRLE_cache_unlock();
@@ -1457,10 +1461,10 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss)
return pages;
}
-static int ram_save_multifd_page(RAMState *rs, RAMBlock *block,
+static int ram_save_multifd_page(QEMUFile *file, RAMBlock *block,
ram_addr_t offset)
{
- if (multifd_queue_page(rs->f, block, offset) < 0) {
+ if (multifd_queue_page(file, block, offset) < 0) {
return -1;
}
stat64_add(&ram_atomic_counters.normal, 1);
@@ -1755,7 +1759,7 @@ static int ram_save_release_protection(RAMState *rs, PageSearchStatus *pss,
uint64_t run_length = (pss->page - start_page) << TARGET_PAGE_BITS;
/* Flush async buffers before un-protect. */
- qemu_fflush(rs->f);
+ qemu_fflush(pss->pss_channel);
/* Un-protect memory range. */
res = uffd_change_protection(rs->uffdio_fd, page_address, run_length,
false, false);
@@ -2342,7 +2346,7 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss)
ram_addr_t offset = ((ram_addr_t)pss->page) << TARGET_PAGE_BITS;
int res;
- if (control_save_page(rs, block, offset, &res)) {
+ if (control_save_page(pss, block, offset, &res)) {
return res;
}
@@ -2350,7 +2354,7 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss)
return 1;
}
- res = save_zero_page(rs, block, offset);
+ res = save_zero_page(rs, pss->pss_channel, block, offset);
if (res > 0) {
/* Must let xbzrle know, otherwise a previous (now 0'd) cached
* page would be stale
@@ -2370,7 +2374,7 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss)
* still see partially copied pages which is data corruption.
*/
if (migrate_use_multifd() && !migration_in_postcopy()) {
- return ram_save_multifd_page(rs, block, offset);
+ return ram_save_multifd_page(pss->pss_channel, block, offset);
}
return ram_save_page(rs, pss);
@@ -2572,10 +2576,6 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
return 0;
}
- if (postcopy_preempt_active()) {
- postcopy_preempt_choose_channel(rs, pss);
- }
-
/* Update host page boundary information */
pss_host_page_prepare(pss);
@@ -2635,7 +2635,7 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
* explicit flush or it won't flush until the buffer is full.
*/
if (migrate_postcopy_preempt() && pss->postcopy_requested) {
- qemu_fflush(rs->f);
+ qemu_fflush(pss->pss_channel);
}
res = ram_save_release_protection(rs, pss, start_page);
@@ -2701,6 +2701,12 @@ static int ram_find_and_save_block(RAMState *rs)
}
if (found) {
+ /* Update rs->f with correct channel */
+ if (postcopy_preempt_active()) {
+ postcopy_preempt_choose_channel(rs, &pss);
+ }
+ /* Cache rs->f in pss_channel (TODO: remove rs->f) */
+ pss.pss_channel = rs->f;
pages = ram_save_host_page(rs, &pss);
}
} while (!pages && again);
--
2.38.1
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PULL 24/30] migration: Add pss_init()
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (22 preceding siblings ...)
2022-11-15 15:35 ` [PULL 23/30] migration: Introduce pss_channel Juan Quintela
@ 2022-11-15 15:35 ` Juan Quintela
2022-11-15 15:35 ` [PULL 25/30] migration: Make PageSearchStatus part of RAMState Juan Quintela
` (8 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:35 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
From: Peter Xu <peterx@redhat.com>
Helper to init PSS structures.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index fedd61b3da..a2e86623d3 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -570,6 +570,14 @@ static bool do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block,
static void postcopy_preempt_restore(RAMState *rs, PageSearchStatus *pss,
bool postcopy_requested);
+/* NOTE: page is the PFN not real ram_addr_t. */
+static void pss_init(PageSearchStatus *pss, RAMBlock *rb, ram_addr_t page)
+{
+ pss->block = rb;
+ pss->page = page;
+ pss->complete_round = false;
+}
+
static void *do_data_compress(void *opaque)
{
CompressParam *param = opaque;
@@ -2678,9 +2686,7 @@ static int ram_find_and_save_block(RAMState *rs)
rs->last_page = 0;
}
- pss.block = rs->last_seen_block;
- pss.page = rs->last_page;
- pss.complete_round = false;
+ pss_init(&pss, rs->last_seen_block, rs->last_page);
do {
again = true;
--
2.38.1
* [PULL 25/30] migration: Make PageSearchStatus part of RAMState
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (23 preceding siblings ...)
2022-11-15 15:35 ` [PULL 24/30] migration: Add pss_init() Juan Quintela
@ 2022-11-15 15:35 ` Juan Quintela
2022-11-15 15:35 ` [PULL 26/30] migration: Move last_sent_block into PageSearchStatus Juan Quintela
` (7 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:35 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
From: Peter Xu <peterx@redhat.com>
We used to allocate the PSS structure on the stack for precopy when sending
pages. Make it static, so that it describes per-channel RAM migration status.
Declare RAM_CHANNEL_MAX instances, preparing for postcopy to use them, even
though this patch does not yet start using the second instance.
This should not cause any functional change per se, but it already starts to
export PSS information via the RAMState, so that e.g. one PSS channel can
start to reference the other PSS channel.
Always protect PSS access using the same RAMState.bitmap_mutex. We already
do so, so no code change is needed, just some comment updates. Maybe we
should consider renaming bitmap_mutex some day, as it is becoming a bigger
and more widely used mutex for RAM state, but leave that for later.
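[Editor's sketch] The locking model described above can be illustrated with a
small standalone program. Field and constant names mirror the patch, but this
is an invented miniature for illustration, not the real migration code: all
per-channel PSS instances live inside RAMState, and any cross-channel peek
happens under the shared bitmap_mutex.

```c
#include <assert.h>
#include <pthread.h>

enum { RAM_CHANNEL_PRECOPY = 0, RAM_CHANNEL_POSTCOPY = 1, RAM_CHANNEL_MAX = 2 };

typedef struct {
    int host_page_sending;          /* whether a host page send is in flight */
    unsigned long host_page_start;  /* start of that host page */
} PageSearchStatus;

typedef struct {
    pthread_mutex_t bitmap_mutex;          /* protects bitmap and all pss[] */
    PageSearchStatus pss[RAM_CHANNEL_MAX]; /* one PSS per channel */
} RAMState;

/* Cross-channel check done under the shared mutex: e.g. postcopy asking
 * whether precopy is in the middle of sending the same host page. */
static int pss_overlap(RAMState *rs, int c1, int c2)
{
    pthread_mutex_lock(&rs->bitmap_mutex);
    PageSearchStatus *a = &rs->pss[c1], *b = &rs->pss[c2];
    int hit = a->host_page_sending && b->host_page_sending &&
              a->host_page_start == b->host_page_start;
    pthread_mutex_unlock(&rs->bitmap_mutex);
    return hit;
}
```

Keeping all channels' state behind one mutex is what later allows a page
requested on one channel to consult what the other channel is doing.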
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.c | 112 ++++++++++++++++++++++++++----------------------
1 file changed, 61 insertions(+), 51 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index a2e86623d3..bdb29ac4d9 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -113,6 +113,46 @@ static void __attribute__((constructor)) init_cpu_flag(void)
XBZRLECacheStats xbzrle_counters;
+/* used by the search for pages to send */
+struct PageSearchStatus {
+ /* The migration channel used for a specific host page */
+ QEMUFile *pss_channel;
+ /* Current block being searched */
+ RAMBlock *block;
+ /* Current page to search from */
+ unsigned long page;
+ /* Set once we wrap around */
+ bool complete_round;
+ /*
+ * [POSTCOPY-ONLY] Whether current page is explicitly requested by
+ * postcopy. When set, the request is "urgent" because the dest QEMU
+ * threads are waiting for us.
+ */
+ bool postcopy_requested;
+ /*
+ * [POSTCOPY-ONLY] The target channel to use to send current page.
+ *
+ * Note: This may _not_ match with the value in postcopy_requested
+ * above. Let's imagine the case where the postcopy request is exactly
+ * the page that we're sending in progress during precopy. In this case
+ * we'll have postcopy_requested set to true but the target channel
+ * will be the precopy channel (so that we don't split brain on that
+ * specific page since the precopy channel already contains partial of
+ * that page data).
+ *
+ * Besides that specific use case, postcopy_target_channel should
+ * always be equal to postcopy_requested, because by default we send
+ * postcopy pages via postcopy preempt channel.
+ */
+ bool postcopy_target_channel;
+ /* Whether we're sending a host page */
+ bool host_page_sending;
+ /* The start/end of current host page. Invalid if host_page_sending==false */
+ unsigned long host_page_start;
+ unsigned long host_page_end;
+};
+typedef struct PageSearchStatus PageSearchStatus;
+
/* struct contains XBZRLE cache and a static page
used by the compression */
static struct {
@@ -347,6 +387,11 @@ typedef struct {
struct RAMState {
/* QEMUFile used for this migration */
QEMUFile *f;
+ /*
+ * PageSearchStatus structures for the channels when send pages.
+ * Protected by the bitmap_mutex.
+ */
+ PageSearchStatus pss[RAM_CHANNEL_MAX];
/* UFFD file descriptor, used in 'write-tracking' migration */
int uffdio_fd;
/* Last block that we have visited searching for dirty pages */
@@ -390,7 +435,12 @@ struct RAMState {
uint64_t target_page_count;
/* number of dirty bits in the bitmap */
uint64_t migration_dirty_pages;
- /* Protects modification of the bitmap and migration dirty pages */
+ /*
+ * Protects:
+ * - dirty/clear bitmap
+ * - migration_dirty_pages
+ * - pss structures
+ */
QemuMutex bitmap_mutex;
/* The RAMBlock used in the last src_page_requests */
RAMBlock *last_req_rb;
@@ -479,46 +529,6 @@ void dirty_sync_missed_zero_copy(void)
ram_counters.dirty_sync_missed_zero_copy++;
}
-/* used by the search for pages to send */
-struct PageSearchStatus {
- /* The migration channel used for a specific host page */
- QEMUFile *pss_channel;
- /* Current block being searched */
- RAMBlock *block;
- /* Current page to search from */
- unsigned long page;
- /* Set once we wrap around */
- bool complete_round;
- /*
- * [POSTCOPY-ONLY] Whether current page is explicitly requested by
- * postcopy. When set, the request is "urgent" because the dest QEMU
- * threads are waiting for us.
- */
- bool postcopy_requested;
- /*
- * [POSTCOPY-ONLY] The target channel to use to send current page.
- *
- * Note: This may _not_ match with the value in postcopy_requested
- * above. Let's imagine the case where the postcopy request is exactly
- * the page that we're sending in progress during precopy. In this case
- * we'll have postcopy_requested set to true but the target channel
- * will be the precopy channel (so that we don't split brain on that
- * specific page since the precopy channel already contains partial of
- * that page data).
- *
- * Besides that specific use case, postcopy_target_channel should
- * always be equal to postcopy_requested, because by default we send
- * postcopy pages via postcopy preempt channel.
- */
- bool postcopy_target_channel;
- /* Whether we're sending a host page */
- bool host_page_sending;
- /* The start/end of current host page. Only valid if host_page_sending==true */
- unsigned long host_page_start;
- unsigned long host_page_end;
-};
-typedef struct PageSearchStatus PageSearchStatus;
-
CompressionStats compression_counters;
struct CompressParam {
@@ -2665,7 +2675,7 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
*/
static int ram_find_and_save_block(RAMState *rs)
{
- PageSearchStatus pss;
+ PageSearchStatus *pss = &rs->pss[RAM_CHANNEL_PRECOPY];
int pages = 0;
bool again, found;
@@ -2686,11 +2696,11 @@ static int ram_find_and_save_block(RAMState *rs)
rs->last_page = 0;
}
- pss_init(&pss, rs->last_seen_block, rs->last_page);
+ pss_init(pss, rs->last_seen_block, rs->last_page);
do {
again = true;
- found = get_queued_page(rs, &pss);
+ found = get_queued_page(rs, pss);
if (!found) {
/*
@@ -2698,27 +2708,27 @@ static int ram_find_and_save_block(RAMState *rs)
* preempted precopy. Otherwise find the next dirty bit.
*/
if (postcopy_preempt_triggered(rs)) {
- postcopy_preempt_restore(rs, &pss, false);
+ postcopy_preempt_restore(rs, pss, false);
found = true;
} else {
/* priority queue empty, so just search for something dirty */
- found = find_dirty_block(rs, &pss, &again);
+ found = find_dirty_block(rs, pss, &again);
}
}
if (found) {
/* Update rs->f with correct channel */
if (postcopy_preempt_active()) {
- postcopy_preempt_choose_channel(rs, &pss);
+ postcopy_preempt_choose_channel(rs, pss);
}
/* Cache rs->f in pss_channel (TODO: remove rs->f) */
- pss.pss_channel = rs->f;
- pages = ram_save_host_page(rs, &pss);
+ pss->pss_channel = rs->f;
+ pages = ram_save_host_page(rs, pss);
}
} while (!pages && again);
- rs->last_seen_block = pss.block;
- rs->last_page = pss.page;
+ rs->last_seen_block = pss->block;
+ rs->last_page = pss->page;
return pages;
}
--
2.38.1
* [PULL 26/30] migration: Move last_sent_block into PageSearchStatus
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (24 preceding siblings ...)
2022-11-15 15:35 ` [PULL 25/30] migration: Make PageSearchStatus part of RAMState Juan Quintela
@ 2022-11-15 15:35 ` Juan Quintela
2022-11-15 15:35 ` [PULL 27/30] migration: Send requested page directly in rp-return thread Juan Quintela
` (6 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:35 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
From: Peter Xu <peterx@redhat.com>
Since we use PageSearchStatus to represent a channel, it makes perfect
sense to make last_sent_block (which drives RAM_SAVE_FLAG_CONTINUE)
per-channel rather than global, because each channel can be sending pages
from different ramblocks.
Hence move it from RAMState into PageSearchStatus.
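[Editor's sketch] The RAM_SAVE_FLAG_CONTINUE optimization this relies on can
be shown with a standalone miniature (the flag value and field name mirror
migration/ram.c, but the function below is an invented simplification, not
the real save_page_header()): a header only omits the block name when the
block matches that channel's last_sent_block, so the field must be tracked
per channel once each channel carries an independent stream.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RAM_SAVE_FLAG_CONTINUE 0x20  /* same value as in migration/ram.c */

typedef struct {
    const char *last_sent_block;     /* last block name written on this channel */
} PageSearchStatus;

/* Simplified header writer: the offset+flags word is always written;
 * the block idstr (length byte + name) is written only on a block change. */
static size_t save_page_header(PageSearchStatus *pss, const char *block_id,
                               uint64_t *offset_flags)
{
    size_t size = sizeof(uint64_t);              /* offset + flags */
    if (pss->last_sent_block == block_id) {
        *offset_flags |= RAM_SAVE_FLAG_CONTINUE; /* same block: omit idstr */
    } else {
        size += 1 + strlen(block_id);            /* len byte + block name */
        pss->last_sent_block = block_id;
    }
    return size;
}
```

With a single global last_sent_block, two channels interleaving pages from
different ramblocks would tag pages with the wrong implicit block.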
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.c | 71 ++++++++++++++++++++++++++++---------------------
1 file changed, 41 insertions(+), 30 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index bdb29ac4d9..dbdde5a6a5 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -117,6 +117,8 @@ XBZRLECacheStats xbzrle_counters;
struct PageSearchStatus {
/* The migration channel used for a specific host page */
QEMUFile *pss_channel;
+ /* Last block from where we have sent data */
+ RAMBlock *last_sent_block;
/* Current block being searched */
RAMBlock *block;
/* Current page to search from */
@@ -396,8 +398,6 @@ struct RAMState {
int uffdio_fd;
/* Last block that we have visited searching for dirty pages */
RAMBlock *last_seen_block;
- /* Last block from where we have sent data */
- RAMBlock *last_sent_block;
/* Last dirty target page we have sent */
ram_addr_t last_page;
/* last ram version we have seen */
@@ -712,16 +712,17 @@ exit:
*
* Returns the number of bytes written
*
- * @f: QEMUFile where to send the data
+ * @pss: current PSS channel status
* @block: block that contains the page we want to send
* @offset: offset inside the block for the page
* in the lower bits, it contains flags
*/
-static size_t save_page_header(RAMState *rs, QEMUFile *f, RAMBlock *block,
+static size_t save_page_header(PageSearchStatus *pss, RAMBlock *block,
ram_addr_t offset)
{
size_t size, len;
- bool same_block = (block == rs->last_sent_block);
+ bool same_block = (block == pss->last_sent_block);
+ QEMUFile *f = pss->pss_channel;
if (same_block) {
offset |= RAM_SAVE_FLAG_CONTINUE;
@@ -734,7 +735,7 @@ static size_t save_page_header(RAMState *rs, QEMUFile *f, RAMBlock *block,
qemu_put_byte(f, len);
qemu_put_buffer(f, (uint8_t *)block->idstr, len);
size += 1 + len;
- rs->last_sent_block = block;
+ pss->last_sent_block = block;
}
return size;
}
@@ -818,17 +819,19 @@ static void xbzrle_cache_zero_page(RAMState *rs, ram_addr_t current_addr)
* -1 means that xbzrle would be longer than normal
*
* @rs: current RAM state
+ * @pss: current PSS channel
* @current_data: pointer to the address of the page contents
* @current_addr: addr of the page
* @block: block that contains the page we want to send
* @offset: offset inside the block for the page
*/
-static int save_xbzrle_page(RAMState *rs, QEMUFile *file,
+static int save_xbzrle_page(RAMState *rs, PageSearchStatus *pss,
uint8_t **current_data, ram_addr_t current_addr,
RAMBlock *block, ram_addr_t offset)
{
int encoded_len = 0, bytes_xbzrle;
uint8_t *prev_cached_page;
+ QEMUFile *file = pss->pss_channel;
if (!cache_is_cached(XBZRLE.cache, current_addr,
ram_counters.dirty_sync_count)) {
@@ -893,7 +896,7 @@ static int save_xbzrle_page(RAMState *rs, QEMUFile *file,
}
/* Send XBZRLE based compressed page */
- bytes_xbzrle = save_page_header(rs, file, block,
+ bytes_xbzrle = save_page_header(pss, block,
offset | RAM_SAVE_FLAG_XBZRLE);
qemu_put_byte(file, ENCODING_FLAG_XBZRLE);
qemu_put_be16(file, encoded_len);
@@ -1324,19 +1327,19 @@ void ram_release_page(const char *rbname, uint64_t offset)
* Returns the size of data written to the file, 0 means the page is not
* a zero page
*
- * @rs: current RAM state
- * @file: the file where the data is saved
+ * @pss: current PSS channel
* @block: block that contains the page we want to send
* @offset: offset inside the block for the page
*/
-static int save_zero_page_to_file(RAMState *rs, QEMUFile *file,
+static int save_zero_page_to_file(PageSearchStatus *pss,
RAMBlock *block, ram_addr_t offset)
{
uint8_t *p = block->host + offset;
+ QEMUFile *file = pss->pss_channel;
int len = 0;
if (buffer_is_zero(p, TARGET_PAGE_SIZE)) {
- len += save_page_header(rs, file, block, offset | RAM_SAVE_FLAG_ZERO);
+ len += save_page_header(pss, block, offset | RAM_SAVE_FLAG_ZERO);
qemu_put_byte(file, 0);
len += 1;
ram_release_page(block->idstr, offset);
@@ -1349,14 +1352,14 @@ static int save_zero_page_to_file(RAMState *rs, QEMUFile *file,
*
* Returns the number of pages written.
*
- * @rs: current RAM state
+ * @pss: current PSS channel
* @block: block that contains the page we want to send
* @offset: offset inside the block for the page
*/
-static int save_zero_page(RAMState *rs, QEMUFile *file, RAMBlock *block,
+static int save_zero_page(PageSearchStatus *pss, RAMBlock *block,
ram_addr_t offset)
{
- int len = save_zero_page_to_file(rs, file, block, offset);
+ int len = save_zero_page_to_file(pss, block, offset);
if (len) {
stat64_add(&ram_atomic_counters.duplicate, 1);
@@ -1409,16 +1412,18 @@ static bool control_save_page(PageSearchStatus *pss, RAMBlock *block,
*
* Returns the number of pages written.
*
- * @rs: current RAM state
+ * @pss: current PSS channel
* @block: block that contains the page we want to send
* @offset: offset inside the block for the page
* @buf: the page to be sent
* @async: send to page asyncly
*/
-static int save_normal_page(RAMState *rs, QEMUFile *file, RAMBlock *block,
+static int save_normal_page(PageSearchStatus *pss, RAMBlock *block,
ram_addr_t offset, uint8_t *buf, bool async)
{
- ram_transferred_add(save_page_header(rs, file, block,
+ QEMUFile *file = pss->pss_channel;
+
+ ram_transferred_add(save_page_header(pss, block,
offset | RAM_SAVE_FLAG_PAGE));
if (async) {
qemu_put_buffer_async(file, buf, TARGET_PAGE_SIZE,
@@ -1458,7 +1463,7 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss)
XBZRLE_cache_lock();
if (rs->xbzrle_enabled && !migration_in_postcopy()) {
- pages = save_xbzrle_page(rs, pss->pss_channel, &p, current_addr,
+ pages = save_xbzrle_page(rs, pss, &p, current_addr,
block, offset);
if (!rs->last_stage) {
/* Can't send this cached data async, since the cache page
@@ -1470,8 +1475,7 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss)
/* XBZRLE overflow or normal page */
if (pages == -1) {
- pages = save_normal_page(rs, pss->pss_channel, block, offset,
- p, send_async);
+ pages = save_normal_page(pss, block, offset, p, send_async);
}
XBZRLE_cache_unlock();
@@ -1494,14 +1498,15 @@ static bool do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block,
ram_addr_t offset, uint8_t *source_buf)
{
RAMState *rs = ram_state;
+ PageSearchStatus *pss = &rs->pss[RAM_CHANNEL_PRECOPY];
uint8_t *p = block->host + offset;
int ret;
- if (save_zero_page_to_file(rs, f, block, offset)) {
+ if (save_zero_page_to_file(pss, block, offset)) {
return true;
}
- save_page_header(rs, f, block, offset | RAM_SAVE_FLAG_COMPRESS_PAGE);
+ save_page_header(pss, block, offset | RAM_SAVE_FLAG_COMPRESS_PAGE);
/*
* copy it to a internal buffer to avoid it being modified by VM
@@ -2321,7 +2326,8 @@ static bool save_page_use_compression(RAMState *rs)
* has been properly handled by compression, otherwise needs other
* paths to handle it
*/
-static bool save_compress_page(RAMState *rs, RAMBlock *block, ram_addr_t offset)
+static bool save_compress_page(RAMState *rs, PageSearchStatus *pss,
+ RAMBlock *block, ram_addr_t offset)
{
if (!save_page_use_compression(rs)) {
return false;
@@ -2337,7 +2343,7 @@ static bool save_compress_page(RAMState *rs, RAMBlock *block, ram_addr_t offset)
* We post the fist page as normal page as compression will take
* much CPU resource.
*/
- if (block != rs->last_sent_block) {
+ if (block != pss->last_sent_block) {
flush_compressed_data(rs);
return false;
}
@@ -2368,11 +2374,11 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss)
return res;
}
- if (save_compress_page(rs, block, offset)) {
+ if (save_compress_page(rs, pss, block, offset)) {
return 1;
}
- res = save_zero_page(rs, pss->pss_channel, block, offset);
+ res = save_zero_page(pss, block, offset);
if (res > 0) {
/* Must let xbzrle know, otherwise a previous (now 0'd) cached
* page would be stale
@@ -2503,7 +2509,7 @@ static void postcopy_preempt_choose_channel(RAMState *rs, PageSearchStatus *pss)
* If channel switched, reset last_sent_block since the old sent block
* may not be on the same channel.
*/
- rs->last_sent_block = NULL;
+ pss->last_sent_block = NULL;
trace_postcopy_preempt_switch_channel(channel);
}
@@ -2842,8 +2848,13 @@ static void ram_save_cleanup(void *opaque)
static void ram_state_reset(RAMState *rs)
{
+ int i;
+
+ for (i = 0; i < RAM_CHANNEL_MAX; i++) {
+ rs->pss[i].last_sent_block = NULL;
+ }
+
rs->last_seen_block = NULL;
- rs->last_sent_block = NULL;
rs->last_page = 0;
rs->last_version = ram_list.version;
rs->xbzrle_enabled = false;
@@ -3037,8 +3048,8 @@ void ram_postcopy_send_discard_bitmap(MigrationState *ms)
migration_bitmap_sync(rs);
/* Easiest way to make sure we don't resume in the middle of a host-page */
+ rs->pss[RAM_CHANNEL_PRECOPY].last_sent_block = NULL;
rs->last_seen_block = NULL;
- rs->last_sent_block = NULL;
rs->last_page = 0;
postcopy_each_ram_send_discard(ms);
--
2.38.1
* [PULL 27/30] migration: Send requested page directly in rp-return thread
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (25 preceding siblings ...)
2022-11-15 15:35 ` [PULL 26/30] migration: Move last_sent_block into PageSearchStatus Juan Quintela
@ 2022-11-15 15:35 ` Juan Quintela
2022-11-15 15:35 ` [PULL 28/30] migration: Remove old preempt code around state maintenance Juan Quintela
` (5 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:35 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
From: Peter Xu <peterx@redhat.com>
With all the facilities ready, send the requested page directly in the
rp-return thread rather than queuing it in the request queue, if and only
if postcopy preempt is enabled. This works because a separate channel is
used for sending urgent pages. The only shared data is the bitmap, and it
is protected by bitmap_mutex.
Note that since we're moving ownership of the urgent channel from the
migration thread to the rp thread, the rp thread also becomes responsible
for managing the qemufile, e.g. properly closing it when migration pauses.
For this, let migration_release_from_dst_file() cover shutdown of the
urgent channel too, and rename it migration_release_dst_files() to better
show what it does.
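[Editor's sketch] A minimal standalone model of the serialization described
above (claim_page and send_urgent_range are invented stand-ins, not QEMU
functions): the dirty bitmap is the only shared state, and clearing a bit
before sending, all under bitmap_mutex, guarantees that each page is sent by
exactly one thread even when the rp-return thread sends pages itself.

```c
#include <assert.h>
#include <pthread.h>

#define NPAGES 8

static pthread_mutex_t bitmap_mutex = PTHREAD_MUTEX_INITIALIZER;
static unsigned char dirty[NPAGES]; /* stand-in for the migration dirty bitmap */
static int pages_sent;              /* stand-in for pages pushed to the channel */

/* Clear-and-test, in the spirit of migration_bitmap_clear_dirty(): whichever
 * thread clears the bit first becomes responsible for sending the page. */
static int claim_page(int page)
{
    int was_dirty = dirty[page];
    dirty[page] = 0;
    return was_dirty;
}

/* Urgent path, as run from the rp-return thread for a requested range.
 * Holding bitmap_mutex across the range serializes against the migration
 * thread, which takes the same lock on its precopy path. */
static int send_urgent_range(int start, int len)
{
    pthread_mutex_lock(&bitmap_mutex);
    for (int p = start; p < start + len; p++) {
        if (claim_page(p)) {
            pages_sent++;   /* stand-in for ram_save_target_page() */
        }
    }
    pthread_mutex_unlock(&bitmap_mutex);
    return 0;               /* the real code returns -1 on send failure */
}
```

A second request for the same range sends nothing, since the bits were
already claimed, which is exactly why no extra preempt state is needed.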
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/migration.c | 35 +++++++------
migration/ram.c | 112 ++++++++++++++++++++++++++++++++++++++++++
2 files changed, 131 insertions(+), 16 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 1f95877fb4..42f36c1e2c 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2868,8 +2868,11 @@ static int migrate_handle_rp_resume_ack(MigrationState *s, uint32_t value)
return 0;
}
-/* Release ms->rp_state.from_dst_file in a safe way */
-static void migration_release_from_dst_file(MigrationState *ms)
+/*
+ * Release ms->rp_state.from_dst_file (and postcopy_qemufile_src if
+ * existed) in a safe way.
+ */
+static void migration_release_dst_files(MigrationState *ms)
{
QEMUFile *file;
@@ -2882,6 +2885,18 @@ static void migration_release_from_dst_file(MigrationState *ms)
ms->rp_state.from_dst_file = NULL;
}
+ /*
+ * Do the same to postcopy fast path socket too if there is. No
+ * locking needed because this qemufile should only be managed by
+ * return path thread.
+ */
+ if (ms->postcopy_qemufile_src) {
+ migration_ioc_unregister_yank_from_file(ms->postcopy_qemufile_src);
+ qemu_file_shutdown(ms->postcopy_qemufile_src);
+ qemu_fclose(ms->postcopy_qemufile_src);
+ ms->postcopy_qemufile_src = NULL;
+ }
+
qemu_fclose(file);
}
@@ -3026,7 +3041,7 @@ out:
* Maybe there is something we can do: it looks like a
* network down issue, and we pause for a recovery.
*/
- migration_release_from_dst_file(ms);
+ migration_release_dst_files(ms);
rp = NULL;
if (postcopy_pause_return_path_thread(ms)) {
/*
@@ -3044,7 +3059,7 @@ out:
}
trace_source_return_path_thread_end();
- migration_release_from_dst_file(ms);
+ migration_release_dst_files(ms);
rcu_unregister_thread();
return NULL;
}
@@ -3567,18 +3582,6 @@ static MigThrError postcopy_pause(MigrationState *s)
qemu_file_shutdown(file);
qemu_fclose(file);
- /*
- * Do the same to postcopy fast path socket too if there is. No
- * locking needed because no racer as long as we do this before setting
- * status to paused.
- */
- if (s->postcopy_qemufile_src) {
- migration_ioc_unregister_yank_from_file(s->postcopy_qemufile_src);
- qemu_file_shutdown(s->postcopy_qemufile_src);
- qemu_fclose(s->postcopy_qemufile_src);
- s->postcopy_qemufile_src = NULL;
- }
-
migrate_set_state(&s->state, s->state,
MIGRATION_STATUS_POSTCOPY_PAUSED);
diff --git a/migration/ram.c b/migration/ram.c
index dbdde5a6a5..5dc221a2fc 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -574,6 +574,8 @@ static QemuThread *decompress_threads;
static QemuMutex decomp_done_lock;
static QemuCond decomp_done_cond;
+static int ram_save_host_page_urgent(PageSearchStatus *pss);
+
static bool do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block,
ram_addr_t offset, uint8_t *source_buf);
@@ -588,6 +590,16 @@ static void pss_init(PageSearchStatus *pss, RAMBlock *rb, ram_addr_t page)
pss->complete_round = false;
}
+/*
+ * Check whether two PSSs are actively sending the same page. Return true
+ * if it is, false otherwise.
+ */
+static bool pss_overlap(PageSearchStatus *pss1, PageSearchStatus *pss2)
+{
+ return pss1->host_page_sending && pss2->host_page_sending &&
+ (pss1->host_page_start == pss2->host_page_start);
+}
+
static void *do_data_compress(void *opaque)
{
CompressParam *param = opaque;
@@ -2288,6 +2300,57 @@ int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len)
return -1;
}
+ /*
+ * When with postcopy preempt, we send back the page directly in the
+ * rp-return thread.
+ */
+ if (postcopy_preempt_active()) {
+ ram_addr_t page_start = start >> TARGET_PAGE_BITS;
+ size_t page_size = qemu_ram_pagesize(ramblock);
+ PageSearchStatus *pss = &ram_state->pss[RAM_CHANNEL_POSTCOPY];
+ int ret = 0;
+
+ qemu_mutex_lock(&rs->bitmap_mutex);
+
+ pss_init(pss, ramblock, page_start);
+ /*
+ * Always use the preempt channel, and make sure it's there. It's
+ * safe to access without lock, because when rp-thread is running
+ * we should be the only one who operates on the qemufile
+ */
+ pss->pss_channel = migrate_get_current()->postcopy_qemufile_src;
+ pss->postcopy_requested = true;
+ assert(pss->pss_channel);
+
+ /*
+ * It must be either one or multiple of host page size. Just
+ * assert; if something wrong we're mostly split brain anyway.
+ */
+ assert(len % page_size == 0);
+ while (len) {
+ if (ram_save_host_page_urgent(pss)) {
+ error_report("%s: ram_save_host_page_urgent() failed: "
+ "ramblock=%s, start_addr=0x"RAM_ADDR_FMT,
+ __func__, ramblock->idstr, start);
+ ret = -1;
+ break;
+ }
+ /*
+ * NOTE: after ram_save_host_page_urgent() succeeded, pss->page
+ * will automatically be moved and point to the next host page
+ * we're going to send, so no need to update here.
+ *
+ * Normally QEMU never sends >1 host page in requests, so
+ * logically we don't even need that as the loop should only
+ * run once, but just to be consistent.
+ */
+ len -= page_size;
+ };
+ qemu_mutex_unlock(&rs->bitmap_mutex);
+
+ return ret;
+ }
+
struct RAMSrcPageRequest *new_entry =
g_new0(struct RAMSrcPageRequest, 1);
new_entry->rb = ramblock;
@@ -2565,6 +2628,55 @@ static void pss_host_page_finish(PageSearchStatus *pss)
pss->host_page_start = pss->host_page_end = 0;
}
+/*
+ * Send an urgent host page specified by `pss'. Need to be called with
+ * bitmap_mutex held.
+ *
+ * Returns 0 if save host page succeeded, false otherwise.
+ */
+static int ram_save_host_page_urgent(PageSearchStatus *pss)
+{
+ bool page_dirty, sent = false;
+ RAMState *rs = ram_state;
+ int ret = 0;
+
+ trace_postcopy_preempt_send_host_page(pss->block->idstr, pss->page);
+ pss_host_page_prepare(pss);
+
+ /*
+ * If precopy is sending the same page, let it be done in precopy, or
+ * we could send the same page in two channels and none of them will
+ * receive the whole page.
+ */
+ if (pss_overlap(pss, &ram_state->pss[RAM_CHANNEL_PRECOPY])) {
+ trace_postcopy_preempt_hit(pss->block->idstr,
+ pss->page << TARGET_PAGE_BITS);
+ return 0;
+ }
+
+ do {
+ page_dirty = migration_bitmap_clear_dirty(rs, pss->block, pss->page);
+
+ if (page_dirty) {
+ /* Be strict to return code; it must be 1, or what else? */
+ if (ram_save_target_page(rs, pss) != 1) {
+ error_report_once("%s: ram_save_target_page failed", __func__);
+ ret = -1;
+ goto out;
+ }
+ sent = true;
+ }
+ pss_find_next_dirty(pss);
+ } while (pss_within_range(pss));
+out:
+ pss_host_page_finish(pss);
+ /* For urgent requests, flush immediately if sent */
+ if (sent) {
+ qemu_fflush(pss->pss_channel);
+ }
+ return ret;
+}
+
/**
* ram_save_host_page: save a whole host page
*
--
2.38.1
* [PULL 28/30] migration: Remove old preempt code around state maintenance
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (26 preceding siblings ...)
2022-11-15 15:35 ` [PULL 27/30] migration: Send requested page directly in rp-return thread Juan Quintela
@ 2022-11-15 15:35 ` Juan Quintela
2022-11-15 15:35 ` [PULL 29/30] migration: Drop rs->f Juan Quintela
` (4 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:35 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
From: Peter Xu <peterx@redhat.com>
With the new code that sends pages in the rp-return thread, there is little
point in keeping the old code that maintained the preempt state in the
migration thread, because the new way should always be faster.
If we will always send pages in the rp-return thread anyway, we no longer
need that logic to maintain preempt state, because we now serialize things
directly with the mutex rather than with those fields.
It's unfortunate to have carried that code for such a short period, but it
was an intermediate step at which we noticed the next bottleneck, in the
migration thread. The best we can do now is drop the unnecessary code, as
long as the new code is stable, to reduce the maintenance burden. It's
actually a good thing, because the new "send pages in the rp-return thread"
model is (IMHO) even cleaner and performs better.
Remove the old code that was responsible for maintaining preempt states,
and in the meantime also remove the x-postcopy-preempt-break-huge parameter,
because with concurrent sender threads we no longer need to break huge
pages.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/migration.h | 7 -
migration/migration.c | 2 -
migration/ram.c | 291 +-----------------------------------------
3 files changed, 3 insertions(+), 297 deletions(-)
diff --git a/migration/migration.h b/migration/migration.h
index cdad8aceaa..ae4ffd3454 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -340,13 +340,6 @@ struct MigrationState {
bool send_configuration;
/* Whether we send section footer during migration */
bool send_section_footer;
- /*
- * Whether we allow break sending huge pages when postcopy preempt is
- * enabled. When disabled, we won't interrupt precopy within sending a
- * host huge page, which is the old behavior of vanilla postcopy.
- * NOTE: this parameter is ignored if postcopy preempt is not enabled.
- */
- bool postcopy_preempt_break_huge;
/* Needed by postcopy-pause state */
QemuSemaphore postcopy_pause_sem;
diff --git a/migration/migration.c b/migration/migration.c
index 42f36c1e2c..22fc863c67 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -4422,8 +4422,6 @@ static Property migration_properties[] = {
DEFINE_PROP_SIZE("announce-step", MigrationState,
parameters.announce_step,
DEFAULT_MIGRATE_ANNOUNCE_STEP),
- DEFINE_PROP_BOOL("x-postcopy-preempt-break-huge", MigrationState,
- postcopy_preempt_break_huge, true),
DEFINE_PROP_STRING("tls-creds", MigrationState, parameters.tls_creds),
DEFINE_PROP_STRING("tls-hostname", MigrationState, parameters.tls_hostname),
DEFINE_PROP_STRING("tls-authz", MigrationState, parameters.tls_authz),
diff --git a/migration/ram.c b/migration/ram.c
index 5dc221a2fc..88e61b0aeb 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -125,28 +125,6 @@ struct PageSearchStatus {
unsigned long page;
/* Set once we wrap around */
bool complete_round;
- /*
- * [POSTCOPY-ONLY] Whether current page is explicitly requested by
- * postcopy. When set, the request is "urgent" because the dest QEMU
- * threads are waiting for us.
- */
- bool postcopy_requested;
- /*
- * [POSTCOPY-ONLY] The target channel to use to send current page.
- *
- * Note: This may _not_ match with the value in postcopy_requested
- * above. Let's imagine the case where the postcopy request is exactly
- * the page that we're sending in progress during precopy. In this case
- * we'll have postcopy_requested set to true but the target channel
- * will be the precopy channel (so that we don't split brain on that
- * specific page since the precopy channel already contains partial of
- * that page data).
- *
- * Besides that specific use case, postcopy_target_channel should
- * always be equal to postcopy_requested, because by default we send
- * postcopy pages via postcopy preempt channel.
- */
- bool postcopy_target_channel;
/* Whether we're sending a host page */
bool host_page_sending;
/* The start/end of current host page. Invalid if host_page_sending==false */
@@ -371,20 +349,6 @@ struct RAMSrcPageRequest {
QSIMPLEQ_ENTRY(RAMSrcPageRequest) next_req;
};
-typedef struct {
- /*
- * Cached ramblock/offset values if preempted. They're only meaningful if
- * preempted==true below.
- */
- RAMBlock *ram_block;
- unsigned long ram_page;
- /*
- * Whether a postcopy preemption just happened. Will be reset after
- * precopy recovered to background migration.
- */
- bool preempted;
-} PostcopyPreemptState;
-
/* State of RAM for migration */
struct RAMState {
/* QEMUFile used for this migration */
@@ -447,14 +411,6 @@ struct RAMState {
/* Queue of outstanding page requests from the destination */
QemuMutex src_page_req_mutex;
QSIMPLEQ_HEAD(, RAMSrcPageRequest) src_page_requests;
-
- /* Postcopy preemption informations */
- PostcopyPreemptState postcopy_preempt_state;
- /*
- * Current channel we're using on src VM. Only valid if postcopy-preempt
- * is enabled.
- */
- unsigned int postcopy_channel;
};
typedef struct RAMState RAMState;
@@ -462,11 +418,6 @@ static RAMState *ram_state;
static NotifierWithReturnList precopy_notifier_list;
-static void postcopy_preempt_reset(RAMState *rs)
-{
- memset(&rs->postcopy_preempt_state, 0, sizeof(PostcopyPreemptState));
-}
-
/* Whether postcopy has queued requests? */
static bool postcopy_has_request(RAMState *rs)
{
@@ -579,9 +530,6 @@ static int ram_save_host_page_urgent(PageSearchStatus *pss);
static bool do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block,
ram_addr_t offset, uint8_t *source_buf);
-static void postcopy_preempt_restore(RAMState *rs, PageSearchStatus *pss,
- bool postcopy_requested);
-
/* NOTE: page is the PFN not real ram_addr_t. */
static void pss_init(PageSearchStatus *pss, RAMBlock *rb, ram_addr_t page)
{
@@ -1640,13 +1588,6 @@ retry:
*/
static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
{
- /*
- * This is not a postcopy requested page, mark it "not urgent", and use
- * precopy channel to send it.
- */
- pss->postcopy_requested = false;
- pss->postcopy_target_channel = RAM_CHANNEL_PRECOPY;
-
/* Update pss->page for the next dirty bit in ramblock */
pss_find_next_dirty(pss);
@@ -2097,55 +2038,6 @@ void ram_write_tracking_stop(void)
}
#endif /* defined(__linux__) */
-/*
- * Check whether two addr/offset of the ramblock falls onto the same host huge
- * page. Returns true if so, false otherwise.
- */
-static bool offset_on_same_huge_page(RAMBlock *rb, uint64_t addr1,
- uint64_t addr2)
-{
- size_t page_size = qemu_ram_pagesize(rb);
-
- addr1 = ROUND_DOWN(addr1, page_size);
- addr2 = ROUND_DOWN(addr2, page_size);
-
- return addr1 == addr2;
-}
-
-/*
- * Whether a previous preempted precopy huge page contains current requested
- * page? Returns true if so, false otherwise.
- *
- * This should really happen very rarely, because it means when we were sending
- * during background migration for postcopy we're sending exactly the page that
- * some vcpu got faulted on on dest node. When it happens, we probably don't
- * need to do much but drop the request, because we know right after we restore
- * the precopy stream it'll be serviced. It'll slightly affect the order of
- * postcopy requests to be serviced (e.g. it'll be the same as we move current
- * request to the end of the queue) but it shouldn't be a big deal. The most
- * imporant thing is we can _never_ try to send a partial-sent huge page on the
- * POSTCOPY channel again, otherwise that huge page will got "split brain" on
- * two channels (PRECOPY, POSTCOPY).
- */
-static bool postcopy_preempted_contains(RAMState *rs, RAMBlock *block,
- ram_addr_t offset)
-{
- PostcopyPreemptState *state = &rs->postcopy_preempt_state;
-
- /* No preemption at all? */
- if (!state->preempted) {
- return false;
- }
-
- /* Not even the same ramblock? */
- if (state->ram_block != block) {
- return false;
- }
-
- return offset_on_same_huge_page(block, offset,
- state->ram_page << TARGET_PAGE_BITS);
-}
-
/**
* get_queued_page: unqueue a page from the postcopy requests
*
@@ -2185,20 +2077,7 @@ static bool get_queued_page(RAMState *rs, PageSearchStatus *pss)
} while (block && !dirty);
- if (block) {
- /* See comment above postcopy_preempted_contains() */
- if (postcopy_preempted_contains(rs, block, offset)) {
- trace_postcopy_preempt_hit(block->idstr, offset);
- /*
- * If what we preempted previously was exactly what we're
- * requesting right now, restore the preempted precopy
- * immediately, boosting its priority as it's requested by
- * postcopy.
- */
- postcopy_preempt_restore(rs, pss, true);
- return true;
- }
- } else {
+ if (!block) {
/*
* Poll write faults too if background snapshot is enabled; that's
* when we have vcpus got blocked by the write protected pages.
@@ -2220,9 +2099,6 @@ static bool get_queued_page(RAMState *rs, PageSearchStatus *pss)
* really rare.
*/
pss->complete_round = false;
- /* Mark it an urgent request, meanwhile using POSTCOPY channel */
- pss->postcopy_requested = true;
- pss->postcopy_target_channel = RAM_CHANNEL_POSTCOPY;
}
return !!block;
@@ -2319,7 +2195,6 @@ int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len)
* we should be the only one who operates on the qemufile
*/
pss->pss_channel = migrate_get_current()->postcopy_qemufile_src;
- pss->postcopy_requested = true;
assert(pss->pss_channel);
/*
@@ -2467,129 +2342,6 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss)
return ram_save_page(rs, pss);
}
-static bool postcopy_needs_preempt(RAMState *rs, PageSearchStatus *pss)
-{
- MigrationState *ms = migrate_get_current();
-
- /* Not enabled eager preempt? Then never do that. */
- if (!migrate_postcopy_preempt()) {
- return false;
- }
-
- /* If the user explicitly disabled breaking of huge page, skip */
- if (!ms->postcopy_preempt_break_huge) {
- return false;
- }
-
- /* If the ramblock we're sending is a small page? Never bother. */
- if (qemu_ram_pagesize(pss->block) == TARGET_PAGE_SIZE) {
- return false;
- }
-
- /* Not in postcopy at all? */
- if (!migration_in_postcopy()) {
- return false;
- }
-
- /*
- * If we're already handling a postcopy request, don't preempt as this page
- * has got the same high priority.
- */
- if (pss->postcopy_requested) {
- return false;
- }
-
- /* If there's postcopy requests, then check it up! */
- return postcopy_has_request(rs);
-}
-
-/* Returns true if we preempted precopy, false otherwise */
-static void postcopy_do_preempt(RAMState *rs, PageSearchStatus *pss)
-{
- PostcopyPreemptState *p_state = &rs->postcopy_preempt_state;
-
- trace_postcopy_preempt_triggered(pss->block->idstr, pss->page);
-
- /*
- * Time to preempt precopy. Cache current PSS into preempt state, so that
- * after handling the postcopy pages we can recover to it. We need to do
- * so because the dest VM will have partial of the precopy huge page kept
- * over in its tmp huge page caches; better move on with it when we can.
- */
- p_state->ram_block = pss->block;
- p_state->ram_page = pss->page;
- p_state->preempted = true;
-}
-
-/* Whether we're preempted by a postcopy request during sending a huge page */
-static bool postcopy_preempt_triggered(RAMState *rs)
-{
- return rs->postcopy_preempt_state.preempted;
-}
-
-static void postcopy_preempt_restore(RAMState *rs, PageSearchStatus *pss,
- bool postcopy_requested)
-{
- PostcopyPreemptState *state = &rs->postcopy_preempt_state;
-
- assert(state->preempted);
-
- pss->block = state->ram_block;
- pss->page = state->ram_page;
-
- /* Whether this is a postcopy request? */
- pss->postcopy_requested = postcopy_requested;
- /*
- * When restoring a preempted page, the old data resides in PRECOPY
- * slow channel, even if postcopy_requested is set. So always use
- * PRECOPY channel here.
- */
- pss->postcopy_target_channel = RAM_CHANNEL_PRECOPY;
-
- trace_postcopy_preempt_restored(pss->block->idstr, pss->page);
-
- /* Reset preempt state, most importantly, set preempted==false */
- postcopy_preempt_reset(rs);
-}
-
-static void postcopy_preempt_choose_channel(RAMState *rs, PageSearchStatus *pss)
-{
- MigrationState *s = migrate_get_current();
- unsigned int channel = pss->postcopy_target_channel;
- QEMUFile *next;
-
- if (channel != rs->postcopy_channel) {
- if (channel == RAM_CHANNEL_PRECOPY) {
- next = s->to_dst_file;
- } else {
- next = s->postcopy_qemufile_src;
- }
- /* Update and cache the current channel */
- rs->f = next;
- rs->postcopy_channel = channel;
-
- /*
- * If channel switched, reset last_sent_block since the old sent block
- * may not be on the same channel.
- */
- pss->last_sent_block = NULL;
-
- trace_postcopy_preempt_switch_channel(channel);
- }
-
- trace_postcopy_preempt_send_host_page(pss->block->idstr, pss->page);
-}
-
-/* We need to make sure rs->f always points to the default channel elsewhere */
-static void postcopy_preempt_reset_channel(RAMState *rs)
-{
- if (postcopy_preempt_active()) {
- rs->postcopy_channel = RAM_CHANNEL_PRECOPY;
- rs->f = migrate_get_current()->to_dst_file;
- trace_postcopy_preempt_reset_channel();
- }
-}
-
/* Should be called before sending a host page */
static void pss_host_page_prepare(PageSearchStatus *pss)
{
@@ -2716,11 +2468,6 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
pss_host_page_prepare(pss);
do {
- if (postcopy_needs_preempt(rs, pss)) {
- postcopy_do_preempt(rs, pss);
- break;
- }
-
page_dirty = migration_bitmap_clear_dirty(rs, pss->block, pss->page);
/* Check the pages is dirty and if it is send it */
@@ -2761,19 +2508,6 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
pss_host_page_finish(pss);
- /*
- * When with postcopy preempt mode, flush the data as soon as possible for
- * postcopy requests, because we've already sent a whole huge page, so the
- * dst node should already have enough resource to atomically filling in
- * the current missing page.
- *
- * More importantly, when using separate postcopy channel, we must do
- * explicit flush or it won't flush until the buffer is full.
- */
- if (migrate_postcopy_preempt() && pss->postcopy_requested) {
- qemu_fflush(pss->pss_channel);
- }
-
res = ram_save_release_protection(rs, pss, start_page);
return (res < 0 ? res : pages);
}
@@ -2821,24 +2555,11 @@ static int ram_find_and_save_block(RAMState *rs)
found = get_queued_page(rs, pss);
if (!found) {
- /*
- * Recover previous precopy ramblock/offset if postcopy has
- * preempted precopy. Otherwise find the next dirty bit.
- */
- if (postcopy_preempt_triggered(rs)) {
- postcopy_preempt_restore(rs, pss, false);
- found = true;
- } else {
- /* priority queue empty, so just search for something dirty */
- found = find_dirty_block(rs, pss, &again);
- }
+ /* priority queue empty, so just search for something dirty */
+ found = find_dirty_block(rs, pss, &again);
}
if (found) {
- /* Update rs->f with correct channel */
- if (postcopy_preempt_active()) {
- postcopy_preempt_choose_channel(rs, pss);
- }
/* Cache rs->f in pss_channel (TODO: remove rs->f) */
pss->pss_channel = rs->f;
pages = ram_save_host_page(rs, pss);
@@ -2970,8 +2691,6 @@ static void ram_state_reset(RAMState *rs)
rs->last_page = 0;
rs->last_version = ram_list.version;
rs->xbzrle_enabled = false;
- postcopy_preempt_reset(rs);
- rs->postcopy_channel = RAM_CHANNEL_PRECOPY;
}
#define MAX_WAIT 50 /* ms, half buffered_file limit */
@@ -3615,8 +3334,6 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
}
qemu_mutex_unlock(&rs->bitmap_mutex);
- postcopy_preempt_reset_channel(rs);
-
/*
* Must occur before EOS (or any QEMUFile operation)
* because of RDMA protocol.
@@ -3696,8 +3413,6 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
return ret;
}
- postcopy_preempt_reset_channel(rs);
-
ret = multifd_send_sync_main(rs->f);
if (ret < 0) {
return ret;
--
2.38.1
* [PULL 29/30] migration: Drop rs->f
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (27 preceding siblings ...)
2022-11-15 15:35 ` [PULL 28/30] migration: Remove old preempt code around state maintainance Juan Quintela
@ 2022-11-15 15:35 ` Juan Quintela
2022-11-15 15:35 ` [PULL 30/30] migration: Block migration comment or code is wrong Juan Quintela
` (3 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:35 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
From: Peter Xu <peterx@redhat.com>
Now with rs->pss we can already cache channels in pss->pss_channel. That
pss_channel contains more information than rs->f because it's per-channel.
So rs->f can be replaced by rs->pss[RAM_CHANNEL_PRECOPY].pss_channel,
while rs->f itself is a bit vague now.
Note that vanilla postcopy still sends pages via pss[RAM_CHANNEL_PRECOPY];
that's slightly confusing, but it reflects the reality.
Then, after the replacement, we can safely drop rs->f.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 88e61b0aeb..29e413b97b 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -351,8 +351,6 @@ struct RAMSrcPageRequest {
/* State of RAM for migration */
struct RAMState {
- /* QEMUFile used for this migration */
- QEMUFile *f;
/*
* PageSearchStatus structures for the channels when send pages.
* Protected by the bitmap_mutex.
@@ -2560,8 +2558,6 @@ static int ram_find_and_save_block(RAMState *rs)
}
if (found) {
- /* Cache rs->f in pss_channel (TODO: remove rs->f) */
- pss->pss_channel = rs->f;
pages = ram_save_host_page(rs, pss);
}
} while (!pages && again);
@@ -3117,7 +3113,7 @@ static void ram_state_resume_prepare(RAMState *rs, QEMUFile *out)
ram_state_reset(rs);
/* Update RAMState cache of output QEMUFile */
- rs->f = out;
+ rs->pss[RAM_CHANNEL_PRECOPY].pss_channel = out;
trace_ram_state_resume_prepare(pages);
}
@@ -3208,7 +3204,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
return -1;
}
}
- (*rsp)->f = f;
+ (*rsp)->pss[RAM_CHANNEL_PRECOPY].pss_channel = f;
WITH_RCU_READ_LOCK_GUARD() {
qemu_put_be64(f, ram_bytes_total_common(true) | RAM_SAVE_FLAG_MEM_SIZE);
@@ -3343,7 +3339,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
out:
if (ret >= 0
&& migration_is_setup_or_active(migrate_get_current()->state)) {
- ret = multifd_send_sync_main(rs->f);
+ ret = multifd_send_sync_main(rs->pss[RAM_CHANNEL_PRECOPY].pss_channel);
if (ret < 0) {
return ret;
}
@@ -3413,7 +3409,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
return ret;
}
- ret = multifd_send_sync_main(rs->f);
+ ret = multifd_send_sync_main(rs->pss[RAM_CHANNEL_PRECOPY].pss_channel);
if (ret < 0) {
return ret;
}
--
2.38.1
* [PULL 30/30] migration: Block migration comment or code is wrong
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (28 preceding siblings ...)
2022-11-15 15:35 ` [PULL 29/30] migration: Drop rs->f Juan Quintela
@ 2022-11-15 15:35 ` Juan Quintela
2022-11-15 18:06 ` [PULL 00/30] Next patches Daniel P. Berrangé
` (2 subsequent siblings)
32 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2022-11-15 15:35 UTC (permalink / raw)
To: qemu-devel
Cc: Michael Tokarev, Marc-André Lureau, David Hildenbrand,
Laurent Vivier, Juan Quintela, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
And it appears that what is wrong is the code. During the bulk stage we
only need to make sure that at least one block is reported as pending,
without playing games with max_size at all.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
migration/block.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/migration/block.c b/migration/block.c
index 3577c815a9..4347da1526 100644
--- a/migration/block.c
+++ b/migration/block.c
@@ -880,8 +880,8 @@ static void block_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
blk_mig_unlock();
/* Report at least one block pending during bulk phase */
- if (pending <= max_size && !block_mig_state.bulk_completed) {
- pending = max_size + BLK_MIG_BLOCK_SIZE;
+ if (!pending && !block_mig_state.bulk_completed) {
+ pending = BLK_MIG_BLOCK_SIZE;
}
trace_migration_block_save_pending(pending);
--
2.38.1
* Re: [PULL 00/30] Next patches
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (29 preceding siblings ...)
2022-11-15 15:35 ` [PULL 30/30] migration: Block migration comment or code is wrong Juan Quintela
@ 2022-11-15 18:06 ` Daniel P. Berrangé
2022-11-15 18:57 ` Stefan Hajnoczi
2022-11-15 18:59 ` Stefan Hajnoczi
32 siblings, 0 replies; 47+ messages in thread
From: Daniel P. Berrangé @ 2022-11-15 18:06 UTC (permalink / raw)
To: Juan Quintela
Cc: qemu-devel, Michael Tokarev, Marc-André Lureau,
David Hildenbrand, Laurent Vivier, Paolo Bonzini, Peter Xu,
Stefan Hajnoczi, Dr. David Alan Gilbert, Thomas Huth, qemu-block,
qemu-trivial, Philippe Mathieu-Daudé, Fam Zheng
Please don't merge this PULL request,
It contains changes to the "io" subsystem in patch 3 that I
have not reviewed nor acked yet, and which should be been
split as a separate patch from the migration changes too.
With regards,
Daniel
On Tue, Nov 15, 2022 at 04:34:44PM +0100, Juan Quintela wrote:
> The following changes since commit 98f10f0e2613ba1ac2ad3f57a5174014f6dcb03d:
>
> Merge tag 'pull-target-arm-20221114' of https://git.linaro.org/people/pmaydell/qemu-arm into staging (2022-11-14 13:31:17 -0500)
>
> are available in the Git repository at:
>
> https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
>
> for you to fetch changes up to d896a7a40db13fc2d05828c94ddda2747530089c:
>
> migration: Block migration comment or code is wrong (2022-11-15 10:31:06 +0100)
>
> ----------------------------------------------------------------
> Migration PULL request (take 2)
>
> Hi
>
> This time properly signed.
>
> [take 1]
> It includes:
> - Leonardo fix for zero_copy flush
> - Fiona fix for return value of readv/writev
> - Peter Xu cleanups
> - Peter Xu preempt patches
> - Patches ready from zero page (me)
> - AVX2 support (ling)
> - fix for slow networking and reordering of first packets (manish)
>
> Please, apply.
>
> ----------------------------------------------------------------
>
> Fiona Ebner (1):
> migration/channel-block: fix return value for
> qio_channel_block_{readv,writev}
>
> Juan Quintela (5):
> multifd: Create page_size fields into both MultiFD{Recv,Send}Params
> multifd: Create page_count fields into both MultiFD{Recv,Send}Params
> migration: Export ram_transferred_ram()
> migration: Export ram_release_page()
> migration: Block migration comment or code is wrong
>
> Leonardo Bras (1):
> migration/multifd/zero-copy: Create helper function for flushing
>
> Peter Xu (20):
> migration: Fix possible infinite loop of ram save process
> migration: Fix race on qemu_file_shutdown()
> migration: Disallow postcopy preempt to be used with compress
> migration: Use non-atomic ops for clear log bitmap
> migration: Disable multifd explicitly with compression
> migration: Take bitmap mutex when completing ram migration
> migration: Add postcopy_preempt_active()
> migration: Cleanup xbzrle zero page cache update logic
> migration: Trivial cleanup save_page_header() on same block check
> migration: Remove RAMState.f references in compression code
> migration: Yield bitmap_mutex properly when sending/sleeping
> migration: Use atomic ops properly for page accountings
> migration: Teach PSS about host page
> migration: Introduce pss_channel
> migration: Add pss_init()
> migration: Make PageSearchStatus part of RAMState
> migration: Move last_sent_block into PageSearchStatus
> migration: Send requested page directly in rp-return thread
> migration: Remove old preempt code around state maintainance
> migration: Drop rs->f
>
> ling xu (2):
> Update AVX512 support for xbzrle_encode_buffer
> Unit test code and benchmark code
>
> manish.mishra (1):
> migration: check magic value for deciding the mapping of channels
>
> meson.build | 16 +
> include/exec/ram_addr.h | 11 +-
> include/exec/ramblock.h | 3 +
> include/io/channel.h | 25 ++
> include/qemu/bitmap.h | 1 +
> migration/migration.h | 7 -
> migration/multifd.h | 10 +-
> migration/postcopy-ram.h | 2 +-
> migration/ram.h | 23 +
> migration/xbzrle.h | 4 +
> io/channel-socket.c | 27 ++
> io/channel.c | 39 ++
> migration/block.c | 4 +-
> migration/channel-block.c | 6 +-
> migration/migration.c | 109 +++--
> migration/multifd-zlib.c | 14 +-
> migration/multifd-zstd.c | 12 +-
> migration/multifd.c | 69 +--
> migration/postcopy-ram.c | 5 +-
> migration/qemu-file.c | 27 +-
> migration/ram.c | 794 +++++++++++++++++-----------------
> migration/xbzrle.c | 124 ++++++
> tests/bench/xbzrle-bench.c | 465 ++++++++++++++++++++
> tests/unit/test-xbzrle.c | 39 +-
> util/bitmap.c | 45 ++
> meson_options.txt | 2 +
> scripts/meson-buildoptions.sh | 14 +-
> tests/bench/meson.build | 4 +
> 28 files changed, 1379 insertions(+), 522 deletions(-)
> create mode 100644 tests/bench/xbzrle-bench.c
>
> --
> 2.38.1
>
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
* Re: [PULL 00/30] Next patches
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (30 preceding siblings ...)
2022-11-15 18:06 ` [PULL 00/30] Next patches Daniel P. Berrangé
@ 2022-11-15 18:57 ` Stefan Hajnoczi
2022-11-16 15:35 ` Xu, Ling1
2022-11-15 18:59 ` Stefan Hajnoczi
32 siblings, 1 reply; 47+ messages in thread
From: Stefan Hajnoczi @ 2022-11-15 18:57 UTC (permalink / raw)
To: Juan Quintela, ling xu, Zhou Zhao, Jun Jin
Cc: qemu-devel, Michael Tokarev, Marc-André Lureau,
David Hildenbrand, Laurent Vivier, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
On Tue, 15 Nov 2022 at 10:40, Juan Quintela <quintela@redhat.com> wrote:
>
> The following changes since commit 98f10f0e2613ba1ac2ad3f57a5174014f6dcb03d:
>
> Merge tag 'pull-target-arm-20221114' of https://git.linaro.org/people/pmaydell/qemu-arm into staging (2022-11-14 13:31:17 -0500)
>
> are available in the Git repository at:
>
> https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
>
> for you to fetch changes up to d896a7a40db13fc2d05828c94ddda2747530089c:
>
> migration: Block migration comment or code is wrong (2022-11-15 10:31:06 +0100)
>
> ----------------------------------------------------------------
> Migration PULL request (take 2)
>
> Hi
>
> This time properly signed.
>
> [take 1]
> It includes:
> - Leonardo fix for zero_copy flush
> - Fiona fix for return value of readv/writev
> - Peter Xu cleanups
> - Peter Xu preempt patches
> - Patches ready from zero page (me)
> - AVX2 support (ling)
> - fix for slow networking and reordering of first packets (manish)
>
> Please, apply.
>
> ----------------------------------------------------------------
>
> Fiona Ebner (1):
> migration/channel-block: fix return value for
> qio_channel_block_{readv,writev}
>
> Juan Quintela (5):
> multifd: Create page_size fields into both MultiFD{Recv,Send}Params
> multifd: Create page_count fields into both MultiFD{Recv,Send}Params
> migration: Export ram_transferred_ram()
> migration: Export ram_release_page()
> migration: Block migration comment or code is wrong
>
> Leonardo Bras (1):
> migration/multifd/zero-copy: Create helper function for flushing
>
> Peter Xu (20):
> migration: Fix possible infinite loop of ram save process
> migration: Fix race on qemu_file_shutdown()
> migration: Disallow postcopy preempt to be used with compress
> migration: Use non-atomic ops for clear log bitmap
> migration: Disable multifd explicitly with compression
> migration: Take bitmap mutex when completing ram migration
> migration: Add postcopy_preempt_active()
> migration: Cleanup xbzrle zero page cache update logic
> migration: Trivial cleanup save_page_header() on same block check
> migration: Remove RAMState.f references in compression code
> migration: Yield bitmap_mutex properly when sending/sleeping
> migration: Use atomic ops properly for page accountings
> migration: Teach PSS about host page
> migration: Introduce pss_channel
> migration: Add pss_init()
> migration: Make PageSearchStatus part of RAMState
> migration: Move last_sent_block into PageSearchStatus
> migration: Send requested page directly in rp-return thread
> migration: Remove old preempt code around state maintainance
> migration: Drop rs->f
>
> ling xu (2):
> Update AVX512 support for xbzrle_encode_buffer
> Unit test code and benchmark code
This commit causes the following CI failure:
cc -m64 -mcx16 -Ilibauthz.fa.p -I. -I.. -Iqapi -Itrace -Iui/shader
-I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include
-fdiagnostics-color=auto -Wall -Winvalid-pch -Werror -std=gnu11 -O2 -g
-isystem /builds/qemu-project/qemu/linux-headers -isystem
linux-headers -iquote . -iquote /builds/qemu-project/qemu -iquote
/builds/qemu-project/qemu/include -iquote
/builds/qemu-project/qemu/tcg/i386 -pthread -U_FORTIFY_SOURCE
-D_FORTIFY_SOURCE=2 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64
-D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wundef
-Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common
-fwrapv -Wold-style-declaration -Wold-style-definition -Wtype-limits
-Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers
-Wempty-body -Wnested-externs -Wendif-labels -Wexpansion-to-defined
-Wimplicit-fallthrough=2 -Wno-missing-include-dirs
-Wno-shift-negative-value -Wno-psabi -fstack-protector-strong -fPIE
-MD -MQ libauthz.fa.p/authz_simple.c.o -MF
libauthz.fa.p/authz_simple.c.o.d -o libauthz.fa.p/authz_simple.c.o -c
../authz/simple.c
In file included from ../authz/simple.c:23:
../authz/trace.h:1:10: fatal error: trace/trace-authz.h: No such file
or directory
1 | #include "trace/trace-authz.h"
| ^~~~~~~~~~~~~~~~~~~~~
https://gitlab.com/qemu-project/qemu/-/jobs/3326576115
I think the issue is that the test links against objects that aren't
present when a qemu-user-only build is performed. That's my first guess;
I might be wrong, but it is definitely this commit that causes the
failure (I bisected it).
There is a second CI failure here:
clang -m64 -mcx16 -Itests/bench/xbzrle-bench.p -Itests/bench
-I../tests/bench -I. -Iqapi -Itrace -Iui -Iui/shader
-I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include
-I/usr/include/sysprof-4 -flto -fcolor-diagnostics -Wall -Winvalid-pch
-Werror -std=gnu11 -O2 -g -isystem
/builds/qemu-project/qemu/linux-headers -isystem linux-headers -iquote
. -iquote /builds/qemu-project/qemu -iquote
/builds/qemu-project/qemu/include -iquote
/builds/qemu-project/qemu/tcg/i386 -pthread -D_GNU_SOURCE
-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes
-Wredundant-decls -Wundef -Wwrite-strings -Wmissing-prototypes
-fno-strict-aliasing -fno-common -fwrapv -Wold-style-definition
-Wtype-limits -Wformat-security -Wformat-y2k -Winit-self
-Wignored-qualifiers -Wempty-body -Wnested-externs -Wendif-labels
-Wexpansion-to-defined -Wno-initializer-overrides
-Wno-missing-include-dirs -Wno-shift-negative-value
-Wno-string-plus-int -Wno-typedef-redefinition
-Wno-tautological-type-limit-compare -Wno-psabi
-Wno-gnu-variable-sized-type-not-at-end -fstack-protector-strong
-fsanitize=safe-stack -fsanitize=cfi-icall
-fsanitize-cfi-icall-generalize-pointers -fno-sanitize-trap=cfi-icall
-fPIE -MD -MQ tests/bench/xbzrle-bench.p/xbzrle-bench.c.o -MF
tests/bench/xbzrle-bench.p/xbzrle-bench.c.o.d -o
tests/bench/xbzrle-bench.p/xbzrle-bench.c.o -c
../tests/bench/xbzrle-bench.c
../tests/bench/xbzrle-bench.c:84:15: error: implicit declaration of
function 'xbzrle_encode_buffer_avx512' is invalid in C99
[-Werror,-Wimplicit-function-declaration]
dlen512 = xbzrle_encode_buffer_avx512(buffer512, buffer512, XBZRLE_PAGE_SIZE,
^
../tests/bench/xbzrle-bench.c:84:15: note: did you mean 'xbzrle_encode_buffer'?
./../migration/xbzrle.h:17:5: note: 'xbzrle_encode_buffer' declared here
int xbzrle_encode_buffer(uint8_t *old_buf, uint8_t *new_buf, int slen,
^
../tests/bench/xbzrle-bench.c:146:15: error: implicit declaration of
function 'xbzrle_encode_buffer_avx512' is invalid in C99
[-Werror,-Wimplicit-function-declaration]
dlen512 = xbzrle_encode_buffer_avx512(test512, test512, XBZRLE_PAGE_SIZE,
^
../tests/bench/xbzrle-bench.c:205:15: error: implicit declaration of
function 'xbzrle_encode_buffer_avx512' is invalid in C99
[-Werror,-Wimplicit-function-declaration]
dlen512 = xbzrle_encode_buffer_avx512(buffer512, test512, XBZRLE_PAGE_SIZE,
^
../tests/bench/xbzrle-bench.c:268:13: error: implicit declaration of
function 'xbzrle_encode_buffer_avx512' is invalid in C99
[-Werror,-Wimplicit-function-declaration]
rc512 = xbzrle_encode_buffer_avx512(buffer512, test512, XBZRLE_PAGE_SIZE,
^
../tests/bench/xbzrle-bench.c:345:15: error: implicit declaration of
function 'xbzrle_encode_buffer_avx512' is invalid in C99
[-Werror,-Wimplicit-function-declaration]
dlen512 = xbzrle_encode_buffer_avx512(test512, buffer512, XBZRLE_PAGE_SIZE,
^
../tests/bench/xbzrle-bench.c:414:15: error: implicit declaration of
function 'xbzrle_encode_buffer_avx512' is invalid in C99
[-Werror,-Wimplicit-function-declaration]
dlen512 = xbzrle_encode_buffer_avx512(test512, buffer512, XBZRLE_PAGE_SIZE,
^
https://gitlab.com/qemu-project/qemu/-/jobs/3326576144
>
> manish.mishra (1):
> migration: check magic value for deciding the mapping of channels
>
> meson.build | 16 +
> include/exec/ram_addr.h | 11 +-
> include/exec/ramblock.h | 3 +
> include/io/channel.h | 25 ++
> include/qemu/bitmap.h | 1 +
> migration/migration.h | 7 -
> migration/multifd.h | 10 +-
> migration/postcopy-ram.h | 2 +-
> migration/ram.h | 23 +
> migration/xbzrle.h | 4 +
> io/channel-socket.c | 27 ++
> io/channel.c | 39 ++
> migration/block.c | 4 +-
> migration/channel-block.c | 6 +-
> migration/migration.c | 109 +++--
> migration/multifd-zlib.c | 14 +-
> migration/multifd-zstd.c | 12 +-
> migration/multifd.c | 69 +--
> migration/postcopy-ram.c | 5 +-
> migration/qemu-file.c | 27 +-
> migration/ram.c | 794 +++++++++++++++++-----------------
> migration/xbzrle.c | 124 ++++++
> tests/bench/xbzrle-bench.c | 465 ++++++++++++++++++++
> tests/unit/test-xbzrle.c | 39 +-
> util/bitmap.c | 45 ++
> meson_options.txt | 2 +
> scripts/meson-buildoptions.sh | 14 +-
> tests/bench/meson.build | 4 +
> 28 files changed, 1379 insertions(+), 522 deletions(-)
> create mode 100644 tests/bench/xbzrle-bench.c
>
> --
> 2.38.1
>
>
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PULL 00/30] Next patches
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
` (31 preceding siblings ...)
2022-11-15 18:57 ` Stefan Hajnoczi
@ 2022-11-15 18:59 ` Stefan Hajnoczi
32 siblings, 0 replies; 47+ messages in thread
From: Stefan Hajnoczi @ 2022-11-15 18:59 UTC (permalink / raw)
To: Juan Quintela
Cc: qemu-devel, Michael Tokarev, Marc-André Lureau,
David Hildenbrand, Laurent Vivier, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block, qemu-trivial,
Philippe Mathieu-Daudé, Fam Zheng
Please only include bug fixes for 7.2 in pull requests during QEMU
hard freeze. The AVX2 support has issues (see my other email) and
anything else that isn't a bug fix should be dropped too.
Stefan
* RE: [PULL 00/30] Next patches
2022-11-15 18:57 ` Stefan Hajnoczi
@ 2022-11-16 15:35 ` Xu, Ling1
0 siblings, 0 replies; 47+ messages in thread
From: Xu, Ling1 @ 2022-11-16 15:35 UTC (permalink / raw)
To: Stefan Hajnoczi, Juan Quintela, Zhao, Zhou, Jin, Jun I
Cc: qemu-devel@nongnu.org, Michael Tokarev, Marc-André Lureau,
David Hildenbrand, Laurent Vivier, Paolo Bonzini,
Daniel P. Berrangé, Peter Xu, Stefan Hajnoczi,
Dr. David Alan Gilbert, Thomas Huth, qemu-block@nongnu.org,
qemu-trivial@nongnu.org, Philippe Mathieu-Daudé, Fam Zheng
Hi all,
We very much appreciate your time reviewing our patch.
The second CI failure caused by our patch has been addressed: one simple fix is moving the "#endif" in qemu/tests/bench/xbzrle-bench.c from line 46 to line 450.
We have submitted patch v7 with this modification. Thanks again for your time.
Best Regards,
Ling
-----Original Message-----
From: Stefan Hajnoczi <stefanha@gmail.com>
Sent: Wednesday, November 16, 2022 2:58 AM
To: Juan Quintela <quintela@redhat.com>; Xu, Ling1 <ling1.xu@intel.com>; Zhao, Zhou <zhou.zhao@intel.com>; Jin, Jun I <jun.i.jin@intel.com>
Cc: qemu-devel@nongnu.org; Michael Tokarev <mjt@tls.msk.ru>; Marc-André Lureau <marcandre.lureau@redhat.com>; David Hildenbrand <david@redhat.com>; Laurent Vivier <laurent@vivier.eu>; Paolo Bonzini <pbonzini@redhat.com>; Daniel P. Berrangé <berrange@redhat.com>; Peter Xu <peterx@redhat.com>; Stefan Hajnoczi <stefanha@redhat.com>; Dr. David Alan Gilbert <dgilbert@redhat.com>; Thomas Huth <thuth@redhat.com>; qemu-block@nongnu.org; qemu-trivial@nongnu.org; Philippe Mathieu-Daudé <philmd@linaro.org>; Fam Zheng <fam@euphon.net>
Subject: Re: [PULL 00/30] Next patches
On Tue, 15 Nov 2022 at 10:40, Juan Quintela <quintela@redhat.com> wrote:
>
> The following changes since commit 98f10f0e2613ba1ac2ad3f57a5174014f6dcb03d:
>
> Merge tag 'pull-target-arm-20221114' of
> https://git.linaro.org/people/pmaydell/qemu-arm into staging
> (2022-11-14 13:31:17 -0500)
>
> are available in the Git repository at:
>
> https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
>
> for you to fetch changes up to d896a7a40db13fc2d05828c94ddda2747530089c:
>
> migration: Block migration comment or code is wrong (2022-11-15
> 10:31:06 +0100)
>
> ----------------------------------------------------------------
> Migration PULL request (take 2)
>
> Hi
>
> This time properly signed.
>
> [take 1]
> It includes:
> - Leonardo fix for zero_copy flush
> - Fiona fix for return value of readv/writev
> - Peter Xu cleanups
> - Peter Xu preempt patches
> - Patches ready from zero page (me)
> - AVX2 support (ling)
> - fix for slow networking and reordering of first packets (manish)
>
> Please, apply.
>
> ----------------------------------------------------------------
>
> Fiona Ebner (1):
> migration/channel-block: fix return value for
> qio_channel_block_{readv,writev}
>
> Juan Quintela (5):
> multifd: Create page_size fields into both MultiFD{Recv,Send}Params
> multifd: Create page_count fields into both MultiFD{Recv,Send}Params
> migration: Export ram_transferred_ram()
> migration: Export ram_release_page()
> migration: Block migration comment or code is wrong
>
> Leonardo Bras (1):
> migration/multifd/zero-copy: Create helper function for flushing
>
> Peter Xu (20):
> migration: Fix possible infinite loop of ram save process
> migration: Fix race on qemu_file_shutdown()
> migration: Disallow postcopy preempt to be used with compress
> migration: Use non-atomic ops for clear log bitmap
> migration: Disable multifd explicitly with compression
> migration: Take bitmap mutex when completing ram migration
> migration: Add postcopy_preempt_active()
> migration: Cleanup xbzrle zero page cache update logic
> migration: Trivial cleanup save_page_header() on same block check
> migration: Remove RAMState.f references in compression code
> migration: Yield bitmap_mutex properly when sending/sleeping
> migration: Use atomic ops properly for page accountings
> migration: Teach PSS about host page
> migration: Introduce pss_channel
> migration: Add pss_init()
> migration: Make PageSearchStatus part of RAMState
> migration: Move last_sent_block into PageSearchStatus
> migration: Send requested page directly in rp-return thread
> migration: Remove old preempt code around state maintainance
> migration: Drop rs->f
>
> ling xu (2):
> Update AVX512 support for xbzrle_encode_buffer
> Unit test code and benchmark code
This commit causes the following CI failure:
cc -m64 -mcx16 -Ilibauthz.fa.p -I. -I.. -Iqapi -Itrace -Iui/shader
-I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include
-fdiagnostics-color=auto -Wall -Winvalid-pch -Werror -std=gnu11 -O2 -g -isystem /builds/qemu-project/qemu/linux-headers -isystem linux-headers -iquote . -iquote /builds/qemu-project/qemu -iquote /builds/qemu-project/qemu/include -iquote
/builds/qemu-project/qemu/tcg/i386 -pthread -U_FORTIFY_SOURCE
-D_FORTIFY_SOURCE=2 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -fwrapv -Wold-style-declaration -Wold-style-definition -Wtype-limits -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wempty-body -Wnested-externs -Wendif-labels -Wexpansion-to-defined
-Wimplicit-fallthrough=2 -Wno-missing-include-dirs -Wno-shift-negative-value -Wno-psabi -fstack-protector-strong -fPIE -MD -MQ libauthz.fa.p/authz_simple.c.o -MF libauthz.fa.p/authz_simple.c.o.d -o libauthz.fa.p/authz_simple.c.o -c ../authz/simple.c In file included from ../authz/simple.c:23:
../authz/trace.h:1:10: fatal error: trace/trace-authz.h: No such file or directory
1 | #include "trace/trace-authz.h"
| ^~~~~~~~~~~~~~~~~~~~~
https://gitlab.com/qemu-project/qemu/-/jobs/3326576115
I think the issue is that the test links against objects that aren't present when a qemu-user-only build is performed. That's my first guess; I might be wrong, but it is definitely this commit that causes the failure (I bisected it).
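Bisecting a build regression like this can be automated with `git bisect run`. A self-contained toy sketch, where the throwaway repository, file contents, and commit subjects are all invented and a `grep` stands in for the real build check:

```shell
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name ci
for i in 1 2; do
    echo "ok $i" > status.txt
    git add status.txt
    git commit -qm "good$i"
done
for i in 1 2; do
    echo "broken $i" > status.txt
    git add status.txt
    git commit -qm "bad$i"
done
# HEAD is known bad, HEAD~3 known good; bisect then runs the check
# command at each step: exit 0 means good, non-zero means bad.
git bisect start HEAD HEAD~3 >/dev/null
git bisect run grep -q '^ok' status.txt >/dev/null
first_bad=$(git rev-parse refs/bisect/bad)
git log -1 --format=%s "$first_bad"
git bisect reset >/dev/null 2>&1
```

In a real tree the check command would be the configure/ninja invocation that reproduces the failure.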
There is a second CI failure here:
clang -m64 -mcx16 -Itests/bench/xbzrle-bench.p -Itests/bench -I../tests/bench -I. -Iqapi -Itrace -Iui -Iui/shader
-I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include
-I/usr/include/sysprof-4 -flto -fcolor-diagnostics -Wall -Winvalid-pch -Werror -std=gnu11 -O2 -g -isystem /builds/qemu-project/qemu/linux-headers -isystem linux-headers -iquote . -iquote /builds/qemu-project/qemu -iquote /builds/qemu-project/qemu/include -iquote
/builds/qemu-project/qemu/tcg/i386 -pthread -D_GNU_SOURCE
-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -fwrapv -Wold-style-definition -Wtype-limits -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wempty-body -Wnested-externs -Wendif-labels -Wexpansion-to-defined -Wno-initializer-overrides -Wno-missing-include-dirs -Wno-shift-negative-value -Wno-string-plus-int -Wno-typedef-redefinition -Wno-tautological-type-limit-compare -Wno-psabi -Wno-gnu-variable-sized-type-not-at-end -fstack-protector-strong -fsanitize=safe-stack -fsanitize=cfi-icall -fsanitize-cfi-icall-generalize-pointers -fno-sanitize-trap=cfi-icall -fPIE -MD -MQ tests/bench/xbzrle-bench.p/xbzrle-bench.c.o -MF tests/bench/xbzrle-bench.p/xbzrle-bench.c.o.d -o tests/bench/xbzrle-bench.p/xbzrle-bench.c.o -c ../tests/bench/xbzrle-bench.c
../tests/bench/xbzrle-bench.c:84:15: error: implicit declaration of function 'xbzrle_encode_buffer_avx512' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
dlen512 = xbzrle_encode_buffer_avx512(buffer512, buffer512, XBZRLE_PAGE_SIZE, ^
../tests/bench/xbzrle-bench.c:84:15: note: did you mean 'xbzrle_encode_buffer'?
./../migration/xbzrle.h:17:5: note: 'xbzrle_encode_buffer' declared here int xbzrle_encode_buffer(uint8_t *old_buf, uint8_t *new_buf, int slen, ^
../tests/bench/xbzrle-bench.c:146:15: error: implicit declaration of function 'xbzrle_encode_buffer_avx512' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
dlen512 = xbzrle_encode_buffer_avx512(test512, test512, XBZRLE_PAGE_SIZE, ^
../tests/bench/xbzrle-bench.c:205:15: error: implicit declaration of function 'xbzrle_encode_buffer_avx512' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
dlen512 = xbzrle_encode_buffer_avx512(buffer512, test512, XBZRLE_PAGE_SIZE, ^
../tests/bench/xbzrle-bench.c:268:13: error: implicit declaration of function 'xbzrle_encode_buffer_avx512' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
rc512 = xbzrle_encode_buffer_avx512(buffer512, test512, XBZRLE_PAGE_SIZE, ^
../tests/bench/xbzrle-bench.c:345:15: error: implicit declaration of function 'xbzrle_encode_buffer_avx512' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
dlen512 = xbzrle_encode_buffer_avx512(test512, buffer512, XBZRLE_PAGE_SIZE, ^
../tests/bench/xbzrle-bench.c:414:15: error: implicit declaration of function 'xbzrle_encode_buffer_avx512' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
dlen512 = xbzrle_encode_buffer_avx512(test512, buffer512, XBZRLE_PAGE_SIZE, ^
https://gitlab.com/qemu-project/qemu/-/jobs/3326576144
>
> manish.mishra (1):
> migration: check magic value for deciding the mapping of channels
>
> meson.build | 16 +
> include/exec/ram_addr.h | 11 +-
> include/exec/ramblock.h | 3 +
> include/io/channel.h | 25 ++
> include/qemu/bitmap.h | 1 +
> migration/migration.h | 7 -
> migration/multifd.h | 10 +-
> migration/postcopy-ram.h | 2 +-
> migration/ram.h | 23 +
> migration/xbzrle.h | 4 +
> io/channel-socket.c | 27 ++
> io/channel.c | 39 ++
> migration/block.c | 4 +-
> migration/channel-block.c | 6 +-
> migration/migration.c | 109 +++--
> migration/multifd-zlib.c | 14 +-
> migration/multifd-zstd.c | 12 +-
> migration/multifd.c | 69 +--
> migration/postcopy-ram.c | 5 +-
> migration/qemu-file.c | 27 +-
> migration/ram.c | 794 +++++++++++++++++-----------------
> migration/xbzrle.c | 124 ++++++
> tests/bench/xbzrle-bench.c | 465 ++++++++++++++++++++
> tests/unit/test-xbzrle.c | 39 +-
> util/bitmap.c | 45 ++
> meson_options.txt | 2 +
> scripts/meson-buildoptions.sh | 14 +-
> tests/bench/meson.build | 4 +
> 28 files changed, 1379 insertions(+), 522 deletions(-)
> create mode 100644 tests/bench/xbzrle-bench.c
>
> --
> 2.38.1
>
>
* [PULL 00/30] Next patches
@ 2023-06-22 2:12 Juan Quintela
2023-06-22 5:38 ` Richard Henderson
0 siblings, 1 reply; 47+ messages in thread
From: Juan Quintela @ 2023-06-22 2:12 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Leonardo Bras, Thomas Huth, Juan Quintela,
Paolo Bonzini, Markus Armbruster, qemu-block, Eric Blake,
Stefan Hajnoczi, Fam Zheng, Laurent Vivier
The following changes since commit 67fe6ae41da64368bc4936b196fee2bf61f8c720:
Merge tag 'pull-tricore-20230621-1' of https://github.com/bkoppelmann/qemu into staging (2023-06-21 20:08:48 +0200)
are available in the Git repository at:
https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
for you to fetch changes up to c53dc569d0a0fb76eaa83f353253a897914948f9:
migration/rdma: Split qemu_fopen_rdma() into input/output functions (2023-06-22 02:45:30 +0200)
----------------------------------------------------------------
Migration Pull request (20230621)
In this pull request:
- fix for multifd thread creation (fabiano)
- dirtylimit (hyman)
* migration-test will go on next PULL request, as it has failures.
- Improve error description (tejus)
- improve -incoming and set parameters before calling incoming (wei)
- migration atomic counters reviewed patches (quintela)
- migration-test refactoring reviewed (quintela)
Please apply.
----------------------------------------------------------------
Fabiano Rosas (2):
migration/multifd: Rename threadinfo.c functions
migration/multifd: Protect accesses to migration_threads
Hyman Huang(黄勇) (8):
softmmu/dirtylimit: Add parameter check for hmp "set_vcpu_dirty_limit"
qapi/migration: Introduce x-vcpu-dirty-limit-period parameter
qapi/migration: Introduce vcpu-dirty-limit parameters
migration: Introduce dirty-limit capability
migration: Refactor auto-converge capability logic
migration: Put the detection logic before auto-converge checking
migration: Implement dirty-limit convergence algo
migration: Extend query-migrate to provide dirty page limit info
Juan Quintela (16):
migration-test: Be consistent for ppc
migration-test: Make machine_opts regular with other options
migration-test: Create arch_opts
migration-test: machine_opts is really arch specific
migration-test: Create kvm_opts
migration-test: bootpath is the same for all tests and for all archs
migration-test: Add bootfile_create/delete() functions
migration-test: dirtylimit checks for x86_64 arch before
migration-test: simplify shmem_opts handling
qemu-file: Rename qemu_file_transferred_ fast -> noflush
migration: Change qemu_file_transferred to noflush
migration: Use qemu_file_transferred_noflush() for block migration.
qemu_file: Make qemu_file_is_writable() static
qemu-file: Simplify qemu_file_shutdown()
qemu-file: Make qemu_file_get_error_obj() static
migration/rdma: Split qemu_fopen_rdma() into input/output functions
Tejus GK (2):
migration: Update error description whenever migration fails
migration: Refactor repeated call of yank_unregister_instance
Wei Wang (2):
migration: enforce multifd and postcopy preempt to be set before
incoming
qtest/migration-tests.c: use "-incoming defer" for postcopy tests
qapi/migration.json | 74 +++++++++++++++++---
include/sysemu/dirtylimit.h | 2 +
migration/options.h | 1 +
migration/qemu-file.h | 14 ++--
migration/threadinfo.h | 7 +-
migration/block.c | 4 +-
migration/migration-hmp-cmds.c | 26 +++++++
migration/migration.c | 40 +++++++----
migration/multifd.c | 4 +-
migration/options.c | 87 +++++++++++++++++++++++
migration/qemu-file.c | 24 ++-----
migration/ram.c | 59 +++++++++++++---
migration/rdma.c | 39 +++++------
migration/savevm.c | 6 +-
migration/threadinfo.c | 19 ++++-
migration/vmstate.c | 4 +-
softmmu/dirtylimit.c | 97 +++++++++++++++++++++++---
tests/qtest/migration-test.c | 123 ++++++++++++++++++---------------
migration/trace-events | 1 +
19 files changed, 472 insertions(+), 159 deletions(-)
base-commit: 5f9dd6a8ce3961db4ce47411ed2097ad88bdf5fc
prerequisite-patch-id: 99c8bffa9428838925e330eb2881bab476122579
prerequisite-patch-id: 77ba427fd916aeb395e95aa0e7190f84e98e96ab
prerequisite-patch-id: 9983d46fa438d7075a37be883529e37ae41e4228
prerequisite-patch-id: 207f7529924b12dcb57f6557d6db6f79ceb2d682
prerequisite-patch-id: 5ad1799a13845dbf893a28a202b51a6b50d95d90
prerequisite-patch-id: c51959aacd6d65ee84fcd4f1b2aed3dd6f6af879
prerequisite-patch-id: da9dbb6799b2da002c0896574334920097e4c50a
prerequisite-patch-id: c1110ffafbaf5465fb277a20db809372291f7846
prerequisite-patch-id: 8307c92bedd07446214b35b40206eb6793a7384d
prerequisite-patch-id: 0a6106cd4a508d5e700a7ff6c25edfdd03c8ca3d
prerequisite-patch-id: 83205051de22382e75bf4acdf69e59315801fa0d
prerequisite-patch-id: 8c9b3cba89d555c071a410041e6da41806106a7e
prerequisite-patch-id: 0ff62a33b9a242226ccc1f5424a516de803c9fe5
prerequisite-patch-id: 25b8ae1ebe09ace14457c454cfcb23077c37346c
prerequisite-patch-id: 466ea91d5be41fe345dacd4d17bbbe5ce13118c2
prerequisite-patch-id: d1045858f9729ac62eccf2e83ebf95cfebae2cb5
prerequisite-patch-id: 0276ec02073bda5426de39e2f2e81eef080b4f54
prerequisite-patch-id: 7afb4450a163cc1a63ea23831c50214966969131
prerequisite-patch-id: 06c053ce4f41db9675bd1778ae8f6a483641fcef
prerequisite-patch-id: 13ea05d54d741ed08b3bfefa1fc8bedb9c81c782
prerequisite-patch-id: 99c4e2b7101bc8c4b9515129a1bbe6f068053dbf
prerequisite-patch-id: 1e393a196dc7a1ee75f3cc3cebbb591c5422102f
prerequisite-patch-id: 2cf497b41f5024ede0a224b1f5b172226067a534
prerequisite-patch-id: 2a70276ed61d33fc4f3b52560753c05d1cd413be
prerequisite-patch-id: 17ec40f4388b62ba8bf3ac1546c6913f5d1f6079
prerequisite-patch-id: dba969ce9d6cf69c1319661a7d81b1c1c719804d
prerequisite-patch-id: 8d800cda87167314f07320bdb3df936c323e4a40
prerequisite-patch-id: 25d4aaf54ea66f30e426fa38bdd4e0f47303c513
prerequisite-patch-id: 082c9d8584c1daff1e827e44ee3047178e7004a7
prerequisite-patch-id: 0ef73900899425ae2f00751347afdce3739aa954
prerequisite-patch-id: e7db4730b791b71aaf417ee0f65fb6304566aaf8
prerequisite-patch-id: 62d7f28f8196039507ffe362f97723395d7bb704
prerequisite-patch-id: ea8de47bcb54e33bcc67e59e9ed752a4d1fad703
prerequisite-patch-id: 497893ef92e1ea56bd8605e6990a05cb4c7f9293
prerequisite-patch-id: 3dc869c80ee568449bbfa2a9bc427524d0e8970b
prerequisite-patch-id: 52c14b6fb14ed4ccd685385a9fbc6297b762c0ef
prerequisite-patch-id: 23de8371e9e3277c374a47f9bd10de209a22fdd5
prerequisite-patch-id: d21f036dd106af3375fb920bf0a557fe2b86d98e
prerequisite-patch-id: ca717076e9de83d6ce370f7374c4729a9f586fae
prerequisite-patch-id: a235b6ab3237155f2b39e8e10d47ddd250f6b6cc
prerequisite-patch-id: 6db2aa3d8a5804c85dd171864f5140fadf5f72da
prerequisite-patch-id: a799b883f4cb41c34ad074220293f372c6e0a9c7
prerequisite-patch-id: 5e012c426aef7b2f07513cec68e7efa1cf85fe52
prerequisite-patch-id: 4e614e7e3395dda7bae5f9fa21257c57cce45a39
prerequisite-patch-id: 67f8e68622c9698280ff5c5dc7469f36daf9a012
prerequisite-patch-id: d86078331449a21499e3f955e27bc87294969346
prerequisite-patch-id: 3f30d10e0ac7f53307f6b462eaf5b47151b73631
prerequisite-patch-id: a5d84769b776873697b1eb18b6b170a168e68133
prerequisite-patch-id: e5058b3e0ca9f80886bd2c1667349bffc6949da6
prerequisite-patch-id: 669d7e2ef30468034acfa33d4da0404caae526a9
prerequisite-patch-id: 3276a659cbe45efca6a6d3a4579af3cf7ceda992
prerequisite-patch-id: 624657b1f538edfb852b09889d6583970665cad9
prerequisite-patch-id: d7d70d557970f203c288f153361c6782245ba9f9
prerequisite-patch-id: fcbab0e48ac1385d2de1be7852b2adb74f04acfc
prerequisite-patch-id: 88e76e8dc51b054c84c497344424f24e877e325b
prerequisite-patch-id: dc4c193c8571ae5969a6c498edc7cec8ef438bc1
prerequisite-patch-id: 39c1ee32082f6ce1dd088d5a8340ca9b3e2434a4
prerequisite-patch-id: 28deb0caf0f5eac3fc0a866d2901667a56870f6d
prerequisite-patch-id: 53b48b35c9b8217b2da5212d7ed48320d355a943
prerequisite-patch-id: 4469f3663714a65a06b7c472ce9c92ab7502a00f
prerequisite-patch-id: a75f08b401d251fc3465b65f94a8db8d32d8900a
prerequisite-patch-id: 4b7ac86af078a260f656914089da8f43cdaab8ff
prerequisite-patch-id: 398f62abad95f55181d2cf8a383f69000de97af4
prerequisite-patch-id: e102cdc3834018fa88f9fe98a78afb46ff316124
prerequisite-patch-id: 1c30c139ef694b9fd8314bffb4b9b2a02c6c9233
prerequisite-patch-id: d96c304f1c63fa840440508f5b3be70a922b2210
prerequisite-patch-id: a8be25a6b6834ac542d40eaec47a60caf2fe545c
prerequisite-patch-id: d29aa52f7a6c5e61cf793addbbc7d7ea5f383725
prerequisite-patch-id: c7a6d171e568f4aba6f94126abc6a9a64caefcc2
prerequisite-patch-id: dcbf2e0e9b3b3cd985423993bdd411fc0cdf4646
prerequisite-patch-id: d5f1a450785cb6e8d74e152b7026621cd4077d04
prerequisite-patch-id: 6179fd2a7c7d6866906365b564fcef05428318b3
prerequisite-patch-id: 3f0e5189f1046e8fd37b2f104a5ee6c306438bed
prerequisite-patch-id: 9b8f5520663a347e1981c46d1d72f6b7efeef222
prerequisite-patch-id: 6171b4310abc56d03a21f95848beff3f04c04333
prerequisite-patch-id: 93336ba3fd1e3875b644ec6007ef22480df750a2
prerequisite-patch-id: 9a76b6408f38b96af9ad91269a0fac8d1893bdef
prerequisite-patch-id: 74c166a9a1ccafef04f3894085a3b3ed5215df7a
prerequisite-patch-id: 71833e48cfa5875247770ac8b94ab957e4eab4fe
prerequisite-patch-id: 9a14e6755e7d7605267abaeab62d70e651d622e3
prerequisite-patch-id: e0b2ce3211263007de8e2a2b60402cfea4a2f1c2
prerequisite-patch-id: bacccbe583a69dd0f081aa74822e92c23f344a64
prerequisite-patch-id: bee3183ab6c76123bc8795dfd5dd6eb19d417832
prerequisite-patch-id: eee73769e563ea8ad64f82473c235a4fef18ce83
prerequisite-patch-id: 5c2c95e068dc614a7708fe830d773e03e8cde4ec
prerequisite-patch-id: deae58b435dc10d80f1ea754613e9ed3e7bd4fe1
prerequisite-patch-id: 1e6e138aeb18a51b6496a265f84f5df1b98471b6
prerequisite-patch-id: 15865988ed0b88b1e28f0e16c50222f61db5ac35
prerequisite-patch-id: 7884031de035d910c1bddee4331a765c6a266243
prerequisite-patch-id: a2deb3751dec34448ab776b455a7b29bd6d8217f
prerequisite-patch-id: d24167f841efc3b8fd8afe06ae02da4d4a594fdc
prerequisite-patch-id: 4f23aea0e73aa0933a53db59ed716a19dcbd575c
prerequisite-patch-id: f54b42bfeb4241cb96b2490e1261e2bc44e7cc07
prerequisite-patch-id: 7e3f3e34be720b25f3db15d2a878bce23b99353c
prerequisite-patch-id: fc91b48c6593c862aa17b221663d2bbed9261f16
prerequisite-patch-id: f20dbadd1962f16e7f4fc4a2b8747443ada94f80
prerequisite-patch-id: 78de9ba010ec43b9ebbc998f4d3afb38fed6be8e
prerequisite-patch-id: 013c99f721a9a0a4af86772cb8d1d40f6942f16f
prerequisite-patch-id: 7ee4a5b056b343ba2d7491158965023af84f56ad
prerequisite-patch-id: d489b42dc81bf9dd7a51381c06182ddffb2230e5
prerequisite-patch-id: fcf1400a8d8b417801baf1450c8c10efa06f4481
prerequisite-patch-id: e475e1e9cd7ce746410ef9839d02ba242cb72090
prerequisite-patch-id: 68d24ec2081c2224a1c133f2c07544177e693a53
prerequisite-patch-id: 6e70864f4065b79db49a206523bcc8b4afded934
prerequisite-patch-id: 0d259c5d3ebb259d36c5d059a82df3b0ffbf1429
prerequisite-patch-id: 392a9fa483070625cad85778e4742c4fc5a2ac5d
prerequisite-patch-id: 62804c908c4abe71d644f18303780084a4c224e8
prerequisite-patch-id: 3f3f1eed1a6150df53aba846237329cf24bb9548
prerequisite-patch-id: 4d1387c6e00b9c62e638ce955cd5d9681754bfb5
prerequisite-patch-id: 3382e00119d0ecd997ae6f99855c74e3eed9237c
prerequisite-patch-id: 95fcef043070fc902f1cad42ef47a94744624530
prerequisite-patch-id: 5b6737d17f3ef9a4ae5d33343125bc597ad2fa62
prerequisite-patch-id: acc0a1e809cb0639e16d4b05cccfd809f8e299b5
prerequisite-patch-id: aa5c2db287402596e20ef0f935aea67007484f58
prerequisite-patch-id: a0318b66bb3f1f096f7d77c3f03e2018f68430b4
prerequisite-patch-id: 4d709e0d8ac627462de259723ac4fcb8efcc5342
prerequisite-patch-id: 811deca81c0cbdc5a3e977dd282f46427e79dbe2
prerequisite-patch-id: 8f66923ad1790ee9e567b9b1d155c7836c3e7cb8
prerequisite-patch-id: c0f2a71d9835e66a216af2ff3b3d434eec44ab0d
prerequisite-patch-id: 5e1a47e077d7bd22b1af9439e1cf1ae38830f763
prerequisite-patch-id: 04bd404d9e3268cbc399458be5bd69190852884a
prerequisite-patch-id: 6d630be7173e6e8da9a14739d32701f76bdf32a3
prerequisite-patch-id: 463c4f52f60b2c8fbf165caedbbc1bfa3bd2a265
prerequisite-patch-id: 67403a8553d60ec8938906f28319176f2ef0ecc4
prerequisite-patch-id: 18443a910e260eed010a2d4cd8f255678476eb0a
prerequisite-patch-id: cf8d1aab1c343ad9bb6de1b980bf7a8aa7479e79
prerequisite-patch-id: cb8ee956876a21478178d46383e760f2a29ec60a
prerequisite-patch-id: 0384e88e9105ce6907f44a55ae0d22bb57b11701
prerequisite-patch-id: 010b9094c8a9b1b41dc49fc07d9704a7be6374c0
prerequisite-patch-id: 3e96fcf9486ad8decca891534d4d2b41f786087a
prerequisite-patch-id: d77c491a6a5760ca828ecbb1445a77dcf9bebf0b
prerequisite-patch-id: 5c2e9b005429bca543ac6dd39912731d9388eb10
prerequisite-patch-id: 741c7b38494d42817d2d71adb6f267268c9cb208
prerequisite-patch-id: b97fb8b1e5848e8561fc599ef7173b2b07782918
prerequisite-patch-id: cca04ee447a117242679dc946638dd876bbaf931
prerequisite-patch-id: 7e7784bd5ec5ecfdb8d7130975a915fb88e8c26b
prerequisite-patch-id: 5a5137d28df7153718ef924d49a821fc1482b220
prerequisite-patch-id: 3d010e879709b5540948b11c2692a28fd5bdd1ea
prerequisite-patch-id: 3d418aca0f2a4a5a3e8cbd3c9da2a5229dd26f61
prerequisite-patch-id: 423c72136d6c85888dfac083efb54b518323f0dd
prerequisite-patch-id: 099453248d834931415e431e1bb470c1f99cdfa2
prerequisite-patch-id: 9fde172c51e107415194ecce2a3e218d2fab7f60
prerequisite-patch-id: 602a0f6b0121ef0296a3fe14872d59b8a0673b24
prerequisite-patch-id: ded33eb3b89b9da9e7a32ca5c6da3da2b65d3a83
prerequisite-patch-id: 56baaaf053e49e6306c076a80c5e7759a7c27805
prerequisite-patch-id: ec1ace1ef1e30d4c56143b3148b4f70c123ddd85
prerequisite-patch-id: da789135fed934d00e9020a90ec700a273fb18fa
prerequisite-patch-id: ca5e644043851df5c7472b4149806ac98b6cce5c
prerequisite-patch-id: d4f92797793802ce3021c4d1b45d2d0a94c51f07
prerequisite-patch-id: 992ab3fb9e82572494ff6f8af494ffbe8ba5cc6e
prerequisite-patch-id: 8baf34ed9789ac6a26f1d84648869131cea522fa
prerequisite-patch-id: 63f23c96115916e7707201d828fcab42ceeea385
prerequisite-patch-id: 7c166511597619b0f03043ce3a0a7237032bf7e0
prerequisite-patch-id: 35e2044bb5d5ea5d041ed01579943f6cffa6ee41
prerequisite-patch-id: 497b25db87c4c64173167f70d3a48154e44ca655
prerequisite-patch-id: d78941852e275e8b8ed03254a89324fef00fcf18
prerequisite-patch-id: 3cc8bd3f0886b6dca9197bfdba4bfa6ff360bf24
prerequisite-patch-id: 61890eaa88bcab4ac0aa2ef9a131d56ef8376f1d
prerequisite-patch-id: 097b002021ddd12f7b5180ccaa744ecce925f04b
prerequisite-patch-id: f599ecb06efafb8954fc9ad6d08a9819da860d28
prerequisite-patch-id: 484f86ad1f4fb5d8b2debfb1e7a8c7b6d0610f77
prerequisite-patch-id: 52db9f2823a1ad78ffb91f84911cd9963b295465
prerequisite-patch-id: d936469bba0e3720bf957995c0f842e56c815e85
prerequisite-patch-id: 9bfb8fba3a480dd5c8878fd307d96cbaaf521cff
prerequisite-patch-id: 13798977298ceb264b789af0eae36c714b0d17c1
prerequisite-patch-id: fc68338d9f90388d062471e46e9b691b00302c69
prerequisite-patch-id: 25e5e3c9ec0a10aa564b3cf32fe793aaf1e705b7
prerequisite-patch-id: 9160fde4ddf465965328264e443638a3ab0d09d9
prerequisite-patch-id: 20d490f27f569e5784d5501d842b4be8a4931127
prerequisite-patch-id: 9b7866c753c962fa8107eeff4d25ff40a546f329
prerequisite-patch-id: 653af78a35e216c7dc891a2d46849bdde83a5dfd
prerequisite-patch-id: f27f61687c1f423d4ad8f690a5114644364d26c5
prerequisite-patch-id: 705f5636837961012d4ad7cef93e3c06bf2828ad
prerequisite-patch-id: cf18ec6cc3cfb012e68e34f5319e5ca5fc08b04f
prerequisite-patch-id: 496d56c7025c252650cef84afef17c1cbe761011
prerequisite-patch-id: 849633dc3d51365926008341078b46e05349daf3
prerequisite-patch-id: bdd1c3731521ab2bec54b5757a1b1460ffa8a76b
prerequisite-patch-id: 7a4af5cc7a8d0c73fac182936c3034b8b97a1d81
prerequisite-patch-id: b6640f64140c35ef6614a47ac213d815eaacc385
prerequisite-patch-id: 619f2ef543c43d0e537cc5ec16a88aff8d08f7ee
prerequisite-patch-id: 5b7d7481f225dd506c53bc58d913b3dcf670ab3b
prerequisite-patch-id: 76b2c41e04d00656cff256f72fb81617baf75004
prerequisite-patch-id: e1e44b6305e28dc040a7495749756d39f086b407
prerequisite-patch-id: 23b6508f200dc43698cc460285ec47fef2e54a45
prerequisite-patch-id: b225a20694d23a03f11eab8c86a0a4b7b640a302
prerequisite-patch-id: e630536287ec238942c1948f8b966e15384fd3ff
prerequisite-patch-id: 327ccf1526eb31e0899de28c42c6bc687e274be5
prerequisite-patch-id: 6b4ab77555a60de86ad045538ea19460e0f65a27
prerequisite-patch-id: e9766937f032e867e3953ea53c47a5c385c68feb
prerequisite-patch-id: b6f9c8b0ac1722d3809bc48d89ace7070e1c447a
prerequisite-patch-id: e72efe42eec650c70433a02ce9c97ddeaef02d0d
prerequisite-patch-id: 9f17b908169ba05390869f3e35b69df00bd46d41
prerequisite-patch-id: b9a82d490d17914a2f80b0430cf4973dd93ccd2f
prerequisite-patch-id: 2563a1dcb7798936aa2c4a83dc1e1ddd585af241
prerequisite-patch-id: d4e0829e923f60dbf80b39493879280c8da2ff4e
prerequisite-patch-id: e42a585260f999d56d222e7b2f44405f6cfbc406
prerequisite-patch-id: e163a8cb28f70824dcde6295354df58a9ece12c0
prerequisite-patch-id: fb1ecde59655a27c8d3fdff20a66cbbe3f00bd6d
prerequisite-patch-id: 8cb6ee4b61ec57e6972ac1ee1253e85a7765deaa
prerequisite-patch-id: 499ac0cc3c1b5a08c8609ecc7d210c35edf01370
prerequisite-patch-id: 8434161ce61b9b53baf7c2643709e08bede8891d
prerequisite-patch-id: bf01608b5710ec0f4d041056988bf42e8785b7be
prerequisite-patch-id: 31a846018ddc61b29dffd1c56e8ef29c5aeb164b
prerequisite-patch-id: 1a0ab80ff0cde1bf49af90be105764d6504c42e2
prerequisite-patch-id: 4463e631ff73f48de1f761fa08fc2d3a0d02d2e3
prerequisite-patch-id: 1ef269c11829325108dc950ebb6ebbb0413f491d
prerequisite-patch-id: 244d96e340cb984ccaeb47ba1fc32c093df85877
prerequisite-patch-id: 6490e0a9923c2341d6aa355576fc178e830a10ef
prerequisite-patch-id: 73099e3b819e51e70cedf57b5de2da94161a789b
prerequisite-patch-id: 49c793919ad92fe3162b6cc8efac6762037534e1
prerequisite-patch-id: 53796f63653ecdf3f2a66d9cb3181546441d4ac0
prerequisite-patch-id: aa6fbd949a032b6fce2b599309f633393bc0bb08
prerequisite-patch-id: eb33de06c18bd23ef820d12cd5e340eda4fc15dc
prerequisite-patch-id: 9acaaae21935bd460bd45f89ac9e5b3c7be398fc
prerequisite-patch-id: d00c2306f14a27317918b63539f65df88ee08222
prerequisite-patch-id: 87cb389190ca57ff8250be430e3470b39575271d
prerequisite-patch-id: 2eaee2a99c51fd5a0df72b5253e8064d9facbcd0
--
2.40.1
* Re: [PULL 00/30] Next patches
2023-06-22 2:12 Juan Quintela
@ 2023-06-22 5:38 ` Richard Henderson
2023-06-22 7:31 ` Juan Quintela
0 siblings, 1 reply; 47+ messages in thread
From: Richard Henderson @ 2023-06-22 5:38 UTC (permalink / raw)
To: Juan Quintela, qemu-devel
Cc: Peter Xu, Leonardo Bras, Thomas Huth, Paolo Bonzini,
Markus Armbruster, qemu-block, Eric Blake, Stefan Hajnoczi,
Fam Zheng, Laurent Vivier
On 6/22/23 04:12, Juan Quintela wrote:
> The following changes since commit 67fe6ae41da64368bc4936b196fee2bf61f8c720:
>
> Merge tag 'pull-tricore-20230621-1' of https://github.com/bkoppelmann/qemu into staging (2023-06-21 20:08:48 +0200)
>
> are available in the Git repository at:
>
> https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
>
> for you to fetch changes up to c53dc569d0a0fb76eaa83f353253a897914948f9:
>
> migration/rdma: Split qemu_fopen_rdma() into input/output functions (2023-06-22 02:45:30 +0200)
>
> ----------------------------------------------------------------
> Migration Pull request (20230621)
>
> In this pull request:
>
> - fix for multifd thread creation (fabiano)
> - dirtylimit (hyman)
> * migration-test will go on next PULL request, as it has failures.
> - Improve error description (tejus)
> - improve -incoming and set parameters before calling incoming (wei)
> - migration atomic counters reviewed patches (quintela)
> - migration-test refactoring reviewed (quintela)
>
> Please apply.
You really need to test at least one 32-bit host regularly.
It should be trivial for you to do an i686 build somewhere.
https://gitlab.com/qemu-project/qemu/-/jobs/4518975360#L4817
https://gitlab.com/qemu-project/qemu/-/jobs/4518975263#L3486
https://gitlab.com/qemu-project/qemu/-/jobs/4518975261#L3145
https://gitlab.com/qemu-project/qemu/-/jobs/4518975298#L3372
https://gitlab.com/qemu-project/qemu/-/jobs/4518975301#L3221
../softmmu/dirtylimit.c:558:58: error: format specifies type 'long' but the argument has
type 'int64_t' (aka 'long long') [-Werror,-Wformat]
error_setg(&err, "invalid dirty page limit %ld", dirty_rate);
~~~ ^~~~~~~~~~
%lld
r~
* Re: [PULL 00/30] Next patches
2023-06-22 5:38 ` Richard Henderson
@ 2023-06-22 7:31 ` Juan Quintela
0 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2023-06-22 7:31 UTC (permalink / raw)
To: Richard Henderson
Cc: qemu-devel, Peter Xu, Leonardo Bras, Thomas Huth, Paolo Bonzini,
Markus Armbruster, qemu-block, Eric Blake, Stefan Hajnoczi,
Fam Zheng, Laurent Vivier
Richard Henderson <richard.henderson@linaro.org> wrote:
> On 6/22/23 04:12, Juan Quintela wrote:
>> The following changes since commit 67fe6ae41da64368bc4936b196fee2bf61f8c720:
>> Merge tag 'pull-tricore-20230621-1'
>> of https://github.com/bkoppelmann/qemu into staging (2023-06-21
>> 20:08:48 +0200)
>> are available in the Git repository at:
>> https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
>> for you to fetch changes up to
>> c53dc569d0a0fb76eaa83f353253a897914948f9:
>> migration/rdma: Split qemu_fopen_rdma() into input/output
>> functions (2023-06-22 02:45:30 +0200)
>> ----------------------------------------------------------------
>> Migration Pull request (20230621)
>> In this pull request:
>> - fix for multifd thread creation (fabiano)
>> - dirtylimit (hyman)
>> * migration-test will go in the next PULL request, as it has failures.
>> - Improve error description (tejus)
>> - improve -incoming and set parameters before calling incoming (wei)
>> - migration atomic counters reviewed patches (quintela)
>> - migration-test refactoring reviewed (quintela)
>> Please apply.
>
> You really need to test at least one 32-bit host regularly.
> It should be trivial for you to do an i686 build somewhere.
>
> https://gitlab.com/qemu-project/qemu/-/jobs/4518975360#L4817
> https://gitlab.com/qemu-project/qemu/-/jobs/4518975263#L3486
> https://gitlab.com/qemu-project/qemu/-/jobs/4518975261#L3145
> https://gitlab.com/qemu-project/qemu/-/jobs/4518975298#L3372
> https://gitlab.com/qemu-project/qemu/-/jobs/4518975301#L3221
>
> ../softmmu/dirtylimit.c:558:58: error: format specifies type 'long'
> but the argument has type 'int64_t' (aka 'long long')
> [-Werror,-Wformat]
> error_setg(&err, "invalid dirty page limit %ld", dirty_rate);
> ~~~ ^~~~~~~~~~
> %lld
Grrr, sorry.
Will not happen again.
Later, Juan.
>
>
> r~
* [PULL 00/30] Next patches
@ 2023-06-22 16:54 Juan Quintela
2023-06-23 5:45 ` Richard Henderson
0 siblings, 1 reply; 47+ messages in thread
From: Juan Quintela @ 2023-06-22 16:54 UTC (permalink / raw)
To: qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Juan Quintela,
Leonardo Bras, Markus Armbruster, qemu-block
The following changes since commit b455ce4c2f300c8ba47cba7232dd03261368a4cb:
Merge tag 'q800-for-8.1-pull-request' of https://github.com/vivier/qemu-m68k into staging (2023-06-22 10:18:32 +0200)
are available in the Git repository at:
https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
for you to fetch changes up to 23e4307eadc1497bd0a11ca91041768f15963b68:
migration/rdma: Split qemu_fopen_rdma() into input/output functions (2023-06-22 18:11:58 +0200)
----------------------------------------------------------------
Migration Pull request (20230621) take 2
In this pull request the only change is fixing a 32-bit compilation issue.
Please apply.
[take 1]
- fix for multifd thread creation (fabiano)
- dirtylimit (hyman)
* migration-test will go in the next PULL request, as it has failures.
- Improve error description (tejus)
- improve -incoming and set parameters before calling incoming (wei)
- migration atomic counters reviewed patches (quintela)
- migration-test refactoring reviewed (quintela)
----------------------------------------------------------------
Fabiano Rosas (2):
migration/multifd: Rename threadinfo.c functions
migration/multifd: Protect accesses to migration_threads
Hyman Huang(黄勇) (8):
softmmu/dirtylimit: Add parameter check for hmp "set_vcpu_dirty_limit"
qapi/migration: Introduce x-vcpu-dirty-limit-period parameter
qapi/migration: Introduce vcpu-dirty-limit parameters
migration: Introduce dirty-limit capability
migration: Refactor auto-converge capability logic
migration: Put the detection logic before auto-converge checking
migration: Implement dirty-limit convergence algo
migration: Extend query-migrate to provide dirty page limit info
Juan Quintela (16):
migration-test: Be consistent for ppc
migration-test: Make machine_opts regular with other options
migration-test: Create arch_opts
migration-test: machine_opts is really arch specific
migration-test: Create kvm_opts
migration-test: bootpath is the same for all tests and for all archs
migration-test: Add bootfile_create/delete() functions
migration-test: dirtylimit checks for x86_64 arch before
migration-test: simplify shmem_opts handling
qemu-file: Rename qemu_file_transferred_ fast -> noflush
migration: Change qemu_file_transferred to noflush
migration: Use qemu_file_transferred_noflush() for block migration.
qemu_file: Make qemu_file_is_writable() static
qemu-file: Simplify qemu_file_shutdown()
qemu-file: Make qemu_file_get_error_obj() static
migration/rdma: Split qemu_fopen_rdma() into input/output functions
Tejus GK (2):
migration: Update error description whenever migration fails
migration: Refactor repeated call of yank_unregister_instance
Wei Wang (2):
migration: enforce multifd and postcopy preempt to be set before
incoming
qtest/migration-tests.c: use "-incoming defer" for postcopy tests
qapi/migration.json | 74 +++++++++++++++++---
include/sysemu/dirtylimit.h | 2 +
migration/options.h | 1 +
migration/qemu-file.h | 14 ++--
migration/threadinfo.h | 7 +-
migration/block.c | 4 +-
migration/migration-hmp-cmds.c | 26 +++++++
migration/migration.c | 40 +++++++----
migration/multifd.c | 4 +-
migration/options.c | 87 +++++++++++++++++++++++
migration/qemu-file.c | 24 ++-----
migration/ram.c | 59 +++++++++++++---
migration/rdma.c | 39 +++++------
migration/savevm.c | 6 +-
migration/threadinfo.c | 19 ++++-
migration/vmstate.c | 4 +-
softmmu/dirtylimit.c | 97 +++++++++++++++++++++++---
tests/qtest/migration-test.c | 123 ++++++++++++++++++---------------
migration/trace-events | 1 +
19 files changed, 472 insertions(+), 159 deletions(-)
base-commit: 5f9dd6a8ce3961db4ce47411ed2097ad88bdf5fc
prerequisite-patch-id: 99c8bffa9428838925e330eb2881bab476122579
prerequisite-patch-id: 77ba427fd916aeb395e95aa0e7190f84e98e96ab
prerequisite-patch-id: 9983d46fa438d7075a37be883529e37ae41e4228
prerequisite-patch-id: 207f7529924b12dcb57f6557d6db6f79ceb2d682
prerequisite-patch-id: 5ad1799a13845dbf893a28a202b51a6b50d95d90
prerequisite-patch-id: c51959aacd6d65ee84fcd4f1b2aed3dd6f6af879
prerequisite-patch-id: da9dbb6799b2da002c0896574334920097e4c50a
prerequisite-patch-id: c1110ffafbaf5465fb277a20db809372291f7846
prerequisite-patch-id: 8307c92bedd07446214b35b40206eb6793a7384d
prerequisite-patch-id: 0a6106cd4a508d5e700a7ff6c25edfdd03c8ca3d
prerequisite-patch-id: 83205051de22382e75bf4acdf69e59315801fa0d
prerequisite-patch-id: 8c9b3cba89d555c071a410041e6da41806106a7e
prerequisite-patch-id: 0ff62a33b9a242226ccc1f5424a516de803c9fe5
prerequisite-patch-id: 25b8ae1ebe09ace14457c454cfcb23077c37346c
prerequisite-patch-id: 466ea91d5be41fe345dacd4d17bbbe5ce13118c2
prerequisite-patch-id: d1045858f9729ac62eccf2e83ebf95cfebae2cb5
prerequisite-patch-id: 0276ec02073bda5426de39e2f2e81eef080b4f54
prerequisite-patch-id: 7afb4450a163cc1a63ea23831c50214966969131
prerequisite-patch-id: 06c053ce4f41db9675bd1778ae8f6a483641fcef
prerequisite-patch-id: 13ea05d54d741ed08b3bfefa1fc8bedb9c81c782
prerequisite-patch-id: 99c4e2b7101bc8c4b9515129a1bbe6f068053dbf
prerequisite-patch-id: 1e393a196dc7a1ee75f3cc3cebbb591c5422102f
prerequisite-patch-id: 2cf497b41f5024ede0a224b1f5b172226067a534
prerequisite-patch-id: 2a70276ed61d33fc4f3b52560753c05d1cd413be
prerequisite-patch-id: 17ec40f4388b62ba8bf3ac1546c6913f5d1f6079
prerequisite-patch-id: dba969ce9d6cf69c1319661a7d81b1c1c719804d
prerequisite-patch-id: 8d800cda87167314f07320bdb3df936c323e4a40
prerequisite-patch-id: 25d4aaf54ea66f30e426fa38bdd4e0f47303c513
prerequisite-patch-id: 082c9d8584c1daff1e827e44ee3047178e7004a7
prerequisite-patch-id: 0ef73900899425ae2f00751347afdce3739aa954
prerequisite-patch-id: e7db4730b791b71aaf417ee0f65fb6304566aaf8
prerequisite-patch-id: 62d7f28f8196039507ffe362f97723395d7bb704
prerequisite-patch-id: ea8de47bcb54e33bcc67e59e9ed752a4d1fad703
prerequisite-patch-id: 497893ef92e1ea56bd8605e6990a05cb4c7f9293
prerequisite-patch-id: 3dc869c80ee568449bbfa2a9bc427524d0e8970b
prerequisite-patch-id: 52c14b6fb14ed4ccd685385a9fbc6297b762c0ef
prerequisite-patch-id: 23de8371e9e3277c374a47f9bd10de209a22fdd5
prerequisite-patch-id: d21f036dd106af3375fb920bf0a557fe2b86d98e
prerequisite-patch-id: ca717076e9de83d6ce370f7374c4729a9f586fae
prerequisite-patch-id: a235b6ab3237155f2b39e8e10d47ddd250f6b6cc
prerequisite-patch-id: 6db2aa3d8a5804c85dd171864f5140fadf5f72da
prerequisite-patch-id: a799b883f4cb41c34ad074220293f372c6e0a9c7
prerequisite-patch-id: 5e012c426aef7b2f07513cec68e7efa1cf85fe52
prerequisite-patch-id: 4e614e7e3395dda7bae5f9fa21257c57cce45a39
prerequisite-patch-id: 67f8e68622c9698280ff5c5dc7469f36daf9a012
prerequisite-patch-id: d86078331449a21499e3f955e27bc87294969346
prerequisite-patch-id: 3f30d10e0ac7f53307f6b462eaf5b47151b73631
prerequisite-patch-id: a5d84769b776873697b1eb18b6b170a168e68133
prerequisite-patch-id: e5058b3e0ca9f80886bd2c1667349bffc6949da6
prerequisite-patch-id: 669d7e2ef30468034acfa33d4da0404caae526a9
prerequisite-patch-id: 3276a659cbe45efca6a6d3a4579af3cf7ceda992
prerequisite-patch-id: 624657b1f538edfb852b09889d6583970665cad9
prerequisite-patch-id: d7d70d557970f203c288f153361c6782245ba9f9
prerequisite-patch-id: fcbab0e48ac1385d2de1be7852b2adb74f04acfc
prerequisite-patch-id: 88e76e8dc51b054c84c497344424f24e877e325b
prerequisite-patch-id: dc4c193c8571ae5969a6c498edc7cec8ef438bc1
prerequisite-patch-id: 39c1ee32082f6ce1dd088d5a8340ca9b3e2434a4
prerequisite-patch-id: 28deb0caf0f5eac3fc0a866d2901667a56870f6d
prerequisite-patch-id: 53b48b35c9b8217b2da5212d7ed48320d355a943
prerequisite-patch-id: 4469f3663714a65a06b7c472ce9c92ab7502a00f
prerequisite-patch-id: a75f08b401d251fc3465b65f94a8db8d32d8900a
prerequisite-patch-id: 4b7ac86af078a260f656914089da8f43cdaab8ff
prerequisite-patch-id: 398f62abad95f55181d2cf8a383f69000de97af4
prerequisite-patch-id: e102cdc3834018fa88f9fe98a78afb46ff316124
prerequisite-patch-id: 1c30c139ef694b9fd8314bffb4b9b2a02c6c9233
prerequisite-patch-id: d96c304f1c63fa840440508f5b3be70a922b2210
prerequisite-patch-id: a8be25a6b6834ac542d40eaec47a60caf2fe545c
prerequisite-patch-id: d29aa52f7a6c5e61cf793addbbc7d7ea5f383725
prerequisite-patch-id: c7a6d171e568f4aba6f94126abc6a9a64caefcc2
prerequisite-patch-id: dcbf2e0e9b3b3cd985423993bdd411fc0cdf4646
prerequisite-patch-id: d5f1a450785cb6e8d74e152b7026621cd4077d04
prerequisite-patch-id: 6179fd2a7c7d6866906365b564fcef05428318b3
prerequisite-patch-id: 3f0e5189f1046e8fd37b2f104a5ee6c306438bed
prerequisite-patch-id: 9b8f5520663a347e1981c46d1d72f6b7efeef222
prerequisite-patch-id: 6171b4310abc56d03a21f95848beff3f04c04333
prerequisite-patch-id: 93336ba3fd1e3875b644ec6007ef22480df750a2
prerequisite-patch-id: 9a76b6408f38b96af9ad91269a0fac8d1893bdef
prerequisite-patch-id: 74c166a9a1ccafef04f3894085a3b3ed5215df7a
prerequisite-patch-id: 71833e48cfa5875247770ac8b94ab957e4eab4fe
prerequisite-patch-id: 9a14e6755e7d7605267abaeab62d70e651d622e3
prerequisite-patch-id: e0b2ce3211263007de8e2a2b60402cfea4a2f1c2
prerequisite-patch-id: bacccbe583a69dd0f081aa74822e92c23f344a64
prerequisite-patch-id: bee3183ab6c76123bc8795dfd5dd6eb19d417832
prerequisite-patch-id: eee73769e563ea8ad64f82473c235a4fef18ce83
prerequisite-patch-id: 5c2c95e068dc614a7708fe830d773e03e8cde4ec
prerequisite-patch-id: deae58b435dc10d80f1ea754613e9ed3e7bd4fe1
prerequisite-patch-id: 1e6e138aeb18a51b6496a265f84f5df1b98471b6
prerequisite-patch-id: 15865988ed0b88b1e28f0e16c50222f61db5ac35
prerequisite-patch-id: 7884031de035d910c1bddee4331a765c6a266243
prerequisite-patch-id: a2deb3751dec34448ab776b455a7b29bd6d8217f
prerequisite-patch-id: d24167f841efc3b8fd8afe06ae02da4d4a594fdc
prerequisite-patch-id: 4f23aea0e73aa0933a53db59ed716a19dcbd575c
prerequisite-patch-id: f54b42bfeb4241cb96b2490e1261e2bc44e7cc07
prerequisite-patch-id: 7e3f3e34be720b25f3db15d2a878bce23b99353c
prerequisite-patch-id: fc91b48c6593c862aa17b221663d2bbed9261f16
prerequisite-patch-id: f20dbadd1962f16e7f4fc4a2b8747443ada94f80
prerequisite-patch-id: 78de9ba010ec43b9ebbc998f4d3afb38fed6be8e
prerequisite-patch-id: 013c99f721a9a0a4af86772cb8d1d40f6942f16f
prerequisite-patch-id: 7ee4a5b056b343ba2d7491158965023af84f56ad
prerequisite-patch-id: d489b42dc81bf9dd7a51381c06182ddffb2230e5
prerequisite-patch-id: fcf1400a8d8b417801baf1450c8c10efa06f4481
prerequisite-patch-id: e475e1e9cd7ce746410ef9839d02ba242cb72090
prerequisite-patch-id: 68d24ec2081c2224a1c133f2c07544177e693a53
prerequisite-patch-id: 6e70864f4065b79db49a206523bcc8b4afded934
prerequisite-patch-id: 0d259c5d3ebb259d36c5d059a82df3b0ffbf1429
prerequisite-patch-id: 392a9fa483070625cad85778e4742c4fc5a2ac5d
prerequisite-patch-id: 62804c908c4abe71d644f18303780084a4c224e8
prerequisite-patch-id: 3f3f1eed1a6150df53aba846237329cf24bb9548
prerequisite-patch-id: 4d1387c6e00b9c62e638ce955cd5d9681754bfb5
prerequisite-patch-id: 3382e00119d0ecd997ae6f99855c74e3eed9237c
prerequisite-patch-id: 95fcef043070fc902f1cad42ef47a94744624530
prerequisite-patch-id: 5b6737d17f3ef9a4ae5d33343125bc597ad2fa62
prerequisite-patch-id: acc0a1e809cb0639e16d4b05cccfd809f8e299b5
prerequisite-patch-id: aa5c2db287402596e20ef0f935aea67007484f58
prerequisite-patch-id: a0318b66bb3f1f096f7d77c3f03e2018f68430b4
prerequisite-patch-id: 4d709e0d8ac627462de259723ac4fcb8efcc5342
prerequisite-patch-id: 811deca81c0cbdc5a3e977dd282f46427e79dbe2
prerequisite-patch-id: 8f66923ad1790ee9e567b9b1d155c7836c3e7cb8
prerequisite-patch-id: c0f2a71d9835e66a216af2ff3b3d434eec44ab0d
prerequisite-patch-id: 5e1a47e077d7bd22b1af9439e1cf1ae38830f763
prerequisite-patch-id: 04bd404d9e3268cbc399458be5bd69190852884a
prerequisite-patch-id: 6d630be7173e6e8da9a14739d32701f76bdf32a3
prerequisite-patch-id: 463c4f52f60b2c8fbf165caedbbc1bfa3bd2a265
prerequisite-patch-id: 67403a8553d60ec8938906f28319176f2ef0ecc4
prerequisite-patch-id: 18443a910e260eed010a2d4cd8f255678476eb0a
prerequisite-patch-id: cf8d1aab1c343ad9bb6de1b980bf7a8aa7479e79
prerequisite-patch-id: cb8ee956876a21478178d46383e760f2a29ec60a
prerequisite-patch-id: 0384e88e9105ce6907f44a55ae0d22bb57b11701
prerequisite-patch-id: 010b9094c8a9b1b41dc49fc07d9704a7be6374c0
prerequisite-patch-id: 3e96fcf9486ad8decca891534d4d2b41f786087a
prerequisite-patch-id: d77c491a6a5760ca828ecbb1445a77dcf9bebf0b
prerequisite-patch-id: 5c2e9b005429bca543ac6dd39912731d9388eb10
prerequisite-patch-id: 741c7b38494d42817d2d71adb6f267268c9cb208
prerequisite-patch-id: b97fb8b1e5848e8561fc599ef7173b2b07782918
prerequisite-patch-id: cca04ee447a117242679dc946638dd876bbaf931
prerequisite-patch-id: 7e7784bd5ec5ecfdb8d7130975a915fb88e8c26b
prerequisite-patch-id: 5a5137d28df7153718ef924d49a821fc1482b220
prerequisite-patch-id: 3d010e879709b5540948b11c2692a28fd5bdd1ea
prerequisite-patch-id: 3d418aca0f2a4a5a3e8cbd3c9da2a5229dd26f61
prerequisite-patch-id: 423c72136d6c85888dfac083efb54b518323f0dd
prerequisite-patch-id: 099453248d834931415e431e1bb470c1f99cdfa2
prerequisite-patch-id: 9fde172c51e107415194ecce2a3e218d2fab7f60
prerequisite-patch-id: 602a0f6b0121ef0296a3fe14872d59b8a0673b24
prerequisite-patch-id: ded33eb3b89b9da9e7a32ca5c6da3da2b65d3a83
prerequisite-patch-id: 56baaaf053e49e6306c076a80c5e7759a7c27805
prerequisite-patch-id: ec1ace1ef1e30d4c56143b3148b4f70c123ddd85
prerequisite-patch-id: da789135fed934d00e9020a90ec700a273fb18fa
prerequisite-patch-id: ca5e644043851df5c7472b4149806ac98b6cce5c
prerequisite-patch-id: d4f92797793802ce3021c4d1b45d2d0a94c51f07
prerequisite-patch-id: 992ab3fb9e82572494ff6f8af494ffbe8ba5cc6e
prerequisite-patch-id: 8baf34ed9789ac6a26f1d84648869131cea522fa
prerequisite-patch-id: 63f23c96115916e7707201d828fcab42ceeea385
prerequisite-patch-id: 7c166511597619b0f03043ce3a0a7237032bf7e0
prerequisite-patch-id: 35e2044bb5d5ea5d041ed01579943f6cffa6ee41
prerequisite-patch-id: 497b25db87c4c64173167f70d3a48154e44ca655
prerequisite-patch-id: d78941852e275e8b8ed03254a89324fef00fcf18
prerequisite-patch-id: 3cc8bd3f0886b6dca9197bfdba4bfa6ff360bf24
prerequisite-patch-id: 61890eaa88bcab4ac0aa2ef9a131d56ef8376f1d
prerequisite-patch-id: 097b002021ddd12f7b5180ccaa744ecce925f04b
prerequisite-patch-id: f599ecb06efafb8954fc9ad6d08a9819da860d28
prerequisite-patch-id: 484f86ad1f4fb5d8b2debfb1e7a8c7b6d0610f77
prerequisite-patch-id: 52db9f2823a1ad78ffb91f84911cd9963b295465
prerequisite-patch-id: d936469bba0e3720bf957995c0f842e56c815e85
prerequisite-patch-id: 9bfb8fba3a480dd5c8878fd307d96cbaaf521cff
prerequisite-patch-id: 13798977298ceb264b789af0eae36c714b0d17c1
prerequisite-patch-id: fc68338d9f90388d062471e46e9b691b00302c69
prerequisite-patch-id: 25e5e3c9ec0a10aa564b3cf32fe793aaf1e705b7
prerequisite-patch-id: 9160fde4ddf465965328264e443638a3ab0d09d9
prerequisite-patch-id: 20d490f27f569e5784d5501d842b4be8a4931127
prerequisite-patch-id: 9b7866c753c962fa8107eeff4d25ff40a546f329
prerequisite-patch-id: 653af78a35e216c7dc891a2d46849bdde83a5dfd
prerequisite-patch-id: f27f61687c1f423d4ad8f690a5114644364d26c5
prerequisite-patch-id: 705f5636837961012d4ad7cef93e3c06bf2828ad
prerequisite-patch-id: cf18ec6cc3cfb012e68e34f5319e5ca5fc08b04f
prerequisite-patch-id: 496d56c7025c252650cef84afef17c1cbe761011
prerequisite-patch-id: 849633dc3d51365926008341078b46e05349daf3
prerequisite-patch-id: bdd1c3731521ab2bec54b5757a1b1460ffa8a76b
prerequisite-patch-id: 7a4af5cc7a8d0c73fac182936c3034b8b97a1d81
prerequisite-patch-id: b6640f64140c35ef6614a47ac213d815eaacc385
prerequisite-patch-id: 619f2ef543c43d0e537cc5ec16a88aff8d08f7ee
prerequisite-patch-id: 5b7d7481f225dd506c53bc58d913b3dcf670ab3b
prerequisite-patch-id: 76b2c41e04d00656cff256f72fb81617baf75004
prerequisite-patch-id: e1e44b6305e28dc040a7495749756d39f086b407
prerequisite-patch-id: 23b6508f200dc43698cc460285ec47fef2e54a45
prerequisite-patch-id: b225a20694d23a03f11eab8c86a0a4b7b640a302
prerequisite-patch-id: e630536287ec238942c1948f8b966e15384fd3ff
prerequisite-patch-id: 327ccf1526eb31e0899de28c42c6bc687e274be5
prerequisite-patch-id: 6b4ab77555a60de86ad045538ea19460e0f65a27
prerequisite-patch-id: e9766937f032e867e3953ea53c47a5c385c68feb
prerequisite-patch-id: b6f9c8b0ac1722d3809bc48d89ace7070e1c447a
prerequisite-patch-id: e72efe42eec650c70433a02ce9c97ddeaef02d0d
prerequisite-patch-id: 9f17b908169ba05390869f3e35b69df00bd46d41
prerequisite-patch-id: b9a82d490d17914a2f80b0430cf4973dd93ccd2f
prerequisite-patch-id: 2563a1dcb7798936aa2c4a83dc1e1ddd585af241
prerequisite-patch-id: d4e0829e923f60dbf80b39493879280c8da2ff4e
prerequisite-patch-id: e42a585260f999d56d222e7b2f44405f6cfbc406
prerequisite-patch-id: e163a8cb28f70824dcde6295354df58a9ece12c0
prerequisite-patch-id: fb1ecde59655a27c8d3fdff20a66cbbe3f00bd6d
prerequisite-patch-id: 8cb6ee4b61ec57e6972ac1ee1253e85a7765deaa
prerequisite-patch-id: 499ac0cc3c1b5a08c8609ecc7d210c35edf01370
prerequisite-patch-id: 8434161ce61b9b53baf7c2643709e08bede8891d
prerequisite-patch-id: bf01608b5710ec0f4d041056988bf42e8785b7be
prerequisite-patch-id: 31a846018ddc61b29dffd1c56e8ef29c5aeb164b
prerequisite-patch-id: 1a0ab80ff0cde1bf49af90be105764d6504c42e2
prerequisite-patch-id: 4463e631ff73f48de1f761fa08fc2d3a0d02d2e3
prerequisite-patch-id: 1ef269c11829325108dc950ebb6ebbb0413f491d
prerequisite-patch-id: 244d96e340cb984ccaeb47ba1fc32c093df85877
prerequisite-patch-id: 6490e0a9923c2341d6aa355576fc178e830a10ef
prerequisite-patch-id: 73099e3b819e51e70cedf57b5de2da94161a789b
prerequisite-patch-id: 49c793919ad92fe3162b6cc8efac6762037534e1
prerequisite-patch-id: 53796f63653ecdf3f2a66d9cb3181546441d4ac0
prerequisite-patch-id: aa6fbd949a032b6fce2b599309f633393bc0bb08
prerequisite-patch-id: eb33de06c18bd23ef820d12cd5e340eda4fc15dc
prerequisite-patch-id: 9acaaae21935bd460bd45f89ac9e5b3c7be398fc
prerequisite-patch-id: d00c2306f14a27317918b63539f65df88ee08222
prerequisite-patch-id: 87cb389190ca57ff8250be430e3470b39575271d
prerequisite-patch-id: 2eaee2a99c51fd5a0df72b5253e8064d9facbcd0
prerequisite-patch-id: fa319f93ccd1f3f07955a69b793d0c3ca697fe56
prerequisite-patch-id: 8541186582e59ab7fc5b8242042d068b5ff344ec
prerequisite-patch-id: 02406627f62825a368975d64a3da2c68fafeccea
prerequisite-patch-id: 5ff0c64a0f40ad9c17a67f3ec4af3c24ca81ba7c
prerequisite-patch-id: 7f2ccb5fa7573dd10b4dec4d9b3dda8c5416e77e
prerequisite-patch-id: f32307884ffd5b79bb333fffbf7a1a1fb11e1077
prerequisite-patch-id: dc6d9a25a341dbd7ec34453b5692f0c4ee0e5ebd
prerequisite-patch-id: fdc3da57663352fd762b6ea8b3fb39545cd86f69
prerequisite-patch-id: 93933d68c7698bf5c66aaff4b9e61b8a7cd49deb
prerequisite-patch-id: 601286c6aae0c275b6ebc489197c21f6425d1dea
prerequisite-patch-id: b6bd48dfa8f1d956be0a63ccc27e0f2648ea6188
prerequisite-patch-id: d7bc5f0deb3489e698b5d93d756bb41fe334229e
prerequisite-patch-id: 3f3d8b03dc8e39e30049f94c234fb32333bea7ab
prerequisite-patch-id: e7e5398f510b2490efec81bc227ab3a66fda6db0
prerequisite-patch-id: db92723389041e01bed4d3cab7d800bd0060d44d
prerequisite-patch-id: 7ba892dca22bd88cb2b4f03e9c7ec3ab4828f365
prerequisite-patch-id: 56aac78c79b3a4329e7014d173c8f6edea4ebee1
prerequisite-patch-id: 10e17f897a2bb483d2cd0038598e5e2f08514e4a
prerequisite-patch-id: dfb16a3bd2e302f02dd89fd4337446b454100ed4
prerequisite-patch-id: 145f9e823ba743c9bf1c3f9abcdd1d4648a82d53
prerequisite-patch-id: f7152653caf4f759f0537fc8f7898261e839261a
prerequisite-patch-id: e727e63dd83f98fc829b6b74a11d7f936950aa14
prerequisite-patch-id: c5e20e303c45ddc096195a9d7c5c2cc2432f0290
prerequisite-patch-id: f448aa53083685570eb21b735641430a91610921
--
2.40.1
* Re: [PULL 00/30] Next patches
2023-06-22 16:54 Juan Quintela
@ 2023-06-23 5:45 ` Richard Henderson
2023-06-23 7:34 ` Juan Quintela
` (3 more replies)
0 siblings, 4 replies; 47+ messages in thread
From: Richard Henderson @ 2023-06-23 5:45 UTC (permalink / raw)
To: Juan Quintela, qemu-devel
Cc: Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Leonardo Bras,
Markus Armbruster, qemu-block
On 6/22/23 18:54, Juan Quintela wrote:
> The following changes since commit b455ce4c2f300c8ba47cba7232dd03261368a4cb:
>
> Merge tag 'q800-for-8.1-pull-request' of https://github.com/vivier/qemu-m68k into staging (2023-06-22 10:18:32 +0200)
>
> are available in the Git repository at:
>
> https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
>
> for you to fetch changes up to 23e4307eadc1497bd0a11ca91041768f15963b68:
>
> migration/rdma: Split qemu_fopen_rdma() into input/output functions (2023-06-22 18:11:58 +0200)
>
> ----------------------------------------------------------------
> Migration Pull request (20230621) take 2
>
> In this pull request the only change is fixing a 32-bit compilation issue.
>
> Please apply.
>
> [take 1]
> - fix for multifd thread creation (fabiano)
> - dirtylimit (hyman)
> * migration-test will go in the next PULL request, as it has failures.
> - Improve error description (tejus)
> - improve -incoming and set parameters before calling incoming (wei)
> - migration atomic counters reviewed patches (quintela)
> - migration-test refactoring reviewed (quintela)
New failure with check-cfi-x86_64:
https://gitlab.com/qemu-project/qemu/-/jobs/4527202764#L188
/builds/qemu-project/qemu/build/pyvenv/bin/meson test --no-rebuild -t 0 --num-processes
1 --print-errorlogs
1/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/qom-test OK
6.55s 8 subtests passed
▶ 2/350 ERROR:../tests/qtest/migration-test.c:320:check_guests_ram: assertion failed:
(bad == 0) ERROR
2/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/migration-test ERROR
151.99s killed by signal 6 SIGABRT
>>> G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daemon.sh
MALLOC_PERTURB_=3 QTEST_QEMU_IMG=./qemu-img
QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon
QTEST_QEMU_BINARY=./qemu-system-x86_64
/builds/qemu-project/qemu/build/tests/qtest/migration-test --tap -k
――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――
stderr:
qemu-system-x86_64: Unable to read from socket: Connection reset by peer
Memory content inconsistency at 4f65000 first_byte = 30 last_byte = 2f current = 88
hit_edge = 1
**
ERROR:../tests/qtest/migration-test.c:320:check_guests_ram: assertion failed: (bad == 0)
(test program exited with status code -6)
――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
r~
* Re: [PULL 00/30] Next patches
2023-06-23 5:45 ` Richard Henderson
@ 2023-06-23 7:34 ` Juan Quintela
2023-06-25 22:01 ` Juan Quintela
` (2 subsequent siblings)
3 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2023-06-23 7:34 UTC (permalink / raw)
To: Richard Henderson
Cc: qemu-devel, Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Leonardo Bras,
Markus Armbruster, qemu-block
Richard Henderson <richard.henderson@linaro.org> wrote:
> On 6/22/23 18:54, Juan Quintela wrote:
>> The following changes since commit b455ce4c2f300c8ba47cba7232dd03261368a4cb:
>> Merge tag 'q800-for-8.1-pull-request'
>> of https://github.com/vivier/qemu-m68k into staging (2023-06-22
>> 10:18:32 +0200)
>> are available in the Git repository at:
>> https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
>> for you to fetch changes up to
>> 23e4307eadc1497bd0a11ca91041768f15963b68:
>> migration/rdma: Split qemu_fopen_rdma() into input/output
>> functions (2023-06-22 18:11:58 +0200)
>> ----------------------------------------------------------------
>> Migration Pull request (20230621) take 2
>> In this pull request the only change is fixing a 32-bit
>> compilation issue.
>> Please apply.
>> [take 1]
>> - fix for multifd thread creation (fabiano)
>> - dirtylimit (hyman)
>> * migration-test will go in the next PULL request, as it has failures.
>> - Improve error description (tejus)
>> - improve -incoming and set parameters before calling incoming (wei)
>> - migration atomic counters reviewed patches (quintela)
>> - migration-test refactoring reviewed (quintela)
I had the feeling when I woke up that today was going to be a great day.
Confirmed.
> New failure with check-cfi-x86_64:
Aha. CFI. Something I don't understand is failing on me.
/me googles.
/me enables cfi+lto and compiles with clang.
[50/491] Compiling C object subprojects/berkeley-testfloat-3/libtestfloat.a.p/source_test_az_f128_rx.c.o
[51/491] Compiling C object subprojects/berkeley-testfloat-3/libtestfloat.a.p/source_test_az_f128.c.o
[52/491] Compiling C object subprojects/berkeley-testfloat-3/libtestfloat.a.p/source_test_abz_f128.c.o
[53/491] Compiling C object subprojects/berkeley-testfloat-3/libtestfloat.a.p/source_test_abcz_f128.c.o
[54/491] Compiling C object subprojects/berkeley-testfloat-3/libtestfloat.a.p/source_test_ab_f128_z_bool.c.o
[55/491] Linking target qemu-system-x86_64
FAILED: qemu-system-x86_64
clang++ -m64 -mcx16 @qemu-system-x86_64.rsp
/usr/bin/ld: cannot find libchardev.fa: Too many open files
/usr/bin/ld: cannot find libqmp.fa: Too many open files
/usr/bin/ld: cannot find libpage-vary-common.a: Too many open files
/usr/bin/ld: cannot find libqemuutil.a: Too many open files
/usr/bin/ld: cannot find subprojects/libvhost-user/libvhost-user-glib.a: Too many open files
/usr/bin/ld: cannot find subprojects/libvhost-user/libvhost-user.a: Too many open files
/usr/bin/ld: cannot find tcg/libtcg_softmmu.fa: Too many open files
/usr/bin/ld: cannot find libmigration.fa: Too many open files
/usr/bin/ld: cannot find libhwcore.fa: Too many open files
/usr/bin/ld: cannot find libqom.fa: Too many open files
/usr/bin/ld: cannot find gdbstub/libgdb_softmmu.fa: Too many open files
/usr/bin/ld: cannot find libio.fa: Too many open files
/usr/bin/ld: cannot find libcrypto.fa: Too many open files
/usr/bin/ld: cannot find libauthz.fa: Too many open files
/usr/bin/ld: cannot find libblockdev.fa: Too many open files
/usr/bin/ld: cannot find libblock.fa: Too many open files
/usr/bin/ld: cannot find libchardev.fa: Too many open files
/usr/bin/ld: cannot find libqmp.fa: Too many open files
/usr/bin/ld: cannot find /usr/lib64/libpixman-1.so: Too many open files
/usr/bin/ld: cannot find /usr/lib64/libepoxy.so: Too many open files
/usr/bin/ld: cannot find /usr/lib64/libxenctrl.so: Too many open files
/usr/bin/ld: cannot find /usr/lib64/libxenstore.so: Too many open files
/usr/bin/ld: cannot find /usr/lib64/libxenforeignmemory.so: Too many open files
/usr/bin/ld: cannot find /usr/lib64/libxengnttab.so: Too many open files
/usr/bin/ld: cannot find /usr/lib64/libxenevtchn.so: Too many open files
Confirmed, today is going to be a great day.
No check-cfi<anything> target for me.
/me investigates what is going on. Found this and retries.
AR=llvm-ar CC=clang CXX=clang++ /mnt/code/qemu/full/configure \
    --enable-cfi --target-list=x86_64-softmmu
Gives the same error.
After a while of desperation trying to disable features, etc.,
just doing a plain ulimit -n 4096 fixed the problem.
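For reference, the per-shell file-descriptor limit can be inspected and raised like this before re-running the link (4096 is the value that worked here; anything up to the hard limit shown by ulimit -Hn is accepted):

```shell
# Show the current soft limit on open file descriptors
ulimit -Sn
# Raise it for this shell and its children; an LTO link of
# qemu-system-x86_64 opens every .fa/.a archive at once
ulimit -Sn 4096
# Verify the new limit before re-running ninja
ulimit -Sn
```

The change only lasts for the current shell; a persistent limit would go in limits.conf or the systemd unit for the build runner.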
Here we go.
Later, Juan.
* Re: [PULL 00/30] Next patches
2023-06-23 5:45 ` Richard Henderson
2023-06-23 7:34 ` Juan Quintela
@ 2023-06-25 22:01 ` Juan Quintela
2023-06-26 6:37 ` Richard Henderson
2023-06-26 13:09 ` Juan Quintela
2023-06-27 9:07 ` Juan Quintela
3 siblings, 1 reply; 47+ messages in thread
From: Juan Quintela @ 2023-06-25 22:01 UTC (permalink / raw)
To: Richard Henderson
Cc: qemu-devel, Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Leonardo Bras,
Markus Armbruster, qemu-block
Richard Henderson <richard.henderson@linaro.org> wrote:
> On 6/22/23 18:54, Juan Quintela wrote:
>> The following changes since commit b455ce4c2f300c8ba47cba7232dd03261368a4cb:
>> Merge tag 'q800-for-8.1-pull-request'
>> of https://github.com/vivier/qemu-m68k into staging (2023-06-22
>> 10:18:32 +0200)
>> are available in the Git repository at:
>> https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
>> for you to fetch changes up to
>> 23e4307eadc1497bd0a11ca91041768f15963b68:
>> migration/rdma: Split qemu_fopen_rdma() into input/output
>> functions (2023-06-22 18:11:58 +0200)
>> ----------------------------------------------------------------
>> Migration Pull request (20230621) take 2
>> In this pull request the only change is fixing a 32-bit
>> compilation issue.
>> Please apply.
>> [take 1]
>> - fix for multifd thread creation (fabiano)
>> - dirtylimit (hyman)
>> * migration-test will go in the next PULL request, as it has failures.
>> - Improve error description (tejus)
>> - improve -incoming and set parameters before calling incoming (wei)
>> - migration atomic counters reviewed patches (quintela)
>> - migration-test refactoring reviewed (quintela)
>
> New failure with check-cfi-x86_64:
>
> https://gitlab.com/qemu-project/qemu/-/jobs/4527202764#L188
First of all, is there a way to get to the test log? In particular, I
am interested in knowing at least what test has failed (yes,
migration-test doesn't tell you much more).
After a bit more wrestling, I have been able to get things compiling
with this command:
$ /mnt/code/qemu/full/configure --enable-cfi \
    --target-list=x86_64-softmmu --enable-cfi-debug --cc=clang --cxx=clang++ \
    --disable-docs --enable-safe-stack --disable-slirp
It should basically be the one that check-cfi-x86_64 is using if I
understand the build recipes correctly (that is a BIG IF).
And it passes for me with flying colors.
Here I have Fedora38, builder has F37.
> /builds/qemu-project/qemu/build/pyvenv/bin/meson test --no-rebuild -t
> 0 --num-processes 1 --print-errorlogs
> 1/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/qom-test
> OK 6.55s 8 subtests passed
> ▶ 2/350 ERROR:../tests/qtest/migration-test.c:320:check_guests_ram:
> assertion failed: (bad == 0) ERROR
> 2/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/migration-test
> ERROR 151.99s killed by signal 6 SIGABRT
>>>>
> G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daemon.sh
> MALLOC_PERTURB_=3 QTEST_QEMU_IMG=./qemu-img
> QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon
> QTEST_QEMU_BINARY=./qemu-system-x86_64
> /builds/qemu-project/qemu/build/tests/qtest/migration-test --tap
> -k
> ――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――
> stderr:
> qemu-system-x86_64: Unable to read from socket: Connection reset by peer
This is the interesting bit: why is the connection closed?
> Memory content inconsistency at 4f65000 first_byte = 30 last_byte = 2f
> current = 88 hit_edge = 1
> **
> ERROR:../tests/qtest/migration-test.c:320:check_guests_ram: assertion failed: (bad == 0)
>
> (test program exited with status code -6)
This makes zero sense, unless we haven't migrated all the guest
state, which is apparently what has happened.
Is there a place on the web interface to see the full logs? Or is that
the only thing that the CI system stores?
Later, Juan.
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PULL 00/30] Next patches
2023-06-25 22:01 ` Juan Quintela
@ 2023-06-26 6:37 ` Richard Henderson
2023-06-26 13:05 ` Juan Quintela
0 siblings, 1 reply; 47+ messages in thread
From: Richard Henderson @ 2023-06-26 6:37 UTC (permalink / raw)
To: quintela
Cc: qemu-devel, Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Leonardo Bras,
Markus Armbruster, qemu-block
On 6/26/23 00:01, Juan Quintela wrote:
> Richard Henderson <richard.henderson@linaro.org> wrote:
>> On 6/22/23 18:54, Juan Quintela wrote:
>>> The following changes since commit b455ce4c2f300c8ba47cba7232dd03261368a4cb:
>>> Merge tag 'q800-for-8.1-pull-request'
>>> of https://github.com/vivier/qemu-m68k into staging (2023-06-22
>>> 10:18:32 +0200)
>>> are available in the Git repository at:
>>> https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
>>> for you to fetch changes up to
>>> 23e4307eadc1497bd0a11ca91041768f15963b68:
>>> migration/rdma: Split qemu_fopen_rdma() into input/output
>>> functions (2023-06-22 18:11:58 +0200)
>>> ----------------------------------------------------------------
>>> Migration Pull request (20230621) take 2
>>> In this pull request the only change is fixing a 32-bit compilation
>>> issue.
>>> Please apply.
>>> [take 1]
>>> - fix for multifd thread creation (fabiano)
>>> - dirty limit (hyman)
>>> * migration-test will go on next PULL request, as it has failures.
>>> - Improve error description (tejus)
>>> - improve -incoming and set parameters before calling incoming (wei)
>>> - migration atomic counters reviewed patches (quintela)
>>> - migration-test refactoring reviewed (quintela)
>>
>> New failure with check-cfi-x86_64:
>>
>> https://gitlab.com/qemu-project/qemu/-/jobs/4527202764#L188
>
> First of all, is there a way to get to the test log? In particular, I
> am interested in knowing at least which test has failed (yes,
> migration-test doesn't tell you much more than that).
>
> After a bit more wrestling, I have been able to get things compiling
> with this command:
>
> $ /mnt/code/qemu/full/configure --enable-cfi
> --target-list=x86_64-softmmu --enable-cfi-debug --cc=clang --cxx=clang++
> --disable-docs --enable-safe-stack --disable-slirp
>
> It should basically be the one that check-cfi-x86_64 is using if I
> understand the build recipes correctly (that is a BIG IF).
>
> And it passes for me with flying colors.
> Here I have Fedora38, builder has F37.
>
>> /builds/qemu-project/qemu/build/pyvenv/bin/meson test --no-rebuild -t
>> 0 --num-processes 1 --print-errorlogs
>> 1/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/qom-test
>> OK 6.55s 8 subtests passed
>> ▶ 2/350 ERROR:../tests/qtest/migration-test.c:320:check_guests_ram:
>> assertion failed: (bad == 0) ERROR
>> 2/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/migration-test
>> ERROR 151.99s killed by signal 6 SIGABRT
>>>>>
>> G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daemon.sh
>> MALLOC_PERTURB_=3 QTEST_QEMU_IMG=./qemu-img
>> QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon
>> QTEST_QEMU_BINARY=./qemu-system-x86_64
>> /builds/qemu-project/qemu/build/tests/qtest/migration-test --tap
>> -k
>> ――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――
>> stderr:
>> qemu-system-x86_64: Unable to read from socket: Connection reset by peer
>
> This is the interesting bit: why is the connection closed?
>
>> Memory content inconsistency at 4f65000 first_byte = 30 last_byte = 2f
>> current = 88 hit_edge = 1
>> **
>> ERROR:../tests/qtest/migration-test.c:320:check_guests_ram: assertion failed: (bad == 0)
>>
>> (test program exited with status code -6)
>
> This makes zero sense, unless we haven't migrated all the guest
> state, which is apparently what has happened.
>
> Is there a place on the web interface to see the full logs? Or is that
> the only thing that the CI system stores?
The "full logs" are
https://gitlab.com/qemu-project/qemu/-/jobs/4527202764/artifacts/download?file_type=trace
r~
* Re: [PULL 00/30] Next patches
2023-06-26 6:37 ` Richard Henderson
@ 2023-06-26 13:05 ` Juan Quintela
2023-06-26 13:29 ` Richard Henderson
0 siblings, 1 reply; 47+ messages in thread
From: Juan Quintela @ 2023-06-26 13:05 UTC (permalink / raw)
To: Richard Henderson
Cc: qemu-devel, Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Leonardo Bras,
Markus Armbruster, qemu-block
Richard Henderson <richard.henderson@linaro.org> wrote:
> On 6/26/23 00:01, Juan Quintela wrote:
>> Richard Henderson <richard.henderson@linaro.org> wrote:
>>> On 6/22/23 18:54, Juan Quintela wrote:
>>>> The following changes since commit b455ce4c2f300c8ba47cba7232dd03261368a4cb:
>>>> Merge tag 'q800-for-8.1-pull-request'
>>>> of https://github.com/vivier/qemu-m68k into staging (2023-06-22
>>>> 10:18:32 +0200)
>>>> are available in the Git repository at:
>>>> https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
>>>> for you to fetch changes up to
>>>> 23e4307eadc1497bd0a11ca91041768f15963b68:
>>>> migration/rdma: Split qemu_fopen_rdma() into input/output
>>>> functions (2023-06-22 18:11:58 +0200)
>>>> ----------------------------------------------------------------
>>>> Migration Pull request (20230621) take 2
>>>> In this pull request the only change is fixing a 32-bit compilation
>>>> issue.
>>>> Please apply.
>>>> [take 1]
>>>> - fix for multifd thread creation (fabiano)
>>>> - dirty limit (hyman)
>>>> * migration-test will go on next PULL request, as it has failures.
>>>> - Improve error description (tejus)
>>>> - improve -incoming and set parameters before calling incoming (wei)
>>>> - migration atomic counters reviewed patches (quintela)
>>>> - migration-test refactoring reviewed (quintela)
>>>
>>> New failure with check-cfi-x86_64:
>>>
>>> https://gitlab.com/qemu-project/qemu/-/jobs/4527202764#L188
>> First of all, is there a way to get to the test log? In particular, I
>> am interested in knowing at least which test has failed (yes,
>> migration-test doesn't tell you much more than that).
>> After a bit more wrestling, I have been able to get things compiling
>> with this command:
>> $ /mnt/code/qemu/full/configure --enable-cfi
>> --target-list=x86_64-softmmu --enable-cfi-debug --cc=clang --cxx=clang++
>> --disable-docs --enable-safe-stack --disable-slirp
>> It should basically be the one that check-cfi-x86_64 is using if I
>> understand the build recipes correctly (that is a BIG IF).
>> And it passes for me with flying colors.
>> Here I have Fedora38, builder has F37.
>>
>>> /builds/qemu-project/qemu/build/pyvenv/bin/meson test --no-rebuild -t
>>> 0 --num-processes 1 --print-errorlogs
>>> 1/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/qom-test
>>> OK 6.55s 8 subtests passed
>>> ▶ 2/350 ERROR:../tests/qtest/migration-test.c:320:check_guests_ram:
>>> assertion failed: (bad == 0) ERROR
>>> 2/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/migration-test
>>> ERROR 151.99s killed by signal 6 SIGABRT
>>>>>>
>>> G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daemon.sh
>>> MALLOC_PERTURB_=3 QTEST_QEMU_IMG=./qemu-img
>>> QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon
>>> QTEST_QEMU_BINARY=./qemu-system-x86_64
>>> /builds/qemu-project/qemu/build/tests/qtest/migration-test --tap
>>> -k
>>> ――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――
>>> stderr:
>>> qemu-system-x86_64: Unable to read from socket: Connection reset by peer
>> This is the interesting bit: why is the connection closed?
>>
>>> Memory content inconsistency at 4f65000 first_byte = 30 last_byte = 2f
>>> current = 88 hit_edge = 1
>>> **
>>> ERROR:../tests/qtest/migration-test.c:320:check_guests_ram: assertion failed: (bad == 0)
>>>
>>> (test program exited with status code -6)
>> This makes zero sense, unless we haven't migrated all the guest
>> state, which is apparently what has happened.
>> Is there a place on the web interface to see the full logs? Or is
>> that the only thing that the CI system stores?
>
> The "full logs" are
>
> https://gitlab.com/qemu-project/qemu/-/jobs/4527202764/artifacts/download?file_type=trace
Not useful. I was hoping for something like the output one gets when
running ./tests/qtest/migration-test by hand.
Anyway, to make things faster:
- created:
/mnt/code/qemu/full/configure --enable-cfi
--target-list=x86_64-softmmu --enable-cfi-debug --cc=clang --cxx=clang++
--disable-docs --enable-safe-stack --disable-slirp
It worked like a charm.
- Your test run:
qemu-system-x86_64: Unable to read from socket: Connection reset by peer
One of the sides dies, so anything else after that doesn't matter.
Either I don't understand what CFI is (and I don't rule out that
possibility), or I can't understand how checking indirect function
calls can make migration-test die without a single CFI error message.
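For context, CFI here is clang's control-flow integrity: every indirect call is checked against the static type of the target function. A minimal sketch of what a CFI failure actually looks like (the file name is made up, and the flags are assumptions based on clang's documentation rather than the exact QEMU build; the demo skips itself if a CFI-capable toolchain is not installed):

```shell
# Deliberately trigger a clang CFI indirect-call failure.
cat > cfi-demo.c <<'EOF'
typedef int (*int_fn)(int);
static float not_an_int_fn(float x) { return x; }
int main(void) {
    /* Indirect call through a pointer of the wrong type: with
     * -fsanitize=cfi this is diagnosed at the call site. */
    int_fn f = (int_fn)(void *)not_an_int_fn;
    return f(1);
}
EOF
# cfi-icall needs LTO, lld and hidden visibility; -fno-sanitize-trap
# asks for a printed diagnostic instead of a bare SIGILL trap.
if clang -O0 -flto -fuse-ld=lld -fvisibility=hidden \
         -fsanitize=cfi -fno-sanitize-trap=cfi \
         cfi-demo.c -o cfi-demo 2>/dev/null; then
    ./cfi-demo 2>&1 || echo "cfi-demo aborted (exit $?)"
else
    echo "clang with LTO/CFI support not available; skipping"
fi > cfi-demo.out 2>&1
cat cfi-demo.out
```

The point is that a genuine CFI violation is loud: either a "control flow integrity check failed" diagnostic or an immediate trap, not a silent connection reset like the one in the qtest log above.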
- I tried the CI pipeline myself, on the same exact source:
https://gitlab.com/juan.quintela/qemu/-/commit/23e4307eadc1497bd0a11ca91041768f15963b68/pipelines?ref=sent%2Fmigration-20230621b
This is what fails:
https://gitlab.com/juan.quintela/qemu/-/jobs/4527782025
16/395 ERROR:../tests/qtest/qos-test.c:191:subprocess_run_one_test: child process (/x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-net-pci/virtio-net/virtio-net-tests/vhost-user/reconnect/subprocess [4569]) failed unexpectedly ERROR
16/395 qemu:qtest+qtest-x86_64 / qtest-x86_64/qos-test ERROR 27.46s killed by signal 6 SIGABRT
>>> MALLOC_PERTURB_=92 QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon QTEST_QEMU_BINARY=./qemu-system-x86_64 QTEST_QEMU_IMG=./qemu-img G_TEST_DBUS_DAEMON=/builds/juan.quintela/qemu/tests/dbus-vmstate-daemon.sh /builds/juan.quintela/qemu/build/tests/qtest/qos-test --tap -k
――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――
stderr:
Vhost user backend fails to broadcast fake RARP
qemu-system-x86_64: -chardev socket,id=chr-reconnect,path=/tmp/vhost-test-8XUX61/reconnect.sock,server=on: info: QEMU waiting for connection on: disconnected:unix:/tmp/vhost-test-8XUX61/reconnect.sock,server=on
qemu-system-x86_64: Failed to set msg fds.
qemu-system-x86_64: vhost VQ 0 ring restore failed: -22: Invalid argument (22)
qemu-system-x86_64: Failed to set msg fds.
qemu-system-x86_64: vhost VQ 1 ring restore failed: -22: Invalid argument (22)
**
ERROR:../tests/qtest/vhost-user-test.c:890:wait_for_rings_started: assertion failed (ctpop64(s->rings) == count): (1 == 2)
**
ERROR:../tests/qtest/qos-test.c:191:subprocess_run_one_test: child
process
(/x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-net-pci/virtio-net/virtio-net-tests/vhost-user/reconnect/subprocess
[4569]) failed unexpectedly
vhost? virtio-queue? In a non-migration test?
I don't know what is going on, but this is weird.
Do we have a way to run on that image:
./tests/qtest/migration-test
in a loop until it fails, and at least see which test is failing?
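Locally (outside CI) the brute-force version of that is just a shell loop; a sketch, with binary paths and environment variables assumed from the meson output quoted above, to be adjusted to the actual build directory:

```shell
#!/bin/sh
# Rerun migration-test until it fails, keeping the log of the last run.
# Paths are assumptions; run from the build directory.
i=0
while QTEST_QEMU_BINARY=./qemu-system-x86_64 \
      QTEST_QEMU_IMG=./qemu-img \
      ./tests/qtest/migration-test --tap -k >last-run.log 2>&1; do
    i=$((i + 1))
    echo "run $i: OK"
done
echo "run $((i + 1)) failed; see last-run.log"
```

With --tap the log of the failing run names the exact subtest, which is the piece of information missing from the CI artifacts.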
Later, Juan.
* Re: [PULL 00/30] Next patches
2023-06-23 5:45 ` Richard Henderson
2023-06-23 7:34 ` Juan Quintela
2023-06-25 22:01 ` Juan Quintela
@ 2023-06-26 13:09 ` Juan Quintela
2023-06-27 9:07 ` Juan Quintela
3 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2023-06-26 13:09 UTC (permalink / raw)
To: Richard Henderson
Cc: qemu-devel, Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Leonardo Bras,
Markus Armbruster, qemu-block
Richard Henderson <richard.henderson@linaro.org> wrote:
> On 6/22/23 18:54, Juan Quintela wrote:
>> The following changes since commit b455ce4c2f300c8ba47cba7232dd03261368a4cb:
>> Merge tag 'q800-for-8.1-pull-request'
>> of https://github.com/vivier/qemu-m68k into staging (2023-06-22
>> 10:18:32 +0200)
>> are available in the Git repository at:
>> https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
>> for you to fetch changes up to
>> 23e4307eadc1497bd0a11ca91041768f15963b68:
>> migration/rdma: Split qemu_fopen_rdma() into input/output
>> functions (2023-06-22 18:11:58 +0200)
>> ----------------------------------------------------------------
>> Migration Pull request (20230621) take 2
>> In this pull request the only change is fixing a 32-bit compilation
>> issue.
>> Please apply.
>> [take 1]
>> - fix for multifd thread creation (fabiano)
>> - dirty limit (hyman)
>> * migration-test will go on next PULL request, as it has failures.
>> - Improve error description (tejus)
>> - improve -incoming and set parameters before calling incoming (wei)
>> - migration atomic counters reviewed patches (quintela)
>> - migration-test refactoring reviewed (quintela)
>
> New failure with check-cfi-x86_64:
I am looking at the whole series. I can't see a single function in it
that is new, changes prototypes, etc.
So is this problem related to CFI? Or is it a migration problem that
somehow only happens when one uses CFI?
Inquiring minds want to know. Any clue?
Later, Juan.
>
> https://gitlab.com/qemu-project/qemu/-/jobs/4527202764#L188
>
> /builds/qemu-project/qemu/build/pyvenv/bin/meson test --no-rebuild -t
> 0 --num-processes 1 --print-errorlogs
> 1/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/qom-test
> OK 6.55s 8 subtests passed
> ▶ 2/350 ERROR:../tests/qtest/migration-test.c:320:check_guests_ram:
> assertion failed: (bad == 0) ERROR
> 2/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/migration-test
> ERROR 151.99s killed by signal 6 SIGABRT
>>>>
> G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daemon.sh
> MALLOC_PERTURB_=3 QTEST_QEMU_IMG=./qemu-img
> QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon
> QTEST_QEMU_BINARY=./qemu-system-x86_64
> /builds/qemu-project/qemu/build/tests/qtest/migration-test --tap
> -k
> ――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――
> stderr:
> qemu-system-x86_64: Unable to read from socket: Connection reset by peer
> Memory content inconsistency at 4f65000 first_byte = 30 last_byte = 2f
> current = 88 hit_edge = 1
> **
> ERROR:../tests/qtest/migration-test.c:320:check_guests_ram: assertion failed: (bad == 0)
>
> (test program exited with status code -6)
> ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
>
>
> r~
* Re: [PULL 00/30] Next patches
2023-06-26 13:05 ` Juan Quintela
@ 2023-06-26 13:29 ` Richard Henderson
0 siblings, 0 replies; 47+ messages in thread
From: Richard Henderson @ 2023-06-26 13:29 UTC (permalink / raw)
To: quintela
Cc: qemu-devel, Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Leonardo Bras,
Markus Armbruster, qemu-block
On 6/26/23 15:05, Juan Quintela wrote:
>> The "full logs" are
>>
>> https://gitlab.com/qemu-project/qemu/-/jobs/4527202764/artifacts/download?file_type=trace
>
> Not useful. I was hoping for something like the output one gets when
> running ./tests/qtest/migration-test by hand.
I thought I saw a patch today to save more artifacts.
But the bottom line is that we don't emit enough stuff from any of our
tests to debug them from logs -- we're too used to using other methods.
> Either I don't understand what CFI is (and I don't rule out that
> possibility), or I can't understand how checking indirect function
> calls can make migration-test die without a single CFI error message.
CFI (control-flow integrity) adds checking along indirect call paths,
which may affect timing.
This is almost certainly some sort of race condition.
> Do we have a way to run on that image:
>
> ./tests/qtest/migration-test
>
> in a loop until it fails, and at least see which test is failing?
Not as is, no. You'd have to create a new CI job, and for that you'll need advice beyond
myself.
r~
* Re: [PULL 00/30] Next patches
2023-06-23 5:45 ` Richard Henderson
` (2 preceding siblings ...)
2023-06-26 13:09 ` Juan Quintela
@ 2023-06-27 9:07 ` Juan Quintela
3 siblings, 0 replies; 47+ messages in thread
From: Juan Quintela @ 2023-06-27 9:07 UTC (permalink / raw)
To: Richard Henderson
Cc: qemu-devel, Peter Xu, Paolo Bonzini, Stefan Hajnoczi, Thomas Huth,
Laurent Vivier, Eric Blake, Fam Zheng, Leonardo Bras,
Markus Armbruster, qemu-block, Stefan Hajnoczi, Kevin Wolf
Richard Henderson <richard.henderson@linaro.org> wrote:
> On 6/22/23 18:54, Juan Quintela wrote:
>> The following changes since commit b455ce4c2f300c8ba47cba7232dd03261368a4cb:
>> Merge tag 'q800-for-8.1-pull-request'
>> of https://github.com/vivier/qemu-m68k into staging (2023-06-22
>> 10:18:32 +0200)
>> are available in the Git repository at:
>> https://gitlab.com/juan.quintela/qemu.git tags/next-pull-request
>> for you to fetch changes up to
>> 23e4307eadc1497bd0a11ca91041768f15963b68:
>> migration/rdma: Split qemu_fopen_rdma() into input/output
>> functions (2023-06-22 18:11:58 +0200)
>> ----------------------------------------------------------------
>> Migration Pull request (20230621) take 2
>> In this pull request the only change is fixing a 32-bit compilation
>> issue.
>> Please apply.
>> [take 1]
>> - fix for multifd thread creation (fabiano)
>> - dirty limit (hyman)
>> * migration-test will go on next PULL request, as it has failures.
>> - Improve error description (tejus)
>> - improve -incoming and set parameters before calling incoming (wei)
>> - migration atomic counters reviewed patches (quintela)
>> - migration-test refactoring reviewed (quintela)
>
> New failure with check-cfi-x86_64:
>
> https://gitlab.com/qemu-project/qemu/-/jobs/4527202764#L188
>
> /builds/qemu-project/qemu/build/pyvenv/bin/meson test --no-rebuild -t
> 0 --num-processes 1 --print-errorlogs
> 1/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/qom-test
> OK 6.55s 8 subtests passed
> ▶ 2/350 ERROR:../tests/qtest/migration-test.c:320:check_guests_ram:
> assertion failed: (bad == 0) ERROR
> 2/350 qemu:qtest+qtest-x86_64 / qtest-x86_64/migration-test
> ERROR 151.99s killed by signal 6 SIGABRT
>>>>
> G_TEST_DBUS_DAEMON=/builds/qemu-project/qemu/tests/dbus-vmstate-daemon.sh
> MALLOC_PERTURB_=3 QTEST_QEMU_IMG=./qemu-img
> QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon
> QTEST_QEMU_BINARY=./qemu-system-x86_64
> /builds/qemu-project/qemu/build/tests/qtest/migration-test --tap
> -k
> ――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――
> stderr:
> qemu-system-x86_64: Unable to read from socket: Connection reset by peer
> Memory content inconsistency at 4f65000 first_byte = 30 last_byte = 2f
> current = 88 hit_edge = 1
> **
> ERROR:../tests/qtest/migration-test.c:320:check_guests_ram: assertion failed: (bad == 0)
>
> (test program exited with status code -6)
> ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
Still running in bisect mode (this takes forever).
[cc'ing stefan and kevin]
And now I get this problem with gcov:
https://gitlab.com/juan.quintela/qemu/-/jobs/4546094720
357/423 qemu:block / io-qcow2-copy-before-write ERROR 6.23s exit status 1
>>> PYTHON=/builds/juan.quintela/qemu/build/pyvenv/bin/python3 MALLOC_PERTURB_=154 /builds/juan.quintela/qemu/build/pyvenv/bin/python3 /builds/juan.quintela/qemu/build/../tests/qemu-iotests/check -tap -qcow2 copy-before-write --source-dir /builds/juan.quintela/qemu/tests/qemu-iotests --build-dir /builds/juan.quintela/qemu/build/tests/qemu-iotests
――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――
stderr:
--- /builds/juan.quintela/qemu/tests/qemu-iotests/tests/copy-before-write.out
+++ /builds/juan.quintela/qemu/build/scratch/qcow2-file-copy-before-write/copy-before-write.out.bad
@@ -1,5 +1,21 @@
-....
+...F
+======================================================================
+FAIL: test_timeout_break_snapshot (__main__.TestCbwError)
+----------------------------------------------------------------------
+Traceback (most recent call last):
+ File "/builds/juan.quintela/qemu/tests/qemu-iotests/tests/copy-before-write", line 210, in test_timeout_break_snapshot
+ self.assertEqual(log, """\
+AssertionError: 'wrot[195 chars]read 1048576/1048576 bytes at offset 0\n1 MiB,[46 chars]c)\n' != 'wrot[195 chars]read failed: Permission denied\n'
+ wrote 524288/524288 bytes at offset 0
+ 512 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+ wrote 524288/524288 bytes at offset 524288
+ 512 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
++ read failed: Permission denied
+- read 1048576/1048576 bytes at offset 0
+- 1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+
----------------------------------------------------------------------
Ran 4 tests
-OK
+FAILED (failures=1)
(test program exited with status code 1)
――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
I have no clue how my changes could make qtests fail.
Especially with a read-permission error.
Any clue?
Later, Juan.
PS: Yes, continuing the bisect.
end of thread, other threads:[~2023-06-27 9:13 UTC | newest]
Thread overview: 47+ messages
-- links below jump to the message on this page --
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
2022-11-15 15:34 ` [PULL 01/30] migration/channel-block: fix return value for qio_channel_block_{readv, writev} Juan Quintela
2022-11-15 15:34 ` [PULL 02/30] migration/multifd/zero-copy: Create helper function for flushing Juan Quintela
2022-11-15 15:34 ` [PULL 03/30] migration: check magic value for deciding the mapping of channels Juan Quintela
2022-11-15 15:34 ` [PULL 04/30] multifd: Create page_size fields into both MultiFD{Recv, Send}Params Juan Quintela
2022-11-15 15:34 ` [PULL 05/30] multifd: Create page_count " Juan Quintela
2022-11-15 15:34 ` [PULL 06/30] migration: Export ram_transferred_ram() Juan Quintela
2022-11-15 15:34 ` [PULL 07/30] migration: Export ram_release_page() Juan Quintela
2022-11-15 15:34 ` [PULL 08/30] Update AVX512 support for xbzrle_encode_buffer Juan Quintela
2022-11-15 15:34 ` [PULL 09/30] Unit test code and benchmark code Juan Quintela
2022-11-15 15:34 ` [PULL 10/30] migration: Fix possible infinite loop of ram save process Juan Quintela
2022-11-15 15:34 ` [PULL 11/30] migration: Fix race on qemu_file_shutdown() Juan Quintela
2022-11-15 15:34 ` [PULL 12/30] migration: Disallow postcopy preempt to be used with compress Juan Quintela
2022-11-15 15:34 ` [PULL 13/30] migration: Use non-atomic ops for clear log bitmap Juan Quintela
2022-11-15 15:34 ` [PULL 14/30] migration: Disable multifd explicitly with compression Juan Quintela
2022-11-15 15:34 ` [PULL 15/30] migration: Take bitmap mutex when completing ram migration Juan Quintela
2022-11-15 15:35 ` [PULL 16/30] migration: Add postcopy_preempt_active() Juan Quintela
2022-11-15 15:35 ` [PULL 17/30] migration: Cleanup xbzrle zero page cache update logic Juan Quintela
2022-11-15 15:35 ` [PULL 18/30] migration: Trivial cleanup save_page_header() on same block check Juan Quintela
2022-11-15 15:35 ` [PULL 19/30] migration: Remove RAMState.f references in compression code Juan Quintela
2022-11-15 15:35 ` [PULL 20/30] migration: Yield bitmap_mutex properly when sending/sleeping Juan Quintela
2022-11-15 15:35 ` [PULL 21/30] migration: Use atomic ops properly for page accountings Juan Quintela
2022-11-15 15:35 ` [PULL 22/30] migration: Teach PSS about host page Juan Quintela
2022-11-15 15:35 ` [PULL 23/30] migration: Introduce pss_channel Juan Quintela
2022-11-15 15:35 ` [PULL 24/30] migration: Add pss_init() Juan Quintela
2022-11-15 15:35 ` [PULL 25/30] migration: Make PageSearchStatus part of RAMState Juan Quintela
2022-11-15 15:35 ` [PULL 26/30] migration: Move last_sent_block into PageSearchStatus Juan Quintela
2022-11-15 15:35 ` [PULL 27/30] migration: Send requested page directly in rp-return thread Juan Quintela
2022-11-15 15:35 ` [PULL 28/30] migration: Remove old preempt code around state maintainance Juan Quintela
2022-11-15 15:35 ` [PULL 29/30] migration: Drop rs->f Juan Quintela
2022-11-15 15:35 ` [PULL 30/30] migration: Block migration comment or code is wrong Juan Quintela
2022-11-15 18:06 ` [PULL 00/30] Next patches Daniel P. Berrangé
2022-11-15 18:57 ` Stefan Hajnoczi
2022-11-16 15:35 ` Xu, Ling1
2022-11-15 18:59 ` Stefan Hajnoczi
-- strict thread matches above, loose matches on Subject: below --
2023-06-22 2:12 Juan Quintela
2023-06-22 5:38 ` Richard Henderson
2023-06-22 7:31 ` Juan Quintela
2023-06-22 16:54 Juan Quintela
2023-06-23 5:45 ` Richard Henderson
2023-06-23 7:34 ` Juan Quintela
2023-06-25 22:01 ` Juan Quintela
2023-06-26 6:37 ` Richard Henderson
2023-06-26 13:05 ` Juan Quintela
2023-06-26 13:29 ` Richard Henderson
2023-06-26 13:09 ` Juan Quintela
2023-06-27 9:07 ` Juan Quintela