From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Xu <peterx@redhat.com>, Laurent Vivier <lvivier@redhat.com>,
qemu-block@nongnu.org, Thomas Huth <thuth@redhat.com>,
Juan Quintela <quintela@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Leonardo Bras <leobras@redhat.com>, Fam Zheng <fam@euphon.net>,
Stefan Hajnoczi <stefanha@redhat.com>
Subject: [PULL 13/13] migration/rdma: Simplify the function that saves a page
Date: Mon, 2 Oct 2023 14:20:21 +0200
Message-ID: <20231002122021.231959-14-quintela@redhat.com>
In-Reply-To: <20231002122021.231959-1-quintela@redhat.com>
When we send a page through the QEMUFile hooks (RDMA) there are three
possibilities:
- We are not using RDMA. The hook returns RAM_SAVE_CONTROL_NOT_SUPP and
  control_save_page() returns false to let anything else proceed.
- We are using RDMA but there is an error. The hook returns a negative
  value, and control_save_page() needs to return true.
- Everything goes well and RDMA starts sending the page
  asynchronously. The hook returns RAM_SAVE_CONTROL_DELAYED and we need
  to return 1 for ram_save_page_legacy.
Clear?
I know, I know, the interface is as bad as it gets. I think it is a bit
clearer now (the sketch below summarises the resulting convention), but
this needs to be done some other way.
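For reference, a minimal sketch (not part of the patch; the names are the
ones used in the diff below) of how a caller is expected to interpret the
simplified return convention:

    int ret = ram_control_save_page(f, block->offset, offset,
                                    TARGET_PAGE_SIZE);

    if (ret == RAM_SAVE_CONTROL_NOT_SUPP) {
        /* No RDMA hook in use: fall back to the normal save path. */
    } else if (ret == RAM_SAVE_CONTROL_DELAYED) {
        /* RDMA queued the page and will send it asynchronously. */
    } else if (ret < 0) {
        /* RDMA is in use but hit an error: report the failure. */
    }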
Reviewed-by: Leonardo Bras <leobras@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230515195709.63843-16-quintela@redhat.com>
---
migration/qemu-file.h | 14 ++++++--------
migration/qemu-file.c | 12 ++++++------
migration/ram.c | 10 +++-------
migration/rdma.c | 19 +++----------------
4 files changed, 18 insertions(+), 37 deletions(-)
diff --git a/migration/qemu-file.h b/migration/qemu-file.h
index 57b00c8562..03e718c264 100644
--- a/migration/qemu-file.h
+++ b/migration/qemu-file.h
@@ -49,11 +49,10 @@ typedef int (QEMURamHookFunc)(QEMUFile *f, uint64_t flags, void *data);
* This function allows override of where the RAM page
* is saved (such as RDMA, for example.)
*/
-typedef size_t (QEMURamSaveFunc)(QEMUFile *f,
- ram_addr_t block_offset,
- ram_addr_t offset,
- size_t size,
- uint64_t *bytes_sent);
+typedef int (QEMURamSaveFunc)(QEMUFile *f,
+ ram_addr_t block_offset,
+ ram_addr_t offset,
+ size_t size);
typedef struct QEMUFileHooks {
QEMURamHookFunc *before_ram_iterate;
@@ -142,9 +141,8 @@ void ram_control_load_hook(QEMUFile *f, uint64_t flags, void *data);
#define RAM_SAVE_CONTROL_NOT_SUPP -1000
#define RAM_SAVE_CONTROL_DELAYED -2000
-size_t ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
- ram_addr_t offset, size_t size,
- uint64_t *bytes_sent);
+int ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
+ ram_addr_t offset, size_t size);
QIOChannel *qemu_file_get_ioc(QEMUFile *file);
#endif
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 5c43fa34e7..5e8207dae4 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -322,14 +322,14 @@ void ram_control_load_hook(QEMUFile *f, uint64_t flags, void *data)
}
}
-size_t ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
- ram_addr_t offset, size_t size,
- uint64_t *bytes_sent)
+int ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
+ ram_addr_t offset, size_t size)
{
if (f->hooks && f->hooks->save_page) {
- int ret = f->hooks->save_page(f, block_offset,
- offset, size, bytes_sent);
-
+ int ret = f->hooks->save_page(f, block_offset, offset, size);
+ /*
+ * RAM_SAVE_CONTROL_* are negative values
+ */
if (ret != RAM_SAVE_CONTROL_DELAYED &&
ret != RAM_SAVE_CONTROL_NOT_SUPP) {
if (ret < 0) {
diff --git a/migration/ram.c b/migration/ram.c
index c6238f7a8b..99cff591a3 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1186,23 +1186,19 @@ static int save_zero_page(PageSearchStatus *pss, QEMUFile *f, RAMBlock *block,
static bool control_save_page(PageSearchStatus *pss, RAMBlock *block,
ram_addr_t offset, int *pages)
{
- uint64_t bytes_xmit = 0;
int ret;
- *pages = -1;
ret = ram_control_save_page(pss->pss_channel, block->offset, offset,
- TARGET_PAGE_SIZE, &bytes_xmit);
+ TARGET_PAGE_SIZE);
if (ret == RAM_SAVE_CONTROL_NOT_SUPP) {
return false;
}
- if (bytes_xmit) {
- *pages = 1;
- }
-
if (ret == RAM_SAVE_CONTROL_DELAYED) {
+ *pages = 1;
return true;
}
+ *pages = ret;
return true;
}
diff --git a/migration/rdma.c b/migration/rdma.c
index 9007261b5c..5748c9045b 100644
--- a/migration/rdma.c
+++ b/migration/rdma.c
@@ -3240,13 +3240,12 @@ qio_channel_rdma_shutdown(QIOChannel *ioc,
*
* @size : Number of bytes to transfer
*
- * @bytes_sent : User-specificed pointer to indicate how many bytes were
+ * @pages_sent : User-specificed pointer to indicate how many pages were
* sent. Usually, this will not be more than a few bytes of
* the protocol because most transfers are sent asynchronously.
*/
-static size_t qemu_rdma_save_page(QEMUFile *f,
- ram_addr_t block_offset, ram_addr_t offset,
- size_t size, uint64_t *bytes_sent)
+static int qemu_rdma_save_page(QEMUFile *f, ram_addr_t block_offset,
+ ram_addr_t offset, size_t size)
{
QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(qemu_file_get_ioc(f));
RDMAContext *rdma;
@@ -3278,18 +3277,6 @@ static size_t qemu_rdma_save_page(QEMUFile *f,
goto err;
}
- /*
- * We always return 1 bytes because the RDMA
- * protocol is completely asynchronous. We do not yet know
- * whether an identified chunk is zero or not because we're
- * waiting for other pages to potentially be merged with
- * the current chunk. So, we have to call qemu_update_position()
- * later on when the actual write occurs.
- */
- if (bytes_sent) {
- *bytes_sent = 1;
- }
-
/*
* Drain the Completion Queue if possible, but do not block,
* just poll.
--
2.41.0