From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: Fabiano Rosas <farosas@suse.de>,
Leonardo Bras <leobras@redhat.com>, Peter Xu <peterx@redhat.com>,
Juan Quintela <quintela@redhat.com>
Subject: [PATCH v2 03/11] migration: Remove save_page_use_compression()
Date: Thu, 19 Oct 2023 13:07:16 +0200
Message-ID: <20231019110724.15324-4-quintela@redhat.com>
In-Reply-To: <20231019110724.15324-1-quintela@redhat.com>
After the previous patch, compression can no longer be enabled together
with xbzrle, so we can call migrate_compress() directly.

Once there, we no longer need the rs parameter, so remove it.
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.c | 34 +++++++---------------------------
1 file changed, 7 insertions(+), 27 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 16c30a9d7a..42b704ac40 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1291,8 +1291,6 @@ static int ram_save_multifd_page(QEMUFile *file, RAMBlock *block,
return 1;
}
-static bool save_page_use_compression(RAMState *rs);
-
static int send_queued_data(CompressParam *param)
{
PageSearchStatus *pss = &ram_state->pss[RAM_CHANNEL_PRECOPY];
@@ -1329,9 +1327,9 @@ static int send_queued_data(CompressParam *param)
return len;
}
-static void ram_flush_compressed_data(RAMState *rs)
+static void ram_flush_compressed_data(void)
{
- if (!save_page_use_compression(rs)) {
+ if (!migrate_compress()) {
return;
}
@@ -1393,7 +1391,7 @@ static int find_dirty_block(RAMState *rs, PageSearchStatus *pss)
* Also If xbzrle is on, stop using the data compression at this
* point. In theory, xbzrle can do better than compression.
*/
- ram_flush_compressed_data(rs);
+ ram_flush_compressed_data();
/* Hit the end of the list */
pss->block = QLIST_FIRST_RCU(&ram_list.blocks);
@@ -2042,24 +2040,6 @@ int ram_save_queue_pages(const char *rbname, ram_addr_t start, ram_addr_t len)
return 0;
}
-static bool save_page_use_compression(RAMState *rs)
-{
- if (!migrate_compress()) {
- return false;
- }
-
- /*
- * If xbzrle is enabled (e.g., after first round of migration), stop
- * using the data compression. In theory, xbzrle can do better than
- * compression.
- */
- if (rs->xbzrle_started) {
- return false;
- }
-
- return true;
-}
-
/*
* try to compress the page before posting it out, return true if the page
* has been properly handled by compression, otherwise needs other
@@ -2068,7 +2048,7 @@ static bool save_page_use_compression(RAMState *rs)
static bool save_compress_page(RAMState *rs, PageSearchStatus *pss,
ram_addr_t offset)
{
- if (!save_page_use_compression(rs)) {
+ if (!migrate_compress()) {
return false;
}
@@ -2083,7 +2063,7 @@ static bool save_compress_page(RAMState *rs, PageSearchStatus *pss,
* much CPU resource.
*/
if (pss->block != pss->last_sent_block) {
- ram_flush_compressed_data(rs);
+ ram_flush_compressed_data();
return false;
}
@@ -3135,7 +3115,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
* page is sent in one chunk.
*/
if (migrate_postcopy_ram()) {
- ram_flush_compressed_data(rs);
+ ram_flush_compressed_data();
}
/*
@@ -3236,7 +3216,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
}
qemu_mutex_unlock(&rs->bitmap_mutex);
- ram_flush_compressed_data(rs);
+ ram_flush_compressed_data();
int ret = rdma_registration_stop(f, RAM_CONTROL_FINISH);
if (ret < 0) {
--
2.41.0