From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: Thomas Huth <thuth@redhat.com>, Peter Xu <peterx@redhat.com>,
Leonardo Bras <leobras@redhat.com>,
Juan Quintela <quintela@redhat.com>,
Laurent Vivier <lvivier@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Lukas Straub <lukasstraub2@web.de>
Subject: [PULL 08/13] ram.c: Remove last ram.c dependency from the core compress code
Date: Mon, 8 May 2023 20:52:04 +0200
Message-ID: <20230508185209.68604-9-quintela@redhat.com>
In-Reply-To: <20230508185209.68604-1-quintela@redhat.com>
From: Lukas Straub <lukasstraub2@web.de>
Make compression interfaces take send_queued_data() as an argument.
Remove save_page_use_compression() from flush_compressed_data().
This removes the last ram.c dependency from the core compress code.
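The change is plain dependency injection through a function-type
parameter: the compress helpers receive the "send" hook from their
caller instead of reaching back into ram.c. A rough, self-contained
sketch of that shape (the struct layout, the length value and the
compression_enabled flag below are invented stand-ins, not QEMU code;
only the callback signature mirrors the patch):

#include <stdio.h>

/* Stand-in for QEMU's CompressParam; the real struct lives in migration code. */
typedef struct {
    int len;
} CompressParam;

/*
 * Core helper: a parameter declared with function type, as in
 * "int (send_queued_data(CompressParam *))", is adjusted by the compiler
 * to a pointer to function, so the hook can simply be called.
 */
static void flush_compressed_data(int (send_queued_data(CompressParam *)))
{
    CompressParam param = { .len = 42 };   /* pretend a worker thread finished */
    send_queued_data(&param);
}

/* ram.c-side callback: the only piece that knows how to send the data. */
static int send_queued_data(CompressParam *param)
{
    printf("sending %d bytes\n", param->len);
    return param->len;
}

/* ram.c-side wrapper: keeps the policy check out of the core code. */
static void ram_flush_compressed_data(int compression_enabled)
{
    /* compression_enabled stands in for save_page_use_compression(rs) */
    if (!compression_enabled) {
        return;
    }
    flush_compressed_data(send_queued_data);
}

int main(void)
{
    ram_flush_compressed_data(1);
    return 0;
}

In the real patch the policy check stays in ram.c, in the new
ram_flush_compressed_data() wrapper, as the diff below shows.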
Signed-off-by: Lukas Straub <lukasstraub2@web.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.c | 27 +++++++++++++++++----------
1 file changed, 17 insertions(+), 10 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index d1c24eff21..0cce65dfa5 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1545,13 +1545,10 @@ static int send_queued_data(CompressParam *param)
return len;
}
-static void flush_compressed_data(RAMState *rs)
+static void flush_compressed_data(int (send_queued_data(CompressParam *)))
{
int idx, thread_count;
- if (!save_page_use_compression(rs)) {
- return;
- }
thread_count = migrate_compress_threads();
qemu_mutex_lock(&comp_done_lock);
@@ -1573,6 +1570,15 @@ static void flush_compressed_data(RAMState *rs)
}
}
+static void ram_flush_compressed_data(RAMState *rs)
+{
+ if (!save_page_use_compression(rs)) {
+ return;
+ }
+
+ flush_compressed_data(send_queued_data);
+}
+
static inline void set_compress_params(CompressParam *param, RAMBlock *block,
ram_addr_t offset)
{
@@ -1581,7 +1587,8 @@ static inline void set_compress_params(CompressParam *param, RAMBlock *block,
param->trigger = true;
}
-static int compress_page_with_multi_thread(RAMBlock *block, ram_addr_t offset)
+static int compress_page_with_multi_thread(RAMBlock *block, ram_addr_t offset,
+ int (send_queued_data(CompressParam *)))
{
int idx, thread_count, pages = -1;
bool wait = migrate_compress_wait_thread();
@@ -1672,7 +1679,7 @@ static int find_dirty_block(RAMState *rs, PageSearchStatus *pss)
* Also If xbzrle is on, stop using the data compression at this
* point. In theory, xbzrle can do better than compression.
*/
- flush_compressed_data(rs);
+ ram_flush_compressed_data(rs);
/* Hit the end of the list */
pss->block = QLIST_FIRST_RCU(&ram_list.blocks);
@@ -2362,11 +2369,11 @@ static bool save_compress_page(RAMState *rs, PageSearchStatus *pss,
* much CPU resource.
*/
if (block != pss->last_sent_block) {
- flush_compressed_data(rs);
+ ram_flush_compressed_data(rs);
return false;
}
- if (compress_page_with_multi_thread(block, offset) > 0) {
+ if (compress_page_with_multi_thread(block, offset, send_queued_data) > 0) {
return true;
}
@@ -3412,7 +3419,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
* page is sent in one chunk.
*/
if (migrate_postcopy_ram()) {
- flush_compressed_data(rs);
+ ram_flush_compressed_data(rs);
}
/*
@@ -3507,7 +3514,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
}
qemu_mutex_unlock(&rs->bitmap_mutex);
- flush_compressed_data(rs);
+ ram_flush_compressed_data(rs);
ram_control_after_iterate(f, RAM_CONTROL_FINISH);
}
--
2.40.0
Thread overview: 15+ messages
2023-05-08 18:51 [PULL 00/13] Compression code patches Juan Quintela
2023-05-08 18:51 ` [PULL 01/13] qtest/migration-test.c: Add tests with compress enabled Juan Quintela
2023-05-08 18:51 ` [PULL 02/13] qtest/migration-test.c: Add postcopy " Juan Quintela
2023-05-08 18:51 ` [PULL 03/13] ram.c: Let the compress threads return a CompressResult enum Juan Quintela
2023-05-08 18:52 ` [PULL 04/13] ram.c: Don't change param->block in the compress thread Juan Quintela
2023-05-08 18:52 ` [PULL 05/13] ram.c: Reset result after sending queued data Juan Quintela
2023-05-08 18:52 ` [PULL 06/13] ram.c: Do not call save_page_header() from compress threads Juan Quintela
2023-05-08 18:52 ` [PULL 07/13] ram.c: Call update_compress_thread_counts from compress_send_queued_data Juan Quintela
2023-05-08 18:52 ` [PULL 08/13] ram.c: Remove last ram.c dependency from the core compress code Juan Quintela [this message]
2023-05-08 18:52 ` [PULL 09/13] ram.c: Move core compression code into its own file Juan Quintela
2023-05-08 18:52 ` [PULL 10/13] ram.c: Move core decompression " Juan Quintela
2023-05-08 18:52 ` [PULL 11/13] ram compress: Assert that the file buffer matches the result Juan Quintela
2023-05-08 18:52 ` [PULL 12/13] ram-compress.c: Make target independent Juan Quintela
2023-05-08 18:52 ` [PULL 13/13] migration: Initialize and cleanup decompression in migration.c Juan Quintela
2023-05-09 6:34 ` [PULL 00/13] Compression code patches Richard Henderson