From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Michael Tokarev" <mjt@tls.msk.ru>,
"Marc-André Lureau" <marcandre.lureau@redhat.com>,
"David Hildenbrand" <david@redhat.com>,
"Laurent Vivier" <laurent@vivier.eu>,
"Juan Quintela" <quintela@redhat.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Daniel P. Berrangé" <berrange@redhat.com>,
"Peter Xu" <peterx@redhat.com>,
"Stefan Hajnoczi" <stefanha@redhat.com>,
"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
"Thomas Huth" <thuth@redhat.com>,
qemu-block@nongnu.org, qemu-trivial@nongnu.org,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Fam Zheng" <fam@euphon.net>
Subject: [PULL 19/30] migration: Remove RAMState.f references in compression code
Date: Tue, 15 Nov 2022 16:35:03 +0100 [thread overview]
Message-ID: <20221115153514.28003-20-quintela@redhat.com> (raw)
In-Reply-To: <20221115153514.28003-1-quintela@redhat.com>
From: Peter Xu <peterx@redhat.com>
Remove the references to RAMState.f in compress_page_with_multi_thread()
and flush_compressed_data().
The compression code isn't compatible with having more than one channel
(it currently has no way to know which channel to flush the compressed
data to), so to keep it simple we always flush to the default to_dst_file
channel until someone adds multi-channel support. This matters because
rs->f can now really change (after postcopy preempt was introduced).
There should be no functional change after this patch is applied, since
whenever rs->f is referenced in the compression code it must already be
to_dst_file.
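In short, instead of carrying the QEMUFile in RAMState, the compression
paths now fetch the primary outgoing channel from the global migration
state. A minimal sketch of the pattern (taken from the diff below, not a
new API):

    /* Before: the channel travels with RAMState and may be repointed. */
    len = qemu_put_qemu_file(rs->f, comp_param[idx].file);

    /* After: always flush the compressed data to the primary channel. */
    MigrationState *ms = migrate_get_current();
    len = qemu_put_qemu_file(ms->to_dst_file, comp_param[idx].file);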
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 42b6a543bd..ebc5664dcc 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1489,6 +1489,7 @@ static bool save_page_use_compression(RAMState *rs);
static void flush_compressed_data(RAMState *rs)
{
+ MigrationState *ms = migrate_get_current();
int idx, len, thread_count;
if (!save_page_use_compression(rs)) {
@@ -1507,7 +1508,7 @@ static void flush_compressed_data(RAMState *rs)
for (idx = 0; idx < thread_count; idx++) {
qemu_mutex_lock(&comp_param[idx].mutex);
if (!comp_param[idx].quit) {
- len = qemu_put_qemu_file(rs->f, comp_param[idx].file);
+ len = qemu_put_qemu_file(ms->to_dst_file, comp_param[idx].file);
/*
* it's safe to fetch zero_page without holding comp_done_lock
* as there is no further request submitted to the thread,
@@ -1526,11 +1527,11 @@ static inline void set_compress_params(CompressParam *param, RAMBlock *block,
param->offset = offset;
}
-static int compress_page_with_multi_thread(RAMState *rs, RAMBlock *block,
- ram_addr_t offset)
+static int compress_page_with_multi_thread(RAMBlock *block, ram_addr_t offset)
{
int idx, thread_count, bytes_xmit = -1, pages = -1;
bool wait = migrate_compress_wait_thread();
+ MigrationState *ms = migrate_get_current();
thread_count = migrate_compress_threads();
qemu_mutex_lock(&comp_done_lock);
@@ -1538,7 +1539,8 @@ retry:
for (idx = 0; idx < thread_count; idx++) {
if (comp_param[idx].done) {
comp_param[idx].done = false;
- bytes_xmit = qemu_put_qemu_file(rs->f, comp_param[idx].file);
+ bytes_xmit = qemu_put_qemu_file(ms->to_dst_file,
+ comp_param[idx].file);
qemu_mutex_lock(&comp_param[idx].mutex);
set_compress_params(&comp_param[idx], block, offset);
qemu_cond_signal(&comp_param[idx].cond);
@@ -2291,7 +2293,7 @@ static bool save_compress_page(RAMState *rs, RAMBlock *block, ram_addr_t offset)
return false;
}
- if (compress_page_with_multi_thread(rs, block, offset) > 0) {
+ if (compress_page_with_multi_thread(block, offset) > 0) {
return true;
}
--
2.38.1
Thread overview: 35+ messages
2022-11-15 15:34 [PULL 00/30] Next patches Juan Quintela
2022-11-15 15:34 ` [PULL 01/30] migration/channel-block: fix return value for qio_channel_block_{readv, writev} Juan Quintela
2022-11-15 15:34 ` [PULL 02/30] migration/multifd/zero-copy: Create helper function for flushing Juan Quintela
2022-11-15 15:34 ` [PULL 03/30] migration: check magic value for deciding the mapping of channels Juan Quintela
2022-11-15 15:34 ` [PULL 04/30] multifd: Create page_size fields into both MultiFD{Recv, Send}Params Juan Quintela
2022-11-15 15:34 ` [PULL 05/30] multifd: Create page_count " Juan Quintela
2022-11-15 15:34 ` [PULL 06/30] migration: Export ram_transferred_ram() Juan Quintela
2022-11-15 15:34 ` [PULL 07/30] migration: Export ram_release_page() Juan Quintela
2022-11-15 15:34 ` [PULL 08/30] Update AVX512 support for xbzrle_encode_buffer Juan Quintela
2022-11-15 15:34 ` [PULL 09/30] Unit test code and benchmark code Juan Quintela
2022-11-15 15:34 ` [PULL 10/30] migration: Fix possible infinite loop of ram save process Juan Quintela
2022-11-15 15:34 ` [PULL 11/30] migration: Fix race on qemu_file_shutdown() Juan Quintela
2022-11-15 15:34 ` [PULL 12/30] migration: Disallow postcopy preempt to be used with compress Juan Quintela
2022-11-15 15:34 ` [PULL 13/30] migration: Use non-atomic ops for clear log bitmap Juan Quintela
2022-11-15 15:34 ` [PULL 14/30] migration: Disable multifd explicitly with compression Juan Quintela
2022-11-15 15:34 ` [PULL 15/30] migration: Take bitmap mutex when completing ram migration Juan Quintela
2022-11-15 15:35 ` [PULL 16/30] migration: Add postcopy_preempt_active() Juan Quintela
2022-11-15 15:35 ` [PULL 17/30] migration: Cleanup xbzrle zero page cache update logic Juan Quintela
2022-11-15 15:35 ` [PULL 18/30] migration: Trivial cleanup save_page_header() on same block check Juan Quintela
2022-11-15 15:35 ` Juan Quintela [this message]
2022-11-15 15:35 ` [PULL 20/30] migration: Yield bitmap_mutex properly when sending/sleeping Juan Quintela
2022-11-15 15:35 ` [PULL 21/30] migration: Use atomic ops properly for page accountings Juan Quintela
2022-11-15 15:35 ` [PULL 22/30] migration: Teach PSS about host page Juan Quintela
2022-11-15 15:35 ` [PULL 23/30] migration: Introduce pss_channel Juan Quintela
2022-11-15 15:35 ` [PULL 24/30] migration: Add pss_init() Juan Quintela
2022-11-15 15:35 ` [PULL 25/30] migration: Make PageSearchStatus part of RAMState Juan Quintela
2022-11-15 15:35 ` [PULL 26/30] migration: Move last_sent_block into PageSearchStatus Juan Quintela
2022-11-15 15:35 ` [PULL 27/30] migration: Send requested page directly in rp-return thread Juan Quintela
2022-11-15 15:35 ` [PULL 28/30] migration: Remove old preempt code around state maintainance Juan Quintela
2022-11-15 15:35 ` [PULL 29/30] migration: Drop rs->f Juan Quintela
2022-11-15 15:35 ` [PULL 30/30] migration: Block migration comment or code is wrong Juan Quintela
2022-11-15 18:06 ` [PULL 00/30] Next patches Daniel P. Berrangé
2022-11-15 18:57 ` Stefan Hajnoczi
2022-11-16 15:35 ` Xu, Ling1
2022-11-15 18:59 ` Stefan Hajnoczi