From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Paolo Bonzini" <pbonzini@redhat.com>,
	"David Hildenbrand" <david@redhat.com>,
	"Laurent Vivier" <laurent@vivier.eu>,
	"Stefan Hajnoczi" <stefanha@redhat.com>,
	"Fam Zheng" <fam@euphon.net>,
	qemu-block@nongnu.org,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	"Thomas Huth" <thuth@redhat.com>,
	"Philippe Mathieu-Daudé" <philmd@linaro.org>,
	qemu-trivial@nongnu.org, "Michael Tokarev" <mjt@tls.msk.ru>,
	"Daniel P. Berrangé" <berrange@redhat.com>,
	"Marc-André Lureau" <marcandre.lureau@redhat.com>,
	"Peter Xu" <peterx@redhat.com>,
	"Juan Quintela" <quintela@redhat.com>
Subject: [PATCH 14/30] migration: Disable multifd explicitly with compression
Date: Tue, 15 Nov 2022 13:12:10 +0100
Message-ID: <20221115121226.26609-15-quintela@redhat.com>
In-Reply-To: <20221115121226.26609-1-quintela@redhat.com>

From: Peter Xu <peterx@redhat.com>

The multifd thread model does not work with compression, so explicitly
disable the combination.

Note that previously both could be enabled without anything going wrong,
because the compression code takes priority and the multifd feature was
simply ignored.  Now the combination fails earlier, at configuration
time, so the user is made aware of the consequence.
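
For illustration, with this change the conflict is reported as soon as
the capabilities are configured, e.g. over QMP (a sketch only; the
exact reply formatting may differ):

    -> { "execute": "migrate-set-capabilities",
         "arguments": { "capabilities": [
             { "capability": "multifd",  "state": true },
             { "capability": "compress", "state": true } ] } }
    <- { "error": { "class": "GenericError",
         "desc": "Multifd is not compatible with compress" } }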

Note that there is a slight chance of breaking existing users, but let's
assume they are not the majority: anyone seriously relying on multifd
together with compression would already have noticed that multifd was
not taking effect.

With that, we can safely drop the check in ram_save_target_page() for
using multifd: when multifd=on then compress=off, so the removed
save_page_use_compression() check would always have returned false
anyway.
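
As a quick sanity check of that claim, a minimal standalone C model
(hypothetical, not QEMU code; the postcopy term, present in both the
old and the new condition, is omitted):

    #include <assert.h>

    int main(void)
    {
        for (int multifd = 0; multifd < 2; multifd++) {
            for (int compress = 0; compress < 2; compress++) {
                if (multifd && compress) {
                    continue;   /* now rejected by migrate_caps_check() */
                }
                /* old: !save_page_use_compression() && multifd
                 * new: multifd */
                assert((!compress && multifd) == multifd);
            }
        }
        return 0;
    }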

Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 migration/migration.c |  7 +++++++
 migration/ram.c       | 11 +++++------
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index 0bc3fce4b7..9fbed8819a 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1370,6 +1370,13 @@ static bool migrate_caps_check(bool *cap_list,
         }
     }
 
+    if (cap_list[MIGRATION_CAPABILITY_MULTIFD]) {
+        if (cap_list[MIGRATION_CAPABILITY_COMPRESS]) {
+            error_setg(errp, "Multifd is not compatible with compress");
+            return false;
+        }
+    }
+
     return true;
 }
 
diff --git a/migration/ram.c b/migration/ram.c
index c0f5d6d287..2fcce796d0 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2333,13 +2333,12 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss)
     }
 
     /*
-     * Do not use multifd for:
-     * 1. Compression as the first page in the new block should be posted out
-     *    before sending the compressed page
-     * 2. In postcopy as one whole host page should be placed
+     * Do not use multifd in postcopy, as one whole host page must be
+     * placed atomically.  Postcopy requires atomic page updates, so even
+     * if host page size == guest page size, the running destination guest
+     * could otherwise see partially copied pages, i.e. data corruption.
      */
-    if (!save_page_use_compression(rs) && migrate_use_multifd()
-        && !migration_in_postcopy()) {
+    if (migrate_use_multifd() && !migration_in_postcopy()) {
         return ram_save_multifd_page(rs, block, offset);
     }
 
-- 
2.38.1



Thread overview: 32+ messages
2022-11-15 12:11 [PATCH 00/30] Migration PULL request Juan Quintela
2022-11-15 12:11 ` [PATCH 01/30] migration/channel-block: fix return value for qio_channel_block_{readv, writev} Juan Quintela
2022-11-15 12:11 ` [PATCH 02/30] migration/multifd/zero-copy: Create helper function for flushing Juan Quintela
2022-11-15 12:11 ` [PATCH 03/30] migration: check magic value for deciding the mapping of channels Juan Quintela
2022-11-15 12:12 ` [PATCH 04/30] multifd: Create page_size fields into both MultiFD{Recv, Send}Params Juan Quintela
2022-11-15 12:12 ` [PATCH 05/30] multifd: Create page_count " Juan Quintela
2022-11-15 12:12 ` [PATCH 06/30] migration: Export ram_transferred_ram() Juan Quintela
2022-11-15 12:12 ` [PATCH 07/30] migration: Export ram_release_page() Juan Quintela
2022-11-15 12:12 ` [PATCH 08/30] Update AVX512 support for xbzrle_encode_buffer Juan Quintela
2022-11-15 12:12 ` [PATCH 09/30] Unit test code and benchmark code Juan Quintela
2022-11-15 12:12 ` [PATCH 10/30] migration: Fix possible infinite loop of ram save process Juan Quintela
2022-11-15 12:12 ` [PATCH 11/30] migration: Fix race on qemu_file_shutdown() Juan Quintela
2022-11-15 12:12 ` [PATCH 12/30] migration: Disallow postcopy preempt to be used with compress Juan Quintela
2022-11-15 12:12 ` [PATCH 13/30] migration: Use non-atomic ops for clear log bitmap Juan Quintela
2022-11-15 12:12 ` [PATCH 14/30] migration: Disable multifd explicitly with compression Juan Quintela [this message]
2022-11-15 12:12 ` [PATCH 15/30] migration: Take bitmap mutex when completing ram migration Juan Quintela
2022-11-15 12:12 ` [PATCH 16/30] migration: Add postcopy_preempt_active() Juan Quintela
2022-11-15 12:12 ` [PATCH 17/30] migration: Cleanup xbzrle zero page cache update logic Juan Quintela
2022-11-15 12:12 ` [PATCH 18/30] migration: Trivial cleanup save_page_header() on same block check Juan Quintela
2022-11-15 12:12 ` [PATCH 19/30] migration: Remove RAMState.f references in compression code Juan Quintela
2022-11-15 12:12 ` [PATCH 20/30] migration: Yield bitmap_mutex properly when sending/sleeping Juan Quintela
2022-11-15 12:12 ` [PATCH 21/30] migration: Use atomic ops properly for page accountings Juan Quintela
2022-11-15 12:12 ` [PATCH 22/30] migration: Teach PSS about host page Juan Quintela
2022-11-15 12:12 ` [PATCH 23/30] migration: Introduce pss_channel Juan Quintela
2022-11-15 12:12 ` [PATCH 24/30] migration: Add pss_init() Juan Quintela
2022-11-15 12:12 ` [PATCH 25/30] migration: Make PageSearchStatus part of RAMState Juan Quintela
2022-11-15 12:12 ` [PATCH 26/30] migration: Move last_sent_block into PageSearchStatus Juan Quintela
2022-11-15 12:12 ` [PATCH 27/30] migration: Send requested page directly in rp-return thread Juan Quintela
2022-11-15 12:12 ` [PATCH 28/30] migration: Remove old preempt code around state maintainance Juan Quintela
2022-11-15 12:12 ` [PATCH 29/30] migration: Drop rs->f Juan Quintela
2022-11-15 12:12 ` [PATCH 30/30] migration: Block migration comment or code is wrong Juan Quintela
2022-11-15 14:55 ` [PATCH 00/30] Migration PULL request Stefan Hajnoczi
