From: Peter Xu <peterx@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Dr . David Alan Gilbert" <dgilbert@redhat.com>,
peterx@redhat.com, "Daniel P . Berrange" <berrange@redhat.com>,
Juan Quintela <quintela@redhat.com>,
ani@anisinha.ca,
Leonardo Bras Soares Passos <lsoaresp@redhat.com>,
Manish Mishra <manish.mishra@nutanix.com>
Subject: [PATCH v2 15/15] migration: Drop rs->f
Date: Tue, 11 Oct 2022 17:55:59 -0400
Message-ID: <20221011215559.602584-16-peterx@redhat.com>
In-Reply-To: <20221011215559.602584-1-peterx@redhat.com>

Now that we have rs->pss, we can already cache channels in pss->pss_channel.
That pss_channel contains more information than rs->f because it is
per-channel, so rs->f can be replaced by
rs->pss[RAM_CHANNEL_PRECOPY].pss_channel, while rs->f itself has become a
bit vague.
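
For illustration, a minimal sketch of the lookup that replaces rs->f (the
helper name is hypothetical and not part of this patch):

    /*
     * Hypothetical helper, for illustration only: after this series the
     * precopy QEMUFile is reached through the per-channel
     * PageSearchStatus instead of a separate rs->f cache.
     */
    static inline QEMUFile *ram_precopy_channel(RAMState *rs)
    {
        return rs->pss[RAM_CHANNEL_PRECOPY].pss_channel;
    }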

Note that vanilla postcopy still sends pages via pss[RAM_CHANNEL_PRECOPY];
that is slightly confusing, but it reflects reality.

Then, after the replacement, we can safely drop rs->f.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
migration/ram.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index db3bf51dad..538667b974 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -323,8 +323,6 @@ struct RAMSrcPageRequest {
/* State of RAM for migration */
struct RAMState {
- /* QEMUFile used for this migration */
- QEMUFile *f;
/*
* PageSearchStatus structures for the channels when send pages.
* Protected by the bitmap_mutex.
@@ -2527,8 +2525,6 @@ static int ram_find_and_save_block(RAMState *rs)
}
if (found) {
- /* Cache rs->f in pss_channel (TODO: remove rs->f) */
- pss->pss_channel = rs->f;
pages = ram_save_host_page(rs, pss);
}
} while (!pages && again);
@@ -3084,7 +3080,7 @@ static void ram_state_resume_prepare(RAMState *rs, QEMUFile *out)
ram_state_reset(rs);
/* Update RAMState cache of output QEMUFile */
- rs->f = out;
+ rs->pss[RAM_CHANNEL_PRECOPY].pss_channel = out;
trace_ram_state_resume_prepare(pages);
}
@@ -3175,7 +3171,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
return -1;
}
}
- (*rsp)->f = f;
+ (*rsp)->pss[RAM_CHANNEL_PRECOPY].pss_channel = f;
WITH_RCU_READ_LOCK_GUARD() {
qemu_put_be64(f, ram_bytes_total_common(true) | RAM_SAVE_FLAG_MEM_SIZE);
@@ -3310,7 +3306,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
out:
if (ret >= 0
&& migration_is_setup_or_active(migrate_get_current()->state)) {
- ret = multifd_send_sync_main(rs->f);
+ ret = multifd_send_sync_main(rs->pss[RAM_CHANNEL_PRECOPY].pss_channel);
if (ret < 0) {
return ret;
}
@@ -3380,7 +3376,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
return ret;
}
- ret = multifd_send_sync_main(rs->f);
+ ret = multifd_send_sync_main(rs->pss[RAM_CHANNEL_PRECOPY].pss_channel);
if (ret < 0) {
return ret;
}
--
2.37.3