From: Peter Xu <peterx@redhat.com>
To: Fabiano Rosas <farosas@suse.de>
Cc: qemu-devel@nongnu.org, Bryan Zhang <bryan.zhang@bytedance.com>,
Prasad Pandit <ppandit@redhat.com>,
Yuan Liu <yuan1.liu@intel.com>, Avihai Horon <avihaih@nvidia.com>,
Hao Xiang <hao.xiang@bytedance.com>
Subject: Re: [PATCH 04/14] migration/multifd: Postpone reset of MultiFDPages_t
Date: Fri, 2 Feb 2024 08:28:47 +0800
Message-ID: <Zbw3P26zfARNBsBy@x1n>
In-Reply-To: <87plxgi51k.fsf@suse.de>
On Thu, Feb 01, 2024 at 12:21:27PM -0300, Fabiano Rosas wrote:
> Peter Xu <peterx@redhat.com> writes:
>
> > On Wed, Jan 31, 2024 at 12:27:51PM -0300, Fabiano Rosas wrote:
> >> > +/* Reset a MultiFDPages_t* object for the next use */
> >> > +static void multifd_pages_reset(MultiFDPages_t *pages)
> >> > +{
> >> > +    /*
> >> > +     * We don't need to touch offset[] array, because it will be
> >> > +     * overwritten later when reused.
> >> > +     */
> >> > +    pages->num = 0;
> >> > +    pages->block = NULL;
> >>
> >> Having to do this at all is a huge overloading of this field. This not
> >> only resets it, but it also communicates to multifd_queue_page() that
> >> the previous payload has been sent. Otherwise, multifd_queue_page()
> >> wouldn't know whether the previous call to multifd_queue_page() has
> >> called multifd_send_pages() or if it has exited early. So this basically
> >> means "the block that was previously here has been sent".
> >>
> >> That's why we need the changed=true logic. The
> >> multifd_send_state->pages->block may still have a few pages left to
> >> send, but because pages->num is less than pages->allocated, it skips
> >> multifd_send_pages(). The next call to multifd_queue_page() already has
> >> the next ramblock. So we set changed=true, call multifd_send_pages() to
> >> send the remaining pages of that block, and recurse into
> >> multifd_queue_page() once more to send the new block.
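> >>
> >> Condensed from the current code (just to illustrate the flow), the tail
> >> of multifd_queue_page() is essentially:
> >>
> >>     if (pages->block == block) {
> >>         pages->offset[pages->num++] = offset;
> >>         if (pages->num < pages->allocated) {
> >>             return 1;                 /* queued, packet not full yet */
> >>         }
> >>     } else {
> >>         changed = true;               /* ramblock switched, flush first */
> >>     }
> >>
> >>     if (multifd_send_pages() < 0) {   /* flushes the *previous* block */
> >>         return -1;
> >>     }
> >>
> >>     if (changed) {
> >>         return multifd_queue_page(block, offset);  /* recurse for the new block */
> >>     }
> >>     return 1;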
> >
> > I agree, the queue page routines are not easy to follow either.
> >
> > How about a rewrite of the queue logic, like this?
> >
> > =====
> > bool multifd_queue_page(RAMBlock *block, ram_addr_t offset)
> > {
> >     MultiFDPages_t *pages;
> >
> > retry:
> >     pages = multifd_send_state->pages;
> >
> >     /* If the queue is empty, we can already enqueue now */
> >     if (multifd_queue_empty(pages)) {
> >         pages->block = block;
> >         multifd_enqueue(pages, offset);
> >         return true;
> >     }
> >
> >     /*
> >      * Not empty, meanwhile we need a flush.  It can be because of either:
> >      *
> >      * (1) The page is not on the same ramblock as the previous ones, or,
> >      * (2) The queue is full.
> >      *
> >      * After flush, always retry.
> >      */
> >     if (pages->block != block || multifd_queue_full(pages)) {
> >         if (!multifd_send_pages()) {
> >             return false;
> >         }
> >         goto retry;
> >     }
> >
> >     /* Not empty, and we still have space, do it! */
> >     multifd_enqueue(pages, offset);
> >     return true;
> > }
> > =====
> >
> > Would this be clearer?  With the above, we can drop the ->block reset,
> > afaict.
> >
> > I attached three patches; if you agree it's better, then I'll include them
> > in v2.
>
> Yes, let's do it.
>
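Good.  For clarity: multifd_enqueue() there is just the obvious trivial
helper, something along these lines (sketch only, not necessarily the exact
shape in the attached patch):

    /* Append one page's offset to the pages queue */
    static inline void multifd_enqueue(MultiFDPages_t *pages, ram_addr_t offset)
    {
        pages->offset[pages->num++] = offset;
    }

i.e. just the two enqueue lines lifted out of the current
multifd_queue_page().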
> >
> > --
> > Peter Xu
> > From c5dc2052794efd6da6a1e6f4b49f25d5b32879f7 Mon Sep 17 00:00:00 2001
> > From: Peter Xu <peterx@redhat.com>
> > Date: Thu, 1 Feb 2024 17:50:21 +0800
> > Subject: [PATCH 1/3] migration/multifd: Change retval of multifd_queue_page()
> >
> > Using an int is overkill when there are only two options. Change it to a
> > boolean.
> >
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> >  migration/multifd.h | 2 +-
> >  migration/multifd.c | 9 +++++----
> >  migration/ram.c     | 2 +-
> >  3 files changed, 7 insertions(+), 6 deletions(-)
> >
> > diff --git a/migration/multifd.h b/migration/multifd.h
> > index 34a2ecb9f4..a320c53a6f 100644
> > --- a/migration/multifd.h
> > +++ b/migration/multifd.h
> > @@ -22,7 +22,7 @@ bool multifd_recv_all_channels_created(void);
> >  void multifd_recv_new_channel(QIOChannel *ioc, Error **errp);
> >  void multifd_recv_sync_main(void);
> >  int multifd_send_sync_main(void);
> > -int multifd_queue_page(RAMBlock *block, ram_addr_t offset);
> > +bool multifd_queue_page(RAMBlock *block, ram_addr_t offset);
> >
> >  /* Multifd Compression flags */
> >  #define MULTIFD_FLAG_SYNC (1 << 0)
> > diff --git a/migration/multifd.c b/migration/multifd.c
> > index 91be6d2fc4..d0a3b4e062 100644
> > --- a/migration/multifd.c
> > +++ b/migration/multifd.c
> > @@ -505,7 +505,8 @@ static int multifd_send_pages(void)
> >      return 1;
> >  }
> >
> > -int multifd_queue_page(RAMBlock *block, ram_addr_t offset)
> > +/* Returns true if enqueue successful, false otherwise */
> > +bool multifd_queue_page(RAMBlock *block, ram_addr_t offset)
> >  {
> >      MultiFDPages_t *pages = multifd_send_state->pages;
> >      bool changed = false;
> > @@ -519,21 +520,21 @@ int multifd_queue_page(RAMBlock *block, ram_addr_t offset)
> >          pages->num++;
> >
> >          if (pages->num < pages->allocated) {
> > -            return 1;
> > +            return true;
> >          }
> >      } else {
> >          changed = true;
> >      }
> >
> >      if (multifd_send_pages() < 0) {
> > -        return -1;
> > +        return false;
> >      }
> >
> >      if (changed) {
> >          return multifd_queue_page(block, offset);
> >      }
> >
> > -    return 1;
> > +    return true;
> >  }
> >
> >  /* Multifd send side hit an error; remember it and prepare to quit */
> > diff --git a/migration/ram.c b/migration/ram.c
> > index d5b7cd5ac2..4649a81204 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -1252,7 +1252,7 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss)
> >
> >  static int ram_save_multifd_page(RAMBlock *block, ram_addr_t offset)
> >  {
> > -    if (multifd_queue_page(block, offset) < 0) {
> > +    if (!multifd_queue_page(block, offset)) {
> >          return -1;
> >      }
> >      stat64_add(&mig_stats.normal_pages, 1);
> > --
> > 2.43.0
> >
> > From f393f1cfe95d79bed72e6043903ee4c4cb298c21 Mon Sep 17 00:00:00 2001
> > From: Peter Xu <peterx@redhat.com>
> > Date: Thu, 1 Feb 2024 17:51:38 +0800
> > Subject: [PATCH 2/3] migration/multifd: Change retval of multifd_send_pages()
> >
> > Using an int is overkill when there are only two options. Change it to a
> > boolean.
> >
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> > migration/multifd.c | 15 ++++++++-------
> > 1 file changed, 8 insertions(+), 7 deletions(-)
> >
> > diff --git a/migration/multifd.c b/migration/multifd.c
> > index d0a3b4e062..d2b0f0eda9 100644
> > --- a/migration/multifd.c
> > +++ b/migration/multifd.c
> > @@ -449,9 +449,10 @@ static void multifd_send_kick_main(MultiFDSendParams *p)
> >   * thread is using the channel mutex when changing it, and the channel
> >   * have to had finish with its own, otherwise pending_job can't be
> >   * false.
> > + *
> > + * Returns true if succeeded, false otherwise.
> >   */
> > -
> > -static int multifd_send_pages(void)
> > +static bool multifd_send_pages(void)
> >  {
> >      int i;
> >      static int next_channel;
> > @@ -459,7 +460,7 @@ static int multifd_send_pages(void)
> >      MultiFDPages_t *pages = multifd_send_state->pages;
> >
> >      if (multifd_send_should_exit()) {
> > -        return -1;
> > +        return false;
> >      }
> >
> >      /* We wait here, until at least one channel is ready */
> > @@ -473,7 +474,7 @@ static int multifd_send_pages(void)
> >      next_channel %= migrate_multifd_channels();
> >      for (i = next_channel;; i = (i + 1) % migrate_multifd_channels()) {
> >          if (multifd_send_should_exit()) {
> > -            return -1;
> > +            return false;
> >          }
> >          p = &multifd_send_state->params[i];
> >          /*
> > @@ -502,7 +503,7 @@ static int multifd_send_pages(void)
> >      qemu_mutex_unlock(&p->mutex);
> >      qemu_sem_post(&p->sem);
> >
> > -    return 1;
> > +    return true;
> >  }
> >
> >  /* Returns true if enqueue successful, false otherwise */
> > @@ -526,7 +527,7 @@ bool multifd_queue_page(RAMBlock *block, ram_addr_t offset)
> >          changed = true;
> >      }
> >
> > -    if (multifd_send_pages() < 0) {
> > +    if (!multifd_send_pages()) {
> >          return false;
> >      }
> >
> > @@ -666,7 +667,7 @@ int multifd_send_sync_main(void)
> >          return 0;
> >      }
> >      if (multifd_send_state->pages->num) {
> > -        if (multifd_send_pages() < 0) {
> > +        if (!multifd_send_pages()) {
> >              error_report("%s: multifd_send_pages fail", __func__);
> >              return -1;
> >          }
> > --
> > 2.43.0
> >
> > From fcddc942cb31bc9d395d67a555d9a2281da452b1 Mon Sep 17 00:00:00 2001
> > From: Peter Xu <peterx@redhat.com>
> > Date: Thu, 1 Feb 2024 17:55:42 +0800
> > Subject: [PATCH 3/3] migration/multifd: Rewrite multifd_queue_page()
> >
> > The current multifd_queue_page() is not easy to read and follow.  It is
> > not good for a few reasons:
> >
> > - No helpers at all to show what exactly a condition means; in short,
> >   readability is low.
> >
> > - It relies on pages->block being cleared to detect an empty queue.
> >   That slightly overloads the ramblock pointer, per Fabiano [1], and I
> >   agree.
> >
> > - It contains a self-recursion, even though that is not necessary.
> >
> > Rewrite this function.  Add some comments to make it even clearer what
> > it does.
> >
> > [1] https://lore.kernel.org/r/87wmrpjzew.fsf@suse.de
> >
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> > ---
> > migration/multifd.c | 56 ++++++++++++++++++++++++++++++---------------
> > 1 file changed, 37 insertions(+), 19 deletions(-)
> >
> > diff --git a/migration/multifd.c b/migration/multifd.c
> > index d2b0f0eda9..5a64a9c2e2 100644
> > --- a/migration/multifd.c
> > +++ b/migration/multifd.c
> > @@ -506,35 +506,53 @@ static bool multifd_send_pages(void)
> >      return true;
> >  }
> >
> > +static inline bool multifd_queue_empty(MultiFDPages_t *pages)
> > +{
> > +    return pages->num == 0;
> > +}
>
> Good, because we can later switch from pages to something else entirely.
>
> > +
> > +static inline bool multifd_queue_full(MultiFDPages_t *pages)
> > +{
> > +    return pages->num == pages->allocated;
> > +}
>
> Pages allocated is nonsense. See if you agree with its removal:
> https://gitlab.com/farosas/qemu/-/commit/7cfff1a3e31b271e901a6c08d8b5d8c01b680e4d
>
> ---
> From 7cfff1a3e31b271e901a6c08d8b5d8c01b680e4d Mon Sep 17 00:00:00 2001
> From: Fabiano Rosas <farosas@suse.de>
> Date: Tue, 24 Oct 2023 19:03:41 -0300
> Subject: [PATCH] multifd: Remove MultiFDPages_t::allocated
>
> When dealing with RAM, having a field called 'allocated' is
> confusing. This field simply holds the number of pages that fit in a
> multifd packet.
>
> Since it is a constant dependent on the size of the multifd packet,
> remove it and instead use the page size and MULTIFD_PACKET_SIZE
> directly.
>
> This is another step in the direction of having no mentions of 'page'
> in the multifd send thread.
>
> Signed-off-by: Fabiano Rosas <farosas@suse.de>
> ---
> migration/multifd.c | 6 ++----
> migration/multifd.h | 2 --
> 2 files changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/migration/multifd.c b/migration/multifd.c
> index bdefce27706..83fb2caab04 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -241,7 +241,6 @@ static MultiFDPages_t *multifd_pages_init(uint32_t n)
>  {
>      MultiFDPages_t *pages = g_new0(MultiFDPages_t, 1);
>
> -    pages->allocated = n;
>      pages->offset = g_new0(ram_addr_t, n);
>      pages->page_size = qemu_target_page_size();
>
> @@ -251,7 +250,6 @@ static MultiFDPages_t *multifd_pages_init(uint32_t n)
>  static void multifd_pages_clear(MultiFDPages_t *pages)
>  {
>      pages->num = 0;
> -    pages->allocated = 0;
>      pages->block = NULL;
>      g_free(pages->offset);
>      pages->offset = NULL;
> @@ -264,7 +262,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>      int i;
>
>      packet->flags = cpu_to_be32(p->flags);
> -    packet->pages_alloc = cpu_to_be32(p->pages->allocated);
> +    packet->pages_alloc = cpu_to_be32(MULTIFD_PACKET_SIZE / p->pages->page_size);
>      packet->normal_pages = cpu_to_be32(p->pages->num);
>      packet->next_packet_size = cpu_to_be32(p->next_packet_size);
>      packet->packet_num = cpu_to_be64(p->packet_num);
> @@ -451,7 +449,7 @@ int multifd_queue_page(RAMBlock *block, ram_addr_t offset)
>          pages->offset[pages->num] = offset;
>          pages->num++;
>
> -        if (pages->num < pages->allocated) {
> +        if (pages->num * pages->page_size < MULTIFD_PACKET_SIZE) {
>              return 1;
>          }
>      } else {
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 655f8d5eeb4..d1342296d63 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -56,8 +56,6 @@ typedef struct {
>  typedef struct {
>      /* number of used pages */
>      uint32_t num;
> -    /* number of allocated pages */
> -    uint32_t allocated;
>      /* guest page size */
>      uint32_t page_size;
>      /* offset of each page */
> --
I agree.

Even if we would like to add a parameter to set up the allocated size (I
remember one of the accelerator series has it), it'll still be a global
variable rather than a per-pages thing.

I can cherry-pick this and post it together; it will need a rebase, but I
can do that.
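
Just to illustrate (untested sketch, assuming we keep your
MULTIFD_PACKET_SIZE approach): after the rebase I'd expect the queue-full
helper to end up as something like:

    static inline bool multifd_queue_full(MultiFDPages_t *pages)
    {
        /* Full once the queued pages would fill a whole multifd packet */
        return pages->num * pages->page_size >= MULTIFD_PACKET_SIZE;
    }

so the limit stays a global constant, with no per-pages field needed.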
--
Peter Xu