From: Chuang Xu <xuchuangxclwt@bytedance.com>
To: Juan Quintela <quintela@redhat.com>
Cc: qemu-devel@nongnu.org,
"Marcel Apfelbaum" <marcel.apfelbaum@gmail.com>,
"Yanan Wang" <wangyanan55@huawei.com>,
"Markus Armbruster" <armbru@redhat.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
"Eduardo Habkost" <eduardo@habkost.net>,
"Eric Blake" <eblake@redhat.com>
Subject: Re: [PATCH v2 04/11] multifd: Count the number of bytes sent correctly
Date: Fri, 16 Jun 2023 03:53:10 -0500
Message-ID: <CALophutb+J2Qqa-msbY_aW+sz-OPW-XoQQLfCVfEXLfcaWa8xQ@mail.gmail.com>
In-Reply-To: <20230130080956.3047-5-quintela@redhat.com>
Hi, Juan,
On 2023/1/30 4:09 PM, Juan Quintela wrote:
> Current code assumes that all pages are whole. That is already not
> true, for example, for compression. Fix it by creating a new field
> ->sent_bytes that accounts for it.
>
> All ram_counters are used only from the migration thread, so we have
> two options:
> - Take a mutex and fill in everything when we send it (not only
> ram_counters, but also qemu_file->xfer_bytes).
> - Create a local variable that tracks how much has been sent
> through each channel, and when we push another packet, "add" the
> previous stats.
>
> I chose the second option because it means fewer changes overall. In
> the previous code we increased the transferred count and then sent
> the data. The current code goes the other way around: it sends the
> data and updates the counters after the fact. Notice that each
> channel can have at most half a megabyte of data unaccounted for, so
> it is not very important.
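(For reference: the half-megabyte figure comes from MULTIFD_PACKET_SIZE,
which is 512 KiB, i.e. 128 pages of 4 KiB on common targets, so that is
the most page data a channel can have in flight before it is accounted.)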
>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
> migration/multifd.h | 2 ++
> migration/multifd.c | 6 ++++--
> 2 files changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/migration/multifd.h b/migration/multifd.h
> index e2802a9ce2..36f899c56f 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -102,6 +102,8 @@ typedef struct {
> uint32_t flags;
> /* global number of generated multifd packets */
> uint64_t packet_num;
> + /* How many bytes have we sent on the last packet */
> + uint64_t sent_bytes;
> /* thread has work to do */
> int pending_job;
> /* array of pages to sent.
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 61cafe4c76..cd26b2fda9 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -394,7 +394,6 @@ static int multifd_send_pages(QEMUFile *f)
> static int next_channel;
> MultiFDSendParams *p = NULL; /* make happy gcc */
> MultiFDPages_t *pages = multifd_send_state->pages;
> - uint64_t transferred;
>
> if (qatomic_read(&multifd_send_state->exiting)) {
> return -1;
> @@ -429,7 +428,8 @@ static int multifd_send_pages(QEMUFile *f)
> p->packet_num = multifd_send_state->packet_num++;
> multifd_send_state->pages = p->pages;
> p->pages = pages;
> - transferred = ((uint64_t) pages->num) * p->page_size + p->packet_len;
> + uint64_t transferred = p->sent_bytes;
> + p->sent_bytes = 0;
> qemu_file_acct_rate_limit(f, transferred);
> qemu_mutex_unlock(&p->mutex);
> stat64_add(&ram_atomic_counters.multifd_bytes, transferred);
> @@ -719,6 +719,8 @@ static void *multifd_send_thread(void *opaque)
> }
>
> qemu_mutex_lock(&p->mutex);
> + p->sent_bytes += p->packet_len;
> + p->sent_bytes += p->next_packet_size;
Consider a scenario where some normal pages are transmitted in the
first round, followed by several consecutive rounds of zero pages.
While those zero-page rounds are being transmitted, the
next_packet_size left over from the first round is still incorrectly
added to sent_bytes each time. If a rate limit is set for dirty page
transmission, this inflated accounting degrades the transmission
performance of the multifd zero-page check.

Maybe we should set next_packet_size to 0 in multifd_send_pages()?
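To make that concrete: in round 1 the channel fills a packet with
normal pages, and the send thread adds packet_len + next_packet_size
to sent_bytes. In round 2 the packet carries only zero pages, but
next_packet_size still holds round 1's value, so the same page-data
size is added to sent_bytes again even though no page data goes out.
A minimal (untested) sketch of the reset I have in mind, next to the
sent_bytes reset in multifd_send_pages() while p->mutex is still held:

     uint64_t transferred = p->sent_bytes;
     p->sent_bytes = 0;
+    p->next_packet_size = 0;
     qemu_file_acct_rate_limit(f, transferred);

That way a zero-page-only round accounts only for packet_len, unless
the next packet fill actually sets next_packet_size.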
> p->pending_job--;
> qemu_mutex_unlock(&p->mutex);
>
Thread overview: 17+ messages
2023-01-30 8:09 [PATCH v2 00/11] Multifd zero page support Juan Quintela
2023-01-30 8:09 ` [PATCH v2 01/11] migration: Update atomic stats out of the mutex Juan Quintela
2023-01-30 8:09 ` [PATCH v2 02/11] migration: Make multifd_bytes atomic Juan Quintela
2023-01-30 8:09 ` [PATCH v2 03/11] multifd: We already account for this packet on the multifd thread Juan Quintela
2023-01-30 8:09 ` [PATCH v2 04/11] multifd: Count the number of bytes sent correctly Juan Quintela
2023-06-16 8:53 ` Chuang Xu [this message]
2023-06-21 19:49 ` Juan Quintela
2023-01-30 8:09 ` [PATCH v2 05/11] migration: Make ram_save_target_page() a pointer Juan Quintela
2023-01-30 8:09 ` [PATCH v2 06/11] multifd: Make flags field thread local Juan Quintela
2023-01-30 8:09 ` [PATCH v2 07/11] multifd: Prepare to send a packet without the mutex held Juan Quintela
2023-01-30 8:09 ` [PATCH v2 08/11] multifd: Add capability to enable/disable zero_page Juan Quintela
2023-01-30 9:37 ` Markus Armbruster
2023-01-30 14:06 ` Juan Quintela
2023-01-30 14:06 ` Juan Quintela
2023-01-30 8:09 ` [PATCH v2 09/11] multifd: Support for zero pages transmission Juan Quintela
2023-01-30 8:09 ` [PATCH v2 10/11] multifd: Zero " Juan Quintela
2023-01-30 8:09 ` [PATCH v2 11/11] So we use multifd to transmit zero pages Juan Quintela