From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Zheng Chuan <zhengchuan@huawei.com>
Cc: Xiexiangyou <xiexiangyou@huawei.com>,
Leonardo Bras <leobras@redhat.com>,
qemu-devel@nongnu.org, Peter Xu <peterx@redhat.com>,
Juan Quintela <quintela@redhat.com>
Subject: Re: [PATCH v3 03/23] multifd: Rename used field to num
Date: Mon, 13 Dec 2021 15:17:58 +0000 [thread overview]
Message-ID: <YbdkJiBBiCDJ/35Y@work-vm> (raw)
In-Reply-To: <85f4bf3b-9259-7f19-8717-0297251ee6b2@huawei.com>
* Zheng Chuan (zhengchuan@huawei.com) wrote:
> Hi, Juan,
>
> Sorry, I forgot to send this to qemu-devel; resending it.
>
> On 2021/11/24 18:05, Juan Quintela wrote:
> > We will need to split it later into zero_num (number of zero pages) and
> > normal_num (number of normal pages). This name is better.
> >
> > Signed-off-by: Juan Quintela <quintela@redhat.com>
> > ---
> > migration/multifd.h | 2 +-
> > migration/multifd.c | 38 +++++++++++++++++++-------------------
> > 2 files changed, 20 insertions(+), 20 deletions(-)
> >
> > diff --git a/migration/multifd.h b/migration/multifd.h
> > index 15c50ca0b2..86820dd028 100644
> > --- a/migration/multifd.h
> > +++ b/migration/multifd.h
> > @@ -55,7 +55,7 @@ typedef struct {
> >
> > typedef struct {
> > /* number of used pages */
> > - uint32_t used;
> > + uint32_t num;
> > /* number of allocated pages */
> > uint32_t allocated;
> > /* global number of generated multifd packets */
> > diff --git a/migration/multifd.c b/migration/multifd.c
> > index 8125d0015c..8ea86d81dc 100644
> > --- a/migration/multifd.c
> > +++ b/migration/multifd.c
> > @@ -252,7 +252,7 @@ static MultiFDPages_t *multifd_pages_init(size_t size)
> >
> > static void multifd_pages_clear(MultiFDPages_t *pages)
> > {
> > - pages->used = 0;
> > + pages->num = 0;
> > pages->allocated = 0;
> > pages->packet_num = 0;
> > pages->block = NULL;
> > @@ -270,7 +270,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
> >
> > packet->flags = cpu_to_be32(p->flags);
> > packet->pages_alloc = cpu_to_be32(p->pages->allocated);
> > - packet->pages_used = cpu_to_be32(p->pages->used);
> > + packet->pages_used = cpu_to_be32(p->pages->num);
> > packet->next_packet_size = cpu_to_be32(p->next_packet_size);
> > packet->packet_num = cpu_to_be64(p->packet_num);
> >
> > @@ -278,7 +278,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
> > strncpy(packet->ramblock, p->pages->block->idstr, 256);
> > }
> >
> > - for (i = 0; i < p->pages->used; i++) {
> > + for (i = 0; i < p->pages->num; i++) {
> > /* there are architectures where ram_addr_t is 32 bit */
> > uint64_t temp = p->pages->offset[i];
> >
> > @@ -332,18 +332,18 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
> > p->pages = multifd_pages_init(packet->pages_alloc);
> > }
> >
> > - p->pages->used = be32_to_cpu(packet->pages_used);
> > - if (p->pages->used > packet->pages_alloc) {
> > + p->pages->num = be32_to_cpu(packet->pages_used);
> > + if (p->pages->num > packet->pages_alloc) {
> > error_setg(errp, "multifd: received packet "
> > "with %d pages and expected maximum pages are %d",
> > - p->pages->used, packet->pages_alloc) ;
> > + p->pages->num, packet->pages_alloc) ;
> > return -1;
> > }
> >
> > p->next_packet_size = be32_to_cpu(packet->next_packet_size);
> > p->packet_num = be64_to_cpu(packet->packet_num);
> >
> > - if (p->pages->used == 0) {
> > + if (p->pages->num == 0) {
> > return 0;
> > }
> >
> > @@ -356,7 +356,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
> > return -1;
> > }
> >
> > - for (i = 0; i < p->pages->used; i++) {
> > + for (i = 0; i < p->pages->num; i++) {
> > uint64_t offset = be64_to_cpu(packet->offset[i]);
> >
> > if (offset > (block->used_length - page_size)) {
> > @@ -443,13 +443,13 @@ static int multifd_send_pages(QEMUFile *f)
> > }
> > qemu_mutex_unlock(&p->mutex);
> > }
> > - assert(!p->pages->used);
> > + assert(!p->pages->num);
> > assert(!p->pages->block);
> >
> > p->packet_num = multifd_send_state->packet_num++;
> > multifd_send_state->pages = p->pages;
> > p->pages = pages;
> > - transferred = ((uint64_t) pages->used) * qemu_target_page_size()
> > + transferred = ((uint64_t) pages->num) * qemu_target_page_size()
> > + p->packet_len;
> The size of a zero page should not be counted as a whole page.
> I think 'transferred' should be updated once zero_num is introduced in the following patches, e.g.:
> + transferred = ((uint64_t) p->normal_num) * qemu_target_page_size()
> + + ((uint64_t) p->zero_num) * sizeof(uint64_t);
> Otherwise, migration time gets worse when a low bandwidth-limit parameter is set.
>
> I tested it with a bandwidth limit of 100MB/s and it works fine :)
Yes, I think you're right; 'transferred' is normally a measure of the
network bandwidth actually used.
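
For reference, a rough sketch of what the accounting could look like once
the later patches in this series introduce the normal/zero split (the
normal_num/zero_num names below follow Chuan's suggestion and are an
assumption, not necessarily the code as it will be merged):

    /* Hypothetical sketch: charge normal pages at full page size, and
     * zero pages only at the size of their on-wire record (a 64-bit
     * offset), plus the packet header, rather than charging every page
     * as a full page. */
    transferred = (uint64_t) p->normal_num * qemu_target_page_size()
                + (uint64_t) p->zero_num * sizeof(uint64_t)
                + p->packet_len;
    qemu_file_update_transfer(f, transferred);
    ram_counters.multifd_bytes += transferred;

With a 4 KiB target page that is roughly a 512x difference per zero page
(4096 bytes charged vs. the 8 bytes actually sent), which is why a low
bandwidth limit hurts so much with the current accounting.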
Dave
> > qemu_file_update_transfer(f, transferred);
> > ram_counters.multifd_bytes += transferred;
> > @@ -469,12 +469,12 @@ int multifd_queue_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset)
> > }
> >
> > if (pages->block == block) {
> > - pages->offset[pages->used] = offset;
> > - pages->iov[pages->used].iov_base = block->host + offset;
> > - pages->iov[pages->used].iov_len = qemu_target_page_size();
> > - pages->used++;
> > + pages->offset[pages->num] = offset;
> > + pages->iov[pages->num].iov_base = block->host + offset;
> > + pages->iov[pages->num].iov_len = qemu_target_page_size();
> > + pages->num++;
> >
> > - if (pages->used < pages->allocated) {
> > + if (pages->num < pages->allocated) {
> > return 1;
> > }
> > }
> > @@ -586,7 +586,7 @@ void multifd_send_sync_main(QEMUFile *f)
> > if (!migrate_use_multifd()) {
> > return;
> > }
> > - if (multifd_send_state->pages->used) {
> > + if (multifd_send_state->pages->num) {
> > if (multifd_send_pages(f) < 0) {
> > error_report("%s: multifd_send_pages fail", __func__);
> > return;
> > @@ -649,7 +649,7 @@ static void *multifd_send_thread(void *opaque)
> > qemu_mutex_lock(&p->mutex);
> >
> > if (p->pending_job) {
> > - uint32_t used = p->pages->used;
> > + uint32_t used = p->pages->num;
> > uint64_t packet_num = p->packet_num;
> > flags = p->flags;
> >
> > @@ -665,7 +665,7 @@ static void *multifd_send_thread(void *opaque)
> > p->flags = 0;
> > p->num_packets++;
> > p->num_pages += used;
> > - p->pages->used = 0;
> > + p->pages->num = 0;
> > p->pages->block = NULL;
> > qemu_mutex_unlock(&p->mutex);
> >
> > @@ -1091,7 +1091,7 @@ static void *multifd_recv_thread(void *opaque)
> > break;
> > }
> >
> > - used = p->pages->used;
> > + used = p->pages->num;
> > flags = p->flags;
> > /* recv methods don't know how to handle the SYNC flag */
> > p->flags &= ~MULTIFD_FLAG_SYNC;
> >
>
> --
> Regards.
> Chuan
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Thread overview: 72+ messages
2021-11-24 10:05 [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Juan Quintela
2021-11-24 10:05 ` [PATCH v3 01/23] multifd: Delete useless operation Juan Quintela
2021-11-24 18:48 ` Dr. David Alan Gilbert
2021-11-25 7:24 ` Juan Quintela
2021-11-25 19:46 ` Dr. David Alan Gilbert
2021-11-26 9:39 ` Juan Quintela
2021-11-24 10:05 ` [PATCH v3 02/23] migration: Never call twice qemu_target_page_size() Juan Quintela
2021-11-24 18:52 ` Dr. David Alan Gilbert
2021-11-25 7:26 ` Juan Quintela
2021-11-24 10:05 ` [PATCH v3 03/23] multifd: Rename used field to num Juan Quintela
2021-11-24 19:37 ` Dr. David Alan Gilbert
2021-11-25 7:28 ` Juan Quintela
2021-11-25 18:30 ` Dr. David Alan Gilbert
2021-12-13 9:34 ` Zheng Chuan via
2021-12-13 15:17 ` Dr. David Alan Gilbert [this message]
2021-11-24 10:05 ` [PATCH v3 04/23] multifd: Add missing documention Juan Quintela
2021-11-25 18:38 ` Dr. David Alan Gilbert
2021-11-26 9:34 ` Juan Quintela
2021-11-24 10:05 ` [PATCH v3 05/23] multifd: The variable is only used inside the loop Juan Quintela
2021-11-25 18:40 ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 06/23] multifd: remove used parameter from send_prepare() method Juan Quintela
2021-11-25 18:51 ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 07/23] multifd: remove used parameter from send_recv_pages() method Juan Quintela
2021-11-25 18:53 ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 08/23] multifd: Fill offset and block for reception Juan Quintela
2021-11-25 19:41 ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 09/23] multifd: Make zstd compression method not use iovs Juan Quintela
2021-11-29 17:16 ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 10/23] multifd: Make zlib " Juan Quintela
2021-11-29 17:30 ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 11/23] multifd: Move iov from pages to params Juan Quintela
2021-11-29 17:52 ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 12/23] multifd: Make zlib use iov's Juan Quintela
2021-11-29 18:01 ` Dr. David Alan Gilbert
2021-11-29 18:21 ` Juan Quintela
2021-11-24 10:06 ` [PATCH v3 13/23] multifd: Make zstd " Juan Quintela
2021-11-29 18:03 ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 14/23] multifd: Remove send_write() method Juan Quintela
2021-11-29 18:19 ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 15/23] multifd: Use a single writev on the send side Juan Quintela
2021-11-29 18:35 ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 16/23] multifd: Unfold "used" variable by its value Juan Quintela
2021-11-30 10:45 ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 17/23] multifd: Use normal pages array on the send side Juan Quintela
2021-11-30 10:50 ` Dr. David Alan Gilbert
2021-11-30 12:01 ` Juan Quintela
2021-12-01 10:59 ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 18/23] multifd: Use normal pages array on the recv side Juan Quintela
2021-12-07 7:11 ` Peter Xu
2021-12-10 10:41 ` Juan Quintela
2021-11-24 10:06 ` [PATCH v3 19/23] multifd: recv side only needs the RAMBlock host address Juan Quintela
2021-12-01 18:56 ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 20/23] multifd: Rename pages_used to normal_pages Juan Quintela
2021-12-01 19:00 ` Dr. David Alan Gilbert
2021-11-24 10:06 ` [PATCH v3 21/23] multifd: Support for zero pages transmission Juan Quintela
2021-12-02 11:36 ` Dr. David Alan Gilbert
2021-12-02 12:08 ` Juan Quintela
2021-12-02 16:16 ` Dr. David Alan Gilbert
2021-12-02 16:19 ` Juan Quintela
2021-12-02 16:46 ` Dr. David Alan Gilbert
2021-12-02 16:52 ` Juan Quintela
2021-11-24 10:06 ` [PATCH v3 22/23] multifd: Zero " Juan Quintela
2021-12-02 16:42 ` Dr. David Alan Gilbert
2021-12-02 16:49 ` Juan Quintela
2021-11-24 10:06 ` [PATCH v3 23/23] migration: Use multifd before we check for the zero page Juan Quintela
2021-12-02 17:11 ` Dr. David Alan Gilbert
2021-12-02 17:38 ` Juan Quintela
2021-12-02 17:49 ` Dr. David Alan Gilbert
2021-12-07 7:30 ` Peter Xu
2021-12-13 9:03 ` Juan Quintela
2021-12-15 1:39 ` Peter Xu
2021-11-24 10:24 ` [PATCH v3 00/23] Migration: Transmit and detect zero pages in the multifd threads Peter Xu