qemu-devel.nongnu.org archive mirror
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: 陈梁 <chenliang0016@icloud.com>
Cc: qemu-devel@nongnu.org, quintela@redhat.com
Subject: Re: [Qemu-devel] [PATCH 1/1] Count used RAMBlock pages for migration_dirty_pages
Date: Fri, 21 Mar 2014 14:41:32 +0000
Message-ID: <20140321144131.GB8476@work-vm>
In-Reply-To: <85431B4A-46CD-462D-9464-E9D9ACCA25FD@icloud.com>

* 陈梁 (chenliang0016@icloud.com) wrote:
> 
> > From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> > 
> > This is a fix for a bug* triggered by a migration after hot-unplugging
> > a few virtio-net NICs, which caused the migration never to converge
> > because 'migration_dirty_pages' was incorrectly initialised.
> > 
> > 'migration_dirty_pages' is used as a tally of the number of outstanding
> > dirty pages, to give the migration code an idea of how much more data
> > will need to be transferred, and thus whether it can end the iterative
> > phase.
> > 
> > It was initialised to the total size of the RAMBlock address space;
> > however, hot-unplug can leave this space sparse, and hence
> > migration_dirty_pages ended up too large.
> > 
> > Note that the code tries to be careful when counting, to deal with
> > RAMBlocks that share the same end/start page - I don't know
> > if this is actually possible, and it does complicate the code,
> > but since there was other code that dealt with unaligned RAMBlocks
> > it seemed possible.
> > 
> > Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > 
> > (* https://bugzilla.redhat.com/show_bug.cgi?id=1074913 )
> > ---
> > arch_init.c | 41 +++++++++++++++++++++++++++++++++++++----
> > 1 file changed, 37 insertions(+), 4 deletions(-)
> > 
> > diff --git a/arch_init.c b/arch_init.c
> > index f18f42e..ef0e98d 100644
> > --- a/arch_init.c
> > +++ b/arch_init.c
> > @@ -727,11 +727,8 @@ static void reset_ram_globals(void)
> > static int ram_save_setup(QEMUFile *f, void *opaque)
> > {
> >     RAMBlock *block;
> > -    int64_t ram_pages = last_ram_offset() >> TARGET_PAGE_BITS;
> > +    int64_t ram_bitmap_pages;
> > 
> > -    migration_bitmap = bitmap_new(ram_pages);
> > -    bitmap_set(migration_bitmap, 0, ram_pages);
> > -    migration_dirty_pages = ram_pages;
> >     mig_throttle_on = false;
> >     dirty_rate_high_cnt = 0;
> > 
> > @@ -770,6 +767,42 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
> >     bytes_transferred = 0;
> >     reset_ram_globals();
> > 
> > +    ram_bitmap_pages = last_ram_offset() >> TARGET_PAGE_BITS;
> > +    migration_bitmap = bitmap_new(ram_bitmap_pages);
> > +    bitmap_set(migration_bitmap, 0, ram_bitmap_pages);
> > +    /*
> > +     * Count the total number of pages used by ram blocks. We clear the dirty
> > +     * bit for the start/end of each ramblock as we go so that we don't double
> > +     * count ramblocks that have overlapping pages - at entry the whole dirty
> > +     * bitmap is set.
> > +     */
> > +    migration_dirty_pages = 0;
> > +    QTAILQ_FOREACH(block, &ram_list.blocks, next) {
> > +        uint64_t block_pages = 0;
> > +        ram_addr_t saddr, eaddr;
> > +
> > +        saddr = block->mr->ram_addr;
> > +        eaddr = saddr + block->length - 1;
> > +        saddr /= TARGET_PAGE_SIZE;
> > +        eaddr /= TARGET_PAGE_SIZE;
> > +
> > +        /* Do the end pages of the range, that might be shared with others */
> > +        block_pages += test_and_clear_bit(saddr, migration_bitmap);
> > +        block_pages += test_and_clear_bit(eaddr, migration_bitmap);
> > +
> > +        if ((saddr + 1) < eaddr) {
> > +            block_pages += eaddr - (saddr + 1);
> > +        }
> > +        migration_dirty_pages += block_pages;
> > +        /*fprintf(stderr, "ram_save_setup: %s s/e=%zx/%zx page s/e=%zx/%zx"
> > +                " bp=%zu\n",
> > +                block->idstr, block->mr->ram_addr,
> > +                block->mr->ram_addr+block->length-1,
> > +                saddr, eaddr, block_pages); */
> > +    }
> > +    /* and set the bits again that got used for our overlap check */
> > +    bitmap_set(migration_bitmap, 0, ram_bitmap_pages);
> 
> maybe it should be like this:
> bitmap_set(migration_bitmap, 0, last_ram_offset() >> TARGET_PAGE_BITS);
> because some of the space is sparse.

No, ram_bitmap_pages is already equal to last_ram_offset() >> TARGET_PAGE_BITS.
The loop only calculates migration_dirty_pages; it never touches
ram_bitmap_pages, so the two calls set exactly the same bits.
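To make that concrete, here is a rough standalone model of the counting
trick (toy one-word bitmap and hand-rolled helpers, not the real QEMU
bitmap API; the block layout is made up for illustration):

/*
 * Toy layout: an 8-page address space with two 3-page blocks at
 * pages 0-2 and 5-7; pages 3-4 are the sparse hole left by hot-unplug.
 */
#include <stdio.h>

static unsigned long bitmap;          /* one bit per page; set = dirty */

static int test_and_clear(long nr)
{
    int was_set = (bitmap >> nr) & 1;
    bitmap &= ~(1UL << nr);
    return was_set;
}

static long count_block(long saddr, long eaddr)
{
    long pages = 0;

    /* end pages first: a page shared with a neighbour counts once */
    pages += test_and_clear(saddr);
    pages += test_and_clear(eaddr);
    if ((saddr + 1) < eaddr) {
        pages += eaddr - (saddr + 1); /* middle pages, never shared */
    }
    return pages;
}

int main(void)
{
    long ram_bitmap_pages = 8; /* plays last_ram_offset() >> TARGET_PAGE_BITS */
    long dirty = 0;

    bitmap = (1UL << ram_bitmap_pages) - 1;   /* set every bit */
    dirty += count_block(0, 2);
    dirty += count_block(5, 7);
    printf("dirty=%ld of %ld\n", dirty, ram_bitmap_pages); /* dirty=6 of 8 */

    /* the re-set uses ram_bitmap_pages, which the loop never changed */
    bitmap = (1UL << ram_bitmap_pages) - 1;
    return 0;
}

The count comes out as 6 rather than 8, and the final re-set covers all
8 pages either way - which is why the suggested change would be a no-op.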

Dave
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


Thread overview: 9+ messages
2014-03-21 10:58 [Qemu-devel] [PATCH 1/1] Count used RAMBlock pages for migration_dirty_pages Dr. David Alan Gilbert (git)
2014-03-21 13:11 ` Juan Quintela
2014-03-21 13:22   ` Dr. David Alan Gilbert
2014-03-21 15:15     ` Paolo Bonzini
2014-03-21 15:45       ` Dr. David Alan Gilbert
2014-03-21 15:49         ` Paolo Bonzini
2014-03-21 16:08           ` Juan Quintela
2014-03-21 13:44 ` 陈梁
2014-03-21 14:41   ` Dr. David Alan Gilbert [this message]
