From: Lukas Straub <lukasstraub2@web.de>
To: Peter Xu <peterx@redhat.com>
Cc: Hailiang Zhang <zhang.zhanghailiang@huawei.com>,
	Juan Quintela <quintela@redhat.com>,
	David Hildenbrand <david@redhat.com>,
	qemu-devel@nongnu.org,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	Wei Wang <wei.w.wang@intel.com>,
	Leonardo Bras Soares Passos <lsoaresp@redhat.com>
Subject: Re: [PATCH] migration: Move bitmap_mutex out of migration_bitmap_clear_dirty()
Date: Sat, 3 Jul 2021 18:31:15 +0200	[thread overview]
Message-ID: <20210703183115.17f385f6@gecko.fritz.box> (raw)
In-Reply-To: <20210630200805.280905-1-peterx@redhat.com>

On Wed, 30 Jun 2021 16:08:05 -0400
Peter Xu <peterx@redhat.com> wrote:

> Taking the mutex for every single dirty bit we clear is too slow,
> especially since we take/release it even when the dirty bit is already
> clear.  So far the lock only synchronizes the special case of
> qemu_guest_free_page_hint() against the migration thread, nothing
> really serious yet.  Let's move the lock up a level.
> 
> There are two callers of migration_bitmap_clear_dirty().
> 
> For migration, move it into ram_save_iterate().  With the help of the
> MAX_WAIT logic, a single ram_save_iterate() call runs for no more than
> roughly 50ms, so we can take the lock once at its entry.  It also means
> any caller of qemu_guest_free_page_hint() can be delayed by that much;
> but such calls should be very rare, happen only during migration, and I
> don't see a problem with it.
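
For reference, the time-bounding pattern that the MAX_WAIT logic implements
looks roughly like the sketch below.  This is illustrative only, not QEMU's
actual code: the names (more_pages_to_send, send_one_page, now_ms) and the
MAX_WAIT_MS constant are made up for this sketch.

    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>

    #define MAX_WAIT_MS 50          /* illustrative ~50ms budget */

    extern bool more_pages_to_send(void);
    extern void send_one_page(void);

    static uint64_t now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
    }

    static void bounded_send_loop(void)
    {
        uint64_t t0 = now_ms();

        for (uint64_t i = 0; more_pages_to_send(); i++) {
            send_one_page();
            /* Check the clock only every 64 pages to keep the loop cheap. */
            if ((i & 63) == 0 && now_ms() - t0 > MAX_WAIT_MS) {
                break;  /* bail out; the caller releases the mutex shortly */
            }
        }
    }

The clock check is what bounds the critical section, even though the mutex
is held across the whole loop.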
> 
> For COLO, move it up into colo_flush_ram_cache().  I think COLO forgot
> to take that lock even when calling ramblock_sync_dirty_bitmap();
> compare with migration_bitmap_sync(), which takes it correctly.  So let
> the mutex cover both the ramblock_sync_dirty_bitmap() and
> migration_bitmap_clear_dirty() calls.

Hi,
I don't think COLO needs it: colo_flush_ram_cache() only runs on
the secondary (incoming) side, and AFAIK the bitmap is only set in
ram_load_precopy(), which doesn't run in parallel with it.

That said, I'm not sure what ramblock_sync_dirty_bitmap() does here.
I guess it's only there to keep the rest of the migration code happy?

Regards,
Lukas Straub

> It would even be possible to drop the lock entirely and use atomic
> operations on rb->bmap and the variable migration_dirty_pages.  I
> didn't do that, both to stay on the safe side and because it is hard to
> predict whether the frequent atomic ops would bring overhead of their
> own, e.g. on huge VMs where this happens very often.  If that ever
> becomes a real problem, we can keep a local counter and flush it with
> periodic atomic ops, as sketched below.  Keep it simple for now.
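
A minimal sketch of that "local counter + periodic atomic ops" idea, not
part of the patch: the names (note_page_cleared, flush_cleared,
FLUSH_EVERY) are hypothetical, and plain C11 atomics stand in for QEMU's
own atomic helpers.

    #include <stdatomic.h>
    #include <stdint.h>

    #define FLUSH_EVERY 256  /* hypothetical batching threshold */

    /* Stand-in for the real migration_dirty_pages counter. */
    static _Atomic uint64_t migration_dirty_pages;

    /*
     * Called once per cleared dirty bit; publishes updates in batches
     * so we pay one atomic op per FLUSH_EVERY clears instead of one
     * atomic op per clear.
     */
    static void note_page_cleared(uint64_t *local_cleared)
    {
        if (++(*local_cleared) >= FLUSH_EVERY) {
            atomic_fetch_sub(&migration_dirty_pages, *local_cleared);
            *local_cleared = 0;
        }
    }

    /* Any remainder must be flushed when the send loop ends. */
    static void flush_cleared(uint64_t *local_cleared)
    {
        if (*local_cleared) {
            atomic_fetch_sub(&migration_dirty_pages, *local_cleared);
            *local_cleared = 0;
        }
    }

The trade-off is that migration_dirty_pages may lag reality by up to
FLUSH_EVERY pages, which its consumers would have to tolerate.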
> 
> Cc: Wei Wang <wei.w.wang@intel.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Hailiang Zhang <zhang.zhanghailiang@huawei.com>
> Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
> Cc: Juan Quintela <quintela@redhat.com>
> Cc: Leonardo Bras Soares Passos <lsoaresp@redhat.com>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  migration/ram.c | 13 +++++++++++--
>  1 file changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 723af67c2e..9f2965675d 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -795,8 +795,6 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
>  {
>      bool ret;
>  
> -    QEMU_LOCK_GUARD(&rs->bitmap_mutex);
> -
>      /*
>       * Clear dirty bitmap if needed.  This _must_ be called before we
>       * send any of the page in the chunk because we need to make sure
> @@ -2834,6 +2832,14 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>          goto out;
>      }
>  
> +    /*
> +     * We'll hold this lock for a while, but that's okay for two reasons.
> +     * Firstly, the only other thread that may take it is the one calling
> +     * qemu_guest_free_page_hint(), which should be rare; secondly, see
> +     * MAX_WAIT below (if curious, see also commit 4508bd9ed8053ce), which
> +     * guarantees that we release the lock on a regular basis.
> +     */
> +    qemu_mutex_lock(&rs->bitmap_mutex);
>      WITH_RCU_READ_LOCK_GUARD() {
>          if (ram_list.version != rs->last_version) {
>              ram_state_reset(rs);
> @@ -2893,6 +2899,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>              i++;
>          }
>      }
> +    qemu_mutex_unlock(&rs->bitmap_mutex);
>  
>      /*
>       * Must occur before EOS (or any QEMUFile operation)
> @@ -3682,6 +3689,7 @@ void colo_flush_ram_cache(void)
>      unsigned long offset = 0;
>  
>      memory_global_dirty_log_sync();
> +    qemu_mutex_lock(&ram_state->bitmap_mutex);
>      WITH_RCU_READ_LOCK_GUARD() {
>          RAMBLOCK_FOREACH_NOT_IGNORED(block) {
>              ramblock_sync_dirty_bitmap(ram_state, block);
> @@ -3710,6 +3718,7 @@ void colo_flush_ram_cache(void)
>          }
>      }
>      trace_colo_flush_ram_cache_end();
> +    qemu_mutex_unlock(&ram_state->bitmap_mutex);
>  }
>  
>  /**



-- 

