qemu-devel.nongnu.org archive mirror
From: Peter Xu <peterx@redhat.com>
To: Chuang Xu <xuchuangxclwt@bytedance.com>
Cc: qemu-devel@nongnu.org, quintela@redhat.com, dgilbert@redhat.com,
	pbonzini@redhat.com, david@redhat.com, philmd@linaro.org,
	zhouyibo@bytedance.com
Subject: Re: [RFC v5 0/3] migration: reduce time of loading non-iterable vmstate
Date: Tue, 21 Feb 2023 15:36:47 -0500
Message-ID: <Y/UrXwRK7rB2KRKO@x1n>
In-Reply-To: <51d862b2-96be-0b93-7ed2-fcd15ffaa76e@bytedance.com>

On Tue, Feb 21, 2023 at 04:57:30PM +0800, Chuang Xu wrote:
> Hi, Peter

Hi, Chuang,

> 
> This email is a supplement to the previous one.

AFAICT I mostly agree with you, but see a few more comments below.

> 
> > On 2023/2/21 11:38 AM, Chuang Xu wrote:
> > 
> > I think we need a memory_region_transaction_commit_force() to force
> > commit
> > some transactions when load vmstate. This function is designed like this:
> > 
> > /*
> >  * memory_region_transaction_commit_force() is designed to
> >  * force the mr transaction to be committed in the process
> >  * of loading vmstate.
> >  */
> > void memory_region_transaction_commit_force(void)

I would call this memory_region_transaction_do_commit(), and I don't think
the manipulation of memory_region_transaction_depth is needed here: we
never release the BQL during the whole process, so saving and restoring
the depth buys us nothing.

So, I think we can...

> > {
> >     AddressSpace *as;
> >     unsigned int memory_region_transaction_depth_copy =
> >         memory_region_transaction_depth;
> > 
> >     /*
> >      * Temporarily replace memory_region_transaction_depth with 0 to
> >      * prevent memory_region_transaction_commit_force() and
> >      * address_space_to_flatview() from calling each other recursively.
> >      */
> >     memory_region_transaction_depth = 0;

... drop here ...

> > 
> >     assert(qemu_mutex_iothread_locked());
> > 
> > 
> >     if (memory_region_update_pending) {
> >         flatviews_reset();
> > 
> >         MEMORY_LISTENER_CALL_GLOBAL(begin, Forward);
> > 
> >         QTAILQ_FOREACH(as, &address_spaces, address_spaces_link) {
> >             address_space_set_flatview(as);
> >             address_space_update_ioeventfds(as);
> >         }
> >         memory_region_update_pending = false;
> >         ioeventfd_update_pending = false;
> >         MEMORY_LISTENER_CALL_GLOBAL(commit, Forward);
> >     } else if (ioeventfd_update_pending) {
> >         QTAILQ_FOREACH(as, &address_spaces, address_spaces_link) {
> >             address_space_update_ioeventfds(as);
> >         }
> >         ioeventfd_update_pending = false;
> >     }
> > 
> >     /* recover memory_region_transaction_depth */
> >     memory_region_transaction_depth =
> >         memory_region_transaction_depth_copy;

... drop here ...

> > }

... then call this new memory_region_transaction_do_commit() in
memory_region_transaction_commit().

void memory_region_transaction_commit(void)
{
    assert(memory_region_transaction_depth);
    --memory_region_transaction_depth;
    memory_region_transaction_do_commit();
}

Then...

> > 
> > Now there are two options to use this function:
> > 1. call it in address_space_to_flatview():
> > 
> > static inline FlatView *address_space_to_flatview(AddressSpace *as)
> > {
> >     /*
> >      * Before using any flatview, check whether we're in the middle
> >      * of a memory region transaction. If so, force commit the
> >      * transaction.
> >      */
> >     if (memory_region_transaction_in_progress())
> 
> Here we need to also check that the BQL is held, otherwise a thread
> running here without the BQL will trigger the assertion in
> memory_region_transaction_commit_force().
> 
> I'm not sure whether this condition is sufficient. At least for the mr
> accesses in the load thread it is (the load thread holds the BQL when
> accessing mr), but for other cases it seems we are back to our discussion
> on the sanity check..

Yes, I think the sanity checks are actually good stuff.

I would think it's nice to implement address_space_to_flatview() like this.
I guess we don't have a use case to fetch the flatview during a memory
update procedure, but I also don't see why it can't be supported.

/* Need to be called with either BQL or RCU read lock held */
static inline FlatView *address_space_to_flatview(AddressSpace *as)
{
    if (qemu_mutex_iothread_locked()) {
        /* We exclusively own the flatview now.. */
        if (memory_region_transaction_in_progress()) {
            /*
             * Fetch the flatview within a transaction in-progress, it
             * means current_map may not be the latest, we need to update
             * immediately to make sure the caller won't see obsolete
             * mapping.
             */
            memory_region_transaction_do_commit();
        }

        /* No further protection needed to access current_map */
        return as->current_map;
    }

    /* Otherwise we must have had the RCU lock or something went wrong */
    assert(rcu_read_is_locked());

    return qatomic_rcu_read(&as->current_map);
}

Then IIUC everything should start to run normally again, with the same hope
that it will keep the benefit of your whole idea.  Does that look sane to
you?

> 
> Another point I worry about is whether the number of mr transaction
> commits increases in some other scenarios because of this forced commit,
> although so far I haven't seen a simple virtual machine lifecycle trigger
> it.

It won't be so bad; with the help of the existing memory_region_update_pending
and ioeventfd_update_pending flags, the commit() will be a no-op if both are
unset.

> I did a little test: replace commit_force() with abort() and run qtest.
> Almost all the errors I can see are related to migration..
> 
> >         memory_region_transaction_commit_force();
> >     return qatomic_rcu_read(&as->current_map);
> > }
> > 
> > 2. call it before each post_load()
> > 
> > I prefer the former one; it is more general than the latter.
> > And with this function, the sanity check is not necessary any more.
> > Although we may inevitably call memory_region_transaction_commit_force()
> > several times, in my actual test the number of calls to
> > memory_region_transaction_commit_force() is still insignificant compared
> > with the number of calls to memory_region_transaction_commit() before
> > the optimization. As a result, this code won't affect the optimization
> > effect, but it can ensure reliability.

Yes, the 2nd option doesn't sound appealing to me, because we have so many
post_load()s and it could quickly defeat your original purpose, I think.

Thanks,

-- 
Peter Xu


