qemu-devel.nongnu.org archive mirror
From: Kevin Wolf <kwolf@redhat.com>
To: Juan Quintela <quintela@redhat.com>
Cc: "amit.shah@redhat.com" <amit.shah@redhat.com>,
	"Zhang, Yang Z" <yang.z.zhang@intel.com>,
	"Li, Liang Z" <liang.z.li@intel.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"qemu-block@nongnu.org" <qemu-block@nongnu.org>
Subject: Re: [Qemu-devel] [PATCH] migration: flush the bdrv before stopping VM
Date: Wed, 25 Mar 2015 11:53:38 +0100	[thread overview]
Message-ID: <20150325105338.GA4581@noname.str.redhat.com> (raw)
In-Reply-To: <87zj711hd0.fsf@neno.neno>

On 25.03.2015 at 11:50, Juan Quintela wrote:
> "Li, Liang Z" <liang.z.li@intel.com> wrote:
> >> >> > Right now, we don't have an interface to detect those cases and go
> >> >> > back to the iterative stage.
> >> >>
> >> >> How about going back to the iterative stage when we detect that
> >> >> pending_size is larger than max_size, like this:
> >> >>
> >> >> +                /* Flushing here aims to shorten the VM downtime;
> >> >> +                 * bdrv_flush_all() is a time-consuming operation
> >> >> +                 * when the guest has done some file writing. */
> >> >> +                bdrv_flush_all();
> >> >> +                pending_size = qemu_savevm_state_pending(s->file, max_size);
> >> >> +                if (pending_size && pending_size >= max_size) {
> >> >> +                    qemu_mutex_unlock_iothread();
> >> >> +                    continue;
> >> >> +                }
> >> >>                   ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
> >> >>                   if (ret >= 0) {
> >> >>                       qemu_file_set_rate_limit(s->file, INT64_MAX);
> >> >>
> >> >> and this is quite simple.
> >> >
> >> > Yes, but it is too simple. If you hold all the locks during
> >> > bdrv_flush_all(), your VM will effectively stop as soon as it performs
> >> > the next I/O access, so you don't win much. And you still don't have a
> >> > timeout for cases where the flush takes really long.
> >> 
> >> This is probably better than what we have now (basically we are measuring,
> >> after bdrv_flush_all, how much the amount of dirty memory has changed,
> >> and returning to the iterative stage if it changed too much).  A timeout
> >> would be better anyway.  An interface to start the synchronization sooner,
> >> asynchronously, would also be good.
> >> 
> >> Notice that my understanding is that any proper fix for this is 2.4 material.
> >
> > Then, how do we deal with this issue in 2.3: leave it as is, or make an
> > incomplete fix like the one above?
> 
> I think it is better to leave it as is for 2.3. With a patch like this
> one, we improve one load and get worse on a different load (it depends
> a lot on the ratio of memory dirtying vs. disk writes).  I have no data
> on which load is more common, so I prefer to be conservative this late
> in the cycle.  What do you think?

I agree, it's too late in the release cycle for such a change.

Kevin


Thread overview: 17+ messages
2015-03-17  8:53 [Qemu-devel] [PATCH] migration: flush the bdrv before stopping VM Liang Li
2015-03-17 12:12 ` Juan Quintela
2015-03-18  3:19   ` Li, Liang Z
2015-03-18 11:17     ` Kevin Wolf
2015-03-18 12:36       ` Juan Quintela
2015-03-18 12:59         ` Paolo Bonzini
2015-03-18 13:42         ` Kevin Wolf
2015-03-20  7:22         ` Li, Liang Z
2015-03-25 10:50           ` Juan Quintela
2015-03-25 10:53             ` Kevin Wolf [this message]
2015-03-26  1:13               ` Li, Liang Z
2015-06-24 11:08               ` Li, Liang Z
2015-06-25 12:34                 ` [Qemu-devel] [Qemu-block] " Stefan Hajnoczi
2015-03-18 13:39       ` [Qemu-devel] " Li, Liang Z
2015-03-18 16:55         ` Dr. David Alan Gilbert
2015-03-19 14:06           ` Li, Liang Z
2015-03-19 14:40             ` Dr. David Alan Gilbert
