From: Stefan Hajnoczi <stefanha@gmail.com>
To: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Peter Lieven <pl@kamp.de>, Fam Zheng <famz@redhat.com>,
	wency@cn.fujitsu.com, qemu block <qemu-block@nongnu.org>,
	Juan Quintela <quintela@redhat.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefan Hajnoczi <stefanha@redhat.com>
Subject: Re: [Qemu-devel] [Qemu-block] block migration and MAX_IN_FLIGHT_IO
Date: Tue, 6 Mar 2018 16:07:39 +0000	[thread overview]
Message-ID: <20180306160739.GN31045@stefanha-x1.localdomain> (raw)
In-Reply-To: <20180305145215.GM3131@work-vm>


On Mon, Mar 05, 2018 at 02:52:16PM +0000, Dr. David Alan Gilbert wrote:
> * Peter Lieven (pl@kamp.de) wrote:
> > Am 05.03.2018 um 12:45 schrieb Stefan Hajnoczi:
> > > On Thu, Feb 22, 2018 at 12:13:50PM +0100, Peter Lieven wrote:
> > >> I stumbled across the MAX_INFLIGHT_IO field that was introduced in 2015 and was curious what the reason
> > >> was for choosing 512 MB as readahead. I ask because I found that the source VM gets very unresponsive, I/O-wise,
> > >> while the initial 512 MB are read, and it furthermore seems to stay unresponsive if we choose a high migration
> > >> speed and have fast storage on the destination VM.
> > >>
> > >> In our environment I modified this value to 16 MB, which seems to work much more smoothly. I wonder if we
> > >> should make this a user-configurable value, or at least define a different rate limit for the block transfer in the bulk stage?
> > > I don't know if benchmarks were run when choosing the value.  From the
> > > commit description it sounds like the main purpose was to limit the
> > > amount of memory that can be consumed.
> > >
> > > 16 MB also fulfills that criterion :), but why is the source VM more
> > > responsive with a lower value?
> > >
> > > Perhaps the issue is queue depth on the storage device - the block
> > > migration code enqueues up to 512 MB worth of reads, and guest I/O has
> > > to wait?
> > 
> > That is my guess. Especially if the destination storage is faster, we basically always have
> > 512 I/Os in flight on the source storage.
> > 
> > Does anyone mind if we reduce that value to 16 MB, or do we need a better mechanism?
> 
> We've got migration-parameters these days; you could connect it to one
> of those fairly easily I think.
> Try: grep -i 'cpu[-_]throttle[-_]initial'  for an example of one that's
> already there.
> Then you can set it to whatever you like.
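For what it's worth, if we did go that route, a hedged sketch of what
consuming such a parameter might look like, modeled on how existing
parameters like cpu-throttle-initial are read from MigrationState.
This is not code from the QEMU tree: the block_max_inflight field is
invented here for illustration, and a real patch would also need the
usual QAPI schema and migrate-set-parameters plumbing.

/*
 * Hypothetical helper for migration/block.c (assumes the migration.h
 * context so that migrate_get_current() is in scope): return the
 * user-set cap on parallel block-migration I/O, falling back to a
 * conservative default.  block_max_inflight is an invented field.
 */
#define DEFAULT_BLOCK_MAX_INFLIGHT 16

static int block_mig_max_inflight(void)
{
    MigrationState *s = migrate_get_current();

    if (s->parameters.block_max_inflight > 0) {
        return s->parameters.block_max_inflight;
    }
    return DEFAULT_BLOCK_MAX_INFLIGHT;
}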

It would be nice to solve the performance problem without adding a
tuneable.

On the other hand, QEMU has no idea what the queue depth of the device
is.  Therefore it cannot prioritize guest I/O over block migration I/O.

512 parallel requests is much too high.  Most parallel I/O benchmarking
is done at a queue depth of 32-64.

I think that 16 parallel requests is a reasonable maximum number for a
background job.

We need to be clear, though, that the purpose of this change is
unrelated to the original 512 MB memory footprint goal.  It just
happens to touch the same constant, but the goal is now to submit at
most 16 I/O requests in parallel to avoid monopolizing the I/O device.
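
To make that mechanism concrete, here is a self-contained, hedged
sketch (plain POSIX threads rather than QEMU's coroutine-based AIO,
and not actual migration/block.c code) of capping a background copy
job at 16 parallel reads with a counting semaphore:

#include <inttypes.h>
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define MAX_PARALLEL_IO 16   /* the proposed cap for the background job */
#define TOTAL_BLOCKS    128  /* stand-in for the bulk-stage block count */

static sem_t slots;          /* free submission slots; 0 => cap reached */

static void *read_one_block(void *arg)
{
    intptr_t blk = (intptr_t)arg;

    /* A real implementation would issue the block read here. */
    printf("copied block %" PRIdPTR "\n", blk);

    sem_post(&slots);        /* completion frees a slot for the submitter */
    return NULL;
}

int main(void)
{
    sem_init(&slots, 0, MAX_PARALLEL_IO);

    for (intptr_t blk = 0; blk < TOTAL_BLOCKS; blk++) {
        sem_wait(&slots);    /* blocks while 16 reads are in flight */

        pthread_t t;
        pthread_create(&t, NULL, read_one_block, (void *)blk);
        pthread_detach(&t);
    }

    /* Drain: reacquire every slot, i.e. wait for outstanding reads. */
    for (int i = 0; i < MAX_PARALLEL_IO; i++) {
        sem_wait(&slots);
    }
    return 0;
}

The semaphore stands in for a submitted-request counter: the cap
throttles submission rather than total memory, which is exactly the
shift in goal described above.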

Stefan


