From: Paolo Bonzini <pbonzini@redhat.com>
To: "Denis V. Lunev" <den@openvz.org>
Cc: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>,
qemu-block@nongnu.org, qemu-devel@nongnu.org,
stefanha@redhat.com, famz@redhat.com, mreitz@redhat.com,
jcody@redhat.com, eblake@redhat.com,
Kevin Wolf <kwolf@redhat.com>
Subject: Re: [Qemu-devel] [PATCH v2] mirror: double performance of the bulk stage if the disc is full
Date: Wed, 20 Jul 2016 15:08:31 -0400 (EDT)
Message-ID: <1650790738.8994994.1469041711240.JavaMail.zimbra@redhat.com>
In-Reply-To: <578FB680.5010801@openvz.org>
----- Original Message -----
> From: "Denis V. Lunev" <den@openvz.org>
> To: "Vladimir Sementsov-Ogievskiy" <vsementsov@virtuozzo.com>, qemu-block@nongnu.org, qemu-devel@nongnu.org
> Cc: stefanha@redhat.com, famz@redhat.com, mreitz@redhat.com, jcody@redhat.com, eblake@redhat.com,
> pbonzini@redhat.com, "Kevin Wolf" <kwolf@redhat.com>
> Sent: Wednesday, July 20, 2016 7:36:00 PM
> Subject: Re: [PATCH v2] mirror: double performance of the bulk stage if the disc is full
>
> On 07/14/2016 08:19 PM, Vladimir Sementsov-Ogievskiy wrote:
> > Mirror can have up to 16 in-flight requests, but on a full copy
> > (the whole source disk is non-zero) the number of in-flight requests
> > is always 1. This happens because a single request is not limited in
> > size: its data occupies the maximum available capacity of s->buf.
> >
> > The patch limits the size of a request to an artificial constant
> > (1 MB here), which is neither too big nor too small. This effectively
> > re-enables the parallelism the mirror code was designed for.
> >
> > The result is significant: the time to migrate a 10 GB disk is reduced
> > from ~350 sec to ~170 sec.
> >
> > Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> > Signed-off-by: Denis V. Lunev <den@openvz.org>
> > CC: Stefan Hajnoczi <stefanha@redhat.com>
> > CC: Fam Zheng <famz@redhat.com>
> > CC: Kevin Wolf <kwolf@redhat.com>
> > CC: Max Reitz <mreitz@redhat.com>
> > CC: Jeff Cody <jcody@redhat.com>
> > CC: Eric Blake <eblake@redhat.com>
> > ---
> >
> > v2: if s->buf_size is larger than the default, use it to limit io_sectors
> >
> > block/mirror.c | 10 ++++++++--
> > 1 file changed, 8 insertions(+), 2 deletions(-)
> >
> > diff --git a/block/mirror.c b/block/mirror.c
> > index b1e633e..3ac3b4d 100644
> > --- a/block/mirror.c
> > +++ b/block/mirror.c
> > @@ -23,7 +23,9 @@
> >  
> >  #define SLICE_TIME    100000000ULL /* ns */
> >  #define MAX_IN_FLIGHT 16
> > -#define DEFAULT_MIRROR_BUF_SIZE (10 << 20)
> > +#define MAX_IO_SECTORS ((1 << 20) >> BDRV_SECTOR_BITS) /* 1 Mb */
> > +#define DEFAULT_MIRROR_BUF_SIZE \
> > +    (MAX_IN_FLIGHT * MAX_IO_SECTORS * BDRV_SECTOR_SIZE)
> >  
> >  /* The mirroring buffer is a list of granularity-sized chunks.
> >   * Free chunks are organized in a list.
> > @@ -322,6 +324,8 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
> >      int nb_chunks = 1;
> >      int64_t end = s->bdev_length / BDRV_SECTOR_SIZE;
> >      int sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
> > +    int max_io_sectors = MAX((s->buf_size >> BDRV_SECTOR_BITS) / MAX_IN_FLIGHT,
> > +                             MAX_IO_SECTORS);
> >  
> >      sector_num = hbitmap_iter_next(&s->hbi);
> >      if (sector_num < 0) {
> > @@ -385,7 +389,9 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
> >                                            nb_chunks * sectors_per_chunk,
> >                                            &io_sectors, &file);
> >          if (ret < 0) {
> > -            io_sectors = nb_chunks * sectors_per_chunk;
> > +            io_sectors = MIN(nb_chunks * sectors_per_chunk, max_io_sectors);
> > +        } else if (ret & BDRV_BLOCK_DATA) {
> > +            io_sectors = MIN(io_sectors, max_io_sectors);
> >          }
> >  
> >          io_sectors -= io_sectors % sectors_per_chunk;
> guys, what about this patch?

I think that at this point it has missed hard freeze. It's not up to me
whether to consider it a bugfix.

Paolo
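
For readers skimming the diff above, here is a minimal standalone sketch, not part of the patch itself, that reproduces the arithmetic behind the new limits, assuming QEMU's usual 512-byte sectors (BDRV_SECTOR_BITS == 9):

#include <stdio.h>

/* Assumed QEMU constants: 512-byte sectors. */
#define BDRV_SECTOR_BITS 9
#define BDRV_SECTOR_SIZE (1 << BDRV_SECTOR_BITS)

/* Limits as introduced by the patch. */
#define MAX_IN_FLIGHT    16
#define MAX_IO_SECTORS   ((1 << 20) >> BDRV_SECTOR_BITS)  /* 1 MB per request */
#define DEFAULT_MIRROR_BUF_SIZE \
    (MAX_IN_FLIGHT * MAX_IO_SECTORS * BDRV_SECTOR_SIZE)

int main(void)
{
    /* Each request is now capped at 2048 sectors (1 MB)... */
    printf("per-request cap: %d sectors (%d bytes)\n",
           MAX_IO_SECTORS, MAX_IO_SECTORS * BDRV_SECTOR_SIZE);

    /* ...so the default 16 MB buffer holds 16 requests in flight,
     * instead of a single unbounded request filling all of s->buf. */
    printf("default buffer: %d bytes -> room for %d concurrent requests\n",
           DEFAULT_MIRROR_BUF_SIZE,
           DEFAULT_MIRROR_BUF_SIZE / (MAX_IO_SECTORS * BDRV_SECTOR_SIZE));
    return 0;
}

Compiling and running this prints a 2048-sector cap and a 16 MB default buffer, i.e. room for 16 concurrent 1 MB requests, whereas the old 10 MB s->buf could be consumed by one unbounded request; that restored parallelism is where the ~2x speedup reported in the commit message comes from.
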
Thread overview: 10+ messages
2016-07-14 17:19 [Qemu-devel] [PATCH v2] mirror: double performance of the bulk stage if the disc is full Vladimir Sementsov-Ogievskiy
2016-07-18 15:36 ` Denis V. Lunev
2016-07-20 17:36 ` Denis V. Lunev
2016-07-20 19:08 ` Paolo Bonzini [this message]
2016-07-20 20:30 ` Denis V. Lunev
2016-07-22 16:41 ` Max Reitz
2016-07-26 2:47 ` Jeff Cody
2016-07-26 2:48 ` Jeff Cody
2016-08-03 10:52 ` Kevin Wolf
2016-08-03 12:20 ` Vladimir Sementsov-Ogievskiy