From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 20 Jul 2016 15:08:31 -0400 (EDT)
From: Paolo Bonzini
Message-ID: <1650790738.8994994.1469041711240.JavaMail.zimbra@redhat.com>
In-Reply-To: <578FB680.5010801@openvz.org>
References: <1468516741-82174-1-git-send-email-vsementsov@virtuozzo.com>
 <578FB680.5010801@openvz.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [PATCH v2] mirror: double performance of the bulk
 stage if the disc is full
To: "Denis V. Lunev"
Cc: Vladimir Sementsov-Ogievskiy, qemu-block@nongnu.org,
 qemu-devel@nongnu.org, stefanha@redhat.com, famz@redhat.com,
 mreitz@redhat.com, jcody@redhat.com, eblake@redhat.com, Kevin Wolf

----- Original Message -----
> From: "Denis V. Lunev"
> To: "Vladimir Sementsov-Ogievskiy", qemu-block@nongnu.org,
>  qemu-devel@nongnu.org
> Cc: stefanha@redhat.com, famz@redhat.com, mreitz@redhat.com,
>  jcody@redhat.com, eblake@redhat.com, pbonzini@redhat.com, "Kevin Wolf"
> Sent: Wednesday, July 20, 2016 7:36:00 PM
> Subject: Re: [PATCH v2] mirror: double performance of the bulk stage if the disc is full
>
> On 07/14/2016 08:19 PM, Vladimir Sementsov-Ogievskiy wrote:
> > Mirror can do up to 16 in-flight requests, but actually on full copy
> > (the whole source disk is non-zero) in-flight is always 1. This happens
> > as the request is not limited in size: the data occupies maximum available
> > capacity of s->buf.
> >
> > The patch limits the size of the request to some artificial constant
> > (1 Mb here), which is not that big or small. This effectively enables
> > back parallelism in mirror code as it was designed.
> >
> > The result is important: the time to migrate 10 Gb disk is reduced from
> > ~350 sec to 170 sec.
> >
> > Signed-off-by: Vladimir Sementsov-Ogievskiy
> > Signed-off-by: Denis V. Lunev
> > CC: Stefan Hajnoczi
> > CC: Fam Zheng
> > CC: Kevin Wolf
> > CC: Max Reitz
> > CC: Jeff Cody
> > CC: Eric Blake
> > ---
> >
> > v2: in case of s->buf_size larger than default use it to limit io_sectors
> >
> >  block/mirror.c | 10 ++++++++--
> >  1 file changed, 8 insertions(+), 2 deletions(-)
> >
> > diff --git a/block/mirror.c b/block/mirror.c
> > index b1e633e..3ac3b4d 100644
> > --- a/block/mirror.c
> > +++ b/block/mirror.c
> > @@ -23,7 +23,9 @@
> >
> >  #define SLICE_TIME 100000000ULL /* ns */
> >  #define MAX_IN_FLIGHT 16
> > -#define DEFAULT_MIRROR_BUF_SIZE (10 << 20)
> > +#define MAX_IO_SECTORS ((1 << 20) >> BDRV_SECTOR_BITS) /* 1 Mb */
> > +#define DEFAULT_MIRROR_BUF_SIZE \
> > +    (MAX_IN_FLIGHT * MAX_IO_SECTORS * BDRV_SECTOR_SIZE)
> >
> >  /* The mirroring buffer is a list of granularity-sized chunks.
> >   * Free chunks are organized in a list.
> > @@ -322,6 +324,8 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
> >      int nb_chunks = 1;
> >      int64_t end = s->bdev_length / BDRV_SECTOR_SIZE;
> >      int sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
> > +    int max_io_sectors = MAX((s->buf_size >> BDRV_SECTOR_BITS) / MAX_IN_FLIGHT,
> > +                             MAX_IO_SECTORS);
> >
> >      sector_num = hbitmap_iter_next(&s->hbi);
> >      if (sector_num < 0) {
> > @@ -385,7 +389,9 @@ static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
> >                                            nb_chunks * sectors_per_chunk,
> >                                            &io_sectors, &file);
> >      if (ret < 0) {
> > -        io_sectors = nb_chunks * sectors_per_chunk;
> > +        io_sectors = MIN(nb_chunks * sectors_per_chunk, max_io_sectors);
> > +    } else if (ret & BDRV_BLOCK_DATA) {
> > +        io_sectors = MIN(io_sectors, max_io_sectors);
> >      }
> >
> >      io_sectors -= io_sectors % sectors_per_chunk;
>
> guys, what about this patch?

I think that at this point it has missed the hard freeze. It's not up to
me whether to consider it a bugfix.

Paolo