Date: Tue, 12 May 2015 16:26:18 +0200
From: Kevin Wolf
To: "Denis V. Lunev"
Cc: Paolo Bonzini, Stefan Hajnoczi, qemu-devel@nongnu.org, qemu-block@nongnu.org
Subject: Re: [Qemu-devel] [PATCH 2/2] block: align bounce buffers to page
Message-ID: <20150512142618.GE3524@noname.str.redhat.com>
In-Reply-To: <55520C23.2020705@openvz.org>
References: <1431438060-23324-1-git-send-email-den@openvz.org>
 <1431438060-23324-3-git-send-email-den@openvz.org>
 <20150512140831.GC3524@noname.str.redhat.com>
 <55520C23.2020705@openvz.org>

Am 12.05.2015 um 16:20 hat Denis V. Lunev geschrieben:
> On 12/05/15 17:08, Kevin Wolf wrote:
> >Am 12.05.2015 um 15:41 hat Denis V. Lunev geschrieben:
> >>The following sequence
> >>    int fd = open(argv[1], O_RDWR | O_CREAT | O_DIRECT, 0644);
> >>    for (i = 0; i < 100000; i++)
> >>        write(fd, buf, 4096);
> >>performs 5% better if buf is aligned to 4096 bytes.
> >>
> >>The difference is quite reliable.
> >>
> >>On the other hand, we do not want at the moment to enforce bounce
> >>buffering if the guest request is aligned to 512 bytes.
> >>
> >>The patch changes the default bounce buffer optimal alignment to
> >>MAX(page size, 4k). 4k is chosen as the maximal known sector size on
> >>real HDDs.
> >>
> >>The justification of the performance improvement is quite interesting.
> >>From the kernel's point of view, each request to the disk was split in
> >>two. This can be seen in blktrace output like this:
> >>  9,0   11  1     0.000000000 11151  Q  WS 312737792 + 1023 [qemu-img]
> >>  9,0   11  2     0.000007938 11151  Q  WS 312738815 + 8 [qemu-img]
> >>  9,0   11  3     0.000030735 11151  Q  WS 312738823 + 1016 [qemu-img]
> >>  9,0   11  4     0.000032482 11151  Q  WS 312739839 + 8 [qemu-img]
> >>  9,0   11  5     0.000041379 11151  Q  WS 312739847 + 1016 [qemu-img]
> >>  9,0   11  6     0.000042818 11151  Q  WS 312740863 + 8 [qemu-img]
> >>  9,0   11  7     0.000051236 11151  Q  WS 312740871 + 1017 [qemu-img]
> >>  9,0    5  1     0.169071519 11151  Q  WS 312741888 + 1023 [qemu-img]
> >>After the patch the pattern becomes normal:
> >>  9,0    6  1     0.000000000 12422  Q  WS 314834944 + 1024 [qemu-img]
> >>  9,0    6  2     0.000038527 12422  Q  WS 314835968 + 1024 [qemu-img]
> >>  9,0    6  3     0.000072849 12422  Q  WS 314836992 + 1024 [qemu-img]
> >>  9,0    6  4     0.000106276 12422  Q  WS 314838016 + 1024 [qemu-img]
> >>and the number of requests sent to the disk (which can be calculated
> >>by counting the lines in the blktrace output) is reduced by about a
> >>factor of two.
> >>
> >>Both qemu-img and qemu-io are affected, while qemu-kvm is not. The
> >>guest does its job well and real requests come properly aligned (to
> >>the page size).
> >>
> >>Signed-off-by: Denis V. Lunev
> >>CC: Paolo Bonzini
> >>CC: Kevin Wolf
> >>CC: Stefan Hajnoczi
> >>---
> >> block.c           |  8 ++++----
> >> block/io.c        |  2 +-
> >> block/raw-posix.c | 15 +++++++++------
> >> 3 files changed, 14 insertions(+), 11 deletions(-)
> >>
> >>diff --git a/block.c b/block.c
> >>index e293907..325f727 100644
> >>--- a/block.c
> >>+++ b/block.c
> >>@@ -106,8 +106,8 @@ int is_windows_drive(const char *filename)
> >> size_t bdrv_opt_mem_align(BlockDriverState *bs)
> >> {
> >>     if (!bs || !bs->drv) {
> >>-        /* 4k should be on the safe side */
> >>-        return 4096;
> >>+        /* page size or 4k (hdd sector size) should be on the safe side */
> >>+        return MAX(4096, getpagesize());
> >>     }
> >>
> >>     return bs->bl.opt_mem_alignment;
> >>@@ -116,8 +116,8 @@ size_t bdrv_opt_mem_align(BlockDriverState *bs)
> >> size_t bdrv_min_mem_align(BlockDriverState *bs)
> >> {
> >>     if (!bs || !bs->drv) {
> >>-        /* 4k should be on the safe side */
> >>-        return 4096;
> >>+        /* page size or 4k (hdd sector size) should be on the safe side */
> >>+        return MAX(4096, getpagesize());
> >>     }
> >>
> >>     return bs->bl.min_mem_alignment;
> >>diff --git a/block/io.c b/block/io.c
> >>index 908a3d1..071652c 100644
> >>--- a/block/io.c
> >>+++ b/block/io.c
> >>@@ -205,7 +205,7 @@ void bdrv_refresh_limits(BlockDriverState *bs, Error **errp)
> >>         bs->bl.opt_mem_alignment = bs->file->bl.opt_mem_alignment;
> >>     } else {
> >>         bs->bl.min_mem_alignment = 512;
> >>-        bs->bl.opt_mem_alignment = 512;
> >>+        bs->bl.opt_mem_alignment = getpagesize();
> >>     }
> >>
> >>     if (bs->backing_hd) {
> >>diff --git a/block/raw-posix.c b/block/raw-posix.c
> >>index 7083924..4659552 100644
> >>--- a/block/raw-posix.c
> >>+++ b/block/raw-posix.c
> >>@@ -301,6 +301,7 @@ static void raw_probe_alignment(BlockDriverState *bs, int fd, Error **errp)
> >> {
> >>     BDRVRawState *s = bs->opaque;
> >>     char *buf;
> >>+    size_t max_align = MAX(MAX_BLOCKSIZE, getpagesize());
> >>
> >>     /* For /dev/sg devices the alignment is not really used.
> >>        With buffered I/O, we don't have any restrictions. */
> >>@@ -330,9 +331,9 @@ static void raw_probe_alignment(BlockDriverState *bs, int fd, Error **errp)
> >>     /* If we could not get the sizes so far, we can only guess them */
> >>     if (!s->buf_align) {
> >>         size_t align;
> >>-        buf = qemu_memalign(MAX_BLOCKSIZE, 2 * MAX_BLOCKSIZE);
> >>-        for (align = 512; align <= MAX_BLOCKSIZE; align <<= 1) {
> >>-            if (raw_is_io_aligned(fd, buf + align, MAX_BLOCKSIZE)) {
> >>+        buf = qemu_memalign(max_align, 2 * max_align);
> >>+        for (align = 512; align <= max_align; align <<= 1) {
> >>+            if (raw_is_io_aligned(fd, buf + align, max_align)) {
> >>                 s->buf_align = align;
> >>                 break;
> >>             }
> >>@@ -342,8 +343,8 @@ static void raw_probe_alignment(BlockDriverState *bs, int fd, Error **errp)
> >>
> >>     if (!bs->request_alignment) {
> >>         size_t align;
> >>-        buf = qemu_memalign(s->buf_align, MAX_BLOCKSIZE);
> >>-        for (align = 512; align <= MAX_BLOCKSIZE; align <<= 1) {
> >>+        buf = qemu_memalign(s->buf_align, max_align);
> >>+        for (align = 512; align <= max_align; align <<= 1) {
> >>             if (raw_is_io_aligned(fd, buf, align)) {
> >>                 bs->request_alignment = align;
> >>                 break;
> >>@@ -726,7 +727,9 @@ static void raw_refresh_limits(BlockDriverState *bs, Error **errp)
> >>
> >>     raw_probe_alignment(bs, s->fd, errp);
> >>     bs->bl.min_mem_alignment = s->buf_align;
> >>-    bs->bl.opt_mem_alignment = s->buf_align;
> >>+    if (bs->bl.min_mem_alignment > bs->bl.opt_mem_alignment) {
> >>+        bs->bl.opt_mem_alignment = MAX(s->buf_align, getpagesize());
> >>+    }
> >> }
> >
> >I think this should be unconditional now.
> >
> >Kevin
>
> Frankly speaking, I am not comfortable with that. With the 'if' in the
> code we are protected if, for some reason, bs->bl.opt_mem_alignment is
> greater than the page size, e.g. if this is a requirement of the
> underlying backing storage.

The old value of bs->bl.opt_mem_alignment is meaningless here. It's not
initialised except with a more or less random default value from block.c.
By being explicit I meant that we should always overwrite it with known
good values.

(Currently, that default is getpagesize(), so we never overwrite it with
something smaller anyway.)

Kevin
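
For concreteness, the unconditional variant suggested above would amount to
something like the following sketch of raw_refresh_limits(). This is an
illustration based only on the hunk quoted in the patch, not an actual
follow-up revision:

    /* Sketch only: the opt_mem_alignment assignment made unconditional,
     * as suggested in the review. Function and field names are taken from
     * the quoted block/raw-posix.c hunk. */
    static void raw_refresh_limits(BlockDriverState *bs, Error **errp)
    {
        BDRVRawState *s = bs->opaque;

        raw_probe_alignment(bs, s->fd, errp);
        bs->bl.min_mem_alignment = s->buf_align;
        /* Always overwrite the block.c default with a known good value:
         * at least the probed buffer alignment, and at least one page. */
        bs->bl.opt_mem_alignment = MAX(s->buf_align, getpagesize());
    }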
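
Separately, the write test described in the quoted commit message can be
reproduced with a small standalone program along the following lines. This
is a sketch under assumptions: the commit message only shows a fragment of
the test, and allocating the buffer with posix_memalign() (plus an optional
512-byte offset for the unaligned case) is a guess at how it was done.

    /* O_DIRECT write test, after the fragment in the commit message.
     * Usage: prog FILE [unaligned]
     * The buffer is page aligned by default; with the extra argument it is
     * offset by 512 bytes, so it stays 512-byte aligned (as O_DIRECT
     * requires) but is no longer page aligned. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        const size_t bufsize = 4096;
        void *mem;
        char *buf;
        int i, fd;

        if (argc < 2) {
            fprintf(stderr, "usage: %s FILE [unaligned]\n", argv[0]);
            return 1;
        }

        /* Over-allocate so the buffer can be deliberately misaligned. */
        if (posix_memalign(&mem, 4096, 2 * bufsize) != 0) {
            perror("posix_memalign");
            return 1;
        }
        buf = mem;
        if (argc > 2) {
            buf += 512;
        }
        memset(buf, 0, bufsize);

        fd = open(argv[1], O_RDWR | O_CREAT | O_DIRECT, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        for (i = 0; i < 100000; i++) {
            if (write(fd, buf, bufsize) != (ssize_t)bufsize) {
                perror("write");
                return 1;
            }
        }

        close(fd);
        free(mem);
        return 0;
    }

Timing the two variants against the same file or device is one way to check
the 5% figure quoted above; running blktrace on the target device shows how
the requests are queued in each case.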