From: Maxim Levitsky <mlevitsk@redhat.com>
To: qemu-devel@nongnu.org
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
Max Reitz <mreitz@redhat.com>,
qemu-block@nongnu.org, John Ferlan <jferlan@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 0/1] RFC: don't obey the block device max transfer len / max segments for block devices
Date: Tue, 02 Jul 2019 19:11:46 +0300 [thread overview]
Message-ID: <8224b0134d5eadcb19231a44e86bd42c18e1173c.camel@redhat.com> (raw)
In-Reply-To: <20190630150855.1016-1-mlevitsk@redhat.com>
On Sun, 2019-06-30 at 18:08 +0300, Maxim Levitsky wrote:
> It looks like Linux block devices, even in O_DIRECT mode, don't expose any user-visible
> limit on transfer size / number of segments that the underlying block device may have.
> The kernel block layer takes care of enforcing these limits by splitting the bios.
>
> By advertising these limits ourselves, we force qemu to do the splitting itself, which
> introduces various overheads.
> This is especially visible in the NBD server, where the low max transfer size of the
> underlying device forces us to advertise it over NBD, increasing the traffic overhead
> for image conversion, which benefits from large blocks.
>
> More information can be found here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1647104
>
> I tested this with qemu-img convert, both over NBD and natively, and to my surprise
> even native IO performance improved a bit.
> (The device it was tested on is an Intel Optane DC P4800X, which has a 128k max transfer size.)
>
> The benchmark:
>
> Images were created using:
>
> Sparse image: qemu-img create -f qcow2 /dev/nvme0n1p3 1G / 10G / 100G
> Allocated image: qemu-img create -f qcow2 /dev/nvme0n1p3 -o preallocation=metadata 1G / 10G / 100G
>
> The test was:
>
> echo "convert native:"
> rm -rf /dev/shm/disk.img
> time qemu-img convert -p -f qcow2 -O raw -T none $FILE /dev/shm/disk.img > /dev/zero
>
> echo "convert via nbd:"
> qemu-nbd -k /tmp/nbd.sock -v -f qcow2 $FILE -x export --cache=none --aio=native --fork
> rm -rf /dev/shm/disk.img
> time qemu-img convert -p -f raw -O raw nbd:unix:/tmp/nbd.sock:exportname=export /dev/shm/disk.img > /dev/zero
>
> The results:
>
> =========================================
> 1G sparse image:
> native:
> before: 0.027s
> after: 0.027s
> nbd:
> before: 0.287s
> after: 0.035s
>
> =========================================
> 100G sparse image:
> native:
> before: 0.028s
> after: 0.028s
> nbd:
> before: 23.796s
> after: 0.109s
>
> =========================================
> 1G preallocated image:
> native:
> before: 0.454s
> after: 0.427s
> nbd:
> before: 0.649s
> after: 0.546s
>
> The block limits of max transfer size / max segment count are retained
> for SCSI passthrough, because in that case the kernel passes the userspace request
> directly to the SCSI driver, bypassing the block layer, so there is no code to split
> such requests.
>
> What do you think?
>
> Fam, since you were the original author of the code that added
> these limits, could you share your opinion on this?
> Was there a reason besides SCSI passthrough?
>
> Best regards,
> Maxim Levitsky
>
> Maxim Levitsky (1):
> raw-posix.c - use max transfer length / max segment count only for
> SCSI passthrough
>
> block/file-posix.c | 16 +++++++---------
> 1 file changed, 7 insertions(+), 9 deletions(-)
>
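To put the NBD numbers above in rough perspective, here is a back-of-the-envelope sketch (not part of the patch) of how an advertised max transfer size inflates the request count. The 128 KiB cap is the Optane P4800X limit quoted in the cover letter; the 2 MiB "uncapped" request size is an assumption for illustration, not something measured here.

```python
# Sketch of the splitting overhead described above. Assumptions
# (mine, not from the patch): the client would otherwise issue
# 2 MiB requests; each request carries fixed per-request overhead
# (NBD framing, a round trip), so more requests means more overhead.

def requests_needed(total_bytes: int, max_transfer: int) -> int:
    """How many requests it takes to move total_bytes when each
    request is capped at max_transfer bytes (ceiling division)."""
    return -(-total_bytes // max_transfer)

GiB = 1024 ** 3
MiB = 1024 ** 2
KiB = 1024

capped = requests_needed(1 * GiB, 128 * KiB)   # device limit advertised over NBD
uncapped = requests_needed(1 * GiB, 2 * MiB)   # assumed larger client requests

print(capped, uncapped)   # 8192 512
```

A 16x difference in request count for the same data moved is consistent with the order-of-magnitude gap in the sparse-image NBD timings above, where per-request overhead dominates.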
Ping
Best regards,
Maxim Levitsky
Thread overview: 7+ messages
2019-06-30 15:08 [Qemu-devel] [PATCH 0/1] RFC: don't obey the block device max transfer len / max segments for block devices Maxim Levitsky
2019-06-30 15:08 ` [Qemu-devel] [PATCH 1/1] raw-posix.c - use max transfer length / max segment count only for SCSI passthrough Maxim Levitsky
2019-07-03 14:50 ` Eric Blake
2019-07-03 15:28 ` Maxim Levitsky
2019-07-02 16:11 ` Maxim Levitsky [this message]
2019-07-03 9:52 ` [Qemu-devel] [Qemu-block] [PATCH 0/1] RFC: don't obey the block device max transfer len / max segments for block devices Stefan Hajnoczi
2019-07-03 14:46 ` Eric Blake