From: Kevin Wolf <kwolf@redhat.com>
To: Ying Fang <fangying1@huawei.com>
Cc: mreitz@redhat.com,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
jianjay.zhou@huawei.com, dengkai1@huawei.com
Subject: Re: [Qemu-devel] [RFC] Questions on the I/O performance of emulated host cdrom device
Date: Tue, 8 Jan 2019 13:46:50 +0100 [thread overview]
Message-ID: <20190108124650.GC11492@linux.fritz.box> (raw)
In-Reply-To: <9af4a3f8-3095-61dc-77fa-17a877e48a02@huawei.com>
On 2018-12-29 at 07:33, Ying Fang wrote:
> Hi.
> Recently one of our customers complained about the I/O performance of the QEMU-emulated host cdrom device.
> I did some investigation on it, but there are still some points I could not figure out, so I am asking for your help.
>
> Here is the application scenario setup by our customer.
> filename.iso      /dev/sr0        /dev/cdrom
> remote client --> host(cdemu) --> Linux VM
> (1) A remote client maps an iso file to the x86 host machine over the network using tcp.
> (2) The cdemu daemon then loads it as a local virtual cdrom disk drive.
> (3) A VM is launched with the virtual cdrom disk drive configured.
> The VM can then use this virtual cdrom to install the OS contained in the iso file.
>
> The network bandwidth between the remote client and the host is 100Mbps. We test I/O performance using: dd if=/dev/sr0 of=/dev/null bs=32K count=100000.
> And we have
> (1) The I/O performance of /dev/sr0 on the host side is 11MB/s;
> (2) The I/O performance of /dev/cdrom inside the VM is 3.8MB/s.
>
> As we can see, the I/O performance of the cdrom inside the VM is only about 34.5% of that on the host side.
> We used FlameGraph to find the bottleneck of this operation and found that too much time is spent calling *bdrv_is_inserted*.
> We then dug into the code and figured out that the ioctl in *cdrom_is_inserted* takes too much time, because it triggers io_schedule_timeout in the kernel.
> In the code path of the emulated cdrom device, each DMA I/O request involves several calls to *bdrv_is_inserted*, which degrades I/O performance by about 31% in our test.
> static bool cdrom_is_inserted(BlockDriverState *bs)
> {
>     BDRVRawState *s = bs->opaque;
>     int ret;
>
>     ret = ioctl(s->fd, CDROM_DRIVE_STATUS, CDSL_CURRENT);
>     return ret == CDS_DISC_OK;
> }
> A flamegraph svg file (cdrom.svg) is attached to this email to show the code timing profile we measured.
>
> So here are my questions.
> (1) Why do we regularly check the presence of a cdrom disk drive in this code path? Can we do it asynchronously?
> (2) Can we drop some of the checks in the code path to improve performance?
> Thanks.
I'm actually not sure why so many places check it. Just letting an I/O
request fail if the CD was removed would probably be easier.
To try out whether that would improve performance significantly, you
could try to use the host_device backend rather than the host_cdrom
backend. That one doesn't implement .bdrv_is_inserted, so the operation
will be cheap (it just returns true unconditionally).
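For reference, the difference is only in which block driver backs the host device node. A sketch of the two invocations (the frontend device, node names, and elided options are chosen for illustration, not taken from the reporter's setup):

```shell
# Current setup: host_cdrom backend, which implements .bdrv_is_inserted
# via the CDROM_DRIVE_STATUS ioctl on every check.
qemu-system-x86_64 ... \
    -blockdev node-name=cd0,driver=host_cdrom,filename=/dev/sr0 \
    -device ide-cd,drive=cd0

# Experiment: host_device backend, which has no .bdrv_is_inserted callback,
# so bdrv_is_inserted() is effectively free (but eject/lock passthrough
# to the physical drive is lost).
qemu-system-x86_64 ... \
    -blockdev node-name=cd0,driver=host_device,filename=/dev/sr0 \
    -device ide-cd,drive=cd0
```

Re-running the same dd test inside the guest for both configurations would show how much of the 31% overhead is really attributable to the ioctl.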
You will also lose eject/lock passthrough when doing so, so this is not
the final solution, but if it proves to be a lot faster, we can check
where bdrv_is_inserted() calls are actually important (if anywhere) and
hopefully remove some even for the host_cdrom case.
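One possible direction for the host_cdrom case, as a sketch only (the names, the callback shape, and the refresh interval are all made up here, not QEMU code): cache the ioctl result and re-probe at most once per interval, so the per-request checks stay cheap.

```c
#include <stdbool.h>
#include <stdint.h>
#include <assert.h>

#define CHECK_INTERVAL_NS 1000000000LL  /* hypothetical: re-probe at most once per second */

/* Hypothetical cache state; none of these names exist in QEMU. */
typedef struct {
    bool valid;              /* have we probed at least once? */
    bool cached_inserted;    /* result of the last real probe */
    int64_t last_check_ns;   /* timestamp of the last real probe */
} InsertedCache;

/*
 * Rate-limited medium check: only calls do_probe() (which would wrap the
 * CDROM_DRIVE_STATUS ioctl) when the cached result is missing or stale;
 * otherwise it returns the cached answer without any syscall.
 */
static bool cached_is_inserted(InsertedCache *c, int64_t now_ns,
                               bool (*do_probe)(void *), void *opaque)
{
    if (!c->valid || now_ns - c->last_check_ns >= CHECK_INTERVAL_NS) {
        c->cached_inserted = do_probe(opaque);
        c->last_check_ns = now_ns;
        c->valid = true;
    }
    return c->cached_inserted;
}

/* Test double standing in for the real ioctl probe: counts invocations. */
static bool counting_probe(void *opaque)
{
    int *calls = opaque;
    (*calls)++;
    return true;
}
```

The trade-off is that a disc swap is noticed up to one interval late, and an explicit eject from the guest would still have to invalidate the cache, which is why removing unnecessary call sites is the cleaner fix.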
Kevin