From: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: Zhi Yong Wu <zwu.kernel@gmail.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>,
qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH] block: add the support to drain throttled requests
Date: Tue, 20 Mar 2012 11:44:42 +0000
Message-ID: <20120320114442.GA30819@stefanha-thinkpad.localdomain>
In-Reply-To: <4F6854B2.8000209@redhat.com>
On Tue, Mar 20, 2012 at 10:58:10AM +0100, Kevin Wolf wrote:
> On 20.03.2012 10:47, Paolo Bonzini wrote:
> > On 20/03/2012 10:40, Zhi Yong Wu wrote:
> >> Hi Kevin,
> >>
> >> We hope that I/O throttling can ship without known issues in QEMU
> >> 1.1, so if you are available, could you give this patch some love?
> >
> > I'm sorry to say this, but I think I/O throttling is impossible to
> > salvage. As it is implemented now, it simply cannot work in the
> > presence of synchronous I/O, except at the cost of busy waiting with
> > the global mutex held. See Stefan's message from yesterday.
>
> qemu_aio_flush() busy-waits with the global mutex held anyway, so it
> doesn't change that much.
Yesterday I only posted an analysis of the bug, but here are some
thoughts on how to move forward. Throttling itself is not the problem:
synchronous operations in the vcpu thread were a known problem long
before throttling existed. This is just one more reason to convert
device emulation to asynchronous interfaces.
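The core of each conversion is replacing a blocking call with an AIO
submission plus a completion callback (the hard part is restructuring
the device's state machine around that split). A rough sketch, where
MyDeviceState, my_read_cb, and the bs/buf/iov/qiov fields are
placeholders for illustration and not code from any real device:

/* Before: synchronous, blocks the vcpu thread with the global mutex
 * held until the request completes.
 */
static void my_device_read_sync(MyDeviceState *s, int64_t sector_num)
{
    if (bdrv_read(s->bs, sector_num, s->buf, 1) < 0) {
        /* report the error to the guest */
    }
    /* complete the guest request immediately */
}

/* After: submit the request and return to the main loop; the guest
 * request is completed from the callback when the I/O finishes.
 */
static void my_read_cb(void *opaque, int ret)
{
    MyDeviceState *s = opaque;
    /* ret is 0 on success, -errno on failure; complete or fail the
     * guest request here and raise an interrupt if needed.
     */
}

static void my_device_read_async(MyDeviceState *s, int64_t sector_num)
{
    s->iov.iov_base = s->buf;
    s->iov.iov_len = BDRV_SECTOR_SIZE;
    qemu_iovec_init_external(&s->qiov, &s->iov, 1);
    bdrv_aio_readv(s->bs, sector_num, &s->qiov, 1, my_read_cb, s);
}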
Here is the list of device models that perform synchronous block I/O:
hw/fdc.c
hw/ide/atapi.c
hw/ide/core.c
hw/nand.c
hw/onenand.c
hw/pflash_cfi01.c
hw/pflash_cfi02.c
hw/sd.c
Zhi Hui Li is working on hw/fdc.c and recently sent a patch.
I think it's too close to QEMU 1.1 to convert all the remaining
devices and test them properly before the soft-freeze, but converting
IDE by then is probably feasible.
In the meantime we could add this to bdrv_rw_co():
if (bs->io_limits_enabled) {
    fprintf(stderr, "Disabling I/O throttling on '%s' due "
            "to synchronous I/O\n", bdrv_get_device_name(bs));
    bdrv_io_limits_disable(bs);
}
It's not pretty, but it tells the user there is an issue and avoids
the deadlock.
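For reference, here is roughly why the synchronous path hangs and why
disabling the limits up front avoids it. From memory (simplified; RwCo,
bdrv_rw_co_entry, and NOT_DONE are the names I remember from block.c),
bdrv_rw_co() does approximately this today:

Coroutine *co;
RwCo rwco = {
    .bs = bs,
    /* ... request parameters ... */
    .ret = NOT_DONE,
};

co = qemu_coroutine_create(bdrv_rw_co_entry);
qemu_coroutine_enter(co, &rwco);
while (rwco.ret == NOT_DONE) {
    /* Dispatches AIO completions only; the throttle timer that would
     * resubmit a queued request never fires here, so a throttled
     * request spins forever with the global mutex held.
     */
    qemu_aio_wait();
}

Disabling the I/O limits before the request is issued means it can
never end up in the throttling queue, so the loop always terminates.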
Stefan