From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Kevin Wolf <kwolf@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Anthony Liguori <aliguori@us.ibm.com>,
pingfank@linux.vnet.ibm.com
Subject: [Qemu-devel] [RFC 00/13] aio: drop io_flush()
Date: Thu, 11 Apr 2013 17:44:32 +0200
Message-ID: <1365695085-27970-1-git-send-email-stefanha@redhat.com>

Here's my entry to the "let's get rid of io_flush()" effort. It's based on
Paolo's insight that bdrv_drain_all() can use the tracked_requests list the
block layer already keeps: io_flush() is redundant because the block layer
already knows whether requests are pending.
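To make the idea concrete, here is a simplified sketch of a drain loop that
waits on tracked_requests. This is my illustration, not the literal patch in
this series; it is meant as block.c-internal code, and the bdrv_states /
tracked_requests field names are written from memory:

/* Simplified sketch, not the literal patch in this series.  Meant as
 * block.c-internal code (bdrv_states and tracked_requests are only visible
 * there); the 2013-era field names are from memory and approximate.
 */
static bool bdrv_requests_pending_all(void)
{
    BlockDriverState *bs;

    QTAILQ_FOREACH(bs, &bdrv_states, list) {
        if (!QLIST_EMPTY(&bs->tracked_requests)) {
            return true;
        }
    }
    return false;
}

void bdrv_drain_all(void)
{
    /* qemu_aio_wait() runs one iteration of the main event loop; keep
     * iterating until no request is tracked anywhere.
     */
    while (bdrv_requests_pending_all()) {
        qemu_aio_wait();
    }
}

The key point is that the event loop no longer needs to ask each fd handler
whether work is pending; the block layer can answer that by itself.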
The point of this effort is to simplify our event loop(s). If we can drop
custom features like io_flush(), it becomes easier to unify event loops and
reuse glib or other options.
This is also important to me for dataplane, since bdrv_drain_all() is one of
the synchronization points between threads. QEMU monitor commands invoke
bdrv_drain_all() while the block device is accessed from a dataplane thread.
Background on io_flush() semantics:
The io_flush() handler must return 1 if this aio fd handler is active. That
is, requests are pending and we'd like to make progress by monitoring the fd.
If io_flush() returns 0, the aio event loop skips monitoring this fd. This is
critical for block drivers like iscsi, where we have an idle TCP socket that we
want to block on *only* when there are pending requests.
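To illustrate, a typical .io_flush() callback was just an in-flight check
along these lines (the driver state and names here are hypothetical, made up
for illustration rather than taken from any real block driver):

/* Hypothetical driver state used only for this illustration. */
typedef struct ExampleDriverState {
    int in_flight;              /* outstanding requests on this socket/fd */
} ExampleDriverState;

/* Typical shape of an .io_flush() handler: return non-zero when requests
 * are pending (so the event loop keeps monitoring the fd), zero to skip it.
 */
static int example_aio_flush(void *opaque)
{
    ExampleDriverState *s = opaque;

    return s->in_flight > 0;
}

Every block driver carries some variant of this callback plus the in_flight
bookkeeping behind it, which is exactly what the series removes.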
The series works as follows:
1. Make bdrv_drain_all() use tracked_requests to determine when to stop
waiting. From this point onwards io_flush() is redundant.
2. Drop io_flush() handlers and block driver internal state (e.g. in_flight
counters). Just pass NULL for the io_flush argument.
3. Drop the io_flush argument from aio_set_fd_handler() and related
functions (the resulting prototype change is sketched below).
I split Step 2 from Step 3 so that experts in sheepdog, rbd, curl, etc. can
review those patches in isolation. Otherwise we'd have a monster patch that
touches all files and is hard to review.
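For reference, the net effect of Step 3 on the registration function looks
roughly like the before/after comparison below. The prototypes are reproduced
from memory from include/block/aio.h, so treat the exact parameter list as
approximate:

/* Before the series (approximate 2013-era prototype): every caller supplies
 * an AioFlushHandler so the event loop can ask whether the fd still has
 * pending requests.
 */
void aio_set_fd_handler(AioContext *ctx, int fd,
                        IOHandler *io_read, IOHandler *io_write,
                        AioFlushHandler *io_flush, void *opaque);

/* After patch 13/13: the io_flush argument is gone; whether to keep waiting
 * is decided by the caller, e.g. bdrv_drain_all() checking tracked_requests.
 */
void aio_set_fd_handler(AioContext *ctx, int fd,
                        IOHandler *io_read, IOHandler *io_write,
                        void *opaque);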
Stefan Hajnoczi (13):
block: stop relying on io_flush() in bdrv_drain_all()
dataplane/virtio-blk: check exit conditions before aio_poll()
aio: stop using .io_flush()
block/curl: drop curl_aio_flush()
block/gluster: drop qemu_gluster_aio_flush_cb()
block/iscsi: drop iscsi_process_flush()
block/linux-aio: drop qemu_laio_completion_cb()
block/nbd: drop nbd_have_request()
block/rbd: drop qemu_rbd_aio_flush_cb()
block/sheepdog: drop have_co_req() and aio_flush_request()
dataplane/virtio-blk: drop flush_true() and flush_io()
thread-pool: drop thread_pool_active()
aio: drop io_flush argument
aio-posix.c | 31 ++++---------------------------
aio-win32.c | 26 +++-----------------------
async.c | 4 ++--
block.c | 28 ++++++++++++++++++----------
block/curl.c | 25 ++++---------------------
block/gluster.c | 21 +++------------------
block/iscsi.c | 10 +---------
block/linux-aio.c | 18 ++----------------
block/nbd.c | 18 ++++--------------
block/rbd.c | 16 ++--------------
block/sheepdog.c | 33 ++++++++-------------------------
hw/block/dataplane/virtio-blk.c | 25 ++++++-------------------
include/block/aio.h | 8 ++------
main-loop.c | 9 +++------
thread-pool.c | 11 ++---------
15 files changed, 64 insertions(+), 219 deletions(-)
--
1.8.1.4