From: Fam Zheng <famz@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
qemu-devel@nongnu.org, qemu-block@nongnu.org
Subject: Re: [Qemu-devel] [PATCH 05/12] aio: introduce aio_{disable, enable}_clients
Date: Mon, 12 Oct 2015 19:20:07 +0800 [thread overview]
Message-ID: <20151012112007.GA12602@ad.nay.redhat.com> (raw)
In-Reply-To: <20151012083108.GB4153@noname.str.redhat.com>
On Mon, 10/12 10:31, Kevin Wolf wrote:
> Am 09.10.2015 um 18:27 hat Fam Zheng geschrieben:
> > On Fri, 10/09 16:31, Kevin Wolf wrote:
> > > Am 09.10.2015 um 07:45 hat Fam Zheng geschrieben:
> > > > Signed-off-by: Fam Zheng <famz@redhat.com>
> > > > ---
> > > > aio-posix.c | 3 ++-
> > > > aio-win32.c | 3 ++-
> > > > async.c | 42 ++++++++++++++++++++++++++++++++++++++++++
> > > > include/block/aio.h | 30 ++++++++++++++++++++++++++++++
> > > > 4 files changed, 76 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/aio-posix.c b/aio-posix.c
> > > > index d25fcfc..a261892 100644
> > > > --- a/aio-posix.c
> > > > +++ b/aio-posix.c
> > > > @@ -261,7 +261,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
> > > >
> > > > /* fill pollfds */
> > > > QLIST_FOREACH(node, &ctx->aio_handlers, node) {
> > > > - if (!node->deleted && node->pfd.events) {
> > > > + if (!node->deleted && node->pfd.events
> > > > + && !aio_type_disabled(ctx, node->type)) {
> > > > add_pollfd(node);
> > > > }
> > > > }
> > > > diff --git a/aio-win32.c b/aio-win32.c
> > > > index f5ecf57..66cff60 100644
> > > > --- a/aio-win32.c
> > > > +++ b/aio-win32.c
> > > > @@ -309,7 +309,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
> > > > /* fill fd sets */
> > > > count = 0;
> > > > QLIST_FOREACH(node, &ctx->aio_handlers, node) {
> > > > - if (!node->deleted && node->io_notify) {
> > > > + if (!node->deleted && node->io_notify
> > > > + && !aio_type_disabled(ctx, node->type)) {
> > > > events[count++] = event_notifier_get_handle(node->e);
> > > > }
> > > > }
> > > > diff --git a/async.c b/async.c
> > > > index 244bf79..855b9d5 100644
> > > > --- a/async.c
> > > > +++ b/async.c
> > > > @@ -361,3 +361,45 @@ void aio_context_release(AioContext *ctx)
> > > > {
> > > > rfifolock_unlock(&ctx->lock);
> > > > }
> > > > +
> > > > +bool aio_type_disabled(AioContext *ctx, int type)
> > > > +{
> > > > + int i = 1;
> > > > + int n = 0;
> > > > +
> > > > + while (type) {
> > > > + bool b = type & 0x1;
> > > > + type >>= 1;
> > > > + n++;
> > >
> > > Any specific reason for leaving client_disable_counters[0] unused?
> >
> > No, I should have started from 0.
> >
> > >
> > > > + i <<= 1;
> > >
> > > i is never read.
> > >
> > > > + if (!b) {
> > > > + continue;
> > > > + }
> > > > + if (ctx->client_disable_counters[n]) {
> > > > + return true;
> > > > + }
> > > > + }
> > > > + return false;
> > > > +}
> > >
> > > In general I wonder whether this function really needs to take a mask
> > > with possibly multiple set bits instead of just a single type.
> >
> > Previous versions had more types than just "internal" and "external", which is why
> > it was a mask. So yes, I think a single type will be better now.
> >
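With a single type the whole loop goes away and the helper becomes a direct array
lookup, roughly like this (untested sketch, assuming the counter array becomes
0-based as noted above; ctz32()/is_power_of_2() are from qemu/host-utils.h):

    bool aio_type_disabled(AioContext *ctx, int type)
    {
        /* 'type' is now exactly one AIO_CLIENT_* bit */
        assert(is_power_of_2(type));
        return ctx->client_disable_counters[ctz32(type)] > 0;
    }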
> > >
> > > > +void aio_disable_enable_clients(AioContext *ctx, int clients_mask,
> > > > + bool is_disable)
> > > > +{
> > > > + int i = 1;
> > > > + int n = 0;
> > > > + aio_context_acquire(ctx);
> > > > +
> > > > + while (clients_mask) {
> > > > + bool b = clients_mask & 0x1;
> > > > + clients_mask >>= 1;
> > > > + n++;
> > > > + i <<= 1;
> > >
> > > This i isn't used either.
> > >
> > > > + if (!b) {
> > > > + continue;
> > > > + }
> > > > + if (ctx->client_disable_counters[n]) {
> > > > + return true;
> > > > + }
> > >
> > > Wait, why are you checking the state instead of setting it?
> >
> > Oops, apparently I mixed up my workspaces, as I do remember coding this assignment.
> > And I must have used the wrong command when building the tree, so I didn't even
> > catch the compile error. :(
> >
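For the record, that hunk was of course meant to update the counter rather than
read it, i.e. roughly (untested):

    if (is_disable) {
        ctx->client_disable_counters[n]++;
    } else {
        assert(ctx->client_disable_counters[n] > 0);
        ctx->client_disable_counters[n]--;
    }

with the unused 'i' dropped and aio_context_release() called before returning.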
> > >
> > > How did you test this series?
> >
> > So far only smoke testing and qemu-iotests, because I don't have a good idea for
> > verifying the transaction's atomicity. Any suggestions?
>
> Perhaps you could use blkdebug to delay something in the middle of the
> transaction while your guest keeps writing stuff? That should result in
> 100% reproducibility.
>
> I guess you actually need to make sure that your guest doesn't do any
> I/O, then set the blkdebug breakpoint, send the transaction, and once a
> request is stopped, you start some I/O in the guest. Resume as soon as
> you know that something bad happened.
>
> Possibly you need to add a new blkdebug event to find a good place to
> suspend a transaction request.
>
It's difficult to "start some I/O" in the guest in the middle of the transaction,
even with the help of blkdebug, because the BQL is held for the whole transaction.
I think it would be a bit easier to program a vCPU to constantly submit I/O
requests to the virtqueue, but that's far from enough.
Anyway, I'll start by writing some unit test code in tests/test-aio.c instead.
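Something along these lines, maybe (very rough sketch; it reuses the existing
EventNotifierTestData / event_ready_cb helpers in tests/test-aio.c, and assumes
the aio_disable_clients()/aio_enable_clients() names from this series, an
AIO_CLIENT_EXTERNAL type, and the extra type argument that patch 01 adds to
aio_set_event_notifier(); exact signatures may differ):

    static void test_disable_external_client(void)
    {
        EventNotifierTestData data = { .n = 0, .active = 1 };

        event_notifier_init(&data.e, false);
        /* patch 01 adds a client type argument; argument order may differ */
        aio_set_event_notifier(ctx, &data.e, AIO_CLIENT_EXTERNAL, event_ready_cb);

        aio_disable_clients(ctx, AIO_CLIENT_EXTERNAL);
        event_notifier_set(&data.e);
        g_assert(!aio_poll(ctx, false));   /* handler must not run while disabled */
        g_assert_cmpint(data.n, ==, 0);

        aio_enable_clients(ctx, AIO_CLIENT_EXTERNAL);
        g_assert(aio_poll(ctx, false));    /* pending event is dispatched now */
        g_assert_cmpint(data.n, ==, 1);

        aio_set_event_notifier(ctx, &data.e, AIO_CLIENT_EXTERNAL, NULL);
        event_notifier_cleanup(&data.e);
    }

A second case could nest two disable calls to check that the handler only runs
again after the matching number of enables, since these are counters.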
Fam
Thread overview: 22+ messages
2015-10-09 5:45 [Qemu-devel] [PATCH 00/12] block: bdrv_drained_begin/end for transactions on dataplane devices Fam Zheng
2015-10-09 5:45 ` [Qemu-devel] [PATCH 01/12] aio: Introduce "type" in aio_set_fd_handler and aio_set_event_notifier Fam Zheng
2015-10-09 14:08 ` Kevin Wolf
2015-10-09 5:45 ` [Qemu-devel] [PATCH 02/12] aio: Save fd client type to AioHandler Fam Zheng
2015-10-09 14:13 ` Kevin Wolf
2015-10-09 5:45 ` [Qemu-devel] [PATCH 03/12] nbd: Mark fd handlers client type as "external" Fam Zheng
2015-10-09 14:16 ` Kevin Wolf
2015-10-09 5:45 ` [Qemu-devel] [PATCH 04/12] dataplane: Mark host notifiers' " Fam Zheng
2015-10-09 14:18 ` Kevin Wolf
2015-10-09 5:45 ` [Qemu-devel] [PATCH 05/12] aio: introduce aio_{disable, enable}_clients Fam Zheng
2015-10-09 14:10 ` Kevin Wolf
2015-10-09 14:31 ` Kevin Wolf
2015-10-09 16:27 ` Fam Zheng
2015-10-12 8:31 ` Kevin Wolf
2015-10-12 11:20 ` Fam Zheng [this message]
2015-10-09 5:45 ` [Qemu-devel] [PATCH 06/12] block: Introduce "drained begin/end" API Fam Zheng
2015-10-09 5:45 ` [Qemu-devel] [PATCH 07/12] block: Add "drained begin/end" for transactional external snapshot Fam Zheng
2015-10-09 5:45 ` [Qemu-devel] [PATCH 08/12] block: Add "drained begin/end" for transactional backup Fam Zheng
2015-10-09 5:45 ` [Qemu-devel] [PATCH 09/12] block: Add "drained begin/end" for transactional blockdev-backup Fam Zheng
2015-10-09 5:45 ` [Qemu-devel] [PATCH 10/12] block: Add "drained begin/end" for internal snapshot Fam Zheng
2015-10-09 5:45 ` [Qemu-devel] [PATCH 11/12] block: Introduce BlockDriver.bdrv_drain callback Fam Zheng
2015-10-09 5:45 ` [Qemu-devel] [PATCH 12/12] qed: Implement .bdrv_drain Fam Zheng