From: Bharata B Rao <bharata@linux.vnet.ibm.com>
To: qemu-devel@nongnu.org
Cc: Kevin Wolf <kwolf@redhat.com>,
Anthony Liguori <aliguori@us.ibm.com>,
Anand Avati <aavati@redhat.com>,
Vijay Bellur <vbellur@redhat.com>,
Stefan Hajnoczi <stefanha@gmail.com>,
Harsh Bora <harsh@linux.vnet.ibm.com>,
Amar Tumballi <amarts@redhat.com>,
"Richard W.M. Jones" <rjones@redhat.com>,
Daniel Veillard <veillard@redhat.com>,
Blue Swirl <blauwirbel@gmail.com>, Avi Kivity <avi@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH v10 1/5] aio: Fix qemu_aio_wait() to maintain correct walking_handlers count
Date: Thu, 27 Sep 2012 19:26:52 +0530
Message-ID: <20120927135652.GE18285@in.ibm.com>
In-Reply-To: <20120927135553.GD18285@in.ibm.com>
aio: Fix qemu_aio_wait() to maintain correct walking_handlers count
From: Paolo Bonzini <pbonzini@redhat.com>
Fix qemu_aio_wait() to ensure that registered aio handlers don't get
deleted while they are still in use. This is done by maintaining the
correct count of walking_handlers: the counter is incremented and
decremented instead of being overwritten with 1 and 0.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
---
aio.c | 8 ++++----
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/aio.c b/aio.c
index 0a9eb10..99b8b72 100644
--- a/aio.c
+++ b/aio.c
@@ -119,7 +119,7 @@ bool qemu_aio_wait(void)
         return true;
     }
 
-    walking_handlers = 1;
+    walking_handlers++;
 
     FD_ZERO(&rdfds);
     FD_ZERO(&wrfds);
@@ -147,7 +147,7 @@ bool qemu_aio_wait(void)
         }
     }
 
-    walking_handlers = 0;
+    walking_handlers--;
 
     /* No AIO operations? Get us out of here */
     if (!busy) {
@@ -159,7 +159,7 @@ bool qemu_aio_wait(void)
 
     /* if we have any readable fds, dispatch event */
     if (ret > 0) {
-        walking_handlers = 1;
+        walking_handlers++;
 
         /* we have to walk very carefully in case
          * qemu_aio_set_fd_handler is called while we're walking */
@@ -187,7 +187,7 @@ bool qemu_aio_wait(void)
             }
         }
 
-        walking_handlers = 0;
+        walking_handlers--;
     }
 
     return true;
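
For readers who want the pattern in isolation, here is a minimal sketch. It is
not QEMU code: AioHandler, the QLIST macros and qemu_aio_set_fd_handler are
replaced by invented stand-ins (handler, remove_handler, walk_once). It only
illustrates the deferred-deletion scheme that walking_handlers protects, and
why a counter works where a 0/1 flag does not once the walk can be entered
recursively.

/*
 * Sketch only, not QEMU code: why the guard must be a counter.
 */
#include <stdlib.h>

struct handler {
    struct handler *next;
    int deleted;
    void (*cb)(void);
};

static struct handler *handlers;   /* singly linked list of handlers */
static int walking_handlers;       /* > 0 while someone walks the list */

/* Removal during a walk only marks the node; the walker frees it later. */
static void remove_handler(struct handler *h)
{
    if (walking_handlers > 0) {
        h->deleted = 1;            /* defer the free to the walker */
    } else {
        /* safe to unlink and free right away (unlinking elided) */
        free(h);
    }
}

static void walk_once(void)
{
    walking_handlers++;                        /* was: walking_handlers = 1; */

    for (struct handler *h = handlers; h; h = h->next) {
        if (!h->deleted && h->cb) {
            /* The callback may call remove_handler(), even on h itself,
             * or re-enter walk_once(); h->next stays valid because nothing
             * is freed while walking_handlers > 0. */
            h->cb();
        }
    }

    walking_handlers--;                        /* was: walking_handlers = 0; */

    /*
     * Only when the outermost walk has finished is it safe to reap nodes
     * marked deleted.  With "walking_handlers = 0", a nested walk would
     * clear the guard on exit, and a subsequent remove_handler() could
     * free a node the outer loop is still pointing at.
     */
    if (walking_handlers == 0) {
        /* sweep the list and free nodes with h->deleted set (elided) */
    }
}

With the counter, an inner walk restores the guard to its caller's value on
return, so nodes stay marked rather than freed until the outermost walk
finishes its sweep.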
Thread overview: 11+ messages
2012-09-27 13:55 [Qemu-devel] [PATCH v10 0/5] GlusterFS support in QEMU - v10 Bharata B Rao
2012-09-27 13:56 ` Bharata B Rao [this message]
2012-09-27 13:57 ` [Qemu-devel] [PATCH v10 2/5] aio: Another fix to the walking_handlers logic Bharata B Rao
2012-09-27 13:58 ` [Qemu-devel] [PATCH v10 3/5] qemu: URI parsing library Bharata B Rao
2012-09-27 14:36 ` Daniel P. Berrange
2012-09-27 15:55 ` Paolo Bonzini
2012-09-28 8:39 ` Daniel P. Berrange
2012-09-28 9:47 ` Paolo Bonzini
2012-09-27 13:59 ` [Qemu-devel] [PATCH v10 4/5] configure: Add a config option for GlusterFS as block backend Bharata B Rao
2012-09-27 14:00 ` [Qemu-devel] [PATCH v10 5/5] block: Support GlusterFS as a QEMU " Bharata B Rao
2012-09-28 18:23 ` [Qemu-devel] [PATCH v10 0/5] GlusterFS support in QEMU - v10 Kevin Wolf