From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org, qemu-block@nongnu.org
Date: Tue, 24 Nov 2015 19:01:28 +0100
Message-Id: <1448388091-117282-38-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1448388091-117282-1-git-send-email-pbonzini@redhat.com>
References: <1448388091-117282-1-git-send-email-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 37/40] async: optimize aio_bh_poll

Avoid entering the slow path of qemu_lockcnt_dec_and_lock if no bottom
half has to be deleted.  If a bottom half deletes itself, it will be
picked up on the next visit of the list, or when the AioContext itself
is finalized.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 async.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/async.c b/async.c
index 4c1f658..529934c 100644
--- a/async.c
+++ b/async.c
@@ -69,19 +69,24 @@ int aio_bh_poll(AioContext *ctx)
 {
     QEMUBH *bh, **bhp, *next;
     int ret;
+    bool deleted = false;
 
     qemu_lockcnt_inc(&ctx->list_lock);
 
     ret = 0;
     for (bh = atomic_rcu_read(&ctx->first_bh); bh; bh = next) {
         next = atomic_rcu_read(&bh->next);
+        if (bh->deleted) {
+            deleted = true;
+            continue;
+        }
         /* The atomic_xchg is paired with the one in qemu_bh_schedule.  The
          * implicit memory barrier ensures that the callback sees all writes
          * done by the scheduling thread.  It also ensures that the scheduling
          * thread sees the zero before bh->cb has run, and thus will call
          * aio_notify again if necessary.
          */
-        if (!bh->deleted && atomic_xchg(&bh->scheduled, 0)) {
+        if (atomic_xchg(&bh->scheduled, 0)) {
             /* Idle BHs don't count as progress */
             if (!bh->idle) {
                 ret = 1;
@@ -92,6 +97,11 @@ int aio_bh_poll(AioContext *ctx)
     }
 
     /* remove deleted bhs */
+    if (!deleted) {
+        qemu_lockcnt_dec(&ctx->list_lock);
+        return ret;
+    }
+
     if (qemu_lockcnt_dec_and_lock(&ctx->list_lock)) {
         bhp = &ctx->first_bh;
         while (*bhp) {
@@ -105,7 +115,6 @@ int aio_bh_poll(AioContext *ctx)
             }
         }
         qemu_lockcnt_unlock(&ctx->list_lock);
     }
-
     return ret;
 }
-- 
1.8.3.1