Date: Mon, 20 Jul 2015 10:27:08 +0800
From: Fam Zheng
To: Paolo Bonzini
Cc: kwolf@redhat.com, lersek@redhat.com, qemu-devel@nongnu.org,
    stefanha@redhat.com, rjones@redhat.com
Subject: Re: [Qemu-devel] [PATCH 2/2] AioContext: optimize clearing the EventNotifier
Message-ID: <20150720022708.GA17582@ad.nay.redhat.com>
In-Reply-To: <1437250916-18905-3-git-send-email-pbonzini@redhat.com>
References: <1437250916-18905-1-git-send-email-pbonzini@redhat.com>
            <1437250916-18905-3-git-send-email-pbonzini@redhat.com>

On Sat, 07/18 22:21, Paolo Bonzini wrote:
> It is pretty rare for aio_notify to actually set the EventNotifier.  It
> can happen with worker threads such as thread-pool.c's, but otherwise it
> should never be set thanks to the ctx->notify_me optimization.  The
> previous patch, unfortunately, added an unconditional call to
> event_notifier_test_and_clear; now add a userspace fast path that
> avoids the call.
>
> Note that it is not possible to do the same with event_notifier_set;
> it would break, as proved (again) by the included formal model.
>
> This patch survived over 800 reboots on aarch64 KVM.

For aio-posix, how about keeping the optimization local to aio_poll,
which doesn't need an atomic operation? (no idea for win32 :)

diff --git a/aio-posix.c b/aio-posix.c
index 5c8b266..7e98123 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -236,6 +236,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     int i, ret;
     bool progress;
     int64_t timeout;
+    int aio_notifier_idx = -1;
 
     aio_context_acquire(ctx);
     progress = false;
@@ -256,11 +257,18 @@ bool aio_poll(AioContext *ctx, bool blocking)
     assert(npfd == 0);
 
     /* fill pollfds */
+    i = 0;
     QLIST_FOREACH(node, &ctx->aio_handlers, node) {
         if (!node->deleted && node->pfd.events) {
             add_pollfd(node);
+            if (node->pfd.fd == event_notifier_get_fd(&ctx->notifier)) {
+                assert(aio_notifier_idx == -1);
+                aio_notifier_idx = i;
+            }
+            i++;
         }
     }
+    assert(aio_notifier_idx != -1);
 
     timeout = blocking ? aio_compute_timeout(ctx) : 0;
 
@@ -276,7 +284,9 @@ bool aio_poll(AioContext *ctx, bool blocking)
         aio_context_acquire(ctx);
     }
 
-    event_notifier_test_and_clear(&ctx->notifier);
+    if (pollfds[aio_notifier_idx].revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) {
+        event_notifier_test_and_clear(&ctx->notifier);
+    }
 
     /* if we have any readable fds, dispatch event */
     if (ret > 0) {
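
For reference, here is how I read the "userspace fast path" in the
commit message -- a sketch only, not the actual patch.  FastNotifier
and the function names are made up, and it uses eventfd(2) directly
instead of QEMU's EventNotifier wrapper, so it is Linux-only:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>

typedef struct {
    int efd;                 /* registered with poll() for G_IO_IN */
    atomic_bool notified;    /* userspace mirror of "eventfd is set" */
} FastNotifier;

static void fast_notify(FastNotifier *n)
{
    uint64_t one = 1;
    ssize_t r;

    /* Make the fd readable first, then publish the flag.  A reader
     * that races and still sees the flag false merely postpones the
     * clear to its next iteration; no wakeup is lost. */
    r = write(n->efd, &one, sizeof(one));
    (void)r;
    atomic_store(&n->notified, true);
}

static void fast_test_and_clear(FastNotifier *n)
{
    uint64_t value;
    ssize_t r;

    /* Fast path: nobody notified us, so skip the read(2) syscall. */
    if (!atomic_exchange(&n->notified, false)) {
        return;
    }
    r = read(n->efd, &value, sizeof(value));
    (void)r;
}

int main(void)
{
    FastNotifier n;

    n.efd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
    atomic_init(&n.notified, false);

    fast_test_and_clear(&n);   /* common case: flag clear, no syscall */
    fast_notify(&n);
    fast_test_and_clear(&n);   /* flag set: fall through to read(2) */

    close(n.efd);
    return 0;
}

Even so, the atomic_exchange costs something on every iteration of the
event loop, which is what the revents check in the diff above avoids on
the POSIX side: poll() has already told us whether the notifier fd is
readable, so no extra synchronization is needed there.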