From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 28 Nov 2014 14:44:07 +0100
From: Kevin Wolf
Message-ID: <20141128134407.GF4035@noname.redhat.com>
References: <1417013204-30676-1-git-send-email-kwolf@redhat.com>
 <1417013204-30676-3-git-send-email-kwolf@redhat.com>
 <5476F3F1.6000105@kamp.de>
 <20141128125700.GE4035@noname.redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20141128125700.GE4035@noname.redhat.com>
Subject: Re: [Qemu-devel] [RFC PATCH 2/3] raw-posix: Convert Linux AIO submission to coroutines
To: Peter Lieven
Cc: pbonzini@redhat.com, ming.lei@canonical.com, qemu-devel@nongnu.org, stefanha@redhat.com

Am 28.11.2014 um 13:57 hat Kevin Wolf geschrieben:
> Am 27.11.2014 um 10:50 hat Peter Lieven geschrieben:
> > On 26.11.2014 15:46, Kevin Wolf wrote:
> > >This improves the performance of requests because an ACB doesn't need
> > >to be allocated on the heap any more. It also makes the code nicer and
> > >smaller.
> > >
> > >As a side effect, the codepath taken by aio=threads is changed to use
> > >paio_submit_co(). This doesn't change the performance at this point.
> > >
> > >Results of qemu-img bench -t none -c 10000000 [-n] /dev/loop0:
> > >
> > >      |     aio=native        |     aio=threads
> > >      | before   | with patch | before   | with patch
> > >------+----------+------------+----------+------------
> > >run 1 | 29.921s  | 26.932s    | 35.286s  | 35.447s
> > >run 2 | 29.793s  | 26.252s    | 35.276s  | 35.111s
> > >run 3 | 30.186s  | 27.114s    | 35.042s  | 34.921s
> > >run 4 | 30.425s  | 26.600s    | 35.169s  | 34.968s
> > >run 5 | 30.041s  | 26.263s    | 35.224s  | 35.000s
> > >
> > >TODO: Do some more serious benchmarking in VMs with less variance.
> > >Results of a quick fio run are vaguely positive.
> >
> > I still see the main-loop spun warnings with these patches applied to
> > master. They weren't there with the original patch from August.
> >
> > ~/git/qemu$ ./qemu-img bench -t none -c 10000000 -n /dev/ram1
> > Sending 10000000 requests, 4096 bytes each, 64 in parallel
> > main-loop: WARNING: I/O thread spun for 1000 iterations
> > Run completed in 31.947 seconds.
>
> Yes, I still need to bisect that. The 'qemu-img bench' numbers above are
> actually also from August; we have regressed by about a second since
> then, and I haven't found the reason for that yet either.

Did the first part of this now. The commit that introduced the "spun"
message is 2cdff7f6 ('linux-aio: avoid deadlock in nested aio_poll()
calls').

The following patch doesn't make the warning go away completely, but now
I only see it at some point during roughly every other run, instead of
immediately after starting qemu-img bench. It's probably a (very) minor
performance optimisation, too.
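For context, the completion bottom half that commit introduced re-schedules
itself *before* draining the completion events, so that a nested aio_poll()
started from a completion callback still finds the BH pending and can make
progress. Roughly (a simplified sketch from memory, not a verbatim copy of
block/linux-aio.c):

static void qemu_laio_completion_bh(void *opaque)
{
    struct qemu_laio_state *s = opaque;

    /* Fetch a new batch of completion events when the buffer is empty */
    if (s->event_idx == s->event_max) {
        do {
            struct timespec ts = { 0 };
            s->event_max = io_getevents(s->ctx, MAX_EVENTS, MAX_EVENTS,
                                        s->events, &ts);
        } while (s->event_max == -EINTR);

        s->event_idx = 0;
        if (s->event_max <= 0) {
            s->event_max = 0;
            return;                     /* nothing to do */
        }
    }

    /* Reschedule so nested event loops see currently pending completions */
    qemu_bh_schedule(s->completion_bh);

    /* Drain the batch; the callbacks below may run nested aio_poll() */
    while (s->event_idx < s->event_max) {
        struct iocb *iocb = s->events[s->event_idx].obj;
        struct qemu_laiocb *laiocb =
                container_of(iocb, struct qemu_laiocb, iocb);

        laiocb->ret = io_event_ret(&s->events[s->event_idx]);
        s->event_idx++;

        qemu_laio_process_completion(s, laiocb);
    }
}

Once the loop above has drained everything, the BH that was scheduled in
the middle has nothing left to do, so each batch still costs one extra
main loop iteration that never blocks, which is presumably what the spun
counter ends up seeing.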
Kevin

diff --git a/block/linux-aio.c b/block/linux-aio.c
index fd8f0e4..1a0ec62 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -136,6 +136,8 @@ static void qemu_laio_completion_bh(void *opaque)
 
         qemu_laio_process_completion(s, laiocb);
     }
+
+    qemu_bh_cancel(s->completion_bh);
 }
 
 static void qemu_laio_completion_cb(EventNotifier *e)
@@ -143,7 +145,7 @@ static void qemu_laio_completion_cb(EventNotifier *e)
     struct qemu_laio_state *s = container_of(e, struct qemu_laio_state, e);
 
     if (event_notifier_test_and_clear(&s->e)) {
-        qemu_bh_schedule(s->completion_bh);
+        qemu_laio_completion_bh(s);
     }
 }
 
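The idea behind the two hunks: qemu_laio_completion_cb() now runs the
completion code directly instead of taking a detour through the bottom
half, and the qemu_bh_cancel() at the end of qemu_laio_completion_bh()
drops the self-scheduled BH again once all pending events have actually
been drained, so the main loop shouldn't go through an extra empty
bottom-half iteration for every batch any more.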