From: Anthony Liguori
Date: Fri, 26 Sep 2008 13:35:19 -0500
Subject: Re: [Qemu-devel] [5323] Implement an fd pool to get real AIO with posix-aio
Message-ID: <48DD2B67.407@us.ibm.com>
In-Reply-To: <20080926175927.GZ31395@us.ibm.com>
To: Ryan Harper
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org

Ryan Harper wrote:
> * Anthony Liguori [2008-09-26 11:03]:
>
>> Revision: 5323
>> http://svn.sv.gnu.org/viewvc/?view=rev&root=qemu&revision=5323
>> Author: aliguori
>> Date: 2008-09-26 15:59:29 +0000 (Fri, 26 Sep 2008)
>>
>> Log Message:
>> -----------
>> Implement an fd pool to get real AIO with posix-aio
>>
>> This patch implements a simple fd pool to allow many AIO requests with
>> posix-aio.  The result is significantly improved performance (identical
>> to that reported for linux-aio) for both cache=on and cache=off.
>>
>> The fundamental problem with posix-aio is that it limits itself to one
>> thread per-file descriptor.  I don't know why this is, but this patch
>> provides a simple mechanism to work around this (duplicating the file
>> descriptor).
>>
>> This isn't a great solution, but it seems like a reasonable intermediate
>> step between posix-aio and a custom thread-pool to replace it.
>>
>> Ryan Harper will be posting some performance analysis he did comparing
>> posix-aio with fd pooling against linux-aio.  The size of the posix-aio
>> thread pool and the fd pool were largely determined by him based on this
>> analysis.
>>
>
> I'll have some more data to post in a bit, but for now, bumping the fd
> pool up to 64 and ensuring we init aio to support a thread per fd, we
> mostly match linux aio performance with a simpler implementation.  For
> randomwrites, fd_pool lags a bit, but I've got other data that shows in
> most scenarios, fd_pool matches linux aio performance and does so with
> less CPU consumption.
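
To make the mechanism in the log message above concrete, here is a minimal,
hypothetical sketch of the duplicated-descriptor idea.  This is illustrative
only, not the code from r5323: the names fd_pool_init and fd_pool_aio_write
and the pool size of 64 are invented here, and glibc's aio_init()/struct
aioinit is presumably the "init aio to support a thread per fd" knob Ryan
mentions.

/* Hypothetical sketch only -- not the qemu r5323 patch.  The names
 * fd_pool_init/fd_pool_aio_write and POOL_SIZE are invented for
 * illustration.  Link with -lrt on older glibc. */
#define _GNU_SOURCE            /* for aio_init() and struct aioinit */
#include <aio.h>
#include <string.h>
#include <unistd.h>

#define POOL_SIZE 64           /* matches the fd_pool[64] runs below */

struct fd_pool {
    int fds[POOL_SIZE];        /* dup()ed copies of one open file */
    int next;                  /* round-robin cursor */
};

static void fd_pool_init(struct fd_pool *pool, int fd)
{
    /* Size glibc's internal AIO thread pool before the first request,
     * so there can be roughly one worker thread per pooled descriptor. */
    struct aioinit ai;
    memset(&ai, 0, sizeof(ai));
    ai.aio_threads = POOL_SIZE;
    ai.aio_num = POOL_SIZE;    /* expected number of in-flight requests */
    aio_init(&ai);

    pool->fds[0] = fd;
    for (int i = 1; i < POOL_SIZE; i++) {
        pool->fds[i] = dup(fd);    /* same open file, distinct descriptor */
    }
    pool->next = 0;
}

/* Submit a write through the next descriptor in the pool.  Because each
 * request targets a different fd, glibc no longer serializes it behind
 * other requests aimed at the same descriptor. */
static int fd_pool_aio_write(struct fd_pool *pool, struct aiocb *cb,
                             void *buf, size_t len, off_t offset)
{
    memset(cb, 0, sizeof(*cb));
    cb->aio_fildes = pool->fds[pool->next];
    pool->next = (pool->next + 1) % POOL_SIZE;
    cb->aio_buf = buf;
    cb->aio_nbytes = len;
    cb->aio_offset = offset;
    return aio_write(cb);
}

Completions would still be collected with aio_error()/aio_return() (or a
signal/callback) as usual; the only point of the pool is to spread
otherwise-serialized requests across descriptors.
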
> Results:
>
> 16k randwrite 1 thread, 74 iodepth | MB/s | avg sub lat (us) | avg comp lat (ms)
> -----------------------------------+------+------------------+------------------
> baremetal (O_DIRECT, aka cache=off)| 61.2 |            13.07 |             19.59
> kvm: cache=off posix-aio w/o patch |  4.7 |          3467.44 |            254.08

So with posix-aio, once we have many requests, each request is going to
block until the request completes.  I don't fully understand why the
average completion latency is so high, because in theory there should be
no delay between completion and submission.  Maybe it has to do with the
fact that we spend so much time blocking during submission that the
io-thread doesn't get a chance to run.  I bet if we dropped the qemu_mutex
during submission, the completion latency would drop to a very small
number.  Not worth actually testing.

> kvm: cache=off linux-aio           | 61.1 |            75.35 |             19.57

The fact that the submission latency is so high confirms what I've been
saying about linux-aio submissions being far from optimal.  That is really
quite high.

> kvm: cache=on posix-aio w/o patch  |127.0 |           115.78 |              9.19
> kvm: cache=on posix-aio w/ patch   |126.0 |            67.35 |              9.30

It looks like 127 MB/s is pretty close to the optimal cached write rate.
When using caching, writes can complete almost immediately, so it's not
surprising that submission latency is so low (even though it's blocking
during submission).  I am surprised that w/ patch has a latency that's so
high.  I think that suggests that requests are queuing up.  I bet
increasing the aio_num field would reduce this number.

> ------------ new results ----------+------+------------------+------------------
> kvm:cache=off posix-aio fd_pool[16]| 33.5 |            14.28 |             49.19
> kvm:cache=off posix-aio fd_pool[64]| 51.1 |            14.86 |             23.66

I assume you tried to bump from 64 to something higher and couldn't make
up the lost bandwidth?

> 16k write 1 thread, 74 iodepth     | MB/s | avg sub lat (us) | avg comp lat (ms)
> -----------------------------------+------+------------------+------------------
> baremetal (O_DIRECT, aka cache=off)|128.1 |            10.90 |              9.45
> kvm: cache=off posix-aio w/o patch |  5.1 |          3152.00 |            231.06
> kvm: cache=off linux-aio           |130.0 |            83.83 |              8.99
> kvm: cache=on posix-aio w/o patch  |184.0 |            80.46 |              6.35
> kvm: cache=on posix-aio w/ patch   |165.0 |            70.90 |              7.09
> ------------ new results ----------+------+------------------+------------------
> kvm:cache=off posix-aio fd_pool[16]| 78.2 |            58.24 |             15.43
> kvm:cache=off posix-aio fd_pool[64]|129.0 |            71.62 |              9.11

That's a nice result.  We could probably improve the latency by tweaking
the queue sizes.

Very nice work!  Thanks for doing the thorough analysis.

Regards,

Anthony Liguori
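
As a rough cross-check on how the columns above relate (an approximation
that assumes the benchmark keeps all 74 requests in flight and that "16k"
means 16384-byte requests):

    throughput ~= iodepth * request_size / avg_completion_latency

For the randwrite table: 74 * 16 KB / 19.59 ms ~= 62 MB/s, close to the
61.2 MB/s baremetal row, and 74 * 16 KB / 23.66 ms ~= 51 MB/s, matching the
fd_pool[64] row.  At a fixed queue depth, bandwidth and completion latency
are two views of the same queue, which is why the queue-size tuning
suggested above (a larger fd pool, a larger aio_num) would show up directly
as recovered bandwidth.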