Date: Sat, 4 Oct 2008 16:47:00 -0500
From: Ryan Harper <ryanh@us.ibm.com>
Subject: Re: [Qemu-devel] [PATCH 4/4] Reallocate dma buffers in read/write path if needed
Message-ID: <20081004214700.GH31395@us.ibm.com>
References: <1223071531-31817-1-git-send-email-ryanh@us.ibm.com> <1223071531-31817-5-git-send-email-ryanh@us.ibm.com> <200810040017.09081.paul@codesourcery.com> <48E6AC36.3060404@codemonkey.ws> <48E73ECD.9080309@redhat.com> <20081004135749.pphehrhuw9w4gwsc@imap.linux.ibm.com>
In-Reply-To: <20081004135749.pphehrhuw9w4gwsc@imap.linux.ibm.com>
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
To: aliguori@linux.ibm.com
Cc: Anthony Liguori, kvm@vger.kernel.org, qemu-devel@nongnu.org, Ryan Harper, Paul Brook, Avi Kivity

* aliguori@linux.ibm.com [2008-10-04 12:58]:
> Quoting Avi Kivity:
>
> > Anthony Liguori wrote:
> >> In general, I don't think there's a correct size to bound them
> >> that's less than phys_ram_size. The guest may be issuing really
> >> big IO requests.
> >
> > The correct fix is not to buffer at all but use scatter-gather. Until
> > this is done, buffering has to be bounded.
>
> Instead of capping each request at a certain relatively small size and
> capping the number of requests, I think if we had a cap on the total
> amount of outstanding IO, that would give us good performance while
> not allowing the guest to do anything crazy.
>
> For instance, a 4MB pool would allow decent sized requests without
> letting the guest allocate an absurd amount of memory.

I'd rather avoid the additional accounting overhead of a pool. If 4MB
is a reasonable limit, let's make that the new max. I can do some
testing to see where the performance improvements drop off.

We'd have a default buffer size (smaller than the previous 64k, and now
128k, buf size) that is used when we allocate SCSI requests; scanning
through send_command() gives a good idea of other SCSI command buffer
usage. On reads and writes, we keep the capping logic we've had all
along, but bump the max size up to something like 4MB -- or whatever
test results show as being ideal.

In all, it seems silly to worry about this sort of thing, since the
entire process could be contained with process ulimits if this is
really a concern. Are we any more concerned that, by splitting the
requests into many smaller requests, we're wasting CPU, pegging the
processor to 100% in some cases?

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ryanh@us.ibm.com
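For readers following along, the scheme being debated (grow the DMA buffer on demand in the read/write path, but bound it by a hard cap and split anything larger) can be sketched roughly as below. This is a minimal illustration, not QEMU's actual scsi-disk code: the names `dma_buf`, `dma_buf_ensure`, `dma_buf_chunk`, and the 128k/4MB constants are hypothetical stand-ins for the default and max sizes discussed in the thread.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical constants for illustration: a modest default allocation
 * plus a hard cap along the lines of the 4MB limit proposed above. */
#define DMA_BUF_DEFAULT (128 * 1024)
#define DMA_BUF_MAX     (4 * 1024 * 1024)

typedef struct {
    uint8_t *buf;
    size_t len;                 /* current allocation in bytes */
} dma_buf;

/* Grow the buffer on demand in the read/write path, never past
 * DMA_BUF_MAX; requests larger than the cap must be split by the
 * caller. Returns 0 on success, -1 on allocation failure (the old
 * buffer, if any, is preserved). */
static int dma_buf_ensure(dma_buf *d, size_t needed)
{
    if (needed > DMA_BUF_MAX)
        needed = DMA_BUF_MAX;   /* cap; caller splits the remainder */
    if (d->buf == NULL) {
        d->len = DMA_BUF_DEFAULT;
        d->buf = malloc(d->len);
    }
    if (d->buf && needed > d->len) {
        uint8_t *p = realloc(d->buf, needed);
        if (!p)
            return -1;          /* keep the old, smaller buffer */
        d->buf = p;
        d->len = needed;
    }
    return d->buf ? 0 : -1;
}

/* How many bytes of a request fit in one pass at the cap; the rest
 * would be issued as follow-on requests. */
static size_t dma_buf_chunk(size_t request)
{
    return request < DMA_BUF_MAX ? request : DMA_BUF_MAX;
}
```

Note the trade-off the thread is circling: this per-request cap avoids a shared pool's accounting, but a guest issuing very large IOs still pays the cost of splitting into `dma_buf_chunk`-sized pieces, which is the CPU concern raised at the end of the mail.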