Message-ID: <48E850AD.8080904@redhat.com>
Date: Sun, 05 Oct 2008 07:29:17 +0200
From: Avi Kivity
MIME-Version: 1.0
Subject: Re: [Qemu-devel] [PATCH 4/4] Reallocate dma buffers in read/write path if needed
References: <1223071531-31817-1-git-send-email-ryanh@us.ibm.com>
 <1223071531-31817-5-git-send-email-ryanh@us.ibm.com>
 <200810040017.09081.paul@codesourcery.com>
 <48E6AC36.3060404@codemonkey.ws>
 <48E73ECD.9080309@redhat.com>
 <20081004135749.pphehrhuw9w4gwsc@imap.linux.ibm.com>
 <20081004214700.GH31395@us.ibm.com>
In-Reply-To: <20081004214700.GH31395@us.ibm.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
To: Ryan Harper
Cc: Paul Brook, Anthony Liguori, aliguori@linux.ibm.com, kvm@vger.kernel.org, qemu-devel@nongnu.org

Ryan Harper wrote:
> I'd rather avoid any additional accounting overhead of a pool.

The accounting overhead is noise compared to copying hundreds of
megabytes per second.

> If 4MB
> is a reasonable limit, lets make that the new max.

The real max is the dma buffer size multiplied by the number of
concurrent requests.  With a queue depth of 64, the buffers become
4 MB * 64 = 256 MB.  That can double the size of a small guest, and
that's with just one disk, too.

> I can do some
> testing to see where we drop off on performance improvements.  We'd
> have a default buffer size (smaller than the previous 64, and now 128k
> buf size) that is used when we allocate scsi requests; scanning through
> send_command() provides a good idea of other scsi command buf usage; and
> on reads and writes, keep the capping logic we've had all along, but
> bump the max size up to something like 4MB -- or whatever tests results
> show as being ideal.

We know what the ideal is: dropping the scatter/gather buffer completely.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
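
A minimal sketch, in C, of the capped grow-on-demand buffering discussed in
the quoted proposal.  This is not the actual QEMU code; SCSI_DEFAULT_BUF_SIZE,
SCSI_MAX_BUF_SIZE and scsi_req_grow_buf() are hypothetical names used only to
illustrate the scheme.

    #include <stdint.h>
    #include <stdlib.h>

    #define SCSI_DEFAULT_BUF_SIZE  (128 * 1024)        /* small default per request */
    #define SCSI_MAX_BUF_SIZE      (4 * 1024 * 1024)   /* proposed 4 MB cap */

    struct scsi_req_buf {
        uint8_t *buf;
        size_t   size;
    };

    /* Grow the per-request DMA bounce buffer on demand, never past the cap. */
    static int scsi_req_grow_buf(struct scsi_req_buf *r, size_t needed)
    {
        size_t new_size;
        uint8_t *p;

        if (needed <= r->size)
            return 0;                   /* already big enough */
        if (needed > SCSI_MAX_BUF_SIZE)
            return -1;                  /* caller must split the transfer */

        new_size = r->size ? r->size : SCSI_DEFAULT_BUF_SIZE;
        while (new_size < needed)
            new_size *= 2;
        if (new_size > SCSI_MAX_BUF_SIZE)
            new_size = SCSI_MAX_BUF_SIZE;

        p = realloc(r->buf, new_size);
        if (!p)
            return -1;
        r->buf  = p;
        r->size = new_size;
        return 0;
    }

    /*
     * Worst case noted in the reply above: with 64 outstanding requests,
     * each capped at 4 MB, the bounce buffers alone can pin
     * 64 * 4 MB = 256 MB of host memory for a single disk.
     */

Doubling up to a hard cap keeps small commands on the small default
allocation while still letting large reads and writes grow, but the cap is
paid per outstanding request, so queue depth multiplies it.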
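
For contrast, a hedged sketch of what "dropping the scatter/gather buffer
completely" could look like in principle: build an iovec over guest segments
that are already mapped into host memory and submit them with a single
pwritev(), so no bounce buffer is allocated or copied.  The guest_seg type
and submit_write_zero_copy() are hypothetical, and real QEMU code would have
to map and unmap the guest physical addresses around the call.

    #define _GNU_SOURCE
    #include <sys/uio.h>
    #include <stdlib.h>

    struct guest_seg {
        void   *host_addr;   /* guest segment already mapped into host memory */
        size_t  len;
    };

    static ssize_t submit_write_zero_copy(int fd, off_t offset,
                                          const struct guest_seg *sg, int nseg)
    {
        struct iovec *iov = calloc(nseg, sizeof(*iov));
        ssize_t ret;
        int i;

        if (!iov)
            return -1;
        for (i = 0; i < nseg; i++) {
            iov[i].iov_base = sg[i].host_addr;
            iov[i].iov_len  = sg[i].len;
        }
        /* No intermediate copy: the kernel reads straight from guest memory. */
        ret = pwritev(fd, iov, nseg, offset);
        free(iov);
        return ret;
    }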