From: Paul Brook
Subject: Re: [Qemu-devel] [PATCH 4/4] Reallocate dma buffers in read/write path if needed
Date: Sat, 4 Oct 2008 01:00:27 +0100
To: qemu-devel@nongnu.org
Cc: Anthony Liguori, Ryan Harper, kvm@vger.kernel.org
Message-Id: <200810040100.28341.paul@codesourcery.com>
In-Reply-To: <48E6AC36.3060404@codemonkey.ws>
References: <1223071531-31817-1-git-send-email-ryanh@us.ibm.com> <200810040017.09081.paul@codesourcery.com> <48E6AC36.3060404@codemonkey.ws>

On Saturday 04 October 2008, Anthony Liguori wrote:
> Paul Brook wrote:
> > On Friday 03 October 2008, Ryan Harper wrote:
> >> The default buffer size breaks up larger read/write requests
> >> unnecessarily. When we encounter requests larger than the default dma
> >> buffer, reallocate the buffer to support the request.
> >
> > Allocating unboundedly large host buffers based on guest input seems
> > like a bad idea.
>
> Perhaps they could be at least bound to phys_ram_size.

That's still way too large. It means that the maximum host footprint of
qemu is many times the size of the guest RAM. There's a good chance that
the host machine doesn't even have enough virtual address space to
satisfy this request.

I expect that the only situation where you can avoid breaking up large
transfers is when you have zero-copy IO. Previous zero-copy/vectored IO
patches suffered from a similar problem: it is not acceptable to
allocate huge chunks of host RAM when you fall back to normal IO.

> In general, I don't think there's a correct size to bound them that's
> less than phys_ram_size. The guest may be issuing really big IO
> requests.

Qemu is perfectly capable of handling large IO requests by splitting
them into multiple smaller requests. Enlarging the size of this buffer
is just a secondary performance optimisation.

Admittedly we don't currently limit the number of simultaneous commands
a guest can submit, but that's relatively easy to fix.

Paul
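
---

A minimal sketch, in C, of the splitting approach Paul describes: the
host keeps a fixed-size bounce buffer and serves an arbitrarily large
guest request as a series of bounded chunks, so the host footprint never
tracks the size of the guest's request. Everything here is illustrative
and assumed, not QEMU's actual code: DMA_BUF_SIZE, guest_dma_read, and
the simulated disk/guest_ram arrays are hypothetical names standing in
for the real block backend and guest memory.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative cap on the host bounce buffer -- an assumption, not
     * QEMU's actual constant.  The bound is fixed by the host,
     * independent of how large a request the guest issues. */
    #define DMA_BUF_SIZE (64 * 1024)

    static uint8_t dma_buf[DMA_BUF_SIZE];

    /* Stand-ins for the backing store and guest RAM, so the sketch
     * runs on its own. */
    static uint8_t disk[1 << 20];
    static uint8_t guest_ram[1 << 20];

    /* Serve one guest read by splitting it into bounce-buffer-sized
     * chunks.  Host memory use stays at DMA_BUF_SIZE no matter how
     * big the request is. */
    static void guest_dma_read(size_t guest_addr, size_t offset, size_t len)
    {
        while (len > 0) {
            size_t chunk = len < DMA_BUF_SIZE ? len : DMA_BUF_SIZE;
            memcpy(dma_buf, disk + offset, chunk);          /* host-side IO */
            memcpy(guest_ram + guest_addr, dma_buf, chunk); /* DMA to guest */
            guest_addr += chunk;
            offset += chunk;
            len -= chunk;
        }
    }

    int main(void)
    {
        memset(disk, 0xab, sizeof(disk));
        /* A 1 MB guest request is served as 16 x 64 kB chunks. */
        guest_dma_read(0, 0, sizeof(guest_ram));
        printf("first byte: 0x%02x\n", guest_ram[0]);
        return 0;
    }

With zero-copy IO the bounce buffer (and hence the splitting) goes away
entirely, which is why Paul calls enlarging the buffer only a secondary
optimisation: the copy-based path should stay bounded, and the real win
comes from eliminating the copy.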