Message-ID: <48E73ECD.9080309@redhat.com>
Date: Sat, 04 Oct 2008 13:00:45 +0300
From: Avi Kivity
Subject: Re: [Qemu-devel] [PATCH 4/4] Reallocate dma buffers in read/write path if needed
References: <1223071531-31817-1-git-send-email-ryanh@us.ibm.com> <1223071531-31817-5-git-send-email-ryanh@us.ibm.com> <200810040017.09081.paul@codesourcery.com> <48E6AC36.3060404@codemonkey.ws>
In-Reply-To: <48E6AC36.3060404@codemonkey.ws>
To: qemu-devel@nongnu.org
Cc: Anthony Liguori, Ryan Harper, Paul Brook, kvm@vger.kernel.org

Anthony Liguori wrote:
> Paul Brook wrote:
>> On Friday 03 October 2008, Ryan Harper wrote:
>>> The default buffer size breaks up larger read/write requests
>>> unnecessarily. When we encounter requests larger than the default
>>> dma buffer, reallocate the buffer to support the request.
>>
>> Allocating unboundedly large host buffers based on guest input seems
>> like a bad idea.
>
> Perhaps they could at least be bounded by phys_ram_size.

So the guest could double the memory load on the host with just one request?!

> In general, I don't think there's a correct size to bound them that's
> less than phys_ram_size. The guest may be issuing really big IO
> requests.

The correct fix is not to buffer at all but to use scatter-gather. Until
that is done, buffering has to be bounded.

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
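
[Editor's sketch, for illustration only: this is not Ryan's patch. The names
dma_buf_reserve, dma_buf, dma_buf_size, and DMA_BUF_MAX, and the 4 MB cap,
are all hypothetical. It only shows the bounded-reallocation idea discussed
above: grow the bounce buffer on demand, but refuse to grow past a fixed cap
so a guest-chosen request size cannot exhaust host memory.]

#include <stdint.h>
#include <stdlib.h>

/* Hypothetical cap; the real limit would be a policy decision. */
#define DMA_BUF_MAX (4 * 1024 * 1024)

static uint8_t *dma_buf;
static size_t dma_buf_size;

/* Grow the bounce buffer to hold 'needed' bytes, but never past
 * DMA_BUF_MAX.  Returns 0 on success; -1 means the caller must split
 * the request into smaller pieces instead of growing the buffer. */
static int dma_buf_reserve(size_t needed)
{
    uint8_t *p;

    if (needed <= dma_buf_size)
        return 0;
    if (needed > DMA_BUF_MAX)
        return -1;
    p = realloc(dma_buf, needed);
    if (!p)
        return -1;
    dma_buf = p;
    dma_buf_size = needed;
    return 0;
}

With scatter-gather, as Avi suggests, the bounce buffer disappears entirely,
since the device model can walk the guest's own pages instead of copying
through a host-side buffer.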