From: Paul Brook
Subject: Re: [Qemu-devel] [PATCH 4/4] Reallocate dma buffers in read/write path if needed
Date: Sun, 5 Oct 2008 00:00:27 +0100
Message-Id: <200810050000.28154.paul@codesourcery.com>
In-Reply-To: <20081004214700.GH31395@us.ibm.com>
References: <1223071531-31817-1-git-send-email-ryanh@us.ibm.com> <20081004135749.pphehrhuw9w4gwsc@imap.linux.ibm.com> <20081004214700.GH31395@us.ibm.com>
List-Id: qemu-devel.nongnu.org
To: Ryan Harper
Cc: qemu-devel@nongnu.org, Anthony Liguori, aliguori@linux.ibm.com, kvm@vger.kernel.org, Avi Kivity

On Saturday 04 October 2008, Ryan Harper wrote:
> In all, it seems silly to worry about this sort of thing since the
> entire process could be contained with process ulimits if this is really
> a concern.  Are we any more concerned that by splitting the requests
> into many smaller requests that we're wasting cpu, pegging the
> processor to 100% in some cases?

Using small requests may be a bit inefficient, but it still works and allows
the guest to make progress.

Allocating very large quantities of memory is very likely to kill the VM one
way or another. This is not acceptable, especially when the guest hasn't even
done anything wrong. There are legitimate circumstances where the size of the
outstanding IO requests may be comparable to the guest RAM size.

Paul
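
P.S. For concreteness, here is a rough sketch of what I mean by issuing a
large request as a series of bounded chunks instead of growing the buffer to
match it. This is illustrative only, not the actual patch or QEMU code; all
names here (dma_read_chunked, dev_read, MAX_DMA_BUF_SIZE) are made up:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define MAX_DMA_BUF_SIZE (128 * 1024)   /* hard cap, independent of request size */

/* Stand-in for the device/backend read; just zero-fills for the demo. */
static int dev_read(uint64_t offset, void *buf, size_t len)
{
    (void)offset;
    memset(buf, 0, len);
    return 0;
}

/*
 * Read `len` bytes at `offset` into `dst`, never allocating more than
 * MAX_DMA_BUF_SIZE for the bounce buffer.  Oversized requests are split
 * into smaller ones rather than reallocating the buffer to fit.
 */
static int dma_read_chunked(uint64_t offset, uint8_t *dst, size_t len)
{
    size_t buf_size = len < MAX_DMA_BUF_SIZE ? len : MAX_DMA_BUF_SIZE;
    uint8_t *buf = malloc(buf_size ? buf_size : 1);

    if (!buf) {
        return -1;              /* bounded, recoverable failure */
    }
    while (len > 0) {
        size_t chunk = len < buf_size ? len : buf_size;

        if (dev_read(offset, buf, chunk) < 0) {
            free(buf);
            return -1;
        }
        memcpy(dst, buf, chunk);
        dst += chunk;
        offset += chunk;
        len -= chunk;
    }
    free(buf);
    return 0;
}

int main(void)
{
    static uint8_t out[1 << 20];    /* 1 MB request served with a 128 KB buffer */
    return dma_read_chunked(0, out, sizeof(out)) ? 1 : 0;
}

The allocation is bounded by the cap rather than by whatever size the guest
asks for, so a huge request costs extra iterations, not guest-RAM-sized
amounts of host memory.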