Date: Sat, 04 Oct 2008 17:22:57 -0500
From: Anthony Liguori
Subject: Re: [Qemu-devel] [PATCH 4/4] Reallocate dma buffers in read/write path if needed
To: Ryan Harper
Cc: Paul Brook, Anthony Liguori, Avi Kivity, kvm@vger.kernel.org, qemu-devel@nongnu.org

Ryan Harper wrote:
> I'd rather avoid any additional accounting overhead of a pool. If 4MB
> is a reasonable limit, let's make that the new max. I can do some
> testing to see where we drop off on performance improvements. We'd
> have a default buffer size (smaller than the previous 64k, and now
> 128k, buf size) that is used when we allocate SCSI requests; scanning
> through send_command() provides a good idea of other SCSI command buf
> usage; and on reads and writes, keep the capping logic we've had all
> along, but bump the max size up to something like 4MB -- or whatever
> test results show as being ideal.
>
> In all, it seems silly to worry about this sort of thing, since the
> entire process could be contained with process ulimits if this is
> really a concern. Are we any more concerned that, by splitting the
> requests into many smaller requests, we're wasting CPU and pegging the
> processor at 100% in some cases?

There are two concerns with allowing the guest to allocate arbitrary
amounts of memory. The first is that QEMU is not written in such a way
as to be robust in the face of out-of-memory conditions, so if we run
out of VA space, an important malloc could fail and we'd fall over.

The second concern is that if a guest can allocate arbitrary amounts of
memory, it could generate a swap storm. Unfortunately, AFAIK, Linux is
not yet at a point where it can deal with swap fairness. Hopefully this
is a limitation that the IO controller folks are taking into account.

Regards,

Anthony Liguori
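
P.S. For concreteness, here is a minimal sketch of the capped
reallocation scheme being discussed. The names (SCSIRequest,
scsi_grow_dma_buf, SCSI_DMA_BUF_DEFAULT, SCSI_DMA_BUF_MAX) are
illustrative, not the actual QEMU identifiers; it only shows the shape
of the logic: grow the per-request buffer on demand, never past the
cap, and let the caller split anything larger into multiple passes.

/* Hypothetical sketch; names and constants are illustrative only. */
#include <stdint.h>
#include <stdlib.h>

#define SCSI_DMA_BUF_DEFAULT (128 * 1024)       /* per-request default */
#define SCSI_DMA_BUF_MAX     (4 * 1024 * 1024)  /* proposed 4MB cap */

typedef struct SCSIRequest {
    /* Assume dma_buf is allocated at SCSI_DMA_BUF_DEFAULT bytes when
     * the request is created, per the default-size idea above. */
    uint8_t *dma_buf;
    size_t   dma_buf_size;
} SCSIRequest;

/* Grow r->dma_buf to hold 'needed' bytes, but never beyond
 * SCSI_DMA_BUF_MAX.  Returns how many bytes are usable this pass, so
 * a read or write larger than the cap is issued in several chunks. */
static size_t scsi_grow_dma_buf(SCSIRequest *r, size_t needed)
{
    size_t want = needed < SCSI_DMA_BUF_MAX ? needed : SCSI_DMA_BUF_MAX;

    if (want > r->dma_buf_size) {
        uint8_t *p = realloc(r->dma_buf, want);
        if (p) {
            r->dma_buf = p;
            r->dma_buf_size = want;
        }
        /* On realloc failure, keep the old buffer and simply do more,
         * smaller passes instead of falling over. */
    }
    return needed < r->dma_buf_size ? needed : r->dma_buf_size;
}

The cap bounds how much host VA a single guest request can pin, which
speaks to the first concern; the swap-storm concern is about aggregate
memory and would still need an external limit such as ulimits.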