From: Paul Brook
Subject: Re: [Qemu-devel] Kernel memory allocation debugging with Qemu
Date: Fri, 8 Feb 2008 18:55:03 +0000
Message-Id: <200802081855.05779.paul@codesourcery.com>
Reply-To: qemu-devel@nongnu.org
To: qemu-devel@nongnu.org
Cc: Blue Swirl

> The patch takes half of the memory and slows down the system. I think
> Qemu could be used instead. A channel (IO/MMIO) is created between the
> memory allocator in the target kernel and Qemu running on the host.
> The memory allocator tells Qemu about each allocated area using the
> channel. Qemu changes the physical memory mapping for the area to
> special memory that reports any read-before-write back to the
> allocator. A write changes the memory back to standard RAM. The
> performance would be comparable to Qemu in general, and the host
> kernel + Qemu would only take a few MB of memory. The system would be
> directly usable for other OSes as well.

The qemu implementation isn't actually any more space-efficient than the
in-kernel implementation. You still need the same amount of bookkeeping
RAM. In both cases it should be possible to reduce the overhead from 1/2
to 1/9 by using a bitmask rather than whole bytes.

Performance is less clear. A qemu implementation probably causes less
relative slowdown than an in-kernel implementation. However it's still
going to be significantly slower than normal qemu. Remember that any
checked access is going to have to go through the slow case in the TLB
lookup.

Any optimizations that are applicable to one implementation can probably
also be applied to the other. Given that qemu is significantly slower to
start with, and depending on the overhead of taking the page fault, it
might not end up much better overall. A KVM implementation would most
likely be slower than the in-kernel one.

That said, it may be an interesting thing to play with. In practice it's
probably most useful to generate an interrupt and report back to the
guest OS, rather than having qemu report faults directly.

Paul
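For concreteness, here is a rough guest-side sketch of the IO/MMIO channel
described in the quoted proposal. The MMIO address, register layout, and
function name are invented for illustration only; no such device exists in
qemu, and a real implementation would also need the matching device model
on the qemu side plus proper ioremap()/writeq() accessors in the guest
kernel.

```c
/* Hypothetical guest-side view of an allocator->emulator channel.
 * Everything here (address, layout, names) is made up for illustration;
 * it is not part of qemu or of the kernel patch under discussion. */
#include <stdint.h>

#define ALLOC_DEBUG_MMIO_BASE 0xfeedc000UL   /* invented guest-physical address */

struct alloc_debug_regs {
    volatile uint64_t region_addr;   /* start of the freshly allocated region */
    volatile uint64_t region_size;   /* writing the size would trigger the remap */
};

/* Called by the guest allocator after each allocation.  A real kernel
 * would map the registers with ioremap() instead of casting a raw
 * physical address; the cast is only to keep the sketch short. */
static void report_allocation(uint64_t addr, uint64_t size)
{
    struct alloc_debug_regs *regs =
        (struct alloc_debug_regs *)(uintptr_t)ALLOC_DEBUG_MMIO_BASE;

    regs->region_addr = addr;
    regs->region_size = size;   /* emulator remaps the pages to "unwritten" memory here */
}
```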
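To make the 1/2 versus 1/9 figures concrete: one bookkeeping byte per
tracked byte means the bookkeeping equals the tracked RAM, i.e. half of
the combined total; one bookkeeping bit per byte needs only ram_size/8,
i.e. 1/9 of the combined total (ram_size + ram_size/8). A minimal
user-space sketch of such a bitmap follows; the names are hypothetical
and not taken from any existing patch.

```c
/* Minimal sketch of read-before-write tracking with one bit per byte.
 * With whole bookkeeping bytes the overhead is 1/2 of total memory;
 * with a bitmap it drops to ram_size/8, i.e. 1/9 of the total. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint8_t *written_bitmap;   /* 1 bit per tracked byte */

static void track_init(size_t ram_size)
{
    written_bitmap = calloc((ram_size + 7) / 8, 1);
}

/* Mark a byte as written; reads after this are fine. */
static void note_write(size_t addr)
{
    written_bitmap[addr >> 3] |= 1u << (addr & 7);
}

/* Returns nonzero if this byte is being read before it was ever written. */
static int check_read(size_t addr)
{
    return !(written_bitmap[addr >> 3] & (1u << (addr & 7)));
}

int main(void)
{
    track_init(4096);
    note_write(42);
    printf("read-before-write at 42: %d\n", check_read(42));  /* 0 */
    printf("read-before-write at 43: %d\n", check_read(43));  /* 1 */
    free(written_bitmap);
    return 0;
}
```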