From: Fabrice Bellard
Subject: Re: [Qemu-devel] Tracking memory dirtying in QEMU
Date: Thu, 18 Jan 2007 20:05:31 +0100
To: qemu-devel@nongnu.org
Message-id: <45AFC4FB.80504@bellard.org>
In-reply-to: <45AECBF9.1090209@cs.utexas.edu>

Anthony Liguori wrote:
> Howdy,
>
> I've been working on migration for QEMU and have run into a snag. I've
> got a non-live migration patch that works quite happily[1]. I modified
> the save/restore code to not seek at all, and then basically pipe a save
> over a pipe to a subprocess (usually, ssh).

Qumranet has written some code to do live migration too. IMHO, the
client/server code should be integrated in QEMU in order to ease the use
of live migration.

> Conceptually, adding support for live migration is really easy. All I
> think I need to do is extend the current code to have a pre-save hook
> that is activated before the VM is stopped. This hook will be called
> until it says it's done, and then the rest of the save/load handlers are
> invoked. At first, I'm just going to do a pre-save handler for RAM,
> which should significantly reduce the amount of down time. I think the
> only other device we'll have to handle specially is the VGA memory, but
> I'm happy to ignore that for now.
>
> So, all I really need is to be able to track which pages are dirtied. I
> also need a method to reset the dirty map.
>
> I started looking at adding another map like phys_ram_dirty. That seems
> to work for some of the IO_MEM_RAM pages, but not all. My initial
> thought was that all memory operations should go through one of the
> st[bwl]_phys functions, but that doesn't seem to be the case.
>
> Can anyone provide me with some advice on how to do this? Am I right in
> assuming that all IO will go through some function?

RAM access is not handled via I/O for efficiency reasons, but the
phys_ram_dirty flags are always kept up to date. In order to use them, you
must allocate one bit in the dirty flags that is not already used by QEMU
or kqemu. Then you can use cpu_physical_memory_reset_dirty() to mark a page
as not dirty, and cpu_physical_memory_get_dirty() to test for dirtiness.

Note that, for performance reasons, the dirty bits are not updated when QEMU
modifies the A and D bits in the PTEs, which can be a problem for your
application.
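For example, a RAM pre-save pass over the phys_ram_base/phys_ram_size
globals could look roughly like this (untested sketch; MIGRATION_DIRTY_FLAG
is a placeholder bit value that you would have to pick so it does not clash
with the VGA, code, or kqemu bits, and send_page() stands for whatever your
migration code uses to push a page over the pipe/ssh connection):

/* Placeholder dirty bit reserved for migration (must not collide with
   VGA_DIRTY_FLAG, CODE_DIRTY_FLAG or the bit used by kqemu). */
#define MIGRATION_DIRTY_FLAG 0x08

static void ram_save_dirty_pages(void)
{
    target_phys_addr_t addr;

    /* All dirty bits start set, so the first pass sends every page;
       later passes only see pages written since the previous reset. */
    for (addr = 0; addr < phys_ram_size; addr += TARGET_PAGE_SIZE) {
        if (cpu_physical_memory_get_dirty(addr, MIGRATION_DIRTY_FLAG)) {
            send_page(addr, phys_ram_base + addr);
            cpu_physical_memory_reset_dirty(addr, addr + TARGET_PAGE_SIZE,
                                            MIGRATION_DIRTY_FLAG);
        }
    }
}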
FYI, the dirty bits are currently used in QEMU to optimize VGA refreshes and
to track self-modifying code. They are also used internally by kqemu.

Regards,

Fabrice.