From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <45AECBF9.1090209@cs.utexas.edu>
Date: Wed, 17 Jan 2007 19:23:05 -0600
From: Anthony Liguori
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Subject: [Qemu-devel] Tracking memory dirtying in QEMU
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
To: qemu-devel@nongnu.org

Howdy,

I've been working on migration for QEMU and have run into a snag.  I've got a
non-live migration patch that works quite happily[1].  I modified the
save/restore code to not seek at all, and then basically stream a save through
a pipe to a subprocess (usually ssh).

Conceptually, adding support for live migration is really easy.  All I think I
need to do is extend the current code to have a pre-save hook that is activated
before the VM is stopped.  This hook will be called repeatedly until it says
it's done, and then the rest of the save/load handlers are invoked.

At first, I'm just going to do a pre-save handler for RAM, which should
significantly reduce the amount of down time.  I think the only other device
we'll have to handle specially is the VGA memory, but I'm happy to ignore that
for now.

So, all I really need is to be able to track which pages are dirtied.  I also
need a method to reset the dirty map.

I started looking at adding another map like phys_ram_dirty.  That seems to
work for some of the IO_MEM_RAM pages, but not all.  My initial thought was
that all memory operations would go through one of the st[bwl]_phys functions,
but that doesn't seem to be the case.

Can anyone provide me with some advice on how to do this?  Am I right in
assuming that all IO will go through some function?

[1] http://hg.codemonkey.ws/qemu-pq/?f=758c26c82f52;file=qemu-migration.diff

Thanks,

Anthony Liguori
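
P.S.  To make the pre-save idea a bit more concrete, here's a rough sketch of
the sort of interface I have in mind.  register_savevm_live(), PreSaveHandler,
do_live_save() and qemu_presave_iterate() are names I've just invented, nothing
like them exists yet; the rest is meant to line up with the existing
register_savevm()/QEMUFile machinery as best I remember it.

/* A device registers a pre-save hook alongside its normal save/load
 * handlers.  The hook is called over and over while the guest keeps
 * running; it returns non-zero as long as it still has work to do
 * (e.g. dirty pages left to send) and 0 once it is happy for the
 * VM to be stopped. */
typedef int PreSaveHandler(QEMUFile *f, void *opaque);

int register_savevm_live(const char *idstr, int instance_id, int version_id,
                         PreSaveHandler *pre_save,      /* may be NULL */
                         SaveStateHandler *save_state,
                         LoadStateHandler *load_state,
                         void *opaque);

/* The migration side would then drive the save roughly like this: */
static void do_live_save(QEMUFile *f)
{
    int more;

    do {
        /* walk all registered handlers, calling each pre_save hook;
         * non-zero means at least one of them still has data to send */
        more = qemu_presave_iterate(f);
    } while (more);

    vm_stop(0);             /* now stop the guest ... */
    do_full_save(f);        /* ... and run the normal (non-live) save */
}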
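
P.P.S.  And this is the sort of dirty map I was playing with for RAM: one byte
per target page, analogous to phys_ram_dirty, set from the st[bwl]_phys paths
and cleared between pre-save passes.  All of the migration_* names below are
made up by me; target_phys_addr_t, TARGET_PAGE_BITS and phys_ram_size are the
usual definitions from the QEMU headers.  As noted above, the problem is that
this only catches the writes that actually go through those helpers, which
turns out not to be all of them.

/* One byte per target page; would be allocated at startup with
 * (phys_ram_size >> TARGET_PAGE_BITS) bytes, next to phys_ram_dirty. */
static uint8_t *migration_dirty_map;

/* called from stb_phys/stw_phys/stl_phys and friends, after the
 * store itself has been done */
static inline void migration_mark_dirty(target_phys_addr_t addr)
{
    migration_dirty_map[addr >> TARGET_PAGE_BITS] = 1;
}

/* called by the RAM pre-save handler at the start of each pass,
 * once everything marked dirty in the previous pass has been sent */
static void migration_reset_dirty(void)
{
    memset(migration_dirty_map, 0, phys_ram_size >> TARGET_PAGE_BITS);
}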