From: Glauber Costa
Date: Thu, 18 Jun 2009 16:39:46 -0300
Message-ID: <20090618193946.GH3517@poweredge.glommer>
Subject: [Qemu-devel] sending pci information over the wire
To: qemu-devel@nongnu.org
Cc: kraxel@redhat.com, armbru@redhat.com

Hi folks,

I have some trial code here for a proposal, and I'd like to hear your opinions about it. (Well, I _had_ the code, because I was careless enough to git reset --hard the wrong location, which happened to contain part of it.)

Let me start by explaining what I'm trying to accomplish, and say up front that I'm not sure myself this is the best approach; it is just a crazy idea that popped up.

Right now, migration of pci devices works by a bit of luck. This is because the other side of the wire can enumerate the devices in a different order, causing them to end up at different addresses. Markus' pci_addr= patches do help with that. However, theoretically, there can be a case in which we:

 * start the receiving guest, with parameters determined by pci_addr=
 * start live migration
 * add a device.

The receiving guest won't know about that device, and migration will then fail. This is not a problem _today_, since mgmt tools disallow adding hardware during migration. But that restriction comes exactly from the lack of robustness of migration! Furthermore, mgmt tools might want to change that behaviour in the future...

That said, my proposal is as follows: in the savevm part of the pci bus, list the properties of all present devices; in the load part of the pci bus, scan the bus looking for devices that should be present and are not, and create them if needed (see the sketch in the P.S. below).

A new machine type gets added, pc_migr, that does not put any pci devices on the bus. All pci devices will be missing, and they all get created at load time. To do that, we also have to save/load some qemu internal state. For example, nd_table has to be transferred, to guarantee that the proper network configuration will exist on the other side. Ultimately, that could be used to transfer _all_ qemu internal state.

A side effect is that a qemu instance receiving a migration can be started with just: qemu-system-x86_64 -M pc_migr -incoming addr -vnc :x. That is not very important if you use mgmt apps, but it is a nice side effect otherwise.

I tested this with some network cards, and it kinda works. So, let me know what the general feeling about it is. If there is a compelling case for it, I can go back and hack on it more. Otherwise, it was fun anyway.
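P.S.: to make the load side a bit more concrete, here is roughly the shape of what I have in mind for hw/pci.c. This is a from-memory sketch, not the code I lost: the wire format (a count, then devfn plus driver name per device) and the pci_find_device_by_devfn() helper are made up for illustration, and it leans on the usual qemu internals (QEMUFile, qemu_get_be32()/qemu_get_buffer(), qdev's pci_create_simple()).

static int pci_bus_load_devices(QEMUFile *f, void *opaque, int version_id)
{
    PCIBus *bus = opaque;
    uint32_t ndevs, i;

    ndevs = qemu_get_be32(f);                /* how many devices the source had */
    for (i = 0; i < ndevs; i++) {
        uint32_t devfn = qemu_get_be32(f);   /* slot/function the source used */
        uint32_t len = qemu_get_be32(f);
        char name[64];

        if (len >= sizeof(name)) {
            return -EINVAL;                  /* refuse a bogus stream */
        }
        qemu_get_buffer(f, (uint8_t *)name, len);
        name[len] = '\0';                    /* qdev driver name, e.g. "e1000" */

        /* With -M pc_migr the bus starts empty, so every device the
         * source reports is missing here and gets created before its
         * own vmstate section is loaded. */
        if (!pci_find_device_by_devfn(bus, devfn)) {   /* assumed lookup helper */
            pci_create_simple(bus, devfn, name);
        }
    }
    return 0;
}

The savevm counterpart would symmetrically walk the bus's devfn-indexed slots and emit the devfn plus the qdev driver name for every populated one, and the pair would be registered (with something like register_savevm()) ahead of the per-device sections, so that the device-creation pass has already run by the time those sections arrive.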
----- End forwarded message -----