From: Anthony Liguori
Date: Tue, 22 Dec 2009 10:16:42 -0600
Subject: Re: [Qemu-devel] Re: [SeaBIOS] [PATCH 0/8] option rom loading overhaul.
Message-ID: <4B30F0EA.508@codemonkey.ws>
In-Reply-To: <200912221554.40571.paul@codesourcery.com>
To: Paul Brook
Cc: qemu-devel@nongnu.org, Avi Kivity

On 12/22/2009 09:54 AM, Paul Brook wrote:
>>> Ram allocations should be associated with a device. The VMState stuff
>>> should make this fairly straightforward.
>>
>> Right, but for the sake of simplicity, you don't want to treat that ram
>> any differently than main ram wrt live migration. That's why I proposed
>> adding a context id for each ram region. That would allow us to use
>> something like the qdev name + id as the context id for a ram chunk, to
>> get that association while still doing live ram migration of the memory.
>
> IMO the best way to do this is via the existing VMState machinery.
> We've already matched up DeviceStates, so this gets us a handy unique
> identifier for every ram block. For system memory we can add a dummy
> device. Medium term we're probably going to want this anyway.

Okay, I understand and agree.

I think the way this would work is that we would have a ram_addr type for
VMState that would describe an actual ram allocation and its size.
qemu_ram_alloc() would not need to take a context. Ram live migration
would walk the list of registered VMState entries searching for anything
that had a ram_addr type, and would add that to the ram migration.

For system ram, we need dummy devices. I think we probably ought to
integrate VMState into qdev first, though. I think that makes everything
a bit more manageable.
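To make that concrete, here's a rough sketch of the shape I have in
mind. All the names below (vmstate_field, FIELD_RAM, ram_migration_walk,
and so on) are invented for illustration -- this isn't the existing
VMState API, and malloc() just stands in for qemu_ram_alloc():

/*
 * Hypothetical sketch only.  Each device's VMState-like field table
 * can contain a RAM field (allocation pointer + size), and ram live
 * migration walks the registered entries instead of relying on the
 * order of allocations.
 */
#include <stdio.h>
#include <stdlib.h>

enum field_kind { FIELD_SCALAR, FIELD_RAM };

struct vmstate_field {
    const char *name;
    enum field_kind kind;
    void *ptr;        /* for FIELD_RAM: start of the allocation */
    size_t size;      /* for FIELD_RAM: length of the allocation */
};

struct vmstate_entry {
    const char *dev_name;   /* qdev name, e.g. "cirrus-vga" */
    int instance_id;        /* qdev instance id, disambiguates duplicates */
    struct vmstate_field *fields;
    int nfields;
    struct vmstate_entry *next;
};

static struct vmstate_entry *vmstate_list;

static void vmstate_register(struct vmstate_entry *e)
{
    e->next = vmstate_list;
    vmstate_list = e;
}

/*
 * Walk every registered entry; any FIELD_RAM field is added to the
 * ram migration stream under a stable "dev_name/instance/field" id,
 * so the destination matches blocks by name rather than by the
 * (unstable) order of qemu_ram_alloc() calls.
 */
static void ram_migration_walk(void)
{
    struct vmstate_entry *e;
    for (e = vmstate_list; e; e = e->next) {
        for (int i = 0; i < e->nfields; i++) {
            struct vmstate_field *f = &e->fields[i];
            if (f->kind == FIELD_RAM) {
                printf("migrate block %s/%d/%s (%zu bytes)\n",
                       e->dev_name, e->instance_id, f->name, f->size);
            }
        }
    }
}

int main(void)
{
    void *vram = malloc(8 << 20);  /* stands in for qemu_ram_alloc() */
    struct vmstate_field vga_fields[] = {
        { "vram", FIELD_RAM, vram, 8 << 20 },
    };
    struct vmstate_entry vga = { "cirrus-vga", 0, vga_fields, 1, NULL };

    /* the "dummy device" holding system ram, as discussed above */
    void *sysram = malloc(16 << 20);
    struct vmstate_field sys_fields[] = {
        { "ram", FIELD_RAM, sysram, 16 << 20 },
    };
    struct vmstate_entry sys = { "system", 0, sys_fields, 1, NULL };

    vmstate_register(&vga);
    vmstate_register(&sys);
    ram_migration_walk();

    free(vram);
    free(sysram);
    return 0;
}

The point is just that the block identifier comes from the device, not
from allocation order, which is what fixes the problem below.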
>>> Guest address space mappings are a completely separate issue. The
>>> device should be migrating the mappings (directly or via a PCI BAR) as
>>> part of its state migration. The ram regions might not be mapped into
>>> guest address space at all.
>>
>> We don't migrate guest address space memory today. We migrate anything
>> that's qemu_ram_alloc()'d. The big problem we have, though, is that we
>> don't have any real association between the qemu_ram_alloc() results
>> and the context of the allocation. We assume the order of these
>> allocations is fixed, and that's entirely wrong.
>
> The nice thing about the VMState approach is that the device doesn't
> know or care how the migration occurs. For bonus points it leads fairly
> directly to an object-based mapping API, so we can change the
> implementation or migrate the ram to a different location without
> disturbing the device.

Yeah, I like it.

Regards,

Anthony Liguori

> Paul