From: Anthony Liguori
Date: Tue, 29 Jul 2008 16:48:17 -0500
Subject: Re: [Xen-devel] Re: [Qemu-devel] [PATCH 1/7] xen: groundwork for xen support
To: Gerd Hoffmann
Cc: Ian Jackson, xen-devel@lists.xensource.com, qemu-devel@nongnu.org, Samuel Thibault

Gerd Hoffmann wrote:
> Anthony Liguori wrote:
>> map-cache is one of those things I don't expect to ever get merged.
>
> And the need for that will go away over time IMHO. If your Dom0 is
> 64-bit you have no address space pressure and thus no need for
> mapcache. Given that we have 32-on-64 and non-PAE Xen is deprecated
> anyway, there is almost no reason not to run 64-bit Xen and Dom0.

Right.

>> Ideally, I'd like to see Xen/KVM integration look like this:
>>
>> 1) Xen support is detected in configure (libxc et al) and
>>    conditionally enabled.
>> 2) When running on bare metal, detect whether KVM acceleration is
>>    available; also detect whether kqemu acceleration is available.
>> 3) When running under Xen, detect that Xen is available, and create
>>    a full virt domain.
>> 4) If a user specifies a type=xen device, it should Just Work
>>    provided you are in a Xen environment (erroring appropriately).
>> 5) A user can explicitly specify -M xenpv. If running under Xen,
>>    this would create a Xen PV guest. If running on bare metal,
>>    Xenner would be used to present a Xen shim layer. This should
>>    work with or without KVM acceleration. Bonus points if it works
>>    with kqemu too.
>
> I'm surprised how well you can read my mind.

Scary, huh? :-)

> Yes, I wanna have the bonus points ;)
>
> There are two additional points you didn't see though:
>
> For (3) and (5), qemu should support two modes: First, attach to an
> existing domain. This is how Xen works today. And we want to get rid
> of the qemu-dm fork, right? Second, optionally also create the
> domain, like Xenite.

I have mixed feelings about this, but I don't think there's a way to
support stub domains without this functionality. Obviously, when you
run QEMU within a stub domain, the guest domain has already been
created. Well, maybe it doesn't have to be that way, but it seems the
most reasonable way to do it.
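For concreteness, the detection in points (2) and (3) above could boil
down to probing a handful of well-known interfaces. A minimal sketch in
C follows: the paths (/dev/kvm, /dev/kqemu, /proc/xen/capabilities) are
the usual Linux interfaces for each, while the function names and the
fallback order are illustrative assumptions, not existing QEMU code.

#include <stdio.h>
#include <unistd.h>

/* KVM exposes /dev/kvm once the kernel modules are loaded. */
static int kvm_available(void)
{
    return access("/dev/kvm", R_OK | W_OK) == 0;
}

/* kqemu likewise exposes a character device when loaded. */
static int kqemu_available(void)
{
    return access("/dev/kqemu", R_OK | W_OK) == 0;
}

/* Inside a Xen domain, xenfs provides /proc/xen/capabilities. */
static int running_under_xen(void)
{
    return access("/proc/xen/capabilities", F_OK) == 0;
}

int main(void)
{
    if (running_under_xen())
        printf("under Xen: create a full virt domain via libxc\n");
    else if (kvm_available())
        printf("bare metal: enable KVM acceleration\n");
    else if (kqemu_available())
        printf("bare metal: enable kqemu acceleration\n");
    else
        printf("bare metal: plain emulation\n");
    return 0;
}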
> (4) should also just work when you are *not* in a Xen environment [1]

I considered suggesting that but figured it would be too much. I should
have figured it was already working in some form :-)

So how does the upstream Xen community feel about all of this? Is this
a reasonable approach to merging Xen functionality into QEMU?

Regards,

Anthony Liguori

> cheers,
>   Gerd
>
> [1] It actually does, btw. Code isn't ready yet for merging.
>     Stay tuned ;)