xen-devel.lists.xenproject.org archive mirror
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Keir Fraser <keir@xen.org>
Subject: Re: HYBRID: PV in HVM container
Date: Fri, 29 Jul 2011 14:00:20 -0400	[thread overview]
Message-ID: <20110729180020.GA24360@dumpdata.com> (raw)
In-Reply-To: <alpine.DEB.2.00.1107291855100.12963@kaball-desktop>

On Fri, Jul 29, 2011 at 07:00:07PM +0100, Stefano Stabellini wrote:
> On Fri, 29 Jul 2011, Konrad Rzeszutek Wilk wrote:
> > > Besides if we have HVM dom0, we can enable
> > > XENFEAT_auto_translated_physmap and EPT and have the same level of
> > > performances of a PV on HVM guest. Moreover since we wouldn't be using
> > > the mmu pvops anymore we could drop them completely: that would greatly
> > 
> > Sure. It also means you MUST have an IOMMU in the box.
> 
> Why?

For HVM dom0s. But I think when you say HVM here, you mean using
PV with the hypervisor's code for managing page-tables - EPT/NPT/HAP.

So PV+HAP = Stefano's HVM :-)

> We can still remap interrupts into event channels.
> Maybe you mean VMX?
> 
> 
> > > simplify the Xen maintenance in the Linux kernel as well as gain back
> > > some love from the x86 maintainers :)
> > > 
> > > The way I see it, normal Linux guests would be PV on HVM guests, but we
> > > still need to do something about dom0.
> > > This work would make dom0 exactly like PV on HVM guests apart from
> > > the boot sequence: dom0 would still boot from xen_start_kernel,
> > > everything else would be pretty much the same.
> > 
> > Ah, so not HVM exactly (you would only use the EPT/NPT/RVI/HAP for
> > pagetables).. and PV for startup, spinlock, timers, debug, CPU, and
> > backends. I thought sticking in the HVM container in PV that Mukesh
> > made work would also benefit.
> 
> Yes for startup, spinlock, timers and backends. I would use HVM for cpu
> operations too (no need for pv_cpu_ops.write_gdt_entry anymore for
> example).

OK, so a SVM/VMX setup is required.
> 
> 
> > Or just come back to the idea of "real" HVM device driver domains
> > and have the PV dom0 be a light one loading the rest. But the setup of
> > it is just so complex.. And the PV dom0 needs to deal with the PCI backend
> > xenstore, and able to comprehend ACPI _PRT... and then launch the "device
> > driver" Dom0, which at its simplest form would have all of the devices
> > passed in to it.
> > 
> > So four payloads: PV dom0, PV dom0 initrd, HVM dom0, HVM dom0 initrd :-)
> > Ok, that is too cumbersome. Maybe ingest the PV dom0+initrd in the Xen
> > hypervisor binary.. I should stop here.
> 
> The goal of splitting up dom0 into multiple management domains is surely
> a worthy one, no matter if the domains are PV or HVM or PV on HVM, but
> yeah, the setup is hard. I hope that we'll be able to simplify it in
> the near future, maybe after the switchover to the new qemu and seabios
> is completed.


Thread overview: 30+ messages
2011-06-27 19:24 HYBRID: PV in HVM container Mukesh Rathor
2011-06-27 19:36 ` Keir Fraser
2011-06-28  1:51   ` Mukesh Rathor
2011-06-28  7:46     ` Keir Fraser
2011-06-28  8:30       ` Ian Campbell
2011-06-28  8:35         ` Keir Fraser
2011-06-28  8:49           ` Ian Campbell
2011-06-28 10:46       ` Stefano Stabellini
2011-06-28 10:50         ` Ian Campbell
2011-06-28 18:32       ` Mukesh Rathor
2011-06-28 18:39         ` Keir Fraser
2011-06-28  8:31 ` Ian Campbell
2011-06-28 17:56   ` Mukesh Rathor
2011-07-01  1:54 ` Mukesh Rathor
2011-07-09  1:53   ` Mukesh Rathor
2011-07-09  7:35     ` Keir Fraser
2011-07-28  1:58     ` Mukesh Rathor
2011-07-28 11:34       ` Stefano Stabellini
2011-07-29 15:48         ` Konrad Rzeszutek Wilk
2011-07-29 16:41           ` Stefano Stabellini
2011-07-29 17:28             ` Konrad Rzeszutek Wilk
2011-07-29 18:00               ` Stefano Stabellini
2011-07-29 18:00                 ` Konrad Rzeszutek Wilk [this message]
2011-07-29 18:16                   ` Stefano Stabellini
2011-08-09  8:54         ` Ian Campbell
2011-08-17 19:27           ` Jeremy Fitzhardinge
2011-07-29 15:43       ` Konrad Rzeszutek Wilk
2011-11-17 23:38       ` Mukesh Rathor
2011-11-18 12:21         ` Stefano Stabellini
2011-11-19  0:17           ` Mukesh Rathor
