* [PATCH 0/15] Nested Virtualization: Overview
From: Christoph Egger @ 2010-06-03 16:02 UTC
  To: xen-devel


Hi!

This patch series brings nested virtualization to Xen.
This is the second version of the series, with many improvements
and fixes thanks to Tim Deegan's review.

The patch series:

patch 01: Add the nestedhvm guest config option to the tools.
                  This is the only patch that touches the tools
                  (see the config sketch after this list).
patch 02: Change local_event_delivery_* to take a vcpu argument.
                  This prevents spurious Xen crashes on guest shutdown/destroy
                  with nestedhvm enabled.
patch 03: Add data structures for nested virtualization.
patch 04: Add the nestedhvm function hooks described in XenNestedHVM.pdf.
patch 05: The heart of nested virtualization.
patch 06: Allow switching to paged real mode during vmrun emulation.
                  Emulate cr0 and cr4 when the guest does not intercept them
                  (e.g. Hyper-V/Windows 7).
patch 07: Allow the guest to enable SVM in EFER.
patch 08: Propagate the SVM CPUID feature bits to the guest.
patch 09: Emulate the MSRs needed for nested virtualization.
patch 10: Handle interrupts (generic part).
patch 11: SVM-specific implementation of nested virtualization.
patch 12: Handle interrupts (SVM-specific part).
patch 13: The piece of code that actually turns nested virtualization on.
                  It uses HVM_PARAM_* this time.
patch 14: Change the p2m infrastructure to operate per-p2m instead
                  of per-domain. Combined with patch 15, this allows running
                  a nested guest with hap-on-hap.
patch 15: Handle nested page faults to enable hap-on-hap, and handle
                  nested guest page-table walks to emulate instructions
                  the guest does not intercept (e.g. WBINVD with Windows 7).
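
For illustration, a guest config fragment using the new option; the
'nestedhvm' spelling and value here are assumptions for the sketch, not
final syntax:

    # Guest config sketch (option name assumed for illustration)
    builder   = "hvm"
    nestedhvm = 1    # let the guest use SVM to run its own hypervisor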




* Re: [PATCH 0/15] Nested Virtualization: Overview
From: Keir Fraser @ 2010-06-03 17:16 UTC
  To: Christoph Egger, xen-devel@lists.xensource.com; +Cc: Tim Deegan

Looks to me like this still unnecessarily touches generic hvm code quite a
lot. Take the CPUID patch for example. No reason for that to touch
hvm_cpuid() when it properly belongs in the SVM-specific CPUID intercept
routine. If you're still trying to peddle SVM-on-HVM -- remember no one else
thought it was a good idea or wanted it. It won't fly.

 -- Keir

On 03/06/2010 17:02, "Christoph Egger" <Christoph.Egger@amd.com> wrote:

> [...]


* Re: [PATCH 0/15] Nested Virtualization: Overview
From: Keir Fraser @ 2010-06-03 17:52 UTC
  To: Christoph Egger, xen-devel@lists.xensource.com; +Cc: Tim Deegan

On 03/06/2010 18:16, "Keir Fraser" <keir.fraser@eu.citrix.com> wrote:

> Looks to me like this still unnecessarily touches generic hvm code quite a
> lot. Take the CPUID patch for example. No reason for that to touch
> hvm_cpuid() when it properly belongs in the SVM-specific CPUID intercept
> routine. If you're still trying to peddle SVM-on-HVM -- remember no one else
> thought it was a good idea or wanted it. It won't fly.

Actually the CPUID patch was a bad one to pick out: that logic belongs in
xc_cpuid_x86.c, in the AMD-specific function therein, and it can work out
what to do by reading the appropriate HVM_PARAM info from Xen. Which
is all nice and neat.
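
A minimal sketch of what I mean, assuming a hypothetical
HVM_PARAM_NESTEDHVM parameter (the parameter name and the exact
surrounding function are assumptions; xc_get_hvm_param() is the
existing libxc call):

    /* Sketch for xc_cpuid_x86.c: hide the SVM feature bit unless
     * nested HVM is enabled for the domain.  HVM_PARAM_NESTEDHVM
     * is an assumed name for the new parameter. */
    #include <xenctrl.h>

    static void amd_nestedhvm_cpuid_fixup(int xc_handle, domid_t domid,
                                          const unsigned int *input,
                                          unsigned int *regs)
    {
        unsigned long nested = 0;

        if ( input[0] == 0x80000001 )
        {
            xc_get_hvm_param(xc_handle, domid, HVM_PARAM_NESTEDHVM, &nested);
            if ( !nested )
                regs[2] &= ~(1U << 2); /* clear SVM feature bit (ECX bit 2) */
        }
    }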

The other patch I actually read was #10, which modifies
hvm_interrupt_blocked(). Tbh, perhaps that one should stand: the alternative
is an SVM/VMX-specific function that calls out to the generic hvm function to
help it out (as we do with CPUID). Maybe it's not worth it. It'd sure
be clearer if there were some nestedsvm_xxx() functions so we could read
things like:
  if ( nestedsvm_enabled(v->domain) && !nestedsvm_gif_isset(v) )
I mean, in this case, the whole concept of GIF is SVM-specific. Maybe it's
convenient to refer to it from an HVM-generic function in this context, but I
think the inherent SVM-ness of it should at least be clear.
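
Roughly this shape, say (a sketch only; the helper names follow the
suggestion above, and the fields and the hvm_intblk_svm_gif enum value
are assumptions):

    /* Sketch: wrap the SVM-specific GIF state in nestedsvm_* helpers
     * so its SVM-ness stays visible in generic code.  Field names
     * are assumed for illustration. */
    static inline bool_t nestedsvm_enabled(const struct domain *d)
    {
        return d->arch.hvm_domain.nestedhvm_enabled;  /* assumed field */
    }

    static inline bool_t nestedsvm_gif_isset(const struct vcpu *v)
    {
        return v->arch.hvm_svm.nested_gif;            /* assumed field */
    }

    /* Then hvm_interrupt_blocked() could read: */
    if ( nestedsvm_enabled(v->domain) && !nestedsvm_gif_isset(v) )
        return hvm_intblk_svm_gif;                    /* assumed enum value */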

Perhaps the rest isn't too bad really. Things like patch #4 *might* be
acceptable if Intel are going to use the hooks for their VMX-on-VMX work. If
not, I suspect some more culling of changes to generic code will be in
order.

 -- Keir
 
> [...]


* Re: [PATCH 0/15] Nested Virtualization: Overview
From: Christoph Egger @ 2010-06-04 09:44 UTC
  To: xen-devel; +Cc: Tim Deegan, Keir Fraser

On Thursday 03 June 2010 19:52:36 Keir Fraser wrote:
> On 03/06/2010 18:16, "Keir Fraser" <keir.fraser@eu.citrix.com> wrote:
> > Looks to me like this still unnecessarily touches generic hvm code quite
> > a lot. Take the CPUID patch for example. No reason for that to touch
> > hvm_cpuid() when it properly belongs in the SVM-specific CPUID intercept
> > routine. If you're still trying to peddle SVM-on-HVM -- remember no one
> > else thought it was a good idea or wanted it. It won't fly.
>
> Actually the CPUID patch was a bad one to pick out: that logic belongs in
> xc_cpuid_x86.c, in the AMD-specific function therein, and it can work out
> what to do by reading the appropriate HVM_PARAM info from Xen. Which
> is all nice and neat.

Ok, will do so.

> The other patch I actually read was #10, which modifies
> hvm_interrupt_blocked(). Tbh, perhaps that one should stand: the
> alternative is an SVM/VMX-specific function that calls out to the generic
> hvm function to help it out (as we do with CPUID). Maybe it's not
> worth it. It'd sure be clearer if there were some nestedsvm_xxx() functions
> so we could read things like:
>   if ( nestedsvm_enabled(v->domain) && !nestedsvm_gif_isset(v) )
> I mean, in this case, the whole concept of GIF is SVM-specific. Maybe it's
> convenient to refer to it from an HVM-generic function in this context, but
> I think the inherent SVM-ness of it should at least be clear.
>
> Perhaps the rest isn't too bad really. Things like patch #4 *might* be
> acceptable if Intel are going to use the hooks for their VMX-on-VMX work.

They should, or we will end up with two nested virtualization
implementations in Xen, each with its own tweaks and bugs.

> If not, I suspect some more culling of changes to generic code will be in
> order.

Patches #3 and #4 need discussion with Intel now, and #14 and #15 later.

In patch #3,
I must replace 'struct vmcb_struct *nh_vmcb' and 'uint64_t nh_vmcbaddr'
with 'void *nh_vm', 'size_t nh_vmsize' and 'uint64_t nh_vmaddr'.

Further, I need to fold some SVM-specific fields into this shape:

          union {
              struct {
                  uint32_t nh_cr_intercepts;
                  uint32_t nh_dr_intercepts;
                  uint32_t nh_exception_intercepts;
                  uint32_t nh_general1_intercepts;
                  uint32_t nh_general2_intercepts;
                  lbrctrl_t nh_lbr_control;
              } svm;
              struct {
                  /* [Intel: what do you need here?] */
              } vmx;
          };

And we need some accessor functions to these fields for use in generic
code.
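
For example, something along these lines (a sketch; the struct and
field names are assumptions about the final layout):

    /* Sketch: an accessor that keeps generic code ignorant of which
     * union arm is active.  'struct nestedhvm' and its layout are
     * assumed for illustration. */
    static inline uint32_t nhvm_cr_intercepts(const struct nestedhvm *nh)
    {
        /* SVM arm for now; the vmx arm gets its own accessor once
         * Intel define their fields. */
        return nh->u.svm.nh_cr_intercepts;
    }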

In patch #4,
Intel should use the hooks prepare4vmrun and prepare4vmexit, where
they cast the above 'nh_vm' to their VMCS.
SVM code will cast it to a VMCB.
The hooks 'vmload' and 'vmsave' should be moved into the SVM code completely.
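
Concretely, something like this (a sketch; the ops structure and field
names are assumptions):

    /* Sketch of the hook shape; names assumed for illustration. */
    struct nestedhvm_ops {
        int (*prepare4vmrun)(struct vcpu *v);
        int (*prepare4vmexit)(struct vcpu *v);
    };

    static int svm_prepare4vmrun(struct vcpu *v)
    {
        /* SVM interprets the generic 'nh_vm' buffer as its vmcb;
         * VMX would interpret the same pointer as its VMCS instead. */
        struct vmcb_struct *vmcb = v->arch.nestedhvm.nh_vm;
        /* ... merge the L1 guest's intercepts into the L2 vmcb ... */
        return 0;
    }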

Patch #5 needs adjustments to work with the above changes.

All this, combined with your CPUID proposal, will turn my implementation
from SVM-on-HVM into HVM-on-HVM.

Patches #14 and #15 need discussion with Intel due to necessary adaptations
in p2m-ept.c to make shadow-on-hap and hap-on-hap work.

@Tim: In your last review you asked about the use of MAX_NESTEDP2M.
Actually, this is a hack. What I really need in Xen is a generic pool
implementation like this 
http://netbsd.gw.com/cgi-bin/man-cgi?pool+9+NetBSD-current
and this
http://netbsd.gw.com/cgi-bin/man-cgi?pool_cache+9+NetBSD-current
In NetBSD, pool_cache(9) is implemented on top of pool(9).

IMO, the xmalloc/xfree, machine-check, and cpupool code in Xen should
also use pool_cache(9) instead of having their own versions.
Can we take the pool/pool_cache code from NetBSD?
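
To make the request concrete, the kind of interface I have in mind
(a hypothetical Xen-side sketch, not the NetBSD code; all names are
placeholders):

    /* Hypothetical sketch of a fixed-size object pool, loosely
     * modelled on NetBSD's pool(9). */
    struct xen_pool {
        size_t item_size;       /* size of each object */
        unsigned int nr_free;   /* objects currently on the free list */
        void *free_list;        /* singly linked list of free objects */
    };

    void  xen_pool_init(struct xen_pool *pp, size_t item_size);
    void *xen_pool_get(struct xen_pool *pp);  /* NULL if exhausted */
    void  xen_pool_put(struct xen_pool *pp, void *item);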

Christoph




* Re: [PATCH 0/15] Nested Virtualization: Overview
From: Keir Fraser @ 2010-06-04 10:00 UTC
  To: Christoph Egger, xen-devel@lists.xensource.com; +Cc: Tim Deegan

On 04/06/2010 10:44, "Christoph Egger" <Christoph.Egger@amd.com> wrote:

> @Tim: In your last review you asked about the use of MAX_NESTEDP2M.
> Actually, this is a hack. What I really need in Xen is a generic pool
> implementation like this
> http://netbsd.gw.com/cgi-bin/man-cgi?pool+9+NetBSD-current
> and this
> http://netbsd.gw.com/cgi-bin/man-cgi?pool_cache+9+NetBSD-current
> In NetBSD, pool_cache(9) is implemented on top of pool(9).
> 
> IMO, the xmalloc/xfree, machine-check, and cpupool code in Xen should
> also use pool_cache(9) instead of having their own versions.
> Can we take the pool/pool_cache code from NetBSD?

I'd hope we can really manage without such a mechanism. At least, we'd need
a darn good reason for it, and to have rejected simpler alternative
solutions. I know some OSes have such a concept so that paging code doesn't
deadlock. I can't immediately guess why we'd need it in Xen.

And does pool_cache have much relationship to cpupool, except that both have
"pool" in their name? :-)

 -- Keir

