* XSA-60 - how to get back to a sane state
From: Jan Beulich @ 2013-12-02 14:28 UTC
  To: xen-devel
  Cc: Jinsong Liu, Keir Fraser, George Dunlap, Andrew Cooper,
	Zhenzhong Duan, Donald D Dugger, Jun Nakajima


All,

Jinsong's patches have been in for nearly a month now, but are not
in a shape that would make releasing them in 4.4 or backporting them
to the older trees desirable, so we need to come to a conclusion on
which way to go. Currently it looks like we have three options, but
of course I'll be happy to see other (better!) ones proposed.

1) Stay with what we have.

2) Revert 86d60e85 ("VMX: flush cache when vmentry back to UC
guest") in its entirety plus, perhaps, the change 62652c00 ("VMX:
fix cr0.cd handling") did to vmx_ctxt_switch_to().

3) Apply the attached patch that Andrew and I have been putting
together, with the caveat that it's still incomplete (see below).

The latter two are based on the observation that the amount of
cache flushing we do with what is in the master tree right now is
more than what we did prior to that patch series but still
insufficient. Hence the revert would get us back to the earlier
state (and obviously eliminate the performance problems that
were observed when doing too eager flushing), whereas
applying the extra 5th patch would get us closer to a proper
solution.

The problem with that new patch is that it's - as said - still
incomplete: setting ->arch.hvm_vcpu.hypervisor_access_uc_hvm_memory
for all writes through map_domain_page_global() mappings would get
rather ugly (see the sketch after the list below):
- guest_walk_tables() (accumulating set_ad_bits() results)
- all writes through vcpu_info()
- event_fifo.c would need this scattered around
- hvm_load_segment_selector()'s setting of the accessed bit
- hvm_task_switch()'s handling of the busy bit
- all the nested HVM code's writes through ->nv_vvmcx
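
Each of those sites would need a check along these lines (a
hypothetical helper - the name is made up for illustration - mirroring
what the attached patch does in unmap_domain_page()):

    static inline void note_hypervisor_uc_access(struct vcpu *v)
    {
        if ( unlikely(has_hvm_container_vcpu(v) &&
                      v->arch.hvm_vcpu.cache_mode == NO_FILL_CACHE_MODE) )
            v->arch.hvm_vcpu.hypervisor_access_uc_hvm_memory = 1;
    }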

Plus the performance effects of it aren't really known yet (all we
do know is that the too eager "flush always prior to VM entry" is
causing unacceptable stalls).
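
For reference, consuming the flag would then amount to a single
deferred flush, roughly like this (a sketch, assuming the consumer
checks and clears the flag right before entering the guest):

    /* Sketch: on the VMX entry path, consume the flag with one
     * deferred flush just before re-entering the guest. */
    struct vcpu *curr = current;

    if ( unlikely(curr->arch.hvm_vcpu.hypervisor_access_uc_hvm_memory) )
    {
        curr->arch.hvm_vcpu.hypervisor_access_uc_hvm_memory = 0;
        wbinvd();
    }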

OTOH we have had no reports that the lack of proper cache
flushing actually caused any problems, and hence reverting the
partial flushing that was introduced isn't likely to push new
problems onto people. Yet if such a problem were ever to surface,
it could be expected to be very hard to diagnose.

Please, everyone having an opinion here - voice it.

Thanks, Jan


[-- Attachment #2: xsa60-5 --]
[-- Type: application/octet-stream, Size: 3415 bytes --]

VMX: corrections to XSA-60 fix

c/s 86d60e855fe118df0dbdf67b67b1a0ec8fdb9f0d does not cause a wbinvd() in
certain cases, such as writing the hypercall page, which performs:

    hypercall_page = __map_domain_page(page);
    hypercall_page_initialise(d, hypercall_page);
    unmap_domain_page(hypercall_page);

and hence bypasses the hooks in __hvm_{copy,clear}() looking for hypervisor
modification of domain memory.

Move the hook into unmap_domain_page() so it catches more domain memory
modification points.

Furthermore, the currently unconditional wbinvd() in vmx_ctxt_switch_to()
should be deferred until vmentry.  This way, a ctxt_switch_to() followed by an
iteration of a hypercall continuation won't hit Xen with two wbinvd()s.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Fix the changes to unmap_domain_page(), addressing a boot-time
dereference of the not yet properly set "current".

Also set the flag upon completion of I/O requests handed to the device
model.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -166,19 +166,21 @@ void *map_domain_page(unsigned long mfn)
 void unmap_domain_page(const void *ptr)
 {
     unsigned int idx;
-    struct vcpu *v;
+    struct vcpu *v = mapcache_current_vcpu();
     struct mapcache_domain *dcache;
     unsigned long va = (unsigned long)ptr, mfn, flags;
     struct vcpu_maphash_entry *hashent;
 
+    if ( unlikely(v && has_hvm_container_vcpu(v) &&
+                  v->arch.hvm_vcpu.cache_mode == NO_FILL_CACHE_MODE) )
+        v->arch.hvm_vcpu.hypervisor_access_uc_hvm_memory = 1;
+
     if ( va >= DIRECTMAP_VIRT_START )
         return;
 
     ASSERT(va >= MAPCACHE_VIRT_START && va < MAPCACHE_VIRT_END);
 
-    v = mapcache_current_vcpu();
     ASSERT(v && is_pv_vcpu(v));
-
     dcache = &v->domain->arch.pv_domain.mapcache;
     ASSERT(dcache->inuse);
 
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2566,9 +2566,6 @@ static enum hvm_copy_result __hvm_copy(
         return HVMCOPY_unhandleable;
 #endif
 
-    if ( unlikely(curr->arch.hvm_vcpu.cache_mode == NO_FILL_CACHE_MODE) )
-        curr->arch.hvm_vcpu.hypervisor_access_uc_hvm_memory = 1;
-
     while ( todo > 0 )
     {
         count = min_t(int, PAGE_SIZE - (addr & ~PAGE_MASK), todo);
@@ -2680,9 +2677,6 @@ static enum hvm_copy_result __hvm_clear(
         return HVMCOPY_unhandleable;
 #endif
 
-    if ( unlikely(curr->arch.hvm_vcpu.cache_mode == NO_FILL_CACHE_MODE) )
-        curr->arch.hvm_vcpu.hypervisor_access_uc_hvm_memory = 1;
-
     while ( todo > 0 )
     {
         count = min_t(int, PAGE_SIZE - (addr & ~PAGE_MASK), todo);
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -303,6 +303,9 @@ void hvm_io_assist(ioreq_t *p)
     {
         msix_write_completion(curr);
         vcpu_end_shutdown_deferral(curr);
+
+        if ( unlikely(curr->arch.hvm_vcpu.cache_mode == NO_FILL_CACHE_MODE) )
+            curr->arch.hvm_vcpu.hypervisor_access_uc_hvm_memory = 1;
     }
 }
 
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -645,7 +645,7 @@ static void vmx_ctxt_switch_to(struct vc
 
     /* For guest cr0.cd setting, do not use potentially polluted cache */
     if ( unlikely(v->arch.hvm_vcpu.cache_mode == NO_FILL_CACHE_MODE) )
-        wbinvd();
+        v->arch.hvm_vcpu.hypervisor_access_uc_hvm_memory = 1;
 
     vmx_restore_guest_msrs(v);
     vmx_restore_dr(v);



* Re: XSA-60 - how to get back to a sane state
From: George Dunlap @ 2013-12-02 19:43 UTC
  To: Jan Beulich, xen-devel
  Cc: Jinsong Liu, Keir Fraser, Andrew Cooper, Zhenzhong Duan,
	Donald D Dugger, Jun Nakajima

On 12/02/2013 02:28 PM, Jan Beulich wrote:
> All,
>
> Jinsong's patches having been in for nearly a month now, but not
> being in a shape that would make releasing in 4.4 or backporting to
> the older trees desirable, we need to come to a conclusion on
> which way to go. Currently it looks like we have three options, but
> of course I'll be happy to see other (better!) ones proposed.
>
> 1) Stay with what we have.
>
> 2) Revert 86d60e85 ("VMX: flush cache when vmentry back to UC
> guest") in its entirety plus, perhaps, the change 62652c00 ("VMX:
> fix cr0.cd handling") did to vmx_ctxt_switch_to().
>
> 3) Apply the attached patch that Andrew and I have been putting
> together, with the caveat that it's still incomplete (see below).
>
> The latter two are based on the observation that the amount of
> cache flushing we do with what is in the master tree right now is
> more than what we did prior to that patch series but still
> insufficient. Hence the revert would get us back to the earlier
> state (and obviously eliminate the performance problems that
> were observed when doing too eager flushing), whereas
> applying the extra 5th patch would get us closer to a proper
> solution.

What's missing is a description of the pros and cons of 1 and 2.  Do you 
have any links to threads describing the problem?

  -George


* Re: XSA-60 - how to get back to a sane state
From: Andrew Cooper @ 2013-12-03  2:21 UTC
  To: George Dunlap, Jan Beulich, xen-devel
  Cc: Jinsong Liu, Keir Fraser, Zhenzhong Duan, Donald D Dugger,
	Jun Nakajima

On 02/12/2013 19:43, George Dunlap wrote:
> On 12/02/2013 02:28 PM, Jan Beulich wrote:
>> All,
>>
>> Jinsong's patches having been in for nearly a month now, but not
>> being in a shape that would make releasing in 4.4 or backporting to
>> the older trees desirable, we need to come to a conclusion on
>> which way to go. Currently it looks like we have three options, but
>> of course I'll be happy to see other (better!) ones proposed.
>>
>> 1) Stay with what we have.
>>
>> 2) Revert 86d60e85 ("VMX: flush cache when vmentry back to UC
>> guest") in its entirety plus, perhaps, the change 62652c00 ("VMX:
>> fix cr0.cd handling") did to vmx_ctxt_switch_to().
>>
>> 3) Apply the attached patch that Andrew and I have been putting
>> together, with the caveat that it's still incomplete (see below).
>>
>> The latter two are based on the observation that the amount of
>> cache flushing we do with what is in the master tree right now is
>> more than what we did prior to that patch series but still
>> insufficient. Hence the revert would get us back to the earlier
>> state (and obviously eliminate the performance problems that
>> were observed when doing too eager flushing), whereas
>> applying the extra 5th patch would get us closer to a proper
>> solution.
>
> What's missing is a description of the pros and cons of 1 and 2.  Do
> you have any links to threads describing the problem?
>
>  -George
>

1) has the basic XSA-60 fixes and some wbinvd()s, which are a
significant performance issue and insufficient to completely fix the
problem at hand.  As a result, 1) is the worst possible option to stay
with as far as Xen is concerned (irrespective of the upcoming 4.4 release).

2) will take us back to the basic XSA-60 fix with none of the wbinvd()s,
which fixes the security issue and is no worse than before from a
correctness-in-the-case-of-uncacheable-HVM-domains point of view.

3) as-is is still insufficient to fix the problem in 1), and would
currently result in a further performance regression.

FWIW, my vote is for option 2), which will ease the current performance
regression while allowing us time to come up with a proper
solution to the pre-existing problem of Xen and Qemu mappings of a UC
domain's memory.

~Andrew


* Re: XSA-60 - how to get back to a sane state
From: Liu, Jinsong @ 2013-12-03  3:06 UTC
  To: Andrew Cooper, George Dunlap, Jan Beulich, xen-devel
  Cc: Zhenzhong Duan, Keir Fraser, Nakajima, Jun, Dugger, Donald D

Andrew Cooper wrote:
> On 02/12/2013 19:43, George Dunlap wrote:
>> On 12/02/2013 02:28 PM, Jan Beulich wrote:
>>> All,
>>> 
>>> Jinsong's patches having been in for nearly a month now, but not
>>> being in a shape that would make releasing in 4.4 or backporting to
>>> the older trees desirable, we need to come to a conclusion on
>>> which way to go. Currently it looks like we have three options, but
>>> of course I'll be happy to see other (better!) ones proposed.
>>> 
>>> 1) Stay with what we have.
>>> 
>>> 2) Revert 86d60e85 ("VMX: flush cache when vmentry back to UC
>>> guest") in its entirety plus, perhaps, the change 62652c00 ("VMX:
>>> fix cr0.cd handling") did to vmx_ctxt_switch_to().
>>> 
>>> 3) Apply the attached patch that Andrew and I have been putting
>>> together, with the caveat that it's still incomplete (see below).
>>> 
>>> The latter two are based on the observation that the amount of
>>> cache flushing we do with what is in the master tree right now is
>>> more than what we did prior to that patch series but still
>>> insufficient. Hence the revert would get us back to the earlier
>>> state (and obviously eliminate the performance problems that
>>> were observed when doing too eager flushing), whereas
>>> applying the extra 5th patch would get us closer to a proper
>>> solution.
>> 
>> What's missing is a description of the pros and cons of 1 and 2.  Do
>> you have any links to threads describing the problem?
>> 
>>  -George
>> 
> 
> 1) has the basic XSA-60 fixes and some wbinvd()s, which are a
> significant performance issue and insufficient to completely fix the
> problem at hand.  As a result, 1) is the worst possible option to stay
> with as far as Xen is concerned (irrespective of the upcoming 4.4
> release). 
> 
> 2) will revert us back to the basic XSA-60 with none of the wbinvd()s,
> which fixes the security issue and is no worse than before in terms
> of a correctness-in-the-case-of-uncachable-hvm-domains point of view.
> 
> 3) as-is is still insufficient to fix the problem in 1), and would
> currently result in a further performance regression.
> 
> FWIW, my vote is for option 2) which will ease the current performance
> regression, in favor of allowing us time to come up with a proper
> solution to the pre-existing problem of Xen and Qemu mappings of a UC
> domain's memory.
> 
> ~Andrew

I also vote for option 2, but would only revert 86d60e85, keeping 62652c00 (the wbinvd at vmx_ctxt_switch_to), since it's there to avoid using a polluted cache when the vcpu migrates to another cpu.

Thanks,
Jinsong


* Re: XSA-60 - how to get back to a sane state
From: Jan Beulich @ 2013-12-03  7:57 UTC
  To: Andrew Cooper, George Dunlap
  Cc: Jinsong Liu, Keir Fraser, Zhenzhong Duan, Donald D Dugger,
	Jun Nakajima, xen-devel

>>> On 03.12.13 at 03:21, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> 1) has the basic XSA-60 fixes and some wbinvd()s, which are a
> significant performance issue and insufficient to completely fix the
> problem at hand.  As a result, 1) is the worst possible option to stay
> with as far as Xen is concerned (irrespective of the upcoming 4.4 release).

I should add that another aspect here is the current inconsistency
of the code: In a few places we give the appearance of taking care
of the required flushing, but we don't really do so everywhere, so
whoever is going to read the code in close detail will rightfully say
"this can't be correct". Whereas if we revert the wbinvd()s we're
at least back to a consistent (even if theoretically broken) state.

> 2) will revert us back to the basic XSA-60 with none of the wbinvd()s,
> which fixes the security issue and is no worse than before in terms of a
> correctness-in-the-case-of-uncachable-hvm-domains point of view.

And keep in mind that at present we have no knowledge of guests
actually needing to run with caching disabled - the only known use
case is that of the guest setting its MTRRs (where the specification
requires caching to be disabled, but the OS [as well as correctness]
doesn't really require that).
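
For reference, the specification-prescribed MTRR update sequence is
roughly the following (a condensed guest-side sketch, not Xen code;
TLB flush steps omitted):

    unsigned long cr0 = read_cr0();

    write_cr0(cr0 | X86_CR0_CD);   /* CD=1: disable caching */
    wbinvd();                      /* flush caches */
    /* disable MTRRs via IA32_MTRR_DEF_TYPE, update them, re-enable */
    wbinvd();                      /* flush again */
    write_cr0(cr0);                /* restore caching */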

> 3) as-is is still insufficient to fix the problem in 1), and would
> currently result in a further performance regression.

Hmm, I'm not sure about the performance regression: During the
sole known usage (see above) there shouldn't be any guest
memory writes, and hence the "needs-flushing" flag should rarely
if ever get set.

> FWIW, my vote is for option 2) which will ease the current performance
> regression, in favor of allowing us time to come up with a proper
> solution to the pre-existing problem of Xen and Qemu mappings of a UC
> domain's memory.

Agreed - my preference too is 2 over 3 over 1.

Jan


* Re: XSA-60 - how to get back to a sane state
From: Jan Beulich @ 2013-12-03  8:00 UTC
  To: Andrew Cooper, George Dunlap, Jinsong Liu, xen-devel
  Cc: Zhenzhong Duan, Keir Fraser, Jun Nakajima, Donald D Dugger

>>> On 03.12.13 at 04:06, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
> I also vote option 2, but only revert 86d60e85, keeping 62652c00 (wbinvd at 
> vmx_ctxt_switch_to) since it's used to avoid being polluted when vcpu migrate 
> to another cpu.

Please explain this in more detail. Both Andrew and I are concerned
about this extra, but pretty pointless (without the same being done in
other cases), wbinvd(). In particular you'd have to explain what its
counterpart was in the code prior to your four-patch XSA-60 series.

Jan


* Re: XSA-60 - how to get back to a sane state
From: Liu, Jinsong @ 2013-12-03 14:30 UTC
  To: Jan Beulich, Andrew Cooper, George Dunlap, xen-devel
  Cc: Zhenzhong Duan, Keir Fraser, Nakajima, Jun, Dugger, Donald D

Jan Beulich wrote:
>>>> On 03.12.13 at 04:06, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>> I also vote option 2, but only revert 86d60e85, keeping 62652c00
>> (wbinvd at vmx_ctxt_switch_to) since it's used to avoid being
>> polluted when vcpu migrate to another cpu.
> 
> Please explain this in more detail. Both Andrew and I are concerned
> about this extra, but pretty pointless (without being done so too in
> other cases) wbinvd(). In particular you'd have to explain what its
> counterpart was in the code prior to your four patch XSA-60 series.
> 
> Jan

The wbinvd at vmx_ctxt_switch_to is for a case like:
1. the vcpu runs on cpu A, flushing the cache at vmx_handle_cd;
2. the vcpu may then be switched out and migrate to cpu B;
3. historically, cpu B may have had its cachelines polluted;
so when the vcpu is scheduled onto cpu B, we need to flush the cache.
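
(The check in question, as added by 62652c00 to vmx_ctxt_switch_to()
and shown in the diff above:)

    /* For guest cr0.cd setting, do not use potentially polluted cache */
    if ( unlikely(v->arch.hvm_vcpu.cache_mode == NO_FILL_CACHE_MODE) )
        wbinvd();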

Thanks,
Jinsong


* Re: XSA-60 - how to get back to a sane state
From: Jan Beulich @ 2013-12-03 15:09 UTC
  To: Andrew Cooper, George Dunlap, Jinsong Liu, xen-devel
  Cc: Zhenzhong Duan, Keir Fraser, Jun Nakajima, Donald D Dugger

>>> On 03.12.13 at 15:30, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
> Jan Beulich wrote:
>>>>> On 03.12.13 at 04:06, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>>> I also vote option 2, but only revert 86d60e85, keeping 62652c00
>>> (wbinvd at vmx_ctxt_switch_to) since it's used to avoid being
>>> polluted when vcpu migrate to another cpu.
>> 
>> Please explain this in more detail. Both Andrew and I are concerned
>> about this extra, but pretty pointless (without being done so too in
>> other cases) wbinvd(). In particular you'd have to explain what its
>> counterpart was in the code prior to your four patch XSA-60 series.
> 
> The wbinvd at vmx_ctxt_switch_to is for case like
> 1. vcpu runs at cpu A, flushing cache at vmx_handle_cd;
> 2. then the vcpu may switch out and migrate to cpu B;
> 3. historically cpu B may has cacheline polluted;
> so when the vcpu is scheduled to cpu B, we need flush cache.

But you didn't clarify whether/how this case was taken care of
_before_ your XSA-60 patches.

Jan


* Re: XSA-60 - how to get back to a sane state
From: George Dunlap @ 2013-12-03 15:14 UTC
  To: Jan Beulich, Andrew Cooper, Jinsong Liu, xen-devel
  Cc: Zhenzhong Duan, Keir Fraser, Jun Nakajima, Donald D Dugger

On 12/03/2013 03:09 PM, Jan Beulich wrote:
>>>> On 03.12.13 at 15:30, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>> Jan Beulich wrote:
>>>>>> On 03.12.13 at 04:06, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>>>> I also vote option 2, but only revert 86d60e85, keeping 62652c00
>>>> (wbinvd at vmx_ctxt_switch_to) since it's used to avoid being
>>>> polluted when vcpu migrate to another cpu.
>>>
>>> Please explain this in more detail. Both Andrew and I are concerned
>>> about this extra, but pretty pointless (without being done so too in
>>> other cases) wbinvd(). In particular you'd have to explain what its
>>> counterpart was in the code prior to your four patch XSA-60 series.
>>
>> The wbinvd at vmx_ctxt_switch_to is for case like
>> 1. vcpu runs at cpu A, flushing cache at vmx_handle_cd;
>> 2. then the vcpu may switch out and migrate to cpu B;
>> 3. historically cpu B may has cacheline polluted;
>> so when the vcpu is scheduled to cpu B, we need flush cache.
>
> But you didn't clarify whether/how this case was taken care of
> _before_ your XSA-60 patches.

Is this still about guests doing wbinvd?  As Jan said, there is no point 
in doing wbinvd piecemeal: if it's not 100% reliable (to the best of our 
knowledge), then the more unreliable the better really.

  -George


* Re: XSA-60 - how to get back to a sane state
From: Liu, Jinsong @ 2013-12-04 12:04 UTC
  To: Jan Beulich, Andrew Cooper, George Dunlap, xen-devel
  Cc: Zhenzhong Duan, Keir Fraser, Nakajima, Jun, Dugger, Donald D

Jan Beulich wrote:
>>>> On 03.12.13 at 15:30, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>> Jan Beulich wrote:
>>>>>> On 03.12.13 at 04:06, "Liu, Jinsong" <jinsong.liu@intel.com>
>>>>>> wrote: 
>>>> I also vote option 2, but only revert 86d60e85, keeping 62652c00
>>>> (wbinvd at vmx_ctxt_switch_to) since it's used to avoid being
>>>> polluted when vcpu migrate to another cpu.
>>> 
>>> Please explain this in more detail. Both Andrew and I are concerned
>>> about this extra, but pretty pointless (without being done so too in
>>> other cases) wbinvd(). In particular you'd have to explain what its
>>> counterpart was in the code prior to your four patch XSA-60 series.
>> 
>> The wbinvd at vmx_ctxt_switch_to is for case like
>> 1. vcpu runs at cpu A, flushing cache at vmx_handle_cd;
>> 2. then the vcpu may switch out and migrate to cpu B;
>> 3. historically cpu B may has cacheline polluted;
>> so when the vcpu is scheduled to cpu B, we need flush cache.
> 
> But you didn't clarify whether/how this case was taken care of
> _before_ your XSA-60 patches.
> 

I didn't understand your question. What do you mean by 'before my XSA-60 patches'?

Thanks,
Jinsong


* Re: XSA-60 - how to get back to a sane state
From: Jan Beulich @ 2013-12-04 12:16 UTC
  To: Jinsong Liu
  Cc: Keir Fraser, George Dunlap, Andrew Cooper, Zhenzhong Duan,
	Donald D Dugger, Jun Nakajima, xen-devel

>>> On 04.12.13 at 13:04, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
> Jan Beulich wrote:
>>>>> On 03.12.13 at 15:30, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>>> Jan Beulich wrote:
>>>>>>> On 03.12.13 at 04:06, "Liu, Jinsong" <jinsong.liu@intel.com>
>>>>>>> wrote: 
>>>>> I also vote option 2, but only revert 86d60e85, keeping 62652c00
>>>>> (wbinvd at vmx_ctxt_switch_to) since it's used to avoid being
>>>>> polluted when vcpu migrate to another cpu.
>>>> 
>>>> Please explain this in more detail. Both Andrew and I are concerned
>>>> about this extra, but pretty pointless (without being done so too in
>>>> other cases) wbinvd(). In particular you'd have to explain what its
>>>> counterpart was in the code prior to your four patch XSA-60 series.
>>> 
>>> The wbinvd at vmx_ctxt_switch_to is for case like
>>> 1. vcpu runs at cpu A, flushing cache at vmx_handle_cd;
>>> 2. then the vcpu may switch out and migrate to cpu B;
>>> 3. historically cpu B may has cacheline polluted;
>>> so when the vcpu is scheduled to cpu B, we need flush cache.
>> 
>> But you didn't clarify whether/how this case was taken care of
>> _before_ your XSA-60 patches.
>> 
> 
> I didn't understand your question. What do you mean by 'before my XSA-60 
> patches'?

Before your 4-patch series was applied (e.g. consider plain 4.3.1),
how was the situation that your change to vmx_ctxt_switch_to() is
intended to deal with taken care of?

Jan


* Re: XSA-60 - how to get back to a sane state
From: George Dunlap @ 2013-12-04 15:55 UTC
  To: Jan Beulich, Jinsong Liu
  Cc: Keir Fraser, Andrew Cooper, Zhenzhong Duan, Donald D Dugger,
	Jun Nakajima, xen-devel

On 12/04/2013 12:16 PM, Jan Beulich wrote:
>>>> On 04.12.13 at 13:04, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>> Jan Beulich wrote:
>>>>>> On 03.12.13 at 15:30, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>>>> Jan Beulich wrote:
>>>>>>>> On 03.12.13 at 04:06, "Liu, Jinsong" <jinsong.liu@intel.com>
>>>>>>>> wrote:
>>>>>> I also vote option 2, but only revert 86d60e85, keeping 62652c00
>>>>>> (wbinvd at vmx_ctxt_switch_to) since it's used to avoid being
>>>>>> polluted when vcpu migrate to another cpu.
>>>>> Please explain this in more detail. Both Andrew and I are concerned
>>>>> about this extra, but pretty pointless (without being done so too in
>>>>> other cases) wbinvd(). In particular you'd have to explain what its
>>>>> counterpart was in the code prior to your four patch XSA-60 series.
>>>> The wbinvd at vmx_ctxt_switch_to is for case like
>>>> 1. vcpu runs at cpu A, flushing cache at vmx_handle_cd;
>>>> 2. then the vcpu may switch out and migrate to cpu B;
>>>> 3. historically cpu B may has cacheline polluted;
>>>> so when the vcpu is scheduled to cpu B, we need flush cache.
>>> But you didn't clarify whether/how this case was taken care of
>>> _before_ your XSA-60 patches.
>>>
>> I didn't understand your question. What do you mean by 'before my XSA-60
>> patches'?
> Before your 4 patch series was applied (e.g. consider plain
> 4.3.1) - how was the situation taken care of that your change
> to vmx_ctxt_switch_to() is intended to deal with?

It sounds like Jan is saying: we would only consider a patch that would 
fix regressions in functionality caused by the 4-patch XSA-60 series.  
Was there the possibility of cacheline pollution in the scenario you 
describe above before the XSA-60 series went in?  If not, then this is a 
regression and we might consider a patch to restore that functionality.  
If there was the possibility of the above scenario before the XSA-60 
series, then it's not a regression, and therefore probably not something 
we want to accept at this point.

Do I understand you properly, Jan?

  -George


* Re: XSA-60 - how to get back to a sane state
From: Liu, Jinsong @ 2013-12-04 16:03 UTC
  To: Jan Beulich
  Cc: Keir Fraser, George Dunlap, Andrew Cooper, Zhenzhong Duan,
	Dugger, Donald D, Nakajima, Jun, xen-devel

Jan Beulich wrote:
>>>> On 04.12.13 at 13:04, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>> Jan Beulich wrote:
>>>>>> On 03.12.13 at 15:30, "Liu, Jinsong" <jinsong.liu@intel.com>
>>>>>> wrote: 
>>>> Jan Beulich wrote:
>>>>>>>> On 03.12.13 at 04:06, "Liu, Jinsong" <jinsong.liu@intel.com>
>>>>>>>> wrote:
>>>>>> I also vote option 2, but only revert 86d60e85, keeping 62652c00
>>>>>> (wbinvd at vmx_ctxt_switch_to) since it's used to avoid being
>>>>>> polluted when vcpu migrate to another cpu.
>>>>> 
>>>>> Please explain this in more detail. Both Andrew and I are
>>>>> concerned about this extra, but pretty pointless (without being
>>>>> done so too in other cases) wbinvd(). In particular you'd have to
>>>>> explain what its counterpart was in the code prior to your four
>>>>> patch XSA-60 series. 
>>>> 
>>>> The wbinvd at vmx_ctxt_switch_to is for case like
>>>> 1. vcpu runs at cpu A, flushing cache at vmx_handle_cd;
>>>> 2. then the vcpu may switch out and migrate to cpu B;
>>>> 3. historically cpu B may has cacheline polluted;
>>>> so when the vcpu is scheduled to cpu B, we need flush cache.
>>> 
>>> But you didn't clarify whether/how this case was taken care of
>>> _before_ your XSA-60 patches. 
>>> 
>> 
>> I didn't understand your question. What do you mean by 'before my
>> XSA-60 patches'?
> 
> Before your 4 patch series was applied (e.g. consider plain
> 4.3.1) - how was the situation taken care of that your change
> to vmx_ctxt_switch_to() is intended to deal with?
> 

Before my 4 patches (in fact, before the 3rd patch, 62652c00), Xen just didn't take care of this case.
Note that the wbinvd at vmx_ctxt_switch_to is for a corner case (UC mode).

Thanks,
Jinsong


* Re: XSA-60 - how to get back to a sane state
From: Jan Beulich @ 2013-12-04 16:09 UTC
  To: George Dunlap, Jinsong Liu
  Cc: Keir Fraser, Andrew Cooper, Zhenzhong Duan, Donald D Dugger,
	Jun Nakajima, xen-devel

>>> On 04.12.13 at 16:55, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> On 12/04/2013 12:16 PM, Jan Beulich wrote:
>>>>> On 04.12.13 at 13:04, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>>> Jan Beulich wrote:
>>>>>>> On 03.12.13 at 15:30, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>>>>> Jan Beulich wrote:
>>>>>>>>> On 03.12.13 at 04:06, "Liu, Jinsong" <jinsong.liu@intel.com>
>>>>>>>>> wrote:
>>>>>>> I also vote option 2, but only revert 86d60e85, keeping 62652c00
>>>>>>> (wbinvd at vmx_ctxt_switch_to) since it's used to avoid being
>>>>>>> polluted when vcpu migrate to another cpu.
>>>>>> Please explain this in more detail. Both Andrew and I are concerned
>>>>>> about this extra, but pretty pointless (without being done so too in
>>>>>> other cases) wbinvd(). In particular you'd have to explain what its
>>>>>> counterpart was in the code prior to your four patch XSA-60 series.
>>>>> The wbinvd at vmx_ctxt_switch_to is for case like
>>>>> 1. vcpu runs at cpu A, flushing cache at vmx_handle_cd;
>>>>> 2. then the vcpu may switch out and migrate to cpu B;
>>>>> 3. historically cpu B may has cacheline polluted;
>>>>> so when the vcpu is scheduled to cpu B, we need flush cache.
>>>> But you didn't clarify whether/how this case was taken care of
>>>> _before_ your XSA-60 patches.
>>>>
>>> I didn't understand your question. What do you mean by 'before my XSA-60
>>> patches'?
>> Before your 4 patch series was applied (e.g. consider plain
>> 4.3.1) - how was the situation taken care of that your change
>> to vmx_ctxt_switch_to() is intended to deal with?
> 
> It sounds like Jan is saying: We would only consider a patch that would 
> fix regressions in functionality caused by the 4-patch XSA-60 series.  
> Was there the possibility for cacheline pollution in the scenario you 
> describe above before XSA-60 was fixed?  If not, then this is a 
> regression and we might consider a patch to restore that functionality.  
> If there was the possibility of the above scenario before the XSA-60 
> series, then it's not a regression; and therefore probably not something 
> we want to accept at this point.
> 
> Do I understand you properly, Jan?

Yes, thanks for wording it in yet another way.

Jan


* Re: XSA-60 - how to get back to a sane state
From: Jan Beulich @ 2013-12-04 16:14 UTC
  To: Jinsong Liu
  Cc: Keir Fraser, George Dunlap, Andrew Cooper, Zhenzhong Duan,
	Donald D Dugger, Jun Nakajima, xen-devel

>>> On 04.12.13 at 17:03, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
> Before my 4 patches (in fact before the 3rd patch 62652c00), Xen just didn't 
> take care of the case.
> Notice that wbinvd at vmx_ctxt_switch_to is for a corner case (when uc 
> mode).

That's what my thinking was, and hence the intention to revert it as
being incomplete anyway. Once we find a need to do the flushing, we'd
need to bite the bullet and fix all of the possible cases, not just
some of them.

Are we on the same page now?

Jan


* Re: XSA-60 - how to get back to a sane state
From: Liu, Jinsong @ 2013-12-04 16:23 UTC
  To: Jan Beulich
  Cc: Keir Fraser, George Dunlap, Andrew Cooper, Zhenzhong Duan,
	Dugger, Donald D, Nakajima, Jun, xen-devel

Jan Beulich wrote:
>>>> On 04.12.13 at 17:03, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>> Before my 4 patches (in fact before the 3rd patch 62652c00), Xen
>> just didn't take care of the case. Notice that wbinvd at
>> vmx_ctxt_switch_to is for a corner case (when uc mode).
> 
> That's what my thinking was. And hence the intention to revert it
> as being incomplete anyway. Once we find a need to do the
> flushing, we'd need to bite the bullet and fix all of the possible
> cases, not just some of them.
> 
> Are we on the same page now?
> 

From this point, yes. I don't insist on keeping that wbinvd.

Thanks,
Jinsong

