From: Julien Grall <julien.grall@linaro.org>
To: Jan Beulich <JBeulich@suse.com>, Julien Grall <julien.grall@arm.com>
Cc: Sergey Dyasli <sergey.dyasli@citrix.com>,
Stefano Stabellini <sstabellini@kernel.org>,
Wei Liu <wei.liu2@citrix.com>,
Igor Druzhinin <igor.druzhinin@citrix.com>,
George Dunlap <George.Dunlap@eu.citrix.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
Ian Jackson <Ian.Jackson@eu.citrix.com>, Tim Deegan <tim@xen.org>,
Dario Faggioli <raistlin@linux.it>,
xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2] sync CPU state upon final domain destruction
Date: Wed, 22 Nov 2017 16:04:55 +0000 [thread overview]
Message-ID: <551b392e-a298-6055-d5b8-5d28e79b3190@linaro.org> (raw)
In-Reply-To: <5A1583150200007800190FE3@prv-mh.provo.novell.com>
Hi,
On 11/22/2017 01:00 PM, Jan Beulich wrote:
>>>> On 22.11.17 at 13:39, <JBeulich@suse.com> wrote:
>> See the code comment being added for why we need this.
>>
>> This is being placed here to balance between the desire to prevent
>> future similar issues (the risk of which would grow if it was put
>> further down the call stack, e.g. in vmx_vcpu_destroy()) and the
>> intention to limit the performance impact (otherwise it could also go
>> into rcu_do_batch(), paralleling the use in do_tasklet_work()).
>>
>> Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> I'm sorry, Julien, I did forget to Cc you (for 4.10 inclusion).
Release-acked-by: Julien Grall <julien.grall@linaro.org>
Cheers,
>
>> ---
>> v2: Move from vmx_vcpu_destroy() to complete_domain_destroy().
>>
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -794,6 +794,14 @@ static void complete_domain_destroy(stru
>>      struct vcpu *v;
>>      int i;
>>
>> +    /*
>> +     * Flush all state for the vCPU previously having run on the current CPU.
>> +     * This is in particular relevant for x86 HVM ones on VMX, so that this
>> +     * flushing of state won't happen from the TLB flush IPI handler behind
>> +     * the back of a vmx_vmcs_enter() / vmx_vmcs_exit() section.
>> +     */
>> +    sync_local_execstate();
>> +
>>      for ( i = d->max_vcpus - 1; i >= 0; i-- )
>>      {
>>          if ( (v = d->vcpu[i]) == NULL )
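[Editorial note: to illustrate the ordering the added comment is after, here is a standalone toy sketch. All names below are invented stand-ins, not Xen code: a pCPU may hold "lazy" register/TLB state for the vCPU that last ran on it, and flushing it up front means no deferred flush can later run behind the destructor's back from an IPI handler.]

```c
#include <stdio.h>

/* Toy model of the hazard -- invented names, not Xen code.  A pCPU may
 * keep lazy state for the vCPU that last ran on it; the real
 * sync_local_execstate() flushes such state on the local CPU. */
struct pcpu {
    int lazy_vcpu_id;   /* -1: no deferred state held for any vCPU */
};

/* Stand-in for sync_local_execstate(): flush any deferred state now,
 * so nothing is left for a TLB flush IPI handler to flush later. */
static void toy_sync_local_execstate(struct pcpu *p)
{
    if (p->lazy_vcpu_id != -1) {
        printf("flushing lazy state of vCPU %d\n", p->lazy_vcpu_id);
        p->lazy_vcpu_id = -1;
    }
}

/* Stand-in for complete_domain_destroy(): sync first, then it is safe
 * to tear the vCPUs down without a flush racing in from an IPI. */
static void toy_complete_domain_destroy(struct pcpu *p)
{
    toy_sync_local_execstate(p);
    /* ... free vCPU state here; no lazy state can reference it ... */
}
```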
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> https://lists.xen.org/xen-devel
>
Thread overview: 4+ messages
2017-11-22 12:39 [PATCH v2] sync CPU state upon final domain destruction Jan Beulich
2017-11-22 12:54 ` Andrew Cooper
2017-11-22 13:00 ` Jan Beulich
2017-11-22 16:04 ` Julien Grall [this message]