From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: "Roger Pau Monné" <roger.pau@citrix.com>,
"Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
Cc: linux-kernel@vger.kernel.org, stable@vger.kernel.org,
david.vrabel@citrix.com, jbeulich@suse.com,
xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] xen/x86/pvh: Use HVM's flush_tlb_others op
Date: Mon, 14 Dec 2015 10:58:00 -0500
Message-ID: <566EE708.6010404@oracle.com>
In-Reply-To: <566EE1C4.4080204@citrix.com>
On 12/14/2015 10:35 AM, Roger Pau Monné wrote:
> On 14/12/15 at 16:27, Konrad Rzeszutek Wilk wrote:
>> On Sat, Dec 12, 2015 at 07:25:55PM -0500, Boris Ostrovsky wrote:
>>> Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much, since the hypervisor
>>> will likely perform the same IPIs that the guest would have.
>>>
>> But if the VCPU is asleep, doing it via the hypervisor will save us
>> waking up the guest VCPU and sending an IPI just to do a TLB flush
>> of that CPU, which is pointless as the CPU hadn't been running the
>> guest in the first place.
OK, then I misread the hypervisor code; I didn't realize that
vcpumask_to_pcpumask() takes vcpu_dirty_cpumask into account.
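(For reference, a minimal sketch of what I now understand that path to do;
the names are from memory and the real code in xen/arch/x86/mm.c differs in
detail:)

    /* Sketch: map a guest vCPU bitmap to the set of pCPUs that may
     * still hold stale TLB entries for those vCPUs. A vCPU that is
     * not running anywhere has an empty dirty mask and so contributes
     * no pCPUs, i.e. no IPI is sent on its behalf. */
    static void vcpumask_to_pcpumask(struct domain *d,
                                     const unsigned long *vmask,
                                     cpumask_t *pmask)
    {
        unsigned int vid;

        cpumask_clear(pmask);
        for ( vid = 0; vid < d->max_vcpus; vid++ )
        {
            struct vcpu *v = d->vcpu[vid];

            if ( test_bit(vid, vmask) && v != NULL )
                cpumask_or(pmask, pmask, v->vcpu_dirty_cpumask);
        }
    }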
>>
>>> More importantly, using MMUEXT_INVLPG_MULTI may not invalidate the
>>> guest's address on a remote CPU (when, for example, a VCPU from
>>> another guest is running there).
>> Right, so the hypervisor won't even send an IPI there.
>>
>> But if you do it via the normal guest IPI mechanism (which is opaque
>> to the hypervisor) you end up scheduling the guest VCPU just to
>> handle a hypervisor callback. And the callback will go to the IPI
>> routine, which will do a TLB flush. Not necessary.
>>
>> This is all in the case of oversubscription, of course. When we are
>> not short on vCPU resources it does not matter.
>>
>> Perhaps if we had a PV-aware TLB flush it could do this differently?
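(For context, the PV flush_tlb_others path under discussion boils down to a
single mmuext_op hypercall; below is a simplified sketch of the guest side.
The real code in arch/x86/xen/mmu.c issues this through a multicall and
trims the current CPU out of the mask first:)

    struct {
        struct mmuext_op op;
        DECLARE_BITMAP(mask, NR_CPUS);
    } args = { };

    if (va == TLB_FLUSH_ALL) {
        /* Flush everything on the targeted vCPUs. */
        args.op.cmd = MMUEXT_TLB_FLUSH_MULTI;
    } else {
        /* Flush only one linear address on the targeted vCPUs. */
        args.op.cmd = MMUEXT_INVLPG_MULTI;
        args.op.arg1.linear_addr = va;
    }
    cpumask_copy(to_cpumask(args.mask), cpus);
    args.op.arg2.vcpumask = to_cpumask(args.mask);

    /* Xen narrows the vCPU mask down to the pCPUs that need an IPI. */
    HYPERVISOR_mmuext_op(&args.op, 1, NULL, DOMID_SELF);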
> Why doesn't HVM/PVH just use the HVMOP_flush_tlbs hypercall?
It doesn't take any parameters, so it will invalidate the TLBs of all
VCPUs, which is more than is being asked for, especially in the case of
MMUEXT_INVLPG_MULTI (which only needs a single address flushed on
selected VCPUs).
(That's in addition to the fact that it currently doesn't work for PVH,
since it tests is_hvm_domain() instead of has_hvm_container_domain().)
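(To make that concrete: the guest invocation is just

    /* No arguments: flushes the TLBs of every vCPU in the domain,
     * with no way to name a single linear address or a vCPU subset. */
    HYPERVISOR_hvm_op(HVMOP_flush_tlbs, NULL);

and the hypervisor-side gate I'm referring to is, roughly, a sketch from
memory rather than the exact code in xen/arch/x86/hvm/hvm.c:

    if ( !is_hvm_domain(d) )     /* false for a PVH domain ...     */
        return -EINVAL;          /* ... so PVH callers get -EINVAL */

where the PVH-friendly test would be has_hvm_container_domain(d).)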
-boris