From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Paul Durrant <Paul.Durrant@citrix.com>,
"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wei.liu2@citrix.com>, "Keir (Xen.org)" <keir@xen.org>,
Ian Campbell <Ian.Campbell@citrix.com>,
Stefano Stabellini <Stefano.Stabellini@citrix.com>,
Jan Beulich <jbeulich@suse.com>,
Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [PATCH v3] x86/hvm/viridian: flush remote tlbs by hypercall
Date: Thu, 19 Nov 2015 17:08:44 +0000
Message-ID: <564E021C.8080105@citrix.com>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD02F67B9BF@AMSPEX01CL01.citrite.net>

On 19/11/15 16:57, Paul Durrant wrote:
>> -----Original Message-----
>> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
>> Sent: 19 November 2015 16:07
>> To: Paul Durrant; xen-devel@lists.xenproject.org
>> Cc: Ian Jackson; Stefano Stabellini; Ian Campbell; Wei Liu; Keir (Xen.org); Jan
>> Beulich
>> Subject: Re: [PATCH v3] x86/hvm/viridian: flush remote tlbs by hypercall
>>
>> On 19/11/15 13:19, Paul Durrant wrote:
>>> @@ -561,10 +584,81 @@ int viridian_hypercall(struct cpu_user_regs *regs)
>>> switch ( input.call_code )
>>> {
>>> case HvNotifyLongSpinWait:
>>> + /*
>>> + * See Microsoft Hypervisor Top Level Spec. section 18.5.1.
>>> + */
>>> perfc_incr(mshv_call_long_wait);
>>> do_sched_op(SCHEDOP_yield, guest_handle_from_ptr(NULL, void));
>>> status = HV_STATUS_SUCCESS;
>>> break;
>>> +
>>> + case HvFlushVirtualAddressSpace:
>>> + case HvFlushVirtualAddressList:
>>> + {
>>> + cpumask_t *pcpu_mask;
>>> + struct vcpu *v;
>>> + struct {
>>> + uint64_t address_space;
>>> + uint64_t flags;
>>> + uint64_t vcpu_mask;
>>> + } input_params;
>>> +
>>> + /*
>>> + * See Microsoft Hypervisor Top Level Spec. sections 12.4.2
>>> + * and 12.4.3.
>>> + */
>>> + perfc_incr(mshv_flush);
>>> +
>>> + /* These hypercalls should never use the fast-call convention. */
>>> + status = HV_STATUS_INVALID_PARAMETER;
>>> + if ( input.fast )
>>> + break;
>>> +
>>> + /* Get input parameters. */
>>> + if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
>>> + sizeof(input_params)) != HVMCOPY_okay )
>>> + break;
>>> +
>>> + /*
>>> + * It is not clear from the spec. if we are supposed to
>>> + * include the current virtual CPU in the set or not in this case,
>>> + * so err on the safe side.
>>> + */
>>> + if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
>>> + input_params.vcpu_mask = ~0ul;
>>> +
>>> + pcpu_mask = curr->arch.hvm_vcpu.viridian.flush_cpumask;
>>> + cpumask_clear(pcpu_mask);
>>> +
>>> + /*
>>> + * For each specified virtual CPU flush all ASIDs to invalidate
>>> + * TLB entries the next time it is scheduled and then, if it
>>> + * is currently running, add its physical CPU to a mask of
>>> + * those which need to be interrupted to force a flush.
>>> + */
>>> + for_each_vcpu ( currd, v )
>>> + {
>>> + if ( !(input_params.vcpu_mask & (1ul << v->vcpu_id)) )
>>> + continue;
>> You need to cap this loop at a vcpu_id of 63, or the above conditional
>> will become undefined.
> The guest should not issue the hypercall if it has more than 64 vCPUs, so to some extent I don't care what happens, as long as it is not harmful to the hypervisor in general, and I don't think that it is in this case.
The compiler is free to do anything it wishes when encountering
undefined behaviour, including crashing the hypervisor.
Any undefined behaviour that can be triggered by a guest action
warrants an XSA, because there is no telling what might happen.
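Something along these lines (an untested sketch against the v3 code
above; the sizeof() bound is just one way of expressing the cap) would
keep the shift defined:

    for_each_vcpu ( currd, v )
    {
        /* vcpu_mask is a 64-bit field, and 1ul << v->vcpu_id is undefined
         * for vcpu_id >= 64, so stop once vcpu_id can no longer index it. */
        if ( v->vcpu_id >= (sizeof(input_params.vcpu_mask) * 8) )
            break;

        if ( !(input_params.vcpu_mask & (1ul << v->vcpu_id)) )
            continue;

        hvm_asid_flush_vcpu(v);
        if ( v->is_running )
            __cpumask_set_cpu(v->processor, pcpu_mask);
    }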
>
>> It might also be wise to fail the vcpu_initialise() for a
>> viridian-enabled domain having more than 64 vcpus.
>>
>>> +
>>> + hvm_asid_flush_vcpu(v);
>>> + if ( v->is_running )
>>> + cpumask_set_cpu(v->processor, pcpu_mask);
>> __cpumask_set_cpu(). No need for atomic operations here.
>>
> Ok.
>
>>> + }
>>> +
>>> + /*
>>> + * Since ASIDs have now been flushed it just remains to
>>> + * force any CPUs currently running target vCPUs out of non-
>>> + * root mode. It's possible that re-scheduling has taken place
>>> + * so we may unnecessarily IPI some CPUs.
>>> + */
>>> + if ( !cpumask_empty(pcpu_mask) )
>>> + flush_tlb_mask(pcpu_mask);
>> Wouldn't it be easier to simply AND input_params.vcpu_mask with
>> d->vcpu_dirty_mask?
>>
> No, that may yield much too big a mask. All we need here is a mask of where the vcpus are running *now*, not everywhere they've been.
The dirty mask is a "currently scheduled on" mask.
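To illustrate (a rough, untested sketch; I am going from memory on the
field name, so treat v->vcpu_dirty_cpumask as an assumption rather than
a drop-in): the scheduler already maintains a per-vCPU mask of the pCPUs
it is currently scheduled on, so the loop could accumulate the IPI
targets from that instead of from is_running/processor:

    for_each_vcpu ( currd, v )
    {
        if ( !(input_params.vcpu_mask & (1ul << v->vcpu_id)) )
            continue;

        hvm_asid_flush_vcpu(v);
        /* Accumulate the pCPUs on which this vCPU is currently scheduled,
         * as tracked by the scheduler, into the set to be IPI'd. */
        cpumask_or(pcpu_mask, pcpu_mask, v->vcpu_dirty_cpumask);
    }

The point is only that this is a "where is it now" mask, not a history
of everywhere the vCPU has ever run.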
~Andrew