From: Vitaly Kuznetsov <vkuznets@redhat.com>
To: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: kvm@vger.kernel.org, x86@kernel.org,
Paolo Bonzini <pbonzini@redhat.com>,
Roman Kagan <rkagan@virtuozzo.com>,
"K. Y. Srinivasan" <kys@microsoft.com>,
Haiyang Zhang <haiyangz@microsoft.com>,
Stephen Hemminger <sthemmin@microsoft.com>,
"Michael Kelley (EOSG)" <Michael.H.Kelley@microsoft.com>,
Mohammed Gamal <mmorsy@redhat.com>,
Cathy Avery <cavery@redhat.com>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 4/6] KVM: x86: hyperv: simplistic HVCALL_FLUSH_VIRTUAL_ADDRESS_{LIST,SPACE} implementation
Date: Fri, 11 May 2018 14:27:10 +0200
Message-ID: <87in7uml7l.fsf@vitty.brq.redhat.com>
In-Reply-To: <20180510194016.GB3885@flask> ("Radim Krčmář"'s message of "Thu, 10 May 2018 21:40:17 +0200")
Radim Krčmář <rkrcmar@redhat.com> writes:
> 2018-04-16 13:08+0200, Vitaly Kuznetsov:
>> Implement HvFlushVirtualAddress{List,Space} hypercalls in a simplistic way:
>> do full TLB flush with KVM_REQ_TLB_FLUSH and kick vCPUs which are currently
>> IN_GUEST_MODE.
>>
>> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
>> ---
>> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
>> @@ -1242,6 +1242,65 @@ int kvm_hv_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
>> return kvm_hv_get_msr(vcpu, msr, pdata);
>> }
>>
>> +static void ack_flush(void *_completed)
>> +{
>> +}
>> +
>> +static u64 kvm_hv_flush_tlb(struct kvm_vcpu *current_vcpu, u64 ingpa,
>> + u16 rep_cnt)
>> +{
>> + struct kvm *kvm = current_vcpu->kvm;
>> + struct kvm_vcpu_hv *hv_current = &current_vcpu->arch.hyperv;
>> + struct hv_tlb_flush flush;
>> + struct kvm_vcpu *vcpu;
>> + int i, cpu, me;
>> +
>> + if (unlikely(kvm_read_guest(kvm, ingpa, &flush, sizeof(flush))))
>> + return HV_STATUS_INVALID_HYPERCALL_INPUT;
>> +
>> + trace_kvm_hv_flush_tlb(flush.processor_mask, flush.address_space,
>> + flush.flags);
>> +
>> + cpumask_clear(&hv_current->tlb_lush);
>> +
>> + me = get_cpu();
>> +
>> + kvm_for_each_vcpu(i, vcpu, kvm) {
>> + struct kvm_vcpu_hv *hv = &vcpu->arch.hyperv;
>> +
>> + if (!(flush.flags & HV_FLUSH_ALL_PROCESSORS) &&
>
> Please add a check to prevent undefined behavior in C:
>
> (hv->vp_index >= 64 ||
>
>> + !(flush.processor_mask & BIT_ULL(hv->vp_index)))
>> + continue;
>
> It would also fail in the wild as shl only considers the bottom 5 bits.
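Right. For illustration, a guarded version of the check might look like the standalone sketch below (hypothetical helper name, not the kernel code; in the kernel BIT_ULL(n) is (1ULL << (n))):

```c
#include <stdint.h>

/* Standalone sketch of a guarded processor-mask test. Shifting a
 * 64-bit value by a count >= 64 is undefined behavior in C, and on
 * x86 the hardware masks the shift count, so BIT_ULL(vp_index) with
 * vp_index >= 64 can yield a nonzero result instead of 0. Checking
 * the index first sidesteps both problems.
 */
static int vp_in_mask(uint64_t processor_mask, uint32_t vp_index)
{
	if (vp_index >= 64)	/* out of range: never in a 64-bit mask */
		return 0;
	return (processor_mask & (1ULL << vp_index)) != 0;
}
```

In the hypercall loop this would replace the bare `flush.processor_mask & BIT_ULL(hv->vp_index)` test.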
>
>> + /*
>> + * vcpu->arch.cr3 may not be up-to-date for running vCPUs so we
>> + * can't analyze it here, flush TLB regardless of the specified
>> + * address space.
>> + */
>> + kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
>> +
>> + /*
>> + * It is possible that vCPU will migrate and we will kick wrong
>> + * CPU but vCPU's TLB will anyway be flushed upon migration as
>> + * we already made KVM_REQ_TLB_FLUSH request.
>> + */
>> + cpu = vcpu->cpu;
>> + if (cpu != -1 && cpu != me && cpu_online(cpu) &&
>> + kvm_arch_vcpu_should_kick(vcpu))
>> + cpumask_set_cpu(cpu, &hv_current->tlb_lush);
>> + }
>> +
>> + if (!cpumask_empty(&hv_current->tlb_lush))
>> + smp_call_function_many(&hv_current->tlb_lush, ack_flush,
>> + NULL, true);
>
> Hm, quite a lot of code duplication with EX hypercall and also
> kvm_make_all_cpus_request ... I'm thinking about making something like
>
> kvm_make_some_cpus_request(struct kvm *kvm, unsigned int req,
> bool (*predicate)(struct kvm_vcpu *vcpu))
>
> or to implement a vp_index -> vcpu mapping and using
>
> kvm_vcpu_request_mask(struct kvm *kvm, unsigned int req, long *vcpu_bitmap)
>
> The latter would probably simplify logic of the EX hypercall.
>
> What do you think?
Makes sense, I'll take a look. Thanks!
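The vp_index -> vcpu mapping plus a mask-based request helper looks cleanest to me. Roughly something like the toy model below (all names made up, just to sketch the shape of the proposed kvm_vcpu_request_mask(); the real helper would of course also have to kick vCPUs that are IN_GUEST_MODE):

```c
#include <stddef.h>

#define TOY_REQ_TLB_FLUSH 0x1u
#define TOY_BITS_PER_LONG (8 * sizeof(unsigned long))

struct toy_vcpu {
	unsigned int requests;	/* pending request bits */
};

/* Set @req on every vCPU whose bit is set in @vcpu_bitmap. */
static void toy_request_mask(struct toy_vcpu *vcpus, size_t nr_vcpus,
			     unsigned int req,
			     const unsigned long *vcpu_bitmap)
{
	for (size_t i = 0; i < nr_vcpus; i++)
		if (vcpu_bitmap[i / TOY_BITS_PER_LONG] &
		    (1UL << (i % TOY_BITS_PER_LONG)))
			vcpus[i].requests |= req;
}
```

With a vp_index -> vcpu mapping, both the plain and EX flush hypercalls would just translate their processor masks into such a vcpu_bitmap and make a single call.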
--
Vitaly
Thread overview: 13+ messages
2018-04-16 11:08 [PATCH v3 0/6] KVM: x86: hyperv: PV TLB flush for Windows guests Vitaly Kuznetsov
2018-04-16 11:08 ` [PATCH v3 1/6] x86/hyper-v: move struct hv_flush_pcpu{,ex} definitions to common header Vitaly Kuznetsov
2018-05-10 19:17 ` Radim Krčmář
2018-04-16 11:08 ` [PATCH v3 2/6] KVM: x86: hyperv: use defines when parsing hypercall parameters Vitaly Kuznetsov
2018-04-16 11:08 ` [PATCH v3 3/6] KVM: x86: hyperv: do rep check for each hypercall separately Vitaly Kuznetsov
2018-04-16 11:08 ` [PATCH v3 4/6] KVM: x86: hyperv: simplistic HVCALL_FLUSH_VIRTUAL_ADDRESS_{LIST,SPACE} implementation Vitaly Kuznetsov
2018-05-10 19:40 ` Radim Krčmář
2018-05-11 12:27 ` Vitaly Kuznetsov [this message]
2018-05-13 8:47 ` Vitaly Kuznetsov
2018-04-16 11:08 ` [PATCH v3 5/6] KVM: x86: hyperv: simplistic HVCALL_FLUSH_VIRTUAL_ADDRESS_{LIST,SPACE}_EX implementation Vitaly Kuznetsov
2018-05-10 20:08 ` Radim Krčmář
2018-04-16 11:08 ` [PATCH v3 6/6] KVM: x86: hyperv: declare KVM_CAP_HYPERV_TLBFLUSH capability Vitaly Kuznetsov
2018-05-02 7:41 ` [PATCH v3 0/6] KVM: x86: hyperv: PV TLB flush for Windows guests Vitaly Kuznetsov