From mboxrd@z Thu Jan 1 00:00:00 1970
From: Vitaly Kuznetsov
Subject: Re: [PATCH 3/5] KVM: x86: hyperv: simplistic HVCALL_FLUSH_VIRTUAL_ADDRESS_{LIST,SPACE} implementation
Date: Wed, 04 Apr 2018 11:41:46 +0200
Message-ID: <878ta3cnqd.fsf@vitty.brq.redhat.com>
References: <20180402161059.8488-1-vkuznets@redhat.com> <20180402161059.8488-4-vkuznets@redhat.com> <20180403191508.GA7386@flask> <87d0zfcoed.fsf@vitty.brq.redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Cc: kvm@vger.kernel.org, x86@kernel.org, Paolo Bonzini, Roman Kagan, "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, "Michael Kelley (EOSG)", Mohammed Gamal, Cathy Avery, linux-kernel@vger.kernel.org
To: Radim Krčmář
Return-path:
In-Reply-To: <87d0zfcoed.fsf@vitty.brq.redhat.com> (Vitaly Kuznetsov's message of "Wed, 04 Apr 2018 11:27:22 +0200")
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

Vitaly Kuznetsov writes:

> Radim Krčmář writes:
>
>> 2018-04-02 18:10+0200, Vitaly Kuznetsov:
>>> +		if (vcpu != current_vcpu)
>>> +			kvm_vcpu_kick(vcpu);
>>
>> The spec says that
>>
>> "This call guarantees that by the time control returns back to the
>> caller, the observable effects of all flushes on the specified virtual
>> processors have occurred."
>>
>> Other KVM code doesn't assume that kvm_vcpu_kick() and a delay provides
>> that guarantee; kvm_make_all_cpus_request waits for the target CPU to
>> exit before saying that TLB has been flushed.
>>
>> I am leaning towards the safer variant here as well. (Anyway, it's a
>> good time to figure out if we really need it.)
>
> Ha, it depends on how we define "observable effects" :-)
>
> I think kvm_vcpu_kick() is enough as the corresponding vCPU can't
> actually observe the old mapping after being kicked (even if we didn't
> flush yet, we're not running). Or do you see any possible problem with
> such a definition?
Oh, now I see it myself -- native_smp_send_reschedule() only does
apic->send_IPI(), so this is indeed unsafe. We need something like
kvm_make_all_cpus_request() with a mask (and, to make it fast, we'll
probably have to pre-allocate these). Will do in v2, thanks!

-- 
Vitaly