From: cdall@kernel.org (Christoffer Dall)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH] KVM: arm/arm64: Close VMID generation race
Date: Mon, 9 Apr 2018 22:51:39 +0200
Message-ID: <20180409205139.GH10904@cbox>
In-Reply-To: <20180409170706.23541-1-marc.zyngier@arm.com>
On Mon, Apr 09, 2018 at 06:07:06PM +0100, Marc Zyngier wrote:
> Before entering the guest, we check whether our VMID is still
> part of the current generation. In order to avoid taking a lock,
> we start with checking that the generation is still current, and
> only if not current do we take the lock, recheck, and update the
> generation and VMID.
>
> This leaves open a small race: A vcpu can bump up the global
> generation number as well as the VM's, but has not updated
> the VMID itself yet.
>
> At that point another vcpu from the same VM comes in, checks
> the generation (and finds it not needing anything), and jumps
> into the guest. At this point, we end up with two vcpus belonging
> to the same VM running with two different VMIDs. Eventually, the
> VMID used by the second vcpu will get reassigned, and things will
> really go wrong...
>
> A simple solution would be to drop this initial check, and always take
> the lock. This is likely to cause performance issues. A middle ground
> is to convert the spinlock to a rwlock, and only take the read lock
> on the fast path. If the check fails at that point, drop it and
> acquire the write lock, rechecking the condition.
>
> This ensures that the above scenario doesn't occur.
>
> Reported-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> I haven't seen any reply from Shannon, so reposting this to
> a slightly wider audience for feedback.
>
> virt/kvm/arm/arm.c | 15 ++++++++++-----
> 1 file changed, 10 insertions(+), 5 deletions(-)
>
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index dba629c5f8ac..a4c1b76240df 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -63,7 +63,7 @@ static DEFINE_PER_CPU(struct kvm_vcpu *, kvm_arm_running_vcpu);
>  static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
>  static u32 kvm_next_vmid;
>  static unsigned int kvm_vmid_bits __read_mostly;
> -static DEFINE_SPINLOCK(kvm_vmid_lock);
> +static DEFINE_RWLOCK(kvm_vmid_lock);
>  
>  static bool vgic_present;
>  
> @@ -473,11 +473,16 @@ static void update_vttbr(struct kvm *kvm)
>  {
>  	phys_addr_t pgd_phys;
>  	u64 vmid;
> +	bool new_gen;
>  
> -	if (!need_new_vmid_gen(kvm))
> +	read_lock(&kvm_vmid_lock);
> +	new_gen = need_new_vmid_gen(kvm);
> +	read_unlock(&kvm_vmid_lock);
> +
> +	if (!new_gen)
>  		return;
>  
> -	spin_lock(&kvm_vmid_lock);
> +	write_lock(&kvm_vmid_lock);
>  
>  	/*
>  	 * We need to re-check the vmid_gen here to ensure that if another vcpu
> @@ -485,7 +490,7 @@ static void update_vttbr(struct kvm *kvm)
>  	 * use the same vmid.
>  	 */
>  	if (!need_new_vmid_gen(kvm)) {
> -		spin_unlock(&kvm_vmid_lock);
> +		write_unlock(&kvm_vmid_lock);
>  		return;
>  	}
>  
> @@ -519,7 +524,7 @@ static void update_vttbr(struct kvm *kvm)
>  	vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits);
>  	kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid;
>  
> -	spin_unlock(&kvm_vmid_lock);
> +	write_unlock(&kvm_vmid_lock);
>  }
>  
>  static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
> --
> 2.14.2
>
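The change above boils down to the usual read-mostly fast path: take the
read lock for the common "is my generation current?" check, and only fall
back to the write lock, rechecking under it, when the generation has
actually rolled over. A minimal userspace sketch of the same shape (the
names vm_state, refresh_vmid and next_vmid are made up for illustration,
with pthread rwlocks standing in for the kernel primitives; this is not
the KVM code):

/*
 * Userspace sketch of the read-lock fast path / write-lock slow path in
 * the patch above.  The names (vm_state, refresh_vmid, next_vmid) are
 * made up for illustration; this is not the KVM code.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

static pthread_rwlock_t vmid_lock = PTHREAD_RWLOCK_INITIALIZER;
static uint64_t global_gen = 1;

struct vm_state {
	uint64_t gen;
	uint32_t vmid;
};

static bool need_new_gen(struct vm_state *vm)
{
	return vm->gen != global_gen;
}

static void refresh_vmid(struct vm_state *vm, uint32_t next_vmid)
{
	bool stale;

	/* Fast path: the read lock only keeps a concurrent update out. */
	pthread_rwlock_rdlock(&vmid_lock);
	stale = need_new_gen(vm);
	pthread_rwlock_unlock(&vmid_lock);

	if (!stale)
		return;

	/*
	 * Slow path: recheck under the write lock, as another vcpu may
	 * have updated this VM's vmid for us in the meantime.
	 */
	pthread_rwlock_wrlock(&vmid_lock);
	if (need_new_gen(vm)) {
		vm->vmid = next_vmid;
		vm->gen = global_gen;
	}
	pthread_rwlock_unlock(&vmid_lock);
}

The write lock is only ever taken when a rollover is actually pending, so
the common case stays on the shared read lock.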
The patch above looks correct to me. I am wondering whether something
like the following would also work; it may be slightly more efficient,
although I doubt the difference can be measured:
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index dba629c5f8ac..7ac869bcad21 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -458,7 +458,9 @@ void force_vm_exit(const cpumask_t *mask)
  */
 static bool need_new_vmid_gen(struct kvm *kvm)
 {
-	return unlikely(kvm->arch.vmid_gen != atomic64_read(&kvm_vmid_gen));
+	u64 current_vmid_gen = atomic64_read(&kvm_vmid_gen);
+	smp_rmb(); /* Orders read of kvm_vmid_gen and kvm->arch.vmid */
+	return unlikely(kvm->arch.vmid_gen != current_vmid_gen);
 }
 
 /**
@@ -508,10 +510,11 @@ static void update_vttbr(struct kvm *kvm)
 		kvm_call_hyp(__kvm_flush_vm_context);
 	}
 
-	kvm->arch.vmid_gen = atomic64_read(&kvm_vmid_gen);
 	kvm->arch.vmid = kvm_next_vmid;
 	kvm_next_vmid++;
 	kvm_next_vmid &= (1 << kvm_vmid_bits) - 1;
+	smp_wmb();
+	kvm->arch.vmid_gen = atomic64_read(&kvm_vmid_gen);
 
 	/* update vttbr to be used with the new vmid */
 	pgd_phys = virt_to_phys(kvm->arch.pgd);
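
The intent is the standard publish/consume pairing: the writer sets
kvm->arch.vmid first and only then, after a write barrier, stores the
matching vmid_gen, while the reader orders its read of the generation
before its read of the vmid, so that seeing a current generation implies
also seeing the vmid published with it. A rough userspace analogue of
that pairing using C11 fences (the names publish_vmid and vmid_is_current
are invented; this is a simplified sketch, not the kernel code):

/*
 * Userspace analogue of the smp_wmb()/smp_rmb() pairing sketched above,
 * using C11 fences.  The names (publish_vmid, vmid_is_current) are
 * invented for illustration; this is not the kernel code.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

static _Atomic uint64_t global_gen = 1;

struct vm_state {
	_Atomic uint32_t vmid;
	_Atomic uint64_t gen;
};

/*
 * Writer (under the vmid lock): set the new vmid, then publish the
 * generation it belongs to.
 */
static void publish_vmid(struct vm_state *vm, uint32_t new_vmid)
{
	atomic_store_explicit(&vm->vmid, new_vmid, memory_order_relaxed);
	atomic_thread_fence(memory_order_release);	/* plays the role of smp_wmb() */
	atomic_store_explicit(&vm->gen,
			      atomic_load_explicit(&global_gen, memory_order_relaxed),
			      memory_order_relaxed);
}

/*
 * Reader (lockless fast path): if the VM's generation is current, the
 * vmid read after the fence is the one published with it.
 */
static bool vmid_is_current(struct vm_state *vm, uint32_t *vmid_out)
{
	uint64_t cur = atomic_load_explicit(&global_gen, memory_order_relaxed);

	if (atomic_load_explicit(&vm->gen, memory_order_relaxed) != cur)
		return false;	/* stale: caller falls back to the slow path */

	atomic_thread_fence(memory_order_acquire);	/* plays the role of smp_rmb() */
	*vmid_out = atomic_load_explicit(&vm->vmid, memory_order_relaxed);
	return true;
}

Whether this ordering alone is sufficient for the full VMID rollover path
is a separate question; the sketch only shows the ordering the barriers
are meant to provide.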
Thanks,
-Christoffer