From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4 08/14] KVM: ARM: World-switch implementation
Date: Mon, 19 Nov 2012 14:57:58 +0000
Message-ID: <20121119145758.GA3205@mudshark.cambridge.arm.com>
In-Reply-To: <20121110154306.2836.93473.stgit@chazy-air>

On Sat, Nov 10, 2012 at 03:43:06PM +0000, Christoffer Dall wrote:
> Provides a complete world-switch implementation to switch to other guests
> running in non-secure modes. Includes Hyp exception handlers that
> capture the necessary exception information and store it on the VCPU
> and KVM structures.

[...]

> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index 15e2ab1..d8f8c60 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -40,6 +40,7 @@
>  #include <asm/kvm_arm.h>
>  #include <asm/kvm_asm.h>
>  #include <asm/kvm_mmu.h>
> +#include <asm/kvm_emulate.h>
> 
>  #ifdef REQUIRES_VIRT
>  __asm__(".arch_extension       virt");
> @@ -49,6 +50,10 @@ static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
>  static struct vfp_hard_struct __percpu *kvm_host_vfp_state;
>  static unsigned long hyp_default_vectors;
> 
> +/* The VMID used in the VTTBR */
> +static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
> +static u8 kvm_next_vmid;
> +static DEFINE_SPINLOCK(kvm_vmid_lock);
> 
>  int kvm_arch_hardware_enable(void *garbage)
>  {
> @@ -264,6 +269,8 @@ int __attribute_const__ kvm_target_cpu(void)
> 
>  int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
>  {
> +       /* Force users to call KVM_ARM_VCPU_INIT */
> +       vcpu->arch.target = -1;
>         return 0;
>  }
> 
> @@ -274,6 +281,7 @@ void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu)
>  void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  {
>         vcpu->cpu = cpu;
> +       vcpu->arch.vfp_host = this_cpu_ptr(kvm_host_vfp_state);
>  }
> 
>  void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
> @@ -306,12 +314,168 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *v)
> 
>  int kvm_arch_vcpu_in_guest_mode(struct kvm_vcpu *v)
>  {
> +       return v->mode == IN_GUEST_MODE;
> +}
> +
> +static void reset_vm_context(void *info)
> +{
> +       kvm_call_hyp(__kvm_flush_vm_context);
> +}
> +
> +/**
> + * need_new_vmid_gen - check that the VMID is still valid
> + * @kvm: The VM whose VMID to check
> + *
> + * Returns true if there is a new generation of VMIDs being used.
> + *
> + * The hardware supports only 256 values with the value zero reserved for the
> + * host, so we check if an assigned value belongs to a previous generation,
> + * which requires us to assign a new value. If we're the first to use a
> + * VMID for the new generation, we must flush necessary caches and TLBs on all
> + * CPUs.
> + */
> +static bool need_new_vmid_gen(struct kvm *kvm)
> +{
> +       return unlikely(kvm->arch.vmid_gen != atomic64_read(&kvm_vmid_gen));
> +}
> +
> +/**
> + * update_vttbr - Update the VTTBR with a valid VMID before the guest runs
> + * @kvm:       The guest that we are about to run
> + *
> + * Called from kvm_arch_vcpu_ioctl_run before entering the guest to ensure the
> + * VM has a valid VMID, otherwise assigns a new one and flushes corresponding
> + * caches and TLBs.
> + */
> +static void update_vttbr(struct kvm *kvm)
> +{
> +       phys_addr_t pgd_phys;
> +       u64 vmid;
> +
> +       if (!need_new_vmid_gen(kvm))
> +               return;
> +
> +       spin_lock(&kvm_vmid_lock);
> +
> +       /* First user of a new VMID generation? */
> +       if (unlikely(kvm_next_vmid == 0)) {
> +               atomic64_inc(&kvm_vmid_gen);
> +               kvm_next_vmid = 1;
> +
> +               /*
> +                * On SMP we know no other CPUs can use this CPU's or
> +                * each other's VMID, since kvm_vmid_lock prevents them
> +                * from re-entering the guest.
> +                */
> +               on_each_cpu(reset_vm_context, NULL, 1);
> +       }
> +
> +       kvm->arch.vmid_gen = atomic64_read(&kvm_vmid_gen);
> +       kvm->arch.vmid = kvm_next_vmid;
> +       kvm_next_vmid++;
> +
> +       /* update vttbr to be used with the new vmid */
> +       pgd_phys = virt_to_phys(kvm->arch.pgd);
> +       vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK;
> +       kvm->arch.vttbr = pgd_phys & VTTBR_BADDR_MASK;
> +       kvm->arch.vttbr |= vmid;
> +
> +       spin_unlock(&kvm_vmid_lock);
> +}

I must be missing something here: how do you ensure that a guest running
on multiple CPUs continues to have the same VMID across them after a
rollover?
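
To make the scenario concrete, here's the sort of interleaving I have in
mind (a hypothetical trace, so I may well be missing the mechanism that
prevents it):

	CPU0 (vcpu0 of VM A)            CPU1 (vcpu1 of VM A)
	------------------------        ------------------------
	in the guest with VMID x
	                                rollover (another VM wraps
	                                kvm_next_vmid): kvm_vmid_gen++,
	                                __kvm_flush_vm_context on all CPUs
	                                update_vttbr(): VM A gets VMID y
	                                enter the guest with VMID y
	...still using VMID x?

i.e. what forces CPU0 off the old VMID before CPU1 starts running under the
new one?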

> +
> +/*
> + * Return > 0 to return to the guest, < 0 on error, 0 (and set exit_reason)
> + * on a proper exit to QEMU.
> + */
> +static int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
> +                      int exception_index)
> +{
> +       run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>         return 0;
>  }
> 
> +/**
> + * kvm_arch_vcpu_ioctl_run - the main VCPU run function to execute guest code
> + * @vcpu:      The VCPU pointer
> + * @run:       The kvm_run structure pointer used for userspace state exchange
> + *
> + * This function is called through the KVM_RUN ioctl from user space. It
> + * executes VM code in a loop until the time slice for the process is used
> + * up or some emulation is needed from user space, in which case the function
> + * returns 0 with the kvm_run structure filled in with the data required for
> + * the requested emulation.
> + */
>  int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  {
> -       return -EINVAL;
> +       int ret;
> +       sigset_t sigsaved;
> +
> +       /* Make sure they initialize the vcpu with KVM_ARM_VCPU_INIT */
> +       if (unlikely(vcpu->arch.target < 0))
> +               return -ENOEXEC;
> +
> +       if (vcpu->sigset_active)
> +               sigprocmask(SIG_SETMASK, &vcpu->sigset, &sigsaved);
> +
> +       ret = 1;
> +       run->exit_reason = KVM_EXIT_UNKNOWN;
> +       while (ret > 0) {
> +               /*
> +                * Check conditions before entering the guest
> +                */
> +               cond_resched();
> +
> +               update_vttbr(vcpu->kvm);
> +
> +               local_irq_disable();
> +
> +               /*
> +                * Re-check atomic conditions
> +                */
> +               if (signal_pending(current)) {
> +                       ret = -EINTR;
> +                       run->exit_reason = KVM_EXIT_INTR;
> +               }
> +
> +               if (ret <= 0 || need_new_vmid_gen(vcpu->kvm)) {
> +                       local_irq_enable();
> +                       continue;
> +               }

I don't see what the VMID generation check buys you here as you're not
holding the vmid lock, so rollover can occur at any time. If rollover
*does* occur, you'll get signalled when re-enabling interrupts during
guest entry, right?

Is it because your rollover handling doesn't actually update the vttbr
directly and instead relies on the next guest sched-in to do it? If so, this
all feels pretty racy to me.
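
For what it's worth, the only window I can see the recheck closing is this
one (again, a hypothetical trace):

	CPU0 (this VCPU)                CPU1 (VCPU of another VM)
	------------------------        ------------------------
	update_vttbr()  [gen G]
	local_irq_disable()
	                                rollover: kvm_vmid_gen -> G+1,
	                                on_each_cpu(reset_vm_context, ...)
	                                blocks with the IPI to CPU0 pending
	need_new_vmid_gen() -> true
	local_irq_enable(), IPI runs
	back around the loop to pick
	up a fresh VMID

and that only works if the hardware VTTBR is reloaded from kvm->arch.vttbr
on every guest entry; is that guaranteed by the world-switch code?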

Will
