From: Christoffer Dall <christoffer.dall@linaro.org>
To: Marc Zyngier <marc.zyngier@arm.com>
Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
Steve Capper <steve.capper@linaro.org>
Subject: Re: [PATCH 2/4] arm/arm64: KVM: Use set/way op trapping to track the state of the caches
Date: Fri, 9 Jan 2015 12:19:44 +0100
Message-ID: <20150109111944.GO21092@cbox>
In-Reply-To: <1420718349-24152-3-git-send-email-marc.zyngier@arm.com>
On Thu, Jan 08, 2015 at 11:59:07AM +0000, Marc Zyngier wrote:
> Trying to emulate the behaviour of set/way cache ops is fairly
> pointless, as there are too many ways we can end up missing stuff.
> Also, there are some system caches out there that simply ignore
> set/way operations.
>
> So instead of trying to implement them, let's convert them to VA ops,
> and use them as a way to re-enable the trapping of VM ops. That way,
> we can detect the point when the MMU/caches are turned off, and do
> a full VM flush (which is what the guest was trying to do anyway).
>
> This allows a 32-bit zImage to boot on the APM thingy, and will
> probably help bootloaders in general.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
> arch/arm/include/asm/kvm_host.h | 3 --
> arch/arm/kvm/arm.c | 10 ----
> arch/arm/kvm/coproc.c | 90 +++++++++++++++++++++------------
> arch/arm64/include/asm/kvm_host.h | 3 --
> arch/arm64/kvm/sys_regs.c | 102 ++++++++++++++++++++++----------------
> 5 files changed, 116 insertions(+), 92 deletions(-)
>
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index 254e065..04b4ea0 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -125,9 +125,6 @@ struct kvm_vcpu_arch {
> * Anything that is not used directly from assembly code goes
> * here.
> */
> - /* dcache set/way operation pending */
> - int last_pcpu;
> - cpumask_t require_dcache_flush;
>
> /* Don't run the guest on this vcpu */
> bool pause;
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index 2d6d910..0b0d58a 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -281,15 +281,6 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> vcpu->cpu = cpu;
> vcpu->arch.host_cpu_context = this_cpu_ptr(kvm_host_cpu_state);
>
> - /*
> - * Check whether this vcpu requires the cache to be flushed on
> - * this physical CPU. This is a consequence of doing dcache
> - * operations by set/way on this vcpu. We do it here to be in
> - * a non-preemptible section.
> - */
> - if (cpumask_test_and_clear_cpu(cpu, &vcpu->arch.require_dcache_flush))
> - flush_cache_all(); /* We'd really want v7_flush_dcache_all() */
> -
> kvm_arm_set_running_vcpu(vcpu);
> }
>
> @@ -541,7 +532,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
> ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);
>
> vcpu->mode = OUTSIDE_GUEST_MODE;
> - vcpu->arch.last_pcpu = smp_processor_id();
> kvm_guest_exit();
> trace_kvm_exit(*vcpu_pc(vcpu));
> /*
> diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
> index 7928dbd..3d352a5 100644
> --- a/arch/arm/kvm/coproc.c
> +++ b/arch/arm/kvm/coproc.c
> @@ -189,43 +189,57 @@ static bool access_l2ectlr(struct kvm_vcpu *vcpu,
> return true;
> }
>
> -/* See note at ARM ARM B1.14.4 */
> +/*
> + * See note at ARMv7 ARM B1.14.4 (TL;DR: S/W ops are not easily virtualized).
> + *
> + * Main problems:
> + * - S/W ops are local to a CPU (not broadcast)
> + * - We have line migration behind our back (speculation)
> + * - System caches don't support S/W at all (damn!)
> + *
> + * In the face of the above, the best we can do is to try and convert
> + * S/W ops to VA ops. Because the guest is not allowed to infer the
> + * S/W to PA mapping, it can only use S/W to nuke the whole cache,
> + * which is a rather good thing for us.
> + *
> + * Also, it is only used when turning caches on/off ("The expected
> + * usage of the cache maintenance instructions that operate by set/way
> + * is associated with the cache maintenance instructions associated
> + * with the powerdown and powerup of caches, if this is required by
> + * the implementation.").
> + *
> + * We use the following policy:
> + *
> + * - If we trap a S/W operation, we enable VM trapping to detect
> + * caches being turned on/off.
> + *
> + * - If the caches have already been turned off when doing the S/W op,
> + * we nuke the whole VM cache.
> + *
> + * - We flush the caches both when they are turned on and off.
> + *
> + * - Once the caches are enabled, we stop trapping VM ops.
> + */
> static bool access_dcsw(struct kvm_vcpu *vcpu,
> const struct coproc_params *p,
> const struct coproc_reg *r)
> {
> - unsigned long val;
> - int cpu;
> -
> if (!p->is_write)
> return read_from_write_only(vcpu, p);
>
> - cpu = get_cpu();
> -
> - cpumask_setall(&vcpu->arch.require_dcache_flush);
> - cpumask_clear_cpu(cpu, &vcpu->arch.require_dcache_flush);
> -
> - /* If we were already preempted, take the long way around */
> - if (cpu != vcpu->arch.last_pcpu) {
> - flush_cache_all();
> - goto done;
> - }
> -
> - val = *vcpu_reg(vcpu, p->Rt1);
> -
> - switch (p->CRm) {
> - case 6: /* Upgrade DCISW to DCCISW, as per HCR.SWIO */
> - case 14: /* DCCISW */
> - asm volatile("mcr p15, 0, %0, c7, c14, 2" : : "r" (val));
> - break;
> -
> - case 10: /* DCCSW */
> - asm volatile("mcr p15, 0, %0, c7, c10, 2" : : "r" (val));
> - break;
> - }
> + /*
> + * If this is the first time we do a S/W operation
> + * (i.e. HCR_TVM not set) and the caches are already disabled,
> + * flush the whole memory.
> + *
> + * Otherwise, don't do anything. Just wait for the MMU +
this is misleading because you always enable trapping of the cache
registers, so probably say "don't flush anything". We can fix this up
when applying, though; see the suggested wording below.
> + * Caches to be turned off, and use that to actually clean the
> + * caches.
> + */
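i.e. just s/don't do anything/don't flush anything/, so that the second
half of the comment reads:

	 * Otherwise, don't flush anything. Just wait for the MMU +
	 * Caches to be turned off, and use that to actually clean the
	 * caches.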
> + if (!(vcpu->arch.hcr & HCR_TVM) && !vcpu_has_cache_enabled(vcpu))
> + stage2_flush_vm(vcpu->kvm);
>
> -done:
> - put_cpu();
> + vcpu->arch.hcr |= HCR_TVM;
>
> return true;
> }
> @@ -258,12 +272,24 @@ bool access_sctlr(struct kvm_vcpu *vcpu,
> const struct coproc_params *p,
> const struct coproc_reg *r)
> {
> + bool was_enabled = vcpu_has_cache_enabled(vcpu);
> + bool now_enabled;
> +
> access_vm_reg(vcpu, p, r);
>
> - if (vcpu_has_cache_enabled(vcpu)) { /* MMU+Caches enabled? */
> - vcpu->arch.hcr &= ~HCR_TVM;
> + now_enabled = vcpu_has_cache_enabled(vcpu);
> +
> + /*
> + * If switching the MMU+caches on, need to invalidate the caches.
> + * If switching it off, need to clean the caches.
> + * Clean + invalidate does the trick always.
couldn't a clean here then be writing bogus data to RAM on an initial
boot of the guest?
> + */
> + if (now_enabled != was_enabled)
> stage2_flush_vm(vcpu->kvm);
> - }
> +
> + /* Caches are now on, stop trapping VM ops (until a S/W op) */
> + if (now_enabled)
> + vcpu->arch.hcr &= ~HCR_TVM;
>
> return true;
> }
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 0b7dfdb..acd101a 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -116,9 +116,6 @@ struct kvm_vcpu_arch {
> * Anything that is not used directly from assembly code goes
> * here.
> */
> - /* dcache set/way operation pending */
> - int last_pcpu;
> - cpumask_t require_dcache_flush;
>
> /* Don't run the guest */
> bool pause;
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 3d7c2df..5c6540b 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -69,55 +69,57 @@ static u32 get_ccsidr(u32 csselr)
> return ccsidr;
> }
>
> -static void do_dc_cisw(u32 val)
> -{
> - asm volatile("dc cisw, %x0" : : "r" (val));
> - dsb(ish);
> -}
> -
> -static void do_dc_csw(u32 val)
> -{
> - asm volatile("dc csw, %x0" : : "r" (val));
> - dsb(ish);
> -}
> -
> -/* See note at ARM ARM B1.14.4 */
> +/*
> + * See note at ARMv7 ARM B1.14.4 (TL;DR: S/W ops are not easily virtualized).
> + *
> + * Main problems:
> + * - S/W ops are local to a CPU (not broadcast)
> + * - We have line migration behind our back (speculation)
> + * - System caches don't support S/W at all (damn!)
> + *
> + * In the face of the above, the best we can do is to try and convert
> + * S/W ops to VA ops. Because the guest is not allowed to infer the
> + * S/W to PA mapping, it can only use S/W to nuke the whole cache,
> + * which is a rather good thing for us.
> + *
> + * Also, it is only used when turning caches on/off ("The expected
> + * usage of the cache maintenance instructions that operate by set/way
> + * is associated with the cache maintenance instructions associated
> + * with the powerdown and powerup of caches, if this is required by
> + * the implementation.").
> + *
> + * We use the following policy:
> + *
> + * - If we trap a S/W operation, we enable VM trapping to detect
> + * caches being turned on/off.
> + *
> + * - If the caches have already been turned off when doing the S/W op,
> + * we nuke the whole VM cache.
> + *
> + * - We flush the caches both when they are turned on and off.
> + *
> + * - Once the caches are enabled, we stop trapping VM ops.
> + */
can't we factor this out into a handler in arch/arm/kvm/mmu.c and only
have this comment in one place?
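Something like the below, maybe? (Completely untested sketch, and I'm
inventing vcpu_get_hcr()/vcpu_set_hcr() accessors here to hide the
hcr vs. hcr_el2 difference between the two architectures.)

/* arch/arm/kvm/mmu.c, declarations shared with arm64 */

/*
 * vcpu_get_hcr()/vcpu_set_hcr() are made-up accessors wrapping
 * vcpu->arch.hcr (ARM) / vcpu->arch.hcr_el2 (arm64).
 */
void kvm_set_way_flush(struct kvm_vcpu *vcpu)
{
	unsigned long hcr = vcpu_get_hcr(vcpu);

	/*
	 * If this is the first S/W operation (HCR_TVM not yet set)
	 * and the caches are still disabled, flush the whole VM.
	 */
	if (!(hcr & HCR_TVM) && !vcpu_has_cache_enabled(vcpu))
		stage2_flush_vm(vcpu->kvm);

	/* Trap VM ops so we can catch the caches being turned on/off */
	vcpu_set_hcr(vcpu, hcr | HCR_TVM);
}

void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled)
{
	bool now_enabled = vcpu_has_cache_enabled(vcpu);

	/* Clean + invalidate on any change of the cache state */
	if (now_enabled != was_enabled)
		stage2_flush_vm(vcpu->kvm);

	/* Caches are now on, stop trapping VM ops (until the next S/W op) */
	if (now_enabled)
		vcpu_set_hcr(vcpu, vcpu_get_hcr(vcpu) & ~HCR_TVM);
}

Then both access_dcsw() implementations collapse into a call to
kvm_set_way_flush(), both access_sctlr() versions into
kvm_toggle_cache(), and the big comment only needs to live in mmu.c.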
> static bool access_dcsw(struct kvm_vcpu *vcpu,
> const struct sys_reg_params *p,
> const struct sys_reg_desc *r)
> {
> - unsigned long val;
> - int cpu;
> -
> if (!p->is_write)
> return read_from_write_only(vcpu, p);
>
> - cpu = get_cpu();
> -
> - cpumask_setall(&vcpu->arch.require_dcache_flush);
> - cpumask_clear_cpu(cpu, &vcpu->arch.require_dcache_flush);
> -
> - /* If we were already preempted, take the long way around */
> - if (cpu != vcpu->arch.last_pcpu) {
> - flush_cache_all();
> - goto done;
> - }
> -
> - val = *vcpu_reg(vcpu, p->Rt);
> -
> - switch (p->CRm) {
> - case 6: /* Upgrade DCISW to DCCISW, as per HCR.SWIO */
> - case 14: /* DCCISW */
> - do_dc_cisw(val);
> - break;
> -
> - case 10: /* DCCSW */
> - do_dc_csw(val);
> - break;
> - }
> + /*
> + * If this is the first time we do a S/W operation
> + * (i.e. HCR_TVM not set) and the caches are already disabled,
> + * flush the whole memory.
> + *
> + * Otherwise, don't do anything. Just wait for the MMU +
> + * Caches to be turned off, and use that to actually clean the
> + * caches.
> + */
> + if (!(vcpu->arch.hcr_el2 & HCR_TVM) && !vcpu_has_cache_enabled(vcpu))
> + stage2_flush_vm(vcpu->kvm);
>
> -done:
> - put_cpu();
> + vcpu->arch.hcr_el2 |= HCR_TVM;
>
> return true;
> }
> @@ -155,12 +157,24 @@ static bool access_sctlr(struct kvm_vcpu *vcpu,
> const struct sys_reg_params *p,
> const struct sys_reg_desc *r)
> {
> + bool was_enabled = vcpu_has_cache_enabled(vcpu);
> + bool now_enabled;
> +
> access_vm_reg(vcpu, p, r);
>
> - if (vcpu_has_cache_enabled(vcpu)) { /* MMU+Caches enabled? */
> - vcpu->arch.hcr_el2 &= ~HCR_TVM;
> + now_enabled = vcpu_has_cache_enabled(vcpu);
> +
> + /*
> + * If switching the MMU+caches on, need to invalidate the caches.
> + * If switching it off, need to clean the caches.
> + * Clean + invalidate does the trick always.
same question as above (hint: should we really not try to share this
code, along the lines of the sketch further up?)
> + */
> + if (now_enabled != was_enabled)
> stage2_flush_vm(vcpu->kvm);
> - }
> +
> + /* Caches are now on, stop trapping VM ops (until a S/W op) */
> + if (now_enabled)
> + vcpu->arch.hcr_el2 &= ~HCR_TVM;
>
> return true;
> }
> --
> 2.1.4
>
Thanks,
-Christoffer