* [PATCH 1/6] arm64: cpuinfo: remove I-cache VIPT aliasing detection
@ 2017-03-10 20:32 Will Deacon
From: Will Deacon @ 2017-03-10 20:32 UTC (permalink / raw)
To: linux-arm-kernel
The CCSIDR_EL1.{NumSets,Associativity,LineSize} fields are only for use
in conjunction with set/way cache maintenance and are not guaranteed to
represent the actual microarchitectural features of a design.
The architecture explicitly states:
| You cannot make any inference about the actual sizes of caches based
| on these parameters.
We currently use these fields to determine whether or not the I-cache is
aliasing, which is bogus and known to break on some platforms. Instead,
assume the I-cache is always aliasing if it advertises a VIPT policy.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
arch/arm64/include/asm/cachetype.h | 13 -------------
arch/arm64/kernel/cpuinfo.c | 23 ++++++++++-------------
2 files changed, 10 insertions(+), 26 deletions(-)
diff --git a/arch/arm64/include/asm/cachetype.h b/arch/arm64/include/asm/cachetype.h
index f5588692f1d4..4dbf3d10022d 100644
--- a/arch/arm64/include/asm/cachetype.h
+++ b/arch/arm64/include/asm/cachetype.h
@@ -63,19 +63,6 @@ extern unsigned long __icache_flags;
#define CACHE_NUMSETS(x) (CCSIDR_EL1_NUMSETS(x) + 1)
#define CACHE_ASSOCIATIVITY(x) (CCSIDR_EL1_ASSOCIATIVITY(x) + 1)
-extern u64 __attribute_const__ cache_get_ccsidr(u64 csselr);
-
-/* Helpers for Level 1 Instruction cache csselr = 1L */
-static inline int icache_get_linesize(void)
-{
- return CACHE_LINESIZE(cache_get_ccsidr(1L));
-}
-
-static inline int icache_get_numsets(void)
-{
- return CACHE_NUMSETS(cache_get_ccsidr(1L));
-}
-
/*
* Whilst the D-side always behaves as PIPT on AArch64, aliasing is
* permitted in the I-cache.
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 5b22c687f02a..155ddd8ad56a 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -289,20 +289,17 @@ static void cpuinfo_detect_icache_policy(struct cpuinfo_arm64 *info)
unsigned int cpu = smp_processor_id();
u32 l1ip = CTR_L1IP(info->reg_ctr);
- if (l1ip != ICACHE_POLICY_PIPT) {
- /*
- * VIPT caches are non-aliasing if the VA always equals the PA
- * in all bit positions that are covered by the index. This is
- * the case if the size of a way (# of sets * line size) does
- * not exceed PAGE_SIZE.
- */
- u32 waysize = icache_get_numsets() * icache_get_linesize();
-
- if (l1ip != ICACHE_POLICY_VIPT || waysize > PAGE_SIZE)
- set_bit(ICACHEF_ALIASING, &__icache_flags);
- }
- if (l1ip == ICACHE_POLICY_AIVIVT)
+ switch (l1ip) {
+ case ICACHE_POLICY_PIPT:
+ break;
+ default:
+ case ICACHE_POLICY_AIVIVT:
set_bit(ICACHEF_AIVIVT, &__icache_flags);
+ /* Fallthrough */
+ case ICACHE_POLICY_VIPT:
+ /* Assume aliasing */
+ set_bit(ICACHEF_ALIASING, &__icache_flags);
+ }
pr_info("Detected %s I-cache on CPU%d\n", icache_policy_str[l1ip], cpu);
}
--
2.1.4
* [PATCH 2/6] arm64: cacheinfo: Remove CCSIDR-based cache information probing
From: Will Deacon @ 2017-03-10 20:32 UTC (permalink / raw)
To: linux-arm-kernel
The CCSIDR_EL1.{NumSets,Associativity,LineSize} fields are only for use
in conjunction with set/way cache maintenance and are not guaranteed to
represent the actual microarchitectural features of a design.
The architecture explicitly states:
| You cannot make any inference about the actual sizes of caches based
| on these parameters.
Furthermore, CCSIDR_EL1.{WT,WB,RA,WA} have been removed retrospectively
from ARMv8 and are now considered to be UNKNOWN.
Since the kernel doesn't make use of set/way cache maintenance and it is
not possible for userspace to execute these instructions, we have no
need for the CCSIDR information in the kernel.
This patch removes the accessors, along with the related portions of the
cacheinfo support, which should instead be reintroduced when firmware has
a mechanism to provide us with reliable information.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
arch/arm64/include/asm/cachetype.h | 24 ------------------------
arch/arm64/kernel/cacheinfo.c | 38 --------------------------------------
2 files changed, 62 deletions(-)
diff --git a/arch/arm64/include/asm/cachetype.h b/arch/arm64/include/asm/cachetype.h
index 4dbf3d10022d..212a0f3d4ecb 100644
--- a/arch/arm64/include/asm/cachetype.h
+++ b/arch/arm64/include/asm/cachetype.h
@@ -40,30 +40,6 @@
extern unsigned long __icache_flags;
/*
- * NumSets, bits[27:13] - (Number of sets in cache) - 1
- * Associativity, bits[12:3] - (Associativity of cache) - 1
- * LineSize, bits[2:0] - (Log2(Number of words in cache line)) - 2
- */
-#define CCSIDR_EL1_WRITE_THROUGH BIT(31)
-#define CCSIDR_EL1_WRITE_BACK BIT(30)
-#define CCSIDR_EL1_READ_ALLOCATE BIT(29)
-#define CCSIDR_EL1_WRITE_ALLOCATE BIT(28)
-#define CCSIDR_EL1_LINESIZE_MASK 0x7
-#define CCSIDR_EL1_LINESIZE(x) ((x) & CCSIDR_EL1_LINESIZE_MASK)
-#define CCSIDR_EL1_ASSOCIATIVITY_SHIFT 3
-#define CCSIDR_EL1_ASSOCIATIVITY_MASK 0x3ff
-#define CCSIDR_EL1_ASSOCIATIVITY(x) \
- (((x) >> CCSIDR_EL1_ASSOCIATIVITY_SHIFT) & CCSIDR_EL1_ASSOCIATIVITY_MASK)
-#define CCSIDR_EL1_NUMSETS_SHIFT 13
-#define CCSIDR_EL1_NUMSETS_MASK 0x7fff
-#define CCSIDR_EL1_NUMSETS(x) \
- (((x) >> CCSIDR_EL1_NUMSETS_SHIFT) & CCSIDR_EL1_NUMSETS_MASK)
-
-#define CACHE_LINESIZE(x) (16 << CCSIDR_EL1_LINESIZE(x))
-#define CACHE_NUMSETS(x) (CCSIDR_EL1_NUMSETS(x) + 1)
-#define CACHE_ASSOCIATIVITY(x) (CCSIDR_EL1_ASSOCIATIVITY(x) + 1)
-
-/*
* Whilst the D-side always behaves as PIPT on AArch64, aliasing is
* permitted in the I-cache.
*/
diff --git a/arch/arm64/kernel/cacheinfo.c b/arch/arm64/kernel/cacheinfo.c
index 3f2250fc391b..380f2e2fbed5 100644
--- a/arch/arm64/kernel/cacheinfo.c
+++ b/arch/arm64/kernel/cacheinfo.c
@@ -17,15 +17,9 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
-#include <linux/bitops.h>
#include <linux/cacheinfo.h>
-#include <linux/cpu.h>
-#include <linux/compiler.h>
#include <linux/of.h>
-#include <asm/cachetype.h>
-#include <asm/processor.h>
-
#define MAX_CACHE_LEVEL 7 /* Max 7 level supported */
/* Ctypen, bits[3(n - 1) + 2 : 3(n - 1)], for n = 1 to 7 */
#define CLIDR_CTYPE_SHIFT(level) (3 * (level - 1))
@@ -43,43 +37,11 @@ static inline enum cache_type get_cache_type(int level)
return CLIDR_CTYPE(clidr, level);
}
-/*
- * Cache Size Selection Register(CSSELR) selects which Cache Size ID
- * Register(CCSIDR) is accessible by specifying the required cache
- * level and the cache type. We need to ensure that no one else changes
- * CSSELR by calling this in non-preemtible context
- */
-u64 __attribute_const__ cache_get_ccsidr(u64 csselr)
-{
- u64 ccsidr;
-
- WARN_ON(preemptible());
-
- write_sysreg(csselr, csselr_el1);
- isb();
- ccsidr = read_sysreg(ccsidr_el1);
-
- return ccsidr;
-}
-
static void ci_leaf_init(struct cacheinfo *this_leaf,
enum cache_type type, unsigned int level)
{
- bool is_icache = type & CACHE_TYPE_INST;
- u64 tmp = cache_get_ccsidr((level - 1) << 1 | is_icache);
-
this_leaf->level = level;
this_leaf->type = type;
- this_leaf->coherency_line_size = CACHE_LINESIZE(tmp);
- this_leaf->number_of_sets = CACHE_NUMSETS(tmp);
- this_leaf->ways_of_associativity = CACHE_ASSOCIATIVITY(tmp);
- this_leaf->size = this_leaf->number_of_sets *
- this_leaf->coherency_line_size * this_leaf->ways_of_associativity;
- this_leaf->attributes =
- ((tmp & CCSIDR_EL1_WRITE_THROUGH) ? CACHE_WRITE_THROUGH : 0) |
- ((tmp & CCSIDR_EL1_WRITE_BACK) ? CACHE_WRITE_BACK : 0) |
- ((tmp & CCSIDR_EL1_READ_ALLOCATE) ? CACHE_READ_ALLOCATE : 0) |
- ((tmp & CCSIDR_EL1_WRITE_ALLOCATE) ? CACHE_WRITE_ALLOCATE : 0);
}
static int __init_cache_level(unsigned int cpu)
--
2.1.4
* [PATCH 3/6] arm64: cache: Remove support for ASID-tagged VIVT I-caches
From: Will Deacon @ 2017-03-10 20:32 UTC (permalink / raw)
To: linux-arm-kernel
As a recent change to ARMv8, ASID-tagged VIVT I-caches are removed
retrospectively from the architecture. Consequently, we don't need to
support them in Linux either.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
arch/arm64/include/asm/cachetype.h | 8 --------
arch/arm64/include/asm/kvm_mmu.h | 2 +-
arch/arm64/kernel/cpufeature.c | 4 ++--
arch/arm64/kernel/cpuinfo.c | 9 +++------
arch/arm64/mm/context.c | 3 ---
arch/arm64/mm/flush.c | 2 --
6 files changed, 6 insertions(+), 22 deletions(-)
diff --git a/arch/arm64/include/asm/cachetype.h b/arch/arm64/include/asm/cachetype.h
index 212a0f3d4ecb..fbab37c669a0 100644
--- a/arch/arm64/include/asm/cachetype.h
+++ b/arch/arm64/include/asm/cachetype.h
@@ -23,8 +23,6 @@
#define CTR_CWG_SHIFT 24
#define CTR_CWG_MASK 15
-#define ICACHE_POLICY_RESERVED 0
-#define ICACHE_POLICY_AIVIVT 1
#define ICACHE_POLICY_VIPT 2
#define ICACHE_POLICY_PIPT 3
@@ -35,7 +33,6 @@
#define CTR_L1IP(ctr) (((ctr) >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK)
#define ICACHEF_ALIASING 0
-#define ICACHEF_AIVIVT 1
extern unsigned long __icache_flags;
@@ -48,11 +45,6 @@ static inline int icache_is_aliasing(void)
return test_bit(ICACHEF_ALIASING, &__icache_flags);
}
-static inline int icache_is_aivivt(void)
-{
- return test_bit(ICACHEF_AIVIVT, &__icache_flags);
-}
-
static inline u32 cache_type_cwg(void)
{
return (read_cpuid_cachetype() >> CTR_CWG_SHIFT) & CTR_CWG_MASK;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index ed1246014901..4be5773d4606 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -245,7 +245,7 @@ static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
if (!icache_is_aliasing()) { /* PIPT */
flush_icache_range((unsigned long)va,
(unsigned long)va + size);
- } else if (!icache_is_aivivt()) { /* non ASID-tagged VIVT */
+ } else {
/* any kind of VIPT cache */
__flush_icache_all();
}
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index abda8e861865..073a6c641730 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -153,9 +153,9 @@ static const struct arm64_ftr_bits ftr_ctr[] = {
/*
* Linux can handle differing I-cache policies. Userspace JITs will
* make use of *minLine.
- * If we have differing I-cache policies, report it as the weakest - AIVIVT.
+ * If we have differing I-cache policies, report it as the weakest - VIPT.
*/
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_EXACT, 14, 2, ICACHE_POLICY_AIVIVT), /* L1Ip */
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_EXACT, 14, 2, ICACHE_POLICY_VIPT), /* L1Ip */
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 0, 4, 0), /* IminLine */
ARM64_FTR_END,
};
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 155ddd8ad56a..efe74ecc9738 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -43,10 +43,9 @@ DEFINE_PER_CPU(struct cpuinfo_arm64, cpu_data);
static struct cpuinfo_arm64 boot_cpu_data;
static char *icache_policy_str[] = {
- [ICACHE_POLICY_RESERVED] = "RESERVED/UNKNOWN",
- [ICACHE_POLICY_AIVIVT] = "AIVIVT",
- [ICACHE_POLICY_VIPT] = "VIPT",
- [ICACHE_POLICY_PIPT] = "PIPT",
+ [0 ... ICACHE_POLICY_PIPT] = "RESERVED/UNKNOWN",
+ [ICACHE_POLICY_VIPT] = "VIPT",
+ [ICACHE_POLICY_PIPT] = "PIPT",
};
unsigned long __icache_flags;
@@ -293,8 +292,6 @@ static void cpuinfo_detect_icache_policy(struct cpuinfo_arm64 *info)
case ICACHE_POLICY_PIPT:
break;
default:
- case ICACHE_POLICY_AIVIVT:
- set_bit(ICACHEF_AIVIVT, &__icache_flags);
/* Fallthrough */
case ICACHE_POLICY_VIPT:
/* Assume aliasing */
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 68634c630cdd..ab9f5f0fb2c7 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -119,9 +119,6 @@ static void flush_context(unsigned int cpu)
/* Queue a TLB invalidate and flush the I-cache if necessary. */
cpumask_setall(&tlb_flush_pending);
-
- if (icache_is_aivivt())
- __flush_icache_all();
}
static bool check_update_reserved_asid(u64 asid, u64 newasid)
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 554a2558c12e..1e968222a544 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -65,8 +65,6 @@ void __sync_icache_dcache(pte_t pte, unsigned long addr)
if (!test_and_set_bit(PG_dcache_clean, &page->flags))
sync_icache_aliases(page_address(page),
PAGE_SIZE << compound_order(page));
- else if (icache_is_aivivt())
- __flush_icache_all();
}
/*
--
2.1.4
* [PATCH 4/6] arm64: cache: Merge cachetype.h into cache.h
From: Will Deacon @ 2017-03-10 20:32 UTC (permalink / raw)
To: linux-arm-kernel
cachetype.h and cache.h are small and both obviously related to caches.
Merge them together to reduce clutter.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
arch/arm64/include/asm/cache.h | 31 ++++++++++++++++++++-
arch/arm64/include/asm/cachetype.h | 55 --------------------------------------
arch/arm64/include/asm/kvm_mmu.h | 2 +-
arch/arm64/kernel/cpuinfo.c | 2 +-
arch/arm64/mm/flush.c | 2 +-
5 files changed, 33 insertions(+), 59 deletions(-)
delete mode 100644 arch/arm64/include/asm/cachetype.h
diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
index 5082b30bc2c0..7acb52634299 100644
--- a/arch/arm64/include/asm/cache.h
+++ b/arch/arm64/include/asm/cache.h
@@ -16,7 +16,17 @@
#ifndef __ASM_CACHE_H
#define __ASM_CACHE_H
-#include <asm/cachetype.h>
+#include <asm/cputype.h>
+
+#define CTR_L1IP_SHIFT 14
+#define CTR_L1IP_MASK 3
+#define CTR_CWG_SHIFT 24
+#define CTR_CWG_MASK 15
+
+#define CTR_L1IP(ctr) (((ctr) >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK)
+
+#define ICACHE_POLICY_VIPT 2
+#define ICACHE_POLICY_PIPT 3
#define L1_CACHE_SHIFT 7
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
@@ -32,6 +42,25 @@
#ifndef __ASSEMBLY__
+#include <linux/bitops.h>
+
+#define ICACHEF_ALIASING 0
+extern unsigned long __icache_flags;
+
+/*
+ * Whilst the D-side always behaves as PIPT on AArch64, aliasing is
+ * permitted in the I-cache.
+ */
+static inline int icache_is_aliasing(void)
+{
+ return test_bit(ICACHEF_ALIASING, &__icache_flags);
+}
+
+static inline u32 cache_type_cwg(void)
+{
+ return (read_cpuid_cachetype() >> CTR_CWG_SHIFT) & CTR_CWG_MASK;
+}
+
#define __read_mostly __attribute__((__section__(".data..read_mostly")))
static inline int cache_line_size(void)
diff --git a/arch/arm64/include/asm/cachetype.h b/arch/arm64/include/asm/cachetype.h
deleted file mode 100644
index fbab37c669a0..000000000000
--- a/arch/arm64/include/asm/cachetype.h
+++ /dev/null
@@ -1,55 +0,0 @@
-/*
- * Copyright (C) 2012 ARM Ltd.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program. If not, see <http://www.gnu.org/licenses/>.
- */
-#ifndef __ASM_CACHETYPE_H
-#define __ASM_CACHETYPE_H
-
-#include <asm/cputype.h>
-
-#define CTR_L1IP_SHIFT 14
-#define CTR_L1IP_MASK 3
-#define CTR_CWG_SHIFT 24
-#define CTR_CWG_MASK 15
-
-#define ICACHE_POLICY_VIPT 2
-#define ICACHE_POLICY_PIPT 3
-
-#ifndef __ASSEMBLY__
-
-#include <linux/bitops.h>
-
-#define CTR_L1IP(ctr) (((ctr) >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK)
-
-#define ICACHEF_ALIASING 0
-
-extern unsigned long __icache_flags;
-
-/*
- * Whilst the D-side always behaves as PIPT on AArch64, aliasing is
- * permitted in the I-cache.
- */
-static inline int icache_is_aliasing(void)
-{
- return test_bit(ICACHEF_ALIASING, &__icache_flags);
-}
-
-static inline u32 cache_type_cwg(void)
-{
- return (read_cpuid_cachetype() >> CTR_CWG_SHIFT) & CTR_CWG_MASK;
-}
-
-#endif /* __ASSEMBLY__ */
-
-#endif /* __ASM_CACHETYPE_H */
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 4be5773d4606..dc3624d8b9db 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -108,7 +108,7 @@ alternative_else_nop_endif
#else
#include <asm/pgalloc.h>
-#include <asm/cachetype.h>
+#include <asm/cache.h>
#include <asm/cacheflush.h>
#include <asm/mmu_context.h>
#include <asm/pgtable.h>
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index efe74ecc9738..260b54f415b8 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -15,7 +15,7 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <asm/arch_timer.h>
-#include <asm/cachetype.h>
+#include <asm/cache.h>
#include <asm/cpu.h>
#include <asm/cputype.h>
#include <asm/cpufeature.h>
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 1e968222a544..21a8d828cbf4 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -22,7 +22,7 @@
#include <linux/pagemap.h>
#include <asm/cacheflush.h>
-#include <asm/cachetype.h>
+#include <asm/cache.h>
#include <asm/tlbflush.h>
void sync_icache_aliases(void *kaddr, unsigned long len)
--
2.1.4
* [PATCH 5/6] arm64: cache: Identify VPIPT I-caches
From: Will Deacon @ 2017-03-10 20:32 UTC (permalink / raw)
To: linux-arm-kernel
Add support for detecting VPIPT I-caches, as introduced by ARMv8.2.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
arch/arm64/include/asm/cache.h | 7 +++++++
arch/arm64/kernel/cpuinfo.c | 4 ++++
2 files changed, 11 insertions(+)
diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
index 7acb52634299..ea9bb4e0e9bb 100644
--- a/arch/arm64/include/asm/cache.h
+++ b/arch/arm64/include/asm/cache.h
@@ -25,6 +25,7 @@
#define CTR_L1IP(ctr) (((ctr) >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK)
+#define ICACHE_POLICY_VPIPT 0
#define ICACHE_POLICY_VIPT 2
#define ICACHE_POLICY_PIPT 3
@@ -45,6 +46,7 @@
#include <linux/bitops.h>
#define ICACHEF_ALIASING 0
+#define ICACHEF_VPIPT 1
extern unsigned long __icache_flags;
/*
@@ -56,6 +58,11 @@ static inline int icache_is_aliasing(void)
return test_bit(ICACHEF_ALIASING, &__icache_flags);
}
+static inline int icache_is_vpipt(void)
+{
+ return test_bit(ICACHEF_VPIPT, &__icache_flags);
+}
+
static inline u32 cache_type_cwg(void)
{
return (read_cpuid_cachetype() >> CTR_CWG_SHIFT) & CTR_CWG_MASK;
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 260b54f415b8..7d27f4b4881e 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -46,6 +46,7 @@ static char *icache_policy_str[] = {
[0 ... ICACHE_POLICY_PIPT] = "RESERVED/UNKNOWN",
[ICACHE_POLICY_VIPT] = "VIPT",
[ICACHE_POLICY_PIPT] = "PIPT",
+ [ICACHE_POLICY_VPIPT] = "VPIPT",
};
unsigned long __icache_flags;
@@ -291,6 +292,9 @@ static void cpuinfo_detect_icache_policy(struct cpuinfo_arm64 *info)
switch (l1ip) {
case ICACHE_POLICY_PIPT:
break;
+ case ICACHE_POLICY_VPIPT:
+ set_bit(ICACHEF_VPIPT, &__icache_flags);
+ break;
default:
/* Fallthrough */
case ICACHE_POLICY_VIPT:
--
2.1.4
* [PATCH 6/6] arm64: KVM: Add support for VPIPT I-caches
From: Will Deacon @ 2017-03-10 20:32 UTC (permalink / raw)
To: linux-arm-kernel
A VPIPT I-cache has two main properties:
1. Lines allocated into the cache are tagged by VMID and a lookup can
only hit lines that were allocated with the current VMID.
2. I-cache invalidation from EL1/0 only invalidates lines that match the
current VMID of the CPU doing the invalidation.
This can cause issues with non-VHE configurations, where the host runs
at EL1 and wants to invalidate I-cache entries for a guest running with
a different VMID. VHE is not affected, because the host runs at EL2 and
I-cache invalidation applies as expected.
This patch solves the problem by invalidating the I-cache when unmapping
a page at stage 2 on a system with a VPIPT I-cache but not running with
VHE enabled. Hopefully this is an obscure enough configuration that the
overhead isn't anything to worry about, although it does mean that the
by-range I-cache invalidation currently performed when mapping at stage
2 can be elided on such systems, because the I-cache will be clean for
the guest VMID following a rollover event.
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
arch/arm64/include/asm/kvm_mmu.h | 9 +++++----
arch/arm64/kvm/hyp/tlb.c | 22 ++++++++++++++++++++++
2 files changed, 27 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index dc3624d8b9db..d2293d49f555 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -242,12 +242,13 @@ static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
kvm_flush_dcache_to_poc(va, size);
- if (!icache_is_aliasing()) { /* PIPT */
- flush_icache_range((unsigned long)va,
- (unsigned long)va + size);
- } else {
+ if (icache_is_aliasing()) {
/* any kind of VIPT cache */
__flush_icache_all();
+ } else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
+ /* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
+ flush_icache_range((unsigned long)va,
+ (unsigned long)va + size);
}
}
diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
index e8e7ba2bc11f..f02c7e6a8db4 100644
--- a/arch/arm64/kvm/hyp/tlb.c
+++ b/arch/arm64/kvm/hyp/tlb.c
@@ -46,6 +46,28 @@ void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
dsb(ish);
isb();
+ /*
+ * If the host is running at EL1 and we have a VPIPT I-cache,
+ * then we must perform I-cache maintenance at EL2 in order for
+ * it to have an effect on the guest. Since the guest cannot hit
+ * I-cache lines allocated with a different VMID, we don't need
+ * to worry about junk out of guest reset (we nuke the I-cache on
+ * VMID rollover), but we do need to be careful when remapping
+ * executable pages for the same guest. This can happen when KSM
+ * takes a CoW fault on an executable page, copies the page into
+ * a page that was previously mapped in the guest and then needs
+ * to invalidate the guest view of the I-cache for that page
+ * from EL1. To solve this, we invalidate the entire I-cache when
+ * unmapping a page from a guest if we have a VPIPT I-cache but
+ * the host is running at EL1. As above, we could do better if
+ * we had the VA.
+ *
+ * The moral of this story is: if you have a VPIPT I-cache, then
+ * you should be running with VHE enabled.
+ */
+ if (!has_vhe() && icache_is_vpipt())
+ __flush_icache_all();
+
write_sysreg(0, vttbr_el2);
}
--
2.1.4
* [PATCH 6/6] arm64: KVM: Add support for VPIPT I-caches
From: Mark Rutland @ 2017-03-20 12:08 UTC (permalink / raw)
To: linux-arm-kernel
On Fri, Mar 10, 2017 at 08:32:25PM +0000, Will Deacon wrote:
> A VPIPT I-cache has two main properties:
>
> 1. Lines allocated into the cache are tagged by VMID and a lookup can
> only hit lines that were allocated with the current VMID.
>
> 2. I-cache invalidation from EL1/0 only invalidates lines that match the
> current VMID of the CPU doing the invalidation.
>
> This can cause issues with non-VHE configurations, where the host runs
> at EL1 and wants to invalidate I-cache entries for a guest running with
> a different VMID. VHE is not affected, because the host runs at EL2 and
> I-cache invalidation applies as expected.
>
> This patch solves the problem by invalidating the I-cache when unmapping
> a page at stage 2 on a system with a VPIPT I-cache but not running with
> VHE enabled. Hopefully this is an obscure enough configuration that the
> overhead isn't anything to worry about, although it does mean that the
> by-range I-cache invalidation currently performed when mapping at stage
> 2 can be elided on such systems, because the I-cache will be clean for
> the guest VMID following a rollover event.
>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
> arch/arm64/include/asm/kvm_mmu.h | 9 +++++----
> arch/arm64/kvm/hyp/tlb.c | 22 ++++++++++++++++++++++
> 2 files changed, 27 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index dc3624d8b9db..d2293d49f555 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -242,12 +242,13 @@ static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
>
> kvm_flush_dcache_to_poc(va, size);
>
> - if (!icache_is_aliasing()) { /* PIPT */
> - flush_icache_range((unsigned long)va,
> - (unsigned long)va + size);
> - } else {
> + if (icache_is_aliasing()) {
> /* any kind of VIPT cache */
> __flush_icache_all();
> + } else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
> + /* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
> + flush_icache_range((unsigned long)va,
> + (unsigned long)va + size);
> }
> }
>
> diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
> index e8e7ba2bc11f..f02c7e6a8db4 100644
> --- a/arch/arm64/kvm/hyp/tlb.c
> +++ b/arch/arm64/kvm/hyp/tlb.c
> @@ -46,6 +46,28 @@ void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
> dsb(ish);
> isb();
>
> + /*
> + * If the host is running at EL1 and we have a VPIPT I-cache,
> + * then we must perform I-cache maintenance at EL2 in order for
> + * it to have an effect on the guest. Since the guest cannot hit
> + * I-cache lines allocated with a different VMID, we don't need
> + * to worry about junk out of guest reset (we nuke the I-cache on
> + * VMID rollover), but we do need to be careful when remapping
> + * executable pages for the same guest. This can happen when KSM
> + * takes a CoW fault on an executable page, copies the page into
> + * a page that was previously mapped in the guest and then needs
> + * to invalidate the guest view of the I-cache for that page
> + * from EL1. To solve this, we invalidate the entire I-cache when
> + * unmapping a page from a guest if we have a VPIPT I-cache but
> + * the host is running at EL1. As above, we could do better if
> + * we had the VA.
> + *
> + * The moral of this story is: if you have a VPIPT I-cache, then
> + * you should be running with VHE enabled.
> + */
> + if (!has_vhe() && icache_is_vpipt())
> + __flush_icache_all();
The is_kernel_in_hyp_mode() / has_vhe() inconsistency across these two
functions is somewhat confusing.
Is there any reason __coherent_cache_guest_page() can't use has_vhe()
too?
Otherwise, this all looks sane to me.
Thanks,
Mark.
* [PATCH 6/6] arm64: KVM: Add support for VPIPT I-caches
From: Marc Zyngier @ 2017-03-20 16:22 UTC (permalink / raw)
To: linux-arm-kernel
On 10/03/17 20:32, Will Deacon wrote:
> A VPIPT I-cache has two main properties:
>
> 1. Lines allocated into the cache are tagged by VMID and a lookup can
> only hit lines that were allocated with the current VMID.
>
> 2. I-cache invalidation from EL1/0 only invalidates lines that match the
> current VMID of the CPU doing the invalidation.
>
> This can cause issues with non-VHE configurations, where the host runs
> at EL1 and wants to invalidate I-cache entries for a guest running with
> a different VMID. VHE is not affected, because the host runs at EL2 and
> I-cache invalidation applies as expected.
>
> This patch solves the problem by invalidating the I-cache when unmapping
> a page at stage 2 on a system with a VPIPT I-cache but not running with
> VHE enabled. Hopefully this is an obscure enough configuration that the
> overhead isn't anything to worry about, although it does mean that the
> by-range I-cache invalidation currently performed when mapping at stage
> 2 can be elided on such systems, because the I-cache will be clean for
> the guest VMID following a rollover event.
>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
> arch/arm64/include/asm/kvm_mmu.h | 9 +++++----
> arch/arm64/kvm/hyp/tlb.c | 22 ++++++++++++++++++++++
> 2 files changed, 27 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index dc3624d8b9db..d2293d49f555 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -242,12 +242,13 @@ static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
>
> kvm_flush_dcache_to_poc(va, size);
>
> - if (!icache_is_aliasing()) { /* PIPT */
> - flush_icache_range((unsigned long)va,
> - (unsigned long)va + size);
> - } else {
> + if (icache_is_aliasing()) {
> /* any kind of VIPT cache */
> __flush_icache_all();
> + } else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
> + /* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
> + flush_icache_range((unsigned long)va,
> + (unsigned long)va + size);
> }
> }
>
> diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
> index e8e7ba2bc11f..f02c7e6a8db4 100644
> --- a/arch/arm64/kvm/hyp/tlb.c
> +++ b/arch/arm64/kvm/hyp/tlb.c
> @@ -46,6 +46,28 @@ void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
> dsb(ish);
> isb();
>
> + /*
> + * If the host is running at EL1 and we have a VPIPT I-cache,
> + * then we must perform I-cache maintenance at EL2 in order for
> + * it to have an effect on the guest. Since the guest cannot hit
> + * I-cache lines allocated with a different VMID, we don't need
> + * to worry about junk out of guest reset (we nuke the I-cache on
> + * VMID rollover), but we do need to be careful when remapping
> + * executable pages for the same guest. This can happen when KSM
> + * takes a CoW fault on an executable page, copies the page into
> + * a page that was previously mapped in the guest and then needs
> + * to invalidate the guest view of the I-cache for that page
> + * from EL1. To solve this, we invalidate the entire I-cache when
> + * unmapping a page from a guest if we have a VPIPT I-cache but
> + * the host is running at EL1. As above, we could do better if
> + * we had the VA.
> + *
> + * The moral of this story is: if you have a VPIPT I-cache, then
> + * you should be running with VHE enabled.
> + */
> + if (!has_vhe() && icache_is_vpipt())
> + __flush_icache_all();
> +
> write_sysreg(0, vttbr_el2);
> }
>
>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
It is worth noting that this now conflicts with 68925176296a ("arm64:
KVM: VHE: Clear HCR_TGE when invalidating guest TLBs"). I've resolved
it as such in my tree:
diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
index 9e1d2b75eecd..73464a96c365 100644
--- a/arch/arm64/kvm/hyp/tlb.c
+++ b/arch/arm64/kvm/hyp/tlb.c
@@ -94,6 +94,28 @@ void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
dsb(ish);
isb();
+ /*
+ * If the host is running at EL1 and we have a VPIPT I-cache,
+ * then we must perform I-cache maintenance at EL2 in order for
+ * it to have an effect on the guest. Since the guest cannot hit
+ * I-cache lines allocated with a different VMID, we don't need
+ * to worry about junk out of guest reset (we nuke the I-cache on
+ * VMID rollover), but we do need to be careful when remapping
+ * executable pages for the same guest. This can happen when KSM
+ * takes a CoW fault on an executable page, copies the page into
+ * a page that was previously mapped in the guest and then needs
+ * to invalidate the guest view of the I-cache for that page
+ * from EL1. To solve this, we invalidate the entire I-cache when
+ * unmapping a page from a guest if we have a VPIPT I-cache but
+ * the host is running at EL1. As above, we could do better if
+ * we had the VA.
+ *
+ * The moral of this story is: if you have a VPIPT I-cache, then
+ * you should be running with VHE enabled.
+ */
+ if (!has_vhe() && icache_is_vpipt())
+ __flush_icache_all();
+
__tlb_switch_to_host()(kvm);
}
I'm happy for this to go via the arm64 tree, as it depends on the previous
patches in this series.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
* [PATCH 6/6] arm64: KVM: Add support for VPIPT I-caches
2017-03-20 16:22 ` Marc Zyngier
@ 2017-03-20 16:25 ` Catalin Marinas
0 siblings, 0 replies; 9+ messages in thread
From: Catalin Marinas @ 2017-03-20 16:25 UTC (permalink / raw)
To: linux-arm-kernel
On Mon, Mar 20, 2017 at 04:22:15PM +0000, Marc Zyngier wrote:
> On 10/03/17 20:32, Will Deacon wrote:
> > A VPIPT I-cache has two main properties:
> >
> > 1. Lines allocated into the cache are tagged by VMID and a lookup can
> > only hit lines that were allocated with the current VMID.
> >
> > 2. I-cache invalidation from EL1/0 only invalidates lines that match the
> > current VMID of the CPU doing the invalidation.
> >
> > This can cause issues with non-VHE configurations, where the host runs
> > at EL1 and wants to invalidate I-cache entries for a guest running with
> > a different VMID. VHE is not affected, because the host runs at EL2 and
> > I-cache invalidation applies as expected.
> >
> > This patch solves the problem by invalidating the I-cache when unmapping
> > a page at stage 2 on a system with a VPIPT I-cache but not running with
> > VHE enabled. Hopefully this is an obscure enough configuration that the
> > overhead isn't anything to worry about, although it does mean that the
> > by-range I-cache invalidation currently performed when mapping at stage
> > 2 can be elided on such systems, because the I-cache will be clean for
> > the guest VMID following a rollover event.
> >
> > Signed-off-by: Will Deacon <will.deacon@arm.com>
[...]
> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
>
> It is worth noting that this now conflicts with 68925176296a ("arm64:
> KVM: VHE: Clear HCR_TGE when invalidating guest TLBs"). I've resolved
> it as such in my tree:
Thanks for the ack and conflict resolution. All patches applied to the
arm64 tree.
--
Catalin
end of thread (newest message: 2017-03-20 16:25 UTC)
Thread overview: 9+ messages
2017-03-10 20:32 [PATCH 1/6] arm64: cpuinfo: remove I-cache VIPT aliasing detection Will Deacon
2017-03-10 20:32 ` [PATCH 2/6] arm64: cacheinfo: Remove CCSIDR-based cache information probing Will Deacon
2017-03-10 20:32 ` [PATCH 3/6] arm64: cache: Remove support for ASID-tagged VIVT I-caches Will Deacon
2017-03-10 20:32 ` [PATCH 4/6] arm64: cache: Merge cachetype.h into cache.h Will Deacon
2017-03-10 20:32 ` [PATCH 5/6] arm64: cache: Identify VPIPT I-caches Will Deacon
2017-03-10 20:32 ` [PATCH 6/6] arm64: KVM: Add support for " Will Deacon
2017-03-20 12:08 ` Mark Rutland
2017-03-20 16:22 ` Marc Zyngier
2017-03-20 16:25 ` Catalin Marinas