* [PATCH 1/2] arm64: consolidate context ID for 8-bit ASIDs
@ 2016-06-17 17:32 Jean-Philippe Brucker
2016-06-17 17:33 ` [PATCH 2/2] arm64: update ASID limit Jean-Philippe Brucker
2016-06-20 8:28 ` [PATCH 1/2] arm64: consolidate context ID for 8-bit ASIDs Will Deacon
0 siblings, 2 replies; 3+ messages in thread
From: Jean-Philippe Brucker @ 2016-06-17 17:32 UTC (permalink / raw)
To: linux-arm-kernel
When a CPU uses 8 bits of ASID, software should write the top 8 bits of
TTB registers and TLBI commands as 0. Currently, we put the generation
field right above the ASIDs, which leads to writing it into TTB and TLBIs.
Hardware is supposed to always ignore those bits, but we shouldn't rely on
that.
Always use bits [63:16] of context.id for the generation number, and keep
bits [15:8] as zero when using 8-bit ASIDs.
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
arch/arm64/mm/context.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index b7b3978..090bf88 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -38,8 +38,9 @@ static DEFINE_PER_CPU(u64, reserved_asids);
static cpumask_t tlb_flush_pending;
#define ASID_MASK (~GENMASK(asid_bits - 1, 0))
-#define ASID_FIRST_VERSION (1UL << asid_bits)
-#define NUM_USER_ASIDS ASID_FIRST_VERSION
+#define GENERATION_SHIFT 16
+#define ASID_FIRST_VERSION (1UL << GENERATION_SHIFT)
+#define NUM_USER_ASIDS (1UL << asid_bits)
/* Get the ASIDBits supported by the current CPU */
static u32 get_cpu_asid_bits(void)
--
2.8.3
^ permalink raw reply related [flat|nested] 3+ messages in thread
* [PATCH 2/2] arm64: update ASID limit
2016-06-17 17:32 [PATCH 1/2] arm64: consolidate context ID for 8-bit ASIDs Jean-Philippe Brucker
@ 2016-06-17 17:33 ` Jean-Philippe Brucker
2016-06-20 8:28 ` [PATCH 1/2] arm64: consolidate context ID for 8-bit ASIDs Will Deacon
1 sibling, 0 replies; 3+ messages in thread
From: Jean-Philippe Brucker @ 2016-06-17 17:33 UTC (permalink / raw)
To: linux-arm-kernel
During a rollover, we mark the active ASID on each CPU as reserved, before
allocating a new ID for the task that caused the rollover. This means that
with N CPUs, we can only guarantee the new task to obtain a valid ASID if
we have at least N+1 ASIDs. Update this limit in the initcall check.
Note that this restriction was introduced by commit 8e648066 on the
arch/arm side, which disallows re-using the previously active ASID on the
local CPU, as doing so would introduce a TLB race.
In addition, we only have NUM_USER_ASIDS-1 ASIDs at our disposal, since
ASID 0 is reserved. Add this restriction as well.
Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
---
arch/arm64/mm/context.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 090bf88..036c44d 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -180,7 +180,7 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
&asid_generation);
flush_context(cpu);
- /* We have at least 1 ASID per CPU, so this will always succeed */
+ /* We have more ASIDs than CPUs, so this will always succeed */
asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
set_asid:
@@ -228,8 +228,11 @@ switch_mm_fastpath:
static int asids_init(void)
{
asid_bits = get_cpu_asid_bits();
- /* If we end up with more CPUs than ASIDs, expect things to crash */
- WARN_ON(NUM_USER_ASIDS < num_possible_cpus());
+ /*
+ * Expect allocation after rollover to fail if we don't have at least
+ * one more ASID than CPUs. ASID #0 is reserved for init_mm.
+ */
+ WARN_ON(NUM_USER_ASIDS - 1 <= num_possible_cpus());
atomic64_set(&asid_generation, ASID_FIRST_VERSION);
asid_map = kzalloc(BITS_TO_LONGS(NUM_USER_ASIDS) * sizeof(*asid_map),
GFP_KERNEL);
--
2.8.3
^ permalink raw reply related [flat|nested] 3+ messages in thread
* [PATCH 1/2] arm64: consolidate context ID for 8-bit ASIDs
2016-06-17 17:32 [PATCH 1/2] arm64: consolidate context ID for 8-bit ASIDs Jean-Philippe Brucker
2016-06-17 17:33 ` [PATCH 2/2] arm64: update ASID limit Jean-Philippe Brucker
@ 2016-06-20 8:28 ` Will Deacon
1 sibling, 0 replies; 3+ messages in thread
From: Will Deacon @ 2016-06-20 8:28 UTC (permalink / raw)
To: linux-arm-kernel
On Fri, Jun 17, 2016 at 06:32:59PM +0100, Jean-Philippe Brucker wrote:
> When a CPU uses 8 bits of ASID, software should write the top 8 bits of
> TTB registers and TLBI commands as 0. Currently, we put the generation
> field right above the ASIDs, which leads to writing it into TTB and TLBIs.
> Hardware is supposed to always ignore those bits, but we shouldn't rely on
> that.
Actually, I think we can rely on this. The ARM ARM has a special exception
for ASID size (there are some pending changes to this text that appear to
be inconsequential to this discussion):
ASID size
[...]
When the value of TCR_EL1.AS is 0, ASID[15:8] ... Are ignored by hardware
for every purpose other than reads of ID_AA64MMFR0_EL1.
Will
^ permalink raw reply [flat|nested] 3+ messages in thread