From: Mark Rutland <mark.rutland@arm.com>
To: Jamie Iles <quic_jiles@quicinc.com>,
	Catalin Marinas <catalin.marinas@arm.com>
Cc: linux-arm-kernel@lists.infradead.org, Will Deacon <will@kernel.org>
Subject: Re: [PATCH] arm64/mm: remove now-superfluous ISBs from TTBR writes
Date: Thu, 15 Jun 2023 10:26:55 +0100
Message-ID: <ZIrZX8H28T41sF+/@FVFF77S0Q05N>
In-Reply-To: <20230613141959.92697-1-quic_jiles@quicinc.com>

On Tue, Jun 13, 2023 at 03:19:59PM +0100, Jamie Iles wrote:
> At the time of authoring 7655abb95386 ("arm64: mm: Move ASID from TTBR0
> to TTBR1"), the Arm ARM did not specify any ordering guarantees for
> direct writes to TTBR0_ELx and TTBR1_ELx, and so an ISB was required
> after each write to ensure TLBs would only be populated from the
> expected (or reserved) tables.
> 
> In a recent update to the Arm ARM, the requirements have been relaxed to
> reflect the implementation of current CPUs and the required behaviour of
> future CPUs, and now read (RDYDPX in D8.2.3 Translation table base
> address register):
> 
>   Direct writes to TTBR0_ELx and TTBR1_ELx occur in program order
>   relative to one another, without the need for explicit
>   synchronization. For any one translation, all indirect reads of
>   TTBR0_ELx and TTBR1_ELx that are made as part of the translation
>   observe only one point in that order of direct writes.
> 
> Remove the superfluous ISBs to optimize uaccess helpers and context
> switch.
> 
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Jamie Iles <quic_jiles@quicinc.com>

This matches my understanding, and the changes look correct to me.

I have a couple of minor comments below; with those handled:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>

Catalin, are you happy to fix those up when applying?

Thanks,
Mark.

> ---
>  arch/arm64/include/asm/asm-uaccess.h | 2 --
>  arch/arm64/include/asm/mmu_context.h | 9 +++++++--
>  arch/arm64/include/asm/uaccess.h     | 2 --
>  arch/arm64/mm/context.c              | 1 -
>  4 files changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
> index 75b211c98dea..5b6efe8abeeb 100644
> --- a/arch/arm64/include/asm/asm-uaccess.h
> +++ b/arch/arm64/include/asm/asm-uaccess.h
> @@ -18,7 +18,6 @@
>  	bic	\tmp1, \tmp1, #TTBR_ASID_MASK
>  	sub	\tmp1, \tmp1, #RESERVED_SWAPPER_OFFSET	// reserved_pg_dir
>  	msr	ttbr0_el1, \tmp1			// set reserved TTBR0_EL1
> -	isb
>  	add	\tmp1, \tmp1, #RESERVED_SWAPPER_OFFSET
>  	msr	ttbr1_el1, \tmp1		// set reserved ASID
>  	isb
> @@ -31,7 +30,6 @@
>  	extr    \tmp2, \tmp2, \tmp1, #48
>  	ror     \tmp2, \tmp2, #16
>  	msr	ttbr1_el1, \tmp2		// set the active ASID
> -	isb
>  	msr	ttbr0_el1, \tmp1		// set the non-PAN TTBR0_EL1
>  	isb
>  	.endm
> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> index 56911691bef0..a80285defe81 100644
> --- a/arch/arm64/include/asm/mmu_context.h
> +++ b/arch/arm64/include/asm/mmu_context.h
> @@ -39,11 +39,16 @@ static inline void contextidr_thread_switch(struct task_struct *next)
>  /*
>   * Set TTBR0 to reserved_pg_dir. No translations will be possible via TTBR0.
>   */
> -static inline void cpu_set_reserved_ttbr0(void)
> +static inline void __cpu_set_reserved_ttbr0(void)
>  {
>  	unsigned long ttbr = phys_to_ttbr(__pa_symbol(reserved_pg_dir));
>  
>  	write_sysreg(ttbr, ttbr0_el1);
> +}

Could we please call this cpu_set_reserved_ttbr0_nosync()?

I think that's a little clearer than the underscores.
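
For illustration, with that rename applied the pair from the patch would
read something like the sketch below (no functional change intended):

	static inline void cpu_set_reserved_ttbr0_nosync(void)
	{
		unsigned long ttbr = phys_to_ttbr(__pa_symbol(reserved_pg_dir));

		/* No ISB here; callers needing synchronization use the wrapper */
		write_sysreg(ttbr, ttbr0_el1);
	}

	static inline void cpu_set_reserved_ttbr0(void)
	{
		cpu_set_reserved_ttbr0_nosync();
		isb();
	}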

> +
> +static inline void cpu_set_reserved_ttbr0(void)
> +{
> +	__cpu_set_reserved_ttbr0();
>  	isb();
>  }
>  
> @@ -52,7 +57,7 @@ void cpu_do_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
>  static inline void cpu_switch_mm(pgd_t *pgd, struct mm_struct *mm)
>  {
>  	BUG_ON(pgd == swapper_pg_dir);
> -	cpu_set_reserved_ttbr0();
> +	__cpu_set_reserved_ttbr0();
>  	cpu_do_switch_mm(virt_to_phys(pgd),mm);

Could we please move the __cpu_set_reserved_ttbr0() into cpu_do_switch_mm(),
just before the first write to TTBR1? That would make it clearer that we have
the required barriers and ordering, as we'd have a back-to-back sequence:

	cpu_set_reserved_ttbr0_nosync();
	write_sysreg(ttbr1, ttbr1_el1);
	write_sysreg(ttbr0, ttbr0_el1);
	isb();

... and it'd be less likely we'd accidentally break that in future.
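
To make that concrete, cpu_switch_mm() would be left with just:

	static inline void cpu_switch_mm(pgd_t *pgd, struct mm_struct *mm)
	{
		BUG_ON(pgd == swapper_pg_dir);
		cpu_do_switch_mm(virt_to_phys(pgd), mm);
	}

... and the tail of cpu_do_switch_mm() in arch/arm64/mm/context.c would
look roughly like the sketch below (eliding the existing ttbr0/ttbr1/ASID
computation above it, and assuming the _nosync rename suggested earlier):

	/* point TTBR0 at reserved_pg_dir while the TTBRs are updated */
	cpu_set_reserved_ttbr0_nosync();
	write_sysreg(ttbr1, ttbr1_el1);
	write_sysreg(ttbr0, ttbr0_el1);
	isb();
	post_ttbr_update_workaround();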

We didn't do that originally as back when we added this in commit:

  7655abb953860485 ("arm64: mm: Move ASID from TTBR0 to TTBR1")

... cpu_do_switch_mm() was still written in asm and cpu_set_reserved_ttbr0()
was written in C.

We eventually moved cpu_do_switch_mm() to C in commit:

  25b92693a1b67a47 ("arm64: mm: convert cpu_do_switch_mm() to C")

... but didn't think to move the call to cpu_set_reserved_ttbr0().

Thanks,
Mark.

>  }
>  
> diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
> index 05f4fc265428..14be5000c5a0 100644
> --- a/arch/arm64/include/asm/uaccess.h
> +++ b/arch/arm64/include/asm/uaccess.h
> @@ -65,7 +65,6 @@ static inline void __uaccess_ttbr0_disable(void)
>  	ttbr &= ~TTBR_ASID_MASK;
>  	/* reserved_pg_dir placed before swapper_pg_dir */
>  	write_sysreg(ttbr - RESERVED_SWAPPER_OFFSET, ttbr0_el1);
> -	isb();
>  	/* Set reserved ASID */
>  	write_sysreg(ttbr, ttbr1_el1);
>  	isb();
> @@ -89,7 +88,6 @@ static inline void __uaccess_ttbr0_enable(void)
>  	ttbr1 &= ~TTBR_ASID_MASK;		/* safety measure */
>  	ttbr1 |= ttbr0 & TTBR_ASID_MASK;
>  	write_sysreg(ttbr1, ttbr1_el1);
> -	isb();
>  
>  	/* Restore user page table */
>  	write_sysreg(ttbr0, ttbr0_el1);
> diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
> index e1e0dca01839..cf4ba575342e 100644
> --- a/arch/arm64/mm/context.c
> +++ b/arch/arm64/mm/context.c
> @@ -365,7 +365,6 @@ void cpu_do_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm)
>  	ttbr1 |= FIELD_PREP(TTBR_ASID_MASK, asid);
>  
>  	write_sysreg(ttbr1, ttbr1_el1);
> -	isb();
>  	write_sysreg(ttbr0, ttbr0_el1);
>  	isb();
>  	post_ttbr_update_workaround();
> -- 
> 2.25.1
> 
