* [PATCH] arm64: avoid TLB conflict with CONFIG_RANDOMIZE_BASE
@ 2016-08-24 17:02 Mark Rutland
2016-08-24 20:41 ` Ard Biesheuvel
From: Mark Rutland @ 2016-08-24 17:02 UTC (permalink / raw)
To: linux-arm-kernel
When CONFIG_RANDOMIZE_BASE is selected, we modify the page tables to remap the
kernel at a newly-chosen VA range. We do this with the MMU disabled, but do not
invalidate TLBs prior to re-enabling the MMU with the new tables. Thus the old
mapping entries may still live in TLBs, and we risk violating
Break-Before-Make requirements, leading to TLB conflicts and/or other issues.
We invalidate TLBs when we uninstall the idmap in early setup code, but prior to
this we are subject to issues relating to the Break-Before-Make violation.
Avoid these issues by invalidating the TLBs before the new mappings can be
used by the hardware.
Fixes: f80fb3a3d50843a4 ("arm64: add support for kernel ASLR")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
---
arch/arm64/kernel/head.S | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index b77f583..3e7b050 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -757,6 +757,9 @@ ENTRY(__enable_mmu)
isb
bl __create_page_tables // recreate kernel mapping
+ tlbi vmalle1 // Remove any stale TLB entries
+ dsb nsh
+
msr sctlr_el1, x19 // re-enable the MMU
isb
ic iallu // flush instructions fetched
--
2.7.4
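For readers following along, the ordering the patch enforces can be sketched as an annotated version of the sequence above (an illustrative sketch of the __enable_mmu path, not the literal head.S code; comments are editorial):

```asm
	// MMU is off here; __create_page_tables has just rewritten the
	// tables for the newly randomized VA range.
	bl	__create_page_tables	// recreate kernel mapping

	// Without the next two instructions, entries cached from the old
	// mapping may survive in the TLBs, and re-enabling the MMU below
	// risks a TLB conflict (a Break-Before-Make violation).
	tlbi	vmalle1			// drop all stale EL1 TLB entries
	dsb	nsh			// wait for the invalidation to complete

	msr	sctlr_el1, x19		// re-enable the MMU with the new tables
	isb				// synchronize the context change
```

The `dsb nsh` is sufficient (rather than a full-system barrier) because only the local CPU's TLBs need to observe the invalidation before its own MMU is re-enabled.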
* [PATCH] arm64: avoid TLB conflict with CONFIG_RANDOMIZE_BASE
2016-08-24 17:02 [PATCH] arm64: avoid TLB conflict with CONFIG_RANDOMIZE_BASE Mark Rutland
@ 2016-08-24 20:41 ` Ard Biesheuvel
2016-08-25 9:52 ` Will Deacon
2016-08-25 10:14 ` Catalin Marinas
From: Ard Biesheuvel @ 2016-08-24 20:41 UTC (permalink / raw)
To: linux-arm-kernel
On 24 August 2016 at 19:02, Mark Rutland <mark.rutland@arm.com> wrote:
> When CONFIG_RANDOMIZE_BASE is selected, we modify the page tables to remap the
> kernel at a newly-chosen VA range. We do this with the MMU disabled, but do not
> invalidate TLBs prior to re-enabling the MMU with the new tables. Thus the old
> mapping entries may still live in TLBs, and we risk violating
> Break-Before-Make requirements, leading to TLB conflicts and/or other issues.
>
> We invalidate TLBs when we uninstall the idmap in early setup code, but prior to
> this we are subject to issues relating to the Break-Before-Make violation.
>
> Avoid these issues by invalidating the TLBs before the new mappings can be
> used by the hardware.
>
> Fixes: f80fb3a3d50843a4 ("arm64: add support for kernel ASLR")
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: stable@vger.kernel.org
Ah yes, brown paper bag time for me.
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
> arch/arm64/kernel/head.S | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index b77f583..3e7b050 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -757,6 +757,9 @@ ENTRY(__enable_mmu)
> isb
> bl __create_page_tables // recreate kernel mapping
>
> + tlbi vmalle1 // Remove any stale TLB entries
> + dsb nsh
> +
> msr sctlr_el1, x19 // re-enable the MMU
> isb
> ic iallu // flush instructions fetched
> --
> 2.7.4
>
* [PATCH] arm64: avoid TLB conflict with CONFIG_RANDOMIZE_BASE
2016-08-24 17:02 [PATCH] arm64: avoid TLB conflict with CONFIG_RANDOMIZE_BASE Mark Rutland
2016-08-24 20:41 ` Ard Biesheuvel
@ 2016-08-25 9:52 ` Will Deacon
2016-08-25 10:14 ` Catalin Marinas
From: Will Deacon @ 2016-08-25 9:52 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Aug 24, 2016 at 06:02:08PM +0100, Mark Rutland wrote:
> When CONFIG_RANDOMIZE_BASE is selected, we modify the page tables to remap the
> kernel at a newly-chosen VA range. We do this with the MMU disabled, but do not
> invalidate TLBs prior to re-enabling the MMU with the new tables. Thus the old
> mapping entries may still live in TLBs, and we risk violating
> Break-Before-Make requirements, leading to TLB conflicts and/or other issues.
>
> We invalidate TLBs when we uninstall the idmap in early setup code, but prior to
> this we are subject to issues relating to the Break-Before-Make violation.
>
> Avoid these issues by invalidating the TLBs before the new mappings can be
> used by the hardware.
>
> Fixes: f80fb3a3d50843a4 ("arm64: add support for kernel ASLR")
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: stable@vger.kernel.org
> ---
> arch/arm64/kernel/head.S | 3 +++
> 1 file changed, 3 insertions(+)
Acked-by: Will Deacon <will.deacon@arm.com>
Although I do wonder whether it would be cleaner to do the local TLBI
in __create_page_tables after zeroing swapper, and then moving the TLBI
out of __cpu_setup and onto the secondary boot path. I suppose it doesn't
really matter...
Will
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index b77f583..3e7b050 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -757,6 +757,9 @@ ENTRY(__enable_mmu)
> isb
> bl __create_page_tables // recreate kernel mapping
>
> + tlbi vmalle1 // Remove any stale TLB entries
> + dsb nsh
> +
> msr sctlr_el1, x19 // re-enable the MMU
> isb
> ic iallu // flush instructions fetched
> --
> 2.7.4
>
* [PATCH] arm64: avoid TLB conflict with CONFIG_RANDOMIZE_BASE
2016-08-24 17:02 [PATCH] arm64: avoid TLB conflict with CONFIG_RANDOMIZE_BASE Mark Rutland
2016-08-24 20:41 ` Ard Biesheuvel
2016-08-25 9:52 ` Will Deacon
@ 2016-08-25 10:14 ` Catalin Marinas
From: Catalin Marinas @ 2016-08-25 10:14 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Aug 24, 2016 at 06:02:08PM +0100, Mark Rutland wrote:
> When CONFIG_RANDOMIZE_BASE is selected, we modify the page tables to remap the
> kernel at a newly-chosen VA range. We do this with the MMU disabled, but do not
> invalidate TLBs prior to re-enabling the MMU with the new tables. Thus the old
> mapping entries may still live in TLBs, and we risk violating
> Break-Before-Make requirements, leading to TLB conflicts and/or other issues.
>
> We invalidate TLBs when we uninstall the idmap in early setup code, but prior to
> this we are subject to issues relating to the Break-Before-Make violation.
>
> Avoid these issues by invalidating the TLBs before the new mappings can be
> used by the hardware.
>
> Fixes: f80fb3a3d50843a4 ("arm64: add support for kernel ASLR")
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: stable@vger.kernel.org
Applied to arm64 fixes/core.
--
Catalin