* [PATCH] arm64: kernel: avoid literal load of virtual address with MMU off
@ 2016-08-17 15:54 Ard Biesheuvel
2016-08-17 16:16 ` Mark Rutland
2016-08-17 17:11 ` Catalin Marinas
0 siblings, 2 replies; 6+ messages in thread
From: Ard Biesheuvel @ 2016-08-17 15:54 UTC (permalink / raw)
To: linux-arm-kernel
Literal loads of virtual addresses are subject to runtime relocation when
CONFIG_RELOCATABLE=y, and given that the relocation routines run with the
MMU and caches enabled, literal loads of relocated values performed with
the MMU off are not guaranteed to return the latest value unless the
memory covering the literal is cleaned to the PoC explicitly.
So defer the literal load until after the MMU has been enabled, just like
we do for primary_switch() and secondary_switch() in head.S.
Fixes: 1e48ef7fcc37 ("arm64: add support for building vmlinux as a relocatable PIE binary")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
This conflicts with the x25/x26 patch I sent yesterday, but this should
probably go into stable, so I based it on v4.8-rc directly.
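For illustration, a minimal sketch of the two addressing patterns at issue (labels as in
the patch below; the comments merely restate the reasoning in the commit message):

	ldr	x27, =_cpu_resume	// loads a 64-bit literal from memory; with
					// CONFIG_RELOCATABLE=y that word is patched at
					// runtime with the MMU and caches on, so a later
					// MMU-off read may still see the stale (zero)
					// value left at the PoC
	adr_l	x27, _resume_switched	// PC-relative address generation; no data word
					// is loaded, so there is nothing to go stale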
arch/arm64/kernel/sleep.S | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index 9a3aec97ac09..ccf79d849e0a 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -101,12 +101,20 @@ ENTRY(cpu_resume)
bl el2_setup // if in EL2 drop to EL1 cleanly
/* enable the MMU early - so we can access sleep_save_stash by va */
adr_l lr, __enable_mmu /* __cpu_setup will return here */
- ldr x27, =_cpu_resume /* __enable_mmu will branch here */
+ adr_l x27, _resume_switched /* __enable_mmu will branch here */
adrp x25, idmap_pg_dir
adrp x26, swapper_pg_dir
b __cpu_setup
ENDPROC(cpu_resume)
+ .pushsection ".idmap.text", "ax"
+_resume_switched:
+ ldr x8, =_cpu_resume
+ br x8
+ENDPROC(_resume_switched)
+ .ltorg
+ .popsection
+
ENTRY(_cpu_resume)
mrs x1, mpidr_el1
adrp x8, mpidr_hash
--
2.7.4
* [PATCH] arm64: kernel: avoid literal load of virtual address with MMU off
2016-08-17 15:54 [PATCH] arm64: kernel: avoid literal load of virtual address with MMU off Ard Biesheuvel
@ 2016-08-17 16:16 ` Mark Rutland
2016-08-17 17:11 ` Catalin Marinas
1 sibling, 0 replies; 6+ messages in thread
From: Mark Rutland @ 2016-08-17 16:16 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Aug 17, 2016 at 05:54:41PM +0200, Ard Biesheuvel wrote:
> Literal loads of virtual addresses are subject to runtime relocation when
> CONFIG_RELOCATABLE=y, and given that the relocation routines run with the
> MMU and caches enabled, literal loads of relocated values performed with
> the MMU off are not guaranteed to return the latest value unless the
> memory covering the literal is cleaned to the PoC explicitly.
>
> So defer the literal load until after the MMU has been enabled, just like
> we do for primary_switch() and secondary_switch() in head.S.
>
> Fixes: 1e48ef7fcc37 ("arm64: add support for building vmlinux as a relocatable PIE binary")
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
This looks like the simplest way to handle this, and is consistent with
what we do elsewhere, so FWIW:
Acked-by: Mark Rutland <mark.rutland@arm.com>
From grepping, this seems to be the only case of a relocated literal
being loaded while the MMU is off under arch/arm64/.
Thanks,
Mark.
> ---
>
> This conflicts with the x25/x26 patch I sent yesterday, but this should
> probably go into stable, so I based it on v4.8-rc directly.
>
> arch/arm64/kernel/sleep.S | 10 +++++++++-
> 1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
> index 9a3aec97ac09..ccf79d849e0a 100644
> --- a/arch/arm64/kernel/sleep.S
> +++ b/arch/arm64/kernel/sleep.S
> @@ -101,12 +101,20 @@ ENTRY(cpu_resume)
> bl el2_setup // if in EL2 drop to EL1 cleanly
> /* enable the MMU early - so we can access sleep_save_stash by va */
> adr_l lr, __enable_mmu /* __cpu_setup will return here */
> - ldr x27, =_cpu_resume /* __enable_mmu will branch here */
> + adr_l x27, _resume_switched /* __enable_mmu will branch here */
> adrp x25, idmap_pg_dir
> adrp x26, swapper_pg_dir
> b __cpu_setup
> ENDPROC(cpu_resume)
>
> + .pushsection ".idmap.text", "ax"
> +_resume_switched:
> + ldr x8, =_cpu_resume
> + br x8
> +ENDPROC(_resume_switched)
> + .ltorg
> + .popsection
> +
> ENTRY(_cpu_resume)
> mrs x1, mpidr_el1
> adrp x8, mpidr_hash
> --
> 2.7.4
>
* [PATCH] arm64: kernel: avoid literal load of virtual address with MMU off
2016-08-17 15:54 [PATCH] arm64: kernel: avoid literal load of virtual address with MMU off Ard Biesheuvel
2016-08-17 16:16 ` Mark Rutland
@ 2016-08-17 17:11 ` Catalin Marinas
2016-08-17 17:22 ` Mark Rutland
2016-08-17 17:26 ` Ard Biesheuvel
1 sibling, 2 replies; 6+ messages in thread
From: Catalin Marinas @ 2016-08-17 17:11 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Aug 17, 2016 at 05:54:41PM +0200, Ard Biesheuvel wrote:
> Literal loads of virtual addresses are subject to runtime relocation when
> CONFIG_RELOCATABLE=y, and given that the relocation routines run with the
> MMU and caches enabled, literal loads of relocated values performed with
> the MMU off are not guaranteed to return the latest value unless the
> memory covering the literal is cleaned to the PoC explicitly.
>
> So defer the literal load until after the MMU has been enabled, just like
> we do for primary_switch() and secondary_switch() in head.S.
I think for consistency with head.S the patch is fine but do we actually
expect a literal pool value to be inconsistent between cache and PoC?
Presumably, the first time the kernel image was loaded it had to be
cleaned to PoC but after that we wouldn't expect modifications of the
literal pool data.
--
Catalin
* [PATCH] arm64: kernel: avoid literal load of virtual address with MMU off
2016-08-17 17:11 ` Catalin Marinas
@ 2016-08-17 17:22 ` Mark Rutland
2016-08-17 17:26 ` Ard Biesheuvel
1 sibling, 0 replies; 6+ messages in thread
From: Mark Rutland @ 2016-08-17 17:22 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Aug 17, 2016 at 06:11:34PM +0100, Catalin Marinas wrote:
> On Wed, Aug 17, 2016 at 05:54:41PM +0200, Ard Biesheuvel wrote:
> > Literal loads of virtual addresses are subject to runtime relocation when
> > CONFIG_RELOCATABLE=y, and given that the relocation routines run with the
> > MMU and caches enabled, literal loads of relocated values performed with
> > the MMU off are not guaranteed to return the latest value unless the
> > memory covering the literal is cleaned to the PoC explicitly.
> >
> > So defer the literal load until after the MMU has been enabled, just like
> > we do for primary_switch() and secondary_switch() in head.S.
>
> I think for consistency with head.S the patch is fine but do we actually
> expect a literal pool value to be inconsistent between cache and PoC?
> Presumably, the first time the kernel image was loaded it had to be
> cleaned to PoC but after that we wouldn't expect modifications of the
> literal pool data.
The kernel modifies literal pool values within itself, at boot time,
after having enabled the MMU. This is after any loader has cleaned the
kernel Image to the PoC and branched to it with the MMU off.
See __primary_switch in head.S.
With upcoming patches [1], resuming from hibernation can also leave
stale data at the PoC for relocated literals like this.
Thanks,
Mark.
[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2016-August/448996.html
* [PATCH] arm64: kernel: avoid literal load of virtual address with MMU off
2016-08-17 17:11 ` Catalin Marinas
2016-08-17 17:22 ` Mark Rutland
@ 2016-08-17 17:26 ` Ard Biesheuvel
2016-08-18 8:34 ` Catalin Marinas
1 sibling, 1 reply; 6+ messages in thread
From: Ard Biesheuvel @ 2016-08-17 17:26 UTC (permalink / raw)
To: linux-arm-kernel
On 17 August 2016 at 19:11, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Wed, Aug 17, 2016 at 05:54:41PM +0200, Ard Biesheuvel wrote:
>> Literal loads of virtual addresses are subject to runtime relocation when
>> CONFIG_RELOCATABLE=y, and given that the relocation routines run with the
>> MMU and caches enabled, literal loads of relocated values performed with
>> the MMU off are not guaranteed to return the latest value unless the
>> memory covering the literal is cleaned to the PoC explicitly.
>>
>> So defer the literal load until after the MMU has been enabled, just like
>> we do for primary_switch() and secondary_switch() in head.S.
>
> I think for consistency with head.S the patch is fine but do we actually
> expect a literal pool value to be inconsistent between cache and PoC?
Yes. Any literals covered by R_AARCH64_ABS64 relocations at link time
will be covered by R_AARCH64_RELATIVE at runtime (when
CONFIG_RELOCATABLE=y). Since AArch64 uses the RELA format, the targets
of the relocations will contain zeroes before the relocation routine
runs, and no cache maintenance is performed. This means a relocatable
image requires runtime processing even if it is loaded unrandomised.
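As a rough sketch of that runtime processing (not the kernel's actual relocation code;
the __rela_start/__rela_end labels, the use of x23 for the runtime displacement, and the
adr_l macro are assumptions made for illustration): each R_AARCH64_RELATIVE entry supplies
the link-time value as its addend, the runtime displacement is added, and the result is
written with plain stores through the cacheable mapping, with no clean to the PoC:

	adr_l	x9, __rela_start		// assumed: start of the .rela.dyn entries
	adr_l	x10, __rela_end			// assumed: end of the .rela.dyn entries
	mov	x11, #1027			// R_AARCH64_RELATIVE
0:	cmp	x9, x10
	b.hs	1f
	ldp	x12, x13, [x9], #24		// r_offset, r_info; advance to next Elf64_Rela
	ldur	x14, [x9, #-8]			// r_addend
	cmp	x13, x11
	b.ne	0b				// skip anything that is not R_AARCH64_RELATIVE
	add	x14, x14, x23			// value = addend + runtime displacement (x23 assumed)
	str	x14, [x12, x23]			// plain store via the cacheable mapping,
	b	0b				// no clean to the PoC afterwards
1:

So the literal that cpu_resume would read with the MMU off only gains its final value
here, after the MMU and caches are already on.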
> Presumably, the first time the kernel image was loaded it had to be
> cleaned to PoC but after that we wouldn't expect modifications of the
> literal pool data.
>
> --
> Catalin
* [PATCH] arm64: kernel: avoid literal load of virtual address with MMU off
2016-08-17 17:26 ` Ard Biesheuvel
@ 2016-08-18 8:34 ` Catalin Marinas
0 siblings, 0 replies; 6+ messages in thread
From: Catalin Marinas @ 2016-08-18 8:34 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Aug 17, 2016 at 07:26:32PM +0200, Ard Biesheuvel wrote:
> On 17 August 2016 at 19:11, Catalin Marinas <catalin.marinas@arm.com> wrote:
> > On Wed, Aug 17, 2016 at 05:54:41PM +0200, Ard Biesheuvel wrote:
> >> Literal loads of virtual addresses are subject to runtime relocation when
> >> CONFIG_RELOCATABLE=y, and given that the relocation routines run with the
> >> MMU and caches enabled, literal loads of relocated values performed with
> >> the MMU off are not guaranteed to return the latest value unless the
> >> memory covering the literal is cleaned to the PoC explicitly.
> >>
> >> So defer the literal load until after the MMU has been enabled, just like
> >> we do for primary_switch() and secondary_switch() in head.S.
> >
> > I think for consistency with head.S the patch is fine but do we actually
> > expect a literal pool value to be inconsistent between cache and PoC?
>
> Yes. Any literals covered by R_AARCH64_ABS64 relocations at link time
> will be covered by R_AARCH64_RELATIVE at runtime (when
> CONFIG_RELOCATABLE=y). Since AArch64 uses the RELA format, the targets
> of the relocations will contain zeroes before the relocation routine
> runs, and no cache maintenance is performed. This means a relocatable
> image requires runtime processing even if it is loaded unrandomised.
Ah, I forgot about this. Patch queued for 4.8-rc3. Thanks.
--
Catalin