From mboxrd@z Thu Jan 1 00:00:00 1970
From: msalter@redhat.com (Mark Salter)
Date: Tue, 29 Jul 2014 11:15:45 -0400
Subject: [PATCH 1/3] arm64: spin-table: handle unmapped cpu-release-addrs
In-Reply-To: <1406630950-32432-2-git-send-email-ard.biesheuvel@linaro.org>
References: <1406630950-32432-1-git-send-email-ard.biesheuvel@linaro.org>
 <1406630950-32432-2-git-send-email-ard.biesheuvel@linaro.org>
Message-ID: <1406646945.753.5.camel@deneb.redhat.com>
To: linux-arm-kernel@lists.infradead.org
List-Id: linux-arm-kernel.lists.infradead.org

On Tue, 2014-07-29 at 12:49 +0200, Ard Biesheuvel wrote:
> From: Mark Rutland
>
> In certain cases the cpu-release-addr of a CPU may not fall in the
> linear mapping (e.g. when the kernel is loaded above this address due to
> the presence of other images in memory). This is problematic for the
> spin-table code as it assumes that it can trivially convert a
> cpu-release-addr to a valid VA in the linear map.
>
> This patch modifies the spin-table code to use a temporary cached
> mapping to write to a given cpu-release-addr, enabling us to support
> addresses regardless of whether they are covered by the linear mapping.
>
> Signed-off-by: Mark Rutland
> ---
>  arch/arm64/kernel/smp_spin_table.c | 21 ++++++++++++++++-----
>  1 file changed, 16 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/kernel/smp_spin_table.c b/arch/arm64/kernel/smp_spin_table.c
> index 0347d38eea29..70181c1bf42d 100644
> --- a/arch/arm64/kernel/smp_spin_table.c
> +++ b/arch/arm64/kernel/smp_spin_table.c
> @@ -20,6 +20,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  #include
>  #include
> @@ -65,12 +66,21 @@ static int smp_spin_table_cpu_init(struct device_node *dn, unsigned int cpu)
>
>  static int smp_spin_table_cpu_prepare(unsigned int cpu)
>  {
> -	void **release_addr;
> +	__le64 __iomem *release_addr;
>
>  	if (!cpu_release_addr[cpu])
>  		return -ENODEV;
>
> -	release_addr = __va(cpu_release_addr[cpu]);
> +	/*
> +	 * The cpu-release-addr may or may not be inside the linear mapping.
> +	 * As ioremap_cache will either give us a new mapping or reuse the
> +	 * existing linear mapping, we can use it to cover both cases. In
> +	 * either case the memory will be MT_NORMAL.
> +	 */
> +	release_addr = ioremap_cache(cpu_release_addr[cpu],
> +				     sizeof(*release_addr));
> +	if (!release_addr)
> +		return -ENOMEM;
>
>  	/*
>  	 * We write the release address as LE regardless of the native
> @@ -79,15 +89,16 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
>  	 * boot-loader's endianess before jumping. This is mandated by
>  	 * the boot protocol.
>  	 */
> -	release_addr[0] = (void *) cpu_to_le64(__pa(secondary_holding_pen));
> -
> -	__flush_dcache_area(release_addr, sizeof(release_addr[0]));
> +	writeq_relaxed(__pa(secondary_holding_pen), release_addr);
> +	__flush_dcache_area(release_addr, sizeof(*release_addr));

This needs to be __flush_dcache_area((__force void *)release_addr, ...)
to avoid a sparse warning.

>
>  	/*
>  	 * Send an event to wake up the secondary CPU.
>  	 */
>  	sev();
>
> +	iounmap(release_addr);
> +
>  	return 0;
>  }
>