From: ddaney.cavm@gmail.com (David Daney)
Date: Tue, 08 Mar 2016 12:08:04 -0800
Subject: [PATCH] arm64: account for sparsemem section alignment when choosing vmemmap offset
In-Reply-To: <1457446169-23099-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1457446169-23099-1-git-send-email-ard.biesheuvel@linaro.org>
Message-ID: <56DF3124.1080804@gmail.com>
To: linux-arm-kernel@lists.infradead.org
List-Id: linux-arm-kernel.lists.infradead.org

On 03/08/2016 06:09 AM, Ard Biesheuvel wrote:
> Commit dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear
> region") fixed an issue where the struct page array would overflow into the
> adjacent virtual memory region if system RAM was placed so high up in
> physical memory that its addresses were not representable in the build time
> configured virtual address size.
>
> However, the fix failed to take into account that the vmemmap region needs
> to be relatively aligned with respect to the sparsemem section size, so that
> a sequence of page structs corresponding with a sparsemem section in the
> linear region appears naturally aligned in the vmemmap region.
>
> So round up vmemmap to sparsemem section size. Since this essentially moves
> the projection of the linear region up in memory, also revert the reduction
> of the size of the vmemmap region.
>
> Fixes: dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear region")
> Tested-by: Mark Langsdorf

For me, this fixes the failures caused by dfd55ad85e4a ("arm64: vmemmap:
use virtual projection of linear region").

Tested-by: David Daney

> Signed-off-by: Ard Biesheuvel
> ---
>  arch/arm64/include/asm/pgtable.h | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index f50608674580..819aff5d593f 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -40,7 +40,7 @@
>   * VMALLOC_END: extends to the available space below vmmemmap, PCI I/O space,
>   * fixed mappings and modules
>   */
> -#define VMEMMAP_SIZE ALIGN((1UL << (VA_BITS - PAGE_SHIFT - 1)) * sizeof(struct page), PUD_SIZE)
> +#define VMEMMAP_SIZE ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) * sizeof(struct page), PUD_SIZE)
>
>  #ifndef CONFIG_KASAN
>  #define VMALLOC_START (VA_START)
> @@ -52,7 +52,8 @@
>  #define VMALLOC_END (PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
>
>  #define VMEMMAP_START (VMALLOC_END + SZ_64K)
> -#define vmemmap ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
> +#define vmemmap ((struct page *)VMEMMAP_START - \
> +  SECTION_ALIGN_DOWN(memstart_addr >> PAGE_SHIFT))
>
>  #define FIRST_USER_ADDRESS 0UL
>
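FWIW, the misalignment is easy to see with a quick user-space sketch.
All concrete values below are made up for illustration: VMEMMAP_START,
the struct page size, and SECTION_SIZE_BITS=30 with 4K pages are
assumptions, and SECTION_ALIGN_DOWN() is re-implemented locally rather
than pulled from the kernel headers.

#include <stdio.h>
#include <stdint.h>

/* Assumed example values: 4K pages, arm64 SECTION_SIZE_BITS of 30 */
#define PAGE_SHIFT              12
#define SECTION_SIZE_BITS       30
#define PFN_SECTION_SHIFT       (SECTION_SIZE_BITS - PAGE_SHIFT)
#define PAGES_PER_SECTION       (1UL << PFN_SECTION_SHIFT)

/* Same rounding the kernel's SECTION_ALIGN_DOWN() performs */
#define SECTION_ALIGN_DOWN(pfn) ((pfn) & ~(PAGES_PER_SECTION - 1))

struct page { char pad[64]; };  /* stand-in for the real struct page */

int main(void)
{
        uintptr_t vmemmap_start = 0xffffffbdc0000000UL; /* made up */
        /* RAM starting 123 pages past a sparsemem section boundary */
        uint64_t memstart_pfn = ((1UL << 40) >> PAGE_SHIFT) + 123;

        /* Old mapping: offset by the raw start pfn */
        struct page *vmemmap_old =
                (struct page *)vmemmap_start - memstart_pfn;
        /* Fixed mapping: offset by the section-aligned start pfn */
        struct page *vmemmap_new =
                (struct page *)vmemmap_start -
                SECTION_ALIGN_DOWN(memstart_pfn);

        /* First pfn of the sparsemem section containing memstart */
        uint64_t spfn = SECTION_ALIGN_DOWN(memstart_pfn);

        /*
         * Old: the section's page array begins 123 entries *below*
         * VMEMMAP_START.  New: it begins exactly at VMEMMAP_START.
         */
        printf("old: %p\nnew: %p\nVMEMMAP_START: %p\n",
               (void *)(vmemmap_old + spfn),
               (void *)(vmemmap_new + spfn),
               (void *)vmemmap_start);
        return 0;
}

With SECTION_ALIGN_DOWN() applied, pfn_to_page() of a section's first
pfn lands on a section-aligned slot, so each section's run of page
structs sits naturally aligned inside the vmemmap region instead of
starting below it.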