From mboxrd@z Thu Jan  1 00:00:00 1970
From: mark.rutland@arm.com (Mark Rutland)
Date: Wed, 13 May 2015 15:11:52 +0100
Subject: [PATCH] ARM: mm: handle non-pmd-aligned end of RAM
In-Reply-To: <55535440.1090808@redhat.com>
References: <20150511103130.GA3490@leverpostej>
 <5dec6ce1e05fd20c214d9e0ab41dcc0d@agner.ch>
 <55535440.1090808@redhat.com>
Message-ID: <20150513141152.GD6169@leverpostej>
To: linux-arm-kernel@lists.infradead.org
List-Id: linux-arm-kernel.lists.infradead.org

On Wed, May 13, 2015 at 02:40:16PM +0100, Hans de Goede wrote:
> Hi,
>
> On 11-05-15 13:43, Stefan Agner wrote:
> > On 2015-05-11 12:31, Mark Rutland wrote:
> >> At boot time we round the memblock limit down to section size in an
> >> attempt to ensure that we will have mapped this RAM with section
> >> mappings prior to allocating from it. When mapping RAM we iterate over
> >> PMD-sized chunks, creating these section mappings.
> >>
> >> Section mappings are only created when the end of a chunk is aligned to
> >> section size. Unfortunately, with classic page tables (where PMD_SIZE is
> >> 2 * SECTION_SIZE) this means that if a chunk is between 1M and 2M in
> >> size the first 1M will not be mapped despite having been accounted for
> >> in the memblock limit. This has been observed to result in page tables
> >> being allocated from unmapped memory, causing boot-time hangs.
> >>
> >> This patch modifies the memblock limit rounding to always round down to
> >> PMD_SIZE instead of SECTION_SIZE. For classic MMU this means that we
> >> will round the memblock limit down to a 2M boundary, matching the limits
> >> on section mappings, and preventing allocations from unmapped memory.
> >> For LPAE there should be no change as PMD_SIZE == SECTION_SIZE.
> >
> > Thanks Mark, just tested that patch on the hardware where I had the
> > issue, looks good.
> >
> > Tested-by: Stefan Agner
>
> Same for me, this also fixes the issue I was seeing on an Allwinner A33
> tablet with 1024x600 lcd screen.
>
> Tested-by: Hans de Goede

Great. Thanks for testing!

> Can we get this Cc-ed to stable at vger.kernel.org please? At least
> for 4.0 ?

Sure, done.

Russell, on the assumption you're ok with the patch I've submitted it to
the patch system as 8356/1.

Thanks,
Mark.

> Regards,
>
> Hans
>
> >>
> >> Signed-off-by: Mark Rutland
> >> Reported-by: Stefan Agner
> >> Cc: Catalin Marinas
> >> Cc: Hans de Goede
> >> Cc: Laura Abbott
> >> Cc: Russell King
> >> Cc: Steve Capper
> >> ---
> >>  arch/arm/mm/mmu.c | 20 ++++++++++----------
> >>  1 file changed, 10 insertions(+), 10 deletions(-)
> >>
> >> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
> >> index 4e6ef89..7186382 100644
> >> --- a/arch/arm/mm/mmu.c
> >> +++ b/arch/arm/mm/mmu.c
> >> @@ -1112,22 +1112,22 @@ void __init sanity_check_meminfo(void)
> >>  		}
> >>
> >>  		/*
> >> -		 * Find the first non-section-aligned page, and point
> >> +		 * Find the first non-pmd-aligned page, and point
> >>  		 * memblock_limit at it. This relies on rounding the
> >> -		 * limit down to be section-aligned, which happens at
> >> -		 * the end of this function.
> >> +		 * limit down to be pmd-aligned, which happens at the
> >> +		 * end of this function.
> >>  		 *
> >>  		 * With this algorithm, the start or end of almost any
> >> -		 * bank can be non-section-aligned. The only exception
> >> -		 * is that the start of the bank 0 must be section-
> >> +		 * bank can be non-pmd-aligned. The only exception is
> >> +		 * that the start of the bank 0 must be section-
> >>  		 * aligned, since otherwise memory would need to be
> >>  		 * allocated when mapping the start of bank 0, which
> >>  		 * occurs before any free memory is mapped.
> >>  		 */
> >>  		if (!memblock_limit) {
> >> -			if (!IS_ALIGNED(block_start, SECTION_SIZE))
> >> +			if (!IS_ALIGNED(block_start, PMD_SIZE))
> >>  				memblock_limit = block_start;
> >> -			else if (!IS_ALIGNED(block_end, SECTION_SIZE))
> >> +			else if (!IS_ALIGNED(block_end, PMD_SIZE))
> >>  				memblock_limit = arm_lowmem_limit;
> >>  		}
> >>
> >> @@ -1137,12 +1137,12 @@ void __init sanity_check_meminfo(void)
> >>  	high_memory = __va(arm_lowmem_limit - 1) + 1;
> >>
> >>  	/*
> >> -	 * Round the memblock limit down to a section size. This
> >> +	 * Round the memblock limit down to a pmd size. This
> >>  	 * helps to ensure that we will allocate memory from the
> >> -	 * last full section, which should be mapped.
> >> +	 * last full pmd, which should be mapped.
> >>  	 */
> >>  	if (memblock_limit)
> >> -		memblock_limit = round_down(memblock_limit, SECTION_SIZE);
> >> +		memblock_limit = round_down(memblock_limit, PMD_SIZE);
> >>  	if (!memblock_limit)
> >>  		memblock_limit = arm_lowmem_limit;
> >
> >