* [PATCH V5] ARM: LPAE: Fix mapping in alloc_init_section for unaligned addresses
@ 2013-02-08 11:30 R Sricharan
2013-02-08 14:43 ` Christoffer Dall
2013-02-08 15:21 ` Catalin Marinas
0 siblings, 2 replies; 6+ messages in thread
From: R Sricharan @ 2013-02-08 11:30 UTC (permalink / raw)
To: linux-arm-kernel
With LPAE enabled, alloc_init_section() does not map the entire
address space for unaligned addresses.
The issue is also reproduced with CMA + LPAE. CMA tries to map 16MB
with page-granularity mappings during boot. alloc_init_pte()
is called and, out of 16MB, only 2MB gets mapped and the rest remains
inaccessible.
Because of this, OMAP5 boot is broken with CMA + LPAE enabled.
Fix the issue by ensuring that the entire address range is
mapped.
Signed-off-by: R Sricharan <r.sricharan@ti.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoffer Dall <chris@cloudcar.com>
Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Christoffer Dall <chris@cloudcar.com>
---
[V2] Moved the loop to alloc_init_pte as per Russell's
feedback and changed the subject accordingly.
Using PMD_XXX instead of SECTION_XXX to avoid
different loop increments with/without LPAE.
[v3] Removed the dummy variable phys and updated
the commit log for CMA case.
[v4] Resending with updated change log and
updating the tags.
[v5] Renamed alloc_init_section to alloc_init_pmd
and moved the loop back there. Also introduced
map_init_section as per Catalin's comments.
arch/arm/mm/mmu.c | 66 ++++++++++++++++++++++++++++++++---------------------
1 file changed, 40 insertions(+), 26 deletions(-)
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index ce328c7..a89df2b 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -576,39 +576,53 @@ static void __init alloc_init_pte(pmd_t *pmd, unsigned long addr,
} while (pte++, addr += PAGE_SIZE, addr != end);
}
-static void __init alloc_init_section(pud_t *pud, unsigned long addr,
+static void __init map_init_section(pmd_t *pmd, unsigned long addr,
+ unsigned long end, phys_addr_t phys,
+ const struct mem_type *type)
+{
+#ifndef CONFIG_ARM_LPAE
+ if (addr & SECTION_SIZE)
+ pmd++;
+#endif
+ do {
+ *pmd = __pmd(phys | type->prot_sect);
+ phys += SECTION_SIZE;
+ } while (pmd++, addr += SECTION_SIZE, addr != end);
+
+ flush_pmd_entry(pmd);
+}
+
+static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
unsigned long end, phys_addr_t phys,
const struct mem_type *type)
{
pmd_t *pmd = pmd_offset(pud, addr);
+ unsigned long next;
- /*
- * Try a section mapping - end, addr and phys must all be aligned
- * to a section boundary. Note that PMDs refer to the individual
- * L1 entries, whereas PGDs refer to a group of L1 entries making
- * up one logical pointer to an L2 table.
- */
- if (type->prot_sect && ((addr | end | phys) & ~SECTION_MASK) == 0) {
- pmd_t *p = pmd;
-
-#ifndef CONFIG_ARM_LPAE
- if (addr & SECTION_SIZE)
- pmd++;
-#endif
-
- do {
- *pmd = __pmd(phys | type->prot_sect);
- phys += SECTION_SIZE;
- } while (pmd++, addr += SECTION_SIZE, addr != end);
+ do {
+ /*
+ * With LPAE, we may be required to loop in, to map
+ * all the pmds for the given range.
+ */
+ next = pmd_addr_end(addr, end);
- flush_pmd_entry(p);
- } else {
/*
- * No need to loop; pte's aren't interested in the
- * individual L1 entries.
+ * Try a section mapping - end, addr and phys must all be
+ * aligned to a section boundary. Note that PMDs refer to
+ * the individual L1 entries, whereas PGDs refer to a group
+ * of L1 entries making up one logical pointer to an L2 table.
*/
- alloc_init_pte(pmd, addr, end, __phys_to_pfn(phys), type);
- }
+ if (type->prot_sect &&
+ ((addr | next | phys) & ~SECTION_MASK) == 0) {
+ map_init_section(pmd, addr, next, phys, type);
+ } else {
+ alloc_init_pte(pmd, addr, next,
+ __phys_to_pfn(phys), type);
+ }
+
+ phys += next - addr;
+
+ } while (pmd++, addr = next, addr != end);
}
static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr,
@@ -619,7 +633,7 @@ static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr,
do {
next = pud_addr_end(addr, end);
- alloc_init_section(pud, addr, next, phys, type);
+ alloc_init_pmd(pud, addr, next, phys, type);
phys += next - addr;
} while (pud++, addr = next, addr != end);
}
--
1.7.9.5
* [PATCH V5] ARM: LPAE: Fix mapping in alloc_init_section for unaligned addresses
2013-02-08 11:30 [PATCH V5] ARM: LPAE: Fix mapping in alloc_init_section for unaligned addresses R Sricharan
@ 2013-02-08 14:43 ` Christoffer Dall
2013-02-08 14:49 ` Santosh Shilimkar
2013-02-11 9:10 ` R Sricharan
2013-02-08 15:21 ` Catalin Marinas
1 sibling, 2 replies; 6+ messages in thread
From: Christoffer Dall @ 2013-02-08 14:43 UTC (permalink / raw)
To: linux-arm-kernel
On Fri, Feb 8, 2013 at 6:30 AM, R Sricharan <r.sricharan@ti.com> wrote:
> With LPAE enabled, alloc_init_section() does not map the entire
> address space for unaligned addresses.
>
> The issue is also reproduced with CMA + LPAE. CMA tries to map 16MB
> with page-granularity mappings during boot. alloc_init_pte()
> is called and, out of 16MB, only 2MB gets mapped and the rest remains
> inaccessible.
>
> Because of this, OMAP5 boot is broken with CMA + LPAE enabled.
> Fix the issue by ensuring that the entire address range is
> mapped.
>
> Signed-off-by: R Sricharan <r.sricharan@ti.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Christoffer Dall <chris@cloudcar.com>
> Cc: Russell King <linux@arm.linux.org.uk>
> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> Tested-by: Christoffer Dall <chris@cloudcar.com>
I don't think that I've tested this version of the patch, but ok.
> ---
> [V2] Moved the loop to alloc_init_pte as per Russell's
> feedback and changed the subject accordingly.
> Using PMD_XXX instead of SECTION_XXX to avoid
> different loop increments with/without LPAE.
>
> [v3] Removed the dummy variable phys and updated
> the commit log for CMA case.
>
> [v4] Resending with updated change log and
> updating the tags.
>
> [v5] Renamed alloc_init_section to alloc_init_pmd
> and moved the loop back there. Also introduced
> map_init_section as per Catalin's comments.
>
> arch/arm/mm/mmu.c | 66 ++++++++++++++++++++++++++++++++---------------------
> 1 file changed, 40 insertions(+), 26 deletions(-)
>
> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
> index ce328c7..a89df2b 100644
> --- a/arch/arm/mm/mmu.c
> +++ b/arch/arm/mm/mmu.c
> @@ -576,39 +576,53 @@ static void __init alloc_init_pte(pmd_t *pmd, unsigned long addr,
> } while (pte++, addr += PAGE_SIZE, addr != end);
> }
>
> -static void __init alloc_init_section(pud_t *pud, unsigned long addr,
> +static void __init map_init_section(pmd_t *pmd, unsigned long addr,
> + unsigned long end, phys_addr_t phys,
> + const struct mem_type *type)
> +{
> +#ifndef CONFIG_ARM_LPAE
> + if (addr & SECTION_SIZE)
> + pmd++;
> +#endif
I still think this one deserves a comment.
> + do {
> + *pmd = __pmd(phys | type->prot_sect);
> + phys += SECTION_SIZE;
> + } while (pmd++, addr += SECTION_SIZE, addr != end);
> +
> + flush_pmd_entry(pmd);
> +}
> +
> +static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
> unsigned long end, phys_addr_t phys,
> const struct mem_type *type)
> {
> pmd_t *pmd = pmd_offset(pud, addr);
> + unsigned long next;
>
> - /*
> - * Try a section mapping - end, addr and phys must all be aligned
> - * to a section boundary. Note that PMDs refer to the individual
> - * L1 entries, whereas PGDs refer to a group of L1 entries making
> - * up one logical pointer to an L2 table.
> - */
> - if (type->prot_sect && ((addr | end | phys) & ~SECTION_MASK) == 0) {
> - pmd_t *p = pmd;
> -
> -#ifndef CONFIG_ARM_LPAE
> - if (addr & SECTION_SIZE)
> - pmd++;
> -#endif
> -
> - do {
> - *pmd = __pmd(phys | type->prot_sect);
> - phys += SECTION_SIZE;
> - } while (pmd++, addr += SECTION_SIZE, addr != end);
> + do {
> + /*
> + * With LPAE, we may be required to loop in, to map
nit: "loop in" should just be loop. Also, I would not say that we may
be required to loop in the LPAE case, but that for LPAE we must loop
over all the PMDs for the given range.
> + * all the pmds for the given range.
> + */
> + next = pmd_addr_end(addr, end);
>
> - flush_pmd_entry(p);
> - } else {
> /*
> - * No need to loop; pte's aren't interested in the
> - * individual L1 entries.
> + * Try a section mapping - end, addr and phys must all be
> + * aligned to a section boundary. Note that PMDs refer to
> + * the individual L1 entries, whereas PGDs refer to a group
> + * of L1 entries making up one logical pointer to an L2 table.
This is not true for LPAE, so now when we're changing the comment, we
should be more specific.
> */
> - alloc_init_pte(pmd, addr, end, __phys_to_pfn(phys), type);
> - }
> + if (type->prot_sect &&
> + ((addr | next | phys) & ~SECTION_MASK) == 0) {
> + map_init_section(pmd, addr, next, phys, type);
> + } else {
> + alloc_init_pte(pmd, addr, next,
> + __phys_to_pfn(phys), type);
> + }
> +
> + phys += next - addr;
> +
> + } while (pmd++, addr = next, addr != end);
> }
>
> static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr,
> @@ -619,7 +633,7 @@ static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr,
>
> do {
> next = pud_addr_end(addr, end);
> - alloc_init_section(pud, addr, next, phys, type);
> + alloc_init_pmd(pud, addr, next, phys, type);
> phys += next - addr;
> } while (pud++, addr = next, addr != end);
> }
> --
* [PATCH V5] ARM: LPAE: Fix mapping in alloc_init_section for unaligned addresses
2013-02-08 14:43 ` Christoffer Dall
@ 2013-02-08 14:49 ` Santosh Shilimkar
2013-02-11 9:09 ` R Sricharan
2013-02-11 9:10 ` R Sricharan
1 sibling, 1 reply; 6+ messages in thread
From: Santosh Shilimkar @ 2013-02-08 14:49 UTC (permalink / raw)
To: linux-arm-kernel
On Friday 08 February 2013 08:13 PM, Christoffer Dall wrote:
> On Fri, Feb 8, 2013 at 6:30 AM, R Sricharan <r.sricharan@ti.com> wrote:
>> With LPAE enabled, alloc_init_section() does not map the entire
>> address space for unaligned addresses.
>>
>> The issue is also reproduced with CMA + LPAE. CMA tries to map 16MB
>> with page-granularity mappings during boot. alloc_init_pte()
>> is called and, out of 16MB, only 2MB gets mapped and the rest remains
>> inaccessible.
>>
>> Because of this, OMAP5 boot is broken with CMA + LPAE enabled.
>> Fix the issue by ensuring that the entire address range is
>> mapped.
>>
>> Signed-off-by: R Sricharan <r.sricharan@ti.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Christoffer Dall <chris@cloudcar.com>
>> Cc: Russell King <linux@arm.linux.org.uk>
>> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
>> Tested-by: Christoffer Dall <chris@cloudcar.com>
>
> I don't think that I've tested this version of the patch, but ok.
>
Indeed. Nor have I acked the updated version ;)
Tags from the old patch don't apply anymore when the
patch is completely revamped.
Regards,
Santosh
* [PATCH V5] ARM: LPAE: Fix mapping in alloc_init_section for unaligned addresses
2013-02-08 11:30 [PATCH V5] ARM: LPAE: Fix mapping in alloc_init_section for unaligned addresses R Sricharan
2013-02-08 14:43 ` Christoffer Dall
@ 2013-02-08 15:21 ` Catalin Marinas
1 sibling, 0 replies; 6+ messages in thread
From: Catalin Marinas @ 2013-02-08 15:21 UTC (permalink / raw)
To: linux-arm-kernel
On Fri, Feb 08, 2013 at 11:30:18AM +0000, R Sricharan wrote:
> With LPAE enabled, alloc_init_section() does not map the entire
> address space for unaligned addresses.
>
> The issue is also reproduced with CMA + LPAE. CMA tries to map 16MB
> with page-granularity mappings during boot. alloc_init_pte()
> is called and, out of 16MB, only 2MB gets mapped and the rest remains
> inaccessible.
>
> Because of this, OMAP5 boot is broken with CMA + LPAE enabled.
> Fix the issue by ensuring that the entire address range is
> mapped.
>
> Signed-off-by: R Sricharan <r.sricharan@ti.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Christoffer Dall <chris@cloudcar.com>
> Cc: Russell King <linux@arm.linux.org.uk>
> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> Tested-by: Christoffer Dall <chris@cloudcar.com>
The code looks fine but you need to fix the comments as per
Christoffer's suggestion. Otherwise:
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
* [PATCH V5] ARM: LPAE: Fix mapping in alloc_init_section for unaligned addresses
2013-02-08 14:49 ` Santosh Shilimkar
@ 2013-02-11 9:09 ` R Sricharan
0 siblings, 0 replies; 6+ messages in thread
From: R Sricharan @ 2013-02-11 9:09 UTC (permalink / raw)
To: linux-arm-kernel
On Friday 08 February 2013 08:19 PM, Santosh Shilimkar wrote:
> On Friday 08 February 2013 08:13 PM, Christoffer Dall wrote:
>> On Fri, Feb 8, 2013 at 6:30 AM, R Sricharan <r.sricharan@ti.com> wrote:
>>> With LPAE enabled, alloc_init_section() does not map the entire
>>> address space for unaligned addresses.
>>>
>>> The issue is also reproduced with CMA + LPAE. CMA tries to map 16MB
>>> with page-granularity mappings during boot. alloc_init_pte()
>>> is called and, out of 16MB, only 2MB gets mapped and the rest remains
>>> inaccessible.
>>>
>>> Because of this, OMAP5 boot is broken with CMA + LPAE enabled.
>>> Fix the issue by ensuring that the entire address range is
>>> mapped.
>>>
>>> Signed-off-by: R Sricharan <r.sricharan@ti.com>
>>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>>> Cc: Christoffer Dall <chris@cloudcar.com>
>>> Cc: Russell King <linux@arm.linux.org.uk>
>>> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
>>> Tested-by: Christoffer Dall <chris@cloudcar.com>
>>
>> I don't think that I've tested this version of the patch, but ok.
>>
> Indeed. Neither I have acke'd the updated version ;)
> Tags from old patch doesn't apply anymore when patch
> is completely revamped.
>
Ok, corrected this in the updated version.
Regards,
Sricharan
* [PATCH V5] ARM: LPAE: Fix mapping in alloc_init_section for unaligned addresses
2013-02-08 14:43 ` Christoffer Dall
2013-02-08 14:49 ` Santosh Shilimkar
@ 2013-02-11 9:10 ` R Sricharan
1 sibling, 0 replies; 6+ messages in thread
From: R Sricharan @ 2013-02-11 9:10 UTC (permalink / raw)
To: linux-arm-kernel
On Friday 08 February 2013 08:13 PM, Christoffer Dall wrote:
> On Fri, Feb 8, 2013 at 6:30 AM, R Sricharan <r.sricharan@ti.com> wrote:
>> With LPAE enabled, alloc_init_section() does not map the entire
>> address space for unaligned addresses.
>>
>> The issue is also reproduced with CMA + LPAE. CMA tries to map 16MB
>> with page-granularity mappings during boot. alloc_init_pte()
>> is called and, out of 16MB, only 2MB gets mapped and the rest remains
>> inaccessible.
>>
>> Because of this, OMAP5 boot is broken with CMA + LPAE enabled.
>> Fix the issue by ensuring that the entire address range is
>> mapped.
>>
>> Signed-off-by: R Sricharan <r.sricharan@ti.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Christoffer Dall <chris@cloudcar.com>
>> Cc: Russell King <linux@arm.linux.org.uk>
>> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
>> Tested-by: Christoffer Dall <chris@cloudcar.com>
>
> I don't think that I've tested this version of the patch, but ok.
>
>> ---
>> [V2] Moved the loop to alloc_init_pte as per Russell's
>> feedback and changed the subject accordingly.
>> Using PMD_XXX instead of SECTION_XXX to avoid
>> different loop increments with/without LPAE.
>>
>> [v3] Removed the dummy variable phys and updated
>> the commit log for CMA case.
>>
>> [v4] Resending with updated change log and
>> updating the tags.
>>
>> [v5] Renamed alloc_init_section to alloc_init_pmd
>> and moved the loop back there. Also introduced
>> map_init_section as per Catalin's comments.
>>
>> arch/arm/mm/mmu.c | 66 ++++++++++++++++++++++++++++++++---------------------
>> 1 file changed, 40 insertions(+), 26 deletions(-)
>>
>> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
>> index ce328c7..a89df2b 100644
>> --- a/arch/arm/mm/mmu.c
>> +++ b/arch/arm/mm/mmu.c
>> @@ -576,39 +576,53 @@ static void __init alloc_init_pte(pmd_t *pmd, unsigned long addr,
>> } while (pte++, addr += PAGE_SIZE, addr != end);
>> }
>>
>> -static void __init alloc_init_section(pud_t *pud, unsigned long addr,
>> +static void __init map_init_section(pmd_t *pmd, unsigned long addr,
>> + unsigned long end, phys_addr_t phys,
>> + const struct mem_type *type)
>> +{
>> +#ifndef CONFIG_ARM_LPAE
>> + if (addr & SECTION_SIZE)
>> + pmd++;
>> +#endif
>
> I still think this one deserves a comment.
Ok, added it in the updated version. In fact I was getting
a little verbose in describing this, but I am not sure if
there is a much simpler way.
>
>> + do {
>> + *pmd = __pmd(phys | type->prot_sect);
>> + phys += SECTION_SIZE;
>> + } while (pmd++, addr += SECTION_SIZE, addr != end);
>> +
>> + flush_pmd_entry(pmd);
>> +}
>> +
>> +static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
>> unsigned long end, phys_addr_t phys,
>> const struct mem_type *type)
>> {
>> pmd_t *pmd = pmd_offset(pud, addr);
>> + unsigned long next;
>>
>> - /*
>> - * Try a section mapping - end, addr and phys must all be aligned
>> - * to a section boundary. Note that PMDs refer to the individual
>> - * L1 entries, whereas PGDs refer to a group of L1 entries making
>> - * up one logical pointer to an L2 table.
>> - */
>> - if (type->prot_sect && ((addr | end | phys) & ~SECTION_MASK) == 0) {
>> - pmd_t *p = pmd;
>> -
>> -#ifndef CONFIG_ARM_LPAE
>> - if (addr & SECTION_SIZE)
>> - pmd++;
>> -#endif
>> -
>> - do {
>> - *pmd = __pmd(phys | type->prot_sect);
>> - phys += SECTION_SIZE;
>> - } while (pmd++, addr += SECTION_SIZE, addr != end);
>> + do {
>> + /*
>> + * With LPAE, we may be required to loop in, to map
>
> nit: "loop in" should just be loop. Also, I would not say that we may
> be required to loop in the LPAE case, but that for LPAE we must loop
> over all the PMDs for the given range.
Ok, correct.
>
>> + * all the pmds for the given range.
>> + */
>> + next = pmd_addr_end(addr, end);
>>
>> - flush_pmd_entry(p);
>> - } else {
>> /*
>> - * No need to loop; pte's aren't interested in the
>> - * individual L1 entries.
>> + * Try a section mapping - end, addr and phys must all be
>> + * aligned to a section boundary. Note that PMDs refer to
>> + * the individual L1 entries, whereas PGDs refer to a group
>> + * of L1 entries making up one logical pointer to an L2 table.
>
> This is not true for LPAE, so now when we're changing the comment, we
> should be more specific.
Yes, probably the second part of the comment is not common
to both cases. I am using it to describe this block now:
#ifndef CONFIG_ARM_LPAE
if (addr & SECTION_SIZE)
pmd++;
#endif
Regards,
Sricharan