* [PATCH] arm64: kasan: Use actual memory node when populating the kernel image shadow
2016-03-11 2:31 ` Ard Biesheuvel
@ 2016-03-10 7:50 ` Mark Rutland
2016-03-11 10:44 ` Catalin Marinas
2016-03-11 13:55 ` Andrey Ryabinin
2 siblings, 0 replies; 5+ messages in thread
From: Mark Rutland @ 2016-03-10 7:50 UTC (permalink / raw)
To: linux-arm-kernel
On Fri, Mar 11, 2016 at 09:31:02AM +0700, Ard Biesheuvel wrote:
> On 11 March 2016 at 01:57, Catalin Marinas <catalin.marinas@arm.com> wrote:
> > With the 16KB or 64KB page configurations, the generic
> > vmemmap_populate() implementation warns on potential offnode
> > page_structs via vmemmap_verify() because the arm64 kasan_init() passes
> > NUMA_NO_NODE instead of the actual node for the kernel image memory.
> >
> > Fixes: f9040773b7bb ("arm64: move kernel image to base of vmalloc area")
> > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> > Reported-by: James Morse <james.morse@arm.com>
>
> I still think using vmemmap_populate() is somewhat of a hack here, and
> the fact that we have different versions for 4k pages and !4k pages,
> while perhaps justified for the actual real purpose of allocating
> struct page arrays, makes this code more fragile than it needs to be.
One of the things I had hoped to look into was having a common,
p?d-block-mapping-aware vmemmap_populate() that we could use in all
cases, so that we could minimise TLB pressure for the vmemmap
regardless of SWAPPER_USES_SECTION_MAPS (whenever we can allocate
sufficiently aligned physical memory).
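As a very rough strawman (untested and purely illustrative; it only
reuses the existing vmemmap_* helpers from mm/sparse-vmemmap.c), such a
unified version might look something like:

int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
{
	unsigned long addr = start;
	unsigned long next;
	pgd_t *pgd;
	pud_t *pud;
	pmd_t *pmd;

	do {
		next = pmd_addr_end(addr, end);

		pgd = vmemmap_pgd_populate(addr, node);
		if (!pgd)
			return -ENOMEM;

		pud = vmemmap_pud_populate(pgd, addr, node);
		if (!pud)
			return -ENOMEM;

		pmd = pmd_offset(pud, addr);

		if (pmd_none(*pmd) && IS_ALIGNED(addr, PMD_SIZE) &&
		    next - addr == PMD_SIZE) {
			/* whole PMD covered: try a section (block) mapping */
			void *p = vmemmap_alloc_block_buf(PMD_SIZE, node);

			if (p) {
				set_pmd(pmd, __pmd(__pa(p) | PROT_SECT_NORMAL));
				vmemmap_verify((pte_t *)pmd, node, addr, next);
				continue;
			}
		}

		if (pmd_none(*pmd)) {
			/*
			 * Partial PMD, unaligned edge or no contiguous block
			 * available: fall back to base pages
			 * (vmemmap_populate_basepages() verifies each pte it
			 * installs).
			 */
			if (vmemmap_populate_basepages(addr, next, node))
				return -ENOMEM;
		} else {
			/* already mapped on a previous pass: check the node */
			vmemmap_verify((pte_t *)pmd, node, addr, next);
		}
	} while (addr = next, addr != end);

	return 0;
}

i.e. use a section mapping whenever the PMD range is fully covered and
a contiguous block can be allocated, and fall back to base pages
otherwise, so all page size configurations go through the same code.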
[...]
> Regardless,
>
> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Likewise:
Acked-by: Mark Rutland <mark.rutland@arm.com>
>
> > ---
> > arch/arm64/mm/kasan_init.c | 3 ++-
> > 1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> > index 56e19d150c21..a164183f3481 100644
> > --- a/arch/arm64/mm/kasan_init.c
> > +++ b/arch/arm64/mm/kasan_init.c
> > @@ -152,7 +152,8 @@ void __init kasan_init(void)
> >
> >  	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
> >
> > -	vmemmap_populate(kimg_shadow_start, kimg_shadow_end, NUMA_NO_NODE);
> > +	vmemmap_populate(kimg_shadow_start, kimg_shadow_end,
> > +			 pfn_to_nid(virt_to_pfn(_text)));
> >
> >  	/*
> >  	 * vmemmap_populate() has populated the shadow region that covers the
>
* [PATCH] arm64: kasan: Use actual memory node when populating the kernel image shadow
2016-03-11 2:31 ` Ard Biesheuvel
2016-03-10 7:50 ` Mark Rutland
@ 2016-03-11 10:44 ` Catalin Marinas
2016-03-11 13:55 ` Andrey Ryabinin
2 siblings, 0 replies; 5+ messages in thread
From: Catalin Marinas @ 2016-03-11 10:44 UTC (permalink / raw)
To: linux-arm-kernel
On Fri, Mar 11, 2016 at 09:31:02AM +0700, Ard Biesheuvel wrote:
> On 11 March 2016 at 01:57, Catalin Marinas <catalin.marinas@arm.com> wrote:
> > With the 16KB or 64KB page configurations, the generic
> > vmemmap_populate() implementation warns on potential offnode
> > page_structs via vmemmap_verify() because the arm64 kasan_init() passes
> > NUMA_NO_NODE instead of the actual node for the kernel image memory.
> >
> > Fixes: f9040773b7bb ("arm64: move kernel image to base of vmalloc area")
> > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> > Reported-by: James Morse <james.morse@arm.com>
>
> I still think using vmemmap_populate() is somewhat of a hack here, and
> the fact that we have different versions for 4k pages and !4k pages,
> while perhaps justified for the actual real purpose of allocating
> struct page arrays, makes this code more fragile than it needs to be.
I agree, kasan is hijacking an API meant for something else.
> How difficult would it be to simply have a kasan specific
> vmalloc_shadow() function that performs a
> memblock_alloc/create_mapping, and does the right thing wrt aligning
> the edges, rather than putting knowledge about how vmemmap_populate
> happens to align its allocations into the kasan code?
With a long flight from Bangkok, who knows, I may see some patches on
Monday ;)
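Just to illustrate the shape of it (names invented, completely
untested; this assumes memblock_alloc() returning a physical address,
and that kasan_init() has already torn down the early zero shadow for
this range with clear_pgds()), a kasan-specific populate could be
something like:

static phys_addr_t __init kasan_alloc_zeroed_page(void)
{
	phys_addr_t phys = memblock_alloc(PAGE_SIZE, PAGE_SIZE);

	memset(__va(phys), 0, PAGE_SIZE);
	return phys;
}

static void __init kasan_populate_shadow(unsigned long start, unsigned long end)
{
	unsigned long addr = round_down(start, PAGE_SIZE);

	end = round_up(end, PAGE_SIZE);

	for (; addr < end; addr += PAGE_SIZE) {
		pgd_t *pgd = pgd_offset_k(addr);
		pud_t *pud;
		pmd_t *pmd;
		pte_t *pte;

		/* allocate any missing table levels from memblock */
		if (pgd_none(*pgd))
			pgd_populate(&init_mm, pgd,
				     (pud_t *)__va(kasan_alloc_zeroed_page()));
		pud = pud_offset(pgd, addr);

		if (pud_none(*pud))
			pud_populate(&init_mm, pud,
				     (pmd_t *)__va(kasan_alloc_zeroed_page()));
		pmd = pmd_offset(pud, addr);

		if (pmd_none(*pmd))
			pmd_populate_kernel(&init_mm, pmd,
				     (pte_t *)__va(kasan_alloc_zeroed_page()));
		pte = pte_offset_kernel(pmd, addr);

		/* back this shadow page and map it */
		set_pte(pte, pfn_pte(__phys_to_pfn(kasan_alloc_zeroed_page()),
				     PAGE_KERNEL));
	}

	flush_tlb_kernel_range(round_down(start, PAGE_SIZE), end);
}

The interesting part Ard mentions (doing the right thing where the
image shadow abuts the zero-page-backed shadow at the edges) is not
handled above; the rounding would need to match the granularity of the
neighbouring mappings.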
Anyway, I think we also need to change the 4KB pages vmemmap_populate()
to call vmemmap_verify() in all cases, something like below (untested):
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index d2d8b8c2e17f..c0f61235a6ec 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -642,8 +642,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
 				return -ENOMEM;
 
 			set_pmd(pmd, __pmd(__pa(p) | PROT_SECT_NORMAL));
-		} else
-			vmemmap_verify((pte_t *)pmd, node, addr, next);
+		}
+		vmemmap_verify((pte_t *)pmd, node, addr, next);
 	} while (addr = next, addr != end);
 
 	return 0;
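For reference, the check that fires is vmemmap_verify() in
mm/sparse-vmemmap.c, roughly (paraphrased):

void __meminit vmemmap_verify(pte_t *pte, int node,
			      unsigned long start, unsigned long end)
{
	unsigned long pfn = pte_pfn(*pte);
	int actual_node = early_pfn_to_nid(pfn);

	/* warn if the backing page is not local to the requested node */
	if (node_distance(actual_node, node) > LOCAL_DISTANCE)
		pr_warn("[%lx-%lx] potential offnode page_structs\n",
			start, end - 1);
}

It compares the node of the page that actually backs the mapping with
the node the caller asked for, which is why passing
pfn_to_nid(virt_to_pfn(_text)) instead of NUMA_NO_NODE silences it for
the kernel image shadow.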
--
Catalin
* [PATCH] arm64: kasan: Use actual memory node when populating the kernel image shadow
2016-03-11 2:31 ` Ard Biesheuvel
2016-03-10 7:50 ` Mark Rutland
2016-03-11 10:44 ` Catalin Marinas
@ 2016-03-11 13:55 ` Andrey Ryabinin
2 siblings, 0 replies; 5+ messages in thread
From: Andrey Ryabinin @ 2016-03-11 13:55 UTC (permalink / raw)
To: linux-arm-kernel
On 03/11/2016 05:31 AM, Ard Biesheuvel wrote:
> On 11 March 2016 at 01:57, Catalin Marinas <catalin.marinas@arm.com> wrote:
>> With the 16KB or 64KB page configurations, the generic
>> vmemmap_populate() implementation warns on potential offnode
>> page_structs via vmemmap_verify() because the arm64 kasan_init() passes
>> NUMA_NO_NODE instead of the actual node for the kernel image memory.
>>
>> Fixes: f9040773b7bb ("arm64: move kernel image to base of vmalloc area")
>> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
>> Reported-by: James Morse <james.morse@arm.com>
>
> I still think using vmemmap_populate() is somewhat of a hack here, and
> the fact that we have different versions for 4k pages and !4k pages,
> while perhaps justified for the actual real purpose of allocating
> struct page arrays, makes this code more fragile than it needs to be.
> How difficult would it be to simply have a kasan specific
> vmalloc_shadow() function that performs a
> memblock_alloc/create_mapping, and does the right thing wrt aligning
> the edges, rather than putting knowledge about how vmemmap_populate
> happens to align its allocations into the kasan code?
Should be easy.
Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>