From: Mark Rutland <mark.rutland@arm.com>
To: Pavel Tatashin <pasha.tatashin@oracle.com>
Cc: linux-kernel@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org,
	linux-s390@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	x86@kernel.org, kasan-dev@googlegroups.com,
	borntraeger@de.ibm.com, heiko.carstens@de.ibm.com,
	davem@davemloft.net, willy@infradead.org, mhocko@kernel.org,
	ard.biesheuvel@linaro.org, will.deacon@arm.com,
	catalin.marinas@arm.com, sam@ravnborg.org,
	mgorman@techsingularity.net, Steven.Sistare@oracle.com,
	daniel.m.jordan@oracle.com, bob.picco@oracle.com
Subject: Re: [PATCH v8 10/11] arm64/kasan: explicitly zero kasan shadow memory
Date: Fri, 15 Sep 2017 02:10:36 +0100
Message-ID: <20170915011035.GA6936@remoulade>
In-Reply-To: <20170914223517.8242-11-pasha.tatashin@oracle.com>

On Thu, Sep 14, 2017 at 06:35:16PM -0400, Pavel Tatashin wrote:
> To optimize the performance of struct page initialization,
> vmemmap_populate() will no longer zero memory.
> 
> We must explicitly zero the memory that is allocated by vmemmap_populate()
> for kasan, as this memory does not go through struct page initialization
> path.
> 
> Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
> Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
> Reviewed-by: Bob Picco <bob.picco@oracle.com>
> ---
>  arch/arm64/mm/kasan_init.c | 42 ++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 42 insertions(+)
> 
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index 81f03959a4ab..e78a9ecbb687 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -135,6 +135,41 @@ static void __init clear_pgds(unsigned long start,
>  		set_pgd(pgd_offset_k(start), __pgd(0));
>  }
>  
> +/*
> + * Memory that was allocated by vmemmap_populate is not zeroed, so we must
> + * zero it here explicitly.
> + */
> +static void
> +zero_vmemmap_populated_memory(void)
> +{
> +	struct memblock_region *reg;
> +	u64 start, end;
> +
> +	for_each_memblock(memory, reg) {
> +		start = __phys_to_virt(reg->base);
> +		end = __phys_to_virt(reg->base + reg->size);
> +
> +		if (start >= end)
> +			break;
> +
> +		start = (u64)kasan_mem_to_shadow((void *)start);
> +		end = (u64)kasan_mem_to_shadow((void *)end);
> +
> +		/* Round to the start and end of the mapped pages */
> +		start = round_down(start, SWAPPER_BLOCK_SIZE);
> +		end = round_up(end, SWAPPER_BLOCK_SIZE);
> +		memset((void *)start, 0, end - start);
> +	}
> +
> +	start = (u64)kasan_mem_to_shadow(_text);
> +	end = (u64)kasan_mem_to_shadow(_end);
> +
> +	/* Round to the start and end of the mapped pages */
> +	start = round_down(start, SWAPPER_BLOCK_SIZE);
> +	end = round_up(end, SWAPPER_BLOCK_SIZE);
> +	memset((void *)start, 0, end - start);
> +}

I really don't see the need to duplicate the existing logic to iterate over
memblocks, calculate the addresses, etc.

Why can't we just have a zeroing wrapper? e.g. something like the below.

I also don't see why we couldn't have a generic function in core code to do
this, even if vmemmap_populate() itself doesn't zero the memory.

Thanks,
Mark.

---->8----
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 81f0395..698d065 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -135,6 +135,17 @@ static void __init clear_pgds(unsigned long start,
                set_pgd(pgd_offset_k(start), __pgd(0));
 }
 
+void kasan_populate_shadow(unsigned long shadow_start, unsigned long shadow_end,
+                           int nid)
+{
+       shadow_start = round_down(shadow_start, SWAPPER_BLOCK_SIZE);
+       shadow_end = round_up(shadow_end, SWAPPER_BLOCK_SIZE);
+
+       vmemmap_populate(shadow_start, shadow_end, nid);
+
+       memset((void *)shadow_start, 0, shadow_end - shadow_start);
+}
+
 void __init kasan_init(void)
 {
        u64 kimg_shadow_start, kimg_shadow_end;
@@ -161,8 +172,8 @@ void __init kasan_init(void)
 
        clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
 
-       vmemmap_populate(kimg_shadow_start, kimg_shadow_end,
-                        pfn_to_nid(virt_to_pfn(lm_alias(_text))));
+       kasah_populate_shadow(kimg_shadow_start, kimg_shadow_end,
-                        pfn_to_nid(virt_to_pfn(lm_alias(_text))));
+       kasan_populate_shadow(kimg_shadow_start, kimg_shadow_end,
+                             pfn_to_nid(virt_to_pfn(lm_alias(_text))));
 
        /*
         * vmemmap_populate() has populated the shadow region that covers the
@@ -191,9 +202,9 @@ void __init kasan_init(void)
                if (start >= end)
                        break;
 
-               vmemmap_populate((unsigned long)kasan_mem_to_shadow(start),
-                               (unsigned long)kasan_mem_to_shadow(end),
-                               pfn_to_nid(virt_to_pfn(start)));
+               kasan_populate_shadow((unsigned long)kasan_mem_to_shadow(start),
+                                     (unsigned long)kasan_mem_to_shadow(end),
+                                     pfn_to_nid(virt_to_pfn(start)));
        }
 
        /*
