From: Pavel Tatashin <pasha.tatashin@oracle.com>
To: linux-kernel@vger.kernel.org, sparclinux@vger.kernel.org,
linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org,
linux-s390@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
x86@kernel.org, kasan-dev@googlegroups.com,
borntraeger@de.ibm.com, heiko.carstens@de.ibm.com,
davem@davemloft.net, willy@infradead.org, mhocko@kernel.org
Subject: [v5 09/15] sparc64: optimized struct page zeroing
Date: Thu, 3 Aug 2017 17:23:47 -0400 [thread overview]
Message-ID: <1501795433-982645-10-git-send-email-pasha.tatashin@oracle.com> (raw)
In-Reply-To: <1501795433-982645-1-git-send-email-pasha.tatashin@oracle.com>
Add an optimized mm_zero_struct_page(), so struct pages are zeroed without
calling memset(). We do eight to ten regular stores, depending on the size of
struct page, thus avoiding the cost of a membar.
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Reviewed-by: Bob Picco <bob.picco@oracle.com>
---
arch/sparc/include/asm/pgtable_64.h | 32 ++++++++++++++++++++++++++++++++
1 file changed, 32 insertions(+)
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 6fbd931f0570..be47537e84c5 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -230,6 +230,38 @@ extern unsigned long _PAGE_ALL_SZ_BITS;
extern struct page *mem_map_zero;
#define ZERO_PAGE(vaddr) (mem_map_zero)
+/* This macro must be updated when the size of struct page grows above 80
+ * bytes or shrinks below 64 bytes.
+ * The idea is that the compiler optimizes out the switch() statement and
+ * only leaves clrx instructions or a memset() call.
+ */
+#define mm_zero_struct_page(pp) do { \
+ unsigned long *_pp = (void *)(pp); \
+ \
+	/* Check that the size of struct page is a multiple of 8 bytes */ \
+ BUILD_BUG_ON(sizeof(struct page) & 7); \
+ \
+ switch (sizeof(struct page)) { \
+ case 80: \
+ _pp[9] = 0; /* fallthrough */ \
+ case 72: \
+ _pp[8] = 0; /* fallthrough */ \
+ case 64: \
+ _pp[7] = 0; \
+ _pp[6] = 0; \
+ _pp[5] = 0; \
+ _pp[4] = 0; \
+ _pp[3] = 0; \
+ _pp[2] = 0; \
+ _pp[1] = 0; \
+ _pp[0] = 0; \
+ break; /* no fallthrough */ \
+ default: \
+ pr_warn_once("suboptimal mm_zero_struct_page"); \
+ memset(_pp, 0, sizeof(struct page)); \
+ } \
+} while (0)
+
/* PFNs are real physical page numbers. However, mem_map only begins to record
* per-page information starting at pfn_base. This is to handle systems where
* the first physical page in the machine is at some huge physical address,
--
2.13.4
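For context, the sketch below shows how this macro is expected to be consumed
by the generic code introduced earlier in this series (patch 08/15, "mm: zero
struct pages during initialization"). The generic memset() fallback and the
exact call site in __init_single_page() shown here are illustrative
assumptions, not code added by this patch:

	/* Illustrative sketch only -- assumed fallback and call site. */
	#ifndef mm_zero_struct_page
	#define mm_zero_struct_page(pp) \
		((void)memset((pp), 0, sizeof(struct page)))
	#endif

	static void __meminit __init_single_page(struct page *page,
						 unsigned long pfn,
						 unsigned long zone, int nid)
	{
		/* On sparc64 this expands to the plain 64-bit stores above. */
		mm_zero_struct_page(page);
		set_page_links(page, zone, nid, pfn);
		init_page_count(page);
		page_mapcount_reset(page);
		/* ... remaining per-page initialization ... */
	}

With the sparc64 definition in place, the per-page zeroing in the hot
initialization path avoids the memset() call, and with it the membar cost
mentioned in the commit message.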
Thread overview: 20+ messages
2017-08-03 21:23 [v5 00/15] complete deferred page initialization Pavel Tatashin
2017-08-03 21:23 ` [v5 01/15] x86/mm: reserve only exiting low pages Pavel Tatashin
2017-08-03 21:23 ` [v5 02/15] x86/mm: setting fields in deferred pages Pavel Tatashin
2017-08-03 21:23 ` [v5 03/15] sparc64/mm: " Pavel Tatashin
2017-08-03 21:23 ` [v5 04/15] mm: discard memblock data later Pavel Tatashin
2017-08-03 21:23 ` [v5 05/15] mm: don't accessed uninitialized struct pages Pavel Tatashin
2017-08-03 21:23 ` [v5 06/15] sparc64: simplify vmemmap_populate Pavel Tatashin
2017-08-03 21:23 ` [v5 07/15] mm: defining memblock_virt_alloc_try_nid_raw Pavel Tatashin
2017-08-03 21:23 ` [v5 08/15] mm: zero struct pages during initialization Pavel Tatashin
2017-08-03 21:23 ` Pavel Tatashin [this message]
2017-08-04 5:37 ` [v5 09/15] sparc64: optimized struct page zeroing Sam Ravnborg
2017-08-04 13:50 ` Pasha Tatashin
2017-08-03 21:23 ` [v5 10/15] x86/kasan: explicitly zero kasan shadow memory Pavel Tatashin
2017-08-03 21:23 ` [v5 11/15] arm64/kasan: " Pavel Tatashin
2017-08-04 0:14 ` Ard Biesheuvel
2017-08-04 14:01 ` Pasha Tatashin
2017-08-03 21:23 ` [v5 12/15] mm: explicitly zero pagetable memory Pavel Tatashin
2017-08-03 21:23 ` [v5 13/15] mm: stop zeroing memory during allocation in vmemmap Pavel Tatashin
2017-08-03 21:23 ` [v5 14/15] mm: optimize early system hash allocations Pavel Tatashin
2017-08-03 21:23 ` [v5 15/15] mm: debug for raw alloctor Pavel Tatashin