From: Pavel Tatashin
To: linux-kernel@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org,
	linux-s390@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	x86@kernel.org, kasan-dev@googlegroups.com, borntraeger@de.ibm.com,
	heiko.carstens@de.ibm.com, davem@davemloft.net, willy@infradead.org,
	mhocko@kernel.org
Subject: [v4 09/15] sparc64: optimized struct page zeroing
Date: Wed, 2 Aug 2017 16:38:18 -0400
Message-Id: <1501706304-869240-10-git-send-email-pasha.tatashin@oracle.com>
In-Reply-To: <1501706304-869240-1-git-send-email-pasha.tatashin@oracle.com>
References: <1501706304-869240-1-git-send-email-pasha.tatashin@oracle.com>
List-Id: Linux on PowerPC Developers Mail List

Add an optimized mm_zero_struct_page() so that struct pages are zeroed
without calling memset(). We do eight regular stores and thus avoid the
cost of a membar.

Signed-off-by: Pavel Tatashin
Reviewed-by: Steven Sistare
Reviewed-by: Daniel Jordan
Reviewed-by: Bob Picco
---
 arch/sparc/include/asm/pgtable_64.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 6fbd931f0570..23ad51ea5340 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -230,6 +230,24 @@ extern unsigned long _PAGE_ALL_SZ_BITS;
 extern struct page *mem_map_zero;
 #define ZERO_PAGE(vaddr)	(mem_map_zero)
 
+/* This macro must be updated when the size of struct page changes,
+ * so use static assert to enforce the assumed size.
+ */
+#define mm_zero_struct_page(pp)				\
+	do {						\
+		unsigned long *_pp = (void *)(pp);	\
+							\
+		BUILD_BUG_ON(sizeof(struct page) != 64); \
+		_pp[0] = 0;				\
+		_pp[1] = 0;				\
+		_pp[2] = 0;				\
+		_pp[3] = 0;				\
+		_pp[4] = 0;				\
+		_pp[5] = 0;				\
+		_pp[6] = 0;				\
+		_pp[7] = 0;				\
+	} while (0)
+
 /* PFNs are real physical page numbers.  However, mem_map only begins to record
  * per-page information starting at pfn_base.  This is to handle systems where
  * the first physical page in the machine is at some huge physical address,
-- 
2.13.3
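
Aside, for readers who want to try the eight-store pattern outside the
kernel: the sketch below is a standalone userspace approximation, not part
of the patch. It mocks a 64-byte struct page (the real layout lives in
<linux/mm_types.h>), substitutes C11 _Static_assert for BUILD_BUG_ON, and
checks that the macro clears all 64 bytes. It assumes an LP64 target where
unsigned long is 8 bytes; all names outside the macro body are illustrative
scaffolding. Per the commit message, the win on sparc64 comes from doing
eight ordinary stores instead of a memset() whose block-initializing stores
would require a membar.

#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative 64-byte stand-in for the kernel's struct page. */
struct page {
	unsigned long words[8];
};

/* Same shape as the patch; _Static_assert replaces BUILD_BUG_ON here. */
#define mm_zero_struct_page(pp)					\
	do {							\
		unsigned long *_pp = (void *)(pp);		\
								\
		_Static_assert(sizeof(struct page) == 64,	\
			       "update mm_zero_struct_page");	\
		_pp[0] = 0;					\
		_pp[1] = 0;					\
		_pp[2] = 0;					\
		_pp[3] = 0;					\
		_pp[4] = 0;					\
		_pp[5] = 0;					\
		_pp[6] = 0;					\
		_pp[7] = 0;					\
	} while (0)

int main(void)
{
	struct page p;
	unsigned char *b = (unsigned char *)&p;

	memset(&p, 0xff, sizeof(p));	/* dirty the struct first */
	mm_zero_struct_page(&p);

	for (size_t i = 0; i < sizeof(p); i++)
		assert(b[i] == 0);
	puts("all 64 bytes zeroed by eight stores");
	return 0;
}

Build with, e.g., cc -std=c11 zero_page.c. On a non-LP64 target the static
assert fires at compile time, which is exactly the guard the patch relies
on if struct page ever changes size.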