From: "Kirill A. Shutemov"
Subject: [PATCH v4 6/8] mm: make clear_huge_page cache clear only around the fault address
Date: Mon, 20 Aug 2012 16:52:35 +0300
Message-Id: <1345470757-12005-7-git-send-email-kirill.shutemov@linux.intel.com>
In-Reply-To: <1345470757-12005-1-git-send-email-kirill.shutemov@linux.intel.com>
References: <1345470757-12005-1-git-send-email-kirill.shutemov@linux.intel.com>
Sender: owner-linux-mm@kvack.org
To: linux-mm@kvack.org
Cc: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", x86@kernel.org, Andi Kleen, "Kirill A. Shutemov", Tim Chen, Alex Shi, Jan Beulich, Robert Richter, Andy Lutomirski, Andrew Morton, Andrea Arcangeli, Johannes Weiner, Hugh Dickins, KAMEZAWA Hiroyuki, Mel Gorman, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-mips@linux-mips.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org

From: Andi Kleen

Clearing a 2MB huge page will typically blow away several levels of CPU
caches. To avoid this, only cache-clear the 4K area around the fault
address and use cache-avoiding clears for the rest of the 2MB area.

Signed-off-by: Andi Kleen
Signed-off-by: Kirill A. Shutemov
---
 mm/memory.c |   37 +++++++++++++++++++++++++++++--------
 1 files changed, 29 insertions(+), 8 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index dfc179b..625ca33 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3969,18 +3969,32 @@ EXPORT_SYMBOL(might_fault);
 #endif
 
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
+
+#ifndef ARCH_HAS_USER_NOCACHE
+#define ARCH_HAS_USER_NOCACHE 0
+#endif
+
+#if ARCH_HAS_USER_NOCACHE == 0
+#define clear_user_highpage_nocache clear_user_highpage
+#endif
+
 static void clear_gigantic_page(struct page *page,
-				unsigned long addr,
-				unsigned int pages_per_huge_page)
+		unsigned long haddr, unsigned long fault_address,
+		unsigned int pages_per_huge_page)
 {
 	int i;
 	struct page *p = page;
+	unsigned long vaddr;
+	int target = (fault_address - haddr) >> PAGE_SHIFT;
 
 	might_sleep();
-	for (i = 0; i < pages_per_huge_page;
-	     i++, p = mem_map_next(p, page, i)) {
+	for (i = 0, vaddr = haddr; i < pages_per_huge_page;
+	     i++, p = mem_map_next(p, page, i), vaddr += PAGE_SIZE) {
 		cond_resched();
-		clear_user_highpage(p, addr + i * PAGE_SIZE);
+		if (!ARCH_HAS_USER_NOCACHE || i == target)
+			clear_user_highpage(p, vaddr);
+		else
+			clear_user_highpage_nocache(p, vaddr);
 	}
 }
 void clear_huge_page(struct page *page,
@@ -3988,16 +4002,23 @@ void clear_huge_page(struct page *page,
 		     unsigned int pages_per_huge_page)
 {
 	int i;
+	unsigned long vaddr;
+	int target = (fault_address - haddr) >> PAGE_SHIFT;
 
 	if (unlikely(pages_per_huge_page > MAX_ORDER_NR_PAGES)) {
-		clear_gigantic_page(page, haddr, pages_per_huge_page);
+		clear_gigantic_page(page, haddr, fault_address,
+			pages_per_huge_page);
 		return;
 	}
 
 	might_sleep();
-	for (i = 0; i < pages_per_huge_page; i++) {
+	for (i = 0, vaddr = haddr; i < pages_per_huge_page;
+	     i++, page++, vaddr += PAGE_SIZE) {
 		cond_resched();
-		clear_user_highpage(page + i, haddr + i * PAGE_SIZE);
+		if (!ARCH_HAS_USER_NOCACHE || i == target)
+			clear_user_highpage(page, vaddr);
+		else
+			clear_user_highpage_nocache(page, vaddr);
 	}
 }
 
-- 
1.7.7.6

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
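
For reference, here is a minimal userspace sketch of the cache-avoiding split
described above. It is not code from this series: it assumes an x86 CPU with
SSE2 and uses non-temporal stores to stand in for the arch-provided
clear_user_highpage_nocache() that the generic code falls back to
clear_user_highpage() when ARCH_HAS_USER_NOCACHE is not set. The names haddr,
fault_address and target mirror the patch; clear_page_nocache() and everything
else are made up for the demo.

/*
 * Userspace illustration only -- not kernel code.  Clear a 2 MiB
 * "huge page" with cache-bypassing non-temporal stores, except for
 * the one 4 KiB page containing the fault address, which is cleared
 * with ordinary cached stores (the data the faulting code is most
 * likely to touch next).
 *
 * Build: gcc -O2 -msse2 -o nocache_demo nocache_demo.c
 */
#define _POSIX_C_SOURCE 200112L
#include <emmintrin.h>		/* _mm_stream_si128(), _mm_sfence() */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)	/* 4 KiB */
#define HPAGE_SIZE	(2UL * 1024 * 1024)	/* 2 MiB */

/* Cache-avoiding clear of one 4 KiB page using non-temporal stores. */
static void clear_page_nocache(void *page)
{
	__m128i zero = _mm_setzero_si128();
	char *p = page;
	unsigned long off;

	for (off = 0; off < PAGE_SIZE; off += 64) {
		_mm_stream_si128((__m128i *)(p + off +  0), zero);
		_mm_stream_si128((__m128i *)(p + off + 16), zero);
		_mm_stream_si128((__m128i *)(p + off + 32), zero);
		_mm_stream_si128((__m128i *)(p + off + 48), zero);
	}
	_mm_sfence();	/* order the non-temporal stores */
}

int main(void)
{
	char *huge;
	unsigned long haddr, fault_address, target, i;

	/* Stand-in for a 2 MiB huge page; 16-byte alignment is enough
	 * for the SSE2 stream stores. */
	if (posix_memalign((void **)&huge, PAGE_SIZE, HPAGE_SIZE))
		return 1;

	haddr = (unsigned long)huge;
	/* Pretend the fault hit somewhere inside page 300 of 512. */
	fault_address = haddr + 300 * PAGE_SIZE + 42;
	target = (fault_address - haddr) >> PAGE_SHIFT;

	for (i = 0; i < HPAGE_SIZE / PAGE_SIZE; i++) {
		if (i == target)
			memset(huge + i * PAGE_SIZE, 0, PAGE_SIZE);	/* cached clear */
		else
			clear_page_nocache(huge + i * PAGE_SIZE);	/* cache-avoiding clear */
	}

	printf("cleared %lu pages, cached clear only for page %lu\n",
	       HPAGE_SIZE / PAGE_SIZE, target);
	free(huge);
	return 0;
}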