From: Ingo Molnar
Subject: [PATCH 07/27] mm: Optimize the TLB flush of sys_mprotect() and change_protection() users
Date: Mon, 19 Nov 2012 03:14:24 +0100
Message-Id: <1353291284-2998-8-git-send-email-mingo@kernel.org>
In-Reply-To: <1353291284-2998-1-git-send-email-mingo@kernel.org>
References: <1353291284-2998-1-git-send-email-mingo@kernel.org>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Peter Zijlstra, Paul Turner, Lee Schermerhorn, Christoph Lameter,
    Rik van Riel, Mel Gorman, Andrew Morton, Andrea Arcangeli,
    Linus Torvalds, Thomas Gleixner, Johannes Weiner, Hugh Dickins

Reuse the NUMA code's 'modified page protections' count that
change_protection() computes and skip the TLB flush if there are no
changes to a range that sys_mprotect() modifies.

Given that mprotect() already optimizes the same-flags case I expected
this optimization to trigger predominantly on CONFIG_NUMA_BALANCING=y
kernels - but even with that feature disabled it triggers rather often.

There are two reasons for that:

1) sys_mprotect() already optimizes the same-flags case:

	if (newflags == oldflags) {
		*pprev = vma;
		return 0;
	}

   This test works in many cases, but it is too strict in some others:
   it differentiates between protection values that the underlying PTE
   format makes no distinction between, such as PROT_EXEC == PROT_READ
   on x86.

2) Even where the vma flag change does necessitate a modification of
   the pagetables at the PTE level, there might be no pagetables to
   modify yet: they might not have been instantiated.

During a regular desktop bootup this optimization hits a couple of
hundred times. During a Java test I measured thousands of hits.

So this optimization improves sys_mprotect() in general, not just
CONFIG_NUMA_BALANCING=y kernels.

[ We could further increase the efficiency of this optimization if
  change_pte_range() and change_huge_pmd() were a bit smarter about
  recognizing exact-same-value protection masks - when the hardware
  can do that safely. This would probably further speed up mprotect(). ]

Cc: Linus Torvalds
Cc: Andrew Morton
Cc: Peter Zijlstra
Cc: Andrea Arcangeli
Cc: Rik van Riel
Cc: Mel Gorman
Cc: Hugh Dickins
Cc: Thomas Gleixner
Signed-off-by: Ingo Molnar
---
 mm/mprotect.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/mprotect.c b/mm/mprotect.c
index 1e265be..7c3628a 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -153,7 +153,9 @@ static unsigned long change_protection_range(struct vm_area_struct *vma,
 			 dirty_accountable);
 	} while (pgd++, addr = next, addr != end);
 
-	flush_tlb_range(vma, start, end);
+	/* Only flush the TLB if we actually modified any entries: */
+	if (pages)
+		flush_tlb_range(vma, start, end);
 
 	return pages;
 }
-- 
1.7.11.7
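
A minimal userspace sketch of case 2) above - illustrative only, not
part of the patch, and using nothing beyond standard mmap()/mprotect():
the anonymous mapping below is never faulted in before mprotect(), so
no pagetables exist for the range, change_protection() finds zero
modified pages, and with this patch applied the flush_tlb_range() call
for that range is skipped. The program merely exercises that path;
whether the flush is actually skipped depends on the kernel carrying
this patch:

	/*
	 * Illustrative sketch (not part of the patch): exercise case 2) above.
	 * The anonymous mapping is never faulted in before mprotect(), so there
	 * are no pagetable entries to modify and change_protection() returns a
	 * zero 'modified pages' count - with this patch the TLB flush for the
	 * range is then skipped.
	 */
	#include <stdio.h>
	#include <sys/mman.h>

	int main(void)
	{
		size_t len = 16 * 4096;

		/* Read-only anonymous mapping, never touched: */
		void *p = mmap(NULL, len, PROT_READ,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		/*
		 * The vma flags change, so the newflags == oldflags shortcut
		 * does not trigger - but no pagetables have been instantiated
		 * yet, so nothing needs to be modified or flushed.
		 */
		if (mprotect(p, len, PROT_READ | PROT_WRITE) != 0) {
			perror("mprotect");
			return 1;
		}

		printf("mprotect() on an untouched mapping done\n");
		munmap(p, len);
		return 0;
	}

Faulting the pages in first (e.g. by reading them) would instantiate
the pagetables, and the mprotect() would then take the normal flush
path again.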