From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752738AbaBRTcR (ORCPT ); Tue, 18 Feb 2014 14:32:17 -0500
Received: from mga11.intel.com ([192.55.52.93]:35381 "EHLO mga11.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752473AbaBRTaP (ORCPT ); Tue, 18 Feb 2014 14:30:15 -0500
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,502,1389772800"; d="scan'208";a="477225207"
Subject: [RFC][PATCH 3/6] x86: mm: fix missed global TLB flush stat
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, ak@linux.intel.com, alex.shi@linaro.org,
	kirill.shutemov@linux.intel.com, mgorman@suse.de,
	tim.c.chen@linux.intel.com, x86@kernel.org, peterz@infradead.org,
	Dave Hansen
From: Dave Hansen
Date: Tue, 18 Feb 2014 11:30:12 -0800
References: <20140218193008.CA410E17@viggo.jf.intel.com>
In-Reply-To: <20140218193008.CA410E17@viggo.jf.intel.com>
Message-Id: <20140218193012.4D308564@viggo.jf.intel.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Dave Hansen

If we take the

	if (end == TLB_FLUSH_ALL || vmflag & VM_HUGETLB) {
		local_flush_tlb();
		goto out;
	}

path out of flush_tlb_mm_range(), we will have flushed the TLB,
but not incremented NR_TLB_LOCAL_FLUSH_ALL.  This patch unifies
the exit from the function so that we always take a single path
when doing a full TLB flush.

Signed-off-by: Dave Hansen
---

 b/arch/x86/mm/tlb.c |   15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff -puN arch/x86/mm/tlb.c~fix-missed-global-flush-stat arch/x86/mm/tlb.c
--- a/arch/x86/mm/tlb.c~fix-missed-global-flush-stat	2014-02-18 10:59:37.611420354 -0800
+++ b/arch/x86/mm/tlb.c	2014-02-18 10:59:37.619420720 -0800
@@ -164,8 +164,9 @@ unsigned long tlb_single_page_flush_ceil
 void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 				unsigned long end, unsigned long vmflag)
 {
-	int need_flush_others_all = 1;
 	unsigned long addr;
+	/* do a global flush by default */
+	unsigned long base_pages_to_flush = TLB_FLUSH_ALL;
 
 	preempt_disable();
 	if (current->active_mm != mm)
@@ -176,16 +177,14 @@ void flush_tlb_mm_range(struct mm_struct
 		goto out;
 	}
 
-	if (end == TLB_FLUSH_ALL || vmflag & VM_HUGETLB) {
-		local_flush_tlb();
-		goto out;
-	}
+	if ((end != TLB_FLUSH_ALL) && !(vmflag & VM_HUGETLB))
+		base_pages_to_flush = (end - start) >> PAGE_SHIFT;
 
-	if ((end - start) > tlb_single_page_flush_ceiling * PAGE_SIZE) {
+	if (base_pages_to_flush > tlb_single_page_flush_ceiling) {
+		base_pages_to_flush = TLB_FLUSH_ALL;
 		count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
 		local_flush_tlb();
 	} else {
-		need_flush_others_all = 0;
 		/* flush range by one by one 'invlpg' */
 		for (addr = start; addr < end; addr += PAGE_SIZE) {
 			count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ONE);
@@ -193,7 +192,7 @@ void flush_tlb_mm_range(struct mm_struct
 		}
 	}
 out:
-	if (need_flush_others_all) {
+	if (base_pages_to_flush == TLB_FLUSH_ALL) {
		start = 0UL;
		end = TLB_FLUSH_ALL;
	}
_
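
For readers following the change outside a kernel tree, here is a minimal,
self-contained user-space sketch of the control flow this patch produces.
It is not the kernel code: the helper names (sketch_flush_tlb_mm_range,
flush_one), the printf output, and the ceiling value of 1 are made up for
illustration, and the TLB_FLUSH_ALL/VM_HUGETLB constants are only
stand-ins for the real kernel definitions.  The point it demonstrates is
that base_pages_to_flush now drives both the NR_TLB_LOCAL_FLUSH_ALL
accounting and, at the single exit, the decision to ask other CPUs for a
full flush.

/*
 * Standalone sketch (not kernel code) of the post-patch flow of
 * flush_tlb_mm_range().  All names and values are illustrative stand-ins.
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define TLB_FLUSH_ALL	(~0UL)			/* stand-in for the kernel constant */
#define VM_HUGETLB	0x00400000UL		/* stand-in for the kernel flag */

static unsigned long tlb_single_page_flush_ceiling = 1;	/* assumed value for the demo */

enum tlb_event { NR_TLB_LOCAL_FLUSH_ALL, NR_TLB_LOCAL_FLUSH_ONE };

static void count_vm_tlb_event(enum tlb_event e)	/* stub for the stat counter */
{
	printf("count event %d\n", (int)e);
}

static void local_flush_tlb(void)			/* stub for a full local flush */
{
	printf("full local TLB flush\n");
}

static void flush_one(unsigned long addr)		/* stub for a single-page flush */
{
	printf("invlpg %#lx\n", addr);
}

/*
 * Post-patch shape: base_pages_to_flush defaults to "everything", is
 * narrowed only for flushable ranges, and is checked once at the end.
 */
static void sketch_flush_tlb_mm_range(unsigned long start, unsigned long end,
				      unsigned long vmflag)
{
	unsigned long addr;
	unsigned long base_pages_to_flush = TLB_FLUSH_ALL;	/* global flush by default */

	if ((end != TLB_FLUSH_ALL) && !(vmflag & VM_HUGETLB))
		base_pages_to_flush = (end - start) >> PAGE_SHIFT;

	if (base_pages_to_flush > tlb_single_page_flush_ceiling) {
		base_pages_to_flush = TLB_FLUSH_ALL;
		count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);	/* stat is now always bumped */
		local_flush_tlb();
	} else {
		for (addr = start; addr < end; addr += PAGE_SIZE) {
			count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ONE);
			flush_one(addr);
		}
	}

	/* single exit: one test decides what the other CPUs are told to flush */
	if (base_pages_to_flush == TLB_FLUSH_ALL)
		printf("tell other CPUs: flush everything\n");
	else
		printf("tell other CPUs: flush %#lx-%#lx\n", start, end);
}

int main(void)
{
	sketch_flush_tlb_mm_range(0x1000, 0x2000, 0);	/* small range: invlpg path */
	sketch_flush_tlb_mm_range(0, TLB_FLUSH_ALL, 0);	/* full flush: ALL stat counted */
	return 0;
}

Running the sketch shows the small-range call counting one event per page,
while the full-flush call bumps the "ALL" counter exactly once before the
global flush, which is the accounting the old early-return path skipped.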