Subject: Re: [RFC 03/10] x86/mm: Make the batched unmap TLB flush API more generic
To: Andy Lutomirski, X86 ML
Cc: "linux-kernel@vger.kernel.org", Borislav Petkov, Linus Torvalds, Andrew Morton, Mel Gorman, "linux-mm@kvack.org", Rik van Riel, Nadav Amit, Michal Hocko, Sasha Levin
From: Dave Hansen
Date: Mon, 8 May 2017 08:34:58 -0700
In-Reply-To: <983c5ee661d8fe8a70c596c4e77076d11ce3f80a.1494160201.git.luto@kernel.org>

On 05/07/2017 05:38 AM, Andy Lutomirski wrote:
> diff --git a/mm/rmap.c b/mm/rmap.c
> index f6838015810f..2e568c82f477 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -579,25 +579,12 @@ void page_unlock_anon_vma_read(struct anon_vma *anon_vma)
>  void try_to_unmap_flush(void)
>  {
>  	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
> -	int cpu;
>
>  	if (!tlb_ubc->flush_required)
>  		return;
>
> -	cpu = get_cpu();
> -
> -	if (cpumask_test_cpu(cpu, &tlb_ubc->cpumask)) {
> -		count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
> -		local_flush_tlb();
> -		trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
> -	}
> -
> -	if (cpumask_any_but(&tlb_ubc->cpumask, cpu) < nr_cpu_ids)
> -		flush_tlb_others(&tlb_ubc->cpumask, NULL, 0, TLB_FLUSH_ALL);
> -	cpumask_clear(&tlb_ubc->cpumask);
>  	tlb_ubc->flush_required = false;
>  	tlb_ubc->writable = false;
> -	put_cpu();
>  }
>
>  /* Flush iff there are potentially writable TLB entries that can race with IO */
> @@ -613,7 +600,7 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
>  {
>  	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
>
> -	cpumask_or(&tlb_ubc->cpumask, &tlb_ubc->cpumask, mm_cpumask(mm));
> +	arch_tlbbatch_add_mm(&tlb_ubc->arch, mm);
>  	tlb_ubc->flush_required = true;
>
>  	/*

Looking at this patch in isolation, how can this be safe?  It removes
TLB flushes from the generic code.  Do other patches in the series fix
this up?